## The aim of this notebook is to estimate the cost of computing the energy of a system with different Phase Estimation protocols
```python
import numpy as np
from itertools import combinations
import scipy
from scipy.optimize import minimize
from scipy.special import binom, gamma
from scipy.integrate import quad, dblquad
from scipy import integrate
import sympy
```
## IMPORTANT: to these costs we still have to add the QFT cost, which is minor and has its own associated error.
Question: what is the base of the logs that appear?
# qDrift and Trotterization
The first algorithm whose complexity we would like to estimate is the qDrift protocol from appendix A. The cost is
$$3n(10+12\log \epsilon^{-1})\log N = 3\frac{27\pi^2}{2}\frac{\lambda^2}{\delta_E^2 P_f}(10+12\log \epsilon^{-1})\log N$$
where $\lambda = \sum_\gamma a_\gamma$ for the Hamiltonian $H = \sum_\gamma a_\gamma H_\gamma$;
$\delta_E$ is the error in Phase Estimation (an arbitrary parameter chosen by the user);
and $P_f = \frac{3}{2}p_f$ is the probability of failure (also chosen by the user). The $\epsilon$ parameter is given by the smallest of
$$\epsilon_j = \epsilon_{tot}\frac{2^j}{2(2^m-1)};$$
We also need that
$$ n = 4\frac{\pi^2(2^m-1)^2}{\epsilon_{tot}}$$ with
$$m = q +\log_2 \left(\frac{1}{2p_f} + \frac{1}{2}\right),$$
$\delta_E = 2\lambda\delta$, $q = \log_2 \delta^{-1} -1$; and $P_f = p_f +2\epsilon_{tot}$.
We want to minimize the total cost
$$3n(10+12\log \epsilon^{-1})\log N = 3\frac{27\pi^2}{2}\frac{\lambda^2}{\delta_E^2 P_f}(10+12\log \epsilon^{-1})\log N$$
where $\epsilon$ is the error of individual rotations $$\epsilon = \frac{\epsilon(j)}{n(j)} = \frac{\epsilon_{tot}^2}{4\pi^2(2^m-1)^2}$$
```python
def calc_qdrift_resources(lambd, N, deltaE = 1e-4, P_failure = .1):
n = ((27*np.pi**2/2)*(lambd/deltaE)**2) / P_failure**3
delta = deltaE/(2*lambd)
q = np.log2(1/delta)-1
pf = 2/3*P_failure
eps_tot = P_failure/6
#sanity check
assert (pf +2*eps_tot)/P_failure == 1
m = q + np.log2(1/(2*pf)+1/2)
    # Another sanity check: these should coincide (they agree to leading order in P_failure)
    eps_tot_2 = 4*(np.pi*(2**m-1))**2/n # parentheses fixed: (2**m - 1) is squared together with pi
    print(eps_tot,eps_tot_2)
# error in individual rotations
eps = (eps_tot/(2*np.pi*(2**m-1)))**2
    rot_cost_factor = 3*(10+12*np.log(1/eps))*np.log(N) # T-gate cost of synthesizing the rotations in the 3n exponentials
    print('eps',eps)
    return rot_cost_factor*n
calc_qdrift_resources(lambd = 768, N = (467403)**(1/4))
```
0.016666666666666666 0.01896296276647537
eps 1.8639547394483464e-21
4.484316495050195e+22
For the randomised Hamiltonian approach, the equations are similar. However, now $p_f = \frac{3}{4}P_f$ and
$$n = 8\Gamma^2\left(\frac{ \pi^3 \Lambda^3}{8\delta_E^3}\right)^{1/2}\left(\frac{1+p_f}{p_f}\right)^{3/2}\frac{1}{\epsilon_{tot}^{1/2}} = 4.35\sqrt{8}\pi^{3/2}\Gamma^2 \frac{\Lambda^{3/2}}{\delta_E^{3/2}P_f^2}$$
```python
def calc_rand_ham_resources(Lambd, lambd, Gamma, N, deltaE = 1e-4, P_failure = .1):
n = 4.35*np.sqrt(8)*(np.pi*Lambd/deltaE)**(3/2) *(Gamma/ P_failure)**2
print('n',n)
# error in individual rotations
Lambda_A = Lambd/(2*lambd)
delta = deltaE/(2*lambd)
q = np.log2(1/delta)-1
pf = 3/4*P_failure
eps_tot = P_failure/8
#sanity check
assert (pf +2*eps_tot)/P_failure == 1
m = q + np.log2(1/(2*pf)+1/2)
n1 = 8*Gamma**2 * ( 2**(m+1)*np.pi**3*Lambda_A**3/eps_tot )**(1/2) *2*(2**m-1)
print('n1',n1)
# Another sanity check. This should coincide
eps_tot_2 = ( 8*Gamma**2* (np.pi*Lambd/(2*deltaE))**(3/2)* ((1+pf)/pf)**(3/2) /n1 )**2
eps_tot_3 = 1/ ( 4.35* (1/P_failure)**2 * (pf/(1+pf))**(3/2) )**2
print(eps_tot,eps_tot_2,eps_tot_3)
n2 = 8*Gamma**2 * ( 2**(m+1)*np.pi**3*Lambda_A**3/eps_tot_2 )**(1/2) *2*(2**m-1)
print('n2',n2)
n3 = 8*Gamma**2 * ( 2**(m+1)*np.pi**3*Lambda_A**3/eps_tot_3 )**(1/2) *2*(2**m-1)
print('n3',n3)
    # This is probably wrong:
    eps = 1/4*(eps_tot/(np.pi*2**m*Lambda_A))**(3/2)
    rot_cost_factor = 3*(10+12*np.log(1/eps))*np.log(N)
    print('eps',eps)
    return rot_cost_factor*n
calc_rand_ham_resources(Lambd = 4.07, lambd = 768, Gamma =467403, N = (467403)**(1/4))
```
n 1.2289484451445014e+22
n1 1.3712296468152677e+22
0.0125 0.012500000454215137 0.015561916785328065
n2 1.3712296219019378e+22
n3 1.2289484228162224e+22
eps 1.1265689857382676e-12
4.092893794257138e+25
# Taylorization (babbush2016exponential)
Let us now calculate the cost of performing Phase Estimation.
1. We have already mentioned that in this case, controlling the direction of the time evolution adds negligible cost. We will also take the unitary $U$ in Phase Estimation to be $U_r$. The number of segments we have to Hamiltonian-simulate in the phase estimation protocol is $r \approx \frac{4.7}{\epsilon_{\text{PEA}}}$.
2. The oblivious amplitude amplification operator $G$ requires using $\mathcal{W}$ three times.
3. Each operator $G$ requires using Prepare$(\beta)$ twice and Select$(V)$ once.
4. The cost of Select$(V)$ is bounded by $8N\lceil \log_2\Gamma + 1\rceil\frac{K(K+1)(2K+1)}{3}+ 16N K(K+1)$.
5. The cost of Prepare$(\beta)$ is $(20+24\log\epsilon^{-1}_{SS})K$ T gates for the preparation of $\ket{k}$, and $(10+12\log\epsilon^{-1}_{SS})2^{\lceil \log \Gamma \rceil + 1}K$ T gates for the implementation of the $K$ Prepare$(W)$ circuits. Notice that $2K$ and $2^{\lceil \log \Gamma \rceil + 1}K$ rotations will be implemented, each up to error $\epsilon_{SS}$.
Remember that
$$ K = O\left( \frac{\log(r/\epsilon_{HS})}{\log \log(r/\epsilon_{HS})} \right)$$
Notice that the $\Lambda$ parameter enters the algorithm only implicitly, since we take the evolution time of a single segment to be $t_1 = \ln 2/\Lambda$, such that the first segment in Phase Estimation has $r = \frac{\Lambda t_1}{\ln 2} = 1$, as it should. In general, we will need to implement $r \approx \frac{4.7}{\epsilon_{PEA}}$ segments. However, since $\epsilon_{PEA}$ refers to $H$ and we are instead simulating $H \ln 2/ \Lambda$, we will have to calculate the eigenvalue to precision $\epsilon_{PEA} \ln 2/ \Lambda$; this is equivalent to fixing an initial time $t_1$ and running multiple segments in each of the $U$ operators in Phase Estimation.
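As a quick numeric check of the previous paragraph (an illustrative sketch added here; $\Lambda = 4.07$ is the value used in the calls below, and $\epsilon_{PEA} = 0.005$ is just an example target):
```python
# Worked segment count: targeting precision eps_PEA*ln2/Lambda on H*ln2/Lambda
# is the same as running r ~ 4.7*Lambda/(eps_PEA*ln2) segments of fixed length t_1
Lambd = 4.07
epsilon_PEA = 0.005
r = 4.7 * Lambd / (epsilon_PEA * np.log(2))
print(r)  # ~5.5e3 segments
```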
```python
eps_tot = .0125 # total error budget; defined here because the default arguments below use it
def Taylor_naive(Lambd, Gamma, N, epsilon_PEA = .4*eps_tot, epsilon_HS = .2*eps_tot, epsilon_S = .4*eps_tot):
    r = 4.7*Lambd / (epsilon_PEA*np.log(2)) # number of segments to simulate
K_list = []
for m_j in range(0, int(np.ceil(np.log(r)))):
t_j = 2**m_j
epsilon_HS_mj = epsilon_HS / r * 2**m_j
K = np.ceil(np.log2(t_j/epsilon_HS_mj) / np.log2( np.log2 (t_j/epsilon_HS_mj)))
K_list.append(K)
result = 0
epsilon_SS = epsilon_S /(np.sum([3*2*(K*2**(np.ceil(np.log2(Gamma)+1)) + 2*K) for K in K_list]))
for m_j in range(0, int(np.ceil(np.log(r)))):
t_j = 2**m_j
epsilon_HS_mj = epsilon_HS / r * t_j
K = np.ceil(np.log2(t_j/epsilon_HS_mj) / np.log2( np.log2 (t_j/epsilon_HS_mj)))
Select_V = 8*N*np.ceil(np.log2(Gamma) +1)*K*(K+1)*(2*K+1)/3 + 16*N*K*(K+1)
Prepare_beta_1 = (20+24*np.log2(1/epsilon_SS))*K
Prepare_beta_2 = (10+12*np.log2(1/epsilon_SS))*K*2**(np.ceil(np.log2(Gamma)+1))
Prepare_beta = Prepare_beta_1 + Prepare_beta_2
result += 3*(2*Prepare_beta + Select_V)*t_j
return result
Taylor_naive(Lambd = 4.07, Gamma =467403, N = (467403)**(1/4))
```
95463557056106.14
```python
eps_tot = .0125
# NOTE: defaults for zeta_max_i and eps_tay below are illustrative additions so that the call at the end of the cell runs
def Taylor_on_the_fly(Gamma, N, phi_max, dphi_max, zeta_max_i = 1., epsilon_PEA = .4*eps_tot, epsilon_HS = .1*eps_tot, epsilon_S = .4*eps_tot, epsilon_H = .1*eps_tot, eps_tay = .1*eps_tot, order = 10):
'''
Error terms
eps_PEA: Phase estimation,
eps_HS: the truncation of K,
eps_S: gate synthesis,
eps_H: discretization of integrals,
eps_taylor: truncation of taylor series to order o
'''
t = 4.7/epsilon_PEA
x_max = np.log(N * t/ epsilon_H)
lambd = Gamma*phi_max**4 * x_max**5
r = lambd* t / np.log(2)
K_list = []
for m_j in range(0, int(np.ceil(np.log(r)))):
t_j = 2**m_j
epsilon_HS_mj = epsilon_HS / r * 2**m_j
K = np.ceil(np.log2(t_j/epsilon_HS_mj) / np.log2( np.log2 (t_j/epsilon_HS_mj)))
K_list.append(K)
epsilon_SS = epsilon_S /np.sum([3*2*(2*K) for K in K_list])
# We distribute the error between all C-U in phase estimation uniformly
eps_tay_m_j = eps_tay/((6+2)*np.max(K_list)*r*3*2)
x = sympy.Symbol('x')
    # order_find is defined in a later cell; run that cell first
    order = max(order_find(function = sympy.sqrt(x), x0 = 1, e = eps_tay_m_j, xeval = x_max),
                order_find(function = sympy.exp(zeta_max_i*(x)**2), x0 = 0, e = eps_tay_m_j, xeval = x_max))
result = 0
for m_j in range(0, int(np.ceil(np.log(r)))):
t_j = 2**m_j
epsilon_HS_mj = epsilon_HS / r * 2**m_j
K = np.ceil(np.log2(t_j/epsilon_HS_mj) / np.log2( np.log2 (t_j/epsilon_HS_mj)))
mu = ( 3*K*2*r/epsilon_H *2*(4*dphi_max + phi_max/x_max)*phi_max**3 * x_max**6 )**6
n = np.log(mu)/3
Select_V = 8*N*np.ceil(np.log2(Gamma) +1)*K*(K+1)*(2*K+1)/3 + 16*N*K*(K+1)
Prepare_beta_1 = (20+24*np.log2(1/epsilon_SS))*K
Prepare_beta_2 = ( 6*35*n**2*(order-1)*4*N + (252+70*(order-1))*n**2 )*K
Prepare_beta = Prepare_beta_1 + Prepare_beta_2
result += 3*(2*Prepare_beta + Select_V)*t_j
return result
Taylor_on_the_fly(Gamma = 467403, N = (467403)**(1/4), phi_max = .1, dphi_max = .1)
```
4.81664159586087e+21
```python
# Taylor approximation at x0 of the function 'function'
def taylor(function, x0, n):
    # builds the degree-n Taylor polynomial of `function` around x0
    # (relies on the global sympy symbol x defined below)
    i = 0
    p = 0
    while i <= n:
        p = p + (function.diff(x, i).subs(x, x0))/(sympy.factorial(i))*(x - x0)**i
        i += 1
    return p
#print(taylor(sympy.sqrt(x), 1, 5))#.subs(x,1).evalf())
def order_find(function, x0, e, xeval):
x = sympy.Symbol('x')
def factorial(n):
if n <= 0:
return 1
else:
return n*factorial(n-1)
def taylor_err(function,x0,n, z = None):
        if z is None:
z = x0
#print('coefficient order',n, function.diff(x,n)/(factorial(n)))#.subs(x,z))
a = (function.diff(x,n).subs(x,z))/(factorial(n))*(x-x0)**n
#print('coefficient order',n, (function.diff(x,n).subs(x,z)/(factorial(n))*(x-x0)**n))
#print('a',a)
return a
order = 0
te = 1
zeta = np.linspace(x0,xeval,20)
while te > e:# or order < 10:
order += 1
#for z in zeta:
#print(taylor_err(f, x0, order, z).subs(x,xeval).evalf())
te = np.max([np.abs(taylor_err(function, x0, order, z).subs(x,xeval).evalf()) for z in zeta])
#print('order',order, te,'\n')
return order
x = sympy.Symbol('x')
order_find(sympy.sqrt(x), x0 = 1, e = 1e-3, xeval = 2)
```
order 1 0.500000000000000
order 2 0.125000000000000
order 3 0.0625000000000000
order 4 0.0390625000000000
order 5 0.0273437500000000
order 6 0.0205078125000000
order 7 0.0161132812500000
order 8 0.0130920410156250
order 9 0.0109100341796875
order 10 0.00927352905273438
order 11 0.00800895690917969
order 12 0.00700783729553223
order 13 0.00619924068450928
order 14 0.00553503632545471
order 15 0.00498153269290924
order 16 0.00451451400294900
order 17 0.00411617453210056
order 18 0.00377315998775885
order 19 0.00347527893609367
order 20 0.00321463301588665
order 21 0.00298501637189474
order 22 0.00278149252835647
order 23 0.00260009084172452
order 24 0.00243758516411674
order 25 0.00229133005426974
order 26 0.00215913793575417
order 27 0.00203918582821228
order 28 0.00192994373027233
order 29 0.00183011905456859
order 30 0.00173861310184016
order 31 0.00165448666142854
order 32 0.00157693259917408
order 33 0.00150525384466617
order 34 0.00143884558681325
order 35 0.00137718077594982
order 36 0.00131979824361858
order 37 0.00126629290941783
order 38 0.00121630766299344
order 39 0.00116952659903215
order 40 0.00112566935156845
order 41 0.00108448632651106
order 42 0.00104575467199281
order 43 0.00100927485785353
order 44 0.000974867760426703
44
```python
33/2048
```
0.01611328125
```python
eps_tot = .0125
def error_optimizer(eps_array):
epsilon_PEA = eps_array[0]
epsilon_S = eps_array[1]
epsilon_HS = eps_tot - eps_array[0] - eps_array[1]
return Taylor_naive(Lambd = 4.07, Gamma =467403, N = (467403)**(1/4),
epsilon_PEA = epsilon_PEA, epsilon_HS= epsilon_HS, epsilon_S = epsilon_S)
eps_array = [.005, .005]
#A = np.array([1,1])
constraint = scipy.optimize.LinearConstraint(A = np.array([[1,1],[1,0],[0,1]]), lb = [0,0,0], ub = [eps_tot,eps_tot,eps_tot], keep_feasible=True)
minimize(error_optimizer, x0 = eps_array, method='SLSQP', tol=1, constraints = (constraint))
```
fun: 95463557056106.14
jac: array([-1.96938319e+16, -6.01179179e+14])
message: 'Inequality constraints incompatible'
nfev: 3
nit: 1
njev: 1
status: 4
success: False
x: array([0.005, 0.005])
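The run above aborts immediately with `Inequality constraints incompatible`. A minimal retry sketch (an addition; convergence is not guaranteed) is to start strictly inside the feasible region and use a tolerance much tighter than `tol=1`:
```python
# Hypothetical retry: same objective and constraint, interior start and tighter tol
minimize(error_optimizer, x0 = [.004, .004], method='SLSQP', tol=1e-3, constraints = (constraint))
```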
# Configuration interaction (babbush2017exponentially)
\begin{equation}
\begin{split}
\mu M \zeta &= \mu \max_{\gamma,\rho}||\aleph_{\rho,\gamma}||_{\max} \\
&= \max \left[ \frac{672\pi^2}{\alpha^3}\varphi^4_{\max}x^5_{\max}\log^6 \left(\frac{K_2 \varphi^4_{\max}x^5_{\max}}{\delta}\right),\right.\\
&\qquad \frac{256\pi^2}{\alpha^3}Z_q\varphi^2_{\max}x^2_{\max}\log^3 \left(\frac{K_1 Z_q\varphi^2_{\max}x^2_{\max}}{\delta}\right),\\
&\qquad \left. \frac{32\gamma_1^2}{\alpha^3}\varphi^2_{\max}x_{\max}\log^3 \left(\frac{K_0\varphi^2_{\max}x_{\max}}{\delta}\right)\right]
\end{split}
\end{equation}
```python
eps_tot = 0.125
def configuration_interaction(N, eta, alpha, gamma1, K0, K1, K2, phi_max, dphi_max, order, epsilon_PEA = .4*eps_tot, epsilon_HS = .1*eps_tot, epsilon_S = .4*eps_tot, epsilon_H = .1*eps_tot):
    '''phi_max, dphi_max and order are parameters here: the body uses them
    (the original cell left them undeclared)'''
    t = 4.7/epsilon_PEA
    x_max = np.log(N * t/ epsilon_HS)
    Gamma = binom(eta, 2)*binom(N-eta, 2) + binom(eta,1)*binom(N-eta,1) + 1 # = d
    Zq = eta
    '''
    Warning: delta, mu_M_zeta and r are defined circularly.
    In practice we have to find the smallest value of mu_M_zeta compatible with delta:
    mu_M_zeta <= f( epsilon_H / (3K * 2 Gamma t mu_M_zeta) ), with f the np.max defining mu_M_zeta below.
    Due to this complication we distribute the error uniformly across all C-U, which is not optimal.
    Here the circle is broken with a simple fixed-point iteration (an addition: the original
    cell used r and K before defining them).
    '''
    mu_M_zeta = 1.
    for _ in range(25): # fixed-point iteration for (r, K, delta, mu_M_zeta)
        r = 2*Gamma*t*mu_M_zeta
        K = np.log2(r/epsilon_HS)/np.log2(np.log2(r/epsilon_HS))
        delta = epsilon_H/(3*r*K) # error in a single integral; there are 3K r of them in the simulation
        # This is an upper bound, not an equality!
        mu_M_zeta = np.max([
            672*np.pi**2/(alpha**3)*phi_max**4*x_max**5*(np.log(K2*phi_max**4*x_max**5/delta))**6,
            256*np.pi**2/(alpha**3)*Zq*phi_max**2*x_max**2*(np.log(K1*Zq*phi_max**2*x_max**2/delta))**3,
            32*gamma1**2/(alpha**3)*phi_max**2*x_max*(np.log(K0*phi_max**2*x_max/delta))**3 # gamma1**2, matching the equation above
        ])
    epsilon_SS = epsilon_S / (2*K*2*3*r)
    Prepare_beta = (20+24*np.log2(1/epsilon_SS))*K
    mu = ( r/epsilon_H *2*(4*dphi_max + phi_max/x_max)*phi_max**3 * x_max**6 )**6
    n = np.log(mu)/3
    Sample_w = ( 6*35*n**2*(order-1)*4*N + (189+35*(order-1))*n**2 )*K
    Q_val = 2*Sample_w
    Q_col = 6*(32*eta*np.log2(N) + 24*eta**2 + 16*eta*(eta+1)*np.log2(N))
    Select_H = Q_val + 2*Q_col
    Select_V = K*Select_H
    return r*3*(2*Prepare_beta + Select_V)
```
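A hypothetical usage sketch of `configuration_interaction`; all argument values below are illustrative placeholders, not parameters of a real molecule:
```python
configuration_interaction(N = 100, eta = 10, alpha = 1., gamma1 = 1., K0 = 1., K1 = 1., K2 = 1.,
                          phi_max = .1, dphi_max = .1, order = 10)
```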
# Low depth quantum simulation of materials (babbush2018low) Trotter
```python
def low_depth_trotter(N, eta, Omega, eps_PEA, eps_HS, eps_S): # eta (number of electrons) added: it is used below but was missing from the signature
def f(x, y):
return 1/(x**2 + y**2)
def I(N0):
return integrate.nquad(f, [[1, N0],[1, N0]])[0]
t = 4.7/eps_PEA
    sum_1_nu = 4*np.pi*(np.sqrt(3)*N**(1/3)/2 - 1) + 3 - 3/N**(1/3) + 3*I(N**(1/3))
max_V = eta**2/(2*np.pi*Omega**(1/3))*sum_1_nu
max_U = eta**2/(np.pi*Omega**(1/3))*sum_1_nu
nu_max = 3*(N**(1/3))**2
max_T = 2*np.pi**2*eta/(Omega**(2/3))* nu_max
r = np.sqrt(2*t**3/eps_HS *(max_T**2*(max_U + max_V) + max_T*(max_U + max_V)**2))
    eps_SS = eps_S/(2*N + N*(N-1) + N*np.log(N/2) + 8*N)
exp_UV_cost = (4*N**2 + 4*N)*np.log(1/eps_SS)
    FFFT_cost = (2 + 4*np.log(1/eps_SS))*N*np.log(N) + 4*N*np.log(1/eps_SS)
exp_T_cost = 32*N*np.log(1/eps_SS)
return r*(exp_UV_cost + FFFT_cost + exp_T_cost)
```
(16.5877713181052, 2.1199305958404497e-06)
```python
from sympy import *
x, y = symbols('x y')
a = symbols('a', positive=True)
expr = 1/(x**2 + y**2)
integral = integrate(expr, (x, 1, a), (y, 1, a))
integral
integral.evalf(subs={a: 10**6}) # 10**6, not 10^6 (which is XOR in Python)
```
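A hypothetical usage sketch of `low_depth_trotter` above; the argument values are illustrative placeholders:
```python
low_depth_trotter(N = 125, eta = 10, Omega = 125., eps_PEA = .005, eps_HS = .00125, eps_S = .005)
```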
# Low depth quantum simulation of materials (babbush2018low) Taylor
```python
def low_depth_taylor(N, lambd, Lambd, eps_PEA, eps_HS, eps_S, Ham_norm):
'''To be used in plane wave basis'''
t = 4.7/eps_PEA
    r = t*Lambd/np.log(2)
K_list = []
for m_j in range(0, int(np.ceil(np.log(r)))):
t_j = 2**m_j
        epsilon_HS_mj = eps_HS / r * 2**m_j
K = np.ceil(np.log2(t_j/epsilon_HS_mj) / np.log2( np.log2 (t_j/epsilon_HS_mj)))
K_list.append(K)
    epsilon_SS = eps_S /np.sum([3*2*(2*K) for K in K_list]) # The extra two is because Uniform requires 2 Rz gates
    mu = np.ceil(np.log(2*np.sqrt(2)*Lambd/eps_PEA) + np.log(1 + eps_PEA/(8*lambd)) + np.log(1 - (Ham_norm/lambd)**2))
result = 0
for m_j in range(0, int(np.ceil(np.log(r)))):
t_j = 2**m_j
        epsilon_HS_mj = eps_HS / r * 2**m_j
K = np.ceil(np.log2(t_j/epsilon_HS_mj) / np.log2( np.log2 (t_j/epsilon_HS_mj)))
prepare_beta = K*(6*N+40*np.log(N)+16*np.log(1/epsilon_SS) + 10*mu)
select_V = K*(12*N+8*np.log(N))
result += 3*(2*prepare_beta + select_V)*t_j
return result
```
# Low depth quantum simulation of materials (babbush2018low) On-the fly
```python
def low_depth_on_the_fly(N, eta, lambd, Omega, eps_PEA, eps_HS, eps_S, Ham_norm, J):
    '''To be used in plane wave basis
    J: Number of atoms
    eta (number of electrons) added to the signature: it is used below but was missing originally
    '''
Lambd = (2*eta+1)*N**3 / (2*Omega**(1/3)*np.pi)
t = 4.7/eps_PEA
r = t*Lambd/np.log(2)
    mu = np.ceil(np.log(2*np.sqrt(2)*Lambd/eps_PEA) + np.log(1 + eps_PEA/(8*lambd)) + np.log(1 - (Ham_norm/lambd)**2))
#K_list = []
#epsilon_SS = epsilon_S /np.sum([3*2*(2*K) for K in K_list])
    x = sympy.Symbol('x')
    # NOTE: e and x_max were left undefined in the original cell; as placeholders we reuse
    # the truncation budget eps_HS and the x_max definition from Taylor_on_the_fly above
    x_max = np.log(N * t / eps_HS)
    order = order_find(function = sympy.cos(x), x0 = 1, e = eps_HS, xeval = x_max)
sample_w = 70*np.log(N)**2 + 29* np.log(N) + (21+14)*order/2*np.log(N)**2 + 2*order*np.log(N) + J*(35*order/2 + 63 + 2*order/np.log(N))*np.log(N)**2
kickback = 32*np.log(mu)
result = 0
for m_j in range(0, int(np.ceil(np.log(r)))):
t_j = 2**m_j
        epsilon_HS_mj = eps_HS / r * 2**m_j
#K_list.append(K)
K = np.ceil(np.log2(t_j/epsilon_HS_mj) / np.log2( np.log2 (t_j/epsilon_HS_mj)))
prepare_W = 2*sample_w + kickback
prepare_beta = K*prepare_W
select_H = (12*N + 8*np.log(N))
select_V = K*select_H
result += 3*(2*prepare_beta + select_V)*t_j
return result
```
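A hypothetical usage sketch of `low_depth_on_the_fly`; values are illustrative placeholders (the cell defining `order_find` must have been run first):
```python
low_depth_on_the_fly(N = 125, eta = 10, lambd = 768., Omega = 125., eps_PEA = .005,
                     eps_HS = .00125, eps_S = .005, Ham_norm = 100., J = 5)
```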
## Linear T complexity (babbush2018encoding)
```python
def linear_T(N, lambd, eps_PEA, eps_S, Ham_norm):
    '''To be used in plane wave basis.
    eps_S and Ham_norm are parameters here: the body uses them (the original cell left them undeclared)'''
    t = 4.7/eps_PEA
    r = lambd*t
    mu = np.ceil(np.log(2*np.sqrt(2)*lambd/eps_PEA) + np.log(1 + eps_PEA/(8*lambd)) + np.log(1 - (Ham_norm/lambd)**2))
    S = 12*N+8*np.log(N)
    # eps_SS and P depend on each other; we break the loop with one fixed-point sweep,
    # seeded with the eps_SS-independent part of P (an addition: the original used P before defining it)
    P = 6*N + 40*np.log(N) + 10*mu
    eps_SS = eps_S / (r*2*P)
    P = 6*N + 40*np.log(N) + 24*np.log(1/eps_SS) + 10*mu
    return r*(2*P + S)
```
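A hypothetical usage sketch of `linear_T`; values are illustrative placeholders:
```python
linear_T(N = 125, lambd = 768., eps_PEA = .005, eps_S = .005, Ham_norm = 100.)
```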
## Sparsity and low rank factorization (berry2019qubitization)
```python
def sparsity_low_rank(N, lambd, eps_PEA, eps_S, L, Ham_norm): # eps_S and Ham_norm are the names the body actually uses
t = 4.7/eps_PEA
r = lambd*t
mu = np.ceil(np.log(2*np.sqrt(2)*lambd/eps_PEA) + np.log(1 + eps_PEA/(8*lambd)) + np.log(1 - (Ham_norm/lambd)**2))
    d = L*(N**2/8 + N/4)
M = np.log(N**2) + mu
def closest_power(x):
possible_results = np.floor(np.log2(x)), np.ceil(np.log2(x))
return min(possible_results, key= lambda z: abs(x-2**z))
kc = 2**closest_power(np.sqrt(d/M))
ku = 2**closest_power(np.sqrt(d))
    QROAM = 4*(np.ceil(d/kc)+4*M*(kc-1)+2*np.ceil(d/ku) + 4*ku)
Select = (4*N + 4*np.log(N))*4 # The *4 because Toffoli -> T-gates
# 7 times per prepare, we have to use Uniform
eps_SS = eps_S/ (7*2*r)
Uniform = 8*np.log(L) + 56*np.log(1/eps_SS) + 52*np.log(N/2) ### Warning, this is in T gates already!!!!
Other_subprepare = mu + np.log(L) + 6*np.log(N/2)
continuous_register = 2*(np.log(N/2))**2 + 3*np.log(N/2)
Prepare = 4*(QROAM + Other_subprepare + continuous_register) + Uniform # The 4 is Toffoli -> T-gates
return r*(2*Prepare + Select)
```
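A hypothetical usage sketch of `sparsity_low_rank`; values are illustrative placeholders:
```python
sparsity_low_rank(N = 125, lambd = 768., eps_PEA = .005, eps_S = .005, L = 50, Ham_norm = 100.)
```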
## Interaction picture (low2019hamiltonian)
```python
def interaction_picture(N, Gamma, lambd_T, lambd_U_V, eps_S, eps_HS, eps_PEA):
'''
The number of rotations is very large here:
Each of the r segments can be simulated as e^{-i(U+V)t} T(e^{-i \int H_I (s) ds})
- The Time Ordered Dyson series segment is represented by TDS
- TDS is made of oblivious Amplitude Amplification of TDS_beta: 2x Ref + 3x TDS_beta
< TDS_beta is made of COEF DYS_K COEF'
< DYS_K is made of
· 4K U operators
· K Compare and K Swap
· (3K + 1) ADD operators
· K HAM-T operators, made of,
> x2 e^{-i(U+V)t}
> x2 FFFT
> x2 Prepare
> Select
Also, the e^{-i(U+V)t} is
> x2 FFFT
> N log 1/eps_SS Phase operators
> N Multiplications
'''
t = 4.7/eps_PEA
r = lambd_T*t # lambd_T is necessary to take tau = 1
# Notice that K is a bit different than in other articles because each segment is now its own Taylor series, which has the consequence of larger error
    K = np.ceil( -1 + 2* np.log(2*r/eps_HS)/np.log(np.log(2*r/eps_HS)) )
delta = eps_HS / t # Alternatively we can substitute t by r changing delta in the following line to 1/2. t represents L in the main text (see before eq 21 in the original article)
tau = 1/np.ceil(2*lambd_T) # tau = t/ np.ceil(2 * lambd_T * t)
    M = max(16*tau/delta * (2*lambd_U_V + lambd_T), K**2)
rot_FFFT = 2*N/2*np.log2(N)
rot_U = 4*K
rot_COEF = 2**(np.ceil(np.log2(K) + 1))
rot_prep = 16*N
epsilon_SS = 1e-2
consistent = False
while not consistent:
rot_exp_U_V = rot_FFFT + N*np.log2(1/epsilon_SS) + N
num_rotations = ((((2*rot_prep + 2* rot_FFFT + 2*np.log(M)*rot_exp_U_V)*K * rot_U) + 2*rot_COEF)*3 + rot_exp_U_V)*r
proposed_eps_SS = eps_S / num_rotations
if proposed_eps_SS < epsilon_SS:
consistent = True
else:
epsilon_SS /= 10
# Cost
    exp_U_V= 46*N*(np.log(1/epsilon_SS))**2+8*N + 8*N*np.log2(1/epsilon_SS)*np.log2(N) + 4*N*np.log(N)
    COEF = rot_COEF * (10 + 12*np.log2(K))
    U = 8*(np.log2(M) + np.log2(1/epsilon_SS))
    ADD = 4*np.log2(K)
    Comp = 8*np.log2(M)
    FFFT = (2 + 4*np.log(1/epsilon_SS))*N*np.log2(N) - 4*np.log2(1/epsilon_SS)*N
    Prep = 2**9*(1 + np.log2(N))+2**6*3*N*np.log2(1/epsilon_SS)
Select = 8*N
REF = 16*(np.log2(Gamma) + 2*np.log(K+1)+ 2*np.log(M))
cost = ((((2*Prep + Select + 2*FFFT + 2*np.log(M)*exp_U_V)*K + (3*K+1)*ADD + K*Comp + 4*K*U +2*COEF)*3 + 2*REF) + exp_U_V)*r
return cost
```
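A hypothetical usage sketch of `interaction_picture`; values are illustrative placeholders:
```python
interaction_picture(N = 125, Gamma = 125, lambd_T = 100., lambd_U_V = 500.,
                    eps_S = .005, eps_HS = .00125, eps_PEA = .005)
```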
## Sublinear scaling and interaction picture babbush2019quantum
```python
def sublinear_scaling_interaction(N, eta, Gamma, lambd_T, lambd_U_V, eps_S, eps_HS, eps_PEA, eps_mu, eps_M_0, J):
'''
See the interaction_picture function for more background
J represents the number of atoms
In this article there are three additional sources of error,
- the precision on preparing the amplitudes sqrt(zeta_l), eps_mu
- the precision on the position of the atomic nuclei, 1/delta_R. In the article we take log(1/delta_R) < 1/3 log(N)
- The precision due to the finite value of M_0 = eta N t / eps_M_0
The algorithm follows a very similar structure to that of the interaction_picture one.
'''
t = 4.7/eps_PEA
r = lambd_U_V*t # lambd_T is necessary to take tau = 1
# Notice that K is a bit different than in other articles because each segment is now its own Taylor series, which has the consequence of larger error
    K = np.ceil( -1 + 2* np.log(2*r/eps_HS)/np.log(np.log(2*r/eps_HS)) )
delta = eps_HS / t # Alternatively we can substitute t by r changing delta in the following line to 1/2. t represents L in the main text (see before eq 21 in the original article)
tau = 1/np.ceil(2*lambd_U_V) # tau = t/ np.ceil(2 * lambd_T * t)
    M = max(16*tau/delta * (lambd_U_V + 2*lambd_T), K**2)
M0 = eta * N * tau / (eps_M_0/r)
rot_exp_T = np.log2(eta) + 2*np.log2(N)
rot_select_1 = 1/3*np.log2(N) + 2
rot_Subprepare = 2 # Only the two rotations from Uniform in Subprepare
rot_COEF = 2**(np.ceil(np.log2(K) + 1))
num_rotations = (((2*np.log(M)*rot_exp_T + rot_select_1)*K + 2*rot_COEF)*3 + rot_exp_T )*r
eps_SS = eps_S / num_rotations
num_Subprepare = 2*3*K*3*r
    eps_mus = eps_mu / num_Subprepare
Subprep = 4*J + 4*np.log(1/eps_mus) +8*np.log2(1/eps_SS)+ 12*np.log2(J)
n = 1/3*np.log2(N) + 1
Prep = 3*(79*n**2 +43*n*np.log2(M0) + 44*n)
exp_T = rot_exp_T * 4*np.log(1/eps_SS)
select_0 = 16*eta*np.log2(N)
select_1 = 8*eta*np.log2(N) + 14*(np.log2(N))**2 + 4*np.log2(N)*np.log(1/eps_SS)
HAM_T = 2*np.log(M)*exp_T + 2*(3*(Subprep + Prep)) + select_0 + select_1 #The 3 multiplying Subprep and Prep comes from oblivious AA
U = 8*(np.log2(M) + np.log2(1/eps_SS))
ADD = 4*np.log2(K)
Comp = 8*np.log2(M)
COEF = rot_COEF * (10 + 12*np.log2(K))
REF = 16*(np.log2(Gamma) + 2*np.log(K+1)+ 2*np.log(M))
cost = (((4*K*U + K*Comp + (3*K + 1)*ADD + K*HAM_T) + 2*COEF)*3 + 2*REF)*r
# Initial state antisymmetrization
antisymmetrization = 3*eta*np.log2(eta)*(np.log2(eta)-1)*(2* np.ceil(np.log2(eta**2)) + np.log(N))
return cost + antisymmetrization
```
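A hypothetical usage sketch of `sublinear_scaling_interaction`; values are illustrative placeholders:
```python
sublinear_scaling_interaction(N = 125, eta = 10, Gamma = 125, lambd_T = 100., lambd_U_V = 500.,
                              eps_S = .005, eps_HS = .00125, eps_PEA = .005,
                              eps_mu = .005, eps_M_0 = .005, J = 5)
```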
# Finding the molecule parameters
```python
# Docs https://quantumai.google/reference/python/openfermion/
from openfermion.chem import geometry_from_pubchem, MolecularData
from openfermionpsi4 import run_psi4
from openfermion.transforms import get_fermion_operator, jordan_wigner
import openfermion
from openfermion.utils import Grid
from openfermion.hamiltonians import plane_wave_external_potential, plane_wave_potential, plane_wave_kinetic
from openfermion.hamiltonians import plane_wave_hamiltonian
from openfermion.hamiltonians import dual_basis_external_potential, dual_basis_potential, dual_basis_kinetic
from pyscf.mcscf import avas
methane_geometry = geometry_from_pubchem('methane')
print(methane_geometry)
basis = 'sto-3g'
molecule = MolecularData(methane_geometry, basis, multiplicity = 1)
print(molecule)
molecule = run_psi4(molecule,run_scf=True,
run_mp2=True,
#run_cisd=False,
#run_ccsd=True,
run_fci=False
)
```
[('C', (0, 0, 0)), ('H', (0.5541, 0.7996, 0.4965)), ('H', (0.6833, -0.8134, -0.2536)), ('H', (-0.7782, -0.3735, 0.6692)), ('H', (-0.4593, 0.3874, -0.9121))]
<openfermion.chem.molecular_data.MolecularData object at 0x7fe5c9b3b0b8>
```python
from pyscf import gto, scf, mcscf, fci,ao2mo
mol = gto.Mole()
mol = gto.M(
atom = methane_geometry,
basis = basis)
myhf = scf.RHF(mol) #.x2c() The x2c is relativistic. We are not so desperate :P
myhf.kernel()
mol.atom
```
converged SCF energy = -39.7265817118519
[('C', (0, 0, 0)),
('H', (0.5541, 0.7996, 0.4965)),
('H', (0.6833, -0.8134, -0.2536)),
('H', (-0.7782, -0.3735, 0.6692)),
('H', (-0.4593, 0.3874, -0.9121))]
```python
ao_labels = ['C 2pz']
norb, ne_act, orbs = avas.avas(myhf, ao_labels, canonicalize=False)
mo_ints_myhf = ao2mo.kernel(mol, myhf.mo_coeff) #orbs)
print(mo_ints_myhf.shape)
mo_ints_orbs = ao2mo.kernel(mol, orbs)
print(mo_ints_orbs.shape)
print(mo_ints_myhf - mo_ints_orbs)
```
(45, 45)
(45, 45)
[[ 2.92982983e+00 -2.85459192e-01 5.91465974e-02 ... -1.11860112e-02
2.66471566e-02 2.05217574e-01]
[-2.85459192e-01 2.35658170e-02 -1.43562384e-01 ... 3.49209290e-07
2.42649131e-04 -1.62842063e-02]
[ 5.91465974e-02 -1.43562384e-01 -2.99766787e+00 ... 1.84095139e-06
-7.74859929e-06 -2.61148750e-01]
...
[-1.11860112e-02 3.49209289e-07 1.84095139e-06 ... -1.31244080e-05
5.53431412e-03 1.39180374e-02]
[ 2.66471566e-02 2.42649131e-04 -7.74859929e-06 ... 5.53431412e-03
5.74825787e-02 6.73904843e-03]
[ 2.05217574e-01 -1.62842063e-02 -2.61148750e-01 ... 1.39180374e-02
6.73904843e-03 -7.40561615e-02]]
```python
'''
To obtain these Hamiltonians one must choose to study the system without a spin degree of freedom (spinless),
specify the dimension in which the calculation is performed (n_dimensions, usually 3),
specify how many plane waves are in each dimension (grid_length),
specify the length scale of the plane wave harmonics in each dimension (length_scale),
and give the locations and charges of the nuclei.
Taken from https://quantumai.google/openfermion/tutorials/intro_to_openfermion
'''
grid = Grid(dimensions = 3, length = 8, scale = 1.) # complexity grows considerably with length
grid.volume_scale()
plane_wave_H = plane_wave_hamiltonian(grid, methane_geometry, True)
plane_wave_H
```
```python
## Selection of active orbitals
ao_labels = ['Fe 3d', 'C 2pz']
norb, ne_act, orbs = avas.avas(mf, ao_labels, canonicalize=False) # mf: mean-field object of the ferrocene calculation built below
```
```python
## Low rank approximation. See
'''
https://quantumai.google/openfermion/tutorials/circuits_3_arbitrary_basis_trotter
Low rank approximation: https://github.com/quantumlib/OpenFermion/blob/4781602e094699f0fe0844bcded8ef0d45653e81/src/openfermion/circuits/low_rank.py#L76
Ground state: https://quantumai.google/reference/python/openfermion/linalg/get_ground_state
On the integration of low rank calculations: https://github.com/quantumlib/OpenFermion/issues/708#issuecomment-777640581
'''
```
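A minimal sketch of the low-rank decomposition referenced in the links above (an addition; it assumes the `molecule` object computed earlier, and `spin_basis=False` because `MolecularData.two_body_integrals` are spatial-orbital integrals, which should be double-checked against the linked source):
```python
from openfermion.circuits import low_rank_two_body_decomposition

# Decompose the two-body tensor into a sum of squared one-body operators;
# truncation_threshold (illustrative value) controls how many terms L are kept
eigenvalues, one_body_squares, one_body_correction, trunc_error = \
    low_rank_two_body_decomposition(molecule.two_body_integrals,
                                    truncation_threshold=1e-6,
                                    spin_basis=False)
print('Rank L =', len(eigenvalues), ', truncation error =', trunc_error)
```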
```python
molecule = run_psi4(molecule,run_scf=True,
run_mp2=True,
run_cisd=False,
run_ccsd=True,
run_fci=False
)
```
```python
dir(molecule)
```
['__class__',
'__delattr__',
'__dict__',
'__dir__',
'__doc__',
'__eq__',
'__format__',
'__ge__',
'__getattribute__',
'__gt__',
'__hash__',
'__init__',
'__init_subclass__',
'__le__',
'__lt__',
'__module__',
'__ne__',
'__new__',
'__reduce__',
'__reduce_ex__',
'__repr__',
'__setattr__',
'__sizeof__',
'__str__',
'__subclasshook__',
'__weakref__',
'_canonical_orbitals',
'_ccsd_double_amps',
'_ccsd_single_amps',
'_cisd_one_rdm',
'_cisd_two_rdm',
'_fci_one_rdm',
'_fci_two_rdm',
'_one_body_integrals',
'_overlap_integrals',
'_two_body_integrals',
'atoms',
'basis',
'canonical_orbitals',
'ccsd_double_amps',
'ccsd_energy',
'ccsd_single_amps',
'charge',
'cisd_energy',
'cisd_one_rdm',
'cisd_two_rdm',
'description',
'fci_energy',
'fci_one_rdm',
'fci_two_rdm',
'filename',
'general_calculations',
'geometry',
'get_active_space_integrals',
'get_antisym',
'get_from_file',
'get_integrals',
'get_j',
'get_k',
'get_molecular_hamiltonian',
'get_molecular_rdm',
'get_n_alpha_electrons',
'get_n_beta_electrons',
'hf_energy',
'init_lazy_properties',
'load',
'mp2_energy',
'multiplicity',
'n_atoms',
'n_electrons',
'n_orbitals',
'n_qubits',
'name',
'nuclear_repulsion',
'one_body_integrals',
'orbital_energies',
'overlap_integrals',
'protons',
'save',
'two_body_integrals']
```python
molecule.two_body_integrals
```
array([[[[ 3.50111808e+00, 2.93359989e-01, 1.10767442e-06, ...,
-3.28313798e-05, -6.91012233e-05, 3.26772351e-01],
[ 2.93359989e-01, 3.86008907e-02, 2.06425379e-07, ...,
-4.34506685e-06, -9.16302869e-06, 4.38093713e-02],
[ 1.10767442e-06, 2.06425379e-07, 8.77763273e-03, ...,
-8.81953045e-05, 1.34410199e-02, 2.86638503e-06],
...,
[-3.28313798e-05, -4.34506685e-06, -8.81953045e-05, ...,
2.07111511e-02, -2.63509490e-08, -2.89584668e-06],
[-6.91012233e-05, -9.16302869e-06, 1.34410199e-02, ...,
-2.63509490e-08, 2.07122620e-02, -6.31082236e-06],
[ 3.26772351e-01, 4.38093713e-02, 2.86638503e-06, ...,
-2.89584668e-06, -6.31082236e-06, 4.97918460e-02]],
[[ 2.93359989e-01, 3.86008907e-02, 2.06425379e-07, ...,
-4.34506685e-06, -9.16302869e-06, 4.38093713e-02],
[ 7.08049932e-01, 9.18605309e-03, -1.41229460e-07, ...,
-1.13582449e-06, -2.32160079e-06, 9.68165213e-03],
[ 3.55133593e-06, -3.53156643e-08, -1.37788849e-02, ...,
1.25125629e-04, -1.90679933e-02, -3.72634072e-06],
...,
[-2.83415206e-05, -1.06179581e-06, 6.44065334e-05, ...,
-1.42569709e-02, 1.97199747e-08, -2.52223399e-06],
[-6.09306795e-05, -2.23060513e-06, -9.81482842e-03, ...,
1.97259665e-08, -1.42571143e-02, -5.17749579e-06],
[ 2.87620277e-01, 1.03823976e-02, -1.87134836e-06, ...,
-2.54526708e-06, -5.16579481e-06, 1.09153886e-02]],
[[ 1.10767442e-06, 2.06425379e-07, 8.77763273e-03, ...,
-8.81953045e-05, 1.34410199e-02, 2.86638503e-06],
[ 3.55133593e-06, -3.53156643e-08, -1.37788849e-02, ...,
1.25125629e-04, -1.90679933e-02, -3.72634072e-06],
[ 6.33204173e-01, 3.91902849e-03, -3.67742779e-03, ...,
-1.58186659e-03, -4.46298057e-03, 3.73327229e-03],
...,
[-1.64474956e-03, -3.57628621e-05, 1.18656398e-03, ...,
-2.88043313e-03, 1.75897619e-03, -3.46276503e-05],
[ 2.50609227e-01, 5.44929645e-03, 3.34711879e-03, ...,
1.75726711e-03, 5.04403945e-03, 5.25585657e-03],
[ 4.76341024e-05, 1.07477119e-06, -6.77256482e-03, ...,
6.46649796e-05, -9.87178726e-03, -9.03711095e-07]],
...,
[[-3.28313798e-05, -4.34506685e-06, -8.81953045e-05, ...,
2.07111511e-02, -2.63509490e-08, -2.89584668e-06],
[-2.83415206e-05, -1.06179581e-06, 6.44065334e-05, ...,
-1.42569709e-02, 1.97199747e-08, -2.52223399e-06],
[-1.64474956e-03, -3.57628621e-05, 1.18656398e-03, ...,
-2.88043313e-03, 1.75897619e-03, -3.46276503e-05],
...,
[ 7.30916095e-01, 8.45416653e-03, -2.08784404e-04, ...,
-1.03692031e-03, -6.67196375e-04, 8.07457223e-03],
[-2.01565298e-07, -1.14455491e-08, 1.28860659e-04, ...,
-6.61335954e-04, 4.00337188e-04, -4.08711279e-08],
[ 4.35423201e-06, -4.26310508e-07, 9.23093626e-05, ...,
-2.01905932e-02, -2.31220933e-08, -2.49459591e-06]],
[[-6.91012233e-05, -9.16302869e-06, 1.34410199e-02, ...,
-2.63509490e-08, 2.07122620e-02, -6.31082236e-06],
[-6.09306795e-05, -2.23060513e-06, -9.81482842e-03, ...,
1.97259665e-08, -1.42571143e-02, -5.17749579e-06],
[ 2.50609227e-01, 5.44929645e-03, 3.34711879e-03, ...,
1.75726711e-03, 5.04403945e-03, 5.25585657e-03],
...,
[-2.01565298e-07, -1.14455491e-08, 1.28860659e-04, ...,
-6.61335954e-04, 4.00337188e-04, -4.08711280e-08],
[ 7.30929458e-01, 8.45459050e-03, 3.71731601e-04, ...,
3.97564082e-04, 1.16894374e-03, 8.07564932e-03],
[ 7.47527832e-06, -9.59599899e-07, -1.40713753e-02, ...,
-2.31100940e-08, -2.01906880e-02, -5.10946321e-06]],
[[ 3.26772351e-01, 4.38093713e-02, 2.86638503e-06, ...,
-2.89584668e-06, -6.31082236e-06, 4.97918460e-02],
[ 2.87620277e-01, 1.03823976e-02, -1.87134836e-06, ...,
-2.54526708e-06, -5.16579481e-06, 1.09153886e-02],
[ 4.76341024e-05, 1.07477119e-06, -6.77256482e-03, ...,
6.46649796e-05, -9.87178726e-03, -9.03711095e-07],
...,
[ 4.35423201e-06, -4.26310507e-07, 9.23093626e-05, ...,
-2.01905932e-02, -2.31220933e-08, -2.49459591e-06],
[ 7.47527832e-06, -9.59599899e-07, -1.40713753e-02, ...,
-2.31100940e-08, -2.01906880e-02, -5.10946321e-06],
[ 6.93626813e-01, 1.22216132e-02, -5.63226815e-06, ...,
-5.35882864e-06, -1.09470989e-05, 1.26981969e-02]]],
[[[ 2.93359989e-01, 7.08049932e-01, 3.55133593e-06, ...,
-2.83415206e-05, -6.09306795e-05, 2.87620277e-01],
[ 3.86008907e-02, 9.18605309e-03, -3.53156643e-08, ...,
-1.06179581e-06, -2.23060513e-06, 1.03823976e-02],
[ 2.06425379e-07, -1.41229460e-07, -1.37788849e-02, ...,
6.44065334e-05, -9.81482842e-03, -1.87134836e-06],
...,
[-4.34506685e-06, -1.13582449e-06, 1.25125629e-04, ...,
-1.42569709e-02, 1.97259665e-08, -2.54526708e-06],
[-9.16302869e-06, -2.32160079e-06, -1.90679933e-02, ...,
1.97199747e-08, -1.42571143e-02, -5.16579481e-06],
[ 4.38093713e-02, 9.68165213e-03, -3.72634072e-06, ...,
-2.52223399e-06, -5.17749580e-06, 1.09153886e-02]],
[[ 3.86008907e-02, 9.18605309e-03, -3.53156643e-08, ...,
-1.06179581e-06, -2.23060513e-06, 1.03823976e-02],
[ 9.18605309e-03, 5.03450166e-01, 3.21671631e-06, ...,
-8.66094513e-06, -1.94808815e-05, 8.77183115e-02],
[-3.53156643e-08, 3.21671631e-06, 1.08115986e-01, ...,
-1.22116196e-04, 1.85996575e-02, 3.06744488e-06],
...,
[-1.06179581e-06, -8.66094513e-06, -1.22116196e-04, ...,
4.76647154e-02, 8.42450793e-08, -2.20323013e-06],
[-2.23060513e-06, -1.94808815e-05, 1.85996575e-02, ...,
8.42450793e-08, 4.76646025e-02, -4.57068000e-06],
[ 1.03823976e-02, 8.77183115e-02, 3.06744488e-06, ...,
-2.20323013e-06, -4.57068000e-06, 7.34767459e-02]],
[[ 2.06425379e-07, -1.41229460e-07, -1.37788849e-02, ...,
6.44065334e-05, -9.81482842e-03, -1.87134836e-06],
[-3.53156643e-08, 3.21671631e-06, 1.08115986e-01, ...,
-1.22116196e-04, 1.85996575e-02, 3.06744488e-06],
[ 3.91902849e-03, 4.85078209e-01, 5.45464260e-02, ...,
-6.82034659e-03, -1.92377543e-02, 6.88314722e-02],
...,
[-3.57628621e-05, -5.31163517e-04, -1.76729677e-02, ...,
-3.36298146e-03, 2.04861041e-03, -4.70514494e-04],
[ 5.44929645e-03, 8.09269071e-02, -4.98598966e-02, ...,
2.04559813e-03, 5.86512604e-03, 7.17006501e-02],
[ 1.07477119e-06, 1.36872951e-05, -1.05354363e-02, ...,
-2.71611902e-04, 4.13909802e-02, 2.33650853e-05]],
...,
[[-4.34506685e-06, -1.13582449e-06, 1.25125629e-04, ...,
-1.42569709e-02, 1.97259665e-08, -2.54526708e-06],
[-1.06179581e-06, -8.66094513e-06, -1.22116196e-04, ...,
4.76647154e-02, 8.42450792e-08, -2.20323013e-06],
[-3.57628621e-05, -5.31163517e-04, -1.76729677e-02, ...,
-3.36298146e-03, 2.04861041e-03, -4.70514494e-04],
...,
[ 8.45416653e-03, 4.93027410e-01, -1.79579657e-02, ...,
2.65922313e-02, 1.70006790e-02, 8.17318129e-02],
[-1.14455491e-08, 4.82654193e-08, 1.09508288e-02, ...,
1.70169833e-02, -1.01859830e-02, 6.16828042e-07],
[-4.26310508e-07, 3.47471534e-06, -6.33691367e-04, ...,
8.92744321e-03, 6.58138124e-07, 2.31223997e-06]],
[[-9.16302869e-06, -2.32160079e-06, -1.90679933e-02, ...,
1.97199747e-08, -1.42571143e-02, -5.16579481e-06],
[-2.23060513e-06, -1.94808815e-05, 1.85996575e-02, ...,
8.42450792e-08, 4.76646025e-02, -4.57068000e-06],
[ 5.44929645e-03, 8.09269071e-02, -4.98598966e-02, ...,
2.04559813e-03, 5.86512604e-03, 7.17006501e-02],
...,
[-1.14455491e-08, 4.82654193e-08, 1.09508288e-02, ...,
1.70169833e-02, -1.01859830e-02, 6.16828042e-07],
[ 8.45459050e-03, 4.93033242e-01, 3.14033557e-02, ...,
-1.01934701e-02, -2.97359262e-02, 8.17170195e-02],
[-9.59599899e-07, 7.41517256e-06, 9.64840370e-02, ...,
6.58198841e-07, 8.91069925e-03, 3.42974564e-06]],
[[ 4.38093713e-02, 9.68165213e-03, -3.72634072e-06, ...,
-2.52223399e-06, -5.17749580e-06, 1.09153886e-02],
[ 1.03823976e-02, 8.77183115e-02, 3.06744488e-06, ...,
-2.20323013e-06, -4.57068000e-06, 7.34767459e-02],
[ 1.07477119e-06, 1.36872951e-05, -1.05354363e-02, ...,
-2.71611902e-04, 4.13909802e-02, 2.33650853e-05],
...,
[-4.26310508e-07, 3.47471534e-06, -6.33691367e-04, ...,
8.92744321e-03, 6.58138124e-07, 2.31223997e-06],
[-9.59599899e-07, 7.41517256e-06, 9.64840370e-02, ...,
6.58198841e-07, 8.91069925e-03, 3.42974564e-06],
[ 1.22216132e-02, 4.69767734e-01, 4.18023449e-05, ...,
-4.89659525e-06, -1.21614618e-05, 6.56355674e-02]]],
[[[ 1.10767442e-06, 3.55133593e-06, 6.33204173e-01, ...,
-1.64474956e-03, 2.50609227e-01, 4.76341024e-05],
[ 2.06425379e-07, -3.53156643e-08, 3.91902849e-03, ...,
-3.57628621e-05, 5.44929645e-03, 1.07477119e-06],
[ 8.77763273e-03, -1.37788849e-02, -3.67742779e-03, ...,
1.18656398e-03, 3.34711879e-03, -6.77256482e-03],
...,
[-8.81953045e-05, 1.25125629e-04, -1.58186659e-03, ...,
-2.88043313e-03, 1.75726711e-03, 6.46649796e-05],
[ 1.34410199e-02, -1.90679933e-02, -4.46298057e-03, ...,
1.75897619e-03, 5.04403945e-03, -9.87178726e-03],
[ 2.86638503e-06, -3.72634072e-06, 3.73327229e-03, ...,
-3.46276503e-05, 5.25585657e-03, -9.03711095e-07]],
[[ 2.06425379e-07, -3.53156643e-08, 3.91902849e-03, ...,
-3.57628621e-05, 5.44929645e-03, 1.07477119e-06],
[-1.41229460e-07, 3.21671631e-06, 4.85078209e-01, ...,
-5.31163517e-04, 8.09269071e-02, 1.36872951e-05],
[-1.37788849e-02, 1.08115986e-01, 5.45464260e-02, ...,
-1.76729677e-02, -4.98598966e-02, -1.05354363e-02],
...,
[ 6.44065334e-05, -1.22116196e-04, -6.82034659e-03, ...,
-3.36298146e-03, 2.04559813e-03, -2.71611902e-04],
[-9.81482842e-03, 1.85996575e-02, -1.92377543e-02, ...,
2.04861041e-03, 5.86512604e-03, 4.13909802e-02],
[-1.87134836e-06, 3.06744488e-06, 6.88314722e-02, ...,
-4.70514494e-04, 7.17006501e-02, 2.33650853e-05]],
[[ 8.77763273e-03, -1.37788849e-02, -3.67742779e-03, ...,
1.18656398e-03, 3.34711879e-03, -6.77256482e-03],
[-1.37788849e-02, 1.08115986e-01, 5.45464260e-02, ...,
-1.76729677e-02, -4.98598966e-02, -1.05354363e-02],
[-3.67742779e-03, 5.45464260e-02, 5.46182984e-01, ...,
-3.53212839e-03, 2.60690805e-02, -2.88691240e-02],
...,
[ 1.18656398e-03, -1.76729677e-02, -3.53212839e-03, ...,
2.65790147e-02, 2.08745115e-03, 8.54093957e-03],
[ 3.34711879e-03, -4.98598966e-02, 2.60690805e-02, ...,
2.08745115e-03, 1.27781451e-01, 2.41109016e-02],
[-6.77256482e-03, -1.05354363e-02, -2.88691240e-02, ...,
8.54093957e-03, 2.41109016e-02, 4.70702988e-02]],
...,
[[-8.81953045e-05, 1.25125629e-04, -1.58186659e-03, ...,
-2.88043313e-03, 1.75726711e-03, 6.46649796e-05],
[ 6.44065334e-05, -1.22116196e-04, -6.82034659e-03, ...,
-3.36298146e-03, 2.04559813e-03, -2.71611902e-04],
[ 1.18656398e-03, -1.76729677e-02, -3.53212839e-03, ...,
2.65790147e-02, 2.08745115e-03, 8.54093957e-03],
...,
[-2.08784404e-04, -1.79579657e-02, 4.50045695e-01, ...,
-8.49638631e-03, 8.82422963e-02, 2.11341434e-02],
[ 1.28860659e-04, 1.09508288e-02, 2.50289706e-03, ...,
-4.04799749e-03, -2.55352239e-03, -1.28886098e-02],
[ 9.23093626e-05, -6.33691367e-04, 1.86134377e-02, ...,
3.11226522e-02, -1.89925526e-02, 1.25310980e-04]],
[[ 1.34410199e-02, -1.90679933e-02, -4.46298057e-03, ...,
1.75897619e-03, 5.04403945e-03, -9.87178726e-03],
[-9.81482842e-03, 1.85996575e-02, -1.92377543e-02, ...,
2.04861041e-03, 5.86512604e-03, 4.13909802e-02],
[ 3.34711879e-03, -4.98598966e-02, 2.60690805e-02, ...,
2.08745115e-03, 1.27781451e-01, 2.41109016e-02],
...,
[ 1.28860659e-04, 1.09508288e-02, 2.50289706e-03, ...,
-4.04799749e-03, -2.55352239e-03, -1.28886098e-02],
[ 3.71731601e-04, 3.14033557e-02, 5.37769937e-01, ...,
-3.15977911e-03, 4.58716473e-02, -3.69569922e-02],
[-1.40713753e-02, 9.64840370e-02, 5.25253416e-02, ...,
-1.89951557e-02, -5.44872612e-02, -1.87973186e-02]],
[[ 2.86638503e-06, -3.72634072e-06, 3.73327229e-03, ...,
-3.46276503e-05, 5.25585657e-03, -9.03711095e-07],
[-1.87134836e-06, 3.06744488e-06, 6.88314722e-02, ...,
-4.70514494e-04, 7.17006501e-02, 2.33650853e-05],
[-6.77256482e-03, -1.05354363e-02, -2.88691240e-02, ...,
8.54093957e-03, 2.41109016e-02, 4.70702988e-02],
...,
[ 9.23093626e-05, -6.33691367e-04, 1.86134377e-02, ...,
3.11226522e-02, -1.89925526e-02, 1.25310980e-04],
[-1.40713753e-02, 9.64840370e-02, 5.25253416e-02, ...,
-1.89951557e-02, -5.44872612e-02, -1.87973186e-02],
[-5.63226815e-06, 4.18023449e-05, 4.59426468e-01, ...,
-3.70220438e-04, 5.67780971e-02, 4.96886151e-07]]],
...,
[[[-3.28313798e-05, -2.83415206e-05, -1.64474956e-03, ...,
7.30916095e-01, -2.01565298e-07, 4.35423201e-06],
[-4.34506685e-06, -1.06179581e-06, -3.57628621e-05, ...,
8.45416653e-03, -1.14455491e-08, -4.26310508e-07],
[-8.81953045e-05, 6.44065334e-05, 1.18656398e-03, ...,
-2.08784404e-04, 1.28860659e-04, 9.23093626e-05],
...,
[ 2.07111511e-02, -1.42569709e-02, -2.88043313e-03, ...,
-1.03692031e-03, -6.61335954e-04, -2.01905932e-02],
[-2.63509490e-08, 1.97199747e-08, 1.75897619e-03, ...,
-6.67196375e-04, 4.00337188e-04, -2.31220933e-08],
[-2.89584668e-06, -2.52223399e-06, -3.46276503e-05, ...,
8.07457223e-03, -4.08711279e-08, -2.49459591e-06]],
[[-4.34506685e-06, -1.06179581e-06, -3.57628621e-05, ...,
8.45416653e-03, -1.14455491e-08, -4.26310508e-07],
[-1.13582449e-06, -8.66094513e-06, -5.31163517e-04, ...,
4.93027410e-01, 4.82654193e-08, 3.47471534e-06],
[ 1.25125629e-04, -1.22116196e-04, -1.76729677e-02, ...,
-1.79579657e-02, 1.09508288e-02, -6.33691367e-04],
...,
[-1.42569709e-02, 4.76647154e-02, -3.36298146e-03, ...,
2.65922313e-02, 1.70169833e-02, 8.92744321e-03],
[ 1.97259665e-08, 8.42450792e-08, 2.04861041e-03, ...,
1.70006790e-02, -1.01859830e-02, 6.58138124e-07],
[-2.54526708e-06, -2.20323013e-06, -4.70514494e-04, ...,
8.17318129e-02, 6.16828042e-07, 2.31223997e-06]],
[[-8.81953045e-05, 6.44065334e-05, 1.18656398e-03, ...,
-2.08784404e-04, 1.28860659e-04, 9.23093626e-05],
[ 1.25125629e-04, -1.22116196e-04, -1.76729677e-02, ...,
-1.79579657e-02, 1.09508288e-02, -6.33691367e-04],
[-1.58186659e-03, -6.82034659e-03, -3.53212839e-03, ...,
4.50045695e-01, 2.50289706e-03, 1.86134377e-02],
...,
[-2.88043313e-03, -3.36298146e-03, 2.65790147e-02, ...,
-8.49638631e-03, -4.04799749e-03, 3.11226522e-02],
[ 1.75726711e-03, 2.04559813e-03, 2.08745115e-03, ...,
8.82422963e-02, -2.55352239e-03, -1.89925526e-02],
[ 6.46649796e-05, -2.71611902e-04, 8.54093957e-03, ...,
2.11341434e-02, -1.28886098e-02, 1.25310980e-04]],
...,
[[ 2.07111511e-02, -1.42569709e-02, -2.88043313e-03, ...,
-1.03692031e-03, -6.61335954e-04, -2.01905932e-02],
[-1.42569709e-02, 4.76647154e-02, -3.36298146e-03, ...,
2.65922313e-02, 1.70169833e-02, 8.92744321e-03],
[-2.88043313e-03, -3.36298146e-03, 2.65790147e-02, ...,
-8.49638631e-03, -4.04799749e-03, 3.11226522e-02],
...,
[-1.03692031e-03, 2.65922313e-02, -8.49638631e-03, ...,
5.61190785e-01, 9.40789903e-03, -3.31969562e-02],
[-6.61335954e-04, 1.70169833e-02, -4.04799749e-03, ...,
9.40789903e-03, 2.79620622e-02, -2.12487548e-02],
[-2.01905932e-02, 8.92744321e-03, 3.11226522e-02, ...,
-3.31969562e-02, -2.12487548e-02, 1.01806000e-01]],
[[-2.63509490e-08, 1.97199747e-08, 1.75897619e-03, ...,
-6.67196375e-04, 4.00337188e-04, -2.31220933e-08],
[ 1.97259665e-08, 8.42450792e-08, 2.04861041e-03, ...,
1.70006790e-02, -1.01859830e-02, 6.58138124e-07],
[ 1.75726711e-03, 2.04559813e-03, 2.08745115e-03, ...,
8.82422963e-02, -2.55352239e-03, -1.89925526e-02],
...,
[-6.61335954e-04, 1.70169833e-02, -4.04799749e-03, ...,
9.40789903e-03, 2.79620622e-02, -2.12487548e-02],
[ 3.97564082e-04, -1.01934701e-02, -3.15977911e-03, ...,
4.67730306e-01, 3.01499502e-03, 1.27084640e-02],
[-2.31100940e-08, 6.58198841e-07, -1.89951557e-02, ...,
-2.12351669e-02, 1.27020238e-02, -1.60013524e-06]],
[[-2.89584668e-06, -2.52223399e-06, -3.46276503e-05, ...,
8.07457223e-03, -4.08711280e-08, -2.49459591e-06],
[-2.54526708e-06, -2.20323013e-06, -4.70514494e-04, ...,
8.17318129e-02, 6.16828042e-07, 2.31223997e-06],
[ 6.46649796e-05, -2.71611902e-04, 8.54093957e-03, ...,
2.11341434e-02, -1.28886098e-02, 1.25310980e-04],
...,
[-2.01905932e-02, 8.92744321e-03, 3.11226522e-02, ...,
-3.31969562e-02, -2.12487548e-02, 1.01806000e-01],
[-2.31100940e-08, 6.58198841e-07, -1.89951557e-02, ...,
-2.12351669e-02, 1.27020238e-02, -1.60013524e-06],
[-5.35882864e-06, -4.89659525e-06, -3.70220438e-04, ...,
4.77271777e-01, -1.69108655e-06, 2.26242104e-05]]],
[[[-6.91012233e-05, -6.09306795e-05, 2.50609227e-01, ...,
-2.01565298e-07, 7.30929458e-01, 7.47527832e-06],
[-9.16302869e-06, -2.23060513e-06, 5.44929645e-03, ...,
-1.14455491e-08, 8.45459050e-03, -9.59599899e-07],
[ 1.34410199e-02, -9.81482842e-03, 3.34711879e-03, ...,
1.28860659e-04, 3.71731601e-04, -1.40713753e-02],
...,
[-2.63509490e-08, 1.97259665e-08, 1.75726711e-03, ...,
-6.61335954e-04, 3.97564082e-04, -2.31100940e-08],
[ 2.07122620e-02, -1.42571143e-02, 5.04403945e-03, ...,
4.00337188e-04, 1.16894374e-03, -2.01906880e-02],
[-6.31082236e-06, -5.17749579e-06, 5.25585657e-03, ...,
-4.08711280e-08, 8.07564932e-03, -5.10946321e-06]],
[[-9.16302869e-06, -2.23060513e-06, 5.44929645e-03, ...,
-1.14455491e-08, 8.45459050e-03, -9.59599899e-07],
[-2.32160079e-06, -1.94808815e-05, 8.09269071e-02, ...,
4.82654193e-08, 4.93033242e-01, 7.41517256e-06],
[-1.90679933e-02, 1.85996575e-02, -4.98598966e-02, ...,
1.09508288e-02, 3.14033557e-02, 9.64840370e-02],
...,
[ 1.97199747e-08, 8.42450792e-08, 2.04559813e-03, ...,
1.70169833e-02, -1.01934701e-02, 6.58198841e-07],
[-1.42571143e-02, 4.76646025e-02, 5.86512604e-03, ...,
-1.01859830e-02, -2.97359262e-02, 8.91069925e-03],
[-5.16579481e-06, -4.57068000e-06, 7.17006501e-02, ...,
6.16828042e-07, 8.17170195e-02, 3.42974564e-06]],
[[ 1.34410199e-02, -9.81482842e-03, 3.34711879e-03, ...,
1.28860659e-04, 3.71731601e-04, -1.40713753e-02],
[-1.90679933e-02, 1.85996575e-02, -4.98598966e-02, ...,
1.09508288e-02, 3.14033557e-02, 9.64840370e-02],
[-4.46298057e-03, -1.92377543e-02, 2.60690805e-02, ...,
2.50289706e-03, 5.37769937e-01, 5.25253416e-02],
...,
[ 1.75897619e-03, 2.04861041e-03, 2.08745115e-03, ...,
-4.04799749e-03, -3.15977911e-03, -1.89951557e-02],
[ 5.04403945e-03, 5.86512604e-03, 1.27781451e-01, ...,
-2.55352239e-03, 4.58716473e-02, -5.44872612e-02],
[-9.87178726e-03, 4.13909802e-02, 2.41109016e-02, ...,
-1.28886098e-02, -3.69569922e-02, -1.87973186e-02]],
...,
[[-2.63509490e-08, 1.97259665e-08, 1.75726711e-03, ...,
-6.61335954e-04, 3.97564082e-04, -2.31100940e-08],
[ 1.97199747e-08, 8.42450792e-08, 2.04559813e-03, ...,
1.70169833e-02, -1.01934701e-02, 6.58198841e-07],
[ 1.75897619e-03, 2.04861041e-03, 2.08745115e-03, ...,
-4.04799749e-03, -3.15977911e-03, -1.89951557e-02],
...,
[-6.67196375e-04, 1.70006790e-02, 8.82422963e-02, ...,
9.40789903e-03, 4.67730306e-01, -2.12351669e-02],
[ 4.00337188e-04, -1.01859830e-02, -2.55352239e-03, ...,
2.79620622e-02, 3.01499502e-03, 1.27020238e-02],
[-2.31220933e-08, 6.58138124e-07, -1.89925526e-02, ...,
-2.12487548e-02, 1.27084640e-02, -1.60013524e-06]],
[[ 2.07122620e-02, -1.42571143e-02, 5.04403945e-03, ...,
4.00337188e-04, 1.16894374e-03, -2.01906880e-02],
[-1.42571143e-02, 4.76646025e-02, 5.86512604e-03, ...,
-1.01859830e-02, -2.97359262e-02, 8.91069925e-03],
[ 5.04403945e-03, 5.86512604e-03, 1.27781451e-01, ...,
-2.55352239e-03, 4.58716473e-02, -5.44872612e-02],
...,
[ 4.00337188e-04, -1.01859830e-02, -2.55352239e-03, ...,
2.79620622e-02, 3.01499502e-03, 1.27020238e-02],
[ 1.16894374e-03, -2.97359262e-02, 4.58716473e-02, ...,
3.01499502e-03, 5.63117985e-01, 3.70448140e-02],
[-2.01906880e-02, 8.91069925e-03, -5.44872612e-02, ...,
1.27020238e-02, 3.70448140e-02, 1.01845168e-01]],
[[-6.31082236e-06, -5.17749579e-06, 5.25585657e-03, ...,
-4.08711280e-08, 8.07564932e-03, -5.10946321e-06],
[-5.16579481e-06, -4.57068000e-06, 7.17006501e-02, ...,
6.16828042e-07, 8.17170195e-02, 3.42974564e-06],
[-9.87178726e-03, 4.13909802e-02, 2.41109016e-02, ...,
-1.28886098e-02, -3.69569922e-02, -1.87973186e-02],
...,
[-2.31220933e-08, 6.58138124e-07, -1.89925526e-02, ...,
-2.12487548e-02, 1.27084640e-02, -1.60013524e-06],
[-2.01906880e-02, 8.91069925e-03, -5.44872612e-02, ...,
1.27020238e-02, 3.70448140e-02, 1.01845168e-01],
[-1.09470989e-05, -1.21614618e-05, 5.67780971e-02, ...,
-1.69108655e-06, 4.77312715e-01, 4.77368077e-05]]],
[[[ 3.26772351e-01, 2.87620277e-01, 4.76341024e-05, ...,
4.35423201e-06, 7.47527832e-06, 6.93626813e-01],
[ 4.38093713e-02, 1.03823976e-02, 1.07477119e-06, ...,
-4.26310508e-07, -9.59599899e-07, 1.22216132e-02],
[ 2.86638503e-06, -1.87134836e-06, -6.77256482e-03, ...,
9.23093626e-05, -1.40713753e-02, -5.63226815e-06],
...,
[-2.89584668e-06, -2.54526708e-06, 6.46649796e-05, ...,
-2.01905932e-02, -2.31100940e-08, -5.35882864e-06],
[-6.31082236e-06, -5.16579481e-06, -9.87178726e-03, ...,
-2.31220933e-08, -2.01906880e-02, -1.09470989e-05],
[ 4.97918460e-02, 1.09153886e-02, -9.03711095e-07, ...,
-2.49459592e-06, -5.10946321e-06, 1.26981969e-02]],
[[ 4.38093713e-02, 1.03823976e-02, 1.07477119e-06, ...,
-4.26310508e-07, -9.59599899e-07, 1.22216132e-02],
[ 9.68165213e-03, 8.77183115e-02, 1.36872951e-05, ...,
3.47471534e-06, 7.41517256e-06, 4.69767734e-01],
[-3.72634072e-06, 3.06744488e-06, -1.05354363e-02, ...,
-6.33691367e-04, 9.64840370e-02, 4.18023449e-05],
...,
[-2.52223399e-06, -2.20323013e-06, -2.71611902e-04, ...,
8.92744321e-03, 6.58198841e-07, -4.89659525e-06],
[-5.17749579e-06, -4.57068000e-06, 4.13909802e-02, ...,
6.58138124e-07, 8.91069925e-03, -1.21614618e-05],
[ 1.09153886e-02, 7.34767459e-02, 2.33650853e-05, ...,
2.31223997e-06, 3.42974564e-06, 6.56355674e-02]],
[[ 2.86638503e-06, -1.87134836e-06, -6.77256482e-03, ...,
9.23093626e-05, -1.40713753e-02, -5.63226815e-06],
[-3.72634072e-06, 3.06744488e-06, -1.05354363e-02, ...,
-6.33691367e-04, 9.64840370e-02, 4.18023449e-05],
[ 3.73327229e-03, 6.88314722e-02, -2.88691240e-02, ...,
1.86134377e-02, 5.25253416e-02, 4.59426468e-01],
...,
[-3.46276503e-05, -4.70514494e-04, 8.54093957e-03, ...,
3.11226522e-02, -1.89951557e-02, -3.70220438e-04],
[ 5.25585657e-03, 7.17006501e-02, 2.41109016e-02, ...,
-1.89925526e-02, -5.44872612e-02, 5.67780971e-02],
[-9.03711095e-07, 2.33650853e-05, 4.70702988e-02, ...,
1.25310980e-04, -1.87973186e-02, 4.96886152e-07]],
...,
[[-2.89584668e-06, -2.54526708e-06, 6.46649796e-05, ...,
-2.01905932e-02, -2.31100940e-08, -5.35882864e-06],
[-2.52223399e-06, -2.20323013e-06, -2.71611902e-04, ...,
8.92744321e-03, 6.58198841e-07, -4.89659525e-06],
[-3.46276503e-05, -4.70514494e-04, 8.54093957e-03, ...,
3.11226522e-02, -1.89951557e-02, -3.70220438e-04],
...,
[ 8.07457223e-03, 8.17318129e-02, 2.11341434e-02, ...,
-3.31969562e-02, -2.12351669e-02, 4.77271777e-01],
[-4.08711280e-08, 6.16828042e-07, -1.28886098e-02, ...,
-2.12487548e-02, 1.27020238e-02, -1.69108655e-06],
[-2.49459591e-06, 2.31223997e-06, 1.25310980e-04, ...,
1.01806000e-01, -1.60013524e-06, 2.26242104e-05]],
[[-6.31082236e-06, -5.16579481e-06, -9.87178726e-03, ...,
-2.31220933e-08, -2.01906880e-02, -1.09470989e-05],
[-5.17749579e-06, -4.57068000e-06, 4.13909802e-02, ...,
6.58138124e-07, 8.91069925e-03, -1.21614618e-05],
[ 5.25585657e-03, 7.17006501e-02, 2.41109016e-02, ...,
-1.89925526e-02, -5.44872612e-02, 5.67780971e-02],
...,
[-4.08711280e-08, 6.16828042e-07, -1.28886098e-02, ...,
-2.12487548e-02, 1.27020238e-02, -1.69108655e-06],
[ 8.07564932e-03, 8.17170195e-02, -3.69569922e-02, ...,
1.27084640e-02, 3.70448140e-02, 4.77312715e-01],
[-5.10946321e-06, 3.42974564e-06, -1.87973186e-02, ...,
-1.60013524e-06, 1.01845168e-01, 4.77368077e-05]],
[[ 4.97918460e-02, 1.09153886e-02, -9.03711095e-07, ...,
-2.49459591e-06, -5.10946321e-06, 1.26981969e-02],
[ 1.09153886e-02, 7.34767459e-02, 2.33650853e-05, ...,
2.31223997e-06, 3.42974564e-06, 6.56355674e-02],
[-9.03711095e-07, 2.33650853e-05, 4.70702988e-02, ...,
1.25310980e-04, -1.87973186e-02, 4.96886151e-07],
...,
[-2.49459591e-06, 2.31223997e-06, 1.25310980e-04, ...,
1.01806000e-01, -1.60013524e-06, 2.26242104e-05],
[-5.10946321e-06, 3.42974564e-06, -1.87973186e-02, ...,
-1.60013524e-06, 1.01845168e-01, 4.77368077e-05],
[ 1.26981969e-02, 6.56355674e-02, 4.96886151e-07, ...,
2.26242104e-05, 4.77368077e-05, 4.61373762e-01]]]])
```python
fermionic_hamiltonian = get_fermion_operator(molecule.get_molecular_hamiltonian())
plane_waves_hamiltonian = openfermion.get_diagonal_coulomb_hamiltonian(fermionic_hamiltonian)
plane_waves_hamiltonian
```
```python
from functools import reduce
import numpy
from pyscf import gto
from pyscf import scf
from pyscf import mcscf
from pyscf import fci
```
```python
mol = gto.Mole()
mol.atom = '''
Fe 0.000000 0.000000 0.000000
C -0.713500 -0.982049 -1.648000
C 0.713500 -0.982049 -1.648000
C 1.154467 0.375109 -1.648000
C 0.000000 1.213879 -1.648000
C -1.154467 0.375109 -1.648000
H -1.347694 -1.854942 -1.638208
H 1.347694 -1.854942 -1.638208
H 2.180615 0.708525 -1.638208
H 0.000000 2.292835 -1.638208
H -2.180615 0.708525 -1.638208
C -0.713500 -0.982049 1.648000
C -1.154467 0.375109 1.648000
C -0.000000 1.213879 1.648000
C 1.154467 0.375109 1.648000
C 0.713500 -0.982049 1.648000
H -1.347694 -1.854942 1.638208
H -2.180615 0.708525 1.638208
H 0.000000 2.292835 1.638208
H 2.180615 0.708525 1.638208
H 1.347694 -1.854942 1.638208
'''
mol.basis = 'cc-pvtz-dk'
mol.spin = 0
mol.build()
```
<pyscf.gto.mole.Mole at 0x7fc0ff3e5438>
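The AVAS cell earlier assumed a ferrocene mean-field object `mf`; a minimal sketch (an addition) producing it from the `Mole` just built:
```python
# Restricted Hartree-Fock on the ferrocene geometry defined above; this is the
# mean-field object the earlier AVAS cell refers to as mf (expensive in cc-pvtz-dk)
mf = scf.RHF(mol)
mf.kernel()
```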
```python
mol.__dict__
```
{'output': None,
'max_memory': 4000,
'charge': 0,
'spin': 0,
'symmetry': False,
'symmetry_subgroup': None,
'cart': False,
'atom': '\nFe 0.000000 0.000000 0.000000 \nC -0.713500 -0.982049 -1.648000 \nC 0.713500 -0.982049 -1.648000 \nC 1.154467 0.375109 -1.648000 \nC 0.000000 1.213879 -1.648000 \nC -1.154467 0.375109 -1.648000 \nH -1.347694 -1.854942 -1.638208 \nH 1.347694 -1.854942 -1.638208 \nH 2.180615 0.708525 -1.638208 \nH 0.000000 2.292835 -1.638208 \nH -2.180615 0.708525 -1.638208 \nC -0.713500 -0.982049 1.648000 \nC -1.154467 0.375109 1.648000 \nC -0.000000 1.213879 1.648000 \nC 1.154467 0.375109 1.648000 \nC 0.713500 -0.982049 1.648000 \nH -1.347694 -1.854942 1.638208 \nH -2.180615 0.708525 1.638208 \nH 0.000000 2.292835 1.638208 \nH 2.180615 0.708525 1.638208 \nH 1.347694 -1.854942 1.638208 \n',
'basis': 'cc-pvtz-dk',
'nucmod': {},
'ecp': {},
'nucprop': {},
'_atm': array([[ 26, 20, 1, 23, 0, 0],
[ 6, 24, 1, 27, 0, 0],
[ 6, 28, 1, 31, 0, 0],
[ 6, 32, 1, 35, 0, 0],
[ 6, 36, 1, 39, 0, 0],
[ 6, 40, 1, 43, 0, 0],
[ 1, 44, 1, 47, 0, 0],
[ 1, 48, 1, 51, 0, 0],
[ 1, 52, 1, 55, 0, 0],
[ 1, 56, 1, 59, 0, 0],
[ 1, 60, 1, 63, 0, 0],
[ 6, 64, 1, 67, 0, 0],
[ 6, 68, 1, 71, 0, 0],
[ 6, 72, 1, 75, 0, 0],
[ 6, 76, 1, 79, 0, 0],
[ 6, 80, 1, 83, 0, 0],
[ 1, 84, 1, 87, 0, 0],
[ 1, 88, 1, 91, 0, 0],
[ 1, 92, 1, 95, 0, 0],
[ 1, 96, 1, 99, 0, 0],
[ 1, 100, 1, 103, 0, 0]], dtype=int32),
'_bas': array([[ 0, 0, 19, ..., 148, 167, 0],
[ 0, 0, 1, ..., 281, 282, 0],
[ 0, 1, 15, ..., 283, 298, 0],
...,
[ 20, 1, 1, ..., 421, 422, 0],
[ 20, 1, 1, ..., 423, 424, 0],
[ 20, 2, 1, ..., 425, 426, 0]], dtype=int32),
'_env': array([ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
-1.34831959e+00, -1.85580365e+00, -3.11426865e+00, 0.00000000e+00,
1.34831959e+00, -1.85580365e+00, -3.11426865e+00, 0.00000000e+00,
2.18162645e+00, 7.08853277e-01, -3.11426865e+00, 0.00000000e+00,
0.00000000e+00, 2.29389886e+00, -3.11426865e+00, 0.00000000e+00,
-2.18162645e+00, 7.08853277e-01, -3.11426865e+00, 0.00000000e+00,
-2.54677256e+00, -3.50533236e+00, -3.09576446e+00, 0.00000000e+00,
2.54677256e+00, -3.50533236e+00, -3.09576446e+00, 0.00000000e+00,
4.12076513e+00, 1.33891820e+00, -3.09576446e+00, 0.00000000e+00,
0.00000000e+00, 4.33283020e+00, -3.09576446e+00, 0.00000000e+00,
-4.12076513e+00, 1.33891820e+00, -3.09576446e+00, 0.00000000e+00,
-1.34831959e+00, -1.85580365e+00, 3.11426865e+00, 0.00000000e+00,
-2.18162645e+00, 7.08853277e-01, 3.11426865e+00, 0.00000000e+00,
0.00000000e+00, 2.29389886e+00, 3.11426865e+00, 0.00000000e+00,
2.18162645e+00, 7.08853277e-01, 3.11426865e+00, 0.00000000e+00,
1.34831959e+00, -1.85580365e+00, 3.11426865e+00, 0.00000000e+00,
-2.54677256e+00, -3.50533236e+00, 3.09576446e+00, 0.00000000e+00,
-4.12076513e+00, 1.33891820e+00, 3.09576446e+00, 0.00000000e+00,
0.00000000e+00, 4.33283020e+00, 3.09576446e+00, 0.00000000e+00,
4.12076513e+00, 1.33891820e+00, 3.09576446e+00, 0.00000000e+00,
2.54677256e+00, -3.50533236e+00, 3.09576446e+00, 0.00000000e+00,
8.23600000e+03, 1.23500000e+03, 2.80800000e+02, 7.92700000e+01,
2.55900000e+01, 8.99700000e+00, 3.31900000e+00, 3.64300000e-01,
1.51092978e+00, 2.30028597e+00, 3.78085788e+00, 5.63467028e+00,
6.90228313e+00, 5.82189670e+00, 2.19400053e+00, -1.08364117e-02,
-5.48474761e-01, -8.37447421e-01, -1.38627626e+00, -2.12609047e+00,
-2.79077716e+00, -2.89628036e+00, -1.83642868e+00, 1.23250240e+00,
9.05900000e-01, 2.34598494e+00, 1.28500000e-01, 5.42241022e-01,
1.87100000e+01, 4.13300000e+00, 1.20000000e+00, 4.48904647e+00,
4.19768669e+00, 2.98763027e+00, 3.82700000e-01, 8.78127282e-01,
1.20900000e-01, 2.07977938e-01, 1.09700000e+00, 3.06825171e+00,
3.18000000e-01, 3.51379844e-01, 7.61000000e-01, 1.06690522e+00,
4.31626500e+06, 6.46342400e+05, 1.47089700e+05, 4.16615200e+04,
1.35907700e+04, 4.90575000e+03, 1.91274600e+03, 7.92604300e+02,
3.44806500e+02, 1.55899900e+02, 7.22309100e+01, 3.27250600e+01,
1.56676200e+01, 7.50348300e+00, 3.31222300e+00, 1.55847100e+00,
6.83914000e-01, 1.46757000e-01, 7.05830000e-02, 3.34945134e+01,
2.33247270e+01, 2.12339771e+01, 2.14171129e+01, 2.40769159e+01,
2.93496851e+01, 3.70729071e+01, 4.48088389e+01, 4.67322128e+01,
3.55940807e+01, 1.46257769e+01, 2.55879631e+00, 1.80958542e+00,
1.01824838e+00, 8.87158080e-02, -1.36379650e-03, 1.52004229e-05,
-1.49762710e-05, 1.10710707e-05, -1.72257137e+01, -1.20366858e+01,
-1.09869947e+01, -1.10953232e+01, -1.25265848e+01, -1.53960383e+01,
-1.97837700e+01, -2.48244728e+01, -2.81168176e+01, -2.56542315e+01,
-1.43762959e+01, 1.59327273e+00, 1.02474583e+01, 5.58014937e+00,
5.23411006e-01, -2.08304502e-02, -8.36021508e-05, -2.91737148e-04,
1.51188993e-04, 4.06974987e+00, 2.82377954e+00, 2.56335164e+00,
2.59496628e+00, 2.93712527e+00, 3.63360109e+00, 4.71691163e+00,
6.08229856e+00, 7.23016411e+00, 7.30884938e+00, 4.66206211e+00,
-8.89338367e-01, -6.14467956e+00, -5.11798675e+00, 9.24740638e-01,
2.54028507e+00, 7.40912666e-01, 1.14994219e-02, -2.43128278e-03,
-1.27064278e+00, -8.41147338e-01, -7.30662972e-01, -7.53224544e-01,
-8.48714279e-01, -1.05199880e+00, -1.36608599e+00, -1.76287110e+00,
-2.09742961e+00, -2.12708082e+00, -1.36165834e+00, 2.59001807e-01,
1.88145027e+00, 1.63945063e+00, -3.64546653e-01, -1.05745761e+00,
-5.99488889e-01, 2.36851996e-01, 2.59769283e-01, -1.73934259e+00,
-1.25609444e+00, -1.14306442e+00, -1.18601190e+00, -1.29802068e+00,
-1.69345006e+00, -2.04378565e+00, -2.90211584e+00, -3.04106788e+00,
-3.68757391e+00, -1.64820245e+00, -2.83157581e-01, 3.95648045e+00,
1.76349591e+00, 3.27021164e-01, -3.70481160e+00, 6.75559356e-01,
1.15552472e+00, -3.82200882e-01, -1.73717388e+00, -1.25452828e+00,
-1.13671832e+00, -1.17243305e+00, -1.27770942e+00, -1.67482460e+00,
-2.00826526e+00, -2.87758591e+00, -2.98752509e+00, -3.68613644e+00,
-1.61131696e+00, -3.34435684e-01, 4.16499181e+00, 2.22227193e+00,
-7.58648631e-01, -4.60727418e+00, 3.02080367e+00, -1.72340269e-01,
-3.27957661e-01, 3.14490000e-02, 1.88677518e-01, 1.77456900e+04,
4.20072100e+03, 1.36442900e+03, 5.22080600e+02, 2.21459500e+02,
1.00909600e+02, 4.84011500e+01, 2.39853600e+01, 1.21825000e+01,
6.24229800e+00, 3.11094400e+00, 1.50995800e+00, 7.10845000e-01,
2.72598000e-01, 1.03972000e-01, 7.34946211e+01, 5.58413050e+01,
6.30929152e+01, 7.49518965e+01, 8.66102976e+01, 9.02155699e+01,
7.82231433e+01, 5.14816685e+01, 2.19625257e+01, 4.45397627e+00,
2.52681661e-01, -1.14556154e-02, -3.53030885e-03, -1.82157444e-04,
5.16715017e-06, -2.56960383e+01, -1.97340578e+01, -2.23317100e+01,
-2.67074777e+01, -3.12544887e+01, -3.34512337e+01, -3.01044569e+01,
-2.08603854e+01, -9.09166955e+00, 9.70531476e-01, 4.04631205e+00,
2.22444798e+00, 5.36573947e-01, 2.59473308e-02, -5.45193682e-04,
7.52207684e+00, 6.07206786e+00, 6.80164369e+00, 8.14687540e+00,
9.54509995e+00, 1.02424474e+01, 9.24118953e+00, 6.44368521e+00,
2.82238384e+00, -4.33434821e-01, -1.47839165e+00, -8.77479444e-01,
-1.97086115e-01, 1.73176537e-01, 1.37808011e-01, 8.67657705e+00,
6.75387489e+00, 7.60301039e+00, 9.20513005e+00, 1.06504851e+01,
1.15676943e+01, 1.02832864e+01, 7.33996142e+00, 3.10138501e+00,
-3.88459349e-01, -1.76742966e+00, -9.83316404e-01, -2.00998444e-01,
2.78854232e-01, 1.11515713e-01, 1.88667072e+01, 1.39145234e+01,
1.67796475e+01, 1.87505778e+01, 2.36452652e+01, 2.33948787e+01,
2.32702741e+01, 1.42740260e+01, 8.54586456e+00, -2.06354050e+00,
-4.46428476e+00, -3.20549237e+00, 1.68637698e+00, 4.79044256e-01,
-1.66327877e-01, 3.81660000e-02, 4.92130408e-02, 1.14884000e+02,
3.38878000e+01, 1.23730000e+01, 4.99925000e+00, 2.07043000e+00,
8.28183000e-01, 3.07547000e-01, 3.77639570e+01, 3.22529469e+01,
2.12931119e+01, 1.05466665e+01, 3.37921324e+00, 6.91908027e-01,
8.04428567e-02, -5.29878506e+01, -4.52935072e+01, -3.07251158e+01,
-1.53848438e+01, -3.82363621e+00, 1.26283043e-01, 2.40502346e-01,
7.11058250e+01, 6.13713699e+01, 4.34326050e+01, 2.11220904e+01,
1.32102889e+00, -1.70340908e+00, -1.69230443e-02, 9.94550000e-02,
4.59595718e-02, 3.22430000e+00, 2.74783157e+01, 7.75800000e-01,
1.11415947e+00, 2.05150000e+00, 9.48671353e+00, 3.38700000e+01,
5.09500000e+00, 1.15900000e+00, 9.05626514e-01, 1.63118079e+00,
2.40483635e+00, 3.25800000e-01, 1.08950276e+00, 1.02700000e-01,
4.58345379e-01, 1.40700000e+00, 4.47045795e+00, 3.88000000e-01,
8.93354953e-01, 1.05700000e+00, 2.87515071e+00]),
'_ecpbas': array([], shape=(0, 8), dtype=float64),
'stdout': <ipykernel.iostream.OutStream at 0x7fb409de9908>,
'groupname': 'C1',
'topgroup': 'C1',
'symm_orb': None,
'irrep_id': None,
'irrep_name': None,
'_symm_orig': None,
'_symm_axes': None,
'_nelectron': None,
'_nao': None,
'_enuc': None,
'_atom': [('Fe', [0.0, 0.0, 0.0]),
('C', [-1.3483195898771716, -1.8558036509029943, -3.1142686532832218]),
('C', [1.3483195898771716, -1.8558036509029943, -3.1142686532832218]),
('C', [2.181626449848253, 0.7088532768594759, -3.1142686532832218]),
('C', [0.0, 2.2938988583609126, -3.1142686532832218]),
('C', [-2.181626449848253, 0.7088532768594759, -3.1142686532832218]),
('H', [-2.5467725597195865, -3.505332356952965, -3.095764455071481]),
('H', [2.5467725597195865, -3.505332356952965, -3.095764455071481]),
('H', [4.120765133118442, 1.3389182024074604, -3.095764455071481]),
('H', [0.0, 4.332830198817134, -3.095764455071481]),
('H', [-4.120765133118442, 1.3389182024074604, -3.095764455071481]),
('C', [-1.3483195898771716, -1.8558036509029943, 3.1142686532832218]),
('C', [-2.181626449848253, 0.7088532768594759, 3.1142686532832218]),
('C', [0.0, 2.2938988583609126, 3.1142686532832218]),
('C', [2.181626449848253, 0.7088532768594759, 3.1142686532832218]),
('C', [1.3483195898771716, -1.8558036509029943, 3.1142686532832218]),
('H', [-2.5467725597195865, -3.505332356952965, 3.095764455071481]),
('H', [-4.120765133118442, 1.3389182024074604, 3.095764455071481]),
('H', [0.0, 4.332830198817134, 3.095764455071481]),
('H', [4.120765133118442, 1.3389182024074604, 3.095764455071481]),
('H', [2.5467725597195865, -3.505332356952965, 3.095764455071481])],
'_basis': {'C': [[0,
[8236.0, 0.0006772, -0.0001445],
[1235.0, 0.0042785, -0.0009156],
[280.8, 0.0213576, -0.0046031],
[79.27, 0.0821858, -0.0182284],
[25.59, 0.2350715, -0.055869],
[8.997, 0.4342613, -0.1269889],
[3.319, 0.3457333, -0.170105],
[0.3643, -0.0089547, 0.598676]],
[0, [0.9059, 1.0]],
[0, [0.1285, 1.0]],
[1, [18.71, 0.0140738], [4.133, 0.0869016], [1.2, 0.2902016]],
[1, [0.3827, 1.0]],
[1, [0.1209, 1.0]],
[2, [1.097, 1.0]],
[2, [0.318, 1.0]],
[3, [0.761, 1.0]]],
'Fe': [[0,
[4316265.0, 0.00014, -7.2e-05, 1.7e-05, -4e-06, -7e-06, -1.2e-05],
[646342.4, 0.000405, -0.000209, 4.9e-05, -1.1e-05, -2.1e-05, -3.6e-05],
[147089.7, 0.001119, -0.000579, 0.000135, -2.9e-05, -5.8e-05, -9.9e-05],
[41661.52, 0.002907, -0.001506, 0.000352, -7.7e-05, -0.000155, -0.000263],
[13590.77, 0.007571, -0.003939, 0.000923, -0.000201, -0.000393, -0.000664],
[4905.75, 0.019818, -0.010396, 0.002452, -0.000535, -0.001101, -0.001869],
[1912.746, 0.050734, -0.027074, 0.006451, -0.001408, -0.002693, -0.004542],
[792.6043, 0.118729, -0.065777, 0.016106, -0.003518, -0.007404, -0.012601],
[344.8065, 0.231164, -0.139082, 0.035742, -0.007814, -0.014484, -0.024423],
[155.8999, 0.319322, -0.23015, 0.065528, -0.014372, -0.031853, -0.054652],
[72.23091, 0.233648, -0.229663, 0.07443, -0.016383, -0.025352, -0.042541],
[32.72506, 0.074022, 0.046091, -0.025711, 0.005643, -0.007887, -0.015989],
[15.66762, 0.090952, 0.515051, -0.308645, 0.071221, 0.19147, 0.345964],
[7.503483, 0.088898, 0.487175, -0.446544, 0.1078, 0.148242, 0.320641],
[3.312223, 0.014302, 0.08438, 0.148985, -0.044262, 0.050761, -0.202125],
[1.558471,
-0.000387,
-0.005911,
0.720395,
-0.225999,
-1.012246,
-2.160673],
[0.683914, 8e-06, -4.4e-05, 0.389698, -0.237628, 0.342339, 2.627486],
[0.146757, -2.5e-05, -0.000487, 0.019184, 0.29778, 1.857266, -0.475452],
[0.070583, 3.2e-05, 0.000437, -0.007023, 0.565497, -1.063679, -1.566613]],
[0, [0.031449, 1.0]],
[1,
[17745.69, 0.000123, -4.3e-05, 9e-06, 1.4e-05, 3e-05],
[4200.721, 0.000566, -0.0002, 4.4e-05, 6.6e-05, 0.000134],
[1364.429, 0.002608, -0.000923, 0.000201, 0.000303, 0.000659],
[522.0806, 0.010295, -0.003668, 0.0008, 0.001219, 0.002447],
[221.4595, 0.034751, -0.012539, 0.002738, 0.00412, 0.009014],
[100.9096, 0.09669, -0.035848, 0.007848, 0.011953, 0.023823],
[48.40115, 0.21003, -0.080822, 0.017739, 0.02662, 0.059364],
[23.98536, 0.332457, -0.134697, 0.029749, 0.045699, 0.08758],
[12.1825, 0.330771, -0.136912, 0.030389, 0.045033, 0.122286],
[6.242298, 0.154733, 0.033713, -0.010765, -0.013011, -0.068112],
[3.110944, 0.020964, 0.33567, -0.087689, -0.141375, -0.351907],
[1.509958, -0.002346, 0.455496, -0.12847, -0.194148, -0.623706],
[0.710845, -0.001854, 0.28176, -0.073996, -0.10177, 0.841448],
[0.272598, -0.000317, 0.04515, 0.215455, 0.467864, 0.79207],
[0.103972, 3e-05, -0.003165, 0.572005, 0.624218, -0.91751]],
[1, [0.038166, 1.0]],
[2,
[114.884, 0.003532, -0.003896, 0.005703],
[33.8878, 0.02555, -0.028207, 0.041691],
[12.373, 0.098357, -0.111573, 0.172043],
[4.99925, 0.237919, -0.272839, 0.408607],
[2.07043, 0.356539, -0.317152, 0.119525],
[0.828183, 0.362848, 0.052062, -0.766038],
[0.307547, 0.238803, 0.561269, -0.043081]],
[2, [0.099455, 1.0]],
[3, [3.2243, 1.0]],
[3, [0.7758, 1.0]],
[4, [2.0515, 1.0]]],
'H': [[0, [33.87, 0.0060773], [5.095, 0.0453176], [1.159, 0.2028364]],
[0, [0.3258, 1.0]],
[0, [0.1027, 1.0]],
[1, [1.407, 1.0]],
[1, [0.388, 1.0]],
[2, [1.057, 1.0]]]},
'_ecp': {},
'_built': True,
'_pseudo': {},
'_keys': {'_atm',
'_atom',
'_bas',
'_basis',
'_built',
'_ecp',
'_ecpbas',
'_enuc',
'_env',
'_nao',
'_nelectron',
'_pseudo',
'_symm_axes',
'_symm_orig',
'atom',
'basis',
'cart',
'charge',
'ecp',
'groupname',
'incore_anyway',
'irrep_id',
'irrep_name',
'max_memory',
'nucmod',
'nucprop',
'output',
'spin',
'stdout',
'symm_orb',
'symmetry',
'symmetry_subgroup',
'topgroup',
'unit',
'verbose'}}
```python
mf = scf.ROHF(mol)
```
```python
mf.kernel()
```
```python
from pyscf.mcscf import avas
# See also 43-dmet_cas.py and function gto.mole.search_ao_label for the rules
# of "ao_labels" in the following
ao_labels = ['Fe 3d', 'C 2pz']
norb, ne_act, orbs = avas.avas(mf, ao_labels, canonicalize=False)
```
|
f4cdc57462b3eaa4512d73338df43ac45660c817
| 107,686 |
ipynb
|
Jupyter Notebook
|
Resource_estimation.ipynb
|
PabloAMC/TFermion
|
ed313a7d9cae0c4ca232732bed046f56bc8594a2
|
[
"Apache-2.0"
] | 4 |
2021-12-02T09:13:16.000Z
|
2022-01-25T10:43:50.000Z
|
Resource_estimation.ipynb
|
PabloAMC/TFermion
|
ed313a7d9cae0c4ca232732bed046f56bc8594a2
|
[
"Apache-2.0"
] | 3 |
2021-12-21T14:22:57.000Z
|
2022-02-05T18:35:16.000Z
|
Resource_estimation.ipynb
|
PabloAMC/TFermion
|
ed313a7d9cae0c4ca232732bed046f56bc8594a2
|
[
"Apache-2.0"
] | null | null | null | 46.019658 | 1,025 | 0.494493 | true | 35,968 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.874077 | 0.746139 | 0.652183 |
__label__yue_Hant
| 0.151888 | 0.353571 |
To generate a presentation from these slides, run:
`jupyter nbconvert PRC-presentation.ipynb --to slides --post serve SlidesExporter.reveal_theme=serif SlidesExporter.reveal_scroll=True SlidesExporter.reveal_transition=none`
# PRC: Implementing a Novel Machine Learning Method in Julia
## John Waczak
## May 4 2021
# Outline
1. Problem Description
2. Implementation Details
3. Model Application
4. Next Steps
5. What Worked and What Didn't
6. Lessons Learned
7. Topics Suggestions
# 1. Problem Description
- **Abstract Problem**: Machine learning, particularly deep neural networks, tend to be black box solutions. It's hard to understand what exactly is happening to your data.
- **Specific Research Problem**: Using frequency decomposition of 64 electrode EEG measurements to predict blinking, specifically, eye aspect ratio (EAR)
- **Solution**: Develop a new neural network layer based on a simple binning procedure
Partition the domain of your function into $N$ bins with $N+1$ bin edges. Model a bin by a smooth step function:
\begin{equation}
B(x) = \frac{1}{2}\left(\tanh(\alpha(x-b_l))-\tanh(\alpha(x-b_r)) \right)
\end{equation}
where $\alpha$ controls the steepness (a hyperparameter) and $b_l$ and $b_r$ are the left and right bin edges, respectively.
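As a quick illustration (a minimal numpy sketch, not the actual Julia layer), $B(x)$ for a single bin can be plotted directly to see how $\alpha$ sharpens the edges:
```python
import numpy as np
import matplotlib.pyplot as plt

def smooth_bin(x, b_l, b_r, alpha):
    """Smooth indicator of the bin [b_l, b_r]; larger alpha -> sharper edges."""
    return 0.5 * (np.tanh(alpha * (x - b_l)) - np.tanh(alpha * (x - b_r)))

x = np.linspace(-1.0, 2.0, 500)
for alpha in (2, 10, 50):
    plt.plot(x, smooth_bin(x, 0.0, 1.0, alpha), label=f"alpha={alpha}")
plt.legend()
plt.show()
```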
Action of the layer is as follows (figure omitted):
# 2. Implementation Details
**Language of choice:** `Julia`
**Why?** The neural network library in Julia, `Flux.jl`, is implemented **100%** in Julia:
Popular libraries like tensorflow are built on top of lower level C++ implementations. This makes it harder to implement new models.
## Summary:
- Binning layer implemented in Julia by extending `Flux.jl`
- Code version controlled via `github` and `gitlab`
- Package environment managed via Julia, i.e. we version control the `Project.toml` and `Manifest.toml` files. This is similar to the Conda environments we talked about in class.
- Code organized into standard package structure via templates from `PkgTemplates.jl`
- Added fancy docstrings to allow for `help` lookup at the REPL
- Began writing tests that are stored in `/tests/`
- CI/CD implemented via Travis CI by adding a simple `.travis.yml` file
## Easy package install from github
## Environment Management
## Help from fancy docstrings
## Running Tests
## CI/CD via Travis
# 3. Model Application
**Demo Problem**: bin optimization for maintaining integral.
- Sample sine wave from $x=0$ to $x=10$ with $1000$ points
- Use `DomainBinner` to bin the 1000 points down to 20
- Optimize bin edges to maintain the value of the integral
**Problem 2**: Predicting Eye Aspect Ratio (EAR) value from binned EEG frequency data.
**Model**: Binning layer to reduce 257 frequencies down to 100. Output of binning layer connected to a dense layer with 100 nodes. The dense nodes are then connected to estimate the EAR value
### Results so far
- The `DomainBinner` is able to function together with the `Dense` layer *out of the box* via the `Chain()` function
- `Flux` is able to track the gradients as expected and update the bin edges
- A single electrode is insufficient to predict the EAR value (as expected)
Output of evaluation scripts is captured and converted into nice human-readable html/markdown documents using `Weave.jl`.
# 4. Next Steps
- Create Slurm scripts to train models on Europa
- Train more reliable models by utilizing multiple electrodes simultaneously
- Perform a hyperparameter optimization (number of bins, number of nodes in the hidden layer, etc...)
- Apply binning layer to frequency dimension of Hyper-Spectral Images
- Explore using tensorboard for live tracking of model convergence
- Optimize the code for performance (Julia is column-major, unlike Python, so I should probably transpose my data for faster training...)
- If this actually proves useful, submit a PR and add the layer to the `Flux` ecosystem.
# 5. How did you start? What worked? What didn't work?
- Started development of layer via notebooks
- The documentation for `Flux.jl` was super helpful for making a new layer that works with the package
- Had issues early on trying to get the auto-diff to work on batches of data.
- Had trouble coming up with a simple toy problem to test the model on. Eventually I was able to settle on the integral test.
# 6. Lessons Learned
The two big takeaways for me were:
- I'm finally starting to appreciate the value of writing tests for everything. This was super helpful when I was trying to get the auto-diff to work on batches of data. It also helped me catch when I made code-breaking changes (specifically array shape related issues).
- Using new environments for new projects. This was super helpful for verifying that the code works across multiple machines.
# 7. Topic Suggestions
- The version control tutorials were super helpful. It would be nice to add some additional content for the proper way to contribute to larger codes you don't own. Maybe you could add forking and submitting pull requests to the git homework assignment.
- I would personally be interested in a lecture on containers. I've had plenty of issues trying to work across multiple operating systems and I think this could help.
```julia
```
|
d696dc3ce5acf89afdd94aa69cbabf18b8525e75
| 9,851 |
ipynb
|
Jupyter Notebook
|
notebooks/PRC-Presentation.ipynb
|
john-waczak/Binning.jl
|
71da7029a7cd1467e1874ec66a9c923ec55ae487
|
[
"MIT"
] | null | null | null |
notebooks/PRC-Presentation.ipynb
|
john-waczak/Binning.jl
|
71da7029a7cd1467e1874ec66a9c923ec55ae487
|
[
"MIT"
] | null | null | null |
notebooks/PRC-Presentation.ipynb
|
john-waczak/Binning.jl
|
71da7029a7cd1467e1874ec66a9c923ec55ae487
|
[
"MIT"
] | null | null | null | 28.636628 | 280 | 0.591717 | true | 1,236 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.752013 | 0.855851 | 0.643611 |
__label__eng_Latn
| 0.995621 | 0.333654 |
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('bmh')
import linearsolve as ls
%matplotlib inline
```
# New-Keynesian Model
This program verifies the math underlying the interactive simulation of the new-Keynesian model: https://www.briancjenkins.com/simulations/nk.html
## Equilibrium Conditions
The equilibrium conditions of the new-Keynesian model:
\begin{align}
y_{t} & = E_ty_{t+1} - \frac{1}{\sigma} r_t + g_t\\
\pi_t & = \beta E_t\pi_{t+1} + \kappa y_t + u_t\\
i_t & = \phi_{\pi} \pi_t + \phi_y y_t + v_t\\
r_t & = i_t - E_t \pi_{t+1}\\
g_{t+1} & = \rho_g g_{t} + \epsilon^g_{t+1}\\
u_{t+1} & = \rho_u u_{t} + \epsilon^u_{t+1}\\
v_{t+1} & = \rho_v v_{t} + \epsilon^v_{t+1}
\end{align}
## Analytic Solution
The model's endogenous variables as functions of exogenous state variables:
\begin{align}
y_{t} & = a_1g_t + a_2u_t + a_3v_t\\
\pi_{t} & = b_1g_t + b_2u_t + b_3v_t\\
i_{t} & = c_1g_t + c_2u_t + c_3v_t\\
r_{t} & = d_1g_t + d_2u_t + d_3v_t
\end{align}
where:
\begin{align}
a_1 & = \frac{1-\beta\rho_g}{(1-\beta\rho_g)(1-\rho_g+\sigma^{-1}\phi_y)+ \sigma^{-1}\kappa(\phi_{\pi}-\rho_g)}\\
a_2 & = -\frac{\sigma^{-1}(\phi_{\pi} - \rho_u)}{(1-\beta\rho_u)(1-\rho_u+\sigma^{-1}\phi_y)+ \sigma^{-1}\kappa(\phi_{\pi}-\rho_u)}\\
a_3 & = -\frac{\sigma^{-1}(1-\beta\rho_v)}{(1-\beta\rho_v)(1-\rho_v+\sigma^{-1}\phi_y)+ \sigma^{-1}\kappa(\phi_{\pi}-\rho_v)}
\end{align}
and:
\begin{align}
b_1 & = \frac{\kappa}{(1-\beta\rho_g)(1-\rho_g+\sigma^{-1}\phi_y)+ \sigma^{-1}\kappa(\phi_{\pi}-\rho_g)}\\
b_2 & = \frac{1-\rho_u+\sigma^{-1}\phi_y}{(1-\beta\rho_u)(1-\rho_u+\sigma^{-1}\phi_y)+ \sigma^{-1}\kappa(\phi_{\pi}-\rho_u)}\\
b_3 & = -\frac{\sigma^{-1}\kappa}{(1-\beta\rho_v)(1-\rho_v+\sigma^{-1}\phi_y)+ \sigma^{-1}\kappa(\phi_{\pi}-\rho_v)}\\
\end{align}
and:
\begin{align}
c_1 & = \phi_ya_1 + \phi_{\pi}b_1\\
c_2 & = \phi_ya_2 + \phi_{\pi}b_2\\
c_3 & = 1+ \phi_ya_3 + \phi_{\pi}b_3\\
\end{align}
and:
\begin{align}
d_1 & = c_1 - \rho_g b_1\\
d_2 & = c_2 - \rho_u b_2\\
d_3 & = c_3 - \rho_v b_3\\
\end{align}
## Compute Solution with `linearsolve`
```python
# Input model parameters
beta = np.exp(-2/100)
sigma= 1
kappa= 0.25
phi_pi= 1.5
phi_y = 0.5
rho_g = 0.25
rho_u = 0.35
rho_v = 0.5
parameters=pd.Series()
parameters.beta=beta
parameters.sigma=sigma
parameters.kappa=kappa
parameters.phi_pi=phi_pi
parameters.phi_y=phi_y
parameters.rho_g=rho_g
parameters.rho_u=rho_u
parameters.rho_v=rho_v
def equilibrium_equations(variables_forward,variables_current,parameters):
# Parameters
p = parameters
# Variables
fwd = variables_forward
cur = variables_current
# Euler equation
euler_eqn = fwd.y -1/p.sigma*cur.r + cur.g - cur.y
# NK Phillips curve
phillips_curve = p.beta*fwd.pi + p.kappa*cur.y + cur.u - cur.pi
# Interest rate rule for monetary policy
interest_rule = p.phi_y*cur.y+p.phi_pi*cur.pi + cur.v - cur.i
# Fisher equation
fisher_eqn = cur.i - fwd.pi - cur.r
# Exogenous demand
g_proc = p.rho_g*cur.g - fwd.g
# Exogenous inflation
u_proc = p.rho_u*cur.u - fwd.u
# Exogenous monetary policy
v_proc = p.rho_v*cur.v - fwd.v
# Stack equilibrium conditions into a numpy array
return np.array([
euler_eqn,
phillips_curve,
interest_rule,
fisher_eqn,
g_proc,
u_proc,
v_proc
])
# Initialize the nk
nk = ls.model(equilibrium_equations,
nstates=3,
varNames=['g','u','v','i','r','y','pi'],
parameters=parameters)
# Set the steady state of the nk
nk.set_ss([0,0,0,0,0,0,0])
# Find the log-linear approximation around the non-stochastic steady state
nk.approximate_and_solve(loglinear=False)
# Solve the nk
nk.solve_klein(nk.a,nk.b)
```
## Compute Solution Directly
```python
a1=(1-beta*rho_g)/((1-beta*rho_g)*(1-rho_g+phi_y/sigma)+kappa/sigma*(phi_pi-rho_g))
a2=-(phi_pi-rho_u)/sigma/((1-beta*rho_u)*(1-rho_u+phi_y/sigma)+kappa/sigma*(phi_pi-rho_u))
a3=-(1-beta*rho_v)/sigma/((1-beta*rho_v)*(1-rho_v+phi_y/sigma)+kappa/sigma*(phi_pi-rho_v))
b1=kappa/((1-beta*rho_g)*(1-rho_g+phi_y/sigma)+kappa/sigma*(phi_pi-rho_g))
b2=(1-rho_u+phi_y/sigma)/((1-beta*rho_u)*(1-rho_u+phi_y/sigma)+kappa/sigma*(phi_pi-rho_u))
b3=-kappa/sigma/((1-beta*rho_v)*(1-rho_v+phi_y/sigma)+kappa/sigma*(phi_pi-rho_v))
c1=phi_y*a1+phi_pi*b1
c2=phi_y*a2+phi_pi*b2
c3=phi_y*a3+phi_pi*b3+1
d1=c1-rho_g*b1
d2=c2-rho_u*b2
d3=c3-rho_v*b3
```
## Compare Analytic and Numeric Solutions
```python
print('verify a1,a2, a3:')
print(a1-nk.f[2,0])
print(a2-nk.f[2,1])
print(a3-nk.f[2,2])
print('\n')
print('verify b1,b2, b3:')
print(b1-nk.f[3,0])
print(b2-nk.f[3,1])
print(b3-nk.f[3,2])
print('\n')
print('verify c1,c2, c3:')
print(c1-nk.f[0,0])
print(c2-nk.f[0,1])
print(c3-nk.f[0,2])
print('\n')
print('verify d1,d2, d3:')
print(d1-nk.f[1,0])
print(d2-nk.f[1,1])
print(d3-nk.f[1,2])
```
verify a1,a2, a3:
-6.661338147750939e-16
2.220446049250313e-15
9.992007221626409e-16
verify b1,b2, b3:
3.3306690738754696e-16
-6.661338147750939e-16
-5.551115123125783e-17
verify c1,c2, c3:
5.551115123125783e-16
-1.7763568394002505e-15
8.326672684688674e-17
verify d1,d2, d3:
5.551115123125783e-16
-1.7763568394002505e-15
-2.7755575615628914e-16
## Plot Simulations
```python
# Compute impulse responses and plot
nk.impulse(T=15,t0=1,shocks=[0.2,0.5,1])
# Create the figure and axes
fig = plt.figure(figsize=(12,12))
ax1 = fig.add_subplot(3,1,1)
ax2 = fig.add_subplot(3,1,2)
ax3 = fig.add_subplot(3,1,3)
# Plot commands
nk.irs['e_g'][['y','pi','r','i']].plot(lw='5',alpha=0.5,grid=True,title='Demand shock',ax=ax1).legend(loc='upper right',ncol=5)
nk.irs['e_u'][['y','pi','r','i']].plot(lw='5',alpha=0.5,grid=True,title='Inflation shock',ax=ax2).legend(loc='upper right',ncol=5)
nk.irs['e_v'][['y','pi','r','i']].plot(lw='5',alpha=0.5,grid=True,title='Interest rate shock',ax=ax3).legend(loc='upper right',ncol=5)
fig.tight_layout()
```
```python
```
|
4202beecd48ba754750c9edebc6d4a70706ebece
| 131,147 |
ipynb
|
Jupyter Notebook
|
python/nk-simulation-verification.ipynb
|
letsgoexploring/dynamic-models
|
c1f49cf05c76bd4a60c075a1dbf552cb2ca38882
|
[
"MIT"
] | 1 |
2021-05-17T11:20:43.000Z
|
2021-05-17T11:20:43.000Z
|
python/nk-simulation-verification.ipynb
|
letsgoexploring/dynamic-models
|
c1f49cf05c76bd4a60c075a1dbf552cb2ca38882
|
[
"MIT"
] | null | null | null |
python/nk-simulation-verification.ipynb
|
letsgoexploring/dynamic-models
|
c1f49cf05c76bd4a60c075a1dbf552cb2ca38882
|
[
"MIT"
] | 1 |
2021-11-04T10:17:09.000Z
|
2021-11-04T10:17:09.000Z
| 375.77937 | 121,144 | 0.931382 | true | 2,405 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.894789 | 0.795658 | 0.711946 |
__label__eng_Latn
| 0.112736 | 0.492422 |
# Custom Functions for NRPy+
## Author: Patrick Nelson
## Introduction
Using SymPy, NRPy+ is able to provide a lot of different functions out of the box, covering many different use cases. However, there are some more obscure functions that we can't access; we may also want to use a different method than the SymPy function uses or use data from multiple different points.
The code below first tells SymPy that `nrpyMyFunc` should be treated as a SymPy function; this name is what should be used in the Python code. Then, a dictionary is imported from `outputC` so we can add an entry to it. The key for the entry should be identical to the name of the SymPy function; the value (`myFunc_Ccode` in the example below) should be the name of the function in the C code.
```python
import sympy as sp
nrpyMyFunc = sp.Function('nrpyMyFunc')
from outputC import custom_functions_for_SymPy_ccode
custom_functions_for_SymPy_ccode["nrpyMyFunc"] = "myFunc_Ccode"
```
The above method is not restricted to functions; macros can be used as well. Additionally, SymPy interprets the argument of the function quite generously; the argument of the SymPy function can be more SymPy code, a string, or even nothing (possibly more!). Consider the following examples; the Python code will be on the left and the resulting C code on the right:
* `x = nrpyMyFunc(y)` $\rightarrow$ `x = myFunc_Ccode(y)`
* Here, the Python symbol y can be any SymPy expression; the appropriate corresponding expression will be substituted in the resulting C code, and CSE will be applied as well.
* `x = nrpyMyFunc()` $\rightarrow$ `x = myFunc_Ccode()`
* `x = nrpyMyFunc` $\rightarrow$ `x = myFunc_Ccode`
* `x = nrpyMyFunc("SOME_GF")` $\rightarrow$ `x = myFunc_Ccode(SOME_GF)`
As you can see, we have considerable flexibility in how the C code turns out.
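A minimal runnable sketch of the first pattern (this only *generates* C source, so `myFunc_Ccode` does not need to exist until compile time):
```python
import sympy as sp
from outputC import outputC, custom_functions_for_SymPy_ccode

nrpyMyFunc = sp.Function('nrpyMyFunc')
custom_functions_for_SymPy_ccode["nrpyMyFunc"] = "myFunc_Ccode"

y = sp.Symbol('y')
# Produces C code along the lines of: x = myFunc_Ccode(y);
outputC(nrpyMyFunc(y), "x")
```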
### Obscure functions
Most functions that we may need to use (such as trigonometric or logarithmic functions) are already included in SymPy. However, there are myriad other types of functions that can show up in various problems, such as the dilogarithm function, which is needed for the [Split Monopole initial data in `GiRaFFE_NRPy`](Tutorial-GiRaFFEfood_NRPy-Split_Monopole.ipynb). In this case, we were able to use a preexisting library, but this is not necessary in general; it is just as possible to add a function that you have written yourself.
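For instance, a dilogarithm from an external library could be registered like this (a sketch: `gsl_sf_dilog` is the dilogarithm routine provided by the GNU Scientific Library, and whether it is the exact routine used for the Split Monopole data is an assumption here):
```python
import sympy as sp
from outputC import custom_functions_for_SymPy_ccode

nrpyDilog = sp.Function('nrpyDilog')
# The generated C code must then be compiled and linked against GSL.
custom_functions_for_SymPy_ccode["nrpyDilog"] = "gsl_sf_dilog"
```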
### Changing functionality
Sometimes, SymPy may have the function you need, but implemented in a way that is not ideal. This was the case when we wrote [Tutorial-Min_Max_and_Piecewise_Expressions](../Tutorial-Min_Max_and_Piecewise_Expressions.ipynb). The SymPy implementation `sp.Abs` assumed a complex input (even when we specified that the input was real), resulting in a function that was much more computationally expensive than it needed to be and introducing errors into the produced C code. We solved this by adding `nrpyAbs` to the dictionary, pointing to the basic C implementation of the function.
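A sketch of that workaround, mapping the custom function to plain `fabs` from `math.h` (which assumes real-valued doubles):
```python
import sympy as sp
from outputC import custom_functions_for_SymPy_ccode

nrpyAbs = sp.Function('nrpyAbs')
# fabs takes a real double, sidestepping sp.Abs's complex-input handling
custom_functions_for_SymPy_ccode["nrpyAbs"] = "fabs"
```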
### Multiple gridpoints
Operations like interpolation are critical to some applications (`GiRaFFE_NRPy` in particular depends on interpolation for its staggered grids). However, the only context in which data is read from multiple gridpoints in NRPy+ is finite-difference derivatives. For the interpolation of metric gridfunctions required in `GiRaFFE_NRPy`, we handwrote an entire function to replicate a macro in the original `GiRaFFE`, requiring several extra gridfunctions of storage compared to the original code. An alternative would be to port the macro to the `GiRaFFE_NRPy` C code and use the above method to allow SymPy-generated kernels to use it.
A basic interpolator could be written as follows:
```C
#define A0 0.5
#define A1 0.5
#define METRIC(GF_TO_INTERP)   auxevol_gfs[IDX4S(GF_TO_INTERP, i0,i1,i2)]
#define METRICp1(GF_TO_INTERP) auxevol_gfs[IDX4S(GF_TO_INTERP, i0+(flux_dirn==0),i1+(flux_dirn==1),i2+(flux_dirn==2))]
// Function-like macros so the gridfunction argument propagates; equal weights of 0.5 give the average.
#define METRIC_INTERPED(GF_TO_INTERP) (A0*METRIC(GF_TO_INTERP) + A1*METRICp1(GF_TO_INTERP))
```
This computes the average of the values of some gridfunction `GF_TO_INTERP` at the current point and the next one in a given `flux_dirn`, which is a basic interpolation of the value at the cell face.
Naturally, one must take care to use the correct gridfunction macros as defined in the C code. A function could be written to automate this because SymPy expressions can be easily cast to strings and NRPy+'s gridfunction `#define`s are straightforward to generate. For instance, the following could be used:
```python
import os,sys
nrpy_dir_path = os.path.join("..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
from outputC import outputC,custom_functions_for_SymPy_ccode # NRPy+: Core C code output module
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
gammaDD = ixp.register_gridfunctions_for_single_rank2("AUXEVOL","gammaDD","sym01")
alpha = gri.register_gridfunctions("AUXEVOL","alpha")
nrpyMetricInterp = sp.Function('nrpyMetricInterp')
def gf_id_of(gridfunction):
return (str(gridfunction).upper() + "GF")
custom_functions_for_SymPy_ccode["nrpyMetricInterp"] = "METRIC_INTERPED"
x = nrpyMetricInterp(gf_id_of(alpha)) * nrpyMetricInterp(gf_id_of(gammaDD[0][0]))
outputC(x,"x")
```
/*
* Original SymPy expression:
* "x = nrpyMetricInterp(ALPHAGF)*nrpyMetricInterp(GAMMADD00GF)"
*/
{
x = METRIC_INTERPED(ALPHAGF)*METRIC_INTERPED(GAMMADD00GF);
}
|
ac96f4e995bf7286ede342d40c954c4e060d153b
| 7,489 |
ipynb
|
Jupyter Notebook
|
in_progress/Tutorial-NRPy_Custom_Functions.ipynb
|
Harmohit-Singh/nrpytutorial
|
81e6fe09c6882a2d95e1d0ea57f465fc7eda41e1
|
[
"BSD-2-Clause"
] | 66 |
2018-06-26T22:18:09.000Z
|
2022-02-09T21:12:33.000Z
|
in_progress/Tutorial-NRPy_Custom_Functions.ipynb
|
Harmohit-Singh/nrpytutorial
|
81e6fe09c6882a2d95e1d0ea57f465fc7eda41e1
|
[
"BSD-2-Clause"
] | 14 |
2020-02-13T16:09:29.000Z
|
2021-11-12T14:59:59.000Z
|
in_progress/Tutorial-NRPy_Custom_Functions.ipynb
|
Harmohit-Singh/nrpytutorial
|
81e6fe09c6882a2d95e1d0ea57f465fc7eda41e1
|
[
"BSD-2-Clause"
] | 30 |
2019-01-09T09:57:51.000Z
|
2022-03-08T18:45:08.000Z
| 53.492857 | 644 | 0.667646 | true | 1,434 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.812867 | 0.835484 | 0.679137 |
__label__eng_Latn
| 0.992152 | 0.416195 |
# Evolutionary Dynamics
We will now consider how Game Theory can be used to study evolutionary processes. The main difference is that we now consider not two player games but game with an **infinite** population. The strategies will make up a dynamic population that changes over time.
## Reproduction
[Video](https://youtu.be/kBhoG3pjyG0?list=PLnC5h3PY-znxMsG0TRYGOyrnEO-QhVwLb)
Consider a simple model of population growth: let $x(t)$ denote the size of the population at time $t$ and let as assume that the rate of growth is $a$ per population size:
$$\frac{dx}{dt}=ax$$
Note that from here on we will refer to this rate as a **fitness**.
The solution of this differential equation is:
$$x(t)=x_0e^{at}\text{ where }x_0=x(0)$$
```python
import sympy as sym
sym.init_printing()
x = sym.Function('x')
t, a = sym.symbols('t, a')
sym.dsolve(sym.Derivative(x(t), t) - a * x(t), x(t))
```
(This is exponential growth.)
We can also use scipy to solve this differential equation numerically (relevant for more complex dynamics):
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.integrate import odeint
t = np.linspace(0, 10, 100) # Obtain 100 time points
def dx(x, t, a):
"""Define the derivate of x"""
return a * x
```
If $a=10>0$:
```python
a = 10
xs = odeint(func=dx, y0=1, t=t, args=(a,))
plt.plot(xs);
```
If $a=-10<0$:
```python
a = -10
xs = odeint(func=dx, y0=1, t=t, args=(a,))
plt.plot(xs);
```
## Selection
[Video](https://youtu.be/ERbQGLLNGYo?list=PLnC5h3PY-znxMsG0TRYGOyrnEO-QhVwLb)
Reproduction alone is not enough to study evolutionary processes. Let us consider a population made up of two types of individuals:
- $x(t)$ denotes the first type;
- $y(t)$ denotes the second type.
Let us assume the same expressions for the rates as before:
$$\frac{dx}{dt}=ax\qquad\frac{dy}{dt}=by$$
Both of these populations will increase or decrease independently, so there is not much of interest there, **but** if we introduce the following:
$$
\rho(t) = \frac{x(t)}{y(t)}
$$
then $\lim_{t\to\infty}\rho(t)$ indicates which type takes over the population over time.
We have:
$$
\frac{d\rho}{dt} = \frac{\frac{dx}{dt}y - \frac{dy}{dt}x}{y ^ 2} = \frac{xy(a - b)}{y^2}
$$
which gives:
$$
\frac{d\rho}{dt} = (a-b)\rho
$$
which has solution (this is just the same differential equation as the previous section):
$$
\rho(t) = \rho_0e^{(a-b)t}\text{ where }\rho_0=\rho(0)
$$
Note that even if both populations grow, if one grows faster than the other (e.g. $a > b$) then the overall population will grow but one type will take over:
```python
def drho(rho, t, a, b):
"""Define the derivate of x"""
return (a - b) * rho
a, b = 10, 5
rhos = odeint(func=drho, y0=1, t=t, args=(a, b))
plt.plot(rhos);
```
## Selection with constant population size
[Video](https://youtu.be/_bsaV5sq6ZU?list=PLnC5h3PY-znxMsG0TRYGOyrnEO-QhVwLb)
Let us consider the case of $x(t) + y(t)=1$: so the case of a constant population size (choosing a constant of 1 is just a question of scale). For this to be possible, the rates need to be reduced:
$$\frac{dx}{dt}=x(a - \phi)\qquad\frac{dy}{dt}=y(b - \phi)$$
because $x(t) + y(t)=1$:
$$\frac{dx}{dt} + \frac{dy}{dt} = 0$$
also:
$$\frac{dx}{dt} + \frac{dy}{dt} = ax + by - \phi(x + y)= ax + by - \phi$$
thus $\phi=ax+by$ (this corresponds to the average fitness of the population).
Substituting $y=1-x$ we have:
$$\frac{dx}{dt}=x(a - ax-b(1-x))=x(a(1 - x)-b(1-x))$$
giving:
$$\frac{dx}{dt}=x(a-b)(1-x)$$
We do not need to solve this differential equation. There are two stable points:
- $x=0$: no population of first type: no change
- $x=1$: no population of second type: no change
Also:
- $a=b$: if both types have the same fitness: no change
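These fixed points can also be read off symbolically as a quick check:
```python
import sympy as sym

x, a, b = sym.symbols('x, a, b')
# The right hand side x * (a - b) * (1 - x) vanishes exactly at the stable points
sym.solve(x * (a - b) * (1 - x), x)  # returns [0, 1]
```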
```python
def dxy(xy, t, a, b):
"""
Define the derivate of x and y.
It takes `xy` as a vector
"""
x, y = xy
phi = a * x + b * y
return x * (a - phi), y * (b - phi)
a, b = 10, 5
xys = odeint(func=dxy, y0=[.5, .5], t=t, args=(a, b))
plt.plot(xys);
```
```python
a, b = 10, 5
xys = odeint(func=dxy, y0=[1, 0], t=t, args=(a, b))
plt.plot(xys);
```
```python
a, b = 10, 5
xys = odeint(func=dxy, y0=[0, 1], t=t, args=(a, b))
plt.plot(xys);
```
```python
a, b = 5, 5
xys = odeint(func=dxy, y0=[.5, .5], t=t, args=(a, b))
plt.plot(xys);
```
|
c7a0ea7f6eb34279245a37828959a9775374d3d3
| 55,371 |
ipynb
|
Jupyter Notebook
|
nbs/chapters/10-Evolutionary-dynamics.ipynb
|
prokolyvakis/gt
|
e679e5d54d9a98583ad4981411ce505cea31f028
|
[
"MIT"
] | null | null | null |
nbs/chapters/10-Evolutionary-dynamics.ipynb
|
prokolyvakis/gt
|
e679e5d54d9a98583ad4981411ce505cea31f028
|
[
"MIT"
] | null | null | null |
nbs/chapters/10-Evolutionary-dynamics.ipynb
|
prokolyvakis/gt
|
e679e5d54d9a98583ad4981411ce505cea31f028
|
[
"MIT"
] | null | null | null | 132.466507 | 8,468 | 0.883098 | true | 1,474 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.83762 | 0.798187 | 0.668577 |
__label__eng_Latn
| 0.97077 | 0.39166 |
Stability of defection, optimisation of strategies and the limits of memory in the PD.
----------------------
## 2. Stability of defection
```python
import opt_mo
import numpy as np
import sympy as sym
import itertools
import axelrod as axl
import matplotlib.pyplot as plt
```
//anaconda3/envs/opt-mo/lib/python3.6/site-packages/sklearn/externals/joblib/__init__.py:15: DeprecationWarning: sklearn.externals.joblib is deprecated in 0.21 and will be removed in 0.23. Please import this functionality directly from joblib, which can be installed with: pip install joblib. If this warning is raised when loading pickled models, you may need to re-serialize those models with scikit-learn 0.21+.
warnings.warn(msg, category=DeprecationWarning)
```python
from fractions import Fraction
```
```python
from axelrod.action import Action
C, D = Action.C, Action.D
```
```python
import tqdm
```
```python
import matplotlib.transforms as transforms
import matplotlib
```
```python
import warnings; warnings.simplefilter('ignore')
```
```python
import operator
import functools
```
```python
sym.init_printing()
```
```python
font = {"size": 10, "weight": "bold"}
matplotlib.rc("font", **font)
```
Analytical check
----------------
```python
p_1, p_2, p_3, p_4 = sym.symbols("p_1, p_2, p_3, p_4")
q_1, q_2, q_3, q_4 = sym.symbols("q_1, q_2, q_3, q_4")
k_1, k_2, k_3, k_4 = sym.symbols("k_1, k_2, k_3, k_4")
p, q, k = (p_1, p_2, p_3, p_4), (q_1, q_2, q_3, q_4), (k_1, k_2, k_3, k_4)
```
```python
def get_Q_N_derivative(player, opponent):
x = np.array(player)
Q = opt_mo.utility.quadratic_term_numerator(opponent)
c = opt_mo.utility.linear_term_numerator(opponent)
return np.dot(x, Q) + c
```
```python
def get_Q_N(player, opponent):
x = np.array(player)
Q = opt_mo.utility.quadratic_term_numerator(opponent)
c = opt_mo.utility.linear_term_numerator(opponent)
a = opt_mo.utility.constant_term_numerator(opponent)
return np.dot(x, Q.dot(x.T) * 1 / 2) + np.dot(c, x.T) + a
```
```python
def get_Q_D(player, opponent):
x = np.array(player)
Q_bar = opt_mo.utility.quadratic_term_denominator(opponent)
c_bar = opt_mo.utility.linear_term_denominator(opponent)
a_bar = opt_mo.utility.constant_term_denominator(opponent)
return np.dot(x, Q_bar.dot(x.T) * 1 / 2) + np.dot(c_bar, x.T) + a_bar
```
```python
def get_Q_D_derivative(player, opponent):
x = np.array(player)
Q_bar = opt_mo.utility.quadratic_term_denominator(opponent)
c_bar = opt_mo.utility.linear_term_denominator(opponent)
a_bar = opt_mo.utility.constant_term_denominator(opponent)
return np.dot(x, Q_bar) + c_bar
```
**Check quadratic derivative of**
$$\frac{1}{2}pQp^T + cp + a$$
```python
expr = get_Q_N(p, q)
```
```python
diff = [sym.diff(expr, i) for i in p]
```
```python
derivatives = get_Q_N_derivative(p, q)
```
```python
for i in range(4):
assert (diff[i] - derivatives[i]).simplify() == 0
```
**Check derivative of utility**
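Explicitly, writing the utility against opponent $q$ as a ratio of quadratics $N_q(p)/D_q(p)$ (notation introduced here just for this check), the identity being verified is the quotient rule summed over the set of opponents $\mathcal{O}$:
$$\nabla_p \left(\frac{1}{|\mathcal{O}|}\sum_{q\in\mathcal{O}}\frac{N_q(p)}{D_q(p)}\right) = \frac{1}{|\mathcal{O}|}\sum_{q\in\mathcal{O}}\frac{\nabla N_q(p)\, D_q(p) - \nabla D_q(p)\, N_q(p)}{D_q(p)^2}$$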
```python
def get_written_derivative_of_utility(player, opponents):
sums = 0
for opponent in opponents:
numerator = (get_Q_N_derivative(player, opponent) * get_Q_D(player, opponent) -
get_Q_D_derivative(player, opponent) * get_Q_N(player, opponent))
denominator = get_Q_D(player, opponent) ** 2
sums += numerator / denominator
return (sums) * (1 / len(opponents))
```
```python
for seed in range(100):
num_players = 5
np.random.seed(seed)
opponents = [[np.random.random() for _ in range(4)] for _ in range(num_players)]
np.random.seed(seed + 1000)
player = [np.random.random() for _ in range(4)]
written_derivative = get_written_derivative_of_utility(player, opponents)
utility = opt_mo.tournament_utility(p, opponents)
utility_derivative = [sym.diff(utility, i) for i in p]
utility_derivative = [expr.subs({p_1: player[0], p_2: player[1], p_3: player[2], p_4: player[3]}) for expr
in utility_derivative]
differences = written_derivative - utility_derivative
for difference in differences:
assert np.isclose(round(difference, 10), 0)
```
```python
opponents = [q, k]
```
```python
utility = opt_mo.tournament_utility(p, opponents)
derivative_of_utility = [sym.diff(utility, i) for i in p]
```
```python
written_derivative = get_written_derivative_of_utility(p, opponents)
```
```python
for i in tqdm.tqdm(range(4)):
assert (written_derivative[i] - derivative_of_utility[i]).simplify() == 0
```
100%|██████████| 4/4 [02:34<00:00, 24.63s/it]
**Stability of defection**
**Check condition for defection stability**
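At $p=(0, 0, 0, 0)$ the quadratic terms vanish, so, using the same notation as above with linear terms $c_q$, $\bar{c}_q$ and constant terms $a_q$, $\bar{a}_q$ of the numerator and denominator, the gradient reduces to
$$\nabla_p u\Big|_{p=0} = \frac{1}{|\mathcal{O}|}\sum_{q\in\mathcal{O}}\frac{c_q\,\bar{a}_q - \bar{c}_q\,a_q}{\bar{a}_q^2}.$$
When every component of this vector is negative, increasing any cooperation probability lowers the utility, i.e. pure defection is stable; this is exactly the condition searched for below.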
```python
opponents = [q, k]
```
```python
utility = opt_mo.tournament_utility(p, opponents)
```
```python
diff_utility = [sym.diff(utility, i) for i in p]
```
```python
diff_utility_at_zero = [expr.subs({p_1: 0, p_2: 0, p_3: 0, p_4: 0}).expand() for expr in diff_utility]
```
```python
def get_derivate_for_p_zeros(opponents):
sums = 0
for opponent in opponents:
lhs = opt_mo.utility.linear_term_numerator(opponent) * opt_mo.utility.constant_term_denominator(opponent)
rhs = opt_mo.utility.linear_term_denominator(opponent) * opt_mo.utility.constant_term_numerator(opponent)
denominator = opt_mo.utility.constant_term_denominator(opponent) ** 2
sums += (lhs - rhs) / denominator
return (sums) * (1 / len(opponents))
```
```python
expression = get_derivate_for_p_zeros(opponents)
```
```python
for i in tqdm.tqdm(range(4)):
assert (diff_utility_at_zero[i] - expression[i]).simplify() == 0
```
100%|██████████| 4/4 [00:02<00:00, 1.54it/s]
```python
def stackplot(plot, eco, logscale=True):
populations = eco.population_sizes
figure, ax = plt.subplots(figsize=(10, 8))
# figure = ax.get_figure()
figure.patch.set_facecolor('#ffffff')
figure.patch.set_alpha(0.2)
turns = range(len(populations))
pops = [
[populations[iturn][ir] for iturn in turns]
for ir in plot.result_set.ranking
]
ax.stackplot(turns, *pops)
ax.yaxis.tick_left()
ax.yaxis.set_label_position("right")
ax.yaxis.labelpad = 25.0
ax.set_ylim([0.0, 1.0])
#ax.set_xlim([0.0, 10 ** 3])
ax.set_ylabel("Relative population size", fontweight='bold')
ax.set_xlabel("Turn", fontweight='bold')
trans = transforms.blended_transform_factory(ax.transAxes, ax.transData)
ticks = []
for i, n in enumerate(plot.result_set.ranked_names):
x = -0.01
y = (i + 0.5) * 1 / plot.result_set.num_players
if n != 'Defector':
opponent_strings = (n.split(':')[1].replace('[', '').replace(']', '')).split(",")
opponent = [Fraction(float(op)).limit_denominator() for op in opponent_strings]
label = '$q^{(%s)}$'% i
# for p in range(3):
# label += r'\frac{' + str(opponent[p].numerator) + '}{' + str(opponent[p].denominator) + '},'
# label += r'\frac{' + str(opponent[3].numerator) + '}{' + str(opponent[3].denominator) + '})$'
# print(label)
n = label
ax.annotate(
n,
xy=(x, y),
xycoords=trans,
clip_on=False,
va="center",
ha="right",
fontsize=10,
)
ticks.append(y)
ax.set_yticks(ticks)
ax.tick_params(direction="out")
ax.set_yticklabels([])
if logscale:
ax.set_xscale("log")
for tick in ax.yaxis.get_majorticklabels(): # example for xaxis
tick.set_fontsize(20)
plt.tight_layout()
plt.xlim(0, 10 **2)
    # Return the figure so each caller can save it under its own filename.
    return figure
```
```python
for seed in range(2, 10):
np.random.seed(seed)
opponents = [[np.round(np.random.random(), 5) for _ in range(4)] for _ in range(3)]
derivative = get_derivate_for_p_zeros(opponents)
if all([el < 0 for el in derivative]):
print('Found at: %s' % seed)
break
```
Found at: 5
```python
defection_stable_opponents_set = [axl.MemoryOnePlayer(ps, initial=D) for ps in opponents] + [axl.Defector()]
```
```python
tournament = axl.Tournament(defection_stable_opponents_set)
results = tournament.play(progress_bar=False)
eco = axl.Ecosystem(results)
eco.reproduce(500)
```
```python
color = '#ffffff'
```
```python
plot = axl.Plot(results)
p = stackplot(plot, eco)
```
```python
populations = eco.population_sizes
turns = range(len(populations))
pops = [
[populations[iturn][ir] for iturn in turns]
for ir in plot.result_set.ranking
]
```
```python
figure, ax = plt.subplots(figsize=(10, 8))
ax.stackplot(turns, *pops)
ax.yaxis.tick_left()
ax.yaxis.set_label_position("right")
ax.yaxis.labelpad = 25.0
ax.set_ylim([0.0, 1.0])
# ax.set_ylabel("Relative population size", fontweight='bold', color=color)
ax.set_xlabel("Turn", fontweight='bold', color=color)
trans = transforms.blended_transform_factory(ax.transAxes, ax.transData)
ticks = []
for i, n in enumerate(plot.result_set.ranked_names):
x = -0.01
y = (i + 0.5) * 1 / plot.result_set.num_players
if n != 'Defector':
opponent_strings = (n.split(':')[1].replace('[', '').replace(']', '')).split(",")
opponent = [Fraction(float(op)).limit_denominator() for op in opponent_strings]
label = '$q^{(%s)}$'% i
n = label
ax.annotate(
n,
xy=(x, y),
xycoords=trans,
clip_on=False,
va="center",
ha="right",
fontsize=17,
color=color
)
ticks.append(y)
ax.set_yticks(ticks)
ax.tick_params(direction="out")
ax.set_yticklabels([])
ax.set_xscale("log")
ax.spines['bottom'].set_color(color)
ax.spines['top'].set_color(color)
ax.spines['right'].set_color(color)
ax.spines['left'].set_color(color)
ax.tick_params(axis='x', colors=color)
ax.tick_params(axis='y', colors=color)
for tick in ax.yaxis.get_majorticklabels(): # example for xaxis
tick.set_fontsize(20)
plt.tight_layout()
plt.xlim(0, 10 **2);
figure.savefig('/Users/storm/src/talks/talks/2020-02-26-Max-Planck/static/population_defection_takes_over.png',
bbox_inches='tight',
transparent=True, dpi=100)
```
```python
p.savefig('../img/population_defection_takes_over.pdf', facecolor=p.get_facecolor(), edgecolor='none',
          bbox_inches='tight')
```
```python
for seed in range(5000):
np.random.seed(4)
opponents = [[round(np.random.random(), 5) for _ in range(4)] for _ in range(3)]
np.random.seed(seed)
other_opponent =[[np.random.random() for _ in range(4)]]
derivative = get_derivate_for_p_zeros(opponents + other_opponent)
if all([el < 0 for el in derivative]):
print('Found at: %s' % seed)
break
```
```python
defection_stable_opponents_set = [axl.MemoryOnePlayer(ps, initial=C) for ps in opponents]
defection_stable_opponents_set += [axl.Defector()]
```
```python
tournament = axl.Tournament(defection_stable_opponents_set)
results = tournament.play(progress_bar=False)
eco = axl.Ecosystem(results)
eco.reproduce(50000)
```
```python
populations = eco.population_sizes
turns = range(len(populations))
pops = [
[populations[iturn][ir] for iturn in turns]
for ir in plot.result_set.ranking
]
```
```python
figure, ax = plt.subplots(figsize=(10, 8))
ax.stackplot(turns, *pops)
ax.yaxis.tick_left()
ax.yaxis.set_label_position("right")
ax.yaxis.labelpad = 25.0
ax.set_ylim([0.0, 1.0])
# ax.set_ylabel("Relative population size", fontweight='bold', color=color)
ax.set_xlabel("Turn", fontweight='bold', color=color)
trans = transforms.blended_transform_factory(ax.transAxes, ax.transData)
ticks = []
for i, n in enumerate(plot.result_set.ranked_names):
x = -0.01
y = (i + 0.5) * 1 / plot.result_set.num_players
if n != 'Defector':
opponent_strings = (n.split(':')[1].replace('[', '').replace(']', '')).split(",")
opponent = [Fraction(float(op)).limit_denominator() for op in opponent_strings]
label = '$q^{(%s)}$'% i
n = label
ax.annotate(
n,
xy=(x, y),
xycoords=trans,
clip_on=False,
va="center",
ha="right",
fontsize=17,
color=color
)
ticks.append(y)
ax.set_yticks(ticks)
ax.tick_params(direction="out")
ax.set_yticklabels([])
ax.set_xscale("log")
ax.spines['bottom'].set_color(color)
ax.spines['top'].set_color(color)
ax.spines['right'].set_color(color)
ax.spines['left'].set_color(color)
ax.tick_params(axis='x', colors=color)
ax.tick_params(axis='y', colors=color)
for tick in ax.yaxis.get_majorticklabels(): # example for xaxis
tick.set_fontsize(20)
plt.tight_layout()
plt.xlim(0, 10 **2);
figure.savefig('/Users/storm/src/talks/talks/2020-02-26-Max-Planck/static/population_defection_fails.png',
bbox_inches='tight',
transparent=True, dpi=100)
```
```python
plot = axl.Plot(results)
p = stackplot(plot, eco)
p.savefig('../img/population_defection_fails.pdf', bbox_inches='tight')
```
```python
```
```python
```
|
03f22ba109d00383f20bbc11afe820a4c39e98cd
| 130,536 |
ipynb
|
Jupyter Notebook
|
nbs/2. Stability of defection.ipynb
|
Nikoleta-v3/Memory-size-in-the-prisoners-dilemma
|
00889fe606c64e8f1d83a3dc07cb4d60bfd5a46e
|
[
"MIT"
] | 2 |
2020-10-17T08:01:44.000Z
|
2020-10-17T08:01:54.000Z
|
nbs/2. Stability of defection.ipynb
|
Nikoleta-v3/Memory-size-in-the-prisoners-dilemma
|
00889fe606c64e8f1d83a3dc07cb4d60bfd5a46e
|
[
"MIT"
] | 33 |
2019-11-12T16:19:18.000Z
|
2020-09-18T13:53:28.000Z
|
nbs/2. Stability of defection.ipynb
|
Nikoleta-v3/Memory-size-in-the-prisoners-dilemma
|
00889fe606c64e8f1d83a3dc07cb4d60bfd5a46e
|
[
"MIT"
] | 2 |
2020-03-30T18:00:37.000Z
|
2020-10-17T08:01:58.000Z
| 146.834646 | 28,564 | 0.883304 | true | 3,884 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.805632 | 0.757794 | 0.610504 |
__label__eng_Latn
| 0.468128 | 0.256735 |
```python
import numpy as np
import matplotlib.pyplot as plt
from sympy import *
```
```python
t = 0
x = np.linspace(-4,4,1000)
```
```python
y = np.exp(-np.power(x-3*t,2))*np.sin(3*np.pi*(x-t))
```
```python
plt.figure(figsize=(10,5),dpi=250)
plt.plot(x,y)
```
```python
x, y, t = symbols('x y t')
y = exp(-(x-3*t)**2)*sin(3*pi*(x-t))
simplify(y)
```
$\displaystyle - e^{- \left(3 t - x\right)^{2}} \sin{\left(\pi \left(3 t - 3 x\right) \right)}$
```python
```
|
7aaf4e12625c1fd1cf89e884d1033cbd0f0b80f4
| 115,100 |
ipynb
|
Jupyter Notebook
|
SimPy/plot_wavepacket.ipynb
|
nahian-147/my_codes
|
9729c56b227d75354ea49982720de94ed1c21909
|
[
"MIT"
] | null | null | null |
SimPy/plot_wavepacket.ipynb
|
nahian-147/my_codes
|
9729c56b227d75354ea49982720de94ed1c21909
|
[
"MIT"
] | null | null | null |
SimPy/plot_wavepacket.ipynb
|
nahian-147/my_codes
|
9729c56b227d75354ea49982720de94ed1c21909
|
[
"MIT"
] | null | null | null | 906.299213 | 112,413 | 0.955873 | true | 181 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.824462 | 0.661923 | 0.54573 |
__label__eng_Latn
| 0.239344 | 0.106244 |
## Optimal Charging Example
We have an electric storage device with state-of-charge (SOC) $q_t \in \mathbb{R}_+$ at time $t$ and capacity $Q \in \mathbb{R}_+$. We denote the amount of energy charged from time $t$ to time $t+1$ as $u_t \in \mathbb{R}$, i.e., $q_{t+1} = q_t + u_t$. Power is limited by $C \in \mathbb{R}_+$ ($D \in \mathbb{R}_+$), the maximum possible magnitude of charging (discharging) power. The energy price $P(u_t)$ is higher when buying energy from the grid compared to the case of selling energy to the grid. Specifically,
\begin{equation}
P(u_t) = \begin{cases}
p_t u_t (1+\eta) \quad &\text{if} \quad u_t > 0 \\
p_t u_t (1-\eta) \quad &\text{otherwise},
\end{cases}
\end{equation}
where $p_t \in \mathbb{R}_+$ is the average market price at time $t$ and $0 < \eta < 1$. To optimize the cost of charging the energy storage from empty to full within a time period of length $T$, we solve the optimization problem
\begin{equation}
\begin{array}{ll}
\text{minimize} \quad & \sum_{t=0}^{T-1} \left( p_t \left(u_t + \eta |u_t|\right) + \gamma u_t^2 \right)\\
\text{subject to} \quad &q_{t+1} = q_t + u_t \quad \forall t \in \{0,...,T-1 \}\\
&-D \leq u_t \leq C \quad \forall t \in \{0,...,T-1 \}\\
&0 \leq q_t \leq Q \quad \forall t \in \{0,...,T \}\\
&q_0 = 0\\
&q_{T} = Q,
\end{array}
\end{equation}
where $u_t \in \mathbb{R}$ and $q_t \in \mathbb{R}_+$ are the variables. We have added the regularization term $\gamma u_t^2$ to reduce stress on the electronic system due to peak power values, with $\gamma \in \mathbb{R}_+$. We reformulate the problem to be [DPP-compliant](https://www.cvxpy.org/tutorial/advanced/index.html#disciplined-parametrized-programming) by introducing the parameter $s_t = p_t \eta$ and we use time vectors $u \in \mathbb{R}^T$, $p, s \in \mathbb{R}_+^T$ and $q \in \mathbb{R}_+^{T+1}$ to summarize the temporal variables and parameters. Finally, we solve
\begin{equation}
\begin{array}{ll}
\text{minimize} \quad & p^T u + s^T |u| + \gamma \Vert u \Vert_2^2\\
\text{subject to} \quad &q_{1:T+1} = q_{0:T} + u\\
&-D \mathbb{1} \leq u \leq C \mathbb{1}\\
&\mathbb{0} \leq q \leq Q \mathbb{1}\\
&q_0 = 0\\
&q_{T+1} = Q,
\end{array}
\end{equation}
where $|u|$ is the element-wise absolute value of $u$. Let's define the corresponding CVXPY problem. To model a one-day period with a resolution of one minute, we choose $T=24 \cdot 60 = 1440$.
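As a quick scalar sanity check before building the problem (an illustrative sketch with arbitrary example values), the reformulated cost term $p u + s |u|$ with $s = \eta p$ reproduces the piecewise price $P(u)$ for both signs of $u$:
```python
import numpy as np

p_t, eta = 4.0, 0.1  # arbitrary example values
for u_t in (0.7, -0.7):
    reformulated = p_t * u_t + eta * p_t * abs(u_t)
    piecewise = p_t * u_t * (1 + eta) if u_t > 0 else p_t * u_t * (1 - eta)
    assert np.isclose(reformulated, piecewise)
```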
```python
import cvxpy as cp
import numpy as np
# define dimension
T = 1440
# define variables
u = cp.Variable(T, name='u')
q = cp.Variable(T+1, name='q')
# define parameters
p = cp.Parameter(T, nonneg=True, name='p')
s = cp.Parameter(T, nonneg=True, name='s')
D = cp.Parameter(nonneg=True, name='D')
C = cp.Parameter(nonneg=True, name='C')
Q = cp.Parameter(nonneg=True, name='Q')
gamma = cp.Parameter(nonneg=True, name='gamma')
# define objective
objective = cp.Minimize(p@u + s@cp.abs(u) + gamma*cp.sum_squares(u))
# define constraints
constraints = [q[1:] == q[:-1] + u,
-D <= u, u<= C,
0 <= q, q <= Q,
q[0] == 0, q[-1] == Q]
# define problem
problem = cp.Problem(objective, constraints)
```
Assign parameter values and solve the problem. The one-day period starts at 2pm with a medium energy price level until 5pm, high price level from 5pm to midnight and low prices otherwise.
```python
import matplotlib.pyplot as plt
p.value = np.concatenate((3*np.ones(3*60),
5*np.ones(7*60),
1*np.ones(14*60)), axis=0)
eta = 0.1
s.value = eta*p.value
Q.value = 1
C.value = 3*Q.value/(24*60)
D.value = 2*C.value
gamma.value = 100
val = problem.solve()
fig, ax1 = plt.subplots()
ax1.plot(100*q.value, color='b')
ax1.grid()
ax1.set_xlabel('Time [min]')
ax1.set_ylabel('SOC [%]', color='b')
ax1.tick_params(axis='y', labelcolor='b')
ax2 = ax1.twinx()
ax2.plot(100*p.value / max(p.value), color='m')
ax2.set_ylabel('Price Level [%]', color='m')
ax2.tick_params(axis='y', labelcolor='m')
```
We observe that it is optimal to charge the storage with maximum power during the medium price phase, then empty the storage when prices are highest, and then fully charge the storage for the lowest price of the day. Generating C source for the problem is as easy as:
```python
from cvxpygen import cpg
cpg.generate_code(problem, code_dir='charging_code')
```
Now, you can use a python wrapper around the generated code as a custom CVXPY solve method.
```python
from charging_code.cpg_solver import cpg_solve
import numpy as np
import pickle
import time
# load the serialized problem formulation
with open('charging_code/problem.pickle', 'rb') as f:
prob = pickle.load(f)
# assign parameter values
prob.param_dict['p'].value = np.concatenate((3*np.ones(3*60),
5*np.ones(7*60),
1*np.ones(14*60)), axis=0)
eta = 0.1
prob.param_dict['s'].value = eta*prob.param_dict['p'].value
prob.param_dict['Q'].value = 1
prob.param_dict['C'].value = 5*prob.param_dict['Q'].value/(24*60)
prob.param_dict['D'].value = 2*prob.param_dict['C'].value
# gamma must also be assigned before solving (same value as in the first run above)
prob.param_dict['gamma'].value = 100
# solve problem conventionally
t0 = time.time()
# CVXPY chooses eps_abs=eps_rel=1e-5, max_iter=10000, polish=True by default,
# however, we choose the OSQP default values here, as they are used for code generation as well
val = prob.solve(solver='OSQP', eps_abs=1e-3, eps_rel=1e-3, max_iter=4000, polish=False)
t1 = time.time()
print('\nCVXPY\nSolve time: %.3f ms' % (1000 * (t1 - t0)))
print('Objective function value: %.6f\n' % val)
# solve problem with C code via python wrapper
prob.register_solve('CPG', cpg_solve)
t0 = time.time()
val = prob.solve(method='CPG')
t1 = time.time()
print('\nCVXPYgen\nSolve time: %.3f ms' % (1000 * (t1 - t0)))
print('Objective function value: %.6f\n' % val)
```
# Part 1 - Scalars and Vectors
For the questions below it is not sufficient to simply provide answers to the questions; you must solve the problems and show your work using Python (the NumPy library will help a lot!). Translate the vectors and matrices into their appropriate Python representations and use numpy or functions that you write yourself to demonstrate the result or property.
## 1.1 Create a two-dimensional vector and plot it on a graph
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
two_d_vector = [.4, 1]
plt.arrow(0,0, two_d_vector[0], two_d_vector[1],head_width=.1, head_length=.1, color ='orange')
plt.xlim(-.2,1)
plt.ylim(-.2,1.2)
```
## 1.2 Create a three-dimensional vector and plot it on a graph
```
vectors = np.array([[0, 0, 0, 2, 3, 4],
[0, 0, 0, 7, 4, 6],
[0, 0, 0, 2, 8, 9]])
X, Y, Z, U, V, W = zip(*vectors)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.quiver(X, Y, Z, U, V, W, length=.5)
ax.set_xlim([0, 4])
ax.set_ylim([0, 6])
ax.set_zlim([0, 5])
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
```
## 1.3 Scale the vectors you created in 1.1 by $5$, $\pi$, and $-e$ and plot all four vectors (original + 3 scaled vectors) on a graph. What do you notice about these vectors?
```
from math import e, pi
two_d_vector = [.4, 1]
# the prompt asks for -e, so scale by the negative of Euler's number
scaled_by_neg_e = np.multiply(-e, two_d_vector)
scaled_to_pi = np.multiply(pi, two_d_vector)
scaled_by_5 = np.multiply(5, two_d_vector)
plt.arrow(0,0, scaled_by_5[0], scaled_by_5[1],head_width=.1, head_length=.1, color ='green')
plt.arrow(0,0, scaled_to_pi[0], scaled_to_pi[1],head_width=.1, head_length=.1, color ='blue')
plt.arrow(0,0, scaled_by_neg_e[0], scaled_by_neg_e[1],head_width=.1, head_length=.1, color ='red')
plt.arrow(0,0, two_d_vector[0], two_d_vector[1],head_width=.1, head_length=.1, color ='orange')
plt.xlim(-2,3)
plt.ylim(-4,6)
```
All four vectors lie on the same line through the origin: scalar multiplication only changes the length, and a negative scalar (here $-e$) also reverses the direction.
## 1.4 Graph vectors $\vec{a}$ and $\vec{b}$ and plot them on a graph
\begin{align}
\vec{a} = \begin{bmatrix} 5 \\ 7 \end{bmatrix}
\qquad
\vec{b} = \begin{bmatrix} 3 \\4 \end{bmatrix}
\end{align}
```
a = np.array([5,7])
b = np.array([3,4])
plt.arrow(0,0,a[0],a[1], head_width=.2, head_length=.2, color='b')
plt.arrow(0,0,b[0],b[1], head_width=.2, head_length=.2, color='r')
plt.xlim(0, 6)
plt.ylim(0,8)
```
## 1.5 find $\vec{a} - \vec{b}$ and plot the result on the same graph as $\vec{a}$ and $\vec{b}$. Is there a relationship between vectors $\vec{a} \thinspace, \vec{b} \thinspace \text{and} \thinspace \vec{a-b}$
```
a = np.array([5,7])
b = np.array([3,4])
c = a - b
plt.arrow(0,0,a[0],a[1], head_width=.2, head_length=.2, color='b')
plt.arrow(0,0,b[0],b[1], head_width=.2, head_length=.2, color='r')
plt.arrow(0,0,c[0],c[1], head_width=.2, head_length=.2, color='y')
plt.xlim(0, 6)
plt.ylim(0,8)
```
The three vectors form a triangle: $\vec{a}-\vec{b}$ is the vector that runs from the tip of $\vec{b}$ to the tip of $\vec{a}$, drawn here from the origin.
## 1.6 Find $c \cdot d$
\begin{align}
\vec{c} = \begin{bmatrix}7 & 22 & 4 & 16\end{bmatrix}
\qquad
\vec{d} = \begin{bmatrix}12 & 6 & 2 & 9\end{bmatrix}
\end{align}
```
c = np.array([7, 22, 4, 16])
d = np.array([12, 6, 2, 9])
c_dot_d = (c*d).sum()
c_dot_d
```
368
```
np.vdot(c, d )
```
368
## 1.7 Find $e \times f$
\begin{align}
\vec{e} = \begin{bmatrix} 5 \\ 7 \\ 2 \end{bmatrix}
\qquad
\vec{f} = \begin{bmatrix} 3 \\4 \\ 6 \end{bmatrix}
\end{align}
```
e = np.array([5, 7, 2])
f = np.array([3, 4, 6])
e_cross_f = np.cross(e,f)
e_cross_f
```
array([ 34, -24, -1])
## 1.8 Find $||g||$ and then find $||h||$. Which is longer?
\begin{align}
\vec{g} = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 8 \end{bmatrix}
\qquad
\vec{h} = \begin{bmatrix} 3 \\3 \\ 3 \\ 3 \end{bmatrix}
\end{align}
```
g = np.array([1, 1, 1, 8])
h = np.array([3, 3, 3, 3])
g_magnitude = np.sqrt((g**2).sum())
h_magnitude = np.sqrt((h**2).sum())
if g_magnitude > h_magnitude:
print(f"The magnitude of g ({g_magnitude}) is greater than the magnitude of h ({h_magnitude})")
else:
print(f"The magnitude of h ({h_magnitude}) is greater than the magnitude of g ({g_magnitude})")
```
The magnitude of g (8.18535277187245) is greater than the magnitude of h (6.0)
# Part 2 - Matrices
## 2.1 What are the dimensions of the following matrices? Which of the following can be multiplied together? See if you can find all of the different legal combinations.
\begin{align}
A = \begin{bmatrix}
1 & 2 \\
3 & 4 \\
5 & 6
\end{bmatrix}
\qquad
B = \begin{bmatrix}
2 & 4 & 6 \\
\end{bmatrix}
\qquad
C = \begin{bmatrix}
9 & 6 & 3 \\
4 & 7 & 11
\end{bmatrix}
\qquad
D = \begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{bmatrix}
\qquad
E = \begin{bmatrix}
1 & 3 \\
5 & 7
\end{bmatrix}
\end{align}
```
A = np.array([[1, 2],
[3, 4],
[5,6]])
B = np.array([2, 4, 6])
C = np.array([[9, 6, 3],
[4, 7, 11]])
D = np.array([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]])
E = np.array([[1, 3],
[5, 7]])
```
A - 3x2
> A*B = No // A*C = Yes // A*D = No // A*E = Yes
B - 1x3
> BA = Yes // BC = No // BD = Yes // BE = No
C - 2x3
> CA = Yes // CB = No // CD = Yes // CE = No
D - 3x3
> DA = Yes // DB = No // DC = No // DE = No
E - 2x2
> EA = No // EB = No // EC = Yes // ED = No
## 2.2 Find the following products: CD, AE, and BA. What are the dimensions of the resulting matrices? How does that relate to the dimensions of their factor matrices?
```
print(np.matmul(C,D))
print(np.matmul(A,E))
print(np.matmul(B,A))
```
[[ 9 6 3]
[ 4 7 11]]
[[11 17]
[23 37]
[35 57]]
[44 56]
CD = 2x3
AE = 3x2
BA = 1x2
In general, multiplying an (m x n) matrix by an (n x p) matrix gives an (m x p) result: the product takes its row count from the left factor and its column count from the right factor.
## 2.3 Find $F^{T}$. How are the numbers along the main diagonal (top left to bottom right) of the original matrix and its transpose related? What are the dimensions of $F$? What are the dimensions of $F^{T}$?
\begin{align}
F =
\begin{bmatrix}
20 & 19 & 18 & 17 \\
16 & 15 & 14 & 13 \\
12 & 11 & 10 & 9 \\
8 & 7 & 6 & 5 \\
4 & 3 & 2 & 1
\end{bmatrix}
\end{align}
The main diagonal in F and its transpose, F.T, are the same.
F - 5x4
F.T - 4x5
```
F = np.array([[20,19,18,17],
[16,15,14,13],
[12,11,10,9],
[8,7,6,5],
[4,3,2,1]])
F.T
```
array([[20, 16, 12, 8, 4],
[19, 15, 11, 7, 3],
[18, 14, 10, 6, 2],
[17, 13, 9, 5, 1]])
# Part 3 - Square Matrices
## 3.1 Find $IG$ (be sure to show your work) 😃
You don't have to do anything crazy complicated here to show your work, just create the G matrix as specified below, and a corresponding 2x2 Identity matrix and then multiply them together to show the result. You don't need to write LaTeX or anything like that (unless you want to).
\begin{align}
G=
\begin{bmatrix}
13 & 14 \\
21 & 12
\end{bmatrix}
\end{align}
```
G = np.array([[13,14],
[21,12]])
G_identity = np.array([[1,0],
[0,1]])
# If the product is equal to G, we have the proper IG
np.matmul(G, G_identity)
```
array([[13, 14],
[21, 12]])
## 3.2 Find $|H|$ and then find $|J|$.
\begin{align}
H=
\begin{bmatrix}
12 & 11 \\
7 & 10
\end{bmatrix}
\qquad
J=
\begin{bmatrix}
0 & 1 & 2 \\
7 & 10 & 4 \\
3 & 2 & 0
\end{bmatrix}
\end{align}
```
H = np.array([[12,11],
[7,10]])
J = np.array([[0,1,2],
[7,10,4],
[3,2,0]])
```
```
np.linalg.det(H)
```
43.000000000000014
```
np.linalg.det(J)
```
-19.999999999999996
## 3.3 Find $H^{-1}$ and then find $J^{-1}$
$H^{-1} = \frac{1}{43}\begin{bmatrix}
10 & -11 \\
-7 & 12
\end{bmatrix}$
---
Since $|J| = -20 \neq 0$ (computed above), $J$ is invertible:
$J^{-1} = \frac{1}{20}\begin{bmatrix}
8 & -4 & 16 \\
-12 & 6 & -14 \\
16 & -3 & 7
\end{bmatrix}$
## 3.4 Find $HH^{-1}$ and then find $J^{-1}J$. Is $HH^{-1} == J^{-1}J$? Why or Why not?
Please ignore Python rounding errors. If necessary, format your output so that it rounds to 5 significant digits (the fifth decimal place).
Both products are identity matrices: $HH^{-1} = I_2$ and $J^{-1}J = I_3$. They are not equal to each other, though, because they have different dimensions ($2\times 2$ vs. $3\times 3$).
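A quick numerical check of this (a sketch; it reuses the `H` and `J` arrays defined above):
```
H_inv = np.linalg.inv(H)
J_inv = np.linalg.inv(J)
# both products should be identity matrices of different sizes
print(np.round(H @ H_inv, 5))
print(np.round(J_inv @ J, 5))
```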
# Stretch Goals:
A reminder that these challenges are optional. If you finish your work quickly we welcome you to work on them. If there are other activities that you feel like will help your understanding of the above topics more, feel free to work on that. Topics from the Stretch Goals sections will never end up on Sprint Challenges. You don't have to do these in order, you don't have to do all of them.
- Write a function that can calculate the dot product of any two vectors of equal length that are passed to it.
- Write a function that can calculate the norm of any vector
- Prove to yourself again that the vectors in 1.9 are orthogonal by graphing them.
- Research how to plot a 3d graph with animations so that you can make the graph rotate (this will be easier in a local notebook than in google colab)
- Create and plot a matrix on a 2d graph.
- Create and plot a matrix on a 3d graph.
- Plot two vectors that are not collinear on a 2d graph. Calculate the determinant of the 2x2 matrix that these vectors form. How does this determinant relate to the graphical interpretation of the vectors?
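One possible take on the first two stretch goals (a sketch, one of many valid implementations):
```
def dot_product(u, v):
    """Dot product of two equal-length vectors."""
    assert len(u) == len(v), "vectors must have equal length"
    return sum(u_i*v_i for u_i, v_i in zip(u, v))

def norm(u):
    """Euclidean norm of a vector."""
    return dot_product(u, u)**0.5

print(dot_product([7, 22, 4, 16], [12, 6, 2, 9]))  # 368, matches 1.6
print(norm([3, 3, 3, 3]))                          # 6.0, matches 1.8
```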
# How close is close enough?
This is based on [Greg Wilson's post on testing](http://software-carpentry.org/blog/2014/10/why-we-dont-teach-testing.html), but avoids the big, difficult questions. Instead I focus on his comment about "close enough" in [the full phugoid model notebook](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/01_phugoid/01_03_PhugoidFullModel.ipynb).
See also [this page from NASA on grid convergence](http://www.grc.nasa.gov/WWW/wind/valid/tutorial/spatconv.html). Also [this paper by Liu](http://ocw.mit.edu/courses/mathematics/18-304-undergraduate-seminar-in-discrete-mathematics-spring-2006/projects/xtrpltn_liu_xpnd.pdf), which is rather mathematical, on Richardson extrapolation and its extensions.
## Round off
Let's start by reminding ourselves how round-off error comes in to numerical calculations. As an example, remember that $\sin(2\pi) = 0$, and in fact $\sin(2 k \pi)=0$ for any integer $k$.
We'll use `numpy` for the calculations.
```python
%matplotlib notebook
import numpy
from matplotlib import pyplot
from numpy import pi, sin, arange
```
```python
ks = arange(10)
sins = sin(2.0*ks*pi)
for k, s in zip(ks, sins):
print("sin(2 k pi) evaluates to {:.3g} when k={}.".format(s, k))
print("Adding these all up (result should be 0) we get {}.".format(sins.sum()))
```
sin(2 k pi) evaluates to 0 when k=0.
sin(2 k pi) evaluates to -2.45e-16 when k=1.
sin(2 k pi) evaluates to -4.9e-16 when k=2.
sin(2 k pi) evaluates to -7.35e-16 when k=3.
sin(2 k pi) evaluates to -9.8e-16 when k=4.
sin(2 k pi) evaluates to -1.22e-15 when k=5.
sin(2 k pi) evaluates to -1.47e-15 when k=6.
sin(2 k pi) evaluates to -1.71e-15 when k=7.
sin(2 k pi) evaluates to -1.96e-15 when k=8.
sin(2 k pi) evaluates to -2.2e-15 when k=9.
Adding these all up (result should be 0) we get -1.1021821192326179e-14.
So we had ten different numerical calculations with errors between $0$ and $\sim 2 \times 10^{-15}$, which when combined lead to a total error $\sim 10^{-14}$. This illustrates the standard result:
Summing $N$ calculations with errors $\delta_i$, where $\delta_i \le \delta = \max_i \delta_i$, leads to a total error ${\cal E}$ which is bounded by ${\cal E} \le N \delta$.
## Going beyond sums
We'll use a very simple initial value problem (as the calculations are faster):
$$
\begin{equation}
y' = -\sin(x), \qquad y(0) = 1
\end{equation}
$$
which has the solution $y = \cos(x)$. We'll solve this using Euler's method, as in the original notebook:
$$
\begin{equation}
y_{n+1} = y_n - h \sin(x_n), \qquad y_0 = 1
\end{equation}
$$
where $h$ is the grid spacing and $x_n = n h$, with $n = 0, 1, \dots$ the grid step.
```python
def simple_euler(h, N):
"""
Solve the problem y' = -sin(x), y(0) = 1 using Euler's method.
Parameters
----------
h : float
Grid spacing
N : int
Number of steps
Returns
-------
Y : float
y(Nh) as approximated by Euler's method
"""
# Initial data
Y = 1.0
x = 0.0
for n in range(N):
Y -= h*sin(x)
x += h
return Y
```
So, how good is this method? Check by comparing against the exact solution when $X=1$.
```python
X = 1.0
N_all = numpy.array([2**i for i in range(3, 20)])
h_all = X / N_all
Y_exact = numpy.cos(X)
Y_approx = numpy.zeros_like(h_all)
Y_errors = numpy.zeros_like(h_all)
for i, N in enumerate(N_all):
h = h_all[i]
Y_approx[i] = simple_euler(h, N)
Y_errors[i] = numpy.abs(Y_approx[i] - Y_exact)
```
```python
pyplot.loglog(h_all, Y_errors, 'kx')
pyplot.xlabel(r'$h$')
pyplot.ylabel('Error');
```
So, what should we expect? If implemented correctly, we know that Euler's method behaves as $\text{Error} = c_1 h + {\cal O}(h^2)$. As a first guess we drop the higher order terms, giving $\text{Error} = c_1 h$, or $\log(\text{Error}) = \log(h) + \text{const}$.
We can then compute the best fit line through the data and see if it matches this assumption.
```python
simple_p = numpy.polyfit(numpy.log(h_all), numpy.log(Y_errors), 1)
pyplot.loglog(h_all, Y_errors, 'kx', label='Data')
pyplot.loglog(h_all, numpy.exp(simple_p[1])*h_all**(simple_p[0]), 'b-', label='Fit, slope={:.4f}'.format(simple_p[0]))
pyplot.legend(loc='upper left')
pyplot.xlabel(r'$h$')
pyplot.ylabel('Error');
```
So, the best fit line matches the expected slope (1) to better than $0.3\%$. Is this good enough?
First, let's do a sanity check. Why do we believe that the slope shouldn't be *exactly* $1$? It's because of our assumption: that $\text{Error} = c_1 h + {\cal O}(h^2)$, and that we could ignore the higher order terms. The assumption that the error takes this form is essentially saying that we've implemented the algorithm correctly (which is what we're trying to check!). The assumption that we can ignore the higher order terms is more reasonable when $h$ is small, like $10^{-5}$, but not when $h \sim 10^{-1}$. So the slope should get closer to $1$ if we ignore the results for larger $h$. Let's do that calculation:
```python
for i in range(1, len(Y_errors)-2):
partial_p = numpy.polyfit(numpy.log(h_all[i:]), numpy.log(Y_errors[i:]), 1)
print("The slope, when ignoring {} entries, is {:.6f}. (slope-1)={:.4g}".format(i+1, partial_p[0], partial_p[0]-1.0))
```
The slope, when ignoring 2 entries, is 1.000313. (slope-1)=0.0003133
The slope, when ignoring 3 entries, is 1.000176. (slope-1)=0.0001758
The slope, when ignoring 4 entries, is 1.000099. (slope-1)=9.92e-05
The slope, when ignoring 5 entries, is 1.000056. (slope-1)=5.638e-05
The slope, when ignoring 6 entries, is 1.000032. (slope-1)=3.23e-05
The slope, when ignoring 7 entries, is 1.000019. (slope-1)=1.867e-05
The slope, when ignoring 8 entries, is 1.000011. (slope-1)=1.09e-05
The slope, when ignoring 9 entries, is 1.000006. (slope-1)=6.439e-06
The slope, when ignoring 10 entries, is 1.000004. (slope-1)=3.851e-06
The slope, when ignoring 11 entries, is 1.000002. (slope-1)=2.336e-06
The slope, when ignoring 12 entries, is 1.000001. (slope-1)=1.439e-06
The slope, when ignoring 13 entries, is 1.000001. (slope-1)=9.026e-07
The slope, when ignoring 14 entries, is 1.000001. (slope-1)=5.777e-07
The slope, when ignoring 15 entries, is 1.000000. (slope-1)=3.79e-07
That's good news. We could just fit the final few entries to get closer to the expected slope, but we're still not answering how close is "close enough".
One additional question that is quite important. What's the effect of changing how far we integrate, by changing $X$? Let's make $X$ smaller.
```python
X = 1.0e-5
h_all = X / N_all
Y_exact_short = numpy.cos(X)
Y_approx_short = numpy.zeros_like(h_all)
Y_errors_short = numpy.zeros_like(h_all)
for i, N in enumerate(N_all):
h = h_all[i]
Y_approx_short[i] = simple_euler(h, N)
Y_errors_short[i] = numpy.abs(Y_approx_short[i] - Y_exact_short)
```
```python
simple_p_short = numpy.polyfit(numpy.log(h_all[:-6]), numpy.log(Y_errors_short[:-6]), 1)
pyplot.loglog(h_all, Y_errors_short, 'kx', label='Data')
pyplot.loglog(h_all[:-6], numpy.exp(simple_p_short[1])*h_all[:-6]**(simple_p_short[0]), 'b-', label='Fit, slope={:.4f}'.format(simple_p_short[0]))
pyplot.legend(loc='lower right')
pyplot.xlabel(r'$h$')
pyplot.ylabel('Error');
```
We see that the algorithm converges as expected until $h$ is so small that floating point round-off errors become important. Below that point the error of each individual step does not depend on $h$ (as the truncation error of the algorithm is less than floating point round-off) but is essentially random, and has the magnitude of floating point round-off. These individual errors will then add up. To check this, we can look at the contribution from floating point errors at each step:
```python
floating_error_step = numpy.spacing(1.0e-3)
floating_error = floating_error_step * N_all
pyplot.loglog(h_all, Y_errors_short, 'kx', label='Data')
pyplot.loglog(h_all, floating_error, 'b-', label='Floating point error contribution')
pyplot.legend(loc='upper left')
pyplot.xlabel(r'$h$')
pyplot.ylabel('Error');
```
Let's check the truncation error instead. First, redo the analysis to find out *exactly* what that should look like.
```python
import sympy
sympy.init_printing()
y = sympy.Function('y')
f = sympy.Function('f')
x, h, b= sympy.symbols('x, h, b')
y_n_p_1 = y(x) + h * f(x, y(x))
```
```python
truncation_error = sympy.series(y(h),h)-y_n_p_1.subs(x, 0)
truncation_error = truncation_error.subs(f(0, y(0)), sympy.Subs(sympy.Derivative(y(x), x),(x,),(0,)))
truncation_error
```
So the truncation error should be the sum of each of these terms: for $h$ sufficiently small the higher order terms should have no effect at all. We can check, using that the derivatives of $\cos(x)$ at $x=0$ are alternately $1$ and $0$ in magnitude:
```python
h_truncation = numpy.array([2**(-i) for i in range(4,20)])
Y_truncation_error = numpy.zeros_like(h_truncation)
Y_expected_truncation_error = numpy.zeros_like(h_truncation)
for i, h in enumerate(h_truncation):
Y_truncation_error[i] = numpy.abs(simple_euler(h, 1) - numpy.cos(h))
Y_expected_truncation_error[i] = h**2/2.0 - h**4/24.0 + h**6/720.0 - h**8/40320.
```
```python
fig=pyplot.figure(figsize=(12,6))
ax1=fig.add_subplot(121)
ax1.loglog(h_truncation, Y_truncation_error, 'kx', label='Data')
ax1.loglog(h_truncation, Y_expected_truncation_error, 'b-', label='Expected truncation error')
ax1.legend(loc='upper left')
ax1.set_xlabel(r'$h$')
ax1.set_ylabel('Error');
ax2=fig.add_subplot(122)
ax2.loglog(h_truncation, numpy.abs(Y_truncation_error-Y_expected_truncation_error), 'kx', label='Difference in data')
ax2.loglog(h_truncation, numpy.minimum(numpy.spacing(1.0),numpy.spacing(h_truncation**4)/numpy.spacing(1.0)), 'b-', label='Floating point limit')
ax2.legend(loc='lower right')
ax2.set_xlabel(r'$h$')
ax2.set_ylabel('Difference in truncation error');
```
We note that for $h$ sufficiently small the limitation is no longer the floating point error, but instead falls off like $h^4$. This is (likely) because the $h^4$ term is not correctly captured in one of the calculations (as $h \lesssim 10^{-4}$ we have $h^4$ less than floating point round-off). If we went to larger $h$ we would again find that the "expected truncation error" doesn't match, as more terms in the expansion would be needed.
## Richardson extrapolation
Let's go back to a different application of the "close enough" issue. What do we actually want to use the convergence plot *for*?
At its heart, Euler's method is approximating the exact solution $y(x)$ at a point $x=X$ to some degree of accuracy. We care about two things: the exact value $y(X)$ and the error made in approximating it, $E_h$. As, in general, we don't know the exact solution, we can't know the value of the error. However, we can show how it depends on the grid step $h$, and from that we can approximately calculate how big it is for any system. That is, we can put *error bars* on our answer. For those that like the "computation as experiment" analogy, this is exactly what we should be doing.
To do this, we use *Richardson extrapolation*. We compute the solution with Euler's method twice, to get $y^{(h)}$ and $y^{(2h)}$: two approximations to the solution $y(X)$ computed with different (but related) step lengths. Using the error analysis as a basis, we *assume* that
$$
\begin{equation}
y(X) = y^{(h)} + C h.
\end{equation}
$$
Given the two calculations $y^{(h)}$ and $y^{(2h)}$, and our assumption, we can solve for the exact solution $y(X)$ and for the error $C h$ to find
$$
\begin{equation}
y(X) = 2 y^{(h)} - y^{(2h)}, \qquad C h = y^{(2h)} - y^{(h)}.
\end{equation}
$$
So, we can go back to our original problem and look at $y(1)$, the Richardson extrapolated "exact" solution, and the error bound that results.
```python
X = 1.0
N_all = numpy.array([2**i for i in range(3, 20)])
h_all = X / N_all
Y_richardson = numpy.zeros_like(Y_approx)
Y_richardson_error = numpy.zeros_like(Y_approx)
for i in range(1, len(h_all)):
Y_richardson[i] = 2.0*Y_approx[i] - Y_approx[i-1]
Y_richardson_error[i] = abs(Y_approx[i-1] - Y_approx[i])
```
```python
fig=pyplot.figure(figsize=(8,6))
ax1=fig.add_subplot(111)
ax1.loglog(h_all[1:], Y_errors[1:], 'bo', label='Data errors')
ax1.set_xscale("log", nonposx='clip')
ax1.set_yscale("log", nonposy='clip')
ax1.errorbar(h_all[1:], numpy.abs(Y_exact - Y_richardson[1:]), yerr=Y_richardson_error[1:],
lolims=True, marker='x', color='k', ls='None', label='Richardson extrapolation and error estimate')
ax1.legend(loc='lower right')
ax1.set_xlabel(r'$h$')
ax1.set_ylabel('Error');
```
We see that the error bar found from Richardson extrapolation pretty much matches up with the error in the original calculation, as expected. So, using the two best results (i.e., those with the highest resolution, or smallest $h$) we can say that
```python
print("y(1) = {} ± {}.".format(Y_richardson[-1], Y_richardson_error[-1]))
```
y(1) = 0.5403023058678328 ± 8.024897067970826e-07.
However, this analysis is all based on the *assumption* that the error is exactly proportional to $h$. We know this isn't true; we're neglecting higher order terms. We see this because we measure a convergence rate that isn't exactly $1$. So we need a different model for the behaviour of our algorithm. We could add more terms to the error (as expected), but this leads to more parameters to fit, which is bad (["with four parameters I can fit an elephant"](http://en.wikiquote.org/wiki/John_von_Neumann) and so on). Instead we keep a single error term, but write $y(X) = y^{(h)} + C h^s$ where $s$ is measured from the data (above we have $1.0006$). This leads to the Richardson extrapolation formulas
$$
\begin{equation}
y(X) = \frac{2^s y^{(h)} - y^{(2h)}}{2^s - 1}, \qquad C h = \frac{y^{(2h)} - y^{(h)}}{2^s - 1}.
\end{equation}
$$
Applying this assumption to the data, we get a new set of error bars:
```python
Y_richardson_measured_s = numpy.zeros_like(Y_approx)
Y_richardson_error_measured_s = numpy.zeros_like(Y_approx)
for i in range(1, len(h_all)):
Y_richardson_measured_s[i] = (2.0**(simple_p[0])*Y_approx[i] - Y_approx[i-1])/(2.0**(simple_p[0])-1.0)
Y_richardson_error_measured_s[i] = abs(Y_approx[i-1] - Y_approx[i])/(2.0**(simple_p[0])-1.0)
print("y(1) = {} ± {}.".format(Y_richardson_measured_s[-1], Y_richardson_error_measured_s[-1]))
print("Difference between predicted exact values is {:.4g}.".format(abs(Y_richardson[-1]-Y_richardson_measured_s[-1])))
```
y(1) = 0.5403023064917923 ± 8.018657471901381e-07.
Difference between predicted exact values is 6.24e-10.
You have to look quite hard to see the difference. But the key point here is that the two exact values predicted by the different assumptions lie within each other's error bars.
Let's repeat this analysis using the data from the phugoid problem sheet. In this case we have a more complex system, a longer integration error (leading to a larger error), worse resolution (leading to a larger error), fewer data points, and no knowledge of the exact solution. However, we are still using Euler's method, so we expect the same behaviour for the error.
I explicitly give the values of the errors here rather than the code, to save time. I modified the range of values considered to
`dt_values = numpy.array([0.1*2**(-i) for i in range(8)])`
to ensure a nice factor of 2 between each resolution. I then computed the differences between each pair using the `get_diffgrid` function, and fitted the best-fit line, getting a slope of $1.21154575$ with the data below:
```python
dt_values = numpy.array([ 0.1, 0.05, 0.025, 0.0125, 0.00625, 0.003125, 0.0015625])
diffgrid = numpy.array([ 25.4562819, 10.52418949, 4.75647465, 2.20894037, 1.01024986, 0.42865587, 0.14217568])
s = 1.21154575
Y_richardson_phugoid = numpy.zeros_like(diffgrid)
Y_richardson_phugoid_error = numpy.zeros_like(diffgrid)
Y_richardson_phugoid_measured_s = numpy.zeros_like(diffgrid)
Y_richardson_phugoid_error_measured_s = numpy.zeros_like(diffgrid)
for i in range(1, len(diffgrid)):
Y_richardson_phugoid[i] = (2.0*diffgrid[i] - diffgrid[i-1])
Y_richardson_phugoid_measured_s[i] = (2.0**(s)*diffgrid[i] - diffgrid[i-1])/(2.0**(s)-1.0)
Y_richardson_phugoid_error[i] = abs(diffgrid[i-1] - diffgrid[i])
Y_richardson_phugoid_error_measured_s[i] = abs(diffgrid[i-1] - diffgrid[i])/(2.0**(s)-1.0)
print("Phugoid limit, standard assumption = {} ± {}.".format(Y_richardson_phugoid[-1], Y_richardson_phugoid_error[-1]))
print("Phugoid limit, measured slope = {} ± {}.".format(Y_richardson_phugoid_measured_s[-1], Y_richardson_phugoid_error_measured_s[-1]))
print("Difference between predicted limits is {:.4g}.".format(abs(Y_richardson_phugoid[-1]-Y_richardson_phugoid_measured_s[-1])))
```
Phugoid limit, standard assumption = -0.14430451 ± 0.28648019.
Phugoid limit, measured slope = -0.07553820347702735 ± 0.21771388347702736.
Difference between predicted limits is 0.06877.
We see the errors are much larger, but that the difference between the limiting values is within the predicted error bars of either result. Therefore the assumption that the algorithm is behaving as the idealized Euler's method does is *close enough* that the predicted result lies within the predicted error bars.
So how close *is* close enough? We need
$$
\begin{equation}
\left| \frac{2^s y^{(h)} - y^{(2h)}}{2^s - 1} - \left( 2 y^{(h)} - y^{(2h)} \right) \right| \le \left| \frac{y^{(2h)} - y^{(h)}}{2^s - 1} \right|.
\end{equation}
$$
```python
yh, y2h, s = sympy.symbols('y^h, y^{2h}, s')
Eq1 = sympy.Eq((2**s*yh-y2h)/(2**s-1)-(2*yh-y2h) , (y2h-yh)/(2**s-1))
sympy.solve(Eq1, s)
```
There's another root to check:
```python
Eq2 = sympy.Eq((2**s*yh-y2h)/(2**s-1)-(2*yh-y2h) , -(y2h-yh)/(2**s-1))
sympy.solve(Eq2, s)
```
So the threshold is $s = \log(3)/\log(2) \simeq 1.585$.
But there's also the other interval, for which we need
$$
\begin{equation}
\left| \frac{2^s y^{(h)} - y^{(2h)}}{2^s - 1} - \left( 2 y^{(h)} - y^{(2h)} \right) \right| \le \left| y^{(2h)} - y^{(h)} \right|.
\end{equation}
$$
```python
Eq3 = sympy.Eq((2**s*yh-y2h)/(2**s-1)-(2*yh-y2h) , (y2h-yh))
sympy.solve(Eq3, s)
```
```python
Eq4 = sympy.Eq((2**s*yh-y2h)/(2**s-1)-(2*yh-y2h) , -(y2h-yh))
sympy.solve(Eq4, s)
```
This gives the lower bound of $\simeq 0.585$.
This is very specific to Euler's method. What if we're using a better method with convergence rate $s_e$, so that the idealized behaviour of the algorithm is $y(X) = y^{(h)} + C h^{s_e}$? In that case, if we measure a convergence rate of $s_m$, then the results are close enough when
$$
\begin{equation}
\left| \frac{2^{s_e} y^{(h)} - y^{(2h)}}{2^{s_e} - 1} - \frac{2^{s_m} y^{(h)} - y^{(2h)}}{2^{s_m} - 1} \right| \le \left| \frac{y^{(2h)} - y^{(h)}}{2^{s_e} - 1} \right|
\end{equation}
$$
and
$$
\begin{equation}
\left| \frac{2^{s_e} y^{(h)} - y^{(2h)}}{2^{s_e} - 1} - \frac{2^{s_m} y^{(h)} - y^{(2h)}}{2^{s_m} - 1} \right| \le \left| \frac{y^{(2h)} - y^{(h)}}{2^{s_m} - 1} \right| .
\end{equation}
$$
```python
yh, y2h, se, sm = sympy.symbols('y^h, y^{2h}, s_e, s_m')
Eq5 = sympy.Eq((2**sm*yh-y2h)/(2**sm-1)-(2**se*yh-y2h)/(2**se-1) , (y2h-yh)/(2**sm-1))
sympy.solve(Eq5, sm)
```
```python
Eq6 = sympy.Eq((2**sm*yh-y2h)/(2**sm-1)-(2**se*yh-y2h)/(2**se-1) , -(y2h-yh)/(2**se-1))
sympy.solve(Eq6, sm)
```
So we can see how the bound changes with increased accuracy of the ideal algorithm:
```python
s = numpy.arange(1,10)
upper_limit = numpy.log(2.0**(s+1)-1.0)/numpy.log(2.0)
lower_limit = numpy.log(2.0**(s-1)+0.5)/numpy.log(2.0)
pyplot.plot(s, upper_limit-s, 'kx--', label='Upper limit')
pyplot.plot(s, lower_limit-s, 'ko--', label='Lower limit')
pyplot.xlabel(r'$s$')
pyplot.ylim(-1.5, 1.5)
pyplot.legend(loc='center right');
```
For algorithms where the accuracy is high (I'd say $s \ge 6$) then the measured convergence rate is *close enough* if $s_m \in s_e \pm 1$!
## Setup
The problem that's at hand is a numerical method, in this case Euler's method, that's solving a differential equation
$$
\begin{equation}
y'(x) = f(x, y), \qquad y(0) = y_0.
\end{equation}
$$
This method uses small steps $h \ll 1$ to approximate the solution $y(x)$ from $x=0$ to some point $X > 0$. Formal analysis shows that, under certain weak assumptions (essentially about the smoothness of $f$, particularly at $x=0$), the error of the method is first order. What this means is that the difference between the numerical solution $\hat{y}(X)$ and the true solution $y(X)$ is proportional to $h$, for sufficiently small $h$. We call this the error $E(h)$. This suggests that, in the limit as $h$ goes to zero, the error will go to zero, and the numerical solution will match the true solution.
Note immediately that most of the time, as in the phugoid model, we don't know the exact solution so can't measure the error. Instead we can measure *self convergence* by checking that the numerical solution converges to *something*. In other words, we want to check that $\hat{y}(X) = Y(X) + {\cal O}(h)$, so that in the limit as $h$ goes to zero we get a single, unique solution. Further analysis is needed to show that this is the *true* solution (essentially we need to impose *consistency* of the difference equation to the differential equation), which we won't worry about here. To do this we define the *difference*
$$
D(h) = | \hat{y}_h(X) - \hat{y}_{ref}(X) |.
$$
Here $\hat{y}_{ref}$ is some reference solution, assumed to be computed at high accuracy.
With this setup, we're measuring convergence by checking that $D(h) = {\cal O}(h)$: the solution converges to something fixed if the difference converges to zero with $h$. Again, for Euler's method, a formal analysis shows that
$$
\begin{equation}
D(h) = a h + {\cal O}(h^2)
\end{equation}
$$
where $a$ is some constant.
## The problem
It might be that we've incorrectly implemented Euler's method and it doesn't converge in the expected fashion. There are three possibilities.
### Unstable, inconsistent
In this case we expect that the solution does not converge to a limit, so that
$$
\begin{equation}
D(h) = \alpha h^{-s} + {\cal O}(h^{-s+1}).
\end{equation}
$$
In this case the difference diverges as $h$ gets smaller: this corresponds to the error diverging. This is really bad, and the error should be really obvious. Later we'll see what bounds we can put on the coefficients given the data.
### Stable, inconsistent
This is a slightly odd case. If we had
$$
\begin{equation}
E(h) = E_0 + \alpha h + {\cal O}(h^2)
\end{equation}
$$
then the interpretation is straightforward: the algorithm is wrong (as $h$ goes to zero the error does not), but it is converging to something.
If, on the other hand, we have
$$
\begin{equation}
D(h) = D_0 + \alpha h + {\cal O}(h^2)
\end{equation}
$$
then the *difference* between two numerical solutions is not going to zero with $h$. This is not impossible, but would be really odd at the continuum level. However, there is one important point that we cannot ignore: the limitations of floating point arithmetic.
#### Floating point effects
We know that we can't perfectly represent a real number on a computer, leading to an intrinsic error when representing a number $z$ which we'll call $\delta_z$. We have to count on the worst case, so that adding $N$ numbers $\{ z_i \}$ leads to a total error of $N \delta$ where $\delta = \max_i \delta_{z_i}$.
However, in our case it's even worse than that. Each step of using the Euler method introduces an error, which is compounded by the previous errors: the numerical data we use to go from $x$ to $x + h$ is already wrong thanks to the earlier steps in the method. We can do further analysis (far too [briefly summarized here](http://nbviewer.ipython.org/github/IanHawke/NumericalMethods/blob/master/Lectures/14%20-%20Predictor-Corrector%20Methods.ipynb)) to show the error at $X$ will be additionally amplified by a factor $\propto e^{\lambda X}$, where $\lambda \sim \partial_y f$.
This gives us a handle on the minimum error to expect. If the initial $\delta$ comes from standard double precision floating point then it will be $\sim 10^{-16}$. Euler's method for a moderately complex system, such as the phugoid system, uses ${\cal O}(10^1)$ operations per step. Then we have to work out $\lambda$, which is the maximum eigenvalue of the Jacobian matrix $\partial_y f$. Let's use `sympy` for that.
```python
import sympy
sympy.init_printing()
```
```python
v, theta, x, y, g, v_t, C_D, C_L = sympy.symbols('v, theta, x, y, g, v_t, C_D, C_L')
```
```python
q = sympy.Matrix([v, theta, x, y])
f = sympy.Matrix([-g*sympy.sin(theta)-C_D/C_L*g/v_t**2*v**2, -g/v*sympy.cos(theta)+g/v_t**2*v, v*sympy.cos(theta), v*sympy.sin(theta)])
```
```python
J = f.jacobian(q)
r = J.eigenvals()
r
```
We can then plug in some of the numbers from the phugoid model to get rid of the constants.
```python
r1 = list(r.keys())[1]
r2 = list(r.keys())[2]
r1=r1.subs([(g, 9.81), (v_t, 30.0), (C_D, 1.0/40.0), (C_L, 1.0)])
r2=r2.subs([(g, 9.81), (v_t, 30.0), (C_D, 1.0/40.0), (C_L, 1.0)])
```
We'll then assume that $v \simeq v_t$ and plot how the eigenvalue varies with $\theta$.
```python
r1 = r1.subs(v, 30.0)
r2 = r2.subs(v, 30.0)
sympy.plot(r1);
```
```python
sympy.plot(r2);
```
In the original simulation we had $h \le 0.1$. So it seems reasonable to assume that $h \lambda \sim {\cal O}(10^{-1})$. In the original phugoid problem we had $X = 100$, so our error bound is probably $\sim \delta \times 10^1 \times e^{0.1 \times 100} \simeq 2 \times 10^{-11}$.
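As a quick sanity check of that arithmetic:
```python
delta = 1e-16                          # double precision round-off per operation
ops_per_step = 10                      # O(10^1) operations per Euler step
amplification = numpy.exp(0.1 * 100)   # e^(lambda X) growth factor
print("Estimated error floor: {:.2g}".format(delta * ops_per_step * amplification))
```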
Summarizing this fairly lengthy discussion:
* There will be floating point error leading to a best expected accuracy.
* We can estimate this from the problem and its parameters
* In this case the estimate is ${\cal O}(10^{-11})$, so there is probably no point in trying to do better than $10^{-10}$.
### Stable, consistent
Finally, there's the case where we measure something that is converging, but isn't clearly converging perfectly in line with the analysis. The point here is to be as sure as possible that we're getting something reasonable.
The analysis shows that
$$
\begin{equation}
D(h) = \alpha h + \beta h^2 + \dots.
\end{equation}
$$
In order to actually measure something, we *assume* that $h$ is sufficiently small that we can ignore all terms except the first. We then model $D$ as $D = \hat{\alpha} h^s$ and measure $\hat{\alpha}$ and $s$. If $s$ is close to one we believe our assumptions were reasonable and say that everything is fine.
Here is the crux of Greg's question. What does "close to one" mean here? How should we interpret our results. With what confidence can we say that the data is consistent with our algorithm being correct?
## Analysis
That was a lot of background. Let's do some back of the envelope calculations to see how bad things can be.
We want to show that our algorithm is behaving as
$$
\begin{equation}
D(h) = h
\end{equation}
$$
for sufficiently small $h$.
Let's suppose that the algorithm we've implemented actually behaves as
$$
\begin{equation}
D(h) = 10^{-8} h^{-1} + h + 10^{-8} h^2.
\end{equation}
$$
Over what region would it appear to be behaving "correctly"?
```python
h = numpy.logspace(-14.0, 14.0)
D = 1e-8/h + h + 1e-8*h**2
```
```python
pyplot.loglog(h, D, 'bo', label = 'Difference')
pyplot.loglog(h, h, 'r-', label = 'D = h model')
pyplot.xlabel('h')
pyplot.ylabel('Difference')
pyplot.legend(loc='lower right');
```
So, despite being *completely inconsistent, and unstable*, the algorithm appears to behave correctly over **12** orders of magnitude: from $h \sim 10^{-4}$ to $h \sim 10^{8}$.
In reality, the behaviour at the top end (large $h$) is not a concern. What's more important is how it behaves for small $h$.
It's also noticeable that in the convergence check for the phugoid model, the behaviour was only checked over 2 orders of magnitude. So, should we be concerned?
In the original phugoid model notebook the differences were computed over a fairly narrow range of $h$. If we modify that range, we can check the data in more detail.
```python
h_values = numpy.array([0.1*2**(-i) for i in range(-3,18)])
differences = numpy.array([3.70456578e+04, 5.11556903e+02, 2.68347318e+02,
1.40645139e+02, 7.00473279e+01, 3.51807674e+01,
1.76241342e+01, 1.15159153e+00, 5.69993649e-01,
2.83511479e-01, 1.41334800e-01, 7.05107406e-02,
3.51645549e-02, 1.75078941e-02, 8.68366865e-03,
4.27258022e-03, 2.06729609e-03, 9.64716814e-04,
4.13441038e-04, 1.37809468e-04, 0.00000000e+00])
```
Note that the final value is from the reference solution, so of course there's no information there.
First we plot this on a loglog scale to see if it's roughly correct.
```python
pyplot.loglog(h_values[:-1], differences[:-1], 'kx')
pyplot.xlabel('h')
pyplot.ylabel('Differences');
```
We see that visually the behaviour looks ok for $h \lt 10^{-2}$. Above that it isn't behaving well. So let's replot, excluding the values of $h$ which don't appear to be "sufficiently small".
```python
h_small = h_values[numpy.logical_and(h_values < 1e-2, h_values > 1e-6)]
differences_small = differences[numpy.logical_and(h_values < 1e-2, h_values > 1e-6)]
pyplot.loglog(h_small, differences_small, 'kx')
pyplot.xlabel('h')
pyplot.ylabel('Differences');
```
We now do our standard thing: assume that this is perfectly modelled by $D(h) = a h^s$ and use linear regression (of $\log(D)$ against $\log(h)$) to find the parameters $a$ and $s$. First we'll do this using the entire dataset.
```python
p_all = numpy.polyfit(numpy.log(h_small), numpy.log(differences_small), 1)
print("The measured value of s is {:.3f}".format(p_all[0]))
```
The measured value of s is 1.052
So this is off by about 5%. Let's plot the line of best fit and see where the difference lies.
```python
pyplot.loglog(h_small, differences_small, 'kx', label = 'Differences')
pyplot.loglog(h_small, numpy.exp(p_all[1])*h_small**p_all[0], 'b-', label = "Fit, slope = {:.3f}".format(p_all[0]))
pyplot.legend()
pyplot.xlabel('h')
pyplot.ylabel('Differences');
```
This is a bit concerning: why isn't the data appearing to get better? Let's try doing the fit for all possible sets of four consecutive points.
```python
for start in range(len(h_small)-4):
p_fourpoints = numpy.polyfit(numpy.log(h_small[start:start+4]), numpy.log(differences_small[start:start+4]), 1)
print("Measured value starting from point {} is {:.3f}".format(start, p_fourpoints[0]))
```
Measured value starting from point 0 is 1.009
Measured value starting from point 1 is 1.005
Measured value starting from point 2 is 1.004
Measured value starting from point 3 is 1.004
Measured value starting from point 4 is 1.007
Measured value starting from point 5 is 1.013
Measured value starting from point 6 is 1.027
Measured value starting from point 7 is 1.056
Measured value starting from point 8 is 1.121
So we get pretty good values at low resolutions: at high resolutions something odd is happening.
# More terms
Finally, a way to go beyond the single-term error model. If the differences behave as $D(h) = \alpha h^{s_1} + \beta h^{s_2}$ and the grid is halved at each step, then the successive differences $\alpha_i = y_{i+1} - y_i$ are a combination of $2^{-s_1 i}$ and $2^{-s_2 i}$, so they satisfy a two-term linear recurrence. The quadratic built below from four successive differences is its characteristic equation, and its roots recover *both* exponents from the data:
```python
ks = numpy.arange(6)
hs = 2.0**(-4-ks)
y = numpy.zeros(len(ks))
for i, h in enumerate(hs):
y[i] = 1.0 + 0.1*h+0.2*h**2
```
```python
alpha = numpy.zeros_like(y)
for i in range(len(y)-1):
alpha[i] = y[i+1]-y[i]
a_coeff = alpha[0]*alpha[2]-alpha[1]**2
b_coeff = alpha[1]*alpha[2]-alpha[0]*alpha[3]
c_coeff = alpha[1]*alpha[3] - alpha[2]**2
sol_plus = (-b_coeff + numpy.sqrt(b_coeff**2-4.0*a_coeff*c_coeff))/(2.0*a_coeff)
sol_minus = (-b_coeff - numpy.sqrt(b_coeff**2-4.0*a_coeff*c_coeff))/(2.0*a_coeff)
print("Two solutions are {} and {}".format(-numpy.log2(sol_plus), -numpy.log2(sol_minus)))
```
Two solutions are 0.99999999999764 and 1.999999999962835
Applications of Linear Algebra: PCA
====
We will explore 3 applications of linear algebra in data analysis - change of basis (for dimension reduction), projections (for solving linear systems) and the quadratic form (for optimization). The first application is the change of basis to the eigenvector basis that underlies Principal Components Analysis (PCA).
We will review the following in class:
- The standard basis
- Orthonormal basis and orthgonal matrices
- Change of basis
- Similar matrices
- Eigendecomposition
- Sample covariance
- Covariance as a linear transform
- PCA and dimension reduction
- PCA and "explained variance"
- SVD
```python
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
```
Variance and covariance
----
Remember the formula for covariance
$$
\text{Cov}(X, Y) = \frac{\sum_{i=1}^n(X_i - \bar{X})(Y_i - \bar{Y})}{n-1}
$$
where $\text{Cov}(X, X)$ is the sample variance of $X$.
```python
def cov(x, y):
"""Returns covariance of vectors x and y)."""
xbar = x.mean()
ybar = y.mean()
return np.sum((x - xbar)*(y - ybar))/(len(x) - 1)
```
```python
X = np.random.random(10)
Y = np.random.random(10)
```
```python
np.array([[cov(X, X), cov(X, Y)], [cov(Y, X), cov(Y,Y)]])
```
array([[ 0.08255874, -0.009372 ],
[-0.009372 , 0.08437116]])
```python
# This can of course be calculated using numpy's built in cov() function
np.cov(X, Y)
```
array([[ 0.08255874, -0.009372 ],
[-0.009372 , 0.08437116]])
```python
# Extension to more variables is done in a pair-wise way
Z = np.random.random(10)
np.cov([X, Y, Z])
```
array([[ 0.08255874, -0.009372 , 0.02351863],
[-0.009372 , 0.08437116, -0.02369603],
[ 0.02351863, -0.02369603, 0.12269876]])
Eigendecomposition of the covariance matrix
----
```python
mu = [0,0]
sigma = [[0.6,0.2],[0.2,0.2]]
n = 1000
x = np.random.multivariate_normal(mu, sigma, n).T
```
```python
A = np.cov(x)
```
```python
m = np.array([[1,2,3],[6,5,4]])
ms = m - m.mean(1).reshape(2,1)
np.dot(ms, ms.T)/2
```
array([[ 1., -1.],
[-1., 1.]])
```python
e, v = np.linalg.eig(A)
```
```python
plt.scatter(x[0,:], x[1,:], alpha=0.2)
for e_, v_ in zip(e, v.T):
plt.plot([0, 3*e_*v_[0]], [0, 3*e_*v_[1]], 'r-', lw=2)
plt.axis([-3,3,-3,3])
plt.title('Eigenvectors of covariance matrix scaled by eigenvalue.');
```
Covariance matrix as a linear transformation
----
The covariance matrix is a linear transformation that maps $\mathbb{R}^n$ in the direction of its eigenvectors with scaling factor given by the eigenvalues. Here we see it applied to a collection of random vectors in the box bounded by [-1, 1].
### We will assume we have a covariance matrix
```python
covx = np.array([[1,0.6],[0.6,1]])
```
### Create random vectors in a box
```python
u = np.random.uniform(-1, 1, (100, 2)).T
```
### Apply covariance matrix as linear transformation
```python
y = covx @ u
```
```python
e1, v1 = np.linalg.eig(covx)
```
### The linear transform maps the random vectors as described.
```python
plt.scatter(u[0], u[1], c='blue')
plt.scatter(y[0], y[1], c='orange')
for e_, v_ in zip(e1, v1.T):
plt.plot([0, e_*v_[0]], [0, e_*v_[1]], 'r-', lw=2)
plt.xticks([])
plt.yticks([])
pass
```
PCA
----
Principal Components Analysis (PCA) basically means to find and rank all the eigenvalues and eigenvectors of a covariance matrix. This is useful because high-dimensional data (with $p$ features) may have nearly all their variation in a small number of dimensions $k$, i.e. in the subspace spanned by the eigenvectors of the covariance matrix that have the $k$ largest eigenvalues. If we project the original data into this subspace, we can have a dimension reduction (from $p$ to $k$) with hopefully little loss of information.
Numerically, PCA is typically done using SVD on the data matrix rather than eigendecomposition on the covariance matrix. The next section explains why this works.
### Data matrices that have zero mean for all feature vectors
\begin{align}
\text{Cov}(X, Y) &= \frac{\sum_{i=1}^n(X_i - \bar{X})(Y_i - \bar{Y})}{n-1} \\
&= \frac{\sum_{i=1}^nX_iY_i}{n-1} \\
&= \frac{XY^T}{n-1}
\end{align}
and so the covariance matrix for a data set X that has zero mean in each feature vector is just $XX^T/(n-1)$.
In other words, we can also get the eigendecomposition of the covariance matrix from the positive semi-definite matrix $XX^T$.
### Note that zero-centering each feature vector does not affect the covariance matrix
```python
np.set_printoptions(precision=3)
```
```python
X = np.random.random((5,4))
X
```
array([[ 0.224, 0.136, 0.364, 0.189],
[ 0.79 , 0.007, 0.486, 0.682],
[ 0.349, 0.013, 0.484, 0.094],
[ 0.771, 0.924, 0.636, 0.692],
[ 0.33 , 0.01 , 0.439, 0.183]])
```python
### Subtract the row mean from each row
```
```python
Y = X - X.mean(1)[:, None]
```
```python
Y.mean(1)
```
array([ 0.000e+00, -2.776e-17, 0.000e+00, 2.776e-17, 0.000e+00])
```python
Y
```
array([[-0.004, -0.092, 0.136, -0.039],
[ 0.298, -0.484, -0.005, 0.191],
[ 0.114, -0.222, 0.249, -0.141],
[ 0.015, 0.168, -0.119, -0.064],
[ 0.089, -0.23 , 0.199, -0.058]])
```python
### Calculate the covariance
```
```python
np.cov(X)
```
array([[ 0.01 , 0.012, 0.02 , -0.01 , 0.017],
[ 0.012, 0.12 , 0.038, -0.029, 0.042],
[ 0.02 , 0.038, 0.048, -0.019, 0.04 ],
[-0.01 , -0.029, -0.019, 0.016, -0.019],
[ 0.017, 0.042, 0.04 , -0.019, 0.035]])
```python
np.cov(Y)
```
array([[ 0.01 , 0.012, 0.02 , -0.01 , 0.017],
[ 0.012, 0.12 , 0.038, -0.029, 0.042],
[ 0.02 , 0.038, 0.048, -0.019, 0.04 ],
[-0.01 , -0.029, -0.019, 0.016, -0.019],
[ 0.017, 0.042, 0.04 , -0.019, 0.035]])
Eigendecomposition of the covariance matrix
----
```python
e1, v1 = np.linalg.eig(np.dot(x, x.T)/(n-1))
```
```python
plt.scatter(x[0,:], x[1,:], alpha=0.2)
for e_, v_ in zip(e1, v1.T):
plt.plot([0, 3*e_*v_[0]], [0, 3*e_*v_[1]], 'r-', lw=2)
plt.axis([-3,3,-3,3]);
```
Change of basis via PCA
----
### We can transform the original data set so that the eigenvectors are the basis vectors and find the new coordinates of the data points with respect to this new basis
This is the change of basis transformation covered in the Linear Alegebra module. First, note that the covariance matrix is a real symmetric matrix, and so the eigenvector matrix is an orthogonal matrix.
```python
e, v = np.linalg.eig(np.cov(x))
v.dot(v.T)
```
array([[ 1., 0.],
[ 0., 1.]])
### Linear algebra review for change of basis
Graphical illustration of change of basis
----
Suppose we have a vector $u$ in the standard basis $B$ , and a matrix $A$ that maps $u$ to $v$, also in $B$. We can use the eigenvectors of $A$ to form a new basis $B'$. As explained above, to bring a vector $u$ from $B$-space to a vector $u'$ in $B'$-space, we multiply it by $Q^{-1}$, the inverse of the matrix having the eigenvectors as column vectors. Now, in the eigenvector basis, the equivalent operation to $A$ is the diagonal matrix $\Lambda$ - this takes $u'$ to $v'$. Finally, we convert $v'$ back to a vector $v$ in the standard basis by multiplying with $Q$.
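A small self-contained check of this similarity relation (a sketch using an arbitrary symmetric matrix, not tied to the data above):
```python
B = np.array([[0.6, 0.2], [0.2, 0.2]])   # any real symmetric matrix
lam, Q = np.linalg.eig(B)
# in the eigenvector basis, B acts as the diagonal matrix Lambda
print(np.round(Q.T @ B @ Q, 6))          # Q^{-1} = Q.T since Q is orthogonal
print(np.diag(lam))
```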
```python
ys = np.dot(v1.T, x)
```
#### Principal components
Principal components are simply the eigenvectors of the covariance matrix used as basis vectors. Each of the original data points is expressed as a linear combination of the principal components, giving rise to a new set of coordinates.
```python
plt.scatter(ys[0,:], ys[1,:], alpha=0.2)
for e_, v_ in zip(e1, np.eye(2)):
plt.plot([0, 3*e_*v_[0]], [0, 3*e_*v_[1]], 'r-', lw=2)
plt.axis([-3,3,-3,3]);
```
For example, if we only use the first row of `ys`, we will have the projection of the data onto the first principal component, capturing the majority of the variance in the data with a single feature that is a linear combination of the original features.
#### Transform back to original coordinates
We may need to transform the (reduced) data set to the original feature coordinates for interpretation. This is simply another linear transform (matrix multiplication).
```python
zs = np.dot(v1, ys)
```
```python
plt.scatter(zs[0,:], zs[1,:], alpha=0.2)
for e_, v_ in zip(e1, v1.T):
plt.plot([0, 3*e_*v_[0]], [0, 3*e_*v_[1]], 'r-', lw=2)
plt.axis([-3,3,-3,3]);
```
```python
u, s, v = np.linalg.svd(x)
u.dot(u.T)
```
array([[ 1., 0.],
[ 0., 1.]])
Dimension reduction via PCA
----
We have the spectral decomposition of the covariance matrix
$$
A = Q^{-1}\Lambda Q
$$
Suppose $\Lambda$ is a rank $p$ matrix. To reduce the dimensionality to $k \le p$, we simply set all but the first $k$ values of the diagonal of $\Lambda$ to zero. This is equivalent to ignoring all except the first $k$ principal components.
What does this achieve? Recall that $A$ is a covariance matrix, and the trace of the matrix is the overall variability, since it is the sum of the variances.
```python
A
```
array([[ 0.605, 0.202],
[ 0.202, 0.209]])
```python
A.trace()
```
0.81349656039067819
```python
e, v = np.linalg.eig(A)
D = np.diag(e)
D
```
array([[ 0.69 , 0. ],
[ 0. , 0.124]])
```python
D.trace()
```
0.81349656039067819
```python
D[0,0]/D.trace()
```
0.84806267856011852
Since the trace is invariant under change of basis, the total variability is also unchanged by PCA. By keeping only the first $k$ principal components, we can still "explain" $\sum_{i=1}^k e[i]/\sum{e}$ of the total variability. Sometimes, the degree of dimension reduction is specified as keeping enough principal components so that (say) $90\%$ of the total variability is explained.
### Using Singular Value Decomposition (SVD) for PCA
SVD is a decomposition of the data matrix $X = U S V^T$ where $U$ and $V$ are orthogonal matrices and $S$ is a diagonal matrix.
Recall that the transpose of an orthogonal matrix is also its inverse, so if we multiply on the right by $X^T$, we get the following simplification
\begin{align}
X &= U S V^T \\
X X^T &= U S V^T (U S V^T)^T \\
&= U S V^T V S U^T \\
&= U S^2 U^T
\end{align}
Comparing with the eigendecomposition of a matrix $A = W \Lambda W^{-1}$, we see that SVD gives us the eigendecomposition of the matrix $XX^T$, which, as we have just seen, is basically a scaled version of the covariance matrix of a zero-mean data matrix, with the eigenvectors given by $U$ and the eigenvalues by $S^2$ (scaled by $n-1$).
```python
u, s, v = np.linalg.svd(x)
```
```python
e2 = s**2/(n-1)
v2 = u
plt.scatter(x[0,:], x[1,:], alpha=0.2)
for e_, v_ in zip(e2, v2):
plt.plot([0, 3*e_*v_[0]], [0, 3*e_*v_[1]], 'r-', lw=2)
plt.axis([-3,3,-3,3]);
```
```python
v1 # from eigenvectors of covariance matrix
```
array([[ 0.922, -0.387],
[ 0.387, 0.922]])
```python
v2 # from SVD
```
array([[-0.922, -0.387],
[-0.387, 0.922]])
```python
e1 # from eigenvalues of covariance matrix
```
array([ 0.693, 0.124])
```python
e2 # from SVD
```
array([ 0.693, 0.124])
```python
a0 = np.random.normal(0,1,100)
a1 = a0 + np.random.normal(0,.5,100)
a2 = 2*a0 + a1 + np.random.normal(5,0.01,100)
xs = np.vstack([a0, a1, a2])
xs.shape
```
(3, 100)
```python
U, s, V = np.linalg.svd(xs)
```
```python
(s**2)/(99)
```
array([ 36.809, 1.658, 0.125])
```python
U
```
array([[-0.099, -0.675, -0.731],
[-0.113, -0.723, 0.682],
[-0.989, 0.149, -0.005]])
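To connect this back to "explained variance": the fraction of total variability captured by each principal component follows directly from the singular values. A sketch reusing the `s` computed above (note the data here is uncentered, following the computation above):
```python
explained = s**2 / (s**2).sum()
print(explained)            # ~[0.954, 0.043, 0.003]: the first PC dominates
print(explained.cumsum())   # cumulative fraction of variability explained
```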
<b>Sketch the graph and find an equation of the parabola that satisfies the given conditions.</b>
<b>20. Vertex: $V(4,1)$; directrix $d: y + 3 = 0$</b><br><br>
<b>Rearranging the equation of the directrix</b><br><br>
$d: y = -3$<br><br>
<b>From a sketch it is clear that the parabola's axis is parallel to the $y$ axis, so its equation has the form $(x-h)^2 = 2p(y-k)$</b><br><br>
<b>Substituting the vertex coordinates $h=4$ and $k=1$</b><br><br>
$(x-4)^2 = 2p(y-1)$<br><br>
<b>Finding the value of $p$ from the distance between the vertex and the point $D(4,-3)$ on the directrix</b><br><br>
$\frac{p}{2} = \sqrt{(4-4)^2 + (1-(-3))^2}$<br><br>
$\frac{p}{2} = \sqrt{0 + (4)^2}$<br><br>
$\frac{p}{2} = \sqrt{16} = 4$<br><br>
$p=8$<br><br>
<b>Substituting the value of $p$ into the formula</b><br><br>
$(x-4)^2 = 2 \cdot 8 \cdot (y-1)$<br><br>
$(x-4)^2 = 16(y-1)$<br><br>
$x^2 -8x + 16 = 16y -16$<br><br>
$x^2 -8x -16y + 32 = 0$<br><br>
<b>Graph of the parabola</b><br><br>
```python
from sympy import *
from sympy.plotting import plot_implicit
x, y = symbols("x y")
plot_implicit(Eq((x-4)**2, 16*(y-1)), (x,-20,20), (y,-10,20),
              title='Graph of the parabola', xlabel='x', ylabel='y');
```
```python
from __future__ import division
from sympy import *
x, y, z, t = symbols('x y z t')
k, m, n = symbols('k m n', integer=True)
f, g, h = symbols('f g h', cls=Function)
```
```python
solve(x+3-4,x)
```
[1]
The `Matrix()` command is a sympy function for creating matrices: the parentheses take the function's argument, and to build the matrices we use square brackets.
```python
A = Matrix([[3,8],[7,-2]])
```
```python
A
```
Matrix([
[3, 8],
[7, -2]])
```python
A.transpose()
```
Matrix([
[3, 7],
[8, -2]])
```python
A.T
```
Matrix([
[3, 7],
[8, -2]])
```python
A.inv()
```
Matrix([
[1/31, 4/31],
[7/62, -3/62]])
```python
X = Matrix([[1,0],[1,1],[1,2],[1,3]])
```
```python
X
```
Matrix([
[1, 0],
[1, 1],
[1, 2],
[1, 3]])
```python
X1 = X.T
```
```python
X1
```
Matrix([
[1, 1, 1, 1],
[0, 1, 2, 3]])
```python
X2 = X1*X
```
```python
X2
```
Matrix([
[4, 6],
[6, 14]])
```python
X3 = X2.inv()
```
```python
X3
```
Matrix([
[ 7/10, -3/10],
[-3/10, 1/5]])
```python
y = Matrix([[1],[1],[2],[2]])
```
```python
y
```
Matrix([
[1],
[1],
[2],
[2]])
```python
X4 = X1*y
```
```python
X4
```
Matrix([
[ 6],
[11]])
```python
beta = X3*X4
```
```python
beta
```
Matrix([
[9/10],
[ 2/5]])
```python
```
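As a sanity check (added here; not in the original notebook), numpy's least-squares solver gives the same regression coefficients:
```python
import numpy as np
Xn = np.array([[1, 0], [1, 1], [1, 2], [1, 3]], dtype=float)
yn = np.array([1, 1, 2, 2], dtype=float)
beta_np, *_ = np.linalg.lstsq(Xn, yn, rcond=None)
print(beta_np)  # [0.9 0.4], matching beta = [9/10, 2/5]
```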
|
24004a4513600760584ce847d73f4d2343e145a0
| 6,408 |
ipynb
|
Jupyter Notebook
|
matrices.ipynb
|
CogniMath/regression_matrices
|
1f8a384807dfc65f6e61462927f41cbcb500596c
|
[
"MIT"
] | null | null | null |
matrices.ipynb
|
CogniMath/regression_matrices
|
1f8a384807dfc65f6e61462927f41cbcb500596c
|
[
"MIT"
] | null | null | null |
matrices.ipynb
|
CogniMath/regression_matrices
|
1f8a384807dfc65f6e61462927f41cbcb500596c
|
[
"MIT"
] | null | null | null | 16.388747 | 171 | 0.413233 | true | 569 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.908618 | 0.835484 | 0.759135 |
__label__spa_Latn
| 0.106575 | 0.602058 |
# Foundations of Logic
## Propositional Logic
It is the mathematics of $2$ (two) Boolean values, `TRUE` (`T`, `1`) and `FALSE` (`F`, `0`),
and $5$ (five) operators.
### Propositional Logic Operators
|   | operator   | name         | programming |
|---|:----------:|:------------:|:-----------:|
| 1 | $\land$    | conjunction  | `AND`, `&&` |
| 2 | $\lor$     | disjunction  | `OR`, `\|\|` |
| 3 | $\lnot$    | negation     | `NOT`, `!` |
| 4 | $\implies$ | implication  |             |
| 5 | $\equiv$   | equivalence  | `==`        |
**Propositional Logic: Examples**
1. $x \land y$ is `TRUE` if both $x$ and $y$ are `TRUE`;
2. $x \lor y$ is `TRUE` if $x$ or $y$ is `TRUE` (or both);
3. $\lnot x$ is `TRUE` if $x$ is `FALSE`;
4. $x \implies y$ is `TRUE` if $x$ is `FALSE` or $y$ is `TRUE` (or both);
5. $x\equiv y$ is `TRUE` if $x$ and $y$ are both `TRUE` or both `FALSE`.
**Propositional Logic: Truth Tables**
| $x$   | $y$   | $\land$ | $\lor$ |
| ----- | ----- | ------- | ------ |
| *`F`* | *`F`* | `F`     | `F`    |
| *`F`* | *`T`* | `F`     | `T`    |
| *`T`* | *`F`* | `F`     | `T`    |
| *`T`* | *`T`* | `T`     | `T`    |

| $x$   | $y$   | $\Rightarrow$ | $\equiv$ |
| ----- | ----- | ------------- | -------- |
| *`F`* | *`F`* | `T`           | `T`      |
| *`F`* | *`T`* | `T`           | `F`      |
| *`T`* | *`F`* | `F`           | `F`      |
| *`T`* | *`T`* | `T`           | `T`      |
$x$ | $\lnot x$
:---: | :---:
`F` | `T`
`T` | `F`
**Python3**
```python
from sympy import *
x = True
y = True
x and y
```
True
```python
x or y
```
True
```python
not x
```
False
```python
Implies(x, y)
```
$\displaystyle \text{True}$
```python
x == y
```
True
## Predicate Logic (first-order logic)
Predicate logic extends propositional logic with the addition of operators
that work over sets of elements.
**Operators**
| operator | operation |
|:--:|:--:|
| $\forall$ | universal quantification (for all) |
| $\exists$ | existential quantification (there exists) |
**Examples**
$(\forall\ x \in R : S)$ - for every $x$ in $R$, the formula $S$ is `TRUE`;
$(\exists \ x \in R : S)$ - there exists an $x$ in $R$ for which the formula $S$ is `TRUE`.
$\forall\ x \in \{2, 3, 4, 5\} : x > 1$
$\exists\ x \in \{2, 3, 4, 5\} : x > 2$
$\forall\ x \in N : x\ mod\ 2 = 0$ (is every $x$ even?) `F`
$\exists\ x \in N : x\ mod\ 2 = 0$ (is some $x$ even?) `T`
$\exists\ x \in N : \sqrt{x} = 2$ `T`
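The finite quantified statements above translate directly into Python's `all()` and `any()` (an added illustration, not part of the original examples):
```python
xs = {2, 3, 4, 5}

all(x > 1 for x in xs)                  # forall x in {2,3,4,5} : x > 1  ->  True
any(x > 2 for x in xs)                  # exists x in {2,3,4,5} : x > 2  ->  True
any(x**0.5 == 2 for x in range(100))    # exists x in N : sqrt(x) = 2   ->  True (x = 4)
```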
## Exercises
1. Evaluate the following expressions for $(x,y)=(T,T)$, $(x,y)=(T,F)$, $(x,y)=(F,T)$ and $(x,y)=(F,F)$:
 - Expressions are always evaluated from left to right unless parentheses say otherwise; for example, in $x \lor (x \land y)$, first evaluate $(x \land y)$ and then the disjunction $\lor$;
 - $(x,y)=(T,F)$ means $x=T$ and $y=F$;
 - If you prefer, build a truth table for each expression.
a. $x\land y \lor x$
b. $\neg (x\lor y \land x)$
c. $x \land y \implies \neg x$
d. $(x \implies y) \equiv (x \lor y)$
e. $(x \land y) \equiv (x \lor y)$
f. $(\neg x \land y) \equiv (x \lor \neg y)$
g. $\neg(x \implies y) \equiv (x \land y)$
h. $x \lor (x \lor y) \land \neg(x \land \neg y)$
2. For the set of integers $Z$ ($\ldots, -3, -2, -1, 0, 1, 2, 3, \ldots$), evaluate the
following expressions:
 a. $\forall x \in Z : x \text{ is even} \land\ x \text{ is odd}$
 b. $\exists x \in Z : x \text{ is even} \land\ x \text{ is odd}$
 c. $\forall x \in Z : x \text{ is even} \lor\ x \text{ is odd}$
 d. $\exists x \in Z : x \text{ is even} \lor\ x \text{ is odd}$
 e. $\forall x \in Z : x \text{ is prime}$
 f. $\exists x \in Z : x \text{ is prime}$
 g. $\neg(\forall x \in Z : x \text{ is prime} )$
 h. $\forall x \in \{2,4,6,8\}, \forall y \in \{1,3,5,7\} : x + y = 11$
 i. $\exists x \in \{2,4,6,8\}, \forall y \in \{1,3,5,7\} : x + y = 11$
 j. $\exists x \in \{2,4,6,8\}, \exists y \in \{1,3,5,7\} : x + y = 11$
 l. $\forall x \in \{0\}, \forall y \in \{1,3,5,7\} : xy = 0$
3. [Scheinerman] Mark each of the following expressions about integers as true (T) or false (F):
a. $\forall x, \forall y : x + y = 0$
b. $\forall x, \exists y : x + y = 0$
c. $\exists x, \forall y : x + y = 0$
d. $\exists x, \exists y : x + y = 0$
e. $\forall x, \forall y : xy = 0$
f. $\forall x, \exists y : xy = 0$
g. $\exists x, \forall y : xy = 0$
h. $\exists x, \exists y : xy = 0$
|
1817df85cfbbf9f00a726ae51df618c58115cd15
| 10,213 |
ipynb
|
Jupyter Notebook
|
notebooks/MD-Semana03.ipynb
|
prof-holanda/matematica-discreta
|
83e3a1b008baf11a11cddc5148f85c523531bf16
|
[
"CC0-1.0"
] | null | null | null |
notebooks/MD-Semana03.ipynb
|
prof-holanda/matematica-discreta
|
83e3a1b008baf11a11cddc5148f85c523531bf16
|
[
"CC0-1.0"
] | null | null | null |
notebooks/MD-Semana03.ipynb
|
prof-holanda/matematica-discreta
|
83e3a1b008baf11a11cddc5148f85c523531bf16
|
[
"CC0-1.0"
] | 1 |
2021-05-18T13:25:27.000Z
|
2021-05-18T13:25:27.000Z
| 24.144208 | 218 | 0.41643 | true | 1,998 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.874077 | 0.843895 | 0.737629 |
__label__por_Latn
| 0.608072 | 0.552092 |
Find the volume of the following "stadium". The inner wall is part of a sphere of radius 2 and the outer wall is a cylinder of radius 4.
$$ \int_{\pi/2}^{2\pi} \int_{\pi/4}^{\pi/2} \int_2^{4/\sin\phi} \rho^2 \sin\phi \,d\rho\,d\phi\,d\theta $$
```python
from sympy import *
```
```python
rh,ph,th = var("rho phi theta")
```
```python
vol = integrate(rh**2*sin(ph),(rh,2,4/sin(ph)),(ph,pi/4,pi/2),(th,pi/2,2*pi))
simplify(vol)
```
$\displaystyle 2 \pi \left(16 - \sqrt{2}\right)$
```python
vol.evalf()
```
$\displaystyle 91.6451990385567$
```python
import numpy as np
from scipy.integrate import tplquad
```
```python
tplquad(lambda r,p,t: r**2*np.sin(p),np.pi/2,2*np.pi,np.pi/4,np.pi/2,2,lambda t,p: 4/np.sin(p))
```
(91.64519903855667, 1.0174661006896088e-12)
|
4112db9978eb05594220d579d9e467c0020c2f05
| 2,717 |
ipynb
|
Jupyter Notebook
|
extras/stadium-solution.ipynb
|
drewyoungren/mvc
|
f5217ae7888050d722c66de95756586f662841d2
|
[
"MIT"
] | null | null | null |
extras/stadium-solution.ipynb
|
drewyoungren/mvc
|
f5217ae7888050d722c66de95756586f662841d2
|
[
"MIT"
] | null | null | null |
extras/stadium-solution.ipynb
|
drewyoungren/mvc
|
f5217ae7888050d722c66de95756586f662841d2
|
[
"MIT"
] | null | null | null | 19.832117 | 147 | 0.485462 | true | 306 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.964855 | 0.885631 | 0.854506 |
__label__eng_Latn
| 0.426985 | 0.823637 |
# An intuitive introduction to the Entropy
Let $X$ be a discrete random variable (RV) taking values from set $\mathcal{X}$ with probability mass function $P(X)$.
*Definition* the entropy $H(X)$ of the discrete random variable $X$ is
\begin{equation}
H(X) = \sum_{x\in\mathcal{X}}P(x)\log \frac{1}{P(x)} = -\sum_{x\in\mathcal{X}}P(x)\log P(x).
\end{equation}
How to make sense out of this definition? We'll, rather informally, argue below that the entropy of an RV provides a lower bound on the amount of information it conveys, which we'll define as the average number of bits required to transmit the value the RV has taken.
As a motivating example consider asking your friend for advice. The probabilities of his answers are given in the table below:
| $x$ | $P(x)$ |
|----------|--------|
| OK | $1/2$ |
| Average | $1/4$ |
| Bad | $1/8$ |
| Terrible | $1/8$ |
To transmit the answer of your friend you must introduce an *encoding*, e.g.:
| $x$ | $P(x)$ | Code 1 |
|----------|--------|--------|
| OK | $1/2$ | 00 |
| Average | $1/4$ | 01 |
| Bad | $1/8$ | 10 |
| Terrible | $1/8$ | 11 |
Under this encoding, we spend 2 bits per answer.
However, we could also consider a variable-length code that uses shorter codewords for more frequent symbols:
| $x$ | $P(x)$ | Code 2 |
|----------|--------|--------|
| OK | $1/2$ | 0 |
| Average | $1/4$ | 10 |
| Bad | $1/8$ | 110 |
| Terrible | $1/8$ | 111 |
Under this encoding the average number of bits to encode an answer is:
\begin{equation}
\mathbb{E}[L] = \frac{1}{2} \cdot 1 + \frac{1}{4} \cdot 2 + \frac{1}{8} \cdot 3 + \frac{1}{8} \cdot 3 = \frac{7}{4}
\end{equation}
Thus, the new code is more efficient. Is it the best we can do?
### The code space
We'll now try to formalize the coding task, i.e. the assignment of code lengths to the possible values of the RV.
Let's first observe an important property of our code: in a variable-length code, no codeword can be a prefix of another one; otherwise decoding would be ambiguous. Therefore, whenever a value is assigned a codeword of length $L$, $1/2^L$ of the code space is reserved and not available to other codewords.
This can be visualised as a code space. Below, we indicate the codes assigned to symbols in the example and grey-out codes that are not available because the shorter codes are used:
We can observe that the length-1 code for "OK" uses $1/2$ of the available codes, the length-2 code for "Average" uses $1/4$, and the two length-3 codes for "Bad" and "Terrible" each use $1/8$ of the code space.
In general, a code of length $L$ uses $1/2^L$ of the code space. Equivalently, assigning a fraction $f$ of the code space to a symbol makes it use a codeword of length $L=\log_2(1/f)$.
Allowing fractional code lengths, our optimal coding problem can be formulated as partitioning the code space into four regions (one for each value of the RV) such that the average length of the code is minimised.
Formally, let $p_1, p_2, p_3, p_4$ be the probabilities assigned to the 4 symbols and let $f_1, f_2, f_3, f_4$ be the code-space fractions assigned to them.
We want to:
\begin{align}
\text{minimize } &p_1 \log_2 \frac{1}{f_1} + p_2 \log_2 \frac{1}{f_2} + p_3 \log_2 \frac{1}{f_3} + p_4 \log_2 \frac{1}{f_4} \\
\text{subject to: } & f_1 + f_2 + f_3 + f_4 = 1
\end{align}
For simplicity, we will solve this problem for the case of only two symbols:
\begin{align}
\text{minimize } &p_1 \log_2 \frac{1}{f_1} + p_2 \log_2 \frac{1}{f_2} \\
\text{subject to: } & f_1 + f_2 = 1
\end{align}
Notice first that $p_2 = 1-p_1$ and likewise $f_2 = 1-f_1$. Then our minimization objective becomes
\begin{equation}
\text{minimize } C = p_1 \log_2 \frac{1}{f_1} + (1-p_1) \log_2 \frac{1}{1-f_1}
\end{equation}
To get the minimum over $f_1$ we compute the derivative of the expression with respect to $f_1$ and set it to zero:
\begin{equation}
\frac{\partial C}{\partial f_1} = \frac{p_1}{\log 2}\frac{-1}{f_1} + \frac{1 - p_1}{\log 2}\frac{1}{1 - f_1}
\end{equation}
Multiplying both sides by $\log 2 f_1 (1- f_1)$ we obtain:
\begin{align}
p_1(1-f_1) &= (1-p_1)f_1 \\
p_1 - p_1f_1 & =f_1 - p_1f_1 \\
f_1 &= p_1
\end{align}
Thus the optimal fraction of code space allocated to symbol 1 is $p_1$, the probability assigned to this symbol, and the optimal code length is $\log_2(\frac{1}{p_1})$!
We now see that the entropy
\begin{equation}
H(X) = \sum_{x\in\mathcal{X}}P(x)\log \frac{1}{P(x)}
\end{equation}
is simply the average code length!
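As a quick numeric check (code added here; not in the original note), the entropy of the example distribution equals the average length of Code 2:
```python
import numpy as np
p = np.array([1/2, 1/4, 1/8, 1/8])
H = np.sum(p * np.log2(1/p))
print(H)  # 1.75 bits, matching E[L] = 7/4 for Code 2
```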
### A note about logarithm basis
It is customary to compute the entropy using natural logarithms, which gives its value in "nats". If $\log_2$ is used, the entropy has units of bits, and it lower-bounds the average number of bits needed to transmit a value of the RV.
# Further Reading
1. Chris Olah "Visual Information theory": https://colah.github.io/posts/2015-09-Visual-Information/
2. T. M. Cover and J. A. Thomas, "Elements of Information Theory", chapter 2
|
1e5b28fdf3bff6e9c754dc3ef47b78fd21e9fa8f
| 11,654 |
ipynb
|
Jupyter Notebook
|
ML/Lectures/02_entropy.ipynb
|
TheFebrin/DataScience
|
3e58b89315960e7d4896e44075a8105fcb78f0c0
|
[
"MIT"
] | null | null | null |
ML/Lectures/02_entropy.ipynb
|
TheFebrin/DataScience
|
3e58b89315960e7d4896e44075a8105fcb78f0c0
|
[
"MIT"
] | null | null | null |
ML/Lectures/02_entropy.ipynb
|
TheFebrin/DataScience
|
3e58b89315960e7d4896e44075a8105fcb78f0c0
|
[
"MIT"
] | null | null | null | 11,654 | 11,654 | 0.794663 | true | 1,663 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.937211 | 0.921922 | 0.864035 |
__label__eng_Latn
| 0.991754 | 0.845776 |
# Neural Network
## neuron
a neuron takes an input $x \in \mathbb{R}^{d}$, multiplies $x$ by weights $w$, adds a bias term $b$, and finally applies an activation function $g$.
that is:
$$f(x) = g(w^{T}x + b)$$
it is analogous to the functionality of a biological neuron.
some useful activation function:
$$
\begin{equation}
\begin{split}
\text{sigmoid:}\quad &g(z) = \frac{1}{1 + e^{-z}} \\
\text{tanh:}\quad &g(z) = \frac{e^{z}-e^{-z}}{e^{z} + e^{-z}} \\
\text{relu:}\quad &g(z) = max(z,0) \\
\text{leaky relu:}\quad &g(z) = max(z, \epsilon{z})\ ,\ \epsilon\text{ is a small positive number}\\
\text{identity:}\quad &g(z) = z
\end{split}
\end{equation}
$$
linear regression's forward process is a neuron with identity activation function.
logistic regression's forward process is a neuron with sigmoid activation function.
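As a concrete illustration (a minimal numpy sketch added here, not part of the original note):
```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, b, g=sigmoid):
    """f(x) = g(w^T x + b); with g=sigmoid this is logistic regression's forward pass."""
    return g(np.dot(w, x) + b)
```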
## neural network
building a neural network is analogous to building with lego bricks: you take individual bricks and stack them together to build complex structures.
we use bracketed superscripts to denote layers; taking the network above as an example:
$[0]$ denotes the input layer, $[1]$ the hidden layer, $[2]$ the output layer
$a^{[l]}$ denotes the output of layer $l$; set $a^{[0]} := x$
$z^{[l]}$ denotes the affine result of layer $l$
we have:
$$z^{[l]} = W^{[l]}a^{[l-1]} + b^{[l]}$$
$$a^{[l]} = g^{[l]}(z^{[l]})$$
where $W^{[l]} \in \mathbb{R}^{d[l] \times d[l-1]}$, $b^{[l]} \in \mathbb{R}^{d[l]}$.
## weight decay
recall that to mitigate overfitting, we use $l_{2}$ and $l_{1}$ regularization in linear and logistic regression.
weight decay is an alias for $l_{2}$ regularization and can be generalized to neural networks; in this setting we flatten and concatenate the $W^{[l]}$ matrices to get $w$.
first add an $l_{2}$ norm penalty:
$$J(w,b) = \sum_{i=1}^{n}l(w, b, x^{(i)}, y^{(i)}) + \frac{\lambda}{2}\left \| w \right \|^{2} $$
then by gradient descent, we have:
$$
\begin{equation}
\begin{split}
w:=& w-\eta\frac{\partial}{\partial w}J(w, b) \\
=& w-\eta\frac{\partial}{\partial w}\left(\sum_{i=1}^{n}l(w, b, x^{(i)}, y^{(i)}) + \frac{\lambda}{2}\left \| w \right \|^{2}\right) \\
=& (1 - \eta\lambda)w - \eta\frac{\partial}{\partial w}\sum_{i=1}^{n}l(w, b, x^{(i)}, y^{(i)})
\end{split}
\end{equation}
$$
the multiplication by $(1 - \eta\lambda)$ is the weight decay.
often we do not include the bias term in regularization, and the same goes for weight decay.
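A minimal numpy sketch of one such update (added for illustration; `lr` and `lam` are hypothetical hyper-parameter names):
```python
import numpy as np

def sgd_step_weight_decay(w, grad_w, lr=0.1, lam=1e-4):
    """w <- (1 - lr*lam) * w - lr * dL/dw: shrink the weights, then take the usual gradient step."""
    return (1.0 - lr * lam) * w - lr * grad_w
```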
## dropout
to strengthen robustness, we can deliberately add perturbations during training; dropout is one such technique.
concretely, we do the following to each hidden neuron:
$$
a_{dropout} =
\begin{cases}
0 &\text{with probability }p \\
\frac{a}{1-p} &\text{otherwise}
\end{cases}
$$
this operation randomly drops a neuron with probability $p$ while keeping the expectation unchanged:
$$E(a_{dropout}) = E(a)$$
this process is depicted below:
one more thing: dropout is not used at prediction time.
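An inverted-dropout sketch in numpy (added for illustration):
```python
import numpy as np

def dropout(a, p=0.5, training=True):
    """Zero each activation with probability p and rescale survivors by 1/(1-p),
    so that E[a_dropout] = E[a]."""
    if not training or p == 0.0:
        return a  # no dropout at prediction time
    mask = (np.random.rand(*a.shape) >= p) / (1.0 - p)
    return a * mask
```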
## prerequisites for back-propagation
suppose in forward-propagation $x \to y \to l$, where $x \in \mathbb{R}^{n}$, $y \in \mathbb{R} ^{m}$, loss $l \in \mathbb{R}$.
then:
$$
\frac{\partial l}{\partial y} = \begin{bmatrix}
\frac{\partial l}{\partial y_{1}} \\
...\\
\frac{\partial l}{\partial y_{m}}
\end{bmatrix}
\quad
\frac{\partial l}{\partial x} = \begin{bmatrix}
\frac{\partial l}{\partial x_{1}} \\
...\\
\frac{\partial l}{\partial x_{n}}
\end{bmatrix}
$$
by the multivariate chain rule (total differential):
$$
\frac{\partial l}{\partial x_{k}} = \sum_{j=1}^{m}\frac{\partial l}{\partial y_{j}}\frac{\partial y_{j}}{\partial x_{k}}
$$
then we can connect $\frac{\partial l}{\partial x}$ and $\frac{\partial l}{\partial y}$ by:
$$
\frac{\partial l}{\partial x} = \begin{bmatrix}
\frac{\partial l}{\partial x_{1}} \\
...\\
\frac{\partial l}{\partial x_{n}}
\end{bmatrix}
=
\begin{bmatrix}
\frac{\partial y_{1}}{\partial x_{1}} & ... & \frac{\partial y_{m}}{\partial x_{1}}\\
\vdots & \ddots & \vdots \\
\frac{\partial y_{1}}{\partial x_{n}}& .... & \frac{\partial y_{m}}{\partial x_{n}}
\end{bmatrix}
\begin{bmatrix}
\frac{\partial l}{\partial y_{1}} \\
...\\
\frac{\partial l}{\partial y_{m}}
\end{bmatrix}
=
(\frac{\partial y}{\partial x})^{T}\frac{\partial l}{\partial y}
$$
here $\frac{\partial y}{\partial x}$ is the Jacobian matrix.
unlike other activation functions, each softmax output depends on all the neurons in its layer, so we need the full Jacobian of softmax, where $\delta_{ij}$ is the Kronecker delta:
$$
\frac{\partial a_{i}}{\partial z_{j}} = \frac{\partial}{\partial z_{j}}\left(\frac{exp(z_{i})}{\sum_{s=1}^{k}exp(z_{s})}\right) = a_{i}\left(\delta_{ij} - a_{j}\right)
$$
it is easy to check that the Jacobian of matrix multiplication is:
$$\frac{\partial Mx}{\partial x} = M$$
## back-propagation
gradient descent update rule:
$$W^{[l]} = W^{[l]} - \alpha\frac{\partial{L}}{\partial{W^{[l]}}}$$
$$b^{[l]} = b^{[l]} - \alpha\frac{\partial{L}}{\partial{b^{[l]}}}$$
to proceed, we must compute the gradient with respect to the parameters.
we can define a three-step recipe for computing the gradients as follows:
1. for the output layer, we have:
$$
\frac{\partial L(\hat{y}, y)}{\partial z^{[N]}} = (\frac{\partial \hat{y}}{\partial z^{[N]}})^{T}\frac{\partial L(\hat{y}, y)}{\partial \hat{y}}
$$
if $g^{[N]}$ is softmax.
$$
\frac{\partial L(\hat{y}, y)}{\partial z^{[N]}} = \frac{\partial L(\hat{y}, y)}{\partial \hat{y}} \odot {g^{[N]}}'(z^{[N]})
$$
if not softmax.
the above computations are all straightforward.
2. for $l=N-1,\ldots,1$, we have:
$$z^{[l + 1]} = W^{[l + 1]}a^{[l]} + b^{[l + 1]}$$
so by our prerequisites:
$$
\frac{\partial L}{\partial a^{[l]}} = (\frac{\partial z^{[l+1]}}{\partial a^{[l]}})^{T}\frac{\partial L}{\partial z^{[l+1]}} = (W^{[l+1]})^{T}\frac{\partial L}{\partial z^{[l+1]}}
$$
we also have:
$$a^{[l]} = g^{[l]}z^{[l]}$$
we do not use softmax activation in hidden layers, so the dependence is element-wise:
$$\frac{\partial L}{\partial z^{[l]}} = \frac{\partial L}{\partial a^{[l]}} \odot {g^{[l]}}'(z^{[l]})$$
combining the two equations:
$$\frac{\partial L}{\partial z^{[l]}} = (W^{[l+1]})^{T}\frac{\partial L}{\partial z^{[l+1]}} \odot {g^{[l]}}'(z^{[l]})$$
3. final step: because
$$z^{[l]} = W^{[l]}a^{[l - 1]} + b^{[l]}$$
so:
$$\frac{\partial L}{\partial W^{[l]}} = \frac{\partial L}{\partial z^{[l]}}(a^{[l - 1]})^{T}$$
$$\frac{\partial L}{\partial b^{[l]}}=\frac{\partial L}{\partial z^{[l]}}$$
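The three-step recipe in numpy for a single hidden layer (a sketch added here, assuming a ReLU hidden layer, an identity output, and squared loss):
```python
import numpy as np

def relu(z):
    return np.maximum(z, 0)

def relu_prime(z):
    return (z > 0).astype(z.dtype)

def backprop_two_layer(x, y, W1, b1, W2, b2):
    """Gradients for L = 0.5*||a2 - y||^2 with a1 = relu(W1 x + b1), a2 = W2 a1 + b2."""
    # forward pass
    z1 = W1 @ x + b1
    a1 = relu(z1)
    z2 = W2 @ a1 + b2
    a2 = z2                                # identity output activation
    # step 1: output layer
    dz2 = a2 - y                           # dL/dz2
    # step 2: propagate back to the hidden layer
    dz1 = (W2.T @ dz2) * relu_prime(z1)
    # step 3: parameter gradients
    dW2, db2 = np.outer(dz2, a1), dz2
    dW1, db1 = np.outer(dz1, x), dz1
    return dW1, db1, dW2, db2
```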
## xavier initialization
to mitigate vanishing and exploding gradients and to ensure symmetry breaking, we should initialize the weights carefully.
consider a fully connected layer without a bias term or activation function:
$$o_{i} = \sum_{j=1}^{n_{in}}w_{ij}x_{j}$$
suppose each $w_{ij}$ is drawn from a distribution with mean 0 and variance $\sigma^{2}$, not necessarily Gaussian.
suppose each $x_{j}$ is drawn from a distribution with mean 0 and variance $\gamma^{2}$, and all $w_{ij}, x_{j}$ are independent.
then the mean of $o_{i}$ is of course 0, and its variance is:
$$
\begin{equation}
\begin{split}
Var[o_{i}] =& E[o_{i}^{2}] - (E[o_{i}])^{2}\\
=&\sum_{j=1}^{n_{in}}E[w_{ij}^{2}x_{j}^{2}] \\
=&\sum_{j=1}^{n_{in}}E[w_{ij}^{2}]E[x_{j}^{2}] \\
=&n_{in}\sigma^{2}\gamma^{2}
\end{split}
\end{equation}
$$
to keep variance fixed, we need to set $n_{in}\sigma^{2}=1$.
considering back-propagation, we have:
$$\frac{\partial L}{\partial x_{j}} = \sum_{i=1}^{n_{out}}w_{ij}\frac{\partial L}{\partial o_{i}}$$
so by the same reasoning, we need to set $n_{out}\sigma^{2} = 1$.
we cannot satisfy both conditions simultaneously, so we simply try to satisfy:
$$\frac{1}{2}(n_{in} + n_{out})\sigma^{2} = 1 \ \text{ or }\ \sigma = \sqrt{\frac{2}{n_{in} + n_{out}}}$$
this is the reasoning behind Xavier initialization.
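A minimal sketch of the corresponding initializer (added for illustration; a uniform variant is also common):
```python
import numpy as np

def xavier_init(n_in, n_out):
    """Draw W with standard deviation sqrt(2/(n_in + n_out))."""
    sigma = np.sqrt(2.0 / (n_in + n_out))
    return np.random.randn(n_out, n_in) * sigma
```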
```python
```
|
b3f83a3099cc325ee56598eaca629c3eb3b6e5a0
| 11,299 |
ipynb
|
Jupyter Notebook
|
_build/html/_sources/09_neural_network.ipynb
|
newfacade/machine-learning-notes
|
1e59fe7f9b21e16151654dee888ceccc726274d3
|
[
"MIT"
] | null | null | null |
_build/html/_sources/09_neural_network.ipynb
|
newfacade/machine-learning-notes
|
1e59fe7f9b21e16151654dee888ceccc726274d3
|
[
"MIT"
] | null | null | null |
_build/html/_sources/09_neural_network.ipynb
|
newfacade/machine-learning-notes
|
1e59fe7f9b21e16151654dee888ceccc726274d3
|
[
"MIT"
] | null | null | null | 33.52819 | 200 | 0.48491 | true | 2,629 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.955319 | 0.870597 | 0.831698 |
__label__eng_Latn
| 0.928043 | 0.770647 |
# Water heating
An insulated, rigid tank contains 4 kg of water at 100 kPa, where initially 0.25 of the mass is liquid. An electric heater turns on and operates until all of the liquid has vaporized. (Neglect the heat capacity of the tank and heater.)
<figure>
<center>
  <figcaption>Figure: Rigid tank containing water and electric heater</figcaption>
</center>
</figure>
**Problem:**
- Determine the final temperature and pressure of the water.
- Determine the electrical work required by this process.
- Determine the total change in entropy associated with this process.
- Plot the state points for the water on a temperature-specific entropy diagram.
First, load the necessary modules and specify the known/initial conditions.
```python
%matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np
import cantera as ct
from pint import UnitRegistry
ureg = UnitRegistry()
Q_ = ureg.Quantity
```
```python
mass = Q_(4, 'kg')
pressure_initial = Q_(100, 'kPa')
quality_initial = 0.25
quality_final = 1.0
# specify the initial state using pressure and quality
state_initial = ct.Water()
state_initial.PX = pressure_initial.to('Pa').magnitude, quality_initial
state_initial()
```
water:
temperature 372.809 K
pressure 100000 Pa
density 2.35669 kg/m^3
mean mol. weight 18.016 amu
vapor fraction 0.25
1 kg 1 kmol
----------- ------------
enthalpy -1.49887e+07 -2.7e+08 J
internal energy -1.50312e+07 -2.708e+08 J
entropy 6337.06 1.142e+05 J/K
Gibbs function -1.73512e+07 -3.126e+08 J
heat capacity c_p inf inf J/K
heat capacity c_v 20936.9 3.772e+05 J/K
## Find final temperature and pressure
Due to conservation of mass, since the mass and volume of the system are fixed, the specific volume and density must be constant:
\begin{align}
v_2 &= v_1 \\
\rho_2 &= \rho_1
\end{align}
Therefore the final state is fixed by the density and quality, where $x_2 = 1$:
```python
state_final = ct.Water()
# try to fix the final state using density and quality (this call will fail):
state_final.DX = state_initial.density, quality_final
```
Hmm, what happened here? It looks like Cantera unfortunately does not support specifying the thermodynamic state using density and quality. (With quality as one property, it only supports temperature or pressure as the other property.)
Fortunately, CoolProp *does* support specifying the state the way we need to solve this problem, so let's use that for the final state:
```python
from CoolProp.CoolProp import PropsSI
temp_final = PropsSI(
'T', 'D', state_initial.density, 'Q', quality_final, 'water'
) * ureg.kelvin
pres_final = PropsSI(
'P', 'D', state_initial.density, 'Q', quality_final, 'water'
) * ureg.pascal
print(f'Final temperature: {temp_final: .2f}')
print(f'Final pressure: {pres_final: .2f}')
# We can then set the final state using the Cantera object,
# now that we know temperature
state_final = ct.Water()
state_final.TX = temp_final.magnitude, quality_final
```
Final temperature: 420.08 kelvin
Final pressure: 438257.38 pascal
## Find electrical work required
To find the work required, we can do an energy balance on the (closed) system:
\begin{equation}
W_{\text{in}} = m (u_2 - u_1)
\end{equation}
```python
work = mass * (Q_(state_final.u, 'J/kg') - Q_(state_initial.u, 'J/kg'))
print(f'Electrical work required: {work.to(ureg.megajoule): .2f}')
```
Electrical work required: 6.47 megajoule
## Find entropy change
The total entropy change is the change in entropy of the system plus that of the surroundings:
\begin{align}
\Delta S_{\text{total}} &= \Delta S_{\text{system}} + \Delta S_{\text{surr}} \\
\Delta S_{\text{total}} &= \Delta S_{\text{system}} = m (s_2 - s_1)
\end{align}
since the entropy change of the surroundings is zero.
```python
entropy_change = mass * (Q_(state_final.s, 'J/(kg*K)') - Q_(state_initial.s, 'J/(kg*K)'))
print(f'Entropy change: {entropy_change: .2f}')
```
Entropy change:  16195.98 joule / kelvin
This process is irreversible, associated with a positive increase in total entropy.
## Plot the state points for water
We can construct the saturated liquid and saturated vapor lines in a temperature–specific entropy diagram (T–s diagram), and then plot the initial and final states locations along with the process line (of constant density):
```python
f = ct.Water()
# Array of temperatures from fluid minimum temperature to critical temperature
temps = np.arange(np.ceil(f.min_temp) + 0.15, f.critical_temperature, 0.1)
def get_sat_entropy_fluid(T):
'''Gets entropy for temperature along saturated liquid line'''
f = ct.Water()
f.TX = T, 0.0
return f.s
def get_sat_entropy_gas(T):
'''Gets entropy for temperature along saturated vapor line'''
f = ct.Water()
f.TX = T, 1.0
return f.s
# calculate entropy values associated with temperatures along
# saturation lines
entropies_f = np.array([get_sat_entropy_fluid(T) for T in temps])
entropies_g = np.array([get_sat_entropy_gas(T) for T in temps])
# critical point
f.TP = f.critical_temperature, f.critical_pressure
fig, ax = plt.subplots(figsize=(5, 3))
# Plot the saturated liquid line, critical point,
# and saturated vapor line
ax.plot(entropies_f, temps)
ax.plot(f.s, f.T, 'o')
ax.plot(entropies_g, temps)
plt.xlabel('Specific entropy (J/kg⋅K)')
plt.ylabel('Temperature (K)')
# Plot the initial and final states, and label them
ax.plot(state_initial.s, state_initial.T, 's')
ax.annotate('(1)', xy=(state_initial.s, state_initial.T),
xytext=(0, -20), textcoords='offset points',
ha='right', va='bottom'
)
ax.plot(state_final.s, state_final.T, 's')
ax.annotate('(2)', xy=(state_final.s, state_final.T),
xytext=(20, 0), textcoords='offset points',
ha='right', va='bottom'
)
# show process line of constant density
temps = np.arange(state_initial.T, state_final.T, 0.1)
def get_entropy(T, density):
    '''Gets entropy for temperature at the fixed process density'''
    f = ct.Water()
    f.TD = T, density
    return f.s
entropies = np.array([get_entropy(T, state_initial.density) for T in temps])
ax.plot(entropies, temps, '--')
plt.grid(True)
fig.tight_layout()
plt.show()
```
```python
```
|
6ff3773c0d2ff0703385a702c7f955be2a29b102
| 133,311 |
ipynb
|
Jupyter Notebook
|
content/second-law/heating-water.ipynb
|
msb002/computational-thermo
|
9302288217a36e0ce29e320688a3f574921909a5
|
[
"CC-BY-4.0",
"BSD-3-Clause"
] | null | null | null |
content/second-law/heating-water.ipynb
|
msb002/computational-thermo
|
9302288217a36e0ce29e320688a3f574921909a5
|
[
"CC-BY-4.0",
"BSD-3-Clause"
] | null | null | null |
content/second-law/heating-water.ipynb
|
msb002/computational-thermo
|
9302288217a36e0ce29e320688a3f574921909a5
|
[
"CC-BY-4.0",
"BSD-3-Clause"
] | null | null | null | 117.144991 | 86,887 | 0.801629 | true | 1,715 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.800692 | 0.709019 | 0.567706 |
__label__eng_Latn
| 0.902585 | 0.157301 |
dS/dt=-bSI, dI/dt=bSI (using b for beta)
```python
from sympy import *
from sympy.abc import S,I,t,b
```
```python
#puntos criticos
P=-b*S*I
Q=b*S*I
#establecer P(S,I)=0 y Q(S,I)=0
Peqn=Eq(P,0)
Qeqn=Eq(Q,0)
print(solve((Peqn,Qeqn),S,I))
#matriz Jacobiana
J11=diff(P,S)
J12=diff(P,I)
J21=diff(Q,S)
J22=diff(Q,I)
J=Matrix([[J11,J12],[J21,J22]])
pprint(J)
```
[(0, I), (S, 0)]
⎡-I⋅b -S⋅b⎤
⎢ ⎥
⎣I⋅b S⋅b ⎦
```python
#J en el punto critico
Jc1=J.subs([(S,0),(I,I)])
pprint(Jc1.eigenvals())
pprint(Jc1.eigenvects())
Jc2=J.subs([(S,S),(I,0)])
pprint(Jc2.eigenvals())
pprint(Jc2.eigenvects())
```
{0: 1, -I⋅b: 1}
⎡⎛ ⎡⎡0⎤⎤⎞ ⎛ ⎡⎡-1⎤⎤⎞⎤
⎢⎜0, 1, ⎢⎢ ⎥⎥⎟, ⎜-I⋅b, 1, ⎢⎢ ⎥⎥⎟⎥
⎣⎝ ⎣⎣1⎦⎦⎠ ⎝ ⎣⎣1 ⎦⎦⎠⎦
{0: 1, S⋅b: 1}
⎡⎛ ⎡⎡1⎤⎤⎞ ⎛ ⎡⎡-1⎤⎤⎞⎤
⎢⎜0, 1, ⎢⎢ ⎥⎥⎟, ⎜S⋅b, 1, ⎢⎢ ⎥⎥⎟⎥
⎣⎝ ⎣⎣0⎦⎦⎠ ⎝ ⎣⎣1 ⎦⎦⎠⎦
The critical points are non-hyperbolic.
```python
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
import pylab as pl
import matplotlib
```
```python
b=1
def dx_dt(x,t):
return [ -b*x[0]*x[1] , b*x[1]*x[0] ]
#trajectories forward in time
ts=np.linspace(0,10,500)
ic=np.linspace(20000,100000,3)
for r in ic:
for s in ic:
x0=[r,s]
xs=odeint(dx_dt,x0,ts)
plt.plot(xs[:,0],xs[:,1],"-", color="orangered", lw=1.5)
#trajectories backward in time
ts=np.linspace(0,-10,500)
ic=np.linspace(20000,100000,3)
for r in ic:
for s in ic:
x0=[r,s]
xs=odeint(dx_dt,x0,ts)
plt.plot(xs[:,0],xs[:,1],"-", color="orangered", lw=1.5)
#axis labels and font style
plt.xlabel('S',fontsize=20)
plt.ylabel('I',fontsize=20)
plt.tick_params(labelsize=12)
plt.ticklabel_format(style="sci", scilimits=(0,0))
plt.xlim(0,100000)
plt.ylim(0,100000)
#vector field
X,Y=np.mgrid[0:100000:15j,0:100000:15j]
u=-b*X*Y
v=b*Y*X
pl.quiver(X,Y,u,v,color='dimgray')
plt.savefig("SIinf.pdf",bbox_inches='tight')
plt.show()
```
Bifurcation Analysis
The critical points of the system do not depend on the parameter beta, so they do not change as beta is varied.
|
a099718e70389d1ba800e59d3b011c3b1eac317a
| 138,254 |
ipynb
|
Jupyter Notebook
|
ModeloSI(infecciosas).ipynb
|
deleonja/dynamical-sys
|
024acc61a4e36d46b1502ce0391707e4afbc58e2
|
[
"MIT"
] | null | null | null |
ModeloSI(infecciosas).ipynb
|
deleonja/dynamical-sys
|
024acc61a4e36d46b1502ce0391707e4afbc58e2
|
[
"MIT"
] | null | null | null |
ModeloSI(infecciosas).ipynb
|
deleonja/dynamical-sys
|
024acc61a4e36d46b1502ce0391707e4afbc58e2
|
[
"MIT"
] | null | null | null | 723.842932 | 83,496 | 0.780556 | true | 1,081 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.943348 | 0.874077 | 0.824559 |
__label__spa_Latn
| 0.175819 | 0.754059 |
# ODE
We will solve the following linear Cauchy model
\begin{align}
y^{\prime}(t) &= \lambda y(t)\\
y(0) & = 1
\end{align}
whose exact solution is
$$
y(t) = e^{\lambda t}
$$
```python
%matplotlib inline
from numpy import *
from matplotlib.pyplot import *
import scipy.linalg
import numpy.linalg
l = -5.
t0 = 0.
tf = 10.
y0 = 1.
s = linspace(t0,tf,5000)
exact = lambda x: exp(l*x)
```
### Forward Euler
$$
\frac{y_{n}-y_{n-1}}{h} = f(y_{n-1}, t_{n-1})
$$
```python
def fe(l,y0,t0,tf,h):
timesteps = arange(t0,tf+1e-10, h)
sol = zeros_like(timesteps)
sol[0] = y0
for i in range(1,len(sol)):
sol[i] = sol[i-1]*(1+l*h)
return sol, timesteps
y, t = fe(l,y0,t0,tf,0.1)
_ = plot(t,y, 'o-')
_ = plot(s,exact(s))
error = numpy.linalg.norm(exact(t) - y, 2)
print(error)
```
### Backward Euler
$$
\frac{y_{n}-y_{n-1}}{h} = f(y_{n}, t_{n})
$$
```python
def be(l,y0,t0,tf,h):
pass # TODO
```
### $\theta$-method
$$
\frac{y_{n}-y_{n-1}}{h} = \theta\, f(y_{n}, t_{n}) + (1-\theta)\,f(y_{n-1}, t_{n-1})
$$
```python
def tm(theta,l,y0,t0,tf,h):
pass # TODO
```
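The two stubs above are deliberately left as exercises; since the adaptive stepper below calls `tm`, here is one possible completion (a sketch that exploits the fact that for $f(y,t)=\lambda y$ each implicit step can be solved in closed form):
```python
def be(l, y0, t0, tf, h):
    # Backward Euler: y_n = y_{n-1} + h*l*y_n  =>  y_n = y_{n-1}/(1 - l*h)
    timesteps = arange(t0, tf + 1e-10, h)
    sol = zeros_like(timesteps)
    sol[0] = y0
    for i in range(1, len(sol)):
        sol[i] = sol[i-1] / (1. - l*h)
    return sol, timesteps

def tm(theta, l, y0, t0, tf, h):
    # theta-method: y_n = y_{n-1} * (1 + (1-theta)*l*h) / (1 - theta*l*h)
    timesteps = arange(t0, tf + 1e-10, h)
    sol = zeros_like(timesteps)
    sol[0] = y0
    for i in range(1, len(sol)):
        sol[i] = sol[i-1] * (1. + (1. - theta)*l*h) / (1. - theta*l*h)
    return sol, timesteps
```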
### Simple adaptive time stepper
For each time step:
- Compute solution with CN
- Compute solution with BE
- Check the difference
- If the difference satisfy a given tolerance:
- keep the solution of higher order
- double the step size
- go to the next step
- Else:
- half the step size and repeat the time step
```python
def adaptive(l,y0,t0,tf,h0, hmax=0.9,tol=1e-3):
sol = []
sol.append(y0)
t = []
t.append(t0)
h = h0
while t[-1] < tf:
        #print('current t =', t[-1], ' h=', h)
current_sol = sol[-1]
current_t = t[-1]
sol_cn, _ = tm(0.5,l,current_sol,current_t, current_t + h, h)
sol_be, _ = tm(1.,l,current_sol,current_t, current_t + h, h)
if (abs(sol_cn[-1] - sol_be[-1]) < tol): #accept
sol.append(sol_cn[-1])
t.append(current_t+h)
h *= 2.
if h > hmax:
h=hmax
else:
h /= 2.
return sol, t
y,t = adaptive(l,y0,t0,tf,0.9)
_ = plot(t,y, 'o-')
_ = plot(s,exact(array(s)))
error = numpy.linalg.norm(exact(array(t)) - y, infty)
print(error, len(y))
```
```python
```
```python
```
|
b1fd088ab82ab536594403f3f4dcf6283e13a6c3
| 31,698 |
ipynb
|
Jupyter Notebook
|
notes/07a_ode.ipynb
|
nicdom23/P1.4_seed
|
002066fdb10b7f1d687683fb0618be9a5a4503d4
|
[
"CC-BY-4.0"
] | 14 |
2017-10-10T14:35:52.000Z
|
2021-07-15T18:15:49.000Z
|
notes/07a_ode.ipynb
|
emaballarin/P1.4_seed
|
19e46b174745c4071ebcda08a23cdba4f05fd676
|
[
"CC-BY-4.0"
] | 2 |
2018-01-31T12:05:45.000Z
|
2020-01-26T18:24:57.000Z
|
notes/07a_ode.ipynb
|
emaballarin/P1.4_seed
|
19e46b174745c4071ebcda08a23cdba4f05fd676
|
[
"CC-BY-4.0"
] | 129 |
2017-10-05T09:08:27.000Z
|
2021-03-13T20:30:07.000Z
| 123.338521 | 17,220 | 0.872105 | true | 810 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.912436 | 0.877477 | 0.800642 |
__label__eng_Latn
| 0.505215 | 0.698491 |
**This notebook is an exercise in the [Computer Vision](https://www.kaggle.com/learn/computer-vision) course. You can reference the tutorial at [this link](https://www.kaggle.com/ryanholbrook/convolution-and-relu).**
---
# Introduction #
In this exercise, you'll work on building some intuition around feature extraction. First, we'll walk through the example we did in the tutorial again, but this time, with a kernel you choose yourself. We've mostly been working with images in this course, but what's behind all of the operations we're learning about is mathematics. So, we'll also take a look at how these feature maps can be represented instead as arrays of numbers and what effect convolution with a kernel will have on them.
Run the cell below to get started!
```python
# Setup feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.computer_vision.ex2 import *
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
plt.rc('figure', autolayout=True)
plt.rc('axes', labelweight='bold', labelsize='large',
titleweight='bold', titlesize=18, titlepad=10)
plt.rc('image', cmap='magma')
tf.config.run_functions_eagerly(True)
```
# Apply Transformations #
The next few exercises walk through feature extraction just like the example in the tutorial. Run the following cell to load an image we'll use for the next few exercises.
```python
image_path = '../input/computer-vision-resources/car_illus.jpg'
image = tf.io.read_file(image_path)
image = tf.io.decode_jpeg(image, channels=1)
image = tf.image.resize(image, size=[400, 400])
img = tf.squeeze(image).numpy()
plt.figure(figsize=(6, 6))
plt.imshow(img, cmap='gray')
plt.axis('off')
plt.show();
```
You can run this cell to see some standard kernels used in image processing.
```python
import learntools.computer_vision.visiontools as visiontools
from learntools.computer_vision.visiontools import edge, bottom_sobel, emboss, sharpen
kernels = [edge, bottom_sobel, emboss, sharpen]
names = ["Edge Detect", "Bottom Sobel", "Emboss", "Sharpen"]
plt.figure(figsize=(12, 12))
for i, (kernel, name) in enumerate(zip(kernels, names)):
plt.subplot(1, 4, i+1)
visiontools.show_kernel(kernel)
plt.title(name)
plt.tight_layout()
```
# 1) Define Kernel #
Use the next code cell to define a kernel. You have your choice of what kind of kernel to apply. One thing to keep in mind is that the *sum* of the numbers in the kernel determines how bright the final image is. Generally, you should try to keep the sum of the numbers between 0 and 1 (though that's not required for a correct answer).
In general, a kernel can have any number of rows and columns. For this exercise, let's use a $3 \times 3$ kernel, which often gives the best results. Define a kernel with `tf.constant`.
```python
# YOUR CODE HERE: Define a kernel with 3 rows and 3 columns.
kernel = tf.constant([
[-1, -1, -1],
[-1, 8, -1],
[-1, -1, -1]
])
# Uncomment to view kernel
# visiontools.show_kernel(kernel)
# Check your answer
q_1.check()
```
<span style="color:#33cc33">Correct</span>
```python
# Lines below will give you a hint or solution code
#q_1.hint()
#q_1.solution()
```
Now we'll do the first step of feature extraction, the filtering step. First run this cell to do some reformatting for TensorFlow.
```python
# Reformat for batch compatibility.
image = tf.image.convert_image_dtype(image, dtype=tf.float32)
image = tf.expand_dims(image, axis=0)
kernel = tf.reshape(kernel, [*kernel.shape, 1, 1])
kernel = tf.cast(kernel, dtype=tf.float32)
```
# 2) Apply Convolution #
Now we'll apply the kernel to the image by a convolution. The *layer* in Keras that does this is `layers.Conv2D`. What is the *backend function* in TensorFlow that performs the same operation?
```python
# YOUR CODE HERE: Give the TensorFlow convolution function (without arguments)
conv_fn = tf.nn.conv2d
# Check your answer
q_2.check()
```
<span style="color:#33cc33">Correct</span>
```python
# Lines below will give you a hint or solution code
#q_2.hint()
#q_2.solution()
```
Once you've got the correct answer, run this next cell to execute the convolution and see the result!
```python
image_filter = conv_fn(
input=image,
filters=kernel,
strides=1, # or (1, 1)
padding='SAME',
)
plt.imshow(
# Reformat for plotting
tf.squeeze(image_filter)
)
plt.axis('off')
plt.show();
```
Can you see how the kernel you chose relates to the feature map it produced?
# 3) Apply ReLU #
Now detect the feature with the ReLU function. In Keras, you'll usually use this as the activation function in a `Conv2D` layer. What is the *backend function* in TensorFlow that does the same thing?
```python
# YOUR CODE HERE: Give the TensorFlow ReLU function (without arguments)
relu_fn = tf.nn.relu
# Check your answer
q_3.check()
```
<span style="color:#33cc33">Correct</span>
```python
# Lines below will give you a hint or solution code
#q_3.hint()
#q_3.solution()
```
Once you've got the solution, run this cell to detect the feature with ReLU and see the result!
The image you see below is the feature map produced by the kernel you chose. If you like, experiment with some of the other suggested kernels above, or, try to invent one that will extract a certain kind of feature.
```python
image_detect = relu_fn(image_filter)
plt.imshow(
# Reformat for plotting
tf.squeeze(image_detect)
)
plt.axis('off')
plt.show();
```
In the tutorial, our discussion of kernels and feature maps was mainly visual. We saw the effect of `Conv2D` and `ReLU` by observing how they transformed some example images.
But the operations in a convolutional network (like in all neural networks) are usually defined through mathematical functions, through a computation on numbers. In the next exercise, we'll take a moment to explore this point of view.
Let's start by defining a simple array to act as an image, and another array to act as the kernel. Run the following cell to see these arrays.
```python
# Sympy is a python library for symbolic mathematics. It has a nice
# pretty printer for matrices, which is all we'll use it for.
import sympy
sympy.init_printing()
from IPython.display import display
image = np.array([
[0, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0],
[0, 1, 0, 1, 1, 1],
[0, 1, 0, 0, 0, 0],
])
kernel = np.array([
[1, -1],
[1, -1],
])
display(sympy.Matrix(image))
display(sympy.Matrix(kernel))
# Reformat for Tensorflow
image = tf.cast(image, dtype=tf.float32)
image = tf.reshape(image, [1, *image.shape, 1])
kernel = tf.reshape(kernel, [*kernel.shape, 1, 1])
kernel = tf.cast(kernel, dtype=tf.float32)
```
$\displaystyle \left[\begin{matrix}0 & 1 & 0 & 0 & 0 & 0\\0 & 1 & 0 & 0 & 0 & 0\\0 & 1 & 0 & 0 & 0 & 0\\0 & 1 & 0 & 0 & 0 & 0\\0 & 1 & 0 & 1 & 1 & 1\\0 & 1 & 0 & 0 & 0 & 0\end{matrix}\right]$
$\displaystyle \left[\begin{matrix}1 & -1\\1 & -1\end{matrix}\right]$
# 4) Observe Convolution on a Numerical Matrix #
What do you see? The image is simply a long vertical line on the left and a short horizontal line on the lower right. What about the kernel? What effect do you think it will have on this image? After you've thought about it, run the next cell for the answer.
```python
# View the solution (Run this code cell to receive credit!)
q_4.check()
```
<span style="color:#33cc33">Correct:</span>
In the tutorial, we talked about how the pattern of positive numbers will tell you the kind of features the kernel will extract. This kernel has a vertical column of 1's, and so we would expect it to return features of vertical lines.
Now let's try it out. Run the next cell to apply convolution and ReLU to the image and display the result.
```python
image_filter = tf.nn.conv2d(
input=image,
filters=kernel,
strides=1,
padding='VALID',
)
image_detect = tf.nn.relu(image_filter)
# The first matrix is the image after convolution, and the second is
# the image after ReLU.
display(sympy.Matrix(tf.squeeze(image_filter).numpy()))
display(sympy.Matrix(tf.squeeze(image_detect).numpy()))
```
$\displaystyle \left[\begin{matrix}-2.0 & 2.0 & 0.0 & 0.0 & 0.0\\-2.0 & 2.0 & 0.0 & 0.0 & 0.0\\-2.0 & 2.0 & 0.0 & 0.0 & 0.0\\-2.0 & 2.0 & -1.0 & 0.0 & 0.0\\-2.0 & 2.0 & -1.0 & 0.0 & 0.0\end{matrix}\right]$
$\displaystyle \left[\begin{matrix}0.0 & 2.0 & 0.0 & 0.0 & 0.0\\0.0 & 2.0 & 0.0 & 0.0 & 0.0\\0.0 & 2.0 & 0.0 & 0.0 & 0.0\\0.0 & 2.0 & 0.0 & 0.0 & 0.0\\0.0 & 2.0 & 0.0 & 0.0 & 0.0\end{matrix}\right]$
Is the result what you expected?
# Conclusion #
In this lesson, you learned about the first two operations a convolutional classifier uses for feature extraction: **filtering** an image with a **convolution** and **detecting** the feature with the **rectified linear unit**.
# Keep Going #
Move on to [**Lesson 3**](https://www.kaggle.com/ryanholbrook/maximum-pooling) to learn the final operation: **condensing** the feature map with **maximum pooling**!
---
*Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/196537) to chat with other Learners.*
|
8c8a3c0b8eeb0d8fd23a7d84859090da95665310
| 475,244 |
ipynb
|
Jupyter Notebook
|
Kaggle-Computer-Vision/2-exercise-convolution-and-relu.ipynb
|
rahulbakshee/cv
|
6eb849a6e7b54ba42e99d8e28c9722b2d4aab2f3
|
[
"MIT"
] | 6 |
2021-03-03T10:12:22.000Z
|
2022-02-28T13:19:50.000Z
|
Kaggle-Computer-Vision/2-exercise-convolution-and-relu.ipynb
|
rahulbakshee/cv
|
6eb849a6e7b54ba42e99d8e28c9722b2d4aab2f3
|
[
"MIT"
] | null | null | null |
Kaggle-Computer-Vision/2-exercise-convolution-and-relu.ipynb
|
rahulbakshee/cv
|
6eb849a6e7b54ba42e99d8e28c9722b2d4aab2f3
|
[
"MIT"
] | 1 |
2021-03-03T10:50:56.000Z
|
2021-03-03T10:50:56.000Z
| 475,244 | 475,244 | 0.955072 | true | 3,974 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.79053 | 0.774583 | 0.612332 |
__label__eng_Latn
| 0.955692 | 0.260982 |
Chapter 14 of [A Guided Tour of Mathematical Methods for the Physical Sciences](https://www.cambridge.org/core/books/guided-tour-of-mathematical-methods-for-the-physical-sciences/3C5AAC644C2B09F3F17837E703516920) introduces Fourier Analysis. In it, (well-behaved) functions can be transformed into a sum of trigonometric function, whereas discrete series can be represented in so-called Fourier series. The former application helps us solve differential equations, for example, and the latter is used to explore the frequency content in a time series, among other things. Let us focus in this notebook on an application of the latter. First, we load some libraries we are going to use:
```python
import matplotlib.pyplot as plt
%matplotlib notebook
import numpy as np
import pandas as pd
```
Next, we download two years of measurements of the sea height in the Auckland Harbour. The interval between measurements is 1000 seconds:
```python
df = pd.read_csv('https://auckland.figshare.com/ndownloader/files/21944535',squeeze=True)
sea_height = df.values
dt = 1000 # in seconds
t = np.arange(len(sea_height))*dt # in seconds
```
And we plot these data:
```python
days = t/3600/24
%matplotlib notebook
plt.figure()
plt.plot(days,sea_height)
plt.xlabel('time (days)')
plt.ylabel('sea level (m)')
plt.show()
```
Fluctuations at this time scale are for a large part due to the tides. Zoom in to explore this time series in more detail. By eye, it appears the tide is dominated by a roughly twice-daily (semidiurnal) signal with a significant amplitude variation.
### Fourier
Let us have a closer look at the signal, with Fourier analysis. We can decompose the time series $f(t)$ of length $L$ as
\begin{equation}
f(t) = \sum_{n=-\infty }^{\infty }c_{n}\;e^{in\pi t/L}, \label{Four.16}
\end{equation}
and the coeffients $c_n$ can be calculated with
\begin{equation}
c_{n}=\frac{1}{2L}\int_{-L}^{L}f(x)e^{-in\pi t/L}dx.
\label{Four.18}
\end{equation}
With $f(t)$ being a time series, the $c_n$ coefficients represent its frequency spectrum, and the coefficients are spaced by $1/L$. This means that in theory the longer the time series ($L$), the greater spectral resolution.
In practice, we'll approximate $f(t)$ with as many coefficients as there are samples, using a discrete Fourier transform algorithm called the Fast Fourier Transform. In python, we have a library for that in the numpy.fft suite:
```python
cn = np.fft.fft(sea_height) # array of the (complex) Fourier coefficients
freq = np.fft.fftfreq(t.shape[-1],dt) # array of the frequencies associated with each coefficient
period = 1/freq/3600 # The period in hours (never mind the warning from the division when freq=0)
```
/home/kvan637/miniconda3/lib/python3.7/site-packages/ipykernel_launcher.py:4: RuntimeWarning: divide by zero encountered in true_divide
after removing the cwd from sys.path.
The absolute value of the complex Fourier coefficients $c_n$ present information about the power in the signal at each frequency:
```python
plt.figure()
plt.plot(period,np.abs(cn), color='black')
plt.xlim(0,25) # zooming in on periods between 0 and 25 hours
plt.xlabel("Period (h)")
plt.show()
```
From the absolute value of the Fourier transform of two years of sea level measurements in the Auckland Harbour, we confirm the main period in the signal is roughly 12 hours. We say "roughly": have a closer look by zooming in on the largest peak. What do you observe?
Tides on Earth are the result of the gravitational attraction of (large) bodies of mass. The biggest contributor is the Moon, followed by the Sun. The mass of the Moon is much smaller than that of the Sun, but its greater proximity to Earth more than makes up for that; on average, the Sun contributes about 40% of the total tidal energy. It may be obvious that the Earth experiences high tide when it faces the Sun or the Moon, but there is also a high tide on the opposite side of Earth. The Earth spins around its axis in 24 hours, and this means we can see spikes at 12 and 24 hours due to the Sun (named $S_2$ and $S_1$, respectively). However, because the Moon orbits the Earth in 29.5 days, the period of the semidiurnal lunar contribution is 12.4 hours. This is called the $M_2$ period. The smaller peak at exactly 12 hours is the semidiurnal solar contribution, and the 24-hr peak is the daily contribution from the Sun. In the time domain, the original plot of our data, these close but different periods produce a beat pattern in the amplitudes of sea height.
### Multi-tapering
The absolute value of the output of the FFT of a single realisation of a noisy discrete and finite time series can be a poor representation of the true power spectrum. Several methods attempt to address the uncertainties in these estimates. One of those techniques is called the multi-taper method, and [free python code exists](https://github.com/krischer/mtspec).
### Homework
Install the multitaper [MTSPEC](https://github.com/krischer/mtspec) package either here on your server in the cloud, or on your local computer. Run the multitaper method on this time series to determine which peaks in the power spectral estimate are significant, and which are not. We discussed the ~12 hr and 24 hr periods, but what about the smaller bumps between 0 and 10 hr periods in the figure above? Do you think they are real?
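To get started, a multitaper call might look like the following (a sketch; the exact `mtspec` signature used here is an assumption, so check the package documentation):
```python
# Assumed interface: mtspec(data, delta, time_bandwidth, ...) returning (spectrum, frequencies)
from mtspec import mtspec
spec, freq_mt = mtspec(data=sea_height, delta=dt, time_bandwidth=3.5, number_of_tapers=5)
plt.figure()
plt.plot(1/freq_mt[1:]/3600, spec[1:])  # skip the zero-frequency bin
plt.xlim(0, 25)
plt.xlabel('Period (h)')
plt.show()
```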
```python
```
|
46983891684b91b0af6df77b4f0ce37b09b5a7ac
| 158,869 |
ipynb
|
Jupyter Notebook
|
14_Fourier_Analysis.ipynb
|
PALab/mathematical-notebooks-for-the-physical-sciences
|
f5b759aa4746c54ea9cac8c0001093b2204047d4
|
[
"MIT"
] | 6 |
2018-05-31T02:29:10.000Z
|
2021-08-16T15:02:38.000Z
|
14_Fourier_Analysis.ipynb
|
PALab/mathematical-notebooks-for-the-physical-sciences
|
f5b759aa4746c54ea9cac8c0001093b2204047d4
|
[
"MIT"
] | null | null | null |
14_Fourier_Analysis.ipynb
|
PALab/mathematical-notebooks-for-the-physical-sciences
|
f5b759aa4746c54ea9cac8c0001093b2204047d4
|
[
"MIT"
] | 3 |
2018-06-22T00:45:17.000Z
|
2020-08-16T14:25:40.000Z
| 76.563373 | 40,827 | 0.685615 | true | 1,337 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.945801 | 0.867036 | 0.820044 |
__label__eng_Latn
| 0.997565 | 0.743569 |
```python
from devito import *
from examples.seismic.source import WaveletSource, TimeAxis
from examples.seismic import plot_image
import numpy as np
from sympy import init_printing, latex
init_printing(use_latex=True)
```
```python
# Initial grid: 2km x 2km, with spacing 25m
extent = (2000., 2000.)
shape = (81, 81)
x = SpaceDimension(name='x', spacing=Constant(name='h_x', value=extent[0]/(shape[0]-1)))
z = SpaceDimension(name='z', spacing=Constant(name='h_z', value=extent[1]/(shape[1]-1)))
grid = Grid(extent=extent, shape=shape, dimensions=(x, z))
```
```python
class DGaussSource(WaveletSource):
def wavelet(self, f0, t):
a = 0.004
return -2.*a*(t - 1./f0) * np.exp(-a * (t - 1./f0)**2)
# Timestep size from Eq. 7 with V_p=6000. and dx=100
t0, tn = 0., 200.
dt = 1e2*(1. / np.sqrt(2.)) / 60.
time_range = TimeAxis(start=t0, stop=tn, step=dt)
src = DGaussSource(name='src', grid=grid, f0=0.01, time_range=time_range)
src.coordinates.data[:] = [1000., 1000.]
```
```python
#NBVAL_SKIP
src.show()
```
```python
# Now we create the velocity and pressure fields
p = TimeFunction(name='p', grid=grid, staggered=NODE, space_order=2, time_order=1)
v = VectorTimeFunction(name='v', grid=grid, space_order=2, time_order=1)
```
```python
from devito.finite_differences.operators import div, grad
t = grid.stepping_dim
time = grid.time_dim
# We need some initial conditions
V_p = 4.0
density = 1.
ro = 1/density * dt
l2m = V_p*V_p*density * dt
# The source injection term
src_p = src.inject(field=p.forward, expr=src)
# 2nd order acoustic according to fdelmoc
u_v_2 = Eq(v.forward, v + ro * grad(p))
u_p_2 = Eq(p.forward, p + l2m * div(v.forward))
```
```python
u_v_2
```
```python
u_p_2
```
```python
op_2 = Operator([u_v_2, u_p_2] + src_p)
```
```python
#NBVAL_IGNORE_OUTPUT
# Propagate the source
op_2(time=src.time_range.num-1)
```
Operator `Kernel` run in 0.04 s
```python
#NBVAL_SKIP
# Let's see what we got....
plot_image(v[0].data[0])
plot_image(v[1].data[0])
plot_image(p.data[0])
```
```python
norm_p = norm(p)
assert np.isclose(norm_p, .35098, atol=1e-4, rtol=0)
```
```python
# 4th order acoustic according to fdelmoc
# Now we create the velocity and pressure fields
p4 = TimeFunction(name='p', grid=grid, staggered=NODE, space_order=4, time_order=1)
v4 = VectorTimeFunction(name='v', grid=grid, space_order=4, time_order=1)
u_v_4 = Eq(v4.forward, v4 + ro * grad(p4))
u_p_4 = Eq(p4.forward, p4 + l2m * div(v4.forward))
```
```python
#NBVAL_IGNORE_OUTPUT
op_4 = Operator([u_v_4, u_p_4] + src_p)
# Propagate the source
op_4(time=src.time_range.num-1)
```
Operator `Kernel` run in 0.04 s
```python
#NBVAL_SKIP
# Let's see what we got....
plot_image(v4[0].data[-1])
plot_image(v4[1].data[-1])
plot_image(p4.data[-1])
```
```python
norm_p = norm(p4)
assert np.isclose(norm_p, .35098, atol=1e-4, rtol=0)
```
|
a1f5d0a1c14d413db810468958960c147d44d26c
| 137,092 |
ipynb
|
Jupyter Notebook
|
examples/seismic/tutorials/05_staggered_acoustic.ipynb
|
BrunoMot/devito
|
b6e077857765b7b5fad812ec5774635ca4c6fbb7
|
[
"MIT"
] | 2 |
2019-04-05T20:52:23.000Z
|
2019-11-03T21:36:53.000Z
|
examples/seismic/tutorials/05_staggered_acoustic.ipynb
|
BrunoMot/devito
|
b6e077857765b7b5fad812ec5774635ca4c6fbb7
|
[
"MIT"
] | 9 |
2019-06-11T20:54:19.000Z
|
2020-04-06T17:56:10.000Z
|
examples/seismic/tutorials/05_staggered_acoustic.ipynb
|
BrunoMot/devito
|
b6e077857765b7b5fad812ec5774635ca4c6fbb7
|
[
"MIT"
] | null | null | null | 336.009804 | 16,808 | 0.934387 | true | 999 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.885631 | 0.79053 | 0.700119 |
__label__eng_Latn
| 0.330676 | 0.464941 |
# Sandbox/demo to go along with the MD assignment and lectures
## PharmSci 175/275
Author: David Mobley
## MD for a simple Lennard-Jones system
### Here, let's play with a very simple model system
In the MD assignment, we will be looking at a very simple polymer model. But here, let's backtrack to simple Lennard-Jones spheres (as used in the energy minimization assignment) and look at what happens if we run a simple dynamics calculation on a couple of particles subject to this potential, which will allow us to get a feel for how the integrator works.
Remember, for a Lennard-Jones system, the dimensionless form of our potential is
\begin{equation}
U^* = \sum \limits_{i<j} 4\left( r_{ij}^{-12} - r_{ij}^{-6}\right)
\end{equation}
which we can easily graph to get a sense of what will happen.
### We graph the potential for a pair of particles:
```python
#Get pylab ready for plotting in this notebook - "magic" command specific for iPython notebooks
%pylab inline
#Import numpy
import numpy as np
#Generate array of distances to graph at
r = np.arange(0.01, 3, 0.01) #start just above zero to avoid dividing by zero at r=0
#Calculate U
U = 4.*(r**(-12.) - r**(-6.))
#Graph
plot( r, U, 'b-')
#Label x and y axes
xlabel('r')
ylabel('U')
#Adjust y limits not to be auto-scaled; since this goes to infinity at zero, the graph will not be useful unless we truncate
ylim(-2,2)
```
## Now, let's run some dynamics on a pair of particles subject to this potential
Here I've written a modified `mdlib.f90` (updated from that from the energy minimization and MD assignments) called `md_sandbox.f90`. Compile it as usual (with `f2py`) into `md_sandbox` (e.g. `f2py -c -m md_sandbox md_sandbox.f90` or similar) so that you can import it below.
The difference between this and `mdlib` that you will use in your MD assignment is that I've removed the bonds between atoms, so that we have simple Lennard-Jones particles rather than LJ polymers.
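Since `md_sandbox` is a compiled Fortran module, here is a minimal pure-NumPy sketch of what a single velocity Verlet step looks like, so you can see the integrator's structure before using it. This is illustrative only: `lj_forces` and `vv_step` are hypothetical stand-ins for `calcenergyforces` and `vvintegrate` (unit masses, no periodic box or cutoff).

```python
import numpy as np

def lj_forces(pos):
    """Energy and forces for Lennard-Jones particles in reduced units."""
    n = len(pos)
    forces = np.zeros_like(pos)
    energy = 0.0
    for i in range(n - 1):
        for j in range(i + 1, n):
            rij = pos[j] - pos[i]            # displacement from i to j
            r2 = np.dot(rij, rij)
            inv6 = r2**-3                    # r^-6
            energy += 4.0 * (inv6*inv6 - inv6)
            # F = 24 (2 r^-12 - r^-6) / r^2 * rij, from -dU/dr
            fmag = 24.0 * (2.0*inv6*inv6 - inv6) / r2
            forces[j] += fmag * rij          # repulsive at short range
            forces[i] -= fmag * rij
    return energy, forces

def vv_step(pos, vel, forces, dt):
    """One velocity Verlet step for unit-mass particles."""
    vel_half = vel + 0.5 * dt * forces       # half-kick
    pos = pos + dt * vel_half                # drift
    energy, forces = lj_forces(pos)          # new forces at new positions
    vel = vel_half + 0.5 * dt * forces       # second half-kick
    return pos, vel, forces, energy
```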
### First, we set up our system:
```python
#Let's define the variables we'll need
dt = 0.001
Cut = 2.5
L = 10 #Let's just put these in a fairly big box so we have room
M = 1 #Here this doesn't actually do anything, but I didn't remove it from the functions
#Import our library
from md_sandbox import *
#Choose N for number of particles
N = 2
#Allocate position array - initially just zeros
Pos = np.zeros((N,3), float)
#In this case, place two LJ particles at specified initial positions - chosen so they are relatively near each other
#If you had more than two particles you'd need to adjust this
Pos[0,:] = np.array([0,0,0])
Pos[1,:] = np.array([1.2,0,0])
#Might be worth experimenting to see what happens if we randomly place the particles instead
#Assign initial velocities - in this case I'll start off with them stationary and see what happens
Vel = np.array([[0,0,0],[0,0,0]], float)
#You could tweak the initial velocities to see what happens under different conditions
```
## Before we do any dynamics, notice that this is really a 1D system, so let's write a function to compute r, which we will store later
```python
def get_r(Pos):
    """Calculate r, the distance between particles, for a position array containing just two particles. Return it."""
    #Get displacement
    dist = Pos[1,:] - Pos[0,:]
    #Calculate distance and return
    return np.sqrt( np.dot( dist, dist))
```
## OK, now let's start doing some dynamics
We're going to want to store the distance between the two particles as a function of time, so we can graph it. We also might want to look at the population of each distance as a function of time and see how that compares with the energy landscape. Though, right now that's a little premature. Let's just start off by taking a few timesteps and see how the distance changes.
```python
#Define storage for positions at each time so we can track them
max_steps = 5000 #Maximum number of steps we will take - so we know how many positions we might store
Pos_t = np.zeros(( N,3,max_steps), float)
#Store initial positions
Pos_t[:,:,0] = Pos
#Make up initial forces
Forces = np.zeros((N,3), float)
#Kick things off by calculating energy and forces
energy, Forces = calcenergyforces( Pos, M, L, Cut, Forces )
#Take a timestep
Pos, Vel, Accel, KEnergy, PEnergy = vvintegrate( Pos, Vel, Forces, M, L, Cut, dt )
#Store new positions
Pos_t[:,:,1] = Pos
#Print original and current distance
for i in range(0,2):
    print(get_r(Pos_t[:,:,i]))
```
## What should the long-time behavior of this system be?
Before going on to the step below, think for a minute about what motion these particles should exhibit on long timescales and what it would look like if you've graphed it.
Once you've done so, write a `for` loop to run over max_steps and at each step, update the energy and forces, take a timestep, and store the new positions.
```python
for i in range(max_steps):
    #Update energy and forces, take a timestep, and store the new positions
    energy, Forces = calcenergyforces( Pos, M, L, Cut, Forces )
    Pos, Vel, Accel, KEnergy, PEnergy = vvintegrate( Pos, Vel, Forces, M, L, Cut, dt )
    Pos_t[:,:,i] = Pos
```
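Before graphing r versus time, a worthwhile sanity check on the integrator is that the total energy stays essentially constant, as velocity Verlet should (nearly) conserve it. A sketch, reusing the quantities `vvintegrate` returns and continuing the trajectory from its current state:

```python
#Track total (kinetic + potential) energy over another max_steps steps
energies = []
for i in range(max_steps):
    energy, Forces = calcenergyforces( Pos, M, L, Cut, Forces )
    Pos, Vel, Accel, KEnergy, PEnergy = vvintegrate( Pos, Vel, Forces, M, L, Cut, dt )
    energies.append(KEnergy + PEnergy)

figure()
plot(dt*np.arange(max_steps), energies)
xlabel('t')
ylabel('total energy')
```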
## Once you've done that, use this code to graph r versus time for your particles
```python
#Find x axis (time values)
t = dt*np.arange(0,max_steps)
#Find y axis (r values)
r_vs_t = []
for i in range(max_steps):
    r = get_r(Pos_t[:,:,i])
    r_vs_t.append(r)
r_vs_t = np.array(r_vs_t)
#Plot
figure()
plot(t, r_vs_t)
```
## Additional exercises
Some things you might want to try on the above 1D system:
- Change the amount of time the data is graphed for; see if you can make it oscillate repeatedly
- Consider how you could adjust the total energy of the system
- What if you change the total energy (increasing it or reducing it)? Can you make it look more like a harmonic oscillator? Less like a harmonic oscillator? Why or why not?
- Can you find solutions which are not oscillatory? How?
- For the ambitious: Consider extending this example to three or more particles and graph just some of the distances involved. What kind of solutions can you find?
# Now let's do MD on a molecule!
So that was some very simple MD on a 1D Lennard-Jones system. Now let's head towards the opposite extreme and do our first molecular system.
Here, we'll draw on OpenMM, a molecular mechanics toolkit we'll see again later in the class, for running some simple energy minimizations and molecular dynamics. And we'll use the new `openforcefield` tools for assigning a force field for a molecule, because they make this very simple (and you have them already installed).
## First, we generate a molecule
```python
# What SMILES string for the guest? Should be isomeric SMILES
mol_smiles = 'OC(CC1CCCC1)=O'
# Import stuff
from openeye.oechem import *
import oenotebook as oenb
from openeye.oeomega import * # conformer generation
from openeye.oequacpac import * #for partial charge assignment
# Create empty OEMol
mol = OEMol()
# Convert SMILES
OESmilesToMol(mol, mol_smiles)
# Draw
oenb.draw_mol(mol)
```
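OpenEye requires a license; if you do not have one, the open-source RDKit toolkit covers the same SMILES-to-3D step. A minimal sketch of that alternative path (note the rest of this notebook uses the OpenEye objects, and MMFF optimization here is not equivalent to the AM1-BCC charging done below):

```python
from rdkit import Chem
from rdkit.Chem import AllChem

rd_mol = Chem.AddHs(Chem.MolFromSmiles('OC(CC1CCCC1)=O'))
AllChem.EmbedMolecule(rd_mol, randomSeed=42)   # generate a 3D conformer
AllChem.MMFFOptimizeMolecule(rd_mol)           # quick geometry cleanup
```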
## Now we generate a 3D structure of our compound and assign partial charges
```python
#initialize omega for conformer generation
omega = OEOmega()
omega.SetMaxConfs(100) #Generate up to 100 conformers
omega.SetIncludeInput(False)
omega.SetStrictStereo(True) #Refuse to generate conformers if stereochemistry not provided
#Initialize charge generation
chargeEngine = OEAM1BCCCharges()
# Set to use a simple neutral pH model
OESetNeutralpHModel(mol)
# Generate conformers with Omega; keep only best conformer
status = omega(mol)
if not status:
    print("Error generating conformers for %s." % mol_smiles)
# Assign AM1-BCC charges
OEAssignCharges(mol, chargeEngine)
# Write out PDB of molecule
ofile = oemolostream('mymolecule.pdb')
OEWriteMolecule(ofile, mol)
ofile.close()
```
## Now we apply a force field to assign parameters to the system
This uses the `openforcefield` toolkit and its `smirnoff99Frosst` forcefield (a new GAFF-like forcefield for small molecules).
```python
from openforcefield.typing.engines.smirnoff import *
from openforcefield.utils import get_data_filename, extractPositionsFromOEMol, generateTopologyFromOEMol
ff = ForceField('forcefield/smirnoff99Frosst.offxml')
topology = generateTopologyFromOEMol(mol)
system = ff.createSystem(topology, [mol])
positions = extractPositionsFromOEMol(mol)
```
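As a quick sanity check that parameter assignment worked, you can list the force terms the force field attached to the `system` created above (a minimal sketch):

```python
# Each entry is one term of the potential (bonds, angles, torsions, nonbonded, ...)
for force in system.getForces():
    print(type(force).__name__)
```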
## Next we'll start off by doing an energy minimization
We first prepare the system, but then the actual energy minimization is very simple. I believe OpenMM is using an L-BFGS minimizer.
```python
# OpenMM pieces used below (in this era of OpenMM these live in the simtk packages)
from simtk import openmm, unit
from simtk.openmm import app
# Even though we're just going to minimize, we still have to set up an integrator, since a Simulation needs one
integrator = openmm.VerletIntegrator(2.0*unit.femtoseconds)
# Prep the Simulation using the parameterized system, the integrator, and the topology
simulation = app.Simulation(topology, system, integrator)
# Copy in the positions
simulation.context.setPositions( positions)
# Get initial state and energy; print
state = simulation.context.getState(getEnergy = True)
energy = state.getPotentialEnergy() / unit.kilocalories_per_mole
print("Energy before minimization (kcal/mol): %.2g" % energy)
# Minimize, get final state and energy and print
simulation.minimizeEnergy()
state = simulation.context.getState(getEnergy=True, getPositions=True)
energy = state.getPotentialEnergy() / unit.kilocalories_per_mole
print("Energy after minimization (kcal/mol): %.2g" % energy)
newpositions = state.getPositions()
```
Energy before minimization (kcal/mol): 95
Energy after minimization (kcal/mol): 61
## Now we do some bookkeeping to run MD
Most of this is just setup, file bookkeeping, temperature selection, etc. The key part which actually runs it is the second-to-last block of code which says `simulation.step(1000)` which runs 1000 steps of dynamics. The timestep for this is set near the top, and is 2 femtoseconds per timestep, so the total time is 2000 fs, or 2 ps.
Simulation snapshots are stored to a NetCDF file for later visualization, every 100 frames (so we'll end up with 10 of them). You can adjust these settings if you like.
```python
# Set up NetCDF reporter for storing trajectory; prep for Langevin dynamics
import os
import time
from mdtraj.reporters import NetCDFReporter
integrator = openmm.LangevinIntegrator(300*unit.kelvin, 1./unit.picosecond, 2.*unit.femtoseconds)
# Prep Simulation
simulation = app.Simulation(topology, system, integrator)
# Copy in minimized positions
simulation.context.setPositions(newpositions)
# Initialize velocities to correct temperature
simulation.context.setVelocitiesToTemperature(300*unit.kelvin)
# Set up to write trajectory file to NetCDF file in data directory every 100 frames
netcdf_reporter = NetCDFReporter(os.path.join('.', 'trajectory.nc'), 100) #Store every 100 frames
# Initialize reporters, including a CSV file to store certain stats every 100 frames
simulation.reporters.append(netcdf_reporter)
simulation.reporters.append(app.StateDataReporter(os.path.join('.', 'data.csv'), 100, step=True, potentialEnergy=True, temperature=True, density=True))
# Run the simulation and print start info; store timing
print("Starting simulation")
start = time.time()  # time.clock() was removed in Python 3.8; use wall-clock time
simulation.step(1000) #1000 steps of dynamics
end = time.time()
# Print elapsed time info, finalize trajectory file
print("Elapsed time %.2f seconds" % (end-start))
netcdf_reporter.close()
print("Done!")
```
Starting simulation
Elapsed time 0.15 seconds
Done!
## Now we can visualize the results with `nglview`
If you don't have `nglview`, you can also load the PDB file, `mymolecule.pdb`, with a standard viewer like VMD or Chimera, and then load the NetCDF trajectory file (`trajectory.nc`) in afterwards.
```python
# Load stored trajectory using MDTraj; the trajectory doesn't contain chemistry info so we also load a PDB
import mdtraj
import nglview
traj= mdtraj.load(os.path.join('.', 'trajectory.nc'), top=os.path.join('.', 'mymolecule.pdb'))
# View the trajectory
view = nglview.show_mdtraj(traj)
view
```
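If the interactive widget does not render in your environment, a simple fallback (a sketch using the `traj` object loaded above; the output filename is arbitrary) is to write the frames to a multi-model PDB and open that in VMD or Chimera:

```python
# Save all stored frames as a multi-model PDB for external visualization
traj.save_pdb('trajectory_frames.pdb')
```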
## Exercises
- Try your own molecule. What about one which has the potential for internal hydrogen bonding?
- Run the dynamics longer
- What is different if you lower the temperature? Raise the temperature?
[Open in Colab](https://colab.research.google.com/github/julianovale/simulacao_python/blob/master/0006_ex_trem_kronecker_algebra_computacao_simbolica.ipynb)
```
from sympy import I, Matrix, symbols, Symbol, eye
from sympy.physics.quantum import TensorProduct
from datetime import datetime
```
## Example description
```
'''
Rotas
'''
R1 = Matrix([[0,"L1p3",0,0],[0,0,"L1v1",0],[0,0,0,"L1v3"],[0,0,0,0]])
R2 = Matrix([[0,"L2p3",0,0],[0,0,"L2v2",0],[0,0,0,"L2v3"],[0,0,0,0]])
```
```
'''
Seções de bloqueio
'''
T1 = Matrix([[0, 'p1'],['v1', 0]])
T2 = Matrix([[0, 'p2'],['v2', 0]])
T3 = Matrix([[0, 'p3'],['v3', 0]])
```
```
momento_inicio = datetime.now()
'''
Algebra
'''
rotas = TensorProduct(R1, eye(4)) + TensorProduct(eye(4), R2)
secoes = TensorProduct((TensorProduct(T1, eye(2)) + TensorProduct(eye(2), T2)),eye(2))+TensorProduct(eye(4),T3)
sistema = TensorProduct(rotas, secoes)
# compute processing time
tempo_processamento = datetime.now() - momento_inicio
```
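The construction above is the Kronecker sum, $A \oplus B = A \otimes I + I \otimes B$, which composes the component automata so they evolve in parallel; the state-space dimensions therefore multiply. A quick dimensional sanity check (a sketch using the variables from the cell above):

```
# rotas acts on 4*4 = 16 route states, secoes on 2*2*2 = 8 section states,
# so the composed system is 16*8 = 128 dimensional.
print(rotas.shape)    # (16, 16)
print(secoes.shape)   # (8, 8)
print(sistema.shape)  # (128, 128)
```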
```
sistema
```
(Output abridged: `sistema` is a 128×128 sparse symbolic matrix. Each nonzero entry is the product of a route transition label and a section transition label, e.g. L1p3*p3 or L2v2*v1; the full printout is omitted for readability.)
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, L2v2*v1, 0, L2v2*v2, L2v2*v3, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, L2v3*p3, L2v3*p2, 0, L2v3*p1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, L2v3*v3, 0, 0, L2v3*p2, 0, L2v3*p1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, L2v3*v2, 0, 0, L2v3*p3, 0, 0, L2v3*p1, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, L2v3*v2, L2v3*v3, 0, 0, 0, 0, L2v3*p1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, L2v3*v1, 0, 0, 0, 0, L2v3*p3, L2v3*p2, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, L2v3*v1, 0, 0, L2v3*v3, 0, 0, L2v3*p2],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, L2v3*v1, 0, L2v3*v2, 0, 0, L2v3*p3],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, L2v3*v1, 0, L2v3*v2, L2v3*v3, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
```python
sistema.shape
```
(128, 128)
```python
print(tempo_processamento)
```
0:00:02.921827
```python
sistema[0,9]
```
L2p3*p3
```python
sistema[0,12]
```
L2p3*p1
```python
sistema[1,8]
```
L2p3*v3
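The spot checks above confirm individual entries of the symbolic system matrix. As an illustrative sketch only (with hypothetical symbol names, not the notebook's actual construction), a Kronecker product of small symbolic matrices produces the same kind of sparse structure of symbol products:
```python
# Illustrative sketch: Kronecker (tensor) products of small symbolic
# matrices yield sparse entries that are products of symbols, as in
# `sistema` above. The matrices and symbol names here are hypothetical.
from sympy import Matrix, symbols
from sympy.physics.quantum import TensorProduct

p1, p2, v1, v2 = symbols('p1 p2 v1 v2')
A = Matrix([[0, p1], [p2, 0]])  # toy component 1
B = Matrix([[0, v1], [v2, 0]])  # toy component 2

TensorProduct(A, B)  # 4x4 Kronecker product with entries like p1*v1
```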
---
*Notebook source: `0006_ex_trem_kronecker_algebra_computacao_simbolica.ipynb` (`julianovale/simulacao_python`, MIT)*
```python
import semicon
import sympy
sympy.init_printing()
```
# continuum dispersion
```python
import kwant
import numpy as np
import scipy.linalg as la
import matplotlib.pyplot as plt
%matplotlib inline
```
```python
a = 0.5
g0 = 1
bands = ['gamma_6c', 'gamma_8v', 'gamma_7v']
model = semicon.models.ZincBlende(
components=['foreman'],
bands=bands,
default_databank='winkler'
)
params = model\
.parameters(material='InAs')\
.renormalize(new_gamma_0=g0)
# define continuum dispersion function
continuum = kwant.continuum.lambdify(str(model.hamiltonian), locals=params)
# define tight-binding dispersion function
template = kwant.continuum.discretize(model.hamiltonian, grid_spacing=a)
syst = kwant.wraparound.wraparound(template).finalized()
p = lambda k_x, k_y, k_z: {'k_x': k_x, 'k_y': k_y, 'k_z': k_z, **params}
tb = lambda k_x, k_y, k_z: syst.hamiltonian_submatrix(params=p(k_x, k_y, k_z))
# get dispersions
k = np.linspace(-np.pi/a, np.pi/a, 101)
e = np.array([la.eigvalsh(continuum(k_x=ki, k_y=0, k_z=0)) for ki in k])
e_tb = np.array([la.eigvalsh(tb(k_x=a*ki, k_y=0, k_z=0)) for ki in k])
```
```python
plt.plot(k, e, 'r-');
plt.plot(k, e_tb, 'k-');
plt.ylim(-1, 2)
plt.grid();
```
---
*Notebook source: `notebooks/spurious-solutions.ipynb` (`quantum-tinkerer/semicon`, BSD-2-Clause)*
# Custom Enzyme Catalysed Reactions
This tutorial has been adapted from the tutorials in https://github.com/mic-pan/BGT_BiochemicalNetworkTutorials
```julia
# Since BondGraphs is not yet in the package manager, we will need to include it directly from Github
# NOTE: You will need Julia >= 1.7
using Pkg; Pkg.add(url="https://github.com/jedforrest/BondGraphs.jl")
using BondGraphs
```
```julia
using Plots
using ModelingToolkit
using Catalyst
```
## Enzyme catalysed reactions
We can create custom reaction components that describe enzyme-catalysed reaction mechanics, such as the Michaelis-Menten rate law.
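For reference, the textbook (irreversible) Michaelis-Menten rate law reads
$$ v = \frac{V_{max}\,[S]}{K_M + [S]} $$
The `ReMM` component defined below implements a reversible, thermodynamically consistent analogue of this law.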
```julia
# GlobalScope parameters do not have a namespace
@parameters R T
R, T = GlobalScope(R), GlobalScope(T)
@parameters t
D = Differential(t)
# Effort and flow variables
@variables E[1:2](t) F[1:2](t)
@parameters r1 r2 k_c e_T
```
\begin{equation}
\left[
\begin{array}{c}
r1 \\
r2 \\
k_{c} \\
e_{T} \\
\end{array}
\right]
\end{equation}
```julia
# Custom component
ReMM = Dict(
:description => """
Michaelis-Menten reaction
r1: Rate of reaction 1
r2: Rate of reaction 2
k_c: Affinity of complex relative to free enzyme
e_T: Total amount of enzyme
R: Universal Gas Constant
T: Temperature
""",
:numports => 2,
:variables => Dict(
:parameters => Dict(
r1 => 1,
r2 => 1,
k_c => 1,
e_T => 1
),
:globals => Dict(
R => 8.314,
T => 310.0
)
),
:equations => [
0 ~ F[1] + F[2],
0 ~ F[1] - e_T*r1*r2*k_c*(exp(E[1]/R/T) - exp(E[2]/R/T)) / (r1*exp(E[1]/R/T) + r2*exp(E[2]/R/T) + k_c*(r1+r2))
]
)
```
Dict{Symbol, Any} with 4 entries:
:variables => Dict{Symbol, Dict{Num}}(:parameters=>Dict{Num, Int64}(e_T=>1,…
:description => "Michaelis-Menten reaction\nr1: Rate of reaction 1\nr2: Rate …
:equations => Equation[0 ~ F[1](t) + F[2](t), 0 ~ (-e_T*k_c*r1*r2*(exp(E[1]…
:numports => 2
```julia
ReMM[:equations]
```
\begin{align}
0 =& F_1\left( t \right) + F_2\left( t \right) \\
0 =& \frac{ - e_{T} k_{c} r1 r2 \left( - e^{\frac{E_2\left( t \right)}{R T}} + e^{\frac{E_1\left( t \right)}{R T}} \right)}{k_{c} \left( r1 + r2 \right) + r1 e^{\frac{E_1\left( t \right)}{R T}} + r2 e^{\frac{E_2\left( t \right)}{R T}}} + F_1\left( t \right)
\end{align}
We can add this component to the BondGraphs component library
```julia
addlibrary!(Dict(:ReMM => ReMM))
haskey(BondGraphs.DEFAULT_LIBRARY, :ReMM)
```
true
We will create a simple bond graph with this component: $S \rightleftharpoons P$
```julia
bg = BondGraph("Enzyme network")
S = Component(:Ce, "S"; K=1)
P = Component(:Ce, "P"; K=1)
ReMM = Component(:ReMM, "R"; r1=200, r2=200)
add_node!(bg, [S, P, ReMM])
connect!(bg, S, ReMM)
connect!(bg, ReMM, P)
plot(bg)
```
```julia
constitutive_relations(bg)
```
\begin{align}
\frac{dS_{+}q(t)}{dt} =& \frac{R_{+}e_{T} R_{+}k_{c} R_{+}r1 R_{+}r2 \left( P_{+}K P_{+q}\left( t \right) - S_{+}K S_{+q}\left( t \right) \right)}{R_{+}k_{c} R_{+}r1 + R_{+}k_{c} R_{+}r2 + P_{+}K R_{+}r2 P_{+q}\left( t \right) + R_{+}r1 S_{+}K S_{+q}\left( t \right)} \\
\frac{dP_{+}q(t)}{dt} =& \frac{R_{+}e_{T} R_{+}k_{c} R_{+}r1 R_{+}r2 \left( S_{+}K S_{+q}\left( t \right) - P_{+}K P_{+q}\left( t \right) \right)}{R_{+}k_{c} R_{+}r1 + R_{+}k_{c} R_{+}r2 + P_{+}K R_{+}r2 P_{+q}\left( t \right) + R_{+}r1 S_{+}K S_{+q}\left( t \right)}
\end{align}
```julia
constitutive_relations(bg; sub_defaults=true)
```
\begin{align}
\frac{dS_{+}q(t)}{dt} =& \frac{200 P_{+q}\left( t \right) - 200 S_{+q}\left( t \right)}{2 + P_{+q}\left( t \right) + S_{+q}\left( t \right)} \\
\frac{dP_{+}q(t)}{dt} =& \frac{ - 200 P_{+q}\left( t \right) + 200 S_{+q}\left( t \right)}{2 + P_{+q}\left( t \right) + S_{+q}\left( t \right)}
\end{align}
```julia
sol = simulate(bg, (0.,5.); u0=[200,100])
plot(sol)
```
A slightly more complicated example: $A \rightleftharpoons B \rightleftharpoons C$
```julia
rn_abc = @reaction_network ABC_Pathway begin
(1, 1), A <--> B
(1, 1), B <--> C
end
bg = BondGraph(rn_abc)
# Set K values to integers (for easier simplifying)
bg.A.K, bg.B.K, bg.C.K = 1, 1, 1
# Swap in new reaction types
MM1 = Component(:ReMM, "MM1"; r1=100, r2=100)
MM2 = Component(:ReMM, "MM2"; r1=100, r2=100)
swap!(bg, bg.R1, MM1)
swap!(bg, bg.R2, MM2)
plot(bg)
```
```julia
constitutive_relations(bg; sub_defaults=true)
```
\begin{align}
\frac{dA_{+}q(t)}{dt} =& \frac{ - 100 A_{+q}\left( t \right) + 100 B_{+q}\left( t \right)}{2 + A_{+q}\left( t \right) + B_{+q}\left( t \right)} \\
\frac{dB_{+}q(t)}{dt} =& \frac{ - 200 \left( B_{+q}\left( t \right) \right)^{2} + 200 A_{+q}\left( t \right) - 400 B_{+q}\left( t \right) + 200 C_{+q}\left( t \right) + 200 A_{+q}\left( t \right) C_{+q}\left( t \right)}{\left( 2 + A_{+q}\left( t \right) + B_{+q}\left( t \right) \right) \left( 2 + B_{+q}\left( t \right) + C_{+q}\left( t \right) \right)} \\
\frac{dC_{+}q(t)}{dt} =& \frac{100 B_{+q}\left( t \right) - 100 C_{+q}\left( t \right)}{2 + B_{+q}\left( t \right) + C_{+q}\left( t \right)}
\end{align}
```julia
sol = simulate(bg, (0.,20.); u0=[200,50,100])
plot(sol)
```
---
*Notebook source: `Biochemical/EnzymeReactions.ipynb` (`jedforrest/BondGraphsTutorials`, MIT)*
# Lecture 4
## Overfitting and underfitting
```python
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
```
```python
sns.set(rc={'figure.figsize': (12, 8)}, style='white')
np.random.seed(42)
```
## Simulation with a linear model
We generate data according to the following model:
\begin{align}
y &= \alpha_1 x + \alpha_0 + \epsilon \\
\alpha_1 &= 1.5 \\
\alpha_0 &= 2 \\
\epsilon &\sim \mathcal{N}(0, 1)\\
\end{align}
```python
X = np.linspace(0, 10, 30, endpoint=False)
α_1 = 1.5
α_0 = 2
ϵ = np.random.normal(0, 1, size=len(X))
Y = α_1 * X + α_0 + ϵ
```
```python
sns.scatterplot(X, Y)
```
## Linear regression
We now fit a linear model.
$$ \hat{y} = \alpha_1 x + \alpha_0$$
```python
modelo_lineal = np.polyfit(X, Y, deg=1)
```
```python
modelo_lineal
```
array([1.39458098, 2.32137836])
```python
Y_predicted = np.polyval(modelo_lineal, X)
ax = sns.scatterplot(X, Y, label='observado')
sns.lineplot(X, Y_predicted, color='r', label='modelo')
```
Next, we simulate new data using the original model.
```python
X_nuevos = np.linspace(10, 15, 15)
Y_nuevos = α_1 * X_nuevos + α_0 + np.random.normal(0, 1, size=len(X_nuevos))
```
```python
Y_predicted_nuevos = np.polyval(modelo_lineal, X_nuevos)
X_todos = np.append(X, X_nuevos)
Y_todos = np.append(Y, Y_nuevos)
Y_predicted_todos = np.append(Y_predicted, Y_predicted_nuevos)
ax = sns.scatterplot(X, Y, label='observado')
sns.scatterplot(X_nuevos, Y_nuevos, label='nuevo', color='orange', ax=ax)
sns.lineplot(X_todos, Y_predicted_todos, color='r', label='modelo', alpha=0.3, ax=ax)
```
To check how good the fit was, we plot the residuals.
$$e = y - \hat{y}$$
where $y$ are the observed values and $\hat{y}$ are the predictions of our model.
```python
def plot_residuals(x, y, y_pred):
residuals = y - y_pred
_fig, (ax1, ax2) = plt.subplots(1, 2)
sns.scatterplot(x, residuals, ax=ax1)
ax1.axhline(0, linestyle='--', color='grey', alpha=0.6)
sns.distplot(residuals, ax=ax2)
```
```python
plot_residuals(X, Y, Y_predicted)
```
We can also compute the [coefficient of determination](https://es.wikipedia.org/wiki/Coeficiente_de_determinación), known as $R^2$, which measures the proportion of the variance in the data explained by the model.
$$R^2 := 1 - \frac{RSS}{TSS} \\
RSS = \sum (y - \hat{y})^2 \\
TSS = \sum (y - \bar{y})^2$$
where $RSS$ is the [residual sum of squares](https://en.wikipedia.org/wiki/Residual_sum_of_squares), $TSS$ is the [total sum of squares](https://en.wikipedia.org/wiki/Total_sum_of_squares), and $\bar{y}$ is the mean of the observed values of $y$.
```python
def r_squared(y, y_pred):
RSS = ((y - y_pred) ** 2).sum()
TSS = ((y - y.mean()) ** 2).sum()
return 1 - RSS/TSS
```
```python
r_squared(Y, Y_predicted)
```
0.9590924499139448
## Polynomial regression
We now try fitting a 4th-degree [polynomial regression](https://es.wikipedia.org/wiki/Regresión_no_lineal#Regresión_polinomial).
$$ \hat{y} = \sum_{i=0}^{4} \alpha_i x^i$$
```python
modelo_polinomico = np.polyfit(X, Y, deg=4)
modelo_polinomico
```
array([-0.0047652 , 0.09426085, -0.56455497, 2.36610021, 2.24718869])
```python
Y_predicted = np.polyval(modelo_polinomico, X)
ax = sns.scatterplot(X, Y, label='observado')
sns.lineplot(X, Y_predicted, color='r', label='modelo')
```
```python
r_squared(Y, Y_predicted)
```
0.9656338527552731
```python
Y_predicted_nuevos = np.polyval(modelo_polinomico, X_nuevos)
Y_predicted_todos = np.append(Y_predicted, Y_predicted_nuevos)
ax = sns.scatterplot(X, Y, label='observado')
sns.scatterplot(X_nuevos, Y_nuevos, label='nuevo', color='orange', ax=ax)
sns.lineplot(X_todos, Y_predicted_todos, color='r', label='modelo', alpha=0.3, ax=ax)
```
## Simulation with a quadratic model
\begin{align}
y &= \alpha_2 x^2 + \alpha_1 x + \alpha_0 + \epsilon \\
\alpha_2 &= 2 \\
\alpha_1 &= 1.5 \\
\alpha_0 &= 2 \\
\epsilon &\sim \mathcal{N}(0, 1)\\
\end{align}
```python
X = np.linspace(-3, 3, 30, endpoint=False)
α_2 = 2
α_1 = 1.5
α_0 = 2
ϵ = np.random.normal(0, 1, size=len(X))
Y = α_2 * X * X + α_1 * X + α_0 + ϵ
```
```python
sns.scatterplot(X, Y)
```
```python
modelo_lineal = np.polyfit(X, Y, deg=1)
Y_predicted = np.polyval(modelo_lineal, X)
ax = sns.scatterplot(X, Y, label='observado')
sns.lineplot(X, Y_predicted, color='r', label='modelo')
```
```python
r_squared(Y, Y_predicted)
```
0.12354571557502825
```python
plot_residuals(X, Y, Y_predicted)
```
In this case, we can see that our model does not correctly capture the structure of the data.
## Exercise
Fit a quadratic model and compute the coefficient of determination.
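A possible solution sketch (one of many; not part of the original notebook):
```python
# Possible solution: fit a 2nd-degree polynomial and compute R²
modelo_cuadratico = np.polyfit(X, Y, deg=2)
Y_predicted = np.polyval(modelo_cuadratico, X)
ax = sns.scatterplot(X, Y, label='observado')
sns.lineplot(X, Y_predicted, color='r', label='modelo', ax=ax)
r_squared(Y, Y_predicted)
```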
---
*Notebook source: `clases/clase_4/clase_4_overfitting.ipynb` (`lambdaclass/data_etudes`, MIT)*
# Optimization of a Dissipative Quantum Gate
```python
# NBVAL_IGNORE_OUTPUT
%load_ext watermark
import sys
import os
import qutip
import numpy as np
import scipy
import matplotlib
import matplotlib.pylab as plt
import krotov
import copy
from functools import partial
from itertools import product
%watermark -v --iversions
```
Python implementation: CPython
Python version : 3.8.1
IPython version : 7.24.1
numpy : 1.20.3
sys : 3.8.1 (default, Aug 12 2020, 19:33:59)
[GCC 8.3.0]
krotov : 1.2.1+dev
scipy : 1.6.3
matplotlib: 3.4.2
qutip : 4.6.1
$\newcommand{tr}[0]{\operatorname{tr}}
\newcommand{diag}[0]{\operatorname{diag}}
\newcommand{abs}[0]{\operatorname{abs}}
\newcommand{pop}[0]{\operatorname{pop}}
\newcommand{aux}[0]{\text{aux}}
\newcommand{int}[0]{\text{int}}
\newcommand{opt}[0]{\text{opt}}
\newcommand{tgt}[0]{\text{tgt}}
\newcommand{init}[0]{\text{init}}
\newcommand{lab}[0]{\text{lab}}
\newcommand{rwa}[0]{\text{rwa}}
\newcommand{bra}[1]{\langle#1\vert}
\newcommand{ket}[1]{\vert#1\rangle}
\newcommand{Bra}[1]{\left\langle#1\right\vert}
\newcommand{Ket}[1]{\left\vert#1\right\rangle}
\newcommand{Braket}[2]{\left\langle #1\vphantom{#2}\mid{#2}\vphantom{#1}\right\rangle}
\newcommand{ketbra}[2]{\vert#1\rangle\!\langle#2\vert}
\newcommand{op}[1]{\hat{#1}}
\newcommand{Op}[1]{\hat{#1}}
\newcommand{dd}[0]{\,\text{d}}
\newcommand{Liouville}[0]{\mathcal{L}}
\newcommand{DynMap}[0]{\mathcal{E}}
\newcommand{identity}[0]{\mathbf{1}}
\newcommand{Norm}[1]{\lVert#1\rVert}
\newcommand{Abs}[1]{\left\vert#1\right\vert}
\newcommand{avg}[1]{\langle#1\rangle}
\newcommand{Avg}[1]{\left\langle#1\right\rangle}
\newcommand{AbsSq}[1]{\left\vert#1\right\vert^2}
\newcommand{Re}[0]{\operatorname{Re}}
\newcommand{Im}[0]{\operatorname{Im}}$
This example illustrates the optimization for a quantum gate in an open quantum system, where the dynamics is governed by the Liouville-von Neumann equation. A naive extension of a gate optimization to Liouville space would seem to imply that it is necessary to optimize over the full basis of Liouville space (16 matrices, for a two-qubit gate). However, [Goerz et al., New J. Phys. 16, 055012 (2014)][1] showed that this is not necessary, and that a set of 3 density matrices is sufficient to track the optimization.
This example reproduces the "Example II" from that paper, considering the optimization towards a $\sqrt{\text{iSWAP}}$ two-qubit gate on a system of two transmons with a shared transmission line resonator.
[1]: https://michaelgoerz.net/research/Goerz_NJP2014.pdf
**Note**: This notebook uses some parallelization features (`parallel_map`/`multiprocessing`). Unfortunately, on Windows (and macOS with Python >= 3.8), `multiprocessing` does not work correctly for functions defined in a Jupyter notebook (due to the [spawn method](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods) being used on Windows, instead of Unix-`fork`, see also https://stackoverflow.com/questions/45719956). We can use the third-party [loky](https://loky.readthedocs.io/) library to fix this, but this significantly increases the overhead of multi-process parallelization. The use of parallelization here is for illustration only and makes no guarantee of actually improving the runtime of the optimization.
```python
if sys.platform != 'linux':
krotov.parallelization.set_parallelization(use_loky=True)
from krotov.parallelization import parallel_map
```
## The two-transmon system
We consider the Hamiltonian from Eq. (17) in the paper, in the rotating wave approximation, together with spontaneous decay and dephasing of each qubit. Altogether, we define the Liouvillian as follows:
```python
def two_qubit_transmon_liouvillian(
ω1, ω2, ωd, δ1, δ2, J, q1T1, q2T1, q1T2, q2T2, T, Omega, n_qubit
):
from qutip import tensor, identity, destroy
b1 = tensor(identity(n_qubit), destroy(n_qubit))
b2 = tensor(destroy(n_qubit), identity(n_qubit))
H0 = (
(ω1 - ωd - δ1 / 2) * b1.dag() * b1
+ (δ1 / 2) * b1.dag() * b1 * b1.dag() * b1
+ (ω2 - ωd - δ2 / 2) * b2.dag() * b2
+ (δ2 / 2) * b2.dag() * b2 * b2.dag() * b2
+ J * (b1.dag() * b2 + b1 * b2.dag())
)
H1_re = 0.5 * (b1 + b1.dag() + b2 + b2.dag()) # 0.5 is due to RWA
H1_im = 0.5j * (b1.dag() - b1 + b2.dag() - b2)
H = [H0, [H1_re, Omega], [H1_im, ZeroPulse]]
A1 = np.sqrt(1 / q1T1) * b1 # decay of qubit 1
A2 = np.sqrt(1 / q2T1) * b2 # decay of qubit 2
A3 = np.sqrt(1 / q1T2) * b1.dag() * b1 # dephasing of qubit 1
A4 = np.sqrt(1 / q2T2) * b2.dag() * b2 # dephasing of qubit 2
L = krotov.objectives.liouvillian(H, c_ops=[A1, A2, A3, A4])
return L
```
We will use internal units GHz and ns. Values in GHz contain an implicit factor 2π, and MHz and μs are converted to GHz and ns, respectively:
```python
GHz = 2 * np.pi
MHz = 1e-3 * GHz
ns = 1
μs = 1000 * ns
```
This implicit factor $2 \pi$ is because frequencies ($\nu$) convert to energies as $E = h \nu$, but our propagation routines assume a unit $\hbar = 1$ for energies. Thus, the factor $h / \hbar = 2 \pi$.
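For instance, converting the first qubit frequency into the internal angular-frequency units:
```python
# Sanity check of the convention: 4.3796 GHz corresponds to
# 4.3796 * 2π ≈ 27.52 rad/ns in internal units
print(4.3796 * GHz)
```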
We will use the same parameters as those given in Table 2 of the paper:
```python
ω1 = 4.3796 * GHz # qubit frequency 1
ω2 = 4.6137 * GHz # qubit frequency 2
ωd = 4.4985 * GHz # drive frequency
δ1 = -239.3 * MHz # anharmonicity 1
δ2 = -242.8 * MHz # anharmonicity 2
J = -2.3 * MHz # effective qubit-qubit coupling
q1T1 = 38.0 * μs # decay time for qubit 1
q2T1 = 32.0 * μs # decay time for qubit 2
q1T2 = 29.5 * μs # dephasing time for qubit 1
q2T2 = 16.0 * μs # dephasing time for qubit 2
T = 400 * ns # gate duration
```
```python
tlist = np.linspace(0, T, 2000)
```
While in the original paper, each transmon was cut off at 6 levels, here we truncate at 5 levels. This makes the propagation faster, while potentially introducing a slightly larger truncation error.
```python
n_qubit = 5 # number of transmon levels to consider
```
In the Liouvillian, note that the control is split into a separate real and imaginary part. As a guess control, we use a real-valued constant pulse with an amplitude of 35 MHz, acting over 400 ns, with a switch-on and switch-off during the first and last 20 ns (see plot below).
```python
def Omega(t, args):
E0 = 35.0 * MHz
return E0 * krotov.shapes.flattop(t, 0, T, t_rise=(20 * ns), func='sinsq')
```
The imaginary part starts out as zero:
```python
def ZeroPulse(t, args):
return 0.0
```
We can now instantiate the Liouvillian:
```python
L = two_qubit_transmon_liouvillian(
ω1, ω2, ωd, δ1, δ2, J, q1T1, q2T1, q1T2, q2T2, T, Omega, n_qubit
)
```
The guess pulse looks as follows:
```python
def plot_pulse(pulse, tlist, xlimit=None):
fig, ax = plt.subplots()
if callable(pulse):
pulse = np.array([pulse(t, None) for t in tlist])
ax.plot(tlist, pulse/MHz)
ax.set_xlabel('time (ns)')
ax.set_ylabel('pulse amplitude (MHz)')
if xlimit is not None:
ax.set_xlim(xlimit)
plt.show(fig)
```
```python
plot_pulse(L[1][1], tlist)
```
## Optimization objectives
Our target gate is $\Op{O} = \sqrt{\text{iSWAP}}$:
```python
SQRTISWAP = qutip.Qobj(np.array(
[[1, 0, 0, 0],
[0, 1 / np.sqrt(2), 1j / np.sqrt(2), 0],
[0, 1j / np.sqrt(2), 1 / np.sqrt(2), 0],
[0, 0, 0, 1]]),
dims=[[2, 2], [2, 2]]
)
```
The key idea explored in the paper is that a set of three density matrices is sufficient to track the optimization
$$
\begin{align}
\Op{\rho}_1
&= \sum_{i=1}^{d} \frac{2 (d-i+1)}{d (d+1)} \ketbra{i}{i} \\
\Op{\rho}_2
&= \sum_{i,j=1}^{d} \frac{1}{d} \ketbra{i}{j} \\
\Op{\rho}_3
&= \sum_{i=1}^{d} \frac{1}{d} \ketbra{i}{i}
\end{align}
$$
In our case, $d=4$ for a two qubit-gate, and the $\ket{i}$, $\ket{j}$ are the canonical basis states $\ket{00}$, $\ket{01}$, $\ket{10}$, $\ket{11}$
```python
ket00 = qutip.ket((0, 0), dim=(n_qubit, n_qubit))
ket01 = qutip.ket((0, 1), dim=(n_qubit, n_qubit))
ket10 = qutip.ket((1, 0), dim=(n_qubit, n_qubit))
ket11 = qutip.ket((1, 1), dim=(n_qubit, n_qubit))
basis = [ket00, ket01, ket10, ket11]
```
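For illustration (not needed for the optimization, since `krotov.gate_objectives` below constructs these states automatically), the three density matrices can be built by hand; their purities evaluate to the 0.3, 1.0, and 0.25 that will be divided out of the weights below:
```python
# Illustration only: construct ρ₁, ρ₂, ρ₃ by hand and verify their purities
d = len(basis)  # d = 4
rho_1 = sum(
    (2 * (d - i) / (d * (d + 1))) * (psi * psi.dag())
    for (i, psi) in enumerate(basis)
)
rho_2 = sum(psi * phi.dag() for psi in basis for phi in basis) / d
rho_3 = sum(psi * psi.dag() for psi in basis) / d
print([round((rho * rho).tr().real, 2) for rho in (rho_1, rho_2, rho_3)])
# [0.3, 1.0, 0.25]
```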
The three density matrices play different roles in the optimization, and, as shown in the paper, convergence may improve significantly by weighting the states relative to each other. For this example, we place a strong emphasis on the optimization $\Op{\rho}_1 \rightarrow \Op{O} \Op{\rho}_1 \Op{O}^\dagger$, by a factor of 20. This reflects that the hardest part of the optimization is identifying the basis in which the gate is diagonal. We will be using the real-part functional ($J_{T,\text{re}}$) to evaluate the success of $\Op{\rho}_i \rightarrow \Op{O}\Op{\rho}_i\Op{O}^\dagger$. Because $\Op{\rho}_1$ and $\Op{\rho}_3$ are mixed states, the Hilbert-Schmidt overlap will take values smaller than one in the optimal case. To compensate, we divide the weights by the purity of the respective states.
```python
weights = np.array([20, 1, 1], dtype=np.float64)
weights *= len(weights) / np.sum(weights) # manual normalization
weights /= np.array([0.3, 1.0, 0.25]) # purities
```
The `krotov.gate_objectives` routine can initialize the density matrices $\Op{\rho}_1$, $\Op{\rho}_2$, $\Op{\rho}_3$ automatically, via the parameter `liouville_states_set`. Alternatively, we could also use the `'full'` basis of 16 matrices or the extended set of $d+1 = 5$ pure-state density matrices.
```python
objectives = krotov.gate_objectives(
basis,
SQRTISWAP,
L,
liouville_states_set='3states',
weights=weights,
normalize_weights=False,
)
objectives
```
[Objective[ρ₀[5⊗5,5⊗5] to ρ₁[5⊗5,5⊗5] via [𝓛₀[[5⊗5,5⊗5],[5⊗5,5⊗5]], [𝓛₁[[5⊗5,5⊗5],[5⊗5,5⊗5]], u₁(t)], [𝓛₂[[5⊗5,5⊗5],[5⊗5,5⊗5]], u₂(t)]]],
Objective[ρ₂[5⊗5,5⊗5] to ρ₃[5⊗5,5⊗5] via [𝓛₀[[5⊗5,5⊗5],[5⊗5,5⊗5]], [𝓛₁[[5⊗5,5⊗5],[5⊗5,5⊗5]], u₁(t)], [𝓛₂[[5⊗5,5⊗5],[5⊗5,5⊗5]], u₂(t)]]],
Objective[ρ₄[5⊗5,5⊗5] to ρ₅[5⊗5,5⊗5] via [𝓛₀[[5⊗5,5⊗5],[5⊗5,5⊗5]], [𝓛₁[[5⊗5,5⊗5],[5⊗5,5⊗5]], u₁(t)], [𝓛₂[[5⊗5,5⊗5],[5⊗5,5⊗5]], u₂(t)]]]]
The use of `normalize_weights=False` is because we have included the purities in the weights, as discussed above.
## Dynamics under the Guess Pulse
For numerical efficiency, we will use a stateful density matrix propagator for the analysis of both the guess and the optimized controls.
A true physical measure for the success of the optimization is the "average gate fidelity". Evaluating the fidelity requires simulating the dynamics of the full basis of Liouville space:
```python
full_liouville_basis = [psi * phi.dag() for (psi, phi) in product(basis, basis)]
```
We propagate these under the guess control:
```python
def propagate_guess(initial_state):
return objectives[0].mesolve(
tlist,
rho0=initial_state,
).states[-1]
```
```python
full_states_T = parallel_map(
propagate_guess, values=full_liouville_basis,
)
```
```python
print("F_avg = %.3f" % krotov.functionals.F_avg(full_states_T, basis, SQRTISWAP))
```
F_avg = 0.344
Note that we use $F_{T,\text{re}}$, not $F_{\text{avg}}$ to steer the optimization, as the Krotov boundary condition $\frac{\partial F_{\text{avg}}}{\partial \rho^\dagger}$ would be non-trivial.
Before doing the optimization, we can look at the population dynamics under the guess pulse. For this purpose, we propagate the pure-state density matrices corresponding to the canonical logical basis in Hilbert space, and obtain the expectation values for the projection onto these same states:
```python
rho00, rho01, rho10, rho11 = [qutip.ket2dm(psi) for psi in basis]
```
```python
def propagate_guess_for_expvals(initial_state):
return objectives[0].propagate(
tlist,
propagator=krotov.propagators.DensityMatrixODEPropagator(),
rho0=initial_state,
e_ops=[rho00, rho01, rho10, rho11]
)
```
```python
def plot_population_dynamics(dyn00, dyn01, dyn10, dyn11):
fig, axs = plt.subplots(ncols=2, nrows=2, figsize=(16, 8))
axs = np.ndarray.flatten(axs)
labels = ['00', '01', '10', '11']
dyns = [dyn00, dyn01, dyn10, dyn11]
for (ax, dyn, title) in zip(axs, dyns, labels):
for (i, label) in enumerate(labels):
ax.plot(dyn.times, dyn.expect[i], label=label)
ax.legend()
ax.set_title(title)
plt.show(fig)
```
```python
plot_population_dynamics(
*parallel_map(
propagate_guess_for_expvals,
values=[rho00, rho01, rho10, rho11],
)
)
```
## Optimization
We now define the optimization parameters for the controls: the Krotov step size $\lambda_a$ and the update shape that will ensure that the pulse switch-on and switch-off stay intact.
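Schematically, each Krotov iteration updates the control according to
$$\Delta\epsilon(t) \propto \frac{S(t)}{\lambda_a} \Im \sum_k \Big\langle \chi_k(t) \Big\vert \frac{\partial \op{H}}{\partial \epsilon} \Big\vert \phi_k(t) \Big\rangle$$
(written here for the Hilbert-space case; the Liouville-space analogue is used internally). The update shape $S(t)$ forces the update to vanish at the beginning and end of the pulse, and a larger $\lambda_a$ yields smaller, more conservative updates.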
```python
pulse_options = {
L[i][1]: dict(
lambda_a=1.0,
update_shape=partial(
krotov.shapes.flattop, t_start=0, t_stop=T, t_rise=(20 * ns))
)
for i in [1, 2]
}
```
Then we run the optimization. Only the first few iterations are executed here; the full 2000-iteration result is loaded from a dump file below.
```python
opt_result = krotov.optimize_pulses(
objectives,
pulse_options,
tlist,
propagator=krotov.propagators.DensityMatrixODEPropagator(reentrant=True),
chi_constructor=krotov.functionals.chis_re,
info_hook=krotov.info_hooks.print_table(J_T=krotov.functionals.J_T_re),
iter_stop=3,
)
```
iter. J_T ∑∫gₐ(t)dt J ΔJ_T ΔJ secs
0 1.22e-01 0.00e+00 1.22e-01 n/a n/a 9
1 7.49e-02 2.34e-02 9.83e-02 -4.67e-02 -2.33e-02 26
2 7.41e-02 4.06e-04 7.45e-02 -8.12e-04 -4.06e-04 26
3 7.33e-02 3.78e-04 7.37e-02 -7.55e-04 -3.77e-04 26
(this takes a while)...
```python
dumpfile = "./3states_opt_result.dump"
if os.path.isfile(dumpfile):
opt_result = krotov.result.Result.load(dumpfile, objectives)
else:
opt_result = krotov.optimize_pulses(
objectives,
pulse_options,
tlist,
propagator=krotov.propagators.DensityMatrixODEPropagator(reentrant=True),
chi_constructor=krotov.functionals.chis_re,
info_hook=krotov.info_hooks.print_table(J_T=krotov.functionals.J_T_re),
iter_stop=5,
continue_from=opt_result
)
opt_result.dump(dumpfile)
```
```python
opt_result
```
Krotov Optimization Result
--------------------------
- Started at 2019-02-25 00:43:31
- Number of objectives: 3
- Number of iterations: 2000
- Reason for termination: Reached 2000 iterations
- Ended at 2019-02-25 23:19:34 (22:36:03)
## Optimization result
```python
optimized_control = opt_result.optimized_controls[0] + 1j * opt_result.optimized_controls[1]
```
```python
plot_pulse(np.abs(optimized_control), tlist)
```
```python
def propagate_opt(initial_state):
return opt_result.optimized_objectives[0].propagate(
tlist,
propagator=krotov.propagators.DensityMatrixODEPropagator(),
rho0=initial_state,
).states[-1]
```
```python
opt_full_states_T = parallel_map(
propagate_opt, values=full_liouville_basis,
)
```
```python
print("F_avg = %.3f" % krotov.functionals.F_avg(opt_full_states_T, basis, SQRTISWAP))
```
F_avg = 0.977
```python
def propagate_opt_for_expvals(initial_state):
return opt_result.optimized_objectives[0].propagate(
tlist,
propagator=krotov.propagators.DensityMatrixODEPropagator(),
rho0=initial_state,
e_ops=[rho00, rho01, rho10, rho11]
)
```
Plotting the population dynamics, we see the expected behavior for the $\sqrt{\text{iSWAP}}$ gate.
```python
plot_population_dynamics(
*parallel_map(
propagate_opt_for_expvals,
values=[rho00, rho01, rho10, rho11],
)
)
```
```python
def plot_convergence(result):
fig, ax = plt.subplots()
ax.semilogy(result.iters, result.info_vals)
ax.set_xlabel('OCT iteration')
ax.set_ylabel(r'optimization error $J_{T, re}$')
plt.show(fig)
```
```python
plot_convergence(opt_result)
```
---
*Notebook source: `docs/notebooks/06_example_3states.ipynb` (`mcditoos/krotov`, BSD-3-Clause)*
# Solving Differential Equations No. 5-3: Second-Order Linear Homogeneous Differential Equations with Constant Coefficients
### Student ID [_________] Class [_____] Class No. [_____] Name [_______________]
########### Second-order linear homogeneous differential equation with constant coefficients
$$ \frac{d^2y}{dx^2}+b\frac{dy}{dx} +cy = 0 ------------(1)$$
The characteristic equation obtained from Eq. (1) above is $$ \lambda^2 + b \lambda + c =0 $$
By the quadratic formula, $$ \lambda = \frac{-b \pm \sqrt{ b^2 - 4\times 1\times c }}{2\times1}$$
(1) If the discriminant $ b^2-4\times c >0 $,
the two real roots $ \lambda_1 ,\lambda_2 $ give the general solution
$$ y = C_1 e^{\lambda_1 x} + C_2 e^{\lambda_2 x} $$
(2) If the discriminant $ b^2-4\times c = 0 $, the single repeated real root $ \lambda $ gives the general solution
$$ y = ( C_1+ C_2 x) e^{\lambda x} $$
(3) If the discriminant $ b^2-4\times c < 0 $, the complex-conjugate roots $ \lambda = \alpha \pm i \beta $ give the general solution
$$ y = e^{\alpha x}( C_1 \cos( \beta x) + C_2 \sin( \beta x)) $$
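The three cases can also be checked programmatically. A small sketch (not part of the original worksheet); the three $(b, c)$ pairs correspond to the first three problems below:
```python
# Sketch: classify the characteristic equation λ² + bλ + c = 0
# by the sign of its discriminant b² - 4c
def classify(b, c):
    D = b**2 - 4*c
    if D > 0:
        return 'two distinct real roots (case 1)'
    elif D == 0:
        return 'one repeated real root (case 2)'
    return 'complex conjugate roots (case 3)'

for b, c in [(-3, 2), (-6, 9), (-2, 3)]:
    print((b, c), '->', classify(b, c))
```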
```python
from sympy import *
x, n , a ,C1, C2 ,C3 = symbols('x n a C1 C2 C3')
f,g,y = symbols('f g y' ,cls=Function)
init_printing()
m ='5///3'
i =0
```
```python
diffeq = Eq(y(x).diff(x, 2)-3* y(x).diff(x, 1) + 2*y(x) ,0)
i =0
i=i+1
print( 'No.',m,'---',i)
diffeq
```
```python
simplify(dsolve(diffeq, y(x)))
```
```python
diffeq = Eq(y(x).diff(x, 2)-6* y(x).diff(x, 1) + 9*y(x) ,0)
i=i+1
print( 'No.',m,'---',i)
diffeq
```
```python
simplify(dsolve(diffeq, y(x)))
```
```python
diffeq = Eq(y(x).diff(x, 2)-2* y(x).diff(x, 1) + 3*y(x) ,0)
i=i+1
print( 'No.',m,'---',i)
diffeq
```
```python
simplify(dsolve(diffeq, y(x)))
```
```python
diffeq = Eq(y(x).diff(x, 2)+4* y(x).diff(x, 1) + 4*y(x) ,0)
i=i+1
print( 'No.',m,'---',i)
diffeq
```
```python
soln = simplify(dsolve(diffeq, y(x)))
soln
```
```python
# y(0)=3, y'(0) = -4
constants = solve([soln.rhs.subs(x,0)-3 ,soln.rhs.diff(x,1).subs(x,0)- -4])
constants
```
```python
solne = soln.subs(constants)
solne
```
```python
diffeq = Eq(y(x).diff(x, 2)+4* y(x).diff(x, 1) + 13*y(x) ,0)
i=i+1
print( 'No.',m,'---',i)
diffeq
```
```python
soln = simplify(dsolve(diffeq, y(x)))
soln
```
```python
# y(π)= -6, y'(π) = 24
constants = solve([soln.rhs.subs(x,pi)- -6 ,soln.rhs.diff(x,1).subs(x,pi)- 24])
constants
```
```python
solne = soln.subs(constants)
solne
```
```python
diffeq = Eq(y(x).diff(x, 2)+3* y(x).diff(x, 1) + 2*y(x) ,0)
i=i+1
print( 'No.',m,'---',i)
diffeq
```
```python
soln = simplify(dsolve(diffeq, y(x)))
soln
```
```python
# y(0)=1, y'(0) = 2
constants = solve([soln.rhs.subs(x,0)- 1 ,soln.rhs.diff(x,1).subs(x,0)- 2])
constants
```
```python
solne = soln.subs(constants)
solne
```
```python
```
---
*Notebook source: `11_20181211-bibunhoteisiki-7-3-Ex&ans.ipynb` (`kt-pro-git-1/Calculus_Differential_Equation-public`, MIT)*
# Double Machine Learning: Use Cases and Examples
Double Machine Learning (DML) is an algorithm that applies arbitrary machine learning methods
to fit the treatment and response, then uses a linear model to predict the response residuals
from the treatment residuals.
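In pseudocode, the partialling-out idea behind these estimators can be written in a few lines. The sketch below is illustrative only (EconML's actual implementations add proper cross-fitting over folds, featurization of $X$ for heterogeneity, and statistical inference):
```python
# Minimal DML sketch (illustrative only, not the EconML implementation)
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

def dml_ate_sketch(Y, T, W):
    # First stage: residualize outcome and treatment on the confounders W
    Y_res = Y - cross_val_predict(RandomForestRegressor(), W, Y, cv=2)
    T_res = T - cross_val_predict(RandomForestRegressor(), W, T, cv=2)
    # Second stage: regress outcome residuals on treatment residuals
    final = LinearRegression(fit_intercept=False).fit(T_res.reshape(-1, 1), Y_res)
    return final.coef_[0]  # constant (average) treatment effect
```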
The EconML SDK implements the following DML classes:
* LinearDMLCateEstimator: suitable for estimating heterogeneous treatment effects.
* SparseLinearDMLCateEstimator: suitable for the case when $W$ is a high-dimensional vector and both the first-stage and second-stage estimates are linear.
In this notebook, we show the performance of DML on both synthetic and observational data.
**Notebook contents:**
1. Example usage with single continuous treatment synthetic data
2. Example usage with single binary treatment synthetic data
3. Example usage with multiple continuous treatment synthetic data
4. Example usage with single continuous treatment observational data
5. Example usage with multiple continuous treatment, multiple outcome observational data
```python
import econml
```
```python
## Ignore warnings
import warnings
warnings.filterwarnings('ignore')
```
```python
# Main imports
from econml.dml import DMLCateEstimator, LinearDMLCateEstimator,SparseLinearDMLCateEstimator
# Helper imports
import numpy as np
from itertools import product
from sklearn.linear_model import Lasso, LassoCV, LogisticRegression, LogisticRegressionCV,LinearRegression,MultiTaskElasticNet,MultiTaskElasticNetCV
from sklearn.ensemble import RandomForestRegressor,RandomForestClassifier
from sklearn.preprocessing import PolynomialFeatures
import matplotlib.pyplot as plt
import matplotlib
from sklearn.model_selection import train_test_split
%matplotlib inline
```
## 1. Example Usage with Single Continuous Treatment Synthetic Data and Model Selection
### 1.1. DGP
We use the data generating process (DGP) from [here](https://arxiv.org/abs/1806.03467). The DGP is described by the following equations:
\begin{align}
T =& \langle W, \beta\rangle + \eta, & \;\eta \sim \text{Uniform}(-1, 1)\\
Y =& T\cdot \theta(X) + \langle W, \gamma\rangle + \epsilon, &\; \epsilon \sim \text{Uniform}(-1, 1)\\
W \sim& \text{Normal}(0,\, I_{n_w})\\
X \sim& \text{Uniform}(0,1)^{n_x}
\end{align}
where $W$ is a matrix of high-dimensional confounders and $\beta, \gamma$ have high sparsity.
For this DGP,
\begin{align}
\theta(x) = \exp(2\cdot x_1).
\end{align}
```python
# Treatment effect function
def exp_te(x):
return np.exp(2*x[0])
```
```python
# DGP constants
np.random.seed(123)
n = 2000
n_w = 30
support_size = 5
n_x = 1
# Outcome support
support_Y = np.random.choice(np.arange(n_w), size=support_size, replace=False)
coefs_Y = np.random.uniform(0, 1, size=support_size)
epsilon_sample = lambda n: np.random.uniform(-1, 1, size=n)
# Treatment support
support_T = support_Y
coefs_T = np.random.uniform(0, 1, size=support_size)
eta_sample = lambda n: np.random.uniform(-1, 1, size=n)
# Generate controls, covariates, treatments and outcomes
W = np.random.normal(0, 1, size=(n, n_w))
X = np.random.uniform(0, 1, size=(n, n_x))
# Heterogeneous treatment effects
TE = np.array([exp_te(x_i) for x_i in X])
T = np.dot(W[:, support_T], coefs_T) + eta_sample(n)
Y = TE * T + np.dot(W[:, support_Y], coefs_Y) + epsilon_sample(n)
Y_train, Y_val, T_train, T_val, X_train, X_val, W_train, W_val = train_test_split(Y, T, X, W, test_size=.2)
# Generate test data
X_test = np.array(list(product(np.arange(0, 1, 0.01), repeat=n_x)))
```
### 1.2. Train Estimator
We train models in three different ways, and compare their performance.
#### 1.2.1. Default Setting
```python
est = LinearDMLCateEstimator(model_y=RandomForestRegressor(),
model_t=RandomForestRegressor(),
random_state=123)
est.fit(Y_train, T_train, X_train, W_train)
te_pred = est.effect(X_test)
```
#### 1.2.2. Polynomial Features for Heterogeneity
```python
est1 = SparseLinearDMLCateEstimator(model_y=RandomForestRegressor(),
model_t=RandomForestRegressor(),
featurizer=PolynomialFeatures(degree=3),
random_state=123)
est1.fit(Y_train, T_train, X_train, W_train)
te_pred1=est1.effect(X_test)
```
#### 1.2.3. Polynomial Features with regularization
```python
est2 = DMLCateEstimator(model_y=RandomForestRegressor(),
model_t=RandomForestRegressor(),
model_final=Lasso(alpha=0.1, fit_intercept=False),
featurizer=PolynomialFeatures(degree=10),
random_state=123)
est2.fit(Y_train, T_train, X_train, W_train)
te_pred2=est2.effect(X_test)
```
#### 1.2.4 Random Forest Final Stage
```python
from econml.dml import ForestDMLCateEstimator
# One can replace model_y and model_t with any scikit-learn regressor and classifier correspondingly
# as long as it accepts the sample_weight keyword argument at fit time.
est3 = ForestDMLCateEstimator(model_y=RandomForestRegressor(),
model_t=RandomForestRegressor(),
discrete_treatment=False,
n_estimators=1000,
subsample_fr=.8,
min_samples_leaf=10,
min_impurity_decrease=0.001,
verbose=0, min_weight_fraction_leaf=.01)
est3.fit(Y_train, T_train, X_train, W_train)
te_pred3 = est3.effect(X_test)
```
### 1.3. Performance Visualization
```python
plt.figure(figsize=(10,6))
plt.plot(X_test, te_pred, label='DML default')
plt.plot(X_test, te_pred1, label='DML polynomial degree=3')
plt.plot(X_test, te_pred2, label='DML polynomial degree=10 with Lasso')
plt.plot(X_test, te_pred3, label='ForestDML')
expected_te = np.array([exp_te(x_i) for x_i in X_test])
plt.plot(X_test, expected_te, 'b--', label='True effect')
plt.ylabel('Treatment Effect')
plt.xlabel('x')
plt.legend()
plt.show()
```
### 1.4. Model selection
For the models above, we can use the `score` function to estimate final-model performance. The score is the MSE of the final-stage $Y$ residual, which can be seen as a proxy for the MSE of the treatment effect.
```python
score={}
score["DML default"] = est.score(Y_val, T_val, X_val, W_val)
score["DML polynomial degree=2"] = est1.score(Y_val, T_val, X_val, W_val)
score["DML polynomial degree=10 with Lasso"] = est2.score(Y_val, T_val, X_val, W_val)
score["ForestDML"] = est3.score(Y_val, T_val, X_val, W_val)
score
```
{'DML default': 3.1627949174866945,
'DML polynomial degree=2': 2.5474269412280446,
'DML polynomial degree=10 with Lasso': 3.2675102573467165,
'ForestDML': 3.9378482922121463}
```python
print("best model selected by score: ",min(score,key=lambda x: score.get(x)))
```
best model selected by score: DML polynomial degree=2
```python
mse_te={}
mse_te["DML default"] = ((expected_te - te_pred)**2).mean()
mse_te["DML polynomial degree=2"] = ((expected_te - te_pred1)**2).mean()
mse_te["DML polynomial degree=10 with Lasso"] = ((expected_te - te_pred2)**2).mean()
mse_te["ForestDML"] = ((expected_te - te_pred3)**2).mean()
mse_te
```
{'DML default': 0.1921791643128576,
'DML polynomial degree=2': 0.030107565825366556,
'DML polynomial degree=10 with Lasso': 0.270549915512218,
'ForestDML': 0.13621133188807513}
```python
print("best model selected by MSE of TE: ", min(mse_te, key=lambda x: mse_te.get(x)))
```
best model selected by MSE of TE: DML polynomial degree=2
## 2. Example Usage with Single Binary Treatment Synthetic Data and Confidence Intervals
### 2.1. DGP
We use the following DGP:
\begin{align}
T \sim & \text{Bernoulli}\left(f(W)\right), &\; f(W)=\sigma(\langle W, \beta\rangle + \eta), \;\eta \sim \text{Uniform}(-1, 1)\\
Y = & T\cdot \theta(X) + \langle W, \gamma\rangle + \epsilon, & \; \epsilon \sim \text{Uniform}(-1, 1)\\
W \sim & \text{Normal}(0,\, I_{n_w}) & \\
X \sim & \text{Uniform}(0,\, 1)^{n_x}
\end{align}
where $W$ is a matrix of high-dimensional confounders, $\beta, \gamma$ have high sparsity and $\sigma$ is the sigmoid function.
For this DGP,
\begin{align}
\theta(x) = \exp( 2\cdot x_1 ).
\end{align}
```python
# Treatment effect function
def exp_te(x):
    return np.exp(2 * x[0])

# DGP constants
np.random.seed(123)
n = 1000
n_w = 30
support_size = 5
n_x = 4
# Outcome support
support_Y = np.random.choice(range(n_w), size=support_size, replace=False)
coefs_Y = np.random.uniform(0, 1, size=support_size)
epsilon_sample = lambda n:np.random.uniform(-1, 1, size=n)
# Treatment support
support_T = support_Y
coefs_T = np.random.uniform(0, 1, size=support_size)
eta_sample = lambda n: np.random.uniform(-1, 1, size=n)
# Generate controls, covariates, treatments and outcomes
W = np.random.normal(0, 1, size=(n, n_w))
X = np.random.uniform(0, 1, size=(n, n_x))
# Heterogeneous treatment effects
TE = np.array([exp_te(x_i) for x_i in X])
# Define treatment
log_odds = np.dot(W[:, support_T], coefs_T) + eta_sample(n)
T_sigmoid = 1/(1 + np.exp(-log_odds))
T = np.array([np.random.binomial(1, p) for p in T_sigmoid])
# Define the outcome
Y = TE * T + np.dot(W[:, support_Y], coefs_Y) + epsilon_sample(n)
# get testing data
X_test = np.random.uniform(0, 1, size=(n, n_x))
X_test[:, 0] = np.linspace(0, 1, n)
```
### 2.2. Train Estimator
```python
est = LinearDMLCateEstimator(model_y=RandomForestRegressor(),
model_t=RandomForestClassifier(min_samples_leaf=10),
discrete_treatment=True,
linear_first_stages=False,
n_splits=6)
est.fit(Y, T, X, W, inference='statsmodels')
te_pred = est.effect(X_test)
lb, ub = est.effect_interval(X_test, alpha=0.01)
```
```python
est2 = SparseLinearDMLCateEstimator(model_y=RandomForestRegressor(),
model_t=RandomForestClassifier(min_samples_leaf=10),
discrete_treatment=True,
featurizer=PolynomialFeatures(degree=2),
linear_first_stages=False,
n_splits=6)
est2.fit(Y, T, X, W, inference='debiasedlasso')
te_pred2 = est2.effect(X_test)
lb2, ub2 = est2.effect_interval(X_test, alpha=0.01)
```
```python
est3 = ForestDMLCateEstimator(model_y=RandomForestRegressor(),
model_t=RandomForestClassifier(min_samples_leaf=10),
discrete_treatment=True,
n_estimators=1000,
subsample_fr=.8,
min_samples_leaf=10,
min_impurity_decrease=0.001,
verbose=0, min_weight_fraction_leaf=.01,
n_crossfit_splits=6)
est3.fit(Y, T, X, W, inference='blb')
te_pred3 = est3.effect(X_test)
lb3, ub3 = est3.effect_interval(X_test, alpha=0.01)
```
### 2.3. Performance Visualization
```python
expected_te=np.array([exp_te(x_i) for x_i in X_test])
plt.figure(figsize=(16,6))
plt.subplot(1, 3, 1)
plt.plot(X_test[:, 0], te_pred, label='LinearDML', alpha=.6)
plt.fill_between(X_test[:, 0], lb, ub, alpha=.4)
plt.plot(X_test[:, 0], expected_te, 'b--', label='True effect')
plt.ylabel('Treatment Effect')
plt.xlabel('x')
plt.legend()
plt.subplot(1, 3, 2)
plt.plot(X_test[:, 0], te_pred2, label='SparseLinearDML', alpha=.6)
plt.fill_between(X_test[:, 0], lb2, ub2, alpha=.4)
plt.plot(X_test[:, 0], expected_te, 'b--', label='True effect')
plt.ylabel('Treatment Effect')
plt.xlabel('x')
plt.legend()
plt.subplot(1, 3, 3)
plt.plot(X_test[:, 0], te_pred3, label='ForestDML', alpha=.6)
plt.fill_between(X_test[:, 0], lb3, ub3, alpha=.4)
plt.plot(X_test[:, 0], expected_te, 'b--', label='True effect')
plt.ylabel('Treatment Effect')
plt.xlabel('x')
plt.legend()
plt.show()
```
## 3. Example Usage with Multiple Continuous Treatment Synthetic Data
### 3.1. DGP
We use the data generating process (DGP) from [here](https://arxiv.org/abs/1806.03467), and modify the treatment to generate multiple treatments. The DGP is described by the following equations:
\begin{align}
T =& \langle W, \beta\rangle + \eta, & \;\eta \sim \text{Uniform}(-1, 1)\\
Y =& T\cdot \theta_{1}(X) + T^{2}\cdot \theta_{2}(X) + \langle W, \gamma\rangle + \epsilon, &\; \epsilon \sim \text{Uniform}(-1, 1)\\
W \sim& \text{Normal}(0,\, I_{n_w})\\
X \sim& \text{Uniform}(0,1)^{n_x}
\end{align}
where $W$ is a matrix of high-dimensional confounders and $\beta, \gamma$ have high sparsity.
For this DGP,
\begin{align}
\theta_{1}(x) = \exp(2\cdot x_1)\\
\theta_{2}(x) = x_1^{2}\\
\end{align}
```python
# DGP constants
np.random.seed(123)
n = 6000
n_w = 30
support_size = 5
n_x = 5
# Outcome support
support_Y = np.random.choice(np.arange(n_w), size=support_size, replace=False)
coefs_Y = np.random.uniform(0, 1, size=support_size)
epsilon_sample = lambda n: np.random.uniform(-1, 1, size=n)
# Treatment support
support_T = support_Y
coefs_T = np.random.uniform(0, 1, size=support_size)
eta_sample = lambda n: np.random.uniform(-1, 1, size=n)
# Generate controls, covariates, treatments and outcomes
W = np.random.normal(0, 1, size=(n, n_w))
X = np.random.uniform(0, 1, size=(n, n_x))
# Heterogeneous treatment effects
TE1 = np.array([x_i[0] for x_i in X])
TE2 = np.array([x_i[0]**2 for x_i in X]).flatten()
T = np.dot(W[:, support_T], coefs_T) + eta_sample(n)
Y = TE1 * T + TE2 * T**2 + np.dot(W[:, support_Y], coefs_Y) + epsilon_sample(n)
# Generate test data
X_test = np.random.uniform(0, 1, size=(100, n_x))
X_test[:, 0] = np.linspace(0, 1, 100)
```
### 3.2. Train Estimator
```python
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor
from sklearn.linear_model import ElasticNetCV
est = LinearDMLCateEstimator(model_y=GradientBoostingRegressor(n_estimators=100, max_depth=3, min_samples_leaf=20),
model_t=MultiOutputRegressor(GradientBoostingRegressor(n_estimators=100,
max_depth=3,
min_samples_leaf=20)),
featurizer=PolynomialFeatures(degree=2, include_bias=False),
linear_first_stages=False,
n_splits=5)
```
```python
T = T.reshape(-1,1)
est.fit(Y, np.concatenate((T, T**2), axis=1), X, W, inference='statsmodels')
```
<econml.dml.LinearDMLCateEstimator at 0x247c089d160>
```python
te_pred = est.const_marginal_effect(X_test)
```
```python
lb, ub = est.const_marginal_effect_interval(X_test, alpha=0.01)
```
### 3.3. Performance Visualization
```python
plt.figure(figsize=(10,6))
plt.plot(X_test[:, 0], te_pred[:, 0], label='DML estimate1')
plt.fill_between(X_test[:, 0], lb[:, 0], ub[:, 0], alpha=.4)
plt.plot(X_test[:, 0], te_pred[:, 1], label='DML estimate2')
plt.fill_between(X_test[:, 0], lb[:, 1], ub[:, 1], alpha=.4)
expected_te1 = np.array([x_i[0] for x_i in X_test])
expected_te2=np.array([x_i[0]**2 for x_i in X_test]).flatten()
plt.plot(X_test[:, 0], expected_te1, '--', label='True effect1')
plt.plot(X_test[:, 0], expected_te2, '--', label='True effect2')
plt.ylabel("Treatment Effect")
plt.xlabel("x")
plt.legend()
plt.show()
```
## 4. Example Usage with Single Continuous Treatment Observational Data
We applied our technique to Dominick’s dataset, a popular historical dataset of store-level orange juice prices and sales provided by the University of Chicago Booth School of Business.
The dataset comprises a large number of covariates $W$, but researchers might only be interested in learning the elasticity of demand as a function of a few variables $x$, such
as income or education.
We applied the `LinearDMLCateEstimator` to estimate orange juice price elasticity
as a function of income, and our results unveil the natural phenomenon that lower-income consumers are more price-sensitive.
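Since both quantity and price enter in logs below ($Y = \texttt{logmove}$, $T = \log(\texttt{price})$), the estimated constant marginal effect is directly the price elasticity of demand,
$$
\theta(x) = \frac{\partial \log Q}{\partial \log P},
$$
i.e. the percentage change in quantity sold per one percent change in price.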
### 4.1. Data
```python
# A few more imports
import os
import pandas as pd
import urllib.request
from sklearn.preprocessing import StandardScaler
```
```python
# Import the data
file_name = "oj_large.csv"
if not os.path.isfile(file_name):
print("Downloading file (this might take a few seconds)...")
urllib.request.urlretrieve("https://msalicedatapublic.blob.core.windows.net/datasets/OrangeJuice/oj_large.csv", file_name)
oj_data = pd.read_csv(file_name)
```
```python
oj_data.head()
```
| | store | brand | week | logmove | feat | price | AGE60 | EDUC | ETHNIC | INCOME | HHLARGE | WORKWOM | HVAL150 | SSTRDIST | SSTRVOL | CPDIST5 | CPWVOL5 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 2 | tropicana | 40 | 9.018695 | 0 | 3.87 | 0.232865 | 0.248935 | 0.11428 | 10.553205 | 0.103953 | 0.303585 | 0.463887 | 2.110122 | 1.142857 | 1.92728 | 0.376927 |
| 1 | 2 | tropicana | 46 | 8.723231 | 0 | 3.87 | 0.232865 | 0.248935 | 0.11428 | 10.553205 | 0.103953 | 0.303585 | 0.463887 | 2.110122 | 1.142857 | 1.92728 | 0.376927 |
| 2 | 2 | tropicana | 47 | 8.253228 | 0 | 3.87 | 0.232865 | 0.248935 | 0.11428 | 10.553205 | 0.103953 | 0.303585 | 0.463887 | 2.110122 | 1.142857 | 1.92728 | 0.376927 |
| 3 | 2 | tropicana | 48 | 8.987197 | 0 | 3.87 | 0.232865 | 0.248935 | 0.11428 | 10.553205 | 0.103953 | 0.303585 | 0.463887 | 2.110122 | 1.142857 | 1.92728 | 0.376927 |
| 4 | 2 | tropicana | 50 | 9.093357 | 0 | 3.87 | 0.232865 | 0.248935 | 0.11428 | 10.553205 | 0.103953 | 0.303585 | 0.463887 | 2.110122 | 1.142857 | 1.92728 | 0.376927 |
```python
# Prepare data
Y = oj_data['logmove'].values
T = np.log(oj_data["price"]).values
scaler = StandardScaler()
W1 = scaler.fit_transform(oj_data[[c for c in oj_data.columns if c not in ['price', 'logmove', 'brand', 'week', 'store','INCOME']]].values)
W2 = pd.get_dummies(oj_data[['brand']]).values
W = np.concatenate([W1, W2], axis=1)
X=scaler.fit_transform(oj_data[['INCOME']].values)
```
```python
## Generate test data
min_income = -1
max_income = 1
delta = (1 - (-1)) / 100
X_test = np.arange(min_income, max_income + delta - 0.001, delta).reshape(-1,1)
```
### 4.2. Train Estimator
```python
est = LinearDMLCateEstimator(model_y=RandomForestRegressor(),model_t=RandomForestRegressor())
est.fit(Y, T, X, W)
te_pred=est.effect(X_test)
```
### 4.3. Performance Visualization
```python
# Plot Orange Juice elasticity as a function of income
plt.figure(figsize=(10,6))
plt.plot(X_test, te_pred, label="OJ Elasticity")
plt.xlabel(r'Scale(Income)')
plt.ylabel('Orange Juice Elasticity')
plt.legend()
plt.title("Orange Juice Elasticity vs Income")
plt.show()
```
### 4.4. Confidence Intervals
We can also get confidence intervals around our predictions by passing an additional `inference` argument to `fit`. All estimators support bootstrap intervals, which involves refitting the same estimator repeatedly on subsamples of the original data, but `LinearDMLCateEstimator` also supports a more efficient approach which can be achieved by passing `'statsmodels'` as the inference argument.
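For comparison, a bootstrap interval can be requested through the same API. A minimal sketch, assuming this econml version accepts the `'bootstrap'` inference string (bootstrap inference refits the estimator many times, so it is considerably slower than the `'statsmodels'` option used below):
```python
# Sketch: bootstrap-based confidence intervals (slow; refits on resampled data)
est.fit(Y, T, X, W, inference='bootstrap')
lb_bs, ub_bs = est.effect_interval(X_test, alpha=0.02)
```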
```python
est.fit(Y, T, X, W, inference='statsmodels')
te_pred=est.effect(X_test)
te_pred_interval = est.const_marginal_effect_interval(X_test, alpha=0.02)
```
```python
# Plot Orange Juice elasticity as a function of income
plt.figure(figsize=(10,6))
plt.plot(X_test.flatten(), te_pred, label="OJ Elasticity")
plt.fill_between(X_test.flatten(), te_pred_interval[0], te_pred_interval[1], alpha=.5, label="1-99% CI")
plt.xlabel(r'Scale(Income)')
plt.ylabel('Orange Juice Elasticity')
plt.title("Orange Juice Elasticity vs Income")
plt.legend()
plt.show()
```
## 5. Example Usage with Multiple Continuous Treatment, Multiple Outcome Observational Data
We use the same data, but in this case we want to fit the demand for multiple brands as a function of the price of each one of them, i.e., fit the matrix of cross-price elasticities. This can be done by simply setting $Y$ to be the vector of demands and $T$ to be the vector of prices. Then we can obtain the matrix of cross-price elasticities.
\begin{align}
Y=[Logmove_{tropicana},Logmove_{minute.maid},Logmove_{dominicks}] \\
T=[Logprice_{tropicana},Logprice_{minute.maid},Logprice_{dominicks}] \\
\end{align}
### 5.1. Data
```python
# Import the data
oj_data = pd.read_csv(file_name)
```
```python
# Prepare data
oj_data['price'] = np.log(oj_data["price"])
# Transform dataset.
# For each store in each week, get a vector of logmove and a vector of logprice for each brand.
# Other features are store specific, will be the same for all brands.
groupbylist = ["store", "week", "AGE60", "EDUC", "ETHNIC", "INCOME",
"HHLARGE", "WORKWOM", "HVAL150",
"SSTRDIST", "SSTRVOL", "CPDIST5", "CPWVOL5"]
oj_data1 = pd.pivot_table(oj_data,index=groupbylist,
columns=oj_data.groupby(groupbylist).cumcount(),
values=['logmove', 'price'],
aggfunc='sum').reset_index()
oj_data1.columns = oj_data1.columns.map('{0[0]}{0[1]}'.format)
oj_data1 = oj_data1.rename(index=str,
columns={"logmove0": "logmove_T",
"logmove1": "logmove_M",
"logmove2":"logmove_D",
"price0":"price_T",
"price1":"price_M",
"price2":"price_D"})
# Define Y,T,X,W
Y = oj_data1[['logmove_T', "logmove_M", "logmove_D"]].values
T = oj_data1[['price_T', "price_M", "price_D"]].values
scaler = StandardScaler()
W=scaler.fit_transform(oj_data1[[c for c in groupbylist if c not in ['week', 'store', 'INCOME']]].values)
X=scaler.fit_transform(oj_data1[['INCOME']].values)
```
```python
## Generate test data
min_income = -1
max_income = 1
delta = (1 - (-1)) / 100
X_test = np.arange(min_income, max_income + delta - 0.001, delta).reshape(-1, 1)
```
### 5.2. Train Estimator
```python
est = LinearDMLCateEstimator(model_y=MultiTaskElasticNetCV(cv=3, tol=1, selection='random'),
model_t=MultiTaskElasticNetCV(cv=3),
featurizer=PolynomialFeatures(1),
linear_first_stages=True)
est.fit(Y, T, X, W)
te_pred = est.const_marginal_effect(X_test)
```
c:\users\vasy\documents\econml\econml\utilities.py:942: UserWarning: Co-variance matrix is undertermined. Inference will be invalid!
warnings.warn("Co-variance matrix is undertermined. Inference will be invalid!")
### 5.3. Performance Visualization
```python
# Plot Orange Juice elasticity as a function of income
plt.figure(figsize=(18, 10))
dic={0:"Tropicana", 1:"Minute.maid", 2:"Dominicks"}
for i in range(3):
for j in range(3):
plt.subplot(3, 3, 3 * i + j + 1)
plt.plot(X_test, te_pred[:, i, j],
color="C{}".format(str(3 * i + j)),
label="OJ Elasticity {} to {}".format(dic[j], dic[i]))
plt.xlabel(r'Scale(Income)')
plt.ylabel('Orange Juice Elasticity')
plt.legend()
plt.suptitle("Orange Juice Elasticity vs Income", fontsize=16)
plt.show()
```
**Findings**: Looking at the diagonal of the matrix, the treatment effect of a brand's own price on its sales is always negative, but people with higher income are less price-sensitive. By contrast, on the off-diagonal, the treatment effect of competitors' prices on a brand's sales is always positive, and the way income moderates this effect differs across competitors. In addition, compared to the previous plot, the negative own-price effects for each brand are all larger in magnitude than the effect estimated when all brands were pooled together, which means we would have underestimated the effect of price changes on demand.
### 5.4. Confidence Intervals
```python
est.fit(Y, T, X, W, inference='statsmodels')
te_pred = est.const_marginal_effect(X_test)
te_pred_interval = est.const_marginal_effect_interval(X_test, alpha=0.02)
```
c:\users\vasy\documents\econml\econml\utilities.py:942: UserWarning: Co-variance matrix is undertermined. Inference will be invalid!
warnings.warn("Co-variance matrix is undertermined. Inference will be invalid!")
```python
# Plot Orange Juice elasticity as a function of income
plt.figure(figsize=(18, 10))
dic={0:"Tropicana", 1:"Minute.maid", 2:"Dominicks"}
for i in range(3):
for j in range(3):
plt.subplot(3, 3, 3 * i + j + 1)
plt.plot(X_test, te_pred[:, i, j],
color="C{}".format(str(3 * i + j)),
label="OJ Elasticity {} to {}".format(dic[j], dic[i]))
plt.fill_between(X_test.flatten(), te_pred_interval[0][:, i, j],te_pred_interval[1][:, i,j], color="C{}".format(str(3*i+j)),alpha=.5, label="1-99% CI")
plt.xlabel(r'Scale(Income)')
plt.ylabel('Orange Juice Elasticity')
plt.legend()
plt.suptitle("Orange Juice Elasticity vs Income",fontsize=16)
plt.show()
```
# ReLU Layers
We can write a ReLU layer $z = \max(Wx+b, 0)$ as the
convex optimization problem
\begin{equation}
\begin{array}{ll}
\mbox{minimize} & \|z-\tilde Wx - b\|_2^2 \\[.2cm]
\mbox{subject to} & z \geq 0, \\
& \tilde W = W,
\end{array}
\label{eq:prob}
\end{equation}
with variables $z$ and $\tilde W$,
and parameters $W$, $b$, and $x$.
(Note that we have added an extra variable $\tilde W$ so
that the problem is DPP.)
We can embed this problem into a PyTorch `Module` and use it
as a layer in a sequential neural network.
We note that this example is purely illustrative;
one can implement a ReLU layer much more efficiently
by directly performing the matrix multiplication, vector addition,
and then taking the positive part.
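For reference, a direct implementation of the same layer might look like the following sketch (our addition, not used in the experiments below):
```python
import torch

class DirectReluLayer(torch.nn.Module):
    """Computes z = max(Wx + b, 0) directly, without an optimization layer."""
    def __init__(self, D_in, D_out):
        super(DirectReluLayer, self).__init__()
        self.W = torch.nn.Parameter(1e-3*torch.randn(D_out, D_in))
        self.b = torch.nn.Parameter(1e-3*torch.randn(D_out))

    def forward(self, x):
        # works for single inputs of shape (D_in,) and batches of shape (batch, D_in)
        return torch.relu(x @ self.W.T + self.b)
```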
```python
from cvxpylayers.torch import CvxpyLayer
import torch
import cvxpy as cp
```
```python
class ReluLayer(torch.nn.Module):
def __init__(self, D_in, D_out):
super(ReluLayer, self).__init__()
self.W = torch.nn.Parameter(1e-3*torch.randn(D_out, D_in))
self.b = torch.nn.Parameter(1e-3*torch.randn(D_out))
z = cp.Variable(D_out)
Wtilde = cp.Variable((D_out, D_in))
W = cp.Parameter((D_out, D_in))
b = cp.Parameter(D_out)
x = cp.Parameter(D_in)
prob = cp.Problem(cp.Minimize(cp.sum_squares(z-Wtilde@x-b)), [z >= 0, Wtilde==W])
self.layer = CvxpyLayer(prob, [W, b, x], [z])
def forward(self, x):
# when x is batched, repeat W and b
if x.ndim == 2:
batch_size = x.shape[0]
return self.layer(self.W.repeat(batch_size, 1, 1), self.b.repeat(batch_size, 1), x)[0]
else:
return self.layer(self.W, self.b, x)[0]
```
We generate synthetic data and create a network of two `ReluLayer`s followed by a linear layer.
```python
torch.manual_seed(0)
net = torch.nn.Sequential(
ReluLayer(20, 20),
ReluLayer(20, 20),
torch.nn.Linear(20, 1)
)
X = torch.randn(300, 20)
Y = torch.randn(300, 1)
```
Now we can optimize the parameters inside the network using, for example, the ADAM optimizer.
The code below solves 15000 convex optimization problems and calls backward 15000 times.
```python
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(25):
opt.zero_grad()
l = torch.nn.MSELoss()(net(X), Y)
print (l.item())
l.backward()
opt.step()
```
1.0796713829040527
1.0764707326889038
1.0727819204330444
1.067252516746521
1.0606187582015991
1.051621913909912
1.0402582883834839
1.0264172554016113
1.0121591091156006
0.9986547231674194
0.9878703951835632
0.9796753525733948
0.9698525667190552
0.9556602239608765
0.939254105091095
0.9228951930999756
0.906936764717102
0.8898395299911499
0.8709890246391296
0.8507254123687744
0.8293333053588867
0.8077667951583862
0.7869061231613159
0.7656839489936829
0.742659330368042
## A simple Susceptible-Infected-Recovered (SIR) model
The SIR model is explained in many places (e.g. [here](https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology#The_SIR_model)). Economists sometimes embed it in a larger economic model, for example this [NBER working paper](https://www.nber.org/papers/w26902) and the references cited within.
Python code below by Jonathan Conning, guided by this ['Coronavirus Curve' - Numberphile](https://www.youtube.com/watch?v=k6nLfCbAzgo) video. If you're not doing so already, you can run this as an interactive [web app](https://ricardian.herokuapp.com/). The notebook with code is [here](https://github.com/jhconning/Econ-Teach/blob/master/notebooks/epidemic/SIRmodel.ipynb).
### Model
The proportions of the population that are infected ($I$), susceptible ($S$), and recovered ($R$) evolve over time according to the following equations, which depend on the transmission rate $\beta$ and the recovery rate $\gamma$:
$$
\begin{align}
\frac{dI}{dt} &= \beta \cdot S \cdot I - \gamma \cdot I \\
\frac{dS}{dt} &=-\beta \cdot S \cdot I \\
\frac{dR}{dt} &= \gamma \cdot I
\end{align}
$$
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from ipywidgets import interact
```
```python
N = 1 # Size of the population (so everything in proportions)
I0 = 0.01 # Initial proportion of the population infected
S0 = N - I0 # Initial proportion of the population susceptible
R0 = 0.0 # Initial proportion of the population recovered
maxT = 25 # max number of periods in simulation
beta = 0.5 # transmission rate
gamma = 0.1 # recovery rate
```
```python
def SIR(y, t, beta, gamma):
'''the SIR model'''
S, I, R = y
dSdt = -beta*S*I
dIdt = beta*S*I - gamma*I
dRdt = gamma*I
return([dSdt, dIdt, dRdt])
```
```python
def plotSIR(beta = beta, gamma = gamma, maxT = maxT):
'''Solve differential equations in SIR and plot'''
t = np.linspace(0, maxT, 1000)
soln = odeint(SIR,[S0,I0,R0], t, args=(beta, gamma))
soln = np.array(soln)
plt.figure(figsize=[8,6])
plt.plot(t, soln[:,0], linewidth=3, label = 'S(t)')
plt.plot(t, soln[:,1], linewidth=3, label = 'I(t)')
plt.plot(t, soln[:,2], linewidth=3, label = 'R(t)')
plt.grid()
plt.legend()
plt.xlabel("Time")
plt.ylabel("proportions")
plt.title("SIR model")
plt.show()
```
Change the parameters with the sliders below (you'll only have interactivity with a running jupyter notebook server).
```python
interact(plotSIR, beta=(0,1,0.05), gamma=(0,1,0.05), maxT=(5,100,5));
```
interactive(children=(FloatSlider(value=0.5, description='beta', max=1.0, step=0.05), FloatSlider(value=0.1, d…
Below is a plot with the default parameters ($\beta=0.5$, $\gamma=0.1$), to have a graphic in case the widget above does not display.
```python
plotSIR(beta, gamma, maxT)
```
# A More complex model
## In this model, social distancing for all can be counterproductive!
One feature of the coronavirus is that some cases are almost asymptomatic. In general, these cases recover faster, and one might also expect that they spread the infection faster to begin with. How would we model this? As a starting point, recall the baseline equations:
$$
\begin{align}
\frac{dI}{dt} &= \beta \cdot S \cdot I - \gamma \cdot I \\
\frac{dS}{dt} &=-\beta \cdot S \cdot I \\
\frac{dR}{dt} &= \gamma \cdot I
\end{align}
$$
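One way to capture this is to split the infected compartment into symptomatic ($I_s$) and asymptomatic ($I_a$) classes that transmit and recover at different rates. The sketch below is our illustration, with purely illustrative parameter values:
```python
def SIR2(y, t, beta_s, beta_a, gamma_s, gamma_a, p_a):
    '''SIR with symptomatic (Is) and asymptomatic (Ia) infected classes;
    p_a is the fraction of new infections that are asymptomatic.'''
    S, Is, Ia, R = y
    infection = (beta_s*Is + beta_a*Ia)*S   # total force of infection
    dSdt = -infection
    dIsdt = (1 - p_a)*infection - gamma_s*Is
    dIadt = p_a*infection - gamma_a*Ia
    dRdt = gamma_s*Is + gamma_a*Ia
    return [dSdt, dIsdt, dIadt, dRdt]

# illustrative: asymptomatic cases transmit faster (beta_a > beta_s)
# and recover faster (gamma_a > gamma_s)
t = np.linspace(0, maxT, 1000)
soln2 = odeint(SIR2, [S0, I0/2, I0/2, R0], t,
               args=(0.5, 0.8, 0.1, 0.3, 0.5))
```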
# The Fourier Transform
*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelors module Signals and Systems, Communications Engineering, Universität Rostock. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).*
## Summary of Properties, Theorems and Transforms
The [properties](properties.ipynb), [theorems](theorems.ipynb) and transforms of the Fourier transform as derived in the previous sections are summarized in the following. The corresponding tables serve as a reference for the application of the Fourier transform in the theory of signals and systems. Please refer to the respective sections for details.
### Definition
The Fourier transform and its inverse are defined as
\begin{align}
X(j \omega) &= \int_{-\infty}^{\infty} x(t) \, e^{- j \omega t} \; dt \\
x(t) &= \frac{1}{2 \pi} \int_{- \infty}^{\infty} X(j \omega) \, e^{j \omega t} \; d\omega
\end{align}
### Properties and Theorems
The properties and theorems of the Fourier transform are given as
| | $x(t) \qquad \qquad \qquad \qquad$ | $X(j \omega) = \mathcal{F} \{ x(t) \} \qquad \qquad \qquad$ |
|:---|:---:|:---:|:---|
| [Duality](properties.ipynb#Duality) | $ \begin{matrix} x_1(t) \\ x_2(j t) \end{matrix}$ | $ \begin{matrix} x_2(j \omega) \\ 2 \pi \, x_1(- \omega) \end{matrix}$ |
| [Linearity](properties.ipynb#Linearity) | $A \, x_1(t) + B \, x_2(t)$ | $A \, X_1(j \omega) + B \, X_2(j \omega)$ |
| [Real-valued signal](properties.ipynb#Real-valued-signals) | $x(t) = x^*(t)$ | $X(j \omega) = X^*(- j \omega)$ |
| [Scaling](theorems.ipynb#Temporal-Scaling-Theorem) | $x(a t)$ | $\frac{1}{\lvert a \rvert} X\left( \frac{j \omega}{a} \right)$ |
| [Convolution](theorems.ipynb#Convolution-Theorem) | $x(t) * h(t)$ | $X(j \omega) \cdot H(j \omega)$ |
| [Shift](theorems.ipynb#Temporal-Shift-Theorem) | $x(t - \tau)$ | $e^{-j \omega \tau} \cdot X(j \omega)$ |
| [Differentiation](theorems.ipynb#Differentiation-Theorem) | $\frac{d}{dt} x(t)$ | $j \omega X(j \omega)$ |
| [Integration](theorems.ipynb#Integration-Theorem) | $\int_{-\infty}^{t} x(t) \; dt$ | $\frac{1}{j \omega} X(j \omega) + \pi X(0) \delta(\omega)$ |
| [Multiplication](theorems.ipynb#Multiplication-Theorem) | $x(t) \cdot h(t)$ | $\frac{1}{2 \pi} X(j \omega) * H(j \omega)$ |
| [Modulation](theorems.ipynb#Modulation-Theorem) | $e^{j \omega_0 t}\cdot x(t)$ | $X\left(j (\omega - \omega_0) \right)$ |
where $A, B \in \mathbb{C}$, $a \in \mathbb{R} \setminus \{0\}$ and $\tau, \omega_0 \in \mathbb{R}$
### Selected Transforms
Fourier transforms which are frequently used are given as
| $x(t) \qquad \qquad \qquad \qquad$ | $X(j \omega) = \mathcal{F} \{ x(t) \} \qquad \qquad \qquad \qquad \qquad \qquad$ |
|:---:|:---:|:---|
| $\delta(t)$ | $1$ |
| $1$ | $2 \pi \, \delta(\omega)$ |
| $\text{sgn}(t)$ | $\frac{2}{j \omega}$ |
| $\epsilon(t)$ | $\pi \, \delta(\omega) + \frac{1}{j \omega}$ |
| $\text{rect}(t)$ | $\text{si} \left( \frac{\omega}{2} \right)$ |
| $\Lambda(t)$ | $\text{si}^2 \left( \frac{\omega}{2} \right)$ |
| $e^{- j \omega_0 t}$ | $2 \pi \, \delta(\omega - \omega_0)$ |
| $\sin(\omega_0 t)$ | $j \pi \left( \delta(\omega+\omega_0) - \delta(\omega-\omega_0) \right)$ |
| $\cos(\omega_0 t)$ | $\pi \left( \delta(\omega+\omega_0) + \delta(\omega-\omega_0) \right)$ |
| $e^{- \alpha^2 t^2}$ | $\frac{\sqrt{\pi}}{\alpha} e^{- \frac{\omega^2}{4 \alpha^2}}$ |
| ${\bot \!\! \bot \!\! \bot} ( t )$ | ${\bot \!\! \bot \!\! \bot} \left( \frac{\omega}{2 \pi} \right)$ |
where $\omega_0 \in \mathbb{R}$, $\alpha \in \mathbb{R}^+$ and ${\bot \!\! \bot \!\! \bot} ( t ) = \sum_{k = -\infty}^{\infty} \delta(t - k)$. More Fourier transforms may be found in the literature or [online](https://en.wikipedia.org/wiki/Fourier_transform#Tables_of_important_Fourier_transforms).
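As a quick sanity check, the Gaussian pair in the table can be verified symbolically. A minimal sketch using SymPy, evaluating the Fourier integral from the definition above:
```python
import sympy as sp

t, omega = sp.symbols('t omega', real=True)
alpha = sp.symbols('alpha', positive=True)

# Fourier integral of the Gaussian x(t) = exp(-alpha^2 t^2)
X = sp.integrate(sp.exp(-alpha**2 * t**2) * sp.exp(-sp.I * omega * t),
                 (t, -sp.oo, sp.oo))
print(sp.simplify(X))  # expected: sqrt(pi)/alpha * exp(-omega**2/(4*alpha**2))
```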
**Copyright**
The notebooks are provided as [Open Educational Resource](https://de.wikipedia.org/wiki/Open_Educational_Resources). Feel free to use the notebooks for your own educational purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Lecture Notes on Signals and Systems* by Sascha Spors.
## Perturbed DG Experiments - Exponential Toy Game
#### On Duality Gap as a Measure for Monitoring GAN Training
---
This notebook contains the code for the experiments and visualizations pertaining to perturbed DG on the exponential toy game.
---
### Imports
```python
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense, Flatten
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.colors import LogNorm
from sympy import symbols, Matrix, Function, simplify, exp, hessian, solve, init_printing
from sympy import Symbol, solve, Derivative,pprint
from math import exp,pow
from sympy import *
import sympy
from sympy.solvers.solveset import nonlinsolve
```
### Function Definition
```python
def exp_minmax_loss_fxn(x,y):
a = -0.01
b = 0.3
c = 0.5
f = tf.math.exp(a*(x**2 + y**2))*((b*(x**2) + y)**2 + (c*(y**2) + x)**2)
return f
```
### Function Visualization
```python
def show_exp_minmax_loss_fxn():
fig = plt.figure(figsize=(10, 10))
w1_min, w1_max, w1_step = -25.0, 25.0, 0.5
w2_min, w2_max, w2_step = -25.0, 25.0, 0.5
W1, W2 = np.meshgrid(np.arange(w1_min, w1_max+ w1_step, w1_step), np.arange(w2_min, w2_max+ w2_step, w2_step))
Z = exp_minmax_loss_fxn(W1, W2 )
ax = plt.axes(projection='3d', elev=80, azim=-50)
ax.set_xlim((w1_min, w1_max))
ax.set_ylim((w2_min, w2_max))
ax.plot_surface(W1, W2, Z, rstride=1, cstride=1, edgecolor='none', alpha=.7, cmap=plt.cm.jet)
ax.set_xlabel('$W1$')
ax.set_ylabel('$W2$')
ax.set_zlabel('$Z$')
plt.savefig('exp_minmax_fxn.png')
plt.show()
show_exp_minmax_loss_fxn()
```
### Gradient and Hessian Computation
```python
init_printing()
a = -0.01
b = 0.3
c = 0.5
x1, x2 = symbols('x1 x2')
f, g, h = symbols('f g h', cls=Function)
X = Matrix([x1,x2])
f = Matrix([exp(-0.01*(x1**2 + x2**2))*((0.3*(x1**2)+x2)**2 + (0.5*(x2**2)+x1)**2)])
```
```python
gradf = simplify(f.jacobian(X))
gradf
```
$\displaystyle \left[\begin{matrix}\left(1.2 x_{1} \left(0.3 x_{1}^{2} + x_{2}\right) - 0.02 x_{1} \left(\left(x_{1} + 0.5 x_{2}^{2}\right)^{2} + \left(0.3 x_{1}^{2} + x_{2}\right)^{2}\right) + 2 x_{1} + 1.0 x_{2}^{2}\right) e^{- 0.01 x_{1}^{2} - 0.01 x_{2}^{2}} & \left(0.6 x_{1}^{2} + 2.0 x_{2} \left(x_{1} + 0.5 x_{2}^{2}\right) - 0.02 x_{2} \left(\left(x_{1} + 0.5 x_{2}^{2}\right)^{2} + \left(0.3 x_{1}^{2} + x_{2}\right)^{2}\right) + 2 x_{2}\right) e^{- 0.01 x_{1}^{2} - 0.01 x_{2}^{2}}\end{matrix}\right]$
```python
hessianf = simplify(hessian(f, X))
hessianf
```
$\displaystyle \left[\begin{matrix}\left(0.0004 x_{1}^{2} \left(\left(x_{1} + 0.5 x_{2}^{2}\right)^{2} + \left(0.3 x_{1}^{2} + x_{2}\right)^{2}\right) + 1.08 x_{1}^{2} - 0.04 x_{1} \left(1.2 x_{1} \left(0.3 x_{1}^{2} + x_{2}\right) + 2 x_{1} + 1.0 x_{2}^{2}\right) + 1.2 x_{2} - 0.02 \left(x_{1} + 0.5 x_{2}^{2}\right)^{2} - 0.02 \left(0.3 x_{1}^{2} + x_{2}\right)^{2} + 2\right) e^{- 0.01 x_{1}^{2} - 0.01 x_{2}^{2}} & \left(0.0004 x_{1} x_{2} \left(\left(x_{1} + 0.5 x_{2}^{2}\right)^{2} + \left(0.3 x_{1}^{2} + x_{2}\right)^{2}\right) - 0.02 x_{1} \left(0.6 x_{1}^{2} + 2.0 x_{2} \left(x_{1} + 0.5 x_{2}^{2}\right) + 2 x_{2}\right) + 1.2 x_{1} - 0.02 x_{2} \left(1.2 x_{1} \left(0.3 x_{1}^{2} + x_{2}\right) + 2 x_{1} + 1.0 x_{2}^{2}\right) + 2.0 x_{2}\right) e^{- 0.01 x_{1}^{2} - 0.01 x_{2}^{2}}\\\left(0.0004 x_{1} x_{2} \left(\left(x_{1} + 0.5 x_{2}^{2}\right)^{2} + \left(0.3 x_{1}^{2} + x_{2}\right)^{2}\right) - 0.02 x_{1} \left(0.6 x_{1}^{2} + 2.0 x_{2} \left(x_{1} + 0.5 x_{2}^{2}\right) + 2 x_{2}\right) + 1.2 x_{1} - 0.02 x_{2} \left(1.2 x_{1} \left(0.3 x_{1}^{2} + x_{2}\right) + 2 x_{1} + 1.0 x_{2}^{2}\right) + 2.0 x_{2}\right) e^{- 0.01 x_{1}^{2} - 0.01 x_{2}^{2}} & \left(2.0 x_{1} + 0.0004 x_{2}^{2} \left(\left(x_{1} + 0.5 x_{2}^{2}\right)^{2} + \left(0.3 x_{1}^{2} + x_{2}\right)^{2}\right) + 3.0 x_{2}^{2} - 0.04 x_{2} \left(0.6 x_{1}^{2} + 2.0 x_{2} \left(x_{1} + 0.5 x_{2}^{2}\right) + 2 x_{2}\right) - 0.02 \left(x_{1} + 0.5 x_{2}^{2}\right)^{2} - 0.02 \left(0.3 x_{1}^{2} + x_{2}\right)^{2} + 2\right) e^{- 0.01 x_{1}^{2} - 0.01 x_{2}^{2}}\end{matrix}\right]$
```python
def get_jacobian(val_x=0,val_y=0):
x = Symbol('x')
y = Symbol('y')
f1 = -sympy.exp(a*(x**2 + y**2))*((b*(x**2) + y)**2 + (c*(y**2) + x)**2)
f2 = sympy.exp(a*(x**2 + y**2))*((b*(x**2) + y)**2 + (c*(y**2) + x)**2)
d1 = Derivative(f1,x).doit()
d2 = Derivative(f2,y).doit()
print( ' Gradients : ',d1.subs(x,val_x).subs(y,val_y),' \t ',d2.subs(x,val_x).subs(y,val_y))
d11 = Derivative(d1,x).doit()
d22 = Derivative(d2,y).doit()
d12 = Derivative(d1,y).doit()
d21 = Derivative(d2,x).doit()
Jacobian = Matrix([[d11.subs(x,val_x).subs(y,val_y),d12.subs(x,val_x).subs(y,val_y)],[d21.subs(x,val_x).subs(y,val_y),d22.subs(x,val_x).subs(y,val_y)]])
print( ' Jacobian : \n')
pprint(Jacobian)
eigenVals = Jacobian.eigenvals()
expanded_eigenvals = [complex(key) for key in eigenVals.keys() for i in range(eigenVals[key]) ]
print('\n\n EigenValues : \n\n {}'.format(expanded_eigenvals))
return expanded_eigenvals
```
```python
init_points = [(-12.467547,-8.67366),(0.0,0.0)]
for x,y in init_points:
print('--'*50 +'\n \t Init Point : X:{} Y:{} \n'.format(x,y)+'--'*50)
eigenvals = get_jacobian(x,y)
eigenvals = [np.array([complex(item) for item in eigenvals])]
```
----------------------------------------------------------------------------------------------------
Init Point : X:-12.467547 Y:-8.67366
----------------------------------------------------------------------------------------------------
Gradients : 0.0620581635165252 -0.0683054521266797
Jacobian :
⎡1.11718768601266 12.1764191162642⎤
⎢ ⎥
⎣-12.1764191162642 9.82599396907192⎦
EigenValues :
[(5.47159082754229-11.371207313911825j), (5.47159082754229+11.371207313911825j)]
----------------------------------------------------------------------------------------------------
Init Point : X:0.0 Y:0.0
----------------------------------------------------------------------------------------------------
Gradients : 0 0
Jacobian :
⎡-2 0⎤
⎢ ⎥
⎣0 2⎦
EigenValues :
[(-2+0j), (2+0j)]
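A brief reading of these numbers (our interpretation): the matrix assembled here is the Jacobian of the players' gradient field $v = \left(\partial(-f)/\partial x,\ \partial f/\partial y\right)$, so its diagonal entries are the curvatures each player faces. At a stationary point, both diagonal entries being non-negative is the second-order condition for a local Nash equilibrium. At $(-12.47, -8.67)$ the gradients are close to zero and both diagonal entries are positive, consistent with the local Nash point marked in the plots below; at $(0, 0)$ the first diagonal entry is $-2$, so the $x$-player can still improve and the point is stationary but not Nash.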
```python
import tqdm
import matplotlib.pyplot as plt
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers,models
import time
import math
from math import ceil
from math import floor
from numpy import ones
from numpy import expand_dims
from numpy import log
from numpy import mean
from numpy import std
from numpy import exp
from numpy.random import shuffle
from numpy import asarray
EPOCHS = 15
alpha=0.95
decay=0.95
B=tf.Variable(0.0)
duality_gap_batch=tf.Variable(0.0)
acc_Y_cost=tf.Variable(0.0)
acc_X_cost=tf.Variable(0.0)
num_batches=1
class TAU(object):
def __init__(self, x=-5.0,y=-10.0,optimizer='adam'):
self.X = tf.Variable(x)
self.Y = tf.Variable(y)
print( self.X, self.Y)
self.lr = 5e-3
self.gradients_of_X=-1
self.gradients_of_Y=-1
self.Y_loss=-1
self.X_loss=-1
if(optimizer=='adam'):
self.optimizer='adam'
self.X_optimizer=tf.keras.optimizers.Adam(self.lr)
self.Y_optimizer=tf.keras.optimizers.Adam(self.lr)
elif(optimizer=='sgd'):
self.optimizer='sgd'
self.X_optimizer=tf.keras.optimizers.SGD(self.lr)
self.Y_optimizer=tf.keras.optimizers.SGD(self.lr)
self.update_X_list = []
self.update_Y_list = []
def show_contour(self,epoch=0,reward=0):
fig, ax = plt.subplots(figsize=(10, 6))
w1_min, w1_max, w1_step = -26.0, 26.0, 0.2
w2_min, w2_max, w2_step = -26.0, 26.0, 0.2
W1, W2 = np.meshgrid(np.arange(w1_min, w1_max+ w1_step, w1_step), np.arange(w2_min, w2_max+ w2_step, w2_step))
Z = exp_minmax_loss_fxn(W1, W2 )
cs = ax.contourf(W1, W2, Z, cmap=plt.cm.Greys ,alpha=0.6)
ax.plot( self.update_X_list, self.update_Y_list, color='g',linewidth=2.0,label='Autoloss')
ax.scatter( [-12,-11,12], [-9,7,-7], color='b',linewidth=2.0,label='Nash',marker='x')
ax.scatter( [-.5], [-1], color='r',linewidth=2.0,label='Non Nash',marker='*')
cbar = fig.colorbar(cs)
leg = plt.legend()
ax.set_xlabel('$X$')
ax.set_ylabel('$Y$')
plt.title('Iteration : {} Reward : {} DualityGAP :{}'.format(epoch,reward,duality_gap_batch.numpy()))
plt.xticks(np.arange(w1_min, w1_max+5, 5.0))
plt.yticks(np.arange(w2_min, w2_max+5, 5.0))
ax.set_xlim((w1_min, w1_max))
ax.set_ylim((w2_min, w2_max))
plt.grid()
plt.show()
def calculate_duality_gap(self,random=False):
stddev = 0.01
self.update_X_list.append(self.X.numpy())
self.update_Y_list.append(self.Y.numpy())
self.update_X_list_dg = []
self.update_Y_list_dg = []
self.update_X_list_dg.append(self.X.numpy())
self.update_Y_list_dg.append(self.Y.numpy())
if(random==True):
X = tf.Variable(self.X.numpy())
Y = tf.Variable(self.Y.numpy()+ abs(tf.random.normal(mean=1.0, stddev=stddev,shape=self.Y.numpy().shape)))
else:
X = tf.Variable(self.X.numpy())
Y = tf.Variable(self.Y.numpy())
iterations = 500
lr = 5e-4
if(self.optimizer=='adam'):
X_optimizer=tf.keras.optimizers.Adam(self.lr)
Y_optimizer=tf.keras.optimizers.Adam(self.lr)
elif(self.optimizer=='sgd'):
X_optimizer=tf.keras.optimizers.SGD(self.lr)
Y_optimizer=tf.keras.optimizers.SGD(self.lr)
for iteration in range(iterations):
with tf.GradientTape() as Y_tape:
Y_tape.watch(Y)
Y_loss = exp_minmax_loss_fxn(X,Y)
gradients_of_Y = Y_tape.gradient(Y_loss, Y)
# Y = Y - lr*gradients_of_Y
Y_optimizer.apply_gradients(zip([gradients_of_Y],[Y]))
if(iteration%5==0):
self.update_Y_list_dg.append(Y.numpy())
self.update_X_list.append(X.numpy())
fmin = exp_minmax_loss_fxn(X,Y).numpy()
if(random==True):
X = tf.Variable(self.X.numpy()+ abs(tf.random.normal(mean=0.0,stddev=stddev,shape=self.X.numpy().shape)))
Y = tf.Variable(self.Y.numpy())
else:
X = tf.Variable(self.X.numpy())
Y = tf.Variable(self.Y.numpy())
for iteration in range(iterations):
with tf.GradientTape() as X_tape:
X_tape.watch(X)
X_loss = -1*exp_minmax_loss_fxn(X,Y)
gradients_of_X = X_tape.gradient(X_loss, X)
# X = X - lr*gradients_of_X
X_optimizer.apply_gradients(zip([gradients_of_X],[X]))
if(iteration%5==0):
self.update_X_list_dg.append(X.numpy())
self.update_Y_list.append(Y.numpy())
fmax = exp_minmax_loss_fxn(X,Y).numpy()
print('Duality Gap Random : ',random,' : ',fmax - fmin)
return fmax - fmin
def train_step_Y(self):
with tf.GradientTape() as Y_tape:
Y_tape.watch(self.Y)
self.Y_loss = exp_minmax_loss_fxn(self.X,self.Y)
self.gradients_of_Y = Y_tape.gradient(self.Y_loss,self.Y)
self.Y_optimizer.apply_gradients(zip([self.gradients_of_Y],[self.Y]))
return self.gradients_of_Y,self.Y_loss
def train_step_X(self):
with tf.GradientTape() as X_tape:
X_tape.watch(self.X)
self.X_loss = -1*exp_minmax_loss_fxn(self.X,self.Y)
self.gradients_of_X = X_tape.gradient(self.X_loss, self.X)
self.X_optimizer.apply_gradients(zip([self.gradients_of_X],[self.X]))
return self.gradients_of_X,self.X_loss
def train(self,action,epoch,i):
if action[0]==1:
self.gradients_of_Y,self.Y_loss=self.train_step_Y()
else:
self.gradients_of_X,self.X_loss=self.train_step_X()
if(epoch%5==0):
self.update_X_list.append(self.X.numpy())
self.update_Y_list.append(self.Y.numpy())
class Controller():
def __init__(self,epoch=500):
self.epoch = epoch
def compute_dg(self,x,y,opt):
tau = TAU(x,y,opt)
data = {}
vanilla_dg = tau.calculate_duality_gap(random=False)
updates = {
'X':tau.update_X_list,
'Y':tau.update_Y_list,
'X_DG':tau.update_X_list_dg,
'Y_DG':tau.update_Y_list_dg,
'DG':vanilla_dg
}
data['vanilla']=updates
tau = TAU(x,y,opt)
random_dg = tau.calculate_duality_gap(random=True)
updates = {
'X':tau.update_X_list,
'Y':tau.update_Y_list,
'X_DG':tau.update_X_list_dg,
'Y_DG':tau.update_Y_list_dg,
'DG':random_dg
}
data['random']=updates
print('Vanilla DG : {} \nLocal Random DG : {}'.format(vanilla_dg,random_dg))
print('Final Coordinates : ' ,tau.update_X_list_dg[-1],tau.update_Y_list_dg[-1])
updates = {
'X':tau.update_X_list,
'Y':tau.update_Y_list,
'X_DG':tau.update_X_list_dg,
'Y_DG':tau.update_Y_list_dg,
'vanilla_DG':vanilla_dg,
'local_random_DG':random_dg
}
return data
def train_on_ratio(self,x=-12.87,y=-7.31,nx=1,ny=1,k=1,opt='adam'):
tau = TAU(x,y,opt)
for epoch in tqdm.tqdm(range(k*self.epoch)):
for i in range(nx):
tau.train([0,1],epoch,i)
for i in range(ny):
tau.train([1,0],epoch,i)
updates = {
'X':tau.update_X_list,
'Y':tau.update_Y_list,
'DG':tau.calculate_duality_gap()
}
return updates
def train_on_random_ratio(self,x=-5.1,y=-10.1,opt='adam'):
tau = TAU(x,y,opt)
max_iter = 20
min_iter = 1
        nx = np.random.randint(min_iter, max_iter + 1)  # random_integers is deprecated; draws from [min_iter, max_iter]
        ny = np.random.randint(min_iter, max_iter + 1)
for epoch in tqdm.tqdm(range(self.epoch)):
for i in range(nx):
tau.train([0,1],epoch,i)
for i in range(ny):
tau.train([1,0],epoch,i)
updates = {
'X' :tau.update_X_list,
'Y' :tau.update_Y_list
}
return updates,nx+ny
controller=Controller(5000)
controller.train_on_ratio()
optimizers = ['adam']
final_dat = {}
init_points = [(0.0,0.0),( -12.467547,-8.67366)]
ntrials = 5
for opt in optimizers:
print(opt)
data={
'single':{},
'random':{}
}
for x,y in init_points:
print(x,y)
trials = []
for i in range(ntrials):
trials.append(controller.compute_dg(x,y,opt))
final_dat['X{}Y{}'.format(x,y)] = trials
```
0%| | 0/5000 [00:00<?, ?it/s]
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=-12.87> <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=-7.31>
100%|██████████| 5000/5000 [00:29<00:00, 172.40it/s]
Duality Gap Random : False : 41.495758
adam
0.0 0.0
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0> <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0>
Duality Gap Random : False : 0.0
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0> <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0>
Duality Gap Random : True : 49.21852
Vanilla DG : 0.0
Local Random DG : 49.21852111816406
Final Coordinates : 4.5419307 0.017989157
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0> <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0>
Duality Gap Random : False : 0.0
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0> <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0>
Duality Gap Random : True : 49.216076
Vanilla DG : 0.0
Local Random DG : 49.2160758972168
Final Coordinates : 4.541852 0.02073147
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0> <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0>
Duality Gap Random : False : 0.0
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0> <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0>
Duality Gap Random : True : 49.220108
Vanilla DG : 0.0
Local Random DG : 49.22010803222656
Final Coordinates : 4.541986 0.020360613
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0> <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0>
Duality Gap Random : False : 0.0
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0> <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0>
Duality Gap Random : True : 49.205074
Vanilla DG : 0.0
Local Random DG : 49.205074310302734
Final Coordinates : 4.5414867 0.019537272
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0> <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0>
Duality Gap Random : False : 0.0
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0> <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0>
Duality Gap Random : True : 49.216587
Vanilla DG : 0.0
Local Random DG : 49.21658706665039
Final Coordinates : 4.5418653 0.017021596
-12.467547 -8.67366
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=-12.467547> <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=-8.67366>
Duality Gap Random : False : 0.001953125
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=-12.467547> <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=-8.67366>
Duality Gap Random : True : 0.001663208
Vanilla DG : 0.001953125
Local Random DG : 0.0016632080078125
Final Coordinates : -12.519718 -8.659502
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=-12.467547> <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=-8.67366>
Duality Gap Random : False : 0.001953125
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=-12.467547> <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=-8.67366>
Duality Gap Random : True : 0.0016479492
Vanilla DG : 0.001953125
Local Random DG : 0.00164794921875
Final Coordinates : -12.519718 -8.658344
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=-12.467547> <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=-8.67366>
Duality Gap Random : False : 0.001953125
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=-12.467547> <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=-8.67366>
Duality Gap Random : True : 0.0016784668
Vanilla DG : 0.001953125
Local Random DG : 0.001678466796875
Final Coordinates : -12.519718 -8.658691
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=-12.467547> <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=-8.67366>
Duality Gap Random : False : 0.001953125
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=-12.467547> <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=-8.67366>
Duality Gap Random : True : 0.0015716553
Vanilla DG : 0.001953125
Local Random DG : 0.0015716552734375
Final Coordinates : -12.519717 -8.65773
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=-12.467547> <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=-8.67366>
Duality Gap Random : False : 0.001953125
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=-12.467547> <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=-8.67366>
Duality Gap Random : True : 0.001663208
Vanilla DG : 0.001953125
Local Random DG : 0.0016632080078125
Final Coordinates : -12.519718 -8.659255
```python
from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes
from mpl_toolkits.axes_grid1.inset_locator import mark_inset
from matplotlib.patches import Patch
from matplotlib.lines import Line2D
def show_contour(data,init_points):
fig, ax = plt.subplots(1,1,figsize=(12, 8))
inp = -1
w1_min, w1_max, w1_step = -20.0, 20.0, 0.2
w2_min, w2_max, w2_step = -20.0, 20.0, 0.2
W1, W2 = np.meshgrid(np.arange(w1_min, w1_max+ w1_step, w1_step), np.arange(w2_min, w2_max+ w2_step, w2_step))
Z = exp_minmax_loss_fxn(W1, W2 )
cmap_ = 'viridis'
cs = ax.contourf(W1, W2, Z ,cmap=plt.get_cmap(cmap_),alpha=0.999)
cbar = fig.colorbar(cs)
plt.rc('xtick',labelsize=15)
plt.rc('ytick',labelsize=15)
legend_elements = [Line2D([], [], color='cyan', lw=3, label='Vanilla DG'),
Line2D([], [], color='darksalmon', lw=3, label='Perturbed DG'),
Line2D([], [], marker='o',color='k', label='Start Point',markeredgecolor='w', markerfacecolor=(0, 0, 0, 0.01), markersize=20,lw=0,mew=2),
Line2D([], [], marker='D',color='k', label='End Point',markeredgecolor='w', markerfacecolor=(0, 0, 0, 0.01), markersize=15,lw=0,mew=2),
Line2D([], [], marker='x', label='Nash Point (A)',markeredgecolor='lawngreen', markersize=15,mew=3,lw=0),
Line2D([], [], marker='*', label='Non Nash Point (B)',markerfacecolor='r',markeredgecolor='r', markersize=20,mew=3,lw=0)]
plt.legend(handles=legend_elements,loc=4,fontsize=15, ncol=3,facecolor='k',framealpha=0.5,labelcolor='w')
scale = 4.5
axins = zoomed_inset_axes(ax, scale, loc=1)
axins.contourf(W1, W2, Z ,cmap=plt.get_cmap(cmap_),alpha=0.95)
axins1 = zoomed_inset_axes(ax,scale, loc=2)
axins1.contourf(W1, W2, Z ,cmap=plt.get_cmap(cmap_),alpha=0.95)
for x,y in init_points:
inp = inp+1
# print(x,y, data['X{}Y{}'.format(x,y)][0]['vanilla']['X_DG'])
start_marker='o'
end_marker='D'
color = 'cyan'
ax.plot( data['X{}Y{}'.format(x,y)][0]['vanilla']['X_DG'], final_dat['X{}Y{}'.format(x,y)][0]['vanilla']['Y'],color=color)
ax.scatter( data['X{}Y{}'.format(x,y)][0]['vanilla']['X_DG'][0],data['X{}Y{}'.format(x,y)][0]['vanilla']['Y'][0], color=color,s=550.0,facecolors='none',marker=start_marker)
ax.scatter( data['X{}Y{}'.format(x,y)][0]['vanilla']['X_DG'][-1],data['X{}Y{}'.format(x,y)][0]['vanilla']['Y'][-1], color=color,s=180.0, facecolors='none',marker=end_marker)
ax.plot( data['X{}Y{}'.format(x,y)][0]['vanilla']['X'], data['X{}Y{}'.format(x,y)][0]['vanilla']['Y_DG'],color=color)
ax.scatter(data['X{}Y{}'.format(x,y)][0]['vanilla']['X'][0],data['X{}Y{}'.format(x,y)][0]['vanilla']['Y_DG'][0], color=color,facecolors='none',s=550.0,marker=start_marker)
ax.scatter( data['X{}Y{}'.format(x,y)][0]['vanilla']['X'][-1],data['X{}Y{}'.format(x,y)][0]['vanilla']['Y_DG'][-1], color=color,facecolors='none',s=180.0,marker=end_marker)
color = 'cyan'
axins.plot( data['X{}Y{}'.format(x,y)][0]['vanilla']['X_DG'], final_dat['X{}Y{}'.format(x,y)][0]['vanilla']['Y'],color=color,label='X Worst - Vanilla ')
axins.scatter( data['X{}Y{}'.format(x,y)][0]['vanilla']['X_DG'][0],data['X{}Y{}'.format(x,y)][0]['vanilla']['Y'][0], color=color,s=scale*550.0,facecolors='none',marker=start_marker)
axins.scatter( data['X{}Y{}'.format(x,y)][0]['vanilla']['X_DG'][-1],data['X{}Y{}'.format(x,y)][0]['vanilla']['Y'][-1], color=color,s=scale*180.0, facecolors='none',marker=end_marker)
axins.plot( data['X{}Y{}'.format(x,y)][0]['vanilla']['X'], data['X{}Y{}'.format(x,y)][0]['vanilla']['Y_DG'],color=color,label='Y Worst - Vanilla ')
axins.scatter(data['X{}Y{}'.format(x,y)][0]['vanilla']['X'][0],data['X{}Y{}'.format(x,y)][0]['vanilla']['Y_DG'][0], color=color,facecolors='none',s=scale*550.0,marker=start_marker)
axins.scatter( data['X{}Y{}'.format(x,y)][0]['vanilla']['X'][-1],data['X{}Y{}'.format(x,y)][0]['vanilla']['Y_DG'][-1], color=color,facecolors='none',s=scale*180.0,marker=end_marker)
color = 'darksalmon'
lwdth = 8
for i in range(ntrials):
axins.plot( data['X{}Y{}'.format(x,y)][i]['random']['X_DG'], final_dat['X{}Y{}'.format(x,y)][i]['random']['Y'],color=color,alpha=0.1*(i+1))
axins.scatter( data['X{}Y{}'.format(x,y)][i]['random']['X_DG'][0],data['X{}Y{}'.format(x,y)][i]['random']['Y'][0], color=color,facecolors='none',s=scale*900.0,marker=start_marker)
axins.scatter( data['X{}Y{}'.format(x,y)][i]['random']['X_DG'][-1],data['X{}Y{}'.format(x,y)][i]['random']['Y'][-1], color=color,facecolors='none',s=scale*300.0,marker=end_marker)
lwdth = lwdth - 1.5
lwdth = 8
for i in range(ntrials):
axins.plot( data['X{}Y{}'.format(x,y)][i]['random']['X'], data['X{}Y{}'.format(x,y)][i]['random']['Y_DG'],color=color,alpha=0.1*(i+1))
axins.scatter(data['X{}Y{}'.format(x,y)][i]['random']['X'][0],data['X{}Y{}'.format(x,y)][i]['random']['Y_DG'][1], color=color,facecolors='none',s=scale*900.0,marker=start_marker)
axins.scatter( data['X{}Y{}'.format(x,y)][i]['random']['X'][-1],data['X{}Y{}'.format(x,y)][i]['random']['Y_DG'][-1], color=color,facecolors='none',s=scale*300.0,marker=end_marker)
lwdth = lwdth - 1.5
axins.scatter( [0.0], [0.0], color='r',linewidth=3.0,label='Non Nash',marker='*',s=100)
axins.set_ylim(-1, 1)
axins.set_xlim(-0.75, 0.75)
plt.yticks(visible=False)
plt.xticks(visible=False)
mark_inset(ax, axins, loc1=3, loc2=4, fc="none", ec="w",alpha=0.5,linestyle='--')
color = 'cyan'
axins1.plot( data['X{}Y{}'.format(x,y)][0]['vanilla']['X_DG'], final_dat['X{}Y{}'.format(x,y)][0]['vanilla']['Y'],color=color)
axins1.scatter( data['X{}Y{}'.format(x,y)][0]['vanilla']['X_DG'][0],data['X{}Y{}'.format(x,y)][0]['vanilla']['Y'][0], color=color,s=scale*550.0,facecolors='none',marker=start_marker,label='Start point')
axins1.scatter( data['X{}Y{}'.format(x,y)][0]['vanilla']['X_DG'][-1],data['X{}Y{}'.format(x,y)][0]['vanilla']['Y'][-1], color=color,s=scale*180.0, facecolors='none',marker=end_marker,label='End point')
axins1.plot( data['X{}Y{}'.format(x,y)][0]['vanilla']['X'], data['X{}Y{}'.format(x,y)][0]['vanilla']['Y_DG'],color=color)
axins1.scatter(data['X{}Y{}'.format(x,y)][0]['vanilla']['X'][0],data['X{}Y{}'.format(x,y)][0]['vanilla']['Y_DG'][0], color=color,facecolors='none',s=scale*550.0,marker=start_marker)
axins1.scatter( data['X{}Y{}'.format(x,y)][0]['vanilla']['X'][-1],data['X{}Y{}'.format(x,y)][0]['vanilla']['Y_DG'][-1], color=color,facecolors='none',s=scale*180.0,marker=end_marker)
color = 'darksalmon'
lwdth = 8
for i in range(ntrials):
axins1.plot( data['X{}Y{}'.format(x,y)][i]['random']['X_DG'], final_dat['X{}Y{}'.format(x,y)][i]['random']['Y'],color=color,alpha=0.1*(i+1))
axins1.scatter( data['X{}Y{}'.format(x,y)][i]['random']['X_DG'][0],data['X{}Y{}'.format(x,y)][i]['random']['Y'][0], color=color,facecolors='none',s=scale*900.0,marker=start_marker)
axins1.scatter( data['X{}Y{}'.format(x,y)][i]['random']['X_DG'][-1],data['X{}Y{}'.format(x,y)][i]['random']['Y'][-1], color=color,facecolors='none',s=scale*300.0,marker=end_marker)
lwdth = lwdth - 1.5
lwdth = 8
for i in range(ntrials):
axins1.plot( data['X{}Y{}'.format(x,y)][i]['random']['X'], data['X{}Y{}'.format(x,y)][i]['random']['Y_DG'],color=color,alpha=0.1*(i+1))
axins1.scatter(data['X{}Y{}'.format(x,y)][i]['random']['X'][0],data['X{}Y{}'.format(x,y)][i]['random']['Y_DG'][1], color=color,facecolors='none',s=scale*900.0,marker=start_marker)
axins1.scatter( data['X{}Y{}'.format(x,y)][i]['random']['X'][-1],data['X{}Y{}'.format(x,y)][i]['random']['Y_DG'][-1], color=color,facecolors='none',s=scale*300.0,marker=end_marker)
lwdth = lwdth - 1.5
axins1.scatter( [0.0], [0.0], color='r',linewidth=3.0,marker='*',s=100)
axins1.scatter( [-12.467547], [-8.67366],linewidth=3.0, color='lawngreen',marker='x',s=100)
axins.set_xticks([])
axins1.set_xticks([])
axins.set_yticks([])
axins1.set_yticks([])
axins1.set_xlim(-13.2, -11.8)
axins1.set_ylim(-9.6, -7.6)
plt.yticks(visible=False)
plt.xticks(visible=False)
mark_inset(ax, axins1, loc1=3, loc2=4, fc="none", ec="w",alpha=0.5,linestyle='--')
color = 'darksalmon'
lwdth = 8
for i in range(ntrials):
ax.plot( data['X{}Y{}'.format(x,y)][i]['random']['X_DG'], final_dat['X{}Y{}'.format(x,y)][i]['random']['Y'],color=color,alpha=0.5*(i+1))
ax.scatter( data['X{}Y{}'.format(x,y)][i]['random']['X_DG'][0],data['X{}Y{}'.format(x,y)][i]['random']['Y'][0], color=color,facecolors='none',s=900.0,marker=start_marker)
ax.scatter( data['X{}Y{}'.format(x,y)][i]['random']['X_DG'][-1],data['X{}Y{}'.format(x,y)][i]['random']['Y'][-1], color=color,facecolors='none',s=300.0,marker=end_marker)
lwdth = lwdth - 1.5
lwdth = 8
for i in range(ntrials):
ax.plot( data['X{}Y{}'.format(x,y)][i]['random']['X'], data['X{}Y{}'.format(x,y)][i]['random']['Y_DG'],color=color,alpha=0.5*(i+1))
ax.scatter(data['X{}Y{}'.format(x,y)][i]['random']['X'][0],data['X{}Y{}'.format(x,y)][i]['random']['Y_DG'][1], color=color,facecolors='none',s=900.0,marker=start_marker)
ax.scatter( data['X{}Y{}'.format(x,y)][i]['random']['X'][-1],data['X{}Y{}'.format(x,y)][i]['random']['Y_DG'][-1], color=color,facecolors='none',s=300.0,marker=end_marker)
lwdth = lwdth - 1.5
if(inp==0):
ax.scatter(4.5, 0, marker='>',color=color)
ax.scatter( [0.0], [0.0], color='r',linewidth=3.0,marker='*',s=100)
ax.scatter( [-12.467547], [-8.67366],linewidth=3.0, color='lawngreen',marker='x',s=100)
ax.set_xlabel('$x$',fontsize=20)
ax.set_ylabel('$y$',fontsize=20)
ax.set_xlim((w1_min, w1_max))
ax.set_ylim((w2_min, w2_max))
plt.grid()
plt.savefig('./comparison_vanilla_random.png')
plt.show()
```
```python
import seaborn as sns
sns.set_style("ticks", {"xtick.major.size": 1, "ytick.major.size": 1})
sns.set_style('whitegrid')
show_contour(final_dat,init_points)
plt.show()
```
# SARIMAX and ARIMA: Frequently Asked Questions (FAQ)
This notebook contains explanations for frequently asked questions.
* Comparing trends and exogenous variables in `SARIMAX`, `ARIMA` and `AutoReg`
* Reconstructing residuals, fitted values and forecasts in `SARIMAX` and `ARIMA`
* Initial residuals in `SARIMAX` and `ARIMA`
## Comparing trends and exogenous variables in `SARIMAX`, `ARIMA` and `AutoReg`
`ARIMA` models are formally OLS with ARMA errors. A basic AR(1) in the OLS-with-ARMA-errors framework is described as
$$
\begin{align}
Y_t & = \delta + \epsilon_t \\
\epsilon_t & = \rho \epsilon_{t-1} + \eta_t \\
\eta_t & \sim WN(0,\sigma^2) \\
\end{align}
$$
In large samples, $\hat{\delta}\stackrel{p}{\rightarrow} E[Y]$.
`SARIMAX` uses a different representation, so that the model when estimated using `SARIMAX` is
$$
\begin{align}
Y_t & = \phi + \rho Y_{t-1} + \eta_t \\
\eta_t & \sim WN(0,\sigma^2) \\
\end{align}
$$
This is the same representation that is used when the model is estimated using OLS (`AutoReg`). In large samples, $\hat{\phi}\stackrel{p}{\rightarrow} E[Y](1-\rho)$.
In the next cell, we simulate a large sample and verify that these relationships hold in practice.
```python
%matplotlib inline
```
```python
import numpy as np
import pandas as pd
rng = np.random.default_rng(20210819)
eta = rng.standard_normal(5200)
rho = 0.8
beta = 10
epsilon = eta.copy()
for i in range(1, eta.shape[0]):
epsilon[i] = rho * epsilon[i - 1] + eta[i]
y = beta + epsilon
y = y[200:]
```
```python
from statsmodels.tsa.api import SARIMAX, AutoReg
from statsmodels.tsa.arima.model import ARIMA
```
The three models are specified and estimated in the next cell. An AR(0) is included as a reference. The AR(0) is identical using all three estimators.
```python
ar0_res = SARIMAX(y, order=(0, 0, 0), trend="c").fit()
sarimax_res = SARIMAX(y, order=(1, 0, 0), trend="c").fit()
arima_res = ARIMA(y, order=(1, 0, 0), trend="c").fit()
autoreg_res = AutoReg(y, 1, trend="c").fit()
```
The table below contains the estimated intercept parameter, the estimated AR(1) coefficient, and the long-run mean, which is either equal to the estimated intercept (AR(0) or `ARIMA`) or depends on the ratio of the intercept to 1 minus the AR(1) parameter.
```python
intercept = [
ar0_res.params[0],
sarimax_res.params[0],
arima_res.params[0],
autoreg_res.params[0],
]
rho_hat = [0] + [r.params[1] for r in (sarimax_res, arima_res, autoreg_res)]
long_run = [
ar0_res.params[0],
sarimax_res.params[0] / (1 - sarimax_res.params[1]),
arima_res.params[0],
autoreg_res.params[0] / (1 - autoreg_res.params[1]),
]
cols = ["AR(0)", "SARIMAX", "ARIMA", "AutoReg"]
pd.DataFrame(
[intercept, rho_hat, long_run],
columns=cols,
index=["delta-or-phi", "rho", "long-run mean"],
)
```
### Differences between trend and exog in `SARIMAX`
When `SARIMAX` includes `exog` variables, the `exog` are treated as OLS regressors, so that the model estimated is
$$
\begin{align}
Y_t - X_t \beta & = \delta + \rho (Y_{t-1} - X_{t-1}\beta) + \eta_t \\
\eta_t & \sim WN(0,\sigma^2) \\
\end{align}
$$
In the next example, we omit the trend and instead include a column of ones, which produces a model that is equivalent, in large samples, to the case with no exogenous regressor and `trend="c"`. Here the estimated value of `const` matches the value estimated using `ARIMA`. This happens since both the `exog` in `SARIMAX` and the trend in `ARIMA` are treated as regressors in a linear regression with ARMA errors.
```python
sarimax_exog_res = SARIMAX(y, exog=np.ones_like(y), order=(1, 0, 0), trend="n").fit()
print(sarimax_exog_res.summary())
```
### Using `exog` in `SARIMAX` and `ARIMA`
While `exog` are treated the same in both models, the intercept continues to differ. Below we add an exogenous regressor to `y` and then fit the model using all three methods. The data generating process is now
$$
\begin{align}
Y_t & = \delta + X_t \beta + \epsilon_t \\
\epsilon_t & = \rho \epsilon_{t-1} + \eta_t \\
\eta_t & \sim WN(0,\sigma^2) \\
\end{align}
$$
```python
full_x = rng.standard_normal(eta.shape)
x = full_x[200:]
y += 3 * x
```
```python
sarimax_exog_res = SARIMAX(y, exog=x, order=(1, 0, 0), trend="c").fit()
arima_exog_res = ARIMA(y, exog=x, order=(1, 0, 0), trend="c").fit()
```
Examining the parameter tables, we see that the parameter estimates on `x1` are identical while the estimates of the `intercept` continue to differ due to the differences in the treatment of trends in these estimators.
#### `SARIMAX`
```python
def print_params(s):
from io import StringIO
return pd.read_csv(StringIO(s.tables[1].as_csv()), index_col=0)
print_params(sarimax_exog_res.summary())
```
#### `ARIMA`
```python
print_params(arima_exog_res.summary())
```
### `exog` in `AutoReg`
When `AutoReg` is used to estimate a model by OLS, the specification differs from both `SARIMAX` and `ARIMA`. The `AutoReg` specification with exogenous variables is
$$
\begin{align}
Y_t & = \phi + \rho Y_{t-1} + X_{t}\beta + \eta_t \\
\eta_t & \sim WN(0,\sigma^2) \\
\end{align}
$$
This specification is not equivalent to the specification estimated in `SARIMAX` and `ARIMA`. Here the difference is non-trivial, and naive estimation on the same time series results in different parameter values, even in large samples (and in the limit). Estimating this specification changes the estimate of the AR(1) coefficient.
#### `AutoReg`
```python
autoreg_exog_res = AutoReg(y, 1, exog=x, trend="c").fit()
print_params(autoreg_exog_res.summary())
```
The key difference can be seen by writing the model in lag operator notation.
$$
\begin{align}
(1-\phi L ) Y_t & = X_{t}\beta + \eta_t \Rightarrow \\
Y_t & = (1-\phi L )^{-1}\left(X_{t}\beta + \eta_t\right) \\
Y_t & = \sum_{i=0}^{\infty} \phi^i \left(X_{t-i}\beta + \eta_{t-i}\right)
\end{align}
$$
where it is assumed that $|\phi|<1$. Here we see that $Y_t$ depends on all lagged values of $X_t$ and $\eta_t$. This differs from the specification estimated by `SARIMAX` and `ARIMA`, which can be seen to be
$$
\begin{align}
Y_t - X_t \beta & = \delta + \rho (Y_{t-1} - X_{t-1}\beta) + \eta_t \\
\left(1-\rho L \right)\left(Y_t - X_t \beta\right) & = \delta + \eta_t \\
Y_t - X_t \beta & = \frac{\delta}{1-\rho} + \left(1-\rho L \right)^{-1}\eta_t \\
Y_t - X_t \beta & = \frac{\delta}{1-\rho} + \sum_{i=0}^\infty \rho^i \eta_{t-i} \\
Y_t & = \frac{\delta}{1-\rho} + X_t \beta + \sum_{i=0}^\infty \rho^i \eta_{t-i} \\
\end{align}
$$
In this specification, $Y_t$ only depends on $X_t$ and no other lags.
### Using the correct DGP with `AutoReg`
Simulating the process that is estimated in `AutoReg` shows that the parameters are recovered from the true model.
```python
# simulate the AutoReg DGP: Y_t = phi + rho * Y_{t-1} + X_t * beta + eta_t,
# with phi = beta * (1 - rho) so that the long-run mean (absent x) is beta
y = beta + eta
for i in range(1, eta.shape[0]):
    y[i] = beta * (1 - rho) + rho * y[i - 1] + 3 * full_x[i] + eta[i]
y = y[200:]
```
#### `AutoReg` with correct DGP
```python
autoreg_alt_exog_res = AutoReg(y, 1, exog=x, trend="c").fit()
print_params(autoreg_alt_exog_res.summary())
```
## Reconstructing residuals, fitted values and forecasts in `SARIMAX` and `ARIMA`
In models that contain only autoregressive terms, trends and exogenous variables, fitted values and forecasts can be easily reconstructed once the maximum lag length in the model has been reached. In practice, this means after $(P+D)s+p+d$ periods. Earlier predictions and residuals are harder to reconstruct since the model builds the best prediction for $Y_t|Y_{t-1},Y_{t-2},...$. When the number of lags of $Y$ is less than the autoregressive order, then the expression for the optimal prediction differs from the model. For example, when predicting the very first value, $Y_1$, there is no information available from the history of $Y$, and so the best prediction is the unconditional mean. In the case of an AR(1), the second prediction will follow the model, so that when using `ARIMA`, the prediction is
$$
Y_2 = \hat{\delta} + \hat{\rho} \left(Y_1 - \hat{\delta}\right)
$$
since `ARIMA` treats both exogenous and trend terms as regression with ARMA errors.
This can be seen in the next set of cells.
```python
arima_res = ARIMA(y, order=(1, 0, 0), trend="c").fit()
print_params(arima_res.summary())
```
```python
arima_res.predict(0, 2)
```
```python
delta_hat, rho_hat = arima_res.params[:2]
delta_hat + rho_hat * (y[0] - delta_hat)
```
`SARIMAX` treats trend terms differently, and so the one-step forecast from a model estimated using `SARIMAX` is
$$
Y_2 = \hat\delta + \hat\rho Y_1
$$
```python
sarima_res = SARIMAX(y, order=(1, 0, 0), trend="c").fit()
print_params(sarima_res.summary())
```
```python
sarima_res.predict(0, 2)
```
```python
delta_hat, rho_hat = sarima_res.params[:2]
delta_hat + rho_hat * y[0]
```
### Prediction with MA components
When a model contains a MA component, the prediction is more complicated since errors are never directly observable. The prediction is still $Y_t|Y_{t-1},Y_{t-2},...$, and when the MA component is invertible, then the optimal prediction can be represented as a $t$-lag AR process. When $t$ is large, this should be very close to the prediction as if the errors were observable. For short lags, this can differ markedly.
In the next cell we simulate an MA(1) process, and fit an MA model.
```python
rho = 0.8
beta = 10
epsilon = eta.copy()
for i in range(1, eta.shape[0]):
epsilon[i] = rho * eta[i - 1] + eta[i]
y = beta + epsilon
y = y[200:]
ma_res = ARIMA(y, order=(0, 0, 1), trend="c").fit()
print_params(ma_res.summary())
```
We start by looking at predictions near the beginning of the sample, corresponding to `y[1]`, ..., `y[5]`.
```python
ma_res.predict(1, 5)
```
and the corresponding residuals that are needed to produce the "direct" forecasts
```python
ma_res.resid[:5]
```
Using the model parameters, we can produce the "direct" forecasts using the MA(1) specification
$$
\hat Y_t = \hat\delta + \hat\rho \hat\epsilon_{t-1}
$$
We see that these are not especially close to the actual model predictions for the initial forecasts, but that the gap quickly reduces.
```python
delta_hat, rho_hat = ma_res.params[:2]
direct = delta_hat + rho_hat * ma_res.resid[:5]
direct
```
The difference is nearly a standard deviation for the first prediction but declines as the index increases.
```python
ma_res.predict(1, 5) - direct
```
We next look at the end of the sample and the final three predictions.
```python
t = y.shape[0]
ma_res.predict(t - 3, t - 1)
```
```python
ma_res.resid[-4:-1]
```
```python
direct = delta_hat + rho_hat * ma_res.resid[-4:-1]
direct
```
The "direct" forecasts are identical. This happens since the effect of the short sample has disappeared by the end of the sample (In practice it is negligible by observations 100 or so, and numerically absent by around observation 160).
```python
ma_res.predict(t - 3, t - 1) - direct
```
The same principle applies in more complicated models that include multiple lags or seasonal terms: predictions in AR models are simple once the effective lag length has been reached, while predictions in models that contain MA components are only simple once the maximum root of the MA lag polynomial is sufficiently small so that the residuals are close to the true residuals.
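One way to gauge this convergence (a quick check, assuming the invertible MA(1) fit `ma_res` from above) is to inspect the roots of the MA lag polynomial exposed by the results class:
```python
# roots of the MA lag polynomial; invertibility requires all |roots| > 1,
# and roots further from the unit circle mean faster convergence of the
# "direct" forecasts to the model predictions
ma_res.maroots
```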
### Prediction differences in `SARIMAX` and `ARIMA`
The formulas used to make predictions from `SARIMAX` and `ARIMA` models differ in one key aspect - `ARIMA` treats all trend terms, e.g, the intercept or time trend, as part of the exogenous regressors. For example, an AR(1) model with an intercept and linear time trend estimated using `ARIMA` has the specification
$$
\begin{align*}
Y_t - \delta_0 - \delta_1 t & = \epsilon_t \\
\epsilon_t & = \rho \epsilon_{t-1} + \eta_t
\end{align*}
$$
When the same model is estimated using `SARIMAX`, the specification is
$$
\begin{align*}
Y_t & = \epsilon_t \\
\epsilon_t & = \delta_0 + \delta_1 t + \rho \epsilon_{t-1} + \eta_t
\end{align*}
$$
The differences are more apparent when the model contains exogenous regressors, $X_t$. The `ARIMA` specification is
$$
\begin{align*}
Y_t - \delta_0 - \delta_1 t - X_t \beta & = \epsilon_t \\
\epsilon_t & = \rho \epsilon_{t-1} + \eta_t \\
& = \rho \left(Y_{t-1} - \delta_0 - \delta_1 (t-1) - X_{t-1} \beta\right) + \eta_t
\end{align*}
$$
while the `SARIMAX` specification is
$$
\begin{align*}
Y_t & = X_t \beta + \epsilon_t \\
\epsilon_t & = \delta_0 + \delta_1 t + \rho \epsilon_{t-1} + \eta_t \\
& = \delta_0 + \delta_1 t + \rho \left(Y_{t-1} - X_{t-1}\beta\right) + \eta_t
\end{align*}
$$
The key difference between these two is that the intercept and the trend are effectively equivalent to exogenous regressions in `ARIMA` while they are more like standard ARMA terms in `SARIMAX`.
The next cell simulates an ARX with a time trend using the specification in `ARIMA` and estimates the parameters using both estimators.
```python
rho = 0.8
beta = 2
delta0 = 10
delta1 = 0.5
epsilon = eta.copy()
for i in range(1, eta.shape[0]):
epsilon[i] = rho * epsilon[i - 1] + eta[i]
t = np.arange(epsilon.shape[0])
y = delta0 + delta1 * t + beta * full_x + epsilon
y = y[200:]
```
```python
start = np.array([110, delta1, beta, rho, 1])
arx_res = ARIMA(y, exog=x, order=(1, 0, 0), trend="ct").fit()
mod = SARIMAX(y, exog=x, order=(1, 0, 0), trend="ct")
start[:2] *= 1 - rho
sarimax_res = mod.fit(start_params=start, method="bfgs")
```
The two estimators produce similar fits, although there is a small difference in the log-likelihood. This is a numerical issue and should not materially affect the predictions. Importantly, the two trend parameters, `const` and `x1` (unfortunately named, since it is the time trend), differ between the two. The other parameters are effectively identical.
```python
print(arx_res.summary())
```
```python
print(sarimax_res.summary())
```
## Initial residuals `SARIMAX` and `ARIMA`
Residuals for observations before the maximal model order, which depends on the AR, MA, seasonal AR, seasonal MA, and differencing parameters, are not reliable and should not be used for performance assessment. In general, in an ARIMA with orders $(p,d,q)\times(P,D,Q,s)$, the number of initial residuals that are less well behaved is:
$$
\max((P+D)s+p+d,Qs+q)
$$
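A small helper (a sketch with hypothetical argument names) makes this cutoff explicit:
```python
def n_unreliable_resids(p, d, q, P=0, D=0, Q=0, s=0):
    """Number of initial residuals to discard for an ARIMA
    (p,d,q)x(P,D,Q,s) model, per the formula above."""
    return max((P + D) * s + p + d, Q * s + q)

# e.g., ARIMA(1,0,0)(1,0,0,12): the first 13 residuals are unreliable
n_unreliable_resids(1, 0, 0, P=1, D=0, Q=0, s=12)
```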
We can simulate some data from an ARIMA(1,0,0)(1,0,0,12) and examine the residuals.
```python
import numpy as np
import pandas as pd
rho = 0.8
psi = -0.6
beta = 20
epsilon = eta.copy()
for i in range(13, eta.shape[0]):
epsilon[i] = (
rho * epsilon[i - 1]
+ psi * epsilon[i - 12]
- (rho * psi) * epsilon[i - 13]
+ eta[i]
)
y = beta + epsilon
y = y[200:]
```
With a large sample, the parameter estimates are very close to the DGP parameters.
```python
res = ARIMA(y, order=(1, 0, 0), trend="c", seasonal_order=(1, 0, 0, 12)).fit()
print(res.summary())
```
We can first examine the initial 13 residuals by plotting them against the actual shocks in the model. While there is a correspondence, it is fairly weak and the correlation is much less than 1.
```python
import matplotlib.pyplot as plt
plt.rc("figure", figsize=(10, 10))
plt.rc("font", size=14)
_ = plt.scatter(res.resid[:13], eta[200 : 200 + 13])
```
Looking at the next 24 residuals and shocks, we see there is nearly perfect correlation. This is expected in large samples once the less accurate residuals are ignored.
```python
_ = plt.scatter(res.resid[13:37], eta[200 + 13 : 200 + 37])
```
Next, we simulate an ARIMA(1,1,0), and include a time trend.
```python
rng = np.random.default_rng(20210819)
eta = rng.standard_normal(5200)
rho = 0.8
beta = 20
epsilon = eta.copy()
for i in range(2, eta.shape[0]):
epsilon[i] = (1 + rho) * epsilon[i - 1] - rho * epsilon[i - 2] + eta[i]
t = np.arange(epsilon.shape[0])
y = beta + 2 * t + epsilon
y = y[200:]
```
Again the parameter estimates are very close to the DGP parameters.
```python
res = ARIMA(y, order=(1, 1, 0), trend="t").fit()
print(res.summary())
```
The residuals are not accurate, and the first residual is approximately 500. The others are closer, although in this model the first two should usually be ignored.
```python
res.resid[:5]
```
The reason the first residual is so large is that the optimal prediction of this value is the mean of the differenced series, which is 1.77. Once the first value is known, the prediction of the second value makes use of the first value and is substantially closer to the truth.
```python
res.predict(0, 5)
```
It is worth noting that the results class contains two attributes that can be helpful in understanding which residuals are problematic, `loglikelihood_burn` and `nobs_diffuse`.
```python
res.loglikelihood_burn, res.nobs_diffuse
```
|
f1aa760dfb0e6f78c2811a6c8f9b6f820c2f4f90
| 30,942 |
ipynb
|
Jupyter Notebook
|
examples/notebooks/statespace_sarimax_faq.ipynb
|
CCHiggins/statsmodels
|
300b6fba90c65c8e94b4f83e04f7ae1b0ceeac2e
|
[
"BSD-3-Clause"
] | 6,931 |
2015-01-01T11:41:55.000Z
|
2022-03-31T17:03:24.000Z
|
examples/notebooks/statespace_sarimax_faq.ipynb
|
CCHiggins/statsmodels
|
300b6fba90c65c8e94b4f83e04f7ae1b0ceeac2e
|
[
"BSD-3-Clause"
] | 6,137 |
2015-01-01T00:33:45.000Z
|
2022-03-31T22:53:17.000Z
|
examples/notebooks/statespace_sarimax_faq.ipynb
|
CCHiggins/statsmodels
|
300b6fba90c65c8e94b4f83e04f7ae1b0ceeac2e
|
[
"BSD-3-Clause"
] | 2,608 |
2015-01-02T21:32:31.000Z
|
2022-03-31T07:38:30.000Z
| 31.317814 | 822 | 0.573363 | true | 5,024 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.824462 | 0.914901 | 0.754301 |
__label__eng_Latn
| 0.977714 | 0.590826 |
# Bayesian Temporal Matrix Factorization
**Published**: October 8, 2019
**Revised**: October 8, 2020
**Author**: Xinyu Chen [[**GitHub homepage**](https://github.com/xinychen)]
**Download**: This Jupyter notebook is at our GitHub repository. If you want to evaluate the code, please download the notebook from the [**transdim**](https://github.com/xinychen/transdim/blob/master/imputer/BTMF.ipynb) repository.
This notebook shows how to implement the Bayesian Temporal Matrix Factorization (BTMF), a fully Bayesian matrix factorization model, on some real-world data sets. To overcome the missing data problem in multivariate time series, BTMF takes into account both low-rank matrix structure and time series autoregression. For an in-depth discussion of BTMF, please see [1].
<div class="alert alert-block alert-info">
<font color="black">
<b>[1]</b> Xinyu Chen, Lijun Sun (2019). <b>Bayesian temporal factorization for multidimensional time series prediction</b>. arXiv:1910.06366. <a href="https://arxiv.org/pdf/1910.06366.pdf" title="PDF"><b>[PDF]</b></a>
</font>
</div>
## Bayesian Temporal Matrix Factorization Model
### 1 Model Specification
Following the general Bayesian probabilistic matrix factorization models (e.g., BPMF proposed by [Salakhutdinov & Mnih, 2008](https://www.cs.toronto.edu/~amnih/papers/bpmf.pdf)), we assume that each observed entry in $Y$ follows a Gaussian distribution with precision $\tau$:
\begin{equation}
y_{i,t}\sim\mathcal{N}\left(\boldsymbol{w}_i^\top\boldsymbol{x}_t,\tau^{-1}\right),\quad \left(i,t\right)\in\Omega.
\label{btmf_equation3}
\end{equation}
On the spatial dimension, we use a simple Gaussian factor matrix without imposing any dependencies explicitly:
\begin{equation}
\boldsymbol{w}_i\sim\mathcal{N}\left(\boldsymbol{\mu}_{w},\Lambda_w^{-1}\right),
\end{equation}
and we place a conjugate Gaussian-Wishart prior on the mean vector and the precision matrix:
\begin{equation}
\boldsymbol{\mu}_w | \Lambda_w \sim\mathcal{N}\left(\boldsymbol{\mu}_0,(\beta_0\Lambda_w)^{-1}\right),\Lambda_w\sim\mathcal{W}\left(W_0,\nu_0\right),
\end{equation}
where $\boldsymbol{\mu}_0\in \mathbb{R}^{R}$ is a mean vector, $\mathcal{W}\left(W_0,\nu_0\right)$ is a Wishart distribution with a $R\times R$ scale matrix $W_0$ and $\nu_0$ degrees of freedom.
In modeling the temporal factor matrix $X$, we re-write the VAR process as:
\begin{equation}
\begin{aligned}
\boldsymbol{x}_{t}&\sim\begin{cases}
\mathcal{N}\left(\boldsymbol{0},I_R\right),&\text{if $t\in\left\{1,2,...,h_d\right\}$}, \\
\mathcal{N}\left(A^\top \boldsymbol{v}_{t},\Sigma\right),&\text{otherwise},\\
\end{cases}\\
\end{aligned}
\label{btmf_equation5}
\end{equation}
Since the mean vector is defined by VAR, we need to place the conjugate matrix normal inverse Wishart (MNIW) prior on the coefficient matrix $A$ and the covariance matrix $\Sigma$ as follows,
\begin{equation}
\begin{aligned}
A\sim\mathcal{MN}_{(Rd)\times R}\left(M_0,\Psi_0,\Sigma\right),\quad
\Sigma \sim\mathcal{IW}\left(S_0,\nu_0\right), \\
\end{aligned}
\end{equation}
where the probability density function for the $Rd$-by-$R$ random matrix $A$ has the form:
\begin{equation}
\begin{aligned}
&p\left(A\mid M_0,\Psi_0,\Sigma\right) \\
=&\left(2\pi\right)^{-R^2d/2}\left|\Psi_0\right|^{-R/2}\left|\Sigma\right|^{-Rd/2} \\
&\times \exp\left(-\frac{1}{2}\text{tr}\left[\Sigma^{-1}\left(A-M_0\right)^{\top}\Psi_{0}^{-1}\left(A-M_0\right)\right]\right), \\
\end{aligned}
\label{mnpdf}
\end{equation}
where $\Psi_0\in\mathbb{R}^{(Rd)\times (Rd)}$ and $\Sigma\in\mathbb{R}^{R\times R}$ are played as covariance matrices.
For the only remaining parameter $\tau$, we place a Gamma prior $\tau\sim\text{Gamma}\left(\alpha,\beta\right)$ where $\alpha$ and $\beta$ are the shape and rate parameters, respectively.
The above specifies the full generative process of BTMF, and we could also see the Bayesian graphical model shown in Figure 4. Several parameters are introduced to define the prior distributions for hyperparameters, including $\boldsymbol{\mu}_{0}$, $W_0$, $\nu_0$, $\beta_0$, $\alpha$, $\beta$, $M_0$, $\Psi_0$, and $S_0$. These parameters need to be provided in advance when training the model. However, it should be noted that the specification of these parameters has little impact on the final results, as the training data will play a much more important role in defining the posteriors of the hyperparameters.
> **Figure 4**: An overview graphical model of BTMF (time lag set: $\left\{1,2,...,d\right\}$). The shaded nodes ($y_{i,t}$) are the observed data in $\Omega$.
### 2 Model Inference
Given the complex structure of BTMF, it is intractable to write down the posterior distribution. Here we rely on the MCMC technique for Bayesian learning. In detail, we introduce a Gibbs sampling algorithm by deriving the full conditional distributions for all parameters and hyperparameters. Thanks to the use of conjugate priors in Figure 4, we can actually write down all the conditional distributions analytically. Below we summarize the Gibbs sampling procedure.
#### 1) Sampling Factor Matrix $W$ and Its Hyperparameters
> For programming convenience, we use $W\in\mathbb{R}^{N\times R}$ to replace $W\in\mathbb{R}^{R\times N}$.
```python
import numpy as np
from numpy.linalg import inv as inv
from numpy.random import normal as normrnd
from scipy.linalg import khatri_rao as kr_prod
from scipy.stats import wishart
from scipy.stats import invwishart
from numpy.linalg import solve as solve
from numpy.linalg import cholesky as cholesky_lower
from scipy.linalg import cholesky as cholesky_upper
from scipy.linalg import solve_triangular as solve_ut
import matplotlib.pyplot as plt
%matplotlib inline
```
```python
def mvnrnd_pre(mu, Lambda):
src = normrnd(size = (mu.shape[0],))
return solve_ut(cholesky_upper(Lambda, overwrite_a = True, check_finite = False),
src, lower = False, check_finite = False, overwrite_b = True) + mu
def cov_mat(mat, mat_bar):
mat = mat - mat_bar
return mat.T @ mat
def ten2mat(tensor, mode):
return np.reshape(np.moveaxis(tensor, mode, 0), (tensor.shape[mode], -1), order = 'F')
```
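The non-random-missing cells further below call `mat2ten`, which is not defined in this notebook. The following is a minimal sketch of the inverse of `ten2mat` above, consistent with the conventions of the transdim repository:
```python
def mat2ten(mat, dim, mode):
    """Fold a mode-`mode` unfolding `mat` back into a tensor of shape `dim`
    (inverse of ten2mat; `dim` is a 1d integer array)."""
    index = list(range(len(dim)))
    index.remove(mode)
    index = [mode] + index
    return np.moveaxis(np.reshape(mat, list(np.asarray(dim)[index]), order = 'F'), 0, mode)
```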
```python
def sample_factor_w(tau_sparse_mat, tau_ind, W, X, tau, beta0 = 1, vargin = 0):
"""Sampling N-by-R factor matrix W and its hyperparameters (mu_w, Lambda_w)."""
dim1, rank = W.shape
W_bar = np.mean(W, axis = 0)
temp = dim1 / (dim1 + beta0)
var_mu_hyper = temp * W_bar
var_W_hyper = inv(np.eye(rank) + cov_mat(W, W_bar) + temp * beta0 * np.outer(W_bar, W_bar))
var_Lambda_hyper = wishart.rvs(df = dim1 + rank, scale = var_W_hyper)
var_mu_hyper = mvnrnd_pre(var_mu_hyper, (dim1 + beta0) * var_Lambda_hyper)
if dim1 * rank ** 2 > 1e+8:
vargin = 1
if vargin == 0:
var1 = X.T
var2 = kr_prod(var1, var1)
var3 = (var2 @ tau_ind.T).reshape([rank, rank, dim1]) + var_Lambda_hyper[:, :, None]
var4 = var1 @ tau_sparse_mat.T + (var_Lambda_hyper @ var_mu_hyper)[:, None]
for i in range(dim1):
W[i, :] = mvnrnd_pre(solve(var3[:, :, i], var4[:, i]), var3[:, :, i])
elif vargin == 1:
for i in range(dim1):
            # use tau_sparse_mat (a function argument) rather than the global sparse_mat;
            # tau_sparse_mat = tau[:, None] * sparse_mat, so the algebra is unchanged
            pos0 = np.where(tau_sparse_mat[i, :] != 0)
            Xt = X[pos0[0], :]
            var_mu = Xt.T @ tau_sparse_mat[i, pos0[0]] + var_Lambda_hyper @ var_mu_hyper
var_Lambda = tau[i] * Xt.T @ Xt + var_Lambda_hyper
W[i, :] = mvnrnd_pre(solve(var_Lambda, var_mu), var_Lambda)
return W
```
#### 2) Sampling VAR Coefficients $A$ and Its Hyperparameters
**Foundations of VAR**
Vector autoregression (VAR) is a multivariate extension of autoregression (AR). Formally, VAR for $R$-dimensional vectors $\boldsymbol{x}_{t}$ can be written as follows,
\begin{equation}
\begin{aligned}
\boldsymbol{x}_{t}&=A_{1} \boldsymbol{x}_{t-h_1}+\cdots+A_{d} \boldsymbol{x}_{t-h_d}+\boldsymbol{\epsilon}_{t}, \\
&= A^\top \boldsymbol{v}_{t}+\boldsymbol{\epsilon}_{t},~t=h_d+1, \ldots, T, \\
\end{aligned}
\end{equation}
where
\begin{equation}
A=\left[A_{1}, \ldots, A_{d}\right]^{\top} \in \mathbb{R}^{(R d) \times R},\quad \boldsymbol{v}_{t}=\left[\begin{array}{c}{\boldsymbol{x}_{t-h_1}} \\ {\vdots} \\ {\boldsymbol{x}_{t-h_d}}\end{array}\right] \in \mathbb{R}^{(R d) \times 1}.
\end{equation}
In the following, if we define
\begin{equation}
Z=\left[\begin{array}{c}{\boldsymbol{x}_{h_d+1}^{\top}} \\ {\vdots} \\ {\boldsymbol{x}_{T}^{\top}}\end{array}\right] \in \mathbb{R}^{(T-h_d) \times R},\quad Q=\left[\begin{array}{c}{\boldsymbol{v}_{h_d+1}^{\top}} \\ {\vdots} \\ {\boldsymbol{v}_{T}^{\top}}\end{array}\right] \in \mathbb{R}^{(T-h_d) \times(R d)},
\end{equation}
then, we could write the above mentioned VAR as
\begin{equation}
\underbrace{Z}_{(T-h_d)\times R}\approx \underbrace{Q}_{(T-h_d)\times (Rd)}\times \underbrace{A}_{(Rd)\times R}.
\end{equation}
> To include temporal factors $\boldsymbol{x}_{t},t=1,...,h_d$, we also define $$Z_0=\left[\begin{array}{c}{\boldsymbol{x}_{1}^{\top}} \\ {\vdots} \\ {\boldsymbol{x}_{h_d}^{\top}}\end{array}\right] \in \mathbb{R}^{h_d \times R}.$$
**Build a Bayesian VAR on temporal factors $\boldsymbol{x}_{t}$**
\begin{equation}
\begin{aligned}
\boldsymbol{x}_{t}&\sim\begin{cases}\mathcal{N}\left(A^\top \boldsymbol{v}_{t},\Sigma\right),~\text{if $t\in\left\{h_d+1,...,T\right\}$},\\{\mathcal{N}\left(\boldsymbol{0},I_R\right),~\text{otherwise}}.\end{cases}\\
A&\sim\mathcal{MN}_{(Rd)\times R}\left(M_0,\Psi_0,\Sigma\right), \\
\Sigma &\sim\mathcal{IW}\left(S_0,\nu_0\right), \\
\end{aligned}
\end{equation}
where
\begin{equation}
\begin{aligned}
&\mathcal{M N}_{(R d) \times R}\left(A | M_{0}, \Psi_{0}, \Sigma\right)\\
\propto|&\Sigma|^{-R d / 2} \exp \left(-\frac{1}{2} \operatorname{tr}\left[\Sigma^{-1}\left(A-M_{0}\right)^{\top} \Psi_{0}^{-1}\left(A-M_{0}\right)\right]\right), \\
\end{aligned}
\end{equation}
and
\begin{equation}
\mathcal{I} \mathcal{W}\left(\Sigma | S_{0}, \nu_{0}\right) \propto|\Sigma|^{-\left(\nu_{0}+R+1\right) / 2} \exp \left(-\frac{1}{2} \operatorname{tr}\left(\Sigma^{-1}S_{0}\right)\right).
\end{equation}
**Likelihood from temporal factors $\boldsymbol{x}_{t}$**
\begin{equation}
\begin{aligned}
&\mathcal{L}\left(X\mid A,\Sigma\right) \\
\propto &\prod_{t=1}^{h_d}p\left(\boldsymbol{x}_{t}\mid \Sigma\right)\times \prod_{t=h_d+1}^{T}p\left(\boldsymbol{x}_{t}\mid A,\Sigma\right) \\
\propto &\left|\Sigma\right|^{-T/2}\exp\left\{-\frac{1}{2}\sum_{t=h_d+1}^{T}\left(\boldsymbol{x}_{t}-A^\top \boldsymbol{v}_{t}\right)^\top\Sigma^{-1}\left(\boldsymbol{x}_{t}-A^\top \boldsymbol{v}_{t}\right)\right\} \\
\propto &\left|\Sigma\right|^{-T/2}\exp\left\{-\frac{1}{2}\text{tr}\left[\Sigma^{-1}\left(Z_0^\top Z_0+\left(Z-QA\right)^\top \left(Z-QA\right)\right)\right]\right\}
\end{aligned}
\end{equation}
**Posterior distribution**
Consider
\begin{equation}
\begin{aligned}
&\left(A-M_{0}\right)^{\top} \Psi_{0}^{-1}\left(A-M_{0}\right)+S_0+Z_0^\top Z_0+\left(Z-QA\right)^\top \left(Z-QA\right) \\
=&A^\top\left(\Psi_0^{-1}+Q^\top Q\right)A-A^\top\left(\Psi_0^{-1}M_0+Q^\top Z\right) \\
&-\left(\Psi_0^{-1}M_0+Q^\top Z\right)^\top A \\
&+\left(\Psi_0^{-1}M_0+Q^\top Z\right)^\top\left(\Psi_0^{-1}+Q^\top Q\right)\left(\Psi_0^{-1}M_0+Q^\top Z\right) \\
&-\left(\Psi_0^{-1}M_0+Q^\top Z\right)^\top\left(\Psi_0^{-1}+Q^\top Q\right)\left(\Psi_0^{-1}M_0+Q^\top Z\right) \\
&+M_0^\top\Psi_0^{-1}M_0+S_0+Z_0^\top Z_0+Z^\top Z \\
=&\left(A-M^{*}\right)^\top\left(\Psi^{*}\right)^{-1}\left(A-M^{*}\right)+S^{*}, \\
\end{aligned}
\end{equation}
which is in the form of $\mathcal{MN}\left(\cdot\right)$ and $\mathcal{IW}\left(\cdot\right)$.
The $Rd$-by-$R$ matrix $A$ has a matrix normal distribution, and $R$-by-$R$ covariance matrix $\Sigma$ has an inverse Wishart distribution, that is,
\begin{equation}
A \sim \mathcal{M N}_{(R d) \times R}\left(M^{*}, \Psi^{*}, \Sigma\right), \quad \Sigma \sim \mathcal{I} \mathcal{W}\left(S^{*}, \nu^{*}\right),
\end{equation}
with
\begin{equation}
\begin{cases}
{\Psi^{*}=\left(\Psi_{0}^{-1}+Q^{\top} Q\right)^{-1}}, \\ {M^{*}=\Psi^{*}\left(\Psi_{0}^{-1} M_{0}+Q^{\top} Z\right)}, \\ {S^{*}=S_{0}+Z^\top Z+M_0^\top\Psi_0^{-1}M_0-\left(M^{*}\right)^\top\left(\Psi^{*}\right)^{-1}M^{*}}, \\
{\nu^{*}=\nu_{0}+T-h_d}.
\end{cases}
\end{equation}
```python
def mnrnd(M, U, V):
"""
Generate matrix normal distributed random matrix.
M is a m-by-n matrix, U is a m-by-m matrix, and V is a n-by-n matrix.
"""
dim1, dim2 = M.shape
X0 = np.random.randn(dim1, dim2)
P = cholesky_lower(U)
Q = cholesky_lower(V)
return M + P @ X0 @ Q.T
def sample_var_coefficient(X, time_lags):
dim, rank = X.shape
d = time_lags.shape[0]
tmax = np.max(time_lags)
Z_mat = X[tmax : dim, :]
Q_mat = np.zeros((dim - tmax, rank * d))
for k in range(d):
Q_mat[:, k * rank : (k + 1) * rank] = X[tmax - time_lags[k] : dim - time_lags[k], :]
var_Psi0 = np.eye(rank * d) + Q_mat.T @ Q_mat
var_Psi = inv(var_Psi0)
var_M = var_Psi @ Q_mat.T @ Z_mat
var_S = np.eye(rank) + Z_mat.T @ Z_mat - var_M.T @ var_Psi0 @ var_M
Sigma = invwishart.rvs(df = rank + dim - tmax, scale = var_S)
return mnrnd(var_M, var_Psi, Sigma), Sigma
```
#### 3) Sampling Factor Matrix $X$
**Posterior distribution**
\begin{equation}
\begin{aligned}
y_{it}&\sim\mathcal{N}\left(\boldsymbol{w}_{i}^\top\boldsymbol{x}_{t},\tau^{-1}\right),~\left(i,t\right)\in\Omega, \\
\boldsymbol{x}_{t}&\sim\begin{cases}\mathcal{N}\left(\sum_{k=1}^{d}A_{k} \boldsymbol{x}_{t-h_k},\Sigma\right),~\text{if $t\in\left\{h_d+1,...,T\right\}$},\\{\mathcal{N}\left(\boldsymbol{0},I\right),~\text{otherwise}}.\end{cases}\\
\end{aligned}
\end{equation}
If $t\in\left\{1,...,h_d\right\}$, parameters of the posterior distribution $\mathcal{N}\left(\boldsymbol{x}_{t}\mid \boldsymbol{\mu}_{t}^{*},\Sigma_{t}^{*}\right)$ are
\begin{equation}
\begin{aligned}
\Sigma_{t}^{*}&=\left(\sum_{k=1, h_{d}<t+h_{k} \leq T}^{d} {A}_{k}^{\top} \Sigma^{-1} A_{k}+\tau\sum_{i:(i,t)\in\Omega}\boldsymbol{w}_{i}\boldsymbol{w}_{i}^\top+I\right)^{-1}, \\
\boldsymbol{\mu}_{t}^{*}&=\Sigma_{t}^{*}\left(\sum_{k=1, h_{d}<t+h_{k} \leq T}^{d} A_{k}^{\top} \Sigma^{-1} \boldsymbol{\psi}_{t+h_{k}}+\tau\sum_{i:(i,t)\in\Omega}\boldsymbol{w}_{i}y_{it}\right). \\
\end{aligned}
\end{equation}
If $t\in\left\{h_d+1,...,T\right\}$, then parameters of the posterior distribution $\mathcal{N}\left(\boldsymbol{x}_{t}\mid \boldsymbol{\mu}_{t}^{*},\Sigma_{t}^{*}\right)$ are
\begin{equation}
\begin{aligned}
\Sigma_{t}^{*}&=\left(\sum_{k=1, h_{d}<t+h_{k} \leq T}^{d} {A}_{k}^{\top} \Sigma^{-1} A_{k}+\tau\sum_{i:(i,t)\in\Omega}\boldsymbol{w}_{i}\boldsymbol{w}_{i}^\top+\Sigma^{-1}\right)^{-1}, \\
\boldsymbol{\mu}_{t}^{*}&=\Sigma_{t}^{*}\left(\sum_{k=1, h_{d}<t+h_{k} \leq T}^{d} A_{k}^{\top} \Sigma^{-1} \boldsymbol{\psi}_{t+h_{k}}+\tau\sum_{i:(i,t)\in\Omega}\boldsymbol{w}_{i}y_{it}+\Sigma^{-1}\sum_{k=1}^{d}A_{k}\boldsymbol{x}_{t-h_k}\right), \\
\end{aligned}
\end{equation}
where
$$\boldsymbol{\psi}_{t+h_k}=\boldsymbol{x}_{t+h_k}-\sum_{l=1,l\neq k}^{d}A_{l}\boldsymbol{x}_{t+h_k-h_l}.$$
```python
def sample_factor_x(tau_sparse_mat, tau_ind, time_lags, W, X, A, Lambda_x):
"""Sampling T-by-R factor matrix X."""
dim2, rank = X.shape
tmax = np.max(time_lags)
tmin = np.min(time_lags)
d = time_lags.shape[0]
A0 = np.dstack([A] * d)
for k in range(d):
A0[k * rank : (k + 1) * rank, :, k] = 0
mat0 = Lambda_x @ A.T
mat1 = np.einsum('kij, jt -> kit', A.reshape([d, rank, rank]), Lambda_x)
mat2 = np.einsum('kit, kjt -> ij', mat1, A.reshape([d, rank, rank]))
var1 = W.T
var2 = kr_prod(var1, var1)
var3 = (var2 @ tau_ind).reshape([rank, rank, dim2]) + Lambda_x[:, :, None]
var4 = var1 @ tau_sparse_mat
for t in range(dim2):
Mt = np.zeros((rank, rank))
Nt = np.zeros(rank)
Qt = mat0 @ X[t - time_lags, :].reshape(rank * d)
index = list(range(0, d))
if t >= dim2 - tmax and t < dim2 - tmin:
index = list(np.where(t + time_lags < dim2))[0]
elif t < tmax:
Qt = np.zeros(rank)
index = list(np.where(t + time_lags >= tmax))[0]
if t < dim2 - tmin:
Mt = mat2.copy()
temp = np.zeros((rank * d, len(index)))
n = 0
for k in index:
temp[:, n] = X[t + time_lags[k] - time_lags, :].reshape(rank * d)
n += 1
temp0 = X[t + time_lags[index], :].T - np.einsum('ijk, ik -> jk', A0[:, :, index], temp)
Nt = np.einsum('kij, jk -> i', mat1[index, :, :], temp0)
var3[:, :, t] = var3[:, :, t] + Mt
if t < tmax:
var3[:, :, t] = var3[:, :, t] - Lambda_x + np.eye(rank)
X[t, :] = mvnrnd_pre(solve(var3[:, :, t], var4[:, t] + Nt + Qt), var3[:, :, t])
return X
```
#### 4) Sampling Precision $\tau$
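In the implementation below the precision is row-wise, one $\tau_i$ per series $i$. With the conjugate Gamma prior, the full conditional is again Gamma:
\begin{equation}
\tau_i \sim \text{Gamma}\left(\alpha + \frac{1}{2}\left|\Omega_i\right|,\; \beta + \frac{1}{2}\sum_{t:(i,t)\in\Omega}\left(y_{i,t} - \boldsymbol{w}_i^\top\boldsymbol{x}_t\right)^2\right),
\end{equation}
in the shape-rate parameterization, where $\Omega_i$ denotes the observed entries of row $i$. The code draws `np.random.gamma(shape, scale = 1/rate)` with the non-informative choice $\alpha = \beta = 10^{-6}$.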
```python
def sample_precision_tau(sparse_mat, mat_hat, ind):
var_alpha = 1e-6 + 0.5 * np.sum(ind, axis = 1)
var_beta = 1e-6 + 0.5 * np.sum(((sparse_mat - mat_hat) ** 2) * ind, axis = 1)
return np.random.gamma(var_alpha, 1 / var_beta)
```
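For later reference, the imputation accuracy is reported with the mean absolute percentage error and root mean square error over the held-out entries:
\begin{equation}
\text{MAPE} = \frac{1}{n}\sum_{i=1}^{n}\frac{\left|y_i - \hat{y}_i\right|}{y_i}, \qquad \text{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}.
\end{equation}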
```python
def compute_mape(var, var_hat):
return np.sum(np.abs(var - var_hat) / var) / var.shape[0]
def compute_rmse(var, var_hat):
return np.sqrt(np.sum((var - var_hat) ** 2) / var.shape[0])
```
#### 5) BTMF Implementation
```python
def BTMF(dense_mat, sparse_mat, init, rank, time_lags, burn_iter, gibbs_iter, multi_steps = 1):
"""Bayesian Temporal Matrix Factorization, BTMF."""
dim1, dim2 = sparse_mat.shape
d = time_lags.shape[0]
W = init["W"]
X = init["X"]
pos_test = np.where((dense_mat != 0) & (sparse_mat == 0))
dense_test = dense_mat[pos_test]
del dense_mat
ind = sparse_mat != 0
pos_obs = np.where(ind)
tau = np.ones(dim1)
W_plus = np.zeros((dim1, rank))
X_new_plus = np.zeros((dim2 + multi_steps, rank))
A_plus = np.zeros((rank * d, rank))
temp_hat = np.zeros(len(pos_test[0]))
show_iter = 200
mat_hat_plus = np.zeros((dim1, dim2))
for it in range(burn_iter + gibbs_iter):
tau_ind = tau[:, None] * ind
tau_sparse_mat = tau[:, None] * sparse_mat
W = sample_factor_w(tau_sparse_mat, tau_ind, W, X, tau, beta0 = 1, vargin = 0)
A, Sigma = sample_var_coefficient(X, time_lags)
X = sample_factor_x(tau_sparse_mat, tau_ind, time_lags, W, X, A, inv(Sigma))
mat_hat = W @ X.T
tau = sample_precision_tau(sparse_mat, mat_hat, ind)
temp_hat += mat_hat[pos_test]
if (it + 1) % show_iter == 0 and it < burn_iter:
temp_hat = temp_hat / show_iter
print('Iter: {}'.format(it + 1))
print('MAPE: {:.6}'.format(compute_mape(dense_test, temp_hat)))
print('RMSE: {:.6}'.format(compute_rmse(dense_test, temp_hat)))
temp_hat = np.zeros(len(pos_test[0]))
print()
X_new = np.zeros((dim2 + multi_steps, rank))
if it + 1 > burn_iter:
W_plus += W
A_plus += A
X_new[: dim2, :] = X.copy()
if multi_steps == 1:
X_new[dim2, :] = A.T @ X_new[dim2 - time_lags, :].reshape(rank * d)
elif multi_steps > 1:
for t0 in range(multi_steps):
X_new[dim2 + t0, :] = A.T @ X_new[dim2 + t0 - time_lags, :].reshape(rank * d)
X_new_plus += X_new
mat_hat_plus += mat_hat
mat_hat = mat_hat_plus / gibbs_iter
W = W_plus / gibbs_iter
X_new = X_new_plus / gibbs_iter
A = A_plus / gibbs_iter
print('Imputation MAPE: {:.6}'.format(compute_mape(dense_test, mat_hat[pos_test])))
print('Imputation RMSE: {:.6}'.format(compute_rmse(dense_test, mat_hat[pos_test])))
print()
return mat_hat, W, X_new, A
```
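Since `X_new` returned by `BTMF` contains `multi_steps` extra temporal factors beyond the training window, out-of-sample forecasts can be read off directly from the returned factors. A minimal sketch (the helper name is ours):
```python
def btmf_forecast(W, X_new, steps):
    """Read the last `steps` out-of-sample predictions off the BTMF factors."""
    return W @ X_new[-steps:, :].T  # shape (N, steps)

# e.g., after mat_hat, W, X_new, A = BTMF(..., multi_steps = 1):
# one_step_ahead = btmf_forecast(W, X_new, 1)
```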
# PeMS-4W
```python
import numpy as np
import pandas as pd
np.random.seed(1000)
data = pd.read_csv('../datasets/California-data-set/pems-4w.csv', header = None)
dense_mat = data.values
random_mat = ten2mat(np.random.rand(data.values.shape[0], 288, 4 * 7), 0)
del data
missing_rate = 0.3
### Random missing (RM) scenario:
sparse_mat = np.multiply(dense_mat, np.round(random_mat + 0.5 - missing_rate))
```
```python
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 288])
init = {"W": 0.1 * np.random.randn(dim1, rank), "X": 0.1 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X_new, A = BTMF(dense_mat, sparse_mat, init, rank, time_lags, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %.2f minutes'%((end - start)/60.0))
```
```python
import numpy as np
import pandas as pd
np.random.seed(1000)
data = pd.read_csv('../datasets/California-data-set/pems-4w.csv', header = None)
dense_mat = data.values
random_mat = ten2mat(np.random.rand(data.values.shape[0], 288, 4 * 7), 0)
del data
missing_rate = 0.7
### Random missing (RM) scenario:
sparse_mat = np.multiply(dense_mat, np.round(random_mat + 0.5 - missing_rate))
```
```python
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 288])
init = {"W": 0.1 * np.random.randn(dim1, rank), "X": 0.1 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X_new, A = BTMF(dense_mat, sparse_mat, init, rank, time_lags, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %.2f minutes'%((end - start)/60.0))
```
```python
import numpy as np
import pandas as pd
np.random.seed(1000)
data = pd.read_csv('../datasets/California-data-set/pems-4w.csv', header = None)
dense_mat = data.values
dense_tensor = mat2ten(dense_mat, np.array([data.values.shape[0], 288, 4 * 7]), 0)
random_matrix = np.random.rand(data.values.shape[0], 4 * 7)
missing_rate = 0.3
### Non-random missing (NM) scenario:
binary_tensor = np.zeros(dense_tensor.shape)
for i1 in range(dense_tensor.shape[0]):
for i2 in range(dense_tensor.shape[2]):
binary_tensor[i1, :, i2] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
sparse_mat = ten2mat(np.multiply(dense_tensor, binary_tensor), 0)
del data, dense_tensor, binary_tensor
```
```python
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 288])
init = {"W": 0.1 * np.random.randn(dim1, rank), "X": 0.1 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X_new, A = BTMF(dense_mat, sparse_mat, init, rank, time_lags, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %.2f minutes'%((end - start)/60.0))
```
```python
import numpy as np
import pandas as pd
np.random.seed(1000)
data = pd.read_csv('../datasets/California-data-set/pems-4w.csv', header = None)
dense_mat = data.values
dense_tensor = mat2ten(dense_mat, np.array([data.values.shape[0], 288, 4 * 7]), 0)
random_matrix = np.random.rand(data.values.shape[0], 4 * 7)
missing_rate = 0.7
### Non-random missing (NM) scenario:
binary_tensor = np.zeros(dense_tensor.shape)
for i1 in range(dense_tensor.shape[0]):
for i2 in range(dense_tensor.shape[2]):
binary_tensor[i1, :, i2] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
sparse_mat = ten2mat(np.multiply(dense_tensor, binary_tensor), 0)
del data, dense_tensor, binary_tensor
```
```python
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 288])
init = {"W": 0.1 * np.random.randn(dim1, rank), "X": 0.1 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X_new, A = BTMF(dense_mat, sparse_mat, init, rank, time_lags, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %.2f minutes'%((end - start)/60.0))
```
# PeMS-8W
```python
import numpy as np
import pandas as pd
np.random.seed(1000)
data = pd.read_csv('../datasets/California-data-set/pems-8w.csv', header = None)
dense_mat = data.values
random_mat = ten2mat(np.random.rand(data.values.shape[0], 288, 8 * 7), 0)
del data
missing_rate = 0.3
### Random missing (RM) scenario:
sparse_mat = np.multiply(dense_mat, np.round(random_mat + 0.5 - missing_rate))
```
```python
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 288])
init = {"W": 0.1 * np.random.randn(dim1, rank), "X": 0.1 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X_new, A = BTMF(dense_mat, sparse_mat, init, rank, time_lags, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %.2f minutes'%((end - start)/60.0))
```
```python
import numpy as np
import pandas as pd
np.random.seed(1000)
data = pd.read_csv('../datasets/California-data-set/pems-8w.csv', header = None)
dense_mat = data.values
random_mat = ten2mat(np.random.rand(data.values.shape[0], 288, 8 * 7), 0)
del data
missing_rate = 0.7
### Random missing (RM) scenario:
sparse_mat = np.multiply(dense_mat, np.round(random_mat + 0.5 - missing_rate))
```
```python
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 288])
init = {"W": 0.1 * np.random.randn(dim1, rank), "X": 0.1 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X_new, A = BTMF(dense_mat, sparse_mat, init, rank, time_lags, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %.2f minutes'%((end - start)/60.0))
```
```python
import numpy as np
import pandas as pd
np.random.seed(1000)
data = pd.read_csv('../datasets/California-data-set/pems-8w.csv', header = None)
dense_mat = data.values
dense_tensor = mat2ten(dense_mat, np.array([data.values.shape[0], 288, 8 * 7]), 0)
random_matrix = np.random.rand(data.values.shape[0], 8 * 7)
missing_rate = 0.3
### Non-random missing (NM) scenario:
binary_tensor = np.zeros(dense_tensor.shape)
for i1 in range(dense_tensor.shape[0]):
for i2 in range(dense_tensor.shape[2]):
binary_tensor[i1, :, i2] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
sparse_mat = ten2mat(np.multiply(dense_tensor, binary_tensor), 0)
del data, dense_tensor, binary_tensor
```
```python
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 288])
init = {"W": 0.1 * np.random.randn(dim1, rank), "X": 0.1 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X_new, A = BTMF(dense_mat, sparse_mat, init, rank, time_lags, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %.2f minutes'%((end - start)/60.0))
```
```python
import numpy as np
import pandas as pd
np.random.seed(1000)
data = pd.read_csv('../datasets/California-data-set/pems-8w.csv', header = None)
dense_mat = data.values
dense_tensor = mat2ten(dense_mat, np.array([data.values.shape[0], 288, 8 * 7]), 0)
random_matrix = np.random.rand(data.values.shape[0], 8 * 7)
missing_rate = 0.7
### Non-random missing (NM) scenario:
binary_tensor = np.zeros(dense_tensor.shape)
for i1 in range(dense_tensor.shape[0]):
for i2 in range(dense_tensor.shape[2]):
binary_tensor[i1, :, i2] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
sparse_mat = ten2mat(np.multiply(dense_tensor, binary_tensor), 0)
del data, dense_tensor, binary_tensor
```
```python
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 288])
init = {"W": 0.1 * np.random.randn(dim1, rank), "X": 0.1 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X_new, A = BTMF(dense_mat, sparse_mat, init, rank, time_lags, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %.2f minutes'%((end - start)/60.0))
```
# London 1-M data
```python
import numpy as np
np.random.seed(1000)
missing_rate = 0.3
dense_mat = np.load('../datasets/London-data-set/hourly_speed_mat.npy')
binary_mat = dense_mat.copy()
binary_mat[binary_mat != 0] = 1
pos = np.where(np.sum(binary_mat, axis = 1) > 0.7 * binary_mat.shape[1])
dense_mat = dense_mat[pos[0], :]
## Random missing (RM)
random_mat = np.random.rand(dense_mat.shape[0], dense_mat.shape[1])
binary_mat = np.round(random_mat + 0.5 - missing_rate)
sparse_mat = np.multiply(dense_mat, binary_mat)
```
```python
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 20
time_lags = np.array([1, 2, 24])
init = {"W": 0.1 * np.random.randn(dim1, rank), "X": 0.1 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X_new, A = BTMF(dense_mat, sparse_mat, init, rank, time_lags, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
```python
import numpy as np
np.random.seed(1000)
missing_rate = 0.7
dense_mat = np.load('../datasets/London-data-set/hourly_speed_mat.npy')
binary_mat = dense_mat.copy()
binary_mat[binary_mat != 0] = 1
pos = np.where(np.sum(binary_mat, axis = 1) > 0.7 * binary_mat.shape[1])
dense_mat = dense_mat[pos[0], :]
## Random missing (RM)
random_mat = np.random.rand(dense_mat.shape[0], dense_mat.shape[1])
binary_mat = np.round(random_mat + 0.5 - missing_rate)
sparse_mat = np.multiply(dense_mat, binary_mat)
```
```python
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 20
time_lags = np.array([1, 2, 24])
init = {"W": 0.1 * np.random.randn(dim1, rank), "X": 0.1 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X_new, A = BTMF(dense_mat, sparse_mat, init, rank, time_lags, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
```python
import numpy as np
np.random.seed(1000)
missing_rate = 0.3
dense_mat = np.load('../datasets/London-data-set/hourly_speed_mat.npy')
binary_mat = dense_mat.copy()
binary_mat[binary_mat != 0] = 1
pos = np.where(np.sum(binary_mat, axis = 1) > 0.7 * binary_mat.shape[1])
dense_mat = dense_mat[pos[0], :]
## Non-random missing (NM)
binary_mat = np.zeros(dense_mat.shape)
random_mat = np.random.rand(dense_mat.shape[0], 30)
for i1 in range(dense_mat.shape[0]):
for i2 in range(30):
binary_mat[i1, i2 * 24 : (i2 + 1) * 24] = np.round(random_mat[i1, i2] + 0.5 - missing_rate)
sparse_mat = np.multiply(dense_mat, binary_mat)
```
```python
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 20
time_lags = np.array([1, 2, 24])
init = {"W": 0.1 * np.random.randn(dim1, rank), "X": 0.1 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X_new, A = BTMF(dense_mat, sparse_mat, init, rank, time_lags, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
```python
import numpy as np
np.random.seed(1000)
missing_rate = 0.7
dense_mat = np.load('../datasets/London-data-set/hourly_speed_mat.npy')
binary_mat = dense_mat.copy()
binary_mat[binary_mat != 0] = 1
pos = np.where(np.sum(binary_mat, axis = 1) > 0.7 * binary_mat.shape[1])
dense_mat = dense_mat[pos[0], :]
## Non-random missing (NM)
binary_mat = np.zeros(dense_mat.shape)
random_mat = np.random.rand(dense_mat.shape[0], 30)
for i1 in range(dense_mat.shape[0]):
for i2 in range(30):
binary_mat[i1, i2 * 24 : (i2 + 1) * 24] = np.round(random_mat[i1, i2] + 0.5 - missing_rate)
sparse_mat = np.multiply(dense_mat, binary_mat)
```
```python
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 20
time_lags = np.array([1, 2, 24])
init = {"W": 0.1 * np.random.randn(dim1, rank), "X": 0.1 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X_new, A = BTMF(dense_mat, sparse_mat, init, rank, time_lags, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
# Guangzhou 2-M data
```python
import scipy.io
tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')['tensor']
random_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_tensor.mat')['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.3
## Random missing (RM)
binary_mat = (np.round(random_tensor + 0.5 - missing_rate)
.reshape([random_tensor.shape[0], random_tensor.shape[1] * random_tensor.shape[2]]))
sparse_mat = np.multiply(dense_mat, binary_mat)
```
```python
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 80
time_lags = np.array([1, 2, 144])
init = {"W": 0.01 * np.random.randn(dim1, rank), "X": 0.01 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X_new, A = BTMF(dense_mat, sparse_mat, init, rank, time_lags, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
```python
import scipy.io
tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')['tensor']
random_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_tensor.mat')['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.7
## Random missing (RM)
binary_mat = (np.round(random_tensor + 0.5 - missing_rate)
.reshape([random_tensor.shape[0], random_tensor.shape[1] * random_tensor.shape[2]]))
sparse_mat = np.multiply(dense_mat, binary_mat)
```
```python
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 80
time_lags = np.array([1, 2, 144])
init = {"W": 0.01 * np.random.randn(dim1, rank), "X": 0.01 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X_new, A = BTMF(dense_mat, sparse_mat, init, rank, time_lags, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
```python
import scipy.io
tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')['tensor']
random_matrix = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_matrix.mat')['random_matrix']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.3
## Non-random missing (NM)
binary_tensor = np.zeros(tensor.shape)
for i1 in range(tensor.shape[0]):
for i2 in range(tensor.shape[1]):
binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
binary_mat = binary_tensor.reshape([binary_tensor.shape[0], binary_tensor.shape[1] * binary_tensor.shape[2]])
sparse_mat = np.multiply(dense_mat, binary_mat)
```
```python
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 144])
init = {"W": 0.01 * np.random.randn(dim1, rank), "X": 0.01 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X_new, A = BTMF(dense_mat, sparse_mat, init, rank, time_lags, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
```python
import scipy.io
tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')['tensor']
random_matrix = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_matrix.mat')['random_matrix']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.7
## Non-random missing (NM)
binary_tensor = np.zeros(tensor.shape)
for i1 in range(tensor.shape[0]):
for i2 in range(tensor.shape[1]):
binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
binary_mat = binary_tensor.reshape([binary_tensor.shape[0], binary_tensor.shape[1] * binary_tensor.shape[2]])
sparse_mat = np.multiply(dense_mat, binary_mat)
```
```python
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 144])
init = {"W": 0.01 * np.random.randn(dim1, rank), "X": 0.01 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X_new, A = BTMF(dense_mat, sparse_mat, init, rank, time_lags, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
### License
<div class="alert alert-block alert-danger">
<b>This work is released under the MIT license.</b>
</div>
|
06d338059eadd75647c4b5477ac60a64794b6b9e
| 49,820 |
ipynb
|
Jupyter Notebook
|
baselines/Large-Scale-BTMF-imputer.ipynb
|
vishalbelsare/tensor-learning
|
3c483bba1945dc55556c6bea3575d112739be1c7
|
[
"MIT"
] | 138 |
2019-06-08T03:04:13.000Z
|
2022-03-25T16:37:46.000Z
|
baselines/Large-Scale-BTMF-imputer.ipynb
|
mouyi321/tensor-learning
|
3c483bba1945dc55556c6bea3575d112739be1c7
|
[
"MIT"
] | 2 |
2019-10-16T00:54:14.000Z
|
2021-03-08T06:48:24.000Z
|
baselines/Large-Scale-BTMF-imputer.ipynb
|
mouyi321/tensor-learning
|
3c483bba1945dc55556c6bea3575d112739be1c7
|
[
"MIT"
] | 34 |
2019-07-02T12:48:40.000Z
|
2022-03-23T19:52:12.000Z
| 39.352291 | 629 | 0.54731 | true | 11,906 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.841826 | 0.749087 | 0.630601 |
__label__eng_Latn
| 0.211308 | 0.303428 |
```
%pylab inline
```
Here are some basic — and some more surprising — features of the IPython Notebook
that has been used to build this collection of astronomy examples.
```
>>> n = 0
>>> for i in range(5):
... n += i
...
>>> print n
```
```
# Exception tracebacks are attractive, detailed!
plot([1, 2, 3], [4, 5, 'a'])
```
```
!pwd
```
```
!cal 1 2013
```
```
files = !ls /usr/bin
```
```
%load spectral_classification.py
```
```
# See examples at http://matplotlib.org/gallery.html
%load http://matplotlib.org/mpl_examples/api/radar_chart.py
```
```
%timeit '-'.join(('abc', 'def', 'ghi'))
%timeit '-'.join(['abc', 'def', 'ghi'])
```
```
from IPython.display import Image, HTML, Latex, YouTubeVideo
```
```
f = 'venv/lib/python2.7/site-packages/pyface/images/about.jpg'
Image(filename=f)
```
```
HTML('')
```
```
YouTubeVideo('F4rFuIb1Ie4') # Fernando Pérez at PyConCA
```
```
from sympy.interactive import init_printing
init_printing()
from sympy import *
x, y = symbols('x y')
eq = ((x + y)**2 * (x + 1))
```
```
eq
```
```
expand(eq)
```
```
Latex(r'The Taylor series for $e^x$ is:'
r'$$\sum_{x=0}^\infty {x^n / n!}$$')
```
## XKCD Style
Recently, @jakevdp decided that his example plots looked too serious,
and wanted them to look more like hand-drawn plots in xkcd.
http://jakevdp.github.com/blog/2012/10/07/xkcd-style-plots-in-matplotlib/
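A minimal sketch of the idea (matplotlib 1.3+ later shipped it as `plt.xkcd()`; treat the exact API as an assumption for older versions):
```
import matplotlib.pyplot as plt
with plt.xkcd():  # hand-drawn style for everything in this block
    plt.plot([1, 2, 3], [4, 9, 6])
    plt.title('Now in xkcd style')
```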
|
709d5628fe2369a976a539b5ea87dc5ee357d476
| 4,821 |
ipynb
|
Jupyter Notebook
|
An-Introduction/Notebook-Features.ipynb
|
The-Assembly/astronomy-notebooks
|
e722d26a2982a4ba1332f72cdfb46b020403cc6c
|
[
"MIT"
] | null | null | null |
An-Introduction/Notebook-Features.ipynb
|
The-Assembly/astronomy-notebooks
|
e722d26a2982a4ba1332f72cdfb46b020403cc6c
|
[
"MIT"
] | null | null | null |
An-Introduction/Notebook-Features.ipynb
|
The-Assembly/astronomy-notebooks
|
e722d26a2982a4ba1332f72cdfb46b020403cc6c
|
[
"MIT"
] | 1 |
2021-04-05T20:52:24.000Z
|
2021-04-05T20:52:24.000Z
| 21.052402 | 102 | 0.446795 | true | 454 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.709019 | 0.841826 | 0.59687 |
__label__eng_Latn
| 0.580035 | 0.22506 |
## Reconstructing Horndeski theories via the Gaussian Process: $H(z)$ reconstruction
This is part 1 of a two-part notebook on using the Gaussian process (GP) to reconstruct Horndeski theories ([2105.12970](https://arxiv.org/abs/2105.12970)). For this notebook, we reconstruct the Hubble function $H(z)$ given the combined Pantheon/MCT, cosmic chronometers (CC), and baryon acoustic oscillations (BAO) data. The output will be directly used in part 2 on model building.
References to the data and python packages used in this work can be found at the end of this notebook.
### 0. Datasets: Pantheon/MCT, CC, BAO
To start, we import the datasets (Pantheon/MCT, CC, BAO) which will be used for the reconstruction.
```python
%matplotlib inline
import numpy as np
from numpy import loadtxt, savetxt
from scipy.constants import c
from matplotlib import pyplot as plt
c_kms = c/1000 # speed of light in km/s
# load pantheon + mct H(z) data
loc_sn = 'pantheon_mct.txt'
loc_sn_corr = 'pantheon_mct_corr.txt'
sn_data = loadtxt(loc_sn)
sn_corr = loadtxt(loc_sn_corr)
# setup snia observations
z_sn = sn_data[:, 0]
Ez_sn = sn_data[:, 1]
sigEz_sn_stat = sn_data[:, 2]
# construct snia cov matrix
covEz_sn_corr = np.diag(sigEz_sn_stat) @ sn_corr @ np.diag(sigEz_sn_stat)
# load pantheon compressed m(z) data
loc_lcparam = 'lcparam_DS17f.txt'
loc_lcparam_sys = 'sys_DS17f.txt'
lcparam = loadtxt(loc_lcparam, usecols = (1, 4, 5))
lcparam_sys = loadtxt(loc_lcparam_sys, skiprows = 1)
# setup pantheon samples
z_ps = lcparam[:, 0]
mz_ps = lcparam[:, 1]
sigmz_ps = lcparam[:, 2]
# pantheon samples systematics
covmz_ps_sys = lcparam_sys.reshape(40, 40)
covmz_ps_tot = covmz_ps_sys + np.diag(sigmz_ps**2)
# load cc dataset
loc_cc = 'cc_data.txt'
cc_data = loadtxt(loc_cc)
# setup cc observations
z_cc = cc_data[:, 0]
Hz_cc = cc_data[:, 1]
sigHz_cc = cc_data[:, 2]
# load bao's
loc_bao = 'bao_data.txt'
bao_data = loadtxt(loc_bao)
z_bao = bao_data[:, 0]
dmrs_bao = bao_data[:, 1] # dM/rs, rs = sound horizon
sigdmrs_bao = bao_data[:, 2]
dhrs_bao = bao_data[:, 3] # dH/rs
sigdhrs_bao = bao_data[:, 4]
```
The different datasets are visualized below.
```python
# pantheon/mct
plt.errorbar(z_sn, Ez_sn,
yerr = np.sqrt(np.diag(covEz_sn_corr)),
fmt = 'k^', markersize = 4,
ecolor = 'red', elinewidth = 2, capsize = 2)
plt.xlabel('$z$')
plt.ylabel('$E(z)$')
plt.show()
# pantheon samples
plt.errorbar(np.log(z_ps), mz_ps,
yerr = np.sqrt(np.diag(covmz_ps_tot)),
fmt = 'k^', markersize = 4,
ecolor = 'red', elinewidth = 2, capsize = 2)
plt.xlabel('$\ln(z)$')
plt.ylabel('$m(z)$')
plt.show()
# cosmic chronometers
plt.errorbar(z_cc, Hz_cc, yerr = sigHz_cc,
fmt = 'bo', markersize = 2,
ecolor = 'red', elinewidth = 2, capsize = 2)
plt.xlabel('$z$')
plt.ylabel('$H(z)$')
plt.show()
# bao
fig, ax = plt.subplots(1,2, figsize = (15,4))
ax[0].errorbar(z_bao, dmrs_bao, yerr = sigdmrs_bao,
fmt = 'k*', markersize = 4,
ecolor = 'red', elinewidth = 2, capsize = 2)
ax[0].set_xlabel('$z$')
ax[0].set_ylabel('$dM(z)/r_s$')
ax[1].errorbar(z_bao, dhrs_bao, yerr = sigdhrs_bao,
fmt = 'k*', markersize = 2,
ecolor = 'red', elinewidth = 2, capsize = 2)
ax[1].set_xlabel('$z$')
ax[1].set_ylabel('$dH(z)/r_s$')
plt.show()
```
### 1. Reconstructing $H(z)$
In this section, we shall use the combined datasets above to reconstruct the Hubble function $H(z)$. We shall also further complement the above datasets with an $H_0$ prior. But, first, let us import the GP function and the RBF kernel.
```python
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import (RBF, ConstantKernel)
```
In what follows, we will use the RBF kernel, also commonly known as the squared exponential kernel.
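For reference, this kernel (including the constant amplitude factor supplied by `ConstantKernel`) is
\begin{equation}
k(z, z') = \sigma_f^2 \exp\left(-\frac{(z - z')^2}{2\ell^2}\right),
\end{equation}
where the signal variance $\sigma_f^2$ and length scale $\ell$ are hyperparameters learned by maximizing the log-marginal likelihood.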
```python
kernels = {"SquaredExponential": ConstantKernel()*RBF()}
```
Select the kernel to be used for the reconstruction...
```python
kern_name = "SquaredExponential"
kernel = kernels[kern_name]
```
The $H_0$ priors we shall consider for the reconstruction are given below.
```python
H0_priors = {'R19': {'ave': 74.03, 'std': 1.42},
'TRGB': {'ave': 69.8, 'std': 1.9},
'P18': {'ave': 67.4, 'std': 0.5}}
```
The Gaussian process is now performed in the next few lines. We begin with the CC dataset, with the $H_0$ prior attached as a data point at $z \approx 0$.
```python
# set GP rec parameters
z_min = 1e-5
z_max = 2
n_div = 50
for H0_prior in H0_priors:
H0_ave = H0_priors[H0_prior]['ave']
H0_std = H0_priors[H0_prior]['std']
z_cc_prior = np.append(np.array([z_min]), z_cc)
Hz_cc_prior = np.append(np.array([H0_ave]), Hz_cc)
sigHz_cc_prior = np.append(np.array([H0_std]), sigHz_cc)
gp = GaussianProcessRegressor(kernel = kernel,
alpha = sigHz_cc_prior**2,
n_restarts_optimizer = 10)
gp.fit(z_cc_prior.reshape(-1, 1), Hz_cc_prior)
print("\nLearned kernel: %s" % gp.kernel_)
print("Log-marginal-likelihood: %.3f"
% gp.log_marginal_likelihood(gp.kernel_.theta))
z_cc_rec = np.linspace(z_min, z_max, n_div)
Hz_cc_rec, sigHz_cc_rec = gp.predict(z_cc_rec.reshape(-1, 1),
return_std=True)
fig = plt.figure()
plt.errorbar(z_cc_prior, Hz_cc_prior, yerr = sigHz_cc_prior,
label = 'CC', fmt = 'kx', markersize = 4,
ecolor = 'red', elinewidth = 2, capsize = 2)
plt.plot(z_cc_rec, Hz_cc_rec, 'b-', label = 'mean')
plt.fill_between(z_cc_rec, Hz_cc_rec - sigHz_cc_rec,
Hz_cc_rec + sigHz_cc_rec,
alpha = .5, facecolor = 'b', edgecolor='None',
label= r'$1\sigma$')
plt.fill_between(z_cc_rec, Hz_cc_rec - 2*sigHz_cc_rec,
Hz_cc_rec + 2*sigHz_cc_rec,
alpha = .2, facecolor = 'b', edgecolor='None',
label= r'$2\sigma$')
plt.title(H0_prior)
plt.xlabel('$z$')
plt.ylabel('$H(z)$')
plt.legend(loc = 'lower right', prop = {'size': 9.5})
plt.xlim(min(z_cc_rec),
max(z_cc_rec))
plt.show()
```
The Pantheon/MCT dataset comes with a correlation matrix, so a fully consistent use of the Pantheon/MCT dataset on its own would require drawing many samples from the corresponding mean and covariance matrix of $E(z)$. However, since it forms only a small part of the combined CC + Pantheon/MCT + BAO dataset, we will simply use its mean for simplicity.
Moving on, the BAO $H(z)$ function can be obtained without relying on the sound horizon $r_s$ by using the Pantheon $m(z)$ samples. In particular, we take the ratio of $dM(z)/dH(z)$ and use the distance duality relation (DDR), $d_L(z) = d_A(z)(1 + z)^2$ where $d_L(z)$ and $d_A(z)$ are the luminosity distance and angular diameter distance, respectively, to obtain
\begin{equation}
H(z) = \dfrac{ dM(z) }{ dH(z) } 10^{(25 + M - m(z))/5} \ [ c / \text{Mpc} ].
\end{equation}
This requires a calibrated absolute magnitude $M$, which we determine by sampling over the $z < 0.1$ data points in the compressed Pantheon samples. This assumes that at such very low redshifts the cosmological model dependence should drop out, so that the $\Lambda$CDM model can be taken as a reasonably fair assumption. Furthermore, in using the DDR, we also restrict our attention to spatially flat and isotropic cosmologies.
The $\Lambda$CDM model is prepared below for the sampling. We import ``cobaya`` and ``getdist`` for the sampling and its statistical analysis.
```python
from scipy.integrate import quad
from cobaya.run import run
def E_lcdm(a, om0):
    '''returns the rescaled Hubble function E = H/H0 of lcdm
    input:
    a = scale factor
    om0 = matter density parameter (ol0 = 1 - om0 is derived)
'''
ol0 = 1 - om0
return np.sqrt((om0/(a**3)) + ol0)
def E_inv_lcdm(z, om0):
a = 1/(z + 1)
return 1/E_lcdm(a, om0)
def dl_lcdm(z, om0):
'''returns dL = luminosity distance*H0/c'''
rz = quad(E_inv_lcdm, 0, z, args = (om0))[0]
return (1 + z)*rz
def m_lcdm(z, H0, om0, M):
'''returns the apparent magnitude m'''
return 5*np.log10(100000*(c_kms/H0)*dl_lcdm(z, om0)) + M
# prepare the log-likelihood, consider only z < 0.1 points
nr = 29
valid = [r for r in range(covmz_ps_tot.shape[0]) if r not in
np.arange(len(z_ps) - nr, len(z_ps))]
covmz_ps_red = covmz_ps_tot[valid][:, valid] # covariance matrix
invcovmz_ps_red = np.linalg.inv(covmz_ps_red) # inverse C
def loglike_lcdm(H0, om0, M):
'''returns the log-likelihood given (H0, om0, M)'''
m_th = np.array([m_lcdm(z, H0, om0, M)
for z in z_ps[:11]])
if om0 > 0:
noise = mz_ps[:11] - m_th
return -0.5*noise.T@invcovmz_ps_red@noise
    else:  # rule out nonpositive om0
        return -np.inf
```
Now, we sample over the parameter space $(H_0, \Omega_{m0}, M)$ using ``cobaya``.
*The next cell will run for about five minutes.* Optionally, skip it and proceed to the following cell if the output has already been generated in the folder *chains*.
```python
M_calib = {}
for H0_prior in H0_priors:
info_lcdm = {"likelihood": {"loglikeSNIa": loglike_lcdm}}
info_lcdm["params"] = {"H0": {"prior": {"min": 0, "max": 100},
"ref": {"min": 60, "max": 80},
"proposal": 0.5,
"latex": r"H_0"},
"om0": {"prior": {"min": -0.2, "max": 1},
"ref": {"min": 0.2, "max":0.4},
"proposal": 0.01,
"latex": r"\Omega_{m0}"},
"M": {"prior": {"min": -22, "max": -16},
"ref": {"min": -20, "max": -19},
"proposal": 0.01,
"latex": r"M"}}
def ol0_lcdm(om0):
'''returns the dark energy density parameter'''
return 1 - om0
info_lcdm["params"]["ol0"] = {"derived": ol0_lcdm,
"latex": r"\Omega_{\Lambda}"}
def H0_prior_loglike(H0):
'''H0_prior: assume Gaussian'''
H0_ave = H0_priors[H0_prior]['ave']
H0_std = H0_priors[H0_prior]['std']
return -0.5*((H0 - H0_ave)/H0_std)**2
info_lcdm["prior"] = {"H0_prior": H0_prior_loglike}
info_lcdm["sampler"] = {"mcmc": {"Rminus1_stop": 0.001,
"max_tries": 1000}}
# save mcmc chain
info_lcdm["output"] = 'chains/M_calib_' + H0_prior
# overwrite chain, if it exists
info_lcdm["force"] = True
# run MCMC
updated_info_lcdm, sampler_lcdm = run(info_lcdm)
```
*Note*: Since the computation only includes the $z < 0.1$ data points (the first 11 of the 40 points in the compressed Pantheon samples), it is to be expected that much less information is available about the geometry of the expansion, i.e., the confidence intervals on the $\Omega$'s are large. Nonetheless, the methodology is justified because the absolute magnitude $M$ (a nuisance parameter) is constrained to subpercent precision, since it depends only weakly on the dynamics of the cosmic expansion.
The calibrated $M$ for each of the $H_0$ priors is shown below.
```python
from getdist.mcsamples import loadMCSamples
import getdist.plots as gdplt
import os # requires *full path*
M_calib = {}
gdsamples_per_H0_prior = {}
for H0_prior in H0_priors:
folder_file = 'chains/M_calib_' + H0_prior
gdsamples = loadMCSamples(os.path.abspath(folder_file))
# get statistics
stats = gdsamples.getMargeStats()
M_ave = stats.parWithName("M").mean
M_std = stats.parWithName("M").err
M_calib[H0_prior] = {'ave': M_ave, 'std': M_std}
gdsamples_per_H0_prior[H0_prior] = gdsamples
# print calibrated M
for each in M_calib:
print(each, 'M = ', M_calib[each]['ave'], '+/-', M_calib[each]['std'])
```
[root] *WARNING* fine_bins not large enough to well sample smoothing scale - minuslogprior
[root] *WARNING* fine_bins not large enough to well sample smoothing scale - minuslogprior__H0_prior
[root] *WARNING* fine_bins not large enough to well sample smoothing scale - chi2
[root] *WARNING* fine_bins not large enough to well sample smoothing scale - chi2__loglikeSNIa
[root] *WARNING* fine_bins not large enough to well sample smoothing scale - chi2
[root] *WARNING* fine_bins not large enough to well sample smoothing scale - chi2__loglikeSNIa
[root] *WARNING* fine_bins not large enough to well sample smoothing scale - minuslogprior
[root] *WARNING* fine_bins not large enough to well sample smoothing scale - minuslogprior__H0_prior
[root] *WARNING* fine_bins not large enough to well sample smoothing scale - chi2
[root] *WARNING* fine_bins not large enough to well sample smoothing scale - chi2__loglikeSNIa
R19 M = -19.24672146997805 +/- 0.047619802661079305
TRGB M = -19.374133105389465 +/- 0.06345405899637714
P18 M = -19.44973655096901 +/- 0.028209150641493812
The posteriors for each of the $H_0$ priors are superposed in the next plot.
```python
def plot_calib():
'''plot posterior of cosmological parameters'''
    gdsamples_R19 = gdsamples_per_H0_prior['R19']
gdsamples_TRGB = gdsamples_per_H0_prior['TRGB']
gdsamples_P18 = gdsamples_per_H0_prior['P18']
gdplot = gdplt.get_subplot_plotter()
    gdplot.triangle_plot([gdsamples_R19, gdsamples_TRGB, gdsamples_P18],
["H0", "M"],
contour_ls = ['-', '--', '-.'],
contour_lws = [1.5, 1.5, 1.5],
contour_colors = [('blue'),
('red'),
('green')],
filled = True,
legend_loc = 'upper right',
legend_labels = ['R19', 'TRGB', 'P18'])
gdplot = gdplt.get_subplot_plotter()
    gdplot.triangle_plot([gdsamples_R19, gdsamples_TRGB, gdsamples_P18],
["H0", "om0"],
contour_ls = ['-', '--', '-.'],
contour_lws = [1.5, 1.5, 1.5],
contour_colors = [('blue'),
('red'),
('green')],
filled = True,
legend_loc = 'upper right',
legend_labels = ['R19', 'TRGB', 'P18'])
plot_calib()
```
We now prepare the Hubble function from the BAO data and the reconstructed $m(z)$ of the Pantheon samples.
```python
def H_bao(z, dMrs, dHrs, m, M):
'''Hubble function at redshift z
dMrs = dM(z)/rs
dHrs = dH(z)/rs
rs = sound horizon
(m, M) come from dA(z) -> DDR -> dL (pantheon samples)'''
return c_kms*(dMrs/dHrs)*(1 + z)*(10**5)*(10**((M - m)/5))
def sigH_bao(z, dMrs, sigdMrs,
dHrs, sigdHrs, m, sigm, M, sigM):
'''uncertainty in the H_bao'''
jac_dM = (10**((25 + M - m)/5))*c_kms*(1 + z)/dHrs
jac_dH = (10**((25 + M - m)/5))*c_kms*dMrs*(1 + z)/(dHrs**2)
jac_m = (20000*(10**((M - m)/5))*c_kms*dMrs*(1 + z)* \
np.log(10))/dHrs
jac_M = jac_m
var_dM = (jac_dM*sigdMrs)**2
var_dH = (jac_dH*sigdHrs)**2
var_m = (jac_m*sigm)**2
var_M = (jac_M*sigM)**2
return np.sqrt(var_dM + var_dH + var_m + var_M)
```
The compressed Pantheon samples' $m(z)$ is reconstructed and plotted below.
```python
gp = GaussianProcessRegressor(kernel = kernel,
alpha = np.diag(covmz_ps_tot),
n_restarts_optimizer = 10)
gp.fit(np.log(z_ps).reshape(-1, 1), mz_ps)
print("\nLearned kernel: %s" % gp.kernel_)
print("Log-marginal-likelihood: %.3f"
% gp.log_marginal_likelihood(gp.kernel_.theta))
logz_ps_rec = np.log(np.linspace(z_min, z_max, n_div*4))
z_ps_rec = np.exp(logz_ps_rec)
mz_ps_rec, sigmz_ps_rec = \
gp.predict(logz_ps_rec.reshape(-1, 1),
return_std = True)
def mz_approx(z):
'''approximates the reconstruction mz_rec piecewise'''
delta_logz = list(abs(np.log(z) - logz_ps_rec))
min_index = delta_logz.index(min(delta_logz))
return mz_ps_rec[min_index], sigmz_ps_rec[min_index]
# plot in log(z)
fig = plt.figure()
plt.errorbar(np.log(z_ps), mz_ps,
yerr = np.sqrt(np.diag(covmz_ps_tot)),
fmt = 'ko', markersize = 5,
ecolor = 'red', elinewidth = 2,
capsize = 2, label = 'Pantheon')
plt.plot(logz_ps_rec, mz_ps_rec, 'b--', label = 'mean')
plt.fill_between(logz_ps_rec,
mz_ps_rec - sigmz_ps_rec,
mz_ps_rec + sigmz_ps_rec,
alpha = .4, facecolor = 'b',
edgecolor = 'None', label = r'1$\sigma$')
plt.fill_between(logz_ps_rec,
mz_ps_rec - 2*sigmz_ps_rec,
mz_ps_rec + 2*sigmz_ps_rec,
alpha = .2, facecolor = 'b',
edgecolor = 'None', label = r'2$\sigma$')
plt.xlabel(r'$\ln(z)$')
plt.ylabel('$m(z)$')
plt.legend(loc = 'upper left', prop = {'size': 9.5})
plt.xlim(np.log(z_min), np.log(z_max))
plt.ylim(10, 30)
plt.show()
# plot in z
fig = plt.figure()
plt.errorbar(z_ps, mz_ps,
yerr = np.sqrt(np.diag(covmz_ps_tot)),
fmt = 'ko', markersize = 5,
ecolor = 'red', elinewidth = 2,
capsize = 2, label = 'Pantheon')
plt.plot(z_ps_rec, mz_ps_rec, 'b--', label = 'mean')
plt.fill_between(z_ps_rec,
mz_ps_rec - sigmz_ps_rec,
mz_ps_rec + sigmz_ps_rec,
alpha = .4, facecolor = 'b',
edgecolor='None', label = r'1$\sigma$')
plt.fill_between(z_ps_rec,
mz_ps_rec - 2*sigmz_ps_rec,
mz_ps_rec + 2*sigmz_ps_rec,
alpha = .2, facecolor = 'b',
edgecolor='None', label = r'2$\sigma$')
plt.xlabel('$z$')
plt.ylabel('$m(z)$')
plt.legend(loc = 'lower right', prop = {'size': 9.5})
plt.xlim(z_min, z_max)
plt.ylim(10, 30)
plt.show()
```
To simplify the BAO reconstructions, we set up the function below to obtain the BAO $H(z)$ corresponding to a given $H_0$ prior.
```python
def setup_BAO(H0_prior):
'''prepares H(z) from BAO for a given H0 prior and calibrated M'''
M_ave = M_calib[H0_prior]['ave']
M_std = M_calib[H0_prior]['std']
# setup H(z) BAO data
Hz_bao = []
sigHz_bao = []
for i in np.arange(0, len(z_bao)):
m, sigm = mz_approx(z_bao[i])
Hz_i = H_bao(z = z_bao[i],
dMrs = dmrs_bao[i],
dHrs = dhrs_bao[i],
m = m,
M = M_ave)
sigHz_i = sigH_bao(z = z_bao[i],
dMrs = dmrs_bao[i],
sigdMrs = sigdmrs_bao[i],
dHrs = dhrs_bao[i],
sigdHrs = sigdhrs_bao[i],
m = m,
sigm = sigm,
M = M_ave,
sigM = M_std)
Hz_bao.append(Hz_i)
sigHz_bao.append(sigHz_i)
Hz_bao = np.array(Hz_bao)
sigHz_bao = np.array(sigHz_bao)
return Hz_bao, sigHz_bao
```
The GP-reconstructed $H(z)$ from CC + BAO is obtained in the next cell.
```python
for H0_prior in H0_priors:
H0_ave = H0_priors[H0_prior]['ave']
H0_std = H0_priors[H0_prior]['std']
z_cc_prior = np.append(np.array([z_min]), z_cc)
Hz_cc_prior = np.append(np.array([H0_ave]), Hz_cc)
sigHz_cc_prior = np.append(np.array([H0_std]), sigHz_cc)
Hz_bao, sigHz_bao = setup_BAO(H0_prior)
z_bao_cc_prior = np.append(z_bao, z_cc_prior)
Hz_bao_cc_prior = np.append(Hz_bao, Hz_cc_prior)
sigHz_bao_cc_prior = np.append(sigHz_bao, sigHz_cc_prior)
gp_bao_cc = GaussianProcessRegressor(kernel = kernel,
alpha = sigHz_bao_cc_prior**2,
n_restarts_optimizer = 10)
gp_bao_cc.fit(z_bao_cc_prior.reshape(-1, 1), Hz_bao_cc_prior)
print("\nLearned kernel: %s" % gp_bao_cc.kernel_)
print("Log-marginal-likelihood: %.3f"
% gp_bao_cc.log_marginal_likelihood(gp_bao_cc.kernel_.theta))
z_bao_cc_rec = np.linspace(z_min, z_max, n_div)
Hz_bao_cc_rec, sigHz_bao_cc_rec = \
gp_bao_cc.predict(z_bao_cc_rec.reshape(-1, 1),
return_std=True)
fig = plt.figure()
plt.errorbar(z_bao, Hz_bao,
yerr = sigHz_bao,
fmt = 'k^', markersize = 4,
ecolor = 'red', elinewidth = 2,
capsize = 2, label = 'BAO')
plt.errorbar(z_cc_prior, Hz_cc_prior, yerr = sigHz_cc_prior,
label = 'CC', fmt = 'ko', markersize = 5,
ecolor = 'red', elinewidth = 2, capsize = 2)
plt.plot(z_bao_cc_rec,
Hz_bao_cc_rec, 'b--', label = 'mean')
plt.fill_between(z_bao_cc_rec,
Hz_bao_cc_rec - sigHz_bao_cc_rec,
Hz_bao_cc_rec + sigHz_bao_cc_rec,
alpha = .4, facecolor = 'b',
edgecolor='None', label=r'$1\sigma$')
plt.fill_between(z_bao_cc_rec,
Hz_bao_cc_rec - 2*sigHz_bao_cc_rec,
Hz_bao_cc_rec + 2*sigHz_bao_cc_rec,
alpha = .2, facecolor = 'b',
edgecolor='None', label=r'$2\sigma$')
plt.title(H0_prior)
plt.xlabel('$z$')
plt.ylabel('$H(z)$')
plt.legend(loc = 'upper left', prop = {'size': 9.5})
plt.xlim(z_min, z_max)
plt.show()
```
The GP reconstruction given the combined CC + Pantheon/MCT + BAO is obtained in the next cell. For comparison, this is shown together with the reconstructions obtained from the CC and CC + BAO datasets only.
```python
for H0_prior in H0_priors:
H0_ave = H0_priors[H0_prior]['ave']
H0_std = H0_priors[H0_prior]['std']
# reconstruct H(z): CC + prior
z_cc_prior = np.append(np.array([z_min]), z_cc)
Hz_cc_prior = np.append(np.array([H0_ave]), Hz_cc)
sigHz_cc_prior = np.append(np.array([H0_std]), sigHz_cc)
gp = GaussianProcessRegressor(kernel = kernel,
alpha = sigHz_cc_prior**2,
n_restarts_optimizer = 10)
gp.fit(z_cc_prior.reshape(-1, 1), Hz_cc_prior)
z_cc_rec = np.linspace(z_min, z_max, n_div)
Hz_cc_rec, sigHz_cc_rec = gp.predict(z_cc_rec.reshape(-1, 1),
return_std = True)
# reconstruct H(z): CC + BAO + Prior
Hz_bao, sigHz_bao = setup_BAO(H0_prior)
z_bao_cc_prior = np.append(z_bao, z_cc_prior)
Hz_bao_cc_prior = np.append(Hz_bao, Hz_cc_prior)
sigHz_bao_cc_prior = np.append(sigHz_bao, sigHz_cc_prior)
gp_bao_cc = GaussianProcessRegressor(kernel = kernel,
alpha = sigHz_bao_cc_prior**2,
n_restarts_optimizer = 10)
gp_bao_cc.fit(z_bao_cc_prior.reshape(-1, 1), Hz_bao_cc_prior)
z_bao_cc_rec = np.linspace(z_min, z_max, n_div)
Hz_bao_cc_rec, sigHz_bao_cc_rec = \
gp_bao_cc.predict(z_bao_cc_rec.reshape(-1, 1),
return_std = True)
# Pantheon/MCT + CC + BAO + Prior, subscripted sbc for 'SN', 'BAO', 'CC'
Hz_pnmct = H0_ave*Ez_sn
var_pnmct = (np.diag(covEz_sn_corr)* \
(H0_ave**2)) + (H0_std*Ez_sn)**2
z_sbc = np.append(z_sn, z_bao_cc_prior)
Hz_sbc = np.append(Hz_pnmct, Hz_bao_cc_prior)
sigHz_sbc = np.append(np.sqrt(var_pnmct), sigHz_bao_cc_prior)
# save full data for part 2
sbc_data = np.stack((z_sbc, Hz_sbc, sigHz_sbc), axis = 1)
    np.savetxt('sbc_data_' + H0_prior + '.txt', sbc_data)
gp_sbc = GaussianProcessRegressor(kernel = kernel,
alpha = sigHz_sbc**2,
n_restarts_optimizer = 10)
gp_sbc.fit(z_sbc.reshape(-1, 1), Hz_sbc)
z_sbc_rec = np.linspace(1e-5, 2, 100)
Hz_sbc_rec, sigHz_sbc_rec = \
gp_sbc.predict(z_sbc_rec.reshape(-1, 1), return_std = True)
print("\nLearned kernel: %s" % gp_sbc.kernel_)
print("Log-marginal-likelihood: %.3f"
% gp_sbc.log_marginal_likelihood(gp_sbc.kernel_.theta))
fig = plt.figure()
# plot Pantheon/MCT
plt.errorbar(z_sn, Hz_pnmct, yerr = np.sqrt(var_pnmct),
label = 'Pantheon/MCT', fmt = 'ko', markersize = 5,
ecolor = 'red', elinewidth = 2, capsize = 2)
# plot BAO
plt.errorbar(z_bao, Hz_bao, yerr = sigHz_bao,
fmt = 'k^', markersize = 5, ecolor = 'red',
elinewidth = 2, capsize = 2, label = 'BAO')
# plot CC
plt.errorbar(z_cc, Hz_cc, yerr = sigHz_cc, label = 'CC',
fmt = 'kx', markersize = 5,
ecolor = 'red', elinewidth = 2, capsize = 2)
# GP reconstruction from SN, BAO, CC
plt.plot(z_sbc_rec, Hz_sbc_rec, 'b--', label = 'mean')
plt.fill_between(z_sbc_rec,
Hz_sbc_rec - sigHz_sbc_rec,
Hz_sbc_rec + sigHz_sbc_rec,
alpha = .4, facecolor = 'b',
edgecolor = 'None', label = r'1$\sigma$')
plt.fill_between(z_sbc_rec,
Hz_sbc_rec - 2*sigHz_sbc_rec,
Hz_sbc_rec + 2*sigHz_sbc_rec,
alpha = .2, facecolor = 'b',
                     edgecolor = 'None', label = r'2$\sigma$')
plt.title(H0_prior)
plt.xlabel('$z$')
plt.ylabel('$H(z)$')
plt.legend(loc = 'upper left', prop = {'size': 9.5})
plt.xlim(min(z_sbc_rec), max(z_sbc_rec))
plt.show()
fig = plt.figure()
# CC only
plt.plot(z_cc_rec, Hz_cc_rec, 'r--', label = 'CC')
plt.fill_between(z_cc_rec,
Hz_cc_rec - 2*sigHz_cc_rec,
Hz_cc_rec + 2*sigHz_cc_rec,
alpha = .2, facecolor = 'r',
edgecolor = 'r', hatch = '-')
# CC + BAO
plt.plot(z_bao_cc_rec, Hz_bao_cc_rec,
'g-.', label = 'BAO + CC')
plt.fill_between(z_bao_cc_rec,
Hz_bao_cc_rec - 2*sigHz_bao_cc_rec,
Hz_bao_cc_rec + 2*sigHz_bao_cc_rec,
alpha = .2, facecolor = 'g',
edgecolor ='g', hatch = '|')
# CC + SN + BAO
plt.plot(z_sbc_rec, Hz_sbc_rec, 'b-',
label = 'Pantheon/MCT + BAO + CC')
plt.fill_between(z_sbc_rec,
Hz_sbc_rec - 2*sigHz_sbc_rec,
Hz_sbc_rec + 2*sigHz_sbc_rec,
alpha = .2, facecolor = 'b',
edgecolor = 'b', hatch = 'x')
plt.title(H0_prior)
plt.xlabel('$z$')
plt.ylabel('$H(z)$')
plt.legend(loc = 'upper left', prop = {'size': 9.5})
plt.xlim(min(z_sbc_rec), max(z_sbc_rec))
plt.show()
```
The reconstructed Hubble function will be used in part 2 to draw Horndeski theories from the Hubble data.
### References
**Data sets** used in this work:
***Pantheon/MCT***: A. G. Riess et al., Type Ia Supernova Distances at Redshift > 1.5 from the Hubble Space
Telescope Multi-cycle Treasury Programs: The Early Expansion Rate, Astrophys. J. 853 (2018)
126 [[1710.00844](https://arxiv.org/abs/1710.00844)].
***Pantheon samples***: D. M. Scolnic et al., The Complete Light-curve Sample of Spectroscopically Confirmed SNe Ia
from Pan-STARRS1 and Cosmological Constraints from the Combined Pantheon Sample,
Astrophys. J. 859 (2018) 101 [[1710.00845](https://arxiv.org/abs/1710.00845)].
***Baryon Acoustic Oscillations***, from *various sources*:
(1) BOSS collaboration, The clustering of galaxies in the completed SDSS-III Baryon Oscillation
Spectroscopic Survey: cosmological analysis of the DR12 galaxy sample, Mon. Not. Roy.
Astron. Soc. 470 (2017) 2617 [[1607.03155](https://arxiv.org/abs/1607.03155)].
(2) J. E. Bautista et al., The Completed SDSS-IV extended Baryon Oscillation Spectroscopic
Survey: measurement of the BAO and growth rate of structure of the luminous red galaxy
sample from the anisotropic correlation function between redshifts 0.6 and 1, Mon. Not. Roy.
Astron. Soc. 500 (2020) 736 [[2007.08993](https://arxiv.org/abs/2007.08993)].
(3) H. Gil-Marin et al., The Completed SDSS-IV extended Baryon Oscillation Spectroscopic
Survey: measurement of the BAO and growth rate of structure of the luminous red galaxy
sample from the anisotropic power spectrum between redshifts 0.6 and 1.0, Mon. Not. Roy.
Astron. Soc. 498 (2020) 2492 [[2007.08994](https://arxiv.org/abs/2007.08994)].
(4) A. Tamone et al., The Completed SDSS-IV extended Baryon Oscillation Spectroscopic Survey:
Growth rate of structure measurement from anisotropic clustering analysis in configuration
space between redshift 0.6 and 1.1 for the Emission Line Galaxy sample, Mon. Not. Roy.
Astron. Soc. 499 (2020) 5527 [[2007.09009](https://arxiv.org/abs/2007.09009)].
(5) A. de Mattia et al., The Completed SDSS-IV extended Baryon Oscillation Spectroscopic
Survey: measurement of the BAO and growth rate of structure of the emission line galaxy
sample from the anisotropic power spectrum between redshift 0.6 and 1.1, Mon. Not. Roy.
Astron. Soc. 501 (2021) 5616 [[2007.09008](https://arxiv.org/abs/2007.09008)].
(6) R. Neveux et al., The completed SDSS-IV extended Baryon Oscillation Spectroscopic Survey:
BAO and RSD measurements from the anisotropic power spectrum of the quasar sample
between redshift 0.8 and 2.2, Mon. Not. Roy. Astron. Soc. 499 (2020) 210 [[2007.08999](https://arxiv.org/abs/2007.08999)].
(7) J. Hou et al., The Completed SDSS-IV extended Baryon Oscillation Spectroscopic Survey:
BAO and RSD measurements from anisotropic clustering analysis of the Quasar Sample in
configuration space between redshift 0.8 and 2.2, Mon. Not. Roy. Astron. Soc. 500 (2020) 1201
[[2007.08998](https://arxiv.org/abs/2007.08998)].
(8) V. de Sainte Agathe et al., Baryon acoustic oscillations at z = 2.34 from the correlations of
Lyα absorption in eBOSS DR14, Astron. Astrophys. 629 (2019) A85 [[1904.03400](https://arxiv.org/abs/1904.03400)].
(9) M. Blomqvist et al., Baryon acoustic oscillations from the cross-correlation of Lyα absorption
and quasars in eBOSS DR14, Astron. Astrophys. 629 (2019) A86 [[1904.03430](https://arxiv.org/abs/1904.03430)].
***Cosmic Chronometers***, from *various sources*:
(1) M. Moresco, L. Pozzetti, A. Cimatti, R. Jimenez, C. Maraston, L. Verde et al., A 6%
measurement of the Hubble parameter at z ∼ 0.45: direct evidence of the epoch of cosmic
re-acceleration, JCAP 05 (2016) 014 [[1601.01701](https://arxiv.org/abs/1601.01701)].
(2) M. Moresco, Raising the bar: new constraints on the Hubble parameter with cosmic
chronometers at z ∼ 2, Mon. Not. Roy. Astron. Soc. 450 (2015) L16 [[1503.01116](https://arxiv.org/abs/1503.01116)].
(3) C. Zhang, H. Zhang, S. Yuan, S. Liu, T.-J. Zhang and Y.-C. Sun, Four new observational H(z)
data from luminous red galaxies in the Sloan Digital Sky Survey data release seven, Research in
Astronomy and Astrophysics 14 (2014) 1221 [[1207.4541](https://arxiv.org/abs/1207.4541)].
(4) D. Stern, R. Jimenez, L. Verde, M. Kamionkowski and S. A. Stanford, Cosmic chronometers:
constraining the equation of state of dark energy. I: H(z) measurements, JCAP 2010 (2010)
008 [[0907.3149](https://arxiv.org/abs/0907.3149)].
(5) M. Moresco et al., Improved constraints on the expansion rate of the Universe up to z ˜1.1 from
the spectroscopic evolution of cosmic chronometers, JCAP 2012 (2012) 006 [[1201.3609](https://arxiv.org/abs/1201.3609)].
(6) Ratsimbazafy et al. Age-dating Luminous Red Galaxies observed with the Southern African
Large Telescope, Mon. Not. Roy. Astron. Soc. 467 (2017) 3239 [[1702.00418](https://arxiv.org/abs/1702.00418)].
***R19 $H_0$ prior***: A. G. Riess, S. Casertano, W. Yuan, L. M. Macri and D. Scolnic, Large Magellanic Cloud
Cepheid Standards Provide a 1% Foundation for the Determination of the Hubble Constant and
Stronger Evidence for Physics beyond ΛCDM, Astrophys. J. 876 (2019) 85 [[1903.07603](https://arxiv.org/abs/1903.07603)].
***TRGB $H_0$ prior***: W. L. Freedman et al., The Carnegie-Chicago Hubble Program. VIII. An Independent
Determination of the Hubble Constant Based on the Tip of the Red Giant Branch, Astrophys.
J. 882 (2019) 34 [[1907.05922](https://arxiv.org/abs/1907.05922)].
***P18 $H_0$ prior***: Planck collaboration, Planck 2018 results. VI. Cosmological parameters, Astron. Astrophys. 641 (2020) A6 [[1807.06209](https://arxiv.org/abs/1807.06209)].
**Python packages** used in this work:
``scikit-learn``: F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel et al., Scikit-learn:
Machine learning in Python, [Journal of Machine Learning Research 12 (2011) 2825](https://www.jmlr.org/papers/volume12/pedregosa11a/pedregosa11a.pdf?source=post_page---------------------------).
``cobaya``: J. Torrado and A. Lewis, Cobaya: Code for Bayesian Analysis of hierarchical physical models (2020) [[2005.05290](https://arxiv.org/abs/2005.05290)].
``getdist``: A. Lewis, GetDist: a Python package for analysing Monte Carlo samples (2019) [[1910.13970](https://arxiv.org/abs/1910.13970)].
``numpy``: C. R. Harris et al., Array programming with NumPy, [Nature 585 (2020) 357–362](https://www.nature.com/articles/s41586-020-2649-2?fbclid=IwAR3qKNC7soKsJlgbF2YCeYQl90umdrcbM6hw8vnpaVvqQiaMdTeL2GZxUR0).
``scipy``: P. Virtanen et al., SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python,
[Nature Methods 17 (2020) 261](https://www.nature.com/articles/s41592-019-0686-2).
``seaborn``: M. L. Waskom, seaborn: statistical data visualization, [Journal of Open Source Software 6
(2021) 3021](https://joss.theoj.org/papers/10.21105/joss.03021).
``matplotlib``: J. D. Hunter, Matplotlib: A 2d graphics environment, [Computing in Science Engineering 9
(2007) 90](https://ieeexplore.ieee.org/document/4160265).
```python
from sympy import *
init_printing()
from vierfeldertafel_v01 import Vierfelder_Tafel
```
# Example:
| Probability                   | Variable     |
|-------------------------------|--------------|
|$P(A) = 0.5$                   | `a1=0.5`,    |
|$P_A(B) = 0.25$                | `b1_a1=0.25`,|
|$P_{\bar{A}}(\bar{B}) = 0.40$  | `b2_a2=0.40` |
```python
v = Vierfelder_Tafel(
a1 = 0.5,
b1_a1 = 0.25,
b2_a2 = 0.40
)
v.anzahl_loesungen  # number of solutions ("Anzahl Lösungen")
```
```python
v.tafel(digits=5)
```
```python
v.tree_a(digits=5)
```
```python
v.tree_b(digits=5)
```
```python
v.get_value(v.b2_a1) # P_A(B̄)
```
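As a cross-check, the table entries can also be computed by hand from the three given probabilities. A minimal sketch (our own addition; it should match the table produced by `v.tafel()` above, up to the solution choice):
```python
a1, b1_a1, b2_a2 = 0.5, 0.25, 0.40   # P(A), P_A(B), P_Ā(B̄)
p_a1b1 = a1 * b1_a1                  # P(A ∩ B)
p_a1b2 = a1 * (1 - b1_a1)            # P(A ∩ B̄)
p_a2b2 = (1 - a1) * b2_a2            # P(Ā ∩ B̄)
p_a2b1 = (1 - a1) * (1 - b2_a2)      # P(Ā ∩ B)
p_b1 = p_a1b1 + p_a2b1               # P(B) by the law of total probability
print(p_a1b1, p_a1b2, p_a2b1, p_a2b2, p_b1)
```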
**Frequently needed:**
Ā B̄ ∩
```python
print('A\u0304, B\u0304, \u2229')
```
# Introduction
As you learned in the TMVA multivariate-analysis tutorial, knowledge of machine learning is becoming indispensable for analyzing experimental data. In particular, deep learning has advanced rapidly in recent years and is now used in experimental physics as well.
Now that well-developed deep-learning tools are available, anyone can use deep learning easily. On the other hand, using these tools without knowing the mechanisms behind them can lead to pitfalls.
In this lecture we study the basics of deep learning, in particular the perceptron, which is the building block of deep networks. Since following the equations alone is often not enough for full understanding, please run the sample code yourself.
## About this Jupyter notebook
This lecture is provided as a Jupyter notebook.
Concepts are explained with equations and text, and at the key points the equations and algorithms are implemented in Python.
## Importing packages
First, we import the required packages.
numpy is a package for efficient operations on multidimensional arrays, and matplotlib is a package for plotting and related tasks.
"%matplotlib inline" is needed to draw plots inside a Jupyter notebook; it is not required when running the code as an ordinary Python script.
```python
# Import libraries
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
```
# Perceptron
The (simple) perceptron is an algorithm developed in 1957 that can solve classification problems.
A classification problem is the task of labeling inputs into several classes. In experimental physics it is often used to separate signal from background.
For example, consider a classification problem that infers a class $ t\in\{0, 1\} $ from n input variables $ \mathbf{x}=\{x_1, x_2,\dots,x_n\} $. The goal is to construct a function $y(\mathbf{x})$ such that
$$ t = y(\mathbf{x}_S) = 1 $$
if the input comes from signal, and
$$t = y(\mathbf{x}_B) = 0 $$
if the input comes from background.
The perceptron defines this function as
$$
\begin{align*}
y(\mathbf{x}|\mathbf{w}, b) &= \Gamma(\mathbf{w}\cdot \mathbf{x} + b) \\
\Gamma(a) &= \begin{cases}
1 & (a\geq0)\\
0  & (a<0)
\end{cases}
\end{align*}
$$
Here $ \mathbf{w},b $ are the parameters of the function and $ \Gamma $ is the step function. By tuning the parameters to the problem at hand, classification problems can be solved.
## AND gate
First, let us implement an AND gate using a perceptron with two input variables ( $x_1, x_2$ ).
The AND gate is
| $x_1$ | $x_2$ | AND |
| :-: | :-: | :-: |
| 0 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 0 | 0 |
| 1 | 1 | 1 |
i.e., a function that returns 1 only when both input variables are 1, and 0 otherwise.
Since there are two input variables, writing the perceptron function out explicitly gives
$$
\begin{aligned}
y(x_1,x_2|w_1,w_2, b) &= \Gamma(w_1 x_1 + w_2 x_2 + b) \\
\Gamma(a) &= \begin{cases}
1 & (a\geq0)\\
0  & (a<0)
\end{cases}
\end{aligned}
$$
In Python this reads as follows.
def ANDGate(x1, x2):
    w1 = 1.0  # parameter value (one working choice)
    w2 = 1.0  # parameter value (one working choice)
    b = -1.5  # parameter value (one working choice)
    a = w1 * x1 + w2 * x2 + b
    if a >= 0.:
        return 1
    else:
        return 0
# Print a check of whether the implementation is correct
print("AND Gate:")
print("    x1, x2, y")
print("    0,  0, ", ANDGate(0, 0), "Correct" if ANDGate(0, 0) == 0 else "Wrong")  # expected answer: 0
print("    0,  1, ", ANDGate(0, 1), "Correct" if ANDGate(0, 1) == 0 else "Wrong")  # expected answer: 0
print("    1,  0, ", ANDGate(1, 0), "Correct" if ANDGate(1, 0) == 0 else "Wrong")  # expected answer: 0
print("    1,  1, ", ANDGate(1, 1), "Correct" if ANDGate(1, 1) == 1 else "Wrong")  # expected answer: 1
```
How should the parameters ($w_1, w_2, b$) be set so that this function becomes an AND gate?
The answer is in fact not unique (there are infinitely many valid choices), but, for example, $(w_1, w_2, b) = (1, 1, -1.5)$ works; this is the choice already used in the code above, so do verify it and try others.
Plotting the perceptron output over the $(x_1, x_2)$ plane (figure omitted): the output is $y=1$ in the red region and $y=0$ in the blue region.
As the form of the perceptron equation shows, the output splits the $(x_1, x_2)$ plane along the straight line $w_1 x_1 + w_2 x_2 + b=0$.
## NAND and OR gates
The NAND and OR gates are the functions
| $x_1$ | $x_2$ | NAND | OR |
| -- | -- | -- | -- |
| 0 | 0 | 1 | 0 |
| 0 | 1 | 1 | 1 |
| 1 | 0 | 1 | 1 |
| 1 | 1 | 0 | 1 |
Check for yourself how the parameters should be chosen in each case.
```python
def NANDGate(x1, x2):
    w1 = -1.0  # parameter value (one working choice)
    w2 = -1.0  # parameter value (one working choice)
    b = 1.5  # parameter value (one working choice)
    a = w1 * x1 + w2 * x2 + b
    if a >= 0.:
        return 1
    else:
        return 0
def ORGate(x1, x2):
    w1 = 1.0  # parameter value (one working choice)
    w2 = 1.0  # parameter value (one working choice)
    b = -0.5  # parameter value (one working choice)
    a = w1 * x1 + w2 * x2 + b
    if a >= 0.:
        return 1
    else:
        return 0
# Print a check of whether the implementation is correct
print("NAND Gate:")
print("    x1, x2, y")
print("    0,  0, ", NANDGate(0, 0), "Correct" if NANDGate(0, 0) == 1 else "Wrong")  # expected answer: 1
print("    0,  1, ", NANDGate(0, 1), "Correct" if NANDGate(0, 1) == 1 else "Wrong")  # expected answer: 1
print("    1,  0, ", NANDGate(1, 0), "Correct" if NANDGate(1, 0) == 1 else "Wrong")  # expected answer: 1
print("    1,  1, ", NANDGate(1, 1), "Correct" if NANDGate(1, 1) == 0 else "Wrong")  # expected answer: 0
print("OR Gate:")
print("    x1, x2, y")
print("    0,  0, ", ORGate(0, 0), "Correct" if ORGate(0, 0) == 0 else "Wrong")  # expected answer: 0
print("    0,  1, ", ORGate(0, 1), "Correct" if ORGate(0, 1) == 1 else "Wrong")  # expected answer: 1
print("    1,  0, ", ORGate(1, 0), "Correct" if ORGate(1, 0) == 1 else "Wrong")  # expected answer: 1
print("    1,  1, ", ORGate(1, 1), "Correct" if ORGate(1, 1) == 1 else "Wrong")  # expected answer: 1
```
## Logistic regression
The three gates we have seen so far (AND, OR, NAND) were problems whose output is uniquely determined. Most real problems, however, cannot be labeled uniquely. For example, as in $H\rightarrow\gamma\gamma$, whether an observed event comes from signal or background can usually only be evaluated probabilistically.
We therefore change the function so that, instead of outputting 0 or 1, it outputs a continuous value between 0 and 1. The output now represents not the class label (S or B) itself but the probability of that label.
A function often used in place of the step function, which outputs only 0 or 1, is the sigmoid function,
$$ \sigma(x) = \frac{1}{1+e^{-x}} $$
In the limits $x\rightarrow \pm \infty$ the step function and the sigmoid take the same values, but near x=0 the sigmoid is smooth and takes continuous values between 0 and 1. The sigmoid also has the property that rescaling the x axis brings it arbitrarily close to the step function.
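This scaling property is easy to check numerically. Below is a minimal sketch (our own illustration, reusing the numpy and matplotlib imports from above):
```python
# As the scale factor k grows, sigmoid(k*x) approaches the step function
xs = np.linspace(-5, 5, 500)
for k in [1, 3, 10]:
    plt.plot(xs, 1.0 / (1.0 + np.exp(-k * xs)), label=f'sigmoid({k}x)')
plt.plot(xs, (xs >= 0).astype(float), 'k--', label='step')
plt.legend()
plt.xlabel('x')
plt.show()
```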
The perceptron equation is modified to
$$
\begin{align*}
y(\mathbf{x}|\mathbf{w}, b) &= \sigma(\mathbf{w}\cdot \mathbf{x} + b) \\
\sigma(x) &= \frac{1}{1+e^{-x}}
\end{align*}
$$
Only the step function in the earlier equation has been replaced by the sigmoid.
Using this function for classification and regression is sometimes called logistic regression and sometimes simply a perceptron; here we call it a perceptron.
The part played by the step or sigmoid function is called the __activation function__.
この式を用いて、実際に問題を解いてみましょう。
次のセルはヘルパー関数です。データ点の作成や、プロットを行います。内容は完全に理解する必要はありませんが、もし余力があればコードを追ってみてください。
```python
# 中心値の異なる2つの二次元ガウス分布
def getDataset1():
state = np.random.get_state()
np.random.seed(0) # 今回はデータセットの乱数を固定させます。
nSignal = 100 # 生成するシグナルイベントの数
nBackground = 100 # 生成するバックグラウンドイベントの数
# データ点の生成
# 平均(x1,x2) = (1.0, 0.0)、分散=1の2次元ガウス分布
xS = np.random.multivariate_normal([1.0, 0.0], [[1, 0], [0, 1]], size=nSignal)
tS = np.ones(nSignal) # Signalは1にラベリング
# 平均(x1,x2) = (-1.0, 0.0)、分散=1の2次元ガウス分布
xB = np.random.multivariate_normal([-1.0, 0.0], [[1, 0], [0, 1]], size=nBackground)
tB = np.zeros(nBackground) # Backgroundは0にラベリング
# 2つのラベルを持つ学習データを1つにまとめる
x = np.concatenate([xS, xB])
t = np.concatenate([tS, tB]).reshape(-1, 1)
# データをランダムに並び替える
p = np.random.permutation(len(x))
x = x[p]
t = t[p]
np.random.set_state(state)
return x, t
# 二次元ガウス分布と一様分布
def getDataset2():
state = np.random.get_state()
np.random.seed(0) # 今回はデータセットの乱数を固定させます。
nSignal = 100 # 生成するシグナルイベントの数
nBackground = 1000 # 生成するバックグラウンドイベントの数
# データ点の生成
xS = np.random.multivariate_normal([1.0, 0], [[1, 0], [0, 1]], size=nSignal) # 平均(x1,x2) = (1.0, 0.0)、分散=1の2次元ガウス分布
tS = np.ones(nSignal) # Signalは1にラベリング
xB = np.random.uniform(low=-5, high=5, size=(nBackground, 2)) # (-5, +5)の一様分布
tB = np.zeros(nBackground) # Backgroundは0にラベリング
# 2つのラベルを持つ学習データを1つにまとめる
x = np.concatenate([xS, xB])
t = np.concatenate([tS, tB]).reshape(-1, 1)
# データをランダムに並び替える
p = np.random.permutation(len(x))
x = x[p]
t = t[p]
np.random.set_state(state)
return x, t
# ラベル t={0,1}を持つデータ点のプロット
def plotDataPoint(x, t):
# シグナル/バックグラウンドの抽出
xS = x[t[:, 0] == 1] # シグナルのラベルだけを抽出
xB = x[t[:, 0] == 0] # バックグラウンドのラベルだけを抽出
# プロット
plt.scatter(xS[:, 0], xS[:, 1], label='Signal', c='red', s=10) # シグナルをプロット
plt.scatter(xB[:, 0], xB[:, 1], label='Background', c='blue', s=10) # バックグラウンドをプロット
plt.xlabel('x1') # x軸ラベルの設定
plt.ylabel('x2') # y軸ラベルの設定
plt.legend() # legendの表示
plt.show()
# prediction関数 の等高線プロット (fill)
def PlotPredictionContour(prediction, *args):
# 等高線を描くためのメッシュの生成
x1, x2 = np.mgrid[-5:5:100j, -5:5:100j] # x1 = (-5, 5), x2 = (-5, 5) の範囲で100点x100点のメッシュを作成
x1 = x1.flatten() # 二次元配列を一次元配列に変換 ( shape=(100, 100) => shape(10000, ))
x2 = x2.flatten() # 二次元配列を一次元配列に変換 ( shape=(100, 100) => shape(10000, ))
x = np.array([x1, x2]).T
# 関数predictionを使って入力xから出力yを計算し、等高線プロットを作成
y = prediction(x, *args)
cs = plt.tricontourf(x[:, 0], x[:, 1], y.flatten(), levels=10)
plt.colorbar(cs)
# prediction関数 の等高線プロット (line)
def PlotPredictionContourLine(prediction, *args):
# 等高線を描くためのメッシュの生成
x1, x2 = np.mgrid[-5:5:100j, -5:5:100j] # x1 = (-5, 5), x2 = (-5, 5) の範囲で100点x100点のメッシュを作成
x1 = x1.flatten() # 二次元配列を一次元配列に変換 ( shape=(100, 100) => shape(10000, ))
x2 = x2.flatten() # 二次元配列を一次元配列に変換 ( shape=(100, 100) => shape(10000, ))
x = np.array([x1, x2]).T
# 関数predictionを使って入力xから出力yを計算し、等高線プロットを作成
y = prediction(x, *args)
plt.tricontour(x[:, 0], x[:, 1], y.flatten(), levels=10)
def PlotPredictionRegression1D(x, t, prediction, *args):
# データ点のプロット
plt.scatter(x, t, s=10, c='black')
# 関数predictionの出力をプロット
y = prediction(x, *args)
plt.plot(x, y, c='red')
# 中間層の各ノードの出力をプロット
nNode = len(w2)
for i in range(nNode):
y = w2[i] * perceptron(x, w1[:, i], b1[:, i])
plt.plot(x, y, linestyle='dashed') # (中間層のノードの出力 * 重み)をプロット
y = b2[0] * np.ones(x.shape)
plt.plot(x, y, linestyle='dashed') # 最後の層のバイアスタームのプロット
plt.show()
```
Let us plot the data points using the helper functions above.
```python
# Get the data points
x, t = getDataset1()
# Plot the data points
plotDataPoint(x, t)
```
We consider separating this kind of signal and background. In this classification problem the signal and background distributions partly overlap. In such intermediate regions we therefore want the function to return the probability that an event is signal, rather than a hard 0 or 1, which is why we use the sigmoid function.
The perceptron equation is
```python
def sigmoid(x):
return 1. / (1 + np.exp(-x))
def perceptron(x1, x2, w1, w2, b):
a = w1 * x1 + w2 * x2 + b
return sigmoid(a)
```
Only the last part has changed, from the step function to the sigmoid.
Written more generally in vectorized form using numpy arrays,
```python
def sigmoid(x):
return 1. / (1 + np.exp(-x))
def perceptron(x, w, b):
a = np.dot(x, w) + b # w・x + b
return sigmoid(a)
```
Here x and w are vectors, and np.dot(x, w) performs their inner product.
As a test, let us vary the input variables ($x$) and the parameters $(w,b)$ and check what values come back.
```python
x = np.array([1.0, 2.0])
w = np.array([1.0, 1.0])
b = np.array([-1.5])
print(f"w1 * x1 + w2 * x2 + b = {w[0]} * {x[0]} + {w[1]} * {x[1]} + {b[0]}")
perceptron(x, w, b)
```
$$
\begin{align}
\mathbf{w}\cdot \mathbf{x} + b &=
\begin{pmatrix}
w_1 & w_2
\end{pmatrix}
\cdot
\begin{pmatrix}
x_1 \\
x_2
\end{pmatrix}
+ b \\
&=
\begin{pmatrix}
1.0 & 1.0
\end{pmatrix}
\cdot
\begin{pmatrix}
1.0 \\
2.0
\end{pmatrix}
-1.5 = 1.5 \\
\sigma(1.5) &= \frac{1}{1+e^{-1.5}} = 0.818
\end{align}
$$
This is the computation performed by the cell above.
Multiple inputs can also be passed in x. If x has shape (number of samples, number of variables) and w has shape (number of variables,), then np.dot(x, w) inside perceptron returns an array of shape (number of samples,).
For example, several data points can be processed in a single call, as follows.
```python
x = np.array([
[0.0, 0.0], # event 1
[0.0, 1.0], # event 2
[1.0, 0.0], # event 3
[1.0, 1.0], # event 4
]) # shape = (4, 2)
w = np.array([1.0, 1.0])
b = np.array([-1.5])
perceptron(x, w, b)  # the returned shape is (4,)
```
Making a contour plot with the helper function defined earlier gives the following.
```python
# Perceptron parameters
w = np.array([1.0, 1.0])
b = np.array([1.5])
# Contour plot of the perceptron output
PlotPredictionContour(perceptron, w, b)
# Get the data points
x, t = getDataset1()
# Plot the data points
plotDataPoint(x, t)
```
Unlike the earlier example with the step function, the output is now smooth.
Change the parameter values and check how the perceptron output changes.
What would the "optimal" parameter values be?
## Graph representation of the perceptron
Representing the perceptron as a graph makes it visually intuitive and, later on, makes it easier to grasp the overall structure of more complex models.
The perceptron equation was
$$
\begin{align*}
y(\mathbf{x}|\mathbf{w}, b) &= \sigma(\mathbf{w}\cdot \mathbf{x} + b)
\end{align*}
$$
Decomposing this further and writing out each component explicitly,
$$
\begin{align*}
a &= w_1 \cdot x_1 + w_2 \cdot x_2 + b \\
y &= \sigma(a)
\end{align*}
$$
We represent these equations as a graph (figure omitted: the input nodes $x_1$ and $x_2$ feed the weighted sum $a$, which passes through $\sigma$ to give $y$). The graph makes it easy to see how the input variables ($(x_1, x_2)$) propagate to the right and are turned into the output ($y$).
## Machine learning
So far we have set the function parameters by hand, but this approach reaches its limits as problems become more complex. Machine learning determines the model parameters automatically from the data. Trained properly, a model can achieve better results than one tuned by hand.
Now, how should this training proceed?
What kind of parameters yield the best output?
Consider again a two-class classification problem: infer the class ($t\in\{0, 1\}$) from n input variables ($\mathbf{x}=\{x_1, x_2,\dots,x_n\}$).
Given an observation ($ \mathbf{x}=\{x_1, x_2,\dots,x_n\}$), the probability that it is signal was $y(\mathbf{x}|\mathbf{w}, b)$ (and the probability that it is background is $1-y(\mathbf{x}|\mathbf{w}, b)$). Given a set of observations whose labels are known, the probability that the parameters explain the observations is
$$
p(\mathbf{t}|\mathbf{w},b ) = \prod_{n=1}^{N} y_n^{t_n}\left(1-y_n \right)^{1-t_n}
$$
In this expression, the factor is $p_n = y(\mathbf{x}_n)$ when $t_n = 1$ and $p_n=1-y(\mathbf{x}_n)$ when $t_n = 0$.
Following the maximum-likelihood principle, we should determine the model parameters that maximize this probability. Taking the negative logarithm,
$$
E(\mathbf{w}, b) = -\log p(\mathbf{t}|\mathbf{w}, b) = -\sum_{n=1}^{N} \left( t_n \log y_n + \left( 1-t_n \right) \log \left( 1-y_n \right) \right)
$$
the task reduces to minimizing this quantity with respect to the parameters ($\mathbf{w}, b$).
In machine learning, the function being minimized is called the __loss function__.
A function of the form above is called the __cross-entropy error function__.
In this example, then, we use the cross entropy as the loss function.
As it stands, the scale of the loss depends on the number of training samples, so in practice we normalize by the number of samples used:
$$
E(\mathbf{w}, b) = - \frac{1}{N} \sum_{n=1}^{N} \left( t_n \log y_n + \left( 1-t_n \right) \log \left( 1-y_n \right) \right)
$$
We considered two-class classification here, but the multi-class case works in the same way.
When classifying into K classes ($t\in\{0, 1, \dots, K\}$), the likelihood is built from the output for class $k$ of the n-th event,
$$
p(\mathbf{t}|\mathbf{w},b ) = \prod_{n=1}^{N} \prod_{k=1}^{K} y_{nk}^{t_{nk}}
$$
Here $t_{nk}$ is the label of the n-th event, equal to 1 for class k and 0 for the other classes (so $\sum_{k=1}^{K}t_{nk}=1$), and $y_{nk}$ is the perceptron output for class k of the n-th event.
The same manipulation gives
$$
E(\mathbf{w}, b) = -\frac{1}{N} \sum_{n=1}^{N}\sum_{k=1}^{K} t_{nk} \log y_{nk}
$$
This expression is likewise called the cross-entropy error function.
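For reference, the binary cross entropy is equally short as code. A minimal sketch (our own helper; the training cells below compute the gradient directly and do not use it):
```python
def cross_entropy(y, t, eps=1e-12):
    '''binary cross-entropy loss, averaged over the samples'''
    y = np.clip(y, eps, 1 - eps)  # clip to avoid log(0)
    return -np.mean(t * np.log(y) + (1 - t) * np.log(1 - y))
```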
### Gradient descent
We have fixed the criterion (the loss function) by which the perceptron parameters are judged, but how to minimize it is not obvious.
A commonly used approach is gradient methods. As the name suggests, gradient methods search for a minimum of a function using its gradient.
Here we minimize the loss function with the simplest gradient method, gradient descent.
In gradient descent, the minimum of a function $f(\mathbf{x})$ with inputs $\mathbf{x}=(x_1,\dots,x_n)$ is sought iteratively:
$$
\mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} - \epsilon \cdot \left. \frac{\partial f}{\partial \mathbf{x}} \right|_{\mathbf{x}=\mathbf{x}^{(k)}}
$$
Here $\epsilon$ is a parameter that sets the size of each update; small values make the search for the minimum slow.
The first derivative ($\left. \frac{\partial f}{\partial \mathbf{x}} \right|_{\mathbf{x}=\mathbf{x}^{(k)}}$) is the slope at the point $\mathbf{x}^{(k)}$, so (for a suitably chosen $\epsilon$) each update decreases the value of the function.
Let us check on a simple example that this algorithm works correctly.
We look for the $(x_1, x_2)$ that minimizes $f(x_1,x_2)=x_1^2 + x_2^2$. The update rule is
$$
\begin{align*}
x_1 &\leftarrow x_1 - \epsilon \cdot (2\cdot x_1) \\
x_2 &\leftarrow x_2 - \epsilon \cdot (2\cdot x_2) \\
\end{align*}
$$
In Python code,
```python
# Initial values
x1 = 4.0
x2 = 3.0
learning_rate = 0.1  # step size
num_steps = 10  # number of iterations
for _ in range(num_steps):
    # Update the values
    x1 = x1 - learning_rate * (2 * x1)
    x2 = x2 - learning_rate * (2 * x2)
```
The values are updated as above; learning_rate is the parameter corresponding to $\epsilon$.
Let us run this and plot the trajectory of the parameters ($x_1, x_2$).
```python
# Contour plot of the objective function
x1, x2 = np.mgrid[-5:5:100j, -5:5:100j]  # 100x100 mesh over x1 = (-5, 5), x2 = (-5, 5)
y = np.square(x1) + np.square(x2)  # y = x1^2 + x2^2
plt.contour(x1, x2, y, linestyles='dashed', levels=5)
# Initial values
x1 = 4.0
x2 = 3.0
plt.scatter(x1, x2, c='black')  # plot the initial point
# Run gradient descent
learning_rate = 0.1  # step size
num_steps = 10  # number of iterations
for _ in range(num_steps):
    # Update the values
    x1 = x1 - 2 * x1 * learning_rate
    x2 = x2 - 2 * x2 * learning_rate
    plt.scatter(x1, x2, edgecolors='black', facecolor='None')  # plot the updated point
plt.xlabel('x1')
plt.ylabel('x2')
plt.xlim([-5, 5])
plt.ylim([-5, 5])
plt.show()
```
Starting from the initial point (black dot), the values are updated 10 times and approach $(x_1, x_2)=(0, 0)$, where the minimum is attained.
Try changing the step size and the number of updates and observe how the behavior changes.
In particular, roughly what step size is appropriate?
### Training the perceptron with gradient descent
Let us now optimize the perceptron parameters ($\mathbf{w}, b$) with gradient descent, taking the cross entropy
$$
\begin{align*}
E(\mathbf{w}, b) &= \frac{1}{N} \sum_{n=1}^{N} E_n  = -\frac{1}{N} \sum_{n=1}^{N} \left( t_n \log y_n + \left( 1-t_n \right) \log \left( 1-y_n \right) \right) \\
y(\mathbf{x}_n|\mathbf{w}, b) &= \sigma(\mathbf{w}\cdot \mathbf{x}_n + b)
\end{align*}
$$
as the objective function.
The derivative of the loss is
$$
\begin{align*}
\frac{\partial E_n}{\partial \mathbf{w}}
&= \frac{\partial E_n}{\partial y_n}\cdot \frac{\partial y_n}{\partial a} \cdot \frac{\partial a}{\partial \mathbf{w}}\\
&= \left[ \frac{y_n-t_n}{y_n(1-y_n)}\right] \cdot \left[ y_n(1-y_n) \right] \cdot \left[ \mathbf{x_n} \right] \\
&= \left(y_n-t_n\right) \cdot \mathbf{x_n} \\
\frac{\partial E_n}{\partial b}
&= \left(y_n-t_n\right)
\end{align*}
$$
Do try working through this calculation by hand.
(We used the identity $\frac{\partial \sigma(x)}{\partial x} = \sigma \cdot (1 - \sigma)$.)
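The identity for the sigmoid derivative can also be checked numerically with a central finite difference. A minimal sanity check (our own addition; `sigmoid` was defined above):
```python
x0, h = 0.7, 1e-6
numeric = (sigmoid(x0 + h) - sigmoid(x0 - h)) / (2 * h)  # central difference
analytic = sigmoid(x0) * (1 - sigmoid(x0))
print(numeric, analytic)  # the two values agree to high precision
```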
Using these first derivatives, we minimize the loss function with gradient descent.
The update rule written in Python is as follows.
```python
# Get the data points
x, t = getDataset1()
# Initial values
w = np.array([2.0, 0.0])
b = np.array([0.1])
# Training with gradient descent
learning_rate = 0.1  # step size
num_steps = 1000  # number of iterations
for _ in range(num_steps):
    error = perceptron(x, w, b)[:, np.newaxis] - t  # y - t
    grad_W = error * x  # (y - t) * x
    grad_B = error * 1  # (y - t)
    w = w - learning_rate * grad_W.mean(axis=0)  # average over the whole dataset
    b = b - learning_rate * grad_B.mean(axis=0)  # average over the whole dataset
# Plot
PlotPredictionContour(perceptron, w, b)
## Plot the data points
plotDataPoint(x, t)
```
## Limits of the simple perceptron
So far we have seen that the simple perceptron is useful for classification problems. However, the simple perceptron is known to be useful only for a particular class of problems. For example, we built AND/OR/NAND circuits above; what happens for the XOR circuit?
The XOR circuit has the truth table
| $x_1$ | $x_2$ | XOR |
| -- | -- | -- |
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |
Can this be classified by the simple perceptron
$$
\begin{align*}
y(x_1,x_2|w_1,w_2, b) &= \Gamma(w_1 x_1 + w_2 x_2 + b) \\
\Gamma(a) &= \begin{cases}
1 & (a\geq0)\\
0  & (a<0)
\end{cases}
\end{align*}
$$
In fact this problem cannot be classified: as the form of the equation shows, the perceptron can only handle linearly separable problems.
As another example, consider a situation where the signal follows a two-dimensional Gaussian on top of a uniformly distributed background.
```python
# Get the data points
x, t = getDataset2()
# Plot the data points
plotDataPoint(x, t)
```
Trying to classify an example like this with a simple perceptron does not produce an adequate prediction.
```python
# Get the data points
x, t = getDataset2()
# Initial values
w = np.array([0.1, 0.0])
b = np.array([0.0])
# Training with gradient descent
learning_rate = 0.1  # step size
num_steps = 1000  # number of iterations
for _ in range(num_steps):
    error = perceptron(x, w, b)[:, np.newaxis] - t  # y - t
    grad_W = error * x  # (y - t) * x
    grad_B = error * 1  # (y - t) * 1
    w = w - learning_rate * grad_W.mean(axis=0)  # average over the whole dataset
    b = b - learning_rate * grad_B.mean(axis=0)  # average over the whole dataset
# Contour plot
PlotPredictionContour(perceptron, w, b)
# Plot the data points
plotDataPoint(x, t)
```
# Multilayer perceptron
The simple perceptron could not represent an XOR gate. This problem can be solved by increasing the number of layers. A stack of simple perceptrons is called a multilayer perceptron.
As a graph it looks as follows (figure omitted: a hidden layer of nodes sits between the input nodes and the output node).
In equations,
$$
\begin{align*}
\mathbf{z}^{(1)} &= \sigma(\mathbf{w}^{(1)}\cdot \mathbf{x} + b^{(1)}) \\
y(\mathbf{x}|\mathbf{w}^{(1)}, b^{(1)},\mathbf{w}^{(2)}, b^{(2)}) &= \sigma(\mathbf{w}^{(2)}\cdot \mathbf{z}^{(1)} + b^{(2)})
\end{align*}
$$
The output node of a simple perceptron is reused here as a new input.
Using the step function as the activation, we now show that the multilayer perceptron can represent an XOR gate.
## Example 1: XOR gate
We showed above that the NAND, OR, and AND gates can each be built from a simple perceptron.
XOR can be built by combining NAND, OR, and AND gates.
Let $z_1$ be the output of a NAND gate and $z_2$ the output of an OR gate, and consider an AND gate that takes $z_1$ and $z_2$ as inputs; this gives the table
| $x_1$ | $x_2$ | $z_1$ | $z_2$ | $y$ |
| -- | -- | -- | -- | -- |
| 0 | 0 | 1 | 0 | 0 |
| 0 | 1 | 1 | 1 | 1 |
| 1 | 0 | 1 | 1 | 1 |
| 1 | 1 | 0 | 1 | 0 |
As one explicit choice (the parameters match the single gates built earlier),
$$
\begin{align*}
z_1(x_1, x_2) &= \Gamma(-x_1 - x_2 + 1.5) \ \text{(NAND gate)} \\
z_2(x_1, x_2) &= \Gamma(x_1 + x_2 - 0.5) \ \text{(OR gate)} \\
y(z_1, z_2) &= \Gamma(z_1 + z_2 - 1.5) \ \text{(AND gate)}
\end{align*}
$$
This construction realizes an XOR gate.
The construction was visualized in a set of four panels, OR gate | NAND gate | AND gate | XOR gate (figures omitted).
While a single-layer perceptron can only split the plane along a straight line, combining several perceptrons makes it possible to select much more complex regions.
```python
def XORGate(x1, x2):
    # OR gate
    w11 = 0.0  # placeholder value; set this yourself
    w12 = 0.0  # placeholder value; set this yourself
    b1 = 0.0  # placeholder value; set this yourself
    a1 = w11 * x1 + w12 * x2 + b1
    a1 = 1 if a1 >= 0 else 0
    # NAND gate
    w21 = 0.0  # placeholder value; set this yourself
    w22 = 0.0  # placeholder value; set this yourself
    b2 = 0.0  # placeholder value; set this yourself
    a2 = w21 * x1 + w22 * x2 + b2
    a2 = 1 if a2 >= 0 else 0
    # AND gate
    w31 = 0.0  # placeholder value; set this yourself
    w32 = 0.0  # placeholder value; set this yourself
    b3 = 0.0  # placeholder value; set this yourself
    y = w31 * a1 + w32 * a2 + b3
    y = 1 if y >= 0 else 0
    return y
# Print a check of whether the implementation is correct
print("XOR Gate:")
print("    x1, x2, y")
print("    0,  0, ", XORGate(0, 0), "Correct" if XORGate(0, 0) == 0 else "Wrong")  # expected answer: 0
print("    0,  1, ", XORGate(0, 1), "Correct" if XORGate(0, 1) == 1 else "Wrong")  # expected answer: 1
print("    1,  0, ", XORGate(1, 0), "Correct" if XORGate(1, 0) == 1 else "Wrong")  # expected answer: 1
print("    1,  1, ", XORGate(1, 1), "Correct" if XORGate(1, 1) == 0 else "Wrong")  # expected answer: 0
```
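For reference, one working fill-in for the placeholder parameters above (our own choice, matching the construction given earlier):
```python
def XORGate_solved(x1, x2):
    z1 = 1 if (-1.0 * x1 - 1.0 * x2 + 1.5) >= 0 else 0  # NAND
    z2 = 1 if (+1.0 * x1 + 1.0 * x2 - 0.5) >= 0 else 0  # OR
    return 1 if (z1 + z2 - 1.5) >= 0 else 0  # AND
print([XORGate_solved(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```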
## Example 2: 2D Gaussian
Let us classify with a multilayer perceptron the following dataset, which the simple perceptron could not handle:
```python
# Get the data points
x, t = getDataset2()
# Plot the data points
plotDataPoint(x, t)
```
In equations, the network is
$$
\begin{align*}
a_j^{(1)} &= w_{ij}^{(1)}\cdot x_{i} + b^{(1)}\\
z_j^{(1)} &= \sigma(a_j^{(1)}) \\
a_j^{(2)} &= w_{ij}^{(2)}\cdot z_{i}^{(1)} + b^{(2)}\\
y &= \sigma(a^{(2)})
\end{align*}
$$
In a multilayer perceptron, the number of nodes in the hidden layer can be chosen freely. Here we set the number of hidden nodes to 3 and see what kind of prediction results.
```python
# Get the data points
x, t = getDataset2()
# Define the perceptron
def multilayerPerceptron(x, w1, b1, w2, b2):
    a1 = np.dot(x, w1) + b1  # shape=(nEvents, 3)
    z1 = sigmoid(a1)  # shape=(nEvents, 3)
    a2 = np.dot(z1, w2) + b2  # shape=(nEvents, 1)
    y = sigmoid(a2)  # shape=(nEvents, 1)
    return y
# Perceptron parameters (initialized with a standard normal distribution)
w1 = np.random.randn(2, 3)  # shape=(2, 3)
b1 = np.random.randn(1, 3)  # shape=(1, 3)
w2 = np.random.randn(3, 1)  # shape=(3, 1)
b2 = np.random.randn(1, 1)  # shape=(1, 1)
# Contour plot of the multilayer-perceptron output
PlotPredictionContour(multilayerPerceptron, w1, b1, w2, b2)
## Plot the data points
plotDataPoint(x, t)
```
Change the number of hidden-layer nodes and the parameter values, and see how the prediction changes.
```python
# Get the data points
x, t = getDataset2()
# Define the perceptron
def multilayerPerceptron(x, w1, b1, w2, b2):
    a1 = np.dot(x, w1) + b1  # shape=(nEvents, 3)
    z1 = sigmoid(a1)  # shape=(nEvents, 3)
    a2 = np.dot(z1, w2) + b2  # shape=(nEvents, 1)
    y = sigmoid(a2)  # shape=(nEvents, 1)
    return y
# Perceptron parameters
num_z = 3  # number of hidden-layer nodes
w1 = np.zeros((2, num_z))  # shape=(2, nz)
b1 = np.zeros((1, num_z))  # shape=(1, nz)
w2 = np.zeros((num_z, 1))  # shape=(nz, 1)
b2 = np.zeros((1, 1))  # shape=(1, 1)
# Parameters for z_0^{(1)}
w1[0][0] = -1.0  # weight connecting node 0 of layer 1 to node 0 of layer 2
w1[1][0] = +0.0  # weight connecting node 1 of layer 1 to node 0 of layer 2
b1[0][0] = +3.0  # bias of node 0 of layer 2
# Parameters for z_1^{(1)}
w1[0][1] = +0.0  # weight connecting node 0 of layer 1 to node 1 of layer 2
w1[1][1] = +1.0  # weight connecting node 1 of layer 1 to node 1 of layer 2
b1[0][1] = +2.0  # bias of node 1 of layer 2
# Parameters for z_2^{(1)}
w1[0][2] = +1.0  # weight connecting node 0 of layer 1 to node 2 of layer 2
w1[1][2] = +1.0  # weight connecting node 1 of layer 1 to node 2 of layer 2
b1[0][2] = +2.0  # bias of node 2 of layer 2
# Parameters building the output (y) from the z_i^{(1)}
w2[0][0] = +1.0  # weight connecting node 0 of layer 2 to node 0 of layer 3
w2[1][0] = +1.0  # weight connecting node 1 of layer 2 to node 0 of layer 3
w2[2][0] = +1.0  # weight connecting node 2 of layer 2 to node 0 of layer 3
b2[0][0] = -3.0  # bias of node 0 of layer 3
# Plot
PlotPredictionContour(multilayerPerceptron, w1, b1, w2, b2)
## Plot the data points
plotDataPoint(x, t)
```
## Training the multilayer perceptron (back-propagation)
For the simple perceptron above, we derived the first derivative of the loss function by hand.
For a multilayer perceptron with more layers, deriving the first derivatives by hand becomes difficult.
Back-propagation is a technique for computing these derivatives efficiently.
Here we explain the method using a multilayer perceptron with two hidden layers as an example.
For simplicity we omit the bias terms here (the extension to include biases is straightforward).
The equations describing the network are
$$
\begin{align*}
a_j^{(1)} &= w_{ij}^{(1)}\cdot x_{i}\\
z_j^{(1)} &= \sigma(a_j^{(1)}) \\
a_j^{(2)} &= w_{ij}^{(2)}\cdot z_{i}^{(1)}\\
z_j^{(2)} &= \sigma(a_j^{(2)}) \\
a_j^{(3)} &= w_{ij}^{(3)}\cdot z_{i}^{(2)}\\
y &= \sigma(a^{(3)})
\end{align*}
$$
What we want are the first derivatives of the loss function with respect to each parameter, i.e.
$\frac{\partial E}{\partial w_{ij}^{(1)}}$, $\frac{\partial E}{\partial w_{ij}^{(2)}}$, and $\frac{\partial E}{\partial w_{ij}^{(3)}}$.
These can be obtained with the chain rule.
The loss function was
$$
E(\mathbf{w}) = \frac{1}{N} \sum_{n=1}^{N} E_n(\mathbf{w}) = -\frac{1}{N} \sum_{n=1}^{N} \left( t_n \log y_n + \left( 1-t_n \right) \log \left( 1-y_n \right) \right)
$$
Defining
$$
\delta_{i}^{(k)} = \frac{\partial E_n}{\partial a_{i}^{(k)}}
$$
and computing each derivative in turn,
$$
\begin{align*}
\frac{\partial E_n}{\partial w_{ij}^{(3)}}
&= \frac{\partial E_n}{\partial a_1^{(3)}}\cdot \frac{\partial a_1^{(3)}}{\partial w_{ij}^{(3)}} \\
&= \left(y_n-t_n\right) \cdot z_i^{(2)} = \delta_{1}^{(3)} \cdot z_i^{(2)}
\end{align*}
$$
$$
\begin{align*}
\frac{\partial E_n}{\partial w_{ij}^{(2)}}
&= \frac{\partial E_n}{\partial a_j^{(2)}}\cdot \frac{\partial a_j^{(2)}}{\partial w_{ij}^{(2)}} \\
&= \left[ \frac{\partial E_n}{\partial a_1^{(3)}}\cdot \frac{\partial a_1^{(3)}}{\partial z_j^{(2)}}\cdot \frac{\partial z_j^{(2)}}{\partial a_j^{(2)}}\right] \cdot \frac{\partial a_j^{(2)}}{\partial w_{ij}^{(2)}} \\
&= \left[ \delta_{1}^{(3)}
\cdot w_{j1}^{(3)}
\cdot \sigma^{'}(a_j^{(2)}) \right]
\cdot z_{i}^{(1)} = \delta_{j}^{(2)} \cdot z_{i}^{(1)}
\end{align*}
$$
$$
\begin{align*}
\frac{\partial E_n}{\partial w_{ij}^{(1)}}
&= \frac{\partial E_n}{\partial a_j^{(1)}}\cdot \frac{\partial a_j^{(1)}}{\partial w_{ij}^{(1)}} \\
&= \left[ \sum_k \frac{\partial E_n}{\partial a_{k}^{(2)}}\cdot \frac{\partial a_{k}^{(2)}}{\partial z_j^{(1)}}\cdot \frac{\partial z_j^{(1)}}{\partial a_j^{(1)}}\right] \cdot \frac{\partial a_j^{(1)}}{\partial w_{ij}^{(1)}} \\
&= \left[ \sum_k \delta_{k}^{(2)}
\cdot w_{jk}^{(2)}
\cdot \sigma^{'}(a_j^{(1)}) \right]
\cdot x_{i} = \delta_{j}^{(1)} \cdot x_{i}
\end{align*}
$$
Notice that, when computing the derivatives of the weights closer to the input, the quantities already computed for the downstream weights can be reused.
Exploiting this, the derivatives of a complicated network can be computed systematically.
In an actual computation, the $\delta_{i}^{(k)}$ are evaluated in order, starting from the downstream (output) side.
$\delta_{i}^{(k)}$ is the derivative of the activation function (here the sigmoid) multiplied by a linear combination of the downstream errors ($\delta_{j}^{(k+1)}$):
$$
\delta_{i}^{(k)} =
\sigma^{'}(a_i^{(k)}) \left(
\sum_j w_{ij}^{(k+1)} \cdot \delta_{j}^{(k+1)} \right)
$$
The derivative with respect to a weight ($w_{ij}^{(k)}$) is obtained from the $\delta_{j}^{(k)}$ and $z_{i}^{(k-1)}$ of the nodes it connects:
$$
\frac{\partial E_n}{\partial w_{ij}^{(k)}} = \delta_{j}^{(k)} \cdot z_{i}^{(k-1)}
$$
Computing the derivatives in this way is called back-propagation.
Let us apply these back-propagation formulas to the earlier multilayer perceptron with one hidden layer.
The first derivatives are obtained by the following steps.
1. Forward-propagate the input ($x_i$) with the parameters ($w_{ij}^{(1)},b_j^{(1)},w_{ij}^{(2)},b_j^{(2)}$) to obtain the prediction $y$.
$$
\begin{align*}
a_j^{(1)} &= w_{ij}^{(1)}\cdot x_{i} + b^{(1)}\\
z_j^{(1)} &= \sigma(a_j^{(1)}) \\
a_j^{(2)} &= w_{ij}^{(2)}\cdot z_{i}^{(1)} + b^{(2)}\\
y &= \sigma(a^{(2)})
\end{align*} $$
2. Propagate the error $\delta$ backwards using the output $y$ and the true label $t$.
$$
\begin{align*}
\delta_{1}^{(2)} &= \left(y_n-t_n\right) \\
\delta_{j}^{(1)} &= \sigma^{'}(a_j^{(1)}) \cdot w_{j1}^{(2)} \cdot \delta_{1}^{(2)}
\end{align*} $$
3. Compute the derivatives from the errors $\delta$.
$$
\begin{align*}
\frac{\partial E_n}{\partial w_{ij}^{(2)}} &= \delta_{j}^{(2)} \cdot z_{i}^{(1)} \\
\frac{\partial E_n}{\partial b_{j}^{(2)}} &= \delta_{j}^{(2)} \\
\frac{\partial E_n}{\partial w_{ij}^{(1)}} &= \delta_{j}^{(1)} \cdot x_{i} \\
\frac{\partial E_n}{\partial b_{j}^{(1)}} &= \delta_{j}^{(1)} \\
\end{align*} $$
The back-propagation method implemented in Python is shown below.
```python
# Derivative of the sigmoid function
def grad_sigmoid(x):
    return sigmoid(x) * (1 - sigmoid(x))
# Get the data points
x, t = getDataset2()
# Perceptron parameters
w1 = np.random.randn(2, 3)  # shape=(2, 3)
b1 = np.random.randn(1, 3)  # shape=(1, 3)
w2 = np.random.randn(3, 1)  # shape=(3, 1)
b2 = np.random.randn(1, 1)  # shape=(1, 1)
# Training with gradient descent
learning_rate = 1.0  # step size
num_steps = 2  # number of iterations (increase to ~2000 for real training)
for _ in range(num_steps):
    # Forward pass
    a1 = np.dot(x, w1) + b1  # shape=(nEvents, 3)
    z1 = sigmoid(a1)  # shape=(nEvents, 3)
    a2 = np.dot(z1, w2) + b2  # shape=(nEvents, 1)
    z2 = sigmoid(a2)  # shape=(nEvents, 1)
    # Backward pass to obtain the first derivatives
    grad_A2 = z2 - t  # shape=(nEvents, 1)
    grad_W2 = (z1 * grad_A2)[:, :, np.newaxis]  # shape=(nEvents, 3, 1)
    grad_B2 = grad_A2[:, :, np.newaxis]  # shape=(nEvents, 1, 1)
    grad_A1 = grad_sigmoid(a1) * (grad_A2 * w2.T)  # shape=(nEvents, 3)
    grad_W1 = x[:, :, np.newaxis] * grad_A1[:, np.newaxis, :]  # shape=(nEvents, 2, 3)
    grad_B1 = grad_A1[:, np.newaxis, :]  # shape=(nEvents, 1, 3)
    w2 = w2 - learning_rate * grad_W2.mean(axis=0)  # shape=(3, 1)
    b2 = b2 - learning_rate * grad_B2.mean(axis=0)  # shape=(1, 1)
    w1 = w1 - learning_rate * grad_W1.mean(axis=0)  # shape=(2, 3)
    b1 = b1 - learning_rate * grad_B1.mean(axis=0)  # shape=(1, 3)
# Plot
## Contour plot of the perceptron output
PlotPredictionContour(multilayerPerceptron, w1, b1, w2, b2)
## Plot the data points
plotDataPoint(x, t)
```
While the simple perceptron could only classify by how far a point lies from a single straight line, the multilayer perceptron makes its prediction from the distances to several lines.
Using the parameters obtained by the training above, let us look at what kind of classification is actually performed in the hidden layer.
We plot the outputs of the nodes corresponding to $z_{i}^{(1)}$ in the network.
```python
# Line-contour plots of the perceptron output
# Draw the output of the one-layer perceptron with weights (w1, b1)
PlotPredictionContourLine(perceptron, w1[:, 0], b1[:, 0])  # output of hidden-layer node 0
PlotPredictionContourLine(perceptron, w1[:, 1], b1[:, 1])  # output of hidden-layer node 1
PlotPredictionContourLine(perceptron, w1[:, 2], b1[:, 2])  # output of hidden-layer node 2
# Plot the data points
plotDataPoint(x, t)
```
In this way, classifications by several (here three) straight lines are combined to carve out the two-dimensional circular region.
Thanks to the hidden layer, the network can approximate complex functions.
## Universal approximation theorem
So far we have considered classification problems. Here we turn to regression and examine the expressive power of the multilayer perceptron.
In classification, the perceptron output is interpreted as a probability, so the sigmoid was used to restrict it to the range 0 to 1.
In regression we let the output take values in an arbitrary range. A common choice is to use a linear function (i.e., do nothing) as the final activation.
In equations,
$$
\begin{align*}
a_j^{(1)} &= w_{j}^{(1)}\cdot x + b^{(1)}\\
z_j^{(1)} &= \sigma(a_j^{(1)}) \\
a^{(2)} &= w_{i}^{(2)}\cdot z_{i} + b^{(2)}\\
y &= a^{(2)}
\end{align*}
$$
Only the last line has changed, from the sigmoid to the identity map. As a graph (figure omitted), only the activation of the output node differs from the classification network.
```python
# Define the perceptron (regression)
def multilayerPerceptronRegression(x, w1, b1, w2, b2):
a1 = np.dot(x, w1) + b1
z1 = sigmoid(a1)
a2 = np.dot(z1, w2) + b2
y = a2
return y
```
We examine this expressive power using one-dimensional functions as inputs.
As the loss function we use the sum of squared differences between predictions and true values (the mean squared error). That is, the probability that the parameters explain the observations is
$$
p(\mathbf{t}|\mathbf{w},b ) = \prod_{n=1}^{N} \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{(y_n-t_n)^2}{2\sigma^2}}
$$
and the loss function to minimize is
$$
E(\mathbf{w}, b) = \frac{1}{N} \sum_{n=1}^{N} E_n(\mathbf{w}, b) = \frac{1}{N} \sum_{n=1}^{N} \frac{1}{2} \left( y_n - t_n \right)^2
$$
The derivative of the loss at the output node is
$$
\begin{align*}
\frac{\partial E_n}{\partial a^{(2)}}
&= \frac{\partial E_n}{\partial y} \frac{\partial y}{\partial a^{(2)}} \\
&= \left. y_n-t_n \right.
\end{align*}
$$
which has a form similar to the sigmoid + cross-entropy case.
### Quadratic curve
```python
nSample = 20
# x = (-1, 1), t = x^2
x = np.linspace(-1, 1, nSample)[:, np.newaxis]
t = x * x
# Perceptron parameters
nNode = 2  # number of hidden-layer nodes
w1 = np.random.randn(1, nNode)  # shape=(1, nNode)
b1 = np.random.randn(1, nNode)  # shape=(1, nNode)
w2 = np.random.randn(nNode, 1)  # shape=(nNode, 1)
b2 = np.random.randn(1, 1)  # shape=(1, 1)
# If the optimization gets stuck, initialize with the values below
# w1 = np.array([[+2.79, -2.791]])
# b1 = np.array([[-2.80, -2.801]])
# w2 = np.array([[2.55], [2.551]])
# b2 = np.array([[-0.29]])
# Training with gradient descent
learning_rate = 1.0  # step size
num_steps = 10000  # number of iterations
for _ in range(num_steps):
    # Forward pass
    a1 = np.dot(x, w1) + b1  # shape=(nEvents, nNode)
    z1 = sigmoid(a1)  # shape=(nEvents, nNode)
    a2 = np.dot(z1, w2) + b2  # shape=(nEvents, 1)
    z2 = a2  # shape=(nEvents, 1)
    # Backward pass to obtain the first derivatives
    grad_A2 = z2 - t  # shape=(nEvents, 1)
    grad_W2 = (z1 * grad_A2)[:, :, np.newaxis]  # shape=(nEvents, nNode, 1)
    grad_B2 = grad_A2[:, :, np.newaxis]  # shape=(nEvents, 1, 1)
    grad_A1 = grad_sigmoid(a1) * (grad_A2 * w2.T)  # shape=(nEvents, nNode)
    grad_W1 = x[:, :, np.newaxis] * grad_A1[:, np.newaxis, :]  # shape=(nEvents, 1, nNode)
    grad_B1 = grad_A1[:, np.newaxis, :]  # shape=(nEvents, 1, nNode)
    w2 = w2 - learning_rate * grad_W2.mean(axis=0)  # shape=(nNode, 1)
    b2 = b2 - learning_rate * grad_B2.mean(axis=0)  # shape=(1, 1)
    w1 = w1 - learning_rate * grad_W1.mean(axis=0)  # shape=(1, nNode)
    b1 = b1 - learning_rate * grad_B1.mean(axis=0)  # shape=(1, nNode)
# Plot
PlotPredictionRegression1D(x, t, multilayerPerceptronRegression, w1, b1, w2, b2)
```
The black points are the data and the red line is the prediction of the multilayer perceptron.
The dashed lines show the output of each hidden-layer node; their sum gives the final output (the red line).
The quadratic function ($y=x^2$) is approximated reasonably well by a multilayer perceptron with two hidden nodes.
### Sin(x)
```python
nSample = 20
# x = (-pi, pi), t = sin(x)
x = np.linspace(-np.pi, np.pi, nSample)[:, np.newaxis]
t = np.sin(x)
# Perceptron parameters
nNode = 4  # number of hidden-layer nodes
w1 = np.random.randn(1, nNode)  # shape=(1, nNode)
b1 = np.random.randn(1, nNode)  # shape=(1, nNode)
w2 = np.random.randn(nNode, 1)  # shape=(nNode, 1)
b2 = np.random.randn(1, 1)  # shape=(1, 1)
# If the optimization gets stuck, initialize with the values below
# w1 = np.array([[+1.4, -1.4, 1.6, -1.8]])
# b1 = np.array([[-4.2, -4.2, -0.8, -1.2]])
# w2 = np.array([[-3.0], [3.0], [2.0], [-1.3]])
# b2 = np.array([[-0.3]])
# Training with gradient descent
learning_rate = 1.0  # step size
num_steps = 10000  # number of iterations
for _ in range(num_steps):
    # Forward pass
    a1 = np.dot(x, w1) + b1  # shape=(nEvents, nNode)
    z1 = sigmoid(a1)  # shape=(nEvents, nNode)
    a2 = np.dot(z1, w2) + b2  # shape=(nEvents, 1)
    z2 = a2  # shape=(nEvents, 1)
    # Backward pass to obtain the first derivatives
    grad_A2 = z2 - t  # shape=(nEvents, 1)
    grad_W2 = (z1 * grad_A2)[:, :, np.newaxis]  # shape=(nEvents, nNode, 1)
    grad_B2 = grad_A2[:, :, np.newaxis]  # shape=(nEvents, 1, 1)
    grad_A1 = grad_sigmoid(a1) * (grad_A2 * w2.T)  # shape=(nEvents, nNode)
    grad_W1 = x[:, :, np.newaxis] * grad_A1[:, np.newaxis, :]  # shape=(nEvents, 1, nNode)
    grad_B1 = grad_A1[:, np.newaxis, :]  # shape=(nEvents, 1, nNode)
    w2 = w2 - learning_rate * grad_W2.mean(axis=0)  # shape=(nNode, 1)
    b2 = b2 - learning_rate * grad_B2.mean(axis=0)  # shape=(1, 1)
    w1 = w1 - learning_rate * grad_W1.mean(axis=0)  # shape=(1, nNode)
    b1 = b1 - learning_rate * grad_B1.mean(axis=0)  # shape=(1, nNode)
# Plot
PlotPredictionRegression1D(x, t, multilayerPerceptronRegression, w1, b1, w2, b2)
```
The sine curve could likewise be approximated by a multilayer perceptron with 4 hidden nodes.
### Rectangle function
It is known that more general functions can also be approximated to arbitrary accuracy by increasing the number of nodes.
Here we verify this intuitively.
The multilayer perceptron can approximate a function of the form
$$
\begin{align*}
y(x) &= \begin{cases}
1 & (-0.1 < x < 0.1)\\
0 & (\text{otherwise})
\end{cases}
\end{align*}
$$
```python
nSample = 100
x = np.linspace(-1, 1, nSample).reshape(-1, 1)
t = np.where(np.logical_and(-0.1 < x, x < 0.1), 1, 0)
# Perceptron parameters
nNode = 2
w1 = np.array([[200, 200]])
b1 = np.array([[-20, +20]])
w2 = np.array([[-1], [1]])
b2 = np.array([[0.]])
# Plot
PlotPredictionRegression1D(x, t, multilayerPerceptronRegression, w1, b1, w2, b2)
```
By superposing several copies of this bump function, an arbitrary function can be approximated.
# Keras
As mentioned at the beginning, many tools are now available that make deep learning easy to use.
Here we redo the training above using a package called Keras.
Keras is a wrapper that makes deep-learning packages such as TensorFlow and Theano easy to use; in practice a library such as TensorFlow runs under the hood.
For details on how to use Keras, consult the [official documentation](https://keras.io/ja/) as needed.
Let us also implement in Keras the problem of classifying the 2D-Gaussian signal with a multilayer perceptron.
```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD
# Get the data points
x, t = getDataset2()
# Define the model
model = Sequential([
    Dense(3, activation='sigmoid', input_dim=2),  # add a layer with 3 nodes; sigmoid activation
    Dense(1, activation='sigmoid')  # add a layer with 1 node; sigmoid activation
])
# Use cross entropy as the loss function; the optimizer is (stochastic) gradient descent
model.compile(loss='binary_crossentropy', optimizer=SGD(learning_rate=1.0))
# Training
model.fit(
    x=x,
    y=t,
    batch_size=len(x),  # batch size; use all of the data in each step
    epochs=3000,  # number of training steps
    verbose=0,  # set to 1 to print the loss etc. at each step
)
# Plot
## Contour plot of the perceptron output
PlotPredictionContour(model.predict)
## Plot the data points
plotDataPoint(x, t)
```
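Keras can also print a summary of the layers and parameter counts, which is handy for checking the architecture:
```python
model.summary()  # prints the layer shapes and the number of trainable parameters
```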
The perceptron can clearly be implemented very concisely this way.
Changing the model is also easy in Keras.
The example below changes
- the number of nodes per layer,
- the number of layers,
- the activation function,
- the optimization method for the loss function.
Many techniques known to be useful in deep learning are already implemented in Keras and ready to use.
For example, the available [loss functions](https://keras.io/ja/losses/), [optimizers](https://keras.io/ja/optimizers/), and [activation functions](https://keras.io/ja/activations/) are documented (in Japanese as well).
```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
# Get the data points
x, t = getDataset2()
# Define the model
model = Sequential([
    Dense(32, activation='relu', input_dim=2),  # add a layer with 32 nodes; relu activation
    Dense(16, activation='relu'),  # add a layer with 16 nodes; relu activation
    Dense(8, activation='relu'),  # add a layer with 8 nodes; relu activation
    Dense(1, activation='sigmoid')  # add a layer with 1 node; sigmoid activation
])
# Use cross entropy as the loss function; the optimizer is "adam"
model.compile(loss='binary_crossentropy', optimizer='adam')
# Training
model.fit(
    x=x,
    y=t,
    batch_size=128,  # batch size; update the parameters after every 128 samples
    epochs=500,  # number of training steps
    verbose=0,
)
# Plot
## Contour plot of the perceptron output
PlotPredictionContour(model.predict)
## Plot the data points
plotDataPoint(x, t)
```
# Closing remarks
Two hands-on exercises are provided:
- parameter optimization methods
- activation functions and parameter initialization
Each comes in its own Jupyter notebook; work through them if you are interested.
$$ \newcommand{\pd}[2]{ \frac{\partial #1}{\partial #2} }
\newcommand{\od}[2]{\frac{d #1}{d #2}}
\newcommand{\td}[2]{\frac{D #1}{D #2}}
\newcommand{\ab}[1]{\langle #1 \rangle}
\newcommand{\bss}[1]{\textsf{\textbf{#1}}}
\newcommand{\ol}{\overline}
\newcommand{\olx}[1]{\overline{#1}^x}
$$
# Hydrostatic and Geostrophic Balances
In the previous lecture, we obtained the local-tangent-plane form of the Boussinesq equations of motion. We repeat the final equations, in component form, below:
$$ \begin{align}
\td{u}{t} - f v &= -\pd{\phi}{x} + \nu \nabla^2 u \\
\td{v}{t} + f u &= -\pd{\phi}{y} + \nu \nabla^2 v \\
\td{w}{t} &= -\pd{\phi}{z} + b + \nu \nabla^2 w \ .
\end{align} $$
In this lecture, we ask _what are the dominant balances in these equations under common oceanographic conditions_.
## Hydrostatic Balance
We already saw hydrostatic balance for the _background pressure and density field_. For the large-scale flow, the dynamic pressure is also in hydrostatic balance:
$$ \pd{\phi}{z} = b \ . $$
To remind ourselves of the underlying physics, we can write out $\phi$ and $b$ explicitly and drop the common factor of $1/\rho_0$:
$$ \pd{}{z} \delta p = - g \delta \rho \ . $$
Hydrostatic balance can be used to define $\phi$ at any point. In order to do so, however, we must think a bit more about sea-surface height and its relation to pressure.
## Sea Surface Height, Dynamic Height, and Pressure
What is the dynamic pressure $\phi$ at an arbitrary depth $z$ for a flow in hydrostatic balance? The dynamic sea-surface $\eta$ is defined relative to the mean geoid (i.e. a surface of constant geopotential) at $z=0$.
We go back to the full hydrostatic balance with both background and dynamic pressure:
$$ \pd{}{z} (p) = - g \rho \ . $$
We now integrate this equation in the vertical from an arbitrary $z$ up to the sea surface $\eta$:
$$ \int_z^\eta \pd{}{z'} p dz'
= p(\eta) - p(z)
= - g \int_z^\eta \rho dz' \ . $$
$p(\eta)$ is the pressure right at the sea surface, which is given by the atmospheric pressure. Although atmospheric pressure loading can have a small effect on ocean circulation, it is generally negligible compared to the huge pressures generated internally by the ocean. We will now take $p(\eta)=0$ to simplify the bookkeeping; this gives
$$ p(z) = g \int_z^\eta \rho dz' \ . $$
Now let's subtract the reference pressure. It is given by integrating the hydrostatic balance for the background density up to $z=0$. This upper limit is important: the reference pressure is defined for a flat sea surface.
$$ p_{ref}(z) = g \int_z^0 \rho_0 dz' \ . $$
Subtracting the two equations, we obtain
$$ \begin{align}
\delta p(z) = p(z) - p_{ref}(z) &= g \int_z^\eta \rho dz' \ - g \int_z^0 \rho_0 dz' \\
&= g \int_0^\eta \rho dz' + g \int_z^0 \delta \rho dz'
\end{align} $$
We see there is a contribution from the density fluctuations in the interior (second term), plus the variations in the sea-surface height (first term). We can, to the usual precision of the Boussinesq approximation, neglect the density fluctuations within the depths 0 to $\eta$. Dividing through by $\rho_0$, we obtain
$$ \begin{align}
\phi(z) = g \eta + \int_z^0 b dz'
\end{align} $$
It is common in oceanography to define a quantity known as _dynamic height_, which expresses dynamic pressure variations in terms of their effective sea-surface height. In our notation, with the Boussinesq approximation, dynamic height is defined as
$$ H = \frac{\phi}{g} \ .$$
## Rossby Number
Let's estimate the relative size of the acceleration to Coriolis terms on the left-hand side of the horizontal momentum equations.
The ratio of these terms defines the _Rossby number_.
$$ R_O = \frac{ \left | \td{u}{t} \right | }{ | f v |} $$
The magnitude of the acceleration term can be estimated as $U^2 / L$, where $U$ is a representative velocity scale of the flow and $L$ is a representative length scale. So we can estimate the Rossby number as
$$ R_O = \frac{U}{f L} \ . $$
What are representative values? For the large-scale ocean circulation, U = 0.01 m/s, L = 1000 km, and f = 10$^{-4}$ s$^{-1}$. This gives $R_O = 10^{-4}$. So the _acceleration terms are often totally negligible_!
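A quick back-of-the-envelope check of this estimate (a small added sketch, not from the original notebook):
```python
U = 0.01    # velocity scale (m/s)
L = 1000e3  # length scale (m)
f = 1e-4    # Coriolis parameter (1/s)
print(f'Rossby number: {U / (f * L):.0e}')  # ~1e-4
```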
For low Rossby number conditions, we can neglect the acceleration and write the horizontal momentum equations as
$$ \begin{align} - f v &= -\pd{\phi}{x} + \nu \nabla^2 u \\
f u &= -\pd{\phi}{y} + \nu \nabla^2 v
\end{align} $$
The _geostrophic flow_ is defined as the flow determined by the balance between the Coriolis term and the pressure gradient:
$$ \begin{align} - f v_g &= -\pd{\phi}{x} \\
f u_g &= -\pd{\phi}{y}
\end{align} $$
while the ageostrophic flow is defined via the balance between the Coriolis term and the friction term:
$$ \begin{align} - f v_a &= \nu \nabla^2 u \\
f u_a &= \nu \nabla^2 v
\end{align} $$
The total flow is given by the sum of the geostrophic and ageostrophic components:
$$ \mathbf{u} = \mathbf{u}_a + \mathbf{u}_g \ .$$
## Geostrophic Flow
Away from the boundaries, friction is weak, and the flow is, to a good approximation, geostrophic. Geostrophic (or "balanced") flow is a ubiquitous feature of flows in the ocean and atmosphere. Geostrophic flow is characterized by flow along the pressure contours (i.e. _isobars_), as illustrated below.
Rotational flow around a pressure minimum is called _cyclonic_. Cyclonic flow is counterclockwise in the northern hemisphere and (due to the change of sign of $f$) clockwise in the southern hemisphere.
_[AVISO mean dynamic topography](https://www.aviso.altimetry.fr/en/applications/ocean/large-scale-circulation/mean-dynamic-topography.html)_
### Thermal Wind
The geostrophic flow is determined by the pressure and the pressure is determined by the density (via hydrostatic balance). We can see this relationship more clearly if we take the derivative in $z$ of the geostrophic equations:
$$ \begin{align} - f \pd{v_g}{z} &= -\pd{}{x}\pd{\phi}{z} = -\pd{b}{x}\\
f \pd{u_g}{z} &= -\pd{}{y} \pd{\phi}{z} = -\pd{b}{y} \ .
\end{align} $$
These equations, called _thermal wind_ relate the _vertical shear of the geostrophic flow_ to the _horizontal gradients of buoyancy_. They are very useful for interpreting hydrographic data.
[WOCE Atlantic Atlas Vol. 3](http://whp-atlas.ucsd.edu/whp_atlas/atlantic/a03/sections/printatlas/printatlas.htm)
```python
import xarray as xr
from matplotlib import pyplot as plt
%matplotlib inline
```
```python
! curl -O ftp://ftp.spacecenter.dk/pub/DTU10/2_MIN/DTU10MDT_2min.nc
```
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100  111M  100  111M    0     0   227k      0  0:08:20  0:08:20 --:--:--  190k
```python
ds = xr.open_dataset('DTU10MDT_2min.nc')
ds
```
<xarray.Dataset>
Dimensions: (lat: 5400, lon: 10801)
Coordinates:
* lon (lon) float64 0.0 0.03333 0.06667 0.1 0.1333 0.1667 0.2 0.2333 ...
* lat (lat) float64 -89.98 -89.95 -89.92 -89.88 -89.85 -89.82 -89.78 ...
Data variables:
mdt (lat, lon) float64 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ...
Attributes:
Conventions: COARDS/CF-1.0
title: DTU10MDT_2min.mdt.nc
source: Danish National Space Center
node_offset: 1
```python
fig, ax = plt.subplots(figsize=(18,12))
ds.mdt.plot(ax=ax)
```
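As an added sketch (not in the original notebook), the surface geostrophic velocities can be estimated directly from the mean dynamic topography via $u_g = -(g/f)\,\partial\eta/\partial y$ and $v_g = (g/f)\,\partial\eta/\partial x$; the degree-to-meter conversion below assumes a spherical Earth:
```python
import numpy as np

g = 9.81           # gravitational acceleration (m/s^2)
Omega = 7.292e-5   # Earth's rotation rate (rad/s)
R_earth = 6.371e6  # Earth radius (m)

f = 2 * Omega * np.sin(np.deg2rad(ds.lat))  # Coriolis parameter (1/s)
# differentiate returns d(mdt)/d(coord) per degree; convert degrees to meters
deta_dy = ds.mdt.differentiate('lat') * (180 / np.pi) / R_earth
deta_dx = ds.mdt.differentiate('lon') * (180 / np.pi) / (R_earth * np.cos(np.deg2rad(ds.lat)))

u_g = -(g / f) * deta_dy  # zonal geostrophic velocity (m/s)
v_g = (g / f) * deta_dx   # meridional geostrophic velocity (m/s)
```
Note that this estimate blows up near the equator, where $f \to 0$ and geostrophic balance breaks down.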
```python
```
# Introduction
In the falling parachutist problem we encountered an error - there was a difference between the numerical calculation and the analytical solution. There are numerous types of errors associated with an engineering mathematical model:
1. Modeling errors: These have to do with the fact that we can never describe reality exactly and/or our model neglects certain effects (e.g. side wind in the parachutist example).
2. Numerical errors: These have to do directly with the numerical method used and can be generally categorized as
* Round-off errors: due to the limited representation of numbers on computers
* Truncation error: due to approximating exact mathematical operators
## Errors
The basic definition of an error is
\begin{equation}
\text{True Error = True Value - Approximation}
\end{equation}
For simplicity, denote the "True Error" as $E_\text{t}$.
There is a problem with this definition however - it does not take into account the "magnitude" of the true value. For example, an error of one centimeter is much more significant if we are measuring a rivet rather than a bridge. To fix this, we define the true relative error, $\epsilon_\text{t}$:
\begin{equation}
\text{True Relative Error} = \frac{\text{True Error}}{\text{True Value}}
\end{equation}
or
\begin{equation}
\epsilon_\text{t} = \frac{E_\text{t}}{\text{True Value}}
\end{equation}
### Example
Suppose that you have the task of measuring the lengths of a bridge and a rivet and come up with 9999 and 9 cm, respectively. If the true values are 10,000 and 10 cm, respectively, compute (a) the true error and (b) the true percent relative error for each case.
#### Solution
For the bridge, the True Error is:
\begin{equation}
E_\text{t} = 10,000 - 9999 = 1 \text{cm}
\end{equation}
while for the rivet
\begin{equation}
E_\text{t} = 10 - 9 = 1 \text{cm}
\end{equation}
Both measurements have the same true error. But we know intuitively that the 1 cm is a bigger deal for the rivet. We can show this by using the relative error.
For the bridge, we have
\begin{equation}
\epsilon_\text{t} = \frac{E_\text{t}}{\text{True Value}} \times 100\%= \frac{1}{10,000} \times 100\% = 0.01 \%
\end{equation}
while for the rivet,
\begin{equation}
\epsilon_\text{t} = \frac{1}{10}\times 100\% = 10 \%
\end{equation}
Note that we used the words "True Error" - this is because the definition is based on the "True Value" - but this doesn't make sense since if we know the true value then we don't need an approximation!
This definition is still important, however, because it lays the foundations of error analysis. In addition, when testing any numerical method, we first try it out on problems where the true value is known (e.g. an analytical solution). What we often do is figure out an approximation for the true value or find the best estimate of the true value.
Another common situation in numerical methods is iteration: repeating the same task over and over again. What we can do in that case is define an iterative error as the difference between current and previous iterations:
\begin{equation}
\epsilon_\text{a} = \frac{ \text{current approximation} - \text{previous approximation} }{ \text{current approximation} }\times 100\%
\end{equation}
The usefulness of this definition is that it allows us to get an idea of how good our approximation is. Specifically, one can use a well known formula to determine the number of significant digits in an error. If your computed error $|\epsilon_\text{a}|$ (in \%) is less than
\begin{equation}
\epsilon_\text{s} = (0.5\times 10^{2-n})\%
\end{equation}
then your results are accurate to at least $n$ significant digits.
For example, if you want your results to be significant up to at least four digits, ($n = 4$) then your results must satify $ |\epsilon_\text{a}| < 0.5\times10^{-2}\%$ or $ |\epsilon_\text{a}| < 0.005\%$.
This formula can be inverted to obtain the number of significant digits. Taking the base-10 logarithm,
\begin{equation}
\log \epsilon_\text{s} = \log 0.5 + 2 - n
\end{equation}
and
\begin{equation}
n = \log 0.5 + 2 - \log \epsilon_\text{s}
\end{equation}
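As a quick worked example (added for illustration): an approximate relative error of $\epsilon_\text{a} = 0.0158\%$ gives $n = \log 0.5 + 2 - \log 0.0158 \approx 3.5$, so the result is accurate to at least 3 significant digits — consistent with the table computed below.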
## Example
The exponential function $e^x$ can be approximated with the following (Taylor) series:
\begin{equation}
e^x = 1 + x + \frac{x^2}{2} + \frac{x^3}{3!} + \frac{x^4}{4!} + \frac{x^5}{5!} + \ldots
\end{equation}
Compute the true and approximate relative errors when using this formula for 1, 2, 3, 4, 5, and 6 terms. Use this formula to compute $e^{0.5}$ up to four significant digits
### Solution
We will first need to define a function that computes the exponential up to a certain number of terms
```python
# need to import the factorial and exponential functions from the math module
import math
from math import factorial, exp
def my_exponential(x,n):
'''x is the exponent and n is the number of terms'''
result = 0.0 # initialize the result to zero
for i in range(0,n): # loop over the number of terms: this goes from 0 to n-1 (n terms)
result = result + x**i/factorial(i) # add each term at a time
return result
```
```python
def sig_fig(error):
return math.floor(math.log10(0.5) + 2 - math.log10(error))
```
```python
# Now run the function for different number of terms
oldval = 0.0
x = 0.5
trueval = exp(x)
for n in range(1,10):
val = my_exponential(x,n)
ϵt = abs( (val - trueval)/trueval ) * 100
ϵa = abs( (val - oldval)/val ) * 100
oldval = val
print (n,'\t', "%.8f" % val, '\t', "%.8f" % ϵt, '\t', "%.8f" % ϵa, '\t', sig_fig(ϵa))
```
1 1.00000000 39.34693403 100.00000000 -1
2 1.50000000 9.02040104 33.33333333 0
3 1.62500000 1.43876780 7.69230769 0
4 1.64583333 0.17516226 1.26582278 1
5 1.64843750 0.01721156 0.15797788 2
6 1.64869792 0.00141649 0.01579529 3
7 1.64871962 0.00010024 0.00131626 4
8 1.64872117 0.00000622 0.00009402 5
9 1.64872127 0.00000034 0.00000588 6
```python
from math import factorial, exp
oldval = 0.0
x = 2.0
nterms = 10
for n in range(0,nterms):
newval = oldval + x**n/ factorial(n)
ϵa = abs( (newval - oldval)/newval) * 100
oldval = newval
print(n, ϵa,'%')
```
0 100.0 %
1 66.66666666666666 %
2 40.0 %
3 21.052631578947363 %
4 9.523809523809527 %
5 3.669724770642201 %
6 1.2084592145015065 %
7 0.34408602150537515 %
8 0.08594757198109124 %
9 0.019095813242941077 %
```python
from math import factorial, exp
oldval = 0.0
x = 2.0
nterms = 100
tol = 0.01 # percent
ϵa = 10000
n = 0
while ϵa > tol:
newval = oldval + x**n/ factorial(n)
ϵa = abs( (newval - oldval)/newval) * 100
oldval = newval
n = n + 1
print(n, ϵa,'%')
```
1 100.0 %
2 66.66666666666666 %
3 40.0 %
4 21.052631578947363 %
5 9.523809523809527 %
6 3.669724770642201 %
7 1.2084592145015065 %
8 0.34408602150537515 %
9 0.08594757198109124 %
10 0.019095813242941077 %
11 0.0038190167941204627 %
```python
def mycos(x, nterms):
result = 0.0
for n in range(0,nterms):
result += (-1)**n * x**(2*n) / factorial(2*n)
return result
```
```python
print(mycos(3, 5), math.cos(3))
```
```python
```
# Functions as Real-valued Circuits
## Base Case: Single Gate in the Circuit
### Single Multiply Gate
The first thing we are going to compute is a simple function of two variables $F = f(X,Y)=X*Y$.
The circuit takes two real-valued inputs $X$ and $Y$, computes the product $X*Y$ defined by the function $f$ and stores it in the variable $F$.
```python
def f(X,Y):
return X*Y
```
```python
F = f(-2,3)
print (f'The output F is: {F}')
```
The output F is: -6
### <font color=blue>*How should one tweak the inputs $(X,Y)$ slightly to increase the output of the function __f__?*</font>
### Strategy #1 - Numerical Gradient
The partial derivative of the function $f$ with respect to $X$ can be computed as:
\begin{align}
\frac{\partial f (X,Y)}{\partial X} = \lim_{h\to 0} \frac{f(X+h,Y)-f(X,Y)}{h} \\
\end{align}
This can be simulated in code by choosing $h$ to be a very small number:
```python
h = 0.0001
```
#### Computing a partial derivative with respect to $X$
```python
X = -2; Y = 3
```
```python
X_derivative = (f(X+h, Y)-f(X, Y))/h
print (f'the derivative in respect to X is: {X_derivative}')
```
the derivative in respect to X is: 3.00000000000189
Positive derivative value indicates that $X$ should be increased in order to increase $f$, let's try that:
```python
print (f'the original output is: {f(-2,3)}')
# increase X for a small value, for example 0.2
print (f'the new output is: {f(-1.8,3)}')
```
the original output is: -6
the new output is: -5.4
$-5.4$ is larger than $-6$, it works!
#### compute a partial derivative with respect to $Y$
```python
Y_derivative = (f(X, Y+h)-f(X, Y))/h
print (f'the derivative in respect to Y is: {Y_derivative}')
```
the derivative in respect to Y is: -2.0000000000042206
Negative derivative value indicates that $Y$ should be decreased in order to increase $f$, let's try that:
```python
print (f'the original output is: {f(-2,3)}')
# decrease Y for a small value, for example 0.1
print (f'the new output is: {f(-2,2.9)}')
```
the original output is: -6
the new output is: -5.8
$-5.8$ is larger than $-6$, it works!
The __gradient__ of a function is made up of all the partial derivatives of this function concatenated in a vector
\begin{align}
\nabla f(X,Y)=\left[\frac{\partial f (X,Y)}{\partial X},\frac{\partial f (X,Y)}{\partial Y}\right]
\end{align}
#### Gradually maximizing a function by using derivatives
In order to gradually maximize the function $f$ (step-by-step) towards our desired result, we need to update our parameters. This is done by adding to each parameter's value the value of its partial derivative. To achieve this gradually in small steps, we multiply the value of the partial derivative by a small number (the step size).
```python
step_size = 0.01
F = f(X,Y)
print (f'The original output F is: {F}')
```
The original output F is: -6
```python
X = X + step_size * X_derivative
Y = Y + step_size * Y_derivative
print (f'X is: {X}, Y is: {Y}')
```
X is: -1.969999999999981, Y is: 2.979999999999958
```python
F_new = f(X,Y)
print (f'old output: {F}\nnew output: {F_new}')
```
old output: -6
new output: -5.87059999999986
The new output is larger than the old, thanks to partial derivatives.
This approach, however, is still expensive because we need to re-evaluate the circuit's output every time we tweak each input value independently by a small amount.
### Strategy #2 - Analytic Gradient
We can use calculus to compute partial derivatives of the function $f(X,Y)$.
\begin{align}
\frac{\partial F}{\partial X}=\frac{\partial f (X,Y)}{\partial X} = \lim_{h\to 0} \frac{f(X+h,Y)-f(X,Y)}{h} \\
\end{align}
A partial derivative of $F=f(X,Y)$ in respect to $X$ is:
\begin{align*}
\frac{\partial F}{\partial X}=\frac{\partial f (X,Y)}{\partial X} &= \lim_{h\to 0} \frac{f(X+h,Y)-f(X,Y)}{h} \\\\
&=\lim_{h\to 0}\frac{(X+h)Y -XY}{h} \\
&=\lim_{h\to 0}\frac{XY+Yh-XY}{h} \\
&=\lim_{h\to 0}\frac{Yh}{h} \\
\frac{\partial F}{\partial X}=\frac{\partial f (X,Y)}{\partial X}&=Y
\end{align*}
A partial derivative of $F=f(X,Y)$ in respect to $Y$ is:
\begin{align*}
\frac{\partial F}{\partial Y}=\frac{\partial f (X,Y)}{\partial Y} &= \lim_{h\to 0} \frac{f(X,Y+h)-f(X,Y)}{h} \\\\
&=\lim_{h\to 0}\frac{X(Y+h) -XY}{h} \\
&=\lim_{h\to 0}\frac{XY+Xh-XY}{h} \\
&=\lim_{h\to 0}\frac{Xh}{h} \\
\frac{\partial F}{\partial Y}=\frac{\partial f (X,Y)}{\partial Y} &=X
\end{align*}
Here are both partial derivatives:
\begin{align*}
\frac{\partial F}{\partial X}=Y ; \frac{\partial F}{\partial Y}=X \\
\end{align*}
We can represent this as a gradient:
\begin{align}
\nabla f(X,Y)=\left[Y,X\right]
\end{align}
We can now use this information to increase the output of the function __f__:
```python
X = -2; Y = 3
F = f(X,Y)
print (f'the output F is: {F}')
```
the output F is: -6
```python
X_gradient = Y
Y_gradient = X
print (f'X-gradient: {X_gradient} \nY-gradient: {Y_gradient}')
```
X-gradient: 3
Y-gradient: -2
```python
step_size = 0.001
X = X + step_size * X_gradient
Y = Y + step_size * Y_gradient
print (f'X is now: {X}, \nY is now: {Y}')
```
X is now: -1.997,
Y is now: 2.998
```python
F_new = f(X,Y)
print (f'old output: {F}\nnew output: {F_new}')
```
old output: -6
new output: -5.987006000000001
The new output $-5.8706$ is larger than the old: $-6$.
***
### Single Add Gate
The second function we're going to compute is $G=g(X,Y)=X+Y.$
The circuit takes two real-valued inputs $X$ and $Y$ and computes the sum $X+Y$.
```python
def g(X,Y):
return X+Y
```
```python
X = -2; Y = 3
G = g(X,Y)
print (f'The output is: {G}')
```
The output is: 1
As we did before, we can use calculus to compute partial derivatives for the function $G=g(X,Y)$:
\begin{align}
\frac{\partial G}{\partial X}=\frac{\partial g (X,Y)}{\partial X} = \lim_{h\to 0} \frac{g(X+h,Y)-g(X,Y)}{h} \\
\end{align}
A partial derivative of $G=g(X,Y)$ in respect to $X$ is:
\begin{align*}
\frac{\partial G}{\partial X}=\frac{\partial g (X,Y)}{\partial X} &= \lim_{h\to 0} \frac{g(X+h,Y)-g(X,Y)}{h} \\\\
&=\lim_{h\to 0}\frac{X+h+Y -X-Y}{h} \\
\frac{\partial G}{\partial X}=\frac{\partial g (X,Y)}{\partial X}&=\lim_{h\to 0}\frac{h}{h} =1 \\
\end{align*}
A partial derivative of $G=g(X,Y)$ in respect to $Y$ is:
\begin{align*}
\frac{\partial G}{\partial Y}=\frac{\partial g (X,Y)}{\partial Y} &= \lim_{h\to 0} \frac{g(X,Y+h)-g(X,Y)}{h} \\\\
&=\lim_{h\to 0}\frac{X+Y+h -X-Y}{h} \\
\frac{\partial G}{\partial Y}=\frac{\partial g (X,Y)}{\partial Y}&=\lim_{h\to 0}\frac{h}{h} =1 \\
\end{align*}
Both partial derivatives $\frac{\partial G}{\partial X}$ and $\frac{\partial G}{\partial Y}$ in this case are equal to $1$.
We can use this information to maximize the function __g__:
```python
X_gradient = 1
Y_gradient = 1
print (f'X-gradient: {X_gradient} \nY-gradient: {Y_gradient}')
```
X-gradient: 1
Y-gradient: 1
```python
step_size = 0.01
X = X + step_size * X_gradient
Y = Y + step_size * Y_gradient
print (f'X is: {X} \nY is: {Y}')
```
X is: -1.99
Y is: 3.01
```python
F_new = g(X,Y)
print (f'old output: {F}\nnew output: {F_new}')
```
old output: -6
new output: 1.0199999999999998
## Recursive Case: Circuits with Multiple Gates
The expression we are computing now is $M = m(X,Y,Z)=(X+Y)*Z$.
```python
def m(X,Y,Z):
return (X+Y)*Z
```
```python
X = -2; Y = 5; Z = -4
```
this is equal to $M=m(-2,5,-4)=(-2+5)\times(-4)=3\times(-4)=-12$
```python
M = m(X,Y,Z)
print (f'the output M is: {M}')
```
the output M is: -12
As we did before, we can use calculus to derive partial derivatives for the function $M=m(X,Y,Z)$.
\begin{align}
\frac{\partial M}{\partial X}=\frac{\partial m (X,Y,Z)}{\partial X} = \lim_{h\to 0} \frac{m(X+h,Y,Z)-m(X,Y,Z)}{h} \\
\end{align}
A partial derivative of $M=m(X,Y,Z)$ in respect to $X$ is:
\begin{align*}
\frac{\partial M}{\partial X}=\frac{\partial m(X,Y,Z)}{\partial X} &= \lim_{h\to 0} \frac{f(X+h,Y,Z)-f(X,Y,Z)}{h} \\\\
&=\lim_{h\to 0}\frac{(X+h+Y)*Z -(X+Y)*Z}{h} \\
&=\lim_{h\to 0}\frac{ZX+Zh+ZY-ZX-ZY}{h} \\
\frac{\partial M}{\partial X}=\frac{\partial m(X,Y,Z)}{\partial X}&=\lim_{h\to 0}\frac{Zh}{h} =Z \\
\end{align*}
Similarly, partial derivative of $M=m(X,Y,Z)$ in respect to $Y$ is:
\begin{align*}
\frac{\partial M}{\partial Y}=\frac{\partial m(X,Y,Z)}{\partial Y} &= \lim_{h\to 0} \frac{f(X,Y+h,Z)-f(X,Y,Z)}{h} \\\\
&=\lim_{h\to 0}\frac{(X+Y+h)*Z -(X+Y)*Z}{h} \\
&=\lim_{h\to 0}\frac{ZX+ZY+Zh-ZX-ZY}{h} \\
\frac{\partial M}{\partial Y}=\frac{\partial m(X,Y,Z)}{\partial Y}&=\lim_{h\to 0}\frac{Zh}{h} =Z \\
\end{align*}
A partial derivative of $M=m(X,Y,Z)$ in respect to $Z$ is:
\begin{align*}
\frac{\partial M}{\partial Z}=\frac{\partial m(X,Y,Z)}{\partial Z} &= \lim_{h\to 0} \frac{f(X,Y,Z+h)-f(X,Y,Z)}{h} \\\\
&=\lim_{h\to 0}\frac{(X+Y)*(Z+h) -(X+Y)*Z}{h} \\
&=\lim_{h\to 0}\frac{XZ+Xh+YZ+Yh-XZ-YZ}{h} \\
&=\lim_{h\to 0}\frac{Xh+Yh}{h} \\
&=\lim_{h\to 0}\frac{h(X+Y)}{h}\\
\frac{\partial M}{\partial Z}=\frac{\partial m(X,Y,Z)}{\partial Z}&=X+Y \\
\end{align*}
Here are all three partial derivatives:
\begin{align*}
\frac{\partial M}{\partial X}=Z ; \frac{\partial M}{\partial Y}=Z ;\frac{\partial M}{\partial Z}=X+Y \\
\end{align*}
We can represent this as a gradient:
\begin{align}
\nabla m(X,Y,Z)=\left[Z,Z,X+Y\right]
\end{align}
We can now use this information to maximize the output of the function __m__:
```python
X_gradient = Z
Y_gradient = Z
Z_gradient = X+Y
print (f'X-gradient: {X_gradient} \nY-gradient: {Y_gradient} \nZ-gradient: {Z_gradient}')
```
X-gradient: -4
Y-gradient: -4
Z-gradient: 3
```python
step_size = 0.01
X = X + step_size * X_gradient
Y = Y + step_size * Y_gradient
Z = Z + step_size * Z_gradient
print (f'X is: {X} \nY is: {Y} \nZ is: {Z}')
```
X is: -2.04
Y is: 4.96
Z is: -3.97
```python
M_new = m(X,Y,Z)
print (f'old output: {M}\nnew output: {M_new}')
```
old output: -12
new output: -11.5924
***
### Backpropagation
Instead of working with the function $m(X,Y,Z)=(X+Y)*Z$, we can simplify the computation by composing two new simpler functions: <br>$G=g(X,Y)=X+Y$, and <br>$F=f(G,Z)=G*Z$, into: <br>$F=f(g(X,Y),Z)$
Here we can apply the chain rule for differentiation, because $F$ is a function of $G$ and $G$ is a function of $X$ and $Y$.<br>
So instead of computing $\frac{\partial M}{\partial X}$, $\frac{\partial M}{\partial Y}$ and $\frac{\partial M}{\partial Z}$ which gets more complicated to compute with more complex expressions, we compute instead $\frac{\partial F}{\partial X}$, $\frac{\partial F}{\partial Y}$ and $\frac{\partial F}{\partial Z}$, which can be decomposed:
\begin{align*}
\frac{\partial F}{\partial X}=\frac{\partial F}{\partial G}\frac{\partial G}{\partial X}
\end{align*}
Here $\frac{\partial F}{\partial G}$ is a simple multiplication gate, whose derivate we have already computed:$\frac{\partial F}{\partial G}$ =$\frac{\partial f(G,Z)}{\partial G}=Z$
Also $\frac{\partial G}{\partial X}$ is a simple addition gate, whose derivate we have already computed: $\frac{\partial G}{\partial X}$ =$\frac{\partial g(X,Y)}{\partial X}=1$, thus:
\begin{align*}
\frac{\partial F}{\partial X}=\frac{\partial F}{\partial G}\frac{\partial G}{\partial X} = Z*1 = Z
\end{align*}
***
The same applies when computing the partial $\frac{\partial F}{\partial Y}$:
\begin{align*}
\frac{\partial F}{\partial Y}=\frac{\partial F}{\partial G}\frac{\partial G}{\partial Y}
\end{align*}
\begin{align*}
\frac{\partial F}{\partial Y}=\frac{\partial F}{\partial G}\frac{\partial G}{\partial Y} = Z*1 = Z
\end{align*}
***
The partial derivative $\frac{\partial F}{\partial Z}$ does not need to be decomposed:
\begin{align*}
\frac{\partial F}{\partial Z}=G=X+Y
\end{align*}
We can now use this information to maximize the output of the function __m_decomposed__:
```python
def m_decomposed(X,Y,Z):
G = g(X,Y)
F = f(G,Z)
return F
```
```python
X = -2; Y = 5; Z = -4
```
```python
F = m_decomposed(X,Y,Z)
print (f'the output is: {F}')
```
the output is: -12
```python
X_gradient = Z
Y_gradient = Z
Z_gradient = X+Y
print (f'X-gradient: {X_gradient}, \nY-gradient: {Y_gradient}, \nZ-gradient: {Z_gradient}.')
```
X-gradient: -4,
Y-gradient: -4,
Z-gradient: 3.
Here we can observe that in the backward pass, the multiplication gate switches the values of its inputs: what used to be (3, -4) in the forward pass becomes (-4, 3) in the backward pass. The addition gate, on the other hand, just passes its input value to the output without changing it.
```python
step_size = 0.01
X = X + step_size * X_gradient
Y = Y + step_size * Y_gradient
Z = Z + step_size * Z_gradient
print (f'X is: {X}\nY is: {Y}\nZ is: {Z}')
```
X is: -2.04
Y is: 4.96
Z is: -3.97
```python
F_new = m(X,Y,Z)
print (f'old output: {F}\nnew output: {F_new}')
```
old output: -12
new output: -11.5924
***
### Example: More complex functions
Here is a seemingly complicated function:
\begin{align}
l(A,B,C,X,Y) &= \frac{1}{1+e^{-(AX+BY+C)}}\\
&\quad\text{or} \\
l(A,B,C,X,Y) &= \sigma (AX+BY+C)\\
\end{align}
The function $\sigma$ is called a *sigmoid function*:
\begin{align}
\sigma(x) &= \frac{1}{1+e^{-x}}\\
\end{align}
and it was widely used in machine learning in the past.
The derivative of the sigmoid function is:
\begin{align}
\frac{d\sigma(x)}{dx}= \sigma(x) * (1-\sigma(x))
\end{align}
which means that once we compute the final activation $F=\sigma(AX+BY+C)$, we can simply calculate the derivative as $F*(1-F)$.
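A quick numerical check of this identity (an added sketch, not part of the original text):
```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

x, h = 0.7, 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)  # central difference
analytic = sigmoid(x) * (1 - sigmoid(x))               # F * (1 - F)
print(numeric, analytic)  # both ~0.2217
```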
We can compute the forward pass of this function without a problem, however, directly computing the partial derivatives $\frac{\partial L}{\partial A}$, $\frac{\partial L}{\partial B}$, $\frac{\partial L}{\partial C}$, ... could be tricky. It is much better to use the chain rule and compose multiple functions togeher.
We can create 4 simple functions:
\begin{align*}
G=g(A,X)&=A*X \\
H=h(B,Y)&=B*Y \\
K=k(G,H,C)&=G+H+C \\
F=f(K)&= \frac{1}{1+e^{-x}}\\
\end{align*}
and compose them as:
\begin{align*}
F=f(k(g(A,X),h(B,Y),C))
\end{align*}
This looks much simpler on a diagram:
Let's compute all the partial derivatives by using the chain rule:
\begin{align*}
\frac{\partial F}{\partial A}&=\frac{\partial F}{\partial K}*\frac{\partial K}{\partial G}*\frac{\partial G}{\partial A} \\
\frac{\partial F}{\partial A}&= F(1-F)*1*X \\
\frac{\partial F}{\partial A}&= XF(1-F)
\end{align*}
\begin{align*}
\frac{\partial F}{\partial X}&=\frac{\partial F}{\partial K}*\frac{\partial K}{\partial G}*\frac{\partial G}{\partial X} \\
\frac{\partial F}{\partial X}&= F(1-F)*1*A \\
\frac{\partial F}{\partial X}&= AF(1-F)
\end{align*}
By following the exact same procedure for $B$, $Y$, and $C$ we get:
\begin{align*}
\frac{\partial F}{\partial B}&=YF(1-F)\\
\end{align*}
\begin{align*}
\frac{\partial F}{\partial Y}&=BF(1-F)\\
\end{align*}
\begin{align*}
\frac{\partial F}{\partial C}&=F(1-F)\\
\end{align*}
We can represent this as a gradient:
\begin{align}
\nabla l(A,B,C,X,Y)=\left[XF(1-F),YF(1-F),F(1-F),AF(1-F),BF(1-F) \right]
\end{align}
We can now use this information to maximize the output of the function __l__:
```python
import numpy as np
def sigmoid(x):
return 1 / (1 + np.exp(-x))
```
```python
def l(A,B,C,X,Y):
G = f(A,X)
H = f(B,Y)
K = G + H + C
F = sigmoid(K)
return F
```
```python
A = 1.0; B = 2.0; C = -3.0; X = -1.0; Y = 3.0
```
```python
F = l(A,B,C,X,Y)
print (f'the output F is: {F}')
```
the output F is: 0.8807970779778823
Since every partial derivative involves $F(1-F)$, we will compute it first as `F_K`
```python
gradient_end = 1
F_K = (F * (1 - F)) * gradient_end
print (f'F_K = {F_K}')
```
F_K = 0.10499358540350662
```python
A_gradient = X*F_K
B_gradient = Y*F_K
C_gradient = F_K
X_gradient = A*F_K
Y_gradient = B*F_K
print (f'A-gradient: {A_gradient} \nX-gradient: {X_gradient} \nB-gradient: {B_gradient} \nY-gradient: {Y_gradient}\nC-gradient: {C_gradient}')
```
A-gradient: -0.10499358540350662
X-gradient: 0.10499358540350662
B-gradient: 0.31498075621051985
Y-gradient: 0.20998717080701323
C-gradient: 0.10499358540350662
```python
step_size = 0.01
A = A + step_size * A_gradient
B = B + step_size * B_gradient
C = C + step_size * C_gradient
X = X + step_size * X_gradient
Y = Y + step_size * Y_gradient
print (f'A is: {A}, \nB is: {B}, \nC is: {C}, \nX is: {X}, \nY is: {Y}')
```
A is: 0.998950064145965,
B is: 2.0031498075621053,
C is: -2.9989500641459648,
X is: -0.998950064145965,
Y is: 3.00209987170807
```python
F_new = l(A,B,C,X,Y)
print (f'old output: {F}\nnew output: {F_new}')
```
old output: 0.8807970779778823
new output: 0.8825501816218984
The new output is higher than the old one!
```python
%matplotlib inline
```
# Swirl
Image swirling is a non-linear image deformation that creates a whirlpool
effect. This example describes the implementation of this transform in
``skimage``, as well as the underlying warp mechanism.
## Image warping
When applying a geometric transformation on an image, we typically make use of
a reverse mapping, i.e., for each pixel in the output image, we compute its
corresponding position in the input. The reason is that, if we were to do it
the other way around (map each input pixel to its new output position), some
pixels in the output may be left empty. On the other hand, each output
coordinate has exactly one corresponding location in (or outside) the input
image, and even if that position is non-integer, we may use interpolation to
compute the corresponding image value.
## Performing a reverse mapping
To perform a geometric warp in ``skimage``, you simply need to provide the
reverse mapping to the :py:func:`skimage.transform.warp` function. E.g., consider
the case where we would like to shift an image 50 pixels to the left. The reverse
mapping for such a shift would be::
def shift_left(xy):
xy[:, 0] += 50
return xy
The corresponding call to warp is::
from skimage.transform import warp
warp(image, shift_left)
## The swirl transformation
Consider the coordinate $(x, y)$ in the output image. The reverse
mapping for the swirl transformation first computes, relative to a center
$(x_0, y_0)$, its polar coordinates,
\begin{align}
\theta &= \arctan(y/x) \\
\rho &= \sqrt{(x - x_0)^2 + (y - y_0)^2},
\end{align}
and then transforms them according to
\begin{align}
r &= \ln(2) \, \mathtt{radius} / 5 \\
\phi &= \mathtt{rotation} \\
s &= \mathtt{strength} \\
\theta' &= \phi + s \, e^{-\rho / r} + \theta
\end{align}
where ``strength`` is a parameter for the amount of swirl, ``radius`` indicates
the swirl extent in pixels, and ``rotation`` adds a rotation angle. The
transformation of ``radius`` into $r$ is to ensure that the
transformation decays to $\approx 1/1000^{\mathsf{th}}$ within the
specified radius.
```python
import matplotlib.pyplot as plt
from skimage import data
from skimage.transform import swirl
image = data.checkerboard()
swirled = swirl(image, rotation=0, strength=10, radius=120)
fig, (ax0, ax1) = plt.subplots(nrows=1, ncols=2, figsize=(8, 3),
sharex=True, sharey=True)
ax0.imshow(image, cmap=plt.cm.gray)
ax0.axis('off')
ax1.imshow(swirled, cmap=plt.cm.gray)
ax1.axis('off')
plt.show()
```
# Solving orbital equations with different algorithms
This notebook was adapted from `Orbit_games.ipynb`.
We consider energy plots and orbital solutions in polar coordinates for the general potential energy
$\begin{align}
U(r) = k r^n
\end{align}$
for different ODE solution algorithms. The `solve_ivp` function can itself be specified to use different solution methods (with the `method` keyword). Here we will set it by default to use 'RK23', which is a variant on the Runge-Kutta second-order algorithm. Second-order in this context means that the accuracy of a calculation will improve by a factor of $10^2 = 100$ if $\Delta t$ is reduced by a factor of ten.
We will compare it with the crudest algorithm, Euler's method, which is first order, and a second-order algorithm called Leapfrog, which is designed to be precisely <em>time-reversal invariant</em>. This property guarantees conservation of energy, which is not true of the other algorithms we will consider.
To solve the differential equations for orbits, we have defined the $\mathbf{y}$
and $d\mathbf{y}/dt$ vectors as
$\begin{align}
\mathbf{y} = \left(\begin{array}{c} r(t) \\ \dot r(t) \\ \phi(t) \end{array} \right)
\qquad
\frac{d\mathbf{y}}{dt}
= \left(\begin{array}{c} \dot r(t) \\ \ddot r(t) \\ \dot\phi(t) \end{array} \right)
= \left(\begin{array}{c} \dot r(t) \\
-\frac{1}{\mu}\frac{dU_{\rm eff}(r)}{dr} \\
\frac{l}{\mu r^2} \end{array} \right)
\end{align}$
where we have substituted the differential equations for $\ddot r$ and $\dot\phi$.
Then Euler's method can be written as a simple prescription to obtain $\mathbf{y}_{i+1}$
from $\mathbf{y}_i$, where the subscripts label the elements of the `t_pts` array:
$\mathbf{y}_{i+1} = \mathbf{y}_i + \left(d\mathbf{y}/dt\right)_i \Delta t$, or, by components:
$\begin{align}
r_{i+1} &= r_i + \frac{d\mathbf{y}_i[0]}{dt} \Delta t \\
\dot r_{i+1} &= \dot r_{i} + \frac{d\mathbf{y}_i[1]}{dt} \Delta t \\
\phi_{i+1} &= \phi_i + \frac{d\mathbf{y}_i[2]}{dt} \Delta t
\end{align}$
**Look at the** `solve_ode_Euler` **method below and verify the algorithm is correctly implemented.**
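The adapted class below does not actually include a `solve_ode_Euler` method, so here is a minimal standalone sketch of the algorithm for reference (the function name and signature are assumptions for illustration):
```python
import numpy as np

def solve_ode_Euler(dz_dt, t_pts, z_0):
    """Fixed-step Euler: z_{i+1} = z_i + (dz/dt)(t_i, z_i) * delta_t."""
    delta_t = t_pts[1] - t_pts[0]
    z = np.zeros((len(t_pts), len(z_0)))
    z[0] = z_0
    for i in range(len(t_pts) - 1):
        z[i + 1] = z[i] + np.asarray(dz_dt(t_pts[i], z[i])) * delta_t
    return z
```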
The leapfrog method does better by evaluating $\dot r$ at a halfway time step before and after the $r$ evaluation,
which is both more accurate and incorporates time reversal:
$\begin{align}
\dot r_{i+1/2} &= \dot r_{i} + \frac{d\mathbf{y}_i[1]}{dt} \Delta t/2 \\
r_{i+1} &= r_i + \dot r_{i+1/2} \Delta t \\
\dot r_{i+1} &= \dot r_{i+1/2} + \frac{d\mathbf{y}_{i+1}[1]}{dt} \Delta t/2 \\
\phi_{i+1} &= \phi_i + \frac{d\mathbf{y}_i[2]}{dt} \Delta t
\end{align}$
**Look at the** `solve_ode_Leapfrog` **method below and verify the algorithm is correctly implemented.**
A third method is the second-order Runge-Kutta algorithm, which we invoke from `solve_ivp` as `RK23`.
It does not use a fixed time-step as in our "homemade" implementations, so there is not a direct
comparison, but we can still check if it conserves energy.
**Run the notebook. You are to turn in and comment on the "Change in energy with time" plot at the end.
Where do you see energy conserved or not conserved? Show that Euler is first order and leapfrog is second
order by changing $\Delta t$; describe what you did and what you found.**
**Try another potential to see if you get the same general conclusions.**
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp
```
```python
# Change the common font size
font_size = 14
plt.rcParams.update({'font.size': font_size})
```
```python
```
```python
class GravitationalOrbits():
def __init__(self, m_1 = 1., m_2 = 1., G = 1.):
self.m1 = m_1
self.m2 = m_2
self.G = G
def dz_dt(self, t, z):
        r_12 = np.sqrt((z[0] - z[4])**2 + (z[2] - z[6])**2)
        return [z[1], self.G * self.m2 * (z[4] - z[0]) / r_12**3,
                z[3], self.G * self.m2 * (z[6] - z[2]) / r_12**3,
                z[5], -self.G * self.m1 * (z[4] - z[0]) / r_12**3,
                z[7], -self.G * self.m1 * (z[6] - z[2]) / r_12**3]
def solve_ode(self, t_pts, z_0, abserr = 1.0e-8, relerr = 1.0e-8):
solution = solve_ivp(self.dz_dt, (t_pts[0], t_pts[-1]), z_0, t_eval = t_pts,
method ='RK23', atol = abserr, rtol =relerr)
x_1, x_dot_1, y_1, y_dot_1, x_2, x_dot_2, y_2, y_dot_2 = solution.y
return x_1, x_dot_1, y_1, y_dot_1, x_2, x_dot_2, y_2, y_dot_2
def solve_ode_Leapfrog(self, t_pts, z_0):
delta_t = t_pts[1] - t_pts[0]
x_1_0, x_dot_1_0, y_1_0, y_dot_1_0, x_2_0, x_dot_2_0, y_2_0, y_dot_2_0 = z_0
num = len(t_pts)
x_1 = np.zeros(num)
x_dot_1 = np.zeros(num)
x_dot_1_half = np.zeros(num)
y_1 = np.zeros(num)
y_dot_1 = np.zeros(num)
y_dot_1_half = np.zeros(num)
x_2 = np.zeros(num)
x_dot_2 = np.zeros(num)
x_dot_2_half = np.zeros(num)
y_2 = np.zeros(num)
y_dot_2 = np.zeros(num)
y_dot_2_half = np.zeros(num)
x_1[0] = x_1_0
x_dot_1[0] = x_dot_1_0
y_1[0] = y_1_0
y_dot_1[0] = y_dot_1_0
x_2[0] = x_2_0
x_dot_2[0] = x_dot_2_0
y_2[0] = y_2_0
y_dot_2[0] = y_dot_2_0
for i in np.arange(num - 1):
t = t_pts[i]
z = [x_1[i], x_dot_1[i], y_1[i], y_dot_1[i], x_2[i], x_dot_2[i], y_2[i], y_dot_2[i]]
out = self.dz_dt(t, z)
x_dot_1_half[i] = x_dot_1[i] + out[1] * delta_t/2.
x_1[i + 1] = x_1[i] + x_dot_1_half[i] * delta_t
y_dot_1_half[i] = y_dot_1[i] + out[3] * delta_t/2.
y_1[i+1] = y_1[i] + y_dot_1_half[i] *delta_t
x_dot_2_half[i] = x_dot_2[i] + out[5] * delta_t/2.
x_2[i+1] = x_2[i] + x_dot_2_half[i] * delta_t
            y_dot_2_half[i] = y_dot_2[i] + out[7] * delta_t/2.
y_2[i+1] = y_2[i] + y_dot_2_half[i] * delta_t
z = [x_1[i+1], x_dot_1[i], y_1[i+1], y_dot_1[i], x_2[i+1], x_dot_2[i], y_2[i+1], y_dot_2[i]]
out = self.dz_dt(t, z)
x_dot_1[i+1] = x_dot_1_half[i] + out[1] * delta_t/2.
y_dot_1[i+1] = y_dot_1_half[i] + out[3] * delta_t/2.
x_dot_2[i+1] = x_dot_2_half[i] + out[5] * delta_t/2.
y_dot_2[i+1] = y_dot_2_half[i] + out[7] * delta_t/2.
return x_1, x_dot_1, y_1, y_dot_1, x_2, x_dot_2, y_2, y_dot_2
def solve_ode_Leapfrog_n(self, t_pts, z_0):
delta_t = t_pts[1] - t_pts[0]
num = len(t_pts)
n_tot = len(z_0)
z = np.zeros( shape = (n_tot, num))
dot_half = np.zeros( int(n_tot/2))
z[:, 0] = z_0
for i in np.arange(num - 1):
t = t_pts[i]
            z_now = z[:, i].copy()  # copy so the stored state at step i is not overwritten
out = np.asarray(self.dz_dt(t, z_now))
dot_half = z_now[1::2] + out[1::2]* delta_t/2.
z[0::2, i+1] = z_now[0::2] + dot_half * delta_t
z_now[0::2] = z[0::2, i+1]
out = np.asarray(self.dz_dt(t, z_now))
z[1::2, i+1] = dot_half + out[1::2] * delta_t/2.
return z
```
```python
def plot_y_vs_x(x, y, axis_labels=None, label=None, title=None,
color=None, linestyle=None, semilogy=False, loglog=False,
ax=None):
"""
Generic plotting function: return a figure axis with a plot of y vs. x,
with line color and style, title, axis labels, and line label
"""
if ax is None: # if the axis object doesn't exist, make one
ax = plt.gca()
if (semilogy):
line, = ax.semilogy(x, y, label=label,
color=color, linestyle=linestyle)
elif (loglog):
line, = ax.loglog(x, y, label=label,
color=color, linestyle=linestyle)
else:
line, = ax.plot(x, y, label=label,
color=color, linestyle=linestyle)
if label is not None: # if a label if passed, show the legend
ax.legend()
if title is not None: # set a title if one if passed
ax.set_title(title)
if axis_labels is not None: # set x-axis and y-axis labels if passed
ax.set_xlabel(axis_labels[0])
ax.set_ylabel(axis_labels[1])
return ax, line
```
```python
def start_stop_indices(t_pts, plot_start, plot_stop):
start_index = (np.fabs(t_pts-plot_start)).argmin() # index in t_pts array
stop_index = (np.fabs(t_pts-plot_stop)).argmin() # index in t_pts array
return start_index, stop_index
```
```python
orbit_label = (r'$x$', r'$y$')
t_start = 0.
t_end = 20.
delta_t = 0.001
t_pts = np.arange(t_start, t_end+ delta_t, delta_t)
```
```python
t_start = 0.
t_end = 20.
delta_t = 0.001
G = 1.
m1 = 1.
m2 = 9.
#Looked up the difference between sun & earth mass
o1 = GravitationalOrbits(m1, m2, G)
x_1_0, x_dot_1_0 = 1., -1.
y_1_0, y_dot_1_0 = 1., 1.
x_2_0, x_dot_2_0 = -(m1/m2) * x_1_0, -(m1/m2) * x_dot_1_0
y_2_0, y_dot_2_0 = -(m1/m2) * y_1_0, -(m1/m2) * y_dot_1_0
z_0 = [x_1_0, x_dot_1_0, y_1_0, y_dot_1_0, x_2_0, x_dot_2_0, y_2_0, y_dot_2_0]
x_1, x_dot_1, y_1, y_dot_1, x_2, x_dot_2, y_2, y_dot_2 = o1.solve_ode(t_pts, z_0)
fig = plt.figure(figsize=(5, 5))
ax = fig.add_subplot(1, 1, 1)
start, stop = start_stop_indices(t_pts, t_start, t_end)
ax.plot(x_1, y_1, color = 'blue', label = r'$m1$')
ax.plot(x_2, y_2, color = 'red', label = r'$m2$')
ax.set_title('Simple Gravitational Orbit')
ax.legend()
#ax.set_xlim(-np.max(x_2), np.max(x_2))
ax.set_aspect(1)
fig.tight_layout()
fig.savefig('Simple_orbits.png', bbox_inches = 'tight')
```
m2 is a larger mass than m1: m2 represents the Sun, while m1 represents the Earth.
The Sun traces out a small circle, while the Earth is seen to orbit all the way around the Sun.
```python
t_start = 0.
t_end = 20.
delta_t = 0.001
G = 20.
m1 = 20.
m2 = 1.
#Looked up the difference between sun & earth mass
o1 = GravitationalOrbits(m1, m2, G)
x_1_0, x_dot_1_0 = 0.1, 0.
y_1_0, y_dot_1_0 = 0., .75
x_2_0, x_dot_2_0 = -(m1/m2) * x_1_0, -(m1/m2) * x_dot_1_0
y_2_0, y_dot_2_0 = -(m1/m2) * y_1_0, -(m1/m2) * y_dot_1_0
z_0 = [x_1_0, x_dot_1_0, y_1_0, y_dot_1_0, x_2_0, x_dot_2_0, y_2_0, y_dot_2_0]
x_1, x_dot_1, y_1, y_dot_1, x_2, x_dot_2, y_2, y_dot_2 = o1.solve_ode_Leapfrog(t_pts, z_0)
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(1, 1, 1)
start, stop = start_stop_indices(t_pts, t_start, t_end)
ax.plot(x_1, y_1, color = 'blue', label = r'$m1$')
ax.plot(x_2, y_2, color = 'red', label = r'$m2$')
ax.set_title('Simple Gravitational Orbit')
ax.legend()
#ax.set_xlim(-np.max(x_2), np.max(x_2))
ax.set_aspect(1)
fig.tight_layout()
fig.savefig('Simple_orbits.png', bbox_inches = 'tight')
# Note: solve_ode_Leapfrog originally updated y_dot_2_half with '+ delta_t/2.' instead of
# '* delta_t/2.'; that bug (fixed above) was what threw the Leapfrog axes off.
```
```python
from google.colab import drive
drive.mount('gdrive')
```
Mounted at gdrive
```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Dense, Input, Flatten, Conv1D, AveragePooling1D, Dropout
from tensorflow.keras.layers import concatenate, Conv2D, MaxPooling2D, MaxPooling1D, BatchNormalization, Softmax
from tensorflow.keras.models import Model, Sequential
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
import seaborn as sns
import tensorflow_probability as tfp
tfd = tfp.distributions
tfpl = tfp.layers
```
# Building Probabilistic Deep Learning Models
- dealing with uncertainty involved in modeling process
- want models to assign higher levels of uncertainty to incorrect predictions
- two main sources of uncertainty: aleatoric and epistemic
- aleatoric: uncertainty in the data itself (e.g. predicting coin toss; can't do it with certainty but can predict the probability)
- epistemic: model uncertainty (if there is not enough data, deep learning models may not have the correct parameters)
## Maximum Likelihood estimation
Probability density (or mass, for discrete distributions) function:
$$
P(y | \theta) = \text{Prob} (\text{sampling value $y$ from a distribution with parameter $\theta$})
$$
Multiple samples:
$$
P(y_1, \ldots, y_n | \theta) = \prod_{i=1}^n P(y_i | \theta)
$$
These functions are used for **probability**, where you know the distribution and want to make deductions about possible values sampled from it.
The **likelihood** is the reverse where the samples are fixed and $\theta$ is considered the independent variable. In this case, you know the samples from data collected but don't know $\theta$.
$$
L(\theta | y_1, \ldots, y_n) = P(y_1, \ldots, y_n | \theta)
$$
## Example: Bernoulli Distribution
$$
\begin{align}
L(\theta | y) &= \begin{cases}
1 - \theta \quad \text{if} \, y = 0 \\
\theta \qquad \, \, \, \text{if} \, y = 1 \\
\end{cases} \\
&= (1 - \theta)^{1 - y} \theta^y \quad y \in \{0,1\}
\end{align}
$$
Assuming samples are independent:
$$
L(\theta | y_1, \ldots, y_n) = \prod_{i=1}^n (1 - \theta)^{1 - y_i} \theta^{y_i}
$$
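As a concrete sketch (added for illustration): for i.i.d. Bernoulli samples, the likelihood is maximized at the sample mean, which a small grid search confirms:
```python
import numpy as np

y = np.array([1, 0, 1, 1, 0, 1, 1, 0])  # observed coin flips
thetas = np.linspace(0.01, 0.99, 99)
nll = -np.array([np.sum(y * np.log(t) + (1 - y) * np.log(1 - t)) for t in thetas])
print(thetas[np.argmin(nll)], y.mean())  # grid argmin lands next to the sample mean 0.625
```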
## Example: Normal Distribution
$$
L(\mu, \sigma | y) = \frac{1}{\sqrt{2 \pi \sigma^2}} \exp \Big( - \frac{1}{2 \sigma^2} (y - \mu)^2 \Big)
$$
Assuming samples are independent:
$$
L(\mu, \sigma | y_1, \ldots, y_n) = \prod_{i=1}^n \frac{1}{\sqrt{2 \pi \sigma^2}} \exp \Big( - \frac{1}{2 \sigma^2} (y_i - \mu)^2 \Big)
$$
The likelihood is the same as the probability density function except it is a function of $\mu$ and $\sigma$ and the samples are considered constant.
## Maximum Likelihood Estimate
Usually we want to find the maximum likelihood estimate $\theta_{MLE}$, which is the value of $\theta$ that maximizes $L(\theta | y_1, \ldots, y_n)$.
## Negative log-likelihood
The likelihood is a product (assuming independent observations) which can be turned into a sum by taking the log of the product. Since the log function increases with its argument, maximizing the log-likelihood is equivalent to maximizing the likelihood. Since the loss function must be a minimization problem, we consider the negative log-likelihood:
$$
\theta_{MLE} = \arg \min_{\theta} - \sum_{i=1}^n \log L(y_i | \theta)
$$
This is often used to find the best weights in a neural network, where we assume that the network's predictions parameterize a distribution from which the true label is drawn. For binary classifiers trained with (sparse) categorical cross-entropy, the loss is exactly the negative log-likelihood of a Bernoulli distribution. In regression, minimizing the sum of squared errors is equivalent to minimizing the negative log-likelihood of a Gaussian distribution (assuming a Gaussian error term with constant variance); a small numerical sketch of this follows below.
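A tiny numerical sketch of the regression case (added for illustration): with a unit-variance Gaussian, the negative log-likelihood differs from half the sum of squared errors only by a constant, so both are minimized by the same parameters:
```python
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.normal(size=5)
y_pred = rng.normal(size=5)

sse = 0.5 * np.sum((y_true - y_pred) ** 2)                            # half sum of squared errors
nll = np.sum(0.5 * np.log(2 * np.pi) + 0.5 * (y_true - y_pred) ** 2)  # Gaussian NLL, sigma=1
print(nll - sse, 5 * 0.5 * np.log(2 * np.pi))  # the difference is exactly the constant term
```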
## DistributionLambda Layer
```python
model = Sequential([
Dense(1, input_shape=(2,)),
tfpl.DistributionLambda(
lambda t: tfd.Normal(loc=t, scale=1), # output of previous layer describes mean of this normal distribution
convert_to_tensor_fn=tfd.Distribution.sample) # if you want to use output downstream of model, it
# needs to be converted from distribution object into a tensor,
# which can be done by sampling from distribution
]) # (could also be done with mean or mode)
# one required argument for Distribution lambda is a function that takes previous layer's output as input and returns distribution object
# this explicitly captures the uncertainty in the model by converting the output in the dense layer to a distribution
# output is now a distribution object instead of a scaler
# e.g. X_train --> shape (16,2) --> model(X_train) --> Normal dist with batch_shape (16,1) and event shape ()
# batch_shape (16,1) for number of samples (16) and number of outputs from previous dense layer (1)
def nll(y_true, y_pred):
return - y_pred.log_prob(y_true) # remember that y_pred is now a distribution object
model.compile(loss=nll, optimizer='rmsprop')
# model.fit(X_train, y_train, epochs=10)
# model(X_test).sample() # get predictions by sampling from distribution
# model(X_test).mean()
```
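As a small usage sketch (added for illustration; `X_demo` is a stand-in assumption for real inputs):
```python
import numpy as np

X_demo = np.random.randn(16, 2).astype('float32')
dist = model(X_demo)     # a Normal distribution object with batch_shape (16, 1)
samples = dist.sample()  # convert to a tensor by sampling, shape (16, 1)
means = dist.mean()      # or deterministically via the mean, shape (16, 1)
```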
Create a probabilistic model using the `DistributionLambda` layer whose first layer represents:
$$
y = sigmoid(x) = \frac{1}{1 + \exp (-x)}
$$
```python
# deterministic model
model = Sequential([
Dense(1, input_shape=(1,), activation='sigmoid',
kernel_initializer=tf.constant_initializer(1),
bias_initializer=tf.constant_initializer(0))
])
x = np.linspace(-5,5,100)
plt.scatter(x, model.predict(x), alpha=0.4)
plt.plot(x, 1/(1+np.exp(-x)), color='r', alpha=0.8)
plt.show()
```
```python
# deterministic model always returns same prediction for same x value
x = [0]
for _ in range(5):
print(model.predict(x))
```
[[0.5]]
[[0.5]]
[[0.5]]
[[0.5]]
[[0.5]]
```python
# probabilistic model
model = Sequential([
Dense(1, input_shape=(1,), activation='sigmoid',
kernel_initializer=tf.constant_initializer(1),
bias_initializer=tf.constant_initializer(0)),
tfpl.DistributionLambda(lambda t: tfd.Bernoulli(probs=t),
convert_to_tensor_fn=tfd.Distribution.sample)
])
x = np.linspace(-5,5,100)
plt.scatter(x, model.predict(x), alpha=0.4)
plt.plot(x, 1/(1+np.exp(-x)), color='r', alpha=0.8)
plt.show()
```
```python
x = [0]
for _ in range(5):
print(model.predict(x))
```
[[1]]
[[1]]
[[0]]
[[0]]
[[0]]
```python
X_train = np.linspace(-5,5,500)[:, np.newaxis]
y_train = model.predict(X_train)
fig, ax = plt.subplots(figsize=(5,5))
ax.scatter(X_train, y_train, alpha=0.04, color='b', label='samples')
ax.plot(X_train, model(X_train).mean().numpy().flatten(), color='red', alpha=0.8, label='mean')
ax.legend()
plt.show()
```
```python
# change initial weights/biases so we can train model
model_untrained = Sequential([
Dense(1, input_shape=(1,), activation='sigmoid',
kernel_initializer=tf.constant_initializer(2),
bias_initializer=tf.constant_initializer(2)),
tfpl.DistributionLambda(lambda t: tfd.Bernoulli(probs=t),
convert_to_tensor_fn=tfd.Distribution.sample)
])
def nll(y_true, y_pred):
return -y_pred.log_prob(y_true)
model_untrained.compile(loss=nll, optimizer=tf.keras.optimizers.RMSprop(0.01))
epochs = [0]
training_weights = [model_untrained.weights[0].numpy()[0,0]]
training_bias = [model_untrained.weights[1].numpy()[0]]
for epoch in range(100):
model_untrained.fit(X_train, y_train, epochs=1, verbose=False)
epochs.append(epoch)
training_weights.append(model_untrained.weights[0].numpy()[0,0])
training_bias.append(model_untrained.weights[1].numpy()[0])
```
```python
plt.plot(epochs, training_weights, label='weight')
plt.plot(epochs, training_bias, label='bias')
plt.axhline(1, label='true_weight', color='k', linestyle=':')
plt.axhline(0, label='true_bias', color='k', linestyle='--')
plt.legend()
plt.show()
```
## Probabilistic Layers
```python
model = Sequential([
Dense(16, activation='relu', input_shape=(2,)),
Dense(2), # (batch_shape, 2)
tfpl.DistributionLambda(lambda t: tfd.Independent(tfd.Normal(loc=t[..., :1],
scale=tf.math.softplus(t[...,1:]))))
# softplus is needed to make sure scale is positive
# ... is needed to account for batch_shapes of any rank
])
# alternatively:
model = Sequential([
Dense(16, activation='relu', input_shape=(2,)),
Dense(2),
tfpl.IndependentNormal(event_shape=(1)) # accomplishes exact same thing as above
# event shape of 1 means this layer requires two parameters to be defined: mean and std dev
# this is why the previous dense layer has two units
# it also passes std devs through softplus function to make sure they are positive
])
model = Sequential([
Dense(16, activation='relu', input_shape=(2,)),
Dense(4), # requires 4 now
tfpl.IndependentNormal(event_shape=(2)) # this is specifying 2D multivariate normal distribution so now need 4 inputs
# two means and two std devs of a covariance matrix diagonal
])
model = Sequential([
Dense(16, activation='relu', input_shape=(2,)),
Dense(8), # requires 8 now
tfpl.IndependentNormal(event_shape=([2,2]))
])
# potentially tricky to know exactly how many params are needed for a certain event shape
# use params_size method in previous dense layer to have it figure out for you
event_shape = 2
model = Sequential([
Dense(16, activation='relu', input_shape=(2,)),
Dense(tfpl.IndependentNormal.params_size(event_shape)),
tfpl.IndependentNormal(event_shape,
convert_to_tensor_fn=tfd.Distribution.sample)
])
# for multi class classification problems, you can use OneHotCategorical layer instead of Dense layer with softmax at the end
num_classes = 10
model = Sequential([
Conv2D(16, 3, activation='relu', input_shape=(32,32,3)),
MaxPooling2D(3),
Flatten(),
Dense(64, activation='relu'),
Dense(tfpl.OneHotCategorical.params_size(num_classes)),
tfpl.OneHotCategorical(num_classes) # replaces Dense(num_classes, activation='softmax')
])
model.compile(loss=lambda y_true, y_pred: -y_pred.log_prob(y_true)) # y_true is one hot vector --> this returns nll of correct class
```
```python
# create training data
X_train = np.linspace(-1,1,100)[:,np.newaxis]
y_train = X_train + 0.3*np.random.randn(100)[:, np.newaxis]
plt.scatter(X_train, y_train, alpha=0.4)
```
```python
# deterministic linear regression with MSE loss
model = Sequential([
Dense(1, input_shape=(1,))
])
model.compile(loss=tf.keras.losses.MeanSquaredError(), optimizer=tf.keras.optimizers.RMSprop(0.005))
model.summary()
model.fit(X_train, y_train, epochs=200, verbose=False)
plt.scatter(X_train, y_train, alpha=0.4)
plt.plot(X_train, model.predict(X_train), color='red', alpha=0.8)
plt.show()
```
Disadvantage is that a deterministic model doesn't capture the uncertainty in predictions
```python
# probabilistic linear regression: same linear model, but the output is a Normal distribution
model = Sequential([
Dense(1, input_shape=(1,)),
tfpl.DistributionLambda(lambda t: tfd.Independent(tfd.Normal(loc=t, scale=1))) # parameterize mean of normal dist.
])
model.summary()
# same number of trainable params but output is now a distribution object
```
Model: "sequential_11"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_17 (Dense) (None, 1) 2
_________________________________________________________________
distribution_lambda_4 (Distr multiple 0
=================================================================
Total params: 2
Trainable params: 2
Non-trainable params: 0
_________________________________________________________________
```python
def nll(y_true, y_pred):
return -y_pred.log_prob(y_true)
model.compile(loss=nll, optimizer=tf.keras.optimizers.RMSprop(0.005))
model.fit(X_train, y_train, epochs=200, verbose=False)
```
<tensorflow.python.keras.callbacks.History at 0x7f06858b9a50>
```python
x = np.array([[0]])
y_model = model(x)
y_model
```
<tfp.distributions.Independent 'sequential_11_distribution_lambda_4_Independentsequential_11_distribution_lambda_4_Normal' batch_shape=[1] event_shape=[1] dtype=float32>
```python
y_model = model(X_train)
y_sample = y_model.sample()
y_hat = y_model.mean()
y_sd = y_model.stddev()
y_hat_m2sd = y_hat - 2 * y_sd
y_hat_p2sd = y_hat + 2 * y_sd
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15,5), sharey=True)
ax1.scatter(X_train, y_train, alpha=0.4, label='data')
ax1.scatter(X_train, y_sample, alpha=0.4, color='red', label='model sample')
ax1.legend()
ax2.scatter(X_train, y_train, alpha=0.4, label='data')
ax2.plot(X_train, y_hat, color='red', alpha=0.8, label='model $\mu$')
ax2.plot(X_train, y_hat_m2sd, color='green', alpha=0.8, label='model $\mu \pm 2 \sigma$')
ax2.plot(X_train, y_hat_p2sd, color='green', alpha=0.8)
ax2.legend()
plt.show()
```
Now let's learn mean *AND* standard deviation!
```python
event_shape = 1
model = Sequential([
Dense(tfpl.IndependentNormal.params_size(event_shape), input_shape=(1,)), # needs two units now or just use params_size
tfpl.IndependentNormal(event_shape) # instead of just mean, this parameterizes mean and std dev
])
model.summary()
```
Model: "sequential_12"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_18 (Dense) (None, 2) 4
_________________________________________________________________
independent_normal_4 (Indepe multiple 0
=================================================================
Total params: 4
Trainable params: 4
Non-trainable params: 0
_________________________________________________________________
```python
model.compile(loss=nll, optimizer=tf.keras.optimizers.RMSprop(0.005))
model.fit(X_train, y_train, epochs=200, verbose=False)
```
<tensorflow.python.keras.callbacks.History at 0x7f0696987990>
```python
y_model = model(X_train)
y_sample = y_model.sample()
y_hat = y_model.mean()
y_sd = y_model.stddev()
y_hat_m2sd = y_hat - 2 * y_sd
y_hat_p2sd = y_hat + 2 * y_sd
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15,5), sharey=True)
ax1.scatter(X_train, y_train, alpha=0.4, label='data')
ax1.scatter(X_train, y_sample, alpha=0.4, color='red', label='model sample')
ax1.legend()
ax2.scatter(X_train, y_train, alpha=0.4, label='data')
ax2.plot(X_train, y_hat, color='red', alpha=0.8, label='model $\mu$')
ax2.plot(X_train, y_hat_m2sd, color='green', alpha=0.8, label='model $\mu \pm 2 \sigma$')
ax2.plot(X_train, y_hat_p2sd, color='green', alpha=0.8)
ax2.legend()
plt.show()
```
Much better fit! Now we have information on mean and uncertainty in a model prediction (compared to just mean for deterministic model)
```python
# let's also look at nonlinear models
X_train = np.linspace(-1,1,1000)[:, np.newaxis]
# not only nonlinear trend but the error is also a function of x so it isnt same for all x unlike previous example
y_train = np.power(X_train, 3) + 0.1 * (2 + X_train) * np.random.randn(1000)[:, np.newaxis]
plt.scatter(X_train, y_train, alpha=0.1)
```
```python
event_shape = 1
model = Sequential([
Dense(8, input_shape=(1,), activation='sigmoid'),
Dense(tfpl.IndependentNormal.params_size(event_shape), input_shape=(1,)), # needs two units now or just use params_size
tfpl.IndependentNormal(event_shape) # instead of just mean, this parameterizes mean and std dev
])
model.compile(loss=nll, optimizer=tf.keras.optimizers.RMSprop(0.005))
model.summary()
model.fit(X_train, y_train, epochs=200, verbose=False)
```
Model: "sequential_13"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_19 (Dense) (None, 8) 16
_________________________________________________________________
dense_20 (Dense) (None, 2) 18
_________________________________________________________________
independent_normal_5 (Indepe multiple 0
=================================================================
Total params: 34
Trainable params: 34
Non-trainable params: 0
_________________________________________________________________
<tensorflow.python.keras.callbacks.History at 0x7f068727e510>
```python
y_model = model(X_train)
y_sample = y_model.sample()
y_hat = y_model.mean()
y_sd = y_model.stddev()
y_hat_m2sd = y_hat - 2 * y_sd
y_hat_p2sd = y_hat + 2 * y_sd
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15,5), sharey=True)
ax1.scatter(X_train, y_train, alpha=0.4, label='data')
ax1.scatter(X_train, y_sample, alpha=0.4, color='red', label='model sample')
ax1.legend()
ax2.scatter(X_train, y_train, alpha=0.4, label='data')
ax2.plot(X_train, y_hat, color='red', alpha=0.8, label='model $\mu$')
ax2.plot(X_train, y_hat_m2sd, color='green', alpha=0.8, label='model $\mu \pm 2 \sigma$')
ax2.plot(X_train, y_hat_p2sd, color='green', alpha=0.8)
ax2.legend()
plt.show()
```
## Bayes by Backprop
Instead of the weights of each neuron being a single value, you can describe them with distributions. Bayes by Backprop is a method to determine the parameters of the distributions. Mathematically complex, need to explore further.
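As a minimal sketch of the idea (assuming the `tf` and `tfd` aliases from the earlier cells; the numbers here are hypothetical), the objective is the ELBO, whose regularizing term is the KL divergence between the weight posterior and the prior; for Gaussians TFP evaluates it analytically:
```python
# Sketch: Bayes by Backprop maximizes the ELBO,
#   E_q[log p(data | w)] - KL(q(w) || p(w)),
# over the parameters of a variational posterior q(w).
# Hypothetical single-weight illustration of the KL term:
q = tfd.Normal(loc=tf.Variable(0.5), scale=1.0)  # trainable variational posterior q(w)
p = tfd.Normal(loc=0., scale=1.)                 # fixed prior p(w)
print(tfd.kl_divergence(q, p))                   # analytic KL(q || p), differentiable w.r.t. loc
```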
## DenseVariational layer
```python
# need to define the prior distribution P(w) - our belief for the distribution of params, w, before seeing any data
# standard assumption is the prior is a spherical gaussian, i.e. an independent normal dist. for each weight and bias
def prior(kernel_size, bias_size, dtype=None): # prior distribution for a dense layer
n = kernel_size + bias_size
return lambda t: tfd.Independent(tfd.Normal(loc=tf.zeros(n, dtype=dtype), scale=1), reinterpreted_batch_ndims=1)
# t is the input to the dense layer that this prior dist describes
# note that t isnt in the lambda function so the dist is the same for any input
# the prior does not have any trainable variables so it won't change during optimization
# need to define posterior distribution
def posterior(kernel_size, bias_size, dtype=None):
n = kernel_size + bias_size
return Sequential([
tfpl.VariableLayer(tfpl.IndependentNormal.params_size(n), dtype=dtype), # VariableLayer --> returns a tf variable
tfpl.IndependentNormal(n) # convert_to_tensor_fn is sampling by default (how to get weights for forward pass)
])
# similar to the prior function, this returns a callable object that returns a distribution object but it is a model rather than layer
# rules for defining prior/posterior:
# 1) inputs are kernel_size, bias_size, dtype
# 2) they return callable objects that take in tensor as input and return distribution object
N = len(X_train)
model = Sequential([
tfpl.DenseVariational(16, posterior, prior, kl_weight=1/N, activation='relu', input_shape=(8,)),
tfpl.DenseVariational(2, posterior, prior, kl_weight=1/N),
tfpl.IndependentNormal(1)
])
# replace dense layers with DenseVariational layers where everything is the same except the addition of posterior/prior functions
# this layer automatically applies a KL divergence loss which can be weighted according to the kl_weight param (1/N is the standard choice)
# objective is to maximize ELBO (evidence lower bound) which is the difference between two terms
# 1) sum of expected log likelihood over the data (the negative of the nll loss function I have been using)
# 2) KL divergence between posterior and prior
# KL divergence measures how far the posterior is from the prior, so minimizing the loss pulls the posterior toward the prior
# kind of like regularizing the posterior
# note that maximizing the ELBO requires a trade-off
# i.e. we want the posterior to maximize log likelihood but at the same time we don't want it to diverge too much from prior
model.compile(loss=lambda y_true, y_pred: -y_pred.log_prob(y_true)) # nll is estimated using a single sample of posteriors to get weights
# the KL_divergence term is automatically included in the DenseVariational layers
```
Note that the posterior is a "variational posterior" that approximates the true posterior, because it is generally too difficult to calculate the true posterior using Bayes' Theorem. We tune the parameters of the variational posterior to get as close to the true posterior as possible.
```python
# same data as before with new model that accounts for uncertainty in weights
X_train = np.linspace(-1,1,100)[:,np.newaxis]
y_train = X_train + 0.3*np.random.randn(100)[:, np.newaxis]
plt.scatter(X_train, y_train, alpha=0.4)
```
```python
def prior(kernel_size, bias_size, dtype=None):
n = kernel_size + bias_size
prior_model = Sequential([
tfpl.DistributionLambda(
lambda t: tfd.MultivariateNormalDiag(loc=tf.zeros(n), scale_diag=tf.ones(n))
) # prior is just unit gaussian for all parameters of a DenseVariational layer
]) # no trainable variables since the zeros and ones are hardcoded in
# also doesn't use the input so we don't even need to use Sequential but might be convenient since posterior does need it
return prior_model
# need to assume a distribution for posterior --> let's assume multivariate Gaussian
def posterior(kernel_size, bias_size, dtype=None):
n = kernel_size + bias_size
# we want trainable parameters, which we get using VariableLayer which are then passed to next layer
posterior_model = Sequential([
tfpl.VariableLayer(tfpl.MultivariateNormalTriL.params_size(n), dtype=dtype),
tfpl.MultivariateNormalTriL(n)
])
return posterior_model
```
```python
model = Sequential([
tfpl.DenseVariational(1, posterior, prior, input_shape=(1,), kl_weight=1/len(X_train), kl_use_exact=True)
])
# can use deterministic loss since the output of this model is not a distribution object
model.compile(loss=tf.keras.losses.MeanSquaredError(), optimizer=tf.keras.optimizers.RMSprop(0.005))
model.summary()
```
Model: "sequential_15"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_variational_2 (DenseVa (None, 1) 5
=================================================================
Total params: 5
Trainable params: 5
Non-trainable params: 0
_________________________________________________________________
A deterministic linear regression has two trainable params (slope and intercept) but this has 5. These are the mean and variance of both the slope and the intercept (4), plus the covariance between them. Note that the MultivariateNormalTriL scale param is a lower-triangular matrix, which has 3 non-zero params for a 2x2 matrix since the upper-right corner is 0. These three params encode the two variances and the covariance just described.
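A quick sanity check of that count (a sketch, using the `tfpl` alias from above):
```python
# 2 means + 3 lower-triangular scale entries = 5 parameters for a
# 2-D MultivariateNormalTriL, matching the summary above.
print(tfpl.MultivariateNormalTriL.params_size(2))  # -> 5
```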
```python
model.fit(X_train, y_train, epochs=500, verbose=False)
```
<tensorflow.python.keras.callbacks.History at 0x7f068fdc6b50>
```python
dummy_input = np.array([[0]])
model_prior = model.layers[0]._prior(dummy_input)
model_posterior = model.layers[0]._posterior(dummy_input)
print('prior mean: ', model_prior.mean().numpy()) # did not change during training since it is not trainable
print('prior variance: ', model_prior.variance().numpy()) # did not change during training since it is not trainable
print('posterior mean: ', model_posterior.mean().numpy())
print('posterior covariance: ', model_posterior.covariance().numpy())
```
prior mean: [0. 0.]
prior variance: [1. 1.]
posterior mean: [ 1.0255601 -0.01181691]
posterior covariance: [[ 0.01487845 -0.00026619]
[-0.00026619 0.00594118]]
```python
# plot an ensemble of linear regressions with weights sample from posterior distribution
plt.scatter(X_train, y_train, alpha=0.4)
for _ in range(10):
y_model = model(X_train) # each time I call model on training data, a different sample is taken from posterior
plt.plot(X_train, y_model, color='red', alpha=0.8)
plt.show()
```
```python
# let's also look at nonlinear models
X_train = np.linspace(-1,1,1000)[:, np.newaxis]
# not only nonlinear trend but the error is also a function of x so it isnt same for all x unlike previous example
y_train = np.power(X_train, 3) + 0.1 * (2 + X_train) * np.random.randn(1000)[:, np.newaxis]
plt.scatter(X_train, y_train, alpha=0.1)
```
```python
# this model accounts for uncertainty in weights and also returns a distribution object
# allows us to model both epistemic and aleatoric uncertainty, respectively
model = Sequential([
tfpl.DenseVariational(8, posterior, prior, kl_weight=1/len(X_train), input_shape=(1,), activation='sigmoid'),
tfpl.DenseVariational(tfpl.IndependentNormal.params_size(1), posterior, prior, kl_weight=1/len(X_train)),
tfpl.IndependentNormal(1)
])
def nll(y_true, y_pred):
return -y_pred.log_prob(y_true)
model.compile(loss=nll, optimizer=tf.keras.optimizers.RMSprop(0.005))
model.summary()
```
Model: "sequential_16"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_variational_3 (DenseVa (None, 8) 152
_________________________________________________________________
dense_variational_4 (DenseVa (None, 2) 189
_________________________________________________________________
independent_normal_7 (Indepe multiple 0
=================================================================
Total params: 341
Trainable params: 341
Non-trainable params: 0
_________________________________________________________________
```python
model.fit(X_train, y_train, epochs=1000, verbose=False)
```
<tensorflow.python.keras.callbacks.History at 0x7f068f626450>
```python
model.evaluate(X_train, y_train)
```
32/32 [==============================] - 1s 1ms/step - loss: 0.0142
0.01424924936145544
```python
plt.scatter(X_train, y_train, marker='.', alpha=0.2, label='data')
for _ in range(5):
y_model = model(X_train)
y_hat = y_model.mean()
y_hat_m2sd = y_hat - 2 * y_model.stddev()
y_hat_p2sd = y_hat + 2 * y_model.stddev()
if _ == 0:
plt.plot(X_train, y_hat, color='red', alpha=0.8, label='model $\mu$')
plt.plot(X_train, y_hat_m2sd, color='green', alpha=0.8, label='model $\mu \pm 2 \sigma$')
plt.plot(X_train, y_hat_p2sd, color='green', alpha=0.8)
else:
plt.plot(X_train, y_hat, color='red', alpha=0.8)
plt.plot(X_train, y_hat_m2sd, color='green', alpha=0.8)
plt.plot(X_train, y_hat_p2sd, color='green', alpha=0.8)
plt.legend()
plt.show()
```
This allows us to model the uncertainty in the prediction by learning the variation as well as model the uncertainty in the parameters themselves, i.e. both aleatoric and epistemic uncertainty, respectively
## Reparameterization layers
The DenseVariational layer is an updated version of the DenseReparameterization layer; it is worth checking whether an updated equivalent exists for convolutional layers (there wasn't one when the video was filmed).
```python
# reparamaterization layers have same underlying algorithms and theoretical background as dense variational layer
model = Sequential([
tfpl.Convolution2DReparameterization(16, 3, activation='relu', input_shape=(28,28,1),
kernel_posterior_fn=tfpl.default_mean_field_normal_fn(),
kernel_prior_fn=tfpl.default_multivariate_normal_fn), # replaces Conv2D
MaxPooling2D(3),
Flatten(),
tfpl.DenseReparameterization(tfpl.OneHotCategorical.params_size(10)), # older version of Dense Variational layer
tfpl.OneHotCategorical(10)
])
# tfpl.default_mean_field_normal_fn() returns independent normal dist with trainable mean/stddev
# can also define a function to make a posterior/prior
def custom_multivariate_normal_fn(dtype, shape, name, trainable, add_variable_fn): # these are required inputs
normal = tfd.Normal(loc=tf.zeros(shape, dtype), scale=2*tf.ones(shape, dtype))
batch_ndims = tf.size(normal.batch_shape_tensor())
return tfd.Independent(normal, reinterpreted_batch_ndims=batch_ndims) # must return distribution object
model = Sequential([
tfpl.Convolution2DReparameterization(16, 3, activation='relu', input_shape=(28,28,1),
kernel_posterior_fn=tfpl.default_mean_field_normal_fn(),
kernel_prior_fn=custom_multivariate_normal_fn), # replaces Conv2D
MaxPooling2D(3),
Flatten(),
tfpl.DenseReparameterization(tfpl.OneHotCategorical.params_size(10)), # older version of Dense Variational layer
tfpl.OneHotCategorical(10)
])
```
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer.py:2191: UserWarning: `layer.add_variable` is deprecated and will be removed in a future version. Please use `layer.add_weight` method instead.
warnings.warn('`layer.add_variable` is deprecated and '
```python
# load HAR dataset (smartphone accelerometer data while person walks)
def load_HAR_data():
X_train = np.load('gdrive/MyDrive/Colab Notebooks/Datasets/HAR/x_train.npy')[...,:6]
y_train = np.load('gdrive/MyDrive/Colab Notebooks/Datasets/HAR/y_train.npy') - 1
X_test = np.load('gdrive/MyDrive/Colab Notebooks/Datasets/HAR/x_test.npy')[...,:6]
y_test = np.load('gdrive/MyDrive/Colab Notebooks/Datasets/HAR/y_test.npy') - 1
return (X_train, y_train), (X_test, y_test)
label_to_activity = {0:'walking horizontally', 1:'walking upstairs', 2:'walking downstairs',
3:'sitting', 4:'standing', 5:'laying'}
# change label to one-hot encoding
def integer_to_onehot(data_integer):
data_onehot = np.zeros(shape=(data_integer.shape[0], data_integer.max()+1))
for row in range(data_integer.shape[0]):
integer = int(data_integer[row])
data_onehot[row, integer] = 1
return data_onehot
(X_train, y_train), (X_test, y_test) = load_HAR_data()
y_train_oh = integer_to_onehot(y_train)
y_test_oh = integer_to_onehot(y_test)
```
```python
# plot data
def make_plots(num_examples_per_category):
for label in range(6):
X_label = X_train[y_train[:,0] == label]
for i in range(num_examples_per_category):
fig, ax = plt.subplots(figsize=(10,1))
ax.imshow(X_label[100*i].T, cmap='Greys', vmin=-1, vmax=1)
if i == 0:
ax.set_title(label_to_activity[label])
plt.show()
make_plots(1)
```
```python
# standard deterministic model:
model = Sequential([
Conv1D(input_shape=(128,6), filters=8, kernel_size=16, activation='relu'),
MaxPooling1D(16),
Flatten(),
Dense(6, activation='softmax')
])
model.summary()
```
Model: "sequential_19"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1d (Conv1D) (None, 113, 8) 776
_________________________________________________________________
max_pooling1d (MaxPooling1D) (None, 7, 8) 0
_________________________________________________________________
flatten_3 (Flatten) (None, 56) 0
_________________________________________________________________
dense_21 (Dense) (None, 6) 342
=================================================================
Total params: 1,118
Trainable params: 1,118
Non-trainable params: 0
_________________________________________________________________
```python
# convert to probabilistic model
divergence_fn = lambda q, p, _: tfd.kl_divergence(q,p) / len(X_train) # only works if analytical solution is known by tensorflow
model = Sequential([
tfpl.Convolution1DReparameterization(
input_shape=(128,6), filters=8, kernel_size=16, activation='relu',
kernel_prior_fn=tfpl.default_multivariate_normal_fn,
kernel_posterior_fn=tfpl.default_mean_field_normal_fn(is_singular=False),
kernel_divergence_fn=divergence_fn,
bias_prior_fn=tfpl.default_multivariate_normal_fn,
bias_posterior_fn=tfpl.default_mean_field_normal_fn(is_singular=False),
bias_divergence_fn=divergence_fn,
),
MaxPooling1D(16),
Flatten(),
tfpl.DenseReparameterization(
units=tfpl.OneHotCategorical.params_size(6), activation=None,
kernel_prior_fn=tfpl.default_multivariate_normal_fn,
kernel_posterior_fn=tfpl.default_mean_field_normal_fn(is_singular=False),
kernel_divergence_fn=divergence_fn,
bias_prior_fn=tfpl.default_multivariate_normal_fn,
bias_posterior_fn=tfpl.default_mean_field_normal_fn(is_singular=False),
bias_divergence_fn=divergence_fn,
),
tfpl.OneHotCategorical(event_size=6)
])
model.summary() # twice as many trainable params since each param is replaced with mean/stddev
```
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer.py:2191: UserWarning: `layer.add_variable` is deprecated and will be removed in a future version. Please use `layer.add_weight` method instead.
warnings.warn('`layer.add_variable` is deprecated and '
Model: "sequential_21"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1d_reparameterization_3 (None, 113, 8) 1552
_________________________________________________________________
max_pooling1d_3 (MaxPooling1 (None, 7, 8) 0
_________________________________________________________________
flatten_6 (Flatten) (None, 56) 0
_________________________________________________________________
dense_reparameterization_4 ( (None, 6) 684
_________________________________________________________________
one_hot_categorical_5 (OneHo multiple 0
=================================================================
Total params: 2,236
Trainable params: 2,236
Non-trainable params: 0
_________________________________________________________________
```python
def kl_approx(q, p, q_tensor): # Monte Carlo approximation of KL divergence
    return tf.reduce_mean(q.log_prob(q_tensor) - p.log_prob(q_tensor))
divergence_fn = lambda q, p, q_tensor: kl_approx(q, p, q_tensor) / len(X_train) # approximate the KL divergence when no analytical solution is available
```
```python
def nll(y_true, y_pred):
return -y_pred.log_prob(y_true)
model.compile(loss=nll, optimizer=tf.keras.optimizers.RMSprop(0.005), metrics='accuracy', experimental_run_tf_function=False)
# need experimental_run_tf_function=False when using reparameterization layers
```
```python
model.fit(X_train, y_train_oh, epochs=20, verbose=False)
model.evaluate(X_train, y_train_oh)
model.evaluate(X_test, y_test_oh)
```
230/230 [==============================] - 1s 2ms/step - loss: 0.7120 - accuracy: 0.6878
93/93 [==============================] - 0s 3ms/step - loss: 0.9826 - accuracy: 0.6607
[0.9825744032859802, 0.6606718897819519]
```python
def analyze_model_predictions(image_num):
# show accelerometer data
print('Accelerometer data:')
fig, ax = plt.subplots(figsize=(10,1))
ax.imshow(X_test[image_num].T, cmap='Greys', vmin=-1, vmax=1)
ax.axis('off')
plt.show()
print('True activity: ', label_to_activity[y_test[image_num, 0]])
print('')
# print probability model assigns
print('Model estimated probs:')
predicted_probs = np.empty(shape=(200,6))
for i in range(200):
predicted_probs[i] = model(X_test[image_num][np.newaxis, ...]).mean().numpy()[0]
pct_2p5 = np.array([np.percentile(predicted_probs[:,i], 2.5) for i in range(6)])
pct_97p5 = np.array([np.percentile(predicted_probs[:,i], 97.5) for i in range(6)])
fig, ax = plt.subplots(figsize=(9,3))
bar = ax.bar(np.arange(6), pct_97p5, color='red')
bar[y_test[image_num, 0]].set_color('green')
bar = ax.bar(np.arange(6), pct_2p5, color='white', linewidth=1, edgecolor='white')
ax.set_xticklabels([''] + [activity for activity in label_to_activity.values()], rotation=45, horizontalalignment='right')
ax.set_ylim([0,1])
ax.set_ylabel('Probability')
plt.show()
```
```python
analyze_model_predictions(21)
```
```python
analyze_model_predictions(1137)
```
```python
```
# Interact Exercise 6
## Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
```python
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
```
```python
from IPython.display import Image
from IPython.html.widgets import interact, interactive, fixed  # deprecated location; current environments use `from ipywidgets import interact, interactive, fixed`
```
:0: FutureWarning: IPython widgets are experimental and may change in the future.
## Exploring the Fermi distribution
In quantum statistics, the [Fermi-Dirac](http://en.wikipedia.org/wiki/Fermi%E2%80%93Dirac_statistics) distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is:
```python
Image('fermidist.png')
```
In this equation:
* $\epsilon$ is the single particle energy.
* $\mu$ is the chemical potential, which is related to the total number of particles.
* $k$ is the Boltzmann constant.
* $T$ is the temperature in Kelvin.
In the cell below, typeset this equation using LaTeX:
$$\begin{align}
F(\epsilon)=\frac1{e^{(\epsilon-\mu)/kT}+1}
\end{align}$$
Define a function `fermidist(energy, mu, kT)` that computes the distribution function for a given value of `energy`, chemical potential `mu` and temperature `kT`. Note here, `kT` is a single variable with units of energy. Make sure your function works with an array and don't use any `for` or `while` loops in your code.
```python
def fermidist(energy, mu, kT):
    """Compute the Fermi distribution at energy, mu and kT."""
    F = 1/(np.exp((energy-mu)/kT)+1)
    return F
```
```python
assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033)
assert np.allclose(fermidist(np.linspace(0.0,1.0,10), 1.0, 10.0),
np.array([ 0.52497919, 0.5222076 , 0.51943465, 0.5166605 , 0.51388532,
0.51110928, 0.50833256, 0.50555533, 0.50277775, 0.5 ]))
```
Write a function `plot_fermidist(mu, kT)` that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters `mu` and `kT`.
* Use energies over the range $[0,10.0]$ and a suitable number of points.
* Choose an appropriate x and y limit for your visualization.
* Label your x and y axis and the overall visualization.
* Customize your plot in 3 other ways to make it effective and beautiful.
```python
def plot_fermidist(mu, kT):
    energy = np.linspace(0.0, 10.0, 100)  # finer grid than np.arange(10.0) for a smooth curve
    y = fermidist(energy, mu, kT)
    plt.plot(energy, y)
    plt.xlabel('Energy $\epsilon$ (J)')
    plt.ylabel('Probability')
plt.title('Fermi-Dirac Distribution')
plt.ylim(0,1)
plt.xlim(0,10)
```
```python
plot_fermidist(4.0, 1.0)
```
```python
assert True # leave this for grading the plot_fermidist function
```
Use `interact` with `plot_fermidist` to explore the distribution:
* For `mu` use a floating point slider over the range $[0.0,5.0]$.
* for `kT` use a floating point slider over the range $[0.1,10.0]$.
```python
interact(plot_fermidist,mu=(0,5.0),kT=(.1,10.0))
```
Provide complete sentence answers to the following questions in the cell below:
* What happens when the temperature $kT$ is low?
* What happens when the temperature $kT$ is high?
* What is the effect of changing the chemical potential $\mu$?
* The number of particles in the system are related to the area under this curve. How does the chemical potential affect the number of particles.
Use LaTeX to typeset any mathematical symbols in your answer.
When the temperature $kT$ is low, the distribution approaches a step: states with $\epsilon \lt \mu$ are occupied with probability near 1, and the probability falls rapidly toward 0 once $\epsilon$ exceeds $\mu$.
When the temperature $kT$ is high, the probability becomes more evenly distributed among the energy levels and the step is smeared out.
Changing the chemical potential $\mu$ shifts the position of the step: increasing $\mu$ moves the distribution toward higher energies.
Since the number of particles is related to the area under this curve, increasing $\mu$ increases that area and therefore the number of particles.
<a href="https://colab.research.google.com/github/kalz2q/mycolabnotebooks/blob/master/chartmath204trigonometry.ipynb" target="_parent">Open In Colab</a>
# Notes
A project for learning mathematics with Colab.
Currently working through the trigonometric functions (trigonometry) from Math II.
# General Angles
**General angles**
In the plane, rotate a half-line $\mathrm{OP}$ about the point $\mathrm O$.
The half-line $\mathrm{OP}$ is called the radius vector (terminal side), and the half-line $\mathrm{OX}$ the initial side.
An angle measured counterclockwise (opposite to the rotation of a clock's hands) is called a positive angle.
If one of the angles between $\mathrm{OX}$ and $\mathrm{OP}$ is $\alpha$, then
$\theta = \alpha + 360^\circ \times n$
is called a general angle of the radius vector $\mathrm{OP}$.
We also say "an angle of the first (second, ...) quadrant"; when the terminal side lies on a coordinate axis, the angle belongs to no quadrant.
Basic Example 108, p.165
Sketch the terminal side of each of the following angles, and state which quadrant each lies in.
* 650°
* 800°
* -630°
* -1280°
```
# sympy experiment => display alpha and beta; render * as \times in the LaTeX output
from sympy import *
init_printing(latex_printer=lambda *args, **kwargs: latex(*args, mul_symbol='times', **kwargs))
alpha, beta, theta = symbols('alpha beta theta')  # `symbols` comes from the star import; `sympy.symbols` would fail here since `sympy` itself is not imported
n = symbols('n')
display(Eq(theta, alpha + 360 * n))
```
```
%%latex
% Reference: notation for angles (a cell magic must be the first line of the cell)
\angle A = 360^\circ
```
\angle A = 360^\circ
---
Radian measure
In a circle of radius $r$, the central angle subtended by an arc $\mathrm{AB}$ whose length equals the radius is the same for every $r$. This angle is called 1 radian, and measuring angles in radians is called radian measure; measuring angles so that a right angle is 90° is called degree measure.
* 1 radian = $\displaystyle \left ( \frac{180}{\pi} \right )^\circ$
* 180° = $\pi$ radians
* arc length $\quad l = r \theta$
* area $\quad S = \frac 1 2 r^2 \theta = \frac 1 2 rl$
Basic Example 109, p.167
Rewrite each of the following angles: degrees into radians, radians into degrees.
* 72°
* -320°
* $\frac{4}{15} \pi$
* $\frac{-13}{4}\pi$
Find the arc length and area of a sector of radius 4 with central angle 150°.
```
from sympy import *
display(Rational(72,180)*pi)
display(Rational(-320,180)*pi)
display(Rational(4,15)*180)
display(Rational(-13,4)*180)
```
$\displaystyle \frac{2 \pi}{5}$
$\displaystyle - \frac{16 \pi}{9}$
$\displaystyle 48$
$\displaystyle -585$
```
# arc length l = r*theta, so
print("{} π".format(4 * 150/180))
# area S = (1/2)*r*l, so
print("{} π".format(0.5*4* 4*150/180))
```
3.3333333333333335 π
6.666666666666667 π
---
Definition of the trigonometric functions for a general angle
Let $\theta$ be a general angle for the radius vector $\mathrm{OP}$, with $\mathrm{OP} = r$ and $\mathrm{P}(x,y)$. Then:
* sine $\quad \sin \theta = \displaystyle \frac y r $
* cosine $\quad \cos \theta = \displaystyle \frac x r $
* tangent $\quad \tan \theta = \displaystyle \frac y x $
However, $\tan \theta$ is not defined for $\theta = \displaystyle \frac \pi 2 + n\pi$ ($n$ an integer).
Range of the trigonometric functions
$-1 \leq \sin \theta \leq 1,\; -1 \leq \cos \theta \leq 1,\; \tan \theta$ takes every real value
---
Relations among the trigonometric functions
* $\tan \theta = \displaystyle \frac {\sin \theta}{\cos \theta}$
* $\sin^2 \theta + \cos^2 \theta = 1$
* $1 + \displaystyle \tan^2 \theta = \frac {1}{\cos^2 \theta}$
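These relations can be checked symbolically; a small sympy sketch (each difference should simplify to 0):
```python
from sympy import symbols, sin, cos, tan, simplify
th = symbols('theta')
print(simplify(tan(th) - sin(th)/cos(th)))      # 0
print(simplify(sin(th)**2 + cos(th)**2 - 1))    # 0
print(simplify(1 + tan(th)**2 - 1/cos(th)**2))  # 0
```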
---
Basic Example 110, p.170
Find the values of $\sin \theta, \cos \theta, \tan \theta$ when $\theta$ takes each of the following values.
* $\displaystyle \frac{23}{6} \pi$
* $\displaystyle - \frac 5 4 \pi$
```
from sympy.abc import *
from sympy import *
x = Rational(23,6) * pi
display(x)
display(sin(x))
display(cos(x))
display(tan(x))
y = -Rational(5,4) * pi
display(y)
display(sin(y))
display(cos(y))
display(tan(y))
```
$\displaystyle \frac{23 \pi}{6}$
$\displaystyle - \frac{1}{2}$
$\displaystyle \frac{\sqrt{3}}{2}$
$\displaystyle - \frac{\sqrt{3}}{3}$
$\displaystyle - \frac{5 \pi}{4}$
$\displaystyle \frac{\sqrt{2}}{2}$
$\displaystyle - \frac{\sqrt{2}}{2}$
$\displaystyle -1$
---
Basic Example 111, p.171 (1)
Let $\displaystyle \frac 3 2 \pi \lt \theta \lt 2 \pi$. Given $\cos \theta = \displaystyle \frac 5 {13}$, find $\sin \theta$ and $\tan \theta$.
```
from sympy.abc import *
from sympy import *
# given: cos(theta) = Rational(5,13)
# from the identity sin(theta)**2 = 1 - cos(theta)**2
display(1 - Rational(5,13)**2)
display(sqrt(1 - Rational(5,13)**2)) # |sin(theta)|; theta lies in the fourth quadrant, so sin(theta) = -12/13
# by definition tan(theta) = sin(theta)/cos(theta), so tan(theta) = -12/5 here (the ratio of magnitudes is shown below)
display(Rational(12,13)/Rational(5,13))
```
$\displaystyle \frac{144}{169}$
$\displaystyle \frac{12}{13}$
$\displaystyle \frac{12}{5}$
---
Basic Example 111, p.171 (2)
Let $\displaystyle \pi \lt \theta \lt \frac 3 2 \pi$. Given $\tan \theta = 7$, find $\sin \theta$ and $\cos \theta$.
```
from sympy.abc import *
from sympy import *
# given: tan(theta) = 7
# from the identity cos(theta)**2 = 1/(1 + tan(theta)**2)
# and sin(theta) = tan(theta) * cos(theta)
display(S(1) / (1 + 7**2)) # S(1) sympifies 1, i.e. Rational(1,1)
display(sqrt(S(1) / (1 + 7**2))) # |cos(theta)|; theta lies in the third quadrant, so cos(theta) = -sqrt(2)/10
display(S(7) * (-sqrt(2)/10)) # sin(theta) = tan(theta)*cos(theta)
```
$\displaystyle \frac{1}{50}$
$\displaystyle \frac{\sqrt{2}}{10}$
$\displaystyle - \frac{7 \sqrt{2}}{10}$
---
Basic Example 112, p.172 (1)
* Prove the identity $\;\; \displaystyle \frac {\cos \theta}{1 + \sin \theta} + \tan \theta = \frac 1 {\cos \theta}$.
LHS
$\qquad = \displaystyle \frac {\cos \theta}{1 + \sin \theta} + \frac {\sin \theta}{\cos \theta}$
$\qquad = \displaystyle \frac {\cos^2\theta}{(1 + \sin \theta)\cos\theta} + \frac {(1 + \sin \theta)\sin\theta}{(1 + \sin \theta)\cos \theta}$
$\qquad = \displaystyle \frac {\cos^2\theta + \sin^2\theta +\sin\theta}{(1 + \sin \theta)\cos\theta}$
$\qquad = \displaystyle \frac {1 + \sin \theta}{(1 + \sin \theta)\cos\theta}$
$\qquad = \displaystyle \frac {1}{\cos\theta}$
---
Basic Example 112, p.172 (2)
* Evaluate $\cos^2\theta + \sin\theta -\tan\theta (1 - \sin\theta) \cos\theta$.
The given expression
$\qquad = \displaystyle \cos^2\theta + \sin\theta -\frac {\sin\theta}{\cos\theta} (1 - \sin\theta) \cos\theta$
$\qquad = \displaystyle \cos^2\theta + \sin\theta - \sin\theta + \sin^2\theta$
$\qquad = \displaystyle \cos^2\theta + \sin^2\theta$
$\qquad = 1$
---
Basic Example 113, p.173
When $\sin\theta +\cos\theta = \displaystyle\frac{\sqrt 3}{2}\;\; (\frac \pi 2 \lt \theta \lt \pi)$, find the values of the following expressions.
* $\sin\theta\cos\theta$
* $\sin^3\theta+\cos^3\theta$
* $\cos^3\theta-\sin^3\theta$
$\sin\theta +\cos\theta = \displaystyle\frac{\sqrt 3}{2}$
Squaring both sides:
$\sin^2\theta +\cos^2\theta + 2\sin\theta\cos\theta = \displaystyle\frac{3}{4}$
$1 + 2\sin\theta\cos\theta = \displaystyle\frac{3}{4}$
$2\sin\theta\cos\theta = \displaystyle\frac{3}{4} - 1$
$2\sin\theta\cos\theta = \displaystyle\frac{-1}{4}$
$\sin\theta\cos\theta = \displaystyle\frac{-1}{8}$
(2)
$\sin^3\theta+\cos^3\theta = (\sin\theta + \cos\theta)(\sin^2\theta - \sin\theta\cos\theta + \cos^2\theta)$
$\qquad = \displaystyle\frac{\sqrt 3}{2} (\sin^2\theta - \frac{-1}{8} + \cos^2\theta)$
$\qquad = \displaystyle\frac{\sqrt 3}{2} (1 + \frac{1}{8})$
$\qquad = \displaystyle\frac{9\sqrt 3}{16}$
(3)
$\cos^3\theta-\sin^3\theta = (\cos\theta - \sin\theta)(\sin^2\theta + \sin\theta\cos\theta + \cos^2\theta)$
$\qquad = (\cos\theta - \sin\theta)(1 + \sin\theta\cos\theta)$
$\qquad = (\cos\theta - \sin\theta)(1 + \displaystyle\frac{-1}{8})$
$\qquad = \displaystyle\frac{7}{8}(\cos\theta - \sin\theta)$
Now note that
$(\cos\theta - \sin\theta)^2 = 1 - 2\sin\theta\cos\theta$
$\qquad = 1 - 2 \displaystyle\frac{-1}{8} = \frac 5 4$
In the given quadrant $\cos\theta - \sin\theta \lt 0$, so
$\cos\theta - \sin\theta = - \displaystyle\frac{\sqrt 5}{2}$
Therefore the given expression $\quad = - \displaystyle\frac 7 8 \times \frac {\sqrt 5} 2 = - \frac{7\sqrt 5}{16}$
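A numerical cross-check of the three results (a sketch: `nsolve` finds the second-quadrant root of $\sin\theta+\cos\theta=\frac{\sqrt 3}{2}$ from an initial guess of 2.0):
```python
from sympy import symbols, sin, cos, sqrt, nsolve
th = symbols('theta')
sol = nsolve(sin(th) + cos(th) - sqrt(3)/2, th, 2.0)  # theta ≈ 1.698, inside (pi/2, pi)
print(sin(sol)*cos(sol))          # ≈ -0.125  = -1/8
print(sin(sol)**3 + cos(sol)**3)  # ≈  0.9743 =  9*sqrt(3)/16
print(cos(sol)**3 - sin(sol)**3)  # ≈ -0.9783 = -7*sqrt(5)/16
```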
---
三角関数の性質 p.174
$\theta + 2n\pi$ の三角関数
* $\sin(\theta + 2n\pi) = \sin\theta$
* $\cos(\theta + 2n\pi) = \cos\theta$
* $\tan(\theta + 2n\pi) = \tan\theta$
$-\theta$ の三角関数
* $\sin(-\theta) = - \sin\theta$
* $\cos(-\theta) = \cos\theta$
* $\tan(-\theta) = - \tan\theta$
$-\theta$ の三角関数
* $\sin(-\theta) = - \sin\theta$
* $\cos(-\theta) = \cos\theta$
* $\tan(-\theta) = - \tan\theta$
# Multivariate Linear and Logistic Regression with Regularization
## Terminology & Symbols
* Training size (<span style="color:#C63">$m$</span>) - The number of samples we can use for learning
* Dimensionality (<span style="color:#C63">$d$</span>) - The number of dimensions in the input (feature) space
* Feature set (<span style="color:#C63">$X$</span>) - An $m \times d$ matrix where every row represents a single feature vector
* Target (<span style="color:#C63">$y$</span>) - An $m$-vector representing the value we are trying to predict
* Training set (<span style="color:#C63">$(X,y)$</span>) - The combined matrix of inputs and their associated known target values
* Test set - An optional set of hold out data used for validation
* Feature weights (<span style="color:#C63">$\theta$</span>) - the free variables in our modeling
* Hypothesis function (<span style="color:#C63">$h_{\theta}(x)$</span>) - the function/model we are trying to learn by manipulating $\theta$
* Loss Function (<span style="color:#C63">$J(\theta)$</span>) - the "error" introduced by our method (What we want to minimize)
## Motivating Problems
### Regression
```python
import numpy as np
import matplotlib.pyplot as plt
rng = np.random.RandomState(7)
## Setup training data
def f(x):
""" Magic ground truth function """
    return 5./7. * (x * np.sin(x) + 2*(rng.rand(len(x)) - 0.5))
# generate full validation set of points
X = np.linspace(0, 10, 100)
# select a subset to act as the training data
rng.shuffle(X)
train_X = np.sort(X[:20])
train_y = f(train_X)
validation_y = f(X)
X = np.sort(X)[:, np.newaxis]
train_X = train_X[:, np.newaxis]
train_y = np.atleast_2d(train_y).T
## Plot the training data
plt.scatter(train_X, train_y, color='navy', s=30, marker='o', label="training data")
plt.legend(loc='lower left')
```
### Classification
```python
## Threshold the regression data
threshold = 3**2
radius_squared = (train_y)**2 + (train_X-5)**2
idxs1 = np.where(radius_squared > threshold)
idxs2 = np.where(radius_squared <= threshold)
## Plot the training data
plt.scatter(train_X[idxs1], train_y[idxs1], color='navy', s=30, marker='x', label="class 1")
plt.scatter(train_X[idxs2], train_y[idxs2], color='orange', s=30, marker='o', label="class 2")
plt.legend(loc='upper left')
plt.show()
```
## Linear Regression
### Univariate Linear Regression
```python
from sklearn.linear_model import LinearRegression
lw = 2  # line width shared by all the fit curves below
model = LinearRegression()
model.fit(train_X, train_y)
y = model.predict(X)
plt.scatter(train_X, train_y, color='navy', s=30, marker='o', label="training data")
plt.plot(X, y, color='teal', linewidth=lw, label="linear fit")
```
Given a training set consisting of one feature, $x$, and one target, $y$, we want to generate a hypothesis $h$ about the linear relationship between $x$ and $y$:
$y \approx h(x) = a + bx$
The bias ($a$) and feature weight ($b$) can be rolled into a single weight vector i.e., $\mathbf{\theta} = \begin{bmatrix} a \\ b\end{bmatrix}$:
$h_{\theta}(x) = \theta_0 + \theta_1 x$
In vector notation (we implicitly prepend a 1 to every x i.e., $\mathbf{x} = \begin{bmatrix} 1 \\ x\end{bmatrix}$):
$h_{\theta}(x) = \mathbf{\theta}^T\mathbf{x}$
One evaluation metric of the quality of fit is by looking at the **cumulative** squared error/residual:
$J(\theta) = \sum_{i=1}^{m}(h_{\theta}(x_i) - y_i)^2$
By minimizing $J$, we get the "best" model. I could make this an average by adding a $\frac{1}{m}$ out front, but it should be clear that this has no effect on where in weight space the minimum lies (it only squashes or stretches $J(\theta)$.
```python
def compute_cost(X, y, theta):
    m = y.shape[0]
    J = np.sum((X.dot(theta) - y)**2)/2/m
    return J
def gradient_descent(X, y, theta, alpha, iterations=100):
    X = np.atleast_2d(X)
    y = np.atleast_2d(y)      # expected shape (m, 1)
    theta = np.atleast_2d(theta)
    m = y.shape[0]
    J_history = np.zeros(iterations)
    for i in range(iterations):
        H = X.dot(theta)      # predictions, shape (m, 1)
        loss = H - y          # residuals, shape (m, 1)
        gradient = X.T.dot(loss) / m
        theta = theta - alpha * gradient
        J_history[i] = compute_cost(X, y, theta)
return theta, J_history
##add bias
train_X_bias = np.hstack((np.ones((train_X.shape[0], 1)), train_X))
validation_X = np.hstack((np.ones((X.shape[0], 1)), X))
initial_theta = np.zeros((train_X_bias.shape[1],1))
final_theta, J_history = gradient_descent(train_X_bias, train_y, initial_theta, 0.01, 20)
print(final_theta)
y_predict = np.dot(validation_X,final_theta)
plt.scatter(train_X, train_y, color='navy', s=30, marker='o', label="training data")
plt.plot(X, y_predict, color='teal', linewidth=lw)
```
```python
plt.plot(range(len(J_history)), J_history)
```
### Multivariate Linear Regression
```python
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
degree = 25
model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
model.fit(train_X, train_y)
y = model.predict(X)
plt.scatter(train_X, train_y, color='navy', s=30, marker='o', label="training data")
plt.plot(X, y, color='yellowgreen', linewidth=lw, label="degree %d" % degree)
plt.gca().set_ylim(-10,10)
```
Note, these last two equations do not specify the dimensionality ($d$) of $\theta$ and $x$:
$h_{\theta}(x) = \mathbf{\theta}^T\mathbf{x}$
$J(\theta) = \sum_{i=1}^{m}(h_{\theta}(\mathbf{x_i}) - y_i)^2$
We can adapt our original test case just by adding new parameters: $\mathbf{x} = \begin{bmatrix} 1 \\ x \\ x^2 \\ x^3 \\ x^4 \end{bmatrix}$
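As a sketch (reusing `train_X` from above), this feature matrix can be built by hand; it is exactly what `PolynomialFeatures(4)` produces for a 1-D input:
```python
# Columns are 1, x, x^2, x^3, x^4 for each training sample.
poly_X = np.hstack([train_X**k for k in range(5)])
print(poly_X.shape)  # (20, 5)
```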
The optimum lies where the gradient is zero, we can get there two ways:
1. Solving the equation analytically
2. Solving the equation numerically through iteration
#### Closed Form Solution of Ordinary Least Squares
$J(\theta) = \sum_{i=1}^{m}(h_{\theta}(x_i) - y_i)^2$
$J(\theta) = \sum_{i=1}^{m}(\theta^Tx_i - y_i)^2$
The optimum occurs where the gradient is zero:
\begin{align}
\frac{d J(\theta)}{d\mathbf{\theta}} & = 0 \\
2 \sum_{i=1}^{m}\left[(\theta^Tx_i - y_i)x_i\right] & = 0 \\
\sum_{i=1}^{m}\left[(\theta^Tx_i)x_i - y_ix_i\right] & = 0 \\
\sum_{i=1}^{m}(\theta^Tx_i)x_i & = \sum_{i=1}^{m}y_ix_i \\
\sum_{i=1}^{m}(x_ix_i^T)\theta & = \sum_{i=1}^{m}y_ix_i \\
\end{align}
Let $A = \sum_{i=1}^{m}(x_ix_i^T) = \mathbf{X}^T \mathbf{X}$
$b=\sum_{i=1}^{m}y_ix_i$ = $\mathbf{X}^T\mathbf{y}$
\begin{align}
A\theta & = b \\
\theta & = A^{-1}b \\
\theta & = (\mathbf{X}^T \mathbf{X})^{-1}\mathbf{X}^T\mathbf{y} \\
\end{align}
Note, we are inverting $\mathbf{X}^T \mathbf{X}$, which is $d \times d$: expensive when the number of features $d$ is large, and forming the product still requires a pass over all $m$ samples.
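A sketch of the closed-form fit on the bias-augmented data defined in the gradient-descent cell above; `np.linalg.solve` is preferred to forming the inverse explicitly:
```python
theta_closed = np.linalg.solve(train_X_bias.T.dot(train_X_bias),
                               train_X_bias.T.dot(train_y))
print(theta_closed)  # should match gradient descent run to convergence
```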
#### Gradient Descent
Recall:
$\frac{d J(\theta)}{d\theta_j} = 2 \sum_{i=1}^{m}\left[(\theta^Tx_i - y_i)x_{i,j}\right]$
Represents the gradient of the cost function, so if we want to minimize this, we follow its negative direction.
Algorithm:
> Initialize $\theta_j = \mathbf{0}$ <br>
> Repeat { <br>
>> $\theta_j = \theta_j - \alpha\left(\sum_{i=1}^{m}\left[(\theta^Tx_i - y_i)x_{i,j}\right]\right)$ <br>
> } <br>
We saw last week the benefits of stochastic gradient descent where we don't have to use the whole dataset each time to do the update.
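A minimal single-sample (stochastic) version of the update, as a sketch reusing `train_X_bias`, `train_y`, and `rng` from above:
```python
alpha = 0.01
theta_sgd = np.zeros((train_X_bias.shape[1], 1))
for _ in range(2000):
    i = rng.randint(train_X_bias.shape[0])        # one random training sample
    xi, yi = train_X_bias[i:i+1], train_y[i:i+1]  # shapes (1, d) and (1, 1)
    theta_sgd -= alpha * xi.T.dot(xi.dot(theta_sgd) - yi)
print(theta_sgd)
```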
## Feature Scaling
### Why?
Varied dimension "widths" will pull the gradient descent algorithm in wider directions.
### How?
* Range scaling $\left(\frac{x - x_{min}}{x_{max}- x_{min}}\right)$
* Z-Score scaling $\left(\frac{x - \mu}{\sigma}\right)$ (both are sketched in numpy below)
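A numpy sketch of both transforms (sklearn's `MinMaxScaler` and `StandardScaler` implement the same thing; `X_feat` is a hypothetical stand-in for any $m \times d$ feature matrix):
```python
X_feat = train_X  # hypothetical example input
X_range = (X_feat - X_feat.min(axis=0)) / (X_feat.max(axis=0) - X_feat.min(axis=0))
X_zscore = (X_feat - X_feat.mean(axis=0)) / X_feat.std(axis=0)
```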
## Learning Rate
To choose the learning rate $\alpha$, plot the number of iterations against the cost function (as in the `J_history` plot above): $J(\theta)$ should decrease on every iteration; if it oscillates or grows, $\alpha$ is too large, and if it decreases very slowly, $\alpha$ is too small.
## Regularization
$J(\theta) = \sum_{i=1}^{m}(h_{\theta}(\mathbf{x_i}) - y_i)^2 + p(\theta)$
### Ridge Regularization
* $L_2$ penalty term
* Tikhonov Regularization
$p(\theta) = \lambda \sum_{j=1}^{d}\theta_j^2$
```python
from sklearn.linear_model import Ridge
degree = 25
model = make_pipeline(PolynomialFeatures(degree), Ridge())
model.fit(train_X, train_y)
y = model.predict(X)
plt.scatter(train_X, train_y, color='navy', s=30, marker='o', label="training data")
plt.plot(X, y, color='gold', linewidth=lw, label="degree %d" % degree)
plt.gca().set_ylim(-10,10)
```
### LASSO Regularization
* $L_1$ penalty term
* Reduces number of dimensions
* Good for feature selection
$p(\theta) = \lambda \sum_{j=1}^{d}|\theta_j|$
```python
from sklearn.linear_model import Lasso
degree = 300
model = make_pipeline(PolynomialFeatures(degree), Lasso())
model.fit(train_X, train_y)
y = model.predict(X)
plt.scatter(train_X, train_y, color='navy', s=30, marker='o', label="training data")
plt.plot(X, y, color='darkorange', linewidth=lw, label="degree %d" % degree)
plt.gca().set_ylim(-10,10)
```
### Elastic Net Regularization
* Combination $L_1$ and $L_2$ Regularization
$p(\theta) = \lambda_1 \sum_{j=1}^{d}|\theta_j| + \lambda_2 \sum_{j=1}^{d}\theta_j^2$
```python
from sklearn.linear_model import ElasticNet
degree = 25
model = make_pipeline(PolynomialFeatures(degree), ElasticNet())
model.fit(train_X, train_y)
y = model.predict(X)
plt.scatter(train_X, train_y, color='navy', s=30, marker='o', label="training data")
plt.plot(X, y, color='firebrick', linewidth=lw, label="degree %d" % degree)
plt.gca().set_ylim(-10,10)
```
## Logistic Regression
We then formulate our problem such that we are using the sigmoid function to determine the probability that an input resides in a particular class of data. With multiple classes this turns into a "one-vs.-rest" scheme, thus if there are $k$ classes, there are $k$ instances of logistic regression. The highest prediction is taken as the class for that data point.
\begin{align}
h_\theta(x) & = g(\theta^Tx) \\
z & = \theta^Tx \\
g(z) & = \frac{1}{1+e^{-z}}\\
\end{align}
```python
def sigmoid(x):
return 1. / (1 + np.exp(-x))
xx = np.linspace(-10,10,100)
plt.plot(xx, sigmoid(xx))
```
We use the sigmoid function to take the discrete output and make it differentiable.
Our $y$ values are discrete (0 or 1). How do we define a cost function for this?
Existing cost function:
$J(\theta) = \sum_{i=1}^{m}(h_{\theta}(x_i) - y_i)^2$
This will not work as it produces a non-convex $J$.
So, we need a convex cost function that has the following properties:
* When the correct classification is 1, then a zero cost should be assigned to a value of $h(x)$ = 1
* When the correct classification is 1, then a maximal cost should be assigned to a value of $h(x)$ = 0
* When the correct classification is 0, then a zero cost should be assigned to a value of $h(x)$ = 0
* When the correct classification is 0, then a maximal cost should be assigned to a value of $h(x)$ = 1
Thus, we end up with:
$\text{Cost}(h_\theta(x), y) = $
\begin{cases}
-\log(h_\theta(x)), & \text{if } y = 1\\
-\log(1 - h_\theta(x)), & \text{if } y = 0
\end{cases}
We can do this more compactly by using the fact that y is discrete:
$\text{Cost}(h, y) = -y\log(h) - (1-y)\log(1 - h)$
Averaged over the training set, this gives the vectorized loss
$
J(\theta) = \frac{1}{m} \left(-y^T\log(h) - (1-y)^T\log(1-h)\right)
$
whose gradient takes the same convenient form as in linear regression: $\frac{d}{d\theta}J(\theta) = \frac{1}{m}X^T(h - y)$.
### Regularized Logistic Regression
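The penalty terms above carry over unchanged: adding $p(\theta)$ to the logistic cost shrinks the weights. A minimal sketch with scikit-learn, whose `LogisticRegression` applies an $L_2$ penalty by default with strength $\lambda = 1/C$, fit on the thresholded-circle data from the classification example at the top (reusing `radius_squared` and `threshold` from that cell):
```python
from sklearn.linear_model import LogisticRegression

labels = (radius_squared.ravel() <= threshold).astype(int)  # 1 = class 2, 0 = class 1
features = np.hstack([train_X, train_y])                    # (x, y) pairs as features
clf = LogisticRegression(penalty='l2', C=1.0)               # C = 1/lambda
clf.fit(features, labels)
print(clf.score(features, labels))  # modest: a circular boundary is not linearly separable
```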
# Residual bias correction of Mean-field GF
This is the full computation of the residual bias correction for our (mean-field) GF solution for the percolation problem (immobile "solute" with vacancy diffusion). It works through all of the matrix averages "analytically" (storing them as polynomials in the concentration of the immobile solute, $c_\text{B}$), and then brings everything together to express the residual bias correction as an analytic function with numerical coefficients for the square lattice.
```python
import sys
sys.path.extend(['.','./Vacancy'])
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
%matplotlib inline
import scipy.sparse
import itertools
from numba import jit, njit, prange, guvectorize # faster runtime with update routines
from scipy.special import comb  # comb moved here from scipy.misc, which no longer provides it
# from sympy import *
import onsager.PowerExpansion as PE
import onsager.crystal as crystal
import onsager.crystalStars as stars
import onsager.GFcalc as GFcalc
from tqdm import tnrange, tqdm_notebook
```
```python
# Turn off or on to run optional testing code in notebook:
# Also turns on / off progress bars
__TESTING__ = False
```
Now, we need to expand out our probability factors. Let $x$ be the concentration of solute B; imagine we have $N$ sites possible. Then, if there are $n$ B atoms, the probability factor is
$$P(n;N) = x^n (1-x)^{N-n} = x^n \sum_{j=0}^{N-n} \frac{(N-n)!}{j!(N-n-j)!} (-x)^j
= \sum_{j=0}^{N-n} \frac{(N-n)!}{j!(N-n-j)!} (-1)^j x^{n+j}$$
The factorial term is $N-n$ choose $j$, which is `scipy.misc.comb`.
We want to construct a probability matrix `P[n,c]` such that $P(n;N)$ is written as a sum over $x^c$ terms; $c=0\ldots N$.
```python
def calc_P(N):
"""
Returns the probability matrix P[n,c] where the probability of seeing `n` atoms
of type B in `N` sites is sum(c=0..N, x^c P[n,c])
:param N: total number of sites
:returns P[n,c]: matrix of probablities, n=0..N, c=0..N
"""
P = np.zeros((N+1, N+1), dtype=int)
for n in range(N+1):
Nn = N-n
P[n,n:] = comb([Nn]*(Nn+1), [j for j in range(Nn+1)])
for j in range(Nn+1):
P[n,j+n] *= (-1)**j
return P
```
```python
if __TESTING__:
calc_P(4)
```
array([[ 1, -4, 6, -4, 1],
[ 0, 1, -3, 3, -1],
[ 0, 0, 1, -2, 1],
[ 0, 0, 0, 1, -1],
[ 0, 0, 0, 0, 1]])
Normalization check: construct the $2^N$ states, and see if it averages to 1. Each state is a vector of length $N$, with entries that are 0 (A) or 1 (B). Here, we explicitly build our state space, and also do a quick count to determine $n_\text{B}$ for each state. Note: we prepend a value of 0, since this corresponds to the *initial* location of the vacancy.
*New version:* we now generate group operations for the square lattice, and take advantage of those to reduce the computational time.
```python
N = 24
prob = calc_P(N)
states = np.array([(0,) + st for st in itertools.product((0,1), repeat=N)])
nB = np.sum(states, axis=1)
```
```python
if __TESTING__:
norm = np.zeros(N+1, dtype=int)
for n in tqdm_notebook(nB):
norm += prob[n]
print(norm)
```
HBox(children=(IntProgress(value=0, max=16777216), HTML(value='')))
[1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
```python
states.shape
```
(16777216, 25)
```python
Pstates = np.array([prob[n] for n in nB])
```
Now, we do some analysis by constructing up to 3 jumps (corresponding to the third power of our transition rate matrix $W$). We do this by setting up some bookkeeping:
* We work with a list of displacement vectors `[dx_0, dx_1, dx_2, dx_3]`
* We construct the list of positions for the vacancy
* For each position, we identify the possible jumps (though we only need to do this
for positions that are reachable in 0-2 jumps.
* We construct a list of possible basis functions: these are all possible
differences of vacancy positions
* Finally, for each position, we identify which position corresponds to each possible
basis function, as well a list of all basis functions that are *not* in the state.
This is all sufficient to construct a sparse version of $W$ (and $\Gamma$) for a given state $\chi$.
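As an illustrative sketch (not used by the averages below), assuming unit jump rates and that a jump into a B-occupied site is blocked, the sparse rate matrix for one configuration `chi` could be assembled like this; the sign and index conventions here are one possible choice:
```python
def make_W(chi):
    """Sketch: sparse rate matrix W for one solute configuration `chi`
    (chi[j] = 1 if site j holds an immobile B atom). Uses unit rates,
    W[i, j] = rate from site i to site j, escape rates on the diagonal."""
    rows, cols, vals = [], [], []
    for i, jumps in enumerate(jumplist):
        for j in jumps:
            if chi[j] == 0:  # jump is blocked if the target holds a B atom
                rows.append(i); cols.append(j); vals.append(1.)
                rows.append(i); cols.append(i); vals.append(-1.)
    # duplicate (i, i) entries are summed by the sparse constructor
    return scipy.sparse.csr_matrix((vals, (rows, cols)), shape=(Nsite, Nsite))
```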
```python
dxlist = [np.array([1,0]), np.array([-1,0]), np.array([0,1]), np.array([0,-1])]
```
```python
Njump = 3
sites = [np.array([0,0])]
sitedict = {(0,0): 0}
lastsites = sites.copy()
for nj in range(Njump):
newsites = []
for dx in dxlist:
for x in lastsites:
y = x+dx
yt = tuple(y)
if yt not in sitedict:
sitedict[yt] = len(sites)
sites.append(y)
newsites.append(y)
lastsites = newsites
Nsite = len(sites)
Nsite0 = len(sites) - len(lastsites)
sites0 = sites[:Nsite0]
```
```python
jumplist = []
for x in sites0:
jumplist.append([sitedict[tuple(x+dx)] for dx in dxlist])
if __TESTING__:
print(jumplist)
```
[[1, 2, 3, 4],
[5, 0, 6, 7],
[0, 8, 9, 10],
[6, 9, 11, 0],
[7, 10, 0, 12],
[13, 1, 14, 15],
[14, 3, 16, 1],
[15, 4, 1, 17],
[2, 18, 19, 20],
[3, 19, 21, 2],
[4, 20, 2, 22],
[16, 21, 23, 3],
[17, 22, 4, 24]]
```python
basisfunc, basisdict = [], {}
for x in sites:
for y in sites:
d = x-y
dt = tuple(d)
if dt not in basisdict:
basisdict[dt] = len(basisfunc)
basisfunc.append(d)
Nbasis = len(basisfunc)
```
Some matrices and lists to manage conversion between sites and basis functions.
We also include a matrix that corresponds to "matching" basis functions as a function of endstate $x$. This is used to correct the outer product for "missing" basis functions, for when the missing basis functions map onto identical sites.
```python
chibasisfound, chibasismiss, chibasismissmatch = [], [], []
chibasisfar = []
for x in sites:
xbdict = {}
xbmiss = []
xbmissmatch = {}
xbfar = {}
for bindex, b in enumerate(basisfunc):
bt = tuple(b)
y = x+b
yt = tuple(y)
if yt in basisdict:
xbfar[bindex] = basisdict[yt]
if yt in sitedict:
xbdict[bindex] = sitedict[yt]
else:
xbmiss.append(bindex)
if bt not in sitedict and yt in basisdict:
xbmissmatch[bindex] = basisdict[yt]
chibasisfound.append(xbdict)
chibasismiss.append(xbmiss)
chibasismissmatch.append(xbmissmatch)
chibasisfar.append(xbfar)
# make a set of "outside" and "inside" basis functions:
basisout = set([tuple(basisfunc[bindex]) for bindex in chibasismiss[0]])
basisin = set([tuple(bv) for bv in basisfunc if tuple(bv) not in basisout])
```
```python
# converting chibasisfound and chibasismiss into matrices:
chibasisfound_mat = np.zeros((N+1, Nbasis, N+1), dtype=int)
# chibasisfound_sparse = [scipy.sparse.csr_matrix((Nbasis, N+1), dtype=int)
# for n in range(N+1)]
chibasismiss_mat = np.zeros((N+1, Nbasis), dtype=int)
chibasismissmatch_mat = np.zeros((Nbasis, Nbasis, N+1), dtype=int)
chibasisfar_mat = np.zeros((Nbasis, Nbasis, N+1), dtype=int)
for n, cbf, cbm, cbmm, cbfar in zip(itertools.count(), chibasisfound,
chibasismiss, chibasismissmatch, chibasisfar):
for bindex in cbm:
chibasismiss_mat[n, bindex] = 1
for bindex, siteindex in cbf.items():
chibasisfound_mat[n, bindex, siteindex] = 1
# chibasisfound_sparse[n][bindex, siteindex] = 1
for bindex, siteindex in cbmm.items():
chibasismissmatch_mat[bindex, siteindex, n] = 1
for bindex, siteindex in cbfar.items():
chibasisfar_mat[bindex, siteindex, n] = 1
```
### Group operation simplification
For our 8 group operations, corresponding to the point group operations on a square, we're going to make a reduced state list that only contains one symmetry-unique representative. This requires mapping the group operations on Cartesian coordinates into corresponding group operations on our sites, and our basis functions.
```python
groupops = [np.array([[1,0],[0,1]]), np.array([[0,-1],[1,0]]),
np.array([[-1,0],[0,-1]]), np.array([[0,1],[-1,0]]),
np.array([[-1,0],[0,1]]), np.array([[1,0],[0,-1]]),
np.array([[0,-1],[-1,0]]), np.array([[0,1],[1,0]])]
```
```python
sitegroupops, basisgroupops = [], []
for g in groupops:
sg = np.zeros([Nsite, Nsite], dtype=int)
bg = np.zeros([Nbasis, Nbasis], dtype=int)
for n, x in enumerate(sites):
yt = tuple(np.dot(g, x))
sg[sitedict[yt], n] = 1
for n, x in enumerate(basisfunc):
yt = tuple(np.dot(g, x))
bg[basisdict[yt], n] = 1
sitegroupops.append(sg)
basisgroupops.append(bg)
```
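The resulting matrices should be permutation matrices, hence orthogonal with integer entries; a small added verification:
```python
if __TESTING__:
    for sg, bg in zip(sitegroupops, basisgroupops):
        assert np.array_equal(np.dot(sg, sg.T), np.eye(Nsite, dtype=int))
        assert np.array_equal(np.dot(bg, bg.T), np.eye(Nbasis, dtype=int))
```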
```python
foundstates = set([])
binary = np.array([2**n for n in range(Nsite)])
symmstateslist, symmPlist = [], []
for st, P in tqdm_notebook(zip(states, Pstates), total=(2**N), disable=not __TESTING__):
bc = np.dot(st, binary)
if bc not in foundstates:
symmstateslist.append(st)
equivset = set([np.dot(np.dot(g, st), binary) for g in sitegroupops])
foundstates.update(equivset)
symmPlist.append(len(equivset)*P)
symmstates = np.array(symmstateslist)
symmPstates = np.array(symmPlist)
```
```python
symmstates.shape
```
(2107920, 25)
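This count can be cross-checked with Burnside's lemma (my addition, not in the original): the number of orbits is the average, over the 8 group operations, of the number of states each operation fixes, and a state is fixed exactly when it is constant on each cycle of the site permutation.
```python
if __TESTING__:
    def site_cycles(sg):
        """Number of cycles of the site permutation sg over the non-origin sites."""
        perm = np.argmax(sg, axis=0)  # perm[n] = image of site n under the group op
        seen, ncycles = set(), 0
        for n in range(1, Nsite):  # site 0 (the origin) is always a fixed point
            if n not in seen:
                ncycles += 1
                m = n
                while m not in seen:
                    seen.add(m)
                    m = perm[m]
        return ncycles
    # Burnside: (1/|G|) sum_g 2^(cycles of g); expect 2107920 = symmstates.shape[0]
    print(sum(2**site_cycles(sg) for sg in sitegroupops)//len(sitegroupops))
```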
```python
# normalization check: the state weights are polynomials in c that sum to 1
np.sum(symmPstates, axis=0)
```
array([1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0])
Now, we need symmetrized versions of a *lot* of our information from above, in order to properly account for all of the symmetrized versions of our basis functions. This includes
* Computation of bias function times a basis function
* Computation of two basis functions
* Inside/inside
* Inside/outside
* Outside/outside
* Outside/outside matching
We can group these in terms of what factor of concentration goes in front.
```python
# bias vector of the tracer: sum of dx over *vacant* neighbors; as the four dx
# sum to zero, this equals minus the sum over the occupied neighbors st[1:5]
biasvec_mat = np.zeros((2, N+1), dtype=int)
for j, dx in enumerate(dxlist):
    biasvec_mat[:, j+1] -= dx
if __TESTING__:
print(np.dot(biasvec_mat, states[8388608]), states[8388608])
```
[-1 0] [0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
```python
def symmetrize(mat, groupops0, groupops1):
"""
Designed to symmetrize the first two entries of a matrix with the
corresponding group operations
"""
symmmat = np.zeros(mat.shape)
for g0, g1 in zip(groupops0, groupops1):
for i in range(mat.shape[0]):
for j in range(mat.shape[1]):
symmmat[i, j] += np.tensordot(np.tensordot(
mat, g1[j], axes=(1,0)), g0[i], axes=(0,0))
symmmat /= len(groupops0)
return symmmat
```
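Group averaging is a projection, so applying `symmetrize` twice should reproduce the once-symmetrized matrix up to round-off; a small added check (slow, hence the `__TESTING__` guard):
```python
if __TESTING__:
    Mtest = np.random.rand(Nbasis, Nbasis, N+1)
    Msym = symmetrize(Mtest, basisgroupops, basisgroupops)
    assert np.allclose(Msym, symmetrize(Msym, basisgroupops, basisgroupops))
```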
### Efficient matrix operations
Some `jit` functions via `numba` to make operations efficient:
```python
@njit(nogil=True, parallel=True)
def tripleouterupdate(summand, A, B, C):
"""Update summand[i,j,k] += A[i]*B[j]*C[k]"""
I, = A.shape
J, = B.shape
K, = C.shape
for i in prange(I):
for j in prange(J):
for k in prange(K):
summand[i, j, k] += A[i]*B[j]*C[k]
```
```python
@njit(nogil=True, parallel=True)
def matrixouterupdate(summand, A, B):
"""Update summand[i,j,k] += A[i, j]*B[k]"""
I,J = A.shape
K, = B.shape
for i in prange(I):
for j in prange(J):
for k in prange(K):
summand[i, j, k] += A[i,j]*B[k]
```
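Both kernels are plain accumulations, so we can verify them against `np.einsum` (an added check, not in the original):
```python
if __TESTING__:
    A, B, C = np.random.rand(3), np.random.rand(4), np.random.rand(5)
    acc = np.zeros((3, 4, 5))
    tripleouterupdate(acc, A, B, C)
    assert np.allclose(acc, np.einsum('i,j,k->ijk', A, B, C))
    M = np.random.rand(3, 4)
    acc2 = np.zeros((3, 4, 5))
    matrixouterupdate(acc2, M, C)
    assert np.allclose(acc2, np.einsum('ij,k->ijk', M, C))
```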
### Evaluation of averages
We have a state vector $\chi_i$ = 0 or 1, and for each end position $j$ we'll have the representation of $M_{\chi\chi'}$ as a vector $M_j$; we want the contribution to each basis function $b$.
Let's try some averages; first, without basis functions:
$\langle \tau_\chi \mathbf{b}_\chi\cdot\mathbf{b}_\chi\rangle_\chi$, the average residual bias.
```python
resbiasave = np.zeros(N+1, dtype=int)  # accumulates 12*tau*(b.b) to keep integer arithmetic
for st, P in tqdm_notebook(zip(symmstates, symmPstates), total=symmstates.shape[0],
disable=not __TESTING__):
# bv = np.sum(dx for j, dx in enumerate(dxlist) if st[j+1] == 0)
bv = np.dot(biasvec_mat, st)
W = 4-np.sum(st[1:5])
if W>0:
resbiasave += P*(bv[0]*bv[0]+bv[1]*bv[1])*(12//W)
print(resbiasave/12)
```
[ 0. 1.33333333 0. 0. -1.33333333 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. ]
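These coefficients can be verified by hand (my own arithmetic, not from the original): with unit jump rate per bond, a state with $W$ vacant neighbors has $\tau = 1/W$, and after summing over the far sites only the occupancy of the four neighbors matters, so
$$\langle \tau_\chi \mathbf{b}_\chi\cdot\mathbf{b}_\chi\rangle_\chi = \underbrace{4c^3(1-c)}_{W=1} + \underbrace{4c^2(1-c)^2}_{W=2,\ \text{adjacent}} + \underbrace{\tfrac{4}{3}c(1-c)^3}_{W=3} = \tfrac{4}{3}\left(c-c^4\right),$$
since opposite vacant pairs and $W=4$ give $\mathbf{b}=0$. This matches the printed coefficients $(0, 4/3, 0, 0, -4/3)$.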
Now, an average involving a single basis function: $\langle \mathbf{b}_\chi \phi_{\chi,\mathbf{x}}\rangle_\chi$.
```python
biasvecbar = np.zeros((2, Nbasis, N+1), dtype=int)
Pc = np.zeros(N+1, dtype=int)
for st, P in tqdm_notebook(zip(symmstates, symmPstates), total=symmstates.shape[0],
disable=not __TESTING__):
# bv = np.sum(dx for j, dx in enumerate(dxlist) if st[j+1] == 0)
W = 4-np.sum(st[1:5])
if W==0 or W==4: continue
bv = np.dot(biasvec_mat, st)
Pc[1:] = P[:-1]
tripleouterupdate(biasvecbar, bv, np.dot(chibasisfound_mat[0], st), P)
tripleouterupdate(biasvecbar, bv, chibasismiss_mat[0], Pc)
symmbiasvecbar = symmetrize(biasvecbar, groupops, basisgroupops)
```
Now, let's try a basis / basis vector average: $\langle \sum_{\chi'} \phi_{\chi,\mathbf{x}} W_{\chi\chi'} \phi_{\chi',\mathbf{y}}\rangle_\chi$.
This gets a bit complicated with the "missing" basis functions for $\chi$, and especially when we consider those that are missing in both $\chi$ and $\chi'$. We also need to treat the "far" case, where both $\mathbf{x}$ and $\mathbf{y}$ are far away from the origin.
We ignore terms higher than $c^N$ ($N=24$); in practice no contributions appear above order $c^{10}$.
```python
# @njit(nogil=True, parallel=True)
@jit
def matrixupdate(mat_bar, mat_vec, chibasis, chibasis_miss,
chibasismissmatch_mat, P, Pc, Pcc):
chibasis0, chibasis1 = chibasis[0], chibasis_miss[0]
chipbasis0, chipbasis1 = np.dot(mat_vec, chibasis), np.dot(mat_vec, chibasis_miss)
tripleouterupdate(mat_bar, chibasis0, chipbasis0, P)
tripleouterupdate(mat_bar, chibasis1, chipbasis0, Pc)
tripleouterupdate(mat_bar, chibasis0, chipbasis1, Pc)
# note: this is a little confusing; if the two ("missing") basis functions are
# referencing *different* sites, then we pick up a x^2 term; but if they
# reference the same site, it is a factor of x.
tripleouterupdate(mat_bar, chibasis1, chipbasis1, Pcc)
matchouter = np.dot(chibasismissmatch_mat, mat_vec)
matrixouterupdate(mat_bar, matchouter, Pc-Pcc)
```
```python
# I'm not entirely sure how this is supposed to read; the matching seems to be the key?
# @njit(nogil=True, parallel=True)
@jit
def farmatrixupdate(mat_bar, mat_vec, chibasis_far, Pc, Pcc):
# note: this is a little confusing; if the two ("missing") basis functions are
# referencing *different* sites, then we pick up a x^2 term; but if they
# reference the same site, it is a factor of x.
# tripleouterupdate(mat_bar, chibasis1, chipbasis1, Pcc)
matchouter = np.dot(chibasis_far, mat_vec)
matrixouterupdate(mat_bar, matchouter, Pc-Pcc)
```
```python
# @njit(nogil=True, parallel=True)
@jit
def vectorupdate(vec_bar, bv, vec, chibasis, chibasis_miss, P, Pc):
# chibasis0, chibasis1 = chibasis[0], chibasis_miss[0]
chipbasis0, chipbasis1 = np.dot(vec, chibasis), np.dot(vec, chibasis_miss)
tripleouterupdate(vec_bar, bv, chipbasis0, P)
tripleouterupdate(vec_bar, bv, chipbasis1, Pc)
```
```python
tauscale = 12
eye = tauscale*np.pad(np.eye(Nsite0, dtype=int), ((0,0), (0,Nsite-Nsite0)), 'constant')
onevec = np.array([1,] + [0,]*(Nsite-1))
# We don't expect to need c^(N+1) or c^(N+2) terms, so we ignore those...
# Matrices: <sum_c' W_cc' chi chi'> and higher order (GG, and WGG terms...)
Wbar = np.zeros((Nbasis, Nbasis, N+1), dtype=int)
WGbar = np.zeros((Nbasis, Nbasis, N+1), dtype=int)
WGGbar = np.zeros((Nbasis, Nbasis, N+1), dtype=int)
# far-field versions of the same; the matched versions, followed by the "summed" (baseline) version:
Wbar_far = np.zeros((Nbasis, Nbasis, N+1), dtype=int)
WGbar_far = np.zeros((Nbasis, Nbasis, N+1), dtype=int)
WGGbar_far = np.zeros((Nbasis, Nbasis, N+1), dtype=int)
Wbar_far0 = np.zeros(N+1, dtype=int)
WGbar_far0 = np.zeros(N+1, dtype=int)
WGGbar_far0 = np.zeros(N+1, dtype=int)
# bias vector versions, including products with gamma:
biasvecbar = np.zeros((2, Nbasis, N+1), dtype=int)
biasGvecbar = np.zeros((2, Nbasis, N+1), dtype=int)
biasGGvecbar = np.zeros((2, Nbasis, N+1), dtype=int)
# residual bias vector versions:
resbiasave = np.zeros(N+1, dtype=int)
resbiasGave = np.zeros(N+1, dtype=int)
Pc, Pcc = np.zeros(N+1, dtype=int), np.zeros(N+1, dtype=int)
for st, P in tqdm_notebook(zip(symmstates, symmPstates), total=symmstates.shape[0],
disable=not __TESTING__):
Pc[1:] = P[:-1]
Pcc[2:] = P[:-2]
# basis0: those inside \chi, basis1: those outside \chi
chibasis = np.dot(chibasisfound_mat, st)
# chibasis0, chibasis1 = np.dot(chibasisfound_mat[0], st), chibasismiss_mat[0]
# construct our transition matrix:
W = np.zeros((Nsite0, Nsite), dtype=int)
for n, jumps in enumerate(jumplist):
if st[n] == 1: continue
for m in jumps:
if st[m] == 0:
W[n,n] -= 1
W[n,m] = 1
tau = -np.diag(W) # will be tau multiplied by tauscale = 12 (== -12//W[n,n])
Gam = W.copy() # Gamma matrix multiplied by tauscale = 12.
for n in range(Nsite0):
if tau[n] > 0:
tau[n] = tauscale//tau[n]
Gam[n,n] = 0
Gam[n] *= tau[n]
WG = -W[0,0]*np.dot(Gam[0,:Nsite0], Gam)+tauscale*tauscale*W[0,0]*onevec
WGG = np.dot(W[0,:Nsite0], np.dot(Gam[:,:Nsite0], Gam - 2*eye))
matrixupdate(Wbar, W[0], chibasis, chibasismiss_mat, chibasismissmatch_mat,
P, Pc, Pcc)
matrixupdate(WGbar, WG, chibasis, chibasismiss_mat, chibasismissmatch_mat,
P, Pc, Pcc)
matrixupdate(WGGbar, WGG, chibasis, chibasismiss_mat, chibasismissmatch_mat,
P, Pc, Pcc)
# far-field contributions of same:
farmatrixupdate(Wbar_far, W[0], chibasisfar_mat, Pc, Pcc)
farmatrixupdate(WGbar_far, WG, chibasisfar_mat, Pc, Pcc)
farmatrixupdate(WGGbar_far, WGG, chibasisfar_mat, Pc, Pcc)
Wbar_far0 += np.sum(W[0])*Pcc
WGbar_far0 += np.sum(WG)*Pcc
WGGbar_far0 += np.sum(WGG)*Pcc
# bias contributions (only bother if there's non-zero bias)
if tau[0]==0: continue
    bv = np.dot(biasvec_mat, st)  # vacant-neighbor bias; avoids np.sum over a generator
vectorupdate(biasvecbar, bv, onevec, chibasis, chibasismiss_mat, P, Pc)
vectorupdate(biasGvecbar, bv, Gam[0], chibasis, chibasismiss_mat, P, Pc)
vectorupdate(biasGGvecbar, bv, np.dot(Gam[0,:Nsite0],Gam-2*eye),
chibasis, chibasismiss_mat, P, Pc)
resbiasave += P*(bv[0]*bv[0]+bv[1]*bv[1])*tau[0]
bb = 0
for j, G in enumerate(Gam[0]):
if G>0:
bvp = np.array([0,0])
for k, dx in zip(jumplist[j], dxlist):
if st[k] == 0: bvp += dx
bb += G*np.dot(bv, bvp)*tau[j]
resbiasGave += P*bb
```
```python
# baseline far-field sums; all three come out zero for this cluster (see below)
Wbar_far0, WGbar_far0, WGGbar_far0
```
(array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0]),
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0]),
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0]))
```python
# scaling and symmetrization
symmWbar = symmetrize(Wbar, basisgroupops, basisgroupops)
symmWGbar = symmetrize(WGbar, basisgroupops, basisgroupops)/(tauscale*tauscale)
symmWGGbar = symmetrize(WGGbar, basisgroupops, basisgroupops)/(tauscale*tauscale)
symmWbar_far = symmetrize(Wbar_far, basisgroupops, basisgroupops)
symmWGbar_far = symmetrize(WGbar_far, basisgroupops, basisgroupops)/(tauscale*tauscale)
symmWGGbar_far = symmetrize(WGGbar_far, basisgroupops, basisgroupops)/(tauscale*tauscale)
symmresbiasave = resbiasave/tauscale
symmresbiasGave = resbiasGave/(tauscale*tauscale)
symmbiasvecbar = symmetrize(biasvecbar, groupops, basisgroupops)
symmbiasGvecbar = symmetrize(biasGvecbar, groupops, basisgroupops)/tauscale
symmbiasGGvecbar = symmetrize(biasGGvecbar, groupops, basisgroupops)/(tauscale*tauscale)
```
```python
symmresbiasave
```
array([ 0. , 1.33333333, 0. , 0. , -1.33333333,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ])
```python
symmresbiasGave
```
array([ 0. , 0. , -0.44444444, -0.44444444, -0.44444444,
0.44444444, 0.44444444, 0.44444444, 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ])
### Output of averages
Some helper functions to make the printing nicer, followed by direct output.
```python
def truncate_vec(v):
    """Return v shortened by truncating the trailing zero components
    (assumes at least one nonzero entry)"""
    return v[:(np.max(np.nonzero(v))+1)]
```
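A quick usage example I've added (hypothetical input, just to illustrate the truncation):
```python
if __TESTING__:
    print(truncate_vec(np.array([0., 1.5, 0., -2., 0., 0.])))  # -> [ 0.   1.5  0.  -2. ]
```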
```python
def printvecbasis(VB):
"""Print out the components of a vector-basis matrix"""
for d in range(2):
print("dim {}".format(d+1))
for bv, v in zip(basisfunc, VB[d]):
            if np.any(v):
print(bv, truncate_vec(v))
```
```python
def printbasisbasis(BB, comp=None):
"""Print out the components of a basis-basis matrix"""
for bv0, BB0 in zip(basisfunc, BB):
if comp is not None and tuple(bv0) not in comp: continue
for bv1, B in zip(basisfunc, BB0):
            if np.any(B):
print(bv0, bv1, truncate_vec(B))
```
```python
printbasisbasis(symmWbar_far, {(0,0)})
```
[0 0] [0 0] [ 0. -4. 8. -4.]
[0 0] [-1 0] [ 0. 1. -2. 1.]
[0 0] [1 0] [ 0. 1. -2. 1.]
[0 0] [ 0 -1] [ 0. 1. -2. 1.]
[0 0] [0 1] [ 0. 1. -2. 1.]
```python
printbasisbasis(symmWbar-symmWbar_far)
```
[0 0] [0 0] [ 0. 4. -8. 4.]
[0 0] [-1 0] [ 0. -1. 2. -1.]
[0 0] [1 0] [ 0. -1. 2. -1.]
[0 0] [ 0 -1] [ 0. -1. 2. -1.]
[0 0] [0 1] [ 0. -1. 2. -1.]
[-1 0] [0 0] [ 0. -1. 2. -1.]
[-1 0] [-1 0] [ 0. 1. -3. 2.]
[-1 0] [1 0] [ 0. 0. 1. -1.]
[1 0] [0 0] [ 0. -1. 2. -1.]
[1 0] [-1 0] [ 0. 0. 1. -1.]
[1 0] [1 0] [ 0. 1. -3. 2.]
[ 0 -1] [0 0] [ 0. -1. 2. -1.]
[ 0 -1] [ 0 -1] [ 0. 1. -3. 2.]
[ 0 -1] [0 1] [ 0. 0. 1. -1.]
[0 1] [0 0] [ 0. -1. 2. -1.]
[0 1] [ 0 -1] [ 0. 0. 1. -1.]
[0 1] [0 1] [ 0. 1. -3. 2.]
```python
printbasisbasis(symmWGbar_far, {(0,0)})
```
[0 0] [0 0] [ 0. -3. 7. -4. 0. -1. 1.]
[0 0] [-2 0] [ 0. 0.25 -0.58333333 0.33333333 0. 0.08333333
-0.08333333]
[0 0] [-1 -1] [ 0. 0.5 -1.16666667 0.66666667 0. 0.16666667
-0.16666667]
[0 0] [-1 1] [ 0. 0.5 -1.16666667 0.66666667 0. 0.16666667
-0.16666667]
[0 0] [2 0] [ 0. 0.25 -0.58333333 0.33333333 0. 0.08333333
-0.08333333]
[0 0] [ 1 -1] [ 0. 0.5 -1.16666667 0.66666667 0. 0.16666667
-0.16666667]
[0 0] [1 1] [ 0. 0.5 -1.16666667 0.66666667 0. 0.16666667
-0.16666667]
[0 0] [ 0 -2] [ 0. 0.25 -0.58333333 0.33333333 0. 0.08333333
-0.08333333]
[0 0] [0 2] [ 0. 0.25 -0.58333333 0.33333333 0. 0.08333333
-0.08333333]
```python
printbasisbasis(symmWGbar-symmWGbar_far)
```
[0 0] [0 0] [ 0. 3. -7. 4. 0. 1. -1.]
[0 0] [-2 0] [ 0. -0.25 0.58333333 -0.33333333 0. -0.08333333
0.08333333]
[0 0] [-1 -1] [ 0. -0.5 1.16666667 -0.66666667 0. -0.16666667
0.16666667]
[0 0] [-1 1] [ 0. -0.5 1.16666667 -0.66666667 0. -0.16666667
0.16666667]
[0 0] [2 0] [ 0. -0.25 0.58333333 -0.33333333 0. -0.08333333
0.08333333]
[0 0] [ 1 -1] [ 0. -0.5 1.16666667 -0.66666667 0. -0.16666667
0.16666667]
[0 0] [1 1] [ 0. -0.5 1.16666667 -0.66666667 0. -0.16666667
0.16666667]
[0 0] [ 0 -2] [ 0. -0.25 0.58333333 -0.33333333 0. -0.08333333
0.08333333]
[0 0] [0 2] [ 0. -0.25 0.58333333 -0.33333333 0. -0.08333333
0.08333333]
[-1 0] [-1 0] [ 0. 0.75 -2.5 2. 0. 0.25 -0.5 ]
[-1 0] [1 0] [ 0. -0.25 0.83333333 -0.66666667 0. -0.08333333
0.16666667]
[-1 0] [ 0 -1] [ 0. -0.25 0.83333333 -0.66666667 0. -0.08333333
0.16666667]
[-1 0] [0 1] [ 0. -0.25 0.83333333 -0.66666667 0. -0.08333333
0.16666667]
[-1 0] [-2 0] [ 0. 0. -0.08333333 0. 0. 0.33333333
-0.25 ]
[-1 0] [-1 -1] [ 0. 0. -0.16666667 0.16666667 0. 0.16666667
-0.16666667]
[-1 0] [-1 1] [ 0. 0. -0.16666667 0.16666667 0. 0.16666667
-0.16666667]
[-1 0] [2 0] [ 0. 0. 0.25 -0.33333333 0. 0.
0.08333333]
[-1 0] [ 1 -1] [ 0. 0. 0.16666667 -0.16666667 0. -0.16666667
0.16666667]
[-1 0] [1 1] [ 0. 0. 0.16666667 -0.16666667 0. -0.16666667
0.16666667]
[-1 0] [ 0 -2] [ 0. 0. -0.08333333 0.16666667 0. -0.16666667
0.08333333]
[-1 0] [0 2] [ 0. 0. -0.08333333 0.16666667 0. -0.16666667
0.08333333]
[1 0] [-1 0] [ 0. -0.25 0.83333333 -0.66666667 0. -0.08333333
0.16666667]
[1 0] [1 0] [ 0. 0.75 -2.5 2. 0. 0.25 -0.5 ]
[1 0] [ 0 -1] [ 0. -0.25 0.83333333 -0.66666667 0. -0.08333333
0.16666667]
[1 0] [0 1] [ 0. -0.25 0.83333333 -0.66666667 0. -0.08333333
0.16666667]
[1 0] [-2 0] [ 0. 0. 0.25 -0.33333333 0. 0.
0.08333333]
[1 0] [-1 -1] [ 0. 0. 0.16666667 -0.16666667 0. -0.16666667
0.16666667]
[1 0] [-1 1] [ 0. 0. 0.16666667 -0.16666667 0. -0.16666667
0.16666667]
[1 0] [2 0] [ 0. 0. -0.08333333 0. 0. 0.33333333
-0.25 ]
[1 0] [ 1 -1] [ 0. 0. -0.16666667 0.16666667 0. 0.16666667
-0.16666667]
[1 0] [1 1] [ 0. 0. -0.16666667 0.16666667 0. 0.16666667
-0.16666667]
[1 0] [ 0 -2] [ 0. 0. -0.08333333 0.16666667 0. -0.16666667
0.08333333]
[1 0] [0 2] [ 0. 0. -0.08333333 0.16666667 0. -0.16666667
0.08333333]
[ 0 -1] [-1 0] [ 0. -0.25 0.83333333 -0.66666667 0. -0.08333333
0.16666667]
[ 0 -1] [1 0] [ 0. -0.25 0.83333333 -0.66666667 0. -0.08333333
0.16666667]
[ 0 -1] [ 0 -1] [ 0. 0.75 -2.5 2. 0. 0.25 -0.5 ]
[ 0 -1] [0 1] [ 0. -0.25 0.83333333 -0.66666667 0. -0.08333333
0.16666667]
[ 0 -1] [-2 0] [ 0. 0. -0.08333333 0.16666667 0. -0.16666667
0.08333333]
[ 0 -1] [-1 -1] [ 0. 0. -0.16666667 0.16666667 0. 0.16666667
-0.16666667]
[ 0 -1] [-1 1] [ 0. 0. 0.16666667 -0.16666667 0. -0.16666667
0.16666667]
[ 0 -1] [2 0] [ 0. 0. -0.08333333 0.16666667 0. -0.16666667
0.08333333]
[ 0 -1] [ 1 -1] [ 0. 0. -0.16666667 0.16666667 0. 0.16666667
-0.16666667]
[ 0 -1] [1 1] [ 0. 0. 0.16666667 -0.16666667 0. -0.16666667
0.16666667]
[ 0 -1] [ 0 -2] [ 0. 0. -0.08333333 0. 0. 0.33333333
-0.25 ]
[ 0 -1] [0 2] [ 0. 0. 0.25 -0.33333333 0. 0.
0.08333333]
[0 1] [-1 0] [ 0. -0.25 0.83333333 -0.66666667 0. -0.08333333
0.16666667]
[0 1] [1 0] [ 0. -0.25 0.83333333 -0.66666667 0. -0.08333333
0.16666667]
[0 1] [ 0 -1] [ 0. -0.25 0.83333333 -0.66666667 0. -0.08333333
0.16666667]
[0 1] [0 1] [ 0. 0.75 -2.5 2. 0. 0.25 -0.5 ]
[0 1] [-2 0] [ 0. 0. -0.08333333 0.16666667 0. -0.16666667
0.08333333]
[0 1] [-1 -1] [ 0. 0. 0.16666667 -0.16666667 0. -0.16666667
0.16666667]
[0 1] [-1 1] [ 0. 0. -0.16666667 0.16666667 0. 0.16666667
-0.16666667]
[0 1] [2 0] [ 0. 0. -0.08333333 0.16666667 0. -0.16666667
0.08333333]
[0 1] [ 1 -1] [ 0. 0. 0.16666667 -0.16666667 0. -0.16666667
0.16666667]
[0 1] [1 1] [ 0. 0. -0.16666667 0.16666667 0. 0.16666667
-0.16666667]
[0 1] [ 0 -2] [ 0. 0. 0.25 -0.33333333 0. 0.
0.08333333]
[0 1] [0 2] [ 0. 0. -0.08333333 0. 0. 0.33333333
-0.25 ]
[-2 0] [0 0] [ 0. -0.25 0.58333333 -0.33333333 0. -0.08333333
0.08333333]
[-2 0] [-1 0] [ 0. 0. -0.08333333 0. 0. 0.33333333
-0.25 ]
[-2 0] [1 0] [ 0. 0. 0.25 -0.33333333 0. 0.
0.08333333]
[-2 0] [ 0 -1] [ 0. 0. -0.08333333 0.16666667 0. -0.16666667
0.08333333]
[-2 0] [0 1] [ 0. 0. -0.08333333 0.16666667 0. -0.16666667
0.08333333]
[-2 0] [-2 0] [ 0. 0.08333333 -0.16666667 0. -0.33333333 0.91666667
-0.5 ]
[-2 0] [-1 -1] [ 0. 0.08333333 -0.25 0.33333333 -0.33333333 0.25
-0.08333333]
[-2 0] [-1 1] [ 0. 0.08333333 -0.25 0.33333333 -0.33333333 0.25
-0.08333333]
[-2 0] [2 0] [ 0. 0. 0.25 -0.33333333 0. 0.
0.08333333]
[-2 0] [ 1 -1] [ 0. 0. -0.16666667 0.33333333 0. -0.33333333
0.16666667]
[-2 0] [1 1] [ 0. 0. -0.16666667 0.33333333 0. -0.33333333
0.16666667]
[-2 0] [ 0 -2] [ 0. 0. 0.08333333 -0.33333333 0.5 -0.33333333
0.08333333]
[-2 0] [0 2] [ 0. 0. 0.08333333 -0.33333333 0.5 -0.33333333
0.08333333]
[-1 -1] [0 0] [ 0. -0.5 1.16666667 -0.66666667 0. -0.16666667
0.16666667]
[-1 -1] [-1 0] [ 0. 0. -0.16666667 0.16666667 0. 0.16666667
-0.16666667]
[-1 -1] [1 0] [ 0. 0. 0.16666667 -0.16666667 0. -0.16666667
0.16666667]
[-1 -1] [ 0 -1] [ 0. 0. -0.16666667 0.16666667 0. 0.16666667
-0.16666667]
[-1 -1] [0 1] [ 0. 0. 0.16666667 -0.16666667 0. -0.16666667
0.16666667]
[-1 -1] [-2 0] [ 0. 0.08333333 -0.25 0.33333333 -0.33333333 0.25
-0.08333333]
[-1 -1] [-1 -1] [ 0. 0.16666667 -0.16666667 -0.66666667 0.33333333 1.16666667
-0.83333333]
[-1 -1] [-1 1] [ 0. 0.08333333 -0.41666667 0.66666667 -0.33333333 -0.08333333
0.08333333]
[-1 -1] [2 0] [ 0. 0. -0.16666667 0.33333333 0. -0.33333333
0.16666667]
[-1 -1] [ 1 -1] [ 0. 0.08333333 -0.41666667 0.66666667 -0.33333333 -0.08333333
0.08333333]
[-1 -1] [1 1] [ 0. 0. 0.66666667 -1.33333333 1. -0.66666667
0.33333333]
[-1 -1] [ 0 -2] [ 0. 0.08333333 -0.25 0.33333333 -0.33333333 0.25
-0.08333333]
[-1 -1] [0 2] [ 0. 0. -0.16666667 0.33333333 0. -0.33333333
0.16666667]
[-1 1] [0 0] [ 0. -0.5 1.16666667 -0.66666667 0. -0.16666667
0.16666667]
[-1 1] [-1 0] [ 0. 0. -0.16666667 0.16666667 0. 0.16666667
-0.16666667]
[-1 1] [1 0] [ 0. 0. 0.16666667 -0.16666667 0. -0.16666667
0.16666667]
[-1 1] [ 0 -1] [ 0. 0. 0.16666667 -0.16666667 0. -0.16666667
0.16666667]
[-1 1] [0 1] [ 0. 0. -0.16666667 0.16666667 0. 0.16666667
-0.16666667]
[-1 1] [-2 0] [ 0. 0.08333333 -0.25 0.33333333 -0.33333333 0.25
-0.08333333]
[-1 1] [-1 -1] [ 0. 0.08333333 -0.41666667 0.66666667 -0.33333333 -0.08333333
0.08333333]
[-1 1] [-1 1] [ 0. 0.16666667 -0.16666667 -0.66666667 0.33333333 1.16666667
-0.83333333]
[-1 1] [2 0] [ 0. 0. -0.16666667 0.33333333 0. -0.33333333
0.16666667]
[-1 1] [ 1 -1] [ 0. 0. 0.66666667 -1.33333333 1. -0.66666667
0.33333333]
[-1 1] [1 1] [ 0. 0.08333333 -0.41666667 0.66666667 -0.33333333 -0.08333333
0.08333333]
[-1 1] [ 0 -2] [ 0. 0. -0.16666667 0.33333333 0. -0.33333333
0.16666667]
[-1 1] [0 2] [ 0. 0.08333333 -0.25 0.33333333 -0.33333333 0.25
-0.08333333]
[2 0] [0 0] [ 0. -0.25 0.58333333 -0.33333333 0. -0.08333333
0.08333333]
[2 0] [-1 0] [ 0. 0. 0.25 -0.33333333 0. 0.
0.08333333]
[2 0] [1 0] [ 0. 0. -0.08333333 0. 0. 0.33333333
-0.25 ]
[2 0] [ 0 -1] [ 0. 0. -0.08333333 0.16666667 0. -0.16666667
0.08333333]
[2 0] [0 1] [ 0. 0. -0.08333333 0.16666667 0. -0.16666667
0.08333333]
[2 0] [-2 0] [ 0. 0. 0.25 -0.33333333 0. 0.
0.08333333]
[2 0] [-1 -1] [ 0. 0. -0.16666667 0.33333333 0. -0.33333333
0.16666667]
[2 0] [-1 1] [ 0. 0. -0.16666667 0.33333333 0. -0.33333333
0.16666667]
[2 0] [2 0] [ 0. 0.08333333 -0.16666667 0. -0.33333333 0.91666667
-0.5 ]
[2 0] [ 1 -1] [ 0. 0.08333333 -0.25 0.33333333 -0.33333333 0.25
-0.08333333]
[2 0] [1 1] [ 0. 0.08333333 -0.25 0.33333333 -0.33333333 0.25
-0.08333333]
[2 0] [ 0 -2] [ 0. 0. 0.08333333 -0.33333333 0.5 -0.33333333
0.08333333]
[2 0] [0 2] [ 0. 0. 0.08333333 -0.33333333 0.5 -0.33333333
0.08333333]
[ 1 -1] [0 0] [ 0. -0.5 1.16666667 -0.66666667 0. -0.16666667
0.16666667]
[ 1 -1] [-1 0] [ 0. 0. 0.16666667 -0.16666667 0. -0.16666667
0.16666667]
[ 1 -1] [1 0] [ 0. 0. -0.16666667 0.16666667 0. 0.16666667
-0.16666667]
[ 1 -1] [ 0 -1] [ 0. 0. -0.16666667 0.16666667 0. 0.16666667
-0.16666667]
[ 1 -1] [0 1] [ 0. 0. 0.16666667 -0.16666667 0. -0.16666667
0.16666667]
[ 1 -1] [-2 0] [ 0. 0. -0.16666667 0.33333333 0. -0.33333333
0.16666667]
[ 1 -1] [-1 -1] [ 0. 0.08333333 -0.41666667 0.66666667 -0.33333333 -0.08333333
0.08333333]
[ 1 -1] [-1 1] [ 0. 0. 0.66666667 -1.33333333 1. -0.66666667
0.33333333]
[ 1 -1] [2 0] [ 0. 0.08333333 -0.25 0.33333333 -0.33333333 0.25
-0.08333333]
[ 1 -1] [ 1 -1] [ 0. 0.16666667 -0.16666667 -0.66666667 0.33333333 1.16666667
-0.83333333]
[ 1 -1] [1 1] [ 0. 0.08333333 -0.41666667 0.66666667 -0.33333333 -0.08333333
0.08333333]
[ 1 -1] [ 0 -2] [ 0. 0.08333333 -0.25 0.33333333 -0.33333333 0.25
-0.08333333]
[ 1 -1] [0 2] [ 0. 0. -0.16666667 0.33333333 0. -0.33333333
0.16666667]
[1 1] [0 0] [ 0. -0.5 1.16666667 -0.66666667 0. -0.16666667
0.16666667]
[1 1] [-1 0] [ 0. 0. 0.16666667 -0.16666667 0. -0.16666667
0.16666667]
[1 1] [1 0] [ 0. 0. -0.16666667 0.16666667 0. 0.16666667
-0.16666667]
[1 1] [ 0 -1] [ 0. 0. 0.16666667 -0.16666667 0. -0.16666667
0.16666667]
[1 1] [0 1] [ 0. 0. -0.16666667 0.16666667 0. 0.16666667
-0.16666667]
[1 1] [-2 0] [ 0. 0. -0.16666667 0.33333333 0. -0.33333333
0.16666667]
[1 1] [-1 -1] [ 0. 0. 0.66666667 -1.33333333 1. -0.66666667
0.33333333]
[1 1] [-1 1] [ 0. 0.08333333 -0.41666667 0.66666667 -0.33333333 -0.08333333
0.08333333]
[1 1] [2 0] [ 0. 0.08333333 -0.25 0.33333333 -0.33333333 0.25
-0.08333333]
[1 1] [ 1 -1] [ 0. 0.08333333 -0.41666667 0.66666667 -0.33333333 -0.08333333
0.08333333]
[1 1] [1 1] [ 0. 0.16666667 -0.16666667 -0.66666667 0.33333333 1.16666667
-0.83333333]
[1 1] [ 0 -2] [ 0. 0. -0.16666667 0.33333333 0. -0.33333333
0.16666667]
[1 1] [0 2] [ 0. 0.08333333 -0.25 0.33333333 -0.33333333 0.25
-0.08333333]
[ 0 -2] [0 0] [ 0. -0.25 0.58333333 -0.33333333 0. -0.08333333
0.08333333]
[ 0 -2] [-1 0] [ 0. 0. -0.08333333 0.16666667 0. -0.16666667
0.08333333]
[ 0 -2] [1 0] [ 0. 0. -0.08333333 0.16666667 0. -0.16666667
0.08333333]
[ 0 -2] [ 0 -1] [ 0. 0. -0.08333333 0. 0. 0.33333333
-0.25 ]
[ 0 -2] [0 1] [ 0. 0. 0.25 -0.33333333 0. 0.
0.08333333]
[ 0 -2] [-2 0] [ 0. 0. 0.08333333 -0.33333333 0.5 -0.33333333
0.08333333]
[ 0 -2] [-1 -1] [ 0. 0.08333333 -0.25 0.33333333 -0.33333333 0.25
-0.08333333]
[ 0 -2] [-1 1] [ 0. 0. -0.16666667 0.33333333 0. -0.33333333
0.16666667]
[ 0 -2] [2 0] [ 0. 0. 0.08333333 -0.33333333 0.5 -0.33333333
0.08333333]
[ 0 -2] [ 1 -1] [ 0. 0.08333333 -0.25 0.33333333 -0.33333333 0.25
-0.08333333]
[ 0 -2] [1 1] [ 0. 0. -0.16666667 0.33333333 0. -0.33333333
0.16666667]
[ 0 -2] [ 0 -2] [ 0. 0.08333333 -0.16666667 0. -0.33333333 0.91666667
-0.5 ]
[ 0 -2] [0 2] [ 0. 0. 0.25 -0.33333333 0. 0.
0.08333333]
[0 2] [0 0] [ 0. -0.25 0.58333333 -0.33333333 0. -0.08333333
0.08333333]
[0 2] [-1 0] [ 0. 0. -0.08333333 0.16666667 0. -0.16666667
0.08333333]
[0 2] [1 0] [ 0. 0. -0.08333333 0.16666667 0. -0.16666667
0.08333333]
[0 2] [ 0 -1] [ 0. 0. 0.25 -0.33333333 0. 0.
0.08333333]
[0 2] [0 1] [ 0. 0. -0.08333333 0. 0. 0.33333333
-0.25 ]
[0 2] [-2 0] [ 0. 0. 0.08333333 -0.33333333 0.5 -0.33333333
0.08333333]
[0 2] [-1 -1] [ 0. 0. -0.16666667 0.33333333 0. -0.33333333
0.16666667]
[0 2] [-1 1] [ 0. 0.08333333 -0.25 0.33333333 -0.33333333 0.25
-0.08333333]
[0 2] [2 0] [ 0. 0. 0.08333333 -0.33333333 0.5 -0.33333333
0.08333333]
[0 2] [ 1 -1] [ 0. 0. -0.16666667 0.33333333 0. -0.33333333
0.16666667]
[0 2] [1 1] [ 0. 0.08333333 -0.25 0.33333333 -0.33333333 0.25
-0.08333333]
[0 2] [ 0 -2] [ 0. 0. 0.25 -0.33333333 0. 0.
0.08333333]
[0 2] [0 2] [ 0. 0.08333333 -0.16666667 0. -0.33333333 0.91666667
-0.5 ]
```python
printbasisbasis(symmWGGbar_far, {(0,0)})
```
[0 0] [0 0] [ 0. -3. 3. 0. 0. 3. -3.]
[0 0] [-1 0] [ 0. 2.5625 -4.83333333 2.22222222 0. -0.29166667
0.38888889 0. 0. -0.04861111]
[0 0] [1 0] [ 0. 2.5625 -4.83333333 2.22222222 0. -0.29166667
0.38888889 0. 0. -0.04861111]
[0 0] [ 0 -1] [ 0. 2.5625 -4.83333333 2.22222222 0. -0.29166667
0.38888889 0. 0. -0.04861111]
[0 0] [0 1] [ 0. 2.5625 -4.83333333 2.22222222 0. -0.29166667
0.38888889 0. 0. -0.04861111]
[0 0] [-2 0] [ 0. -0.75 1.75 -1. 0. -0.25 0.25]
[0 0] [-1 -1] [ 0. -1.5 3.5 -2. 0. -0.5 0.5]
[0 0] [-1 1] [ 0. -1.5 3.5 -2. 0. -0.5 0.5]
[0 0] [2 0] [ 0. -0.75 1.75 -1. 0. -0.25 0.25]
[0 0] [ 1 -1] [ 0. -1.5 3.5 -2. 0. -0.5 0.5]
[0 0] [1 1] [ 0. -1.5 3.5 -2. 0. -0.5 0.5]
[0 0] [ 0 -2] [ 0. -0.75 1.75 -1. 0. -0.25 0.25]
[0 0] [0 2] [ 0. -0.75 1.75 -1. 0. -0.25 0.25]
[0 0] [-3 0] [ 0. 0.0625 -0.16666667 0.11111111 0. 0.04166667
-0.05555556 0. 0. 0.00694444]
[0 0] [-2 -1] [ 0. 0.1875 -0.5 0.33333333 0. 0.125
-0.16666667 0. 0. 0.02083333]
[0 0] [-2 1] [ 0. 0.1875 -0.5 0.33333333 0. 0.125
-0.16666667 0. 0. 0.02083333]
[0 0] [-1 -2] [ 0. 0.1875 -0.5 0.33333333 0. 0.125
-0.16666667 0. 0. 0.02083333]
[0 0] [-1 2] [ 0. 0.1875 -0.5 0.33333333 0. 0.125
-0.16666667 0. 0. 0.02083333]
[0 0] [3 0] [ 0. 0.0625 -0.16666667 0.11111111 0. 0.04166667
-0.05555556 0. 0. 0.00694444]
[0 0] [ 2 -1] [ 0. 0.1875 -0.5 0.33333333 0. 0.125
-0.16666667 0. 0. 0.02083333]
[0 0] [2 1] [ 0. 0.1875 -0.5 0.33333333 0. 0.125
-0.16666667 0. 0. 0.02083333]
[0 0] [ 1 -2] [ 0. 0.1875 -0.5 0.33333333 0. 0.125
-0.16666667 0. 0. 0.02083333]
[0 0] [1 2] [ 0. 0.1875 -0.5 0.33333333 0. 0.125
-0.16666667 0. 0. 0.02083333]
[0 0] [ 0 -3] [ 0. 0.0625 -0.16666667 0.11111111 0. 0.04166667
-0.05555556 0. 0. 0.00694444]
[0 0] [0 3] [ 0. 0.0625 -0.16666667 0.11111111 0. 0.04166667
-0.05555556 0. 0. 0.00694444]
```python
printbasisbasis(symmWGGbar-symmWGGbar_far)
```
[0 0] [0 0] [ 0. 3. -3. 0. 0. -3. 3.]
[0 0] [-1 0] [ 0. -2.5625 4.83333333 -2.22222222 0. 0.29166667
-0.38888889 0. 0. 0.04861111]
[0 0] [1 0] [ 0. -2.5625 4.83333333 -2.22222222 0. 0.29166667
-0.38888889 0. 0. 0.04861111]
[0 0] [ 0 -1] [ 0. -2.5625 4.83333333 -2.22222222 0. 0.29166667
-0.38888889 0. 0. 0.04861111]
[0 0] [0 1] [ 0. -2.5625 4.83333333 -2.22222222 0. 0.29166667
-0.38888889 0. 0. 0.04861111]
[0 0] [-2 0] [ 0. 0.75 -1.75 1. 0. 0.25 -0.25]
[0 0] [-1 -1] [ 0. 1.5 -3.5 2. 0. 0.5 -0.5]
[0 0] [-1 1] [ 0. 1.5 -3.5 2. 0. 0.5 -0.5]
[0 0] [2 0] [ 0. 0.75 -1.75 1. 0. 0.25 -0.25]
[0 0] [ 1 -1] [ 0. 1.5 -3.5 2. 0. 0.5 -0.5]
[0 0] [1 1] [ 0. 1.5 -3.5 2. 0. 0.5 -0.5]
[0 0] [ 0 -2] [ 0. 0.75 -1.75 1. 0. 0.25 -0.25]
[0 0] [0 2] [ 0. 0.75 -1.75 1. 0. 0.25 -0.25]
[0 0] [-3 0] [ 0. -0.0625 0.16666667 -0.11111111 0. -0.04166667
0.05555556 0. 0. -0.00694444]
[0 0] [-2 -1] [ 0. -0.1875 0.5 -0.33333333 0. -0.125
0.16666667 0. 0. -0.02083333]
[0 0] [-2 1] [ 0. -0.1875 0.5 -0.33333333 0. -0.125
0.16666667 0. 0. -0.02083333]
[0 0] [-1 -2] [ 0. -0.1875 0.5 -0.33333333 0. -0.125
0.16666667 0. 0. -0.02083333]
[0 0] [-1 2] [ 0. -0.1875 0.5 -0.33333333 0. -0.125
0.16666667 0. 0. -0.02083333]
[0 0] [3 0] [ 0. -0.0625 0.16666667 -0.11111111 0. -0.04166667
0.05555556 0. 0. -0.00694444]
[0 0] [ 2 -1] [ 0. -0.1875 0.5 -0.33333333 0. -0.125
0.16666667 0. 0. -0.02083333]
[0 0] [2 1] [ 0. -0.1875 0.5 -0.33333333 0. -0.125
0.16666667 0. 0. -0.02083333]
[0 0] [ 1 -2] [ 0. -0.1875 0.5 -0.33333333 0. -0.125
0.16666667 0. 0. -0.02083333]
[0 0] [1 2] [ 0. -0.1875 0.5 -0.33333333 0. -0.125
0.16666667 0. 0. -0.02083333]
[0 0] [ 0 -3] [ 0. -0.0625 0.16666667 -0.11111111 0. -0.04166667
0.05555556 0. 0. -0.00694444]
[0 0] [0 3] [ 0. -0.0625 0.16666667 -0.11111111 0. -0.04166667
0.05555556 0. 0. -0.00694444]
[-1 0] [0 0] [ 0. -2.5625 4.83333333 -2.22222222 0. 0.29166667
-0.38888889 0. 0. 0.04861111]
[-1 0] [-1 0] [ 0. 0.75 -1.44444444 -0.16666667 0.05555556 -0.47222222
1.22222222 -0.05555556 0.16666667 -0.05555556]
[-1 0] [1 0] [ 0. 0.75 0.22916667 -0.4375 -0.04861111 -0.13194444
-0.50694444 0.04861111 0.21527778 -0.11805556]
[-1 0] [ 0 -1] [ 0. 0.75 -2.30555556 1.69444444 0.02777778 0.22222222
-0.36111111 -0.02777778 0.08333333 -0.08333333]
[-1 0] [0 1] [ 0. 0.75 -2.30555556 1.69444444 0.02777778 0.22222222
-0.36111111 -0.02777778 0.08333333 -0.08333333]
[-1 0] [-2 0] [ 0. 0. 0.22916667 0.09027778 -0.27083333 -0.60416667
0.54861111 0.02083333 -0.0625 0.04861111]
[-1 0] [-1 -1] [ 0. -0.04166667 0.59722222 -0.59722222 -0.06944444 -0.18055556
0.29166667 -0.01388889 -0.04166667 0.05555556]
[-1 0] [-1 1] [ 0. -0.04166667 0.59722222 -0.59722222 -0.06944444 -0.18055556
0.29166667 -0.01388889 -0.04166667 0.05555556]
[-1 0] [2 0] [ 0. -0.0625 -0.60416667 0.97916667 -0.02083333 -0.22916667
-0.0625 0.02083333 0.02083333 -0.04166667]
[-1 0] [ 1 -1] [ 0. -0.10416667 -0.15277778 0.18055556 0.18055556 0.11111111
-0.18055556 -0.01388889 0.04166667 -0.0625 ]
[-1 0] [1 1] [ 0. -0.10416667 -0.15277778 0.18055556 0.18055556 0.11111111
-0.18055556 -0.01388889 0.04166667 -0.0625 ]
[-1 0] [ 0 -2] [ 0. -0.04166667 0.32638889 -0.50694444 0.03472222 0.34027778
-0.15972222 0.00694444 0.02083333 -0.02083333]
[-1 0] [0 2] [ 0. -0.04166667 0.32638889 -0.50694444 0.03472222 0.34027778
-0.15972222 0.00694444 0.02083333 -0.02083333]
[-1 0] [-3 0] [ 0. 0. -0.02083333 0.00694444 0.00694444 0.09027778
-0.09027778 -0.00694444 -0.00694444 0.02083333]
[-1 0] [-2 -1] [ 0. 0. -0.05555556 0.05555556 0. 0.11111111
-0.11111111 0. -0.05555556 0.05555556]
[-1 0] [-2 1] [ 0. 0. -0.05555556 0.05555556 0. 0.11111111
-0.11111111 0. -0.05555556 0.05555556]
[-1 0] [-1 -2] [ 0. 0. -0.04861111 0.09027778 -0.02083333 -0.04861111
0.04861111 0.02083333 -0.09027778 0.04861111]
[-1 0] [-1 2] [ 0. 0. -0.04861111 0.09027778 -0.02083333 -0.04861111
0.04861111 0.02083333 -0.09027778 0.04861111]
[-1 0] [3 0] [ 0. 0. 0.05555556 -0.13888889 0.02777778 0.16666667
-0.11111111 -0.02777778 0.02777778]
[-1 0] [ 2 -1] [ 0. 0. 0.04166667 -0.09722222 0.01388889 0.09722222
-0.04166667 -0.01388889 -0.01388889 0.01388889]
[-1 0] [2 1] [ 0. 0. 0.04166667 -0.09722222 0.01388889 0.09722222
-0.04166667 -0.01388889 -0.01388889 0.01388889]
[-1 0] [ 1 -2] [ 0. 0. -0.02777778 0.08333333 -0.02777778 -0.13888889
0.13888889 0.02777778 -0.08333333 0.02777778]
[-1 0] [1 2] [ 0. 0. -0.02777778 0.08333333 -0.02777778 -0.13888889
0.13888889 0.02777778 -0.08333333 0.02777778]
[-1 0] [ 0 -3] [ 0. 0. -0.01388889 0.04166667 -0.01388889 -0.06944444
0.06944444 0.01388889 -0.04166667 0.01388889]
[-1 0] [0 3] [ 0. 0. -0.01388889 0.04166667 -0.01388889 -0.06944444
0.06944444 0.01388889 -0.04166667 0.01388889]
[1 0] [0 0] [ 0. -2.5625 4.83333333 -2.22222222 0. 0.29166667
-0.38888889 0. 0. 0.04861111]
[1 0] [-1 0] [ 0. 0.75 0.22916667 -0.4375 -0.04861111 -0.13194444
-0.50694444 0.04861111 0.21527778 -0.11805556]
[1 0] [1 0] [ 0. 0.75 -1.44444444 -0.16666667 0.05555556 -0.47222222
1.22222222 -0.05555556 0.16666667 -0.05555556]
[1 0] [ 0 -1] [ 0. 0.75 -2.30555556 1.69444444 0.02777778 0.22222222
-0.36111111 -0.02777778 0.08333333 -0.08333333]
[1 0] [0 1] [ 0. 0.75 -2.30555556 1.69444444 0.02777778 0.22222222
-0.36111111 -0.02777778 0.08333333 -0.08333333]
[1 0] [-2 0] [ 0. -0.0625 -0.60416667 0.97916667 -0.02083333 -0.22916667
-0.0625 0.02083333 0.02083333 -0.04166667]
[1 0] [-1 -1] [ 0. -0.10416667 -0.15277778 0.18055556 0.18055556 0.11111111
-0.18055556 -0.01388889 0.04166667 -0.0625 ]
[1 0] [-1 1] [ 0. -0.10416667 -0.15277778 0.18055556 0.18055556 0.11111111
-0.18055556 -0.01388889 0.04166667 -0.0625 ]
[1 0] [2 0] [ 0. 0. 0.22916667 0.09027778 -0.27083333 -0.60416667
0.54861111 0.02083333 -0.0625 0.04861111]
[1 0] [ 1 -1] [ 0. -0.04166667 0.59722222 -0.59722222 -0.06944444 -0.18055556
0.29166667 -0.01388889 -0.04166667 0.05555556]
[1 0] [1 1] [ 0. -0.04166667 0.59722222 -0.59722222 -0.06944444 -0.18055556
0.29166667 -0.01388889 -0.04166667 0.05555556]
[1 0] [ 0 -2] [ 0. -0.04166667 0.32638889 -0.50694444 0.03472222 0.34027778
-0.15972222 0.00694444 0.02083333 -0.02083333]
[1 0] [0 2] [ 0. -0.04166667 0.32638889 -0.50694444 0.03472222 0.34027778
-0.15972222 0.00694444 0.02083333 -0.02083333]
[1 0] [-3 0] [ 0. 0. 0.05555556 -0.13888889 0.02777778 0.16666667
-0.11111111 -0.02777778 0.02777778]
[1 0] [-2 -1] [ 0. 0. 0.04166667 -0.09722222 0.01388889 0.09722222
-0.04166667 -0.01388889 -0.01388889 0.01388889]
[1 0] [-2 1] [ 0. 0. 0.04166667 -0.09722222 0.01388889 0.09722222
-0.04166667 -0.01388889 -0.01388889 0.01388889]
[1 0] [-1 -2] [ 0. 0. -0.02777778 0.08333333 -0.02777778 -0.13888889
0.13888889 0.02777778 -0.08333333 0.02777778]
[1 0] [-1 2] [ 0. 0. -0.02777778 0.08333333 -0.02777778 -0.13888889
0.13888889 0.02777778 -0.08333333 0.02777778]
[1 0] [3 0] [ 0. 0. -0.02083333 0.00694444 0.00694444 0.09027778
-0.09027778 -0.00694444 -0.00694444 0.02083333]
[1 0] [ 2 -1] [ 0. 0. -0.05555556 0.05555556 0. 0.11111111
-0.11111111 0. -0.05555556 0.05555556]
[1 0] [2 1] [ 0. 0. -0.05555556 0.05555556 0. 0.11111111
-0.11111111 0. -0.05555556 0.05555556]
[1 0] [ 1 -2] [ 0. 0. -0.04861111 0.09027778 -0.02083333 -0.04861111
0.04861111 0.02083333 -0.09027778 0.04861111]
[1 0] [1 2] [ 0. 0. -0.04861111 0.09027778 -0.02083333 -0.04861111
0.04861111 0.02083333 -0.09027778 0.04861111]
[1 0] [ 0 -3] [ 0. 0. -0.01388889 0.04166667 -0.01388889 -0.06944444
0.06944444 0.01388889 -0.04166667 0.01388889]
[1 0] [0 3] [ 0. 0. -0.01388889 0.04166667 -0.01388889 -0.06944444
0.06944444 0.01388889 -0.04166667 0.01388889]
[ 0 -1] [0 0] [ 0. -2.5625 4.83333333 -2.22222222 0. 0.29166667
-0.38888889 0. 0. 0.04861111]
[ 0 -1] [-1 0] [ 0. 0.75 -2.30555556 1.69444444 0.02777778 0.22222222
-0.36111111 -0.02777778 0.08333333 -0.08333333]
[ 0 -1] [1 0] [ 0. 0.75 -2.30555556 1.69444444 0.02777778 0.22222222
-0.36111111 -0.02777778 0.08333333 -0.08333333]
[ 0 -1] [ 0 -1] [ 0. 0.75 -1.44444444 -0.16666667 0.05555556 -0.47222222
1.22222222 -0.05555556 0.16666667 -0.05555556]
[ 0 -1] [0 1] [ 0. 0.75 0.22916667 -0.4375 -0.04861111 -0.13194444
-0.50694444 0.04861111 0.21527778 -0.11805556]
[ 0 -1] [-2 0] [ 0. -0.04166667 0.32638889 -0.50694444 0.03472222 0.34027778
-0.15972222 0.00694444 0.02083333 -0.02083333]
[ 0 -1] [-1 -1] [ 0. -0.04166667 0.59722222 -0.59722222 -0.06944444 -0.18055556
0.29166667 -0.01388889 -0.04166667 0.05555556]
[ 0 -1] [-1 1] [ 0. -0.10416667 -0.15277778 0.18055556 0.18055556 0.11111111
-0.18055556 -0.01388889 0.04166667 -0.0625 ]
[ 0 -1] [2 0] [ 0. -0.04166667 0.32638889 -0.50694444 0.03472222 0.34027778
-0.15972222 0.00694444 0.02083333 -0.02083333]
[ 0 -1] [ 1 -1] [ 0. -0.04166667 0.59722222 -0.59722222 -0.06944444 -0.18055556
0.29166667 -0.01388889 -0.04166667 0.05555556]
[ 0 -1] [1 1] [ 0. -0.10416667 -0.15277778 0.18055556 0.18055556 0.11111111
-0.18055556 -0.01388889 0.04166667 -0.0625 ]
[ 0 -1] [ 0 -2] [ 0. 0. 0.22916667 0.09027778 -0.27083333 -0.60416667
0.54861111 0.02083333 -0.0625 0.04861111]
[ 0 -1] [0 2] [ 0. -0.0625 -0.60416667 0.97916667 -0.02083333 -0.22916667
-0.0625 0.02083333 0.02083333 -0.04166667]
[ 0 -1] [-3 0] [ 0. 0. -0.01388889 0.04166667 -0.01388889 -0.06944444
0.06944444 0.01388889 -0.04166667 0.01388889]
[ 0 -1] [-2 -1] [ 0. 0. -0.04861111 0.09027778 -0.02083333 -0.04861111
0.04861111 0.02083333 -0.09027778 0.04861111]
[ 0 -1] [-2 1] [ 0. 0. -0.02777778 0.08333333 -0.02777778 -0.13888889
0.13888889 0.02777778 -0.08333333 0.02777778]
[ 0 -1] [-1 -2] [ 0. 0. -0.05555556 0.05555556 0. 0.11111111
-0.11111111 0. -0.05555556 0.05555556]
[ 0 -1] [-1 2] [ 0. 0. 0.04166667 -0.09722222 0.01388889 0.09722222
-0.04166667 -0.01388889 -0.01388889 0.01388889]
[ 0 -1] [3 0] [ 0. 0. -0.01388889 0.04166667 -0.01388889 -0.06944444
0.06944444 0.01388889 -0.04166667 0.01388889]
[ 0 -1] [ 2 -1] [ 0. 0. -0.04861111 0.09027778 -0.02083333 -0.04861111
0.04861111 0.02083333 -0.09027778 0.04861111]
[ 0 -1] [2 1] [ 0. 0. -0.02777778 0.08333333 -0.02777778 -0.13888889
0.13888889 0.02777778 -0.08333333 0.02777778]
[ 0 -1] [ 1 -2] [ 0. 0. -0.05555556 0.05555556 0. 0.11111111
-0.11111111 0. -0.05555556 0.05555556]
[ 0 -1] [1 2] [ 0. 0. 0.04166667 -0.09722222 0.01388889 0.09722222
-0.04166667 -0.01388889 -0.01388889 0.01388889]
[ 0 -1] [ 0 -3] [ 0. 0. -0.02083333 0.00694444 0.00694444 0.09027778
-0.09027778 -0.00694444 -0.00694444 0.02083333]
[ 0 -1] [0 3] [ 0. 0. 0.05555556 -0.13888889 0.02777778 0.16666667
-0.11111111 -0.02777778 0.02777778]
[0 1] [0 0] [ 0. -2.5625 4.83333333 -2.22222222 0. 0.29166667
-0.38888889 0. 0. 0.04861111]
[0 1] [-1 0] [ 0. 0.75 -2.30555556 1.69444444 0.02777778 0.22222222
-0.36111111 -0.02777778 0.08333333 -0.08333333]
[0 1] [1 0] [ 0. 0.75 -2.30555556 1.69444444 0.02777778 0.22222222
-0.36111111 -0.02777778 0.08333333 -0.08333333]
[0 1] [ 0 -1] [ 0. 0.75 0.22916667 -0.4375 -0.04861111 -0.13194444
-0.50694444 0.04861111 0.21527778 -0.11805556]
[0 1] [0 1] [ 0. 0.75 -1.44444444 -0.16666667 0.05555556 -0.47222222
1.22222222 -0.05555556 0.16666667 -0.05555556]
[0 1] [-2 0] [ 0. -0.04166667 0.32638889 -0.50694444 0.03472222 0.34027778
-0.15972222 0.00694444 0.02083333 -0.02083333]
[0 1] [-1 -1] [ 0. -0.10416667 -0.15277778 0.18055556 0.18055556 0.11111111
-0.18055556 -0.01388889 0.04166667 -0.0625 ]
[0 1] [-1 1] [ 0. -0.04166667 0.59722222 -0.59722222 -0.06944444 -0.18055556
0.29166667 -0.01388889 -0.04166667 0.05555556]
[0 1] [2 0] [ 0. -0.04166667 0.32638889 -0.50694444 0.03472222 0.34027778
-0.15972222 0.00694444 0.02083333 -0.02083333]
[0 1] [ 1 -1] [ 0. -0.10416667 -0.15277778 0.18055556 0.18055556 0.11111111
-0.18055556 -0.01388889 0.04166667 -0.0625 ]
[0 1] [1 1] [ 0. -0.04166667 0.59722222 -0.59722222 -0.06944444 -0.18055556
0.29166667 -0.01388889 -0.04166667 0.05555556]
[0 1] [ 0 -2] [ 0. -0.0625 -0.60416667 0.97916667 -0.02083333 -0.22916667
-0.0625 0.02083333 0.02083333 -0.04166667]
[0 1] [0 2] [ 0. 0. 0.22916667 0.09027778 -0.27083333 -0.60416667
0.54861111 0.02083333 -0.0625 0.04861111]
[0 1] [-3 0] [ 0. 0. -0.01388889 0.04166667 -0.01388889 -0.06944444
0.06944444 0.01388889 -0.04166667 0.01388889]
[0 1] [-2 -1] [ 0. 0. -0.02777778 0.08333333 -0.02777778 -0.13888889
0.13888889 0.02777778 -0.08333333 0.02777778]
[0 1] [-2 1] [ 0. 0. -0.04861111 0.09027778 -0.02083333 -0.04861111
0.04861111 0.02083333 -0.09027778 0.04861111]
[0 1] [-1 -2] [ 0. 0. 0.04166667 -0.09722222 0.01388889 0.09722222
-0.04166667 -0.01388889 -0.01388889 0.01388889]
[0 1] [-1 2] [ 0. 0. -0.05555556 0.05555556 0. 0.11111111
-0.11111111 0. -0.05555556 0.05555556]
[0 1] [3 0] [ 0. 0. -0.01388889 0.04166667 -0.01388889 -0.06944444
0.06944444 0.01388889 -0.04166667 0.01388889]
[0 1] [ 2 -1] [ 0. 0. -0.02777778 0.08333333 -0.02777778 -0.13888889
0.13888889 0.02777778 -0.08333333 0.02777778]
[0 1] [2 1] [ 0. 0. -0.04861111 0.09027778 -0.02083333 -0.04861111
0.04861111 0.02083333 -0.09027778 0.04861111]
[0 1] [ 1 -2] [ 0. 0. 0.04166667 -0.09722222 0.01388889 0.09722222
-0.04166667 -0.01388889 -0.01388889 0.01388889]
[0 1] [1 2] [ 0. 0. -0.05555556 0.05555556 0. 0.11111111
-0.11111111 0. -0.05555556 0.05555556]
[0 1] [ 0 -3] [ 0. 0. 0.05555556 -0.13888889 0.02777778 0.16666667
-0.11111111 -0.02777778 0.02777778]
[0 1] [0 3] [ 0. 0. -0.02083333 0.00694444 0.00694444 0.09027778
-0.09027778 -0.00694444 -0.00694444 0.02083333]
[-2 0] [0 0] [ 0. 0.75 -1.75 1. 0. 0.25 -0.25]
[-2 0] [-1 0] [ 0. 0. 0.22916667 0.09027778 -0.27083333 -0.60416667
0.54861111 0.02083333 -0.0625 0.04861111]
[-2 0] [1 0] [ 0. -0.0625 -0.60416667 0.97916667 -0.02083333 -0.22916667
-0.0625 0.02083333 0.02083333 -0.04166667]
[-2 0] [ 0 -1] [ 0. -0.04166667 0.32638889 -0.50694444 0.03472222 0.34027778
-0.15972222 0.00694444 0.02083333 -0.02083333]
[-2 0] [0 1] [ 0. -0.04166667 0.32638889 -0.50694444 0.03472222 0.34027778
-0.15972222 0.00694444 0.02083333 -0.02083333]
[-2 0] [-2 0] [ 0. -0.25 0.47222222 -0.02777778 1.02777778 -2.5
1.25 -0.02777778 0.02777778 0.02777778]
[-2 0] [-1 -1] [ 0. -0.25 0.69444444 -0.94444444 1. -0.63888889
0.13888889 0. -0.05555556 0.05555556]
[-2 0] [-1 1] [ 0. -0.25 0.69444444 -0.94444444 1. -0.63888889
0.13888889 0. -0.05555556 0.05555556]
[-2 0] [2 0] [ 0. 0. -0.66666667 0.86111111 0.02777778 0.02777778
-0.27777778 -0.02777778 0.13888889 -0.08333333]
[-2 0] [ 1 -1] [ 0. 0. 0.55555556 -1.05555556 0. 0.88888889
-0.38888889 0. 0.05555556 -0.05555556]
[-2 0] [1 1] [ 0. 0. 0.55555556 -1.05555556 0. 0.88888889
-0.38888889 0. 0.05555556 -0.05555556]
[-2 0] [ 0 -2] [ 0. 0. -0.27777778 1.08333333 -1.52777778 0.86111111
-0.11111111 0.02777778 -0.08333333 0.02777778]
[-2 0] [0 2] [ 0. 0. -0.27777778 1.08333333 -1.52777778 0.86111111
-0.11111111 0.02777778 -0.08333333 0.02777778]
[-2 0] [-3 0] [ 0. 0.02083333 -0.04861111 0.00694444 -0.07638889 0.27083333
-0.17361111 -0.00694444 -0.03472222 0.04166667]
[-2 0] [-2 -1] [ 0. 0.04166667 -0.11805556 0.10416667 -0.17361111 0.35416667
-0.21527778 0.00694444 -0.04861111 0.04861111]
[-2 0] [-2 1] [ 0. 0.04166667 -0.11805556 0.10416667 -0.17361111 0.35416667
-0.21527778 0.00694444 -0.04861111 0.04861111]
[-2 0] [-1 -2] [ 0. 0.02083333 -0.04861111 0.00694444 0.04861111 -0.02083333
-0.00694444 -0.00694444 0.00694444]
[-2 0] [-1 2] [ 0. 0.02083333 -0.04861111 0.00694444 0.04861111 -0.02083333
-0.00694444 -0.00694444 0.00694444]
[-2 0] [3 0] [ 0. 0. 0.0625 -0.10416667 0.00694444 0.00694444
0.04861111 -0.00694444 -0.00694444 -0.00694444]
[-2 0] [ 2 -1] [ 0. 0. 0.02083333 -0.00694444 -0.00694444 -0.09027778
0.09027778 0.00694444 0.00694444 -0.02083333]
[-2 0] [2 1] [ 0. 0. 0.02083333 -0.00694444 -0.00694444 -0.09027778
0.09027778 0.00694444 0.00694444 -0.02083333]
[-2 0] [ 1 -2] [ 0. 0. -0.02083333 0.00694444 0.13194444 -0.20138889
0.07638889 -0.00694444 0.03472222 -0.02083333]
[-2 0] [1 2] [ 0. 0. -0.02083333 0.00694444 0.13194444 -0.20138889
0.07638889 -0.00694444 0.03472222 -0.02083333]
[-2 0] [ 0 -3] [ 0. 0. 0.02083333 -0.09027778 0.14583333 -0.10416667
0.03472222 -0.02083333 0.02083333 -0.00694444]
[-2 0] [0 3] [ 0. 0. 0.02083333 -0.09027778 0.14583333 -0.10416667
0.03472222 -0.02083333 0.02083333 -0.00694444]
[-1 -1] [0 0] [ 0. 1.5 -3.5 2. 0. 0.5 -0.5]
[-1 -1] [-1 0] [ 0. -0.04166667 0.59722222 -0.59722222 -0.06944444 -0.18055556
0.29166667 -0.01388889 -0.04166667 0.05555556]
[-1 -1] [1 0] [ 0. -0.10416667 -0.15277778 0.18055556 0.18055556 0.11111111
-0.18055556 -0.01388889 0.04166667 -0.0625 ]
[-1 -1] [ 0 -1] [ 0. -0.04166667 0.59722222 -0.59722222 -0.06944444 -0.18055556
0.29166667 -0.01388889 -0.04166667 0.05555556]
[-1 -1] [0 1] [ 0. -0.10416667 -0.15277778 0.18055556 0.18055556 0.11111111
-0.18055556 -0.01388889 0.04166667 -0.0625 ]
[-1 -1] [-2 0] [ 0. -0.25 0.69444444 -0.94444444 1. -0.63888889
0.13888889 0. -0.05555556 0.05555556]
[-1 -1] [-1 -1] [ 0. -0.5 0.38888889 2.11111111 -1. -3.27777778
2.27777778 0. -0.11111111 0.11111111]
[-1 -1] [-1 1] [ 0. -0.25 1.25 -2. 1. 0.25 -0.25]
[-1 -1] [2 0] [ 0. 0. 0.55555556 -1.05555556 0. 0.88888889
-0.38888889 0. 0.05555556 -0.05555556]
[-1 -1] [ 1 -1] [ 0. -0.25 1.25 -2. 1. 0.25 -0.25]
[-1 -1] [1 1] [ 0. 0. -1.88888889 3.88888889 -3. 1.77777778
-0.77777778 0. 0.11111111 -0.11111111]
[-1 -1] [ 0 -2] [ 0. -0.25 0.69444444 -0.94444444 1. -0.63888889
0.13888889 0. -0.05555556 0.05555556]
[-1 -1] [0 2] [ 0. 0. 0.55555556 -1.05555556 0. 0.88888889
-0.38888889 0. 0.05555556 -0.05555556]
[-1 -1] [-3 0] [ 0. 0.02083333 -0.06944444 0.09722222 -0.09722222 0.08333333
-0.04166667 0.01388889 -0.01388889 0.00694444]
[-1 -1] [-2 -1] [ 0. 0.0625 -0.125 -0.06944444 0.04166667 0.41666667
-0.31944444 -0.04166667 -0.04166667 0.07638889]
[-1 -1] [-2 1] [ 0. 0.04166667 -0.18055556 0.29166667 -0.20833333 0.06944444
-0.04166667 0.04166667 -0.01388889]
[-1 -1] [-1 -2] [ 0. 0.0625 -0.125 -0.06944444 0.04166667 0.41666667
-0.31944444 -0.04166667 -0.04166667 0.07638889]
[-1 -1] [-1 2] [ 0. 0.02083333 -0.15277778 0.29166667 -0.125 -0.11111111
0.04166667 0.04166667 0.01388889 -0.02083333]
[-1 -1] [3 0] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[-1 -1] [ 2 -1] [ 0. 0.02083333 -0.15277778 0.29166667 -0.125 -0.11111111
0.04166667 0.04166667 0.01388889 -0.02083333]
[-1 -1] [2 1] [ 0. 0. 0.125 -0.29166667 0.29166667 -0.29166667
0.20833333 -0.04166667 0.04166667 -0.04166667]
[-1 -1] [ 1 -2] [ 0. 0.04166667 -0.18055556 0.29166667 -0.20833333 0.06944444
-0.04166667 0.04166667 -0.01388889]
[-1 -1] [1 2] [ 0. 0. 0.125 -0.29166667 0.29166667 -0.29166667
0.20833333 -0.04166667 0.04166667 -0.04166667]
[-1 -1] [ 0 -3] [ 0. 0.02083333 -0.06944444 0.09722222 -0.09722222 0.08333333
-0.04166667 0.01388889 -0.01388889 0.00694444]
[-1 -1] [0 3] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[-1 1] [0 0] [ 0. 1.5 -3.5 2. 0. 0.5 -0.5]
[-1 1] [-1 0] [ 0. -0.04166667 0.59722222 -0.59722222 -0.06944444 -0.18055556
0.29166667 -0.01388889 -0.04166667 0.05555556]
[-1 1] [1 0] [ 0. -0.10416667 -0.15277778 0.18055556 0.18055556 0.11111111
-0.18055556 -0.01388889 0.04166667 -0.0625 ]
[-1 1] [ 0 -1] [ 0. -0.10416667 -0.15277778 0.18055556 0.18055556 0.11111111
-0.18055556 -0.01388889 0.04166667 -0.0625 ]
[-1 1] [0 1] [ 0. -0.04166667 0.59722222 -0.59722222 -0.06944444 -0.18055556
0.29166667 -0.01388889 -0.04166667 0.05555556]
[-1 1] [-2 0] [ 0. -0.25 0.69444444 -0.94444444 1. -0.63888889
0.13888889 0. -0.05555556 0.05555556]
[-1 1] [-1 -1] [ 0. -0.25 1.25 -2. 1. 0.25 -0.25]
[-1 1] [-1 1] [ 0. -0.5 0.38888889 2.11111111 -1. -3.27777778
2.27777778 0. -0.11111111 0.11111111]
[-1 1] [2 0] [ 0. 0. 0.55555556 -1.05555556 0. 0.88888889
-0.38888889 0. 0.05555556 -0.05555556]
[-1 1] [ 1 -1] [ 0. 0. -1.88888889 3.88888889 -3. 1.77777778
-0.77777778 0. 0.11111111 -0.11111111]
[-1 1] [1 1] [ 0. -0.25 1.25 -2. 1. 0.25 -0.25]
[-1 1] [ 0 -2] [ 0. 0. 0.55555556 -1.05555556 0. 0.88888889
-0.38888889 0. 0.05555556 -0.05555556]
[-1 1] [0 2] [ 0. -0.25 0.69444444 -0.94444444 1. -0.63888889
0.13888889 0. -0.05555556 0.05555556]
[-1 1] [-3 0] [ 0. 0.02083333 -0.06944444 0.09722222 -0.09722222 0.08333333
-0.04166667 0.01388889 -0.01388889 0.00694444]
[-1 1] [-2 -1] [ 0. 0.04166667 -0.18055556 0.29166667 -0.20833333 0.06944444
-0.04166667 0.04166667 -0.01388889]
[-1 1] [-2 1] [ 0. 0.0625 -0.125 -0.06944444 0.04166667 0.41666667
-0.31944444 -0.04166667 -0.04166667 0.07638889]
[-1 1] [-1 -2] [ 0. 0.02083333 -0.15277778 0.29166667 -0.125 -0.11111111
0.04166667 0.04166667 0.01388889 -0.02083333]
[-1 1] [-1 2] [ 0. 0.0625 -0.125 -0.06944444 0.04166667 0.41666667
-0.31944444 -0.04166667 -0.04166667 0.07638889]
[-1 1] [3 0] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[-1 1] [ 2 -1] [ 0. 0. 0.125 -0.29166667 0.29166667 -0.29166667
0.20833333 -0.04166667 0.04166667 -0.04166667]
[-1 1] [2 1] [ 0. 0.02083333 -0.15277778 0.29166667 -0.125 -0.11111111
0.04166667 0.04166667 0.01388889 -0.02083333]
[-1 1] [ 1 -2] [ 0. 0. 0.125 -0.29166667 0.29166667 -0.29166667
0.20833333 -0.04166667 0.04166667 -0.04166667]
[-1 1] [1 2] [ 0. 0.04166667 -0.18055556 0.29166667 -0.20833333 0.06944444
-0.04166667 0.04166667 -0.01388889]
[-1 1] [ 0 -3] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[-1 1] [0 3] [ 0. 0.02083333 -0.06944444 0.09722222 -0.09722222 0.08333333
-0.04166667 0.01388889 -0.01388889 0.00694444]
[2 0] [0 0] [ 0. 0.75 -1.75 1. 0. 0.25 -0.25]
[2 0] [-1 0] [ 0. -0.0625 -0.60416667 0.97916667 -0.02083333 -0.22916667
-0.0625 0.02083333 0.02083333 -0.04166667]
[2 0] [1 0] [ 0. 0. 0.22916667 0.09027778 -0.27083333 -0.60416667
0.54861111 0.02083333 -0.0625 0.04861111]
[2 0] [ 0 -1] [ 0. -0.04166667 0.32638889 -0.50694444 0.03472222 0.34027778
-0.15972222 0.00694444 0.02083333 -0.02083333]
[2 0] [0 1] [ 0. -0.04166667 0.32638889 -0.50694444 0.03472222 0.34027778
-0.15972222 0.00694444 0.02083333 -0.02083333]
[2 0] [-2 0] [ 0. 0. -0.66666667 0.86111111 0.02777778 0.02777778
-0.27777778 -0.02777778 0.13888889 -0.08333333]
[2 0] [-1 -1] [ 0. 0. 0.55555556 -1.05555556 0. 0.88888889
-0.38888889 0. 0.05555556 -0.05555556]
[2 0] [-1 1] [ 0. 0. 0.55555556 -1.05555556 0. 0.88888889
-0.38888889 0. 0.05555556 -0.05555556]
[2 0] [2 0] [ 0. -0.25 0.47222222 -0.02777778 1.02777778 -2.5
1.25 -0.02777778 0.02777778 0.02777778]
[2 0] [ 1 -1] [ 0. -0.25 0.69444444 -0.94444444 1. -0.63888889
0.13888889 0. -0.05555556 0.05555556]
[2 0] [1 1] [ 0. -0.25 0.69444444 -0.94444444 1. -0.63888889
0.13888889 0. -0.05555556 0.05555556]
[2 0] [ 0 -2] [ 0. 0. -0.27777778 1.08333333 -1.52777778 0.86111111
-0.11111111 0.02777778 -0.08333333 0.02777778]
[2 0] [0 2] [ 0. 0. -0.27777778 1.08333333 -1.52777778 0.86111111
-0.11111111 0.02777778 -0.08333333 0.02777778]
[2 0] [-3 0] [ 0. 0. 0.0625 -0.10416667 0.00694444 0.00694444
0.04861111 -0.00694444 -0.00694444 -0.00694444]
[2 0] [-2 -1] [ 0. 0. 0.02083333 -0.00694444 -0.00694444 -0.09027778
0.09027778 0.00694444 0.00694444 -0.02083333]
[2 0] [-2 1] [ 0. 0. 0.02083333 -0.00694444 -0.00694444 -0.09027778
0.09027778 0.00694444 0.00694444 -0.02083333]
[2 0] [-1 -2] [ 0. 0. -0.02083333 0.00694444 0.13194444 -0.20138889
0.07638889 -0.00694444 0.03472222 -0.02083333]
[2 0] [-1 2] [ 0. 0. -0.02083333 0.00694444 0.13194444 -0.20138889
0.07638889 -0.00694444 0.03472222 -0.02083333]
[2 0] [3 0] [ 0. 0.02083333 -0.04861111 0.00694444 -0.07638889 0.27083333
-0.17361111 -0.00694444 -0.03472222 0.04166667]
[2 0] [ 2 -1] [ 0. 0.04166667 -0.11805556 0.10416667 -0.17361111 0.35416667
-0.21527778 0.00694444 -0.04861111 0.04861111]
[2 0] [2 1] [ 0. 0.04166667 -0.11805556 0.10416667 -0.17361111 0.35416667
-0.21527778 0.00694444 -0.04861111 0.04861111]
[2 0] [ 1 -2] [ 0. 0.02083333 -0.04861111 0.00694444 0.04861111 -0.02083333
-0.00694444 -0.00694444 0.00694444]
[2 0] [1 2] [ 0. 0.02083333 -0.04861111 0.00694444 0.04861111 -0.02083333
-0.00694444 -0.00694444 0.00694444]
[2 0] [ 0 -3] [ 0. 0. 0.02083333 -0.09027778 0.14583333 -0.10416667
0.03472222 -0.02083333 0.02083333 -0.00694444]
[2 0] [0 3] [ 0. 0. 0.02083333 -0.09027778 0.14583333 -0.10416667
0.03472222 -0.02083333 0.02083333 -0.00694444]
[ 1 -1] [0 0] [ 0. 1.5 -3.5 2. 0. 0.5 -0.5]
[ 1 -1] [-1 0] [ 0. -0.10416667 -0.15277778 0.18055556 0.18055556 0.11111111
-0.18055556 -0.01388889 0.04166667 -0.0625 ]
[ 1 -1] [1 0] [ 0. -0.04166667 0.59722222 -0.59722222 -0.06944444 -0.18055556
0.29166667 -0.01388889 -0.04166667 0.05555556]
[ 1 -1] [ 0 -1] [ 0. -0.04166667 0.59722222 -0.59722222 -0.06944444 -0.18055556
0.29166667 -0.01388889 -0.04166667 0.05555556]
[ 1 -1] [0 1] [ 0. -0.10416667 -0.15277778 0.18055556 0.18055556 0.11111111
-0.18055556 -0.01388889 0.04166667 -0.0625 ]
[ 1 -1] [-2 0] [ 0. 0. 0.55555556 -1.05555556 0. 0.88888889
-0.38888889 0. 0.05555556 -0.05555556]
[ 1 -1] [-1 -1] [ 0. -0.25 1.25 -2. 1. 0.25 -0.25]
[ 1 -1] [-1 1] [ 0. 0. -1.88888889 3.88888889 -3. 1.77777778
-0.77777778 0. 0.11111111 -0.11111111]
[ 1 -1] [2 0] [ 0. -0.25 0.69444444 -0.94444444 1. -0.63888889
0.13888889 0. -0.05555556 0.05555556]
[ 1 -1] [ 1 -1] [ 0. -0.5 0.38888889 2.11111111 -1. -3.27777778
2.27777778 0. -0.11111111 0.11111111]
[ 1 -1] [1 1] [ 0. -0.25 1.25 -2. 1. 0.25 -0.25]
[ 1 -1] [ 0 -2] [ 0. -0.25 0.69444444 -0.94444444 1. -0.63888889
0.13888889 0. -0.05555556 0.05555556]
[ 1 -1] [0 2] [ 0. 0. 0.55555556 -1.05555556 0. 0.88888889
-0.38888889 0. 0.05555556 -0.05555556]
[ 1 -1] [-3 0] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[ 1 -1] [-2 -1] [ 0. 0.02083333 -0.15277778 0.29166667 -0.125 -0.11111111
0.04166667 0.04166667 0.01388889 -0.02083333]
[ 1 -1] [-2 1] [ 0. 0. 0.125 -0.29166667 0.29166667 -0.29166667
0.20833333 -0.04166667 0.04166667 -0.04166667]
[ 1 -1] [-1 -2] [ 0. 0.04166667 -0.18055556 0.29166667 -0.20833333 0.06944444
-0.04166667 0.04166667 -0.01388889]
[ 1 -1] [-1 2] [ 0. 0. 0.125 -0.29166667 0.29166667 -0.29166667
0.20833333 -0.04166667 0.04166667 -0.04166667]
[ 1 -1] [3 0] [ 0. 0.02083333 -0.06944444 0.09722222 -0.09722222 0.08333333
-0.04166667 0.01388889 -0.01388889 0.00694444]
[ 1 -1] [ 2 -1] [ 0. 0.0625 -0.125 -0.06944444 0.04166667 0.41666667
-0.31944444 -0.04166667 -0.04166667 0.07638889]
[ 1 -1] [2 1] [ 0. 0.04166667 -0.18055556 0.29166667 -0.20833333 0.06944444
-0.04166667 0.04166667 -0.01388889]
[ 1 -1] [ 1 -2] [ 0. 0.0625 -0.125 -0.06944444 0.04166667 0.41666667
-0.31944444 -0.04166667 -0.04166667 0.07638889]
[ 1 -1] [1 2] [ 0. 0.02083333 -0.15277778 0.29166667 -0.125 -0.11111111
0.04166667 0.04166667 0.01388889 -0.02083333]
[ 1 -1] [ 0 -3] [ 0. 0.02083333 -0.06944444 0.09722222 -0.09722222 0.08333333
-0.04166667 0.01388889 -0.01388889 0.00694444]
[ 1 -1] [0 3] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[1 1] [0 0] [ 0. 1.5 -3.5 2. 0. 0.5 -0.5]
[1 1] [-1 0] [ 0. -0.10416667 -0.15277778 0.18055556 0.18055556 0.11111111
-0.18055556 -0.01388889 0.04166667 -0.0625 ]
[1 1] [1 0] [ 0. -0.04166667 0.59722222 -0.59722222 -0.06944444 -0.18055556
0.29166667 -0.01388889 -0.04166667 0.05555556]
[1 1] [ 0 -1] [ 0. -0.10416667 -0.15277778 0.18055556 0.18055556 0.11111111
-0.18055556 -0.01388889 0.04166667 -0.0625 ]
[1 1] [0 1] [ 0. -0.04166667 0.59722222 -0.59722222 -0.06944444 -0.18055556
0.29166667 -0.01388889 -0.04166667 0.05555556]
[1 1] [-2 0] [ 0. 0. 0.55555556 -1.05555556 0. 0.88888889
-0.38888889 0. 0.05555556 -0.05555556]
[1 1] [-1 -1] [ 0. 0. -1.88888889 3.88888889 -3. 1.77777778
-0.77777778 0. 0.11111111 -0.11111111]
[1 1] [-1 1] [ 0. -0.25 1.25 -2. 1. 0.25 -0.25]
[1 1] [2 0] [ 0. -0.25 0.69444444 -0.94444444 1. -0.63888889
0.13888889 0. -0.05555556 0.05555556]
[1 1] [ 1 -1] [ 0. -0.25 1.25 -2. 1. 0.25 -0.25]
[1 1] [1 1] [ 0. -0.5 0.38888889 2.11111111 -1. -3.27777778
2.27777778 0. -0.11111111 0.11111111]
[1 1] [ 0 -2] [ 0. 0. 0.55555556 -1.05555556 0. 0.88888889
-0.38888889 0. 0.05555556 -0.05555556]
[1 1] [0 2] [ 0. -0.25 0.69444444 -0.94444444 1. -0.63888889
0.13888889 0. -0.05555556 0.05555556]
[1 1] [-3 0] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[1 1] [-2 -1] [ 0. 0. 0.125 -0.29166667 0.29166667 -0.29166667
0.20833333 -0.04166667 0.04166667 -0.04166667]
[1 1] [-2 1] [ 0. 0.02083333 -0.15277778 0.29166667 -0.125 -0.11111111
0.04166667 0.04166667 0.01388889 -0.02083333]
[1 1] [-1 -2] [ 0. 0. 0.125 -0.29166667 0.29166667 -0.29166667
0.20833333 -0.04166667 0.04166667 -0.04166667]
[1 1] [-1 2] [ 0. 0.04166667 -0.18055556 0.29166667 -0.20833333 0.06944444
-0.04166667 0.04166667 -0.01388889]
[1 1] [3 0] [ 0. 0.02083333 -0.06944444 0.09722222 -0.09722222 0.08333333
-0.04166667 0.01388889 -0.01388889 0.00694444]
[1 1] [ 2 -1] [ 0. 0.04166667 -0.18055556 0.29166667 -0.20833333 0.06944444
-0.04166667 0.04166667 -0.01388889]
[1 1] [2 1] [ 0. 0.0625 -0.125 -0.06944444 0.04166667 0.41666667
-0.31944444 -0.04166667 -0.04166667 0.07638889]
[1 1] [ 1 -2] [ 0. 0.02083333 -0.15277778 0.29166667 -0.125 -0.11111111
0.04166667 0.04166667 0.01388889 -0.02083333]
[1 1] [1 2] [ 0. 0.0625 -0.125 -0.06944444 0.04166667 0.41666667
-0.31944444 -0.04166667 -0.04166667 0.07638889]
[1 1] [ 0 -3] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[1 1] [0 3] [ 0. 0.02083333 -0.06944444 0.09722222 -0.09722222 0.08333333
-0.04166667 0.01388889 -0.01388889 0.00694444]
[ 0 -2] [0 0] [ 0. 0.75 -1.75 1. 0. 0.25 -0.25]
[ 0 -2] [-1 0] [ 0. -0.04166667 0.32638889 -0.50694444 0.03472222 0.34027778
-0.15972222 0.00694444 0.02083333 -0.02083333]
[ 0 -2] [1 0] [ 0. -0.04166667 0.32638889 -0.50694444 0.03472222 0.34027778
-0.15972222 0.00694444 0.02083333 -0.02083333]
[ 0 -2] [ 0 -1] [ 0. 0. 0.22916667 0.09027778 -0.27083333 -0.60416667
0.54861111 0.02083333 -0.0625 0.04861111]
[ 0 -2] [0 1] [ 0. -0.0625 -0.60416667 0.97916667 -0.02083333 -0.22916667
-0.0625 0.02083333 0.02083333 -0.04166667]
[ 0 -2] [-2 0] [ 0. 0. -0.27777778 1.08333333 -1.52777778 0.86111111
-0.11111111 0.02777778 -0.08333333 0.02777778]
[ 0 -2] [-1 -1] [ 0. -0.25 0.69444444 -0.94444444 1. -0.63888889
0.13888889 0. -0.05555556 0.05555556]
[ 0 -2] [-1 1] [ 0. 0. 0.55555556 -1.05555556 0. 0.88888889
-0.38888889 0. 0.05555556 -0.05555556]
[ 0 -2] [2 0] [ 0. 0. -0.27777778 1.08333333 -1.52777778 0.86111111
-0.11111111 0.02777778 -0.08333333 0.02777778]
[ 0 -2] [ 1 -1] [ 0. -0.25 0.69444444 -0.94444444 1. -0.63888889
0.13888889 0. -0.05555556 0.05555556]
[ 0 -2] [1 1] [ 0. 0. 0.55555556 -1.05555556 0. 0.88888889
-0.38888889 0. 0.05555556 -0.05555556]
[ 0 -2] [ 0 -2] [ 0. -0.25 0.47222222 -0.02777778 1.02777778 -2.5
1.25 -0.02777778 0.02777778 0.02777778]
[ 0 -2] [0 2] [ 0. 0. -0.66666667 0.86111111 0.02777778 0.02777778
-0.27777778 -0.02777778 0.13888889 -0.08333333]
[ 0 -2] [-3 0] [ 0. 0. 0.02083333 -0.09027778 0.14583333 -0.10416667
0.03472222 -0.02083333 0.02083333 -0.00694444]
[ 0 -2] [-2 -1] [ 0. 0.02083333 -0.04861111 0.00694444 0.04861111 -0.02083333
-0.00694444 -0.00694444 0.00694444]
[ 0 -2] [-2 1] [ 0. 0. -0.02083333 0.00694444 0.13194444 -0.20138889
0.07638889 -0.00694444 0.03472222 -0.02083333]
[ 0 -2] [-1 -2] [ 0. 0.04166667 -0.11805556 0.10416667 -0.17361111 0.35416667
-0.21527778 0.00694444 -0.04861111 0.04861111]
[ 0 -2] [-1 2] [ 0. 0. 0.02083333 -0.00694444 -0.00694444 -0.09027778
0.09027778 0.00694444 0.00694444 -0.02083333]
[ 0 -2] [3 0] [ 0. 0. 0.02083333 -0.09027778 0.14583333 -0.10416667
0.03472222 -0.02083333 0.02083333 -0.00694444]
[ 0 -2] [ 2 -1] [ 0. 0.02083333 -0.04861111 0.00694444 0.04861111 -0.02083333
-0.00694444 -0.00694444 0.00694444]
[ 0 -2] [2 1] [ 0. 0. -0.02083333 0.00694444 0.13194444 -0.20138889
0.07638889 -0.00694444 0.03472222 -0.02083333]
[ 0 -2] [ 1 -2] [ 0. 0.04166667 -0.11805556 0.10416667 -0.17361111 0.35416667
-0.21527778 0.00694444 -0.04861111 0.04861111]
[ 0 -2] [1 2] [ 0. 0. 0.02083333 -0.00694444 -0.00694444 -0.09027778
0.09027778 0.00694444 0.00694444 -0.02083333]
[ 0 -2] [ 0 -3] [ 0. 0.02083333 -0.04861111 0.00694444 -0.07638889 0.27083333
-0.17361111 -0.00694444 -0.03472222 0.04166667]
[ 0 -2] [0 3] [ 0. 0. 0.0625 -0.10416667 0.00694444 0.00694444
0.04861111 -0.00694444 -0.00694444 -0.00694444]
[0 2] [0 0] [ 0. 0.75 -1.75 1. 0. 0.25 -0.25]
[0 2] [-1 0] [ 0. -0.04166667 0.32638889 -0.50694444 0.03472222 0.34027778
-0.15972222 0.00694444 0.02083333 -0.02083333]
[0 2] [1 0] [ 0. -0.04166667 0.32638889 -0.50694444 0.03472222 0.34027778
-0.15972222 0.00694444 0.02083333 -0.02083333]
[0 2] [ 0 -1] [ 0. -0.0625 -0.60416667 0.97916667 -0.02083333 -0.22916667
-0.0625 0.02083333 0.02083333 -0.04166667]
[0 2] [0 1] [ 0. 0. 0.22916667 0.09027778 -0.27083333 -0.60416667
0.54861111 0.02083333 -0.0625 0.04861111]
[0 2] [-2 0] [ 0. 0. -0.27777778 1.08333333 -1.52777778 0.86111111
-0.11111111 0.02777778 -0.08333333 0.02777778]
[0 2] [-1 -1] [ 0. 0. 0.55555556 -1.05555556 0. 0.88888889
-0.38888889 0. 0.05555556 -0.05555556]
[0 2] [-1 1] [ 0. -0.25 0.69444444 -0.94444444 1. -0.63888889
0.13888889 0. -0.05555556 0.05555556]
[0 2] [2 0] [ 0. 0. -0.27777778 1.08333333 -1.52777778 0.86111111
-0.11111111 0.02777778 -0.08333333 0.02777778]
[0 2] [ 1 -1] [ 0. 0. 0.55555556 -1.05555556 0. 0.88888889
-0.38888889 0. 0.05555556 -0.05555556]
[0 2] [1 1] [ 0. -0.25 0.69444444 -0.94444444 1. -0.63888889
0.13888889 0. -0.05555556 0.05555556]
[0 2] [ 0 -2] [ 0. 0. -0.66666667 0.86111111 0.02777778 0.02777778
-0.27777778 -0.02777778 0.13888889 -0.08333333]
[0 2] [0 2] [ 0. -0.25 0.47222222 -0.02777778 1.02777778 -2.5
1.25 -0.02777778 0.02777778 0.02777778]
[0 2] [-3 0] [ 0. 0. 0.02083333 -0.09027778 0.14583333 -0.10416667
0.03472222 -0.02083333 0.02083333 -0.00694444]
[0 2] [-2 -1] [ 0. 0. -0.02083333 0.00694444 0.13194444 -0.20138889
0.07638889 -0.00694444 0.03472222 -0.02083333]
[0 2] [-2 1] [ 0. 0.02083333 -0.04861111 0.00694444 0.04861111 -0.02083333
-0.00694444 -0.00694444 0.00694444]
[0 2] [-1 -2] [ 0. 0. 0.02083333 -0.00694444 -0.00694444 -0.09027778
0.09027778 0.00694444 0.00694444 -0.02083333]
[0 2] [-1 2] [ 0. 0.04166667 -0.11805556 0.10416667 -0.17361111 0.35416667
-0.21527778 0.00694444 -0.04861111 0.04861111]
[0 2] [3 0] [ 0. 0. 0.02083333 -0.09027778 0.14583333 -0.10416667
0.03472222 -0.02083333 0.02083333 -0.00694444]
[0 2] [ 2 -1] [ 0. 0. -0.02083333 0.00694444 0.13194444 -0.20138889
0.07638889 -0.00694444 0.03472222 -0.02083333]
[0 2] [2 1] [ 0. 0.02083333 -0.04861111 0.00694444 0.04861111 -0.02083333
-0.00694444 -0.00694444 0.00694444]
[0 2] [ 1 -2] [ 0. 0. 0.02083333 -0.00694444 -0.00694444 -0.09027778
0.09027778 0.00694444 0.00694444 -0.02083333]
[0 2] [1 2] [ 0. 0.04166667 -0.11805556 0.10416667 -0.17361111 0.35416667
-0.21527778 0.00694444 -0.04861111 0.04861111]
[0 2] [ 0 -3] [ 0. 0. 0.0625 -0.10416667 0.00694444 0.00694444
0.04861111 -0.00694444 -0.00694444 -0.00694444]
[0 2] [0 3] [ 0. 0.02083333 -0.04861111 0.00694444 -0.07638889 0.27083333
-0.17361111 -0.00694444 -0.03472222 0.04166667]
[-3 0] [0 0] [ 0. -0.0625 0.16666667 -0.11111111 0. -0.04166667
0.05555556 0. 0. -0.00694444]
[-3 0] [-1 0] [ 0. 0. -0.02083333 0.00694444 0.00694444 0.09027778
-0.09027778 -0.00694444 -0.00694444 0.02083333]
[-3 0] [1 0] [ 0. 0. 0.05555556 -0.13888889 0.02777778 0.16666667
-0.11111111 -0.02777778 0.02777778]
[-3 0] [ 0 -1] [ 0. 0. -0.01388889 0.04166667 -0.01388889 -0.06944444
0.06944444 0.01388889 -0.04166667 0.01388889]
[-3 0] [0 1] [ 0. 0. -0.01388889 0.04166667 -0.01388889 -0.06944444
0.06944444 0.01388889 -0.04166667 0.01388889]
[-3 0] [-2 0] [ 0. 0.02083333 -0.04861111 0.00694444 -0.07638889 0.27083333
-0.17361111 -0.00694444 -0.03472222 0.04166667]
[-3 0] [-1 -1] [ 0. 0.02083333 -0.06944444 0.09722222 -0.09722222 0.08333333
-0.04166667 0.01388889 -0.01388889 0.00694444]
[-3 0] [-1 1] [ 0. 0.02083333 -0.06944444 0.09722222 -0.09722222 0.08333333
-0.04166667 0.01388889 -0.01388889 0.00694444]
[-3 0] [2 0] [ 0. 0. 0.0625 -0.10416667 0.00694444 0.00694444
0.04861111 -0.00694444 -0.00694444 -0.00694444]
[-3 0] [ 1 -1] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[-3 0] [1 1] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[-3 0] [ 0 -2] [ 0. 0. 0.02083333 -0.09027778 0.14583333 -0.10416667
0.03472222 -0.02083333 0.02083333 -0.00694444]
[-3 0] [0 2] [ 0. 0. 0.02083333 -0.09027778 0.14583333 -0.10416667
0.03472222 -0.02083333 0.02083333 -0.00694444]
[-3 0] [3 0] [ 0. 0. 0.0625 -0.10416667 0.00694444 0.00694444
0.04861111 -0.00694444 -0.00694444 -0.00694444]
[-3 0] [ 2 -1] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[-3 0] [2 1] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[-3 0] [ 1 -2] [ 0. 0. 0.00694444 -0.02083333 0.00694444 0.03472222
-0.03472222 -0.00694444 0.02083333 -0.00694444]
[-3 0] [1 2] [ 0. 0. 0.00694444 -0.02083333 0.00694444 0.03472222
-0.03472222 -0.00694444 0.02083333 -0.00694444]
[-2 -1] [0 0] [ 0. -0.1875 0.5 -0.33333333 0. -0.125
0.16666667 0. 0. -0.02083333]
[-2 -1] [-1 0] [ 0. 0. -0.05555556 0.05555556 0. 0.11111111
-0.11111111 0. -0.05555556 0.05555556]
[-2 -1] [1 0] [ 0. 0. 0.04166667 -0.09722222 0.01388889 0.09722222
-0.04166667 -0.01388889 -0.01388889 0.01388889]
[-2 -1] [ 0 -1] [ 0. 0. -0.04861111 0.09027778 -0.02083333 -0.04861111
0.04861111 0.02083333 -0.09027778 0.04861111]
[-2 -1] [0 1] [ 0. 0. -0.02777778 0.08333333 -0.02777778 -0.13888889
0.13888889 0.02777778 -0.08333333 0.02777778]
[-2 -1] [-2 0] [ 0. 0.04166667 -0.11805556 0.10416667 -0.17361111 0.35416667
-0.21527778 0.00694444 -0.04861111 0.04861111]
[-2 -1] [-1 -1] [ 0. 0.0625 -0.125 -0.06944444 0.04166667 0.41666667
-0.31944444 -0.04166667 -0.04166667 0.07638889]
[-2 -1] [-1 1] [ 0. 0.04166667 -0.18055556 0.29166667 -0.20833333 0.06944444
-0.04166667 0.04166667 -0.01388889]
[-2 -1] [2 0] [ 0. 0. 0.02083333 -0.00694444 -0.00694444 -0.09027778
0.09027778 0.00694444 0.00694444 -0.02083333]
[-2 -1] [ 1 -1] [ 0. 0.02083333 -0.15277778 0.29166667 -0.125 -0.11111111
0.04166667 0.04166667 0.01388889 -0.02083333]
[-2 -1] [1 1] [ 0. 0. 0.125 -0.29166667 0.29166667 -0.29166667
0.20833333 -0.04166667 0.04166667 -0.04166667]
[-2 -1] [ 0 -2] [ 0. 0.02083333 -0.04861111 0.00694444 0.04861111 -0.02083333
-0.00694444 -0.00694444 0.00694444]
[-2 -1] [0 2] [ 0. 0. -0.02083333 0.00694444 0.13194444 -0.20138889
0.07638889 -0.00694444 0.03472222 -0.02083333]
[-2 -1] [-2 1] [ 0. 0. 0.00694444 -0.02083333 0.00694444 0.03472222
-0.03472222 -0.00694444 0.02083333 -0.00694444]
[-2 -1] [-1 2] [ 0. 0. 0.01388889 -0.04166667 0.01388889 0.06944444
-0.06944444 -0.01388889 0.04166667 -0.01388889]
[-2 -1] [3 0] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[-2 -1] [ 2 -1] [ 0. 0. -0.02777778 0.05555556 0. -0.02777778
-0.02777778 0. 0.05555556 -0.02777778]
[-2 -1] [2 1] [ 0. 0. 0.20138889 -0.35416667 0.03472222 0.09027778
0.07638889 -0.03472222 0.02083333 -0.03472222]
[-2 -1] [ 1 -2] [ 0. 0. 0.01388889 -0.04166667 0.01388889 0.06944444
-0.06944444 -0.01388889 0.04166667 -0.01388889]
[-2 -1] [1 2] [ 0. 0. -0.08333333 0.19444444 -0.02777778 -0.19444444
0.08333333 0.02777778 0.02777778 -0.02777778]
[-2 -1] [0 3] [ 0. 0. 0.00694444 -0.02083333 0.00694444 0.03472222
-0.03472222 -0.00694444 0.02083333 -0.00694444]
[-2 1] [0 0] [ 0. -0.1875 0.5 -0.33333333 0. -0.125
0.16666667 0. 0. -0.02083333]
[-2 1] [-1 0] [ 0. 0. -0.05555556 0.05555556 0. 0.11111111
-0.11111111 0. -0.05555556 0.05555556]
[-2 1] [1 0] [ 0. 0. 0.04166667 -0.09722222 0.01388889 0.09722222
-0.04166667 -0.01388889 -0.01388889 0.01388889]
[-2 1] [ 0 -1] [ 0. 0. -0.02777778 0.08333333 -0.02777778 -0.13888889
0.13888889 0.02777778 -0.08333333 0.02777778]
[-2 1] [0 1] [ 0. 0. -0.04861111 0.09027778 -0.02083333 -0.04861111
0.04861111 0.02083333 -0.09027778 0.04861111]
[-2 1] [-2 0] [ 0. 0.04166667 -0.11805556 0.10416667 -0.17361111 0.35416667
-0.21527778 0.00694444 -0.04861111 0.04861111]
[-2 1] [-1 -1] [ 0. 0.04166667 -0.18055556 0.29166667 -0.20833333 0.06944444
-0.04166667 0.04166667 -0.01388889]
[-2 1] [-1 1] [ 0. 0.0625 -0.125 -0.06944444 0.04166667 0.41666667
-0.31944444 -0.04166667 -0.04166667 0.07638889]
[-2 1] [2 0] [ 0. 0. 0.02083333 -0.00694444 -0.00694444 -0.09027778
0.09027778 0.00694444 0.00694444 -0.02083333]
[-2 1] [ 1 -1] [ 0. 0. 0.125 -0.29166667 0.29166667 -0.29166667
0.20833333 -0.04166667 0.04166667 -0.04166667]
[-2 1] [1 1] [ 0. 0.02083333 -0.15277778 0.29166667 -0.125 -0.11111111
0.04166667 0.04166667 0.01388889 -0.02083333]
[-2 1] [ 0 -2] [ 0. 0. -0.02083333 0.00694444 0.13194444 -0.20138889
0.07638889 -0.00694444 0.03472222 -0.02083333]
[-2 1] [0 2] [ 0. 0.02083333 -0.04861111 0.00694444 0.04861111 -0.02083333
-0.00694444 -0.00694444 0.00694444]
[-2 1] [-2 -1] [ 0. 0. 0.00694444 -0.02083333 0.00694444 0.03472222
-0.03472222 -0.00694444 0.02083333 -0.00694444]
[-2 1] [-1 -2] [ 0. 0. 0.01388889 -0.04166667 0.01388889 0.06944444
-0.06944444 -0.01388889 0.04166667 -0.01388889]
[-2 1] [3 0] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[-2 1] [ 2 -1] [ 0. 0. 0.20138889 -0.35416667 0.03472222 0.09027778
0.07638889 -0.03472222 0.02083333 -0.03472222]
[-2 1] [2 1] [ 0. 0. -0.02777778 0.05555556 0. -0.02777778
-0.02777778 0. 0.05555556 -0.02777778]
[-2 1] [ 1 -2] [ 0. 0. -0.08333333 0.19444444 -0.02777778 -0.19444444
0.08333333 0.02777778 0.02777778 -0.02777778]
[-2 1] [1 2] [ 0. 0. 0.01388889 -0.04166667 0.01388889 0.06944444
-0.06944444 -0.01388889 0.04166667 -0.01388889]
[-2 1] [ 0 -3] [ 0. 0. 0.00694444 -0.02083333 0.00694444 0.03472222
-0.03472222 -0.00694444 0.02083333 -0.00694444]
[-1 -2] [0 0] [ 0. -0.1875 0.5 -0.33333333 0. -0.125
0.16666667 0. 0. -0.02083333]
[-1 -2] [-1 0] [ 0. 0. -0.04861111 0.09027778 -0.02083333 -0.04861111
0.04861111 0.02083333 -0.09027778 0.04861111]
[-1 -2] [1 0] [ 0. 0. -0.02777778 0.08333333 -0.02777778 -0.13888889
0.13888889 0.02777778 -0.08333333 0.02777778]
[-1 -2] [ 0 -1] [ 0. 0. -0.05555556 0.05555556 0. 0.11111111
-0.11111111 0. -0.05555556 0.05555556]
[-1 -2] [0 1] [ 0. 0. 0.04166667 -0.09722222 0.01388889 0.09722222
-0.04166667 -0.01388889 -0.01388889 0.01388889]
[-1 -2] [-2 0] [ 0. 0.02083333 -0.04861111 0.00694444 0.04861111 -0.02083333
-0.00694444 -0.00694444 0.00694444]
[-1 -2] [-1 -1] [ 0. 0.0625 -0.125 -0.06944444 0.04166667 0.41666667
-0.31944444 -0.04166667 -0.04166667 0.07638889]
[-1 -2] [-1 1] [ 0. 0.02083333 -0.15277778 0.29166667 -0.125 -0.11111111
0.04166667 0.04166667 0.01388889 -0.02083333]
[-1 -2] [2 0] [ 0. 0. -0.02083333 0.00694444 0.13194444 -0.20138889
0.07638889 -0.00694444 0.03472222 -0.02083333]
[-1 -2] [ 1 -1] [ 0. 0.04166667 -0.18055556 0.29166667 -0.20833333 0.06944444
-0.04166667 0.04166667 -0.01388889]
[-1 -2] [1 1] [ 0. 0. 0.125 -0.29166667 0.29166667 -0.29166667
0.20833333 -0.04166667 0.04166667 -0.04166667]
[-1 -2] [ 0 -2] [ 0. 0.04166667 -0.11805556 0.10416667 -0.17361111 0.35416667
-0.21527778 0.00694444 -0.04861111 0.04861111]
[-1 -2] [0 2] [ 0. 0. 0.02083333 -0.00694444 -0.00694444 -0.09027778
0.09027778 0.00694444 0.00694444 -0.02083333]
[-1 -2] [-2 1] [ 0. 0. 0.01388889 -0.04166667 0.01388889 0.06944444
-0.06944444 -0.01388889 0.04166667 -0.01388889]
[-1 -2] [-1 2] [ 0. 0. -0.02777778 0.05555556 0. -0.02777778
-0.02777778 0. 0.05555556 -0.02777778]
[-1 -2] [3 0] [ 0. 0. 0.00694444 -0.02083333 0.00694444 0.03472222
-0.03472222 -0.00694444 0.02083333 -0.00694444]
[-1 -2] [ 2 -1] [ 0. 0. 0.01388889 -0.04166667 0.01388889 0.06944444
-0.06944444 -0.01388889 0.04166667 -0.01388889]
[-1 -2] [2 1] [ 0. 0. -0.08333333 0.19444444 -0.02777778 -0.19444444
0.08333333 0.02777778 0.02777778 -0.02777778]
[-1 -2] [ 1 -2] [ 0. 0. 0.00694444 -0.02083333 0.00694444 0.03472222
-0.03472222 -0.00694444 0.02083333 -0.00694444]
[-1 -2] [1 2] [ 0. 0. 0.20138889 -0.35416667 0.03472222 0.09027778
0.07638889 -0.03472222 0.02083333 -0.03472222]
[-1 -2] [0 3] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[-1 2] [0 0] [ 0. -0.1875 0.5 -0.33333333 0. -0.125
0.16666667 0. 0. -0.02083333]
[-1 2] [-1 0] [ 0. 0. -0.04861111 0.09027778 -0.02083333 -0.04861111
0.04861111 0.02083333 -0.09027778 0.04861111]
[-1 2] [1 0] [ 0. 0. -0.02777778 0.08333333 -0.02777778 -0.13888889
0.13888889 0.02777778 -0.08333333 0.02777778]
[-1 2] [ 0 -1] [ 0. 0. 0.04166667 -0.09722222 0.01388889 0.09722222
-0.04166667 -0.01388889 -0.01388889 0.01388889]
[-1 2] [0 1] [ 0. 0. -0.05555556 0.05555556 0. 0.11111111
-0.11111111 0. -0.05555556 0.05555556]
[-1 2] [-2 0] [ 0. 0.02083333 -0.04861111 0.00694444 0.04861111 -0.02083333
-0.00694444 -0.00694444 0.00694444]
[-1 2] [-1 -1] [ 0. 0.02083333 -0.15277778 0.29166667 -0.125 -0.11111111
0.04166667 0.04166667 0.01388889 -0.02083333]
[-1 2] [-1 1] [ 0. 0.0625 -0.125 -0.06944444 0.04166667 0.41666667
-0.31944444 -0.04166667 -0.04166667 0.07638889]
[-1 2] [2 0] [ 0. 0. -0.02083333 0.00694444 0.13194444 -0.20138889
0.07638889 -0.00694444 0.03472222 -0.02083333]
[-1 2] [ 1 -1] [ 0. 0. 0.125 -0.29166667 0.29166667 -0.29166667
0.20833333 -0.04166667 0.04166667 -0.04166667]
[-1 2] [1 1] [ 0. 0.04166667 -0.18055556 0.29166667 -0.20833333 0.06944444
-0.04166667 0.04166667 -0.01388889]
[-1 2] [ 0 -2] [ 0. 0. 0.02083333 -0.00694444 -0.00694444 -0.09027778
0.09027778 0.00694444 0.00694444 -0.02083333]
[-1 2] [0 2] [ 0. 0.04166667 -0.11805556 0.10416667 -0.17361111 0.35416667
-0.21527778 0.00694444 -0.04861111 0.04861111]
[-1 2] [-2 -1] [ 0. 0. 0.01388889 -0.04166667 0.01388889 0.06944444
-0.06944444 -0.01388889 0.04166667 -0.01388889]
[-1 2] [-1 -2] [ 0. 0. -0.02777778 0.05555556 0. -0.02777778
-0.02777778 0. 0.05555556 -0.02777778]
[-1 2] [3 0] [ 0. 0. 0.00694444 -0.02083333 0.00694444 0.03472222
-0.03472222 -0.00694444 0.02083333 -0.00694444]
[-1 2] [ 2 -1] [ 0. 0. -0.08333333 0.19444444 -0.02777778 -0.19444444
0.08333333 0.02777778 0.02777778 -0.02777778]
[-1 2] [2 1] [ 0. 0. 0.01388889 -0.04166667 0.01388889 0.06944444
-0.06944444 -0.01388889 0.04166667 -0.01388889]
[-1 2] [ 1 -2] [ 0. 0. 0.20138889 -0.35416667 0.03472222 0.09027778
0.07638889 -0.03472222 0.02083333 -0.03472222]
[-1 2] [1 2] [ 0. 0. 0.00694444 -0.02083333 0.00694444 0.03472222
-0.03472222 -0.00694444 0.02083333 -0.00694444]
[-1 2] [ 0 -3] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[3 0] [0 0] [ 0. -0.0625 0.16666667 -0.11111111 0. -0.04166667
0.05555556 0. 0. -0.00694444]
[3 0] [-1 0] [ 0. 0. 0.05555556 -0.13888889 0.02777778 0.16666667
-0.11111111 -0.02777778 0.02777778]
[3 0] [1 0] [ 0. 0. -0.02083333 0.00694444 0.00694444 0.09027778
-0.09027778 -0.00694444 -0.00694444 0.02083333]
[3 0] [ 0 -1] [ 0. 0. -0.01388889 0.04166667 -0.01388889 -0.06944444
0.06944444 0.01388889 -0.04166667 0.01388889]
[3 0] [0 1] [ 0. 0. -0.01388889 0.04166667 -0.01388889 -0.06944444
0.06944444 0.01388889 -0.04166667 0.01388889]
[3 0] [-2 0] [ 0. 0. 0.0625 -0.10416667 0.00694444 0.00694444
0.04861111 -0.00694444 -0.00694444 -0.00694444]
[3 0] [-1 -1] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[3 0] [-1 1] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[3 0] [2 0] [ 0. 0.02083333 -0.04861111 0.00694444 -0.07638889 0.27083333
-0.17361111 -0.00694444 -0.03472222 0.04166667]
[3 0] [ 1 -1] [ 0. 0.02083333 -0.06944444 0.09722222 -0.09722222 0.08333333
-0.04166667 0.01388889 -0.01388889 0.00694444]
[3 0] [1 1] [ 0. 0.02083333 -0.06944444 0.09722222 -0.09722222 0.08333333
-0.04166667 0.01388889 -0.01388889 0.00694444]
[3 0] [ 0 -2] [ 0. 0. 0.02083333 -0.09027778 0.14583333 -0.10416667
0.03472222 -0.02083333 0.02083333 -0.00694444]
[3 0] [0 2] [ 0. 0. 0.02083333 -0.09027778 0.14583333 -0.10416667
0.03472222 -0.02083333 0.02083333 -0.00694444]
[3 0] [-3 0] [ 0. 0. 0.0625 -0.10416667 0.00694444 0.00694444
0.04861111 -0.00694444 -0.00694444 -0.00694444]
[3 0] [-2 -1] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[3 0] [-2 1] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[3 0] [-1 -2] [ 0. 0. 0.00694444 -0.02083333 0.00694444 0.03472222
-0.03472222 -0.00694444 0.02083333 -0.00694444]
[3 0] [-1 2] [ 0. 0. 0.00694444 -0.02083333 0.00694444 0.03472222
-0.03472222 -0.00694444 0.02083333 -0.00694444]
[ 2 -1] [0 0] [ 0. -0.1875 0.5 -0.33333333 0. -0.125
0.16666667 0. 0. -0.02083333]
[ 2 -1] [-1 0] [ 0. 0. 0.04166667 -0.09722222 0.01388889 0.09722222
-0.04166667 -0.01388889 -0.01388889 0.01388889]
[ 2 -1] [1 0] [ 0. 0. -0.05555556 0.05555556 0. 0.11111111
-0.11111111 0. -0.05555556 0.05555556]
[ 2 -1] [ 0 -1] [ 0. 0. -0.04861111 0.09027778 -0.02083333 -0.04861111
0.04861111 0.02083333 -0.09027778 0.04861111]
[ 2 -1] [0 1] [ 0. 0. -0.02777778 0.08333333 -0.02777778 -0.13888889
0.13888889 0.02777778 -0.08333333 0.02777778]
[ 2 -1] [-2 0] [ 0. 0. 0.02083333 -0.00694444 -0.00694444 -0.09027778
0.09027778 0.00694444 0.00694444 -0.02083333]
[ 2 -1] [-1 -1] [ 0. 0.02083333 -0.15277778 0.29166667 -0.125 -0.11111111
0.04166667 0.04166667 0.01388889 -0.02083333]
[ 2 -1] [-1 1] [ 0. 0. 0.125 -0.29166667 0.29166667 -0.29166667
0.20833333 -0.04166667 0.04166667 -0.04166667]
[ 2 -1] [2 0] [ 0. 0.04166667 -0.11805556 0.10416667 -0.17361111 0.35416667
-0.21527778 0.00694444 -0.04861111 0.04861111]
[ 2 -1] [ 1 -1] [ 0. 0.0625 -0.125 -0.06944444 0.04166667 0.41666667
-0.31944444 -0.04166667 -0.04166667 0.07638889]
[ 2 -1] [1 1] [ 0. 0.04166667 -0.18055556 0.29166667 -0.20833333 0.06944444
-0.04166667 0.04166667 -0.01388889]
[ 2 -1] [ 0 -2] [ 0. 0.02083333 -0.04861111 0.00694444 0.04861111 -0.02083333
-0.00694444 -0.00694444 0.00694444]
[ 2 -1] [0 2] [ 0. 0. -0.02083333 0.00694444 0.13194444 -0.20138889
0.07638889 -0.00694444 0.03472222 -0.02083333]
[ 2 -1] [-3 0] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[ 2 -1] [-2 -1] [ 0. 0. -0.02777778 0.05555556 0. -0.02777778
-0.02777778 0. 0.05555556 -0.02777778]
[ 2 -1] [-2 1] [ 0. 0. 0.20138889 -0.35416667 0.03472222 0.09027778
0.07638889 -0.03472222 0.02083333 -0.03472222]
[ 2 -1] [-1 -2] [ 0. 0. 0.01388889 -0.04166667 0.01388889 0.06944444
-0.06944444 -0.01388889 0.04166667 -0.01388889]
[ 2 -1] [-1 2] [ 0. 0. -0.08333333 0.19444444 -0.02777778 -0.19444444
0.08333333 0.02777778 0.02777778 -0.02777778]
[ 2 -1] [2 1] [ 0. 0. 0.00694444 -0.02083333 0.00694444 0.03472222
-0.03472222 -0.00694444 0.02083333 -0.00694444]
[ 2 -1] [1 2] [ 0. 0. 0.01388889 -0.04166667 0.01388889 0.06944444
-0.06944444 -0.01388889 0.04166667 -0.01388889]
[ 2 -1] [0 3] [ 0. 0. 0.00694444 -0.02083333 0.00694444 0.03472222
-0.03472222 -0.00694444 0.02083333 -0.00694444]
[2 1] [0 0] [ 0. -0.1875 0.5 -0.33333333 0. -0.125
0.16666667 0. 0. -0.02083333]
[2 1] [-1 0] [ 0. 0. 0.04166667 -0.09722222 0.01388889 0.09722222
-0.04166667 -0.01388889 -0.01388889 0.01388889]
[2 1] [1 0] [ 0. 0. -0.05555556 0.05555556 0. 0.11111111
-0.11111111 0. -0.05555556 0.05555556]
[2 1] [ 0 -1] [ 0. 0. -0.02777778 0.08333333 -0.02777778 -0.13888889
0.13888889 0.02777778 -0.08333333 0.02777778]
[2 1] [0 1] [ 0. 0. -0.04861111 0.09027778 -0.02083333 -0.04861111
0.04861111 0.02083333 -0.09027778 0.04861111]
[2 1] [-2 0] [ 0. 0. 0.02083333 -0.00694444 -0.00694444 -0.09027778
0.09027778 0.00694444 0.00694444 -0.02083333]
[2 1] [-1 -1] [ 0. 0. 0.125 -0.29166667 0.29166667 -0.29166667
0.20833333 -0.04166667 0.04166667 -0.04166667]
[2 1] [-1 1] [ 0. 0.02083333 -0.15277778 0.29166667 -0.125 -0.11111111
0.04166667 0.04166667 0.01388889 -0.02083333]
[2 1] [2 0] [ 0. 0.04166667 -0.11805556 0.10416667 -0.17361111 0.35416667
-0.21527778 0.00694444 -0.04861111 0.04861111]
[2 1] [ 1 -1] [ 0. 0.04166667 -0.18055556 0.29166667 -0.20833333 0.06944444
-0.04166667 0.04166667 -0.01388889]
[2 1] [1 1] [ 0. 0.0625 -0.125 -0.06944444 0.04166667 0.41666667
-0.31944444 -0.04166667 -0.04166667 0.07638889]
[2 1] [ 0 -2] [ 0. 0. -0.02083333 0.00694444 0.13194444 -0.20138889
0.07638889 -0.00694444 0.03472222 -0.02083333]
[2 1] [0 2] [ 0. 0.02083333 -0.04861111 0.00694444 0.04861111 -0.02083333
-0.00694444 -0.00694444 0.00694444]
[2 1] [-3 0] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[2 1] [-2 -1] [ 0. 0. 0.20138889 -0.35416667 0.03472222 0.09027778
0.07638889 -0.03472222 0.02083333 -0.03472222]
[2 1] [-2 1] [ 0. 0. -0.02777778 0.05555556 0. -0.02777778
-0.02777778 0. 0.05555556 -0.02777778]
[2 1] [-1 -2] [ 0. 0. -0.08333333 0.19444444 -0.02777778 -0.19444444
0.08333333 0.02777778 0.02777778 -0.02777778]
[2 1] [-1 2] [ 0. 0. 0.01388889 -0.04166667 0.01388889 0.06944444
-0.06944444 -0.01388889 0.04166667 -0.01388889]
[2 1] [ 2 -1] [ 0. 0. 0.00694444 -0.02083333 0.00694444 0.03472222
-0.03472222 -0.00694444 0.02083333 -0.00694444]
[2 1] [ 1 -2] [ 0. 0. 0.01388889 -0.04166667 0.01388889 0.06944444
-0.06944444 -0.01388889 0.04166667 -0.01388889]
[2 1] [ 0 -3] [ 0. 0. 0.00694444 -0.02083333 0.00694444 0.03472222
-0.03472222 -0.00694444 0.02083333 -0.00694444]
[ 1 -2] [0 0] [ 0. -0.1875 0.5 -0.33333333 0. -0.125
0.16666667 0. 0. -0.02083333]
[ 1 -2] [-1 0] [ 0. 0. -0.02777778 0.08333333 -0.02777778 -0.13888889
0.13888889 0.02777778 -0.08333333 0.02777778]
[ 1 -2] [1 0] [ 0. 0. -0.04861111 0.09027778 -0.02083333 -0.04861111
0.04861111 0.02083333 -0.09027778 0.04861111]
[ 1 -2] [ 0 -1] [ 0. 0. -0.05555556 0.05555556 0. 0.11111111
-0.11111111 0. -0.05555556 0.05555556]
[ 1 -2] [0 1] [ 0. 0. 0.04166667 -0.09722222 0.01388889 0.09722222
-0.04166667 -0.01388889 -0.01388889 0.01388889]
[ 1 -2] [-2 0] [ 0. 0. -0.02083333 0.00694444 0.13194444 -0.20138889
0.07638889 -0.00694444 0.03472222 -0.02083333]
[ 1 -2] [-1 -1] [ 0. 0.04166667 -0.18055556 0.29166667 -0.20833333 0.06944444
-0.04166667 0.04166667 -0.01388889]
[ 1 -2] [-1 1] [ 0. 0. 0.125 -0.29166667 0.29166667 -0.29166667
0.20833333 -0.04166667 0.04166667 -0.04166667]
[ 1 -2] [2 0] [ 0. 0.02083333 -0.04861111 0.00694444 0.04861111 -0.02083333
-0.00694444 -0.00694444 0.00694444]
[ 1 -2] [ 1 -1] [ 0. 0.0625 -0.125 -0.06944444 0.04166667 0.41666667
-0.31944444 -0.04166667 -0.04166667 0.07638889]
[ 1 -2] [1 1] [ 0. 0.02083333 -0.15277778 0.29166667 -0.125 -0.11111111
0.04166667 0.04166667 0.01388889 -0.02083333]
[ 1 -2] [ 0 -2] [ 0. 0.04166667 -0.11805556 0.10416667 -0.17361111 0.35416667
-0.21527778 0.00694444 -0.04861111 0.04861111]
[ 1 -2] [0 2] [ 0. 0. 0.02083333 -0.00694444 -0.00694444 -0.09027778
0.09027778 0.00694444 0.00694444 -0.02083333]
[ 1 -2] [-3 0] [ 0. 0. 0.00694444 -0.02083333 0.00694444 0.03472222
-0.03472222 -0.00694444 0.02083333 -0.00694444]
[ 1 -2] [-2 -1] [ 0. 0. 0.01388889 -0.04166667 0.01388889 0.06944444
-0.06944444 -0.01388889 0.04166667 -0.01388889]
[ 1 -2] [-2 1] [ 0. 0. -0.08333333 0.19444444 -0.02777778 -0.19444444
0.08333333 0.02777778 0.02777778 -0.02777778]
[ 1 -2] [-1 -2] [ 0. 0. 0.00694444 -0.02083333 0.00694444 0.03472222
-0.03472222 -0.00694444 0.02083333 -0.00694444]
[ 1 -2] [-1 2] [ 0. 0. 0.20138889 -0.35416667 0.03472222 0.09027778
0.07638889 -0.03472222 0.02083333 -0.03472222]
[ 1 -2] [2 1] [ 0. 0. 0.01388889 -0.04166667 0.01388889 0.06944444
-0.06944444 -0.01388889 0.04166667 -0.01388889]
[ 1 -2] [1 2] [ 0. 0. -0.02777778 0.05555556 0. -0.02777778
-0.02777778 0. 0.05555556 -0.02777778]
[ 1 -2] [0 3] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[1 2] [0 0] [ 0. -0.1875 0.5 -0.33333333 0. -0.125
0.16666667 0. 0. -0.02083333]
[1 2] [-1 0] [ 0. 0. -0.02777778 0.08333333 -0.02777778 -0.13888889
0.13888889 0.02777778 -0.08333333 0.02777778]
[1 2] [1 0] [ 0. 0. -0.04861111 0.09027778 -0.02083333 -0.04861111
0.04861111 0.02083333 -0.09027778 0.04861111]
[1 2] [ 0 -1] [ 0. 0. 0.04166667 -0.09722222 0.01388889 0.09722222
-0.04166667 -0.01388889 -0.01388889 0.01388889]
[1 2] [0 1] [ 0. 0. -0.05555556 0.05555556 0. 0.11111111
-0.11111111 0. -0.05555556 0.05555556]
[1 2] [-2 0] [ 0. 0. -0.02083333 0.00694444 0.13194444 -0.20138889
0.07638889 -0.00694444 0.03472222 -0.02083333]
[1 2] [-1 -1] [ 0. 0. 0.125 -0.29166667 0.29166667 -0.29166667
0.20833333 -0.04166667 0.04166667 -0.04166667]
[1 2] [-1 1] [ 0. 0.04166667 -0.18055556 0.29166667 -0.20833333 0.06944444
-0.04166667 0.04166667 -0.01388889]
[1 2] [2 0] [ 0. 0.02083333 -0.04861111 0.00694444 0.04861111 -0.02083333
-0.00694444 -0.00694444 0.00694444]
[1 2] [ 1 -1] [ 0. 0.02083333 -0.15277778 0.29166667 -0.125 -0.11111111
0.04166667 0.04166667 0.01388889 -0.02083333]
[1 2] [1 1] [ 0. 0.0625 -0.125 -0.06944444 0.04166667 0.41666667
-0.31944444 -0.04166667 -0.04166667 0.07638889]
[1 2] [ 0 -2] [ 0. 0. 0.02083333 -0.00694444 -0.00694444 -0.09027778
0.09027778 0.00694444 0.00694444 -0.02083333]
[1 2] [0 2] [ 0. 0.04166667 -0.11805556 0.10416667 -0.17361111 0.35416667
-0.21527778 0.00694444 -0.04861111 0.04861111]
[1 2] [-3 0] [ 0. 0. 0.00694444 -0.02083333 0.00694444 0.03472222
-0.03472222 -0.00694444 0.02083333 -0.00694444]
[1 2] [-2 -1] [ 0. 0. -0.08333333 0.19444444 -0.02777778 -0.19444444
0.08333333 0.02777778 0.02777778 -0.02777778]
[1 2] [-2 1] [ 0. 0. 0.01388889 -0.04166667 0.01388889 0.06944444
-0.06944444 -0.01388889 0.04166667 -0.01388889]
[1 2] [-1 -2] [ 0. 0. 0.20138889 -0.35416667 0.03472222 0.09027778
0.07638889 -0.03472222 0.02083333 -0.03472222]
[1 2] [-1 2] [ 0. 0. 0.00694444 -0.02083333 0.00694444 0.03472222
-0.03472222 -0.00694444 0.02083333 -0.00694444]
[1 2] [ 2 -1] [ 0. 0. 0.01388889 -0.04166667 0.01388889 0.06944444
-0.06944444 -0.01388889 0.04166667 -0.01388889]
[1 2] [ 1 -2] [ 0. 0. -0.02777778 0.05555556 0. -0.02777778
-0.02777778 0. 0.05555556 -0.02777778]
[1 2] [ 0 -3] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[ 0 -3] [0 0] [ 0. -0.0625 0.16666667 -0.11111111 0. -0.04166667
0.05555556 0. 0. -0.00694444]
[ 0 -3] [-1 0] [ 0. 0. -0.01388889 0.04166667 -0.01388889 -0.06944444
0.06944444 0.01388889 -0.04166667 0.01388889]
[ 0 -3] [1 0] [ 0. 0. -0.01388889 0.04166667 -0.01388889 -0.06944444
0.06944444 0.01388889 -0.04166667 0.01388889]
[ 0 -3] [ 0 -1] [ 0. 0. -0.02083333 0.00694444 0.00694444 0.09027778
-0.09027778 -0.00694444 -0.00694444 0.02083333]
[ 0 -3] [0 1] [ 0. 0. 0.05555556 -0.13888889 0.02777778 0.16666667
-0.11111111 -0.02777778 0.02777778]
[ 0 -3] [-2 0] [ 0. 0. 0.02083333 -0.09027778 0.14583333 -0.10416667
0.03472222 -0.02083333 0.02083333 -0.00694444]
[ 0 -3] [-1 -1] [ 0. 0.02083333 -0.06944444 0.09722222 -0.09722222 0.08333333
-0.04166667 0.01388889 -0.01388889 0.00694444]
[ 0 -3] [-1 1] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[ 0 -3] [2 0] [ 0. 0. 0.02083333 -0.09027778 0.14583333 -0.10416667
0.03472222 -0.02083333 0.02083333 -0.00694444]
[ 0 -3] [ 1 -1] [ 0. 0.02083333 -0.06944444 0.09722222 -0.09722222 0.08333333
-0.04166667 0.01388889 -0.01388889 0.00694444]
[ 0 -3] [1 1] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[ 0 -3] [ 0 -2] [ 0. 0.02083333 -0.04861111 0.00694444 -0.07638889 0.27083333
-0.17361111 -0.00694444 -0.03472222 0.04166667]
[ 0 -3] [0 2] [ 0. 0. 0.0625 -0.10416667 0.00694444 0.00694444
0.04861111 -0.00694444 -0.00694444 -0.00694444]
[ 0 -3] [-2 1] [ 0. 0. 0.00694444 -0.02083333 0.00694444 0.03472222
-0.03472222 -0.00694444 0.02083333 -0.00694444]
[ 0 -3] [-1 2] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[ 0 -3] [2 1] [ 0. 0. 0.00694444 -0.02083333 0.00694444 0.03472222
-0.03472222 -0.00694444 0.02083333 -0.00694444]
[ 0 -3] [1 2] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[ 0 -3] [0 3] [ 0. 0. 0.0625 -0.10416667 0.00694444 0.00694444
0.04861111 -0.00694444 -0.00694444 -0.00694444]
[0 3] [0 0] [ 0. -0.0625 0.16666667 -0.11111111 0. -0.04166667
0.05555556 0. 0. -0.00694444]
[0 3] [-1 0] [ 0. 0. -0.01388889 0.04166667 -0.01388889 -0.06944444
0.06944444 0.01388889 -0.04166667 0.01388889]
[0 3] [1 0] [ 0. 0. -0.01388889 0.04166667 -0.01388889 -0.06944444
0.06944444 0.01388889 -0.04166667 0.01388889]
[0 3] [ 0 -1] [ 0. 0. 0.05555556 -0.13888889 0.02777778 0.16666667
-0.11111111 -0.02777778 0.02777778]
[0 3] [0 1] [ 0. 0. -0.02083333 0.00694444 0.00694444 0.09027778
-0.09027778 -0.00694444 -0.00694444 0.02083333]
[0 3] [-2 0] [ 0. 0. 0.02083333 -0.09027778 0.14583333 -0.10416667
0.03472222 -0.02083333 0.02083333 -0.00694444]
[0 3] [-1 -1] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[0 3] [-1 1] [ 0. 0.02083333 -0.06944444 0.09722222 -0.09722222 0.08333333
-0.04166667 0.01388889 -0.01388889 0.00694444]
[0 3] [2 0] [ 0. 0. 0.02083333 -0.09027778 0.14583333 -0.10416667
0.03472222 -0.02083333 0.02083333 -0.00694444]
[0 3] [ 1 -1] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[0 3] [1 1] [ 0. 0.02083333 -0.06944444 0.09722222 -0.09722222 0.08333333
-0.04166667 0.01388889 -0.01388889 0.00694444]
[0 3] [ 0 -2] [ 0. 0. 0.0625 -0.10416667 0.00694444 0.00694444
0.04861111 -0.00694444 -0.00694444 -0.00694444]
[0 3] [0 2] [ 0. 0.02083333 -0.04861111 0.00694444 -0.07638889 0.27083333
-0.17361111 -0.00694444 -0.03472222 0.04166667]
[0 3] [-2 -1] [ 0. 0. 0.00694444 -0.02083333 0.00694444 0.03472222
-0.03472222 -0.00694444 0.02083333 -0.00694444]
[0 3] [-1 -2] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[0 3] [ 2 -1] [ 0. 0. 0.00694444 -0.02083333 0.00694444 0.03472222
-0.03472222 -0.00694444 0.02083333 -0.00694444]
[0 3] [ 1 -2] [ 0. 0. -0.04166667 0.09722222 -0.01388889 -0.09722222
0.04166667 0.01388889 0.01388889 -0.01388889]
[0 3] [ 0 -3] [ 0. 0. 0.0625 -0.10416667 0.00694444 0.00694444
0.04861111 -0.00694444 -0.00694444 -0.00694444]
```python
printvecbasis(symmbiasvecbar)
```
dim 1
[-1 0] [ 0. 1. -1.]
[1 0] [ 0. -1. 1.]
dim 2
[ 0 -1] [ 0. 1. -1.]
[0 1] [ 0. -1. 1.]
```python
printvecbasis(symmbiasGvecbar)
```
dim 1
[-1 0] [ 0. 0. -0.33333333 0. 0. 0.33333333]
[1 0] [ 0. 0. 0.33333333 0. 0. -0.33333333]
[-2 0] [ 0. 0.33333333 -0.33333333 0. -0.33333333 0.33333333]
[-1 -1] [ 0. 0.33333333 -0.33333333 0. -0.33333333 0.33333333]
[-1 1] [ 0. 0.33333333 -0.33333333 0. -0.33333333 0.33333333]
[2 0] [ 0. -0.33333333 0.33333333 0. 0.33333333 -0.33333333]
[ 1 -1] [ 0. -0.33333333 0.33333333 0. 0.33333333 -0.33333333]
[1 1] [ 0. -0.33333333 0.33333333 0. 0.33333333 -0.33333333]
dim 2
[ 0 -1] [ 0. 0. -0.33333333 0. 0. 0.33333333]
[0 1] [ 0. 0. 0.33333333 0. 0. -0.33333333]
[-1 -1] [ 0. 0.33333333 -0.33333333 0. -0.33333333 0.33333333]
[-1 1] [ 0. -0.33333333 0.33333333 0. 0.33333333 -0.33333333]
[ 1 -1] [ 0. 0.33333333 -0.33333333 0. -0.33333333 0.33333333]
[1 1] [ 0. -0.33333333 0.33333333 0. 0.33333333 -0.33333333]
[ 0 -2] [ 0. 0.33333333 -0.33333333 0. -0.33333333 0.33333333]
[0 2] [ 0. -0.33333333 0.33333333 0. 0.33333333 -0.33333333]
```python
printvecbasis(symmbiasGGvecbar)
```
dim 1
[-1 0] [ 0. 0.25 0.66666667 0. -0.25 -0.58333333
0. 0. -0.08333333]
[1 0] [ 0. -0.25 -0.66666667 0. 0.25 0.58333333
0. 0. 0.08333333]
[-2 0] [ 0. -0.66666667 0.55555556 0. 0.66666667 -0.44444444
0. 0. -0.11111111]
[-1 -1] [ 0. -0.66666667 0.55555556 0. 0.66666667 -0.44444444
0. 0. -0.11111111]
[-1 1] [ 0. -0.66666667 0.55555556 0. 0.66666667 -0.44444444
0. 0. -0.11111111]
[2 0] [ 0. 0.66666667 -0.55555556 0. -0.66666667 0.44444444
0. 0. 0.11111111]
[ 1 -1] [ 0. 0.66666667 -0.55555556 0. -0.66666667 0.44444444
0. 0. 0.11111111]
[1 1] [ 0. 0.66666667 -0.55555556 0. -0.66666667 0.44444444
0. 0. 0.11111111]
[-3 0] [ 0. 0.08333333 -0.11111111 0. -0.08333333 0.13888889
0. 0. -0.02777778]
[-2 -1] [ 0. 0.16666667 -0.22222222 0. -0.16666667 0.27777778
0. 0. -0.05555556]
[-2 1] [ 0. 0.16666667 -0.22222222 0. -0.16666667 0.27777778
0. 0. -0.05555556]
[-1 -2] [ 0. 0.08333333 -0.11111111 0. -0.08333333 0.13888889
0. 0. -0.02777778]
[-1 2] [ 0. 0.08333333 -0.11111111 0. -0.08333333 0.13888889
0. 0. -0.02777778]
[3 0] [ 0. -0.08333333 0.11111111 0. 0.08333333 -0.13888889
0. 0. 0.02777778]
[ 2 -1] [ 0. -0.16666667 0.22222222 0. 0.16666667 -0.27777778
0. 0. 0.05555556]
[2 1] [ 0. -0.16666667 0.22222222 0. 0.16666667 -0.27777778
0. 0. 0.05555556]
[ 1 -2] [ 0. -0.08333333 0.11111111 0. 0.08333333 -0.13888889
0. 0. 0.02777778]
[1 2] [ 0. -0.08333333 0.11111111 0. 0.08333333 -0.13888889
0. 0. 0.02777778]
dim 2
[ 0 -1] [ 0. 0.25 0.66666667 0. -0.25 -0.58333333
0. 0. -0.08333333]
[0 1] [ 0. -0.25 -0.66666667 0. 0.25 0.58333333
0. 0. 0.08333333]
[-1 -1] [ 0. -0.66666667 0.55555556 0. 0.66666667 -0.44444444
0. 0. -0.11111111]
[-1 1] [ 0. 0.66666667 -0.55555556 0. -0.66666667 0.44444444
0. 0. 0.11111111]
[ 1 -1] [ 0. -0.66666667 0.55555556 0. 0.66666667 -0.44444444
0. 0. -0.11111111]
[1 1] [ 0. 0.66666667 -0.55555556 0. -0.66666667 0.44444444
0. 0. 0.11111111]
[ 0 -2] [ 0. -0.66666667 0.55555556 0. 0.66666667 -0.44444444
0. 0. -0.11111111]
[0 2] [ 0. 0.66666667 -0.55555556 0. -0.66666667 0.44444444
0. 0. 0.11111111]
[-2 -1] [ 0. 0.08333333 -0.11111111 0. -0.08333333 0.13888889
0. 0. -0.02777778]
[-2 1] [ 0. -0.08333333 0.11111111 0. 0.08333333 -0.13888889
0. 0. 0.02777778]
[-1 -2] [ 0. 0.16666667 -0.22222222 0. -0.16666667 0.27777778
0. 0. -0.05555556]
[-1 2] [ 0. -0.16666667 0.22222222 0. 0.16666667 -0.27777778
0. 0. 0.05555556]
[ 2 -1] [ 0. 0.08333333 -0.11111111 0. -0.08333333 0.13888889
0. 0. -0.02777778]
[2 1] [ 0. -0.08333333 0.11111111 0. 0.08333333 -0.13888889
0. 0. 0.02777778]
[ 1 -2] [ 0. 0.16666667 -0.22222222 0. -0.16666667 0.27777778
0. 0. -0.05555556]
[1 2] [ 0. -0.16666667 0.22222222 0. 0.16666667 -0.27777778
0. 0. 0.05555556]
[ 0 -3] [ 0. 0.08333333 -0.11111111 0. -0.08333333 0.13888889
0. 0. -0.02777778]
[0 3] [ 0. -0.08333333 0.11111111 0. 0.08333333 -0.13888889
0. 0. 0.02777778]
### Write out to HDF5 file
We now store the output in an HDF5 file for later use and analysis.
```python
import h5py
rewriteFile = False
printFile = False
```
```python
if rewriteFile:
with h5py.File('Neighbor-averaging.hdf5', 'w') as f:
f['dxlist'] = np.array(dxlist)
f['sites'] = np.array(sites)
f['jumplist'] = np.array(jumplist)
f['basisfunc'] = np.array(basisfunc)
f['symmWbar'] = symmWbar
f['symmWGbar'] = symmWGbar
f['symmWGGbar'] = symmWGGbar
f['symmWbar_far'] = symmWbar_far
f['symmWGbar_far'] = symmWGbar_far
f['symmWGGbar_far'] = symmWGGbar_far
f['symmresbias'] = symmresbiasave
f['symmresbiasGave'] = symmresbiasGave
f['symmbiasvecbar'] = symmbiasvecbar
f['symmbiasGvecbar'] = symmbiasGvecbar
f['symmbiasGGvecbar'] = symmbiasGGvecbar
```
```python
if printFile:
with h5py.File('Neighbor-averaging.hdf5', 'r') as f:
for k, c in f.items():
print(k)
            print(c[()])  # read the full dataset; Dataset.value was removed in h5py >= 3
```
### Mapping onto vectorStars
We create the simplified symmetry basis functions using vectorStars to fold down the full representation and compute proper inverses. For simplicity, we also construct our own Monkhorst-Pack mesh, shifted off the origin and symmetrized.
```python
def mpmesh(Ndiv, pre=np.pi):
"""
Generates a MP mesh for a square lattice.
:param Ndiv: number of divisions
    :param pre: prefactor for edge of Brillouin zone (pi/a_0)
:returns k[Nk,2]: k-points
:returns w[Nk]: weight
"""
prescale = pre/Ndiv
wscale = 1./(Ndiv*Ndiv)
Nk = (Ndiv*(Ndiv+1))//2
kpt, w = np.zeros((Nk,2)), np.zeros(Nk)
i = 0
for n in range(Ndiv):
for m in range(n+1):
kpt[i,0] = prescale*(n+0.5)
kpt[i,1] = prescale*(m+0.5)
if n==m:
w[i] = wscale
else:
w[i] = 2*wscale
i += 1
return kpt, w
```
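As a quick sanity check (an illustrative sketch using only `mpmesh` as defined above), the weights should sum to one and every point should lie in the irreducible wedge $0 < k_y \le k_x < \pi$:

```python
# Sanity check (illustrative): shifted MP mesh weights sum to 1, and every
# k-point lies in the irreducible wedge 0 < ky <= kx < pi of the square lattice.
kpt_chk, w_chk = mpmesh(4)
assert np.isclose(w_chk.sum(), 1.0)
assert all(0 < k[1] <= k[0] < np.pi for k in kpt_chk)
print(len(w_chk), "irreducible k-points")  # (4*5)//2 = 10
```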
```python
square = crystal.Crystal(np.eye(2), [np.zeros(2)])
chem = 0
sitelist = square.sitelist(chem)
jumpnetwork = square.jumpnetwork(chem, 1.01) # [[((0,0), dx) for dx in dxlist]]
```
```python
starset = stars.StarSet(jumpnetwork, square, chem, 3)
vecstarset = stars.VectorStarSet(starset)
if __TESTING__:
print(starset)
```
Nshells: 3 Nstates: 24 Nstars: 5
Star 0 (4)
0: 0.[0,0]:0.[-1,0] (dx=[-1.0,-0.0])
1: 0.[0,0]:0.[0,1] (dx=[-0.0,1.0])
2: 0.[0,0]:0.[0,-1] (dx=[0.0,-1.0])
3: 0.[0,0]:0.[1,0] (dx=[1.0,0.0])
Star 1 (4)
4: 0.[0,0]:0.[1,-1] (dx=[1.0,-1.0])
5: 0.[0,0]:0.[1,1] (dx=[1.0,1.0])
6: 0.[0,0]:0.[-1,-1] (dx=[-1.0,-1.0])
7: 0.[0,0]:0.[-1,1] (dx=[-1.0,1.0])
Star 2 (4)
8: 0.[0,0]:0.[-2,0] (dx=[-2.0,-0.0])
9: 0.[0,0]:0.[0,2] (dx=[-0.0,2.0])
10: 0.[0,0]:0.[0,-2] (dx=[0.0,-2.0])
11: 0.[0,0]:0.[2,0] (dx=[2.0,0.0])
Star 3 (8)
12: 0.[0,0]:0.[1,-2] (dx=[1.0,-2.0])
13: 0.[0,0]:0.[1,2] (dx=[1.0,2.0])
14: 0.[0,0]:0.[-2,-1] (dx=[-2.0,-1.0])
15: 0.[0,0]:0.[-1,-2] (dx=[-1.0,-2.0])
16: 0.[0,0]:0.[2,1] (dx=[2.0,1.0])
17: 0.[0,0]:0.[2,-1] (dx=[2.0,-1.0])
18: 0.[0,0]:0.[-1,2] (dx=[-1.0,2.0])
19: 0.[0,0]:0.[-2,1] (dx=[-2.0,1.0])
Star 4 (4)
20: 0.[0,0]:0.[-3,0] (dx=[-3.0,-0.0])
21: 0.[0,0]:0.[3,0] (dx=[3.0,0.0])
22: 0.[0,0]:0.[0,3] (dx=[-0.0,3.0])
23: 0.[0,0]:0.[0,-3] (dx=[0.0,-3.0])
```python
if __TESTING__:
for vR, vV in zip(vecstarset.vecpos, vecstarset.vecvec):
print('')
for R, v in zip(vR, vV):
print(starset.states[R] , v)
```
0.[0,0]:0.[-1,0] (dx=[-1.0,-0.0]) [-0.5 -0. ]
0.[0,0]:0.[0,1] (dx=[-0.0,1.0]) [-0. 0.5]
0.[0,0]:0.[0,-1] (dx=[0.0,-1.0]) [ 0. -0.5]
0.[0,0]:0.[1,0] (dx=[1.0,0.0]) [0.5 0. ]
0.[0,0]:0.[1,-1] (dx=[1.0,-1.0]) [ 0.35355339 -0.35355339]
0.[0,0]:0.[1,1] (dx=[1.0,1.0]) [0.35355339 0.35355339]
0.[0,0]:0.[-1,-1] (dx=[-1.0,-1.0]) [-0.35355339 -0.35355339]
0.[0,0]:0.[-1,1] (dx=[-1.0,1.0]) [-0.35355339 0.35355339]
0.[0,0]:0.[-2,0] (dx=[-2.0,-0.0]) [-0.5 -0. ]
0.[0,0]:0.[0,2] (dx=[-0.0,2.0]) [-0. 0.5]
0.[0,0]:0.[0,-2] (dx=[0.0,-2.0]) [ 0. -0.5]
0.[0,0]:0.[2,0] (dx=[2.0,0.0]) [0.5 0. ]
0.[0,0]:0.[1,-2] (dx=[1.0,-2.0]) [ 0.15811388 -0.31622777]
0.[0,0]:0.[1,2] (dx=[1.0,2.0]) [0.15811388 0.31622777]
0.[0,0]:0.[-2,-1] (dx=[-2.0,-1.0]) [-0.31622777 -0.15811388]
0.[0,0]:0.[-1,-2] (dx=[-1.0,-2.0]) [-0.15811388 -0.31622777]
0.[0,0]:0.[2,1] (dx=[2.0,1.0]) [0.31622777 0.15811388]
0.[0,0]:0.[2,-1] (dx=[2.0,-1.0]) [ 0.31622777 -0.15811388]
0.[0,0]:0.[-1,2] (dx=[-1.0,2.0]) [-0.15811388 0.31622777]
0.[0,0]:0.[-2,1] (dx=[-2.0,1.0]) [-0.31622777 0.15811388]
0.[0,0]:0.[1,-2] (dx=[1.0,-2.0]) [-0.31622777 -0.15811388]
0.[0,0]:0.[1,2] (dx=[1.0,2.0]) [-0.31622777 0.15811388]
0.[0,0]:0.[-2,-1] (dx=[-2.0,-1.0]) [-0.15811388 0.31622777]
0.[0,0]:0.[-1,-2] (dx=[-1.0,-2.0]) [ 0.31622777 -0.15811388]
0.[0,0]:0.[2,1] (dx=[2.0,1.0]) [ 0.15811388 -0.31622777]
0.[0,0]:0.[2,-1] (dx=[2.0,-1.0]) [0.15811388 0.31622777]
0.[0,0]:0.[-1,2] (dx=[-1.0,2.0]) [0.31622777 0.15811388]
0.[0,0]:0.[-2,1] (dx=[-2.0,1.0]) [-0.15811388 -0.31622777]
0.[0,0]:0.[-3,0] (dx=[-3.0,-0.0]) [-0.5 -0. ]
0.[0,0]:0.[3,0] (dx=[3.0,0.0]) [0.5 0. ]
0.[0,0]:0.[0,3] (dx=[-0.0,3.0]) [-0. 0.5]
0.[0,0]:0.[0,-3] (dx=[0.0,-3.0]) [ 0. -0.5]
```python
GF = GFcalc.GFCrystalcalc(square, chem, sitelist, jumpnetwork, kptwt = mpmesh(32))
GF.SetRates(np.ones(1), np.zeros(1), np.ones(1), np.zeros(1))
```
```python
if __TESTING__:
print(GF)
```
GFcalc for crystal (chemistry=0):
#Lattice:
a1 = [1. 0.]
a2 = [0. 1.]
#Basis:
(0) 0.0 = [0. 0.]
kpt grid: [0 0] (528)
```python
GFmat, GFstarset = vecstarset.GFexpansion()
GF0array = np.array([GF(0,0,GFstarset.states[s[0]].R) for s in GFstarset.stars])
g0 = np.dot(GFmat, GF0array)
print(g0)
```
[[-3.63380228e-01 -1.93209535e-01 -1.80281366e-01 -1.93003147e-01
-1.32850952e-11 -1.13613420e-01]
[-1.93209535e-01 -4.24413181e-01 -1.72627262e-01 -3.04231075e-01
3.63234979e-02 -1.35447258e-01]
[-1.80281366e-01 -1.72627262e-01 -4.76993647e-01 -3.41764874e-01
-5.13691834e-02 -2.62902329e-01]
[-1.93003147e-01 -3.04231075e-01 -3.41764874e-01 -6.99942498e-01
1.09684277e-02 -2.95628381e-01]
[-1.32850952e-11 3.63234979e-02 -5.13691834e-02 1.09684277e-02
-2.91981959e-01 -3.46852142e-02]
[-1.13613420e-01 -1.35447258e-01 -2.62902329e-01 -2.95628381e-01
-3.46852142e-02 -5.42115430e-01]]
```python
basis2state = [starset.stateindex(stars.PairState(0, 0, bv, bv)) for bv in basisfunc]
basis2star = [starset.starindex(stars.PairState(0, 0, bv, bv)) for bv in basisfunc]
```
```python
if __TESTING__:
for bv, stateind, starind in zip(basisfunc, basis2state, basis2star):
print(bv, stateind, starind)
```
[0 0] None None
[-1 0] 0 0
[1 0] 3 0
[ 0 -1] 2 0
[0 1] 1 0
[-2 0] 8 2
[-1 -1] 6 1
[-1 1] 7 1
[2 0] 11 2
[ 1 -1] 4 1
[1 1] 5 1
[ 0 -2] 10 2
[0 2] 9 2
[-3 0] 20 4
[-2 -1] 14 3
[-2 1] 19 3
[-1 -2] 15 3
[-1 2] 18 3
[3 0] 21 4
[ 2 -1] 17 3
[2 1] 16 3
[ 1 -2] 12 3
[1 2] 13 3
[ 0 -3] 23 4
[0 3] 22 4
[4 0] None None
[ 3 -1] None None
[3 1] None None
[ 2 -2] None None
[2 2] None None
[ 1 -3] None None
[1 3] None None
[-4 0] None None
[-3 -1] None None
[-3 1] None None
[-2 -2] None None
[-2 2] None None
[-1 -3] None None
[-1 3] None None
[0 4] None None
[ 0 -4] None None
[5 0] None None
[ 4 -1] None None
[4 1] None None
[ 3 -2] None None
[3 2] None None
[ 2 -3] None None
[2 3] None None
[1 4] None None
[ 1 -4] None None
[-5 0] None None
[-4 -1] None None
[-4 1] None None
[-3 -2] None None
[-3 2] None None
[-2 -3] None None
[-2 3] None None
[-1 4] None None
[-1 -4] None None
[0 5] None None
[ 0 -5] None None
[6 0] None None
[ 5 -1] None None
[5 1] None None
[ 4 -2] None None
[4 2] None None
[ 3 -3] None None
[3 3] None None
[2 4] None None
[ 2 -4] None None
[1 5] None None
[ 1 -5] None None
[-6 0] None None
[-5 -1] None None
[-5 1] None None
[-4 -2] None None
[-4 2] None None
[-3 -3] None None
[-3 3] None None
[-2 4] None None
[-2 -4] None None
[-1 5] None None
[-1 -5] None None
[0 6] None None
[ 0 -6] None None
```python
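# Invert the basis2state map: state2basis[s] is the index into basisfunc for state s.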
state2basis = [basis2state.index(n) for n in range(starset.Nstates)]
```
```python
if __TESTING__:
print(state2basis)
```
[1, 4, 3, 2, 9, 10, 6, 7, 5, 12, 11, 8, 21, 22, 14, 16, 20, 19, 17, 15, 13, 18, 24, 23]
**Now** the real conversion begins! We start by mapping all of the bias vectors and local functions onto our vectorBasis.
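Concretely, the fold-down below computes, for vector stars $i,j$ with states $R$ and attached vectors $\vec v^{(i)}(R)$,

$$b^{(i)} = \sum_{R} \vec v^{(i)}(R)\cdot\vec b(R),\qquad
W^{(ij)} = \sum_{R,R'} \left(\vec v^{(i)}(R)\cdot\vec v^{(j)}(R')\right)\,\left(\bar W(R,R')-\bar W^\text{far}(R,R')\right),$$

so the bias vectors project directly, while the averaged rate matrices are folded down after subtracting their translationally invariant (far-field) parts.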
```python
NVS = vecstarset.Nvstars
symmbiasvecVS = np.zeros((N+1, NVS))
symmbiasGvecVS = np.zeros((N+1, NVS))
symmbiasGGvecVS = np.zeros((N+1, NVS))
for i in range(vecstarset.Nvstars):
for Ri, vi in zip(vecstarset.vecpos[i], vecstarset.vecvec[i]):
bi = state2basis[Ri]
symmbiasvecVS[:, i] += np.dot(vi, symmbiasvecbar[:,bi,:])
symmbiasGvecVS[:, i] += np.dot(vi, symmbiasGvecbar[:,bi,:])
symmbiasGGvecVS[:, i] += np.dot(vi, symmbiasGGvecbar[:,bi,:])
stars.zeroclean(symmbiasvecVS);
stars.zeroclean(symmbiasGvecVS);
stars.zeroclean(symmbiasGGvecVS);
```
```python
for nv in range(NVS):
if not np.allclose(symmbiasvecVS[:,nv], 0):
print(nv, truncate_vec(symmbiasvecVS[:,nv]))
```
0 [ 0. -2. 2.]
```python
for nv in range(NVS):
if not np.allclose(symmbiasGvecVS[:,nv], 0):
print(nv, truncate_vec(symmbiasGvecVS[:,nv]))
```
0 [ 0. 0. 0.66666667 0. 0. -0.66666667]
1 [ 0. -0.94280904 0.94280904 0. 0.94280904 -0.94280904]
2 [ 0. -0.66666667 0.66666667 0. 0.66666667 -0.66666667]
```python
for nv in range(NVS):
if not np.allclose(symmbiasGGvecVS[:,nv], 0):
print(nv, truncate_vec(symmbiasGGvecVS[:,nv]))
```
0 [ 0. -0.5 -1.33333333 0. 0.5 1.16666667
0. 0. 0.16666667]
1 [ 0. 1.88561808 -1.5713484 0. -1.88561808 1.25707872
0. 0. 0.31426968]
2 [ 0. 1.33333333 -1.11111111 0. -1.33333333 0.88888889
0. 0. 0.22222222]
3 [ 0. -0.52704628 0.70272837 0. 0.52704628 -0.87841046
0. 0. 0.17568209]
5 [ 0. -0.16666667 0.22222222 0. 0.16666667 -0.27777778
0. 0. 0.05555556]
```python
symmWbarVS = np.zeros((N+1, NVS, NVS))
symmWGbarVS = np.zeros((N+1, NVS, NVS))
symmWGGbarVS = np.zeros((N+1, NVS, NVS))
for i in range(vecstarset.Nvstars):
for Ri, vi in zip(vecstarset.vecpos[i], vecstarset.vecvec[i]):
bi = state2basis[Ri]
for j in range(vecstarset.Nvstars):
for Rj, vj in zip(vecstarset.vecpos[j], vecstarset.vecvec[j]):
bj = state2basis[Rj]
vivj = np.dot(vi,vj)
symmWbarVS[:, i, j] += vivj*(symmWbar[bi,bj,:]-symmWbar_far[bi,bj,:])
symmWGbarVS[:, i, j] += vivj*(symmWGbar[bi,bj,:]-symmWGbar_far[bi,bj,:])
symmWGGbarVS[:, i, j] += vivj*(symmWGGbar[bi,bj,:]-symmWGGbar_far[bi,bj,:])
stars.zeroclean(symmWbarVS);
stars.zeroclean(symmWGbarVS);
stars.zeroclean(symmWGGbarVS);
```
```python
for nv,mv in itertools.product(range(NVS), repeat=2):
if not np.allclose(symmWbarVS[:,nv,mv], 0):
print(nv, mv, truncate_vec(symmWbarVS[:,nv,mv]))
```
0 0 [ 0. 1. -4. 3.]
```python
for nv,mv in itertools.product(range(NVS), repeat=2):
if not np.allclose(symmWGbarVS[:,nv,mv], 0):
print(nv, mv, truncate_vec(symmWGbarVS[:,nv,mv]))
```
0 0 [ 0. 1. -3.33333333 2.66666667 0. 0.33333333
-0.66666667]
0 1 [ 0. 0. -0.47140452 0.47140452 0. 0.47140452
-0.47140452]
0 2 [ 0. 0. -0.33333333 0.33333333 0. 0.33333333
-0.33333333]
1 0 [ 0. 0. -0.47140452 0.47140452 0. 0.47140452
-0.47140452]
1 1 [ 0. 0.16666667 -0.83333333 0.66666667 -0.66666667 1.83333333
-1.16666667]
1 2 [ 0. 0.11785113 -0.11785113 0. -0.47140452 0.82495791
-0.35355339]
2 0 [ 0. 0. -0.33333333 0.33333333 0. 0.33333333
-0.33333333]
2 1 [ 0. 0.11785113 -0.11785113 0. -0.47140452 0.82495791
-0.35355339]
2 2 [ 0. 0.08333333 -0.41666667 0.33333333 -0.33333333 0.91666667
-0.58333333]
```python
for nv,mv in itertools.product(range(NVS), repeat=2):
if not np.allclose(symmWGGbarVS[:,nv,mv], 0):
print(nv, mv, truncate_vec(symmWGGbarVS[:,nv,mv]))
```
0 0 [ 0. 0. -1.67361111 0.27083333 0.10416667 -0.34027778
1.72916667 -0.10416667 -0.04861111 0.0625 ]
0 1 [ 0. 0.08838835 1.06066017 -1.09994388 -0.35355339 -0.41247896
0.66782307 0. -0.11785113 0.16695577]
0 2 [ 0. 0.0625 0.83333333 -0.88888889 -0.25 -0.375
0.61111111 0. -0.08333333 0.09027778]
0 3 [ 0. 0. -0.13615362 0.19764235 -0.01317616 0.07466489
-0.14493773 0.01317616 -0.05709668 0.06588078]
0 4 [ 0. 0. -0.03513642 0.08784105 -0.01756821 -0.10540926
0.07027284 0.01756821 -0.01756821]
0 5 [ 0. 0. -0.07638889 0.14583333 -0.02083333 -0.07638889
0.02083333 0.02083333 -0.03472222 0.02083333]
1 0 [ 0. 0.08838835 1.06066017 -1.09994388 -0.35355339 -0.41247896
0.66782307 0. -0.11785113 0.16695577]
1 1 [ 0. -0.5 2.27777778 -1.77777778 2. -5.05555556
3.05555556 0. -0.22222222 0.22222222]
1 2 [ 0. -0.35355339 0.19641855 0.15713484 1.41421356 -2.16060405
0.74639049 0. -0.15713484 0.15713484]
1 3 [ 0. 0.0931695 -0.3478328 0.2981424 -0.372678 1.03107579
-0.74535599 0. -0.124226 0.1677051 ]
1 4 [ 0. 0. 0.0745356 -0.0993808 0. -0.0745356
0.124226 0. 0. -0.0248452]
1 5 [ 0. 0.02946278 -0.03928371 0. -0.11785113 0.25534412
-0.11785113 0. -0.03928371 0.02946278]
2 0 [ 0. 0.0625 0.83333333 -0.88888889 -0.25 -0.375
0.61111111 0. -0.08333333 0.09027778]
2 1 [ 0. -0.35355339 0.19641855 0.15713484 1.41421356 -2.16060405
0.74639049 0. -0.15713484 0.15713484]
2 2 [ 0. -0.25 1.13888889 -0.88888889 1. -2.52777778
1.52777778 0. -0.11111111 0.11111111]
2 3 [ 0. 0.06588078 -0.1932503 0.14054567 -0.26352314 0.67637606
-0.43920523 0. -0.08784105 0.1010172 ]
2 4 [ 0. 0. -0.05270463 0.07027284 0. 0.05270463
-0.08784105 0. 0. 0.01756821]
2 5 [ 0. 0.02083333 -0.11111111 0.11111111 -0.08333333 0.26388889
-0.22222222 0. -0.02777778 0.04861111]
3 0 [ 0. 0. -0.13615362 0.19764235 -0.01317616 0.07466489
-0.14493773 0.01317616 -0.05709668 0.06588078]
3 1 [ 0. 0.0931695 -0.3478328 0.2981424 -0.372678 1.03107579
-0.74535599 0. -0.124226 0.1677051 ]
3 2 [ 0. 0.06588078 -0.1932503 0.14054567 -0.26352314 0.67637606
-0.43920523 0. -0.08784105 0.1010172 ]
3 3 [ 0. 0. -0.11388889 0.15277778 -0.00833333 0.10277778
-0.14722222 0.00833333 -0.06388889 0.06944444]
3 4 [ 0. 0. -0.02222222 0.05555556 -0.01111111 -0.06666667
0.04444444 0.01111111 -0.01111111]
3 5 [ 0. 0. 0.04831258 -0.10980131 0.01317616 0.1010172
-0.03074437 -0.01317616 -0.03074437 0.02196026]
4 0 [ 0. 0. -0.03513642 0.08784105 -0.01756821 -0.10540926
0.07027284 0.01756821 -0.01756821]
4 1 [ 0. 0. 0.0745356 -0.0993808 0. -0.0745356
0.124226 0. 0. -0.0248452]
4 2 [ 0. 0. -0.05270463 0.07027284 0. 0.05270463
-0.08784105 0. 0. 0.01756821]
4 3 [ 0. 0. -0.02222222 0.05555556 -0.01111111 -0.06666667
0.04444444 0.01111111 -0.01111111]
4 4 [ 0. 0. -0.28888889 0.55555556 -0.06111111 -0.28333333
-0.00555556 0.06111111 0.02222222]
4 5 [ 0. 0. 0.03513642 -0.08784105 0.01756821 0.10540926
-0.07027284 -0.01756821 0.01756821]
5 0 [ 0. 0. -0.07638889 0.14583333 -0.02083333 -0.07638889
0.02083333 0.02083333 -0.03472222 0.02083333]
5 1 [ 0. 0.02946278 -0.03928371 0. -0.11785113 0.25534412
-0.11785113 0. -0.03928371 0.02946278]
5 2 [ 0. 0.02083333 -0.11111111 0.11111111 -0.08333333 0.26388889
-0.22222222 0. -0.02777778 0.04861111]
5 3 [ 0. 0. 0.04831258 -0.10980131 0.01317616 0.1010172
-0.03074437 -0.01317616 -0.03074437 0.02196026]
5 4 [ 0. 0. 0.03513642 -0.08784105 0.01756821 0.10540926
-0.07027284 -0.01756821 0.01756821]
5 5 [ 0. 0. -0.0625 0.10416667 -0.00694444 -0.00694444
-0.04861111 0.00694444 0.00694444 0.00694444]
### Fourier transformation of translationally invariant contributions
Our "far" functions represent the translationally invariant contributions, and this requires Fourier transforms, and Taylor expansions to then be made into local contributions.
Mathematically, we're attempting to compute $\eta_i\cdot M_{ij}\cdot\eta_j$; the issue is that $\eta_i$ does not go to zero in the far-field (it's not local), and $M$ can be written as a local function plus a translationally invariant function $M^0$. Only the latter is problematic. However, as $\eta_i$ comes from a Green function solution (using the Dyson equation), if we multiply by the $w^0$, we produce a local function. Hence, we can rewrite that matrix equation as $(w^0\eta)_i\cdot (g^0M^0g^0)_{ij}\cdot (w^0\eta_j)$. Now, then we "simply" need to evaluate $g^0M^0g^0$, which can be done using Fourier transforms, as it is the product of three translationally invariant functions.
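Schematically, with $g^0 = (w^0)^{-1}$ in the translationally invariant sector, $\eta = g^0(w^0\eta)$, so

$$\eta_i\, M^0_{ij}\, \eta_j = (w^0\eta)_i\,\left(g^0 M^0 g^0\right)_{ij}\,(w^0\eta)_j,$$

and only the kernel $g^0 M^0 g^0$ (a product of translationally invariant functions) needs to be evaluated in Fourier space.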
```python
def FT(mat, kptwt):
"""
(real) Fourier transform of translationally invariant function.
:param mat[Nbasis, N+1]: far-field version of matrix;
each Nbasis is relative to 0
:param kptwt: tuple of (kpt[Nkpt, 2], wt[Nkpt])
:returns matFT[Nkpt, N+1]: FT of matrix
"""
kpt = kptwt[0]
matFT = np.zeros((kpt.shape[0], N+1))
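    # basisfunc contains each vector together with its negative (inversion
    # symmetry), so the cosine sum below equals the full Fourier transform.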
for bv, matv in zip(basisfunc, mat):
matFT += np.outer(np.cos(np.dot(kpt, bv)), matv)
return matFT
```
```python
PE.Taylor2D(Lmax=6); # initialize
def Taylor(mat):
"""
(real) Taylor expansion of Fourier transform of translationally invariant function.
:param mat[Nbasis, N+1]: far-field version of matrix;
each Nbasis is relative to 0
:returns matTaylor: T2D version of FT Taylor expansion matrix
"""
pre = np.array([1., 0., -1/2, 0., 1/24]) # Taylor coefficients for cos()
matTaylor = PE.Taylor2D()
for bv, matv in zip(basisfunc, mat):
for ve in PE.Taylor2D.constructexpansion([(matv, bv)], pre=pre):
matTaylor += ve
matTaylor.reduce()
return matTaylor
```
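The `pre` array above is just the Maclaurin series of $\cos x$ through fourth order; a quick symbolic check (a minimal sketch, assuming `sympy` from the preamble is in scope):

```python
# cos(x) = 1 - x**2/2 + x**4/24 + O(x**6): the coefficients [1, 0, -1/2, 0, 1/24]
x = sympy.symbols('x')
print(sympy.series(sympy.cos(x), x, 0, 6))
```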
```python
if __TESTING__:
print(FT(symmWbar_far[0], mpmesh(4)))
```
array([[ 0. , -0.30448187, 0.60896374, -0.30448187, 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ],
[ 0. , -1.38687407, 2.77374814, -1.38687407, 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ],
[ 0. , -2.46926627, 4.93853254, -2.46926627, 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ],
[ 0. , -2.9176078 , 5.8352156 , -2.9176078 , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ],
[ 0. , -4. , 8. , -4. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ],
[ 0. , -5.53073373, 11.06146746, -5.53073373, 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ],
[ 0. , -4. , 8. , -4. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ],
[ 0. , -5.0823922 , 10.1647844 , -5.0823922 , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ],
[ 0. , -6.61312593, 13.22625186, -6.61312593, 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ],
[ 0. , -7.69551813, 15.39103626, -7.69551813, 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ]])
```python
g0Taylor = (Taylor(symmWbar_far[0])[1]).inv() # extract out the "constant" term
```
```python
print(g0Taylor)
```
f^(-2, 0)(u)*(
(-1-0j) x^0 y^0 )
f^(0, 4)(u)*(
(-0.0625-0j) x^0 y^0
(-0.020833333333333332-0j) x^0 y^4
(0.125+0j) x^2 y^2
(-0.020833333333333332-0j) x^4 y^0 )
```python
g0WGbarTaylor = ( (g0Taylor*g0Taylor)*Taylor(symmWGbar_far[0])).reduce().truncate(0)
g0WGGbarTaylor = ( (g0Taylor*g0Taylor)*Taylor(symmWGGbar_far[0])).reduce().truncate(0)
```
```python
print(g0WGbarTaylor)
```
f^(-2, 0)(u)*(
[ 0. +0.j -2. +0.j 4.66666667+0.j -2.66666667+0.j
0. +0.j -0.66666667+0.j 0.66666667+0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j] x^0 y^0 )
f^(0, 4)(u)*(
[ 0. +0.j 0.125 +0.j -0.29166667+0.j 0.16666667+0.j
0. +0.j 0.04166667+0.j -0.04166667+0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j] x^0 y^0
[ 0. +0.j -0.04166667+0.j 0.09722222+0.j -0.05555556+0.j
0. +0.j -0.01388889+0.j 0.01388889+0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j] x^0 y^4
[ 0. +0.j 0.25 +0.j -0.58333333+0.j 0.33333333+0.j
0. +0.j 0.08333333+0.j -0.08333333+0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j] x^2 y^2
[ 0. +0.j -0.04166667+0.j 0.09722222+0.j -0.05555556+0.j
0. +0.j -0.01388889+0.j 0.01388889+0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j] x^4 y^0 )
```python
print(g0WGGbarTaylor)
```
f^(-2, 0)(u)*(
[ 0. +0.j 1. +0.j -2.66666667+0.j 1.44444444+0.j
0. +0.j 0.66666667+0.j -0.22222222+0.j 0. +0.j
0. +0.j -0.22222222+0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j] x^0 y^0 )
f^(0, 4)(u)*(
[ 0. +0.j 0.0625 +0.j -0.41666667+0.j 0.42361111+0.j
0. +0.j 0.29166667+0.j -0.43055556+0.j 0. +0.j
0. +0.j 0.06944444+0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j] x^0 y^0
[ 0. +0.j 0.02083333+0.j -0.05555556+0.j 0.03009259+0.j
0. +0.j 0.01388889+0.j -0.00462963+0.j 0. +0.j
0. +0.j -0.00462963+0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j] x^0 y^4
[ 0. +0.j -0.125 +0.j 0.33333333+0.j -0.18055556+0.j
0. +0.j -0.08333333+0.j 0.02777778+0.j 0. +0.j
0. +0.j 0.02777778+0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j] x^2 y^2
[ 0. +0.j 0.02083333+0.j -0.05555556+0.j 0.03009259+0.j
0. +0.j 0.01388889+0.j -0.00462963+0.j 0. +0.j
0. +0.j -0.00462963+0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j] x^4 y^0 )
```python
kpt, wt = mpmesh(32)
```
```python
g0FT = 1./FT(symmWbar_far[0], (kpt, wt))[:,1]
WGbarFT = FT(symmWGbar_far[0], (kpt, wt))
WGGbarFT = FT(symmWGGbar_far[0], (kpt, wt))
```
```python
if __TESTING__:
print(g0FT)
```
array([-2.07547456e+02, -4.15695538e+01, -2.30979101e+01, -1.60393719e+01,
-1.22572342e+01, -8.34202383e+00, -8.38075744e+00, -7.21714998e+00,
-5.65452983e+00, -4.27672572e+00, -5.14322595e+00, -4.68014938e+00,
-3.96890116e+00, -3.23694485e+00, -2.60387626e+00, -3.48490501e+00,
-3.26594854e+00, -2.90292384e+00, -2.49094138e+00, -2.09835320e+00,
-1.75720502e+00, -2.52529063e+00, -2.40829271e+00, -2.20496260e+00,
-1.95887611e+00, -1.70763175e+00, -1.47464847e+00, -1.27037382e+00,
-1.92120435e+00, -1.85272768e+00, -1.72999832e+00, -1.57477929e+00,
-1.40821436e+00, -1.24588775e+00, -1.09687286e+00, -9.65068852e-01,
-1.51678050e+00, -1.47377621e+00, -1.39505117e+00, -1.29233369e+00,
-1.17799019e+00, -1.06221965e+00, -9.51957533e-01, -8.51078432e-01,
-7.61171613e-01, -1.23302969e+00, -1.20445885e+00, -1.15135893e+00,
-1.08048155e+00, -9.99377583e-01, -9.14792419e-01, -8.31817811e-01,
-7.53750394e-01, -6.82368660e-01, -6.18351645e-01, -1.02647948e+00,
-1.00660181e+00, -9.69243946e-01, -9.18521299e-01, -8.59242410e-01,
-7.95964592e-01, -7.32397138e-01, -6.71189472e-01, -6.13995436e-01,
-5.61672827e-01, -5.14512064e-01, -8.71612445e-01, -8.57238285e-01,
-8.29994468e-01, -7.92517567e-01, -7.47992821e-01, -6.99578284e-01,
-6.49994418e-01, -6.01327408e-01, -5.55009210e-01, -5.11904020e-01,
-4.72437004e-01, -4.36723250e-01, -7.52655341e-01, -7.41912813e-01,
-7.21418593e-01, -6.92937228e-01, -6.58656652e-01, -6.20823768e-01,
-5.81461192e-01, -5.42205833e-01, -5.04260399e-01, -4.68423198e-01,
-4.35158206e-01, -4.04676424e-01, -3.77011272e-01, -6.59435418e-01,
-6.51174521e-01, -6.35333245e-01, -6.13139014e-01, -5.86145494e-01,
-5.55993425e-01, -5.24212160e-01, -4.92092712e-01, -4.60633834e-01,
-4.30544362e-01, -4.02279452e-01, -3.76091212e-01, -3.52080452e-01,
-3.30242344e-01, -5.85155264e-01, -5.78641409e-01, -5.66098657e-01,
-5.48410702e-01, -5.26714855e-01, -5.02239528e-01, -4.76162363e-01,
-4.49511579e-01, -4.23115450e-01, -3.97592112e-01, -3.73366489e-01,
-3.50701367e-01, -3.29732724e-01, -3.10503203e-01, -2.92990658e-01,
-5.25134585e-01, -5.19882481e-01, -5.09735399e-01, -4.95349530e-01,
-4.77580904e-01, -4.57371296e-01, -4.35644505e-01, -4.13229561e-01,
-3.90816326e-01, -3.68940243e-01, -3.47988381e-01, -3.28218113e-01,
-3.09781191e-01, -2.92748249e-01, -2.77130854e-01, -2.62899886e-01,
-4.76067053e-01, -4.71746550e-01, -4.63376389e-01, -4.51457645e-01,
-4.36651332e-01, -4.19695799e-01, -4.01329156e-01, -3.82228939e-01,
-3.62974109e-01, -3.44028404e-01, -3.25740341e-01, -3.08354068e-01,
-2.92025743e-01, -2.76841516e-01, -2.62834581e-01, -2.50000000e-01,
-2.38306838e-01, -4.35564725e-01, -4.31945305e-01, -4.24917422e-01,
-4.14873580e-01, -4.02336390e-01, -3.87897034e-01, -3.72155907e-01,
-3.55674595e-01, -3.38943637e-01, -3.22366184e-01, -3.06254752e-01,
-2.90837110e-01, -2.76267421e-01, -2.62639487e-01, -2.50000000e-01,
-2.38360522e-01, -2.27707633e-01, -2.18011124e-01, -4.01869586e-01,
-3.98786518e-01, -3.92788736e-01, -3.84190967e-01, -3.73415536e-01,
-3.60945269e-01, -3.47277051e-01, -3.32883041e-01, -3.18183306e-01,
-3.03530507e-01, -2.89205001e-01, -2.75417589e-01, -2.62317054e-01,
-2.50000000e-01, -2.38521202e-01, -2.27903346e-01, -2.18145533e-01,
-2.09230351e-01, -2.01129514e-01, -3.73665715e-01, -3.70998775e-01,
-3.65802286e-01, -3.58334116e-01, -3.48942567e-01, -3.38029408e-01,
-3.26012756e-01, -3.13295236e-01, -3.00240631e-01, -2.87159840e-01,
-2.74305210e-01, -2.61871313e-01, -2.50000000e-01, -2.38787760e-01,
-2.28293888e-01, -2.18548454e-01, -2.09559472e-01, -2.01319016e-01,
-1.93808213e-01, -1.87001195e-01, -3.49953895e-01, -3.47613630e-01,
-3.43047562e-01, -3.36471258e-01, -3.28177490e-01, -3.18506539e-01,
-3.07815896e-01, -2.96453708e-01, -2.84738667e-01, -2.72947250e-01,
-2.61307788e-01, -2.50000000e-01, -2.39158326e-01, -2.28877480e-01,
-2.19218960e-01, -2.10217635e-01, -2.01887831e-01, -1.94228646e-01,
-1.87228383e-01, -1.80868134e-01, -1.75124590e-01, -3.29966076e-01,
-3.27884713e-01, -3.23819203e-01, -3.17953159e-01, -3.10537124e-01,
-3.01864168e-01, -2.92244666e-01, -2.81983782e-01, -2.71363971e-01,
-2.60633394e-01, -2.50000000e-01, -2.39630281e-01, -2.29651405e-01,
-2.20155446e-01, -2.11204637e-01, -2.02836871e-01, -1.95070915e-01,
-1.87911064e-01, -1.81351092e-01, -1.75377516e-01, -1.69972182e-01,
-1.65114290e-01, -3.13105890e-01, -3.11231189e-01, -3.07565865e-01,
-3.02269091e-01, -2.95558930e-01, -2.87691848e-01, -2.78941308e-01,
-2.69578375e-01, -2.59856304e-01, -2.50000000e-01, -2.40200243e-01,
-2.30611966e-01, -2.21355546e-01, -2.12520056e-01, -2.04167564e-01,
-1.96337780e-01, -1.89052572e-01, -1.82320074e-01, -1.76138237e-01,
-1.70497800e-01, -1.65384693e-01, -1.60781938e-01, -1.56671122e-01,
-2.98906570e-01, -2.97197585e-01, -2.93853576e-01, -2.89014854e-01,
-2.82874283e-01, -2.75659739e-01, -2.67615607e-01, -2.58985793e-01,
-2.50000000e-01, -2.40864068e-01, -2.31754414e-01, -2.22816016e-01,
-2.14163112e-01, -2.05881730e-01, -1.98033254e-01, -1.90658403e-01,
-1.83781190e-01, -1.77412580e-01, -1.71553709e-01, -1.66198607e-01,
-1.61336431e-01, -1.56953263e-01, -1.53033510e-01, -1.49560983e-01,
-2.87000759e-01, -2.85424846e-01, -2.82339144e-01, -2.77869311e-01,
-2.72188562e-01, -2.65502348e-01, -2.58032075e-01, -2.50000000e-01,
-2.41616841e-01, -2.33072871e-01, -2.24532574e-01, -2.16132471e-01,
-2.07981388e-01, -2.00162456e-01, -1.92736111e-01, -1.85743554e-01,
-1.79210262e-01, -1.73149284e-01, -1.67564179e-01, -1.62451538e-01,
-1.57803067e-01, -1.53607289e-01, -1.49850879e-01, -1.46519714e-01,
-1.43599666e-01, -2.77098567e-01, -2.75629245e-01, -2.72750638e-01,
-2.68577006e-01, -2.63266217e-01, -2.57006126e-01, -2.50000000e-01,
-2.42452878e-01, -2.34560239e-01, -2.26499722e-01, -2.18425999e-01,
-2.10468498e-01, -2.02731386e-01, -1.95295156e-01, -1.88219205e-01,
-1.81544884e-01, -1.75298652e-01, -1.69495087e-01, -1.64139589e-01,
-1.59230730e-01, -1.54762221e-01, -1.50724520e-01, -1.47106121e-01,
-1.43894562e-01, -1.41077206e-01, -1.38641834e-01, -2.68971479e-01,
-2.67586866e-01, -2.64872967e-01, -2.60935201e-01, -2.55919506e-01,
-2.50000000e-01, -2.43365719e-01, -2.36208106e-01, -2.28710521e-01,
-2.21040461e-01, -2.13344633e-01, -2.05746625e-01, -1.98346681e-01,
-1.91222985e-01, -1.84433915e-01, -1.78020776e-01, -1.72010678e-01,
-1.66419307e-01, -1.61253453e-01, -1.56513213e-01, -1.52193858e-01,
-1.48287369e-01, -1.44783679e-01, -1.41671652e-01, -1.38939844e-01,
-1.36577089e-01, -1.34572940e-01, -2.62440488e-01, -2.61122134e-01,
-2.58537150e-01, -2.54784181e-01, -2.50000000e-01, -2.44348144e-01,
-2.38006636e-01, -2.31156346e-01, -2.23971154e-01, -2.16610566e-01,
-2.09214952e-01, -2.01903204e-01, -1.94772354e-01, -1.87898642e-01,
-1.81339523e-01, -1.75136157e-01, -1.69316070e-01, -1.63895749e-01,
-1.58883016e-01, -1.54279125e-01, -1.50080546e-01, -1.46280438e-01,
-1.42869854e-01, -1.39838695e-01, -1.37176454e-01, -1.34872796e-01,
-1.32917995e-01, -1.31303259e-01, -2.57367318e-01, -2.56099318e-01,
-2.53612351e-01, -2.50000000e-01, -2.45392176e-01, -2.39944472e-01,
-2.33826618e-01, -2.27211490e-01, -2.20265762e-01, -2.13142834e-01,
-2.05978203e-01, -1.98887110e-01, -1.91964061e-01, -1.85283737e-01,
-1.78902808e-01, -1.72862258e-01, -1.67189874e-01, -1.61902704e-01,
-1.57009324e-01, -1.52511843e-01, -1.48407624e-01, -1.44690713e-01,
-1.41353011e-01, -1.38385204e-01, -1.35777503e-01, -1.33520205e-01,
-1.31604137e-01, -1.30020976e-01, -1.28763495e-01, -2.53647991e-01,
-2.52416287e-01, -2.50000000e-01, -2.46489109e-01, -2.42008638e-01,
-2.36708505e-01, -2.30752504e-01, -2.24307773e-01, -2.17535796e-01,
-2.10585550e-01, -2.03588988e-01, -1.96658678e-01, -1.89887258e-01,
-1.83348235e-01, -1.77097674e-01, -1.71176390e-01, -1.65612327e-01,
-1.60422914e-01, -1.55617249e-01, -1.51198043e-01, -1.47163294e-01,
-1.43507684e-01, -1.40223719e-01, -1.37302654e-01, -1.34735213e-01,
-1.32512154e-01, -1.30624704e-01, -1.29064881e-01, -1.27825738e-01,
-1.26901540e-01, -2.51208179e-01, -2.50000000e-01, -2.47629534e-01,
-2.44184449e-01, -2.39786622e-01, -2.34582323e-01, -2.28731517e-01,
-2.22397631e-01, -2.15738787e-01, -2.08901091e-01, -2.02014181e-01,
-1.95188874e-01, -1.88516576e-01, -1.82070012e-01, -1.75904835e-01,
-1.70061732e-01, -1.64568734e-01, -1.59443505e-01, -1.54695471e-01,
-1.50327728e-01, -1.46338682e-01, -1.42723421e-01, -1.39474845e-01,
-1.36584576e-01, -1.34043671e-01, -1.31843187e-01, -1.29974612e-01,
-1.28430184e-01, -1.27203140e-01, -1.26287891e-01, -1.25680149e-01,
-2.50000000e-01, -2.48803387e-01, -2.46455453e-01, -2.43042733e-01,
-2.38685569e-01, -2.33528440e-01, -2.27729436e-01, -2.21450165e-01,
-2.14847094e-01, -2.08064916e-01, -2.01232127e-01, -1.94458677e-01,
-1.87835361e-01, -1.81434512e-01, -1.75311574e-01, -1.69507167e-01,
-1.64049361e-01, -1.58955930e-01, -1.54236461e-01, -1.49894236e-01,
-1.45927859e-01, -1.42332619e-01, -1.39101608e-01, -1.36226627e-01,
-1.33698899e-01, -1.31509628e-01, -1.29650429e-01, -1.28113650e-01,
-1.26892619e-01, -1.25981817e-01, -1.25377010e-01, -1.25075329e-01])
```python
pmax = np.sqrt(min([np.dot(G, G) for G in square.BZG]) / -np.log(1e-11))
prefactor = square.volume
g0Taylor_fnlp = {(n, l): GFcalc.Fnl_p(n, pmax) for (n, l) in g0Taylor.nl()}
g0Taylor_fnlu = {(n, l): GFcalc.Fnl_u(n, l, pmax, prefactor, d=2)
for (n, l) in g0Taylor.nl()}
if __TESTING__:
print(pmax)
```
0.6242315078967211
```python
if __TESTING__:
print(g0Taylor.nl(), g0WGbarTaylor.nl(), g0WGGbarTaylor.nl())
```
([(-2, 0), (0, 4)], [(-2, 0), (0, 4)], [(-2, 0), (0, 4)])
```python
g0WGbarsc = np.zeros_like(g0WGbarFT)
g0WGGbarsc = np.zeros_like(g0WGGbarFT)
for i, k in enumerate(kpt):
g0WGbarsc[i] = (g0FT[i]**2)*g0WGbarFT[i] - g0WGbarTaylor(k, g0Taylor_fnlp).real
g0WGGbarsc[i] = (g0FT[i]**2)*g0WGGbarFT[i] - g0WGGbarTaylor(k, g0Taylor_fnlp).real
```
```python
if __TESTING__:
print(truncate_vec(np.dot(wt, g0WGGbarsc)))
```
array([ 0. , 0.14478892, -0.62835166, 0.44880337, 0. ,
0.33877381, -0.26925509, 0. , 0. , -0.03475936,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ])
### inverse Fourier transformation
Now we go from the Fourier-transformed version to the *inverse* Fourier-transformed version, i.e., back to real space (the final product version).
```python
# this list is a bit of overkill, but...
veclist = [GFstarset.states[s[0]].dx for s in GFstarset.stars]
g0WGbar, g0WGGbar = [], []
for x in veclist:
coskx = np.sum(np.cos(np.tensordot(kpt, np.dot(g, x), axes=(1, 0)))
for g in groupops) / 8
g0WGbar.append(np.dot(wt*coskx,g0WGbarsc) + g0WGbarTaylor(x, g0Taylor_fnlu).real)
g0WGGbar.append(np.dot(wt*coskx,g0WGGbarsc) + g0WGGbarTaylor(x, g0Taylor_fnlu).real)
```
```python
for v, g in zip(veclist, g0WGbar):
print(v, truncate_vec(g))
```
[0. 0.] [ 0. -0.08482923 0.19793486 -0.11310564 0. -0.02827641
0.02827641]
[0. 1.] [ 0. 0.16553345 -0.38624471 0.22071126 0. 0.05517782
-0.05517782]
[ 1. -1.] [ 0. 0.30248763 -0.70580448 0.40331684 0. 0.10082921
-0.10082921]
[ 0. -2.] [ 0. 0.39322104 -0.91751575 0.52429472 0. 0.13107368
-0.13107368]
[ 1. -2.] [ 0. 0.43996235 -1.02657882 0.58661647 0. 0.14665412
-0.14665412]
[2. 2.] [ 0. 0.51621963 -1.20451247 0.68829284 0. 0.17207321
-0.17207321]
[-3. -0.] [ 0. 0.52814564 -1.23233982 0.70419418 0. 0.17604855
-0.17604855]
[ 1. -3.] [ 0. 0.54850936 -1.27985518 0.73134582 0. 0.18283645
-0.18283645]
[ 3. -2.] [ 0. 0.59261795 -1.38277523 0.79015727 0. 0.19753932
-0.19753932]
[-0. 4.] [ 0. 0.62254226 -1.45259861 0.83005635 0. 0.20751409
-0.20751409]
[-4. -1.] [ 0. 0.63344479 -1.47803784 0.84459305 0. 0.21114826
-0.21114826]
[-3. 3.] [ 0. 0.64489535 -1.50475582 0.85986047 0. 0.21496512
-0.21496512]
[-4. -2.] [ 0. 0.66083076 -1.54193845 0.88110768 0. 0.22027692
-0.22027692]
[-5. -0.] [ 0. 0.69501434 -1.62170013 0.92668579 0. 0.23167145
-0.23167145]
[ 1. -5.] [ 0. 0.70175675 -1.63743242 0.93567567 0. 0.23391892
-0.23391892]
[6. 0.] [ 0. 0.75378654 -1.75883525 1.00504871 0. 0.25126218
-0.25126218]
```python
for v, g in zip(veclist, g0WGGbar):
print(v, truncate_vec(g))
```
[0. 0.] [ 0. -0.08646146 -0.01168398 0.11477505 0. 0.18460689
-0.21786611 0. 0. 0.01662961]
[0. 1.] [ 0. -0.02378012 0.07044044 -0.02288478 0. -0.0228802
-0.02467088 0. 0. 0.02377554]
[ 1. -1.] [ 0. -0.1544228 0.41815209 -0.23153244 0. -0.10930649
0.04491278 0. 0. 0.03219686]
[ 0. -2.] [ 0. -0.19919678 0.53636393 -0.29462537 0. -0.13797037
0.05288682 0. 0. 0.04254178]
[ 1. -2.] [ 0. -0.22230521 0.59746197 -0.32730496 0. -0.15285155
0.05714795 0. 0. 0.0478518 ]
[2. 2.] [ 0. -0.25976339 0.69600952 -0.37962332 0. -0.17648274
0.06323711 0. 0. 0.05662281]
[-3. -0.] [ 0. -0.26553675 0.71102589 -0.38745692 0. -0.17995238
0.06388796 0. 0. 0.05803221]
[ 1. -3.] [ 0. -0.27554375 0.73736146 -0.40144514 0. -0.18627396
0.06552883 0. 0. 0.06037257]
[ 3. -2.] [ 0. -0.29715105 0.79408693 -0.4314637 0. -0.19978484
0.06884046 0. 0. 0.06547219]
[-0. 4.] [ 0. -0.311763 0.83235176 -0.451636 0. -0.20882575
0.07092024 0. 0. 0.06895275]
[-4. -1.] [ 0. -0.31711534 0.84642681 -0.45910336 0. -0.21219612
0.0717799 0. 0. 0.07020811]
[-3. 3.] [ 0. -0.32274948 0.8612689 -0.46699851 0. -0.21576993
0.07272813 0. 0. 0.0715209 ]
[-4. -2.] [ 0. -0.33055592 0.88176352 -0.47784442 0. -0.22065168
0.07392532 0. 0. 0.07336318]
[-5. -0.] [ 0. -0.34734433 0.92592586 -0.50128534 0. -0.2312372
0.07664482 0. 0. 0.07729619]
[ 1. -5.] [ 0. -0.3506687 0.93469719 -0.50596233 0. -0.23335979
0.07722747 0. 0. 0.07806616]
[6. 0.] [ 0. -0.37638443 1.00267416 -0.54230952 0. -0.24990529
0.08194487 0. 0. 0.08398021]
### Putting it all together
All of the pieces are in place; we can now compute:
* Transport coefficients using the SCGF approach
* Residual bias correction to the latter
Quantities are expressed as polynomials in $c_\text{B}$, the concentration of the immobile species.
The Green function and the correction $\eta$ end up having particularly simple expressions that we will compute directly (this requires some simplification of the polynomial expressions, which is difficult to express directly here). Unfortunately, this also introduces a denominator polynomial which makes some of our expressions more complicated.
We have
$$\eta_i = -2\frac{g^0_{i0}}{1+g^0_{i0} - (1+3g^0_{i0})c_\text{B}}$$
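In the dilute limit $c_\text{B} \to 0$ this reduces to $\eta_i = -2 g^0_{i0}/(1+g^0_{i0})$, which is a convenient sanity check on the polynomial expressions built below.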
```python
@njit(nogil=True, parallel=True)
def polymult(p, q):
"""
Multiplication of two polynomial coefficients, where
p(x) = sum_n p[n] * x^n
:param p: polynomial coefficients for p
:param q: polynomial coefficients for q
:returns pq: polynomial coefficients for pq
"""
P = p.shape[0]-1
Q = q.shape[0]-1
pq = np.zeros(P+Q+1)
for n in range(P+Q+1):
for i in range(max(0,n-Q), min(n,P)+1):
pq[n] += p[i]*q[n-i]
return pq
```
```python
@njit(nogil=True, parallel=True)
def polydiv(p, a):
"""
Division of polynomial p(x) by (x-a)
:param p: polynomial coefficients for p
    :param a: root a in the monomial (x-a)
:returns d, r: divisor d(x), and remainder r
"""
P = p.shape[0]-1
d = np.zeros(P)
d[P-1] = p[P]
for n in range(P-2,-1,-1):
d[n] = p[n+1] + a*d[n+1]
return d, p[0] + a*d[0]
```
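As a quick sanity check of these two helpers (an illustrative cell added here, not part of the original analysis): multiplying two linear factors and then dividing out one root should recover the other factor with zero remainder.
```python
p = np.array([1., 1.])                       # 1 + x
q = np.array([-2., 1.])                      # -2 + x
print(polymult(p, q))                        # [-2. -1.  1.]  i.e. -2 - x + x^2
d, r = polydiv(np.array([-2., 1., 1.]), 1.)  # (x^2 + x - 2) / (x - 1)
print(d, r)                                  # [2. 1.] 0.0    i.e. (x + 2), remainder 0
```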
```python
divpoly = np.zeros(N+1)
divpoly[0], divpoly[1] = 1+g0[0,0], -(1+3*g0[0,0])
etabar_div = -2*g0[0] # this is etabar*div, so that etabar = etabar_div/div
etaW0_div = np.zeros(N+1)
etaW0_div[0] = -2 # this is W0*etabar*div (for the translational invariant terms)
```
```python
# unbiased:
L0 = np.zeros(N+1)
L0[0], L0[1] = 1., -1.
```
```python
# Note: vecstarset.outer[i,j, v1, v2] = 1/2 delta_ij delta_v1v2,
# so we can use dot-products throughout
# SCGF:
L1 = 0.5*np.dot(symmbiasvecVS, etabar_div)
L_SCGF = polymult(L0, divpoly)[:N+1] + L1
polydiv(L_SCGF, 1)
```
(array([-0.63661977, 0.63661977, 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. ]),
-1.1102230246251565e-16)
```python
# print(np.dot(GFmat[0,0], g0WGGbar))
PsiB = polymult(polymult(divpoly, divpoly), symmresbiasave)[:N+1] + \
-2*polymult(divpoly, np.dot(symmbiasGvecVS, etabar_div))[:N+1] + \
np.dot(np.dot(symmWGbarVS, etabar_div), etabar_div) + \
4*np.dot(GFmat[0,0], g0WGbar) # far-field; note: etaW0_div == 2, so factor of 4
print(PsiB)
```
[ 0. -0.00515924 0.92551811 -0.74429862 -1.58454228 1.72570416
-0.31722213 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. ]
```python
WR = polymult(polymult(divpoly, divpoly), symmresbiasGave)[:N+1] - \
polymult(polymult(divpoly, divpoly), symmresbiasave)[:N+1] + \
-2*polymult(divpoly, L1)[:N+1] + \
-2*polymult(divpoly, np.dot(symmbiasGGvecVS, etabar_div))[:N+1] + \
np.dot(np.dot(symmWGGbarVS, etabar_div), etabar_div) + \
4*np.dot(GFmat[0,0], g0WGGbar)
print(WR)
```
[ 0. -0.00257962 -0.92594535 0.59936059 1.39125173 -1.57641803
0.89518596 0.19329055 -0.81663463 0.2424888 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. ]
```python
# Now, to put it together, and do the division...
cBv = np.linspace(0.01,1,num=99,endpoint=False)
D1, D2 = [], []
for cB in cBv:
# print(cB)
cA = 1-cB
cpow = np.array([cB**n for n in range(N+1)])
L0c, divc, L1c = np.dot(cpow, L0), np.dot(cpow, divpoly), np.dot(cpow, L_SCGF)
L1c /= divc
PsiBc, WRc = np.dot(cpow, PsiB)/(divc*divc), np.dot(cpow, WR)/(divc*divc)
L2c = L1c + 0.5*PsiBc*PsiBc/WRc
D0c = L0c/cA
D1c = L1c/cA
D2c = L2c/cA
D1.append(D1c)
D2.append(D2c)
print(cB, D1c, D2c, D2c/D1c) #, PsiBc)
D1v, D2v = np.array(D1), np.array(D2)
```
0.01 0.9886002147241751 0.9885831647653186 0.9999827534339942
0.02 0.9772326208040749 0.9770283919265403 0.9997910130370325
0.03 0.9658970820763075 0.9652880898733048 0.9993695061158135
0.04 0.9545934631443265 0.9533617822712406 0.9987097325504106
0.05 0.9433216293730424 0.9412519134805684 0.9978059276623926
0.060000000000000005 0.9320814468834736 0.9289618795840945 0.9966531172680136
0.06999999999999999 0.9208727825474499 0.9164956724239848 0.9952467808730795
0.08 0.9096955039823539 0.9038577722863163 0.9935827629459727
0.09 0.8985494795459097 0.8910531012773827 0.9916572448828134
0.09999999999999999 0.8874345783310165 0.8780869957266404 0.9894667361035718
0.11 0.8763506701606203 0.8649651855039531 0.9870080721743724
0.12 0.8652976255826346 0.8516937760318304 0.9842784157166221
0.13 0.8542753158649004 0.8382792313189914 0.9812752583987354
0.14 0.8432836129901864 0.8247283572850111 0.9779964232443928
0.15000000000000002 0.8323223896512355 0.8110482850356865 0.9744400668778556
0.16 0.821391519245848 0.7972464539250709 0.9706046815007956
0.17 0.8104908758720073 0.7833305943264771 0.9664890964795767
0.18000000000000002 0.7996203343230479 0.7693087100802413 0.9620924794659352
0.19 0.7887797700828595 0.7551890606118241 0.9574143369986444
0.2 0.7779690593211338 0.740980142729545 0.9524545145486045
0.21000000000000002 0.7671880788886491 0.7266906721215158 0.9472131959795335
0.22 0.7564367063125944 0.7123295645785532 0.9416909024033875
0.23 0.7457148197919312 0.697905916975185 0.9358884904150278
0.24000000000000002 0.7350222981927949 0.6834289880450272 0.9298071496951581
0.25 0.7243590210439317 0.6689081789901414 0.923448399974538
0.26 0.713724868532175 0.6543530139666951 0.9168140873561234
0.27 0.7031197214979573 0.6397731204914803 0.9099063799952579
0.28 0.6925434614308605 0.6251782098156391 0.9027277631413364
0.29000000000000004 0.6819959704652 0.610578057313364 0.8952810335475725
0.3 0.6714771313756471 0.5959824829343968 0.8875692932586023
0.31 0.660986827572886 0.5814013317698652 0.8795959427886708
0.32 0.6505249430993062 0.5668444547813717 0.871364673706047
0.33 0.6400913626247295 0.5523216897433049 0.8628794606421178
0.34 0.6296859714421733 0.5378428424480828 0.8541445527462813
0.35000000000000003 0.619308655463645 0.5234176682234535 0.8451644646102955
0.36000000000000004 0.6089593012159741 0.5090558538101174 0.835943966688137
0.37 0.5986377958366751 0.4947669996467533 0.8264880752396385
0.38 0.5883440270698451 0.4805606026080999 0.8168020418282419
0.39 0.5780778832620943 0.4664460392400235 0.8068913424050542
0.4 0.567839253358509 0.452432549533551 0.7967616660130833
0.41000000000000003 0.5576280268986472 0.43852922127764854 0.7864189031469795
0.42000000000000004 0.5474440940125656 0.4247449750281253 0.7758691338048778
0.43 0.5372873454168797 0.4110885497274403 0.7651186152699686
0.44 0.5271576724108546 0.39756848900741193 0.7541737696602396
0.45 0.5170549668725267 0.38419312820391205 0.7430411712854311
0.46 0.5069791212548572 0.37097058210957257 0.7317275338506228
0.47000000000000003 0.4969300285819163 0.35790873348736785 0.7202396975460067
0.48000000000000004 0.48690758244509547 0.34501522236470583 0.7085846160623517
0.49 0.4769116769993535 0.3322974361243652 0.6967693435713792
0.5 0.46694220695948974 0.31976250040527815 0.6848010217097801
0.51 0.4569990675964475 0.3074172708228256 0.6726868666049229
0.52 0.4470821547336471 0.2952683255149852 0.6604341559794391
0.53 0.43719136474334636 0.2833219585173722 0.6480502163708028
0.54 0.42732659454303284 0.2715841739669904 0.6355424105008312
0.55 0.41748774159184193 0.26006068113133274 0.622918124828637
0.56 0.40767470388700355 0.24875689025641692 0.6101847573190747
0.5700000000000001 0.39788737996031653 0.23767790922438006 0.5973497054570692
0.5800000000000001 0.3881256688746535 0.22682854100843955 0.5844203545364958
0.59 0.3783894702204891 0.21621328191032055 0.5714040662503959
0.6 0.3686786841124569 0.20583632056274512 0.5583081676074322
0.61 0.3589932111859354 0.1957015376771983 0.5451399401974694
0.62 0.3493329525936577 0.18581250651499764 0.5319066098271404
0.63 0.339697810002349 0.17617249405768515 0.5186153365441682
0.64 0.3300876855893888 0.16678446285093748 0.5052732050671176
0.65 0.3205024820395035 0.15765107349457858 0.4918872156351891
0.66 0.31094210254147836 0.14877468774980301 0.4784642752904686
0.67 0.30140645078490025 0.14015737223353228 0.46501118960309207
0.68 0.2918954309569213 0.13180090266872968 0.45153465484761635
0.6900000000000001 0.28240894773905223 0.12370676865868918 0.4380412506369843
0.7000000000000001 0.2729469063039779 0.11587617895261351 0.424537433018433
0.7100000000000001 0.2635092123123952 0.10831006716933242 0.41102952803383874
0.72 0.25409577190988025 0.1010090979457167 0.3975237257451944
0.73 0.24470649172377565 0.09397367347618461 0.3840260747240901
0.74 0.235341278860106 0.08720394040976778 0.37054247700253407
0.75 0.22600004090050865 0.08069979707133407 0.35707868348068267
0.76 0.21668268589920014 0.07446090097397515 0.3436402897858394
0.77 0.20738912237995627 0.06848667658994893 0.3302327325754093
0.78 0.19811925933311686 0.0627763233482323 0.3168612862754573
0.79 0.18887300621261896 0.05732882382741215 0.30353106024518767
0.8 0.17965027293304495 0.05214295211347268 0.2902469963566723
0.81 0.17045096986669986 0.047217282292946104 0.27701386697812336
0.8200000000000001 0.16127500784070503 0.042550197052895596 0.2638362733482139
0.8300000000000001 0.15212229813412093 0.03813989636025616 0.25071864432806257
0.8400000000000001 0.1429927524750811 0.0339844061942141 0.23766523551699906
0.85 0.1338862830379599 0.030081587306470845 0.224680128717458
0.86 0.12480280244056031 0.026429143985510637 0.2117672317342234
0.87 0.11574222374130434 0.023024632802142952 0.19893027849201647
0.88 0.10670446043647226 0.01986547131511161 0.18617282945672875
0.89 0.0976894264574475 0.01694894671657016 0.1734982723432504
0.9 0.08869703616798034 0.014272224398824916 0.1609098230948239
0.91 0.07972720436147818 0.011832356424862177 0.14841052711713068
0.92 0.07077984625831711 0.00962628988662167 0.1360032607515097
0.93 0.061854877503165005 0.007650875136172939 0.12369073297059649
0.9400000000000001 0.052952214162342796 0.005902873876406304 0.11147548728196825
0.9500000000000001 0.04407177272116553 0.00437896709881701 0.0993599038214772
0.9600000000000001 0.03521347008137245 0.0030757628576175344 0.08734620162426368
0.97 0.026377223558495894 0.001989803870182417 0.07543644105565907
0.98 0.017562950879301858 0.0011175749352018564 0.06363252638364612
0.99 0.008770570179264022 0.0004555101614051208 0.051936208489850394
```python
plt.rcParams['figure.figsize'] = (8,8)
fig, ax = plt.subplots()
ax.plot(cBv, D1, 'b', label='GF')
ax.plot(cBv, D2, 'r', label='GF+resbias')
ax.set_ylabel('$D^{\\rm A}$', fontsize='x-large')
ax.set_xlabel('$c_{\\rm B}$', fontsize='x-large')
ax.legend(bbox_to_anchor=(0.5,0.5,0.5,0.3), ncol=1, shadow=True,
frameon=True, fontsize='x-large', framealpha=1.)
plt.tight_layout()
plt.show()
```
### Final "analytic" versions
We now produce the analytic (with numerical coefficients) version of our transport coefficients.
```python
num_SCGF, denom_SCGF = truncate_vec(-polydiv(L_SCGF,1)[0]), truncate_vec(divpoly)
```
```python
num_SCGFbc, denom_SCGFbc = \
truncate_vec(-polydiv(0.5*polymult(PsiB,PsiB),1)[0]), \
truncate_vec(polymult(polymult(divpoly, divpoly), WR))
```
```python
# check remainders (should be 0 for both)
if __TESTING__:
print(polydiv(L_SCGF,1)[1], polydiv(0.5*polymult(PsiB,PsiB),1)[1])
```
(-1.1102230246251565e-16, -1.4343655928804322e-16)
```python
def print_fraction(numer, denom, powstring='**'):
"""
Returns a string representation of our polynomial ratio
"""
def format_pow(n):
if n==0:
return ''
if n==1:
return '*c'
return '*c' + powstring +'{}'.format(n)
# first, "divide" through until lowest order is constant on both:
while np.isclose(numer[0], 0) and np.isclose(denom[0], 0):
numer, denom = numer[1:], denom[1:]
# second, scale everything by lowest order term in denominator
scale = denom[np.min(np.nonzero(denom))]
numer /= scale
denom /= scale
s = '('
for n, coeff in enumerate(numer):
if not np.isclose(coeff, 0):
s += '{:+.10g}'.format(coeff) + format_pow(n)
s += ')/('
for n, coeff in enumerate(denom):
if not np.isclose(coeff, 0):
s += '{:+.10g}'.format(coeff) + format_pow(n)
s += ')'
return s
```
```python
print(print_fraction(num_SCGF, denom_SCGF))
```
(+1-1*c)/(+1+0.1415926534*c)
```python
print(print_fraction(num_SCGF, denom_SCGF) + ' + ' +\
print_fraction(num_SCGFbc, denom_SCGFbc))
```
(+1-1*c)/(+1+0.1415926534*c) + (-0.01272990904*c+4.554518972*c^2-408.7789226*c^3+242.2968878*c^4+1388.598607*c^5-1268.72564*c^6-960.1143785*c^7+1429.546709*c^8-475.4912083*c^9+48.12615674*c^10)/(+1+359.2297602*c-130.6761861*c^2-597.9247855*c^3+453.718084*c^4-184.779244*c^5-160.9498551*c^6+288.3954999*c^7-5.855536997*c^8-20.27314331*c^9-1.884593043*c^10)
**Note:** both of these polynomials have two factors of $(1-c)$ in them; so we can simplify further...
```python
polydiv(polydiv(polydiv(num_SCGFbc,1)[0],1)[0],1)
```
(array([-1.26654243e-11, -7.84439180e-12, 1.27299090e-02, -4.51632925e+00,
3.95191745e+02, 9.56840065e+02, 2.91830024e+02, -3.31112738e+02,
4.81261567e+01]), -1.8954227076761754e-11)
```python
polydiv(polydiv(denom_SCGFbc,1)[0],1)
```
(array([-4.26325641e-12, 1.00000000e+00, 3.61229760e+02, 5.90783334e+02,
2.22412123e+02, 3.07758995e+02, 2.08326624e+02, -5.20556028e+01,
-2.40423294e+01, -1.88459304e+00]), -7.673861546209082e-12)
```python
SCGFbc_func = print_fraction(num_SCGF, denom_SCGF) + ' + ' +\
print_fraction(polydiv(polydiv(num_SCGFbc,1)[0],1)[0],
polydiv(polydiv(denom_SCGFbc,1)[0],1)[0])
print(SCGFbc_func)
```
(+1-1*c)/(+1+0.1415926534*c) + (-0.01272990905*c+4.529059154*c^2-399.7080744*c^3-561.6483202*c^4+665.0100411*c^5+622.9427624*c^6-379.2388949*c^7+48.12615674*c^8)/(+1+361.2297602*c+590.7833342*c^2+222.4121227*c^3+307.7589952*c^4+208.3266238*c^5-52.05560275*c^6-24.0423294*c^7-1.884593043*c^8)
# Assignment #1 - Linear Regression
<font color="blue"> Abdullah Al Raqibul Islam </font>
# INDEX
I. Introduction<br/>
II. Data<br/>
  II.I. Reading the data<br/>
  II.II. Preprocessing of the data<br/>
  II.III. Visualization of the data<br/>
    II.III.I. Correlation Heatmap<br/>
    II.III.II. Pie-chart<br/>
    II.III.III. Count Plots<br/>
    II.III.IV. Line Plots<br/>
    II.III.V. Linear Regression Plots<br/>
    II.III.VI. Box Plots<br/>
    II.III.VII. Preliminary Observation About Data<br/>
III. Method<br/>
  III.I. Least Squares<br/>
  III.II. Least Mean Squares<br/>
  III.III. Super Class Definition<br/>
  III.IV. Least Squares Implementation<br/>
  III.V. Least Mean Squares Implementation<br/>
IV. Usage Examples<br/>
  IV.I. Usage Example: Least Squares<br/>
  IV.II. Usage Example: Least Mean Squares<br/>
V. Preliminary Test<br/>
VI. Experiments<br/>
  VI.I. Plotting Functions<br/>
  VI.II. Data Partitioning<br/>
  **VI.III. Experiments on Least Squares**<br/>
    VI.III.I. Feature Scaling: Unscaled Data<br/>
      VI.III.I.I. Feature Selection: All Features<br/>
      VI.III.I.II. Feature Selection: Significant Features<br/>
    VI.III.II. Feature Scaling: Normalized Data<br/>
      VI.III.II.I. Feature Selection: All Features<br/>
      VI.III.II.II. Feature Selection: Significant Features<br/>
    VI.III.III. Feature Scaling: Standardized Data<br/>
      VI.III.III.I. Feature Selection: All Features<br/>
      VI.III.III.II. Feature Selection: Significant Features<br/>
  **VI.IV. Experiments on Least Mean Squares (LMS)**<br/>
    VI.IV.I. Feature Scaling: Unscaled Data<br/>
      VI.IV.I.I. Feature Selection: All Features<br/>
      VI.IV.I.II. Feature Selection: Significant Features<br/>
    VI.IV.II. Feature Scaling: Normalized Data<br/>
      VI.IV.II.I. Feature Selection: All Features<br/>
      VI.IV.II.II. Feature Selection: Significant Features<br/>
    VI.IV.III. Feature Scaling: Standardized Data<br/>
      VI.IV.III.I. Feature Selection: All Features<br/>
      VI.IV.III.II. Feature Selection: Significant Features<br/>
VII. Conclusions<br/>
VIII. Extra Credit<br/>
IX. References<br/>
# I. Introduction
The goal of this assignment is to perform linear regression on a dataset to predict the price of house sale [[1]](https://www.kaggle.com/harlfoxem/housesalesprediction) based on the different features (i.e. size, condition, facility).
The features are explained in the data section. I have implemented and used two linear regression models, namely `Least Squares` and `Least Mean Squares`. The details of the algorithms are discussed in `section III`. Sections `IV` and `V` contain the integrity testing of the implemented algorithms. In `section VI`, I discuss the performance evaluations in detail across the experiment domain. This section also contains the extra credit work listed in `section VIII`.
# II. Data
In this experiment I used the `House Sales in King County, USA` [[1]](https://www.kaggle.com/harlfoxem/housesalesprediction) dataset from `kaggle` [[2]](https://www.kaggle.com). This dataset contains house sale prices for King County, which includes Seattle. It includes homes sold between May 2014 and May 2015. The dataset has 21613 rows and 21 attributes. Here is the attribute list along with the data-types.
**Attribute Information:**
0. id (int64)
1. date (object)
2. price (float64)
3. bedrooms (int64)
4. bathrooms (float64)
5. sqft_living (int64)
6. sqft_lot (int64)
7. floors (float64)
8. waterfront (int64)
9. view (int64)
10. condition (int64)
11. grade (int64)
12. sqft_above (int64)
13. sqft_basement (int64)
14. yr_built (int64)
15. yr_renovated (int64)
16. zipcode (int64)
17. lat (float64)
18. long (float64)
19. sqft_living15 (int64)
20. sqft_lot15 (int64)
## II.I. Reading the data
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# reading the data from csv datafile; using read_csv method from Pandas library
df_r = pd.read_csv("data/regression/kc_house_data.csv")
# displaying all the columns of the top 10 rows in the jupyter notebook
pd.set_option('max_columns', 28)
df_r.head(10)
```
```python
df_r.describe()
```
## II.II. Preprocessing of the data
```python
# get the metadata; getting familiarized with columns and data-types
df_r.info()
```
From the "Non-Null Count" column above, we can observe all the rows contain same number of `non-null` counts. From this count we can sense that there is no null values in this dataset.
To confirm this, let's check it using library function.
```python
# checking columns with null values
df_r.isna().any()
```
There are no null values in any column, so we don't need to perform any missing-value handling here.
```python
# observe pairwise correlation of columns using library function
df_r.corr(method='pearson')
```
Pandas dataframe.corr() is used to find the pairwise correlation of all columns in the dataframe. Non-numeric columns are ignored (i.e. the "date" column in this case).
This correlation matrix gives a measure of the strength of the association between each pair of columns. I used Pearson's correlation coefficient to generate this matrix.
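For reference, Pearson's correlation coefficient between two columns $X$ and $Y$ is
$$
r_{XY} = \frac{\sum_{i=1}^{N}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{N}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{N}(y_i - \bar{y})^2}},
$$
which ranges from $-1$ (perfect negative correlation) to $+1$ (perfect positive correlation).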
## II.III. Visualization of the data
### II.III.I. Correlation Heatmap
```python
fig, ax = plt.subplots(figsize=(10, 10))
sns.heatmap(df_r.corr(method='pearson'), annot=True, linewidths=.1,fmt= '.1f')
plt.title("Correlation Heatmap for HouseSales Data", color = 'green', fontsize = 20)
plt.show()
```
#### Observation:
1. The most correlated attributes with `price` are
* bathrooms
* sqft_living
* view
* grade
* sqft_above
* sqft_living15
2. Among them all, `sqft_living` and `grade` have the highest correlation coefficient (`0.7`).
3. This actually makes sense, as this data is suitable for `regression analysis` and the house price mostly depends on the `size` and `quality`.
### II.III.II. Pie-chart
```python
# Plot number of records in the dataset for different condition
conditionList = []
conditionCount = []
for condition_name, subset in df_r.groupby('condition'):
conditionList.append(condition_name)
conditionCount.append(len(subset))
print(conditionList)
print(conditionCount)
plt.figure(figsize = (7, 7))
plt.pie(conditionCount, labels = conditionList)
plt.title("Pie Chart: Condition", color = 'green', fontsize = 20)
plt.show()
```
### II.III.III. Count Plots
```python
fig, ax = plt.subplots(figsize=(25,6))
sns.countplot(x="bathrooms", data=df_r, ax=ax)
plt.title("Count Plot: # of bathrooms", color = 'green', fontsize = 30)
```
```python
# Plot number of records in the dataset for different condition, view, waterfront, bedrooms, grade, zipcode
f, axes = plt.subplots(1, 3,figsize=(25,6))
sns.countplot(x= "condition", data=df_r, orient='v', ax=axes[0])
sns.countplot(x= "view", data=df_r, orient='v', ax=axes[1])
sns.countplot(x= "waterfront", data=df_r, orient='v', ax=axes[2])
f, axes = plt.subplots(1, 3,figsize=(25,6))
sns.countplot(x= "bedrooms", data=df_r, orient='v', ax=axes[0])
sns.countplot(x= "grade", data=df_r, orient='v', ax=axes[1])
sns.countplot(x= "zipcode", data=df_r, orient='v', ax=axes[2])
```
#### Observation:
1. The `pie-chart` and `count-plot` are used to compare parts to the whole. They help visualize which values occur most often in the dataset and can influence decision making.
2. From the pie-chart we can observe that people buy more `condition 3` type houses.
3. From the count-plot we can make similar observations to those from the pie-chart, i.e. most people prefer houses with `2.5 bathrooms`.
4. Similarly, the grouped count-plots show the preferred values for `condition`, `view`, `waterfront`, `bedrooms`, `grade`, and `zipcode`.
### II.III.IV. Line Plots
```python
# Plot the prices based on size
sns.set_theme(style="darkgrid")
sns.lineplot(x="sqft_living", y="price", data=df_r)
plt.title("Line Plot: Price Vs. Size", color = 'green', fontsize = 20)
```
### II.III.V. Linear Regression Plots
```python
# Plot linear regression on price based on size
g = sns.lmplot(data=df_r, x="sqft_living", y="price")
g.set_axis_labels("Living Space (sqft)", "Price (usd)")
plt.title("Linear Regression Plot: Price Vs. Size", color = 'green', fontsize = 20)
```
```python
# Plot linear regression on price based on size15
g = sns.lmplot(data=df_r, x="sqft_living15", y="price")
g.set_axis_labels("Living Space 15 (sqft)", "Price (usd)")
plt.title("Linear Regression Plot: Price Vs. Size_15", color = 'green', fontsize = 20)
```
```python
# Plot linear regression on price based on size_above
g = sns.lmplot(data=df_r, x="sqft_above", y="price")
g.set_axis_labels("Living Space Above (sqft)", "Price (usd)")
plt.title("Linear Regression Plot: Price Vs. Size_above", color = 'green', fontsize = 20)
```
```python
# Plot multiple linear regression on price based on size varying number of bedrooms
g = sns.lmplot(data=df_r, x="sqft_living", y="price", hue="bedrooms")
g.set_axis_labels("Living Space (sqft)", "Price (usd)")
plt.title("Multiple Linear Regression Plot: Price Vs. Size (w.r.t # of bedrooms)", color = 'green', fontsize = 20)
```
```python
# Plot multiple linear regression on price based on size varying waterfront
g = sns.lmplot(data=df_r, x="sqft_living", y="price", hue="waterfront")
g.set_axis_labels("Living Space (sqft)", "Price (usd)")
plt.title("Multiple Linear Regression Plot: Price Vs. Size (w.r.t waterfront)", color = 'green', fontsize = 20)
```
```python
# Plot multiple linear regression on price based on size varying view
g = sns.lmplot(data=df_r, x="sqft_living", y="price", hue="view")
g.set_axis_labels("Living Space (sqft)", "Price (usd)")
plt.title("Multiple Linear Regression Plot: Price Vs. Size (w.r.t waterfront)", color = 'green', fontsize = 20)
```
```python
# Plot multiple linear regression on price based on size varying condition
g = sns.lmplot(data=df_r, x="sqft_living", y="price", hue="condition")
g.set_axis_labels("Living Space (sqft)", "Price (usd)")
plt.title("Multiple Linear Regression Plot: Price Vs. Size (w.r.t condition)", color = 'green', fontsize = 20)
```
#### Observation:
1. The linear regression plots clearly show the correlation between price and size.
2. The multiple linear regression plots indicate -
    * Price is higher with a waterfront
    * Houses with view 4 have the highest price for the same living space size
    * Houses with condition 5 are priced higher
### II.III.VI. Box Plots
```python
# Plot price on different condition
sns.boxplot(x='condition', y='price', data=df_r)
plt.title("Box Plot: Price Vs. Condition", color = 'green', fontsize = 20)
```
```python
# Plot price on varying bedrooms
sns.boxplot(x='bedrooms', y='price', data=df_r)
plt.title("Box Plot: Price Vs. Bedrooms", color = 'green', fontsize = 20)
```
#### Observation:
1. The boxplots indicate -
* Price range varies most for houses with 8 bedrooms
### II.III.VII. Preliminary Observation About Data
We made individual observations on the different plots above. To summarize
* People mostly like houses with the following properties -
    * Three bedrooms
    * Condition 3
    * Seventh grade
    * 2.5 bathrooms
    * Without waterfront
    * View 0
* This makes sense, as we observed from the linear regression graphs that the price `with waterfront` is considerably higher.
# III. Method
As I mentioned before, in this experiment I used two linear regression algorithms on the data described in `section II`. Given the data, the goal of a linear regression algorithm is to find the best fit over all the data points. The equation of a line can be written as
$$
f(x; a, b) = a x + b.
$$
where we can replace `a` and `b` by the weight symbols $w_1$ and $w_0$,
$$
f(x; w) = w_1 x + w_0.
$$
Considering multiple input features as $x$, we can extend the input $x$ to an input vector with bias input $x_0 = 1$:
$$
\begin{align}
f(x; w) &= w_D x_D + \cdots + w_1 x_1 + w_0 \\
&= \sum_{i=0}^{D} w_i x_i \quad\text{where } x_0 = 1\\
&= w^\top x.
\end{align}
$$
Our goal here is to find the best value for $w$, so this becomes an optimization problem. To solve it, we have to define a cost function with respect to which we can judge whether a choice of $w$ is good or not. One possible choice is the sum of squared Euclidean distances between the target values and the predicted outputs. So we define our cost function like this:
$$
E(w) = \sum_{i=1}^N \Big( f(x_i; w) - t_i \Big)^2
$$
## III.I. Least Squares
Least squares is the simplest linear regression model. The parameter that gives the best fit here will be
$$
w^* = \arg\min_w \sum_{i=1}^{N} \Big( f(x_i; w) - t_i \Big)^2
$$
Here the error function is quadratic, so this problem can be solved analytically by setting the derivative with respect to $w$ to zero. Let the target values be collected in the vector $t$, and the input samples in the matrix $X$.
$$
\begin{align}
t &= [t_1, t_2, \cdots, t_N]^\top \\
\\
w &= [w_0, w_1, \cdots, w_D]^\top \\
\\
X &= \begin{bmatrix}
x_{10} & x_{11} & x_{12} & \dots & x_{1D} \\
x_{20} & x_{21} & x_{22} & \dots & x_{2D} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_{N0} & x_{N1} & x_{N2} & \dots & x_{ND}
\end{bmatrix}
\end{align}
$$
where the first column is all ones, $x_{*0} = [1, 1, \dots, 1]^\top$.
With this matrix, $f(x; w)$ can be written in matrix form as:
$$
f(x; w) = X w.
$$
Thus, the error function will become
$$
\begin{align}
E(w) &= \sum_{i=1}^N \Big(f(x_i; w) - t_i \Big)^2 = (X w - t)^\top (X w - t).
\end{align}
$$
Now, by doing the partial derivative w.r.t. $w$, we get -
$$
\begin{align}
\frac{\partial E(w)}{\partial w} &= 2 X^\top X w - 2 X^\top t \\
\end{align}
$$
By setting this to zero,
$$
\begin{align}
2 X^\top X w - 2 X^\top t &= 0\\
\\
w &= \big(X^\top X\big)^{-1} X^\top t
\end{align}
$$
So, given the feature matrix $X$ and the target vector $t$, least squares finds the weights $w$ for the best-fit line by this formula.
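As a quick numerical check of this closed-form solution (an illustrative snippet of my own, not part of the assignment data), we can compare the normal-equations solution against NumPy's built-in least-squares solver on a tiny synthetic dataset:
```python
# verify w = (X^T X)^{-1} X^T t against np.linalg.lstsq on toy data
import numpy as np
rng = np.random.default_rng(0)
X_toy = np.hstack((np.ones((5, 1)), rng.random((5, 2))))  # bias column + 2 features
t_toy = X_toy @ np.array([3., -2., 1.])                   # targets from known weights
w_normal = np.linalg.inv(X_toy.T @ X_toy) @ X_toy.T @ t_toy
w_lstsq = np.linalg.lstsq(X_toy, t_toy, rcond=None)[0]
print(np.allclose(w_normal, w_lstsq), w_normal)           # True [ 3. -2.  1.]
```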
## III.II. Least Mean Squares
In `Least Squares` we observed that all the training data is used at once to find the best fit. This is a computationally costly operation when dealing with a large number of features or entries. When the data is sufficiently large, we can consider `Least Mean Squares`, which learns *sequentially* instead of solving for all data at once as `Least Squares` does. During the *sequential* learning process, $w$ is updated for a single data point in each iteration.
`Least Mean Squares` starts with an initial guess $w$ and changes it as it reads more data until it converges.
$$
w^{(k+1)} = w^{(k)} - \alpha \nabla E_k
$$
Here $k$ indexes the update steps, $E_k$ is the error for the $k$'th sample, and $\alpha$ is a learning rate. In the $k$'th iteration with the sample $x_k$, the gradient of the sum-of-squares error is
$$
\begin{align}
\nabla E_k = \frac{\partial E}{\partial w^{(k)}} &= \frac{\partial }{\partial w^{(k)}}\Big( f(x_k; w^{(k)}) - t_k \Big)^2 \\
&= 2\Big( {w^{(k)}}^\top x_k - t_k \Big) x_k.
\end{align}
$$
By replacing $\nabla E_k$, we get:
$$
w^{(k+1)} = w^{(k)} - \alpha \Big( {w^{(k)}}^\top x_k - t_k \Big) x_k.
$$
So, given the feature matrix $X$ and the randomly initialized weight vector $w$; in each iteration of data points, `Least Mean Squares` update the $w$ by this formula.
`Least Mean Squares` solves the potential computational issue of `Least Squares`, but it is possible that the algorithm never converges. To deal with this, we usually run it for a fixed number of iterations.
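For intuition, here is a minimal sketch of this bounded sequential update (the `alpha` and `epochs` names here are illustrative; the full class implementation follows in section III.V):
```python
import numpy as np

# minimal LMS sketch: run the update rule for a fixed number of passes (epochs)
def lms_sketch(X1, t, alpha=0.01, epochs=10):
    # X1 already contains the bias column of ones; t is a 1d target vector
    w = np.zeros(X1.shape[1])
    for _ in range(epochs):                    # bound the iterations, since LMS may never converge
        for x_k, t_k in zip(X1, t):
            w -= alpha * (w @ x_k - t_k) * x_k # w <- w - alpha (w^T x_k - t_k) x_k
    return w
```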
## III.III. Super Class Definition
These are the super class definitions for a general linear model (i.e. `LinearModel`). This is the base model for the implementation of both models (i.e. `Least Squares` and `Least Mean Squares`). The abstract methods (i.e. `train` and `use`) provide a unified interface for child classes to override.
`LinearModel` also contains the implementations of two helper functions, `_check_matrix` and `add_ones`, which help with checking dimensional constraints and adding a bias column to the feature matrix, respectively.
```python
import numpy as np
from abc import ABC, abstractmethod
# Super class for machine learning models
class BaseModel(ABC):
""" Super class for ITCS Machine Learning Class"""
@abstractmethod
def train(self, X, T):
pass
@abstractmethod
def use(self, X):
pass
class LinearModel(BaseModel):
"""
Abstract class for a linear model
Attributes
==========
w ndarray
weight vector/matrix
"""
def __init__(self):
"""
        weight vector w is initialized as a 1x1 zero array
        """
        self.w = np.zeros([1,1])
# check if the matrix is 2-dimensional. if not, raise an exception
def _check_matrix(self, mat, name):
if len(mat.shape) != 2:
raise ValueError(''.join(["Wrong matrix ", name]))
# add a basis
def add_ones(self, X):
"""
add a column basis to X input matrix
"""
self._check_matrix(X, 'X')
return np.hstack((np.ones((X.shape[0], 1)), X))
####################################################
    #### abstract functions ############################
@abstractmethod
def train(self, X, T):
"""
train linear model
parameters
-----------
X 2d array
input data
T 2d array
target labels
"""
pass
@abstractmethod
def use(self, X):
"""
apply the learned model to input X
parameters
----------
X 2d array
input data
"""
pass
```
Inheriting from the super class, we can define two different classes for the Least Squares and Least Mean Squares algorithms.
## III.IV. Least Squares Implementation
`LinearRegress` class implements the least squares solution. Here, I implemented the `train` and `use` functions: `train()` updates the weights using the least squares solution and `use()` returns the predictions for the argument X using the trained weight w.
```python
# Linear Regression Class for least squares
class LinearRegress(LinearModel):
"""
LinearRegress class
attributes
===========
w nd.array (column vector/matrix)
weights
"""
def __init__(self):
LinearModel.__init__(self)
    # train least-squares model
    def train(self, X, T):
        # first create X1 by adding a column of 1's to X
        X1 = self.add_ones(X)
        # solve the normal equations for w*: w = (X1^T X1)^{-1} X1^T T
        self.w = np.linalg.inv(X1.transpose().dot(X1)).dot(X1.transpose()).dot(T)
    # apply the learned model to data X
    def use(self, X):
        # first create X1 by adding a column of 1's to X
        X1 = self.add_ones(X)
        return X1.dot(self.w)
```
## III.V. Least Mean Squares Implementation
For online learning, I have implemented the LMS algorithm here. The `train_step()` function updates the weights for a single input vector $x$ and one target label. The weight vector is (re)initialized with zeros if it has not been initialized yet or if its shape does not match the input vector $x$. As the name suggests, the `train()` function simply calls `train_step()` in a loop to learn incrementally over the batch data.
```python
# LMS class
class LMS(LinearModel):
"""
Lease Mean Squares. online learning algorithm
attributes
==========
w nd.array
weight matrix
alpha float
learning rate
"""
def __init__(self, alpha):
LinearModel.__init__(self)
self.alpha = alpha
# batch training by using train_step function
def train(self, X, T):
for x, t in zip(X, T):
self.train_step(x, t)
    # train LMS model one step
    # here x is a 1d vector
    def train_step(self, x, t):
        X1 = np.hstack((np.ones(1), x))   # prepend the bias input x_0 = 1
        N = X1.shape[0]
        # (re)initialize the weights with zeros if the shape does not match
        if len(self.w) != N:
            self.w = np.zeros(N)
        y = self.w @ X1
        self.w -= self.alpha * (y - t) * X1
    # apply the current model to data X
    def use(self, X):
        N = X.shape[0]
        X1 = np.hstack((np.ones((N, 1)), X.reshape((N, -1))))
        y = X1 @ self.w
        return y.reshape(-1, 1)
```
# IV. Usage Examples
In this section I use some sample linear data to test the integrity of my implementations.
```python
# HERE follow are for my code tests.
import matplotlib.pyplot as plt
%matplotlib inline
```
## IV.I. Usage Example: Least Squares
First prepare the sample data and put it in $X$. Then call the `LinearRegress` model to train on and predict the data. The line plot indicates the integrity of the `LinearRegress` implementation.
```python
X = np.linspace(0,10, 11).reshape((-1, 1))
T = -2 * X + 3.2
ls = LinearRegress()
ls.train(X, T)
plt.plot(ls.use(X))
```
## IV.II. Usage Example: Least Mean Squares
Here I use the same sample data $X$ and call the `LMS` model to train on and predict the data. In the first plot, I draw the prediction line after each `train_step`. The later one shows the final prediction line after calling the `train` function. The line plots indicate the integrity of the `LMS` implementation.
```python
lms = LMS(0.1)
for x, t in zip(X, T):
lms.train_step(x, t)
plt.plot(lms.use(X))
```
```python
lms = LMS(0.1)
lms.train(X, T)
plt.plot(lms.use(X))
```
# V. Preliminary Test
This section contains the integrity tests of the implementations of the `LinearRegress` and `LMS` classes. As all the tests pass here, we gain confidence in running my implementations of the linear regression models (i.e. `LinearRegress` and `LMS`) on the `House Sales in King County, USA` data described in `section II`.
```python
##################### WHAT I WILL RELEASE ############
# Self-Test code for accuracy of your model - DO NOT MODIFY THIS
# Preliminary test data
X = np.array([[2,5],
[6,2],
[1,9],
[4,5],
[6,3],
[7,4],
[8,3]])
T = X[:,0, None] * 3 - 2 * X[:, 1, None] + 3
N = X.shape[0]
def rmse(T, Y):
return np.sqrt(np.sum((T-Y)**2))
model_names = ['LS', 'LMS_All', 'LMS_1STEP']
models = [LinearRegress(), LMS(0.02), LMS(0.02)]
#train
for i, model in enumerate(models):
print("training ", model_names[i], "...")
if i == len(models) -1:
# train only one step for LMS2
model.train_step(X[0], T[0])
else:
model.train(X, T)
def check(a, b, eps=np.finfo(float).eps):
if abs(a-b) > eps:
print("failed.", a, b)
else:
print("passed.")
errors = [1.19e-13, 2.8753214702, 38.0584918251]
for i, model in enumerate(models):
print("---- Testing ", model_names[i], "...", end=" ")
# rmse test
err = rmse(T, model.use(X))
if check(err, errors[i], eps=1e-10):
print ("check your weights: ", model.w)
print ("oracle: ", )
```
# VI. Experiments
This section contains the experiment results of the `LinearRegress` and `LMS` models (described in `section III`) w.r.t. the `House Sales in King County, USA` data (described in `section II`). Basically, we predict the `price` of a house from the other features of the data (i.e. `sqft_living`, `view`, `grade`, etc.). My experiment domain is listed in `Table 1` below.
<table>
<thead>
<tr>
<th>Algorithms</th>
<th>Feature Selection</th>
<th>Feature Scaling</th>
</tr>
</thead>
<tbody>
<tr>
<td align="middle">Least Squares<br><br>
Least Mean Squares</td>
<td align="middle">All Features<br><br>
Significant Features</td>
<td align="middle">Unscaled<br><br>
Normalized<br><br>
        Standardized</td>
</tr>
<tr>
<td colspan="3" align="middle">Table 1: Summary of the experiment domain.</td>
</tr>
</tbody>
</table>
**Feature Scaling**
Different scales of input variables can cause problems for regression models. Having features on a similar scale helps gradient descent converge more quickly towards the minima.
* Otherwise the fit can be dominated by a few large-scale inputs
* So we "standardize" the range of the independent input features
There are different ways of doing the scaling. In this experiment I considered the following two feature scaling techniques (their formulas are given after this list),
* Min-Max Normalization (I used `MinMaxScaler` from the `sklearn.preprocessing` library)
* Standardization (I used `StandardScaler` from the `sklearn.preprocessing` library)
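For reference, the two scalings compute, per feature,
$$
x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}} \qquad \text{(min-max normalization, mapping each feature to } [0, 1]\text{)}
$$
$$
z = \frac{x - \mu}{\sigma} \qquad \text{(standardization, giving zero mean and unit variance).}
$$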
Before jumping to the experiment results, here is the detailed descriptions of the experiment subsections:
**VI.I. Plotting Functions**
Data visualization is crucial to understanding the performance of ML models. In this section, I put general graph plotting functions that help visualize different performance metrics. In particular, the `residual plotting` is a requirement (the third one) for the `Extra Credit` (please check `section VIII`).
A residual is a measure of how far away a point is vertically from the prediction line, $r_i = t_i - \hat{t}_i$; basically, it is the error between a predicted value and the corresponding observed actual value.
**VI.II. Data Partitioning**
First we randomly shuffle the whole dataset, which gives two benefits,
1. Improve the ML model quality
2. Improve the predictive performance
As we know, to test the performance of a linear regression model we have to partition the data into `train-data` and `test-data`. The model is trained on the `train-data` and its performance is then evaluated on the `test-data`. As we have 21613 data points in total, I considered 19000 for training and the rest for testing.
Besides partitioning, feature selection is also very crucial for regression models. In feature selection we determine which features are important for predicting the output. As discussed in `section II.III.I`, the most correlated attributes with the target attribute (i.e. `price`) are
1. bathrooms
2. sqft_living
3. view
4. grade
5. sqft_above
6. sqft_living15
So considering this, I do two types of experiments,
1. Selecting all the features
2. Selecting the most significant features (I considered the features that have `correlation_val >= 0.5` as the significant features).
I believe this covers the first two requirements of `Extra Credit` listed in `section VIII`.
## VI.I. Plotting Functions
In this section, I put general graph plotting functions (snapshot of `Prediction Vs. Actual`, `feature scaling` comparator plotting, `residual plotting`) that help visualize different performance metrics.
```python
"""
Prediction comparator plotting
attributes
==========
predicted array of predicted values
actual array of actual values
feature_type feature selection
"""
def predict_comp_plotting(predicted, actual, feature_type):
fig = plt.figure(figsize=(10, 5), dpi = 150)
plt.plot(predicted, label="Prediction")
plt.plot(actual, label="Actual")
plt.legend(bbox_to_anchor=(0.83, 0.98), loc='upper left', borderaxespad=0.)
plt.xlabel('Test data index')
plt.ylabel('Prediction Vs. Actual')
plt.title('Snapshot of Prediction Vs. Actual plotting for ' + feature_type, color = 'green', fontsize = 20)
plt.show()
```
```python
"""
Feature scaling comparator plotting
attributes
==========
predicted array of predicted values
actual array of actual values
columns xticks of the box plot (feature title)
scale_type feature scaling
"""
def feature_scal_comp_plotting(unscaled, scaled, columns, scale_type):
f, axes = plt.subplots(1, 2, figsize=(15,6))
    sns.boxplot(data=pd.DataFrame(unscaled), orient='v', ax=axes[0])
axes[0].set_xticklabels(columns, rotation=90)
axes[0].set_title('Unscaled Data')
sns.boxplot(data=pd.DataFrame(scaled), orient='v', ax=axes[1])
axes[1].set_xticklabels(columns, rotation=90)
axes[1].set_title(scale_type + ' Data')
f.suptitle('Comparing Unscaled Vs. ' + scale_type + ' data', color = 'green', fontsize = 20)
```
```python
"""
Residual plotting
attributes
==========
predicted array of predicted values
actual array of actual values
feature_type feature selection
"""
def residual_plotting(predicted, actual, feature_type):
diff = actual - predicted
sample_count = np.arange(len(actual))
fig=plt.figure(figsize=(10, 5), dpi= 144, facecolor='w', edgecolor='k')
plt.scatter(sample_count, diff, marker='.')
    plt.axhline(y=0, color='r', linestyle='-')  # zero-residual reference line
plt.title('Residual Plot for ' + feature_type, color = 'green', fontsize = 20)
```
## VI.II. Data Partitioning
First we randomly shuffle the whole dataset, and then partition the data into `train-data` and `test-data` samples. As we have 21613 data points in total, I considered 19000 for training and the rest for testing.
```python
# randomly shuffle all data (reassign, since sample() returns a new frame)
df_r = df_r.sample(frac=1).reset_index(drop=True)
```
```python
# constants
# list of all features
all_features = ['long', 'lat',
'yr_renovated', 'yr_built',
'sqft_basement', 'sqft_above',
'sqft_lot', 'sqft_lot15',
'sqft_living', 'sqft_living15',
'condition', 'view', 'grade',
'floors', 'waterfront',
'zipcode',
'bedrooms', 'bathrooms']
# list of significant features
sig_features = ['sqft_living', 'sqft_living15', 'sqft_above', 'view', 'grade', 'bathrooms']
# list of target features
target_features = ['price']
# feature scaling type
scale_type_norm = 'Normalized'
scale_type_stand = 'Standardized'
# feature selection type
feature_type_all = "All Features"
feature_type_sig = "Significant Features"
# training data partition threshold
train_data_th = 19000
snapshot_th = 100
```
### VI.II.I. Unscaled Data
```python
# partitioning unscaled-data of all possible features
X_All = df_r[all_features].copy()
X_All_Train = np.array(X_All.iloc[:train_data_th])
X_All_Test = np.array(X_All.iloc[train_data_th:])
```
```python
# partitioning unscaled-data of most significant co-related features (i.e. correlation_val >= 0.5)
X = df_r[sig_features].copy()
X_Train = np.array(X.iloc[:train_data_th])
X_Test = np.array(X.iloc[train_data_th:])
```
```python
# partitioning unscaled-data of target features
T = df_r[target_features].copy()
T_Train = np.array(T.iloc[:train_data_th])
T_Test = np.array(T.iloc[train_data_th:])
```
### VI.II.II. Normalized Data
```python
# partitioning normalized-data of all possible features
# data normalization with sklearn
from sklearn.preprocessing import MinMaxScaler
# normalize all possible feature data
norm = MinMaxScaler().fit(X_All_Train)
X_All_Train_Norm = norm.transform(X_All_Train)
X_All_Test_Norm = norm.transform(X_All_Test)
feature_scal_comp_plotting(X_All_Train, X_All_Train_Norm, all_features, scale_type_norm)
```
```python
# partitioning normalized-data of most significant co-related features (i.e. correlation_val >= 0.5)
# data normalization with sklearn
from sklearn.preprocessing import MinMaxScaler
# normalize significant feature data
norm = MinMaxScaler().fit(X_Train)
X_Train_Norm = norm.transform(X_Train)
X_Test_Norm = norm.transform(X_Test)
feature_scal_comp_plotting(X_Train, X_Train_Norm, sig_features, scale_type_norm)
```
### VI.II.III. Standardized Data
```python
# partitioning standardized-data of all possible features
# data standardization with sklearn
from sklearn.preprocessing import StandardScaler
# define standard scaler
scaler = StandardScaler()
# standardize all possible feature data (fit on training data only to avoid test-set leakage)
X_All_Train_Stand = scaler.fit_transform(X_All_Train)
X_All_Test_Stand = scaler.transform(X_All_Test)
feature_scal_comp_plotting(X_All_Train, X_All_Train_Stand, all_features, scale_type_stand)
```
```python
# partitioning standardized-data of most significant co-related features (i.e. correlation_val >= 0.5)
# data standardization with sklearn
from sklearn.preprocessing import StandardScaler
# define standard scaler
scaler = StandardScaler()
# standardize significant feature data (fit on training data only to avoid test-set leakage)
X_Train_Stand = scaler.fit_transform(X_Train)
X_Test_Stand = scaler.transform(X_Test)
feature_scal_comp_plotting(X_Train, X_Train_Stand, sig_features, scale_type_stand)
```
## VI.III. Experiments on Least Squares
This section contains the experiment results for the `Least Squares` algorithm.
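The `LinearRegress` class and the `rmse` helper used in these cells are defined earlier in the notebook. For readers jumping in here, a minimal sketch of what such helpers could look like is shown below (the class name, the `train`/`use` methods, and the pseudo-inverse solution are assumptions based on how they are called in this section, not the notebook's exact implementation).
```python
import numpy as np

class LinearRegress:
    """Hypothetical sketch of an ordinary-least-squares model with the
    train/use interface assumed by the cells below."""

    def train(self, X, T):
        X1 = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend a bias column
        # solve the normal equations via the pseudo-inverse for numerical stability
        self.w = np.linalg.pinv(X1.T @ X1) @ X1.T @ T

    def use(self, X):
        X1 = np.hstack([np.ones((X.shape[0], 1)), X])
        return X1 @ self.w

def rmse(actual, predicted):
    # root-mean-square error, as printed in the experiment cells
    return np.sqrt(np.mean((actual - predicted) ** 2))
```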
### VI.III.I. Feature Scaling: Unscaled Data
Here is the `Least Squares` algorithm's performance evaluation on unscaled data.
#### VI.III.I.I. Feature Selection: All Features
```python
lr = LinearRegress()
lr.train(X_All_Train, T_Train)
T_Predicted = lr.use(X_All_Test)
print("rmse value", rmse(T_Test, T_Predicted))
predict_comp_plotting(T_Predicted[:snapshot_th], T_Test[:snapshot_th], feature_type_all)
residual_plotting(T_Predicted, T_Test, feature_type_all)
```
#### VI.III.I.II. Feature Selection: Significant Features
```python
lr = LinearRegress()
lr.train(X_Train, T_Train)
T_Predicted = lr.use(X_Test)
print("rmse value", rmse(T_Test, T_Predicted))
predict_comp_plotting(T_Predicted[:snapshot_th], T_Test[:snapshot_th], feature_type_sig)
residual_plotting(T_Predicted, T_Test, feature_type_sig)
```
**Observations:**
From the rmse values (`464727090` for all features vs. `12318249` for significant features) and the residual plots we can see the importance of removing `insignificant features`: the rmse improves by roughly `38x` when we consider only the significant features.
From the residual plot, we can observe that the data points become more tightly clustered around the reference line, which supports the evidence from the change in rmse.
### VI.III.II. Feature Scaling: Normalized Data
Here is the `Least Squares` algorithm's performance evaluation on normalized data.
#### VI.III.II.I. Feature Selection: All Features
```python
lr = LinearRegress()
lr.train(X_All_Train_Norm, T_Train)
T_Predicted = lr.use(X_All_Test_Norm)
print("rmse value", rmse(T_Test, T_Predicted))
predict_comp_plotting(T_Predicted[:snapshot_th], T_Test[:snapshot_th], feature_type_all)
residual_plotting(T_Predicted, T_Test, feature_type_all)
```
#### VI.III.II.II. Feature Selection: Significant Features
```python
lr = LinearRegress()
lr.train(X_Train_Norm, T_Train)
T_Predicted = lr.use(X_Test_Norm)
print("rmse value", rmse(T_Test, T_Predicted))
predict_comp_plotting(T_Predicted[:snapshot_th], T_Test[:snapshot_th], feature_type_sig)
residual_plotting(T_Predicted, T_Test, feature_type_sig)
```
**Observations:**
From the rmse values (`40098628` for all features vs. `12318249` for significant features) we again see the benefit of removing `insignificant features`; the residual plot supports this as well.
However, the performance gap is smaller than in the unscaled experiment (`38x` vs. `3.25x`). I believe this change occurs because we `normalized` the input features. Interestingly, `normalization` improves the all-features result by about `11.5x`, but has no such effect on the significant-feature data. I believe this is because the impact of normalization is dominated by the significant features.
### VI.III.III. Feature Scaling: Standardized Data
#### VI.III.III.I. Feature Selection: All Features
```python
lr = LinearRegress()
lr.train(X_All_Train_Stand, T_Train)
T_Predicted = lr.use(X_All_Test_Stand)
print("rmse value", rmse(T_Test, T_Predicted))
predict_comp_plotting(T_Predicted[:snapshot_th], T_Test[:snapshot_th], feature_type_all)
residual_plotting(T_Predicted, T_Test, feature_type_all)
```
#### VI.III.III.II. Feature Selection: Significant Features
```python
lr = LinearRegress()
lr.train(X_Train_Stand, T_Train)
T_Predicted = lr.use(X_Test_Stand)
print("rmse value", rmse(T_Test, T_Predicted))
predict_comp_plotting(T_Predicted[:snapshot_th], T_Test[:snapshot_th], feature_type_sig)
residual_plotting(T_Predicted, T_Test, feature_type_sig)
```
**Observations:**
From the rmse values (`11513128` for all features vs. `12533302` for significant features) the pattern is reversed: on standardized data, selecting only the significant features slightly degrades performance. I believe this occurs because we `standardized` the input features, and here the benefit of standardization is dominated by the feature selection.
## VI.IV. Experiments on Least Mean Squares (LMS)
### VI.IV.I. Feature Scaling: Unscaled Data
#### VI.IV.I.I. Feature Selection: All Features
```python
lms = LMS(0.02)
lms.train(X_All_Train, T_Train)
T_Predicted = lms.use(X_All_Test)
print("rmse value", rmse(T_Test, T_Predicted))
```
#### VI.IV.I.II. Feature Selection: Significant Features
```python
lms = LMS(0.02)
lms.train(X_Train, T_Train)
T_Predicted = lms.use(X_Test)
print("rmse value", rmse(T_Test, T_Predicted))
```
We got an `overflow encountered in multiply` warning while running `Least Mean Squares` on the unscaled data.
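For context, here is a minimal sketch of a least-mean-squares (Widrow-Hoff) learner like the `LMS(0.02)` helper used above; the class name and `train`/`use` signatures are assumptions based on how it is called here, and the actual implementation lives earlier in the notebook. It also illustrates why unscaled inputs overflow: the update `w += lr * x * error` scales with the feature magnitudes, so with `sqft`-sized inputs and a fixed learning rate the weights diverge.
```python
import numpy as np

class LMS:
    """Hypothetical sketch of a least-mean-squares (Widrow-Hoff) learner."""

    def __init__(self, learning_rate):
        self.lr = learning_rate

    def train(self, X, T, epochs=1):
        X1 = np.hstack([np.ones((X.shape[0], 1)), X])  # bias column
        self.w = np.zeros((X1.shape[1], T.shape[1]))
        for _ in range(epochs):
            for x, t in zip(X1, T):
                x = x.reshape(-1, 1)
                error = t - (x.T @ self.w)
                # stochastic update; diverges when features are unscaled
                self.w += self.lr * x @ error

    def use(self, X):
        X1 = np.hstack([np.ones((X.shape[0], 1)), X])
        return X1 @ self.w
```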
### VI.IV.II. Feature Scaling: Normalized Data
#### VI.IV.II.I. Feature Selection: All Features
```python
lms = LMS(0.02)
lms.train(X_All_Train_Norm, T_Train)
T_Predicted = lms.use(X_All_Test_Norm)
print("rmse value", rmse(T_Test, T_Predicted))
predict_comp_plotting(T_Predicted[:snapshot_th], T_Test[:snapshot_th], feature_type_all)
residual_plotting(T_Predicted, T_Test, feature_type_all)
```
#### VI.IV.II.II. Feature Selection: Significant Features
```python
lms = LMS(0.02)
lms.train(X_Train_Norm, T_Train)
T_Predicted = lms.use(X_Test_Norm)
print("rmse value", rmse(T_Test, T_Predicted))
predict_comp_plotting(T_Predicted[:snapshot_th], T_Test[:snapshot_th], feature_type_sig)
residual_plotting(T_Predicted, T_Test, feature_type_sig)
```
### VI.IV.III. Feature Scaling: Standarized Data
#### VI.IV.III.I. Feature Selection: All Features
```python
lms = LMS(0.02)
lms.train(X_All_Train_Stand, T_Train)
T_Predicted = lms.use(X_All_Test_Stand)
print("rmse value", rmse(T_Test, T_Predicted))
predict_comp_plotting(T_Predicted[:snapshot_th], T_Test[:snapshot_th], feature_type_all)
residual_plotting(T_Predicted, T_Test, feature_type_all)
```
#### VI.IV.III.II. Feature Selection: Significant Features
```python
lms = LMS(0.02)
lms.train(X_Train_Stand, T_Train)
T_Predicted = lms.use(X_Test_Stand)
print("rmse value", rmse(T_Test, T_Predicted))
predict_comp_plotting(T_Predicted[:snapshot_th], T_Test[:snapshot_th], feature_type_sig)
residual_plotting(T_Predicted, T_Test, feature_type_sig)
```
**Observations**
* Normalization works better than standardization for `Least Mean Squares`.
* Feature scaling makes an important difference.
## VI.V. Experiment Results
In this section I would like to give a recap of the overall experiment results. `Table-2` below gives the overall rmse values and `Table-3` presents the comparative performance (w.r.t. the rmse value).
<table>
<tr>
<th>Table 2: Summary of the RMSE value within the experiment domain.</th>
<th>Table 3: Comparison of RMSE value within the experiment domain.</th>
</tr>
<tr>
<td>
<table>
<tr>
<th>Algorithms</th>
<th>Feature Selection</th>
<th colspan="3" align="middle">Feature Scaling</th>
</tr>
<tr>
<th></th>
<th></th>
<th>Unscaled</th>
<th>Normalized</th>
<th>Standardized</th>
</tr>
<tr>
<th rowspan="2">Least Squares</th>
<th>All Features</th>
<td>464727090</td>
<td>40098628</td>
<td>11513129</td>
</tr>
<tr>
<th>Significant Features</th>
<td>12318249</td>
<td>12318249</td>
<td>12533303</td>
</tr>
<tr>
<th rowspan="2">Least Mean Squares</th>
<th>All Features</th>
<td>N/A</td>
<td>11021941</td>
<td>3.25e+39</td>
</tr>
<tr>
<th>Significant Features</th>
<td>N/A</td>
<td>12703794</td>
<td>13333369</td>
</tr>
<tr>
<td colspan="5" align="middle"></td>
</tr>
</table>
</td>
<td>
<table>
<tr>
<th>Algorithms</th>
<th>Feature Selection</th>
<th colspan="3" align="middle">Feature Scaling</th>
</tr>
<tr>
<th></th>
<th></th>
<th>Unscaled</th>
<th>Normalized</th>
<th>Standardized</th>
</tr>
<tr>
<th rowspan="2">Least Squares</th>
<th>All Features</th>
<td>37.70 X</td>
<td>3.25 X</td>
<td>0.94 X</td>
</tr>
<tr>
<th>Significant Features</th>
<td>X</td>
<td>X</td>
<td>1.02 X</td>
</tr>
<tr>
<th rowspan="2">Least Mean Squares</th>
<th>All Features</th>
<td>N/A</td>
<td>X</td>
<td>2.95e32 X</td>
</tr>
<tr>
<th>Significant Features</th>
<td>N/A</td>
<td>1.15 X</td>
<td>1.20 X</td>
</tr>
<tr>
<td colspan="5" align="middle"></td>
</tr>
</table>
</td>
</tr>
</table>
The above tables indicate that standardization with all the features gives the best result for `Least Squares`, while normalization with all the features gives the best result for `Least Mean Squares`. They also indicate that feature scaling improves the performance of the `Least Squares` algorithm; as I wasn't able to get a result for `Least Mean Squares` on unscaled data, the same comparison could not be made there. It was surprising that selecting significant features degrades rather than improves performance for `Least Mean Squares`. Also, the rmse for `Least Mean Squares` on standardized data with all features is extremely high, even though its residual plot looks reasonable. Due to time constraints, I did not investigate these issues further and instead leave them as future work for this assignment.
# VII. Conclusions
In this assignment I performed linear regression on the `House Sales in King County, USA` dataset using the `Least Squares` and `Least Mean Squares` methods. `Least Mean Squares` is the more delicate of the two (I got an overflow on the unscaled data), since its iterative updates are sensitive to feature magnitudes in a way the closed-form `Least Squares` solution is not.
# VIII. Extra Credit
If you want to work more for an extra credit, place your work here for additional analysis: weight and residual analysis.
Try to answer to the following questions:
1. what are the most and least significant features for your data? (covered in `section VI. Experiments`)
2. what are the consequences if you remove those features from the model? (covered in `section VI., VI.III.II., VI.III.III., VI.IV.II., VI.IV.III.`)
3. produce residual plots and observe the patterns for the goodness of fit (covered in `section VI., VI.III.II., VI.III.III., VI.IV.II., VI.IV.III.`)
# IX. References
1. harlfoxem (2016). House Sales in King County, USA, Version 1. Retrieved September 20, 2020 from https://www.kaggle.com/harlfoxem/housesalesprediction.
2. “Kaggle: Your Machine Learning and Data Science Community,” https://www.kaggle.com, accessed Oct. 1, 2020.
```python
import numpy as np
##################################################
##### Matplotlib boilerplate for consistency #####
##################################################
from ipywidgets import interact
from ipywidgets import FloatSlider
from matplotlib import pyplot as plt
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('svg')
global_fig_width = 8
global_fig_height = global_fig_width / 1.61803399
font_size = 12
plt.rcParams['axes.axisbelow'] = True
plt.rcParams['axes.edgecolor'] = '0.8'
plt.rcParams['axes.grid'] = True
plt.rcParams['axes.labelpad'] = 8
plt.rcParams['axes.linewidth'] = 2
plt.rcParams['axes.titlepad'] = 16.0
plt.rcParams['axes.titlesize'] = font_size * 1.4
plt.rcParams['figure.figsize'] = (global_fig_width, global_fig_height)
plt.rcParams['font.sans-serif'] = ['Computer Modern Sans Serif', 'DejaVu Sans', 'sans-serif']
plt.rcParams['font.size'] = font_size
plt.rcParams['grid.color'] = '0.8'
plt.rcParams['grid.linestyle'] = 'dashed'
plt.rcParams['grid.linewidth'] = 2
plt.rcParams['lines.dash_capstyle'] = 'round'
plt.rcParams['lines.dashed_pattern'] = [1, 4]
plt.rcParams['xtick.labelsize'] = font_size
plt.rcParams['xtick.major.pad'] = 4
plt.rcParams['xtick.major.size'] = 0
plt.rcParams['ytick.labelsize'] = font_size
plt.rcParams['ytick.major.pad'] = 4
plt.rcParams['ytick.major.size'] = 0
##################################################
```
# Course Structure
- Lecture 1: Introduction to Bayesian Inference and Pints
- **Lecture 2: Maximum Likelihood Estimation**
- **The Likelihood function**
- **Maximum Likelihood Estimation**
- **Maximum a Posteriori Esimation**
- Lecture 3: MCMC sampling
- Lecture 4: Hierarchical models
# Likelihood
- Recall the likelihood appears in Bayes' Theorem
$$P(\theta | data) = \frac{\color{red}{{P(data|\theta)}} P(\theta)}{P(data)}$$
- Remember that the likelihood is not a probability distribution, because it is $\theta$ that varies (the data are fixed)
- Most important choice, derived from the statistical model of the underlying process
- Encapsulates many subjective judgements about analysis.
# Equivalence relation
- A notation often seen in the literature is
$$\mathcal{L}(\theta | data) = P(data | \theta)$$
Therefore, a likelihood of $\theta$ for a particular data sample is equivalent to the probability of that data sample for that value of $\theta$. We call the above an *equivalence relation*
# Example: frequency of lift malfunctioning
- Imagine we want to create a model for the frequency a lift (elevator) breaks down in a given year, $X$.
- Assume a range of unpredictable and uncorrelated factors (temperature, lift usage, etc.) affect the functioning of the lift.
- therefore, $X ∼ \text{Poisson}(\theta)$, where $\theta$ is the mean number of times the lift breaks in one year.
 - since we don’t a priori know the true value of $\theta$, our model defines a collection of probability models, one for each value of $\theta$.
- We call this collection of models the Likelihood.
```python
import math
from scipy.stats import poisson
def lift_likelihood(theta, data):
x = np.arange(0, 20)
plt.plot(x, poisson.pmf(x, theta), 'bo', ms=8, label='poisson pmf')
plt.vlines(x, 0, poisson.pmf(x, theta), colors='b', lw=5, alpha=0.5)
if data is not None:
plt.vlines([data], 0, poisson.pmf(data, theta), colors='r', lw=5, alpha=0.9)
plt.ylim(0,0.25)
plt.ylabel(r'$P(X | \theta=%s)$'%str(theta))
plt.xlabel('X')
plt.show()
def lift_likelihood_no_data(theta):
lift_likelihood(theta,None)
def lift_likelihood_five_data(theta):
lift_likelihood(theta,5)
def lift_likelihood_w_theta(k):
theta = np.linspace(0, 20, 100)
plt.plot(theta, np.exp(-theta)*theta**k/(math.factorial(k)))
plt.ylabel(r'$P(k=%s | \theta)$'%str(k))
plt.xlabel(r'mean number of breakdowns $\theta$')
plt.show()
```
```python
# Display our collection of models
widget = FloatSlider(value=5.0, min=0.0, max=15.0, step=1.0, continuous_update=False)
interact(lift_likelihood_no_data, theta=widget, continuous_update=False);
```
interactive(children=(FloatSlider(value=5.0, continuous_update=False, description='theta', max=15.0, step=1.0)…
To calculate the likelihood:
- fix the data (say number of breakdowns is measured at 5)
- find the corresponding probability for each of the models
```python
interact(lift_likelihood_five_data, theta=widget, continuous_update=False);
```
interactive(children=(FloatSlider(value=5.0, continuous_update=False, description='theta', max=15.0, step=1.0)…
Using the equation for a Poisson distribution, and $k=5$ as the number of breakdowns, the likelihood function is:
$$\mathcal{L}(\theta | k) = P(k | \theta) = \exp(-\theta)\frac{\theta^k}{k!}$$
```python
lift_likelihood_w_theta(5)
```
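As a quick sanity check, we can maximise this likelihood analytically: setting $\frac{\partial}{\partial\theta} \log P(k|\theta) = \frac{k}{\theta} - 1 = 0$ gives $\hat\theta = k = 5$, matching the peak of the curve above. A small sketch using `sympy` (assuming it is available) confirms this:
```python
import sympy as sm

theta_s, k_s = sm.symbols('theta k', positive=True)
log_lik = -theta_s + k_s * sm.log(theta_s) - sm.log(sm.factorial(k_s))
# solve d(log-likelihood)/d(theta) = 0 for theta
print(sm.solve(sm.Eq(sm.diff(log_lik, theta_s), 0), theta_s))  # [k]
```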
# Example: ODE-based model
- Models can often take the form of differential equations evolving in time
- Likelihood is often straightforward to derive and relatively cheap to calculate due to *independent* (in time) measurement noise
- Lets consider the case of an ordinary differential equation (ODE) model, the reversible reaction model in the previous lecture:
$$\dot{y}(t) = k_1 (1 - y) - k_2 y,$$
where $k_1$ represents a forward reaction rate, $k_2$ is the backward reaction rate, and $y$ represents the concentration of a chemical solute.
- In an experiment, we take $N$ measurements of the system $z_i$, with $i = 0...N-1$, that are modelled with independent Gaussian measurement noise:
$$z_i = y(t_i) + \varepsilon_i, \qquad \varepsilon_i \sim N(0, \sigma^2)$$
- Assuming that $\sigma$ is unknown, we now have **three** model parameters: $\boldsymbol{\theta} = [k_1, k_2, \sigma]$, rather than one in the previous lift example
Lets look at a random realisations of this experiment, with $N=50$, $k_1=5$, $k_2=3$, and $\sigma=0.02$
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
def r(y, t, p):
k1 = p[0]
k2 = p[1]
dydt = k1 * (1 - y) - k2 * y
return dydt
def random(p):
y0 = 0.1
times = np.linspace(0, 1, 50)
values = odeint(r, y0, times, (p,))
values += np.random.normal(0, 0.02, values.shape)
plt.ylabel('Concentration')
plt.scatter(times, values)
plt.show()
```
```python
random([5,3])
```
- As before, this model describes an infinite family of probability distributions governed by the three parameters $\boldsymbol{\theta} = [k_1, k_2, \sigma]$
- However, now we have $N$ outputs due to the $N$ time samples that were measured, and therefore a probability distribution around each time point $t_i$
```python
from scipy.stats import norm
def family_of_models(k1, k2, sigma):
y0 = 0.1
times = np.linspace(0, 1, 50)
index = int(len(times)/2)
values = odeint(r, y0, times, ([k1, k2],)).reshape(-1)
global_fig_width = 8
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 8/1.618))
ax1.set_ylabel('Concentration')
ax1.set_xlabel(r'$t$')
ax1.plot(times, values)
ax1.fill_between(times, values-sigma, values+sigma, alpha=0.5)
ax1.axvline(times[index],color='r')
concentrations = np.linspace(0, 1, 100)
ax2.plot(concentrations, norm.pdf(concentrations, values[index], sigma), color='r')
ax2.set_xlabel('concentration')
ax2.set_ylabel(r'$P(y(t_i) | \theta)$')
plt.show()
k1_widget = FloatSlider(value=5.0, min=0.0, max=15.0, step=1.0, continuous_update=False)
k2_widget = FloatSlider(value=3.0, min=0.0, max=15.0, step=1.0, continuous_update=False)
sigma_widget = FloatSlider(value=0.1, min=0.0, max=0.2, step=0.01, continuous_update=False)
```
```python
# Display our collection of models
interact(family_of_models, k1=k1_widget, k2=k2_widget, sigma=sigma_widget, continuous_update=False);
```
interactive(children=(FloatSlider(value=5.0, continuous_update=False, description='k1', max=15.0, step=1.0), F…
# Likelihood of ODE model
- we assume that the errors at each time point are independent, therefore the conditional probability density of observing the whole experimental trace from time sample 1 to time sample N is simply the product of the probability density functions at each time point
$$\mathcal{L}(\boldsymbol{\theta} \vert \mathbf{z}) = \prod_{i=1}^{N} P(z_i | \boldsymbol{\theta}).$$
- With our further assumption that the experimental noise is also normally distributed with a mean of zero and variance of $\sigma^2$ , the likelihood can be expressed as
\begin{equation}
\mathcal{L}(\boldsymbol{\theta} \vert \mathbf{z}) = \prod_{i=1}^{N} \mathcal{N}(z_i \vert y(t_i, \boldsymbol{\theta}),\sigma^2) = \prod_{i=1}^{N} \frac{1}{\sqrt{2 \pi \sigma^2}} \exp \left(-\frac{\left(z_i -y(t_i, \boldsymbol{\theta})\right)^2}{2\sigma^2}\right),
\end{equation}
- Note that we denote the solution of the ODE as $y(t_i, \boldsymbol{\theta})$, to emphasise that the solution is a function of the parameters $\boldsymbol{\theta}$
- Normally work with the log-likelihood:
\begin{align}
l(\boldsymbol{\theta} \vert \mathbf{z}) &= \log(\mathcal{L}(\boldsymbol{\theta} \vert \mathbf{z})) \nonumber \\
&= -\frac{N}{2}\log(2\pi\sigma^2)-\frac{1}{2\sigma^2}\sum_{i=1}^N (z_i - y(t_i, \boldsymbol{\theta}))^2 \nonumber \\
&= -\frac{N}{2}\log(2\pi) - N\log(\sigma) - \frac{1}{2\sigma^2}\sum_{i=1}^N (z_i - y (t_i, \boldsymbol{\theta}))^2
\end{align}
- Notice the similarities with classical sum of squares error function (for fixed $\sigma$ anyway)!
```python
def calculate_log_likelihood(k1, k2, sigma, data, times, y0):
N = len(times)
values = odeint(r, y0, times, ([k1, k2],)).reshape(-1)
return -N/2.0*np.log(2*np.pi) - N*np.log(sigma) - (1.0/(2.0*sigma**2)) * np.sum((data - values)**2)
```
```python
def calculate_log_likelihood_all(K1, K2, sigma):
N = 50
y0 = 0.1
times = np.linspace(0, 1, N)
result = np.empty_like(K1)
data = odeint(r, y0, times, ([5, 3],)).reshape(-1) + np.random.normal(0, sigma, times.shape)
for i,(k1,k2) in enumerate(zip(K1,K2)):
result[i] = calculate_log_likelihood(k1, k2, sigma, data, times, y0)
return result
def show_ode_log_likelihood(sigma):
k1 = np.linspace(0, 15, 50)
k2 = np.linspace(0, 15, 50)
K1, K2 = np.meshgrid(k1, k2)
log_likelihood = calculate_log_likelihood_all(K1.flat, K2.flat, sigma).reshape(K1.shape)
plt.contourf(K1, K2, log_likelihood, 20, cmap='RdGy', levels=30);
plt.xlabel(r'$k_1$')
plt.ylabel(r'$k_2$')
plt.title('log-likelihood for ODE model')
plt.colorbar()
plt.show()
```
```python
interact(show_ode_log_likelihood, sigma=FloatSlider(value=0.01, min=0.001, max=0.1, step=0.001, continuous_update=False));
```
interactive(children=(FloatSlider(value=0.01, continuous_update=False, description='sigma', max=0.1, min=0.001…
# Maximum Likelihood Estimation
- In the previous lecture we worked out the posterior *distribution* of a given parameter, given the available data
- Often only interested in the most *likely* parameter value
- Can use maximum likelihood estimation, simply find the value of $\theta$ that maximises the likelihood
- a **frequentist** approach: it maximises the likelihood function (which is not a valid probability distribution over $\theta$)
$$\theta_{mle} = \text{arg max}_{\theta \in \Omega} P(data|\theta)$$
# Coin example
- Back to the coin example. We perform an experiment of $n$ flips, and it lands heads up $h$ times
- Likelihood is:
$$\mathcal{L}(\theta \vert h\times H) = \theta^h (1-\theta)^{n-h}$$
# Finding the parameters that maximise the Likelihood
- For tractable likelihood functions, we can simply find the derivative of the likelihood and set to zero:
$$\frac{\partial \mathcal{L}(\theta \vert h\times H)}{\partial \theta} = \theta^{h - 1} (h - n \theta) (1-\theta)^{n -h - 1}$$
- Set $\frac{\partial \mathcal{L}(\theta \vert h\times H)}{\partial \theta} = 0$ to find the MLE estimator for $\theta$
$$\theta_{mle} = \frac{h}{n}$$
```python
def show_mle_coin(h):
n = 10
theta = np.linspace(0,1,100)
L = theta**h * (1-theta)**(n-h)
plt.plot(theta,L, label='loss function')
plt.xlabel(r'Probability of landing heads $\theta$')
plt.ylabel(r'MLE loss function $L(\theta)$')
max_L = h / n
plt.vlines(max_L, 0, np.max(L), color='r', linestyle='dashed', linewidth=2)
plt.scatter([max_L],[np.max(L)], color='r', label='MLE estimate', s=100)
plt.legend()
plt.show()
h_widget = FloatSlider(value=8.0, min=0.0, max=10.0, step=1.0, continuous_update=False)
```
```python
interact(show_mle_coin, h=h_widget);
```
interactive(children=(FloatSlider(value=8.0, continuous_update=False, description='h', max=10.0, step=1.0), Ou…
## Fisher information and the MLE estimator
- A measure of information (for $\theta$) contained in the data $X$ is given by the Fisher information $I(\theta)$
$$I(\theta) = -E\left[\frac{\partial^2 l(\theta | X)}{\partial \theta^2}\right]$$
- For large number of samples $N$, the MLE estimator converges to the true $\theta_0$ by
$$\theta_{mle} \sim \mathcal{N} \left(\theta_0, \frac{1}{NI(\theta)} \right)$$
- Corresponding approximate 95% confidence intervals for $\theta_0$ are
$$\theta_{mle} - 1.96 \sqrt{\frac{1}{NI(\theta)}} \le \theta_0 \le \theta_{mle} + 1.96 \sqrt{\frac{1}{NI(\theta)}}$$
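As a concrete worked example (standard theory, not specific to these slides): for $n$ independent coin flips, the per-flip Fisher information of the Bernoulli model is $I(\theta) = \frac{1}{\theta(1-\theta)}$, so the approximate 95% interval around $\theta_{mle} = h/n$ is
$$\frac{h}{n} \pm 1.96\sqrt{\frac{\theta_{mle}(1-\theta_{mle})}{n}},$$
the familiar Wald interval for a proportion.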
# Non-linear optimisation
- Often the analytical approach is not feasible; we can then turn to one of the many non-linear optimisation algorithms listed below (a minimal worked example follows the list)
- Derivative-free Direct methods (aka Search methods)
- Brute-force exploration (look at the landscape and pick the lowest point)
- Random search (e.g. Simulated annealing)
- Coordinate search (aka Coordinate descent, aka Compass search)
- Search & poll / Pattern search
- Simplex methods (e.g. Nelder-Mead)
- Tabu search
- Dividing rectangles algorithm (DIRECT)
- Powell's conjugate direction method
- Multi-start methods
- Multi-level single-linkage (MLSL)
- Basin-hopping
- Derivative-free evolutionary methods and metaheuristics
- Genetic algorithms (GA)
- Differential evolution
- Evolution strategies
- Controlled random search (CRS)
- Swarm algorithms
- Metaheuristics
- Bayesian optimisation
- Gradient-estimating methods
- Finite difference methods
- Simplex gradient methods / Implicit filtering
- Natural evolution strategies (NES)
- Surrogate-model methods
- Trust-region methods
- Data-based Online Nonlinear Extremum-seeker (DONE)
- Methods requiring the 1st-order gradient
- Root finding methods (e.g. Newton's method, BFGS)
- Gradient descent (aka Steepest descent)
- Stochastic gradient descent
- Continuation
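As a minimal illustration of the derivative-free end of this list (an illustrative sketch, not part of the original lecture code), here is the coin-flip MLE recovered numerically with `scipy.optimize.minimize_scalar`, a bounded derivative-free scalar minimiser:
```python
import numpy as np
from scipy.optimize import minimize_scalar

n, h = 10, 8  # 10 flips, 8 heads

def neg_log_likelihood(theta):
    # minimise the negative log-likelihood of the binomial model
    return -(h * np.log(theta) + (n - h) * np.log(1 - theta))

res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1 - 1e-6), method='bounded')
print(res.x)  # ~0.8 = h/n, matching the analytical MLE
```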
# Some useful non-linear optimisation packages:
- [NLopt](https://nlopt.readthedocs.io)
- [Pagmo/Pygmo](http://esa.github.io/pygmo/)
- [Scipy](https://docs.scipy.org/doc/scipy/reference/optimize.html)
- [Pints](https://pints.readthedocs.io/en/latest/)
# Example using PINTS
- We will use a popular model of population growth, the logistic equation:
$$ \frac{dy(t)}{dt} = r y(t) \frac{k - y(t)}{k}$$
- Two parameters, the carrying capacity $k$, and the rate of growth $r$
- We will assume an unknown measurement noise $\sigma$, which gives the parameter set $\boldsymbol{\theta} = [r, k, \sigma]$
- The `pints.GaussianLogLikelihood` in PINTS implements the independent Gaussian noise log-likelihood derived earlier
\begin{align}
l(\boldsymbol{\theta} \vert \mathbf{z}) &= \log(L(\boldsymbol{\theta} \vert \mathbf{z})) \nonumber \\
&= -\frac{N}{2}\log(2\pi) - N\log(\sigma) - \frac{1}{2\sigma^2}\sum_{i=1}^N (z_i - y (t_i, \boldsymbol{\theta}))^2
\end{align}
```python
import pints
import pints.toy
import matplotlib.pyplot as plt
import numpy as np
p0 = 1 # initial population; initial value
model = pints.toy.LogisticModel(p0)
# Define the 'true' parameters
true_parameters = [0.1, 50, 5]
# Run a simulation to get test data
times = np.linspace(0, 100, 100)
values = model.simulate(true_parameters[:-1], times)
# Add some noise
values += np.random.normal(0, true_parameters[-1], values.shape)
```
```python
# Show the test data
plt.figure()
plt.xlabel('Time')
plt.ylabel(r'Population $y(t)$')
plt.scatter(times, values, label='data')
plt.plot(times, model.simulate(true_parameters[:-1], times), color='r', lw=3, label='true parameters')
plt.legend()
plt.show()
```
```python
# Create an object with links to the model and time series
problem = pints.SingleOutputProblem(model, times, values)
# Create the log-likelihood function
log_likelihood = pints.GaussianLogLikelihood(problem)
# Select some boundaries
boundaries = pints.RectangularBoundaries([0, 0, 0], [100, 100, 100])
# Select a starting point
x0 = [50, 50, 50]
# Perform an optimization using XNES.
found_parameters, found_value = pints.optimise(log_likelihood, x0, boundaries=boundaries, method=pints.XNES)
print('log_likelihood at true solution:')
print(log_likelihood(true_parameters))
```
Maximising LogPDF
using Exponential Natural Evolution Strategy (xNES)
Running in sequential mode.
Population size: 7
Iter. Eval. Best Time m:s
0 7 -481.7851 0:00.0
1 14 -449.7718 0:00.0
2 21 -443.2462 0:00.0
3 28 -439.0006 0:00.0
20 147 -438.9613 0:00.0
40 287 -438.9469 0:00.1
60 427 -438.9469 0:00.1
80 567 -438.9469 0:00.1
100 707 -438.9469 0:00.1
120 847 -438.9469 0:00.1
140 987 -438.9469 0:00.1
160 1127 -438.9469 0:00.1
180 1267 -438.9469 0:00.2
200 1407 -438.9469 0:00.2
220 1547 -438.9469 0:00.2
240 1687 -438.9469 0:00.2
260 1827 -438.9469 0:00.2
280 1967 -438.9469 0:00.2
284 1988 -438.9469 0:00.2
Halting: No significant change for 200 iterations.
log_likelihood at true solution:
-288.64023040857603
```python
# Show the results
plt.figure()
plt.xlabel('Time')
plt.ylabel(r'Population $y(t)$')
found_mean = model.simulate(found_parameters[:-1], times)
plt.fill_between(times, found_mean - found_parameters[-1], found_mean + found_parameters[-1],
color='gray', alpha=0.3)
plt.plot(times, found_mean, color='r', label='found parameters')
plt.scatter(times, values, alpha=0.5, label='data')
plt.legend()
plt.show()
```
# Maximum a posteriori (MAP) estimation
- Rather than a likelihood, can also maximise the unnormalised posterior:
$$P(\theta | data) \sim P(data|\theta) P(\theta)$$
$$\theta_{map} = \text{arg max}_{\theta \in \Omega} P(data|\theta) P(\theta)$$
- MLE is a particular case of MAP, using a Uniform prior (just multiplies the likelihood by a constant)
- This is a useful way of incorporating domain knowledge about the parameters
- Related to regularisation in non-linear and linear optimisation (made explicit below)
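To make the regularisation connection explicit (a standard identity, not from the slides): taking logs, a Gaussian prior $\theta \sim \mathcal{N}(\mu, \sigma^2)$ turns the MAP objective into a penalised log-likelihood,
$$\theta_{map} = \text{arg max}_{\theta \in \Omega} \left( l(\theta \vert data) - \frac{(\theta-\mu)^2}{2\sigma^2} \right),$$
i.e. an $L_2$ (ridge-style) penalty pulling $\theta$ towards the prior mean $\mu$.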
# Coin example with a Gaussian prior
- A reasonable assumption is that the coin is likely to be fair, so let's use a Gaussian prior $N(0.5, \sigma)$
- Maximum a posteriori loss function $L(\theta)$ is therefore:
\begin{align*}
L(\theta) &= P(data|\theta) P(\theta) \\
&= \theta^h (1-\theta)^{n-h} \frac{1}{\sqrt{2 \pi \sigma^2}} e^{-\frac{(\theta - 0.5)^2}{2 \sigma^2}}
\end{align*}
```python
def show_map_coin(h, sigma):
n = 10
theta = np.linspace(0,1,100)
L = theta**h * (1-theta)**(n-h) * np.exp(-(theta - 0.5)**2/(2*sigma**2)) / np.sqrt(2*np.pi*sigma**2)
plt.plot(theta,L, label='loss function')
plt.xlabel(r'Probability of landing heads $\theta$')
plt.ylabel(r'MAP loss function $L(\theta)$')
max_i = np.argmax(L)
plt.scatter([theta[max_i]],[L[max_i]], color='r', label='MAP estimate', s=100)
plt.legend()
plt.show()
h_widget = FloatSlider(value=8.0, min=0.0, max=10.0, step=1.0, continuous_update=False)
sigma_widget = FloatSlider(value=0.3, min=0.0, max=0.3, step=0.01, continuous_update=False)
```
```python
interact(show_map_coin, h=h_widget, sigma=sigma_widget, continuous_update=False);
```
interactive(children=(FloatSlider(value=8.0, continuous_update=False, description='h', max=10.0, step=1.0), Fl…
# Logistic growth example
Recall the logistic equation:
$$f(t) = \frac{k}{1+(k/p_0 - 1) \exp(-r t)}$$
- Anyone familiar with this equation could estimate a value of the carrying capacity $k$ from a plot
- Would be reasonable to therefore use a Gaussian Prior for $k$
```python
def show_logistic_estimate():
plt.xlabel('Time')
plt.ylabel(r'Population $y(t)$')
plt.scatter(times, values, label='data')
plt.plot([0, 100], [50, 50], c='k', ls='--', lw=3, label='estimate for $k$')
plt.legend()
plt.show()
```
```python
show_logistic_estimate()
```
## Finding MAP estimator in PINTS
```python
# Create the log-likelihood function (using the problem defined earlier)
log_likelihood = pints.GaussianLogLikelihood(problem)
# Create a uniform prior over r
log_prior_r = pints.UniformLogPrior([0],[100])
# Create a gaussian prior over k
log_prior_k = pints.GaussianLogPrior(50,10)
# Create a uniform prior over sigma
log_prior_sigma = pints.UniformLogPrior([0],[100])
# Create a composed prior
log_prior = pints.ComposedLogPrior(log_prior_r, log_prior_k, log_prior_sigma)
```
```python
# Create a posterior log-likelihood (log(likelihood * prior))
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
# Select some boundaries
boundaries = pints.RectangularBoundaries([0, 0, 0], [100, 100, 100])
# Select a starting point
x0 = [50, 50, 50]
# Perform an optimization on the posterior using Particle Swarm Optimisation (PSO).
found_parameters, found_value = pints.optimise(log_posterior, x0, boundaries=boundaries, method=pints.PSO)
print('posterior log-likelihood at true solution:')
print(log_posterior(true_parameters))
```
Maximising LogPDF
using Particle Swarm Optimisation (PSO)
Running in sequential mode.
Population size: 7
Iter. Eval. Best Time m:s
0 13 -489.3967 0:00.0
1 26 -489.3967 0:00.0
2 39 -475.6101 0:00.0
3 52 -463.3096 0:00.0
20 273 -439.1372 0:00.0
40 533 -438.9546 0:00.0
60 793 -438.9546 0:00.1
80 1053 -400.392 0:00.1
100 1313 -399.0795 0:00.1
120 1573 -397.7643 0:00.1
140 1833 -396.5449 0:00.1
160 2093 -383.5393 0:00.1
180 2353 -330.0047 0:00.1
200 2613 -311.9692 0:00.1
220 2873 -286.0045 0:00.2
240 3133 -286.0045 0:00.2
260 3393 -286.0045 0:00.2
280 3653 -286.0045 0:00.2
300 3913 -286.0045 0:00.2
320 4173 -286.0045 0:00.2
340 4433 -286.0045 0:00.2
360 4693 -286.0045 0:00.3
380 4953 -286.0045 0:00.3
400 5213 -286.0045 0:00.3
408 5304 -286.0045 0:00.3
Halting: No significant change for 200 iterations.
posterior log-likelihood at true solution:
-301.0720944067509
```python
# Show the results
plt.figure()
plt.xlabel('Time')
plt.ylabel(r'Population $y(t)$')
found_mean = model.simulate(found_parameters[:-1], times)
plt.fill_between(times, found_mean - found_parameters[-1], found_mean + found_parameters[-1],
color='gray', alpha=0.2)
plt.plot(times, found_mean)
plt.plot(times, values)
plt.show()
```
# Electrochemistry example - POMS
- We study three unresolved two-electron surface-confined polyoxometalate reduction processes measured by AC voltammetry
**(left)** Molecular structure of $[\text{PMo}_{12}\text{O}_{40}]^{3-}$ **(right)** Experimental AC voltammetry trace
- The sequence of six electron transfer steps are modelled by the following quasi-reversible reactions
\begin{align}
A + e^- \underset{k^1_{ox}(t)}{\overset{k^1_{red}(t)}{\rightleftarrows}} B,
\\
B + e^- \underset{k^2_{ox}(t)}{\overset{k^2_{red}(t)}{\rightleftarrows}} C,
\\
C + e^- \underset{k^3_{ox}(t)}{\overset{k^3_{red}(t)}{\rightleftarrows}} D,
\\
D + e^- \underset{k^4_{ox}(t)}{\overset{k^4_{red}(t)}{\rightleftarrows}} E,
\\
E + e^- \underset{k^5_{ox}(t)}{\overset{k^5_{red}(t)}{\rightleftarrows}} F,
\\
F + e^- \underset{k^6_{ox}(t)}{\overset{k^6_{red}(t)}{\rightleftarrows}} G,
\end{align}
where the forward $k_{red}$ and backward $k_{ox}$ reaction rates are given by the Butler-Volmer relationships
\begin{align}\label{eq:rate1}
k^i_{red}(t) &= k^0_i \exp\left(-\frac{\alpha_i F}{RT} [E_r(t) - E^0_i]
\right), \\
k^i_{ox}(t) &= k^0_i \exp\left((1-\alpha_i)\frac{F}{RT} [E_r(t) - E^0_i]
\right). \label{eq:rate2}
\end{align}
- This can be modelled by an ordinary differential equation containing 17 parameters to be estimated
$$
\mathbf{p} =
(E^0_1,E^0_2,E^0_3,E^0_4,E^0_5,E^0_6,k^0_1,k^0_2,k^0_3,k^0_4,k^0_5,k^0_6,
\alpha_1,
\alpha_2,
R_u,
C_{dl},
\Gamma).
$$
- The effect of the $E^0_i$ parameters on the simulated current is highly non-linear.
- In such a high dimensional space all non-linear optimisers we tried failed to find the global minimum
- But approximate values of $E^0_i$ can easily be read off the experimental current trace. **Solution:** put a Gaussian prior on all $E^0_i$ parameters
The **standard deviation** of the Gaussian prior (i.e. the confidence of the estimate of $E^0_i$) is required to be $\leq 0.1$ V for **reliable parameter estimation**
## Summary
- The likelihood function is derived from your model of the underlying process, and the experimental data
- It is often the most important choice, encapsulates many **subjective judgements**
- **Maximum likelihood estimation** (MLE) finds the *most likely* parameters that lead to the data
- **Maximum a posteriori estimation** (MAP) finds the most likely parameters that lead to the data, *given additional information* encoded in the prior
- **Non-linear Optimisation** methods are useful in both MLE and MAP estimation for non-trivial models.
```python
```
# Model Project
```python
import numpy as np
from scipy import linalg
from scipy import optimize
import sympy as sm
import matplotlib.pyplot as plt
plt.style.use('seaborn')
```
```python
import numpy as np
import scipy as sp
from scipy import linalg
from scipy import optimize
from scipy import interpolate
import sympy as sm
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
```
# Model description
We consider a Solow model for a closed economy with no growth in technology:
* $K_t$ is capital
* $L_t$ is labor (growing with a constant rate of $n$)
* $Y_t = F(K_t,L_t)$ is GDP
We have the following equations:
$$Y_t = BK_t^\alpha L_t^{1-\alpha}$$
$$L_{t+1} = (1+n) L_t$$
**Saving** is a constant fraction $s$ of income:
$$ S_t = sY_t,\,s\in(0,1) $$
These equations imply that the rate and the wage are the following:
$$r_t = \alpha B \big(\frac{K_t}{L_t}\big)^{\alpha-1}$$
$$w_t = (1-\alpha) B \big(\frac{K_t}{L_t}\big)^{\alpha}$$
Capital accumulation is therefore:
$$K_{t+1}= K_t + I_t-\delta K_t$$
$k_t$ can be defined as:
$$k_t = \frac{K_t}{L_t}$$
Using the capital accumulation equation and dividing through by $L_{t+1}$, it is possible to show that the transition equation is:
$$ k_{t+1} = \frac{1}{1+n}(sBk_t^\alpha+(1-\delta)k_t)$$
# Steady state
## Analytical solution
In a standard Solow model there is only a steady state in per capita values. In the transition equation we set $k_{t+1} = k_t = k$, and by doing that we get the steady state values for $y_t$ and $k_t$.
$$ k = \frac{1}{1+n}(sBk^\alpha+(1-\delta)k)$$
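Solving this by hand (a standard manipulation): multiplying through by $1+n$ and cancelling the $(1-\delta)k$ term gives $(n+\delta)k = sBk^\alpha$, hence
$$k^* = \left(\frac{sB}{n+\delta}\right)^{\frac{1}{1-\alpha}},$$
which the `sympy` computation below reproduces in an equivalent form.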
```python
# We define our variables/symbols:
k = sm.symbols('k')
alpha = sm.symbols('alpha')
delta = sm.symbols('delta')
s = sm.symbols('s')
n = sm.symbols('n')
B = sm.symbols('B')
```
```python
# Here we define the steady state equation
ss = sm.Eq(k,(s*B*k**alpha+(1-delta)*k)/((1+n)))
```
```python
#Then we solve the steady state equation
kss = sm.solve(ss,k)[0]
kss
```
$\displaystyle \left(\frac{B s}{\delta + n}\right)^{- \frac{1}{\alpha - 1}}$
```python
# The code below saves our solution as a function, which will be handy later
ss_func = sm.lambdify((s,B,n,alpha,delta),kss)
```
## Numerical solution
For the numerical solution we have to choose values for our parameters; we have chosen ones that we find reasonable:
```python
# Setting values for our parameters:
s = 0.2
B = 1
n = 0.01
alpha = 1/3
delta = 0.1
```
When we want to find the steady state level of capital we examine the following equation:
$$ 0 = k - \frac{1}{1+n}(sBk^\alpha+(1-\delta)k)$$
What the optimizer then does is try to numerically find a $k$ that solves this equation. We use the method called bisect, which we explain below.
```python
from scipy import optimize
# This piece of code makes a function that numerically solves for the steady state of our model.
# We give this function our parameters and it returns the steady state level of capital.
def solve_for_ss(s,B,n,alpha,delta):
"""Args:
s (float): saving rate
B (float): Technology level
n (float): population growth rate
alpha (float): cobb-douglas parameter
delta (float): capital depreciation rate
Returns:
result (RootResults): The steady state level of capital
"""
# a. define objective function
f = lambda k: k**alpha
    # steady-state condition: k - k_{t+1}(k) = 0
obj_kss = lambda kss: kss - (s*B*f(kss) + (1-delta)*kss)/((1+n))
#. b. call root finder
result = optimize.root_scalar(obj_kss,bracket=[0.1,100],method='bisect')
return result
```
We have used the bisection algorithm ("bisect") for this optimisation. The bisection method works as follows. The algorithm starts from two values of $x$, $a$ and $b$, chosen so that $f(a)$ and $f(b)$ have different signs. It then computes the midpoint $m_0 = \frac{a+b}{2}$ and checks the sign of $f(m_0)$. If $f(a)$ and $f(m_0)$ have different signs, the next interval it examines is $[a,m_0]$; otherwise it continues with $[m_0,b]$. It repeats this until $f(m_0)$ is sufficiently close to 0.
We have tried to plot this below. Say we want to find the roots of the polynomial below. The algorithm chooses an $a$ and a $b$, for example $a=-13$ and $b=-5$; these values are depicted in the figure as the green vertical lines, and notice that $f(a)$ and $f(b)$ have different signs. The algorithm then calculates $m_0=-9$, depicted by the vertical yellow line. The next interval it examines is between $a=-13$ and $m_0=-9$, since $f(a)$ and $f(m_0)$ have different signs, while $f(b)$ and $f(m_0)$ are both negative, which leads to that interval being discarded.
```python
# Choosing the x and y intervals and creating the function
x = np.linspace(-15,15,400)
y = x**2-100
# Plotting the axes
plt.axhline(y=0, color='r', linestyle='-')
plt.axvline(x=0, color='r')
# Plotting a and b
plt.axvline(x=-13, color='g')
plt.axvline(x=-5, color='g')
# Plotting m_0
plt.axvline(x=-9, color='y')
# Creating the plot:
plt.plot(x,y)
plt.show()
```
The next thing we do is call the function and check the solution.
```python
solve_for_ss(s,B,n,alpha,delta)
```
converged: True
flag: 'converged'
function_calls: 48
iterations: 46
root: 2.4516358635037063
We see that our root finder has performed 46 iterations, meaning it repeated the procedure above 46 times, and called the function 48 times. This is in line with what we would expect: with 46 iterations it checks $f(a)$ and $f(b)$ once each, and then $f(m_0)$ 46 times, which gives 48 function calls.
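For completeness, here is a minimal sketch of the bisection loop itself (illustrative only; `scipy` uses a more careful stopping rule):
```python
def bisect(f, a, b, tol=1e-10, max_iter=100):
    """Find a root of f on [a, b], assuming f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f(a) and f(b) must have opposite signs"
    for _ in range(max_iter):
        m = 0.5 * (a + b)      # midpoint of the current bracket
        fm = f(m)
        if abs(fm) < tol:      # close enough to a root
            return m
        if fa * fm < 0:        # the root lies in [a, m]
            b, fb = m, fm
        else:                  # the root lies in [m, b]
            a, fa = m, fm
    return 0.5 * (a + b)

# example: recover the positive root of x**2 - 100
print(bisect(lambda x: x**2 - 100, 5, 15))  # ~10.0
```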
We now check wether this solution is the same as the one we would find analytically:
```python
solution = solve_for_ss(s,B,n,alpha,delta)
print(f' numerical solution is: {solution.root:.3f}')
print(f'analytical solution is: {ss_func(s,B,n,alpha,delta):.3f}')
```
numerical solution is: 2.452
analytical solution is: 2.452
We luckily find that the numerical and analytical solutions are the same.
## Graphical representation
We would like to see the transition diagram, to see the transition towards steady state. We would also like to examine what happens if we change the savings rate s.
```python
# First we make a function that calculates capital in period 1 when we have capital in period 0.
def transition(k,s,B,n,alpha, delta):
"""
Returns:
k1(float) capital in the next period
Args:
k (float): Initial stock of capital
B (float): Technology level
s (float): The savings rate
alpha (float): cobb-douglas parameter
delta (float): capital depreciation rate
n (float): Population growth
"""
# Calculating capital in period 1
k1 = (s*B*k**alpha+(1-delta)*k)/((1+n))
return k1
```
We now want to store capital for each period which we can do by calculating them one after another and putting them in an array:
```python
def transition_curve(k,s,B,n,alpha, delta,T=1000,k_min=1e-20,k_max=6):
"""
Returns:
k_1(array): Array of capital in period 1
k_2(array): Array of capital in period 2
Args:
k (float): Initial stock of capital
B (float): Technology level
s (float): The savings rate
alpha (float): cobb-douglas parameter
delta (float): capital depreciation rate
n (float): Population growth
output:
For every value of capital computes the value of capital in the next period using the transition equation.
"""
    # grid of "today's" capital values
    k_1 = np.linspace(k_min,k_max,T)
k_2 = np.empty(T)
#construct array of "tomorrows" capital
for i,k in enumerate(k_1):
k_plus = transition(k,s,B,n,alpha, delta)
k_2[i] = k_plus
return k_1, k_2
```
At last, we want to draw the figure with a widget, and for this reason we make a function that draws the transition diagram depending on $s$.
```python
def fig(s):
"""
Returns:
Plots the transition curve
Args:
s: savings rate
"""
#parameters
alpha = 1/3
s = s
B = 1
n = 0.2
delta = 0.2
#transition curve
k_1, k_2 = transition_curve(k,s,B,n,alpha, delta,T=1000,k_min=1e-20,k_max=6)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(k_1,k_2, label="Transition curve") #transition curve
ax.plot(k_1,k_1, '--', color='grey',label="45 degree") #45 degree line
ax.set_xlabel('$k_t$')
ax.set_ylabel('$k_t+1$')
ax.set_title('Transition curve')
ax.legend()
ax.set_xlim([0,4])
ax.set_ylim([0,4]);
return
# Making the widget:
import ipywidgets as widgets
widgets.interact(fig,
s = widgets.FloatSlider(description='s', min=0, max=1, step=0.01, value=0.5),
);
```
interactive(children=(FloatSlider(value=0.5, description='s', max=1.0, step=0.01), Output()), _dom_classes=('w…
# Model extensions
We would like to specify the model so that it better fits Denmark. We do this by converting the model into a Solow model of a small open economy. The new model is similar to the old model in many ways, but some changes are made. In an open economy it is assumed that capital movements are free, which means that the interest rate cannot differ between countries. When we assume that it is a small economy, we also assume that the economy cannot affect the global rental rate.
The equation system that this new model is characterised by is the following:
$$ Y_t = BK_t^\alpha L_t^{1-\alpha}$$
$$ L_{t+1} = (1+n) L_t $$
$$ S_t = sY_t,\,s\in(0,1) $$
In this model we have to define wealth which we name V and F is receivables:
$$ V_t = K_t + F_t $$
The reason for introducing wealth is that there is no steady state in capital per worker, since domestic capital is no longer tied to domestic saving. Instead we can examine the transition equation in wealth per worker:
$$ v_{t+1} = \frac{sw^*}{1+n}+\frac{1-\delta+s\bar{r}}{1+n}v_t $$
where $w^*$ and $\bar{r}$ are defined as in the beginning, which means:
$$\bar{r} = \alpha B \big(\frac{K_t}{L_t}\big)^{\alpha-1}$$
$$w* = (1-\alpha) B \big(\frac{K_t}{L_t}\big)^{\alpha}$$
The equation for the rate ensures that:
$$ k_t = k* $$
Which means that the rate and the wage always is in steady state.
In this model we also have national income, which is domestic output plus the income earned on receivables from foreign countries:
$$ Y_t^n = Y_t + \bar{r}F_t$$
First we want to solve the model, and we do it both numerically and analytically:
## Analytical solution:
First of all we again define our variables/symbols, and we also define the steady state equation
```python
# Defining our symbols:
v = sm.symbols('v')
alpha = sm.symbols('alpha')
delta = sm.symbols('delta')
s = sm.symbols('s')
n = sm.symbols('n')
B = sm.symbols('B')
k = sm.symbols('k')
w = sm.symbols('w')
r = sm.symbols('r')
# Creating the steady state equation:
ssopen = sm.Eq(v,(s*w/(1+n))+((1-delta+s*r)/(1+n))*v)
```
We solve the model:
```python
# Solving the model:
vss = sm.solve(ssopen,v)[0]
# Saving the solution:
ssopen_func = sm.lambdify((s,B,n,r,k,w,alpha,delta),vss)
# Displaying the analytical solution solution:
vss
```
$\displaystyle \frac{s w}{\delta + n - r s}$
## Numerical solution:
We again define our parameter values:
```python
s = 0.2
B = 1
n = 0.01
k= 2
alpha = 1/3
delta = 0.1
w = (1-alpha)*B*k**alpha
r = alpha*B*k**(alpha-1)
```
```python
# Here we do practically the same as before: we create a function that solves our model using the bisect method.
def solve_for_ssopen(s,B,n,k, alpha,delta):
""" solve for the steady state level of capital
Args:
s (float): saving rate
B (float): Technology level
n (float): population growth rate
k (float): Capital pr. worker
alpha (float): cobb-douglas parameter
delta (float): capital depreciation rate
Returns:
result (RootResults): The steady state wealth pr. worker in the economy
"""
    # a. define objective function (w and r are the wage and world interest rate defined above)
    obj_vss = lambda vss: vss - ((s*w/(1+n))+((1-delta+s*r)/(1+n))*vss)
#. b. call root finder
result = optimize.root_scalar(obj_vss,bracket=[0.1,100],method='bisect')
return result
```
We then check whether our analytical and numerical solutions are the same:
```python
solution = solve_for_ssopen(s,B,n,k,alpha,delta)
print(f' numerical solution is: {solution.root:.3f}')
print(f'analytical solution is: {ssopen_func(s,B,n,r,k,w,alpha,delta):.3f}')
```
numerical solution is: 2.470
analytical solution is: 2.470
We luckily find that they match up.
## Benefits of opening up a closed economy:
We now want to examine what happens if a closed economy opens up. We do this by taking our first model and examining what happens when we change it to the second one. The most interesting variable for this comparison is output per worker. Therefore we calculate output per worker in the steady state of our first model and compare it to output per worker in an open economy with the same parameters. We can then examine whether there is a welfare gain connected to opening the economy.
```python
# Setting values for our parameters:
s = 0.2
B = 1
n = 0.01
alpha = 1/3
delta = 0.1
# we also define k as the one we calculated in the original model
k = 2.452
```
In the simple model output pr. worker is $$y_t = B k_t^\alpha $$
We also remember that we can calculate the rate and the wage the following way:
$$r_t = \alpha B \big(\frac{K_t}{L_t}\big)^{\alpha-1}$$
$$w_t = (1-\alpha) B \big(\frac{K_t}{L_t}\big)^{\alpha}$$
We can through this calculate the steady state values for the original closed economy.
```python
print(f' Output pr. worker in the closed economy is: {B*ss_func(s,B,n,alpha,delta)**alpha}')
print(f' The rental rate in the closed economy is: {alpha*B*k**(alpha-1)}')
print(f' The wage in the closed economy is: {(1-alpha)*B*k**(alpha)}')
```
Output pr. worker in the closed economy is: 1.348399724926484
The rental rate in the closed economy is: 0.1833151821614239
The wage in the closed economy is: 0.898977653319623
When a small closed economy is opened, the interest rate will instantaneously adapt and become equal to the rental rate of the outside world. This means that our examination depends on what the rate of the outside world is at the point of opening the economy. In our model the adjustment is almost instant: if we open the economy in period 0, then the economy is back in steady state in period 1.
We calculate national income given the rate through a function:
```python
# Defining a function that can calculate the national output given the rate:
def national_output_calc(s,B,n,alpha,delta,r):
k = ((B*alpha)/r)**(1/(1-alpha))
w = (1-alpha)*B**(1/(1-alpha))*(alpha/r)**(alpha/(1-alpha))
v = (s*w)/(n+delta-s*r)
y = w + r*v
return y
```
Now we draw the function defined above, so that we can examine what national income per worker is for different world interest rates. In the plot we also put a vertical line at the interest rate of the closed economy and a horizontal line at the old level of output per worker.
```python
# 100 linearly spaced numbers
x = np.linspace(0.05,0.4,100)
# Define the function we want to be drawn
y = national_output_calc(s,B,n,alpha,delta,x)
# Creating the plot
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
# Plotting vertical and horisontal line
plt.axhline(y=B*ss_func(s,B,n,alpha,delta)**alpha, color='g')
plt.axvline(x=alpha*B*k**(alpha-1))
# plot the function
plt.plot(x,y, 'r')
# show the plot
plt.show()
```
We see from the graph that there are welfare gains connected to opening up the economy. In the case where the domestic rate equals the foreign rate, output per worker is unchanged, but in all other cases there are welfare gains.
# Conclusion
The conclusion of this project is that a closed economy will experience an increase in output per worker by opening up, as long as the domestic interest rate is not equal to the foreign interest rate.
```python
```
```python
%pylab
```
Using matplotlib backend: Qt5Agg
Populating the interactive namespace from numpy and matplotlib
```python
from random import uniform
```
Given $X \sim \Gamma(\alpha,1)$, with density
\begin{equation}
f_X(x) \propto f^*_X(x) = x^{\alpha-1} e^{-x}, \quad x > 0
\end{equation}
Using the envelope $Y \sim \text{Exp}(1/\alpha)$ with density
\begin{equation}
g_Y(x) = \frac{1}{\alpha} e^{-x/\alpha}, x > 0
\end{equation}
Also compute the probability of rejection.
```python
f = lambda alpha,x: (x**(alpha-1))*(np.exp(-x))
g = lambda alpha,x: (1/alpha)*(np.exp(-x/alpha))
```
1. Compute $\displaystyle{ M = \sup_{x\in(0,\infty)} \frac{f^*_X(x)}{g_Y(x)}}$: <br>
Note that
\begin{equation}
h(x) = \frac{f^*_X(x)}{g_Y(x)} = \alpha x^{\alpha-1}\exp\left(-x\left(1-\frac{1}{\alpha}\right)\right)
\end{equation}
To find the supremum we may consider the quantity
\begin{equation}
y = \ln h(x) = \ln \alpha + (\alpha - 1) \ln x - x \left( 1-\frac{1}{\alpha} \right)
\end{equation}
Then
\begin{equation}
\frac{dy}{dx} = \frac{\alpha - 1}{x} - \frac{\alpha-1}{\alpha}
\end{equation}
This is equal to zero if $\displaystyle{x = \alpha}$. In fact, for $\alpha > 1$,
- if $x < \alpha$ then $dy/dx > 0$
- if $x > \alpha$ then $dy/dx < 0$
Therefore $y$ (and hence $h$) attains its maximum at $x = \alpha$, so $\displaystyle{M = \alpha^{\alpha}e^{-(\alpha-1)}}$
2. We may simulate Y by using inversion -
Note that the CDF of $Y$ is $G_Y(x) = 1 - e^{-x/\alpha}$, and $G_Y^{-1}(x) = -\alpha \ln(1-x)$
```python
def my_rExp(alpha,size):
unisamp = [random.uniform(0,1) for i in range(size)]
return [-alpha*np.log(1-x) for x in unisamp]
```
3. We may then use the rejection algorithm to simulate $X$ (this will also let us compute the rejection probability asked for above; see below).
  - Simulate $U \sim U(0,1)$ and $X \sim \text{Exp}(1/\alpha)$.
  - If $u \leq M^{-1}f^*_X(x)/g_Y(x)$ then accept $x$, otherwise reject.
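This also answers the rejection-probability part of the exercise: with an unnormalised target, the overall acceptance probability is $\int_0^\infty f^*_X(x)\,dx \,/\, M = \Gamma(\alpha)/M$, so
$$P(\text{reject}) = 1 - \frac{\Gamma(\alpha)}{M} = 1 - \Gamma(\alpha)\,\alpha^{-\alpha}e^{\alpha-1}.$$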
```python
M = lambda alpha: (alpha**alpha)*np.exp(-(alpha-1))
h = lambda alpha,x: f(alpha,x)/(M(alpha)*g(alpha,x))
```
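The prompt also asks for the probability of rejection. Since $f^*_X$ is unnormalized with $\int_0^\infty f^*_X(x)\,dx = \Gamma(\alpha)$, the acceptance probability of the scheme above is
\begin{equation}
P(\text{accept}) = \int_0^\infty \frac{f^*_X(x)}{M g_Y(x)} g_Y(x)\,dx = \frac{\Gamma(\alpha)}{M},
\quad\text{so}\quad
P(\text{reject}) = 1 - \frac{\Gamma(\alpha)\,e^{\alpha-1}}{\alpha^{\alpha}}
\end{equation}
A quick numerical check (a sketch; assumes `scipy` is available alongside the `M` defined above):
```python
from scipy.special import gamma as gamma_fn

def p_reject(alpha):
    # P(accept) = Gamma(alpha) / M, since the unnormalized f* integrates to Gamma(alpha)
    return 1 - gamma_fn(alpha) / M(alpha)

p_reject(2)  # roughly 0.32 for alpha = 2
```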
```python
def my_rGamma1(alpha,size):
count = 0
samp = []
while count < size:
u = [random.uniform(0,1) for i in range(size)]
exp_samp = my_rExp(alpha,size)
ubound = [h(alpha,x) for x in exp_samp]
        idx = [i for i in range(len(u)) if u[i] <= ubound[i]]
        new_samp = [exp_samp[i] for i in idx]  # keep the accepted envelope draws, not the uniforms
        samp += new_samp
        count = len(samp)
    return samp[:size]
```
```python
my_rGamma1(2,10)
```
[0.5550633830391315,
0.9927212674394112,
0.6983735483799777,
0.49466601339924643,
0.8106875849861941,
0.9050509249637454,
0.3102623813753336,
0.98171935773631,
0.19668951612576313,
0.36127920327731633]
Notice that if $\tilde{X} = X/\beta$ then $\tilde{X} \sim \Gamma(\alpha,\beta)$, with $\beta$ the rate parameter. Using this scaling we may simulate $\tilde{X}$.
```python
def my_rGamma(alpha,beta,size):
return [pt/beta for pt in my_rGamma1(alpha,size)]
```
```python
my_rGamma(3,2,30)
```
[0.38359214592412577,
0.4966073373612345,
0.2568435931525481,
0.3975477716675325,
0.31461628807870534,
0.43438550365391143,
0.05458809067372872,
0.33602126523235437,
0.22906344692827613,
0.3177055644399481,
0.21831892032313138,
0.3036546829741233,
0.16840905801462613,
0.15654361943179157,
0.2767799227870573,
0.1533971950751764,
0.04082917886722137,
0.07897936636851083,
0.4179233869683552,
0.2489387208380447,
0.31459121925637584,
0.002321617317671887,
0.028271625135758538,
0.20481184134471248,
0.32464579215021194,
0.1317549529978439,
0.23665240812238492,
0.44853126271481225,
0.4848416365454395,
0.39500027839659013]
----
# Programming Exercise 4: Neural Networks Learning
## Introduction
In this exercise, you will implement the backpropagation algorithm for neural networks and apply it to the task of hand-written digit recognition. Before starting on the programming exercise, we strongly recommend watching the video lectures and completing the review questions for the associated topics.
All the information you need for solving this assignment is in this notebook, and all the code you will be implementing will take place within this notebook. The assignment can be promptly submitted to the coursera grader directly from this notebook (code and instructions are included below).
Before we begin with the exercises, we need to import all libraries required for this programming exercise. Throughout the course, we will be using [`numpy`](http://www.numpy.org/) for all arrays and matrix operations, [`matplotlib`](https://matplotlib.org/) for plotting, and [`scipy`](https://docs.scipy.org/doc/scipy/reference/) for scientific and numerical computation functions and tools. You can find instructions on how to install required libraries in the README file in the [github repository](https://github.com/dibgerge/ml-coursera-python-assignments).
```python
# used for manipulating directory paths
import os
# Scientific and vector computation for python
import numpy as np
# Plotting library
from matplotlib import pyplot
# Optimization module in scipy
from scipy import optimize
# will be used to load MATLAB mat datafile format
from scipy.io import loadmat
# library written for this exercise providing additional functions for assignment submission, and others
import utils
# define the submission/grader object for this exercise
grader = utils.Grader()
# tells matplotlib to embed plots within the notebook
%matplotlib inline
```
## Submission and Grading
After completing each part of the assignment, be sure to submit your solutions to the grader. The following is a breakdown of how each part of this exercise is scored.
| Section | Part | Submission function | Points
| :- |:- | :- | :-:
| 1 | [Feedforward and Cost Function](#section1) | [`nnCostFunction`](#nnCostFunction) | 30
| 2 | [Regularized Cost Function](#section2) | [`nnCostFunction`](#nnCostFunction) | 15
| 3 | [Sigmoid Gradient](#section3) | [`sigmoidGradient`](#sigmoidGradient) | 5
| 4 | [Neural Net Gradient Function (Backpropagation)](#section4) | [`nnCostFunction`](#nnCostFunction) | 40
| 5 | [Regularized Gradient](#section5) | [`nnCostFunction`](#nnCostFunction) |10
| | Total Points | | 100
You are allowed to submit your solutions multiple times, and we will take only the highest score into consideration.
<div class="alert alert-block alert-warning">
At the end of each section in this notebook, we have a cell which contains code for submitting the solutions thus far to the grader. Execute the cell to see your score up to the current section. For all your work to be submitted properly, you must execute those cells at least once.
</div>
## Neural Networks
In the previous exercise, you implemented feedforward propagation for neural networks and used it to predict handwritten digits with the weights we provided. In this exercise, you will implement the backpropagation algorithm to learn the parameters for the neural network.
We start the exercise by first loading the dataset.
```python
# training data stored in arrays X, y
data = loadmat(os.path.join('Data', 'ex4data1.mat'))
X, y = data['X'], data['y'].ravel()
# set the zero digit to 0, rather than its mapped 10 in this dataset
# This is an artifact due to the fact that this dataset was used in
# MATLAB where there is no index 0
y[y == 10] = 0
# Number of training examples
m = y.size
```
### 1.1 Visualizing the data
You will begin by visualizing a subset of the training set, using the function `displayData`, which is the same function we used in Exercise 3. It is provided in the `utils.py` file for this assignment as well. The dataset is also the same one you used in the previous exercise.
There are 5000 training examples in `ex4data1.mat`, where each training example is a 20 pixel by 20 pixel grayscale image of the digit. Each pixel is represented by a floating point number indicating the grayscale intensity at that location. The 20 by 20 grid of pixels is “unrolled” into a 400-dimensional vector. Each
of these training examples becomes a single row in our data matrix $X$. This gives us a 5000 by 400 matrix $X$ where every row is a training example for a handwritten digit image.
$$ X = \begin{bmatrix} - \left(x^{(1)} \right)^T - \\
- \left(x^{(2)} \right)^T - \\
\vdots \\
- \left(x^{(m)} \right)^T - \\
\end{bmatrix}
$$
The second part of the training set is a 5000-dimensional vector `y` that contains labels for the training set.
The following cell randomly selects 100 images from the dataset and plots them.
```python
# Randomly select 100 data points to display
rand_indices = np.random.choice(m, 100, replace=False)
sel = X[rand_indices, :]
utils.displayData(sel)
```
### 1.2 Model representation
Our neural network is shown in the following figure.
It has 3 layers - an input layer, a hidden layer and an output layer. Recall that our inputs are pixel values
of digit images. Since the images are of size $20 \times 20$, this gives us 400 input layer units (not counting the extra bias unit which always outputs +1). The training data was loaded into the variables `X` and `y` above.
You have been provided with a set of network parameters ($\Theta^{(1)}, \Theta^{(2)}$) already trained by us. These are stored in `ex4weights.mat` and will be loaded in the next cell of this notebook into `Theta1` and `Theta2`. The parameters have dimensions that are sized for a neural network with 25 units in the second layer and 10 output units (corresponding to the 10 digit classes).
```python
# Setup the parameters you will use for this exercise
input_layer_size = 400 # 20x20 Input Images of Digits
hidden_layer_size = 25 # 25 hidden units
num_labels = 10 # 10 labels, from 0 to 9
# Load the weights into variables Theta1 and Theta2
weights = loadmat(os.path.join('Data', 'ex4weights.mat'))
# Theta1 has size 25 x 401
# Theta2 has size 10 x 26
Theta1, Theta2 = weights['Theta1'], weights['Theta2']
# swap first and last columns of Theta2, due to legacy from MATLAB indexing,
# since the weight file ex3weights.mat was saved based on MATLAB indexing
Theta2 = np.roll(Theta2, 1, axis=0)
# Unroll parameters
nn_params = np.concatenate([Theta1.ravel(), Theta2.ravel()])
```
<a id="section1"></a>
### 1.3 Feedforward and cost function
Now you will implement the cost function and gradient for the neural network. First, complete the code for the function `nnCostFunction` in the next cell to return the cost.
Recall that the cost function for the neural network (without regularization) is:
$$ J(\theta) = \frac{1}{m} \sum_{i=1}^{m}\sum_{k=1}^{K} \left[ - y_k^{(i)} \log \left( \left( h_\theta \left( x^{(i)} \right) \right)_k \right) - \left( 1 - y_k^{(i)} \right) \log \left( 1 - \left( h_\theta \left( x^{(i)} \right) \right)_k \right) \right]$$
where $h_\theta \left( x^{(i)} \right)$ is computed as shown in the neural network figure above, and K = 10 is the total number of possible labels. Note that $h_\theta(x^{(i)})_k = a_k^{(3)}$ is the activation (output
value) of the $k^{th}$ output unit. Also, recall that whereas the original labels (in the variable y) were 0, 1, ..., 9, for the purpose of training a neural network, we need to encode the labels as vectors containing only values 0 or 1, so that
$$ y =
\begin{bmatrix} 1 \\ 0 \\ 0 \\\vdots \\ 0 \end{bmatrix}, \quad
\begin{bmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \quad \cdots \quad \text{or} \qquad
\begin{bmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix}.
$$
For example, if $x^{(i)}$ is an image of the digit 5, then the corresponding $y^{(i)}$ (that you should use with the cost function) should be a 10-dimensional vector with $y_5 = 1$, and the other elements equal to 0.
You should implement the feedforward computation that computes $h_\theta(x^{(i)})$ for every example $i$ and sum the cost over all examples. **Your code should also work for a dataset of any size, with any number of labels** (you can assume that there are always at least $K \ge 3$ labels).
<div class="alert alert-box alert-warning">
**Implementation Note:** The matrix $X$ contains the examples in rows (i.e., X[i,:] is the i-th training example $x^{(i)}$, expressed as a $n \times 1$ vector.) When you complete the code in `nnCostFunction`, you will need to add the column of 1’s to the X matrix. The parameters for each unit in the neural network is represented in Theta1 and Theta2 as one row. Specifically, the first row of Theta1 corresponds to the first hidden unit in the second layer. You can use a for-loop over the examples to compute the cost.
</div>
<a id="nnCostFunction"></a>
```python
def nnCostFunction(nn_params,
input_layer_size,
hidden_layer_size,
num_labels,
X, y, lambda_=0.0):
"""
Implements the neural network cost function and gradient for a two layer neural
network which performs classification.
Parameters
----------
nn_params : array_like
The parameters for the neural network which are "unrolled" into
a vector. This needs to be converted back into the weight matrices Theta1
and Theta2.
input_layer_size : int
Number of features for the input layer.
hidden_layer_size : int
Number of hidden units in the second layer.
num_labels : int
Total number of labels, or equivalently number of units in output layer.
X : array_like
Input dataset. A matrix of shape (m x input_layer_size).
y : array_like
Dataset labels. A vector of shape (m,).
lambda_ : float, optional
Regularization parameter.
Returns
-------
J : float
The computed value for the cost function at the current weight values.
grad : array_like
An "unrolled" vector of the partial derivatives of the concatenatation of
neural network weights Theta1 and Theta2.
Instructions
------------
You should complete the code by working through the following parts.
- Part 1: Feedforward the neural network and return the cost in the
variable J. After implementing Part 1, you can verify that your
cost function computation is correct by verifying the cost
computed in the following cell.
- Part 2: Implement the backpropagation algorithm to compute the gradients
Theta1_grad and Theta2_grad. You should return the partial derivatives of
the cost function with respect to Theta1 and Theta2 in Theta1_grad and
Theta2_grad, respectively. After implementing Part 2, you can check
that your implementation is correct by running checkNNGradients provided
in the utils.py module.
Note: The vector y passed into the function is a vector of labels
containing values from 0..K-1. You need to map this vector into a
binary vector of 1's and 0's to be used with the neural network
cost function.
Hint: We recommend implementing backpropagation using a for-loop
over the training examples if you are implementing it for the
first time.
- Part 3: Implement regularization with the cost function and gradients.
Hint: You can implement this around the code for
backpropagation. That is, you can compute the gradients for
the regularization separately and then add them to Theta1_grad
and Theta2_grad from Part 2.
Note
----
We have provided an implementation for the sigmoid function in the file
`utils.py` accompanying this assignment.
"""
# Reshape nn_params back into the parameters Theta1 and Theta2, the weight matrices
# for our 2 layer neural network
Theta1 = np.reshape(nn_params[:hidden_layer_size * (input_layer_size + 1)],
(hidden_layer_size, (input_layer_size + 1)))
Theta2 = np.reshape(nn_params[(hidden_layer_size * (input_layer_size + 1)):],
(num_labels, (hidden_layer_size + 1)))
# Setup some useful variables
m = y.size
# You need to return the following variables correctly
J = 0
Theta1_grad = np.zeros(Theta1.shape)
Theta2_grad = np.zeros(Theta2.shape)
# ====================== YOUR CODE HERE ======================
# ================================================================
# Unroll gradients
# grad = np.concatenate([Theta1_grad.ravel(order=order), Theta2_grad.ravel(order=order)])
grad = np.concatenate([Theta1_grad.ravel(), Theta2_grad.ravel()])
return J, grad
```
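As a point of reference only (the exercise expects you to fill in `# YOUR CODE HERE` yourself), one possible vectorized completion of the unregularized cost in Part 1 is sketched below; it assumes the `utils.sigmoid` helper mentioned in the docstring:
```python
# Sketch of the unregularized feedforward cost, using variables set up inside nnCostFunction
a1 = np.concatenate([np.ones((m, 1)), X], axis=1)    # add bias column, shape (m, 401)
a2 = utils.sigmoid(a1.dot(Theta1.T))                 # hidden activations, shape (m, 25)
a2 = np.concatenate([np.ones((m, 1)), a2], axis=1)   # add bias, shape (m, 26)
h = utils.sigmoid(a2.dot(Theta2.T))                  # output activations, shape (m, 10)

y_matrix = np.eye(num_labels)[y]                     # one-hot encode the labels, shape (m, 10)
J = (-1 / m) * np.sum(y_matrix * np.log(h) + (1 - y_matrix) * np.log(1 - h))
```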
<div class="alert alert-box alert-warning">
Use the following links to go back to the different parts of this exercise that require to modify the function `nnCostFunction`.<br>
Back to:
- [Feedforward and cost function](#section1)
- [Regularized cost](#section2)
- [Neural Network Gradient (Backpropagation)](#section4)
- [Regularized Gradient](#section5)
</div>
Once you are done, call your `nnCostFunction` using the loaded set of parameters for `Theta1` and `Theta2`. You should see that the cost is about 0.287629.
```python
lambda_ = 0
J, _ = nnCostFunction(nn_params, input_layer_size, hidden_layer_size,
num_labels, X, y, lambda_)
print('Cost at parameters (loaded from ex4weights): %.6f ' % J)
print('The cost should be about : 0.287629.')
```
*You should now submit your solutions.*
```python
grader = utils.Grader()
grader[1] = nnCostFunction
grader.grade()
```
<a id="section2"></a>
### 1.4 Regularized cost function
The cost function for neural networks with regularization is given by:
$$ J(\theta) = \frac{1}{m} \sum_{i=1}^{m}\sum_{k=1}^{K} \left[ - y_k^{(i)} \log \left( \left( h_\theta \left( x^{(i)} \right) \right)_k \right) - \left( 1 - y_k^{(i)} \right) \log \left( 1 - \left( h_\theta \left( x^{(i)} \right) \right)_k \right) \right] + \frac{\lambda}{2 m} \left[ \sum_{j=1}^{25} \sum_{k=1}^{400} \left( \Theta_{j,k}^{(1)} \right)^2 + \sum_{j=1}^{10} \sum_{k=1}^{25} \left( \Theta_{j,k}^{(2)} \right)^2 \right] $$
You can assume that the neural network will only have 3 layers - an input layer, a hidden layer and an output layer. However, your code should work for any number of input units, hidden units and outputs units. While we
have explicitly listed the indices above for $\Theta^{(1)}$ and $\Theta^{(2)}$ for clarity, do note that your code should in general work with $\Theta^{(1)}$ and $\Theta^{(2)}$ of any size. Note that you should not be regularizing the terms that correspond to the bias. For the matrices `Theta1` and `Theta2`, this corresponds to the first column of each matrix. You should now add regularization to your cost function. Notice that you can first compute the unregularized cost function $J$ using your existing `nnCostFunction` and then later add the cost for the regularization terms.
[Click here to go back to `nnCostFunction` for editing.](#nnCostFunction)
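For reference, the regularization term can be added on top of the unregularized cost with something along these lines (a sketch; note it excludes the first, bias column of each weight matrix):
```python
# Sketch: regularization term, skipping the bias column of Theta1 and Theta2
reg = (lambda_ / (2 * m)) * (np.sum(np.square(Theta1[:, 1:])) +
                             np.sum(np.square(Theta2[:, 1:])))
J = J + reg
```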
Once you are done, the next cell will call your `nnCostFunction` using the loaded set of parameters for `Theta1` and `Theta2`, and $\lambda = 1$. You should see that the cost is about 0.383770.
```python
# Weight regularization parameter (we set this to 1 here).
lambda_ = 1
J, _ = nnCostFunction(nn_params, input_layer_size, hidden_layer_size,
num_labels, X, y, lambda_)
print('Cost at parameters (loaded from ex4weights): %.6f' % J)
print('This value should be about : 0.383770.')
```
*You should now submit your solutions.*
```python
grader[2] = nnCostFunction
grader.grade()
```
## 2 Backpropagation
In this part of the exercise, you will implement the backpropagation algorithm to compute the gradient for the neural network cost function. You will need to update the function `nnCostFunction` so that it returns an appropriate value for `grad`. Once you have computed the gradient, you will be able to train the neural network by minimizing the cost function $J(\theta)$ using an advanced optimizer such as `scipy`'s `optimize.minimize`.
You will first implement the backpropagation algorithm to compute the gradients for the parameters for the (unregularized) neural network. After you have verified that your gradient computation for the unregularized case is correct, you will implement the gradient for the regularized neural network.
<a id="section3"></a>
### 2.1 Sigmoid Gradient
To help you get started with this part of the exercise, you will first implement
the sigmoid gradient function. The gradient for the sigmoid function can be
computed as
$$ g'(z) = \frac{d}{dz} g(z) = g(z)\left(1-g(z)\right) $$
where
$$ \text{sigmoid}(z) = g(z) = \frac{1}{1 + e^{-z}} $$
Now complete the implementation of `sigmoidGradient` in the next cell.
<a id="sigmoidGradient"></a>
```python
def sigmoidGradient(z):
"""
Computes the gradient of the sigmoid function evaluated at z.
This should work regardless if z is a matrix or a vector.
In particular, if z is a vector or matrix, you should return
the gradient for each element.
Parameters
----------
z : array_like
A vector or matrix as input to the sigmoid function.
Returns
--------
g : array_like
Gradient of the sigmoid function. Has the same shape as z.
Instructions
------------
Compute the gradient of the sigmoid function evaluated at
each value of z (z can be a matrix, vector or scalar).
Note
----
We have provided an implementation of the sigmoid function
in `utils.py` file accompanying this assignment.
"""
g = np.zeros(z.shape)
# ====================== YOUR CODE HERE ======================
# =============================================================
return g
```
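For reference, once you apply the identity above, the body of `sigmoidGradient` reduces to a single line (a sketch assuming the provided `utils.sigmoid`):
```python
# Sketch: g'(z) = g(z) * (1 - g(z)), applied elementwise
g = utils.sigmoid(z) * (1 - utils.sigmoid(z))
```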
When you are done, the following cell calls `sigmoidGradient` on a given vector `z`. Try testing a few values by calling `sigmoidGradient(z)`. For large values (both positive and negative) of z, the gradient should be close to 0. When $z = 0$, the gradient should be exactly 0.25. Your code should also work with vectors and matrices. For a matrix, your function should perform the sigmoid gradient function on every element.
```python
z = np.array([-1, -0.5, 0, 0.5, 1])
g = sigmoidGradient(z)
print('Sigmoid gradient evaluated at [-1 -0.5 0 0.5 1]:\n ')
print(g)
```
*You should now submit your solutions.*
```python
grader[3] = sigmoidGradient
grader.grade()
```
### 2.2 Random Initialization
When training neural networks, it is important to randomly initialize the parameters for symmetry breaking. One effective strategy for random initialization is to randomly select values for $\Theta^{(l)}$ uniformly in the range $[-\epsilon_{init}, \epsilon_{init}]$. You should use $\epsilon_{init} = 0.12$. This range of values ensures that the parameters are kept small and makes the learning more efficient.
<div class="alert alert-box alert-warning">
One effective strategy for choosing $\epsilon_{init}$ is to base it on the number of units in the network. A good choice of $\epsilon_{init}$ is $\epsilon_{init} = \frac{\sqrt{6}}{\sqrt{L_{in} + L_{out}}}$ where $L_{in} = s_l$ and $L_{out} = s_{l+1}$ are the number of units in the layers adjacent to $\Theta^{l}$.
</div>
Your job is to complete the function `randInitializeWeights` to initialize the weights for $\Theta$. Modify the function by filling in the following code:
```python
# Randomly initialize the weights to small values
W = np.random.rand(L_out, 1 + L_in) * 2 * epsilon_init - epsilon_init
```
Note that we give the function an argument for $\epsilon$ with default value `epsilon_init = 0.12`.
```python
def randInitializeWeights(L_in, L_out, epsilon_init=0.12):
"""
Randomly initialize the weights of a layer in a neural network.
Parameters
----------
L_in : int
Number of incomming connections.
L_out : int
Number of outgoing connections.
epsilon_init : float, optional
Range of values which the weight can take from a uniform
distribution.
Returns
-------
W : array_like
The weight initialiatized to random values. Note that W should
be set to a matrix of size(L_out, 1 + L_in) as
the first column of W handles the "bias" terms.
Instructions
------------
Initialize W randomly so that we break the symmetry while training
the neural network. Note that the first column of W corresponds
to the parameters for the bias unit.
"""
# You need to return the following variables correctly
W = np.zeros((L_out, 1 + L_in))
# ====================== YOUR CODE HERE ======================
# ============================================================
return W
```
*You do not need to submit any code for this part of the exercise.*
Execute the following cell to initialize the weights for the 2 layers in the neural network using the `randInitializeWeights` function.
```python
print('Initializing Neural Network Parameters ...')
initial_Theta1 = randInitializeWeights(input_layer_size, hidden_layer_size)
initial_Theta2 = randInitializeWeights(hidden_layer_size, num_labels)
# Unroll parameters
initial_nn_params = np.concatenate([initial_Theta1.ravel(), initial_Theta2.ravel()], axis=0)
```
<a id="section4"></a>
### 2.3 Backpropagation
Now, you will implement the backpropagation algorithm. Recall that the intuition behind the backpropagation algorithm is as follows. Given a training example $(x^{(t)}, y^{(t)})$, we will first run a “forward pass” to compute all the activations throughout the network, including the output value of the hypothesis $h_\theta(x)$. Then, for each node $j$ in layer $l$, we would like to compute an “error term” $\delta_j^{(l)}$ that measures how much that node was “responsible” for any errors in our output.
For an output node, we can directly measure the difference between the network’s activation and the true target value, and use that to define $\delta_j^{(3)}$ (since layer 3 is the output layer). For the hidden units, you will compute $\delta_j^{(l)}$ based on a weighted average of the error terms of the nodes in layer $(l+1)$. In detail, here is the backpropagation algorithm (also depicted in the figure above). You should implement steps 1 to 4 in a loop that processes one example at a time. Concretely, you should implement a for-loop `for t in range(m)` and place steps 1-4 below inside the for-loop, with the $t^{th}$ iteration performing the calculation on the $t^{th}$ training example $(x^{(t)}, y^{(t)})$. Step 5 will divide the accumulated gradients by $m$ to obtain the gradients for the neural network cost function.
1. Set the input layer’s values $(a^{(1)})$ to the $t^{th}$ training example $x^{(t)}$. Perform a feedforward pass, computing the activations $(z^{(2)}, a^{(2)}, z^{(3)}, a^{(3)})$ for layers 2 and 3. Note that you need to add a `+1` term to ensure that the vectors of activations for layers $a^{(1)}$ and $a^{(2)}$ also include the bias unit. In `numpy`, if `a_1` is a matrix whose rows are examples, adding the bias unit corresponds to `a_1 = np.concatenate([np.ones((m, 1)), a_1], axis=1)`.
1. For each output unit $k$ in layer 3 (the output layer), set
$$\delta_k^{(3)} = \left(a_k^{(3)} - y_k \right)$$
where $y_k \in \{0, 1\}$ indicates whether the current training example belongs to class $k$ $(y_k = 1)$, or if it belongs to a different class $(y_k = 0)$. You may find logical arrays helpful for this task (explained in the previous programming exercise).
1. For the hidden layer $l = 2$, set
$$ \delta^{(2)} = \left( \Theta^{(2)} \right)^T \delta^{(3)} * g'\left(z^{(2)} \right)$$
Note that the symbol $*$ performs element wise multiplication in `numpy`.
1. Accumulate the gradient from this example using the following formula. Note that you should skip or remove $\delta_0^{(2)}$. In `numpy`, removing $\delta_0^{(2)}$ corresponds to `delta_2 = delta_2[1:]`.
$$ \Delta^{(l)} = \Delta^{(l)} + \delta^{(l+1)} (a^{(l)})^{(T)} $$
1. Obtain the (unregularized) gradient for the neural network cost function by dividing the accumulated gradients by $m$:
$$ \frac{\partial}{\partial \Theta_{ij}^{(l)}} J(\Theta) = D_{ij}^{(l)} = \frac{1}{m} \Delta_{ij}^{(l)}$$
<div class="alert alert-box alert-warning">
**Python/Numpy tip**: You should implement the backpropagation algorithm only after you have successfully completed the feedforward and cost functions. While implementing the backpropagation algorithm, it is often useful to use the `shape` function to print out the shapes of the variables you are working with if you run into dimension mismatch errors.
</div>
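For reference, a hedged sketch of the per-example loop described in steps 1-5 is shown below; it is one possible structure for the body of `nnCostFunction`, and the accumulator names `Delta1`/`Delta2` are hypothetical:
```python
# Sketch of per-example backpropagation (steps 1-5 above)
Delta1 = np.zeros(Theta1.shape)
Delta2 = np.zeros(Theta2.shape)
for t in range(m):
    a1 = np.concatenate([[1], X[t]])                         # step 1: input with bias, (401,)
    z2 = Theta1.dot(a1)
    a2 = np.concatenate([[1], utils.sigmoid(z2)])            # hidden with bias, (26,)
    a3 = utils.sigmoid(Theta2.dot(a2))                       # output, (10,)
    y_t = np.zeros(num_labels)
    y_t[y[t]] = 1                                            # one-hot label
    delta3 = a3 - y_t                                        # step 2
    delta2 = Theta2.T.dot(delta3)[1:] * sigmoidGradient(z2)  # step 3, bias term removed
    Delta1 += np.outer(delta2, a1)                           # step 4
    Delta2 += np.outer(delta3, a2)
Theta1_grad = Delta1 / m                                     # step 5
Theta2_grad = Delta2 / m
```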
[Click here to go back and update the function `nnCostFunction` with the backpropagation algorithm](#nnCostFunction).
**Note:** If the iterative solution provided above is proving to be difficult to implement, try implementing the vectorized approach which is easier to implement in the opinion of the moderators of this course. You can find the tutorial for the vectorized approach [here](https://www.coursera.org/learn/machine-learning/discussions/all/threads/a8Kce_WxEeS16yIACyoj1Q).
After you have implemented the backpropagation algorithm, we will proceed to run gradient checking on your implementation. The gradient check will allow you to increase your confidence that your code is
computing the gradients correctly.
### 2.4 Gradient checking
In your neural network, you are minimizing the cost function $J(\Theta)$. To perform gradient checking on your parameters, you can imagine “unrolling” the parameters $\Theta^{(1)}$, $\Theta^{(2)}$ into a long vector $\theta$. By doing so, you can think of the cost function being $J(\Theta)$ instead and use the following gradient checking procedure.
Suppose you have a function $f_i(\theta)$ that purportedly computes $\frac{\partial}{\partial \theta_i} J(\theta)$; you’d like to check if $f_i$ is outputting correct derivative values.
$$
\text{Let } \theta^{(i+)} = \theta + \begin{bmatrix} 0 \\ 0 \\ \vdots \\ \epsilon \\ \vdots \\ 0 \end{bmatrix}
\quad \text{and} \quad \theta^{(i-)} = \theta - \begin{bmatrix} 0 \\ 0 \\ \vdots \\ \epsilon \\ \vdots \\ 0 \end{bmatrix}
$$
So, $\theta^{(i+)}$ is the same as $\theta$, except its $i^{th}$ element has been incremented by $\epsilon$. Similarly, $\theta^{(i−)}$ is the corresponding vector with the $i^{th}$ element decreased by $\epsilon$. You can now numerically verify $f_i(\theta)$’s correctness by checking, for each $i$, that:
$$ f_i\left( \theta \right) \approx \frac{J\left( \theta^{(i+)}\right) - J\left( \theta^{(i-)} \right)}{2\epsilon} $$
The degree to which these two values should approximate each other will depend on the details of $J$. But assuming $\epsilon = 10^{-4}$, you’ll usually find that the left- and right-hand sides of the above will agree to at least 4 significant digits (and often many more).
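To make the idea concrete, a stripped-down central-difference sketch is shown below (for intuition only; `utils.computeNumericalGradient` is the version actually used, and since `nnCostFunction` returns a `(J, grad)` pair you would wrap it so that `J_func` returns just the scalar cost):
```python
def numerical_grad_sketch(J_func, theta, eps=1e-4):
    """Central-difference approximation of the gradient of J_func at theta (sketch)."""
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        theta_plus, theta_minus = theta.copy(), theta.copy()
        theta_plus[i] += eps
        theta_minus[i] -= eps
        grad[i] = (J_func(theta_plus) - J_func(theta_minus)) / (2 * eps)
    return grad
```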
We have implemented the function to compute the numerical gradient for you in `computeNumericalGradient` (within the file `utils.py`). While you are not required to modify the file, we highly encourage you to take a look at the code to understand how it works.
In the next cell we will run the provided function `checkNNGradients` which will create a small neural network and dataset that will be used for checking your gradients. If your backpropagation implementation is correct,
you should see a relative difference that is less than 1e-9.
<div class="alert alert-box alert-success">
**Practical Tip**: When performing gradient checking, it is much more efficient to use a small neural network with a relatively small number of input units and hidden units, thus having a relatively small number
of parameters. Each dimension of $\theta$ requires two evaluations of the cost function and this can be expensive. In the function `checkNNGradients`, our code creates a small random model and dataset which is used with `computeNumericalGradient` for gradient checking. Furthermore, after you are confident that your gradient computations are correct, you should turn off gradient checking before running your learning algorithm.
</div>
<div class="alert alert-box alert-success">
**Practical Tip:** Gradient checking works for any function where you are computing the cost and the gradient. Concretely, you can use the same `computeNumericalGradient` function to check if your gradient implementations for the other exercises are correct too (e.g., logistic regression’s cost function).
</div>
```python
utils.checkNNGradients(nnCostFunction)
```
*Once your cost function passes the gradient check for the (unregularized) neural network cost function, you should submit the neural network gradient function (backpropagation).*
```python
grader[4] = nnCostFunction
grader.grade()
```
<a id="section5"></a>
### 2.5 Regularized Neural Network
After you have successfully implemented the backpropagation algorithm, you will add regularization to the gradient. To account for regularization, it turns out that you can add this as an additional term *after* computing the gradients using backpropagation.
Specifically, after you have computed $\Delta_{ij}^{(l)}$ using backpropagation, you should add regularization using
$$ \begin{align}
& \frac{\partial}{\partial \Theta_{ij}^{(l)}} J(\Theta) = D_{ij}^{(l)} = \frac{1}{m} \Delta_{ij}^{(l)} & \qquad \text{for } j = 0 \\
& \frac{\partial}{\partial \Theta_{ij}^{(l)}} J(\Theta) = D_{ij}^{(l)} = \frac{1}{m} \Delta_{ij}^{(l)} + \frac{\lambda}{m} \Theta_{ij}^{(l)} & \qquad \text{for } j \ge 1
\end{align}
$$
Note that you should *not* be regularizing the first column of $\Theta^{(l)}$ which is used for the bias term. Furthermore, in the parameters $\Theta_{ij}^{(l)}$, $i$ is indexed starting from 1, and $j$ is indexed starting from 0. Thus,
$$
\Theta^{(l)} = \begin{bmatrix}
\Theta_{1,0}^{(l)} & \Theta_{1,1}^{(l)} & \cdots \\
\Theta_{2,0}^{(l)} & \Theta_{2,1}^{(l)} & \cdots \\
\vdots & ~ & \ddots
\end{bmatrix}
$$
[Now modify your code that computes grad in `nnCostFunction` to account for regularization.](#nnCostFunction)
After you are done, the following cell runs gradient checking on your implementation. If your code is correct, you should expect to see a relative difference that is less than 1e-9.
```python
# Check gradients by running checkNNGradients
lambda_ = 3
utils.checkNNGradients(nnCostFunction, lambda_)
# Also output the costFunction debugging values
debug_J, _ = nnCostFunction(nn_params, input_layer_size,
hidden_layer_size, num_labels, X, y, lambda_)
print('\n\nCost at (fixed) debugging parameters (w/ lambda = %f): %f ' % (lambda_, debug_J))
print('(for lambda = 3, this value should be about 0.576051)')
```
```python
grader[5] = nnCostFunction
grader.grade()
```
### 2.6 Learning parameters using `scipy.optimize.minimize`
After you have successfully implemented the neural network cost function and gradient computation, the next step is to use `scipy`'s minimization routines to learn a good set of parameters.
```python
# After you have completed the assignment, change the maxiter to a larger
# value to see how more training helps.
options= {'maxiter': 100}
# You should also try different values of lambda
lambda_ = 1
# Create "short hand" for the cost function to be minimized
costFunction = lambda p: nnCostFunction(p, input_layer_size,
hidden_layer_size,
num_labels, X, y, lambda_)
# Now, costFunction is a function that takes in only one argument
# (the neural network parameters)
res = optimize.minimize(costFunction,
initial_nn_params,
jac=True,
method='TNC',
options=options)
# get the solution of the optimization
nn_params = res.x
# Obtain Theta1 and Theta2 back from nn_params
Theta1 = np.reshape(nn_params[:hidden_layer_size * (input_layer_size + 1)],
(hidden_layer_size, (input_layer_size + 1)))
Theta2 = np.reshape(nn_params[(hidden_layer_size * (input_layer_size + 1)):],
(num_labels, (hidden_layer_size + 1)))
```
After the training completes, we will proceed to report the training accuracy of your classifier by computing the percentage of examples it got correct. If your implementation is correct, you should see a reported
training accuracy of about 95.3% (this may vary by about 1% due to the random initialization). It is possible to get higher training accuracies by training the neural network for more iterations. We encourage you to try
training the neural network for more iterations (e.g., set `maxiter` to 400) and also vary the regularization parameter $\lambda$. With the right learning settings, it is possible to get the neural network to perfectly fit the training set.
```python
pred = utils.predict(Theta1, Theta2, X)
print('Training Set Accuracy: %f' % (np.mean(pred == y) * 100))
```
## 3 Visualizing the Hidden Layer
One way to understand what your neural network is learning is to visualize the representations captured by the hidden units. Informally, given a particular hidden unit, one way to visualize what it computes is to find an input $x$ that will cause it to activate (that is, to have an activation value
($a_i^{(l)}$) close to 1). For the neural network you trained, notice that the $i^{th}$ row of $\Theta^{(1)}$ is a 401-dimensional vector that represents the parameter for the $i^{th}$ hidden unit. If we discard the bias term, we get a 400 dimensional vector that represents the weights from each input pixel to the hidden unit.
Thus, one way to visualize the “representation” captured by the hidden unit is to reshape this 400 dimensional vector into a 20 × 20 image and display it (It turns out that this is equivalent to finding the input that gives the highest activation for the hidden unit, given a “norm” constraint on the input (i.e., $||x||_2 \le 1$)).
The next cell does this by using the `displayData` function and it will show you an image with 25 units,
each corresponding to one hidden unit in the network. In your trained network, you should find that the hidden units corresponds roughly to detectors that look for strokes and other patterns in the input.
```python
utils.displayData(Theta1[:, 1:])
```
### 3.1 Optional (ungraded) exercise
In this part of the exercise, you will get to try out different learning settings for the neural network to see how the performance of the neural network varies with the regularization parameter $\lambda$ and the number of training steps (the `maxiter` option when using `scipy.optimize.minimize`). Neural networks are very powerful models that can form highly complex decision boundaries. Without regularization, it is possible for a neural network to “overfit” a training set so that it obtains close to 100% accuracy on the training set but does not do as well on new examples that it has not seen before. You can set the regularization $\lambda$ to a smaller value and the `maxiter` parameter to a higher number of iterations to see this for yourself.
----
# Lecture 23: Beta distribution, Bayes' Billiards
## Stat 110, Prof. Joe Blitzstein, Harvard University
----
## Beta Distribution
The Beta distribution is characterized as $\operatorname{Beta}(\alpha, \beta)$ where $\alpha \gt 0 \text{, } \beta \gt 0$.
The Beta distribution PDF is given by:
\begin{align}
f(x) &= c \, x^{\alpha-1} \, (1 - x)^{\beta-1} \quad \text{where } 0 \lt x \lt 1
\end{align}
We will put aside the normalization constant $c$ for now (wait until lecture 25!).
$\operatorname{Beta}(\alpha ,\beta)$ distribution
* is a flexible family of continuous distributions over $(0,1)$ (see graph below for some examples)
* often used as a _prior_ for a parameter in range $(0,1)$
* _conjugate prior_ for $\operatorname{Binomial}$ distribution
```python
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import (MultipleLocator, FormatStrFormatter,
AutoMinorLocator)
from scipy.stats import beta
%matplotlib inline
plt.xkcd()
_, ax = plt.subplots(figsize=(12,8))
x = np.linspace(0, 1.0, 500)
# seme Beta parameters
alphas = [0.5, 5, 1, 1, 2, 3]
betas = [0.5, 3, 2, 1, 1, 5]
params = map(list, zip(alphas, betas))
# qualitative color scheme
colors = ['#66c2a5', '#fc8d62', '#8da0cb', '#e78ac3', '#a6d854', '#ffd92f']
for i,(a,b) in enumerate(params):
y = beta.pdf(x, a, b)
ax.plot(x, y, color=colors[i], lw=3.2, label=r'$\alpha$={}, $\beta$={}'.format(a,b))
# legend styling
legend = ax.legend()
for label in legend.get_texts():
label.set_fontsize('large')
for label in legend.get_lines():
label.set_linewidth(1.5)
# y-axis
ax.set_ylim([0.0, 5.0])
ax.set_ylabel(r'$f(x)$')
# x-axis
ax.set_xlim([0, 1.0])
ax.set_xlabel(r'$x$')
# x-axis tick formatting
majorLocator = MultipleLocator(.2)
majorFormatter = FormatStrFormatter('%0.1f')
minorLocator = MultipleLocator(.1)
ax.xaxis.set_major_locator(majorLocator)
ax.xaxis.set_major_formatter(majorFormatter)
ax.xaxis.set_minor_locator(minorLocator)
ax.grid(color='grey', linestyle='-', linewidth=0.3)
plt.suptitle(r'Beta Distribution $f(x) = \frac{x^{\alpha-1} \, (1-x)^{\beta-1}}{B(\alpha,\beta)}$')
plt.show()
```
### Conjugate prior to the Binomial
Recall Laplace's Rule of Succession for the problem of the sun rising: there, we let $p$ be the probability that the sun rises on any given day, and modeled the sunrise indicators $X_1, X_2, \dots, X_{k-1}$ as i.i.d. $\operatorname{Bern}(p)$.
We made an assumption that $p \sim \operatorname{Unif}(0,1)$.
$\operatorname{Beta}(1,1)$ is the same as $\operatorname{Unif}(0,1)$, and so we will show how to generalize using the $\operatorname{Beta}$ distribution.
Given $X|p \sim \operatorname{Bin}(n,p)$. We get to observe $X$, but we do not know the true value of $p$.
In such a case, we can assume that the _prior_ $p \sim \operatorname{Beta}(\alpha, \beta)$. After observing $n$ further trials, where perhaps $k$ are successes and $n-k$ are failures, we can use this information to update our beliefs on the nature of $p$ using Bayes Theorem.
So we what we want is the _posterior_ $p|X$, since we will get to observe more values of $X$ and want to update our understanding of $p$.
\begin{align}
f(p|X=k) &= \frac{P(X=k|p) \, f(p)}{P(X=k)} \\
&= \frac{\binom{n}{k} \, p^k \, (1-p)^{n-k} \, c \, p^{\alpha-1}(1-p)^{\beta-1}}{P(X=k)} \\
&\propto p^{\alpha + k - 1} (1-p)^{\beta + n - k - 1} \quad \text{since }\binom{n}{k} \text{, }c \text{ and }P(X=k) \text{ do not depend on }p \\
\\
\Rightarrow p|X &\sim \operatorname{Beta}(\alpha + X, \beta + n - X)
\end{align}
_Conjugate_ refers to the fact that we are looking at an entire _family_ of $\operatorname{Beta}$ distributions as the _prior_. We started off with $\operatorname{Beta}(\alpha, \beta)$, and after an additional $n$ more observations of $X$ we end up with $\operatorname{Beta}(\alpha + X, \beta + n - X)$.
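As a quick numerical illustration of this update (the prior parameters and data below are hypothetical; `beta` is the `scipy.stats.beta` imported earlier):
```python
# Sketch: Beta prior -> Beta posterior after observing k successes in n trials
a0, b0 = 2, 2            # hypothetical prior Beta(2, 2)
n, k = 10, 7             # hypothetical data: 7 successes in 10 trials
posterior = beta(a0 + k, b0 + n - k)
print(posterior.mean())  # posterior mean (a0 + k) / (a0 + b0 + n) = 9/14
```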
### Bayes' Billiards
The $\operatorname{Beta}(\alpha, \beta)$ distribution has PDF:
\begin{align}
f(x) &= c \, x^{\alpha-1} \, (1 - x)^{\beta-1}
\end{align}
Let's try to find the normalizing constant $c$ for the case where $\alpha \gt 0 \text{, } \beta \gt 0$, and $\alpha,\beta \in \mathbb{Z}$
In order to do that, we need to find out
\begin{align}
\int_0^1 x^k \, (1 - x)^{n-k} \, dx \\
\\
\rightarrow \int_0^1 \binom{n}{k} x^k \, (1 - x)^{n-k} \, dx
\end{align}
#### Story 1
We have $n+1$ white billiard balls, and we paint one of them pink. Now we throw them down on the number line from 0 to 1.
#### Story 2
We throw our $n+1$ white billiard balls down on the number line from 0 to 1, and _then_ randomly select one of them to paint in pink.
_Note that the image above could have resulted from either of the stories, so both stories are actually equivalent._
#### Doing calculus without using calculus
At this point, we know exactly how to evaluate the above integral _without using any calculus._
Let $X = \text{# balls to left of pink}$, so $X$ ranges from $0$ to $n$. If we condition on $p$ (the position of the pink billiard ball), we can consider this to be a binomial distribution problem, where "success" means landing to the left of the pink ball.
\begin{align}
P(X = k) &= \int_0^1 P(X=k|p) \, f(p) \, dp &\quad \text{conditioning on } p \\
&= \int_0^1 \binom{n}{k} p^k \, (1-p)^{n-k} \, dp &\quad \text{since } p \sim \operatorname{Unif}(0,1) \text{ has } f(p) = 1 \\
&= \boxed{ \frac{1}{n+1} } &\quad \text{since by Story 2, } X \text{ is equally likely to be any value in } \{0, 1, \dots, n\}
\end{align}
And so now we have the normalizing constant when $\alpha, \beta$ are positive integers.
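Writing this out explicitly: for integers $0 \le k \le n$,
\begin{align}
\int_0^1 x^k \, (1 - x)^{n-k} \, dx = \frac{1}{(n+1)\binom{n}{k}} = \frac{k! \, (n-k)!}{(n+1)!}
\end{align}
so with $\alpha = k + 1$ and $\beta = n - k + 1$, the normalizing constant is $c = \frac{(\alpha + \beta - 1)!}{(\alpha - 1)! \, (\beta - 1)!}$.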
----
View [Lecture 23: Beta distribution | Statistics 110](http://bit.ly/2Nprujo) on YouTube.
----
# Linear Regression
```{note}
Linear Regression = Linear Model + Mean Square Loss<br/>
We can use either gradient descent or analytic solution to solve linear regression.<br/>
Linear Regression has nice geometric and probabilistic Interpretations.
```
## Model
Suppose $x \in \mathbb{R}^{d}$, $y \in \mathbb{R}$. Linear model is:
$$h(x) = w^{T}x + b$$
For simplicity, let:
$$x := [x,1] \in \mathbb{R}^{d + 1}$$
$$\theta := [w, b] \in \mathbb{R}^{d + 1}$$
Then linear model could be write as:
$$h(x) = \theta^{T}x$$
## Loss
The loss function is the squared-error loss (up to a constant factor, this is the mean squared error):
$$J(\theta) = \frac{1}{2}\sum_{i=1}^{n}(h(x^{(i)}) - y^{(i)})^{2}$$
## Update Rule
Gradient Descent:
$$\theta \to \theta - \alpha\nabla{J(\theta)}$$
Gradient of Linear Regression:
$$
\begin{equation}
\begin{split}
\frac{\partial }{\partial \theta_{j}}J(\theta) &= \frac{\partial }{\partial \theta_{j}}\frac{1}{2}\sum_{i=1}^{n}(h(x^{(i)}) - y^{(i)})^2 \\
&=\sum_{i=1}^{n}(h(x^{(i)}) - y^{(i)})\cdot\frac{\partial }{\partial \theta_{j}}(h(x^{(i)}) - y^{(i)})\\
& =\sum_{i=1}^{n}(h(x^{(i)}) - y^{(i)})\cdot{x_{j}}^{(i)}
\end{split}
\end{equation}
$$
Combine all dimensions:
$$\theta \to \theta - \alpha\sum_{i=1}^{n}(h(x^{(i)}) - y^{(i)})\cdot{x^{(i)}} $$
Write in matrix form:
$$\theta \to \theta - \alpha{X^{T}}(X\theta-y) $$
where $X \in \mathbb{R}^{n\times{(d+1)}}$ has the augmented examples $x^{(i)}$ as rows and $y \in \mathbb{R}^{n}$.
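A minimal NumPy sketch of this update rule (with hypothetical step size and iteration count, and `X` already augmented with the column of ones):
```python
import numpy as np

def gradient_descent(X, y, alpha=0.01, n_iters=1000):
    """Batch gradient descent for least squares; X has shape (n, d + 1)."""
    theta = np.zeros(X.shape[1])
    for _ in range(n_iters):
        theta -= alpha * X.T.dot(X.dot(theta) - y)  # theta <- theta - alpha * X^T (X theta - y)
    return theta
```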
## Analytic Solution
Setting the gradient to zero gives the normal equations:
$$\nabla{J(\theta)} = X^{T}X\theta - X^{T}y = 0$$
If $X^{T}X$ is invertible:
$$\theta = (X^{T}X)^{-1}X^{T}y$$
Otherwise the normal equations still have a solution, since $X^{T}y \in \operatorname{range}(X^{T}) = \operatorname{range}(X^{T}X)$. To see why the ranges coincide:
$$X^{T}X\theta=0 \Rightarrow \theta^{T}X^{T}X\theta = (X\theta)^{T}X\theta=0 \Rightarrow X\theta = 0$$
$$\Rightarrow \mbox{null}(X^{T}X) = \mbox{null}(X) \Rightarrow \mbox{range}(X^{T}X) = \mbox{range}(X^{T})$$
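In code the analytic solution is a single call (a sketch; `np.linalg.lstsq` also covers the non-invertible case by returning a least-squares solution):
```python
# Closed-form solution of the normal equations (sketch)
theta = np.linalg.lstsq(X, y, rcond=None)[0]
# equivalently, when X.T @ X is invertible:
# theta = np.linalg.inv(X.T @ X) @ (X.T @ y)
```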
## Examples
```python
from sklearn.datasets import load_boston
X, y = load_boston(return_X_y=True)
X.shape, y.shape
```
((506, 13), (506,))
```python
from sklearn.linear_model import LinearRegression
reg = LinearRegression()
reg.fit(X, y)
```
LinearRegression()
```python
from sklearn.metrics import mean_squared_error
mean_squared_error(y, reg.predict(X))
```
21.894831181729202
## Geometric Interpretation
Denote the linear space $S = \mbox{span}\left \{\mbox{columns of } X \right \}$; any element of $S$ can be written as $X\theta$.
$X\theta$ is the projection of $y$ on $S \Leftrightarrow$ $X\theta - y$ is orthogonal to $S \Leftrightarrow$ orthogonal to the columns of $X \Leftrightarrow X^{T}(X\theta - y)=0$.
Linear regression $\Leftrightarrow$ finding the projection of $y$ onto $S$.
## Probabilistic Interpretation
Assume targets and inputs are related via:
$$y^{(i)} = \theta^{T}x^{(i)} + \epsilon^{(i)}$$
where $\epsilon^{(i)}$ is the error term and distributed IID according to Gaussian with mean 0 and variance $\sigma^{2}$:
$$p(\epsilon^{(i)}) = \frac{1}{\sqrt{2\pi}\sigma}exp\left (-\frac{(\epsilon^{(i)})^{2}}{2\sigma^{2}}\right )$$
This is equivalent to saying (note that $\theta$ is not a random variable here):
$$p(y^{(i)}|x^{(i)}; \theta) = \frac{1}{\sqrt{2\pi}\sigma}exp\left ( -\frac{(y^{(i)} - \theta^{T}x^{(i)})^{2}}{2\sigma^{2}}\right)$$
The likelihood function:
$$
\begin{equation}
\begin{split}
L(\theta) &= \prod_{i=1}^{n}p(y^{(i)}|x^{(i)}; \theta) \\
&= \prod_{i=1}^{n}\frac{1}{\sqrt{2\pi}\sigma}exp\left ( -\frac{(y^{(i)} - \theta^{T}x^{(i)})^{2}}{2\sigma^{2}}\right)
\end{split}
\end{equation}
$$
Maximize the log likelihood:
$$
\begin{equation}
\begin{split}
\log(L(\theta)) &= \log\prod_{i=1}^{n}\frac{1}{\sqrt{2\pi}\sigma}\exp\left ( -\frac{(y^{(i)} - \theta^{T}x^{(i)})^{2}}{2\sigma^{2}}\right) \\
&= \sum_{i=1}^{n}\log\frac{1}{\sqrt{2\pi}\sigma}\exp\left ( -\frac{(y^{(i)} - \theta^{T}x^{(i)})^{2}}{2\sigma^{2}}\right) \\
&= n\log\frac{1}{\sqrt{2\pi}\sigma} - \frac{1}{2\sigma^{2}}\sum_{i=1}^{n}(y^{(i)} - \theta^{T}x^{(i)})^{2}
\end{split}
\end{equation}
$$
hence, maximizing the log likelihood gives the same answer as minimizing:
$$\frac{1}{2}\sum_{i=1}^{n}(y^{(i)} - \theta^{T}x^{(i)})^{2} = J(\theta)$$
Linear regression $\Leftrightarrow $ Maximum Likelihood Estimate given Gaussian error.
----
# day06: Gradient Descent for Linear Regression
# Objectives
* Learn how to fit weight parameters of Linear Regression to a simple dataset via gradient descent
* Understand impact of step size
* Understand impact of initialization
# Outline
* [Part 1: Loss and Gradient for 1-dim. Linear Regression](#part1)
* [Part 2: Gradient Descent Algorithm in a few lines of Python](#part2)
* [Part 3: Debugging with Trace Plots](#part3)
* [Part 4: Selecting the step size](#part4)
* [Part 5: Selecting the initialization](#part5)
* [Part 6: Using SciPy's built-in routines](#part6)
# Takeaways
* Gradient descent is a simple algorithm that can be implemented in a few lines of Python
    * Practical issues include selecting step size and initialization
* Step size matters a lot
    * Need to select carefully for each problem
* Initialization of the parameters can matter too!
* scipy offers some useful tools for gradient-based optimization
    * scipy's toolbox cannot do scalable "stochastic" methods (requires a modest size dataset, not too big)
    * "L-BFGS-B" method is highly recommended if you have your loss and gradient functions available
```python
import numpy as np
```
```python
# import plotting libraries
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('seaborn') # pretty matplotlib plots
import seaborn as sns
sns.set('notebook', font_scale=1.25, style='whitegrid')
```
# Create simple dataset: y = 1.234 * x + noise
We will *intentionally* create a toy dataset where we know that a good solution has slope near 1.234.
Naturally, the best slope for the finite dataset of N=100 examples we create won't be exactly 1.234 (because of the noise added plus the fact that our dataset size is limited).
```python
def create_dataset(N=100, slope=1.234, noise_stddev=0.1, random_state=0):
random_state = np.random.RandomState(int(random_state))
# input features
x_N = np.linspace(-2, 2, N)
# output features
y_N = slope * x_N + random_state.randn(N) * noise_stddev
return x_N, y_N
```
```python
x_N, y_N = create_dataset(N=50, noise_stddev=0.3)
```
```python
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(5,5))
plt.plot(x_N, y_N, 'k.');
plt.xlabel('x');
plt.ylabel('y');
```
# Part 1: Gradient Descent for 1-dim. Linear Regression
## Define model
Consider the *simplest* linear regression model. A single weight parameter $w \in \mathbb{R}$ representing the slope of the prediction line. No bias/intercept.
To make predictions, we just compute the weight multiplied by the input feature
$$
\hat{y}(x) = w \cdot x
$$
## Define loss function
We want to minimize the total *squared error* across all N observed data examples (input features $x_n$, output responses $y_n$)
\begin{align}
\min_{w \in \mathbb{R}} ~~ &\ell(w)
\\
\text{calc_loss}(w) = \ell(w) &= \sum_{n=1}^N (y_n - w x_n)^2
\end{align}
### Exercise 1A: Complete the code below
You should make it match the math expression above.
```python
def calc_loss(w):
''' Compute loss for slope-only least-squares linear regression
Args
----
w : float
Value of slope parameter
Returns
-------
loss : float
Sum of squared error loss at provided w value
'''
yhat_N = x_N * w
sum_squared_error = 0.0 # todo compute the sum of squared error between y and yhat
return sum_squared_error
```
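One possible completion of the `todo` line, matching the math above (a sketch):
```python
# Sketch: sum of squared differences between observed y and predictions yhat
sum_squared_error = np.sum(np.square(y_N - yhat_N))
```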
# Define the gradient function
\begin{align}
\text{calc_grad}(w) = \ell'(w) &= \frac{\partial}{\partial w} [ \sum_{n=1}^N (y_n - w x_n)^2]
\\
&= \sum_{n=1}^N 2 (y_n - w x_n) (-x_n)
\\
&= 2 \sum_{n=1}^N (w x_n - y_n) (x_n)
\\
&= 2 w \left( \sum_{n=1}^N x_n^2 \right) - 2 \sum_{n=1}^N y_n x_n
\end{align}
Below, we've implemented the gradient calculation in code for you
```python
def calc_grad(w):
''' Compute gradient for slope-only least-squares linear regression
Args
----
w : float
Value of slope parameter
Returns
-------
g : float
Value of derivative of loss function at provided w value
'''
g = 2.0 * w * np.sum(np.square(x_N)) - 2.0 * np.sum(x_N * y_N)
return g
```
## Plot loss evaluated at each w from -3 to 8
We should see a "bowl" shape with one *global* minima, because our optimization problem is "convex"
```python
w_grid = np.linspace(-3, 8, 300) # create array of 300 values between -3 and 8
```
```python
loss_grid = np.asarray([calc_loss(w) for w in w_grid])
plt.plot(w_grid, loss_grid, 'b.-');
plt.xlabel('w');
plt.ylabel('loss(w)');
```
### Discussion 1b: Visually, at what value of $w$ does the loss function have a minimum? Is it near where you would expect (hint: look above for the "true" slope value used to generate the data)?
### Exercise 1c: Write NumPy code to identify which entry in the w_grid array corresponds to the lowest entry in the loss_grid array
Hint: use np.argmin
```python
# TODO write code here
```
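One possible answer (a sketch using `np.argmin` as hinted):
```python
best_idx = np.argmin(loss_grid)                      # index of the smallest loss value
print("w with lowest loss: %.4f" % w_grid[best_idx])
```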
## Sanity check: plot gradient evaluated at each w from -3 to 8
```python
grad_grid = np.asarray([calc_grad(w) for w in w_grid])
plt.plot(w_grid, grad_grid, 'b.-');
plt.xlabel('w');
plt.ylabel('grad(w)');
```
### Discussion 1d: Visually, at what value of $w$ does the gradient function cross zero? Is it the same place as the location of the minimum in the loss above?
TODO interpret the graph above and write your answer here, then discuss with your group
### Exercise 1d: Numerically, at which value of w does grad_grid cross zero?
We might try to estimate numerically where the gradient crosses zero.
We could do this in a few steps:
1) Compute the distance from each gradient in `grad_grid` to 0.0 (we could use just absolute distance)
2) Find the index of `grad_grid` with smallest distance (using `np.argmin`)
3) Plug that index into `w_grid` to get the $w$ value corresponding to that zero-crossing
```python
dist_from_zero_G = np.abs(grad_grid - 0.0)
zero_cross_index = 0 # TODO fix me for step 2 above
print("Zero crossing occurs at w = %.4f" % w_grid[0]) # TODO fix me for step 3 above
```
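One way to fill in steps 2 and 3 above (a sketch):
```python
zero_cross_index = np.argmin(dist_from_zero_G)                        # step 2
print("Zero crossing occurs at w = %.4f" % w_grid[zero_cross_index])  # step 3
```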
## Part 2: Gradient Descent (GD) as an algorithm in Python
### Define minimize_via_grad_descent algorithm
Can you understand what each step of this algorithm does?
```python
def minimize_via_grad_descent(calc_loss, calc_grad, init_w=0.0, step_size=0.001, max_iters=100):
''' Perform minimization of provided loss function via gradient descent
Args
----
calc_loss : function
calc_grad : function
init_w : float
step_size : float
max_iters : positive int
Return
----
wopt: float
array of optimized weights that approximately gives the least error
info_dict : dict
Contains information about the optimization procedure useful for debugging
Entries include:
* trace_loss_list : list of loss values
* trace_grad_list : list of gradient values
'''
w = 1.0 * init_w
grad = calc_grad(w)
# Create some lists to track progress over time (for debugging)
trace_loss_list = []
trace_w_list = []
trace_grad_list = []
for iter_id in range(max_iters):
if iter_id > 0:
w = w - step_size * grad
loss = calc_loss(w)
grad = calc_grad(w)
print(" iter %5d/%d | w % 13.5f | loss % 13.4f | grad % 13.4f" % (
iter_id, max_iters, w, loss, grad))
trace_loss_list.append(loss)
trace_w_list.append(w)
trace_grad_list.append(grad)
wopt = w
info_dict = dict(
trace_loss_list=trace_loss_list,
trace_w_list=trace_w_list,
trace_grad_list=trace_grad_list)
return wopt, info_dict
```
### Discussion 2a: On which line of the above function does the *parameter update* happen?
Remember, in math, the parameter update of gradient descent is this:
$$
w \gets w - \alpha \nabla_w \ell(w)
$$
where $\alpha > 0$ is the step size.
In words, this math says *move* the parameter $w$ from its current value a *small step* in the "downhill" direction (indicated by gradient).
TODO write down here which line above *you* think it is, then discuss with your group
```python
```
### Try it! Run GD with step_size = 0.001
Running the cell below will have the following effects:
1) one line will be printed for every iteration, indicating the current w value and its associated loss
2) the "optimal" value of w will be stored in the variable named `wopt` returned by this function
3) a dictionary of information useful for debugging will be stored in the `info_dict` returned by this function
```python
wopt, info_dict = minimize_via_grad_descent(calc_loss, calc_grad, step_size=0.001);
```
### Discussion 2b: Does it appear from the *loss* values in trace above that the GD procedure converged?
### Discussion 2c: Does it appear from the *parameter* values in trace above that the GD procedure converged?
### Exercise 2d: What exactly is the gradient of the returned "optimal" value of w?
Use your `calc_grad` function to check the result. What is the gradient of the returned `wopt`?
Does this look totally converged? Can you find a $w$ value that would be even better?
```python
# TODO call calc_grad on the return value from above
```
## Part 3: Diagnostic plots for gradient descent
Let's look at some trace plots.
Whenever you run gradient descent, an *excellent* debugging strategy is to plot the loss, the gradient magnitude, and the parameter of interest at every step of the algorithm.
```python
fig, axes = plt.subplots(nrows=1, ncols=3, sharex=True, sharey=False, figsize=(18,3.6))
axes[0].plot(info_dict['trace_loss_list']);
axes[0].set_title('loss');
axes[1].plot(info_dict['trace_grad_list']);
axes[1].set_title('grad');
axes[2].plot(info_dict['trace_w_list']);
axes[2].set_title('w');
plt.xlim([0, 100]);
```
### Discussion 3a: What value do we expect the *loss* to converge to? Should it always be zero?
### Discussion 3b: What value do we expect the *gradient* to converge to? Should it always be zero?
# Part 4: Larger step sizes
## Try with larger step_size = 0.014
```python
wopt, info_dict = minimize_via_grad_descent(calc_loss, calc_grad, step_size=0.014);
```
```python
fig, axes = plt.subplots(nrows=1, ncols=3, sharex=True, sharey=False, figsize=(12,3))
axes[0].plot(info_dict['trace_loss_list'], '.-');
axes[0].set_title('loss');
axes[1].plot(info_dict['trace_grad_list'], '.-');
axes[1].set_title('grad');
axes[2].plot(info_dict['trace_w_list'], '.-');
axes[2].set_title('w');
```
### Discussion 4a: What happens here? How is this step size different from the one used above?
TODO discuss with your group
## Try with even larger step size 0.1
```python
wopt, info_dict = minimize_via_grad_descent(calc_loss, calc_grad, step_size=0.1, max_iters=25);
```
### Discussion 4b: What happens here with this even larger step size? Is it converging?
### Exercise 4c: What is the largest step size you can get to converge reasonably?
```python
# TODO try some other step sizes here
wopt, info_dict = minimize_via_grad_descent(calc_loss, calc_grad, step_size=0) # TODO fix step_size
```
# Part 5: Sensitivity to initial conditions
### Exercise 5a: Try to call the defined procedure with a different initial condition for $w$. What happens?
You could try $w = 5.0$ or something else.
```python
# TODO try some other initial condition for init_w
wopt2, info_dict2 = minimize_via_grad_descent(calc_loss, calc_grad, init_w=0, step_size=0.001, max_iters=10) # TODO fix step_size
```
### Exercise 5b: Try again with another initial value.
```python
# TODO try some other initial condition for init_w
wopt3, info_dict3 = minimize_via_grad_descent(calc_loss, calc_grad, init_w=0, step_size=0.001, max_iters=10) # TODO fix
```
### Exercise 5c: Make a trace plot
Make a trace plot showing convergence from multiple different starting values for $w$. What do you notice?
```python
# TODO
```
# Part 6: Using scipy's built-in gradient optimization tools
```python
import scipy.optimize
```
Take a look at SciPy's built-in minimization toolbox:
<https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html#scipy.optimize.minimize>
We'll use "L-BFGS-B", a method that uses the function and its gradient.
This is a "quasi-Newton" method (it approximates second-order information), which you can get an intuition for here:
https://en.wikipedia.org/wiki/Newton%27s_method_in_optimization
```python
result = scipy.optimize.minimize(calc_loss, 0.0, jac=calc_grad, method='L-BFGS-B')
# Returns an object with several fields, let's print the result to get an idea
print(result)
```
```python
print(str(result.message))
```
```python
best_w = result.x
print(best_w)
```
```python
```
# Principal Component Analysis (PCA)
| concept | description |
|---------|-------------|
| dimensionality reduction | replacing features by a smaller subset |
| principal components | new features the data is transformed to |
| vanilla PCA | use if the data fits into memory |
| incremental PCA | PCA variant that works in batches (for lots of data or frequent updates) |
| randomized PCA | faster when reducing number of dimensions heavily |
| Kernel PCA | PCA variant for non-linear transformations |
## Background
**Principal Component Analysis (PCA)** is an **unsupervised learning** method used for **dimensionality reduction**.
For multivariate data sets, it often happens that a few measured variables already contain a large part of the information, while the remaining variables contribute little. PCA is a procedure that allows us to condense such data sets to their “most informative” dimensions.
PCA transforms the feature space X into a new feature space T, the principal components W:
\begin{align}
T = X * W
\end{align}
## Principal Components
Each principal component is a linear combination of the original features that explains the **maximum variance** in the data.
The main difference to other dimensionality reduction methods is that the **principal components are ranked** – the **first principal component always explains the highest proportion of the variance**. The second principal component explains the next highest proportion and so on.
The **maximum number of principal components is the number of original features**.
- In that case, no information is lost in the transformation.
With less principal components than original features, the new feature space is an approximation of the original space.
\begin{align}
T \approx X * W
\end{align}
**Note:** The components work similarly as in NMF – the more components you have, the better you represent the data. However, in NMF each component tends to explain an **equal** part of the variance.
---
## Algorithm
### Step 1: Calculate the covariance matrix
It is assumed that the variables that have the greatest variance also contain the most information.
To measure the redundancy in the data, we calculate the covariance of each measured variable with every other variable. This results in the covariance matrix C:
\begin{align}
C_{ij}=\frac{1}{n-1}\sum^n_{k=1}\left(x_{ik}-\bar{x}_i\right)\left(x_{jk}-\bar{x}_j\right)
\end{align}
where i and j are the features and n the number of data points.
**Note:** The features may have very different value ranges. Before calculating the covariance it is advisable not only to de-mean the data but also to scale it.
### Step 2: Calculate the eigenvectors
Calculate the eigenvectors of the matrix C and sort them by their corresponding eigenvalues (largest first). The eigenvectors are the so-called principal components, and the associated eigenvalues indicate how much variance (“information”) is contained in each principal component.
The cumulative sum of the eigenvalues gives the fraction of variance that is explained by the first $k$ principal components.
### Step 3: Transform the data
Project the original data set by multiplying it with the desired number of principal components:
\begin{align}
T=X⋅W
\end{align}
where X is the d-dimensional (normalized) data set and W is a matrix, containing the principal components in columns.
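The three algorithm steps above can be sketched in a few lines of NumPy. This is an illustration only — scikit-learn's implementation uses an SVD rather than an explicit eigendecomposition, and the function name below is our own:
```python
import numpy as np

def pca_transform(X, n_components):
    # Step 1: de-mean the data and compute the covariance matrix
    Xc = X - X.mean(axis=0)
    C = np.cov(Xc, rowvar=False)
    # Step 2: eigendecomposition; sort eigenvectors by descending eigenvalue
    eigvals, eigvecs = np.linalg.eigh(C)   # eigh, since C is symmetric
    order = np.argsort(eigvals)[::-1]
    W = eigvecs[:, order[:n_components]]
    # Step 3: project the data onto the principal components
    T = Xc @ W
    explained_ratio = eigvals[order][:n_components] / eigvals.sum()
    return T, W, explained_ratio
```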
## Applications
- reduce calculation time
- reduce overfitting
- preprocessing step for other unsupervised methods (e.g. clustering or t-SNE)
### What is it, and what does it have to do with this week's project?
#### Let's start with some familiar code:
```python
from keras.datasets import mnist
from matplotlib import pyplot as plt
%matplotlib inline
(xtrain, ytrain), (xtest, ytest) = mnist.load_data()
```
Using TensorFlow backend.
#### And let's write a function that takes in numpy arrays of images and renders/plots the first 40 of them:
```python
def draw_array(x):
plt.figure(figsize=(12,7))
for i in range(40):
plt.subplot(5, 8, i+1)
plt.imshow(x[i], cmap=plt.cm.Greys)
plt.axis('off')
draw_array(xtrain)
```
---
### 1. Need to reshape the data so that it can be properly handled in sklearn
```python
xtrain.shape
```
(60000, 28, 28)
```python
xtrain = xtrain.reshape(-1, 28*28)
```
```python
xtrain.shape
```
(60000, 784)
### StandardScaler
- Some people recommend using `StandardScaler` to de-mean (and scale) the data
```python
from sklearn.preprocessing import StandardScaler
```
```python
ss = StandardScaler()
ss_xtrain = ss.fit_transform(xtrain)
```
```python
ss_xtrain.shape
```
(60000, 784)
### 2. Split training data even further (to speed up the calculation)
```python
xsmall = xtrain[:1000]
ysmall = ytrain[:1000]
```
```python
xsmall.shape
```
(1000, 784)
### 3. Initialize PCA from Scikit-Learn and fit on X data
- By how many components would we like to decompose our data?
```python
from sklearn.decomposition import PCA
```
```python
m = PCA(n_components=40)
```
```python
m.fit(xtrain)
```
PCA(copy=True, iterated_power='auto', n_components=40, random_state=None,
svd_solver='auto', tol=0.0, whiten=False)
---
- Notice that each of the components gives us 784 coefficients:
```python
m.components_.shape
```
(40, 784)
---
- And we can see the percentage of the overall variation in the data that each principal contributes.
```python
m.explained_variance_ratio_
```
array([0.09704664, 0.07095924, 0.06169089, 0.05389419, 0.04868797,
0.04312231, 0.0327193 , 0.02883895, 0.02762029, 0.02357001,
0.0210919 , 0.02022991, 0.01715818, 0.01692111, 0.01578641,
0.01482953, 0.01324561, 0.01276897, 0.01187263, 0.01152684,
0.01066166, 0.01006713, 0.00953573, 0.00912543, 0.00883404,
0.00839317, 0.00812579, 0.00786364, 0.00744733, 0.00690853,
0.00658087, 0.00648136, 0.00602485, 0.00586286, 0.00569746,
0.00543428, 0.00505236, 0.0048757 , 0.00480015, 0.00471926])
### 4. Use the trained PCA model to transform the data to a lower number of features.
- new data - with less features
```python
#new data - with less features
xt = m.transform(xsmall)
```
```python
xsmall.shape
```
(1000, 784)
```python
xt.shape
```
(1000, 40)
---
### 5. For comparison, draw the original images before any transformation.
```python
draw_array(xsmall.reshape(-1,28,28))
```
---
### 6. Use the `inverse_transform()` method to expand the reduced data back into its original shape, and visualize what the data looks like after being reduced to N features.
```python
xback = m.inverse_transform(xt)
draw_array(xback.reshape(-1,28,28))
```
---
### 7. Can we actually see what the components look like, as well?
```python
m.explained_variance_ratio_
```
array([0.09704664, 0.07095924, 0.06169089, 0.05389419, 0.04868797,
0.04312231, 0.0327193 , 0.02883895, 0.02762029, 0.02357001,
0.0210919 , 0.02022991, 0.01715818, 0.01692111, 0.01578641,
0.01482953, 0.01324561, 0.01276897, 0.01187263, 0.01152684,
0.01066166, 0.01006713, 0.00953573, 0.00912543, 0.00883404,
0.00839317, 0.00812579, 0.00786364, 0.00744733, 0.00690853,
0.00658087, 0.00648136, 0.00602485, 0.00586286, 0.00569746,
0.00543428, 0.00505236, 0.0048757 , 0.00480015, 0.00471926])
```python
comp = m.components_.reshape(-1,28,28)
draw_array(comp)
```
```python
msmall = PCA(n_components=1)
msmall.fit(xsmall)
```
PCA(copy=True, iterated_power='auto', n_components=1, random_state=None,
svd_solver='auto', tol=0.0, whiten=False)
```python
xs = msmall.transform(xsmall)
xs = msmall.inverse_transform(xs)
```
```python
draw_array(xs.reshape(-1,28,28))
```
---
### 8. How can this be applied in practice (e.g. on this week's project)?
- Speed up the training of your models – save time
- Upsampling technique – use PCA to create new data points
---
### 9. Other interesting analysis:
- Examine the false positives using a confusion matrix.
```python
m
```
PCA(copy=True, iterated_power='auto', n_components=40, random_state=None,
svd_solver='auto', tol=0.0, whiten=False)
```python
xt.shape
```
(1000, 40)
```python
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=5, max_depth=3)
rf.fit(xt,ysmall)
print(rf.score(xt,ysmall))
```
0.656
```python
xtest.shape
```
(10000, 28, 28)
```python
xtest = m.transform(xtest.reshape(-1,784))
```
```python
ypred = rf.predict(xtest)
```
```python
from sklearn.metrics import confusion_matrix
```
```python
conf = confusion_matrix(ytest, ypred)
conf
```
array([[ 520, 61, 23, 69, 5, 11, 100, 48, 136, 7],
[ 1, 1042, 13, 36, 0, 17, 7, 3, 16, 0],
[ 25, 68, 560, 101, 42, 15, 105, 51, 59, 6],
[ 97, 36, 16, 635, 5, 73, 17, 64, 61, 6],
[ 1, 15, 26, 16, 601, 19, 42, 175, 27, 60],
[ 33, 100, 64, 108, 42, 199, 35, 94, 198, 19],
[ 10, 70, 48, 79, 27, 23, 615, 56, 13, 17],
[ 11, 68, 5, 8, 52, 21, 8, 693, 31, 131],
[ 5, 35, 32, 61, 12, 32, 20, 79, 649, 49],
[ 6, 26, 3, 11, 253, 25, 17, 295, 35, 338]])
```python
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(12,10))
sns.heatmap(conf, cmap='Oranges', annot=True, fmt='g')
```
---
### 10. Other interesting analysis:
- We can see how much of an impact adding more components has on capturing the overall variance of the original data set.
- We can do this by transforming the data with another PCA model that captures the maximum number of components (which in our case is 784) and plotting the cumulative sum of the explained variance ratio.
```python
xtrain.shape
```
(60000, 784)
```python
m784 = PCA(n_components=784)  # keep the maximum number of components
x = m784.fit_transform(xtrain)
```
```python
import numpy as np
```
```python
ex_var = m784.explained_variance_ratio_
ex = range(784)
y = np.array(ex_var).cumsum()
```
```python
plt.plot(ex,y)
```
```python
```
```python
%pylab inline
import seaborn
```
Populating the interactive namespace from numpy and matplotlib
```python
from plotting import plot_trajectory
```
# Joint Space Trajectories
## Point-to-Point Motion
### Cubic Polynomial
\begin{align}
q(t) &= a_3 t^3 + a_2 t^2 + a_1 t + a_0 \\
\dot{q}(t) &= 3 a_3 t^2 + 2 a_2 t + a_1 \\
\ddot{q}(t) &= 6 a_3 t + 2 a_2
\end{align}
\begin{align}
a_0 &= q_i \\
a_1 &= \dot{q}_i \\
a_3 t_f^3 + a_2 t_f^2 + a_1 t_f + a_0 &= q_f \\
3 a_3 t_f^2 + 2 a_2 t_f + a_1 &= \dot{q}_f
\end{align}
\begin{align}
\begin{pmatrix} q_i \\ q_f \\ \dot q_i \\ \dot q_f \end{pmatrix} =
\begin{pmatrix}
0 & 0 & 0 & 1 \\
t^3 & t^2 & t & 1 \\
0 & 0 & 1 & 0 \\
3t^2 & 2t & 1 & 0
\end{pmatrix}
\begin{pmatrix} a_3 \\ a_2 \\ a_1 \\ a_0 \end{pmatrix}
\end{align}
```python
def cubic_trajectory(current_position, target_position,
current_velocity, target_velocity,
duration_in_seconds):
trajectories = []
t = duration_in_seconds
xs = linspace(0,t)
for qi, qf, dqi, dqf in zip(current_position, target_position,
current_velocity, target_velocity):
A = np.array([[0.0,0.0,0.0,1.0],
[t**3, t**2, t, 1.],
[0.0, 0.0, 1.0, 0.0],
[3.0 * t**2, 2*t, 1.0, 0.0]])
b = np.array([qi, qf, dqi, dqf])
x = np.linalg.solve(A,b)
qs = np.polyval(x, xs)
dqs = np.polyval([3. * x[0], 2. * x[1], x[2]], xs)
ddqs = np.polyval([6. * x[0], 2. * x[1]], xs)
trajectories.append((qs, dqs, ddqs))
return trajectories
```
```python
qi = [0,np.pi]
qf = [np.pi,0]
dqi = [0,0]
dqf = [0,0]
trajectories = cubic_trajectory(qi, qf, dqi, dqf, 50)
```
```python
plot_trajectory(trajectories[0], iscubic=True)
```
### Quintic Polynomial
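By analogy with the cubic case, a quintic polynomial has six coefficients, which are fixed by specifying position, velocity, and acceleration at both endpoints:
\begin{align}
q(t) &= a_5 t^5 + a_4 t^4 + a_3 t^3 + a_2 t^2 + a_1 t + a_0 \\
\dot{q}(t) &= 5 a_5 t^4 + 4 a_4 t^3 + 3 a_3 t^2 + 2 a_2 t + a_1 \\
\ddot{q}(t) &= 20 a_5 t^3 + 12 a_4 t^2 + 6 a_3 t + 2 a_2
\end{align}
Evaluating these at $t = 0$ and $t = t_f$ gives the six linear equations assembled in the matrix `A` below.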
```python
def quintic_trajectory(current_position, target_position, current_velocity,
target_velocity, current_acceleration,
target_acceleration, duration_in_seconds):
trajectories = []
t = duration_in_seconds
xs = linspace(0, t)
for qi, qf, dqi, dqf, ddqi, ddqf in zip(current_position, target_position,
current_velocity, target_velocity,
current_acceleration, target_acceleration):
A = np.array(
[[0.0, 0.0, 0.0, 0.0, 0.0, 1.0],
[t**5, t**4, t**3, t**2, t, 1.0],
[0.0, 0.0, 0.0, 0.0, 1.0, 0.0],
[5. * t**4, 4. * t**3, 3. * t**2, 2. * t, 1., 0.0],
[0.0, 0.0, 0.0, 2.0, 0.0, 0.0],
[20. * t**3, 12. * t**2, 6. * t, 2., 0.0, 0.0]])
b = np.array([qi, qf, dqi, dqf, ddqi, ddqf])
x = np.linalg.solve(A, b)
qs = np.polyval(x, xs)
dqs = np.polyval([5. * x[0], 4. * x[1], 3. * x[2], 2. * x[3], x[4]], xs)
ddqs = np.polyval([20. * x[0], 12. * x[1], 6. * x[2], 2. * x[3]], xs)
trajectories.append((qs, dqs, ddqs))
return trajectories
```
```python
qi = [0,np.pi]
qf = [np.pi,0]
dqi = [0,0]
dqf = [0,0]
ddqi = [0,0]
ddqf = [0,0]
trajectories = quintic_trajectory(qi, qf, dqi, dqf, ddqi, ddqf, 50)
```
```python
plot_trajectory(trajectories[0], iscubic=False)
```
```python
from IPython.core.display import HTML, Image
css_file = 'style.css'
HTML(open(css_file, 'r').read())
```
```python
# Imports
from sympy import init_printing, Matrix, symbols, sqrt, Rational, acos
from numpy import matrix, transpose, sqrt
from numpy.linalg import pinv, inv, det, svd, norm
from scipy.linalg import pinv2
init_printing()
```
# An overview of key ideas
## Moving from vectors to matrices
Consider two position vectors $\underline{u} , \underline{v} \in \mathbb{R}^{3}$. They can be expressed as column vectors.
$$ \underline{u}=\begin{bmatrix}1\\-1\\0\end{bmatrix} \\ \underline{v}=\begin{bmatrix}0\\1\\-1\end{bmatrix} \tag{1} $$
We can add constant scalar multiples of these vectors, with the scalars $x_1$ and $x_2$.
$$ {x}_{1}\underline{u}+{x}_{2}\underline{v}=\underline{b} \tag{2} $$
This is simple vector addition. It is easy to visualize that all possible linear combinations of these two vectors fill a plane through the origin. Adding a third vector that does not lie in this plane will, through all possible linear combinations, extend to fill all of three-dimensional space.
$$ \underline{w}=\begin{bmatrix}0\\0\\1\end{bmatrix} \tag{3} $$
We now have (4) below.
$$ {x}_{1}\underline{u}+{x}_{2}\underline{v}+{x}_{3}\underline{w}={b} \tag{4} $$
We can write the coefficients of the vectors as a vector of unknowns, $\underline{x}$, shown in (5).
$$ \underline{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}\tag{5}$$
Let us now write (4) in the form $A \underline{x} = \underline{b}$.
$$ \begin{bmatrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 0 & -1 & 1 \end{bmatrix}\begin{bmatrix} { x }_{ 1 } \\ { x }_{ 2 } \\ { x }_{ 3 } \end{bmatrix}=\begin{bmatrix} { x }_{ 1 } \\ { x }_{ 2 }-{ x }_{ 1 } \\ { x }_{ 3 }-{ x }_{ 2 } \end{bmatrix} \tag{6} $$
We create matrix $A$ in code below.
```python
A = Matrix([[1, 0, 0], [-1, 1, 0], [0, -1, 1]]) # Creating a matrix and putting
# it into a computer variable called A
A # Displaying it to the screen
```
This is the column view of matrix-vector multiplication, as opposed to the row view. The matrix is seen as a collection of columns, each representing a vector. Each element of the column vector $\underline{x}$ is a scalar multiple of the corresponding column in the matrix $A$.
$$ { x }_{ 1 }\begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}+{ x }_{ 2 }\begin{bmatrix} 0 \\ 1 \\ -1 \end{bmatrix}+{ x }_{ 3 }\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}=\begin{bmatrix} { x }_{ 1 } \\ -{ x }_{ 1 }+{ x }_{ 2 } \\ -{ x }_{ 2 }+{ x }_{ 3 } \end{bmatrix} = {x}_{1}\underline{u}+{x}_{2}\underline{v}+{x}_{3}\underline{w} \tag{7}$$
If we consider the solution vector, $\underline{b}$, then we can write (8).
$$ \begin{bmatrix} { x }_{ 1 } \\ { x }_{ 2 }-{ x }_{ 1 } \\ { x }_{ 3 }-{ x }_{ 2 } \end{bmatrix} = \begin{bmatrix}{b}_{1}\\{b}_{2}\\{b}_{3}\end{bmatrix} \tag{8} $$
By substitution this can be _converted_ to (9).
$$ \begin{bmatrix} { x }_{ 1 } \\ { x }_{ 2 } \\ { x }_{ 3 } \end{bmatrix}=\begin{bmatrix} { b }_{ 1 } \\ { b }_{ 1 }+{ b }_{ 2 } \\ { b }_{ 1 }+{ b }_{ 2 }+{ b }_{ 3 } \end{bmatrix} \tag{9} $$
This is, in fact, a contrived example. Looking closely, we can write the right-hand side of (9) as a matrix times $\underline{b}$.
$$ \begin{bmatrix}1&0&0\\1&1&0\\1&1&1\end{bmatrix}\begin{bmatrix}{b}_{1}\\{b}_{2}\\{b}_{3}\end{bmatrix} \tag{10} $$
This matrix is the inverse of $A$ such that $\underline{x} = A^{-1} \underline{b}$. We will learn much more about matrix inverses and whether matrices are invertible. The `sympy` library has a method called `.inv()` that will calculate the inverse.
```python
A.det()
```
```python
A.inv() # Inverse of matrix A
```
Something else we will come across is _lower triangular form_. Notice that all the values above the _main diagonal_ (which runs from the top-left to the bottom-right) are zero. Such a matrix is _lower triangular_.
Now, let's replace $\underline{w}$ and create a matrix $M$. Unlike our original $\underline{w}$, this one is _special_ (for our purposes).
```python
x1, x2, x3, b1, b2, b3 = symbols('x1, x2, x3, b1, b2, b3') # Creating algebraic symbols
# This reserves these symbols so as not to see them as computer variable names
```
```python
M = Matrix([[1, 0, -1], [-1, 1, 0], [0, -1, 1]]) # Creating a matrix and putting
# it into a computer variable called M
M # Displaying it to the screen
```
And a vector of unknowns.
```python
x_vect = Matrix([[x1], [x2], [x3]]) # Giving this columns vector a computer
# variable name
x_vect
```
Now, $M \underline{x}$ can be calculated.
```python
M * x_vect
```
For the solution vector, we have three equations that we can create.
$$ { x }_{ 1 }-{ x }_{ 3 }={ b }_{ 1 }\\ { x }_{ 2 }-{ x }_{ 1 }={ b }_{ 2 }\\ { x }_{ 3 }-{ x }_{ 2 }={ b }_{ 3 } \tag{11}$$
Adding the left- and right-hand sides gives us $x_1 - x_3 + x_2 - x_1 + x_3 - x_2 = b_1 + b_2 + b_3$.
This leaves us with a constraint for $\underline{b}$ such that $ 0={ b }_{ 1 }+{ b }_{ 2 }+{ b }_{ 3 } $.
The problem is clear to see geometrically as the new $\underline{w}$ is in the same plane as $\underline{u}$ and $\underline{v}$. In essence $\underline{w}$ did not add anything. All combinations of $\underline{u}$, $\underline{v}$, and $\underline{w}$ will still be in the same plane.
The first $\underline{w}$ above created a matrix with three independent columns and their linear combinations could fill all of three-dimensional space (they spanned $\mathbb{R}^{3}$). This made the first matrix what we will call _invertible_ as opposed to the second one, $M$, which is not invertible.
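We can confirm this numerically — a quick check added here: a matrix whose columns are linearly dependent has a zero determinant and is therefore not invertible.
```python
M.det()  # evaluates to 0, so M is singular (not invertible)
```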
Below, we create the three column vectors and check that $- \underline{u} + \left( - \underline{v} \right) = \underline{w}$. A linear combination of columns (vectors) one and two, produces three.
```python
u = Matrix([[1], [-1], [0]])
v = Matrix([[0], [1], [-1]])
w = Matrix([[-1], [0], [1]])
u, v, w
```
```python
-u
```
```python
-v
```
```python
-u - v
```
```python
w
```
```python
-u - v == w
```
True
## Example problems
### Example problem 1
+ Suppose $A$ is a matrix with the following solution
$$ {A}{\underline{x}}=\begin{bmatrix}1\\4\\1\\1\end{bmatrix} \\ \underline{x}=\begin{bmatrix}0\\1\\1\end{bmatrix}+{c}\begin{bmatrix}0\\2\\1\end{bmatrix} $$
+ What can you say about the columns of $A$?
#### Solution
```python
c = symbols('c')
x_vect = Matrix([[0], [1 + 2 * c], [1 + c]])
b = Matrix([[1], [4], [1], [1]])
```
```python
x_vect
```
```python
b
```
Note the following:
+ $\underline{x}$ is of size $3 \times 1$
+ $\underline{b}$ is of size $ 4 \times 1 $
+ Therefor, $A$ must be of size $ 4 \times 3 $ and each column vector in $A$ is in $\mathbb{R}^{4}$
Let's call these columns of $A$ $C_1, C_2, C_3$, as illustrated in (12).
$$ \begin{bmatrix} \vdots & \vdots & \vdots \\ { C }_{ 1 } & { C }_{ 2 } & { C }_{ 3 } \\ \vdots & \vdots & \vdots \\ \vdots & \vdots & \vdots \end{bmatrix} \tag{12} $$
With the particular way in which $\underline{x}$ was written, we can say that we have a particular solution and a special solution, denoted as $ {A}\left({x}_{p}+{c}\cdot{x}_{s}\right)= \underline{b} $.
For $ c = 0 $ we have:
$$ {A}{\underline{x}}_{p}=b $$
For $c = 1 $ we have:
$$ \begin{align} A{ x }_{ p }+A{ \underline{x} }_{ s } &= b \\ \because \quad A{ \underline{x} }_{ p } &= b \\ b+A{ \underline{x} }_{ s } &= b \\ \therefore \quad A{ \underline{x} }_{ s } &= 0 \end{align} $$
We also have that the following
$$ { \underline{x} }_{ p }=\begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix},\quad { \underline{x} }_{ s }=\begin{bmatrix} 0 \\ 2 \\ 1 \end{bmatrix} $$
For $ \underline{x}_{p}$ we have the following:
$$ \begin{bmatrix} \vdots & \vdots & \vdots \\ { C }_{ 1 } & { C }_{ 2 } & { C }_{ 3 } \\ \vdots & \vdots & \vdots \\ \vdots & \vdots & \vdots \end{bmatrix}\begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} = \underline{b} \quad \Rightarrow \quad { C }_{ 2 }+{ C }_{ 3 }= \underline{b} $$
For $ \underline{x}_{s}$ we have the following:
$$ \begin{bmatrix} \vdots & \vdots & \vdots \\ { C }_{ 1 } & { C }_{ 2 } & { C }_{ 3 } \\ \vdots & \vdots & \vdots \\ \vdots & \vdots & \vdots \end{bmatrix}\begin{bmatrix} 0 \\ 2 \\ 1 \end{bmatrix}=\underline { 0 } \quad \Rightarrow \quad 2{ C }_{ 2 }+{ C }_{ 3 } = \underline{0} $$
Solving for $C_2$ and $C_3$ we have the following:
$$ \begin{align} {C}_{3} &= -2{C}_{2} \\ {C}_{2}-2{C}_{2} &= b \\ {C}_{2} &= -b \\ {C}_{3} &= 2b \end{align} $$
As for the first column of $A$, we need to know more about ranks and subspaces. We see, though, that columns two and three are already constant multiples of each other. So, as long as column one is not a constant multiple of $\underline{b}$, we are safe.
$$ A=\begin{bmatrix} \vdots & -1 & 2 \\ { C }_{ 1 } & -4 & 8 \\ \vdots & -1 & 2 \\ \vdots & -1 & 2 \end{bmatrix} $$
```python
```
## Constraint Handling
### Inequality Constraints
**If somebody is interested in implementing or willing to make contribution(s) regarding the handling of inequality constraints, please let us know.
The G problem suite is already available to experiment with different algorithms. So far, mostly parameter-less constraint handling is used in our algorithms.**
### Equality Constraints
We have received a couple of questions about how equality constraints should be handled in a genetic algorithm. In general, functions without any smoothness are challenging for genetic algorithms to handle. An equality constraint is an extreme case: the constraint is satisfied at exactly one point and violated everywhere else.
Let us consider the following constraint $g(x)$ where $x$ represents a variable:
$g(x): x = 5$
An equality constraint can be expressed by an inequality constraint:
$g(x): |x - 5| \leq 0$
or
$g(x): (x-5)^2 \leq 0$
However, all of the constraints above are very strict and make **most of the search space infeasible**. Without providing more information to the algorithm, such constraints are very difficult to satisfy.
For this reason, the constraint can be smoothed by adding an epsilon to it and, therefore, having two inequality constraints:
$g'(x): 5 - \epsilon \leq x \leq 5 + \epsilon$
Also, it can be simply expressed in one inequality constraint by:
$g'(x): (x-5)^2 - \hat{\epsilon} \leq 0$
Depending on the $\epsilon$, the solutions will be more or less close to the desired value. However, the genetic algorithm does not know anything about the problem itself, which makes it difficult to focus the search near the (tiny) feasible region.
**Constraint Handling Through Repair**
A simple approach is to handle constraints through a repair function. This is only possible if the equation of the constraint is known. The repair makes sure every solution that is evaluated is in fact feasible. Let us consider the following example, where the equality constraint involves more than one variable:
\begin{align}
\begin{split}
\min \;\; & f_1(x) = (x_1^2 + x_2^2) \\
\max \;\; & f_2(x) = -(x_1-1)^2 - x_2^2 \\[1mm]
\text{s.t.} \;\; & g_1(x_1, x_3) : x_1 + x_3 = 2\\[2mm]
& -2 \leq x_1 \leq 2 \\
& -2 \leq x_2 \leq 2 \\
& -2 \leq x_3 \leq 2
\end{split}
\end{align}
We implement the problem using by squaring the term and using an $\epsilon$ as we have explained above. The source code for the problem looks as follows:
```python
import numpy as np
from pymoo.model.problem import Problem
class MyProblem(Problem):
def __init__(self):
super().__init__(n_var=3,
n_obj=2,
n_constr=1,
xl=np.array([-2, -2, -2]),
xu=np.array([2, 2, 2]))
def _evaluate(self, x, out, *args, **kwargs):
f1 = x[:, 0] ** 2 + x[:, 1] ** 2
f2 = (x[:, 0] - 1) ** 2 + x[:, 1] ** 2
g1 = (x[:, 0] + x[:, 2] - 2) ** 2 - 1e-5
out["F"] = np.column_stack([f1, f2])
out["G"] = g1
```
As you might have noticed, the problem has similar characteristics to the problem in our getting started guide.
Before a solution is evaluated, a repair function is called. To make sure every solution is feasible, one approach is to set either $x_3 = 2 - x_1$ or $x_1 = 2 - x_3$. Additionally, we need to consider that this repair might push a variable out of bounds.
```python
from pymoo.model.repair import Repair
class MyRepair(Repair):
def _do(self, problem, pop, **kwargs):
for k in range(len(pop)):
x = pop[k].X
if np.random.random() < 0.5:
x[2] = 2 - x[0]
if x[2] > 2:
val = x[2] - 2
x[0] += val
x[2] -= val
else:
x[0] = 2 - x[2]
if x[0] > 2:
val = x[0] - 2
x[2] += val
x[0] -= val
return pop
```
Now the algorithm object needs to be initialized with the repair operator and then can be run to solve the problem:
```python
from pymoo.algorithms.nsga2 import NSGA2
algorithm = NSGA2(pop_size=100, repair=MyRepair(), eliminate_duplicates=True)
from pymoo.optimize import minimize
from pymoo.visualization.scatter import Scatter
res = minimize(MyProblem(),
algorithm,
('n_gen', 20),
seed=1,
verbose=True)
plot = Scatter()
plot.add(res.F, color="red")
plot.show()
```
In our case it is easy to verify if the constraint is violated or not:
```python
print(res.X[:, 0] + res.X[:, 2])
```
[2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2.
2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2.
2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2.
2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2.
2. 2. 2. 2.]
If you would like to compare the solution without a repair you will see how searching only in the feasible space helps:
```python
algorithm = NSGA2(pop_size=100, eliminate_duplicates=True)
res = minimize(MyProblem(),
algorithm,
('n_gen', 20),
seed=1,
verbose=True)
plot = Scatter()
plot.add(res.F, color="red")
plot.show()
```
```python
print(res.X[:, 0] + res.X[:, 2])
```
[1.99992699 1.99992699 1.9987293 1.99992699 2.00089231]
Here in fact the $\epsilon$ term is necessary to find any feasible solution at all.
# Utility functions
The petitRADTRANS.nat_cst package contains some useful utility functions for generating spectra and observables, the use of which is shown below. It also contains useful constants, see [the nat_cst code documentation](../code_nat_cst.html). First we load the package (together with numpy, which the examples below need):
```python
import numpy as np  # needed for the array operations below
from petitRADTRANS import nat_cst as nc
```
## Planck function
The Planck function $B_\nu(T)$, in units of erg/cm$^2$/s/Hz/steradian, for a given frequency array, can be generated like this:
```python
# Define wavelength array, in cm
lamb = np.logspace(-5,-2,100)
# Convert to frequencies
freq = nc.c / lamb
# Calculate Planck function at 5750 K
planck = nc.b(5750., freq)
```
Let's plot the Planck function:
```python
# Plot Planck function
import pylab as plt
plt.rcParams['figure.figsize'] = (10, 6)
plt.plot(lamb/1e-4, planck)
plt.xscale('log')
plt.xlabel('Wavelength (micron)')
plt.ylabel('Intensity (erg/cm$^2$/s/Hz/sterad)')
plt.show()
plt.clf()
```
## PHOENIX and ATLAS9 stellar model spectra
Within petitRADTRANS the PHOENIX and ATLAS9 stellar spectra can be used, as described in Appendix A of [van Boekel et al. (2012)](http://adsabs.harvard.edu/abs/2012SPIE.8442E..1FV). The PHOENIX model reference, for stellar effective temperatures < 10,000 K, is [Husser et al. (2013)](http://adsabs.harvard.edu/abs/2013A%26A...553A...6H). The ATLAS9 model references, for effective temperatures > 10,000 K, are Kurucz [(1979](http://adsabs.harvard.edu/abs/1979ApJS...40....1K), [1992](http://adsabs.harvard.edu/abs/1992IAUS..149..225K), [1994)](http://adsabs.harvard.edu/abs/1994KurCD..19.....K).
The models can be accessed like this; this example is for a star with a 5750 K effective temperature on the main sequence:
```python
stellar_spec = nc.get_PHOENIX_spec(5750)
wlen_in_cm = stellar_spec[:,0]
flux_star = stellar_spec[:,1]
```
Let's plot the spectrum, and also overplot the black body flux from the previous section (note the required factor of $\pi$ to convert the black body intensity to flux):
```python
import pylab as plt
plt.plot(wlen_in_cm/1e-4, flux_star, label = 'PHOENIX model')
plt.plot(lamb/1e-4, np.pi*planck, label = 'black body flux')
plt.title(r'$T_{\rm eff}=5750$ K')
plt.xscale('log')
plt.xlabel('Wavelength (micron)')
plt.ylabel('Surface flux (erg/cm$^2$/s/Hz/sterad)')
plt.legend(loc = 'best', frameon = False)
plt.show()
plt.clf()
```
## Guillot temperature model
In petitRADTRANS, one can use the analytical atmospheric P-T profile from [Guillot (2010)](http://adsabs.harvard.edu/abs/2010A%26A...520A..27G), his Equation 29:
\begin{equation}
T^4 = \frac{3T_{\rm int}^4}{4}\left(\frac{2}{3}+\tau\right) + \frac{3T_{\rm equ}^4}{4}\left[\frac{2}{3}+\frac{1}{\gamma\sqrt{3}}+\left(\frac{\gamma}{\sqrt{3}}-\frac{1}{\gamma\sqrt{3}}\right)e^{-\gamma\tau\sqrt{3}}\right]
\end{equation}
with $\tau = P\kappa_{\rm IR}/g$. Here, $\tau$ is the optical depth, $P$ the pressure, $\kappa_{\rm IR}$ the atmospheric opacity in the IR wavelengths (i.e. the cross-section per unit mass), $g$ the atmospheric surface gravity, $\gamma$ is the ratio between the optical and IR opacity, $T_{\rm equ}$ the atmospheric equilibrium temperature, and $T_{\rm int}$ is the planetary internal temperature.
Let's define an example, all units are cgs units, except for the pressure, which is in bars:
```python
kappa_IR = 0.01
gamma = 0.4
T_int = 200.
T_equ = 1500.
gravity = 1e1**2.45
pressures = np.logspace(-6, 2, 100)
temperature = nc.guillot_global(pressures, kappa_IR, gamma, gravity, T_int, T_equ)
```
Let's plot the P-T profile:
```python
plt.plot(temperature, pressures)
plt.yscale('log')
plt.ylim([1e2, 1e-6])
plt.xlabel('T (K)')
plt.ylabel('P (bar)')
plt.show()
plt.clf()
```
# Periodic signals
In this notebook we will look at periodic signals and the conditions required for periodicity.
This property of signals is tied to ***time shifting***, a transformation of the independent variable.
A continuous periodic signal is one for which the following property holds:
\begin{equation}
x(t) = x(t \pm mT_p),
\end{equation}
that is, the value of the signal at time $t$ [s] is the same as at time $t \pm mT_p$ [s]. The signal therefore repeats itself every
period $T_p$.
$T_p$ is called the fundamental period of the periodic signal. In this case, $x(t) = x(t \pm T_p) = x(t \pm 2T_p) = ... = x(t \pm kT_p)$.
For discrete signals the definition is analogous:
\begin{equation}
x[n] = x[n \pm m N_p],
\end{equation}
where $N_p$ is an integer number of samples.
A signal that is not periodic is called aperiodic.
Let's look at some examples of continuous and discrete periodic signals.
```python
# import the required libraries
import numpy as np # arrays
import matplotlib.pyplot as plt # plots
from scipy import signal # some signals
import IPython.display as ipd # to play signals
```
```python
# General settings
fs = 44100
t = np.linspace(0, 1, fs) # time vector
freq = 2000 # fundamental frequency
```
```python
# sine (or cosine)
xt = np.sin(2*np.pi*freq*t)
# Figure
plt.figure()
plt.title('Sine')
plt.plot(t, xt, '-b', linewidth = 2, label = 'sine - 2000 [Hz]')
plt.legend(loc = 'best')
plt.grid(linestyle = '--', which='both')
plt.xlabel('Time [s]')
plt.ylabel('Amplitude [-]')
plt.xlim((0, 3/freq))
plt.tight_layout()
plt.show()
# play
ipd.Audio(xt, rate=fs) # load a NumPy array
```
## A sine with 2 frequencies
If we have a signal
\begin{equation}
x(t) = \sin(2 \pi \, m_1 f t) + \sin(2 \pi \, m_2 f t),
\end{equation}
it will be a periodic signal as long as $\frac{m_2}{m_1}$ is a rational number. Otherwise, the signal is quasi-periodic: it will look periodic, but if you inspect the details you will notice that it never repeats exactly.
```python
# sine with 2 frequencies
m = 3 # try 1.4*np.sqrt(2) for a quasi-periodic signal
xt = np.sin(2*np.pi*freq*t) + np.sin(2*np.pi*m*freq*t)
# Figure
plt.figure()
plt.title('Two-frequency sine')
plt.plot(t, xt, '-b', linewidth = 2, label = 'sine - 2 freq')
plt.legend(loc = 'best')
plt.grid(linestyle = '--', which='both')
plt.xlabel('Time [s]')
plt.ylabel('Amplitude [-]')
plt.xlim((0, 3/freq))
plt.tight_layout()
plt.show()
# play
ipd.Audio(xt, rate=fs) # load a NumPy array
```
```python
# sawtooth wave
xt = signal.sawtooth(2 * np.pi * freq * t)
# Figure
plt.figure()
plt.title('Sawtooth')
plt.plot(t, xt, '-b', linewidth = 2, label = 'sawtooth')
plt.legend(loc = 'best')
plt.grid(linestyle = '--', which='both')
plt.xlabel('Time [s]')
plt.ylabel('Amplitude [-]')
plt.xlim((0, 3/freq))
plt.tight_layout()
plt.show()
# play
ipd.Audio(xt, rate=fs) # load a NumPy array
```
```python
# square wave
xt = signal.square(2 * np.pi * freq * t)
# Figure
plt.figure()
plt.title('Square wave')
plt.plot(t, xt, '-b', linewidth = 2, label = 'square')
plt.legend(loc = 'best')
plt.grid(linestyle = '--', which='both')
plt.xlabel('Time [s]')
plt.ylabel('Amplitude [-]')
plt.xlim((0, 3/freq))
plt.tight_layout()
plt.show()
# play
ipd.Audio(xt, rate=fs) # load a NumPy array
```
```python
N = 9
n = np.arange(N)
xn = [2, 1, -1, 2, 1, -1, 2, 1, -1]
# Figure
plt.figure()
plt.title('Periodic discrete signal')
plt.stem(n, xn, '-b', basefmt=" ", use_line_collection= True)
plt.grid(linestyle = '--', which='both')
plt.xlabel('Sample [-]')
plt.ylabel('Amplitude [-]')
plt.ylim((-2.2, 2.2))
plt.tight_layout()
plt.show()
```
## Periodic discrete signals
Periodicity in discrete signals has a practical limit. To see this, consider a continuous signal $x(t) = \mathrm{cos}(\omega t)$. As the frequency, $f$, of the signal increases, its oscillation rate also increases. But what would happen for a signal of the form
\begin{equation}
x[n] = \mathrm{cos}(\omega n) \ ?
\end{equation}
```python
N = 50
n = np.arange(N)
w = 0
xn = np.cos(w*n)
# Figure
plt.figure()
plt.title('Discrete cosine')
plt.stem(n, xn, '-b', label = r'$\omega$ = {:.3} [rad/sample]'.format(float(w)), basefmt=" ", use_line_collection= True)
plt.legend(loc = 'best')
plt.grid(linestyle = '--', which='both')
plt.xlabel('Sample [-]')
plt.ylabel('Amplitude [-]')
plt.ylim((-1.2, 1.2))
plt.tight_layout()
plt.show()
```
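A discrete cosine $x[n] = \cos(\omega n)$ is periodic only if $\omega / 2\pi$ is a rational number $k/N$, in which case the fundamental period is an integer number of samples. The sketch below (added as an illustration) compares $\omega = \pi/4$, which repeats every 8 samples, with $\omega = 1$ rad/sample, which never repeats exactly:
```python
n = np.arange(16)
for w, name in [(np.pi/4, 'omega = pi/4 (periodic, N = 8)'),
                (1.0, 'omega = 1 rad/sample (aperiodic)')]:
    xn = np.cos(w*n)
    # for a periodic signal, x[n] equals x[n + N] for all n
    print(name)
    print('  x[0:4] =', np.round(xn[0:4], 3), '  x[8:12] =', np.round(xn[8:12], 3))
```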
# Kullback-Leibler Divergence
The Kullback-Leibler divergence (KLD) measures the distance between two probability distributions, $Q$ and $P$. KLD between $Q$ and $P$ is defined as follows.
* $D_{\mathrm{KL}}(P\|Q) = \sum_i P(i) \, \log\frac{P(i)}{Q(i)}$
* $D_{\mathrm{KL}}(P\|Q) \geq 0$
The way to interpret the value of KLD is
* as the KLD is closer to zero, $P$ and $Q$ are more similar
* as the KLD is moves away from zero, $P$ and $Q$ are more dissimilar (diverging, more distant)
In the example below, we will calculate KLD against three distributions, each associated with different models. Model 1 takes on the following form.
* $X_1 \sim \mathcal{N}(0, 1)$
* $X_2 \sim \mathcal{N}(1, 1)$
* $X_3 \sim \mathcal{N}(2 + 0.8x_1 - 0.2x_2, 1)$
Model 2 takes on the following form.
* $X_1 \sim \mathcal{N}(0.85, 1)$
* $X_2 \sim \mathcal{N}(1.05, 1)$
* $X_3 \sim \mathcal{N}(2 + 0.9x_1 - 0.25x_2, 1)$
Model 3 takes on the following form.
* $X_1 \sim \mathcal{N}(2, 1)$
* $X_2 \sim \mathcal{N}(5, 1)$
* $X_3 \sim \mathcal{N}(4 + 0.8x_1 - 0.8x_2, 1)$
Note how Models 1 and 2 were constructed to be very similar, and Model 3 to be very dissimilar to Models 1 and 2.
## Generate data
```python
%matplotlib inline
import matplotlib.pylab as plt
import numpy as np
import seaborn as sns
from numpy.random import normal
from scipy.stats import multivariate_normal, norm, entropy
np.random.seed(37)
sns.set_style('whitegrid')
num_samples = 1000
x1 = normal(0, 1, num_samples)
x2 = normal(1, 1, num_samples)
x3 = normal(2 + 0.8 * x1 - 0.2 * x2, 1, num_samples)
data1 = np.column_stack((x1, x2, x3))
means1 = data1.mean(axis=0)
covs1 = np.cov(data1, rowvar=False)
x1 = normal(0.85, 1, num_samples)
x2 = normal(1.05, 1, num_samples)
x3 = normal(2 + 0.9 * x1 - 0.25 * x2, 1, num_samples)
data2 = np.column_stack((x1, x2, x3))
means2 = data2.mean(axis=0)
covs2 = np.cov(data2, rowvar=False)
x1 = normal(2, 1, num_samples)
x2 = normal(5, 1, num_samples)
x3 = normal(4 + 0.8 * x1 - 0.8 * x2, 1, num_samples)
data3 = np.column_stack((x1, x2, x3))
means3 = data3.mean(axis=0)
covs3 = np.cov(data3, rowvar=False)
print('means_1 = {}'.format(means1))
print('covariance_1')
print(covs1)
print('')
print('means_2 = {}'.format(means2))
print('covariance_2')
print(covs2)
print('')
print('means_3 = {}'.format(means3))
print('covariance_3')
print(covs3)
```
means_1 = [0.01277839 0.9839153 1.80334137]
covariance_1
[[ 0.9634615 -0.00371354 0.76022725]
[-0.00371354 0.97865653 -0.25181086]
[ 0.76022725 -0.25181086 1.63064517]]
means_2 = [0.85083876 1.07957661 2.46909572]
covariance_2
[[ 1.00406579 0.03774339 0.91788487]
[ 0.03774339 1.00889847 -0.21973076]
[ 0.91788487 -0.21973076 1.94124604]]
means_3 = [2.00362816 4.97508849 1.65194765]
covariance_3
[[ 1.01322936 0.0112429 0.75369598]
[ 0.0112429 0.96736793 -0.76265399]
[ 0.75369598 -0.76265399 2.14695264]]
Note how we estimate the means and covariance matrix of each model from the sampled data. For any observation, ${\mathbf X} = (x_{1}, \ldots, x_{k})$, we can compute the density of such a data point according to the following probability density function.
$\begin{align}
f_{\mathbf X}(x_1,\ldots,x_k)
& = \frac{\exp\left(-\frac 1 2 ({\mathbf x}-{\boldsymbol\mu})^\mathrm{T}{\boldsymbol\Sigma}^{-1}({\mathbf x}-{\boldsymbol\mu})\right)}{\sqrt{(2\pi)^k|\boldsymbol\Sigma|}}
\end{align}$
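As a sanity check, we can also use the closed-form KL divergence between two multivariate Gaussians (a helper we add here, not part of the original notebook). Its values should broadly agree with the sample-based `scipy.stats.entropy` estimates computed below, though not exactly, since `entropy` renormalizes the sampled densities.
```python
# Closed-form KLD between two multivariate Gaussians (sketch added for checking)
def gaussian_kld(mu_p, cov_p, mu_q, cov_q):
    k = len(mu_p)
    cov_q_inv = np.linalg.inv(cov_q)
    diff = mu_q - mu_p
    return 0.5 * (np.trace(cov_q_inv @ cov_p)
                  + diff @ cov_q_inv @ diff
                  - k
                  + np.log(np.linalg.det(cov_q) / np.linalg.det(cov_p)))

print(gaussian_kld(means1, covs1, means2, covs2))  # Model 1 vs 2: near zero
print(gaussian_kld(means1, covs1, means3, covs3))  # Model 1 vs 3: large
```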
## Visualize
Let's visualize the density curves of each variable in the models.
```python
fig, ax = plt.subplots(1, 1, figsize=(10, 5))
ax.set_title('Model 1')
ax.set_xlim([-4, 8])
sns.kdeplot(data1[:,0], bw=0.5, ax=ax, label=r'$x_{1}$')
sns.kdeplot(data1[:,1], bw=0.5, ax=ax, label=r'$x_{2}$')
sns.kdeplot(data1[:,2], bw=0.5, ax=ax, label=r'$x_{3}$')
fig, ax = plt.subplots(1, 1, figsize=(10, 5))
ax.set_title('Model 2')
ax.set_xlim([-4, 8])
sns.kdeplot(data2[:,0], bw=0.5, ax=ax, label=r'$x_{1}$')
sns.kdeplot(data2[:,1], bw=0.5, ax=ax, label=r'$x_{2}$')
sns.kdeplot(data2[:,2], bw=0.5, ax=ax, label=r'$x_{3}$')
fig, ax = plt.subplots(1, 1, figsize=(10, 5))
ax.set_title('Model 3')
ax.set_xlim([-4, 8])
sns.kdeplot(data3[:,0], bw=0.5, ax=ax, label=r'$x_{1}$')
sns.kdeplot(data3[:,1], bw=0.5, ax=ax, label=r'$x_{2}$')
sns.kdeplot(data3[:,2], bw=0.5, ax=ax, label=r'$x_{3}$')
```
## Measure divergence
Now that we have estimated the parameters (means and covariance matrix) of the models, we can plug these back into the density function above to estimate the density of each data point in the data simulated from Model 1. Note that $P$ is the density function associated with Model 1, $Q1$ is the density function associated with Model 2, and $Q2$ is the density function associated with Model 3. Also note
* $D_{\mathrm{KL}}(P\|P) = 0$
* $D_{\mathrm{KL}}(P\|Q) \neq D_{\mathrm{KL}}(Q\|P)$ (the KLD is asymmetric)
```python
P = multivariate_normal.pdf(data1, mean=means1, cov=covs1)
Q1 = multivariate_normal.pdf(data1, mean=means2, cov=covs2)
Q2 = multivariate_normal.pdf(data1, mean=means3, cov=covs3)
print(entropy(P, P))
print(entropy(P, Q1))
print(entropy(P, Q2))
```
0.0
0.17877316564929496
6.628549732040807
This time around, $P$ is the density function associated with Model 2 and $Q1$ is the density function associated with Model 1 and $Q2$ with Model 3.
```python
P = multivariate_normal.pdf(data2, mean=means2, cov=covs2)
Q1 = multivariate_normal.pdf(data2, mean=means1, cov=covs1)
Q2 = multivariate_normal.pdf(data2, mean=means3, cov=covs3)
print(entropy(P, P))
print(entropy(P, Q1))
print(entropy(P, Q2))
```
0.0
0.18572628345083425
5.251771615080081
Finally, $P$ is the density function associated with Model 3 and $Q1$ is the density function associated with Model 1 and $Q2$ with Model 2.
```python
P = multivariate_normal.pdf(data3, mean=means3, cov=covs3)
Q1 = multivariate_normal.pdf(data3, mean=means1, cov=covs1)
Q2 = multivariate_normal.pdf(data3, mean=means2, cov=covs2)
print(entropy(P, P))
print(entropy(P, Q1))
print(entropy(P, Q2))
```
0.0
4.964071493531684
4.154646297473461
Since Models 1 and 2 are very similar (as can be seen from how we constructed them), their KLD is close to zero. On the other hand, the KLDs between these two models and Model 3 are far from zero. It is interesting to note, though, that Model 2 is closer to Model 3 than Model 1 is.
# Stochastic Differential Equations: Lab 1
```
from IPython.core.display import HTML
css_file = '../ipython_notebook_styles/ngcmstyle.css'
HTML(open(css_file, "r").read())
```
The background for these exercises is the article by D. Higham, [*An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations*, SIAM Review 43:525-546 (2001)](http://epubs.siam.org/doi/abs/10.1137/S0036144500378302).
Higham provides Matlab codes illustrating the basic ideas at <http://personal.strath.ac.uk/d.j.higham/algfiles.html>; the codes are also given in the paper.
For random processes in `python` you should look at the `numpy.random` module. To set the initial seed (which you should *not* do in a real simulation, but allows for reproducible testing), see `numpy.random.seed`.
## Brownian processes
Simulate a Brownian process over $[0, 1]$ using a step length $\delta t = 1/N$ for $N = 500, 1000, 2000$. Use a fixed seed of `100`. Compare the results.
```
```
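One possible sketch (the variable names are our own choice, not prescribed by the lab):
```
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(100)                        # fixed seed, as the exercise asks
for N in (500, 1000, 2000):
    dt = 1.0 / N
    dB = np.sqrt(dt) * np.random.randn(N)  # Brownian increments ~ N(0, dt)
    B = np.concatenate(([0.0], np.cumsum(dB)))
    plt.plot(np.linspace(0.0, 1.0, N + 1), B, label='N = %d' % N)
plt.xlabel('t'); plt.ylabel('B(t)'); plt.legend(); plt.show()
```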
Evaluate the function $u(B(t)) = \sin^2(t + B(t))$, where $B(t)$ is a Brownian process, on $M$ Brownian paths for $M = 500, 1000, 2000$. Compare the *average* path for each $M$.
```
```
## Stochastic integrals
Write functions to compute the Itô and Stratonovich integrals of a function $h(t, B(t))$ of a *given* Brownian process $B(t)$ over the interval $[0, 1]$.
```
```
Test the functions on $h = B(t)$ for various $N$. Compare the limiting values of the integrals.
```
```
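One possible sketch for $h = B(t)$, using the left-endpoint sum for the Itô integral and the endpoint average as a simple Stratonovich-type discretization (the names below are our own):
```
import numpy as np

np.random.seed(100)
N = 10**5; dt = 1.0 / N
dB = np.sqrt(dt) * np.random.randn(N)
B = np.concatenate(([0.0], np.cumsum(dB)))   # B[j] = B(t_j), with B[0] = 0

ito = np.sum(B[:-1] * dB)                    # left-endpoint (Ito) sum
strat = np.sum(0.5 * (B[:-1] + B[1:]) * dB)  # endpoint-average (Stratonovich) sum

print(ito, 'vs exact Ito value 0.5*B(1)^2 - 0.5 =', 0.5 * B[-1]**2 - 0.5)
print(strat, 'vs exact Stratonovich value 0.5*B(1)^2 =', 0.5 * B[-1]**2)
```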
## Euler-Maruyama's method
Apply the Euler-Maruyama method to the stochastic differential equation
$$
\begin{equation}
dX(t) = \lambda X(t) \, dt + \mu X(t) \, dB(t), \qquad X(0) = X_0.
\end{equation}
$$
Choose any reasonable values of the free parameters $\lambda, \mu, X_0$.
The exact solution to this equation is $X(t) = X(0) \exp \left[ \left( \lambda - \tfrac{1}{2} \mu^2 \right) t + \mu B(t) \right]$. Fix the timestep and compare your solution to the exact solution.
```
```
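A minimal Euler-Maruyama sketch for this SDE (the parameter values below are arbitrary choices, not prescribed by the exercise):
```
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(100)
lam, mu, X0 = 2.0, 1.0, 1.0        # arbitrary lambda, mu, X(0)
N = 2**8; T = 1.0; dt = T / N
dB = np.sqrt(dt) * np.random.randn(N)
B = np.cumsum(dB)

X = np.empty(N + 1); X[0] = X0
for n in range(N):
    # one Euler-Maruyama step: X_{n+1} = X_n + lam*X_n*dt + mu*X_n*dB_n
    X[n + 1] = X[n] + lam * X[n] * dt + mu * X[n] * dB[n]

t = np.linspace(0.0, T, N + 1)
X_exact = X0 * np.exp((lam - 0.5 * mu**2) * t + np.concatenate(([0.0], B)))
plt.plot(t, X, label='Euler-Maruyama'); plt.plot(t, X_exact, '--', label='exact')
plt.xlabel('t'); plt.legend(); plt.show()
```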
Vary the timestep of the Brownian path and check how the numerical solution compares to the exact solution.
## Convergence
Investigate the weak and strong convergence of your method, applied to the problem above.
```
```
## Milstein's method
Implement Milstein's method, applied to the problem above.
```
```
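For this SDE, $g(X) = \mu X$ gives $g\,g' = \mu^2 X$, so a Milstein step adds one correction term to the Euler-Maruyama step. A possible sketch, reusing `lam`, `mu`, `dt`, `dB`, `N`, and `X0` from the previous section:
```
Xm = np.empty(N + 1); Xm[0] = X0
for n in range(N):
    Xm[n + 1] = (Xm[n]
                 + lam * Xm[n] * dt
                 + mu * Xm[n] * dB[n]
                 + 0.5 * mu**2 * Xm[n] * (dB[n]**2 - dt))  # Milstein correction
```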
Check the convergence again.
```
```
Compare the *performance* of the Euler-Maruyama and Milstein methods using e.g. `timeit`. At what point is one method better than the other?
```
```
## Population problem
Apply the algorithms, convergence and performance tests to the SDE
$$
\begin{equation}
dX(t) = r X(t) (K - X(t)) \ dt + \beta X(t) \ dB(t), \qquad X(0) = X_0.
\end{equation}
$$
Use the parameters $r = 2, K = 1, \beta = 0.25, X_0 = 0.5$.
```
```
Investigate how the behaviour varies as you change the parameters $r, K, \beta$.
```
```
<h1><center>Non Linear Regression Analysis</center></h1>
If the data shows a curvy trend, then linear regression will not produce very accurate results when compared to a non-linear regression because, as the name implies, linear regression presumes that the data is linear.
Let's learn about non-linear regressions and work through an example in Python. In this notebook, we fit a non-linear model to the datapoints corresponding to China's GDP from 1960 to 2014.
<h2 id="importing_libraries">Importing required libraries</h2>
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
Though linear regression is very good for solving many problems, it cannot be used for all datasets. First, recall how linear regression models a dataset: it models a linear relation between a dependent variable y and an independent variable x, with a simple equation of degree 1, for example y = $2x$ + 3.
```python
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = 2*(x) + 3
y_noise = 2 * np.random.normal(size=x.size)
ydata = y + y_noise
#plt.figure(figsize=(8,6))
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
```
Non-linear regressions model a relationship between independent variables $x$ and a dependent variable $y$ through a non-linear function. Essentially any relationship that is not linear can be termed non-linear, and is usually represented by a polynomial of degree $k$ (maximum power of $x$).
$$ \ y = a x^3 + b x^2 + c x + d \ $$
Non-linear functions can have elements like exponentials, logarithms, fractions, and others. For example: $$ y = \log(x)$$
Or even, more complicated such as :
$$ y = \log(a x^3 + b x^2 + c x + d)$$
Let's take a look at a cubic function's graph.
```python
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = 1*(x**3) + 1*(x**2) + 1*x + 3
y_noise = 20 * np.random.normal(size=x.size)
ydata = y + y_noise
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
```
As you can see, this function has $x^3$ and $x^2$ as independent variables. Also, the graph of this function is not a straight line over the 2D plane. So this is a non-linear function.
Some other types of non-linear functions are:
### Quadratic
$$ Y = X^2 $$
```python
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = np.power(x,2)
y_noise = 2 * np.random.normal(size=x.size)
ydata = y + y_noise
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
```
### Exponential
An exponential function with base $c$ is defined by $$ Y = a + b c^X$$ where $b \ne 0$, $c > 0$, $c \ne 1$, and $X$ is any real number. The base, $c$, is constant and the exponent, $X$, is a variable.
```python
X = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
Y= np.exp(X)
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
```
### Logarithmic
The response $y$ results from applying a logarithmic map from the input $x$ to the output variable $y$. It is one of the simplest forms of __log()__: i.e. $$ y = \log(x)$$
Please consider that instead of $x$, we can use $X$, which can be a polynomial representation of the $x$'s. In general form it would be written as
\begin{equation}
y = \log(X)
\end{equation}
```python
X = np.arange(-5.0, 5.0, 0.1)
Y = np.log(X)
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
```
### Sigmoidal/Logistic
$$ Y = a + \frac{b}{1+ c^{(X-d)}}$$
```python
X = np.arange(-5.0, 5.0, 0.1)
Y = 1-4/(1+np.power(3, X-2))
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
```
<a id="ref2"></a>
# Non-Linear Regression example
As an example, we're going to try to fit a non-linear model to the datapoints corresponding to China's GDP from 1960 to 2014. We download a dataset with two columns: the first, a year between 1960 and 2014; the second, China's corresponding annual gross domestic product in US dollars for that year.
```python
import numpy as np
import pandas as pd
#downloading dataset
!wget -nv -O china_gdp.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/china_gdp.csv
df = pd.read_csv("china_gdp.csv")
df.head(10)
```
2019-01-10 23:19:24 URL:https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/china_gdp.csv [1218/1218] -> "china_gdp.csv" [1]
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Year</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1960</td>
<td>5.918412e+10</td>
</tr>
<tr>
<th>1</th>
<td>1961</td>
<td>4.955705e+10</td>
</tr>
<tr>
<th>2</th>
<td>1962</td>
<td>4.668518e+10</td>
</tr>
<tr>
<th>3</th>
<td>1963</td>
<td>5.009730e+10</td>
</tr>
<tr>
<th>4</th>
<td>1964</td>
<td>5.906225e+10</td>
</tr>
<tr>
<th>5</th>
<td>1965</td>
<td>6.970915e+10</td>
</tr>
<tr>
<th>6</th>
<td>1966</td>
<td>7.587943e+10</td>
</tr>
<tr>
<th>7</th>
<td>1967</td>
<td>7.205703e+10</td>
</tr>
<tr>
<th>8</th>
<td>1968</td>
<td>6.999350e+10</td>
</tr>
<tr>
<th>9</th>
<td>1969</td>
<td>7.871882e+10</td>
</tr>
</tbody>
</table>
</div>
__Did you know?__ When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)
### Plotting the Dataset ###
This is what the datapoints look like. It looks like either a logistic or an exponential function. The growth starts off slow, then from 2005 onward it becomes very significant, and finally it decelerates slightly in the 2010s.
```python
plt.figure(figsize=(8,5))
x_data, y_data = (df["Year"].values, df["Value"].values)
plt.plot(x_data, y_data, 'ro')
plt.ylabel('GDP')
plt.xlabel('Year')
plt.show()
```
### Choosing a model ###
From an initial look at the plot, we determine that the logistic function could be a good approximation,
since it has the property of starting with a slow growth, increasing growth in the middle, and then decreasing again at the end; as illustrated below:
```python
X = np.arange(-5.0, 5.0, 0.1)
Y = 1.0 / (1.0 + np.exp(-X))
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
```
The formula for the logistic function is the following:
$$ \hat{Y} = \frac{1}{1+e^{-\beta_1(X-\beta_2)}}$$
$\beta_1$: Controls the curve's steepness,
$\beta_2$: Slides the curve on the x-axis.
### Building The Model ###
Now, let's build our regression model and initialize its parameters.
```python
def sigmoid(x, Beta_1, Beta_2):
y = 1 / (1 + np.exp(-Beta_1*(x-Beta_2)))
return y
```
Let's look at a sample sigmoid curve that might fit the data:
```python
beta_1 = 0.10
beta_2 = 1990.0
#logistic function
Y_pred = sigmoid(x_data, beta_1 , beta_2)
#plot initial prediction against datapoints
plt.plot(x_data, Y_pred*15000000000000.)
plt.plot(x_data, y_data, 'ro')
```
Our task here is to find the best parameters for our model. Let's first normalize our x and y:
```python
# Lets normalize our data
xdata =x_data/max(x_data)
ydata =y_data/max(y_data)
```
#### How do we find the best parameters for our fit line?
We can use __curve_fit__, which uses non-linear least squares to fit our sigmoid function to the data. It finds optimal values for the parameters so that the sum of the squared residuals of sigmoid(xdata, *popt) - ydata is minimized.
popt holds our optimized parameters.
```python
from scipy.optimize import curve_fit
popt,pcov = curve_fit(sigmoid, xdata, ydata)
#print the final parameters
print(" beta_1 = %f, beta_2 = %f" % (popt[0], popt[1]))
```
beta_1 = 690.453017, beta_2 = 0.997207
Now we plot our resulting regression model.
```python
x = np.linspace(1960, 2015, 55)
x = x/max(x)
plt.figure(figsize=(8,5))
y = sigmoid(x, *popt)
plt.plot(xdata, ydata, 'ro', label='data')
plt.plot(x,y, linewidth=3.0, label='fit')
plt.legend(loc='best')
plt.ylabel('GDP')
plt.xlabel('Year')
plt.show()
```
## Practice
Can you calculate the accuracy of our model?
```python
msk=np.random.rand(len(df))<0.8
train_x=xdata[msk]
test_x=xdata[~msk]
train_y=ydata[msk]
test_y=ydata[~msk]
popt, pcov = curve_fit(sigmoid, train_x, train_y)
y_hat = sigmoid(test_x, *popt)
print("Mean absolute error: %.2f" % np.mean(np.absolute(y_hat - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((y_hat - test_y) ** 2))
from sklearn.metrics import r2_score
print("R2-score: %.2f" % r2_score(y_hat , test_y) )
```
Mean absolute error: 0.04
Residual sum of squares (MSE): 0.00
R2-score: 0.95
Double-click __here__ for the solution.
<!-- Your answer is below:
# split data into train/test
msk = np.random.rand(len(df)) < 0.8
train_x = xdata[msk]
test_x = xdata[~msk]
train_y = ydata[msk]
test_y = ydata[~msk]
# build the model using train set
popt, pcov = curve_fit(sigmoid, train_x, train_y)
# predict using test set
y_hat = sigmoid(test_x, *popt)
# evaluation
print("Mean absolute error: %.2f" % np.mean(np.absolute(y_hat - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((y_hat - test_y) ** 2))
from sklearn.metrics import r2_score
print("R2-score: %.2f" % r2_score(y_hat , test_y) )
-->
<h2>Want to learn more?</h2>
IBM SPSS Modeler is a comprehensive analytics platform that has many machine learning algorithms. It has been designed to bring predictive intelligence to decisions made by individuals, by groups, by systems – by your enterprise as a whole. A free trial is available through this course, available here: <a href="http://cocl.us/ML0101EN-SPSSModeler">SPSS Modeler</a>
Also, you can use Watson Studio to run these notebooks faster with bigger datasets. Watson Studio is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, Watson Studio enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of Watson Studio users today with a free account at <a href="https://cocl.us/ML0101EN_DSX">Watson Studio</a>
<h3>Thanks for completing this lesson!</h3>
<h4>Author: <a href="https://ca.linkedin.com/in/saeedaghabozorgi">Saeed Aghabozorgi</a></h4>
<p><a href="https://ca.linkedin.com/in/saeedaghabozorgi">Saeed Aghabozorgi</a>, PhD is a Data Scientist in IBM with a track record of developing enterprise level applications that substantially increases clients’ ability to turn data into actionable knowledge. He is a researcher in data mining field and expert in developing advanced analytic methods like machine learning and statistical modelling on large datasets.</p>
<hr>
<p>Copyright © 2018 <a href="https://cocl.us/DX0108EN_CC">Cognitive Class</a>. This notebook and its source code are released under the terms of the <a href="https://bigdatauniversity.com/mit-license/">MIT License</a>.</p>
# Optical Crosstalk in SPAD arrays
# Theory
Optical crosstalk is an interaction between pixels on the same
array. When an avalanche is triggered on one pixel, it emits
secondary photons that can trigger another pixel on the same chip.
This secondary photon emission is proportional to the avalanche current,
whose duration is typically < 20 ns in SPADs using AQCs. Hence, when a
crosstalk event occurs, the second pixel is triggered with a delay
< 20 ns. We can estimate the crosstalk by "coincidence counting",
that is, counting the number of photons $C$ arriving in two pixels
within a short time window $\Delta t$ (e.g. 25-50 ns).
The number of coincident events due to uncorrelated dark counts
can be computed from Poisson statistics. Then, the number
of crosstalk events is simply the total coincidence counts minus
the coincidences due to uncorrelated counts. The crosstalk probability
is the number of crosstalk coincidences divided by the total counts.
**Reference**:
- [Restelli JMO 2006](http://dx.doi.org/10.1080/09500340600790121).
## Poisson statistics
Given the random variable $X \sim {\rm Poiss}\{\Lambda\}$,
where $\Lambda = \lambda\Delta t$, we ask: what is the probability
of having at least one count in a time window $\Delta t$?
$$P\{X \ge 1\} = 1 - P\{X = 0\} = 1 - e^{-\Lambda}
\xrightarrow[\scriptstyle\Lambda\to0]{} \Lambda$$
Hence, for two random variables $X_A \sim {\rm Poiss}\{\Lambda_A\}$
and $X_B \sim {\rm Poiss}\{\Lambda_B\}$, the probability of having
at least one count in each variable in a time $\Delta t$ is:
$$P\{X_A \ge 1\}P\{X_B \ge 1\} = (1 - e^{-\Lambda_A})(1 - e^{-\Lambda_B})
\approx (\lambda_A\Delta t)(\lambda_B\Delta t)
\quad {\rm for}\quad\Delta t \ll \lambda^{-1}$$
In a measurement of duration $T$, the number of "coincidences" $C_u$ is the number
of times both variables have at least one count in a window $\Delta t$
$$C_u = P\{X_A \ge 1\}P\{X_B \ge 1\} \frac{T}{\Delta t}
\approx \lambda_A\,\lambda_B\, T\,\,\Delta t
\quad {\rm for}\quad\Delta t \ll \lambda^{-1}$$
## Crosstalk probability
Given a measurement of duration $T$, with total counts $N_A$ and $N_B$ in
pixels $A$ and $B$. If $C$ is the total number of coincidences in windows of
duration $\Delta t$, then the coincidences $C_c$ due to crosstalk are:
\begin{equation}
C_c = C - C_u \qquad\qquad (1)
\end{equation}
where $C_u$ are the number of coincidences due to Poisson statistics,
i.e. the coincidences we would have if the two pixels were uncorrelated.
\begin{equation}
C_u = \left[1 - \exp\left(-\frac{(N_A - C_c)\,\Delta t}{T}\right)\right]
\left[1 - \exp\left(-\frac{(N_B - C_c)\,\Delta t}{T}\right)\right]
\frac{T}{\Delta t}
\end{equation}
The expression for $C_u$ can be substituted into eq. (1), which is then solved for $C_c$
(a numerical iterative solution is straightforward).
For simplicity, we could assume $C_c \ll N_A,N_B$ obtaining:
\begin{equation}
C_u = \left[1 - \exp\left(-\frac{N_A\,\Delta t}{T}\right)\right]
\left[1 - \exp\left(-\frac{N_B\,\Delta t}{T}\right)\right]
\frac{T}{\Delta t}
\end{equation}
In either case, the probability of crosstalk is:
\begin{equation}
P_c = \frac{C_c}{N_A + N_B - C_c}
\end{equation}
The counts $C_c$ are independent events and thus are Poisson distributed.
The standard deviation of $C_c$ is then
${\rm Std}\{C_c\} = \sqrt{C_c}$, and the standard deviation of $P_c$ is:
\begin{equation}
{\rm Std}\{P_c\} = \frac{\sqrt{C_c}}{N_A + N_B - C_c}
\end{equation}
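As a quick numerical illustration of these formulas, here is a hypothetical example; the counts are made up, chosen so that the rates match the simulation further down (so that $C_u \approx \lambda_A\lambda_B T \Delta t = 60$):
```python
import numpy as np

T = 600.0                      # measurement duration [s]
delta_t = 50e-9                # coincidence window [s]
N_A, N_B = 600_000, 1_200_000  # hypothetical counts (lambda_A = 1000 cps, lambda_B = 2000 cps)
C = 75                         # hypothetical measured coincidences

# uncorrelated coincidences, neglecting the small C_c correction
C_u = (1 - np.exp(-N_A*delta_t/T)) * (1 - np.exp(-N_B*delta_t/T)) * T/delta_t
C_c = C - C_u                                  # crosstalk coincidences, ~15 here
P_c = C_c / (N_A + N_B - C_c)                  # crosstalk probability
P_c_std = np.sqrt(C_c) / (N_A + N_B - C_c)
print(C_u, C_c, P_c, P_c_std)
```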
## Define functions
```python
def coincidence_py(timestamps1, timestamps2):
"""Pure python coincidence counting."""
coinc = 0
i1, i2 = 0, 0
while (i1 < timestamps1.size) and (i2 < timestamps2.size):
if timestamps1[i1] == timestamps2[i2]:
coinc += 1
i1 += 1
i2 += 1
elif timestamps1[i1] > timestamps2[i2]:
i2 += 1
elif timestamps1[i1] < timestamps2[i2]:
i1 += 1
return coinc
```
```python
%load_ext Cython
```
```cython
%%cython
cimport numpy as np
def coincidence(np.int64_t[:] timestamps1, np.int64_t[:] timestamps2):
"""Cython coincidence counting."""
cdef np.int64_t coinc, i1, i2, size1, size2
size1 = timestamps1.size
size2 = timestamps2.size
coinc = 0
i1, i2 = 0, 0
while i1 < size1 and i2 < size2:
if timestamps1[i1] == timestamps2[i2]:
coinc += 1
i1 += 1
i2 += 1
elif timestamps1[i1] > timestamps2[i2]:
i2 += 1
elif timestamps1[i1] < timestamps2[i2]:
i1 += 1
return coinc
```
```python
def crosstalk_probability(t1, t2, tol=1e-6, max_iter=100):
"""Estimate crosstalk probability between two pixels in a SPAD array.
Given two input arrays of timestamps `t1` and `t2`, estimate
the crosstalk probability using Poisson statistics without
approximations.
Arguments:
t1, t2 (array of ints): arrays of timestamps from DCR measurements
for the two pixels to be measured. Timestamps need to be
integers and coincidences are detected when values in the two
            arrays are equal. These arrays need to be rescaled if
            coincidences need to be computed on a delta t larger than
            a single timestamp unit.
        tol (float): tolerance for the iterative equation solution
max_iter (int): max iterations used to solve the equation
Returns:
A 3-element tuple:
- crosstalk probability
- crosstalk probability standard deviation
- number of iterations used for the estimation.
"""
T = (max((t1.max(), t2.max())) - min((t1.min(), t2.min())))
C = coincidence(t1, t2)
# Find C_c by solving eq. (1) iteratively
C_c, C_u_prev = 0, 0
for i in range(max_iter):
C_u = ((1 - np.exp(-(t1.size - C_c)/T)) *
(1 - np.exp(-(t2.size - C_c)/T)) * T)
C_c = C - C_u
if np.abs(C_u - C_u_prev) < tol:
break
C_u_prev = C_u
P_c = C_c / (t1.size + t2.size - C_c)
sigma = np.sqrt(C_c) / (t1.size + t2.size - C_c)
return P_c, sigma, i
```
# Simulation
## Poisson processes
Here we simulate two Poisson processes to check that the coincidences
are, as predicted, equal to $C_u$.
```python
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
```
Simulation parameters:
```python
λ_A = 1000 # cps
λ_B = 2000 # cps
T = 600 # s
delta_t = 50e-9 # s
P_c = 0.05
```
From theory, the expected number of coincidences is:
```python
C_u = λ_A * λ_B * T * delta_t
C_u
```
60.0
```python
np.sqrt(C_u)
```
7.745966692414834
Let's simulate timestamps for two uncorrelated Poisson processes:
```python
dt = np.random.exponential(scale=1/λ_A, size=10**6)
dt.mean(), dt.sum()
```
(0.0010001265879312606, 1000.1265879312606)
```python
np.random.seed(1)
t_A = np.cumsum(np.random.exponential(scale=1/λ_A, size=λ_A * T * 2))
assert t_A.max() > T
t_A = t_A[t_A < 600]
t_A = (t_A / delta_t).astype('int64')
t_B = np.cumsum(np.random.exponential(scale=1/λ_B, size=λ_B * T * 2))
assert t_B.max() > T
t_B = t_B[t_B < 600]
t_B = (t_B / delta_t).astype('int64')
```
The empirical coincidences are:
```python
C_u_sim = coincidence(t_A, t_B)
C_u_sim
```
62
```python
assert C_u - 3*np.sqrt(C_u) < C_u_sim < C_u + 3*np.sqrt(C_u)
```
Let's repeat the simulation $N$ times to be sure that the
estimated coincidence count falls within the expected error range
every time:
```python
N = 500
fail = 0
for _ in range(N):
t_A = np.cumsum(np.random.exponential(scale=1/λ_A, size=λ_A * T * 2))
assert t_A.max() > T
t_A = t_A[t_A < T]
t_A = (t_A / delta_t).astype('int64')
t_B = np.cumsum(np.random.exponential(scale=1/λ_B, size=λ_B * T * 2))
assert t_B.max() > T
t_B = t_B[t_B < T]
t_B = (t_B / delta_t).astype('int64')
C_u_sim = coincidence(t_A, t_B)
if C_u < C_u_sim - 3*np.sqrt(C_u_sim) or C_u > C_u_sim + 3*np.sqrt(C_u_sim):
fail += 1
print('>>> In %d simulations, the number of coincidences was outside the error range %d times'
      % (N, fail))
```
>>> In 500 simulations, the number of coincidences was outside the error range 0 times
For further information on confidence intervals for Poisson distribution
estimators, see:
- Patil & Kulkarni. *Statistical Journal* 10(2) pp211–227(2012).
COMPARISON OF CONFIDENCE INTERVALS FOR THE POISSON MEAN: SOME NEW ASPECTS.
[[PDF]](https://www.ine.pt/revstat/pdf/rs120203.pdf)
## Optical crosstalk
Here we simulate two Poisson processes plus crosstalk, to check
that the estimated crosstalk is consistent with the ground truth.
```python
λ_A = 1000 # cps
λ_B = 1550 # cps
T = 1200 # s
delta_t = 50e-9 # s
P_c = 0.1
```
```python
np.random.seed(1)
```
```python
t_A = np.cumsum(np.random.exponential(scale=1/λ_A, size=λ_A * T * 2))
assert t_A.max() > T
t_A = t_A[t_A < 600]
t_A = (t_A / delta_t).astype('int64')
t_ct_AB = t_A[np.random.rand(t_A.size) <= P_c]
t_B = np.cumsum(np.random.exponential(scale=1/λ_B, size=λ_B * T * 2))
assert t_B.max() > T
t_B = t_B[t_B < 600]
t_B = (t_B / delta_t).astype('int64')
t_ct_BA = t_B[np.random.rand(t_B.size) <= P_c]
t_B = np.hstack([t_B, t_ct_AB])
t_B.sort()
t_A = np.hstack([t_A, t_ct_BA])
t_A.sort()
```
```python
P_c_est, P_c_err, i = crosstalk_probability(t_A, t_B)
P_c_est*100, P_c_err*300, i
```
(9.9967084810755544, 0.076659160092492767, 4)
Let's repeat the simulation $N$ times to obtain a distribution
of estimated crosstalk values:
```python
λ_A = 1000 # cps
λ_B = 300 # cps
T = 1200 # s
delta_t = 50e-9 # s
P_c = 0.05
```
```python
N = 100
P = []
I = 0
for _ in range(N):
t_A = np.cumsum(np.random.exponential(scale=1/λ_A, size=λ_A * T * 2))
assert t_A.max() > T
t_A = t_A[t_A < T]
t_A = (t_A / delta_t).astype('int64')
t_ct_AB = t_A[np.random.rand(t_A.size) <= P_c]
t_B = np.cumsum(np.random.exponential(scale=1/λ_B, size=λ_B * T * 2))
assert t_B.max() > T
t_B = t_B[t_B < T]
t_B = (t_B / delta_t).astype('int64')
t_ct_BA = t_B[np.random.rand(t_B.size) <= P_c]
t_B = np.hstack([t_B, t_ct_AB])
t_B.sort()
t_A = np.hstack([t_A, t_ct_BA])
t_A.sort()
P_c_est, P_c_err, i = crosstalk_probability(t_A, t_B)
I += i
P.append(P_c_est*100)
I/N
```
3.0
```python
plt.hist(P, range=(96*P_c, 104*P_c), bins=60, label='Estimator');
plt.axvline(P_c*100, ls='--', color='k', label='True value')
plt.xlabel('Crosstalk probability (%)')
plt.title('Simulation of crosstalk estimator accuracy');
```
> **NOTE** The crosstalk estimator is well centered around the true value.
```python
from logicqubit.logic import *
from cmath import *
import numpy as np
import sympy as sp
from scipy.optimize import *
import matplotlib.pyplot as plt
```
Cuda is not available!
logicqubit version 1.5.6
```python
def ansatz_1q(q, theta):
return q.RY(theta)
def ansatz_2q(q1, q2, params):
q2.CNOT(q1)
q1.RY(params[0])
q2.RY(params[1])
q1.CNOT(q2)
q1.RY(params[0])
q2.RY(params[1])
q2.CNOT(q1)
q1.RY(params[0])
q2.RY(params[1])
def ansatz(reg, params):
n_qubits = len(reg)
depth = n_qubits
for i in range(depth):
for j in range(n_qubits):
if(j < n_qubits-1):
reg[j+1].CNOT(reg[j])
reg[i].RY(params[j])
def expectation_Z(theta):
# H = Sz
# <psi|H|psi> = <psi| [[1,0],[0,-1]] |psi>
# |0><0|=[[1,0],[0,0]] e |1><1|=[[0,0],[0,1]]
# <psi|H|psi> = <psi|0><0|psi> - <psi|1><1|psi> = <0> - <1>
logicQuBit = LogicQuBit(1)
q = Qubit()
ansatz_1q(q, theta)
res = logicQuBit.Measure([q])
return res[0]-res[1]
```
```python
params = np.linspace(0.0, 2 * np.pi, 25)
data = [expectation_Z(theta) for theta in params]
plt.xlabel('angle')
plt.ylabel('expected value')
plt.plot(params, data)
plt.show()
```
```python
theta = 0.0
minimum = minimize(expectation_Z, theta, method='Nelder-Mead', options={'initial_simplex': np.array([[0.0], [0.05]]), 'xatol': 1.0e-2})
print(minimum)
```
final_simplex: (array([[3.14375],
[3.1375 ]]), array([-0.99999767, -0.99999163]))
fun: -0.9999976729291358
message: 'Optimization terminated successfully.'
nfev: 28
nit: 14
status: 0
success: True
x: array([3.14375])
/home/cleoner/anaconda3/lib/python3.8/site-packages/scipy/optimize/optimize.py:586: ComplexWarning: Casting complex values to real discards the imaginary part
fsim[k] = func(sim[k])
/home/cleoner/anaconda3/lib/python3.8/site-packages/scipy/optimize/optimize.py:611: ComplexWarning: Casting complex values to real discards the imaginary part
fsim[-1] = fxe
/home/cleoner/anaconda3/lib/python3.8/site-packages/scipy/optimize/optimize.py:637: ComplexWarning: Casting complex values to real discards the imaginary part
fsim[-1] = fxcc
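As a quick sanity check, not part of the original notebook: $R_Y(\pi)$ maps $|0\rangle$ to $|1\rangle$ (up to a global phase), whose $\langle S_z\rangle$ is $-1$, so the minimizer should land near $\theta = \pi$:
```python
print(expectation_Z(np.pi))         # should be close to -1
print(expectation_Z(minimum.x[0]))  # value at the optimum found above
```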
```python
```
# Imports
```
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from IPython import display
plt.style.use('seaborn-white')
```
# Read and process data.
Download the file from this URL: https://drive.google.com/file/d/1UWWIi-sz9g0x3LFvkIZjvK1r2ZaCqgGS/view?usp=sharing
```
data = open('text.txt', 'r').read()
```
Process data and calculate indices
```
chars = list(set(data))
data_size, X_size = len(data), len(chars)
print("Corona Virus article has %d characters, %d unique characters" %(data_size, X_size))
char_to_idx = {ch:i for i,ch in enumerate(chars)}
idx_to_char = {i:ch for i,ch in enumerate(chars)}
```
# Constants and Hyperparameters
```
Hidden_Layer_size = 10 #size of the hidden layer
Time_steps = 10 # Number of time steps (length of the sequence) used for training
learning_rate = 1e-1 # Learning Rate
weight_sd = 0.1 #Standard deviation of weights for initialization
z_size = Hidden_Layer_size + X_size #Size of concatenation(H, X) vector
```
# Activation Functions and Derivatives
```
def sigmoid(x): # sigmoid function
    return 1 / (1 + np.exp(-x))

def dsigmoid(y): # derivative of sigmoid, expects y = sigmoid(x)
    return y * (1 - y)

def tanh(x): # tanh function
    return np.tanh(x)

def dtanh(y): # derivative of tanh, expects y = tanh(x)
    return 1 - y * y
```
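A quick self-check of the four functions; these calls mirror the quiz questions below, so the printed values are whatever your implementations produce:
```
s = sigmoid(0)
print(s)                           # sigmoid(0)
print(dsigmoid(s))                 # dsigmoid(sigmoid(0))
print(tanh(dsigmoid(s)))           # tanh(dsigmoid(sigmoid(0)))
print(dtanh(tanh(dsigmoid(s))))    # dtanh(tanh(dsigmoid(sigmoid(0))))
```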
# Quiz Question 1
What is the value of sigmoid(0) calculated from your code? (Answer up to 1 decimal point, e.g. 4.2 and NOT 4.29999999, no rounding off).
# Quiz Question 2
What is the value of dsigmoid(sigmoid(0)) calculated from your code? (Answer up to 2 decimal points, e.g. 4.29 and NOT 4.29999999, no rounding off).
# Quiz Question 3
What is the value of tanh(dsigmoid(sigmoid(0))) calculated from your code? (Answer up to 5 decimal points, e.g. 4.29999 and NOT 4.29999999, no rounding off).
# Quiz Question 4
What is the value of dtanh(tanh(dsigmoid(sigmoid(0)))) calculated from your code? (Answer up to 5 decimal points, e.g. 4.29999 and NOT 4.29999999, no rounding off).
# Parameters
```
class Param:
def __init__(self, name, value):
self.name = name
self.v = value # parameter value
self.d = np.zeros_like(value) # derivative
self.m = np.zeros_like(value) # momentum for Adagrad
```
We use random weights with normal distribution (0, weight_sd) for tanh activation function and (0.5, weight_sd) for `sigmoid` activation function.
Biases are initialized to zeros.
# LSTM
You are making this network, please note f, i, c and o (also "v") in the image below:
Please note that we are concatenating the old_hidden_vector and new_input.
# Quiz Question 4
In the class definition below, what should be size_a, size_b, and size_c? ONLY use the variables defined above.
```
size_a = Hidden_Layer_size  # gate outputs live in the hidden space
size_b = z_size             # gates act on the concatenated [h, x] vector
size_c = X_size             # the output layer predicts a distribution over characters
class Parameters:
def __init__(self):
self.W_f = Param('W_f', np.random.randn(size_a, size_b) * weight_sd + 0.5)
self.b_f = Param('b_f', np.zeros((size_a, 1)))
self.W_i = Param('W_i', np.random.randn(size_a, size_b) * weight_sd + 0.5)
self.b_i = Param('b_i', np.zeros((size_a, 1)))
self.W_C = Param('W_C', np.random.randn(size_a, size_b) * weight_sd)
self.b_C = Param('b_C', np.zeros((size_a, 1)))
self.W_o = Param('W_o', np.random.randn(size_a, size_b) * weight_sd + 0.5)
self.b_o = Param('b_o', np.zeros((size_a, 1)))
#For final layer to predict the next character
self.W_v = Param('W_v', np.random.randn(X_size, size_a) * weight_sd)
self.b_v = Param('b_v', np.zeros((size_c, 1)))
def all(self):
return [self.W_f, self.W_i, self.W_C, self.W_o, self.W_v,
self.b_f, self.b_i, self.b_C, self.b_o, self.b_v]
parameters = Parameters()
```
Look at these operations which we'll be writing:
**Concatenation of h and x:**
$z\:=\:\left[h_{t-1},\:x\right]$
$f_t=\sigma\left(W_f\cdot z\:+\:b_f\:\right)$
$i_t=\sigma\left(W_i\cdot z\:+\:b_i\right)$
$\overline{C_t}=\tanh\left(W_C\cdot z\:+\:b_C\right)$
$C_t=f_t\ast C_{t-1}+i_t\ast \overline{C}_t$
$o_t=\sigma\left(W_o\cdot z\:+\:b_o\right)$
$h_t=o_t\ast\tanh\left(C_t\right)$
**Logits:**
$v_t=W_v\cdot h_t+b_v$
**Softmax:**
$\hat{y}=softmax\left(v_t\right)$
```
def forward(x, h_prev, C_prev, p = parameters):
assert x.shape == (X_size, 1)
assert h_prev.shape == (Hidden_Layer_size, 1)
assert C_prev.shape == (Hidden_Layer_size, 1)
z = np.row_stack((h_prev, x))
    f = sigmoid(np.dot(p.W_f.v, z) + p.b_f.v)
    i = sigmoid(np.dot(p.W_i.v, z) + p.b_i.v)
    C_bar = tanh(np.dot(p.W_C.v, z) + p.b_C.v)
    C = f * C_prev + i * C_bar
    o = sigmoid(np.dot(p.W_o.v, z) + p.b_o.v)
    h = o * tanh(C)
    v = np.dot(p.W_v.v, h) + p.b_v.v
y = np.exp(v) / np.sum(np.exp(v)) #softmax
return z, f, i, C_bar, C, o, h, v, y
```
You must finish the function above before you can attempt the questions below.
# Quiz Question 5
What is the output of 'print(len(forward(np.zeros((X_size, 1)), np.zeros((Hidden_Layer_size, 1)), np.zeros((Hidden_Layer_size, 1)), parameters)))'?
# Quiz Question 6.
Assuming you have fixed the forward function, run this command:
z, f, i, C_bar, C, o, h, v, y = forward(np.zeros((X_size, 1)), np.zeros((Hidden_Layer_size, 1)), np.zeros((Hidden_Layer_size, 1)))
Now, find these values:
1. print(z.shape)
2. print(np.sum(z))
3. print(np.sum(f))
Copy and paste exact values you get in the logs into the quiz.
```
z, f, i, C_bar, C, o, h, v, y = forward(np.zeros((X_size, 1)), np.zeros((Hidden_Layer_size, 1)), np.zeros((Hidden_Layer_size, 1)))
```
# Backpropagation
Here we are defining the backpropagation. It's too complicated, here is the whole code. (Please note that this would work only if your earlier code is perfect).
```
def backward(target, dh_next, dC_next, C_prev,
z, f, i, C_bar, C, o, h, v, y,
p = parameters):
assert z.shape == (X_size + Hidden_Layer_size, 1)
assert v.shape == (X_size, 1)
assert y.shape == (X_size, 1)
for param in [dh_next, dC_next, C_prev, f, i, C_bar, C, o, h]:
assert param.shape == (Hidden_Layer_size, 1)
dv = np.copy(y)
dv[target] -= 1
p.W_v.d += np.dot(dv, h.T)
p.b_v.d += dv
dh = np.dot(p.W_v.v.T, dv)
dh += dh_next
do = dh * tanh(C)
do = dsigmoid(o) * do
p.W_o.d += np.dot(do, z.T)
p.b_o.d += do
dC = np.copy(dC_next)
dC += dh * o * dtanh(tanh(C))
dC_bar = dC * i
dC_bar = dtanh(C_bar) * dC_bar
p.W_C.d += np.dot(dC_bar, z.T)
p.b_C.d += dC_bar
di = dC * C_bar
di = dsigmoid(i) * di
p.W_i.d += np.dot(di, z.T)
p.b_i.d += di
df = dC * C_prev
df = dsigmoid(f) * df
p.W_f.d += np.dot(df, z.T)
p.b_f.d += df
dz = (np.dot(p.W_f.v.T, df)
+ np.dot(p.W_i.v.T, di)
+ np.dot(p.W_C.v.T, dC_bar)
+ np.dot(p.W_o.v.T, do))
dh_prev = dz[:Hidden_Layer_size, :]
dC_prev = f * dC
return dh_prev, dC_prev
```
# Forward and Backward Combined Pass
Let's first clear the gradients before each backward pass
```
def clear_gradients(params = parameters):
for p in params.all():
p.d.fill(0)
```
Clip gradients to mitigate exploding gradients
```
def clip_gradients(params = parameters):
for p in params.all():
np.clip(p.d, -1, 1, out=p.d)
```
Calculate and store the values in forward pass. Accumulate gradients in backward pass and clip gradients to avoid exploding gradients.
input, target are list of integers, with character indexes.
h_prev is the array of initial h at h−1 (size H x 1)
C_prev is the array of initial C at C−1 (size H x 1)
Returns loss, final hT and CT
```
def forward_backward(inputs, targets, h_prev, C_prev):
    global parameters
# To store the values for each time step
x_s, z_s, f_s, i_s, = {}, {}, {}, {}
C_bar_s, C_s, o_s, h_s = {}, {}, {}, {}
v_s, y_s = {}, {}
# Values at t - 1
h_s[-1] = np.copy(h_prev)
C_s[-1] = np.copy(C_prev)
loss = 0
# Loop through time steps
assert len(inputs) == Time_steps
for t in range(len(inputs)):
x_s[t] = np.zeros((X_size, 1))
x_s[t][inputs[t]] = 1 # Input character
(z_s[t], f_s[t], i_s[t],
C_bar_s[t], C_s[t], o_s[t], h_s[t],
v_s[t], y_s[t]) = \
forward(x_s[t], h_s[t - 1], C_s[t - 1]) # Forward pass
loss += -np.log(y_s[t][targets[t], 0]) # Loss for at t
clear_gradients()
dh_next = np.zeros_like(h_s[0]) #dh from the next character
    dC_next = np.zeros_like(C_s[0]) #dC from the next character
for t in reversed(range(len(inputs))):
# Backward pass
dh_next, dC_next = \
backward(target = targets[t], dh_next = dh_next,
dC_next = dC_next, C_prev = C_s[t-1],
z = z_s[t], f = f_s[t], i = i_s[t], C_bar = C_bar_s[t],
C = C_s[t], o = o_s[t], h = h_s[t], v = v_s[t],
y = y_s[t])
clip_gradients()
return loss, h_s[len(inputs) - 1], C_s[len(inputs) - 1]
```
# Sample the next character
```
def sample(h_prev, C_prev, first_char_idx, sentence_length):
x = np.zeros((X_size, 1))
x[first_char_idx] = 1
h = h_prev
C = C_prev
indexes = []
for t in range(sentence_length):
_, _, _, _, C, _, h, _, p = forward(x, h, C)
idx = np.random.choice(range(X_size), p=p.ravel())
x = np.zeros((X_size, 1))
x[idx] = 1
indexes.append(idx)
return indexes
```
# Training (Adagrad)
Update the graph and display a sample output
```
def update_status(inputs, h_prev, C_prev):
#initialized later
global plot_iter, plot_loss
global smooth_loss
# Get predictions for 200 letters with current model
sample_idx = sample(h_prev, C_prev, inputs[0], 200)
txt = ''.join(idx_to_char[idx] for idx in sample_idx)
# Clear and plot
plt.plot(plot_iter, plot_loss)
display.clear_output(wait=True)
plt.show()
#Print prediction and loss
print("----\n %s \n----" % (txt, ))
print("iter %d, loss %f" % (iteration, smooth_loss))
```
# Update Parameters
\begin{align}
\theta_i &= \theta_i - \eta\frac{d\theta_i}{\sqrt{\sum_{\tau} \left(d\theta_i^{(\tau)}\right)^2 + \epsilon}} \\
d\theta_i &= \frac{\partial L}{\partial \theta_i}
\end{align}
```
def update_parameters(params = parameters):
for p in params.all():
p.m += p.d * p.d # Calculate sum of gradients
#print(learning_rate * dparam)
p.v += -(learning_rate * p.d / np.sqrt(p.m + 1e-8))
```
Initialize the exponentially smoothed loss and the arrays used for plotting
```
# Exponential average of loss
# Initialize to the loss of a random model
smooth_loss = -np.log(1.0 / X_size) * Time_steps
iteration, pointer = 0, 0
# For the graph
plot_iter = np.zeros((0))
plot_loss = np.zeros((0))
```
# Training Loop
```
iter = 1000
while iter > 0:
# Reset
if pointer + Time_steps >= len(data) or iteration == 0:
g_h_prev = np.zeros((Hidden_Layer_size, 1))
g_C_prev = np.zeros((Hidden_Layer_size, 1))
pointer = 0
inputs = ([char_to_idx[ch]
for ch in data[pointer: pointer + Time_steps]])
targets = ([char_to_idx[ch]
for ch in data[pointer + 1: pointer + Time_steps + 1]])
loss, g_h_prev, g_C_prev = \
forward_backward(inputs, targets, g_h_prev, g_C_prev)
smooth_loss = smooth_loss * 0.999 + loss * 0.001
# Print every hundred steps
if iteration % 100 == 0:
update_status(inputs, g_h_prev, g_C_prev)
    update_parameters()
plot_iter = np.append(plot_iter, [iteration])
plot_loss = np.append(plot_loss, [loss])
pointer += Time_steps
iteration += 1
iter = iter -1
```
# Quiz Question 7.
Run the above code for 50000 iterations, making sure that Hidden_Layer_size is 100 and Time_steps is 40. What is the loss value you're seeing?
# Software profesional en Acústica 2020-21 (M2i)
*This notebook contains a modification of the notebook [FEM_Helmholtz_equation_Robin](https://github.com/spatialaudio/computational_acoustics/blob/master/FEM_Helmholtz_equation_Robin.ipynb), created by Sascha Spors, Frank Schultz, Computational Acoustics Examples, 2018. The text/images are licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/). The code is released under the [MIT license](https://opensource.org/licenses/MIT).*
# Numerical Solution of the Helmholtz Equation in unbounded domains with Perfectly Matched Layers and a Finite Element Method
This notebook illustrates the numerical solution of the wave equation for an harmonic excitation stated in an unbounded domain using a Finite Element Method. To truncate the unbounded domain, the Perfectly Matched Layer (PML) technique is used.
## Problem Statement
The homogeneous Helmholtz equation is given as
\begin{equation}
c^2\Delta P(\mathbf{x}) + \omega^2 P(\mathbf{x}) = 0\qquad \text{for }\quad\mathbf{x}=(x,y)\in\Omega_F
\end{equation}
where $\Omega_F$ is the fluid subdomain where the numerical solution is computed. This subdomain $\Omega_F$ is surrounded by an artificial sponge layer, where the PML governing equations are stated:
\begin{equation}
c^2\mathrm{div}(C\nabla P(\mathbf{x})) + \omega^2 M P(\mathbf{x}) = 0\qquad \text{for }\quad\mathbf{x}=(x,y)\in\Omega_{PML}
\end{equation}
where
$$
C=
\begin{pmatrix}
\frac{\gamma_y}{\gamma_x} & 0\\
0 & \frac{\gamma_x}{\gamma_y}
\end{pmatrix},
\qquad
M = \gamma_x\gamma_y
$$
with
$$
\gamma_{x}(x)=1+\frac{i}{\omega}\sigma_{x}(x),\qquad \gamma_{y}(y)=1+\frac{i}{\omega}\sigma_{y}(y)
$$
The functions $\sigma_x$ and $\sigma_y$ are the so-called PML absorption profiles. These functions can be constant, quadratic or even singular; the only requirements are that $\sigma_x$ and $\sigma_y$ be positive and monotonically increasing.
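For instance, the quadratic profile used later in this notebook (normalized by $\omega$ in the code) looks as follows. This standalone sketch only visualizes its shape, with illustrative values for $\sigma_0$, $L_x$ and the layer thickness:
```python
import numpy as np
import matplotlib.pyplot as plt

sigma0, Lx, th = 2e3, 2.0, 1.275   # illustrative values (see the setup below)
x = np.linspace(Lx, Lx + th, 200)
sigma_x = sigma0 * (x - Lx)**2     # zero at the fluid/PML interface, increasing inside the layer
plt.plot(x, sigma_x)
plt.xlabel('x'); plt.ylabel(r'$\sigma_x(x)$'); plt.grid(True)
plt.show()
```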
The set of governing equations must be completed with boundary conditions: in the fluid subdomain, Dirichlet boundary conditions $P=1$ are imposed on the interior fluid boundary, and null Dirichlet boundary conditions $P=0$ on the exterior boundary of the PML subdomain. Finally, to couple the fluid and PML subdomains, it is imposed that
\begin{align}
u|_{\Omega_{F}}=u|_{\Omega_{PML}}\qquad \text{on }\quad\partial\Omega_{F}\cap \partial\Omega_{PML},\\
\nabla u|_{\Omega_{F}}\cdot \mathbf{n}=C\nabla u|_{\Omega_{PML}}\cdot \mathbf{n}\qquad \text{on }\quad\partial\Omega_{F}\cap \partial\Omega_{PML},\\
\end{align}
which are the natural coupling boundary conditions to avoid any reflection between both subdomains.
## Variational Formulation
If we extend the definition of the absorption PML profiles to the fluid subdomain as $\sigma_x(x)=0$ and $\sigma_y(y)=0$ for $\mathbf{x}=(x,y)\in\Omega_{F}$, then the Helmholtz equation and the PML equation can be written in the same manner in $\Omega=\Omega_F\cup\Omega_{PML}$. As before, separating the real and imaginary parts of the equations and using a Green's formula, it holds that the pressure field $P=P_{re}+iP_{im}$ satisfies
\begin{align}
&c^2\int_\Omega (C_{re}\nabla P_{re}-C_{im}\nabla P_{im})\cdot \nabla Q_{re} \,\mathrm{d}x
-\omega^2\int_{\Omega}(M_{re}P_{re}-M_{im}P_{im})Q_{re} \,\mathrm{d}x = \int_\Omega f_{re}Q_{re}\,\mathrm{d}x, \\
&c^2\int_\Omega (C_{im}\nabla P_{re}+C_{re}\nabla P_{im})\cdot \nabla Q_{im} \,\mathrm{d}x
-\omega^2\int_{\Omega}(M_{im}P_{re}+M_{re}P_{im})Q_{im} \,\mathrm{d}x = \int_\Omega f_{im}Q_{im}\,\mathrm{d}x,
\end{align}
for all test functions $Q=Q_{re}+iQ_{im}$,
where the PML tensor is $C=C_{re}+iC_{im}$, $M=M_{re}+iM_{im}$, and $f=f_{re}+if_{im}$ is a generic source term (zero in this example).
It is common to express this integral equation in terms of the bilinear $a((P_{re},P_{im}), (Q_{re},Q_{im}))$ and linear $L((Q_{re},Q_{im}))$ forms
\begin{multline}
a((P_{re},P_{im}), (Q_{re},Q_{im})) = c^2\int_\Omega (C_{re}\nabla P_{re}-C_{im}\nabla P_{im})\cdot \nabla Q_{re} \,\mathrm{d}x
+ c^2\int_\Omega (C_{im}\nabla P_{re}+C_{re}\nabla P_{im})\cdot \nabla Q_{im} \,\mathrm{d}x \\
-\omega^2\int_{\Omega}(M_{re}P_{re}-M_{im}P_{im})Q_{re} \,\mathrm{d}x-\omega^2\int_{\Omega}(M_{im}P_{re}+M_{re}P_{im})Q_{im} \,\mathrm{d}x
\end{multline}
\begin{equation}
L((Q_{re},Q_{im})) = \int_\Omega f_{re}Q_{re}\mathrm{d}x + \int_\Omega f_{im}Q_{im}\mathrm{d}x ,
\end{equation}
so that the variational problem reads: find $P$ such that
\begin{equation}
a(P, Q) = L(Q).
\end{equation}
In this case, the source term $f=f_{re}+if_{im}=0$.
## Numerical Solution
The numerical solution of the variational problem is based on [FEniCS](https://fenicsproject.org/), an open-source framework for numerical solution of PDEs.
Its high-level Python interface `dolfin` is used in the following to define the problem and compute the solution.
The implementation is based on the variational formulation derived above.
It is common in the FEM to denote the real and imaginary parts of the FEM solution of the problem by $u_{re}$, $u_{im}$ and the corresponding test functions by $v_{re}$ and $v_{im}$.
The definition of the problem in FEniCS is very close to the mathematical formulation of the problem.
#### Import modules and definte the physical data and the geometrical setting
```python
import numpy as np
import scipy.special as spe
from dolfin import *
from mshr import *
import matplotlib.pyplot as plt
# Parameter values
amplitude = 1.0 # amplitude [Pa]
degree = 0 # number of the Fourier mode in the exact solution
omega = 2*np.pi*200. # angular frequency [rad/s]
vel = 340 # sound speed [m/s]
# Geometrical setting
Radius = 1.0 # radius of a circular obstacle center at (0,0)
Lx = 2.0; Ly = 2.0; # dimensions of the fluid computational domain
th = 0.75 *vel/(omega/(2.*np.pi)) # PML thickness = 1.5*wavelength
```
#### Compute mesh
```python
# Computational domain
domain = Rectangle(Point(-Lx-th,-Ly-th),Point(Lx+th,Ly+th)) - Circle(Point(0, 0), Radius)
# Set PML subdomains (only for obtaining conformal meshes with the inner PML boundaries and identify the fluid domain)
domain.set_subdomain(1, domain)
domain.set_subdomain(2, Rectangle(Point(-Lx-th,-Ly-th),Point(-Lx,Ly+th)) + Rectangle(Point(Lx,-Ly-th),Point(Lx+th,Ly+th)))
domain.set_subdomain(3, Rectangle(Point(-Lx-th,-Ly-th),Point(Lx+th,-Ly)) + Rectangle(Point(-Lx-th,Ly),Point(Lx+th,Ly+th)))
domain.set_subdomain(4, Rectangle(Point(-Lx-th,-Ly-th),Point(-Lx,-Ly)) + Rectangle(Point(Lx,-Ly-th),Point(Lx+th,-Ly))
+Rectangle(Point(-Lx-th,Ly),Point(-Lx,Ly+th)) + Rectangle(Point(Lx,Ly),Point(Lx+th,Ly+th)))
# Create mesh
mesh = generate_mesh(domain, 60) # Values 30,60,120,240
print("Number of vertices: ",mesh.coordinates().shape[0])
print("Mesh h_min, h_max: ", mesh.hmin(), mesh.hmax())
# Plot mesh
plot(mesh)
plt.show()
```
#### Compute boundary and sudomain markers
```python
# Initialize subdomain and boundary markers
tol = 1e-3
PML_boundary = CompiledSubDomain('on_boundary and (near(fabs(x[0]),Lx+th,tol) or near(fabs(x[1]),Ly+th,tol))', Lx=Lx, Ly=Ly, th=th, tol=tol)
circle_boundary = CompiledSubDomain('on_boundary and pow(x[0],2)+pow(x[1],2) < pow(R,2) + tol', R=Radius, tol=tol)
# Initialize mesh function for boundary
boundary_markers = MeshFunction('size_t', mesh, mesh.topology().dim() - 1)
boundary_markers.set_all(0)
circle_boundary.mark(boundary_markers, 1)
PML_boundary.mark(boundary_markers, 2)
# Initialize mesh function for the physical domain
domain_markers = MeshFunction('size_t', mesh, 2, mesh.domains())
# Plot subdomain markers
plot(domain_markers)
plt.show()
```
#### Define the functional spaces and the integral measures
```python
# Define new measures associated with each exterior boundaries
dx = Measure('dx', domain=mesh, subdomain_data=domain_markers)
ds = Measure('ds', domain=mesh, subdomain_data=boundary_markers)
# Define function space (Lagrange 1st polynomials)
P1 = FiniteElement("Lagrange", mesh.ufl_cell(), 1)
Q = FunctionSpace(mesh, P1)
V = FunctionSpace(mesh, P1 * P1)
# Define variational unknowns (real and imaginary parts of the pressure)
(u_re, u_im) = TrialFunctions(V)
(v_re, v_im) = TestFunctions(V)
```
#### Define source terms and Dirichlet boundary data
```python
# Define source term for the real and the imaginary part and null boundary conditions
zero = Constant("0.0")
f_re = zero
f_im = zero
# Define boundary conditions in the whole boundary for the real and the imaginary part
# Use homogeneous boundary conditions on the exterior boundary of the PML
g_re = Expression('cos(n*atan2(x[1],x[0]))', n=degree, degree=5)
g_im = Expression('sin(n*atan2(x[1],x[0]))', n=degree, degree=5)
bcs = [DirichletBC(V.sub(0), g_re, boundary_markers, 1), DirichletBC(V.sub(1), g_im, boundary_markers, 1),
DirichletBC(V.sub(0), zero, boundary_markers, 2), DirichletBC(V.sub(1), zero, boundary_markers, 2)]
```
#### Define the fluid and the PML coefficients
```python
# Define the absorption PML profile
sigma0=2e3
sx = Expression('fabs(x[0]) > Lx ? s0*pow(fabs(x[0])-Lx,2)/w : 0.', Lx=Lx, w=omega, s0=sigma0, degree=2)
sy = Expression('fabs(x[1]) > Ly ? s0*pow(fabs(x[1])-Ly,2)/w : 0.', Ly=Ly, w=omega, s0=sigma0, degree=2)
# Define the PML coefficients
gammax_div_gammay_re = 1/(sy*sy + 1) + (sx*sy)/(sy*sy + 1)
gammax_div_gammay_im = sx/(sy*sy + 1) - sy/(sy*sy + 1)
gammay_div_gammax_re = 1/(sx*sx + 1) + (sx*sy)/(sx*sx + 1)
gammay_div_gammax_im = sy/(sx*sx + 1) - sx/(sx*sx + 1)
gammax_dot_gammay_re = 1 - sx*sy
gammax_dot_gammay_im = sx + sy
# Define PDE coefficients
c2 = Expression('pow(vel,2)', vel=vel, degree=0)
w2 = Expression('pow(omega,2)', omega=omega, degree=0)
# Define PML matrices
C_re = as_matrix(((gammay_div_gammax_re, zero), (zero, gammax_div_gammay_re)))
C_im = as_matrix(((gammay_div_gammax_im, zero), (zero, gammax_div_gammay_im)))
M_re = gammax_dot_gammay_re
M_im = gammax_dot_gammay_im
```
#### Define the variational problem and compute the FEM solution
```python
# Define variational formulation in the PML and fluid subdomains
a = c2*inner((C_re*grad(u_re)-C_im*grad(u_im)), grad(v_re))*dx + c2*inner((C_im*grad(u_re)+C_re*grad(u_im)), grad(v_im))*dx \
-w2*(M_re*u_re-M_im*u_im)*v_re*dx - w2*(M_im*u_re+M_re*u_im)*v_im*dx
L = f_re*v_re*dx + f_im*v_im*dx
# Assembly matrix and source vector
A=assemble(a)
b=assemble(L)
# Apply boundary conditions to matrix and source vector
for bc in bcs:
bc.apply(A)
bc.apply(b)
# Compute solution and get real and imaginary parts
w = Function(V)
solve(A, w.vector(), b)
(u_re, u_im) = w.split(True)
u_re.rename("Re(u)", "Real FE approx.")
u_im.rename("Im(u)", "Imag. FE approx.")
```
```python
def plot_soundfield(u):
'''plots solution of FEM-based simulation'''
fig = plt.figure(figsize=(10,10))
fig = dolfin.plot(u)
plt.xlabel(r'$x$ in m')
plt.ylabel(r'$y$ in m')
plt.colorbar(fig, fraction=0.038, pad=0.04);
# plot sound field
plot_soundfield(u_re)
plt.title(r'$P_{re}$')
plot_soundfield(sqrt(u_re**2+u_im**2))
plt.title(r'$|P|$');
```
#### Define the exact solution and compute the $L^2$-relative error in the fluid subdomain
In this case, since the Dirichlet data is a harmonic expression (that is, a complex exponential $e^{in\theta}$) on the interior circle, the solution is given by
$$
P(r,\theta)=e^{in\theta}\,\frac{H^{(1)}_{n}(kr)}{H^{(1)}_{n}(kR)}
$$
with $k=\omega/c$, $r=\sqrt{x^2+y^2}$, $\theta=\mathrm{arctan}(y/x)$, and $R$ the radius of the interior circle (so that $P=e^{in\theta}$ on $r=R$).
```python
# Define the exact solution based on the Hankel function of first kind and order "degree"
# Real part of the exact solution
class Exact_re(UserExpression):
def __init__(self, Radius, A, k, n, **kwargs):
self.Radius = Radius
self.A = A
self.k = k
self.n = n
super().__init__(**kwargs)
def eval(self, value, x):
r=np.sqrt(x[0]**2+x[1]**2)
theta=np.arctan2(x[1],x[0])
val = spe.hankel1(self.n, self.k*self.Radius)
value[0] = np.real(np.exp(1j*self.n*theta)*spe.hankel1(self.n, self.k*r)/val)
def value_shape(self):
return ()
class Exact_im(UserExpression):
def __init__(self, Radius, A, k, n, **kwargs):
self.Radius = Radius
self.A = A
self.k = k
self.n = n
super().__init__(**kwargs)
def eval(self, value, x):
r=np.sqrt(x[0]**2+x[1]**2)
theta=np.arctan2(x[1],x[0])
val = spe.hankel1(self.n, self.k*self.Radius)
value[0] = np.imag(np.exp(1j*self.n*theta)*spe.hankel1(self.n, self.k*r)/val)
def value_shape(self):
return ()
# Expressions of the exact solution and interpolation
uex_re = Exact_re(Radius=Radius, A=amplitude, k=omega/vel, n=degree, degree=3)
uex_im = Exact_im(Radius=Radius, A=amplitude, k=omega/vel, n=degree, degree=3)
uex_re_interp = interpolate(uex_re, Q)
uex_im_interp = interpolate(uex_im, Q)
uex_re_interp.rename("Re(u_ex)", "Real exact")
uex_im_interp.rename("Im(u_ex)", "Imag. exact")
# Compute relative error in L2-norm of the complex-value functions only in the fluid domain
err_re = Function(Q)
err_im = Function(Q)
err_re.vector().set_local(uex_re_interp.vector().get_local()-u_re.vector().get_local())
err_im.vector().set_local(uex_im_interp.vector().get_local()-u_im.vector().get_local())
error_rel=np.sqrt(assemble(err_re**2*dx(1))+assemble(err_im**2*dx(1)))/np.sqrt(assemble(uex_re_interp**2*dx(1))+assemble(uex_im_interp**2*dx(1)))
print("L2-relative error (%): ", error_rel*100.)
```
L2-relative error (%): 1.177577722523559
**Copyright**
This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT).
# Problem Set 2: Classification
To run and solve this assignment, one must have a working IPython Notebook installation. The easiest way to set it up for both Windows and Linux is to install [Anaconda](https://www.continuum.io/downloads). Then save this file to your computer (use "Raw" link on gist\github), run Anaconda and choose this file in Anaconda's file explorer. Use `Python 3` version. Below statements assume that you have already followed these instructions. If you are new to Python or its scientific library, Numpy, there are some nice tutorials [here](https://www.learnpython.org/) and [here](http://www.scipy-lectures.org/).
To run code in a cell or to render [Markdown](https://en.wikipedia.org/wiki/Markdown)+[LaTeX](https://en.wikipedia.org/wiki/LaTeX) press `Ctr+Enter` or `[>|]`(like "play") button above. To edit any code or text cell [double]click on its content. To change cell type, choose "Markdown" or "Code" in the drop-down menu above.
If a certain output is given for some cells, that means that you are expected to get similar results in order to receive full points (small deviations are fine). For some parts we have already written the code for you. You should read it closely and understand what it does.
Total: 100 points.
### 1. Logistic Regression
In this part of the exercise, you will build a logistic regression model to predict whether a student
gets admitted into a university.
Suppose that you are the administrator of a university department and you want to determine
each applicant’s chance of admission based on their results on two exams. You have historical
data from previous applicants in *ex2data1.txt* that you can use as a training set for logistic regression. For each
training example, you have the applicant’s scores on two exams and the admissions decision.
Your task is to build a classification model that estimates an applicant’s probability of admission based on the scores from those two exams. This outline and code framework will guide you through the exercise.
**1\.1 Implementation**
```python
import sys
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
print('Tested with:')
print('Python', sys.version)
print({x.__name__: x.__version__ for x in [np, matplotlib]})
```
Tested with:
Python 3.6.5 |Anaconda, Inc.| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)]
{'numpy': '1.14.3', 'matplotlib': '2.2.2'}
**1.1.1 Visualizing the data**
Before starting to implement any learning algorithm, it is always good to visualize the data if possible. This first part of the code will load the data and display it on a 2-dimensional plot by calling the function plotData. The axes are the two exam scores, and the positive and negative examples are shown with different markers.
```python
# it is good to isolate logical parts to avoid variables leaking into the
# global scope and messing up your logic later in weird ways
def read_classification_csv_data(fn, add_ones=False):
# read comma separated data
data = np.loadtxt(fn, delimiter=',')
X_, y_ = data[:, :-1], data[:, -1, None] # a fast way to keep last dim
# while X_ is a (100,2) matrix and y_ is the label
print(X_.shape, X_.min(), X_.max(), X_.dtype)
print(y_.shape, y_.min(), y_.max(), y_.dtype)
print("-------------------------")
# insert the column of 1's into the "X" matrix (for bias)
X = np.insert(X_, X_.shape[1], 1, axis=1) if add_ones else X_
y = y_.astype(np.int32) # set the label to int
return X, y
X_data, y_data = read_classification_csv_data('ex2data1.txt', add_ones=True)
print(X_data.shape, X_data.min(), X_data.max(), X_data.dtype)
print(y_data.shape, y_data.min(), y_data.max(), y_data.dtype)
```
(100, 2) 30.05882244669796 99.82785779692128 float64
(100, 1) 0.0 1.0 float64
-------------------------
(100, 3) 1.0 99.82785779692128 float64
(100, 1) 0 1 int32
```python
# how does the *X[y.ravel()==1, :2].T trick work?
# https://docs.python.org/3/tutorial/controlflow.html#unpacking-argument-lists
def plot_data(X, y, labels, markers, xlabel, ylabel, figsize=(10, 6), ax=None):
if figsize is not None:
plt.figure(figsize=figsize)
ax = ax or plt.gca()
for label_id, (label, marker) in enumerate(zip(labels, markers)):
        ax.plot(*X[y.ravel()==label_id, :2].T, marker, label=label)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
plt.legend()
ax.grid(True)
student_plotting_spec = {
'X': X_data,
'y': y_data,
'xlabel': 'Exam 1 score',
'ylabel': 'Exam 2 score',
'labels': ['Not admitted', 'Admitted'],
'markers': ['yo', 'k+'],
'figsize': (10, 6)
}
plot_data(**student_plotting_spec)
plt.show()
```
**1.1.2 [5pts] Sigmoid function**
Before you start with the actual cost function, recall that the logistic regression hypothesis is defined as:
$h_\theta(x) = g(\theta^Tx)$
where function g is the sigmoid function. The sigmoid function is defined as:
$g(z) = \dfrac{1}{1+e^{-z}}$
Your first step is to implement/find a sigmoid function so it can be called by the rest of your program. Your code should also work with vectors and matrices. For a matrix, your function should perform the sigmoid function on every element.
When you are finished, (a) plot the sigmoid function, and (b) test the function with a scalar, a vector, and a matrix. For scalar large positive values of x, the sigmoid should be close to 1, while for scalar large negative values, the sigmoid should be close to 0. Evaluating sigmoid(0) should give you exactly 0.5.
```python
# check out scipy.special for a great variety of vectorized functions
# remember that sigmoid is the inverse of logit function
# maybe worth checking out scipy.special.logit first
def sigmoid(z):
    # vectorized: works element-wise on scalars, vectors and matrices
    return 1. / (1 + np.exp(-z))
def check_that_sigmoid_f(f):
# don't use np.arange with float step because it works as
# val_{i+1} = val_i + step while val_i < end
    # what might go wrong with float precision?
x_test = np.linspace(-10, 10, 50)
sigm_test = f(x_test)
plt.plot(x_test, sigm_test)
plt.title("Sigmoid function")
plt.grid(True)
plt.show()
# why should analytical_diff almost== finite_diff for sigmoid?
analytical_diff = sigm_test*(1-sigm_test)
finite_step = x_test[1]-x_test[0]
finite_diff = np.diff(sigm_test) / finite_step
print(x_test.shape, finite_diff.shape)
plt.plot(x_test[:-1]+finite_step/2, finite_diff)
plt.plot(x_test, analytical_diff)
plt.title("Numerical (finite difference) derivative of 1d sigmoid")
plt.grid(True)
plt.show()
check_that_sigmoid_f(sigmoid)
```
**1.1.3 [15pts] Cost function and gradient**
Now you will implement the cost function and gradient for logistic regression. Complete the code
in the functions *hyposesis_function* and *binary_logistic_loss* below to return the value of the hypothesis function and the cost, respectively. Recall that the cost function in logistic regression is
$j(\theta) \ = \ \frac{1}{m} \ \sum_{i=1}^{m} \ [ \ -y^{(i)} log(h_\theta(x^{(i)})) \ - \ (1 - y^{(i)})log(1-h_\theta(x^{(i)})) \ ]$
and the gradient of the cost is a vector of the same length as $\theta$ where the $j^{th}$ element (for $j = 0, 1,...,n$) is defined as follows:
$\frac{\partial J(\theta)}{\partial \theta_{j}} \ = \ \frac{1}{m} \ \sum_{i=1}^{m} \ (h_\theta(x^{(i)})-y^{(i)}) x_j^{(i)}$
where $m$ is the number of points and $n$ is the number of features. Note that while this gradient looks identical to the linear regression gradient, the formula is
actually different because linear and logistic regression have different definitions of $h_\theta(x)$.
What should be the value of the loss for $\theta = \bar 0$ regardless of input? Why? Make sure your code also outputs this value.
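For reference (a short worked step we add here): with $\theta = \bar 0$ we have $h_\theta(x) = g(0) = \tfrac{1}{2}$ for every input, so every summand equals $-\log \tfrac{1}{2} = \log 2$, and therefore

$$j(\bar 0) = \log 2 \approx 0.693$$

independently of the data.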
```python
# we are trying to fit a function that returns the
# "probability of admission" given the exam scores.
# hyposesis_function describes the parametric family of functions that we are
# going to pick our "best fitting function" from. It is parameterized by a
# real-valued vector theta, i.e. we are going to pick
# h_best = argmin_{h \in H} logistic_loss_h(x, y, h)
# but because there exists a bijection between theta's and h's it is
# equivalent to choosing
# theta_best = argmin_{theta} logistic_loss_theta(x, y, theta)
def hyposesis_function(x, theta):
    return sigmoid(np.matmul(x, theta))
# negative log likelihood of observing a sequence of integer
# y's given probabilities y_pred of each Bernoulli trial
# recommendation: convert both variables to floats,
# or weird sign stuff might happen, like -1*y != -y for uint8
# use np.mean and broadcasting
def binary_logistic_loss(y, y_pred):
    assert y_pred.shape == y.shape
    # mean negative log-likelihood; (1,m) @ (m,1) yields a (1,1) array,
    # so .item() extracts the scalar
    m = y.shape[0]
    loss = (np.matmul(-y.T, np.log(y_pred))
            - np.matmul((1 - y).T, np.log(1 - y_pred))).item() / m
    return loss
def logistic_loss_theta_grad(x, y, h, theta):
y_pred = h(x, theta)
point_wise_grads = (y_pred - y)*x
grad = np.mean(point_wise_grads, axis=0)[:, None]
assert grad.shape == theta.shape
return grad
def logistic_loss_theta(x, y, h, theta):
return binary_logistic_loss(y, h(x, theta))
```
```python
# Check that with theta as zeros, cost is about 0.693:
theta_init = np.zeros((X_data.shape[1], 1))
print(logistic_loss_theta(X_data, y_data, hyposesis_function, theta_init))
print(logistic_loss_theta_grad(X_data, y_data, hyposesis_function, theta_init))
```
0.6931471805599452
[[-12.00921659]
[-11.26284221]
[ -0.1 ]]
**1.1.4 Learning parameters using *fmin***
In the previous assignment, you found the optimal parameters of a linear regression model by
implementing gradient descent. You wrote a cost function and calculated its gradient, then took
a gradient descent step accordingly. This time, instead of taking gradient descent steps, you will
use a scipy.optimize built-in function called *fmin*.
The final $\theta$ value will then be used to plot the
decision boundary on the training data, as seen in the figure below.
```python
import climin
from functools import partial
```
```python
def optimize(theta_init, loss, loss_grad, max_iter=10000, print_every=1000, optimizer_fn=None, show=False):
theta = theta_init.copy()
opt_args = (theta, loss_grad)
if optimizer_fn is None:
optimizer_fn = partial(climin.GradientDescent, step_rate=1e-3, momentum=0.999)
optimizer = optimizer_fn(*opt_args)
loss_curve = []
for opt_info in optimizer:
n_iter = opt_info['n_iter']
f_value = loss(theta)
loss_curve.append(f_value)
if print_every != 0 and n_iter % print_every == 0:
print(n_iter, f_value)
if n_iter == max_iter:
break
if show:
plt.plot(loss_curve)
plt.show()
return theta, f_value
```
```python
theta_init = np.zeros((3, 1))
loss = partial(logistic_loss_theta, X_data, y_data, hyposesis_function)
loss_grad = partial(logistic_loss_theta_grad, X_data, y_data, hyposesis_function)
theta, best_cost = optimize(theta_init, loss, loss_grad, show=True)
print(best_cost)
```
```python
# Plotting the decision boundary: two points, draw a line between
# Decision boundary occurs when h = 0, or when
# theta_0*x1 + theta_1*x2 + theta_2 = 0
# y=mx+b is replaced by x2 = (-1/theta1)(theta2 + theta0*x1)
line_xs = np.array([np.min(X_data[:,0]), np.max(X_data[:,0])])
line_ys = (-1./theta[1])*(theta[2] + theta[0]*line_xs)
plot_data(**student_plotting_spec)
plt.plot(line_xs, line_ys, 'b-', lw=10, alpha=0.2, label='Decision Boundary')
plt.legend()
plt.show()
```
**1.1.5 [15pts] Evaluating logistic regression**
After learning the parameters, you can use the model to predict whether a particular student will
be admitted.
(a) [5 pts] Show that for a student with an Exam 1 score of 45 and an Exam 2 score of 85, you should
expect to see an admission probability of 0.776.
Another way to evaluate the quality of the parameters we have found is to see how well the
learned model predicts on our training set.
(b) [10 pts] In this part, your task is to complete the code in
*makePrediction*. The predict function will produce “1” or “0” predictions given a dataset and a learned
parameter vector $\theta$. After you have completed the code, the script below will proceed to report the
training accuracy of your classifier by computing the percentage of examples it got correct. You
should also see a training accuracy of about 89%.
```python
# For a student with an Exam 1 score of 45 and an Exam 2 score of 85,
# you should expect to see an admission probability of 0.776.
check_data = np.array([[45., 85., 1]])
print(check_data.shape)
print(hyposesis_function(check_data, theta))
# The sigmoid function gives the probability of enrollment, which is from 0 to 1. The theta is trained already by optimize().
```
(1, 3)
[[0.78755263]]
```python
# use hyposesis function and broadcast the compare operator
def predict(x, theta):
    # threshold the predicted probabilities at 0.5
    return (hyposesis_function(x, theta) > 0.5).astype(int)
def accuracy(x, y, theta):
    # fraction of examples whose thresholded prediction matches the label
    return np.mean(predict(x, theta) == y)
print(accuracy(X_data, y_data, theta))
```
0.9
### 2. Regularized logistic regression
In this part of the exercise, you will implement regularized logistic regression to predict whether microchips from a fabrication plant pass quality assurance (QA). During QA, each microchip goes through various tests to ensure it is functioning correctly. Suppose you are the product manager of the factory and you have the test results for some microchips on two different tests. From these two tests, you would like to determine whether the microchips should be accepted or rejected. To help you make the decision, you have a dataset of test results on past microchips in *ex2data2.txt*, from which you can build a logistic regression model.
**2.1 Visualizing the data**
Similar to the previous parts of this exercise, plotData is used to generate the figure below,
where the axes are the two test scores, and the positive (y = 1, accepted) and negative (y = 0,
rejected) examples are shown with different markers.
The figure below shows that our dataset cannot be separated into positive and negative examples by a
straight line. Therefore, a straightforward application of logistic regression will not perform well on this dataset since logistic regression will only be able to find a linear decision boundary.
```python
X_data_, y_data = read_classification_csv_data('ex2data2.txt')
X_data = X_data_ - X_data_.mean(axis=0)[None, :]
print(X_data.shape, X_data.min(), X_data.max(), X_data.dtype)
print(y_data.shape, y_data.min(), y_data.max(), y_data.dtype)
```
(118, 2) -0.83007 1.1089 float64
(118, 1) 0.0 1.0 float64
-------------------------
(118, 2) -0.9528415593220338 1.0161210915254237 float64
(118, 1) 0 1 int32
```python
chip_plotting_spec = {
'X': X_data,
'y': y_data,
'xlabel': 'Microchip Test 1 Result',
'ylabel': 'Microchip Test 2 Result',
'labels': ['rejected', 'accepted'],
'markers': ['yo', 'k+'],
'figsize': (6, 6)
}
plot_data(**chip_plotting_spec)
plt.show()
```
**2.2 Nonlinear feature mapping**
One way to fit the data better is to create more features from each data point. In *mapFeature* below, we will map the features into all polynomial terms of $x_1$ and $x_2$ up to the
sixth power as follows:
\begin{equation}
mapFeature(x) \ = \
\begin{bmatrix}
1 \\
x_1 \\
x_2 \\
x_1^2 \\
x_1x_2 \\
x_2^2 \\
x_1^3 \\
\vdots \\
x_1x_2^5 \\
x_2^6 \\
\end{bmatrix}
\end{equation}
As a result of this mapping, our vector of two features (the scores
on two QA tests) is transformed into a 28-dimensional vector
(for the sixth-power map shown above; the degree-20 map used in the
code below yields 231 features). A logistic regression classifier
trained on this higher-dimensional feature vector will have a more
complex decision boundary and will appear nonlinear when drawn in our
2-dimensional plot.
While the feature mapping allows us to build a more expressive
classifier, it is also more susceptible to overfitting. In the next parts
of the exercise, you will implement regularized logistic regression
to fit the data and also see for yourself how regularization can help combat the overfitting problem.
Finite-dimensional (or even infinite-dimensional, as you will see in the SVM lecture and the corresponding home assignment) feature mappings are usually denoted by $\Phi$, and therefore our hypothesis is now that the Bernoulli probability of a chip malfunctioning can be described as
$$ p_i = \sigma(\Phi(x_i)^T \theta)$$
```python
from itertools import combinations_with_replacement
def polynomial_feature_map(X_data, degree=20, show_me_ur_powers=False):
assert len(X_data.shape) == 2
group_size = X_data.shape[1]
assert group_size == 2
    # hm.. how to get all ordered pairs (c, d) of non-negative ints
    # such that their sum is c + d <= degree?
    # it is equivalent to getting all pairs of integers (a, b) such that
    # 0 <= a <= b <= degree and defining c = a, d = b - a:
    # their sum is then at most degree, and both are >= 0;
    # then feature_i = (x_0 ^ c) * (x_1 ^ d)
    comb_iterator = combinations_with_replacement(range(degree+1), group_size)
    not_quite_powers = np.array(list(comb_iterator))
    # not_quite_powers holds all non-decreasing pairs (a, b) with 0 <= a <= b <= degree
    powers_bad_order = not_quite_powers.copy()
    powers_bad_order[:, 1] -= not_quite_powers[:, 0]
    # now each pair in powers_bad_order has a sum no larger than degree
    # let's reorder them so that lower-power monomials come first
    rising_power_idx = np.argsort(powers_bad_order.sum(axis=1))
    # print(rising_power_idx.shape)  # (231,)
    # print(powers_bad_order.shape)  # (231, 2)
    powers = powers_bad_order[rising_power_idx]
    # print(powers.shape)  # (231, 2)
    # "powers" holds the exponent pairs sorted by total degree, from 0 to 20
    if show_me_ur_powers is True:
        print(powers.T)  # shape is (2, 231)
        print('total power per monomial', powers.sum(axis=1))  # shape is (231,)
    X_with_powers = np.power(X_data[:, :, None], powers.T[None])
    # print(X_with_powers.shape)  # shape is (118, 2, 231)
    X_poly = np.prod(X_with_powers, axis=1)  # product over the feature axis gives each monomial
return X_poly
X_pf = polynomial_feature_map(X_data, show_me_ur_powers=True)
print(X_pf.shape) # so that we expand the input matrix from (118,2) ----- (118,231)
```
[[ 0 0 1 0 2 1 0 2 3 1 4 2 1 3 0 5 3 0 2 4 1 0 6 2
5 4 3 1 4 0 1 2 6 7 3 5 2 3 4 6 8 5 7 0 1 4 7 3
6 9 8 0 2 1 5 9 3 5 7 8 4 10 6 1 0 2 0 9 1 4 5 2
10 7 8 6 11 3 7 0 1 9 12 4 6 2 8 11 10 3 5 6 13 1 5 7
9 8 4 12 2 10 0 11 3 0 12 11 14 10 9 7 6 8 13 4 2 3 1 5
5 4 9 0 15 14 7 1 10 8 3 2 11 6 12 13 8 3 11 2 14 9 12 13
10 16 4 0 6 5 1 15 7 12 5 2 13 11 6 15 3 10 7 0 1 16 14 8
4 9 17 5 12 2 4 8 14 13 16 11 9 6 15 0 1 10 3 18 7 17 15 13
14 17 19 18 16 2 12 3 1 4 5 0 7 8 6 9 10 11 11 18 3 1 17 10
4 16 13 5 0 19 15 12 6 9 7 14 2 8 20]
[ 0 1 0 2 0 1 3 1 0 2 0 2 3 1 4 0 2 5 3 1 4 6 0 4
1 2 3 5 3 7 6 5 1 0 4 2 6 5 4 2 0 3 1 8 7 5 2 6
3 0 1 9 7 8 4 1 7 5 3 2 6 0 4 9 10 8 11 2 10 7 6 9
1 4 3 5 0 8 5 12 11 3 0 8 6 10 4 1 2 9 7 7 0 12 8 6
4 5 9 1 11 3 13 2 10 14 2 3 0 4 5 7 8 6 1 10 12 11 13 9
10 11 6 15 0 1 8 14 5 7 12 13 4 9 3 2 8 13 5 14 2 7 4 3
6 0 12 16 10 11 15 1 9 5 12 15 4 6 11 2 14 7 10 17 16 1 3 9
13 8 0 13 6 16 14 10 4 5 2 7 9 12 3 18 17 8 15 0 11 1 4 6
5 2 0 1 3 17 7 16 18 15 14 19 12 11 13 10 9 8 9 2 17 19 3 10
16 4 7 15 20 1 5 8 14 11 13 6 18 12 0]]
total power per monomial [ 0 1 1 2 2 2 3 3 3 3 4 4 4 4 4 5 5 5 5 5 5 6 6 6
6 6 6 6 7 7 7 7 7 7 7 7 8 8 8 8 8 8 8 8 8 9 9 9
9 9 9 9 9 9 9 10 10 10 10 10 10 10 10 10 10 10 11 11 11 11 11 11
11 11 11 11 11 11 12 12 12 12 12 12 12 12 12 12 12 12 12 13 13 13 13 13
13 13 13 13 13 13 13 13 13 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14
15 15 15 15 15 15 15 15 15 15 15 15 15 15 15 15 16 16 16 16 16 16 16 16
16 16 16 16 16 16 16 16 16 17 17 17 17 17 17 17 17 17 17 17 17 17 17 17
17 17 17 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 19 19
19 19 19 19 19 19 19 19 19 19 19 19 19 19 19 19 19 19 20 20 20 20 20 20
20 20 20 20 20 20 20 20 20 20 20 20 20 20 20]
(118, 231)
**2.3 Cost function and gradient**
Now you will implement code to compute the cost function and gradient for regularized logistic
regression. Recall that the regularized cost function in logistic regression is:
$J(\theta) \ = \ [ \ \frac{1}{m} \ \sum_{i=1}^{m} \ [ \ -y^{(i)} log(h_\theta(x^{(i)})) \ - \ (1 - y^{(i)})log(1-h_\theta(x^{(i)})) \ ] \ ] \ + \frac{\lambda}{2m} \sum_{j=1}^{n} \theta_j^2 $
Note that you should not regularize the parameter $\theta_0$ (Why not? Think about why that would be a bad idea).
The gradient of the cost function is a vector where the j element is defined as follows (you should understand how to obtain this expression):
$\frac{\partial J(\theta)}{\partial \theta_{0}} \ = \ \frac{1}{m} \ \sum_{i=1}^{m} \ (h_\theta(x^{(i)})-y^{(i)}) x_j^{(i)} \quad \quad \quad \quad \quad \quad$ for $\quad j=0$
$\frac{\partial J(\theta)}{\partial \theta_{j}} \ = \ (\frac{1}{m} \ \sum_{i=1}^{m} \ (h_\theta(x^{(i)})-y^{(i)}) x_j^{(i)}) + \frac{\lambda}{m}\theta_j \quad \quad \quad$ for $\quad j \ge 1$
**2.3.1 [10pts] Implementing regularized logistic regression**
Re-implement computeCost with regularization.
```python
# Cost function, default lambda (regularization) 0
def logistic_loss_theta_w_reg(x, y, h, theta, lambda_=0.0):
    m = x.shape[0]
    # data term: the mean negative log-likelihood, as in Part 1
    nll = (np.matmul(-y.T, np.log(h(x, theta)))
           - np.matmul((1 - y).T, np.log(1 - h(x, theta)))).item() / m
    # regularization term: theta[0], the bias weight, is not penalized
    reg = (lambda_ / (2 * m)) * np.sum(theta[1:]**2)
    return nll + reg
def logistic_loss_theta_w_reg_grad(x, y, h, theta, lambda_=0.0):
    theta_new = theta.copy()
    theta_new[0] = 0  # do not penalize the bias weight
    grad = np.matmul(x.T, (h(x, theta) - y)) / x.shape[0] + (lambda_ / x.shape[0]) * theta_new
    # shapes: (h(x,theta) - y) is (m,1), x is (m,n), so grad is (n,1)
    return grad
```
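Before optimizing, it is worth sanity-checking the analytic gradient against a centered finite-difference approximation. This short check is not part of the original hand-out; the probed indices, step size and regularization strength below are arbitrary choices.

```python
# finite-difference check of logistic_loss_theta_w_reg_grad (a sketch)
eps = 1e-6
lam = 1.0
theta_test = 0.01 * np.random.randn(X_pf.shape[1], 1)
analytic = logistic_loss_theta_w_reg_grad(X_pf, y_data, hyposesis_function, theta_test, lambda_=lam)
for j in [0, 1, 5]:
    theta_p, theta_m = theta_test.copy(), theta_test.copy()
    theta_p[j] += eps
    theta_m[j] -= eps
    numeric = (logistic_loss_theta_w_reg(X_pf, y_data, hyposesis_function, theta_p, lambda_=lam)
               - logistic_loss_theta_w_reg(X_pf, y_data, hyposesis_function, theta_m, lambda_=lam)) / (2 * eps)
    print(j, float(analytic[j]), numeric)  # the two columns should agree closely
```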
Once you are done, you will call your cost function using the initial value of
θ (initialized to all zeros). You should see that the cost is about 0.693.
```python
theta_init = np.zeros((X_pf.shape[1], 1))
print(logistic_loss_theta_w_reg(X_pf, y_data, hyposesis_function, theta_init))
print(logistic_loss_theta_w_reg_grad(X_pf, y_data, hyposesis_function, theta_init))
loss = partial(logistic_loss_theta_w_reg, X_pf, y_data, hyposesis_function)
loss_grad = partial(logistic_loss_theta_w_reg_grad, X_pf, y_data, hyposesis_function)
theta, best_cost = optimize(theta_init, loss, loss_grad, max_iter=10000, print_every=0, show=True)
print('best loss', best_cost)
print('best acc', accuracy(X_pf, y_data, theta))
```
**2.3.2 [15pts] Learning parameters using *minimize***
You will use *optimize.minimize* to learn the optimal parameters $\theta$. If you
have completed the cost and gradient for regularized logistic regression correctly, you should be able to learn the parameters
$\theta$ using *minimize*. Implement the function *optimizeRegularizedTheta* below.
```python
def optimizeRegularizedTheta():
theta,_ = optimize(theta_init, loss, loss_grad, max_iter=1000, print_every=0, show=False)
return theta
theta_min = optimizeRegularizedTheta()
print(theta_min.shape)
print(theta_min)
```
(231, 1)
[[ 4.19339701e+00]
[ 8.52370120e-02]
[ 4.83000122e-01]
[-5.71113682e+00]
[-6.33415275e+00]
[-4.85851307e+00]
[-1.15482614e+00]
[-3.93797616e-02]
[ 5.88228313e-01]
[ 5.58994006e-01]
[-4.95595320e+00]
[-1.77907328e+00]
[-1.21835200e+00]
[-7.36011170e-01]
[-4.27322478e+00]
[-5.88255852e-02]
[ 2.23702992e-01]
[-5.79127292e-01]
[-1.84906486e-01]
[ 1.46909611e-01]
[ 1.35488891e-01]
[-3.03590399e+00]
[-3.49565671e+00]
[-7.16125831e-01]
[-3.51474543e-02]
[-8.74099801e-01]
[-7.04807426e-02]
[-2.69920641e-01]
[-5.07694793e-03]
[-1.66430396e-01]
[-9.20245749e-03]
[-6.25079394e-02]
[ 2.05407014e-01]
[-3.33315099e-01]
[ 3.32576094e-02]
[ 2.26272922e-02]
[-3.68022741e-01]
[ 5.06708679e-02]
[-2.95429610e-01]
[-4.72926092e-01]
[-2.47931934e+00]
[ 4.43540734e-02]
[ 1.23238091e-01]
[-2.24395839e+00]
[ 1.89579943e-02]
[ 3.08343817e-03]
[-3.42453812e-02]
[-8.91670087e-03]
[ 4.76865374e-02]
[-4.26433497e-01]
[ 1.79765157e-01]
[ 5.84451249e-02]
[ 5.27961990e-04]
[-7.42674232e-02]
[-1.87008836e-02]
[ 1.43813057e-01]
[ 6.21155874e-02]
[ 4.71897940e-02]
[ 5.02625444e-02]
[-2.74566633e-01]
[-1.32306048e-01]
[-1.79961093e+00]
[-1.44399556e-01]
[ 1.11497527e-01]
[-1.72706415e+00]
[-2.26365393e-01]
[ 1.69931694e-01]
[-4.10467571e-02]
[-1.02711682e-01]
[ 9.32934891e-03]
[-1.77095009e-02]
[ 2.79586716e-02]
[ 1.38682657e-01]
[-2.70584123e-02]
[ 4.78758234e-02]
[ 2.19693798e-02]
[-4.48081097e-01]
[-2.09098620e-02]
[ 3.12877678e-02]
[-1.37068836e+00]
[ 1.36051647e-01]
[ 3.93146636e-02]
[-1.34640322e+00]
[-7.17655057e-02]
[-5.97289241e-02]
[-1.56843944e-01]
[-7.77637850e-02]
[ 1.27870504e-01]
[-1.66588545e-01]
[ 5.43557947e-02]
[ 3.43469483e-02]
[ 1.26222098e-02]
[-4.43712619e-01]
[-1.11546850e-01]
[-1.34675787e-02]
[-1.63440926e-02]
[-2.21378311e-02]
[ 2.05942644e-02]
[ 1.15515891e-02]
[ 1.03858303e-01]
[ 3.81628258e-02]
[ 3.61014307e-02]
[ 2.19279497e-01]
[-3.45691188e-02]
[-2.41146958e-02]
[-1.11280648e+00]
[-1.03966114e-01]
[ 2.76362328e-02]
[-1.04328352e+00]
[-4.43562449e-02]
[ 2.00487375e-02]
[ 1.86127856e-02]
[-2.97836768e-02]
[-3.04432111e-02]
[ 1.05813088e-01]
[-4.45870228e-02]
[-1.16744054e-01]
[ 4.46376830e-02]
[ 1.35456584e-01]
[ 2.39878299e-02]
[-1.03571909e-02]
[ 1.18447750e-02]
[-1.18921288e-02]
[ 2.35824986e-01]
[-4.31745137e-01]
[ 7.87361632e-02]
[-9.60938578e-03]
[-1.10087959e-01]
[ 1.49413819e-02]
[ 1.03566318e-02]
[-2.39143141e-02]
[ 4.01685045e-02]
[-1.55604437e-02]
[ 8.13163618e-03]
[ 2.47564838e-02]
[-2.63518850e-02]
[-1.42666051e-02]
[ 3.62104078e-02]
[ 1.26553519e-02]
[-9.07084076e-02]
[-6.63011263e-02]
[ 1.06704428e-02]
[-2.61144127e-02]
[ 1.85391330e-02]
[-1.67143591e-02]
[-8.39654312e-01]
[-3.03766474e-02]
[-9.19069045e-01]
[-1.69597764e-02]
[ 1.70657935e-02]
[ 1.25816765e-01]
[ 8.63941085e-02]
[ 1.12961227e-02]
[ 9.88457129e-03]
[-8.25440935e-03]
[ 3.84497823e-02]
[-1.02860498e-02]
[-7.86870098e-03]
[ 5.74357423e-03]
[-1.94635695e-02]
[-2.23016445e-02]
[ 7.16333061e-03]
[-5.90112275e-03]
[ 2.35473832e-01]
[-1.03596993e-01]
[ 6.17740333e-02]
[ 1.63423738e-02]
[ 5.65614716e-03]
[ 1.12409876e-02]
[-6.39117094e-03]
[-4.19547280e-01]
[ 1.25092637e-02]
[-9.56225734e-03]
[-7.24380336e-02]
[-2.20286082e-02]
[-7.55259119e-03]
[-1.56501412e-02]
[ 7.88558677e-03]
[-4.31600392e-02]
[ 6.31669729e-03]
[ 5.89185347e-03]
[-1.06440103e-02]
[ 1.21722727e-02]
[-7.69207347e-01]
[ 1.13524479e-01]
[-7.51113456e-03]
[ 2.94540137e-02]
[-7.02328052e-01]
[ 7.15960655e-03]
[ 7.13662704e-02]
[-6.61219788e-03]
[-4.97840320e-03]
[ 6.28006166e-03]
[-1.43473686e-02]
[-4.09859352e-01]
[ 5.07105026e-02]
[ 1.06359904e-02]
[ 3.52262534e-02]
[ 4.60329020e-03]
[-2.01665676e-02]
[-9.49694912e-02]
[ 1.02519823e-02]
[-6.77665343e-03]
[ 2.26662648e-01]
[-3.86337736e-03]
[ 3.32441679e-03]
[ 4.35664452e-03]
[-3.54621946e-03]
[ 3.68083207e-03]
[-4.04891902e-03]
[ 3.29156371e-03]
[-2.87407373e-02]
[ 2.41431549e-02]
[ 1.01123638e-01]
[ 7.93382703e-03]
[-3.77177979e-03]
[-1.66854686e-02]
[-9.48540640e-03]
[ 3.77942980e-03]
[ 9.44534636e-03]
[-6.50534030e-01]
[ 6.03629996e-02]
[ 4.86598974e-03]
[-4.17996453e-03]
[-7.19118899e-03]
[ 3.40165233e-03]
[ 4.76472928e-03]
[-5.58971766e-03]
[-5.89563494e-02]
[-4.37791519e-03]
[-6.09573885e-01]]
**2.4 Plotting the decision boundary**
To help you visualize the model learned by this classifier, we have provided the function
*plotBoundary* which plots the (non-linear) decision boundary that separates the
positive and negative examples.
```python
def plot_boundary(theta, X, y, labels, markers, xlabel, ylabel, figsize=(12, 10), ax=None):
"""
Function to plot the decision boundary for arbitrary theta, X, y, lambda value
Inside of this function is feature mapping, and the minimization routine.
It works by making a grid of x1 ("xvals") and x2 ("yvals") points,
And for each, computing whether the hypothesis classifies that point as
True or False. Then, a contour is drawn with a built-in pyplot function.
"""
ax = ax or plt.gca()
x_range = np.linspace(-1,1.5,50)
y_range = np.linspace(-1,1.5,50)
xx, yy = np.meshgrid(x_range, y_range)
X_fake = np.stack([xx, yy]).reshape(2, -1).T
X_fake_fm = polynomial_feature_map(X_fake)
y_pred_fake = hyposesis_function(X_fake_fm, theta)
    for label_id, (label, marker) in enumerate(zip(labels, markers)):
        # use the X, y arguments rather than the globals, so the function is reusable
        ax.plot(*X[y.ravel() == label_id, :2].T, marker, label=label)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
plt.legend()
plt.grid()
return ax.contour( x_range, y_range, y_pred_fake.reshape(50, 50).T, [0.5])
def silent_optimize_w_lambda(lambda_):
theta_init = np.zeros((X_pf.shape[1], 1))
data = (X_pf_train, y_train, hyposesis_function)
loss = partial(logistic_loss_theta_w_reg, *data, lambda_=lambda_)
loss_grad = partial(logistic_loss_theta_w_reg_grad, *data, lambda_=lambda_)
optimizer_fn = partial(climin.GradientDescent, step_rate=1e-4, momentum=0.999)
theta, final_loss = optimize(
theta_init, loss, loss_grad, optimizer_fn=optimizer_fn,
max_iter=1000, print_every=0, show=False
)
return theta, final_loss
```
**2.4.1 [10pts] Plot Decision Boundaries**
(a) [2 pts] Use *plotBoundary* to obtain four subplots of the decision boundary for the following values of the regularization parameter: $\lambda \ = \ 0, 1, 5, 10$
(b) [2 pts] Comment on which plots are overfitting and which plots are underfitting.
**Answer**
When $\lambda = 0$ (no regularization), the model is overfitting. When $\lambda = 5, 10$, the models are underfitting.
(c) [2 pts] Which is the model with the highest bias? The highest variance?
**Answer**
When $\lambda=10$, the model has the highest bias. When $\lambda=0$, the model has the highest variance.
(d) [2 pts] What is another way to detect overfitting?
**Answer**
There are two ways to detect overfitting.
(1) By cross-validation: if the model performs well on the training data but poorly on the test data, it is overfitting. If performance is poor on both, it is underfitting.
(2) By the learning curve: if both the validation accuracy and the training accuracy converge to some low value, the model is underfitting. If both converge around a high value but there is a large gap between the training and validation accuracy (training always performing better than validation), the model is overfitting.
(e) [2 pts] Considering that later components of theta correspond to higher powers of monomials, plot the values of theta and comment on the effects of regularization
```python
# (a) Build a figure showing contours for various values of regularization parameter, lambda
np.random.seed(2)
train_idx_mask = np.random.rand(X_pf.shape[0]) < 0.3
X_pf_train, y_train = X_pf[train_idx_mask], y_data[train_idx_mask]
X_pf_test, y_test = X_pf[~train_idx_mask], y_data[~train_idx_mask]
print([x.shape for x in (X_pf_train, y_train, X_pf_test, y_test)])
new_plotting_spec = {
'X': X_data,
'y': y_data,
'xlabel': 'Microchip Test 1 Result',
'ylabel': 'Microchip Test 2 Result',
'labels': ['rejected', 'accepted'],
'markers': ['yo', 'k+'],
'figsize': (12, 10)
}
# you might find following lines useful:
#
# cnt_fmt = {0.5: 'Lambda = %d' % lambda_}
# ax.clabel(cnt, inline=1, fontsize=15, fmt=cnt_fmt)
#
# red dots indicate training samples
thetas = []
plt.figure(figsize=(12,10))
for id_, lambda_ in enumerate([0, 1, 5, 10]):
theta, final_loss = silent_optimize_w_lambda(lambda_=lambda_)
acc = accuracy(X_pf_test, y_test, theta)
thetas.append(theta)
ax = plt.subplot(2, 2, id_+1)
cnt = plot_boundary(**new_plotting_spec, theta=theta, ax=ax)
cnt_fmt = {0.5: 'Lambda = %d' % lambda_}
ax.clabel(cnt, inline=1, fontsize=15, fmt=cnt_fmt)
plt.title('Decision Boundary, Accuracy={}, Loss={}'.format('%.2f' %acc,'%.2f' %final_loss))
plt.show()
# (e) [2 pts] Considering that later components of theta correspond to higher powers
# of monomials, plot values of theta and comment on the effects of regularization
plt.figure(figsize=(8,6))
ax = None
for th_id, theta in enumerate(thetas):
ax = plt.subplot(2, 2, th_id+1, sharey=ax)
plt.plot(theta)
plt.show()
```
### 3. Written part
These problems are extremely important preparation for the exam. Submit solutions to each problem by filling the markdown cells below.
**3.1 [10pts]** Maximum likelihood for Logistic Regression
Showing all steps, derive the LR cost function using maximum likelihood. Assume that
the probability of y given x is described by:
$P(\ y=1 \; \vert \; x \ ; \ \theta \ ) = h_{\theta}(x)$
$P(\ y=0 \; \vert \; x \ ; \ \theta \ ) = 1 - h_{\theta}(x)$
**Answer**
$y^{(i)} \in \{0,1\}$ and $p(y^{(i)}|x^{(i)},\theta)=h_{\theta}(x^{(i)})^{y^{(i)}}(1 - h_{\theta}(x^{(i)}))^{1-y^{(i)}}$, which is a Bernoulli distribution.
The likelihood of the whole training set is
$$ p(y|x,\theta)=\prod_{i=1}^{m}h_{\theta}(x^{(i)})^{y^{(i)}}(1 - h_{\theta}(x^{(i)}))^{1-y^{(i)}}$$
Taking the $\log$ of both sides gives the log-likelihood:
$$ \log p(y|x,\theta) = \sum_{i=1}^{m}{y^{(i)}}\log(h_{\theta}(x^{(i)}))+(1-y^{(i)})\log(1 - h_{\theta}(x^{(i)}))$$
Therefore, maximizing the likelihood is equivalent to minimizing the cost
$$ \mathbf {J(\theta)} = -\frac{1}{m}[\sum_{i=1}^{m}{y^{(i)}}\log(h_{\theta}(x^{(i)}))+(1-y^{(i)})\log(1 - h_{\theta}(x^{(i)}))]$$
**3.2 [10pts]** Logistic Regression Classification with Label Noise
Suppose you are building a logistic regression classifier for images of dogs, represented by a feature vector x, into one of two categories $y \in \{0,1\}$, where 0 is “terrier” and 1 is “husky.” You decide to use the logistic regression model $p(y = 1 \ \vert \ x) = h_{\theta}(x)=\sigma(\theta^Tx).$ You collected an image dataset **D**$\ = \{x^{(i)},t^{(i)}\}$, however, you were very tired and made
some mistakes in assigning labels $t^{(i)}.$ You estimate that you were correct in about $\tau$ fraction of all cases.
(a) Write down the equation for the posterior probability $p(t = 1 \ \vert \ x)$ of the label being 1 for some point x, in terms of the probability of the true class, $p(y = 1 \ \vert \ x).$
(b) Derive the modified cost function in terms of $\ \theta, x^{(i)},t^{(i)}$ and $\tau$.
**Answer**
**(a)**
Here $y \in \{0,1\}$ is the true label of the two categories, and $t \in \{0,1\}$ is the assigned (possibly mistaken) label.
Because there are mistakes in assigning the labels $t^{(i)}$, a reported label of 1 can arise either from a correct labeling of a true 1 (probability $\tau$) or from a mislabeling of a true 0 (probability $1-\tau$):
$$ p(t=1|x) = \tau p(y=1|x)+(1-\tau)p(y=0|x)$$
$$= \tau p(y=1|x)+(1-\tau)(1-p(y=1|x))$$
$$=1-\tau + (2\tau-1)\sigma(\theta^{T}x)$$
$$=1-\tau+(2\tau-1)\frac{1}{1+\exp(-\theta^{T}x)}$$
**(b)**
The modified cost function uses the observed labels $t^{(i)}$ rather than the true labels $y^{(i)}$. The loss function in terms of $\theta, x^{(i)}, t^{(i)}$ and $\tau$ is:
$$ J'(\theta) = \frac{1}{m}\sum_{i=1}^{m}Cost(h_{\theta}(x^{(i)}),t^{(i)})$$
$$ = -\frac{1}{m}\sum_{i=1}^{m}t^{(i)}\log p(t=1|x^{(i)})+(1-t^{(i)})\log(1-p(t=1|x^{(i)}))$$
$$ = -\frac{1}{m}\sum_{i=1}^{m}t^{(i)}\log(1-\tau+\frac{2\tau-1}{1+\exp(-\theta^{T}x^{(i)})})+(1-t^{(i)})\log(\tau-\frac{2\tau-1}{1+\exp(-\theta^{T}x^{(i)})}))$$
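As a quick numerical companion to the derivation (a sketch we add here, not part of the original answer; it reuses `sigmoid` from Part 1, and the function names are our own):

```python
def noisy_label_prob(x, theta, tau):
    # p(t = 1 | x) = 1 - tau + (2*tau - 1) * sigma(theta^T x), as derived in (a)
    return 1 - tau + (2 * tau - 1) * sigmoid(np.matmul(x, theta))

def noisy_logistic_loss(x, t, theta, tau):
    # modified negative log-likelihood from (b); with tau = 1 it reduces to
    # the standard binary logistic loss
    p = noisy_label_prob(x, theta, tau)
    return -np.mean(t * np.log(p) + (1 - t) * np.log(1 - p))
```

For $\tau = 1$ (no label noise), `noisy_label_prob` is just the sigmoid and the loss collapses to `binary_logistic_loss`.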
**3.3 [10pts] Cross-entropy loss for multiclass classification**
This problem asks you to derive the cross-entropy loss for a multiclass classification problem using maximum likelihood.
Consider the multiclass classification problem in which each input is assigned to one of $K$ mutually exclusive classes. The binary target variables $y_k$ ∈ {0, 1} have a "one-hot" coding scheme, where the value is 1 for the indicated class and 0 for all others. Assume that we can interpret the network outputs as $h_k(x,\theta) = p(y_k = 1|x)$, or the probability of the kth class.
Show that the maximum likelihood estimate of the parameters $\theta$ can be obtained by minimizing the multiclass *cross-entropy* loss function
<p>
$L(\theta)= - \frac{1}{N}\sum_{i=1}^{N} \sum_{k=1}^{K} y_{ik} \log(h_k(x_i,\theta))$
</p>
<p>
where $N$ is the number of examples $\{x_i,y_i\}$. </p>
**Answer**
For a single data point there exist $K$ possible classes. Analogously to the Bernoulli distribution in Q3.1, the categorical distribution over all $K$ classes is
$$ p(y|x,\theta)=\prod_{k=1}^{K}p(y_k=1|x)^{y_k}, \qquad y_k \in \{0,1\}$$
Taking the $\log$ of both sides gives the log-likelihood,
$$ \log p(y|x,\theta)=\sum_{k=1}^{K}y_k \log p(y_k=1|x)$$
For all $N$ data points we sum the $N$ log-likelihoods above and multiply by $-1/N$, which gives the cross-entropy loss $L(\theta)$,
$$ L(\theta) = -\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{K}y_{ik}\log(h_k(x_i,\theta))$$
By minimizing the loss function above we obtain the maximum likelihood estimate (MLE) of the parameters $\theta$:
$$ \theta_{MLE}=\operatorname{argmin}_{\theta}L(\theta)$$
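A minimal numerical sketch of this loss (our addition; it assumes the network outputs $h_k$ are realized by a softmax over linear scores, which is one common choice):

```python
def softmax(z):
    # subtract the row-wise max for numerical stability; probabilities sum to 1 per row
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy_loss(Y_onehot, H):
    # L(theta) = -(1/N) * sum_i sum_k y_ik * log h_k(x_i, theta)
    return -np.mean(np.sum(Y_onehot * np.log(H), axis=1))

# tiny usage example with made-up scores for N=2 examples and K=3 classes
scores = np.array([[2.0, 0.5, -1.0], [0.1, 0.2, 3.0]])
Y = np.array([[1, 0, 0], [0, 0, 1]])
print(cross_entropy_loss(Y, softmax(scores)))
```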
*Source: ps2/pset2_ml2018.ipynb, from guozhonghao1994/BU_CS542_Machine_Learning (Apache-2.0)*
---
## 02. Multidimensional Advection Equation
Eduard Larrañaga (ealarranaga@unal.edu.co)
---
### Summary
The solution of the multidimensional advection equation is obtained using the finite volume method.
---
---
## 2-Dimensional Linear Advection Equation
The linear advection equation in 2 dimensions for the function $\psi = \psi(t,x,y)$ is
\begin{equation}
\partial_t \psi + u \partial_x \psi + v \partial_y \psi = 0
\end{equation}
where $u$ is the advection velocity in the x-direction and $v$ is the velocity in the y-direction. The average of the function $\psi(t,x,y)$ over zone $i,j$ will be denoted by $\psi_{i,j}$; in general, the index $i$ labels the x-direction while the index $j$ labels the y-direction.
---
### The Finite Volume Method in 2 Dimensions
Since $u$ and $v$ will be treated as constants, they can be moved inside the partial derivatives in the advection equation,
\begin{equation}
\partial_t \psi + \partial_x (u \psi) + \partial_y (v \psi) = 0.
\end{equation}
The zone average of the function $\psi$ is defined by integrating over the 2-dimensional *volume* of one of the cells,
\begin{equation}
\psi_{i,j} = \frac{1}{\Delta x \Delta y}
\int_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}} \int_{y_{j-\frac{1}{2}}}^{y_{j+\frac{1}{2}}}
\psi(x,y,t) \, dx \, dy .
\end{equation}
Thus, integrating the advection equation with respect to $x$ and $y$ gives
\begin{align}
\frac{1}{\Delta x \Delta y}
\int_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}}
\int_{y_{j-\frac{1}{2}}}^{y_{j+\frac{1}{2}}} \partial_t \psi \, dx \, dy =
&- \frac{1}{\Delta x \Delta y}
\int_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}} \int_{y_{j-\frac{1}{2}}}^{y_{j+\frac{1}{2}}}
\partial_x (u \psi) \, dx \, dy \nonumber \\
&- \frac{1}{\Delta x \Delta y}
\int_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}} \int_{y_{j-\frac{1}{2}}}^{y_{j+\frac{1}{2}}}
\partial_y (v \psi) \, dx \, dy
\end{align}
and, exchanging the integral with the time-derivative operator on the left-hand side and carrying out the integrals of the terms on the right-hand side, we arrive at
\begin{align}
\frac{\partial \psi_{i,j}}{\partial t} =
&- \frac{1}{\Delta x\Delta y} \int_{y_{j-\frac{1}{2}}}^{y_{j+\frac{1}{2}}}
\left \{ (u \psi)_{i+\frac{1}{2},j} - (u \psi)_{i-\frac{1}{2},j} \right \} dy \nonumber \\
&- \frac{1}{\Delta x\Delta y} \int_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}}
\left \{ (v \psi)_{i,j+\frac{1}{2}} - (v \psi)_{i,j-\frac{1}{2}} \right \} dx
\end{align}
Integrating this equation between the times $t^n$ and $t^{n+1}$ gives
\begin{align}
\psi_{i,j}^{n+1} - \psi_{i,j}^n =
&- \frac{1}{\Delta x\Delta y} \int_{t^n}^{t^{n+1}} \int_{y_{j-\frac{1}{2}}}^{y_{j+\frac{1}{2}}}
\left \{ (u \psi)_{i+\frac{1}{2},j} - (u \psi)_{i-\frac{1}{2},j} \right \} dy dt \nonumber \\
&- \frac{1}{\Delta x\Delta y} \int_{t^n}^{t^{n+1}} \int_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}}
\left \{ (v \psi)_{i,j+\frac{1}{2}} - (v \psi)_{i,j-\frac{1}{2}} \right \} dx dt .
\end{align}
Now, the flux through an interface is defined as the average over the surface of that face and over time. This yields the following expressions:
1. Through a face of constant x:
\begin{equation}
\langle (u\psi)_{i+\frac{1}{2},j}\rangle_{(t)} = \frac{1}{\Delta y \Delta t}
\int_{t^n}^{t^{n+1}} \int_{y_{j-\frac{1}{2}}}^{y_{j+\frac{1}{2}}} (u\psi)_{i+\frac{1}{2},j}\, dy dt
\end{equation}
2. Through a face of constant y:
\begin{equation}
\langle (v\psi)_{i,j+\frac{1}{2}}\rangle_{(t)} = \frac{1}{\Delta x \Delta t}
\int_{t^n}^{t^{n+1}} \int_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}} (v\psi)_{i,j+\frac{1}{2}}\, dx dt
\end{equation}
where $\langle . \rangle_{(t)}$ denotes the time average over that face.
As in the 1-dimensional advection case, the time average is replaced by the flux at the midpoint in time, and now the average over the face is likewise replaced by the value of the flux at the center of the face,
\begin{equation}
\langle (u\psi)_{i+\frac{1}{2},j} \rangle_{(t)} \approx (u\psi)_{i+\frac{1}{2},j}^{n+\frac{1}{2}}
\end{equation}
and therefore,
\begin{equation}
\psi_{i,j}^{n+1} = \psi_{i,j}^n - \Delta t \left [
\frac{(u\psi)_{i+\frac{1}{2},j}^{n+\frac{1}{2}} - (u\psi)_{i-\frac{1}{2},j}^{n+\frac{1}{2}}}{\Delta x} +
\frac{(v\psi)_{i,j+\frac{1}{2}}^{n+\frac{1}{2}} - (v\psi)_{i,j-\frac{1}{2}}^{n+\frac{1}{2}}}{\Delta y} \right ]
\end{equation}
In this linear advection problem, in which $u$ and $v$ are constants, we only need to find the values of $\psi$ at the interfaces, that is, $\psi^{n+1/2}_{i\pm 1/2 , j}$ at the x-interfaces and $\psi^{n+1/2}_{i, j \pm 1/2}$ at the y-interfaces. There are two methods to compute these states: **with dimensional splitting** and **unsplit**.
---
### The Dimensional Splitting Method
Dimensionally split methods are the simplest to implement; their premise is that each dimension is treated independently of the others. This means that the 1-dimensional solution method described in previous classes is applied along each direction.
Strang's method is an algorithm with second-order accuracy in time in which the order of the dimensional updates is alternated at each time step. In this way, each time update over $\Delta t$ consists of an update in $x$ followed by an update in $y$,
\begin{eqnarray}
\bar{\psi}_{i,j} &=& \psi_{i,j}^n
- \Delta t \frac{ u \psi_{i+\frac{1}{2},j}^{n+\frac{1}{2}} - u \psi_{i-\frac{1}{2},j}^{n+\frac{1}{2}} }{\Delta x}\\
\psi_{i,j}^{n+1} &=& \bar{\psi}_{i,j}
- \Delta t \frac{ v \bar{\psi}_{i,j+\frac{1}{2}}^{n+\frac{1}{2}} - v \bar{\psi}_{i,j-\frac{1}{2}}^{n+\frac{1}{2}} }{\Delta y}.
\end{eqnarray}
To construct the interface states one follows the same process described for the 1-dimensional advection equation, i.e. considering the expansions from the left or from the right and solving the corresponding Riemann problem. A sketch of the resulting scheme is given below.
```python
```
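As an illustration, here is a minimal sketch (our addition, not part of the original notes) of the split update above, using a first-order upwind discretization for each 1-dimensional sweep. It assumes constant $u, v > 0$, periodic boundaries, and a CFL number of 0.4; all of these are choices made only for this example.

```python
import numpy as np
import matplotlib.pyplot as plt

def sweep_x(psi, u, dt, dx):
    # first-order upwind difference in x (valid for u > 0): psi_i - psi_{i-1}
    return psi - u * dt / dx * (psi - np.roll(psi, 1, axis=1))

def sweep_y(psi, v, dt, dy):
    # first-order upwind difference in y (valid for v > 0)
    return psi - v * dt / dy * (psi - np.roll(psi, 1, axis=0))

N = 128
L = 1.0
dx = dy = L / N
u = v = 1.0
C = 0.4                       # CFL number, chosen for this example
dt = C * dx / max(u, v)
xg, yg = np.meshgrid(np.linspace(0, L, N), np.linspace(0, L, N))
psi = np.exp(-100 * ((xg - 0.3)**2 + (yg - 0.3)**2))  # initial Gaussian pulse

for n in range(200):
    # alternate the sweep order each step, as in Strang's method
    if n % 2 == 0:
        psi = sweep_y(sweep_x(psi, u, dt, dx), v, dt, dy)
    else:
        psi = sweep_x(sweep_y(psi, v, dt, dy), u, dt, dx)

plt.imshow(psi, origin='lower', extent=[0, L, 0, L])
plt.colorbar()
plt.title('Pulse advected with dimensional splitting')
plt.show()
```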
*Source: 11. PDE II. Volumenes Finitos/02. Ecuacion de Adveccion Multidimensional.ipynb, from ashcat2005/AstrofisicaComputacional2022 (MIT)*
Inverse Theory is the topic of Chapter 22 of [A Guided Tour of Mathematical Methods for the Physical Sciences](http://www.cambridge.org/nz/academic/subjects/physics/mathematical-methods/guided-tour-mathematical-methods-physical-sciences-3rd-edition#KUoGXYx5FTwytcUg.97). This notebook provides calculations on a small -- and idealized -- inverse problem of finding a karst with noisy gravity measurements, to show you the workings of Bayesian inversion. In Chapter 22 we tackled this as a damped least squares problem, and the chapter ends with a discussion of a Bayesian approach to inverse theory. Here, we extend the example to this Bayesian framework (and in the end show the link between the two strategies).
```python
import numpy as np
import matplotlib.pyplot as plt
```
Every inverse problem is related to a forward problem. In geophysics, our data are usually the result of doing some experiment on Earth that obeys the Laws of Physics. The goal is to obtain the physical parameters of the Earth that are responsible for these data. In this case, the physics to find the depth and mass of a karst involves measurements of the (negative) gravitational force anomaly at and above the surface of the Earth. (When karsts collapse, they can form sinkholes!)
## Newton's Law of gravitation
Newton postulated that the gravitational acceleration on a unit test mass $m$ due to a mass $M$ is proportional to $M$ and inversely proportional to their squared distance $R^2$. It took time, and heroic efforts by Michell and Cavendish to estimate the proportionality constant $G$:
```python
import scipy.constants
def F(M,R):
G = scipy.constants.G # Universal Constant (m^3⋅kg^−1⋅s^−2)
return G*M/R**2
```
## Our karst
A particular underground void (or, [karst](https://en.wikipedia.org/wiki/Karst)) results in a mass deficiency of 500000 kg, due to missing rock. The centre of this karst is at 15 m under ground.
We take two measurements (for reasons you can read about in Chapter 22!) of the gravitational acceleration anomaly: one at the surface and one on a 10 m ladder. Both measurements are done right over the karst (how we know the lateral position, remains a mystery to this day):
```python
dM = 500000 # the amount of missing mass in kg
r = 15 # true depth in m
m_true = np.array([dM,r])
rL=10 # second datum from a ladder of height rL in m
d_true = [F(dM,r), F(dM,r+rL)] # noise free data in N
print(d_true)
```
[1.483128888888889e-07, 5.3392640000000005e-08]
Technically, these data form a negative force anomaly, due to the lack of rock that *is* the void.
## Noisy data
Let's add some noise to the data, drawn from a Gaussian distribution with mean zero and a standard deviation of 15% of each datum:
```python
mu, sigma = 0, 0.15 # mean and standard deviation
sigma_d = d_true*np.random.normal(mu, sigma, 2)
d = np.array(d_true + sigma_d)
print(d)
```
[1.28417250e-07 6.16325436e-08]
## Prior information
From years and years of finding karsts, we know that karsts in this area fit the following parameters:
```python
dM0 = 600000 # average mass (deficiency) of a karst
r0= 12 # average depth
m0 = np.array([dM0,r0])
std =0.25 # standard deviation on these parameters
sigma_m = m_true*std
print(m0,sigma_m)
```
[600000 12] [1.25e+05 3.75e+00]
Let's generate a 2d grid as search space for models with different masses and depths:
```python
x = np.linspace(100000, 800000, 200)
y = np.linspace(5, 20, 200)
xx, yy = np.meshgrid(x, y)
```
The prior distribution is then
\begin{equation}
\label{Basinv.2}
p( {\bf m } ) \propto \exp \left( - \frac{ \| {\bf m} - {\bf m}_0 \|^2 }{ 2 \sigma_m^2}
\right) .
\end{equation}
```python
pm = 0.5*(((xx-m0[0])/sigma_m[0])**2+ ((yy-m0[1])/sigma_m[1])**2)
# The next lines find the grid coordinates of the maximum of the prior:
maxpm = np.argmax(np.exp(-pm), axis=None)  # flat index; unravel to get x and y:
MLE = np.unravel_index(maxpm, pm.shape)
fig, ax = plt.subplots()
ax.set_title('the prior probability p(d|m)')
ax.set_xlabel('Mass (deficit, kg)')
ax.set_ylabel('Depth (m)')
c = ax.pcolormesh(x, y, np.exp(-pm))
fig.colorbar(c, ax=ax)
ax.plot(dM,r,'ko')
ax.plot(x[MLE[1]],y[MLE[0]],'rx')
plt.show()
```
The black dot is the "true" model, and the red "x" the maximum value of the prior distribution.
The data tell us that $p({\bf d}|{\bf m})$ is
\begin{equation}
\label{Basinv.3}
p( {\bf d} | {\bf m } ) \propto \exp \left( - \frac{\| {\bf d} - {\bf F}( {\bf m}) \|^2}{2 \sigma_d^2}
\right) .
\end{equation}
```python
pdm = 0.5*(((d[0]-F(xx,yy))/sigma_d[0])**2 + ((d[1]-F(xx,yy+rL))/sigma_d[1])**2)
fig, ax = plt.subplots()
ax.set_title('the conditional probability p(d|m)')
ax.set_xlabel('Mass (deficit, kg)')
ax.set_ylabel('Depth (m)')
c = ax.pcolormesh(x, y, np.exp(-pdm))
ax.plot(dM,r,'ko')
fig.colorbar(c, ax=ax)
plt.show()
```
The linear feature of p(d|m) says that the data struggle to help us distinguish between shallow+small karsts, and deep+big ones. Note that we "know" the data uncertainty. In real life, this is unlikely...
## The posterior
When we apply Bayes' formula, we get a posterior distribution
\begin{equation}
\label{Basinv.4}
p( {\bf m } | {\bf d } ) \propto
\exp \left( - \frac{\| {\bf d} - {\bf F} ( {\bf m} ) \|^2}{2 \sigma_d^2}
- \frac{ \| {\bf m} - {\bf m}_0 \|^2 }{ 2 \sigma_m^2}
\right) .
\end{equation}
```python
pmd = np.exp(-(pm+pdm))
# The next line finds the coordinates of the maximum value of pmd:
maxpmd = np.argmax(pmd, axis=None) # is the x*y index (single value), to unravel x and y:
MLE=np.unravel_index(maxpmd, pmd.shape)
fig, ax = plt.subplots()
ax.set_title('the posterior probability p(m|d)')
ax.set_xlabel('Mass (deficit, kg)')
ax.set_ylabel('Depth (m)')
c = ax.pcolormesh(x, y, pmd)
ax.plot(dM,r,'ko')
ax.plot(x[MLE[1]],y[MLE[0]],'rx')
fig.colorbar(c, ax=ax)
plt.show()
```
The posterior distribution represents a data-driven update to the prior knowledge of the possible karst models. One can see that the "bull's eye" in the posterior is generally closer to the true parameters of the karst than the prior bull's eye. More importantly, the posterior distribution should have narrowed the range of models that could represent the karst. (But as we added noise to the data, this may not be true for every realization of "noisy data".)
## The maximum likelihood estimator
This bull's eye is the maximum likelihood estimator, and at the end of Chapter 22 you can read how for linear inverse problems (this one is not! Can you see from F(m) why?), this estimator is the same as the solution to the damped least squares solution, as you can learn at the end of Chapter 22.
## Your assignment: create an ensemble of maximum likelihood estimators
Build a loop, as if you came back every day for 100 days to repeat the noisy gravity measurements. In practice, this means you generate a new vector ${\bf d}$ each day. Plot the maximum likelihood estimator of the posterior for each "day". If the errors have mean zero, we hope you will see that the solutions have a mean that is centered on the "true" depth and mass of the karst! A sketch of one possible approach follows the empty cell below.
```python
```
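A minimal sketch of one way to do this (our addition, not the book's code; the seed, the 100-day count, and reusing the 15% noise level as the assumed data uncertainty are all choices we make here):

```python
np.random.seed(42)  # reproducibility; an arbitrary choice
n_days = 100
sigma_day = np.array(d_true) * sigma  # assumed data std: 15% of each noise-free datum
mle_mass = np.zeros(n_days)
mle_depth = np.zeros(n_days)
for day in range(n_days):
    # a fresh noisy "survey": redraw the multiplicative Gaussian noise
    d_day = np.array(d_true) * (1 + np.random.normal(mu, sigma, 2))
    pdm_day = 0.5 * (((d_day[0] - F(xx, yy)) / sigma_day[0])**2
                     + ((d_day[1] - F(xx, yy + rL)) / sigma_day[1])**2)
    post_day = np.exp(-(pm + pdm_day))  # the prior misfit pm does not change day to day
    i_max = np.unravel_index(np.argmax(post_day), post_day.shape)
    mle_mass[day] = x[i_max[1]]
    mle_depth[day] = y[i_max[0]]

fig, ax = plt.subplots(figsize=(8, 6))
ax.scatter(mle_mass, mle_depth, alpha=0.5, label='daily MLE')
ax.plot(dM, r, 'ko', markersize=10, label='true model')
ax.plot(mle_mass.mean(), mle_depth.mean(), 'rx', markersize=12, label='ensemble mean')
ax.set_xlabel('Mass (deficit, kg)')
ax.set_ylabel('Depth (m)')
ax.legend()
plt.show()
```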
*Source: 22_Inverse_Theory.ipynb, from PALab/mathematical-notebooks-for-the-physical-sciences (MIT)*
<h1>Table of Contents<span class="tocSkip"></span></h1>

1. Discrete Uniform Distribution
    1.1 Discrete Probability Mass Function
2. Binomial Distribution
    2.1 Example 1
    2.2 Example 2
    2.3 Binomial R.V. Generator
    2.4 Moments of Binomial Distribution
3. Poisson Distribution
    3.1 Example
    3.2 Poisson R.V. Generator
    3.3 Moments of Poisson Distribution
4. Geometric Distribution
    4.1 Example
    4.2 G-D Moments and Generator
5. Hypergeometric Distribution
    5.1 Example
6. Continuous Uniform Distribution
    6.1 CDF and PDF of Continuous Uniform Distribution
7. Normal Distribution
    7.1 Inverse Normal CDF
    7.2 Normal R.V. Generator
    7.3 Bivariate Normal Distribution
        7.3.1 1st Method of Formulation
        7.3.2 2nd Method of Formulation
8. Beta Distribution
9. $\chi^2$ Distribution
    9.1 $\chi^2$ PDF and CDF
10. F Distribution
```python
from texttable import Texttable
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
import scipy.special   # used below for scipy.special.comb
import scipy.stats
import pandas as pd
```python
import warnings
warnings.filterwarnings("ignore")
```
In this chapter, we will only be dealing with special distributions which are frequently encountered in practice.
# <font face="gotham" color="purple"> Discrete Uniform Distribution</font>
We saw PMFs and PDFs in the last chapter; here we review an example of the discrete uniform distribution.
```python
unif_d = sp.stats.randint.rvs(0, 10, size=1000)
fig, ax = plt.subplots(figsize=(12, 7))
# one bin per integer outcome, so the bars line up with the support {0, ..., 9}
h, bins, patches = ax.hist(unif_d, density=True, bins=np.arange(-.5, 10.5, 1))
ax.set_title('Discrete Uniform Frequency Distribution', size=16)
plt.show()
```
## <font face="gotham" color="purple"> Discrete Probability Mass Function</font>
```python
x = np.arange(1, 11)
unif_pmf = sp.stats.randint.pmf(x, 2, 10)  # support is the integers 2, ..., 9
fig, ax = plt.subplots(figsize=(12, 7))
ax.scatter(x, unif_pmf, s=100, color='green', label='Low = 2, High = 9')
ax.plot(x, unif_pmf, lw=3, color='k')
ax.legend(fontsize=18, loc='center')
plt.show()
```
# <font face="gotham" color="purple"> Binomial Distribution</font>
The binomial experiment has 4 properties:
<font face="gotham" color="red">
* A sequence of $n$ identical trials
* Only two outcomes are possible: $success$ or $failure$
* The probability of success $p$ does not change from trial to trial
* Trials are independent events
</font>
The PMF of the binomial distribution is
<br>
<br>
<span style="color:red">
\begin{equation}
_nC_k\, p^k(1-p)^{n-k}
\end{equation}
</span>
<br>
## <font face="gotham" color="purple"> Example 1</font>
We use a simple example to explain the PMF.
<font face="gotham" color="purple"> Every month, a personal banker might meet 50 people enquiring about loans; empirically, 30% of them have bad credit history. Calculate the probability for each possible number of people, from 1 to 50, having bad credit history.</font>
First we can answer: what is the probability that the banker meets exactly $14$ people who have bad credit history?
```python
n = 50
k = 14 # what is the prob that exact 14 ppl she met had bad credit history?
b = scipy.special.comb(50, 14)
p = .3
f_bino = b*p**k*(1-p)**(n-k)
print('The prob of meeting {0} persons who have bad credit history is {1:.2f}%.'.format(k, f_bino * 100))
```
The prob of meeting 14 persons who have bad credit history is 11.89%.
Or we can use ```scipy.stats.binom.pmf``` to show the probability for $1$ to $50$ persons.
```python
n = 50
p = .3
bad_credit = np.arange(1, 51)
y = sp.stats.binom.pmf(bad_credit, n, p)
fig, ax = plt.subplots(figsize = (10, 6))
ax.plot(bad_credit, y, lw = 3, color = 'r', alpha = .5)
ax.set_ylim([0, .13])
ax.set_title('The probability that from 1 to 50 persons who have bad credit history', size = 16, x = .5, y = 1.02)
ax.set_xlabel('Number of Person Who Has Bad Credit History', size = 12)
ax.set_ylabel('Probability', size = 12)
plt.show()
```
Since $30\%$ of people have bad credit history, the mean of the distribution is $15$ out of $50$.
## <font face="gotham" color="purple"> Example 2</font>
Next we can formulate another question using ```scipy.stats.binom.cdf```.
If a trader trades $n$ times a month and has probability $p$ of winning each trade, find the probability that he wins at most $k$ trades a month.
We can also ask for the probabilities that he wins more than $k$ trades, or between $k_2$ and $k_1$ trades.
```python
n = 20 # number of trades
p = .55 # winning odds
k = 12 # at least win k trades
k1 = 14
k2 = 4
win_less = sp.stats.binom.cdf(k, n, p)
win_more = 1- sp.stats.binom.cdf(k, n, p)
win_betw = sp.stats.binom.cdf(k1, n, p) - sp.stats.binom.cdf(k2, n, p)
```
```python
table = Texttable()
table.set_cols_align([ 'c', 'c', 'c'])
table.set_cols_valign([ 'm', 'm', 'm'])
table.add_rows([['Win Less', ' Win More ', ' Win btw 4~14 '],
[ win_less, win_more, win_betw]])
print(table.draw())
```
+----------+------------+----------------+
| Win Less | Win More | Win btw 4~14 |
+==========+============+================+
| 0.748 | 0.252 | 0.943 |
+----------+------------+----------------+
What if the probability of winning varies from 0.1 to 0.8? What is the probability that he wins at most 6 trades, assuming he trades 20 times every month?
```python
chance = np.arange(.1, .81, .05)
win_less = sp.stats.binom.cdf(6, 20, chance)
data_dict = {'win_rate':chance, 'win_less':win_less} # pandas makes it easy to locate points by index for the annotation below
df = pd.DataFrame(data_dict)
df.plot(figsize = (10, 6))
plt.show()
```
To show the idea in a scatter plot.
```python
fig, ax = plt.subplots(figsize = (12, 7))
ax.scatter(chance, win_less)
txt = 'This point means if the winning rate is 40%,\n then the probability of winning less\n than 6 trades is 25%.'
ax.annotate(txt, xy = (df.iloc[6][0], df.iloc[6][1]),
xytext = (.35, .5), weight = 'bold', color = 'Tomato', size = 14,
arrowprops = dict(arrowstyle = '->', connectionstyle = 'arc3', color = 'b'))
plt.show()
```
## <font face="gotham" color="purple"> Binomial R.V. Generator</font>
```sp.stats.binom.rvs()``` is the random generator for the binomial distribution.
```python
n = 1000 # number of trials
p = 0.3 # probability of success
bino = sp.stats.binom.rvs(n, p, size = 1000)
```
```python
txt = 'This line is at $x = p \cdot n$, \n where the highest bar should be.'
fig, ax = plt.subplots(figsize = (12, 7))
h, bins, patches = ax.hist(bino,bins= 80)
ax.axvline(p*n, color = 'Tomato', ls = '--', lw = 3)
ax.annotate(txt, xy = (p*n, h.max()*0.7),
xytext = (p*n ,h.max()-5), weight = 'bold',
color = 'Tomato', size = 14,
arrowprops = dict(arrowstyle = '->',
connectionstyle = 'arc3', color = 'b'))
ax.set_title('Generated Binomial R.V.s', size = 18)
plt.show()
```
According to the example, if the success rate is <font face="gotham" color="red">$p=.3$</font> over <font face="gotham" color="red">$1000$</font> total trials, then the counts of successes have the frequency distribution shown in the histogram.
## <font face="gotham" color="purple"> Moments of Binomial Distribution</font>
We can also calculate all the important moments of the distribution by using ```sp.stats.binom.stats(n, p, moments = 'mvsk')```. The Texttable library is used to format the output.
```python
n = 1000 # number of trials
p = 0.3 # probability of success
bino_stats = sp.stats.binom.stats(n, p, moments = 'mvsk') # mean, variance, skewness, kurtosis
table = Texttable()
table.set_cols_align([ "c", "c", 'c','c'])
table.set_cols_valign([ "m", "m", 'm','m'])
table.add_rows([["mean", " variance ", ' skewness ', 'kurtosis '],
[ bino_stats[0], bino_stats[1], bino_stats[2], bino_stats[3]]])
print(table.draw())
```
+------+------------+------------+-----------+
| mean | variance | skewness | kurtosis |
+======+============+============+===========+
| 300 | 210 | 0.028 | -0.001 |
+------+------------+------------+-----------+
# <font face="gotham" color="purple"> Poisson Distribution</font>
When $n\rightarrow\infty$ and $p\rightarrow0$ with $np$ held fixed, the binomial distribution converges to the Poisson distribution; i.e. when $n$ is large and $p$ is small, we can use the Poisson to approximate the binomial.
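A quick numerical check of this approximation (a minimal sketch; the values of $n$ and $p$ are illustrative):
```python
n, p = 1200, 1/1000
lam = n * p # the Poisson parameter lambda = np
k = np.arange(0, 8)
binom_pmf = sp.stats.binom.pmf(k, n, p)
poiss_pmf = sp.stats.poisson.pmf(k, lam)
print('largest PMF difference: {:.2e}'.format(np.max(np.abs(binom_pmf - poiss_pmf)))) # should be tiny
```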
## <font face="gotham" color="purple">Example</font>
For an ordinary trader, every trade has a $1/1000$ probability of encountering a 'Black Swan' shock. If a trader makes $20$ trades per month, what is the probability that she will encounter $2$ 'Black Swan' shocks within 5 years?
This problem can be solved with the binomial distribution; the formulation is below
\begin{equation}
\text{Number of Trades} = 20\times 12\times 5=1200\\
P(x=2) = \binom{1200}{2}\Big(\frac{1}{1000}\Big)^2\Big(\frac{999}{1000}\Big)^{1198}
\end{equation}
Of course, we can employ the Poisson PMF to approximate it; the parameter for the Poisson distribution is
\begin{equation}
\lambda = np = 1200 \times \frac{1}{1000} = 1.2
\end{equation}
which means that every 5 years there are on average 1.2 Black Swan shocks.
\begin{equation}
P(x=2)=\frac{\lambda^ke^{-\lambda}}{k!}=\frac{1.2^2e^{-1.2}}{2!}
\end{equation}
To use the PMF directly
```python
sp.special.comb(1200, 2)*(1/1000)**2*(999/1000)**1198
```
0.21698280952603388
Or use the built-in function for Poisson ```sp.stats.poisson.pmf()```.
```python
k = 2
n = 20 * 12 * 5 # 20 times per month, and 5 years span
p = 1/1000
lambdaP = p * n # lambda in Poisson
p = sp.stats.poisson.pmf(k, lambdaP)
print('The probability of having {0} BS shock(s) in a span of 5 years is {1:.2f}%.'.format(k, p*100))
```
The probability of having 2 BS shock(s) in a span of 5 years is 21.69%.
A surprisingly high probability of having 2 BS shocks, and one BS shock could possibly wipe out the whole account. Take care, traders, <font face="gotham" color="red">survival is pivotal</font>!
So what's the probability of having more than $k$ BS shocks?
```python
k = 2
prob_sf = 1 - sp.stats.poisson.cdf(k, lambdaP)
prob_sf_inv = sp.stats.poisson.cdf(k, lambdaP)
print('The probability of having more than %1.0f BS shocks in 5 years is %3.3f%%.' % (k, prob_sf*100)) # a double % escapes formatting
```
The probability of having more than 2 BS shocks in 5 years is 12.051%.
## <font face="gotham" color="purple"> Poisson R.V. Generator</font>
```python
poiss = sp.stats.poisson.rvs(lambdaP, size = 10000)
fig, ax = plt.subplots(figsize = (12, 7))
h, bins, patches = ax.hist(poiss, density = True, bins = 10)
ax.set_title('Poisson Frequency Distribution', size = 16)
plt.show()
```
## <font face="gotham" color="purple"> Moments of Poisson Distribution</font>
Again we can compute the most important moments; the first argument of ```sp.stats.poisson.stats()``` is the Poisson mean $\lambda$.
```python
poiss_stats = sp.stats.poisson.stats(lambdaP, moments = 'mvsk') # mean, variance, skewness, kurtosis
table = Texttable()
table.set_cols_align([ "c", "c", 'c','c'])
table.set_cols_valign([ "m", "m", 'm','m'])
table.add_rows([["mean", " variance ", ' skewness ', 'kurtosis '],
[ poiss_stats[0], poiss_stats[1], poiss_stats[2], poiss_stats[3]]])
print(table.draw())
```
+-------+------------+------------+-----------+
| mean  |  variance  |  skewness  | kurtosis  |
+=======+============+============+===========+
| 1.200 |   1.200    |   0.913    |   0.833   |
+-------+------------+------------+-----------+
# <font face="gotham" color="purple"> Geometric Distribution</font>
The PMF of G-D is <font face="gotham" color="red">$f(k)=p(1-p)^k$</font>, where <font face="gotham" color="red">$k$</font> is a non-negative integer. <font face="gotham" color="red">$p$</font> is the probability of success, and <font face="gotham" color="red">$k$</font> is the number of failures before success.
## <font face="gotham" color="purple">Example</font>
So the G-D answers the question: "How many times do you have to fail before the first success?"
If each trade has a 1/1000 chance of encountering a BS shock, what is the probability of encountering a BS shock only after <font face="gotham" color="red">$k$</font> safe trades?
```python
k = 10
p = 1/1000
geodist = (1 - p)**k * p # k safe trades, then one BS shock
print('The probability of observing exactly %1.0f safe trades before a BS shock is %3.3f.' % (k, geodist))
```
The probability of observing exactly 10 safe trades before a BS shock is 0.001.
Or use the built-in ```sp.stats.geom.pmf()```.
```python
sp.stats.geom.pmf(10, 1/1000)
```
0.000991035916125874
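Note the small discrepancy with the manual computation: ```sp.stats.geom``` is parameterized by the number of *trials up to and including* the first success, with PMF $p(1-p)^{k-1}$ for $k \geq 1$, while our manual formula counts *failures before* the success. Shifting $k$ by one reconciles the two (a minimal check):
```python
k = 10
p = 1/1000
manual = p * (1 - p)**k # k failures, then one success
print(manual)
print(sp.stats.geom.pmf(k + 1, p)) # same value under scipy's trial-count convention
```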
```python
geo_cdf = sp.stats.geom.cdf(k, p)
print('The probability of observing %1.0f or fewer trades before the first BS shock is %3.3f.' % (k, geo_cdf))
```
The probability of observing 10 or fewer trades before the first BS shock is 0.010.
## <font face="gotham" color="purple"> G-D Moments and Generator</font>
The moments and the random generator are as follows
```python
mean, var, skew, kurt = sp.stats.geom.stats(p, moments='mvsk')
table = Texttable()
table.set_cols_align([ "c", "c", 'c','c', 'c', 'c'])
table.set_cols_valign([ "m", "m", 'm','m', 'm', 'm'])
table.add_rows([['p','k',"mean", " variance ", ' skewness ', ' kurtosis '],
[ p, k, mean, var, skew, kurt]])
print(table.draw())
```
+-------+----+------+------------+------------+------------+
| p | k | mean | variance | skewness | kurtosis |
+=======+====+======+============+============+============+
| 0.001 | 10 | 1000 | 999000 | 2.000 | 6.000 |
+-------+----+------+------------+------------+------------+
```python
geomet = sp.stats.geom.rvs(p, size = 10000)
fig, ax = plt.subplots(figsize = (12, 7))
h, bins, patches = ax.hist(geomet, density = True, bins = 30)
ax.set_title('Geometric Frequency Distribution', size = 16)
plt.show()
```
# <font face="gotham" color="purple"> Hypergeometric Distribution</font>
The main difference between hypergeometric and binomial is that the former sampling is not independent of each other, i.e. the <font face="gotham" color="red">sampling is without replacement</font>.
The formula is
\begin{equation}
f(x) =\frac{{K\choose x} {M-K \choose N-x}}{{M\choose N}}
\end{equation}
where $M$ is the population size, $K$ the number of success states in the population, $N$ the number of draws, and $x$ the number of successes drawn.
## <font face="gotham" color="purple"> Example </font>
There are 100 candies in an urn; 20 are red and 80 are blue. If we take 5 of them out, what is the probability of getting exactly 4 red candies?
Solution:
\begin{equation}
\frac{{20\choose4}{80\choose1}}{{100\choose5}}
\end{equation}
To solve it:
```python
C = sp.special.comb(20, 4)*sp.special.comb(80, 1) /sp.special.comb(100, 5)
print('The prob of getting 4 red candies when taking 5 out is %1.6f%%.' % (C*100))
```
The prob of getting 4 red candies when taking 5 out is 0.514826%.
Or with built-in function ```sp.stats.hypergeom.pmf()```
```python
# pmf(k, M, n, N, loc=0): k = red drawn, M = population size, n = red in population, N = draws
hgeo = sp.stats.hypergeom.pmf(4, 100, 20, 5, loc = 0)
hgeo
```
0.005148263616599428
What is the probability that at most $4$ red candies are taken? i.e. the sum of the probabilities of drawing $0$, $1$, $2$, $3$, and $4$ red candies.
```python
hgeo_cdf = sp.stats.hypergeom.cdf(4, 100, 20, 5, loc = 0) # cdf(k, M, n, N): same argument order as pmf
print('The prob of getting at most 4 red candies when taking 5 out is %1.6f%%.' %(hgeo_cdf*100))
```
The prob of getting at most 4 red candies when taking 5 out is 99.979407%.
```python
hgeo_rv = sp.stats.hypergeom.rvs(100, 20, 5, size = 10000)
fig, ax = plt.subplots(figsize = (12, 7))
h, bins, patches = ax.hist(hgeo_rv, density = True)
ax.set_title('Hypergeometric Frequency Distribution', size = 16)
s = ''' It can be interpreted as: if there are 100 candies in the urn,
20 are red, and we take 5 out of 100,
the chance of getting from 0 to 5 red candies
is shown in the chart.
As we can see, it is nearly impossible to get 4 or 5 red candies,
but getting 1 red candy is the most likely outcome.'''
ax.text(1.6, .5, s, fontsize=14, color ='red')
plt.show()
```
```python
mean, var, skew, kurt = sp.stats.hypergeom.stats(100, 20, 5, moments='mvsk')
table = Texttable()
table.set_cols_align([ "c", "c", 'c','c', 'c', 'c'])
table.set_cols_valign([ "m", "m", 'm','m', 'm', 'm'])
table.add_rows([['M','n',"mean", " variance ", ' skewness ', ' kurtosis '],
[ 100, 20, mean, var, skew, kurt]])
print(table.draw())
```
+-----+----+------+------------+------------+------------+
| M   | n  | mean | variance   | skewness   | kurtosis   |
+=====+====+======+============+============+============+
| 100 | 20 | 1 | 0.768 | 0.629 | -0.010 |
+-----+----+------+------------+------------+------------+
# <font face="gotham" color="purple"> Continuous Uniform Distribution </font>
The PDF of the continuous uniform distribution on $[a, b]$ is
\begin{equation}
f(x)=\frac{1}{b-a}
\end{equation}
And its r.v. generator is one of the most frequently used functions in NumPy: ```np.random.rand()```
```python
unif = np.random.rand(10000)
fig, ax = plt.subplots(figsize = (12, 7))
h, bins, patches = ax.hist(unif, density = True, bins = 30)
ax.set_title('Continuous Uniform Frequency Distribution', size = 16)
plt.show()
```
## <font face="gotham" color="purple"> CDF and PDF of Continuous Uniform Distribution </font>
```python
# pdf(x, loc=0, scale=1)
# cdf(x, loc=0, scale=1)
x = np.linspace(-.2, 1.2, 100)
unif_pdf = sp.stats.uniform.pdf(x)
unif_cdf = sp.stats.uniform.cdf(x)
fig, ax = plt.subplots(nrows = 1, ncols = 2, figsize = (17, 7))
ax[0].plot(x,unif_pdf, lw = 4, label = 'PDF of Continuous U-D')
ax[0].set_xlim([-.1, 1.1])
ax[0].set_ylim([0, 2])
ax[0].legend(fontsize = 16)
ax[1].plot(x,unif_cdf, lw = 4, label = 'CDF of Continuous U-D')
ax[1].set_xlim([-.2, 1.2])
ax[1].set_ylim([0, 2])
ax[1].legend(fontsize = 16)
plt.show()
```
```python
mean, var, skew, kurt = sp.stats.uniform.stats(moments='mvsk')
table = Texttable()
table.set_cols_align([ "c", "c", 'c','c'])
table.set_cols_valign([ "m", "m", 'm','m'])
table.add_rows([["mean", " variance ", ' skewness ', ' kurtosis '],
[ mean, var, skew, kurt]])
print(table.draw())
```
+-------+------------+------------+------------+
| mean | variance | skewness | kurtosis |
+=======+============+============+============+
| 0.500 | 0.083 | 0 | -1.200 |
+-------+------------+------------+------------+
# <font face="gotham" color="purple"> Normal Distribution</font>
The most convenient way to create a normal distribution plot is to use ```sp.stats.norm.pdf()```.
The PDF of a single normal distribution is
$$
f(x)= \frac{1}{\sigma \sqrt{2 \pi}} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}}
$$
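As a sanity check, the formula can be evaluated directly and compared with the built-in function (a minimal sketch):
```python
mu, sigma = 2, 1
x = np.arange(-2, 6, 0.1)
manual_pdf = np.exp(-0.5*((x - mu)/sigma)**2) / (sigma*np.sqrt(2*np.pi))
print(np.allclose(manual_pdf, sp.stats.norm.pdf(x, mu, sigma))) # True
```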
```python
mu = 2
sigma = 1
x = np.arange(-2, 6, 0.1)
norm_pdf = sp.stats.norm.pdf(x, mu, sigma)
norm_cdf = sp.stats.norm.cdf(x, mu, sigma)
fig, ax = plt.subplots(nrows = 1, ncols = 2, figsize = (17, 7))
ax[0].plot(x,norm_pdf, lw = 4, label = 'PDF of Normal Distribution', ls = '--')
ax[0].legend(fontsize = 16, loc = 'lower left', framealpha=0.2)
ax[1].plot(x,norm_cdf, lw = 4, label = 'CDF of Normal Distribution')
ax[1].legend(fontsize = 16,fancybox=True, framealpha=0.5)
plt.show()
```
## <font face="gotham" color="purple"> Inverse Normal CDF</font>
The inverse normal CDF is commonly used in calculating $p$-values; the plot below is useful for understanding the idea.
```python
norm_95_r = sp.stats.norm.ppf(.975) # ppf is the percent point function, i.e. the inverse CDF
norm_95_l = sp.stats.norm.ppf(.025)
x = np.linspace(-5, 5, 200)
y = sp.stats.norm.pdf(x)
xl = np.linspace(-5, norm_95_l, 100)
yl = sp.stats.norm.pdf(xl)
xr = np.linspace(norm_95_r, 5, 100)
yr = sp.stats.norm.pdf(xr)
fig, ax = plt.subplots(figsize = (17, 7))
ax.plot(x,y, lw = 4, label = 'PDF of Normal Distribution', ls = '-', color = 'orange')
ax.set_ylim([0, .45])
ax.fill_between(x, y, 0, alpha=0.1, color = 'blue')
ax.fill_between(xl,yl, 0, alpha=0.6, color = 'blue')
ax.fill_between(xr,yr, 0, alpha=0.6, color = 'blue')
ax.text(-.2, 0.15, '95%', fontsize = 20)
ax.text(-2.3, 0.015, '2.5%', fontsize = 12, color = 'white')
ax.text(2.01, 0.015, '2.5%', fontsize = 12, color = 'white')
ax.annotate('±%.4f' %norm_95_r, xy = (norm_95_r, 0), xytext = (-.4, .05), weight = 'bold', color = 'r',
arrowprops = dict(arrowstyle = '->', connectionstyle = 'arc3', color = 'b'), fontsize = 16)
ax.annotate('±%.4f' %norm_95_r, xy = (norm_95_l, 0), xytext = (-.4, .05), weight = 'bold', color = 'r',
arrowprops = dict(arrowstyle = '->', connectionstyle = 'arc3', color = 'b'), fontsize = 16)
ax.set_title('Normal Distribution And 2.5% Shaded Area', size = 20)
plt.show()
```
## <font face="gotham" color="purple"> Normal R.V. Generator</font>
To generate a normal distribution histogram
```python
# rvs(loc=0, scale=1, size=1, random_state=None)
norm_rv = sp.stats.norm.rvs(mu, sigma, size = 5000)
fig, ax = plt.subplots(figsize = (12, 7))
h, bins, patches = ax.hist(norm_rv, density = True, bins = 50)
ax.set_title('Normal Frequency Distribution', size = 16)
plt.show()
```
```python
mean, var, skew, kurt = sp.stats.norm.stats(mu, sigma, moments='mvsk')
table = Texttable()
table.set_cols_align([ "c", "c", 'c','c'])
table.set_cols_valign([ "m", "m", 'm','m'])
table.add_rows([["mean", " variance ", ' skewness ', ' kurtosis '],
[ mean, var, skew, kurt]])
print(table.draw())
```
+------+------------+------------+------------+
| mean | variance | skewness | kurtosis |
+======+============+============+============+
| 2 | 1 | 0 | 0 |
+------+------------+------------+------------+
## <font face="gotham" color="purple"> Bivariate Normal Distribution</font>
The multivariate normal distribution density function is
\begin{equation}
f_\boldsymbol{X}(x_1,...,x_k)=\frac{1}{\sqrt{(2\pi)^k|\Sigma|}}\exp{\Big(-\frac{(x-\mu)^T\Sigma^{-1}(x-\mu)}{2}\Big)}
\end{equation}
### <font face="gotham" color="purple"> 1st Method of Formulation</font>
```python
%matplotlib inline
mu_x = 0
sigma_x = 2
mu_y = 0
sigma_y = 2
#Create grid and multivariate normal
x = np.linspace(-10,10,500)
y = np.linspace(-10,10,500)
X, Y = np.meshgrid(x,y)
pos = np.empty(X.shape + (2,))
pos[:, :, 0] = X; pos[:, :, 1] = Y # more technical than next one
norm = sp.stats.multivariate_normal([mu_x, mu_y], [[sigma_x, 0], [0, sigma_y]]) # frozen
#Make a 3D plot
fig = plt.figure(figsize = (10, 6))
ax = fig.add_subplot(projection='3d') # fig.gca(projection=...) is removed in newer Matplotlib
ax.plot_surface(X, Y, norm.pdf(pos),cmap='viridis',linewidth=0)
ax.set_xlabel('X axis')
ax.set_ylabel('Y axis')
ax.set_zlabel('Z axis')
plt.show()
```
### <font face="gotham" color="purple"> 2nd Method of Formulation</font>
```python
#Parameters to set
mu_x = 0
sigma_x = 7
mu_y = 0
sigma_y = 15
x = np.linspace(-10,10,500)
y = np.linspace(-10,10,500)
X,Y = np.meshgrid(x,y)
pos = np.array([X.flatten(),Y.flatten()]).T # more intuitive than last one
rv = sp.stats.multivariate_normal([mu_x, mu_y], [[sigma_x, 0], [0, sigma_y]])
fig = plt.figure(figsize=(10,10))
ax0 = fig.add_subplot(111)
ax0.contourf(X, Y, rv.pdf(pos).reshape(500,500),cmap='viridis')
plt.show()
```
# <font face="gotham" color="purple"> Beta Distribution</font>
The PDF of Beta distribution is
\begin{equation}
f(x, a, b)=\frac{\Gamma(a+b) x^{a-1}(1-x)^{b-1}}{\Gamma(a) \Gamma(b)}
\end{equation}
where $0\leq x \leq 1$ and $a>0$, $b>0$, $\Gamma$ is the Gamma function.
The Beta distribution is a natural choice for priors in Bayesian econometrics since its support is the unit interval.
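For instance, the Beta distribution is conjugate to the binomial likelihood: a $\text{Beta}(a, b)$ prior combined with $x$ successes in $n$ trials yields a $\text{Beta}(a + x,\ b + n - x)$ posterior. Below is a minimal sketch of this update; the prior parameters and data are illustrative.
```python
a, b = 2, 2 # illustrative prior
n, successes = 20, 14 # illustrative data
x = np.linspace(0, 1, 100)
fig, ax = plt.subplots(figsize = (9, 6))
ax.plot(x, sp.stats.beta.pdf(x, a, b), lw = 3, label = 'prior: Beta(%d, %d)' % (a, b))
ax.plot(x, sp.stats.beta.pdf(x, a + successes, b + n - successes), lw = 3,
        label = 'posterior: Beta(%d, %d)' % (a + successes, b + n - successes))
ax.legend()
plt.show()
```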
```python
x = np.linspace(0, 1, 100)
fig = plt.figure(figsize=(9, 9))
ax = fig.add_subplot(111)
params = np.array([[[.5,.5]],
[[5,1]],
[[1,5]],
[[2,2]],
[[2,3]],
[[3,2]],
[[1,1]]])
for i in range(params.shape[0]):
    beta_pdf = sp.stats.beta.pdf(x, params[i][0][0], params[i][0][1])
    ax.plot(x, beta_pdf, lw = 3, label = '$a = %.1f, b = %.1f$' % (params[i][0][0], params[i][0][1]))
ax.legend()
ax.axis([0, 1, 0, 3])
```
```python
fig = plt.figure(figsize=(9, 9))
ax = fig.add_subplot(111)
x = np.linspace(0,1,100)
params = np.array([[[.5,.5]],
[[5,1]],
[[1,5]],
[[2,2]],
[[2,3]],
[[3,2]],
[[1,1]]])
for i in range(params.shape[0]):
    beta_cdf = sp.stats.beta.cdf(x, params[i][0][0], params[i][0][1])
    ax.plot(x, beta_cdf, lw = 3, label = '$a = %.1f, b = %.1f$' % (params[i][0][0], params[i][0][1]))
ax.legend()
ax.axis([0, 1, 0, 1])
```
# <font face="gotham" color="purple"> $\chi^2$ Distribution</font>
The $\chi^2$ distribution is closely connected with the normal distribution: if $z$ has the standard normal distribution, then $z^2$ has the $\chi^2$ distribution with $d.f.=1$. Further, if
\begin{equation}
z_1, z_2, ..., z_k \overset{i.i.d.}{\sim} N(0, 1)
\end{equation}
then the summation has a $\chi^2$ distribution with $d.f. = k$:
\begin{equation}
\sum_{i=1}^k z_i^2 \sim \chi^2(k)
\end{equation}
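We can verify this empirically by squaring and summing standard normal draws (a minimal sketch):
```python
k = 3
z = sp.stats.norm.rvs(size = (10000, k))
chi2_samples = (z**2).sum(axis = 1) # sum of k squared standard normals
x = np.linspace(0, 15, 200)
fig, ax = plt.subplots(figsize = (12, 7))
ax.hist(chi2_samples, density = True, bins = 50, alpha = .5)
ax.plot(x, sp.stats.chi2.pdf(x, k), lw = 3, c = 'r', label = '$\chi^2$ PDF with d.f. = %d' % k)
ax.legend(fontsize = 14)
plt.show()
```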
## <font face="gotham" color="purple"> $\chi^2$ PDF and CDF</font>
```python
k = 1
x = np.linspace(0, 5, 100)
chi_pdf = sp.stats.chi2.pdf(x, k)
fig, ax = plt.subplots(figsize = (12, 7))
ax.plot(x, chi_pdf, lw = 3, c = 'r', label = '$\chi^2$ distribution with d.f. = 1')
ax.legend(fontsize = 18)
plt.show()
```
```python
fig, ax = plt.subplots(figsize = (12, 7))
for i in range(1, 6):
    x = np.linspace(0, 5, 100)
    chi_pdf = sp.stats.chi2.pdf(x, i)
    ax.plot(x, chi_pdf, lw = 3, label = '$\chi^2$ distribution with d.f. = %.0d'%i)
ax.legend(fontsize = 12)
ax.axis([0, 5, 0, 1])
plt.show()
```
# <font face="gotham" color="purple"> F Distribution</font>
If $U_1$ has a $\chi^2$ distribution with $\nu_1$ d.f. and $U_2$ has a $\chi^2$ distribution with $\nu_2$ d.f., then
\begin{equation}
\frac{U_1/\nu_1}{U_2/\nu_2}\sim F(\nu_1, \nu_2)
\end{equation}
We use the $F$ distribution for ratios of variances.
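The defining ratio can also be simulated directly (a minimal sketch):
```python
nu1, nu2 = 8, 15
u1 = sp.stats.chi2.rvs(nu1, size = 10000)
u2 = sp.stats.chi2.rvs(nu2, size = 10000)
f_samples = (u1/nu1) / (u2/nu2) # should follow F(nu1, nu2)
x = np.linspace(0, 5, 100)
fig, ax = plt.subplots(figsize = (12, 7))
ax.hist(f_samples, density = True, bins = 100, range = (0, 5), alpha = .5)
ax.plot(x, sp.stats.f.pdf(x, dfn = nu1, dfd = nu2), lw = 3, label = '$F(%d, %d)$ PDF' % (nu1, nu2))
ax.legend(fontsize = 14)
plt.show()
```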
```python
x = np.linspace(0, 5, 100)
fig, ax = plt.subplots(figsize = (12, 7))
df1 = 10
df2 = 5
f_pdf = sp.stats.f.pdf(x, dfn = df1, dfd = df2)
ax.plot(x, f_pdf, lw =3, label = '$df_1 = %.d, df_2 = %.d$' %(df1, df2))
df1 = 5
df2 = 10
f_pdf = sp.stats.f.pdf(x, dfn = df1, dfd = df2)
ax.plot(x, f_pdf, lw =3, label = '$df_1 = %.d, df_2 = %.d$ '%(df1, df2))
df1 = 8
df2 = 15
f_pdf = sp.stats.f.pdf(x, dfn = df1, dfd = df2)
ax.plot(x, f_pdf, lw =3, color = 'red', label = '$df_1 = %.d, df_2 = %.d$' %(df1, df2))
df1 = 15
df2 = 8
f_pdf = sp.stats.f.pdf(x, dfn = df1, dfd = df2)
ax.plot(x, f_pdf, lw =3, label = '$df_1 = %.d, df_2 = %.d$ '%(df1, df2))
ax.legend(fontsize = 15)
plt.show()
```
Chapter 2
======
______
This chapter introduces more PyMC syntax and design patterns, and ways to think about how to model a system from a Bayesian perspective. It also contains tips and data visualization techniques for assessing goodness-of-fit for your Bayesian model.
## A little more on PyMC
### Parent and Child relationships
To assist with describing Bayesian relationships, and to be consistent with PyMC's documentation, we introduce *parent and child* variables.
* *parent variables* are variables that influence another variable.
* *child variables* are variables that are affected by other variables, i.e. are the subject of parent variables.
A variable can be both a parent and child. For example, consider the PyMC code below.
```
import pymc as mc
parameter = mc.Exponential("poisson_param", 1)
data_generator = mc.Poisson("data_generator", parameter)
data_plus_one = data_generator + 1
```
`parameter` controls the parameter of `data_generator`, hence influences its values. The former is a parent of the latter. By symmetry, `data_generator` is a child of `parameter`.
Likewise, `data_generator` is a parent to the variable `data_plus_one` (hence making `data_generator` both a parent and child variable). Although it does not look like one, `data_plus_one` should be treated as a PyMC variable as it is a *function* of another PyMC variable, hence is a child variable to `data_generator`.
This nomenclature is introduced to help us describe relationships in PyMC modeling. You can access a variable's children and parents using the `children` and `parents` attributes attached to variables.
```
print "Children of `parameter`: "
print parameter.children
print "\nParents of `data_generator`: "
print data_generator.parents
print "\nChildren of `data_generator`: "
print data_generator.children
```
Children of `parameter`:
set([<pymc.distributions.Poisson 'data_generator' at 0x458d750>])
Parents of `data_generator`:
{'mu': <pymc.distributions.Exponential 'poisson_param' at 0x43f6150>}
Children of `data_generator`:
set([<pymc.PyMCObjects.Deterministic '(data_generator_add_1)' at 0x442a5d0>])
Of course a child can have more than one parent, and a parent can have many children.
### PyMC Variables
All PyMC variables also expose a `value` attribute. This attribute holds the *current* (possibly random) internal value of the variable. If the variable is a child variable, its value changes given the variable's parents' values. Using the same variables from before:
```
print "parameter.value =", parameter.value
print "data_generator.value =", data_generator.value
print "data_plus_one.value =", data_plus_one.value
```
parameter.value = 0.0534927837795
data_generator.value = 0
data_plus_one.value = 1
PyMC is concerned with two types of programming variables: `stochastic` and `deterministic`.
* *stochastic variables* are variables that are not deterministic, i.e., even if you knew all the values of the variables' parents (if it even has any parents), it would still be random. Included in this category are instances of classes `Poisson`, `DiscreteUniform`, and `Exponential`.
* *deterministic variables* are variables that are not random if the variables' parents were known. This might be confusing at first: a quick mental check is *if I knew all of variable `foo`'s parent variables, I could determine what `foo`'s value is.*
We will detail each below.
#### Initializing Stochastic variables
Initializing a stochastic variable requires a `name` argument, plus additional parameters that are class specific. For example:
`some_variable = mc.DiscreteUniform( "discrete_uni_var", 0, 4 )`
where 0,4 are the `DiscreteUniform`-specific upper and lower bound on the random variable. The [PyMC docs](http://pymc-devs.github.com/pymc/distributions.html) contain the specific parameters for stochastic variables. (Or use `??` if you are using IPython!)
The `name` attribute is used to retrieve the posterior distribution later in the analysis, so it is best to use a descriptive name. Typically, I use the Python variable's name as the name.
For multivariable problems, rather than creating a Python array of stochastic variables, passing the `size` keyword in the call to a `Stochastic` variable creates a multivariate array of (independent) stochastic variables. The array behaves like a Numpy array when used like one, and references to its `value` attribute return Numpy arrays.
The `size` argument also solves the annoying case where you may have many variables $\beta_i, \; i = 1,...,N$ you wish to model. Instead of creating arbitrary names and variables for each one, like:
beta_1 = mc.Uniform( "beta_1", 0, 1)
beta_2 = mc.Uniform( "beta_2", 0, 1)
...
we can instead wrap them into a single variable:
betas = mc.Uniform( "betas", 0, 1, size = N )
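A quick check of this behaviour (a minimal sketch; `N` is illustrative):
```
N = 4
betas = mc.Uniform("betas", 0, 1, size=N)
print betas.value  # a Numpy array of N independent values
```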
#### Calling `random()`
We can also call on a stochastic variable's `random()` method, which (given the parent values) will generate a new, random value. Below we demonstrate this using the texting example from the previous chapter.
```
lambda_1 = mc.Exponential("lambda_1", 1) # prior on first behaviour
lambda_2 = mc.Exponential("lambda_2", 1) # prior on second behaviour
tau = mc.DiscreteUniform("tau", lower=0, upper=10) # prior on behaviour change
print "lambda_1.value = %.3f" % lambda_1.value
print "lambda_2.value = %.3f" % lambda_2.value
print "tau.value = %.3f" % tau.value
print
lambda_1.random(), lambda_2.random(), tau.random()
print "After calling random() on the variables..."
print "lambda_1.value = %.3f" % lambda_1.value
print "lambda_2.value = %.3f" % lambda_2.value
print "tau.value = %.3f" % tau.value
```
lambda_1.value = 1.670
lambda_2.value = 1.235
tau.value = 3.000
After calling random() on the variables...
lambda_1.value = 0.754
lambda_2.value = 3.478
tau.value = 10.000
The call to `random` stores a new value into the variable's `value` attribute. In fact, this new value is stored in the computer's cache for faster recall and efficiency.
### **Warning**: *Don't update stochastic variables' values in-place.*
Straight from the PyMC docs, we quote [4]:
> `Stochastic` objects' values should not be updated in-place. This confuses PyMC's caching scheme... The only way a stochastic variable's value should be updated is using statements of the following form:
A.value = new_value
> The following are in-place updates and should **never** be used:
A.value += 3
A.value[2,1] = 5
A.value.attribute = new_attribute_value
#### Deterministic variables
Since most variables you will be modeling are stochastic, we distinguish deterministic variables with a `pymc.deterministic` wrapper. (If you are unfamiliar with Python wrappers (also called decorators), that's no problem. Just prepend the `pymc.deterministic` decorator before the variable declaration and you're good to go. No need to know more. ) The declaration of a deterministic variable uses a Python function:
@mc.deterministic
def some_deterministic_var(v1=v1,):
#jelly goes here.
For all purposes, we can treat the object `some_deterministic_var` as a variable and not a Python function.
Prepending with the wrapper is the easiest, but not the only, way to create deterministic variables: elementary operations, like addition, exponentials etc., implicitly create deterministic variables. For example, the following returns a deterministic variable:
```
type(lambda_1 + lambda_2)
```
pymc.PyMCObjects.Deterministic
The use of the `deterministic` wrapper was seen in the previous chapter's text-message example. Recall the model for $\lambda$ looked like:
$$
\lambda =
\cases{
\lambda_1 & \text{if } t \lt \tau \cr
\lambda_2 & \text{if } t \ge \tau
}
$$
And in PyMC code:
```
n_data_points = 5 # in CH1 we had ~70 data points
@mc.deterministic
def lambda_(tau=tau, lambda_1=lambda_1, lambda_2=lambda_2):
    out = np.zeros(n_data_points)
    out[:tau] = lambda_1 # lambda before tau is lambda_1
    out[tau:] = lambda_2 # lambda after tau is lambda_2
    return out
```
Clearly, if $\tau, \lambda_1$ and $\lambda_2$ are known, then $\lambda$ is known completely, hence it is a deterministic variable.
Inside the deterministic decorator, the `Stochastic` variables passed in behave like scalars or Numpy arrays (if multivariable), and *not* like `Stochastic` variables. For example, running the following:
@mc.deterministic
def some_deterministic( stoch = some_stochastic_var ):
return stoch.value**2
will return an `AttributeError` detailing that `stoch` does not have a `value` attribute. It simply needs to be `stoch**2`. During the learning phase, it's the variable's `value` that is repeatedly passed in, not the actual variable.
Notice in the creation of the deterministic function we added defaults to each variable used in the function. This is a necessary step, and all variables *must* have default values.
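For completeness, a corrected sketch of the example above, assuming `some_stochastic_var` has been defined (here as a `Uniform`):
```
some_stochastic_var = mc.Uniform("some_stochastic_var", 0, 1)

@mc.deterministic
def some_deterministic(stoch=some_stochastic_var):
    return stoch**2  # stoch arrives as a plain value, not a Stochastic

print some_deterministic.value
```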
### Including observations in the Model
At this point, it may not look like it, but we have fully specified our priors. For example, we can ask and answer questions like "What does my prior distribution of $\lambda_1$ look like?"
```
%pylab inline
figsize(12.5, 4)
samples = [lambda_1.random() for i in range(20000)]
hist(samples, bins=70, normed=True, histtype="stepfilled")
plt.title("Prior distribution for $\lambda_1$")
plt.xlim(0, 8)
```
To frame this in the notation of the first chapter, though this is a slight abuse of notation, we have specified $P(A)$. Our next goal is to include data/evidence/observations $X$ into our model.
PyMC stochastic variables have a keyword argument `observed` which accepts a boolean (`False` by default). The keyword `observed` has a very simple role: fix the variable's current value, i.e. make `value` immutable. We have to specify an initial `value` in the variable's creation, equal to the observations we wish to include, typically an array (and it should be a Numpy array for speed). For example:
```
data = np.array([10, 5])
fixed_variable = mc.Poisson("fxd", 1, value=data, observed=True)
print "value: ", fixed_variable.value
print "calling .random()"
fixed_variable.random()
print "value: ", fixed_variable.value
```
value: [10 5]
calling .random()
value: [10 5]
This is how we include data into our models: initializing a stochastic variable to have a *fixed value*.
To complete our text message example, we fix the PyMC variable `observations` to the observed dataset.
```
#we're using some fake data here
data = np.array([10, 25, 15, 20, 35])
obs = mc.Poisson("obs", lambda_, value=data, observed=True)
print obs.value
```
[10 25 15 20 35]
### Finally...
We wrap all the created variables into a `mc.Model` class. With this `Model` class, we can analyze the variables as a single unit. This is an optional step, as the fitting algorithms can be sent an array of the variables rather than a `Model` class. I may or may not use this class in future examples ;)
```
model = mc.Model([obs, lambda_, lambda_1, lambda_2, tau])
```
## Modeling approaches
A good starting thought to Bayesian modeling is to think about *how your data might have been generated*. Position yourself in an omniscient position, and try to imagine how *you* would recreate the dataset.
In the last chapter we investigated text message data. We begin by asking how our observations may have been generated:
1. We started by thinking "what is the best random variable to describe this count data?" A Poisson random variable is a good candidate because it can represent count data. So we model the number of sms's received as sampled from a Poisson distribution.
2. Next, we think, "Ok, assuming sms's are Poisson-distributed, what do I need for the Poisson distribution?" Well, the Poisson distribution has a parameter $\lambda$.
3. Do we know $\lambda$? No. In fact, we have a suspicion that there are *two* $\lambda$ values, one for the earlier behaviour and one for the latter behaviour. We don't know when the behaviour switches though, but call the switchpoint $\tau$.
4. What is a good distribution for the two $\lambda$s? The exponential is good, as it assigns probabilities to positive real numbers. Well the exponential distribution has a parameter too, call it $\alpha$.
5. Do we know what the parameter $\alpha$ might be? No. At this point, we could continue and assign a distribution to $\alpha$, but it's better to stop once we reach a set level of ignorance: whereas we have a prior belief about $\lambda$, ("it probably changes over time", "it's likely between 10 and 30", etc.), we don't really have any strong beliefs about $\alpha$. So it's best to stop here.
What is a good value for $\alpha$ then? We think that the $\lambda$s are between 10 and 30, so if we set $\alpha$ really low (which corresponds to larger probability on high values) we are not reflecting our prior well. Similarly, too high an $\alpha$ misses our prior belief as well. A good way for $\alpha$ to reflect our belief is to set its value so that the mean of $\lambda$, given $\alpha$, equals our observed mean. This was shown in the last chapter.
6. We have no expert opinion of when $\tau$ might have occurred. So we will suppose $\tau$ is from a discrete uniform distribution over the entire timespan.
Below we give a graphical visualization of this, where arrows denote `parent-child` relationships. (provided by the [Daft Python library](http://daft-pgm.org/) )
PyMC, and other probabilistic programming languages, have been designed to tell these data-generation *stories*. More generally, B. Cronin writes [5]:
> Probabilistic programming will unlock narrative explanations of data, one of the holy grails of business analytics and the unsung hero of scientific persuasion. People think in terms of stories - thus the unreasonable power of the anecdote to drive decision-making, well-founded or not. But existing analytics largely fails to provide this kind of story; instead, numbers seemingly appear out of thin air, with little of the causal context that humans prefer when weighing their options.
### Same story; different ending.
Interestingly, we can create *new datasets* by retelling the story.
For example, if we reverse the above steps, we can simulate a possible realization of the dataset.
1\. Specify when the user's behaviour switches by sampling from $\text{DiscreteUniform}(0, 80)$:
```
tau = mc.rdiscrete_uniform(0, 80)
print tau
```
25
2\. Draw $\lambda_1$ and $\lambda_2$ from an $\text{Exp}(\alpha)$ distribution:
```
alpha = 1./20.
lambda_1, lambda_2 = mc.rexponential(alpha, 2)
print lambda_1, lambda_2
```
57.8911048197 30.7672520011
3\. For days before $\tau$, represent the user's received SMS count by sampling from $\text{Poi}(\lambda_1)$, and sample from $\text{Poi}(\lambda_2)$ for days after $\tau$. For example:
```
data = np.r_[mc.rpoisson(lambda_1, tau), mc.rpoisson(lambda_2, 80 - tau)]
```
4\. Plot the artificial dataset:
```
plt.bar(np.arange(80), data, color="#348ABD")
plt.bar(tau-1, data[tau - 1], color="r", label="user behaviour changed")
plt.xlabel("Time (days)")
plt.ylabel("count of text-msgs received")
plt.title("Artificial dataset")
plt.xlim(0, 80)
plt.legend()
```
It is okay that our fictional dataset does not look like our observed dataset: the probability that it would is incredibly small. PyMC's engine is designed to find good parameters, $\lambda_i, \tau$, that maximize this probability.
The ability to generate artificial datasets is an interesting side effect of our modeling, and we will see that this ability is a very important method of Bayesian inference. We produce a few more datasets below:
```
def plot_artificial_sms_dataset():
    tau = mc.rdiscrete_uniform(0, 80)
    alpha = 1./20.
    lambda_1, lambda_2 = mc.rexponential(alpha, 2)
    data = np.r_[mc.rpoisson(lambda_1, tau), mc.rpoisson(lambda_2, 80 - tau)]
    plt.bar(np.arange(80), data, color="#348ABD")
    plt.bar(tau - 1, data[tau-1], color="r", label="user behaviour changed")
    plt.xlim(0, 80)

figsize(12.5, 5)
plt.title("More examples of artificial datasets")
for i in range(4):
    plt.subplot(4, 1, i + 1)  # subplot indices start at 1
    plot_artificial_sms_dataset()
```
Later we will see how we use this to make predictions and test the appropriateness of our models.
##### Example: Bayesian A/B testing
A/B testing is a statistical design pattern for determining the difference of effectiveness between two different treatments. For example, a pharmaceutical company is interested in the effectiveness of drug A vs drug B. The company will test drug A on some fraction of their trials, and drug B on the other fraction (this fraction is often 1/2, but we will relax this assumption). After performing enough trials, the in-house statisticians sift through the data to determine which drug yielded better results.
Similarly, front-end web developers are interested in which design of their website yields more sales or some other metric of interest. They will route some fraction of visitors to site A, and the other fraction to site B, and record if the visit yielded a sale or not. The data is recorded (in real-time), and analyzed afterwards.
Often, the post-experiment analysis is done using something called a hypothesis test like *difference of means test* or *difference of proportions test*. This involves often misunderstood quantities like a "Z-score" and even more confusing "p-values" (please don't ask). If you have taken a statistics course, you have probably been taught this technique (though not necessarily *learned* this technique). And if you were like me, you may have felt uncomfortable with their derivation -- good: the Bayesian approach to this problem is much more natural.
### A Simple Case
As this is a hacker book, we'll continue with the web-dev example. For the moment, we will focus on the analysis of site A only. Assume that there is some true probability, $0 \lt p_A \lt 1$, that users shown site A eventually purchase from the site. This is the true effectiveness of site A. Currently, this quantity is unknown to us.
Suppose site A was shown to $N$ people, and $n$ people purchased from the site. One might hastily conclude that $p_A = \frac{n}{N}$. Unfortunately, the *observed frequency* $\frac{n}{N}$ does not necessarily equal $p_A$ -- there is a difference between the *observed frequency* and the *true frequency* of an event. The true frequency can be interpreted as the probability of an event occurring. For example, the true frequency of rolling a 1 on a 6-sided die is $\frac{1}{6}$. Knowing the true frequency of events like:
- fraction of users who make purchases,
- frequency of social attributes,
- percent of internet users with cats etc.
are common requests we ask of Nature. Unfortunately, often Nature hides the true frequency from us and we must *infer* it from observed data.
The *observed frequency* is then the frequency we observe: say, rolling the die 100 times, you may observe 20 rolls of 1. The observed frequency, 0.2, differs from the true frequency, $\frac{1}{6}$. We can use Bayesian statistics to infer probable values of the true frequency using an appropriate prior and observed data.
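A quick simulation of the die example (a minimal sketch):
```
rolls = np.random.randint(1, 7, size=100)  # 100 rolls of a fair six-sided die
print "observed frequency of 1: %.3f" % (rolls == 1).mean()
print "true frequency of 1: %.3f" % (1 / 6.)
```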
With respect to our A/B example, we are interested in using what we know, $N$ (the total trials administered) and $n$ (the number of conversions), to estimate what $p_A$, the true frequency of buyers, might be.
To set up a Bayesian model, we need to assign prior distributions to our unknown quantities. *A priori*, what do we think $p_A$ might be? For this example, we have no strong conviction about $p_A$, so for now, let's assume $p_A$ is uniform over [0,1]:
```
import pymc as mc
# the parameters are the bounds of the Uniform.
p = mc.Uniform('p', lower=0, upper=1)
```
Had we had stronger beliefs, we could have expressed them in the prior above.
For this example, consider $p_A = 0.05$, and $N = 1500$ users shown site A, and we will simulate whether the user made a purchase or not. To simulate this from $N$ trials, we will use a *Bernoulli* distribution: if $ X\ \sim \text{Ber}(p)$, then $X$ is 1 with probability $p$ and 0 with probability $1-p$. Of course, in practice we do not know $p_A$, but we will use it here to simulate the data.
```
#set constants
p_true = 0.05 # remember, this is unknown.
N = 1500
# sample N Bernoulli random variables from Ber(0.05).
# each random variable has a 0.05 chance of being a 1.
# this is the data-generation step
occurances = mc.rbernoulli(p_true, N)
print occurances # Remember: Python treats True == 1, and False == 0
print occurances.sum()
```
[False False False ..., False False False]
79
The observed frequency is:
```
# Occurances.mean is equal to n/N.
print "What is the observed frequency in Group A? %.4f" % occurances.mean()
print "Does this equal the true frequency? %s" % (occurances.mean() == p_true)
```
What is the observed frequency in Group A? 0.0527
Does this equal the true frequency? False
We combine the observations into the PyMC `observed` variable, and run our inference algorithm:
```
#include the observations, which are Bernoulli
obs = mc.Bernoulli("obs", p, value=occurances, observed=True)
#To be explained in chapter 3
mcmc = mc.MCMC([p, obs])
mcmc.sample(18000, 1000)
```
[****************100%******************] 18000 of 18000 complete
We plot the posterior distribution of the unknown $p_A$ below:
```
figsize(12.5, 4)
plt.title("Posterior distribution of $p_A$, the true effectiveness of site A")
plt.vlines(p_true, 0, 90, linestyle="--", label="true $p_A$ (unknown)")
plt.hist(mcmc.trace("p")[:], bins=25, histtype="stepfilled", normed=True)
plt.legend()
```
Our posterior distribution puts most weight near the true value of $p_A$, but also some weights in the tails. This is a measure of how uncertain we should be, given our observations. Try changing the number of observations, `N`, and observe how the posterior distribution changes.
### *A* and *B* Together
A similar analysis can be done for site B's response data to determine the analogous $p_B$. But what we are really interested in is the *difference* between $p_A$ and $p_B$. Let's infer $p_A$, $p_B$, *and* $\text{delta} = p_A - p_B$, all at once. We can do this using PyMC's deterministic variables. (We'll assume for this exercise that $p_B = 0.04$, so $\text{delta} = 0.01$, $N_B = 750$ (significantly less than $N_A$), and we will simulate site B's data like we did for site A's data.)
```
import pymc as mc
figsize(12, 4)
#these two quantities are unknown to us.
true_p_A = 0.05
true_p_B = 0.04
#notice the unequal sample sizes -- no problem in Bayesian analysis.
N_A = 1500
N_B = 750
#generate some observations
observations_A = mc.rbernoulli(true_p_A, N_A)
observations_B = mc.rbernoulli(true_p_B, N_B)
print "Obs from Site A: ", observations_A[:30].astype(int), "..."
print "Obs from Site B: ", observations_B[:30].astype(int), "..."
```
Obs from Site A: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0] ...
Obs from Site B: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] ...
```
print observations_A.mean()
print observations_B.mean()
```
0.04
0.036
```
# Set up the pymc model. Again assume Uniform priors for p_A and p_B.
p_A = mc.Uniform("p_A", 0, 1)
p_B = mc.Uniform("p_B", 0, 1)
# Define the deterministic delta function. This is our unknown of interest.
@mc.deterministic
def delta(p_A=p_A, p_B=p_B):
    return p_A - p_B
# Set of observations, in this case we have two observation datasets.
obs_A = mc.Bernoulli("obs_A", p_A, value=observations_A, observed=True)
obs_B = mc.Bernoulli("obs_B", p_B, value=observations_B, observed=True)
# To be explained in chapter 3.
mcmc = mc.MCMC([p_A, p_B, delta, obs_A, obs_B])
mcmc.sample(20000, 1000)
```
[****************100%******************] 20000 of 20000 complete
Below we plot the posterior distributions for the three unknowns:
```
p_A_samples = mcmc.trace("p_A")[:]
p_B_samples = mcmc.trace("p_B")[:]
delta_samples = mcmc.trace("delta")[:]
```
```
figsize(12.5, 10)
#histogram of posteriors
ax = plt.subplot(311)
plt.xlim(0, .1)
plt.hist(p_A_samples, histtype='stepfilled', bins=25, alpha=0.85,
label="posterior of $p_A$", color="#A60628", normed=True)
plt.vlines(true_p_A, 0, 80, linestyle="--", label="true $p_A$ (unknown)")
plt.legend(loc="upper right")
plt.title("Posterior distributions of $p_A$, $p_B$, and delta unknowns")
ax = plt.subplot(312)
plt.xlim(0, .1)
plt.hist(p_B_samples, histtype='stepfilled', bins=25, alpha=0.85,
label="posterior of $p_B$", color="#467821", normed=True)
plt.vlines(true_p_B, 0, 80, linestyle="--", label="true $p_B$ (unknown)")
plt.legend(loc="upper right")
ax = plt.subplot(313)
plt.hist(delta_samples, histtype='stepfilled', bins=30, alpha=0.85,
label="posterior of delta", color="#7A68A6", normed=True)
plt.vlines(true_p_A - true_p_B, 0, 60, linestyle="--",
label="true delta (unknown)")
plt.vlines(0, 0, 60, color="black", alpha=0.2)
plt.legend(loc="upper right")
```
Notice that as a result of `N_B < N_A`, i.e. we have less data from site B, our posterior distribution of $p_B$ is fatter, implying we are less certain about the true value of $p_B$ than we are of $p_A$.
With respect to the posterior distribution of $\text{delta}$, we can see that the majority of the distribution is above $\text{delta} = 0$, implying site A's response is likely better than site B's response. The probability that this inference is incorrect is easily computable:
```
#count the number of samples less than 0, i.e. the area under the curve
# before 0, represent the probability that site A is worse than site B.
print "Probability site A is WORSE than site B: %.3f" % \
(delta_samples < 0).mean()
print "Probability site A is BETTER than site B: %.3f" % \
(delta_samples > 0).mean()
```
Probability site A is WORSE than site B: 0.354
Probability site A is BETTER than site B: 0.646
If this probability is too high for comfortable decision-making, we can perform more trials on site B (as site B has fewer samples to begin with, each additional data point for site B contributes more inferential "power" than each additional data point for site A).
Try playing with the parameters `true_p_A`, `true_p_B`, `N_A`, and `N_B`, to see what the posterior of $\text{delta}$ looks like. Notice in all this, the difference in sample sizes between site A and site B was never mentioned: it naturally fits into Bayesian analysis.
I hope readers feel this style of A/B testing is more natural than hypothesis testing, which has probably confused more practitioners than it has helped. Later in this book, we will see two extensions of this model: the first to help dynamically adjust for bad sites, and the second will improve the speed of this computation by reducing the analysis to a single equation.
## An algorithm for human deceit
Social data has an additional layer of interest, as people are not always honest with responses, which adds a further complication to inference. For example, simply asking individuals "Have you ever cheated on a test?" will surely contain some rate of dishonesty. What you can say for certain is that the true rate is less than your observed rate (assuming individuals lie *only* about *not cheating*; I cannot imagine one who would admit "Yes" to cheating when in fact they hadn't cheated).
To present an elegant solution to circumventing this dishonesty problem, and to demonstrate Bayesian modeling, we first need to introduce the binomial distribution.
### The Binomial Distribution
The binomial distribution is one of the most popular distributions, mostly because of its simplicity and usefulness. Unlike the other distributions we have encountered thus far in the book, the binomial distribution has 2 parameters: $N$, a positive integer representing $N$ trials or number of instances of potential events, and $p$, the probability of an event occurring in a single trial. Like the Poisson distribution, it is a discrete distribution, but unlike the Poisson distribution, it only weighs integers from $0$ to $N$. The mass distribution looks like:
$$P( X = k ) = {{N}\choose{k}} p^k(1-p)^{N-k}$$
If $X$ is a binomial random variable with parameters $p$ and $N$, denoted $X \sim \text{Bin}(N,p)$, then $X$ is the number of events that occurred in the $N$ trials (obviously $0 \le X \le N$). The larger $p$ is (while still remaining between 0 and 1), the more events are likely to occur. The expected value of a binomial is equal to $Np$. Below we plot the mass probability distribution for varying parameters.
```
figsize(12.5, 4)
import scipy.stats as stats
binomial = stats.binom
parameters = [(10, .4), (10, .9)]
colors = ["#348ABD", "#A60628"]
for i in range(2):
    N, p = parameters[i]
    _x = np.arange(N + 1)
    plt.bar(_x - 0.5, binomial.pmf(_x, N, p), color=colors[i],
            edgecolor=colors[i],
            alpha=0.6,
            label="$N$: %d, $p$: %.1f" % (N, p),
            linewidth=3)
plt.legend(loc="upper left")
plt.xlim(0, 10.5)
plt.xlabel("$k$")
plt.ylabel("$P(X = k)$")
plt.title("Probability mass distributions of binomial random variables")
```
The special case when $N = 1$ corresponds to the Bernoulli distribution. There is another connection between Bernoulli and Binomial random variables. If we have $X_1, X_2, ... , X_N$ Bernoulli random variables with the same $p$, then $Z = X_1 + X_2 + ... + X_N \sim \text{Binomial}(N, p )$.
The expected value of a Bernoulli random variable is $p$. This can be seen by noting the more general Binomial random variable has expected value $Np$ and setting $N=1$.
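We can check this relationship with the random-variable generators we have already seen (a minimal sketch):
```
N, p = 10, 0.4
# each entry is the sum of N Bernoulli draws -- a Binomial(N, p) sample
Z = np.array([mc.rbernoulli(p, N).sum() for i in range(10000)])
print "empirical mean: %.3f" % Z.mean()  # should be close to N*p = 4
```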
##### Example: Cheating among students
We will use the binomial distribution to determine the frequency of students cheating during an exam. If we let $N$ be the total number of students who took the exam, and assume each student is interviewed post-exam (answering without consequence), we will receive an integer $X$ of "Yes, I did cheat" answers. We then find the posterior distribution of $p$, given $N$, some specified prior on $p$, and observed data $X$.
This is a completely absurd model. No student, even with a free-pass against punishment, would admit to cheating. What we need is a better *algorithm* to ask students if they had cheated. Ideally the algorithm should encourage individuals to be honest while preserving privacy. The following proposed algorithm is a solution I greatly admire for its ingenuity and effectiveness:
> In the interview process for each student, the student flips a coin, hidden from the interviewer. The student agrees to answer honestly if the coin comes up heads. Otherwise, if the coin comes up tails, the student (secretly) flips the coin again, and answers "Yes, I did cheat" if the coin flip lands heads, and "No, I did not cheat", if the coin flip lands tails. This way, the interviewer does not know if a "Yes" was the result of a guilty plea, or a Heads on a second coin toss. Thus privacy is preserved and the researchers receive honest answers.
I call this the Privacy Algorithm. One could of course argue that the interviewers are still receiving false data since some *Yes*'s are not confessions but instead randomness, but an alternative perspective is that the researchers are discarding approximately half of their original dataset since half of the responses will be noise. But they have gained a systematic data generation process that can be modeled. Furthermore, they do not have to incorporate (perhaps somewhat naively) the possibility of deceitful answers. We can use PyMC to dig through this noisy model, and find a posterior distribution for the true frequency of liars.
Suppose 100 students are being surveyed for cheating, and we wish to find $p$, the proportion of cheaters. There a few ways we can model this in PyMC. I'll demonstrate the most explicit way, and later show a simplified version. Both versions arrive at the same inference. In our data-generation model, we sample $p$, the true proportion of cheaters, from a prior. Since we are quite ignorant about $p$, we will assign it a $\text{Uniform}(0,1)$ prior.
```
import pymc as mc
N = 100
p = mc.Uniform("freq_cheating", 0, 1)
```
Again, thinking of our data-generation model, we assign Bernoulli random variables to the 100 students: 1 implies they cheated and 0 implies they did not.
```
true_answers = mc.Bernoulli("truths", p, size=N)
```
If we carry out the algorithm, the next step that occurs is the first coin-flip each student makes. This can be modeled again by sampling 100 Bernoulli random variables with $p=1/2$: denote a 1 as a *Heads* and 0 a *Tails*.
```
first_coin_flips = mc.Bernoulli("first_flips", 0.5, size=N)
print first_coin_flips.value
```
[False True False True True False True False False False False True
False True False True False True False True True True True True
False False True False True True True False True False False True
True True False False True True False False True True True True
True False False True False True False False False True False True
True False False False True False True False False False True True
True True True True False False True True True True True True
False True False False False True False True True True False True
True False False True]
Although *not everyone* flips a second time, we can still model the possible realization of second coin-flips:
```
second_coin_flips = mc.Bernoulli("second_flips", 0.5, size=N)
```
Using these variables, we can return a possible realization of the *observed proportion* of "Yes" responses. We do this using a PyMC `deterministic` variable:
```
@mc.deterministic
def observed_proportion(t_a=true_answers,
fc=first_coin_flips,
sc=second_coin_flips):
observed = fc*t_a + (1-fc)*sc
return observed.sum() / float(N)
```
The line `fc*t_a + (1-fc)*sc` contains the heart of the Privacy algorithm. Elements in this array are 1 *if and only if* i) the first toss is heads and the student cheated or ii) the first toss is tails and the second is heads, and are 0 otherwise. Finally, the last line sums this vector and divides by `float(N)`, producing a proportion.
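To see the masking trick in isolation, here is a tiny sketch with hand-picked arrays (unrelated to the sampled variables above):
```
import numpy as np

fc = np.array([1, 1, 0, 0])   # first flips: heads, heads, tails, tails
t_a = np.array([1, 0, 1, 0])  # truths: cheated, honest, cheated, honest
sc = np.array([0, 0, 1, 0])   # second flips
# heads on the first flip -> report the truth; tails -> report the second flip
print fc * t_a + (1 - fc) * sc   # -> [1 0 1 0]
```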
```
observed_proportion.value
```
0.72999999999999998
Next we need a dataset. After performing our coin-flipped interviews the researchers received 35 "Yes" responses. To put this into a relative perspective, if there truly were no cheaters, we should expect to see on average 1/4 of all responses being a "Yes" (half chance of having the first coin land Tails, and another half chance of having the second coin land Heads), so about 25 responses in a cheat-free world. On the other hand, if *all students cheated*, we should expect approximately 3/4 of all responses to be "Yes".
The researchers observe a Binomial random variable, with `N = 100` and `p = observed_proportion` with `value = 35`:
```
X = 35
observations = mc.Binomial("obs", N, observed_proportion, observed=True,
value=X)
```
Below we add all the variables of interest to a `Model` container and run our black-box algorithm over the model.
```
model = mc.Model([p, true_answers, first_coin_flips,
second_coin_flips, observed_proportion, observations])
### To be explained in Chapter 3!
mcmc = mc.MCMC(model)
mcmc.sample(120000, 80000, 4)
```
[****************100%******************] 120000 of 120000 complete
```
figsize(12.5, 3)
p_trace = mcmc.trace("freq_cheating")[:]
plt.hist(p_trace, histtype="stepfilled", normed=True,
alpha=0.85, bins=30, label="posterior distribution",
color="#348ABD")
plt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.3)
plt.xlim(0, 1)
plt.legend()
```
With regards to the above plot, we are still pretty uncertain about what the true frequency of cheaters might be, but we have narrowed it down to a range between 0.05 to 0.35 (marked by the solid lines). This is pretty good, as *a priori* we had no idea how many students might have cheated (hence the uniform distribution for our prior). On the other hand, it is also pretty bad since there is a .3 length window the true value most likely lives in. Have we even gained anything, or are we still too uncertain about the true frequency?
I would argue, yes, we have discovered something. It is implausible, according to our posterior, that there are *no cheaters*, i.e. the posterior assigns low probability to $p=0$. Since we started with a uniform prior, treating all values of $p$ as equally plausible, but the data ruled out $p=0$ as a possibility, we can be confident that there were cheaters.
This kind of algorithm can be used to gather private information from users and be *reasonably* confident that the data, though noisy, is truthful.
### Alternative PyMC Model
Given a value for $p$ (which from our god-like position we know), we can find the probability the student will answer yes:
\begin{align}
P(\text{"Yes"}) = & P( \text{Heads on first coin} )P( \text{cheater} ) + P( \text{Tails on first coin} )P( \text{Heads on second coin} ) \\\\
& = \frac{1}{2}p + \frac{1}{2}\frac{1}{2}\\\\
& = \frac{p}{2} + \frac{1}{4}
\end{align}
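A quick sanity check of this algebra with sympy (a minimal sketch, not part of the model):
```
import sympy

p_sym = sympy.Symbol("p")
p_yes = sympy.Rational(1, 2) * p_sym + sympy.Rational(1, 2) * sympy.Rational(1, 2)
print sympy.simplify(p_yes - (p_sym / 2 + sympy.Rational(1, 4)))  # -> 0
```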
Thus, knowing $p$ we know the probability a student will respond "Yes". In PyMC, we can create a deterministic function to evaluate the probability of responding "Yes", given $p$:
```
p = mc.Uniform("freq_cheating", 0, 1)
@mc.deterministic
def p_skewed(p=p):
return 0.5*p + 0.25
```
I could have typed `p_skewed = 0.5*p + 0.25` instead for a one-liner, as the elementary operations of addition and scalar multiplication will implicitly create a `deterministic` variable, but I wanted to make the deterministic boilerplate explicit for clarity's sake.
If we know the probability of respondents saying "Yes", which is `p_skewed`, and we have $N=100$ students, the number of "Yes" responses is a binomial random variable with parameters `N` and `p_skewed`.
This is where we include our observed 35 "Yes" responses. In the declaration of the `mc.Binomial`, we include `value = 35` and `observed = True`.
```
yes_responses = mc.Binomial("number_cheaters", 100, p_skewed,
value=35, observed=True)
```
Below we add all the variables of interest to a `Model` container and run our black-box algorithm over the model.
```
model = mc.Model([yes_responses, p_skewed, p])
### To Be Explained in Chapter 3!
mcmc = mc.MCMC(model)
mcmc.sample(12500, 2500)
```
[****************100%******************] 12500 of 12500 complete
```
figsize(12.5, 3)
p_trace = mcmc.trace("freq_cheating")[:]
plt.hist(p_trace, histtype="stepfilled", normed=True,
alpha=0.85, bins=30, label="posterior distribution",
color="#348ABD")
plt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.2)
plt.xlim(0, 1)
plt.legend()
```
### More PyMC Tricks
#### Protip: *Lighter* deterministic variables with `Lambda` class
Sometimes writing a deterministic function using the `@mc.deterministic` decorator can seem like a chore, especially for a small function. I have already mentioned that elementary math operations *can* produce deterministic variables implicitly, but what about operations like indexing or slicing? Built-in `Lambda` functions can handle this with the elegance and simplicity required. For example,
```
beta = mc.Normal("coefficients", 0, size=(N, 1))
x = np.random.randn(N, 1)  # randn takes dimension arguments directly, not a tuple
linear_combination = mc.Lambda(lambda x=x, beta=beta: np.dot(x.T, beta))
```
#### Protip: Arrays of PyMC variables
There is no reason why we cannot store multiple heterogeneous PyMC variables in a Numpy array. Just remember to set the `dtype` of the array to `object` upon initialization. For example:
```
N = 10
x = np.empty(N, dtype=object)
for i in range(0, N):
x[i] = mc.Exponential('x_%i' % i, (i+1)**2)
```
The remainder of this chapter examines some practical examples of PyMC and PyMC modeling:
#####Example: Challenger Space Shuttle Disaster <span id="challenger"/>
On January 28, 1986, the twenty-fifth flight of the U.S. space shuttle program ended in disaster when one of the rocket boosters of the Shuttle Challenger exploded shortly after lift-off, killing all seven crew members. The presidential commission on the accident concluded that it was caused by the failure of an O-ring in a field joint on the rocket booster, and that this failure was due to a faulty design that made the O-ring unacceptably sensitive to a number of factors including outside temperature. Of the previous 24 flights, data were available on failures of O-rings on 23 (one was lost at sea), and these data were discussed on the evening preceding the Challenger launch, but unfortunately only the data corresponding to the 7 flights on which there was a damage incident were considered important, and these were thought to show no obvious trend. The data are shown below (see [1]):
```
figsize(12.5, 3.5)
np.set_printoptions(precision=3, suppress=True)
challenger_data = np.genfromtxt("data/challenger_data.csv", skip_header=1,
usecols=[1, 2], missing_values="NA",
delimiter=",")
#drop the NA values
challenger_data = challenger_data[~np.isnan(challenger_data[:, 1])]
#plot it, as a function of temperature (the first column)
print "Temp (F), O-Ring failure?"
print challenger_data
plt.scatter(challenger_data[:, 0], challenger_data[:, 1], s=75, color="k",
alpha=0.5)
plt.yticks([0, 1])
plt.ylabel("Damage Incident?")
plt.xlabel("Outside temperature (Fahrenheit)")
plt.title("Defects of the Space Shuttle O-Rings vs temperature")
```
It looks clear that *the probability* of damage incidents occurring increases as the outside temperature decreases. We are interested in modeling the probability here because it does not look like there is a strict cutoff point between temperature and a damage incident occurring. The best we can do is ask "At temperature $t$, what is the probability of a damage incident?". The goal of this example is to answer that question.
We need a function of temperature, call it $p(t)$, that is bounded between 0 and 1 (so as to model a probability) and changes from 1 to 0 as we increase temperature. There are actually many such functions, but the most popular choice is the *logistic function.*
$$p(t) = \frac{1}{ 1 + e^{ \;\beta t } } $$
In this model, $\beta$ is the variable we are uncertain about. Below is the function plotted for $\beta = 1, 3, -5$.
```
figsize(12, 3)
def logistic(x, beta):
return 1.0 / (1.0 + np.exp(beta * x))
x = np.linspace(-4, 4, 100)
plt.plot(x, logistic(x, 1), label=r"$\beta = 1$")
plt.plot(x, logistic(x, 3), label=r"$\beta = 3$")
plt.plot(x, logistic(x, -5), label=r"$\beta = -5$")
plt.legend()
```
But something is missing. In the plot of the logistic function, the probability changes only near zero, but in our data above the probability changes around 65 to 70. We need to add a *bias* term to our logistic function:
$$p(t) = \frac{1}{ 1 + e^{ \;\beta t + \alpha } } $$
Some plots are below, with differing $\alpha$.
```
def logistic(x, beta, alpha=0):
return 1.0 / (1.0 + np.exp(np.dot(beta, x) + alpha))
x = np.linspace(-4, 4, 100)
plt.plot(x, logistic(x, 1), label=r"$\beta = 1$", ls="--", lw=1)
plt.plot(x, logistic(x, 3), label=r"$\beta = 3$", ls="--", lw=1)
plt.plot(x, logistic(x, -5), label=r"$\beta = -5$", ls="--", lw=1)
plt.plot(x, logistic(x, 1, 1), label=r"$\beta = 1, \alpha = 1$",
color="#348ABD")
plt.plot(x, logistic(x, 3, -2), label=r"$\beta = 3, \alpha = -2$",
color="#A60628")
plt.plot(x, logistic(x, -5, 7), label=r"$\beta = -5, \alpha = 7$",
color="#7A68A6")
plt.legend(loc="lower left")
```
Adding a constant term $\alpha$ amounts to shifting the curve left or right (hence why it is called a *bias*).
Let's start modeling this in PyMC. The $\beta, \alpha$ parameters have no reason to be positive, bounded or relatively large, so they are best modeled by a *Normal random variable*, introduced next.
### Normal distributions
A Normal random variable, denoted $X \sim N(\mu, 1/\tau)$, has a distribution with two parameters: the mean, $\mu$, and the *precision*, $\tau$. Those familiar with the Normal distribution already have probably seen $\sigma^2$ instead of $\tau$. They are in fact reciprocals of each other. The change was motivated by simpler mathematical analysis and is an artifact of older Bayesian methods. Just remember: The smaller $\tau$, the larger the spread of the distribution (i.e. we are more uncertain); the larger $\tau$, the tighter the distribution (i.e. we are more certain). Regardless, $\tau$ is always positive.
The probability density function of a $N( \mu, 1/\tau)$ random variable is:
$$ f(x | \mu, \tau) = \sqrt{\frac{\tau}{2\pi}} \exp\left( -\frac{\tau}{2} (x-\mu)^2 \right) $$
We plot some different density functions below.
```
import scipy.stats as stats
nor = stats.norm
x = np.linspace(-8, 7, 150)
mu = (-2, 0, 3)
tau = (.7, 1, 2.8)
colors = ["#348ABD", "#A60628", "#7A68A6"]
parameters = zip(mu, tau, colors)
for _mu, _tau, _color in parameters:
    # scipy's `scale` parameter is the standard deviation, i.e. 1/sqrt(tau)
    plt.plot(x, nor.pdf(x, _mu, scale=1./np.sqrt(_tau)),
             label="$\mu = %d,\;\\tau = %.1f$" % (_mu, _tau), color=_color)
    plt.fill_between(x, nor.pdf(x, _mu, scale=1./np.sqrt(_tau)), color=_color,
                     alpha=.33)
plt.legend(loc="upper right")
plt.xlabel("$x$")
plt.ylabel("density function at $x$")
plt.title("Probability distribution of three different Normal random \
variables")
```
A Normal random variable can take on any real number, but the variable is very likely to be relatively close to $\mu$. In fact, the expected value of a Normal is equal to its $\mu$ parameter:
$$ E[ X | \mu, \tau] = \mu$$
and its variance is equal to the inverse of $\tau$:
$$Var( X | \mu, \tau ) = \frac{1}{\tau}$$
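A quick numerical check of these two identities, and of the $\sigma = 1/\sqrt{\tau}$ convention used with scipy above (a minimal sketch):
```
import numpy as np

np.random.seed(1)
mu, tau = 3.0, 2.8
samples = np.random.normal(mu, 1.0 / np.sqrt(tau), 500000)  # scale argument is sigma
print "sample mean:", samples.mean(), " (expect", mu, ")"
print "sample variance:", samples.var(), " (expect", 1.0 / tau, ")"
```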
Below we continue our modeling of the Challenger space craft:
```
import pymc as mc
temperature = challenger_data[:, 0]
D = challenger_data[:, 1] # defect or not?
#notice the`value` here. We explain why below.
beta = mc.Normal("beta", 0, 0.001, value=0)
alpha = mc.Normal("alpha", 0, 0.001, value=0)
@mc.deterministic
def p(t=temperature, alpha=alpha, beta=beta):
return 1.0 / (1. + np.exp(beta*t + alpha))
```
We have our probabilities, but how do we connect them to our observed data? A *Bernoulli* random variable with parameter $p$, denoted $\text{Ber}(p)$, is a random variable that takes value 1 with probability $p$, and 0 else. Thus, our model can look like:
$$ \text{Defect Incident, $D_i$} \sim \text{Ber}( \;p(t_i)\; ), \;\; i=1..N$$
where $p(t)$ is our logistic function and $t_i$ are the temperatures we have observations about. Notice in the above code we had to set the values of `beta` and `alpha` to 0. The reason for this is that if `beta` and `alpha` are very large, they make `p` equal to 1 or 0. Unfortunately, `mc.Bernoulli` does not like probabilities of exactly 0 or 1, though they are mathematically well-defined probabilities. So by setting the coefficient values to `0`, we set the variable `p` to be a reasonable starting value. This has no effect on our results, nor does it mean we are including any additional information in our prior. It is simply a computational caveat in PyMC.
```
p.value
```
array([ 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5,
0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5,
0.5])
```
# connect the probabilities in `p` with our observations through a
# Bernoulli random variable.
observed = mc.Bernoulli("bernoulli_obs", p, value=D, observed=True)
model = mc.Model([observed, beta, alpha])
# Mysterious code to be explained in Chapter 3
map_ = mc.MAP(model)
map_.fit()
mcmc = mc.MCMC(model)
mcmc.sample(120000, 100000, 2)
```
[****************100%******************] 120000 of 120000 complete
We have trained our model on the observed data, now we can sample values from the posterior. Let's look at the posterior distributions for $\alpha$ and $\beta$:
```
alpha_samples = mcmc.trace('alpha')[:, None] # best to make them 1d
beta_samples = mcmc.trace('beta')[:, None]
figsize(12.5, 6)
#histogram of the samples:
plt.subplot(211)
plt.title(r"Posterior distributions of the variables $\alpha, \beta$")
plt.hist(beta_samples, histtype='stepfilled', bins=35, alpha=0.85,
label=r"posterior of $\beta$", color="#7A68A6", normed=True)
plt.legend()
plt.subplot(212)
plt.hist(alpha_samples, histtype='stepfilled', bins=35, alpha=0.85,
label=r"posterior of $\alpha$", color="#A60628", normed=True)
plt.legend()
```
All samples of $\beta$ are greater than 0. If instead the posterior was centered around 0, we may suspect that $\beta = 0$, implying that temperature has no effect on the probability of defect.
Similarly, all $\alpha$ posterior values are negative and far away from 0, implying that it is correct to believe that $\alpha$ is significantly less than 0.
Regarding the spread of the data, we are very uncertain about what the true parameters might be (though considering the low sample size and the large overlap of defects-to-nondefects this behaviour is perhaps expected).
Next, let's look at the *expected probability* for a specific value of the temperature. That is, we average over all samples from the posterior to get a likely value for $p(t_i)$.
```
t = np.linspace(temperature.min() - 5, temperature.max()+5, 50)[:, None]
p_t = logistic(t.T, beta_samples, alpha_samples)
mean_prob_t = p_t.mean(axis=0)
```
```
figsize(12.5, 4)
plt.plot(t, mean_prob_t, lw=3, label="average posterior \nprobability \
of defect")
plt.plot(t, p_t[0, :], ls="--", label="realization from posterior")
plt.plot(t, p_t[-2, :], ls="--", label="realization from posterior")
plt.scatter(temperature, D, color="k", s=50, alpha=0.5)
plt.title("Posterior expected value of probability of defect; \
plus realizations")
plt.legend(loc="lower left")
plt.ylim(-0.1, 1.1)
plt.xlim(t.min(), t.max())
plt.ylabel("probability")
plt.xlabel("temperature")
```
Above we also plotted two possible realizations of what the actual underlying system might be. Both are as likely as any other draw. The blue line is what occurs when we average all the 20000 possible dotted lines together.
An interesting question to ask is for what temperatures are we most uncertain about the defect-probability? Below we plot the expected value line **and** the associated 95% intervals for each temperature.
```
from scipy.stats.mstats import mquantiles
# vectorized bottom and top 5% quantiles for "confidence interval"
qs = mquantiles(p_t, [0.05, 0.95], axis=0)
plt.fill_between(t[:, 0], *qs, alpha=0.7,
color="#7A68A6")
plt.plot(t[:, 0], qs[0], label="95% CI", color="#7A68A6", alpha=0.7)
plt.plot(t, mean_prob_t, lw=1, ls="--", color="k",
label="average posterior \nprobability of defect")
plt.xlim(t.min(), t.max())
plt.ylim(-0.02, 1.02)
plt.legend(loc="lower left")
plt.scatter(temperature, D, color="k", s=50, alpha=0.5)
plt.xlabel("temp, $t$")
plt.ylabel("probability estimate")
plt.title("Posterior probability estimates given temp. $t$")
```
The *95% credible interval*, or 95% CI, painted in purple, represents the interval, for each temperature, that contains 95% of the distribution. For example, at 65 degrees, we can be 95% sure that the probability of defect lies between 0.25 and 0.75.
More generally, we can see that as the temperature nears 60 degrees, the CI's spread out over [0,1] quickly. As we pass 70 degrees, the CI's tighten again. This can give us insight about how to proceed next: we should probably test more O-rings around 60-65 temperature to get a better estimate of probabilities in that range. Similarly, when reporting to scientists your estimates, you should be very cautious about simply telling them the expected probability, as we can see this does not reflect how *wide* the posterior distribution is.
### What about the day of the Challenger disaster?
On the day of the Challenger disaster, the outside temperature was 31 degrees Fahrenheit. What is the posterior distribution of a defect occurring, given this temperature? The distribution is plotted below. It looks almost guaranteed that the Challenger was going to be subject to defective O-rings.
```
figsize(12.5, 2.5)
prob_31 = logistic(31, beta_samples, alpha_samples)
plt.xlim(0.995, 1)
plt.hist(prob_31, bins=1000, normed=True, histtype='stepfilled')
plt.title("Posterior distribution of probability of defect, given $t = 31$")
plt.xlabel("probability of defect occuring in O-ring")
```
### Is our model appropriate?
The skeptical reader will say "You deliberately chose the logistic function for $p(t)$ and the specific priors. Perhaps other functions or priors will give different results. How do I know I have chosen a good model?" This is absolutely true. To consider an extreme situation, what if I had chosen the function $p(t) = 1,\; \forall t$, which guarantees a defect always occurring: I would have again predicted disaster on January 28th. Yet this is clearly a poorly chosen model. On the other hand, if I did choose the logistic function for $p(t)$, but specified all my priors to be very tight around 0, likely we would have very different posterior distributions. How do we know our model is an expression of the data? This encourages us to measure the model's **goodness of fit**.
We can think: *how can we test whether our model is a bad fit?* An idea is to compare the observed data (which if we recall is a *fixed* stochastic variable) with an artificial dataset that we can simulate. The rationale is that if the simulated dataset does not appear similar, statistically, to the observed dataset, then our model likely does not accurately represent the observed data.
Previously in this Chapter, we simulated artificial datasets for the SMS example. To do this, we sampled values from the priors. We saw how varied the resulting datasets looked, and rarely did they mimic our observed dataset. In the current example, we should sample from the *posterior* distributions to create *very plausible datasets*. Luckily, our Bayesian framework makes this very easy. We only need to create a new `Stochastic` variable, that is exactly the same as our variable that stored the observations, but minus the observations themselves. If you recall, our `Stochastic` variable that stored our observed data was:
```
observed = mc.Bernoulli("bernoulli_obs", p, value=D, observed=True)
```
Hence we create:
```
simulated_data = mc.Bernoulli("simulation_data", p)
```
Let's simulate 10,000 samples:
```
simulated = mc.Bernoulli("bernoulli_sim", p)
N = 10000
mcmc = mc.MCMC([simulated, alpha, beta, observed])
mcmc.sample(N)
```
```
figsize(12.5, 5)
simulations = mcmc.trace("bernoulli_sim")[:]
print simulations.shape
plt.title("Simulated dataset using posterior parameters")
figsize(12.5, 6)
for i in range(4):
    ax = plt.subplot(4, 1, i+1)
plt.scatter(temperature, simulations[1000*i, :], color="k",
s=50, alpha=0.6)
```
Note that the above plots are different (if you can think of a cleaner way to present this, please send a pull request and answer [here](http://stats.stackexchange.com/questions/53078/how-to-visualize-bayesian-goodness-of-fit-for-logistic-regression)!).
We wish to assess how good our model is. "Good" is a subjective term of course, so results must be relative to other models.
We will be doing this graphically as well, which may seem like an even less objective method. The alternative is to use *Bayesian p-values*. These are still subjective, as the proper cutoff between good and bad is arbitrary. Gelman emphasises that the graphical tests are more illuminating [7] than p-value tests. We agree.
The following graphical test is a novel data-viz approach to logistic regression. The plots are called *separation plots*[8]. For a suite of models we wish to compare, each model is plotted on an individual separation plot. I leave most of the technical details about separation plots to the very accessible [original paper](http://mdwardlab.com/sites/default/files/GreenhillWardSacks.pdf), but I'll summarize their use here.
For each model, we calculate the proportion of times the posterior simulation proposed a value of 1 for a particular temperature, i.e. compute $P( \;\text{Defect} = 1 | t, \alpha, \beta )$ by averaging. This gives us the posterior probability of a defect at each data point in our dataset. For example, for the model we used above:
```
posterior_probability = simulations.mean(axis=0)
print "posterior prob of defect | realized defect "
for i in range(len(D)):
print "%.2f | %d" % (posterior_probability[i], D[i])
```
Next we sort each column by the posterior probabilities:
```
ix = np.argsort(posterior_probability)
print "probb | defect "
for i in range(len(D)):
print "%.2f | %d" % (posterior_probability[ix[i]], D[ix[i]])
```
We can present the above data better in a figure: I've wrapped this up into a `separation_plot` function.
```
from separation_plot import separation_plot
figsize(11., 1.5)
separation_plot(posterior_probability, D)
```
The snaking-line is the sorted probabilities, blue bars denote defects, and empty space (or grey bars for the optimistic readers) denote non-defects. As the probability rises, we see more and more defects occur. On the right hand side, the plot suggests that as the posterior probability is large (line close to 1), then more defects are realized. This is good behaviour. Ideally, all the blue bars *should* be close to the right-hand side, and deviations from this reflect missed predictions.
The black vertical line is the expected number of defects we should observe, given this model. This allows the user to see how the total number of events predicted by the model compares to the actual number of events in the data.
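That vertical line's position is simply the sum of the posterior probabilities (a one-line sketch, assuming `posterior_probability` and `D` from above):
```
expected_defects = posterior_probability.sum()
print "expected number of defects:", expected_defects, " observed:", D.sum()
```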
It is much more informative to compare this to separation plots for other models. Below we compare our model (top) versus three others:
1. the perfect model, which predicts the posterior probability to be equal 1 if a defect did occur.
2. a completely random model, which predicts random probabilities regardless of temperature.
3. a constant model: where $P(D = 1 \; | \; t) = c, \;\; \forall t$. The best choice for $c$ is the observed frequency of defects, in this case 7/23.
```
figsize(11., 1.25)
# Our temperature-dependent model
separation_plot(posterior_probability, D)
plt.title("Temperature-dependent model")
# Perfect model
# i.e. the probability of defect is equal to if a defect occurred or not.
p = D
separation_plot(p, D)
plt.title("Perfect model")
# random predictions
p = np.random.rand(23)
separation_plot(p, D)
plt.title("Random model")
# constant model
constant_prob = 7./23*np.ones(23)
separation_plot(constant_prob, D)
plt.title("Constant-prediction model")
```
In the random model, we can see that as the probability increases there is no clustering of defects to the right-hand side. Similarly for the constant model.
In the perfect model, the probability line is not well shown, as it is stuck to the bottom and top of the figure. Of course the perfect model is only for demonstration, and we cannot draw any scientific inference from it.
##### Exercises
1\. Try putting in extreme values for our observations in the cheating example. What happens if we observe 25 affirmative responses? 10? 50?
2\. Try plotting $\alpha$ samples versus $\beta$ samples. Why might the resulting plot look like this?
```
#type your code here.
figsize(12.5, 4)
plt.scatter(alpha_samples, beta_samples, alpha=0.1)
plt.title("Why does the plot look like this?")
plt.xlabel(r"$\alpha$")
plt.ylabel(r"$\beta$")
```
### References
- [1] Dalal, Fowlkes and Hoadley (1989),JASA, 84, 945-957.
- [2] German Rodriguez. Datasets. In WWS509. Retrieved 30/01/2013, from http://data.princeton.edu/wws509/datasets/#smoking.
- [3] McLeish, Don, and Cyntha Struthers. STATISTICS 450/850 Estimation and Hypothesis Testing. Winter 2012. Waterloo, Ontario: 2012. Print.
- [4] Fonnesbeck, Christopher. "Building Models." PyMC-Devs. N.p., n.d. Web. 26 Feb 2013. <http://pymc-devs.github.com/pymc/modelbuilding.html>.
- [5] Cronin, Beau. "Why Probabilistic Programming Matters." 24 Mar 2013. Google, Online Posting to Google . Web. 24 Mar. 2013. <https://plus.google.com/u/0/107971134877020469960/posts/KpeRdJKR6Z1>.
- [6] S.P. Brooks, E.A. Catchpole, and B.J.T. Morgan. Bayesian animal survival estimation. Statistical Science, 15: 357–376, 2000
- [7] Gelman, Andrew. "Philosophy and the practice of Bayesian statistics." British Journal of Mathematical and Statistical Psychology. (2012): n. page. Web. 2 Apr. 2013.
- [8] Greenhill, Brian, Michael D. Ward, and Audrey Sacks. "The Separation Plot: A New Visual Method for Evaluating the Fit of Binary Models." American Journal of Political Science. 55.No.4 (2011): n. page. Web. 2 Apr. 2013.
```
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
```
# Lorenz
In this notebook we are going to syncronize a subsystem of a Lorenz system with another Lorenz system according to the proposed method of Pecora and Carroll (1990).
```python
from plotly import offline as py
from plotly import graph_objs as go
py.init_notebook_mode(connected=True)
```
A first system, governed by the Lorenz equations
$$
\begin{align}
\dot{x}_t&=\sigma(y_t-x_t)\\
\dot{y}_t&=x_t(r-z_t)-y_t\\
\dot{z}_t&=x_ty_t-bz_t
\end{align}
$$
acts as a transmitter with $r=60,\sigma=10,b=8/3$.
A second system, governed by
$$
\begin{align}
x_r&=x_d\\
\dot{y}_r&=x_r(r-z_r)-y_r\\
\dot{z}_r&=x_ry_r-bz_r
\end{align}
$$
acts as a receiver: its $x$ component is not integrated but replaced directly by the drive signal $x_d=x_t$ from the transmitter.
```python
from numba import jit
from random import uniform
r = 60
b = 8 / 3
sigma = 10
@jit(nopython=True)
def simulate(N, h):
xt = [uniform(0, 10)]
yt = [uniform(-60, 60)]
zt = [uniform(0, 140)]
xr = xt
yr = [uniform(-60, 60)]
zr = [uniform(0, 140)]
for i in range(1, N+1):
xti = xt[-1]
yti = yt[-1]
zti = zt[-1]
xt.append(xti + h * sigma * (yti - xti))
yt.append(yti + h * (xti * (r - zti) - yti))
zt.append(zti + h * (xti * yti - b * zti))
yri = yr[-1]
zri = zr[-1]
yr.append(yri + h * (xti * (r - zri) - yri))
zr.append(zri + h * (xti * yri - b * zri))
return (xt, yt, zt), (xr, yr, zr)
```
Introducing the errors $e_1=x_t-x_r$, $e_2=y_t-y_r$, $e_3=z_t-z_r$ and subtracting the receiver equations from the transmitter equations one finds
$$
\begin{align}
\dot{e}_2&=-e_2-x_de_3\\
\dot{e}_3&=x_de_2-be_3
\end{align}
$$
with $e_1=0$ by construction, since the receiver uses the drive signal $x_d=x_t$ directly.
If we multiply the first equation by $e_2$, the second equation by $e_3$ and add the results, the cross terms $\pm x_de_2e_3$ cancel and we obtain
$$
\frac{1}{2}\frac{d}{dt}\left(e_2^2+e_3^2\right)
=
e_2\dot{e}_2+e_3\dot{e}_3
=
-e_2^2-be_3^2
\le
-\left(e_2^2+e_3^2\right),
$$
where the last inequality uses $b=8/3>1$. We identify $V(e_2,e_3)=e_2^2+e_3^2$ and see that $V$ is a Lyapunov function, as $V$ is positive definite and $\dot{V}$ is negative definite; thus $e_2=e_3=0$ is a stable equilibrium point which is approached asymptotically for $t\to\infty$ regardless of the initial conditions chosen for the transmitter and receiver systems, and the receiver will synchronize with the transmitter.
```python
import numpy as np  # numpy was not imported earlier in this notebook

N = 1000
h = 1e-2
t = np.arange(N + 1) * h  # simulate returns N+1 samples (initial condition included)
(xt, yt, zt), (xr, yr, zr) = simulate(N, h)
(xt[0], xr[0]), (yt[0], yr[0]), (zt[0], zr[0])
```
((6.324940084560655, 6.324940084560655),
(35.15227077484603, -16.044206306661586),
(127.13535435788735, 126.39523401898253))
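As a numerical counterpart to the Lyapunov argument, we can check that the synchronization error $V=e_2^2+e_3^2$ decays along the simulated trajectories (a minimal sketch using the `simulate` output above):
```python
e2 = np.asarray(yt) - np.asarray(yr)
e3 = np.asarray(zt) - np.asarray(zr)
V = e2**2 + e3**2
print V[0], V[len(V) // 2], V[-1]  # V should shrink towards 0
```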
```python
layout = go.Layout(
xaxis=dict(title='t'),
yaxis=dict(title='y(t)'),
)
figure = go.Figure([
go.Scatter(x=t, y=yt, mode='lines+markers', name='transmitter'),
go.Scatter(x=t, y=yr, mode='lines+markers', name='receiver'),
], layout)
py.iplot(figure)
```
<div id="e859f757-74a1-4d51-8e70-58c60e36b34b" style="height: 525px; width: 100%;" class="plotly-graph-div"></div>
```python
layout = go.Layout(
xaxis=dict(title='y(t)'),
yaxis=dict(title='z(t)'),
)
figure = go.Figure([
go.Scatter(x=yt, y=zt, mode='lines+markers', name='transmitter'),
go.Scatter(x=yr, y=zr, mode='lines+markers', name='receiver'),
], layout)
py.iplot(figure)
```
<div id="44ac0bf0-f0c4-4486-a4a9-81f632e6b911" style="height: 525px; width: 100%;" class="plotly-graph-div"></div>
|
98b2f9ea0bec8d041d9cdf7a0ae6ec14cc94183c
| 494,378 |
ipynb
|
Jupyter Notebook
|
notebooks/syncronisation/Lorenz.ipynb
|
bodokaiser/complex-systems
|
5e348d9cc382059b316ccc0a391183ba65aa8f3f
|
[
"Apache-2.0"
] | 2 |
2019-04-22T17:32:33.000Z
|
2019-10-03T18:09:01.000Z
|
notebooks/syncronisation/Lorenz.ipynb
|
bodokaiser/complex-systems
|
5e348d9cc382059b316ccc0a391183ba65aa8f3f
|
[
"Apache-2.0"
] | null | null | null |
notebooks/syncronisation/Lorenz.ipynb
|
bodokaiser/complex-systems
|
5e348d9cc382059b316ccc0a391183ba65aa8f3f
|
[
"Apache-2.0"
] | null | null | null | 59.470468 | 79,445 | 0.667164 | true | 1,250 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.913677 | 0.845942 | 0.772918 |
__label__eng_Latn
| 0.816473 | 0.634079 |
# Le-Net 1 based architecture
We start with a 41x41 input (I). After the first convolution (9x9 kernels) we have 33x33 feature maps (L1). The next pooling layer reduces the dimension by a factor of 3, giving 11x11 maps with 3x3 pooling kernels (L2). We then apply several different 4x4 convolution kernels to the L2 layer, resulting in 8x8 maps (L3), followed by 2x2 pooling, resulting in 4x4 output maps (L4). So each classifier neuron has 16 connections per map in layer L4 (the total depends on the number of different convolutions in L3). Note that the code below uses slightly different sizes: 40x40 patches, 4x4 pooling after the first convolution and 5x5 second convolutions, giving 4x4 maps in both L3 (H3) and L4 (H4).
\begin{equation}
f(x)=\frac{1}{1+e^{-x}} \\
F_{k}= f( \sum_{i} \mathbf{W^{k}_{i} \cdot y_{i}}-b_{k})
\end{equation}
\begin{equation}
E=\sum_{k} \frac{1}{2}|t_k-F_{k}|^{2} \\
\Delta W_{ij}= - \eta \frac{dE}{d W_{ij}}
\end{equation}
\begin{equation}
\Delta W_{ij}= \sum_{k} - \eta \frac{dE}{d F_{k}} \frac{dF_{k}}{dx_{k}} \frac{dx_{k}}{dW_{ij}}=\sum_{k} \eta (t_{k}-F_{k})\frac{e^{-x_{k}}}{(1+e^{-x_{k}})^{2}} \frac{dx_{k}}{dW_{ij}} \\
= \eta (t_{k}-F_{k})\frac{e^{-x_{k}}}{(1+e^{-x_{k}})^{2}} y_{ij}
\end{equation}
\begin{equation}
\Delta b_{k}= - \eta \frac{dE}{d F_{k}} \frac{dF_{k}}{dx_{k}} \frac{dx_{k}}{b_{k}}=\eta (t_{k}-F_{k})\frac{e^{-x_{k}}}{(1+e^{-x_{k}})^{2}} \cdot-1
\end{equation}
Since $\frac{e^{-x_{k}}}{(1+e^{-x_{k}})^{2}}$ is always positive we can neglect this term in our programme
\begin{equation}
x_{k}=\sum_{ij} W^{k}[i,j] \; y^{4rb}[i,j] - b_{k}
\end{equation}
\begin{equation}
y^{4rb}[i,j]= \sum_{u,v} W^{3rb}[u,v] \; y^{3rb} [2i+u,2j+v]
\end{equation}
\begin{equation}
y^{3rb} [2i+u,2j+v]= f\left (x^{3rb}[2i+u,2j+v] \right)
\end{equation}
\begin{equation}
x^{3rb}[2i+u,2j+v]=\sum_{nm} W^{2rb}[n,m] \; y^{2rb}[n+(2i+u),m+(2j+v)] -b^{3rb}[2i+u,2j+v]
\end{equation}
\begin{equation}
\begin{split}
\Delta W^{2rb}[n,m] =\sum_{k} - \eta \frac{dE}{dF_{k}}
\frac{dF_{k}}{dx_{k}}
\sum_{ij} \frac{dx_{k}}{dy^{4rb}[i,j]}
\sum_{uv}\frac{dy^{4rb}[i,j]}{d y^{3rb} [2i+u,2j+v]}
\frac{d y^{3rb} [2i+u,2j+v]}{d x^{3rb}[2i+u,2j+v]}
\sum_{nm}\frac{d x^{3rb}[2i+u,2j+v]}{d W^{2rb}[n,m]}
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\Delta b^{3rb}[2i+u,2j+v] =\sum_{k} - \eta \frac{dE}{dF_{k}}
\frac{dF_{k}}{dx_{k}}
\sum_{ij} \frac{dx_{k}}{dy^{4rb}[i,j]}
\sum_{uv}\frac{dy^{4rb}[i,j]}{d y^{3rb} [2i+u,2j+v]}
\frac{d y^{3rb} [2i+u,2j+v]}{d x^{3rb}[2i+u,2j+v]}
\frac{d x^{3rb}[2i+u,2j+v]}{d b^{3rb}[2i+u,2j+v]}
\end{split}
\end{equation}
\begin{equation}
\frac{dx_{k}}{dy^{4rb}[i,j]} = W^{4rbk}[i,j]\\
\end{equation}
\begin{equation}
\frac{dy^{4rb}[i,j]}{d y^{3rb} [2i+u,2j+v]} = W^{3rb}[u,v] \\
\end{equation}
\begin{equation}
\frac{d y^{3rb} [2i+u,2j+v]}{d x^{3rb}[2i+u,2j+v]}=\frac{e^{-x^{3rb}[2i+u,2j+v]}}{(1+e^{-x^{3rb}[2i+u,2j+v]})^2}
\end{equation}
This term is omitted at first, since it is always positive. If the training does not converge, it may be necessary to include it.
\begin{equation}
\frac{d y^{3rb} [2i+u,2j+v]}{d W^{2rb}[n,m]}= y^{2rb} [n+(2i+u),m+(2j+v)] \\
\end{equation}
\begin{equation}
\frac{d x^{3rb}[2i+u,2j+v]}{d b^{3rb}[2i+u,2j+v]}=-1
\end{equation}
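Since the back-propagation above repeatedly uses the derivative $f'(x)=\frac{e^{-x}}{(1+e^{-x})^{2}}$, here is a quick finite-difference check of that identity (a standalone sketch, independent of the network code below):
```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_dx(x):
    return np.exp(-x) / (1.0 + np.exp(-x))**2

x = np.linspace(-3, 3, 7)
h = 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
print np.max(np.abs(numeric - sigmoid_dx(x)))  # should be ~1e-10
```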
```python
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from numpy import linalg as lin
import scipy.signal as sig
from PIL import Image
import glob
import matplotlib.cm as cm
import itertools
```
```python
########### Load Input ############################################################################################################################
# In this script I used the brightness to determine structures, instead of one RGB color:
# this is determined by: 0.2126*R + 0.7152*G + 0.0722*B
# Source: https://en.wikipedia.org/wiki/Relative_luminance
patchSize=40 # patchsize this must be 48 since our network can only handle this value
# Open forest
Amount_data= len(glob.glob('Forest/F*'))
Patches_F=np.empty([1,patchSize,patchSize])
Patches_F_RGB=np.empty([1,patchSize,patchSize,3])
Patches_t=np.empty([3])
for k in range (0, Amount_data):
name="Forest/F%d.png" % (k+1)
img = Image.open(name)
data=img.convert('RGB')
data= np.asarray( data, dtype="int32" )
data=0.2126*data[:,:,0]+0.7152*data[:,:,1]+0.0722*data[:,:,2]
data2=img.convert('RGB')
data2= np.asarray( data2, dtype="int32" )
Yamount=data.shape[0]/patchSize # Counts how many times the windowsize fits in the picture
Xamount=data.shape[1]/patchSize # Counts how many times the windowsize fits in the picture
# Create patches for structure
data_t=np.array([[data[j*patchSize:(j+1)*patchSize,i*patchSize:(i+1)*patchSize] for i in range(0,Xamount)] for j in range(0,Yamount)])
data_t=np.reshape(data_t, [data_t.shape[0]*data_t.shape[1], patchSize, patchSize])
Patches_F=np.append(Patches_F,data_t,axis=0)
#Create patches for colour
data_t=np.array([[data2[j*patchSize:(j+1)*patchSize,i*patchSize:(i+1)*patchSize,:] for i in range(0,Xamount)] for j in range(0,Yamount)])
data_t=np.reshape(data_t, [data_t.shape[0]*data_t.shape[1], patchSize, patchSize, 3])
Patches_F_RGB=np.append(Patches_F_RGB, data_t,axis=0)
Patches_F=np.delete(Patches_F, 0,0)
Patches_F_RGB=np.delete(Patches_F_RGB, 0,0)
# Open city
Amount_data= len(glob.glob('City/C*'))
Patches_C=np.empty([1,patchSize,patchSize])
Patches_C_RGB=np.empty([1,patchSize,patchSize,3])
Patches_t=np.empty([3])
for k in range (0, Amount_data):
name="City/C%d.png" % (k+1)
img = Image.open(name)
data=img.convert('RGB')
data = np.asarray( data, dtype="int32" )
data=0.2126*data[:,:,0]+0.7152*data[:,:,1]+0.0722*data[:,:,2]
data2=img.convert('RGB')
data2= np.asarray( data2, dtype="int32" )
Yamount=data.shape[0]/patchSize # Counts how many times the windowsize fits in the picture
Xamount=data.shape[1]/patchSize # Counts how many times the windowsize fits in the picture
# Create patches for structure
data_t=np.array([[data[j*patchSize:(j+1)*patchSize,i*patchSize:(i+1)*patchSize] for i in range(0,Xamount)] for j in range(0,Yamount)])
data_t=np.reshape(data_t, [data_t.shape[0]*data_t.shape[1], patchSize, patchSize])
Patches_C=np.append(Patches_C,data_t,axis=0)
#Create patches for colour
data_t=np.array([[data2[j*patchSize:(j+1)*patchSize,i*patchSize:(i+1)*patchSize,:] for i in range(0,Xamount)] for j in range(0,Yamount)])
data_t=np.reshape(data_t, [data_t.shape[0]*data_t.shape[1], patchSize, patchSize, 3])
Patches_C_RGB=np.append(Patches_C_RGB, data_t,axis=0)
Patches_C=np.delete(Patches_C, 0,0)
Patches_C_RGB=np.delete(Patches_C_RGB, 0,0)
# Open water
Amount_data= len(glob.glob('Water/W*'))
Patches_W=np.empty([1,patchSize,patchSize])
Patches_W_RGB=np.empty([1,patchSize,patchSize,3])
Patches_t=np.empty([3])
for k in range (0, Amount_data):
name="Water/W%d.png" % (k+1)
img = Image.open(name)
data=img.convert('RGB')
data = np.asarray( data, dtype="int32" )
data=0.2126*data[:,:,0]+0.7152*data[:,:,1]+0.0722*data[:,:,2]
data2 = img.convert('RGB')
data2 = np.asarray( data2, dtype="int32" )
Yamount=data.shape[0]/patchSize # Counts how many times the windowsize fits in the picture
Xamount=data.shape[1]/patchSize # Counts how many times the windowsize fits in the picture
# Create patches for structure
data_t=np.array([[data[j*patchSize:(j+1)*patchSize,i*patchSize:(i+1)*patchSize] for i in range(0,Xamount)] for j in range(0,Yamount)])
data_t=np.reshape(data_t, [data_t.shape[0]*data_t.shape[1], patchSize, patchSize])
Patches_W=np.append(Patches_W,data_t,axis=0)
#Create patches for colour
data_t=np.array([[data2[j*patchSize:(j+1)*patchSize,i*patchSize:(i+1)*patchSize,:] for i in range(0,Xamount)] for j in range(0,Yamount)])
data_t=np.reshape(data_t, [data_t.shape[0]*data_t.shape[1], patchSize, patchSize, 3])
Patches_W_RGB=np.append(Patches_W_RGB, data_t,axis=0)
Patches_W=np.delete(Patches_W, 0,0)
Patches_W_RGB=np.delete(Patches_W_RGB, 0,0)
# Open Grassland
#Amount_data= len(glob.glob('Grassland/G*'))
#Patches_G=np.empty([1,patchSize,patchSize])
#Patches_G_RGB=np.empty([1,patchSize,patchSize,3])
#Patches_t=np.empty([3])
#for k in range (0, Amount_data):
# name="Grassland/G%d.png" % (k+1)
# img = Image.open(name)
# data=img.convert('RGB')
# data= np.asarray( data, dtype="int32" )
# data=0.2126*data[:,:,0]+0.7152*data[:,:,1]+0.0722*data[:,:,2]
# data2=img.convert('RGB')
# data2= np.asarray( data2, dtype="int32" )
# Yamount=data.shape[0]/patchSize # Counts how many times the windowsize fits in the picture
# Xamount=data.shape[1]/patchSize # Counts how many times the windowsize fits in the picture
# Create patches for structure
# data_t=np.array([[data[j*patchSize:(j+1)*patchSize,i*patchSize:(i+1)*patchSize] for i in range(0,Xamount)] for j in range(0,Yamount)])
# data_t=np.reshape(data_t, [data_t.shape[0]*data_t.shape[1], patchSize, patchSize])
# Patches_G=np.append(Patches_G,data_t,axis=0)
#Create patches for colour
# data_t=np.array([[data2[j*patchSize:(j+1)*patchSize,i*patchSize:(i+1)*patchSize,:] for i in range(0,Xamount)] for j in range(0,Yamount)])
# data_t=np.reshape(data_t, [data_t.shape[0]*data_t.shape[1], patchSize, patchSize, 3])
# Patches_G_RGB=np.append(Patches_G_RGB, data_t,axis=0)
#Patches_G=np.delete(Patches_G, 0,0)
#Patches_G_RGB=np.delete(Patches_G_RGB, 0,0)
```
```python
print Patches_C.shape[0], Patches_W.shape[0], Patches_F.shape[0] #, Patches_G.shape[0]
```
7480 7972 7163
```python
########### Functions ############################################################################################################################
# Define Activitation functions, pooling and convolution functions (the rules)
def Sigmoid(x):
return (1/(1+np.exp(-x)))
def Sigmoid_dx(x):
return np.exp(-x)/((1+np.exp(-x))**2)
def TanH(x):
    # note: this equals tanh(x/2), not tanh(x); the function is unused below
    return (1-np.exp(-x))/(1+np.exp(-x))
def Pool(I,W):
PoolImg=np.zeros((len(I)/len(W),len(I)/len(W))) # W must fit an integer times into I.
for i in range(0,len(PoolImg)):
for j in range(0,len(PoolImg)):
SelAr=I[i*len(W):(i+1)*len(W),j*len(W):(j+1)*len(W)]
PoolImg[i,j]=np.inner(SelAr.flatten(),W.flatten()) # Now this is just an inner product since we have vectors
return PoolImg
# To automatically make Gaussian kernels
def makeGaussian(size, fwhm = 3, center=None):
x = np.arange(0, size, 1, float)
y = x[:,np.newaxis]
if center is None:
x0 = y0 = size // 2
else:
x0 = center[0]
y0 = center[1]
return np.exp(-4*np.log(2) * ((x-x0)**2 + (y-y0)**2) / fwhm**2)
# To automatically define pooling nodes
def Pool_node(N):
s=(N,N)
a=float(N)*float(N)
return (1.0/a)*np.ones(s)
```
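A small usage example of `Pool` together with `Pool_node` (a sketch; the 2x2 averaging kernel here is illustrative, the network below uses its own kernel sizes):
```python
I_demo = np.arange(16.0).reshape(4, 4)
print Pool(I_demo, Pool_node(2))  # 2x2 average pooling -> 2x2 map of block means
```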
```python
#################### Define pooling layers ###########################################################################
P12=Pool_node(4)*(1.0/100.0) # extra factor 1/100 to scale the pooled values down
P34=Pool_node(1)*(1.0/10.0)
#################### Define Convolution layers #######################################################################
######### First C layer #########
C1=[]
## First Kernel
# Inspiration: http://en.wikipedia.org/wiki/Sobel_operator
# http://stackoverflow.com/questions/9567882/sobel-filter-kernel-of-large-size
Kernel=np.array([[4,3,2,1,0,-1,-2,-3,-4],
[5,4,3,2,0,-2,-3,-4,-5],
[6,5,4,3,0,-3,-4,-5,-6],
[7,6,5,4,0,-4,-5,-6,-7],
[8,7,6,5,0,-5,-6,-7,-8],
[7,6,5,4,0,-4,-5,-6,-7],
[6,5,4,3,0,-3,-4,-5,-6],
[5,4,3,2,0,-2,-3,-4,-5],
[4,3,2,1,0,-1,-2,-3,-4]])
C1.append(Kernel)
## Second Kernel
Kernel=np.matrix.transpose(Kernel)
C1.append(Kernel)
##Third Kernel
#Kernel=makeGaussian(9,5)
#Kernel=(1/np.sum(Kernel))*Kernel
#C1.append(Kernel)
######### Initialize output weights and biases #########
# Define the number of branches in one row
N_branches= 3
ClassAmount=3 # Forest, City, Water (Grassland is commented out above)
Size_C2=5
S_H3=((patchSize-C1[0].shape[0]+1)/P12.shape[1])-Size_C2+1
S_H4=S_H3/P34.shape[1]
C2_INIT=np.random.rand(len(C1),N_branches, Size_C2, Size_C2) # second convolution weigths
W_INIT=np.random.rand(ClassAmount, len(C1), N_branches, S_H3, S_H3) # end-weight from output to classifier-neurons
W2_INIT=np.random.rand(ClassAmount,3)
H3_bias=np.random.rand(len(C1),N_branches) # bias in activation function from C2 to H3
Output_bias=np.random.rand(ClassAmount) # bias on the three classes
```
```python
N_plts=len(C1)
for i in range(0,N_plts):
plt.subplot(4,3,i+1)
plt.imshow(C1[i])
```
# Extra information regarding the code in the following cell
A random patch is chosen in the following way: the program counts how many patches there are in total over all files, then permutes this sequence so that a random patch (forest, city or water) is drawn on every iteration. After drawing an index, the corresponding patch and its class label are looked up.
```python
N_F=Patches_F.shape[0]
N_C=Patches_C.shape[0]
N_W=Patches_W.shape[0]
#N_G=Patches_G.shape[0]
N_total=N_F+N_C+N_W#+N_G
Sequence = np.arange(N_total)
Sequence = np.random.permutation(Sequence)
```
```python
print N_F, N_C, N_W#, N_G
print N_total
```
7163 7480 7972
22615
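The index-to-class lookup used in the training loop below can be factored into a small helper (a hedged sketch; `patch_class` is a hypothetical name, the actual loop inlines this logic):
```python
def patch_class(SS):
    # map a permuted global index back to its class and local patch index
    if SS < N_F:
        return "forest", SS
    elif SS < N_F + N_C:
        return "city", SS - N_F
    else:
        return "water", SS - N_F - N_C

print patch_class(Sequence[0])
```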
```python
# TRAINING PHASE: WITH COLOUR
#delta_H4=np.zeros((len(C1), N_branches, S_H4, S_H4))
#delta_H3=np.zeros((len(C1), N_branches, S_H4, S_H4))
C2=C2_INIT
W=W_INIT
W2=W2_INIT
n_W=1
n_C2=1.5*10**-2
Sample_iterations=0
N_1000=0
from itertools import product
###### Chooses patch and defines label #####
#for PP in range(0,len(Sequence)):
for PP in range(0,6000):
SS=Sequence[PP]
#SS=14000
if SS<N_F:
Class_label=np.array([1,0,0])
inputPatch=Patches_F[SS]
Int_RGB=np.mean(np.mean(Patches_F_RGB[SS,:,:,:], axis=0), axis=0)/255
elif(SS>=N_F) and (SS<(N_F+N_C)):
Class_label=np.array([0,1,0])
inputPatch=Patches_C[SS-N_F]
Int_RGB=np.mean(np.mean(Patches_C_RGB[SS-N_F,:,:,:], axis=0), axis=0)/255
elif(SS>=(N_F+N_C)) and (SS<N_F+N_C+N_W):
Class_label=np.array([0,0,1])
inputPatch=Patches_W[SS-N_F-N_C]
Int_RGB=np.mean(np.mean(Patches_W_RGB[SS-N_F-N_C,:,:,:], axis=0), axis=0)/255
#else:
# Class_label=np.array([0,0,0,1])
# inputPatch=Patches_G[SS-N_F-N_C-N_W]
# Int_RGB=np.mean(np.mean(Patches_G_RGB[SS-N_F-N_C-N_W,:,:,:], axis=0), axis=0)/255
### Layer 1 ###
H1=[]
H2=[]
H3=np.zeros((len(C1), N_branches, S_H3,S_H3))
H4=np.zeros((len(C1), N_branches, S_H4,S_H4))
x=np.zeros(ClassAmount)
f=np.zeros(ClassAmount)
II=1
ITER=0
while II==1:
for r in range (0, len(C1)):
H1.append(sig.convolve(inputPatch, C1[r], 'valid'))
H2.append(Pool(H1[r], P12))
#From here on BP trakes place!
for b in range(0,N_branches):
H3[r][b]=Sigmoid(sig.convolve(H2[r], C2[r][b],'valid')-H3_bias[r][b])
H4[r][b]=Pool(H3[r][b],P34)
#Now we have 3x3x4x4 inputs, connected to the 3 output nodes
y=np.append([H4.flatten()], [Int_RGB])
for k in range(0,ClassAmount):
W_t=np.append([W[k].flatten()], [W2[k]])
x[k]=np.inner(y, W_t)
f[k]=Sigmoid(x[k]-Output_bias[k])
#f=f/np.sum((f))
###### Back-propagation #####
# First learning the delta's
delta_H4=np.zeros([ClassAmount,len(C1),N_branches,S_H4,S_H4])
e_k=f-Class_label
delta_k=e_k*Sigmoid_dx(x)
#Output_bias=Output_bias[k]+n_bias*e_k
for k in range(0, ClassAmount):
#update weights output layer
W[k]=W[k]-n_W*delta_k[k]*H4
W2[k]=W2[k]-n_W*delta_k[k]*Int_RGB
        delta_H4[k]=delta_k[k]*W[k]  # bug fix: index with the loop variable k, not the stale i from an earlier loop
delta_H4=np.sum(delta_H4, axis=0)
delta_H3=(float(1)/10)*delta_H4
C2_diff=np.zeros([len(C1),N_branches, Size_C2, Size_C2])
for r in range(0, len(C1)):
C2_t=np.array([[delta_H3[r][:]*H2[r][(0+u):(4+u),(0+v):(4+v)] for u in range(0,Size_C2)] for v in range (0,Size_C2)])
C2_t=np.sum(np.sum(C2_t, axis=4),axis=3)
C2_t=np.rollaxis(C2_t,2)
C2_diff[r]=-n_C2*C2_t
C2=C2+C2_diff
ERROR=np.sum((Class_label-f)**2)
ITER=ITER+1
if ERROR<0.55 or ITER>4:
II=0
Sample_iterations=Sample_iterations+1
if Sample_iterations>1000:
n_W=0.7
n_C2=0.7*1.5*10**-2
if Sample_iterations>2000:
n_W=0.7*0.7
n_C2=0.7*0.7*1.5*10**-2
if Sample_iterations>3000:
n_W=0.7*0.7*0.7
n_C2=0.7*0.7*0.7*1.5*10**-2
if Sample_iterations>5000:
n_W=0.2
n_C2=0.0025
if Sample_iterations>7500:
n_W=0.1
n_C2=0.001
if Sample_iterations>10000:
n_W=0.01
n_C2=0.0005
print f, np.argmax(Class_label)
if np.argmax(f)==np.argmax(Class_label):
print True
else:
print False
# Sample_iterations=Sample_iterations+1
if (Sample_iterations-(1000*N_1000))==1000:
print Sample_iterations
N_1000=N_1000+1
# n_W=0.7*n_W
# n_C2=0.7*n_C2
```
[ 0.39361419 0.98184301 0.23449627] 1
True
[ 0.79045424 0.13992539 0.68416107] 0
True
[ 0.4549695 0.83344204 0.29403904] 0
False
[ 0.51065314 0.48795303 0.35689138] 0
True
[ 0.76903177 0.22003828 0.43187943] 0
True
[ 0.68237177 0.43030057 0.39829358] 0
True
[ 0.81147348 0.17320406 0.51455586] 0
True
[ 0.53735173 0.35723227 0.65967675] 2
True
[ 0.36248522 0.40660308 0.60908073] 2
True
[ 0.55037661 0.69836753 0.17117274] 1
True
[ 0.5300573 0.64907886 0.32507719] 1
True
[ 0.44247266 0.54818936 0.40668967] 1
True
[ 0.70582743 0.32617565 0.29193188] 0
True
[ 0.52148717 0.12948147 0.59057501] 2
True
[ 0.75395374 0.1311756 0.46353186] 0
True
[ 0.43109056 0.72466298 0.20171992] 1
True
[ 0.1552415 0.57841714 0.58120763] 2
True
[ 0.46093874 0.51691096 0.23782308] 1
True
[ 0.26943473 0.85180758 0.12578948] 1
True
[ 0.29475649 0.13401007 0.72087975] 2
True
[ 0.70790904 0.13857155 0.32673628] 0
True
[ 0.25927595 0.77151326 0.27352516] 1
True
[ 0.67547255 0.1620631 0.39398798] 0
True
[ 0.45665423 0.24359565 0.43090247] 0
True
[ 0.56907543 0.03953659 0.70440129] 2
True
[ 0.25596045 0.23744416 0.61092659] 2
True
[ 0.5400449 0.39922155 0.19712488] 0
True
[ 0.55867143 0.16073219 0.44953063] 2
False
[ 0.38058748 0.30981248 0.38019692] 2
False
[ 0.4138013 0.09153141 0.66705936] 2
True
[ 0.1721739 0.65822182 0.4692706 ] 2
False
[ 0.17537154 0.59741044 0.40439397] 1
True
[ 0.58123333 0.09748701 0.51210382] 0
True
[ 0.16754319 0.78191609 0.24653948] 1
True
[ 0.10784519 0.83349296 0.33142187] 1
True
[ 0.43823635 0.16052015 0.49362752] 2
True
[ 0.36884785 0.44719733 0.32215215] 1
True
[ 0.4116058 0.29286499 0.35408262] 0
True
[ 0.51701413 0.13594542 0.48311057] 2
False
[ 0.33977743 0.29633442 0.41020625] 0
False
[ 0.25088114 0.77638528 0.15228462] 1
True
[ 0.28815824 0.47561496 0.3313482 ] 1
True
[ 0.46767175 0.10769542 0.51287362] 2
True
[ 0.32690158 0.51910592 0.25089164] 1
True
[ 0.53019764 0.13320193 0.4567101 ] 0
True
[ 0.42828581 0.21540299 0.43722617] 2
True
[ 0.42923565 0.22190808 0.37597822] 0
True
[ 0.26369943 0.49648225 0.26843138] 1
True
[ 0.58577623 0.14264166 0.3220858 ] 0
True
[ 0.50869995 0.14891254 0.37728081] 0
True
[ 0.4510199 0.17552553 0.35681998] 2
False
[ 0.17759655 0.79829018 0.19230387] 1
True
[ 0.44002225 0.06527789 0.64894734] 2
True
[ 0.29414284 0.33622286 0.37491923] 2
True
[ 0.46945052 0.17078714 0.43327277] 0
True
[ 0.13353398 0.47444539 0.51351453] 2
True
[ 0.29750886 0.40067293 0.31437807] 1
True
[ 0.37871215 0.07859096 0.62570531] 2
True
[ 0.38459064 0.35541128 0.35543024] 0
True
[ 0.54007526 0.07901743 0.51582242] 2
False
[ 0.09386425 0.31394941 0.71125647] 2
True
[ 0.4166196 0.2003459 0.42754014] 2
True
[ 0.14567187 0.24377017 0.66921545] 2
True
[ 0.19331594 0.26740545 0.55034298] 2
True
[ 0.11832065 0.78800671 0.36318506] 2
False
[ 0.27517085 0.37191574 0.42933556] 2
True
[ 0.4695662 0.10327524 0.5005976 ] 0
False
[ 0.25882426 0.25074992 0.51924211] 2
True
[ 0.29176677 0.20678282 0.5364895 ] 2
True
[ 0.08736897 0.72976974 0.40112904] 1
True
[ 0.22410276 0.64780096 0.24413728] 1
True
[ 0.34935681 0.25983359 0.43849981] 0
False
[ 0.58446444 0.06219377 0.56889287] 0
True
[ 0.38955341 0.02360352 0.82354361] 2
True
[ 0.62092673 0.04019339 0.60899263] 0
True
[ 0.38639435 0.07376044 0.67987728] 2
True
[ 0.29146778 0.34891813 0.41383086] 2
True
[ 0.38144991 0.20751957 0.43372778] 0
False
[ 0.56990125 0.05622597 0.56260959] 2
False
[ 0.17278118 0.50646288 0.35674315] 1
True
[ 0.47688949 0.10859604 0.5056876 ] 0
False
[ 0.40838989 0.21767006 0.34100635] 0
True
[ 0.54291683 0.13908929 0.35143294] 0
True
[ 0.59399227 0.14564138 0.34244148] 0
True
[ 0.54507681 0.07963743 0.44700888] 2
False
[ 0.4407484 0.20849577 0.4445985 ] 2
True
[ 0.33029127 0.55935247 0.20795333] 1
True
[ 0.42294797 0.21011155 0.40382932] 0
True
[ 0.38761174 0.22816606 0.42315944] 2
True
[ 0.17336849 0.44957456 0.44252136] 1
True
[ 0.39123445 0.15454506 0.4742043 ] 2
True
[ 0.36051373 0.29688936 0.42645567] 2
True
[ 0.60369375 0.07899518 0.49886324] 0
True
[ 0.51916607 0.11910824 0.49307477] 2
False
[ 0.27278123 0.19919995 0.57977858] 2
True
[ 0.28097952 0.41944835 0.32214385] 1
True
[ 0.35154877 0.10868494 0.53667068] 2
True
[ 0.42372857 0.25594311 0.3605229 ] 0
True
[ 0.16658509 0.11379509 0.75197227] 2
True
[ 0.19279694 0.41654034 0.40246869] 1
True
[ 0.57510272 0.06877509 0.56404688] 0
True
[ 0.40467467 0.15925758 0.48482696] 2
True
[ 0.35095959 0.48155188 0.25539 ] 0
False
[ 0.46163504 0.18701118 0.37081661] 0
True
[Cell output truncated: ~1,500 Monte Carlo trial lines, each printing a length-3 probability vector, the selected index (0, 1, or 2), and a True/False success flag; a progress counter ("1000") appears partway through the run.]
False
[ 0.34313202 0.42684228 0.24884007] 1
True
[ 0.39425361 0.23113886 0.33433808] 0
True
[ 0.25413346 0.48336501 0.2883256 ] 1
True
[ 0.33709711 0.38433667 0.22909292] 1
True
[ 0.24528871 0.43233194 0.2890263 ] 1
True
[ 0.27685813 0.35019193 0.33649457] 1
True
[ 0.24484704 0.52709069 0.24948798] 1
True
[ 0.42996028 0.15714363 0.43869461] 0
False
[ 0.17241092 0.17848331 0.65306712] 2
True
[ 0.35837217 0.29114759 0.33349385] 2
False
[ 0.21797681 0.46433548 0.35669979] 2
False
[ 0.11713129 0.74427389 0.26648417] 1
True
[ 0.32223205 0.42914194 0.25349053] 1
True
[ 0.40755862 0.16279304 0.44593304] 0
False
[ 0.41834216 0.18070936 0.42363178] 2
True
[ 0.15563977 0.44803844 0.45564555] 1
False
[ 0.31354649 0.20468561 0.46663157] 2
True
[ 0.43095338 0.15180671 0.4377596 ] 0
False
[ 0.40905561 0.34054718 0.30675051] 0
True
[ 0.40930245 0.35161706 0.24223462] 0
True
[ 0.23603398 0.7244019 0.16519072] 2
False
[ 0.31053328 0.47027564 0.25340163] 1
True
[ 0.2618344 0.4060181 0.33005138] 1
True
[ 0.34528064 0.34315833 0.3164388 ] 1
False
[ 0.43866812 0.10276901 0.46415099] 0
False
[ 0.42185291 0.14316937 0.42855564] 0
False
[ 0.3135414 0.69719316 0.11166926] 1
True
[ 0.39483377 0.54695754 0.13570359] 1
True
[ 0.3183758 0.4435838 0.24954854] 1
True
[ 0.43914533 0.15015728 0.41013664] 2
False
[ 0.33497379 0.41210419 0.28664867] 1
True
[ 0.2172387 0.70233536 0.15634115] 1
True
[ 0.14257294 0.67442939 0.23594024] 1
True
[ 0.11022387 0.36156254 0.57875249] 2
True
[ 0.45165786 0.15454062 0.44760045] 0
True
[ 0.3792344 0.3957255 0.22650962] 0
False
[ 0.38612883 0.4740433 0.19901736] 0
False
[ 0.58917564 0.16036911 0.30779055] 0
True
[ 0.60240341 0.01819377 0.74340904] 2
True
[ 0.5649298 0.17959559 0.23363811] 0
True
[ 0.65677849 0.11499342 0.31938112] 0
True
[ 0.28891378 0.61616481 0.13710172] 2
False
[ 0.41304106 0.22938072 0.35005807] 2
False
[ 0.39451816 0.20301788 0.35798441] 0
True
[ 0.47529884 0.15263618 0.33677833] 0
True
[ 0.29067412 0.38842353 0.29063293] 1
True
[ 0.50741303 0.14701069 0.33686491] 2
False
[ 0.28358808 0.63282611 0.17853461] 1
True
[ 0.48939433 0.01823867 0.77039666] 2
True
[ 0.46747389 0.08980077 0.49349439] 0
False
[ 0.45028449 0.12472324 0.44398094] 2
False
[ 0.38220542 0.20880661 0.3965246 ] 2
True
[ 0.16469624 0.4436118 0.41319491] 1
True
[ 0.15277279 0.54315138 0.34331814] 1
True
[ 0.42096816 0.07310523 0.58806395] 2
True
[ 0.48763671 0.15043572 0.39479685] 0
True
[ 0.31511242 0.13366822 0.53344821] 2
True
[ 0.31288414 0.25464382 0.40619755] 0
False
[ 0.39430255 0.06145619 0.6598079 ] 2
True
[ 0.39368723 0.08589328 0.59209317] 0
False
[ 0.38423643 0.53848238 0.17427065] 1
True
[ 0.41508105 0.12178168 0.483717 ] 2
True
[ 0.45820961 0.08742365 0.52237236] 2
True
[ 0.42972528 0.16534047 0.42008044] 2
False
[ 0.39808079 0.13048554 0.50321946] 2
True
[ 0.22662034 0.65820659 0.17912092] 1
True
[ 0.45537986 0.13145998 0.43460645] 2
False
[ 0.26884492 0.41209515 0.30410038] 1
True
[ 0.52885998 0.10582111 0.40844064] 0
True
[ 0.36951543 0.0595506 0.71527331] 2
True
[ 0.41431115 0.21649627 0.36767377] 0
True
[ 0.33513274 0.28926489 0.3480704 ] 1
False
[ 0.17317619 0.57702947 0.31143241] 1
True
[ 0.39164393 0.26632895 0.29232564] 0
True
[ 0.36557509 0.19617752 0.38886104] 2
True
[ 0.20274454 0.78831568 0.12813201] 1
True
[ 0.40610127 0.09592847 0.52470331] 2
True
[ 0.4107209 0.14520132 0.41176587] 2
True
[ 0.33945226 0.29909077 0.32810681] 0
True
[ 0.48624861 0.12973501 0.41327747] 2
False
[ 0.46279847 0.01989829 0.79660935] 2
True
[ 0.34599348 0.33157436 0.28135021] 1
False
[ 0.3885184 0.06291113 0.63889278] 2
True
[ 0.47324225 0.10156032 0.49213495] 0
False
[ 0.42306959 0.16485434 0.44256226] 0
False
[ 0.46991252 0.17318919 0.36389442] 2
False
[ 0.17977203 0.583782 0.21961265] 1
True
[ 0.16949585 0.59715793 0.27568895] 1
True
[ 0.38451722 0.24067661 0.3147401 ] 0
True
[ 0.37308065 0.18998714 0.44134692] 2
True
[ 0.18362406 0.46755002 0.32206956] 1
True
[ 0.20495313 0.54200346 0.22997291] 1
True
[ 0.42929584 0.15318296 0.4457671 ] 0
False
[ 0.44333755 0.13840272 0.46214514] 2
True
[ 0.50242558 0.12956237 0.3581108 ] 0
True
[ 0.47804517 0.18016435 0.32332958] 0
True
[ 0.23343697 0.40658099 0.34192179] 2
False
[ 0.13158522 0.28710948 0.61796442] 2
True
[ 0.46884966 0.157676 0.40246867] 0
True
[ 0.40965622 0.12147 0.53743441] 2
True
[ 0.54114961 0.09840989 0.41971174] 0
True
[ 0.28622012 0.52334477 0.20279684] 1
True
[ 0.48000898 0.08124388 0.48431467] 0
False
[ 0.39890646 0.28989228 0.2710156 ] 0
True
[ 0.48664151 0.07569024 0.52973918] 0
False
[ 0.50215273 0.16873499 0.37858649] 0
True
[ 0.48855333 0.11255597 0.45527724] 2
False
[ 0.5498333 0.16544261 0.30045235] 0
True
[ 0.17934458 0.13327916 0.70145878] 2
True
[ 0.43641776 0.13815918 0.44166429] 0
False
[ 0.44156976 0.36295093 0.22109298] 1
False
[ 0.36951605 0.43252108 0.20858545] 0
False
[ 0.40919234 0.41009027 0.22302047] 1
True
[ 0.41979267 0.16468514 0.4162224 ] 0
True
[ 0.35417712 0.78785604 0.05739469] 1
True
[ 0.26041727 0.85734362 0.0695266 ] 1
True
[ 0.38378816 0.2122614 0.41145974] 2
True
[ 0.20526129 0.60050996 0.23104601] 2
False
[ 0.20487226 0.55886626 0.27948743] 1
True
[ 0.29251969 0.41151826 0.2746979 ] 1
True
[ 0.41348853 0.14890752 0.4460011 ] 0
False
[ 0.37830935 0.56870068 0.15212249] 1
True
[ 0.47128933 0.0672123 0.57424015] 2
True
[ 0.50374342 0.0854614 0.51718448] 2
True
[ 0.56834856 0.06579256 0.43949525] 0
True
[ 0.41899999 0.38152803 0.22065939] 0
True
[ 0.42559014 0.43735547 0.18231524] 1
True
[ 0.18510034 0.59339576 0.27301007] 1
True
[ 0.45738816 0.25184024 0.2568869 ] 1
False
[ 0.42541386 0.1628818 0.40260097] 2
False
[ 0.21572505 0.57274074 0.24578374] 2
False
[ 0.16831743 0.42651514 0.43139475] 2
True
[ 0.17956209 0.28512596 0.56421632] 2
True
[ 0.22766367 0.42808957 0.38072261] 1
True
[ 0.39474992 0.26775404 0.32420571] 0
True
[ 0.50304049 0.07946909 0.49490954] 0
True
[ 0.41474807 0.16322114 0.42219296] 2
True
[ 0.16948136 0.45866608 0.44427596] 2
False
[ 0.54445185 0.05306357 0.59279787] 2
True
[ 0.209383 0.40411141 0.38199377] 1
True
[ 0.30380049 0.37369661 0.30764094] 0
False
[ 0.48419455 0.11649857 0.44787105] 0
True
[ 0.34417075 0.32880278 0.29760576] 1
False
[ 0.19573494 0.4798498 0.37380389] 1
True
[ 0.46043903 0.11640592 0.46953647] 2
True
[ 0.41118103 0.11232767 0.57717421] 0
False
[ 0.52584141 0.12414236 0.40789613] 0
True
[ 0.4351637 0.12448082 0.53475549] 2
True
[ 0.46869124 0.25115511 0.27384985] 0
True
[ 0.47470269 0.19809686 0.30144501] 0
True
[ 0.23321607 0.47997561 0.35390481] 1
True
[ 0.23354063 0.40057787 0.34890672] 1
True
[ 0.43975146 0.27057031 0.27675075] 1
False
[ 0.3176613 0.54719597 0.17387207] 1
True
[ 0.48118603 0.07006817 0.56210703] 2
True
[ 0.37048285 0.17428796 0.3874522 ] 2
True
[ 0.31372516 0.33215198 0.31865003] 1
True
[ 0.18318532 0.8481398 0.10035526] 1
True
[ 0.2777824 0.40792876 0.23708342] 1
True
[ 0.11969181 0.91146545 0.08524988] 1
True
[ 0.22500713 0.76217303 0.14948782] 1
True
[ 0.17617859 0.72423367 0.20355337] 1
True
[ 0.33977125 0.23522587 0.40682665] 2
True
[ 0.16794045 0.59014434 0.26507557] 1
True
[ 0.15638556 0.72361377 0.19007881] 1
True
[ 0.15372521 0.84769028 0.12172066] 1
True
[ 0.3305841 0.2949619 0.34969574] 0
False
[ 0.17323465 0.5736821 0.25805472] 1
True
[ 0.14409302 0.77899716 0.17469824] 1
True
[ 0.29813746 0.58102912 0.18045281] 0
False
[ 0.18724297 0.68398338 0.22211792] 2
False
[ 0.19048365 0.59717782 0.2428852 ] 1
True
[ 0.39171211 0.20421927 0.40849565] 0
False
[ 0.44766907 0.01683786 0.813375 ] 2
True
[ 0.54234754 0.18036493 0.27271036] 0
True
[ 0.31951985 0.4130311 0.26289539] 1
True
[ 0.35723914 0.37414288 0.25579831] 1
True
[ 0.14814303 0.91997505 0.07082129] 1
True
[ 0.16801852 0.78748558 0.15404045] 1
True
[ 0.39923649 0.17332587 0.42330151] 2
True
[ 0.19444924 0.70541352 0.18036059] 1
True
[ 0.28607919 0.49205598 0.23168024] 2
False
[ 0.20055548 0.37749232 0.40830809] 2
True
[ 0.31979403 0.24640689 0.3903282 ] 0
False
[ 0.47352879 0.11259818 0.48550305] 0
False
[ 0.39761078 0.2264639 0.35670253] 2
False
[ 0.41267349 0.13555133 0.4351505 ] 0
False
[ 0.42125964 0.15791508 0.43199171] 0
False
[ 0.39124542 0.32720545 0.26602947] 0
True
[ 0.44329288 0.11502074 0.45181607] 0
False
[ 0.34869468 0.72769272 0.08147511] 1
True
[ 0.28082325 0.71896586 0.11579429] 1
True
[ 0.52297014 0.01799097 0.73890179] 2
True
[ 0.55552146 0.20010624 0.28820532] 0
True
[ 0.44130425 0.24733118 0.26681722] 0
True
[ 0.54808489 0.37274021 0.13808295] 0
True
[ 0.38038663 0.42403699 0.21573329] 1
True
[ 0.38202099 0.23959287 0.31802573] 0
True
[ 0.50286066 0.17531219 0.32065481] 2
False
[ 0.31838623 0.40447354 0.26580369] 1
True
[ 0.42316793 0.409809 0.1640087 ] 0
True
[ 0.52024453 0.32783609 0.17473658] 0
True
[ 0.44766657 0.13816215 0.40092474] 2
False
[ 0.29755197 0.40803684 0.247998 ] 1
True
[ 0.4386784 0.30014907 0.23082327] 0
True
[ 0.1631792 0.77170338 0.18585101] 2
False
[ 0.24642381 0.58711686 0.22207961] 1
True
[ 0.30086183 0.27497337 0.38217004] 1
False
[ 0.43577341 0.15036552 0.44563478] 0
False
[ 0.37341486 0.27124228 0.37862333] 2
True
[ 0.50708472 0.13410321 0.41230087] 0
True
[ 0.49019095 0.09824822 0.48458461] 0
True
[ 0.23401354 0.40307196 0.40763918] 2
True
[ 0.44754867 0.03240975 0.73654579] 2
True
[ 0.30539171 0.6452735 0.13838309] 1
True
[ 0.43922797 0.12115393 0.46253573] 0
False
[ 0.53893975 0.01873457 0.79616082] 2
True
[ 0.33586598 0.43271961 0.21505899] 1
True
[ 0.33153869 0.27199478 0.40422459] 2
True
[ 0.2636142 0.6129043 0.20507638] 1
True
[ 0.4928179 0.11457842 0.47503258] 2
False
[ 0.13847689 0.88545588 0.14749793] 1
True
[ 0.15951897 0.7556778 0.19138184] 1
True
[ 0.22670381 0.48588138 0.27340152] 1
True
[ 0.23927178 0.3179304 0.43120455] 2
True
[ 0.43849075 0.12627588 0.4459269 ] 0
False
[ 0.34416705 0.2750138 0.37870922] 2
True
[ 0.35863036 0.35073463 0.26071355] 1
False
[ 0.34974387 0.17178646 0.47206729] 2
True
[ 0.25560261 0.44736685 0.27004093] 1
True
[ 0.41493116 0.08942671 0.53105567] 2
True
[ 0.19411566 0.72060026 0.18405812] 1
True
[ 0.13473098 0.64383762 0.30914974] 1
True
[ 0.22210182 0.43300207 0.36345642] 1
True
[ 0.41267944 0.29555296 0.30341388] 0
True
[ 0.43040985 0.29192762 0.28600242] 0
True
[ 0.45534068 0.14858373 0.4397691 ] 2
False
[ 0.32254107 0.40336796 0.22041976] 0
False
[ 0.42020451 0.24115531 0.31357611] 0
True
[ 0.47574878 0.02201451 0.78224993] 2
True
[ 0.21372087 0.43690274 0.33953565] 1
True
[ 0.20269122 0.5580313 0.26330833] 1
True
[ 0.25381518 0.1317418 0.58201864] 2
True
[ 0.49873787 0.13490635 0.36904781] 0
True
[ 0.17194799 0.56764163 0.315117 ] 1
True
[ 0.31667658 0.35307304 0.32451735] 1
True
[ 0.42839534 0.2031095 0.36058249] 0
True
[ 0.26887946 0.57520273 0.16617846] 0
False
[ 0.49043363 0.33003389 0.20420882] 0
True
[ 0.59185169 0.03919342 0.55755055] 2
False
[ 0.56082467 0.11806203 0.36033802] 0
True
[ 0.23115597 0.46517279 0.32471081] 1
True
[ 0.50004085 0.20418402 0.29267651] 0
True
[ 0.34848307 0.42107161 0.26575078] 1
True
[ 0.21164544 0.76048457 0.13081449] 1
True
[ 0.45821978 0.12621753 0.44702999] 0
True
[ 0.18313517 0.85252059 0.11904693] 1
True
[ 0.49444386 0.08258285 0.5259335 ] 2
True
[ 0.35259442 0.53229709 0.14821194] 1
True
[ 0.33213966 0.66151398 0.10803406] 1
True
[ 0.40653488 0.19223838 0.35618803] 0
True
[ 0.44642705 0.44805874 0.13807723] 0
False
[ 0.31964535 0.78843278 0.07760513] 1
True
[ 0.42324662 0.22138038 0.34101801] 2
False
[ 0.24509902 0.5654267 0.24281752] 1
True
[ 0.48681836 0.32162407 0.23767364] 0
True
[ 0.39465883 0.28957426 0.27000378] 0
True
[ 0.46263696 0.17723325 0.33616166] 0
True
[ 0.30666152 0.51694259 0.17563367] 1
True
[ 0.45696901 0.22289497 0.30461949] 2
False
[ 0.40438889 0.2291807 0.33232337] 0
True
[ 0.35509664 0.5223716 0.17640933] 0
False
[ 0.52766252 0.14635467 0.35197648] 0
True
[ 0.46706771 0.28248634 0.26050102] 2
False
[ 0.6714172 0.06055028 0.46595155] 0
True
[ 0.50693029 0.14161549 0.38249485] 2
False
[ 0.60188915 0.09630263 0.41535627] 0
True
[ 0.27416911 0.34451691 0.33418428] 1
True
[ 0.20795143 0.40583014 0.40183227] 2
False
[ 0.27251519 0.27506813 0.42799938] 2
True
[ 0.53917307 0.07221377 0.59140769] 0
False
[ 0.46542222 0.18160234 0.38325588] 0
True
[ 0.52231411 0.10837727 0.42973865] 2
False
[ 0.44161624 0.16057587 0.44917087] 2
True
[ 0.47064573 0.13634953 0.45674866] 0
True
[ 0.44498582 0.06561153 0.64327717] 2
True
[ 0.42479744 0.35375438 0.25602612] 1
False
[ 0.30603386 0.32623299 0.39954705] 2
True
[ 0.2885108 0.4048499 0.27852756] 1
True
[ 0.48203439 0.11160621 0.45730077] 2
False
[ 0.4091911 0.31279472 0.2984561 ] 0
True
[ 0.16877791 0.17048632 0.64258514] 2
True
[ 0.19890864 0.46244465 0.30230732] 1
True
[ 0.45155934 0.1003641 0.47044545] 0
False
[ 0.34816109 0.27407032 0.37864942] 0
False
[ 0.30121622 0.63751515 0.16035117] 1
True
[ 0.2059315 0.63714158 0.23191812] 1
True
[ 0.33386516 0.5724045 0.17630135] 1
True
[ 0.21364039 0.41379053 0.42835285] 2
True
[ 0.29985773 0.5502348 0.16774353] 1
True
[ 0.30464602 0.52392353 0.17783193] 1
True
[ 0.32926903 0.12947452 0.55383137] 2
True
[ 0.15936474 0.18671043 0.68073243] 2
True
[ 0.40699601 0.29044721 0.28635186] 0
True
[ 0.57627826 0.0817201 0.37707257] 2
False
[ 0.4258655 0.13250324 0.44848325] 0
False
[ 0.49255015 0.16556285 0.32811871] 0
True
[ 0.29002426 0.54733432 0.20841489] 1
True
[ 0.45625972 0.25633215 0.27130777] 0
True
[ 0.52830234 0.09731449 0.48348174] 0
True
[ 0.58005361 0.03059332 0.65072123] 2
True
[ 0.59156988 0.01307338 0.79868667] 2
True
[ 0.53059606 0.04365927 0.629344 ] 2
True
[ 0.29177519 0.3918115 0.28156617] 1
True
[ 0.41229674 0.09822351 0.57416173] 2
True
[ 0.30204393 0.26991786 0.38682165] 2
True
[ 0.40825835 0.10555034 0.51997432] 0
False
[ 0.4531321 0.0985099 0.52983832] 2
True
[ 0.45498363 0.24713391 0.31822034] 0
True
[ 0.42301149 0.1852995 0.36124997] 0
True
[ 0.30743354 0.27294164 0.39927945] 2
True
[ 0.24542779 0.27060233 0.4847584 ] 2
True
[ 0.43572993 0.23622172 0.30039039] 1
False
[ 0.42297241 0.1651115 0.41429131] 0
True
[ 0.24090292 0.49252058 0.28695777] 1
True
[ 0.28659563 0.24094614 0.44827179] 2
True
[ 0.15991158 0.10222663 0.77365849] 2
True
[ 0.38980831 0.27354898 0.31940682] 0
True
[ 0.33801685 0.26826863 0.40141898] 2
True
[ 0.38706713 0.23108525 0.41159693] 1
False
[ 0.36731126 0.22955243 0.41416562] 2
True
[ 0.54205273 0.10094657 0.37818049] 0
True
[ 0.19912112 0.41197167 0.43282965] 2
True
[ 0.41929365 0.17267657 0.41337681] 0
True
[ 0.35517941 0.25130556 0.41172241] 0
False
[ 0.3735062 0.0995498 0.55815929] 2
True
[ 0.40864583 0.26153841 0.29276156] 1
False
[ 0.47683344 0.18565649 0.30683006] 0
True
[ 0.45365334 0.1494394 0.37271516] 0
True
[ 0.20957339 0.37441744 0.42074757] 2
True
[ 0.40459485 0.26013168 0.33495215] 0
True
[ 0.4036299 0.10852193 0.53251529] 2
True
[ 0.24899201 0.33476028 0.40414095] 2
True
[ 0.25430672 0.50205662 0.22841582] 1
True
[ 0.2686724 0.62859064 0.18946916] 1
True
[ 0.21079814 0.35392455 0.42029622] 2
True
[ 0.44007511 0.12988792 0.45363137] 0
False
[ 0.42265518 0.16617466 0.42767479] 2
True
[ 0.32833477 0.51828066 0.16479805] 1
True
[ 0.40208214 0.29980389 0.2863637 ] 0
True
[ 0.38561243 0.27174525 0.3371617 ] 0
True
[ 0.38957178 0.13372799 0.49306977] 2
True
[ 0.26524209 0.37223959 0.32029608] 1
True
[ 0.36458 0.4125497 0.21048993] 1
True
[ 0.52420292 0.11747041 0.39471275] 2
False
[ 0.49208876 0.12930654 0.39314359] 0
True
[ 0.41864461 0.28075503 0.34075083] 0
True
[ 0.43228652 0.30565705 0.27043089] 1
False
[ 0.17895091 0.74070303 0.18638874] 1
True
[ 0.43413127 0.16506092 0.40918966] 0
True
[ 0.25759815 0.62649659 0.19435671] 1
True
[ 0.15824945 0.67901245 0.24928885] 1
True
[ 0.29571912 0.44109048 0.23421041] 1
True
[ 0.16190744 0.69301503 0.20013467] 1
True
[ 0.1386005 0.84093133 0.13780331] 1
True
[ 0.40217303 0.22789151 0.36283186] 0
True
[ 0.23435952 0.78316533 0.11919084] 1
True
[ 0.45731152 0.26058364 0.26432999] 0
True
[ 0.30121573 0.27906173 0.38503435] 2
True
[ 0.36625819 0.41819113 0.20077807] 1
True
[ 0.19750398 0.60020424 0.20923381] 1
True
[ 0.20910029 0.44400794 0.28469715] 1
True
[ 0.20327452 0.45346961 0.33289605] 1
True
[ 0.42382348 0.17430205 0.30823265] 0
True
[ 0.3823215 0.25830169 0.29528832] 0
True
[ 0.16246944 0.71706158 0.18440035] 1
True
[ 0.40034463 0.44241544 0.21582609] 1
True
[ 0.36542502 0.07329264 0.60169639] 2
True
[ 0.09629785 0.88828351 0.13803569] 1
True
[ 0.26643969 0.48652685 0.26342466] 1
True
[ 0.16856067 0.57196794 0.27939927] 1
True
[ 0.17893378 0.53558976 0.26968049] 1
True
[ 0.23980131 0.48328256 0.22751061] 1
True
[ 0.26850815 0.31088593 0.3935086 ] 2
True
[ 0.39819161 0.34316136 0.21705643] 0
True
[ 0.38564044 0.27701381 0.30800033] 0
True
[ 0.1278314 0.75697485 0.2139863 ] 1
True
[ 0.17543497 0.42595058 0.41391826] 2
False
[ 0.47747365 0.05020781 0.60619407] 2
True
[ 0.25931213 0.6004987 0.18128261] 1
True
[ 0.33871103 0.40883394 0.21560543] 1
True
[ 0.46828693 0.16428181 0.37267061] 2
False
[ 0.24742672 0.53789083 0.24318233] 1
True
[ 0.22748243 0.69458458 0.16130328] 1
True
[ 0.0896893 0.24292797 0.71475349] 2
True
[ 0.38204971 0.16295725 0.4364992 ] 2
True
[ 0.29275453 0.37160189 0.33123339] 2
False
[ 0.09029907 0.70404187 0.3082949 ] 1
True
[ 0.34156258 0.15494473 0.48242646] 2
True
[ 0.21080606 0.43172202 0.34974442] 1
True
[ 0.50962026 0.03790336 0.63993875] 2
True
[ 0.28662729 0.1170212 0.62324407] 2
True
[ 0.34134042 0.32320623 0.33582087] 0
True
[ 0.17268855 0.69464491 0.21008674] 1
True
[ 0.14528021 0.62234662 0.27367538] 1
True
[ 0.3742523 0.23981414 0.31520851] 0
True
[ 0.38168043 0.21503044 0.38815589] 2
True
2000
[ 0.36533395 0.13849553 0.51136293] 2
True
[ 0.28625367 0.20084113 0.50641727] 2
True
[ 0.30732899 0.10425134 0.59254317] 2
True
[ 0.14523039 0.40926802 0.41671574] 1
False
[ 0.27492799 0.09372711 0.65832669] 2
True
[ 0.09116106 0.78070988 0.27685911] 2
False
[ 0.13072257 0.47638051 0.43075283] 1
True
[ 0.0751525 0.24499913 0.75441788] 2
True
[ 0.24647683 0.46156996 0.31690831] 0
False
[ 0.35993761 0.29636861 0.31328784] 0
True
[ 0.38261325 0.26693404 0.3122548 ] 0
True
[ 0.49865848 0.09899989 0.49740251] 0
True
[ 0.26444241 0.31991712 0.41486348] 0
False
[ 0.40053719 0.29573321 0.28841637] 1
False
[ 0.17205318 0.4287747 0.43020191] 2
True
[ 0.4691124 0.12784233 0.49931399] 0
False
[ 0.2910369 0.47154109 0.26786052] 1
True
[ 0.302553 0.31529855 0.35123006] 1
False
[ 0.40215031 0.20593261 0.38530084] 0
True
[ 0.46858173 0.1894469 0.39066864] 2
False
[ 0.36141069 0.37526194 0.2540766 ] 0
False
[ 0.34575464 0.66990515 0.13640328] 1
True
[ 0.48272592 0.21067204 0.32517512] 2
False
[ 0.22353428 0.61303641 0.23922454] 1
True
[ 0.36372939 0.28117857 0.34727756] 1
False
[ 0.30146175 0.73999787 0.10474415] 1
True
[ 0.35614929 0.4193773 0.23084845] 1
True
[ 0.16348715 0.7541651 0.18540614] 1
True
[ 0.45699543 0.1295506 0.43539286] 2
False
[ 0.31169264 0.39091071 0.31335207] 2
False
[ 0.31866358 0.25002954 0.38578537] 2
True
[ 0.13722109 0.51779009 0.42478758] 1
True
[ 0.31167795 0.29104555 0.37651433] 1
False
[ 0.32158674 0.18546743 0.49284472] 2
True
[ 0.49250405 0.12110034 0.42472885] 0
True
[ 0.40780911 0.13556972 0.4412606 ] 2
True
[ 0.40172751 0.19300692 0.40821204] 0
False
[ 0.16142009 0.55690906 0.34229553] 2
False
[ 0.40307883 0.2221199 0.35492437] 0
True
[ 0.21967251 0.61728207 0.23874798] 1
True
[ 0.54505304 0.07550252 0.50456229] 0
True
[ 0.39669471 0.163073 0.46493492] 2
True
[ 0.19875962 0.69652888 0.18580344] 1
True
[ 0.48444869 0.17083421 0.35194874] 0
True
[ 0.49145636 0.10452102 0.44277243] 0
True
[ 0.54897322 0.01220299 0.80692042] 2
True
[ 0.32628436 0.20175891 0.45681201] 2
True
[ 0.28011171 0.38289547 0.29386862] 0
False
[ 0.28933329 0.60276562 0.19884849] 1
True
[ 0.29990054 0.38566213 0.28017285] 1
True
[ 0.50395882 0.11527129 0.36060035] 0
True
[ 0.16942976 0.70251236 0.23088407] 2
False
[ 0.22771161 0.67934592 0.1713192 ] 1
True
[ 0.51131569 0.08936059 0.47062143] 2
False
[ 0.15249185 0.62649648 0.27884513] 1
True
[ 0.19433809 0.3317272 0.47579239] 1
False
[ 0.20892703 0.67972677 0.19932535] 1
True
[ 0.37440878 0.16991879 0.43153185] 0
False
[ 0.37273962 0.21741111 0.41101263] 2
True
[ 0.39266167 0.2631004 0.3714885 ] 0
True
[ 0.25691204 0.60766708 0.21313773] 1
True
[ 0.46774484 0.01860818 0.82310535] 2
True
[ 0.46851031 0.14211553 0.40458574] 1
False
[ 0.44066146 0.14841021 0.38482945] 0
True
[ 0.40397002 0.33448482 0.25715443] 0
True
[ 0.51389303 0.13612764 0.36715086] 0
True
[ 0.2904179 0.23779053 0.42700801] 2
True
[ 0.46800295 0.01870918 0.78506659] 2
True
[ 0.44358876 0.04672893 0.63492002] 2
True
[ 0.34428265 0.4332337 0.20730591] 1
True
[ 0.24314345 0.45412481 0.24982374] 1
True
[ 0.13140129 0.64892165 0.31925311] 1
True
[ 0.23440384 0.28116821 0.44393259] 2
True
[ 0.41180946 0.18063177 0.42055183] 2
True
[ 0.4362983 0.12055121 0.46112038] 0
False
[ 0.22572115 0.53156188 0.27867831] 1
True
[ 0.11671289 0.82414447 0.19003845] 2
False
[ 0.40457254 0.07464562 0.64150276] 2
True
[ 0.20641849 0.6868862 0.18155366] 1
True
[ 0.31927669 0.1445752 0.54361372] 2
True
[ 0.15778666 0.72761596 0.23255878] 1
True
[ 0.14490018 0.5812667 0.30426418] 1
True
[ 0.11160856 0.75967564 0.28295574] 1
True
[ 0.21162114 0.33377817 0.36315733] 1
False
[ 0.4125621 0.22921484 0.36804013] 0
True
[ 0.21874718 0.36367361 0.37121766] 0
False
[ 0.3321212 0.14105087 0.56532158] 0
False
[ 0.35702964 0.30056718 0.32897897] 2
False
[ 0.29168548 0.58338486 0.18123637] 1
True
[ 0.17482578 0.5348347 0.35132137] 1
True
[ 0.31566149 0.24409961 0.45008721] 2
True
[ 0.30447635 0.40871996 0.27897038] 1
True
[ 0.30144489 0.59242823 0.16698403] 1
True
[ 0.34415752 0.1593467 0.47091232] 0
False
[ 0.50666708 0.07276129 0.51886495] 2
True
[ 0.38624005 0.4199912 0.2117895 ] 0
False
[ 0.51012061 0.15665983 0.35775596] 2
False
[ 0.37412063 0.23172368 0.42036776] 0
False
[ 0.49357728 0.48927787 0.11946056] 1
False
[ 0.24492041 0.62561696 0.22451968] 0
False
[ 0.50473885 0.15328633 0.34671786] 0
True
[ 0.3252349 0.63569151 0.13325972] 1
True
[ 0.38527814 0.44267234 0.21585231] 1
True
[ 0.55440144 0.16443306 0.26818813] 0
True
[ 0.47676378 0.16311668 0.35425813] 0
True
[ 0.27293936 0.37530629 0.30976388] 1
True
[ 0.38823081 0.22536285 0.38500444] 2
False
[ 0.21014293 0.5468663 0.28566843] 1
True
[ 0.42941588 0.31208114 0.24634023] 0
True
[ 0.52808686 0.17569663 0.25554896] 0
True
[ 0.50511717 0.24502225 0.27947528] 0
True
[ 0.38630198 0.21300695 0.34883988] 0
True
[ 0.45144986 0.24720187 0.28435817] 0
True
[ 0.50199858 0.12812124 0.456362 ] 0
True
[ 0.52073476 0.14304917 0.38525249] 2
False
[ 0.23704269 0.42434403 0.32719705] 1
True
[ 0.34166271 0.54549648 0.18486233] 1
True
[ 0.39933482 0.36303381 0.25435251] 0
True
[ 0.4914107 0.11502372 0.4533398 ] 0
True
[ 0.30144999 0.54082686 0.21207367] 1
True
[ 0.36013353 0.55874102 0.16274655] 0
False
[ 0.27564962 0.62329682 0.18129973] 1
True
[ 0.53710246 0.23595826 0.19936135] 0
True
[ 0.52091182 0.12695729 0.36153464] 0
True
[ 0.52950137 0.19305035 0.31269559] 2
False
[ 0.4774613 0.21382468 0.30751673] 2
False
[ 0.33567792 0.41192105 0.26781116] 1
True
[ 0.29964977 0.6418302 0.16271737] 1
True
[ 0.25628376 0.55641571 0.24287748] 1
True
[ 0.4727134 0.16759402 0.39225878] 2
False
[ 0.35950133 0.19511209 0.44204441] 0
False
[ 0.29259871 0.45179473 0.2435475 ] 1
True
[ 0.31455695 0.75091443 0.0937948 ] 1
True
[ 0.3245985 0.47498831 0.2390861 ] 1
True
[ 0.42008435 0.39411371 0.19524668] 0
True
[ 0.51022859 0.12276746 0.4130902 ] 2
False
[ 0.50350148 0.13441057 0.41902494] 2
False
[ 0.44490687 0.18005866 0.40346072] 2
False
[ 0.40967346 0.3380556 0.2876333 ] 0
True
[ 0.26369472 0.31453683 0.39666028] 2
True
[ 0.45449888 0.11079976 0.48799403] 0
False
[ 0.26999789 0.61930946 0.19501137] 1
True
[ 0.30204254 0.48069867 0.25119004] 1
True
[ 0.30422705 0.32414279 0.36638503] 1
False
[ 0.13568239 0.81823033 0.19549612] 2
False
[ 0.43947503 0.11678256 0.46362673] 0
False
[ 0.46281182 0.17255382 0.40761816] 0
True
[ 0.34824106 0.1014331 0.6233684 ] 2
True
[ 0.34491152 0.108989 0.58083777] 0
False
[ 0.51703607 0.17593632 0.37564027] 0
True
[ 0.48165003 0.31282556 0.24279988] 1
False
[ 0.27539202 0.47371238 0.30600134] 1
True
[ 0.43979406 0.03726541 0.7405529 ] 2
True
[ 0.34278887 0.50529661 0.23838576] 1
True
[ 0.43604918 0.13903413 0.4768779 ] 0
False
[ 0.2872911 0.32337451 0.40264758] 2
True
[ 0.30465377 0.43336903 0.31167759] 1
True
[ 0.22877918 0.54931336 0.31218367] 1
True
[ 0.37055343 0.19017522 0.47131113] 0
False
[ 0.46773502 0.14136974 0.42705292] 2
False
[ 0.42293986 0.37748238 0.26663537] 0
True
[ 0.412793 0.24101844 0.30063036] 0
True
[ 0.33934968 0.40528419 0.25418377] 1
True
[ 0.46008904 0.31927327 0.23870495] 0
True
[ 0.61324043 0.08742567 0.41688144] 0
True
[ 0.42942256 0.13276267 0.43595643] 0
False
[ 0.57254671 0.10662526 0.38121784] 0
True
[ 0.43891617 0.2228605 0.34205288] 0
True
[ 0.38109038 0.2387462 0.39496437] 2
True
[ 0.18219439 0.23954454 0.5704591 ] 2
True
[ 0.4122008 0.13305909 0.491646 ] 0
False
[ 0.42151572 0.37498616 0.2420429 ] 0
True
[ 0.43280229 0.2749187 0.35612319] 0
True
[ 0.33419674 0.36486457 0.27155683] 1
True
[ 0.3203696 0.55303302 0.21018444] 1
True
[ 0.57746319 0.07491019 0.47454276] 0
True
[ 0.51476548 0.06846841 0.53306752] 2
True
[ 0.44380942 0.09083269 0.61747281] 2
True
[ 0.40796101 0.27300911 0.33782589] 0
True
[ 0.44587922 0.16481024 0.443873 ] 2
False
[ 0.3180278 0.5711102 0.1770426] 1
True
[ 0.57349176 0.16301792 0.3024581 ] 0
True
[ 0.18865271 0.87553884 0.12194206] 1
True
[ 0.48339783 0.24996064 0.30114971] 1
False
[ 0.38805929 0.4110532 0.22598638] 1
True
[ 0.43179218 0.25188618 0.28796758] 0
True
[ 0.2496082 0.64038725 0.20630923] 1
True
[ 0.56745571 0.04428608 0.59657755] 2
True
[ 0.54532184 0.10354115 0.41413175] 0
True
[ 0.35054258 0.49265732 0.18447307] 1
True
[ 0.3993518 0.60696193 0.12019195] 1
True
[ 0.48065949 0.17242331 0.36447251] 2
False
[ 0.32797549 0.28794044 0.38676556] 2
True
[ 0.56157676 0.11015396 0.38583703] 2
False
[ 0.42552797 0.17979738 0.42437989] 0
True
[ 0.24103879 0.53255894 0.240985 ] 1
True
[ 0.28042709 0.46079213 0.27696255] 0
False
[ 0.5398912 0.13550447 0.3855201 ] 2
False
[ 0.19838845 0.13663071 0.71128749] 2
True
[ 0.53297044 0.09858767 0.45345178] 0
True
[ 0.55516701 0.10158758 0.4311823 ] 2
False
[ 0.53638174 0.06253448 0.57146406] 0
False
[ 0.28362821 0.56755387 0.18708914] 1
True
[ 0.40219376 0.17898606 0.39967383] 0
True
[ 0.14794744 0.75498169 0.2180334 ] 1
True
[ 0.39909843 0.24204044 0.36557771] 0
True
[ 0.62512481 0.09857421 0.34585153] 0
True
[ 0.33559539 0.24130634 0.42168094] 2
True
[ 0.4566184 0.23796038 0.30372498] 0
True
[ 0.2856499 0.43465485 0.28605941] 1
True
[ 0.5483912 0.1195271 0.36974339] 2
False
[ 0.54063416 0.10692094 0.42208807] 0
True
[ 0.15067942 0.71976104 0.25432803] 1
True
[ 0.51277968 0.15321074 0.39403168] 2
False
[ 0.22901681 0.35542596 0.41778157] 1
False
[ 0.18317725 0.73357956 0.17825975] 1
True
[ 0.32634768 0.40842683 0.29436495] 1
True
[ 0.60159842 0.0191779 0.7314842 ] 2
True
[ 0.38987714 0.40328007 0.22003206] 1
True
[ 0.37159751 0.25634792 0.34021048] 0
True
[ 0.34969069 0.10376024 0.61065725] 2
True
[ 0.21925426 0.47601335 0.30288576] 1
True
[ 0.41271334 0.25266377 0.35128323] 2
False
[ 0.35390485 0.21918375 0.40519499] 2
True
[ 0.31731563 0.24755199 0.40189793] 2
True
[ 0.4542144 0.09451915 0.48798086] 0
False
[ 0.48585104 0.23546624 0.31584962] 0
True
[ 0.33650835 0.40186827 0.28130325] 2
False
[ 0.42945764 0.15499138 0.43989576] 0
False
[ 0.26470277 0.3931131 0.32721664] 1
True
[ 0.39067434 0.34055125 0.30389826] 0
True
[ 0.35176448 0.13801417 0.54409465] 2
True
[ 0.21231877 0.76845316 0.15602934] 1
True
[ 0.27980542 0.29154461 0.43139348] 2
True
[ 0.15259228 0.72645191 0.20684396] 1
True
[ 0.19765782 0.58990103 0.23610772] 1
True
[ 0.51596529 0.1259164 0.39762608] 0
True
[ 0.30855841 0.41420399 0.25396934] 1
True
[ 0.23066177 0.48009873 0.24829841] 1
True
[ 0.49314487 0.11282977 0.4203683 ] 0
True
[ 0.19599169 0.54856304 0.27411864] 1
True
[ 0.17272205 0.80194472 0.16765065] 1
True
[ 0.19847721 0.55259004 0.28417563] 1
True
[ 0.23294811 0.58579413 0.22797563] 0
False
[ 0.4987848 0.12606298 0.41835165] 2
False
[ 0.50768719 0.13105862 0.36062346] 0
True
[ 0.20533348 0.14373357 0.62442086] 2
True
[ 0.43015846 0.19120994 0.3560699 ] 2
False
[ 0.31944931 0.31796586 0.4134208 ] 2
True
[ 0.12915912 0.16519489 0.74173108] 2
True
[ 0.39611258 0.15697289 0.4757743 ] 2
True
[ 0.5624733 0.06952221 0.48849855] 2
False
[ 0.27273371 0.57330312 0.21073858] 1
True
[ 0.24393729 0.55249993 0.23769722] 1
True
[ 0.14530068 0.75869609 0.18910162] 2
False
[ 0.32993191 0.10151537 0.63338155] 2
True
[ 0.28262165 0.38944043 0.32619988] 1
True
[ 0.33901827 0.13942199 0.54238998] 0
False
[ 0.31004316 0.30817283 0.40205498] 2
True
[ 0.14307213 0.62486671 0.34695501] 2
False
[ 0.35769317 0.07335668 0.68097354] 2
True
[ 0.48238653 0.09010327 0.50786738] 0
False
[ 0.56593093 0.02230495 0.72549672] 2
True
[ 0.12690721 0.27756082 0.6381695 ] 2
True
[ 0.41199461 0.14314538 0.48380367] 2
True
[ 0.06512161 0.88723287 0.22422858] 1
True
[ 0.14695439 0.22885705 0.64982945] 2
True
[ 0.08972259 0.33923047 0.70745823] 2
True
[ 0.32299378 0.11969496 0.61702588] 0
False
[ 0.45824171 0.10409923 0.5317355 ] 0
False
[ 0.34697897 0.42948262 0.27258578] 1
True
[ 0.37088255 0.52645504 0.16309093] 1
True
[ 0.45963803 0.02062189 0.81470779] 2
True
[ 0.27425513 0.45876425 0.30484975] 1
True
[ 0.52416121 0.08067042 0.49326974] 2
False
[ 0.34941609 0.25339673 0.38079487] 2
True
[ 0.34498527 0.11989391 0.59087937] 0
False
[ 0.49372965 0.06693018 0.59358609] 2
True
[ 0.2127114 0.63608943 0.28498429] 1
True
[ 0.39362302 0.22323947 0.41466386] 2
True
[ 0.32097087 0.22252808 0.47300849] 0
False
[ 0.48628604 0.10631325 0.49120187] 2
True
[ 0.27620931 0.65434185 0.18259725] 1
True
[ 0.48647499 0.11893484 0.48585849] 0
True
[ 0.11487247 0.38714147 0.5700167 ] 2
True
[ 0.19321224 0.67902769 0.25205738] 2
False
[ 0.23924919 0.54795623 0.28338785] 1
True
[ 0.40669785 0.21273896 0.37778071] 0
True
[ 0.41215054 0.13728863 0.50458361] 2
True
[ 0.5403153 0.08655831 0.50541545] 2
False
[ 0.24741981 0.20536312 0.60119988] 2
True
[ 0.2096986 0.58502136 0.29641757] 1
True
[ 0.54531512 0.07489994 0.50237737] 0
True
[ 0.25136862 0.23540724 0.53229092] 2
True
[ 0.11330104 0.64436995 0.37240127] 1
True
[ 0.29622624 0.09997703 0.67100582] 2
True
[ 0.40104066 0.13533337 0.4409787 ] 2
True
[ 0.44722526 0.091898 0.5332043 ] 2
True
[ 0.39198924 0.18748081 0.41424319] 2
True
[ 0.39017026 0.05162796 0.71950065] 2
True
[ 0.43520143 0.09387636 0.56056346] 2
True
[ 0.32217504 0.14478947 0.6120244 ] 2
True
[ 0.26316214 0.42186806 0.31439364] 1
True
[ 0.19044343 0.47438711 0.35370014] 1
True
[ 0.30390263 0.37546475 0.30544723] 1
True
[ 0.51369829 0.02245419 0.77452841] 2
True
[ 0.34383762 0.29274248 0.38154372] 0
False
[ 0.47346168 0.15840927 0.43114619] 0
True
[ 0.48758141 0.01227641 0.85901056] 2
True
[ 0.55683449 0.11649558 0.39379508] 0
True
[ 0.5151466 0.0716158 0.54460981] 0
False
[ 0.21824343 0.61776338 0.26914336] 1
True
[ 0.42099841 0.08001435 0.58522926] 2
True
[ 0.40783868 0.1448474 0.46138932] 2
True
[ 0.25161284 0.62892846 0.17356544] 1
True
[ 0.24459813 0.55694615 0.2474683 ] 0
False
[ 0.3043811 0.63952145 0.12941166] 1
True
[ 0.57856981 0.07120057 0.45646224] 0
True
[ 0.21268696 0.59247894 0.27737269] 1
True
[ 0.46208944 0.11689321 0.44031466] 2
False
[ 0.19830084 0.28112772 0.52790956] 2
True
[ 0.48002663 0.1387619 0.43155748] 0
True
[ 0.39648493 0.24612488 0.33105729] 0
True
[ 0.44968139 0.16872509 0.46605827] 0
False
[ 0.46379199 0.05414518 0.66763554] 2
True
[ 0.44040107 0.21511707 0.31932972] 0
True
[ 0.31990249 0.22322461 0.41865715] 2
True
[ 0.38173579 0.19792065 0.43921387] 2
True
[ 0.25644542 0.26586463 0.49897385] 2
True
[ 0.39005441 0.35886987 0.26238019] 0
True
[ 0.53944969 0.10297347 0.44267232] 0
True
[ 0.34320618 0.19493889 0.45315037] 2
True
[ 0.32364065 0.67325092 0.13429511] 1
True
[ 0.49958887 0.03791556 0.63172399] 2
True
[ 0.56913227 0.01386289 0.80912082] 2
True
[ 0.4013734 0.31950849 0.26309091] 0
True
[ 0.45973784 0.21316835 0.29605279] 0
True
[ 0.39220285 0.4161655 0.1823163 ] 1
True
[ 0.51994609 0.09286354 0.43431781] 0
True
[ 0.32568147 0.40090943 0.26418067] 1
True
[ 0.34294522 0.65564191 0.14565335] 1
True
[ 0.56009222 0.07903258 0.436746 ] 2
False
[ 0.39482114 0.2166909 0.34692199] 0
True
[ 0.32876199 0.37138293 0.26548985] 1
True
[ 0.29116928 0.43979649 0.24017952] 1
True
[ 0.40978698 0.18265063 0.38570038] 0
True
[ 0.50170804 0.16998431 0.34229532] 0
True
[ 0.30142207 0.77315239 0.0868713 ] 1
True
[ 0.40692225 0.36366876 0.21760846] 0
True
[ 0.42669488 0.31131328 0.26452684] 1
False
[ 0.28996207 0.38177954 0.26952462] 1
True
[ 0.50670279 0.17199213 0.3515196 ] 0
True
[ 0.30847134 0.48195133 0.20103012] 1
True
[ 0.33520036 0.69537789 0.11197023] 1
True
[ 0.42414026 0.23807488 0.32911277] 0
True
[ 0.44951268 0.18963662 0.34975453] 2
False
[ 0.16385211 0.15333411 0.69127403] 2
True
[ 0.32019414 0.25957646 0.37232674] 2
True
[ 0.55784646 0.08570646 0.43731022] 0
True
[ 0.2066359 0.55899724 0.26405604] 1
True
[ 0.2010469 0.47158773 0.31915453] 1
True
[ 0.12337611 0.32673313 0.55088275] 2
True
[ 0.19482441 0.48344286 0.31309969] 1
True
[ 0.10381603 0.29350629 0.64457047] 2
True
[ 0.40368588 0.15203706 0.44895338] 2
True
[ 0.19846102 0.55078079 0.24183719] 1
True
[ 0.24915126 0.16656262 0.57083775] 2
True
[ 0.27442118 0.60661885 0.20507853] 0
False
[ 0.39807537 0.18233138 0.38390086] 0
True
[ 0.47288147 0.13362208 0.42502313] 2
False
[ 0.16999972 0.68687454 0.21754582] 1
True
[ 0.17332661 0.7419439 0.22657919] 2
False
[ 0.15540094 0.568145 0.37714746] 1
True
[ 0.21703626 0.42042046 0.38569655] 1
True
[ 0.38325603 0.09211362 0.57387959] 2
True
[ 0.46239703 0.15052712 0.44957929] 0
True
[ 0.48322464 0.02797445 0.74242368] 2
True
[ 0.37658262 0.16009082 0.46021847] 0
False
[ 0.42724448 0.32586575 0.2742073 ] 0
True
[ 0.41956679 0.16930748 0.42124681] 2
True
[ 0.52875332 0.16042503 0.40030857] 0
True
[ 0.53775253 0.0945734 0.45705695] 0
True
[ 0.24211894 0.38783108 0.38620208] 1
True
[ 0.57947549 0.11900933 0.39143912] 0
True
[ 0.52550368 0.15643716 0.33970335] 0
True
[ 0.18461701 0.68886789 0.19652173] 1
True
[ 0.3964453 0.20485734 0.34755307] 0
True
[ 0.40118833 0.24986852 0.33336731] 0
True
[ 0.3618407 0.48102023 0.21659244] 1
True
[ 0.44196502 0.14211382 0.44056944] 2
False
[ 0.3857871 0.26225518 0.31796218] 0
True
[ 0.31517005 0.40869661 0.27287356] 1
True
[ 0.18113338 0.15438718 0.64118819] 2
True
[ 0.32402994 0.58649948 0.21187329] 1
True
[ 0.3488769 0.39767349 0.24737015] 1
True
[ 0.55287719 0.12329376 0.36872551] 0
True
[ 0.28735816 0.65872218 0.15142844] 1
True
[ 0.42731173 0.20356824 0.37398586] 2
False
[ 0.41076455 0.20358136 0.37708713] 0
True
[ 0.27167873 0.71491269 0.1297546 ] 1
True
[ 0.23454476 0.36816331 0.38981477] 2
True
[ 0.28899592 0.63464817 0.17463832] 1
True
[ 0.32313757 0.27968931 0.40108595] 2
True
[ 0.22466928 0.70060799 0.16240688] 1
True
[ 0.42607744 0.07687118 0.55075007] 0
False
[ 0.48302681 0.09600295 0.5393162 ] 0
False
[ 0.35830047 0.47710187 0.19678907] 1
True
[ 0.46977424 0.10864084 0.44516481] 0
True
[ 0.42339338 0.18366147 0.39630554] 1
False
[ 0.20348201 0.55765145 0.23575421] 1
True
[ 0.24832961 0.46923179 0.23065688] 1
True
[ 0.15702514 0.73911933 0.19530745] 1
True
[ 0.47404254 0.1820532 0.35411983] 0
True
[ 0.4337019 0.14447769 0.43506972] 2
True
[ 0.29012239 0.6504871 0.16133731] 0
False
[ 0.43407664 0.13632289 0.41040471] 0
True
[ 0.4280067 0.25637393 0.31489784] 0
True
[ 0.18249819 0.69728005 0.18013937] 1
True
[ 0.42045577 0.33801521 0.20345872] 0
True
[ 0.14769622 0.3379212 0.48244376] 2
True
[ 0.44961871 0.10864042 0.46044094] 2
True
[ 0.40671011 0.20783022 0.38949509] 0
True
[ 0.35479434 0.26977344 0.30338612] 1
False
[ 0.30938229 0.30460932 0.37845387] 2
True
[ 0.57763188 0.03927313 0.62554066] 2
True
[ 0.39164231 0.19496776 0.40744325] 2
True
[ 0.15657534 0.74997781 0.20853473] 2
False
[ 0.12805232 0.1613239 0.77522405] 2
True
[ 0.26887067 0.33501039 0.3961899 ] 2
True
[ 0.17767969 0.60166803 0.28411377] 1
True
[ 0.43187757 0.04700526 0.68067792] 2
True
[ 0.27247072 0.45336923 0.28316741] 0
False
[ 0.19663908 0.62368065 0.26515443] 1
True
[ 0.4494156 0.09260932 0.53928671] 2
True
[ 0.27755112 0.41486402 0.27628941] 1
True
[ 0.43254178 0.11820314 0.48301056] 2
True
[ 0.31962599 0.31631757 0.32436644] 0
False
[ 0.49623588 0.28409383 0.2139342 ] 0
True
[ 0.39885981 0.23096394 0.36175682] 0
True
[ 0.2341735 0.59320533 0.21043717] 1
True
[ 0.39453754 0.19686426 0.37808049] 2
False
[ 0.25578949 0.66914884 0.16575866] 2
False
[ 0.38041047 0.16917907 0.45781567] 2
True
[ 0.31802781 0.39883786 0.3043274 ] 0
False
[ 0.42100898 0.13199783 0.45196117] 2
True
[ 0.5037952 0.12605171 0.41105799] 2
False
[ 0.38525141 0.34852636 0.24498551] 1
False
[ 0.41561361 0.08255433 0.5948383 ] 0
False
[ 0.22347007 0.51603075 0.27315179] 1
True
[ 0.45395849 0.10997827 0.51519585] 0
False
[ 0.31815365 0.38449659 0.28281023] 1
True
[ 0.1854409 0.1339476 0.70082241] 2
True
[ 0.50658919 0.06761087 0.51191919] 2
True
[ 0.42289369 0.15161159 0.45954428] 0
False
[ 0.32505977 0.77003382 0.10365075] 1
True
[ 0.38491201 0.42519726 0.18513963] 1
True
[ 0.42800977 0.18634113 0.41803795] 0
True
[ 0.44205568 0.16159613 0.43223015] 2
False
[ 0.19570786 0.68430677 0.18074075] 1
True
[ 0.40512252 0.24980106 0.3414513 ] 0
True
[ 0.60142494 0.08688543 0.39485815] 0
True
[ 0.42602751 0.08933663 0.53824401] 2
True
[ 0.51730836 0.10699032 0.45562763] 0
True
[ 0.30276985 0.64685253 0.15848934] 1
True
[ 0.60989385 0.06843395 0.47115096] 2
False
[ 0.49661843 0.28162036 0.23521859] 0
True
[ 0.58666321 0.03974765 0.61835413] 2
True
[ 0.4150882 0.2161182 0.34941488] 0
True
[ 0.44261478 0.52433743 0.13896903] 1
True
[ 0.29947906 0.48807226 0.19722185] 1
True
[ 0.48885071 0.03775842 0.69583771] 2
True
[ 0.37245022 0.24296893 0.34405457] 2
False
[ 0.27205139 0.34317384 0.35214437] 1
False
[ 0.44334564 0.08644867 0.51377148] 0
False
[ 0.48297316 0.05656177 0.61779706] 2
True
[ 0.36274366 0.43103006 0.20640692] 1
True
[ 0.27381612 0.64418912 0.17692004] 1
True
[ 0.43766609 0.10275595 0.49965853] 2
True
[ 0.42918116 0.12981724 0.43944432] 0
False
[ 0.36227159 0.22445968 0.43324566] 2
True
[ 0.39893313 0.40558717 0.2350959 ] 1
True
[ 0.1639363 0.60243915 0.24949961] 1
True
[ 0.4373217 0.14879235 0.41816227] 2
False
[ 0.48544202 0.04410491 0.66184802] 2
True
[ 0.2891609 0.60878189 0.15646929] 1
True
[ 0.34216134 0.24939121 0.36344581] 2
True
[ 0.43279845 0.12831873 0.42416968] 2
False
[ 0.46003929 0.14088866 0.46718996] 2
True
[ 0.23641603 0.44473642 0.32311952] 0
False
[ 0.24446514 0.63741882 0.1776902 ] 1
True
[ 0.50789532 0.07273102 0.51486332] 0
False
[ 0.42772185 0.15631092 0.44470415] 0
False
[ 0.21951886 0.36787497 0.39989413] 2
True
[ 0.43770776 0.15801509 0.46847249] 2
True
[ 0.25376078 0.57369673 0.21914408] 1
True
[ 0.17802115 0.54992114 0.3071994 ] 1
True
[ 0.16683565 0.14868637 0.70553375] 2
True
[ 0.29856801 0.56707615 0.18637142] 1
True
[ 0.40727235 0.27186724 0.27822454] 0
True
[ 0.42342697 0.26581489 0.3144929 ] 2
False
[ 0.27920449 0.45397575 0.24314709] 1
True
[ 0.1287546 0.25336775 0.58405365] 2
True
[ 0.30411588 0.21944399 0.44998343] 0
False
[ 0.26602129 0.35909402 0.32802375] 1
True
[ 0.42541553 0.37380606 0.2176217 ] 1
False
[ 0.48749798 0.09533051 0.46466155] 2
False
[ 0.34696678 0.16984997 0.48265708] 0
False
[ 0.63367724 0.08154858 0.39397299] 0
True
[ 0.1675413 0.60115562 0.27484878] 1
True
[ 0.38308403 0.18446441 0.42758113] 2
True
[ 0.17093498 0.5466323 0.27629828] 1
True
[ 0.37329088 0.41167983 0.22660673] 0
False
[ 0.44371668 0.16872874 0.39212109] 2
False
[ 0.3432233 0.5009305 0.19383268] 1
True
[ 0.48485045 0.13580946 0.38449596] 0
True
[ 0.44343466 0.18701503 0.39741618] 0
True
[ 0.25519558 0.41719203 0.32205005] 1
True
[ 0.39664868 0.28125433 0.30517372] 0
True
[ 0.36204079 0.33344281 0.30311052] 0
True
[ 0.39195799 0.24765341 0.34741478] 2
False
[ 0.44864057 0.14970266 0.44036795] 2
False
[ 0.44926217 0.13067207 0.46088566] 0
False
[ 0.37417374 0.44340377 0.24674342] 1
True
[ 0.42261048 0.22748222 0.36121369] 0
True
[ 0.42654769 0.2061759 0.39185085] 0
True
[ 0.25041861 0.57902126 0.21077906] 1
True
[ 0.19112085 0.68500462 0.1949907 ] 1
True
[ 0.3868743 0.21742422 0.32713445] 0
True
[ 0.4150436 0.17258741 0.41052702] 2
False
[ 0.47347981 0.13006932 0.46816557] 2
False
[ 0.46846338 0.13009537 0.45899129] 2
False
[ 0.45129814 0.11515366 0.47399867] 0
False
[ 0.38743316 0.13770208 0.50104771] 2
True
[ 0.18668571 0.11032196 0.74505795] 2
True
[ 0.41024486 0.15527704 0.42005667] 2
True
[ 0.36251163 0.36653644 0.25119797] 1
True
[ 0.45865545 0.111111 0.47864281] 0
False
[ 0.31150915 0.49302242 0.22780014] 0
False
[ 0.22144298 0.10915231 0.70285256] 2
True
[ 0.36492692 0.21858818 0.3989142 ] 2
True
[ 0.29649188 0.47576221 0.2376867 ] 1
True
[ 0.46689929 0.3367657 0.19717657] 1
False
[ 0.38856247 0.34319865 0.26921468] 0
True
[ 0.48290431 0.01682682 0.82614742] 2
True
[ 0.4179838 0.54271793 0.14572158] 0
False
[ 0.54144328 0.19631388 0.28407041] 1
False
[ 0.61314656 0.09597251 0.38519248] 0
True
[ 0.35139247 0.53600221 0.17612817] 0
False
[ 0.42593167 0.41189515 0.17523696] 0
True
[ 0.51506558 0.3228821 0.20006328] 0
True
[ 0.51166133 0.37954028 0.20024863] 0
True
[ 0.43504251 0.46617807 0.15402619] 1
True
[ 0.6787093 0.09605741 0.33599586] 0
True
[ 0.57536464 0.22053568 0.21613831] 0
True
[ 0.37004822 0.40938994 0.22555964] 1
True
[ 0.51651548 0.09931185 0.42481666] 2
False
[ 0.37546566 0.27349852 0.33806274] 2
False
[ 0.34424373 0.25196114 0.37960975] 1
False
[ 0.37563322 0.20148266 0.3930444 ] 2
True
[ 0.26439934 0.74592915 0.12754253] 1
True
[ 0.2001187 0.80302105 0.14169663] 1
True
[ 0.48764518 0.13818021 0.40711835] 0
True
[ 0.17684958 0.17449261 0.66046951] 2
True
[ 0.48398327 0.10033642 0.51523255] 0
False
[ 0.37310939 0.13506151 0.47020877] 2
True
[ 0.57016903 0.14813679 0.29531694] 0
True
[ 0.40018525 0.39593185 0.22716013] 0
True
[ 0.35348302 0.52476229 0.17225943] 1
True
[ 0.3846232 0.66596125 0.10596909] 1
True
[ 0.51644902 0.09720245 0.43145223] 0
True
[ 0.27961396 0.38627503 0.29186104] 1
True
[ 0.41673475 0.39383116 0.20190146] 1
False
[ 0.33198357 0.3752145 0.24536858] 1
True
[ 0.37886638 0.17891238 0.3957309 ] 2
True
[ 0.47622292 0.07380056 0.54919476] 2
True
[ 0.21128059 0.14915412 0.64007039] 2
True
[ 0.10559416 0.840242 0.14687258] 1
True
[ 0.21268031 0.4848845 0.32351854] 1
True
[ 0.36968323 0.48005888 0.17866679] 0
False
[ 0.30379376 0.67846153 0.13506704] 1
True
[ 0.1541608 0.24712633 0.60049388] 2
True
[ 0.34683754 0.64425041 0.11700817] 1
True
[ 0.39698248 0.44403286 0.1735218 ] 0
False
[ 0.46467023 0.36586595 0.21156548] 0
True
[ 0.43709898 0.21136536 0.35690229] 0
True
[ 0.44290776 0.43328626 0.19130935] 0
True
[ 0.54456012 0.07971278 0.45181092] 2
False
[ 0.38516933 0.67058213 0.11737136] 1
True
[ 0.56240185 0.11518195 0.35680265] 0
True
[ 0.49119051 0.13616111 0.37429817] 2
False
[ 0.28608666 0.10205015 0.63143263] 2
True
[ 0.42152096 0.13424142 0.4622951 ] 2
True
[ 0.44716517 0.23655223 0.30118703] 1
False
[ 0.42542205 0.18631072 0.42265979] 2
False
[ 0.19912505 0.60168137 0.24743228] 1
True
[ 0.48821866 0.16974762 0.34019336] 0
True
[ 0.37411807 0.50844892 0.17022697] 1
True
[ 0.48585667 0.09520764 0.47542934] 0
True
[ 0.56186536 0.0922135 0.43307163] 0
True
[ 0.26774515 0.32570887 0.38422098] 2
True
[ 0.04460934 0.98723014 0.04301434] 1
True
[ 0.26538423 0.52705021 0.23342371] 1
True
[ 0.49804375 0.01703256 0.82033895] 2
True
[ 0.41952922 0.22465987 0.39353404] 0
True
[ 0.53958881 0.144028 0.37710722] 0
True
[ 0.45312929 0.12139701 0.49268701] 2
True
[ 0.42845014 0.1221136 0.45940319] 2
True
[ 0.4937387 0.11026362 0.40917144] 0
True
[ 0.17787233 0.54632975 0.33177092] 1
True
[ 0.27996177 0.55044029 0.20055734] 1
True
[ 0.28235749 0.50374284 0.22764538] 1
True
[ 0.15565513 0.75835615 0.20982028] 2
False
[ 0.25778236 0.60434795 0.20042374] 1
True
[ 0.40151115 0.14569586 0.4727308 ] 2
True
[ 0.34305829 0.27064557 0.3749533 ] 1
False
[ 0.31371225 0.22130696 0.40654112] 2
True
[ 0.41307051 0.03956214 0.75250231] 2
True
[ 0.32124307 0.41641013 0.30565782] 1
True
[ 0.24082742 0.41750626 0.28555733] 0
False
[ 0.13275406 0.9185051 0.08285623] 1
True
[ 0.26618696 0.54217085 0.23779067] 1
True
[ 0.30390325 0.54733039 0.18992113] 1
True
[ 0.44520021 0.11370599 0.48474765] 2
True
[ 0.41141257 0.15619126 0.43579902] 2
True
[ 0.20686502 0.73602749 0.15695183] 1
True
[ 0.18286456 0.42879171 0.37516606] 1
True
[ 0.38180613 0.24468637 0.3198266 ] 1
False
[ 0.33530504 0.28990991 0.36208273] 2
True
[ 0.29531251 0.21663832 0.43688326] 2
True
[ 0.50733709 0.0472979 0.64203643] 2
True
[ 0.47089497 0.11137452 0.46164263] 0
True
[ 0.18956211 0.41553702 0.39127994] 1
True
[ 0.19715901 0.3827188 0.41609522] 2
True
[ 0.23204026 0.42880607 0.32814268] 1
True
[ 0.15360031 0.49028606 0.37145251] 1
True
[ 0.30664268 0.06086005 0.71957101] 2
True
[ 0.38127159 0.14670319 0.47793641] 0
False
[ 0.28684584 0.34616868 0.3358468 ] 0
False
[ 0.3314172 0.08631215 0.65427998] 2
True
[ 0.15407729 0.85433888 0.11680829] 1
True
[ 0.35936112 0.22263174 0.37639132] 2
True
[ 0.18488515 0.75849094 0.18757229] 1
True
[ 0.36675574 0.20669964 0.41785233] 2
True
[ 0.31538137 0.07504721 0.68257483] 2
True
[ 0.41594811 0.17227877 0.42229275] 2
True
[ 0.10188996 0.82328976 0.25275956] 2
False
[ 0.32762209 0.08980046 0.66089062] 2
True
[ 0.22238447 0.68819981 0.23405114] 1
True
[ 0.41407155 0.109441 0.509641 ] 0
False
[ 0.16307981 0.45110076 0.37208849] 1
True
[ 0.36527481 0.18160383 0.43825066] 0
False
[ 0.43371489 0.04015794 0.72609093] 2
True
[ 0.17619041 0.60655836 0.30061843] 1
True
[ 0.40411844 0.20207464 0.36849563] 0
True
[ 0.46468497 0.0801399 0.54691021] 0
False
[ 0.4479692 0.25216519 0.33809112] 2
False
[ 0.46348174 0.10196328 0.49598567] 0
False
[ 0.44667779 0.08339914 0.57326166] 2
True
[ 0.55992503 0.07639617 0.50931185] 0
True
[ 0.3478008 0.5295971 0.23084815] 1
True
[ 0.31036974 0.56463734 0.19969602] 1
True
[ 0.36425492 0.35015285 0.29676199] 0
True
[ 0.4499604 0.23144022 0.33650532] 0
True
[ 0.56240677 0.11735766 0.3945093 ] 0
True
[ 0.41601265 0.20699325 0.36879905] 0
True
[ 0.32603603 0.29853393 0.31933066] 1
False
[ 0.37433453 0.60371369 0.1273511 ] 1
True
[ 0.21547017 0.15306725 0.64720418] 2
True
[ 0.35682498 0.5413272 0.16088547] 1
True
[ 0.47067282 0.2224456 0.3160379 ] 0
True
[ 0.50460298 0.26206427 0.25355435] 1
False
[ 0.24020076 0.79499372 0.10185126] 1
True
[ 0.44564136 0.27480195 0.24949555] 0
True
[ 0.39064681 0.29151952 0.29779154] 0
True
[ 0.43752019 0.34756762 0.19860146] 0
True
[ 0.38625777 0.32713065 0.2389588 ] 0
True
[ 0.55353868 0.15654462 0.32321053] 0
True
[ 0.38530304 0.6549067 0.10660219] 1
True
[ 0.41401745 0.50886434 0.14590425] 1
True
[ 0.43667062 0.31035733 0.22313846] 0
True
[ 0.35740035 0.58954036 0.14462983] 1
True
[ 0.28291316 0.47497215 0.23004029] 1
True
[ 0.52370524 0.1726436 0.30747412] 2
False
[ 0.4224875 0.10583151 0.51612706] 2
True
[ 0.47090251 0.15542808 0.34423809] 0
True
[ 0.4773587 0.16641485 0.3766571 ] 2
False
[ 0.36280638 0.30304928 0.31365327] 2
False
[ 0.40696696 0.20333172 0.38475896] 0
True
[ 0.44699148 0.2299702 0.33872712] 0
True
[ 0.40763359 0.1757111 0.43817164] 0
False
[ 0.24720406 0.60847049 0.23433883] 1
True
[ 0.39895278 0.40165455 0.21518427] 1
True
[ 0.28418949 0.50366963 0.24556392] 1
True
[ 0.2604229 0.63337865 0.1642186 ] 1
True
[ 0.4744187 0.13014975 0.44051597] 0
True
[ 0.2784914 0.58059494 0.19470002] 1
True
[ 0.19880182 0.84065328 0.12145242] 1
True
[ 0.31986821 0.29479712 0.3663325 ] 2
True
[ 0.26489314 0.71168983 0.12657743] 1
True
[ 0.3638339 0.16814817 0.46726081] 0
False
[ 0.33058864 0.38720214 0.31345157] 0
False
[ 0.39643256 0.32435214 0.30189498] 2
False
[ 0.55307561 0.13216006 0.32670538] 0
True
[ 0.52385461 0.20012578 0.31265982] 0
True
[ 0.30185604 0.34484059 0.3639179 ] 2
True
[ 0.36813521 0.10860725 0.55722069] 2
True
[ 0.49485692 0.11568249 0.47450315] 2
False
[ 0.46804991 0.13863961 0.4524952 ] 2
False
[ 0.34528919 0.3960093 0.24819252] 1
True
[ 0.28576846 0.27419203 0.42103815] 2
True
[ 0.49242885 0.04230742 0.65104891] 2
True
[ 0.10657461 0.90056906 0.12805617] 1
True
[ 0.37663222 0.20206506 0.39134957] 0
False
[ 0.31949688 0.42326016 0.2667389 ] 1
True
[ 0.24012599 0.68263255 0.17304949] 1
True
[ 0.45974328 0.1805871 0.38651527] 2
False
[ 0.26721047 0.27913001 0.44127605] 2
True
[ 0.26784206 0.13962419 0.59226507] 2
True
[ 0.13842553 0.67748696 0.35988215] 2
False
[ 0.42214271 0.13349641 0.53742345] 0
False
[ 0.48496493 0.10510655 0.5105705 ] 0
False
[ 0.40653784 0.06643714 0.68337474] 2
True
[ 0.26022266 0.54283073 0.23252703] 1
True
[ 0.23860178 0.47044621 0.33065148] 1
True
[ 0.30658132 0.2146264 0.53186245] 2
True
[ 0.44628751 0.12965919 0.43600913] 0
True
[ 0.26730962 0.24181167 0.5141021 ] 2
True
[ 0.35895795 0.08906806 0.61708868] 2
True
[ 0.49410817 0.10935823 0.46332484] 2
False
[ 0.18607964 0.5938572 0.29230766] 1
True
[ 0.17698956 0.41925675 0.44599554] 1
False
[ 0.37130269 0.19237399 0.41677581] 0
False
[ 0.16613656 0.82850055 0.128948 ] 1
True
[ 0.3839794 0.31087873 0.29370063] 0
True
[ 0.43381689 0.07476667 0.58496951] 2
True
[ 0.35952101 0.30947039 0.30992786] 1
False
[ 0.1559878 0.69347152 0.28508916] 2
False
[ 0.31428891 0.25758444 0.3818171 ] 2
True
[ 0.33650688 0.44114043 0.27622455] 1
True
[ 0.42032169 0.11873063 0.51507085] 2
True
[ 0.18360511 0.68412945 0.23212802] 1
True
[ 0.44920729 0.1502668 0.45592725] 0
False
[ 0.4665729 0.10877166 0.48707283] 2
True
[ 0.19339558 0.55883659 0.30296694] 1
True
[ 0.50426421 0.11510566 0.4156323 ] 0
True
[ 0.37595806 0.41739055 0.22326251] 1
True
[ 0.23290577 0.2208594 0.5623357 ] 2
True
[ 0.24321724 0.46477303 0.32378311] 0
False
[ 0.45972761 0.09722097 0.49280713] 0
False
[Output truncated: several thousand repeated rows of the form `[score_0 score_1 score_2] true_label` followed by `True`/`False`, where the flag records whether the largest of the three printed scores coincides with the printed true label (an integer in {0, 1, 2}); a running sample counter is printed every 1000 iterations (e.g. `3000`, `4000`).]
True
[ 0.48265171 0.08468918 0.47737143] 2
False
[ 0.47270577 0.10415763 0.46908206] 2
False
[ 0.19379993 0.12078348 0.70247027] 2
True
[ 0.49217776 0.16595718 0.36160184] 0
True
[ 0.27028057 0.38747955 0.29960821] 1
True
[ 0.2758126 0.64612787 0.16567161] 1
True
[ 0.40789754 0.38627165 0.21887333] 0
True
[ 0.31045553 0.31967517 0.3731199 ] 2
True
[ 0.32990878 0.23990462 0.46469389] 2
True
[ 0.39092949 0.19357424 0.36105474] 0
True
[ 0.44385869 0.17959772 0.34544004] 0
True
[ 0.36940793 0.25751196 0.33107772] 0
True
[ 0.55817087 0.12284268 0.33956734] 0
True
[ 0.49153813 0.22650551 0.29838273] 0
True
[ 0.37100311 0.47782517 0.19071636] 1
True
[ 0.31424496 0.57802209 0.16081957] 1
True
[ 0.5392012 0.15518865 0.32683121] 0
True
[ 0.24612907 0.57036404 0.22375238] 1
True
[ 0.28948627 0.30468232 0.40019098] 2
True
[ 0.56807167 0.01579945 0.7512259 ] 2
True
[ 0.27449629 0.39426715 0.27148841] 1
True
[ 0.55147638 0.07260547 0.48395868] 0
True
[ 0.39384736 0.2783259 0.29621591] 2
False
[ 0.26021475 0.67276184 0.13856051] 1
True
[ 0.2971741 0.62688934 0.15356634] 1
True
[ 0.42513841 0.17579328 0.39819785] 0
True
[ 0.2662448 0.43973534 0.31846428] 1
True
[ 0.47645519 0.15978582 0.32668001] 1
False
[ 0.5819062 0.02193329 0.67128938] 2
True
[ 0.49895005 0.15307204 0.35356115] 2
False
[ 0.38545918 0.08422264 0.59707957] 2
True
[ 0.48865326 0.12293668 0.39293865] 0
True
[ 0.26405662 0.23984818 0.49579274] 2
True
[ 0.38523412 0.33245726 0.27540208] 1
False
[ 0.39524969 0.18312838 0.42112251] 0
False
[ 0.40317366 0.25799653 0.34856649] 0
True
[ 0.48018963 0.16035817 0.35728665] 2
False
[ 0.20964786 0.53244394 0.25763651] 1
True
[ 0.27160534 0.65088423 0.14246405] 1
True
[ 0.39621877 0.09716897 0.53920928] 2
True
[ 0.13935143 0.80499946 0.16910875] 2
False
[ 0.25577046 0.72989087 0.12811095] 1
True
[ 0.18313352 0.85621124 0.11271281] 1
True
[ 0.3242151 0.4241871 0.26505745] 0
False
[ 0.39703977 0.17771072 0.39005751] 0
True
[ 0.35008393 0.17862046 0.50493839] 2
True
[ 0.299509 0.58898465 0.18077642] 1
True
[ 0.15254403 0.64913846 0.28645835] 1
True
[ 0.4012955 0.21755638 0.35198851] 0
True
[ 0.45255689 0.17949131 0.42417156] 0
True
[ 0.43217464 0.12755207 0.49555505] 2
True
[ 0.25448474 0.42117193 0.30991447] 1
True
[ 0.45439412 0.14731469 0.36377002] 0
True
[ 0.47798858 0.21308924 0.30608267] 2
False
[ 0.45514614 0.1455676 0.41719365] 2
False
[ 0.39494333 0.32526434 0.27827412] 0
True
[ 0.38524927 0.3048937 0.30435469] 0
True
[ 0.3624292 0.46433484 0.18961798] 1
True
[ 0.27558477 0.43202647 0.29642335] 1
True
[ 0.26939266 0.47280634 0.24085348] 1
True
[ 0.42295769 0.41519389 0.19386615] 1
False
[ 0.32064902 0.64712248 0.1391496 ] 1
True
[ 0.5551228 0.03197538 0.615735 ] 2
True
[ 0.30034142 0.55877063 0.18436127] 1
True
[ 0.42439165 0.18573818 0.36258197] 2
False
[ 0.42922447 0.09073037 0.56742922] 2
True
[ 0.35576869 0.37499163 0.23982612] 1
True
[ 0.21821107 0.34807918 0.41657842] 2
True
[ 0.40109613 0.11023284 0.50613403] 0
False
[ 0.18071168 0.42398408 0.43623255] 2
True
[ 0.29548625 0.24633784 0.42215588] 2
True
[ 0.1783976 0.74678705 0.16561668] 1
True
[ 0.29391104 0.18554749 0.50836202] 2
True
[ 0.31668422 0.37572394 0.2838651 ] 1
True
[ 0.35529247 0.20623917 0.41768271] 2
True
[ 0.43540463 0.17202044 0.44164109] 0
False
[ 0.47150419 0.1336214 0.40656671] 2
False
[ 0.44722434 0.12963491 0.45845665] 0
False
[ 0.33608363 0.23846893 0.39066106] 2
True
[ 0.28046933 0.23487033 0.46696807] 2
True
[ 0.23880624 0.37622468 0.35564134] 1
True
[ 0.40454316 0.12849011 0.48240799] 2
True
[ 0.42647675 0.17859702 0.4217542 ] 0
True
[ 0.33666662 0.21161295 0.41596212] 2
True
[ 0.45076817 0.1281344 0.42425128] 2
False
[ 0.29947337 0.28807204 0.40428831] 2
True
[ 0.34293454 0.44349975 0.24125819] 1
True
[ 0.32947608 0.17278658 0.50143689] 2
True
[ 0.25195219 0.17859209 0.54155074] 2
True
[ 0.43151966 0.12004681 0.47293836] 2
True
[ 0.31588561 0.34128598 0.35233145] 0
False
[ 0.47711865 0.09657486 0.4904533 ] 0
False
[ 0.43527839 0.17955505 0.34970508] 0
True
[ 0.26481354 0.73207778 0.12858871] 1
True
[ 0.38314353 0.32306889 0.29038165] 1
False
[ 0.2854905 0.64332858 0.15266729] 1
True
[ 0.28734448 0.16830944 0.51792293] 2
True
[ 0.2628163 0.61109337 0.18504451] 1
True
[ 0.26757188 0.34409274 0.36640358] 2
True
[ 0.39473205 0.23864401 0.3731876 ] 1
False
[ 0.15305959 0.46251621 0.37478019] 1
True
[ 0.17650101 0.45631484 0.427425 ] 2
False
[ 0.36514528 0.18230972 0.43056491] 0
False
[ 0.5247956 0.11691329 0.40044538] 0
True
[ 0.44225131 0.09394418 0.46931734] 0
False
[ 0.32659845 0.25043466 0.4002466 ] 2
True
[ 0.46758862 0.13949546 0.41378943] 0
True
[ 0.36892365 0.18630787 0.46869578] 0
False
[ 0.34062538 0.32364601 0.30984777] 1
False
[ 0.39569115 0.3473912 0.23945671] 0
True
[ 0.46235642 0.16160866 0.40497331] 0
True
[ 0.40846144 0.14582473 0.39691636] 2
False
[ 0.4683749 0.05725182 0.63229999] 2
True
[ 0.23635258 0.64307307 0.1827919 ] 1
True
[ 0.11084305 0.73324867 0.2954359 ] 1
True
[ 0.30094688 0.24544697 0.37468633] 2
True
[ 0.33730939 0.49202785 0.1965947 ] 1
True
[ 0.13363518 0.22155344 0.66381764] 2
True
[ 0.33475864 0.26834545 0.39837226] 2
True
[ 0.3560978 0.24636353 0.37182931] 0
False
[ 0.37443678 0.06913132 0.62623445] 2
True
[ 0.25743431 0.56940451 0.23745085] 0
False
[ 0.40019004 0.19984167 0.38475204] 0
True
[ 0.44820108 0.07329259 0.5812993 ] 2
True
[ 0.2256917 0.49016869 0.29369872] 1
True
[ 0.49756488 0.02126535 0.76773984] 2
True
[ 0.26760219 0.62070958 0.18526402] 1
True
[ 0.34672696 0.46513875 0.21546573] 1
True
[ 0.2170517 0.49289715 0.25504554] 1
True
[ 0.35581448 0.33419469 0.27642014] 1
False
[ 0.30620298 0.64952157 0.15744234] 1
True
[ 0.41697134 0.18376436 0.40642475] 2
False
[ 0.40278833 0.13305102 0.46387234] 2
True
[ 0.38242877 0.33386565 0.25015532] 1
False
[ 0.53053994 0.03983676 0.64849282] 2
True
[ 0.38414779 0.14151266 0.47465675] 2
True
[ 0.34490235 0.2547923 0.37959818] 2
True
[ 0.43046665 0.01747665 0.8262044 ] 2
True
[ 0.1053503 0.85009372 0.1989181 ] 2
False
[ 0.33192867 0.16933745 0.49695491] 2
True
[ 0.31194495 0.33826811 0.33952319] 1
False
[ 0.40059087 0.21870111 0.40223903] 0
False
[ 0.45978094 0.04510054 0.66423112] 2
True
[ 0.33075233 0.2877321 0.35708027] 2
True
[ 0.1116499 0.71735074 0.26700696] 1
True
[ 0.24102791 0.39969362 0.38119276] 1
True
[ 0.13663237 0.45079555 0.49061833] 2
True
[ 0.27151792 0.25356519 0.42201607] 2
True
[ 0.14840094 0.59192214 0.33974785] 1
True
[ 0.34628833 0.36865933 0.25864231] 0
False
[ 0.24691675 0.55630404 0.25011824] 1
True
[ 0.33275139 0.22236533 0.39043233] 2
True
[ 0.25746888 0.52634959 0.26337253] 1
True
[ 0.22244991 0.500518 0.29065403] 0
False
[ 0.39528448 0.23207426 0.380465 ] 2
False
[ 0.4183283 0.12104773 0.49663959] 2
True
[ 0.485642 0.15637718 0.4299614 ] 0
True
[ 0.42945586 0.14810018 0.44081723] 0
False
[ 0.10749439 0.31436929 0.63871608] 2
True
[ 0.34974941 0.41583642 0.24658715] 0
False
[ 0.30690443 0.44500336 0.28319038] 1
True
[ 0.32630942 0.56184933 0.19769301] 1
True
[ 0.24146485 0.428108 0.2946713 ] 1
True
[ 0.41306714 0.17121147 0.43416805] 0
False
[ 0.30913244 0.49222657 0.24606624] 1
True
[ 0.21259337 0.40096621 0.41542071] 2
True
[ 0.51039388 0.07546839 0.54546068] 0
False
[ 0.35110722 0.40552573 0.25255332] 1
True
[ 0.20406304 0.42420653 0.35270446] 1
True
[ 0.43590994 0.18906158 0.34478491] 0
True
[ 0.47619661 0.08604827 0.51643106] 0
False
[ 0.26340816 0.29100498 0.46486064] 2
True
[ 0.19823166 0.69078323 0.22161194] 1
True
[ 0.40991479 0.20170783 0.35357022] 2
False
[ 0.39076725 0.20115456 0.3702257 ] 0
True
[ 0.15849154 0.76280542 0.21456734] 2
False
[ 0.32707502 0.38176283 0.28202701] 1
True
[ 0.44746204 0.1381331 0.42968223] 2
False
[ 0.25869469 0.46016103 0.31143654] 1
True
[ 0.38510244 0.17353966 0.45191734] 2
True
[ 0.26917173 0.54135522 0.22569968] 0
False
[ 0.26710193 0.50559178 0.27519899] 1
True
[ 0.25457277 0.51240022 0.26443954] 0
False
[ 0.44019952 0.18207383 0.35313847] 2
False
[ 0.18633388 0.75357045 0.18831731] 2
False
[ 0.42006503 0.2545702 0.34599883] 0
True
[ 0.48930825 0.11056224 0.4603668 ] 2
False
[ 0.53273561 0.13421448 0.4498621 ] 0
True
[ 0.43806026 0.15033544 0.45160446] 0
False
[ 0.27492129 0.45237812 0.31212123] 1
True
[ 0.51452697 0.1084471 0.4513535 ] 2
False
[ 0.20670991 0.41256734 0.39244113] 1
True
[ 0.25547898 0.6115908 0.24427656] 1
True
[ 0.38075841 0.13147773 0.55589988] 2
True
[ 0.31543234 0.40609257 0.31816647] 1
True
[ 0.1522976 0.66093356 0.29607684] 1
True
[ 0.22097375 0.43490723 0.36844581] 1
True
[ 0.24191364 0.36029829 0.4356007 ] 2
True
[ 0.3507618 0.24257648 0.44137684] 2
True
[ 0.34305541 0.3740362 0.28882834] 0
False
[ 0.35581788 0.20548326 0.40488997] 0
False
[ 0.3798539 0.19622805 0.44940465] 2
True
[ 0.48762665 0.17731601 0.38108804] 0
True
[ 0.2908175 0.58235057 0.19519958] 1
True
[ 0.34964968 0.23540584 0.40943449] 2
True
[ 0.46143861 0.13941941 0.43744107] 2
False
[ 0.1683756 0.59358179 0.32518554] 1
True
[ 0.20435416 0.41671301 0.41880033] 1
False
[ 0.43538108 0.16897628 0.43714162] 0
False
[ 0.15390928 0.60553712 0.36880811] 1
True
[ 0.40538012 0.16370496 0.43576851] 2
True
[ 0.39260977 0.29587247 0.31438714] 0
True
[ 0.44755219 0.07346344 0.57737595] 2
True
[ 0.38255479 0.19605216 0.42518968] 2
True
[ 0.44995687 0.18592562 0.36594534] 2
False
[ 0.35132923 0.28048782 0.38554367] 0
False
[ 0.41729145 0.2512526 0.35684324] 0
True
[ 0.27052323 0.47330146 0.33550081] 0
False
[ 0.19959374 0.51798994 0.32772671] 1
True
[ 0.51359658 0.11108627 0.42515544] 0
True
[ 0.22461033 0.44998026 0.40493217] 2
False
[ 0.37838789 0.29206399 0.33912222] 1
False
[ 0.17857345 0.7559748 0.20540058] 2
False
[ 0.45730066 0.17663812 0.40219194] 0
True
[ 0.3902588 0.24839652 0.36715989] 0
True
[ 0.20656684 0.42664064 0.38836363] 1
True
[ 0.30122127 0.38204328 0.31646333] 1
True
[ 0.42216133 0.15677753 0.44046053] 0
False
[ 0.46673627 0.10973648 0.51335568] 2
True
[ 0.40936371 0.31412766 0.29492402] 0
True
[ 0.4973487 0.11577897 0.45313681] 2
False
[ 0.27937031 0.71761937 0.13423147] 1
True
[ 0.62029069 0.06530197 0.47287397] 0
True
[ 0.40945903 0.20399297 0.38561056] 0
True
[ 0.44578807 0.04864301 0.65749389] 2
True
[ 0.24325722 0.39196103 0.42380981] 2
True
[ 0.15926212 0.81547327 0.17832424] 1
True
[ 0.49945222 0.04635384 0.63915095] 2
True
[ 0.35385065 0.40657601 0.24181224] 1
True
[ 0.2891573 0.37077469 0.33373596] 1
True
[ 0.17269033 0.62379269 0.26749774] 1
True
[ 0.48173217 0.08027591 0.50526588] 2
True
[ 0.48773093 0.14203125 0.37250912] 2
False
[ 0.20046336 0.50686285 0.3189408 ] 1
True
[ 0.4979104 0.14482797 0.40095975] 2
False
[ 0.29970685 0.27140076 0.43761974] 0
False
[ 0.23729644 0.35966814 0.3453242 ] 1
True
[ 0.42029438 0.12914255 0.53075662] 2
True
[ 0.40643232 0.11176036 0.5236578 ] 0
False
[ 0.28898164 0.61888757 0.17746307] 1
True
[ 0.2335585 0.38621927 0.42332636] 2
True
[ 0.18933292 0.78070159 0.1485414 ] 1
True
[ 0.1867183 0.71157686 0.19899339] 1
True
[ 0.43954702 0.14141865 0.461236 ] 0
False
[ 0.51518434 0.14103197 0.38286833] 0
True
[ 0.31965671 0.3161487 0.35844489] 2
True
[ 0.12016738 0.29119619 0.65140717] 2
True
[ 0.40192626 0.21749119 0.42465627] 2
True
[ 0.27579876 0.5962962 0.20356516] 1
True
[ 0.13255298 0.7384328 0.22886572] 1
True
[ 0.34620508 0.22569786 0.4075095 ] 0
False
[ 0.158373 0.81801902 0.17191877] 1
True
[ 0.25636407 0.27436489 0.48753041] 2
True
[ 0.25220139 0.48362054 0.3126693 ] 1
True
[ 0.41999393 0.11604621 0.53828999] 2
True
[ 0.26168974 0.48545535 0.29270906] 0
False
[ 0.39040255 0.22113104 0.40382639] 2
True
[ 0.1380149 0.25735313 0.61438779] 2
True
[ 0.18324949 0.53445978 0.30699848] 1
True
[ 0.19551541 0.61365031 0.25749905] 2
False
[ 0.37007197 0.12742641 0.55054516] 2
True
[ 0.17459209 0.49728492 0.35649415] 1
True
[ 0.47815907 0.10602371 0.45924211] 0
True
[ 0.29323678 0.4397675 0.30090884] 1
True
[ 0.42626316 0.01683302 0.84295684] 2
True
[ 0.35111724 0.33234167 0.29856039] 0
True
[ 0.25482249 0.59922165 0.22954426] 1
True
[ 0.44233951 0.05296426 0.64029719] 2
True
[ 0.564483 0.02195728 0.6925534 ] 2
True
[ 0.40597173 0.06249009 0.63795252] 2
True
[ 0.23567577 0.37038261 0.36595638] 1
True
[ 0.42011912 0.15684971 0.44272403] 0
False
[ 0.42199869 0.33619617 0.26293903] 2
False
[ 0.26487862 0.56866795 0.22530195] 1
True
[ 0.12468868 0.84345326 0.1645271 ] 1
True
[ 0.38346705 0.12968372 0.50903303] 2
True
[ 0.29433026 0.53353062 0.20090321] 1
True
[ 0.23298317 0.66740893 0.20517197] 1
True
[ 0.40104609 0.08188765 0.61216599] 2
True
[ 0.48520463 0.07818345 0.52985415] 0
False
[ 0.35638049 0.25103883 0.36252813] 2
True
[ 0.27765726 0.53924317 0.2347277 ] 1
True
[ 0.43480463 0.15138446 0.45705942] 0
False
[ 0.19434744 0.49918729 0.31752206] 1
True
[ 0.44785635 0.08122327 0.55172709] 2
True
[ 0.19946319 0.57773806 0.25117869] 1
True
[ 0.36578361 0.13992767 0.4780525 ] 2
True
[ 0.40328982 0.10820066 0.51534732] 2
True
[ 0.25120663 0.22213221 0.53504143] 2
True
[ 0.47925819 0.03936526 0.68968347] 2
True
[ 0.43509498 0.0200119 0.83132382] 2
True
[ 0.39007605 0.24999719 0.3380973 ] 0
True
[ 0.49178981 0.16629086 0.37499162] 0
True
[ 0.3266138 0.13306916 0.54513467] 2
True
[ 0.36418635 0.07157087 0.60973401] 2
True
[ 0.48338891 0.14769639 0.36804492] 0
True
[ 0.47275038 0.11566517 0.44167397] 2
False
[ 0.40441087 0.07308888 0.63246422] 2
True
[ 0.11705608 0.26525469 0.6492687 ] 2
True
[ 0.16930491 0.50829581 0.324206 ] 1
True
[ 0.49155051 0.09259325 0.49500511] 2
True
[ 0.38417093 0.19469561 0.40483952] 2
True
[ 0.36512878 0.26650172 0.37460603] 2
True
[ 0.1307734 0.65292694 0.30606598] 1
True
[ 0.21473233 0.06398605 0.77912713] 2
True
[ 0.13998218 0.13048798 0.75249697] 2
True
[ 0.40747144 0.12278709 0.47072017] 2
True
[ 0.26342796 0.53036503 0.24468492] 1
True
[ 0.22818234 0.4631748 0.33908305] 1
True
[ 0.1722613 0.71581572 0.21364528] 1
True
[ 0.32523136 0.31309868 0.31145055] 0
True
[ 0.33102257 0.25898114 0.35479372] 1
False
[ 0.4236335 0.13880628 0.45365791] 2
True
[ 0.39472549 0.13055211 0.45989835] 0
False
[ 0.23401795 0.33117656 0.44720949] 2
True
[ 0.29834516 0.24701396 0.40062055] 2
True
[ 0.16811371 0.46681643 0.38176115] 1
True
[ 0.26418554 0.49031813 0.26074701] 1
True
[ 0.4032444 0.17887083 0.38714364] 0
True
[ 0.37956593 0.14762737 0.46120083] 2
True
[ 0.34097579 0.26975492 0.40656176] 2
True
[ 0.24022162 0.46571567 0.305624 ] 0
False
[ 0.29624091 0.49508364 0.2162569 ] 1
True
[ 0.1877915 0.68935251 0.23364255] 1
True
[ 0.56570011 0.09674677 0.38939628] 0
True
[ 0.26830524 0.60365396 0.17546622] 1
True
[ 0.36532813 0.28854509 0.36837287] 0
False
[ 0.3117918 0.54403808 0.19935858] 1
True
[ 0.29318566 0.38869472 0.29947839] 1
True
[ 0.48732342 0.19800234 0.29243773] 0
True
[ 0.49721979 0.12540883 0.3942773 ] 0
True
[ 0.48252825 0.09625225 0.44806473] 2
False
[ 0.36505309 0.35829906 0.25373114] 1
False
[ 0.41748599 0.08757486 0.53084376] 2
True
[ 0.39266687 0.149546 0.43917612] 2
True
[ 0.34252783 0.19952408 0.40015167] 2
True
[ 0.2850393 0.37760566 0.32505202] 0
False
[ 0.31434363 0.45662089 0.24577471] 0
False
[ 0.32820609 0.54277204 0.16385297] 1
True
[ 0.45122696 0.29263856 0.23025556] 1
False
[ 0.18228009 0.68748441 0.2084157 ] 1
True
[ 0.37282589 0.09016215 0.53489613] 2
True
[ 0.42959872 0.24015003 0.28770913] 2
False
[ 0.31857923 0.48080041 0.23355693] 1
True
[ 0.41515881 0.22539094 0.31332935] 2
False
[ 0.22506108 0.75644398 0.15987898] 1
True
[ 0.30708371 0.18939578 0.48341275] 2
True
[ 0.46123926 0.07114202 0.57075138] 2
True
[ 0.32615285 0.49332811 0.23258127] 0
False
[ 0.26062146 0.45726782 0.26926869] 1
True
[ 0.4091272 0.33900239 0.28031523] 0
True
[ 0.33670479 0.36589455 0.29132031] 1
True
[ 0.42282296 0.45881913 0.18732483] 1
True
[ 0.51894816 0.06716681 0.50455694] 2
False
[ 0.3005863 0.62100631 0.15870717] 1
True
[ 0.39463298 0.20543226 0.36105205] 0
True
[ 0.42623333 0.06216593 0.60738045] 2
True
[ 0.4825166 0.11514584 0.38795829] 0
True
[ 0.45026723 0.24676582 0.27541036] 0
True
[ 0.21796761 0.3605643 0.40799765] 2
True
[ 0.22908683 0.39746389 0.34280567] 1
True
[ 0.17836453 0.79866484 0.14688116] 2
False
[ 0.39035211 0.31591662 0.2800435 ] 0
True
[ 0.47183946 0.1311042 0.40274294] 0
True
[ 0.62573593 0.04910447 0.48547427] 0
True
[ 0.36693823 0.47000775 0.19203237] 1
True
[ 0.46375383 0.23086998 0.29331077] 1
False
[ 0.16936113 0.78681614 0.15644422] 2
False
[ 0.34575968 0.2198009 0.39099566] 2
True
[ 0.41977385 0.05012707 0.69146549] 2
True
[ 0.42290387 0.30529583 0.30909296] 0
True
[ 0.43709925 0.04989123 0.67908255] 2
True
[ 0.15550669 0.82475669 0.15283697] 1
True
[ 0.54268458 0.08671088 0.46694754] 2
False
[ 0.34388131 0.36514054 0.27895493] 1
True
[ 0.45331691 0.10804307 0.48775237] 0
False
[ 0.41526905 0.11020906 0.47159286] 0
False
[ 0.17734586 0.57641613 0.28982664] 1
True
[ 0.32343685 0.42235247 0.25588286] 1
True
[ 0.37760949 0.20070095 0.40216311] 2
True
[ 0.38862526 0.21863189 0.42850543] 0
False
[ 0.38647237 0.22035588 0.35347832] 0
True
[ 0.4840704 0.03059744 0.73367705] 2
True
[ 0.41889472 0.28828464 0.31224844] 0
True
[ 0.15380282 0.770831 0.19941279] 2
False
[ 0.27659107 0.53764565 0.23651702] 1
True
[ 0.44760556 0.1461848 0.40938071] 0
True
[ 0.4788873 0.05957509 0.56095858] 2
True
[ 0.28136849 0.20788468 0.53686165] 2
True
[ 0.16324428 0.71366278 0.2350064 ] 2
False
[ 0.41226392 0.07572664 0.62778338] 2
True
[ 0.25966659 0.42403787 0.33185142] 1
True
[ 0.16115557 0.10408687 0.77970684] 2
True
[ 0.44566659 0.14704098 0.46683252] 0
False
[ 0.23826266 0.50045666 0.29822522] 1
True
[ 0.19885718 0.1568549 0.68318421] 2
True
[ 0.3488835 0.31644013 0.34212191] 0
True
[ 0.2528605 0.05869918 0.76355728] 2
True
[ 0.5376586 0.08502404 0.46008556] 0
True
[ 0.39083432 0.33103772 0.31259516] 1
False
[ 0.23982866 0.75283841 0.16449275] 1
True
[ 0.46979106 0.0410989 0.69731489] 2
True
[ 0.23481184 0.41011677 0.33383141] 1
True
[ 0.23951015 0.388971 0.31961637] 1
True
[ 0.35810673 0.19260013 0.39398741] 2
True
[ 0.4570515 0.14459834 0.41011109] 2
False
[ 0.49784857 0.07555778 0.58172514] 0
False
[ 0.23187254 0.31044427 0.4761051 ] 2
True
[ 0.37516935 0.37512287 0.26569387] 1
False
[ 0.40448956 0.18831309 0.41471332] 2
True
[ 0.32396159 0.34137005 0.36174218] 0
False
[ 0.46170203 0.13786904 0.48166188] 0
False
[ 0.43210155 0.15775728 0.42134982] 2
False
[ 0.43594324 0.11120411 0.51308321] 2
True
[ 0.34964676 0.3538352 0.27812903] 0
False
[ 0.61464695 0.04078188 0.55886284] 0
True
[ 0.26531544 0.42888788 0.28178456] 1
True
[ 0.28224427 0.52269693 0.24942018] 1
True
[ 0.51210752 0.10742702 0.41109224] 0
True
[ 0.1744524 0.14156562 0.71101418] 2
True
[ 0.5512015 0.01445892 0.80577711] 2
True
[ 0.36813779 0.32615269 0.2808071 ] 1
False
[ 0.34987407 0.45494999 0.25584735] 0
False
[ 0.48464138 0.19127409 0.35062901] 0
True
[ 0.5152406 0.12145602 0.40874643] 2
False
[ 0.38583 0.19653928 0.43682582] 2
True
[ 0.53217756 0.07185615 0.44877109] 2
False
[ 0.28735098 0.62537572 0.16271813] 1
True
[ 0.27098568 0.27008215 0.41206982] 1
False
[ 0.21935193 0.51682993 0.30313599] 1
True
[ 0.38510369 0.20015178 0.39165658] 2
True
[ 0.15991537 0.56966223 0.32240326] 2
False
[ 0.39583555 0.07362923 0.6087468 ] 2
True
[ 0.14373071 0.1539235 0.74050677] 2
True
[ 0.33365024 0.32528415 0.35397486] 0
False
[ 0.25699312 0.67898926 0.19052868] 1
True
[ 0.40649745 0.20198626 0.38865346] 0
True
[ 0.42888854 0.15502836 0.44633529] 0
False
[ 0.32146555 0.40669023 0.25782091] 1
True
[ 0.318204 0.42614373 0.24502223] 1
True
[ 0.46372749 0.12639595 0.46035704] 2
False
[ 0.45018012 0.09684275 0.51914453] 2
True
[ 0.45019497 0.1377088 0.48292103] 2
True
[ 0.3159515 0.13735714 0.5533869 ] 2
True
[ 0.43822199 0.10879488 0.47376288] 2
True
[ 0.44233924 0.14702719 0.41856379] 2
False
[ 0.49212173 0.03279936 0.70061905] 2
True
[ 0.37126299 0.13607318 0.51117144] 2
True
[ 0.2317468 0.24834175 0.56713235] 2
True
[ 0.21690778 0.41675376 0.3686343 ] 1
True
[ 0.50172314 0.12779505 0.44111535] 0
True
[ 0.44581163 0.14201409 0.4247176 ] 2
False
[ 0.29316531 0.20126253 0.53266361] 2
True
[ 0.41376214 0.11105628 0.54307682] 0
False
[ 0.19206558 0.6819166 0.2345517 ] 2
False
[ 0.43804018 0.08494065 0.55001859] 0
False
[ 0.47923513 0.11412654 0.48984193] 2
True
[ 0.50861669 0.0786342 0.54573131] 0
False
[ 0.43372906 0.11254381 0.50860459] 2
True
[ 0.4214356 0.12147759 0.53525078] 2
True
[ 0.2369431 0.67844129 0.21379918] 1
True
[ 0.46346067 0.12465822 0.4600363 ] 2
False
[ 0.53024541 0.11598681 0.46156477] 0
True
[ 0.4002517 0.13603032 0.51165397] 2
True
[ 0.5234855 0.11367682 0.41328431] 0
True
[ 0.29423797 0.3792901 0.32684758] 1
True
[ 0.17945974 0.51392043 0.34066152] 1
True
[ 0.36633378 0.33636299 0.30395981] 0
True
[ 0.21603306 0.4973855 0.2912606 ] 1
True
[ 0.31019434 0.28088991 0.37922368] 2
True
[ 0.41183308 0.07309113 0.60997955] 2
True
[ 0.29513767 0.12220674 0.58860457] 2
True
[ 0.4228181 0.12251903 0.50075059] 2
True
[ 0.3190504 0.48538791 0.23137239] 1
True
[ 0.44536833 0.07125835 0.61146523] 2
True
[ 0.48732352 0.10931483 0.46523077] 0
True
[ 0.26559405 0.43848999 0.2785798 ] 1
True
[ 0.3845261 0.40851081 0.22659243] 1
True
[ 0.19172152 0.48141373 0.33017824] 1
True
[ 0.16550106 0.64856562 0.28465757] 2
False
[ 0.22349052 0.0845784 0.75618351] 2
True
[ 0.45569322 0.15176077 0.44417981] 0
True
[ 0.50131811 0.09511508 0.49920895] 0
True
[ 0.23459038 0.38424947 0.37603811] 1
True
[ 0.45094262 0.11808816 0.4569093 ] 0
False
[ 0.4199683 0.15365039 0.44422607] 2
True
[ 0.39860545 0.12706158 0.49440385] 2
True
[ 0.46994152 0.0995161 0.49623096] 0
False
[ 0.41223975 0.14069641 0.46051327] 2
True
[ 0.45118492 0.14739021 0.39629865] 2
False
[ 0.35586315 0.24551078 0.40631243] 0
False
[ 0.46983913 0.11748475 0.44061811] 2
False
[ 0.29652505 0.19973498 0.54406962] 2
True
[ 0.37534251 0.16848926 0.47510095] 0
False
[ 0.40481839 0.06937132 0.65685033] 2
True
[ 0.30492863 0.51948126 0.26038846] 0
False
[ 0.27945531 0.66683417 0.1869266 ] 1
True
[ 0.24839159 0.45009237 0.33599968] 1
True
[ 0.4808265 0.03173792 0.72230349] 2
True
[ 0.38481142 0.37053877 0.26894432] 1
False
[ 0.48485077 0.1219743 0.39259407] 0
True
[ 0.41667697 0.16267866 0.45215537] 0
False
[ 0.17766076 0.63870499 0.28041246] 1
True
[ 0.36170506 0.06616398 0.67403732] 2
True
[ 0.42150132 0.09924626 0.53624867] 2
True
[ 0.46200961 0.12197444 0.42379002] 2
False
[ 0.33281725 0.32768379 0.29295025] 1
False
[ 0.27849797 0.40930266 0.3249584 ] 1
True
[ 0.16051767 0.11296025 0.76492178] 2
True
[ 0.21519369 0.73975046 0.15514325] 1
True
[ 0.38063182 0.28698924 0.31434708] 0
True
[ 0.50050609 0.10842533 0.40500416] 0
True
[ 0.49449298 0.19123684 0.33257926] 2
False
[ 0.21870964 0.49046907 0.30579101] 1
True
[ 0.44032911 0.08861247 0.55258771] 2
True
[ 0.41089313 0.08010198 0.58273509] 2
True
[ 0.20552177 0.74627201 0.15935895] 1
True
[ 0.40148069 0.02312448 0.82150321] 2
True
[ 0.25324796 0.68462416 0.1521829 ] 1
True
[ 0.21231391 0.51163146 0.2883644 ] 1
True
[ 0.25359424 0.42742085 0.32575742] 1
True
[ 0.36353839 0.11906739 0.50100661] 0
False
[ 0.48119905 0.14146019 0.4012642 ] 2
False
[ 0.37832056 0.22129623 0.39268621] 2
True
[ 0.44392736 0.11044801 0.48547765] 0
False
[ 0.39224341 0.14569356 0.47003442] 0
False
[ 0.45191327 0.45177649 0.1892749 ] 1
False
[ 0.3667788 0.2328146 0.4161811] 2
True
[ 0.40485167 0.19466825 0.39294778] 0
True
[ 0.33562673 0.20618086 0.38298915] 0
False
[ 0.45846608 0.12663205 0.46251275] 0
False
[ 0.39306419 0.2912323 0.29138342] 0
True
[ 0.44642645 0.1693922 0.40673921] 2
False
[ 0.2841348 0.57646169 0.19075834] 1
True
[ 0.46132758 0.1385948 0.43660551] 2
False
[ 0.31081433 0.56913204 0.19779744] 1
True
[ 0.43485312 0.15632333 0.40171477] 0
True
[ 0.4803098 0.30011089 0.24390029] 0
True
[ 0.57857607 0.08518374 0.47821742] 0
True
[ 0.26951562 0.50729074 0.26541396] 1
True
[ 0.42327608 0.11970894 0.47908059] 0
False
[ 0.30381781 0.54933602 0.18174267] 1
True
[ 0.45462551 0.18250451 0.35245295] 0
True
[ 0.36726308 0.23368502 0.36201331] 0
True
[ 0.49871289 0.14166418 0.38362539] 2
False
[ 0.47569445 0.14832838 0.40079545] 2
False
[ 0.42219697 0.08717145 0.57444601] 2
True
[ 0.48573738 0.14836941 0.39359224] 0
True
[ 0.51887738 0.20888937 0.27069478] 1
False
[ 0.5241683 0.10195287 0.44277467] 2
False
[ 0.40630451 0.34785261 0.256562 ] 1
False
[ 0.36029242 0.27600416 0.32201593] 0
True
[ 0.5167312 0.11227894 0.44306184] 0
True
[ 0.48421143 0.17071035 0.38313649] 0
True
[ 0.49335519 0.16613127 0.33530485] 0
True
[ 0.41859395 0.09349337 0.5693782 ] 2
True
[ 0.23623513 0.83702005 0.09947057] 1
True
[ 0.44882997 0.13273891 0.47413614] 0
False
[ 0.16473438 0.25058518 0.57634094] 2
True
[ 0.44156898 0.44054203 0.18187236] 1
False
[ 0.1918167 0.10928161 0.7218787 ] 2
True
[ 0.17157356 0.73818922 0.21337154] 2
False
[ 0.27718453 0.54258874 0.22398144] 1
True
[ 0.19953072 0.72865278 0.21697566] 2
False
[ 0.44355272 0.08669864 0.56310316] 2
True
[ 0.41227327 0.26154698 0.32974523] 0
True
[ 0.28551715 0.53638463 0.22589364] 1
True
[ 0.51034752 0.01607399 0.81395709] 2
True
[ 0.39959732 0.34423732 0.27760276] 0
True
[ 0.20761824 0.61804662 0.27114978] 1
True
[ 0.41968573 0.16617508 0.41854364] 2
False
[ 0.28820929 0.66437618 0.18434616] 1
True
[ 0.2168327 0.12058757 0.70952759] 2
True
[ 0.15157694 0.74045212 0.22871627] 1
True
[ 0.47611028 0.1220902 0.44929253] 2
False
[ 0.21668525 0.39830951 0.34845906] 1
True
[ 0.39061052 0.20832991 0.40547763] 2
True
[ 0.46372172 0.11541136 0.49750625] 0
False
[ 0.27883918 0.50771918 0.25483037] 0
False
[ 0.36075832 0.37573258 0.26287389] 1
True
[ 0.37140014 0.18345012 0.45241478] 2
True
[ 0.38591312 0.33502109 0.26302574] 0
True
[ 0.46732518 0.15028746 0.39960785] 0
True
[ 0.31635689 0.44254257 0.28783041] 1
True
[ 0.38017708 0.49285035 0.20041621] 1
True
[ 0.44284376 0.20159569 0.39724478] 0
True
[ 0.33917942 0.65779445 0.12371344] 1
True
[ 0.35811107 0.42259411 0.25634181] 0
False
[ 0.56059005 0.12655037 0.35611308] 0
True
[ 0.53766632 0.06802913 0.48709188] 0
True
[ 0.49410841 0.08222834 0.53078593] 2
True
[ 0.48498099 0.21614412 0.31267215] 0
True
[ 0.38552526 0.40511223 0.18954484] 1
True
[ 0.2690326 0.07501526 0.70787648] 2
True
[ 0.52569208 0.12844849 0.36145619] 2
False
[ 0.44461625 0.16318155 0.4145628 ] 2
False
[ 0.48400245 0.04638666 0.67376455] 2
True
[ 0.21342066 0.10925679 0.69827181] 2
True
[ 0.49808578 0.17169872 0.35903853] 0
True
[ 0.25420196 0.43962603 0.29708636] 1
True
[ 0.51673022 0.09785263 0.4810219 ] 2
False
[ 0.44911401 0.12458746 0.43916198] 2
False
[ 0.4606286 0.06316992 0.59093395] 2
True
[ 0.46316021 0.18273364 0.39576384] 0
True
[ 0.51125144 0.1595283 0.35302875] 0
True
[ 0.3148174 0.56359925 0.19264508] 1
True
[ 0.49568926 0.15461784 0.35469536] 0
True
[ 0.31512761 0.28141381 0.33750299] 1
False
[ 0.47037973 0.1277451 0.44230641] 2
False
[ 0.51955291 0.08111969 0.52379493] 2
True
[ 0.43309215 0.125936 0.44808511] 2
True
[ 0.19133505 0.80419993 0.1413837 ] 1
True
[ 0.27814988 0.68451436 0.14528063] 1
True
[ 0.31185519 0.48652921 0.210596 ] 1
True
[ 0.30174903 0.51990007 0.22434734] 1
True
[ 0.46605089 0.12404811 0.43036453] 0
True
[ 0.50230572 0.11600113 0.47272526] 2
False
[ 0.19209861 0.11764558 0.74149603] 2
True
[ 0.42041874 0.20410601 0.40433788] 0
True
[ 0.16296328 0.82422424 0.16404466] 1
True
[ 0.41610366 0.16034612 0.42764043] 0
False
[ 0.37276148 0.21902093 0.40213487] 2
True
[ 0.19960177 0.58203088 0.2693482 ] 1
True
[ 0.18449758 0.18706529 0.62706225] 2
True
[ 0.40587727 0.25962817 0.34576927] 0
True
[ 0.49624707 0.11382425 0.41966709] 2
False
[ 0.45944985 0.1268421 0.4590918 ] 2
False
[ 0.34722927 0.12537976 0.54958606] 2
True
[ 0.40849989 0.2514605 0.33302368] 0
True
[ 0.32650658 0.41346298 0.29189392] 1
True
[ 0.15943935 0.61507952 0.29004699] 1
True
[ 0.52203959 0.12651435 0.423475 ] 2
False
[ 0.2334796 0.33359041 0.39252103] 2
True
[ 0.34421705 0.23861282 0.39997319] 0
False
[ 0.38091493 0.11176275 0.58014643] 0
False
[ 0.27585753 0.51252272 0.25669151] 1
True
[ 0.37954072 0.2204939 0.40089319] 2
True
[ 0.40671202 0.07871648 0.60034869] 2
True
[ 0.48893244 0.13743171 0.41794276] 0
True
[ 0.40670279 0.37579434 0.26956975] 0
True
[ 0.5498976 0.07151404 0.48234518] 2
False
[ 0.44995364 0.01679392 0.8456422 ] 2
True
[ 0.33292337 0.4678973 0.23113962] 1
True
[ 0.36979449 0.15727627 0.49823569] 2
True
[ 0.45894468 0.05770421 0.61885694] 2
True
[ 0.46668071 0.08603342 0.52533564] 2
True
[ 0.42968067 0.13424081 0.44422352] 0
False
[ 0.39410902 0.14119996 0.46932502] 2
True
[ 0.22473091 0.3980068 0.36290222] 1
True
[ 0.42454768 0.15017875 0.44221777] 0
False
[ 0.23399068 0.56032246 0.25139517] 1
True
[ 0.19528302 0.12201694 0.70602267] 2
True
[ 0.38650251 0.24622338 0.36978649] 0
True
[ 0.34915442 0.13118825 0.53107304] 2
True
[ 0.34941574 0.23385275 0.40626657] 0
False
[ 0.36519938 0.19845879 0.4419545 ] 2
True
[ 0.47229264 0.08761788 0.53204504] 2
True
[ 0.49101165 0.07367928 0.53741224] 2
True
[ 0.27601552 0.51819415 0.22386929] 1
True
[ 0.47960328 0.06670114 0.57051964] 2
True
[ 0.26204606 0.31505172 0.41067868] 1
False
[ 0.46481047 0.12900623 0.42468419] 2
False
[ 0.34743265 0.17975781 0.51263086] 2
True
[ 0.45622927 0.08464094 0.54202634] 0
False
[ 0.31749811 0.41281399 0.28153654] 1
True
[ 0.46081199 0.15003496 0.42431377] 2
False
[ 0.18406237 0.21756243 0.57308308] 2
True
[ 0.461807 0.0897541 0.51599944] 2
True
[ 0.35726318 0.30929413 0.30288746] 0
True
[ 0.24862736 0.6170323 0.22269122] 2
False
[ 0.36533536 0.1968604 0.42782192] 2
True
[ 0.44996176 0.19500653 0.37126461] 0
True
[ 0.31948393 0.25304656 0.4296269 ] 2
True
[ 0.52782175 0.09565379 0.46474871] 0
True
[ 0.37176369 0.37685664 0.27155543] 1
True
[ 0.41579368 0.10870496 0.50624705] 0
False
[ 0.29964645 0.53747129 0.19253295] 1
True
[ 0.49697554 0.08430244 0.48894838] 0
True
[ 0.2842228 0.28124587 0.42838264] 2
True
[ 0.4031812 0.42333257 0.20688892] 1
True
[ 0.49598289 0.1021063 0.45681176] 2
False
[ 0.45998405 0.10585814 0.49387437] 0
False
[ 0.53568957 0.12152236 0.39220227] 0
True
[ 0.4941346 0.09135955 0.47301436] 2
False
[ 0.44068736 0.12501113 0.49836477] 2
True
[ 0.23659873 0.403566 0.3672468 ] 1
True
[ 0.49924401 0.0989947 0.4781294 ] 2
False
[ 0.33323145 0.37932472 0.2528106 ] 0
False
[ 0.45214801 0.04311334 0.6759953 ] 2
True
[ 0.46872706 0.11354403 0.43789418] 2
False
[ 0.37174975 0.2232855 0.40289347] 2
True
[ 0.24605676 0.73327917 0.14649992] 1
True
[ 0.41644756 0.28355237 0.33140558] 0
True
[ 0.43563697 0.17434237 0.37238731] 0
True
[ 0.29963346 0.59312924 0.16506962] 1
True
[ 0.34226736 0.45087722 0.23914303] 1
True
[ 0.47291097 0.05032683 0.65391638] 2
True
[ 0.36845271 0.40096484 0.25106543] 0
False
[ 0.25655355 0.68008646 0.17708559] 2
False
[ 0.36458322 0.22697402 0.4290362 ] 0
False
[ 0.25947822 0.33343238 0.39006153] 1
False
[ 0.24983446 0.69900687 0.17212747] 1
True
[ 0.48104489 0.04830699 0.61483278] 2
True
[ 0.17714162 0.17566091 0.6449056 ] 2
True
[ 0.43479702 0.06172772 0.61888641] 2
True
[ 0.35654269 0.52442199 0.2004767 ] 1
True
[ 0.55081629 0.07045376 0.51840873] 2
False
[ 0.46385671 0.09573124 0.49556338] 0
False
[ 0.24572966 0.68680312 0.16615994] 2
False
[ 0.32896061 0.40748019 0.24185068] 1
True
[ 0.513446 0.14134085 0.40622535] 2
False
[ 0.40019778 0.08285703 0.59976156] 2
True
[ 0.19461204 0.61044296 0.22419133] 1
True
[ 0.35965736 0.26196003 0.3713313 ] 2
True
[ 0.51839824 0.07246094 0.51625888] 2
False
[ 0.40431045 0.1210732 0.56391312] 2
True
[ 0.43482919 0.12156607 0.45325957] 0
False
[ 0.31753528 0.18239501 0.48188146] 2
True
[ 0.51463241 0.09233916 0.47040745] 0
True
[ 0.48106838 0.12021992 0.4766405 ] 0
True
[ 0.46540548 0.09208032 0.51557188] 2
True
[ 0.45808065 0.09444384 0.5057881 ] 0
False
[ 0.32557444 0.42977954 0.26114077] 1
True
[ 0.39996964 0.20281048 0.41860144] 2
True
[ 0.34830552 0.28614632 0.31586079] 1
False
[ 0.33586956 0.34814299 0.29684967] 1
True
[ 0.52488825 0.05268376 0.56024024] 0
False
[ 0.49659374 0.10649107 0.44315314] 2
False
[ 0.38818518 0.35979992 0.27627349] 0
True
[ 0.27767644 0.25060332 0.4475018 ] 2
True
[ 0.43899559 0.04702554 0.67621944] 2
True
[ 0.22730598 0.40004202 0.33982725] 1
True
[ 0.48386845 0.1154634 0.45540705] 2
False
[ 0.15470916 0.7613727 0.19717666] 1
True
[ 0.47322535 0.11528594 0.47432455] 0
False
[ 0.38395659 0.2934111 0.31585669] 1
False
[ 0.18608088 0.6002603 0.23581824] 1
True
[ 0.35607101 0.32803403 0.30442725] 0
True
[ 0.53682035 0.09380084 0.40659292] 0
True
[ 0.22539435 0.39595005 0.36478345] 1
True
[ 0.20376657 0.5652192 0.26891281] 1
True
[ 0.51860321 0.08915102 0.46884423] 0
True
[ 0.31988512 0.26196063 0.39212173] 2
True
[ 0.47994565 0.11252208 0.45657328] 0
True
[ 0.28169254 0.39555521 0.30438805] 1
True
[ 0.38010738 0.30704049 0.24854807] 0
True
[ 0.40760045 0.38566205 0.2204624 ] 0
True
[ 0.29308727 0.31507536 0.3816424 ] 2
True
[ 0.4026221 0.13334321 0.48969031] 0
False
[ 0.18814523 0.66067477 0.21751919] 1
True
[ 0.55149193 0.09566065 0.41547321] 0
True
[ 0.19539606 0.82121879 0.10831057] 1
True
[ 0.49770668 0.09081184 0.48647275] 0
True
5000
[ 0.457823 0.09366033 0.49239734] 2
True
[ 0.42796588 0.42084266 0.17432981] 1
False
[ 0.41356222 0.12182677 0.50897702] 0
False
[ 0.40897126 0.21016476 0.37765722] 2
False
[ 0.23275785 0.45259127 0.3248144 ] 1
True
[ 0.56944501 0.12470644 0.37273646] 0
True
[ 0.20738073 0.80037907 0.13315352] 2
False
[ 0.27606578 0.34912309 0.36386678] 2
True
[ 0.46327439 0.09561403 0.50131638] 0
False
[ 0.34138859 0.48274417 0.20002112] 0
False
[ 0.17217246 0.76173693 0.19241139] 1
True
[ 0.46781109 0.12967992 0.41884993] 2
False
[ 0.57642823 0.07134033 0.45896445] 0
True
[ 0.41477491 0.24895365 0.3006079 ] 0
True
[ 0.33965948 0.09660804 0.61158271] 2
True
[ 0.40526965 0.37102328 0.26982425] 0
True
[ 0.29584444 0.37925326 0.29397658] 1
True
[ 0.35302718 0.44325165 0.24877673] 1
True
[ 0.55389691 0.14084241 0.32486084] 0
True
[ 0.38541401 0.3492095 0.23754949] 1
False
[ 0.39598917 0.22246951 0.37049207] 2
False
[ 0.23298748 0.43559902 0.34119315] 1
True
[ 0.50008644 0.11201087 0.41989957] 2
False
[ 0.23068767 0.56322563 0.26563603] 1
True
[ 0.29467203 0.3882833 0.29979292] 1
True
[ 0.47822626 0.09330071 0.51714896] 2
True
[ 0.46433802 0.15017403 0.43285848] 0
True
[ 0.40732549 0.197839 0.34881269] 2
False
[ 0.36201301 0.20416577 0.40775517] 0
False
[ 0.35001745 0.28309971 0.35158539] 1
False
[ 0.40393548 0.25706275 0.34222767] 2
False
[ 0.38553336 0.2156356 0.38784124] 2
True
[ 0.41784862 0.09801875 0.52199911] 0
False
[ 0.40728541 0.23256205 0.36843061] 0
True
[ 0.38517973 0.14175902 0.44248541] 0
False
[ 0.44950732 0.24118278 0.3167479 ] 1
False
[ 0.4359306 0.15414499 0.42349486] 0
True
[ 0.19019115 0.80715461 0.15786029] 2
False
[ 0.43271793 0.13939478 0.46392084] 0
False
[ 0.45732102 0.05531489 0.62961607] 2
True
[ 0.31127687 0.39941901 0.29060684] 1
True
[ 0.27812461 0.37345775 0.30690324] 1
True
[ 0.33809186 0.33063546 0.32578768] 1
False
[ 0.478724 0.09833953 0.48878657] 2
True
[ 0.29946793 0.51056175 0.23038596] 1
True
[ 0.23858903 0.45572042 0.32453655] 2
False
[ 0.44047808 0.10874835 0.49886273] 0
False
[ 0.34189185 0.18826234 0.48395235] 2
True
[ 0.39402302 0.26158514 0.35679325] 2
False
[ 0.17740376 0.72547411 0.20943432] 1
True
[ 0.35228506 0.3098677 0.3670888 ] 0
False
[ 0.36887564 0.30611769 0.33716647] 2
False
[ 0.17026914 0.85883394 0.14719224] 1
True
[ 0.4719871 0.12650587 0.44606037] 2
False
[ 0.29533218 0.68963362 0.14443002] 1
True
[ 0.51409097 0.0600996 0.5535413 ] 0
False
[ 0.17178202 0.11307035 0.75005917] 2
True
[ 0.42378485 0.05933175 0.64939742] 2
True
[ 0.17471352 0.61076328 0.27988956] 1
True
[ 0.36863524 0.35497569 0.26265386] 0
True
[ 0.32325126 0.44145525 0.24331873] 1
True
[ 0.3342159 0.2536835 0.44990465] 2
True
[ 0.41988459 0.07741287 0.58501244] 2
True
[ 0.21459835 0.06903296 0.77249398] 2
True
[ 0.43951929 0.20370867 0.37642992] 0
True
[ 0.35758592 0.34039603 0.30640428] 0
True
[ 0.41251413 0.18559877 0.42389534] 2
True
[ 0.39505351 0.25166984 0.34636789] 0
True
[ 0.37898434 0.20078865 0.4518228 ] 0
False
[ 0.41711309 0.26936001 0.30509435] 1
False
[ 0.34399988 0.50325502 0.18607506] 1
True
[ 0.54229148 0.0791957 0.47175062] 2
False
[ 0.18460146 0.14895748 0.66205736] 2
True
[ 0.50372905 0.02573735 0.73494294] 2
True
[ 0.48218701 0.17828139 0.33916492] 0
True
[ 0.25998517 0.33948491 0.40173656] 2
True
[ 0.60366894 0.08132858 0.43687396] 0
True
[ 0.43490153 0.11781348 0.45896326] 0
False
[ 0.48824833 0.0996477 0.45664841] 2
False
[ 0.360207 0.5881008 0.15587652] 1
True
[ 0.45140445 0.24031646 0.34392686] 0
True
[ 0.49559764 0.12151231 0.40752122] 2
False
[ 0.47769264 0.0362987 0.68819388] 2
True
[ 0.35139983 0.18205181 0.45250846] 2
True
[ 0.47461443 0.10320619 0.5010416 ] 2
True
[ 0.35993363 0.44264964 0.24369738] 1
True
[ 0.42812426 0.14930143 0.44098082] 2
True
[ 0.55888901 0.06030015 0.51938496] 2
False
[ 0.39928374 0.18965929 0.37733769] 2
False
[ 0.23419482 0.62746163 0.21624419] 1
True
[ 0.21125083 0.68361889 0.18462223] 1
True
[ 0.38405846 0.09043097 0.56357333] 2
True
[ 0.25706002 0.21707064 0.53177741] 2
True
[ 0.2407189 0.65714317 0.18995383] 1
True
[ 0.16677324 0.85831855 0.11808527] 1
True
[ 0.38741606 0.07653535 0.61086923] 2
True
[ 0.39832572 0.17803548 0.42222724] 0
False
[ 0.38710979 0.23619238 0.32834607] 0
True
[ 0.34034571 0.40557927 0.2484606 ] 1
True
[ 0.38283122 0.21543727 0.38719289] 0
False
[ 0.31086911 0.4563408 0.21646827] 0
False
[ 0.52000703 0.12496131 0.39662477] 0
True
[ 0.43452676 0.13213272 0.46312061] 2
True
[ 0.45112709 0.34105592 0.24862771] 0
True
[ 0.47547574 0.10225554 0.49107778] 0
False
[ 0.29064069 0.43916054 0.26677436] 1
True
[ 0.43618121 0.24225999 0.28351892] 1
False
[ 0.32762453 0.54811196 0.16681166] 1
True
[ 0.37873031 0.22595826 0.34633299] 0
True
[ 0.48114682 0.13137373 0.427091 ] 2
False
[ 0.36511542 0.10164247 0.57343353] 2
True
[ 0.48691192 0.07947125 0.50537153] 0
False
[ 0.43289006 0.24956327 0.28688095] 1
False
[ 0.22199745 0.54462253 0.25294864] 1
True
[ 0.53977555 0.100484 0.44618496] 0
True
[ 0.32189546 0.31417724 0.33168708] 1
False
[ 0.39535867 0.32099903 0.25370856] 0
True
[ 0.55267717 0.13830931 0.32987438] 0
True
[ 0.27741726 0.5025696 0.22864759] 1
True
[ 0.31499978 0.58059937 0.15402867] 1
True
[ 0.3291373 0.64136632 0.16132066] 1
True
[ 0.38638056 0.26592606 0.28576134] 0
True
[ 0.46380679 0.1553491 0.36302994] 2
False
[ 0.30178881 0.53001022 0.22133048] 1
True
[ 0.32719005 0.39042349 0.25158272] 0
False
[ 0.25022329 0.39188437 0.36717176] 2
False
[ 0.43693924 0.16374302 0.40621774] 2
False
[ 0.29502981 0.26889442 0.45550422] 2
True
[ 0.22260856 0.40463514 0.37442727] 1
True
[ 0.4323616 0.17311474 0.36667908] 0
True
[ 0.39809463 0.1399582 0.48892191] 2
True
[ 0.23588237 0.50820153 0.27702164] 1
True
[ 0.4905868 0.07713024 0.55674652] 0
False
[ 0.40765448 0.23838919 0.3743914 ] 0
True
[ 0.47712294 0.11904491 0.44582215] 0
True
[ 0.23490668 0.65782344 0.18737276] 2
False
[ 0.42444413 0.16065653 0.44533441] 2
True
[ 0.25715853 0.46256609 0.29194893] 2
False
[ 0.29523336 0.36350692 0.3334364 ] 1
True
[ 0.19217242 0.81194973 0.13103392] 2
False
[ 0.17222081 0.11161724 0.75242678] 2
True
[ 0.49239289 0.07948035 0.56376406] 2
True
[ 0.36034906 0.1943114 0.46588985] 2
True
[ 0.36893189 0.26914472 0.33831398] 1
False
[ 0.40010583 0.22312118 0.36775336] 2
False
[ 0.43046569 0.12820888 0.4488418 ] 0
False
[ 0.37117648 0.15169601 0.49463065] 0
False
[ 0.3404238 0.62810508 0.15329779] 1
True
[ 0.48465884 0.11033222 0.46772138] 0
True
[ 0.38260351 0.09219661 0.58413578] 2
True
[ 0.42178842 0.04757282 0.66907034] 2
True
[ 0.34224997 0.1267923 0.56203321] 2
True
[ 0.17682604 0.73020557 0.20865036] 1
True
[ 0.14395999 0.72863988 0.24262988] 1
True
[ 0.51772861 0.11101542 0.48049012] 0
True
[ 0.47993315 0.05736159 0.58267988] 2
True
[ 0.15706935 0.16630471 0.65705881] 2
True
[ 0.39102424 0.33189228 0.28618767] 0
True
[ 0.20908924 0.1527291 0.62354882] 2
True
[ 0.39064355 0.23894609 0.36845422] 2
False
[ 0.35856662 0.32205886 0.2973734 ] 1
False
[ 0.36702066 0.36534061 0.26615176] 1
False
[ 0.59308256 0.04258671 0.57031 ] 2
False
[ 0.40073251 0.15278475 0.45608652] 0
False
[ 0.23061409 0.68324521 0.178274 ] 1
True
[ 0.40270828 0.08255491 0.5847453 ] 2
True
[ 0.46391817 0.08540374 0.52753973] 2
True
[ 0.41149887 0.13286006 0.4373048 ] 1
False
[ 0.4233606 0.16577028 0.41076984] 2
False
[ 0.2151679 0.38640282 0.41784282] 2
True
[ 0.13493279 0.22019167 0.65902231] 2
True
[ 0.49063816 0.03497225 0.70109227] 2
True
[ 0.4100642 0.2733818 0.30270168] 0
True
[ 0.16976056 0.84524966 0.13797973] 1
True
[ 0.20334536 0.43623067 0.36105751] 1
True
[ 0.36706523 0.15874036 0.48857653] 0
False
[ 0.4239077 0.11267319 0.46736177] 0
False
[ 0.32997971 0.16407724 0.52391194] 2
True
[ 0.14634941 0.84632309 0.1527225 ] 2
False
[ 0.26563096 0.29809925 0.4665115 ] 2
True
[ 0.14797871 0.65432143 0.26846961] 1
True
[ 0.47781126 0.27217991 0.27590963] 1
False
[ 0.22361262 0.68661757 0.20387207] 1
True
[ 0.43405544 0.18496336 0.38854036] 0
True
[ 0.43458161 0.15144201 0.41872987] 2
False
[ 0.15449801 0.6035592 0.2948552 ] 1
True
[ 0.36718041 0.18373108 0.45257331] 2
True
[ 0.32385458 0.24642511 0.42328615] 2
True
[ 0.39739496 0.05011281 0.69044474] 2
True
[ 0.23867439 0.34229537 0.41143099] 2
True
[ 0.45150462 0.13812293 0.45373182] 2
True
[ 0.16082656 0.52938187 0.37595461] 1
True
[ 0.23136696 0.26635254 0.5224063 ] 2
True
[ 0.44921032 0.14271413 0.43345898] 0
True
[ 0.37710178 0.21849998 0.37432264] 0
True
[ 0.38713939 0.26933633 0.32375089] 1
False
[ 0.44787592 0.07701754 0.54654883] 2
True
[ 0.31443525 0.17144333 0.5192223 ] 2
True
[ 0.39934004 0.20896311 0.38433808] 2
False
[ 0.27854752 0.58096186 0.19795449] 1
True
[ 0.30642673 0.2396243 0.41532898] 0
False
[ 0.17283219 0.52945522 0.3323669 ] 1
True
[ 0.2755654 0.39381642 0.33060478] 0
False
[ 0.45476579 0.15014187 0.38646963] 2
False
[ 0.42029819 0.21173199 0.37788879] 0
True
[ 0.42190329 0.12144488 0.46456388] 2
True
[ 0.20818712 0.46853361 0.36441965] 1
True
[ 0.48768261 0.09750825 0.47666405] 0
True
[ 0.5043959 0.07363596 0.54547732] 0
False
[ 0.1957318 0.74450727 0.1695845 ] 0
False
[ 0.26806476 0.37074345 0.35705715] 1
True
[ 0.25481454 0.59389443 0.23130547] 0
False
[ 0.46684092 0.10384639 0.49982974] 0
False
[ 0.37745233 0.17971759 0.41646908] 2
True
[ 0.17328891 0.25682405 0.53596163] 2
True
[ 0.25185219 0.42928425 0.33861571] 1
True
[ 0.45179296 0.11243819 0.48321856] 0
False
[ 0.48066769 0.123124 0.4150158 ] 2
False
[ 0.18655058 0.13293922 0.6950068 ] 2
True
[ 0.39219645 0.33707706 0.2763676 ] 1
False
[ 0.56086044 0.0340123 0.63264941] 2
True
[ 0.39830182 0.17318444 0.40879116] 2
True
[ 0.28917281 0.72706548 0.11819681] 1
True
[ 0.39248897 0.30342071 0.29739142] 0
True
[ 0.42954769 0.06162826 0.63077493] 2
True
[ 0.23669618 0.62716075 0.19669512] 1
True
[ 0.45601613 0.10147022 0.49332648] 0
False
[ 0.22149075 0.53440876 0.2536873 ] 1
True
[ 0.49659738 0.11750334 0.4556904 ] 0
True
[ 0.32175775 0.52008509 0.20765005] 1
True
[ 0.35119354 0.59099213 0.14811822] 1
True
[ 0.46562141 0.10955841 0.46844667] 0
False
[ 0.46042783 0.17429189 0.37771812] 2
False
[ 0.26862321 0.68039471 0.17651796] 1
True
[ 0.49290565 0.09547448 0.46357415] 0
True
[ 0.17975589 0.66827123 0.24123025] 1
True
[ 0.55876345 0.10868105 0.40681517] 0
True
[ 0.19439801 0.84128877 0.09635492] 1
True
[ 0.18572088 0.63055315 0.27990683] 1
True
[ 0.38074169 0.17029731 0.41700985] 0
False
[ 0.4274413 0.13198567 0.44605945] 0
False
[ 0.2711287 0.55352858 0.17816309] 1
True
[ 0.3105431 0.61329972 0.16072069] 1
True
[ 0.47964119 0.11786423 0.43566729] 2
False
[ 0.28194434 0.53329267 0.22851141] 0
False
[ 0.44505243 0.12809587 0.478815 ] 2
True
[ 0.4523288 0.3222222 0.21815079] 1
False
[ 0.2182912 0.73386119 0.17258829] 2
False
[ 0.34540614 0.32643103 0.2855079 ] 0
True
[ 0.40247265 0.41170264 0.20151978] 1
True
[ 0.32022476 0.50099196 0.2370812 ] 1
True
[ 0.47277896 0.1454765 0.39696525] 2
False
[ 0.41715793 0.16633612 0.4128879 ] 2
False
[ 0.42249089 0.17773608 0.41655558] 0
True
[ 0.28429509 0.66529736 0.14023001] 1
True
[ 0.36122864 0.38390446 0.28447105] 0
False
[ 0.29586544 0.60139213 0.17024388] 1
True
[ 0.46369259 0.06115921 0.58373536] 2
True
[ 0.29126347 0.2908431 0.39755738] 2
True
[ 0.52782372 0.1150388 0.43561767] 2
False
[ 0.50390148 0.17407074 0.34208527] 0
True
[ 0.47456313 0.08712882 0.51514811] 0
False
[ 0.53188883 0.07579119 0.47898837] 0
True
[ 0.46017657 0.03109736 0.73321437] 2
True
[ 0.4837177 0.25291737 0.26913659] 0
True
[ 0.35695692 0.4254183 0.22677328] 0
False
[ 0.5687584 0.05480892 0.49259351] 0
True
[ 0.48959511 0.14204137 0.40836842] 0
True
[ 0.49483531 0.14234009 0.39540838] 0
True
[ 0.48455157 0.08647163 0.49999071] 0
False
[ 0.593872 0.13869239 0.31027251] 0
True
[ 0.51520599 0.0877553 0.4830207 ] 0
True
[ 0.25927795 0.54391849 0.24664561] 1
True
[ 0.41504528 0.4530083 0.17198602] 1
True
[ 0.40998259 0.08442581 0.56293597] 2
True
[ 0.26907231 0.08300241 0.68844418] 2
True
[ 0.43114722 0.2588678 0.27292989] 1
False
[ 0.45768019 0.40778334 0.1722869 ] 1
False
[ 0.20429604 0.79633145 0.12919458] 1
True
[ 0.54698244 0.03837037 0.60940223] 2
True
[ 0.34882307 0.62851593 0.12929248] 1
True
[ 0.47119512 0.07642076 0.53301752] 2
True
[ 0.52545867 0.08435967 0.48480564] 0
True
[ 0.3067689 0.58451794 0.1761192 ] 0
False
[ 0.37948739 0.2616286 0.33150264] 2
False
[ 0.47798793 0.10035291 0.44866353] 2
False
[ 0.48895745 0.07525938 0.53419096] 0
False
[ 0.21865405 0.59540487 0.23160881] 1
True
[ 0.59806629 0.08485426 0.4568444 ] 0
True
[ 0.24745874 0.51384589 0.27192429] 2
False
[ 0.23378087 0.52928555 0.26937988] 1
True
[ 0.47020968 0.19422104 0.39426991] 0
True
[ 0.58416838 0.08784775 0.37830303] 0
True
[ 0.4976199 0.12985411 0.40124947] 0
True
[ 0.34228679 0.40147889 0.26927782] 1
True
[ 0.44265375 0.20457104 0.35432304] 2
False
[ 0.20947453 0.79402459 0.13059016] 2
False
[ 0.27270337 0.72743598 0.13651751] 1
True
[ 0.50085387 0.10719339 0.46543555] 2
False
[ 0.37848897 0.27220781 0.33387091] 2
False
[ 0.50555202 0.09264824 0.47343919] 2
False
[ 0.36344631 0.19140713 0.46399748] 2
True
[ 0.19527177 0.86564381 0.10542391] 1
True
[ 0.49253702 0.12172905 0.39741972] 0
True
[ 0.43814213 0.23855103 0.33158832] 0
True
[ 0.12523368 0.92457487 0.11114251] 1
True
[ 0.17239853 0.82331596 0.14525429] 2
False
[ 0.4085104 0.1365316 0.48983785] 2
True
[ 0.25732478 0.64897714 0.18449519] 1
True
[ 0.33690774 0.49702382 0.19912512] 1
True
[ 0.34417288 0.27835077 0.37753208] 0
False
[ 0.30738572 0.42388137 0.29525596] 1
True
[ 0.22688971 0.5888254 0.25368227] 1
True
[ 0.42357766 0.28239638 0.27866808] 1
False
[ 0.4340436 0.16371746 0.39532294] 2
False
[ 0.45866118 0.13853333 0.43517375] 2
False
[ 0.35219044 0.39816156 0.23801812] 1
True
[ 0.45561693 0.09182851 0.50859249] 0
False
[ 0.21301021 0.63656282 0.24057342] 2
False
[ 0.17283137 0.08558539 0.77614286] 2
True
[ 0.19964492 0.42500041 0.41437367] 1
True
[ 0.47704088 0.08468875 0.53714661] 0
False
[ 0.41873706 0.1239205 0.50062719] 2
True
[ 0.44029802 0.15605601 0.45020581] 0
False
[ 0.18255547 0.45138303 0.33882256] 1
True
[ 0.16441077 0.13649631 0.73194935] 2
True
[ 0.3797281 0.17657142 0.47883927] 2
True
[ 0.26571502 0.64339802 0.17586502] 1
True
[ 0.43937915 0.11597223 0.48579687] 2
True
[ 0.36346079 0.20288858 0.42468104] 2
True
[ 0.46527757 0.13601837 0.42322247] 0
True
[ 0.16229578 0.11749759 0.75720622] 2
True
[ 0.3738176 0.23182602 0.38507343] 0
False
[ 0.27847672 0.47119668 0.25461316] 0
False
[ 0.2988846 0.38144672 0.31805868] 0
False
[ 0.155234 0.26657912 0.60550152] 2
True
[ 0.37506247 0.21427043 0.3952969 ] 0
False
[ 0.24713165 0.62252238 0.20852572] 2
False
[ 0.29315485 0.45560944 0.27751115] 1
True
[ 0.30794524 0.32960397 0.35160464] 1
False
[ 0.41767562 0.09809855 0.52355151] 2
True
[ 0.29600573 0.46240002 0.27362861] 1
True
[ 0.26893345 0.4643511 0.27005758] 0
False
[ 0.3966216 0.26180057 0.33303348] 0
True
[ 0.35062216 0.36403391 0.26631142] 0
False
[ 0.2680437 0.51514209 0.25556412] 1
True
[ 0.5112042 0.02082278 0.7756092 ] 2
True
[ 0.34552585 0.56641473 0.1681587 ] 1
True
[ 0.45818235 0.27019634 0.27513142] 0
True
[ 0.42072907 0.22487439 0.34713224] 2
False
[ 0.46815339 0.14529278 0.43658727] 0
True
[ 0.39170425 0.4731093 0.18243606] 1
True
[ 0.38397535 0.29141073 0.31864978] 1
False
[ 0.50032635 0.1092864 0.48082502] 0
True
[ 0.43757348 0.15995916 0.39459685] 2
False
[ 0.3563508 0.41410732 0.22260862] 0
False
[ 0.31056157 0.65654877 0.17439243] 1
True
[ 0.36506248 0.58958498 0.14384772] 1
True
[ 0.37374172 0.37277771 0.27549821] 0
True
[ 0.47151812 0.34828006 0.19373818] 1
False
[ 0.49444643 0.13579616 0.40399789] 2
False
[ 0.48344857 0.07112946 0.56136439] 0
False
[ 0.30166194 0.54715418 0.22105642] 1
True
[ 0.47125504 0.33301559 0.22096935] 2
False
[ 0.21797302 0.56384115 0.29165099] 1
True
[ 0.19624998 0.1018125 0.73471021] 2
True
[ 0.49700284 0.11083184 0.44234016] 0
True
[ 0.48542542 0.02606823 0.73275759] 2
True
[ 0.55337154 0.09808526 0.42380996] 0
True
[ 0.31679718 0.56061262 0.16184455] 1
True
[ 0.21185586 0.13018583 0.66449359] 2
True
[ 0.4547495 0.2047964 0.35462735] 0
True
[ 0.2123719 0.12545712 0.67010121] 2
True
[ 0.22449546 0.60612423 0.22503536] 2
False
[ 0.30573619 0.23527343 0.47466305] 2
True
[ 0.31024353 0.41670097 0.27471613] 0
False
[ 0.2171689 0.72735141 0.16933145] 2
False
[ 0.25477291 0.24256686 0.50296273] 2
True
[ 0.26880031 0.62677464 0.21137865] 1
True
[ 0.46522499 0.15962076 0.40764222] 2
False
[ 0.19911673 0.693658 0.23335988] 2
False
[ 0.39434425 0.06928278 0.63064092] 2
True
[ 0.48327951 0.09913941 0.51231831] 0
False
[ 0.4295016 0.04834377 0.69346376] 2
True
[ 0.52472249 0.12315588 0.40744936] 0
True
[ 0.28122569 0.19421899 0.57534319] 2
True
[ 0.31469023 0.13089344 0.61701683] 2
True
[ 0.17154356 0.08953704 0.80081159] 2
True
[ 0.27871041 0.25844478 0.49321121] 2
True
[ 0.20902014 0.58011069 0.26708662] 1
True
[ 0.47140756 0.05847397 0.56867268] 0
False
[ 0.24580109 0.54877889 0.24979839] 1
True
[... per-sample output truncated: one line per training sample showing the sigmoid output vector f = [f_F f_C f_W], the true class index (0, 1 or 2), and True/False for whether argmax(f) matched the label ...]
6000
/anaconda/lib/python2.7/site-packages/IPython/kernel/__main__.py:6: RuntimeWarning: overflow encountered in exp
```python
####### Test phase (with colour features) #######
Error_Test=[]
N_correct=0

###### Choose patch and define label #####
#for PP in range(0,len(Sequence)):
for PP in range(20000, 21000):
    SS=Sequence[PP]
    if SS<N_F:
        Class_label=np.array([1,0,0])
        inputPatch=Patches_F[SS]
        Int_RGB=np.mean(np.mean(Patches_F_RGB[SS,:,:,:], axis=0), axis=0)/255
    elif (SS>=N_F) and (SS<(N_F+N_C)):
        Class_label=np.array([0,1,0])
        inputPatch=Patches_C[SS-N_F]
        Int_RGB=np.mean(np.mean(Patches_C_RGB[SS-N_F,:,:,:], axis=0), axis=0)/255
    elif (SS>=(N_F+N_C)) and (SS<N_F+N_C+N_W):
        Class_label=np.array([0,0,1])
        inputPatch=Patches_W[SS-N_F-N_C]
        Int_RGB=np.mean(np.mean(Patches_W_RGB[SS-N_F-N_C,:,:,:], axis=0), axis=0)/255
    ### Forward pass: convolution -> pooling -> convolution -> pooling ###
    H1=[]
    H2=[]
    H3=np.zeros((len(C1), N_branches, S_H3, S_H3))
    H4=np.zeros((len(C1), N_branches, S_H4, S_H4))  # must be initialised here, not left commented out
    x=np.zeros(ClassAmount)
    f=np.zeros(ClassAmount)
    for r in range(0, len(C1)):
        H1.append(sig.convolve(inputPatch, C1[r], 'valid'))
        H2.append(Pool(H1[r], P12))
        for b in range(0, N_branches):
            H3[r][b]=Sigmoid(sig.convolve(H2[r], C2[r][b], 'valid')-H3_bias[r][b])
            H4[r][b]=Pool(H3[r][b], P34)
    # The flattened 3x3x4x4 feature maps plus the mean RGB intensities feed the 3 output nodes
    y=np.append([H4.flatten()], [Int_RGB])
    for k in range(0, ClassAmount):
        W_t=np.append([W[k].flatten()], [W2[k]])
        x[k]=np.inner(y, W_t)
        f[k]=Sigmoid(x[k]-Output_bias[k])
    f=f/np.sum(f)
    Error_Test.append(np.sum((Class_label-f)**2))
    if np.argmax(f)==np.argmax(Class_label):
        N_correct=N_correct+1

Perc_corr=float(N_correct)/1000
print Perc_corr
```
0.649
/anaconda/lib/python2.7/site-packages/IPython/kernel/__main__.py:6: RuntimeWarning: overflow encountered in exp
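The cells in this section rely on helper definitions from earlier in the notebook that are not shown in this excerpt: the `Sigmoid` activation, its derivative `Sigmoid_dx`, the `Pool` downsampling step, and `sig`, which from the call `sig.convolve(..., 'valid')` is presumably `scipy.signal`. Below is a minimal sketch of what these helpers plausibly look like given how they are called here; the exact definitions (including whether `Pool` is mean- or max-pooling) live earlier in the notebook and are assumptions on my part. The `RuntimeWarning: overflow encountered in exp` printed above comes from evaluating `np.exp(-x)` for large-magnitude arguments; clipping the input, as in the sketch, silences it without changing the result meaningfully.

```python
import numpy as np
import scipy.signal as sig  # assumed binding: the notebook calls sig.convolve(..., 'valid')


def Sigmoid(x):
    # Logistic activation; clipping the argument avoids the
    # "overflow encountered in exp" RuntimeWarning seen in the output above.
    return 1.0 / (1.0 + np.exp(-np.clip(x, -500, 500)))


def Sigmoid_dx(x):
    # Derivative of the logistic function, used in the delta rule during back-propagation.
    s = Sigmoid(x)
    return s * (1.0 - s)


def Pool(feature_map, P):
    # Assumed mean-pooling over non-overlapping P x P windows.
    h, w = feature_map.shape
    h, w = h - h % P, w - w % P  # trim so the map tiles evenly
    return feature_map[:h, :w].reshape(h // P, P, w // P, P).mean(axis=(1, 3))
```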
```python
# TRAINING AND TEST PHASE WITHOUT COLOUR FEATURES
#ERROR_cv2=np.zeros([1])
for CROSSES in range(0,1):
    C2=np.copy(C2_INIT)   # copy, so repeated folds do not mutate the initial weights in place
    W=np.copy(W_INIT)
    #W2=W2_INIT
    n_W=1
    n_C2=10*10**-2        # = 0.1
    Sample_iterations=0
    ###### Choose patch and define label #####
    #for PP in range(0,len(Sequence)):
    for PP in range(0,6000):
        SS=Sequence[PP]
        if SS<N_F:
            Class_label=np.array([1,0,0])
            inputPatch=Patches_F[SS]
        elif (SS>=N_F) and (SS<(N_F+N_C)):
            Class_label=np.array([0,1,0])
            inputPatch=Patches_C[SS-N_F]
        elif (SS>=(N_F+N_C)) and (SS<N_F+N_C+N_W):
            Class_label=np.array([0,0,1])
            inputPatch=Patches_W[SS-N_F-N_C]
        #else:
        #    Class_label=np.array([0,0,0,1])
        #    inputPatch=Patches_G[SS-N_F-N_C-N_W]
        H3=np.zeros((len(C1), N_branches, S_H3, S_H3))
        H4=np.zeros((len(C1), N_branches, S_H4, S_H4))
        x=np.zeros(ClassAmount)
        f=np.zeros(ClassAmount)
        II=1
        ITER=0
        while II==1:
            ### Forward pass ###
            H1=[]   # reset on every pass; appending across passes would reuse stale feature maps
            H2=[]
            for r in range(0, len(C1)):
                H1.append(sig.convolve(inputPatch, C1[r], 'valid'))
                H2.append(Pool(H1[r], P12))
                for b in range(0, N_branches):
                    H3[r][b]=Sigmoid(sig.convolve(H2[r], C2[r][b], 'valid')-H3_bias[r][b])
                    H4[r][b]=Pool(H3[r][b], P34)
            # The flattened 3x3x4x4 feature maps are connected to the 3 output nodes
            y=H4.flatten()
            for k in range(0, ClassAmount):
                W_t=W[k].flatten()
                x[k]=np.inner(y, W_t)
                f[k]=Sigmoid(x[k]-Output_bias[k])
            #f=f/np.sum(f)   # no normalisation during training
            ###### Back-propagation #####
            # First compute the deltas
            delta_H4=np.zeros([ClassAmount, len(C1), N_branches, S_H4, S_H4])
            e_k=f-Class_label
            delta_k=e_k*Sigmoid_dx(x)
            for k in range(0, ClassAmount):
                # update the output-layer weights, then propagate the delta back
                W[k]=W[k]-n_W*delta_k[k]*H4
                delta_H4[k]=delta_k[k]*W[k]   # was delta_H4[i]=delta_k[i]*W[i]; `i` is undefined here
            delta_H4=np.sum(delta_H4, axis=0)
            delta_H3=(float(1)/10)*delta_H4   # scale the gradient back through the pooling stage
            C2_diff=np.zeros([len(C1), N_branches, Size_C2, Size_C2])
            for r in range(0, len(C1)):
                # correlate each delta map with the matching window of H2 (window size 4 = S_H4)
                C2_t=np.array([[delta_H3[r][:]*H2[r][(0+u):(4+u),(0+v):(4+v)] for u in range(0,Size_C2)] for v in range(0,Size_C2)])
                C2_t=np.sum(np.sum(C2_t, axis=4), axis=3)
                C2_t=np.rollaxis(C2_t, 2)
                C2_diff[r]=-n_C2*C2_t
            C2=C2+C2_diff
            ERROR=np.sum((Class_label-f)**2)
            ITER=ITER+1
            if ERROR<0.55 or ITER>4:   # stop refining this sample once the error is small enough
                II=0
        print f, np.argmax(Class_label)
        if np.argmax(f)==np.argmax(Class_label):
            print True
        else:
            print False
        ### Stepwise learning-rate decay ###
        Sample_iterations=Sample_iterations+1
        if Sample_iterations>1000:
            n_W=0.7
            n_C2=0.7*10*10**-2
        if Sample_iterations>2000:
            n_W=0.7*0.7
            n_C2=0.7*0.7*10*10**-2
        if Sample_iterations>3000:
            n_W=0.7*0.7*0.7
            n_C2=0.7*0.7*0.7*10*10**-2
        if Sample_iterations>5000:
            n_W=0.2
            n_C2=0.02
    print "Training completed"
    ###### Test phase ######
    N_correct=0
    for PP in range(6000, 7000):
        SS=Sequence[PP]
        if SS<N_F:
            Class_label=np.array([1,0,0])
            inputPatch=Patches_F[SS]
        elif (SS>=N_F) and (SS<(N_F+N_C)):
            Class_label=np.array([0,1,0])
            inputPatch=Patches_C[SS-N_F]
        elif (SS>=(N_F+N_C)) and (SS<N_F+N_C+N_W):
            Class_label=np.array([0,0,1])
            inputPatch=Patches_W[SS-N_F-N_C]
        ### Forward pass only ###
        H1=[]
        H2=[]
        H3=np.zeros((len(C1), N_branches, S_H3, S_H3))
        H4=np.zeros((len(C1), N_branches, S_H4, S_H4))
        x=np.zeros(ClassAmount)
        f=np.zeros(ClassAmount)
        for r in range(0, len(C1)):
            H1.append(sig.convolve(inputPatch, C1[r], 'valid'))
            H2.append(Pool(H1[r], P12))
            for b in range(0, N_branches):
                H3[r][b]=Sigmoid(sig.convolve(H2[r], C2[r][b], 'valid')-H3_bias[r][b])
                H4[r][b]=Pool(H3[r][b], P34)
        y=H4.flatten()
        for k in range(0, ClassAmount):
            W_t=W[k].flatten()
            x[k]=np.inner(y, W_t)
            f[k]=Sigmoid(x[k]-Output_bias[k])
        f=f/np.sum(f)
        if np.argmax(f)==np.argmax(Class_label):
            N_correct=N_correct+1
    Perc_corr=float(N_correct)/1000
    print Perc_corr
    #ERROR_cv2[CROSSES]=Perc_corr
    #Sequence=np.roll(Sequence,2262)
```
[... per-sample training output truncated: for each of the 6000 samples the run prints the score vector f = [f_F f_C f_W], the true class index, and True/False for whether argmax(f) matched the label; the dump is cut off here in the source ...]
True
[ 0.26136023 0.48262884 0.24664827] 1
True
[ 0.34999152 0.39653969 0.25198472] 0
False
[ 0.31609196 0.39084938 0.28852457] 0
False
[ 0.41756617 0.37018661 0.22580863] 0
True
[ 0.41847742 0.24458452 0.35713697] 2
False
[ 0.37190298 0.32206274 0.30505638] 1
False
[ 0.34476261 0.32740861 0.28847441] 2
False
[ 0.27298219 0.30821738 0.40575123] 2
True
[ 0.37241517 0.26050217 0.35528607] 0
True
[ 0.37230549 0.22304454 0.40251721] 2
True
[ 0.36932578 0.40730793 0.24066453] 1
True
[ 0.34953755 0.25745772 0.38325459] 0
False
[ 0.48618717 0.3428034 0.19308973] 1
False
[ 0.32440549 0.36089773 0.31086523] 1
True
[ 0.27371205 0.3149417 0.40995591] 2
True
[ 0.33178754 0.31543673 0.37263501] 2
True
[ 0.23515255 0.29638847 0.46672378] 2
True
[ 0.28378143 0.32000644 0.40291071] 2
True
[ 0.26468716 0.34075375 0.41154849] 2
True
[ 0.20141 0.41602361 0.40917603] 1
True
[ 0.23082335 0.38518344 0.41951467] 2
True
[ 0.26845476 0.32242336 0.42538995] 0
False
[ 0.24738968 0.38192046 0.39960448] 1
False
[ 0.33452307 0.26813439 0.38371598] 0
False
[ 0.29420301 0.311338 0.3716475 ] 1
False
[ 0.35846456 0.24538161 0.41479801] 2
True
[ 0.29353069 0.40268148 0.30980933] 1
True
[ 0.30570537 0.45791828 0.24786273] 1
True
[ 0.2449166 0.35976057 0.42200056] 2
True
[ 0.24274134 0.33093297 0.43229227] 2
True
[ 0.30518579 0.26127678 0.44310116] 0
False
[ 0.31046604 0.28475509 0.40666984] 2
True
[ 0.31958934 0.28790372 0.37289717] 0
False
[ 0.32828972 0.34020146 0.30402166] 1
True
[ 0.32494914 0.25850383 0.40838686] 2
True
[ 0.26820998 0.32235453 0.40654548] 2
True
[ 0.29498481 0.3021004 0.40151489] 2
True
[ 0.26107962 0.29353089 0.44288633] 1
False
[ 0.34046113 0.42457553 0.24628096] 0
False
[ 0.31444319 0.22345873 0.47106888] 0
False
[ 0.42896509 0.23995596 0.38479081] 0
True
[ 0.39208004 0.22100149 0.41875429] 2
True
[ 0.35950681 0.31999036 0.31062391] 1
False
[ 0.25867231 0.39553138 0.34122583] 1
True
[ 0.28064042 0.31992375 0.39850339] 2
True
[ 0.24277129 0.48424412 0.26886935] 1
True
[ 0.28102623 0.36754974 0.35011147] 2
False
[ 0.26178606 0.37980861 0.37247051] 1
True
[ 0.25412849 0.37852177 0.34080548] 0
False
[ 0.31123305 0.4022252 0.28161484] 1
True
[ 0.21869989 0.35238823 0.40435169] 2
True
[ 0.29268569 0.33634451 0.38238276] 1
False
[ 0.28800197 0.26517059 0.44978014] 2
True
[ 0.30277948 0.26987519 0.41492803] 0
False
[ 0.40089892 0.2869194 0.2994761 ] 0
True
[ 0.3639452 0.27684575 0.36293035] 2
False
[ 0.33210359 0.2767901 0.37740605] 0
False
[ 0.39309896 0.326682 0.27289018] 0
True
[ 0.3271235 0.24786598 0.4428568 ] 2
True
[ 0.28773349 0.38904011 0.30777522] 1
True
[ 0.4142834 0.23400036 0.35224588] 0
True
[ 0.35351341 0.44021225 0.20722866] 1
True
[ 0.35381149 0.26266485 0.37981698] 2
True
[ 0.34177091 0.3519488 0.27985454] 0
False
[ 0.40400617 0.31986597 0.28229663] 1
False
[ 0.29816799 0.39284431 0.29317967] 2
False
[ 0.33446404 0.31145876 0.36674211] 2
True
[ 0.30031758 0.29975037 0.40590203] 2
True
[ 0.29141975 0.39298667 0.30491584] 1
True
[ 0.22772957 0.24455054 0.55540772] 2
True
[ 0.18496636 0.40151027 0.45800561] 1
False
[ 0.21436724 0.53794462 0.27234371] 1
True
[ 0.23747715 0.38804196 0.38468679] 2
False
[ 0.36825405 0.26089837 0.36559736] 0
True
[ 0.40700967 0.28564631 0.29737239] 0
True
[ 0.41663849 0.21122054 0.39249414] 0
True
[ 0.33860349 0.35494305 0.31169709] 1
True
[ 0.42673985 0.29988754 0.26999076] 0
True
[ 0.41601317 0.34042609 0.24004765] 1
False
[ 0.26758223 0.41282185 0.32453494] 1
True
[ 0.33598984 0.44949779 0.23116566] 0
False
[ 0.39797563 0.29335533 0.31120398] 0
True
[ 0.40999589 0.36160596 0.25774549] 0
True
[ 0.48911444 0.3521487 0.19160182] 0
True
[ 0.33740632 0.31177987 0.35227936] 2
True
[ 0.41516319 0.40546977 0.20489608] 0
True
[ 0.40982013 0.36960328 0.24145746] 0
True
[ 0.45135236 0.31330082 0.24764178] 0
True
[ 0.48360828 0.38511541 0.17411469] 0
True
[ 0.40941569 0.34835287 0.25762333] 0
True
[ 0.48722608 0.27534116 0.22531386] 2
False
[ 0.37786947 0.32576414 0.29116745] 1
False
[ 0.4315591 0.39304662 0.1979154 ] 0
True
[ 0.30958984 0.40650964 0.26762242] 1
True
[ 0.42077187 0.32155774 0.26542887] 0
True
[ 0.335712 0.34622526 0.32638685] 2
False
[ 0.33125275 0.38166106 0.28497848] 1
True
[ 0.34838791 0.39157533 0.27476035] 1
True
[ 0.26791359 0.42551591 0.32656234] 2
False
[ 0.28565002 0.35216461 0.34879405] 2
False
[ 0.25003084 0.35991799 0.40362707] 2
True
[ 0.26317295 0.37146497 0.37799636] 2
True
[ 0.33025651 0.3534568 0.30629693] 0
False
[ 0.35878038 0.25984893 0.36529282] 0
False
[ 0.31810355 0.2735984 0.40344461] 2
True
[ 0.27874833 0.3620744 0.34326181] 1
True
[ 0.37032648 0.36522422 0.26070587] 0
True
[ 0.29544465 0.31255756 0.40354311] 2
True
[ 0.40030969 0.27659781 0.31051897] 0
True
[ 0.40370717 0.22900633 0.36538052] 2
False
[ 0.35671232 0.2311144 0.4138003 ] 2
True
[ 0.38194193 0.23607557 0.39665356] 2
True
[ 0.3212262 0.1652465 0.58014426] 2
True
[ 0.43635609 0.17017387 0.44189359] 2
True
[ 0.24034172 0.38882772 0.37433017] 1
True
[ 0.25323625 0.39845069 0.34737636] 1
True
[ 0.33545379 0.41646929 0.26128057] 1
True
[ 0.24612865 0.33482908 0.42760969] 1
False
[ 0.28714618 0.40592148 0.30103598] 0
False
[ 0.40144802 0.35313139 0.25399733] 0
True
[ 0.30332682 0.31935474 0.38233167] 0
False
[ 0.382254 0.35319797 0.25954687] 1
False
[ 0.33508238 0.39526876 0.26436479] 2
False
[ 0.31860284 0.30752044 0.39506967] 2
True
[ 0.25859181 0.33215673 0.39992957] 2
True
[ 0.26446588 0.3553201 0.37873635] 1
False
[ 0.23719616 0.32389707 0.43110469] 2
True
[ 0.3661077 0.34402304 0.28634834] 0
True
[ 0.4140062 0.21993064 0.37659243] 0
True
[ 0.47426145 0.21397405 0.33206135] 0
True
[ 0.27112282 0.33264957 0.38865492] 2
True
[ 0.4309329 0.17600236 0.42800577] 2
False
[ 0.41083987 0.22132655 0.38079431] 0
True
[ 0.36070654 0.22466647 0.42537292] 0
False
[ 0.40975158 0.20577299 0.39475172] 0
True
[ 0.49064729 0.22408051 0.28721723] 0
True
[ 0.45367381 0.21192282 0.34857635] 1
False
[ 0.48511939 0.19024624 0.35343654] 0
True
[ 0.41465384 0.20491454 0.39604081] 0
True
[ 0.39908216 0.27208043 0.30238124] 1
False
[ 0.3358432 0.27123499 0.38589026] 2
True
[ 0.34304524 0.29896751 0.35109568] 0
False
[ 0.47074484 0.26994189 0.27545583] 0
True
[ 0.43313884 0.19832568 0.37218185] 2
False
[ 0.40718115 0.31039316 0.26777557] 1
False
[ 0.35276041 0.28147886 0.34433051] 0
True
[ 0.4110653 0.27009053 0.3023744 ] 2
False
[ 0.35725822 0.33843239 0.28809386] 2
False
[ 0.31275948 0.38209609 0.29905193] 1
True
[ 0.30731355 0.38522036 0.30270076] 1
True
[ 0.39178707 0.25652967 0.35292734] 0
True
[ 0.36221653 0.35182155 0.25709066] 1
False
[ 0.27437268 0.42082153 0.29932628] 2
False
[ 0.30647648 0.34775449 0.35160324] 0
False
[ 0.33124958 0.27308515 0.40678284] 2
True
[ 0.37993973 0.21482337 0.41770351] 2
True
[ 0.30532952 0.35788273 0.32492665] 0
False
[ 0.28536934 0.38954293 0.30006058] 1
True
[ 0.27952 0.40137322 0.30086324] 1
True
[ 0.26299542 0.32569331 0.40429507] 2
True
[ 0.2932765 0.35742168 0.3421619 ] 1
True
[ 0.29253302 0.35665418 0.34208707] 0
False
[ 0.39432458 0.38978691 0.21903564] 0
True
[ 0.40638166 0.27349838 0.33866791] 0
True
[ 0.3712981 0.26289869 0.32789487] 1
False
[ 0.30197025 0.41295176 0.27979398] 1
True
[ 0.32885666 0.33428671 0.31479496] 2
False
[ 0.28956131 0.39003281 0.32382217] 1
True
[ 0.30594835 0.40895497 0.28895152] 1
True
[ 0.27498346 0.41370585 0.31623576] 2
False
[ 0.28949763 0.39623721 0.31011864] 1
True
[ 0.26681583 0.33800309 0.38473744] 2
True
[ 0.26538354 0.40259163 0.31733993] 1
True
[ 0.39973012 0.31458435 0.27428849] 0
True
[ 0.22801068 0.369722 0.40894166] 2
True
[ 0.25385237 0.32047061 0.40996036] 2
True
[ 0.35940179 0.27732245 0.35909029] 0
True
[ 0.24755573 0.3337564 0.40927614] 2
True
[ 0.1915093 0.4169088 0.40361781] 1
True
[ 0.29682283 0.28948069 0.43744443] 2
True
[ 0.21364497 0.37283817 0.43107979] 2
True
[ 0.19202954 0.40772207 0.43129754] 2
True
[ 0.18701568 0.42540221 0.40642719] 1
True
[ 0.29247254 0.4280054 0.28455959] 1
True
[ 0.25874972 0.32726325 0.41065905] 2
True
[ 0.25921162 0.39730412 0.33596809] 1
True
[ 0.20408635 0.40174439 0.41330187] 2
True
[ 0.31244875 0.27168847 0.43226114] 0
False
[ 0.22792691 0.41110618 0.37032468] 1
True
[ 0.25614564 0.40123236 0.35451338] 1
True
[ 0.34163408 0.33736692 0.30687303] 0
True
[ 0.28811957 0.32236294 0.39927151] 2
True
[ 0.37719679 0.34565532 0.2736706 ] 0
True
[ 0.33511377 0.29833116 0.36362664] 0
False
[ 0.2568881 0.40712799 0.33379485] 2
False
[ 0.37685645 0.29117611 0.33190577] 2
False
[ 0.32059471 0.31025789 0.37570695] 1
False
[ 0.23093633 0.37456141 0.42305932] 2
True
[ 0.2021931 0.41056893 0.41053346] 1
True
[ 0.31994553 0.38993009 0.29857497] 1
True
[ 0.23108538 0.4252686 0.37372522] 2
False
[ 0.23562294 0.40729448 0.36503803] 1
True
[ 0.32445056 0.43484088 0.24141386] 0
False
[ 0.29760037 0.41135472 0.31087018] 1
True
[ 0.28945106 0.43904594 0.28339063] 1
True
[ 0.26030007 0.34810956 0.39529053] 2
True
[ 0.28354191 0.41481675 0.3065829 ] 0
False
[ 0.26751785 0.41740797 0.32172374] 1
True
[ 0.19734735 0.45769636 0.38949769] 2
False
[ 0.31123617 0.30136807 0.39964072] 2
True
[ 0.22217094 0.38522834 0.4116068 ] 2
True
[ 0.23956538 0.41411313 0.35542345] 1
True
[ 0.20596022 0.34246287 0.46213275] 2
True
[ 0.27653107 0.38796886 0.33495839] 1
True
[ 0.32489361 0.3143676 0.35713992] 0
False
[ 0.27217037 0.40255617 0.31840213] 1
True
[ 0.21865092 0.38576639 0.41869898] 2
True
[ 0.29724356 0.40912409 0.28961197] 0
False
[ 0.26977384 0.38619198 0.35290441] 1
True
[ 0.3932478 0.39222427 0.21796583] 0
True
[ 0.24876581 0.36106614 0.41335152] 2
True
[ 0.2719515 0.4491126 0.27339104] 1
True
[ 0.29498631 0.37414679 0.33295129] 2
False
[ 0.38621946 0.30618484 0.29391083] 0
True
[ 0.37976475 0.35103509 0.28288483] 0
True
[ 0.2802444 0.32474215 0.39522288] 2
True
[ 0.39961306 0.30414972 0.31205725] 1
False
[ 0.31016105 0.28171606 0.42538254] 1
False
[ 0.33408265 0.41370754 0.27569337] 1
True
[ 0.29949701 0.29994464 0.40038289] 2
True
[ 0.34374737 0.35224665 0.29308231] 0
False
[ 0.31872663 0.28810569 0.39781488] 2
True
[ 0.33169254 0.38443079 0.30169339] 0
False
[ 0.34228676 0.25613338 0.40281591] 2
True
[ 0.26305439 0.40228179 0.33607616] 1
True
[ 0.27102825 0.31726086 0.39785381] 2
True
[ 0.38668033 0.28596988 0.32099453] 2
False
[ 0.29366544 0.32960467 0.36692643] 1
False
[ 0.31060778 0.28972957 0.40177129] 2
True
[ 0.30635084 0.40055702 0.29027592] 2
False
[ 0.29060257 0.30710332 0.41072366] 2
True
[ 0.21415248 0.4197333 0.38560328] 1
True
[ 0.28997523 0.39522194 0.33010709] 2
False
[ 0.28583906 0.27512047 0.4281195 ] 2
True
[ 0.26848111 0.28086481 0.44503966] 0
False
[ 0.2405793 0.2403163 0.54933415] 2
True
[ 0.30900563 0.40647111 0.30338797] 1
True
[ 0.20879005 0.39895301 0.40786382] 1
False
[ 0.27035643 0.37748206 0.32367733] 0
False
[ 0.40163309 0.31909222 0.26555738] 0
True
[ 0.3672226 0.30593499 0.31189255] 0
True
[ 0.32459255 0.30957317 0.361603 ] 2
True
[ 0.26653998 0.31622522 0.40719902] 2
True
[ 0.39109585 0.31907284 0.29489997] 1
False
[ 0.29839389 0.29414226 0.39575632] 2
True
[ 0.35235093 0.36596673 0.28621517] 1
True
[ 0.14889157 0.45330162 0.43990624] 1
True
[ 0.27887008 0.37994163 0.32726299] 0
False
[ 0.2571742 0.33844514 0.39437348] 2
True
[ 0.3107452 0.29428217 0.39900124] 2
True
[ 0.39817648 0.24924109 0.35130556] 0
True
[ 0.35948961 0.30972319 0.30846827] 0
True
[ 0.39634578 0.30506652 0.29677696] 1
False
[ 0.33188862 0.28150569 0.38497136] 2
True
[ 0.26908077 0.32334723 0.40029912] 2
True
[ 0.34847359 0.38276098 0.2685903 ] 1
True
[ 0.32237902 0.32825432 0.36143263] 1
False
[ 0.35255276 0.32894061 0.30768817] 0
True
[ 0.29492173 0.28689055 0.44232017] 2
True
[ 0.31618975 0.32422275 0.34902246] 2
True
[ 0.402739 0.31333846 0.28704879] 0
True
[ 0.32334518 0.35558552 0.31861064] 1
True
[ 0.34592604 0.24650592 0.4064269 ] 2
True
[ 0.35038964 0.30869096 0.32651031] 0
True
[ 0.34562422 0.25462947 0.41596312] 0
False
[ 0.36902492 0.41030286 0.22670359] 1
True
[ 0.34813802 0.34275596 0.28322862] 1
False
[ 0.30296128 0.37637371 0.31335712] 1
True
[ 0.25973406 0.40121003 0.31068112] 1
True
[ 0.40706524 0.42136008 0.21183536] 0
False
[ 0.40979193 0.42746415 0.19909244] 1
True
[ 0.30204223 0.39193286 0.30910094] 1
True
[ 0.35856382 0.5028167 0.17873542] 1
True
[ 0.4274475 0.40812184 0.18920205] 0
True
[ 0.31836767 0.40550598 0.27438588] 1
True
[ 0.42701144 0.41841438 0.18171837] 0
True
[ 0.39495308 0.383939 0.21540769] 2
False
[ 0.37136303 0.40976724 0.21220385] 1
True
[ 0.28275973 0.32170884 0.38685927] 2
True
[ 0.3699444 0.41937427 0.22321922] 0
False
[ 0.38790586 0.33999242 0.25692939] 2
False
[ 0.38449892 0.35746801 0.27136645] 1
False
[ 0.37105268 0.40303499 0.22608874] 1
True
[ 0.40016867 0.34203834 0.24725331] 0
True
[ 0.28565159 0.38605486 0.34021809] 2
False
[ 0.29455693 0.44341199 0.27635794] 1
True
[ 0.37043606 0.42538557 0.22060708] 1
True
[ 0.37653002 0.31687778 0.28919069] 0
True
[ 0.3241459 0.34017541 0.33524903] 2
False
[ 0.3384982 0.25838105 0.40292718] 2
True
[ 0.25620117 0.348784 0.39802368] 1
False
[ 0.2967027 0.38490161 0.30782818] 2
False
[ 0.24536894 0.34218208 0.42039452] 2
True
[ 0.33073466 0.37545423 0.28826679] 1
True
[ 0.25417954 0.39033424 0.35008159] 2
False
[ 0.38365057 0.34105596 0.2913704 ] 0
True
[ 0.35896353 0.35583333 0.29971747] 1
False
[ 0.29577837 0.3002796 0.40592415] 2
True
[ 0.2808576 0.31534624 0.40037754] 2
True
[ 0.21501685 0.31985791 0.45635064] 1
False
[ 0.30341265 0.36083253 0.35601865] 2
False
[ 0.27211957 0.34588286 0.39139909] 1
False
[ 0.19350898 0.34129351 0.470953 ] 2
True
[ 0.29903583 0.40701046 0.31159637] 0
False
[ 0.16312008 0.41998127 0.45983191] 2
True
[ 0.22432639 0.38179924 0.38906379] 2
True
[ 0.26000237 0.36697969 0.39023615] 2
True
[ 0.1954299 0.29460387 0.56323824] 1
False
[ 0.21704727 0.41543843 0.38617892] 1
True
[ 0.16735222 0.41580685 0.45266998] 2
True
[ 0.20695625 0.38441562 0.42021734] 0
False
[ 0.16236152 0.43541307 0.44321223] 1
False
[ 0.27163925 0.368293 0.33003624] 0
False
[ 0.39769635 0.27418558 0.31145253] 0
True
[ 0.32420424 0.31744208 0.33648612] 0
False
[ 0.41238738 0.2201617 0.38603133] 0
True
[ 0.36318069 0.31469129 0.30124286] 1
False
[ 0.36640516 0.39139424 0.26736457] 0
False
[ 0.45132224 0.31512652 0.22649814] 0
True
[ 0.35633753 0.28224425 0.3556234 ] 2
False
[ 0.40207467 0.29247599 0.31517312] 0
True
[ 0.28186945 0.36727475 0.34758977] 1
True
[ 0.26476936 0.30921656 0.44869388] 2
True
[ 0.40858536 0.34945641 0.25426935] 0
True
[ 0.33959805 0.30615346 0.33090511] 2
False
[ 0.30963275 0.31458027 0.37072436] 1
False
[ 0.33236494 0.29549669 0.36209091] 0
False
[ 0.3664084 0.36856106 0.25560623] 1
True
[ 0.43176383 0.31417758 0.25730992] 2
False
[ 0.30514036 0.37158692 0.32746657] 1
True
[ 0.39053129 0.32556758 0.29448267] 0
True
[ 0.39417121 0.33082234 0.26351061] 0
True
[ 0.42422432 0.37414907 0.21870636] 0
True
[ 0.44253924 0.35956414 0.22194134] 1
False
[ 0.34556306 0.36290424 0.29316887] 2
False
[ 0.4582798 0.27657984 0.27659582] 2
False
[ 0.31410728 0.31911791 0.34988196] 2
True
[ 0.24315595 0.36352245 0.39417871] 1
False
[ 0.44016497 0.31745192 0.25656097] 0
True
[ 0.25554006 0.38409502 0.36815687] 2
False
[ 0.27602062 0.31645125 0.39801309] 2
True
[ 0.38070835 0.30324015 0.30450654] 0
True
[ 0.40681169 0.29374036 0.28118358] 0
True
[ 0.4043544 0.29028895 0.31122613] 0
True
[ 0.31999139 0.31109525 0.36930662] 1
False
[ 0.3449215 0.28820351 0.34756262] 0
False
[ 0.32877071 0.32347158 0.35092053] 1
False
[ 0.34793026 0.33484286 0.33035795] 2
False
[ 0.4107757 0.31451159 0.26308393] 1
False
[ 0.36000019 0.32991489 0.29783264] 2
False
[ 0.39083819 0.28141781 0.30636254] 0
True
[ 0.39123708 0.245335 0.35674909] 2
False
[ 0.32034081 0.33149308 0.32705934] 0
False
[ 0.37584619 0.24968796 0.37679547] 1
False
[ 0.38225442 0.34906742 0.26571047] 1
False
[ 0.40875157 0.29354565 0.30685038] 0
True
[ 0.38025456 0.38935947 0.2310629 ] 0
False
[ 0.3472639 0.40860492 0.22768972] 1
True
[ 0.39719934 0.31419541 0.27967485] 0
True
[ 0.28551385 0.40918706 0.3112003 ] 1
True
[ 0.37840948 0.39610333 0.24335154] 1
True
[ 0.35668718 0.39324485 0.25096555] 0
False
[ 0.28141909 0.36583153 0.35115024] 2
False
[ 0.2735088 0.43770216 0.30466886] 1
True
[ 0.30084303 0.32178909 0.36942467] 2
True
[ 0.36536864 0.3689927 0.26635558] 0
False
[ 0.40561877 0.32941712 0.26891127] 0
True
[ 0.33992196 0.39652366 0.25402764] 1
True
[ 0.39233441 0.31377267 0.27791716] 2
False
[ 0.30416286 0.28302022 0.40555889] 2
True
[ 0.2688353 0.37576821 0.34994928] 1
True
[ 0.38914588 0.34886765 0.26427594] 0
True
[ 0.39931908 0.30100894 0.30574628] 0
True
[ 0.37902431 0.31740454 0.31820603] 1
False
[ 0.29719827 0.37190284 0.34745378] 0
False
[ 0.2900261 0.31331923 0.40649365] 2
True
[ 0.40171303 0.31440051 0.28003556] 0
True
[ 0.34634238 0.38368164 0.27036693] 1
True
[ 0.31800538 0.36710127 0.29798078] 2
False
[ 0.31001897 0.30982221 0.38309511] 2
True
[ 0.31907027 0.26264734 0.4393747 ] 2
True
[ 0.28761307 0.36186048 0.35340184] 2
False
[ 0.31796712 0.2744556 0.39331257] 2
True
[ 0.26613138 0.32157147 0.44145981] 2
True
[ 0.30013578 0.28572614 0.43037702] 1
False
[ 0.40239222 0.20004345 0.41324279] 2
True
[ 0.24291943 0.36105092 0.40715177] 2
True
[ 0.3913461 0.23878677 0.36117347] 2
False
[ 0.26685548 0.34276207 0.41741926] 0
False
[ 0.26217999 0.28958244 0.46263369] 1
False
[ 0.25121363 0.31390103 0.4612583 ] 1
False
[ 0.29172444 0.34581719 0.37009707] 0
False
[ 0.29065072 0.29254535 0.39984351] 0
False
[ 0.28151794 0.3574145 0.34285245] 1
True
[ 0.31673504 0.38397617 0.29068569] 1
True
[ 0.32056008 0.35221834 0.32522112] 2
False
[ 0.29227257 0.32486104 0.37475178] 0
False
[ 0.24330173 0.36676905 0.41550389] 2
True
[ 0.38019598 0.30628696 0.31694993] 1
False
[ 0.23761159 0.35666383 0.39713434] 2
True
[ 0.27060617 0.36787392 0.37855654] 2
True
[ 0.25906117 0.33221158 0.41322198] 2
True
[ 0.29163915 0.3539178 0.35466854] 1
False
[ 0.36013572 0.31452001 0.31043849] 0
True
[ 0.28621215 0.34022015 0.35580859] 1
False
[ 0.22372353 0.36693838 0.41241157] 2
True
[ 0.2743529 0.3999342 0.31690982] 1
True
[ 0.26059911 0.382224 0.36340429] 2
False
[ 0.20927238 0.41584542 0.40384834] 1
True
[ 0.26490325 0.39536803 0.35190874] 0
False
[ 0.30448607 0.36140913 0.34455108] 1
True
[ 0.37620908 0.31870664 0.28881752] 0
True
[ 0.26782348 0.35461833 0.38685936] 2
True
[ 0.26688376 0.35476168 0.35786999] 0
False
[ 0.40942864 0.33517315 0.25886957] 0
True
[ 0.35937037 0.28413125 0.35356858] 2
False
[ 0.36629908 0.34057019 0.30192083] 1
False
[ 0.33231755 0.284287 0.37724532] 2
True
[ 0.40815727 0.33085112 0.26717322] 0
True
[ 0.2481924 0.35086513 0.41557118] 1
False
[ 0.2346992 0.41291338 0.36435033] 1
True
[ 0.2544908 0.42613463 0.30659192] 1
True
[ 0.2476376 0.43560976 0.32464392] 1
True
[ 0.31159071 0.3076169 0.35754232] 2
True
[ 0.30060228 0.37851312 0.30471902] 0
False
[ 0.40289268 0.2995473 0.28763997] 0
True
[ 0.32937759 0.30571894 0.36686133] 2
True
[ 0.22235837 0.3594837 0.43893796] 2
True
[ 0.34630819 0.29235429 0.35285131] 0
False
[ 0.34025326 0.39447894 0.27056175] 1
True
[ 0.36939017 0.2856811 0.35142484] 2
False
[ 0.26303132 0.31263587 0.42177924] 0
False
[ 0.32015583 0.26753096 0.40915194] 0
False
[ 0.39340714 0.29833248 0.30210237] 0
True
[ 0.34586783 0.3018904 0.35913018] 2
True
[ 0.42552084 0.30974811 0.27143825] 0
True
[ 0.31306359 0.32879028 0.35170383] 1
False
[ 0.41890492 0.30043049 0.27437487] 0
True
[ 0.32575602 0.31688654 0.33490553] 2
True
[ 0.41053795 0.3650369 0.2412354 ] 0
True
[ 0.27242753 0.40378198 0.32406813] 1
True
[ 0.39738391 0.36810275 0.24906011] 1
False
[ 0.32256024 0.39774759 0.28795325] 1
True
[ 0.41733055 0.35737673 0.22774386] 2
False
[ 0.27047957 0.33320253 0.38894107] 2
True
[ 0.29112667 0.28885419 0.40867611] 2
True
[ 0.28815351 0.30987582 0.40931042] 2
True
[ 0.24382936 0.36040952 0.39708046] 2
True
[ 0.31366524 0.28126725 0.41735708] 2
True
[ 0.33570592 0.31606457 0.33936983] 0
False
[ 0.26799068 0.34997469 0.37090703] 1
False
[ 0.29487408 0.3537276 0.34036376] 1
True
[ 0.22661805 0.40614795 0.35880306] 1
True
[ 0.24078211 0.40651005 0.35670231] 1
True
[ 0.19827007 0.4433909 0.3877895 ] 1
True
[ 0.34074505 0.40640283 0.25343876] 0
False
[ 0.27435193 0.33948345 0.3636277 ] 0
False
[ 0.32585661 0.29665777 0.36029156] 2
True
[ 0.38170355 0.2929521 0.30128128] 0
True
[ 0.38624271 0.27040186 0.31526754] 2
False
[ 0.24202957 0.31114504 0.45031983] 2
True
[ 0.38004506 0.35956775 0.26697478] 0
True
[ 0.30839248 0.38609344 0.31204633] 1
True
[ 0.3020632 0.31104508 0.39002919] 2
True
[ 0.39741539 0.275399 0.31865619] 0
True
[ 0.31164823 0.27490495 0.39944111] 2
True
[ 0.41306562 0.27127591 0.32413346] 0
True
[ 0.3412443 0.31960519 0.33917277] 0
True
[ 0.36250429 0.23488584 0.40859957] 2
True
[ 0.39878394 0.20533573 0.41274594] 2
True
[ 0.36517813 0.2559194 0.3979128 ] 0
False
[ 0.43085993 0.30874859 0.2686704 ] 0
True
[ 0.37278053 0.22297847 0.41501888] 2
True
[ 0.31270593 0.27847925 0.41049457] 1
False
[ 0.24417587 0.40932404 0.34473788] 1
True
[ 0.35777678 0.40556291 0.24919672] 1
True
[ 0.25677215 0.37511284 0.37291355] 1
True
[ 0.2556928 0.36747683 0.38012161] 2
True
[ 0.40646516 0.23016034 0.37699636] 0
True
[ 0.34755496 0.26399142 0.38716221] 2
True
[ 0.34161162 0.22286666 0.45061198] 2
True
[ 0.28540032 0.35616072 0.35147395] 1
True
[ 0.44151721 0.28648282 0.28562209] 1
False
[ 0.2955817 0.37671227 0.33884951] 2
False
[ 0.24655642 0.41689267 0.36151924] 1
True
[ 0.20350128 0.39924094 0.42784001] 2
True
[ 0.2788258 0.32722984 0.38543692] 2
True
[ 0.18684139 0.33887933 0.49667099] 0
False
[ 0.32876604 0.28635917 0.3901943 ] 2
True
[ 0.33026413 0.21971558 0.42868755] 0
False
[ 0.41737863 0.18533278 0.41055214] 0
True
[ 0.3012328 0.35740101 0.34195152] 0
False
[ 0.38536158 0.24551245 0.37661629] 2
False
[ 0.4209155 0.17344895 0.42585641] 0
False
[ 0.44205314 0.22965579 0.34262807] 0
True
[ 0.3292047 0.26799523 0.40722706] 2
True
[ 0.42317243 0.21429173 0.36463079] 0
True
[ 0.40449396 0.25027595 0.34239783] 0
True
[ 0.41726993 0.22258521 0.37871756] 0
True
[ 0.41356065 0.24864598 0.36916671] 0
True
[ 0.36323755 0.22021293 0.40771386] 2
True
[ 0.40133972 0.16294436 0.4662006 ] 2
True
[ 0.34917813 0.28623564 0.34779903] 2
False
[ 0.44800407 0.20113297 0.35526428] 1
False
[ 0.34416861 0.2407679 0.4112534 ] 0
False
[ 0.41008192 0.21942249 0.38560395] 0
True
[ 0.4343202 0.22966727 0.35391323] 1
False
[ 0.36835163 0.26685967 0.36617984] 2
False
[ 0.45498495 0.26904999 0.27204212] 1
False
[ 0.31561498 0.2993203 0.38004308] 2
True
[ 0.31875103 0.28057236 0.38544738] 2
True
[ 0.34115321 0.26694286 0.35402934] 1
False
[ 0.28972697 0.38425659 0.31832315] 1
True
[ 0.32437208 0.33733727 0.32473344] 2
False
[ 0.24221921 0.40512407 0.34255391] 1
True
[ 0.2849878 0.35836891 0.35085039] 2
False
[ 0.21050791 0.37900987 0.41736727] 2
True
[ 0.25802117 0.27584883 0.46276725] 2
True
[ 0.32783281 0.29760916 0.35636692] 0
False
[ 0.34489345 0.31068071 0.32829455] 1
False
[ 0.39607222 0.25188313 0.32622268] 0
True
[ 0.31831547 0.28738559 0.36502892] 2
True
[ 0.2444281 0.35520838 0.39723551] 1
False
[ 0.22640761 0.36308194 0.41275528] 2
True
[ 0.2480159 0.33030324 0.43350874] 2
True
[ 0.21710068 0.38692602 0.40879778] 2
True
[ 0.3453992 0.25954134 0.40063096] 0
False
[ 0.26220652 0.32006268 0.42130505] 1
False
[ 0.31881669 0.26663611 0.40358453] 0
False
[ 0.29260925 0.38658065 0.3060426 ] 1
True
[ 0.29620229 0.30895486 0.38915789] 2
True
[ 0.33682131 0.29063639 0.37888326] 2
True
[ 0.23587973 0.42017851 0.36912062] 1
True
[ 0.24658658 0.35859268 0.40600588] 2
True
[ 0.25025245 0.40148795 0.35709422] 1
True
[ 0.23226829 0.39690696 0.36652126] 1
True
[ 0.23766359 0.42170793 0.35746974] 1
True
[ 0.28953055 0.37636739 0.32703903] 2
False
[ 0.24427338 0.34182602 0.40800566] 2
True
[ 0.28905338 0.3072762 0.40921014] 0
False
[ 0.18970108 0.41938901 0.40564825] 1
True
[ 0.27429705 0.39008068 0.33910581] 1
True
[ 0.33038269 0.36239626 0.30319294] 1
True
[ 0.16220002 0.44071511 0.42944758] 2
False
[ 0.2905225 0.38279988 0.31925364] 0
False
[ 0.30439528 0.28874317 0.3979432 ] 2
True
[ 0.28166255 0.32627095 0.39889607] 2
True
[ 0.21663561 0.40784196 0.39184929] 2
False
[ 0.21880035 0.28385789 0.52196883] 2
True
[ 0.32114765 0.28119261 0.39938107] 2
True
[ 0.22925335 0.3408512 0.4263444 ] 1
False
[ 0.24044667 0.37991405 0.39636987] 1
False
[ 0.26167441 0.27768076 0.44200964] 0
False
[ 0.20283121 0.419982 0.38060688] 1
True
[ 0.24118243 0.3688842 0.39023893] 2
True
[ 0.24335413 0.46769231 0.29937715] 1
True
[ 0.38585282 0.26828192 0.34849333] 0
True
[ 0.26994944 0.34951504 0.38358807] 2
True
[ 0.27537548 0.35983195 0.36128877] 0
False
[ 0.26551771 0.40589604 0.33589597] 1
True
[ 0.26742258 0.37133092 0.37243429] 0
False
[ 0.35665794 0.329298 0.31819778] 0
True
[ 0.34584883 0.34703393 0.31242434] 0
False
[ 0.35998218 0.29123902 0.34937201] 2
False
[ 0.38070389 0.26259656 0.34248433] 0
True
[ 0.34331061 0.36437029 0.28027043] 1
True
[ 0.33448616 0.27798558 0.39142376] 2
True
[ 0.3152507 0.29260987 0.39886981] 2
True
[ 0.25844633 0.33920362 0.39707997] 2
True
[ 0.29710781 0.34172963 0.35804948] 1
False
[ 0.21706782 0.32788686 0.46789994] 2
True
[ 0.29153856 0.34791953 0.35980789] 0
False
[ 0.30138354 0.2655358 0.4312949 ] 2
True
[ 0.32221314 0.26732386 0.39016093] 2
True
[ 0.28290671 0.34062228 0.38130979] 1
False
[ 0.28880777 0.3176217 0.38976245] 1
False
[ 0.29956204 0.29618087 0.39857996] 2
True
[ 0.26230613 0.36588256 0.38908909] 2
True
[ 0.20799051 0.37801525 0.45399707] 1
False
[ 0.31263609 0.40021684 0.29393985] 1
True
[ 0.26702019 0.38919087 0.35952619] 2
False
[ 0.2901929 0.35966973 0.33987647] 0
False
[ 0.21389152 0.37977005 0.41561856] 1
False
[ 0.2234845 0.38132626 0.40689543] 0
False
[ 0.32988225 0.36344403 0.2862247 ] 0
False
[ 0.40439057 0.28510952 0.30609133] 0
True
[ 0.27116626 0.32830173 0.38996205] 2
True
[ 0.33374424 0.25893309 0.40302344] 2
True
[ 0.28540887 0.33989291 0.37793569] 2
True
[ 0.3176566 0.29149744 0.39572376] 0
False
[ 0.34644372 0.24699507 0.4124084 ] 2
True
[ 0.2589351 0.24173517 0.50572322] 1
False
[ 0.33058148 0.28104216 0.37365239] 0
False
[ 0.4019282 0.23715539 0.36923374] 0
True
[ 0.40172848 0.3151438 0.28684545] 0
True
[ 0.40162552 0.26145821 0.33282087] 0
True
[ 0.4526273 0.19611408 0.36143882] 0
True
[ 0.40412043 0.2060473 0.40566278] 2
True
[ 0.23654992 0.31865717 0.46701718] 1
False
[ 0.29254943 0.30538351 0.3933015 ] 2
True
[ 0.32038561 0.25638519 0.41924478] 2
True
[ 0.30381146 0.29242831 0.39533067] 2
True
[ 0.42496409 0.21433685 0.38631527] 2
False
[ 0.36168064 0.22466028 0.41694133] 0
False
[ 0.30766299 0.27126143 0.42107084] 0
False
[ 0.38208981 0.21698389 0.40624406] 2
True
[ 0.31319661 0.2453593 0.45111178] 0
False
[ 0.44959892 0.25040536 0.31975719] 0
True
[ 0.39185159 0.20544988 0.41448317] 0
False
[ 0.42665933 0.18796216 0.42817319] 2
True
[ 0.35494526 0.26660627 0.38509062] 1
False
[ 0.36855466 0.22838916 0.40827568] 2
True
[ 0.36325597 0.23617963 0.39797369] 1
False
[ 0.38615771 0.19303962 0.42972766] 2
True
[ 0.33535104 0.24196788 0.40066581] 1
False
[ 0.32540559 0.2606134 0.42050413] 2
True
[ 0.34764225 0.31922342 0.30705542] 1
False
[ 0.31632061 0.36680688 0.30609313] 0
False
[ 0.30259465 0.36422633 0.3437031 ] 1
True
[ 0.27477972 0.33926439 0.37484257] 2
True
[ 0.30013923 0.27243214 0.41569567] 0
False
[ 0.36625098 0.23227086 0.40543525] 2
True
[ 0.34832868 0.30743834 0.31354681] 1
False
[ 0.30957309 0.29746629 0.37679406] 0
False
[ 0.31075383 0.3499265 0.33120247] 1
True
[ 0.28210533 0.30659422 0.39579049] 2
True
[ 0.30245744 0.39994068 0.30403172] 1
True
[ 0.36500993 0.31866293 0.32122502] 0
True
[ 0.31473675 0.31100904 0.36066193] 2
True
[ 0.38920786 0.2972194 0.30829139] 0
True
[ 0.25821265 0.37489966 0.36758429] 1
True
[ 0.28533046 0.40397402 0.31078017] 1
True
[ 0.326121 0.31038267 0.35544537] 2
True
[ 0.29884113 0.39862139 0.29368332] 1
True
[ 0.25628488 0.40104914 0.33186119] 1
True
[ 0.29015567 0.29174383 0.41953868] 2
True
[ 0.24907835 0.40658089 0.34758986] 1
True
[ 0.32915178 0.35297174 0.30735115] 0
False
[ 0.28795901 0.42613835 0.27935467] 1
True
[ 0.32280075 0.30507035 0.3721811 ] 2
True
[ 0.3006165 0.30275785 0.3958813 ] 0
False
[ 0.31605324 0.39521681 0.28259672] 1
True
[ 0.2759847 0.35667884 0.37444553] 2
True
[ 0.27389246 0.3988879 0.32111324] 1
True
[ 0.27593336 0.39832853 0.32752426] 1
True
[ 0.25824367 0.35548314 0.39396094] 2
True
[ 0.29424999 0.41921035 0.29913371] 1
True
[ 0.25501922 0.41834422 0.32534979] 1
True
[ 0.22289764 0.46673546 0.31454099] 1
True
[ 0.22640962 0.42831528 0.35950442] 2
False
[ 0.29350488 0.31695576 0.4014269 ] 2
True
[ 0.14733983 0.49285449 0.44394964] 2
False
[ 0.1592956 0.43683559 0.46839975] 1
False
[ 0.19795889 0.40815215 0.41507201] 2
True
[ 0.19202739 0.42326082 0.40637958] 1
True
[ 0.22016117 0.39265315 0.41960366] 2
True
[ 0.17673834 0.4170544 0.43308787] 2
True
[ 0.30330834 0.31104712 0.38389161] 0
False
[ 0.30203348 0.2845925 0.39974484] 0
False
[ 0.16341829 0.43003358 0.44397125] 1
False
[ 0.24975588 0.39959754 0.34189551] 1
True
[ 0.18522461 0.52773025 0.33274745] 1
True
[ 0.33063849 0.33781339 0.33364757] 0
False
[ 0.26493872 0.34986648 0.36903772] 2
True
[ 0.23250637 0.36288536 0.41115528] 2
True
[ 0.2529708 0.39533279 0.36774148] 1
True
[ 0.2255714 0.43121606 0.35959395] 1
True
[ 0.18783854 0.42531684 0.41525164] 2
False
[ 0.22411261 0.40448204 0.37464438] 1
True
[ 0.27076146 0.40249756 0.32728502] 1
True
[ 0.17011088 0.50690897 0.3670034 ] 0
False
[ 0.26380435 0.45483279 0.30153971] 1
True
[ 0.25261581 0.39605649 0.35735801] 2
False
[ 0.24190901 0.39596255 0.37855995] 1
True
[ 0.24672454 0.50133967 0.27313622] 1
True
[ 0.24557672 0.44463174 0.29537861] 0
False
[ 0.26589044 0.43829551 0.28909765] 0
False
[ 0.28426443 0.38772777 0.34436487] 2
False
[ 0.38648658 0.31351947 0.28770219] 0
True
[ 0.34961962 0.32125108 0.32339399] 1
False
[ 0.34734738 0.3384771 0.29724448] 0
True
[ 0.29092225 0.33767689 0.34929041] 2
True
[ 0.33143661 0.34987598 0.3128506 ] 0
False
[ 0.33889952 0.39470003 0.25464447] 1
True
[ 0.39833809 0.32229218 0.27071285] 0
True
[ 0.32674864 0.3222986 0.33925042] 2
True
[ 0.36873587 0.24706283 0.3892428 ] 2
True
[ 0.40731518 0.23873747 0.32396637] 0
True
[ 0.40170464 0.3052326 0.28295215] 0
True
[ 0.28537128 0.32101292 0.39670524] 2
True
[ 0.26826231 0.31269827 0.39978906] 1
False
[ 0.39756534 0.28163869 0.32425223] 0
True
[ 0.29477756 0.34516136 0.36428642] 2
True
[ 0.33382578 0.30906752 0.33592719] 2
True
[ 0.38180081 0.27630073 0.32320953] 1
False
[ 0.3705653 0.29432764 0.3162598 ] 0
True
[ 0.36483933 0.2646002 0.37422857] 0
False
[ 0.32426064 0.34520292 0.33132623] 1
True
[ 0.32682003 0.30758002 0.36249804] 0
False
[ 0.31286179 0.38702943 0.2823499 ] 1
True
[ 0.32985882 0.29160338 0.3744401 ] 2
True
[ 0.38719602 0.26817883 0.33021869] 2
False
[ 0.27798654 0.37968025 0.34621092] 1
True
[ 0.28518507 0.32348014 0.37426526] 2
True
[ 0.31974683 0.39559077 0.29299276] 1
True
[ 0.2580233 0.40271989 0.35444452] 1
True
[ 0.25553587 0.4015824 0.32751339] 1
True
[ 0.21804088 0.36257164 0.44869538] 2
True
[ 0.31893534 0.37037984 0.30966409] 0
False
[ 0.28110214 0.4015954 0.31811517] 1
True
[ 0.29978768 0.37128062 0.31545837] 0
False
[ 0.22071312 0.38075926 0.41794946] 2
True
[ 0.23354753 0.36098746 0.41333779] 2
True
[ 0.3246203 0.27632674 0.40835353] 2
True
[ 0.2812127 0.39471121 0.32252227] 1
True
[ 0.26000771 0.3634347 0.37029946] 0
False
[ 0.2984241 0.34572535 0.37089411] 2
True
[ 0.2880812 0.3243064 0.40473453] 2
True
[ 0.33851842 0.27972681 0.39068807] 0
False
[ 0.25154698 0.33629112 0.41335546] 2
True
[ 0.30622491 0.29029031 0.40406861] 2
True
[ 0.29450142 0.32931427 0.3718178 ] 1
False
[ 0.3448311 0.39758523 0.26118395] 0
False
[ 0.38666255 0.24138175 0.3762284 ] 0
True
[ 0.28088891 0.32381227 0.39921738] 2
True
[ 0.30772141 0.29133631 0.38437337] 0
False
[ 0.37708246 0.27900442 0.34065864] 0
True
[ 0.41702977 0.24419529 0.34974384] 2
False
[ 0.33961271 0.24883113 0.42738742] 2
True
[ 0.40753438 0.23567606 0.3691392 ] 0
True
[ 0.42543026 0.25140914 0.35541791] 0
True
[ 0.384104 0.27729826 0.33436147] 1
False
[ 0.44591839 0.22683883 0.33253049] 0
True
[ 0.34621037 0.33060247 0.33346214] 1
False
[ 0.34466293 0.32913892 0.32127887] 2
False
[ 0.31751076 0.28016128 0.39667397] 0
False
[ 0.35265206 0.25695954 0.39738384] 2
True
[ 0.27195264 0.37767245 0.34897033] 1
True
[ 0.35467588 0.32991941 0.30246559] 0
True
[ 0.40739233 0.30229495 0.30370022] 0
True
[ 0.31426292 0.28295653 0.41848414] 1
False
[ 0.40593581 0.25338336 0.33284369] 2
False
[ 0.31195499 0.27185911 0.41225342] 2
True
[ 0.39631929 0.29775198 0.30978945] 0
True
[ 0.42441001 0.19973616 0.40830346] 0
True
[ 0.42404133 0.22112732 0.39092802] 0
True
[ 0.43133201 0.24833127 0.33940926] 0
True
[ 0.41155807 0.23257042 0.36359581] 2
False
[ 0.36483437 0.28084571 0.35408794] 2
False
[ 0.39257446 0.20840751 0.40288816] 2
True
[ 0.32841612 0.25860249 0.39971367] 2
True
[ 0.35116467 0.27756559 0.37117687] 1
False
[ 0.31253659 0.25716321 0.42734002] 1
False
[ 0.34662818 0.24895824 0.41121368] 2
True
[ 0.42695523 0.17835718 0.4254789 ] 0
True
[ 0.44256649 0.2340801 0.32458801] 0
True
[ 0.36403587 0.26927454 0.37397916] 2
True
[ 0.38061542 0.24340231 0.37741779] 0
True
[ 0.3460875 0.28167377 0.39901423] 1
False
[ 0.38306159 0.28084975 0.33554686] 0
True
[ 0.40883475 0.23587088 0.37571812] 0
True
[ 0.32312933 0.29115282 0.38600672] 2
True
[ 0.44865267 0.23513919 0.31888977] 0
True
[ 0.58413772 0.15394214 0.31841576] 0
True
[ 0.39097005 0.21072009 0.42055162] 2
True
[ 0.43531703 0.16893589 0.43117372] 0
True
[ 0.3691424 0.21458501 0.43680357] 0
False
[ 0.39190885 0.31345613 0.29778417] 1
False
[ 0.40063849 0.24530056 0.34382923] 0
True
[ 0.4279289 0.25369136 0.32767488] 0
True
[ 0.39671401 0.30843833 0.29116473] 1
False
[ 0.3445076 0.3291699 0.3044418] 1
False
[ 0.45994855 0.29183593 0.25192461] 1
False
[ 0.39745415 0.23336363 0.37621402] 2
False
[ 0.39864366 0.3620089 0.23050204] 0
True
[ 0.39904975 0.22545042 0.39709164] 2
False
[ 0.35063701 0.28715438 0.36226001] 2
True
[ 0.35272673 0.24011979 0.40693675] 2
True
[ 0.41767865 0.21307141 0.40136432] 0
True
[ 0.3664702 0.23155649 0.41879383] 2
True
[ 0.30031058 0.26360211 0.43270516] 2
True
[ 0.32778796 0.27585932 0.39538795] 2
True
[ 0.27918867 0.22398472 0.52213018] 2
True
[ 0.33841823 0.2494712 0.39761921] 0
False
[ 0.36598469 0.20387917 0.46163882] 0
False
[ 0.3410019 0.27467976 0.39017448] 2
True
[ 0.44144102 0.16430962 0.44668334] 0
False
[ 0.28384884 0.32275522 0.38935019] 1
False
[ 0.49654545 0.16798982 0.38151703] 0
True
[ 0.36563934 0.2518658 0.38047388] 1
False
[ 0.36678786 0.26464191 0.34403605] 0
True
[ 0.39263352 0.280073 0.31534422] 0
True
[ 0.39613689 0.31447488 0.29119253] 1
False
[ 0.30782633 0.390163 0.30048308] 1
True
[ 0.38834206 0.36914293 0.24445256] 0
True
[ 0.37624514 0.29152001 0.33780841] 2
False
[ 0.42900004 0.24479086 0.33350037] 2
False
[ 0.33127673 0.25830647 0.40589786] 2
True
[ 0.32847711 0.26764971 0.42790498] 2
True
[ 0.37434664 0.30014552 0.33897358] 1
False
[ 0.39353389 0.20702387 0.41148513] 2
True
[ 0.35537218 0.36066918 0.27593023] 0
False
[ 0.33166538 0.30319572 0.36085067] 1
False
[ 0.24200361 0.37915493 0.41290655] 2
True
[ 0.36905076 0.21855228 0.42306931] 0
False
[ 0.32972494 0.34130585 0.32612985] 1
True
[ 0.40314309 0.32763024 0.27775078] 0
True
[ 0.32713714 0.38358424 0.27968646] 1
True
[ 0.38927656 0.35896933 0.26734974] 2
False
[ 0.32943798 0.37429006 0.27828252] 1
True
[ 0.40713254 0.36005173 0.25390163] 0
True
[ 0.35261718 0.39427327 0.25404573] 1
True
[ 0.37122346 0.319571 0.29745592] 0
True
[ 0.3319209 0.42482582 0.25616968] 1
True
[ 0.31327509 0.33514927 0.35553219] 2
True
[ 0.37003618 0.41350154 0.2045728 ] 1
True
[ 0.32192005 0.38556993 0.29852272] 1
True
[ 0.35720635 0.32227425 0.29699491] 0
True
[ 0.42799228 0.29763116 0.27331108] 1
False
[ 0.28109137 0.49766229 0.23040446] 1
True
[ 0.28758041 0.40268009 0.31337246] 1
True
[ 0.36617102 0.33636874 0.29027806] 1
False
[ 0.27457658 0.43311865 0.30179941] 2
False
[ 0.23302178 0.5870491 0.21470314] 1
True
[ 0.29873154 0.44505605 0.21849216] 2
False
[ 0.29734987 0.31732228 0.37665076] 2
True
[ 0.35018682 0.36698037 0.2727192 ] 0
False
[ 0.39431996 0.33168018 0.26726646] 0
True
[ 0.35286264 0.35215712 0.27668672] 2
False
[ 0.33217719 0.34342162 0.31849769] 2
False
[ 0.38141015 0.25912911 0.35858739] 0
True
[ 0.40102496 0.31766348 0.27064446] 0
True
[ 0.35590667 0.33853871 0.32327782] 0
True
[ 0.40663951 0.3061218 0.30603694] 0
True
[ 0.44667539 0.21569487 0.33755229] 2
False
[ 0.40506075 0.20359631 0.38918055] 0
True
[ 0.35767892 0.26259151 0.39122241] 2
True
[ 0.31642081 0.31002486 0.35489977] 1
False
[ 0.39908823 0.2638449 0.33413158] 0
True
[ 0.37797252 0.2595466 0.36359647] 2
False
[ 0.25335939 0.38092085 0.37029246] 2
False
[ 0.40672246 0.25693069 0.3362894 ] 0
True
[ 0.45910701 0.24556231 0.31558766] 0
True
[ 0.37239224 0.22249797 0.40866763] 2
True
[ 0.37947074 0.23514613 0.39168537] 1
False
[ 0.43986049 0.24298669 0.3405412 ] 0
True
[ 0.30502118 0.39342186 0.29004573] 1
True
[ 0.36576811 0.3017954 0.32146567] 1
False
[ 0.34150564 0.38634221 0.27528134] 1
True
[ 0.3561412 0.33780885 0.30166703] 1
False
[ 0.34697564 0.4020648 0.24340907] 1
True
[ 0.27900207 0.38712057 0.32850683] 0
False
[ 0.29098108 0.36719398 0.35051175] 2
False
[ 0.24966249 0.43484345 0.31128151] 2
False
[ 0.2913119 0.33877593 0.35743805] 2
True
[ 0.31076315 0.31438584 0.36883845] 1
False
[ 0.25023173 0.42939707 0.32698061] 1
True
[ 0.35975021 0.39186809 0.25422616] 0
False
[ 0.30868257 0.32618885 0.36653163] 2
True
[ 0.20018409 0.40635864 0.4253321 ] 1
False
[ 0.24067722 0.35651861 0.4014494 ] 2
True
[ 0.29257083 0.35677788 0.34384 ] 0
False
[ 0.36468383 0.30123347 0.31788631] 0
True
[ 0.35858238 0.3531384 0.28579332] 0
True
[ 0.3072779 0.29050866 0.39442308] 2
True
[ 0.37485673 0.29408648 0.31158739] 1
False
[ 0.3411458 0.40239091 0.25819517] 1
True
[ 0.33368317 0.37333845 0.28213877] 1
True
[ 0.38789362 0.32854602 0.27782066] 0
True
[ 0.39328023 0.35822695 0.23105776] 0
True
[ 0.38361623 0.40882797 0.22635525] 1
True
[ 0.33930917 0.40842302 0.264496 ] 1
True
[ 0.30147095 0.39531292 0.30042417] 1
True
[ 0.26894258 0.42062638 0.31160096] 2
False
[ 0.29126166 0.40505422 0.30886089] 1
True
[ 0.29291747 0.42020526 0.31124309] 1
True
[ 0.26755385 0.44551534 0.29666518] 1
True
[ 0.26112364 0.34108052 0.40236706] 2
True
[ 0.29039075 0.35781308 0.33826958] 0
False
[ 0.31148851 0.37277617 0.30845559] 0
False
[ 0.3919501 0.31199212 0.28831552] 0
True
[ 0.36735821 0.38803439 0.25672739] 0
False
[ 0.37082122 0.38245521 0.24871075] 2
False
[ 0.30743018 0.41611817 0.26715942] 0
False
[ 0.42649805 0.22108056 0.36424685] 0
True
[ 0.37187861 0.32525439 0.3157426 ] 2
False
[ 0.3590263 0.26318008 0.37393944] 2
True
[ 0.40729366 0.2224543 0.38049926] 0
True
[ 0.36201833 0.27134089 0.34765468] 0
True
[ 0.45540228 0.2598653 0.29316255] 1
False
[ 0.34709988 0.30481547 0.35371363] 2
True
[ 0.34014193 0.2821279 0.36957871] 1
False
[ 0.32716507 0.31717577 0.35432461] 2
True
[ 0.37619485 0.29890821 0.32647777] 0
True
[ 0.28556552 0.34848026 0.36331891] 2
True
[ 0.38512321 0.24449318 0.356743 ] 2
False
[ 0.37384262 0.29434868 0.33549046] 1
False
[ 0.32193933 0.31261022 0.36108776] 1
False
[ 0.25108917 0.3590961 0.41388726] 2
True
[ 0.406607 0.24533681 0.35322563] 0
True
[ 0.28729326 0.30815443 0.3971604 ] 2
True
[ 0.37837979 0.31365427 0.29444565] 0
True
[ 0.29984848 0.30417992 0.40465804] 2
True
[ 0.35733596 0.3001945 0.34770698] 0
True
[ 0.32899191 0.28555184 0.37572936] 1
False
[ 0.28742834 0.32442156 0.39439252] 2
True
[ 0.26226352 0.31566727 0.38600589] 2
True
[ 0.26805924 0.30807952 0.44481381] 2
True
[ 0.29006406 0.30728721 0.39449004] 2
True
[ 0.34635285 0.28025352 0.36519018] 1
False
[ 0.27909001 0.31230785 0.40386148] 2
True
[ 0.25305902 0.2949926 0.46533953] 1
False
[ 0.3670918 0.31497062 0.3113467 ] 0
True
[ 0.31750265 0.24708745 0.43221199] 2
True
[ 0.36461003 0.35261499 0.29295947] 0
True
[ 0.28598828 0.38161965 0.34827657] 1
True
[ 0.30611999 0.34941855 0.34852432] 1
True
[ 0.31721104 0.40024975 0.29406597] 0
False
[ 0.30350716 0.29318866 0.40089837] 2
True
[ 0.28087234 0.3978666 0.30158628] 1
True
[ 0.29143488 0.32397617 0.38465466] 2
True
[ 0.24935288 0.37595315 0.39220701] 2
True
[ 0.38770154 0.26758086 0.35678976] 0
True
[ 0.36916677 0.25203597 0.37486206] 2
True
[ 0.29316303 0.26417676 0.45016765] 2
True
[ 0.24170361 0.40307402 0.36893908] 1
True
[ 0.3392288 0.26350103 0.39978825] 2
True
[ 0.28640246 0.34612851 0.37624504] 0
False
[ 0.25391615 0.35083511 0.39291222] 0
False
[ 0.30346854 0.28653181 0.38806064] 2
True
[ 0.35536016 0.33318326 0.29385137] 1
False
[ 0.26010054 0.38758155 0.35441805] 1
True
[ 0.3541591 0.35210986 0.303476 ] 0
True
[ 0.3190586 0.32510795 0.34784389] 2
True
[ 0.24720153 0.38939706 0.36791921] 1
True
[ 0.25638207 0.35239866 0.40246462] 1
False
[ 0.27014682 0.41161106 0.33424838] 0
False
[ 0.28132516 0.31610881 0.39464975] 2
True
[ 0.31480517 0.38643155 0.29596034] 0
False
[ 0.38723389 0.30071016 0.31477963] 0
True
[ 0.3357816 0.2578447 0.40363941] 2
True
[ 0.32565114 0.30705935 0.3523513 ] 2
True
[ 0.3723533 0.2402917 0.37710833] 0
False
[ 0.36835609 0.23823493 0.404083 ] 2
True
[ 0.35292693 0.30114073 0.3377483 ] 0
True
[ 0.36742957 0.25450777 0.37858777] 1
False
[ 0.44015617 0.22663056 0.32763293] 0
True
[ 0.40357339 0.2273408 0.3595745 ] 0
True
[ 0.41989773 0.20898554 0.38648073] 0
True
[ 0.38574937 0.32713967 0.28243676] 0
True
[ 0.40079455 0.25264493 0.36259264] 2
False
[ 0.41125604 0.23659731 0.36938924] 0
True
[ 0.35132359 0.25613695 0.4024326 ] 2
True
[ 0.40450258 0.25280868 0.34542353] 0
True
[ 0.35162259 0.30855233 0.321794 ] 1
False
[ 0.40817144 0.22349334 0.36767607] 0
True
[ 0.42325402 0.25423652 0.33027099] 1
False
[ 0.39991294 0.24601464 0.34691542] 0
True
[ 0.34402378 0.30194384 0.37221693] 1
False
[ 0.39947809 0.3251719 0.24786254] 1
False
[ 0.33471494 0.28474298 0.389047 ] 2
True
[ 0.32513609 0.2778598 0.37902858] 2
True
[ 0.29367563 0.38341253 0.30542985] 1
True
[ 0.32051636 0.40401031 0.28486756] 1
True
[ 0.28379561 0.3613115 0.34917886] 0
False
[ 0.29416641 0.39399414 0.29866459] 1
True
[ 0.34491288 0.35281553 0.29293566] 2
False
[ 0.27926013 0.31843246 0.39348678] 2
True
[ 0.37482217 0.29184996 0.31948329] 0
True
[ 0.42376019 0.38405775 0.21924804] 0
True
[ 0.31504642 0.39840739 0.29557589] 1
True
[ 0.3222542 0.40644542 0.26599059] 1
True
[ 0.28399822 0.46097899 0.26838897] 1
True
[ 0.2726648 0.36860271 0.35666234] 2
False
[ 0.34595848 0.35008063 0.28481255] 2
False
[ 0.25163203 0.3488586 0.41026547] 2
True
[ 0.26061136 0.31805121 0.43721633] 2
True
[ 0.30379738 0.35671368 0.32584484] 1
True
[ 0.31214768 0.3474607 0.33850986] 0
False
[ 0.29800512 0.30816085 0.37966512] 0
False
[ 0.26262665 0.33325874 0.40104243] 2
True
[ 0.33741588 0.26605181 0.39775787] 2
True
[ 0.28739809 0.32013972 0.38139426] 2
True
[ 0.24165148 0.34237193 0.41866873] 1
False
[ 0.3347304 0.33526951 0.32964299] 0
False
[ 0.41650532 0.27460486 0.32518742] 0
True
[ 0.27555454 0.39300828 0.35753534] 1
True
[ 0.38384758 0.33980934 0.267225 ] 1
False
[ 0.33344293 0.2976423 0.37198859] 2
True
[ 0.39558576 0.29667332 0.29072722] 0
True
[ 0.40113386 0.25936477 0.33709332] 0
True
[ 0.31787395 0.30834443 0.39514285] 2
True
[ 0.40222726 0.24600778 0.35992327] 0
True
[ 0.39813894 0.33285169 0.27270642] 0
True
[ 0.32765518 0.36092173 0.30059174] 1
True
[ 0.32927776 0.3622382 0.31600366] 1
True
[ 0.34850421 0.43108229 0.2258361 ] 1
True
[ 0.39096804 0.40879762 0.22906949] 1
True
[ 0.28350679 0.38995448 0.33328753] 2
False
[ 0.27494342 0.38145639 0.32587242] 2
False
[ 0.22911698 0.41297615 0.36482248] 1
True
[ 0.26617093 0.37759483 0.34260939] 1
True
[ 0.19951853 0.44091728 0.37792698] 1
True
[ 0.27635172 0.39276736 0.31723246] 1
True
[ 0.28349543 0.50519293 0.24247584] 1
True
[ 0.20545224 0.43575826 0.36930033] 1
True
[ 0.30180219 0.39106036 0.29650093] 2
False
[ 0.20467396 0.44387844 0.37067594] 1
True
[ 0.19989452 0.41907022 0.39834577] 1
True
[ 0.32514793 0.40996659 0.26989451] 1
True
[ 0.36574695 0.39488559 0.24677514] 0
False
[ 0.22295622 0.40617564 0.37546832] 1
True
[ 0.23295657 0.41453078 0.36926766] 1
True
[ 0.30261982 0.3979243 0.30645102] 0
False
[ 0.29208469 0.35003322 0.32822477] 2
False
[ 0.26606401 0.40136078 0.3253798 ] 1
True
[ 0.37554517 0.31086441 0.30520469] 0
True
[ 0.33422177 0.32573642 0.32430537] 2
False
[ 0.36260811 0.33186869 0.30164394] 0
True
[ 0.26248675 0.40334494 0.33394657] 1
True
[ 0.29131245 0.41024023 0.30112841] 1
True
[ 0.30560131 0.40607522 0.29315383] 1
True
[ 0.26615547 0.43692823 0.30768642] 1
True
[ 0.27983657 0.36784841 0.32923368] 2
False
[ 0.31032027 0.39928629 0.30052441] 1
True
[ 0.31955536 0.40284444 0.30485816] 2
False
[ 0.23741629 0.42003004 0.35023519] 2
False
[ 0.28351974 0.29982684 0.3971798 ] 0
False
[ 0.27948195 0.33099682 0.38908519] 0
False
[ 0.38424028 0.21417019 0.41880123] 2
True
[ 0.38236799 0.27212955 0.34006623] 0
True
[ 0.37262629 0.24562535 0.35965086] 0
True
[ 0.39384112 0.30103819 0.29527639] 0
True
[ 0.40668747 0.34202645 0.24629858] 0
True
[ 0.38755765 0.38362574 0.24783386] 1
False
[ 0.27954217 0.37023603 0.34143019] 1
True
[ 0.35163912 0.35207885 0.28169504] 2
False
[ 0.39413198 0.30479767 0.29939242] 0
True
[ 0.40672271 0.28885705 0.31816178] 0
True
[ 0.46295636 0.2450026 0.29853748] 0
True
[ 0.35058185 0.374047 0.2658389 ] 1
True
[ 0.40363608 0.26595814 0.33636612] 0
True
[ 0.37543732 0.31764683 0.30704496] 2
False
[ 0.38985023 0.29697712 0.30640848] 1
False
[ 0.43858463 0.35404798 0.23268303] 0
True
[ 0.44336781 0.30062319 0.2577191 ] 0
True
[ 0.3350893 0.31283342 0.34562971] 2
True
[ 0.32697776 0.36194767 0.30114173] 1
True
[ 0.3542952 0.3208641 0.31009778] 0
True
[ 0.37280217 0.24336227 0.36163492] 2
False
[ 0.29248274 0.35093634 0.3517824 ] 1
False
[ 0.31802457 0.35382921 0.3220254 ] 1
True
[ 0.39321898 0.30161603 0.30493142] 0
True
[ 0.34389641 0.31171562 0.34108524] 2
False
[ 0.39557769 0.29794905 0.29959346] 0
True
[ 0.36786972 0.39003124 0.2623276 ] 0
False
[ 0.38447428 0.29024132 0.34217052] 2
False
[ 0.32911871 0.27690219 0.39617056] 2
True
[ 0.39317356 0.34878111 0.25479716] 1
False
[ 0.40615313 0.26915598 0.33069532] 0
True
[ 0.35499727 0.25302205 0.39115658] 2
True
[ 0.34267464 0.38012292 0.27965736] 1
True
[ 0.39824237 0.23075606 0.3800626 ] 2
False
[ 0.35504341 0.36681876 0.28302916] 1
True
[ 0.36111222 0.27249574 0.37656102] 2
True
[ 0.34296212 0.32566008 0.31726212] 1
False
[ 0.26749334 0.40118978 0.33699727] 1
True
[ 0.29804813 0.36190454 0.35150407] 1
True
[ 0.25824058 0.34589193 0.40041521] 2
True
[ 0.32498882 0.3903291 0.2784982 ] 0
False
[ 0.21083646 0.37405156 0.43271736] 2
True
[ 0.30263597 0.39499535 0.29455152] 1
True
[ 0.30264175 0.36845434 0.32987682] 2
False
[ 0.24143607 0.41196794 0.37081806] 1
True
[ 0.25609717 0.3675215 0.38741806] 2
True
[ 0.22601104 0.42244644 0.36185486] 1
True
[ 0.25566385 0.40328308 0.33655553] 1
True
[ 0.19909579 0.4163365 0.39242621] 1
True
[ 0.31097542 0.30052665 0.39463685] 0
False
[ 0.33054468 0.29678991 0.35117057] 0
False
[ 0.28780876 0.32007042 0.4011105 ] 2
True
[ 0.29202546 0.39601793 0.30178675] 0
False
[ 0.3972586 0.35448349 0.25018367] 0
True
[ 0.26884622 0.36869305 0.37093871] 2
True
[ 0.29018714 0.37680081 0.33472339] 1
True
[ 0.33260694 0.38026549 0.30349203] 1
True
[ 0.28030647 0.31596264 0.39820174] 2
True
[ 0.2415385 0.38351592 0.40673457] 0
False
[ 0.25758167 0.42757325 0.32763245] 1
True
[ 0.24426976 0.40194496 0.3641651 ] 1
True
[ 0.39884374 0.30648733 0.27653729] 0
True
[ 0.31954939 0.49161131 0.22154631] 0
False
[ 0.33908479 0.36008448 0.31240517] 0
False
[ 0.37025918 0.35656433 0.27327317] 2
False
[ 0.42292247 0.28120526 0.30597325] 0
True
[ 0.35638659 0.34538051 0.31978279] 1
False
[ 0.39316445 0.29519658 0.30473147] 0
True
[ 0.39192561 0.34626951 0.26465709] 1
False
[ 0.36602885 0.41715698 0.22659212] 1
True
[ 0.3412647 0.47941615 0.21451814] 0
False
[ 0.43702078 0.46449771 0.15924232] 1
True
[ 0.3622999 0.27207444 0.364266 ] 2
True
[ 0.34222848 0.39732364 0.27050836] 1
True
[ 0.27267474 0.4292599 0.29119031] 1
True
[ 0.41159551 0.36338185 0.23512377] 0
True
[ 0.41737603 0.41707718 0.20620216] 0
True
[ 0.44247952 0.34937153 0.21487999] 1
False
[ 0.41005198 0.32035223 0.27907569] 2
False
[ 0.35182592 0.46310112 0.21213068] 1
True
[ 0.40231631 0.33736948 0.26284597] 0
True
[ 0.39321601 0.41136563 0.21921668] 0
False
[ 0.41575374 0.33299073 0.26253165] 0
True
[ 0.43890099 0.43712456 0.16579647] 1
False
[ 0.38714565 0.3412355 0.27876757] 2
False
[ 0.39931558 0.29583561 0.30197794] 0
True
[ 0.41099523 0.30496208 0.30072315] 0
True
[ 0.39413255 0.28083177 0.31244031] 0
True
[ 0.42058256 0.35547176 0.24749171] 2
False
[ 0.39799488 0.28990178 0.30420767] 0
True
[ 0.35512648 0.36753071 0.27744205] 2
False
[ 0.43719529 0.26138523 0.29339697] 0
True
[ 0.35635305 0.35091394 0.28442131] 1
False
[ 0.34475238 0.31890441 0.31634005] 2
False
[ 0.32679634 0.30337452 0.37341305] 2
True
[ 0.34578552 0.28015523 0.37201508] 0
False
[ 0.39981081 0.28147106 0.32221251] 0
True
[ 0.34977314 0.30446746 0.3449436 ] 2
False
[ 0.35905731 0.24455384 0.39179302] 2
True
[ 0.34230641 0.2494116 0.41137091] 0
False
[ 0.35088184 0.2471714 0.40596173] 2
True
[ 0.35493864 0.27103402 0.37969185] 1
False
[ 0.29492271 0.30732579 0.39529604] 2
True
[ 0.29096401 0.30530258 0.39281499] 1
False
[ 0.32616646 0.27184031 0.40412033] 2
True
[ 0.34217213 0.28652734 0.35664162] 0
False
[ 0.32866419 0.26265095 0.40889685] 2
True
[ 0.22203271 0.37203064 0.41257504] 1
False
[ 0.39085553 0.26313819 0.33774785] 0
True
[ 0.34421838 0.32141696 0.33102829] 0
True
[ 0.34557382 0.30240399 0.34860835] 1
False
[ 0.29657877 0.38532448 0.31005238] 1
True
[ 0.32213245 0.35187688 0.30949295] 1
True
[ 0.34444657 0.25470575 0.40215978] 2
True
[ 0.32890839 0.41309693 0.26069709] 1
True
[ 0.24509883 0.45617919 0.28993379] 1
True
[ 0.23096918 0.36661617 0.39907636] 2
True
[ 0.30567639 0.35448221 0.3318377 ] 2
False
[ 0.33761328 0.33377927 0.33335687] 0
True
[ 0.24848438 0.4191659 0.34775467] 2
False
[ 0.38174142 0.23461276 0.3854414 ] 0
False
...
(output truncated: the remaining entries repeat the same pattern of a three-component probability vector, a sampled index, and a True/False flag)
True
[ 0.29457455 0.39518792 0.3145517 ] 1
True
[ 0.39875807 0.33219534 0.28202053] 0
True
[ 0.21921124 0.42048112 0.37463012] 1
True
[ 0.32605815 0.39463018 0.27720594] 1
True
[ 0.38601329 0.26970996 0.37593691] 2
False
[ 0.38525117 0.28142564 0.32585606] 2
False
[ 0.30924483 0.29410767 0.39755792] 2
True
[ 0.26027818 0.39258416 0.33882496] 0
False
[ 0.37532225 0.31498412 0.3187073 ] 0
True
[ 0.38911495 0.34625202 0.26471066] 0
True
[ 0.37280461 0.29173126 0.3266415 ] 2
False
[ 0.31187475 0.29578768 0.40382078] 2
True
[ 0.41330682 0.37106321 0.2320508 ] 1
False
[ 0.29503161 0.40481437 0.29615805] 0
False
[ 0.32488704 0.34550321 0.32507274] 0
False
[ 0.28161157 0.32080899 0.38998354] 1
False
[ 0.32996478 0.28186692 0.38413824] 2
True
[ 0.26748537 0.40045098 0.33544157] 1
True
[ 0.31060884 0.31938845 0.34553522] 0
False
[ 0.38278664 0.28356941 0.33527765] 0
True
[ 0.40414697 0.32818743 0.27261326] 2
False
[ 0.35339251 0.29863843 0.34728293] 0
True
[ 0.29475857 0.3146308 0.3879617 ] 2
True
[ 0.32919257 0.36715256 0.28706175] 1
True
[ 0.31535319 0.30784463 0.37513197] 2
True
[ 0.40224211 0.2590461 0.34663156] 0
True
[ 0.4121225 0.28037063 0.309787 ] 1
False
[ 0.36598714 0.35314434 0.28061731] 0
True
[ 0.30350631 0.28531757 0.40806024] 2
True
[ 0.31077976 0.38175629 0.30371256] 1
True
[ 0.43968419 0.28615929 0.28888194] 1
False
[ 0.25615797 0.39992157 0.34314531] 2
False
[ 0.31774828 0.35836871 0.32663641] 0
False
[ 0.40106199 0.27525719 0.31746485] 2
False
[ 0.34750591 0.32426734 0.34342518] 1
False
[ 0.3420251 0.27295981 0.37327642] 2
True
[ 0.36663674 0.32725978 0.31909731] 2
False
[ 0.27681636 0.34384795 0.37032896] 0
False
[ 0.28176449 0.31599562 0.39781794] 2
True
[ 0.34492798 0.32184324 0.33911871] 2
False
[ 0.32859872 0.31503359 0.35433747] 2
True
[ 0.29998504 0.31059216 0.41258653] 2
True
[ 0.35344214 0.34981983 0.28926741] 1
False
[ 0.26065657 0.38018527 0.35835071] 1
True
[ 0.31029324 0.30878866 0.37899339] 2
True
[ 0.26135339 0.29668558 0.46711777] 0
False
[ 0.26679011 0.31380096 0.41535044] 1
False
[ 0.30398943 0.32025493 0.36830724] 0
False
[ 0.29174793 0.2688873 0.45298283] 0
False
[ 0.33162198 0.3315106 0.33688237] 2
True
[ 0.28988091 0.36075141 0.33668896] 1
True
[ 0.33121512 0.32956003 0.33919827] 2
True
[ 0.24926071 0.37080089 0.39526254] 1
False
[ 0.28754924 0.38690103 0.33328522] 1
True
[ 0.29903384 0.3467209 0.342256 ] 0
False
[ 0.28258915 0.31250292 0.4271831 ] 1
False
[ 0.30794569 0.33776648 0.35107332] 0
False
[ 0.25988455 0.39189009 0.3462256 ] 0
False
[ 0.32217542 0.32103683 0.36064085] 2
True
[ 0.32625742 0.3486981 0.31287863] 2
False
[ 0.37284695 0.32343114 0.30810419] 1
False
[ 0.3086888 0.3750377 0.30757896] 1
True
[ 0.38439961 0.33644869 0.28614812] 0
True
[ 0.35741355 0.31506503 0.31485004] 0
True
[ 0.29869855 0.31568362 0.36491356] 1
False
[ 0.27242851 0.39899213 0.33136195] 1
True
[ 0.3445568 0.31715759 0.33252823] 2
False
[ 0.30331085 0.37744454 0.31721346] 1
True
[ 0.29271932 0.42612502 0.28909223] 1
True
[ 0.35580699 0.32627341 0.31389853] 0
True
[ 0.36012323 0.29970987 0.32757035] 0
True
[ 0.39642002 0.36606799 0.2455348 ] 0
True
[ 0.31842094 0.35073184 0.32127 ] 2
False
[ 0.36688352 0.27684749 0.35125547] 1
False
[ 0.31262322 0.36666311 0.31794326] 0
False
[ 0.37554714 0.34247295 0.27769776] 2
False
[ 0.33662491 0.32194585 0.34918193] 2
True
[ 0.38903542 0.30232277 0.314598 ] 0
True
[ 0.30414777 0.30320606 0.39559539] 2
True
[ 0.36599059 0.27726493 0.3589647 ] 0
True
[ 0.35048673 0.30553589 0.32678088] 0
True
[ 0.37810349 0.28975578 0.33179346] 0
True
[ 0.43667626 0.3020471 0.2807487 ] 0
True
[ 0.34613354 0.31179223 0.34308197] 2
False
[ 0.38842013 0.28958551 0.33267152] 0
True
[ 0.45997407 0.18302481 0.38752326] 0
True
[ 0.43811228 0.29698565 0.26991474] 0
True
[ 0.35527278 0.32283752 0.30973584] 2
False
[ 0.36499816 0.30788652 0.31985305] 0
True
[ 0.41932989 0.25475208 0.34117409] 2
False
[ 0.40216493 0.26740625 0.34769513] 0
True
[ 0.4020497 0.18878938 0.43908381] 2
True
[ 0.39871605 0.28430588 0.32114312] 0
True
[ 0.47731342 0.23918309 0.30833243] 2
False
[ 0.41573945 0.2751199 0.31310267] 1
False
[ 0.40134521 0.24873001 0.33237953] 1
False
[ 0.40864511 0.22014612 0.37710906] 2
False
[ 0.40792583 0.29940093 0.30616888] 0
True
[ 0.33365912 0.35003859 0.31387283] 1
True
[ 0.35439321 0.24821281 0.38455893] 0
False
[ 0.38940833 0.3145932 0.29405974] 1
False
[ 0.3036388 0.32631488 0.35658061] 2
True
[ 0.33536581 0.35043374 0.31417612] 0
False
[ 0.38059104 0.21893897 0.40796087] 2
True
[ 0.41894371 0.31124731 0.27693195] 1
False
[ 0.36425516 0.32469519 0.30701748] 1
False
[ 0.30614511 0.30505761 0.36627319] 2
True
[ 0.33613811 0.28663685 0.38158601] 2
True
[ 0.30497253 0.30204426 0.38340908] 0
False
[ 0.33357057 0.35204294 0.30331091] 1
True
[ 0.32116419 0.3132199 0.3800915 ] 0
False
[ 0.39988064 0.285936 0.32620875] 2
False
[ 0.38966243 0.33024218 0.29309376] 1
False
[ 0.38290008 0.29756414 0.31700101] 1
False
[ 0.34721613 0.33005428 0.30633322] 0
True
[ 0.37072884 0.26188752 0.35328657] 1
False
[ 0.37026035 0.3225707 0.30139967] 0
True
[ 0.37727262 0.37301262 0.25158368] 1
False
[ 0.39406869 0.26878464 0.34606903] 0
True
[ 0.35711371 0.2996232 0.33582593] 0
True
[ 0.39693483 0.29891316 0.30127281] 0
True
[ 0.38082049 0.31779692 0.28957349] 2
False
[ 0.42249193 0.30692781 0.28112647] 0
True
[ 0.37015274 0.32422126 0.28778105] 1
False
[ 0.39616683 0.29003862 0.31617276] 0
True
[ 0.30424321 0.29861879 0.39776492] 2
True
[ 0.37052676 0.27010666 0.36417482] 2
False
[ 0.33753258 0.30172613 0.35394705] 2
True
[ 0.34602628 0.28645014 0.36083567] 2
True
[ 0.39458673 0.24844404 0.36105062] 0
True
[ 0.24584696 0.31847966 0.44290598] 1
False
[ 0.37345327 0.31817982 0.32453372] 1
False
[ 0.30896235 0.31997325 0.37264783] 2
True
[ 0.31039263 0.34053228 0.34166847] 2
True
[ 0.33288826 0.30675874 0.36260138] 2
True
[ 0.25843308 0.33281144 0.3941807 ] 2
True
[ 0.31627423 0.31520404 0.35462133] 0
False
[ 0.3668779 0.31054083 0.31887598] 0
True
[ 0.33476226 0.27048994 0.40687602] 2
True
[ 0.41232212 0.230214 0.37658788] 0
True
[ 0.3523683 0.24004382 0.40440008] 2
True
[ 0.34749562 0.26135471 0.41565883] 2
True
[ 0.31990424 0.28274875 0.4092913 ] 0
False
[ 0.38165764 0.31290806 0.30153287] 0
True
[ 0.38169906 0.26559044 0.34708239] 1
False
[ 0.37294216 0.30818159 0.29405724] 2
False
[ 0.33822977 0.26545997 0.39966105] 2
True
[ 0.30581057 0.30514758 0.38571653] 2
True
[ 0.37080484 0.23814854 0.41175417] 2
True
[ 0.40019236 0.27602038 0.32288634] 0
True
[ 0.40637187 0.25626231 0.35333984] 0
True
[ 0.38644236 0.23077503 0.38514824] 2
False
[ 0.31150257 0.25779163 0.45222043] 2
True
[ 0.43863451 0.1970502 0.39014145] 0
True
[ 0.38965085 0.21764588 0.40345789] 0
False
[ 0.39050889 0.23388587 0.39444708] 0
False
[ 0.38827897 0.19776349 0.44203004] 1
False
[ 0.36625582 0.2293718 0.40682952] 1
False
[ 0.37392773 0.23476544 0.40460959] 2
True
[ 0.39780753 0.2601025 0.33775459] 0
True
[ 0.33022252 0.24416991 0.42293868] 1
False
[ 0.32615098 0.23807667 0.43646967] 1
False
[ 0.30326717 0.35841364 0.34653512] 2
False
[ 0.33697822 0.30927131 0.34633801] 2
True
[ 0.28964389 0.31996877 0.38840453] 1
False
[ 0.39870545 0.27350994 0.33417162] 0
True
[ 0.35758535 0.26225116 0.38326785] 2
True
[ 0.35340337 0.25785728 0.40211586] 1
False
[ 0.32412712 0.28883089 0.37545153] 2
True
[ 0.34695921 0.29144491 0.36938222] 2
True
[ 0.3162193 0.32999207 0.33200889] 1
False
[ 0.32483978 0.25265857 0.40680992] 2
True
[ 0.34240715 0.23380465 0.4281264 ] 2
True
[ 0.24555502 0.33399691 0.42973065] 2
True
[ 0.30527148 0.2848948 0.39048691] 0
False
[ 0.35053655 0.32067662 0.31353525] 1
False
[ 0.31841079 0.28802137 0.38133631] 2
True
[ 0.3427252 0.27559662 0.37708867] 0
False
[ 0.32764078 0.27479939 0.40268532] 2
True
[ 0.32508301 0.25148769 0.42042329] 0
False
[ 0.29868467 0.3015577 0.39256564] 2
True
[ 0.30815248 0.28176825 0.41420216] 1
False
[ 0.29348244 0.31584671 0.40058153] 2
True
[ 0.25969458 0.34300999 0.39709695] 2
True
[ 0.29096115 0.31570091 0.39589515] 2
True
[ 0.31447175 0.29362583 0.39212914] 2
True
[ 0.29702082 0.28730342 0.40846771] 1
False
[ 0.32536399 0.35464171 0.32982005] 0
False
[ 0.29627534 0.28279112 0.41847481] 0
False
[ 0.32596062 0.30523284 0.36240559] 0
False
[ 0.33050318 0.31882562 0.34832714] 1
False
[ 0.35334379 0.30453528 0.34344466] 0
True
[ 0.31344576 0.33160293 0.33531006] 1
False
[ 0.29805769 0.32779341 0.3693644 ] 1
False
[ 0.38521727 0.27094982 0.33255945] 0
True
[ 0.3153504 0.29985439 0.37677672] 1
False
[ 0.33786938 0.32358064 0.329135 ] 2
False
[ 0.36424433 0.30600863 0.33465417] 2
False
[ 0.32679527 0.28544363 0.40467006] 2
True
[ 0.27207182 0.33678029 0.38105491] 2
True
[ 0.3034974 0.33201854 0.35049177] 1
False
[ 0.3202371 0.32717388 0.34955852] 2
True
[ 0.26625701 0.36324437 0.38672402] 1
False
[ 0.32360516 0.28827045 0.39719419] 2
True
[ 0.2829269 0.26476999 0.45549389] 2
True
[ 0.27022204 0.30486151 0.44195796] 2
True
[ 0.32717809 0.33147847 0.34437787] 1
False
[ 0.27579638 0.32266358 0.40827077] 0
False
[ 0.33291825 0.29062216 0.37015033] 1
False
[ 0.36451786 0.27698636 0.36072379] 0
True
[ 0.29791427 0.31052002 0.39688735] 2
True
[ 0.34515694 0.33099349 0.31338587] 2
False
[ 0.25373535 0.35138976 0.40506328] 1
False
[ 0.32131294 0.29631062 0.3731183 ] 2
True
[ 0.2903435 0.30589414 0.39458062] 2
True
[ 0.28737278 0.34928732 0.37208469] 1
False
[ 0.2453168 0.37148102 0.38687253] 2
True
[ 0.22371864 0.36148792 0.42596416] 1
False
[ 0.26954467 0.3688328 0.35507816] 2
False
[ 0.34119394 0.28121814 0.36515113] 0
False
[ 0.27398669 0.32419526 0.41052503] 2
True
[ 0.31115736 0.24565252 0.45568374] 0
False
[ 0.30446862 0.3661879 0.33480033] 0
False
[ 0.28219843 0.27833173 0.45945483] 1
False
[ 0.3625584 0.27062422 0.37648622] 0
False
[ 0.40708012 0.26421038 0.34737896] 0
True
[ 0.30009626 0.38018237 0.33348089] 1
True
[ 0.33166325 0.30298654 0.36035307] 0
False
[ 0.35878571 0.28002428 0.35039431] 1
False
[ 0.4046187 0.34198342 0.25945655] 0
True
[ 0.27983838 0.34130695 0.38087135] 1
False
[ 0.35908828 0.29343875 0.34537123] 0
True
[ 0.33922133 0.37742914 0.27923843] 1
True
[ 0.25967018 0.40474473 0.34491549] 2
False
[ 0.30198246 0.34992771 0.33663021] 2
False
[ 0.28835443 0.33153455 0.37263839] 2
True
[ 0.33787247 0.33765938 0.31715391] 1
False
[ 0.32970496 0.32687279 0.32686644] 0
True
[ 0.34645692 0.32329206 0.33455672] 2
False
[ 0.35684742 0.30983463 0.33105928] 1
False
[ 0.32234759 0.40128915 0.28186447] 1
True
[ 0.27813028 0.37222419 0.32563987] 2
False
[ 0.32069308 0.32403701 0.34231157] 0
False
[ 0.38522801 0.27921203 0.33997052] 0
True
[ 0.29097333 0.40425509 0.31768295] 1
True
[ 0.28061522 0.32886801 0.39756442] 2
True
[ 0.33185486 0.34861432 0.31332828] 1
True
[ 0.339008 0.31098331 0.33499329] 0
True
[ 0.31353791 0.33164431 0.35818568] 1
False
[ 0.36806431 0.30751131 0.32708488] 2
False
[ 0.34831289 0.31364044 0.33639798] 1
False
[ 0.24866823 0.38357887 0.37552404] 2
False
[ 0.38146863 0.32308794 0.29826404] 0
True
[ 0.32338244 0.33424735 0.33907001] 2
True
[ 0.34153319 0.31192264 0.34288178] 0
False
[ 0.3551226 0.28998183 0.37142198] 1
False
[ 0.37215242 0.29824628 0.32203112] 0
True
[ 0.31906697 0.33457834 0.33341392] 2
False
[ 0.40221054 0.27495681 0.33347165] 0
True
[ 0.33016229 0.3792307 0.29427513] 1
True
[ 0.269294 0.42581551 0.31309954] 1
True
[ 0.33792816 0.36475196 0.30672765] 1
True
[ 0.34261972 0.3299047 0.3382308 ] 2
False
[ 0.28427358 0.38975801 0.32795816] 0
False
[ 0.29625646 0.35159524 0.35194746] 1
False
[ 0.30432897 0.33890389 0.34101653] 1
False
[ 0.34538585 0.37603636 0.2783956 ] 0
False
[ 0.31187203 0.36796402 0.33112658] 2
False
[ 0.31224324 0.38557711 0.30062697] 2
False
[ 0.28807784 0.32611486 0.38467545] 0
False
[ 0.45168586 0.27976189 0.27728221] 0
True
[ 0.367814 0.29419501 0.31749186] 0
True
[ 0.39663066 0.30471742 0.29372659] 0
True
[ 0.36447674 0.35629193 0.27569748] 1
False
[ 0.38621767 0.29187541 0.30778699] 0
True
[ 0.35925133 0.3352035 0.3237769 ] 0
True
[ 0.37934827 0.30808305 0.31858501] 2
False
[ 0.42419684 0.31397229 0.26099852] 1
False
[ 0.37355865 0.33851327 0.29998384] 1
False
[ 0.34922125 0.36560422 0.28645113] 2
False
[ 0.41012944 0.36191056 0.23108617] 0
True
[ 0.34983744 0.36955112 0.27619927] 1
True
[ 0.39326442 0.33690968 0.24868784] 0
True
[ 0.35278482 0.35845556 0.3052179 ] 1
True
[ 0.28788122 0.39150012 0.31831478] 1
True
[ 0.29790955 0.37590129 0.33156425] 2
False
[ 0.3323189 0.40079285 0.2702261 ] 1
True
[ 0.35382713 0.33283548 0.30597087] 2
False
[ 0.30761685 0.39739536 0.29965606] 1
True
[ 0.28901708 0.39565989 0.31539482] 1
True
[ 0.32813233 0.38719228 0.28469295] 1
True
[ 0.35787331 0.42474962 0.22944954] 0
False
[ 0.34267906 0.40275224 0.25773193] 1
True
[ 0.32021819 0.42175517 0.26690919] 0
False
[ 0.35523856 0.38226306 0.26451677] 0
False
[ 0.44602531 0.31934316 0.23663804] 2
False
[ 0.35459044 0.32648481 0.32072558] 2
False
[ 0.3908435 0.32375691 0.29717968] 1
False
[ 0.36616865 0.37168307 0.26977061] 0
False
[ 0.35146326 0.38210355 0.26294885] 1
True
[ 0.29452542 0.36720232 0.31515758] 2
False
[ 0.35020668 0.37576148 0.28412019] 0
False
[ 0.36018725 0.34566534 0.28769417] 2
False
[ 0.3617657 0.29632652 0.33062443] 2
False
[ 0.26260587 0.36491486 0.36109204] 2
False
[ 0.35841901 0.34910387 0.30405717] 1
False
[ 0.32874105 0.31681718 0.3328826 ] 0
False
[ 0.31777602 0.32435942 0.35798917] 0
False
[ 0.4052364 0.27827607 0.33137851] 0
True
[ 0.3226057 0.32223928 0.3542372 ] 2
True
[ 0.35897622 0.35127815 0.2899846 ] 0
True
[ 0.39518311 0.32617172 0.28790462] 0
True
[ 0.34741156 0.31550292 0.34017122] 0
True
[ 0.36709341 0.32248999 0.30007086] 0
True
[ 0.39009893 0.26345962 0.34596252] 2
False
[ 0.39738648 0.25785383 0.34469043] 2
False
[ 0.32561827 0.32365828 0.35178934] 2
True
[ 0.32137823 0.30957917 0.35933346] 1
False
[ 0.33163699 0.32385782 0.33753705] 2
True
[ 0.34865703 0.31469874 0.32980391] 1
False
[ 0.30787037 0.34711574 0.3457408 ] 1
True
[ 0.28025796 0.32745349 0.39487063] 1
False
[ 0.30899723 0.42596014 0.28861109] 1
True
[ 0.3284039 0.32296187 0.35440614] 2
True
[ 0.28646173 0.32575406 0.38136526] 0
False
[ 0.37611958 0.29151187 0.34326529] 2
False
[ 0.35897363 0.31795402 0.32769708] 0
True
[ 0.28456065 0.32055674 0.40121893] 2
True
[ 0.39915211 0.25475504 0.34875308] 0
True
[ 0.4398626 0.30206393 0.2744779 ] 0
True
[ 0.3486292 0.31782226 0.32198064] 1
False
[ 0.26922856 0.38212447 0.3453038 ] 2
False
[ 0.31089318 0.32888916 0.35402926] 1
False
[ 0.34190069 0.29335531 0.39055079] 0
False
[ 0.37305614 0.35710934 0.2642542 ] 1
False
[ 0.33294104 0.36490953 0.31198045] 1
True
[ 0.32138834 0.38891805 0.29609368] 2
False
[ 0.25939798 0.38415024 0.33483069] 0
False
[ 0.3013523 0.39284965 0.29860586] 1
True
[ 0.34570195 0.34165475 0.32296307] 2
False
[ 0.3770926 0.321846 0.30166317] 0
True
[ 0.32877305 0.36696607 0.31432273] 0
False
[ 0.23505096 0.43971825 0.34717224] 1
True
[ 0.37007498 0.29385947 0.32482645] 0
True
[ 0.287641 0.37348969 0.34007347] 0
False
[ 0.39741788 0.27514923 0.33715494] 1
False
[ 0.39342971 0.29407718 0.29748059] 2
False
[ 0.3955665 0.31047983 0.29568885] 0
True
[ 0.37473326 0.2724067 0.35711393] 2
False
[ 0.33743792 0.30180352 0.35908821] 2
True
[ 0.33411612 0.31006812 0.36221003] 1
False
[ 0.30790908 0.32230952 0.36299147] 2
True
[ 0.39617016 0.30358791 0.30097584] 0
True
[ 0.37330151 0.2788804 0.34892945] 0
True
[ 0.37359074 0.36101977 0.26952024] 0
True
[ 0.3403955 0.26667902 0.39966504] 2
True
[ 0.32347798 0.32478949 0.35143945] 0
False
[ 0.40293016 0.24515759 0.35710024] 0
True
[ 0.4316394 0.28995947 0.28635487] 0
True
[ 0.41410086 0.32036083 0.28187273] 0
True
[ 0.40568279 0.31263576 0.29805291] 1
False
[ 0.42802855 0.30481874 0.26935297] 0
True
[ 0.37539092 0.30074165 0.30664682] 0
True
[ 0.42200125 0.33610292 0.25621038] 1
False
[ 0.40675769 0.35589988 0.23901157] 1
False
[ 0.33515447 0.34247492 0.33566204] 0
False
[ 0.41825836 0.30837984 0.27352373] 2
False
[ 0.35839368 0.31525742 0.31971547] 2
False
[ 0.30144343 0.3416122 0.34692736] 1
False
[ 0.34037545 0.35386015 0.30959986] 1
True
[ 0.39564835 0.35304488 0.26256283] 0
True
[ 0.41095485 0.3178793 0.2747011 ] 0
True
[ 0.38549416 0.30683633 0.30634147] 0
True
[ 0.44333354 0.29179772 0.25737906] 1
False
[ 0.31915831 0.36603284 0.30494063] 1
True
[ 0.33606337 0.39734313 0.2615262 ] 1
True
[ 0.44706439 0.31729781 0.23707365] 0
True
[ 0.39694303 0.33279282 0.27400008] 0
True
[ 0.39341959 0.32117323 0.26986401] 0
True
[ 0.40870789 0.29077855 0.30387425] 2
False
[ 0.41679898 0.32459339 0.26681217] 0
True
[ 0.32811531 0.36978724 0.29856278] 1
True
[ 0.33474089 0.39891179 0.26606989] 2
False
[ 0.34250148 0.318018 0.34202746] 2
False
[ 0.30792598 0.33532554 0.35070579] 2
True
[ 0.34492249 0.40163985 0.25160601] 1
True
[ 0.31505819 0.42894394 0.25660813] 1
True
[ 0.27517579 0.38719354 0.35257615] 2
False
[ 0.30690896 0.38607293 0.30915295] 1
True
[ 0.339703 0.33426169 0.3200198 ] 0
True
[ 0.45484105 0.31934482 0.2406533 ] 0
True
[ 0.32006056 0.36119983 0.31541573] 0
False
[ 0.33508453 0.42135708 0.25550862] 1
True
[ 0.38402186 0.35916101 0.25979739] 0
True
[ 0.46039056 0.26780061 0.2740924 ] 0
True
[ 0.33621392 0.32718214 0.35235248] 1
False
[ 0.36041871 0.38342285 0.26413888] 2
False
[ 0.35514244 0.29868085 0.33915393] 0
True
[ 0.47881968 0.2969395 0.23923367] 2
False
[ 0.40281422 0.28822846 0.29862608] 0
True
[ 0.38395177 0.29758735 0.30578167] 2
False
[ 0.39718183 0.31165626 0.30596698] 0
True
[ 0.42139805 0.28408076 0.3187633 ] 2
False
[ 0.41172331 0.30602232 0.28520947] 0
True
[ 0.36948595 0.3091265 0.33010658] 2
False
[ 0.37919445 0.3252362 0.29249681] 1
False
[ 0.34155857 0.3225813 0.32784313] 1
False
[ 0.30471315 0.33280231 0.37292336] 2
True
[ 0.33889585 0.28923947 0.36430976] 2
True
[ 0.31625744 0.3519154 0.33251428] 1
True
[ 0.31897829 0.30986436 0.34939807] 2
True
[ 0.35408334 0.35711406 0.29852376] 1
True
[ 0.31652509 0.31187996 0.37260832] 2
True
[ 0.35237366 0.36795562 0.27817724] 1
True
[ 0.31781913 0.33315016 0.35301021] 2
True
[ 0.28791385 0.3582399 0.37777573] 2
True
[ 0.24548511 0.34137948 0.44589084] 1
False
[ 0.29119105 0.39096443 0.32043829] 1
True
[ 0.27494392 0.37391937 0.34521469] 1
True
[ 0.2851161 0.39689411 0.3208591 ] 1
True
[ 0.29343593 0.33195529 0.38325324] 2
True
[ 0.26417856 0.31390644 0.40442739] 0
False
[ 0.33827035 0.33463064 0.32745461] 2
False
[ 0.24420211 0.39130984 0.37104967] 1
True
[ 0.21185873 0.4034715 0.39154027] 2
False
[ 0.2160204 0.44811622 0.35587448] 1
True
[ 0.19466014 0.41915963 0.41436815] 1
True
[ 0.29573102 0.33921521 0.3629507 ] 2
True
[ 0.21549218 0.43920996 0.37458174] 2
False
[ 0.27885643 0.31716783 0.40411094] 0
False
[ 0.25799086 0.40326739 0.34241749] 1
True
[ 0.23825592 0.32543488 0.4417354 ] 2
True
[ 0.29531939 0.36457447 0.35011657] 0
False
[ 0.29472978 0.31653881 0.4039551 ] 2
True
[ 0.28021407 0.34220636 0.38695899] 2
True
[ 0.22580851 0.37408437 0.41139273] 2
True
[ 0.21447818 0.4295007 0.37036699] 1
True
[ 0.27166181 0.35828177 0.36191784] 1
False
[ 0.27533053 0.3385555 0.38118289] 2
True
[ 0.22748412 0.33428184 0.44998281] 2
True
[ 0.24095561 0.34545704 0.40620004] 2
True
[ 0.27204906 0.38103696 0.34250007] 1
True
[ 0.24652121 0.36122136 0.39815769] 0
False
[ 0.30849686 0.37931153 0.31455696] 1
True
[ 0.21111765 0.41717938 0.40264665] 1
True
[ 0.27639349 0.33769737 0.38133356] 0
False
[ 0.32522363 0.29625986 0.38261561] 1
False
[ 0.28031992 0.39892842 0.32511881] 1
True
[ 0.28872033 0.40242304 0.31246456] 0
False
[ 0.2083779 0.41283812 0.39426029] 2
False
[ 0.25890196 0.36979547 0.36389971] 1
True
[ 0.22340047 0.38413628 0.41023669] 1
False
[ 0.24741542 0.43791386 0.33544161] 0
False
[ 0.1920118 0.44366239 0.38414537] 1
True
[ 0.26882481 0.36316993 0.37659143] 2
True
[ 0.24137592 0.40056447 0.36691941] 2
False
[ 0.25721092 0.37066499 0.37521911] 1
False
[ 0.23355153 0.37380497 0.39575032] 0
False
[ 0.25979832 0.4087442 0.32780456] 0
False
[ 0.27686949 0.38251173 0.34806766] 0
False
[ 0.34561772 0.27670669 0.37503272] 2
True
[ 0.30636795 0.31246246 0.36881232] 0
False
[ 0.23984088 0.36828731 0.38661196] 1
False
[ 0.29468414 0.40567263 0.30498445] 1
True
[ 0.27282761 0.39001657 0.33260609] 0
False
[ 0.3353674 0.31009925 0.35004069] 2
True
[ 0.39292176 0.37071784 0.23766936] 0
True
[ 0.31572419 0.34589608 0.32285979] 2
False
[ 0.29711889 0.39809485 0.30168301] 1
True
[ 0.29350524 0.30774105 0.39527235] 2
True
[ 0.24926519 0.37561898 0.38897407] 2
True
[ 0.26292904 0.36069512 0.38019293] 1
False
[ 0.32742183 0.33883486 0.32817248] 0
False
[ 0.25790006 0.35040703 0.38224707] 1
False
[ 0.35378826 0.36926968 0.27179145] 0
False
[ 0.32894817 0.35954463 0.31058151] 2
False
[ 0.29487906 0.33637805 0.36302858] 1
False
[ 0.32336225 0.37699955 0.30769564] 1
True
[ 0.27967531 0.34985011 0.3553552 ] 1
False
[ 0.328291 0.39491555 0.2723595 ] 0
False
[ 0.30361172 0.3321119 0.35740815] 2
True
[ 0.23648719 0.38502657 0.38225254] 2
False
[ 0.34765526 0.33194815 0.31779208] 2
False
[ 0.2733518 0.33873331 0.39316534] 0
False
[ 0.30737079 0.36146376 0.32860264] 2
False
[ 0.3011728 0.35449749 0.34218972] 0
False
[ 0.38695106 0.32672753 0.27552261] 1
False
[ 0.27444078 0.35040229 0.37993134] 1
False
[ 0.3446131 0.3516884 0.30883903] 0
False
[ 0.29797308 0.34415786 0.36209637] 0
False
[ 0.29922007 0.35622863 0.33983083] 1
True
[ 0.36586226 0.29185938 0.32861948] 0
True
[ 0.37021633 0.30426138 0.307571 ] 0
True
[ 0.2948283 0.39497303 0.29675931] 1
True
[ 0.38267379 0.32809945 0.29925736] 1
False
[ 0.2800368 0.39441203 0.33698115] 2
False
[ 0.28884009 0.37696082 0.31893586] 2
False
[ 0.33293919 0.36868549 0.29904125] 1
True
[ 0.31846529 0.35753809 0.30397955] 0
False
[ 0.33247111 0.36417798 0.29979508] 2
False
[ 0.31375673 0.34321139 0.34270183] 2
False
[ 0.31055717 0.34613182 0.33080305] 0
False
[ 0.29260845 0.36438612 0.34053121] 2
False
[ 0.38032196 0.31758502 0.31247933] 2
False
[ 0.36423371 0.30666547 0.31521266] 0
True
[ 0.24069844 0.36750151 0.39949647] 1
False
[ 0.30960863 0.32959533 0.35724848] 2
True
[ 0.32119205 0.36357569 0.30983019] 1
True
[ 0.28328672 0.31360021 0.38575822] 2
True
[ 0.25402855 0.33942739 0.39959989] 2
True
[ 0.27121428 0.37486929 0.35636305] 1
True
[ 0.31811758 0.35319247 0.33516923] 2
False
[ 0.35500975 0.32501959 0.33275595] 0
True
[ 0.38044375 0.28811937 0.33945845] 0
True
[ 0.35121139 0.29091774 0.35441832] 2
True
[ 0.31472007 0.29513013 0.40333181] 2
True
[ 0.34791509 0.31405889 0.35162851] 1
False
[ 0.35767022 0.26932075 0.35405623] 0
True
[ 0.29099236 0.32473363 0.40017515] 1
False
[ 0.25717781 0.36039515 0.38885953] 1
False
[ 0.28816715 0.33593228 0.3888587 ] 2
True
[ 0.28960425 0.37127873 0.34513614] 2
False
[ 0.32085987 0.34340645 0.33932051] 2
False
[ 0.27931846 0.32282399 0.40151565] 0
False
[ 0.31161672 0.33615044 0.35174073] 1
False
[ 0.3686292 0.2819503 0.3505752] 2
False
[ 0.22554559 0.36967915 0.41734783] 2
True
[ 0.29210575 0.30951716 0.39800113] 2
True
[ 0.3767389 0.25255079 0.36987238] 0
True
[ 0.27735708 0.34400045 0.3722234 ] 1
False
[ 0.25727676 0.36069907 0.39205259] 0
False
[ 0.30646111 0.33805142 0.33752178] 1
True
[ 0.28171905 0.33844816 0.3913192 ] 1
False
[ 0.25272932 0.34467757 0.40010942] 2
True
[ 0.30309827 0.36440695 0.32181203] 0
False
[ 0.3188185 0.3083852 0.34599973] 2
True
[ 0.36816255 0.33116992 0.30627897] 1
False
[ 0.36398017 0.35355723 0.2730061 ] 1
False
[ 0.3323212 0.31467898 0.34108091] 0
False
[ 0.33274182 0.2803422 0.38603411] 2
True
[ 0.33783017 0.36833993 0.30216261] 1
True
[ 0.21021407 0.41866646 0.41062314] 2
False
[ 0.32505107 0.28810163 0.38677599] 0
False
[ 0.27041564 0.34834538 0.38289142] 2
True
[ 0.27550527 0.33816303 0.39974288] 2
True
[ 0.24893059 0.36639451 0.40412878] 1
False
[ 0.21750147 0.38455493 0.41198013] 2
True
[ 0.29974615 0.34799913 0.36311064] 0
False
[ 0.32756271 0.39247194 0.28676401] 1
True
[ 0.25705787 0.37242846 0.37415061] 2
True
[ 0.29186806 0.39069693 0.32585617] 1
True
[ 0.24442312 0.36468598 0.41242978] 2
True
[ 0.31333713 0.31080454 0.37231958] 2
True
[ 0.29282974 0.36111485 0.34629137] 1
True
[ 0.22157556 0.4085726 0.38701266] 1
True
[ 0.25561453 0.36865895 0.38797141] 2
True
[ 0.19033916 0.42254721 0.42010315] 2
False
[ 0.18246004 0.42632901 0.4271813 ] 1
False
[ 0.2449012 0.35927704 0.40416179] 2
True
[ 0.21715768 0.37425748 0.43397192] 1
False
[ 0.23740296 0.34862176 0.42002892] 0
False
[ 0.31275703 0.37371529 0.31106102] 0
False
[ 0.25325714 0.36556131 0.37292912] 2
True
[ 0.23605299 0.40022077 0.38300107] 1
True
[ 0.23744432 0.4145133 0.37253589] 0
False
[ 0.31191053 0.37571913 0.3098302 ] 0
False
[ 0.3040308 0.33821565 0.35795019] 0
False
[ 0.24468315 0.35717677 0.3982037 ] 0
False
[ 0.36785883 0.31423764 0.33123349] 0
True
[ 0.36441597 0.32123526 0.31246379] 0
True
[ 0.35024185 0.36380078 0.28082817] 1
True
[ 0.28848504 0.3298457 0.3849188 ] 2
True
[ 0.24361433 0.3823753 0.3753107 ] 2
False
[ 0.34857293 0.32717956 0.31111033] 1
False
[ 0.25144583 0.34885266 0.40328282] 2
True
[ 0.23137691 0.42063977 0.36712473] 2
False
[ 0.27270072 0.31720224 0.39818129] 1
False
[ 0.25814014 0.35442901 0.38700696] 2
True
[ 0.23871976 0.36276903 0.40451214] 2
True
[ 0.33627142 0.31514911 0.35281754] 0
False
[ 0.27373339 0.36163662 0.37391354] 2
True
[ 0.2652881 0.299649 0.4462762] 2
True
[ 0.30585224 0.33993324 0.35510746] 1
False
[ 0.32616584 0.28776644 0.39884946] 2
True
[ 0.27883487 0.37829764 0.34936705] 1
True
[ 0.27952441 0.38837075 0.34203503] 1
True
[ 0.28612045 0.34123744 0.38376248] 2
True
[ 0.21495803 0.36302263 0.44770916] 1
False
[ 0.31338285 0.32370502 0.3548036 ] 1
False
[ 0.26773121 0.36487094 0.36873528] 2
True
[ 0.30909907 0.34985823 0.33263593] 0
False
[ 0.26279519 0.36468673 0.38437928] 1
False
[ 0.26778382 0.35587634 0.36575692] 2
True
[ 0.19143418 0.39029441 0.42865996] 0
False
[ 0.2911117 0.36613949 0.33042344] 1
True
[ 0.3639479 0.30078964 0.33428443] 2
False
[ 0.27031454 0.34913027 0.37479711] 0
False
[ 0.24952929 0.35265906 0.39949513] 1
False
[ 0.24849331 0.37874179 0.37645574] 1
True
[ 0.29016164 0.33518019 0.36719685] 0
False
[ 0.26478127 0.42816392 0.30866027] 1
True
[ 0.29206597 0.35891531 0.35441966] 0
False
[ 0.29667853 0.33666727 0.35873128] 0
False
[ 0.29739219 0.36156052 0.32967824] 0
False
[ 0.29739926 0.36425847 0.33833231] 1
True
[ 0.28651542 0.39234597 0.30281481] 1
True
[ 0.35888181 0.38391003 0.25310683] 2
False
[ 0.35845526 0.35605673 0.27704458] 0
True
[ 0.32162696 0.37558651 0.28927665] 0
False
[ 0.27637255 0.40042327 0.32607416] 1
True
[ 0.26098775 0.4086706 0.31622496] 1
True
[ 0.36687303 0.41625102 0.23378532] 1
True
[ 0.35444984 0.29381526 0.34824641] 0
True
[ 0.30067974 0.43721925 0.26185979] 1
True
[ 0.34920149 0.4490436 0.22425436] 0
False
[ 0.38041668 0.34252191 0.27081735] 1
False
[ 0.3885623 0.31056531 0.28750659] 0
True
[ 0.41686211 0.31162685 0.26206552] 0
True
[ 0.40675147 0.34022847 0.25339892] 1
False
[ 0.39809216 0.36802126 0.23549324] 2
False
[ 0.26150621 0.4231412 0.31154787] 2
False
[ 0.40152096 0.36575075 0.23847684] 0
True
[ 0.29714904 0.35681125 0.33716623] 1
True
[ 0.31069083 0.39359268 0.31471058] 0
False
[ 0.32381921 0.39732893 0.27314794] 1
True
[ 0.27249168 0.4154944 0.30735366] 1
True
[ 0.33940187 0.37623132 0.27795568] 2
False
[ 0.37331035 0.31543528 0.30234698] 2
False
[ 0.35837535 0.32800358 0.30330247] 2
False
[ 0.28030764 0.42272817 0.31177267] 1
True
[ 0.25516641 0.40428688 0.34272886] 1
True
[ 0.23616113 0.43660184 0.345792 ] 1
True
[ 0.30793302 0.36473886 0.32036216] 2
False
[ 0.35114633 0.29419087 0.34706649] 0
True
[ 0.28208917 0.37918646 0.34340937] 0
False
[ 0.35224175 0.34930474 0.29697016] 0
True
[ 0.31181917 0.37403833 0.31307124] 1
True
[ 0.35064982 0.33408607 0.2942849 ] 0
True
[ 0.42515736 0.32567412 0.25378821] 0
True
[ 0.31123426 0.39838367 0.28668387] 1
True
[ 0.33268077 0.3913563 0.26820524] 1
True
[ 0.3512146 0.36689546 0.28427868] 1
True
[ 0.29383555 0.3609485 0.32966455] 2
False
[ 0.31774994 0.39081291 0.28071865] 1
True
[ 0.26547604 0.43852282 0.29534356] 1
True
[ 0.26844879 0.40031263 0.34516918] 2
False
[ 0.36725865 0.3334154 0.28802713] 0
True
[ 0.30210457 0.37739844 0.32517792] 2
False
[ 0.31585153 0.27924742 0.39830389] 2
True
[ 0.30603168 0.33839689 0.35307149] 1
False
[ 0.25923251 0.43731206 0.30671512] 1
True
[ 0.29410727 0.32317612 0.38721725] 2
True
[ 0.29978138 0.39118645 0.32425065] 0
False
[ 0.34934959 0.37474563 0.27044315] 0
False
[ 0.32472062 0.34588691 0.31787835] 0
False
[ 0.33400405 0.32186057 0.35136124] 2
True
[ 0.39464168 0.31414057 0.27924454] 0
True
[ 0.26697967 0.3823161 0.3724242 ] 1
True
[ 0.2785 0.43882795 0.28165622] 1
True
[ 0.39815554 0.30682572 0.29769868] 0
True
[ 0.39218691 0.34608658 0.26246546] 0
True
[ 0.43710386 0.31497156 0.25584128] 0
True
[ 0.41704863 0.29714767 0.26194847] 2
False
[ 0.34102964 0.34513694 0.31381115] 2
False
[ 0.35366083 0.29312925 0.34093704] 2
False
[ 0.3041598 0.29665213 0.38814546] 2
True
[ 0.26475846 0.3766828 0.36292542] 1
True
[ 0.33062073 0.34332223 0.33276579] 0
False
[ 0.35750272 0.30943877 0.33340039] 2
False
[ 0.36521464 0.29720987 0.33849061] 2
False
[ 0.35659503 0.27053979 0.36186632] 1
False
[ 0.26852722 0.31762215 0.41165048] 2
True
[ 0.34912599 0.31185041 0.33679556] 2
False
[ 0.32615794 0.30042716 0.37714272] 0
False
[ 0.31644717 0.29373344 0.39987165] 2
True
[ 0.30877356 0.30767851 0.38437963] 2
True
[ 0.28084283 0.34404421 0.35975569] 1
False
[ 0.3463412 0.34412941 0.30456538] 1
False
[ 0.26431688 0.33673964 0.39717191] 2
True
[ 0.27934598 0.30943301 0.40753385] 1
False
[ 0.27477755 0.3262742 0.39235232] 1
False
[ 0.30766268 0.345654 0.33972522] 0
False
[ 0.26762924 0.32960964 0.39882146] 2
True
[ 0.32105839 0.31319653 0.36187238] 0
False
[ 0.33179138 0.324239 0.33705116] 2
True
[ 0.42338281 0.25200512 0.33797098] 0
True
[ 0.38784868 0.31701593 0.28929062] 1
False
[ 0.33190683 0.29192819 0.37764177] 2
True
[ 0.29594742 0.28983123 0.4212428 ] 2
True
[ 0.2577238 0.33668234 0.41778011] 2
True
[ 0.27297596 0.33380181 0.39758293] 2
True
[ 0.32736402 0.31465724 0.34868551] 0
False
[ 0.28440896 0.34874475 0.36910391] 2
True
[ 0.30024713 0.32096259 0.37858422] 0
False
[ 0.3761231 0.31733587 0.3058157 ] 0
True
[ 0.33863384 0.32030884 0.33757772] 2
False
[ 0.26575871 0.25160363 0.49100209] 0
False
[ 0.33453999 0.24061966 0.44432213] 2
True
[ 0.27298526 0.33576751 0.40119982] 2
True
[ 0.29352134 0.3302478 0.36615487] 1
False
[ 0.34272242 0.26167764 0.39494636] 2
True
[ 0.35043258 0.25348864 0.40104567] 2
True
[ 0.26278389 0.35001016 0.40166883] 2
True
[ 0.32198147 0.2686157 0.40446379] 0
False
[ 0.35760542 0.25265408 0.39965105] 2
True
[ 0.27616136 0.29004743 0.43311828] 2
True
[ 0.31747481 0.26730335 0.42170082] 2
True
[ 0.33625244 0.27917118 0.40739954] 2
True
[ 0.40279498 0.29256975 0.31827573] 0
True
[ 0.31751813 0.25096623 0.44230818] 2
True
[ 0.30521422 0.26445957 0.41137506] 2
True
[ 0.30503099 0.2700416 0.43800253] 2
True
[ 0.37020667 0.23329624 0.40757245] 2
True
[ 0.33208114 0.24358121 0.4366802 ] 1
False
[ 0.26748851 0.23759726 0.50428997] 0
False
[ 0.28127388 0.28129456 0.43741015] 1
False
[ 0.34539683 0.27599747 0.36460933] 0
False
[ 0.34627793 0.23939875 0.43434693] 2
True
[ 0.28156015 0.30057391 0.41499526] 1
False
[ 0.29344343 0.29252861 0.41601857] 2
True
[ 0.37647849 0.2775056 0.35328876] 0
True
[ 0.28933706 0.30672074 0.40451707] 2
True
[ 0.27868271 0.32755144 0.39940634] 2
True
[ 0.29768966 0.26844657 0.44763271] 2
True
[ 0.36514028 0.24632778 0.38775473] 0
False
[ 0.33352477 0.27641842 0.4104775 ] 2
True
[ 0.33700093 0.32352899 0.33429677] 1
False
[ 0.31362696 0.29332429 0.39923316] 2
True
[ 0.346466 0.28700222 0.36256809] 2
True
[ 0.3226433 0.28553293 0.39761449] 1
False
[ 0.28660914 0.29961078 0.40878767] 1
False
[ 0.3206756 0.31232244 0.35669758] 1
False
[ 0.39233727 0.31321285 0.28764134] 0
True
[ 0.32840165 0.27059513 0.39731847] 2
True
[ 0.36711658 0.25820243 0.37073089] 0
False
[ 0.380761 0.28399773 0.34733934] 0
True
[ 0.30984684 0.27385671 0.42346727] 2
True
[ 0.30115071 0.30991887 0.38363436] 2
True
[ 0.37909806 0.24006012 0.39121699] 2
True
[ 0.2916563 0.29196784 0.42371762] 0
False
[ 0.34581629 0.30549964 0.34945852] 1
False
[ 0.37906638 0.29102017 0.33653819] 1
False
[ 0.33785843 0.35531203 0.29864201] 0
False
[ 0.30019769 0.2989571 0.39678449] 2
True
[ 0.28866293 0.3140955 0.39768775] 2
True
[ 0.33981208 0.31102138 0.34555553] 1
False
[ 0.36589071 0.31814471 0.30615784] 0
True
[ 0.34340042 0.26240443 0.40779279] 0
False
[ 0.38535424 0.28135805 0.3183531 ] 1
False
[ 0.31306222 0.31195818 0.36059263] 1
False
[ 0.34355078 0.30874476 0.32611849] 0
True
[ 0.38970474 0.32112119 0.30599293] 0
True
[ 0.39805768 0.29277382 0.29669184] 0
True
[ 0.40105538 0.28273033 0.31632125] 0
True
[ 0.38561803 0.2738184 0.3295356 ] 2
False
[ 0.37256761 0.30082704 0.33515193] 2
False
[ 0.39881438 0.2452258 0.34994359] 0
True
[ 0.40530418 0.28644693 0.30992871] 2
False
[ 0.34276562 0.30606241 0.3433089 ] 1
False
[ 0.35370866 0.29001914 0.35149841] 1
False
[ 0.26947736 0.33131015 0.40652734] 1
False
[ 0.33276207 0.35871502 0.29918551] 1
True
[ 0.28200974 0.35505536 0.35839416] 0
False
[ 0.33800256 0.2923627 0.36990055] 1
False
[ 0.32227474 0.34433753 0.32142021] 1
True
[ 0.44892324 0.30903208 0.25534741] 2
False
[ 0.35783822 0.3254022 0.30855103] 0
True
[ 0.36897115 0.3472895 0.29386377] 0
True
[ 0.29837197 0.42836563 0.27758313] 1
True
[ 0.39985353 0.33388544 0.27542489] 0
True
[ 0.32874221 0.36730778 0.30293886] 0
False
[ 0.36217986 0.32243763 0.32318104] 0
True
[ 0.34121874 0.30203125 0.34397685] 2
True
[ 0.34747353 0.36603716 0.27373297] 2
False
[ 0.38656773 0.29891526 0.31678202] 1
False
[ 0.32644301 0.36432826 0.30096961] 0
False
[ 0.33053639 0.30114209 0.3690737 ] 2
True
[ 0.35236058 0.342284 0.29844236] 1
False
[ 0.34730159 0.34235634 0.30819531] 1
False
[ 0.39688588 0.3162588 0.28837396] 0
True
[ 0.42012223 0.32517627 0.25834618] 0
True
[ 0.40681047 0.32642936 0.25852701] 1
False
[ 0.40118029 0.30197005 0.29383249] 0
True
[ 0.38123763 0.3531414 0.27394507] 1
False
[ 0.33994852 0.36736453 0.28547695] 2
False
[ 0.33557788 0.3530984 0.29778536] 2
False
[ 0.33105351 0.31266844 0.35243135] 2
True
[ 0.31208547 0.36111905 0.31550276] 1
True
[ 0.31525549 0.37014635 0.32125389] 2
False
[ 0.29567236 0.32644611 0.37908791] 2
True
[ 0.3040848 0.35900799 0.31811181] 1
True
[ 0.2742623 0.32212322 0.40027366] 2
True
[ 0.25960231 0.36913975 0.36958102] 0
False
[ 0.30757746 0.28126654 0.39100522] 2
True
[ 0.33674909 0.31438162 0.35168004] 0
False
[ 0.3070434 0.29612843 0.39793852] 2
True
[ 0.34857862 0.30483067 0.34696139] 0
True
[ 0.39079823 0.26911361 0.33312454] 0
True
[ 0.38148704 0.24117454 0.35231215] 2
False
[ 0.3986811 0.3141317 0.29225344] 0
True
[ 0.35688578 0.25687547 0.39275037] 2
True
[ 0.24291329 0.35913268 0.40550838] 1
False
[ 0.29741076 0.27709216 0.41127578] 2
True
[ 0.40646586 0.2632241 0.32881046] 2
False
[ 0.39588609 0.28270956 0.32411057] 1
False
[ 0.31466819 0.30805999 0.36138124] 0
False
[ 0.31222157 0.29560674 0.3954131 ] 2
True
[ 0.29861886 0.32673945 0.36845614] 1
False
[ 0.31283749 0.3150055 0.37047285] 0
False
[ 0.28631436 0.34801766 0.36549797] 1
False
[ 0.2663433 0.35936475 0.37594903] 2
True
[ 0.32592449 0.28545774 0.38663742] 0
False
[ 0.32632892 0.29198938 0.36522404] 0
False
[ 0.31448489 0.35191647 0.33446026] 1
True
[ 0.33589535 0.32701047 0.34310566] 2
True
[ 0.33842416 0.2960717 0.36795232] 0
False
[ 0.37755051 0.35334573 0.2706407 ] 1
False
[ 0.33192633 0.36878356 0.30178213] 1
True
[ 0.38829923 0.27807195 0.32795424] 2
False
[ 0.32650545 0.2982934 0.38185481] 2
True
[ 0.30398087 0.38860361 0.31298695] 1
True
[ 0.35624157 0.36312226 0.272748 ] 0
False
[ 0.27142219 0.31189634 0.42513203] 1
False
[ 0.29386893 0.39638225 0.30160027] 1
True
[ 0.38426571 0.2779477 0.33566053] 2
False
[ 0.32913392 0.40241545 0.27941806] 1
True
[ 0.21176838 0.42207311 0.37656222] 2
False
[ 0.28591413 0.34006503 0.39657879] 2
True
[ 0.26600731 0.37194094 0.36713993] 2
False
[ 0.27363757 0.39074302 0.34252376] 1
True
[ 0.32965591 0.38291129 0.30440321] 1
True
[ 0.2428725 0.40319507 0.36089909] 1
True
[ 0.33044739 0.386888 0.28371654] 2
False
[ 0.27857455 0.33998577 0.38642779] 1
False
[ 0.30818795 0.3694573 0.3174257 ] 0
False
[ 0.26263401 0.41348626 0.32363611] 2
False
[ 0.25741437 0.39807019 0.36225454] 1
True
[ 0.25078404 0.36892619 0.3838319 ] 2
True
[ 0.30982161 0.30444418 0.39236467] 2
True
[ 0.2601745 0.33538846 0.42081748] 2
True
[ 0.24822124 0.37933629 0.36753289] 1
True
[ 0.26664884 0.39378592 0.33741182] 1
True
[ 0.27074199 0.38600193 0.33641176] 0
False
[ 0.29213226 0.31709282 0.3994749 ] 2
True
[ 0.21670562 0.41914949 0.3892213 ] 1
True
[ 0.32739003 0.28834853 0.37842057] 2
True
[ 0.22097034 0.37677179 0.41960994] 1
False
[ 0.24187666 0.41027476 0.3655356 ] 0
False
[ 0.24437382 0.37374673 0.37564969] 0
False
[ 0.25517813 0.43089546 0.32238073] 0
False
[ 0.30472649 0.34945673 0.34235214] 2
False
[ 0.30702798 0.30248033 0.37571361] 2
True
[ 0.23694444 0.35502324 0.41763839] 2
True
[ 0.35495042 0.29766193 0.33309909] 0
True
[ 0.29682085 0.39229988 0.29337761] 1
True
[ 0.32191103 0.36778756 0.30200354] 0
False
[ 0.26162502 0.37288358 0.37702088] 1
False
[ 0.34409888 0.30681554 0.34914373] 0
False
[ 0.29282852 0.39834085 0.31293394] 1
True
[ 0.27829766 0.37330013 0.36615932] 1
True
[ 0.23819148 0.40490691 0.3665611 ] 1
True
[ 0.25945228 0.35917433 0.37465041] 0
False
[ 0.29133022 0.38036771 0.3241368 ] 2
False
[ 0.28774706 0.34538034 0.34999069] 2
True
[ 0.29704602 0.30452251 0.39573667] 2
True
[ 0.32564096 0.3092726 0.35934878] 0
False
[ 0.24748558 0.39996223 0.35248096] 1
True
[ 0.25286516 0.40294023 0.35717854] 1
True
[ 0.33243675 0.39843809 0.27202312] 0
False
[ 0.31871826 0.32479137 0.35231621] 2
True
[ 0.31476122 0.39435961 0.30304983] 2
False
[ 0.28370433 0.30814179 0.41233028] 0
False
[ 0.33211679 0.33201512 0.33610558] 0
False
[ 0.33683106 0.31612935 0.33810738] 0
False
[ 0.33798801 0.32073022 0.34534828] 0
False
[ 0.34637492 0.30153468 0.33901338] 0
True
[ 0.42721847 0.25787639 0.31108708] 1
False
[ 0.30811636 0.39046981 0.29741854] 1
True
[ 0.40452418 0.33265302 0.2735569 ] 0
True
[ 0.39566319 0.29047177 0.31179245] 1
False
[ 0.38998579 0.34445433 0.28161737] 2
False
[ 0.31691356 0.35664178 0.32576235] 2
False
[ 0.36754776 0.30434329 0.3182274 ] 1
False
[ 0.32710664 0.32540727 0.3289254 ] 0
False
[ 0.36052687 0.33678308 0.30902152] 2
False
[ 0.3833547 0.32999287 0.27003002] 1
False
[ 0.4200963 0.3396516 0.24413932] 1
False
[ 0.29901886 0.38425062 0.33166159] 0
False
[ 0.33833553 0.35353176 0.29289369] 1
True
[ 0.30323288 0.39861046 0.29791555] 1
True
[ 0.31888077 0.34965637 0.32687053] 2
False
[ 0.30999667 0.41947325 0.27281614] 2
False
[ 0.25956435 0.3864396 0.36500774] 2
False
[ 0.39815199 0.29061099 0.30790641] 0
True
[ 0.33823014 0.29254901 0.37216653] 2
True
[ 0.33291029 0.29984368 0.35285909] 1
False
[ 0.33910052 0.26826602 0.39893863] 0
False
[ 0.33132542 0.33689231 0.32876798] 0
False
[ 0.37035957 0.31416016 0.30371762] 2
False
[ 0.27333684 0.36776343 0.35093077] 1
True
[ 0.30155655 0.31426511 0.36084686] 1
False
[ 0.29396968 0.38299426 0.30248244] 2
False
[ 0.29756981 0.32962002 0.37780135] 2
True
[ 0.3176488 0.31943514 0.35190965] 1
False
[ 0.27102724 0.39992539 0.33155572] 1
True
[ 0.3468929 0.33244832 0.31359842] 0
True
[ 0.32151817 0.36042533 0.31422501] 0
False
[ 0.23832444 0.41804574 0.34686852] 2
False
[ 0.32756448 0.34958042 0.32441039] 1
True
[ 0.33991331 0.34077026 0.33478099] 1
True
[ 0.31432067 0.39195197 0.29265744] 0
False
[ 0.31423582 0.35983361 0.32292223] 0
False
[ 0.35363814 0.31696842 0.31780148] 2
False
[ 0.35485061 0.30259777 0.32403287] 1
False
[ 0.33013505 0.32328872 0.34295046] 0
False
[ 0.33408785 0.30609225 0.33081781] 2
False
[ 0.27828341 0.36469305 0.3649448 ] 2
True
[ 0.3314758 0.33249795 0.33321046] 0
False
[ 0.32354427 0.31568798 0.34933884] 0
False
[ 0.30870287 0.34474146 0.35212051] 1
False
[ 0.29149868 0.39436083 0.31423509] 1
True
[ 0.24834168 0.33633404 0.41419704] 1
False
[ 0.36040485 0.36772337 0.27224988] 1
True
[ 0.32337549 0.32281341 0.35490568] 1
False
[ 0.26046354 0.40467922 0.34618336] 2
False
[ 0.28480677 0.40103063 0.31511617] 1
True
[ 0.23203489 0.446731 0.34146457] 2
False
[ 0.25112767 0.35572402 0.40603608] 2
True
[ 0.29115242 0.36008306 0.36041906] 1
False
[ 0.27033652 0.37919957 0.35605244] 2
False
[ 0.22759209 0.41225401 0.37756789] 0
False
[ 0.2560257 0.37470678 0.36313019] 2
False
[ 0.31196933 0.33411321 0.3620688 ] 2
True
[ 0.24380289 0.40058658 0.35572773] 1
True
[ 0.22496298 0.33852459 0.45904131] 2
True
[ 0.30254644 0.40050139 0.30461396] 1
True
[ 0.25266929 0.33325623 0.41191904] 2
True
[ 0.28693066 0.36962669 0.33572924] 0
False
[ 0.31983916 0.28552267 0.39236101] 2
True
[ 0.3058777 0.34744809 0.33788098] 0
False
[ 0.29824466 0.33789166 0.37782961] 2
True
[ 0.25491652 0.33560107 0.4071672 ] 2
True
[ 0.23381771 0.34407325 0.42415044] 1
False
[ 0.28540376 0.34969991 0.37222146] 2
True
[ 0.28983981 0.36236969 0.34482679] 0
False
[ 0.32257192 0.27760886 0.39300023] 2
True
[ 0.30543125 0.31370472 0.37850883] 2
True
[ 0.2903734 0.32138192 0.39962614] 2
True
[ 0.30104686 0.32680367 0.37919314] 1
False
[ 0.26067665 0.36537523 0.39388695] 2
True
[ 0.23844305 0.33710335 0.42276472] 2
True
[ 0.25953914 0.3516656 0.4090177 ] 2
True
[ 0.24754701 0.3128545 0.45353037] 0
False
[ 0.3121913 0.26862271 0.41718615] 0
False
[ 0.2117292 0.34457554 0.45913535] 0
False
[ 0.28517337 0.31523298 0.38050637] 1
False
[ 0.32496545 0.31677507 0.3548208 ] 1
False
[ 0.24709391 0.38016791 0.38826921] 1
False
[ 0.32586273 0.29826144 0.36960992] 2
True
[ 0.24972398 0.35368286 0.38848011] 1
False
[ 0.25551663 0.33209746 0.39708869] 2
True
[ 0.2922985 0.33237719 0.37797081] 1
False
[ 0.24485493 0.41149427 0.35939293] 1
True
[ 0.27286194 0.35835893 0.36583445] 2
True
[ 0.25547859 0.36263043 0.38805177] 0
False
[ 0.29593082 0.33534034 0.35304038] 0
False
[ 0.27316045 0.35105776 0.36223419] 0
False
[ 0.35660339 0.30736262 0.33496984] 2
False
[ 0.30549937 0.33508064 0.35702968] 0
False
[ 0.33894285 0.30679442 0.34851707] 0
False
[ 0.31201801 0.323284 0.35812917] 1
False
[ 0.36698803 0.29486209 0.32090359] 0
True
[ 0.31617178 0.33491084 0.34249534] 0
False
[ 0.35312185 0.31386569 0.33347206] 2
False
[ 0.34292955 0.24838528 0.39370379] 2
True
[ 0.2763897 0.34821389 0.37535983] 1
False
[ 0.3351431 0.33234202 0.32675609] 1
False
[ 0.29242226 0.31499074 0.38606805] 2
True
[ 0.30331014 0.35496371 0.33511098] 1
True
[ 0.25353069 0.3296919 0.41756015] 2
True
[ 0.28145452 0.32380546 0.40150669] 2
True
[ 0.30955454 0.34086367 0.35178796] 0
False
[ 0.30682711 0.31038842 0.38563953] 2
True
[ 0.34831343 0.27834279 0.35624475] 0
False
[ 0.35684218 0.28437726 0.35741971] 0
False
[ 0.3507511 0.26683506 0.38229677] 2
True
[ 0.3964387 0.293058 0.30879882] 1
False
[ 0.36161387 0.24167537 0.40356683] 2
True
[ 0.30501794 0.30479711 0.37187474] 1
False
[ 0.25136622 0.34538095 0.3992044 ] 1
False
[ 0.29726282 0.36678791 0.32358676] 1
True
[ 0.25449938 0.36653508 0.3949996 ] 1
False
[ 0.33708021 0.40777062 0.27287586] 1
True
[ 0.32707718 0.32633072 0.34231159] 2
True
[ 0.2839709 0.34290017 0.36011654] 2
True
[ 0.2742037 0.39587096 0.33236369] 1
True
[ 0.24730547 0.41096759 0.34763361] 2
False
[ 0.26835621 0.33653158 0.39407489] 2
True
[ 0.29008858 0.29656685 0.41409602] 2
True
[ 0.2773932 0.31851978 0.39685998] 2
True
[ 0.22128027 0.3490085 0.42790921] 2
True
[ 0.30392832 0.30474059 0.39701604] 2
True
[ 0.17728084 0.4301448 0.42893295] 1
True
[ 0.23675977 0.33608021 0.43757659] 0
False
[ 0.24318834 0.38060215 0.37940795] 2
False
[ 0.2464821 0.37130339 0.39695198] 2
True
[ 0.26320863 0.40679091 0.34351623] 1
True
[ 0.22805281 0.40550753 0.37709933] 1
True
[ 0.33124592 0.27488004 0.40398219] 2
True
[ 0.20833869 0.38288807 0.41463668] 2
True
[ 0.22582834 0.41066327 0.38817428] 1
True
[ 0.29011476 0.284526 0.42800278] 0
False
[ 0.24804713 0.38546232 0.35753979] 1
True
[ 0.26368131 0.31469776 0.43008864] 2
True
[ 0.18587067 0.4212313 0.41350735] 1
True
[ 0.24805562 0.37654711 0.37374577] 0
False
[ 0.28165133 0.33175045 0.37540995] 2
True
[ 0.28703437 0.31751343 0.39632088] 2
True
[ 0.29052823 0.30820864 0.38485021] 0
False
[ 0.27717214 0.32378603 0.39433818] 0
False
[ 0.29879886 0.29966901 0.39478214] 2
True
[ 0.3479198 0.27713467 0.36486516] 0
False
[ 0.25359844 0.42559256 0.33663995] 1
True
[ 0.2662956 0.36098544 0.36589892] 1
False
[ 0.33987413 0.34023884 0.32375271] 1
True
[ 0.2662147 0.33098682 0.39889277] 0
False
[ 0.24936238 0.41351397 0.34389379] 1
True
[ 0.26120711 0.39275587 0.34413355] 2
False
[ 0.2645559 0.33844559 0.38462701] 0
False
[ 0.29049694 0.40259095 0.29778687] 1
True
[ 0.28266021 0.33909885 0.3764457 ] 1
False
[ 0.39661713 0.26325741 0.33740013] 0
True
[ 0.31996646 0.3672676 0.29816721] 0
False
[ 0.31030496 0.34266062 0.34578185] 2
True
[ 0.35416784 0.33323584 0.31220727] 1
False
[ 0.31517648 0.29636476 0.37596771] 2
True
[ 0.34444984 0.31981204 0.31437084] 0
True
[ 0.31320052 0.30713877 0.38167666] 2
True
[ 0.35859465 0.29061043 0.33395209] 1
False
[ 0.26898564 0.36642912 0.38298999] 2
True
[ 0.3073884 0.34192173 0.34925233] 1
False
[ 0.32162775 0.28943527 0.38366845] 2
True
[ 0.37319019 0.28242318 0.34663017] 0
True
[ 0.28779304 0.37227935 0.34656623] 1
True
[ 0.29710526 0.31254426 0.38316841] 0
False
[ 0.31591549 0.31067229 0.36915417] 2
True
[ 0.33228775 0.27784045 0.38842826] 2
True
[ 0.29652897 0.33775134 0.35415673] 0
False
[ 0.35252991 0.24724605 0.40340469] 2
True
[ 0.35713145 0.23806317 0.3933098 ] 0
False
[ 0.37204126 0.27178571 0.35561498] 0
True
[ 0.3786811 0.28309411 0.33955952] 1
False
[ 0.30066132 0.29742611 0.39154839] 2
True
[ 0.32886787 0.34686846 0.31947732] 1
True
[ 0.33300106 0.30754048 0.37209017] 1
False
[ 0.39397628 0.2882495 0.32574382] 2
False
[ 0.24685998 0.37954249 0.36045461] 1
True
[ 0.35984013 0.28314397 0.35638532] 1
False
[ 0.23738456 0.39936434 0.37347494] 1
True
[ 0.31668729 0.32351324 0.35098207] 2
True
[ 0.26486021 0.37303834 0.35262614] 2
False
[ 0.307577 0.28894717 0.40562 ] 0
False
[ 0.34505643 0.32064401 0.32418019] 0
True
[ 0.30142795 0.34583996 0.34097861] 2
False
[ 0.32687267 0.30807185 0.37082757] 0
False
[ 0.23707643 0.39025612 0.38141024] 1
True
[ 0.21723836 0.39599915 0.40775236] 2
True
[ 0.22180053 0.4064015 0.38576468] 2
False
[ 0.21662911 0.39884495 0.3992176 ] 1
False
[ 0.24268318 0.39296413 0.35906359] 1
True
[ 0.35804464 0.28698938 0.35463109] 0
True
[ 0.28474551 0.38224829 0.32601147] 1
True
[ 0.23486005 0.38542203 0.38167481] 2
False
[ 0.24492556 0.38649601 0.38543183] 0
False
[ 0.29512781 0.31324971 0.38949106] 2
True
[ 0.228137 0.37811141 0.4052024 ] 2
True
[ 0.28884713 0.31364036 0.40730207] 2
True
[ 0.30385484 0.33059865 0.3732024 ] 0
False
[ 0.35611296 0.25931374 0.3907717 ] 0
False
[ 0.25952787 0.35271487 0.40039612] 0
False
[ 0.2457003 0.39338008 0.36196892] 1
True
[ 0.29492951 0.375449 0.32884046] 0
False
[ 0.38262629 0.28102397 0.33602278] 2
False
[ 0.30329055 0.30929166 0.37738735] 1
False
[ 0.33768467 0.28123225 0.36021212] 2
True
[ 0.39338514 0.25560395 0.36236152] 0
True
[ 0.37435206 0.26799071 0.37119941] 0
True
[ 0.27922134 0.33758665 0.38614175] 1
False
[ 0.31174684 0.32847978 0.35124963] 1
False
[ 0.3159698 0.34619527 0.34364414] 0
False
[ 0.34292922 0.25118369 0.39739967] 2
True
[ 0.26953919 0.38504688 0.35134628] 0
False
[ 0.32681292 0.26769045 0.39584342] 2
True
[ 0.37940529 0.27157873 0.32885272] 1
False
[ 0.37243797 0.31418513 0.31624408] 0
True
[ 0.39553114 0.28198753 0.31988479] 0
True
[ 0.33671692 0.32585317 0.33789549] 2
True
[ 0.30824383 0.37415857 0.33602 ] 2
False
[ 0.30527637 0.32880357 0.35560901] 1
False
[ 0.33550351 0.32991052 0.34538888] 2
True
[ 0.36483304 0.29421768 0.33652029] 1
False
[ 0.32887113 0.34367079 0.32064225] 1
True
[ 0.21513807 0.43328593 0.37104066] 1
True
[ 0.32641563 0.29022557 0.39383313] 2
True
[ 0.28711435 0.35223167 0.35785659] 2
True
[ 0.33041179 0.3598791 0.30559561] 1
True
[ 0.28723907 0.31885718 0.39624144] 2
True
[ 0.3193402 0.31657832 0.35282615] 0
False
[ 0.25228614 0.39646248 0.34635876] 1
True
[ 0.32087602 0.30831899 0.38451446] 2
True
[ 0.28160255 0.3846735 0.33116919] 0
False
[ 0.2672131 0.35321544 0.36953667] 1
False
[ 0.33871117 0.29938814 0.35792106] 2
True
[ 0.29946817 0.38483121 0.31171813] 1
True
[ 0.31418103 0.35084687 0.34913105] 1
True
[ 0.29828261 0.37251694 0.32079572] 0
False
[ 0.27424967 0.38345357 0.31759056] 0
False
[ 0.2871007 0.36896538 0.33799217] 2
False
[ 0.21798838 0.37221846 0.40234397] 2
True
[ 0.24658275 0.37004921 0.3742607 ] 2
True
[ 0.35390515 0.29507166 0.34168107] 1
False
[ 0.32079427 0.33382715 0.33092153] 1
True
[ 0.27706681 0.38123845 0.3350887 ] 0
False
[ 0.24647524 0.39634969 0.34379156] 1
True
[ 0.31491836 0.33162295 0.341629 ] 2
True
[ 0.25051485 0.36721943 0.37780236] 1
False
[ 0.22590898 0.37417998 0.41515598] 2
True
[ 0.27149862 0.3850746 0.34679815] 0
False
[ 0.26946699 0.38025116 0.35539631] 2
False
[ 0.31379157 0.30690278 0.37256982] 2
True
[ 0.26315806 0.37752146 0.36815004] 1
True
[ 0.32369043 0.28861354 0.37558025] 2
True
[ 0.2899425 0.31344805 0.396542 ] 2
True
[ 0.24405 0.41101103 0.36974807] 1
True
[ 0.28519981 0.33461945 0.39265635] 0
False
[ 0.30261254 0.31108003 0.38951057] 1
False
[ 0.29719755 0.30668442 0.39973635] 2
True
[ 0.35708638 0.30085761 0.3489067 ] 0
True
[ 0.25811193 0.32929551 0.39971668] 1
False
[ 0.29343804 0.35513116 0.36121751] 2
True
[ 0.25595594 0.34828309 0.3947787 ] 2
True
[ 0.23766074 0.36780584 0.39887402] 2
True
[ 0.26803972 0.36558059 0.37417289] 1
False
[ 0.28638147 0.33135873 0.37733477] 0
False
[ 0.29489194 0.31006059 0.395681 ] 2
True
[ 0.17603809 0.38876018 0.48412435] 1
False
[ 0.25738423 0.38701274 0.35658774] 1
True
[ 0.19838725 0.42378984 0.41520353] 2
False
[ 0.25535004 0.37950284 0.37341897] 1
True
[ 0.29026417 0.39788638 0.31827205] 1
True
[... several thousand lines of per-sample output elided: each line printed the three class probabilities, a class index (0, 1, or 2), and a True/False flag marking whether that sample was classified correctly ...]
Training completed
0.321
/anaconda/lib/python2.7/site-packages/IPython/kernel/__main__.py:6: RuntimeWarning: overflow encountered in exp
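The `RuntimeWarning` above comes from `np.exp` overflowing for large-magnitude arguments inside the sigmoid. Assuming the notebook's `Sigmoid` is the standard logistic function, a numerically stable drop-in sketch (the name `stable_sigmoid` is ours, not from the notebook) is:

```python
import numpy as np

def stable_sigmoid(x):
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    pos = x >= 0
    out[pos] = 1.0/(1.0 + np.exp(-x[pos]))   # exp argument is <= 0 here
    ex = np.exp(x[~pos])                     # and here as well
    out[~pos] = ex/(1.0 + ex)
    return out
```

Evaluating the two branches separately keeps the argument of `exp` non-positive, so the warning disappears without changing the results.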
```python
C2_INIT
```
array([[[[ 0.77361321, 0.93182158, 0.70505992, 0.80422591, 0.1412422 ],
[ 0.82290848, 0.63917435, 0.90347831, 0.87803979, 0.37720469],
[ 0.18694892, 0.6482777 , 0.80398425, 0.61711257, 0.25713158],
[ 0.52051612, 0.12288631, 0.67118044, 0.610306 , 0.62816537],
[ 0.98065426, 0.11886239, 0.50349261, 0.26714588, 0.02563766]],
[[ 0.38593663, 0.77153711, 0.94189197, 0.91435221, 0.52416684],
[ 0.33415833, 0.96887456, 0.08432545, 0.43806018, 0.42834064],
[ 0.07147305, 0.92917761, 0.0655414 , 0.69054255, 0.14514601],
[ 0.8880909 , 0.48253988, 0.4782931 , 0.29879164, 0.09653021],
[ 0.14523237, 0.2919442 , 0.14693784, 0.93384983, 0.68102344]],
[[ 0.11339229, 0.04551164, 0.8511457 , 0.97720889, 0.07282508],
[ 0.20138635, 0.51323846, 0.89074582, 0.6704784 , 0.34351992],
[ 0.94680068, 0.00263861, 0.52705039, 0.04150781, 0.1981522 ],
[ 0.79174949, 0.83560386, 0.12828078, 0.57988589, 0.60540605],
[ 0.65793304, 0.19686322, 0.15036307, 0.0290918 , 0.10889995]]],
[[[ 0.12329405, 0.93355471, 0.01845181, 0.78831938, 0.88435157],
[ 0.22768363, 0.40246916, 0.49533672, 0.96084282, 0.68761556],
[ 0.12913785, 0.82011523, 0.35992067, 0.38547226, 0.95589189],
[ 0.36555491, 0.75952693, 0.76139844, 0.5493738 , 0.30694879],
[ 0.17487926, 0.89262204, 0.02650875, 0.32710238, 0.27231143]],
[[ 0.27191789, 0.46864823, 0.02734897, 0.22309678, 0.04973466],
[ 0.27447282, 0.52358444, 0.55812888, 0.04229138, 0.65413115],
[ 0.51466104, 0.42695084, 0.78178729, 0.45365784, 0.50726015],
[ 0.34256099, 0.50863581, 0.64283721, 0.51648786, 0.37265825],
[ 0.35811952, 0.06018938, 0.50453329, 0.98636914, 0.78066703]],
[[ 0.41548634, 0.06490975, 0.92927259, 0.01768628, 0.4105494 ],
[ 0.46362558, 0.80421379, 0.83765648, 0.78090424, 0.13987367],
[ 0.11434628, 0.88826959, 0.81857983, 0.09339392, 0.62799575],
[ 0.01830243, 0.30885712, 0.75598387, 0.46806943, 0.45343679],
[ 0.42458233, 0.43017579, 0.43951806, 0.95409968, 0.8923725 ]]]])
```python
####### Test phase #######
Error_Test=[]
N_correct=0
from itertools import product
###### Chooses patch and defines label #####
#for PP in range(0,len(Sequence)):
for PP in range(20000, 21000):
SS=Sequence[PP]
if SS<N_F:
Class_label=np.array([1,0,0])
inputPatch=Patches_F[SS]
Int_RGB=np.mean(np.mean(Patches_F_RGB[SS,:,:,:], axis=0), axis=0)/255
elif(SS>=N_F) and (SS<(N_F+N_C)):
Class_label=np.array([0,1,0])
inputPatch=Patches_C[SS-N_F]
Int_RGB=np.mean(np.mean(Patches_C_RGB[SS-N_F,:,:,:], axis=0), axis=0)/255
elif(SS>=(N_F+N_C)) and (SS<N_F+N_C+N_W):
Class_label=np.array([0,0,1])
inputPatch=Patches_W[SS-N_F-N_C]
Int_RGB=np.mean(np.mean(Patches_W_RGB[SS-N_F-N_C,:,:,:], axis=0), axis=0)/255
### Layer 1 ###
H1=[]
H2=[]
H3=np.zeros((len(C1), N_branches, S_H3,S_H3))
H4=np.zeros((len(C1), N_branches, S_H4,S_H4))  # needed: H4 is filled and flattened below
x=np.zeros(ClassAmount)
f=np.zeros(ClassAmount)
for r in range (0, len(C1)):
H1.append(sig.convolve(inputPatch, C1[r], 'valid'))
H2.append(Pool(H1[r], P12))
for b in range(0,N_branches):
H3[r][b]=Sigmoid(sig.convolve(H2[r], C2[r][b],'valid')-H3_bias[r][b])
H4[r][b]=Pool(H3[r][b],P34)
y=np.append([H4.flatten()], [Int_RGB])
#Now we have 3x3x4x4 inputs, connected to the 3 output nodes
for k in range(0,ClassAmount):
W_t=np.append([W[k].flatten()], [W2[k]])
x[k]=np.inner(y, W_t)
f[k]=Sigmoid(x[k]-Output_bias[k])
f=f/np.sum((f))
Error_Test.append(np.sum((Class_label-f)**2))
if np.argmax(f)==np.argmax(Class_label):
#print True
N_correct=N_correct+1
Perc_corr=float(N_correct)/1000
print Perc_corr
```
0.655
/anaconda/lib/python2.7/site-packages/IPython/kernel/__main__.py:6: RuntimeWarning: overflow encountered in exp
```python
# CROSS VALIDATION!!
# TRAINING PHASE
#delta_H4=np.zeros((len(C1), N_branches, S_H4, S_H4))
#delta_H3=np.zeros((len(C1), N_branches, S_H4, S_H4))
ERROR_cv=np.zeros([10])
from itertools import product
for CROSSES in range(0,10):
C2=C2_INIT
W=W_INIT
W2=W2_INIT
n_W=1
n_C2=1.5*10**-2
Sample_iterations=0
N_1000=0
###### Chooses patch and defines label #####
#for PP in range(0,len(Sequence)):
for PP in range(0,20353):
SS=Sequence[PP]
#SS=14000
if SS<N_F:
Class_label=np.array([1,0,0])
inputPatch=Patches_F[SS]
Int_RGB=np.mean(np.mean(Patches_F_RGB[SS,:,:,:], axis=0), axis=0)/255
elif(SS>=N_F) and (SS<(N_F+N_C)):
Class_label=np.array([0,1,0])
inputPatch=Patches_C[SS-N_F]
Int_RGB=np.mean(np.mean(Patches_C_RGB[SS-N_F,:,:,:], axis=0), axis=0)/255
elif(SS>=(N_F+N_C)) and (SS<N_F+N_C+N_W):
Class_label=np.array([0,0,1])
inputPatch=Patches_W[SS-N_F-N_C]
Int_RGB=np.mean(np.mean(Patches_W_RGB[SS-N_F-N_C,:,:,:], axis=0), axis=0)/255
#else:
# Class_label=np.array([0,0,0,1])
# inputPatch=Patches_G[SS-N_F-N_C-N_W]
# Int_RGB=np.mean(np.mean(Patches_G_RGB[SS-N_F-N_C-N_W,:,:,:], axis=0), axis=0)/255
### Layer 1 ###
H1=[]
H2=[]
H3=np.zeros((len(C1), N_branches, S_H3,S_H3))
H4=np.zeros((len(C1), N_branches, S_H4,S_H4))
x=np.zeros(ClassAmount)
f=np.zeros(ClassAmount)
II=1
ITER=0
while II==1:
for r in range (0, len(C1)):
H1.append(sig.convolve(inputPatch, C1[r], 'valid'))
H2.append(Pool(H1[r], P12))
#From here on BP takes place!
for b in range(0,N_branches):
H3[r][b]=Sigmoid(sig.convolve(H2[r], C2[r][b],'valid')-H3_bias[r][b])
H4[r][b]=Pool(H3[r][b],P34)
#Now we have 3x3x4x4 inputs, connected to the 3 output nodes
y=np.append([H4.flatten()], [Int_RGB])
for k in range(0,ClassAmount):
W_t=np.append([W[k].flatten()], [W2[k]])
x[k]=np.inner(y, W_t)
f[k]=Sigmoid(x[k]-Output_bias[k])
###### Back-propagation #####
# First learning the delta's
delta_H4=np.zeros([ClassAmount,len(C1),N_branches,S_H4,S_H4])
e_k=f-Class_label
delta_k=e_k*Sigmoid_dx(x)
#Output_bias=Output_bias[k]+n_bias*e_k
for k in range(0, ClassAmount):
#update weights output layer
W[k]=W[k]-n_W*delta_k[k]*H4
W2[k]=W2[k]-n_W*delta_k[k]*Int_RGB
delta_H4[k]=delta_k[k]*W[k]  # loop variable is k, not i
delta_H4=np.sum(delta_H4, axis=0)
delta_H3=(float(1)/10)*delta_H4
C2_diff=np.zeros([len(C1),N_branches, Size_C2, Size_C2])
for r in range(0, len(C1)):
C2_t=np.array([[delta_H3[r][:]*H2[r][(0+u):(4+u),(0+v):(4+v)] for u in range(0,Size_C2)] for v in range (0,Size_C2)])
C2_t=np.sum(np.sum(C2_t, axis=4),axis=3)
C2_t=np.rollaxis(C2_t,2)
C2_diff[r]=-n_C2*C2_t
C2=C2+C2_diff
#print f
ERROR=np.sum((Class_label-f)**2)
ITER=ITER+1
if ERROR<0.55 or ITER>4:
II=0
#print f, np.argmax(Class_label)
#if np.argmax(f)==np.argmax(Class_label):
#print True
#else:
#print False
Sample_iterations=Sample_iterations+1
if Sample_iterations>1000:
n_W=0.7
n_C2=0.7*1.5*10**-2
if Sample_iterations>2000:
n_W=0.7*0.7
n_C2=0.7*0.7*1.5*10**-2
if Sample_iterations>3000:
n_W=0.7*0.7*0.7
n_C2=0.7*0.7*0.7*1.5*10**-2
if Sample_iterations>5000:
n_W=0.2
n_C2=0.0025
if Sample_iterations>7500:
n_W=0.1
n_C2=0.001
if Sample_iterations>10000:
n_W=0.01
n_C2=0.0005
print "Training completed"
###### test phase!
N_correct=0
for PP in range(20353, N_total):
SS=Sequence[PP]
if SS<N_F:
Class_label=np.array([1,0,0])
inputPatch=Patches_F[SS]
Int_RGB=np.mean(np.mean(Patches_F_RGB[SS,:,:,:], axis=0), axis=0)/255
elif(SS>=N_F) and (SS<(N_F+N_C)):
Class_label=np.array([0,1,0])
inputPatch=Patches_C[SS-N_F]
Int_RGB=np.mean(np.mean(Patches_C_RGB[SS-N_F,:,:,:], axis=0), axis=0)/255
elif(SS>=(N_F+N_C)) and (SS<N_F+N_C+N_W):
Class_label=np.array([0,0,1])
inputPatch=Patches_W[SS-N_F-N_C]
Int_RGB=np.mean(np.mean(Patches_W_RGB[SS-N_F-N_C,:,:,:], axis=0), axis=0)/255
# else:
# Class_label=np.array([0,0,0,1])
# inputPatch=Patches_G[SS-N_F-N_C-N_W]
# Int_RGB=np.mean(np.mean(Patches_G_RGB[SS-N_F-N_C-N_W,:,:,:], axis=0), axis=0)/255
### Layer 1 ###
H1=[]
H2=[]
H3=np.zeros((len(C1), N_branches, S_H3,S_H3))
H4=np.zeros((len(C1), N_branches, S_H4,S_H4))
x=np.zeros(ClassAmount)
f=np.zeros(ClassAmount)
for r in range (0, len(C1)):
H1.append(sig.convolve(inputPatch, C1[r], 'valid'))
H2.append(Pool(H1[r], P12))
for b in range(0,N_branches):
H3[r][b]=Sigmoid(sig.convolve(H2[r], C2[r][b],'valid')-H3_bias[r][b])
H4[r][b]=Pool(H3[r][b],P34)
y=np.append([H4.flatten()], [Int_RGB])
#Now we have 3x3x4x4 inputs, connected to the 3 output nodes
for k in range(0,ClassAmount):
W_t=np.append([W[k].flatten()], [W2[k]])
x[k]=np.inner(y, W_t)
f[k]=Sigmoid(x[k]-Output_bias[k])
f=f/np.sum((f))
if np.argmax(f)==np.argmax(Class_label):
#print True
N_correct=N_correct+1
Perc_corr=float(N_correct)/(2262)
print Perc_corr
ERROR_cv[CROSSES]=Perc_corr
N_correct=0
Sequence=np.roll(Sequence,2262)
```
```python
# CROSS VALIDATION!! WITHOUT COLOUR
# TRAINING PHASE
#delta_H4=np.zeros((len(C1), N_branches, S_H4, S_H4))
#delta_H3=np.zeros((len(C1), N_branches, S_H4, S_H4))
ERROR_cv2=np.zeros([10])
from itertools import product
for CROSSES in range(0,10):
C2=C2_INIT
W=W_INIT
W2=W2_INIT
n_W=1
n_C2=1.5*10**-2
Sample_iterations=0
N_1000=0
###### Chooses patch and defines label #####
#for PP in range(0,len(Sequence)):
for PP in range(0,20353):
SS=Sequence[PP]
#SS=14000
if SS<N_F:
Class_label=np.array([1,0,0])
inputPatch=Patches_F[SS]
elif(SS>=N_F) and (SS<(N_F+N_C)):
Class_label=np.array([0,1,0])
inputPatch=Patches_C[SS-N_F]
elif(SS>=(N_F+N_C)) and (SS<N_F+N_C+N_W):
Class_label=np.array([0,0,1])
inputPatch=Patches_W[SS-N_F-N_C]
#else:
# Class_label=np.array([0,0,0,1])
# inputPatch=Patches_G[SS-N_F-N_C-N_W]
# Int_RGB=np.mean(np.mean(Patches_G_RGB[SS-N_F-N_C-N_W,:,:,:], axis=0), axis=0)/255
### Layer 1 ###
H1=[]
H2=[]
H3=np.zeros((len(C1), N_branches, S_H3,S_H3))
H4=np.zeros((len(C1), N_branches, S_H4,S_H4))
x=np.zeros(ClassAmount)
f=np.zeros(ClassAmount)
II=1
ITER=0
while II==1:
for r in range (0, len(C1)):
H1.append(sig.convolve(inputPatch, C1[r], 'valid'))
H2.append(Pool(H1[r], P12))
#From here on BP takes place!
for b in range(0,N_branches):
H3[r][b]=Sigmoid(sig.convolve(H2[r], C2[r][b],'valid')-H3_bias[r][b])
H4[r][b]=Pool(H3[r][b],P34)
#Now we have 3x3x4x4 inputs, connected to the 3 output nodes
y=H4.flatten()
for k in range(0,ClassAmount):
W_t=W[k].flatten()
x[k]=np.inner(y, W_t)
f[k]=Sigmoid(x[k]-Output_bias[k])
f=f/np.sum((f))
###### Back-propagation #####
# First learning the delta's
delta_H4=np.zeros([ClassAmount,len(C1),N_branches,S_H4,S_H4])
e_k=f-Class_label
delta_k=e_k*Sigmoid_dx(x)
#Output_bias=Output_bias[k]+n_bias*e_k
for k in range(0, ClassAmount):
#update weights output layer
W[k]=W[k]-n_W*delta_k[k]*H4
delta_H4[k]=delta_k[k]*W[k]  # loop variable is k, not i
delta_H4=np.sum(delta_H4, axis=0)
delta_H3=(float(1)/10)*delta_H4
C2_diff=np.zeros([len(C1),N_branches, Size_C2, Size_C2])
for r in range(0, len(C1)):
C2_t=np.array([[delta_H3[r][:]*H2[r][(0+u):(4+u),(0+v):(4+v)] for u in range(0,Size_C2)] for v in range (0,Size_C2)])
C2_t=np.sum(np.sum(C2_t, axis=4),axis=3)
C2_t=np.rollaxis(C2_t,2)
C2_diff[r]=-n_C2*C2_t
C2=C2+C2_diff
#print f
ERROR=np.sum((Class_label-f)**2)
ITER=ITER+1
if ERROR<0.55 or ITER>4:
II=0
#print f, np.argmax(Class_label)
#if np.argmax(f)==np.argmax(Class_label):
#print True
#else:
#print False
# Sample_iterations=Sample_iterations+1
# if (Sample_iterations-(1000*N_1000))==1000:
# print Sample_iterations
# N_1000=N_1000+1
# n_W=0.5*n_W
# n_C2=0.5*n_C2
Sample_iterations=Sample_iterations+1
if Sample_iterations>1000:
n_W=0.7
n_C2=0.7*1.5*10**-2
if Sample_iterations>2000:
n_W=0.7*0.7
n_C2=0.7*0.7*1.5*10**-2
if Sample_iterations>3000:
n_W=0.7*0.7*0.7
n_C2=0.7*0.7*0.7*1.5*10**-2
if Sample_iterations>5000:
n_W=0.2
n_C2=0.0025
if Sample_iterations>7500:
n_W=0.1
n_C2=0.001
if Sample_iterations>10000:
n_W=0.01
n_C2=0.0005
print "Training completed"
###### test phase!
N_correct=0
for PP in range(20353, N_total):
SS=Sequence[PP]
if SS<N_F:
Class_label=np.array([1,0,0])
inputPatch=Patches_F[SS]
elif(SS>=N_F) and (SS<(N_F+N_C)):
Class_label=np.array([0,1,0])
inputPatch=Patches_C[SS-N_F]
elif(SS>=(N_F+N_C)) and (SS<N_F+N_C+N_W):
Class_label=np.array([0,0,1])
inputPatch=Patches_W[SS-N_F-N_C]
# else:
# Class_label=np.array([0,0,0,1])
# inputPatch=Patches_G[SS-N_F-N_C-N_W]
# Int_RGB=np.mean(np.mean(Patches_G_RGB[SS-N_F-N_C-N_W,:,:,:], axis=0), axis=0)/255
### Layer 1 ###
H1=[]
H2=[]
H3=np.zeros((len(C1), N_branches, S_H3,S_H3))
H4=np.zeros((len(C1), N_branches, S_H4,S_H4))
x=np.zeros(ClassAmount)
f=np.zeros(ClassAmount)
for r in range (0, len(C1)):
H1.append(sig.convolve(inputPatch, C1[r], 'valid'))
H2.append(Pool(H1[r], P12))
for b in range(0,N_branches):
H3[r][b]=Sigmoid(sig.convolve(H2[r], C2[r][b],'valid')-H3_bias[r][b])
H4[r][b]=Pool(H3[r][b],P34)
y=H4.flatten()
#Now we have 3x3x4x4 inputs, connected to the 3 output nodes
for k in range(0,ClassAmount):
W_t=W[k].flatten()
x[k]=np.inner(y, W_t)
f[k]=Sigmoid(x[k]-Output_bias[k])
f=f/np.sum((f))
if np.argmax(f)==np.argmax(Class_label):
#print True
N_correct=N_correct+1
Perc_corr=float(N_correct)/2262
print Perc_corr
ERROR_cv2[CROSSES]=Perc_corr
Sequence=np.roll(Sequence,2262)
```
Training completed
0.345269672856
Training completed
0.349690539346
Training completed
0.349248452697
Training completed
0.357648099027
Training completed
0.343059239611
Training completed
0.371794871795
Training completed
0.360300618921
Training completed
0.366931918656
Training completed
0.353227232538
Training completed
0.359858532272
/anaconda/lib/python2.7/site-packages/IPython/kernel/__main__.py:6: RuntimeWarning: overflow encountered in exp
```python
print ERROR_cv
#print ERROR_cv2
Ave_CV_withRGB=np.mean(ERROR_cv)
Std_CV_withRGB=np.std(ERROR_cv)
Ave_CV_withoutRGB=np.mean(ERROR_cv2)
Std_CV_withoutRGB=np.std(ERROR_cv2)
with open("Ave_CV_withRGB.txt", 'w') as f:
f.write(str(Ave_CV_withRGB))
with open("Std_CV_withRGB.txt", 'w') as f:
f.write(str(Std_CV_withRGB))
with open("Ave_CV_withoutRGB.txt", 'w') as f:
f.write(str(Ave_CV_withoutRGB))
with open("Std_CV_withoutRGB.txt", 'w') as f:
f.write(str(Std_CV_withoutRGB))
```
[ 0.66843501 0.67064545 0.68523431 0.64544651 0.66622458 0.65340407
0.68567639 0.67639257 0.67992927 0.66312997]
```python
print Ave_CV_withRGB, Std_CV_withRGB, Ave_CV_withoutRGB, Std_CV_withoutRGB
```
0.669451812555 0.0124775775099 0.355702917772 0.00883111652639
# save training parameters
```python
import pickle
# obj0, obj1, obj2 are created here...
# Saving the objects:
#with open('objs.pickle', 'w') as f:
# pickle.dump([obj0, obj1, obj2], f)
# Getting back the objects:
#with open('objs.pickle') as f:
# obj0, obj1, obj2 = pickle.load(f)
import pickle
file=open('W.txt','w')
pickle.dump(W,file)
file.close()
file=open('W2.txt','w')
pickle.dump(W2,file)
file.close()
file=open('Output_bias.txt','w')
pickle.dump(Output_bias,file)
file.close()
file=open('H3_bias.txt','w')
pickle.dump(H3_bias,file)
file.close()
file=open('C2.txt','w')
pickle.dump(C2,file)
file.close()
```
```python
W2
```
array([[ 5.29303417, -2.70781514, -18.24611794],
[ 40.34618459, -34.05035278, 10.92748976],
[-27.57509419, 21.62962176, -7.85044736],
[ 0.22459264, 0.27081064, 0.66933584]])
# Linearization at the differential equation level
<div id="nonlin:pdelevel"></div>
The attention is now turned to nonlinear partial differential
equations (PDEs) and application of the techniques explained above for
ODEs. The model problem is a nonlinear diffusion equation for
$u(\x,t)$:
<!-- Equation labels as ordinary links -->
<div id="nonlin:pdelevel:model:pde"></div>
$$
\begin{equation}
\frac{\partial u}{\partial t} = \nabla\cdot (\dfc(u)\nabla u) + f(u),\quad
\x\in\Omega,\ t\in (0,T],
\label{nonlin:pdelevel:model:pde} \tag{1}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="nonlin:pdelevel:model:Neumann"></div>
$$
\begin{equation}
-\dfc(u)\frac{\partial u}{\partial n} = g,\quad \x\in\partial\Omega_N,\
t\in (0,T],
\label{nonlin:pdelevel:model:Neumann} \tag{2}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="nonlin:pdelevel:model:Dirichlet"></div>
$$
\begin{equation}
u = u_0,\quad \x\in\partial\Omega_D,\ t\in (0,T]\thinspace .
\label{nonlin:pdelevel:model:Dirichlet} \tag{3}
\end{equation}
$$
In the present section, our aim is to discretize this problem in time
and then present techniques for linearizing the time-discrete PDE
problem "at the PDE level" such that we transform the nonlinear
stationary PDE problem at each time level into a sequence of linear
PDE problems, which can be solved using any method for linear
PDEs. This strategy avoids the solution of systems of nonlinear
algebraic equations. In the section [1D stationary nonlinear differential equations](#nonlin:alglevel:1D) we shall take
the opposite (and more common) approach: discretize the nonlinear
problem in time and space first, and then solve the resulting
nonlinear algebraic equations at each time level by the methods of
the section [nonlin:systems:alg](#nonlin:systems:alg). Very often, the two approaches are
mathematically identical, so there is no preference from a
computational efficiency point of view. The details of the ideas
sketched above will hopefully become clear through the forthcoming
examples.
## Explicit time integration
<div id="nonlin:pdelevel:explicit"></div>
The nonlinearities in the PDE are trivial to deal with if we choose an
explicit time integration method for ([1](#nonlin:pdelevel:model:pde)),
such as the Forward Euler method:
$$
[D_t^+ u = \nabla\cdot (\dfc(u)\nabla u) + f(u)]^n,
$$
or written out,
$$
\frac{u^{n+1} - u^n}{\Delta t} = \nabla\cdot (\dfc(u^n)\nabla u^n)
+ f(u^n),
$$
which is a linear equation in the unknown $u^{n+1}$ with solution
$$
u^{n+1} = u^n + \Delta t\nabla\cdot (\dfc(u^n)\nabla u^n) +
\Delta t f(u^n)\thinspace .
$$
The disadvantage with this discretization is
the strict stability criterion $\Delta t \leq h^2/(6\max\alpha)$
for the case $f=0$ and a standard 2nd-order finite difference discretization
in 3D space with mesh cell sizes $h=\Delta x=\Delta y=\Delta z$.
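For concreteness, here is a minimal 1D sketch of the explicit update. All names, the coefficient $\dfc$, the source $f$, the initial condition, and the boundary treatment are our own illustrative choices, not part of the model above:

```python
import numpy as np

# illustrative coefficient and source (our choices)
alpha = lambda u: 1 + u**2
f = lambda u: -u

Nx, L, T = 50, 1.0, 0.05
x = np.linspace(0, L, Nx + 1)
dx = x[1] - x[0]
u = np.exp(-50*(x - 0.5*L)**2)            # illustrative initial condition
dt = 0.25*dx**2/alpha(u).max()            # respect the stability restriction
                                          # (recompute if alpha(u) grows)
t = 0.0
while t < T:
    a_mid = 0.5*(alpha(u[:-1]) + alpha(u[1:]))   # alpha at the midpoints
    flux = a_mid*(u[1:] - u[:-1])/dx             # alpha(u) du/dx
    u[1:-1] += dt*((flux[1:] - flux[:-1])/dx + f(u[1:-1]))
    u[0], u[-1] = u[1], u[-2]                    # crude zero-flux boundaries
    t += dt
```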
<!-- BC -->
## Backward Euler scheme and Picard iteration
<div id="nonlin:pdelevel:Picard"></div>
A Backward Euler scheme for ([1](#nonlin:pdelevel:model:pde))
reads
$$
[D_t^- u = \nabla\cdot (\dfc(u)\nabla u) + f(u)]^n\thinspace .
$$
Written out,
<!-- Equation labels as ordinary links -->
<div id="nonlin:pdelevel:pde:BE"></div>
$$
\begin{equation}
\frac{u^{n} - u^{n-1}}{\Delta t} = \nabla\cdot (\dfc(u^n)\nabla u^n)
+ f(u^n)\thinspace .
\label{nonlin:pdelevel:pde:BE} \tag{4}
\end{equation}
$$
This is a nonlinear PDE for the unknown function $u^n(\x)$. Such a
PDE can be viewed as a time-independent PDE where
$u^{n-1}(\x)$ is a known function.
We introduce a Picard iteration with $k$ as iteration counter.
A typical linearization of the $\nabla\cdot(\dfc(u^n)\nabla u^n)$ term
in iteration $k+1$ is to use the previously computed $u^{n,k}$
approximation in the diffusion coefficient: $\dfc(u^{n,k})$.
The nonlinear source term is treated similarly: $f(u^{n,k})$.
The unknown function $u^{n,k+1}$ then fulfills the linear PDE
<!-- Equation labels as ordinary links -->
<div id="nonlin:pdelevel:pde:BE:Picard:k"></div>
$$
\begin{equation}
\frac{u^{n,k+1} - u^{n-1}}{\Delta t} = \nabla\cdot (\dfc(u^{n,k})
\nabla u^{n,k+1})
+ f(u^{n,k})\thinspace .
\label{nonlin:pdelevel:pde:BE:Picard:k} \tag{5}
\end{equation}
$$
The initial guess for the Picard iteration at this time level can be
taken as the solution at the previous time level: $u^{n,0}=u^{n-1}$.
We can alternatively apply the implementation-friendly
notation where $u$ corresponds to
the unknown we want to solve for, i.e., $u^{n,k+1}$ above, and $u^{-}$
is the most recently computed value, $u^{n,k}$ above. Moreover,
$u^{(1)}$ denotes the unknown function at the previous time level, $u^{n-1}$
above. The PDE to be solved in a Picard iteration then looks like
<!-- Equation labels as ordinary links -->
<div id="nonlin:pdelevel:pde:BE:Picard"></div>
$$
\begin{equation}
\frac{u - u^{(1)}}{\Delta t} = \nabla\cdot (\dfc(u^{-})
\nabla u)
+ f(u^{-})\thinspace .
\label{nonlin:pdelevel:pde:BE:Picard} \tag{6}
\end{equation}
$$
At the beginning of the iteration we start with the value from the
previous time level: $u^{-}=u^{(1)}$, and after each
iteration, $u^{-}$ is updated to $u$.
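The resulting algorithm is a loop over Picard iterations inside a loop over time levels. A schematic sketch, where `solve_linear_pde` is a hypothetical stand-in for any solver of the linear problem ([6](#nonlin:pdelevel:pde:BE:Picard)) with frozen coefficients:

```python
import numpy as np

def picard_time_step(u_1, dt, solve_linear_pde, tol=1e-6, max_iter=25):
    """One Backward Euler step solved by Picard iteration.
    solve_linear_pde(u_, u_1, dt) solves (6) with coefficients
    evaluated at the most recent iterate u_."""
    u_ = u_1.copy()                   # initial guess: previous time level
    for k in range(max_iter):
        u = solve_linear_pde(u_, u_1, dt)
        if np.linalg.norm(u - u_) <= tol*np.linalg.norm(u):
            break
        u_ = u                        # update the "most recent" value
    return u
```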
**Remark on notation.**
The previous derivations of the numerical scheme for time discretizations
of PDEs have, strictly
speaking, a somewhat sloppy notation, but it is much used and convenient
to read. A more precise notation must
distinguish clearly between the exact solution of the PDE problem,
here denoted $\uex(\x,t)$, and the exact solution of the spatial
problem, arising after time discretization at each time level,
where ([4](#nonlin:pdelevel:pde:BE)) is an example. The latter
is here represented as $u^n(\x)$ and is an approximation to
$\uex(\x,t_n)$. Then we have another approximation $u^{n,k}(\x)$
to $u^n(\x)$ when solving the nonlinear PDE problem for
$u^n$ by iteration methods, as in ([5](#nonlin:pdelevel:pde:BE:Picard:k)).
In our notation, $u$ is a synonym for $u^{n,k+1}$ and $u^{(1)}$ is
a synonym for $u^{n-1}$, inspired by what are natural variable names
in a code.
We will usually state the PDE problem in terms of $u$ and
quickly redefine the symbol $u$ to mean the numerical approximation,
while $\uex$ is not explicitly introduced unless we need to talk about
the exact solution and the approximate solution at the same time.
## Backward Euler scheme and Newton's method
<div id="nonlin:pdelevel:Newton"></div>
At time level $n$, we have to solve the stationary PDE
([4](#nonlin:pdelevel:pde:BE)). In the previous section, we
saw how this can be done with Picard iterations.
Another alternative is to apply the idea of Newton's method
in a clever way.
Normally, Newton's method is defined for systems of *algebraic equations*,
but the idea of the method can be applied at the PDE level too.
### Linearization via Taylor expansions
Let $u^{n,k}$ be an approximation to the unknown $u^n$. We seek a
better approximation on
the form
<!-- Equation labels as ordinary links -->
<div id="nonlin:pdelevel:Newton:ansatz"></div>
$$
\begin{equation}
u^{n} = u^{n,k} + \delta u\thinspace .
\label{nonlin:pdelevel:Newton:ansatz} \tag{7}
\end{equation}
$$
The idea is to insert ([7](#nonlin:pdelevel:Newton:ansatz)) in
([4](#nonlin:pdelevel:pde:BE)), Taylor expand the nonlinearities
and keep only the terms that are
linear in $\delta u$ (which makes ([7](#nonlin:pdelevel:Newton:ansatz))
an approximation for $u^{n}$). Then we can solve a linear PDE for
the correction $\delta u$ and use ([7](#nonlin:pdelevel:Newton:ansatz))
to find a new approximation
$$
u^{n,k+1}=u^{n,k}+\delta u
$$
to $u^{n}$.
Repeating this procedure gives a sequence $u^{n,k+1}$, $k=0,1,\ldots$
that hopefully converges to the goal $u^n$.
Let us carry out all the mathematical details for the nonlinear diffusion
PDE discretized by the Backward Euler method.
Inserting ([7](#nonlin:pdelevel:Newton:ansatz)) in
([4](#nonlin:pdelevel:pde:BE)) gives
<!-- Equation labels as ordinary links -->
<div id="nonlin:pdelevel:pde:BE:Newton1"></div>
$$
\begin{equation}
\frac{u^{n,k} +\delta u - u^{n-1}}{\Delta t} =
\nabla\cdot (\dfc(u^{n,k} + \delta u)\nabla (u^{n,k}+\delta u))
+ f(u^{n,k}+\delta u)\thinspace .
\label{nonlin:pdelevel:pde:BE:Newton1} \tag{8}
\end{equation}
$$
We can Taylor expand $\dfc(u^{n,k} + \delta u)$ and
$f(u^{n,k}+\delta u)$:
$$
\begin{align*}
\dfc(u^{n,k} + \delta u) & = \dfc(u^{n,k}) + \frac{d\dfc}{du}(u^{n,k})
\delta u + \Oof{\delta u^2}\approx \dfc(u^{n,k}) + \dfc^{\prime}(u^{n,k})\delta u,\\
f(u^{n,k}+\delta u) &= f(u^{n,k}) + \frac{df}{du}(u^{n,k})\delta u
+ \Oof{\delta u^2}\approx f(u^{n,k}) + f^{\prime}(u^{n,k})\delta u\thinspace .
\end{align*}
$$
Inserting the linear approximations of $\dfc$ and $f$ in
([8](#nonlin:pdelevel:pde:BE:Newton1)) results in
$$
\frac{u^{n,k} +\delta u - u^{n-1}}{\Delta t} =
\nabla\cdot (\dfc(u^{n,k})\nabla u^{n,k}) + f(u^{n,k}) + \nonumber
$$
$$
\qquad \nabla\cdot (\dfc(u^{n,k})\nabla \delta u)
+ \nabla\cdot (\dfc^{\prime}(u^{n,k})\delta u\nabla u^{n,k}) + \nonumber
$$
<!-- Equation labels as ordinary links -->
<div id="nonlin:pdelevel:pde:BE:Newton2"></div>
$$
\begin{equation}
\qquad \nabla\cdot (\dfc^{\prime}(u^{n,k})\delta u\nabla \delta u)
+ f^{\prime}(u^{n,k})\delta u\thinspace .
\label{nonlin:pdelevel:pde:BE:Newton2} \tag{9}
\end{equation}
$$
The term $\dfc^{\prime}(u^{n,k})\delta u\nabla \delta u$ is of
order $\delta u^2$
and therefore omitted since we expect the correction $\delta u$
to be small ($\delta u \gg \delta u^2$).
Reorganizing the equation gives a PDE
for $\delta u$ that we can write in short form as
$$
\delta F(\delta u; u^{n,k}) = -F(u^{n,k}),
$$
where
<!-- Equation labels as ordinary links -->
<div id="nonlin:pdelevel:pde:BE:Newton2:F"></div>
$$
\begin{equation}
F(u^{n,k}) = \frac{u^{n,k} - u^{n-1}}{\Delta t} -
\nabla\cdot (\dfc(u^{n,k})\nabla u^{n,k}) - f(u^{n,k}),
\label{nonlin:pdelevel:pde:BE:Newton2:F} \tag{10}
\end{equation}
$$
$$
\delta F(\delta u; u^{n,k}) =
\frac{1}{\Delta t}\delta u -
\nabla\cdot (\dfc(u^{n,k})\nabla \delta u) - \nonumber
$$
<!-- Equation labels as ordinary links -->
<div id="_auto1"></div>
$$
\begin{equation}
\quad \nabla\cdot (\dfc^{\prime}(u^{n,k})\delta u\nabla u^{n,k})
- f^{\prime}(u^{n,k})\delta u\thinspace .
\label{_auto1} \tag{11}
\end{equation}
$$
Note that $\delta F$ is a linear function of $\delta u$, and
$F$ contains only terms that are known, such that
the PDE for $\delta u$ is indeed linear.
**Observations.**
The notational form $\delta F = -F$ resembles the Newton system $J\delta u =-F$
for systems of algebraic equations, with $\delta F$ as $J\delta u$.
The unknown vector in a linear system of algebraic equations enters
the system as a linear operator in terms of a
matrix-vector product ($J\delta u$), while at
the PDE level we have a linear differential operator instead
($\delta F$).
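The linearization of the flux term can be checked symbolically. The 1D `sympy` snippet below (our own check; the $\dfc^{\prime}(u)$ factor appears in the output as a `Subs`/`Derivative` object) expands $\dfc(u+\epsilon\,\delta u)(u+\epsilon\,\delta u)^{\prime}$ and keeps the terms through first order in $\epsilon$:

```python
import sympy as sp

x, eps = sp.symbols('x eps')
u = sp.Function('u')(x)
du = sp.Function('du')(x)            # the correction, delta u
alpha = sp.Function('alpha')

# expand alpha(u + eps*du)*(u + eps*du)' and keep terms through O(eps)
expr = alpha(u + eps*du)*sp.diff(u + eps*du, x)
linear = expr.series(eps, 0, 2).removeO().subs(eps, 1)
print(sp.expand(linear))
# up to notation, this is alpha(u)*u' + alpha(u)*du' + alpha'(u)*du*u'
```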
### Similarity with Picard iteration
We can rewrite the PDE for $\delta u$ in a slightly different way too
if we define $u^{n,k} + \delta u$ as $u^{n,k+1}$.
$$
\frac{u^{n,k+1} - u^{n-1}}{\Delta t} =
\nabla\cdot (\dfc(u^{n,k})\nabla u^{n,k+1}) + f(u^{n,k})\nonumber
$$
<!-- Equation labels as ordinary links -->
<div id="_auto2"></div>
$$
\begin{equation}
\qquad + \nabla\cdot (\dfc^{\prime}(u^{n,k})\delta u\nabla u^{n,k})
+ f^{\prime}(u^{n,k})\delta u\thinspace .
\label{_auto2} \tag{12}
\end{equation}
$$
Note that the first line is the same PDE as arises in the Picard
iteration, while the remaining terms arise from the differentiations
that are an inherent ingredient in Newton's method.
### Implementation
For coding we want to introduce $u$ for $u^n$, $u^{-}$ for $u^{n,k}$ and
$u^{(1)}$ for $u^{n-1}$. The formulas for $F$ and $\delta F$
are then more clearly written as
<!-- Equation labels as ordinary links -->
<div id="nonlin:pdelevel:pde:BE:Newton2:F2"></div>
$$
\begin{equation}
F(u^{-}) = \frac{u^{-} - u^{(1)}}{\Delta t} -
\nabla\cdot (\dfc(u^{-})\nabla u^{-}) - f(u^{-}),
\label{nonlin:pdelevel:pde:BE:Newton2:F2} \tag{13}
\end{equation}
$$
$$
\delta F(\delta u; u^{-}) =
\frac{1}{\Delta t}\delta u -
\nabla\cdot (\dfc(u^{-})\nabla \delta u) - \nonumber
$$
<!-- Equation labels as ordinary links -->
<div id="_auto3"></div>
$$
\begin{equation}
\quad \nabla\cdot (\dfc^{\prime}(u^{-})\delta u\nabla u^{-})
- f^{\prime}(u^{-})\delta u\thinspace .
\label{_auto3} \tag{14}
\end{equation}
$$
The form that orders the PDE as the Picard iteration terms plus
the Newton method's derivative terms becomes
$$
\frac{u - u^{(1)}}{\Delta t} =
\nabla\cdot (\dfc(u^{-})\nabla u) + f(u^{-}) + \nonumber
$$
<!-- Equation labels as ordinary links -->
<div id="_auto4"></div>
$$
\begin{equation}
\qquad \gamma(\nabla\cdot (\dfc^{\prime}(u^{-})(u - u^{-})\nabla u^{-})
+ f^{\prime}(u^{-})(u - u^{-}))\thinspace .
\label{_auto4} \tag{15}
\end{equation}
$$
The Picard and full Newton versions correspond to
$\gamma=0$ and $\gamma=1$, respectively.
### Derivation with alternative notation
Some may prefer to derive the linearized PDE for $\delta u$ using
the more compact notation. We start with inserting $u^n=u^{-}+\delta u$
to get
$$
\frac{u^{-} +\delta u - u^{n-1}}{\Delta t} =
\nabla\cdot (\dfc(u^{-} + \delta u)\nabla (u^{-}+\delta u))
+ f(u^{-}+\delta u)\thinspace .
$$
Taylor expanding,
$$
\begin{align*}
\dfc(u^{-} + \delta u) & \approx \dfc(u^{-}) + \dfc^{\prime}(u^{-})\delta u,\\
f(u^{-}+\delta u) & \approx f(u^{-}) + f^{\prime}(u^{-})\delta u,
\end{align*}
$$
and inserting these expressions gives a less cluttered PDE for $\delta u$:
$$
\begin{align*}
\frac{u^{-} +\delta u - u^{n-1}}{\Delta t} &=
\nabla\cdot (\dfc(u^{-})\nabla u^{-}) + f(u^{-}) + \\
&\qquad \nabla\cdot (\dfc(u^{-})\nabla \delta u)
+ \nabla\cdot (\dfc^{\prime}(u^{-})\delta u\nabla u^{-}) + \\
&\qquad \nabla\cdot (\dfc^{\prime}(u^{-})\delta u\nabla \delta u)
+ f^{\prime}(u^{-})\delta u\thinspace .
\end{align*}
$$
## Crank-Nicolson discretization
<div id="nonlin:pdelevel:Picard:CN"></div>
A Crank-Nicolson discretization of
([1](#nonlin:pdelevel:model:pde)) applies a centered difference
at $t_{n+\frac{1}{2}}$:
$$
[D_t u = \nabla\cdot (\dfc(u)\nabla u) + f(u)]^{n+\frac{1}{2}}\thinspace .
$$
The standard technique is to apply an arithmetic average for
quantities defined between two mesh points, e.g.,
$$
u^{n+\frac{1}{2}}\approx \frac{1}{2}(u^n + u^{n+1})\thinspace .
$$
However, with nonlinear terms we have many choices of formulating
an arithmetic mean:
<!-- Equation labels as ordinary links -->
<div id="_auto5"></div>
$$
\begin{equation}
[f(u)]^{n+\frac{1}{2}} \approx f(\frac{1}{2}(u^n + u^{n+1}))
= [f(\overline{u}^t)]^{n+\frac{1}{2}},
\label{_auto5} \tag{16}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="_auto6"></div>
$$
\begin{equation}
[f(u)]^{n+\frac{1}{2}} \approx \frac{1}{2}(f(u^n) + f(u^{n+1}))
=[\overline{f(u)}^t]^{n+\frac{1}{2}},
\label{_auto6} \tag{17}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="_auto7"></div>
$$
\begin{equation}
[\dfc(u)\nabla u]^{n+\frac{1}{2}} \approx
\dfc(\frac{1}{2}(u^n + u^{n+1}))\nabla (\frac{1}{2}(u^n + u^{n+1}))
= [\dfc(\overline{u}^t)\nabla \overline{u}^t]^{n+\frac{1}{2}},
\label{_auto7} \tag{18}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="_auto8"></div>
$$
\begin{equation}
[\dfc(u)\nabla u]^{n+\frac{1}{2}} \approx
\frac{1}{2}(\dfc(u^n) + \dfc(u^{n+1}))\nabla (\frac{1}{2}(u^n + u^{n+1}))
= [\overline{\dfc(u)}^t\nabla\overline{u}^t]^{n+\frac{1}{2}},
\label{_auto8} \tag{19}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="_auto9"></div>
$$
\begin{equation}
[\dfc(u)\nabla u]^{n+\frac{1}{2}} \approx
\frac{1}{2}(\dfc(u^n)\nabla u^n + \dfc(u^{n+1})\nabla u^{n+1})
= [\overline{\dfc(u)\nabla u}^t]^{n+\frac{1}{2}}\thinspace .
\label{_auto9} \tag{20}
\end{equation}
$$
A big question is whether there are significant differences in accuracy
between taking the products of arithmetic means or taking the arithmetic
mean of products. [nonlin:exer:products:arith:mean](#nonlin:exer:products:arith:mean) investigates
this question, and the answer is that the approximation is
$\Oof{\Delta t^2}$ in both cases.
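A quick symbolic check of this claim for the $f(u)$ term (our own check; the printed series involves `Derivative`/`Subs` objects): the difference between the mean of $f$-values, ([17](#_auto6)), and $f$ of the mean, ([16](#_auto5)), starts at $\Oof{\Delta t^2}$.

```python
import sympy as sp

dt, t = sp.symbols('dt t')
u, f = sp.Function('u'), sp.Function('f')

un, un1 = u(t), u(t + dt)
mean_of_f = (f(un) + f(un1))/2       # as in (17)
f_of_mean = f((un + un1)/2)          # as in (16)
print(sp.series(mean_of_f - f_of_mean, dt, 0, 3))
# the O(1) and O(dt) terms cancel; the leading difference is O(dt**2)
```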
# 1D stationary nonlinear differential equations
<div id="nonlin:alglevel:1D"></div>
The section [Linearization at the differential equation level](#nonlin:pdelevel) presented methods for linearizing
time-discrete PDEs directly prior to discretization in space. We can
alternatively carry out the discretization in space of the
time-discrete nonlinear PDE problem and get a system of nonlinear
algebraic equations, which can be solved by Picard iteration or
Newton's method as presented in the section [nonlin:systems:alg](#nonlin:systems:alg).
This latter approach will now be described in detail.
We shall work with the 1D problem
<!-- Equation labels as ordinary links -->
<div id="nonlin:alglevel:1D:pde"></div>
$$
\begin{equation}
-(\dfc(u)u^{\prime})^{\prime} + au = f(u),\quad x\in (0,L),
\quad \dfc(u(0))u^{\prime}(0) = C,\ u(L)=D
\thinspace .
\label{nonlin:alglevel:1D:pde} \tag{21}
\end{equation}
$$
The problem ([21](#nonlin:alglevel:1D:pde)) arises from the stationary
limit of a diffusion equation,
<!-- Equation labels as ordinary links -->
<div id="nonlin:alglevel:1D:pde:tver"></div>
$$
\begin{equation}
\frac{\partial u}{\partial t} = \frac{\partial}{\partial x}\left(
\alpha(u)\frac{\partial u}{\partial x}\right) - au + f(u),
\label{nonlin:alglevel:1D:pde:tver} \tag{22}
\end{equation}
$$
as $t\rightarrow\infty$ and $\partial u/\partial t\rightarrow 0$.
Alternatively, the problem ([21](#nonlin:alglevel:1D:pde)) arises
at each time level from implicit time discretization of
([22](#nonlin:alglevel:1D:pde:tver)). For example, a Backward Euler
scheme for ([22](#nonlin:alglevel:1D:pde:tver)) leads to
<!-- Equation labels as ordinary links -->
<div id="nonlin:alglevel:1D:pde:tver:BE"></div>
$$
\begin{equation}
\frac{u^{n}-u^{n-1}}{\Delta t} =
\frac{d}{dx}\left(
\alpha(u^n)\frac{du^n}{dx}\right) - au^n + f(u^n)\thinspace .
\label{nonlin:alglevel:1D:pde:tver:BE} \tag{23}
\end{equation}
$$
Introducing $u(x)$ for $u^n(x)$, $u^{(1)}$ for $u^{n-1}$, and defining $f(u)$
in ([21](#nonlin:alglevel:1D:pde)) to be $f(u)$ in
([23](#nonlin:alglevel:1D:pde:tver:BE)) plus $u^{n-1}/\Delta t$, gives
([21](#nonlin:alglevel:1D:pde)) with $a=1/\Delta t$.
## Finite difference discretization
<div id="nonlin:alglevel:1D:fd"></div>
The nonlinearity in the differential equation
([21](#nonlin:alglevel:1D:pde)) poses no more difficulty than a variable
coefficient, as in the term $(\dfc(x)u^{\prime})^{\prime}$. We can
therefore use a standard finite difference approach when discretizing
the Laplace term with a variable coefficient:
$$
[-D_x\dfc D_x u +au = f]_i\thinspace .
$$
Writing this out for a uniform mesh with points $x_i=i\Delta x$,
$i=0,\ldots,N_x$, leads to
<!-- Equation labels as ordinary links -->
<div id="nonlin:alglevel:1D:fd:deq0"></div>
$$
\begin{equation}
-\frac{1}{\Delta x^2}
\left(\dfc_{i+\frac{1}{2}}(u_{i+1}-u_i) -
\dfc_{i-\frac{1}{2}}(u_{i}-u_{i-1})\right)
+ au_i = f(u_i)\thinspace .
\label{nonlin:alglevel:1D:fd:deq0} \tag{24}
\end{equation}
$$
This equation is valid at all the mesh points $i=0,1,\ldots,N_x-1$.
At $i=N_x$ we have the Dirichlet condition $u_i=D$.
The only difference from the case with $(\dfc(x)u^{\prime})^{\prime}$ and $f(x)$ is that
now $\dfc$ and $f$ are functions of $u$ and not only of $x$:
$(\dfc(u(x))u^{\prime})^{\prime}$ and $f(u(x))$.
The quantity $\dfc_{i+\frac{1}{2}}$, evaluated between two mesh points,
needs a comment. Since $\dfc$ depends on $u$ and $u$ is only known
at the mesh points, we need to express $\dfc_{i+\frac{1}{2}}$ in
terms of $u_i$ and $u_{i+1}$. For this purpose we use an arithmetic
mean, although a harmonic mean is also common in this context if
$\dfc$ features large jumps.
There are two choices of arithmetic means:
<!-- Equation labels as ordinary links -->
<div id="nonlin:alglevel:1D:fd:dfc:mean:u"></div>
$$
\begin{equation}
\dfc_{i+\frac{1}{2}} \approx
\dfc(\frac{1}{2}(u_i + u_{i+1})) =
[\dfc(\overline{u}^x)]^{i+\frac{1}{2}},
\label{nonlin:alglevel:1D:fd:dfc:mean:u} \tag{25}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="nonlin:alglevel:1D:fd:dfc:mean:dfc"></div>
$$
\begin{equation}
\dfc_{i+\frac{1}{2}} \approx
\frac{1}{2}(\dfc(u_i) + \dfc(u_{i+1})) = [\overline{\dfc(u)}^x]^{i+\frac{1}{2}}
\label{nonlin:alglevel:1D:fd:dfc:mean:dfc} \tag{26}
\end{equation}
$$
Equation ([24](#nonlin:alglevel:1D:fd:deq0)) with
the latter approximation then looks like
$$
-\frac{1}{2\Delta x^2}
\left((\dfc(u_i)+\dfc(u_{i+1}))(u_{i+1}-u_i) -
(\dfc(u_{i-1})+\dfc(u_{i}))(u_{i}-u_{i-1})\right)\nonumber
$$
<!-- Equation labels as ordinary links -->
<div id="nonlin:alglevel:1D:fd:deq"></div>
$$
\begin{equation}
\qquad\qquad + au_i = f(u_i),
\label{nonlin:alglevel:1D:fd:deq} \tag{27}
\end{equation}
$$
or written more compactly,
$$
[-D_x\overline{\dfc}^x D_x u +au = f]_i\thinspace .
$$
At mesh point $i=0$ we have the boundary condition $\dfc(u)u^{\prime}=C$,
which is discretized by
$$
[\dfc(u)D_{2x}u = C]_0,
$$
meaning
<!-- Equation labels as ordinary links -->
<div id="nonlin:alglevel:1D:fd:Neumann:x0"></div>
$$
\begin{equation}
\dfc(u_0)\frac{u_{1} - u_{-1}}{2\Delta x} = C\thinspace .
\label{nonlin:alglevel:1D:fd:Neumann:x0} \tag{28}
\end{equation}
$$
The fictitious value $u_{-1}$ can be eliminated with the aid
of ([27](#nonlin:alglevel:1D:fd:deq)) for $i=0$.
Formally, ([27](#nonlin:alglevel:1D:fd:deq)) should be solved with
respect to $u_{i-1}$ and that value (for $i=0$) should be inserted in
([28](#nonlin:alglevel:1D:fd:Neumann:x0)), but it is algebraically
much easier to do it the other way around. Alternatively, one can
use a ghost cell $[-\Delta x,0]$ and update the $u_{-1}$ value
in the ghost cell according to ([28](#nonlin:alglevel:1D:fd:Neumann:x0))
after every Picard or Newton iteration. Such an approach means that
we use a known $u_{-1}$ value in ([27](#nonlin:alglevel:1D:fd:deq))
from the previous iteration.
## Solution of algebraic equations
### The structure of the equation system
The nonlinear algebraic equations ([27](#nonlin:alglevel:1D:fd:deq)) are
of the form $A(u)u = b(u)$ with
$$
\begin{align*}
A_{i,i} &= \frac{1}{2\Delta x^2}(\dfc(u_{i-1}) + 2\dfc(u_{i})
+ \dfc(u_{i+1})) + a,\\
A_{i,i-1} &= -\frac{1}{2\Delta x^2}(\dfc(u_{i-1}) + \dfc(u_{i})),\\
A_{i,i+1} &= -\frac{1}{2\Delta x^2}(\dfc(u_{i}) + \dfc(u_{i+1})),\\
b_i &= f(u_i)\thinspace .
\end{align*}
$$
The matrix $A(u)$ is tridiagonal: $A_{i,j}=0$ for $j > i+1$ and $j < i-1$.
The above expressions are valid for internal mesh points $1\leq i\leq N_x-1$.
For $i=0$ we need to express $u_{i-1}=u_{-1}$ in terms of $u_1$ using
([28](#nonlin:alglevel:1D:fd:Neumann:x0)):
<!-- Equation labels as ordinary links -->
<div id="nonlin:alglevel:1D:fd:Neumann:x0:um1"></div>
$$
\begin{equation}
u_{-1} = u_1 -\frac{2\Delta x}{\dfc(u_0)}C\thinspace .
\label{nonlin:alglevel:1D:fd:Neumann:x0:um1} \tag{29}
\end{equation}
$$
This value must be inserted in $A_{0,0}$. The expression for $A_{i,i+1}$
applies for $i=0$, and $A_{i,i-1}$ does not enter the system when $i=0$.
Regarding the last equation, its form depends on whether we include
the Dirichlet condition $u(L)=D$, meaning $u_{N_x}=D$, in the
nonlinear algebraic equation system or not. Suppose we choose
$(u_0,u_1,\ldots,u_{N_x-1})$ as unknowns, later referred to as
*systems without Dirichlet conditions*. The last equation
corresponds to $i=N_x-1$. It involves the boundary value $u_{N_x}$,
which is substituted by $D$. If the unknown vector includes the
boundary value, $(u_0,u_1,\ldots,u_{N_x})$, later referred to as
*system including Dirichlet conditions*, the equation for $i=N_x-1$
just involves the unknown $u_{N_x}$, and the final equation becomes
$u_{N_x}=D$, corresponding to $A_{i,i}=1$ and $b_i=D$ for $i=N_x$.
### Picard iteration
The obvious Picard iteration scheme is to use previously computed
values of $u_i$ in $A(u)$ and $b(u)$, as described more in detail in
the section [nonlin:systems:alg](#nonlin:systems:alg). With the notation $u^{-}$ for the
most recently computed value of $u$, we have the system $F(u)\approx
\hat F(u) = A(u^{-})u - b(u^{-})$, with $F=(F_0,F_1,\ldots,F_m)$,
$u=(u_0,u_1,\ldots,u_m)$. The index $m$ is $N_x$ if the system
includes the Dirichlet condition as a separate equation and $N_x-1$
otherwise. The matrix $A(u^{-})$ is tridiagonal, so the solution
procedure is to fill a tridiagonal matrix data structure and the
right-hand side vector with the right numbers and call a Gaussian
elimination routine for tridiagonal linear systems.
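A minimal sketch of one possible implementation, using `scipy.linalg.solve_banded` as the tridiagonal solver and keeping the Dirichlet condition in the system; the $i=0$ row eliminates the fictitious value via ([29](#nonlin:alglevel:1D:fd:Neumann:x0:um1)). All names are ours:

```python
import numpy as np
from scipy.linalg import solve_banded

def picard_solve(alpha, f, a, C, D, L=1.0, Nx=20, tol=1e-8, max_iter=50):
    """Picard iteration for -(alpha(u)u')' + a*u = f(u) with
    alpha(u(0))u'(0) = C and u(L) = D kept as a separate equation."""
    dx = L/Nx
    u = np.full(Nx + 1, D, dtype=float)          # crude initial guess
    for it in range(max_iter):
        u_ = u.copy()                            # most recent iterate
        al = alpha(u_)
        lower = np.zeros(Nx + 1)                 # A[i,i-1]
        diag  = np.zeros(Nx + 1)                 # A[i,i]
        upper = np.zeros(Nx + 1)                 # A[i,i+1]
        b = np.zeros(Nx + 1)
        for i in range(1, Nx):
            lower[i] = -0.5*(al[i-1] + al[i])/dx**2
            upper[i] = -0.5*(al[i] + al[i+1])/dx**2
            diag[i]  = -(lower[i] + upper[i]) + a
            b[i] = f(u_[i])
        # i = 0: eliminate the fictitious u_{-1} = u_1 - 2*dx*C/alpha(u_0)
        al_m1 = alpha(u_[1] - 2*dx*C/al[0])
        diag[0]  =  0.5*(al_m1 + 2*al[0] + al[1])/dx**2 + a
        upper[0] = -0.5*(al_m1 + 2*al[0] + al[1])/dx**2
        b[0] = f(u_[0]) - (al_m1 + al[0])*C/(al[0]*dx)
        diag[Nx], b[Nx] = 1.0, D                 # Dirichlet: u_Nx = D
        ab = np.zeros((3, Nx + 1))               # banded storage
        ab[0, 1:], ab[1, :], ab[2, :-1] = upper[:-1], diag, lower[1:]
        u = solve_banded((1, 1), ab, b)
        if np.max(np.abs(u - u_)) < tol:
            break
    return u

# example: picard_solve(lambda u: 1 + u**2, lambda u: 0.0*u, a=0.0, C=-0.5, D=1.0)
```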
### Mesh with two cells
It helps the understanding of the details to write out all the
mathematics in a specific
case with a small mesh, say just two cells ($N_x=2$). We use $u^{-}_i$
for the $i$-th component in $u^{-}$.
The starting point is the basic expressions for the
nonlinear equations at mesh point $i=0$ and $i=1$:
<!-- Equation labels as ordinary links -->
<div id="nonlin:alglevel:1D:fd:2x2:x0"></div>
$$
\begin{equation}
A_{0,-1}u_{-1} + A_{0,0}u_0 + A_{0,1}u_1 = b_0,
\label{nonlin:alglevel:1D:fd:2x2:x0} \tag{30}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="nonlin:alglevel:1D:fd:2x2:x1"></div>
$$
\begin{equation}
A_{1,0}u_{0} + A_{1,1}u_1 + A_{1,2}u_2 = b_1\thinspace .
\label{nonlin:alglevel:1D:fd:2x2:x1} \tag{31}
\end{equation}
$$
Equation ([30](#nonlin:alglevel:1D:fd:2x2:x0)) written out reads
$$
\begin{align*}
\frac{1}{2\Delta x^2}(& -(\dfc(u_{-1}) + \dfc(u_{0}))u_{-1}\, +\\
& (\dfc(u_{-1}) + 2\dfc(u_{0}) + \dfc(u_{1}))u_0\, -\\
& (\dfc(u_{0}) + \dfc(u_{1}))u_1) + au_0
=f(u_0)\thinspace .
\end{align*}
$$
We must then replace $u_{-1}$ by
([29](#nonlin:alglevel:1D:fd:Neumann:x0:um1)).
With Picard iteration we get
<!-- u_{-1} = u_1 -\frac{2\Delta x}{\dfc(u_0)}C -->
$$
\begin{align*}
\frac{1}{2\Delta x^2}(& -(\dfc(u^-_{-1}) + 2\dfc(u^-_{0})
+ \dfc(u^-_{1}))u_1\, +\\
&(\dfc(u^-_{-1}) + 2\dfc(u^-_{0}) + \dfc(u^-_{1}))u_0)
+ au_0\\
&=f(u^-_0) -
\frac{1}{\dfc(u^-_0)\Delta x}(\dfc(u^-_{-1}) + \dfc(u^-_{0}))C,
\end{align*}
$$
where
$$
u^-_{-1} = u_1^- -\frac{2\Delta x}{\dfc(u^-_0)}C\thinspace .
$$
Equation ([31](#nonlin:alglevel:1D:fd:2x2:x1)) contains the unknown $u_2$
for which we have a Dirichlet condition. In case we omit the
condition as a separate equation, ([31](#nonlin:alglevel:1D:fd:2x2:x1))
with Picard iteration becomes
$$
\begin{align*}
\frac{1}{2\Delta x^2}(&-(\dfc(u^-_{0}) + \dfc(u^-_{1}))u_{0}\, + \\
&(\dfc(u^-_{0}) + 2\dfc(u^-_{1}) + \dfc(u^-_{2}))u_1\, -\\
&(\dfc(u^-_{1}) + \dfc(u^-_{2}))u_2) + au_1
=f(u^-_1)\thinspace .
\end{align*}
$$
We must now move the $u_2$ term to the right-hand side and replace all
occurrences of $u_2$ by $D$:
$$
\begin{align*}
\frac{1}{2\Delta x^2}(&-(\dfc(u^-_{0}) + \dfc(u^-_{1}))u_{0}\, +\\
& (\dfc(u^-_{0}) + 2\dfc(u^-_{1}) + \dfc(D))u_1) + au_1\\
&=f(u^-_1) + \frac{1}{2\Delta x^2}(\dfc(u^-_{1}) + \dfc(D))D\thinspace .
\end{align*}
$$
The two equations can be written as a $2\times 2$ system:
$$
\left(\begin{array}{cc}
B_{0,0}& B_{0,1}\\
B_{1,0} & B_{1,1}
\end{array}\right)
\left(\begin{array}{c}
u_0\\
u_1
\end{array}\right)
=
\left(\begin{array}{c}
d_0\\
d_1
\end{array}\right),
$$
where
<!-- Equation labels as ordinary links -->
<div id="_auto10"></div>
$$
\begin{equation}
B_{0,0} =\frac{1}{2\Delta x^2}(\dfc(u^-_{-1}) + 2\dfc(u^-_{0}) + \dfc(u^-_{1}))
+ a,
\label{_auto10} \tag{32}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="_auto11"></div>
$$
\begin{equation}
B_{0,1} =
-\frac{1}{2\Delta x^2}(\dfc(u^-_{-1}) + 2\dfc(u^-_{0})
+ \dfc(u^-_{1})),
\label{_auto11} \tag{33}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="_auto12"></div>
$$
\begin{equation}
B_{1,0} =
-\frac{1}{2\Delta x^2}(\dfc(u^-_{0}) + \dfc(u^-_{1})),
\label{_auto12} \tag{34}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="_auto13"></div>
$$
\begin{equation}
B_{1,1} =
\frac{1}{2\Delta x^2}(\dfc(u^-_{0}) + 2\dfc(u^-_{1}) + \dfc(D)) + a,
\label{_auto13} \tag{35}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="_auto14"></div>
$$
\begin{equation}
d_0 =
f(u^-_0) -
\frac{1}{\dfc(u^-_0)\Delta x}(\dfc(u^-_{-1}) + \dfc(u^-_{0}))C,
\label{_auto14} \tag{36}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="_auto15"></div>
$$
\begin{equation}
d_1 = f(u^-_1) + \frac{1}{2\Delta x^2}(\dfc(u^-_{1}) + \dfc(D))D\thinspace .
\label{_auto15} \tag{37}
\end{equation}
$$
The system with the Dirichlet condition becomes
$$
\left(\begin{array}{ccc}
B_{0,0}& B_{0,1} & 0\\
B_{1,0} & B_{1,1} & B_{1,2}\\
0 & 0 & 1
\end{array}\right)
\left(\begin{array}{c}
u_0\\
u_1\\
u_2
\end{array}\right)
=
\left(\begin{array}{c}
d_0\\
d_1\\
D
\end{array}\right),
$$
with
<!-- Equation labels as ordinary links -->
<div id="_auto16"></div>
$$
\begin{equation}
B_{1,1} =
\frac{1}{2\Delta x^2}(\dfc(u^-_{0}) + 2\dfc(u^-_{1}) + \dfc(u_2)) + a,
\label{_auto16} \tag{38}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="_auto17"></div>
$$
\begin{equation}
B_{1,2} = -
\frac{1}{2\Delta x^2}(\dfc(u^-_{1}) + \dfc(u_2)),
\label{_auto17} \tag{39}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="_auto18"></div>
$$
\begin{equation}
d_1 = f(u^-_1)\thinspace .
\label{_auto18} \tag{40}
\end{equation}
$$
Other entries are as in the $2\times 2$ system.
### Newton's method
The Jacobian must be derived in order to use Newton's method. Here it means
that we need to differentiate $F(u)=A(u)u - b(u)$ with respect to
the unknown parameters
$u_0,u_1,\ldots,u_m$ ($m=N_x$ or $m=N_x-1$, depending on whether the
Dirichlet condition is included in the nonlinear system $F(u)=0$ or not).
Nonlinear equation number $i$ has the structure
$$
F_i = A_{i,i-1}(u_{i-1},u_i)u_{i-1} +
A_{i,i}(u_{i-1},u_i,u_{i+1})u_i +
A_{i,i+1}(u_i, u_{i+1})u_{i+1} - b_i(u_i)\thinspace .
$$
Computing the Jacobian requires careful differentiation. For example,
$$
\begin{align*}
\frac{\partial}{\partial u_i}(A_{i,i}(u_{i-1},u_i,u_{i+1})u_i) &=
\frac{\partial A_{i,i}}{\partial u_i}u_i + A_{i,i}
\frac{\partial u_i}{\partial u_i}\\
&=
\frac{\partial}{\partial u_i}(
\frac{1}{2\Delta x^2}(\dfc(u_{i-1}) + 2\dfc(u_{i})
+\dfc(u_{i+1})) + a)u_i +\\
&\quad\frac{1}{2\Delta x^2}(\dfc(u_{i-1}) + 2\dfc(u_{i})
+\dfc(u_{i+1})) + a\\
&= \frac{1}{2\Delta x^2}(2\dfc^\prime (u_i)u_i
+\dfc(u_{i-1}) + 2\dfc(u_{i})
+\dfc(u_{i+1})) + a\thinspace .
\end{align*}
$$
The complete Jacobian becomes
$$
\begin{align*}
J_{i,i} &= \frac{\partial F_i}{\partial u_i}
= \frac{\partial A_{i,i-1}}{\partial u_i}u_{i-1}
+ \frac{\partial A_{i,i}}{\partial u_i}u_i
+ A_{i,i}
+ \frac{\partial A_{i,i+1}}{\partial u_i}u_{i+1}
- \frac{\partial b_i}{\partial u_{i}}\\
&=
\frac{1}{2\Delta x^2}(
-\dfc^{\prime}(u_i)u_{i-1}
+2\dfc^{\prime}(u_i)u_{i}
+\dfc(u_{i-1}) + 2\dfc(u_i) + \dfc(u_{i+1})) +\\
&\quad a
-\frac{1}{2\Delta x^2}\dfc^{\prime}(u_{i})u_{i+1}
- b^{\prime}(u_i),\\
J_{i,i-1} &= \frac{\partial F_i}{\partial u_{i-1}}
= \frac{\partial A_{i,i-1}}{\partial u_{i-1}}u_{i-1}
+ A_{i,i-1}
+ \frac{\partial A_{i,i}}{\partial u_{i-1}}u_i
- \frac{\partial b_i}{\partial u_{i-1}}\\
&=
\frac{1}{2\Delta x^2}(
-\dfc^{\prime}(u_{i-1})u_{i-1} - (\dfc(u_{i-1}) + \dfc(u_i))
+ \dfc^{\prime}(u_{i-1})u_i),\\
J_{i,i+1} &= \frac{\partial F_i}{\partial u_{i+1}}
= \frac{\partial A_{i,i+1}}{\partial u_{i+1}}u_{i+1}
+ A_{i,i+1} +
\frac{\partial A_{i,i}}{\partial u_{i+1}}u_i
- \frac{\partial b_i}{\partial u_{i+1}}\\
&=\frac{1}{2\Delta x^2}(
-\dfc^{\prime}(u_{i+1})u_{i+1} - (\dfc(u_{i}) + \dfc(u_{i+1}))
+ \dfc^{\prime}(u_{i+1})u_i)
\thinspace .
\end{align*}
$$
The explicit expression for nonlinear equation number $i$,
$F_i(u_0,u_1,\ldots)$, arises from moving the $f(u_i)$ term in
([27](#nonlin:alglevel:1D:fd:deq)) to the left-hand side:
$$
F_i = -\frac{1}{2\Delta x^2}
\left((\dfc(u_i)+\dfc(u_{i+1}))(u_{i+1}-u_i) -
(\dfc(u_{i-1})+\dfc(u_{i}))(u_{i}-u_{i-1})\right)\nonumber
$$
<!-- Equation labels as ordinary links -->
<div id="nonlin:alglevel:1D:fd:deq2"></div>
$$
\begin{equation}
\qquad\qquad + au_i - f(u_i) = 0\thinspace .
\label{nonlin:alglevel:1D:fd:deq2} \tag{41}
\end{equation}
$$
At the boundary point $i=0$, $u_{-1}$ must be replaced using
the formula ([29](#nonlin:alglevel:1D:fd:Neumann:x0:um1)).
When the Dirichlet condition at $i=N_x$ is not a part of the
equation system, the last equation $F_m=0$ for $m=N_x-1$
involves the quantity $u_{N_x}$, which must be replaced by $D$.
If $u_{N_x}$ is treated as an unknown in the system, the
last equation $F_m=0$ has $m=N_x$ and reads
$$
F_{N_x}(u_0,\ldots,u_{N_x}) = u_{N_x} - D = 0\thinspace .
$$
Similar replacement of $u_{-1}$ and $u_{N_x}$ must be done in
the Jacobian for the first and last row. When $u_{N_x}$
is included as an unknown, the last row in the Jacobian
must help implement the condition $\delta u_{N_x}=0$, since
we assume that $u$ contains the right Dirichlet value
at the beginning of the iteration ($u_{N_x}=D$), and then
the Newton update should be zero for $i=N_x$, i.e., $\delta u_{N_x}=0$.
This also forces the right-hand side to be $b_i=0$, $i=N_x$.
We have seen, and can see from the present example, that the
linear system in Newton's method contains all the terms present
in the system that arises in the Picard iteration method.
The extra terms in Newton's method can be multiplied by a factor
such that it is easy to program one linear system and set this
factor to 0 or 1 to generate the Picard or Newton system.
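A sketch of this idea for the interior equations ([41](#nonlin:alglevel:1D:fd:deq2)), with a factor `gamma` multiplying the derivative terms: $\gamma=0$ reproduces the Picard system exactly, since $A(u^{-})\delta u = -(A(u^{-})u^{-} - b(u^{-}))$ implies $A(u^{-})u = b(u^{-})$. The $i=0$ row below is a placeholder for the homogeneous Neumann case $C=0$; the general case follows the elimination above. All names are ours:

```python
import numpy as np
from scipy.linalg import solve_banded

def newton_update(u, alpha, dalpha, f, df, a, dx, D, gamma=1.0):
    """One Newton (gamma=1) or Picard (gamma=0) update for F(u) = 0.
    dalpha and df are the derivatives of alpha and f (assumed supplied)."""
    Nx = len(u) - 1
    F = np.zeros(Nx + 1)
    J = np.zeros((3, Nx + 1))       # banded Jacobian for solve_banded
    al = alpha(u)
    for i in range(1, Nx):
        F[i] = (-0.5*((al[i] + al[i+1])*(u[i+1] - u[i])
                      - (al[i-1] + al[i])*(u[i] - u[i-1]))/dx**2
                + a*u[i] - f(u[i]))
        # J[1,i] = dF_i/du_i, J[0,i+1] = dF_i/du_{i+1}, J[2,i-1] = dF_i/du_{i-1}
        J[1, i] = (0.5*(al[i-1] + 2*al[i] + al[i+1])/dx**2 + a
                   + gamma*(0.5*dalpha(u[i])*(2*u[i] - u[i-1] - u[i+1])/dx**2
                            - df(u[i])))
        J[0, i+1] = (-0.5*(al[i] + al[i+1])/dx**2
                     + gamma*0.5*dalpha(u[i+1])*(u[i] - u[i+1])/dx**2)
        J[2, i-1] = (-0.5*(al[i-1] + al[i])/dx**2
                     + gamma*0.5*dalpha(u[i-1])*(u[i] - u[i-1])/dx**2)
    F[Nx] = u[Nx] - D               # Dirichlet equation: delta u_Nx = 0
    J[1, Nx] = 1.0
    F[0] = u[0] - u[1]              # placeholder symmetry row for C = 0
    J[1, 0], J[0, 1] = 1.0, -1.0
    du = solve_banded((1, 1), J, -F)
    return u + du
```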
<!-- Remark: Neumann cond at x=L and Dirichlet at x=0 leads to different -->
<!-- numbering of unknowns and u at mesh points. Must address this -->
<!-- in a remark and treat it properly in diffu. -->
# Job Shop Scheduling
**Implementation Note:** The following cell specifies the solver used in the subsequent calculations. Some of these problems can become quite large, so a commercial solver such as `gurobi` is considerably faster; with `glpk` the calculations may take much longer (and the benchmark problem will not solve at all). The cell below installs open-source solvers and selects `cbc`. If you do have the `gurobi` solver, uncomment the corresponding line and edit the location of the executable to match the location on your computer.
```
%%capture
!pip install -q pyomo
!apt-get install -y -qq glpk-utils
!apt-get install -y -qq coinor-cbc
```
```
from pyomo.environ import *
from pyomo.gdp import *
#solver = SolverFactory('glpk')
solver = SolverFactory('cbc', executable='/usr/bin/cbc')
#solver = SolverFactory('gurobi', executable='/usr/local/bin/gurobi.sh')
```
## Contents
* [Background](#Background)
* [Job Shop Example](#JobShopExample)
* [Task Decomposition](#TaskDecomposition)
* [Model Formulation](#ModelFormulation)
* [Pyomo Implementation](#PyomoImplementation)
* [Displaying a Solution](#DisplayingSolution)
* [Visualzing Results using Gantt Charts](#Visualization)
* [Appication to Scheduling of Batch Processes](#BatchProcesses)
* [Single Product Strategies](#SingleProduct)
* [Overlapping Tasks](#OverlappingTasks)
* [Unit Cleanout](#UnitCleanout)
* [Zero-Wait Policy](#ZeroWait)
* [Benchmark Problem LA19](#Benchmark)
<a id="Background"></a>
## Background
A job shop consists of a set of distinct machines that process jobs. Each job is a series of tasks that require use of particular machines for known durations, and which must be completed in specified order. The job shop scheduling problem is to schedule the jobs on the machines to minimize the time necessary to process all jobs (i.e, the makespan) or some other metric of productivity. Job shop scheduling is one of the classic problems in Operations Research.
Data consists of two tables. The first table is decomposition of the jobs into a series of tasks. Each task lists a job name, name of the required machine, and task duration. The second table list task pairs where the first task must be completed before the second task can be started. This formulation is quite general, but can also specify situations with no feasible solutions.
<a id="JobShopExample"></a>
## Job Shop Example
The following example of a job shop is from Christelle Gueret, Christian Prins, Marc Sevaux, "Applications of Optimization with Xpress-MP," Dash Optimization, 2000.
In this example, there are three printed paper products that must pass through color printing presses in a particular order. The given data consists of a flowsheet showing the order in which each job passes through the color presses
and a table of data showing, in minutes, the amount of time each job requires on each machine.
| Machine | Color | Paper 1 | Paper 2 | Paper 3 |
| :-----: | :---: | :-----: | :-----: | :-----: |
| 1 | Blue | 45 | 20 | 12 |
| 2 | Green | - | 10 | 17 |
| 3 | Yellow| 10 | 34 | 28 |
What is the minimum amount of time (i.e., what is the makespan) for this set of jobs?
<a id="TaskDecomposition"></a>
## Task Decomposition
The first step in the analysis is to decompose the process into a series of tasks. Each task is a (job,machine) pair. Some tasks cannot start until a prerequisite task is completed.
| Task (Job,Machine) | Duration | Prerequisite Task |
| :----------------: | :------: | :---------------: |
| (Paper 1, Blue) | 45 | - |
| (Paper 1, Yellow) | 10 | (Paper 1,Blue) |
| (Paper 2, Blue) | 20 | (Paper 2, Green) |
| (Paper 2, Green) | 10 | - |
| (Paper 2, Yellow) | 34 | (Paper 2, Blue) |
| (Paper 3, Blue) | 12 | (Paper 3, Yellow) |
| (Paper 3, Green) | 17 | (Paper 3, Blue) |
| (Paper 3, Yellow) | 28 | - |
We convert this to a JSON-style representation where tasks are denoted by (Job, Machine) tuples in Python. The task data is stored in a Python dictionary indexed by (Job, Machine) tuples. Each entry consists of a dictionary with the duration ('dur') and the (Job, Machine) pair of any prerequisite task ('prec').
```
TASKS = {
('Paper_1','Blue') : {'dur': 45, 'prec': None},
('Paper_1','Yellow') : {'dur': 10, 'prec': ('Paper_1','Blue')},
('Paper_2','Blue') : {'dur': 20, 'prec': ('Paper_2','Green')},
('Paper_2','Green') : {'dur': 10, 'prec': None},
('Paper_2','Yellow') : {'dur': 34, 'prec': ('Paper_2','Blue')},
('Paper_3','Blue') : {'dur': 12, 'prec': ('Paper_3','Yellow')},
('Paper_3','Green') : {'dur': 17, 'prec': ('Paper_3','Blue')},
('Paper_3','Yellow') : {'dur': 28, 'prec': None},
}
```
<a id="ModelFormulation"></a>
## Model Formulation
Each task is represented as an ordered pair $(j,m)$ where $j$ is a job, and $m$ is a machine.
| Parameter | Description |
| :-------- | :-----------|
| $\text{dur}_{j,m}$ | Duration of task $(j,m)$ |
| $\text{prec}_{j,m}$ | A task $(k,n) = \text{Prec}_{j,m}$ that must be completed before task $(j,m)$|
| Decision Variables | Description |
| :-------- | :-----------|
| $\text{makespan}$ | Completion of all jobs |
| $\text{start}_{j,m}$ | Start time for task $(j,m)$ |
| $y_{j,k,m}$ | boolean variable used to order tasks $(j,m)$ and $(k,m)$ on machine $m$, where $j < k$ |
Upper and lower bounds on the start and completion of task $(j,m)$
\begin{align}
\text{start}_{j,m} & \geq 0\\
\text{start}_{j,m}+\text{Dur}_{j,m} & \leq \text{makespan}
\end{align}
Satisfying prerequisite tasks
\begin{align}
\text{start}_{k,n}+\text{Dur}_{k,n}\leq\text{start}_{j,m}\ \ \ \ \text{for } (k,n) =\text{Prec}_{j,m}
\end{align}
Disjunctive Constraints
If $M$ is big enough, then satisfying
\begin{align}
\text{start}_{j,m}+\text{Dur}_{j,m} & \leq \text{start}_{k,m}+M(1 - y_{j,k,m})\\
\text{start}_{k,m}+\text{Dur}_{k,m} & \leq \text{start}_{j,m}+My_{j,k,m}
\end{align}
avoids conflicts for use of the same machine.
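For reference, the same pair of constraints can be written out explicitly in Pyomo. The sketch below is an illustration only: the helper name `add_bigm_disjunctions` is hypothetical, and taking $M$ as the sum of all task durations is an assumption that is always large enough. The implementation below instead uses Pyomo's `Disjunction` construct with the `gdp.chull` transformation.
```
# Hypothetical helper: an explicit big-M version of the machine-conflict constraints.
# Assumption: M = sum of all task durations is a valid big-M for this model.
from pyomo.environ import Var, Constraint, Binary

def add_bigm_disjunctions(model, TASKS, M):
    model.y = Var(model.DISJUNCTIONS, domain=Binary)
    model.no_clash_a = Constraint(model.DISJUNCTIONS, rule=lambda mdl, j, k, m:
        mdl.start[j, m] + TASKS[(j, m)]['dur'] <= mdl.start[k, m] + M*(1 - mdl.y[j, k, m]))
    model.no_clash_b = Constraint(model.DISJUNCTIONS, rule=lambda mdl, j, k, m:
        mdl.start[k, m] + TASKS[(k, m)]['dur'] <= mdl.start[j, m] + M*mdl.y[j, k, m])
```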
<a id="PyomoImplementation"></a>
## Pyomo Implementation
The job shop scheduling problem is implemented below in Pyomo. The implementation consists of a function JobShop(TASKS) that accepts a dictionary of tasks and returns a list of dictionaries describing an optimal schedule of tasks. Later versions of the function below add optional arguments for unit cleanout times and a zero-wait policy.
```
def JobShop(TASKS):
model = ConcreteModel()
model.TASKS = Set(initialize=TASKS.keys(), dimen=2)
model.JOBS = Set(initialize=set([j for (j,m) in TASKS.keys()]))
model.MACHINES = Set(initialize=set([m for (j,m) in TASKS.keys()]))
model.TASKORDER = Set(initialize = model.TASKS * model.TASKS, dimen=4,
filter = lambda model,j,m,k,n: (k,n) == TASKS[(j,m)]['prec'])
model.DISJUNCTIONS = Set(initialize=model.JOBS * model.JOBS * model.MACHINES, dimen=3,
filter = lambda model,j,k,m: j < k and (j,m) in model.TASKS and (k,m) in model.TASKS)
t_max = sum([TASKS[(j,m)]['dur'] for (j,m) in TASKS.keys()])
model.makespan = Var(bounds=(0, t_max))
model.start = Var(model.TASKS, bounds=(0, t_max))
model.obj = Objective(expr = model.makespan, sense = minimize)
model.fini = Constraint(model.TASKS, rule=lambda model,j,m:
model.start[j,m] + TASKS[(j,m)]['dur'] <= model.makespan)
model.prec = Constraint(model.TASKORDER, rule=lambda model,j,m,k,n:
model.start[k,n] + TASKS[(k,n)]['dur'] <= model.start[j,m])
model.disj = Disjunction(model.DISJUNCTIONS, rule=lambda model,j,k,m:
[model.start[j,m] + TASKS[(j,m)]['dur'] <= model.start[k,m],
model.start[k,m] + TASKS[(k,m)]['dur'] <= model.start[j,m]])
TransformationFactory('gdp.chull').apply_to(model)
solver.solve(model)
results = [{'Job': j,
'Machine': m,
'Start': model.start[j, m](),
'Duration': TASKS[(j, m)]['dur'],
'Finish': model.start[j, m]() + TASKS[(j, m)]['dur']}
for j,m in model.TASKS]
return results
results = JobShop(TASKS)
results
```
[{'Duration': 1.5,
'Finish': 25.5,
'Job': 'C1',
'Machine': 'Packaging',
'Start': 24.0},
{'Duration': 1.5,
'Finish': 24.0,
'Job': 'A2',
'Machine': 'Packaging',
'Start': 22.5},
{'Duration': 4,
'Finish': 18.5,
'Job': 'A1',
'Machine': 'Separator',
'Start': 14.5},
{'Duration': 5,
'Finish': 5.0,
'Job': 'C1',
'Machine': 'Separator',
'Start': 0.0},
{'Duration': 1,
'Finish': 28.0,
'Job': 'B2',
'Machine': 'Packaging',
'Start': 27.0},
{'Duration': 1.5,
'Finish': 19.0,
'Job': 'C2',
'Machine': 'Packaging',
'Start': 17.5},
{'Duration': 3,
'Finish': 17.5,
'Job': 'C2',
'Machine': 'Reactor',
'Start': 14.5},
{'Duration': 1, 'Finish': 1.0, 'Job': 'A1', 'Machine': 'Mixer', 'Start': 0.0},
{'Duration': 1, 'Finish': 2.0, 'Job': 'A2', 'Machine': 'Mixer', 'Start': 1.0},
{'Duration': 5,
'Finish': 11.0,
'Job': 'A2',
'Machine': 'Reactor',
'Start': 6.0},
{'Duration': 1,
'Finish': 21.5,
'Job': 'B1',
'Machine': 'Packaging',
'Start': 20.5},
{'Duration': 3,
'Finish': 20.5,
'Job': 'C1',
'Machine': 'Reactor',
'Start': 17.5},
{'Duration': 5,
'Finish': 14.5,
'Job': 'C2',
'Machine': 'Separator',
'Start': 9.5},
{'Duration': 4.5,
'Finish': 27.0,
'Job': 'B2',
'Machine': 'Separator',
'Start': 22.5},
{'Duration': 1.5,
'Finish': 20.5,
'Job': 'A1',
'Machine': 'Packaging',
'Start': 19.0},
{'Duration': 5,
'Finish': 6.0,
'Job': 'A1',
'Machine': 'Reactor',
'Start': 1.0},
{'Duration': 4,
'Finish': 22.5,
'Job': 'A2',
'Machine': 'Separator',
'Start': 18.5},
{'Duration': 4.5,
'Finish': 9.5,
'Job': 'B1',
'Machine': 'Separator',
'Start': 5.0}]
<a id="DisplayingSolution"></a>
## Printing Schedules
```
import pandas as pd
schedule = pd.DataFrame(results)
print('\nSchedule by Job')
print(schedule.sort_values(by=['Job','Start']).set_index(['Job', 'Machine']))
print('\nSchedule by Machine')
print(schedule.sort_values(by=['Machine','Start']).set_index(['Machine', 'Job']))
```
Schedule by Job
Duration Finish Start
Job Machine
A1 Mixer 1.0 1.0 0.0
Reactor 5.0 6.0 1.0
Separator 4.0 18.5 14.5
Packaging 1.5 20.5 19.0
A2 Mixer 1.0 2.0 1.0
Reactor 5.0 11.0 6.0
Separator 4.0 22.5 18.5
Packaging 1.5 24.0 22.5
B1 Separator 4.5 9.5 5.0
Packaging 1.0 21.5 20.5
B2 Separator 4.5 27.0 22.5
Packaging 1.0 28.0 27.0
C1 Separator 5.0 5.0 0.0
Reactor 3.0 20.5 17.5
Packaging 1.5 25.5 24.0
C2 Separator 5.0 14.5 9.5
Reactor 3.0 17.5 14.5
Packaging 1.5 19.0 17.5
Schedule by Machine
Duration Finish Start
Machine Job
Mixer A1 1.0 1.0 0.0
A2 1.0 2.0 1.0
Packaging C2 1.5 19.0 17.5
A1 1.5 20.5 19.0
B1 1.0 21.5 20.5
A2 1.5 24.0 22.5
C1 1.5 25.5 24.0
B2 1.0 28.0 27.0
Reactor A1 5.0 6.0 1.0
A2 5.0 11.0 6.0
C2 3.0 17.5 14.5
C1 3.0 20.5 17.5
Separator C1 5.0 5.0 0.0
B1 4.5 9.5 5.0
C2 5.0 14.5 9.5
A1 4.0 18.5 14.5
A2 4.0 22.5 18.5
B2 4.5 27.0 22.5
<a id="Visualization"></a>
## Visualizing Results with Gantt Charts
```
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib as mpl
import pandas as pd
def Visualize(results):
schedule = pd.DataFrame(results)
JOBS = list(schedule['Job'].unique())
MACHINES = list(schedule['Machine'].unique())
makespan = schedule['Finish'].max()
schedule.sort_values(by=['Job','Start'])
schedule.set_index(['Job', 'Machine'], inplace=True)
plt.figure(figsize=(12, 5 + (len(JOBS)+len(MACHINES))/4))
plt.subplot(2,1,1)
jdx = 0
for j in sorted(JOBS):
jdx += 1
mdx = 0
for m in MACHINES:
mdx += 1
c = mpl.cm.Dark2.colors[mdx%7]
if (j,m) in schedule.index:
plt.plot([schedule.loc[(j,m),'Start'],schedule.loc[(j,m),'Finish']],
[jdx,jdx],color = c,alpha=1.0,lw=25,solid_capstyle='butt')
plt.text((schedule.loc[(j,m),'Start'] + schedule.loc[(j,m),'Finish'])/2.0,jdx,
m, color='white', weight='bold',
horizontalalignment='center', verticalalignment='center')
plt.ylim(0.5,jdx+0.5)
plt.title('Job Schedule')
plt.gca().set_yticks(range(1,1+len(JOBS)))
plt.gca().set_yticklabels(sorted(JOBS))
plt.plot([makespan,makespan],plt.ylim(),'r--')
plt.text(makespan,plt.ylim()[0]-0.2,str(round(makespan,2)),
horizontalalignment='center', verticalalignment='top')
plt.xlabel('Time')
plt.ylabel('Jobs')
plt.subplot(2,1,2)
mdx = 0
for m in sorted(MACHINES):
mdx += 1
jdx = 0
for j in JOBS:
jdx += 1
c = mpl.cm.Dark2.colors[jdx%7]
if (j,m) in schedule.index:
plt.plot([schedule.loc[(j,m),'Start'],schedule.loc[(j,m),'Finish']],
[mdx,mdx],color = c,alpha=1.0,lw=25,solid_capstyle='butt')
plt.text((schedule.loc[(j,m),'Start'] + schedule.loc[(j,m),'Finish'])/2.0,mdx,
j, color='white', weight='bold',
horizontalalignment='center', verticalalignment='center')
plt.ylim(0.5,mdx+0.5)
plt.title('Machine Schedule')
plt.gca().set_yticks(range(1,1+len(MACHINES)))
plt.gca().set_yticklabels(sorted(MACHINES))
plt.plot([makespan,makespan],plt.ylim(),'r--')
plt.text(makespan,plt.ylim()[0]-0.2,str(round(makespan,2)),
horizontalalignment='center', verticalalignment='top')
plt.xlabel('Time')
plt.ylabel('Machines')
plt.tight_layout()
Visualize(results)
```
<a id="BatchProcesses"></a>
## Application to Scheduling of Batch Processes
We will now turn our attention to the application of the job shop scheduling problem to the short term scheduling of batch processes. We illustrate these techniques using an example from Dunn (2013).
| Process | Mixer | Reactor | Separator | Packaging |
| :-----: | :---: | :-----: | :-------: | :-------: |
| A | 1.0 | 5.0 | 4.0 | 1.5 |
| B | - | - | 4.5 | 1.0 |
| C | - | 3.0 | 5.0 | 1.5 |
<a id="SingleProduct"></a>
## Single Product Strategies
Before going further, we create a function to streamline the generation of the TASKS dictionary.
```
def Recipe(jobs,machines,durations):
TASKS = {}
for j in jobs:
prec = (None,None)
for m,d in zip(machines,durations):
task = (j,m)
if prec == (None,None):
TASKS.update({(j,m): {'dur': d, 'prec': None}})
else:
TASKS.update({(j,m): {'dur': d, 'prec': prec}})
prec = task
return TASKS
RecipeA = Recipe('A',['Mixer','Reactor','Separator','Packaging'],[1,5,4,1.5])
RecipeB = Recipe('B',['Separator','Packaging'],[4.5,1])
RecipeC = Recipe('C',['Separator','Reactor','Packaging'],[5,3,1.5])
Visualize(JobShop(RecipeA))
```
```
Visualize(JobShop(RecipeB))
```
```
Visualize(JobShop(RecipeC))
```
<a id="OverlappingTasks"></a>
## Overlapping Tasks
Let's now consider an optimal scheduling problem where we wish to make two batches of Product A.
```
TASKS = Recipe(['A1','A2'],['Mixer','Reactor','Separator','Packaging'],[1,5,4,1.5])
results = JobShop(TASKS)
Visualize(results)
print("Makespan =", max([task['Finish'] for task in results]))
```
Earlier we found it took 11.5 hours to produce one batch of product A. As we see here, we can produce a second batch with only 5.0 additional hours because some of the tasks overlap. The overlapping of tasks is the key to gaining efficiency in batch processing facilities.
Let's next consider production of a single batch each of products A, B, and C.
```
TASKS = RecipeA
TASKS.update(RecipeB)
TASKS.update(RecipeC)
results = JobShop(TASKS)
Visualize(results)
print("Makespan =", max([task['Finish'] for task in results]))
```
The individual production of A, B, and C required 11.5, 5.5, and 9.5 hours, respectively, for a total of 25.5 hours. As we see here, by scheduling the production simultaneously, we can get all three batches done in just 15 hours.
As we see below, each additional set of three products takes an additional 13 hours. So there is considerable efficiency to be gained by scheduling over longer intervals whenever possible.
```
TASKS = Recipe(['A1','A2'],['Mixer','Reactor','Separator','Packaging'],[1,5,4,1.5])
TASKS.update(Recipe(['B1','B2'],['Separator','Packaging'],[4.5,1]))
TASKS.update(Recipe(['C1','C2'],['Separator','Reactor','Packaging'],[5,3,1.5]))
results = JobShop(TASKS)
Visualize(results)
print("Makespan =", max([task['Finish'] for task in results]))
```
<a id="UnitCleanout"></a>
## Unit Cleanout
A common feature in batch unit operations is a requirement that equipment be cleaned prior to reuse.
In most cases the time needed for cleanout would be equipment and product specific. But for the purposes of illustration, we implement this policy with a single non-negative parameter $t_{clean} \geq 0$ which, if specified, requires a period no less than $t_{clean}$ between the finish of one task and the start of another on every piece of equipment.
This is implemented by modifying the usual disjunctive constraints to avoid machine conflicts, i.e.,
\begin{align}
\text{start}_{j,m}+\text{Dur}_{j,m} & \leq \text{start}_{k,m}+M(1 - y_{j,k,m})\\
\text{start}_{k,m}+\text{Dur}_{k,m} & \leq \text{start}_{j,m}+My_{j,k,m}
\end{align}
to read
\begin{align}
\text{start}_{j,m}+\text{Dur}_{j,m} + t_{clean} & \leq \text{start}_{k,m}+M(1 - y_{j,k,m})\\
\text{start}_{k,m}+\text{Dur}_{k,m} + t_{clean} & \leq \text{start}_{j,m}+My_{j,k,m}
\end{align}
for sufficiently large $M$.
```
def JobShop(TASKS, tclean=0):
model = ConcreteModel()
model.TASKS = Set(initialize=TASKS.keys(), dimen=2)
model.JOBS = Set(initialize=set([j for (j,m) in TASKS.keys()]))
model.MACHINES = Set(initialize=set([m for (j,m) in TASKS.keys()]))
model.TASKORDER = Set(initialize = model.TASKS * model.TASKS, dimen=4,
filter = lambda model,j,m,k,n: (k,n) == TASKS[(j,m)]['prec'])
model.DISJUNCTIONS = Set(initialize=model.JOBS * model.JOBS * model.MACHINES, dimen=3,
filter = lambda model,j,k,m: j < k and (j,m) in model.TASKS and (k,m) in model.TASKS)
t_max = sum([TASKS[(j,m)]['dur'] for (j,m) in TASKS.keys()])
model.makespan = Var(bounds=(0, t_max))
model.start = Var(model.TASKS, bounds=(0, t_max))
model.obj = Objective(expr = model.makespan, sense = minimize)
model.fini = Constraint(model.TASKS, rule=lambda model,j,m:
model.start[j,m] + TASKS[(j,m)]['dur'] <= model.makespan)
model.prec = Constraint(model.TASKORDER, rule=lambda model,j,m,k,n:
model.start[k,n] + TASKS[(k,n)]['dur'] <= model.start[j,m])
model.disj = Disjunction(model.DISJUNCTIONS, rule=lambda model,j,k,m:
[model.start[j,m] + TASKS[(j,m)]['dur'] + tclean <= model.start[k,m],
model.start[k,m] + TASKS[(k,m)]['dur'] + tclean <= model.start[j,m]])
TransformationFactory('gdp.chull').apply_to(model)
solver.solve(model)
results = [{'Job': j,
'Machine': m,
'Start': model.start[j, m](),
'Duration': TASKS[(j, m)]['dur'],
'Finish': model.start[j, m]() + TASKS[(j, m)]['dur']}
for j,m in model.TASKS]
return results
results = JobShop(TASKS, tclean=0.5)
Visualize(results)
print("Makespan =", max([task['Finish'] for task in results]))
```
<a id="ZeroWait"></a>
## Zero Wait Policy
One of the issues in applying job shop scheduling to batch processing is the situation where it isn't possible to store intermediate materials. If there is no way to store intermediates, either in the processing equipment or in external vessels, then a **zero-wait** policy may be appropriate.
A zero-wait policy requires subsequent processing machines to be available immediately upon completion of any task. To implement this policy, the usual precedence sequencing constraint of a job shop scheduling problem, i.e.,
\begin{align*}
\text{start}_{k,n}+\text{Dur}_{k,n} \leq \text{start}_{j,m}\ \ \ \ \text{for } (k,n) =\text{Prec}_{j,m}
\end{align*}
is changed to
\begin{align*}
\text{start}_{k,n}+\text{Dur}_{k,n} = \text{start}_{j,m}\ \ \ \ \text{for } (k,n) =\text{Prec}_{j,m}\text{ and ZW is True}
\end{align*}
if the zero-wait policy is in effect.
While this could be implemented on an equipment or product specific basis, here we add an optional ZW flag to the JobShop function that, by default, is set to False.
```
def JobShop(TASKS, tclean=0, ZW=False):
model = ConcreteModel()
model.TASKS = Set(initialize=TASKS.keys(), dimen=2)
model.JOBS = Set(initialize=set([j for (j,m) in TASKS.keys()]))
model.MACHINES = Set(initialize=set([m for (j,m) in TASKS.keys()]))
model.TASKORDER = Set(initialize = model.TASKS * model.TASKS, dimen=4,
filter = lambda model,j,m,k,n: (k,n) == TASKS[(j,m)]['prec'])
model.DISJUNCTIONS = Set(initialize=model.JOBS * model.JOBS * model.MACHINES, dimen=3,
filter = lambda model,j,k,m: j < k and (j,m) in model.TASKS and (k,m) in model.TASKS)
t_max = sum([TASKS[(j,m)]['dur'] for (j,m) in TASKS.keys()])
model.makespan = Var(bounds=(0, t_max))
model.start = Var(model.TASKS, bounds=(0, t_max))
model.obj = Objective(expr = model.makespan, sense = minimize)
model.fini = Constraint(model.TASKS, rule=lambda model,j,m:
model.start[j,m] + TASKS[(j,m)]['dur'] <= model.makespan)
if ZW:
model.prec = Constraint(model.TASKORDER, rule=lambda model,j,m,k,n:
model.start[k,n] + TASKS[(k,n)]['dur'] == model.start[j,m])
else:
model.prec = Constraint(model.TASKORDER, rule=lambda model,j,m,k,n:
model.start[k,n] + TASKS[(k,n)]['dur'] <= model.start[j,m])
model.disj = Disjunction(model.DISJUNCTIONS, rule=lambda model,j,k,m:
[model.start[j,m] + TASKS[(j,m)]['dur'] + tclean <= model.start[k,m],
model.start[k,m] + TASKS[(k,m)]['dur'] + tclean <= model.start[j,m]])
TransformationFactory('gdp.chull').apply_to(model)
solver.solve(model)
results = [{'Job': j,
'Machine': m,
'Start': model.start[j, m](),
'Duration': TASKS[(j, m)]['dur'],
'Finish': model.start[j, m]() + TASKS[(j, m)]['dur']}
for j,m in model.TASKS]
return results
results = JobShop(TASKS, tclean=0.5, ZW=True)
Visualize(results)
print("Makespan =", max([task['Finish'] for task in results]))
```
<a id="Benchmark"></a>
## Benchmark Problems
The file `jobshop1.txt` (available [here](http://people.brunel.ac.uk/~mastjjb/jeb/orlib/files/jobshop1.txt)) is a well known collection of 82 benchmark problems for job shop scheduling. The data format for each example consists of a single line for each job. The data on each line is a sequence of (machine number, time) pairs showing the order in which machines process each job.
LA19 is a benchmark problem for job shop scheduling introduced by Lawrence in 1984, and a solution presented by Cook and Applegate in 1991. The following cell may take many minutes to hours to run, depending on the choice of solver and hardware.
```
data = """
2 44 3 5 5 58 4 97 0 9 7 84 8 77 9 96 1 58 6 89
4 15 7 31 1 87 8 57 0 77 3 85 2 81 5 39 9 73 6 21
9 82 6 22 4 10 3 70 1 49 0 40 8 34 2 48 7 80 5 71
1 91 2 17 7 62 5 75 8 47 4 11 3 7 6 72 9 35 0 55
6 71 1 90 3 75 0 64 2 94 8 15 4 12 7 67 9 20 5 50
7 70 5 93 8 77 2 29 4 58 6 93 3 68 1 57 9 7 0 52
6 87 1 63 4 26 5 6 2 82 3 27 7 56 8 48 9 36 0 95
0 36 5 15 8 41 9 78 3 76 6 84 4 30 7 76 2 36 1 8
5 88 2 81 3 13 6 82 4 54 7 13 8 29 9 40 1 78 0 75
9 88 4 54 6 64 7 32 0 52 2 6 8 54 5 82 3 6 1 26
"""
TASKS = {}
prec = ''
lines = data.splitlines()
job= 0
for line in lines[1:]:
j = "J{0:1d}".format(job)
nums = line.split()
prec = ''
for m,dur in zip(nums[::2],nums[1::2]):
task = (j,'M{0:s}'.format(m))
if prec:
TASKS[task] = {'dur':int(dur), 'prec':prec}
else:
TASKS[task] = {'dur':int(dur), 'prec':None}
prec = task
job += 1
Visualize(JobShop(TASKS))
```
### Recalculate Benchmark Problem with a Zero-Wait Policy
The following calculation is quite intensive and will take several minutes to finish with the `gurobi` solver.
```
Visualize(JobShop(TASKS, ZW=True))
```
# About
Code snippet for finding the closest image dimensions for which cuFFT uses the faster Cooley-Tukey implementation (as opposed to Bluestein's algorithm).
See https://docs.nvidia.com/cuda/cufft/index.html
Quote from there:
_Algorithms highly optimized for input sizes that can be written in the form $2^a \times 3^b \times 5^c \times 7^d$. In general the smaller the prime factor, the better the performance, i.e., powers of two are fastest._
Volker dot Hilsenstein at monash dot edu, April 2019
# Requirements
This notebook requires the following additional packages. Use `pip` or `conda` to install them.
* `sympy`
* `numpy`
* `tqdm`
# Implementation
```python
from sympy.ntheory import factorint
import warnings
import numpy as np
def is_optimal_for_cuFFT(n: int, allowed_factors) -> bool:
factorization = factorint(n)
    if len(factorization) == 0: # factorint(1) returns empty dict
return False
factors = set(factorization.keys())
return factors.issubset(set(allowed_factors))
def _closest_optimal(n: int, search_next_largest: bool, allowed_factors) -> int:
while(not is_optimal_for_cuFFT(n, allowed_factors) and n>=1):
if search_next_largest:
n += 1
else:
n -= 1
# edge case: decreasing search with start value smaller than allowed factor
if n < min(allowed_factors):
warnings.warn(f"{n}One provided dimension is smaller than smallest allowed factor and search direction is decreasing")
return(min(allowed_factors))
return n
def closest_optimal(n, search_next_largest: bool=True, allowed_factors=(2,3,5,7)):
""" Finds closest optimal array dimensions for cuFFT
Parameters
----------
n : iterable of integers
Input dimensions
search_next_largest : bool
if True (default) search closest optimal dimensions that are larger or equal to original
otherwise look for smaller ones.
    allowed_factors : tuple of integers
allowed factors in decomposition. Defaults to (2,3,5,7) which are the factors listed in
the cuFFT documentation.
Returns
-------
np.array of ints
optimal dimensions for cuFFT
See also
--------
https://docs.nvidia.com/cuda/cufft/index.html
"""
n = np.asarray(n)
scalar_input = False
if n.ndim == 0:
n = n[None]
scalar_input = True
ret = np.array([_closest_optimal(ni, search_next_largest, allowed_factors) for ni in n])
if scalar_input:
return ret[0]
return ret
```
# Examples
```python
# Simple case, single number
closest_optimal(123)
```
125
```python
# find a smaller optimal dimension
closest_optimal(123, search_next_largest=False)
```
120
```python
# don't allow all factors
closest_optimal(123, search_next_largest=False, allowed_factors=(2,3))
```
108
```python
# only allow a single factor
# use a comma to make it a tuple, otherwise it will throw an error!
closest_optimal(123, search_next_largest=False, allowed_factors=(2,))
```
64
```python
# apply to multiple dimensions
closest_optimal((123, 23, 615))
```
array([125, 24, 625])
```python
# edge case, one dimension smaller than smallest factor and decreasing search should generate a warning
closest_optimal((1, 23, 615), search_next_largest=False)
```
c:\users\volker\anaconda3\envs\spimenv\lib\site-packages\ipykernel_launcher.py:21: UserWarning: 0One provided dimension is smaller than smallest allowed factor and search direction is decreasing
array([ 2, 21, 600])
```python
# one dimension smaller than smallest factor and increasing search should not generate a warning
closest_optimal((1, 23, 615))
```
array([ 2, 24, 625])
# Todo
* could allow `search_next_largest` to be an iterable of bools, to apply different strategies (rounding up/rounding down) according to dimension.
* could remove `sympy`-dependency by implementing recursive modulo tests as in `notGoodDimension` from https://github.com/dmilkie/cudaDecon/blob/master/RL-Biggs-Andrews.cpp. However, I find the explicit factorization more readable than the recursion (a trial-division sketch follows below).
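As a minimal sketch of that sympy-free check, assuming trial division by the allowed factors only (the name `is_optimal_no_sympy` is my own), it could look like this:
```python
def is_optimal_no_sympy(n: int, allowed_factors=(2, 3, 5, 7)) -> bool:
    """Trial-divide n by the allowed factors; True iff nothing else remains."""
    if n < 2:
        return False
    for f in allowed_factors:
        while n % f == 0:
            n //= f
    return n == 1
```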
# Rainbow table
Create a rainbow table with pre-computed good dimensions. Note that this idiotic approach of building the rainbow table by factorization is very slow (building by multiplication is quicker). But we only have to do it once.
## idiotic approach (building by factorization)
Be lazy: recycle the code defined above and simply start testing numbers one by one.
The tqdm progress bar makes the problem obvious:
the Big-O cost of factorization is bad, which is exactly why factoring hardness is used for encryption.
```python
import tqdm
def create_cuFFT_dim_rainbow_table(nr_of_els: int = 1000):
good_dimensions = [0] * nr_of_els
val = 1
for i in tqdm.tqdm(range(nr_of_els)):
good_dimensions[i] = closest_optimal(val)
val = good_dimensions[i]+1
return good_dimensions
```
```python
good_dimensions = create_cuFFT_dim_rainbow_table()
```
100%|███████████████████████████████████████████████| 1000/1000 [00:06<00:00, 155.06it/s]
```python
print(repr(good_dimensions))
```
[2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 15, 16, 18, 20, 21, 24, 25, 27, 28, 30, 32, 35, 36, 40, 42, 45, 48, 49, 50, 54, 56, 60, 63, 64, 70, 72, 75, 80, 81, 84, 90, 96, 98, 100, 105, 108, 112, 120, 125, 126, 128, 135, 140, 144, 147, 150, 160, 162, 168, 175, 180, 189, 192, 196, 200, 210, 216, 224, 225, 240, 243, 245, 250, 252, 256, 270, 280, 288, 294, 300, 315, 320, 324, 336, 343, 350, 360, 375, 378, 384, 392, 400, 405, 420, 432, 441, 448, 450, 480, 486, 490, 500, 504, 512, 525, 540, 560, 567, 576, 588, 600, 625, 630, 640, 648, 672, 675, 686, 700, 720, 729, 735, 750, 756, 768, 784, 800, 810, 840, 864, 875, 882, 896, 900, 945, 960, 972, 980, 1000, 1008, 1024, 1029, 1050, 1080, 1120, 1125, 1134, 1152, 1176, 1200, 1215, 1225, 1250, 1260, 1280, 1296, 1323, 1344, 1350, 1372, 1400, 1440, 1458, 1470, 1500, 1512, 1536, 1568, 1575, 1600, 1620, 1680, 1701, 1715, 1728, 1750, 1764, 1792, 1800, 1875, 1890, 1920, 1944, 1960, 2000, 2016, 2025, 2048, 2058, 2100, 2160, 2187, 2205, 2240, 2250, 2268, 2304, 2352, 2400, 2401, 2430, 2450, 2500, 2520, 2560, 2592, 2625, 2646, 2688, 2700, 2744, 2800, 2835, 2880, 2916, 2940, 3000, 3024, 3072, 3087, 3125, 3136, 3150, 3200, 3240, 3360, 3375, 3402, 3430, 3456, 3500, 3528, 3584, 3600, 3645, 3675, 3750, 3780, 3840, 3888, 3920, 3969, 4000, 4032, 4050, 4096, 4116, 4200, 4320, 4374, 4375, 4410, 4480, 4500, 4536, 4608, 4704, 4725, 4800, 4802, 4860, 4900, 5000, 5040, 5103, 5120, 5145, 5184, 5250, 5292, 5376, 5400, 5488, 5600, 5625, 5670, 5760, 5832, 5880, 6000, 6048, 6075, 6125, 6144, 6174, 6250, 6272, 6300, 6400, 6480, 6561, 6615, 6720, 6750, 6804, 6860, 6912, 7000, 7056, 7168, 7200, 7203, 7290, 7350, 7500, 7560, 7680, 7776, 7840, 7875, 7938, 8000, 8064, 8100, 8192, 8232, 8400, 8505, 8575, 8640, 8748, 8750, 8820, 8960, 9000, 9072, 9216, 9261, 9375, 9408, 9450, 9600, 9604, 9720, 9800, 10000, 10080, 10125, 10206, 10240, 10290, 10368, 10500, 10584, 10752, 10800, 10935, 10976, 11025, 11200, 11250, 11340, 11520, 11664, 11760, 11907, 12000, 12005, 12096, 12150, 12250, 12288, 12348, 12500, 12544, 12600, 12800, 12960, 13122, 13125, 13230, 13440, 13500, 13608, 13720, 13824, 14000, 14112, 14175, 14336, 14400, 14406, 14580, 14700, 15000, 15120, 15309, 15360, 15435, 15552, 15625, 15680, 15750, 15876, 16000, 16128, 16200, 16384, 16464, 16800, 16807, 16875, 17010, 17150, 17280, 17496, 17500, 17640, 17920, 18000, 18144, 18225, 18375, 18432, 18522, 18750, 18816, 18900, 19200, 19208, 19440, 19600, 19683, 19845, 20000, 20160, 20250, 20412, 20480, 20580, 20736, 21000, 21168, 21504, 21600, 21609, 21870, 21875, 21952, 22050, 22400, 22500, 22680, 23040, 23328, 23520, 23625, 23814, 24000, 24010, 24192, 24300, 24500, 24576, 24696, 25000, 25088, 25200, 25515, 25600, 25725, 25920, 26244, 26250, 26460, 26880, 27000, 27216, 27440, 27648, 27783, 28000, 28125, 28224, 28350, 28672, 28800, 28812, 29160, 29400, 30000, 30240, 30375, 30618, 30625, 30720, 30870, 31104, 31250, 31360, 31500, 31752, 32000, 32256, 32400, 32768, 32805, 32928, 33075, 33600, 33614, 33750, 34020, 34300, 34560, 34992, 35000, 35280, 35721, 35840, 36000, 36015, 36288, 36450, 36750, 36864, 37044, 37500, 37632, 37800, 38400, 38416, 38880, 39200, 39366, 39375, 39690, 40000, 40320, 40500, 40824, 40960, 41160, 41472, 42000, 42336, 42525, 42875, 43008, 43200, 43218, 43740, 43750, 43904, 44100, 44800, 45000, 45360, 45927, 46080, 46305, 46656, 46875, 47040, 47250, 47628, 48000, 48020, 48384, 48600, 49000, 49152, 49392, 50000, 50176, 50400, 50421, 50625, 51030, 51200, 51450, 51840, 52488, 52500, 52920, 53760, 54000, 54432, 54675, 54880, 55125, 55296, 
55566, 56000, 56250, 56448, 56700, 57344, 57600, 57624, 58320, 58800, 59049, 59535, 60000, 60025, 60480, 60750, 61236, 61250, 61440, 61740, 62208, 62500, 62720, 63000, 63504, 64000, 64512, 64800, 64827, 65536, 65610, 65625, 65856, 66150, 67200, 67228, 67500, 68040, 68600, 69120, 69984, 70000, 70560, 70875, 71442, 71680, 72000, 72030, 72576, 72900, 73500, 73728, 74088, 75000, 75264, 75600, 76545, 76800, 76832, 77175, 77760, 78125, 78400, 78732, 78750, 79380, 80000, 80640, 81000, 81648, 81920, 82320, 82944, 83349, 84000, 84035, 84375, 84672, 85050, 85750, 86016, 86400, 86436, 87480, 87500, 87808, 88200, 89600, 90000, 90720, 91125, 91854, 91875, 92160, 92610, 93312, 93750, 94080, 94500, 95256, 96000, 96040, 96768, 97200, 98000, 98304, 98415, 98784, 99225, 100000, 100352, 100800, 100842, 101250, 102060, 102400, 102900, 103680, 104976, 105000, 105840, 107163, 107520, 108000, 108045, 108864, 109350, 109375, 109760, 110250, 110592, 111132, 112000, 112500, 112896, 113400, 114688, 115200, 115248, 116640, 117600, 117649, 118098, 118125, 119070, 120000, 120050, 120960, 121500, 122472, 122500, 122880, 123480, 124416, 125000, 125440, 126000, 127008, 127575, 128000, 128625, 129024, 129600, 129654, 131072, 131220, 131250, 131712, 132300, 134400, 134456, 135000, 136080, 137200, 137781, 138240, 138915, 139968, 140000, 140625, 141120, 141750, 142884, 143360, 144000, 144060, 145152, 145800, 147000, 147456, 148176, 150000, 150528, 151200, 151263, 151875, 153090, 153125, 153600, 153664, 154350, 155520, 156250, 156800, 157464, 157500, 158760, 160000, 161280, 162000, 163296, 163840, 164025, 164640, 165375, 165888, 166698, 168000, 168070, 168750, 169344, 170100, 171500, 172032, 172800, 172872, 174960, 175000, 175616, 176400, 177147, 178605, 179200, 180000, 180075, 181440, 182250, 183708, 183750, 184320, 185220, 186624, 187500, 188160, 189000, 190512, 192000, 192080, 193536, 194400, 194481, 196000, 196608, 196830, 196875, 197568, 198450, 200000, 200704, 201600, 201684, 202500, 204120, 204800, 205800, 207360, 209952, 210000, 211680, 212625, 214326, 214375, 215040, 216000, 216090, 217728, 218700, 218750, 219520, 220500, 221184, 222264, 224000, 225000, 225792, 226800, 229376, 229635, 230400, 230496, 231525, 233280, 234375, 235200, 235298, 236196, 236250, 238140, 240000, 240100, 241920, 243000, 244944, 245000, 245760, 246960, 248832, 250000, 250047, 250880, 252000, 252105, 253125, 254016, 255150, 256000, 257250, 258048, 259200, 259308, 262144, 262440, 262500, 263424, 264600, 268800, 268912, 270000, 272160, 273375, 274400, 275562, 275625, 276480, 277830, 279936, 280000, 281250, 282240, 283500, 285768, 286720, 288000, 288120, 290304, 291600, 294000, 294912, 295245, 296352, 297675, 300000, 300125, 301056, 302400, 302526, 303750, 306180, 306250, 307200, 307328, 308700, 311040, 312500, 313600, 314928, 315000, 317520, 320000, 321489, 322560, 324000, 324135, 326592, 327680, 328050, 328125, 329280, 330750, 331776, 333396, 336000, 336140, 337500, 338688, 340200, 343000, 344064, 345600, 345744, 349920, 350000, 351232, 352800, 352947, 354294, 354375, 357210, 358400, 360000, 360150, 362880, 364500, 367416, 367500, 368640, 370440, 373248, 375000, 376320, 378000, 381024, 382725, 384000, 384160, 385875, 387072]
```python
good_dimensions_fac = good_dimensions # save for later
```
## sensible approach (building by multiplication)
```python
# increase these maximum exponents as desired
n7 = 36
n5 = 43
n3 = 63
n2 = 100
```
```python
max_to_consider = min(2**n2, 3**n3, 5**n5, 7**n7)
i = 0
good_dimensions = []
for pow7 in tqdm.tqdm(range(n7+1)):
for pow5 in range(n5+1):
for pow3 in range(n3+1):
for pow2 in range(n2+1):
prod = 2**pow2 * 3**pow3 * 5**pow5 * 7**pow7
if prod < max_to_consider:
good_dimensions.append(prod)
print("sorting")
good_dimensions = sorted(good_dimensions)
# drop the 1
good_dimensions = good_dimensions[1:]
```
100%|████████████████████████████████████████████████████| 37/37 [00:20<00:00, 1.73it/s]
sorting
```python
# Check both the factorization and the multiplication approach give the same first 1000 elements
assert(good_dimensions_fac[:] == good_dimensions[:1000])
```
# Rainbow table lookup
```python
from bisect import bisect_left
lookup_larger = lambda x : good_dimensions[bisect_left(good_dimensions, x)]
lookup_smaller = lambda x : good_dimensions[bisect_left(good_dimensions, x)-1]
```
```python
testnr = 11223344
print("Rainbow lookup:")
print("next largest :", lookup_larger(testnr))
print("nex smallest :", lookup_smaller(testnr))
print("Factorization check:")
print("next largest :", closest_optimal(testnr))
print("nex smallest :", closest_optimal(testnr, search_next_largest=False ))
```
Rainbow lookup:
next largest : 11239424
next smallest : 11200000
Factorization check:
next largest : 11239424
next smallest : 11200000
# Timing
### factorization
```python
%%timeit
closest_optimal(testnr)
```
939 ms ± 17.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
### rainbow table
```python
%%timeit
lookup_larger(testnr)
```
617 ns ± 15.3 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
# Serialize rainbow table into usable python code
Generate a `.py` file with the first `n_elements` of the rainbow table.
Note that the lambda functions don't deal with any edge cases and work on scalars only.
They will throw an exception if the input value exceeds the largest value in the rainbow table.
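If bounds checking is wanted, a guarded variant might look like the sketch below; `lookup_larger_safe` is a hypothetical name, and raising an error past the table end (rather than, say, falling back to factorization) is my own choice.
```python
from bisect import bisect_left

def lookup_larger_safe(x: int, table) -> int:
    """Smallest table entry >= x; raise a clear error beyond the table end."""
    i = bisect_left(table, x)
    if i == len(table):
        raise ValueError(f"{x} exceeds the largest precomputed dimension {table[-1]}")
    return table[i]
```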
```python
n_elements = 8000
py_code = """
from bisect import bisect_left
good_dimensions = {rainbowtable}
lookup_larger = lambda x : good_dimensions[bisect_left(good_dimensions, x)]
lookup_smaller = lambda x : good_dimensions[bisect_left(good_dimensions, x)-1]
""".format(rainbowtable=repr(good_dimensions[:n_elements]))
with open("rainbow_cufft_pad.py", "w") as pyfile:
pyfile.write(py_code)
```
Godfrey Beddard 'Applying Maths in the Chemical & Biomolecular Sciences' Chapter 9
```python
# import all python add-ons etc that will be needed later on
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sympy import *
from scipy.integrate import quad
init_printing() # allows printing of SymPy results in typeset maths format
plt.rcParams.update({'font.size': 14}) # set font size for plots
```
# Answers 16-21
**Q16 answer** Using the ideas presented in the 'strategy' expanding the integral forms,
$$\displaystyle G^1(u)=2+\frac{2\int E(t)E(u+t)dt}{\int E(t)^2dt}$$
where the limits are $\pm \infty$. As a check that $\int E(t)dt=\int E(u+t)dt$. SymPy can be used to perform the integrals. It is easier to help by converting the cosines to exponentials first:
```python
omega,a,t,u = symbols('omega a t u',positive=True)
f01= cos(omega*t/a)*exp(-(t/a)**2/2) # define electric fields
f02= cos(omega*(t+u)/a)*exp(-((t+u)/a)**2/2)
ef01 = expand(f01.rewrite(exp))
intf01 = integrate(ef01,(t,-oo,oo),conds='none')
simplify(intf01)
```
```python
# and for the t+u function
ef02 = expand(f02.rewrite(exp))
intf02 = integrate(ef02,(t,-oo,oo),conds='none')
simplify(intf02)
```
which shows that $\int E(t)dt=\int E(u+t)dt$. The same is true for the powers of the integrals. The cross term is $2\int E(t)E(u+t)dt$ and is calculated as
```python
# calculate the cross term E(t)E(t+u)
fcross = simplify(2*expand(ef01*ef02))
intfcross= integrate(fcross, (t,-oo,oo),conds='none')
simplify(intfcross)
```
This can be simplified by converting the $e^{-i\omega u/a }$ terms to a cosine and the result is
$$\displaystyle \int E(t)E(u+t)dt = \frac{\sqrt{\pi}}{a}e^{-u^2/(4a^2)}\left(\cos\left(\frac{\omega u}{a}\right) +e^{-\omega^2} \right)$$
The normalisation is found when $u=0$ and is $\displaystyle \int E(t)^2dt = \frac{\sqrt{\pi}}{a}(e^{-\omega^2}+1)$ and this produces the fringe resolved autocorrelation shown in figure 35. The final equation, divided by two so that the signal is 1 in the wings, is
$$\displaystyle 1+ \frac{\int E(t)E(u+t)dt}{\int E(t)^2dt} = 1 + \frac{e^{-u^2/(4a^2)}\left(\cos\left(\frac{\omega u}{a}\right) +e^{-\omega^2} \right)}{1+e^{-\omega^2}} $$
This function is calculated and plotted as $G^1(u)$ below,
```python
G1 = lambda u, a, omega: 1+(np.exp(-u**2/(4*a**2)) *(np.cos(omega*u/a)+np.exp(-omega**2)) )/(1+np.exp(-omega**2))
a = 5
omega = 10
u = np.linspace(-20,20,500)
plt.plot(u,G1(u,a,omega))
plt.axhline(1,linewidth=1,color='grey')
plt.title(r'$G^1(u)$')
plt.xlabel('time')
plt.show()
```
**(b)** The second-order correlation is calculated in a similar way; expanding the terms before integrating gives
$$\displaystyle G^2(u)=2+4\frac{\int E(t)E(u+t)^3 + E(t)^3E(u+t)dt}{\int E(t)^4dt}+6\frac{\int E(t)^2E(u+t)^2 dt}{\int E(t)^4dt}$$
Immediately it can be seen that at long positive and negative delays, the correlation is going to be constant. This is because when the pulses are not overlapped, each pulse produces frequency doubled light which the detector measures.
The denominator integrates to a simple expression, but not the numerator, and therefore python/SymPy is used to calculate and plot $G^2$ directly.
The normalization term has a value of 1/8, but it is conventional to have the signal with a value of 1 in the wings of the pulse, because here frequency doubling from only one arm of the interferometer is measured, and the autocorrelation is multiplied by 8 to achieve this.
The algebraic solution of each integral is given below calculated using SymPy. Unless you are interested in how to do this calculation skip this part and go straight to the figure.
```python
# to calculate G^2(u)
omega,a,t,u = symbols('omega a t u',positive=True)
f01= cos(omega*t/a)*exp(-(t/a)**2/2) # define electric fields
f02= cos(omega*(t+u)/a)*exp(-((t+u)/a)**2/2)
ef01 = expand(f01.rewrite(exp)) # change into exponential form to ease integration
ef02 = expand(f02.rewrite(exp))
# normalisation # int E(t)^4
ef01_4 = expand(ef01**4)
norml4 = integrate(ef01_4,(t,-oo,oo),conds='none')
factor(norml4)
```
```python
# integration E(t) x E(t+u)^3 and vice versa
ef13 = expand(ef01*ef02**3)
term13 = integrate(ef13,(t,-oo,oo),conds='none')
simplify(term13)
```
```python
# integration E(t)^2 x E(t+u)^2
ef02= expand(f02.rewrite(exp))
ef22= expand(ef01**2*ef02**2)
term22 = integrate(ef22,(t,-oo,oo),conds='none')
factor(term22)
```
The integrals can be used as they are or converted back into cosine form using $\displaystyle 2\cos(x)=e^{ix}+e^{-ix}$; thus
$$\begin{align}
\int E(t)^4dt &= \frac{a\sqrt{2\pi}}{16}\left(3+e^{-2\omega^2}+4e^{-\omega^2/2}\right) \\
\int E(t)E(u+t)^3dt &= \frac{a\sqrt{2\pi}}{32}\left[6\cos\left(\frac{\omega u}{a}\right)
+ e^{-2\omega^2} + 2e^{-\omega^2/2}\left(\cos\left(\frac{3\omega u}{2a}\right) +6\cos\left(\frac{\omega u}{2a}\right)\right) \right]e^{-3u^2/(8a^2)} \\
\int E(t)^2E(u+t)^2dt &= \frac{a\sqrt{2\pi}}{32}\left[4+2\cos\left(\frac{2\omega u}{a}\right) +2e^{-2\omega^2} +8\cos\left(\frac{\omega u}{a}\right)e^{-\omega^2/2} \right]e^{-u^2/(2a^2)}
\end{align}$$
```python
a = 5
omega = 10
anorm = lambda a, omega: np.sqrt(2*np.pi)*a/16*\
( 3 + np.exp(-2*omega**2) + 4*np.exp(-0.5*omega**2) )
f13 = lambda u,a,omega: np.sqrt(2*np.pi)*a/32*(3*2*np.cos(omega*u/a) + np.exp(-2*omega**2)\
+ 2*np.exp(-omega**2/2)*( np.cos(3*omega*u/(2*a))\
+ 3*2*np.cos(omega*u/(2*a) )))*np.exp(-3*u**2/(8*a**2))
f22 = lambda u,a,omega :np.sqrt(2*np.pi)*a/32*\
(4+ 2*np.cos(2*omega*u/a)+2*np.exp(-2*omega**2) \
+4*2*np.cos(omega*u/a)*np.exp(-omega**2/2) ) *np.exp(-u**2/(2*a**2))
n = anorm(a,omega)
G2 = lambda u, a, omega: 2 + 6*f22(u,a,omega)/n + 2*4*f13(u,a,omega)/n
fmax = lambda u,a: (1+3*np.exp(-u**2/(2*a**2))+4*np.exp(-3*u**2/(8*a**2))) # max and min values
fmin = lambda u,a: (1+3*np.exp(-u**2/(2*a**2))-4*np.exp(-3*u**2/(8*a**2)))
u = np.linspace(-20,20,500) # define time values
plt.title(r'$G^2(u)$')
plt.plot( u, G2(u,a,omega)/2 )
plt.plot(u,fmax(u,a),color='grey',linewidth=1)
plt.plot(u,fmin(u,a),color='grey',linewidth=1)
plt.axhline(1,linewidth=1,color='grey')
plt.axhline(0,linewidth=1,color='grey')
plt.xlabel('time')
plt.show()
```
Figure 57. Normalized fringe resolved autocorrelation $G^2(u)$ with the upper and lower bounds shown as solid lines. The frequency of the pulse is constant throughout its duration so this is a _transform-limited_ pulse with zero chirp. The constants are $a$ = 5 and $\omega$ = 10.
**(b)** The outline pulse shape is found by ignoring the cosine term and integrating. The result is shown below. The lower profile is found by subtracting the two exponentials, which has the effect only of changing the +4 term to -4.
```python
# outline pulse , **** ignore cosine ****
t ,a,u =symbols('t a u',positive=True)
f011= ( exp(-(t/a)**2/2) + exp(-((t+u)/a)**2/2) )**4 # define electric fields
s = integrate(f011,(t,-oo,oo),conds='none')
expand(s)
```
This equation can be normalised to one at long times and then simplified to give $\displaystyle \sqrt{2\pi}a(1+3e^{-u^2/(2a^2)}\pm 4e^{-3u^2/(8a^2)})$ where the positive sign corresponds to the upper curve. The fwhm of the curve, $2\tau$, taking 1 as the baseline, is found by solving $\displaystyle 3e^{-\tau^2/(2a^2)}+ 4e^{-3\tau^2/(8a^2)}=7/2$. This equation is simplified by substituting $\displaystyle x=e^{-\tau^2/(2a^2)}$, giving $3x+ 4x^{3/4}=7/2$, which has one real solution, 0.4428, and therefore $\displaystyle \tau = a\sqrt{-2\ln(0.4428)}=1.276a$. The original pulse has a fwhm of $\displaystyle a\sqrt{2\ln(2)}$, therefore the outline of the autocorrelation is $\approx$ 1.08 times wider than the pulse.
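As a quick numerical check of that root (a sketch only; using `scipy.optimize.brentq` here is my own choice, not part of the original answer):
```python
# Numerical check of 3*x + 4*x**(3/4) = 7/2 and the resulting half width.
from scipy.optimize import brentq
import numpy as np

f = lambda x: 3*x + 4*x**0.75 - 3.5
x0 = brentq(f, 1e-6, 1.0)       # the single real root in (0, 1)
tau = np.sqrt(-2*np.log(x0))    # half width in units of a
print(x0, tau)                  # ~0.4428 and ~1.276
```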
**(c)** The Fourier transform of the pulse is
$$\displaystyle g(k)= \frac{1}{\sqrt{2\pi}}\int\limits_{-\infty}^\infty \cos\left(\frac{\omega t}{a}\right)e^{-(t/a)^2/2}e^{-ikt}dt$$
and this is of a standard type of integral if the cosine is converted to its exponential form. Then $\displaystyle g(k)= \frac{1}{2\sqrt{2\pi}}\int\limits_{-\infty}^\infty (e^{i\omega t}+e^{-i\omega t})e^{-(t/a)^2/2-ikt}dt $.
The transform is thus $\displaystyle g(k)=\frac{a}{2}(e^{2ak\omega} +1)e^{-(ak+\omega)^2/2}$ and is shown in the next figure. The shape is dominated by the Gaussian shape of the final exponential.
```python
gk = lambda k,a, omega : 0.5*a*(np.exp(2.0*a*k*omega) +1 )*np.exp(-(a*k + omega )**2/2.0)
a = 5
omega = 10
k=np.linspace(0,4,500)
plt.plot(k,gk(k,a,omega),color='blue')
plt.xlabel('k')
plt.title('Fourier Transform of pulse')
plt.plot()
plt.show()
```
**Q17 answer** **(a)** The integration for the $\displaystyle e^{-|x|/a}$ pulse is $\displaystyle \int_{-\infty}^\infty e^{-|x|/a}e^{-|x+u|/a}dx$ which is simplified, as the autocorrelation is symmetric about zero, by doubling the integral from 0 to $\infty$, in which case the absolute values are not needed. The result is $\displaystyle 2\int_0^\infty e^{-2x/a}e^{-u/a}dx = ae^{-u/a}$. However, when plotting the absolute values must be replaced as $u$ can be negative. Thus $\displaystyle A(u) = ae^{-|u|/a}$.
At the half maximum, let $u=\tau$ with $\displaystyle A(\tau)=1/2= ae^{-\tau/a}$, thus $\tau=a\ln(2a)$ and the fwhm is $2a\ln(2a)$. The laser pulse fwhm is $2a\ln(2)$; the autocorrelation is thus $\ln(2a)/\ln(2)$ times wider.
**(b)** The $\mathrm{sech}^2$ pulse can be simplified first by converting to its exponential form and then integrating; after inserting limits of $\pm\infty$, the normalised result is
$$\displaystyle A(u) =16\frac{e^{2u} [ u(e^{2u}+1) - (e^{2u}-1) ]}{(e^{2u}-1)^3}$$
The fwhm can be found by solving the autocorrelation for the value $u$ at which $A(u)=0.5$. This is easily done to sufficient accuracy using the Newton-Raphson method outlined below and in Chapter 3.1. The method is iterative: if $x$ is the required solution, then $x=x-f(x)/f'(x)$ is repeatedly calculated, where $f(x)$ is the function and $f'(x)$ its first derivative w.r.t. $x$. To find the half width, $A(u)-1/2$ is the function whose root is required.
```python
x = symbols('x')
u = symbols('u', positive=True) # this is necessary to get solution
eqn= (2/(exp(x)+exp(-x) ) )**2*(2/(exp(x+u)+exp(-x-u) ) )**2
Au = simplify(integrate(eqn,(x,-oo,oo ) ) )
Au
```
```python
df = simplify(diff(Au,u) )
factor(df)
```
```python
# Newton-Raphson
# derivative of A(u) is df
# simplified function and derivative used
df = lambda u: -16*np.exp(2*u)*(2*u*np.exp(4*u)+8*u*np.exp(2*u)+2*u-3*np.exp(4*u) +3)\
/(np.exp(2*u) -1 )**4
# function A(u)-1/2 and find value when zero (root).
f = lambda u:16*np.exp(2*u)*(u*np.exp(2*u)+u-np.exp(2*u)+1)/(np.exp(2*u)-1)**3 - 1/2
x = -0.5 # initial guess
for i in range(6): # 6 iterations are enough
x = x - f(x)/df(x)
print(x)
```
-1.992610035168469
-1.5791589526105754
-1.6377830584176059
-1.6389083234043145
-1.6389087505542517
-1.6389087505543132
which shows that the full width at half maximum (fwhm) is 2$\cdot$ 1.64. In comparison, the corresponding value for the sech<sup>2</sup> pulse is 2$\cdot$ 0.88, thus the autocorrelation is $\approx$ 1.86 times wider.
**Q18 answer** The acoustic pulse $S(t)$ with $a$ = 0, compared with $a$ = 60, shows that the frequency is higher at short times than at longer ones; a down chirp, see Fig. 58. The calculated autocorrelations of the chirped and un-chirped pulses are shown in Fig. 59. The calculation is essentially the same as in Section 5.
Figure 58. Left: an acoustic pulse $S(t)$ without a chirp, $a$ = 0. Right: a chirped pulse mimicking that emitted by a bat when close to its prey.
_____
The un-chirped pulse $a$ = 0 produces a linearly decreasing autocorrelation but which is so long that poor range discrimination would be achieved and the bat would hardly ever succeed in catching its prey. The integral of this autocorrelation also increases slowly with time and hence distance, again indicating that the bat would find it difficult to discriminate the prey from something else. The chirped pulse has a small autocorrelation amplitude at long times, therefore, its summation is large and constant when the bat is far from the prey, but it decreases rapidly as it approaches within 3 cm of its target allowing sharp range discrimination. Figure 59 shows the summed autocorrelations together with the experimental data given in the question.
Figure 59. Left: Autocorrelations of bat pulses $S(t)$ with no chirp (grey dotted line) and with down chirp (solid green line). Right: The sum of the autocorrelation with time, converted into distance, for the same two pulses together with the experimental data which is the percentage correct response vs difference in distance to the target. (Data was measured from Simmons 1971, Fig. 2.)
____
The summation in figure 59 was calculated as follows, where A0 is the autocorrelation and numt the number of time points.
```python
S0 = [0.0 for i in range(numt)]
for i in range(numt):
    S0[i] = sum(abs(A0[k]) for k in range(i))
```
**Q 19 answer** The method to use is similar to that used in the example. However, when the raw data is plotted the signal is buried in the noise, and Fourier transforming produces an ambiguous result where it is not clear where to set the filter to extract the data. Apodising, by multiplying the data by an exponential, decreases the noise in the later part of the data and helps to identify the frequencies present. The initial FID and its Fourier transform are shown in the figure.
Figure 60. Left: ideal FID of two spins. Right: close-up of the spectrum (imaginary part of the transform) showing two lines, one at each frequency.
_____
The code with which to calculate the FID and FFT is shown below
```python
#.. make FID add noise and make FFT
n = 2**12
maxx = n/2
x = np.linspace(0,maxx,n) # make n points and of length maxx
tp = maxx/n
freq = [ i/(tp*n) for i in range(n)]
nu1 = 1/4.8
nu2 = 1/4.5
sig = 2.0 # magnitude of noise use randn() to use normally distributed noise
fid0 = [ np.exp(-x[i]/300.0)*(np.sin(2*np.pi*nu1*x[i]) + np.sin(2*np.pi*nu2*x[i]) ) \
+ sig*np.random.randn() for i in range(n)]
fft0 = np.fft.rfft(fid0).imag # as the FID contains sine waves the FFT is imaginary.
#.. plots shown in the next figures
```
Figure 61. Noise added to the FID and then transformed to give a noisy spectrum of which only the imag part is shown. It is clear from the FFT that it is hard to determine which peaks are due to the data and which due to noise.
__
The figure shows that data is clearly swamped by noise. The next step is to apodise by multiplying by an exponential. This decreases the noise relative to the signal as this is largest at earlier times. The decay time can be chosen by trial and error to best illustrate the features wanted.
```python
# define a new list to hold the apodised FID
fid1 = [0.0 for i in range(n)]
for i in range(n):
    fid1[i] = fid0[i]*np.exp(-i/1000)
fft1 = np.fft.rfft(fid1).imag   # calculate FFT
#.... plot data as above
```
Figure 62. The apodised FID (left) is transformed into the spectrum, thereby retrieving the two frequencies. The original frequencies, although not perfectly isolated, are at approximately 0.2 and can clearly be identified above the noise.
____
**Q 20 answer** Plotting the data with and without noise shows that the pulse lasts for about 2 ps, and examining it close to the maximum time indicates that the smallest period is about 60 fs. Therefore, $n$ = 2<sup>12</sup> points will be more than adequate for the Fourier transforms.
The pulse has the form $\displaystyle \sin(x^2/200^2 )\exp(- (x-800.0)^2/200.0^2)$ and normally distributed noise of unit standard deviation is added by including randn(). The plot shows the chirped pulse (pure pulse) and, with the noise added, this forms the 'experimental' data. The pure pulse and the recovered data are shown in the second set of figures below.
```python
n = 2**12
maxt = 1500
t = np.linspace(0,maxt,n)
tp = maxt/n
# make data with and without noise
pnoise = [ np.sin((t[i]/200)**2 )*np.exp(-((t[i]-800.0)/200.0)**2)+np.random.randn() for i in range(n) ]
pulse = [ np.sin((t[i]/200)**2 )*np.exp(-((t[i]-800.0)/200.0)**2) for i in range(n) ]
freq= [ i/(tp*n) for i in range(n)]
fig1= plt.figure(figsize=(10.0,5.0))
ax0 = fig1.add_subplot(1,2,1)
ax1 = fig1.add_subplot(1,2,2)
ax0.plot(t,pnoise,color='gray')
ax0.plot(t,pulse,color='red',linewidth=2)
ax0.set_xlabel('time /fs')
ax0.set_title('noisy & pure signal')
fft0= np.fft.rfft(pnoise).real
ax1.plot(freq[0:n//2],fft0[0:n//2],marker='o',color='red')
ax1.set_xlim([0,0.04])
ax1.set_title('FFT')
ax1.set_xlabel('frequency')
plt.tight_layout()
plt.show()
```
```python
# from FFT plot choose frequencies to include in reverse transform
filter = [ fft0[i] if (i/n >0.0 and i/n <0.02) else 0 for i in range(n//2)]
fft1 = np.fft.irfft(filter)
plt.plot(t,pulse,color='red',linewidth=3,linestyle='dotted')
plt.plot(t[0:n-2],fft1[0:n],color='gray',linewidth=2)
plt.xlabel('time /fs')
plt.title('pure (red dotted) and recovered signal')
plt.show()
```
**Q21 answer** The recursive algorithm below is based on the equation in the text. The data is assumed to have been calculated elsewhere and put into an array called `data`; the smoothed data is called `sdata`.
```python
# make some noisy data. The window has width m=3. This code generates the data shown in the text.
fig1= plt.figure(figsize=(5.0,5.0))
n = 2**9
noise = [ 0 for i in range(n)]
for i in [100,170,210,305,355,390,410]:
noise[i] = np.random.rand() + 0.5
data = [ noise[i]*0.75 + np.exp(-(i-250)**2/1e4)*0.5 + 0.3*np.random.rand()-0.3/2 for i in range(n)]
x = [i for i in range(n)]
sdata= [0.0 for i in range(n)]
m = 3
sdata[m]= sum(data[i] for i in range(0,2*m+1))/(2*m+1)
for i in range(m+1, n-m):   # recursion starts one past the initialised point
sdata[i] = sdata[i-1] + (data[i+m]-data[i-m-1])/(2*m+1)
plt.plot(x,data,color='blue')
plt.plot(x,sdata,color='red',linewidth=3)
plt.show()
```
# Deep Learning with PyTorch
In this notebook I will introduce [PyTorch](http://pytorch.org/), a powerful helper for training neural networks. PyTorch interoperates almost seamlessly with NumPy, so if you are familiar with NumPy the transition to PyTorch should be easy. PyTorch can also call GPU APIs for hardware acceleration and provides many conveniences such as automatic gradient computation and flexible model customization. On the other hand, PyTorch has better compatibility than TensorFlow.
## Neural Networks
Deep learning is a family of function-fitting methods based on artificial neural networks. A network is made up of many "neurons". Each neuron has multiple inputs, and each input has its own weight. The weighted inputs are summed and passed through an activation function to produce the output value.
Mathematically:
$$
\begin{align}
y &= f(w_1 x_1 + w_2 x_2 + b) \\
y &= f\left(\sum_i w_i x_i +b \right)
\end{align}
$$
Here the vector multiplication is a dot product.
$$
h = \begin{bmatrix}
x_1 \, x_2 \cdots x_n
\end{bmatrix}
\cdot
\begin{bmatrix}
w_1 \\
w_2 \\
\vdots \\
w_n
\end{bmatrix}
$$
## Tensors
By applying linear operations to tensors we can build all kinds of neural networks. A tensor is a generalization of a matrix: a 0-dimensional tensor is a scalar, a 1-dimensional tensor is a vector, and a 2-dimensional tensor is a matrix (as shown in the figure below).
Let's now see how to build a simple neural network with PyTorch.
```python
# First, import PyTorch
import torch
```
```python
def activation(x):
""" Sigmoid 激活函数
变量定义
---------
x: torch.Tensor
"""
return 1/(1+torch.exp(-x))
```
The sigmoid function:
```python
### Generate some data
torch.manual_seed(7) # set the random seed
# draw 5 random numbers from a standard normal distribution, size 1x5, mean 0, variance 1.
features = torch.randn((1, 5))
print(features)
# set the ground-truth (GT) weights, same size as features.
weights = torch.randn_like(features)
# set the GT bias term.
bias = torch.randn((1, 1))
```
tensor([[-0.1468, 0.7861, 0.9468, -1.1143, 1.6908]])
With that, we have prepared the data needed to train this simple neural network. For now the values are just random draws from a normal distribution, but as training proceeds they would converge towards the ground truth.
PyTorch tensors support addition, multiplication, subtraction and so on, just like the NumPy arrays you normally use. We will now use the generated data to compute the output of this simple network.
> **Exercise**: Compute the network output from the features `features`, weights `weights`, and bias `bias`. Just as in NumPy, summation in PyTorch is done with [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum); then apply the `activation` function we defined to obtain the output.
```python
### Solution
y = activation(torch.sum(features * weights) + bias)
y = activation((features * weights).sum() + bias)
```
You can also perform the multiplication and summation in a single step using matrix multiplication. In general I recommend matrix multiplication, because it is more efficient: PyTorch provides a large number of optimized routines and GPU interfaces to accelerate matrix operations.
If we want to use matrix multiplication, we need to call [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul). I personally recommend the latter, because it supports more features.
```python
torch.matmul(features, weights)
```
But when we run this, the following error appears:
```python
>> torch.matmul(features, weights)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-13-15d592eb5279> in <module>()
----> 1 torch.matmul(features, weights)
RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /pytorch/aten/src/TH/generic/THTensorMath.cpp:961
```
This is because the tensor sizes (shapes) are wrong, so the matrices cannot be multiplied. This is a very common problem.
The fix is simple: just adjust the size of `weights` so that the matrix multiplication works.
**Note:** a tensor's size is given by `tensor.shape`. It is used all the time, so remember it.
PyTorch also provides several functions for changing a tensor's shape (size), for example: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).
* `weights.reshape(a, b)` ***copies*** the data of `weights` into new memory and returns a tensor of size a*b.
* `weights.resize_(a, b)` gives the same result, the only difference being how the shape is checked: when the new tensor has fewer elements than the original, the extra elements are removed from the new tensor (though you can still access that data through the original tensor), and when it has more elements, the new elements are left uninitialized. Also note that, unlike reshape, this is an in-place operation with no copy. If you want to learn more about in-place operations you can [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.
* `weights.view(a, b)` is much the same as reshape, but it has been around for longer, so more people use it.
Personally I lean towards `reshape`, but using either of the other two will generally not affect your results.
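As a quick illustration (a minimal sketch; the exact values depend on the random seed), all three calls turn a (1, 5) tensor into a (5, 1) tensor:
```python
# Three ways to turn a (1, 5) tensor into (5, 1).
import torch

w = torch.randn(1, 5)
print(w.reshape(5, 1).shape)   # torch.Size([5, 1])
print(w.view(5, 1).shape)      # torch.Size([5, 1])
w.resize_(5, 1)                # in-place; note the trailing underscore
print(w.shape)                 # torch.Size([5, 1])
```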
> **Exercise**: Compute the neuron output using matrix multiplication.
```python
## Solution
y = activation(torch.mm(features, weights.reshape(5,1)) + bias)
```
### Build your first network!
Now that we have learned how to compute a single neuron, let's try stacking neurons together to form a network, where the outputs of the first layer of neurons serve as the inputs to the second layer. Because each layer has multiple neurons, we use a matrix to represent the weights.
The bottom layer is the input to the network and is called the **input layer**. The middle layers are called **hidden layers**, and the top layer is the **output layer**. Let's analyse how the network computes from a mathematical point of view. For example, the hidden layer ($h_1$ and $h_2$) can be expressed as:
$$
\vec{h} = [h_1 \, h_2] =
\begin{bmatrix}
x_1 \, x_2 \cdots \, x_n
\end{bmatrix}
\cdot
\begin{bmatrix}
w_{11} & w_{12} \\
w_{21} &w_{22} \\
\vdots &\vdots \\
w_{n1} &w_{n2}
\end{bmatrix}
$$
The output of the hidden layer is the input to the output layer. The output of the whole network can then be expressed as:
$$
y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)
$$
```python
### Generate data
torch.manual_seed(7) # set the random seed
# generate 3 standard-normal random numbers
features = torch.randn((1, 3))
# define the size of each layer
n_input = features.shape[1] # number of input units
n_hidden = 2 # number of hidden units
n_output = 1 # number of output units
# weights from the inputs to the hidden layer
W1 = torch.randn(n_input, n_hidden)
# weights from the hidden layer to the output
W2 = torch.randn(n_hidden, n_output)
# bias terms for the hidden and output layers
B1 = torch.randn((1, n_hidden))
B2 = torch.randn((1, n_output))
```
> **Exercise:** Compute the network output using the weights `W1`, `W2` and the biases `B1`, `B2`.
```python
### Solution
h = activation(torch.mm(features, W1) + B1)
output = activation(torch.mm(h, W2) + B2)
print(output)
```
tensor([[0.3171]])
If you got it right, you should see the output `tensor([[ 0.3171]])`.
The number of hidden units is a **hyperparameter**, so called to distinguish it from the weight and bias parameters themselves. Networks with more layers and more units can perform better on the same data, because they can learn more features, at the price of a larger computational cost.
## Converting between NumPy and Torch
Seamless conversion to and from NumPy is one of PyTorch's headline features. Concretely:
Numpy -> PyTorch
`torch.from_numpy()`
PyTorch -> Numpy
`.numpy()`
```python
import numpy as np
a = np.random.rand(4,3) # a random NumPy array
a
```
array([[0.11567605, 0.36076843, 0.65586404],
[0.3654072 , 0.99454583, 0.64481185],
[0.34689881, 0.77484326, 0.26163729],
[0.12669539, 0.36048957, 0.57723008]])
```python
b = torch.from_numpy(a) # convert to a torch tensor
b
```
0.3367 0.5953 0.6543
0.8653 0.5995 0.2804
0.4841 0.9836 0.3388
0.2559 0.5108 0.3999
[torch.DoubleTensor of size 4x3]
```python
b.numpy() # convert back to a NumPy array
```
array([[ 0.33669496, 0.59531562, 0.65433944],
[ 0.86531224, 0.59945364, 0.28043973],
[ 0.48409303, 0.98357622, 0.33884284],
[ 0.25591391, 0.51081783, 0.39986403]])
Note that the tensor and the NumPy array share the same memory, so in-place operations on one of them also change the other.
```python
# multiply the PyTorch tensor by 2, in place
b.mul_(2)
```
0.6734 1.1906 1.3087
1.7306 1.1989 0.5609
0.9682 1.9672 0.6777
0.5118 1.0216 0.7997
[torch.DoubleTensor of size 4x3]
```python
# the NumPy array changes accordingly; keep this shared memory in mind when using it
a
```
array([[ 0.67338991, 1.19063124, 1.30867888],
[ 1.73062448, 1.19890728, 0.56087946],
[ 0.96818606, 1.96715243, 0.67768568],
[ 0.51182782, 1.02163565, 0.79972807]])
|
366a52e6be5a016a7ec2e45e4306fc897c1553f8
| 18,098 |
ipynb
|
Jupyter Notebook
|
intro-to-pytorch/Part_1_Tensors_in_PyTorch.ipynb
|
yangliuav/deep-learning-with-pytorch-Chinese-version
|
1f30437e6244e29ba37fb72861304a994f0b4fc0
|
[
"MIT"
] | 1 |
2021-03-14T12:46:23.000Z
|
2021-03-14T12:46:23.000Z
|
intro-to-pytorch/Part_1_Tensors_in_PyTorch.ipynb
|
yangliuav/deep-learning-with-pytorch-Chinese-version
|
1f30437e6244e29ba37fb72861304a994f0b4fc0
|
[
"MIT"
] | null | null | null |
intro-to-pytorch/Part_1_Tensors_in_PyTorch.ipynb
|
yangliuav/deep-learning-with-pytorch-Chinese-version
|
1f30437e6244e29ba37fb72861304a994f0b4fc0
|
[
"MIT"
] | null | null | null | 29.427642 | 397 | 0.45447 | true | 3,316 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.763484 | 0.795658 | 0.607472 |
__label__yue_Hant
| 0.568511 | 0.249691 |
```python
# Header starts here.
from sympy.physics.units import *
from sympy import *
# Rounding:
import decimal
from decimal import Decimal as DX
from copy import deepcopy
def iso_round(obj, pv, rounding=decimal.ROUND_HALF_EVEN):
import sympy
"""
Rounding acc. to DIN EN ISO 80000-1:2013-08
place value = Rundestellenwert
"""
assert pv in set([
# place value # round to:
1, # 1
0.1, # 1st digit after decimal
0.01, # 2nd
0.001, # 3rd
0.0001, # 4th
0.00001, # 5th
0.000001, # 6th
0.0000001, # 7th
0.00000001, # 8th
0.000000001, # 9th
0.0000000001, # 10th
])
objc = deepcopy(obj)
try:
tmp = DX(str(float(objc)))
objc = tmp.quantize(DX(str(pv)), rounding=rounding)
except:
for i in range(len(objc)):
tmp = DX(str(float(objc[i])))
objc[i] = tmp.quantize(DX(str(pv)), rounding=rounding)
return objc
# LateX:
kwargs = {}
kwargs["mat_str"] = "bmatrix"
kwargs["mat_delim"] = ""
# kwargs["symbol_names"] = {FB: "F^{\mathsf B}", }
# Units:
(k, M, G ) = ( 10**3, 10**6, 10**9 )
(mm, cm) = ( m/1000, m/100 )
Newton = kg*m/s**2
Pa = Newton/m**2
MPa = M*Pa
GPa = G*Pa
kN = k*Newton
deg = pi/180
half = S(1)/2
# Header ends here.
#
# https://colab.research.google.com/github/kassbohm/wb-snippets/blob/master/ipynb/HTM_03/Uebungen/3.3_cc.ipynb
from mpmath import radians, degrees, pi
l1, l2 = 3*m/10, 6*m/10
p1 = radians(30)
prec = 0.001
s, p2 = var("s, φ₂")
c1, s1 = cos(p1), sin(p1)
c2, s2 = cos(p2), sin(p2)
xdev = ( l1*c1 - l2*c2 - s ) / m
ydev = ( l1*s1 - l2*s2 ) / m
xdev = xdev.simplify()
ydev = ydev.simplify()
xi = var("xi")
xdev = xdev.subs(s/m, xi)
J11, J12 = diff(xdev, p2), diff(xdev, xi)
J21, J22 = diff(ydev, p2), diff(ydev, xi)
# Jacobian:
J = Matrix([[J11, J12],[J21, J22]])
# pprint(J)
pprint("\n--- Numerical Solution using SymPy's nsolve ---")
# Initial Values:
pprint("\nInitial (φ₂ / deg, ξ):")
p2i, xii = radians(150), 0.8
sol = nsolve( [xdev, ydev], [p2, xi], [p2i, xii], dict=True )
sol = sol[0]
sympy_sol = sol
tmp = Matrix([degrees(sol[p2]), sol[xi]])
pprint("\nSolution:")
pprint("(φ₂ / deg, ξ):")
pprint(iso_round(tmp,prec))
pprint("\n--- Step-by-Step Newton-Solution ---")
pprint("\n--- Step-1 ---")
pprint("\nJacobian in terms of (φ₂, ξ):")
pprint(J)
pprint("\n1: Inital (φ₂, ξ):")
p = Matrix([p2i, xii])
pprint(iso_round(p, prec))
pprint("Note: φ₂ / deg:")
tmp = degrees(p[0])
pprint(iso_round(tmp, prec))
F = Matrix([xdev, ydev])
sub_list = [
(p2, p2i),
(xi, xii),
]
pprint("\n2: Jacobian:")
Jn = J.subs(sub_list)
# pprint(Jn)
pprint(iso_round(Jn, prec))
pprint("\n3: Deviation Phi:")
Fn = F.subs(sub_list)
pprint(iso_round(Fn, prec))
# Newton-Step:
pprint("\n4: Increment (δφ₂, ξ):")
dp2, dxi = var("δφ₂, δξ")
dp = Matrix([dp2, dxi])
eq = Eq(Jn*dp, -Fn)
sol = solve(eq,[dp2, dxi], dict=True)
sol = sol[0]
dp = Matrix([sol[dp2], sol[dxi]])
pprint(iso_round(dp, prec))
p += dp
pprint("\n5: At end of iteration step:")
pprint("(φ₂, ξ):")
pprint(iso_round(p, prec))
pprint("Note: φ₂ / deg:")
tmp = degrees(p[0])
pprint(iso_round(tmp, prec))
# ---
pprint("\n--- Step-2 ---")
pprint("\n1: Using (φ₂, ξ) from end of last step.")
sub_list = [
(p2, p[0]),
(xi, p[1]),
]
pprint("\n2: Jacobian:")
Jn = J.subs(sub_list)
pprint(iso_round(Jn, prec))
pprint("\n3: Deviation Phi:")
Fn = F.subs(sub_list)
pprint(iso_round(Fn, prec))
# Newton-Step:
pprint("\n4: Increment (δφ₂, ξ):")
dp2, dxi = var("δφ₂, δξ")
dp = Matrix([dp2, dxi])
eq = Eq(Jn*dp, -Fn)
sol = solve(eq,[dp2, dxi], dict=True)
sol = sol[0]
dp = Matrix([sol[dp2], sol[dxi]])
pprint(iso_round(dp, prec))
p += dp
pprint("\n5: At end of iteration step:")
pprint("(φ₂, ξ):")
pprint(iso_round(p, prec))
pprint("Note: φ₂ / deg:")
tmp = degrees(p[0])
pprint(iso_round(tmp, prec))
pprint("\nPart d):")
sub_list = [
(p2, sympy_sol[p2]),
(xi, sympy_sol[xi]),
]
pprint("\nJ for solution of part c):")
J = J.subs(sub_list)
pprint(J)
# Unknowns:
p2p, xip = var("φ₂', ξ'")
unk = Matrix([p2p, xip])
rhs = l1/m * 1 / s * Matrix([s1, -c1])
# Linear System:
eq = Eq(J*unk, rhs)
sol = solve(eq,[p2p, xip], dict=True)
pprint("\nSolution:")
pprint(sol[0])
# --- Numerical Solution using SymPy's nsolve ---
#
# Initial (φ₂ / deg, ξ):
#
# Solution:
# (φ₂ / deg, ξ):
# ⎡165.522⎤
# ⎢ ⎥
# ⎣ 0.841 ⎦
#
# --- Step-by-Step Newton-Solution ---
#
# --- Step-1 ---
#
# Jacobian in terms of (φ₂, ξ):
# ⎡ 3⋅sin(φ₂) ⎤
# ⎢ ───────── -1⎥
# ⎢ 5 ⎥
# ⎢ ⎥
# ⎣-0.6⋅cos(φ₂) 0 ⎦
#
# 1: Initial (φ₂, ξ):
# ⎡2.618⎤
# ⎢ ⎥
# ⎣ 0.8 ⎦
# Note: φ₂ / deg:
# 150.000
#
# 2: Jacobian:
# ⎡0.3 -1.0⎤
# ⎢ ⎥
# ⎣0.52 0.0 ⎦
#
# 3: Deviation Phi:
# ⎡-0.021⎤
# ⎢ ⎥
# ⎣-0.15 ⎦
#
# 4: Increment (δφ₂, δξ):
# ⎡0.289⎤
# ⎢ ⎥
# ⎣0.066⎦
#
# 5: At end of iteration step:
# (φ₂, ξ):
# ⎡2.907⎤
# ⎢ ⎥
# ⎣0.866⎦
# Note: φ₂ / deg:
# 166.540
#
# --- Step-2 ---
#
# 1: Using (φ₂, ξ) from end of last step.
#
# 2: Jacobian:
# ⎡0.14 -1.0⎤
# ⎢ ⎥
# ⎣0.584 0.0 ⎦
#
# 3: Deviation Phi:
# ⎡-0.023⎤
# ⎢ ⎥
# ⎣ 0.01 ⎦
#
# 4: Increment (δφ₂, δξ):
# ⎡-0.018⎤
# ⎢ ⎥
# ⎣-0.025⎦
#
# 5: At end of iteration step:
# (φ₂, ξ):
# ⎡2.889⎤
# ⎢ ⎥
# ⎣0.841⎦
# Note: φ₂ / deg:
# 165.525
#
# Part d):
#
# J for solution of part c):
#
# Solution:
# ⎧ -0.217082039324994 -0.447213595499958 ⎫
# ⎨ξ': ───────────────────, φ₂': ───────────────────⎬
# ⎩ s s ⎭
```
|
4709046ef8ec41f0691c107afbcbdb2c76050483
| 11,259 |
ipynb
|
Jupyter Notebook
|
ipynb/HTM_03/Uebungen/3.3_cc.ipynb
|
kassbohm/wb-snippets
|
f1ac5194e9f60a9260d096ba5ed1ce40b844a3fe
|
[
"MIT"
] | null | null | null |
ipynb/HTM_03/Uebungen/3.3_cc.ipynb
|
kassbohm/wb-snippets
|
f1ac5194e9f60a9260d096ba5ed1ce40b844a3fe
|
[
"MIT"
] | null | null | null |
ipynb/HTM_03/Uebungen/3.3_cc.ipynb
|
kassbohm/wb-snippets
|
f1ac5194e9f60a9260d096ba5ed1ce40b844a3fe
|
[
"MIT"
] | null | null | null | 34.643077 | 281 | 0.40794 | true | 2,415 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.845942 | 0.798187 | 0.67522 |
__label__eng_Latn
| 0.127964 | 0.407094 |
# Lesson 1 - Introduction to Artificial Neural Networks
This lesson will focus on the history of the challenges that researchers in the field of Artificial Neural Networks (ANNs) have met and solved.
The course attendees will understand the foundations on which ANNs have been (and currently are) developed, and will thus better appreciate the majority of state-of-the-art solutions.
### Summary
* [Lexicon](#lexicon)
* [Parametric learning](#parametric_learning)
* [1943: Linear Threshold Unit](#ltu)
* [1949: Hebbian learning](#hebb)
* [1957: Perceptron algorithm](#perceptron)
* [1960: Delta rule](#delta_rule)
* [1989: Universal Approximation Theorem](#uat)
* [1986: Backpropagation algorithm](#backprop)
<a id=lexicon></a>
### Lexicon
In order to allow an easy translation from a *classical* statistical framework to the modern *statistical learning* framework, this notebook provides a brief vocabulary which will be used throughout the course.
The usual statistical inference problem is framed as a search for a mapping
$$f: X \to Y$$
where $X$ is a set representing what we can observe in reality, while $Y$ represents what we would like to know about reality, which we suppose is related to what we observe.
To put it simply:
* $X$ represents **what we know**
* $Y$ represents **what we want to know**
$X$ is usually a real vector space whose coordinates quantify the relevant and quantifiable properties of the problem's domain.
$Y$ is usually a low-dimensional real vector space (for continuous regression) or a finite set (for logistic regression, AKA classification).
The points in $X$ are usually modeled as random vectors $x = (x_1, x_2, \dots x_n) \in \mathbb{R}^n$ whose components are called *independent variables*, *observables* or *predictors* in classical statistics and **inputs** in statistical learning.
The points in $Y$ are usually modeled as random vectors $y = (y_1, y_2, \dots y_m) \in \mathbb{R}^m$ or by a discrete random variable $y \in \{Y_1, Y_2, \dots Y_C\}$ whose components are called *dependent variables*, *targets* or *responses* in classical statistics and **outputs** in statistical learning.
Many classical statistics techniques do not infer $y$ as a direct response to $x$
$$y = f(x)$$
but rather on a preliminary transformation of $x$:
$$y = f(\phi(x))$$
The transformed points $\phi(x)$ are called *latent variables* or *transformed variables* in classical statistics and **features** in statistical learning, while the map $\phi$ is called a **feature map**.
<a id=parametric_learning></a>
### Parametric learning
Suppose we observe a set $E = \{(x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), \dots (x^{(N)}, y^{(N)})\} \subset X \times Y$; we call these observations **experience**.
We would like to have a function $f: X \to Y$ that satisfies the observed pairs, but that could also be applied confidently to unseen instances $x \in X$; this last capability to extend $f$ to unseen examples is called **generalization**.
When $Y$ is continuous, both univariate and multivariate, this statistical inference task is referred to as **regression**.
If the sought relationship $f \subset X \times Y$ is linear (i.e. $y_j = \sum_{i=1}^{n} w_{ji} x_i$) we talk about **linear regression**; otherwise the task is called **non-linear regression**.
When $Y$ is a finite set, the problem is called **classification**.
With a slight abuse, we will refer to classification as **logistic regression**, and in particular we will talk about **binomial logistic regression** when $|Y| = 2$ and of **multinomial logistic regression** when $|Y| = k, k > 2$.
A brief note about the etymology of the terms.
The term *regression* was coined in the 19th century by the English statistician Francis Galton to refer to the phenomenon of *regression towards mediocrity in hereditary stature*, an 1886 study in which he extensively used line-fitting techniques, but where the term *regression* indicated an outcome of the analysis rather than the techniques themselves!
The term *logistic* was introduced in 1844 by the Belgian statistician Pierre Verhulst in his work *Recherches mathématiques sur la loi d'accroissement de la population*, polemicizing against exponential models of a population's growth, which relied on the unreasonable assumption of unlimited resources; he adopted more logical hypotheses and produced growth plots with a sigmoid shape, a function that he named *logistic* because it was the result of a model built on *logical* assumptions.
How was this term transferred to the classification problem? As we will see, this *logistic function* will play a critical role for Artificial Neural Networks in general, and for classification tasks in particular.
The general problem of **machine learning** is a particular case of the more general statistical learning framework, since all the experience available to an agent is in the form of digital data.
Classical statistical inference defines the concept of a *parametric model*, a space of functions
$$\mathcal{F}_{\Theta} = \{f_{\theta}: X \to Y\}_{\theta \in \Theta}$$
parametrized by the elements of $\Theta$, within which the closest (in a metric-space sense) approximation $f_{\theta^*}$ to the real map $f: X \to Y$ is sought by whatever statistical procedure the investigator prefers.
Since machine learning is a subfield of Artificial Intelligence, in this context models are also called **agents**.
Statistical learning defines the concept of a **risk functional** (also called **loss functional** or **cost functional**)
$$L: \mathcal{F}_{\Theta} \to \mathbb{R}$$
that can evaluate models of the chosen family. This concept is critical, since it allows one to *translate* the analytical procedure of functional minimization into a *navigation* of the parameter space $\{\theta_0, \theta_1, \dots \theta^*\} \subset \Theta$ aimed at finding an optimal solution $\theta^*$.
This process of iteratively *modifying the agent's parameters to minimize the risk functional* is called **learning**.
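As a minimal illustration of this navigation of parameter space (the model, data, and learning rate below are all illustrative), here is a hand-rolled gradient-descent loop for a one-parameter least-squares model:
```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.0 * x + 0.1 * rng.normal(size=100)  # the "experience" E: pairs (x, y)

theta = 0.0   # initial parameter theta_0
alpha = 0.1   # step size
for _ in range(50):
    grad = np.mean(2 * (theta * x - y) * x)  # gradient of the mean squared risk
    theta -= alpha * grad                    # one "learning" step theta_k -> theta_{k+1}

print(theta)  # ends near the true slope 2.0
```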
```python
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.preprocessing import scale
# create dataset
X, Y_hat = make_blobs(n_samples=150, n_features=3, centers=2)
X = scale(X)
Y_hat = 2*Y_hat - 1.0
# plot data
fig1 = plt.figure()
ax = fig1.gca(projection='3d')
ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=Y_hat, cmap='bwr')
plt.show()
```
<a id=ltu></a>
### 1943: Linear Threshold Unit
Idea: the original [paper](https://pdfs.semanticscholar.org/5272/8a99829792c3272043842455f3a110e841b1.pdf) by McCulloch and Pitts essentially stated that **a threshold neuron with binary output implementing a weighted sum of inputs can emulate first-order logic sentences (NOT/AND/OR functions)**.
This ensures that first-order logic statements such as *"this point belongs/does not belong to a set"* can be computed by setting suitable thresholds on a linear combination of values.
Let
$$x = (x_1, x_2, \dots x_n) \in \mathbb{R}^n$$
be an input vector.
A **Linear Threshold Unit (LTU)** models the following:
$$LTU(x) = \begin{cases} 1, & \mbox{if } \sum_{i=1}^{n} w_i x_i \geq b \\ -1, & \mbox{if } \sum_{i=1}^{n} w_i x_i < b \end{cases}$$
where $b$ is a *firing threshold* (or simply *threshold*) specifying the amount of *stress*, accumulated in the neuron as a linear combination of stimuli, required before the neuron can *fire* a non-zero signal.
Improperly, this function is often referred to as a *Heaviside-activated linear unit*, but the Heaviside function is defined as
$$H(t) = \begin{cases} 0, & \mbox{if } t < 0 \\ \frac{1}{2}, & \mbox{if } t = 0 \\ 1, & \mbox{if } t > 0 \end{cases}$$
Mathematically, an LTU is able to draw a hyperplane in $\mathbb{R}^n$, as shown by the example below.
```python
class LTU():
def __init__(self, n_dim):
# init LTU
self.weights = np.random.randn(n_dim)
self.bias = np.random.randn()
def linear(self, x):
return np.dot(x, self.weights) + self.bias
def activate(self, linear):
return np.where(linear >= 0, 1, -1)
def predict(self, x):
return self.activate(self.linear(x))
```
```python
ltu = LTU(X.shape[1])
# generate a meshgrid to compute LTU hyperplane
x = np.arange(-2.0, 2.0, 0.1)
y = np.arange(-2.0, 2.0, 0.1)
xx, yy = np.meshgrid(x, y)
# plot LTU hyperplane onto data
fig2 = plt.figure()
ax21 = fig2.gca(projection='3d')
ax21.scatter(X[:, 0], X[:, 1], X[:, 2], c=Y_hat, cmap='bwr')
zz_ltu = -(np.dot(np.transpose(np.vstack([xx[None, :], yy[None, :]]), (1, 2, 0)), ltu.weights[:-1]) + ltu.bias) / ltu.weights[-1]
ax22 = fig2.gca()
ax22.plot_surface(xx, yy, zz_ltu, alpha=0.2)
plt.show()
```
Problem: the threshold of the McCulloch-Pitts neuron needs to be set by hand.
<a id=hebb></a>
### 1949: Hebbian learning
Idea: in his [book](http://s-f-walker.org.uk/pubsebooks/pdfs/The_Organization_of_Behavior-Donald_O._Hebb.pdf) on neuropsychology, on the basis of experimental observations, Hebb stated that **"when the axon of a cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased."**
The main implication of this statement is that connected neurons should exhibit a self-organization property in order to optimize their communication process.
To shape this idea into the mathematical terms of an LTU, Hebb's goal is to find a constant $\xi > 1$ such that
$$\langle x, w \rangle = y = \langle \frac{x}{\xi}, w + \Delta w \rangle$$
That is, to change the strengths $w$ of the connections in such a way that, after the *metabolic change*, the same activation $y$ can be obtained with an *attenuated version* $\frac{x}{\xi}$ of the inputs.
A simple way to do this is aligning or counteraligning the weights with the input pattern $x$
$$\Delta w = \alpha y x$$
where $\alpha$ is a small positive constant called **learning rate**, which guarantees
$$\xi = 1 + \alpha \|x\|^2$$
as desired.
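A tiny numeric check of this claim (the vectors and learning rate below are arbitrary):
```python
import numpy as np

x = np.array([1.0, 2.0, -1.0])
w = np.array([0.5, -0.3, 0.2])
alpha = 0.1

y = x @ w
w_new = w + alpha * y * x        # Hebbian update
xi = 1 + alpha * np.dot(x, x)    # predicted attenuation factor

# the attenuated input with the strengthened weights reproduces the activation
print(np.allclose((x / xi) @ w_new, y))  # True
```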
Problem: Hebb did not provide a formal procedure to train an LTU neuron automatically.
<a id=perceptron></a>
### 1957: Perceptron algorithm
Idea: Frank Rosenblatt was the first to realize a working Hebbian model based on an LTU neuron.
This *adaptive artificial neuron*, which was capable of learning autonomously, was called the [perceptron](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.335.3398&rep=rep1&type=pdf).
The training algorithm proposed by Rosenblatt iteratively presents patterns $x^{(i)}$ to the LTU, which predicts $LTU(x^{(i)}) = y^{(i)} \in \{-1, 1\}$, but hides the known desired target $\hat{y}^{(i)}$ from it; after each presented pattern, the difference $\delta y^{(i)} = \hat{y}^{(i)} - y^{(i)}$ determines whether the LTU response should have been higher or lower; this **error signal** is finally used to determine which connections to inhibit ($|w + \Delta w| < |w|$) and which to excite ($|w + \Delta w| > |w|$):
$$\Delta w = \alpha \delta y^{(i)} x^{(i)}$$
The main difference with the Hebbian learning rule is that the perceptron learning is actually *anti-Hebbian* in that it tries to force the LTU to close the existing gap with a known pattern (**supervised learning**), while Hebbian learning creates a self-stimulating loop (the goal is the reduction of the intensity of the input).
The perceptron algorithm can also handle **multivariate logistic regression**: supposing we want to predict $\hat{y} = (\hat{y}_1, \hat{y}_2)$, we could use two distinct (and not mutually interconnected) perceptrons $LTU_1(x) = y_1$ and $LTU_2(x) = y_2$ to model the two components of the output.
This ensemble of units is a first example of a **layer** of artificial neurons, which is a fundamental concept in modern Artificial Neural Network research.
```python
class Perceptron(LTU):
def __init__(self, n_dim, alpha=0.001, n_epochs=10):
LTU.__init__(self, n_dim)
self.bias = 0
self.alpha = alpha
self.n_epochs = n_epochs
# record "heatmaps" of updates
self.heatmaps = list()
def train_one_step(self, x, y_hat):
y = self.predict(x)
delta = y_hat - y
delta_w = self.alpha * delta * x
delta_b = self.alpha * delta
self.weights += delta_w
self.bias += delta_b
# store "heatmap" of current update
self.heatmaps.append(np.array([x, delta_w]))
def train(self, X, Y_hat):
for i_epoch in range(self.n_epochs):
for (x, y_hat) in zip(X, Y_hat):
self.train_one_step(x, y_hat)
print('Epoch {:2d} - Accuracy: {:6.2f}%'.format(i_epoch+1, self.accuracy(X, Y_hat)))
def accuracy(self, X, Y_hat):
Y = self.predict(X)
accuracy = 100 * (np.sum(np.where(Y==Y_hat, 1, 0)) / len(Y))
return accuracy
```
```python
perceptron = Perceptron(X.shape[1])
# plot initial hyperplane onto data
fig3 = plt.figure()
ax31 = fig3.gca(projection='3d')
ax31.scatter(X[:, 0], X[:, 1], X[:, 2], c=Y_hat, cmap='bwr')
zz_perceptron = -(np.dot(np.transpose(np.vstack([xx[None, :], yy[None, :]]), (1, 2, 0)), perceptron.weights[:-1]) + perceptron.bias) / perceptron.weights[-1]
ax32 = fig3.gca()
ax32.plot_surface(xx, yy, zz_perceptron, alpha=0.2)
perceptron.train(X, Y_hat)
# plot "heatmaps"
heatmaps = np.array(perceptron.heatmaps)
fig4, (ax41, ax42) = plt.subplots(1, 2)
ax41.imshow(heatmaps[:10, 0, :], cmap='hot', vmin=-2.0, vmax=2.0)
ax42.imshow(heatmaps[:10, 1, :], cmap='hot', vmin=-2.0*(perceptron.alpha*2.0), vmax=2.0*(perceptron.alpha*2.0))
plt.show()
# plot trained hyperplane onto data
fig5 = plt.figure()
ax51 = fig5.gca(projection='3d')
ax51.scatter(X[:, 0], X[:, 1], X[:, 2], c=Y_hat, cmap='bwr')
zz_perceptron = -(np.dot(np.transpose(np.vstack([xx[None, :], yy[None, :]]), (1, 2, 0)), perceptron.weights[:-1]) + perceptron.bias) / perceptron.weights[-1]
ax52 = fig5.gca()
ax52.plot_surface(xx, yy, zz_perceptron, alpha=0.2)
plt.show()
```
Problem: the perceptron is constrained to use a step activation function, and is thus limited to solving classification problems.
<a id=delta_rule></a>
### 1960: Delta rule
Idea: a [paper](http://www.dtic.mil/dtic/tr/fulltext/u2/241531.pdf) by Bernard Widrow and Ted Hoff described a modification of the perceptron algorithm, showing that **the step activation function was not a necessary condition for Hebbian learning**.
The two key tools on which the Delta rule relies are the chain rule for a composed function $f(w) = f(\phi(w))$
$$\frac{\partial f}{\partial w} = \frac{\partial f}{\partial \phi} \frac{\partial \phi}{\partial w}$$
and the steepest descent numerical optimization method
$$\Delta w = -\alpha \nabla_w f$$
which allow us to replace the error signal $\delta y^{(i)}$ in a suitable manner.
This is where the Delta rule takes its name.
The Widrow-Hoff neuron, called the ADAptive LInear NEuron or **Adaline**, differs from an LTU in that its output is computed without thresholding the internal neuron score.
This implies that the error signal $\delta y^{(i)}$ is not constrained to the set $\{-2, 0, +2\}$, but can be proportional to the distance from the desired output, $\delta y^{(i)} \propto (\hat{y}^{(i)} - y^{(i)})$.
This effect is obtained by applying a Least Mean Squares (LMS) loss
$$L(x) = \frac{1}{2} (\hat{y} - y)^2$$
to the Adaline's output $Adaline(x) = y$.
This yields the update:
$$\begin{align} \Delta w_{ij} &= -\alpha \frac{\partial L}{\partial y_j} \frac{\partial y_j}{\partial w_{ij}} \\ &= -\alpha (-(\hat{y}_j - y_j)) x_i \end{align}$$
An advantage of this model is that **it can also handle linear regression**: this follows from the fact that the desired response $\hat{y}$ can be any real number, and is not constrained to be a binary variable.
```python
# noisy line dataset
def noisy_line(noise=0.01):
n_samples = 150
X = np.random.uniform(low=-1.0, high=1.0, size=(n_samples, 1))
theta = np.random.uniform(low=-0.2, high=0.2, size=(2, 2))
Y_hat = theta[0, :] + X * theta[1, :]
Y_hat += noise * np.random.randn(n_samples, 1)
return theta, X, Y_hat
theta, X, Y_hat = noisy_line()
# plot data
fig6 = plt.figure()
ax = fig6.gca(projection='3d')
ax.scatter(X, Y_hat[:, 0], Y_hat[:, 1])
plt.show()
```
```python
class Adaline():
def __init__(self, in_dim, out_dim, alpha=0.01, n_epochs=10):
self.weights = 0.01 * np.random.randn(in_dim, out_dim)
self.biases = np.zeros((1, out_dim))
# training hyperparameters
self.alpha = alpha
self.n_epochs = n_epochs
self.losses = list()
def preprocess(self, x):
if x.ndim == 1:
x = x.reshape((x.shape[0], -1))
return x
def linear(self, x):
return self.biases + np.dot(x, self.weights)
def activate(self, linear):
return linear
def predict(self, x):
x = self.preprocess(x)
return self.activate(self.linear(x))
def train_one_step(self, x, y_hat):
y = self.predict(x)
delta_y = y_hat - y
loss = np.sum(np.square(delta_y)) / 2
grad_y = -delta_y
grad_w = x.T * grad_y
grad_b = grad_y
delta_w = -self.alpha * grad_w
delta_b = -self.alpha * grad_b
self.weights += delta_w
self.biases += delta_b
def train(self, X, Y_hat):
for i_epoch in range(self.n_epochs):
for (x, y_hat) in zip(X, Y_hat):
self.train_one_step(x, y_hat)
loss = self.loss(X, Y_hat)
self.losses.append(loss)
print('Epoch {:2d} - Loss: {:6.2f}'.format(i_epoch+1, loss))
def loss(self, X, Y_hat):
Y = self.predict(X)
delta = Y_hat - Y
loss = np.sum(np.square(delta)) / 2
return loss
```
```python
adaline = Adaline(X.shape[1], Y_hat.shape[1])
# generate a mesh to compute Adaline predictions
x = np.linspace(-1.0, 1.0)
# plot untrained Adaline predictions onto data
fig7 = plt.figure()
ax71 = fig7.gca(projection='3d')
ax71.scatter(X, Y_hat[:, 0], Y_hat[:, 1])
y = adaline.predict(x)
ax72 = fig7.gca()
ax72.plot(x, y[:, 0], y[:, 1], c='r')
adaline.train(X, Y_hat)
# plot trained Adaline predictions onto data
fig8 = plt.figure()
ax81 = fig8.gca(projection='3d')
ax81.scatter(X, Y_hat[:, 0], Y_hat[:, 1])
y = adaline.predict(x)
ax82 = fig8.gca()
ax82.plot(x, y[:, 0], y[:, 1], c='r')
plt.show()
fig9 = plt.figure()
plt.plot(adaline.losses)
plt.show()
print('Bias (real): ', theta[0, :])
print('Bias (learnt): ', adaline.biases)
print('Weights (real): ', theta[1:, :])
print('Weights (learnt) : ', adaline.weights)
```
Problem: both the perceptron and Adaline are intrinsically incapable of non-linear discrimination.
Consider the case of a circular dense blob of points surrounded by a circular crown of qualitatively different points.
As humans, we could easily discriminate them analytically through a transformation from a Cartesian coordinate system to a polar one:
$$\begin{align} \rho &: \mathbb{R}^2 \to \mathbb{R} \\ &(x_1, x_2) \mapsto \rho = \sqrt{x_1^2 + x_2^2} \end{align}$$
The polar reference allows for a straightforward classification:
$$f((x_1, x_2)) = \begin{cases} +1 , & \mbox{if } \rho((x_1, x_2)) < \bar{\rho} \\ -1 , & \mbox{if } \rho((x_1, x_2)) \geq \bar{\rho} \end{cases}$$
where $\bar{\rho}$ is a suitable threshold.
The perceptron and Adaline have no hope of solving this problem.
```python
from sklearn.datasets import make_circles
# generate non-linearly separable data
X, Y_hat = make_circles(n_samples=150, noise=0.1, factor=0.2)
# plot non-linearly separable data
fig10 = plt.figure()
ax101 = fig10.gca()
ax101.scatter(X[:, 0], X[:, 1], c=Y_hat, cmap='bwr')
plt.show()
```
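The polar feature, however, makes the problem linearly separable. A quick sketch reusing the `X`, `Y_hat` generated above (the threshold is an illustrative guess; `make_circles` produces 0/1 labels, with 1 on the inner circle):
```python
# map each point to its radius and threshold it, as in the formula above
rho = np.sqrt(X[:, 0]**2 + X[:, 1]**2)
rho_bar = 0.6                             # between the inner blob and the crown
Y_pred = np.where(rho < rho_bar, 1, 0)    # predict 1 for the inner circle

print('Accuracy with the polar feature: {:.1f}%'.format(100 * np.mean(Y_pred == Y_hat)))
```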
<a id=uat></a>
### 1989: Universal Approximation Theorem
Idea: George Cybenko proved a [critical result](http://web.eecs.umich.edu/~cscott/smlrg/approx_by_superposition.pdf) about multilayer neural networks with at least one hidden layer.
In particular, he found that **given an arbitrary continuous function $\phi$, a two-layer Artificial Neural Network with sigmoid activation functions and a sufficiently large number of neurons in the hidden layer is capable of approximating $\phi$ arbitrarily closely**.
A few years later, Hornik [generalized the result](http://zmjones.com/static/statistical-learning/hornik-nn-1991.pdf) to other classes of activation functions: sigmoid activations are not a necessary condition for learning non-linearities.
Problem: the missing link is an extension of the perceptron algorithm and the Delta rule to multilayer networks.
<a id=backprop></a>
### 1986: Backpropagation algorithm
Idea: David Rumelhart and Geoffrey Hinton [extended the Delta rule](https://www.iro.umontreal.ca/~vincentp/ift3395/lectures/backprop_old.pdf) to compute the infinitesimal changes to be applied to an arbitrary network's parameters in order to reduce its error at the task.
This method can be used to efficiently train multilayer neural networks.
The possibility of training models with more than just one hidden layer enables the construction of **Deep Neural Networks** (DNNs for short), Artificial Neural Networks that try to exploit the UAT not by adding more and more neurons to a single hidden layer, but by **creating a hierarchy of two or more transformations** to ease the process of extracting the desired mapping $f: X \to Y$.
To summarize:
* single-layer models are equivalent to statistical inference that does not use any intermediate feature map;
* two-layer models (i.e. models with just a single hidden layer, also called *shallow models*) are equivalent to statistical inference that uses just a single feature map;
* models with three or more layers (also called *deep models*) are equivalent to statistical inference that uses many intermediate feature maps.
As a concluding note for this lesson, we will give the analytical formulas for the backpropagation algorithm.
Suppose we have an $L$-feature-map model
$$f(x) = f(\phi_{L}(\phi_{L-1}(\dots \phi_1(x, w_{(1)})\dots, w_{(L-1)}), w_{(L)}))$$
To adjust the parameters of an arbitrary $k$-th feature map, the backpropagation algorithm works in two steps.
The first is using the chain rule to compute the partial derivative of the model loss with respect to the parameters to be adjusted:
$$\frac{\partial L}{\partial w_{(k)}} = \frac{\partial L}{\partial f} \frac{\partial f}{\partial \phi_{L}} \left( \prod_{i=L}^{i=k+1}\frac{\partial \phi_{i}}{\partial \phi_{i-1}} \right) \frac{\partial \phi_{k}}{\partial w_{(k)}}$$
The second step consists in applying whichever numerical optimization algorithm the investigator deems suitable for the problem; for example, steepest gradient descent yields the update:
$$\Delta w_{(k)} = -\alpha \frac{\partial L}{\partial w_{(k)}}$$
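A compact numpy sketch of these two steps for a two-layer network with sigmoid hidden units (all shapes, data, and the learning rate are illustrative):
```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 3))      # one input pattern
y_hat = np.array([[0.5]])        # desired output

W1, W2 = rng.normal(size=(3, 2)), rng.normal(size=(2, 1))
alpha = 0.1

for _ in range(100):
    # forward pass: phi_1, then f
    h = 1 / (1 + np.exp(-(x @ W1)))       # hidden features phi_1(x, W1)
    y = h @ W2                            # network output
    # backward pass: the chain rule, layer by layer
    dL_dy = y - y_hat                     # dL/dy for L = (y_hat - y)^2 / 2
    dL_dW2 = h.T @ dL_dy
    dL_dh = dL_dy @ W2.T
    dL_dW1 = x.T @ (dL_dh * h * (1 - h))  # sigmoid derivative h(1 - h)
    # steepest-descent updates
    W2 -= alpha * dL_dW2
    W1 -= alpha * dL_dW1

print(y.item())  # approaches 0.5
```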
### References
**History of single-layer ANNs**: this nice [blog post](http://sebastianraschka.com/Articles/2015_singlelayer_neurons.html) by Sebastian Raschka provides additional examples and visualizations.
**Backpropagation**: I definitely suggest to read [Stanford's lesson](http://cs231n.github.io/optimization-2/) for an intuitive explanation (or watch its [Youtube version](https://www.youtube.com/watch?v=59Hbtz7XgjM) for the laziest) and Michael Nielsen's [online book](http://neuralnetworksanddeeplearning.com/) for a better mathematical understanding.
|
f14418967618ec8967ee8c789637e28e2551d0c0
| 559,526 |
ipynb
|
Jupyter Notebook
|
Lesson_1.ipynb
|
spallanzanimatteo/deepteaching
|
2f4961b994c60aeb3bd9ac950e58ad650ab2237c
|
[
"BSD-2-Clause"
] | 2 |
2019-05-15T10:21:39.000Z
|
2019-05-30T09:00:01.000Z
|
Lesson_1.ipynb
|
spallanzanimatteo/deepteaching
|
2f4961b994c60aeb3bd9ac950e58ad650ab2237c
|
[
"BSD-2-Clause"
] | null | null | null |
Lesson_1.ipynb
|
spallanzanimatteo/deepteaching
|
2f4961b994c60aeb3bd9ac950e58ad650ab2237c
|
[
"BSD-2-Clause"
] | null | null | null | 738.16095 | 82,702 | 0.933714 | true | 6,478 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.651355 | 0.787931 | 0.513223 |
__label__eng_Latn
| 0.985495 | 0.030718 |
<a href="https://colab.research.google.com/github/shodimaggio/VieWork/blob/master/vie_resolution.ipynb" target="_parent"></a>
# Worked example for Exercise (2)-2
Compute the angle from the center to the top:
\begin{equation}
\phi=\tan^{-1}\frac{0.5h}{3.0h}
\end{equation}
Doubling $\phi$ gives the viewing angle $\theta$:
\begin{equation}
\theta = 2\phi
\end{equation}
Convert radians to arc minutes:
\begin{equation}
1 \mathrm{[rad]} = \frac{180\times 60}{\pi} \mathrm{[arc minute]}
\end{equation}
```
import math
phi = math.atan(0.5/3.0)
theta = 2*phi
print('theta = {} [arc min]'.format(theta/math.pi*180*60))
```
theta = 1135.4786649630742 [arc min]
|
97cbf250fa2dbf9876c6dffba85861293f3895bb
| 2,127 |
ipynb
|
Jupyter Notebook
|
python/vie_sec2_resolution.ipynb
|
shodimaggio/VieWork
|
d007e91d0df24683356c0304599a2976950890f1
|
[
"MIT"
] | null | null | null |
python/vie_sec2_resolution.ipynb
|
shodimaggio/VieWork
|
d007e91d0df24683356c0304599a2976950890f1
|
[
"MIT"
] | null | null | null |
python/vie_sec2_resolution.ipynb
|
shodimaggio/VieWork
|
d007e91d0df24683356c0304599a2976950890f1
|
[
"MIT"
] | null | null | null | 24.448276 | 232 | 0.433004 | true | 254 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.936285 | 0.831143 | 0.778187 |
__label__yue_Hant
| 0.406403 | 0.646321 |
```python
import powersddp as psddp
TestSystem = psddp.PowerSystem(path='system.yml')
```
```python
operation = TestSystem.dispatch(solver='ulp', scenario=1, plot=True)
```
## Decision Variables
### Hydro Units
- Final volume $v_f$: The final volume of the reservoir after the operational period
- Turbined flow $v_t$: The amount of water that was turbined during the period
- Shed volume $v_v$: The amount of water that was shed in the period
- Initial volume $v_i$
- Influx $afl$
### Thermal Units
- total generation $g_t$: The total amount of generation provided by the unit during the period
### System
- Outage $def$: The total amount of power that will not be delivered by the system
## The Objective Function
Assuming a problem with 3 generation units (2 TGUs and 1 HGU), let's write down the objective function of our problem:
$$
\begin{equation}
\begin{aligned}
\min \quad & C_1\cdot g_{t_1} + C_2\cdot g_{t_2} + C_{def}\cdot def + 0.01\cdot v_v\\
\textrm{s.t.} \quad & \\
\textrm{hydro balance} \quad & v_f(i) = v_i(i) + afl(i) - v_t(i) - v_v(i) \\
\textrm{load supplying} \quad & \rho\cdot v_t + g_{t_1} + g_{t_2} + def = \textrm{load}\\
\textrm{constraints} \quad & \\
& v_{f_{min}}\leq v_f \leq v_{f_{max}}\\
& v_{t_{min}}\leq v_t \leq v_{t_{max}}\\
& v_{v_{min}}\leq v_v \leq v_{v_{max}}\\
& g_{t_{min}}\leq g_t^\ast \leq g_{t_{max}}\\
^\ast \textrm{for each TGU}&
\end{aligned}
\end{equation}
$$
> Later we shall also add the Future Cost Function $\alpha$ to the objective function
```python
import cvxopt.modeling as model
from cvxopt import solvers
import pandas as pd
import plotly.graph_objects as go
from plotly.subplots import make_subplots
solvers.options['glpk'] = dict(msg_lev='GLP_MSG_OFF')
def dispatch(system, v_i, inflow, cuts, stage, verbose:bool=False):
n_tgu = len(system.data['thermal-units'])
n_hgu = len(system.data['hydro-units'])
solvers.options['show_progress'] = verbose
## Initializing Model Variables
v_f = model.variable(n_hgu, "Final Volume of the Hydro Unit")
v_t = model.variable(n_hgu, "Turbined Flow of the Hydro Unit")
v_v = model.variable(n_hgu, "Shed flow of the Hydro Unit")
g_t = model.variable(n_tgu, "Power generated by the Thermal Unit")
deficit = model.variable(1, "Power deficit")
alpha = model.variable(1, "Future Cost")
## Objective Function
fob = 0
for i, tgu in enumerate(system.data['thermal-units']):
fob += tgu["cost"]*g_t[i]
    fob += system.data['outage_cost']*deficit[0]  # use this function's own `system`, not the global TestSystem
for i, _ in enumerate(system.data['hydro-units']):
fob += 0.01*v_v[i]
fob += 1.0 * alpha[0]
## Constraints
### Hydro Balance
constraints = []
for i, hgu in enumerate(system.data['hydro-units']):
constraints.append( v_f[i] == float(v_i[i]) + float(inflow[i]) - v_t[i] - v_v[i] )
supplying = 0
### Load Supply
for i, hgu in enumerate(system.data['hydro-units']):
supplying += hgu["prod"] * v_t[i]
for i, tgu in enumerate(system.data['thermal-units']):
supplying += g_t[i]
supplying += deficit[0]
constraints.append(supplying == system.data['load'][stage-2])
### Bounds
for i, hgu in enumerate(system.data['hydro-units']):
constraints.append(v_f[i] >= hgu["v_min"])
constraints.append(v_f[i] <= hgu["v_max"])
constraints.append(v_t[i] >= 0)
constraints.append(v_t[i] <= hgu["flow_max"])
constraints.append(v_v[i] >= 0)
for i, tgu in enumerate(system.data['thermal-units']):
constraints.append(g_t[i] >= 0)
constraints.append(g_t[i] <= tgu["capacity"])
constraints.append(deficit[0] >= 0)
constraints.append(alpha[0] >= 0)
### Cut constraint (Future cost function of forward stage)
for cut in cuts:
if cut['stage'] == stage:
equation = 0
for hgu in range(n_hgu):
equation += float(cut['coefs'][hgu])*v_f[hgu]
equation += float(cut['coef_b'])
constraints.append(alpha[0] >= equation)
## Solving
opt_problem = model.op(objective=fob, constraints=constraints)
opt_problem.solve(format='dense',solver='glpk')
## Print
if verbose:
print("Total Cost: {}".format(fob.value()))
for i, hgu in enumerate(system.data['hydro-units']):
print("{} {} is {} hm3".format(v_f.name,i,v_f[i].value()))
print("{} {} is {} hm3".format(v_t.name,i,v_t[i].value()))
print("{} {} is {} hm3".format(v_v.name,i,v_v[i].value()))
for i, tgu in enumerate(system.data['thermal-units']):
print("{} {} is {} MWmed".format(g_t.name,i,g_t[i].value()))
print("{} is {} MWmed".format(deficit.name,deficit[0].value()))
for i, hgu in enumerate(system.data['hydro-units']):
print("The cost of water at Hydro Unit {} is {} hm3".format(i,constraints[i].multiplier.value))
print("The Marginal Cost is: {}".format(constraints[n_hgu].multiplier.value))
return {
"deficit": deficit[0].value()[0],
"operational_marginal_cost": constraints[n_hgu].multiplier.value[0],
"total_cost": fob.value()[0],
"future_cost": alpha[0].value()[0],
"hydro_units": [{
"v_f": v_f[i].value()[0],
"v_t": v_t[i].value()[0],
"v_v": v_v[i].value()[0],
"water_marginal_cost": constraints[i].multiplier.value[0]} for i in range(n_hgu)],
"thermal_units": [{"g_t": g_t[i].value()[0]} for i in range(n_tgu)]
}
def plot_future_cost_function(operation: pd.DataFrame):
n_stages = len(operation['stage'].unique())
fig = make_subplots(rows=n_stages, cols=1)
i = 1
for stage in operation['stage'].unique():
stage_df = operation.loc[operation['stage'] == stage]
fig.add_trace(go.Scatter(x=stage_df["v_i"],
y=stage_df['average_cost'],
mode='lines',
name="Stage {}".format(i)), row=stage, col=1)
i+=1
fig.update_xaxes(title_text="Final Volume [hm3]")
fig.update_yaxes(title_text="$/MW")
fig.update_layout(height=300*TestSystem.data['stages'], title_text="Future Cost Function")
fig.show()
```
```python
from itertools import product
import numpy as np
n_hgu = len(TestSystem.data['hydro-units'])
n_tgu = len(TestSystem.data['thermal-units'])
step = 100/(TestSystem.data['discretizations']-1)
discretizations = list(product(np.arange(0,100+step,step), repeat=n_hgu))
cuts = []
operation = []
for stage in range(TestSystem.data['stages'],0,-1):
for discretization in discretizations:
v_i = []
# For Every Hydro Unit
for i, hgu in enumerate(TestSystem.data['hydro-units']):
v_i.append(hgu['v_min'] + (hgu['v_max']-hgu['v_min'])*discretization[i]/100)
# For Every Scenario
average = 0.
avg_water_marginal_cost = [0 for _ in TestSystem.data["hydro-units"]]
for scenario in range(TestSystem.data['scenarios']):
inflow = []
for i, hgu in enumerate(TestSystem.data['hydro-units']):
inflow.append(hgu['inflow_scenarios'][stage-1][scenario])
result = dispatch(TestSystem, v_i, inflow, cuts, stage+1)
average += result["total_cost"]
for i, hgu in enumerate(result["hydro_units"]):
avg_water_marginal_cost[i] += hgu["water_marginal_cost"]
# Calculating the average of the scenarios
average = average/TestSystem.data['scenarios']
coef_b = average
for i, hgu in enumerate(result["hydro_units"]):
# ! Invert the coeficient because of the minimization problem inverts the signal
avg_water_marginal_cost[i] = - avg_water_marginal_cost[i]/TestSystem.data['scenarios']
coef_b -= v_i[i]*avg_water_marginal_cost[i]
cuts.append({"stage": stage, "coef_b": coef_b, "coefs": avg_water_marginal_cost})
operation.append({'stage': stage, 'discretization': discretization[i], 'v_i': v_i[0], 'average_cost': round(average,2)})
operation_df = pd.DataFrame(operation)
if n_hgu == 1:
plot_future_cost_function(operation=operation_df)
```
```python
operation_df
```
## Considering the Future Cost Function
### Modelling the cost of water
Now let's consider the Future Cost Function and propagate the solutions backwards. By back-propagating we mean that the future cost function of the "stage ahead" is used as an input when solving the previous stage.
Assume that any Future Cost Function is approximated by a series of straight-line segments. Any given point can then be identified with a straight line, which is mathematically represented by:
$$
\begin{equation}
\begin{aligned}
\alpha = a \cdot v_f + b
\end{aligned}
\end{equation}
$$
where $\alpha$ is the cost at a given point of final volume. We shall find the coefficients $a$ and $b$:
- $a$: is the marginal cost of the water, which comes from the solution of the minimization problem.
If we assume $\alpha = 75$ and $v_f = 60$, i.e. a cost of $\$75.00$ at a final volume of $60\,hm^3$, that gives us:
$$
\begin{equation}
\begin{aligned}
b = \alpha - a \cdot v_f
\end{aligned}
\end{equation}
$$
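Plugging the numbers in (the marginal water value `a` below is an arbitrary illustrative figure):
```python
a = 0.25       # assumed marginal cost of water from the dispatch solution
alpha = 75.0   # average operation cost at this discretization
v_f = 60.0     # final volume, hm3

b = alpha - a * v_f
print(b)       # 60.0: the intercept of this cut of the future cost function
```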
> Naturally, this process is repeated for every discretization used in the problem.
> $a$ is given by the average, over every scenario considered, of the marginal cost of the water.
If we evaluate multiple Hydro Units, naturally:
$$
\begin{equation}
\begin{aligned}
\alpha = b + \sum_{i=1}^{n} a_i \cdot v_{f,i}
\end{aligned}
\end{equation}
$$
where $n$ is the number of Hydro Units and $v_{f,i}$ the final volume of unit $i$.
### Considering the cost function in the back propagation
In the previous stage (propagating backwards from the end to the beginning) we have the objective function:
$$
\begin{equation}
\begin{aligned}
\min \quad & C_1\cdot g_{t_1} + C_2\cdot g_{t_2} + C_{def}\cdot def + 0.01\cdot v_v + \alpha\\
\textrm{s.t.} \quad & \\
\textrm{hydro balance} \quad & v_f(i) = v_i(i) + afl(i) - v_t(i) - v_v(i) \\
\textrm{load supplying} \quad & \rho\cdot v_t(i) + g_{t_1} + g_{t_2} + def = \textrm{load}\\
\textrm{considering the forward state}\quad & \\
\textrm{for every scenario} `s` \quad & \alpha \geq a^{s} \cdot v_f(i) + b^{s}\\
\textrm{constraints} \quad & \\
& v_{f_{min}}\leq v_f(i) \leq v_{f_{max}}\\
& v_{t_{min}}\leq v_t(i) \leq v_{t_{max}}\\
& v_{v_{min}}\leq v_v(i) \leq v_{v_{max}}\\
& g_{t_{min}}\leq g_t^\ast \leq g_{t_{max}}\\
^\ast \textrm{for each TGU}&
\end{aligned}
\end{equation}
$$
|
14a0d622199ebd87cb4970c7edbcdff023dfa679
| 15,640 |
ipynb
|
Jupyter Notebook
|
Notebook.ipynb
|
ettoreaquino/power-sddp
|
29c2d3c3d0d06d4d08d6c55563e6598080e8a717
|
[
"MIT"
] | 2 |
2021-08-19T13:45:06.000Z
|
2021-12-30T08:30:36.000Z
|
Notebook.ipynb
|
ettoreaquino/power-sddp
|
29c2d3c3d0d06d4d08d6c55563e6598080e8a717
|
[
"MIT"
] | 7 |
2021-08-13T21:33:39.000Z
|
2021-09-09T00:59:20.000Z
|
Notebook.ipynb
|
ettoreaquino/powersddp
|
29c2d3c3d0d06d4d08d6c55563e6598080e8a717
|
[
"MIT"
] | null | null | null | 41.266491 | 219 | 0.502366 | true | 3,087 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.896251 | 0.795658 | 0.71311 |
__label__eng_Latn
| 0.686007 | 0.495124 |
```python
import numpy as np
from numba import jit
```
# nonuniform flow model
$$
\begin{align}
\frac{d}{dx} \left( \frac{\beta Q^2}{2gA^2} + H \right)= -i_e
\end{align}
$$
```python
@jit(nopython=True, parallel=False)
def NonUniformflow(sections, Q, Hdb):
g = float(9.8)
dhini = float(0.5)
H = np.empty_like(Q)
H[0] = Hdb
arr = sections[0].calIeAlphaBetaRcUsubABS(Q[0], H[0])
ied = arr[0]
Ad = arr[-3]
Hd = H[0]
Qd = Q[0]
for i in range(1, len(Q)):
d = i - 1
sc, sd = sections[i], sections[d]
Qc = Q[i]
Hc = sc.calHcABS( Qc )[0]
arr = sc.calIeAlphaBetaRcUsubABS(Qc, Hc)
iec = arr[0]
Ac = arr[-3]
dx = sc.distance - sd.distance
E1 = 0.5/g*Qc**2.0/Ac**2.0 + Hc
E2 = 0.5/g*Qd**2.0/Ad**2.0 + Hd + 0.5*dx*(ied + iec)
if E2 < E1 :
H[i] = Hc
else :
            # start just above the critical depth and bisect on dh until the
            # energy (standard-step) equation balances between the two sections
            Hc = Hc + float(0.001)
            dh = dhini
for n in range(1000):
arr = sc.calIeAlphaBetaRcUsubABS(Qc, Hc)
iec = arr[0]
Ac = arr[-3]
E1 = 0.5/g*Qc**2.0/Ac**2.0 + Hc
E2 = 0.5/g*Qd**2.0/Ad**2.0 + Hd + 0.5*dx*(ied + iec)
if np.abs(E1 - E2) < 0.00001 :
break
elif E1 > E2 :
dh *= float(0.5)
Hc -= dh
else:
Hc += dh
H[i] = Hc
Qd, Hd, ied, Ad = Qc, Hc, iec, Ac
return H
```
# unsteady flow model
$$
\begin{align}
&\frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = 0 \\
&\frac{\partial Q}{\partial t} + \frac{\partial }{\partial x}\left(\dfrac{\beta Q^2}{A}\right)
+ gA \frac{\partial H}{\partial x} + gAi_e = 0
\end{align}
$$
```python
@jit(nopython=True, parallel=False)
def UnSteadyflow(sections, A, Q, H, Abound, Qbound, dt):
g = float(9.8)
imax = len(A)
Anew, Qnew, Hnew = np.zeros(imax), np.zeros(imax), np.zeros(imax)
ie = np.zeros(imax)
Beta = np.zeros(imax)
# continuous equation
for i in range(1, imax-1) :
dx = 0.5*(sections[i-1].distance - sections[i+1].distance)
Anew[i] = A[i] - dt * ( Q[i] - Q[i-1] ) / dx
Anew[imax-1] = Abound
Anew[0] = Anew[1]
# Anew[0] = (Anew[1] - A[1]) + A[0]
for i in range(imax) :
s = sections[i]
Hnew[i], _, _ = s.A2HBS(Anew[i], H[i])
arr = s.calIeAlphaBetaRcUsubABS(Q[i], H[i])
ie[i] = arr[0]
Beta[i] = arr[2]
# moumentum equation
for i in range(1, imax-1):
ic, im, ip = i, i-1, i+1
dxp = sections[ic].distance - sections[ip].distance
dxm = sections[im].distance - sections[ic].distance
dxc = 0.5*(sections[im].distance - sections[ip].distance)
        # Courant numbers at the downstream and upstream faces
        Cr1 = 0.5*( Q[ic]/A[ic] + Q[ip]/A[ip] )*dt/dxp
        Cr2 = 0.5*( Q[ic]/A[ic] + Q[im]/A[im] )*dt/dxm
        # water-surface slopes, blended with Courant-number weights (upwinding)
        dHdx1 = ( Hnew[ip] - Hnew[ic] ) / dxp
        dHdx2 = ( Hnew[ic] - Hnew[im] ) / dxm
        dHdx = (float(1.0) - Cr1) * dHdx1 + Cr2 * dHdx2
Qnew[ic] = Q[ic] - dt * ( Beta[ic]*Q[ic]**2/A[ic] - Beta[im]*Q[im]**2/A[im] ) / dxc \
- dt * g * Anew[ic] * dHdx \
- dt * g * A[ic] * ie[ic]
Qnew[imax-1] = Qnew[imax-2]
Qnew[0] = Qbound
return Anew, Qnew, Hnew
```
|
992f2c07cde0d198a8b0ee7162f20927e7a7c913
| 5,780 |
ipynb
|
Jupyter Notebook
|
simulation1D/source/s1driverflow.ipynb
|
computational-sediment-hyd/8909090001RiverFlowSimulation
|
e68848f5481cdee14902bc45f95cfb4bb90bb578
|
[
"MIT"
] | null | null | null |
simulation1D/source/s1driverflow.ipynb
|
computational-sediment-hyd/8909090001RiverFlowSimulation
|
e68848f5481cdee14902bc45f95cfb4bb90bb578
|
[
"MIT"
] | null | null | null |
simulation1D/source/s1driverflow.ipynb
|
computational-sediment-hyd/8909090001RiverFlowSimulation
|
e68848f5481cdee14902bc45f95cfb4bb90bb578
|
[
"MIT"
] | null | null | null | 30.26178 | 118 | 0.40346 | true | 1,318 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.936285 | 0.746139 | 0.698599 |
__label__eng_Latn
| 0.190639 | 0.46141 |
```python
import numpy as np
import numpy.linalg as la
import sympy as sp
```
```python
def subs_all(formula, variables, values):
    '''
    Substitute every variable of a SymPy formula with the corresponding
    numeric value and return the result as a float.
    '''
result = formula
for i in range(len(values)):
result = result.subs(variables[i], values[i])
return float(result.evalf())
```
```python
# Generic Test Case 1 for subs_all
formula = sp.sympify('5*x**2+6*y**2+4*(z/2)**2')
variables = sp.symbols('x y z')
values = [1, 1, 1]
expected = 12
actual = subs_all(formula, variables, values)
assert expected==actual
```
```python
def calculated_jacobian(fs, variables, x_k):
'''
    fs need to EQUAL ZERO, i.e. each entry is the left-hand side of f_i(x) = 0!
'''
size = len(fs)
result = np.zeros((size, size))
for i in range(size):
for j in range(size):
result[i][j] = subs_all(sp.diff(fs[i], variables[j]), variables, x_k)
return result
def newton_nd_step(x_prev, f_strs, var_str):
'''
Given x_k, string of functions equal to 0, and string for
variables, calculate the parameter for the next iteration
of Newton's method for solving ND non-linear system of equations
'''
fs = [sp.sympify(f_str) for f_str in f_strs]
variables = sp.symbols(var_str)
jac = calculated_jacobian(fs, variables, x_prev)
f_vals = np.array([subs_all(f, variables, x_prev) for f in fs])
s = la.solve(jac, -1*f_vals)
return jac, x_prev+s
```
```python
# Generic Test Case 1 for newton_nd_step
expected = np.array([-3/2, -1/6])
jac, actual = newton_nd_step([-2, 0], ['3*x*y-1', 'x**3+y**2+2'], 'x y')
tol = 10**-7
assert la.norm(expected - actual, 2) < tol
```
```python
# Generic Test Case 2 for newton_nd_step
expected_jac = np.array([[8, -10], [12, 3]])
expected = np.array([1.25694444, -0.69444444])
jac, actual = newton_nd_step([1, -1], ['2*x**4+5*y**2-6', '4*x**3+3*y-5'], 'x y')
tol = 10**-7
assert la.norm(expected_jac - jac) < tol
assert la.norm(expected - actual) < tol
```
```python
# Workspace
f_strs = ['4*x*y-3', 'x**3+y**2+2']
x_prev = [1, 0]
var_str = 'x y'
newton_nd_step(x_prev, f_strs, var_str)
```
(array([[0., 4.],
[3., 0.]]),
array([0. , 0.75]))
```python
```
|
8ec641b39d0fad4d2a45c4ab34b825d31b9dfdcf
| 4,132 |
ipynb
|
Jupyter Notebook
|
newton_solve_nd.ipynb
|
Racso-3141/uiuc-cs357-fa21-scripts
|
e44f0a1ea4eb657cb77253f1db464d52961bbe5e
|
[
"MIT"
] | 10 |
2021-11-02T05:56:10.000Z
|
2022-03-03T19:25:19.000Z
|
newton_solve_nd.ipynb
|
Racso-3141/uiuc-cs357-fa21-scripts
|
e44f0a1ea4eb657cb77253f1db464d52961bbe5e
|
[
"MIT"
] | null | null | null |
newton_solve_nd.ipynb
|
Racso-3141/uiuc-cs357-fa21-scripts
|
e44f0a1ea4eb657cb77253f1db464d52961bbe5e
|
[
"MIT"
] | 3 |
2021-10-30T15:18:01.000Z
|
2021-12-10T11:26:43.000Z
| 25.825 | 90 | 0.504356 | true | 711 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.888759 | 0.875787 | 0.778363 |
__label__eng_Latn
| 0.541407 | 0.646731 |
```python
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats
import scipy as sp
from mpl_toolkits.mplot3d import Axes3D
import statsmodels.api as sm
import linregfunc as lr
```
E:\Anaconda\lib\site-packages\statsmodels\compat\pandas.py:65: FutureWarning: pandas.Int64Index is deprecated and will be removed from pandas in a future version. Use pandas.Index with the appropriate dtype instead.
from pandas import Int64Index as NumericIndex
# <font face="gotham" color="purple"> Common Distributions in Hypothesis Testing </font>
Here we give a quick refresher on the distributions that are commonly used in hypothesis testing.
## <font face="gotham" color="purple"> The Normal Distributions </font>
The normal distribution is the most important one of all; here we provide a graphical reminder of the bivariate normal distribution. Please check out my linear algebra notebooks, where a whole chapter is devoted to the normal distribution.
For your reference, the pdf of multivariate normal distribution is
$$
p(\boldsymbol{x} ; \mu, \Sigma)=\frac{1}{(2 \pi)^{n / 2}|\Sigma|^{1 / 2}} \exp \left(-\frac{1}{2}(x-\mu)^{T} \Sigma^{-1}(x-\mu)\right) \tag{1}\label{1}
$$
```python
x = np.linspace(-10,10,500)
y = np.linspace(-10,10,500)
X,Y = np.meshgrid(x,y)
pos = np.array([X.flatten(),Y.flatten()]).T # two columns matrix
fig = plt.figure(figsize=(14,14))
#########################
ax = fig.add_subplot(221, projection='3d')
mu_x = 0
mu_y = 0
sigma_x = 3
sigma_y = 15
rho = 3
rv = sp.stats.multivariate_normal([mu_x, mu_y], [[sigma_x, rho], [rho, sigma_y]]) # frozen distribution
ax.plot_surface(X, Y, rv.pdf(pos).reshape(500,500),cmap='coolwarm')
ax = fig.add_subplot(223)
ax.contour(rv.pdf(pos).reshape(500,500))
string1 = r'$\sigma_x = %.1f$, $\sigma_y = %.1f$, $\rho = %.1f$'% (sigma_x, sigma_y, rho)
ax.annotate(text = string1, xy=(0.2, 0.05), xycoords='axes fraction', color = 'r')
####
mu_x = 0
mu_y = 0
sigma_x = 10
sigma_y = 4
rho = 0
rv = sp.stats.multivariate_normal([mu_x, mu_y], [[sigma_x, rho], [rho, sigma_y]]) # frozen distribution
ax = fig.add_subplot(222, projection='3d')
ax.plot_surface(X, Y, rv.pdf(pos).reshape(500,500),cmap='coolwarm')
ax = fig.add_subplot(224)
ax.contour(rv.pdf(pos).reshape(500,500))
string2 = r'$\sigma_x = %.1f$, $\sigma_y = %.1f$, $\rho = %.1f$'% (sigma_x, sigma_y, rho)
ax.annotate(text = string2, xy=(0.2, 0.05), xycoords='axes fraction', color = 'r')
#########################
plt.show()
```
Keep in mind: _any linear combination of jointly normally distributed variables is again normally distributed_.
## <font face="gotham" color="purple"> The Chi-Squared Distribution </font>
If an $n$-random vector has $iid$ standard normal components, $\boldsymbol{z}\sim N(\boldsymbol{0}, \mathbf{I})$, then the random variable
$$
y = \boldsymbol{z}^T\boldsymbol{z} = \sum_{i=i}^n z_i^2
$$
is said to follow the **chi-squared distribution** with $n$ degrees of freedom. Denoted as
$$
y\sim\chi^2(n)
$$
The mean is
$$\mathrm{E}(y)=\sum_{i=1}^{n} \mathrm{E}\left(z_{i}^{2}\right)=\sum_{i=1}^{n} 1=n$$
And the variance is
$$\begin{aligned}
\operatorname{Var}(y) &=\sum_{i=1}^{n} \operatorname{Var}\left(z_{i}^{2}\right)=n \mathrm{E}\left(\left(z_{i}^{2}-1\right)^{2}\right) \\
&=n \mathrm{E}\left(z_{i}^{4}-2 z_{i}^{2}+1\right)=n(3-2+1)=2 n
\end{aligned}$$
As $n$ increases, the probability density function of $\chi^2(n)$ approaches that of $N(n, 2n)$. Here is a graphic demonstration of how the $\chi^2$ distribution changes as the d.o.f. rise.
```python
fig, ax = plt.subplots(figsize = (9, 9))
x = np.linspace(0, 50, 1000)
for i in range(1, 14):
chi_pdf = sp.stats.chi2.pdf(x, i)
ax.plot(x, chi_pdf, lw = 3, label = '$\chi^2 (%.0d)$'%i)
ax.legend(fontsize = 12)
ax.axis([0, 20, 0, .6])
plt.show()
```
### <font face="gotham" color="purple"> Quadratic Form of $\chi^2$ Distribution </font>
If an $n$-random vector $\boldsymbol{y} \sim N(\boldsymbol{\mu}, \Sigma)$ then
$$
(\boldsymbol{y} - \boldsymbol{\mu})^T\Sigma^{-1}(\boldsymbol{y}-\boldsymbol{\mu})\sim \chi^2(n)
$$
If $\boldsymbol{y} \sim N(\boldsymbol{0}, \Sigma)$ simplifies the expression
$$
\boldsymbol{y}^T\Sigma^{-1}\boldsymbol{y}\sim \chi^2(n)
$$
We will show why that holds by using diagonal decomposition. Since the $\Sigma$ is symmetric, it is orthogonally diagonalizable,
$$
\Sigma = QDQ^T
$$
where
$$
D=\left[\begin{array}{ccccc}
\lambda_{1} & 0 & 0 & \ldots & 0 \\
0 & \lambda_{2} & 0 & \ldots & 0 \\
\vdots & \vdots & \vdots & & \vdots \\
0 & 0 & 0 & \ldots & \lambda_{n}
\end{array}\right]
$$
$\lambda$s are eigenvalues. And $Q^{-1} = Q^T$, $Q$ holds all the eigenvectors of $\Sigma$ which are mutually perpendicular.
Denote $D^*$ as
$$
D^* =
\left[\begin{array}{ccccc}
\frac{1}{\sqrt{\lambda_{1}}} & 0 & 0 & \ldots & 0 \\
0 & \frac{1}{\sqrt{\lambda_{2}}} & 0 & \ldots & 0 \\
\vdots & \vdots & \vdots & & \vdots \\
0 & 0 & 0 & \ldots & \frac{1}{\sqrt{\lambda_{n}}}
\end{array}\right]
$$
Let the matrix $H = QD^*Q^T$, since $H$ is also symmetric
$$
HH^T= QD^*Q^TQD^*Q^T= QD^*D^*Q^T = QD^{-1}Q^T =\Sigma^{-1}
$$
Furthermore
$$
H\Sigma H^T = QD^*Q^T\Sigma QD^*Q^T = QD^*Q^TQDQ^T QD^*Q^T = QD^*DD^*Q^T = QQ^T = I
$$
Back to the results from above, we set $\boldsymbol{z} = H^T (\boldsymbol{y}-\boldsymbol{\mu})$, which is standard normal distribution since
$$
E(\boldsymbol{z})= H^TE(\boldsymbol{y}-\boldsymbol{\mu})=\boldsymbol{0}\\
\text{Var}(\boldsymbol{z}) = H^T\text{Var}(\boldsymbol{y}-\boldsymbol{\mu})H = H\Sigma H^T = I \quad (\text{since } H = H^T)
$$
Back to where we started
$$
(\boldsymbol{y}-\boldsymbol{\mu})^T\Sigma^{-1}(\boldsymbol{y}-\boldsymbol{\mu}) = (\boldsymbol{y}-\boldsymbol{\mu})^THH^T (\boldsymbol{y}-\boldsymbol{\mu}) = (H^T (\boldsymbol{y}-\boldsymbol{\mu}))^T(H^T (\boldsymbol{y}-\boldsymbol{\mu})) = \boldsymbol{z}^T\boldsymbol{z}
$$
here we proved that $(\boldsymbol{y}-\boldsymbol{\mu})^T\Sigma^{-1}(\boldsymbol{y}-\boldsymbol{\mu}) \sim \chi^2(n)$.
More details of the proof can be found on <a href='https://math.stackexchange.com/questions/2808041/x-normally-distributed-then-xt-sigma-1-x-follows-chi-square-distribut?noredirect=1&lq=1'>this page</a>.
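A quick Monte Carlo sanity check of this result (the mean and covariance below are arbitrary):
```python
import numpy as np
import scipy.stats

mu = np.array([1.0, -2.0])
Sigma = np.array([[3.0, 1.0],
                  [1.0, 2.0]])
Sigma_inv = np.linalg.inv(Sigma)

samples = scipy.stats.multivariate_normal(mu, Sigma).rvs(size=100_000)
d = samples - mu
q = np.einsum('ij,jk,ik->i', d, Sigma_inv, d)  # quadratic form, one value per draw

# chi2(2) has mean 2 and variance 4
print(q.mean(), q.var())
```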
## <font face="gotham" color="purple"> The Student’s $t$ Distribution </font>
If $z\sim N(0, 1)$ and $y\sim \chi^2(m)$, and $z$ and $y$ are independent, then
$$
t = \frac{z}{\sqrt{y/m}}
$$
follows the **Student's t distribution** with $m$ d.o.f.
Here is a plot of the $t$-distribution. Note that $t(1)$ is called the **Cauchy distribution**, which has no moments at all: the defining integrals do not converge because of its fat tails.
```python
fig, ax = plt.subplots(figsize = (12, 7))
x = np.linspace(-5, 5, 1000)
for i in range(1, 6):
chi_pdf = sp.stats.t.pdf(x, i)
if i == 1:
ax.plot(x, chi_pdf, lw = 3, label = 'Cauchy', ls = '--')
continue
else:
ax.plot(x, chi_pdf, lw = 3, label = '$t (%.0d)$'%i)
ax.legend(fontsize = 12)
ax.axis([-5, 5, 0, .4])
ax.set_title("Student's $t$ Distribution", size = 18)
plt.show()
```
As $m \rightarrow \infty$, $t(m)\rightarrow N(0, 1)$; in this limit the fat tails of the $t$ distribution diminish and become thinner.
## <font face="gotham" color="purple"> The $F$ Distribution </font>
If $y_1$ and $y_2$ are independent random variables distributed as $\chi^2(m_1)$ and $\chi^2(m_2)$, then the random variable
$$
F = \frac{y_1/m_1}{y_2/m_2}
$$
follows the **$F$ distribution**, denoted $ F(m_1, m_2)$.
```python
x = np.linspace(.001, 5, 100)
fig, ax = plt.subplots(figsize=(12, 7))
# plot a few (df1, df2) pairs, including each pair in both orders
for df1, df2 in [(10, 2), (2, 10), (8, 15), (15, 8)]:
    f_pdf = sp.stats.f.pdf(x, dfn=df1, dfd=df2)
    ax.plot(x, f_pdf, lw=3, label='$df_1 = %.d, df_2 = %.d$' % (df1, df2))
ax.legend(fontsize=15)
ax.axis([0, 4, 0, .8])
ax.set_title("$F$ Distribution", size=18)
plt.show()
```
# <font face="gotham" color="purple">Single Restriction </font>
Any linear restriction, such as $\beta_1 = 5$ or $\beta_1 = 2\beta_2$, can be tested; there is no loss of generality in demonstrating the single restriction $\beta_2=0$.
We first look at a single restriction and see how the FWL regression helps construct test statistics.
The regression model is
$$
\boldsymbol{y} = \boldsymbol{X}_1\boldsymbol{\beta}_1+\beta_2\boldsymbol{x}_2+\boldsymbol{u}
$$
where $\boldsymbol{x}_2$ is an $n$-vector, whereas $\boldsymbol{X}_1$ is an $n\times (m-1)$ matrix.
Projecting off $\boldsymbol{X}_1$ with the annihilator $\boldsymbol{M}_1 = \boldsymbol{I}-\boldsymbol{X}_1(\boldsymbol{X}_1^T\boldsymbol{X}_1)^{-1}\boldsymbol{X}_1^T$, the FWL regression is
$$
\boldsymbol{M}_1\boldsymbol{y} = \beta_2\boldsymbol{M}_1\boldsymbol{x}_2 + \boldsymbol{M}_1\boldsymbol{u}
$$
Applying OLS estimate and variance formula,
$$
\hat{\beta}_2 = [(\boldsymbol{M}_1\boldsymbol{x}_2)^T\boldsymbol{M}_1\boldsymbol{x}_2]^{-1}(\boldsymbol{M}_1\boldsymbol{x}_2)^T\boldsymbol{M}_1\boldsymbol{y}=(\boldsymbol{x}_2^T\boldsymbol{M}_1\boldsymbol{x}_2)^{-1}\boldsymbol{x}_2^T\boldsymbol{M}_1\boldsymbol{y}=\frac{\boldsymbol{x}_2^T\boldsymbol{M}_1\boldsymbol{y}}{\boldsymbol{x}_2^T\boldsymbol{M}_1\boldsymbol{x}_2}\\
\text{Var}(\hat{\beta}_2) = \sigma^2 [(\boldsymbol{M}_1\boldsymbol{x}_2)^T(\boldsymbol{M}_1\boldsymbol{x}_2)]^{-1} = \sigma^2(\boldsymbol{x}_2^T\boldsymbol{M}_1\boldsymbol{x}_2)^{-1}
$$
Suppose the null hypothesis is $\beta_2=0$ and assume for now that $\sigma^2$ is known. Construct a $z$-statistic
$$
z_{\beta_2} = \frac{\hat{\beta}_2}
{\sqrt{\text{Var}(\hat{\beta}_2)}}=\frac{\frac{\boldsymbol{x}_2^T\boldsymbol{M}_1\boldsymbol{y}}{\boldsymbol{x}_2^T\boldsymbol{M}_1\boldsymbol{x}_2}}{ \sigma(\boldsymbol{x}_2^T\boldsymbol{M}_1\boldsymbol{x}_2)^{-\frac{1}{2}}}
= \frac{\boldsymbol{x}_2^T\boldsymbol{M}_1\boldsymbol{y}}{\sigma(\boldsymbol{x}_2^T\boldsymbol{M}_1\boldsymbol{x}_2)^{\frac{1}{2}}}
$$
However, $\sigma$ is unlikely to be known, so we replace it with $s$, the least-squares standard error estimator. Recall that
$$
s^2 =\frac{1}{n-k} \sum_{t=1}^n u_t^2 = \frac{\boldsymbol{u}^T\boldsymbol{u}}{n-k}= \frac{(\boldsymbol{M_X y})^T\boldsymbol{M_X y}}{n-k}=\frac{\boldsymbol{y}^T\boldsymbol{M_X y}}{n-k}
$$
Replacing $\sigma$ in $z_{\beta_2}$ with $s$, we obtain the $t$-statistic
$$
t_{\beta_2} = \frac{\boldsymbol{x}_2^T\boldsymbol{M}_1\boldsymbol{y}}{s(\boldsymbol{x}_2^T\boldsymbol{M}_1\boldsymbol{x}_2)^{\frac{1}{2}}} = \left(\frac{\boldsymbol{y}^T\boldsymbol{M_X y}}{n-k}\right)^{-\frac{1}{2}}\frac{\boldsymbol{x}_2^T\boldsymbol{M}_1\boldsymbol{y}}{(\boldsymbol{x}_2^T\boldsymbol{M}_1\boldsymbol{x}_2)^{\frac{1}{2}}}
$$
Of course we can also show that it indeed follows the $t$ distribution, using the definition as the ratio of a standard normal variable to the square root of an independent $\chi^2$ variable divided by its degrees of freedom. However, this is unnecessary here.
# <font face="gotham" color="purple"> Multiple Restrictions </font>
A test of multiple restrictions can be formulated as follows:
$$
H_0:\quad \boldsymbol{y} = \boldsymbol{X}_1\boldsymbol{\beta_1} + \boldsymbol{u}\\
H_1:\quad \boldsymbol{y} = \boldsymbol{X}_1\boldsymbol{\beta_1} +\boldsymbol{X}_2\boldsymbol{\beta_2}+ \boldsymbol{u}
$$
$H_0$ is restricted model, $H_1$ is unrestricted. And we denote restricted residual sum of squares as $\text{RRSS}$, and unrestricted residual sum of squares as $\text{URSS}$, the test statistic is
$$
F_{\beta_2}= \frac{(\text{RRSS}-\text{URSS})/r}{\text{URSS}/(n-k)}
$$
where $r=k_2$, the number of restrictions on $\beta_2$.
Using FWL regression with $\boldsymbol{M}_1$ and $\text{TSS = ESS + RSS}$,
$$
\boldsymbol{M}_1\boldsymbol{y}=\boldsymbol{M}_1\boldsymbol{X}_2\boldsymbol{\beta}_2+\boldsymbol{u} \tag{FWL regression}
$$
\begin{align}
\text{URSS}& =(\boldsymbol{M_1y})^T\boldsymbol{M_1y}- \underbrace{\left[\boldsymbol{M}_1\boldsymbol{X}_2[(\boldsymbol{M}_1\boldsymbol{X}_2)^T\boldsymbol{M}_1\boldsymbol{X}_2]^{-1}(\boldsymbol{M}_1\boldsymbol{X}_2)^T\boldsymbol{y}\right]^T}_{\text{projection matrix}\ P}\boldsymbol{y}\\
&= \boldsymbol{y}^T\boldsymbol{M}_1\boldsymbol{y} -\boldsymbol{y}^T\boldsymbol{M}_1\boldsymbol{X}_2(\boldsymbol{X}_2^T\boldsymbol{M}_1\boldsymbol{X}_2)^{-1}\boldsymbol{X}^T_2\boldsymbol{M}_1 \boldsymbol{y} = \boldsymbol{y}^T\boldsymbol{M_X}\boldsymbol{y}
\end{align}
$$
\text{RRSS} = (\boldsymbol{M}_1\boldsymbol{y})^T\boldsymbol{M}_1\boldsymbol{y} = \boldsymbol{y}^T\boldsymbol{M}_1\boldsymbol{y}
$$
Therefore
$$
\text{RRSS}-\text{URSS}=\boldsymbol{y}^T\boldsymbol{M}_1\boldsymbol{X}_2(\boldsymbol{X}_2^T\boldsymbol{M}_1\boldsymbol{X}_2)^{-1}\boldsymbol{X}^T_2\boldsymbol{M}_1 \boldsymbol{y}
$$
We have all parts of $F$ statistic, combine them
$$
F_{\boldsymbol{\beta}_{2}}=\frac{\boldsymbol{y}^{T} \boldsymbol{M}_{1} \boldsymbol{X}_{2}\left(\boldsymbol{X}_{2}^{T} \boldsymbol{M}_{1} \boldsymbol{X}_{2}\right)^{-1} \boldsymbol{X}_{2}^{T} \boldsymbol{M}_{1} \boldsymbol{y} / r}{\boldsymbol{y}^{T} \boldsymbol{M}_{\boldsymbol{X}} \boldsymbol{y} /(n-k)}
$$
In contrast, $t$ statistic will be
$$
t_{\beta_{2}}=\sqrt{\frac{\boldsymbol{y}^{\top} \boldsymbol{M}_{1} \boldsymbol{x}_{2}\left(\boldsymbol{x}_{2}^{\top} \boldsymbol{M}_{1} \boldsymbol{x}_{2}\right)^{-1} \boldsymbol{x}_{2}^{\top} \boldsymbol{M}_{1} \boldsymbol{y}}{\boldsymbol{y}^{\top} \boldsymbol{M}_{\boldsymbol{X}} \boldsymbol{y} /(n-k)}}
$$
To test the equality of two parameter vectors, we modify the $F$ test as
$$
F_{\gamma}=\frac{\left(\mathrm{RRSS}-\mathrm{RSS}_{1}-\mathrm{RSS}_{2}\right) / k}{\left(\mathrm{RSS}_{1}+\mathrm{RSS}_{2}\right) /(n-2 k)}
$$
For example, when the sample is divided into two subsamples, comparing parameter stability across the subsamples with this so-called **Chow test** is common practice.
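Both statistics are easy to compute directly from residual sums of squares. Below is a minimal sketch on simulated data (the data-generating process and all names are invented for illustration) that builds $F_{\beta_2}$ from $\text{RRSS}$ and $\text{URSS}$:
```python
import numpy as np
import scipy.stats

rng = np.random.default_rng(1)
n, k1, k2 = 200, 2, 2                      # sample size, columns of X1 and X2
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
X2 = rng.normal(size=(n, k2))
y = X1 @ np.array([1.0, 2.0]) + rng.normal(size=n)   # true beta2 = 0, so H0 holds

def rss(X, y):
    """Residual sum of squares from an OLS fit of y on X."""
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return resid @ resid

RRSS = rss(X1, y)                          # restricted model: X2 excluded
URSS = rss(np.hstack([X1, X2]), y)         # unrestricted model
r, k = k2, k1 + k2
F = ((RRSS - URSS) / r) / (URSS / (n - k))
p = 1 - scipy.stats.f.cdf(F, dfn=r, dfd=n - k)
print(f"F = {F:.3f}, p-value = {p:.3f}")   # a large p-value is expected under H0
```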
# <font face="gotham" color="purple"> Asymptotic Theory </font>
**Asymptotic Theory** is concerned with the distribution of estimators and test statistics as the sample size $n$ tends to infinity.
## <font face="gotham" color="purple"> Law of Large Numbers </font>
The widely-known **Law of Large Numbers** ($\text{LLN}$) takes the form
$$
\bar{x} = \frac{1}{n}\sum_{t=1}^nx_t
$$
where the $x_t$ are independent variables, each with bounded variance $\sigma_t^2$ and a common mean $\mu$. The $\text{LLN}$ tells us that $\bar{x} \rightarrow \mu$ as $n\rightarrow \infty$.
The **Fundamental Theorem of Statistics** can be proved with $\text{LLN}$.
An **empirical distribution function** ($\text{EDF}$) can be expressed as
$$
\hat{F}(x) \equiv \frac{1}{n} \sum_{t=1}^{n} I\left(x_{t} \leq x\right)
$$
where $I(\cdot)$ is an **indicator function**, which takes value $1$ when its argument is true, otherwise $0$. To prove the Fundamental Theorem of Statistics, we invoke the $\text{LLN}$, expand the expectation
\begin{aligned}
\mathrm{E}\left(I\left(x_{t} \leq x\right)\right) &=0 \cdot \operatorname{Pr}\left(I\left(x_{t} \leq x\right)=0\right)+1 \cdot \operatorname{Pr}\left(I\left(x_{t} \leq x\right)=1\right) \\
&=\operatorname{Pr}\left(I\left(x_{t} \leq x\right)=1\right)=\operatorname{Pr}\left(x_{t} \leq x\right)=F(x)
\end{aligned}
It turns out that $\hat{F}(x)$ is a consistent estimator of $F(x)$.
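A minimal simulation sketch (sample sizes chosen arbitrarily) makes the consistency visible: the maximum gap between the EDF of standard-normal draws and the true CDF shrinks as $n$ grows.
```python
import numpy as np
import scipy.stats

rng = np.random.default_rng(2)
x_grid = np.linspace(-3, 3, 7)
for n in (10, 100, 10_000):
    xs = rng.normal(size=n)
    edf = np.array([np.mean(xs <= x) for x in x_grid])       # \hat{F}(x)
    gap = np.max(np.abs(edf - scipy.stats.norm.cdf(x_grid)))
    print(f"n = {n:6d}, max |EDF - CDF| = {gap:.3f}")        # shrinks with n
```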
## <font face="gotham" color="purple"> Central Limit Theorems </font>
The **Central Limit Theorem** ($\text{CLT}$) tells us that $\frac{1}{\sqrt{n}}$ times the sum of $n$ centered and scaled random variables approximately follows a normal distribution when $n$ is sufficiently large. The best-known version is the **Lindeberg-Lévy** $\text{CLT}$: the quantity
$$
z \equiv \frac{1}{\sqrt{n}} \sum_{t=1}^{n} \frac{x_{t}-\mu}{\sigma}
$$
is **asymptotically distributed** as $N(0,1)$.
We won't bother to prove them, but they are the implicit theoretical foundation when we are discussing **simulation-based tests**.
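Still, a quick simulation sketch (all choices below are arbitrary) shows the Lindeberg-Lévy CLT at work: standardized sums of uniform draws are very close to $N(0,1)$.
```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
n, reps = 500, 10_000
mu, sigma = 0.5, np.sqrt(1 / 12)                  # mean and sd of Uniform(0, 1)
u = rng.uniform(size=(reps, n))
z = (u - mu).sum(axis=1) / (sigma * np.sqrt(n))   # the CLT quantity

plt.hist(z, bins=60, density=True, alpha=0.6, label='simulated $z$')
grid = np.linspace(-4, 4, 200)
plt.plot(grid, np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi), label='$N(0,1)$')
plt.legend()
plt.show()
```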
# <font face="gotham" color="purple"> Simulated $P$ Values </font>
```python
# t statistics and p values for the "real" data
# (betahat, cov_betahat_prin_diag, n, k come from earlier cells)
t_stats_real_model = betahat.T[0]/np.sqrt(cov_betahat_prin_diag)
print('t statistics: {}'.format(t_stats_real_model))
p_values_real_model = 2*(1 - sp.stats.t.cdf(t_stats_real_model, df=n-k, loc=0, scale=1))
print('p_values: {}'.format(p_values_real_model))
```
t statistics: [1.75043396 2.79293141 2.73485916]
p_values: [0.25890776 0.00241733 0.07849986]
The output above will serve as our imaginary "real" data.
```python
# Simulate the null distribution of the t statistics
# (X, beta, n, k and the helper module lr come from earlier cells)
indicator_array = np.array([0, 0, 0])
sim_rounds = 5000
for i in range(sim_rounds):
    u = lr.gen_u(n)
    y = X @ beta + 15 * u
    betahat = lr.ols(y, X)
    resid = y - X @ betahat
    cov_betahat, cov_betahat_prin_diag = lr.cov_beta_hat(resid, k, X)
    t_stats_sim = betahat.T[0] / np.sqrt(cov_betahat_prin_diag)
    indicator = t_stats_sim - t_stats_real_model
    indicator[indicator < 0] = 0
    indicator[indicator > 0] = 1
    indicator_array = np.vstack([indicator_array, indicator])
indicator_array = indicator_array[1:]   # drop the initial placeholder row
```
```python
np.sum(indicator_array, axis = 0)/sim_rounds
```
array([0.6924, 0.288 , 0.3778])
```python
indicator
```
array([1., 1., 0.])
```python
# A holds values computed in an earlier cell (not shown here)
comparison = p_values_real_model - A
comparison
```
array([-0.04109224, 0.00141733, 0.02849986])
```python
comparison[comparison<0] = 0
```
```python
comparison[comparison>0] = 1
```
```python
# comparison1 comes from an earlier cell (not shown here)
np.vstack([comparison, comparison1])
```
array([[0., 1., 1.],
[1., 2., 3.]])
---
# Corrugated Shells geometry
## Init symbols for *sympy*
```python
from sympy import *
from geom_util import *
from sympy.vector import CoordSys3D
import matplotlib.pyplot as plt
import sys
sys.path.append("../")
%matplotlib inline
%reload_ext autoreload
%autoreload 2
%aimport geom_util
```
```python
# Any tweaks that normally go in .matplotlibrc, etc., should explicitly go here
%config InlineBackend.figure_format='retina'
plt.rcParams['figure.figsize'] = (12, 12)
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
# SMALL_SIZE = 42
# MEDIUM_SIZE = 42
# BIGGER_SIZE = 42
# plt.rc('font', size=SMALL_SIZE) # controls default text sizes
# plt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title
# plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
# plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
# plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
# plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
# plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
init_printing()
```
```python
N = CoordSys3D('N')
alpha1, alpha2, alpha3 = symbols("alpha_1 alpha_2 alpha_3", real = True, positive=True)
```
## Cylindrical coordinates
```python
R, L, ga, gv = symbols("R L g_a g_v", real = True, positive=True)
```
```python
a1 = pi / 2 + (L / 2 - alpha1)/R
a2 = 2 * pi * alpha1 / L
x1 = (R + ga * cos(gv * a2)) * cos(a1)
x2 = alpha2
x3 = (R + ga * cos(gv * a2)) * sin(a1)
r = x1*N.i + x2*N.j + x3*N.k
r1=r.diff(alpha1)
r1
```
```python
z = 2*ga*gv*pi/L*sin(gv*a2)
w = 1 + ga/R*cos(gv*a2)
dr1x=w*sin(a1) - z*cos(a1)
dr1z=-w*cos(a1) - z*sin(a1)
r1 = dr1x*N.i + dr1z*N.k
r2 =N.j
mag=sqrt((w)**2+(z)**2)
nx = -dr1z/mag
nz = dr1x/mag
n = nx*N.i+nz*N.k
dnx=nx.diff(alpha1)
dnz=nz.diff(alpha1)
dn= dnx*N.i+dnz*N.k
```
```python
Ralpha = r+alpha3*n
R1=r1+alpha3*dn
R2=Ralpha.diff(alpha2)
R3=n
Ralpha
```
```python
R1
```
```python
R2
```
```python
R3
```
### Draw
```python
import plot
%aimport plot
# x1 = (R + alpha3 + ga * cos(gv * a2)) * cos(a1)
# x2 = alpha2
# x3 = (R + alpha3 + ga * cos(gv * a2)) * sin(a1)
x1 = Ralpha.dot(N.i)
x3 = Ralpha.dot(N.k)
alpha1_x = lambdify([R, L, ga, gv, alpha1, alpha3], x1, "numpy")
alpha3_z = lambdify([R, L, ga, gv, alpha1, alpha3], x3, "numpy")
R_num = 1/0.8
L_num = 2
h_num = 0.01
ga_num = 0.01
gv_num = 20
x1_start = 0
x1_end = L_num
x3_start = -h_num/2
x3_end = h_num/2
def alpha_to_x(a1, a2, a3):
    x = alpha1_x(R_num, L_num, ga_num, gv_num, a1, a3)
    z = alpha3_z(R_num, L_num, ga_num, gv_num, a1, a3)
    return x, 0, z
plot.plot_init_geometry_2(x1_start, x1_end, x3_start, x3_end, alpha_to_x)
```
```python
%aimport plot
R3_1=R3.dot(N.i)
R3_3=R3.dot(N.k)
R3_1_x = lambdify([R, L, ga, gv, alpha1, alpha3], R3_1, "numpy")
R3_3_z = lambdify([R, L, ga, gv, alpha1, alpha3], R3_3, "numpy")
def R3_to_x(a1, a2, a3):
    x = R3_1_x(R_num, L_num, ga_num, gv_num, a1, a3)
    z = R3_3_z(R_num, L_num, ga_num, gv_num, a1, a3)
    return x, 0, z
plot.plot_vectors(x1_start, x1_end, 0, alpha_to_x, R3_to_x)
```
```python
%aimport plot
R1_1=r1.dot(N.i)
R1_3=r1.dot(N.k)
R1_1_x = lambdify([R, L, ga, gv, alpha1, alpha3], R1_1, "numpy")
R1_3_z = lambdify([R, L, ga, gv, alpha1, alpha3], R1_3, "numpy")
def R1_to_x(a1, a2, a3):
    x = R1_1_x(R_num, L_num, ga_num, gv_num, a1, a3)
    z = R1_3_z(R_num, L_num, ga_num, gv_num, a1, a3)
    return x, 0, z
plot.plot_vectors(x1_start, x1_end, 0, alpha_to_x, R1_to_x)
```
### Lamé parameters
```python
A=mag
q=w/R+ga*(2*pi*gv/L)**2*cos(gv*a2)
K=(q*w+2*z*z/R)/(mag**3)
H1 = A*(1+alpha3*K)
H2=S(1)
H3=S(1)
H=[H1, H2, H3]
DIM=3
dH = zeros(DIM,DIM)
for i in range(DIM):
    dH[i, 0] = H[i].diff(alpha1)
    dH[i, 1] = H[i].diff(alpha2)
    dH[i, 2] = H[i].diff(alpha3)
trigsimp(H1)
```
---
# Introduction to LP Problems - standard form
## Diet problem
You have a list of foods with their macronutrients and prices (per portion). You want to meet your nutritional requirements at the lowest possible cost.
|Product|Energy (kcal)|Proteins (g)|Calcium (mg)|Price (cents)|
|------|-----|-----|-----|-----|
|Oats|110|4|2|25|
|Chicken|205|32|12|130|
|Egg|160|13|54|85|
|Milk|160|8|285|70|
|Cake|420|22| |95|
|Bean|260|14|80|98|
Suppose you require 2000 kcal, 55 g of protein, and 800 mg of calcium. Write the LP, then write it in its standard form (see the sketch below):
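One way to write this LP out (a sketch: $x_i$ is the number of portions of food $i$ in the table's order, and the cake's missing calcium entry is left as an unknown coefficient $c_5$):
\begin{equation}
\min 25x_1+130x_2+85x_3+70x_4+95x_5+98x_6\\
\textrm{subject to} \begin{cases}
110x_1+205x_2+160x_3+160x_4+420x_5+260x_6 \ge 2000\\
4x_1+32x_2+13x_3+8x_4+22x_5+14x_6 \ge 55\\
2x_1+12x_2+54x_3+285x_4+c_5x_5+80x_6 \ge 800\\
x_i \ge 0
\end{cases}
\end{equation}
For the standard form, subtract a nonnegative surplus variable from each $\ge$ constraint to turn it into an equality.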
## More problems
### 1
\begin{equation}
\min 3x_1+8x_2+4x_3\\
\textrm{subject to} \begin{cases}
x_1+x_2 \ge 8\\
2x_1-3x_2 \le 0\\
x_2 \ge 9 \\
x_1,x_2 \ge 0
\end{cases}
\end{equation}
### 2
\begin{equation}
\max 3x_1+2x_2-x_3+x_4\\
\textrm{subject to} \begin{cases}
x_1 + 2x_2 + x_3 - x_4 \le 5\\
-2x_1 - 4 x_2 + x_3 + x_4 \le -1\\
x_1\ge 0,x_2 \le 0
\end{cases}
\end{equation}
### 3
\begin{equation}
\min x_1 - x_2 + x_3\\
\textrm{subject to} \begin{cases}
x_1 + 2x_2 - x_3 \le 3\\
-x_1 + x_2 + x_3 \ge 2\\
x_1 - x_2 = 10\\
x_1 \ge 0, x_2 \le 0
\end{cases}
\end{equation}
### 4
\begin{equation}
\max x_1 - x_2 + x_3\\
\textrm{subject to} \begin{cases}
x_1 + 2x_2 - x_3 \le 3\\
-x_1 + x_2 + x_3 \ge 2\\
x_1 - x_2 = 10\\
x_1 \ge 0, x_2 \le 0
\end{cases}
\end{equation}
```julia
```
---
# Linear algebra games including SVD for PCA
Some parts adapted from [Computational-statistics-with-Python.ipynb](https://github.com/cliburn/Computational-statistics-with-Python), which is itself from a course taught at Duke University; other parts from Peter Mills' [blog](https://blog.statsbot.co/singular-value-decomposition-tutorial-52c695315254).
The goal here is to practice some linear algebra manipulations by hand and with Python, and to gain some experience and intuition with the Singular Value Decomposition (SVD).
$\newcommand{\Amat}{\mathbf{A}} \newcommand{\AmatT}{\mathbf{A^\top}}
\newcommand{\thetavec}{\boldsymbol{\theta}}
\newcommand{\Sigmamat}{\mathbf{\Sigma}}
\newcommand{\Yvec}{\mathbf{Y}}
$
## Preliminary exercise: manipulations using the index form of matrices
If you haven't already done this earlier, prove that the Maximum Likelihood Estimate (MLE) for $\chi^2$ given by
$$
\chi^2 = (\Yvec - \Amat\thetavec)^{\mathbf{\top}} \Sigmamat^{-1} (\Yvec - \Amat\thetavec)
$$
is
$$
\thetavec_{\mathrm{MLE}} = (\AmatT \Sigmamat^{-1} \Amat)^{-1} (\AmatT \Sigmamat^{-1} \Yvec) \;.
$$
Here $\thetavec$ is an $m\times 1$ matrix of parameters (i.e., there are $m$ parameters), $\Sigmamat$ is the $N\times N$ covariance matrix of the observations, $\Yvec$ is an $N\times 1$ matrix of observations (data), and $\Amat$ is an $N\times m$ matrix
$$
\Amat =
\left(
\begin{array}{cccc}
1 & x_1 & x_1^2 & \cdots \\
1 & x_2 & x_2^2 & \cdots \\
\vdots & \vdots & \vdots &\cdots \\
1 & x_N & x_N^2 & \cdots
\end{array}
\right)
$$
where $N$ is the number of observations. The idea is to do this with explicit indices for vectors and matrices, using the Einstein summation convention.
A suggested approach:
* Write $\chi^2$ in indices: $\chi^2 = (Y_i - A_{ij}\theta_j)\Sigma^{-1}_{ii'}(Y_{i'}- A_{i'j'}\theta_{j'})$, where summations over repeated indices are implied (be careful of transposes). *How do we see that $\chi^2$ is a scalar?*
* Find $\partial\chi^2/\partial \theta_k = 0$ for all $k$, using $\partial\theta_j/\partial\theta_k = \delta_{jk}$. Isolate the terms with one component of $\thetavec$ from those with none.
* You should get the matrix equation $ (\AmatT \Sigmamat^{-1} \Yvec) = (\AmatT \Sigmamat^{-1} \Amat)\thetavec$. At this point you can directly solve for $\thetavec$. *Why can you do this now?*
* If you get stuck, see Dick's notes from the Parameter Estimation III lecture. A numerical check of the final formula is sketched below.
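Once you have the result, it is easy to verify numerically. The sketch below (synthetic data, and $\Sigmamat = \sigma^2 I$ for simplicity, both arbitrary choices) evaluates $\thetavec_{\mathrm{MLE}}$ from the formula and checks it against `np.polyfit`:
```python
import numpy as np

rng = np.random.default_rng(4)
N, m = 50, 3                                # observations, parameters
x = np.linspace(0, 1, N)
A = np.vander(x, m, increasing=True)        # columns: 1, x, x^2
theta_true = np.array([1.0, -2.0, 0.5])
Y = A @ theta_true + rng.normal(scale=0.1, size=N)

Sigma_inv = np.eye(N) / 0.1**2              # Sigma = sigma^2 I for this sketch
theta_mle = np.linalg.solve(A.T @ Sigma_inv @ A, A.T @ Sigma_inv @ Y)
print(theta_mle)                            # close to theta_true
print(np.polyfit(x, Y, m - 1)[::-1])        # OLS agrees when Sigma is proportional to I
```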
## SVD basics
A singular value decomposition (SVD) decomposes a matrix $A$ into three other matrices (we'll skip the boldface font here):
$$
A = U S V^\top
$$
where (take $m > n$ for now)
* $A$ is an $m\times n$ matrix;
* $U$ is an $m\times n$ (semi)orthogonal matrix;
* $S$ is an $n\times n$ diagonal matrix;
* $V$ is an $n\times n$ orthogonal matrix.
Comments and tasks:
* *Verify that these dimensions are compatible with the decomposition of $A$.*
* The `scipy.linalg` function `svd` has a Boolean argument `full_matrices`. If `False`, it returns the decomposition above with matrix dimensions as stated. If `True`, then $U$ is $m\times m$, $S$ is $m \times n$, and $V$ is $n\times n$. We will use the `full_matrices = False` form here. *Can you see why this is ok?*
* Recall that orthogonal means that $U^\top U = I_{n\times n}$ and $V V^\top = I_{n\times n}$. *Are $U U^\top$ and $V^\top V$ equal to identity matrices?*
* In index form, the decomposition of $A$ is $A_{ij} = U_{ik} S_k V_{jk}$, where the diagonal matrix elements of $S$ are
$S_k$ (*make sure you agree*).
* These diagonal elements of $S$, namely the $S_k$, are known as **singular values**. They are ordinarily arranged from largest to smallest.
* $A A^\top = U S^2 U^\top$, which implies (a) $A A^\top U = U S^2$.
* $A^\top A = V S^2 V^\top$, which implies (b) $A^\top A V = V S^2$.
* If $m > n$, we can diagonalize $A^\top A$ to find $S^2$ and $V$ and then find $U = A V S^{-1}$. If $m < n$ we switch the roles of $U$ and $V$.
Quick demonstations for you to do or questions to answer:
* *Show from equations (a) and (b) that both $U$ and $V$ are orthogonal and that the eigenvalues, $\{S_i^2\}$, are all positive.*
* *Show that if $m < n$ there will be at most $m$ non-zero singular values.*
* *Show that the eigenvalues from equations (a) and (b) must be the same.*
A key feature of the SVD for us here is that the sum of the squares of the singular values equals the total variance in $A$, i.e., the sum of squares of all matrix elements (squared Frobenius norm). Thus the size of each says how much of the total variance is accounted for by each singular vector. We can create a truncated SVD containing a percentage (e.g., 99%) of the variance:
$$
A_{ij} \approx \sum_{k=1}^{p} U_{ik} S_k V_{jk}
$$
where $p < n$ is the number of singular values included. Typically this is not a large number.
### Geometric interpretation of SVD
- Geometric interpretation of SVD
- rotate orthogonal frame $V$ onto standard frame
- scale by $S$
- rotate standard frame into orthogonal frame $U$
Consider the two-dimensional case: $\mathbf{x_1} = (x_1, y_1)$, $\mathbf{x_2} = (x_2, y_2)$. We can fit these to an ellipse with major axis $a$ and minor axis $b$, made by stretching and rotating a unit circle. Let $\mathbf{x'} = (x', y')$ be the transformed coordinates:
$$
\mathbf{x'} = \mathbf{x} R M^{-1} \quad\mbox{with}\quad
R = \left(\begin{array}{cc}
\cos\theta & \sin\theta \\
-\sin\theta & \cos\theta
\end{array}
\right)
\quad\mbox{and}\quad
M = \left(\begin{array}{cc}
a & 0 \\
0 & b
\end{array}
\right)
$$
In index form this is $x'_j = \frac{1}{m_j} x_i R_{ij}$ or (clockwise rotation):
$$\begin{align}
x' &= \frac{x \cos\theta - y\sin\theta}{a} \\
y' &= \frac{x \sin\theta + y\cos\theta}{b} \\
\end{align}$$
The equation for a unit circle $\mathbf{x' \cdot x'} = 1$ becomes
$$
(M^{-1} R^\top \mathbf{x}) \cdot (\mathbf{x} R M^{-1}) = 1.
$$
With $X = \left(\begin{array}{cc}
x_1 & y_1 \\
x_2 & y_2
\end{array}
\right)$ we find the matrix equation:
$$
M^{-1} R^\top X^\top X R M^{-1}= 1.
$$
which is just a rearrangement of the equation from above, $A^\top A V = V S^2$.
**Interpretation:** If $A$ is considered to be a collection of points, then the singular values are the axes of a least-squares fitted ellipsoid while $V$ is its orientation. The matrix $U$ is the projection of each of the points in $A$ onto the axes.
### Solving matrix equations with SVD
We can solve for $\mathbf{x}$:
$$\begin{align}
A \mathbf{x} &= b \\
\mathbf{x} &= V S^{-1} U^\top b
\end{align}$$
or $x_i = \sum_j \frac{V_{ij}}{S_j} \sum_k U_{kj} b_k$. The value of this solution method is when we have an ill-conditioned matrix, meaning that the smallest eigenvalues are zero or close to zero. We can throw away the corresponding components and all is well! See [also](https://personalpages.manchester.ac.uk/staff/timothy.f.cootes/MathsMethodsNotes/L3_linear_algebra3.pdf).
Comments:
- If we have a non-square matrix, it still works. If $m\times n$ with $m > n$, then only $n$ singular values.
- If $m < n$, then only $m$ singular values.
- This is like solving
$$A^\top A \mathbf{x} = A^\top b$$
which is called the *normal equation*. It produces the solution to $\mathbf{x}$ that is closest to the origin, or
$$
\min_{\mathbf{x}} |A\mathbf x - b| \;.
$$
**Task:** *prove these results (work backwards from the last equation as a least-squares minimization)*.
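A minimal sketch with a random square system (matrix and right-hand side invented for illustration); truncating small singular values just means masking them before the division:
```python
import numpy as np
import scipy.linalg as la

rng = np.random.default_rng(5)
A = rng.normal(size=(4, 4))
b = rng.normal(size=4)

U, S, Vt = la.svd(A, full_matrices=False)
x_svd = Vt.T @ ((U.T @ b) / S)                 # x = V S^{-1} U^T b
print(np.allclose(x_svd, la.solve(A, b)))      # True for a well-conditioned A

# For an ill-conditioned A, keep only singular values above a tolerance
tol = 1e-10 * S.max()
keep = S > tol
x_trunc = Vt.T[:, keep] @ ((U.T[keep] @ b) / S[keep])
```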
### Data reduction
For machine learning (ML), there might be several hundred variables but the algorithms are made for a few dozen. We can use SVD in ML for variable reduction. This is also the connection to sloppy physics models. In general, our matrix $A$ can be closely approximated by only keeping the largest of the singular values. We'll see that visually below using images.
## Python imports
```python
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as la
from sklearn.decomposition import PCA
```
*Generate random matrices and verify the properties for SVD given above. Check what happens when $m > n$.*
```python
A = np.random.rand(9, 4)
print('A = ', A)
Ap = np.random.randn(5, 3)
print('Ap = ', Ap)
```
Check the definition of `scipy.linalg.svd` with shift-tab-tab.
```python
# SVD from scipy.linalg
U, S, V_trans = la.svd(A, full_matrices=False)
Up, Sp, Vp_trans = la.svd(Ap, full_matrices=False)
```
```python
print(U.shape, S.shape, V_trans.shape)
```
```python
# Transpose with T, matrix multiplication with @
print(U.T @ U)
```
```python
# Here's one way to suppress small numbers from round-off error
np.around(U.T @ U, decimals=15)
```
```python
# Predict this one before evaluating!
print(U @ U.T)
```
Go on and check the other claimed properties.
For example, is $A = U S V^\top$? (Note: you'll need to make $S$ a matrix with `np.diag(S)`.)
```python
# Check the other properties, changing the matrix size and shapes.
```
For a square matrix, compare the singular values in $S$ to the eigenvalues from `la.eig`. What do you conclude? Now try this for a symmetric matrix (note that a matrix plus its transpose is symmetric).
## SVD applied to images for compression
Read in `figs/elephant.jpg` as a gray-scale image. The image has $1066 \times 1600$ values. Using SVD, recreate the image with a relative error of less than 0.5%. What is the relative size of the compressed image as a percentage?
```python
from skimage import io
img = io.imread('figs/elephant.jpg', as_gray=True)
plt.imshow(img, cmap='gray');
print('shape of img: ', img.shape)
```
```python
# turn off axis
plt.imshow(img, cmap='gray')
plt.gca().set_axis_off()
```
```python
# Do the svg
U, S, Vt = la.svd(img, full_matrices=False)
```
```python
# Check the shapes
U.shape, S.shape, Vt.shape
```
```python
# Check that we can recreate the image
img_orig = U @ np.diag(S) @ Vt
print(img_orig.shape)
plt.imshow(img_orig, cmap='gray')
plt.gca().set_axis_off()
```
Here's how we can efficiently reduce the size of the matrices. Our SVD should be sorted, so we are keeping only the largest singular values up to a point.
```python
# Pythonic way to figure out when we've accumulated 99.5% of the result
k = np.sum(np.cumsum((S**2)/(S**2).sum()) <= 0.995)
```
#### Aside: dissection of the Python statement to find the index for accumulation
```python
test = np.array([5, 4, 3, 2, 1])
threshold = 0.995
print('initial matrix, in descending magnitude: ', test)
print( 'fraction of total sum of squares: ', (test**2) / (test**2).sum() )
print( 'cumulative fraction: ', np.cumsum((test**2) / (test**2).sum()) )
print( 'mark entries as true if less than or equal to threshold: ',
(np.cumsum((test**2) / (test**2).sum()) <= threshold) )
print( 'sum up the Trues: ',
np.sum(np.cumsum((test**2) / (test**2).sum()) <= threshold) )
print( 'The last result is the index we are looking for.')
```
```python
# Let's plot the eigenvalues and mark where k is
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(1,1,1)
ax.semilogy(S, color='blue', label='eigenvalues')
ax.axvline(k, color='red', label='99.5% of the variance');
ax.set_xlabel('eigenvalue number')
ax.legend()
fig.tight_layout()
```
Now keep only the most significant eigenvalues (those up to k).
```python
img2 = U[:,:k] @ np.diag(S[:k])@ Vt[:k, :]
img2.shape
```
```python
plt.imshow(img2, cmap='gray')
plt.gca().set_axis_off();
```
```python
k99 = np.sum(np.cumsum((S**2)/(S**2).sum()) <= 0.99)
img99 = U[:,:k99] @ np.diag(S[:k99])@ Vt[:k99, :]
```
```python
plt.imshow(img99, cmap='gray')
plt.gca().set_axis_off();
```
Let's try another interesting picture . . .
```python
fraction_kept = 0.995

def svd_shapes(U, S, V, k=None):
    if k is None:
        k = len(S)
    U_shape = U[:, :k].shape
    S_shape = S[:k].shape
    V_shape = V[:k, :].shape   # V here is the transposed factor returned by la.svd
    print(f'U shape: {U_shape}, S shape: {S_shape}, V shape: {V_shape}')

img_orig = io.imread('figs/Dick_in_tailcoat.jpg')
img = io.imread('figs/Dick_in_tailcoat.jpg', as_gray=True)

U, S, V = la.svd(img)
svd_shapes(U, S, V)

k995 = np.sum(np.cumsum((S**2)/(S**2).sum()) <= fraction_kept)
print(f'k995 = {k995}')

img995 = U[:, :k995] @ np.diag(S[:k995]) @ V[:k995, :]
print(f'img995 shape = {img995.shape}')
svd_shapes(U, S, V, k995)

fig = plt.figure(figsize=(12, 6))
ax1 = fig.add_subplot(1, 3, 1)
ax1.imshow(img_orig)
ax1.set_axis_off()
ax2 = fig.add_subplot(1, 3, 2)
ax2.imshow(img, cmap='gray')
ax2.set_axis_off()
ax3 = fig.add_subplot(1, 3, 3)
ax3.imshow(img995, cmap='gray')
ax3.set_axis_off()
fig.tight_layout()
```
```python
# Let's plot the eigenvalues and mark where k is
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(1,1,1)
ax.semilogy(S, color='blue', label='eigenvalues')
ax.axvline(k995, color='red', label='99.5% of the variance');
ax.set_xlabel('eigenvalue number')
ax.legend()
fig.tight_layout()
```
### Things to do:
* Get your own figure and duplicate these results. Then play!
* As you reduce the percentage of the variance kept, what features of the image are retained and what are lost?
* See how small you can make the percentage and still recognize the picture.
* How is this related to doing a spatial Fourier transform, applying a low-pass filter, and transforming back. (Experts: try this!)
## Covariance, PCA and SVD
```python
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.linalg as la
np.set_printoptions(precision=3)
```
Recall the formula for covariance
$$
\text{Cov}(X, Y) = \frac{\sum_{i=1}^n(X_i - \bar{X})(Y_i - \bar{Y})}{n-1}
$$
where $\text{Cov}(X, X)$ is the sample variance of $X$.
```python
def cov(x, y):
    """Returns the covariance of vectors x and y."""
    xbar = x.mean()
    ybar = y.mean()
    return np.sum((x - xbar) * (y - ybar)) / (len(x) - 1)
```
```python
X = np.random.random(10)
Y = np.random.random(10)
```
```python
np.array([[cov(X, X), cov(X, Y)], [cov(Y, X), cov(Y,Y)]])
```
```python
np.cov(X, Y) # check against numpy
```
```python
# Extension to more variables is done in a pair-wise way
Z = np.random.random(10)
np.cov([X, Y, Z])
```
### Eigendecomposition of the covariance matrix
```python
# Zero mean but off-diagonal correlation matrix
mu = [0,0]
sigma = [[0.6,0.2],[0.2,0.2]]
n = 1000
x = np.random.multivariate_normal(mu, sigma, n).T
plt.scatter(x[0,:], x[1,:], alpha=0.2);
```
```python
# Find the covariance matrix of the matrix of points x
A = np.cov(x)
```
```python
# m = np.array([[1,2,3],[6,5,4]])
# ms = m - m.mean(1).reshape(2,1)
# np.dot(ms, ms.T)/2
```
```python
# Find the eigenvalues and eigenvectors
e, v = la.eigh(A)
```
```python
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(1,1,1)
ax.scatter(x[0,:], x[1,:], alpha=0.2)
for e_, v_ in zip(e, v.T):
    ax.plot([0, 3*e_*v_[0]], [0, 3*e_*v_[1]], 'r-', lw=2)
ax.axis([-3,3,-3,3])
ax.set_aspect(1)
ax.set_title('Eigenvectors of covariance matrix scaled by eigenvalue.');
```
### PCA (from Duke course)
"Principal Components Analysis" (PCA) basically means to find and rank all the eigenvalues and eigenvectors of a covariance matrix. This is useful because high-dimensional data (with $p$ features) may have nearly all their variation in a small number of dimensions $k<p$, i.e. in the subspace spanned by the eigenvectors of the covariance matrix that have the $k$ largest eigenvalues. If we project the original data into this subspace, we can have a dimension reduction (from $p$ to $k$) with hopefully little loss of information.
Numerically, PCA is typically done using SVD on the data matrix rather than eigendecomposition on the covariance matrix. The condition number for working with the covariance matrix directly is the square of the condition number using SVD, so SVD minimizes errors.
For zero-centered vectors,
\begin{align}
\text{Cov}(X, Y) &= \frac{\sum_{i=1}^n(X_i - \bar{X})(Y_i - \bar{Y})}{n-1} \\
&= \frac{\sum_{i=1}^nX_iY_i}{n-1} \\
&= \frac{XY^T}{n-1}
\end{align}
and so the covariance matrix for a data set $X$ that has zero mean in each feature vector is just $XX^T/(n-1)$.
In other words, we can also get the eigendecomposition of the covariance matrix from the positive semi-definite matrix $XX^T$.
Note: Here $x$ is a matrix of **row** vectors.
```python
X = np.random.random((5,4))
X
```
```python
Y = X - X.mean(axis=1)[:, None] # eliminate the mean
print(Y.mean(axis=1))
```
```python
np.around(Y.mean(1), 5)
```
```python
Y
```
Check that the covariance matrix is unaffected by removing the mean:
```python
np.cov(X)
```
```python
np.cov(Y)
```
```python
# Find the eigenvalue and eigenvectors
e1, v1 = np.linalg.eig(np.dot(x, x.T)/(n-1))
```
#### Principal components
Principal components are simply the eigenvectors of the covariance matrix used as basis vectors. Each of the original data points is expressed as a linear combination of the principal components, giving rise to a new set of coordinates.
```python
# Check that we reproduce the previous result
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(1,1,1)
ax.scatter(x[0,:], x[1,:], alpha=0.2)
for e_, v_ in zip(e1, v1.T):
    ax.plot([0, 3*e_*v_[0]], [0, 3*e_*v_[1]], 'r-', lw=2)
ax.axis([-3,3,-3,3]);
ax.set_aspect(1)
```
### Using SVD for PCA
SVD is a decomposition of the data matrix $X = U S V^T$ where $U$ and $V$ are orthogonal matrices and $S$ is a diagonal matrix.
Recall that the transpose of an orthogonal matrix is also its inverse, so if we multiply on the right by $X^T$, we get the following simplification
\begin{align}
X &= U S V^T \\
X X^T &= U S V^T (U S V^T)^T \\
&= U S V^T V S U^T \\
&= U S^2 U^T
\end{align}
Comparing with the eigendecomposition of a matrix $A = W \Lambda W^{-1}$, we see that SVD gives us the eigendecomposition of the matrix $XX^T$, which, as we have just seen, is basically a scaled version of the covariance for a zero-mean data matrix, with the eigenvectors given by $U$ and the eigenvalues by $S^2$ (scaled by $n-1$).
```python
u, s, v = np.linalg.svd(x)
```
```python
# reproduce previous results yet again!
e2 = s**2/(n-1)
v2 = u
plt.scatter(x[0,:], x[1,:], alpha=0.2)
for e_, v_ in zip(e2, v2.T):   # columns of u are the eigenvectors
    plt.plot([0, 3*e_*v_[0]], [0, 3*e_*v_[1]], 'r-', lw=2)
plt.axis([-3,3,-3,3]);
```
```python
v1 # from eigenvectors of covariance matrix
```
```python
v2 # from SVD
```
```python
e1 # from eigenvalues of covariance matrix
```
```python
e2 # from SVD
```
## Exercises: covariance matrix manipulations in Python (taken from the Duke course)
Given the following covariance matrix
```python
A = np.array([[2,1],[1,4]])
```
use Python to do these basic tasks (that is, do not do them by hand but use `scipy.linalg` functions).
1. Show that the eigenvectors of $A$ are orthogonal.
1. What is the vector representing the first principal component direction?
1. Find $A^{-1}$ without performing a matrix inversion.
1. What are the coordinates of the data points (0, 1) and (1, 1) in the standard basis expressed as coordinates of the principal components?
1. What is the proportion of variance explained if we keep only the projection onto the first principal component?
We'll give you a headstart on the Python manipulations (you should take a look at the `scipy.linalg` documentation).
```python
A = np.array([[2,1],[1,4]])
eigval, eigvec = la.eig(A)
```
```python
```
- Find the matrix $A$ that results in rotating the standard vectors in $\mathbb{R}^2$ by 30 degrees counter-clockwise and stretches $e_1$ by a factor of 3 and contracts $e_2$ by a factor of $0.5$.
- What is the inverse of this matrix? How you find the inverse should reflect your understanding.
The effects of the matrix $A$ and $A^{-1}$ are shown in the figure below:
```python
```
We observe some data points $(x_i, y_i)$, and believe that an appropriate model for the data is that
$$
f(x) = ax^2 + bx^3 + c\sin{x}
$$
with some added noise. Find optimal values of the parameters $\beta = (a, b, c)$ that minimize $\Vert y - f(x) \Vert^2$
1. using `scipy.linalg.lstsq`
2. solving the normal equations $X^TX \beta = X^Ty$
3. using `scipy.linalg.svd`
In each case, plot the data and fitted curve using `matplotlib`.
Data
```
x = array([ 3.4027718 , 4.29209002, 5.88176277, 6.3465969 , 7.21397852,
8.26972154, 10.27244608, 10.44703778, 10.79203455, 14.71146298])
y = array([ 25.54026428, 29.4558919 , 58.50315846, 70.24957254,
90.55155435, 100.56372833, 91.83189927, 90.41536733,
90.43103028, 23.0719842 ])
```
```python
x = np.array([ 3.4027718 , 4.29209002, 5.88176277, 6.3465969 , 7.21397852,
8.26972154, 10.27244608, 10.44703778, 10.79203455, 14.71146298])
y = np.array([ 25.54026428, 29.4558919 , 58.50315846, 70.24957254,
90.55155435, 100.56372833, 91.83189927, 90.41536733,
90.43103028, 23.0719842 ])
```
```python
```
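For reference, here is a sketch of all three approaches applied to the data above (the design matrix follows the model $f(x) = ax^2 + bx^3 + c\sin{x}$):
```python
import numpy as np
import scipy.linalg as la
import matplotlib.pyplot as plt

X = np.column_stack([x**2, x**3, np.sin(x)])      # design matrix for (a, b, c)

beta_lstsq = la.lstsq(X, y)[0]                    # 1. scipy.linalg.lstsq
beta_normal = la.solve(X.T @ X, X.T @ y)          # 2. normal equations
U, S, Vt = la.svd(X, full_matrices=False)         # 3. SVD pseudoinverse
beta_svd = Vt.T @ ((U.T @ y) / S)
print(beta_lstsq, beta_normal, beta_svd, sep='\n')   # all three should agree

xs = np.linspace(x.min(), x.max(), 200)
plt.scatter(x, y, label='data')
plt.plot(xs, np.column_stack([xs**2, xs**3, np.sin(xs)]) @ beta_lstsq, label='fit')
plt.legend()
plt.show()
```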
---
```python
import numpy as np
import time
import matplotlib.pyplot as plt
from sklearn import linear_model
```
---
# Item XVI
Let $f(x) = \sum_{i=1}^n \alpha_i \text{sinc}(x-x_i)$ where $\text{sinc}(x) = \frac{\sin(x)}{x}$. Compute the total number of operations needed for evaluation of $f(x)$ at $x_j$, for $j=1 \dots n$. Also implement this algorithm and validate your estimation.
---
The first observation is that $\text{sinc}(x-x_i)$ should be defined as $\text{sinc}(0)=1$ when $x=x_i$. In that case it is not necessary to compute the corresponding $\sin$ or the division.
So, to evaluate the function at a particular point $x$, the number of sines to compute is $n$; but if $x=x_j$ for some $j$, only $n-1$ sines are needed. The same count applies to the divisions by $x-x_i$, the subtractions to form $x-x_i$, and the multiplications by $\alpha_i$, plus $n-1$ additions to accumulate the final sum.
| Operation | Times performed | Total ops. |
|:------| -----:| ---: |
| compute $x-x_i$ | $n$ | $n$ |
| compute $\sin(x-x_i)$ | $n-1$ | $(n-1)C$ |
| divide $\sin(x-x_i)$ by $x-x_i$ | $n-1$ | $n-1$ |
| multiply by $\alpha_i$ | $n-1$ | $n-1$ |
| sum over $i$ | $n-1$ | $n-1$ |
This results in $4n-3+C(n-1)$ operations for each $f(x_j)$, where $C$ is the cost of computing a sine. Adding up the operations required to compute all of them, we end up with $n(4n-3+C(n-1)) = O(n^2)$.
Of course, additional optimizations could be done if some relations between the $x_j$ hold.
```python
def sinc_sum(xs, alphas):
    """Return f(x) = sum_i alphas[i] * sinc(x - xs[i]), with sinc(0) = 1."""
    n = len(xs)
    def loc_func(x):
        delta_xs = x - xs
        sinc = np.ones(n)                      # sinc(0) = 1 where x == x_i
        nz = delta_xs != 0
        sinc[nz] = np.sin(delta_xs[nz]) / delta_xs[nz]
        return np.sum(alphas * sinc)
    return loc_func
```
```python
N = np.logspace(1, 4.4, num=30, dtype='int')   # from 10 to ~25000
ts = []
for n in N:
    xsi = np.random.random(n)
    asi = np.random.random(n)
    f = sinc_sum(xsi, asi)
    start = time.time()
    f_evals = [f(x) for x in xsi]
    end = time.time()
    ts.append(end - start)
ts = np.array(ts)
```
```python
plt.plot(N,ts,'o-')
plt.grid(True)
plt.show()
```
We perform a linear regression with the last points, in logarithmic scale
```python
regr = linear_model.LinearRegression()
start = len(N)//2
logN = np.log(N.reshape((-1,1)))
logt = np.log(ts.reshape((-1,1)))
regr.fit(logN[start:],logt[start:])
# Check predictions:
res = np.exp(regr.predict(logN))
plt.loglog(N,res,label="fit")
plt.loglog(N,ts,label="real times")
plt.legend()
```
```python
print("regr. coef : %f"%regr.coef_)
print("regr. intercept: %f"%regr.intercept_)
```
regr. coef : 1.794841
regr. intercept: -15.550982
We can see that the resulting fit was:
\begin{align}
\log(t) &= 1.794841 \log(n) -15.550982 \\
t &= e^{1.794841 \log(n) -15.550982} \\
t &= 1.76\cdot10^{-7} n^{1.794841}
\end{align}
which is roughly consistent with the expected $O(n^2)$ scaling.
---
$\newcommand{\xv}{\mathbf{x}}
\newcommand{\wv}{\mathbf{w}}
\newcommand{\yv}{\mathbf{y}}
\newcommand{\zv}{\mathbf{z}}
\newcommand{\Chi}{\mathcal{X}}
\newcommand{\R}{\rm I\!R}
\newcommand{\sign}{\text{sign}}
\newcommand{\Tm}{\mathbf{T}}
\newcommand{\Xm}{\mathbf{X}}
\newcommand{\Zm}{\mathbf{Z}}
\newcommand{\I}{\mathbf{I}}
\newcommand{\muv}{\boldsymbol\mu}
\newcommand{\Sigmav}{\boldsymbol\Sigma}
$
### ITCS6155
# Reinforcement Learning
Along with supervised and unsupervised learning, reinforcement learning is one of the most interesting fields of machine learning.
As we briefly discussed in the first week of class, reinforcement learning differs from other machine learning paradigms as summarized below:
- only reward signal as feedback,
- the feedback can be delayed,
- sequential data,
- interaction based on the actions taken.
Reinforcement learning resembles human learning or animal training, where good behavior is rewarded.
When a series of actions ends with good results, we can **reinforce** those actions by giving rewards.
## Applications
There are many possible applications:
- walking robot,
- playing boardgames,
- controlling joysticks for video games,
- smart thermostat,
- stock trading,
- music personalization,
- recommendation system,
- marketing,
- product delivery,
- and so on...
## Terminology
- Reward: $R_t$ is a scalar feedback signal at time $t$. It indicates how well the agent is doing at that time.
- State: $S_t$ represents the current situation, which determines what happens next.
- Action: $A_t$ is how an agent affects the environment, which can change the state.
- Observation: $O_t$ is how an agent perceives the world in state $S_t$.
- Policy: A function that determines an agent's behavior or a function that selects its action, $\pi(S_t)$.
<b> <font color="red">Q: Think about the rewards for the example applications above, and answer the possible rewards for them.</font></b>
- walking robot: Positive reward for continuous successful walking; negative reward if it falls down.
- playing boardgames: Positive reward for winning; negative reward for losing the game.
- controlling joysticks for video games: Positive reward for staying in the game; negative reward for losing to the opponent.
- smart thermostat: Not sure!
- stock trading: Positive reward for good prediction of profit; negative reward for losing money.
- music personalization: Positive reward when the user keeps playing the recommended music; negative reward when the user skips the recommendation or gives it a low rating.
- recommendation system: Same as `music personalization`.
- marketing: Positive reward for sales growth; negative reward for the opposite.
- product delivery: Positive reward for confirmed delivery of the product; negative reward for missing the on-time notification.
## Modeling
How can we simplify the listed problems so that we can solve them? What we have are sequential decision-making or control problems.
With the assumption of the **Markov property**, we can model these problems as Markov Decision Processes (MDPs).
Here, the definition of Markov property is as follows,
**Definition**: A state $S_t$ is Markov if and only if
<br/><br/>
$$P [ S_{t+1} | S_t ] = P[ S_{t+1} | S_1, ..., S_t ].$$
That is, the state transition model has the Markov property when the future depends only on the current state, not on the past.
### Markov Decision Processes
An MDP can then be defined as follows; the example state-transition diagram below shows the transition probabilities and rewards, along with the discount factor. The `discount factor` ensures mathematical tractability and also models uncertainty about the future.
<font color='red' size=5>$ ( S, A, P, R, \gamma ) $ </font>
* $S$ : a finite set of states
* $A$ : a finite set of actions
* $P$ : a state transition probability
* $P^a_{ss^\prime} = P [ S_{t+1} = s^\prime | S_t = s, A_t = a ]$
* $R$ : a reward function
* $\gamma$ : a discount factor
<center> MDP with 3 states and 2 actions (wikipedia) </center>
### Policy
The policy $\pi(S_t)$ is a function that maps from a state to an action. Thus, it can be easily represented by a probability:
$$ \pi(a | s) = P[A_t = a | S_t = s]. $$
Or it can be a machine learning model that generates such probabilities to define an agent's behavior.
## Goal
Now, with the MDP model, what we want to do is find a policy $\pi$ that `maximizes the long-term reward`. The long-term reward can be modeled as a sum of rewards. Since the current reward is not equivalent to future rewards, we define the discounted **Return**, $G_t$,
$$ G_t = R_{t+1} + \gamma R_{t+2} + \cdots = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}. $$
From this, we can model the long-term value of a certain state $s$ with the return:
$$ V(s) = E [ G_t | S_t = s ]. $$
So we set our objective to maximize the expected return. The optimal policy can then be developed by defining an evaluation function over state-action pairs, the `Q function`:
$$
\begin{align}
Q^\pi(s, a) &= E_\pi [ G_t | S_t = s, A_t = a ] \\
&= E_\pi [ R_{t+1} + \gamma Q^\pi (S_{t+1}, A_{t+1}) | S_t = s, A_t = a ]
\end{align}
$$
## How to reach the goal? / How to find the optimal policy?
There are several ways to reach the goal such as `value iteration`, `policy iteration`, `linear programming`, and `temporal difference learning`.
First, let us examine the bootstrapping representation of the value function to solve the problem.
This is called **Bellman equation**.
$$
\begin{aligned}
V(s) &= E [ G_t | S_t = s ] \\
&= E [ R_{t+1} + \gamma R_{t+2} + \cdots | S_t = s ] \\
&= E [ R_{t+1} + \gamma ( R_{t+2} + \gamma R_{t+3} + \cdots ) | S_t = s ] \\
&= E [ R_{t+1} + \gamma G_{t+1} | S_t = s ] \\
&= E [ R_{t+1} + \gamma V(s_{t+1}) | S_t= s ]\\
\\
V(s) &= R(s) + \gamma \sum_{s^\prime \in S} P_{ss^\prime} V(s^\prime)
\end{aligned}
$$
With matrix representation, we can rewrite this as follows:
$$
\begin{align}
\begin{pmatrix}
V_1 \\
V_2 \\
\vdots \\
V_n
\end{pmatrix} &=
\begin{pmatrix}
R_1 \\
R_2 \\
\vdots \\
R_n
\end{pmatrix} +
\gamma
\begin{pmatrix}
P_{11} & P_{12} & \cdots & P_{1n} \\
P_{21} & P_{22} & \cdots & P_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
P_{n1} & P_{n2} & \cdots & P_{nn}
\end{pmatrix}
\begin{pmatrix}
V_1 \\
V_2 \\
\vdots \\
V_n
\end{pmatrix}
\\
\\
V &= R + \gamma PV
\end{align}
$$
This can be directly solved with
$$ V = (I - \gamma P)^{-1} R. $$
Considering the computational cost of matrix inversion, however, we often prefer iterative solutions such as `dynamic programming`, `Monte-Carlo evaluation`, or `temporal difference learning`.
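For a small state space, though, the direct solve is trivial. A sketch for an arbitrary 3-state Markov reward process (the transition matrix and rewards below are made up for illustration; $\gamma < 1$ keeps $I - \gamma P$ invertible):
```python
import numpy as np

gamma = 0.9
P = np.array([[0.5, 0.3, 0.2],     # made-up transition probabilities
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
R = np.array([1.0, 0.0, 5.0])      # made-up per-state rewards

V = np.linalg.solve(np.eye(3) - gamma * P, R)   # V = (I - gamma P)^{-1} R
print(V)
```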
# Value Iteration
* For state transition from $i$ to $j$ with action $k$
$$ V^{n+1} (s_i) = \max_k \Big[ R_i + \gamma \sum_{j=1}^N P^k_{ij} V^n (s_j) \Big] $$
* Value Iteration:
<font size=3>
$$
\begin{align}
n=0 \quad \forall i, \quad &V^1 (s_i) = \max_k \Big[ R_i + \gamma \sum_{j=1}^N P^k_{ij} V^0 (s_j) \Big] \\
n=1 \quad \forall i, \quad &V^2 (s_i) = \max_k \Big[ R_i + \gamma \sum_{j=1}^N P^k_{ij} V^1 (s_j) \Big] \\
n=2 \quad \forall i, \quad &V^3 (s_i) = \max_k \Big[ R_i + \gamma \sum_{j=1}^N P^k_{ij} V^2 (s_j) \Big] \\
\vdots
\end{align}
$$
</font>
* Convergence can be tested as follows:
$$ \max_i \Big| V^{n+1}(s_i) - V^n(s_i) \Big| \lt \epsilon. $$
* Computing values for all $i$'s with Dynamic Programming.
Here follows an example from Wikipedia.
Let $\gamma = 1$. The values for each state can be iteratively updated as the following table.
| t | $V^\pi_{S_0}$ | $V^\pi_{S_1}$ | $V^\pi_{S_2}$ |
|:--|:-------------|:-------------|:-------------|
| 0 | 0 | 0.0 | 0 |
| 1 | 0 | 3.5 | 0.75 |
| 2 | 0 | 3.85 | 0.75 |
| 3 | 0.75 | 4.035 | 1.155 |
| 4 | 1.155 | 4.6595 | 1.5975 |
| 5 | 1.5975 | 5.09395 | 2.08335 |
In Python, we can write the code as follows.
```python
import collections
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
```python
gamma = 1.0
n_states = 3
n_actions = 2

# value vector
V = np.zeros(n_states).reshape((-1, 1))
# reward dictionary: key (action, from, to)
R = {(1,2,0): -1, (0,1,0): 5}
# transition probability: key (action, from, to)
P = {(0,0,0): 0.5, (0,0,2): 0.5, (1,0,2): 1.0,
     (0,1,0): 0.7, (0,1,1): 0.1, (0,1,2): 0.2,
     (1,1,1): 0.95, (1,1,2): 0.05,
     (0,2,0): 0.4, (0,2,2): 0.6,
     (1,2,0): 0.3, (1,2,1): 0.3, (1,2,2): 0.4}

def Rf(s, sn, a):
    if (a, s, sn) in R:
        return R[(a, s, sn)]
    return 0

def Pf(s, sn, a):
    if (a, s, sn) in P:
        return P[(a, s, sn)]
    return 0.

# element-wise versions (no matrices)
def Rs(s, a):
    # expected immediate reward for taking action a in state s
    r = 0
    for j in range(3):
        if (a, s, j) in P:
            r += P[(a, s, j)] * Rf(s, j, a)   # note: Rf takes (s, sn, a)
    return r

def Pfv(s, a):
    # transition probability column vector P(. | s, a)
    ret = np.zeros(3).reshape((-1, 1))
    for j in range(3):
        if (a, s, j) in P:
            ret[j] = P[(a, s, j)]
    return ret
```
```python
Rmat = np.zeros((n_states, n_actions))
Pmat = np.zeros((n_actions, n_states, n_states))
for i in range(n_states):
    for k in range(n_actions):
        for j in range(n_states):
            if (k, i, j) in P:
                Rmat[i, k] += P[(k, i, j)] * Rf(i, j, k)
for k, i, j in P.keys():
    Pmat[k, i, j] = P[(k, i, j)]

for t in range(5):
    print("t=", t, "\tV = ", V.T)
    # Bellman optimality backup
    newV = Rmat + gamma * np.sum(Pmat @ V, axis=2).T
    V[:] = np.max(newV, axis=1, keepdims=True)
print("t=", t+1, "\tV = ", V.T)
```
t= 0 V = [[0. 0. 0.]]
t= 1 V = [[0. 3.5 0. ]]
t= 2 V = [[0. 3.85 0.75]]
t= 3 V = [[0.75 4.035 1.155]]
t= 4 V = [[1.155 4.6595 1.5975]]
t= 5 V = [[1.5975 5.09395 2.08335]]
```python
gamma = 1
for t in range(50):
    print("t=", t, "\tV = ", V.T)
    for i in range(3):
        # Bellman optimality backup for state i
        V[i] = max(Rs(i, 0) + gamma * np.sum(Pfv(i, 0) * V),
                   Rs(i, 1) + gamma * np.sum(Pfv(i, 1) * V))
print("t=", t+1, "\tV = ", V.T)
```
t= 0 V = [[1.5975 5.09395 2.08335]]
t= 1 V = [[2.08335 4.94342 2.941371]]
t= 2 V = [[2.941371 4.84331755 3.51195497]]
t= 3 V = [[3.51195497 4.77674942 3.8913933 ]]
t= 4 V = [[3.8913933 4.73248161 4.1437198 ]]
t= 5 V = [[4.1437198 4.70304352 4.31151691]]
t= 6 V = [[4.31151691 4.68346719 4.423102 ]]
t= 7 V = [[4.423102 4.67044893 4.49730608]]
t= 8 V = [[4.49730608 4.66179179 4.54665179]]
t= 9 V = [[4.54665179 4.65603479 4.57946669]]
t= 10 V = [[4.57946669 4.65220639 4.6012886 ]]
t= 11 V = [[4.6012886 4.6496605 4.61580017]]
t= 12 V = [[4.61580017 4.64796748 4.62545036]]
t= 13 V = [[4.62545036 4.64684162 4.63186774]]
t= 14 V = [[4.63186774 4.64609293 4.6361353 ]]
t= 15 V = [[4.6361353 4.64559505 4.63897322]]
t= 16 V = [[4.63897322 4.64526396 4.64086044]]
t= 17 V = [[4.64086044 4.64504378 4.64211544]]
t= 18 V = [[4.64211544 4.64489736 4.64295002]]
t= 19 V = [[4.64295002 4.6448 4.64350501]]
t= 20 V = [[4.64350501 4.64473525 4.64387408]]
t= 21 V = [[4.64387408 4.64469219 4.64411952]]
t= 22 V = [[4.64411952 4.64466356 4.64428273]]
t= 23 V = [[4.64428273 4.64464452 4.64439126]]
t= 24 V = [[4.64439126 4.64463185 4.64446344]]
t= 25 V = [[4.64446344 4.64462343 4.64451144]]
t= 26 V = [[4.64451144 4.64461783 4.64454336]]
t= 27 V = [[4.64454336 4.64461411 4.64456458]]
t= 28 V = [[4.64456458 4.64461163 4.6445787 ]]
t= 29 V = [[4.6445787 4.64460999 4.64458808]]
t= 30 V = [[4.64458808 4.64460889 4.64459433]]
t= 31 V = [[4.64459433 4.64460816 4.64459848]]
t= 32 V = [[4.64459848 4.64460768 4.64460124]]
t= 33 V = [[4.64460124 4.64460736 4.64460307]]
t= 34 V = [[4.64460307 4.64460714 4.64460429]]
t= 35 V = [[4.64460429 4.644607 4.6446051 ]]
t= 36 V = [[4.6446051 4.6446069 4.64460564]]
t= 37 V = [[4.64460564 4.64460684 4.644606 ]]
t= 38 V = [[4.644606 4.6446068 4.64460624]]
t= 39 V = [[4.64460624 4.64460677 4.6446064 ]]
t= 40 V = [[4.6446064 4.64460675 4.64460651]]
t= 41 V = [[4.64460651 4.64460674 4.64460658]]
t= 42 V = [[4.64460658 4.64460673 4.64460662]]
t= 43 V = [[4.64460662 4.64460673 4.64460665]]
t= 44 V = [[4.64460665 4.64460672 4.64460668]]
t= 45 V = [[4.64460668 4.64460672 4.64460669]]
t= 46 V = [[4.64460669 4.64460672 4.6446067 ]]
t= 47 V = [[4.6446067 4.64460672 4.6446067 ]]
t= 48 V = [[4.6446067 4.64460672 4.64460671]]
t= 49 V = [[4.64460671 4.64460672 4.64460671]]
t= 50 V = [[4.64460671 4.64460672 4.64460671]]
# Policy Iteration
* Start with an initial policy $\pi^0$.
* Iteratively,
* Evaluate policy ($V^n(x) = V^{\pi^n}(x)$):
$$ V^n(s) = R(s, a = \pi^n(s)) + \gamma \sum_{s^\prime} P(s^\prime | s, a = \pi^n(s)) V^{n}(s^\prime) $$
* Improve policy:
$$ \pi_{n+1}(s) = \arg \max_a \Big[ R(s, a) + \gamma \sum_{s^\prime} P(s^\prime | s, a) V^{n}(s^\prime) \Big] $$
* Stop condition:
* Policy does not change anymore (for about 10 iterations)
* The changes in evaluation of values are minor
```python
n_iter = 10
gamma = 1.
V = np.zeros(3)
pi = np.random.randint(2, size=n_states)
for n in range(n_iter):
    print("n = ", n, "\t", pi, end="")
    # evaluate the current policy (one backup sweep)
    for s in range(n_states):
        V[s] = Rs(s, pi[s]) + gamma * np.sum([Pf(s, sn, pi[s]) * V[sn]
                                              for sn in range(n_states)])
    print("\t", V)
    # improve the policy greedily with respect to V
    for s in range(n_states):
        pi[s] = np.argmax([Rs(s, a) + gamma * np.sum([Pf(s, sn, a) * V[sn]
                                                      for sn in range(n_states)])
                           for a in range(n_actions)])
```
n = 0 [1 0 0] [0. 0. 0.]
n = 1 [0 0 0] [0. 0. 0.]
n = 2 [0 0 0] [0. 0. 0.]
n = 3 [0 0 0] [0. 0. 0.]
n = 4 [0 0 0] [0. 0. 0.]
n = 5 [0 0 0] [0. 0. 0.]
n = 6 [0 0 0] [0. 0. 0.]
n = 7 [0 0 0] [0. 0. 0.]
n = 8 [0 0 0] [0. 0. 0.]
n = 9 [0 0 0] [0. 0. 0.]
# Linear Programming
[Manne '60]
\begin{equation*}
\begin{aligned}
& \underset{x}{\text{minimize}}
& & \sum_x V(x) \\
& \text{subject to}
& & V(x) \geq R(x, u) + \gamma \sum_{x^\prime} P(x^\prime | x, u) V^{n}(x^\prime) &\forall x, a.
\end{aligned}
\end{equation*}
# Temporal Difference Learning
When there is a large number of states, the memory requirements grow quickly (exponentially in the number of state variables).
*Temporal difference (TD) learning* assumes that the agent knows only partial information about the MDP.
Using only the observed transition from the current state to the next, without a model of the transition probabilities, TD lets the agent learn by exploring the environment.
With an estimate $\hat{V}(s)$ of the value function $V(s)$,
$$
\begin{align}
V(s_t) &= R_{t+1} + \gamma V(s_{t+1}) \\
V(s_t) &\sim R_{t+1} + \gamma \hat{V}(s_{t+1}) \\
\Rightarrow \quad \delta_t &= R_{t+1} + \gamma \hat{V}(s_{t+1}) - V(s_t).
\end{align}
$$
Here, $\delta_t$ represents the *temporal difference error*.
We can use this error as a gradient to update the value estimation.
$$
\begin{align}
V(s_t) &\leftarrow V(s_t) + \alpha \delta_t \\
V(s_t) &\leftarrow V(s_t) + \alpha (R_{t+1} + \gamma \hat{V}(s_{t+1}) - V(s_t))
\end{align}
$$
## Example Model
```python
import sys
import time

# model (problem to solve)
# a: 0 - stay (loop), 1 - move to the next state
# r: +1 for looping, 0 for moving, +100 when reaching state 4
n_states = 5
n_actions = 2

def transition(s, a):
    s1 = s + a
    if s == s1:
        r = 1                       # stayed in place
    elif s1 == n_states - 1:
        r = 100                     # reached the goal state
    else:
        r = 0
    return s1, r

def pause_print():
    sys.stdout.write('\r')
    sys.stdout.write("\033[F")
    sys.stdout.flush()
    time.sleep(0.5)
```
```python
# Example TD(0) policy evaluation
alpha = 1.0
gamma = 0.9
V = np.zeros(n_states)              # 5 states
pi = np.random.randint(n_actions, size=n_states)
np.set_printoptions(precision=2)
for k in range(2):
    if k == 1:
        pi = np.ones(n_states, dtype=int)   # second pass: always move right
    print("Policy:", pi)
    # evaluate the policy by following it and applying TD(0) updates
    for e in range(20):
        s = 0
        print("\r\tTraj: ", s, end=" ")
        for step in range(10):
            a = pi[s]
            s1, r1 = transition(s, a)
            V[s] += alpha * (r1 + gamma * V[s1] - V[s])   # TD(0) update
            s = s1
            print(a, s, end=" ")
            if s == n_states - 1:
                break
        print("\t", V, end="\n")
        # pause_print()
    print()
```
Policy: [1 0 0 0 1]
Traj: 0 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 [0. 6.13 0. 0. 0. ]
Traj: 0 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 [5.51 8.5 0. 0. 0. ]
Traj: 0 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 [7.65 9.42 0. 0. 0. ]
Traj: 0 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 [8.48 9.77 0. 0. 0. ]
Traj: 0 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 [8.8 9.91 0. 0. 0. ]
Traj: 0 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 [8.92 9.97 0. 0. 0. ]
Traj: 0 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 [8.97 9.99 0. 0. 0. ]
Traj: 0 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 [8.99 9.99 0. 0. 0. ]
Traj: 0 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 [ 9. 10. 0. 0. 0.]
Traj: 0 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 [ 9. 10. 0. 0. 0.]
Traj: 0 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 [ 9. 10. 0. 0. 0.]
Traj: 0 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 [ 9. 10. 0. 0. 0.]
Traj: 0 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 [ 9. 10. 0. 0. 0.]
Traj: 0 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 [ 9. 10. 0. 0. 0.]
Traj: 0 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 [ 9. 10. 0. 0. 0.]
Traj: 0 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 [ 9. 10. 0. 0. 0.]
Traj: 0 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 [ 9. 10. 0. 0. 0.]
Traj: 0 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 [ 9. 10. 0. 0. 0.]
Traj: 0 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 [ 9. 10. 0. 0. 0.]
Traj: 0 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 [ 9. 10. 0. 0. 0.]
Policy: [1 1 1 1 1]
Traj: 0 1 1 1 2 1 3 1 4 [ 9. 0. 0. 100. 0.]
Traj: 0 1 1 1 2 1 3 1 4 [ 0. 0. 90. 100. 0.]
Traj: 0 1 1 1 2 1 3 1 4 [ 0. 81. 90. 100. 0.]
Traj: 0 1 1 1 2 1 3 1 4 [ 72.9 81. 90. 100. 0. ]
Traj: 0 1 1 1 2 1 3 1 4 [ 72.9 81. 90. 100. 0. ]
Traj: 0 1 1 1 2 1 3 1 4 [ 72.9 81. 90. 100. 0. ]
Traj: 0 1 1 1 2 1 3 1 4 [ 72.9 81. 90. 100. 0. ]
Traj: 0 1 1 1 2 1 3 1 4 [ 72.9 81. 90. 100. 0. ]
Traj: 0 1 1 1 2 1 3 1 4 [ 72.9 81. 90. 100. 0. ]
Traj: 0 1 1 1 2 1 3 1 4 [ 72.9 81. 90. 100. 0. ]
Traj: 0 1 1 1 2 1 3 1 4 [ 72.9 81. 90. 100. 0. ]
Traj: 0 1 1 1 2 1 3 1 4 [ 72.9 81. 90. 100. 0. ]
Traj: 0 1 1 1 2 1 3 1 4 [ 72.9 81. 90. 100. 0. ]
Traj: 0 1 1 1 2 1 3 1 4 [ 72.9 81. 90. 100. 0. ]
Traj: 0 1 1 1 2 1 3 1 4 [ 72.9 81. 90. 100. 0. ]
Traj: 0 1 1 1 2 1 3 1 4 [ 72.9 81. 90. 100. 0. ]
Traj: 0 1 1 1 2 1 3 1 4 [ 72.9 81. 90. 100. 0. ]
Traj: 0 1 1 1 2 1 3 1 4 [ 72.9 81. 90. 100. 0. ]
Traj: 0 1 1 1 2 1 3 1 4 [ 72.9 81. 90. 100. 0. ]
Traj: 0 1 1 1 2 1 3 1 4 [ 72.9 81. 90. 100. 0. ]
# Control Problems and Q function
For control problems, we defined the Q function above to evaluate a state and an action together.
Updating the Q values with TD learning is similar to the previous update, with two variants to consider.
First, we can update Q under the assumption that we follow a given behavior policy. This is called *on-policy control*, or **SARSA**.
$$
Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha ( R_{t+1} + \gamma Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t))
$$
Alternatively, without assuming that we keep following the behavior policy, we can update Q towards the best action available in the next state. This is called *off-policy control*, or **Q-learning**.
$$
Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha ( R_{t+1} + \gamma \max_a Q(s_{t+1}, a) - Q(s_t, a_t))
$$
The pseudocode for each algorithm is given below.
**[Algorithm: TD Learning]**
**[Algorithm: SARSA]**
**[Algorithm: Q-learning]**
# Choosing an Action
Picking an action can be as simple as selecting the one with the maximum Q value, i.e. acting *greedily*:
$$
a^* = \arg \max_a Q(S_t, a)
$$
However, this limits the experience available for developing a good Q estimate, and eventually a good policy.
Without new data, greedy action selection will repeat the same actions, repeatedly *exploiting* your current knowledge. Thus, you need to *explore* non-greedy actions as well, to gather the experience needed to improve the Q estimate.
This is called the "exploration-exploitation dilemma."
One way to handle this dilemma is $\epsilon$-greedy action selection. With a parameter $\epsilon \in [0, 1]$, we can control the balance between exploration and exploitation. When $\epsilon = 0$, actions are selected greedily; when $\epsilon = 1$, actions are selected uniformly at random.
```python
# action selection
def greedy(Q, s):
return np.argmax(Q[s]) # greedy action selection
def e_greedy(Q, s, e):
if np.random.rand() < e:
return np.random.randint(n_actions)
else:
return greedy(Q,s)
```
```python
# SARSA example
alpha = 1. # learning rate
gamma = 0.9 # discount factor
epsilon = 0.1
# tabular approximation
Q = np.random.rand(n_states, n_actions) # 5 states and 2 actions
for e in range(20):
s = 0
    a = e_greedy(Q, s, epsilon) # epsilon-greedy action selection
for step in range(100):
s1, r1 = transition(s, a)
        a1 = e_greedy(Q, s1, epsilon)  # next action chosen from the *next* state (on-policy)
        # SARSA update of the Q table
        Q[s, a] += alpha * (r1 + gamma * Q[s1, a1] - Q[s, a])
#print("s: ", s, "a: ", a, "s1:", s1, "a1:", a1, "Q: ", Q[s,a])
s, a = s1, a1
if s == n_states-1:
break
print("Final Q: ", Q)
print("Policy:", np.argmax(Q, axis=1))
```
Final Q: [[ 2.52 73.17]
[ 74.32 81.47]
[ 82.47 90.52]
[ 91.52 100.57]
[ 0.42 0.64]]
Policy: [1 1 1 1 1]
```python
# Q-learning example
alpha = 1. # learning rate
gamma = 0.9 # discount factor
epsilon = 0.1
# tabular approximation
Q = np.random.rand(n_states, n_actions) # 5 states and 2 actions
for e in range(20):
s = 0
for step in range(100):
        a = e_greedy(Q, s, epsilon) # epsilon-greedy action selection
s1, r1 = transition(s, a)
        # Q-learning update of the Q table (off-policy: uses the max over next actions)
        Q[s, a] += alpha * (r1 + gamma * np.max(Q[s1, :]) - Q[s,a])
        print("s: ", s, "a: ", a, "s1:", s1, "Q: ", Q[s,a])
s = s1
if s == n_states-1:
break
print("Final Q: ", Q)
print("Policy:", np.argmax(Q, axis=1))
```
s: 0 a: s1: 1 a1: 1 Q: 0.7561608580140299
s: 1 a: s1: 1 a1: 1 Q: 1.75616085801403
s: 1 a: s1: 1 a1: 1 Q: 2.580544772212627
s: 1 a: s1: 1 a1: 1 Q: 3.3224902949913644
s: 1 a: s1: 1 a1: 1 Q: 3.990241265492228
s: 1 a: s1: 1 a1: 1 Q: 4.591217138943005
s: 1 a: s1: 1 a1: 1 Q: 5.132095425048704
s: 1 a: s1: 1 a1: 1 Q: 5.618885882543834
s: 1 a: s1: 2 a1: 1 Q: 0.7505308201058045
s: 2 a: s1: 3 a1: 1 Q: 0.42173196249345
s: 3 a: s1: 3 a1: 1 Q: 1.4217319624934501
s: 3 a: s1: 3 a1: 1 Q: 2.2795587662441053
s: 3 a: s1: 3 a1: 1 Q: 3.0516028896196947
s: 3 a: s1: 3 a1: 1 Q: 3.7464426006577254
s: 3 a: s1: 3 a1: 1 Q: 4.371798340591953
s: 3 a: s1: 3 a1: 1 Q: 4.9346185065327575
s: 3 a: s1: 3 a1: 1 Q: 5.441156655879482
s: 3 a: s1: 3 a1: 1 Q: 5.897040990291534
s: 3 a: s1: 3 a1: 1 Q: 6.307336891262381
s: 3 a: s1: 3 a1: 1 Q: 6.676603202136143
s: 3 a: s1: 3 a1: 1 Q: 7.008942881922528
s: 3 a: s1: 3 a1: 1 Q: 7.308048593730275
s: 3 a: s1: 3 a1: 1 Q: 7.577243734357248
s: 3 a: s1: 3 a1: 1 Q: 7.819519360921523
s: 3 a: s1: 3 a1: 1 Q: 8.037567424829371
s: 3 a: s1: 3 a1: 1 Q: 8.233810682346434
s: 3 a: s1: 3 a1: 1 Q: 8.410429614111791
s: 3 a: s1: 3 a1: 1 Q: 8.569386652700612
s: 3 a: s1: 3 a1: 1 Q: 8.71244798743055
s: 3 a: s1: 3 a1: 1 Q: 8.841203188687496
s: 3 a: s1: 3 a1: 1 Q: 8.957082869818747
s: 3 a: s1: 4 a1: 1 Q: 100.57544842925776
s: 0 a: s1: 1 a1: 1 Q: 5.0569972942894506
s: 1 a: s1: 1 a1: 1 Q: 6.0569972942894506
s: 1 a: s1: 1 a1: 1 Q: 6.451297564860505
s: 1 a: s1: 1 a1: 1 Q: 6.806167808374455
s: 1 a: s1: 1 a1: 1 Q: 7.12555102753701
s: 1 a: s1: 1 a1: 1 Q: 7.412995924783309
s: 1 a: s1: 1 a1: 1 Q: 7.671696332304978
s: 1 a: s1: 1 a1: 1 Q: 7.904526699074481
s: 1 a: s1: 1 a1: 1 Q: 8.114074029167032
s: 1 a: s1: 1 a1: 1 Q: 8.302666626250328
s: 1 a: s1: 1 a1: 1 Q: 8.472399963625296
s: 1 a: s1: 1 a1: 1 Q: 8.625159967262768
s: 1 a: s1: 1 a1: 1 Q: 8.762643970536491
s: 1 a: s1: 1 a1: 1 Q: 8.886379573482841
s: 1 a: s1: 1 a1: 1 Q: 8.997741616134558
s: 1 a: s1: 1 a1: 1 Q: 9.097967454521102
s: 1 a: s1: 1 a1: 1 Q: 9.188170709068991
s: 1 a: s1: 1 a1: 1 Q: 9.269353638162093
s: 1 a: s1: 1 a1: 1 Q: 9.342418274345883
s: 1 a: s1: 2 a1: 1 Q: 0.379558766244105
s: 2 a: s1: 3 a1: 1 Q: 90.51790358633198
s: 3 a: s1: 4 a1: 1 Q: 100.57544842925776
s: 0 a: s1: 1 a1: 1 Q: 8.408176446911295
s: 1 a: s1: 2 a1: 1 Q: 81.46611322769878
s: 2 a: s1: 3 a1: 1 Q: 90.51790358633198
s: 3 a: s1: 4 a1: 1 Q: 100.57544842925776
s: 0 a: s1: 1 a1: 1 Q: 73.31950190492891
s: 1 a: s1: 2 a1: 1 Q: 81.46611322769878
s: 2 a: s1: 3 a1: 1 Q: 90.51790358633198
s: 3 a: s1: 4 a1: 1 Q: 100.57544842925776
s: 0 a: s1: 1 a1: 1 Q: 73.31950190492891
s: 1 a: s1: 2 a1: 1 Q: 81.46611322769878
s: 2 a: s1: 3 a1: 1 Q: 90.51790358633198
s: 3 a: s1: 4 a1: 1 Q: 100.57544842925776
s: 0 a: s1: 0 a1: 1 Q: 66.98755171443602
s: 0 a: s1: 1 a1: 1 Q: 73.31950190492891
s: 1 a: s1: 2 a1: 1 Q: 81.46611322769878
s: 2 a: s1: 3 a1: 1 Q: 90.51790358633198
s: 3 a: s1: 4 a1: 1 Q: 100.57544842925776
s: 0 a: s1: 1 a1: 1 Q: 73.31950190492891
s: 1 a: s1: 2 a1: 1 Q: 81.46611322769878
s: 2 a: s1: 3 a1: 1 Q: 90.51790358633198
s: 3 a: s1: 4 a1: 1 Q: 100.57544842925776
s: 0 a: s1: 1 a1: 1 Q: 73.31950190492891
s: 1 a: s1: 2 a1: 1 Q: 81.46611322769878
s: 2 a: s1: 3 a1: 1 Q: 90.51790358633198
s: 3 a: s1: 4 a1: 1 Q: 100.57544842925776
s: 0 a: s1: 1 a1: 1 Q: 73.31950190492891
s: 1 a: s1: 2 a1: 1 Q: 81.46611322769878
s: 2 a: s1: 3 a1: 1 Q: 90.51790358633198
s: 3 a: s1: 4 a1: 1 Q: 100.57544842925776
s: 0 a: s1: 1 a1: 1 Q: 73.31950190492891
s: 1 a: s1: 2 a1: 1 Q: 81.46611322769878
s: 2 a: s1: 3 a1: 1 Q: 90.51790358633198
s: 3 a: s1: 4 a1: 1 Q: 100.57544842925776
s: 0 a: s1: 1 a1: 1 Q: 73.31950190492891
s: 1 a: s1: 2 a1: 1 Q: 81.46611322769878
s: 2 a: s1: 3 a1: 1 Q: 90.51790358633198
s: 3 a: s1: 3 a1: 1 Q: 91.51790358633198
s: 3 a: s1: 4 a1: 1 Q: 100.57544842925776
s: 0 a: s1: 1 a1: 1 Q: 73.31950190492891
s: 1 a: s1: 2 a1: 1 Q: 81.46611322769878
s: 2 a: s1: 3 a1: 1 Q: 90.51790358633198
s: 3 a: s1: 4 a1: 1 Q: 100.57544842925776
s: 0 a: s1: 1 a1: 1 Q: 73.31950190492891
s: 1 a: s1: 2 a1: 1 Q: 81.46611322769878
s: 2 a: s1: 3 a1: 1 Q: 90.51790358633198
s: 3 a: s1: 4 a1: 1 Q: 100.57544842925776
s: 0 a: s1: 1 a1: 1 Q: 73.31950190492891
s: 1 a: s1: 2 a1: 1 Q: 81.46611322769878
s: 2 a: s1: 3 a1: 1 Q: 90.51790358633198
s: 3 a: s1: 4 a1: 1 Q: 100.57544842925776
s: 0 a: s1: 1 a1: 1 Q: 73.31950190492891
s: 1 a: s1: 2 a1: 1 Q: 81.46611322769878
s: 2 a: s1: 3 a1: 1 Q: 90.51790358633198
s: 3 a: s1: 4 a1: 1 Q: 100.57544842925776
s: 0 a: s1: 1 a1: 1 Q: 73.31950190492891
s: 1 a: s1: 2 a1: 1 Q: 81.46611322769878
s: 2 a: s1: 2 a1: 1 Q: 82.4661132276988
s: 2 a: s1: 3 a1: 1 Q: 90.51790358633198
s: 3 a: s1: 4 a1: 1 Q: 100.57544842925776
s: 0 a: s1: 1 a1: 1 Q: 73.31950190492891
s: 1 a: s1: 2 a1: 1 Q: 81.46611322769878
s: 2 a: s1: 3 a1: 1 Q: 90.51790358633198
s: 3 a: s1: 4 a1: 1 Q: 100.57544842925776
s: 0 a: s1: 0 a1: 1 Q: 66.98755171443602
s: 0 a: s1: 1 a1: 1 Q: 73.31950190492891
s: 1 a: s1: 2 a1: 1 Q: 81.46611322769878
s: 2 a: s1: 3 a1: 1 Q: 90.51790358633198
s: 3 a: s1: 4 a1: 1 Q: 100.57544842925776
s: 0 a: s1: 1 a1: 1 Q: 73.31950190492891
s: 1 a: s1: 2 a1: 1 Q: 81.46611322769878
s: 2 a: s1: 3 a1: 1 Q: 90.51790358633198
s: 3 a: s1: 4 a1: 1 Q: 100.57544842925776
s: 0 a: s1: 1 a1: 1 Q: 73.31950190492891
s: 1 a: s1: 2 a1: 1 Q: 81.46611322769878
s: 2 a: s1: 3 a1: 1 Q: 90.51790358633198
s: 3 a: s1: 4 a1: 1 Q: 100.57544842925776
Final Q: [[ 66.99 73.32]
[ 9.34 81.47]
[ 82.47 90.52]
[ 91.52 100.58]
[ 0.64 0.38]]
Policy: [1 1 1 1 0]
# Maze Example
Now, let us solve some maze problems. To deal with various maze configurations, we define a class that can read a maze from a file. The file is expected to contain three kinds of letters: 'O' for open space, 'H' for obstacles (walls), and 'G' for the goal position. The example grid.txt file looks as follows.
Note: this bash command runs only on Linux or Mac; the shell is used only to show the content of the file. You can skip running the cell and create a file with the same content.
```bash
%%bash
cat grid.txt
```
OOOHOOOOO
OOOHOOHOO
OOOOOOHOO
OOOOHHHOO
OOHOOOOOH
OOHOOOGOO
OOOOOOOOO
Now, let us define the environment class. The environment constructor takes only one input, the filename. Along with some utility functions, the main simulation methods are <code>init()</code> and <code>next()</code>, for initialising an episode and running one simulation step respectively. Please take a look at the example use in the docstring and try it in a separate cell (a small sanity-check cell is sketched below the map printout).
```python
# maze example
import collections.abc   # for the Iterable check in check_state
import warnings

class GridWorld:
""" Grid World environment
there are four actions (left, right, up, and down) to move an agent
In a grid, if it reaches a goal, it get 30 points of reward.
If it falls in a hole or moves out of the grid world, it gets -5.
Each step costs -1 point.
to test GridWorld, run the following sample codes:
env = GridWorld('grid.txt')
env.print_map()
    print([2,3], env.check_state([2,3]))
    print([0,0], env.check_state([0,0]))
    print([3,4], env.check_state([3,4]))
    print([10,3], env.check_state([10,3]))
    env.init([0,0])
    print(env.next(1)) # right
    print(env.next(3)) # down
    print(env.next(0)) # left
    print(env.next(2)) # up
    print(env.next(2)) # up
Parameters
==========
_map ndarray
string array read from a file input
_size 1d array
the size of _map in ndarray
goal_pos tuple
the index for the goal location
_actions list
list of actions for 4 actions
_s 1d array
current state
"""
def __init__(self, fn):
# read a map from a file
self._map = self.read_map(fn)
self._size = np.asarray(self._map.shape)
self.goal_pos = np.where(self._map == 'G')
        # definition of actions (left, right, up, and down respectively)
self._actions = [[0, -1], [0, 1], [-1, 0], [1, 0]]
self._s = None
def get_cur_state(self):
return self._s
def get_size(self):
return self._size
def read_map(self, fn):
grid = []
with open(fn) as f:
for line in f:
grid.append(list(line.strip()))
return np.asarray(grid)
def print_map(self):
print( self._map )
def check_state(self, s):
        if isinstance(s, collections.abc.Iterable) and len(s) == 2:
if s[0] < 0 or s[1] < 0 or\
s[0] >= self._size[0] or s[1] >= self._size[1]:
return 'N'
return self._map[tuple(s)].upper()
else:
return 'F' # wrong input
def init(self, state=None):
if state is None:
s = [0, 0]
else:
s = state
        if self.check_state(s) == 'O':
            self._s = np.asarray(s)
else:
raise ValueError("Invalid state for init")
def next(self, a):
s1 = self._s + self._actions[a]
# state transition
curr = self.check_state(s1)
if curr == 'H' or curr == 'N':
return -5
elif curr == 'F':
warnings.warn("invalid state " + str(s1))
return -5
elif curr == 'G':
self._s = s1
return 30
else:
self._s = s1
return -1
def is_goal(self):
return self.check_state(self._s) == 'G'
def get_actions(self):
return self._actions
```
```python
env = GridWorld("grid.txt")
env.print_map()
```
[['O' 'O' 'O' 'H' 'O' 'O' 'O' 'O' 'O']
['O' 'O' 'O' 'H' 'O' 'O' 'H' 'O' 'O']
['O' 'O' 'O' 'O' 'O' 'O' 'H' 'O' 'O']
['O' 'O' 'O' 'O' 'H' 'H' 'H' 'O' 'O']
['O' 'O' 'H' 'O' 'O' 'O' 'O' 'O' 'H']
['O' 'O' 'H' 'O' 'O' 'O' 'G' 'O' 'O']
['O' 'O' 'O' 'O' 'O' 'O' 'O' 'O' 'O']]
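As suggested above, here is a small sanity-check cell for the environment (the expected results in the comments assume the grid.txt shown earlier):
```python
# quick sanity check of the environment
print(env.check_state([2, 3]))   # 'O' - open space
print(env.check_state([10, 3]))  # 'N' - outside the grid
env.init([0, 0])
print(env.next(1))               # move right: step cost -1
print(env.next(3))               # move down: step cost -1
print(env.get_cur_state(), env.is_goal())
```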
## Agent
Let us define an RL agent class. The two main methods are <code>train()</code> and <code>test()</code>, as in other ML classes. Unlike supervised learning, the train function actively collects data while interacting with an environment and updates its Q estimate with SARSA (or Q-learning). The next cell defines a utility to convert grid indices into plot coordinates (with the origin at the bottom-left).
```python
def coord_convert(s, sz):
return [s[1], sz[0]-s[0]-1]
```
```python
class RLAgent:
"""
Reinforcement Learning Agent Model for training/testing
with Tabular function approximation
"""
    def __init__(self, env):
        self.env = env
self.size = env.get_size()
self.n_a = len(env.get_actions())
# self.Q table including the surrounding border
self.Q = np.zeros((self.size[0], self.size[1], self.n_a))
def epsilon_greed(self, epsilon, s):
if np.random.uniform() < epsilon:
a = np.random.randint(self.n_a)
else:
i_max = np.where(self.Q[s[0], s[1], :] == np.max(self.Q[s[0], s[1], :]))[0]
a = int(np.random.choice(i_max))
return a
def train(self, start, **params):
# parameters
gamma = params.pop('gamma', 0.99)
alpha = params.pop('alpha', 0.1)
epsilon= params.pop('epsilon', 0.1)
maxiter= params.pop('maxiter', 1000)
maxstep= params.pop('maxstep', 1000)
# init self.Q matrix
self.Q[...] = 0
self.Q[self.env._map == 'H'] = -np.inf
# online train
# rewards and step trace
rtrace = []
steps = []
for j in range(maxiter):
env.init(start)
s = env.get_cur_state()
# selection an action
a = self.epsilon_greed(epsilon, s)
rewards = []
trace = np.array(coord_convert(s, self.size))
# run simulation for max number of steps
for step in range(maxstep):
# move
r = env.next(a)
s1 = env.get_cur_state()
a1 = self.epsilon_greed(epsilon, s1)
rewards.append(r)
trace = np.vstack((trace, coord_convert(s1, self.size)))
                # SARSA update of the self.Q table
self.Q[s[0], s[1], a] += alpha * (r + gamma * self.Q[s1[0], s1[1], a1] -\
self.Q[s[0], s[1], a])
                if env.is_goal(): # reached the goal
                    # terminal update: the goal state carries no future value
                    self.Q[s1[0], s1[1], a1] = 0
break
s = s1
a = a1
rtrace.append(np.sum(rewards))
steps.append(step+1)
return rtrace, steps, trace # last trace of trajectory
def test(self, start, maxstep=1000):
        # One episode of simulation for testing: with epsilon = 0 the
        # agent acts greedily with respect to the trained Q table.
        epsilon = 0.0
env.init(start)
s = env.get_cur_state()
# selection an action
a = self.epsilon_greed(epsilon, s)
trace = np.array(coord_convert(s, self.size))
# run simulation for max number of steps
for step in range(maxstep):
# move
r = env.next(a)
s1 = env.get_cur_state()
a1 = self.epsilon_greed(epsilon, s1)
trace = np.vstack((trace, coord_convert(s1, self.size)))
if env.is_goal(): # reached the goal
break
s = s1
a = a1
return trace
```
```python
### Plotting tools
import matplotlib.pyplot as plt  # needed if not already imported earlier in the notebook

def plot_trace(agent, start, trace, title="test trajectory"):
    plt.title(title)
    plt.plot(trace[:, 0], trace[:, 1], "ko-")
plt.text(env.goal_pos[1], agent.size[0]-env.goal_pos[0]-1, 'G')
plt.text(start[1], agent.size[0]-start[0]-1, 'S')
plt.xlim([0, agent.size[1]])
plt.ylim([0, agent.size[0]])
def plot_train(agent, rtrace, steps, trace, start):
fig = plt.figure(figsize=(10,8))
ax = fig.add_subplot(221)
plt.plot(rtrace, "b-")
plt.ylabel("sum of rewards")
ax1 = fig.add_subplot(222)
plt.plot(steps)
plt.ylabel("# steps")
# contour plot for agent.Q
ax2 = fig.add_subplot(223)
xs = range(agent.size[1])
ys = range(agent.size[0])
maxQ = np.max(agent.Q, axis=2)
h_b = (maxQ==-np.inf)
maxQ[h_b] = 0
maxQ[h_b] = np.min(maxQ) - 100
cs = plt.contourf(xs, ys[::-1], maxQ)
plt.colorbar(cs)
plt.text(env.goal_pos[1], agent.size[0]-env.goal_pos[0]-1, 'G')
plt.text(start[1], agent.size[0]-start[0]-1, 'S')
plt.ylabel("max agent.Q")
# plot traces
ax3 = fig.add_subplot(224)
plot_trace(agent, start, trace, "trace of the last episode")
plt.plot()
```
```python
agent = RLAgent(env)
start = [0,0]
rtrace, steps, trace = agent.train(start,
                                    gamma=0.99,
alpha=0.1,
epsilon=0.1,
maxiter=100,
maxstep=1000)
```
```python
plot_train(agent, rtrace, steps, trace, start)
```
```python
test_start = [0,2]
test_trace = agent.test(test_start)
plot_trace(agent, test_start, test_trace)
```
```python
test_start = [1,8]
test_trace = agent.test(test_start)
plot_trace(agent, test_start, test_trace)
```
Q: What do you think about the test results above? Does the agent run fine?
# Numerical Methods
# Lecture 5: Numerical Linear Algebra I
```python
import numpy as np
import scipy.linalg as sl
```
## Learning objectives:
* Manipulation of matrices and matrix equations in Python
* Reminder on properties of matrices (from MM1): determinants, singularity etc
* Algorithms for the solution of linear systems
* Gaussian elimination, including back substitution
## Contents
## I. Introduction - Linear (Matrix) Systems
## II. Matrices in Python
## III. Properties of matrices: determinants, singularity, solvability of linear systems, etc
## IV. Gaussian Elimination - Method
## V. Gaussian Elimination - Algorithm and Code
## VI. Gauss-Jordan Elimination
## I. Introduction - Linear (Matrix) Systems
Recall from your Mathematical Methods I course that we can re-write a system of simultaneous (linear) equations in matrix form. For example, in week 4 of MM1 you considered the following example:
\begin{eqnarray*}
2x + 3y &=& 7 \\
x - 4y &=& 3
\end{eqnarray*}
and it was noted that this can be written in matrix form as
$$
\left(
\begin{array}{rr}
2 & 3 \\
1 & -4 \\
\end{array}
\right)\left(
\begin{array}{c}
x \\
y \\
\end{array}
\right) = \left(
\begin{array}{c}
7 \\
3 \\
\end{array}
\right)
$$
Any such system of two simultaneous equations has the general form
$$
\left(
\begin{array}{rr}
a & b \\
c & d \\
\end{array}
\right)\left(
\begin{array}{c}
x \\
y \\
\end{array}
\right) = \left(
\begin{array}{c}
e \\
f \\
\end{array}
\right)
$$
Where $a,b,c,d,e,f$ are arbitrary constants.
Let's call the matrix which stores the coefficients of our system of linear equations to be **A**
$$
A=
\left(
\begin{array}{rr}
a & b \\
c & d \\
\end{array}
\right)
$$
And the matrix that contains our variables to be **x**
$$
x=
\left(
\begin{array}{c}
x \\
y \\
\end{array}
\right)
$$
Yeah, I know it's kind of confusing why this matrix is called **x** when it contains both $x$ and $y$, because instinctively you would think that it should contain only $x$. However, this is just a name we give to the matrix, and it should be well known by now that those doing Engineering and Sciences are not the best people at naming things. We could have equally named it **c**, or **z**, or **insert some other letter of the alphabet**, as long as we remember that this matrix stores the variables. To help yourself in the future, you might want to write the letters used to represent matrices in one color, for example red, and the letters used to represent the variables or constants inside a matrix in another color, for example blue, since you should have all those wonderful colored pens from your various field trips. This really helps to avoid confusion during the Math Methods courses (or whatever they may have been renamed to) you are doing this year and next year, and I mean this seriously, it helps a lot. The matrices here were intentionally not colored, to encourage you to write these things down in your favorite colored pens and reinforce which are matrices and which are not, and totally not because I am too lazy to change the color, <span class="irony"><span style="color:red">TOTALLY NOT! BELIEVE ME!</span></span>
And the matrix that contains the results of our system of linear equation to be **b**
$$
b=
\left(
\begin{array}{c}
e \\
f \\
\end{array}
\right)
$$
This system of equations can be represented as the matrix equation $A\pmb{x}=\pmb{b}$
We can follow the same procedure for a system of three equations in three variables, for example
\begin{eqnarray*}
2x + 3y +z &=& 7 \\
x - 4y + 2z&=& 3 \\
3x + 5y + 3z&=& 10 \\
\end{eqnarray*}
This can be written in matrix form as
$$
\left(
\begin{array}{rrr}
2 & 3 & 1 \\
1 & -4 & 2\\
3 & 5 &3
\end{array}
\right)\left(
\begin{array}{c}
x \\
y \\
z \\
\end{array}
\right) = \left(
\begin{array}{c}
7 \\
3 \\
10 \\
\end{array}
\right)
$$
Any such system of three simultaneous equations has the general form
$$
\left(
\begin{array}{rrr}
a & b & c \\
d & e & f \\
g & h & i \\
\end{array}
\right)\left(
\begin{array}{c}
x \\
y \\
z \\
\end{array}
\right) = \left(
\begin{array}{c}
j \\
k \\
l \\
\end{array}
\right)
$$
where $a, b, \dots, l$ are arbitrary constants.
Let's call the matrix which stores the coefficients of our system of linear equations to be **A**
$$
A=
\left(
\begin{array}{rrr}
a & b & c \\
d & e & f \\
g &h &i \\
\end{array}
\right)
$$
And the matrix that contains our variables to be **x**
$$
x=
\left(
\begin{array}{c}
x \\
y \\
z \\
\end{array}
\right)
$$
And the matrix that contains the results of our system of linear equation to be **b**
$$
b=
\left(
\begin{array}{c}
j \\
k \\
l \\
\end{array}
\right)
$$
This system of equations can again be represented as the matrix equation $A\pmb{x}=\pmb{b}$
Note here that although the letters I have used for the matrices are in alphabetical order, this is not necessary; you could use whatever letters you want in place of the $a, b, c, d$, etc. placed here. Indeed, because many letters in Engineering are already used to represent things, for example $c$ for the speed of light, $u, v$ for velocity, and $i, j, k$ for indexing, it is often better not to use letters as placeholders for the values in the matrix, to avoid confusion. While you as a human might be able to distinguish which $c$ means the speed of light and which $c$ is simply a placeholder in your matrix, your computer might not be smart enough to do so, and might get very confused.
For example, taking our
$$
\left(
\begin{array}{rrr}
2 & 3 & 1 \\
1 & -4 & 2\\
3 & 5 &3
\end{array}
\right)\left(
\begin{array}{c}
x \\
y \\
z \\
\end{array}
\right) = \left(
\begin{array}{c}
7 \\
3 \\
10 \\
\end{array}
\right)
$$
and the generalization
$$
\left(
\begin{array}{rrr}
a & b & c \\
d & e & f \\
g & h & i \\
\end{array}
\right)\left(
\begin{array}{c}
x \\
y \\
z \\
\end{array}
\right) = \left(
\begin{array}{c}
j \\
k \\
l \\
\end{array}
\right)
$$
Here, $c$ is simply a placeholder for the number 1. If you had used $c$ for the speed of light (about $3\times 10^8$ in SI units) before, the computer might get very confused and instead try to solve
$$
\left(
\begin{array}{rrr}
2 & 3 & 300000000 \\
1 & -4 & 2\\
3 & 5 &3
\end{array}
\right)\left(
\begin{array}{c}
x \\
y \\
z \\
\end{array}
\right) = \left(
\begin{array}{c}
7 \\
3 \\
10 \\
\end{array}
\right)
$$
while the you are actually trying to solve for
$$
\left(
\begin{array}{rrr}
2 & 3 & 1 \\
1 & -4 & 2\\
3 & 5 &3
\end{array}
\right)\left(
\begin{array}{c}
x \\
y \\
z \\
\end{array}
\right) = \left(
\begin{array}{c}
7 \\
3 \\
10 \\
\end{array}
\right)
$$
Indeed, to avoid your computer getting confused about why there is a strange 300 million in there, we could instead use the positions in the matrix to name the entries.
For example:
$$
A=
\left(
\begin{array}{rrr}
A_{11} & A_{12} & A_{13} \\
A_{21} & A_{22} & A_{23} \\
A_{31} & A_{32} & A_{33} \\
\end{array}
\right)
$$
and
$$
x=
\left(
\begin{array}{c}
x_1 \\
x_2 \\
x_3 \\
\end{array}
\right)
$$
We call them $x_1$,$x_2$,$x_3$, because we are running out of letters!
$$
b=
\left(
\begin{array}{c}
b_1 \\
b_2 \\
b_3 \\
\end{array}
\right)
$$
Thus we write
$$
\left(
\begin{array}{rrr}
A_{11} & A_{12} & A_{13} \\
A_{21} & A_{22} & A_{23} \\
A_{31} & A_{32} & A_{33} \\
\end{array}
\right)
\left(
\begin{array}{c}
x_1 \\
x_2 \\
x_3 \\
\end{array}
\right)
=
\left(
\begin{array}{c}
b_1 \\
b_2 \\
b_3 \\
\end{array}
\right)
$$
And still, our matrix form of $Ax = b$ remains
We understand that the above can easily be extended to 4 variables and we get
$$
A=
\left(
\begin{array}{rrrr}
A_{11} & A_{12} & A_{13} & A_{14} \\
A_{21} & A_{22} & A_{23} & A_{24}\\
A_{31} & A_{32} & A_{33} & A_{34} \\
A_{41} & A_{42} & A_{43} & A_{44} \\
\end{array}
\right)
$$
and
$$
x=
\left(
\begin{array}{c}
x_1 \\
x_2 \\
x_3 \\
x_4 \\
\end{array}
\right)
$$
We call them $x_1$, $x_2$, $x_3$, $x_4$, because we are running out of letters!
$$
b=
\left(
\begin{array}{c}
b_1 \\
b_2 \\
b_3 \\
b_4 \\
\end{array}
\right)
$$
Thus we write
$$
\left(
\begin{array}{rrrr}
A_{11} & A_{12} & A_{13} & A_{14} \\
A_{21} & A_{22} & A_{23} & A_{24}\\
A_{31} & A_{32} & A_{33} & A_{34} \\
A_{41} & A_{42} & A_{43} & A_{44} \\
\end{array}
\right)
\left(
\begin{array}{c}
x_1 \\
x_2 \\
x_3 \\
x_4 \\
\end{array}
\right)
=
\left(
\begin{array}{c}
b_1 \\
b_2 \\
b_3 \\
b_4 \\
\end{array}
\right)
$$
More generally, consider the arbitrary system of $n$ linear equations for $n$ unknowns
\begin{eqnarray*}
A_{11}x_1 + A_{12}x_2 + \dots + A_{1n}x_n &=& b_1 \\
A_{21}x_1 + A_{22}x_2 + \dots + A_{2n}x_n &=& b_2 \\
\vdots &=& \vdots \\
A_{n1}x_1 + A_{n2}x_2 + \dots + A_{nn}x_n &=& b_n
\end{eqnarray*}
where $A_{ij}$ are the constant coefficients of the linear system, $x_j$ are the unknown variables, and $b_i$
are the terms on the right hand side (RHS). Here the index $i$ is referring to the equation number
(the row in the matrix below), with the index $j$ referring to the component of the unknown
vector $\pmb{x}$ (the column of the matrix).
This system of equations can be represented as the matrix equation $A\pmb{x}=\pmb{b}$
$$
\left(
\begin{array}{cccc}
A_{11} & A_{12} & \dots & A_{1n} \\
A_{21} & A_{22} & \dots & A_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
A_{n1} & A_{n2} & \dots & A_{nn} \\
\end{array}
\right)\left(
\begin{array}{c}
x_1 \\
x_2 \\
\vdots \\
x_n \\
\end{array}
\right) = \left(
\begin{array}{c}
b_1 \\
b_2 \\
\vdots \\
b_n \\
\end{array}
\right)
$$
We can easily solve the above $2 \times 2$ example of two equations and two unknowns using substitution (e.g. multiply the second equation by 2 and subtract the first equation from the resulting equation to eliminate $x$ and hence allowing us to find $y$, then we could compute $x$ from the first equation). We find:
$$ x=37/11, \quad y=1/11.$$
In MM1 you also considered $3 \times 3$ examples which were a little more complicated but still doable. This lecture considers the case of $n \times n$ where $n$ could easily be billions (think about AI and ML, and how complex the human decision process is: billions of factors, even very trivial ones, can influence a single choice. Don't worry, you will learn how to make the computer solve these equations for you instead of doing them yourself. The method introduced in this lecture, Gaussian elimination, is a classic, and you have probably been doing it without knowing its name. Real-world applications use much more advanced and sophisticated algorithms to deal with these massive matrices, but Gaussian elimination is a good bridge between your A Level Maths and becoming an AI or ML expert!). This case arises when you solve a differential equation numerically on a discrete mesh or grid. Here you would typically obtain one unknown and one (discrete, linear or nonlinear) equation at every grid point. You could generate an arbitrarily large matrix system simply by generating a finer mesh.
Note that you will solve differential equations numerically in the follow-up course Numerical Methods II.
Cases where the matrix is non-square, i.e. of shape $m \times n$ where $m\ne n$ correspond to the (over- or under-determined) system where you have more or less equations than unknowns - we won't consider these in this lecture.
## II. Matrices in Python
We have already used numpy arrays to store one-dimensional vectors of numbers.
The convention is that these are generally considered to be *column* vectors and have shape $n \times 1$.
We can extend to higher dimensions through the introduction of matrices as two-dimensional arrays (more generally vectors and matrices are just two examples of tensors).
We use subscript indices to identify each component of the array or matrix, i.e. we can identify each component of the vector $\pmb{v}$ by $v_i$, and each component of the matrix $A$ by $A_{ij}$.
Note that it is a convention that vectors are either underlined or bold, and generally lower case letters, whereas matrices are plain capital letters.
The *dimension* or *shape* of a vector/matrix is the number of rows and columns it possesses, i.e. $n \times 1$ and $m \times n$ for the examples above.
Here is an example of how we can extend our use of the numpy array object to two dimensions in order to define a matrix $A$ and some examples of some operations we can make on it.
```python
A = np.array([[10., 2., 1.],[6., 5., 4.],[1., 4., 7.]])
print(A)
```
[[10. 2. 1.]
[ 6. 5. 4.]
[ 1. 4. 7.]]
```python
# the total size of the array storing A - here 9 for a 3x3 matrix
print(np.size(A))
```
9
```python
# the number of dimensions of the matrix A
print(np.ndim(A) )
```
2
```python
# the shape of the matrix A
print(np.shape(A))
```
(3, 3)
```python
# the transpose of the matrix A
print(A.T)
```
[[10. 6. 1.]
[ 2. 5. 4.]
[ 1. 4. 7.]]
```python
# the inverse of the matrix A - computed using a scipy algorithm
print(sl.inv(A))
```
```python
# the determinant of the matrix A - computed using a scipy algorithm
print(sl.det(A))
```
```python
# Multiply A with its inverse using the @ matrix multiplication operator.
# Note that due to roundoff errors the off diagonal values are not exactly zero.
print(A @ sl.inv(A))
```
[[ 1.00000000e+00 -1.11022302e-16 5.55111512e-17]
[-2.22044605e-16 1.00000000e+00 2.22044605e-16]
[-8.32667268e-17 -3.33066907e-16 1.00000000e+00]]
```python
# The @ operator is fairly new, you may also see use of numpy.dot:
print(np.dot(A,sl.inv(A)))
```
[[ 1.00000000e+00 -1.11022302e-16 5.55111512e-17]
[-2.22044605e-16 1.00000000e+00 2.22044605e-16]
[-8.32667268e-17 -3.33066907e-16 1.00000000e+00]]
```python
# same way to achieve the same thing
print(A.dot(sl.inv(A)))
```
[[ 1.00000000e+00 -1.11022302e-16 5.55111512e-17]
[-2.22044605e-16 1.00000000e+00 2.22044605e-16]
[-8.32667268e-17 -3.33066907e-16 1.00000000e+00]]
```python
# note that the * operator simply does operations element-wise - here this
# is not what we want!
print(A*sl.inv(A))
```
[[ 1.42857143 -0.15037594 0.02255639]
[-1.71428571 2.59398496 -1.02255639]
[ 0.14285714 -1.14285714 2. ]]
```python
# how to initialise a vector of zeros
print(np.zeros(3))
```
[0. 0. 0.]
```python
# how to initialise a matrix of zeros
print(np.zeros((3,3)))
```
[[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]]
```python
# how to initialise a 3rd-order tensor of zeros
print(np.zeros((3,3,3)))
```
[[[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]]
[[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]]
[[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]]]
```python
# how to initialise the identity matrix, I or Id
print(np.eye(3))
```
[[1. 0. 0.]
[0. 1. 0.]
[0. 0. 1.]]
Let's quickly consider the $2 \times 2$ case from MM1 recreated above where we claimed that $x=37/11$ and $y=1/11$.
To solve the matrix equation
$$ A\pmb{x}=\pmb{b} $$
we can simply multiply both sides by the inverse of the matrix $A$ (if $A$ is invertible and if we know what the inverse is of course!):
\begin{align}
A\pmb{x} & = \pmb{b}\\
\implies A^{-1}A\pmb{x} & = A^{-1}\pmb{b}\\
\implies I\pmb{x} & = A^{-1}\pmb{b}\\
\implies \pmb{x} & = A^{-1}\pmb{b}
\end{align}
so we can find the solution $\pmb{x}$ by multiplying the inverse of $A$ with the RHS vector $\pmb{b}$.
### <span style="color:blue">Exercise 5.1: Solving a linear system </span>
Formulate and solve the linear system and check you get the answer quoted above.
```python
```
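One possible solution, using `sl.inv` as introduced above (only peek once you have tried it yourself!):
```python
A = np.array([[2., 3.],
              [1., -4.]])
b = np.array([7., 3.])
x = sl.inv(A) @ b
print(x)            # approximately [3.3636..., 0.0909...]
print(37/11, 1/11)  # the exact values quoted above
```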
### <span style="color:blue">Aside: matrix objects </span>
Note that numpy does possess a matrix object as a sub-class of the numpy array (although its use is now generally discouraged in favour of plain arrays and the @ operator). We can cast the above two-dimensional arrays into matrix objects and then the star operator does yield the expected matrix product:
```python
# A is an n-dimensional array (n-2 here)
A = np.array([[10., 2., 1.],[6., 5., 4.],[1., 4., 7.]])
print(type(A))
```
<class 'numpy.ndarray'>
```python
# this casts the array A into the matrix class
print(type(np.mat(A)))
```
<class 'numpy.matrix'>
```python
# for these objects * is standard matrix multiplication
# and we can check that A*A^{-1}=I as expected
print(np.mat(A)*np.mat(sl.inv(A)))
```
[[ 1.00000000e+00 -1.11022302e-16 5.55111512e-17]
[-2.22044605e-16 1.00000000e+00 2.22044605e-16]
[-8.32667268e-17 -3.33066907e-16 1.00000000e+00]]
### <span style="color:blue">Slicing </span>
Remember from last year's Python module that just as for arrays or lists, we can use *slicing* in order to extract components of matrices, for example:
```python
A=np.array([[10., 2., 1.],[6., 5., 4.],[1., 4., 7.]])
print(A)
```
[[10. 2. 1.]
[ 6. 5. 4.]
[ 1. 4. 7.]]
```python
# single entry, first row, second column
print(A[0,1])
```
2.0
```python
# first row
print(A[0,:])
```
[10. 2. 1.]
```python
# last row
print(A[-1,:])
```
[1. 4. 7.]
```python
# second column
print(A[:,1])
```
[2. 5. 4.]
```python
# extract a 2x2 sub-matrix
print(A[1:3,1:3])
```
[[5. 4.]
[4. 7.]]
### <span style="color:blue">Exercise 5.2: Matrix manipulation in Python </span>
Let
$$
A = \left(
\begin{array}{ccc}
1 & 2 & 3 \\
4 & 5 & 6 \\
7 & 8 & 9 \\
\end{array}
\right)
\mathrm{\quad\quad and \quad\quad}
b = \left(
\begin{array}{c}
2 \\
4 \\
6 \\
\end{array}
\right)
$$
- Store $A$ and $b$ in NumPy array structures. Print them.
- Print their shape and size. What is the difference ?
- Create a NumPy array $I$ containing the identity matrix $I_3$. Do $A = A+I$.
- Substitute the 3rd column of $A$ with $b$.
- Solve $Ax=b$.
```python
```
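A possible solution sketch:
```python
A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
b = np.array([2., 4., 6.])
print(A)
print(b)
# shape is the tuple (rows, columns); size is the total number of entries
print(np.shape(A), np.size(A))
I = np.eye(3)
A = A + I
A[:, 2] = b          # substitute the 3rd column of A with b
x = sl.solve(A, b)   # solve Ax = b
print(x)
```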
## III. Properties of matrices: determinants, singularity, solvability of linear systems, etc
Consider $N$ linear equations in $N$ unknowns, $A\pmb{x}=\pmb{b}$.
From MM1 you learnt that this system has a *unique solution* provided that the determinant of A, $\det(A)$, is non-zero. In this case the matrix is said to be *non-singular*.
If $\det(A)=0$ (with $A$ then termed a *singular matrix*), then the linear system does *not* have a unique solution, it may have either infinite *or* no solutions.
For example, consider
$$
\left(
\begin{array}{rr}
2 & 3 \\
4 & 6 \\
\end{array}
\right)\left(
\begin{array}{c}
x \\
y \\
\end{array}
\right) = \left(
\begin{array}{c}
4 \\
8 \\
\end{array}
\right)
$$
The second equation is simply twice the first, and hence a solution to the first equation is also automatically a solution to the second equation.
We hence only have one *linearly-independent* equation, and our problem is under-constrained: we effectively only have one eqution for two unknowns with infinitely many possibly solutions.
If we replaced the RHS vector with $(4,7)^T$, then the two equations would be contradictory: in this case we have no solutions.
Note that a set of vectors where one can be written as a linear sum of the others are termed *linearly-dependent*. When this is not the case the vectors are termed *linearly-independent*.
The following properties of a square $n\times n$ matrix are equivalent:
* $\det(A)\ne 0$ - A is non-singular
* The columns of $A$ are linearly independent
* The rows of $A$ are linearly independent
* The columns of A *span* $n$-dimensional space (recall MM1 - we can reach any point in $\mathbb{R}^N$ through a linear combination of these vectors - note that this is simply what the operation $A\pmb{x}$ is doing of course if you write it out)
* $A$ is invertible, i.e. there exists a matrix $A^{-1}$ such that $A^{-1}A = A A^{-1}=I$
* the matrix system $A\pmb{x}=\pmb{b}$ has a unique solution for every vector $b$
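We can check a couple of these equivalent statements numerically for the singular example above (a quick sketch; depending on the algorithm and roundoff, the solve will either raise an error or return meaningless values):
```python
A = np.array([[2., 3.],
              [4., 6.]])
print(sl.det(A))   # zero (up to roundoff) - A is singular
try:
    print(sl.solve(A, np.array([4., 8.])))
except Exception as e:
    print('Solve failed:', e)
```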
## IV. Gaussian elimination - Method
The Gaussian elimination algorithm is simply a systematic implementation of the method of equation substitution we used above to solve the $2\times 2$ system (i.e. where we "multiply the second equation by 2 and subtract the first equation from the resulting equation to *eliminate* $x$ and hence allowing us to find $y$, we can then compute $x$ from the first equation").
So it is *Gaussian elimination* as the method is attributed to the mathematician *Gauss* (although it was certainly known before his time), and *elimination* as we seek to eliminate unknowns.
To perform this method for arbitrarily large systems (on paper) we form the so-called *augmented matrix*
$$
[A|\pmb{b}] =
\left[
\begin{array}{rr|r}
2 & 3 & 7 \\
1 & -4 & 3 \\
\end{array}
\right]
$$
Note that this encodes the equations (including the RHS values); we can perform so-called *row operations* and as long as we are consistent with what we do for the LHS and RHS components then what this system is describing will not change - a solution to the updated system will be the same as the solution to the original system.
Our task is to update the system so we can easily read off the solution - of course this is exactly what we do via the substitution approach:
First we multiplied the second equation by 2, this yield the updated augmented matrix:
$$
\left[
\begin{array}{rr|r}
2 & 3 & 7 \\
2 & -8 & 6 \\
\end{array}
\right]
$$
We can use the following notation to describe this operation:
$$ Eq. (2) \leftarrow 2\times Eq. (2) $$
Note importantly that this does not change anything about what this pair of equations tells us about the unknown solution vector $\pmb{x}$, which, although it does not appear explicitly, is implicitly defined by this augmented system.
The next step was to subtract the first equation from the updated second ($ Eq. (2) \leftarrow Eq. (2) - Eq. (1) $):
$$
\left[
\begin{array}{rr|r}
2 & 3 & 7 \\
0 & -11 & -1 \\
\end{array}
\right]
$$
The square matrix that is now in the $A$ position of this augmented system is an example of an *upper-triangular* matrix - all entries below the diagonal are zero.
For such a matrix we can perform back substitution - starting at the bottom to solve trivially for the final unknown ($y$ here, which clearly takes the value $-1/-11 = 1/11$), and then using this knowledge working our way up to solve for each remaining unknown in turn, here just $x$ (solving $2x + 3\times (1/11) = 7$).
Note that we can perform a similar substitution if we have a lower triangular matrix: first finding the first unknown and then working our way forward through the remaining unknowns - hence in this case *forward substitution*.
Note that if we wished we could of course continue working on the augmented matrix to make the $A$ component diagonal: divide the second equation by 11 and multiply by 3 ($ Eq. (2) \leftarrow (3/11)\times Eq. (2) $) and add it to the first ($ Eq. (1) \leftarrow Eq. (1) + Eq. (2) $):
$$
\left[
\begin{array}{rr|r}
2 & 0 & 7-3/11\\
0 & -3 & -3/11 \\
\end{array}
\right]
$$
and we can further make it the identity by dividing the rows by $2$ and $-3$ respectively ($ Eq. (1) \leftarrow (1/2)\times Eq. (1) $, $ Eq. (2) \leftarrow (-1/3)\times Eq. (2) $) :
$$
\left[
\begin{array}{rr|r}
1 & 0 & (7-3/11)/2 \\
0 & 1 & 1/11 \\
\end{array}
\right]
$$
Each of these augmented matrices encodes exactly the same information as the original matrix system in terms of the unknown vector $\pmb{x}$, and hence this is telling us that
$$ \pmb{x} = I \pmb{x} = \left[
\begin{array}{c}
(7-3/11)/2 \\
1/11 \\
\end{array}
\right]
$$
i.e. exactly the solution we found when we performed back substitution from the upper-triangular form of the augmented system.
### <span style="color:blue">Exercise 5.3: Gaussian elimination $3 \times 3$ example (by hand) </span>
Consider the system of linear equations
\begin{align*}
2x + 3y - 4z &= 5 \\
6x + 8y + 2z &= 3 \\
4x + 8y - 6z &= 19
\end{align*}
write this in matrix form, form the corresponding augmented system and perform row operations until you get to upper-triangular form, find the solution using back substitution (**do this all with pen and paper**).
Write some code to check your answer using `sl.inv(A) @ b`.
You should find $x=-6$, $y=5$, $z=-1/2$.
I will also include the scheme to solve this below, but please only look at it if you are seriously stuck after giving it a good attempt. Thinking hard about something helps reinforce it in your mind.
```python
```
```python
```
```python
```
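You can check your pen-and-paper answer with a couple of lines:
```python
A = np.array([[2., 3., -4.],
              [6., 8., 2.],
              [4., 8., -6.]])
b = np.array([5., 3., 19.])
print(sl.inv(A) @ b)   # expect [-6., 5., -0.5]
```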
We begin by noting that Gaussian elimination has two major parts for solving this system.
The 1st part is converting the system to upper triangle form,
and the 2nd part is back substitution.
We are trying to solve the system of equation given by:
\begin{align*}
2x + 3y - 4z &= 5 \\
6x + 8y + 2z &= 3 \\
4x + 8y - 6z &= 19
\end{align*}
For Gaussian Elimination, the 1st part is obtaining the upper triangle form
We begin by taking the first and second equations, so equations 1 and 2
Gaussian elimination works in sequence, so you'll see lots of "first, second, third, etc."
\begin{align*}
2x + 3y - 4z &= 5 \\
6x + 8y + 2z &= 3 \\
\end{align*}
We note that the first unknown we have is the unknown $x$, and so we begin by eliminating $x$
We note that the coefficient of unknown $x$ in the first equation, i.e. $2x+3y-4z=5$ is $2$
We note that the coefficient of unknown $x$ in the second equation, i.e. $6x+8y+2z=3$ is $6$
I understand that to get from $2$ of the first equation, to the $6$ of the second equation, I need to multiply $2$ by $3$ to obtain $6$. I get the $3$ from dividing the coefficient of the unknown $x$ in 2nd equation $6$ by the coefficient of the unknown $x$ in the 1st equation $2$, i.e. $6/2=3$
Thus I will multiply the 1st equation by $3$
After multiplying the first equation by 3, I obtain the first and second equations as
\begin{align*}
6x + 9y - 12z &= 15 \\
6x + 8y + 2z &= 3 \\
\end{align*}
Then, I will subtract equation 1 from equation 2, so that the unknown $x$ is eliminated from equation 2.
Subtracting equation 1 from equation 2, I obtain
$$6x + 8y + 2z - (6x+9y-12z) = 3-15$$
$$-1y + 14z = -12$$
I will use the result of this subtraction to update my equation 2, thus, now I have an equation 2 that will not have the unknown $x$
\begin{align*}
2x + 3y - 4z &= 5 \\
0x - 1y + 14z &= -12 \\
4x + 8y - 6z &= 19
\end{align*}
Thus, we have successfully eliminated the unknown $x$ from one of the equations!
Then, we continue by taking the equations 1 and 3
\begin{align*}
2x + 3y - 4z &= 5 \\
4x + 8y - 6z &= 19
\end{align*}
We note that the first unknown we have is the unknown $x$, and so we begin by eliminating $x$
We note that the coefficient of unknown $x$ in the first equation, i.e. $2x+3y-4z=5$ is $2$
We note that the coefficient of unknown $x$ in the third equation, i.e. $4x+8y-6z=19$ is $4$
I understand that to get from $2$ of the first equation, to the $4$ of the third equation, I need to multiply $2$ by $2$ to obtain $4$. I get the $2$ from dividing the coefficient of the unknown $x$ in 3rd equation $4$ by the coefficient of the unknown $x$ in the 1st equation $2$, i.e. $4/2=2$
Thus I will multiply the 1st equation by $2$
After multiplying the first equation by 2, I obtain the first and third equations as
\begin{align*}
4x + 6y - 8z &= 10 \\
4x + 8y - 6z &= 19
\end{align*}
Then, I will subtract equation 1 from equation 3, so that the unknown $x$ is eliminated from equation 3.
Subtracting equation 1 from equation 3, I obtain
$$4x + 8y - 6z - (4x+6y-8z) = 19 - 10$$
$$2y + 2z = 9$$
I will use the result of this subtraction to update my equation 3, thus, now I have an equation 3 that will not have the unknown $x$
\begin{align*}
2x + 3y - 4z &= 5 \\
0x - 1y + 14z &= -12 \\
0x + 2y + 2z &= 9
\end{align*}
Thus, we have successfully eliminated the unknown $x$ from another one of the equations!
Well, after we have done equations 1-2 and then 1-3, the only pair left is equations 2 and 3. <span style="color:red">Note the sequential way we are doing this: 1-2, then 1-3, then 2-3. If you had 4 unknowns and 4 independent equations (neither over- nor under-constrained), you would do 1-2, 1-3, 1-4, 2-3, 2-4, 3-4. With 5 unknowns and 5 independent equations: 1-2, 1-3, 1-4, 1-5, 2-3, 2-4, 2-5, 3-4, 3-5, 4-5. You can extend this to even more unknowns and equations, provided the system is neither over- nor under-constrained. In Python you can do this with a double for loop, choosing the start and end of each loop carefully. Make sure the loop structure is correct before you begin writing the actual row operations!</span>
After our previous updates, we have equation 2 and equation 3 as
\begin{align*}
0x - 1y + 14z &= -12 \\
0x + 2y + 2z &= 9
\end{align*}
We note that the first remaining unknown is the unknown $y$ (note: $0x$ does not count), and so we begin by eliminating $y$
We note that the coefficient of unknown $y$ in the 2nd equation, i.e. $-1y + 14z = -12$, is $-1$
We note that the coefficient of unknown $y$ in the 3rd equation, i.e. $2y + 2z = 9$, is $2$
I understand that to get from the $-1$ of the 2nd equation to the $2$ of the 3rd equation, I need to multiply $-1$ by $-2$ to obtain $2$. I get the $-2$ from dividing the coefficient of the unknown $y$ in the 3rd equation, $2$, by the coefficient of the unknown $y$ in the 2nd equation, $-1$, i.e. $2/-1 = -2$
Thus I will multiply the 2nd equation by $-2$
After multiplying the 2nd equation by $-2$, I obtain the 2nd and 3rd equations as
\begin{align*}
0x + 2y - 28z &= 24 \\
0x + 2y + 2z &= 9
\end{align*}
Then, I will subtract equation 2 from equation 3, so that the unknown $y$ is eliminated from equation 3.
Subtracting equation 2 from equation 3, I obtain
$$(2y + 2z) - (2y - 28z) = 9 - 24$$
$$ 0y + 30z = -15$$
I will use the result of this subtraction to update my equation 3, so equation 3 no longer contains the unknown $y$ (I have also multiplied equation 2 through by $-1$ to tidy it up):
\begin{align*}
2x + 3y - 4z &= 5 \\
0x + y - 14z &= 12 \\
0x + 0y + 30z &= -15
\end{align*}
Thus, we have successfully eliminated the unknown $y$ from one of the equations!
Now, our equations have a triangle of coefficients that are $0$ and another triangle of coefficients that are $\neq 0$. Because the triangle of non-zero coefficients sits above the triangle of zeros, this form is called the Upper Triangle Form.
Note that there is also the Lower Triangle Form, which is essentially the mirror image of the Upper Triangle Form. The procedure to find it is the same as for the Upper Triangle Form, but you work your way up instead of down: for 3 unknowns and 3 equations, instead of 1-2, 1-3, 2-3, you go 3-2, 3-1, 2-1.
After obtaining the Upper Triangle Form, the next step is Back Substitution.
Let's start from the Upper Triangle Form
Since we are doing **back** substitution, we start from the last equation and go to the 1st equation, so for our case, equation 3, then 2 then 1.
If you had the Lower Triangle Form, your substitution runs the other way (it is then called *forward substitution*): you start from the first equation and go to the last, so for 3 unknowns and 3 equations you go 1, 2, 3.
Note that unlike obtaining the Triangle Form, which required working with 2 equations at a time, back substitution only needs 1 equation at a time.
\begin{align*}
2x + 3y - 4z &= 5 \\
0x + y - 14z &= 12 \\
0x + 0y + 30z &= -15
\end{align*}
I don't know the value of any of the unknowns yet.
From equation 3, I notice that
$$ 0x + 0y + 30z = -15 $$
Since I don't know any of the unknowns yet, there is nothing to substitute. This is simply
$$30z = -15$$
$$z = -15/30 = -1/2$$
So we have found one of the unknowns, and we know that $z = -1/2$
Then I will proceed to equation 2.
I know the value of the unknown $z$ being $z=-1/2$
From equation 2, I notice that
$$0x + y - 14z = 12$$
I know that $z = -1/2$, so I can substitute, and I get
$$ y - 14 \times (-1/2) = 12 $$
$$ y +7 = 12 $$
$$ y = 5 $$
So we have found another one of the unknowns, and we know that $y = 5$ and $z = -1/2$
Then I will proceed to equation 1.
I know the value of the unknown $z$ being $z=-1/2$ and the unknown $y$ being $y = 5$
From equation 1, I notice that
$$ 2x + 3y - 4z = 5 $$
I know that $y = 5$ and $z = -1/2$, so I can substitute, and I get
$$ 2x + 3\times 5 - 4 \times (-1/2) = 5 $$
$$ 2x + 15 +2 =5 $$
$$ 2x = -12 $$
$$ x = -6$$
So we have found another one of the unknowns, and we know that $x = -6$ and $y = 5$ and $z = -1/2$
We can convert this into Matrix form $Ax = b$
$$
A=
\left(
\begin{array}{rrr}
A_{11} & A_{12} & A_{13} \\
A_{21} & A_{22} & A_{23} \\
A_{31} & A_{32} & A_{33} \\
\end{array}
\right)
$$
$$
A=
\left(
\begin{array}{rrr}
2 & 3 & -4 \\
6 & 8 & 2 \\
4 & 8 & -6 \\
\end{array}
\right)
$$
and
$$
x=
\left(
\begin{array}{c}
x_1 \\
x_2 \\
x_3 \\
\end{array}
\right)
$$
$$
x=
\left(
\begin{array}{c}
x \\
y \\
z \\
\end{array}
\right)
$$
and
$$
b=
\left(
\begin{array}{c}
b_1 \\
b_2 \\
b_3 \\
\end{array}
\right)
$$
$$
b=
\left(
\begin{array}{c}
5 \\
3 \\
19 \\
\end{array}
\right)
$$
Thus we write
$$
\left(
\begin{array}{rrr}
2 & 3 & -4 \\
6 & 8 & 2 \\
4 & 8 & -6 \\
\end{array}
\right)
\left(
\begin{array}{c}
x \\
y \\
z \\
\end{array}
\right)
=
\left(
\begin{array}{c}
5 \\
3 \\
19 \\
\end{array}
\right)
$$
Do try to follow the same procedure outlined above to solve the equation,
and see how you would write the above procedure in matrix form instead.
Once you have finished, try to think about how to implement it in Python code.
Separate your code into two parts: the part where you do the conversion to **Upper Triangle Form** (or **Lower Triangle Form** if you wish, noting the order of doing things), and the part where you do **Back Substitution**.
## V. Gaussian elimination - algorithm and code
Notice that we are free to perform the following operations on the augmented system without changing the corresponding solution:
* Exchange two rows (refer to the section on *partial pivoting* next week)
* Multiply a row by a non-zero constant ($Eq. (i)\leftarrow \lambda \times Eq.(i)$)
* Subtracting a (non-zero) multiple of one row with another ($Eq. (i)\leftarrow Eq. (i) - \lambda \times Eq.(j)$)
Note that the equation/row being subtracted is termed the *pivot*.
Let's consider the algorithm mid-way working on an arbitrary matrix system, i.e. assume that the first $k$ rows (i.e. above the horizontal dashed line in the matrix below) have already been transformed into upper-triangular form, while the equations/rows below are not yet in this form.
The augmented equation in this case can be assumed to look like
$$
\left[
\begin{array}{rrrrrrrrr|r}
A_{11} & A_{12} & A_{13} & \cdots & A_{1k} & \cdots & A_{1j} & \cdots & A_{1n} & b_1 \\
0 & A_{22} & A_{23} & \cdots & A_{2k} & \cdots & A_{2j} & \cdots & A_{2n} & b_2 \\
0 & 0 & A_{33} & \cdots & A_{3k} & \cdots & A_{3j} & \cdots & A_{3n} & b_3 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \ddots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & A_{kk} & \cdots & A_{kj} & \cdots & A_{kn} & b_k \\
\hdashline
\vdots & \vdots & \vdots & \ddots & \vdots & \ddots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & A_{ik} & \cdots & A_{ij} & \cdots & A_{in} & b_i \\
\vdots & \vdots & \vdots & \ddots & \vdots & \ddots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & A_{nk} & \cdots & A_{nj} & \cdots & A_{nn} & b_n \\
\end{array}
\right]
$$
Remember that here as we are mid-way through the algorithm the $A$'s and $b$'s in the above are **not** the same as in the original system!
Our aim as a next step in the algorithm is to use row $k$ (the "pivot row") to *eliminate* $A_{ik}$, and we need to do this for all of the rows $i$ below the pivot, i.e. for all $i>k$.
Note that the zeros to the left of the leading term in the pivot row means that these operations will not mess up the fact that we have all the zeros we are looking for in the lower left part of the matrix.
To eliminate $A_{ik}$ for a single row $i$ we need to perform the operation
$$ Eq. (i)\leftarrow Eq. (i) - \frac{A_{ik}}{A_{kk}} \times Eq.(k) $$
or equivalently
\begin{align}
A_{ij} &\leftarrow A_{ij} - \frac{A_{ik}}{A_{kk}} A_{kj}, \quad j=k,k+1,\ldots,n\\
b_i &\leftarrow b_i - \frac{A_{ik}}{A_{kk}} b_{k}
\end{align}
$j$ only needs to run from $k$ upwards as we can assume that the earlier entries in row $i$ have already been set to zero, and also that the corresponding terms from the pivot row are zero (we don't need to perform operations that we know involve the addition of zeros!).
And to eliminate these entries for all rows below the pivot we need to repeat for all $i>k$.
### <span style="color:blue">Exercise 5.4: Gaussian elimination</span>
Write some code that takes a matrix $A$ and a vector $\pmb{b}$ and converts it into upper-triangular form using the above algorithm. For the $2 \times 2$ and $3\times 3$ examples from above compare the resulting $A$ and $\pmb{b}$ you obtain following elimination.
```python
def upper_triangle(A, b):
""" A function to covert A into upper triangluar form through row operations.
The same row operations are performed on the vector b.
Note that this implementation does not use partial pivoting which is introduced below.
Also note that A and b are overwritten, and hence we do not need to return anything
from the function.
"""
n = np.size(b)
rows, cols = np.shape(A)
# check A is square
assert(rows == cols)
    # and check A has the same number of rows as the size of the vector b
assert(rows == n)
# Loop over each pivot row - all but the last row which we will never need to use as a pivot
for k in range(n-1):
# Loop over each row below the pivot row, including the last row which we do need to update
for i in range(.................
...........
...........
# Test our code on our 2x2 and 3x3 examples from above
A = .......
b = .......
upper_triangle(A, b)
# Here is a new trick for you - "pretty print"
from pprint import pprint
print('\nOur A matrix following row operations to transform it into upper-triangular form:')
pprint(A)
print('The correspondingly updated b vector:')
pprint(b)
```
```python
```
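In case you get stuck, here is one possible completion of the skeleton above. It is a sketch without partial pivoting (introduced next week), so it assumes the pivots $A_{kk}$ are non-zero:
```python
def upper_triangle(A, b):
    """ Convert A into upper triangular form in place through row operations,
    applying the same operations to b. No partial pivoting: assumes A[k, k] != 0.
    """
    n = np.size(b)
    rows, cols = np.shape(A)
    assert(rows == cols)
    assert(rows == n)
    # loop over each pivot row - all but the last row
    for k in range(n-1):
        # loop over each row below the pivot row
        for i in range(k+1, n):
            s = A[i, k] / A[k, k]               # multiple of the pivot row to subtract
            A[i, k:] = A[i, k:] - s * A[k, k:]
            b[i] = b[i] - s * b[k]

A = np.array([[2., 3., -4.],
              [6., 8., 2.],
              [4., 8., -6.]])
b = np.array([5., 3., 19.])
upper_triangle(A, b)
print(A)   # expect rows [2, 3, -4], [0, -1, 14], [0, 0, 30]
print(b)   # expect [5., -12., -15.]
```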
### Back substitution
Now that we have an augmented system in the upper-triangular form
$$
\left[
\begin{array}{rrrrr|r}
A_{11} & A_{12} & A_{13} & \cdots & A_{1n} & b_1 \\
0 & A_{22} & A_{23} & \cdots & A_{2n} & b_2 \\
0 & 0 & A_{33} & \cdots & A_{3n} & b_3 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & A_{nn} & b_n \\
\end{array}
\right]
$$
where the solution $\pmb{x}$ of the original system also satisfies $A\pmb{x}=\pmb{b}$ for the $A$ and $\pmb{b}$ in the above upper-triangular form (rather than the original $A$ and $\pmb{b}$!).
We can solve the final row's equation to yield
$$x_n = \frac{b_n}{A_{nn}}$$
The second to last equation then yields (note we've introduced a comma in the subscripts here simply to make the more complex double indices easier to read)
\begin{align}
A_{n-1,n-1}x_{n-1} + A_{n-1,n}x_n &= b_{n-1}\\
\implies x_{n-1} = \frac{b_{n-1} - A_{n-1,n}x_n}{A_{n-1,n-1}}\\
\implies x_{n-1} = \frac{b_{n-1} - A_{n-1,n}\frac{b_n}{A_{nn}}}{A_{n-1,n-1}}
\end{align}
and so on to row $k$ which yields
\begin{align}
A_{k,k}x_{k} + A_{k,k+1}x_{k+1} +\cdots + A_{k,n}x_n &= b_{k}\\
\iff A_{k,k}x_{k} + \sum_{j=k+1}^{n}A_{kj}x_j &= b_{k}\\
\implies x_{k} &= \left( b_k - \sum_{j=k+1}^{n}A_{kj}x_j\right)\frac{1}{A_{kk}}
\end{align}
### <span style="color:blue">Exercise 5.5: Back substitution</span>
Extend your code to perform back substitution and hence to obtain the final solution $\pmb{x}$. Check against the solutions found earlier. Come up with some random $n\times n$ matrices (you can use `np.random.rand` for that) and check your code against `sl.inv(A)@b` (remember to use the original $A$ and $\pmb{b}$ here of course!)
```python
# This function assumes that A is already an upper triangular matrix,
# e.g. we have already run our upper_triangular function if needed.
def back_substitution(A, b):
""" Function to perform back subsitution on the system Ax=b.
Returns the solution x.
Assumes that A is on upper triangular form.
"""
n = np.size(b)
# Check A is square and its number of rows and columns same as size of the vector b
rows, cols = np.shape(A)
assert(rows == cols)
assert(rows == n)
# We can/should check that A is upper triangular using np.triu which is the
# upper triangular part of a matrix - if A is already upper triangular, then
# it should of course match the upper-triangular component of A!!
assert(np.allclose(A, np.triu(A)))
x = np.zeros(n)
# start at the end (row n-1) and work backwards
for k in range(............
..................
return x
# This A is the upper triangular matrix carried forward
# from the Python box above, and b the correspondingly updated b vector.
A = ..........
b = ..........
# print the solution using our codes
x = back_substitution(A, b)
print('Our solution: ',x)
# Reinitialise A and b !
# remember our functions overwrote them
A = np.array([[2., 3., -4.],
[6., 8., 2.],
[4., 8., -6.]])
b = np.array([5., 3., 19.])
# check our answer against what SciPy gives us by multiplying b by A inverse
print('SciPy solution: ',sl.inv(A) @ b)
print('Success: ', np.allclose(x, sl.inv(A) @ b))
```
```python
```
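If you need it, here is one possible way to complete the back-substitution loop (a sketch, again renamed so it does not overwrite your own version):
```python
import numpy as np

def back_substitution_solution(A, b):
    """Solve Ax = b assuming A is already in upper-triangular form."""
    n = np.size(b)
    x = np.zeros(n)
    for k in range(n-1, -1, -1):              # rows n-1 down to 0
        # x_k = (b_k - sum_{j>k} A_kj x_j) / A_kk
        x[k] = (b[k] - np.dot(A[k, k+1:], x[k+1:])) / A[k, k]
    return x
```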
## VI. Gauss-Jordan elimination
Recall that for the augmented matrix example we did by hand above we continued past the upper-triangular form so that the augmented matrix had the identity matrix in the $A$ location. This algorithm has the name Gauss-Jordan elimination but note that it requires more operations than the conversion to upper-triangular form followed by back substitution and so is only of academic interest.
### Matrix inversion
Note that if we were to form the augmented equation with the full identity matrix in the place of the vector $\pmb{b}$, i.e. $[A|I]$ and performed row operations exactly as above until $A$ is transformed into the identity matrix $I$, then we would be left with the inverse of $A$ in the original $I$ location, i.e.
$$ [A|I] \rightarrow [I|A^{-1}] $$
Ok, so the details above are sparse, and probably not quite sufficient. I suppose that some will get to coding right away, but let me give some hints to those who are, well, confused, and see if it helps alleviate the confusion. Those of you who don't need the hints can go straight to the next exercise. :)
I have typed a bunch of matrices below; see if you can work out what the sequence of matrices means. You should find the operations oddly familiar.
Note that due to my low dexterity stat, although I double checked, there may be typos in the steps below.
$$
A=
\left(
\begin{array}{rrr}
2 & -1 & 0 \\
-1 & 2 & -1 \\
0 & -1 & 2 \\
\end{array}
\right)
\;\;\;\;\;\;\;\;I=
\left(
\begin{array}{rrr}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\end{array}
\right)
$$
$$---------------------------------------------------------------------------------------------------------------------$$
$$
\left(
\begin{array}{rrr}
2 & -1 & 0 \\
2 & -4 & +2 \\
0 & -1 & 2 \\
\end{array}
\right)
\;\;\;\;\;\;\;\;\;
\left(
\begin{array}{rrr}
1 & 0 & 0 \\
0 & -2 & 0 \\
0 & 0 & 1 \\
\end{array}
\right)
$$
$$
\left(
\begin{array}{rrr}
2 & -1 & 0 \\
0 & -3 & +2 \\
0 & -1 & 2 \\
\end{array}
\right)
\;\;\;\;\;\;\;\;\;
\left(
\begin{array}{rrr}
1 & 0 & 0 \\
-1 & -2 & 0 \\
0 & 0 & 1 \\
\end{array}
\right)
$$
$$
\left(
\begin{array}{rrr}
2 & -1 & 0 \\
0 & -3 & +2 \\
0 & -3 & 6 \\
\end{array}
\right)
\;\;\;\;\;\;\;\;\;
\left(
\begin{array}{rrr}
1 & 0 & 0 \\
-1 & -2 & 0 \\
0 & 0 & 3 \\
\end{array}
\right)
$$
$$
\left(
\begin{array}{rrr}
2 & -1 & 0 \\
0 & -3 & +2 \\
0 & 0 & 4 \\
\end{array}
\right)
\;\;\;\;\;\;\;\;\;
\left(
\begin{array}{rrr}
1 & 0 & 0 \\
-1 & -2 & 0 \\
1 & 2 & 3 \\
\end{array}
\right)
$$
$$---------------------------------------------------------------------------------------------------------------------$$
$$
\left(
\begin{array}{rrr}
2 & -1 & 0 \\
0 & -6 & +4 \\
0 & 0 & 4 \\
\end{array}
\right)
\;\;\;\;\;\;\;\;\;
\left(
\begin{array}{rrr}
1 & 0 & 0 \\
-2 & -4 & 0 \\
1 & 2 & 3 \\
\end{array}
\right)
$$
$$
\left(
\begin{array}{rrr}
2 & -1 & 0 \\
0 & -6 & 0 \\
0 & 0 & 4 \\
\end{array}
\right)
\;\;\;\;\;\;\;\;\;
\left(
\begin{array}{rrr}
1 & 0 & 0 \\
-3 & -6 & -3 \\
1 & 2 & 3 \\
\end{array}
\right)
$$
$$
\left(
\begin{array}{rrr}
12 & -6 & 0 \\
0 & -6 & 0 \\
0 & 0 & 4 \\
\end{array}
\right)
\;\;\;\;\;\;\;\;\;
\left(
\begin{array}{rrr}
6 & 0 & 0 \\
-3 & -6 & -3 \\
1 & 2 & 3 \\
\end{array}
\right)
$$
$$
\left(
\begin{array}{rrr}
-12 & 0 & 0 \\
0 & -6 & 0 \\
0 & 0 & 4 \\
\end{array}
\right)
\;\;\;\;\;\;\;\;\;
\left(
\begin{array}{rrr}
-9 & -6 & -3 \\
-3 & -6 & -3 \\
1 & 2 & 3 \\
\end{array}
\right)
$$
$$---------------------------------------------------------------------------------------------------------------------$$
$$
\left(
\begin{array}{rrr}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\end{array}
\right)
\;\;\;\;\;\;\;\;\;
\left(
\begin{array}{rrr}
3/4 & 1/2 & 1/4 \\
1/2 & 1 & 1/2 \\
1/4 & 2/4 & 3/4 \\
\end{array}
\right)
$$
$$---------------------------------------------------------------------------------------------------------------------$$
$$
A^{-1}=
\left(
\begin{array}{rrr}
3/4 & 1/2 & 1/4 \\
1/2 & 1 & 1/2 \\
1/4 & 2/4 & 3/4 \\
\end{array}
\right)
$$
### <span style="color:blue">Exercise 5.6: Matrix inversion</span>
Try updating your code to construct the inverse matrix. Check your answer against the inverse matrix given by a built-in function.
Hint: Once you have performed your Gaussian elimination to transform $A$ into an upper triangular matrix, perform another elimination "from bottom to top" to transform $A$ into a diagonal matrix.
```python
# Updated version of the upper_triangular function that
# assumes that a matrix, B, is in the old vector location
# in the augmented system, and applies the same operations to
# B as to A
def upper_triangle2(A, B):
# your code here
# Function which transforms the matrix into lower triangular
# form - the point here is that if you give it a
# matrix that is already in upper triangular form, then the
# result will be a diagonal matrix
def lower_triangle2(A, B):
# your code here
# Let's redefine A as our matrix above
A = # your code here
# and B is the identity of the corresponding size
B = # your code here
# transform A into upper triangular form (and perform the same operations on B)
# your code here
# now make this updated A lower triangular as well (the result should be diagonal)
# your code here
# final step is just to divide each row through by the value of the diagonal
# to end up with the identity in the place of A
# your code here
# print final A and B and check solution
# your code here
```
```python
```
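One possible completed sketch for this exercise (names suffixed with `_solution` so they do not overwrite your own functions):
```python
import numpy as np

def upper_triangle2_solution(A, B):
    """Row-reduce A to upper-triangular form, applying the same row
    operations to the matrix B sitting in the old vector location."""
    n = np.shape(A)[0]
    for k in range(n-1):
        for i in range(k+1, n):
            s = A[i, k] / A[k, k]
            A[i, :] -= s * A[k, :]
            B[i, :] -= s * B[k, :]

def lower_triangle2_solution(A, B):
    """Eliminate "from bottom to top"; if A is already upper triangular
    the result is a diagonal matrix."""
    n = np.shape(A)[0]
    for k in range(n-1, 0, -1):
        for i in range(k-1, -1, -1):
            s = A[i, k] / A[k, k]
            A[i, :] -= s * A[k, :]
            B[i, :] -= s * B[k, :]

# The matrix from the worked example above, with B the identity
A = np.array([[2., -1., 0.],
              [-1., 2., -1.],
              [0., -1., 2.]])
B = np.identity(3)
upper_triangle2_solution(A, B)
lower_triangle2_solution(A, B)
# Divide each row through by the diagonal entry of A
for k in range(3):
    B[k, :] /= A[k, k]
    A[k, :] /= A[k, k]
print(B)  # should reproduce the A^{-1} worked out by hand above
```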
### <span style="color:blue">Exercise 5.7: Zeros on the diagonal</span>
You may have noticed above that we have no way of guaranteeing that the $A_{kk}$ we divide through by in the Gaussian elimination or back substitution algorithms is non-zero (or not very small, which will also lead to computational problems).
Note also that we commented that we are free to exchange two rows in our augmented system - how could you use this fact to build robustness into our algorithms in order to deal with matrices for which our algorithms do lead to very small or zero $A_{kk}$ values?
See if you can figure out how to do this - more on this next week!
```python
```
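As a small hint towards a solution (a sketch of the key idea, known as partial pivoting): before using row $k$ as the pivot row, swap it with the row below it that has the largest entry (in magnitude) in column $k$.
```python
import numpy as np

def swap_for_largest_pivot(A, b, k):
    """Swap row k with the row (at or below k) holding the largest
    |entry| in column k, applying the same swap to b."""
    i_max = k + np.argmax(np.abs(A[k:, k]))
    if i_max != k:
        A[[k, i_max], :] = A[[i_max, k], :]   # exchange the two rows of A
        b[[k, i_max]] = b[[i_max, k]]         # and the matching entries of b
```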
## 4.4 Calculus
In fact, mathematical formulas cover an extremely broad range. Even among the familiar university mathematics courses, the notational conventions of calculus, linear algebra, and probability and statistics differ considerably. This section introduces how to write and compile the mathematical expressions of calculus in LaTeX.
### 4.4.1 Limits
Taking limits is the cornerstone of all of calculus. For example, $\lim_{x\to 2}x^{2}$ corresponds to the LaTeX code `$\lim_{x\to 2}x^{2}$`.
**Example 1.** Write the following limit expression:
$$\lim_{x\to-\infty}\frac{3x^{2}-2}{3x-2x^{2}}=\lim_{x\to-\infty}\frac{x^{2}\left(3-\frac{2}{x^{2}}\right)}{x^{2}\left(\frac{3}{x}-2\right)}=\lim_{x\to-\infty}\frac{3-\frac{2}{x^{2}}}{\frac{3}{x}-2}=-\frac{3}{2}$$
```tex
\documentclass[12pt]{article}
\begin{document}
$$\lim_{x\to-\infty}\frac{3x^{2}-2}{3x-2x^{2}}=\lim_{x\to-\infty}\frac{x^{2}\left(3-\frac{2}{x^{2}}\right)}{x^{2}\left(\frac{3}{x}-2\right)}=\lim_{x\to-\infty}\frac{3-\frac{2}{x^{2}}}{\frac{3}{x}-2}=-\frac{3}{2}$$
\end{document}
```
**Example 2.** Write the limits $\lim_{\Delta t\to0}\frac{s(t+\Delta t)+s(t)}{\Delta t}$ and $\displaystyle{\lim_{\Delta t\to0}\frac{s(t+\Delta t)+s(t)}{\Delta t}}$.
```tex
\documentclass[12pt]{article}
\begin{document}
$\lim_{\Delta t\to0}\frac{s(t+\Delta t)+s(t)}{\Delta t}$ \& $\displaystyle{\lim_{\Delta t\to0}\frac{s(t+\Delta t)+s(t)}{\Delta t}}$
\end{document}
```
### 4.4.2 Derivatives
In calculus, given a function $f(x)$, we can define its derivative as
$$f^\prime(a)=\lim_{x\to a}\frac{f(x)-f(a)}{x-a}$$
In LaTeX this formula is written as `$$f^\prime(a)=\lim_{x\to a}\frac{f(x)-f(a)}{x-a}$$`. Sometimes, to keep the fraction from looking visually oversized, you can instead write `$$f^\prime(a)=\lim\limits_{x\to a}\frac{f(x)-f(a)}{x-a}$$`, where the `\lim` and `\limits` commands are used together.
Note that the `\prime` command in `f^\prime(x)` is the standard notation; it can also be written as `f'(x)`.
**Example 3.** Use the `\prime` command to write the definition of the derivative $f^\prime(x)=\lim_{\Delta x\to 0}\frac{f(x+\Delta x)-f(x)}{\Delta x}$.
```tex
\documentclass[12pt]{article}
\begin{document}
$$f^\prime(x)=\lim_{\Delta x\to 0}\frac{f(x+\Delta x)-f(x)}{\Delta x}$$
\end{document}
```
**Example 4.** Write the derivative $f^\prime(x)=15x^{4}+6x^{2}$ of the function $f(x)=3x^{5}+2x^{3}+1$.
```tex
\documentclass[12pt]{article}
\begin{document}
$$f^\prime(x)=15x^{4}+6x^{2}$$
\end{document}
```
Differentials are central to calculus. `\mathrm{d}` is the command for the differential symbol $\mathrm{d}$; in general, the standard way to write an $n$-th derivative is $\frac{\mathrm{d}^{n}}{\mathrm{d}x^{n}}f(x)$.
**Example 5.** Write the derivatives $\frac{\mathrm{d}}{\mathrm{d}x}f(x)=15x^{4}+6x^{2}$ and $\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}f(x)=60x^{3}+12x$.
```tex
\documentclass[12pt]{article}
\begin{document}
$$\frac{\mathrm{d}}{\mathrm{d}x}f(x)=15x^{4}+6x^{2}$$
$$\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}f(x)=60x^{3}+12x$$
\end{document}
```
In calculus, the command for the partial-derivative symbol $\partial$ is `\partial`. For any function $f(x,y)$, the standard forms are $\frac{\partial^{n}}{\partial x^{n}}f(x,y)$ and $\frac{\partial^{n}}{\partial y^{n}}f(x,y)$.
**Example 6.** Write the partial derivatives $\frac{\partial}{\partial x}f(x,y)=15x^{4}y^{2}+6x^{2}y$ and $\frac{\partial}{\partial y}f(x,y)=6x^{5}y+2x^{3}$ of the function $f(x,y)=3x^{5}y^{2}+2x^{3}y+1$.
```tex
\documentclass[12pt]{article}
\begin{document}
$$\frac{\partial}{\partial x}f(x,y)=15x^{4}y^{2}+6x^{2}y$$
$$\frac{\partial}{\partial y}f(x,y)=6x^{5}y+2x^{3}$$
\end{document}
```
**Example 7.** Write the partial derivative evaluated at a point, $z=\mu\,\frac{\partial y}{\partial x}\bigg|_{x=0}$.
```tex
\documentclass[12pt]{article}
\begin{document}
$$z=\mu\,\frac{\partial y}{\partial x}\bigg|_{x=0}$$
\end{document}
```
### 4.4.3 Integrals
The standard form of an integral is $\int_{a}^{b}f(x)\,\mathrm{d}x$, written as `\int_{a}^{b}f(x)\,\mathrm{d}x`. Here `\int` produces the integral sign (short for "integral"), and `\,` inserts a thin space before the differential.
**Example 8.** Write the integrals $\int\frac{\mathrm{d}x}{\sqrt{a^{2}-x^{2}}}=\arcsin\left(\frac{x}{a}\right)+C$ and $\int\tan^{2}x\,\mathrm{d}x=\tan x-x+C$.
```tex
\documentclass[12pt]{article}
\begin{document}
$$\int\frac{\mathrm{d}x}{\sqrt{a^{2}-x^{2}}}=\arcsin\left(\frac{x}{a}\right)+C$$
$$\int\tan^{2}x\,\mathrm{d}x=\tan x-x+C$$
\end{document}
```
**Example 9.** Write the integrals $\int_{a}^{b}\left[\lambda_{1}f_{1}(x)+\lambda_{2}f_{2}(x)\right]\,\mathrm{d}x=\lambda_{1}\int_{a}^{b}f_{1}(x)\,\mathrm{d}x+\lambda_{2}\int_{a}^{b}f_{2}(x)\,\mathrm{d}x$ and $\int_{a}^{b}f(x)\,\mathrm{d}x=\int_{a}^{c}f(x)\,\mathrm{d}x+\int_{c}^{b}f(x)\,\mathrm{d}x$.
```tex
\documentclass[12pt]{article}
\begin{document}
$$\int_{a}^{b}\left[\lambda_{1}f_{1}(x)+\lambda_{2}f_{2}(x)\right]\,\mathrm{d}x=\lambda_{1}\int_{a}^{b}f_{1}(x)\,\mathrm{d}x+\lambda_{2}\int_{a}^{b}f_{2}(x)\,\mathrm{d}x$$
$$\int_{a}^{b}f(x)\,\mathrm{d}x=\int_{a}^{c}f(x)\,\mathrm{d}x+\int_{c}^{b}f(x)\,\mathrm{d}x$$
\end{document}
```
**Example 10.** Write the integral
\begin{equation}
\begin{aligned}
V &=2\pi\int_{0}^{2} x\left[1-(x-1)^{2}\right]\,\mathrm{d}x \\
&=2\pi\int_{0}^{2}\left[-x^{3}+2 x^{2}\right]\,\mathrm{d}x \\
&=2\pi\left[-\frac{1}{4} x^{4}+\frac{2}{3} x^{3}\right]_{0}^{2} \\
&=8\pi/3
\end{aligned}
\end{equation}
```tex
\documentclass[12pt]{article}
\begin{document}
\begin{equation}
\begin{aligned}
V&=2\pi\int_{0}^{2} x\left[1-(x-1)^{2}\right]\,\mathrm{d}x \\
&=2\pi\int_{0}^{2}\left[-x^{3}+2 x^{2}\right]\,\mathrm{d}x \\
&=2\pi\left[-\frac{1}{4} x^{4}+\frac{2}{3} x^{3}\right]_{0}^{2} \\
&=8\pi/3
\end{aligned}
\end{equation}
\end{document}
```
All of the above are single integrals. Calculus courses also involve double, triple, and higher integrals. LaTeX provides the basic command `\int` for a single integral, `\iint` for a double integral, `\iiint` for a triple integral, and `\iiiint` for a quadruple integral; for five-fold or higher integrals one generally uses `\idotsint`, i.e. $\idotsint$.
**Example 11.** Write the integrals $\iint\limits_{D}f(x,y)\,\mathrm{d}\sigma$ and $\iiint\limits_{\Omega}\left(x^{2}+y^{2}+z^{2}\right)\,\mathrm{d}v$.
```tex
\documentclass[12pt]{article}
\begin{document}
$$\iint\limits_{D}f(x,y)\,\mathrm{d}\sigma$$
$$\iiint\limits_{\Omega}\left(x^{2}+y^{2}+z^{2}\right)\,\mathrm{d}v$$
\end{document}
```
There is also a special integral sign formed by adding a circle to the standard symbol; it denotes integrals over closed curves or surfaces, e.g. $\oint_{C}f(x)\,\mathrm{d}x+g(y)\,\mathrm{d}y$, written as `\oint_{C}f(x)\,\mathrm{d}x+g(y)\,\mathrm{d}y`.
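Following the pattern of the earlier examples, a minimal document that compiles this contour integral might look like:
```tex
\documentclass[12pt]{article}
\begin{document}
$$\oint_{C}f(x)\,\mathrm{d}x+g(y)\,\mathrm{d}y$$
\end{document}
```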
### Exercises
> Open the online LaTeX system [https://www.overleaf.com](https://www.overleaf.com/project) or a locally installed LaTeX editor, create a project named LaTeX_practice with a new source file using the `.tex` extension, and complete the following exercises.
[1] Write the Taylor expansion
\begin{equation}
\begin{aligned}
f\left(x\right)=&\frac{f\left(x_{0}\right)}{0!}+\frac{f'\left(x_{0}\right)}{1!}\left(x-x_{0}\right) \\
&+\cdots+\frac{f^{\left(n\right)}\left(x_{0}\right)}{n!}\left(x-x_{0}\right)^{n}+R_{n}\left(x\right)
\end{aligned}
\end{equation}
```tex
\documentclass[12pt]{article}
\begin{document}
\begin{equation}
\begin{aligned}
% write the formula here
\end{aligned}
\end{equation}
\end{document}
```
[Back] [**4.3 Greek Letters**](https://nbviewer.jupyter.org/github/xinychen/latex-cookbook/blob/main/chapter-4/section3.ipynb)
[Next] [**4.5 Linear Algebra**](https://nbviewer.jupyter.org/github/xinychen/latex-cookbook/blob/main/chapter-4/section5.ipynb)
### License
<div class="alert alert-block alert-danger">
<b>This work is released under the MIT license.</b>
</div>
# Variational Linear Systems via QAOA
Ryan LaRose, Yigit Subasi, Lukasz Cincio, Patrick Coles
### Abstract
In this notebook, we implement an approach to solving linear systems on a NISQ computer. The approach uses QAOA [1] with the "linear system Hamiltonian" given in [2]. Other approaches for the "quantum linear systems problem" [3-5] require resources beyond current hardware capabilities. This method can be implemented on NISQ processors. Other "near-term approaches" have recently emerged in the literature [6-7].
```python
"""Imports for the notebook.
Requires:
Matplotlib
Numpy
Scipy
Cirq v0.5.0
"""
from time import time
import numpy as np
import matplotlib.pyplot as plt
from scipy.linalg import expm
from scipy.optimize import minimize
import cirq
assert cirq.__version__ == "0.5.0", "Wrong version of Cirq! Code may or may not work."
```
```python
"""Flags for testing behavior."""
# Option to use identity matrix if true
DEBUG = False
# Option to scale gamma parameters by matrix condition number if true
SCALE = False
```
## Preliminaries
We seek to solve the "quantum linear systems problem" (QLSP) defined by the system
\begin{equation}
A \mathbf{x} = \mathbf{b}
\end{equation}
where $A \in \mathbb{R}^{N \times N}$ and $N = 2^n$. QLSP is similar to the classical linear systems problem except we only want to output a "quantum description" of the solution
\begin{equation}
| \mathbf{x} \rangle = |A^{-1} \mathbf{b} \rangle
\end{equation}
(because we don't have a good way around this).
#### Challenge: Find a (practical) problem where we only want some expectation of the solution $\langle \mathbf{x} | \hat{O} | \mathbf{x} \rangle$ for some Hermitian operator $\hat{O}$.
## Ansatz wavefunction
_Note: This presentation is based off the slides Yigit presented at SQUINT 2019._
We follow the method outlined in [2] to construct our ansatz wavefunction. First, define the projector
\begin{equation}
P_\mathbf{b}^\perp := I - |\mathbf{b}\rangle \langle \mathbf{b}| .
\end{equation}
Since $P_\mathbf{b}^\perp |\mathbf{b} \rangle = \mathbf{0}$, we have
\begin{equation}
P_\mathbf{b}^\perp A | \mathbf{x} \rangle = \mathbf{0} .
\end{equation}
Define $B := P_\mathbf{b}^\perp A$ and the operator
\begin{equation}
H_f := B^\dagger B = A P_\mathbf{b}^\perp A .
\end{equation}
The subscript $f$ will become apparent shortly. This operator (Hamiltonian) is:
* Hermitian.
* positive-semidefinite $\implies$ $|\mathbf{x}\rangle$ is the ground state.
Note that $|\mathbf{x}\rangle$ is the _unique_ ground state since $P_\mathbf{b}^\perp$ is an $N - 1$ dimensional projector.
We now parameterize the linear system $A$ with the schedule
\begin{equation}
A \mapsto A(t) := (1 - t) I + t A
\end{equation}
for $0 \le t \le 1$ and similarly the Hamiltonian
\begin{equation}
H(t) := A(t) P_\mathbf{b}^\perp A(t) .
\end{equation}
Note that $H(t=1) = H_f$.
This defines our "driver Hamiltonian" in QAOA, i.e. the unitary operator
\begin{equation}
U_t(\gamma) = e^{ - i \gamma H(t) }
\end{equation}
#### Question: Normally, the "cost Hamiltonian" in QAOA is assumed to be diagonal in the computational basis. In general, our Hamiltonian is not. Do we need a separate "mixer Hamiltonian" in this case?
#### Note: QAOA typically assumes the driver Hamiltonian is time-independent (so far as I know), but our Hamiltonian is time-dependent.
We will, for now, take the standard mixer Hamiltonian
\begin{equation}
V(\beta) = \prod_j e^{ - i \beta X_j} ,
\end{equation}
i.e., a rotation of each qubit about the $x$-axis by $2 \beta$.
The ansatz wavefunction is thus
\begin{equation}
|\mathbf{\gamma}, \mathbf{\beta} \rangle := \left[ \prod_{i=1}^{p} U_{t_i} (\gamma_i) V(\beta_i) \right] H^{\otimes n} |0\rangle^{\otimes n}
\end{equation}
where $n := \log_2 N$.
# Code Implementation
We now turn to an implementation of this algorithm using Cirq.
## Example Problem
In the cells below we'll generate a linear system of equations.
```python
"""Seed the random number generator."""
SEED = 29876
np.random.seed(SEED)
```
```python
"""Generate a system of equations to solve."""
# Dimension of system. Assume square for now
# Note: Studying different systems is (unfortunately) not as
# simple as changing n, since we defined the driver Hamiltonian
# as a cirq.TwoQubitGate
# TODO: Can this be changed to cirq.Gate?
n = 2
N = 2**n
if DEBUG:
A = np.identity(4)
else:
# Generate a random matrix
A = np.random.rand(N, N)
# Make sure it's Hermitian
A = A + A.conj().T
# Normalize it WLOG. Note this defaults to the Frobenius norm for matrices.
A /= np.linalg.norm(A)
# Control the condition number
k = 100
A = np.diag([1, 1 / k, 1, 1 / k])
# Compute the condition number of the matrix to scale exponentials
if SCALE:
kappa = np.linalg.cond(A)
else:
kappa = 1.0
if DEBUG:
b = np.array([1.0, 0.0, 0.0, 0.0], dtype=float)
else:
# Generate the vector
b = np.random.rand(N)
# Normalize it WLOG. Note this defaults to the l2-norm for vectors.
b /= np.linalg.norm(b)
# Use the |+> state
b = 1/2 * np.ones(4)
# Do the solution classically
start = time()
x = np.linalg.solve(A, b)
total = time() - start
# Display the system, solution, and time to solve
print("Classically solving the system Ax = b...")
print("A =\n", A)
print("b =\n", b)
print()
print("Solution found in", total, "seconds.")
print("x =\n", x)
print("Ax =\n", np.dot(A, x))
```
Classically solving the system Ax = b...
A =
[[1. 0. 0. 0. ]
[0. 0.01 0. 0. ]
[0. 0. 1. 0. ]
[0. 0. 0. 0.01]]
b =
[0.5 0.5 0.5 0.5]
Solution found in 0.00021839141845703125 seconds.
x =
[ 0.5 50. 0.5 50. ]
Ax =
[0.5 0.5 0.5 0.5]
## Building up the ansatz
Now we'll build up the ansatz. We'll "cheat" a little by
1. Allowing for matrix gates (to avoid compiling).
1. Classically exponentiating the system (to avoid Trotterization).
We'll use the "half-way point" Hamiltonian. That is, setting $t = 0.5$ in the Hamiltonian $H(t)$.
```python
"""Get the matrix for the driver Hamiltonian."""
# Set the time for the Hamiltonian
t = 0.5
# Identity matrix, for convenience
iden = np.identity(N)
# Get the "b^\perp projector"
Pb = iden - np.outer(b, b)
# Get the parameterized linear system
At = (1.0 - t) * iden + t * A
# Get the Hamiltonian
Ht = At @ Pb @ At
# Make sure it's Hermitian
assert np.allclose(Ht, Ht.conj().T)
```
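As a quick numerical sanity check of the earlier claim that $|\mathbf{x}\rangle$ is the (unique) zero-energy ground state of $H_f$, we can verify it directly with the `A`, `Pb`, and classical solution `x` defined above (a check cell, not part of the algorithm itself):
```python
"""Sanity check: the normalized classical solution should have
(near-)zero energy under the final Hamiltonian Hf = A Pb A."""
Hf_check = A @ Pb @ A
xn = x / np.linalg.norm(x)  # normalized classical solution
print("Energy of |x>:", xn.conj().T @ Hf_check @ xn)                  # ~0
print("Min eigenvalue of Hf:", np.min(np.linalg.eigvalsh(Hf_check)))  # ~0
```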
This gives us a matrix representation of the $U_t(\gamma)$ unitary -- except without the $\gamma$ parameter. We'll take care of this when we construct a matrix gate in Cirq by folding $\gamma$ into the classical exponentiation. We now construct a gate from this unitary that we can use in a circuit.
```python
"""Create a gate for the driver Hamiltonian (unitary)."""
class Driver(cirq.TwoQubitGate):
"""Gate for the U_t(\gamma) driver Hamiltonian (for a particular t)."""
def __init__(self, H, gamma):
self.gamma = gamma
self.matrix = expm(-1j * kappa * gamma * H)
assert np.allclose(self.matrix @ self.matrix.conj().T, iden)
def _unitary_(self):
return self.matrix
def _circuit_diagram_info_(self, args):
return "Driver^{}".format(self.gamma), "Driver^{}".format(self.gamma)
```
Now we'll make sure we can implement this in a quantum circuit.
```python
"""Function for building the ansatz circuit."""
def circuit(gamma, beta):
"""Returns the state |gamma, beta> defined above."""
# Get a qubit register
qreg = cirq.LineQubit.range(n)
# Get a quantum circuit
circ = cirq.Circuit()
# How about a round of Hadamards?
circ.append(cirq.H.on_each(*qreg))
# Do the driver Hamiltonian
U = Driver(Ht, gamma)
circ.append(U.on(*qreg))
# Do the mixer Hamiltonian
circ.append(cirq.Rx(kappa * beta).on_each(*qreg))
return circ
```
```python
"""Make sure the code above is working properly."""
# Display the circuit
circ = circuit(0, 0)
print("Circuit:")
print(circ)
# Make sure it can be simulated
sim = cirq.Simulator()
res = sim.simulate(circ)
# Display the final state
print("\nFinal state:")
print("|psi> = ", res.dirac_notation())
```
Circuit:
0: ───H───Driver^0───Rx(0.0π)───
│
1: ───H───Driver^0───Rx(0.0π)───
Final state:
|psi> = 0.5|00⟩ + 0.5|01⟩ + 0.5|10⟩ + 0.5|11⟩
## Computing the Expectation Value
To compute the expectation value (cost), we use the final Hamiltonian $H_f$:
\begin{equation}
C(\mathbf{\gamma}, \mathbf{\beta}) =
\langle \mathbf{\gamma}, \mathbf{\beta} | H_f | \mathbf{\gamma}, \mathbf{\beta} \rangle
\end{equation}
It is this quantity we minimize during the optimization procedure.
```python
"""Function for computing the cost."""
def cost(gamma, beta, simulator=cirq.Simulator()):
"""Returns the cost C(\gamma, \beta) defined above."""
# Build the circuit
circ = circuit(gamma, beta)
# Get the final state
    psi = simulator.simulate(circ).final_state
# Return the expectation <\gamma, \beta| Hf | \gamma, \beta>
Hf = A @ Pb @ A
return abs(np.dot(psi.conj().T, np.dot(Hf, psi)))
```
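A quick sanity check of the cost function: at $\gamma = \beta = 0$ the ansatz is simply $|+\rangle^{\otimes n}$, so the value below should match $\langle + |^{\otimes n} H_f |+\rangle^{\otimes n}$ (a check cell, assuming the definitions above):
```python
"""Evaluate the cost at gamma = beta = 0, where the ansatz is |+>^n."""
print("C(0, 0) =", cost(0.0, 0.0))
```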
## Minimizing the cost
We now minimize the cost. For simplicity, we use built-in optimizers from Scipy. Alternatively, a dynamic grid search or analytic-gradient methods could be used.
```python
"""Function that returns the objective and inputs an array (for Scipy)."""
# Functional form for scipy.optimize.minimize
def obj(x):
val = cost(x[0], x[1])
print("Current cost:", val, end="\r")
return val
```
```python
"""Do the minimization and time it."""
start = time()
out = minimize(obj, np.random.rand(2), method="Powell", tol=1e-10)
total = time() - start
```
Current cost: 0.009998400078796635
```python
print(out["x"])
```
[5.00683180e+00 9.00587972e-05]
## Doing a grid search over the landscape
Here we visualize the cost landscape by evaluating the cost on a grid.
```python
"""Do a grid search over all angles."""
# Define ranges of parameters to search over
gammas = np.linspace(-np.pi, np.pi, 100)
betas = np.linspace(-np.pi, np.pi, 100)
# Initialize an array to store the costs at each point
costs = np.zeros((100, 100))
# Evaluate the cost at each point and store it
for ii, gamma in enumerate(gammas):
for jj, beta in enumerate(betas):
costs[ii, jj] = obj([gamma, beta])
```
Current cost: 0.11327110891036668
```python
"""Remind us of the system and condition number."""
print("A = ", A)
print("b = ", b)
print("kappa = ", np.linalg.cond(A))
```
A = [[1. 0. 0. 0. ]
[0. 0.01 0. 0. ]
[0. 0. 1. 0. ]
[0. 0. 0. 0.01]]
b = [0.5 0.5 0.5 0.5]
kappa = 100.0
We can now make a 2D plot of the landscape.
```python
"""Plot the cost landscape."""
%matplotlib inline
plt.imshow(costs, origin="lower")
plt.colorbar()
if SCALE:
plt.title("Scaled First Order QAOA Cost Landscape")
else:
plt.title("Unscaled First Order QAOA Cost Landscape")
#plt.savefig("kapp100-scaled", format="pdf")
```
```python
"""See the quantum solution."""
# Get the wavefunction at the best angles
gamma_opt, beta_opt = out["x"]
circ = circuit(gamma_opt, beta_opt)
xquantum = sim.simulate(circ).final_state
# Display the system, solution, and time to solve
print("Quantumly solving the system Ax = b...")
print("A =\n", A)
print("b =\n", b)
print()
print("Quantum solution found in", total, "seconds.")
print("xquantum =\n", xquantum)
print("xclassical =\n", x)
print("Ax =\n", np.dot(A, xquantum))
print("Overlap with classical solution:",
abs(np.dot(xquantum, x))**2 / abs(np.dot(x, x)))
```
Quantumly solving the system Ax = b...
A =
[[1. 0. 0. 0. ]
[0. 0.01 0. 0. ]
[0. 0. 1. 0. ]
[0. 0. 0. 0.01]]
b =
[0.5 0.5 0.5 0.5]
Quantum solution found in 2.9893102645874023 seconds.
xquantum =
[0.10558552+1.44229925e-05j 0.69917923-6.18084669e-05j
0.10558552+1.44229925e-05j 0.69917923-6.18084669e-05j]
xclassical =
[ 0.5 50. 0.5 50. ]
Ax =
[0.10558552+1.44229925e-05j 0.00699179-6.18084669e-07j
0.10558552+1.44229925e-05j 0.00699179-6.18084669e-07j]
Overlap with classical solution: 0.9805603065821096
## Higher order QAOA
We now go beyond one layer $p > 1$ in the QAOA ansatz. This is just a slight modification of the code above.
```python
"""Circuit for higher order QAOA."""
def qaoa(gammas, betas):
"""Returns the state |gamma, beta> defined above."""
# Make sure we have the correct number of parameters
if len(gammas) != len(betas):
raise ValueError("gammas and betas must have the same length.")
# Get a qubit register
qreg = cirq.LineQubit.range(n)
# Get a quantum circuit
circ = cirq.Circuit()
# How about a round of Hadamards?
circ.append(cirq.H.on_each(*qreg))
# Do each layer of QAOA
for gamma, beta in zip(gammas, betas):
# Do the driver Hamiltonian
U = Driver(Ht, gamma)
circ.append(U.on(*qreg))
# Do the mixer Hamiltonian
circ.append(cirq.Rx(beta).on_each(*qreg))
return circ
```
Let's test this to make sure it works properly.
```python
"""Testing the higher order QAOA circuit."""
gammas = [0, 0, 0]
betas = [0.0, 0.0, 0.0]
circ = qaoa(gammas, betas)
print("Circuit:")
print(circ)
# Make sure it can be simulated
sim = cirq.Simulator()
res = sim.simulate(circ)
# Display the final state
print("\nFinal state:")
print("|psi> = ", *res.final_state)
```
Circuit:
0: ───H───Driver^0───Rx(0.0π)───Driver^0───Rx(0.0π)───Driver^0───Rx(0.0π)───
│ │ │
1: ───H───Driver^0───Rx(0.0π)───Driver^0───Rx(0.0π)───Driver^0───Rx(0.0π)───
Final state:
|psi> = (0.49999997+0j) (0.49999997+0j) (0.49999997+0j) (0.49999997+0j)
Now we modify the cost function.
```python
"""Defining the cost function for higher order QAOA."""
def expectation(gammas, betas, simulator=cirq.Simulator()):
"""Returns the cost C(\gamma, \beta) defined above."""
# Build the circuit
circ = qaoa(gammas, betas)
# Get the final state
    psi = simulator.simulate(circ).final_state
# Return the expectation <\gamma, \beta| Hf |\gamma, \beta>
Hf = A.conj().T @ Pb @ A
return abs(np.dot(psi.conj().T, np.dot(Hf, psi)))
```
And now test it.
```python
"""Testing the cost function for higher order QAOA."""
print(expectation([1, 2], [3, 4]))
```
0.1416514988247451
And finally minimization for higher order QAOA.
```python
"""Defining the objective function for optimization with multiple layers."""
def objective(x):
"""Returns the objective function for higher order QAOA."""
# Make sure the number of parameters is valid
if len(x) % 2 != 0:
raise ValueError("Invalid number of parameters. Must be even.")
# Parse the parameters (arbitrary convention)
gammas = x[:len(x) // 2]
betas = x[len(x) // 2:]
# Compute the expectation for these parameters
cval = expectation(gammas, betas)
# Print it out
print("Current cost:", cval, end="\r")
return cval
```
## Optimizing higher order QAOA
Now we look at performing the optimization.
```python
"""Set the number of layers p for QAOA."""
p = 2
```
```python
"""Optimizing higher order QAOA."""
# Do the minimization
start = time()
out = minimize(objective, np.random.rand(2 * p), method="Powell")
total = time() - start
```
Current cost: 1.7099391798091803e-16
```python
"""See the quantum solution."""
# Get the wavefunction at the best angles
opt_params = out["x"]
gammas = opt_params[:len(opt_params) // 2]
betas = opt_params[len(opt_params) // 2:]
circ = qaoa(gammas, betas)
xquantum = sim.simulate(circ).final_state
# Display the system, solution, and time to solve
print("Quantumly solving the system Ax = b...")
print("A =\n", A)
print("b =\n", b)
print()
print("Solution found in", total, "seconds.")
print("xquantum =\n", xquantum)
print("xclassical =\n", x / np.linalg.norm(x))
print("Ax =\n", np.dot(A, xquantum))
print("Overlap with classical solution:",
abs(np.dot(xquantum, x))**2 / abs(np.dot(x, x)))
```
Quantumly solving the system Ax = b...
A =
[[1. 0. 0. 0. ]
[0. 0.01 0. 0. ]
[0. 0. 1. 0. ]
[0. 0. 0. 0.01]]
b =
[0.5 0.5 0.5 0.5]
Solution found in 1.31510591506958 seconds.
xquantum =
[0.00473688+0.00524945j 0.4736894 +0.5249459j 0.00473689+0.00524945j
0.47368944+0.524946j ]
xclassical =
[0.00707071 0.70707143 0.00707071 0.70707143]
Ax =
[0.00473688+0.00524945j 0.00473689+0.00524946j 0.00473689+0.00524945j
0.00473689+0.00524946j]
Overlap with classical solution: 0.9999998160642839
# References
[1] Edward Farhi, Jeffrey Goldstone, Sam Gutmann, A quantum approximate optimization algorithm. https://arxiv.org/abs/1411.4028
[2] Yigit Subasi, Rolando D. Somma, Davide Orsucci, Quantum algorithms for systems of linear equations inspired by adiabatic quantum computing. https://arxiv.org/abs/1805.10549.
[3] A. W. Harrow, A. Hassidim, and S. Lloyd, “Quantum algorithm for solving linear systems of equations,” Physical Review Letters, vol. 103, no. 15, Oct. 2009.
[4] L. Wossnig, Z. Zhao, and A. Prakash, “A quantum linear system algorithm for dense matrices,” Phys. Rev. Lett., vol. 120, no. 5, p. 050502, Jan. 2018.
[5] D. Dervovic, M. Herbster, P. Mountney, S. Severini, N. Usher, and L. Wossnig, “Quantum linear systems algorithms: a primer,” arXiv:1802.08227 [quant-ph], Feb. 2018.
[6] NISQLSP I.
[7] NISQLSP II.
```python
%matplotlib inline
```
# Scaling the regularization parameter for SVCs
The following example illustrates the effect of scaling the
regularization parameter when using support vector machines (SVMs) for
classification.
For SVC classification, we are interested in a risk minimization for the
equation:
\begin{align}C \sum_{i=1, n} \mathcal{L} (f(x_i), y_i) + \Omega (w)\end{align}
where
- $C$ is used to set the amount of regularization
- $\mathcal{L}$ is a `loss` function of our samples
and our model parameters.
- $\Omega$ is a `penalty` function of our model parameters
If we consider the loss function to be the individual error per
sample, then the data-fit term, or the sum of the error for each sample, will
increase as we add more samples. The penalization term, however, will not
increase.
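A tiny illustration of this point (a sketch on random data with a fixed weight vector `w`, using a squared-hinge per-sample loss; the names here are made up for the demonstration):
```python
import numpy as np

# The summed data-fit term grows with the number of samples,
# while the penalty ||w||^2 for a fixed w does not.
rng = np.random.RandomState(0)
w = rng.randn(10)
for n in (100, 1000, 10000):
    X = rng.randn(n, 10)
    y = np.sign(rng.rand(n) - 0.5)
    margins = y * (X @ w)
    loss_sum = np.sum(np.maximum(0.0, 1.0 - margins) ** 2)  # summed squared hinge
    print(n, "samples: summed loss =", round(loss_sum, 1),
          "penalty =", round(w @ w, 3))
```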
When using, for example, cross validation to set the amount of
regularization with `C`, there will be a different number of samples
between the main problem and the smaller problems within the folds of
the cross validation.
Since our loss function is dependent on the number of samples, the latter
will influence the selected value of `C`.
The question that arises is: how do we optimally adjust `C` to
account for the different number of training samples?
The figures below are used to illustrate the effect of scaling our
`C` to compensate for the change in the number of samples, in the
case of using an `l1` penalty, as well as the `l2` penalty.
l1-penalty case
-----------------
In the `l1` case, theory says that prediction consistency
(i.e. that under given hypothesis, the estimator
learned predicts as well as a model knowing the true distribution)
is not possible because of the bias of the `l1`. It does say, however,
that model consistency, in terms of finding the right set of non-zero
parameters as well as their signs, can be achieved by scaling
`C` with the number of samples.
l2-penalty case
-----------------
The theory says that in order to achieve prediction consistency, the
penalty parameter should be kept constant
as the number of samples grows.
Simulations
------------
The two figures below plot the values of `C` on the `x-axis` and the
corresponding cross-validation scores on the `y-axis`, for several different
fractions of a generated data-set.
In the `l1` penalty case, the cross-validation-error correlates best with
the test-error, when scaling our `C` with the number of samples, `n`,
which can be seen in the first figure.
For the `l2` penalty case, the best result comes from the case where `C`
is not scaled.
**Note:** Two separate datasets are used for the two different plots. The reason
behind this is that the `l1` case works better on sparse data, while `l2`
is better suited to the non-sparse case.
--
**NOTE:** This is sourced from ```scikit-learn``` learning module found here:
https://scikit-learn.org/stable/auto_examples/svm/plot_svm_scale_c.html#sphx-glr-auto-examples-svm-plot-svm-scale-c-py
--
```python
print(__doc__)
# Author: Andreas Mueller <amueller@ais.uni-bonn.de>
# Jaques Grobler <jaques.grobler@inria.fr>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.svm import LinearSVC
from sklearn.model_selection import ShuffleSplit
from sklearn.model_selection import GridSearchCV
from sklearn.utils import check_random_state
from sklearn import datasets
rnd = check_random_state(1)
# set up dataset
n_samples = 100
n_features = 300
# l1 data (only 5 informative features)
X_1, y_1 = datasets.make_classification(n_samples=n_samples,
n_features=n_features, n_informative=5,
random_state=1)
# l2 data: non sparse, but less features
y_2 = np.sign(.5 - rnd.rand(n_samples))
X_2 = rnd.randn(n_samples, n_features // 5) + y_2[:, np.newaxis]
X_2 += 5 * rnd.randn(n_samples, n_features // 5)
clf_sets = [(LinearSVC(penalty='l1', loss='squared_hinge', dual=False,
tol=1e-3),
np.logspace(-2.3, -1.3, 10), X_1, y_1),
(LinearSVC(penalty='l2', loss='squared_hinge', dual=True,
tol=1e-4),
np.logspace(-4.5, -2, 10), X_2, y_2)]
colors = ['navy', 'cyan', 'darkorange']
lw = 2
for clf, cs, X, y in clf_sets:
# set up the plot for each regressor
fig, axes = plt.subplots(nrows=2, sharey=True, figsize=(9, 10))
for k, train_size in enumerate(np.linspace(0.3, 0.7, 3)[::-1]):
param_grid = dict(C=cs)
# To get nice curve, we need a large number of iterations to
# reduce the variance
grid = GridSearchCV(clf, refit=False, param_grid=param_grid,
cv=ShuffleSplit(train_size=train_size,
test_size=.3,
n_splits=250, random_state=1))
grid.fit(X, y)
scores = grid.cv_results_['mean_test_score']
scales = [(1, 'No scaling'),
((n_samples * train_size), '1/n_samples'),
]
for ax, (scaler, name) in zip(axes, scales):
ax.set_xlabel('C')
ax.set_ylabel('CV Score')
grid_cs = cs * float(scaler) # scale the C's
ax.semilogx(grid_cs, scores, label="fraction %.2f" %
train_size, color=colors[k], lw=lw)
ax.set_title('scaling=%s, penalty=%s, loss=%s' %
(name, clf.penalty, clf.loss))
plt.legend(loc="best")
plt.show()
```
```python
```