Dataset columns:

| Column | Type / range |
|---|---|
| text | string, length 87–777k |
| meta.hexsha | string, length 40 |
| meta.size | int64, 682–1.05M |
| meta.ext | string, 1 class |
| meta.lang | string, 1 class |
| meta.max_stars_repo_path | string, length 8–226 |
| meta.max_stars_repo_name | string, length 8–109 |
| meta.max_stars_repo_head_hexsha | string, length 40 |
| meta.max_stars_repo_licenses | list, length 1–5 |
| meta.max_stars_count | int64, 1–23.9k (nullable) |
| meta.max_stars_repo_stars_event_min_datetime | string, length 24 (nullable) |
| meta.max_stars_repo_stars_event_max_datetime | string, length 24 (nullable) |
| meta.max_issues_repo_path | string, length 8–226 |
| meta.max_issues_repo_name | string, length 8–109 |
| meta.max_issues_repo_head_hexsha | string, length 40 |
| meta.max_issues_repo_licenses | list, length 1–5 |
| meta.max_issues_count | int64, 1–15.1k (nullable) |
| meta.max_issues_repo_issues_event_min_datetime | string, length 24 (nullable) |
| meta.max_issues_repo_issues_event_max_datetime | string, length 24 (nullable) |
| meta.max_forks_repo_path | string, length 8–226 |
| meta.max_forks_repo_name | string, length 8–109 |
| meta.max_forks_repo_head_hexsha | string, length 40 |
| meta.max_forks_repo_licenses | list, length 1–5 |
| meta.max_forks_count | int64, 1–6.05k (nullable) |
| meta.max_forks_repo_forks_event_min_datetime | string, length 24 (nullable) |
| meta.max_forks_repo_forks_event_max_datetime | string, length 24 (nullable) |
| meta.avg_line_length | float64, 15.5–967k |
| meta.max_line_length | int64, 42–993k |
| meta.alphanum_fraction | float64, 0.08–0.97 |
| meta.converted | bool, 1 class |
| meta.num_tokens | int64, 33–431k |
| meta.lm_name | string, 1 class |
| meta.lm_label | string, 3 classes |
| meta.lm_q1_score | float64, 0.56–0.98 |
| meta.lm_q2_score | float64, 0.55–0.97 |
| meta.lm_q1q2_score | float64, 0.5–0.93 |
| text_lang | string, 53 classes |
| text_lang_conf | float64, 0.03–1 |
| label | float64, 0–1 |
# HEWL S-SAD Merging Statistics
Merging statistics are a useful means to assess data quality in crystallography. However, each statistic has inherent shortcomings. For example, R-merge will appear inflated if the multiplicity is high, and the Pearson correlation coefficients used for $CC_{1/2}$ are very sensitive to outliers.
Most scaling and merging programs output multiple merging statistics to get around these shortcomings. However, one can imagine that it could also be useful to customize certain parameters, such as how many resolution bins are used. Or, perhaps a better statistic will be developed that is worth implementing.
In this notebook, ``reciprocalspaceship`` is used to compute half-dataset correlation coefficients ($CC_{1/2}$ and $CC_{anom}$) for a dataset collected from a tetragonal hen egg-white lysozyme (HEWL) crystal at 6550 eV.
These data are unmerged, but were scaled in AIMLESS.
They contain sufficient sulfur anomalous signal to determine a solution by the SAD method.
This notebook illustrates the use of ``rs`` to implement a merging routine and a custom analysis, and it can serve as a template for other exploratory analyses of crystallographic data.
As an example, we will compare half-dataset correlations computed using both Pearson and Spearman correlation coefficients.
```python
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context("notebook", font_scale=1.3)
import numpy as np
```
```python
import reciprocalspaceship as rs
```
```python
print(rs.__version__)
```
0.9.9
---
### Load scaled, unmerged data
These data were scaled in AIMLESS and include the image number (**BATCH**) along with the scaled **I** and **SIGI** values.
```python
hewl = rs.read_mtz("data/HEWL_unmerged.mtz")
```
```python
hewl.head()
```
| H | K | L | BATCH | I | SIGI | PARTIAL |
|---:|---:|---:|---:|---:|---:|:---|
| 0 | 0 | 4 | 137 | 696.5212 | 87.83294 | False |
| 0 | 0 | 4 | 520 | 710.6812 | 88.107025 | False |
| 0 | 0 | 4 | 856 | 672.05634 | 87.75671 | False |
| 0 | 0 | 4 | 1239 | 642.47485 | 87.90302 | False |
| 0 | 0 | 4 | 2160 | 655.71783 | 87.74394 | False |
```python
print(f"Number of observed reflections: {len(hewl)}")
```
Number of observed reflections: 816804
---
### Merging with Inverse-Variance Weights
Since the input data are unmerged, we will implement the inverse-variance weighting scheme used by AIMLESS to merge the observations.
The weighted average is a better estimator of the true mean than the raw average, and this weighting scheme corresponds to the maximum likelihood estimator of the true mean if we assume that the observations are normally-distributed about the true mean.
The merged intensity for each reflection, $I_h$, can be determined from the observed intensities, $I_{h,i}$, and error estimates, $\sigma_{h,i}$, as follows:
\begin{equation}
I_h = \frac{\sum_{i}w_{h,i} I_{h,i}}{\sum_{i} w_{h,i}}
\end{equation}
where the weight for each observation, $w_{h,i}$ is given by:
\begin{equation}
w_{h,i} = \frac{1}{(\sigma_{h,i})^2}
\end{equation}
The updated estimate of the uncertainty, $\sigma_{h}$, is given by:
\begin{equation}
\sigma_{h} = \sqrt{\frac{1}{\sum_{i} w_{h,i}}}
\end{equation}
Let's start by implementing the above equations in a function that will compute the merged $I_h$ and $\sigma_h$. We will use the [Pandas groupby](https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html) methods to apply this function across the unique Miller indices in the ``DataSet``. If we group Friedel pairs together, we will refer to the quantity as **IMEAN**, and if we keep the Friedel pairs separate we will refer to the quantities as **I(+)** and **I(-)**.
```python
def merge(dataset, anomalous=False):
"""
Merge dataset using inverse-variance weights.
Parameters
----------
dataset : rs.DataSet
DataSet to be merged containing scaled I and SIGI
anomalous : bool
If True, I(+) and I(-) will be reported. If False,
IMEAN will be reported
Returns
-------
rs.DataSet
Merged DataSet object
"""
ds = dataset.hkl_to_asu(anomalous=anomalous)
ds["w"] = ds['SIGI']**-2
ds["wI"] = ds["I"] * ds["w"]
g = ds.groupby(["H", "K", "L"])
result = g[["w", "wI"]].sum()
result["I"] = result["wI"] / result["w"]
result["SIGI"] = np.sqrt(1 / result["w"])
result = result.loc[:, ["I", "SIGI"]]
result.merged = True
if anomalous:
result = result.unstack_anomalous()
return result
```
Using `anomalous=False`, this function can be used to compute **IMEAN** and **SIGIMEAN** by including both Friedel pairs:
```python
result1 = merge(hewl, anomalous=False)
```
```python
result1.sample(5)
```
| H | K | L | I | SIGI |
|---:|---:|---:|---:|---:|
| 23 | 22 | 9 | 1349.8007 | 22.878157 |
| 37 | 19 | 0 | 260.0996 | 9.075029 |
| 17 | 11 | 15 | 146.02324 | 2.895325 |
| 35 | 1 | 14 | 9.629998 | 3.902024 |
| 15 | 5 | 18 | 7.837115 | 1.039951 |
Using `anomalous=True`, this function can be used to compute **I(+)**, **SIGI(+)**, **I(-)**, and **SIGI(-)** by separating Friedel pairs:
```python
result2 = merge(hewl, anomalous=True)
```
```python
result2.sample(5)
```
| H | K | L | I(+) | SIGI(+) | SIGI(-) | I(-) |
|---:|---:|---:|---:|---:|---:|---:|
| 20 | 4 | 7 | 790.00696 | 13.43943 | 13.45224 | 819.43054 |
| 21 | 15 | 16 | 51.543583 | 2.840802 | 2.675156 | 50.629883 |
| 35 | 8 | 12 | 56.64869 | 2.762924 | 2.779431 | 52.87201 |
| 16 | 16 | 15 | 517.4729 | 11.513449 | 11.513449 | 517.4729 |
| 33 | 23 | 3 | 101.954094 | 3.753756 | 3.605523 | 89.131714 |
A variant of the above function is implemented in `rs.algorithms`, and we will use that implementation in the next section for computing merging statistics. This function computes **IMEAN**, **I(+)**, **I(-)**, and associated uncertainties for each unique Miller index.
```python
result3 = rs.algorithms.merge(hewl)
```
```python
result3.sample(5)
```
| H | K | L | IMEAN | SIGIMEAN | I(+) | SIGI(+) | I(-) | SIGI(-) | N(+) | N(-) |
|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| 38 | 10 | 4 | 454.0382 | 11.617793 | 416.8525 | 21.370535 | 469.6386 | 13.841894 | 8 | 20 |
| 17 | 9 | 15 | 101.308815 | 2.1820927 | 100.45327 | 3.0412343 | 102.21658 | 3.1326878 | 32 | 31 |
| 38 | 7 | 8 | 4.571434 | 0.9255639 | 4.8481956 | 1.2750458 | 4.2631493 | 1.3457003 | 20 | 20 |
| 11 | 5 | 5 | 3991.629 | 47.690647 | 4061.3477 | 66.316765 | 3916.9563 | 68.63234 | 60 | 56 |
| 27 | 18 | 11 | 14.8991 | 1.0327507 | 10.999734 | 1.4392322 | 19.038132 | 1.4828024 | 20 | 20 |
---
### Merging with 2-fold Cross-Validation
To compute correlation coefficients we will repeatedly split our data into half-datasets. We will do this by randomly splitting the data using the image number. These half-datasets will be merged independently and used to determine uncertainties in the correlation coefficients. We will first write a method to randomly split our data, and then we will write a method that automates the sampling and merging of multiple half-datasets.
```python
def sample_halfdatasets(data):
"""Randomly split DataSet into two equal halves by BATCH"""
batch = data.BATCH.unique().to_numpy(dtype=int)
np.random.shuffle(batch)
halfbatch1, halfbatch2 = np.array_split(batch, 2)
half1 = data.loc[data.BATCH.isin(halfbatch1)]
half2 = data.loc[data.BATCH.isin(halfbatch2)]
return half1, half2
```
```python
def merge_dataset(dataset, nsamples):
"""
Merge DataSet using inverse-variance weighting scheme. This represents the
maximum-likelihood estimator of the mean of the observed intensities assuming
they are independent and normally distributed with the same mean.
Sample means across half-datasets can be used to compute the merging statistics CC1/2 and CCanom.
"""
dataset = dataset.copy()
samples = []
for n in range(nsamples):
half1, half2 = sample_halfdatasets(dataset)
mergedhalf1 = rs.algorithms.merge(half1)
mergedhalf2 = rs.algorithms.merge(half2)
result = mergedhalf1.merge(mergedhalf2, left_index=True, right_index=True, suffixes=(1, 2))
result["sample"] = n
samples.append(result)
return rs.concat(samples).sort_index()
```
---
### Merge HEWL data
We will now merge the HEWL data, repeatedly sampling across half-datasets in order to assess the distribution of correlation coefficients.
```python
# This cell takes a few minutes with nsamples=15
merged = merge_dataset(hewl, 15)
```
```python
merged
```
| H | K | L | IMEAN1 | SIGIMEAN1 | I(+)1 | SIGI(+)1 | I(-)1 | SIGI(-)1 | N(+)1 | N(-)1 | IMEAN2 | SIGIMEAN2 | I(+)2 | SIGI(+)2 | I(-)2 | SIGI(-)2 | N(+)2 | N(-)2 | sample |
|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| 0 | 0 | 4 | 657.99817 | 43.882294 | 657.99817 | 43.882294 | 657.99817 | 43.882294 | 4 | 4 | 662.40204 | 25.35386 | 662.40204 | 25.353859 | 662.40204 | 25.353859 | 12 | 12 | 0 |
| 0 | 0 | 4 | 645.79663 | 43.884262 | 645.79663 | 43.884266 | 645.79663 | 43.884266 | 4 | 4 | 666.4745 | 25.35348 | 666.4745 | 25.35348 | 666.4745 | 25.35348 | 12 | 12 | 1 |
| 0 | 0 | 4 | 662.0198 | 25.348951 | 662.0198 | 25.348951 | 662.0198 | 25.348951 | 12 | 12 | 659.13995 | 43.907776 | 659.13995 | 43.907776 | 659.13995 | 43.907776 | 4 | 4 | 2 |
| 0 | 0 | 4 | 660.4694 | 33.18889 | 660.4694 | 33.18889 | 660.4694 | 33.18889 | 7 | 7 | 661.946 | 29.271538 | 661.946 | 29.271538 | 661.946 | 29.271538 | 9 | 9 | 3 |
| 0 | 0 | 4 | 666.3023 | 24.359608 | 666.3023 | 24.35961 | 666.3023 | 24.35961 | 13 | 13 | 639.66833 | 50.654987 | 639.66833 | 50.654987 | 639.66833 | 50.654987 | 3 | 3 | 4 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 45 | 10 | 2 | 30.170218 | 7.3511124 | 30.170218 | 7.3511124 | NaN | NaN | 1 | 0 | 15.318142 | 3.9687188 | 15.318142 | 3.9687188 | NaN | NaN | 3 | 0 | 9 |
| 45 | 10 | 2 | 7.669444 | 6.6297607 | 7.669444 | 6.6297607 | NaN | NaN | 1 | 0 | 22.89468 | 4.1084785 | 22.89468 | 4.1084785 | NaN | NaN | 3 | 0 | 10 |
| 45 | 10 | 2 | 11.665072 | 5.2794247 | 11.665072 | 5.2794247 | NaN | NaN | 2 | 0 | 24.119883 | 4.656634 | 24.119883 | 4.656634 | NaN | NaN | 2 | 0 | 11 |
| 45 | 10 | 2 | 7.669444 | 6.6297607 | 7.669444 | 6.6297607 | NaN | NaN | 1 | 0 | 22.89468 | 4.1084785 | 22.89468 | 4.1084785 | NaN | NaN | 3 | 0 | 12 |
| 45 | 10 | 2 | 17.961912 | 4.288131 | 17.961912 | 4.288131 | NaN | NaN | 3 | 0 | 20.064919 | 6.0180674 | 20.064919 | 6.0180674 | NaN | NaN | 1 | 0 | 13 |

187364 rows × 17 columns
---
### Compute $CC_{1/2}$ and $CC_{anom}$
We will first assign each reflection to a resolution bin, and then we will compute the correlation coefficients.
```python
merged, labels = merged.assign_resolution_bins(bins=15)
```
```python
groupby1 = merged.groupby(["sample", "bin"])[["IMEAN1", "IMEAN2"]]
pearson1 = groupby1.corr(method="pearson").unstack().loc[:, ("IMEAN1", "IMEAN2")]
pearson1.name = "Pearson"
spearman1 = groupby1.corr(method="spearman").unstack().loc[:, ("IMEAN1", "IMEAN2")]
spearman1.name = "Spearman"
results1 = rs.concat([pearson1, spearman1], axis=1)
results1 = results1.groupby("bin").agg(["mean", "std"])
```
```python
results1.head()
```
| bin | Pearson mean | Pearson std | Spearman mean | Spearman std |
|---:|---:|---:|---:|---:|
| 0 | 0.997334 | 0.000316 | 0.998895 | 0.000063 |
| 1 | 0.997958 | 0.000360 | 0.999363 | 0.000066 |
| 2 | 0.998971 | 0.000306 | 0.999678 | 0.000023 |
| 3 | 0.999297 | 0.000102 | 0.999658 | 0.000024 |
| 4 | 0.999368 | 0.000147 | 0.999687 | 0.000022 |
```python
plt.figure(figsize=(8, 4))
plt.errorbar(results1.index, results1[("Pearson", "mean")],
yerr=results1[("Pearson", "std")],
color="#1b9e77",
label=r"$CC_{1/2}$ (Pearson)")
plt.errorbar(results1.index, results1[("Spearman", "mean")],
yerr=results1[("Spearman", "std")],
color="#d95f02",
label=r"$CC_{1/2}$ (Spearman)")
plt.xticks(results1.index, labels, rotation=45, ha='right', rotation_mode='anchor')
plt.ylabel("Correlation Coefficient")
plt.xlabel(r"Resolution Bin ($\AA$)")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.grid(axis="y", linestyle='--')
plt.show()
```
It is important to note the scale on the y-axis -- this dataset is edge-limited, and as such the $CC_{1/2}$ is very high across all resolution bins. The Spearman CC appears higher across all resolution bins except at high resolution, and overall has a lower standard deviation among samples.
This is consistent with our expectation that Spearman CCs are a more robust estimator of correlation than Pearson CCs.
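To illustrate this point, here is a minimal sketch with simulated intensities (arbitrary values, not the HEWL data) showing how a single outlier affects the two estimators:
```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)

# two simulated "half-dataset" intensity vectors that agree up to noise
I1 = rng.exponential(scale=100, size=500)
I2 = I1 + rng.normal(scale=5, size=500)
print(pearsonr(I1, I2)[0], spearmanr(I1, I2)[0])  # both very close to 1

# corrupt a single observation with a large outlier
I2[0] += 1e5
print(pearsonr(I1, I2)[0], spearmanr(I1, I2)[0])  # Pearson drops sharply, Spearman barely moves
```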
Let's now repeat this for the anomalous data, computing $CC_{anom}$:
```python
merged["ANOM1"] = merged["I(+)1"] - merged["I(-)1"]
merged["ANOM2"] = merged["I(+)2"] - merged["I(-)2"]
# Similar to CChalf, but we will only look at acentric reflections
groupby2 = merged.acentrics.groupby(["sample", "bin"])[["ANOM1", "ANOM2"]]
pearson2 = groupby2.corr(method="pearson").unstack().loc[:, ("ANOM1", "ANOM2")]
pearson2.name = "Pearson"
spearman2 = groupby2.corr(method="spearman").unstack().loc[:, ("ANOM1", "ANOM2")]
spearman2.name = "Spearman"
results2 = rs.concat([pearson2, spearman2], axis=1)
results2 = results2.groupby("bin").agg(["mean", "std"])
```
```python
results2.head()
```
| bin | Pearson mean | Pearson std | Spearman mean | Spearman std |
|---:|---:|---:|---:|---:|
| 0 | 0.422196 | 0.037998 | 0.497923 | 0.029704 |
| 1 | 0.233349 | 0.061975 | 0.328263 | 0.023917 |
| 2 | 0.357182 | 0.051746 | 0.444657 | 0.029250 |
| 3 | 0.484710 | 0.045454 | 0.543314 | 0.031731 |
| 4 | 0.579058 | 0.023952 | 0.611739 | 0.020141 |
```python
plt.figure(figsize=(8, 4))
plt.errorbar(results2.index, results2[("Pearson", "mean")],
yerr=results2[("Pearson", "std")],
color="#1b9e77",
label=r"$CC_{anom}$ (Pearson)")
plt.errorbar(results2.index, results2[("Spearman", "mean")],
yerr=results2[("Spearman", "std")],
color="#d95f02",
label=r"$CC_{anom}$ (Spearman)")
plt.xticks(results2.index, labels, rotation=45, ha='right', rotation_mode='anchor')
plt.ylabel("Correlation Coefficient")
plt.xlabel(r"Resolution Bin ($\AA$)")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.grid(axis="y", linestyle='--')
plt.show()
```
There is significant anomalous signal across all but the highest resolution bins. The Spearman CCs have smaller error bars than the corresponding Pearson CCs and the Spearman CCs are also higher in most bins. These differences highlight the influence of outlier measurements on the different correlation coefficients.
---
### Summary
We have used `reciprocalspaceship` to merge a dataset using inverse-variance weights. As part of this analysis, we performed repeated 2-fold cross-validation to compute Pearson and Spearman correlation coefficients and associated uncertainties. This relatively simple procedure to obtain uncertainty estimates for correlation coefficients is seldom done when analyzing merging quality. However, we can see that the standard deviation for computed correlation coefficients can be nearly $\pm0.1$ for quantities such as $CC_{anom}$. This is worth keeping in mind when analyzing SAD experiments because this dataset is very high quality (edge-limited) when one considers the $CC_{1/2}$. Putting these two correlation coefficients on the same axes emphasizes this point:
```python
plt.figure(figsize=(9, 6))
plt.errorbar(results1.index, results1[("Pearson", "mean")],
yerr=results1[("Pearson", "std")],
color='#1b9e77',
label=r"$CC_{1/2}$ (Pearson)")
plt.errorbar(results1.index, results1[("Spearman", "mean")],
yerr=results1[("Spearman", "std")],
color='#d95f02',
label=r"$CC_{1/2}$ (Spearman)")
plt.errorbar(results2.index, results2[("Pearson", "mean")],
yerr=results2[("Pearson", "std")],
color='#1b9e77',
linestyle="--",
label=r"$CC_{anom}$ (Pearson)")
plt.errorbar(results2.index, results2[("Spearman", "mean")],
yerr=results2[("Spearman", "std")],
color='#d95f02',
linestyle="--",
label=r"$CC_{anom}$ (Spearman)")
plt.xticks(results1.index, labels, rotation=45, ha='right', rotation_mode='anchor')
plt.ylabel(r"Correlation Coefficient")
plt.xlabel(r"Resolution Bin ($\AA$)")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.grid(axis="y", linestyle='--')
plt.tight_layout()
```
Even a simple change to the procedure for computing merging statistics, such as substituting a Spearman correlation coefficient for a Pearson one, can alter the apparent quality of a dataset. By lowering the barrier to implementing new analyses, we hope that `reciprocalspaceship` can encourage the development of more robust indicators of crystallographic data quality.

---

**Record metadata**

- hexsha: `a7ff14f3914eb539613cfe1ba2d4530884b070f7`; size: 220,557; ext: ipynb; lang: Jupyter Notebook
- max_stars: `docs/examples/2_mergingstats.ipynb` in kmdalton/reciprocalspaceship (`50655f077cb670ee86e88480f54621780c8e9f0d`), licenses ["MIT"], count 22, events 2020-07-10T18:13:10.000Z to 2022-03-04T16:51:00.000Z
- max_issues: `docs/examples/2_mergingstats.ipynb` in kmdalton/reciprocalspaceship (`50655f077cb670ee86e88480f54621780c8e9f0d`), licenses ["MIT"], count 109, events 2020-07-03T10:07:18.000Z to 2022-03-28T20:49:48.000Z
- max_forks: `docs/examples/2_mergingstats.ipynb` in JBGreisman/reciprocalspaceship (`cf936cca64c5c387ace505416a047318efa9375f`), licenses ["MIT"], count 10, events 2020-07-03T10:51:21.000Z to 2021-08-23T19:05:24.000Z
- avg_line_length 142.294839; max_line_length 61,692; alphanum_fraction 0.836405; converted true; num_tokens 9,291
- lm_name Qwen/Qwen-72B; lm_label "1. YES 2. YES"; lm_q1_score 0.903294; lm_q2_score 0.867036; lm_q1q2_score 0.783188
- text_lang __label__eng_Latn (conf 0.635856); label 0.657941

---
## System dynamics
The CartPole environment consists of a cart on a track with a pole hinged to it.
### States
$$X = \left[\matrix{x \\ \dot{x} \\ \theta \\ \dot{\theta}}\right]$$
### Parameters:
$l = 0.5 m$: half length of the pole (homogeneous pole)
$M = 1 kg$: mass of cart
$m = 0.1 kg$: mass of pole
$\tau = 0.02 s$: discrete time period
$F_u (N)$: exerted force on the cart
### Derivative equations based on Newton's and Euler's equations
$$(M+m)\ddot{x}= F_u + ml (\dot{\theta})^2sin\theta-ml\ddot{\theta}cos\theta$$
$$\frac{4}{3} ml^2\ddot{\theta}=mglsin\theta-mlcos\theta \ddot{x}$$
### How simulation works
- Calculate the accelerations based on state $X_k$
$$\gamma_k = \frac{(F_{u_k} + ml{\dot{\theta_k}}^2sin\theta_k)}{M+m}$$
$$\ddot{\theta_k}=\frac{g\sin\theta_k-\gamma_k \cos\theta_k}{\frac{4}{3}l-\frac{ml\cos^2\theta_k}{M+m}}$$
$$\ddot{x_k}=\gamma_k-\frac{ml\ddot{\theta_k}cos\theta_k}{M+m}$$
- In very short discrete time interval, calculate $x_{k+1}$
$$x_{k+1} = x_k + \tau\dot{x_k}$$
$$\dot{x_{k+1}}=\dot{x_k}+\tau\ddot{x_k}$$
$$\theta_{k+1} = \theta_k + \tau\dot{\theta_k}$$
$$\dot{\theta}_{k+1}=\dot{\theta_k}+\tau\ddot{\theta_k}$$
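A minimal sketch of a single simulation step, written directly from the equations above (the function name and defaults are illustrative, not the `CartPoleEnv` implementation):
```python
import numpy as np

def cartpole_step(x, x_dot, theta, theta_dot, F_u,
                  M=1.0, m=0.1, l=0.5, g=9.81, tau=0.02):
    """Advance the cart-pole state by one discrete step of length tau."""
    gamma = (F_u + m * l * theta_dot**2 * np.sin(theta)) / (M + m)
    theta_acc = (g * np.sin(theta) - gamma * np.cos(theta)) / (
        4.0 / 3.0 * l - m * l * np.cos(theta)**2 / (M + m))
    x_acc = gamma - m * l * theta_acc * np.cos(theta) / (M + m)
    # forward-Euler update over the short interval tau
    return (x + tau * x_dot,
            x_dot + tau * x_acc,
            theta + tau * theta_dot,
            theta_dot + tau * theta_acc)

print(cartpole_step(0.0, 0.0, 0.05, 0.0, F_u=0.0))
```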
### Linearized state space function
- Assumption
$$sin\theta\approx\theta$$
$$cos\theta\approx 1$$
$$(\dot{\theta})^2 \approx 0$$
- Take assumption into derivative equations
$$\beta = \frac{4}{3}-\frac{m}{M+m}$$
$$\ddot{x}_k=-\frac{mg}{(M+m)\beta}\theta+\frac{F_u(\beta(M+m)+m)}{(M+m)^2\beta}$$
$$\ddot{\theta}_k=\frac{g}{l\beta}\theta-\frac{F_u}{(M+m)l\beta}$$
- The linearized discrete equation of system following previous procedure
- states
$$X=\left[\matrix{x \\ \dot{x} \\ \theta \\ \dot{\theta}}\right]$$
- state equations
$$x_{k+1}=x_k+\dot{x_k}\tau$$
$$\theta_{k+1}=\theta_k+\dot{\theta_k}\tau$$
$$\dot{x}_{k+1} = \dot{x}_k+\tau \ddot{x_k}$$
$$\dot{\theta}_{k+1} = \dot{\theta}_k+\tau\ddot{\theta}_k$$
- discrete state space
$$\left[\matrix{x_{k+1} \\ \dot{x}_{k+1} \\ \theta_{k+1} \\ \dot{\theta}_{k+1}}\right] =
\left[\matrix{1 & \tau & 0 & 0 \\
0 & 1 & -\frac{\tau mg}{(M+m)\beta} & 0 \\
0 & 0 & 1 & \tau \\
0 & 0 & \frac{g\tau}{l\beta}& 1}\right]\left[\matrix{x_{k} \\ \dot{x}_{k} \\ \theta_{k} \\ \dot{\theta}_{k}}\right] + \left[\matrix{0 \\ \frac{\tau(\beta(M+m)+m)}{(M+m)^2\beta} \\ 0 \\ \frac{-\tau}{(M+m)l\beta} }\right][F_{u_k}]$$
```python
import gym
from cartpole_util import CartPoleEnv
env = CartPoleEnv(1e-1, 0.1)
```
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.
```python
J = 1/12*(env.masspole*2)**2
beta = env.masspole*env.masscart*env.length**2 + J*(env.masspole+env.masscart)
ac = -(env.masspole**2*env.length**2*env.gravity)/beta
bc = (J+env.masspole*env.length**2)/beta
cc = env.masspole*env.length*env.gravity*(env.masscart+env.masspole)/beta
dc = -env.masspole*env.length/beta
```
```python
import numpy as np
A_c = np.array([[0,1,0,0],
[0,0,ac,0],
[0,0,0,1],
[0,0,cc,0]])
B_c = np.array([[0],
[bc],
[0],
[dc]])
C_c = np.array([[0,1,0,0],
[0,0,0,1]])
D_c = np.array([[0],
[0]])
```
```python
print(C_c.shape)
```
(2, 4)
```python
from scipy import signal
sys = signal.StateSpace(A_c, B_c, C_c, D_c)
```
```python
discrete_sys = sys.to_discrete(env.tau)
```
```python
discrete_sys.A
```
array([[ 1.00000000e+00, 2.00000000e-02, -1.71037389e-04,
-1.13996348e-06],
[ 0.00000000e+00, 1.00000000e+00, -1.71144572e-02,
-1.71037389e-04],
[ 0.00000000e+00, 0.00000000e+00, 1.00376282e+00,
2.00250792e-02],
[-0.00000000e+00, -0.00000000e+00, 3.76518059e-01,
1.00376282e+00]])
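As a quick sanity check, the discretized system can be stepped forward directly (the initial state and input below are arbitrary illustrative values):
```python
x0 = np.array([0.0, 0.0, 0.05, 0.0])  # cart at rest, small initial pole angle
u0 = np.array([0.0])                  # no applied force
x1 = discrete_sys.A @ x0 + discrete_sys.B @ u0
print(x1)
```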
```python
from numpy.linalg import matrix_rank
```
```python
A = discrete_sys.A
# C = discrete_sys.C
# control_C = np.array([])
C= np.array([ [1,1,0,0],
[0,0,0,1]])
```
```python
control_C = np.concatenate((C,np.matmul(C,A)),axis=0)
matrix_rank(control_C)
```
4
```python
from sympy import sin, cos, Matrix, symbols, diff, simplify
from sympy.abc import rho, phi, theta, tau, omega
```
```python
X = Matrix([rho*cos(phi), rho*sin(phi), rho**2])
```
```python
Y = Matrix([rho, phi])
```
```python
X.jacobian(Y)
```
Matrix([
[cos(phi), -rho*sin(phi)],
[sin(phi), rho*cos(phi)],
[ 2*rho, 0]])
```python
v, g, F, m, l, M = symbols("v, g, F, m, l, M")
h2 = v+tau*(F+m*l*omega**2*sin(theta)-3/4*m*g*sin(theta)*cos(theta))/(M+m-3/4*m*(cos(theta))**2)
```
```python
simplify(diff(h2, theta))
```
m*tau*(-1.5*(F - 0.375*g*m*sin(2*theta) + l*m*omega**2*sin(theta))*sin(theta)*cos(theta) + (M - 0.75*m*cos(theta)**2 + m)*(1.5*g*sin(theta)**2 - 0.75*g + 1.0*l*omega**2*cos(theta)))/(M - 0.75*m*cos(theta)**2 + m)**2
```python
diff(h2, omega)
```
2*l*m*omega*tau*sin(theta)/(M - 0.75*m*cos(theta)**2 + m)
```python
diff(h2, v)
```
1
```python
h4 = omega+tau*((M+m)*g*sin(theta)-m*l*omega**2*sin(theta)*cos(theta)-F*cos(theta))/(4/3*(m+M)*l-m*l*(cos(theta))**2)
```
```python
diff(h4, omega)
```
-2*l*m*omega*tau*sin(theta)*cos(theta)/(-l*m*cos(theta)**2 + l*(1.33333333333333*M + 1.33333333333333*m)) + 1
```python
```

---

**Record metadata**

- hexsha: `70141ed15fe94468e86ceee78accc08e488496e9`; size: 10,406; ext: ipynb; lang: Jupyter Notebook
- max_stars: `Proposal.ipynb` in caseypen/MAE_298_Final_Project (`66c96aabc7bbcfe070c5170c03d5f6b5196b31bb`), licenses ["MIT"], count null, events null
- max_issues: `Proposal.ipynb` in caseypen/MAE_298_Final_Project (`66c96aabc7bbcfe070c5170c03d5f6b5196b31bb`), licenses ["MIT"], count null, events null
- max_forks: `Proposal.ipynb` in caseypen/MAE_298_Final_Project (`66c96aabc7bbcfe070c5170c03d5f6b5196b31bb`), licenses ["MIT"], count 1, events 2020-11-08T14:38:42.000Z to 2020-11-08T14:38:42.000Z
- avg_line_length 26.145729; max_line_length 285; alphanum_fraction 0.468672; converted true; num_tokens 2,055
- lm_name Qwen/Qwen-72B; lm_label "1. YES 2. YES"; lm_q1_score 0.934395; lm_q2_score 0.815232; lm_q1q2_score 0.761749
- text_lang __label__kor_Hang (conf 0.151337); label 0.608131

---
Derivation of the kinematics of a planar robot arm
```python
import sympy as sy
from sympy import pi, cos, sin, tan
from IPython.display import display
from sympy.printing.pycode import pycode
#q1, q2, q3, q4 = sy.symbols("q1, q2, q3, q4") # joint angles
t = sy.Symbol("t")
q1 = sy.Function("q1")
q2 = sy.Function("q2")
q3 = sy.Function("q3")
q4 = sy.Function("q4")
omega1 = sy.Function("omega1")
omega2 = sy.Function("omega2")
omega3 = sy.Function("omega3")
omega4 = sy.Function("omega4")
a1, a2, a3, a4 = sy.symbols("a1, a2, a3, a4")
b1, b2, b3, b4 = sy.symbols("b1, b2, b3, b4")
c1, c2, c3, c4 = sy.symbols("c1, c2, c3, c4")
l1, l2, l3, l4 = sy.symbols("l1, l2, l3, l4") # link lengths
lg1, lg2, lg3, lg4 = sy.symbols("lg1, lg2, lg3, lg4") # lengths to the link centers of mass
m1, m2, m3, m4 = sy.symbols("m1, m2, m3, m4") # masses
Ig1, Ig2, Ig3, Ig4 = sy.symbols("I1, I2, I3, I4") # moments of inertia
g = sy.Symbol("g") # gravitational acceleration
def R(q):
return sy.Matrix([
[cos(q), -sin(q)],
[sin(q), cos(q)],
])
def HTM(q, x, y):
return sy.Matrix([
[cos(q), -sin(q), x],
[sin(q), cos(q), y],
[0, 0, 1],
])
# joint positions
x1 = R(q1(t)) * sy.Matrix([[l1, 0]]).T
x2 = R(q1(t)) * sy.Matrix([[l1, 0]]).T + \
R(q1(t) + q2(t)) * sy.Matrix([[l2, 0]]).T
x3 = R(q1(t)) * sy.Matrix([[l1, 0]]).T +\
R(q1(t) + q2(t)) * sy.Matrix([[l2, 0]]).T +\
R(q1(t) + q2(t) + q3(t)) * sy.Matrix([[l3, 0]]).T
x4 = R(q1(t)) * sy.Matrix([[l1, 0]]).T +\
R(q1(t) + q2(t)) * sy.Matrix([[l2, 0]]).T +\
R(q1(t) + q2(t) + q3(t)) * sy.Matrix([[l3, 0]]).T +\
R(q1(t) + q2(t) + q3(t) + q4(t)) * sy.Matrix([[l4, 0]]).T
q = sy.Matrix([[q1(t), q2(t), q3(t), q4(t)]]).T
```
```python
J1 = x1.jacobian(q)
J2 = x2.jacobian(q)
J3 = x3.jacobian(q)
J4 = x4.jacobian(q)
J1_dot = sy.diff(J1, t)
J2_dot = sy.diff(J2, t)
J3_dot = sy.diff(J3, t)
J4_dot = sy.diff(J4, t)
J_all = [J1, J2, J3, J4]
J_dot_all = [J1_dot, J2_dot, J3_dot, J4_dot]
for i, J in enumerate(J_all):
J_all[i] = J.subs([
(sy.Derivative(q1(t),t), b1),
(sy.Derivative(q2(t),t), b2),
(sy.Derivative(q3(t),t), b3),
(sy.Derivative(q4(t),t), b4),
(q1(t), a1),
(q2(t), a2),
(q3(t), a3),
(q4(t), a4),
])
for i, J_dot in enumerate(J_dot_all):
J_dot_all[i] = J_dot.subs([
(sy.Derivative(q1(t),t), b1),
(sy.Derivative(q2(t),t), b2),
(sy.Derivative(q3(t),t), b3),
(sy.Derivative(q4(t),t), b4),
(q1(t), a1),
(q2(t), a2),
(q3(t), a3),
(q4(t), a4),
])
```
```python
f = open('sice_kinema.txt', 'w')
for i, j in enumerate([x1, x2, x3, x4]):
s = '\nx' + str(i) + '='
f.write(s)
f.write(str(j))
for i, j in enumerate(J_all):
s = '\nJ' + str(i) + '='
f.write(s)
f.write(str(j))
for i, j in enumerate(J_dot_all):
s = '\nJ_dot' + str(i) + '='
f.write(s)
f.write(str(j))
f.close()
```
```python
print(J_dot_all[3])
```
Matrix([[-b1*l1*cos(a1) - l2*(b1 + b2)*cos(a1 + a2) - l3*(b1 + b2 + b3)*cos(a1 + a2 + a3) - l4*(b1 + b2 + b3 + b4)*cos(a1 + a2 + a3 + a4), -l2*(b1 + b2)*cos(a1 + a2) - l3*(b1 + b2 + b3)*cos(a1 + a2 + a3) - l4*(b1 + b2 + b3 + b4)*cos(a1 + a2 + a3 + a4), -l3*(b1 + b2 + b3)*cos(a1 + a2 + a3) - l4*(b1 + b2 + b3 + b4)*cos(a1 + a2 + a3 + a4), -l4*(b1 + b2 + b3 + b4)*cos(a1 + a2 + a3 + a4)], [-b1*l1*sin(a1) - l2*(b1 + b2)*sin(a1 + a2) - l3*(b1 + b2 + b3)*sin(a1 + a2 + a3) - l4*(b1 + b2 + b3 + b4)*sin(a1 + a2 + a3 + a4), -l2*(b1 + b2)*sin(a1 + a2) - l3*(b1 + b2 + b3)*sin(a1 + a2 + a3) - l4*(b1 + b2 + b3 + b4)*sin(a1 + a2 + a3 + a4), -l3*(b1 + b2 + b3)*sin(a1 + a2 + a3) - l4*(b1 + b2 + b3 + b4)*sin(a1 + a2 + a3 + a4), -l4*(b1 + b2 + b3 + b4)*sin(a1 + a2 + a3 + a4)]])
```python
print(J4)
```
Matrix([[-l1*sin(q1(t)) - l2*sin(q1(t) + q2(t)) - l3*sin(q1(t) + q2(t) + q3(t)) - l4*sin(q1(t) + q2(t) + q3(t) + q4(t)), -l2*sin(q1(t) + q2(t)) - l3*sin(q1(t) + q2(t) + q3(t)) - l4*sin(q1(t) + q2(t) + q3(t) + q4(t)), -l3*sin(q1(t) + q2(t) + q3(t)) - l4*sin(q1(t) + q2(t) + q3(t) + q4(t)), -l4*sin(q1(t) + q2(t) + q3(t) + q4(t))], [l1*cos(q1(t)) + l2*cos(q1(t) + q2(t)) + l3*cos(q1(t) + q2(t) + q3(t)) + l4*cos(q1(t) + q2(t) + q3(t) + q4(t)), l2*cos(q1(t) + q2(t)) + l3*cos(q1(t) + q2(t) + q3(t)) + l4*cos(q1(t) + q2(t) + q3(t) + q4(t)), l3*cos(q1(t) + q2(t) + q3(t)) + l4*cos(q1(t) + q2(t) + q3(t) + q4(t)), l4*cos(q1(t) + q2(t) + q3(t) + q4(t))]])
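A minimal usage sketch for evaluating `J4` numerically (the plain symbols `th1`-`th4` are introduced here only to replace the applied functions before `lambdify`; the numerical joint angles and link lengths are arbitrary):
```python
# Replace the time-dependent functions by plain symbols, then lambdify.
th1, th2, th3, th4 = sy.symbols("th1, th2, th3, th4")
J4_plain = J4.subs([(q1(t), th1), (q2(t), th2), (q3(t), th3), (q4(t), th4)])
J4_num = sy.lambdify((th1, th2, th3, th4, l1, l2, l3, l4), J4_plain, "numpy")
print(J4_num(0.1, 0.2, 0.3, 0.4, 1.0, 1.0, 1.0, 1.0))
```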

---

**Record metadata**

- hexsha: `c984d200a9dfd5c1c8d5d1c4cf2b4bd6ed344c98`; size: 6,796; ext: ipynb; lang: Jupyter Notebook
- max_stars / max_issues / max_forks: `misc/sice_arm_kinematics.ipynb` in YoshimitsuMatsutaIe/manipulator_dynamics (`587b3cedddd07c2aa09d1195289b0c312e0fc749`), licenses ["MIT"], counts null, events null
- avg_line_length 32.830918; max_line_length 778; alphanum_fraction 0.424956; converted true; num_tokens 2,136
- lm_name Qwen/Qwen-72B; lm_label "1. YES 2. YES"; lm_q1_score 0.938124; lm_q2_score 0.782662; lm_q1q2_score 0.734234
- text_lang __label__kor_Hang (conf 0.06295); label 0.544205

---
# 11 Ordinary Differential Equations (ODEs)
[ODE](http://mathworld.wolfram.com/OrdinaryDifferentialEquation.html)s describe many phenomena in physics. They describe the changes of a **dependent variable** $y(t)$ as a function of a **single independent variable** (e.g. $t$ or $x$).
An ODE of **order** $n$
$$
F(t, y^{(0)}, y^{(1)}, ..., y^{(n)}) = 0
$$
contains derivatives $y^{(k)}(t) \equiv y^{(k)} \equiv \frac{d^{k}y(t)}{dt^{k}}$ up to the $n$-th derivative (and $y^{(0)} \equiv y$).
### Initial and boundary conditions
* $n$ **initial conditions** are needed to *uniquely determine* the solution of an $n$-th order ODE, e.g., initial positions and velocities.
* **Boundary conditions** (values of the solution on the domain boundaries) can additionally restrict solutions, but the resulting *eigenvalue problems* are more difficult, e.g., the wavefunction goes towards 0 for $\pm\infty$.
### Linear ODEs
A **linear** ODE contains no higher powers than 1 of any of the $y^{(k)}$.
*Superposition principle*: Linear combinations of solutions are also solutions.
#### Example: First order linear ODE
\begin{align}
\frac{dy}{dt} &= f(t)y + g(t)\\
y^{(1)} &= f(t)y + g(t)\\
% y^{(1)} - f(t)y - g(t) &= 0
\end{align}
##### Radioactive decay
$$
\frac{dN}{dt} = -k N
$$
### Non-linear ODEs
**Non-linear** ODEs can contain any powers in the dependent variable and its derivatives.
No superposition of solutions. Often impossible to solve analytically.
#### Example: Second order (general) ODE
\begin{gather}
\frac{d^2 y}{dt^2} + \lambda(t) \frac{dy}{dt} = f\left(t, y, \frac{dy}{dt}\right)\\
\end{gather}
##### Newton's equations of motion
$$
m\frac{d^2 x}{dt^2} = F(x) + F_\text{ext}(x, t) \quad \text{with}
\quad F(x) = -\frac{dU}{dx}
$$
(Force is often derived from a potential energy $U(x)$ and may contain non-linear terms such as $x^{-2}$ or $x^3$.)
## Partial differential equations (PDEs)
* more than one independent variable (e.g. $x$ and $t$)
* partial derivatives
* much more difficult than ODEs
#### Example: Schrödinger equation (Quantum Mechanics)
$$
i\hbar \frac{\partial\psi(\mathbf{x}, t)}{\partial t} = -\frac{\hbar^2}{2m}
\left(\frac{\partial^2 \psi}{\partial x^2} +
\frac{\partial^2 \psi}{\partial y^2} +
\frac{\partial^2 \psi}{\partial z^2}
\right) + V(\mathbf{x})\, \psi(\mathbf{x}, t)
$$
## Harmonic and anharmonic oscillator
* particle with mass $m$ connected to a spring
* spring described by a harmonic potential or anharmonic ones in the displacements from equilibrium $x$
\begin{align}
U_1(x) &= \frac{1}{2} k x^2, \quad k=1\\
U_2(x) &= \frac{1}{2} k x^2 \left(1 - \frac{2}{3}\alpha x\right), \quad k=1,\ \alpha=\frac{1}{2}\\
U_3(x) &= \frac{1}{p} k x^p, \quad k=1,\ p=6
\end{align}
1. What do these potentials look like? Sketch or plot.
2. Calculate the forces.
#### Potentials
```python
import numpy as np
def U1(x, k=1):
return 0.5 * k * x*x
def U2(x, k=1, alpha=0.5):
return 0.5 * k * x*x * (1 - (2/3)*alpha*x)
def U3(x, k=1, p=6):
return (k/p) * np.power(x, p)
```
```python
import matplotlib
import matplotlib.pyplot as plt
matplotlib.style.use('seaborn-talk')
%matplotlib inline
```
```python
X = np.linspace(-3, 3, 100)
ax = plt.subplot(1,1,1)
ax.plot(X, U1(X), label=r"$U_1$")
ax.plot(X, U2(X), label=r"$U_2$")
ax.plot(X, U3(X), label=r"$U_3$")
ax.set_ylim(None, 10)
ax.legend(loc="best")
```
#### Forces
\begin{align}
F_1(x) &= -kx\\
F_2(x) &= -kx(1 - \alpha x)\\
F_3(x) &= -k x^{p-1}
\end{align}
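A minimal sketch of the corresponding force functions, in the same style as the potential functions above (parameter defaults follow the values stated earlier):
```python
def F1(x, k=1):
    """Force from the harmonic potential U1."""
    return -k*x

def F2(x, k=1, alpha=0.5):
    """Force from the anharmonic potential U2."""
    return -k*x*(1 - alpha*x)

def F3(x, k=1, p=6):
    """Force from the power-law potential U3."""
    return -k*x**(p-1)
```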
## ODE Algorithms
Basic idea:
1. Start with initial conditions, $y_0 \equiv y(t=0)$
2. Use $\frac{dy}{dt} = f(t, y)$ (the RHS!) to advance solution a small step $h$ forward in time: $y(t=h) \equiv y_1$
3. Repeat with $y_1$ to obtain $y_2 \equiv y(t=2h)$... and for all future values of $t$.
Possible issues
* small differences: subtractive cancelation and round-off error accumulation
* extrapolation: numerical "solution" can deviate wildly from exact
* possibly need adaptive $h$
### Euler's rule
Simple: forward difference
\begin{align}
f(t, y) = \frac{dy(t)}{dt} &\approx \frac{y(t_{n+1}) - y(t_n)}{h}\\
y_{n+1} &\approx y_n + h f(t_n, y_n) \quad \text{with} \quad y_n := y(t_n)
\end{align}
Error will be $\mathcal{O}(h^2)$ (bad!).
Also: what if we have a second order ODE ?!?! We only used $dy/dt$.
### Standard (dynamic) form of ODEs
1 ODE of *any order* $n$ $\rightarrow$ $n$ coupled simultaneous first-order ODEs in $n$ unknowns $y^{(0)}, \dots, y^{(n-1)}$:
\begin{align}
\frac{dy^{(0)}}{dt} &= f^{(0)}(t, y^{(0)}, \dots, y^{(n-1)})\\
\frac{dy^{(1)}}{dt} &= f^{(1)}(t, y^{(0)}, \dots, y^{(n-1)})\\
\vdots & \\
\frac{dy^{(n-1)}}{dt} &= f^{(n-1)}(t, y^{(0)}, \dots, y^{(n-1)})\\
\end{align}
In $n$-dimensional vector notation:
\begin{align}
\frac{d\mathbf{y}(t)}{dt} &= \mathbf{f}(t, \mathbf{y})\\
\mathbf{y} &= \left(\begin{array}{c}
y^{(0)}(t) \\
y^{(1)}(t) \\
\vdots \\
y^{(n-1)}(t)
\end{array}\right),
\quad
\mathbf{f} = \left(\begin{array}{c}
f^{(0)}(t, \mathbf{y}) \\
f^{(1)}(t, \mathbf{y}) \\
\vdots \\
f^{(n-1)}(t, \mathbf{y})
\end{array}\right)
\end{align}
#### Example: Convert Newton's EOMs to standard form
$$
\frac{d^2 x}{dt^2} = m^{-1} F\Big(t, x, \frac{dx}{dt}\Big)
$$
RHS may *not contain any explicit derivatives* but components of $\mathbf{y}$ can represent derivatives.
* position $x$ as first dependent variable $y^{(0)}$ (as usual).
* velocity $dx/dt$ as second dependent variable $y^{(1)}$
$$
y^{(0)}(t) := x(t), \quad y^{(1)}(t) := \frac{dx}{dt} = \frac{dy^{(0)}}{dt}
$$
One 2nd order ODE
$$
\frac{d^2 x}{dt^2} = m^{-1} F\Big(t, x, \frac{dx}{dt}\Big)
$$
to two simultaneous 1st order ODEs:
\begin{align}
\frac{dy^{(0)}}{dt} &= y^{(1)}(t)\\
\frac{dy^{(1)}}{dt} &= m^{-1} F\Big(t, y^{(0)}, y^{(1)}\Big)
\end{align}
\begin{align}
\frac{d\mathbf{y}(t)}{dt} &= \mathbf{f}(t, \mathbf{y})\\
\mathbf{y} &= \left(\begin{array}{c}
y^{(0)} \\
y^{(1)}
\end{array}\right) =
\left(\begin{array}{c}
x(t) \\
\frac{dx}{dt}
\end{array}\right),\\
\mathbf{f} &= \left(\begin{array}{c}
y^{(1)}(t) \\
m^{-1} F\Big(t, y^{(0)}, y^{(1)}\Big)
\end{array}\right) =
\left(\begin{array}{c}
\frac{dx}{dt} \\
m^{-1} F\Big(t, x(t), \frac{dx}{dt}\Big)
\end{array}\right)
\end{align}
#### Example: Driven 1D harmonic oscillator in standard form
With $F_1 = -k x$:
$$
\frac{d^2 x}{dt^2} = F_\text{ext}(x, t) - k x
$$
convert to
\begin{align}
\frac{dy^{(0)}}{dt} &= y^{(1)}(t) \\
\frac{dy^{(1)}}{dt} &= m^{-1}[F_\text{ext}(y^{(0)}, t) - k y^{(0)}]
\end{align}
Force (or derivative) function $\mathbf{f}$ and initial conditions:
\begin{alignat}{3}
f^{(0)}(t, \mathbf{y}) &= y^{(1)},
&\quad y^{(0)}(0) &= x_0,\\
f^{(1)}(t, \mathbf{y}) &= m^{-1}[F_\text{ext}(y^{(0)}, t) - k y^{(0)}],
&\quad y^{(1)}(0) &= v_0.
\end{alignat}
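A minimal sketch of this derivative function (the external force is left as a caller-supplied function and defaults to zero, which reduces to the undriven oscillator):
```python
import numpy as np

def f_driven(t, y, k=1.0, m=1.0, F_ext=lambda x, t: 0.0):
    """Derivative vector f(t, y) for the driven harmonic oscillator."""
    return np.array([y[1], (F_ext(y[0], t) - k*y[0])/m])

y0 = np.array([0.0, 1.0])   # x0, v0
print(f_driven(0.0, y0))    # -> [1., 0.]
```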
### Euler's rule (standard form)
Given the $n$-dimensional vectors from the ODE standard form
$$
\frac{d\mathbf{y}}{dt} = \mathbf{f}(t, \mathbf{y})
$$
the **Euler rule** amounts to
\begin{align}
\mathbf{f}(t, \mathbf{y}) = \frac{d\mathbf{y}(t)}{dt} &\approx \frac{\mathbf{y}(t_{n+1}) - \mathbf{y}(t_n)}{h}\\
\mathbf{y}_{n+1} &\approx \mathbf{y}_n + h \mathbf{f}(t_n, \mathbf{y}_n) \quad \text{with} \quad \mathbf{y}_n := \mathbf{y}(t_n)
\end{align}
## Problem: Numerically integrate the 1D harmonic oscillator with Euler
\begin{alignat}{3}
f^{(0)}(t, \mathbf{y}) &= y^{(1)},
&\quad y^{(0)}(0) &= x_0,\\
f^{(1)}(t, \mathbf{y}) &= - \frac{k}{m} y^{(0)},
&\quad y^{(1)}(0) &= v_0.
\end{alignat}
with $k=1$; $x_0 = 0$ and $v_0 = +1$.
### Explicit implementation:
* Note how in `f_hramonic` we are constructing the force vector of the standard ODE representation
* `y` is the vector of dependents in the standard representation
* We pre-allocate the array for `y` and then assign to individual elements with the
```python
y[:] = ...
```
notation, which has higher performance than creating the array anew every time.
```python
import numpy as np
def F1(x, k=1):
"""Harmonic force"""
return -k*x
def f_harmonic(t, y, k=1, m=1):
"""Force vector in standard ODE form (n=2)"""
return np.array([y[1], F1(y[0], k=k)/m])
t_max = 100
h = 0.01
Nsteps = t_max/h
t_range = h * np.arange(Nsteps)
x = np.empty_like(t_range)
y = np.zeros(2)
# initial conditions
x0, v0 = 0.0, 1.0
y[:] = x0, v0
for i, t in enumerate(t_range):
# store position that corresponds to time t_i
x[i] = y[0]
# Euler integrator
y[:] = y + h * f_harmonic(t, y)
```
Plot the position $x(t)$ (which is $y_0$) against time:
```python
plt.plot(t_range, x)
```
### Modular solution with functions
We can make the Euler integrator a function, which makes the code more readable and modular and we can make the whole integration a function, too. This will allow us to easily run the integration with different initial values or `h` steps.
```python
import numpy as np
def F1(x, k=1):
"""Harmonic force"""
return -k*x
def f_harmonic(t, y, k=1, m=1):
"""Force vector in standard ODE form (n=2)"""
return np.array([y[1], F1(y[0], k=k)/m])
def euler(y, f, t, h):
"""Euler integrator.
Returns new y at t+h.
"""
return y + h * f(t, y)
def integrate(x0=0, v0=1, t_max=100, h=0.001):
"""Integrate the harmonic oscillator with force F1.
Note that the spring constant k and particle mass m are currently
pre-defined.
Arguments
---------
x0 : float
initial position
v0 : float
initial velocity
t_max : float
time to integrate out to
h : float, default 0.001
integration time step
Returns
-------
Tuple ``(t, x)`` with times and positions.
"""
Nsteps = t_max/h
t_range = h * np.arange(Nsteps)
x = np.empty_like(t_range)
y = np.zeros(2)
# initial conditions
y[:] = x0, v0
for i, t in enumerate(t_range):
# store position that corresponds to time t_i
x[i] = y[0]
# Euler integrator
y[:] = euler(y, f_harmonic, t, h)
return t_range, x
```
Plot the position as a function of time, $x(t)$.
```python
t, x = integrate(h=0.01)
plt.plot(t, x)
```
Note the increase in amplitude. Explore if smaller $h$ fixes this obvious problem.
```python
t, x = integrate(h=0.001)
plt.plot(t, x)
```
Smaller $h$ improves the integration (but Euler is still a bad algorithm... just run out for longer, i.e., higher `t_max`.)
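For comparison, a classical 4th-order Runge-Kutta stepper can be swapped in for `euler` inside `integrate()`; a minimal sketch:
```python
def rk4(y, f, t, h):
    """Classical 4th-order Runge-Kutta step: returns new y at t+h."""
    k1 = f(t, y)
    k2 = f(t + 0.5*h, y + 0.5*h*k1)
    k3 = f(t + 0.5*h, y + 0.5*h*k2)
    k4 = f(t + h, y + h*k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)
```
With the same `h = 0.01`, replacing `y[:] = euler(y, f_harmonic, t, h)` by `y[:] = rk4(y, f_harmonic, t, h)` should keep the oscillation amplitude essentially constant over the full run.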

---

**Record metadata**

- hexsha: `b969f6e98be1f8a06e51c610aef1730ca8b5d502`; size: 125,997; ext: ipynb; lang: Jupyter Notebook
- max_stars / max_issues / max_forks: `11_ODEs/.ipynb_checkpoints/11-ODEs-checkpoint.ipynb` in nachrisman/PHY494 (`bac0dd5a7fe6f59f9e2ccaee56ebafcb7d97e2e7`), licenses ["CC-BY-4.0"], counts null, events null
- avg_line_length 145.157834; max_line_length 28,476; alphanum_fraction 0.874576; converted true; num_tokens 3,818
- lm_name Qwen/Qwen-72B; lm_label "1. YES 2. YES"; lm_q1_score 0.779993; lm_q2_score 0.843895; lm_q1q2_score 0.658232
- text_lang __label__eng_Latn (conf 0.773268); label 0.367625

---
# 15 PDEs: Solution with Time Stepping
## Heat Equation
The **heat equation** can be derived from Fourier's law and energy conservation (see the [lecture notes on the heat equation (PDF)](15_PDEs_LectureNotes_HeatEquation.pdf))
$$
\frac{\partial T(\mathbf{x}, t)}{\partial t} = \frac{K}{C\rho} \nabla^2 T(\mathbf{x}, t),
$$
## Problem: insulated metal bar (1D heat equation)
A metal bar of length $L$ is insulated along it lengths and held at 0ºC at its ends. Initially, the whole bar is at 100ºC. Calculate $T(x, t)$ for $t>0$.
### Analytic solution
Solve by separation of variables and a Fourier sine series: the general solution that obeys the boundary conditions $T(0, t) = T(L, t) = 0$ is
$$
T(x, t) = \sum_{n=1}^{+\infty} A_n \sin(k_n x)\, \exp\left(-\frac{k_n^2 K t}{C\rho}\right), \quad k_n = \frac{n\pi}{L}
$$
The specific solution that satisfies $T(x, 0) = T_0 = 100^\circ\text{C}$ leads to $A_n = 4 T_0/n\pi$ for $n$ odd:
$$
T(x, t) = \sum_{n=1,3,5,\dots}^{+\infty} \frac{4 T_0}{n \pi} \sin(k_n x)\, \exp\left(-\frac{k_n^2 K t}{C\rho}\right)
$$
```python
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
```
Analytical solution:
```python
def T_bar(x, t, T0, L, K=237, C=900, rho=2700, nmax=1000):
T = np.zeros_like(x)
eta = K / (C*rho)
for n in range(1, nmax, 2):
kn = n*np.pi/L
T += 4*T0/(np.pi * n) * np.sin(kn*x) * np.exp(-kn*kn * eta * t)
return T
```
```python
T0 = 100.
L = 1.0
X = np.linspace(0, L, 100)
for t in np.linspace(0, 3000, 50):
plt.plot(X, T_bar(X, t, T0, L))
plt.xlabel(r"$x$ (m)")
plt.ylabel(r"$T$ ($^\circ$C)");
```
### Numerical solution: Leap frog
Discretize (finite difference):
For the time domain we only have the initial values so we use a simple forward difference for the time derivative:
$$
\frac{\partial T(x,t)}{\partial t} \approx \frac{T(x, t+\Delta t) - T(x, t)}{\Delta t}
$$
For the spatial derivative we have initially all values so we can use the more accurate central difference approximation:
$$
\frac{\partial^2 T(x, t)}{\partial x^2} \approx \frac{T(x+\Delta x, t) + T(x-\Delta x, t) - 2 T(x, t)}{\Delta x^2}
$$
Thus, the heat equation can be written as the finite difference equation
$$
\frac{T(x, t+\Delta t) - T(x, t)}{\Delta t} = \frac{K}{C\rho} \frac{T(x+\Delta x, t) + T(x-\Delta x, t) - 2 T(x, t)}{\Delta x^2}
$$
which can be reordered so that the RHS contains only known terms and the LHS future terms. Index $i$ is the spatial index, and $j$ the time index: $x = x_0 + i \Delta x$, $t = t_0 + j \Delta t$.
$$
T_{i, j+1} = (1 - 2\eta) T_{i,j} + \eta(T_{i+1,j} + T_{i-1, j}), \quad \eta := \frac{K \Delta t}{C \rho \Delta x^2}
$$
Thus we can step forward in time ("leap frog"), using only known values.
### Solve the 1D heat equation numerically for an aluminum bar
* $K = 237$ W/(m K)
* $C = 900$ J/(kg K)
* $\rho = 2700$ kg/m<sup>3</sup>
* $L = 1$ m
* $T_0 = 373$ K and $T_b = 273$ K
* $T(x, 0) = T_0$ and $T(0, t) = T(L, t) = T_b$
#### Key considerations
The key line is the computation of the new temperature field at time step $j+1$ from the temperature distribution at time step $j$. It can be written purely with numpy array operations (see last lecture!):
```python
T[1:-1] = (1 - 2*eta) * T[1:-1] + eta * (T[2:] + T[:-2])
```
Note that the range operator `T[start:end]` *excludes* `end`, so in order to include `T[1], T[2], ..., T[-2]` (but not the rightmost `T[-1]`) we have to use `T[1:-1]`.
The *boundary conditions* are fixed for all times:
```python
T[0] = T[-1] = Tb
```
The *initial conditions* (at time step `j=0`)
```python
T[1:-1] = T0
```
are only used to compute the distribution of temperatures at the next step `j=1`.
#### Solution
```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
```
For 3D rotatable output:
```python
%matplotlib widget
```
For HTML/nbviewer output, use inline:
```python
%matplotlib inline
```
Numerical solution:
```python
L_rod = 1. # m
t_max = 3000. # s
Dx = 0.02 # m
Dt = 2 # s
Nx = int(L_rod // Dx)
Nt = int(t_max // Dt)
Kappa = 237 # W/(m K)
CHeat = 900 # J/K
rho = 2700 # kg/m^3
T0 = 373 # K
Tb = 273 # K
eta = Kappa * Dt / (CHeat * rho * Dx**2)
eta2 = 1 - 2*eta
step = 20 # plot solution every n steps
print("Nx = {0}, Nt = {1}".format(Nx, Nt))
print("eta = {0}".format(eta))
T = np.zeros(Nx)
T_plot = np.zeros((Nt//step + 1, Nx))
# initial conditions
T[1:-1] = T0
# boundary conditions
T[0] = T[-1] = Tb
t_index = 0
T_plot[t_index, :] = T
for jt in range(1, Nt):
T[1:-1] = eta2 * T[1:-1] + eta*(T[2:] + T[:-2])
if jt % step == 0 or jt == Nt-1:
t_index += 1
T_plot[t_index, :] = T
print("Iteration {0:5d}".format(jt), end="\r")
else:
print("Completed {0:5d} iterations: t={1} s".format(jt, jt*Dt))
```
Nx = 49, Nt = 1500
eta = 0.4876543209876543
Completed 1499 iterations: t=2998 s
#### Visualization
Visualize (you can use the code as is).
Note how we are making the plot use proper units by multiplying with `Dt * step` and `Dx`.
```python
X, Y = np.meshgrid(range(T_plot.shape[0]), range(T_plot.shape[1]))
Z = T_plot[X, Y]
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.plot_wireframe(X*Dt*step, Y*Dx, Z)
ax.set_xlabel(r"time $t$ (s)")
ax.set_ylabel(r"position $x$ (m)")
ax.set_zlabel(r"temperature $T$ (K)")
fig.tight_layout()
```
2D as above for the analytical solution…
```python
X = Dx * np.arange(T_plot.shape[1])
plt.plot(X, T_plot.T)
plt.xlabel(r"$x$ (m)")
plt.ylabel(r"$T$ (K)");
```
## Stability of the solution
### Empirical investigation of the stability
Investigate the solution for different values of `Dt` and `Dx`. Can you discern patterns for stable/unstable solutions?
Report `Dt`, `Dx`, and `eta`
* for 3 stable solutions
* for 3 unstable solutions
```python
def calculate_T(L_rod=1, t_max=3000, Dx=0.02, Dt=2, T0=373, Tb=273,
step=20):
Nx = int(L_rod // Dx)
Nt = int(t_max // Dt)
Kappa = 237 # W/(m K)
CHeat = 900 # J/K
rho = 2700 # kg/m^3
eta = Kappa * Dt / (CHeat * rho * Dx**2)
eta2 = 1 - 2*eta
print("Nx = {0}, Nt = {1}".format(Nx, Nt))
print("eta = {0}".format(eta))
T = np.zeros(Nx)
T_plot = np.zeros((int(np.ceil(Nt/step)) + 1, Nx))
# initial conditions
T[1:-1] = T0
# boundary conditions
T[0] = T[-1] = Tb
t_index = 0
T_plot[t_index, :] = T
for jt in range(1, Nt):
T[1:-1] = eta2 * T[1:-1] + eta*(T[2:] + T[:-2])
if jt % step == 0 or jt == Nt-1:
t_index += 1
T_plot[t_index, :] = T
print("Iteration {0:5d}".format(jt), end="\r")
else:
print("Completed {0:5d} iterations: t={1} s".format(jt, jt*Dt))
return T_plot
def plot_T(T_plot, Dx, Dt, step):
X, Y = np.meshgrid(range(T_plot.shape[0]), range(T_plot.shape[1]))
Z = T_plot[X, Y]
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.plot_wireframe(X*Dt*step, Y*Dx, Z)
ax.set_xlabel(r"time $t$ (s)")
ax.set_ylabel(r"position $x$ (m)")
ax.set_zlabel(r"temperature $T$ (K)")
fig.tight_layout()
return ax
```
```python
Dx, Dt, step = 0.01, 2, 20
T_plot = calculate_T(Dx=Dx, Dt=Dt, step=step)
plot_T(T_plot, Dx, Dt, step)
```
Note that *decreasing* the value of $\Delta x$ made the solution *unstable*. This is strange: we have gotten used to the idea that working on a finer mesh will increase the detail (until we hit round-off error) and just become computationally more expensive. But here the algorithm suddenly becomes unstable (and it is not just round-off).
For certain combinations of values of $\Delta t$ and $\Delta x$ the solution becomes unstable. Empirically, bigger $\eta$ leads to instability. (In fact, $\eta \geq \frac{1}{2}$ is unstable for the leapfrog algorithm as we will see.)
### Von Neumann stability analysis
If the difference equation solution diverges then we *know* that we have a bad approximation to the original PDE.
Von Neumann stability analysis starts from the assumption that *eigenmodes* of the difference equation can be written as
$$
T_{m,j} = \xi(k)^j e^{ikm\Delta x}, \quad t=j\Delta t,\ x=m\Delta x
$$
with the unknown wave vectors $k=2\pi/\lambda$ and unknown complex functions – the *amplification factors* – $\xi(k)$.
Solutions of the difference equation can be written as linear superpositions of these basis functions. But they are only stable if the eigenmodes are stable, i.e., will not grow in time (with $j$). This is the case when
$$
|\xi(k)| < 1
$$
for all $k$.
Insert the eigenmodes into the finite difference equation
$$
T_{m, j+1} = (1 - 2\eta) T_{m,j} + \eta(T_{m+1,j} + T_{m-1, j})
$$
to obtain
\begin{align}
\xi(k)^{j+1} e^{ikm\Delta x} &= (1 - 2\eta) \xi(k)^{j} e^{ikm\Delta x}
+ \eta(\xi(k)^{j} e^{ik(m+1)\Delta x} + \xi(k)^{j} e^{ik(m-1)\Delta x})\\
\xi(k) &= (1 - 2\eta) + \eta(e^{ik\Delta x} + e^{-ik\Delta x})\\
\xi(k) &= 1 - 2\eta + 2\eta \cos k\Delta x\\
\xi(k) &= 1 + 2\eta\big(\cos k\Delta x - 1\big)
\end{align}
For $|\xi(k)| < 1$ (and all possible $k$):
\begin{align}
|\xi(k)| < 1 \quad &\Leftrightarrow \quad \xi^2(k) < 1\\
(1 + 2y)^2 = 1 + 4y + 4y^2 &< 1 \quad \text{with}\ \ y = \eta(\cos k\Delta x - 1)\\
y(1 + y) &< 0 \quad \Leftrightarrow \quad -1 < y < 0\\
\eta(\cos k\Delta x - 1) &\leq 0 \quad \forall k \quad (\eta > 0, -1 \leq \cos x \leq 1)\\
\eta(\cos k\Delta x - 1) &> -1\\
\eta &< \frac{1}{1 - \cos k\Delta x}\\
\eta = \frac{K \Delta t}{C \rho \Delta x^2} &< \frac{1}{2} \le \frac{1}{1 - \cos k\Delta x}
\end{align}
Thus, solutions are only stable for $\eta < 1/2$. In particular, decreasing $\Delta t$ will always improve stability, but decreasing $\Delta x$ requires a quadratic *decrease* in $\Delta t$!
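A small helper makes it easy to check the criterion before running a simulation (using the material constants defined above; the test values are illustrative):
```python
def eta_value(Dt, Dx, Kappa=237, CHeat=900, rho=2700):
    """Return eta = K*Dt/(C*rho*Dx**2) for a given time step and grid spacing."""
    return Kappa * Dt / (CHeat * rho * Dx**2)

for Dt, Dx in [(2, 0.02), (2, 0.01), (0.5, 0.01)]:
    eta = eta_value(Dt, Dx)
    print(f"Dt={Dt}, Dx={Dx}: eta={eta:.3f} ({'stable' if eta < 0.5 else 'unstable'})")
```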
Note
* Perform von Neumann stability analysis when possible (depends on PDE and the specific discretization).
* Test different combinations of $\Delta t$ and $\Delta x$.
* There is no guarantee that decreasing both will lead to more stable solutions!
Check my inputs:
This was stable and it conforms to the stability criterion:
```python
Dt = 2
Dx = 0.02
eta = Kappa * Dt /(CHeat * rho * Dx*Dx)
print(eta)
```
0.4876543209876543
... and this was unstable, despite a seemingly small change:
```python
Dt = 2
Dx = 0.01
eta = Kappa * Dt /(CHeat * rho * Dx*Dx)
print(eta)
```
1.9506172839506173
```python
```

---

**Record metadata**

- hexsha: `dbeee048f81ca92544519ab22fb3ff43504c4e84`; size: 488,829; ext: ipynb; lang: Jupyter Notebook
- max_stars: `15_PDEs/15_PDEs.ipynb` in Py4Phy/PHY432-resources (`c26d95eaf5c28e25da682a61190e12ad6758a938`), licenses ["CC-BY-4.0"], count null, events null
- max_issues: `15_PDEs/15_PDEs.ipynb` in Py4Phy/PHY432-resources (`c26d95eaf5c28e25da682a61190e12ad6758a938`), licenses ["CC-BY-4.0"], count 1, events 2022-03-03T21:47:56.000Z to 2022-03-03T21:47:56.000Z
- max_forks: `15_PDEs/15_PDEs.ipynb` in Py4Phy/PHY432-resources (`c26d95eaf5c28e25da682a61190e12ad6758a938`), licenses ["CC-BY-4.0"], count null, events null
- avg_line_length 677.048476; max_line_length 174,744; alphanum_fraction 0.948424; converted true; num_tokens 3,668
- lm_name Qwen/Qwen-72B; lm_label "1. YES 2. YES"; lm_q1_score 0.90053; lm_q2_score 0.880797; lm_q1q2_score 0.793184
- text_lang __label__eng_Latn (conf 0.866937); label 0.681165

---
```julia
using Pkg
Pkg.activate(@__DIR__)
Pkg.instantiate()
using LinearAlgebra, Symbolics, DifferentialEquations, JLD2
```
Activating environment at `~/Research/symbolics_double_pendulum/Project.toml`
┌ Info: Precompiling Symbolics [0c5d862f-8b57-4792-8d23-62f2024744c7]
└ @ Base loading.jl:1317
┌ Info: Precompiling DifferentialEquations [0c46a032-eb83-5123-abaf-570d42b7fbaa]
└ @ Base loading.jl:1317
```julia
struct DoublePendulum{T}
m1::T
m2::T
l1::T
l2::T
end
n = 2 # number of generalized coordinates
model = DoublePendulum(1.0, 1.0, 1.0, 1.0) # model
```
DoublePendulum{Float64}(1.0, 1.0, 1.0, 1.0)
```julia
# kinematics
function kinematics_1(model::DoublePendulum, q)
θ1, θ2 = q
[0.5 * model.l1 * sin(θ1);
-0.5 * model.l1 * cos(θ1)]
end
function kinematics_2(model::DoublePendulum, q)
θ1, θ2 = q
[model.l1 * sin(θ1) + 0.5 * model.l2 * sin(θ1 + θ2);
-model.l1 * cos(θ1) - 0.5 * model.l2 * cos(θ1 + θ2)]
end
```
kinematics_2 (generic function with 1 method)
```julia
# fast kinematics functions
@variables q[1:n]
@variables q̇[1:n]
k1 = kinematics_1(model, q)
k2 = kinematics_2(model, q)
k1_exp = Symbolics.build_function(k1, q)
k2_exp = Symbolics.build_function(k2, q)
k1_func = eval(k1_exp[1])
k2_func = eval(k2_exp[1])
```
#7 (generic function with 1 method)
```julia
# kinematics Jacobians
j1 = Symbolics.jacobian(k1, q, simplify = true)
j2 = Symbolics.jacobian(k2, q, simplify = true)
j1_exp = Symbolics.build_function(j1, q)
j2_exp = Symbolics.build_function(j2, q)
j1_func = eval(j1_exp[1])
j2_func = eval(j2_exp[1])
```
#11 (generic function with 1 method)
```julia
# Lagrangian
function lagrangian(model, q, q̇)
L = 0.0
# mass 1
v1 = j1_func(q) * q̇
L += 0.5 * model.m1 * transpose(v1) * v1 # kinetic energy
L -= model.m1 * 9.81 * k1_func(q)[2] # potential energy
# mass 2
v2 = j2_func(q) * q̇
L += 0.5 * model.m2 * transpose(v2) * v2
L -= model.m2 * 9.81 * k2_func(q)[2]
return L
end
# fast Lagrangian
L = lagrangian(model, q, q̇)
#
dLq = Symbolics.gradient(L, q, simplify = true)
dLq̇ = Symbolics.gradient(L, q̇, simplify = true)
ddL = Symbolics.hessian(L, [q; q̇], simplify = true)
# mass matrix
M = ddL[n .+ (1:n), n .+ (1:n)]
M = simplify.(M)
# dynamics bias
C = ddL[n .+ (1:n), 1:n] * q̇ - dLq
C = simplify.(C)
```
\begin{equation}
\left[
\begin{array}{c}
q\dot{_2} \left( \left( \cos\left( q{_1} \right) + 0.5 \cos\left( q{_1} + q{_2} \right) \right) \left( - 0.25 q\dot{_1} \sin\left( q{_1} + q{_2} \right) - 0.25 q\dot{_2} \sin\left( q{_1} + q{_2} \right) \right) + \left( \sin\left( q{_1} \right) + 0.5 \sin\left( q{_1} + q{_2} \right) \right) \left( 0.25 q\dot{_1} \cos\left( q{_1} + q{_2} \right) + 0.25 q\dot{_2} \cos\left( q{_1} + q{_2} \right) \right) + \left( 0.5 \cos\left( q{_1} \right) + 0.25 \cos\left( q{_1} + q{_2} \right) \right) \left( - 0.5 q\dot{_1} \sin\left( q{_1} + q{_2} \right) - 0.5 q\dot{_2} \sin\left( q{_1} + q{_2} \right) \right) + \left( 0.5 \sin\left( q{_1} \right) + 0.25 \sin\left( q{_1} + q{_2} \right) \right) \left( 0.5 q\dot{_1} \cos\left( q{_1} + q{_2} \right) + 0.5 q\dot{_2} \cos\left( q{_1} + q{_2} \right) \right) + 0.25 \cos\left( q{_1} + q{_2} \right) \left( q\dot{_1} \left( \sin\left( q{_1} \right) + 0.5 \sin\left( q{_1} + q{_2} \right) \right) + 0.5 q\dot{_2} \sin\left( q{_1} + q{_2} \right) \right) + 0.5 \cos\left( q{_1} + q{_2} \right) \left( 0.5 q\dot{_1} \left( \sin\left( q{_1} \right) + 0.5 \sin\left( q{_1} + q{_2} \right) \right) + 0.25 q\dot{_2} \sin\left( q{_1} + q{_2} \right) \right) - 0.25 \sin\left( q{_1} + q{_2} \right) \left( q\dot{_1} \left( \cos\left( q{_1} \right) + 0.5 \cos\left( q{_1} + q{_2} \right) \right) + 0.5 q\dot{_2} \cos\left( q{_1} + q{_2} \right) \right) - 0.5 \sin\left( q{_1} + q{_2} \right) \left( 0.5 q\dot{_1} \left( \cos\left( q{_1} \right) + 0.5 \cos\left( q{_1} + q{_2} \right) \right) + 0.25 q\dot{_2} \cos\left( q{_1} + q{_2} \right) \right) \right) + 14.715 \sin\left( q{_1} \right) + 4.905 \sin\left( q{_1} + q{_2} \right) \\
q\dot{_2} \left( 2 \cos\left( q{_1} + q{_2} \right) \left( - 0.125 q\dot{_1} \sin\left( q{_1} + q{_2} \right) + 0.25 q\dot{_1} \left( \sin\left( q{_1} \right) + 0.5 \sin\left( q{_1} + q{_2} \right) \right) \right) - 0.25 \sin\left( q{_1} + q{_2} \right) \left( q\dot{_1} \left( \cos\left( q{_1} \right) + 0.5 \cos\left( q{_1} + q{_2} \right) \right) + 0.5 q\dot{_2} \cos\left( q{_1} + q{_2} \right) \right) + 0.5 \sin\left( q{_1} + q{_2} \right) \left( 0.25 q\dot{_1} \cos\left( q{_1} + q{_2} \right) + 0.25 q\dot{_2} \cos\left( q{_1} + q{_2} \right) \right) + 0.25 \sin\left( q{_1} + q{_2} \right) \left( 0.5 q\dot{_1} \cos\left( q{_1} + q{_2} \right) + 0.5 q\dot{_2} \cos\left( q{_1} + q{_2} \right) \right) - 0.5 \sin\left( q{_1} + q{_2} \right) \left( 0.5 q\dot{_1} \left( \cos\left( q{_1} \right) + 0.5 \cos\left( q{_1} + q{_2} \right) \right) + 0.25 q\dot{_2} \cos\left( q{_1} + q{_2} \right) \right) \right) + 4.905 \sin\left( q{_1} + q{_2} \right) - \left( q\dot{_1} \left( \cos\left( q{_1} \right) + 0.5 \cos\left( q{_1} + q{_2} \right) \right) + 0.5 q\dot{_2} \cos\left( q{_1} + q{_2} \right) \right) \left( - 0.25 q\dot{_1} \sin\left( q{_1} + q{_2} \right) - 0.25 q\dot{_2} \sin\left( q{_1} + q{_2} \right) \right) - \left( q\dot{_1} \left( \sin\left( q{_1} \right) + 0.5 \sin\left( q{_1} + q{_2} \right) \right) + 0.5 q\dot{_2} \sin\left( q{_1} + q{_2} \right) \right) \left( 0.25 q\dot{_1} \cos\left( q{_1} + q{_2} \right) + 0.25 q\dot{_2} \cos\left( q{_1} + q{_2} \right) \right) - \left( 0.5 q\dot{_1} \cos\left( q{_1} + q{_2} \right) + 0.5 q\dot{_2} \cos\left( q{_1} + q{_2} \right) \right) \left( 0.5 q\dot{_1} \left( \sin\left( q{_1} \right) + 0.5 \sin\left( q{_1} + q{_2} \right) \right) + 0.25 q\dot{_2} \sin\left( q{_1} + q{_2} \right) \right) - \left( - 0.5 q\dot{_1} \sin\left( q{_1} + q{_2} \right) - 0.5 q\dot{_2} \sin\left( q{_1} + q{_2} \right) \right) \left( 0.5 q\dot{_1} \left( \cos\left( q{_1} \right) + 0.5 \cos\left( q{_1} + q{_2} \right) \right) + 0.25 q\dot{_2} \cos\left( q{_1} + q{_2} \right) \right) \\
\end{array}
\right]
\end{equation}
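For context, the expressions above are the standard Euler-Lagrange bookkeeping (this note is added for clarity and is not computed in the notebook):

$$ \frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0
\quad\Longrightarrow\quad
M(q)\,\ddot{q} + C(q, \dot{q}) = 0
\quad\Longrightarrow\quad
\ddot{q} = -M(q)^{-1}\,C(q, \dot{q}), $$

where $M$ is the Hessian block of $L$ with respect to $\dot{q}$ extracted above and $C$ collects the remaining velocity and gravity terms. This is why the next cell assembles the state derivative as $\dot{x} = [\dot{q};\; M^{-1}(-C)]$, optionally with a joint-friction term added.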
```julia
# dynamics
# ẋ = [q̇; M \ (-1.0 * C)]
ẋ = [q̇; M \ (-0.5 * q̇ -1.0 * C)] # joint friction
# ẋ = [q̇; M \ (-1.0 * (q - [π / 10; 0.0]) -1.0 * C)] # spring
ẋ = simplify.(ẋ)
ẋ_exp = Symbolics.build_function(ẋ, q, q̇)
dynamics = eval(ẋ_exp[1])
```
#22 (generic function with 1 method)
```julia
# save dynamics function
path = joinpath(pwd(), "dynamics.jld2")
# @save path ẋ_exp
# @load path ẋ_exp
```
"/home/taylor/Research/symbolics_double_pendulum/dynamics.jld2"
```julia
# DifferentialEquations.jl
function dynamics!(ẋ, x, p, t)
ẋ .= dynamics(view(x, 1:n), view(x, n .+ (1:n)))
end
# simulate
x0 = [0.5 * π; 0.0; 0.0; 0.0]
tspan = (0.0, 10.0)
dt = 0.01
prob = ODEProblem(dynamics!, x0, tspan);
```
```julia
sol = solve(prob, Tsit5(), adaptive = false, dt = dt);
```
```julia
# MeshCat.jl
include(joinpath(pwd(), "visuals.jl"))
vis = Visualizer()
render(vis)
```
┌ Info: MeshCat server started. You can open the visualizer by visiting the following URL in your browser:
│ http://127.0.0.1:8704
└ @ MeshCat /home/taylor/.julia/packages/MeshCat/GlCMx/src/visualizer.jl:73
```julia
visualize_double_pendulum!(vis, model, sol.u, Δt = dt)
```
---
*Source: double pendulum.ipynb (thowell/symbolics_double_pendulum, MIT)*
```python
# Notebook imports and packages
import numpy as np
from sympy import symbols, diff
# symbols is for "turning variables into math symbols"
# diff differentiates functions (when using symbols).
# Go through the rest of the code to understand better.
```
# Partial Derivatives and Symbolic Computation
$$f(x, y)=\frac{1}{3^{-x^2-y^2}+1}$$
<hr color="lightblue">
$$\frac{\partial f(x, y)}{\partial x}=\frac{2x\ln \left(3\right)\cdot \:3^{-x^2-y^2}}{\left(3^{-x^2-y^2}+1\right)^2}$$
<hr color="lightblue">
$$\frac{\partial f(x, y)}{\partial y}=\frac{2y\ln \left(3\right)\cdot \:3^{-y^2-x^2}}{\left(3^{-x^2-y^2}+1\right)^2}$$
```python
def f(x, y):
return 1/(3**(-(x**2)-(y**2)) + 1)
def fpx(x, y):
    num = 2 * x * np.log(3) * 3**(-x**2-y**2)   # numerator of df/dx
    den = (3**(-x**2-y**2) + 1)**2              # denominator
    return num/den
def fpy(x, y):
    num = 2 * y * np.log(3) * 3**(-x**2-y**2)   # numerator of df/dy
    den = (3**(-x**2-y**2) + 1)**2              # denominator
    return num/den
# Derivatives here are calculated with Symbolab. It's the same result as with sympy.
```
```python
a, b = symbols('x, y') # Variable 'a' will now represent 'x' and 'b' will represent 'y'.
f(a, b) # Our cost function.
```
$\displaystyle \frac{1}{3^{- x^{2} - y^{2}} + 1}$
```python
diff(f(a,b),a) # Differentiates f(x,y) with respect to 'x'.
```
$\displaystyle \frac{2 \cdot 3^{- x^{2} - y^{2}} x \log{\left(3 \right)}}{\left(3^{- x^{2} - y^{2}} + 1\right)^{2}}$
```python
diff(f(a,b),b) # Differentiates f(x,y) with respect to 'y'.
```
$\displaystyle \frac{2 \cdot 3^{- x^{2} - y^{2}} y \log{\left(3 \right)}}{\left(3^{- x^{2} - y^{2}} + 1\right)^{2}}$
```python
print(diff(f(a,b),a))
```
2*3**(-x**2 - y**2)*x*log(3)/(3**(-x**2 - y**2) + 1)**2
```python
# Cost can be calculated with both sympy and directly.
# In this function we have to differentiate with respect to both 'x' and 'y'.
cost, cost2, dfx, dfy = (
f(1.8,1.0),
f(a,b).evalf(subs={a:1.8,b:1.0}),
diff(f(a,b),a).evalf(subs={a:1.8,b:1.0}), # Differentiates f(x,y) with respect to 'x'.
diff(f(a,b),b).evalf(subs={a:1.8,b:1.0}) # Differentiates f(x,y) with respect to 'y'.
)
```
```python
cost, cost2, dfx, dfy
```
(0.9906047940325824, 0.990604794032582, 0.0368089716197505, 0.0204494286776392)
```python
print("Value of f(x,y) at (x=1.8, y=1.0):\t", cost)
print("Value of df(x,y)/dx at (x=1.8, y=1.0):\t", dfx)
print("Value of df(x,y)/dy at (x=1.8, y=1.0):\t", dfy)
```
Value of f(x,y) at (x=1.8, y=1.0): 0.9906047940325824
Value of df(x,y)/dx at (x=1.8, y=1.0): 0.0368089716197505
Value of df(x,y)/dy at (x=1.8, y=1.0): 0.0204494286776392
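One practical note, added here as a sketch and assuming the symbols `a, b` and the function `f` defined above: repeatedly calling `evalf`/`subs` on symbolic expressions (for example inside a gradient descent loop) is slow, and a common pattern is to compile the derivatives into plain NumPy functions with `sympy.lambdify`:
```python
from sympy import lambdify

# Compile the symbolic partial derivatives into ordinary numerical functions.
dfx_func = lambdify([a, b], diff(f(a, b), a), modules="numpy")
dfy_func = lambdify([a, b], diff(f(a, b), b), modules="numpy")

# They now accept plain floats (or arrays), e.g. at the same point as above:
print(dfx_func(1.8, 1.0), dfy_func(1.8, 1.0))
```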
---
*Source: Section_04/Example_04_(05-08)/06-SymPy_derivatives.ipynb (ArielMAJ/Data-Science-and-Machine-Learning_Bootcamp, MIT)*
### Code setup
```python
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns
sns.set_context("talk", font_scale=1.5, rc={"lines.linewidth": 2.5})
sns.set_style("whitegrid")
from IPython.display import HTML
from matplotlib import animation
%matplotlib inline
# Don't tinker, or do
#%matplotlib nbagg
# from matplotlib import rcParams
#rcParams['font.family']='sans-serif'
#rcParams('font', serif='Helvetica Neue')
# rcParams['text.usetex']= True
#rcParams.update({'font.size': 22})
```
## Solving initial (-boundary) value problems
Time is something we all have to deal with and manage. Within the domain of scientific applications, this usually defaults to solving a (system of) initial value problem, specified in the following form:
$$ \frac{\partial \mathbf{u}}{\partial t} = \mathbf{F}(\mathbf{u}, t)$$
as a partial differential equation (PDE). As we will see, numerical discretization turns the above PDE into a manageable (computable) form, wherein
$$ \mathbf{u}^{n+1} = \sum_{i=0}^{k} \alpha_i \mathbf{u}^{n-i} + \sum_{j=0}^{r} \beta_j \frac{\partial \mathbf{u}^{n-j}}{\partial t} $$
which for $k=1$ and $r=0$ looks something along these lines:
$$ \mathbf{u}^{n+1} = \alpha_0 \mathbf{u}^{n} + \alpha_1 \mathbf{u}^{n-1} + \beta_0 \frac{\partial \mathbf{u}^{n}}{\partial t} $$
which takes the current ($n$) and past ($n-1$) information to predict the state at the next (future, $n+1$) iteration.
Notice that $\mathbf{F}(\mathbf{u}, t)$ can also involve solving a boundary value problem, similar to our soft mechanics equations.
```python
class TimeStepper(object):
""" Class for wrapping a timestepper function with other goodies
"""
def __init__(self, i_x, i_v, i_dt, i_T):
""" Initialize the timestepper
"""
# What forcing function are we using?
self.forcing = None
# What timestepping algorithm are we using?
self.timestepper = None
self.nsteps = int(i_T/i_dt)
self.dt = i_dt
if len(i_x) == len(i_v):
# Same length, corresponding to index, data makes sense
self.ndim = len(i_x)
self.x = np.zeros((self.nsteps, self.ndim))
self.v = np.zeros((self.nsteps, self.ndim))
# Set initial values
self.x[0] = np.array(i_x)
self.v[0] = np.array(i_v)
else:
raise RuntimeError('Length of initial velocity and position \
not matching')
def set_forcing_function(self, t_func):
""" Set forcing function to be used
"""
if type(t_func) is not str:
try:
# If not string, try and evaluate the function
                t_func(self.x[0])
# If the function works, set this function as forcing
self.forcing = t_func
except:
raise RuntimeError('Provided function cannot be evaluated')
else:
if t_func=="harmonic":
def harmonic(x):
return -x
self.forcing = harmonic
def error_norm(self):
""" For testing convergence, defined as a special function """
if self.forcing.__name__ == 'harmonic':
time_arr = np.arange(0.0, self.dt*self.nsteps, self.dt)
analytical_solution = self.x[0, :]*np.exp(-time_arr.reshape(-1,1))
return np.linalg.norm(analytical_solution - self.x, np.inf)
def timestep_using(self, timestepper):
""" Provides access to internal variables x and v
Applies func over and over again till number of timesteps reached.
"""
self.timestepper = timestepper.__name__
for i in range(self.nsteps - 1):
# Do one cycle
self.x[i+1], self.v[i+1] = timestepper(self.dt, self.x[i], self.v[i], self.forcing)
def draw(self, renderer):
""" Draw the matplotlib canvas with the portrait we want
"""
if self.timestepper:
# If there is a timestepper, then there is numerical data
# Plot them
renderer.scatter(self.x[0], self.v[0], c='k',marker='o')
renderer.plot(self.x, self.v, label=self.timestepper)
x_min, x_max = np.min(self.x), np.max(self.x)
v_min, v_max = np.min(self.v), np.max(self.v)
extension = 0.5
renderer.set_xlim(min(0.0, x_min) - extension, max(0.0, x_max) + extension)
            renderer.set_ylim(min(0.0, v_min) - extension, max(0.0, v_max) + extension)
renderer.legend()
else:
# If there is no timestepper, you are looking for analytical data,
# if it exists
# Plot them instead
if self.forcing.__name__ == 'harmonic':
true_sol = plt.Circle((0, 0), 1.0, fill=None, edgecolor='k', linestyle='--', lw=4)
renderer.set_xlim(-1.5, 1.5)
renderer.set_ylim(-1.5, 1.5)
renderer.add_artist(true_sol)
# raise RuntimeError('No information found. Did you forget to run your timestepper?')
renderer.set_xlabel(r'$x$')
renderer.set_ylabel(r'$v$')
renderer.set_title(r'${}$'.format(self.forcing.__name__))
renderer.set_aspect('equal')
def draw_sol(self, renderers):
""" Draw the matplotlib canvas with the solution we want
"""
time_arr = np.arange(0.0, self.dt*self.nsteps, self.dt)
if self.timestepper:
# If there is a timestepper, then there is numerical data
# Plot them
renderers[0].plot(time_arr/2.0/np.pi, self.x[:, 0], '-o', label='position')
renderers[1].plot(time_arr/2.0/np.pi, self.v[:, 0], '-o', label='velocity')
renderers[2].plot(time_arr/2.0/np.pi, self.x[:, 0]**2 + self.v[:, 0]**2, '-o', label='energy')
extension = 0.5
# Plot almost 5 cycles
renderers[0].set_xlim(-0.05, 5.05)
renderers[0].set_ylim(-1.05, 1.05)
renderers[1].set_ylim(-1.05, 1.05)
renderers[2].set_ylim(0.0, 10.0)
# renderer_one.legend()
else:
# If there is no timestepper, you are looking for analytical data,
# if it exists
# Plot them instead
if self.forcing.__name__ == 'harmonic':
analytical_pos = np.cos(time_arr.reshape(-1,))
analytical_vel = -np.sin(time_arr.reshape(-1,))
renderers[0].plot(time_arr/2.0/np.pi, analytical_pos, 'k--', label=self.timestepper)
renderers[1].plot(time_arr/2.0/np.pi, analytical_vel, 'k--', label=self.timestepper)
renderers[2].plot(time_arr/2.0/np.pi, analytical_pos**2 + analytical_vel**2, 'k--', label='energy')
renderers[0].set_xlim(-0.05, 5.05)
renderers[0].set_ylim(-1.05, 1.05)
renderers[1].set_ylim(-1.05, 1.05)
renderers[2].set_ylim(0.0, 5.0)
# raise RuntimeError('No information found. Did you forget to run your timestepper?')
renderers[0].set_ylabel(r'$x(t)$')
renderers[1].set_ylabel(r'$v(t)$')
renderers[2].set_ylabel(r'$E(t)$')
renderers[2].set_xlabel(r'$t/T$')
renderers[0].set_title(r'Analytical solution')
def animate(self, fig, renderer, color):
""" Access to the animate class from matplotlib
"""
self.timestepper = None
# animation function. This is called sequentially
def animate_in(i):
renderer.clear()
self.draw(renderer)
for j in range(i + 1):
renderer.plot([self.x[j], self.x[j+1]], [self.v[j], self.v[j+1]], marker='o', c=color, alpha=0.5**((i-j)/20.))
# call the animator. blit=True means only re-draw the parts that have changed.
anim = animation.FuncAnimation(fig, animate_in, frames=100, interval=5)
return anim
```
## Time-stepping routines
A variety of numerical algorithms for solving such initial value problems exist, and we are going to look at three main ones : (a) Euler method (or Euler forward) (b) Runge-Kutta-4/RK4 (multi-stage methods) and (c) Position Verlet (symplectic, area preserving) integrators, although others will be discussed on the way. We will attempt to compare these methods in terms of their ease (in understanding/implementation), order of accuracy (in comparison to the time step $dt$), function evaluations for each step $dt$ and some *special* properties.
## Order of accuracy of time-steppers
What's **order** of accuracy? Order of accuracy quantifies the rate of convergence of a numerical approximation of a differential equation to the exact solution.
The numerical solution $\mathbf{u}$ is said to be $n^{\text{th}}$-order accurate if the error, $e(dt):=\lVert\tilde{\mathbf{u}}-\mathbf{u} \rVert$ is proportional to the step-size $ dt $, to the $n^{\text{th}}$ power. That is
$$ e(dt)=\lVert\tilde{\mathbf{u}}-\mathbf{u} \rVert\leq C(dt)^{n} $$
Details of this are given in the slides. Here, we focus on the implementation of these time-steppers for a simple ODE and figure out the order of convergence. The model problem that we deal with is
$$ \frac{dy}{dt} = -y \quad,\quad y(0) = 1$$
which as we know has the analytical solution $ \tilde{y}(t) = e^{-t} $, so error can be calculated.
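As a quick way to read the convergence data we are about to generate (a small sketch, independent of the `TimeStepper` class above): since $e(dt) \approx C\,dt^{n}$, the observed order can be estimated from the errors at two step sizes as $n \approx \log_2\left(e(2\,dt)/e(dt)\right)$.
```python
import numpy as np

def observed_order(e_coarse, e_fine):
    """Estimate the order n from errors measured at step sizes 2*dt and dt."""
    return np.log2(e_coarse / e_fine)

# For example, halving dt cuts a second-order error by roughly a factor of 4:
print(observed_order(4.0e-3, 1.0e-3))  # ~2.0
```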
### Euler's method
$$ x^{n+1} = x^{n} + f(x^{n})dt $$
```python
# Question
def euler_fwd_ooa(dt, x, v, force_rule):
"""Does one iteration/timestep using Forward Euler scheme
Parameters
----------
dt : float
Simulation timestep in seconds
x : float/array-like
Quantity of interest / position of COM
v : float/array-like
Quantity of interest / velocity of COM
force_rule : ufunc
A function, f, that takes one argument and
returns the instantaneous forcing
Returns
-------
x_n : float/array-like
The quantity of interest at the Next time step
v_n : float/array-like
The quantity of interest at the Next time step
"""
# Fill in
x_n = x
v_n = v
return x_n, v_n
```
```python
# Answer
def euler_fwd_ooa(dt, x, v, force_rule):
"""Does one iteration/timestep using Forward Euler scheme
Parameters
----------
dt : float
Simulation timestep in seconds
x : float/array-like
Quantity of interest / position of COM
v : float/array-like
Quantity of interest / velocity of COM
force_rule : ufunc
A function, f, that takes one argument and
returns the instantaneous forcing
Returns
-------
x_n : float/array-like
The quantity of interest at the Next time step
v_n : float/array-like
The quantity of interest at the Next time step
"""
# Fill in
x_n = x + dt * force_rule(x)
v_n = v
return x_n, v_n
```
### Euler's method (backward, implicit)
$$ x^{n+1} = x^{n} + \underbrace{f(x^{n+1})dt}_{\text{Evaluated at next timestep!}} $$
```python
# Answer
def euler_bwd_ooa(dt, x, v, force_rule):
"""Does one iteration/timestep using Backward Euler scheme
Parameters
----------
dt : float
Simulation timestep in seconds
x : float/array-like
Quantity of interest / position of COM
v : float/array-like
Quantity of interest / velocity of COM
force_rule : ufunc
A function, f, that takes one argument and
returns the instantaneous forcing
Returns
-------
x_n : float/array-like
The quantity of interest at the Next time step
v_n : float/array-like
The quantity of interest at the Next time step
"""
# Fill in
if force_rule.__name__ == "harmonic":
x_n = x /(1 + dt)
else:
raise NotImplementedError("Cannot do implicit timemarching")
v_n = v
return x_n, v_n
```
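For forcing functions other than the linear `harmonic` case, the implicit update $x^{n+1} = x^{n} + f(x^{n+1})\,dt$ has no closed form. One simple way to solve it (a hypothetical helper, not part of the assignment) is a fixed-point iteration:
```python
def euler_bwd_fixed_point(dt, x, force_rule, iters=50):
    """Solve x_new = x + dt * f(x_new) by repeated substitution.
    A sketch: converges for sufficiently small dt when f is well-behaved."""
    x_new = x  # initial guess: the previous value
    for _ in range(iters):
        x_new = x + dt * force_rule(x_new)
    return x_new
```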
### Midpoint method
$$ \begin{align}
x^{*}&= x^{n} + f({x}^{n}) \cdot \frac{dt}{2} \\ x^{n+1} &= x^{n} + f({x}^{*}) \cdot dt \\
\end{align}
$$
```python
# Question
def midpoint_method_ooa(dt, x , v, force_rule):
"""Does one iteration/timestep using the midpoint scheme
Parameters
----------
dt : float
Simulation timestep in seconds
x : float/array-like
Quantity of interest / position of COM
v : float/array-like
Quantity of interest / velocity of COM
force_rule : ufunc
A function, f, that takes one argument and
returns the instantaneous forcing
Returns
-------
x_n : float/array-like
The quantity of interest at the Next time step
v_n : float/array-like
The quantity of interest at the Next time step
"""
# Fill in
x_n = x
v_n = v
return x_n, v_n
```
```python
# Answer
def midpoint_method_ooa(dt, x , v, force_rule):
"""Does one iteration/timestep using the midpoint scheme
Parameters
----------
dt : float
Simulation timestep in seconds
x : float/array-like
Quantity of interest / position of COM
v : float/array-like
Quantity of interest / velocity of COM
force_rule : ufunc
A function, f, that takes one argument and
returns the instantaneous forcing
Returns
-------
x_n : float/array-like
The quantity of interest at the Next time step
v_n : float/array-like
The quantity of interest at the Next time step
"""
# Fill in
temp_x = x + 0.5*dt*force_rule(x)
x_n = x + dt * force_rule(temp_x)
v_n = v
return x_n, v_n
```
### Runge Kutta-4
$$ \begin{align}
{k}_1 &= {f}({x}^{n}) \cdot dt \\
{k}_2 &= {f}({x}^{n} + 0.5 \cdot {k}_1)\cdot dt \\
{k}_3 &= {f}({x}^{n} + 0.5 \cdot {k}_2)\cdot dt \\
{k}_4 &= {f}({x}^{n} + {k}_3)\cdot dt \\
{x}^{n+1} &= {x}^{n} + \frac{{k}_1+2{k}_2+2{k}_3+{k}_4}{6}
\end{align} $$
```python
# Question
def rk4_ooa(dt, x, v, force_rule):
"""Does one iteration/timestep using the RK4 scheme
Parameters
----------
dt : float
Simulation timestep in seconds
x : float/array-like
Quantity of interest / position of COM
v : float/array-like
Quantity of interest / velocity of COM
force_rule : ufunc
A function, f, that takes one argument and
returns the instantaneous forcing
Returns
-------
x_n : float/array-like
The quantity of interest at the Next time step
v_n : float/array-like
The quantity of interest at the Next time step
"""
# Fill in
x_n = x
v_n = v
return x_n, v_n
```
```python
# Answer
def rk4_ooa(dt, x, v, force_rule):
"""Does one iteration/timestep using the RK4 scheme
Parameters
----------
dt : float
Simulation timestep in seconds
x : float/array-like
Quantity of interest / position of COM
v : float/array-like
Quantity of interest / velocity of COM
force_rule : ufunc
A function, f, that takes one argument and
returns the instantaneous forcing
Returns
-------
x_n : float/array-like
The quantity of interest at the Next time step
v_n : float/array-like
The quantity of interest at the Next time step
"""
# Fill in
# Stage 1
k_1 = dt*force_rule(x)
# Stage 2
k_2 = dt * force_rule(x + 0.5*k_1)
# Stage 3
k_3 = dt * force_rule(x + 0.5*k_2)
# Stage 4
k_4 = dt * force_rule(x + k_3)
x_n = x + (1./6.)*(k_1 + 2.*k_2 + 2.* k_3 + k_4)
v_n = v
return x_n, v_n
```
Given your implementation of these schemes, let's apply them to the model problem and check their order of accuracy.
```python
# Initial conditions
i_x = [1.0] # Initial position
i_v = [0.0] # Initial velocity (not required for this problem)
f_T = 10.0 # Final time
# All functions that you coded up
func_list = [euler_fwd_ooa, euler_bwd_ooa, midpoint_method_ooa, rk4_ooa]
# Time steps and associated errors from 2^0 to 2^(-10)
dt_steps = np.arange(11, dtype=np.int16)
errors_list = [[None for i in dt_steps] for func in func_list]
# Run simulations and collect errors
for i_func, func in enumerate(func_list):
for i_step in dt_steps:
dt = (2.)**(-i_step)
b = TimeStepper(i_x, i_v, dt, f_T)
b.set_forcing_function('harmonic')
b.timestep_using(func)
errors_list[i_func][i_step] = b.error_norm()
```
```python
# Draw error plots in a log-log plot
fig, ax = plt.subplots(1,1, figsize=(10, 10))
# x axis is time, y axis is error
for i_func, func in enumerate(func_list):
ax.plot((2.)**(-dt_steps), errors_list[i_func], 'o-', label=func.__name__)
# Draw helpful slope lines to compare
slopes_list = [None for func in func_list]
slopes_list[0] = 0.1 * (2.)**(-dt_steps)
slopes_list[1] = 0.1 * (2.)**(-dt_steps)
slopes_list[2] = 0.05 * (2.)**(-2*dt_steps)
slopes_list[3] = 0.01 * (2.)**(-4*dt_steps)
for slope_lines in slopes_list:
ax.plot((2.)**(-dt_steps), slope_lines, 'k--')
# Make it readable
ax.set_xlabel(r'$dt$')
ax.set_ylabel(r'$e(dt)$')
ax.set_title('Order of accuracy')
ax.set_yscale('log')
ax.set_xscale('log')
ax.legend()
# fig.savefig('ooa.pdf')
# Save data if you need to plot it in another application
SAVE_FLAG = True
if SAVE_FLAG:
import os
DATA_PATH = os.path.join(os.getcwd(),'data')
if not os.path.isdir(DATA_PATH):
os.makedirs(DATA_PATH)
for i_func, func in enumerate(func_list):
data_arr = np.vstack(((2.)**(-dt_steps), np.array(errors_list[i_func])))
np.savetxt(os.path.join(DATA_PATH, func.__name__ + '.txt'), data_arr.T, delimiter='\t')
data_arr = np.vstack(((2.)**(-dt_steps), (slopes_list[i_func])))
np.savetxt(os.path.join(DATA_PATH, func.__name__ + '_slopes.txt'), data_arr.T, delimiter='\t')
# print(data_arr.shape)
# ax.plot((2.)**(-dt_steps), errors_list[i_func], 'o-', label=func.__name__)
```
Some questions to ponder about
- How do you interpret this diagram?
- Are these schemes robust to coding errors (i.e., replace `x` by `x*` and see what happens)?
- Are these bound to exhibit the same behavior, irrespective of $f$?
## Symmetry/Symplectic/Energy preserving characteristics
### Harmonic oscillator
Let's consider the equations governing the dynamics of the harmonic oscillator next. What are harmonic oscillators?
Any undamped linear spring-mass system! (*This statement is not completely true, but for this class it is*).
(Figure: a spring-mass oscillator; credits: user `kma`, https://tex.stackexchange.com/a/58448)
This system is modeled by $\ddot{x} + x = 0$ (parameters are normalized). We first decompose this second-order ODE into a system of two first-order ODEs (so that we can use the same first-order ODE machinery seen before) by introducing the transformation $y = \dot{x}$ (if $x$ is the position, $y = \dot{x}$ is the velocity). Doing so gives rise to the following linear system, whose dynamics in time we need to uncover:
$$ \begin{pmatrix} \dot{x} \\ \dot{y} \end{pmatrix} = \begin{bmatrix} 0 & 1\\-1 & 0 \end{bmatrix} \begin{pmatrix} {x} \\ {y} \end{pmatrix} $$
wherein we consider the initial conditions to be $ x(0) = 1, y(0) = \dot{x}(0) = 0 $. The analytical solution of this system of ODEs is $ x(t) = \cos(t), y(t) = -\sin(t) $, as seen in class. Notice that if we draw the phase plane of position (in x-axis) and velocity (in y-axis), then
$$ x^2(t) + y^2(t) \equiv 1$$
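This invariant follows directly from the system above (a one-line check added for clarity):

$$ \frac{\mathrm{d}}{\mathrm{d}t}\left(x^2 + y^2\right) = 2x\dot{x} + 2y\dot{y} = 2xy + 2y(-x) = 0, $$

so trajectories stay on the circle through the initial condition; this is exactly the quantity $E(t)$ monitored in the plots below.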
```python
i_x = [1.0]
i_v = [0.0]
# # First set, fine
dt = 1.0
f_T = 62.8
# Second set, very coarse
# dt = 5.0
# f_T = 628.0
fig2, ax2_list = plt.subplots(3,1, figsize=(10, 10), sharex=True)
fig, ax = plt.subplots(1,1, figsize=(10, 10))
a = TimeStepper(i_x, i_v, dt, f_T)
a.set_forcing_function('harmonic')
a.draw(ax)
a.draw_sol(ax2_list)
# fig.savefig('true_solution.pdf')
```
### Euler's forward method
For a system of ODEs
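As a reference (this matches the answer code further below), the update for the system reads
$$ \begin{align}
\mathbf{x}^{n+1} &= \mathbf{x}^{n} + dt \cdot \mathbf{y}^{n} \\
\mathbf{y}^{n+1} &= \mathbf{y}^{n} + dt \cdot \mathbf{f}\left( \mathbf{x}^{n}\right)
\end{align} $$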
```python
# Question
def euler_fwd(dt, x, v, force_rule):
"""Does one iteration/timestep using the Euler forward scheme
Parameters
----------
dt : float
Simulation timestep in seconds
x : float/array-like
Quantity of interest / position of COM
v : float/array-like
Quantity of interest / velocity of COM
force_rule : ufunc
A function, f, that takes one argument and
returns the instantaneous forcing
Returns
-------
x_n : float/array-like
The quantity of interest at the Next time step
v_n : float/array-like
The quantity of interest at the Next time step
"""
x_n = x
v_n = v
return x_n, v_n
a.timestep_using(euler_fwd)
```
```python
# Answer
def euler_fwd(dt, x, v, force_rule):
"""Does one iteration/timestep using the Euler forward scheme
Parameters
----------
dt : float
Simulation timestep in seconds
x : float/array-like
Quantity of interest / position of COM
v : float/array-like
Quantity of interest / velocity of COM
force_rule : ufunc
A function, f, that takes one argument and
returns the instantaneous forcing
Returns
-------
x_n : float/array-like
The quantity of interest at the Next time step
v_n : float/array-like
The quantity of interest at the Next time step
"""
x_n = x + dt * v
v_n = v + dt * force_rule(x)
return x_n, v_n
a.timestep_using(euler_fwd)
```
```python
a.draw_sol(ax2_list)
fig2
```
```python
a.draw(ax)
fig
# fig.savefig('euler_fwd_1.0.pdf')
```
### Position verlet scheme for second order ODE
$$ \begin{align}
\mathbf{x}^* &= \mathbf{x}^n + 0.5\cdot dt \cdot \mathbf{y}^n \\
\mathbf{y}^{n+1} &= \mathbf{y}^n + dt \cdot \mathbf{f}\left( \mathbf{x}^*\right) \\
\mathbf{x}^{n+1} &= \mathbf{x}^* + 0.5\cdot dt \cdot \mathbf{y}^{n+1}
\end{align} $$
We only have **one** functional evaluation, while obtaining **second**-order accuracy in position and velocity.
```python
# Question
def position_verlet(dt, x, v, force_rule):
"""Does one iteration/timestep using the Position verlet scheme
Parameters
----------
dt : float
Simulation timestep in seconds
x : float/array-like
Quantity of interest / position of COM
v : float/array-like
Quantity of interest / velocity of COM
force_rule : ufunc
A function, f, that takes one argument and
returns the instantaneous forcing
Returns
-------
x_n : float/array-like
The quantity of interest at the Next time step
v_n : float/array-like
The quantity of interest at the Next time step
"""
v_n = v
x_n = x
return x_n, v_n
a.timestep_using(position_verlet)
```
```python
# Answer
def position_verlet(dt, x, v, force_rule):
"""Does one iteration/timestep using the Position verlet scheme
Parameters
----------
dt : float
Simulation timestep in seconds
x : float/array-like
Quantity of interest / position of COM
v : float/array-like
Quantity of interest / velocity of COM
force_rule : ufunc
A function, f, that takes one argument and
returns the instantaneous forcing
Returns
-------
x_n : float/array-like
The quantity of interest at the Next time step
v_n : float/array-like
The quantity of interest at the Next time step
"""
temp_x = x + 0.5*dt*v
v_n = v + dt * force_rule(temp_x)
x_n = temp_x + 0.5 * dt * v_n
return x_n, v_n
a.timestep_using(position_verlet)
```
```python
a.draw_sol(ax2_list)
fig2
```
```python
a.draw(ax)
fig
```
### Velocity verlet scheme for second order ODE
$$ \begin{align}
\mathbf{y}^* &= \mathbf{y}^n + 0.5\cdot dt \cdot \mathbf{f}\left( \mathbf{x}^n\right) \\
\mathbf{x}^{n+1} &= \mathbf{x}^{n} + dt \cdot \mathbf{y}^{*} \\
\mathbf{y}^{n+1} &= \mathbf{y}^* + 0.5\cdot dt \cdot \mathbf{f}\left( \mathbf{x}^{n+1}\right) \\
\end{align} $$
Note that we now have two functional evaluations, while retaining second-order accuracy in position and velocity.
```python
# Question
def velocity_verlet(dt, x, v, force_rule):
"""Does one iteration/timestep using the Velocity verlet scheme
Parameters
----------
dt : float
Simulation timestep in seconds
x : float/array-like
Quantity of interest / position of COM
v : float/array-like
Quantity of interest / velocity of COM
force_rule : ufunc
A function, f, that takes one argument and
returns the instantaneous forcing
Returns
-------
x_n : float/array-like
The quantity of interest at the Next time step
v_n : float/array-like
The quantity of interest at the Next time step
"""
x_n = x
v_n = v
return x_n, v_n
a.timestep_using(velocity_verlet)
```
```python
# Answer
def velocity_verlet(dt, x, v, force_rule):
"""Does one iteration/timestep using the Velocity verlet scheme
Parameters
----------
dt : float
Simulation timestep in seconds
x : float/array-like
Quantity of interest / position of COM
v : float/array-like
Quantity of interest / velocity of COM
force_rule : ufunc
A function, f, that takes one argument and
returns the instantaneous forcing
Returns
-------
x_n : float/array-like
The quantity of interest at the Next time step
v_n : float/array-like
The quantity of interest at the Next time step
"""
temp_v = v + 0.5*dt*force_rule(x)
x_n = x + dt * temp_v
v_n = temp_v + 0.5* dt * force_rule(x_n)
return x_n, v_n
a.timestep_using(velocity_verlet)
```
```python
a.draw_sol(ax2_list)
fig2
```
```python
a.draw(ax)
fig
```
### Euler-Cromer scheme
The second variant of semi-implicit Euler/Euler-Cromer scheme is
$$ \begin{align}
\mathbf{x}^{n+1} &= \mathbf{x}^{n} + dt \cdot \mathbf{y}^{n} \\
\mathbf{y}^{n+1} &= \mathbf{y}^{n} + dt \cdot \mathbf{f}\left( \mathbf{x}^{n+1}\right) \\
\end{align} $$
```python
# Question
def euler_cromer(dt, x, v, force_rule):
"""Does one iteration/timestep using the Euler Cromer scheme
Parameters
----------
dt : float
Simulation timestep in seconds
x : float/array-like
Quantity of interest / position of COM
v : float/array-like
Quantity of interest / velocity of COM
force_rule : ufunc
A function, f, that takes one argument and
returns the instantaneous forcing
Returns
-------
x_n : float/array-like
The quantity of interest at the Next time step
v_n : float/array-like
The quantity of interest at the Next time step
"""
x_n = x
v_n = v
return x_n, v_n
# a.timestep_using(euler_cromer)
```
```python
# Answer
def euler_cromer(dt, x, v, force_rule):
"""Does one iteration/timestep using the Euler Cromer scheme
Parameters
----------
dt : float
Simulation timestep in seconds
x : float/array-like
Quantity of interest / position of COM
v : float/array-like
Quantity of interest / velocity of COM
force_rule : ufunc
A function, f, that takes one argument and
returns the instantaneous forcing
Returns
-------
x_n : float/array-like
The quantity of interest at the Next time step
v_n : float/array-like
The quantity of interest at the Next time step
"""
x_n = x + dt * v
v_n = v + dt * force_rule(x_n)
return x_n, v_n
# a.timestep_using(euler_cromer)
```
### Runge Kutta-4
For a system of ODEs. Repeated again for convenience
$$ \begin{align}
\mathbf{k}_1 &= \mathbf{f}(\mathbf{x}^{n}) \cdot dt \\
\mathbf{k}_2 &= \mathbf{f}(\mathbf{x}^{n} + 0.5 \cdot \mathbf{k}_1)\cdot dt \\
\mathbf{k}_3 &= \mathbf{f}(\mathbf{x}^{n} + 0.5 \cdot \mathbf{k}_2)\cdot dt \\
\mathbf{k}_4 &= \mathbf{f}(\mathbf{x}^{n} + \mathbf{k}_3)\cdot dt \\
\mathbf{x}^{n+1} &= \mathbf{x}^{n} + \frac{\mathbf{k}_1+2\mathbf{k}_2+2\mathbf{k}_3+\mathbf{k}_4}{6}
\end{align} $$
```python
# Question
def runge_kutta4(dt, x, v, force_rule):
"""Does one iteration/timestep using the RK4 scheme
Parameters
----------
dt : float
Simulation timestep in seconds
x : float/array-like
Quantity of interest / position of COM
v : float/array-like
Quantity of interest / velocity of COM
force_rule : ufunc
A function, f, that takes one argument and
returns the instantaneous forcing
Returns
-------
x_n : float/array-like
The quantity of interest at the Next time step
v_n : float/array-like
The quantity of interest at the Next time step
"""
x_n, v_n = x, v
return x_n, v_n
a.timestep_using(runge_kutta4)
```
```python
# Answer
def runge_kutta4(dt, x, v, force_rule):
"""Does one iteration/timestep using the RK4 scheme
Parameters
----------
dt : float
Simulation timestep in seconds
x : float/array-like
Quantity of interest / position of COM
v : float/array-like
Quantity of interest / velocity of COM
force_rule : ufunc
A function, f, that takes one argument and
returns the instantaneous forcing
Returns
-------
x_n : float/array-like
The quantity of interest at the Next time step
v_n : float/array-like
The quantity of interest at the Next time step
"""
def vector_func(y):
return np.array([y[1], force_rule(y[0])])
# Base
u = np.array([x,v])
# Stage 1
k_1 = dt*vector_func(u)
# Stage 2
k_2 = dt * vector_func(u + 0.5*k_1)
# Stage 3
k_3 = dt * vector_func(u + 0.5*k_2)
# Stage 4
k_4 = dt * vector_func(u + k_3)
u_n = u + (1./6.)*(k_1 + 2.*k_2 + 2.* k_3 + k_4)
return u_n[0], u_n[1]
a.timestep_using(runge_kutta4)
```
```python
a.draw_sol(ax2_list)
fig2
```
```python
a.draw(ax)
fig
# fig.savefig('rk4_1.0.pdf')
```
Some questions to ponder about
- What do you observe as you increase $dt$ in all cases?
- Are symplectic schemes perfect in their description of the physics?
# Video of the maps, please ignore
```python
fig3, ax3 = plt.subplots(1,1, figsize=(10, 10))
i_x = [0.0]
i_v = [1.0]
dt = 1.0
f_T = 700.0
b = TimeStepper(i_x, i_v, dt, f_T)
b.set_forcing_function('harmonic')
b.draw(ax3)
b.timestep_using(position_verlet)
ax3.set_xlim([-1.5, 1.5])
ax3.set_ylim([-1.5, 1.5])
#b.draw(ax2)
# fig2
```
```python
cmap = sns.color_palette()
verlet_color = cmap[0]
rk_color = cmap[2]
anim = b.animate(fig3, ax3, verlet_color)
```
```python
HTML(anim.to_jshtml())
```
```python
# anim.save('verlet.mp4', fps=30,
# extra_args=['-vcodec', 'h264',
# '-pix_fmt', 'yuv420p'])
```
---
*Source: lectures/05_timeintegration/code/time_integrators.ipynb (tp5uiuc/soft_systems_course, MIT)*
# Polynomial Optimization
## Technical note
The "Sum-of-Squares approach" section of this notebook uses features of SumOfSquares.jl and PolyJuMP.jl that are not yet released.
Please do the following to use the "master" branch
```julia
Pkg.checkout("SumOfSquares")
Pkg.checkout("PolyJuMP")
```
You can undo these with the following two lines
```julia
Pkg.free("SumOfSquares")
Pkg.free("PolyJuMP")
```
## Introduction
Consider the polynomial optimization problem of
minimizing the polynomial $x^3 - x^2 + 2xy -y^2 + y^3$
over the polyhedron defined by the inequalities $x \ge 0, y \ge 0$ and $x + y \geq 1$.
```julia
using DynamicPolynomials
@polyvar x y
p = x^3 - x^2 + 2x*y -y^2 + y^3
using SemialgebraicSets
S = @set x >= 0 && y >= 0 && x + y >= 1
p(x=>1, y=>0), p(x=>1//2, y=>1//2), p(x=>0, y=>1)
```
(0, 1//4, 0)
The optimal solutions are $(x, y) = (1, 0)$ and $(x, y) = (0, 1)$ with objective value $0$ but [Ipopt](https://github.com/JuliaOpt/Ipopt.jl/) only finds the local minimum $(1/2, 1/2)$ with objective value $1/4$.
```julia
using JuMP
using Ipopt
m = Model(optimizer=IpoptOptimizer(print_level=0))
@variable m a >= 0
@variable m b >= 0
@constraint m a + b >= 1
@NLobjective(m, Min, a^3 - a^2 + 2a*b - b^2 + b^3)
JuMP.optimize(m)
@show JuMP.terminationstatus(m)
@show JuMP.resultvalue(a)
@show JuMP.resultvalue(b)
@show JuMP.objectivevalue(m);
```
With the following equivalent model, [Ipopt](https://github.com/JuliaOpt/Ipopt.jl/) finds the correct optimal solution. The reason for the difference (although counterintuitive) is that with registered functions, only first-order derivatives are available.
```julia
using JuMP
using Ipopt
m = Model(solver=IpoptSolver(print_level=0))
@variable m a >= 0
@variable m b >= 0
@constraint m a + b >= 1
peval(a, b) = p(x=>a, y=>b)
JuMP.register(m, :peval, 2, peval, autodiff=true)
@NLobjective(m, Min, peval(a, b))
status = solve(m)
@show status
@show getvalue(a)
@show getvalue(b)
@show getobjectivevalue(m);
```
## Sum-of-Squares approach
We will now see how to find the optimal solution using Sum of Squares Programming.
We first need to pick an SDP solver, see [here](http://www.juliaopt.org/) for a list of the available choices.
```julia
using CSDP
optimizer = CSDPOptimizer(printlevel=0);
```
```julia
using MathOptInterfaceMosek
optimizer = MosekOptimizer(LOG=0);
```
A Sum-of-Squares certificate that $p \ge \alpha$ over the domain `S` ensures that $\alpha$ is a lower bound to the polynomial optimization problem.
The following program searches for the largest such lower bound and finds zero.
```julia
using JuMP
using SumOfSquares
const MOI = MathOptInterface
MOI.empty!(optimizer)
m = SOSModel(optimizer = optimizer)
@variable m α
@objective m Max α
c3 = @constraint m p >= α domain = S
JuMP.optimize(m)
@show JuMP.terminationstatus(m)
@show JuMP.objectivevalue(m);
```
JuMP.terminationstatus(m) = Success::MathOptInterface.TerminationStatusCode = 0
JuMP.objectivevalue(m) = -2.0092666419557759e-10
Using the solution $(1/2, 1/2)$ found by Ipopt of objective value $1/4$
and this certificate of lower bound $0$ we know that the optimal objective value is in the interval $[0, 1/4]$
but we still do not know what it is (if we consider that we did not try the solutions $(1, 0)$ and $(0, 1)$ as done in the introduction).
If the dual of the constraint `c3` was atomic, its atoms would have given optimal solutions of objective value $0$ but that is not the case.
```julia
using MultivariateMoments
μ3 = JuMP.resultdual(c3)
X3 = certificate_monomials(c3)
ν3 = matmeasure(μ3, X3)
extractatoms(ν3, 1e-3) # Returns nothing as the dual is not atomic
```
Nullable{MultivariateMoments.AtomicMeasure{Float64,Array{DynamicPolynomials.PolyVar{true},1}}}()
Fortunately, there is a hierarchy of increasingly stronger programs that can be solved until we get one with atomic dual variables.
This comes from the way the Sum-of-Squares constraint with domain `S` is formulated.
The polynomial $p - \alpha$ is guaranteed to be nonnegative over the domain `S` if there exists Sum-of-Squares polynomials $s_0$, $s_1$, $s_2$, $s_3$ such that
$$ p - \alpha = s_0 + s_1 x + s_2 y + s_3 (x + y - 1). $$
Indeed, in the domain `S`, $x$, $y$ and $x + y - 1$ are nonnegative so the right-hand side is a sum of squares hence is nonnegative.
Once the degrees of $s_1$, $s_2$ and $s_3$ have been decided, the degree needed for $s_0$ is determined, but we have some freedom in choosing the degrees of $s_1$, $s_2$ and $s_3$.
By default, they are chosen so that the degrees of $s_1 x$, $s_2 y$ and $s_3 (x + y - 1)$ match those of $p - \alpha$, but this can be overridden using the `maxdegree` keyword argument.
### The maxdegree keyword argument
The maximum total degree (i.e. the maximum sum of the exponents of $x$ and $y$) of the monomials of $p$ is 3, so the constraint in the program above is equivalent to `@constraint m p >= α domain = S maxdegree = 3`.
That is, since $x$, $y$ and $x + y - 1$ have total degree 1, the sum of squares polynomials $s_1$, $s_2$ and $s_3$ have been chosen with maximum total degree $2$.
Since these polynomials are sums of squares, their degree must be even so the next maximum total degree to try is 4.
For this reason, the keywords `maxdegree = 4` and `maxdegree = 5` have the same effect in this example.
In general, if the polynomials in the domain are not all odd or all even, each value of `maxdegree` has a different effect on the choice of the maximum total degree of the $s_i$.
```julia
using JuMP
using SumOfSquares
const MOI = MathOptInterface
MOI.empty!(optimizer)
m = SOSModel(optimizer = optimizer)
@variable m α
@objective m Max α
c5 = @constraint m p >= α domain = S maxdegree = 5
JuMP.optimize(m)
@show JuMP.terminationstatus(m)
@show JuMP.objectivevalue(m);
```
JuMP.terminationstatus(m) = Success::MathOptInterface.TerminationStatusCode = 0
JuMP.objectivevalue(m) = -8.707343734926098e-10
This time, the dual variable is atomic: it is the vector of moments of the measure
$$0.5 \delta(x-1, y) + 0.5 \delta(x, y-1)$$
where $\delta(x, y)$ is the Dirac measure centered at $(0, 0)$.
Therefore the program provides both a certificate that $0$ is a lower bound and a certificate that it is also an upper bound since it is attained at the global minimizers $(1, 0)$ and $(0, 1)$.
```julia
using MultivariateMoments
μ5 = JuMP.resultdual(c5)
X5 = certificate_monomials(c5)
ν5 = matmeasure(μ5, X5)
extractatoms(ν5, 1e-3)
```
Nullable{MultivariateMoments.AtomicMeasure{Float64,Array{DynamicPolynomials.PolyVar{true},1}}}(Atomic measure on the variables x, y with 2 atoms:
at [-0.00109073, 1.00109] with weight 0.49908199889134325
at [0.99992, 8.0388e-5] with weight 0.5006917587661682)
## A deeper look into atom extraction
The `extractatoms` function requires a `ranktol` argument that we have set to `1e-3` in the preceding section.
This argument is used to transform the dual variable into a system of polynomials equations whose solutions give the atoms.
This transformation uses the SVD decomposition of the matrix of moments and discard the equations corresponding to a singular value lower than `ranktol`.
When this system of equations has an infinite number of solutions, `extractatoms` concludes that the measure is not atomic.
For instance, with `maxdegree = 3`, we obtain the system
$$x + y = 1$$
which contains a whole line of solutions.
This explains why `extractatoms` returned `nothing`.
```julia
ν3 = matmeasure(μ3, X3)
MultivariateMoments.computesupport!(ν3, 1e-3)
```
Algebraic Set defined by 1 equality
-x - 1.0000000000000007y + 1.0000000000569826 == 0
With `maxdegree = 5`, we obtain the system
\begin{align}
x + y & = 1\\
y^2 & = y\\
xy & = 0\\
x^2 + y & = 1
\end{align}
```julia
ν5 = matmeasure(μ5, X5)
MultivariateMoments.computesupport!(ν5, 1e-3)
```
Algebraic Set defined by 4 equalities
-x - 0.9999999999999996y + 1.0000000002403562 == 0
-y^2 + 1.0011711224086954y - 8.047569308633484e-5 == 0
-xy + 1.4187906239762362e-12y + 0.00018536057700866868 == 0
-x^2 - 1.0011711224223037y + 1.0010906469640255 == 0
This system can be reduced to the equivalent system
\begin{align}
x + y & = 1\\
y^2 & = y
\end{align}
which has the solutions $(0, 1)$ and $(1, 0)$.
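To see why this reduction holds (a small check added for clarity), substitute $x = 1 - y$ from the first equation into the others:
\begin{align}
xy = 0 &\implies (1 - y)y = 0 \implies y^2 = y,\\
x^2 + y = 1 &\implies (1 - y)^2 + y = 1 \implies y^2 = y,
\end{align}
so only $x + y = 1$ and $y^2 = y$ remain, giving $y \in \{0, 1\}$ and hence the two solutions above.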
```julia
SemialgebraicSets.computegröbnerbasis!(ideal(get(ν5.support)))
get(ν5.support)
```
Algebraic Set defined by 2 equalities
x + 0.9999999999999996y - 1.0000000002403562 == 0
y^2 - 1.0011711224086954y + 8.047569308633484e-5 == 0
The function `extractatoms` then reuses the matrix of moments to find the weights $1/2$, $1/2$ corresponding to the Diracs centered respectively at $(0, 1)$ and $(1, 0)$.
This details how the function obtained the result
$$0.5 \delta(x-1, y) + 0.5 \delta(x, y-1)$$
given in the previous section.
---
*Source: examples/Polynomial_Optimization.ipynb (mforets/SumOfSquares.jl, MIT)*
# Frequentist Inference Case Study - Part A
## 1. Learning objectives
Welcome to part A of the Frequentist inference case study! The purpose of this case study is to help you apply the concepts associated with Frequentist inference in Python. Frequentist inference is the process of deriving conclusions about an underlying distribution via the observation of data. In particular, you'll practice writing Python code to apply the following statistical concepts:
* the _z_-statistic
* the _t_-statistic
* the difference and relationship between the two
* the Central Limit Theorem, including its assumptions and consequences
* how to estimate the population mean and standard deviation from a sample
* the concept of a sampling distribution of a test statistic, particularly for the mean
* how to combine these concepts to calculate a confidence interval
## Prerequisites
To be able to complete this notebook, you are expected to have a basic understanding of:
* what a random variable is (p.400 of Professor Spiegelhalter's *The Art of Statistics, hereinafter AoS*)
* what a population, and a population distribution, are (p. 397 of *AoS*)
* a high-level sense of what the normal distribution is (p. 394 of *AoS*)
* what the t-statistic is (p. 275 of *AoS*)
Happily, these should all be concepts with which you are reasonably familiar after having read ten chapters of Professor Spiegelhalter's book, *The Art of Statistics*.
We'll try to relate the concepts in this case study back to page numbers in *The Art of Statistics* so that you can focus on the Python aspects of this case study. The second part (part B) of this case study will involve another, more real-world application of these tools.
For this notebook, we will use data sampled from a known normal distribution. This allows us to compare our results with theoretical expectations.
## 2. An introduction to sampling from the normal distribution
First, let's explore the ways we can generate the normal distribution. While there's a fair amount of interest in [sklearn](https://scikit-learn.org/stable/) within the machine learning community, you're likely to have heard of [scipy](https://docs.scipy.org/doc/scipy-0.15.1/reference/index.html) if you're coming from the sciences. For this assignment, you'll use [scipy.stats](https://docs.scipy.org/doc/scipy-0.15.1/reference/tutorial/stats.html) to complete your work.
This assignment will require some digging around and getting your hands dirty (your learning is maximized that way)! You should have the research skills and the tenacity to do these tasks independently, but if you struggle, reach out to your immediate community and your mentor for help.
```python
from scipy.stats import norm
from scipy.stats import t
import numpy as np
from numpy.random import seed
import matplotlib.pyplot as plt
```
__Q1:__ Call up the documentation for the `norm` function imported above. (Hint: that documentation is [here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html)). What is the second listed method?
__A:__ Probability density function.
pdf(x, loc=0, scale=1)
__Q2:__ Use the method that generates random variates to draw five samples from the standard normal distribution.
__A:__ Random variates.
rvs(loc=0, scale=1, size=1, random_state=None)
```python
seed(47)
# draw five samples here
samples = norm.rvs(size=5)
print(samples)
```
[-0.84800948 1.30590636 0.92420797 0.6404118 -1.05473698]
__Q3:__ What is the mean of this sample? Is it exactly equal to the value you expected? Hint: the sample was drawn from the standard normal distribution. If you want a reminder of the properties of this distribution, check out p. 85 of *AoS*.
__A:__ About 0.19.
The sample mean is not exactly equal to the population mean of 0 for the standard normal; with only five draws, the sample mean fluctuates around 0 with a standard error of $1/\sqrt{n} \approx 0.45$.
```python
# Calculate and print the mean here, hint: use np.mean()
np.mean(samples)
```
0.19355593334131074
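To see how much a five-sample mean typically wanders (a quick illustrative sketch, not one of the assignment questions), we can draw many samples of size five and look at the spread of their means; it comes out close to $1/\sqrt{5} \approx 0.45$.
```python
# Draw 1000 samples of size 5 and compute each sample's mean
many_means = norm.rvs(size=(1000, 5)).mean(axis=1)
print(np.std(many_means))  # close to 1/np.sqrt(5) ~ 0.447
```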
__Q4:__ What is the standard deviation of these numbers? Calculate this manually here as $\sqrt{\frac{\sum_i(x_i - \bar{x})^2}{n}}$ (This is just the definition of **standard deviation** given by Professor Spiegelhalter on p.403 of *AoS*). Hint: np.sqrt() and np.sum() will be useful here and remember that numPy supports [broadcasting](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).
__A:__ Approximately 0.96.
```python
np.sqrt(np.sum((samples - samples.mean()) ** 2) / len(samples))
```
0.9606195639478641
Here we have calculated the actual standard deviation of a small data set (of size 5). But in this case, this small data set is actually a sample from our larger (infinite) population. In this case, the population is infinite because we could keep drawing our normal random variates until our computers die!
In general, the sample mean we calculate will not be equal to the population mean (as we saw above). A consequence of this is that the sum of squares of the deviations from the _population_ mean will be bigger than the sum of squares of the deviations from the _sample_ mean. In other words, the sum of squares of the deviations from the _sample_ mean is too small to give an unbiased estimate of the _population_ variance. An example of this effect is given [here](https://en.wikipedia.org/wiki/Bessel%27s_correction#Source_of_bias). Scaling our estimate of the variance by the factor $n/(n-1)$ gives an unbiased estimator of the population variance. This factor is known as [Bessel's correction](https://en.wikipedia.org/wiki/Bessel%27s_correction). The consequence of this is that the $n$ in the denominator is replaced by $n-1$.
You can see Bessel's correction reflected in Professor Spiegelhalter's definition of **variance** on p. 405 of *AoS*.
__Q5:__ If all we had to go on was our five samples, what would be our best estimate of the population standard deviation? Use Bessel's correction ($n-1$ in the denominator), thus $\sqrt{\frac{\sum_i(x_i - \bar{x})^2}{n-1}}$.
__A:__ Approximately 1.07.
```python
np.sqrt(np.sum((samples - samples.mean()) ** 2) / (len(samples) - 1))
```
1.0740053227518152
__Q6:__ Now use numpy's std function to calculate the standard deviation of our random samples. Which of the above standard deviations did it return?
__A:__ The first (regular) standard deviation calculation.
```python
np.std(samples)
```
0.9606195639478641
__Q7:__ Consult the documentation for np.std() to see how to apply the correction for estimating the population parameter and verify this produces the expected result.
__A:__ The ddof parameter of 1 returns the same result of about 1.07 as above.
```python
# np.std? only returns the help in the Jupyter console.
help(np.std)
```
Help on function std in module numpy:
std(a, axis=None, dtype=None, out=None, ddof=0, keepdims=<no value>)
Compute the standard deviation along the specified axis.
Returns the standard deviation, a measure of the spread of a distribution,
of the array elements. The standard deviation is computed for the
flattened array by default, otherwise over the specified axis.
Parameters
----------
a : array_like
Calculate the standard deviation of these values.
axis : None or int or tuple of ints, optional
Axis or axes along which the standard deviation is computed. The
default is to compute the standard deviation of the flattened array.
.. versionadded:: 1.7.0
If this is a tuple of ints, a standard deviation is performed over
multiple axes, instead of a single axis or all the axes as before.
dtype : dtype, optional
Type to use in computing the standard deviation. For arrays of
integer type the default is float64, for arrays of float types it is
the same as the array type.
out : ndarray, optional
Alternative output array in which to place the result. It must have
the same shape as the expected output but the type (of the calculated
values) will be cast if necessary.
ddof : int, optional
Means Delta Degrees of Freedom. The divisor used in calculations
is ``N - ddof``, where ``N`` represents the number of elements.
By default `ddof` is zero.
keepdims : bool, optional
If this is set to True, the axes which are reduced are left
in the result as dimensions with size one. With this option,
the result will broadcast correctly against the input array.
If the default value is passed, then `keepdims` will not be
passed through to the `std` method of sub-classes of
`ndarray`, however any non-default value will be. If the
sub-class' method does not implement `keepdims` any
exceptions will be raised.
Returns
-------
standard_deviation : ndarray, see dtype parameter above.
If `out` is None, return a new array containing the standard deviation,
otherwise return a reference to the output array.
See Also
--------
var, mean, nanmean, nanstd, nanvar
ufuncs-output-type
Notes
-----
The standard deviation is the square root of the average of the squared
deviations from the mean, i.e., ``std = sqrt(mean(abs(x - x.mean())**2))``.
The average squared deviation is normally calculated as
``x.sum() / N``, where ``N = len(x)``. If, however, `ddof` is specified,
the divisor ``N - ddof`` is used instead. In standard statistical
practice, ``ddof=1`` provides an unbiased estimator of the variance
of the infinite population. ``ddof=0`` provides a maximum likelihood
estimate of the variance for normally distributed variables. The
standard deviation computed in this function is the square root of
the estimated variance, so even with ``ddof=1``, it will not be an
unbiased estimate of the standard deviation per se.
Note that, for complex numbers, `std` takes the absolute
value before squaring, so that the result is always real and nonnegative.
For floating-point input, the *std* is computed using the same
precision the input has. Depending on the input data, this can cause
the results to be inaccurate, especially for float32 (see example below).
Specifying a higher-accuracy accumulator using the `dtype` keyword can
alleviate this issue.
Examples
--------
>>> a = np.array([[1, 2], [3, 4]])
>>> np.std(a)
1.1180339887498949 # may vary
>>> np.std(a, axis=0)
array([1., 1.])
>>> np.std(a, axis=1)
array([0.5, 0.5])
In single precision, std() can be inaccurate:
>>> a = np.zeros((2, 512*512), dtype=np.float32)
>>> a[0, :] = 1.0
>>> a[1, :] = 0.1
>>> np.std(a)
0.45000005
Computing the standard deviation in float64 is more accurate:
>>> np.std(a, dtype=np.float64)
0.44999999925494177 # may vary
```python
np.std(samples, ddof=1)
```
1.0740053227518152
### Summary of section
In this section, you've been introduced to the scipy.stats package and used it to draw a small sample from the standard normal distribution. You've calculated the average (the mean) of this sample and seen that this is not exactly equal to the expected population parameter (which we know because we're generating the random variates from a specific, known distribution). You've been introduced to two ways of calculating the standard deviation; one uses $n$ in the denominator and the other uses $n-1$ (Bessel's correction). You've also seen which of these calculations np.std() performs by default and how to get it to generate the other.
You use $n$ as the denominator if you want to calculate the standard deviation of a sequence of numbers. You use $n-1$ if you are using this sequence of numbers to estimate the population parameter. This brings us to some terminology that can be a little confusing.
The population parameter is traditionally written as $\sigma$ and the sample statistic as $s$. Rather unhelpfully, $s$ is also called the sample standard deviation (using $n-1$) whereas the standard deviation of the sample uses $n$. That's right, we have the sample standard deviation and the standard deviation of the sample and they're not the same thing!
The sample standard deviation
\begin{equation}
s = \sqrt{\frac{\sum_i(x_i - \bar{x})^2}{n-1}} \approx \sigma,
\end{equation}
is our best (unbiased) estimate of the population parameter ($\sigma$).
If your dataset _is_ your entire population, you simply want to calculate the population parameter, $\sigma$, via
\begin{equation}
\sigma = \sqrt{\frac{\sum_i(x_i - \bar{x})^2}{n}}
\end{equation}
as you have complete, full knowledge of your population. In other words, your sample _is_ your population. It's worth noting that we're dealing with what Professor Spiegelhalter describes on p. 92 of *AoS* as a **metaphorical population**: we have all the data, and we act as if each data point is taken from a population at random. We can think of this population as an imaginary space of possibilities.
If, however, you have sampled _from_ your population, you only have partial knowledge of the state of your population. In this case, the standard deviation of your sample is not an unbiased estimate of the standard deviation of the population, in which case you seek to estimate that population parameter via the sample standard deviation, which uses the $n-1$ denominator.
Great work so far! Now let's dive deeper.
## 3. Sampling distributions
So far we've been dealing with the concept of taking a sample from a population to infer the population parameters. One statistic we calculated for a sample was the mean. As our samples will be expected to vary from one draw to another, so will our sample statistics. If we were to perform repeat draws of size $n$ and calculate the mean of each, we would expect to obtain a distribution of values. This is the sampling distribution of the mean. **The Central Limit Theorem (CLT)** tells us that such a distribution will approach a normal distribution as $n$ increases (the intuitions behind the CLT are covered in full on p. 236 of *AoS*). For the sampling distribution of the mean, the standard deviation of this distribution is given by
\begin{equation}
\sigma_{mean} = \frac{\sigma}{\sqrt n}
\end{equation}
where $\sigma_{mean}$ is the standard deviation of the sampling distribution of the mean and $\sigma$ is the standard deviation of the population (the population parameter).
This is important because typically we are dealing with samples from populations and all we know about the population is what we see in the sample. From this sample, we want to make inferences about the population. We may do this, for example, by looking at the histogram of the values and by calculating the mean and standard deviation (as estimates of the population parameters), and so we are intrinsically interested in how these quantities vary across samples.
In other words, now that we've taken one sample of size $n$ and made some claims about the general population, what if we were to take another sample of size $n$? Would we get the same result? Would we make the same claims about the general population? This brings us to a fundamental question: _when we make some inference about a population based on our sample, how confident can we be that we've got it 'right'?_
We need to think about **estimates and confidence intervals**: those concepts covered in Chapter 7, p. 189, of *AoS*.
Now, the standard normal distribution (with its variance equal to its standard deviation of one) would not be a great illustration of a key point. Instead, let's imagine we live in a town of 50,000 people and we know the height of everyone in this town. We will have 50,000 numbers that tell us everything about our population. We'll simulate these numbers now and put ourselves in one particular town, called 'town 47', where the population mean height is 172 cm and population standard deviation is 5 cm.
```python
seed(47)
pop_heights = norm.rvs(172, 5, size=50000)
```
```python
_ = plt.hist(pop_heights, bins=30)
_ = plt.xlabel('height (cm)')
_ = plt.ylabel('number of people')
_ = plt.title('Distribution of heights in entire town population')
_ = plt.axvline(172, color='r')
_ = plt.axvline(172+5, color='r', linestyle='--')
_ = plt.axvline(172-5, color='r', linestyle='--')
_ = plt.axvline(172+10, color='r', linestyle='-.')
_ = plt.axvline(172-10, color='r', linestyle='-.')
```
Now, 50,000 people is rather a lot to chase after with a tape measure. If all you want to know is the average height of the townsfolk, then can you just go out and measure a sample to get a pretty good estimate of the average height?
```python
def townsfolk_sampler(n):
return np.random.choice(pop_heights, n)
```
Let's say you go out one day and randomly sample 10 people to measure.
```python
seed(47)
daily_sample1 = townsfolk_sampler(10)
```
```python
_ = plt.hist(daily_sample1, bins=10)
_ = plt.xlabel('height (cm)')
_ = plt.ylabel('number of people')
_ = plt.title('Distribution of heights in sample size 10')
```
The sample distribution doesn't resemble what we take the population distribution to be. What do we get for the mean?
```python
np.mean(daily_sample1)
```
173.47911444163503
And if we went out and repeated this experiment?
```python
daily_sample2 = townsfolk_sampler(10)
```
```python
np.mean(daily_sample2)
```
173.7317666636263
__Q8:__ Simulate performing this random trial every day for a year, calculating the mean of each daily sample of 10, and plot the resultant sampling distribution of the mean.
__A:__
```python
seed(47)
# take your samples here
for day in range(365):
print(f"Sample for day {day + 1} was {np.mean(townsfolk_sampler(10))}")
```
Sample for day 1 was 173.00937310417513
Sample for day 2 was 170.2661643961573
Sample for day 3 was 174.34598844844118
Sample for day 4 was 170.785406800034
Sample for day 5 was 173.31770470569631
Sample for day 6 was 173.10858686641774
Sample for day 7 was 171.40439332248283
Sample for day 8 was 170.9704617042305
Sample for day 9 was 172.61510636545793
Sample for day 10 was 172.2913740885055
Sample for day 11 was 170.50358941424687
Sample for day 12 was 172.22018481582614
Sample for day 13 was 172.85834358816803
Sample for day 14 was 171.56620891479838
Sample for day 15 was 171.58204113512346
Sample for day 16 was 171.07473473402555
Sample for day 17 was 175.2047218243162
Sample for day 18 was 172.20101905509054
Sample for day 19 was 175.8140325675064
Sample for day 20 was 171.42567364667013
Sample for day 21 was 171.54879166928384
Sample for day 22 was 173.37962048578632
Sample for day 23 was 170.84926519404007
Sample for day 24 was 174.59322186598968
Sample for day 25 was 171.54718475118278
Sample for day 26 was 171.6096336712505
Sample for day 27 was 171.53767794655576
Sample for day 28 was 172.9149498945323
Sample for day 29 was 172.2981516718446
Sample for day 30 was 165.39551194077626
Sample for day 31 was 169.9597376836921
Sample for day 32 was 173.9465941840398
Sample for day 33 was 172.1342306069537
Sample for day 34 was 171.3984656489666
Sample for day 35 was 171.11161431266052
Sample for day 36 was 173.6267218608726
Sample for day 37 was 169.11050233231748
Sample for day 38 was 169.69609920441803
Sample for day 39 was 172.4816825903941
Sample for day 40 was 172.35465226352488
Sample for day 41 was 170.4018730294428
Sample for day 42 was 172.6410928824817
Sample for day 43 was 171.34876456725738
Sample for day 44 was 172.84629108546204
Sample for day 45 was 175.26564169319403
Sample for day 46 was 168.68677877662915
Sample for day 47 was 173.01832873955627
Sample for day 48 was 169.56393772762289
Sample for day 49 was 172.99035886037572
Sample for day 50 was 175.3707428706809
Sample for day 51 was 171.68166141416253
Sample for day 52 was 172.21351476973385
Sample for day 53 was 173.57719464559077
Sample for day 54 was 172.23443258433025
Sample for day 55 was 171.49321124063263
Sample for day 56 was 175.0569955524844
Sample for day 57 was 169.7489045337734
Sample for day 58 was 170.2576081367393
Sample for day 59 was 173.00527760461273
Sample for day 60 was 169.41958867850704
Sample for day 61 was 171.09210131077157
Sample for day 62 was 174.09652244869528
Sample for day 63 was 173.97372431777853
Sample for day 64 was 170.76960029551344
Sample for day 65 was 173.91299863576833
Sample for day 66 was 172.77281552568883
Sample for day 67 was 171.17248840522046
Sample for day 68 was 172.6754158361887
Sample for day 69 was 174.95950548649049
Sample for day 70 was 174.5280861190028
Sample for day 71 was 169.3587222486768
Sample for day 72 was 172.19628668598872
Sample for day 73 was 173.47675542556266
Sample for day 74 was 171.8867463490586
Sample for day 75 was 171.64766944047537
Sample for day 76 was 172.03472701707668
Sample for day 77 was 171.8514968514924
Sample for day 78 was 173.3504076956295
Sample for day 79 was 175.835465920465
Sample for day 80 was 173.01619729265536
Sample for day 81 was 172.87431639983677
Sample for day 82 was 171.27137361530023
Sample for day 83 was 169.08324493645043
Sample for day 84 was 173.68116250421124
Sample for day 85 was 170.11358709792825
Sample for day 86 was 171.1893750210999
Sample for day 87 was 169.25468169001886
Sample for day 88 was 169.2150994830036
Sample for day 89 was 171.596721889334
Sample for day 90 was 173.77774156427014
Sample for day 91 was 173.03004628460803
Sample for day 92 was 172.34242485010785
Sample for day 93 was 172.9855405060567
Sample for day 94 was 169.43469092853624
Sample for day 95 was 171.77975348011097
Sample for day 96 was 172.64844848584667
Sample for day 97 was 171.56408093054327
Sample for day 98 was 169.95379792250952
Sample for day 99 was 171.12137486338096
Sample for day 100 was 171.73249633181402
Sample for day 101 was 172.0630011932523
Sample for day 102 was 172.834180845258
Sample for day 103 was 172.38388837514353
Sample for day 104 was 170.54584084764085
Sample for day 105 was 171.99296673596194
Sample for day 106 was 173.42344336887487
Sample for day 107 was 170.69610500534776
Sample for day 108 was 173.36937790530678
Sample for day 109 was 174.7423134134954
Sample for day 110 was 171.57490485555303
Sample for day 111 was 171.25352997756042
Sample for day 112 was 173.72474189207932
Sample for day 113 was 172.441773023841
Sample for day 114 was 173.80765705457642
Sample for day 115 was 170.96908036245844
Sample for day 116 was 170.5646444001596
Sample for day 117 was 171.16932302392036
Sample for day 118 was 171.7865759467937
Sample for day 119 was 174.05858516618719
Sample for day 120 was 171.6143448222104
Sample for day 121 was 174.26791917555542
Sample for day 122 was 172.75750544038792
Sample for day 123 was 169.4482347475428
Sample for day 124 was 172.88259602544014
Sample for day 125 was 173.3483004197289
Sample for day 126 was 169.76084247981902
Sample for day 127 was 169.14347393153977
Sample for day 128 was 171.4217504513107
Sample for day 129 was 173.89834379492194
Sample for day 130 was 170.3495147245446
Sample for day 131 was 172.17203741079754
Sample for day 132 was 172.79214318105068
Sample for day 133 was 175.10499281941355
Sample for day 134 was 173.18876387302893
Sample for day 135 was 174.81414282425817
Sample for day 136 was 173.51971821349957
Sample for day 137 was 169.1832903415072
Sample for day 138 was 172.44643036845486
Sample for day 139 was 170.37438239142895
Sample for day 140 was 170.5128178545061
Sample for day 141 was 172.7603336967199
Sample for day 142 was 173.43295705827208
Sample for day 143 was 172.44986889632654
Sample for day 144 was 168.54115045199467
Sample for day 145 was 171.47237444495545
Sample for day 146 was 172.28422187204686
Sample for day 147 was 169.31812658254867
Sample for day 148 was 171.9843089839522
Sample for day 149 was 172.5937581563948
Sample for day 150 was 173.23557646642925
Sample for day 151 was 172.45242838151756
Sample for day 152 was 172.95373798288568
Sample for day 153 was 169.31196279581437
Sample for day 154 was 169.68424533261566
Sample for day 155 was 173.09559250773697
Sample for day 156 was 170.5045876184657
Sample for day 157 was 170.77385661410713
Sample for day 158 was 173.1993206004434
Sample for day 159 was 169.41827455268532
Sample for day 160 was 172.69828423736095
Sample for day 161 was 171.80810764017772
Sample for day 162 was 171.83850781893216
Sample for day 163 was 173.5753616187581
Sample for day 164 was 170.78294074321053
Sample for day 165 was 167.7625596819958
Sample for day 166 was 173.63599353895043
Sample for day 167 was 172.35793394439912
Sample for day 168 was 172.42914238209988
Sample for day 169 was 170.71751228569852
Sample for day 170 was 171.54000560683969
Sample for day 171 was 173.188789929698
Sample for day 172 was 172.62322681495678
Sample for day 173 was 172.6508432608691
Sample for day 174 was 171.42006139790251
Sample for day 175 was 172.4712247126185
Sample for day 176 was 170.3786488645244
Sample for day 177 was 172.73731268204696
Sample for day 178 was 172.3000446236936
Sample for day 179 was 170.7649928023232
Sample for day 180 was 169.9141005121299
Sample for day 181 was 172.42333039503097
Sample for day 182 was 171.61277999714807
Sample for day 183 was 170.60637508298126
Sample for day 184 was 171.76476298366762
Sample for day 185 was 170.41303623484504
Sample for day 186 was 172.47393077457045
Sample for day 187 was 171.3194342008746
Sample for day 188 was 169.5841940850787
Sample for day 189 was 170.52305891287497
Sample for day 190 was 174.13981403506384
Sample for day 191 was 171.57249535993967
Sample for day 192 was 172.92969865919665
Sample for day 193 was 170.71069014088408
Sample for day 194 was 172.70087709251987
Sample for day 195 was 171.02564174035243
Sample for day 196 was 174.5655176759607
Sample for day 197 was 173.2373307135623
Sample for day 198 was 169.72435883757208
Sample for day 199 was 171.4080637212518
Sample for day 200 was 172.82472795827337
Sample for day 201 was 172.68087344401215
Sample for day 202 was 170.73816995930957
Sample for day 203 was 173.27623446278108
Sample for day 204 was 174.00762188244605
Sample for day 205 was 173.13361473414275
Sample for day 206 was 170.84245444649585
Sample for day 207 was 173.38610121883
Sample for day 208 was 171.0638349843619
Sample for day 209 was 171.126280719832
Sample for day 210 was 172.73680722414176
Sample for day 211 was 170.48813262391832
Sample for day 212 was 173.8065513385304
Sample for day 213 was 174.987975821513
Sample for day 214 was 170.03229177775182
Sample for day 215 was 175.02529474715647
Sample for day 216 was 173.40098890648693
Sample for day 217 was 171.44694390778417
Sample for day 218 was 174.3025151813375
Sample for day 219 was 173.4280196820072
Sample for day 220 was 171.33423913799567
Sample for day 221 was 171.62893394353907
Sample for day 222 was 174.71937083523463
Sample for day 223 was 173.6777821451332
Sample for day 224 was 173.29205813062757
Sample for day 225 was 171.48099822052652
Sample for day 226 was 174.7643867716951
Sample for day 227 was 174.21143537234744
Sample for day 228 was 171.77420202846264
Sample for day 229 was 171.37841143093172
Sample for day 230 was 172.18616002136272
Sample for day 231 was 172.3111613339467
Sample for day 232 was 171.77236918473153
Sample for day 233 was 169.4252121074236
Sample for day 234 was 171.16984338312017
Sample for day 235 was 171.98592378485796
Sample for day 236 was 170.66765933964413
Sample for day 237 was 173.07633301699337
Sample for day 238 was 172.55483298565144
Sample for day 239 was 170.02605126977423
Sample for day 240 was 171.4680428484353
Sample for day 241 was 171.98907654608053
Sample for day 242 was 175.02655281778826
Sample for day 243 was 171.07855120204874
Sample for day 244 was 170.51520740788092
Sample for day 245 was 172.48598843478018
Sample for day 246 was 172.1474353242007
Sample for day 247 was 169.2709521164695
Sample for day 248 was 172.5087810017655
Sample for day 249 was 172.95952188635115
Sample for day 250 was 170.5105096194364
Sample for day 251 was 173.80365699123186
Sample for day 252 was 173.20783401436017
Sample for day 253 was 172.30853501437937
Sample for day 254 was 171.3292027460107
Sample for day 255 was 170.1284541620547
Sample for day 256 was 170.53153661961474
Sample for day 257 was 169.99233807038905
Sample for day 258 was 172.2060568309715
Sample for day 259 was 172.59375266931607
Sample for day 260 was 173.13187918050644
Sample for day 261 was 173.84225403798737
Sample for day 262 was 172.16900966778172
Sample for day 263 was 171.2740795246999
Sample for day 264 was 172.06848748155048
Sample for day 265 was 172.70806798793316
Sample for day 266 was 169.52191788351348
Sample for day 267 was 173.13995943698018
Sample for day 268 was 171.31446586385138
Sample for day 269 was 174.45944054257342
Sample for day 270 was 172.33779383789957
Sample for day 271 was 170.04050400074735
Sample for day 272 was 170.5897937787512
Sample for day 273 was 172.381119795683
Sample for day 274 was 171.2191777049789
Sample for day 275 was 174.13679937916376
Sample for day 276 was 171.58968685112407
Sample for day 277 was 172.14155987323056
Sample for day 278 was 170.14580076222987
Sample for day 279 was 173.8575126095746
Sample for day 280 was 171.22280004171273
Sample for day 281 was 174.50071744849237
Sample for day 282 was 172.88891068451716
Sample for day 283 was 169.31889881116254
Sample for day 284 was 170.69600548765348
Sample for day 285 was 171.42981400026548
Sample for day 286 was 172.50472870805683
Sample for day 287 was 171.51334191192277
Sample for day 288 was 170.08549988158256
Sample for day 289 was 172.5517746579218
Sample for day 290 was 170.35377108926656
Sample for day 291 was 173.3479274356198
Sample for day 292 was 168.98144965130814
Sample for day 293 was 174.43697752031915
Sample for day 294 was 174.24488590135522
Sample for day 295 was 171.75499841402396
Sample for day 296 was 172.2505806984
Sample for day 297 was 172.13537084694025
Sample for day 298 was 168.91730244778347
Sample for day 299 was 171.85383633190443
Sample for day 300 was 171.44332622752884
Sample for day 301 was 171.98065353587435
Sample for day 302 was 174.67545641644853
Sample for day 303 was 169.27456293913542
Sample for day 304 was 171.98544346762102
Sample for day 305 was 171.71523803475168
Sample for day 306 was 171.66213269382746
Sample for day 307 was 171.112298762341
Sample for day 308 was 170.77343371955163
Sample for day 309 was 172.20311106521876
Sample for day 310 was 169.99680356458154
Sample for day 311 was 172.95196752111943
Sample for day 312 was 176.75728819085288
Sample for day 313 was 171.8196727050369
Sample for day 314 was 170.71102865921227
Sample for day 315 was 168.0443984080638
Sample for day 316 was 172.71396733459656
Sample for day 317 was 168.70848675599822
Sample for day 318 was 171.76101124195003
Sample for day 319 was 173.73259618312758
Sample for day 320 was 172.39938678401919
Sample for day 321 was 172.4348027054093
Sample for day 322 was 172.4558659621563
Sample for day 323 was 170.7107801353672
Sample for day 324 was 172.51742285335624
Sample for day 325 was 172.24819759923054
Sample for day 326 was 174.39776477155866
Sample for day 327 was 172.01380734487162
Sample for day 328 was 172.8420396499487
Sample for day 329 was 172.34460031959003
Sample for day 330 was 170.78349067379074
Sample for day 331 was 173.0535374392026
Sample for day 332 was 172.81474736800695
Sample for day 333 was 170.59751924018238
Sample for day 334 was 171.81195373983633
Sample for day 335 was 173.14301505727445
Sample for day 336 was 173.0867067005878
Sample for day 337 was 172.49120180031275
Sample for day 338 was 172.79245354383073
Sample for day 339 was 168.77864347190467
Sample for day 340 was 172.85250539601859
Sample for day 341 was 171.51847211854056
Sample for day 342 was 168.79667318837573
Sample for day 343 was 171.98321088302234
Sample for day 344 was 171.93952803545892
Sample for day 345 was 173.98082900081164
Sample for day 346 was 170.42434934870033
Sample for day 347 was 172.80981621822798
Sample for day 348 was 171.43800984364023
Sample for day 349 was 169.13060750544295
Sample for day 350 was 170.68510962199605
Sample for day 351 was 171.6445361884467
Sample for day 352 was 173.21043754817526
Sample for day 353 was 169.11260696642978
Sample for day 354 was 170.3325375027076
Sample for day 355 was 171.78168029566487
Sample for day 356 was 172.3123996044116
Sample for day 357 was 170.1283454398062
Sample for day 358 was 174.0205474832662
Sample for day 359 was 170.3304690943122
Sample for day 360 was 171.93335633113756
Sample for day 361 was 170.89875086405297
Sample for day 362 was 175.5202730928333
Sample for day 363 was 171.85429378017477
Sample for day 364 was 171.2142851564963
Sample for day 365 was 172.26925019233641
```python
seed(47)
# Or the Pythonic way
daily_sample_means = np.array([np.mean(townsfolk_sampler(10)) for i in range(365)])
```
```python
_ = plt.hist(daily_sample_means, bins=10)
_ = plt.xlabel('mean height (cm)')
_ = plt.ylabel('number of days')
_ = plt.title('Sampling distribution of the mean (sample size 10)')
```
The above is the distribution of the means of samples of size 10 taken from our population. The Central Limit Theorem tells us the expected mean of this distribution will be equal to the population mean, and standard deviation will be $\sigma / \sqrt n$, which, in this case, should be approximately 1.58.
__Q9:__ Verify the above results from the CLT.
__A:__ This is approximately 1.58.
```python
np.std(daily_sample_means, ddof=1)
```
1.5778333114768472
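For comparison, the theoretical value from the CLT formula, using the known population standard deviation of 5, is a one-line check:
```python
sigma, n = 5, 10
print(sigma / np.sqrt(n))  # ~1.58, in close agreement with the empirical value above
```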
Remember, in this instance, we knew our population parameters, that the average height really is 172 cm and the standard deviation is 5 cm, and we see some of our daily estimates of the population mean were as low as around 168 and some as high as 176.
__Q10:__ Repeat the above year's worth of samples but for a sample size of 50 (perhaps you had a bigger budget for conducting surveys that year)! Would you expect your distribution of sample means to be wider (more variable) or narrower (more consistent)? Compare your resultant summary statistics to those predicted by the CLT.
__A:__ Narrower (more consistent). The distribution of sample means for the larger sample size of 50 is closer to normal and has a visibly smaller spread, as the sample becomes more representative of the population; the CLT predicts a standard error of $\sigma/\sqrt{n} = 5/\sqrt{50} \approx 0.71$, compared with roughly 1.58 for samples of size 10.
```python
seed(47)
# calculate daily means from the larger sample size here
daily_sample_means_50 = np.array([np.mean(townsfolk_sampler(50)) for i in range(365)])
```
```python
_ = plt.hist(daily_sample_means_50, bins=10)
_ = plt.xlabel('mean height (cm)')
_ = plt.ylabel('number of days')
_ = plt.title('Sampling distribution of the mean (sample size 50)')
```
```python
np.std(daily_sample_means_50, ddof=1)
```
0.6745354088447525
What we've seen so far, then, is that we can estimate population parameters from a sample from the population, and that samples have their own distributions. Furthermore, the larger the sample size, the narrower are those sampling distributions.
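To make that "narrower with larger samples" point concrete, here is a minimal sketch that compares the empirical spread of the sample means against $\sigma / \sqrt{n}$ for a few sample sizes, reusing `townsfolk_sampler` and the known population standard deviation of 5.
```python
seed(47)
for n_samp in [10, 25, 50, 100]:
    means = np.array([np.mean(townsfolk_sampler(n_samp)) for _ in range(365)])
    print(n_samp, np.std(means, ddof=1), 5 / np.sqrt(n_samp))  # empirical vs. theoretical standard error
```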
### Normally testing time!
All of the above is well and good. We've been sampling from a population we know is normally distributed, we've come to understand when to use $n$ and when to use $n-1$ in the denominator to calculate the spread of a distribution, and we've seen the Central Limit Theorem in action for a sampling distribution. All seems very well behaved in Frequentist land. But, well, why should we really care?
Remember, we rarely (if ever) actually know our population parameters but we still have to estimate them somehow. If we want to make inferences to conclusions like "this observation is unusual" or "my population mean has changed" then we need to have some idea of what the underlying distribution is so we can calculate relevant probabilities. In frequentist inference, we use the formulae above to deduce these population parameters. Take a moment in the next part of this assignment to refresh your understanding of how these probabilities work.
Recall some basic properties of the standard normal distribution, such as that about 68% of observations are within plus or minus 1 standard deviation of the mean. Check out the precise definition of a normal distribution on p. 394 of *AoS*.
__Q11:__ Using this fact, calculate the probability of observing the value 1 or less in a single observation from the standard normal distribution. Hint: you may find it helpful to sketch the standard normal distribution (the familiar bell shape) and mark the number of standard deviations from the mean on the x-axis and shade the regions of the curve that contain certain percentages of the population.
__A:__
```python
1 - ((1 - 0.68) / 2)
```
0.8400000000000001
Calculating this probability involved calculating the area under the curve from the value of 1 and below. To put it in mathematical terms, we need to *integrate* the probability density function. We could just add together the known areas of chunks (from -Inf to 0 and then 0 to $+\sigma$ in the example above). One way to do this is to look up tables (literally). Fortunately, scipy has this functionality built in with the cdf() function.
__Q12:__ Use the cdf() function to answer the question above again and verify you get the same answer.
__A:__ The two answers are the same.
```python
norm.cdf(1)
```
0.8413447460685429
__Q13:__ Using our knowledge of the population parameters for our townsfolks' heights, what is the probability of selecting one person at random and their height being 177 cm or less? Calculate this using both of the approaches given above.
NOTE: Assuming the following questions are using the actual population mean (172) and standard deviation (5) given in the description above.
__A:__ There is about an 84% chance of selecting someone who is 177 cm or shorter from this population. Using the empirical rule: 177 cm is exactly one standard deviation above the mean of 172 cm, so the probability is 1 - (1 - 0.68)/2 = 0.84; the exact value from the cdf is below.
```python
norm(172, 5).cdf(177)
```
0.8413447460685429
__Q14:__ Turning this question around — suppose we randomly pick one person and measure their height and find they are 2.00 m tall. How surprised should we be at this result, given what we know about the population distribution? In other words, how likely would it be to obtain a value at least as extreme as this? Express this as a probability.
__A:__ This is VERY surprising; there is almost no probability of it happening under this population distribution. It could be a measurement error, or the person could be from out of town.
```python
1 - norm(172, 5).cdf(200)
```
1.0717590259723409e-08
What we've just done is calculate the ***p-value*** of the observation of someone 2.00m tall (review *p*-values if you need to on p. 399 of *AoS*). We could calculate this probability by virtue of knowing the population parameters. We were then able to use the known properties of the relevant normal distribution to calculate the probability of observing a value at least as extreme as our test value.
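As a side note, scipy also exposes the survival function `sf`, which equals `1 - cdf` but is numerically safer far out in the tail; for a symmetric null distribution, a two-sided p-value simply doubles the single-tail probability. A minimal sketch:
```python
print(norm(172, 5).sf(200))      # one-sided tail probability, equivalent to 1 - cdf(200)
print(2 * norm(172, 5).sf(200))  # two-sided version for a symmetric distribution
```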
We're about to come to a pinch, though. We've said a couple of times that we rarely, if ever, know the true population parameters; we have to estimate them from our sample and we cannot even begin to estimate the standard deviation from a single observation.
This is very true and usually we have sample sizes larger than one. This means we can calculate the mean of the sample as our best estimate of the population mean and the standard deviation as our best estimate of the population standard deviation.
In other words, we are now coming to deal with the sampling distributions we mentioned above as we are generally concerned with the properties of the sample means we obtain.
Above, we highlighted one result from the CLT, whereby the sampling distribution (of the mean) becomes narrower and narrower with the square root of the sample size. We remind ourselves that another result from the CLT is that _even if the underlying population distribution is not normal, the sampling distribution will tend to become normal with sufficiently large sample size_. (**Check out p. 199 of AoS if you need to revise this**). This is the key driver for us 'requiring' a certain sample size, for example you may frequently see a minimum sample size of 30 stated in many places. In reality this is simply a rule of thumb; if the underlying distribution is approximately normal then your sampling distribution will already be pretty normal, but if the underlying distribution is heavily skewed then you'd want to increase your sample size.
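The following minimal sketch illustrates that last point with a heavily right-skewed (exponential) population: the sampling distribution of the mean looks much more normal at $n = 30$ than at $n = 2$. The scale parameter and sample sizes here are arbitrary choices for illustration.
```python
from scipy.stats import expon

seed(47)
skewed_pop = expon.rvs(scale=10, size=50000)  # heavily right-skewed population
for n_samp in [2, 30]:
    means = [np.mean(np.random.choice(skewed_pop, n_samp)) for _ in range(1000)]
    _ = plt.hist(means, bins=30, alpha=0.5, label=f'n = {n_samp}')
_ = plt.xlabel('sample mean')
_ = plt.ylabel('count')
_ = plt.legend()
_ = plt.title('Sampling distribution of the mean for a skewed population')
plt.show()
```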
__Q15:__ Let's now start from the position of knowing nothing about the heights of people in our town.
* Use the random seed of 47, to randomly sample the heights of 50 townsfolk
* Estimate the population mean using np.mean
* Estimate the population standard deviation using np.std (remember which denominator to use!)
* Calculate the (95%) [margin of error](https://www.statisticshowto.datasciencecentral.com/probability-and-statistics/hypothesis-testing/margin-of-error/#WhatMofE) (use the exact critical z value to 2 decimal places - [look this up](https://www.statisticshowto.datasciencecentral.com/probability-and-statistics/find-critical-values/) or use norm.ppf()). Recall that the ***margin of error*** is mentioned on p. 189 of *AoS* and discussed in depth in that chapter.
* Calculate the 95% Confidence Interval of the mean (***confidence intervals*** are defined on p. 385 of *AoS*)
* Does this interval include the true population mean?
__A:__
```python
seed(47)
sample_size = 50
# take your sample now
sample = townsfolk_sampler(sample_size)
```
```python
mean_sample = np.mean(sample)
print(f"Mean is: {mean_sample}.")
```
Mean is: 172.7815108576788.
```python
std_sample = np.std(sample)  # note: strictly, ddof=1 should be used to estimate the population standard deviation; with n = 50 the difference is small
print(f"Standard deviation is: {std_sample}.")
```
Standard deviation is: 4.153258225264712.
```python
# The 95% interval is two-tailed:
# 1) 1 - 0.95 = 0.05; 2) 0.05 / 2 = 0.025 in each tail; 3) 1 - 0.025 = 0.975 for the upper critical value.
critical_value = norm.ppf(0.975)
std_error = std_sample / np.sqrt(sample_size)
margin_of_error = critical_value * std_error
print(f"Margin of error is: {margin_of_error}.")
```
Margin of error is: 1.151203291581224.
```python
lower = mean_sample - margin_of_error
upper = mean_sample + margin_of_error
ci = np.array([lower, upper])
print(f"The 95% confidence interval is: {ci}.")
```
The 95% confidence interval is: [171.63030757 173.93271415].
__Q16:__ Above, we calculated the confidence interval using the critical z value. What is the problem with this? What requirement, or requirements, are we (strictly) failing?
__A:__ We are using the z critical value even though we do not know the population standard deviation; we estimated it from a single sample of only 50 people, which may or may not accurately reflect the population. Strictly, the z-based interval requires a known population standard deviation (or a very large sample), so the t distribution, which accounts for the extra uncertainty from estimating the standard deviation, is the better choice here.
__Q17:__ Calculate the 95% confidence interval for the mean using the _t_ distribution. Is this wider or narrower than that based on the normal distribution above? If you're unsure, you may find this [resource](https://www.statisticshowto.datasciencecentral.com/probability-and-statistics/confidence-interval/) useful. For calculating the critical value, remember how you could calculate this for the normal distribution using norm.ppf().
__A:__ The confidence interval using the t distribution is a little wider.
### Steps to calculate a Confidence Interval For a Sample
1) Subtract 1 from your sample size (this gives the degrees of freedom).
2) Subtract the confidence level from 1 and then divide by two.
3) Look up the values from steps 1 and 2 in a t-distribution table, or calculate them with t.ppf().
4) Divide the sample standard deviation by the square root of the sample size.
5) Multiply step 3 by step 4.
6) Subtract step 5 from the sample mean for the lower end of the range.
7) Add step 5 to the sample mean for the upper end of the range.
```python
# 50- 1 = 49.
first_steps = t(49).ppf([0.025, 0.975])
step_4 = std_sample / np.sqrt(sample_size)
step_5 = first_steps * step_4
last_steps = step_5 + mean_sample
last_steps
```
array([171.60116793, 173.96185378])
This is slightly wider than the previous confidence interval. This reflects the greater uncertainty given that we are estimating population parameters from a sample.
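For reference, scipy can produce the same interval in one call via `t.interval`; the confidence level is passed positionally here because its keyword name changed between SciPy versions, and the sample quantities are the ones computed above.
```python
t.interval(0.95, sample_size - 1, loc=mean_sample, scale=std_sample / np.sqrt(sample_size))
```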
## 4. Learning outcomes
Having completed this project notebook, you now have hands-on experience:
* sampling and calculating probabilities from a normal distribution
* identifying the correct way to estimate the standard deviation of a population (the population parameter) from a sample
* with sampling distributions, and how the Central Limit Theorem applies to them
* with how to calculate critical values and confidence intervals
<h1 align=center> Home Quiz 1 - Logistic Regression</h1>
<br>
$$
\text{Chatziefraimidis Lefteris 2209}\\
$$
## Problem 1: Gradient Descent
We will estimate the parameters $w_{0},w_{1},w_{2},w_{3}$ using gradient descent for the following prediction model:
<br>
<br>
$$ y = w_{0} + w_{1}x_{1} + w_{2}x_{2} + w_{3}x_{1}^2 + \epsilon \space \text{ ,where }\space\epsilon\text{ ~ } N(0,\sigma^2)$$
<br>
$
\text{The error of the approximation: } \epsilon = y - y_{predicted} = y - (w_{0} + w_{1}x_{1} + w_{2}x_{2} + w_{3}x_{1}^{2})
$
### $\triangleright$ Exercise (a) :
<strong>Gaussian Distribution :</strong>
<br>
$X$ is distributed according to normal (or Gaussian) distribution with mean $\mu$ and variance $\sigma^2$
$$
\begin{aligned}
X &\sim \mathcal{N}(\mu,\sigma^2) \\ \\
p(X = x|\mu,\sigma^2)&= \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{1}{2\sigma^2}(x-\mu)^2}
\end{aligned}
$$
So,for our model we have:
<br>
<br>
$$P(y|x_{1},x_{2}) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(y-(w_{0} + w_{1}x_{1} + w_{2}x_{2} + w_{3}x_{1}^2))^2}{2\sigma^2}}$$
<br>
The above expression is called the likelihood function.
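As a small numerical illustration of this likelihood, we can evaluate it for one made-up observation under an arbitrary choice of parameters (all numbers below are hypothetical, chosen only for the example):
```python
import numpy as np
from scipy.stats import norm

w0, w1, w2, w3, sigma = 1.0, 0.5, -0.3, 0.2, 1.0  # hypothetical parameters
x1, x2, y = 2.0, 1.0, 3.1                          # one hypothetical observation

y_pred = w0 + w1 * x1 + w2 * x2 + w3 * x1 ** 2
print(norm.pdf(y, loc=y_pred, scale=sigma))        # P(y | x1, x2) under the model
```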
### $\triangleright$ Exercise (b) :
Assume you are given a set of training observations $(x_{1}^{(i)},x_{2}^{(i)},y^{(i)})$ for $i=1,....,n$
<br>
<br>
Log-likelihood of this training data:
<br>
$$\log P(y|x_{1},x_{2}) = \sum_{i=1}^n \log \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(y^{(i)}-(w_{0} + w_{1}x_{1}^{(i)} + w_{2}x_{2}^{(i)} + w_{3}x_{1}^{(i)2}))^2}{2\sigma^2}}$$
<br>
$$ = \sum_{i=1}^n \left[-\frac{1}{2}\log 2\pi\sigma^2 - \frac{1}{2\sigma^2}(y^{(i)}-(w_{0} + w_{1}x_{1}^{(i)} + w_{2}x_{2}^{(i)} + w_{3}x_{1}^{(i)2}))^2\right]$$
<br>
$$ = \sum_{i=1}^n \left[- \frac{1}{2\sigma^2}(y^{(i)}-(w_{0} + w_{1}x_{1}^{(i)} + w_{2}x_{2}^{(i)} + w_{3}x_{1}^{(i)2}))^2\right] \quad \text{(dropping the constant term, which does not depend on the weights)}$$
### $\triangleright$ Exercise (c) :
Based on your answer above, we can write a loss function $f(w_{0},w_{1},w_{2},w_{3})$ that can be minimized to find the desired parameter estimates:
<br>
<br>
$$f(w_{0},w_{1},w_{2},w_{3}) = \frac{1}{n}\sum_{i=1}^n \left[(y-(w_{0} + w_{1}x_{1}^{(i)} + w_{2}x_{2}^{(i)} + w_{3}x_{1}^{(i)2}))^2\right] , \frac{1}{n} \text{ makes the loss interpretable}$$
<br>
<br>
This particular loss function is also known as the mean squared error (MSE). We can use gradient descent to optimize it.
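A minimal NumPy version of this loss, written for clarity (the array names here are placeholders; the quiz itself handles the loss symbolically in Exercise (f)):
```python
import numpy as np

def mse_loss(w, x1, x2, y):
    """Mean squared error for the model y ~ w0 + w1*x1 + w2*x2 + w3*x1**2."""
    w0, w1, w2, w3 = w
    y_pred = w0 + w1 * x1 + w2 * x2 + w3 * x1 ** 2
    return np.mean((y - y_pred) ** 2)
```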
### $\triangleright$ Exercise (d) :
Lets calculate the gradient of $f(w)$ with respect to the parameter vector $w = [w_{0},w_{1} ,w_{2},w_{3}]^{T} :$
<br>
<br>
$$\nabla_w f(w) = \begin{bmatrix}
\frac{\partial f(w)}{\partial w_0} &\frac{\partial f(w)}{\partial w_1} &\frac{\partial f(w)}{\partial w_2} & \frac{\partial f(w)}{\partial w_3}
\end{bmatrix}^{T}$$
<br>
<br>
Our goal was to find $w_0,w_1,w_2,w_3$ such that:
<br>
<br>
$$\nabla_w f(w) = \begin{bmatrix}
\frac{\partial f(w)}{\partial w_0} &\frac{\partial f(w)}{\partial w_1} &\frac{\partial f(w)}{\partial w_2} & \frac{\partial f(w)}{\partial w_3}
\end{bmatrix}^{T} = \text{[ 0 0 0 0 ]}$$
<br>
because this guarantees that $f(w)$ is minimized.
<br>
<br>
$$
\nabla_w f(w) =
\begin{bmatrix}
\\
\frac{2}{n}\sum_{i=1}^{n} [y_i-(w_{0} + w_{1}x_{i,1} + w_{2}x_{i,2} + w_{3}x_{i,1}^{2})](-1))
\\
\frac{2}{n}\sum_{i=1}^{n} [y_i-(w_{0} + w_{1}x_{i,1} + w_{2}x_{i,2} + w_{3}x_{i,1}^{2})](-x_{i,1})
\\
\frac{2}{n}\sum_{i=1}^{n} [y_i-(w_{0} + w_{1}x_{i,1} + w_{2}x_{i,2} + w_{3}x_{i,1}^{2})](-x_{i,2})
\\
\frac{2}{n}\sum_{i=1}^{n} [y_i-(w_{0} + w_{1}x_{i,1} + w_{2}x_{i,2} + w_{3}x_{i,1}^{2})](-x_{i,1}^{2})
\\
\\
\end{bmatrix} =
\begin{bmatrix}
0
\\
0
\\
0
\\
0
\end{bmatrix}
$$
### $\triangleright$ Exercise (e) :
Gradient descent update rule for $w$ in terms of $\nabla_w f(w)$:
<br>
<br>
$$
w^{(\textrm{iteration}+1)} = w^{(\textrm{iteration})} - \alpha\nabla f(w^{(\textrm{iteration})})
\text{ , }\alpha\text{ : learning rate}
$$
<br>
<br>
$$
w^{(\textrm{iteration}+1)} = w^{(\textrm{iteration})} - \alpha\begin{bmatrix}
\\
\frac{2}{n}\sum_{i=1}^{n} [y_i-(w_{0} + w_{1}x_{i,1} + w_{2}x_{i,2} + w_{3}x_{i,1}^{2})](-1))
\\
\frac{2}{n}\sum_{i=1}^{n} [y_i-(w_{0} + w_{1}x_{i,1} + w_{2}x_{i,2} + w_{3}x_{i,1}^{2})](-x_{i,1})
\\
\frac{2}{n}\sum_{i=1}^{n} [y_i-(w_{0} + w_{1}x_{i,1} + w_{2}x_{i,2} + w_{3}x_{i,1}^{2})](-x_{i,2})
\\
\frac{2}{n}\sum_{i=1}^{n} [y_i-(w_{0} + w_{1}x_{i,1} + w_{2}x_{i,2} + w_{3}x_{i,1}^{2})](-x_{i,1}^{2})
\\
\\
\end{bmatrix}
$$
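For completeness, here is a small numerical version of this update rule on synthetic data; everything below is illustrative (the data, learning rate, and iteration count are arbitrary choices, separate from the symbolic treatment in Exercise (f)).
```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
true_w = np.array([1.0, 2.0, -1.0, 0.5])  # hypothetical w0..w3
y = true_w[0] + true_w[1]*x1 + true_w[2]*x2 + true_w[3]*x1**2 + rng.normal(scale=0.1, size=n)

X = np.column_stack([np.ones(n), x1, x2, x1**2])  # design matrix [1, x1, x2, x1^2]
w = np.zeros(4)
alpha = 0.05
for _ in range(2000):
    grad = -(2.0 / n) * X.T @ (y - X @ w)  # gradient of the MSE loss
    w = w - alpha * grad                   # gradient descent update
print(w)  # should end up close to true_w
```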
### $\triangleright$ Exercise (f) :
```python
from sympy import *
#Define the symbols
x1 = Symbol('x1')
x2 = Symbol('x2')
w0 = Symbol('w0')
w1 = Symbol('w1')
w2 = Symbol('w2')
w3 = Symbol('w3')
y = Symbol('y')
n = Symbol('n')
a =Symbol('a')
#Define the expression y prediction
ypred = w0 + w1*x1 + w2*x2 + w3*(x1**2)
#Define the loss function f
f = (1/n)*(y - ypred)**2
#Calculate the derivatives in order to calculate the nabla
df_dw0 = diff(f,w0)
df_dw1 = diff(f,w1)
df_dw2 = diff(f,w2)
df_dw3 = diff(f,w3)
#Nabla of loss function
nabla_f = Matrix([[df_dw0],[df_dw1],[df_dw2],[df_dw3]])
#Calculate the new weights
weights = Matrix([[w0],[w1],[w2],[w3]])
weights = weights - a*nabla_f
print("Nabla: ")
print(nabla_f)
print("Weights: ")
print(weights)
```
Nabla:
Matrix([[(2*w0 + 2*w1*x1 + 2*w2*x2 + 2*w3*x1**2 - 2*y)/n], [-2*x1*(-w0 - w1*x1 - w2*x2 - w3*x1**2 + y)/n], [-2*x2*(-w0 - w1*x1 - w2*x2 - w3*x1**2 + y)/n], [-2*x1**2*(-w0 - w1*x1 - w2*x2 - w3*x1**2 + y)/n]])
Weights:
Matrix([[-a*(2*w0 + 2*w1*x1 + 2*w2*x2 + 2*w3*x1**2 - 2*y)/n + w0], [2*a*x1*(-w0 - w1*x1 - w2*x2 - w3*x1**2 + y)/n + w1], [2*a*x2*(-w0 - w1*x1 - w2*x2 - w3*x1**2 + y)/n + w2], [2*a*x1**2*(-w0 - w1*x1 - w2*x2 - w3*x1**2 + y)/n + w3]])
## Problem 2: Logistic Regression
### $\triangleright$ Exercise (a) :
#### Some theory will we need:
Sigmoid (Logistic) Function:
$$
\sigma(z) = \frac{1}{1 + \exp(-z)}
$$
<br>
<br>
In logistic regression we model a binary variable $y \in \{0,1\}(Bernoulli)$
<br>
<br>
$$
p(y|x,\beta_0,\beta) = \sigma(\beta_0 + x^{T}\beta) = \frac{1}{1 + e^{-(\beta_0 + x^{T}\beta)}} \\
$$
<br>
<br>
Likelihood function is:
<br>
<br>
$$
L(\beta_0,\beta|y,x) = \prod_i p(y_i | x_i,\beta_0,\beta)^{y_i}(1 - p(y_i| x_i,\beta_0,\beta))^{1-y_i}
$$
<br>
<br>
Log-likelihood function is:
<br>
<br>
$$
\log L(\beta_0,\beta|y,x) = \sum_i y_i\log p(y_i| x_i,\beta_0,\beta) + (1 - y_i)\log(1 - p(y_i| x_i,\beta_0,\beta)) \\
=\sum_i y_i\log 1 - y_i\log(1 + e^{-(\beta_0 + x_i\beta)}) + \log\Big(\frac{e^{-(\beta_0 + x_i\beta)}}{1 + e^{-(\beta_0 + x_i\beta)}}\Big) -y_i\log(e^{-(\beta_0 + x_i\beta)}) +y_i\log(1 + e^{-(\beta_0 + x_i\beta)}) \\
=\sum_i \log\Big(\frac{e^{-(\beta_0 + x_i\beta)}}{1 + e^{-(\beta_0 + x_i\beta)}}\Big) -y_i\log(e^{-(\beta_0 + x_i\beta)}) \\
=\sum_i \log\Big(\frac{1}{1 + e^{(\beta_0 + x_i\beta)}}\Big) -y_i\log(e^{-(\beta_0 + x_i\beta)}) \\
=\sum_i -\log(1 + e^{(\beta_0 + x_i\beta)}) + y_i(\beta_0 + x_i\beta)
$$
<br>
<br>
Our objective function for gradient ascent:
<br>
<br>
$$
f(w_0,w) = \sum_i -\log(1 + e^{(w_0 + x_iw)}) + y_i(w_0 + x_iw)
$$
### $\triangleright$ Exercise (b) :
We compute the partial derivative of the objective function with respect to $w_0$ and with respect to an
arbitrary $w_j$:
<br>
<br>
$$
\frac{\partial f}{\partial w_j} = - \sum_i \frac{e^{(w_0 + x_iw)}x_{ij}}{1 + e^{(w_0 + x_iw)}} + \sum_i y_ix_{ij}
=\sum_i \big(y_i - p(y_i = 1|x_i,w_0,w)\big)\,x_{ij} \\
\frac{\partial f}{\partial w_0} = - \sum_i \frac{e^{(w_0 + x_iw)}}{1 + e^{(w_0 + x_iw)}} + \sum_i y_i
=\sum_i \big(y_i - p(y_i = 1|x_i,w_0,w)\big)
$$
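A quick finite-difference check of this gradient on made-up data (a minimal sketch; the data and starting weights are arbitrary):
```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 2))                    # 20 hypothetical observations, 2 features
y = (rng.uniform(size=20) < 0.5).astype(float)  # hypothetical binary labels
w0, w = 0.1, np.array([0.3, -0.2])

def objective(w0, w):
    z = w0 + X @ w
    return np.sum(y * z - np.log(1 + np.exp(z)))

def analytic_grad(w0, w, j):
    p = 1 / (1 + np.exp(-(w0 + X @ w)))  # p(y = 1 | x)
    return np.sum((y - p) * X[:, j])

eps = 1e-6
for j in range(2):
    w_plus, w_minus = w.copy(), w.copy()
    w_plus[j] += eps
    w_minus[j] -= eps
    numeric = (objective(w0, w_plus) - objective(w0, w_minus)) / (2 * eps)
    print(j, numeric, analytic_grad(w0, w, j))  # the two values should agree closely
```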
## Logistic Regression Implementation
```python
import numpy as np
import pandas as pd
import numpy.matlib
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
############################################## Functions we will use #############################################
#Define the sigmoid function
def sigmoid(z):
return 1 / (1 + np.exp(-z))
#Calculate the logistic regression objective value
def LR_CalcObj(XTrain,yTrain,wHat):
#Get the dimensions
[n,p] = XTrain.shape
#Add one's to Xtrain
XTrain=np.c_[np.ones((n,1)),XTrain]
#Calculate X*w abd exp(X*w)
Xw = np.dot(XTrain,wHat)
eXw = np.exp(Xw)
#Calculate objective value
return np.sum(yTrain*Xw - np.log(1+eXw))
#Check whether the objective value has converged
def LR_CheckConvg(old,new,tol):
#Compute difference between objectives
diff = np.abs(old-new);
#Compare difference to tolerance
if(diff < tol):
return True
else:
return False
#Calculate the new value of wHat using the gradient
def LR_UpdateParams(wHat,grad,eta):
#Update value of w
wHat = wHat + eta*grad
return wHat
#Calculate the gradient of the logistic regression
def LR_CalcGrad(XTrain,yTrain,wHat):
#Get the dimensions
[n,p] = XTrain.shape
#Add one's to Xtrain
XTrain=np.c_[np.ones((n,1)),XTrain]
#Calculate X*w abd exp(X*w)
z = np.dot(XTrain,wHat)
h = sigmoid(z)
#Return gradient
return np.dot(XTrain.T, (yTrain - h))
#Run the gradient ascent algorithm for logistic regression
def LR_GradientAscent(XTrain,yTrain):
#Define step size
eta = 0.01
#Define the covergence tolerance
tol = 0.001
#Get the dimensions
[n,p] = XTrain.shape
#Initialize wHat
wHat = np.zeros((p+1,1))
#Initialize objVal
objVals=[]
objVals.append(LR_CalcObj(XTrain,yTrain,wHat))
#Initialize convergence flag
hasConverged = False
while(not hasConverged):
#Calculate gradient
grad = LR_CalcGrad(XTrain,yTrain,wHat)
#Update parameter estimate
wHat = LR_UpdateParams(wHat,grad,eta)
#Calculate new objective
newObj = LR_CalcObj(XTrain,yTrain,wHat)
#Check convergence
hasConverged = LR_CheckConvg(objVals[-1],newObj,tol)
#Store new objective
objVals.append(newObj)
return wHat,objVals
#Predict the labls for a test set using logistic regression
def LR_PredictLabels(XTest,yTest,wHat):
#Get dimensions
[n,p] = XTest.shape
#Add one's to XTest
XTest=np.c_[np.ones((n,1)),XTest]
#Calculate X*w abd exp(X*w)
Xw = np.dot(XTest,wHat)
eXw = np.exp(Xw)
#Calculate p(Y = 0)
pY0 = 1/(1 + eXw)
#Calculate p(Y = 1)
pY1 = eXw/(1 + eXw)
yHat =[]
#Choose best propability
for i in range(0,len(pY0)):
if(pY1[i] > pY0[i]):
yHat.append([1])
else:
yHat.append([0])
yHat=np.array(yHat)
#Calculate error
numErrors = np.sum(yHat!= yTest)
return yHat,numErrors
def PlotDB():
#Load the data
#Training
XTrain = np.array(pd.read_csv('XTrain.csv',header=None))
yTrain = np.array(pd.read_csv('yTrain.csv',header=None))
#Testing
XTest = np.array(pd.read_csv('XTest.csv',header=None))
yTest = np.array(pd.read_csv('yTest.csv',header=None))
#Train logistic regression
[wHat,objVals] = LR_GradientAscent(XTrain,yTrain)
ind0 = []
ind1 = []
for i in range(len(yTest)):
if(yTest[i] == 0):
ind0.append(i)
else:
ind1.append(i)
#Calculate decision boundary
dbDimJ = np.arange(np.min(XTest[:,0]),np.max(XTest[:,0]),step=.01)
    dbDimK = -(wHat[0] + wHat[1]*dbDimJ)/wHat[2]  # w0 + w1*x1 + w2*x2 = 0  =>  x2 = -(w0 + w1*x1)/w2
plt.plot(XTest[ind0,0],XTest[ind0,1],'r.')
plt.plot(XTest[ind1,0],XTest[ind1,1],'b.')
plt.plot(dbDimJ,dbDimK,'k-')
plt.xlabel('Dimension 1')
plt.ylabel('Dimension 2')
plt.title('Logistic Regression Decision Boundary')
plt.show()
############################################## Start of the program #############################################
#Load the data
#Training
XTrain = np.array(pd.read_csv('XTrain.csv',header=None))
yTrain = np.array(pd.read_csv('yTrain.csv',header=None))
#Testing
XTest = np.array(pd.read_csv('XTest.csv',header=None))
yTest = np.array(pd.read_csv('yTest.csv',header=None))
#Train Phase
wHat,objVals = LR_GradientAscent(XTrain,yTrain)
#Test Phase
yHat,numErrors = LR_PredictLabels(XTest,yTest,wHat)
#Print the number of misclassified examples
print('There were %d misclassified examples in the test set\n'%(numErrors))
#Plot the objective values
plt.plot(objVals)
plt.xlabel('Gradient Ascent Iteration')
plt.ylabel('Logistic Regression Objective Value')
plt.title('Convergence of Gradient Ascent for Logistic Regression')
plt.show()
print('Gradient ascent coverges after %d iterations\n'%(len(objVals)-1))
#2D Plot
PlotDB()
#Evaluate the training set and test error as a function of training set size
n = XTrain.shape[0]
kVals =np.arange(10,n+10,step=10)
m = XTest.shape[0]
#Errors for test,train
trainingError = np.zeros((len(kVals),1));
testError = np.zeros((len(kVals),1));
for i in range(len(kVals)):
#Set k
k=kVals[i]
#Generate trainingset
subsetsInds = np.random.randint(0,n,size=k)
XTrainSubset = XTrain[subsetsInds,:]
yTrainSubset = yTrain[subsetsInds,:]
#Train logistic regression
wHat,objVals = LR_GradientAscent(XTrainSubset,yTrainSubset)
#Test classifier on training set
[yHatTrain,numErrorsTrain] = LR_PredictLabels(XTrainSubset,yTrainSubset,wHat)
trainingError[i] = numErrorsTrain/k;
#Test classifier on test set
[yHatTest,numErrorsTest] = LR_PredictLabels(XTest,yTest,wHat)
testError[i] = numErrorsTest/m;
#Plot the above
plt.plot(kVals,trainingError)
plt.plot(kVals,testError)
plt.xlabel('Training Set Size')
plt.ylabel('Prediction Error')
plt.title('Logistic Regression Performance by Training Set Size')
plt.legend(['Training Error','Test Error'])
plt.show()
#Perform the same experiment but average over multiple random training sets
n = XTrain.shape[0]
kVals =np.arange(10,n+10,step=10)
m = XTest.shape[0]
#Errors for test,train
trainingError = np.zeros((len(kVals),1));
testError = np.zeros((len(kVals),1));
for i in range(len(kVals)):
#Set k
k=kVals[i]
for j in range(0,10):
#Generate trainingset
subsetsInds = np.random.randint(0,n,size=k)
XTrainSubset = XTrain[subsetsInds,:]
yTrainSubset = yTrain[subsetsInds,:]
#Train logistic regression
wHat,objVals = LR_GradientAscent(XTrainSubset,yTrainSubset)
#Test classifier on training set
[yHatTrain,numErrorsTrain] = LR_PredictLabels(XTrainSubset,yTrainSubset,wHat)
trainingError[i] += numErrorsTrain/k;
#Test classifier on test set
[yHatTest,numErrorsTest] = LR_PredictLabels(XTest,yTest,wHat)
testError[i] += numErrorsTest/m;
trainingError[i]/= 10;
testError[i] /= 10;
#Plot the above
plt.plot(kVals,trainingError)
plt.plot(kVals,testError)
plt.xlabel('Training Set Size')
plt.ylabel('Prediction Error')
plt.title('Logistic Regression Performance by Training Set Size')
plt.legend(['Training Error','Test Error'])
plt.show()
```
### $\triangleright$ Exercise (g) :
$\bullet$ With a small training set, our model overfits; as the training set size increases, the overfitting goes away. Imagine you only had 2 points in the training set: it would be easy for almost any model to match this set exactly, yet the model would very likely fail horribly on the test set, because it hasn't really seen enough to learn. So with small training sets the model is likely to overfit the training data and perform poorly on the test set. This shows a high variance in the model.
<br>
$\bullet$ The more data points you add to the training set, the more the model overcomes this overfitting and the better it performs on the test set. However, it can perform worse on the training set itself, because the model may no longer be able to fit every example exactly. This can be a good thing, as noise in the training set can be ignored.
### $\triangleright$ Exercise (h) :
$\bullet$ In our classification problem with two classes, a decision boundary or decision surface is a hypersurface that partitions the underlying vector space into two sets, one for each class. The classifier will classify all the points on one side of the decision boundary as belonging to one class and all those on the other side as belonging to the other class.
<br>
$\bullet$ XTrain is an $n × p$ dimensional matrix that contains one training instance per row. So, in order to classify each row, you need a hypersurface that separates the two classes of points.
How to find the decision boundary:
* Assume a 2D problem with two features $x_1$ and $x_2$, then
$$
p(y=1|x,\beta_0,\beta) =
\sigma{(\beta_0 + \beta_1 x_1 + \beta_2 x_2)}=
\frac{1}{1 + e^{-(\beta_0 + \beta_1 x_1 + \beta_2 x_2)}}
$$
<br>
<br>
The decision boundary between 0 and 1 is 0.5.So:
$$
\frac{1}{1 + e^{-(\beta_0 + \beta_1 x_1 + \beta_2 x_2)}} = \frac{1}{2}
$$
<br>
$$
e^{-(\beta_0 + \beta_1 x_1 + \beta_2 x_2)} = 1
$$
<br>
$$
\beta_0 + \beta_1 x_1 + \beta_2 x_2 = 0 \rightarrow x_2 = -\frac{\beta_0}{\beta_2} - \frac{\beta_1}{\beta_2} x_1
$$
```
import scipy
import numpy as np
import sympy
from sympy import *
```
```
ix, iy, iz = symbols('ix iy iz',real=True, constant = True)
hx, hy, hz = symbols('hx hy hz',real = True, constant = False)
```
```
h = Matrix([hx, hy, hz])
i = Matrix([ix, iy, iz])
tmp = 2.*(h.T*i)[0,0]
f = tmp*h - i
f = f.subs(ix,0).subs(iy,0).subs(iz,1)
f
```
Matrix([
[ 2.0*hx*hz],
[ 2.0*hy*hz],
[2.0*hz**2 - 1]])
```
nx, ny, nz, x, y, z = symbols('nx ny nz x y z')
```
```
tmp = sqrt(x*x + y*y + z*z)
n = Matrix([x, y, z])/tmp
n
```
Matrix([
[x/sqrt(x**2 + y**2 + z**2)],
[y/sqrt(x**2 + y**2 + z**2)],
[z/sqrt(x**2 + y**2 + z**2)]])
```
Jf = f.jacobian(Matrix([hx, hy, hz]))
Jf
```
Matrix([
[2.0*hz, 0, 2.0*hx],
[ 0, 2.0*hz, 2.0*hy],
[ 0, 0, 4.0*hz]])
```
Jf.det()
```
16.0*hz**3
```
Jn = n.jacobian(Matrix([x, y, z]))
Jn
```
Matrix([
[-x**2/(x**2 + y**2 + z**2)**(3/2) + 1/sqrt(x**2 + y**2 + z**2), -x*y/(x**2 + y**2 + z**2)**(3/2), -x*z/(x**2 + y**2 + z**2)**(3/2)],
[ -x*y/(x**2 + y**2 + z**2)**(3/2), -y**2/(x**2 + y**2 + z**2)**(3/2) + 1/sqrt(x**2 + y**2 + z**2), -y*z/(x**2 + y**2 + z**2)**(3/2)],
[ -x*z/(x**2 + y**2 + z**2)**(3/2), -y*z/(x**2 + y**2 + z**2)**(3/2), -z**2/(x**2 + y**2 + z**2)**(3/2) + 1/sqrt(x**2 + y**2 + z**2)]])
```
Jn.subs(x, f[0,0]).subs(y, f[1,0]).subs(z, f[2,0])*Jf
```
Matrix([
[-2.0*hy*ix*(2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)*(2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)*sign(hx*ix + hy*iy + hz*iz)/((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2)**(3/2) - 2.0*hz*ix*(2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)*(2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)*sign(hx*ix + hy*iy + hz*iz)/((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2)**(3/2) + (-(2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2/((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2)**(3/2) + 1/sqrt((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2))*(2.0*hx*ix*sign(hx*ix + hy*iy + hz*iz) + 2.0*Abs(hx*ix + hy*iy + hz*iz)), 2.0*hx*iy*(-(2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2/((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2)**(3/2) + 1/sqrt((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2))*sign(hx*ix + hy*iy + hz*iz) - 2.0*hz*iy*(2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)*(2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)*sign(hx*ix + hy*iy + hz*iz)/((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2)**(3/2) - (2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)*(2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)*(2.0*hy*iy*sign(hx*ix + hy*iy + hz*iz) + 2.0*Abs(hx*ix + hy*iy + hz*iz))/((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2)**(3/2), 2.0*hx*iz*(-(2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2/((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2)**(3/2) + 1/sqrt((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2))*sign(hx*ix + hy*iy + hz*iz) - 2.0*hy*iz*(2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)*(2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)*sign(hx*ix + hy*iy + hz*iz)/((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2)**(3/2) - (2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)*(2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)*(2.0*hz*iz*sign(hx*ix + hy*iy + hz*iz) + 2.0*Abs(hx*ix + hy*iy + hz*iz))/((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2)**(3/2)],
[ 2.0*hy*ix*(-(2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2/((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2)**(3/2) + 1/sqrt((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2))*sign(hx*ix + hy*iy + hz*iz) - 2.0*hz*ix*(2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)*(2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)*sign(hx*ix + hy*iy + hz*iz)/((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2)**(3/2) - (2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)*(2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)*(2.0*hx*ix*sign(hx*ix + hy*iy + hz*iz) + 2.0*Abs(hx*ix + hy*iy + hz*iz))/((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2)**(3/2), -2.0*hx*iy*(2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)*(2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)*sign(hx*ix + hy*iy + hz*iz)/((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2)**(3/2) - 2.0*hz*iy*(2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)*(2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)*sign(hx*ix + hy*iy + hz*iz)/((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2)**(3/2) + (-(2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2/((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2)**(3/2) + 1/sqrt((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2))*(2.0*hy*iy*sign(hx*ix + hy*iy + hz*iz) + 2.0*Abs(hx*ix + hy*iy + hz*iz)), -2.0*hx*iz*(2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)*(2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)*sign(hx*ix + hy*iy + hz*iz)/((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2)**(3/2) + 2.0*hy*iz*(-(2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2/((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2)**(3/2) + 1/sqrt((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2))*sign(hx*ix + hy*iy + hz*iz) - (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)*(2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)*(2.0*hz*iz*sign(hx*ix + hy*iy + hz*iz) + 2.0*Abs(hx*ix + hy*iy + hz*iz))/((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2)**(3/2)],
[-2.0*hy*ix*(2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)*(2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)*sign(hx*ix + hy*iy + hz*iz)/((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2)**(3/2) + 2.0*hz*ix*(-(2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2/((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2)**(3/2) + 1/sqrt((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2))*sign(hx*ix + hy*iy + hz*iz) - (2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)*(2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)*(2.0*hx*ix*sign(hx*ix + hy*iy + hz*iz) + 2.0*Abs(hx*ix + hy*iy + hz*iz))/((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2)**(3/2), -2.0*hx*iy*(2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)*(2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)*sign(hx*ix + hy*iy + hz*iz)/((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2)**(3/2) + 2.0*hz*iy*(-(2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2/((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2)**(3/2) + 1/sqrt((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2))*sign(hx*ix + hy*iy + hz*iz) - (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)*(2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)*(2.0*hy*iy*sign(hx*ix + hy*iy + hz*iz) + 2.0*Abs(hx*ix + hy*iy + hz*iz))/((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2)**(3/2), -2.0*hx*iz*(2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)*(2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)*sign(hx*ix + hy*iy + hz*iz)/((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2)**(3/2) - 2.0*hy*iz*(2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)*(2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)*sign(hx*ix + hy*iy + hz*iz)/((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2)**(3/2) + (-(2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2/((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2)**(3/2) + 1/sqrt((2.0*hx*Abs(hx*ix + hy*iy + hz*iz) - ix)**2 + (2.0*hy*Abs(hx*ix + hy*iy + hz*iz) - iy)**2 + (2.0*hz*Abs(hx*ix + hy*iy + hz*iz) - iz)**2))*(2.0*hz*iz*sign(hx*ix + hy*iy + hz*iz) + 2.0*Abs(hx*ix + hy*iy + hz*iz))]])
```
```
# Searching for New Particle Phenomena with a Hybrid Quantum-Classical Machine Learning Algorithm
In this exercise we first learn a basic implementation of **quantum machine learning**, an application of **hybrid quantum-classical algorithms**, and then consider applying it to the **search for new particles in particle-physics experiments**. The quantum machine learning method studied here is a learning technique based on **variational quantum circuits** [[1]](https://journals.aps.org/pra/abstract/10.1103/PhysRevA.98.032309), proposed from the viewpoint of improving the performance of classical machine learning by making use of quantum computers. After learning about the variational method on which the technique is based, and the variational quantum eigensolver method built on it, we move on to quantum machine learning itself.
## Contents
1. [Introduction](#introduction)
2. [The Variational Method and Variational Quantum Circuits](#variational_method)
3. [Machine Learning](#ml)
4. [Quantum Machine Learning](#qml)
5. [A Simple Example](#example)
    1. [Preparing the training data](#func_data)
    2. [Preparing the quantum state](#func_state_preparation)
    3. [State transformation with a variational form](#func_variational_form)
    4. [Measurement and model output](#func_measurement)
6. [Application to a Particle-Physics Search](#susy)
    1. [Preparing the training data](#susy_data)
    2. [Preparing the quantum state](#susy_state_preparation)
    3. [State transformation with a variational form](#susy_variational_form)
    4. [Measurement and model output](#susy_measurement)
7. [[Assignment] Applying VQE](#vqe_application)
8. [References](#references)
## Introduction <a id='introduction'></a>
In recent years, **deep learning** has attracted a great deal of attention in the field of machine learning. By stacking many hidden layers in a **neural network**, deep learning can learn complex relations between inputs and outputs, and the trained model can then be used to predict outputs for new input data. The quantum machine learning algorithm studied here replaces the neural-network part with a variational quantum circuit. In other words, instead of adjusting the weights feeding each layer of neurons, we try to learn the relation between input and output by adjusting the parameters of a variational quantum circuit (for example, the rotation angles of rotation gates).
The strength of a quantum computer is that, thanks to the superposition principle of quantum mechanics, it can represent states using an **exponentially large number of computational basis states**. Exploiting this strength opens up the possibility of learning complex correlations between data, and this is considered the greatest potential advantage of quantum machine learning.
The promise of quantum machine learning is that a polynomial number of quantum gates may be able to express exponentially many functions, but there is no guarantee that it can outperform classical computation on noisy intermediate-scale quantum devices (*Noisy Intermediate-Scale Quantum* devices, NISQ for short), which lack error correction. Because the algorithm is well suited to NISQ devices, however, IBM's experimental team already implemented it on real hardware in March 2019 and published the results as a paper [[2]](https://www.nature.com/articles/s41586-019-0980-2).
## The Variational Method and Variational Quantum Circuits <a id='variational_method'></a>
Variational quantum circuits are based on an idea called the **variational method** (or variational principle). For the variational method in quantum mechanics and the variational quantum eigensolver method that uses it, please refer to this [notebook](vqe.ipynb).
## Machine Learning and Deep Learning <a id='ml'></a>
Described in one (rough) sentence, machine learning is the process of building a machine that returns some prediction based on given data. For example, suppose we have data consisting of two kinds of variables $\boldsymbol{x}$ and $\boldsymbol{y}$ (a vector with elements $(x_i, y_i)$, where $i$ is the element index), and consider machine learning as the problem of finding the relation between these variables. That is, we consider a function $f$ taking the variable $x_i$ as its argument, and approximately determine from the data a function $f$ whose output $\tilde{y_i}=f(x_i)$ satisfies $\tilde{y}_i\simeq y_i$.
In general, this function $f$ has parameters other than the variable $x$. The key to machine learning is therefore to tune those parameters $\boldsymbol{w}$ well and find a function $f=f(x,\boldsymbol{w}^*)$ and parameters $\boldsymbol{w}^*$ such that $y_i\simeq\tilde{y}_i$.
One of the currently dominant ways of approximating the function $f$ is the neural network, modeled on the neuron structure of the brain. The figure below shows the basic structure of a neural network. The circles are the constituent units (neurons), and the arrows indicate the flow of information connecting the neurons. Many different architectures are possible, but the basic one is the layered structure shown in the figure, where the outputs of neurons in one layer become the inputs of neurons in the next layer. Networks with several intermediate "hidden layers", in addition to the input layer that receives the data $x$ and the output layer that produces $\tilde{y}$, are collectively called deep neural networks.
Let us now look at a slightly more mathematical model. For the $j$-th unit $u_j^l$ in layer $l$, with $n$ inputs $o_k^{l-1}$ ($k=1,2,\cdots n$) from the previous layer ($l-1$), we use weight parameters $w_k^l$ on the inputs $o_k^{l-1}$ and consider the output
$$
o_j^l=g\left(\sum_{k=1}^n o_k^{l-1}w_k^l\right)
$$
Depicted as a diagram, this looks as follows.
The function $g$ is called the activation function and gives a nonlinear output for its input. Commonly used activation functions include the sigmoid function and the ReLU (Rectified Linear Unit).
To find the function $f(x,\boldsymbol{w}^*)$, we need a process (called training) that determines the optimal parameters $\boldsymbol{w}^*$. For this purpose we consider a function $L(\boldsymbol{w})$ measuring the difference between the output $\tilde{y}$ and the target variable $y$ (generally called the loss function or cost function):
$$
L(\boldsymbol{w}) = \frac{1}{N}\sum_{i=1}^N L(f(x_i,\boldsymbol{w}),y_i)
$$
Here $N$ is the number of $(x_i, y_i)$ data points. We want to find the parameters $\boldsymbol{w}^*$ that minimize this loss function $L(\boldsymbol{w})$, and it is known that the backpropagation method can be used for this. In this method, one computes the derivative $\Delta_w L(\boldsymbol{w})$ of $L(\boldsymbol{w})$ with respect to each $w$ and updates $w$ as
$$
w'=w-\epsilon\Delta_w L(\boldsymbol{w})
$$
so as to minimize $L(\boldsymbol{w})$ ($w$ and $w'$ are the parameters before and after the update). $\epsilon\:(>0)$ is a parameter called the learning rate, which we basically have to choose by hand.
## Quantum Machine Learning<a id='qml'></a>
A quantum machine learning algorithm based on a variational quantum circuit is generally implemented on a quantum circuit and computed in the following order:
1. Prepare the **training data** $\{(\boldsymbol{x}_i, y_i)\}$, where $\boldsymbol{x}_i$ is the input data vector and $y_i$ the true value (teacher data) for that input ($i$ labels the training samples).
2. Prepare a circuit $U_{\text{in}}(\boldsymbol{x})$ (called the **feature map**) determined by some rule from the input $\boldsymbol{x}$, and create the input state $|\psi_{\rm in}(\boldsymbol{x}_i)\rangle = U_{\text{in}}(\boldsymbol{x}_i)|0\rangle$ that embeds the information of $\boldsymbol{x}_i$.
3. Apply a gate $U(\boldsymbol{\theta})$ that depends on parameters $\boldsymbol{\theta}$ (the **variational form**) to the input state, giving the output state $|\psi_{\rm out}(\boldsymbol{x}_i,\boldsymbol{\theta})\rangle = U(\boldsymbol{\theta})|\psi_{\rm in}(\boldsymbol{x}_i)\rangle$.
4. Measure some **observable** in the output state and obtain the measured value $O$, for example the expectation value of the Pauli $Z$ operator on the first qubit, $\langle Z_1\rangle = \langle \psi_{\rm out} |Z_1|\psi_{\rm out} \rangle$.
5. With $F$ a suitable function, take $F(O)$ as the model output $y(\boldsymbol{x}_i,\boldsymbol{\theta})$.
6. Define a **cost function** $L(\boldsymbol{\theta})$ expressing the discrepancy between the true value $y_i$ and the output $y(\boldsymbol{x}_i,\boldsymbol{\theta})$, and compute it with a classical computation.
7. Update $\boldsymbol{\theta}$ so that $L(\boldsymbol{\theta})$ becomes smaller.
8. Repeat steps 3-7 to find the $\boldsymbol{\theta}=\boldsymbol{\theta^*}$ that minimizes the cost function.
9. $y(\boldsymbol{x},\boldsymbol{\theta^*})$ is the **predictive model** obtained through training.
Let us implement the quantum machine learning algorithm in this order. First, import the necessary libraries.
```python
# Tested with python 3.7.9, qiskit 0.23.5, numpy 1.20.1
import numpy as np
import matplotlib.pyplot as plt
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
from qiskit import Aer, execute
from qiskit.aqua.components.optimizers import SPSA, COBYLA
import logging
from qiskit.aqua import set_qiskit_aqua_logging
set_qiskit_aqua_logging(logging.DEBUG) # choose INFO, DEBUG to see the log
```
## A Simple Example<a id='example'></a>
Consider the problem of approximately determining a function $f$ from training data, given inputs $\{x_i\}$ and the corresponding outputs $y_i=f(x_i)$ of a known function $f$. As an example, let us take $f(x)=x^3$.
### Preparing the Training Data<a id='func_data'></a>
First, we prepare the training data. We draw num_x_train points at random in the range between $x_{\text{min}}$ and $x_{\text{max}}$ and then add noise following a normal distribution. nqubit is the number of qubits and c_depth the depth of the variational-form circuit (described later).
```python
random_seed = 0
np.random.seed(random_seed)
# Define the number of qubits, circuit depth, number of training samples, etc.
nqubit = 3
c_depth = 5
x_min = -1.; x_max = 1.; num_x_train = 30
# Define the target function
func_to_learn = lambda x: x**3
x_train = x_min + (x_max - x_min) * np.random.rand(num_x_train)
y_train = func_to_learn(x_train)
# Add Gaussian noise to the function values
mag_noise = 0.05
y_train_noise = y_train + mag_noise * np.random.randn(num_x_train)
```
### Preparing the Quantum State<a id='func_state_preparation'></a>
Next, we create the circuit $U_{\rm in}(x_i)$ (the feature map) that embeds the input $x_i$ into the initial state $|0\rangle^{\otimes n}$. Following reference [[1]](https://journals.aps.org/pra/abstract/10.1103/PhysRevA.98.032309), we use the rotation gates $R_j^Y(\theta)=e^{i\theta Y_j/2}$ and $R_j^Z(\theta)=e^{i\theta Z_j/2}$ and define
$$
U_{\rm in}(x_i) = \prod_j R_j^Z(\cos^{-1}(x^2))R_j^Y(\sin^{-1}(x))
$$
Applying this $U_{\rm in}(x_i)$ to the standard all-zero state transforms the input $x_i$ into the quantum state $|\psi_{\rm in}(x_i)\rangle=U_{\rm in}(x_i)|0\rangle^{\otimes n}$.
```python
def U_in(x, nqubit):
qr = QuantumRegister(nqubit)
U = QuantumCircuit(qr)
angle_y = np.arcsin(x)
angle_z = np.arccos(x**2)
for i in range(nqubit):
U.ry(angle_y, i)
U.rz(angle_z, i)
return U
```
### State Transformation with a Variational Form<a id='func_variational_form'></a>
#### Constructing the variational quantum circuit $U(\boldsymbol{\theta})$
Next, we build the variational quantum circuit $U(\boldsymbol{\theta})$ to be optimized. This is done in the following three steps:
1. Create two-qubit gates ($\to$ entangle the qubits)
2. Create rotation gates
3. Combine the gates from steps 1. and 2. alternately into one large variational quantum circuit $U(\boldsymbol{\theta})$
#### Creating the two-qubit gates
Here we entangle the qubits using controlled-$Z$ gates ($CZ$), aiming to increase the expressive power of the model.
#### Creating the rotation gates and $U(\boldsymbol{\theta})$
We construct the variational quantum circuit $U(\boldsymbol{\theta})$ by combining the circuit $U_{\text{ent}}$, which generates entanglement using $CZ$ gates, with the rotation gates applied to the $j$-th qubit ($j=1,2,\cdots n$),
\begin{align}
U_{\text{rot}}(\theta_j^l) = R_j^Y(\theta_{j3}^l)R_j^Z(\theta_{j2}^l)R_j^Y(\theta_{j1}^l)
\end{align}
Here $l$ denotes the layer of the quantum circuit, meaning that $U_{\text{ent}}$ and the rotation gates above are repeated for a total of $d$ layers. In this exercise we actually use a structure in which the rotation gates $U_{\text{rot}}$ are first applied once and then the $d$ layers are repeated, so that overall we use a variational quantum circuit of the form
\begin{align}
U\left(\{\theta_j^l\}\right) = \prod_{l=1}^d\left(\left(\prod_{j=1}^n U_{\text{rot}}(\theta_j^l)\right) \cdot U_{\text{ent}}\right)\cdot\prod_{j=1}^n U_{\text{rot}}(\theta_j^0)
\end{align}
In other words, the variational quantum circuit contains $3n(d+1)$ parameters in total. The initial values of $\boldsymbol{\theta}$ are chosen at random in the range $[0, 2\pi]$.
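As a quick sanity check of this parameter count, here is a minimal sketch using the values `nqubit = 3` and `c_depth = 5` set earlier in this notebook; the result should match `num_vars` computed further below.
```python
# 3 * n * (d + 1) parameters for n qubits and circuit depth d
n, d = nqubit, c_depth
print(3 * n * (d + 1))   # expected: 54
```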
```python
def U_out(nqubit, params):
qr = QuantumRegister(nqubit)
#cr = ClassicalRegister(nqubit)
U = QuantumCircuit(qr)
for i in range(nqubit):
U.ry(params[i], i)
U.rz(params[nqubit+i], i)
U.ry(params[nqubit*2+i], i)
for d in range(c_depth):
for j in range(nqubit-1):
U.cz(j, j+1)
U.cz(nqubit-1, 0)
for i in range(nqubit):
U.ry(params[nqubit*(3*d+3)+i], i)
U.rz(params[nqubit*(3*d+4)+i], i)
U.ry(params[nqubit*(3*d+5)+i], i)
return U
```
### Measurement and Model Output<a id='func_measurement'></a>
As the model output (prediction), we use the expectation value of a $Z$-basis measurement of the first qubit in the state $|\psi_{\rm out}(\boldsymbol{x},\boldsymbol{\theta})\rangle=U(\boldsymbol{\theta})|\psi_{\rm in}(\boldsymbol{x})\rangle$. That is, $y(\boldsymbol{x},\boldsymbol{\theta}) = \langle Z_0(\boldsymbol{x},\boldsymbol{\theta}) \rangle = \langle \psi_{\rm out}(\boldsymbol{x},\boldsymbol{\theta})|Z_0|\psi_{\rm out}(\boldsymbol{x},\boldsymbol{\theta})\rangle$.
```python
def pred_circ(x, nqubit, params):
qr = QuantumRegister(nqubit, name='q')
cr = ClassicalRegister(1, name='c')
circ = QuantumCircuit(qr, cr)
u_in = U_in(x, nqubit).to_instruction()
u_out = U_out(nqubit, params).to_instruction()
circ.append(u_in, qr)
circ.append(u_out, qr)
circ.measure(0, 0)
return circ
backend = Aer.get_backend("qasm_simulator")
NUM_SHOTS = 10000
def objective_function(params):
cost_total = 0
for i in range(len(x_train)):
qc = pred_circ(x_train[i], nqubit, params)
result = execute(qc, backend, shots=NUM_SHOTS).result()
counts = result.get_counts(qc)
exp_2Z = (2*counts['0']-2*counts['1'])/NUM_SHOTS
cost = (y_train_noise[i] - exp_2Z)**2
cost_total += cost
return cost_total
```
Here the 0 and 1 measurement results (eigenvalues +1 and -1) are multiplied by 2 in order to widen the range of the $Z$-basis measurement result. As the cost function $L$ we use the sum of the mean squared errors between the model predictions and the true values $y_i$.
Finally, let us run this circuit and look at the result.
```python
num_vars = nqubit*3*(c_depth+1)
params = np.random.rand(num_vars)*2*np.pi
optimizer = COBYLA(maxiter=500, tol=0.0001)
ret = optimizer.optimize(num_vars=num_vars, objective_function=objective_function, initial_point=params)
print('ret[0] =',ret[0])
x_list = np.arange(x_min, x_max, 0.02)
y_pred = []
for x in x_list:
qc = pred_circ(x, nqubit, ret[0])
result = execute(qc, backend, shots=NUM_SHOTS).result()
counts = result.get_counts(qc)
exp_2Z = (2*counts['0']-2*counts['1'])/NUM_SHOTS
y_pred.append(exp_2Z)
plt.plot(x_train, y_train_noise, "o", label='Training Data (w/ Noise)')
plt.plot(x_list, func_to_learn(x_list), label='Original Function')
plt.plot(x_list, np.array(y_pred), label='Predicted Function')
plt.legend()
plt.show()
```
Check the generated figure. You should see that the original function $f(x)=x^3$ is almost recovered from the distribution of the noisy training data.
## Application to a Particle-Physics Search<a id='susy'></a>
In the next exercise, we consider the search for new particles predicted by **supersymmetry** (*Supersymmetry*, SUSY for short), a new physical phenomenon beyond the basic theory of elementary particles (the **Standard Model**). The figure below shows the Feynman diagrams of the SUSY signal (left) and the Standard-Model background (right); we will try to classify these two physical processes with quantum machine learning.
(Figure taken from reference [[3]](https://www.nature.com/articles/ncomms5308))
### Preparing the Training Data<a id='susy_data'></a>
The data used for training is the [SUSY data set](https://archive.ics.uci.edu/ml/datasets/SUSY) in the [machine learning repository](https://archive.ics.uci.edu/ml/index.php) provided by a research group at the University of California, Irvine (UC Irvine). The details of this data set are left to reference [[3]](https://www.nature.com/articles/ncomms5308); it contains simulated signals (kinematic variables) expected when a specific SUSY particle production process, and background events with very similar characteristics, are observed in a detector.
Below, we first choose the kinematic variables to use for training, and then prepare training and test samples containing those variables.
```python
import pandas as pd
from qiskit.aqua import QuantumInstance
from qiskit.circuit.library import TwoLocal, ZFeatureMap, ZZFeatureMap
from qiskit.aqua.algorithms import VQC
from qiskit.aqua.utils import split_dataset_to_data_and_labels, map_label_to_class_name
df = pd.read_csv("data_files/SUSY_1K.csv",
names=('isSignal','lep1_pt','lep1_eta','lep1_phi','lep2_pt','lep2_eta',
'lep2_phi','miss_ene','miss_phi','MET_rel','axial_MET','M_R','M_TR_2',
'R','MT2','S_R','M_Delta_R','dPhi_r_b','cos_theta_r1'))
feature_dim = 3 # dimension of each data point
if feature_dim == 3:
SelectedFeatures = ['lep1_pt', 'lep2_pt', 'miss_ene']
elif feature_dim == 5:
SelectedFeatures = ['lep1_pt','lep2_pt','miss_ene','M_TR_2','M_Delta_R']
elif feature_dim == 7:
SelectedFeatures = ['lep1_pt','lep1_eta','lep2_pt','lep2_eta','miss_ene','M_TR_2','M_Delta_R']
training_size = 20
testing_size = 20
niter = 500
random_seed = 10598
df_sig = df.loc[df.isSignal==1, SelectedFeatures]
df_bkg = df.loc[df.isSignal==0, SelectedFeatures]
df_sig_training = df_sig.values[:training_size]
df_bkg_training = df_bkg.values[:training_size]
df_sig_test = df_sig.values[training_size:training_size+testing_size]
df_bkg_test = df_bkg.values[training_size:training_size+testing_size]
training_input = {'1':df_sig_training, '0':df_bkg_training}
test_input = {'1':df_sig_test, '0':df_bkg_test}
#print('train_input =',training_input)
#print('test_input =',test_input)
datapoints, class_to_label = split_dataset_to_data_and_labels(test_input)
datapoints_tr, class_to_label_tr = split_dataset_to_data_and_labels(training_input)
```
### Preparing the Quantum State<a id='susy_state_preparation'></a>
Next comes the creation of the feature map $U_{\rm in}(\boldsymbol{x}_i)$. Here, following reference [[2]](https://www.nature.com/articles/s41586-019-0980-2), we take
\begin{align}
U_{\phi_{\{k\}}}(\boldsymbol{x}_i)=\exp\left(i\phi_{\{k\}}(\boldsymbol{x}_i)Z_k\right)
\end{align}
where $k$ is the index of the vector elements of the input $\boldsymbol{x}_i$. Choosing $\phi_{\{k\}}(\boldsymbol{x}_i)=x_i^k$ (with $x_i^k$ the $k$-th element of $\boldsymbol{x}_i$), we embed the input $\boldsymbol{x}_i$ into $k$ qubits. Combining this $U_{\phi_{\{k\}}}(x)$ with Hadamard operators, we obtain overall
\begin{align}
U_{\rm in}(\boldsymbol{x}_i) = U_{\phi}(\boldsymbol{x}_i) H^{\otimes n},\:\:U_{\phi}(\boldsymbol{x}_i) = \exp\left(i \sum_{k=1}^n \phi_{\{k\}}(\boldsymbol{x}_i)Z_k\right)
\end{align}
as the input circuit.
```python
feature_map = ZFeatureMap(feature_dim, reps=1)
```
### State Transformation with a Variational Form<a id='susy_variational_form'></a>
The variational quantum circuit $U(\boldsymbol{\theta})$ is almost the same as the one used in the simple example above, but as rotation gates we use
\begin{align}
U_{\text{rot}}(\theta_j^l) = R_j^Y(\theta_{j1}^l)R_j^Z(\theta_{j2}^l)
\end{align}
In the example above we assembled $U(\boldsymbol{\theta})$ ourselves, but Qiskit already provides an API that implements this $U(\boldsymbol{\theta})$, so here we use that.
```python
two = TwoLocal(feature_dim, ['ry','rz'], 'cz', 'full', reps=1)
print(two)
```
### Measurement and Model Output<a id='susy_measurement'></a>
The measurement, the parameter optimization, and the definition of the cost function are also almost the same as in the simple example. Since we use the Qiskit API, the program is considerably simplified.
```python
# To run on a simulator
backend = Aer.get_backend('qasm_simulator')
# To run on a real quantum computer
#from qiskit import IBMQ
#IBMQ.load_account()
#provider0 = IBMQ.get_provider(hub='ibm-q', group='open', project='main')
#backend_name = 'ibmq_santiago'
#backend = provider0.get_backend(backend_name)
optimizer = COBYLA(maxiter=niter, disp=True)
vqc = VQC(optimizer, feature_map, two, training_input, test_input)
quantum_instance = QuantumInstance(backend=backend, shots=1024,
seed_simulator=random_seed, seed_transpiler=random_seed,
skip_qobj_validation=True)
result = vqc.run(quantum_instance)
print(" --- Testing success ratio: ", result['testing_accuracy'])
```
By setting a threshold on the output of the trained model, we can separate signal from background. To evaluate the separation performance, a common procedure is to plot, in a two-dimensional plane, the selection efficiencies obtained as the threshold is varied continuously. This curve is called the ROC (Receiver Operating Characteristic) curve.
Using the training result, let us draw the ROC curves for the training and test data.
```python
predicted_probs, predicted_labels = vqc.predict(datapoints[0])
prob_test_signal = predicted_probs[:,1]
#predicted_classes = map_label_to_class_name(predicted_labels, vqc.label_to_class)
#print(" --- Prediction: {}".format(predicted_classes))
predicted_probs_tr, predicted_labels_tr = vqc.predict(datapoints_tr[0])
prob_train_signal = predicted_probs_tr[:,1]
from sklearn.metrics import roc_curve, auc, roc_auc_score
fpr, tpr, thresholds = roc_curve(datapoints[1], prob_test_signal, drop_intermediate=False)
fpr_tr, tpr_tr, thresholds_tr = roc_curve(datapoints_tr[1], prob_train_signal, drop_intermediate=False)
roc_auc = auc(fpr, tpr)
roc_auc_tr = auc(fpr_tr, tpr_tr)
plt.plot(fpr, tpr, color='darkorange', lw=2, label='Testing Data (AUC = %0.3f)' % roc_auc)
plt.plot(fpr_tr, tpr_tr, color='darkblue', lw=2, label='Training Data (AUC = %0.3f)' % roc_auc_tr)
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--')  # diagonal reference line
plt.xlim([-0.05, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend(loc="lower right")
plt.show()
```
## [Assignment] Applying VQE to Particle-Physics Experiments <a id='vqe_application'></a>
Here we consider whether VQE can be applied to high-energy physics experiments. An example application and the assignment are prepared in this [notebook](vqe_tracking.ipynb). Please submit the assignment at the end as a report.
## References<a id='references'></a>
1. K. Mitarai, M. Negoro, M. Kitagawa, and K. Fujii, “Quantum circuit learning”, [Phys. Rev. A 98, 032309 (2018)](https://journals.aps.org/pra/abstract/10.1103/PhysRevA.98.032309)
2. V. Havlicek _et al._ , “Supervised learning with quantum-enhanced feature spaces”, [Nature 567, 209–212 (2019)](https://www.nature.com/articles/s41586-019-0980-2)
3. P. Baldi, P. Sadowski, and D. Whiteson, “Searching for exotic particles in high-energy physics with deep learning”, [Nature Commun. 5, 4308 (2014)](https://www.nature.com/articles/ncomms5308)
```python
# Notebook imports and packages
import numpy as np
from sympy import symbols, diff, lambdify
```
# Please lambdify your derivatives
$$f(x, y)=\frac{1}{3^{-x^2-y^2}+1}$$
<hr color="lightblue">
$$\frac{\partial f(x, y)}{\partial x}=\frac{2x\ln \left(3\right)\cdot \:3^{-x^2-y^2}}{\left(3^{-x^2-y^2}+1\right)^2}$$
<hr color="lightblue">
$$\frac{\partial f(x, y)}{\partial y}=\frac{2y\ln \left(3\right)\cdot \:3^{-y^2-x^2}}{\left(3^{-x^2-y^2}+1\right)^2}$$
```python
def f(x, y):
return 1/(3**(-(x**2)-(y**2)) + 1)
```
```python
a, b = symbols('x, y')
```
Make sure to lambdify your functions:
If you do GD with `diff(f(a,b),a).evalf(subs={a:params[0],b:params[1]})` differentiating every time, it will be slow.
10k iterations with `dfx(x,y)` is way faster than 1k iterations differentiating every time.
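To see why this matters, here is a rough timing sketch (the exact numbers will depend on your machine; the 10k-vs-1k comparison above is the author's observation, this just illustrates the gap). It reuses `f`, `a`, and `b` from the cells above, and `dfx_expr`/`dfx_fast` are names introduced only for this sketch.
```python
import time

dfx_expr = diff(f(a, b), a)

# Option 1: substitute into the symbolic derivative on every call (slow)
t0 = time.perf_counter()
for _ in range(100):
    dfx_expr.evalf(subs={a: 1.8, b: 1.0})
print('evalf loop:   ', round(time.perf_counter() - t0, 4), 'seconds')

# Option 2: lambdify once, then call the compiled function (fast)
dfx_fast = lambdify([a, b], dfx_expr)
t0 = time.perf_counter()
for _ in range(100):
    dfx_fast(1.8, 1.0)
print('lambdify loop:', round(time.perf_counter() - t0, 4), 'seconds')
```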
```python
dfx=lambdify([a,b], diff(f(a,b), a))
dfy=lambdify([a,b], diff(f(a,b), b))
```
```python
params = np.array([1.8, 1.0])
```
```python
dfx(x=1.8,y=1.0), dfx(1.8,1.0), dfx(*params) # All three result in the same thing
```
(0.036808971619750504, 0.036808971619750504, 0.036808971619750504)
# Batch Gradient Descent with SymPy
```python
multiplier = .1
max_iter = 10000
params = np.array([1.8, 1.0])
for n in range(max_iter):
gradient_x, gradient_y = dfx(*params), dfy(*params)
gradients = np.array([gradient_x,gradient_y]) # These two first steps could be combined into one;
params = params - multiplier * gradients
print("Values in gradient array", gradients)
print("Minimum occurs at (x,y):", tuple(params))
print("The cost is:\t\t", f(*params))
```
Values in gradient array [1.61557085e-244 8.97539364e-245]
Minimum occurs at (x,y): (2.7795548456563392e-244, 1.544197136475744e-244)
The cost is: 0.5
|
667150e54731e46049ef7d4906c6e19ad9fb93c0
| 5,185 |
ipynb
|
Jupyter Notebook
|
Section_04/Example_04_(05-08)/07-GD_and_Lambdify.ipynb
|
ArielMAJ/Data-Science-and-Machine-Learning_Bootcamp
|
afae685c96d9fc8af0b2ee1be4d817df505c6c8d
|
[
"MIT"
] | null | null | null |
Section_04/Example_04_(05-08)/07-GD_and_Lambdify.ipynb
|
ArielMAJ/Data-Science-and-Machine-Learning_Bootcamp
|
afae685c96d9fc8af0b2ee1be4d817df505c6c8d
|
[
"MIT"
] | null | null | null |
Section_04/Example_04_(05-08)/07-GD_and_Lambdify.ipynb
|
ArielMAJ/Data-Science-and-Machine-Learning_Bootcamp
|
afae685c96d9fc8af0b2ee1be4d817df505c6c8d
|
[
"MIT"
] | null | null | null | 5,185 | 5,185 | 0.694889 | true | 647 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.91611 | 0.833325 | 0.763417 |
__label__eng_Latn
| 0.673676 | 0.612005 |
# EPA-1316 Introduction to *Urban* Data Science
## Lab 6: plotting, Simple Linear Regression,K-NN Regression
**TU Delft**<br>
**Q1 2020**<br>
**Instructor:** Trivik Verma <br>
**TAs:** Aarthi Meenakshi Sundaram, Jelle Egbers, Tess Kim, Lotte Lourens, Amir Ebrahimi Fard, Giulia Reggiani, Bramka Jafino, Talia Kaufmann <br>
**[Computational Urban Science & Policy Lab](https://research.trivikverma.com/)** <br>
---
## <font color='red'> Repetition of Content </font>
Some important things in this lab that we have addressed before:
* Another example to illustrate the difference between `.iloc` and `.loc` in `pandas` -- > [here](#iloc)
* Some notes on why we are adding a constant in our linear regression model --> [here](#constant)
---
## Learning Goals
By the end of this lab, you should be able to:
* Review `numpy` including 2-D arrays and understand array reshaping
* Use `matplotlib` to make plots
* Feel comfortable with simple linear regression
* Feel comfortable with $k$ nearest neighbors
**This lab corresponds to Week 6.**
## Table of Contents
#### <font color='red'> HIGHLIGHTS FROM PRE-LAB </font>
* [1 - Review of numpy](#first-bullet)
* [2 - Another Intro to matplotlib plus more ](#second-bullet)
#### <font color='red'> LAB 6 MATERIAL </font>
* [3 - Simple Linear Regression](#third-bullet)
* [4 - Building a model with `statsmodels` and `sklearn`](#fourth-bullet)
* 5 - Example: Simple linear regression with automobile data - as a homework
* [6 - $k$Nearest Neighbors](#sixth-bullet)
```python
import numpy as np
import scipy as sp
import matplotlib as mpl
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import pandas as pd
import time
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)
#import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
# Displays the plots for us.
%matplotlib inline
```
<a class="anchor" id="first-bullet"></a>
## 1 - Review of the `numpy` Python library
In lab1 we learned about the `numpy` library [(documentation)](http://www.numpy.org/) and its fast array structure, called the `numpy array`.
```python
# import numpy
import numpy as np
```
```python
# make an array
my_array = np.array([1,4,9,16])
my_array
```
array([ 1, 4, 9, 16])
```python
print(f'Size of my array: {my_array.size}, or length of my array: {len(my_array)}')
print (f'Shape of my array: {my_array.shape}')
```
Size of my array: 4, or length of my array: 4
Shape of my array: (4,)
#### Notice the way the shape appears in numpy arrays
- For a 1D array, .shape returns a tuple with 1 element (n,)
- For a 2D array, .shape returns a tuple with 2 elements (n,m)
- For a 3D array, .shape returns a tuple with 3 elements (n,m,p)
```python
# How to reshape a 1D array to a 2D
my_array.reshape(-1,2)
```
array([[ 1, 4],
[ 9, 16]])
Numpy arrays support the same operations as lists! Below we slice and iterate.
```python
print("array[2:4]:", my_array[2:4]) # A slice of the array
# Iterate over the array
for ele in my_array:
print("element:", ele)
```
array[2:4]: [ 9 16]
element: 1
element: 4
element: 9
element: 16
Remember `numpy` gains a lot of its efficiency from being **strongly typed** (all elements are of the same type, such as integer or floating point). If the elements of an array are of a different type, `numpy` will force them into the same type (the longest in terms of bytes)
```python
mixed = np.array([1, 2.3, 'eleni', True])
print(type(1), type(2.3), type('eleni'), type(True))
mixed # all elements will become strings
```
<class 'int'> <class 'float'> <class 'str'> <class 'bool'>
array(['1', '2.3', 'eleni', 'True'], dtype='<U32')
Next, we push ahead to two-dimensional arrays and begin to dive into some of the deeper aspects of `numpy`.
```python
# create a 2d-array by handing a list of lists
my_array2d = np.array([ [1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12]
])
my_array2d
```
array([[ 1, 2, 3, 4],
[ 5, 6, 7, 8],
[ 9, 10, 11, 12]])
### Array Slicing (a reminder...)
Numpy arrays can be sliced, and can be iterated over with loops. Below is a schematic illustrating slicing two-dimensional arrays.
Notice that the list slicing syntax still works!
`array[2:,3]` says "in the array, get rows 2 through the end, column 3".
`array[3,:]` says "in the array, get row 3, all columns".
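As a quick check of these rules, try them on the `my_array2d` defined above (note it has only 3 rows, indices 0–2, so row 2 is used for the second example):
```python
print(my_array2d[2:, 3])    # rows 2 through the end, column 3  -> [12]
print(my_array2d[2, :])     # row 2, all columns                -> [ 9 10 11 12]
print(my_array2d[:2, 1:3])  # first two rows, columns 1 and 2
```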
<a class="anchor" id="iloc"></a>
### Pandas Slicing (a reminder...)
`.iloc` is by position (position is unique), `.loc` is by label (label is not unique)
```python
# import cast dataframe
cast = pd.read_csv('data/cast.csv', encoding='utf_8')
cast.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>title</th>
<th>year</th>
<th>name</th>
<th>type</th>
<th>character</th>
<th>n</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Closet Monster</td>
<td>2015</td>
<td>Buffy #1</td>
<td>actor</td>
<td>Buffy 4</td>
<td>31.0</td>
</tr>
<tr>
<th>1</th>
<td>Suuri illusioni</td>
<td>1985</td>
<td>Homo $</td>
<td>actor</td>
<td>Guests</td>
<td>22.0</td>
</tr>
<tr>
<th>2</th>
<td>Battle of the Sexes</td>
<td>2017</td>
<td>$hutter</td>
<td>actor</td>
<td>Bobby Riggs Fan</td>
<td>10.0</td>
</tr>
<tr>
<th>3</th>
<td>Secret in Their Eyes</td>
<td>2015</td>
<td>$hutter</td>
<td>actor</td>
<td>2002 Dodger Fan</td>
<td>NaN</td>
</tr>
<tr>
<th>4</th>
<td>Steve Jobs</td>
<td>2015</td>
<td>$hutter</td>
<td>actor</td>
<td>1988 Opera House Patron</td>
<td>NaN</td>
</tr>
</tbody>
</table>
</div>
```python
# get me rows 10 to 13 (python slicing style : exclusive of end)
cast.iloc[10:13]
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>title</th>
<th>year</th>
<th>name</th>
<th>type</th>
<th>character</th>
<th>n</th>
</tr>
</thead>
<tbody>
<tr>
<th>10</th>
<td>When the Man Went South</td>
<td>2014</td>
<td>Taipaleti 'Atu'ake</td>
<td>actor</td>
<td>Two Palms - Ua'i Paame</td>
<td>8.0</td>
</tr>
<tr>
<th>11</th>
<td>Little Angel (Angelita)</td>
<td>2015</td>
<td>Michael 'babeepower' Viera</td>
<td>actor</td>
<td>Chico</td>
<td>9.0</td>
</tr>
<tr>
<th>12</th>
<td>Mixing Nia</td>
<td>1998</td>
<td>Michael 'babeepower' Viera</td>
<td>actor</td>
<td>Rapper</td>
<td>NaN</td>
</tr>
</tbody>
</table>
</div>
```python
# get me columns 0 to 2 but all rows - use head()
cast.iloc[:, 0:2].head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>title</th>
<th>year</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Closet Monster</td>
<td>2015</td>
</tr>
<tr>
<th>1</th>
<td>Suuri illusioni</td>
<td>1985</td>
</tr>
<tr>
<th>2</th>
<td>Battle of the Sexes</td>
<td>2017</td>
</tr>
<tr>
<th>3</th>
<td>Secret in Their Eyes</td>
<td>2015</td>
</tr>
<tr>
<th>4</th>
<td>Steve Jobs</td>
<td>2015</td>
</tr>
</tbody>
</table>
</div>
```python
# get me rows 10 to 13 AND only columns 0 to 2
cast.iloc[10:13, 0:2]
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>title</th>
<th>year</th>
</tr>
</thead>
<tbody>
<tr>
<th>10</th>
<td>When the Man Went South</td>
<td>2014</td>
</tr>
<tr>
<th>11</th>
<td>Little Angel (Angelita)</td>
<td>2015</td>
</tr>
<tr>
<th>12</th>
<td>Mixing Nia</td>
<td>1998</td>
</tr>
</tbody>
</table>
</div>
```python
# COMPARE: get me rows 10 to 13 (pandas slicing style : inclusive of end)
cast.loc[10:13]
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>title</th>
<th>year</th>
<th>name</th>
<th>type</th>
<th>character</th>
<th>n</th>
</tr>
</thead>
<tbody>
<tr>
<th>10</th>
<td>When the Man Went South</td>
<td>2014</td>
<td>Taipaleti 'Atu'ake</td>
<td>actor</td>
<td>Two Palms - Ua'i Paame</td>
<td>8.0</td>
</tr>
<tr>
<th>11</th>
<td>Little Angel (Angelita)</td>
<td>2015</td>
<td>Michael 'babeepower' Viera</td>
<td>actor</td>
<td>Chico</td>
<td>9.0</td>
</tr>
<tr>
<th>12</th>
<td>Mixing Nia</td>
<td>1998</td>
<td>Michael 'babeepower' Viera</td>
<td>actor</td>
<td>Rapper</td>
<td>NaN</td>
</tr>
<tr>
<th>13</th>
<td>The Replacements</td>
<td>2000</td>
<td>Steven 'Bear'Boyd</td>
<td>actor</td>
<td>Defensive Tackle - Washington Sentinels</td>
<td>NaN</td>
</tr>
</tbody>
</table>
</div>
```python
# give me columns 'year' and 'type' by label but only for rows 5 to 10
cast.loc[5:10,['year','type']]
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>year</th>
<th>type</th>
</tr>
</thead>
<tbody>
<tr>
<th>5</th>
<td>2015</td>
<td>actor</td>
</tr>
<tr>
<th>6</th>
<td>2015</td>
<td>actor</td>
</tr>
<tr>
<th>7</th>
<td>2009</td>
<td>actor</td>
</tr>
<tr>
<th>8</th>
<td>2014</td>
<td>actor</td>
</tr>
<tr>
<th>9</th>
<td>2014</td>
<td>actor</td>
</tr>
<tr>
<th>10</th>
<td>2014</td>
<td>actor</td>
</tr>
</tbody>
</table>
</div>
#### Another example of positioning with `.iloc` and `loc`
Look at the following data frame. It is a bad example because we have duplicate values in the index, but that is legal in pandas. It is simply bad practice, and we do it here only to illustrate the difference between positioning with `.iloc` and `.loc`. To keep rows unique, though, `pandas` internally maintains its own index, which in this dataframe runs from `0` to `2`.
```python
index = ['A', 'Z', 'A']
famous = pd.DataFrame({'Elton': ['singer', 'Candle in the wind', 'male'],
'Maraie': ['actress' , 'Do not know', 'female'],
'num': np.random.randn(3)}, index=index)
famous
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Elton</th>
<th>Maraie</th>
<th>num</th>
</tr>
</thead>
<tbody>
<tr>
<th>A</th>
<td>singer</td>
<td>actress</td>
<td>0.804906</td>
</tr>
<tr>
<th>Z</th>
<td>Candle in the wind</td>
<td>Do not know</td>
<td>-0.114201</td>
</tr>
<tr>
<th>A</th>
<td>male</td>
<td>female</td>
<td>-1.068322</td>
</tr>
</tbody>
</table>
</div>
```python
# accessing elements by label can bring up duplicates!!
famous.loc['A'] # since we want all rows is the same as famous.loc['A',:]
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Elton</th>
<th>Maraie</th>
<th>num</th>
</tr>
</thead>
<tbody>
<tr>
<th>A</th>
<td>singer</td>
<td>actress</td>
<td>0.804906</td>
</tr>
<tr>
<th>A</th>
<td>male</td>
<td>female</td>
<td>-1.068322</td>
</tr>
</tbody>
</table>
</div>
```python
# accessing elements by position is unique - brings up only one row
famous.iloc[1]
```
Elton Candle in the wind
Maraie Do not know
num -0.114201
Name: Z, dtype: object
<a class="anchor" id="second-bullet"></a>
## 2 - Plotting with matplotlib and beyond
<br>
`matplotlib` is a very powerful `python` library for making scientific plots.
We will not focus too much on the internal aspects of `matplotlib` in today's lab. There are many excellent tutorials out there for `matplotlib`. For example,
* [`matplotlib` homepage](https://matplotlib.org/)
* [`matplotlib` tutorial](https://github.com/matplotlib/AnatomyOfMatplotlib)
Conveying your findings convincingly is an absolutely crucial part of any analysis. Therefore, you must be able to write well and make compelling visuals. Creating informative visuals is an involved process and we won't cover that in this lab. However, part of creating informative data visualizations means generating *readable* figures. If people can't read your figures or have a difficult time interpreting them, they won't understand the results of your work. Here are some non-negotiable commandments for any plot:
* Label $x$ and $y$ axes
* Axes labels should be informative
* Axes labels should be large enough to read
* Make tick labels large enough
* Include a legend if necessary
* Include a title if necessary
* Use appropriate line widths
* Use different line styles for different lines on the plot
* Use different markers for different lines
There are other important elements, but that list should get you started on your way.
We will work with `matplotlib` and `seaborn` for plotting in this class. `matplotlib` is a very powerful `python` library for making scientific plots. `seaborn` is a little more specialized in that it was developed for statistical data visualization. We have already covered `seaborn` in previous weeks. However, you can look at the [seaborn documentation](https://seaborn.pydata.org) for more.
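As a tiny illustration of the `seaborn` style, here is a minimal sketch (it assumes a reasonably recent `seaborn` is installed; the import at the top of this notebook is commented out, so we import it here):
```python
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2 * x + rng.normal(size=200)

sns.scatterplot(x=x, y=y)
plt.xlabel('x')
plt.ylabel('y')
plt.title('A quick seaborn scatterplot')
plt.show()
```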
First, let's generate some data.
#### Let's plot some functions (Don't worry about the math of these functions - they are examples)
We will use the following three functions to make some plots:
* Logistic function:
\begin{align*}
f\left(z\right) = \dfrac{1}{1 + be^{-az}}
\end{align*}
where $a$ and $b$ are parameters.
* Hyperbolic tangent:
\begin{align*}
g\left(z\right) = b\tanh\left(az\right) + c
\end{align*}
where $a$, $b$, and $c$ are parameters.
* Rectified Linear Unit:
\begin{align*}
h\left(z\right) =
\left\{
\begin{array}{lr}
z, \quad z > 0 \\
\epsilon z, \quad z\leq 0
\end{array}
\right.
\end{align*}
where $\epsilon > 0$ is a small, positive parameter.
You are given the code for the first two functions. Notice that $z$ is passed in as a `numpy` array and that the functions are returned as `numpy` arrays. Parameters are passed in as floats.
You should write a function to compute the rectified linear unit. The input should be a `numpy` array for $z$ and a positive float for $\epsilon$.
```python
import numpy as np
def logistic(z: np.ndarray, a: float, b: float) -> np.ndarray:
""" Compute logistic function
Inputs:
a: exponential parameter
b: exponential prefactor
z: numpy array; domain
Outputs:
f: numpy array of floats, logistic function
"""
den = 1.0 + b * np.exp(-a * z)
return 1.0 / den
def stretch_tanh(z: np.ndarray, a: float, b: float, c: float) -> np.ndarray:
""" Compute stretched hyperbolic tangent
Inputs:
a: horizontal stretch parameter (a>1 implies a horizontal squish)
b: vertical stretch parameter
c: vertical shift parameter
z: numpy array; domain
Outputs:
g: numpy array of floats, stretched tanh
"""
return b * np.tanh(a * z) + c
def relu(z: np.ndarray, eps: float = 0.01) -> np.ndarray:
""" Compute rectificed linear unit
Inputs:
eps: small positive parameter
z: numpy array; domain
Outputs:
h: numpy array; relu
"""
return np.fmax(z, eps * z)
```
Now let's make some plots. First, let's just warm up and plot the logistic function.
```python
x = np.linspace(-5.0, 5.0, 100) # Equally spaced grid of 100 pts between -5 and 5
f = logistic(x, 1.0, 1.0) # Generate data
```
```python
plt.plot(x, f)
plt.xlabel('x')
plt.ylabel('f')
plt.title('Logistic Function')
plt.grid(True)
```
#### Figures with subplots
Let's start thinking about the plots as objects. We have the `figure` object which is like a matrix of smaller plots named `axes`. You can use array notation when handling it.
```python
fig, ax = plt.subplots(1,1) # Get figure and axes objects
ax.plot(x, f) # Make a plot
# Create some labels
ax.set_xlabel('x')
ax.set_ylabel('f')
ax.set_title('Logistic Function')
# Grid
ax.grid(True)
```
Wow, it's *exactly* the same plot! Notice, however, the use of `ax.set_xlabel()` instead of `plt.xlabel()`. The difference is tiny, but you should be aware of it. I will use this plotting syntax from now on.
What else do we need to do to make this figure better? Here are some options:
* Make labels bigger!
* Make line fatter
* Make tick mark labels bigger
* Make the grid less pronounced
* Make figure bigger
Let's get to it.
```python
fig, ax = plt.subplots(1,1, figsize=(10,6)) # Make figure bigger
# Make line plot
ax.plot(x, f, lw=4)
# Update ticklabel size
ax.tick_params(labelsize=24)
# Make labels
ax.set_xlabel(r'$x$', fontsize=24) # Use TeX for mathematical rendering
ax.set_ylabel(r'$f(x)$', fontsize=24) # Use TeX for mathematical rendering
ax.set_title('Logistic Function', fontsize=24)
ax.grid(True, lw=1.5, ls='--', alpha=0.75)
```
Notice:
* `lw` stands for `linewidth`. We could also write `ax.plot(x, f, linewidth=4)`
* `ls` stands for `linestyle`.
* `alpha` stands for transparency.
The only thing remaining to do is to change the $x$ limits. Clearly these should go from $-5$ to $5$.
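For instance, a minimal sketch that rebuilds the same figure and sets the limits explicitly (it reuses `x` and `f` computed in the cells above):
```python
fig, ax = plt.subplots(1, 1, figsize=(10, 6))
ax.plot(x, f, lw=4)
ax.tick_params(labelsize=24)
ax.set_xlabel(r'$x$', fontsize=24)
ax.set_ylabel(r'$f(x)$', fontsize=24)
ax.set_title('Logistic Function', fontsize=24)
ax.grid(True, lw=1.5, ls='--', alpha=0.75)
# Restrict the x-axis to the domain we actually evaluated
ax.set_xlim(-5.0, 5.0)
```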
```python
#fig.savefig('figs/logistic.png')
# Put this in a markdown cell and uncomment this to check what you saved.
#
```
#### Resources
If you want to see all the styles available, please take a look at the documentation.
* [Line styles](https://matplotlib.org/2.0.1/api/lines_api.html#matplotlib.lines.Line2D.set_linestyle)
* [Marker styles](https://matplotlib.org/2.0.1/api/markers_api.html#module-matplotlib.markers)
* [Everything you could ever want](https://matplotlib.org/2.0.1/api/lines_api.html#matplotlib.lines.Line2D.set_marker)
We haven't discussed it yet, but you can also put a legend on a figure. Here are some additional resources:
* [Legend](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.legend.html)
* [Grid](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.grid.html)
`ax.legend(loc='best', fontsize=24);`
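For example, a minimal sketch combining line styles, a grid, and a legend, reusing the `logistic` and `stretch_tanh` functions defined earlier in this lab:
```python
fig, ax = plt.subplots(1, 1, figsize=(10, 6))
z = np.linspace(-5.0, 5.0, 100)
ax.plot(z, logistic(z, 1.0, 1.0), lw=3, ls='-', label='logistic')
ax.plot(z, stretch_tanh(z, 1.0, 0.5, 0.5), lw=3, ls='--', label='stretched tanh')
ax.set_xlabel(r'$z$', fontsize=18)
ax.set_ylabel('output', fontsize=18)
ax.tick_params(labelsize=18)
ax.grid(True, ls='--', alpha=0.5)
ax.legend(loc='best', fontsize=18);
```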
<a class="anchor" id="third-bullet"></a>
## 3 - Simple Linear Regression
Linear regression and its many extensions are a workhorse of the statistics and data science community, both in application and as a reference point for other models. Most of the major concepts in machine learning can be and often are discussed in terms of various linear regression models. Thus, this section will introduce you to building and fitting linear regression models and some of the process behind it, so that you can 1) fit models to data you encounter 2) experiment with different kinds of linear regression and observe their effects 3) see some of the technology that makes regression models work.
### Linear regression with a toy dataset
We first examine a toy problem, focusing our efforts on fitting a linear model to a small dataset with three observations. Each observation consists of one predictor $x_i$ and one response $y_i$ for $i = 1, 2, 3$,
\begin{align*}
(x , y) = \{(x_1, y_1), (x_2, y_2), (x_3, y_3)\}.
\end{align*}
To be very concrete, let's set the values of the predictors and responses.
\begin{equation*}
(x , y) = \{(1, 2), (2, 2), (3, 4)\}
\end{equation*}
There is no line of the form $\beta_0 + \beta_1 x = y$ that passes through all three observations, since the data are not collinear. Thus our aim is to find the line that best fits these observations in the *least-squares sense*, as discussed in lecture.
* Make two numpy arrays out of this data, x_train and y_train
* Check the dimentions of these arrays
* Try to reshape them into a different shape
* Make points into a very simple scatterplot
* Make a better scatterplot
```python
x_train = np.array([1,2,3])
y_train = np.array([2,3,6])
type(x_train)
```
numpy.ndarray
```python
x_train.shape
```
(3,)
```python
x_train = x_train.reshape(3,1)
x_train.shape
```
(3, 1)
```python
# Make a simple scatterplot
plt.scatter(x_train,y_train)
# check dimensions
print(x_train.shape,y_train.shape)
```
```python
def nice_scatterplot(x, y, title):
# font size
f_size = 18
# make the figure
fig, ax = plt.subplots(1,1, figsize=(8,5)) # Create figure object
# set axes limits to make the scale nice
ax.set_xlim(np.min(x)-1, np.max(x) + 1)
ax.set_ylim(np.min(y)-1, np.max(y) + 1)
# adjust size of tickmarks in axes
ax.tick_params(labelsize = f_size)
# remove tick labels
ax.tick_params(labelbottom=False, bottom=False)
# adjust size of axis label
ax.set_xlabel(r'$x$', fontsize = f_size)
ax.set_ylabel(r'$y$', fontsize = f_size)
# set figure title label
ax.set_title(title, fontsize = f_size)
# you may set up grid with this
ax.grid(True, lw=1.75, ls='--', alpha=0.15)
# make actual plot (Notice the label argument!)
#ax.scatter(x, y, label=r'$my points$')
#ax.scatter(x, y, label='$my points$')
ax.scatter(x, y, label=r'$my\,points$')
ax.legend(loc='best', fontsize = f_size);
return ax
nice_scatterplot(x_train, y_train, 'hello nice plot')
```
#### Formulae
Linear regression is special among the models we study because it can be solved explicitly. While most other models (and even some advanced versions of linear regression) must be solved iteratively, linear regression has a formula where you can simply plug in the data.
For the single predictor case it is:
\begin{align}
\beta_1 &= \frac{\sum_{i=1}^n{(x_i-\bar{x})(y_i-\bar{y})}}{\sum_{i=1}^n{(x_i-\bar{x})^2}}\\
\beta_0 &= \bar{y} - \beta_1\bar{x}
\end{align}
Where $\bar{y}$ and $\bar{x}$ are the mean of the y values and the mean of the x values, respectively.
### Building a model from scratch
In this part, we will solve the equations for simple linear regression and find the best fit solution to our toy problem.
The snippets of code below implement the linear regression equations on the observed predictors and responses, which we'll call the training data set. Let's walk through the code.
We have to reshape our arrays to 2D. We will see later why.
* make an array with shape (2,3)
* reshape it to a size that you want
```python
xx = np.array([[1,2,3],[4,6,8]])
xxx = xx.reshape(-1,2)
xxx.shape
```
(3, 2)
```python
# Reshape to be a proper 2D array
x_train = x_train.reshape(x_train.shape[0], 1)
y_train = y_train.reshape(y_train.shape[0], 1)
print(x_train.shape)
```
(3, 1)
```python
# first, compute means
y_bar = np.mean(y_train)
x_bar = np.mean(x_train)
# build the two terms
numerator = np.sum( (x_train - x_bar)*(y_train - y_bar) )
denominator = np.sum((x_train - x_bar)**2)
print(numerator.shape, denominator.shape) #check shapes
```
() ()
* Why the empty brackets? (The numerator and denominator are scalars, as expected.)
```python
#slope beta1
beta_1 = numerator/denominator
#intercept beta0
beta_0 = y_bar - beta_1*x_bar
print("The best-fit line is {0:3.2f} + {1:3.2f} * x".format(beta_0, beta_1))
print(f'The best fit is {beta_0}')
```
The best-fit line is -0.33 + 2.00 * x
The best fit is -0.3333333333333335
<div class="exercise"><b>Exercise</b></div>
Turn the code from the above cells into a function called `simple_linear_regression_fit`, that inputs the training data and returns `beta0` and `beta1`.
To do this, copy and paste the code from the above cells below and adjust the code as needed, so that the training data becomes the input and the betas become the output.
```python
def simple_linear_regression_fit(x_train: np.ndarray, y_train: np.ndarray) -> np.ndarray:
return
```
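Before peeking at the provided solution below, here is one possible sketch of the function (an assumption on our part, not the contents of `solutions/simple_linear_regression_fit.py`): it reshapes 1-D inputs to 2-D, prints a note when it does so, and returns the betas as an array.
```python
def simple_linear_regression_fit(x_train: np.ndarray, y_train: np.ndarray) -> np.ndarray:
    """Fit y = beta0 + beta1 * x by least squares; return np.array([beta0, beta1])."""
    # work with 2D column vectors, as in the scratch computation above
    if x_train.ndim == 1:
        print("Reshaping features array.")
        x_train = x_train.reshape(x_train.shape[0], 1)
    if y_train.ndim == 1:
        print("Reshaping observations array.")
        y_train = y_train.reshape(y_train.shape[0], 1)

    x_bar, y_bar = np.mean(x_train), np.mean(y_train)
    beta_1 = np.sum((x_train - x_bar) * (y_train - y_bar)) / np.sum((x_train - x_bar)**2)
    beta_0 = y_bar - beta_1 * x_bar
    return np.array([beta_0, beta_1])
```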
Check your function by calling it with the training data from above and printing out the beta values.
```python
# Your code here
# First try it yourself and if it doesn't work, try again. Still doesn't work? Try working with a friend or in groups. Nothing? OK, use the code below.
```
```python
# %load solutions/simple_linear_regression_fit.py
```
* Let's run this function and see the coefficients
```python
x_train = np.array([1 ,2, 3])
y_train = np.array([2, 2, 4])
betas = simple_linear_regression_fit(x_train, y_train)
beta_0 = betas[0]
beta_1 = betas[1]
print("The best-fit line is {0:8.6f} + {1:8.6f} * x".format(beta_0, beta_1))
```
Reshaping features array.
Reshaping observations array.
The best-fit line is 0.666667 + 1.000000 * x
<div class="exercise"><b>Exercise</b></div>
* Do the values of `beta0` and `beta1` seem reasonable?
* Plot the training data using a scatter plot.
* Plot the best fit line with `beta0` and `beta1` together with the training data.
```python
# Your code here
# First try it yourself and if it doesn't work, try again. Still doesn't work? Try working with a friend or in groups. Nothing? OK, use the code below.
```
```python
# %load solutions/best_fit_scatterplot.py
```
The values of `beta0` and `beta1` seem roughly reasonable. They capture the positive correlation. The line does appear to be trying to get as close as possible to all the points.
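If you want a starting point for the plot, here is one possible sketch (not the solution file loaded above); it assumes `x_train`, `y_train`, `beta_0`, and `beta_1` from the previous cells:
```python
best_fit = beta_0 + beta_1 * x_train

fig, ax = plt.subplots(1, 1, figsize=(8, 5))
ax.scatter(x_train, y_train, label='training data')
ax.plot(x_train, best_fit, ls='--', color='k', label='best fit line')
ax.set_xlabel(r'$x$')
ax.set_ylabel(r'$y$')
ax.legend(loc='best');
```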
<a class="anchor" id="fourth-bullet"></a>
## 4 - Building a model with `statsmodels` and `sklearn`
Now that we can concretely fit the training data from scratch, let's learn two `python` packages to do it all for us:
* [statsmodels](http://www.statsmodels.org/stable/regression.html) and
* [scikit-learn (sklearn)](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html).
Our goal is to show how to implement simple linear regression with these packages. For an important sanity check, we compare the $\beta$ values from `statsmodels` and `sklearn` to the $\beta$ values that we found from above with our own implementation.
For the purposes of this lab, `statsmodels` and `sklearn` do the same thing. More generally though, `statsmodels` tends to be easier for inference \[finding the values of the slope and intercept and discussing uncertainty in those values\], whereas `sklearn` has machine-learning algorithms and is better for prediction \[guessing y values for a given x value\]. (Note that both packages make the same guesses, it's just a question of which activity they provide more support for.)
**Note:** `statsmodels` and `sklearn` are different packages! Unless we specify otherwise, you can use either one.
<a class="anchor" id="constant"></a>
### Why do we need to add a constant in our simple linear regression model?
Let's say we have a data set of two observations, each with one predictor and one response variable. Running a simple linear regression model would then give the following two equations. $$y_1=\beta_0 + \beta_1*x_1$$ $$y_2=\beta_0 + \beta_1*x_2$$ <BR> For simplicity and calculation efficiency we want to "absorb" the constant $\beta_0$ into an array with $\beta_1$ so we have only multiplication. To do this we introduce the constant ${x}^0=1$<br>$$y_1=\beta_0*{x_1}^0 + \beta_1*x_1$$ $$y_2=\beta_0 * {x_2}^0 + \beta_1*x_2$$ <BR> That becomes:
$$y_1=\beta_0*1 + \beta_1*x_1$$ $$y_2=\beta_0 * 1 + \beta_1*x_2$$<bR>
In matrix notation:
$$
\left [
\begin{array}{c}
y_1 \\ y_2 \\
\end{array}
\right] =
\left [
\begin{array}{cc}
1& x_1 \\ 1 & x_2 \\
\end{array}
\right]
\cdot
\left [
\begin{array}{c}
\beta_0 \\ \beta_1 \\
\end{array}
\right]
$$
<BR><BR>
`sklearn` adds the constant for us, whereas in `statsmodels` we need to add it explicitly using `sm.add_constant`
Below is the code for `statsmodels`. `Statsmodels` does not by default include the column of ones in the $X$ matrix, so we include it manually with `sm.add_constant`.
```python
import statsmodels.api as sm
```
```python
# create the X matrix by appending a column of ones to x_train
X = sm.add_constant(x_train)
# this is the same matrix as in our scratch problem!
print(X)
# build the OLS model (ordinary least squares) from the training data
toyregr_sm = sm.OLS(y_train, X)
# do the fit and save regression info (parameters, etc) in results_sm
results_sm = toyregr_sm.fit()
# pull the beta parameters out from results_sm
beta0_sm = results_sm.params[0]
beta1_sm = results_sm.params[1]
print(f'The regression coef from statsmodels are: beta_0 = {beta0_sm:8.6f} and beta_1 = {beta1_sm:8.6f}')
```
[[1. 1.]
[1. 2.]
[1. 3.]]
The regression coef from statsmodels are: beta_0 = 0.666667 and beta_1 = 1.000000
Besides the beta parameters, `results_sm` contains a ton of other potentially useful information.
```python
import warnings
warnings.filterwarnings('ignore')
print(results_sm.summary())
```
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.750
Model: OLS Adj. R-squared: 0.500
Method: Least Squares F-statistic: 3.000
Date: Thu, 17 Sep 2020 Prob (F-statistic): 0.333
Time: 11:43:06 Log-Likelihood: -2.0007
No. Observations: 3 AIC: 8.001
Df Residuals: 1 BIC: 6.199
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 0.6667 1.247 0.535 0.687 -15.181 16.514
x1 1.0000 0.577 1.732 0.333 -6.336 8.336
==============================================================================
Omnibus: nan Durbin-Watson: 3.000
Prob(Omnibus): nan Jarque-Bera (JB): 0.531
Skew: -0.707 Prob(JB): 0.767
Kurtosis: 1.500 Cond. No. 6.79
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
Now let's turn our attention to the `sklearn` library.
```python
from sklearn import linear_model
```
```python
# build the least squares model
toyregr = linear_model.LinearRegression()
# save regression info (parameters, etc) in results_skl
results = toyregr.fit(x_train, y_train)
# pull the beta parameters out from results_skl
beta0_skl = toyregr.intercept_
beta1_skl = toyregr.coef_[0]
print("The regression coefficients from the sklearn package are: beta_0 = {0:8.6f} and beta_1 = {1:8.6f}".format(beta0_skl, beta1_skl))
```
The regression coefficients from the sklearn package are: beta_0 = 0.666667 and beta_1 = 1.000000
We should feel pretty good about ourselves now, and we're ready to move on to a real problem!
### The `scikit-learn` library and the shape of things
Before diving into a "real" problem, let's discuss more of the details of `sklearn`.
`Scikit-learn` is the main `Python` machine learning library. It consists of many learners which can learn models from data, as well as a lot of utility functions such as `train_test_split()`.
Use the following to add the library into your code:
```python
import sklearn
```
In `scikit-learn`, an **estimator** is a Python object that implements the methods `fit(X, y)` and `predict(T)`
Let's see the structure of `scikit-learn` needed to make these fits. `fit()` always takes two arguments:
```python
estimator.fit(Xtrain, ytrain)
```
We will consider two estimators in this lab: `LinearRegression` and `KNeighborsRegressor`.
It is very important to understand that `Xtrain` must be in the form of a **2D array**, with each row corresponding to one sample and each column corresponding to the feature values for that sample.
`ytrain` on the other hand is a simple array of responses. These are continuous for regression problems.
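As a preview of the $k$-NN estimator we will meet later in this lab, here is a minimal sketch of the shared fit/predict pattern on toy data (the names and numbers below are made up for illustration; note the 2D feature matrix):
```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# toy data: 6 samples, 1 feature each -> shape (6, 1)
Xtrain = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])
ytrain = np.array([1.2, 1.9, 3.2, 3.9, 5.1, 6.2])

knn = KNeighborsRegressor(n_neighbors=2)
knn.fit(Xtrain, ytrain)                      # same interface as LinearRegression
print(knn.predict(np.array([[2.5], [4.5]])))  # average of the 2 nearest neighbors
```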
### Practice with `sklearn` and a real dataset
We begin by loading up the `mtcars` dataset. This data was extracted from the 1974 Motor Trend US magazine, and comprises fuel consumption and 10 aspects of automobile design and performance for 32 automobiles (1973–74 models). We will load this data into a dataframe with 32 observations on 11 (numeric) variables. Here is an explanation of the features:
- `mpg` is Miles/(US) gallon
- `cyl` is Number of cylinders,
- `disp` is Displacement (cu.in.),
- `hp` is Gross horsepower,
- `drat` is Rear axle ratio,
- `wt` is the Weight (1000 lbs),
- `qsec` is 1/4 mile time,
- `vs` is Engine (0 = V-shaped, 1 = straight),
- `am` is Transmission (0 = automatic, 1 = manual),
- `gear` is the Number of forward gears,
- `carb` is Number of carburetors.
```python
import pandas as pd
#load mtcars
dfcars = pd.read_csv("data/mtcars.csv")
dfcars.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Unnamed: 0</th>
<th>mpg</th>
<th>cyl</th>
<th>disp</th>
<th>hp</th>
<th>drat</th>
<th>wt</th>
<th>qsec</th>
<th>vs</th>
<th>am</th>
<th>gear</th>
<th>carb</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Mazda RX4</td>
<td>21.0</td>
<td>6</td>
<td>160.0</td>
<td>110</td>
<td>3.90</td>
<td>2.620</td>
<td>16.46</td>
<td>0</td>
<td>1</td>
<td>4</td>
<td>4</td>
</tr>
<tr>
<th>1</th>
<td>Mazda RX4 Wag</td>
<td>21.0</td>
<td>6</td>
<td>160.0</td>
<td>110</td>
<td>3.90</td>
<td>2.875</td>
<td>17.02</td>
<td>0</td>
<td>1</td>
<td>4</td>
<td>4</td>
</tr>
<tr>
<th>2</th>
<td>Datsun 710</td>
<td>22.8</td>
<td>4</td>
<td>108.0</td>
<td>93</td>
<td>3.85</td>
<td>2.320</td>
<td>18.61</td>
<td>1</td>
<td>1</td>
<td>4</td>
<td>1</td>
</tr>
<tr>
<th>3</th>
<td>Hornet 4 Drive</td>
<td>21.4</td>
<td>6</td>
<td>258.0</td>
<td>110</td>
<td>3.08</td>
<td>3.215</td>
<td>19.44</td>
<td>1</td>
<td>0</td>
<td>3</td>
<td>1</td>
</tr>
<tr>
<th>4</th>
<td>Hornet Sportabout</td>
<td>18.7</td>
<td>8</td>
<td>360.0</td>
<td>175</td>
<td>3.15</td>
<td>3.440</td>
<td>17.02</td>
<td>0</td>
<td>0</td>
<td>3</td>
<td>2</td>
</tr>
</tbody>
</table>
</div>
```python
# Fix the column title
dfcars = dfcars.rename(columns={"Unnamed: 0":"car name"})
dfcars.head()
```
|    | car name          | mpg  | cyl | disp  | hp  | drat | wt    | qsec  | vs | am | gear | carb |
|----|-------------------|------|-----|-------|-----|------|-------|-------|----|----|------|------|
| 0  | Mazda RX4         | 21.0 | 6   | 160.0 | 110 | 3.90 | 2.620 | 16.46 | 0  | 1  | 4    | 4    |
| 1  | Mazda RX4 Wag     | 21.0 | 6   | 160.0 | 110 | 3.90 | 2.875 | 17.02 | 0  | 1  | 4    | 4    |
| 2  | Datsun 710        | 22.8 | 4   | 108.0 | 93  | 3.85 | 2.320 | 18.61 | 1  | 1  | 4    | 1    |
| 3  | Hornet 4 Drive    | 21.4 | 6   | 258.0 | 110 | 3.08 | 3.215 | 19.44 | 1  | 0  | 3    | 1    |
| 4  | Hornet Sportabout | 18.7 | 8   | 360.0 | 175 | 3.15 | 3.440 | 17.02 | 0  | 0  | 3    | 2    |
```python
dfcars.shape
```
(32, 12)
#### Searching for values: how many cars have 4 gears?
```python
len(dfcars[dfcars.gear == 4].drop_duplicates(subset='car name', keep='first'))
```
12
Next, let's split the dataset into a training set and test set.
```python
# split into training set and testing set
from sklearn.model_selection import train_test_split
#set random_state to get the same split every time
traindf, testdf = train_test_split(dfcars, test_size=0.2, random_state=42)
```
```python
# testing set is around 20% of the total data; training set is around 80%
print("Shape of full dataset is: {0}".format(dfcars.shape))
print("Shape of training dataset is: {0}".format(traindf.shape))
print("Shape of test dataset is: {0}".format(testdf.shape))
```
Shape of full dataset is: (32, 12)
Shape of training dataset is: (25, 12)
Shape of test dataset is: (7, 12)
Now we have training and test data. We still need to select a predictor and a response from this dataset. Keep in mind that we need to choose the predictor and response from both the training and test set. You will do this in the exercises below. However, we provide some starter code for you to get things going.
```python
traindf.head()
```
|    | car name          | mpg  | cyl | disp  | hp  | drat | wt    | qsec  | vs | am | gear | carb |
|----|-------------------|------|-----|-------|-----|------|-------|-------|----|----|------|------|
| 25 | Fiat X1-9         | 27.3 | 4   | 79.0  | 66  | 4.08 | 1.935 | 18.90 | 1  | 1  | 4    | 1    |
| 12 | Merc 450SL        | 17.3 | 8   | 275.8 | 180 | 3.07 | 3.730 | 17.60 | 0  | 0  | 3    | 3    |
| 0  | Mazda RX4         | 21.0 | 6   | 160.0 | 110 | 3.90 | 2.620 | 16.46 | 0  | 1  | 4    | 4    |
| 4  | Hornet Sportabout | 18.7 | 8   | 360.0 | 175 | 3.15 | 3.440 | 17.02 | 0  | 0  | 3    | 2    |
| 16 | Chrysler Imperial | 14.7 | 8   | 440.0 | 230 | 3.23 | 5.345 | 17.42 | 0  | 0  | 3    | 4    |
```python
# Extract the response variable that we're interested in
y_train = traindf.mpg
y_train
```
25 27.3
12 17.3
0 21.0
4 18.7
16 14.7
5 18.1
13 15.2
11 16.4
23 13.3
1 21.0
2 22.8
26 26.0
3 21.4
21 15.5
27 30.4
22 15.2
18 30.4
31 21.4
20 21.5
7 24.4
10 17.8
14 10.4
28 15.8
19 33.9
6 14.3
Name: mpg, dtype: float64
<div class="exercise"><b>Exercise</b></div>
Use slicing to get the same vector `y_train`
----
Now, notice the shape of `y_train`.
```python
y_train.shape, type(y_train)
```
((25,), pandas.core.series.Series)
### Array reshape
This is a 1D array, as should be the case for the **Y** array. Remember, `sklearn` requires a 2D array only for the predictor array. You will have to pay close attention to this in the exercises later. `Sklearn` doesn't care too much about the shape of `y_train`.
The point of going through that process was to show you how to reshape your data into the correct format.
**IMPORTANT:** Remember that your response variable `ytrain` can be a 1D vector, but your predictor variable `xtrain` ***must*** be a 2D array!
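A tiny sketch of the reshape idea (toy numbers, not the `mtcars` data):
```python
import numpy as np

v = np.array([2.6, 3.2, 3.4])   # shape (3,): fine as a response vector y
X = v.reshape(-1, 1)            # shape (3, 1): the 2D form required for a single-feature predictor
print(v.shape, X.shape)         # (3,) (3, 1)
```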
<a class="anchor" id="fifth-bullet"></a>
## 5 - Example: Simple linear regression with automobile data - as a homework
To get to part 6 below, you will first have to complete homework exercise 4 and then use it to complete the k-nearest-neighbours exercise below. The variables `X_train` and `y_train` come from that homework, and it is your task to produce them. If you are not able to, solutions will be provided to you; a hedged sketch is also given below.
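A hedged sketch of what the homework might produce, assuming the car weight `wt` is used as the single predictor and `mpg` as the response (the same pairing used in the plots further down):
```python
# Assumption: wt (weight) is the predictor, mpg is the response.
X_train = traindf[['wt']].values   # 2D array, shape (n_train, 1)
y_train = traindf.mpg.values       # 1D array
X_test = testdf[['wt']].values
y_test = testdf.mpg.values
```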
<a class="anchor" id="sixth-bullet"></a>
## 6 - $k$-nearest neighbors
Now that you're familiar with `sklearn`, you're ready to do a KNN regression.
Sklearn's regressor is called `sklearn.neighbors.KNeighborsRegressor`. Its main parameter is the number of nearest neighbors (`n_neighbors`). There are other parameters, such as the distance metric (the default, Minkowski with `p=2`, is the Euclidean distance). For a list of all the parameters see the [Sklearn kNN Regressor Documentation](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsRegressor.html).
Let's use $5$ nearest neighbors.
```python
# Import the library
from sklearn.neighbors import KNeighborsRegressor
```
```python
# Set number of neighbors
k = 5
knnreg = KNeighborsRegressor(n_neighbors=k)
```
```python
# Fit the regressor - make sure your numpy arrays are the right shape
knnreg.fit(X_train, y_train)
# Evaluate the outcome on the train set using R^2
r2_train = knnreg.score(X_train, y_train)
# Print results
print(f'kNN model with {k} neighbors gives R^2 on the train set: {r2_train:.5}')
```
kNN model with 5 neighbors gives R^2 on the train set: 0.87181
```python
knnreg.predict(X_test)
```
array([20.14, 14. , 15.3 , 26.3 , 19.56, 17.06, 16.88])
<div class="exercise"><b>Exercise</b></div>
Calculate and print the $R^{2}$ score on the test set
```python
r2_test = knnreg.score(X_test, y_test)
print(f'kNN model with {k} neighbors gives R^2 on the test set: {r2_test:.5}')
```
kNN model with 5 neighbors gives R^2 on the test set: 0.69922
Not so good? Let's vary the number of neighbors and see what we get.
```python
# Make our lives easy by storing the different regressors in a dictionary
regdict = {}
# Make our lives easier by entering the k values from a list
k_list = [1, 2, 4, 15]
# Do a bunch of KNN regressions
for k in k_list:
knnreg = KNeighborsRegressor(n_neighbors=k)
knnreg.fit(X_train, y_train)
# Store the regressors in a dictionary
regdict[k] = knnreg
# Print the dictionary to see what we have
regdict
```
{1: KNeighborsRegressor(n_neighbors=1),
2: KNeighborsRegressor(n_neighbors=2),
4: KNeighborsRegressor(n_neighbors=4),
15: KNeighborsRegressor(n_neighbors=15)}
Now let's plot all the k values in the same plot.
```python
fig, ax = plt.subplots(1,1, figsize=(10,6))
ax.plot(dfcars.wt, dfcars.mpg, 'o', label="data")
xgrid = np.linspace(np.min(dfcars.wt), np.max(dfcars.wt), 100)
# let's unpack the dictionary to its elements (items) which is the k and Regressor
for k, regressor in regdict.items():
predictions = regressor.predict(xgrid.reshape(-1,1))
ax.plot(xgrid, predictions, label="{}-NN".format(k))
ax.legend();
```
<div class="exercise"><b>Exercise</b></div>
Explain what you see in the graph. **Hint** Notice how the $1$-NN goes through every point on the training set but utterly fails elsewhere.
Let's look at the scores on the training set.
```python
ks = range(1, 15) # Grid of k's
scores_train = [] # R2 scores
for k in ks:
# Create KNN model
knnreg = KNeighborsRegressor(n_neighbors=k)
# Fit the model to training data
knnreg.fit(X_train, y_train)
# Calculate R^2 score
score_train = knnreg.score(X_train, y_train)
scores_train.append(score_train)
# Plot
fig, ax = plt.subplots(1,1, figsize=(12,8))
ax.plot(ks, scores_train,'o-')
ax.set_xlabel(r'$k$')
ax.set_ylabel(r'$R^{2}$')
```
<div class="exercise"><b>Exercise</b></div>
* Why do we get a perfect $R^2$ at k=1 for the training set?
* Make the same plot as above on the *test* set.
* What is the best $k$?
```python
# Your code here
# First try it yourself; if it doesn't work, try again. Still doesn't work? Try working with a friend in groups. Nothing? OK, use the code below.
```
```python
# %load solutions/knn_regression.py
```
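If the solutions file is not available, a hedged sketch of one possible solution (reusing `ks`, `X_train`, `y_train`, `X_test` and `y_test` from above) is:
```python
scores_test = []  # R2 scores on the test set
for k in ks:
    knnreg = KNeighborsRegressor(n_neighbors=k)
    knnreg.fit(X_train, y_train)
    scores_test.append(knnreg.score(X_test, y_test))

fig, ax = plt.subplots(1, 1, figsize=(12, 8))
ax.plot(ks, scores_test, 'o-')
ax.set_xlabel(r'$k$')
ax.set_ylabel(r'$R^{2}$')

best_k = list(ks)[int(np.argmax(scores_test))]
print(f'Best k on the test set: {best_k}')
```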
<h1 align=center style="color: #005496; font-size: 4.2em;">Machine Learning with Python</h1>
<h2 align=center>Laboratory on Numpy / Matplotlib / Scikit-learn</h2>
***
***
## Introduction
In the past few years, Python has become the de-facto standard programming language for data analytics. Python's success is due to several factors, but one major reason has been the availability of powerful, open-source libraries for scientific computation such as Numpy, Scipy and Matplotlib. Python is also the most popular programming language for machine learning, thanks to libraries such as Scikit-learn and TensorFlow.
In this lecture we will explore the basics of Numpy, Matplotlib and Scikit-learn. The first is a library for data manipulation through the powerful `numpy.ndarray` data structure; the second is useful for graphical visualization and plotting; the third is a general purpose library for machine learning, containing dozens of algorithms for classification, regression and clustering.
In this lecture we assume familiarity with the Python programming language. If you are not familiar with the language, we advise you to look it up before moving on to the next sections. Here are some useful links to learn about Python:
- https://docs.python.org/3/tutorial/introduction.html
- https://www.learnpython.org/
- http://www.scipy-lectures.org/
If you have never seen a page like this, it is a **Jupyter Notebook**. Here one can easily embed Python code and run it on the fly. You can run the code in a cell by selecting the cell and clicking the *Run* button (top). You can do the same using the **SHIFT+Enter** shortcut. You can modify the existing cells, run them and finally save your changes.
## Requirements
1. Python (preferably version > 3.3): https://www.python.org/downloads/
2. Numpy, Scipy and Matplotlib: https://www.scipy.org/install.html
3. Scikit-learn: http://scikit-learn.org/stable/install.html
## References
- https://docs.scipy.org/doc/numpy/
- https://docs.scipy.org/doc/scipy/reference/
- https://matplotlib.org/users/index.html
- http://scikit-learn.org/stable/documentation.html
# Numpy
Numpy provides high-performance data structures for data manipulation and numeric computation. In particular, we will look at the `numpy.ndarray`, a data structure for manipulating vectors, matrices and tensors. Let's start by importing `numpy`:
```python
# the np alias is very common
import numpy as np
```
We can initialize a Numpy array from a Python list using the `numpy.array` function:
```python
# if the argument is a list of numbers, the array will be a 1-dimensional vector
a = np.array([1, 2, 3, 4, 5, 6])
a
```
array([1, 2, 3, 4, 5, 6])
```python
# if the argument is a list of lists, the array will be a 2-dimensional matrix
M = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]])
M
```
array([[ 1, 2, 3, 4],
[ 5, 6, 7, 8],
[ 9, 10, 11, 12],
[13, 14, 15, 16]])
Given a Numpy array, we can check its `shape`, a tuple containing the number of elements for each dimension:
```python
a.shape
```
(6,)
```python
M.shape
```
(4, 4)
The size of an array is its total number of elements:
```python
a.size
```
6
```python
M.size
```
16
We can do quite a few nice things with Numpy arrays that are not possible with standard Python lists.
### Indexing
Numpy arrays allow us to index data in quite advanced ways.
```python
# A 1d vector can be indexed in all the common ways
a[0]
```
1
```python
a[1:3]
```
array([2, 3])
```python
a[0:5:2]
```
array([1, 3, 5])
```python
# Use a boolean mask
mask = [True, False, False, True, True, False]
a[mask]
```
array([1, 4, 5])
```python
# Access specific elements by passing a list of index
a[[1, 4, 5]]
```
array([2, 5, 6])
The power of Numpy indexing capabilities starts showing up with 2d arrays:
```python
# Access a single element of the matrix
M[0, 1]
```
2
```python
# Access an entire row
M[1]
```
array([5, 6, 7, 8])
```python
# Access an entire column
M[:,2]
```
array([ 3, 7, 11, 15])
```python
# Extract a sub-matrix
M[1:3, 0:2]
```
array([[ 5, 6],
[ 9, 10]])
### Data manipulation
We can manipulate data in several ways.
```python
# Flatten a matrix
M.flatten()
```
array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16])
```python
# Reshaping a matrix
M.reshape(2, 8)
```
array([[ 1, 2, 3, 4, 5, 6, 7, 8],
[ 9, 10, 11, 12, 13, 14, 15, 16]])
```python
# The last index can be automatically inferred using -1
M.reshape(2, -1)
```
array([[ 1, 2, 3, 4, 5, 6, 7, 8],
[ 9, 10, 11, 12, 13, 14, 15, 16]])
```python
# Computing the max and the min
M.max(), M.min()
```
(16, 1)
```python
# Computing the mean and standard deviation
M.mean(), M.std()
```
(8.5, 4.6097722286464435)
```python
# Computing the sum along the rows
M.sum(axis=1)
```
array([10, 26, 42, 58])
### Linear algebra
Numpy is very useful for all sorts of numeric computation, especially linear algebra:
```python
# Transpose
M.T
```
array([[ 1, 5, 9, 13],
[ 2, 6, 10, 14],
[ 3, 7, 11, 15],
[ 4, 8, 12, 16]])
```python
# Adding and multiplying a constant
10 * M + 5
```
array([[ 15, 25, 35, 45],
[ 55, 65, 75, 85],
[ 95, 105, 115, 125],
[135, 145, 155, 165]])
```python
# Element wise product
b = np.array([-1, -2, 4, 6, 8, -4])
a * b
```
array([ -1, -4, 12, 24, 40, -24])
```python
# Dot product
a.dot(b)
```
47
```python
# More linear algebra in the package numpy.linalg
# Determinant
np.linalg.det(M)
```
4.7331654313261276e-30
```python
# Eigenvalues
np.linalg.eigvals(M)
```
array([ 3.62093727e+01, -2.20937271e+00, -3.18863232e-15,
-1.34840081e-16])
### Vector generation and sampling
Numpy allows us to generate or randomly sample vectors:
```python
# Generate an array with 0.5 spacing
x = np.arange(-10, 10, 0.5)
x
```
array([-10. , -9.5, -9. , -8.5, -8. , -7.5, -7. , -6.5, -6. ,
-5.5, -5. , -4.5, -4. , -3.5, -3. , -2.5, -2. , -1.5,
-1. , -0.5, 0. , 0.5, 1. , 1.5, 2. , 2.5, 3. ,
3.5, 4. , 4.5, 5. , 5.5, 6. , 6.5, 7. , 7.5,
8. , 8.5, 9. , 9.5])
```python
# Generate an array with 20 equally spaced points
x = np.linspace(-10, 10, 20)
x
```
array([-10. , -8.94736842, -7.89473684, -6.84210526,
-5.78947368, -4.73684211, -3.68421053, -2.63157895,
-1.57894737, -0.52631579, 0.52631579, 1.57894737,
2.63157895, 3.68421053, 4.73684211, 5.78947368,
6.84210526, 7.89473684, 8.94736842, 10. ])
```python
# Sample a vector from a standardize normal distribution
np.random.normal(size=(10,))
```
array([-1.45737898, 0.23555453, 0.24578509, -2.07977299, 1.08726802,
-0.41107403, 0.12253856, 1.47129648, 0.5223578 , -0.29633517])
### Functions
Numpy provides all sorts of mathematical functions we can apply to arrays
```python
# Exponential function
np.exp(x)
```
array([ 4.53999298e-05, 1.30079023e-04, 3.72699966e-04,
1.06785292e-03, 3.05959206e-03, 8.76628553e-03,
2.51169961e-02, 7.19647439e-02, 2.06192028e-01,
5.90777514e-01, 1.69268460e+00, 4.84984802e+00,
1.38956932e+01, 3.98136782e+01, 1.14073401e+02,
3.26840958e+02, 9.36458553e+02, 2.68312340e+03,
7.68763460e+03, 2.20264658e+04])
```python
# Sine
np.sin(x)
```
array([ 0.54402111, -0.4594799 , -0.99916962, -0.53027082, 0.47389753,
0.99970104, 0.5163796 , -0.48818921, -0.99996678, -0.50235115,
0.50235115, 0.99996678, 0.48818921, -0.5163796 , -0.99970104,
-0.47389753, 0.53027082, 0.99916962, 0.4594799 , -0.54402111])
```python
# A gaussian function
y = np.exp(-(x ** 2)/2)
y
```
array([ 1.92874985e-22, 4.13228632e-18, 2.92342653e-14,
6.82937941e-11, 5.26814324e-08, 1.34190319e-05,
1.12868324e-03, 3.13480292e-02, 2.87498569e-01,
8.70659634e-01, 8.70659634e-01, 2.87498569e-01,
3.13480292e-02, 1.12868324e-03, 1.34190319e-05,
5.26814324e-08, 6.82937941e-11, 2.92342653e-14,
4.13228632e-18, 1.92874985e-22])
# Matplotlib
The above matrices provide little insight without the possibility of visualizing them properly. Matplotlib is a powerful library for data visualization. Let's plot the above function.
```python
# the following line is only needed to show plots in the notebook
%matplotlib inline
import matplotlib.pyplot as plt
x = np.linspace(-10, 10, 200) # get a sample of the x axis
y = np.exp(-(x**2)/(2*1)) # compute the function for all points in the sample
plt.plot(x, y) # add the curve to the plot
plt.show() # show the plot
```
We can also plot more than one line in the same figure and add a grid to the plot.
```python
z = np.exp(-(x**2)/(2*10))
plt.grid() # add the grid under the curves
plt.plot(x, y) # add the first curve to the plot
plt.plot(x, z) # add the second curve to the plot
plt.show() # show the plot
```
We can also set several properties of the plot in this way:
```python
plt.grid()
plt.xlabel('x') # add a label to the x axis
plt.ylabel('y') # add a label to the y axis
plt.xticks(np.arange(-10, 11, 2)) # specify in which point to place a tick on the x axis
plt.yticks(np.arange(0, 2.2, 0.2)) # and on the y axis
# rs- stands for red, squared markers, solid line
# yd-- stands for yellow, diamond markers, dashed line
plt.plot(x, y, 'rs-', markevery=10, label='sigma=1') # add a style and a label and specify the gap
plt.plot(x, z, 'yd--', markevery=10, label='sigma=10') # between markers for both curves
plt.legend() # add the legend (displays the labels of the curves)
plt.show() # show the plot
```
Finally, we can save the plot into a png file in this way:
```python
plt.grid()
plt.xlabel('x')
plt.ylabel('y')
plt.xticks(np.arange(-10, 11, 2))
plt.yticks(np.arange(0, 2.2, 0.2))
plt.plot(x, y, 'rs-', markevery=10, label='sigma=1')
plt.plot(x, z, 'yd--', markevery=10, label='sigma=10')
plt.legend()
plt.savefig('plot.png', dpi=300) # saves the plot into the file plot.png with 300 dpi
# will not work on lion0b because directory is read-only
```
# Scikit-learn
Let's now dive into the real **Machine Learning** part. *Scikit-learn* is perhaps the most wide-spread library for Machine Learning in use nowadays, and most of its fame is due to its extreme simplicity. With Scikit-learn it is possible to easily manage datasets, and train a wide range of classifiers out-of-the-box. It is also useful for several other Machine Learning tasks such as regression, clustering, dimensionality reduction, and model selection.
In the following we will see how to use Scikit-learn to load a dataset, train a classifier and perform validation and model selection.
Scikit-learn comes with a range of popular reference datasets. Let's load and use the *Digits* dataset:
```python
from sklearn.datasets import load_digits
digits = load_digits()
print(digits.DESCR) # print a description of the digits dataset
```
Optical Recognition of Handwritten Digits Data Set
===================================================
Notes
-----
Data Set Characteristics:
:Number of Instances: 5620
:Number of Attributes: 64
:Attribute Information: 8x8 image of integer pixels in the range 0..16.
:Missing Attribute Values: None
:Creator: E. Alpaydin (alpaydin '@' boun.edu.tr)
:Date: July; 1998
This is a copy of the test set of the UCI ML hand-written digits datasets
http://archive.ics.uci.edu/ml/datasets/Optical+Recognition+of+Handwritten+Digits
The data set contains images of hand-written digits: 10 classes where
each class refers to a digit.
Preprocessing programs made available by NIST were used to extract
normalized bitmaps of handwritten digits from a preprinted form. From a
total of 43 people, 30 contributed to the training set and different 13
to the test set. 32x32 bitmaps are divided into nonoverlapping blocks of
4x4 and the number of on pixels are counted in each block. This generates
an input matrix of 8x8 where each element is an integer in the range
0..16. This reduces dimensionality and gives invariance to small
distortions.
For info on NIST preprocessing routines, see M. D. Garris, J. L. Blue, G.
T. Candela, D. L. Dimmick, J. Geist, P. J. Grother, S. A. Janet, and C.
L. Wilson, NIST Form-Based Handprint Recognition System, NISTIR 5469,
1994.
References
----------
- C. Kaynak (1995) Methods of Combining Multiple Classifiers and Their
Applications to Handwritten Digit Recognition, MSc Thesis, Institute of
Graduate Studies in Science and Engineering, Bogazici University.
- E. Alpaydin, C. Kaynak (1998) Cascading Classifiers, Kybernetika.
- Ken Tang and Ponnuthurai N. Suganthan and Xi Yao and A. Kai Qin.
Linear dimensionalityreduction using relevance weighted LDA. School of
Electrical and Electronic Engineering Nanyang Technological University.
2005.
- Claudio Gentile. A New Approximate Maximal Margin Classification
Algorithm. NIPS. 2000.
Let's take a look at the data:
```python
X, y = digits.data, digits.target
# The attributes of the first instance (notice it is a Numpy array)
X[0]
```
array([ 0., 0., 5., 13., 9., 1., 0., 0., 0., 0., 13.,
15., 10., 15., 5., 0., 0., 3., 15., 2., 0., 11.,
8., 0., 0., 4., 12., 0., 0., 8., 8., 0., 0.,
5., 8., 0., 0., 9., 8., 0., 0., 4., 11., 0.,
1., 12., 7., 0., 0., 2., 14., 5., 10., 12., 0.,
0., 0., 0., 6., 13., 10., 0., 0., 0.])
```python
# The label of the first instance
y[0]
```
0
Being a Numpy array, we can actually take a look at this image. We first need to reshape it into an 8x8 matrix and then use matplotlib.
```python
x = X[0].reshape((8, 8))
plt.gray() # use a grayscale
plt.matshow(x) # display a matrix of values
plt.show() # show the figure
```
Now we want to train a classifier to recognize the digits from the images and then we want to evaluate it. In order to make a proper evaluation, we first need to split the dataset in two sets, one for training and one for testing. Scikit-learn helps us with that:
```python
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Let's check the length of the two sets
len(X_train), len(X_test)
```
(1437, 360)
Now we need a classifier. Let's use an **SVM**. A reminder:
\begin{align}
\min_{\boldsymbol{w}, b, \boldsymbol{\xi}} \quad & \frac{1}{2}\|\boldsymbol{w}\|^2 + C \sum_{i=1}^{|\mathcal{D}|} \xi_i \\
\text{s.t.} \quad & y_i ( \boldsymbol{w}^T x_i + b ) \ge 1 - \xi_i, \quad \xi_i \ge 0, \qquad \forall (x_i, y_i) \in \mathcal{D}
\end{align}
```python
from sklearn.svm import SVC
# Specify the parameters in the constructor.
# C is the parameter of the primal problem of the SVM;
# The rbf kernel is the Radial Basis Function;
# The rbf kernel takes one parameter: gamma
clf = SVC(C=10, kernel='rbf', gamma=0.02)
```
Now the classifier can be trained and then used to predict unseen instances.
```python
# Training
clf.fit(X_train, y_train)
# Prediction
y_pred = clf.predict(X_test)
y_pred
```
array([3, 3, 3, 7, 3, 1, 3, 3, 3, 3, 3, 3, 3, 0, 3, 3, 3, 3, 8, 3, 3, 3, 3,
3, 3, 3, 3, 3, 6, 3, 3, 9, 1, 3, 3, 6, 3, 4, 3, 6, 6, 3, 1, 3, 3, 3,
3, 3, 6, 3, 3, 3, 3, 3, 3, 0, 3, 3, 0, 1, 3, 3, 3, 3, 3, 5, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 3, 3, 0, 3, 4, 3, 3, 3, 3,
8, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 1, 3, 3, 3, 3, 3, 7, 7, 3, 3, 3,
3, 3, 3, 3, 7, 2, 6, 3, 3, 3, 3, 3, 7, 3, 3, 3, 3, 3, 3, 3, 6, 3, 4,
3, 3, 3, 3, 3, 3, 3, 3, 6, 3, 3, 3, 3, 3, 6, 3, 3, 3, 3, 3, 3, 3, 3,
2, 3, 3, 4, 3, 3, 3, 3, 3, 3, 3, 4, 3, 1, 3, 7, 3, 2, 2, 3, 3, 8, 3,
3, 2, 3, 3, 6, 9, 3, 3, 1, 3, 3, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 4,
3, 1, 3, 3, 3, 3, 6, 1, 3, 6, 0, 4, 3, 2, 7, 3, 6, 3, 3, 3, 3, 3, 2,
3, 6, 3, 1, 3, 1, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 2, 3, 3, 3, 3, 3, 3, 0, 3, 3, 3, 0, 1, 3, 4, 3, 1, 3, 3, 6, 0, 3,
3, 0, 3, 3, 3, 3, 3, 3, 3, 3, 3, 9, 3, 3, 3, 3, 3, 3, 0, 3, 3, 3, 3,
0, 3, 3, 6, 3, 3, 3, 3, 3, 3, 2, 1, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 1,
3, 3, 7, 0, 6, 3, 3, 3, 3, 1, 3, 4, 3, 3, 7, 3, 3, 3, 3, 3, 3, 0, 7,
3, 3, 3, 3, 2, 7, 3, 1, 3, 7, 3, 3, 3, 3, 3])
Now we want to evaluate the performance of our classifier. A reminder:
\begin{align}
\text{Accuracy } &= \frac{\text{true-positive} + \text{true-negative}}{\text{all examples}} \\
\text{Precision } &= \frac{\text{true-positive}}{\text{true-positive} + \text{false-positive}} \\
\text{Recall } &= \frac{\text{true-positive}}{\text{true-positive} + \text{false-negative}} \\
F_1 &= \frac{2 \times \text{precision} \times \text{recall}}{\text{precision} + \text{recall}} \\
\end{align}
In a multiclass classification Precision, Recall and $F_1$ are computed per class, considering the given class as positive and all others as negative.
We can use Scikit-learn to compute and show these measures for all classes.
```python
from sklearn import metrics
report = metrics.classification_report(y_test, y_pred)
# the support is the number of instances having the given label in y_test
print(report)
```
precision recall f1-score support
0 1.00 0.39 0.57 33
1 1.00 0.61 0.76 28
2 1.00 0.36 0.53 33
3 0.12 1.00 0.22 34
4 1.00 0.20 0.33 46
5 1.00 0.02 0.04 47
6 1.00 0.49 0.65 35
7 1.00 0.35 0.52 34
8 1.00 0.10 0.18 30
9 1.00 0.07 0.14 40
avg / total 0.92 0.34 0.37 360
Finally we can compute the accuracy of our classifier:
```python
metrics.accuracy_score(y_test, y_pred)
```
0.33611111111111114
Apparently our classifier performs a bit poorly out-of-sample. This is probably due to the random choice of the parameters for the classifier. We can do much better! We need to perform model selection, that is we need to search for better parameters for our classifier.
In particular, we are going to perform a **cross-validation** on the training set and see how the classifier performs with different values of *gamma*.
A $k$-fold cross-validation works like this:
- Split the dataset $D$ in $k$ equally sized disjoint subsets $D_i$
- For $i \in [1, k]$
- Train the classifier on $T_i = D \setminus D_i$
- Compute the score (accuracy, precision, ...) on $D_i$
- Return the list of scores, one for each fold
Scikit-learn helps us with this as well. We compute the cross-validated accuracy for all the possible values of *gamma* and select the *gamma* with the best average accuracy.
```python
from sklearn.model_selection import KFold, cross_val_score
# 3-fold cross-validation
# random_state ensures same split for each value of gamma
kf = KFold(n_splits=3, shuffle=True, random_state=42)
gamma_values = [0.1, 0.05, 0.02, 0.01]
accuracy_scores = []
# Do model selection over all the possible values of gamma
for gamma in gamma_values:
# Train a classifier with current gamma
clf = SVC(C=10, kernel='rbf', gamma=gamma)
# Compute cross-validated accuracy scores
scores = cross_val_score(clf, X_train, y_train, cv=kf.split(X_train), scoring='accuracy')
# Compute the mean accuracy and keep track of it
accuracy_score = scores.mean()
accuracy_scores.append(accuracy_score)
# Get the gamma with highest mean accuracy
best_index = np.array(accuracy_scores).argmax()
best_gamma = gamma_values[best_index]
# Train over the full training set with the best gamma
clf = SVC(C=10, kernel='rbf', gamma=best_gamma)
clf.fit(X_train, y_train)
# Evaluate on the test set
y_pred = clf.predict(X_test)
accuracy = metrics.accuracy_score(y_test, y_pred)
accuracy
```
0.81388888888888888
Much better! Model selection allows us to fine-tune the parameters of a learning algorithm to get the best performance.
Let's now look at the **Learning curve** of our classifier, in which we plot the training accuracy and the cross-validated accuracy for an increasing number of examples.
```python
from sklearn.model_selection import learning_curve
plt.figure()
plt.title("Learning curve")
plt.xlabel("Training examples")
plt.ylabel("Score")
plt.grid()
clf = SVC(C=10, kernel='rbf', gamma=best_gamma)
# Compute the scores of the learning curve
# by default the (relative) dataset sizes are: 10%, 32.5%, 55%, 77.5%, 100%
train_sizes, train_scores, test_scores = learning_curve(clf, X_train, y_train, scoring='accuracy')
# Get the mean and std of train and test scores along the varying dataset sizes
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
# Plot the mean and std for the training scores
plt.plot(train_sizes, train_scores_mean, 'o-', color="r", label="Training score")
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1, color="r")
# Plot the mean and std for the cross-validation scores
plt.plot(train_sizes, test_scores_mean, 'o-', color="g", label="Cross-validation score")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.legend()
plt.show()
```
Now we want to go even further. We can perform the above model selection procedure considering the *C* parameter as well. In general, this process over several parameters is called **grid search**, and Scikit-learn has an automated procedure to perform cross-validated grid search for any classifier.
```python
from sklearn.model_selection import GridSearchCV
possible_parameters = {
'C': [1e0, 1e1, 1e2, 1e3],
'gamma': [1e-1, 1e-2, 1e-3, 1e-4]
}
svc = SVC(kernel='rbf')
# The GridSearchCV is itself a classifier
# we fit the GridSearchCV with the training data
# and then we use it to predict on the test set
clf = GridSearchCV(svc, possible_parameters, n_jobs=4) # n_jobs=4 means we parallelize the search over 4 threads
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
accuracy = metrics.accuracy_score(y_test, y_pred)
accuracy
```
0.98888888888888893
Nice! Now we have a classifier with a quite competitive accuracy. The state-of-the-art (on a very similar task) has accuracy around $0.9979$, achieved by using Neural Networks, which we will see in the next Lab. Stay tuned!
# KW-Distance: Two alternatives LP Models
In this notebook, we write a basic Linear Programming (LP) model to approximate the Kantorovich-Wasserstein distance of order 1 between a pair of discrete measures, such as, for instance, a pair of gray scale images.
In order to computationally assess how far our model deviates from the optimal solution, we first implement a standard LP model defined on a bipartite graph, which has a quadratic number of variables. Later, we show how to implement our compact model.
In the following, for ease of notation, we consider discrete measures with $N$ support points defined as a sum of Diracs as follows:
$$\mu = \sum_{i=1,\dots,N} \mu_i \delta(x_i)$$
where $\mu_i$ is the quantity of mass located at position $x_i$, and $\delta(x_i)=1$ only at position $x_i$, and it is equal to zero, otherwise.
### Exact Bipartite Model
We begin by giving the standard bipartite LP model. Given two discrete measures $\mu$ and $\nu$ defined on 2-dimensional histograms (i.e., a regular grid) with $N=n \times n$ bins, we define a bipartite graph $G=(V \cup W, E)$ as follows. We have a node $v \in V$ for each support point of the first measure $\mu$ and a node $w \in W$ for each support point of the second measure $\nu$. Note that we only need to consider support points with a strictly positive quantity of mass (we can ignore all the bins without mass). Then, we add an edge $\{i,j\} \in E$ with cost $c_{ij} = d_{ij} = ||x_i - x_j||$, whenever both $\mu_i>0$ and $\nu_j>0$.
Given the bipartite graph $G=(V \cup W, E)$, we can write the following LP model:
$$\begin{align}
\min \quad & \sum_{ij \in E} c_{ij} \pi_{ij} \\
\mbox{s.t.} \quad
& \sum_{ij \in E} \pi_{ij} = \mu_i, & \forall i \in V \\
& \sum_{ij \in E} \pi_{ij} = \nu_j, & \forall j \in W \\
& \pi_{ij} \geq 0, & \forall \{i,j\} \in E.
\end{align}$$
The variables $\pi_{ij}$ are the decision variables that indicate the quantity of mass to be moved from position $x_i$ to $x_j$.
### Installing PYOMO and GLPK
We show next how to implement the previous model using the [Pyomo](https://pyomo.readthedocs.io/en/stable/index.html) optimization modeling language and the [GLPK](https://www.gnu.org/software/glpk/) open source MILP solver.
```python
import shutil
import sys
import os.path
if not shutil.which("pyomo"):
!pip install -q pyomo
assert(shutil.which("pyomo"))
if not (shutil.which("glpk") or os.path.isfile("glpk")):
if "google.colab" in sys.modules:
!apt-get install -y -qq glpk-utils
else:
try:
!conda install -c conda-forge glpk
except:
pass
```
From Pyomo, we use the following procedures for writing our LP models:
```python
from pyomo.environ import ConcreteModel, Var, Objective, Constraint, SolverFactory
from pyomo.environ import RangeSet, ConstraintList, NonNegativeReals
```
### Creating random 2D discrete measures
In order to store a random discrete measure, with $N$ support points randomly located over a square grid of size $M \times M$, we use the following class:
```python
import numpy as np
# Support function to normalize a numpy vector
Normalize = lambda x: x/sum(x)
# We define a data type for 2D-histograms as defined before
class Measure2D(object):
def __init__(self, N, M=32, seed=13):
""" default c'tor: N random points """
# Fix the seed for debugging
np.random.seed(seed)
# N random weights
self.W = Normalize(np.random.uniform(0, 1, size=N))
# N random support points
x = set()
while len(x) < N:
x.add((np.random.randint(1, M),
np.random.randint(1, M)))
self.X = list(x)
# Map point to weight
self.D = {}
for i in range(N):
self.D[self.X[i]] = self.W[i]
```
And to create two random measures over a grid of size $32 \times 32$, the first with 300 support points, and the second with 200 support points:
```python
# Create two random measures
GridSize = 32
Mu = Measure2D(300, GridSize, seed=13)
Nu = Measure2D(200, GridSize, seed=14)
```
Later on, we need a cost function between a pair of support points defined in $\mathbb{R}^2$.
```python
from math import sqrt
# Euclidean distance in the plane
Cost = lambda x, y: sqrt((x[0] - y[0])**2 + (x[1] - y[1])**2)
```
## Solving the Bipartite model
Given two discrete measures $\mu$ and $\nu$, we can use an implicit definition of the bipartite graph by defining directly the following function, which implements and solves the LP model described before.
```python
# Others useful libraries
from time import time
# Second, we write a function that implements the model, solves the LP,
# and returns the KW distance along with an optimal transport plan.
def BipartiteDistanceW1_L2(Mu, Nu):
t0 = time()
# Main Pyomo model
model = ConcreteModel()
# Parameters
model.I = RangeSet(len(Mu.X))
model.J = RangeSet(len(Nu.X))
# Variables
model.PI = Var(model.I, model.J, within=NonNegativeReals)
# Objective Function
model.obj = Objective(
expr=sum(model.PI[i,j] * Cost(Mu.X[i-1], Nu.X[j-1]) for i,j in model.PI))
# Constraints on the marginals
model.Mu = Constraint(model.I,
rule = lambda m, i: sum(m.PI[i,j] for j in m.J) == Mu.W[i-1])
model.Nu = Constraint(model.J,
rule = lambda m, j: sum(m.PI[i,j] for i in m.I) == Nu.W[j-1])
# Solve the model
sol = SolverFactory('glpk').solve(model)
# Get a JSON representation of the solution
sol_json = sol.json_repn()
# Check solution status
if sol_json['Solver'][0]['Status'] != 'ok':
return None
if sol_json['Solver'][0]['Termination condition'] != 'optimal':
return None
return model.obj(), time()-t0
```
In order to compute the distance between the two measures, we only have to call our function with the two discrete measures randomly defined before.
```python
# Compute distance and runtime
distA, runtimeA = BipartiteDistanceW1_L2(Mu, Nu)
```
```python
print("Optimal distance: {}, runtime: {}".format(distA, runtimeA))
```
Optimal distance: 2.3510932706221777, runtime: 10.579878091812134
In the following, we look at a different LP model which approximately solves an equivalent problem.
## Approximate LP model
In this section, we show how we can solve the very same problem by using a much smaller LP model. Note that for very small instances the difference might be invisible, but as the input measures scale up in size the difference quickly becomes relevant.
The main idea is to exploit the cost structure of the Kantorovich-Wasserstein distance, which permits formulating the same problem on a flow network instead of a bipartite graph. We can prove that, under given assumptions, the two problem formulations are equivalent.
Given two discrete measures $\mu$ and $\nu$ defined on 2-dimensional histograms (i.e., a regular grid) with $N=n \times n$ bins, we define an uncapacitated flow network $G=(V, E, b)$ as follows. We have a node $v \in V$ for each bin, with support point $x_v=(i,j)$ with $0 \leq i,j \leq n-1$. Each node $v \in V$ has a flow balance $b_v = \mu_v - \nu_v$: if $b_v > 0$ then node $v$ is a source node; if $b_v < 0$ node $v$ is a sink node; otherwise, whenever $b_v=0$, node $v$ is a transit node. In addition, we have an edge $\{i,j\} \in E$ with cost $c_{ij} = d_{ij} = ||x_i - x_j||$, whenever a specific condition is verified between the pair of support points $x_i$ and $x_j$. The way we specify this condition is the core of our contribution in [1]. For the moment, suppose we have a link between any pair of bins.
At this point, if we look for a minimum flow from the source nodes to the sink nodes, we find the minimum cost of moving the first measure into the second at minimum cost. That is, we are solving a problem equivalent to the LP defined on the bipartite graph.
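For reference, a sketch of the resulting uncapacitated minimum-cost flow LP (written with flow variables $f_{ij}$, a notation not used in the original text) is:
$$\begin{align}
\min \quad & \sum_{(i,j) \in E} c_{ij} f_{ij} \\
\mbox{s.t.} \quad
& \sum_{j:(i,j) \in E} f_{ij} - \sum_{j:(j,i) \in E} f_{ji} = b_i, & \forall i \in V \\
& f_{ij} \geq 0, & \forall (i,j) \in E,
\end{align}$$
which is exactly the node-by-node balance constraint implemented in the Pyomo model below.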
### Building the auxiliary Flow Network
The first task, given a value of the parameter $L$, is to build the flow network that formulates the equivalent uncapacitated network flow problem. The fundamental rule for building such a network is to add a link between a pair of locations $(i,j)$ and $(i+v,j+w)$ if and only if $(v,w)$ is a pair of coprime integers.
Thus, we first define a function that computes the set of coprime pairs $(v,w)$ with components between $-L$ and $L$.
```python
# Given a value of the parameter L, build the corrisponding co-primes set
def CoprimesSet(L):
from numpy import gcd
Cs = []
for v in range(-L, L+1):
for w in range(-L, L+1):
if (not (v == 0 and w == 0)) and gcd(v, w) == 1:
Cs.append((v, w))
return Cs
```
For instance, if we want to build a small set of coprime offsets, we could use $L=3$.
```python
L = 3
Cs = CoprimesSet(L)
print(Cs)
```
[(-3, -2), (-3, -1), (-3, 1), (-3, 2), (-2, -3), (-2, -1), (-2, 1), (-2, 3), (-1, -3), (-1, -2), (-1, -1), (-1, 0), (-1, 1), (-1, 2), (-1, 3), (0, -1), (0, 1), (1, -3), (1, -2), (1, -1), (1, 0), (1, 1), (1, 2), (1, 3), (2, -3), (2, -1), (2, 1), (2, 3), (3, -2), (3, -1), (3, 1), (3, 2)]
Using the set of coprime pairs, we can build our *small* flow network as follows.
```python
# Import the graph library NetworkX
import networkx as nx
# Build a Network using a precomputed set of pair of coprimes numbers
def BuildGridNetwork(N, Coprimes):
def ID(x,y):
return x*N+y
G = nx.DiGraph()
for i in range(N):
for j in range(N):
G.add_node(ID(i,j), pos=(i,j))
for i in range(N):
for j in range(N):
for (v, w) in Coprimes:
if i + v >= 0 and i + v < N and j + w >= 0 and j + w < N:
G.add_edge(ID(i,j), ID(i+v, j+w),
weight=sqrt(pow(v, 2) + pow(w, 2)))
return G
```
We also add a function to plot a network in the plane:
```python
# Plot a grid network (nodes must have coordinates position labels)
def PlotGridNetwork(G, name=""):
import matplotlib.pyplot as plt
plt.figure(3,figsize=(8, 8))
plt.axis('equal')
pos = nx.get_node_attributes(G, 'pos')
nx.draw(G, pos, font_weight='bold', node_color='blue',
arrows=True, arrowstyle='->', arrowsize=15, width=1, node_size=200)
# If a name is specified, save the plot in a file
if name:
plt.savefig("grid_{}.png".format(name), format="PNG")
```
Let's start with a couple of small examples, to get an idea of the networks that are built.
```python
L = 2
Cs = CoprimesSet(L)
G1 = BuildGridNetwork(8, Cs)
PlotGridNetwork(G1)
```
Note that the degree of every node is limited, and much smaller than in the case where every node (location) is connected with every other possible location, as in the bipartite graph.
Let us try with a larger value of $L$.
```python
L = 3
Cs = CoprimesSet(L)
G1 = BuildGridNetwork(8, Cs)
PlotGridNetwork(G1)
```
## Solving the Flow Problem
Given a grid (flow) network $G$, and a pair of discrete measures defined over the grid, we can build the following LP model.
```python
def ApproximateDistanceW1_L2(Mu, Nu, G):
t0 = time()
# Number of egdes
m = len(G.edges())
# Main Pyomo model
model = ConcreteModel()
# Parameters
model.E = RangeSet(m)
# Variables
model.PI = Var(model.E, within=NonNegativeReals)
# Map edges to cost
C = np.zeros(m)
M = {}
for e, (i, j) in enumerate(G.edges()):
C[e] = G.edges[i,j]['weight']
M[i,j] = e+1
# Objective Function
model.obj = Objective(expr=sum(model.PI[e] * C[e-1] for e in model.PI))
# Flow balance constraints (using marginals balance at each location)
model.Flow = ConstraintList()
for v in G.nodes():
Fs = [M[w] for w in G.out_edges(v)]
Bs = [M[w] for w in G.in_edges(v)]
# Compute flow balance value at given node position
x = G.nodes[v]['pos']
b = Mu.D.get(x, 0.0) - Nu.D.get(x, 0.0)
# Flow balance constraint
model.Flow.add(expr = sum(model.PI[e] for e in Fs) - sum(model.PI[e] for e in Bs) == b)
# Solve the model
sol = SolverFactory('glpk').solve(model)
# Get a JSON representation of the solution
sol_json = sol.json_repn()
# Check solution status
if sol_json['Solver'][0]['Status'] != 'ok':
return None
if sol_json['Solver'][0]['Termination condition'] != 'optimal':
return None
return model.obj(), (time()-t0)
```
At this point, we can compute the distance between the same pair of discrete measures $Mu$ and $Nu$ defined before, but using our new LP model.
```python
# We build a flow network of size 32x32, using L=3
L = 3
Cs = CoprimesSet(L)
G = BuildGridNetwork(GridSize, Cs)
# Compute distance and runtime with the approximate model
distB, runtimeB = ApproximateDistanceW1_L2(Mu, Nu, G)
# ... and to compare with previous solution
print("LB Full = {:.5}, LB Apx = {:.5}".format(distA, distB))
print("Time Full = {:.5}, Time Apx = {:.5}".format(runtimeA, runtimeB))
```
LB Full = 2.3511, LB Apx = 2.3558
Time Full = 10.58, Time Apx = 6.1425
Note that in this case the difference in running time is limited, and it is dominated by the time for building the model (Pyomo is not very fast in this respect, despite its flexibility).
The approximate model is much more efficient for **dense** discrete measures, such as, for instance, images, where every point of a regular grid has a strictly positive weight.
Let us look at the following example.
```python
# Create two random measures
GridSize = 32
N = 900
Mu = Measure2D(N, GridSize, seed=13)
Nu = Measure2D(N, GridSize, seed=14)
# Compute distance and runtime
distA, runtimeA = BipartiteDistanceW1_L2(Mu, Nu)
# We build a flow network of size 32x32, using L=3
L = 3
Cs = CoprimesSet(L)
G = BuildGridNetwork(GridSize, Cs)
# Compute distance and runtime with the approximate model
distB, runtimeB = ApproximateDistanceW1_L2(Mu, Nu, G)
# ... and compare with the previous optimal value and runtime
print("LB Full = {:.5}, LB Apx = {:.5}".format(distA, distB))
print("Time Full = {:.5}, Time Apx = {:.5}".format(runtimeA, runtimeB))
```
LB Full = 0.63854, LB Apx = 0.63878
Time Full = 199.16, Time Apx = 7.8611
For additional details and more extensive computational experiments, we refer to the following paper.
### References
1. Bassetti, F., Gualandi, S. and Veneroni, M., 2020. [*On the Computation of Kantorovich--Wasserstein Distances Between Two-Dimensional Histograms by Uncapacitated Minimum Cost Flows*](https://epubs.siam.org/doi/abs/10.1137/19M1261195). **SIAM Journal on Optimization**, 30(3), pp.2441-2469.
```python
# This cell is for the Google Colaboratory
# https://stackoverflow.com/a/63519730
if 'google.colab' in str(get_ipython()):
# https://colab.research.google.com/notebooks/io.ipynb
import google.colab.drive as gcdrive
# may need to visit a link for the Google Colab authorization code
gcdrive.mount("/content/drive/")
import sys
sys.path.insert(0,"/content/drive/My Drive/Colab Notebooks/nmisp/45_sympy")
```
# `sympy`
[`sympy`](https://www.sympy.org)는 *기호 처리기*로 숫자 대신 기호 연산을 지원한다.<br>[`sympy`](https://www.sympy.org), a *symbolic processor*, supports operations on symbols instead of numbers.
[`sympy`](https://www.sympy.org), a *symbolic processor* supports operations in symbols instead of numbers.
2006년 이후 2019 까지 800명이 넘는 개발자가 작성한 코드를 제공하였다.<br>As of 2019, more than 800 developers have contributed code since 2006.
Since 2006, more than 800 developers contributed so far in 2019.
## 기호 연산 예<br>Examples of symbolic processing
`sympy` 모듈을 `sym` 라는 이름으로 불러온다.<br>Import the `sympy` module under the name `sym`.
```python
import sympy as sym
sym.init_printing()
```
비교를 위해 `numpy` 모듈도 불러온다.<br>
Import `numpy` module to compare.
```python
import numpy as np
```
```python
np.pi
```
```python
sym.pi
```
#### 오일러 공식<br>Euler formula
$$
e ^ {\pi i} + 1 = 0
$$
```python
np.exp(np.pi * 1j) + 1
```
```python
sym.exp(sym.pi * 1j) + 1
```
```python
sym.simplify(sym.exp(sym.pi * 1j) + 1)
```
#### 무한대<br>Infinity
```python
np.inf, np.inf > 999999
```
```python
sym.oo, sym.oo > 999999
```
### 제곱근<br>Square root
10의 제곱근을 구해보자.<br>Let's find the square root of ten.
```python
np.sqrt(10)
```
```python
sym.sqrt(10)
```
결과를 숫자로 살펴보려면 `evalf()` 메소드를 사용한다.<br>
Use `evalf()` method to check the result in digits.
```python
sym.sqrt(10).evalf()
```
```python
sym.sqrt(10).evalf(30)
```
10의 제곱근을 제곱해보자.<br>Let's square the square root of ten.
```python
np.sqrt(10) ** 2
```
```python
sym.sqrt(10) ** 2
```
위 결과의 차이에 대해 어떻게 생각하는가?<br>
What do you think about the differences of the results above?
### 분수<br>Fractions
15 / 11 을 생각해보자.<br>Let's think about 15/11.
```python
num = 15
den = 11
```
```python
division = num / den
```
```python
division
```
```python
division * den
```
```python
import fractions
```
```python
fr_division = fractions.Fraction(num, den)
```
```python
fr_division
```
```python
fr_division * den
```
```python
sym_division = sym.Rational(num, den)
```
```python
sym_division
```
```python
sym_division * den
```
위 결과의 차이에 대해 어떻게 생각하는가?<br>
What do you think about the differences of the results above?
### 변수를 포함하는 수식<br>Expressions with variables
사용할 변수를 정의.<br>Define variables to use.
```python
a, b, c, x = sym.symbols('a b c x')
theta, phi = sym.symbols('theta phi')
```
변수들을 한번 살펴보자.<br>Let's take a look at the variables
```python
a, b, c, x
```
```python
theta, phi
```
변수를 조합하여 새로운 수식을 만들어 보자.<br>
Let's make equations using variables.
```python
y = a * x + b
```
```python
y
```
```python
z = a * x * x + b * x + c
```
```python
z
```
```python
w = a * sym.sin(theta) ** 2 + b
```
```python
w
```
```python
p = (x - a) * (x - b) * (x - c)
```
```python
p
```
```python
sym.expand(p, x)
```
```python
sym.collect(_, x)
```
$$
\frac{x + xy}{x}
$$
```python
# y was assigned the expression a*x + b earlier; redefine it as a plain symbol first
y = sym.symbols('y')
sym.simplify((x + x * y) / x)
```
### 그래프<br>Plot
```python
import sympy.plotting as splot
```
```python
splot.plot(sym.sin(x));
```
```python
# convert degrees to radians symbolically (x*pi/180) so the argument stays a SymPy expression
splot.plot(sym.sin(x * sym.pi / 180), (x, -360, 360));
```
```python
splot.plot_parametric((sym.cos(theta), sym.sin(theta)), (theta, -sym.pi, sym.pi));
```
```python
splot.plot_parametric(
16 * (sym.sin(theta)**3),
13 * sym.cos(theta) - 5 * sym.cos(2*theta) - 2 * sym.cos(3*theta) - sym.cos(4*theta),
(theta, -sym.pi, sym.pi)
);
```
#### 3차원 그래프<br>3D Plot
```python
x, y = sym.symbols('x y')
splot.plot3d(sym.cos(x) + sym.sin(y), (x, -5, 5), (y, -5, 5));
```
```python
splot.plot3d_parametric_line(x, 25-x**2, 25-x**2, (x, -5, 5));
```
```python
u, v = sym.symbols('u v')
splot.plot3d_parametric_surface(u + v, sym.sin(u), sym.cos(u), (u, -1, 1), (v, -1, 1));
```
### 극한<br>Limits
```python
sym.limit(x ** x, x, 0)
```
$$
\lim_{x \to 0} \frac{sin x}{x}
$$
```python
sym.limit(sym.sin(x) / x, x, 0)
```
$$
\lim_{x \to \infty} x
$$
```python
sym.limit(x, x, sym.oo)
```
$$
\lim_{x \to \infty} \frac{1}{x}
$$
```python
sym.limit(1 / x, x, sym.oo)
```
$$
\lim_{x \to 0} x^x
$$
```python
sym.limit(x ** x, x, 0)
```
### 미적분<br>Calculus
$$
\frac{dz}{dx} =\frac{d}{dx} \left( a x^2 + bx + c \right)
$$
```python
z.diff(x)
```
$$
\int{z}{dx} =\int{\left(a x^2 + bx + c \right)}{dx}
$$
```python
sym.integrate(z, x)
```
```python
w
```
```python
w.diff(theta)
```
```python
sym.integrate(w, theta)
```
#### 정적분<br>Definite integral
```python
sym.integrate(w, (theta,0, sym.pi))
```
### 근<br>Root
```python
z_sol_list = sym.solve(z, x)
```
```python
z_sol_list
```
```python
sym.solve(2* sym.sin(theta) ** 2 - 1, theta)
```
### 코드 생성<br>Code generation
```python
print(sym.python(z_sol_list[0]))
```
```python
import sympy.utilities.codegen as sc
```
```python
[(c_name, c_code), (h_name, c_header)] = sc.codegen(
("z_sol", z_sol_list[0]),
"C89",
"test"
)
```
```python
c_name
```
```python
print(c_code)
```
```python
h_name
```
```python
print(c_header)
```
### 방정식<br>Equation solving
$$
x^4=1
$$
```python
sym.solve(x ** 4 - 1, x)
```
```python
sym.solveset(x ** 4 - 1, x)
```
$$
e^x=-1
$$
```python
sym.solve(sym.exp(x) + 1, x)
```
$$
x^4 - 3x^2 +1
$$
```python
f = x ** 4 - 3 * x ** 2 + 1
sym.factor(f)
```
```python
sym.factor(f, modulus=5)
```
Boolean equations
```python
sym.satisfiable(a & b)
```
```python
sym.satisfiable(a ^ b)
```
### 연립방정식<br>System of equations
```python
a1, a2, a3 = sym.symbols('a1:4')
b1, b2, b3 = sym.symbols('b1:4')
c1, c2 = sym.symbols('c1:3')
x1, x2 = sym.symbols('x1:3')
```
```python
eq1 = sym.Eq(
a1 * x1 + a2 * x2,
c1,
)
```
```python
eq1
```
```python
eq2 = sym.Eq(
b1 * x1 + b2 * x2,
c2,
)
```
```python
eq_list = [eq1, eq2]
```
```python
eq_list
```
```python
sym.solve(eq_list, (x1, x2))
```
### 행렬<br>Matrix
```python
identity = sym.Matrix([[1, 0], [0, 1]])
identity
```
```python
A = sym.Matrix([[1, a], [b, 1]])
A
```
```python
A * identity
```
```python
A * A
```
```python
A ** 2
```
### 미분방정식<br>Differential Equations
$$
\frac{d^2}{dx^2}f(x) + f(x)
$$
```python
f = sym.Function('f', real=True)
(f(x).diff(x, x) + f(x))
```
```python
sym.dsolve(f(x).diff(x, x) + f(x))
```
기계진동<br>Mechanical Vibration
$$
m \frac{d^2x(t)}{dt^2} +c \frac{dx(t)}{dt} + x(t) = 0
$$
```python
m, c, k, t = sym.symbols('m c k t')
x = sym.Function('x', real=True)
vib_eq = m * x(t).diff(t, t) + c * x(t).diff(t) + k * x(t)
vib_eq
```
```python
result = sym.dsolve(vib_eq)
result
```
```python
sym.simplify(result)
```
강제진동<br>Forced Vibration
$$
m \frac{d^2x(t)}{dt^2} +c \frac{dx(t)}{dt} + x(t) = sin(t)
$$
```python
forced_vib_eq = m * x(t).diff(t, t) + c * x(t).diff(t) + k * x(t) - sym.sin(t)
forced_vib_eq
```
```python
result = sym.dsolve(forced_vib_eq)
result
```
```python
sym.simplify(result)
```
## 참고문헌<br>References
* SymPy Development Team, SymPy 1.4 documentation, sympy.org, 2019 04 10. [Online] Available : https://docs/sympy.org/latest/index.html.
* SymPy Development Team, SymPy Tutorial, SymPy 1.4 documentation, sympy.org, 2019 04 10. [Online] Available : https://docs/sympy.org/latest/tutorial/index.html.
* d84_n1nj4, "How to keep fractions in your equation output", Stackoverflow.com, 2017 08 12. [Online] Available : https://stackoverflow.com/a/45651175.
* Python developers, "Fractions", Python documentation, 2019 10 12. [Online] Available : https://docs.python.org/3.7/library/fractions.html.
* SymPy Development Team, codegen, SymPy 1.4 documentation, sympy.org, 2019 04 10. [Online] Available : https://docs/sympy.org/latest/modules/utilities/codegen.html.
* Pedregosa, F., Sympy : Symbolic Mathematics in Python, Scipy Lecture Notes, 2019 March,[Online] Available : http://www.scipy-lectures.org/packages/sympy.html [Accessed 2019 10 28]
* MIT, Twitter, 2021 Feb. [Online] Available : https://twitter.com/MIT/status/1360971008325406721.
## Final Bell<br>마지막 종
```python
# stackoverflow.com/a/24634221
import os
os.system("printf '\a'");
```
```python
```
Let's use SymPy to derive the relation between the potential $V$ and the charge density for a uniformly charged sphere of radius $R$.
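As a reminder (assuming Gaussian units, consistent with the factors of $4\pi$ used below), the relation being verified is Poisson's equation, written with the radial Laplacian for a spherically symmetric potential:
$$\nabla^2 V = \frac{1}{r}\frac{d^2}{dr^2}\big(r\,V\big) = -4\pi\rho$$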
```
%pylab inline
from sympy.interactive import init_printing
init_printing()
from sympy import pi, var, S, Piecewise, piecewise_fold
var("r R")
Vh = Piecewise((-S(2)/3 * pi * (3*R**2 - r**2), r <= R), (-S(4)/3 * pi * R**3 / r, True))
def laplace(f):
return (r*f).diff(r, 2)/r
print "Vh ="
Vh
```
Welcome to pylab, a matplotlib-based Python environment [backend: module://IPython.zmq.pylab.backend_inline].
For more information, type 'help(pylab)'.
Vh =
$$\begin{cases} - \frac{2}{3} \pi \left(3 R^{2} - r^{2}\right) & \text{for}\: r \leq R \\- \frac{4}{3} \frac{\pi R^{3}}{r} & \text{otherwise} \end{cases}$$
Charge density is then:
```
piecewise_fold(-laplace(Vh)/(4*pi))
```
$$\begin{cases} -1 & \text{for}\: r \leq R \\0 & \text{otherwise} \end{cases}$$
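The next cell checks the same result numerically and computes the Hartree energy on a periodic grid. A sketch of the reciprocal-space expressions it implements (with $\tilde n(\mathbf{G})$ the normalized FFT coefficients of the density and the $\mathbf{G}=0$ term omitted) is:
$$E_H = 2\pi L^3 \sum_{\mathbf{G} \neq 0} \frac{|\tilde n(\mathbf{G})|^2}{G^2}, \qquad V(\mathbf{G}) = \frac{4\pi\,\tilde n(\mathbf{G})}{G^2}$$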
```
from numpy import (empty, pi, meshgrid, linspace, sum, sin, exp, shape, sqrt,
conjugate)
from numpy.fft import fftn, fftfreq, ifftn
N = 100
print "N =", N
L = 2.4*4
R = 1.
x1d = linspace(-L/2, L/2, N+1)[:-1]
x, y, z = meshgrid(x1d, x1d, x1d, indexing="ij")
r = sqrt(x**2+y**2+z**2)
nr = empty(shape(x), dtype="double")
nr[:] = 0
nr[r <= R] = -1
Vanalytic = empty(shape(x), dtype="double")
Vanalytic[r <= R] = -2./3 * pi * (3*R**2 - r[r <= R]**2)
Vanalytic[r > R] = -4./3 * pi * R**3 / r[r > R]
ng = fftn(nr) / N**3
G1d = N * fftfreq(N) * 2*pi/L
kx, ky, kz = meshgrid(G1d, G1d, G1d)
G2 = kx**2+ky**2+kz**2
G2[0, 0, 0] = 1 # omit the G=0 term
tmp = 2*pi*abs(ng)**2 / G2
tmp[0, 0, 0] = 0 # omit the G=0 term
E = sum(tmp) * L**3
print "Hartree Energy (calculated): %.15f" % E
Vg = 4*pi*ng / G2
Vg[0, 0, 0] = 0 # omit the G=0 term
V = ifftn(Vg).real * N**3
V += Vanalytic[N//2, N//2, N//2] - V[N//2, N//2, N//2]
l2_norm = sum((Vanalytic - V)**2)
print("l2_norm = ", l2_norm)
plot(x[:, N//2, N//2], Vanalytic[:, N//2, N//2], label="analytic")
plot(x[:, N//2, N//2], V[:, N//2, N//2], label="FFT")
legend(loc="best");
```
```
```
Copyright **Paolo Raiteri**, January 2022
# Langmuir isotherm virtual lab
The Langmuir isotherm is one of the simplest models that can be used to describe the adsorption of molecules on surfaces, either in the gas phase or in solutions.
It is based on 5 key assumptions:
1. The surface is flat
2. The adsorbate is immobile on the surface
3. All adsorption sites are equivalent
4. There are no interactions between adsorbate molecules on adjacent sites
5. Only one molecule can adsorb in each site (monolayer coverage)
The fundamental equation of the Langmuir adsorption isotherm can be derived using either thermodynamic or kinetic arguments.
Here we will follow the thermodynamic route.
The surface adsorption process can be regarded as an equilibrium problem, where the adsorbate molecules, $A$, on the surface are in dynamic equilibrium with those in solution
\begin{equation}
A + S \leftrightharpoons SA \tag{1}
\end{equation}
where $A$ are the free molecules in solution (or in the gas phase), $S$ are the available adsorption sites and $SA$ are the filled adsorption sites.
The equilibrium constant for this chemical reaction is
\begin{equation}
K = \frac{[SA]}{[A][S]} \tag{2}
\end{equation}
Although the concentrations of free/occupied adsorption sites are somewhat ill-defined quantities, it is easy to see how their "concentration" would be related to the surface coverage.
If we define the coverage, $\theta$, as the fraction of occupied surface sites,
\begin{equation}
[SA] \propto \theta \tag{3}
\end{equation}
\begin{equation}
[S] \propto (1-\theta) \tag{4}
\end{equation}
\begin{equation}
[A] = c_{sol} \tag{5}
\end{equation}
where we have introduced a slight change of notation by calling the $c_{sol}$ the equilibrium concentration of the adsorbate in solution. We can then rewrite the equilibrium constant as
\begin{equation}
K_L = \frac{\theta}{(1-\theta)c_{sol}} \tag{6}
\end{equation}
where $K_L$ is the Langmuir constant, which contains all the unknown proportionality constants that connect the coverage with the "concentrations" that appear in the definition of the equilibrium constant. This equation can then be rewritten to obtain the famous Langmuir isotherm equation
\begin{equation}
\theta = \frac{K_Lc_{sol}}{1+K_Lc_{sol}} \tag{7}
\end{equation}
where $\theta$ is the fraction of adsorption sites that are occupied, $K_L$ is the Langmuir equilibrium constant, and $c_{sol}$ is the equilibrium concentration of the adsorbate in solution.
Because it is not possible to directly measure the fraction of occupied surface sites, a more practical version of that equation is
\begin{equation}
c_{surf} = \frac{QK_Lc_{sol}}{1+K_Lc_{sol}} \tag{8}
\end{equation}
where $c_{surf}$ is the concentration of the adsorbate that is on the surface, _i.e._ out of the solution, and the new parameter $Q$ corresponds to the _monolayer_ coverage, _i.e._ the maximum concentration of molecules that can adsorb on the substrate. The linear form of the above equation, which uses the inverse of the concentrations, is more convenient for the fitting:
\begin{equation}
\frac{1}{c_{surf}} = \frac{1}{QK_L} \frac{1}{c_{sol}} + \frac{1}{Q} \tag{9}
\end{equation}
The name _isotherm_ stems from the fact that the experiments are performed at constant temperature and in principle both the Langmuir equilibrium constant, $K_L$, and the _monolayer_ coverage, $Q$, can have a temperature dependence.
Similarly to normal chemical reactions, by performing a series of experiments at different conditions it is possible to determine the enthalpy and entropy of the adsorption process using the van't Hoff equation.
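As an illustration of how equation (9) can be used, here is a minimal fitting sketch for one temperature. The arrays `c_sol` and `c_surf` below are hypothetical placeholder values, and `numpy.polyfit` is just one convenient choice rather than a tool prescribed by this lab:

```python
import numpy as np

# hypothetical equilibrium data for one temperature (mol/L)
c_sol = np.array([0.5e-4, 1.0e-4, 2.0e-4, 4.0e-4, 8.0e-4])
c_surf = np.array([1.1e-4, 1.8e-4, 2.6e-4, 3.3e-4, 3.8e-4])

# linear form (eq. 9): 1/c_surf = (1/(Q*K_L)) * (1/c_sol) + 1/Q
slope, intercept = np.polyfit(1.0 / c_sol, 1.0 / c_surf, 1)
Q = 1.0 / intercept        # monolayer coverage
K_L = intercept / slope    # Langmuir constant, since slope = 1/(Q*K_L)
print(f"Q = {Q:.3e}, K_L = {K_L:.3e}")
```

Repeating this fit at several temperatures and plotting $\ln K_L$ against $1/T$ then gives the van't Hoff slope, from which the enthalpy and entropy of adsorption follow.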
In the virtual laboratory below, you will be looking at the adsorption of the dye Acid Blue 158 on chitin in water. Perform a series of experiments at different conditions to determine the enthalpy of adsorption of the dye on the substrate. The molar mass of Acid Blue 158 is 584.91 g/mol.
### Instructions
* Select the temperature and amount on water for your experiments
* Select an appropriate minimum and maximum amounts of acid Blue 158 to use in the experiments; this has to cover a large enough range to allow for a proper fit of the curve.
* Select how many experiments you want to perform ($N$)
*Figure 1: Schematic representation of the adsorption virtual experiment.*
Each _experiment_ consists of adding a certain amount of dye, $c_{tot}$, to the chosen volume of DI water with a fixed amount of chitin and measuring the concentration of the dye that is left in solution at equilibrium, $c_{sol}$.
\begin{equation}
c_{tot}=c_{surf}+c_{sol} \tag{10}
\end{equation}
When you click the *Perform experiment* button, you will obtain $N$ observations at the selected temperature, where the amount of dye that is added to the chosen amount of water is varied between the minimum and maximum values that you have chosen, in equally spaced intervals.
Every time you click *Perform experiment* $N$ new observations will be generated and appended to the output file.
If you click *Reset experiments* all observations will be deleted.
This will be useful to generate a clean set of data after you have done a few tests to find what is an appropriate range for the amount of dye to use in the virtual experiment.
You can download all the observations in CSV format and import them directly into excel, or you can use a jupyter notebook to read and analyse the data, and produce the figures for the report.
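For the Jupyter route, a minimal loading sketch with pandas is shown below; the file name and column labels are placeholders, so use whatever the virtual lab actually writes in its CSV:

```python
import pandas as pd

df = pd.read_csv("observations.csv")   # placeholder file name
c_tot = df["c_tot"].to_numpy()         # placeholder column names
c_sol = df["c_sol"].to_numpy()
c_surf = c_tot - c_sol                 # equation (10)
```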
### Questions to be answered in the lab report
1. What is the Langmuir constant at a minimum of 4 different temperatures?
2. What is the monolayer coverage at those temperatures?
3. What are the enthalpy and entropy of adsorption?
4. How do your results compare with experimental values?
## Launch virtual experiment
- [Adsorption isotherms virtual lab](virtualExperiment.ipynb)
```python
```
Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved BSD-3 license. (c) Lorena A. Barba, Gilbert F. Forsyth 2017. Thanks to NSF for support via CAREER award #1149784.
[@LorenaABarba](https://twitter.com/LorenaABarba)
12 steps to Navier–Stokes
=====
***
We continue our journey to solve the Navier–Stokes equation with Step 4. But don't continue unless you have completed the previous steps! In fact, this next step will be a combination of the two previous ones. The wonders of *code reuse*!
Step 4: Burgers' Equation
----
***
You can read about Burgers' Equation on its [wikipedia page](http://en.wikipedia.org/wiki/Burgers'_equation).
Burgers' equation in one spatial dimension looks like this:
$$\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = \nu \frac{\partial ^2u}{\partial x^2}$$
As you can see, it is a combination of non-linear convection and diffusion. It is surprising how much you learn from this neat little equation!
We can discretize it using the methods we've already detailed in Steps [1](./01_Step_1.ipynb) to [3](./04_Step_3.ipynb). Using forward difference for time, backward difference for space and our 2nd-order method for the second derivatives yields:
$$\frac{u_i^{n+1}-u_i^n}{\Delta t} + u_i^n \frac{u_i^n - u_{i-1}^n}{\Delta x} = \nu \frac{u_{i+1}^n - 2u_i^n + u_{i-1}^n}{\Delta x^2}$$
As before, once we have an initial condition, the only unknown is $u_i^{n+1}$. We will step in time as follows:
$$u_i^{n+1} = u_i^n - u_i^n \frac{\Delta t}{\Delta x} (u_i^n - u_{i-1}^n) + \nu \frac{\Delta t}{\Delta x^2}(u_{i+1}^n - 2u_i^n + u_{i-1}^n)$$
### Initial and Boundary Conditions
To examine some interesting properties of Burgers' equation, it is helpful to use different initial and boundary conditions than we've been using for previous steps.
Our initial condition for this problem is going to be:
\begin{eqnarray}
u &=& -\frac{2 \nu}{\phi} \frac{\partial \phi}{\partial x} + 4 \\\
\phi &=& \exp \bigg(\frac{-x^2}{4 \nu} \bigg) + \exp \bigg(\frac{-(x-2 \pi)^2}{4 \nu} \bigg)
\end{eqnarray}
This has an analytical solution, given by:
\begin{eqnarray}
u &=& -\frac{2 \nu}{\phi} \frac{\partial \phi}{\partial x} + 4 \\\
\phi &=& \exp \bigg(\frac{-(x-4t)^2}{4 \nu (t+1)} \bigg) + \exp \bigg(\frac{-(x-4t -2 \pi)^2}{4 \nu(t+1)} \bigg)
\end{eqnarray}
Our boundary condition will be:
$$u(0) = u(2\pi)$$
This is called a *periodic* boundary condition. Pay attention! This will cause you a bit of headache if you don't tread carefully.
### Saving Time with SymPy
The initial condition we're using for Burgers' Equation can be a bit of a pain to evaluate by hand. The derivative $\frac{\partial \phi}{\partial x}$ isn't too terribly difficult, but it would be easy to drop a sign or forget a factor of $x$ somewhere, so we're going to use SymPy to help us out.
[SymPy](http://sympy.org/en/) is the symbolic math library for Python. It has a lot of the same symbolic math functionality as Mathematica with the added benefit that we can easily translate its results back into our Python calculations (it is also free and open source).
Start by loading the SymPy library, together with our favorite library, NumPy.
```python
import numpy
import sympy
```
```python
# !pip3 install --user sympy
```
We're also going to tell SymPy that we want all of its output to be rendered using $\LaTeX$. This will make our Notebook beautiful!
```python
from sympy import init_printing
init_printing(use_latex=True)
```
Start by setting up symbolic variables for the three variables in our initial condition and then type out the full equation for $\phi$. We should get a nicely rendered version of our $\phi$ equation.
```python
x, nu, t = sympy.symbols('x nu t')
phi = (sympy.exp(-(x - 4 * t)**2 / (4 * nu * (t + 1))) +
sympy.exp(-(x - 4 * t - 2 * sympy.pi)**2 / (4 * nu * (t + 1))))
phi
```
```python
sympy.symbols('mu'), sympy.symbols('nu')  # dynamic viscosity and kinematic viscosity
```
It's maybe a little small, but that looks right. Now to evaluate our partial derivative $\frac{\partial \phi}{\partial x}$ is a trivial task.
```python
phiprime = phi.diff(x)
phiprime
```
If you want to see the unrendered version, just use the Python print command.
```python
print(phiprime)
```
-(-8*t + 2*x)*exp(-(-4*t + x)**2/(4*nu*(t + 1)))/(4*nu*(t + 1)) - (-8*t + 2*x - 4*pi)*exp(-(-4*t + x - 2*pi)**2/(4*nu*(t + 1)))/(4*nu*(t + 1))
### Now what?
Now that we have the Pythonic version of our derivative, we can finish writing out the full initial condition equation and then translate it into a usable Python expression. For this, we'll use the *lambdify* function, which takes a SymPy symbolic equation and turns it into a callable function.
```python
from sympy.utilities.lambdify import lambdify
u = -2 * nu * (phiprime / phi) + 4
print(u)
u
```
### Lambdify
To lambdify this expression into a useable function, we tell lambdify which variables to request and the function we want to plug them in to.
```python
ufunc = lambdify((t, x, nu), u)
print(ufunc(1, 4, 3))
```
3.49170664206445
### Back to Burgers' Equation
Now that we have the initial conditions set up, we can proceed and finish setting up the problem. We can generate the plot of the initial condition using our lambdify-ed function.
```python
from matplotlib import pyplot
%matplotlib inline
###variable declarations
nx = 301
nt = 100
dx = 4 * numpy.pi / (nx - 1)
nu = 0.07
dt = dx * nu
x = numpy.linspace(0, 4 * numpy.pi, nx)
un = numpy.empty(nx)
t = 0
u = numpy.asarray([ufunc(t, x0, nu) for x0 in x])
u
```
array([ 4. , 4.0418879 , 4.0837758 , 4.12566371, 4.16755161,
4.20943951, 4.25132741, 4.29321531, 4.33510322, 4.37699112,
4.41887902, 4.46076692, 4.50265482, 4.54454273, 4.58643063,
4.62831853, 4.67020643, 4.71209433, 4.75398224, 4.79587014,
4.83775804, 4.87964594, 4.92153385, 4.96342175, 5.00530965,
5.04719755, 5.08908545, 5.13097336, 5.17286126, 5.21474916,
5.25663706, 5.29852496, 5.34041287, 5.38230077, 5.42418867,
5.46607657, 5.50796447, 5.54985238, 5.59174028, 5.63362818,
5.67551608, 5.71740398, 5.75929189, 5.80117979, 5.84306769,
5.88495559, 5.92684349, 5.9687314 , 6.0106193 , 6.0525072 ,
6.0943951 , 6.136283 , 6.17817091, 6.22005881, 6.26194671,
6.30383461, 6.34572251, 6.38761042, 6.42949832, 6.47138622,
6.51327412, 6.55516202, 6.59704993, 6.63893783, 6.68082572,
6.72271359, 6.76460125, 6.80648759, 6.84836523, 6.89018589,
6.93163322, 6.97063555, 6.99367964, 6.91482855, 6.26782654,
4. , 1.73217346, 1.08517145, 1.00632036, 1.02936445,
1.06836678, 1.10981411, 1.15163477, 1.19351241, 1.23539875,
1.27728641, 1.31917428, 1.36106217, 1.40295007, 1.44483798,
1.48672588, 1.52861378, 1.57050168, 1.61238958, 1.65427749,
1.69616539, 1.73805329, 1.77994119, 1.82182909, 1.863717 ,
1.9056049 , 1.9474928 , 1.9893807 , 2.0312686 , 2.07315651,
2.11504441, 2.15693231, 2.19882021, 2.24070811, 2.28259602,
2.32448392, 2.36637182, 2.40825972, 2.45014762, 2.49203553,
2.53392343, 2.57581133, 2.61769923, 2.65958713, 2.70147504,
2.74336294, 2.78525084, 2.82713874, 2.86902664, 2.91091455,
2.95280245, 2.99469035, 3.03657825, 3.07846615, 3.12035406,
3.16224196, 3.20412986, 3.24601776, 3.28790567, 3.32979357,
3.37168147, 3.41356937, 3.45545727, 3.49734518, 3.53923308,
3.58112098, 3.62300888, 3.66489678, 3.70678469, 3.74867259,
3.79056049, 3.83244839, 3.87433629, 3.9162242 , 3.9581121 ,
4. , 4.0418879 , 4.0837758 , 4.12566371, 4.16755161,
4.20943951, 4.25132741, 4.29321531, 4.33510322, 4.37699112,
4.41887902, 4.46076692, 4.50265482, 4.54454273, 4.58643063,
4.62831853, 4.67020643, 4.71209433, 4.75398224, 4.79587014,
4.83775804, 4.87964594, 4.92153385, 4.96342175, 5.00530965,
5.04719755, 5.08908545, 5.13097336, 5.17286126, 5.21474916,
5.25663706, 5.29852496, 5.34041287, 5.38230077, 5.42418867,
5.46607657, 5.50796447, 5.54985238, 5.59174028, 5.63362818,
5.67551608, 5.71740398, 5.75929189, 5.80117979, 5.84306769,
5.88495559, 5.92684349, 5.9687314 , 6.0106193 , 6.0525072 ,
6.0943951 , 6.136283 , 6.17817091, 6.22005881, 6.26194671,
6.30383461, 6.34572251, 6.38761042, 6.42949832, 6.47138622,
6.51327412, 6.55516202, 6.59704993, 6.63893783, 6.68082573,
6.72271363, 6.76460154, 6.80648944, 6.84837734, 6.89026524,
6.93215314, 6.97404105, 7.01592895, 7.05781685, 7.09970475,
7.14159265, 7.18348056, 7.22536846, 7.26725636, 7.30914426,
7.35103216, 7.39292007, 7.43480797, 7.47669587, 7.51858377,
7.56047167, 7.60235958, 7.64424748, 7.68613538, 7.72802328,
7.76991118, 7.81179909, 7.85368699, 7.89557489, 7.93746279,
7.97935069, 8.0212386 , 8.0631265 , 8.1050144 , 8.1469023 ,
8.1887902 , 8.23067811, 8.27256601, 8.31445391, 8.35634181,
8.39822972, 8.44011762, 8.48200552, 8.52389342, 8.56578132,
8.60766923, 8.64955713, 8.69144503, 8.73333293, 8.77522083,
8.81710874, 8.85899664, 8.90088454, 8.94277244, 8.98466034,
9.02654825, 9.06843615, 9.11032405, 9.15221195, 9.19409985,
9.23598776, 9.27787566, 9.31976356, 9.36165146, 9.40353936,
9.44542727, 9.48731517, 9.52920307, 9.57109097, 9.61297887,
9.65486678, 9.69675468, 9.73864258, 9.78053048, 9.82241838,
9.86430629, 9.90619419, 9.94808209, 9.98996999, 10.03185789,
10.0737458 , 10.1156337 , 10.1575216 , 10.1994095 , 10.24129741,
10.28318531])
```python
pyplot.figure(figsize=(8, 4), dpi=100)
pyplot.plot(x, u, marker='o', lw=2)
# pyplot.xlim([0, 2 * numpy.pi])
# pyplot.ylim([0, 10]);
```
This is definitely not the hat function we've been dealing with until now. We call it a "saw-tooth function". Let's proceed forward and see what happens.
### Periodic Boundary Conditions
One of the big differences between Step 4 and the previous lessons is the use of *periodic* boundary conditions. If you experiment with Steps 1 and 2 and make the simulation run longer (by increasing `nt`) you will notice that the wave will keep moving to the right until it no longer even shows up in the plot.
With periodic boundary conditions, when a point gets to the right-hand side of the frame, it *wraps around* back to the front of the frame.
Recall the discretization that we worked out at the beginning of this notebook:
$$u_i^{n+1} = u_i^n - u_i^n \frac{\Delta t}{\Delta x} (u_i^n - u_{i-1}^n) + \nu \frac{\Delta t}{\Delta x^2}(u_{i+1}^n - 2u_i^n + u_{i-1}^n)$$
What does $u_{i+1}^n$ *mean* when $i$ is already at the end of the frame?
Think about this for a minute before proceeding.
```python
for n in range(nt):
un = u.copy()
for i in range(1, nx-1):
u[i] = un[i] - un[i] * dt / dx *(un[i] - un[i-1]) + nu * dt / dx**2 *\
(un[i+1] - 2 * un[i] + un[i-1])
u[0] = un[0] - un[0] * dt / dx * (un[0] - un[-2]) + nu * dt / dx**2 *\
(un[1] - 2 * un[0] + un[-2])
u[-1] = u[0]
u_analytical = numpy.asarray([ufunc(nt * dt, xi, nu) for xi in x])
```
```python
pyplot.figure(figsize=(11, 7), dpi=100)
pyplot.plot(x,u, marker='o', lw=2, label='Computational')
pyplot.plot(x, u_analytical, label='Analytical')
# pyplot.xlim([0, 2 * numpy.pi])
# pyplot.ylim([0, 10])
pyplot.legend();
```
***
What next?
----
The subsequent steps, from 5 to 12, will be in two dimensions. But it is easy to extend the 1D finite-difference formulas to the partial derivatives in 2D or 3D. Just apply the definition — a partial derivative with respect to $x$ is the variation in the $x$ direction *while keeping $y$ constant*.
Before moving on to [Step 5](./07_Step_5.ipynb), make sure you have completed your own code for steps 1 through 4 and you have experimented with the parameters and thought about what is happening. Also, we recommend that you take a slight break to learn about [array operations with NumPy](./06_Array_Operations_with_NumPy.ipynb).
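As a small preview of those array operations, here is a sketch of the same periodic Burgers update written with slicing instead of the inner `for` loop. It assumes `u`, `nt`, `dt`, `dx` and `nu` are defined as above, and uses `numpy.roll` to supply the wrap-around neighbours of the `nx - 1` unique grid points; it should reproduce the loop version, since the last grid point only duplicates the first one:

```python
for n in range(nt):
    un = u.copy()
    v = un[:-1]                      # the unique points; un[-1] duplicates un[0]
    u[:-1] = (v - v * dt / dx * (v - numpy.roll(v, 1)) +
              nu * dt / dx**2 * (numpy.roll(v, -1) - 2 * v + numpy.roll(v, 1)))
    u[-1] = u[0]                     # enforce the periodic duplicate point
```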
```python
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
```
# piston example with Gauss-Legendre collocation
```python
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.animation as anim
import sympy
sympy.init_printing()
from IPython.display import display
import numpy
import sys
sys.path.insert(0, './code')
from gauss_legendre import gauss_legendre
from symbolic import eval_expr
from evaluate_functional import evaluate_functional
import ideal_gas_lumped
```
### macroscopic state
```python
q = sympy.Symbol('q')
p = sympy.Symbol('p')
s_1 = sympy.Symbol('s_1')
s_2 = sympy.Symbol('s_2')
x = [q, p, s_1, s_2]
x
```
### parameters
```python
l = sympy.Symbol('l')
r = sympy.Symbol('r')
w = sympy.Symbol('w')
A = sympy.Symbol('A')
m_Al = sympy.Symbol('m_Al')
m_Cu = sympy.Symbol('m_Cu')
m = sympy.Symbol('m')
κ_Al = sympy.Symbol('κ_Al')
κ_Cu = sympy.Symbol('κ_Cu')
α = sympy.Symbol('α')
d = sympy.Symbol('d')
params = {
# length of cylinder (m)
l: 0.1,
# radius of cylinder / piston (m)
r: 0.05,
# length of piston (m)
w: 0.006,
# cross-sectional area of piston (m**2)
A: r**2 * sympy.pi,
# density of piston material (kg/m**3)
m_Al: 2700.0,
m_Cu: 8960.0,
# mass of piston (kg)
m: m_Cu * A * w,
# thermal conductivity piston material (W/(m*K))
κ_Al: 237.0,
κ_Cu: 401.0,
# thermal conduction coefficient through piston (W/K)
α: κ_Cu * A / w,
# friction coefficient between piston and cylinder (N*s/m)
d: 1.0,
}
params = {**params, **ideal_gas_lumped.params}
```
### functionals
```python
functionals = {}
v_1 = sympy.Symbol('v_1') # volume (m**3)
functionals[v_1] = A * (q - (w/2))
U_1 = sympy.Symbol('U_1') # internal energy
m_1 = sympy.Symbol('m_1') # fixed mass (kg)
θ_1 = sympy.Symbol('θ_1') # temperature (K)
π_1 = sympy.Symbol('π_1') # pressure (Pa)
ideal_gas_lumped.add_functionals(functionals, U=U_1, s=s_1, v=v_1, m=m_1, m_a=ideal_gas_lumped.m_Ar, θ=θ_1, π=π_1)
v_2 = sympy.Symbol('v_2')
functionals[v_2] = A * (l - (q + w/2))
U_2 = sympy.Symbol('U_2')
m_2 = sympy.Symbol('m_2')
θ_2 = sympy.Symbol('θ_2')
π_2 = sympy.Symbol('π_2')
ideal_gas_lumped.add_functionals(functionals, U=U_2, s=s_2, v=v_2, m=m_2, m_a=ideal_gas_lumped.m_Ar, θ=θ_2, π=π_2)
υ = sympy.Symbol('υ')
functionals[υ] = p / m
E = sympy.Symbol('E') # total energy (J)
functionals[E] = sympy.Rational(1,2) * p**2 / m + U_1 + U_2
```
```python
eval_expr(E, functionals)
```
### initial conditions
determine $m_1$, $m_2$ and $s_1(0)$, $s_2(0)$
```python
# wanted conditions
q0 = params[l]/2
v_10 = float(eval_expr(v_1, functionals, params, {q: q0}))
v_20 = float(eval_expr(v_2, functionals, params, {q: q0}))
θ_10 = 273.15 + 25.0
π_10 = 1.5 * 1e5
θ_20 = 273.15 + 20.0
π_20 = 1.0 * 1e5
```
```python
import ideal_gas
from scipy.optimize import fsolve
```
```python
n_10 = fsolve(lambda n : ideal_gas.S_π(ideal_gas.U2(θ_10, n), v_10, n) - π_10, x0=2e22)[0]
s_10 = ideal_gas.S(ideal_gas.U2(θ_10, n_10), v_10, n_10)
print(f"n = {n_10}")
print(f"s = {s_10}")
print(f"θ = {ideal_gas.U_θ(s_10, v_10, n_10) - 273.15} °C")
print(f"π = {ideal_gas.U_π(s_10, v_10, n_10) * 1e-5} bar")
print(f"u = {ideal_gas.U(s_10, v_10, n_10)}")
```
n = 1.345119603317771e+22
s = 3.3828942208847232
θ = 25.00000000000159 °C
π = 1.500000000000008 bar
u = 83.05585577928
```python
n_20 = fsolve(lambda n : ideal_gas.S_π(ideal_gas.U2(θ_20, n), v_20, n) - π_20, x0=2e22)[0]
s_20 = ideal_gas.S(ideal_gas.U2(θ_20, n_20), v_20, n_20)
print(f"n = {n_20}")
print(f"s = {s_20}")
print(f"θ = {ideal_gas.U_θ(s_20, v_20, n_20) - 273.15} °C")
print(f"π = {ideal_gas.U_π(s_20, v_20, n_20) * 1e-5} bar")
print(f"u = {ideal_gas.U(s_20, v_20, n_20)}")
```
n = 9.12041411630436e+21
s = 2.3394613409617664
θ = 20.000000000004206 °C
π = 1.0000000000000147 bar
u = 55.37057051952051
```python
x_0 = [q0, 0, s_10, s_20]
params[m_1] = n_10 * ideal_gas_lumped.m_Ar
params[m_2] = n_20 * ideal_gas_lumped.m_Ar
```
### dynamics
```python
x
```
```python
F = [υ, A*(π_1 - π_2) - d*υ, (α*(θ_2 - θ_1) + d*υ**2/2)/θ_1, (α*(θ_1 - θ_2) + d*υ**2/2)/θ_2]
F
```
```python
t_f = 0.8
dt = 1e-3
s = 2
print(f"K = {int(t_f // dt)}")
```
K = 800
```python
%time time, solution = gauss_legendre(x, F, x_0, t_f, dt, s, functionals, params)
```
CPU times: user 2.27 s, sys: 11.1 ms, total: 2.28 s
Wall time: 2.28 s
```python
sol = solution.copy()
```
```python
fig, ax = plt.subplots(dpi=200)
ax.plot(time,solution[:, 0]);
ax.plot(time,sol[:, 0], '--');
```
```python
energy = evaluate_functional(x, E, solution, functionals, params)
S = s_1 + s_2
entropy = evaluate_functional(x, S, solution, functionals, params)
fig, axes = plt.subplots(nrows=3, sharex=True, dpi=200)
fig.tight_layout(pad=1.5)
#axes[0].set_title(f"time step = {dt}")
axes[0].plot(time, solution[:, 0])
axes[0].set_ylabel("$q \: (\mathrm{m})$")
axes[0].ticklabel_format(style='sci', axis='y', scilimits=(0,0))
axes[0].xaxis.major.formatter._useMathText = True
axes[1].plot(time, energy)
axes[1].set_ylabel("$E \: (\mathrm{J})$")
axes[1].ticklabel_format(style='sci', axis='y', scilimits=(0,0))
axes[2].plot(time, entropy)
axes[2].set_xlabel("$t \: (\mathrm{s})$")
axes[2].set_ylabel("$S \: (\mathrm{J}/\mathrm{K})$");
axes[2].ticklabel_format(style='sci', axis='y', scilimits=(0,0))
font = {'family' : 'Calibri',
'weight' : 'normal',
'size' : 15}
matplotlib.rc('font', **font)
matplotlib.rc('text', usetex=True)
#fig.savefig("simulation.pdf")
```
```python
import matplotlib
import matplotlib.animation as anim
def animate(solution, functionals, params, file):
plt.ioff()
r2 = 2 * params[r]
w2 = params[w] / 2
l2 = params[l] - w2
cmap = matplotlib.cm.get_cmap('YlOrRd')
fig, ax = plt.subplots(dpi=200)
ax.set_xlim(left=0, right=params[l])
ax.set_ylim(bottom=-params[r], top=params[r])
q0 = params[l]/2
vol1 = plt.Rectangle((0, -params[r]), q0-w2, r2, fc='b')
ax.add_patch(vol1)
pist = plt.Rectangle((q0-w2, -params[r]), params[w], r2, fc='#C84843')
ax.add_patch(pist)
vol2 = plt.Rectangle((q0+w2, -params[r]), l2-q0, r2, fc='g')
ax.add_patch(vol2)
θ_1sol = evaluate_functional(x, θ_1, solution, functionals, params)
θ_2sol = evaluate_functional(x, θ_2, solution, functionals, params)
θ_min = numpy.min([numpy.min(θ_1sol), numpy.min(θ_2sol)])
θ_max = numpy.max([numpy.max(θ_1sol), numpy.max(θ_2sol)])
θ_swing = θ_max - θ_min
data = numpy.block([solution, θ_1sol.reshape(-1,1), θ_2sol.reshape(-1,1)])
def animate(datum):
qd = datum[0]
θ_1d = datum[4]
θ_2d = datum[5]
l_1 = qd - w2
l_2 = l2 - qd
vol1.set_width(l_1)
pist.set_x(l_1)
vol2.set_x(l_1 + params[w])
vol2.set_width(l_2)
vol1.set_fc(cmap(0.8 * ((θ_1d-θ_min) / θ_swing)))
vol2.set_fc(cmap(0.8 * ((θ_2d-θ_min) / θ_swing)))
return (vol1, pist, vol2)
animation = anim.FuncAnimation(fig, animate, frames=data, blit=True, repeat=False)
animation.save(file, fps=20, extra_args=['-vcodec', 'libx264'])
plt.close(fig)
```
```python
%time animate(solution[:4000], functionals, params, 'piston.mp4')
```
CPU times: user 26.9 s, sys: 1.39 s, total: 28.3 s
Wall time: 28.9 s
```python
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
```
<style>.container { width:100% !important; }</style>
```python
import numpy as np
import matplotlib.pyplot as plt
```
# Functions needed for the other functions to work
```python
# Funcion de la que me tomo mas tiempo hacerlo
def gauss(a,b):
'''Funcion que retorna un array "x" despues de hacer una eliminacion de Gauss con pivoteo'''
n = a.shape[0]
# https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.html
# estamos creando un matriz columna
x = np.ndarray( shape=(n,1), dtype = np.float64 )
a_copy = a.copy() # con el fin de hacer segumiento a la matriz al final .
s = [] # lista para alojar los valores maximos (es parte del pivoteo)
l = [] # usada para reordenar como se hara la eliminacion
# #### LLENAMOS LA LISTA "S" CON LOS VALORES MAXIMOS #######
for i in range(0,n):
l.append(i) # vamos agregando .Establecemos esta matriz
smax = 0 # el maximo de una fila es importante para el pivote
for j in range(0,n):
# se escoje el maximo entre todos los elemnos de una fila
smax = max(smax,abs(a[i][j]))
s.append(smax) # alojamos el maximo de la fila
# ---------- print('l inicial ' ,l) (SEGUIMIENTO)
# --------- print('s inicial ', s) (SEGUIMIENTO)
# ##### PIVOTEO(ESCOJEMOS LA FILAS PIVOTES Y ELIMINAMOS) #########
# k hace referencia a la columna donde se hara ceros seran creados en el array a_ij
# Pero recuerda que los ceros no se crean realmente . Por que en esos espacios almacenamos
# otras cosas(los multiplicadorese para descomposicion LU) .
for k in range(0,n-1):
rmax = 0
# ##### SE ESCOJE LA FILA PIVOTE EN ESTE BLOQUE #########
for i in range (k,n):
# esto es para escoger la correcta fila pivote
# se esta dividiendo (elemnto de columna k y fila [k-n][k])/los Smax
r = abs( a[l[i]][k] / s[l[i]] )
if (r > rmax):
# j = i es para escoger el correcto pivote
# rmax = r es para escoger la maxima proporcion(ratio)
rmax,j = r,i
# al final no usaremos rmax , solo es necesario para detener el if cuando sea necesario
l[j],l[k] = l[k],l[j] # luego se cambia el lugar donde ocurre el maximo proporcion
# ####### ELIMINACION GAUSSIANA PERO LA FILA PIVOTE LO DETERMINA "l" ##################
for i in range(k+1,n):
xmult = a[l[i]][k]/a[l[k]][k]
a[l[i]][k] = xmult # los guardo para fines de LU y para eliminar "b"
a_copy[l[i]][k] -= xmult*a[l[k]][k] # para hacer segumiento(se puede borrar)
# este for hara segumiento de que se haga operaciones en toda la fila que no es pivot
for j in range(k+1,n):
a[l[i]][j] -= xmult*a[l[k]][j]
a_copy[l[i]][j] -= xmult*a[l[k]][j] # para hacer seguimiento(se puede borrar)
# --------- print('l final ' ,l) (SEGUIMIENTO)
# --------- print('a final copy \n' , a_copy) (SEGUIMIENTO)
# ##### SEGIMOS PIVOTEANDO PERO PARA LA MATRIZ b###
for k in range(0,n-1):
# recuerda el que ahora maneja el orden sera la lista "l"
# debemos hacer las operaciones en el mismo orden que hemos hecho para "a"
for i in range(k+1,n):
b[l[i]] -= a[l[i]][k]*b[l[k]]
#------ print('b final \n ' , b) (SEGUIMIENTO)
# ######### AHORA HACEMOS LA SUSTITUCION BACKWARD ##########
# espero se entienda por que -1 . Es por la cuenta por cero
x[n-1] = b[l[n-1]]/a[l[n-1]][n-1]
for i in range(n-2,-1,-1):
summ = b[l[i]]
for j in range(i+1,n):
summ -= a[l[i]][j]*x[j]
x[i] = summ/a[l[i]][i]
return x
```
# Here I implement general polynomial regression (the linear case is of course included)
```python
def regresion_orden_n(data,n=1):
''' Regresion de cualquier orden . Esta funcion retorna un polinomio de orden "n" que hace regresion a los datos , ademas regresa los valores de los parametros'''
# la primera fila de "data" son los x's y la segunda fila de "data" son los y's
# n = sera el orden del polinomio que se hara la regresion
# m = el numero de datos
m = data.shape[1] # la dimension de los datos . Eso necesitamos
x = data[0]
y = data[1]
########## Creamos el sistema lineal que vamos a resolver #########
# creamos la matriz m+1 . Recuerda una regresion lineal crea una matriz 2*2 .
# Es logico que una regresion polinomial de orden n genere un sistema de ecuaciones de orden n+1 .
A = np.ndarray( shape=(n+1,n+1), dtype = np.float64 )
b = np.ndarray( shape=(n+1,1), dtype = np.float64 )
# llenamos la matriz con los datos
# https://es.wikipedia.org/wiki/Regresi%C3%B3n_no_lineal (aca esta el porque hice de esta manera)
for i in range(0,n+1):
for j in range(0,n+1):
# matriz
A[i,j] = np.sum(x**(j+i))
b[i,0] = np.sum(x**(i)*y)
########## Resolvemos la matriz #############
a = gauss(A.copy(),b.copy())
# creamos el polinomio que hace regresion a nuestros datos
# pongo evalf y no "x" para no confundir con mi array en la primeras lineas de la funcion cabeza (ver lineas arriba )
def pn(evalf):
acum = 0
# recuerda que es del orden de "n" el polinomio y "a" es una array con (n+1) elementos
for i in range(0,len(a)):
acum = acum + a[i,0]*(evalf)**(i)
return acum
return pn,a
```
# Testing my regression for any order
```python
import random
####### Armamos la data para la regresion lineal
lista = []
for i in range(11):
lista.append(3*i+5 + random.uniform(-5,5)) # regresa un numero float entre -1 y 1
x1 = np.arange(0,11,1)
y1 = np.array(lista,dtype=np.float64)
data1 = np.array([x1,y1],dtype=np.float64)
# creamos las funciones
pn_ord_1,parame = regresion_orden_n(data1.copy(),n=1)
######## Armamos la data para la regresion polinomial de segudno orden
lista = []
for i in range(11):
lista.append(2*i**2+3*i -5 + random.uniform(-20,20)) # regresa un numero float entre -1 y 1
x2 = np.arange(0,11,1)
y2 = np.array(lista,dtype=np.float64)
data2 = np.array([x2,y2],dtype=np.float64)
# creamos las funciones
pn_ord_2,parame = regresion_orden_n(data2.copy(),n=2)
######## Armamos la data para la regresion polinomial de tercer orden
lista = []
for i in range(11):
lista.append(i**3-2*i**2-3*i-4 + random.uniform(-100,100)) # regresa un numero float entre -1 y 1
x3 = np.arange(0,11,1)
y3 = np.array(lista,dtype=np.float64)
data3 = np.array([x3,y3],dtype=np.float64)
# creamos las funciones
pn_ord_3,parame = regresion_orden_n(data3.copy(),n=3)
x1rango = np.linspace(0,10.1,100)
# graficamos
fig, axes = plt.subplots(nrows=2,ncols=2 , figsize = (30,15)) # fig es la figura y axes son los ejes (son elementos de cada figura)
axes[0,0].plot(x1rango,pn_ord_1(x1rango),'r',label="regresion linear 1")
axes[0,0].plot(data1[0],data1[1],'*b')
axes[0,0].axvline(0, color="black")
axes[0,0].set_title('Regresion lineal')
# axes[0,0].set_ylim(-3, 3)
axes[0,0].legend( loc='upper right', shadow=True) # se pone la legenda
axes[0,1].plot(x1rango,pn_ord_2(x1rango),'r',label="regresion linear 2")
axes[0,1].plot(data2[0],data2[1],'*b')
axes[0,1].axvline(0, color="black")
axes[0,1].set_title('Regresion polinomial orden 2')
# axes[0,0].set_ylim(-3, 3)
axes[0,1].legend( loc='upper right', shadow=True) # se pone la legenda
axes[1,0].plot(x1rango,pn_ord_3(x1rango),'r',label="regresion linear 3")
axes[1,0].plot(data3[0],data3[1],'*b')
axes[1,0].axvline(0, color="black")
axes[1,0].set_title('Regresion polinomial orden 3')
# axes[0,0].set_ylim(-3, 3)
axes[1,0].legend( loc='upper right', shadow=True) # se pone la legenda
plt.show()
```
# Here I implement nonlinear regression
```python
# Esta regresion no lineal solo sirve para cuando se tiene dos parametros . a_0 y a_1
# toma como parametro la data, la funcion , su derivada parcial respecto a0 y tambien la de a1 . Los dos ultimos son los a0 y a1 iniciales estimados por el que llama la funcion .
# cabe recordar que "f" , fa0 y fa1 debe tener 3 parametros como argumentos . En el codigo veras la razon de esto
def regresion_no_lineal(data,f,fa0,fa1,a0_in=0,a1_in=0,kmax=1000):
'''
funcion que acepta la data , funciones y sus derivadas parciales , con los parametros iniciales
Y retorna los parametros a0 y a1 que hacen una regresion lineal a la funcion
'''
# m = es el numero de datos que se tiene
m = data.shape[1]
x = data[0]
y = data[1]
a0 = a0_in
a1 = a1_in
#### creamos las matrices que vamos a usar
# D = Z.\deltaA + E
D = np.ndarray(shape=(m,1),dtype=np.float64)
Z = np.ndarray(shape=(m,2),dtype=np.float64)
##### comenzamos con las iteraciones
for k in range(0,kmax):
# llenamos la matriz que muta en cada iteracion
for i in range(0,m):
D[i,0] = y[i] - f(a0,a1,x[i])
for j in range(0,2):
if j == 0:
Z[i,j] = fa0(a0,a1,x[i])
elif j == 1:
Z[i,j] = fa1(a0,a1,x[i])
#### Calculamos los \delta(a0) y \delta(a1)
# para hacer eso formamos el sistema de ecuaciones a resolver
A = Z.T.dot(Z) # saco la transpuesta a una matriz y luego la multiplico matricialmente con la misma
b = Z.T.dot(D) # Saco la traspuesta y la multiplico matricialmente con la misma
delta_a = gauss(A,b)
### actualizamos los datos
a0 = a0 + delta_a[0,0]
a1 = a1 + delta_a[1,0]
return a0,a1
```
# Testing my nonlinear regression with the sine function
```python
def funcion(a0,a1,xev):
return a0*np.sin(a1*xev)
def derviva0_funcion(a0,a1,xev):
return np.sin(a1*xev)
def deriva1_funcion(a0,a1,xev):
return a0*xev*np.cos(a1*xev)
import random
####### Armamos la data para la regresion lineal
lista = []
for i in range(11):
lista.append(3*np.sin(2*i) + random.uniform(-1,1)) # regresa un numero float entre -1 y 1
x1 = np.arange(0,11,1)
y1 = np.array(lista,dtype=np.float64)
data1 = np.array([x1,y1],dtype=np.float64)
# creamos las funciones
a0_ajust,a1_ajust = regresion_no_lineal(data1.copy(),f=funcion,fa0=derviva0_funcion,fa1=deriva1_funcion,a0_in=3,a1_in=2)
print("x"*10)
print(a0_ajust,a1_ajust)
x1rango = np.linspace(0,10.1,100)
# graficamos
fig, axes = plt.subplots(nrows=1,ncols=1 , figsize = (10,5)) # fig es la figura y axes son los ejes (son elementos de cada figura)
axes.plot(x1rango,funcion(a0_ajust,a1_ajust,x1rango),'r',label="regresion linear 1")
axes.plot(data1[0],data1[1],'*b')
axes.axvline(0, color="black")
axes.set_title('Regresion lineal')
# axes[0,0].set_ylim(-3, 3)
axes.grid("True")
axes.legend( loc='upper right', shadow=True) # se pone la legenda
plt.show()
```
# Problem 1
\begin{align}
y &= \left(\dfrac{a+\sqrt{x}}{b\sqrt{x}}\right)^2 \\
\sqrt{y}&= \dfrac{a}{b}\dfrac{1}{\sqrt{x}} + \dfrac{1}{b}
\end{align}
Then, making the change of variables
\begin{align}
\dfrac{a}{b} &= a1 \\
\dfrac{1}{b} &=a0 \\
\dfrac{1}{\sqrt{x}} &= x1 \\
\sqrt{y} &= y1
\end{align}
we have to do a linear regression on
$$ y1 = a1\cdot x1 + a0$$
```python
# tenemos la siguiente data
x = np.array([0.5,1,2,3,4],dtype=np.float64)
y = np.array([10.4,5.8,3.3,2.4,2],dtype=np.float64)
datareal = np.array([x,y],dtype=np.float64)
# transformamos la data para el problema linealizado
x1 = 1/x**(1/2)
y1 = y**(1/2)
# armamos la data para la linealizacion del problema
data1 = np.array([x1,y1],dtype=np.float64)
pn_ord_1,parametros = regresion_orden_n(data1.copy(),n=1)
# ya que tenemos los parametros , los guardamos
a0 = parametros[0,0]
a1 = parametros[1,0]
# luego hallamos a y b de las relaciones anteriores que usamos para linealizar nuestro problema
b = 1/a0
a = a1*b
print("Los valores de a y b son por lo tanto")
print(f'a = {a}')
print(f'b = {b}')
```
Los valores de a y b son por lo tanto
a = 4.861361656451297
b = 2.440288253956208
```python
######## Comprobando que resulta los a y b deseados######
def funcion1(acal,bcal,x):
return ( (acal+x**(1/2))/(bcal*x**(1/2)) )**2
x1rango = np.linspace(0.2,5,100)
rango = np.linspace(0,2,100)
# graficamos
fig, axes = plt.subplots(nrows=1,ncols=2 , figsize = (30,10)) # fig es la figura y axes son los ejes (son elementos de cada figura)
axes[0].plot(rango,pn_ord_1(rango),'r',label="regresion linear ")
axes[0].plot(data1[0],data1[1],'*b')
axes[0].axvline(0, color="black")
axes[0].set_title('Regresion lineal en la linealizacion')
axes[0].grid("True")
axes[0].legend( loc='upper right', shadow=True) # se pone la legenda
axes[1].plot(x1rango,funcion1(a,b,x1rango),'r',label="regresion linear 1")
axes[1].plot(datareal[0],datareal[1],'*b')
axes[1].axvline(0, color="black")
axes[1].set_title('Comparacion de los puntos con la funcion que fue linealizada')
axes[1].grid("True")
axes[1].legend( loc='upper right', shadow=True) # se pone la legenda
plt.show()
```
```python
# comproblemos que es el minimo
error1 = np.sum( ( funcion1(a,b,datareal[0]) - datareal[1] )**2 )
print(f"El errror que se obtuvo fue de error : {error1}")
```
El errror que se obtuvo fue de error : 0.0028643984188465354
# Problem 2
##### We use linearization for this function
\begin{align}
y &= \alpha_4 x e^{\beta_4 x} \\
ln\left(\dfrac{y}{x}\right) &= \beta_4 x + ln(\alpha_4)
\end{align}
Making the following change of variables
\begin{align}
ln\left(\dfrac{y}{x}\right) &= y1 \\
x &= x1 \\
\beta_4 &= a1 \\
ln(\alpha_4) &= a0
\end{align}
we have to do a linear regression on
$$ y1 = a1 \cdot x1 + a0 $$
```python
# tenemos la siguiente data
x = np.array([0.1,0.2,0.4,0.6,0.9,1.3,1.5,1.7,1.8],dtype=np.float64)
y = np.array([0.75,1.25,1.45,1.25,0.85,0.55,0.35,0.28,0.18],dtype=np.float64)
datareal = np.array([x,y],dtype=np.float64)
# transformamos la data para el problema linealizado
x1 = x
y1 = np.log(y/x)
# armamos la data para la linealizacion del problema
data1 = np.array([x1,y1],dtype=np.float64)
# Resolvamos la linealizacion , teniendo la funcion con sus parametros
pn_ord_1,parametros = regresion_orden_n(data1.copy(),n=1)
# ya que tenemos los parametros , los guardamos
a0 = parametros[0,0]
a1 = parametros[1,0]
# luego hallamos a y b de las relaciones anteriores que usamos para linealizar nuestro problema
alpha4 = np.e**(a0)
beta4 = a1
print("Los valores de alpha4 y beta4 usando linealizacion")
print(f'alpha4 = {alpha4}')
print(f'beta4 = {beta4}')
```
Los valores de alpha4 y beta4 usando linealizacion
alpha4 = 9.661785859642904
beta4 = -2.473308765704635
#### We use nonlinear regression
```python
def funcion(alpha4_cal,beta4_cal,xev):
return alpha4_cal*xev*np.e**(beta4_cal*xev)
def derivalpha4_funcion(alpha4_cal,beta4_cal,xev):
return xev*np.e**(beta4_cal*xev)
def derivabeta4_funcion(alpha4_cal,beta4_cal,xev):
return alpha4_cal*xev**2*np.e**(beta4_cal*xev)
x = np.array([0.1,0.2,0.4,0.6,0.9,1.3,1.5,1.7,1.8],dtype=np.float64)
y = np.array([0.75,1.25,1.45,1.25,0.85,0.55,0.35,0.28,0.18],dtype=np.float64)
datareal = np.array([x,y],dtype=np.float64)
# Obtengamos los parametros
########### Pregunta , ve cambia las condiciones iniciales y todo cambiara , ( cambia 2 por 1)
alpha4_ajust,beta4_ajust = regresion_no_lineal(datareal.copy(),f=funcion,fa0=derivalpha4_funcion,fa1=derivabeta4_funcion,a0_in=2,a1_in=0.4)
print("Los valores de alpha4 y beta4 usando regresion no lineal")
print(f'alpha4 = {alpha4_ajust}')
print(f'beta4 = {beta4_ajust}')
```
Los valores de alpha4 y beta4 usando regresion no lineal
alpha4 = 9.897361567707442
beta4 = -2.5318692382283694
# Plotting to compare
```python
x1rango = np.linspace(0,2,100)
# graficamos
fig, axes = plt.subplots(nrows=1,ncols=1 , figsize = (10,5)) # fig es la figura y axes son los ejes (son elementos de cada figura)
axes.plot(x1rango,funcion(alpha4_ajust,beta4_ajust,x1rango),'r',label="Con regresion no lineal")
axes.plot(x1rango,funcion(alpha4,beta4,x1rango),'c',label="con regresion lineal(linealizacion)")
axes.plot(datareal[0],datareal[1],'*b')
axes.set_title('Pregunta numero 2')
# axes.set_ylim(0, 2)
axes.grid("True")
axes.legend( loc='upper right', shadow=True) # se pone la legenda
plt.show()
```
# Comparing numerically
```python
# comproblemos que es el minimo
error1 = np.sum( ( funcion(alpha4,beta4,datareal[0]) - datareal[1] )**2 )
print(f"El errror cuadratico usando linealizacion : {error1}")
error2 = np.sum( ( funcion(alpha4_ajust,beta4_ajust,datareal[0]) - datareal[1] )**2 )
print(f"El errror cuadratico usando regresion no lineal : {error2}")
########## pregunta #########
'''
Es normal que error1 > error2 . Es decir una esta usando un metodo directo mientras que para hallar error 2 se esta usando un metodo iterativo con aproximaciones
Por lo tanto yo esperaria que error 1 < error 2 . Discusion pendiente
'''
```
El errror cuadratico usando linealizacion : 0.02120628164241789
El errror cuadratico usando regresion no lineal : 0.018315258899579863
' \nEs normal que error1 > error2 . Es decir una esta usando un metodo directo mientras que para hallar error 2 se esta usando un metodo iterativo con aproximaciones \nPor lo tanto yo esperaria que error 1 < error 2 . Discusion pendiente \n'
########## Jhonatan's results #############
NONLINEAR: alfa4 = 9.89736154327602 and beta4 = -2.5318692333384005
LINEAR: alfa4 = 9.661785859642901 and beta4 = -2.4733087657046346
# Computational Astrophysics
## Interpolation 01
---
## Eduard Larrañaga
Observatorio Astronómico Nacional\
Facultad de Ciencias\
Universidad Nacional de Colombia
---
### About this notebook
In this notebook we present some of the interpolation techniques.
---
## Interpolation
Experimental astrophysical data usually consist of a discrete set of data points $(x_j, f_j)$ which represent the value of a function $f(x)$ for a finite set of arguments $\{ x_1, x_2, ..., x_n \}$. However, one usually needs to know the value of the function at additional points, and **interpolation** is the method used to obtain those values.
**Interpolation** corresponds to defining a function $g(x)$, using the known discrete information and such that $g(x_j) = f(x_j)$, to approximate the value of $f$ at any point $x \in [x_{min}, x_{max}]$, where $x_{min} = \min \{ x_j \}$ and $x_{max} = \max \{ x_j \}$.
**Extrapolation** corresponds to approximating the value of $f$ at a point $x \notin [x_{min}, x_{max}]$.
---
## Simple Polynomial Interpolation
The simplest method of interpolation is called **Polynomial Interpolation** and consist in finding a polynomial $p(x)$ of degree $n$ that passes through $n+1$ points $x_j$ with values $p(x_j) = f(x_j)$, where $j=0,1,2,...,n$.
The polynomial is written
$p_n(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n$
where $a_i$ are $n+1$-real constants to be determined by the conditions
$\left(
\begin{array}{ccccc}
1&x_0^1&x_0^2&\cdots&x_0^n\\
\vdots&\vdots&\vdots&\vdots&\vdots\\
\vdots&\vdots&\vdots&\vdots&\vdots\\
1&x_n^1&x_n^2&\cdots&x_n^n\\
\end{array}
\right)
\left(\begin{array}{c}
a_0\\
\vdots\\
\vdots\\
a_n
\end{array}\right)
=
\left(\begin{array}{c}
f(x_0)\\
\vdots\\
\vdots\\
f(x_n)
\end{array}\right)$
Solving this system is straightforward for simple cases such as linear ($n=1$) and quadratic ($n=2$) interpolation, which we show below, but can be complicated for large $n$.
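For small $n$ the linear system above can also be assembled and solved numerically; here is a minimal sketch with NumPy (the sample points below are arbitrary illustrative values, not data from this notebook):

```python
import numpy as np

xj = np.array([0.0, 1.0, 2.0])          # data points x_j (illustrative)
fj = np.array([1.0, 3.0, 2.0])          # values f(x_j)

V = np.vander(xj, increasing=True)      # columns 1, x, x^2, as in the system above
a = np.linalg.solve(V, fj)              # coefficients a_0, a_1, a_2
p2 = lambda x: np.polyval(a[::-1], x)   # np.polyval expects decreasing powers
print(a, p2(1.5))
```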
---
### Linear Interpolation
The linear interpolation ($n=1$) of a function $f(x)$ in an interval
$[x_i,x_{i+1}]$ requires information from just two points (the lower and upper limits of the interval).
Solving the linear system, or equivalently using the forward difference approximation defined for numerical differentiation, we obtain the linear polynomial
$p_1(x) = f(x_i) + \frac{f(x_{i+1}) - f(x_i)}{h} (x-x_i) + \mathcal{O}(h^2)$,
where $h=x_{i+1} - x_i$.
The linear interpolation method provides a polynomial with second order accuracy and that can be differentiated once, but this derivative is not continuous at the endpoints $x_i$ and $x_{i+1}$.
#### Example. Piecewise Linear Interpolation
We will read a data set from a .txt file and interpolate linearly between each pair of points (piecewise interpolation)
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Reading the data
data = np.loadtxt('data_points.txt', comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]
plt.figure()
plt.scatter(x,f)
plt.xlabel(r'$x$')
plt.ylabel(r'$f(x)$')
plt.show()
```
```python
def linearInterpolation(x1, x2, f1, f2, x):
p1 = f1 + ((f2-f1)/(x2-x1))*(x-x1)
return p1
N = len(x)
plt.figure(figsize=(7,5))
plt.scatter(x, f, color='black')
for i in range(N-1):
x_interval = np.linspace(x[i],x[i+1],3)
    # Note that the number 3 in the above line indicates the number of
# points interpolated in each interval !
# (including the extreme points of the interval)
y_interval = linearInterpolation(x[i], x[i+1], f[i], f[i+1], x_interval)
plt.plot(x_interval, y_interval,'r')
plt.title(r'Linear Piecewise Interpolation')
plt.xlabel(r'$x$')
plt.ylabel(r'$p_1(x)$')
plt.show()
```
---
### Quadratic Interpolation
The quadratic interpolation ($n=2$) requires information from three points, and the resulting polynomial will be sensitive to which three points are chosen.
Choosing the points $x_i$ , $x_{i+1}$ and $x_{i+2}$ for interpolating $f(x)$ in the range $[x_{i},x_{i+1}]$ and solving the corresponding system from the linear system gives
$p_2(x) = \frac{(x-x_{i+1})(x-x_{i+2})}{(x_i - x_{i+1})(x_i - x_{i+2})} f(x_i)
+ \frac{(x-x_{i})(x-x_{i+2})}{(x_{i+1} - x_{i})(x_{i+1} - x_{i+2})} f(x_{i+1})
+ \frac{(x-x_i)(x-x_{i+1})}{(x_{i+2} - x_i)(x_{i+2} - x_{i+1})} f(x_{i+2}) + \mathcal{O}(h^3)$,
where $h = \max \{ x_{i+2}-x_{i+1},x_{i+1}-x_i \}$.
This time the interpolating polynomial $p(x)$ is twice differentiable, but although its first derivative will be continuous, the second derivative will have finite-size steps.
#### Example. Piecewise Quadratic Interpolation
We will read a data set from a .txt file and interpolate a second order polynomial between each pair of points (piecewise interpolation)
```python
import numpy as np
import matplotlib.pyplot as plt
# Reading the data
data = np.loadtxt('data_points.txt', comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]
def quadraticInterpolation(x1, x2, x3, f1, f2, f3, x):
p2 = (((x-x2)*(x-x3))/((x1-x2)*(x1-x3)))*f1 + (((x-x1)*(x-x3))/((x2-x1)*(x2-x3)))*f2 +\
(((x-x1)*(x-x2))/((x3-x1)*(x3-x2)))*f3
return p2
N = len(x)
plt.figure(figsize=(7,5))
plt.scatter(x, f, color='black')
for i in range(N-2):
x_interval = np.linspace(x[i],x[i+1],6) # 6 interpolate points in each interval
y_interval = quadraticInterpolation(x[i], x[i+1], x[i+2], f[i], f[i+1], f[i+2], x_interval)
plt.plot(x_interval, y_interval,'r')
plt.title(r' Quadratic Polynomial Piecewise Interpolation')
plt.xlabel(r'$x$')
plt.ylabel(r'$p_2(x)$')
plt.show()
```
**Note:** The form of this quadratic interpolation leaves the last interval without information (because we need three points to apply the numerical method).
---
## Lagrange Interpolation
**Lagrange Interpolation** also finds an interpolating polynomial of degree $n$ using data at $n+1$
points, but uses an alternative method for finding the coefficients. First, we re-write the interpolating linear polynomial as
\begin{equation}
p_1(x) = \frac{x-x_{i+1}}{x_i - x_{i+1}} f(x_i) + \frac{x-x_i}{x_{i+1}-x_i} f(x_{i+1}) + \mathcal{O}(h^2),
\end{equation}
or as
\begin{equation}
p_1(x) = \sum_{j=i}^{i+1} f(x_j) L_{1j}(x) + \mathcal{O}(h^2)
\end{equation}
where we introduce the Lagrange coefficients
\begin{equation}
L_{1j}(x) = \frac{x-x_k}{x_j-x_k}\bigg|_{k\ne j}
\end{equation}
Note that these coefficients ensure that the polynomial passes through the two data points, i.e. $p_1(x_i) = f(x_i)$ and $p_1(x_{i+1}) = f(x_{i+1})$
**Lagrange interpolation** generalizes these expressions to give a polynomial of degree $n$ that
passes through all the $n+1$ data points. It is defined by
\begin{equation}
p_n (x) = \sum_{j=0}^{n} f(x_j) L_{nj}(x) + \mathcal{O}(h^{n+1})\,, \label{eq:LagrangeInterpolation}
\end{equation}
where the coefficients are generalized to
\begin{equation}
L_{nj}(x) = \prod_{k\ne j}^{n} \frac{x-x_k}{x_j - x_k}\,.
\end{equation}
Again, it is important to note that these coefficients ensure that $p(x_j) = f(x_j)$ for the $n+1$ data points.
```python
%load lagrangeInterpolation
```
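The `%load` above pulls in a local module that is not included in this notebook. A minimal sketch of a compatible `lagi.p(x, xi, fi)` is shown below; its interface is only inferred from how it is called in the next cell, so the actual module may differ:

```python
import numpy as np

def p(x, xj, fj):
    '''Evaluate the Lagrange interpolating polynomial through (xj, fj) at x.'''
    x = np.atleast_1d(x)
    n = len(xj)
    result = np.zeros_like(x, dtype=np.float64)
    for j in range(n):
        # Lagrange coefficient L_nj(x) = prod_{k != j} (x - x_k)/(x_j - x_k)
        L = np.ones_like(x, dtype=np.float64)
        for k in range(n):
            if k != j:
                L *= (x - xj[k]) / (xj[j] - xj[k])
        result += fj[j] * L
    return result
```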
```python
import numpy as np
import matplotlib.pyplot as plt
import lagrangeInterpolation as lagi
import sys
# Reading the data
data = np.loadtxt('data_points.txt', comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]
N = len(x)
# Degree of the polynomial to be interpolated piecewise
n = 3
# Check if the number of point is enough to interpolate such a polynomial
if n>=N:
print('\nThere are not enough points to interpolate this polynomial.')
print(f'Using {N:.0f} points it is possible to interpolate polynomials up to order n={N-1:.0f}')
sys.exit()
plt.figure(figsize=(7,5))
plt.title(f'Lagrange Polynomial Piecewise Interpolation n={n:.0f}')
plt.scatter(x, f, color='black')
# Piecewise Interpolation Loop
for i in range(N-n):
xi = x[i:i+n+1]
fi = f[i:i+n+1]
x_interval = np.linspace(x[i],x[i+1],3*n)
y_interval = lagi.p(x_interval,xi,fi)
plt.plot(x_interval, y_interval,'r')
plt.xlabel(r'$x$')
plt.ylabel(r'$p_n(x)$')
plt.show()
```
Note that with this scheme the last intervals (from $x_{N-n}$ onward) are left without an interpolation, since the loop stops when fewer than $n+1$ points remain. What can we do?
```python
import numpy as np
import matplotlib.pyplot as plt
import lagrangeInterpolation as lagi
import sys
# Reading the data
data = np.loadtxt('data_points.txt', comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]
N = len(x)
# Degree of the polynomial to be interpolated piecewise
n = 3
# Check if the number of point is enough to interpolate such a polynomial
if n>=N:
print('\nThere are not enough points to interpolate this polynomial.')
print(f'Using {N:.0f} points it is possible to interpolate polynomials up to order n={N-1:.0f}')
sys.exit()
plt.figure(figsize=(7,5))
plt.title(f'Lagrange Polynomial Piecewise Interpolation n={n:.0f}')
plt.scatter(x, f, color='black')
# Piecewise Interpolation Loop
for i in range(N-n):
xi = x[i:i+n+1]
fi = f[i:i+n+1]
x_interval = np.linspace(x[i],x[i+1],3*n)
y_interval = lagi.p(x_interval,xi,fi)
plt.plot(x_interval, y_interval,'r')
# Piecewise interpolation for the remaining final intervals,
# using lower degree polynomials
while n>1:
m = n-1
for i in range(N-n,N-m):
xi = x[i:i+m+1]
fi = f[i:i+m+1]
x_interval = np.linspace(x[i],x[i+1],3*m)
y_interval = lagi.p(x_interval,xi,fi)
plt.plot(x_interval, y_interval,'r')
n=n-1
plt.xlabel(r'$x$')
plt.ylabel(r'$p_n(x)$')
plt.show()
```
### Runge's Phenomenon
Why interpolate piecewise? It is usual to think that interpolating a single high-order polynomial may be better than lower-order polynomials. However, for oscillating functions this is usually not a good idea due to Runge's phenomenon.
For example, for a dataset with $N$ points we can interpolate a $19$-degree polynomial:
```python
import numpy as np
import matplotlib.pyplot as plt
import lagrangeInterpolation as lagi
import sys
# Reading the data
data = np.loadtxt('data_points.txt', comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]
N = len(x)
# Higher Degree polynomial to be interpolated
n = N-1
plt.figure(figsize=(7,5))
plt.title(f'Lagrange Polynomial Piecewise Interpolation n={n:.0f}')
plt.scatter(x, f, color='black')
#Interpolation of the higher degree polynomial
x_int = np.linspace(x[0],x[N-1],3*n)
y_int = lagi.p(x_int,x,f)
plt.plot(x_int, y_int,'r')
plt.xlabel(r'$x$')
plt.ylabel(r'$p_n(x)$')
plt.show()
```
It is clear that the high-order polynomial, interpolated over the whole dataset (not piecewise), behaves badly, especially at the borders of the dataset interval. The behavior is worse for highly oscillating functions!
---
## Piecewise Cubic Hermite Interpolation
Hermite interpolation is a special form of polynomial interpolation which uses data points and the derivatives of the data to obtain the interpolating polynomial. Incorporation of the first derivative reduces the unwanted oscillations. The inclusion of the derivative also permits interpolating a high-order polynomial with fewer data points than Lagrange interpolation.
Piecewise third-order Hermite interpolation is one of the most used cases. In this method, for each domain interval $[x_i , x_{i+1}]$, in which we know (or evaluate) $f(x_i)$, $f(x_{i+1})$, $f'(x_i)$ and $f'(x_{i+1})$, one interpolates a cubic Hermite polynomial given by
\begin{equation}
H_3(x) = f(x_i)\psi_0(z) + f(x_{i+1})\psi_0(1-z)+ f'(x_i)(x_{i+1} - x_{i})\psi_1(z) - f'(x_{i+1})(x_{i+1}-x_i)\psi_1 (1-z),
\end{equation}
where
\begin{equation}
z = \frac{x-x_i}{x_{i+1}-x_i}
\end{equation}
and
\begin{align}
\psi_0(z) =&2z^3 - 3z^2 + 1 \\
\psi_1(z) =&z^3-2z^2+z\,\,.
\end{align}
Note that it is possible to interpolate a third-order polynomial in an interval with only two points! This fact makes it possible to interpolate the third-order polynomial in all intervals, even the last one.
```python
%load HermiteInterpolation
```
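As before, the loaded module is not included here. A minimal sketch of a compatible `heri.H3(x, xi, fi, dfidx)`, written directly from the formula above (with `xi`, `fi`, `dfidx` holding the two endpoint values of the interval), could look like this; the real module may be organized differently:

```python
import numpy as np

def H3(x, xi, fi, dfidx):
    '''Cubic Hermite polynomial on [xi[0], xi[1]] using endpoint values and derivatives.'''
    h = xi[1] - xi[0]
    z = (x - xi[0]) / h
    psi0 = lambda z: 2*z**3 - 3*z**2 + 1
    psi1 = lambda z: z**3 - 2*z**2 + z
    return (fi[0]*psi0(z) + fi[1]*psi0(1 - z)
            + dfidx[0]*h*psi1(z) - dfidx[1]*h*psi1(1 - z))
```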
```python
import numpy as np
import matplotlib.pyplot as plt
import HermiteInterpolation as heri
def Derivative(x, f):
'''
------------------------------------------
Derivative(x, f)
------------------------------------------
This function returns the numerical
derivative of a discretely-sample function
using one-side derivatives in the extreme
points of the interval and second order
accurate derivative in the middle points.
The data points may be evenly or unevenly
spaced.
------------------------------------------
'''
# Number of points
N = len(x)
dfdx = np.zeros([N, 2])
dfdx[:,0] = x
# Derivative at the extreme points
dfdx[0,1] = (f[1] - f[0])/(x[1] - x[0])
dfdx[N-1,1] = (f[N-1] - f[N-2])/(x[N-1] - x[N-2])
#Derivative at the middle points
for i in range(1,N-1):
h1 = x[i] - x[i-1]
h2 = x[i+1] - x[i]
dfdx[i,1] = h1*f[i+1]/(h2*(h1+h2)) - (h1-h2)*f[i]/(h1*h2) -\
h2*f[i-1]/(h1*(h1+h2))
return dfdx
# Loading the data
data = np.loadtxt('data_points.txt', comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]
N = len(x)
# Calling the derivative function and choosing only the second column
dfdx = Derivative(x,f)[:,1]
plt.figure(figsize=(7,5))
plt.title(f'Cubic Hermite Polynomial Piecewise Interpolation')
plt.scatter(x, f, color='black')
# Piecewise Hermite Interpolation Loop
for i in range(N-1):
xi = x[i:i+2]
fi = f[i:i+2]
dfidx = dfdx[i:i+2]
x_interval = np.linspace(x[i],x[i+1],4)
y_interval = heri.H3(x_interval, xi, fi, dfidx)
plt.plot(x_interval, y_interval,'r')
plt.xlabel(r'$x$')
plt.ylabel(r'$H_3(x)$')
plt.show()
```
| 2f8b64bca86c0f2e8d2cea76f8b56b39a25f7879 | 154,982 | ipynb | Jupyter Notebook | 05._Interpolation/presentation/Interpolation01.ipynb | ashcat2005/ComputationalAstrophysics | edda507d0d0a433dfd674a2451d750cf6ad3f1b7 | ["MIT"] | 2 | 2020-09-23T02:49:10.000Z | 2021-08-21T06:04:39.000Z | 05._Interpolation/presentation/Interpolation01.ipynb | ashcat2005/ComputationalAstrophysics | edda507d0d0a433dfd674a2451d750cf6ad3f1b7 | ["MIT"] | null | null | null | 05._Interpolation/presentation/Interpolation01.ipynb | ashcat2005/ComputationalAstrophysics | edda507d0d0a433dfd674a2451d750cf6ad3f1b7 | ["MIT"] | 2 | 2020-12-05T14:06:28.000Z | 2022-01-25T04:51:58.000Z | 234.111782 | 23,588 | 0.908796 | true | 4,182 | Qwen/Qwen-72B | 1. YES 2. YES | 0.938124 | 0.841826 | 0.789737 | __label__eng_Latn | 0.952581 | 0.673156 |
# Matrices Solutions
```
from sympy import *
init_printing()
```
Use `row_del` and `row_insert` to go from one Matrix to the other.
```
def matrix1(M):
"""
>>> M = Matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> M
[1, 2, 3]
[4, 5, 6]
[7, 8, 9]
>>> matrix1(M)
[4, 5, 6]
[0, 0, 0]
[7, 8, 9]
"""
M.row_del(0)
M = M.row_insert(1, Matrix([[0, 0, 0]]))
return M
```
```
M = Matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
```
```
M
```
```
matrix1(M)
```
## Matrix Constructors
Use the matrix constructors to construct the following matrices. There may be more than one correct answer.
$$\left[\begin{array}{ccc}4 & 0 & 0\\\\
0 & 4 & 0\\\\
0 & 0 & 4\end{array}\right]$$
```
def matrix2():
"""
>>> matrix2()
[4, 0, 0]
[0, 4, 0]
[0, 0, 4]
"""
return eye(3)*4
# OR return diag(4, 4, 4)
```
```
matrix2()
```
$$\left[\begin{array}{}1 & 1 & 1 & 0\\\\1 & 1 & 1 & 0\\\\0 & 0 & 0 & 1\end{array}\right]$$
```
def matrix3():
"""
>>> matrix3()
[1, 1, 1, 0]
[1, 1, 1, 0]
[0, 0, 0, 1]
"""
return diag(ones(2, 3), 1)
# OR diag(ones(2, 3), ones(1, 1))
```
```
matrix3()
```
$$\left[\begin{array}{}-1 & -1 & -1 & 0 & 0 & 0\\\\-1 & -1 & -1 & 0 & 0 & 0\\\\-1 & -1 & -1 & 0 & 0 & 0\\\\0 & 0 & 0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0 & 0 & 0\end{array}\right]$$
```
def matrix4():
"""
>>> matrix4()
[-1, -1, -1, 0, 0, 0]
[-1, -1, -1, 0, 0, 0]
[-1, -1, -1, 0, 0, 0]
[ 0, 0, 0, 0, 0, 0]
[ 0, 0, 0, 0, 0, 0]
[ 0, 0, 0, 0, 0, 0]
"""
return diag(-ones(3, 3), zeros(3, 3))
# OR diag(-ones(3, 3), 0, 0, 0)
```
```
matrix4()
```
## Advanced Methods
Recall that if $f$ is an analytic function, then we can define $f(M)$ for any square matrix $M$ by "plugging" $M$ into the power series formula for $f(x)$. In other words, if $$f(x) = \sum_{n=0}^\infty a_n x^n,$$ then we define $f(M)$ by $$f(M) = \sum_{n=0}^\infty a_n M^n,$$ where $M^0$ is $I$, the identity matrix.
Furthermore, if $M$ is a diagonalizable matrix, that is, $M=PDP^{-1}$, where $D$ is diagonal, then $M^n = PD^nP^{-1}$ (because $M^n = \left(PDP^{-1}\right)\left(PDP^{-1}\right)\cdots\left(PDP^{-1}\right)=PD\left(P^{-1}P\right)D\left(P^{-1}P\right)\cdots DP^{-1} = PD^nP^{-1}$).
But if
$$ D = \begin{bmatrix}
d_1 & 0 & \cdots & 0 \\\\
0 & d_2 & \cdots & 0 \\\\
\vdots & \vdots & \ddots & \vdots \\\\
0 & 0 & \cdots & d_n
\end{bmatrix}
$$
is a diagonal matrix, then
$$ D^n = \begin{bmatrix}
d_1^n & 0 & \cdots & 0 \\\\
0 & d_2^n & \cdots & 0 \\\\
\vdots & \vdots & \ddots & \vdots \\\\
0 & 0 & \cdots & d_n^n
\end{bmatrix}
$$
so that
$$
\sum_{n=0}^\infty a_n M^n = \sum_{n=0}^\infty a_n PD^nP^{-1} = P\cdot\begin{bmatrix}
\sum_{n=0}^\infty a_n d_1^n & 0 & \cdots & 0 \\\\
0 & \sum_{n=0}^\infty a_n d_2^n & \cdots & 0 \\\\
\vdots & \vdots & \ddots & \vdots \\\\
0 & 0 & \cdots & \sum_{n=0}^\infty a_n d_n^n
\end{bmatrix}\cdot P^{-1} = P\cdot\begin{bmatrix}
f(d_1) & 0 & \cdots & 0 \\\\
0 & f(d_2) & \cdots & 0 \\\\
\vdots & \vdots & \ddots & \vdots \\\\
0 & 0 & \cdots & f(d_n)
\end{bmatrix}\cdot P^{-1}
$$
Let's create some square matrices, which we will use throughout the exercises.
```
x = symbols('x')
A = Matrix([[1, 1], [1, 0]])
M = Matrix([[3, 10, -30], [0, 3, 0], [0, 2, -3]])
N = Matrix([[-1, -2, 0, 2], [-1, -1, 2, 1], [0, 0, 2, 0], [-1, -2, 2, 2]])
```
First, verify that these matrices are indeed diagonalizable.
```
print(A.is_diagonalizable())
print(M.is_diagonalizable())
print(N.is_diagonalizable())
```
True
True
True
Now, we want to write a function that computes $f(M)$, for diagonalizable matrix $M$ and analytic function $f$.
However, there is one complication. We can use `diagonalize` to get `P` and `D`, but we need to apply the function to the diagonal of `D`. We might think that we could use `eigenvals` to get the eigenvalues of the matrix, since the diagonal values of `D` are just the eigenvalues of `M`, but the issue is that they could be in any order in `D`.
Instead, we can use matrix slicing to get the diagonal values (or indeed, any value) of a matrix. There is not enough time in this tutorial (or room in this document) to discuss the full details of matrix slicing. For now, we just note that `M[i, j]` returns the element at position `i, j` (which is the `i + 1, j + 1`th element of the matrix, due to Python's 0-indexing). For example
```
M
```
```
M[0, 1]
```
That should be enough information to write the following function.
```
def matrix_func(M, func):
"""
    Computes func(M). Assumes that M is square and diagonalizable.
>>> matrix_func(M, exp)
[exp(3), -5*exp(-3)/3 + 5*exp(3)/3, -5*exp(3) + 5*exp(-3)]
[ 0, exp(3), 0]
[ 0, -exp(-3)/3 + exp(3)/3, exp(-3)]
Note that for the function exp, we can also just use M.exp()
>>> matrix_func(M, exp) == M.exp()
True
But for other functions, we have to do it this way.
>>> M.sin()
Traceback (most recent call last):
...
AttributeError: Matrix has no attribute sin.
>>> matrix_func(N, sin)
[-sin(1), -2*sin(1), 0, 2*sin(1)]
[-sin(1), -sin(1), sin(2), sin(1)]
[ 0, 0, sin(2), 0]
[-sin(1), -2*sin(1), sin(2), 2*sin(1)]
Note that we could also use this to compute the series expansion of a matrix,
if we know the closed form of that expansion. For example, suppose we wanted to compute
I + M + M**2 + M**3 + …
The series
1 + x + x**2 + x**3 + …
is equal to the function 1/(1 - x).
>>> matrix_func(M, Lambda(x, 1/(1 - x))) # Note, Lambda works just like lambda, but is symbolic
[-1/2, -5/4, 15/4]
[ 0, -1/2, 0]
[ 0, -1/4, 1/4]
"""
P, D = M.diagonalize()
diags = [func(D[i, i]) for i in range(M.shape[0])]
return P*diag(*diags)*P**-1
```
```
matrix_func(M, exp)
```
```
matrix_func(M, Lambda(x, 1/(1 - x)))
```
Now let's investigate how this works in relation to the series expansion definition. Write a function that uses `matrix_func` and `series` to compute the approximation of a matrix evaluated at a function up to $O(M^n)$.
```
def matrix_func_series(M, func, n):
"""
Computes the approximation of the func(M) using the series definition up to O(M**n).
>>> matrix_func_series(M, exp, 10)
[22471/1120, 14953/448, -44859/448]
[ 0, 22471/1120, 0]
[ 0, 14953/2240, 83/2240]
>>> matrix_func_series(M, exp, 10).evalf()
[20.0633928571429, 33.3772321428571, -100.131696428571]
[ 0, 20.0633928571429, 0]
[ 0, 6.67544642857143, 0.0370535714285714]
>>> matrix_func(M, exp).evalf()
[20.0855369231877, 33.3929164246997, -100.178749274099]
[ 0, 20.0855369231877, 0]
[ 0, 6.67858328493993, 0.0497870683678639]
It's pretty close. Basically what we might expect for those values up to O(x**10).
>>> matrix_func_series(N, sin, 3)
[-1, -2, 0, 2]
[-1, -1, 2, 1]
[ 0, 0, 2, 0]
[-1, -2, 2, 2]
>>> matrix_func(N, sin).evalf()
[-0.841470984807897, -1.68294196961579, 0, 1.68294196961579]
[-0.841470984807897, -0.841470984807897, 0.909297426825682, 0.841470984807897]
[ 0, 0, 0.909297426825682, 0]
[-0.841470984807897, -1.68294196961579, 0.909297426825682, 1.68294196961579]
It's not as close, because we used O(x**3), but clearly still the same thing.
>>> matrix_func_series(M, Lambda(x, 1/(1 - x)), 10)
[29524, 73810, -221430]
[ 0, 29524, 0]
[ 0, 14762, -14762]
>>> matrix_func(M, Lambda(x, 1/(1 - x)))
[-1/2, -5/4, 15/4]
[ 0, -1/2, 0]
[ 0, -1/4, 1/4]
Woah! That one's not close at all. What is happening here? Let's try more terms
>>> matrix_func_series(M, Lambda(x, 1/(1 - x)), 100)
[257688760366005665518230564882810636351053761000, 644221900915014163795576412207026590877634402500, -1932665702745042491386729236621079772632903207500]
[ 0, 257688760366005665518230564882810636351053761000, 0]
[ 0, 128844380183002832759115282441405318175526880500, -128844380183002832759115282441405318175526880500]
It just keeps getting bigger. In fact, the series diverges. Recall that
1/(1 - x) = 1 + x + x**2 + x**3 + … *only if* |x| < 1. But the eigenvalues
of M are bigger than 1 in absolute value.
>>> M.eigenvals()
{3: 2, -3: 1}
In fact, 1/(1 - M) is mathematically defined via the analytic continuation
of the series expansion 1 + x + x**2 + …, which is just 1/(1 - x). This is
well-defined as long as none of the eigenvalues of M are equal to 1. Let's
try it on N.
>>> matrix_func(N, Lambda(x, 1/(1 - x)))
[nan, -oo, nan, oo]
[nan, nan, nan, nan]
[nan, nan, nan, nan]
[nan, -oo, nan, oo]
That didn't work. What are the eigenvalues of N?
>>> N.eigenvals()
{1: 1, 2: 1, -1: 1, 0: 1}
Ah, the first one is 1, so we cannot define 1/(1 - N).
"""
x = Dummy('x') # This works even if func already contains Symbol('x')
series_func = Lambda(x, func(x).series(x, 0, n).removeO())
return matrix_func(M, series_func)
```
```
matrix_func_series(M, exp, 10)
```
```
matrix_func_series(M, exp, 10).evalf()
```
```
matrix_func(M, exp).evalf()
```
```
matrix_func_series(N, sin, 3)
```
```
matrix_func_series(M, Lambda(x, 1/(1 - x)), 100)
```
```
M.eigenvals()
```
```
matrix_func(N, Lambda(x, 1/(1 - x)))
```
```
N.eigenvals()
```
```
```
| 144fbd18f86ca153d1425446a07ebd01ff7cea1d | 70,162 | ipynb | Jupyter Notebook | tutorial_exercises/Advanced-Matrices Solutions.ipynb | gvvynplaine/scipy-2016-tutorial | aa417427a1de2dcab2a9640b631b809d525d7929 | ["BSD-3-Clause"] | 53 | 2016-06-21T21:11:02.000Z | 2021-02-04T07:51:03.000Z | tutorial_exercises/Advanced-Matrices Solutions.ipynb | gvvynplaine/scipy-2016-tutorial | aa417427a1de2dcab2a9640b631b809d525d7929 | ["BSD-3-Clause"] | 11 | 2016-07-02T20:24:06.000Z | 2016-07-11T11:31:44.000Z | tutorial_exercises/Advanced-Matrices Solutions.ipynb | gvvynplaine/scipy-2016-tutorial | aa417427a1de2dcab2a9640b631b809d525d7929 | ["BSD-3-Clause"] | 36 | 2016-06-25T09:04:24.000Z | 2021-08-09T06:46:01.000Z | 71.887295 | 11,276 | 0.743608 | true | 3,778 | Qwen/Qwen-72B | 1. YES 2. YES | 0.849971 | 0.880797 | 0.748652 | __label__eng_Latn | 0.826184 | 0.577702 |
# Analytic approx. for filters
The aim here is to derive analytic formulae for the filtered quantities, given $W(kR)$ models and (very) simple $P(k)$. These will be useful for basic testing (against a known analytic solution), but also, if $P(k)$ can be set close enough to reasonable models, for checking appropriate resolution/limits for integration.
Our main targets will be the mass variance:
$$ \sigma^2_n(r) = \frac{1}{2\pi^2} \int_0^\infty dk\ k^{2(1+n)} P(k) W^2(kR), $$
and the log derivative:
$$ \frac{d\ln \sigma^2}{d\ln R} = \frac{1}{\pi^2\sigma^2} \int_0^\infty W(kR) \frac{dW(kR)}{d\ln(kR)} P(k)k^2 dk. $$
Typically we'll use a power-law for the power spectrum,
$$ P(k) = k^p. $$
```python
from sympy import *
init_session()
p = symbols("p")
k, x, R, P = symbols('k x R P',positive=True)
```
IPython console for SymPy 1.0 (Python 2.7.12-64-bit) (ground types: python)
These commands were executed:
>>> from __future__ import division
>>> from sympy import *
>>> x, y, z, t = symbols('x y z t')
>>> k, m, n = symbols('k m n', integer=True)
>>> f, g, h = symbols('f g h', cls=Function)
>>> init_printing()
Documentation can be found at http://docs.sympy.org/1.0/
```python
def sigma(W, n,p,kmin=0,kmax=1):
P = k**p
#return k**(2*(1+n)) * P * W**2/(2*pi**2)
integ = k**(2*(1+n)) * P * W**2/(2*pi**2)
integ = integ.subs(x,k*R)
res = integrate(integ,(k,kmin,kmax))
print res
return res
def dw_dlnkr(W):
return x*diff(W,x)
def dlnss_dlnr(W,p,kmin=0,kmax=1):
P = k**p
dwdlnx = dw_dlnkr(W)
integ = (W * dwdlnx * P * k**2).subs(x,k*R)
s = sigma(W,0,p,kmin,kmax)
res = integrate(integ,(k,kmin,kmax))/(pi**2*s)
print res
return res
```
## TopHat
In this case, we have
$$ W(kR) = 3\frac{\sin x - x\cos x}{x^3}. $$
```python
W = 3*(sin(x) - x*cos(x))/x**3
```
```python
sigma(W,0,2,0,1)
```
```python
sigma(W,1,2)
```
## SharpK
In this case, we have
$$ W(kR) = \begin{cases} 1 & kR \geq 1 \\ 0 & kR < 1 \end{cases}. $$
This renders the solution very simple:
$$ \sigma^2(R) = \frac{1}{2\pi^2} \int_0^{1/R} k^{2(1+n)} k^p \, dk = \frac{1}{2\pi^2}\frac{1}{tR^t}, $$
where $t = 2(1+n) + p + 1$.
And
$$ \frac{d\ln \sigma^2}{d\ln r} = \frac{-1}{2\pi^2 \sigma^2 R^{3+p}}. $$
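Since the sharp-$k$ case has a closed form, here is a short added sketch confirming the $\sigma^2$ result directly with SymPy (using the `k` and `R` symbols defined in the first cell; depending on the SymPy version an extra power simplification may be needed for the difference to collapse to zero):
```python
nn, pp = symbols('nn pp', positive=True)
tt = 2*(1 + nn) + pp + 1
sharpk_sigma2 = integrate(k**(2*(1 + nn) + pp) / (2*pi**2), (k, 0, 1/R))
simplify(sharpk_sigma2 - 1/(2*pi**2*tt*R**tt))  # expect 0
```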
## Gaussian
In this case we have
$$ W(x=kR) = \exp(-x^2/2). $$
```python
W = exp(-x**2/2)
```
```python
sigma(W,0,-y,0,oo)
```
```python
sigma(W,1,2,0,oo)
```
```python
dlnss_dlnr(W,2,0,oo)
```
| 0459f65557f92b9374d417fde77fa9aa6864e317 | 20,420 | ipynb | Jupyter Notebook | development/analytic_filter.ipynb | liuxx479/hmf-1 | 8b24f5df42cdf73d507ffc4a7c6138573769bb2c | ["MIT"] | 45 | 2015-01-06T06:13:54.000Z | 2021-01-08T04:31:19.000Z | development/analytic_filter.ipynb | liuxx479/hmf-1 | 8b24f5df42cdf73d507ffc4a7c6138573769bb2c | ["MIT"] | 113 | 2015-03-12T13:31:41.000Z | 2021-01-21T22:28:14.000Z | development/analytic_filter.ipynb | liuxx479/hmf-1 | 8b24f5df42cdf73d507ffc4a7c6138573769bb2c | ["MIT"] | 28 | 2015-03-14T05:56:51.000Z | 2020-12-14T20:16:15.000Z | 50.544554 | 5,044 | 0.685553 | true | 966 | Qwen/Qwen-72B | 1. YES 2. YES | 0.932453 | 0.865224 | 0.806781 | __label__eng_Latn | 0.658195 | 0.712755 |
# Example #1: Neural Network for $y = \sin(x)$
Same example as yesterday, a sine-curve with 10 points as training values:
```
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(0,6.6, 0.6)
y = np.sin(x)
xplot = np.arange(0, 6.6, 0.01)
yplot = np.sin(xplot)
plt.scatter(x,y, color="b", label="Training")
plt.plot(xplot, yplot, color="g", label="sin(x)")
plt.legend()
plt.show()
```
## Defining the architecture of our neural network:
Fully connected with 1 input node, 1 hidden layer, 1 output node.
Layer connections:
\begin{equation}
y = b+\sum_i x_i w_i
\end{equation}
**Question:** "How many weights are there in the above example?"
### Defining the Activation function (sigmoid):
\begin{equation}
\sigma\left(x\right) = \frac{1}{1 + \exp\left(-x\right)}
\end{equation}
Popular because the derivative of the sigmoid function is simple:
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}x}\sigma\left(x\right) = \sigma\left(x\right)\left(1 - \sigma\left(x\right)\right)
\end{equation}
```
def activation(val):
sigmoid = 1.0 / (1.0 + np.exp(-val))
return sigmoid
```
### Defining the architecture (i.e. the layers):
* `input_value` - Input value
* `w_ih` - Weights that connect input layer with hidden layer
* `w_io` - Weights that connect hidden layer with output layer
```
def model(input_value, w_ih, w_ho):
hidden_layer = activation(input_value * w_ih)
output_value = np.sum(hidden_layer*w_ho)
return output_value
```
Let's start by testing the neural network with random weights:
```
np.random.seed(1000)
random_weights_ih = np.random.random(10)
random_weights_ho = np.random.random(10)
print(random_weights_ih)
print(random_weights_ho)
print()
val = 2.0
sinx_predicted = model(val, random_weights_ih, random_weights_ho)
print("Predicted:", sinx_predicted)
print("True: ", np.sin(2.0))
```
Setting our Model parameters:
```
# The number of nodes in the hidden layer
HIDDEN_LAYER_SIZE = 40
# L2-norm regularization
L2REG = 0.01
```
## Optimizing the weights:
We want to find the best set of weights $\mathbf{w}$ that minimizes some loss function. For example we can minimize the squared error (like we did in least squares fitting):
\begin{equation}
L\left(\mathbf{w}\right) = \sum_i \left(y_i^\mathrm{true} - y_i^\mathrm{predicted}(\mathbf{w}) \right)^{2}
\end{equation}
Or with L2-regularization:
\begin{equation}
L\left(\mathbf{w}\right) = \sum_i \left(y_i^\mathrm{true} - y_i^\mathrm{predicted}(\mathbf{w}) \right)^{2} + \lambda\sum_j w_j^{2}
\end{equation}
Just like in the numerics lectures and exercises, we can use a function from SciPy to do this minimization: `scipy.optimize.minimize()`.
```
def loss_function(parameters):
w_ih = parameters[:HIDDEN_LAYER_SIZE]
w_ho = parameters[HIDDEN_LAYER_SIZE:]
squared_error = 0.0
for i in range(len(x)):
# Predict y for x[i]
y_predicted = model(x[i], w_ih, w_ho)
        # Without regularization
        squared_error = squared_error + (y[i] - y_predicted)**2
        # With L2 regularization (cf. the formula above), one could instead use:
        # squared_error += (y[i] - y_predicted)**2 + L2REG * np.linalg.norm(parameters)**2
return squared_error
```
## Running the minimization with `scipy.optimize.minimize()`:
Documentation: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html
Since we haven't implemented the gradient of the neural network, we can't use optimizers that require the gradient. One algorithm we can use is the Nelder-Mead optimizer.
```
from scipy.optimize import minimize
# Define random initial weights
np.random.seed(666)
p = np.random.random(size=2*HIDDEN_LAYER_SIZE)
# Minimize the loss function with parameters p
result = minimize(loss_function, p, method="Nelder-Mead",
options={"maxiter": 100000, "disp": True})
wfinal_in = result.x[:HIDDEN_LAYER_SIZE]
wfinal_hl = result.x[HIDDEN_LAYER_SIZE:]
print(wfinal_in)
print(wfinal_hl)
```
```
# Print sin(2.5) and model(2.5)
val = 2.5
sinx_predicted = model(val, wfinal_in, wfinal_hl)
print("Predicted:", sinx_predicted)
print("True: ", np.sin(val))
```
Let's make a plot with pyplot!
```
xplot = np.arange(0,6.6, 0.01)
yplot = np.sin(xplot)
ypred = np.array([model(val, wfinal_in, wfinal_hl) for val in xplot])
import matplotlib.pyplot as plt
plt.plot(xplot,yplot, color="g", label="sin(x)")
plt.scatter(x, y, color="b", label="Training")
plt.plot(xplot, ypred, color="r", label="Predicted")
plt.ylim([-2,2])
plt.show()
```
## What to do about "crazy" behaviour?
* Regularization (see the sketch below)
* Adjust hyperparameters (hidden layer size)
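As a rough sketch of the first point (this block is an addition, not part of the original example), here is the same loss function with the L2 penalty from the formula above switched on; it can be passed to `minimize` exactly like `loss_function`:
```
def loss_function_l2(parameters):
    w_ih = parameters[:HIDDEN_LAYER_SIZE]
    w_ho = parameters[HIDDEN_LAYER_SIZE:]

    squared_error = 0.0
    for i in range(len(x)):
        y_predicted = model(x[i], w_ih, w_ho)
        squared_error += (y[i] - y_predicted)**2

    # L2 penalty: lambda * sum_j w_j**2, added once to the total loss
    return squared_error + L2REG * np.sum(parameters**2)
```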
| b276771ba95a51c67f9e41b7538da84b8eddf1b0 | 10,841 | ipynb | Jupyter Notebook | machine_learning_example_sinx.ipynb | andersx/python-intro | 8409c89da7dd9cea21e3702a0f0f47aae816eb58 | ["CC0-1.0"] | 11 | 2020-05-03T11:59:01.000Z | 2021-11-15T12:33:39.000Z | machine_learning_example_sinx.ipynb | andersx/python-intro | 8409c89da7dd9cea21e3702a0f0f47aae816eb58 | ["CC0-1.0"] | null | null | null | machine_learning_example_sinx.ipynb | andersx/python-intro | 8409c89da7dd9cea21e3702a0f0f47aae816eb58 | ["CC0-1.0"] | 7 | 2020-05-10T21:15:15.000Z | 2021-12-05T15:13:54.000Z | 27.726343 | 187 | 0.463241 | true | 1,328 | Qwen/Qwen-72B | 1. YES 2. YES | 0.945801 | 0.882428 | 0.834601 | __label__eng_Latn | 0.77758 | 0.777392 |
(Other_Activation_Functions)=
# Chapter 16 -- Other Activation Functions
The other solution to the vanishing gradient problem is to use different activation functions. We like the old sigmoid activation function $\sigma(h)$ because, first, it returns $0.5$ when $h=0$ (i.e. $\sigma(0)=0.5$) and, second, it gives a higher probability when the input value is positive and vice versa. This makes it a natural activation function for predicting a probability. However, the vanishing gradient is a major problem we cannot ignore. Fortunately, in a DNN (Deep Neural Network) we can use the sigmoid function only for the output layer, and use other activation functions for the hidden layers. Here are some alternatives for the activation function that alleviate the vanishing gradient problem.
$$
tanh(x)=\frac{e^h-e^{-h}}{e^h+e^{-h}}
$$ (eq16_1)
where $h=w*x+b$.
```python
# make the figure be plotted at the centre
from IPython.core.display import HTML
HTML("""
<style>
.output_png {
display: table-cell;
text-align: center;
vertical-align: middle;
}
</style>
""")
```
<style>
.output_png {
display: table-cell;
text-align: center;
vertical-align: middle;
}
</style>
```python
import numpy as np
import matplotlib.pyplot as plt
N = 100
def main():
h = np.linspace(-5, 5, N)
tanh = (np.exp(h)-np.exp(-h))/(np.exp(h)+np.exp(-h))
plt.figure()
plt.plot(h, tanh)
plt.xlabel('$h$')
plt.ylabel('$tanh(h)$')
plt.title('Figure 1.4 Tanh function')
plt.show()
if __name__ == '__main__':
main()
```
$tanh(x)$ is centred at $0$, which means its output is negative when the input value is negative and vice versa. The possibility of producing negative values allows the weights to update better than with the sigmoid. For the sigmoid function, which only produces positive values, all weights into the same neuron must either increase together or decrease together. That's a problem, since some of the weights may need to increase while others need to decrease. That can only happen if some of the input activations have different signs. This suggests replacing the sigmoid by an activation function, such as tanh, which allows both positive and negative activations. Of course, tanh has a slightly steeper gradient than the sigmoid, but it still faces the vanishing gradient problem.
\begin{equation}
ReLU(h)=max(0,h)
\end{equation}
```python
#ReLu function
def relu(X):
return np.maximum(0,X)
N = 100
def main():
h = np.linspace(-5, 5, N)
Relu = relu(h)
plt.figure()
plt.plot(h, Relu)
plt.xlabel('$h$')
plt.ylabel('$Relu(h)$')
plt.title('Figure 1.4 Relu function')
plt.show()
if __name__ == '__main__':
main()
```
This activation function is widely used in CNNs (Convolutional Neural Networks) because of two characteristics: it is easy to compute and its gradient does not vanish for positive inputs. Nevertheless, its biggest problem is that there is no derivative at the point $x=0$. We can avoid this in practice by keeping our learning rate low. On the other hand, when the weighted input to a rectified linear unit is negative, the gradient vanishes, and so the neuron stops learning entirely. Some recent work on image recognition has found considerable benefit in using rectified linear units through much of the network. However, as with tanh neurons, we do not yet have a really deep understanding of when, exactly, rectified linear units are preferable, nor why.
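To make the point about the missing derivative concrete, here is a small added sketch of the gradient that is typically used in practice, with the common convention of assigning the value 0 at $x=0$:
```python
import numpy as np

def relu_grad(X):
    # Derivative of ReLU: 1 for x > 0, 0 for x < 0;
    # at x == 0 the derivative is undefined, so we pick 0 by convention.
    return (np.asarray(X) > 0).astype(float)

print(relu_grad([-2.0, 0.0, 3.0]))  # [0. 0. 1.]
```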
The following Figure 16.1 shows an MLP/DNN model with modified activation functions. In this model, the activation function is changed from sigmoid to ReLU for all hidden layers, while the output layer keeps the sigmoid (in order to predict the probability).
Figure 16.1
| 7d8e36ce108625ca1f3b777d1aea4ac18a6a3aa5 | 32,946 | ipynb | Jupyter Notebook | notebooks/e_extra/pytorch_image_filtering_ml/Chapter 16 -- Other Activation Functions.ipynb | primer-computational-mathematics/book | 305941b4f1fc4f15d472fd11f2c6e90741fb8b64 | ["MIT"] | 3 | 2020-08-02T07:32:14.000Z | 2021-11-16T16:40:43.000Z | notebooks/e_extra/pytorch_image_filtering_ml/Chapter 16 -- Other Activation Functions.ipynb | primer-computational-mathematics/book | 305941b4f1fc4f15d472fd11f2c6e90741fb8b64 | ["MIT"] | 5 | 2020-07-27T10:45:26.000Z | 2020-08-12T15:09:14.000Z | notebooks/e_extra/pytorch_image_filtering_ml/Chapter 16 -- Other Activation Functions.ipynb | primer-computational-mathematics/book | 305941b4f1fc4f15d472fd11f2c6e90741fb8b64 | ["MIT"] | 4 | 2020-08-05T13:57:32.000Z | 2022-02-02T19:03:57.000Z | 151.12844 | 14,764 | 0.891884 | true | 892 | Qwen/Qwen-72B | 1. YES 2. YES | 0.91611 | 0.882428 | 0.808401 | __label__eng_Latn | 0.998262 | 0.716518 |
```python
%%time
import time
for _ in range(1000):
time.sleep(0.01)# sleep for 0.01 seconds
from sympy import *
from sympy import init_printing; init_printing(use_latex = 'mathjax')
from sympy.plotting import plot
n = int(input('How many energy values do you want to approximate? '))
l, m, hbar, k = symbols('l m hbar k', real = True, constant = True)
var('x,W')
H = ones(n,n)
S = ones(n,n)
U = ones(n,n)
CC = ones(n,n)
#F = [sympify(input('Enter function {0}: '.format(i+1))) for i in range(n)]
F = [x*(l - x),(x**2)*((l - x)**2),x*(l - x)*((l/2)-x),(x**2)*((l - x)**2)*((l/2)-x)]
fi = zeros(n)
c = ones(n,n)
for i in range(n):
for j in range(n):
c[i,j] = sympify('c%d%d' %(j+1,i+1))
fi[j] = sympify('phi%d' %(j+1))
for j in range(1,n+1): # loop to fill the H matrix
for i in range(1,n+1):
I = ((-hbar**2)/(2*m))
integrando = I*(F[j-1])*diff(F[i-1], x, 2)
A = integrate(integrando, (x, 0, l))
integrandos = (F[j-1])*(F[i-1])
B = integrate(integrandos, (x, 0, l))
H[j-1,i-1] *= A
S[j-1,i-1] *= B
U[j-1,i-1] *= (H[j-1,i-1] -W*S[j-1,i-1])
E = U.det()
EE = solve(E,W)
a = 1/EE[0]
# trick to sort the W values
for i in range(n):
EE[i] = EE[i]*a
EE.sort()
for j in range(n):
EE[j] = EE[j]*(1/a)
cc = Matrix(c)
for j in range(n):
for i in range(n):
C = U*cc.col(j)
CC[i,j] *= C[i].subs(W, EE[j])
G = []
for i in range(n):
D = solve(CC.col(i),cc)
G.append(list(D.items()))
G = Matrix(G)
J = []
for i in range(len(G)):
if G[i][1] != 0:
J.append(factor(G[i]))
ceros = []
param = []
for i in range(len(G)):
if G[i][1] != 0:
param.append(G[i][0])
elif G[i][1] == 0:
ceros.append(G[i][0])
kas = [x for x in cc if x not in (ceros+param)]
finale = ones(n,n)
for j in range(n):
for i in range(n):
if sympify('c'+str(i+1)+str(j+1)) not in (ceros+param):
finale[i,j] *= k
elif sympify('c'+str(i+1)+str(j+1)) not in (kas+ceros):
finale[i,j] *= J[i][1].subs(sympify('c'+str(i+1)+str(j+1+1)),k)
else:
finale[i,j] *= 0
Psi = factor(finale*Matrix(F))
integrand = []
Psis = []
for i in range(n):
integrand.append(Psi[i]**2)
Psis.append(integrate(integrand[i], (x, 0, l)))
normaliz = []
for i in range(n):
normaliz.append(factor(Psis[i])*(1/k**2)-(1/k**2))
KKK = []
Figaro = []
for i in range(n):
KKK.append(solve(normaliz[i],k**2))
Figaro.append(Psi[i]**2)
Figaro[i] = Figaro[i].subs(k**2,KKK[i][0])
for i in range(n):
plot(Figaro[i].subs(l, 1), (x, 0,1))
```
| 05de0a76f7f40cb00eec222ba1f437609e07c451 | 89,164 | ipynb | Jupyter Notebook | Huckel_M0/Variational+Theory+beta.ipynb | lazarusA/Density-functional-theory | c74fd44a66f857de570dc50471b24391e3fa901f | ["MIT"] | null | null | null | Huckel_M0/Variational+Theory+beta.ipynb | lazarusA/Density-functional-theory | c74fd44a66f857de570dc50471b24391e3fa901f | ["MIT"] | null | null | null | Huckel_M0/Variational+Theory+beta.ipynb | lazarusA/Density-functional-theory | c74fd44a66f857de570dc50471b24391e3fa901f | ["MIT"] | null | null | null | 443.60199 | 22,450 | 0.923635 | true | 982 | Qwen/Qwen-72B | 1. YES 2. YES | 0.884039 | 0.581303 | 0.513895 | __label__eng_Latn | 0.190077 | 0.032279 |
```python
from sympy import *
x, y, z, t = symbols('x y z t')
```
## Mechanics
The module called [`sympy.physics.mechanics`](http://pyvideo.org/video/2653/dynamics-and-control-with-python)
contains elaborate tools for describing mechanical systems,
manipulating reference frames, forces, and torques.
These specialized functions are not necessary for a first-year mechanics course.
The basic `SymPy` functions like `solve`,
and the vector operations you learned in the previous sections are powerful enough for basic Newtonian mechanics.
### Dynamics
The net force acting on an object is the sum of all the external forces acting on it $\vec{F}_{\textrm{net}} = \sum \vec{F}$.
Since forces are vectors,
we need to use vector addition to compute the net force.
Compute
$\vec{F}_{\textrm{net}}=\vec{F}_1 + \vec{F}_2$,
where $\vec{F}_1=4\hat{\imath}[\mathrm{N}]$ and $\vec{F}_2 = 5\angle 30^\circ[\mathrm{N}]$:
```python
F_1 = Matrix( [4,0] )
F_2 = Matrix( [5*cos(30*pi/180), 5*sin(30*pi/180) ] )
F_net = F_1 + F_2
F_net # in Newtons
```
```python
F_net.evalf() # in Newtons
```
To express the answer in length-and-direction notation,
use `norm` to find the length of $\vec{F}_{\textrm{net}}$
and `atan2` (The function `atan2(y,x)` computes the correct direction
for all vectors $(x,y)$, unlike `atan(y/x)` which requires corrections for angles in the range $[\frac{\pi}{2}, \frac{3\pi}{2}]$.) to find its direction:
```python
F_net.norm().evalf() # |F_net| in [N]
```
```python
(atan2( F_net[1],F_net[0] )*180/pi).n() # angle in degrees
```
The net force on the object is $\vec{F}_{\textrm{net}}= 8.697\angle 16.7^\circ$[N].
### Kinematics
Let $x(t)$ denote the position of an object,
$v(t)$ denote its velocity,
and $a(t)$ denote its acceleration.
Together $x(t)$, $v(t)$, and $a(t)$ are known as the *equations of motion* of the object.
The equations of motion are related by the derivative operation:
$$
a(t) \overset{\frac{d}{dt} }{\longleftarrow} v(t) \overset{\frac{d}{dt} }{\longleftarrow} x(t).
$$
Assume we know the initial position $x_i\equiv x(0)$ and the initial velocity $v_i\equiv v(0)$ of the object
and we want to find $x(t)$ for all later times.
We can do this starting from the dynamics of the problem—the forces acting on the object.
Newton's second law $\vec{F}_{\textrm{net}} = m\vec{a}$ states that a net force $\vec{F}_{\textrm{net}}$
applied on an object of mass $m$ produces acceleration $\vec{a}$.
Thus, we can obtain an objects acceleration if we know the net force acting on it.
Starting from the knowledge of $a(t)$, we can obtain $v(t)$ by integrating
then find $x(t)$ by integrating $v(t)$:
$$
a(t) \ \ \ \overset{v_i+ \int\!dt }{\longrightarrow} \ \ \ v(t) \ \ \ \overset{x_i+ \int\!dt }{\longrightarrow} \ \ \ x(t).
$$
The reasoning follows from the fundamental theorem of calculus:
if $a(t)$ represents the change in $v(t)$,
then the total of $a(t)$ accumulated between $t=t_1$ and $t=t_2$
is equal to the total change in $v(t)$ between these times: $\Delta v = v(t_2) - v(t_1)$.
Similarly, the integral of $v(t)$ from $t=0$ until $t=\tau$ is equal to $x(\tau) - x(0)$.
### Uniform acceleration motion (UAM)
Let's analyze the case where the net force on the object is constant.
A constant force causes a constant acceleration $a = \frac{F}{m} = \textrm{constant}$.
If the acceleration function is constant over time $a(t)=a$.
We find $v(t)$ and $x(t)$ as follows:
```python
t, a, v_i, x_i = symbols('t a v_i x_i')
v = v_i + integrate(a, (t, 0,t) )
v
```
```python
x = x_i + integrate(v, (t, 0,t) )
x
```
You may remember these equations from your high school physics class.
They are the *uniform accelerated motion* (UAM) equations:
\begin{align*}
a(t) &= a, \\
v(t) &= v_i + at, \\[-2mm]
x(t) &= x_i + v_it + \frac{1}{2}at^2.
\end{align*}
In high school, you probably had to memorize these equations.
Now you know how to derive them yourself starting from first principles.
For the sake of completeness, we'll now derive the fourth UAM equation,
which relates the object's final velocity to the initial velocity,
the displacement, and the acceleration, without reference to time:
```python
(v*v).expand()
```
```python
((v*v).expand() - 2*a*x).simplify()
```
The above calculation shows $v_f^2 - 2ax_f = -2ax_i + v_i^2$.
After moving the term $2ax_f$ to the other side of the equation, we obtain
\begin{align*}
(v(t))^2 \ = \ v_f^2 = v_i^2 + 2a\Delta x \ = \ v_i^2 + 2a(x_f-x_i).
\end{align*}
The fourth equation is important for practical purposes
because it allows us to solve physics problems in a time-less manner.
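As a quick added illustration, here is how the fourth equation can be used in a time-less way (the numbers below are arbitrary and not tied to the example that follows):
```python
v_f, vi, acc, d = symbols('v_f v_i a d', positive=True)
fourth_eq = Eq(v_f**2, vi**2 + 2*acc*d)
solve(fourth_eq.subs({vi: 10, acc: 5, d: 20}), v_f)   # final speed in [m/s]
```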
#### Example
Find the position function of an object at time $t=3[\mathrm{s}]$,
if it starts from $x_i=20[\mathrm{m}]$ with $v_i=10[\mathrm{m/s}]$ and undergoes
a constant acceleration of $a=5[\mathrm{m/s^2}]$.
What is the object's velocity at $t=3[\mathrm{s}]$?
```python
x_i = 20 # initial position
v_i = 10 # initial velocity
a = 5 # acceleration (constant during motion)
x = x_i + integrate( v_i+integrate(a,(t,0,t)), (t,0,t) )
x
```
```python
x.subs({t:3}).n() # x(3) in [m]
```
```python
diff(x,t).subs({t:3}).n() # v(3) in [m/s]
```
If you think about it,
physics knowledge combined with computer skills is like a superpower!
### General equations of motion
The procedure
$a(t) \ \overset{v_i+ \int\!dt }{\longrightarrow} \ v(t) \ \overset{x_i+ \int\!dt }{\longrightarrow} \ x(t)$
can be used to obtain the position function $x(t)$ even when the acceleration is not constant.
Suppose the acceleration of an object is $a(t)=\sqrt{k t}$;
what is its $x(t)$?
```python
t, v_i, x_i, k = symbols('t v_i x_i k')
a = sqrt(k*t)
x = x_i + integrate( v_i+integrate(a,(t,0,t)), (t, 0,t) )
x
```
### Potential energy
Instead of working with the kinematic equations of motion $x(t)$, $v(t)$, and $a(t)$ which depend on time,
we can solve physics problems using *energy* calculations.
A key connection between the world of forces and the world of energy is the concept of *potential energy*.
If you move an object against a conservative force (think raising a ball in the air against the force of gravity),
you can think of the work you do agains the force as being stored in the potential energy of the object.
For each force $\vec{F}(x)$ there is a corresponding potential energy $U_F(x)$.
The change in potential energy associated with the force $\vec{F}(x)$ and displacement $\vec{d}$
is defined as the negative of the work done by the force during the displacement: $U_F(x) = - W = - \int_{\vec{d}} \vec{F}(x)\cdot d\vec{x}$.
The potential energies associated with gravity $\vec{F}_g = -mg\hat{\jmath}$
and the force of a spring $\vec{F}_s = -k\vec{x}$ are calculated as follows:
```python
x, y = symbols('x y')
m, g, k, h = symbols('m g k h')
F_g = -m*g # Force of gravity on mass m
U_g = - integrate( F_g, (y,0,h) )
U_g # Grav. potential energy
```
```python
F_s = -k*x # Spring force for displacement x
U_s = - integrate( F_s, (x,0,x) )
U_s # Spring potential energy
```
Note the negative sign in the formula defining the potential energy.
This negative is canceled by the negative sign of the dot product $\vec{F}\cdot d\vec{x}$:
when the force acts in the direction opposite to the displacement,
the work done by the force is negative.
### Simple harmonic motion
The force exerted by a spring is given by the formula $F=-kx$.
If the only force acting on a mass $m$ is the force of a spring,
we can use Newton's second law to obtain the following equation:
$$
F=ma
\quad \Rightarrow \quad
-kx = ma
\quad \Rightarrow \quad
-kx(t) = m\frac{d^2}{dt^2}\Big[x(t)\Big].
$$
The motion of a mass-spring system is described by the *differential equation* $\frac{d^2}{dt^2}x(t) + \omega^2 x(t)=0$,
where the constant $\omega = \sqrt{\frac{k}{m}}$ is called the angular frequency.
We can find the position function $x(t)$ using the `dsolve` method:
```python
t = Symbol('t') # time t
x = Function('x') # position function x(t)
w = Symbol('w', positive=True) # angular frequency w
sol = dsolve( diff(x(t),t,t) + w**2*x(t), x(t) )
sol
```
```python
x = sol.rhs
x
```
Note the solution $x(t)=C_1\sin(\omega t)+C_2 \cos(\omega t)$ is equivalent to $x(t) = A\cos(\omega t + \phi)$,
which is more commonly used to describe simple harmonic motion.
We can use the `expand` function with the argument `trig=True` to convince ourselves of this equivalence:
```python
A, phi = symbols("A phi")
(A*cos(w*t - phi)).expand(trig=True)
```
If we define $C_1=A\sin(\phi)$ and $C_2=A\cos(\phi)$,
we obtain the form $x(t)=C_1\sin(\omega t)+C_2 \cos(\omega t)$ that `SymPy` found.
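For completeness, here is a short added check that the two parametrizations agree when $A=\sqrt{C_1^2+C_2^2}$ and $\phi=\operatorname{atan2}(C_1, C_2)$ (depending on the SymPy version, an extra expansion step may be needed for the difference to simplify all the way to zero):
```python
C1, C2 = symbols('C1 C2', real=True)
A_val = sqrt(C1**2 + C2**2)
phi_val = atan2(C1, C2)
expr = (A_val*cos(w*t - phi_val)).expand(trig=True) - (C1*sin(w*t) + C2*cos(w*t))
simplify(expr)   # expect 0
```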
### Conservation of energy
We can verify that the total energy of the mass-spring system is conserved by showing
$E_T(t) = U_s(t) + K(t) = \textrm{constant}$:
```python
x = sol.rhs.subs({"C1":0,"C2":A})
x
```
```python
v = diff(x, t)
v
```
```python
E_T = (0.5*k*x**2 + 0.5*m*v**2).simplify()
E_T
```
```python
E_T.subs({k:m*w**2}).simplify() # = K_max
```
```python
E_T.subs({w:sqrt(k/m)}).simplify() # = U_max
```
| 837a66368ba3abcecf839362518797a1c60b708f | 70,039 | ipynb | Jupyter Notebook | notebooks/Mechanics.ipynb | minireference/sympytut_notebooks | 6669e7bfccef9e70ae029ac5cbb54cb6cbc31652 | ["BSD-3-Clause"] | 4 | 2016-08-29T12:04:19.000Z | 2020-02-23T05:14:52.000Z | notebooks/Mechanics.ipynb | minireference/sympytut_notebooks | 6669e7bfccef9e70ae029ac5cbb54cb6cbc31652 | ["BSD-3-Clause"] | null | null | null | notebooks/Mechanics.ipynb | minireference/sympytut_notebooks | 6669e7bfccef9e70ae029ac5cbb54cb6cbc31652 | ["BSD-3-Clause"] | null | null | null | 68.936024 | 4,280 | 0.795428 | true | 2,859 | Qwen/Qwen-72B | 1. YES 2. YES | 0.909907 | 0.839734 | 0.76408 | __label__eng_Latn | 0.987631 | 0.613546 |
```python
import numpy as np
from sympy import *
init_printing(use_latex='mathjax')
```
```python
x = symbols('x')
f = x ** 6 / 6 - 3 * x ** 4 - 2 * x ** 3 / 3 + 27 * x ** 2 / 2 + 18 * x - 30
f
```
$$\frac{x^{6}}{6} - 3 x^{4} - \frac{2 x^{3}}{3} + \frac{27 x^{2}}{2} + 18 x - 30$$
```python
df = diff(f, x)
df
```
$$x^{5} - 12 x^{3} - 2 x^{2} + 27 x + 18$$
```python
- f.evalf(subs={x:1}) / df.evalf(subs={x:1})
```
$$0.0625$$
```python
```
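The cell above computes a single Newton–Raphson increment $-f(x_0)/f'(x_0)$ at $x_0=1$. A small added sketch iterating the update $x_{n+1} = x_n - f(x_n)/f'(x_n)$ until the step is small:
```python
def newton_raphson(expr, dexpr, x0, tol=1e-10, max_iter=100):
    x_n = x0
    for _ in range(max_iter):
        step = - expr.evalf(subs={x: x_n}) / dexpr.evalf(subs={x: x_n})
        x_n += step
        if abs(step) < tol:
            break
    return x_n

newton_raphson(f, df, 1.0)
```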
| 9729b823f8d2194511cbe0e42bfa223528155b3b | 2,342 | ipynb | Jupyter Notebook | Certification 2/Week5.1 - Newton-Raphson method.ipynb | The-Brains/MathForMachineLearning | 5cbd9006f166059efaa2f312b741e64ce584aa1f | ["MIT"] | 6 | 2018-04-16T02:53:59.000Z | 2021-05-16T06:51:57.000Z | Certification 2/Week5.1 - Newton-Raphson method.ipynb | The-Brains/MathForMachineLearning | 5cbd9006f166059efaa2f312b741e64ce584aa1f | ["MIT"] | null | null | null | Certification 2/Week5.1 - Newton-Raphson method.ipynb | The-Brains/MathForMachineLearning | 5cbd9006f166059efaa2f312b741e64ce584aa1f | ["MIT"] | 4 | 2019-05-20T02:06:55.000Z | 2020-05-18T06:21:41.000Z | 19.516667 | 94 | 0.401793 | true | 217 | Qwen/Qwen-72B | 1. YES 2. YES | 0.960361 | 0.882428 | 0.847449 | __label__yue_Hant | 0.561749 | 0.807242 |
```python
# zero divisor
# AB = 0 with A != 0, B != 0
import sympy as sm
M1 = sm.Matrix([[1,1],[2,2]])
M2 = sm.Matrix([[1,1],[-1,-1]])
M1*M2
```
$\displaystyle \left[\begin{matrix}0 & 0\\0 & 0\end{matrix}\right]$
### Fix the row: row vector; fix the column: column vector
> ### $
\left [
\begin{array}{}
a_{11} & a_{12} & a_{13} & a_{14} & a_{15} \\
a_{21} & a_{22} & a_{23} & a_{24} & a_{25} \\
a_{31} & a_{32} & a_{33} & a_{34} & a_{35} \\
a_{41} & a_{42} & a_{43} & a_{44} & a_{45} \\
a_{51} & a_{52} & a_{53} & a_{54} & a_{55} \\
\end{array}
\right ]
\left [
\begin{array}{}
b_{11} & b_{12} & b_{13} \\
b_{21} & b_{22} & b_{23} \\
b_{31} & b_{32} & b_{33} \\
b_{41} & b_{42} & b_{43} \\
b_{51} & b_{52} & b_{53} \\
\end{array}
\right ]
=
\left [
\begin{array}{}
(AB)_{11} & (AB)_{12} & (AB)_{13} \\
(AB)_{21} & (AB)_{22} & (AB)_{23} \\
(AB)_{31} & (AB)_{32} & (AB)_{33} \\
(AB)_{41} & (AB)_{42} & (AB)_{43} \\
(AB)_{51} & (AB)_{52} & (AB)_{53} \\
\end{array}
\right ]
\\
(AB)_{11} = a_{11} b_{11} + a_{12} b_{21} + a_{13} b_{31} + a_{14} b_{41} + a_{15} b_{51}
= \sum_{j=1}^{5}a_{1j}b_{j1}
\\
(AB)_{21} = a_{21} b_{11} + a_{22} b_{21} + a_{23} b_{31} + a_{24} b_{41} + a_{25} b_{51}
= \sum_{j=1,k=1}^{5}a_{2j}b_{j1}
\\
(AB)_{31} = a_{31} b_{11} + a_{32} b_{21} + a_{33} b_{31} + a_{34} b_{41} + a_{35} b_{51}
= \sum_{j=1,k=1}^{5}a_{3j}b_{j1}
\\
(AB)^{1} = a_{jk}b_{k1} \\
(AB)^{2} = a_{jk}b_{k2} \\
(AB)^{i} = a_{jk}b_{ki} \\
\because A^{j}(\text{column vector: column fixed, stacked over the rows}) = \sum_{k}a_{kj} \quad \because A_{i} = \sum_{k}a_{ik} \\
\therefore (AB)^{i} = b_{ji}A^{j} \\
$
$
AB_{12} = a_{11} b_{12} + a_{12} b_{22} + a_{13} b_{32} + a_{14} b_{42} + a_{15} b_{52}
= \sum_{j=1,k=1}^{5}a_{1j}b_{j2}
\\
AB_{ik} = a_{ij} b_{jk} = \sum_{j=1}^{5}a_{ij}b_{jk}
$
# Matrix
> ## $ \text{such that }A \in \mathbb{R}^{m \times n} \\
f: \mathbb{R}^{n} \mapsto \mathbb{R}^{m} $
> ### sm.Matrix ( [ [ ], [ ], [ ] ] )
>> ### $T = $ sm.Matrix( [ [1, 2],[3, 4] ] )
>> ### $ T_{i} = T.row(i) \to $ row vector,
>> ### T.row_join(Matrix) = T.col_insert(col_num_to_add, Matrix)
>>> ### T.col_del(num)
>> ### $ T^{i} = T.col(i) \to $ column vector
>> ### T.col_join(Matrix) = T.row_insert(row_num_to_add, Matrix)
>>> ### T.row_del(num)
>> ### v.to_matrix(N_system)
>> ### sm.vector.matrix_to_vector(T_matrix ,N_system)
# Matrix
> ### $
A = \begin{bmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22} \\
\end{bmatrix}
B = \begin{bmatrix}
b_{11} & b_{12} \\
b_{21} & b_{22} \\
\end{bmatrix}
\\
AB = \begin{bmatrix}
a_{11} b_{11} + a_{12} b_{21}, & a_{11}b_{12} + a_{12} b{22} \\
a_{21} b_{11} + a_{22} b_{21}, & a_{21}b_{12} + a_{22} b{22} \\
\end{bmatrix}
= \begin{bmatrix}
b_{11} \begin{bmatrix}a_{11} \\ a_{21}\end{bmatrix}
+ b_{21}\begin{bmatrix}a_{12} \\ a_{22} \end{bmatrix}, &
b_{12} \begin{bmatrix}a_{11} \\ a_{21}\end{bmatrix}
+ b_{22}\begin{bmatrix}a_{12} \\ a_{22} \end{bmatrix}
\end{bmatrix}
\\ (AB) = \sum_{kj}^{2} b_{kj}(\sum_{i}^{2}a_{ik}G_{ij})
\\ (AB)^{j} = \sum_{k} b_{kj}(\sum_{i}a_{ik}G_{ij})
\\ (AB)^{j} = \sum_{k} b_{kj}(A^{k})
\\ \therefore
(AB) =
\begin{bmatrix} b_{11}A^1 + b_{21}A^2 & b_{12}A^1 + b_{22}A^2 \end{bmatrix}
\\
\Rightarrow (AB)^{k} = \sum_{j} b_{jk}A^j = b_{1k}A^1 + b_{2k}A^2+...+ b_{nk}A^n
$
> ### $
= \begin{bmatrix}
a_{11} \begin{bmatrix}b_{11} & b_{12}\end{bmatrix}
+
a_{12} \begin{bmatrix}b_{21} & b_{22} \end{bmatrix}
\\
a_{21} \begin{bmatrix}b_{11} & b_{12}\end{bmatrix}
+
a_{22}\begin{bmatrix}b_{21} & b_{22} \end{bmatrix}
\end{bmatrix}
$
>> ### The columns of A are repeated according to the rows of B.
>>> ### $
b_{ij}\big(a_{ki}G_{kj}\big)
\Rightarrow
\sum_{ij}b_{ij}\Big(\sum_{k}a_{ki}G_{kj}\Big)
\\
= (AB)^{k} = B_{jk}(A_{ij}G_{ik})
= (AB)^{k} = B_{jk}A^{i}
$
> ### $
(AB)_{i} = A_{ij}(B_{jk}G_{ik})
(AB)_{i} = A_{ik}B_k
$
# AB
> ### $
AB = \sum_{ij}a_{ij}G_{ij} \sum_{lm}b_{lm}G_{lm} \\
= \sum_{ilm}a_{il}b_{lm}G_{im} \iff \sum_{ijk}a_{ij}b_{jk}G_{ik}
\\
(AB)^{1} = a_11 b_11 + a_12 b_21 + a_1n b_n1 \\ a_21b_21 + a_22 b_22 + a_2n b_2n
(AB)_{1} = a_11 b_11 + a_12 b_21 + a_1n b_n1 \\ a_21b_11
(AB)^{2} = a_11 b_12 + a_12 b_22 + a_1n b_n2 \\ a_21b_12
(AB)^{3} = a_11 b_13 + a_12 b_23 + a_1n b_n2 \\ a_21b_13
(AB)^{k} = a_11 b_1k + a_12 b_2k + a_1n b_nk \\ a_21b_1k
(AB)^{k} = a_ij b_jk + a_ij b_jk + a_1n b_nk \\ a_ijb_jk
$
```python
import sympy as sm
i,j,k,l,m,n = sm.symbols('i j k l m n', positive=True)
A = sm.MatrixSymbol('A',3,3)
B = sm.MatrixSymbol('B',m,n)
C = sm.MatrixSymbol('C',m,m)
D = sm.MatrixSymbol('D',m,m)
sm.Matrix(A)
sm.Matrix(B.subs({m:3,n:2}))
sm.Sum(C[i,j]*D[j,k],(j,0,m-1))
sm.MatrixExpr.from_index_summation(sm.Sum(C[i,j]*D[j,k],(j,0,m-1)))
3*(C + D) == 3*C + 3*D
C.inv()
D.T == D.transpose()
sm.Trace(sm.eye(3,3)).doit()
# [row_start_index : row_end_index), [col_start_index ~ col_end_index)
T = sm.Matrix([[1,2,3],[4,5,6],[7,8,9]])
T[:,:]
M11 = T[1:,1:]
T.adjoint()
T.adjugate()
```
$\displaystyle \left[\begin{matrix}-3 & 6 & -3\\6 & -12 & 6\\-3 & 6 & -3\end{matrix}\right]$
```python
sm.Sum(C[j,i]*D[j,k],(j,0,m-1))
sm.MatrixExpr.from_index_summation(sm.Sum(C[j,i]*D[j,k],(j,0,m-1)))
sm.combinatorics.Permutation(1,2,0).doit()
sm.Matrix([[25,15,-5],[15,18,0],[-5,0,11]]).det()
sm.Matrix([[25,15,-5],[15,18,0],[-5,0,11]]).LDLdecomposition()
```
(Matrix([
[ 1, 0, 0],
[ 3/5, 1, 0],
[-1/5, 1/3, 1]]),
Matrix([
[25, 0, 0],
[ 0, 9, 0],
[ 0, 0, 9]]))
# dummy index
> ## $$ A_{ik} = \sum_{j=1}^{m} a_{ij} \: b_{jk}$$
$$
\begin{bmatrix}
A_{11} & \dots & A_{1k} & \dots & A_{1n} \\
\vdots & \ddots & \vdots & \ddots & \vdots \\
A_{i1} & \dots & A_{ik} & \dots & A_{in} \\
\vdots & \ddots & \vdots & \ddots & \vdots \\
A_{l1} & \dots & A_{lk} & \dots & A_{ln} \\
\end{bmatrix}
\:=\:
\begin{bmatrix}
a_{11} & \dots & a_{1k} & \dots & a_{1m} \\
\vdots & \ddots & \vdots & \ddots & \vdots \\
a_{i1} & \dots & a_{ik} & \dots & a_{im} \\
\vdots & \ddots & \vdots & \ddots & \vdots \\
a_{l1} & \dots & a_{lk} & \dots & a_{lm} \\
\end{bmatrix}
\:
\begin{bmatrix}
b_{11} & \dots & b_{1k} & \dots & b_{1n} \\
\vdots & \ddots & \vdots & \ddots & \vdots \\
b_{i1} & \dots & b_{ik} & \dots & b_{in} \\
\vdots & \ddots & \vdots & \ddots & \vdots \\
b_{m1} & \dots & b_{mk} & \dots & b_{mn} \\
\end{bmatrix}
$$
```python
```
# Diagonally Dominant Matrices
> ### A matrix is diagonally dominant if, for each row,
>> ### the magnitude of the diagonal element
>>> ### is greater than or equal
>>> ### to the sum of the magnitudes of all other elements in that row.
> ## $$
\big|a_{ii}| \geq \sum_{j\neq i} \big|a_{ij}\big|
$$
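A short added sketch that checks this condition programmatically (the example matrices here are chosen just for the check):
```python
import sympy as sm

def is_diagonally_dominant(M):
    # |M[i,i]| >= sum_{j != i} |M[i,j]| for every row i
    return all(
        abs(M[i, i]) >= sum(abs(M[i, j]) for j in range(M.cols) if j != i)
        for i in range(M.rows)
    )

print(is_diagonally_dominant(sm.Matrix([[1, 2], [3, 4]])))                    # False
print(is_diagonally_dominant(sm.Matrix([[4, 1, 2], [1, 5, 3], [2, 0, 6]])))   # True
```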
```python
# diagonally dominant matrices
# Not diagonally dominant (row 1: |1| < |2|)
A = sm.Matrix([[1,2],[3,4]])
# Intended as a diagonally dominant matrix
# (note: row 2 actually violates the condition, since |1| < |0| + |7|)
B = sm.Matrix([[3,1,1],[0,1,7],[2,1,4]])
a11, a12, a13 = sm.symbols('a_11 a_12 a_13')
a21, a22, a23 = sm.symbols('a_21 a_22 a_23')
a31, a32, a33 = sm.symbols('a_31 a_32 a_33')
x1,x2,x3 = sm.symbols('x1:4')
b1,b2,b3 = sm.symbols('b1:4')
eq1 = sm.Eq(a11*x1 + a12*x2 + a13*x3,b1)
eq2 = sm.Eq(a21*x1 + a22*x2 + a23*x3,b2)
eq3 = sm.Eq(a31*x1 + a32*x2 + a33*x3,b3)
sm.solve(eq1,x1)
```
$\displaystyle a_{11} x_{1} + a_{12} x_{2} + a_{13} x_{3} = b_{1}$
```python
import sympy.vector
import numpy as np
import matplotlib.pyplot as plt
%matplotlib widget
B = sm.vector.CoordSys3D('B')
C = B.create_new('C','cylindrical')
S = B.create_new('S','spherical')
t,q0,q1,q2,q3 = sm.symbols('t q0:4')
Q = B.orient_new_quaternion('Q',q0,q1,q2,q3)
## \vec{x'(t)} = A * \vec{x(t)}
# https://www.youtube.com/watch?v=8wAgRAWwE3M
# x' = \lambda * x
# first order system of linear differential equations
# \vec{x'(t)} = [ -c2e^t sin(t) +2c3e^t sin(t) + c1e^t cos(t) ...]
A = sm.Matrix([[1,-1,2],[-1,1,0],[-1,0,1]])
# A.eigenvals().values()
r1,r2,r3 = A.eigenvals().keys()
lamb1, mu1 = sm.re(r1), sm.im(r1)
lamb2, mu2 = sm.re(r2), sm.im(r2)
lamb3, mu3 = sm.re(r3), sm.im(r3)
v1, v2, v3 = A.eigenvects()[0][2][0], A.eigenvects()[1][2][0], A.eigenvects()[2][2][0],
x_sup1 = sm.lambdify(t,v1*(sm.exp(lamb1*t) * sm.cos(mu1*t) + 1*sm.I*sm.sin(mu1*t)))
x_sup2 = sm.lambdify(t,v2*(sm.exp(lamb2*t) * sm.cos(mu2*t) + 1*sm.I*sm.sin(mu2*t)))
u = sm.lambdify(t,v2*(sm.exp(lamb2*t) * sm.cos(mu2*t) + 1*sm.I*sm.sin(mu2*t)))
v = sm.lambdify(t,v2*(sm.exp(lamb2*t) * sm.cos(mu2*t) + 1*sm.I*sm.sin(mu2*t)))
c1,c2,c3 = 1,1,1
x = lambda t: c1*x_sup2(t) + c2*u(t) + c3*v(t)
x(0)
```
array([[0.+3.j],
[3.+0.j],
[3.+0.j]])
# Functional
> $f(x) \iff f(g(u)) \iff f(g(h(t)))$
>> $ \begin{cases}
x \iff g(u) \iff {u}^2+ 3{u} + c\\
u \iff a{t}^2 + b{t} + c \\
\end{cases}$
> $f(x,y) \iff f(g(u,v)) \iff f(g(h(t,s))$
>> $\begin{cases}
x \iff g(u,v) \iff {u}^2+ 3{v} + c\\
u \iff h(t,s) \iff a{t}^2 + b{s} + c\\
\end{cases}\\
\begin{cases}
y \iff g(u,v) \iff {v}^2+ 2{u} + d\\
v \iff h(t,s) \iff a{s}^2 + b{s} + c\\
\end{cases}$
> $f(x,y,z) \iff f(g(u_0,u_1,u_2)) \iff f(g(h(s_0,s_1,s_2)) $
>> $\begin{cases}
x \iff g(u_0,u_1,u_2) \iff {u_2}^2+ 3{u_0} + c\\
u_0 \iff h(s_0,s_1,s_2) \iff a{s_2}^2 + b{s_1} + c\\
\end{cases}\\
\begin{cases}
y \iff g(u_0,u_1,u_2) \iff {u_2}^2+ 2{u_1} + d\\
u_1 \iff h(s_0,s_1,s_2) \iff a{s_0}^2 + b{s_1} + c\\
\end{cases}\\
\begin{cases}
z \iff g(u_0,u_1,u_2) \iff {u_1}^2+ 2{u_2} + d\\
u_2 \iff h(s_0,s_1,s_2) \iff a{s_2}^2 + b{s_1} + c\\
\end{cases}$
> $f(x_0,x_1,x_2,x_3) \iff f(g(u_0,u_1,u_2,u_3)) \iff f(g(h(s_0,s_1,s_2,s_3))$
>> $\begin{cases}
x_0 \iff g(u_0,u_1,u_2,u_3) \iff {u_2}^2+ 3{u_0} + c\\
u_0 \iff h(s_0,s_1,s_2,s_3) \iff a{s_2}^2 + b{s_1} + c\\
\end{cases}\\
\begin{cases}
x_1 \iff g(u_0,u_1,u_2,u_3) \iff {u_2}^2+ 2{u_1} + d\\
u_1 \iff h(s_0,s_1,s_2,s_3) \iff a{s_0}^2 + b{s_1} + c\\
\end{cases}\\
\begin{cases}
x_2 \iff g(u_0,u_1,u_2,u_3) \iff {u_1}^2+ 2{u_2} + d\\
u_2 \iff h(s_0,s_1,s_2,s_3) \iff a{s_2}^2 + b{s_1} + c\\
\end{cases}\\
\begin{cases}
x_3 \iff g(u_0,u_1,u_2,u_3) \iff {u_1}^2+ 2{u_2} + d\\
u_3 \iff h(s_0,s_1,s_2,s_3) \iff a{s_2}^2 + b{s_1} + c\\
\end{cases}$
---
# single variable single function
> $ f(x) = x^2 \iff f \big[x \big] = \big[x^2 \big]$
> $ \frac{df}{dx} = 2x \iff \big [\frac{df}{dx}\big] = \big[ 2x \big]$
# multivariable single function
> $ f(x_1, x_2) = 2x_1x_2 + 3x_1 + 2x_2
\iff f(x_1, x_2)
=
\begin{bmatrix} 2x_1x_2 + 3x_1 + 2x_2 \end{bmatrix}$
> $ \begin{bmatrix}
\frac{\partial f}{\partial x_1} \\ \frac{\partial f}{\partial x_2}
\end{bmatrix}
=
\begin{bmatrix}2x_2 + 3 \\ 2x_1 + 2
\end{bmatrix}
$
# multivariable mutiple function
> $
f\begin{bmatrix}
x_1 \\
x_2
\end{bmatrix}
=
\begin{bmatrix}
f_1(x_1, x_2) \\
f_2(x_1,x_2)
\end{bmatrix}
=
\begin{bmatrix} 3x_1^2 x_2 + x_2 \\
2x_1x_2^3 - 2x_1
\end{bmatrix}
$
> $
\begin{bmatrix}
\frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} \\
\frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} \\
\end{bmatrix}
=
\begin{bmatrix}
6x_1x_2 & 3x_1^2 + 1\\
2x_2^3 -2 & 6x_1x_2^2
\end{bmatrix}
$
---
# Jacobian
> ## Notation
>> $J_f(x_1,x_2)$
>>> $ f.jacobian([x_1,x_2])$
```python
x,y, x1,x2 = sm.symbols('x y x1:3')
f = sm.Matrix([[3*x1**2 + x2],
[2*x1*x2 - 2*x2]])
# J_{f} (x_{1}, x_{2})
J = sm.Matrix([[6*x1, 1],[2*x2, 2*x1 - 2]])
Jd = J.det()
fJ = f.jacobian(sm.Matrix([x1,x2]))
a1, a2 = [sm.diff(i,x1) for i in f]
b1, b2 = [sm.diff(i,x2) for i in f]
fcross = a1 * b2 - a2 * b1
fdot = a1 * b1 + a2 * b2
B = sm.vector.CoordSys3D('')
BJ = sm.Matrix([[6*B.x, 1],[2*B.y, 2*B.x - 2]]).subs({B.x:x,B.y:y})
```
```python
sm.plotting.plot3d(BJ[0])
```
<div style="display: inline-block;">
<div class="jupyter-widgets widget-label" style="text-align: center;">
Figure
</div>
</div>
<sympy.plotting.plot.Plot at 0x7f8897730100>
# The Jacobian can relate the joint velocities to Cartesian velocities in a robot manipulator
> ## notation
>> ### $
{}^{3}J,{}^{2}J, {}^{1}J \\
{}^{\circ}J = {}^0_{3}R {}^{3}J
$
>> ## heading angle v
>> ## $
\vec{v} \text{ velocity vector in the robot's heading direction }
\begin{cases}
v_x = v cos\theta \\
v_y = v sin\theta
\end{cases}\\
\overset{\cdot}{\vec{v}}
\begin{cases}
\overset{\cdot}{v_x} = \frac{d}{dt}v\: cos(\theta) \\
\overset{\cdot}{v_y} = \frac{d}{dt}v\: sin(\theta)
\end{cases}
$
> ## position of robot
>> ## $X_g$ global coordinate
>>> ## $
x = \begin{bmatrix}x \\ y \\ \theta \end{bmatrix}
=
\begin{bmatrix}v\:cos\theta \\ v\:sin\theta \\ \theta \end{bmatrix}
\Rightarrow
x_g = \begin{bmatrix}x_g \\ y_g \\ \theta_g \end{bmatrix}\\
\overset{\cdot}{x}
=
\begin{bmatrix}
v cos{\theta} \\ vsin(\theta) \\ \omega
\end{bmatrix}
=
\begin{bmatrix}cos(\theta) & 0 \\ sin(\theta) & 0 \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} v \\ \omega \end{bmatrix}
$
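A small added sketch of this unicycle-style kinematic model, mapping $(v, \omega)$ to $\dot{x}$ in the global frame:
```python
import sympy as sm

v, w, theta = sm.symbols('v omega theta')
J_kin = sm.Matrix([[sm.cos(theta), 0],
                   [sm.sin(theta), 0],
                   [0, 1]])
J_kin * sm.Matrix([v, w])   # [v*cos(theta), v*sin(theta), omega]
```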
# [The eigenvalues of the matrix are equal to the roots of the polynomial](https://www.youtube.com/watch?v=VKUU3bZbPVM&list=PLIxff5DJJR7oBEy0Kdg12WWSlS6XFtr6r&index=31): time(11:31)
> ## Frobenius Matrix
>> ### $
\begin{bmatrix}
-a_{n-1} & -a_{n-2} & \cdots & -a_{1} & -a_{0} \\
1 & 0 & \cdots & 0 & 0 \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
0 & 0 & \cdots & 1 & 0 \\
\end{bmatrix}^{=A}\vec{v} = \lambda \vec{v} \\
\iff
\begin{vmatrix}
-a_{n-1}-\lambda & -a_{n-2} & \cdots & -a_{1} & -a_{0} \\
1 & -\lambda & \cdots & 0 & 0 \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
0 & 0 & \cdots & 1 & -\lambda \\
\end{vmatrix}\\
\iff \text{det}(A - \lambda I) = (-1)^n \Big(\lambda^n + a_{n-1}\lambda^{n-1}+ a_{n-2}\lambda^{n-2}+ ... + a_{2}\lambda^{2} + a_{1}\lambda + a_{0}\Big)
$
---
# Meaning of the inverse
> ### $ A\vec{x} = \vec{b} \iff \big(A^{-1}A \big)\vec{x} = A^{-1}\vec{b} \\
\therefore I\;\vec{x} = A^{-1}\;\vec{b} \iff \vec{x} = A^{-1}\vec{b}$
> ### eigen value, eigen vector
>> ### $ A\vec{x} = \lambda \vec{x} \iff A\vec{x} - \lambda I \vec{x} = 0 \\
\therefore \Big( A - \lambda I \Big) \vec{x} = 0$
> ### $
\begin{bmatrix}
3 & 2 \\ 1 & 2
\end{bmatrix}
\begin{bmatrix}
x_0 \\ x_1
\end{bmatrix}
=
\begin{bmatrix}
3 \\ 2
\end{bmatrix}
$
```python
#import sympy.physics.vector
#R = sm.physics.vector.ReferenceFrame('R')
#v = R.x + R.y + R.z
#v.to_matrix(R)
import sympy as sm
import sympy.vector
x,y = sm.symbols('x y')
N = sm.vector.CoordSys3D('N')
T = sm.vector.CoordSys3D('T')
# A * \vec{x} = \vec{b}
# A = [3 & 2 \\ 1 & 2] \iff N.i=(3,1), N.j=(2,2)
# x = [x_0 \\ x_1]
# B = [3 \\ 2]
# [3 & 2 // 1 & 2] * [ x_0 \\ x_1 ] = [3 \\ 2 ]
T.i = 3*N.i + N.j
T.j = 2*N.i + 2*N.j
T.k = N.k
h = 3*N.i + 2*N.j
H = h.to_matrix(N)
T.i.to_matrix(N)
A = T.i.to_matrix(N).row_join(T.j.to_matrix(N)).row_join(T.k.to_matrix(N))
## B_i \cdot A^j = \delta_{i}^{j}
B = A.inv()
B*H
# 3*(1/2) + (3/4) = 2.25
# 2*(1/2) + 2*(3/4) = 2.25
sm.solve([3*N.x + 2*N.y - 3, N.x + 2*N.y-2])
sm.solve([3*x + 2*y - 3, x + 2*y-2])
C = sm.Matrix([x,y,0])
# Find the basis vector that is perpendicular to A^1=(3,1) and has dot product 1 with A^2=(2,2)
## (-1/4, 3/4)
eq1 = 3*x + y
eq2 = 2*x + 2*y - 1
sm.solve([eq1,eq2])
sy = (-1/4)*x + (3/4)*y
# Find the basis vector that is perpendicular to A^2=(2,2) and has dot product 1 with A^1=(3,1)
## (1/2, -1/2)
A.col(1).T*C
eq3 = 2*x + 2*y
eq4 = 3*x + 1*y - 1
sm.solve([eq3,eq4])
sx = 1/2*x - 1/2*y
sm.Matrix([sx.subs({x:3,y:2}), sy.subs({x:3,y:2})]) == B*sm.Matrix([3,2,0])
```
False
# Meaning of the determinant
> ### It is the volume (area) spanned by the components of the basis vectors, i.e., the unit volume of the basis.
# Inverse (meaning of the inverse matrix)
> ### The ratio of the unit volumes of the transformed system and the original system is exactly the determinant. That is,
> ### we divide the original coordinates by the unit volume of the new basis in order to change the base unit.
> ### Therefore, transforming the original coordinates by the inverse matrix gives their coordinates with respect to the new basis.
# $ \text{column vectors (basis vectors) of the transformation matrix} \iff \text{row vectors (basis vectors) of the inverse matrix}$
# Finding the inverse of a matrix A means
> ### finding the row vectors $\{B_1,...,B_m\}$ that are
> ### perpendicular (in the dual sense below) to the column vectors $\{A^1,...,A^n\}$ of A.
> ### In other words, we are looking for several perpendicular lines and the corresponding relative coordinates.
> ### $ B_{i} \cdot A^{j} = \delta_{i}^{j}$
```python
import sympy as sm
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot()
ax.set_aspect('equal')
ax.set_xlim(xmin=-2,xmax=4)
ax.set_ylim(ymin=-2,ymax=4)
ax.spines[['left','bottom']].set_position('zero')
# A = [ 3 & 1 \\ 2 & 2 ]
ax.quiver(0,0,3,1,scale=1,units='xy')
ax.quiver(0,0,2,2,scale=1,units='xy')
# b = [ 3 \\ 2 ]
ax.quiver(0,0,3,2,color='r',scale=1,units='xy')
# projection 2 axis of orthogonal two axis A = A^{-1}
######
# A^{-1} = [1/2 & -1/2 \\ -1/4 & 3/4 ] \iff (1/2,-1/2),(-1/4, 3/4) :: read the coordinates row by row
######
ax.quiver(0,0,1/2,-1/2,color='r',scale=1,units='xy')
ax.quiver(0,0,-1/4,3/4,color='r',scale=1,units='xy')
#### solution ####
# A^{-1}*b = [1/2, 3/4] => the solution is 1/2 of A^1 plus 3/4 of A^2.
# That is, when (3,2) in the current coordinate system is expressed in the basis of the transformation A,
# i.e., with respect to A^1 (1st column vector) and A^2 (2nd column vector), what are its coordinates? => that is the solution.
ax.plot([-0.3,3],[0.9,2],'magenta')
ax.plot([1/2,3],[-1/2,2],'magenta')
ax.plot(6/4,6/4,'co')
ax.plot(3/2,1/2,'co')
A.inv()*B
(A.inv())*sm.Matrix([3,2,0])
A*sm.Matrix([1/2,3/4,0])
```
$\displaystyle \left[\begin{matrix}3.0\\2.0\\0\end{matrix}\right]$
<div style="display: inline-block;">
<div class="jupyter-widgets widget-label" style="text-align: center;">
Figure
</div>
</div>
# Ax = b
> ### $
\begin{bmatrix}
3 & 2 \\ 1 &2
\end{bmatrix}
\begin{bmatrix}
x_1 \\ x_2
\end{bmatrix}
=
\begin{bmatrix}
3 \\ 2
\end{bmatrix}
$
```python
import sympy as sm
A = sm.Matrix([[3,2],[1,2]])
A.inv()
```
$\displaystyle \left[\begin{matrix}\frac{1}{2} & - \frac{1}{2}\\- \frac{1}{4} & \frac{3}{4}\end{matrix}\right]$
```python
A.inv()*sm.Matrix([3,2])
```
$\displaystyle \left[\begin{matrix}\frac{1}{2}\\\frac{3}{4}\end{matrix}\right]$
```python
```
| 61c94f768a62fe136555a3b232019610325cdfec | 212,846 | ipynb | Jupyter Notebook | python/Vectors/Matrix.ipynb | karng87/nasm_game | a97fdb09459efffc561d2122058c348c93f1dc87 | ["MIT"] | null | null | null | python/Vectors/Matrix.ipynb | karng87/nasm_game | a97fdb09459efffc561d2122058c348c93f1dc87 | ["MIT"] | null | null | null | python/Vectors/Matrix.ipynb | karng87/nasm_game | a97fdb09459efffc561d2122058c348c93f1dc87 | ["MIT"] | null | null | null | 209.082515 | 161,651 | 0.890033 | true | 8,562 | Qwen/Qwen-72B | 1. YES 2. YES | 0.782662 | 0.654895 | 0.512562 | __label__kor_Hang | 0.181598 | 0.029181 |
```python
# This cell is added by sphinx-gallery
# It can be customized to whatever you like
%matplotlib inline
```
Noisy circuits
==============
.. meta::
:property="og:description": Learn how to simulate noisy quantum circuits
:property="og:image": https://pennylane.ai/qml/_images/N-Nisq.png
.. related::
tutorial_noisy_circuit_optimization Optimizing noisy circuits with Cirq
pytorch_noise PyTorch and noisy devices
In this demonstration, you'll learn how to simulate noisy circuits using built-in functionality in
PennyLane. We'll cover the basics of noisy channels and density matrices, then use example code to
simulate noisy circuits. PennyLane, the library for differentiable quantum computations, has
unique features that enable us to compute gradients of noisy channels. We'll also explore how
to employ channel gradients to optimize noise parameters in a circuit.
We're putting the N in NISQ.
.. figure:: ../demonstrations/noisy_circuits/N-Nisq.png
:align: center
:width: 20%
..
Noisy operations
----------------
Noise is any unwanted transformation that corrupts the intended
output of a quantum computation. It can be separated into two categories.
* **Coherent noise** is described by unitary operations that maintain the purity of the
output quantum state. A common source are systematic errors originating from
imperfectly-calibrated devices that do not exactly apply the desired gates, e.g., applying
a rotation by an angle $\phi+\epsilon$ instead of $\phi$.
* **Incoherent noise** is more problematic: it originates from a quantum computer
becoming entangled with the environment, resulting in mixed states --- probability
distributions over different pure states. Incoherent noise thus leads to outputs that are
always random, regardless of what basis we measure in.
Mixed states are described by `density matrices
<https://en.wikipedia.org/wiki/Density_matrices>`__.
They provide a more general method of describing quantum states that elegantly
encodes a distribution over pure states in a single mathematical object.
Mixed states are the most general description of a quantum state, of which pure
states are a special case.
The purpose of PennyLane's ``default.mixed`` device is to provide native
support for mixed states and for simulating noisy computations. Let's use ``default.mixed`` to
simulate a simple circuit for preparing the
Bell state $|\psi\rangle=\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)$. We ask the QNode to
return the expectation value of $Z_0\otimes Z_1$:
```python
import pennylane as qml
from pennylane import numpy as np
dev = qml.device('default.mixed', wires=2)
@qml.qnode(dev)
def circuit():
qml.Hadamard(wires=0)
qml.CNOT(wires=[0, 1])
return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))
print(f"QNode output = {circuit():.4f}")
```
The device stores the output state as a density matrix. In this case, the density matrix is
equal to $|\psi\rangle\langle\psi|$,
where $|\psi\rangle=\frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$.
```python
print(f"Output state is = \n{np.real(dev.state)}")
```
Incoherent noise is modelled by
quantum channels. Mathematically, a quantum channel is a linear, completely positive,
and trace-preserving (`CPTP
<https://www.quantiki.org/wiki/channel-cp-map>`__) map. A convenient strategy for representing
quantum channels is to employ `Kraus operators
<https://en.wikipedia.org/wiki/Quantum_operation#Kraus_operators>`__
$\{K_i\}$ satisfying the condition
$\sum_i K_{i}^{\dagger} K_i = I$. For an initial state $\rho$, the output
state after the action of a channel $\Phi$ is:
\begin{align}\Phi(\rho) = \sum_i K_i \rho K_{i}^{\dagger}.\end{align}
Just like pure states are special cases of mixed states, unitary
transformations are special cases of quantum channels. Unitary transformations are represented
by a single Kraus operator,
the unitary $U$, and they transform a state as
$U\rho U^\dagger$.
More generally, the action of a quantum channel can be interpreted as applying a
transformation corresponding to the Kraus operator $K_i$ with some associated
probability. More precisely, the channel applies the
transformation
$\frac{1}{p_i}K_i\rho K_i^\dagger$ with probability $p_i = \text{Tr}[K_i \rho K_{i}^{\dagger}]$. Quantum
channels therefore represent a probability distribution over different possible
transformations on a quantum state. For
example, consider the bit flip channel. It describes a transformation that flips the state of
a qubit (applies an X gate) with probability $p$ and leaves it unchanged with probability
$1-p$. Its Kraus operators are
\begin{align}K_0 &= \sqrt{1-p}\begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix}, \\
K_1 &= \sqrt{p}\begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}.\end{align}
This channel can be implemented in PennyLane using the :class:`qml.BitFlip <pennylane.BitFlip>`
operation.
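As a quick sanity check (a small NumPy sketch, not part of the PennyLane API), we can verify that these Kraus operators satisfy the trace-preservation condition $\sum_i K_i^\dagger K_i = I$ for an arbitrary flip probability $p$:

```python
import numpy as np

p = 0.2
K0 = np.sqrt(1 - p) * np.eye(2)
K1 = np.sqrt(p) * np.array([[0.0, 1.0], [1.0, 0.0]])

# the sum of K_i^dagger K_i must equal the identity for a valid channel
print(np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(2)))  # True
```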
Let's see what happens when we simulate this type of noise acting on
both qubits in the circuit. We'll evaluate the QNode for different bit flip probabilities.
```python
@qml.qnode(dev)
def bitflip_circuit(p):
qml.Hadamard(wires=0)
qml.CNOT(wires=[0, 1])
qml.BitFlip(p, wires=0)
qml.BitFlip(p, wires=1)
return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))
ps = [0.001, 0.01, 0.1, 0.2]
for p in ps:
print(f"QNode output for bit flip probability {p} is {bitflip_circuit(p):.4f}")
```
The circuit behaves quite differently in the presence of noise! This will be familiar to anyone
who has run an algorithm on quantum hardware. It also highlights why error
mitigation and error correction are so important. We can use PennyLane to look under the hood and
see the output state of the circuit for the largest noise parameter:
```python
print(f"Output state for bit flip probability {p} is \n{np.real(dev.state)}")
```
Besides the bit flip channel, PennyLane supports several other noisy channels that are commonly
used to describe experimental imperfections: :class:`~.pennylane.PhaseFlip`,
:class:`~.pennylane.AmplitudeDamping`, :class:`~.pennylane.GeneralizedAmplitudeDamping`,
:class:`~.pennylane.PhaseDamping`, and the :class:`~.pennylane.DepolarizingChannel`. You can also
build your own custom channel using the operation :class:`~.pennylane.QubitChannel` by
specifying its Kraus operators, or even submit a `pull request
<https://pennylane.readthedocs.io/en/stable/development/guide.html>`__ introducing a new channel.
Let's take a look at another example. The depolarizing channel is a
generalization of
the bit flip and phase flip channels, where each of the three possible Pauli errors can be
applied to a single qubit. Its Kraus operators are given by
\begin{align}K_0 &= \sqrt{1-p}\begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix}, \\
K_1 &= \sqrt{p/3}\begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}, \\
K_2 &= \sqrt{p/3}\begin{pmatrix}0 & -i\\ i & 0\end{pmatrix}, \\
K_3 &= \sqrt{p/3}\begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}.\end{align}
A circuit modelling the effect of depolarizing noise in preparing a Bell state is implemented
below.
```python
@qml.qnode(dev)
def depolarizing_circuit(p):
qml.Hadamard(wires=0)
qml.CNOT(wires=[0, 1])
qml.DepolarizingChannel(p, wires=0)
qml.DepolarizingChannel(p, wires=1)
return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))
ps = [0.001, 0.01, 0.1, 0.2]
for p in ps:
print(f"QNode output for depolarizing probability {p} is {depolarizing_circuit(p):.4f}")
```
As before, the output deviates from the desired value as the amount of
noise increases.
Modelling the noise that occurs in real experiments requires careful consideration.
PennyLane
offers the flexibility to experiment with different combinations of noisy channels to either mimic
the performance of quantum algorithms when deployed on real devices, or to explore the effect
of more general quantum transformations.
Channel gradients
-----------------
The ability to compute gradients of any operation is an essential ingredient of
:doc:`quantum differentiable programming </glossary/quantum_differentiable_programming>`.
In PennyLane, it is possible to
compute gradients of noisy channels and optimize them inside variational circuits.
PennyLane supports analytical
gradients for channels whose Kraus operators are proportional to unitary
matrices [#johannes]_. In other cases, gradients are evaluated using finite differences.
To illustrate this property, we'll consider an elementary example. We aim to learn the noise
parameters of a circuit in order to reproduce an observed expectation value. So suppose that we
run the circuit to prepare a Bell state
on a hardware device and observe that the expectation value of $Z_0\otimes Z_1$ is
not equal to 1 (as would occur with an ideal device), but instead has the value 0.7781. In the
experiment, it is known that the
major source of noise is amplitude damping, for example as a result of photon loss.
Amplitude damping projects a state to $|0\rangle$ with probability $p$ and
otherwise leaves it unchanged. It is
described by the Kraus operators
\begin{align}K_0 = \begin{pmatrix}1 & 0\\ 0 & \sqrt{1-p}\end{pmatrix}, \quad
K_1 = \begin{pmatrix}0 & \sqrt{p}\\ 0 & 0\end{pmatrix}.\end{align}
What damping parameter ($p$) explains the experimental outcome? We can answer this question
by optimizing the channel parameters to reproduce the experimental
observation! 💪 Since the parameter $p$ is a probability, we use a sigmoid function to
ensure that the trainable parameters give rise to a valid channel parameter, i.e., a number
between 0 and 1.
```python
ev = np.tensor([0.7781], requires_grad=False) # observed expectation value
def sigmoid(x):
return 1/(1+np.exp(-x))
@qml.qnode(dev)
def damping_circuit(x):
qml.Hadamard(wires=0)
qml.CNOT(wires=[0, 1])
qml.AmplitudeDamping(sigmoid(x), wires=0) # p = sigmoid(x)
qml.AmplitudeDamping(sigmoid(x), wires=1)
return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))
```
We optimize the circuit with respect to a simple cost function that attains its minimum when
the output of the QNode is equal to the experimental value:
```python
def cost(x, target):
return (damping_circuit(x) - target[0])**2
```
All that remains is to optimize the parameter. We use a straightforward gradient descent
method.
```python
opt = qml.GradientDescentOptimizer(stepsize=10)
steps = 35
x = np.tensor([0.0], requires_grad=True)
for i in range(steps):
(x, ev), cost_val = opt.step_and_cost(cost, x, ev)
if i % 5 == 0 or i == steps - 1:
print(f"Step: {i} Cost: {cost_val}")
print(f"QNode output after optimization = {damping_circuit(x):.4f}")
print(f"Experimental expectation value = {ev[0]}")
print(f"Optimized noise parameter p = {sigmoid(x.take(0)):.4f}")
```
Voilà! We've trained the noisy channel to reproduce the experimental observation. 😎
Developing quantum algorithms that leverage the power of NISQ devices requires serious
consideration of the effects of noise. With PennyLane, you have access to tools that can
help you design, simulate, and optimize noisy quantum circuits. We look forward to seeing what
the quantum community can achieve with them! 🚀 🎉 😸
References
----------
.. [#johannes]
Johannes Jakob Meyer, Johannes Borregaard, and Jens Eisert, "A variational toolbox for quantum
multi-parameter estimation." `arXiv:2006.06303 (2020) <https://arxiv.org/abs/2006.06303>`__.
---
# Function Representation and Manipulation
```
%matplotlib inline
```
```
import numpy as np
import matplotlib
matplotlib.rcParams.update({'font.size': 14})
import matplotlib.pyplot as plt
```
From a mathematical point of view, a central point in numerical methods is how we represent a general function $f(x)$. As $x$ is a real number the function encodes an (uncountably) infinite amount of information: its value $f(x)$ for every point $x$. A computer can only usefully store and manipulate a finite amount of information, so it would appear impossible to represent even the simplest function of one variable.
Of course, most functions can be represented straightforwardly using little information. We often see functions that can be represented symbolically - $f(x) = x^2 + x$ is a simple example - but numerical methods are precisely for cases where symbolic methods are not applicable. Instead we must consider how a function is represented, or approximated, in terms of a finite amount of information.
Two standard representations can be used to make the central point. The first representation is in terms of polynomials, using Taylor series:
\begin{equation}
f(x ; x_0) = \sum_{n=0}^{\infty} \frac{c_n}{n!} (x - x_0)^n.
\end{equation}
Here $x_0$ is a parameter about which the Taylor series is centred.
The second representation is in terms of trigonometric functions, using Fourier series. For simplicity we consider only the Fourier cosine series
\begin{equation}
f(x ; L) = \frac{1}{2} a_0 + \sum_{n=1}^{\infty} a_n \cos \left( \frac{n \pi x}{L} \right).
\end{equation}
The Fourier series representation is $2L$-periodic.
Given a set of coefficients ($\{c_n\}$ for the Taylor series or $\{a_n\}$ for the Fourier series) we define a function.
```
# Taylor coefficients c_n and Fourier coefficients a_n
# (for simplicity the 1/n! and 1/2 factors are absorbed into the coefficients)
cn = np.zeros((10, 1))
an = np.zeros_like(cn)
for n in range(len(cn)):
    cn[n] = 1.0 / (n + 1)**2
    an[n] = cn[n]

x = np.linspace(-1.0, 1.0, 100)
taylor = cn[0]*np.ones_like(x)
fourier = an[0]*np.ones_like(x)
for n in range(1, len(cn)):
    taylor += cn[n] * x**n
    fourier += an[n] * np.cos(n * np.pi * x)

fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111)
ax.plot(x, taylor, label='Taylor')
ax.plot(x, fourier, label='Fourier')
plt.legend()
ax.set_xlabel('$x$')
plt.show()
```
Given a function, the standard analytical methods will give you the coefficients. This can rarely be directly applied in numerical methods - as noted above, it is usual that you're dealing with a function that isn't known in symbolic form.
Above, when we defined the function $f(x)$ from the coefficients, we were noting that we could evaluate the representation for any point $x$ given
1. the form of the representation (e.g., as a Taylor series about $x_0=0$), and
2. the value of the coefficients.
We can (nearly always) go in the opposite direction: given the value of the function at enough points $x_n$ we can compute the coefficients of the representation.
For example, let us assume that
1. the function will be represented as a polynomial of order 2, i.e. a Taylor series about $x_0=0$ such that all coefficients $c_n$ vanish for $n > 2$, and
2. the value of the function is known at three points $x_1, x_2, x_3$.
This is precisely the information needed to compute the coefficients $c_0, c_1, c_2$ and hence write the function as
\begin{equation}
f(x) \approx c_0 + c_1 x + \frac{c_2}{2} x^2.
\end{equation}
Explicitly, we note that the function representation matches the known value of the function at the three points only if
\begin{equation}
\begin{pmatrix} 1 & x_1 & \tfrac{1}{2} x_1^2 \\ 1 & x_2 & \tfrac{1}{2} x_2^2 \\ 1 & x_3 & \tfrac{1}{2} x_3^2 \end{pmatrix} \begin{pmatrix} c_0 \\ c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} f(x_1) \\ f(x_2) \\ f(x_3) \end{pmatrix}
\end{equation}
which, given the information we know, is a linear system for the coefficients.
Note in particular that the number of points at which the function is known must match the number of coefficients in the function representation.
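As a small illustration (a sketch with made-up sample points and coefficients), the linear system above can be solved directly with NumPy to recover the coefficients of a known quadratic from its values at three points:

```
# recover c0, c1, c2 of f(x) = c0 + c1*x + (c2/2)*x**2 from three samples
c_true = np.array([1.0, -2.0, 3.0])
f = lambda x: c_true[0] + c_true[1]*x + 0.5*c_true[2]*x**2

xs = np.array([-1.0, 0.5, 2.0])                 # three distinct points
V = np.column_stack((np.ones_like(xs), xs, 0.5*xs**2))
c_est = np.linalg.solve(V, f(xs))
print(c_est)                                    # approximately [ 1. -2.  3.]
```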
The previous example hints at one of the standard methods of numerically representing functions. First, we take a representation that we know, given an infinite amount of information, converges to the function we want (in some sense). Second, assume that taking a *finite* number of terms is sufficient: e.g., truncate the sum in the Taylor series expansion after a finite number of terms. Then, using a finite amount of information about the function (such at its value at an appropriate number of points) we compute the coefficients of the representation. We then manipulate the representation *as if* it were the true function.
---
# Random Signals and LTI-Systems
*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).*
## Power Spectral Densitity
For a wide-sense stationary (WSS) real-valued random process $x[k]$, the [power spectral density](../random_signals/power_spectral_densities.ipynb#Power-Spectral-Density) (PSD) $\Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ is given as the discrete-time Fourier transformation (DTFT) of the auto-correlation function (ACF) $\varphi_{xx}[\kappa]$
\begin{equation}
\Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \sum_{\kappa = -\infty}^{\infty} \varphi_{xx}[\kappa] \; \mathrm{e}^{\,-\mathrm{j}\,\Omega\,\kappa}
\end{equation}
Under the assumption of a real-valued LTI system with impulse response $h[k] \in \mathbb{R}$, the PSD $\Phi_{yy}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ of the output signal of an LTI system $y[k] = \mathcal{H} \{ x[k] \}$ is derived by taking the DTFT of the [ACF of the output signal](../random_signals_LTI_systems/correlation_functions.ipynb#Auto-Correlation-Function) $\varphi_{yy}[\kappa]$
\begin{align}
\Phi_{yy}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) &= \sum_{\kappa = -\infty}^{\infty} \underbrace{h[\kappa] * h[-\kappa]}_{\varphi_{hh}[\kappa]} * \varphi_{xx}[\kappa] \; \mathrm{e}^{\,-\mathrm{j}\,\Omega\,\kappa} \\
&= H(\mathrm{e}^{\,\mathrm{j}\,\Omega}) \cdot H(\mathrm{e}^{\,-\mathrm{j}\,\Omega}) \cdot \Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = | H(\mathrm{e}^{\,\mathrm{j}\,\Omega}) |^2 \cdot \Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega})
\end{align}
The PSD of the output signal $\Phi_{yy}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ of an LTI system is given by the PSD of the input signal $\Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ multiplied with the squared magnitude $| H(\mathrm{e}^{\,\mathrm{j}\,\Omega}) |^2$ of the transfer function of the system.
### Example - Pink Noise
It can be concluded from above findings, that filtering can be applied to a white noise random signal $x[k]$ with $\Phi_{yy}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = N_0$ in order to create a random signal $y[k] = x[k] * h[k]$ with a desired PSD
\begin{equation}
\Phi_{yy}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = N_0 \cdot | H(\mathrm{e}^{\,\mathrm{j}\,\Omega}) |^2
\end{equation}
where $N_0$ denotes the power per frequency of the white noise. Such a random signal is commonly termed as [*colored noise*](https://en.wikipedia.org/wiki/Colors_of_noise). Different application specific types of colored noise exist. One of these is [*pink noise*](https://en.wikipedia.org/wiki/Pink_noise) whose PSD is inversely proportional to the frequency. The approximation of a pink noise signal by filtering is illustrated by the following example. The PSDs $\Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ and $\Phi_{yy}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ are estimated from $x[k]$ and $y[k]$ using the [Welch technique](../spectral_estimation_random_signals/welch_method.ipynb).
```python
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal as sig
fs = 44100
N = 5*fs
# generate uniformly distributed white noise
np.random.seed(1)
x = np.random.uniform(size=N) - .5
# filter white noise to yield pink noise
# see http://www.firstpr.com.au/dsp/pink-noise/#Filtering
a = np.poly([0.99572754, 0.94790649, 0.53567505]) # denominator coefficients
b = np.poly([0.98443604, 0.83392334, 0.07568359]) # numerator coefficients
y = 1/3 * sig.lfilter(b, a, x)
# estimate PSDs using Welch's technique
f, Pxx = sig.csd(x, x, nperseg=256)
f, Pyy = sig.csd(y, y, nperseg=256)
# PSDs
Om = f * 2 * np.pi
plt.plot(Om, 20*np.log10(np.abs(.5*Pxx)),
label=r'$| \Phi_{xx}(e^{j \Omega}) |$ in dB')
plt.plot(Om, 20*np.log10(np.abs(.5*Pyy)),
label=r'$| \Phi_{yy}(e^{j \Omega}) |$ in dB')
plt.title('Power Spectral Density')
plt.xlabel(r'$\Omega$')
plt.legend()
plt.axis([0, np.pi, -60, -10])
plt.grid()
```
Let's listen to white and pink noise
```python
from scipy.io import wavfile
wavfile.write('uniform_white_noise.wav', fs, np.int16(x*32768))
wavfile.write('uniform_pink_noise.wav', fs, np.int16(y*32768))
```
**White noise**
<audio src="./uniform_white_noise.wav" controls>Your browser does not support the audio element.</audio>[./uniform_white_noise.wav](./uniform_white_noise.wav)
**Pink noise**
<audio src="./uniform_pink_noise.wav" controls>Your browser does not support the audio element.</audio>[./uniform_pink_noise.wav](./uniform_white_noise.wav)
## Cross-Power Spectral Densities
The cross-power spectral densities $\Phi_{yx}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ and $\Phi_{xy}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ between the in- and output of an LTI system are given by taking the DTFT of the [cross-correlation functions](../random_signals_LTI_systems/correlation_functions.ipynb#Cross-Correlation-Function) (CCF) $\varphi_{yx}[\kappa]$ and $\varphi_{xy}[\kappa]$. Hence,
\begin{equation}
\Phi_{yx}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \sum_{\kappa = -\infty}^{\infty} h[\kappa] * \varphi_{xx}[\kappa] \; \mathrm{e}^{\,-\mathrm{j}\,\Omega\,\kappa} = \Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) \cdot H(\mathrm{e}^{\,\mathrm{j}\,\Omega})
\end{equation}
and
\begin{equation}
\Phi_{xy}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \sum_{\kappa = -\infty}^{\infty} h[-\kappa] * \varphi_{xx}[\kappa] \; \mathrm{e}^{\,-\mathrm{j}\,\Omega\,\kappa} = \Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) \cdot H(\mathrm{e}^{\,-\mathrm{j}\,\Omega})
\end{equation}
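These relations give a simple way to estimate the transfer function from measured in- and output signals: divide an estimate of $\Phi_{yx}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ by an estimate of $\Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$. The following sketch reuses `x`, `y`, `b`, `a` from the pink noise example above and assumes SciPy's convention that `sig.csd(x, y)` estimates the cross-PSD such that the ratio below yields $H(\mathrm{e}^{\,\mathrm{j}\,\Omega})$:

```python
f, Pxy = sig.csd(x, y, nperseg=256)  # cross-PSD between input and output
f, Pxx = sig.csd(x, x, nperseg=256)  # PSD of the input
H_est = Pxy / Pxx                    # estimate of the transfer function

Om, H_true = sig.freqz(b, a, worN=f * 2 * np.pi)

plt.plot(Om, 20*np.log10(np.abs(H_true) / 3),
         label=r'true $|H(e^{j \Omega})|$ in dB')   # 1/3 gain from the pink noise cell
plt.plot(Om, 20*np.log10(np.abs(H_est)),
         label=r'estimated $|H(e^{j \Omega})|$ in dB')
plt.xlabel(r'$\Omega$')
plt.legend()
plt.grid()
```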
## System Identification by Spectral Division
Using the result above for the cross-power spectral density $\Phi_{yx}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ between out- and input, and the relation of the [CCF of finite-length signals to the convolution](../random_signals/correlation_functions.ipynb#Definition) yields
\begin{equation}
H(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \frac{\Phi_{yx}(\mathrm{e}^{\,\mathrm{j}\,\Omega})}{\Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega})} = \frac{\frac{1}{K} Y(\mathrm{e}^{\,\mathrm{j}\,\Omega}) \cdot X(\mathrm{e}^{\,-\mathrm{j}\,\Omega})}{\frac{1}{K} X(\mathrm{e}^{\,\mathrm{j}\,\Omega}) \cdot X(\mathrm{e}^{\,-\mathrm{j}\,\Omega})}
= \frac{Y(\mathrm{e}^{\,\mathrm{j}\,\Omega})}{X(\mathrm{e}^{\,\mathrm{j}\,\Omega})}
\end{equation}
holding for $\Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) \neq 0$ and $X(\mathrm{e}^{\,\mathrm{j}\,\Omega}) \neq 0$. Hence, the transfer function $H(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ of an unknown system can be derived by dividing the spectrum of the output signal $Y(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ through the spectrum of the input signal $X(\mathrm{e}^{\,\mathrm{j}\,\Omega})$. This is equal to the [definition of the transfer function](https://en.wikipedia.org/wiki/Transfer_function). However, care has to be taken that the spectrum of the input signal does not contain zeros.
Above relation can be realized by the discrete Fourier transformation (DFT) by taking into account that a multiplication of two spectra $X[\mu] \cdot Y[\mu]$ results in the cyclic/periodic convolution $x[k] \circledast y[k]$. Since we aim at a linear convolution, zero-padding of the in- and output signal has to be applied.
### Example
We consider the estimation of the impulse response $h[k] = \mathcal{F}_*^{-1} \{ H(\mathrm{e}^{\,\mathrm{j}\,\Omega}) \}$ of an unknown system using the spectral division method. Normal distributed white noise with variance $\sigma_n^2 = 1$ is used as wide-sense ergodic input signal $x[k]$. In order to show the effect of sensor noise, normally distributed white noise $n[k]$ with the variance $\sigma_n^2 = 0.01$ is added to the output signal $y[k] = x[k] * h[k] + n[k]$.
```python
N = 1000 # number of samples for input signal
# generate input signal
# normally distributed (zero-mean, unit-variance) white noise
np.random.seed(1)
x = np.random.normal(size=N, scale=1)
# impulse response of the system
h = np.concatenate((np.zeros(20), np.ones(10), np.zeros(20)))
# output signal by convolution
y = np.convolve(h, x, mode='full')
# add noise to the output signal
y = y + np.random.normal(size=y.shape, scale=.1)
# zero-padding of input signal
x = np.concatenate((x, np.zeros(len(h)-1)))
# estimate transfer function
H = np.fft.rfft(y)/np.fft.rfft(x)
# compute inpulse response
he = np.fft.irfft(H)
he = he[0:len(h)]
# plot impulse response
plt.figure()
plt.stem(he, label='estimated')
plt.plot(h, 'g-', label='true')
plt.title('Estimated impulse response')
plt.xlabel(r'$k$')
plt.ylabel(r'$\hat{h}[k]$')
plt.legend();
```
**Exercise**
* Change the length `N` of the input signal. What happens?
* Change the variance $\sigma_n^2$ of the additive noise. What happens?
Solution: Increasing the length `N` of the input signal lowers the uncertainty in estimating the impulse response. The higher the variance of the additive white noise, the higher the uncertainties in the estimated impulse response.
**Copyright**
This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples*.
---
```python
%load_ext rpy2.ipython
%matplotlib inline
```
```python
import matplotlib.pyplot as plt
import numpy as np
import numpy.random as rnd
from scipy import stats
import sympy as sym
from IPython.display import Image
plt.rcParams['figure.figsize'] = (20, 7)
```
# Rare-event simulation
## Lecture 3
### Patrick Laub, Institut de Science Financière et d’Assurances
## Agenda
- Show you Markov chain Monte Carlo (MCMC)
- Go back to finish Markov chain example
- Explain MCMC
## MCMC: inputs
_Inputs_:
- $f_X(x)$, the _target density_ (known up to a normalising constant),
- $q(y \mid x)$, a _transition kernel_, gives the density of proposing a jump to $y$ given we're currently at $x$,
- $X_0$, our _starting position_, and $R$ the number of _replicates_ we want.
_Outputs_: $X_1, \dots, X_R \sim f_X(x)$, dependent but i.d.
_An example_:
- target is $f_X(x) \propto 2 + \sin(x)$ for $x \in [0, 4\pi]$,
- we propose $(Y \mid X) \sim \mathsf{Uniform}(X-1, X+1)$, so $q(y \mid x) = \frac12 1\{ |y-x| \le 1 \}$,
- start at $X_0 = 2\pi$, and ask for $R = 10^6$ samples.
## MCMC: Metropolis–Hastings algorithm
_Inputs_: target density $f_X(x)$, transition kernel $q(y \mid x)$, starting position $X_0$, and desired number of replicates $R$.
_Definition_: $$\alpha(X,Y) := \min\Bigl\{ \frac{ f_X(Y) \, q(X \mid Y)
}{ f_X(X) \, q(Y \mid X) } , 1 \Bigr\} .$$
To generate the $r$-th random variable:
$\quad$ Make a proposal $Y$ from $q(\,\cdot\, \mid X_{r-1})$
$\quad$ With probability $\alpha(X_{r-1}, Y)$:
$\quad$ $\quad$ We accept the proposal
$\quad$ $\quad$ $X_r \gets Y$
$\quad$ Otherwise:
$\quad$ $\quad$ We reject and stay where we are
$\quad$ $\quad$ $X_r \gets X_{r-1}$
Return $(X_1, \dots, X_R)$
For $r = 1$ to $R$
$\quad$ $Y \sim q(\,\cdot\, \mid X_{r-1})$
$\quad$ $U \sim \mathsf{Unif}(0,1)$
$\quad$ If
$U \le \alpha(X_{r-1}, Y) = \min\bigl\{ \frac{ f_X(Y) \, q(X_{r-1} \mid Y)
}{ f_X(X_{r-1}) \, q(Y \mid X_{r-1}) } , 1 \bigr\} $
$\quad$ $\quad$ $X_r \gets Y$
$\quad$ Else
$\quad$ $\quad$ $X_r \gets X_{r-1}$
$\quad$ End If
End For
Return $(X_1, \dots, X_R)$
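Before looking at a demo, here is a minimal NumPy sketch of this loop for the example above (target $f_X(x) \propto 2 + \sin(x)$ on $[0, 4\pi]$, uniform proposals on $(X_{r-1}-1, X_{r-1}+1)$, so the symmetric $q$ terms cancel in $\alpha$); it is illustrative only and not optimised:

```python
def f_unnorm(x):
    # target density, known only up to a normalising constant
    return (2 + np.sin(x)) * ((x >= 0) & (x <= 4*np.pi))

def metropolis_hastings(R=10**5, x0=2*np.pi):
    rng = np.random.default_rng(1)
    xs = np.empty(R)
    x = x0
    for r in range(R):
        y = x + rng.uniform(-1, 1)                 # symmetric proposal
        alpha = min(f_unnorm(y) / f_unnorm(x), 1)  # acceptance probability
        if rng.uniform() < alpha:
            x = y                                  # accept, otherwise stay put
        xs[r] = x
    return xs

samples = metropolis_hastings()
plt.hist(samples, bins=60, density=True);
```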
## Prepare yourself to see the coolest animation ever..
[Animation](https://chi-feng.github.io/mcmc-demo/app.html#HamiltonianMC,banana)
## How does MCMC help with rare event estimation?
Multiple ways. One method is the _Improved Cross-Entropy method_.
To estimate $\ell = \mathbb{P}(X > \gamma)$, with optimal IS density $g^*(x) \propto 1\{x > \gamma\} f_X(x)$, then:
1. Choose a family $f( \,\cdot\, ; \mathbf{v})$, $R$ (e.g. $R=10^6$).
2. Simulate $X_r \overset{\mathrm{i.i.d.}}{\sim} g^*( \,\cdot\, )$ for $r=1,\dots,R$ using MCMC.
3. Set $\mathbf{v}_*$ to be the MLE estimate of fitting $\{X_1,\dots, X_R\}$ to $f( \,\cdot\, ; \mathbf{v})$. That is,
$$
\DeclareMathOperator*{\argmax}{arg\,max}
\mathbf{v}_* = \argmax_{\mathbf{v}} \frac{1}{R} \sum_{r=1}^R \log \bigl[ f(X_r; \mathbf{v}) \bigr] .
$$
4. Return the result of IS with $f( \,\cdot\, ; \mathbf{v}_*)$ proposal.
This is _so much simpler_...
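To make these steps concrete, here is a minimal NumPy/SciPy sketch for the toy target $\ell = \mathbb{P}(Z > \gamma)$ with $Z \sim \mathsf{N}(0,1)$ and a unit-variance normal family $f(\,\cdot\,; v) = \mathsf{N}(v, 1)$; for brevity step 2 draws from the truncated normal exactly instead of running MCMC:

```python
gamma = 5

# Step 2: samples from g*(x), proportional to 1{x > gamma} * phi(x)
# (exact truncated-normal draws stand in for MCMC here)
Xs = stats.truncnorm.rvs(gamma, np.inf, size=10**5, random_state=0)

# Step 3: MLE fit of the family N(v, 1) to these samples
v_star = Xs.mean()

# Step 4: importance sampling with the fitted proposal N(v_star, 1)
Y = stats.norm.rvs(loc=v_star, size=10**6, random_state=1)
weights = stats.norm.pdf(Y) / stats.norm.pdf(Y, loc=v_star)
ell_hat = np.mean((Y > gamma) * weights)

print(ell_hat, stats.norm.sf(gamma))  # both should be around 2.9e-7
```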
## A very strange Markov chain example
Given $X_{i-1} = x_{i-1}$, how to get the next $X_i$?
Sample $E_i \sim \mathsf{Exponential}(\lambda)$ and either _jump left_ taking $X_i = x_{i-1} - E_i$ or _jump right_ taking $X_i = x_{i-1} + E_i$.
What are the rules for jumping left or right?
- If $x_{i-1} < -1$ we jump right
- If $x_{i-1} > 1$ we jump left.
- If $x_{i-1} \in (-1, 1)$ we jump left with probability
$$ \frac{ \frac{1}{(x+1)^2} }{ \frac{1}{(x+1)^2} + \frac{1}{(x-1)^2} } .$$
## R to generate a transition
Given $X_{i-1} = x_{i-1}$, how to get the next $X_i$?
Sample $E_i \sim \mathsf{Exponential}(\lambda)$ and either _jump left_ taking $X_i = x_{i-1} - E_i$ or _jump right_ taking $X_i = x_{i-1} + E_i$.
What are the rules for jumping left or right?
- If $x_{i-1} < -1$ we jump right
- If $x_{i-1} > 1$ we jump left.
- If $x_{i-1} \in (-1, 1)$ we jump left with probability
$$ \frac{ \frac{1}{(x+1)^2} }{ \frac{1}{(x+1)^2} + \frac{1}{(x-1)^2} } .$$
```r
%%R
lambda <- 5
rtransition <- function(x) {
E <- rexp(1, lambda)
probJumpLeft <- (1 / (x+1)^2) /
((1 / (x+1)^2) + (1 / (x-1)^2))
if (x > 1) {
return( x - E )
}
if (x < -1) {
return( x + E )
}
if (runif(1) < probJumpLeft) {
return( x - rexp(1, lambda) )
} else {
return( x + rexp(1, lambda) )
}
}
rtransition(0)
```
[1] 0.05895065
## Plot transition densities
```r
%%R
dtransition <- function(y, x) {
leftJump <- dexp( -(y-x), lambda )
rightJump <- dexp( (y-x), lambda )
if (x < -1) {
return(rightJump)
}
if (x > 1) {
return(leftJump)
}
probJumpLeft <- (1 / (x+1)^2) /
((1 / (x+1)^2) + (1 / (x-1)^2))
return(probJumpLeft*leftJump + (1-probJumpLeft)*rightJump)
}
```
```r
%%R
xGrid <- seq(-3, 3, 0.005)
pdfs <- c(dtransition(xGrid, 0))
pdfs <- c(pdfs, dtransition(xGrid, 0.5))
pdfs <- c(pdfs, dtransition(xGrid, -0.5))
pdfs <- c(pdfs, dtransition(xGrid, 1.1))
pdfs <- c(pdfs, dtransition(xGrid, -1.1))
allPDFs <- matrix(pdfs, ncol=5)
matplot(xGrid, allPDFs, type="l")
```
## And vectorise the transition simulation
```r
%%R
lambda <- 5
rtransition <- function(x) {
E <- rexp(1, lambda)
probJumpLeft <- (1 / (x+1)^2) /
((1 / (x+1)^2) + (1 / (x-1)^2))
if (x > 1) {
return( x - E )
}
if (x < -1) {
return( x + E )
}
if (runif(1) < probJumpLeft) {
return( x - rexp(1, lambda) )
} else {
return( x + rexp(1, lambda) )
}
}
rtransition(0)
```
[1] 0.5320334
```r
%%R
rtransitionVectorised <- function(x) {
R <- length(x)
Es <- rexp(R, lambda)
probJumpLeft <- (1 / (x+1)^2) /
((1 / (x+1)^2) + (1 / (x-1)^2))
jumpLeft <- (runif(R) < probJumpLeft)
jumpLeft[which(x < -1)] <- FALSE
jumpLeft[which(x > 1)] <- TRUE
jumpSizes <- (-1)^jumpLeft * Es
return(x + jumpSizes)
}
rtransitionVectorised(c(-1.5, 0, 1.5))
```
[1] -1.3049987 0.1950013 1.6950013
## Simulate the chain
```r
%%R
R <- 1000; N <- 5000
X <- matrix(rep(NA, N*R), nrow=N, ncol=R)
X[1,] <- rtransitionVectorised(rep(0, R))
for (n in 2:N)
X[n,] <- rtransitionVectorised(X[n-1,])
```
```r
%%R
# What's the distribution of X_N?
hist(X[N,], 40)
# library(ks)# plot(kde(X[N,]))
```
```r
%%R
# What does one sample path look like?
plot(X[,1], type="l")
```
## Compare histogram of $X_N$ to that of all $X_i$'s
```r
%%R
library(ks); plot(kde(X[N,]))
```
```r
%%R
library(ks); plot(kde(as.vector(X)))
```
## How does this compare with a different starting position?
```r
%%R
R <- 1000; N <- 500
X <- matrix(rep(NA, R*N), nrow=N, ncol=R)
X[1,] <- rtransitionVectorised(rep(100, R))
for (n in 2:N)
X[n,] <- rtransitionVectorised(X[n-1,])
```
```r
%%R
# Plot one sample path
plot(X[,1], type="l")
```
```r
%%R
# Plot histograms for X_N and all X_i's
#plot(kde(X[N,]))
library(ks); plot(kde(as.vector(X)))
#plot(kde(as.vector(X[1000:N,])))
```
## Markov chain Monte Carlo
_Input_: $f_X$, $R$, $q$, $X_0$
To generate the $r$-th random variable:
$\quad$ Make a proposal $Y$ from the distribution $q(\,\cdot\, \mid X_{r-1})$
$\quad$ With probability $\alpha(X_{r-1}, Y)$:
$\quad$ $\quad$ We accept the proposal, so $X_r \gets Y$
$\quad$ Otherwise:
$\quad$ $\quad$ We reject and stay where we are, so $X_r \gets X_{r-1}$
Return $(X_1, \dots, X_R)$
Here we use
$$\alpha(X,Y) := \min\Bigl\{ \frac{ f_X(Y) \, q(X \mid Y)
}{ f_X(X) \, q(Y \mid X) } , 1 \Bigr\} $$
_Input_: $f_X$, $R$, $q$, $X_0$
For $r = 1$ to $R$
$\quad$ $Y \sim q(\,\cdot\, \mid X_{r-1})$
$\quad$ $U \sim \mathsf{Unif}(0,1)$
$\quad$ If
$U \le \alpha(X_{r-1}, Y) = \min\bigl\{ \frac{ f_X(Y) \, q(X_{r-1} \mid Y)
}{ f_X(X_{r-1}) \, q(Y \mid X_{r-1}) } , 1 \bigr\} $
$\quad$ $\quad$ $X_r \gets Y$
$\quad$ Else
$\quad$ $\quad$ $X_r \gets X_{r-1}$
$\quad$ End If
End For
Return $(X_1, \dots, X_R)$
## Example: sampling from $Z \mid Z > 5$
Will propose jumps which are Laplace distributed (i.e. double exponential distributed)
$$ X \sim \mathsf{Laplace}(\mu, \lambda) \quad \Rightarrow \quad f_X(x) = \frac{1}{2\lambda} \exp \,\Bigl\{ -\frac{| x - \mu | }{\lambda} \Bigr\} $$
```python
xs = np.linspace(-5,5, 500)
plt.plot(xs, stats.laplace.pdf(xs), 'r');
```
```python
zs = np.linspace(3, 8, 500)
plt.plot(zs, (zs > 5) * stats.norm.pdf(zs) / (stats.norm.sf(5)));
```
_Input_:
$$f_X(x) \propto 1\{x > 5\} f_Z(x) , \quad R = 10^6, \quad X_0 = 5.01, \quad
q(x_r \mid x_{r-1}) = \frac{1}{2\lambda} \exp \,\Bigl\{ -\frac{| x_r - x_{r-1} | }{\lambda} \Bigr\}$$
Note: $q(x_r \mid x_{r-1}) = q(x_{r-1} \mid x_r)$
For $r = 1$ to $R$
$\quad$ $Y \sim \mathsf{Laplace}(X_{r-1}, \lambda)$
$\quad$ $U \sim \mathsf{Unif}(0,1)$
$\quad$ If
$U \le \frac{ f_X(Y) q(X_{r-1} \mid Y)
}{ f_X(X_{r-1}) q(Y \mid X_{r-1}) } = \frac{ f_X(Y) }{ f_X(X_{r-1}) } = 1\{Y > 5\} \mathrm{e}^{ \frac12 (X_{r-1}^2 - Y^2) } $
$\quad$ $\quad$ $X_r \gets Y$
$\quad$ Else
$\quad$ $\quad$ $X_r \gets X_{r-1}$
$\quad$ End If
End For
Return $(X_1, \dots, X_R)$
To generate the $r$-th random variable:
$\quad$ Make a proposal $Y$ from the distribution $\mathsf{Laplace}(X_{r-1}, \lambda)$
$\quad$ Three scenarios:
$\quad$ $\quad$ a) $Y$ is not valid ($f_X(Y) = 0$, e.g. $Y \le 5$)
$\quad$ $\quad$ $\quad$ We reject and stay where we are, so $X_r \gets X_{r-1}$
$\quad$ $\quad$ b) $Y$ is valid and more likely than $X_{r-1}$ ($\frac{ f_X(Y) }{ f_X(X_{r-1}) } \ge 1$)
$\quad$ $\quad$ $\quad$ We accept the proposal, so $X_r \gets Y$
$\quad$ $\quad$ c) $Y$ is valid but less likely than $X_{r-1}$ ($\frac{ f_X(Y) }{ f_X(X_{r-1}) } < 1$)
$\quad$ $\quad$ $\quad$ We accept with probability $\frac{ f_X(Y) }{ f_X(X_{r-1}) }$, and reject otherwise.
## Into R land
For $r = 1$ to $R$
$\quad$ $Y \sim \mathsf{Laplace}(X_{r-1}, \lambda)$
$\quad$ $U \sim \mathsf{Unif}(0,1)$
$\quad$ If
$U \le \frac{ f_X(Y) q(X_{r-1} \mid Y)
}{ f_X(X_{r-1}) q(Y \mid X_{r-1}) } = \frac{ f_X(Y) }{ f_X(X_{r-1}) } = 1\{Y > 5\} \mathrm{e}^{ \frac12 (X_{r-1}^2 - Y^2) } $
$\quad$ $\quad$ $X_r \gets Y$
$\quad$ Else
$\quad$ $\quad$ $X_r \gets X_{r-1}$
$\quad$ End If
End For
Return $(X_1, \dots, X_R)$
```r
%%R
lambda <- 10
Xstart <- 5.01
R <- 5 * 10^6
Xs <- rep(NA, R)
Xs[1] <- Xstart
for (r in 2:R) {
# Generate proposal
U1 <- (runif(1) < 0.5)
sign <- (-1)^U1
Y <- Xs[r-1] + sign * rexp(1, lambda)
# Calculate acceptance probability.
alpha <- (Y > 5) * exp(0.5 * (Xs[r-1]^2 - Y^2))
# Transition with this probability
U <- runif(1)
if (U < alpha) {
Xs[r] <- Y
} else {
Xs[r] <- Xs[r-1]
}
}
```
## The histogram of the samples against the desired density
```r
%%R
hist(Xs, 40, prob=T, ylim=c(0, 5.5))
zs <- seq(4.9, 7, 0.005)
lines(zs, (zs > 5) * dnorm(zs) / (1-pnorm(5)), col="red");
```
---
## Machine Learning
### Seminar 13. The EM algorithm
<br />
<br />
December 9, 2021
We will solve the problem of reconstructing a face image from a set of noisy pictures (taken from the Deep Bayes 2018 course, https://github.com/bayesgroup/deepbayes-2018).
You have $K$ photographs corrupted by electromagnetic noise. It is known that each photo contains a face inside a rectangular region of width $w$ whose starting position is unknown, plus a background that is the same in all photographs.
```python
from matplotlib import pyplot as plt
import numpy as np
```
```python
import zipfile
with zipfile.ZipFile('data_em.zip', 'r') as zip_ref:
zip_ref.extractall('.')
```
```python
DATA_FILE = "data_em"
w = 73 # face_width
```
```python
X = np.load(DATA_FILE)
```
```python
X.shape # H, W, K
```
(100, 200, 1000)
```python
plt.imshow(X[:, :, 7], cmap="Greys_r")
plt.axis("off")
```
```python
tH, tW, tw, tK = 2, 3, 1, 2
tX = np.arange(tH*tW*tK).reshape(tH, tW, tK)
tF = np.arange(tH*tw).reshape(tH, tw)
tB = np.arange(tH*tW).reshape(tH, tW)
ts = 0.1
ta = np.arange(1, (tW-tw+1)+1)
ta = ta / ta.sum()
tq = np.arange(1, (tW-tw+1)*tK+1).reshape(tW-tw+1, tK)
tq = tq / tq.sum(axis=0)[np.newaxis, :]
```
1. **Implement calculate_log_probability**
For the $k$-th image $X_k$ and a given position $d_k$:
$$ p(X_k \mid d_k,\,F,\,B,\, std) = \prod\limits_{ij}\begin{cases}
\mathcal{N}(X_k[i,j]\mid F[i,\,j-d_k],\, std^2),
& \text{if}\, (i,j)\in faceArea(d_k)\\
\mathcal{N}(X_k[i,j]\mid B[i,j],\, std^2), & \text{else}
\end{cases}
$$
Notes:
* $faceArea(d_k) = \{[i, j]| d_k \leq j \leq d_k + w - 1 \}$
* The prior is given by a learnable vector $a \in \mathbb{R}^{W-w+1}$: $$p(d_k \mid a) = a[d_k],\ \sum\limits_j a[j] = 1$$
* The full probabilistic model: $$ p(X, d \mid F,\,B,\,std,\,a) = \prod\limits_k p(X_k \mid d_k,\,F,\,B,\,std) p(d_k \mid a)$$
* Don't forget the logarithm!
* `scipy.stats.norm` may come in handy
```python
import scipy.stats
```
```python
def calculate_log_probability(X, F, B, s):
"""
Calculates log p(X_k|d_k, F, B, s) for all images X_k in X and
all possible face position d_k.
Parameters
----------
X : array, shape (H, W, K)
K images of size H x W.
F : array, shape (H, w)
Estimate of prankster's face.
B : array, shape (H, W)
Estimate of background.
s : float
Estimate of standard deviation of Gaussian noise.
Returns
-------
ll : array, shape(W-w+1, K)
ll[dw, k] - log-likelihood of observing image X_k given
that the prankster's face F is located at position dw
"""
H, W, K = X.shape
_, w = F.shape
# your code here
ll = np.zeros((W-w+1, K))
for dw in range(W-w+1):
combined = np.copy(B)
combined[:, dw:dw+w] = F
d_combined = X - np.expand_dims(combined, 2)
ll[dw] = scipy.stats.norm(0, s).logpdf(d_combined).sum(axis=(0,1))
return ll
```
```python
# run this cell to test your implementation
expected = np.array([[-3541.69812064, -5541.69812064],
[-4541.69812064, -6741.69812064],
[-6141.69812064, -8541.69812064]])
actual = calculate_log_probability(tX, tF, tB, ts)
assert np.allclose(actual, expected)
print("OK")
```
OK
2. **Implement calculate_lower_bound**
\begin{equation}\mathscr{L}(q, \,F, \,B,\, s,\, a) = \sum_k \biggl (\mathbb{E} _ {q( d_k)}\bigl ( \log p( X_{k} \mid {d}_{k} , \,F,\,B,\,s) +
\log p( d_k \mid a)\bigr) - \mathbb{E} _ {q( d_k)} \log q( d_k)\biggr) \end{equation}
Notes
* Use calculate_log_probability!
* Note that $q( d_k)$ and $p( d_k \mid a)$ are discrete. For example, $P(d_k=i \mid a) = a[i]$.
```python
def calculate_lower_bound(X, F, B, s, a, q):
"""
Calculates the lower bound L(q, F, B, s, a) for
the marginal log likelihood.
Parameters
----------
X : array, shape (H, W, K)
K images of size H x W.
F : array, shape (H, w)
Estimate of prankster's face.
B : array, shape (H, W)
Estimate of background.
s : float
Estimate of standard deviation of Gaussian noise.
a : array, shape (W-w+1)
Estimate of prior on position of face in any image.
q : array
q[dw, k] - estimate of posterior
of position dw
of prankster's face given image Xk
Returns
-------
L : float
The lower bound L(q, F, B, s, a)
for the marginal log likelihood.
"""
# your code here
return (q * (calculate_log_probability(X,F,B,s) + np.expand_dims(np.log(a), 1) - np.log(q))).sum()
```
```python
calculate_lower_bound(tX, tF, tB, ts, ta, tq)
```
-12761.187501001436
```python
# run this cell to test your implementation
expected = -12761.1875
actual = calculate_lower_bound(tX, tF, tB, ts, ta, tq)
assert np.allclose(actual, expected)
print("OK")
```
OK
3. **Implement the E step**
$$q(d_k) = p(d_k \mid X_k, \,F, \,B, \,s,\, a) =
\frac {p( X_{k} \mid {d}_{k} , \,F,\,B,\,s)\, p(d_k \mid a)}
{\sum_{d'_k} p( X_{k} \mid d'_k , \,F,\,B,\,s) \,p(d'_k \mid a)}$$
Notes
* Use calculate_log_probability!
* Work in logarithms and exponentiate only at the very end.
* For numerical stability it is recommended to use the following identity: $$\beta_i = \log{p_i(\dots)} \quad\rightarrow \quad
\frac{e^{\beta_i}}{\sum\limits_k e^{\beta_k}} =
\frac{e^{(\beta_i - \max_j \beta_j)}}{\sum\limits_k e^{(\beta_k- \max_j \beta_j)}}$$
```python
def run_e_step(X, F, B, s, a):
"""
Given the current esitmate of the parameters, for each image Xk
esitmates the probability p(d_k|X_k, F, B, s, a).
Parameters
----------
X : array, shape(H, W, K)
K images of size H x W.
F : array_like, shape(H, w)
Estimate of prankster's face.
B : array shape(H, W)
Estimate of background.
s : float
Estimate of standard deviation of Gaussian noise.
a : array, shape(W-w+1)
Estimate of prior on face position in any image.
Returns
-------
q : array
shape (W-w+1, K)
q[dw, k] - estimate of posterior of position dw
of prankster's face given image Xk
"""
# your code here
log_nom = calculate_log_probability(X,F,B,s) + np.expand_dims(np.log(a), 1)
mx = log_nom.max(axis=0)
nom = np.exp(log_nom - mx)
return nom / nom.sum(axis=0)
```
```python
run_e_step(tX, tF, tB, ts, ta)
```
array([[1., 1.],
[0., 0.],
[0., 0.]])
```python
# run this cell to test your implementation
expected = np.array([[ 1., 1.],
[ 0., 0.],
[ 0., 0.]])
actual = run_e_step(tX, tF, tB, ts, ta)
assert np.allclose(actual, expected)
print("OK")
```
OK
4. **Implement the M step**
We need to solve
\begin{equation}\mathscr{L}(q, \,F, \,B,\, s,\, a) = \sum_k \biggl (\mathbb{E} _ {q( d_k)}\bigl ( \log p( X_{k} \mid {d}_{k} , \,F,\,B,\,s) +
\log p( d_k \mid a)\bigr) - \mathbb{E} _ {q( d_k)} \log q( d_k)\biggr)\rightarrow \max\limits_{\theta, a} \end{equation}
After lengthy calculations we obtain:
$$a[j] = \frac{\sum_k q( d_k = j )}{\sum_{j'} \sum_{k'} q( d_{k'} = j')}$$$$F[i, m] = \frac 1 K \sum_k \sum_{d_k} q(d_k)\, X^k[i,\, m+d_k]$$\begin{equation}B[i, j] = \frac {\sum_k \sum_{ d_k:\, (i, \,j) \,\not\in faceArea(d_k)} q(d_k)\, X^k[i, j]}
{\sum_k \sum_{d_k: \,(i, \,j)\, \not\in faceArea(d_k)} q(d_k)}\end{equation}\begin{equation}s^2 = \frac 1 {HWK} \sum_k \sum_{d_k} q(d_k)
\sum_{i,\, j} (X^k[i, \,j] - Model^{d_k}[i, \,j])^2\end{equation}
where $Model^{d_k}[i, j]$ is the image composed of the background with the face shifted by $d_k$.
Notes
* Update the parameters in the order $a$, $F$, $B$, $s$.
* Use each updated parameter when estimating the next one.
```python
def run_m_step(X, q, w):
"""
Estimates F, B, s, a given esitmate of posteriors defined by q.
Parameters
----------
X : array, shape (H, W, K)
K images of size H x W.
q :
q[dw, k] - estimate of posterior of position dw
of prankster's face given image Xk
w : int
Face mask width.
Returns
-------
F : array, shape (H, w)
Estimate of prankster's face.
B : array, shape (H, W)
Estimate of background.
s : float
Estimate of standard deviation of Gaussian noise.
a : array, shape (W-w+1)
Estimate of prior on position of face in any image.
"""
# your code here
H, W, K = X.shape
dw, _ = q.shape
w = W - dw + 1
a = q.sum(axis=1)/q.sum()
F = np.zeros((H, w))
for dk in range(dw):
F += (q[dk] * X[:, dk:dk+w]).sum(axis=2) / K
B = np.zeros((H, W))
denom = np.zeros((H, W))
for dk in range(dw):
if dk > 0:
denom[:, :dk] += q[dk].sum()
B[:, :dk] += (q[dk] * X[:, :dk]).sum(axis=2)
if dk + w < W:
B[:, dk+w:] += (q[dk] * X[:, dk+w:]).sum(axis=2)
denom[:, dk + w:] += q[dk].sum()
B /= denom
s2 = 0
for dk in range(dw):
model = np.copy(B)
model[:, dk:dk+w] = F
s2 += (q[dk] * ((X - np.expand_dims(model,2)) ** 2)).sum()
s2 /= H * W * K
return F, B, np.sqrt(s2), a
```
```python
run_m_step(tX, tq, tw)
```
(array([[3.27777778],
[9.27777778]]),
array([[ 0.48387097, 2.5 , 4.52941176],
[ 6.48387097, 8.5 , 10.52941176]]),
0.9486806229147358,
array([0.13888889, 0.33333333, 0.52777778]))
```python
# run this cell to test your implementation
expected = [np.array([[ 3.27777778],
[ 9.27777778]]),
np.array([[ 0.48387097, 2.5 , 4.52941176],
[ 6.48387097, 8.5 , 10.52941176]]),
0.94868,
np.array([ 0.13888889, 0.33333333, 0.52777778])]
actual = run_m_step(tX, tq, tw)
for a, e in zip(actual, expected):
assert np.allclose(a, e)
print("OK")
```
OK
5. **Implement the EM algorithm**
```python
def run_EM(X, w, F=None, B=None, s=None, a=None, tolerance=0.001,
max_iter=50):
"""
    Runs the EM loop until the lower bound on the likelihood of observing X
    given the current parameter estimates stops improving by more than a fixed
    tolerance.
Parameters
----------
X : array, shape (H, W, K)
K images of size H x W.
w : int
Face mask width.
F : array, shape (H, w), optional
Initial estimate of prankster's face.
B : array, shape (H, W), optional
Initial estimate of background.
s : float, optional
Initial estimate of standard deviation of Gaussian noise.
a : array, shape (W-w+1), optional
Initial estimate of prior on position of face in any image.
tolerance : float, optional
Parameter for stopping criterion.
max_iter : int, optional
Maximum number of iterations.
Returns
-------
F, B, s, a : trained parameters.
"""
# your code here
H, W, N = X.shape
if F is None:
F = np.random.randint(0, 255, (H, w))
if B is None:
B = np.random.randint(0, 255, (H, W))
if a is None:
a = np.ones(W - w + 1)
a /= np.sum(a)
if s is None:
s = np.random.rand()*64*64
l_prev = -np.inf
for it in range(max_iter):
print(f"iteration = {it}")
q = run_e_step(X, F, B, s, a)
print("e")
F, B, s, a = run_m_step(X, q, w)
print("m")
print(s)
if it == max_iter - 1:
print("no convergence")
break
l_cur = calculate_lower_bound(X, F, B, s, a, q)
if l_cur - l_prev < tolerance:
print(f"converged in {it} iterations {l_cur - l_prev}")
break
else:
l_prev = l_cur
return F, B, s, a
```
Now let's decode the picture:
```python
def show(F, i=1, n=1):
"""
shows face F at subplot i out of n
"""
plt.subplot(1, n, i)
plt.imshow(F, cmap="Greys_r")
plt.axis("off")
```
```python
%%time
F, B, s, a = [None] * 4
lens = [50, 100, 300, 500, 1000]
iters = [5, 1, 1, 1, 1]
plt.figure(figsize=(20, 5))
for i, (l, it) in enumerate(zip(lens, iters)):
F, B, s, a = run_EM(X[:, :, :l], w, F, B, s, a, max_iter=it)
print(s)
show(F, i+1, 5)
```
And the background:
```python
show(B)
```
---
# Sparse-Group Lasso Inductive Matrix Completion via ADMM
```python
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
```
Fix the random state
```python
random_state = np.random.RandomState(0x0BADCAFE)
```
## Problem?
```python
PROBLEM = "classification" if True else "regression"
```
### Synthetic data
```python
assert PROBLEM in ("classification", "regression")
```
Produce a low rank matrix
```python
n_samples, n_objects = 19990, 201
n_rank, n_features = 5, 20
```
```python
n_samples, n_objects = 1990, 2010
n_rank, n_features = 5, 20
```
```python
n_samples, n_objects = 199, 2010
n_rank, n_features = 5, 20
```
```python
n_samples, n_objects = 199, 201
n_rank, n_features = 5, 200
```
```python
n_samples, n_objects = 75550, 40
n_rank, n_features = 5, 20
```
```python
n_samples, n_objects = 199, 201
n_rank, n_features = 5, 20
```
```python
n_samples, n_objects = 1990, 2010
n_rank, n_features = 5, 100
```
```python
n_samples, n_objects = 550, 550
n_rank, n_features = 5, 25
```
Transform the problem
```python
from sgimc.utils import make_imc_data, sparsify
X, W_ideal, Y, H_ideal, R_full = make_imc_data(
n_samples, n_features, n_objects, n_features,
n_rank, scale=(0.05, 0.05), noise=0,
binarize=(PROBLEM == "classification"),
random_state=random_state)
```
Drop the bulk of the values from $R$
```python
R, mask = sparsify(R_full, 0.10, random_state=random_state)
```
Plot the matrix
```python
fig = plt.figure(figsize=(12, 12))
ax = fig.add_subplot(111, title="The synthetic matrix")
ax.imshow(R.todense(), cmap=plt.cm.RdBu, origin="upper")
print("Observed entries: %d / %d" % (R.nnz, np.prod(R.shape)))
plt.show()
```
# The IMC problem
```python
from sgimc import IMCProblem
```
The IMC problem is:
$$\begin{aligned}
& \underset{W, H}{\text{minimize}}
& & \sum_{(i,j)\in \Omega} l(p_{ij}, R_{ij})
+ \nu_W \sum_{m=1}^{d_1} \bigl\| W' e_m \bigr\|_2
+ \nu_H \sum_{m=1}^{d_2} \bigl\| H' e_m \bigr\|_2
+ \mu_W \bigl\| W \bigr\|_1
+ \mu_H \bigl\| H \bigr\|_1
\,, \\
& \text{with}
& & p_{ij} = e_i'\, X W \, H' Y'\, e_j
\,,
\end{aligned}$$
where $X \in \mathbb{R}^{n_1 \times d_1}$, $Y \in \mathbb{R}^{n_2 \times d_2}$,
$W \in \mathbb{R}^{d_1\times k}$ and $H \in \mathbb{R}^{d_2\times k}$.
### Quadratic Approximation
The target objective without regularization (holding $H$ fixed) is
$$ F(W; H)
= \sum_{(i,j)\in \Omega}
l(p_{ij}, R_{ij})
\,, $$
in which $p = p(W) = (e_i' X W H' Y' e_j)_{(i,j)\in \Omega}$ are the current
predictions.
The Quadratic Approximation to $F$ around $W_0$ is
$$ Q(W; W_0)
= F(W_0)
+ \nabla F(W_0)' \delta
+ \frac12 \delta' \nabla^2 F(W_0) \delta
\,, $$
for $\delta = \mathtt{vec}(W - W_0)$. Now the gradient of $F$ w.r.t. vec-form of $W$ is
$$ \nabla F(W_0)
= \mathtt{vec}\bigl(
X' g Y H
\bigr)
\,, $$
with $g = g(W_0) = (l{'}_{p}(p(W_0)_{ij}, R_{ij}))_{(i,j)\in \Omega}$ is $\Omega$-sparse
matrix of first-order (gradient) data. For a matrix $D \in \mathbb{R}^{d_1 \times k}$
$$ \nabla^2F(W_0)\, \mathtt{vec}(D)
= \mathtt{vec}\Bigl(
X' \underbrace{\bigl\{h \odot (X D H'Y')\bigr\}}_{\Omega-\text{sparse}} YH
\Bigr)
\,, $$
where $h = h(W_0) = (l{''}_{pp}(p(W_0)_{ij}, R_{ij}))_{(i,j)\in \Omega}$ is
the $\Omega$-sparse matrix of the second order (hessian) values and $\odot$
is the element-wise matrix product.
The quadratic approximation with respect to $H$ around $H_0$ holding $W$ and $\Sigma$ fixed
is similar up to transposing $R$ and swapping $X \leftrightarrow Y$ and $W \leftrightarrow H$
in the above formulae.
**Note** that although the expressions for the gradient and the hessian-vector product presented above are identical to the fast operations in *section 3.1* of [H. Yu et al. (2014)](http://bigdata.ices.utexas.edu/publication/993/), the fomulae here have been derived independently. In fact, they are obvious products of simple block-matrix and **vech** algebra.
The implementation below is, however, completely original (although, nothing special).
#### Implementation details
To compute the gradient and the hessian-vector product we need the following
"elementary" operations:
* $\mathtt{Op}_d: D \mapsto (e_i' X D H'Y' e_j)_{(i, j)\in \Omega}$ -- a map
of some $\mathbb{R}^{d_1\times k}$ dense $D$ to a $\mathbb{R}^{n_1\times n_2}$
$\Omega$-sparse matrix $S$;
* $\mathtt{Op}_s: S \mapsto X'S YH$ mapping an $\mathbb{R}^{n_1\times n_2}$
$\Omega$-sparse $S$ to a $\mathbb{R}^{d_1\times k}$ dense matrix $D$.
The gradient becomes
$$ \nabla F(W_0)
= \mathtt{vec}(\mathtt{Op}_s(g)) \,, $$
and the hessian-vector product transforms into
$$ \nabla^2 F(W_0)\,\mathtt{vec}(D)
= \mathtt{vec}\bigl(\mathtt{Op}_s\bigl(h\odot \mathtt{Op}_d(D)\bigr)\bigr) \,. $$
In fact the predictions $p = p(W_0)$ also form an $\mathbb{R}^{n_1\times n_2}$
$\Omega$-sparse matrix, which can be computed by $p(W_0) = \mathtt{Op}_d(W_0)$. The
gradient $g(W_0)$ and hessian $h(W_0)$ statistics are also $\Omega$-sparse,
and can be computed by element-wise application of $l'_p$ and $l''_{pp}$
to $p$.
Similar formulae hold for $H$ with appropriate re-labellings and transpositions.
**Note** that the sparsity structure remains unchanged and the thin matrix $YH$
can be cached, since both $H$ and $Y$ fit in memory and $k < d_2 \ll n_2$.
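To make the two operations concrete, here is a small NumPy/SciPy sketch (not the actual `sgimc` implementation, which is parallelised), assuming $\Omega$ is stored as index arrays `rows`, `cols` and that `shape` is $(n_1, n_2)$:

```python
from scipy.sparse import coo_matrix

def op_d(D, X, Y, H, rows, cols, shape):
    """Map a dense D (d1 x k) to the Omega-sparse matrix (X D H' Y') restricted to Omega."""
    XD, YH = X.dot(D), Y.dot(H)
    vals = np.einsum("ij,ij->i", XD[rows], YH[cols])
    return coo_matrix((vals, (rows, cols)), shape=shape).tocsr()

def op_s(S, X, Y, H):
    """Map an Omega-sparse S (n1 x n2) to the dense matrix X' S Y H."""
    return X.T.dot(S.dot(Y.dot(H)))
```

Assuming $g$ and $h$ are stored as $\Omega$-sparse SciPy matrices, the gradient is then `op_s(g, X, Y, H)` and the hessian-vector product is `op_s(h.multiply(op_d(D, X, Y, H, rows, cols, shape)), X, Y, H)`, mirroring the formulas above.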
```python
# from sgimc import op_s, op_d
```
Define the objectives
The $l_2$ loss $l(p, t) = \frac12 (p-t)^2$.
```python
from sgimc.qa_objective import QAObjectiveL2Loss
```
The log-loss $l(p, t) = \log \bigl(1 + e^{-t p}\bigr)$ for $t\in \{-1, +1\}$
and $p\in \mathbb{R}$.
\begin{align}
\sigma(x)
&= \frac1{1+e^{-x}}
\,, \\
\sigma'(x)
&= -\frac{- e^{-x}}{(1+e^{-x})^2}
= \frac{e^{-x}}{1+e^{-x}} \frac1{1+e^{-x}}
= (1-\sigma(x))\,\sigma(x)
\,, \\
l(p, t)
&= \log \bigl(1 + e^{-t p}\bigr)
= \log \bigl(1 + e^{- \lvert p \rvert}\bigr)
- \min\bigl\{t p, 0\bigr\}
\,, \\
l_p'(p, t)
&= \frac{-t e^{-t p}}{1 + e^{-t p}}
= -t (1 - \sigma(t p))
\,, \\
l_p''(p, t)
&= (1 - \sigma(t p))\sigma(t p)
= (1 - \sigma(p))\sigma(p)
\,.
\end{align}
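These three formulas translate directly into NumPy (a numerically-stable sketch only; the packaged `QAObjectiveLogLoss` imported below is what is actually used):

```python
from scipy.special import expit  # sigma(x) = 1 / (1 + exp(-x))

def log_loss(p, t):
    # log(1 + exp(-t p)) computed without overflow, using |t| = 1
    return np.log1p(np.exp(-np.abs(p))) - np.minimum(t * p, 0.0)

def log_loss_grad(p, t):
    return -t * (1.0 - expit(t * p))

def log_loss_hess(p, t):
    s = expit(p)
    return s * (1.0 - s)
```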
```python
from sgimc.qa_objective import QAObjectiveLogLoss
```
Huber loss:
$$ l(x; \epsilon)
= \begin{cases}
\frac12 x^2
& \text{if } \lvert x \rvert \leq \epsilon\,, \\
\epsilon \bigl(\lvert x \rvert - \frac\epsilon2\bigr)
& \text{otherwise}
\end{cases}
\,. $$
Therefore
$$ l_p'
= \begin{cases}
x & \text{if } \lvert x \rvert \leq \epsilon\,, \\
\epsilon \frac{x}{\lvert x \rvert}
& \text{otherwise}
\end{cases}
\,, $$
and
$$ l_{pp}''
= \begin{cases}
1 & \text{if } \lvert x \rvert \leq \epsilon\,, \\
0 & \text{otherwise}
\end{cases}
\,. $$
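Again as a sketch (the packaged class imported below is the one used later), the elementwise value, gradient and hessian of the Huber loss are:

```python
def huber_loss(x, eps=1.0):
    absx = np.abs(x)
    return np.where(absx <= eps, 0.5 * x**2, eps * (absx - 0.5 * eps))

def huber_grad(x, eps=1.0):
    return np.clip(x, -eps, eps)

def huber_hess(x, eps=1.0):
    return (np.abs(x) <= eps).astype(float)
```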
```python
from sgimc.qa_objective import QAObjectiveHuberLoss
```
Choose the objective
```python
if PROBLEM == "classification":
QAObjectiveLoss = QAObjectiveLogLoss
else:
QAObjectiveLoss = QAObjectiveL2Loss # QAObjectiveHuberLoss
problem = IMCProblem(QAObjectiveLoss, X, Y, R, n_threads=4)
```
### Optimisation
Fix $H$ and consider the problem with respect to $W$:
$$\begin{aligned}
& \underset{W \in \mathbb{R}^{d_1\times k}}{\text{miminize}}
& & Q(W; W_0)
+ \sum_{m=1}^{d_1}
\nu_m \bigl\| W' e_m \bigr\|_2
+ \mu_m \bigl\| W' e_m \bigr\|_1
+ \frac{\kappa_m}2 \bigl\| W' e_m \bigr\|_2^2
\,.
\end{aligned}$$
Let's move to an equivalent problem by splitting the variables in
the objective, introducing linear consensus constraints and adding
$d_1$ ridge-like regularizers (augmentation)
$$\begin{aligned}
& \underset{Z_m, \delta_m \in \mathbb{R}^{k\times 1}}{\text{minimize}}
& & Q(\delta; W_0)
+ \sum_{m=1}^{d_1}
\nu_m \bigl\| Z_m \bigr\|_2
+ \mu_m \bigl\| Z_m \bigr\|_1
+ \frac{\kappa_m}2 \bigl\| Z_m \bigr\|_2^2
+ \frac1{2\eta}
\sum_{m=1}^{d_1} \bigl\| \delta_m - (Z_m - W_0'e_m) \bigr\|_2^2
\,, \\
& \text{subject to}
& & Z_m - \delta_m = W_0' e_m\,, m=1 \ldots d_1
\,,
\end{aligned}$$
with $\sum_{m=1}^{d_1} e_m \delta_m' = \delta$.
The objective is convex and the constraints are linear, which means that
Strong Duality holds for this problem. The lagrangian is
\begin{align}
\mathcal{L}(Z_m, \delta_m; \lambda_m)
&= F(W_0)
+ \nabla F(W_0)' \mathtt{vec}(\delta)
+ \frac12 \mathtt{vec}(\delta)' \nabla^2 F(W_0) \mathtt{vec}(\delta)
\\
& + \sum_{m=1}^{d_1}
\nu_m \bigl\| Z_m \bigr\|_2
+ \mu_m \bigl\| Z_m \bigr\|_1
+ \frac{\kappa_m}2 \bigl\| Z_m \bigr\|_2^2
\\
& + \frac1\eta
\sum_{m=1}^{d_1} \lambda_m'\bigl(\delta_m - (Z_m - W_0'e_m)\bigr)
+ \frac1{2\eta}
\sum_{m=1}^{d_1} \bigl\| \delta_m - (Z_m - W_0'e_m) \bigr\|_2^2
\,.
\end{align}
Note the following expressions
\begin{align}
\sum_{m=1}^{d_1} \lambda_m'\bigl(\delta_m - (Z_m - W_0'e_m)\bigr)
&= \mathtt{tr}\bigl((\delta - (Z - W_0))\Lambda'\bigr) \,,
\\
\sum_{m=1}^{d_1} \bigl\| \delta_m - (Z_m - W_0'e_m) \bigr\|_2^2
&= \Bigl\| \delta - (Z - W_0) \Bigr\|_\text{F}^2 \,,
\\
\end{align}
where $\Lambda = \sum_{m=1}^{d_1}e_m \lambda_m'$ and $Z = \sum_{m=1}^{d_1}e_m Z_m'$.
#### Sub-0
Consider the following subproblem ($\mathtt{Sub}_0^\text{QA}$):
$$\begin{aligned}
& \underset{\delta \in \mathbb{R}^{d_1\times k}}{\text{minimize}}
& & \nabla F(W_0)' \mathtt{vec}(\delta)
+ \frac12 \mathtt{vec}(\delta)' \nabla^2 F(W_0) \mathtt{vec}(\delta)
\\
% & & & + \frac1\eta
% \mathtt{tr}\bigl((\delta - (Z - W_0))\Lambda'\bigr)
% + \frac1{2\eta}
% \Bigl\| \delta - (Z - W_0) \Bigr\|_\text{F}^2
& & & + \frac1{2\eta}
\Bigl\| \delta + W_0 - Z + \Lambda \Bigr\|_\text{F}^2
- \frac1{2\eta} \| \Lambda \|_\text{F}^2
\,.
\end{aligned}$$
The first-order conditions for this convex problem w.r.t. $\mathtt{vec}(\delta)$
are
$$ \nabla F(W_0) + \nabla^2 F(W_0) \mathtt{vec}(\delta)
+ \frac1\eta
\mathtt{vec}\bigl( \delta - (\underbrace{Z - W_0 - \Lambda}_{D}) \bigr)
= 0 \,. $$
Since computing the inverse of the Hessian is out of the question, we use the Conjugate
Gradient method to solve for $\delta$, because it queries the Hessian
only through matrix-vector products, which are efficiently computable.
The map $\mathtt{Sub}_0^\text{QA}(D; \eta)$ returns the $\delta$ which satisfies
$$\Bigl( \nabla^2 F(W_0) + \frac1\eta I\Bigr)\mathtt{vec}(\delta)
= \frac1\eta \mathtt{vec}\bigl(D \bigr) - \nabla F(W_0) \,. $$
```python
from sgimc.algorithm.admm import sub_0_cg
```
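Schematically, the CG solve for $\mathtt{Sub}_0^\text{QA}$ only needs the gradient and a Hessian-vector product. The sketch below uses SciPy's `cg` with a `LinearOperator`; the callable `hess_v` and the array `grad` are assumed to come from the quadratic-approximation objective, and this is not the actual `sub_0_cg` implementation.
```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def sub_0_qa_sketch(grad, hess_v, D, eta):
    # solve (hess + I / eta) vec(delta) = vec(D) / eta - grad,
    # touching the Hessian only through products hess_v(v)
    n = D.size
    A = LinearOperator((n, n), matvec=lambda v: hess_v(v) + v / eta, dtype=float)
    b = D.reshape(-1) / eta - grad.reshape(-1)
    delta, info = cg(A, b)
    return delta.reshape(D.shape)
```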
Using a more general solver, such as `L-BFGS`, we can tackle the original
objective instead of its quadratic approximation.
Consider the subproblem ($\mathtt{Sub}_0^\text{Orig}$):
$$\begin{aligned}
& \underset{W \in \mathbb{R}^{d_1\times k}}{\text{minimize}}
& & F(W; H) + \frac1{2\eta}
\Bigl\|W - Z + \Lambda \Bigr\|_\text{F}^2
- \frac1{2\eta} \| \Lambda \|_\text{F}^2
\,.
\end{aligned}$$
L-BFGS requires the gradient of this augmented objective:
$$ \nabla F(W)
+ \frac1\eta
\mathtt{vec}\bigl( W - (Z - \Lambda) \bigr) \,. $$
```python
from sgimc.algorithm.admm import sub_0_lbfgs
```
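A minimal version of this step with `scipy.optimize.minimize` could look as follows; `f_and_grad` (returning the value and gradient of $F(\cdot; H)$) is an assumed callable, and the sketch is not the package's `sub_0_lbfgs`.
```python
import numpy as np
from scipy.optimize import minimize

def sub_0_lbfgs_sketch(f_and_grad, W0, Z, Lam, eta):
    target = Z - Lam

    def fun(w_flat):
        W = w_flat.reshape(W0.shape)
        f, g = f_and_grad(W)                   # value and gradient of F(W; H)
        diff = W - target
        f += 0.5 / eta * np.sum(diff ** 2)     # augmentation term
        return f, (g + diff / eta).reshape(-1)

    res = minimize(fun, W0.reshape(-1), jac=True, method="L-BFGS-B")
    return res.x.reshape(W0.shape)
```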
#### Sub-m
The next set of subproblems is represented by the following problem ($\mathtt{Sub}_m$):
$$\begin{aligned}
& \underset{Z_m \in \mathbb{R}^{k\times 1}}{\text{minimize}}
& & \mu_m \bigl\| Z_m \bigr\|_1 + \nu_m \bigl\| Z_m \bigr\|_2
+ \frac{\kappa_m}2 \bigl\| Z_m \bigr\|_2^2
\\
% & & & + \frac1\eta \lambda_m'\bigl(\delta_m - (Z_m - W_0'e_m)\bigr)
% + \frac1{2\eta} \bigl\| \delta_m - (Z_m - W_0'e_m) \bigr\|_2^2
& & & + \frac1{2\eta} \bigl\| (\delta_m + W_0'e_m + \lambda_m) - Z_m\bigr\|_2^2
- \frac1{2\eta} \| \lambda_m \|_2^2
\,.
\end{aligned}$$
After a **lot of math**, this problem admits a closed-form solution:
$$ Z_m
= \frac1{1 + \kappa_m \eta}
\biggl(1 - \frac{\nu_m \eta}{\|S(V_m; \mu_m \eta)\|_2}\biggr)_+
S(V_m; \mu_m \eta)
\,, $$
where $V_m = \delta_m + W_0'e_m + \lambda_m$ and
$$ S(u; \mu_m \eta)
= \Bigl(\Bigl(1 - \frac{\mu_m \eta}{\lvert u_i \rvert}\Bigr)_+ u_i\Bigr)_{i=1}^k\,, $$
is the **soft-thresholding** operator.
The map $\mathtt{Sub}_m(D; \eta)$ returns $Z_m$ defined above.
```python
from sgimc.algorithm.admm import sub_m
```
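The closed form is a composition of a soft-thresholding step, a group-shrinkage factor and a ridge rescaling. A small NumPy sketch of this prox (illustrative only, not the packaged `sub_m`):
```python
import numpy as np

def soft_threshold(u, thresh):
    # S(u; thresh) = sign(u) * max(|u| - thresh, 0), applied elementwise
    return np.sign(u) * np.maximum(np.abs(u) - thresh, 0.0)

def sub_m_sketch(V_m, mu, nu, kappa, eta):
    s = soft_threshold(V_m, mu * eta)
    norm = np.linalg.norm(s)
    group = 0.0 if norm == 0.0 else max(1.0 - nu * eta / norm, 0.0)
    return group * s / (1.0 + kappa * eta)
```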
#### ADMM
Thus the QA-ADMM for $W$ around $W_0$ with $H$ fixed is the following
iterative procedure:
\begin{align}
Z^{t+1}_m &= \mathtt{Sub}_m(W^t_m + \lambda^t_m) \,,\, m = 1\ldots d_1 \,,\\
W^{t+1} &= \mathtt{Sub}_0(Z^{t+1} - W_0 - \Lambda^t) + W_0 \,,\\
% W^{t+1} &= \mathtt{Sub}_0(Z^t - W_0 - \Lambda^t) + W_0 \,,\\
% Z^{t+1}_m &= \mathtt{Sub}_m(W^{t+1}_m + \lambda^t_m) \,,\, m = 1\ldots d_1 \,,\\
\Lambda^{t+1} &= \Lambda^t + (W^{t+1} - Z^{t+1})\,,\\
\end{align}
where $W^{t+1}_m$ is the $m$-th row of $W^{t+1}$, $Z^{t+1}_m$ is the
$m$-th row of $Z^{t+1}$ and $\lambda_m$ is the $m$-th row of $\Lambda$.
These iterations converge to a fixed point, which is the
solution of the original optimisation problem. If stopped early, the
current values of $W^t$ and $Z^t$ will be close to each other; however,
$Z^t$ will be sparse while $W^t$ remains dense.
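Written out as code, one pass of the scheme is just three updates. In the schematic below, `prox_rows` stands for the row-wise $\mathtt{Sub}_m$ map and `qa_argmin` for $\mathtt{Sub}_0$; both are assumed callables, so this is a sketch of the iteration rather than the package's ADMM routine.
```python
import numpy as np

def qa_admm_sketch(W0, prox_rows, qa_argmin, n_iterations=50):
    W, Z, Lam = W0.copy(), W0.copy(), np.zeros_like(W0)
    for _ in range(n_iterations):
        Z = prox_rows(W + Lam)                # Z^{t+1}_m = Sub_m(W^t_m + lambda^t_m)
        W = qa_argmin(Z - W0 - Lam) + W0      # W^{t+1} = Sub_0(Z^{t+1} - W_0 - Lambda^t) + W_0
        Lam = Lam + (W - Z)                   # dual update
    return W, Z
```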
Note that we can also consider ADMM with a linear approximation of $F$
w.r.t. $W$ at $W_0$, instead of the quadratic (LA-ADMM). This way the algorithm
reduces to proximal gradient descent with step $\eta$. Although it does not utilize
the second-order information, it can be fused with Nesterov's accelerated
gradient.
```python
from sgimc.algorithm import admm_step
def step_qaadmm(problem, W, H, C, eta, method="l-bfgs", sparse=True,
n_iterations=50, rtol=1e-5, atol=1e-8):
approx_type = "quadratic" if method in ("cg",) else "linear"
Obj = problem.objective(W, H, approx_type=approx_type)
return admm_step(Obj, W, C, eta, sparse=sparse, method=method,
n_iterations=n_iterations, rtol=rtol, atol=atol)
```
```python
from sgimc.algorithm.decoupled import step as decoupled_step
def step_decoupled(problem, W, H, C, eta, rtol=1e-5, atol=1e-8):
Obj = problem.objective(W, H, approx_type="linear")
return decoupled_step(Obj, W, C, eta, rtol=rtol, atol=atol)
```
Ad-hoc procedure. No guarantees for convergence.
```python
# def step_adhoc(problem, W, H, C, eta, rtol=1e-5, atol=1e-8):
# Obj = problem.objective(W, H, approx_type="quadratic")
# delta = sub_0_cg(np.zeros_like(W), Obj, eta=eta, tol=1e-8)
# return sub_m(delta + W, *C, eta=eta)
```
```python
# def QA_argmin(D, Obj, tol=1e-8):
# # set up the CG arguments
# x = D.reshape(-1).copy()
# b = - Obj.grad().reshape(-1)
# Ax = lambda x: Obj.hess_v(x.reshape(D.shape)).reshape(-1)
# n_iter = simple_cg(Ax, b, x, tol=tol)
# return x.reshape(D.shape)
```
Thus Sparse Group IMC via QA-ADMM is the following iterative procedure:
* $W^{t+1} = \mathtt{ADMM}\bigl(W^t; H^t\bigr)$,
* $H^{t+1} = \mathtt{ADMM}\bigl(H^t; W^{t+1}\bigr)$,
until convergence.
```python
from sgimc import imc_descent
```
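The outer loop itself is a plain Gauss-Seidel alternation; a schematic sketch (with `admm_for_W` / `admm_for_H` as assumed callables wrapping the step above) is:
```python
def alternations_sketch(W, H, admm_for_W, admm_for_H, n_outer=100):
    for _ in range(n_outer):
        W = admm_for_W(W, H)    # W^{t+1} = ADMM(W^t; H^t)
        H = admm_for_H(H, W)    # H^{t+1} = ADMM(H^t; W^{t+1})
    return W, H
```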
Track the loss information: the objective value and regularization on the training data, and the value on the full matrix.
```python
from sgimc.utils import performance
```
### Illustration
```python
step_fn = step_qaadmm
# step_fn = step_decoupled
```
$$\bigl(C_\mathtt{lasso}, C_\mathtt{group}, C_\mathtt{ridge}\bigr) = C \,.$$
It seems that $C_\mathtt{lasso} > C_\mathtt{group}$ must hold so that
individual sparsity precedes group sparsity.
```python
if PROBLEM == "classification":
C = 1e0, 1e-1, 1e-3
eta = 1e0
else:
# C = 2e-5, 2e-3, 0
C = 2e-3, 2e-4, 1e-4 # 1e-2
eta = 1e1
```
```python
if step_fn == step_decoupled:
eta = 1e-3
```
Let's see what the feature coefficients look like.
```python
from sgimc.utils import plot_WH, plot_loss
```
Initialization
```python
K = 10 # n_rank
# K = n_rank
W_0 = random_state.normal(size=(X.shape[1], K))
H_0 = random_state.normal(size=(Y.shape[1], K))
# W_0 = W_ideal.copy() # + random_state.normal(scale=0.1, size=(X.shape[1], K))
# H_0 = H_ideal.copy() # + random_state.normal(scale=0.1, size=(Y.shape[1], K))
```
Now in this experiment the ideal solution is an identity matrix stacked atop a zero matrix.
```python
plot_WH(W_ideal, H_ideal)
loss_arr, exp_type, norm_type = performance(
problem, W_ideal, H_ideal, C, R_full)
print("The loss on the initial guess is:")
print("%.3e + %.3e -- partial matrix" % (loss_arr[0, -1], loss_arr[1, -1]))
print("%.3e -- full matrix" % loss_arr[3, -1])
print("score %.4f" % loss_arr[2, -1])
```
```python
plt.imshow(np.dot(W_ideal, H_ideal.T))
```
The initial guess is:
```python
plot_WH(W_0, H_0)
loss_arr, exp_type, norm_type = performance(
problem, W_0, H_0, C, R_full)
print("The loss on the initial guess is:")
print("%.3e + %.3e -- partial matrix" % (loss_arr[0, -1], loss_arr[1, -1]))
print("%.3e -- full matrix" % loss_arr[3, -1])
print("score %.4f" % loss_arr[2, -1])
```
```python
plt.imshow(np.dot(W_0, H_0.T))
```
Run!
```python
W, H = W_0.copy(), H_0.copy()
```
Set up the parameters of the sub-algorithm.
```python
step_kwargs = {
"C": C, # the regularizr constants (C_lasso, C_group, C_ridge)
"eta": eta, # the eta of the ADMM (larger - faster but more unstable)
"rtol": 1e-5, # the relative tolerance for stopping the ADMM
"atol": 1e-8, # the absolute tolerance
"method": "l-bfgs", # the method to use in Sub_0
"n_iterations": 2, # the number of iterations of the inner ADMM
}
# n_iterations = 2, 10, 25, 50
```
Run the alternating minimization with ADMM as the sub-algorithm.
```python
W, H = imc_descent(problem, W, H,
step_fn, # the inner optimization
                   step_kwargs=step_kwargs,  # arguments for the inner optimizer
                   n_iterations=1000,  # the number of outer iterations (Gauss-Seidel)
return_history=True, # Record the evolution of the matrices (W, H)
rtol=1e-5, # relative stopping tolerance for the outer iterations
atol=1e-7, # absolute tolerance
verbose=True, # show the progress bar
check_product=True, # use the product W H' for stopping
)
```
17%|█▋ | 166/1000 [01:03<04:34, 3.04it/s]
Inspect
```python
plot_WH(abs(W[..., -1]), abs(H[..., -1]))
loss_arr, exp_type, norm_type = performance(problem, W, H, C, R_full)
print("The loss on the final estimates is:")
print("%.3e + %.3e -- partial matrix" % (loss_arr[0, -1], loss_arr[1, -1]))
print("%.3e -- full matrix" % loss_arr[3, -1])
print("score %.4f" % loss_arr[2, -1])
```
```python
plt.imshow(np.dot(W[..., -1], H[..., -1].T))
```
```python
fig = plt.figure(figsize=(12, 12))
ax = fig.add_subplot(111, title="Elementwise loss value")
R_hat = problem.prediction(W[..., -1], H[..., -1])
ax.imshow(problem.loss(R_hat, R_full), cmap=plt.cm.hot, origin="upper")
plt.show()
```
```python
print(str(np.array(["#", "."])[np.isclose(W[..., -1], 0)*1]).replace("' '", ""))
```
[['#.........']
['....#.....']
['..#.......']
['......#...']
['.........#']
['..........']
['..........']
['..........']
['..........']
['..........']
['..........']
['..........']
['..........']
['..........']
['..........']
['..........']
['..........']
['..........']
['..........']
['..........']
['..........']
['..........']
['..........']
['..........']
['..........']]
```python
print(str(np.array(["#", "."])[np.isclose(H[..., -1], 0)*1]).replace("' '", ""))
```
[['#.........']
['....#.....']
['..#.......']
['......#...']
['.........#']
['..........']
['..........']
['..........']
['..........']
['..........']
['..#.......']
['..........']
['..........']
['..........']
['..........']
['..........']
['..........']
['..........']
['..........']
['..........']
['..........']
['..........']
['..........']
['..........']
['..........']]
```python
np.linalg.norm(W[..., -1], 2, axis=-1)
```
array([ 84.33452596, 116.27803303, 68.39865667, 94.56376613,
84.2719859 , 0. , 0. , 0. ,
0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ])
```python
np.linalg.norm(H[..., -1], 2, axis=-1)
```
array([ 92.46341611, 66.942329 , 114.10945471, 82.50728563,
92.46151265, 0. , 0. , 0. ,
0. , 0. , 0.19861052, 0. ,
0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ])
```python
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111)
ax.imshow(~ np.isclose(np.dot(W[..., -1], H[..., -1].T),
np.dot(W_ideal, H_ideal.T)),
cmap=plt.cm.binary_r)
plt.show()
```
```python
plt.hist(abs(np.dot(W[..., -1], H[..., -1].T)).reshape(-1), bins=20) ;
```
```python
plt.hist(abs(R_hat).reshape(-1), bins=200) ;
```
```python
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111, projection='3d', xlabel="col", ylabel="row")
ZZ = problem.loss(R_hat, R_full)
mesh_ = np.meshgrid(*[np.linspace(0, 1, num=n) for n in ZZ.shape[::-1]])
surf = ax.plot_surface(*mesh_, ZZ, alpha=0.5, lw=0, antialiased=True,
cmap=plt.cm.coolwarm)
fig.colorbar(surf, shrink=0.5, aspect=10)
ax.view_init(37, 15)
```
```python
ZZ[~mask].std()
```
0.079106809138822079
```python
ZZ[mask].std()
```
0.080047102742687851
```python
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111, projection='3d', xlabel="col", ylabel="row")
ZZ = np.dot(W[..., -1], H[..., -1].T)
mesh_ = np.meshgrid(*[np.linspace(0, 1, num=n) for n in ZZ.shape[::-1]])
surf = ax.plot_surface(*mesh_, ZZ, alpha=0.5, lw=0, antialiased=True,
cmap=plt.cm.coolwarm)
fig.colorbar(surf, shrink=0.5, aspect=10)
ax.view_init(37, 15)
```
```python
plot_loss(loss_arr, exp_type, norm_type,
fig_size=4, max_cols=4, yscale="log")
```
<hr/>
| 1c1d0af6afb28a4056d794d859b247b5e497157e | 667,094 | ipynb | Jupyter Notebook | experiments/sgimc_by_qaadmm_prototype.ipynb | ivannz/SGIMC | cde56459d1d49576a5a6979a353ac27253233f3d | ["MIT"] | 11 | 2018-05-03T14:29:01.000Z | 2018-12-11T11:15:53.000Z | experiments/sgimc_by_qaadmm_prototype.ipynb | ivannz/SGIMC | cde56459d1d49576a5a6979a353ac27253233f3d | ["MIT"] | null | null | null | experiments/sgimc_by_qaadmm_prototype.ipynb | ivannz/SGIMC | cde56459d1d49576a5a6979a353ac27253233f3d | ["MIT"] | 1 | 2019-09-03T08:40:06.000Z | 2019-09-03T08:40:06.000Z | 408.258262 | 211,504 | 0.933885 | true | 8,307 | Qwen/Qwen-72B | 1. YES 2. YES | 0.851953 | 0.740174 | 0.630594 | __label__eng_Latn | 0.462704 | 0.303411 |
```python
import sys
sys.path.append('..')
import torch
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sympy import simplify_logic
from lens.utils.base import validate_network
from lens.utils.relu_nn import get_reduced_model, prune_features
from lens import logic
import lens
torch.manual_seed(0)
np.random.seed(0)
```
```python
x = pd.read_csv('dsprites_c_train.csv', index_col=0)
y = pd.read_csv('dsprites_y_train.csv', index_col=0)
```
```python
base_concepts = ['color', 'shape', 'scale', 'rotation', 'x_pos', 'y_pos']
base_concepts
```
['color', 'shape', 'scale', 'rotation', 'x_pos', 'y_pos']
```python
colors = ['white']
shapes = ['square', 'ellipse', 'heart']
scale = ['very small', 'small', 's-medium', 'b-medium', 'big', 'very big']
rotation = ['0°', '5°', '10°', '15°', '20°', '25°', '30°', '35°']
x_pos = ['x0', 'x2', 'x4', 'x6', 'x8', 'x10', 'x12', 'x14', 'x16', 'x18', 'x20', 'x22', 'x24', 'x26', 'x28', 'x30']
y_pos = ['y0', 'y2', 'y4', 'y6', 'y8', 'y10', 'y12', 'y14', 'y16', 'y18', 'y20', 'y22', 'y24', 'y26', 'y28', 'y30']
concepts = colors + shapes + scale + rotation + x_pos + y_pos
```
```python
x_train = torch.tensor(x.values, dtype=torch.float)
print(x_train.shape)
# note: y_train and n_classes are defined in a later cell; this cell was re-run after that cell
print(y_train.shape)
print(n_classes)
x
```
torch.Size([5530, 50])
torch.Size([5530, 18])
18
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
<th>9</th>
<th>...</th>
<th>40</th>
<th>41</th>
<th>42</th>
<th>43</th>
<th>44</th>
<th>45</th>
<th>46</th>
<th>47</th>
<th>48</th>
<th>49</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1.0</td>
<td>1.0</td>
<td>0.0</td>
<td>0.0</td>
<td>1.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>...</td>
<td>0.0</td>
<td>1.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
</tr>
<tr>
<th>1</th>
<td>1.0</td>
<td>0.0</td>
<td>1.0</td>
<td>0.0</td>
<td>1.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>...</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>1.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
</tr>
<tr>
<th>2</th>
<td>1.0</td>
<td>0.0</td>
<td>0.0</td>
<td>1.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>1.0</td>
<td>0.0</td>
<td>...</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
</tr>
<tr>
<th>3</th>
<td>1.0</td>
<td>0.0</td>
<td>0.0</td>
<td>1.0</td>
<td>1.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>...</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>1.0</td>
<td>0.0</td>
</tr>
<tr>
<th>4</th>
<td>1.0</td>
<td>0.0</td>
<td>0.0</td>
<td>1.0</td>
<td>0.0</td>
<td>0.0</td>
<td>1.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>...</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>5525</th>
<td>1.0</td>
<td>0.0</td>
<td>0.0</td>
<td>1.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>1.0</td>
<td>0.0</td>
<td>0.0</td>
<td>...</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
</tr>
<tr>
<th>5526</th>
<td>1.0</td>
<td>0.0</td>
<td>0.0</td>
<td>1.0</td>
<td>1.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>...</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
</tr>
<tr>
<th>5527</th>
<td>1.0</td>
<td>0.0</td>
<td>1.0</td>
<td>0.0</td>
<td>1.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>...</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
</tr>
<tr>
<th>5528</th>
<td>1.0</td>
<td>0.0</td>
<td>1.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>1.0</td>
<td>0.0</td>
<td>...</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>1.0</td>
<td>0.0</td>
</tr>
<tr>
<th>5529</th>
<td>1.0</td>
<td>0.0</td>
<td>1.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>1.0</td>
<td>0.0</td>
<td>0.0</td>
<td>...</td>
<td>0.0</td>
<td>0.0</td>
<td>1.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
</tr>
</tbody>
</table>
<p>5530 rows × 50 columns</p>
</div>
```python
y_train = torch.tensor(y.values, dtype=torch.float)
x_test = x_train
n_classes = y_train.size(1)
print(n_classes)
y_train
```
18
tensor([[1., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 1., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]])
```python
y_train.sum(dim=0)
```
tensor([321., 311., 295., 324., 286., 295., 325., 302., 302., 319., 293., 297.,
302., 301., 329., 326., 321., 281.])
```python
torch.manual_seed(0)
np.random.seed(0)
layers = [
torch.nn.Linear(x_train.size(1), 20 * n_classes),
torch.nn.LeakyReLU(),
lens.nn.XLinear(20, 10, n_classes),
torch.nn.LeakyReLU(),
lens.nn.XLinear(10, 5, n_classes),
torch.nn.LeakyReLU(),
lens.nn.XLinear(5, 1, n_classes),
torch.nn.Softmax(),
]
model = torch.nn.Sequential(*layers)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
loss_form = torch.nn.BCELoss()
model.train()
need_pruning = True
for epoch in range(6000):
# forward pass
optimizer.zero_grad()
y_pred = model(x_train)
# Compute Loss
loss = loss_form(y_pred, y_train)
for module in model.children():
if isinstance(module, torch.nn.Linear):
loss += 0.001 * torch.norm(module.weight, 1)
break
# backward pass
loss.backward()
optimizer.step()
if epoch > 3000 and need_pruning:
prune_features(model, n_classes)
#need_pruning = False
# compute accuracy
if epoch % 500 == 0:
y_pred_d = torch.argmax(y_pred, dim=1)
y_train_d = torch.argmax(y_train, dim=1)
accuracy = y_pred_d.eq(y_train_d).sum().item() / y_train.size(0)
print(f'Epoch {epoch}: train accuracy: {accuracy:.4f}')
```
/home/pietro/anaconda3/envs/dev/lib/python3.7/site-packages/torch/nn/modules/container.py:117: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
input = module(input)
Epoch 0: train accuracy: 0.0588
Epoch 500: train accuracy: 0.8325
Epoch 1000: train accuracy: 0.9467
Epoch 1500: train accuracy: 0.9467
Epoch 2000: train accuracy: 0.9467
Epoch 2500: train accuracy: 1.0000
Epoch 3000: train accuracy: 1.0000
Epoch 3500: train accuracy: 1.0000
Epoch 4000: train accuracy: 1.0000
Epoch 4500: train accuracy: 1.0000
Epoch 5000: train accuracy: 1.0000
Epoch 5500: train accuracy: 1.0000
# Local explanations
```python
np.set_printoptions(precision=2, suppress=True)
outputs = []
for i, (xin, yin) in enumerate(zip(x_train, y_train)):
model_reduced = get_reduced_model(model, xin)
for module in model_reduced.children():
if isinstance(module, torch.nn.Linear):
wa = module.weight.detach().numpy()
break
output = model_reduced(xin)
pred_class = torch.argmax(output)
true_class = torch.argmax(y_train[i])
# generate local explanation only if the prediction is correct
if pred_class.eq(true_class):
local_explanation = logic.relu_nn.explain_local(model, x_train, y_train, xin, concepts)
print(f'Input {(i+1)}')
print(f'\tx={xin.detach().numpy()}')
print(f'\ty={y_train[i].detach().numpy()}')
print(f'\ty={output.detach().numpy()}')
#print(f'\tw={wa}')
print(f'\tExplanation: {local_explanation}')
print()
outputs.append(output)
if i > 1:
break
```
Input 1
x=[1. 1. 0. 0. 1. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0.
0. 0.]
y=[1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
y=[1. 0.02 0.04 0.02 0.03 0.46 0.56 0. 0. 0. 0. 0. 0.48 0.
0.62 0. 0.04 0. ]
Explanation: square & very small
Input 2
x=[1. 0. 1. 0. 1. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.
0. 0.]
y=[0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
y=[0.72 0. 0. 0. 0. 0. 1. 0.47 0.36 0.55 0.48 0.33 0.47 0.
0.62 0. 0.04 0. ]
Explanation: ellipse & very small
Input 3
x=[1. 0. 0. 1. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 1. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
y=[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
y=[0. 0. 0. 0. 0.55 0. 0. 0. 0. 0. 0.53 0. 0. 0.
0.62 0. 1. 0. ]
Explanation: heart & big
# Combine local explanations
```python
y_train_d = torch.argmax(y_train, dim=1)
for target_class in range(n_classes):
global_explanation, predictions, counter = logic.combine_local_explanations(model, x_train, y_train,
topk_explanations=10,
target_class=target_class,
concept_names=concepts)
y2 = torch.argmax(y_train, dim=1) == target_class
accuracy = sum(predictions == y2.detach().numpy().squeeze()) / len(predictions)
print(f'Class {target_class} - Global explanation: "{global_explanation}" - Accuracy: {accuracy:.4f}')
```
Class 0 - Global explanation: "square & very small" - Accuracy: 1.0000
Class 1 - Global explanation: "square & small" - Accuracy: 1.0000
Class 2 - Global explanation: "square & s-medium" - Accuracy: 1.0000
Class 3 - Global explanation: "square & b-medium" - Accuracy: 1.0000
Class 4 - Global explanation: "square & big" - Accuracy: 1.0000
Class 5 - Global explanation: "square & very big" - Accuracy: 1.0000
Class 6 - Global explanation: "ellipse & very small" - Accuracy: 1.0000
Class 7 - Global explanation: "ellipse & small" - Accuracy: 1.0000
Class 8 - Global explanation: "ellipse & s-medium" - Accuracy: 1.0000
Class 9 - Global explanation: "ellipse & b-medium" - Accuracy: 1.0000
Class 10 - Global explanation: "ellipse & big" - Accuracy: 1.0000
Class 11 - Global explanation: "ellipse & very big" - Accuracy: 1.0000
Class 12 - Global explanation: "heart & very small" - Accuracy: 1.0000
Class 13 - Global explanation: "heart & small" - Accuracy: 1.0000
Class 14 - Global explanation: "heart" - Accuracy: 0.7231
Class 15 - Global explanation: "heart & b-medium" - Accuracy: 1.0000
Class 16 - Global explanation: "heart & big" - Accuracy: 1.0000
Class 17 - Global explanation: "heart & very big" - Accuracy: 1.0000
| 4da43cf1bf0c656ad402102961acad6c9a70c187 | 23,777 | ipynb | Jupyter Notebook | examples/example_pruning_02_dsprites.ipynb | pietrobarbiero/logic_explained_networks | 238f2a220ae8fc4f31ab0cf12649603aba0285d5 | ["Apache-2.0"] | 18 | 2021-05-24T07:47:57.000Z | 2022-01-05T14:48:39.000Z | examples/example_pruning_02_dsprites.ipynb | pietrobarbiero/logic_explained_networks | 238f2a220ae8fc4f31ab0cf12649603aba0285d5 | ["Apache-2.0"] | 1 | 2021-08-25T16:33:10.000Z | 2021-08-25T16:33:10.000Z | examples/example_pruning_02_dsprites.ipynb | pietrobarbiero/deep-logic | 238f2a220ae8fc4f31ab0cf12649603aba0285d5 | ["Apache-2.0"] | 2 | 2021-05-26T08:15:14.000Z | 2021-08-23T18:58:16.000Z | 32.437926 | 226 | 0.368549 | true | 5,700 | Qwen/Qwen-72B | 1. YES 2. YES | 0.721743 | 0.709019 | 0.51173 | __label__kor_Hang | 0.20741 | 0.027249 |
[Open in Colab](https://colab.research.google.com/github/SzymonSkrobiszewski/ON2022/blob/main/Untitled4.ipynb)
```python
def lagrange(X, Y, x):
    # evaluate the Lagrange interpolating polynomial through the points (X[i], Y[i]) at x
    y = 0
    length = len(X)
    for i in range(length):
        result = 1
        for j in range(length):
            if j != i:
                result *= (x - X[j]) / (X[i] - X[j])
        y += Y[i] * result
    return int(y)

X = [1, 2, 3]
Y = [1, 4, 9]
x = int(input("Enter a point: "))
print(lagrange(X, Y, x))
```
Enter a point: 5
25
```python
```
| 96d4c1075f26b7ecbb2a4910a6f1ac00aee5a593 | 2,000 | ipynb | Jupyter Notebook | Untitled4.ipynb | SzymonSkrobiszewski/ON2022 | e71f77001e6cd0739a051423c3b7b36ccdf0dbb5 | ["MIT"] | null | null | null | Untitled4.ipynb | SzymonSkrobiszewski/ON2022 | e71f77001e6cd0739a051423c3b7b36ccdf0dbb5 | ["MIT"] | null | null | null | Untitled4.ipynb | SzymonSkrobiszewski/ON2022 | e71f77001e6cd0739a051423c3b7b36ccdf0dbb5 | ["MIT"] | null | null | null | 24.096386 | 232 | 0.4225 | true | 184 | Qwen/Qwen-72B | 1. YES 2. YES | 0.932453 | 0.76908 | 0.717131 | __label__eng_Latn | 0.158533 | 0.504468 |
# Combining Models and Data: COVID-19
When modeling a real epidemic, the agreement between the model and the observed data is extremely important.
In this notebook we study the SEIAHR model proposed for COVID-19 by [Coelho et al](https://www.medrxiv.org/content/10.1101/2020.06.15.20132050v1). In this model, we have the following compartments:
Mathematically:
\begin{align}
\frac{dS}{dt}&=-\lambda [(1-\chi) S],\\
\frac{dE}{dt}&= \lambda [(1-\chi) S]-\alpha E,\\
\frac{dI}{dt}&= (1-p)\alpha E - \delta I -\phi I,\\
\frac{dA}{dt}&= p\alpha E - \gamma A,\\
\frac{dH}{dt}&= \phi I -(\rho+\mu) H,\\
\frac{dR}{dt}&= \delta I + \rho H+\gamma A,
\end{align}
where $\lambda=\beta (I+A)$.
To make it easier to work with data inside the Sage environment, we need to install Pandas. To do so,
run the following command in a terminal:
```
sage -sh
```
This command starts a Sage subsession in which you can install pandas with the usual commands:
```
pip install pandas
pip install parameter-sherpa
exit
```
The `exit` command after the installation leaves the Sage subsession.
```python
import numpy as np
import pandas as pd
# %display typeset
```
```python
def model(t, y, params):
S, E, I, A, H, R, C, D = y
chi, phi, beta, rho, delta, gamma, alpha, mu, p, q, r = params
lamb = beta * (I + A)
# Turns on Quarantine on day q and off on day q+r
chi *= ((1 + np.tanh(t - q)) / 2) * ((1 - np.tanh(t - (q + r))) / 2)
return [
-lamb * ((1 - chi) * S), # dS/dt
lamb * ((1 - chi) * S) - alpha * E, # dE/dt
(1 - p) * alpha * E - delta * I - phi * I, # dI/dt
p * alpha * E - gamma * A,
phi * I - (rho + mu) * H, # dH/dt
delta * I + rho * H + gamma * A, # dR/dt
        phi * I,  # dC/dt: cumulative hospitalizations
        mu * H  # dD/dt: cumulative deaths
]
```
```python
chi = .3
phi = 0.012413633926076584
beta = 0.27272459855759813
rho = 0.2190519831830368
delta = 0.04168480042146949
gamma = 0.04
alpha = 0.3413355572047603
mu = 0.02359234606623134
p = 0.7693029079871165
q = 50
r = 55
```
```python
T = ode_solver()
T.function = model
T.algorithm='rk8pd'
inits = [.99, 0, 1e-4, 0, 0, 0, 0, 0]
tspan = [0,200]
T.ode_solve(tspan, inits, num_points=200, params=[chi,phi,beta,rho,delta,gamma,alpha,mu,p,q,r])
```
```python
def get_sim_array(sol):
sim = np.array([y for t,y in sol])
return sim
get_sim_array(T.solution).shape
```
(201, 8)
```python
popRJ = 6.32e6
def plot_sol(sol):
sim = get_sim_array(sol)*popRJ
P = list_plot(sim[:,0],legend_label='S')
colors = ['blue','red','pink','green','yellow','orange','black','purple']
for i,var in enumerate(['E','I','A','H','R','C','D']):
P += list_plot(sim[:,i+1],color=colors[i+1],legend_label=var)
show(P)
plot_sol(T.solution)
```
```python
sims = get_sim_array(T.solution)
sims[-1,-2]*popRJ
```
328307.38107624033
## Getting the data
The data used here were obtained from the [Brasil.io](https://brasil.io) website. Only the data for the state of Rio de Janeiro were extracted and saved to a CSV file.
```python
def load_data(state):
df = pd.read_csv(f'dados_{state}.csv')
df['data'] = pd.to_datetime(df.data)
# df.set_index('data', inplace=True)
return df
```
```python
dfRJ = load_data('RJ')
ld = len(dfRJ)
html(dfRJ.tail().to_html())
```
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>data</th>
<th>date</th>
<th>last_available_confirmed</th>
<th>last_available_deaths</th>
<th>incidencia_casos</th>
<th>incidencia_morte</th>
<th>ew</th>
</tr>
</thead>
<tbody>
<tr>
<th>170</th>
<td>2020-08-22</td>
<td>2020-08-22</td>
<td>210464</td>
<td>15267</td>
<td>65.0</td>
<td>3428.0</td>
<td>34</td>
</tr>
<tr>
<th>171</th>
<td>2020-08-23</td>
<td>2020-08-23</td>
<td>210948</td>
<td>15292</td>
<td>25.0</td>
<td>484.0</td>
<td>35</td>
</tr>
<tr>
<th>172</th>
<td>2020-08-24</td>
<td>2020-08-24</td>
<td>211360</td>
<td>15392</td>
<td>100.0</td>
<td>412.0</td>
<td>35</td>
</tr>
<tr>
<th>173</th>
<td>2020-08-25</td>
<td>2020-08-25</td>
<td>214003</td>
<td>15560</td>
<td>168.0</td>
<td>2643.0</td>
<td>35</td>
</tr>
<tr>
<th>174</th>
<td>2020-08-26</td>
<td>2020-08-26</td>
<td>214003</td>
<td>15560</td>
<td>0.0</td>
<td>0.0</td>
<td>35</td>
</tr>
</tbody>
</table>
```python
subnot=1
dfRJ.set_index('data')[['last_available_confirmed','last_available_deaths']].plot();
```
```python
dfRJ['last_available_deaths'].plot()
```
## Fitting the model to the data
There are many ways to fit a dynamical model to the available data. We will start with optimization, searching for the parameter values that minimize the deviation between the model and the data. The fit is done simultaneously for cumulative cases and cumulative deaths. Below we install and import the [Sherpa](https://parameter-sherpa.readthedocs.io/en/latest/gettingstarted/guide.html) library:
```python
!pip install parameter-sherpa
```
Collecting parameter-sherpa
Downloading parameter-sherpa-1.0.6.tar.gz (513 kB)
[K |████████████████████████████████| 513 kB 4.5 MB/s eta 0:00:01
[?25hRequirement already satisfied: pandas>=0.20.3 in /home/fccoelho/Downloads/SageMath/local/lib/python3.9/site-packages (from parameter-sherpa) (1.3.1)
Collecting pymongo>=3.5.1
Downloading pymongo-3.12.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (531 kB)
[K |████████████████████████████████| 531 kB 7.4 MB/s eta 0:00:01
[?25hRequirement already satisfied: numpy>=1.8.2 in /home/fccoelho/Downloads/SageMath/local/lib/python3.9/site-packages (from parameter-sherpa) (1.19.5)
Requirement already satisfied: scipy>=1.0.0 in /home/fccoelho/Downloads/SageMath/local/lib/python3.9/site-packages (from parameter-sherpa) (1.5.4)
Requirement already satisfied: scikit-learn>=0.19.1 in /home/fccoelho/Downloads/SageMath/local/lib/python3.9/site-packages (from parameter-sherpa) (0.24.2)
Collecting flask>=0.12.2
Downloading Flask-2.0.1-py3-none-any.whl (94 kB)
[K |████████████████████████████████| 94 kB 2.8 MB/s eta 0:00:01
[?25hCollecting GPyOpt>=1.2.5
Downloading GPyOpt-1.2.6.tar.gz (56 kB)
[K |████████████████████████████████| 56 kB 4.8 MB/s eta 0:00:01
[?25hCollecting enum34
Downloading enum34-1.1.10-py3-none-any.whl (11 kB)
Requirement already satisfied: matplotlib in /home/fccoelho/Downloads/SageMath/local/lib/python3.9/site-packages (from parameter-sherpa) (3.3.4)
Collecting Jinja2>=3.0
Using cached Jinja2-3.0.1-py3-none-any.whl (133 kB)
Collecting itsdangerous>=2.0
Downloading itsdangerous-2.0.1-py3-none-any.whl (18 kB)
Requirement already satisfied: click>=7.1.2 in /home/fccoelho/Downloads/SageMath/local/lib/python3.9/site-packages (from flask>=0.12.2->parameter-sherpa) (8.0.1)
Collecting Werkzeug>=2.0
Downloading Werkzeug-2.0.1-py3-none-any.whl (288 kB)
[K |████████████████████████████████| 288 kB 12.2 MB/s eta 0:00:01
[?25hCollecting GPy>=1.8
Downloading GPy-1.10.0.tar.gz (959 kB)
[K |████████████████████████████████| 959 kB 12.8 MB/s eta 0:00:01
[?25hRequirement already satisfied: six in /home/fccoelho/Downloads/SageMath/local/lib/python3.9/site-packages (from GPy>=1.8->GPyOpt>=1.2.5->parameter-sherpa) (1.15.0)
Collecting paramz>=0.9.0
Downloading paramz-0.9.5.tar.gz (71 kB)
[K |████████████████████████████████| 71 kB 7.1 MB/s eta 0:00:01
[?25hRequirement already satisfied: cython>=0.29 in /home/fccoelho/Downloads/SageMath/local/lib/python3.9/site-packages (from GPy>=1.8->GPyOpt>=1.2.5->parameter-sherpa) (0.29.21)
Collecting MarkupSafe>=2.0
Downloading MarkupSafe-2.0.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (30 kB)
Requirement already satisfied: python-dateutil>=2.7.3 in /home/fccoelho/Downloads/SageMath/local/lib/python3.9/site-packages (from pandas>=0.20.3->parameter-sherpa) (2.8.0)
Requirement already satisfied: pytz>=2017.3 in /home/fccoelho/Downloads/SageMath/local/lib/python3.9/site-packages (from pandas>=0.20.3->parameter-sherpa) (2020.4)
Requirement already satisfied: decorator>=4.0.10 in /home/fccoelho/Downloads/SageMath/local/lib/python3.9/site-packages (from paramz>=0.9.0->GPy>=1.8->GPyOpt>=1.2.5->parameter-sherpa) (4.4.2)
Requirement already satisfied: threadpoolctl>=2.0.0 in /home/fccoelho/Downloads/SageMath/local/lib/python3.9/site-packages (from scikit-learn>=0.19.1->parameter-sherpa) (2.2.0)
Requirement already satisfied: joblib>=0.11 in /home/fccoelho/Downloads/SageMath/local/lib/python3.9/site-packages (from scikit-learn>=0.19.1->parameter-sherpa) (1.0.1)
Requirement already satisfied: pillow>=6.2.0 in /home/fccoelho/Downloads/SageMath/local/lib/python3.9/site-packages (from matplotlib->parameter-sherpa) (8.1.2)
Requirement already satisfied: kiwisolver>=1.0.1 in /home/fccoelho/Downloads/SageMath/local/lib/python3.9/site-packages (from matplotlib->parameter-sherpa) (1.0.1)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.3 in /home/fccoelho/Downloads/SageMath/local/lib/python3.9/site-packages (from matplotlib->parameter-sherpa) (2.4.7)
Requirement already satisfied: cycler>=0.10 in /home/fccoelho/Downloads/SageMath/local/lib/python3.9/site-packages (from matplotlib->parameter-sherpa) (0.10.0)
Requirement already satisfied: setuptools in /home/fccoelho/Downloads/SageMath/local/lib/python3.9/site-packages (from kiwisolver>=1.0.1->matplotlib->parameter-sherpa) (51.1.1)
Building wheels for collected packages: parameter-sherpa, GPyOpt, GPy, paramz
Building wheel for parameter-sherpa (setup.py) ... [?25ldone
[?25h Created wheel for parameter-sherpa: filename=parameter_sherpa-1.0.6-py2.py3-none-any.whl size=542118 sha256=8836fcef51a6e46b135b20539b48b4a7e87ce274b6bb8f630bf39378f80edbfd
Stored in directory: /home/fccoelho/.cache/pip/wheels/5e/e2/ac/eb3e175761289cb9a9c5d86045009b9e192ed0df15554f637d
Building wheel for GPyOpt (setup.py) ... [?25ldone
[?25h Created wheel for GPyOpt: filename=GPyOpt-1.2.6-py3-none-any.whl size=83621 sha256=173cc00fed6f5f8590501dafed53f5f21635365d0961027867aae75d14e8bddb
Stored in directory: /home/fccoelho/.cache/pip/wheels/11/b8/44/0282cdff2277bc12f04266de6f104099dec02411879a0ac19f
Building wheel for GPy (setup.py) ... [?25ldone
[?25h Created wheel for GPy: filename=GPy-1.10.0-cp39-cp39-linux_x86_64.whl size=3286527 sha256=7b5f179a6788cd0a1275f12e328eaaa1309580d1119e93858e863c88422f4bc4
Stored in directory: /home/fccoelho/.cache/pip/wheels/78/fd/57/7c1e4a6f9a5380e2536af9809075ba085b1bb8d38ee84ea183
Building wheel for paramz (setup.py) ... [?25ldone
[?25h Created wheel for paramz: filename=paramz-0.9.5-py3-none-any.whl size=102550 sha256=cb0e77e7644d17d8ea46d040c3b03b55de262e17a4fb8106b97b3cabc1efea73
Stored in directory: /home/fccoelho/.cache/pip/wheels/9c/5f/9b/c4273ae8f869387214be2b99598d1b71dbf00672576cb85e74
Successfully built parameter-sherpa GPyOpt GPy paramz
Installing collected packages: paramz, MarkupSafe, Werkzeug, Jinja2, itsdangerous, GPy, pymongo, GPyOpt, flask, enum34, parameter-sherpa
Attempting uninstall: MarkupSafe
Found existing installation: MarkupSafe 1.1.1
Uninstalling MarkupSafe-1.1.1:
Successfully uninstalled MarkupSafe-1.1.1
Attempting uninstall: Jinja2
Found existing installation: Jinja2 2.11.2
Uninstalling Jinja2-2.11.2:
Successfully uninstalled Jinja2-2.11.2
Successfully installed GPy-1.10.0 GPyOpt-1.2.6 Jinja2-3.0.1 MarkupSafe-2.0.1 Werkzeug-2.0.1 enum34-1.1.10 flask-2.0.1 itsdangerous-2.0.1 parameter-sherpa-1.0.6 paramz-0.9.5 pymongo-3.12.0
[33mWARNING: You are using pip version 21.0.1; however, version 21.2.4 is available.
You should consider upgrading via the '/home/fccoelho/Downloads/SageMath/local/bin/python3 -m pip install --upgrade pip' command.[0m
```python
import sherpa
```
We start by defining the type of each parameter and the kind of search algorithm we will use.
```python
parameters = [
sherpa.Continuous(name='e',range=[0,1]),
sherpa.Continuous(name='chi',range=[0,.3]),
sherpa.Continuous(name='phi',range=[0,1]),
sherpa.Continuous(name='beta',range=[0.1,2]),
sherpa.Continuous(name='rho',range=[0.2,1]),
sherpa.Continuous(name='delta',range=[0.01,1]),
sherpa.Continuous(name='gamma',range=[0.01,1]),
sherpa.Continuous(name='alpha',range=[0.01,1]),
sherpa.Continuous(name='mu',range=[0.01,.7]),
sherpa.Continuous(name='p',range=[0.01,.7]),
sherpa.Discrete(name='q',range=[1,30]),
sherpa.Discrete(name='r',range=[1,12]),
sherpa.Discrete(name='t0',range=[0,25]),
]
algorithm = sherpa.algorithms.RandomSearch(max_num_trials=1000)
# algorithm = sherpa.algorithms.GPyOpt(model_type='GP',max_num_trials=150)
```
```python
study = sherpa.Study(parameters=parameters,
algorithm=algorithm,
lower_is_better=True,
disable_dashboard=True)
```
Once the problem (`study`) is defined, we can generate a suggestion of parameter values to see how it works:
```python
trial = study.get_suggestion()
trial.parameters
```
{'e': 0.6633981790812898,
'chi': 0.20806102643110655,
'phi': 0.1675576065501,
'beta': 1.6334929266739133,
'rho': 0.4983366877248262,
'delta': 0.3151485461881289,
'gamma': 0.5319575393066565,
'alpha': 0.3685410748547585,
'mu': 0.5881872108774452,
'p': 0.39349886479238544,
'q': 22,
'r': 1,
't0': 19}
Then we can run the parameter search in a simple loop:
```python
for trial in study:
pars = [trial.parameters[n] for n in ['chi', 'phi', 'beta', 'rho', 'delta', 'gamma', 'alpha', 'mu', 'p', 'q', 'r']]
t0 = trial.parameters['t0']
T.ode_solve(tspan, inits, num_points=200, params=pars)
sim = get_sim_array(T.solution)
H = sim[:ld+t0,-2]*popRJ*trial.parameters['e']
D = sim[:ld+t0,-1]*popRJ*trial.parameters['e']
loss = sum((dfRJ.last_available_confirmed-H[t0:t0+ld])**2) +sum((dfRJ.last_available_deaths-D[t0:t0+ld])**2)/2*ld
study.add_observation(trial=trial,
objective=loss,
)
study.finalize(trial)
```
```python
res = study.get_best_result()
res
```
{'Trial-ID': 475,
'Iteration': 1,
'alpha': 0.9394810724951352,
'beta': 0.5308498112450182,
'chi': 0.12780938386529786,
'delta': 0.5923550985639527,
'e': 0.9084210097968136,
'gamma': 0.2498206877232359,
'mu': 0.04066936638497329,
'p': 0.3337257898942823,
'phi': 0.09298989032857552,
'q': 21,
'r': 1,
'rho': 0.3594731742120626,
't0': 20,
'Objective': 75443865634.05386}
```python
def plot_results(pars):
T.ode_solve(tspan, inits, num_points=200, params=list(pars[:-1]))
t0=pars[-1]
sim = get_sim_array(T.solution)*popRJ
h = list_plot(sim[:ld+t0,-2],color='red',legend_label='Cum. cases', plotjoined=True)
d = list_plot(sim[:ld+t0,-1],color='purple', legend_label='Cum. Deaths', plotjoined=True)
cc = list_plot(list(zip(range(t0,ld+t0),dfRJ.last_available_confirmed)), color='black',legend_label='cases (obs)')
cd = list_plot(list(zip(range(t0,ld+t0),dfRJ.last_available_deaths)), color='orange',legend_label='deaths(obs)')
show(h+d+cc+cd)
```
```python
plot_results([res['chi'],
res['phi'],
res['beta'],
res['rho'],
res['delta'],
res['gamma'],
res['alpha'],
res['mu'],
res['p'],
res['q'],
res['r'],
res['t0']
])
```
# Step by step for the second part
## Results
### Have the model ready and nondimensionalized
### Find the equilibria: DFE (disease-free equilibrium) and EE (endemic equilibria)
### Characterize the stability of the equilibria via local linearization
### Compute R0 analytically, then substitute the values to obtain a numerical estimate
### Generate simulations and compare them with the data; describe how well the model represents the epidemic in the chosen country
### Sensitivity analysis
### Parameter optimization
### Bayesian estimation of the parameters
## Discussion
### Put each result in context
### Discuss the whole analysis from a general point of view, defending the applicability and originality of the results obtained
### Implications of the model for COVID control
## Conclusion
In addition, an abstract must be written at the beginning of the article.
```python
from scipy.interpolate import interp1d
from matplotlib import pyplot as plt
```
```python
d = [2,6,5,3,8,7,0]
f = interp1d(range(7),d,kind='quadratic')
```
```python
plt.plot(d, 'o')
t2 = [0, 1, 1.5,2.5,3,3.1,6]
plt.plot(t2,f(t2),'-*' )
```
```python
plot(f(t2))
```
```python
```
| 59355bf8f553a4f4a659fc9d97c16d6751208ad7 | 175,304 | ipynb | Jupyter Notebook | Planilhas Sage/Suplemento 2 - o Modelo SEIAHR.ipynb | fccoelho/Modelagem-Matematica-IV | f0ff2824a564183a7c972988b32b487fa7fa1942 | ["BSD-Source-Code"] | 23 | 2019-04-15T16:51:02.000Z | 2021-08-25T01:22:03.000Z | Planilhas Sage/Suplemento 2 - o Modelo SEIAHR.ipynb | fccoelho/Modelagem-Matematica-IV | f0ff2824a564183a7c972988b32b487fa7fa1942 | ["BSD-Source-Code"] | 11 | 2021-08-04T12:25:24.000Z | 2021-11-26T13:57:28.000Z | Planilhas Sage/Suplemento 2 - o Modelo SEIAHR.ipynb | fccoelho/Modelagem-Matematica-IV | f0ff2824a564183a7c972988b32b487fa7fa1942 | ["BSD-Source-Code"] | 10 | 2020-08-03T12:24:13.000Z | 2021-12-08T12:51:02.000Z | 176.008032 | 48,768 | 0.885131 | true | 6,057 | Qwen/Qwen-72B | 1. YES 2. YES | 0.824462 | 0.712232 | 0.587208 | __label__kor_Hang | 0.143609 | 0.202612 |
Programming exercise 1: Manuel, Niclas, Veli <br>
This code approximates the solution to the two dimensional Poisson problem by discretizing:
\begin{align}
-\Delta u & = f \quad \text{in } \Omega = (0,1)^2\\
u & = 0 \quad \text{on } \partial \Omega
\end{align}
First, we write a function that returns the sparse matrix of size $(n-1)^2 \times (n-1)^2$ that we will need to solve to obtain the approximate solution of the Poisson problem with Dirichlet boundary conditions.
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.sparse import csr_matrix, isspmatrix_csr
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spsolve
import numpy.linalg as LA
from mpl_toolkits.mplot3d import Axes3D
import scipy
import time
def A_matrix(n):
if n<=1:#check if n is bigger 1, throw error if not
return print("Error: Input has to be an integer n>1")
else:
#nb=[-1]*(n-1) alternative but slower way in this case
nb = [-1 for i in range(n-1)] #list of length n-1 filled with -1, since we use it twice
data = np.array([[2]*(n-1), nb, nb])#diagonal entries
diags = np.array([0, -1, 1])#on which diagonals the entries should be put
del nb #since we dont use nb anymore
L=scipy.sparse.spdiags(data, diags, n-1, n-1)
A=scipy.sparse.kronsum(L,L)
A=n*n*A
return A
#Example calculation for n=4:
print("This is the resulting matrix:\n", A_matrix(4).toarray())
```
This is the resulting matrix:
[[ 64 -16 0 -16 0 0 0 0 0]
[-16 64 -16 0 -16 0 0 0 0]
[ 0 -16 64 0 0 -16 0 0 0]
[-16 0 0 64 -16 0 -16 0 0]
[ 0 -16 0 -16 64 -16 0 -16 0]
[ 0 0 -16 0 -16 64 0 0 -16]
[ 0 0 0 -16 0 0 64 -16 0]
[ 0 0 0 0 -16 0 -16 64 -16]
[ 0 0 0 0 0 -16 0 -16 64]]
Now we implement a jacobi iteration to solve the system Au=b with:
\begin{align}
u^{(k+1)} = u^{(k)} + D^{-1} (b-Au^{(k)})
\end{align}
with stopping criterion $\lVert u^{(k+1)} - u^{(k)}\rVert < \epsilon$ for $\epsilon >0$.
```python
def jacobi_iteration(A,u1,b1,e,n):
D=(1/(4*n*n)*scipy.sparse.eye((n-1)*(n-1)))#inverse diagonal matrix
while True:
q=A.dot(u1)
#k=len(q)
#q=np.reshape(q,(k,1))
r=b1-q
v= D.dot(r) + u1
if LA.norm(v-u1)<e:#if difference between steps is small, end
return v
u1=v
```
Now we want to approximately solve the example $f(x,y)=5\pi^2sin(2\pi x)sin(\pi y)$.
```python
def f(x,y):#the function for which we want to solve the Poisson equation
return 5*np.pi*np.pi*np.sin(2*np.pi*x)*np.sin(np.pi*y)
def u(n):#gives start vector [00...00] of right size back
k=(n-1)*(n-1)
u1=[0]*k
u1=np.reshape(u1,(k,1))
return u1
def grid_ij(n):#grid with ij indexing
x=np.arange(1/n, 1, 1/n)
y=np.arange(1/n, 1, 1/n)
XX,YY=np.meshgrid(x, y, sparse=False, indexing='ij')
return XX,YY
def grid_xy(n):#grid with coordinate like indexing
x=np.arange(1/n, 1, 1/n)
y=np.arange(1/n, 1, 1/n)
XX,YY=np.meshgrid(x, y, sparse=False, indexing='xy')
return XX,YY
def b(n):#calculates vector of values of f(x,y) on the grid points
    z=grid_ij(n)
    l=len(z[0])
    a=np.reshape(z,l*l*2, order='F')
    a=a.reshape((l*l,2))
    result=f(a[:,:1],a[:,1:2])
    return result
def solve(n):#applies jacobi iteration
    A=A_matrix(n)
    b1=b(n)
    u1=u(n)
    e=1e-10#tolerance
    return jacobi_iteration(A,u1,b1,e,n)
n_list=[8,16,32,64,128] #different grid sizes for calculations
approx_result = [solve(x) for x in n_list]#calculating different approximations
x=np.arange(1/128, 1, 1/128)
y=np.arange(1/128, 1, 1/128)
XX,YY=np.meshgrid(x, y, sparse=False, indexing='xy')
ZZ=approx_result[-1].reshape(np.array(grid_ij(128)[0]).shape)#approximate solution on the n=128 grid, in the right shape for plotting
#Axes3D.plot_surface(XX,YY,ZZ)
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot_surface(XX, YY, ZZ)
#scipy.sparse.linalg.spsolve(A_matrix(128), b(128)) for comparing results, works as intended
```
This is the surface plot of our approximated solution.
```python
def exact_sol(x,y):#analytical solution to the poisson equation for the example
return np.sin(2*np.pi*x)*np.sin(np.pi*y)
def apply_exact_sol(n):#calculates vector of values of exact_sol(x,y)
z=grid_ij(n)
l=len(z[0])
a=np.reshape(z,l*l*2, order='F')
a=a.reshape((l*l,2))
result=exact_sol(a[:,:1],a[:,1:2])
return result
numb=np.arange(0., 0.12,0.001)
h_list=[1/n_list[i] for i in range(5)]
exact_result = [apply_exact_sol(x) for x in n_list]
#print(exact_result[4])
max_diff=[np.amax(abs(approx_result[i]-exact_result[i])) for i in range(5)]
plt.xlabel('grid size h')
plt.ylabel('l_inf error')
plt.title('convergence plot')
plt.grid(True)
plt.plot(h_list,max_diff,label="differences")
plt.plot(numb,3*numb**2,label="3*x^2")
plt.legend()
plt.show()
```
| c3fbdeface516d068887f69651094e3bcb320735 | 150,224 | ipynb | Jupyter Notebook | Programming_Exercise_Manuel_Niclas_Veli.ipynb | Veli-hub/Scientific_Computing | 942e0adf28913231b21396109c6c893b6dce2279 | ["MIT"] | null | null | null | Programming_Exercise_Manuel_Niclas_Veli.ipynb | Veli-hub/Scientific_Computing | 942e0adf28913231b21396109c6c893b6dce2279 | ["MIT"] | null | null | null | Programming_Exercise_Manuel_Niclas_Veli.ipynb | Veli-hub/Scientific_Computing | 942e0adf28913231b21396109c6c893b6dce2279 | ["MIT"] | null | null | null | 134.008921 | 74,224 | 0.771228 | true | 1,726 | Qwen/Qwen-72B | 1. YES 2. YES | 0.947381 | 0.853913 | 0.808981 | __label__eng_Latn | 0.806809 | 0.717866 |
[Open in Colab](https://colab.research.google.com/github/kalz2q/mycolabnotebooks/blob/master/chartmathc01matrix.ipynb)
# Memo
I am reading Chapter 1 (Matrices) of the textbook *Kiso kara no Chart-shiki Sugaku C* (Chart-Style Mathematics C, from the basics) that I have at hand.
An arrangement of numbers or symbols in a rectangular array, enclosed in parentheses, is called a matrix, and each of those numbers or symbols is called an entry. A horizontal line of entries is called a row, and a vertical line is called a column.
```latex
%%latex
\begin{pmatrix}
a & b & c \\
d & e & f \\
\end{pmatrix}
```
\begin{pmatrix}
a & b & c \\
d & e & f \\
\end{pmatrix}
```
# In the output of the code cell above, a b c is the 1st row, d e f is the 2nd row, a d is the 1st column, b e is the 2nd column, and c f is the 3rd column.
# Note that this numbering is not 0-based; it is rather the programming world that uses 0-based indexing for convenience.
```
```
from sympy import *
init_printing()
Matrix([[1,2],[3,4]])
```
$\displaystyle \left[\begin{matrix}1 & 2\\3 & 4\end{matrix}\right]$
```
# In the cell above, writing matrix in lowercase instead of Matrix gave NameError: name 'matrix' is not defined.
# Python naming conventions say functions and methods start with a lowercase letter, so this is probably a SymPy-specific convention.
# (Similarly, the unevaluated integral object is Integral, while the command that performs integration is integrate.)
# In the textbook, matrix brackets are large parentheses `(`, `)`, not square brackets `[`, `]`.
# cf. parentheses, brackets, braces
```
```
Matrix([[1,2],[3,4]]) + Matrix([[1,1],[1,1]])
```
$\displaystyle \left[\begin{matrix}2 & 3\\4 & 5\end{matrix}\right]$
```
a,b,c,d,e,f = symbols('a,b,c,d,e,f')
Matrix([[a,b,c],[d,e,f]])
```
$\displaystyle \left[\begin{matrix}a & b & c\\d & e & f\end{matrix}\right]$
```
# how entries are indexed
from sympy import *
init_printing()
a,b,c,d,e,f = symbols('a,b,c,d,e,f')
Matrix([[a,b,c],[d,e,f]])[0,0]
```
$\displaystyle a$
```
Matrix([[a,b,c],[d,e,f]])[5]
```
$\displaystyle f$
```
Matrix([[a,b,c],[d,e,f]])[1,2]
```
$\displaystyle f$
```
# In the cell above, Matrix([[a,b,c],[d,e,f]])[1,2] returned f.
# Matrix([[a,b,c],[d,e,f]])[1][2] does not work.
# How do we extract just a single row, or just a single column? (see the cell below)
```
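To answer the question above, SymPy matrices provide `row` and `col` methods; a small check (with `M` as an example matrix):
```
from sympy import Matrix, symbols
a, b, c, d, e, f = symbols('a, b, c, d, e, f')
M = Matrix([[a, b, c], [d, e, f]])
M.row(0)   # first row:   Matrix([[a, b, c]])
M.col(2)   # third column: Matrix([[c], [f]])
```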
```
Matrix([[a,b],[c,d],[e,f]])[2,1]
```
$\displaystyle f$
For matrices $A$ and $B$, $A=B$ means that $A$ and $B$ have the same shape and their corresponding entries are all equal.
An $m \times n$ matrix:
```latex
%%latex
\begin{pmatrix}
a_{11} & a_{12} & \ldots & a_{1n} \\
a_{21} & a_{22} & \ldots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \ldots & a_{mn}
\end{pmatrix}
```
\begin{pmatrix}
a_{11} & a_{12} & \ldots & a_{1n} \\
a_{21} & a_{22} & \ldots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \ldots & a_{mn}
\end{pmatrix}
```latex
%%latex
\begin{pmatrix}
a_{11} & \ldots & a_{1n} \\
\vdots & \ddots & \vdots \\
a_{m1} & \ldots & a_{mn}
\end{pmatrix}
```
\begin{pmatrix}
a_{11} & \ldots & a_{1n} \\
\vdots & \ddots & \vdots \\
a_{m1} & \ldots & a_{mn}
\end{pmatrix}
```
# a square matrix of order 2
%%latex
A =
\begin{pmatrix}
1 & 0 \\
2 & -3
\end{pmatrix}
```
A =
\begin{pmatrix}
1 & 0 \\
2 & -3
\end{pmatrix}
```
# a 3-dimensional row vector
%%latex
B =
\begin{pmatrix}
1 & 2 & 3\\
\end{pmatrix}
```
B =
\begin{pmatrix}
1 & 2 & 3\\
\end{pmatrix}
```
# a 2-dimensional column vector
%%latex
C =
\begin{pmatrix}
-1 \\
1
\end{pmatrix}
```
C =
\begin{pmatrix}
-1 \\
1
\end{pmatrix}
```
# sum, difference, and real-scalar multiple of matrices
%%latex
\begin{pmatrix}
a & b \\
c & d
\end{pmatrix}
+
\begin{pmatrix}
p & q \\
r & s
\end{pmatrix}
=
\begin{pmatrix}
a+p & b+q \\
c+r& d+s
\end{pmatrix}
```
\begin{pmatrix}
a & b \\
c & d
\end{pmatrix}
+
\begin{pmatrix}
p & q \\
r & s
\end{pmatrix}
=
\begin{pmatrix}
a+p & b+q \\
c+r& d+s
\end{pmatrix}
```latex
%%latex
\begin{pmatrix}
a & b \\
c & d
\end{pmatrix}
-
\begin{pmatrix}
p & q \\
r & s
\end{pmatrix}
=
\begin{pmatrix}
a-p & b-q \\
c-r& d-s
\end{pmatrix}
```
\begin{pmatrix}
a & b \\
c & d
\end{pmatrix}
-
\begin{pmatrix}
p & q \\
r & s
\end{pmatrix}
=
\begin{pmatrix}
a-p & b-q \\
c-r& d-s
\end{pmatrix}
```latex
%%latex
k
\begin{pmatrix}
a & b \\
c & d
\end{pmatrix}
=
\begin{pmatrix}
ka & kb \\
kc & kd
\end{pmatrix}
\quad k \ \text{is a real number}
```
k
\begin{pmatrix}
a & b \\
c & d
\end{pmatrix}
=
\begin{pmatrix}
ka & kb \\
kc & kd
\end{pmatrix}
\quad k \ \text{is a real number}
```latex
%%latex
\text{In particular,} \quad
-
\begin{pmatrix}
a & b \\
c & d
\end{pmatrix}
=
\begin{pmatrix}
-a & -b \\
-c & -d
\end{pmatrix}
```
\text{In particular,} \quad
-
\begin{pmatrix}
a & b \\
c & d
\end{pmatrix}
=
\begin{pmatrix}
-a & -b \\
-c & -d
\end{pmatrix}
```
# properties of matrix addition, subtraction, and scalar multiplication
%%latex
\text{For sums and differences of matrices of the same shape, the following hold, where } k, \ l \ \text{are real numbers.} \\
1.\ \text{Commutative law} \quad A+B=B+A \\
2.\ \text{Associative law} \quad (A+B)+C=A+(B+C) \quad \longleftarrow \quad \text{written } A+B+C \\
3.\ \text{Zero matrix } O \quad A+(-A)=O, \quad A+O=A, \quad A-A=O \\
4.\ \text{Difference as a sum} \quad \quad A-B=A+(-B) \\
5.\ \text{Scalar multiples} \quad \quad 1A=A, \quad (-1)A=-A, \quad 0A=O, \quad kO=O \\
\quad [1]\quad k(lA)=(kl)A \quad [2]\quad (k+l)A=kA+lA \quad [3]\quad k(A+B)=kA+kB
```
\text{For sums and differences of matrices of the same shape, the following hold, where } k, \ l \ \text{are real numbers.} \\
1.\ \text{Commutative law} \quad A+B=B+A \\
2.\ \text{Associative law} \quad (A+B)+C=A+(B+C) \quad \longleftarrow \quad \text{written } A+B+C \\
3.\ \text{Zero matrix } O \quad A+(-A)=O, \quad A+O=A, \quad A-A=O \\
4.\ \text{Difference as a sum} \quad \quad A-B=A+(-B) \\
5.\ \text{Scalar multiples} \quad \quad 1A=A, \quad (-1)A=-A, \quad 0A=O, \quad kO=O \\
\quad [1]\quad k(lA)=(kl)A \quad [2]\quad (k+l)A=kA+lA \quad [3]\quad k(A+B)=kA+kB
# Matrix multiplication
```
# matrix multiplication
%%latex
\begin{pmatrix}
a & b
\end{pmatrix}
\begin{pmatrix}
p \\
q
\end{pmatrix}
= ap + bq
```
\begin{pmatrix}
a & b
\end{pmatrix}
\begin{pmatrix}
p \\
q
\end{pmatrix}
= ap + bq
```latex
%%latex
\begin{pmatrix}
a & b & c
\end{pmatrix}
\begin{pmatrix}
p \\
q \\
r
\end{pmatrix}
= ap + bq + cr
```
\begin{pmatrix}
a & b & c
\end{pmatrix}
\begin{pmatrix}
p \\
q \\
r
\end{pmatrix}
= ap + bq + cr
## The product of matrices
For matrices $A$ and $B$, when the number of columns of $A$ equals the number of rows of $B$, the product $AB$ is defined as the matrix whose entries are the products of the row vectors of $A$ with the column vectors of $B$.
```latex
%%latex
\begin{pmatrix}
a & b \\
c & d
\end{pmatrix}
\begin{pmatrix}
p & q \\
r & x
\end{pmatrix}
=
\begin{pmatrix}
ap + br & aq + bs \\
cp + dr & cq + ds
\end{pmatrix}
```
\begin{pmatrix}
a & b \\
c & d
\end{pmatrix}
\begin{pmatrix}
p & q \\
r & x
\end{pmatrix}
=
\begin{pmatrix}
ap + br & aq + bs \\
cp + dr & cq + ds
\end{pmatrix}
```
# properties of matrix multiplication
%%latex
1.\ \text{For a real number } k: \quad (kA)B =A(kB) = k(AB) \quad \longleftarrow \quad \text{written } kAB \\
2.\ \text{Associative law} \quad (AB)C=A(BC) \quad \longleftarrow \quad \text{written } ABC \\
3.\ \text{Distributive laws} \quad (A+B)C=AC+BC, \quad A(B+C)=AB+AC\\
4.\ \text{The commutative law does not hold in general:} \quad AB \neq BA \quad \text{(non-commutativity)}
```
1.\ \text{For a real number } k: \quad (kA)B =A(kB) = k(AB) \quad \longleftarrow \quad \text{written } kAB \\
2.\ \text{Associative law} \quad (AB)C=A(BC) \quad \longleftarrow \quad \text{written } ABC \\
3.\ \text{Distributive laws} \quad (A+B)C=AC+BC, \quad A(B+C)=AB+AC\\
4.\ \text{The commutative law does not hold in general:} \quad AB \neq BA \quad \text{(non-commutativity)}
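A quick SymPy check of the non-commutativity in property 4 (the example matrices are chosen arbitrarily):
```
from sympy import Matrix
A = Matrix([[1, 2], [3, 4]])
B = Matrix([[0, 1], [1, 0]])
A * B   # Matrix([[2, 1], [4, 3]])
B * A   # Matrix([[3, 4], [1, 2]])  -> A*B != B*A in general
```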
```
# properties of the identity matrix E and the zero matrix O
%%latex
\text{For any square matrix } A, \text{ the identity matrix } E \text{ of the same order, and the zero matrix } O: \\
\quad AE=EA=A \quad AO = OA = O
```
\text{For any square matrix } A, \text{ the identity matrix } E \text{ of the same order, and the zero matrix } O: \\
\quad AE=EA=A \quad AO = OA = O
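The same can be checked numerically with SymPy's `eye` and `zeros` (an illustrative check):
```
from sympy import Matrix, eye, zeros
A = Matrix([[1, 2], [3, 4]])
E, O = eye(2), zeros(2)
print(A * E == A, E * A == A)   # True True
print(A * O == O, O * A == O)   # True True
```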
```
# powers of a matrix
%%latex
\text{The product of } n \text{ copies of a square matrix } A \text{ is written } A^n \text{ and called the } n\text{-th power of } A.
```
\text{The product of } n \text{ copies of a square matrix } A \text{ is written } A^n \text{ and called the } n\text{-th power of } A.
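With SymPy, matrix powers use the `**` operator (a small check):
```
from sympy import Matrix
A = Matrix([[1, 1], [0, 1]])
A**3   # Matrix([[1, 3], [0, 1]])
```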
# The Cayley-Hamilton theorem and various related properties
```
```
```
# currently here
p.16
```
# currently here
| df8a649a84f50589c923916760ba4381a11e62cc | 27,408 | ipynb | Jupyter Notebook | chartmathc01matrix.ipynb | kalz2q/-yjupyternotebooks | ba37ac7822543b830fe8602b3f611bb617943463 | ["MIT"] | 1 | 2021-09-16T03:45:19.000Z | 2021-09-16T03:45:19.000Z | chartmathc01matrix.ipynb | kalz2q/-yjupyternotebooks | ba37ac7822543b830fe8602b3f611bb617943463 | ["MIT"] | null | null | null | chartmathc01matrix.ipynb | kalz2q/-yjupyternotebooks | ba37ac7822543b830fe8602b3f611bb617943463 | ["MIT"] | null | null | null | 26.818004 | 462 | 0.387004 | true | 3,641 | Qwen/Qwen-72B | 1. YES 2. YES | 0.812867 | 0.766294 | 0.622895 | __label__yue_Hant | 0.109993 | 0.285524 |
# Lecture 6: Monty Hall, Simpson's Paradox
## The Monty Hall Problem
You know this problem.
* There are three doors.
* A car is behind one of the doors.
* The other two doors have goats behind them.
* You choose a door, but before you see what's behind your choice, Monty opens one of the other doors to reveal a goat.
* Monty offers you the chance to switch doors.
_Should you switch?_
### Defining the problem
Let $S$ be the event of winning when you switch.
Let $D_j$ be the event of the car being behind door $j$.
### Solving with a probability tree
With a probability tree, it is easy to represent the case where you condition on Monty opening door 2. Given that you initially choose door 1, you can quickly see that if you stick with door 1, you have a $\frac{1}{3}~$ chance of winning.
You have a $\frac{2}{3}~$ chance of winning if you switch.
### Solving with the Law of Total Probability
This is even easier to solve using the Law of Total Probability.
\begin{align}
P(S) &= P(S|D_1)P(D_1) + P(S|D_2)P(D_2) + P(S|D_3)P(D_3) \\
&= 0 \frac{1}{3} + 1 \frac{1}{3} + 1 \frac{1}{3} \\
&= \frac{2}{3}
\end{align}
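The same answer can be checked with a quick simulation. The sketch below assumes you always pick door 0; after Monty reveals a goat, switching wins exactly when the car is not behind your original door.
```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 10**5
car = rng.integers(0, 3, size=n_trials)     # door hiding the car
choice = np.zeros(n_trials, dtype=int)      # always pick door 0 (by symmetry)
print("P(win | stay)   ~", np.mean(car == choice))   # ~ 1/3
print("P(win | switch) ~", np.mean(car != choice))   # ~ 2/3
```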
### A more general solution
Let $n = 7$ be the number of doors in the game.
Let $m=3$ be the number of doors with goats that Monty opens after you select your initial door choice.
Let $S$ be the event where you win _by sticking with your original door choice of door 1_.
Let $C_j$ be the event that the car is actually behind door $j$.
Conditioning only on which door has the car, we have
\begin{align}
& &P(S) &= P(S|C_1)P(C_1) + \dots + P(S|C_n)P(C_n) & &\text{Law of Total Probability} \\
& & &= P(C_1) \\
& & &= \frac{1}{7} \\
\end{align}
Let $M_{i,j,k}$ be the event that Monty opens doors $i,j,k$. Conditioning on Monty opening up doors $i,j,k$, we have
\begin{align}
& &P(S) &= \sum_{i,j,k} P(S|M_{i,j,k})P(M_{i,j,k}) & &\text{summed over all i, j, k with } 2 \le i \lt j \lt k \le 7 \\
\\
& &\Rightarrow P(S|M_{i,j,k}) &= P(S) & &\text{by symmetry} \\
& & &=\frac{1}{7}
\end{align}
Note that we can now generalize this to the case where:
* there are $n \ge 3$ doors
* after you choose a door, Monty opens $m$ of the remaining $n-1$ doors to reveal goats (with $1 \le m \le n-2$, so that at least one other unopened door remains)
The probability of winning with the strategy of _sticking to your initial choice_ is $\frac{1}{n}$, whether __unconditional or conditioning on the doors Monty opens__.
After Monty opens $m$ doors, each of the remaining $n-m-1$ doors has __conditional__ probability of $\left(\frac{n-1}{n-m-1}\right) \left(\frac{1}{n}\right)$.
Since $\frac{1}{n} \lt \left(\frac{n-1}{n-m-1}\right) \left(\frac{1}{n}\right)$, you will always have a greater chance of winning if you switch.
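A quick way to sanity-check these results is to simulate the generalized game. The sketch below (function and parameter names are my own, not from the lecture) estimates both strategies for $n=7$, $m=3$:

```python
import random

def monty_win_prob(n=7, m=3, switch=True, trials=200_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(n)
        choice = 0                                   # by symmetry, always pick door 0 first
        goats = [d for d in range(n) if d != choice and d != car]
        opened = set(random.sample(goats, m))        # Monty opens m goat doors
        if switch:
            options = [d for d in range(n) if d != choice and d not in opened]
            choice = random.choice(options)          # switch to a random unopened door
        wins += (choice == car)
    return wins / trials

print("stick :", monty_win_prob(switch=False))       # ~ 1/7 ≈ 0.143
print("switch:", monty_win_prob(switch=True))        # ~ (n-1)/(n-m-1) * 1/n = 2/7 ≈ 0.286
```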
## Simpson's Paradox
_Is it possible for a certain set of events to be more (or less) probable than another without conditioning, and then be less (or more) probable with conditioning?_
Assume that we have rates of success/failure for Drs. Hibbert and Nick for two types of surgery: heart surgery and band-aid removal.
### Defining the problem
Let $A$ be the event of a successful operation.
Let $B$ be the event of treatment by Dr. Nick.
Let $C$ be the event of heart surgery.
\begin{align}
P(A|B,C) &< P(A|B^c,C) & &\text{Dr. Nick is not as skilled as Dr. Hibbert in heart surgery} \\
P(A|B,C^c) &< P(A|B^c,C^c) & &\text{neither is he that good at band-aid removal} \\
\end{align}
And yet $P(A|B) > P(A|B^c)$?
### Explaining with the Law of Total Probability
To explain this paradox, let's try to use the Law of Total Probability.
\begin{align}
P(A|B) &= P(A|B,C)P(C|B) + P(A|B,C^c)P(C^c|B) \\
\\
\text{but } P(A|B,C) &< P(A|B^c,C) \\
\text{and } P(A|B,C^c) &< P(A|B^c,C^c)
\end{align}
Look at $P(C|B)$ and $P(C|B^c)$. These weights are what make this paradox possible, as they are what flip the direction of the inequality.
Event $C$ is an example of a __confounder__.
### Another example
_Is it possible to have events $A_1, A_2, B, C$ such that_
\begin{align}
P(A_1|B) &\gt P(A_1|C) \text{ and } P(A_2|B) \gt P(A_2|C) & &\text{ ... yet...} \\
P(A_1 \cup A_2|B) &\lt P(A_1 \cup A_2|C)
\end{align}
Yes, and this is just another case of Simpson's Paradox.
Note that
\begin{align}
P(A_1 \cup A_2|B) &= P(A_1|B) + P(A_2|B) - P(A_1 \cap A_2|B)
\end{align}
So this is _not_ possible if $A_1$ and $A_2$ are disjoint and $P(A_1 \cup A_2|B) = P(A_1|B) + P(A_2|B)$.
It is crucial, therefore, to consider the _intersection_ $P(A_1 \cap A_2|B)$, so let's look at the following example where $P(A_1 \cap A_2|B) \gg P(A_1 \cap A_2|C)$ in order to offset the other inequalities.
Consider two basketball players each shooting a pair of free throws.
Let $A_j$ be the event basketball free throw scores on the $j^{th}$ try.
Player $B$ either makes both shots or misses both, making both with probability $P(A_1 \cap A_2|B) = 0.8$.
\begin{align}
P(A_1|B) = P(A_2|B) = P(A_1 \cap A_2|B) = P(A_1 \cup A_2|B) = 0.8
\end{align}
Player $C$ makes free throw shots with probability $P(A_j|C) = 0.7$, independently, so we have
\begin{align}
P(A_1|C) &= P(A_2|C) = 0.7 \\
P(A_1 \cap A_2|C) &= P(A_1|C) P(A_2|C) = 0.49 \\
P(A_1 \cup A_2|C) &= P(A_1|C) + P(A_2|C) - P(A_1 \cap A_2|C) \\
&= 2 \times 0.7 - 0.49 \\
&= 0.91
\end{align}
And so we have our case where
\begin{align}
P(A_1|B) = 0.8 &\gt P(A_1|C) = 0.7 \\
P(A_2|B) = 0.8 &\gt P(A_2|C) = 0.7 \\
\\
\text{ ... and yet... } \\
\\
P(A_1 \cup A_2|B) &\lt P(A_1 \cup A_2|C) ~~~~ \blacksquare
\end{align}
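A two-line numeric check of this example (my own illustration, not part of the lecture notes):

```python
# Player B: makes both shots with probability 0.8, otherwise misses both
P_union_B = 0.8
# Player C: independent shots, each made with probability 0.7
p = 0.7
P_union_C = 2 * p - p * p
print(P_union_B, P_union_C)   # 0.8 < 0.91, although B beats C on each individual shot
```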
|
1101208cb1edba4a90f77869b0b671b471c81161
| 8,543 |
ipynb
|
Jupyter Notebook
|
Lecture_06.ipynb
|
dirtScrapper/Stats-110-master
|
a123692d039193a048ff92f5a7389e97e479eb7e
|
[
"BSD-3-Clause"
] | null | null | null |
Lecture_06.ipynb
|
dirtScrapper/Stats-110-master
|
a123692d039193a048ff92f5a7389e97e479eb7e
|
[
"BSD-3-Clause"
] | null | null | null |
Lecture_06.ipynb
|
dirtScrapper/Stats-110-master
|
a123692d039193a048ff92f5a7389e97e479eb7e
|
[
"BSD-3-Clause"
] | null | null | null | 36.353191 | 248 | 0.52347 | true | 1,994 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.891811 | 0.936285 | 0.834989 |
__label__eng_Latn
| 0.991095 | 0.778293 |
```python
import numpy as np
from sympy import divisors, divisor_count, sieve
from tqdm import tqdm
import multiprocessing as mp
import pickle
```
```python
N = 1e8
try:
prime_set = pickle.load(open('data/prime_set_1e8.pkl', 'rb'))
except FileNotFoundError:
sieve._reset()
sieve.extend(N)
prime_set = set(sieve._list)
is_prime = lambda x: x in prime_set
```
```python
def compute(n):
    # Project Euler 357: n qualifies iff d + n/d is prime for every divisor d of n
ds = divisors(n, generator=True)
l = divisor_count(n)
if l % 2 == 1:
return False
for i in range(l//2):
d = next(ds)
if not is_prime(d + n//d):
return False
return True
def do_batched(batch):
s = 0
for n in tqdm(batch):
if compute(n):
s += n
return s
```
```python
batches = np.arange(N, dtype=int).reshape(100, -1).tolist()
pool = mp.Pool(processes=6)
out = pool.map(do_batched, batches)
sum(out)
```
|
7df5b072d1066fdc17e7685b71f31b1f34dc66b4
| 2,145 |
ipynb
|
Jupyter Notebook
|
src/p357.ipynb
|
alexandru-dinu/project-euler
|
10afd9e204203dd8d5c827b33659a5a2b3090532
|
[
"MIT"
] | null | null | null |
src/p357.ipynb
|
alexandru-dinu/project-euler
|
10afd9e204203dd8d5c827b33659a5a2b3090532
|
[
"MIT"
] | 3 |
2021-10-13T19:26:01.000Z
|
2021-10-13T22:18:23.000Z
|
src/p357.ipynb
|
alexandru-dinu/project-euler
|
10afd9e204203dd8d5c827b33659a5a2b3090532
|
[
"MIT"
] | null | null | null | 21.237624 | 74 | 0.475058 | true | 260 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.896251 | 0.689306 | 0.617791 |
__label__eng_Latn
| 0.347874 | 0.273666 |
# **Error propagation in Python 3: Jupyter Notebook - Data analytics**
## *Subproject "Prototype of a Portable Magnetometer with Internet of Things" (Physical Computing), UFES/Alegre*
### Eduardo Destefani Stefanato, IC FAPES
### Professor: Roberto Colistete Jr., 05/02/2021.
__________________________________
### What is error/uncertainty propagation?
In statistics, propagation of uncertainty or propagation of error (the two differ only in how their values are reported) is a way of checking the reliability of the data from a given sample or measurement when it is subjected to different mathematical operations. It defines how the uncertainties or errors of the variables are related and provides the best estimate for that data set.
Uncertainty is a quantity (dimensional or dimensionless) that expresses the reliability of a data set, given its dispersion, regardless of the true value. The highest authority for uncertainty-of-measurement standards is the International Bureau of Weights and Measures (BIPM).
Error is the difference between the value of a given measurement and its true value.
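For a function $f(x_1,\dots,x_n)$ of independent variables with uncertainties $\sigma_{x_1},\dots,\sigma_{x_n}$, the first-order propagation formula implemented by the class below is
\begin{equation}
\sigma_f = \sqrt{\sum_{i=1}^{n}\left(\frac{\partial f}{\partial x_i}\,\sigma_{x_i}\right)^{2}}.
\end{equation}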
#### Importing the Python modules used by the program
```python
# imports: SymPy provides the symbolic differentiation used below
from sympy import *
```
#### Development
Class that computes the partial derivatives and the lists of errors (uncertainties):
```python
class uncertz :
def __init__(self, name) :
self.name = name
def grad(*vartuple) :
diff_func = []
func = input('\nFunção: ')
for i in vartuple :
i = Symbol('{}' .format(i))
diff_func.append(diff(func, i))
return diff_func[1:]
def error(*errtuple) :
err_func = []
for i in errtuple :
err_func.append(i)
return err_func[1:]
```
The next cell contains the form the user must fill in to compute the uncertainty of the desired function:
```python
f = uncertz(input('Nome da função: '))
loop = True
while loop :
try:
qt_gran = int(input('\nNúmero de grandezas: '))
loop = False
except ValueError:
loop = True
print('Você não digitou um número inteiro.')
count = 0
lst_var = []
for i in range(qt_gran) :
count += 1
i = input('Variável {}: '. format(count))
lst_var.append(i)
print('\n')
count = 0
lst_err = []
for i in range(qt_gran) :
count += 1
i = float(input('Incerteza da variável {}: '. format(count)))
lst_err.append(i)
count = 0
lst_val = []
for i in range(qt_gran) :
count += 1
i = float(input('Valor da variável {}: '. format(count)))
lst_val.append(i)
vartuple = tuple(lst_var)
errtuple = tuple(lst_err)
valtuple = tuple(lst_val)
dict_data = {}
for var, val in list(zip(vartuple, valtuple)):
dict_data[var] = val
grad = f.grad(*vartuple)
error = f.error(*errtuple)
BIMP = prod_int = sqrt(sum(tuple(map(lambda a,b: (a*b)**2, grad, error))))
BIMP = BIMP.subs(dict_data); print('\n+/-', BIMP.round(2))
```
Nome da função: Momento
Número de grandezas: 2
Variável 1: m
Variável 2: v
Incerteza da variável 1: 0.01
Incerteza da variável 2: 0.2
Valor da variável 1: 89
Valor da variável 2: 0.8
Função: m*v
+/- 17.80
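As a quick hand check of this output: for momentum $p = m v$ with $m = 89$, $v = 0.8$, $\sigma_m = 0.01$ and $\sigma_v = 0.2$,
\begin{equation}
\sigma_p = \sqrt{(v\,\sigma_m)^2 + (m\,\sigma_v)^2} = \sqrt{(0.8\cdot 0.01)^2 + (89\cdot 0.2)^2} \approx 17.80,
\end{equation}
which matches the value printed above.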
Source: [Propagação de erros](https://pt.wikipedia.org/wiki/Propaga%C3%A7%C3%A3o_de_erros)
End.
|
df6fc10f76b7b0158308f7a0d20152f3e829210e
| 5,881 |
ipynb
|
Jupyter Notebook
|
uncertz/source-code/uncertz_v0.1.ipynb
|
EduardoDestefani/python-samples
|
91affdafe61bd1f5d55cb801a18969657e73177f
|
[
"MIT"
] | null | null | null |
uncertz/source-code/uncertz_v0.1.ipynb
|
EduardoDestefani/python-samples
|
91affdafe61bd1f5d55cb801a18969657e73177f
|
[
"MIT"
] | null | null | null |
uncertz/source-code/uncertz_v0.1.ipynb
|
EduardoDestefani/python-samples
|
91affdafe61bd1f5d55cb801a18969657e73177f
|
[
"MIT"
] | null | null | null | 26.490991 | 399 | 0.516579 | true | 977 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.877477 | 0.800692 | 0.702589 |
__label__por_Latn
| 0.987835 | 0.47068 |
# Learning Disentangled Representations using sequential images of a teapot
```python
from mpl_toolkits import mplot3d
import matplotlib.pyplot as plt
import random
import numpy as np
from PIL import Image
from tqdm import tqdm
import os
```
### Create dataset
Code to generate this dataset borrows from
https://medium.com/@yzhong.cs/beyond-data-scientist-3d-plots-in-python-with-examples-2a8bd7aa654b
The following cells generate sequential images of a teapot, where at each step an action, corresponding to either a rotation in viewpoint or a change in colour, was performed to generate the next image.
```python
CREATE_DATASET = True
```
First load teapot object, which is borrowed from the Stanford Computer Graphics Lab at https://graphics.stanford.edu/courses/cs148-10-summer/as3/code/as3/teapot.obj
```python
# Load teapot.obj
if CREATE_DATASET:
import numpy as np
def read_obj(filename):
triangles = []
vertices = []
with open(filename) as file:
for line in file:
components = line.strip(' \n').split(' ')
if components[0] == "f": # face data
# e.g. "f 1/1/1/ 2/2/2 3/3/3 4/4/4 ..."
indices = list(map(lambda c: int(c.split('/')[0]) - 1, components[1:]))
for i in range(0, len(indices) - 2):
triangles.append(indices[i: i+3])
elif components[0] == "v": # vertex data
# e.g. "v 30.2180 89.5757 -76.8089"
vertex = list(map(lambda c: float(c), components[1:]))
vertices.append(vertex)
return np.array(vertices), np.array(triangles)
```
Now generate a dataset, consisting of 1000 sequential images
```python
if CREATE_DATASET:
N_DATA = 1000
folder = 'teapot/'
vertices, triangles = read_obj(folder+'teapot.obj')
angle = 2*np.pi / 5
colors = [[0,0,0],[255,0,0],[255,255,255],[0,255,0],[0,0,255]]
color_index = 0
actions = []
for i in tqdm(range(N_DATA)):
# First, plot 3D image of a teapot and save as image
x = np.asarray(vertices[:,0]).squeeze()
y = np.asarray(vertices[:,1]).squeeze()
z = np.asarray(vertices[:,2]).squeeze()
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.grid(None)
ax.axis('off')
ax.set_xlim([-3, 3])
ax.set_ylim([-3, 3])
ax.set_zlim([0, 3])
ax.plot_trisurf(x, z, triangles, y, shade=True, color='white')
ax.view_init(100, angle)
plt.savefig(folder+'teapot'+str(i)+'.png')
plt.close()
# Then load the image, crop, resize it, and change background color
img = Image.open('teapot/teapot'+str(i)+'.png').convert('RGB')
img = img.crop((100,0,350,258))
img = img.resize((84,84))
arr = np.array(img)
arr = np.where(arr == [255,255,255], colors[color_index], arr)
np.save(folder+'small_teapot/teapot'+str(i),arr)
# Now select an action to perform that changes the scene.
action = random.randrange(8)
if action == 0: # y rotation, positive
m = np.matrix([[np.cos(angle), 0, np.sin(angle)],[0,1,0],[-np.sin(angle), 0, np.cos(angle)]])
elif action == 1: # y rotation, negative
m = np.matrix([[np.cos(angle), 0, -np.sin(angle)],[0,1,0],[np.sin(angle), 0, np.cos(angle)]])
elif action == 2: # z rotation, positive
m = np.matrix([[1,0,0],[0, np.cos(angle), np.sin(angle)],[0, -np.sin(angle), np.cos(angle)]])
    elif action == 3: # z rotation, negative
m = np.matrix([[1,0,0],[0, np.cos(angle), -np.sin(angle)],[0, np.sin(angle), np.cos(angle)]])
elif action == 4: # x rotation, positive
m = np.matrix([[np.cos(angle), np.sin(angle), 0],[-np.sin(angle), np.cos(angle), 0],[0,0,1]])
    elif action == 5: # x rotation, negative
m = np.matrix([[np.cos(angle), -np.sin(angle), 0],[np.sin(angle), np.cos(angle), 0],[0,0,1]])
elif action ==6: # Change color by +1 increment
m = np.matrix([[1, 0, 0],[0,1,0],[0,0,1]])
color_index = (color_index + 1) % 5
elif action ==7: # Change color by -1 increment
m = np.matrix([[1, 0, 0],[0,1,0],[0,0,1]])
color_index = (color_index - 1) % 5
actions.append(action)
# Change viewpoint of teapot
vertices = vertices*m
# Save action sequence
np.save(folder+'actions',actions)
```
### Show a sample from this dataset
```python
plt.imshow(np.load('teapot/small_teapot/teapot290.npy'))
```
# Learning Disentangled Representations
Note: This works faster using a GPU
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
```
```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```
### First, define a TeapotWorld class that can load sequences of states and actions from our dataset
```python
class TeapotWorld():
class action_space():
def __init__(self,n_actions):
self.n = n_actions
class observation_space():
def __init__(self,shape):
self.shape = shape
def __init__(self, action_file, data_dir):
self.action_space = self.action_space(8)
self.observation_space = self.observation_space([84,84,3])
self.actions = np.load(action_file)
self.data_dir = data_dir
self.n_data = len(self.actions)
self.dataset = self.load_dataset()
self.current_idx = 0
self.reset()
def load_dataset(self):
data = []
for idx in tqdm(range(self.n_data)):
data.append(self.load_image(idx))
return torch.stack(data)
def reset(self):
self.current_idx = random.randrange(self.n_data-10)
return self.get_observation()
def load_image(self, idx):
data_file = 'teapot'+str(idx)+'.npy'
obs = np.load(self.data_dir+data_file)
return torch.FloatTensor(obs/255)
def get_observation(self, idx=None):
if idx == None:
idx = self.current_idx
obs = self.dataset[idx]
return obs.to(device)
def get_action(self):
return self.actions[self.current_idx]
def step(self):
self.current_idx += 1
return self.get_observation()
def get_batch(self, batch_size):
idx = random.sample(range(self.n_data),batch_size)
batch = []
for i in idx:
batch.append(self.get_observation(idx=i))
batch = torch.stack(batch)
return batch.to(device)
```
**Check that TeapotWorld loads correctly**
```python
env = TeapotWorld('teapot/actions.npy','teapot/small_teapot/')
```
100%|██████████| 1000/1000 [00:01<00:00, 676.83it/s]
### Define encoder and decoder.
These are based on the neural networks used in the "Human-level control through deep reinforcement learning" DQN paper, since the observations have pretty much the same dimensions.
```python
def init_weights(m, gain):
if (type(m) == nn.Linear) | (type(m) == nn.Conv2d) | (type(m) == nn.ConvTranspose2d):
nn.init.orthogonal_(m.weight, gain)
nn.init.zeros_(m.bias)
class Encoder(nn.Module):
def __init__(self, n_out=5, n_hid = 128, weight_scale=5):
super().__init__()
self.conv = nn.Sequential(nn.Conv2d(3, 16, 8, stride=4),
nn.ReLU(),
nn.Conv2d(16, 32, 4, stride=2),
nn.ReLU(),
nn.Conv2d(32, 32, 3, stride=1),
nn.ReLU())
self.output = nn.Sequential(nn.Linear(32 * 7 * 7, n_hid),
nn.ReLU(),
nn.Linear(n_hid, n_out))
self.conv.apply(lambda x: init_weights(x, weight_scale))
self.output.apply(lambda x: init_weights(x, weight_scale))
def forward(self, obs):
if len(obs.shape) != 4:
obs = obs.unsqueeze(0)
obs = obs.permute(0, 3, 1, 2)
obs = obs/255
obs = self.conv(obs)
obs = obs.contiguous().view(obs.size(0), -1)
return F.normalize(self.output(obs)).squeeze()
class Decoder(nn.Module):
def __init__(self, n_in=5, n_hid = 128, weight_scale=5):
super().__init__()
self.fc1 = nn.Linear(n_in, n_hid)
self.fc2 = nn.Linear(n_hid, 32 * 7 * 7)
self.conv = nn.Sequential(nn.ConvTranspose2d(32, 32, 3, stride=1),
nn.ReLU(),
nn.ConvTranspose2d(32, 16, 4, stride=2),
nn.ReLU(),
nn.ConvTranspose2d(16, 3, 8, stride=4),
)
self.conv.apply(lambda x: init_weights(x, weight_scale))
init_weights(self.fc1, weight_scale)
init_weights(self.fc2, weight_scale)
def forward(self, x):
if len(x.shape) == 1:
x = x.unsqueeze(0)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
batch_size = x.shape[0]
x = x.reshape(batch_size,32,7,7)
x = self.conv(x).permute(0,2,3,1)
return torch.sigmoid(x).squeeze()
```
**Check dimensions**
```python
encoder = Encoder(n_out=5).to(device)
decoder = Decoder(n_in=5).to(device)
print(encoder)
print(decoder)
```
Encoder(
(conv): Sequential(
(0): Conv2d(3, 16, kernel_size=(8, 8), stride=(4, 4))
(1): ReLU()
(2): Conv2d(16, 32, kernel_size=(4, 4), stride=(2, 2))
(3): ReLU()
(4): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1))
(5): ReLU()
)
(output): Sequential(
(0): Linear(in_features=1568, out_features=128, bias=True)
(1): ReLU()
(2): Linear(in_features=128, out_features=5, bias=True)
)
)
Decoder(
(fc1): Linear(in_features=5, out_features=128, bias=True)
(fc2): Linear(in_features=128, out_features=1568, bias=True)
(conv): Sequential(
(0): ConvTranspose2d(32, 32, kernel_size=(3, 3), stride=(1, 1))
(1): ReLU()
(2): ConvTranspose2d(32, 16, kernel_size=(4, 4), stride=(2, 2))
(3): ReLU()
(4): ConvTranspose2d(16, 3, kernel_size=(8, 8), stride=(4, 4))
)
)
```python
obs = env.reset()
print(obs.shape)
latent = encoder(obs)
print(latent.shape)
reconstructed = decoder(latent)
print(reconstructed.shape)
```
torch.Size([84, 84, 3])
torch.Size([5])
torch.Size([84, 84, 3])
**Representation**
The crux of the matter is learning to 'represent' actions in the observation space with actions in latent space. Here, we will do this by assuming every action is a generalized rotation in latent space, which we denote with a series of 2-dimensional rotations.
A 2-d rotation is given by:
\begin{pmatrix}
\cos(\theta) & \sin(\theta) \\
-\sin(\theta) & \cos(\theta)
\end{pmatrix}
and we denote a rotation in dimensions $i$ and $j$ of a higher dimensional space as $R_{i,j}(\theta)$. For $i=1$, $j=4$, in a 4-dimensional space:
\begin{equation}
R_{1,4}(\theta) =
\begin{pmatrix}
\cos(\theta) & 0 & 0 & \sin(\theta) \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
-\sin(\theta) & 0 & 0 & \cos(\theta)
\end{pmatrix}
\end{equation}
An arbitrary rotation, denoted $g$ as I am subtly moving towards this being a group action, can then be written as:
\begin{equation}
g(\theta_{1,2},\theta_{1,3},\dots,\theta_{n-1,n}) = \prod_{i=1}^{n-1} \prod_{j=i+1}^{n} R_{i,j}(\theta_{i,j})
\end{equation}
which has $n(n-1)/2$ free parameters (i.e. $\theta_{i,j}$'s).
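As a small self-contained check (my own sketch, not part of the original code), the $R_{1,4}(\theta)$ example above can be built directly and verified to be orthogonal:

```python
import torch

def rotation(dim, i, j, theta):
    # 2-D rotation embedded in the (i, j)-plane of a dim-dimensional space
    R = torch.eye(dim)
    c, s = torch.cos(theta), torch.sin(theta)
    R[i, i] = c
    R[i, j] = s
    R[j, i] = -s
    R[j, j] = c
    return R

theta = torch.tensor(0.3)
R14 = rotation(4, 0, 3, theta)                      # R_{1,4}(theta), 0-based indices
print(torch.allclose(R14 @ R14.T, torch.eye(4)))    # True: rotations are orthogonal
```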
```python
class Representation():
def __init__(self, dim=5):
self.dim = dim
self.params = dim*(dim-1)//2
self.thetas = torch.FloatTensor(2*torch.rand(self.params)-1).to(device).requires_grad_()
self.__matrix = None
def set_thetas(self, thetas):
self.thetas = thetas
self.thetas.requires_grad = True
self.clear_matrix()
def clear_matrix(self):
self.__matrix = None
def get_matrix(self):
if self.__matrix is None:
k = 0
mats = []
for i in range(self.dim-1):
for j in range(self.dim-1-i):
theta_ij = self.thetas[k]
k+=1
c, s = torch.cos(theta_ij), torch.sin(theta_ij)
rotation_i = torch.eye(self.dim, self.dim).to(device)
rotation_i[i, i] = c
rotation_i[i, i+j+1] = s
rotation_i[j+i+1, i] = -s
rotation_i[j+i+1, j+i+1] = c
mats.append(rotation_i)
def chain_mult(l):
if len(l)>=3:
return l[0]@l[1]@chain_mult(l[2:])
elif len(l)==2:
return l[0]@l[1]
else:
return l[0]
self.__matrix = chain_mult(mats)
return self.__matrix
```
**LatentWorld**
Now, for symmetry's sake, we'll also have a `LatentWorld` which acts as the environment in the latent space.
```python
class LatentWorld():
class action_space():
def __init__(self,n_actions):
self.n = n_actions
def sample(self, k=1):
return torch.randint(0,self.n,(k,))
class observation_space():
def __init__(self,n_features):
self.shape = [n_features]
def __init__(self,
dim=5,
n_actions=8,
action_reps=None):
self.dim = dim
self.action_space = self.action_space(n_actions)
self.observation_space = self.observation_space(dim)
if action_reps is None:
self.action_reps = [Representation(dim=self.dim) for _ in range(n_actions)]
else:
if len(action_reps)!=n_actions:
raise Exception("Must pass an action representation for every action.")
if not all([rep.dim==self.dim]):
raise Exception("Action representations do not act on the dimension of the latent space.")
self.action_reps = action_reps
def reset(self, state):
self.state = state
return self.get_observation()
def clear_representations(self):
for rep in self.action_reps:
rep.clear_matrix()
def get_representation_params(self):
params = []
for rep in self.action_reps:
params.append(rep.thetas)
return params
def save_representations(self, path):
if os.path.splitext(path)[-1] != '.pth':
path += '.pth'
rep_thetas = [rep.thetas for rep in self.action_reps]
return torch.save(rep_thetas, path)
def load_reprentations(self, path):
rep_thetas = torch.load(path)
for rep in self.action_reps:
rep.set_thetas(rep_thetas.pop(0))
def get_observation(self):
return self.state
def step(self,action):
self.state = torch.mv(self.action_reps[action].get_matrix(), self.state.squeeze())
obs = self.get_observation()
return obs
```
**Entanglement regularisation**
So for $m$ parameters, ${\theta_1, \dots, \theta_m}$, we want to regularise with
\begin{equation}
\sum_{i \neq j} \vert\theta_i\vert^2, \mathrm{where\ } \theta_j {=} \mathrm{max_k}({\vert\theta_k\vert}).
\end{equation}
We will also use this term as our metric of 'entanglement'.
```python
def calc_entanglement(params):
params = params.abs().pow(2)
return params.sum() - params.max()
params = torch.FloatTensor([1,1,0.5,0,0])
calc_entanglement(params)
```
tensor(1.2500)
### Training with regularization
We find it helpful to increase the regularization strength halfway through training.
```python
dim=5
obs_env = TeapotWorld('teapot/actions.npy','teapot/small_teapot/')
lat_env = LatentWorld(dim = dim,
n_actions = obs_env.action_space.n
)
decoder = Decoder(n_in = dim, n_hid = 128).to(device)
encoder = Encoder(n_out = dim, n_hid = 128).to(device)
optimizer_dec = optim.Adam(decoder.parameters(),
lr=1e-3,
# betas=(0.9, 0.99),
weight_decay=0)
optimizer_enc = optim.Adam(encoder.parameters(),
lr=1e-3,
# betas=(0.9, 0.99),
weight_decay=0)
optimizer_rep = optim.Adam(lat_env.get_representation_params(),
lr=1e-2,
# betas=(0.9, 0.99),
weight_decay=0)
losses = []
entanglement = []
orthogonality = []
```
100%|██████████| 1000/1000 [00:00<00:00, 1754.53it/s]
```python
import time
n_sgd_steps = 10000
ep_steps = 5
batch_eps = 16
i = 1
t_start = time.time()
temp = 0
while i < n_sgd_steps:
loss = torch.zeros(1).to(device)
for _ in range(batch_eps):
t_ep = -1
while t_ep < ep_steps:
if t_ep == -1:
obs_x = obs_env.reset()
obs_z = lat_env.reset(encoder(obs_x))
else:
action = obs_env.get_action()
obs_x = obs_env.step()
obs_z = lat_env.step(action)
t_ep += 1
obs_x_recon = decoder(obs_z)
loss += F.binary_cross_entropy(obs_x_recon, obs_x)
loss /= (ep_steps * batch_eps)
loss_raw = loss
reg_loss = sum([calc_entanglement(r.thetas) for r in lat_env.action_reps])/8
if i < 5000:
loss += reg_loss*1e-3
else:
loss += reg_loss*3e-2
losses.append(loss_raw.item())
entanglement.append(reg_loss.item())
optimizer_dec.zero_grad()
optimizer_rep.zero_grad()
optimizer_enc.zero_grad()
loss.backward()
optimizer_dec.step()
optimizer_rep.step()
optimizer_enc.step()
    # Remember to clear the cached action representations after we update the parameters!
lat_env.clear_representations()
i+=1
if i%10==0:
print("iter {} : loss={:.3e} : entanglement={:.2e} : last 10 iters in {:.3f}s".format(
i, loss.item(), reg_loss.item(), time.time() - t_start
), end="\r" if i%100 else "\n")
t_start = time.time()
```
iter 100 : loss=8.301e-01 : entanglement=1.50e+00 : last 10 iters in 3.398s
iter 200 : loss=8.234e-01 : entanglement=1.10e+00 : last 10 iters in 3.412s
iter 300 : loss=7.464e-01 : entanglement=7.17e-01 : last 10 iters in 3.326s
iter 400 : loss=6.625e-01 : entanglement=5.25e-01 : last 10 iters in 3.296s
iter 500 : loss=6.165e-01 : entanglement=4.72e-01 : last 10 iters in 3.372s
iter 600 : loss=4.312e-01 : entanglement=4.46e-01 : last 10 iters in 3.389s
iter 700 : loss=3.550e-01 : entanglement=4.39e-01 : last 10 iters in 3.328s
iter 800 : loss=2.399e-01 : entanglement=4.42e-01 : last 10 iters in 3.372s
iter 900 : loss=1.676e-01 : entanglement=4.29e-01 : last 10 iters in 3.408s
iter 1000 : loss=1.614e-01 : entanglement=5.21e-01 : last 10 iters in 3.788s
iter 1100 : loss=1.492e-01 : entanglement=7.08e-01 : last 10 iters in 3.340s
iter 1200 : loss=1.425e-01 : entanglement=7.89e-01 : last 10 iters in 3.372s
iter 1300 : loss=1.357e-01 : entanglement=8.48e-01 : last 10 iters in 3.384s
iter 1400 : loss=1.248e-01 : entanglement=8.88e-01 : last 10 iters in 3.416s
iter 1500 : loss=1.295e-01 : entanglement=8.86e-01 : last 10 iters in 3.374s
iter 1600 : loss=1.296e-01 : entanglement=8.93e-01 : last 10 iters in 3.390s
iter 1700 : loss=1.212e-01 : entanglement=8.90e-01 : last 10 iters in 3.384s
iter 1800 : loss=1.178e-01 : entanglement=8.89e-01 : last 10 iters in 3.327s
iter 1900 : loss=1.226e-01 : entanglement=8.66e-01 : last 10 iters in 3.345s
iter 2000 : loss=1.226e-01 : entanglement=8.80e-01 : last 10 iters in 3.365s
iter 2100 : loss=1.156e-01 : entanglement=8.64e-01 : last 10 iters in 3.392s
iter 2200 : loss=1.142e-01 : entanglement=8.59e-01 : last 10 iters in 3.336s
iter 2300 : loss=1.164e-01 : entanglement=8.46e-01 : last 10 iters in 3.398s
iter 2400 : loss=1.144e-01 : entanglement=8.43e-01 : last 10 iters in 3.356s
iter 2500 : loss=1.121e-01 : entanglement=8.32e-01 : last 10 iters in 3.356s
iter 2600 : loss=1.132e-01 : entanglement=8.24e-01 : last 10 iters in 3.344s
iter 2700 : loss=1.144e-01 : entanglement=8.18e-01 : last 10 iters in 3.440s
iter 2800 : loss=1.130e-01 : entanglement=8.05e-01 : last 10 iters in 3.405s
iter 2900 : loss=1.130e-01 : entanglement=8.01e-01 : last 10 iters in 3.392s
iter 3000 : loss=1.140e-01 : entanglement=7.92e-01 : last 10 iters in 3.428s
iter 3100 : loss=1.165e-01 : entanglement=7.86e-01 : last 10 iters in 3.373s
iter 3200 : loss=1.119e-01 : entanglement=7.82e-01 : last 10 iters in 3.740s
iter 3300 : loss=1.125e-01 : entanglement=7.73e-01 : last 10 iters in 3.768s
iter 3400 : loss=1.111e-01 : entanglement=7.62e-01 : last 10 iters in 3.769s
iter 3500 : loss=1.106e-01 : entanglement=7.54e-01 : last 10 iters in 3.817s
iter 3600 : loss=1.067e-01 : entanglement=7.44e-01 : last 10 iters in 3.804s
iter 3700 : loss=1.155e-01 : entanglement=7.35e-01 : last 10 iters in 3.772s
iter 3800 : loss=1.100e-01 : entanglement=7.31e-01 : last 10 iters in 3.788s
iter 3900 : loss=1.091e-01 : entanglement=7.24e-01 : last 10 iters in 3.884s
iter 4000 : loss=1.132e-01 : entanglement=7.20e-01 : last 10 iters in 3.892s
iter 4100 : loss=1.092e-01 : entanglement=7.18e-01 : last 10 iters in 3.855s
iter 4200 : loss=1.080e-01 : entanglement=7.05e-01 : last 10 iters in 3.933s
iter 4300 : loss=1.078e-01 : entanglement=7.01e-01 : last 10 iters in 3.920s
iter 4400 : loss=1.101e-01 : entanglement=7.02e-01 : last 10 iters in 3.965s
iter 4500 : loss=1.109e-01 : entanglement=6.86e-01 : last 10 iters in 3.944s
iter 4600 : loss=1.099e-01 : entanglement=6.84e-01 : last 10 iters in 3.988s
iter 4700 : loss=1.076e-01 : entanglement=6.73e-01 : last 10 iters in 3.995s
iter 4800 : loss=1.073e-01 : entanglement=6.74e-01 : last 10 iters in 4.090s
iter 4900 : loss=1.042e-01 : entanglement=6.74e-01 : last 10 iters in 4.064s
iter 5000 : loss=1.099e-01 : entanglement=6.69e-01 : last 10 iters in 4.104s
iter 5100 : loss=1.503e-01 : entanglement=3.34e-01 : last 10 iters in 3.988s
iter 5200 : loss=1.384e-01 : entanglement=2.47e-01 : last 10 iters in 3.882s
iter 5300 : loss=1.256e-01 : entanglement=1.54e-01 : last 10 iters in 3.868s
iter 5400 : loss=1.206e-01 : entanglement=8.89e-02 : last 10 iters in 4.004s
iter 5500 : loss=1.173e-01 : entanglement=4.64e-02 : last 10 iters in 3.896s
iter 5600 : loss=1.177e-01 : entanglement=2.57e-02 : last 10 iters in 3.940s
iter 5700 : loss=1.125e-01 : entanglement=1.45e-02 : last 10 iters in 3.963s
iter 5800 : loss=1.116e-01 : entanglement=8.64e-03 : last 10 iters in 3.824s
iter 5900 : loss=1.104e-01 : entanglement=4.99e-03 : last 10 iters in 3.836s
iter 6000 : loss=1.097e-01 : entanglement=3.90e-03 : last 10 iters in 3.912s
iter 6100 : loss=1.103e-01 : entanglement=2.56e-03 : last 10 iters in 3.904s
iter 6200 : loss=1.065e-01 : entanglement=1.83e-03 : last 10 iters in 3.884s
iter 6300 : loss=1.079e-01 : entanglement=1.38e-03 : last 10 iters in 3.880s
iter 6400 : loss=1.089e-01 : entanglement=1.41e-03 : last 10 iters in 3.888s
iter 6500 : loss=1.066e-01 : entanglement=9.59e-04 : last 10 iters in 3.941s
iter 6600 : loss=1.051e-01 : entanglement=9.38e-04 : last 10 iters in 3.915s
iter 6700 : loss=1.085e-01 : entanglement=7.41e-04 : last 10 iters in 3.960s
iter 6800 : loss=1.065e-01 : entanglement=6.89e-04 : last 10 iters in 3.920s
iter 6900 : loss=1.074e-01 : entanglement=8.87e-04 : last 10 iters in 4.000s
iter 7000 : loss=1.072e-01 : entanglement=6.43e-04 : last 10 iters in 4.032s
iter 7100 : loss=1.077e-01 : entanglement=6.19e-04 : last 10 iters in 4.036s
iter 7200 : loss=1.054e-01 : entanglement=5.41e-04 : last 10 iters in 4.092s
iter 7300 : loss=1.081e-01 : entanglement=5.11e-04 : last 10 iters in 4.076s
iter 7400 : loss=1.095e-01 : entanglement=5.76e-04 : last 10 iters in 4.067s
iter 7500 : loss=1.025e-01 : entanglement=5.35e-04 : last 10 iters in 4.107s
iter 7600 : loss=1.071e-01 : entanglement=4.75e-04 : last 10 iters in 4.160s
iter 7700 : loss=1.068e-01 : entanglement=4.39e-04 : last 10 iters in 4.124s
iter 7800 : loss=1.032e-01 : entanglement=4.82e-04 : last 10 iters in 4.155s
iter 7900 : loss=1.051e-01 : entanglement=5.41e-04 : last 10 iters in 4.195s
iter 8000 : loss=1.053e-01 : entanglement=5.58e-04 : last 10 iters in 4.228s
iter 8100 : loss=1.063e-01 : entanglement=5.11e-04 : last 10 iters in 4.224s
iter 8200 : loss=1.064e-01 : entanglement=5.91e-04 : last 10 iters in 4.285s
iter 8300 : loss=1.036e-01 : entanglement=5.89e-04 : last 10 iters in 4.201s
iter 8400 : loss=1.065e-01 : entanglement=5.15e-04 : last 10 iters in 4.307s
iter 8500 : loss=1.041e-01 : entanglement=6.81e-04 : last 10 iters in 4.344s
iter 8600 : loss=1.039e-01 : entanglement=4.47e-04 : last 10 iters in 4.293s
iter 8700 : loss=1.035e-01 : entanglement=7.87e-04 : last 10 iters in 4.317s
iter 8800 : loss=1.047e-01 : entanglement=8.44e-04 : last 10 iters in 4.324s
iter 8900 : loss=1.053e-01 : entanglement=7.56e-04 : last 10 iters in 4.372s
iter 9000 : loss=1.030e-01 : entanglement=6.96e-04 : last 10 iters in 4.400s
iter 9100 : loss=1.038e-01 : entanglement=1.03e-03 : last 10 iters in 4.399s
iter 9200 : loss=1.041e-01 : entanglement=7.93e-04 : last 10 iters in 4.428s
iter 9300 : loss=9.924e-02 : entanglement=1.13e-03 : last 10 iters in 4.428s
iter 9400 : loss=1.006e-01 : entanglement=6.01e-04 : last 10 iters in 4.451s
iter 9500 : loss=1.023e-01 : entanglement=7.11e-04 : last 10 iters in 4.428s
iter 9600 : loss=1.001e-01 : entanglement=5.78e-04 : last 10 iters in 4.449s
iter 9700 : loss=1.041e-01 : entanglement=7.96e-04 : last 10 iters in 4.540s
iter 9800 : loss=1.036e-01 : entanglement=9.06e-04 : last 10 iters in 4.548s
iter 9900 : loss=1.031e-01 : entanglement=7.36e-04 : last 10 iters in 4.528s
iter 10000 : loss=1.048e-01 : entanglement=5.82e-04 : last 10 iters in 4.440s
### Show reconstructed states
```python
obs = env.reset()
latent = encoder(obs)
reconstructed = decoder(latent)
fig, (ax1,ax2) = plt.subplots(1, 2)
ax1.imshow(obs.to('cpu'))
ax2.imshow(reconstructed.detach().to('cpu'))
```
### Show action representations
```python
width=0.5
rep_thetas = [rep.thetas.detach().to('cpu').numpy() for rep in lat_env.action_reps]
#print(rep_thetas)
plt_lim = max( 0.22, max([max(t) for t in rep_thetas])/(2*np.pi) )
titles = ["transformation +", "transformation -"]
cols=["r","b","g","black"]
labels=["x","y","z","color"]
with plt.style.context('seaborn-paper', after_reset=True):
fig, axs = plt.subplots(1, 2, figsize=(12, 4), gridspec_kw={"wspace":0.5, "hspace":0.5})
for i, thetas in enumerate(rep_thetas):
x = np.arange(len(thetas))
axs[i%2].bar(x - width/2, thetas/(2*np.pi), width, label=labels[i//2], color=cols[i//2])
for i in range(2):
axs[i].hlines([0.2,-0.2], xmin=-10, xmax=10, linestyles="dashed")
axs[i].hlines(0., xmin=-10, xmax=10)
axs[i].set_yticks([-0.2, 0., 0.2])
axs[i].set_xticks(x-0.25)
axs[i].set_xticklabels(["12","13","14","15","23","24","25","34","35","45"], fontsize=10)
axs[i].set_xlabel("$ij$", fontsize=20)
axs[i].set_ylim(-plt_lim,plt_lim)
axs[i].set_xlim(-0.5,9)
axs[i].set_title(titles[i], fontsize=20)
axs[i].tick_params(labelsize=15)
axs[i].set_ylabel(r"$\theta / 2\pi$", fontsize=20)
axs[1].legend(loc="center right", bbox_to_anchor=(1.5,0.5), fontsize=15)
plt.savefig("teapot/representations.png", bbox_inches='tight')
```
### For the paper, generate sequence of images from the dataset
```python
import matplotlib.gridspec as gridspec
from IPython import display
plt.figure(figsize = (6,6))
gs1 = gridspec.GridSpec(3, 3)
gs1.update(wspace=0.02, hspace=0.02)
plt.grid(None)
state = env.reset()
for i in range(9):
ax = plt.subplot(gs1[i])
ax.axis('off')
ax.set_aspect('equal')
ax.imshow(state.to('cpu'))
display.display(plt.gcf())
time.sleep(0.2)
display.clear_output(wait=True)
state = env.step()
plt.savefig("teapot/env.png", bbox_inches='tight')
```
```python
```
|
3c9e0c400840f467ccaeecdcd48fa576fa74483d
| 94,939 |
ipynb
|
Jupyter Notebook
|
fig4_teapot.ipynb
|
luis-armando-perez-rey/learning-group-structure
|
e238308de73a29506d9281e1b55cdd2de2795ebb
|
[
"MIT"
] | 12 |
2020-02-16T10:34:27.000Z
|
2022-02-20T00:27:19.000Z
|
fig4_teapot.ipynb
|
luis-armando-perez-rey/learning-group-structure
|
e238308de73a29506d9281e1b55cdd2de2795ebb
|
[
"MIT"
] | 4 |
2021-06-08T22:32:50.000Z
|
2022-03-12T00:49:42.000Z
|
fig4_teapot.ipynb
|
luis-armando-perez-rey/learning-group-structure
|
e238308de73a29506d9281e1b55cdd2de2795ebb
|
[
"MIT"
] | 3 |
2020-04-03T08:24:19.000Z
|
2022-01-16T02:02:10.000Z
| 76.873684 | 29,760 | 0.755211 | true | 9,386 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.803174 | 0.637031 | 0.511646 |
__label__eng_Latn
| 0.317221 | 0.027055 |
```python
%run base.py
```
```python
from sympy import init_printing
init_printing()
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
```
# Constructing the Watson function
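For reference, the standard Watson test function that the cell below builds symbolically (here with $m=31$ residuals, $n=5$ variables and $t_i = i/29$) is
\begin{align}
r_i(x) &= \sum_{j=2}^{n} (j-1)\,x_j\,t_i^{\,j-2} - \Bigl(\sum_{j=1}^{n} x_j\,t_i^{\,j-1}\Bigr)^{2} - 1, \qquad 1 \le i \le 29, \\
r_{30}(x) &= x_1, \qquad r_{31}(x) = x_2 - x_1^2 - 1, \qquad f(x) = \sum_{i=1}^{31} r_i(x)^2 .
\end{align}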
```python
m=31
n=5
xvec2 = symbols(f'x1:{n+1}')
xvec2  # the vector of symbolic variables
rlist = []
tlist = [Rational(i+1, 29) for i in range(29)]
for i in range(m-2):  # index i picks out t_i; the x_j come from xvec2
tt = -1
for j in range(2,n+1):#2<=j<=n
tt = tt + (j-1)*xvec2[j-1]*(tlist[i]**(j-2))
sum_root = 0
for j in range(1,n+1):#1<=j<=n
sum_root = sum_root + xvec2[j-1]*(tlist[i]**(j-1))
rlist.append(tt-sum_root**2)
rlist.append(xvec2[0])#r30
rlist.append(xvec2[1]-xvec2[0]**2-1)#r31
rlist[29]
#print('-'*40)
rlist[30]
#print('-'*40)
rlist[2]
```
```python
%%time
Watson = 0
for rx in rlist:
Watson += rx**2
```
```python
foo_Watson = lambdify(xvec2,Watson,'numpy')
x00 = list((0 for _ in range(n)))
foo_Watson(*x00)
```
```python
gexpr = get_g(Watson, xvec2)  # this loop is quite slow
gexpr
```
```python
Gexpr = get_G(Watson, xvec2)
Gexpr
```
```python
%%time
xvec_Waston = symbols(f'x1:{n+1}')
x = modified_newton(Watson, xvec_Waston, list(
(0 for _ in range(n))), eps=1e-5, maxiter=5000)
print('x结果:', x)
```
```python
print("函数值:",foo_Watson(*x))
```
> The run hit the maximum of 5000 iterations; on most iterations the Hessian was singular, so the method fell back to the negative-gradient direction.
x结果: [[-0.07499081]
[ 0.97396254]
[ 0.23935247]
[-0.49671772]
[ 0.68380743]]
Wall time: 2min 32s
函数值: [0.01721672]
```python
%%time
xvec_Waston = symbols(f'x1:{n+1}')
x = damped_newton(Watson, xvec_Waston, list(
(0 for _ in range(n))), eps=1e-5, maxiter=5000)
print('结果:', x)
```
```python
print("函数值:",foo_Watson(*x))
```
> The damped Newton method here has no fallback for a singular Hessian.
```python
%%time
xvec_Waston = symbols(f'x1:{n+1}')
x = quasi_newton(Watson, xvec_Waston, list(
(0 for _ in range(n))), eps=1e-5, maxiter=5000)
print('结果:', x)
```
```python
print("函数值:",foo_Watson(*x))
```
> Converged within 50 iterations (progress is printed every 50 iterations).
```python
%%time
xvec_Waston = symbols(f'x1:{n+1}')
x = quasi_newton(Watson, xvec_Waston, list(
(0 for _ in range(n))), eps=1e-5, maxiter=5000,method='SR1')
print('结果:', x)
```
```python
print("函数值:",foo_Watson(*x))
```
> Also converged within 50 iterations, but used the line search somewhat more.
```python
%%time
xvec_Waston = symbols(f'x1:{n+1}')
x = quasi_newton(Watson, xvec_Waston, list(
(0 for _ in range(n))), eps=1e-5, maxiter=5000,method='DFP')
print('结果:', x)
```
```python
print("函数值:",foo_Watson(*x))
```
> Converged between 50 and 100 iterations.
```python
```
|
d91791b99d02bce093f7fa16470caeb2a2cbd18f
| 5,982 |
ipynb
|
Jupyter Notebook
|
waston.ipynb
|
LingrenKong/Numerical-Optimization-Code
|
598e2b5099e2ba57ea0aa7ff4a5f5547889828b2
|
[
"MIT"
] | 2 |
2021-11-10T09:06:03.000Z
|
2021-12-07T06:43:45.000Z
|
waston.ipynb
|
LingrenKong/Numerical-Optimization-Code
|
598e2b5099e2ba57ea0aa7ff4a5f5547889828b2
|
[
"MIT"
] | null | null | null |
waston.ipynb
|
LingrenKong/Numerical-Optimization-Code
|
598e2b5099e2ba57ea0aa7ff4a5f5547889828b2
|
[
"MIT"
] | null | null | null | 20.209459 | 77 | 0.490137 | true | 1,055 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.833325 | 0.743168 | 0.6193 |
__label__eng_Latn
| 0.117456 | 0.277172 |
# Homework 2
Daniela Paz Díaz Mora
201710003-6
```python
import numpy as np
from sympy import Matrix
from sympy.abc import x, y
from numpy import linalg
```
### Problem 1
# 2.4 Lutkepohl
#### Determine the autocovariances $\Gamma_y(0)$, $\Gamma_y(1)$, $\Gamma_y(2)$, $\Gamma_y(3)$ of the process (2.4.1). Compute and plot the autocorrelations $R_y(0)$,$R_y(1)$,$R_y(2)$,$R_y(3)$.
To compute $\Gamma_y(0)$ we use $Vec(\Gamma_y(0))=(I_{k^2}-A_1\otimes A_1)^{-1}Vec(\Sigma_u)$
```python
# vectorization of an m×n matrix (stack its columns)
def Vec(M):
aux=[]
(a,b)=M.shape
for i in np.arange(b):
for j in np.arange(a):
aux.append(M[j,i])
return np.array(aux).T
```
```python
Sigma_u=np.array([[0.26,0.03,0],[0.03,0.09,0],[0,0,0.81]])
vec_sigma=Vec(Sigma_u)
A1=np.array([[0.7,0.1,0],[0,0.4,0.1],[0.9,0,0.8]])
vec_gamma_0=np.dot(np.linalg.inv(np.eye(9)-np.kron(A1,A1)),vec_sigma)
# un-vectorize: reshape back into a 3×3 matrix
Gamma_0=vec_gamma_0.reshape(3,3).T
```
```python
Matrix(Gamma_0)
```
$\displaystyle \left[\begin{matrix}0.564403943690934 & 0.176906937783372 & 1.05195079079148\\0.176906937783372 & 0.307903999270431 & 1.14202798778072\\1.05195079079148 & 1.14202798778072 & 7.72771203647051\end{matrix}\right]$
Then $\Gamma_y(h)=A_1\Gamma_y(h-1)$
```python
Gamma_1=np.dot(A1,Gamma_0)
Matrix(Gamma_1)
```
$\displaystyle \left[\begin{matrix}0.412773454361991 & 0.154625256375403 & 0.850568352332105\\0.175957854192496 & 0.237364398486244 & 1.22958239875934\\1.34952418195502 & 1.07283863422961 & 7.12892534088874\end{matrix}\right]$
```python
Gamma_2=np.dot(A1,Gamma_1)
Matrix(Gamma_2)
```
$\displaystyle \left[\begin{matrix}0.306537203472643 & 0.131974119311407 & 0.718356086508407\\0.205335559872501 & 0.202229622817459 & 1.20472549359261\\1.45111545448981 & 0.99743363812155 & 6.46865178980988\end{matrix}\right]$
```python
Gamma_3=np.dot(A1,Gamma_2)
Matrix(Gamma_3)
```
$\displaystyle \left[\begin{matrix}0.2351095984181 & 0.112604845799731 & 0.623321809915146\\0.227245769397981 & 0.180635212939138 & 1.12875537641803\\1.43677584671723 & 0.916723617877506 & 5.82144190970547\end{matrix}\right]$
The autocorrelations are computed via
$R_y(h)=D^{-1}\Gamma_y(h) D^{-1}$
```python
D_inv=np.diag(1/np.sqrt(np.diag(Gamma_0)))
R0=np.dot(np.dot(D_inv,Gamma_0),D_inv)
Matrix(R0)
```
$\displaystyle \left[\begin{matrix}1.0 & 0.424367562556585 & 0.503703466234757\\0.424367562556585 & 1.0 & 0.740361143062004\\0.503703466234757 & 0.740361143062004 & 1.0\end{matrix}\right]$
```python
R1=np.dot(np.dot(D_inv,Gamma_1),D_inv)
Matrix(R1)
```
$\displaystyle \left[\begin{matrix}0.731344029353602 & 0.37091786212516 & 0.407275921164448\\0.42209088367011 & 0.770903915014652 & 0.797121471605457\\0.646189930335929 & 0.695506630360993 & 0.922514362238677\end{matrix}\right]$
```python
R2=np.dot(np.dot(D_inv,Gamma_2),D_inv)
Matrix(R2)
```
$\displaystyle \left[\begin{matrix}0.54311669310465 & 0.316581904782695 & 0.343968989740363\\0.492562655490582 & 0.656794401166062 & 0.78100707955979\\0.694834673571943 & 0.64662260150314 & 0.837072054352108\end{matrix}\right]$
```python
R3=np.dot(np.dot(D_inv,Gamma_3),D_inv)
Matrix(R3)
```
$\displaystyle \left[\begin{matrix}0.416562642848658 & 0.27011854109762 & 0.298463919588629\\0.545121262450462 & 0.58666082079852 & 0.731756690434717\\0.687968468229711 & 0.594299397970662 & 0.753320243072141\end{matrix}\right]$
```python
import matplotlib.pyplot as plt
```
```python
R0
```
array([[1. , 0.42436756, 0.50370347],
[0.42436756, 1. , 0.74036114],
[0.50370347, 0.74036114, 1. ]])
```python
fig, axs = plt.subplots(3, 3,figsize=(15,15))
x=[0,1,2,3]
axs[0,0].plot(x,[R0[0,0],R1[0,0],R2[0,0],R3[0,0]],marker='o')
axs[0,0].set_title('componente [1,1]')
axs[0,0].set_xlabel('h')
axs[0,0].set_ylabel('R(h)[1,1]')
axs[0,1].plot(x,[R0[0,1],R1[0,1],R2[0,1],R3[0,1]],marker='o')
axs[0,1].set_title('componente [1,2]')
axs[0,1].set_xlabel('h')
axs[0,1].set_ylabel('R(h)[1,1]')
axs[0,2].plot(x,[R0[0,2],R1[0,2],R2[0,2],R3[0,2]],marker='o')
axs[0,2].set_title('componente [1,3]')
axs[0,2].set_xlabel('h')
axs[0,2].set_ylabel('R(h)[1,3]')
axs[1,0].plot(x,[R0[1,0],R1[1,0],R2[1,0],R3[1,0]],marker='o')
axs[1,0].set_title('componente [2,1]')
axs[1,0].set_xlabel('h')
axs[1,0].set_ylabel('R(h)[2,1]')
axs[1,1].plot(x,[R0[1,1],R1[1,1],R2[1,1],R3[1,1]],marker='o')
axs[1,1].set_title('componente [2,2]')
axs[1,1].set_xlabel('h')
axs[1,1].set_ylabel('R(h)[2,2]')
axs[1,2].plot(x,[R0[1,2],R1[1,2],R2[1,2],R3[1,2]],marker='o')
axs[1,2].set_title('componente [2,3]')
axs[1,2].set_xlabel('h')
axs[1,2].set_ylabel('R(h)[2,3]')
axs[2,0].plot(x,[R0[2,0],R1[2,0],R2[2,0],R3[2,0]],marker='o')
axs[2,0].set_title('componente [3,1]')
axs[2,0].set_xlabel('h')
axs[2,0].set_ylabel('R(h)[3,1]')
axs[2,1].plot(x,[R0[2,1],R1[2,1],R2[2,1],R3[2,1]],marker='o')
axs[2,1].set_title('componente [3,2]')
axs[2,1].set_xlabel('h')
axs[2,1].set_ylabel('R(h)[3,2]')
axs[2,2].plot(x,[R0[2,2],R1[2,2],R2[2,2],R3[2,2]],marker='o')
axs[2,2].set_title('componente [3,3]')
axs[2,2].set_xlabel('h')
axs[2,2].set_ylabel('R(h)[3,3]')
plt.show()
```
### Problem 2
# Problem 2.5 Lutkepohl
### Consider again the process (2.4.1)
### a) Suppose that $y_{2000}=[0.7 \quad 1\quad 1.5]^T$ and $y_{1999}=[1\quad 1.5\quad 3]^T$ and forecast $y_{2001}$,$y_{2002}$,$y_{2003}$
```python
from rpy2.robjects import r
```
```python
r('library(MASS)')
r('MASS::mvrnorm(n=5, mu = c(0,0,0), Sigma = diag(3))')
```
<span>FloatMatrix with 15 elements.</span>
<table>
<tbody>
<tr>
<td>
1.148095
</td>
<td>
-0.991160
</td>
<td>
-0.040938
</td>
<td>
...
</td>
<td>
0.645486
</td>
<td>
0.735373
</td>
<td>
-0.524594
</td>
</tr>
</tbody>
</table>
```python
# Build the error vector with N(0, Sigma) innovations in the first three components
def U(s1,s2,s3):
u1=np.random.normal(0,np.sqrt(s1))
u2=np.random.normal(0,np.sqrt(s2))
u3=np.random.normal(0,np.sqrt(s3))
    return np.array([u1,u2,u3,0,0,0]).T
```
```python
A=[[]]
Y_t=np.array([0.7, 1,1.5,1,1.5,3]).T
```
```python
def Y_th(A,Y,h):
sum=0
for i in np.arange(h):
if i==0:
sum+= np.dot(np.eye(6),U(Sigma_u[0,0],Sigma_u[1,1],Sigma_u[2,2]))
else:
sum+= np.dot(np.linalg.matrix_power(A, i),U(Sigma_u[0,0],Sigma_u[1,1],Sigma_u[2,2]))
return sum+np.dot(np.linalg.matrix_power(A, h),Y)
```
```python
y1999=np.array([1,1.5,3]).T
y2000=np.array([0.7,1,1.5]).T
A=np.array([[0.7,.1,0,-0.2,0,0],[0,.4,.1,0,.1,.1],[.9,0,.8,0,0,0],[1,0,0,0,0,0],[0,1,0,0,0,0],[0,0,1,0,0,0]])
V=np.array([2,1,0,0,0,0]).T
A2=np.array([[-.2,0,0],[0,0.1,0.1],[0,0,0]])
```
```python
Matrix(A2)
```
$\displaystyle \left[\begin{matrix}-0.2 & 0.0 & 0.0\\0.0 & 0.1 & 0.1\\0.0 & 0.0 & 0.0\end{matrix}\right]$
```python
Mu = np.dot(np.linalg.inv(np.eye(6)-A),V)
J = np.block([np.eye(3),np.zeros((3,3))])
v_medias= np.dot(J,Mu)
v_medias
```
array([ 6.875 , 14.375 , 30.9375])
```python
z1999 = y1999-v_medias
z2000 = y2000-v_medias
z2001 = np.dot(A1,z2000)+np.dot(A2,z1999)
z2002 = np.dot(A1,z2001)+np.dot(A2,z2000)
z2003 = np.dot(A1,z2002)+np.dot(A2,z2001)
y2001 = z2001 + v_medias
y2002 = z2002 + v_medias
y2003 = z2003 + v_medias
```
```python
Matrix(y2001)
```
$\displaystyle \left[\begin{matrix}2.39\\2.0\\1.83\end{matrix}\right]$
```python
Matrix(y2002)
```
$\displaystyle \left[\begin{matrix}3.733\\2.233\\3.615\end{matrix}\right]$
```python
Matrix(y2003)
```
$\displaystyle \left[\begin{matrix}4.3584\\2.6377\\6.2517\end{matrix}\right]$
### (b) Determine the MSE matrices for forecast horizons h = 1, 2, 3.
```python
def phi_i(i):
return np.dot(np.dot(J,np.linalg.matrix_power(A,i)),J.T)
```
```python
MSE_1=np.dot(np.dot(phi_i(0),Sigma_u),phi_i(0).T)
Matrix(MSE_1)
```
$\displaystyle \left[\begin{matrix}0.26 & 0.03 & 0.0\\0.03 & 0.09 & 0.0\\0.0 & 0.0 & 0.81\end{matrix}\right]$
```python
MSE_2=MSE_1+np.dot(np.dot(phi_i(1),Sigma_u),phi_i(1).T)
Matrix(MSE_2)
```
$\displaystyle \left[\begin{matrix}0.3925 & 0.042 & 0.1665\\0.042 & 0.1125 & 0.0756\\0.1665 & 0.0756 & 1.539\end{matrix}\right]$
```python
MSE_3=MSE_2+np.dot(np.dot(phi_i(2),Sigma_u),phi_i(2).T)
Matrix(MSE_3)
```
$\displaystyle \left[\begin{matrix}0.41745 & 0.055701 & 0.279603\\0.055701 & 0.161298 & 0.234117\\0.279603 & 0.234117 & 2.352645\end{matrix}\right]$
### c) Assume that $y_t$ is a Gaussian process and construct 90% and 95% forecast intervals for t = 2001, 2002, 2003.
```python
import scipy.stats
```
```python
#90%
aux_y=[y2001,y2002,y2003]
aux_mse=[MSE_1,MSE_2,MSE_3]
zaph=scipy.stats.norm.ppf(1-0.9/2)
for i in np.arange(3):
for j in np.arange(3):
lim_inf=round(aux_y[i][j]-zaph*np.sqrt(aux_mse[i][j,j]),4)
lim_sup=round(aux_y[i][j]+zaph*np.sqrt(aux_mse[i][j,j]),4)
print('El intervalo de confianza del 90% para y',2001+i,'_',j+1,'es de [',lim_inf,',',lim_sup,']')
```
El intervalo de confianza del 90% para y 2001 _ 1 es de ( 2.3259 , 2.4541 )
El intervalo de confianza del 90% para y 2001 _ 2 es de ( 1.9623 , 2.0377 )
El intervalo de confianza del 90% para y 2001 _ 3 es de ( 1.7169 , 1.9431 )
El intervalo de confianza del 90% para y 2002 _ 1 es de ( 3.6543 , 3.8117 )
El intervalo de confianza del 90% para y 2002 _ 2 es de ( 2.1909 , 2.2751 )
El intervalo de confianza del 90% para y 2002 _ 3 es de ( 3.4591 , 3.7709 )
El intervalo de confianza del 90% para y 2003 _ 1 es de ( 4.2772 , 4.4396 )
El intervalo de confianza del 90% para y 2003 _ 2 es de ( 2.5872 , 2.6882 )
El intervalo de confianza del 90% para y 2003 _ 3 es de ( 6.059 , 6.4444 )
```python
#95%
aux_y=[y2001,y2002,y2003]
aux_mse=[MSE_1,MSE_2,MSE_3]
zaph=scipy.stats.norm.ppf(1-0.95/2)
for i in np.arange(3):
for j in np.arange(3):
lim_inf=round(aux_y[i][j]-zaph*np.sqrt(aux_mse[i][j,j]),4)
lim_sup=round(aux_y[i][j]+zaph*np.sqrt(aux_mse[i][j,j]),4)
print('El intervalo de confianza del 95% para y',2001+i,'_',i+1,'es de [',lim_inf,',',lim_sup,']')
```
El intervalo de confianza del 95% para y 2001 _ 1 es de ( 2.358 , 2.422 )
El intervalo de confianza del 95% para y 2001 _ 1 es de ( 1.9812 , 2.0188 )
El intervalo de confianza del 95% para y 2001 _ 1 es de ( 1.7736 , 1.8864 )
El intervalo de confianza del 95% para y 2002 _ 2 es de ( 3.6937 , 3.7723 )
El intervalo de confianza del 95% para y 2002 _ 2 es de ( 2.212 , 2.254 )
El intervalo de confianza del 95% para y 2002 _ 2 es de ( 3.5372 , 3.6928 )
El intervalo de confianza del 95% para y 2003 _ 3 es de ( 4.3179 , 4.3989 )
El intervalo de confianza del 95% para y 2003 _ 3 es de ( 2.6125 , 2.6629 )
El intervalo de confianza del 95% para y 2003 _ 3 es de ( 6.1555 , 6.3479 )
### d) Use the Bonferroni method to determine a joint forecast region for GNP 2001 , GNP 2002 , GNP 2003 with probability content at least 97%.
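By Bonferroni, if each of the three GNP forecast intervals has individual coverage $1-\alpha$, their joint (rectangular) region has coverage of at least $1-3\alpha$; taking $\alpha = 0.01$, i.e. three 99% intervals, therefore guarantees joint coverage of at least
\begin{equation}
1 - 3(0.01) = 0.97,
\end{equation}
which is the idea behind the cell below.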
```python
zaph_d=scipy.stats.norm.ppf(1-0.99/2)
for i in np.arange(3):
lim_inf=round(aux_y[i][0]-zaph_d*np.sqrt(aux_mse[i][0,0]),4)
lim_sup=round(aux_y[i][0]+zaph_d*np.sqrt(aux_mse[i][0,0]),4)
print('GNP_',2001+i,'con 99% de confianza es [',lim_inf,',',lim_sup,']')
```
GNP_ 2001 con 99% de confianza es [ 2.3836 , 2.3964 ]
GNP_ 2002 con 99% de confianza es [ 3.7251 , 3.7409 ]
GNP_ 2003 con 99% de confianza es [ 4.3503 , 4.3665 ]
```python
GNP2001=[2.3836 , 2.3964]
GNP2002=[3.7251 , 3.7409]
GNP2003=[4.3503 , 4.3665 ]
R=np.cross(np.cross(GNP2001,GNP2002),GNP2003)
```
|
2261a07f75bb9266fc5d834bace899f15627a0a0
| 136,342 |
ipynb
|
Jupyter Notebook
|
Tareas/Tarea2.ipynb
|
pazDaniela/MAT287-Series-de-Tiempo-2020-2
|
e7ac2bf9b474311556d8a345d567c54aa55788fd
|
[
"MIT"
] | 2 |
2020-10-30T00:47:43.000Z
|
2020-12-05T14:11:58.000Z
|
Tareas/Tarea2.ipynb
|
pazDaniela/MAT287-Series-de-Tiempo-2020-2
|
e7ac2bf9b474311556d8a345d567c54aa55788fd
|
[
"MIT"
] | null | null | null |
Tareas/Tarea2.ipynb
|
pazDaniela/MAT287-Series-de-Tiempo-2020-2
|
e7ac2bf9b474311556d8a345d567c54aa55788fd
|
[
"MIT"
] | null | null | null | 143.820675 | 109,288 | 0.876348 | true | 5,038 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.887205 | 0.805632 | 0.714761 |
__label__yue_Hant
| 0.089188 | 0.49896 |
```python
from epipack import SymbolicEpiModel
from epipack.interactive import InteractiveIntegrator, Range, LogRange
import sympy
import numpy as np
%matplotlib widget
S, I, R, R0, tau, omega = sympy.symbols("S I R R_0 tau omega")
I0 = 0.01
model = SymbolicEpiModel([S,I,R])\
.set_processes([
(S, I, R0/tau, I, I),
(I, 1/tau, R),
(R, omega, S),
])\
.set_initial_conditions({S:1-I0, I:I0})
parameters = {
R0: LogRange(min=0.1,max=10,step_count=1000),
tau: Range(min=0.1,max=10,value=8.0),
omega: 1/14
}
```
```python
t = np.logspace(-3,2,1000)
InteractiveIntegrator(model, parameters, t, figsize=(4,4))
```
InteractiveIntegrator(children=(VBox(children=(FloatLogSlider(value=1.0, continuous_update=False, description=…
```python
```
|
4593c100d1981b17253a6047054ed202d0182741
| 2,106 |
ipynb
|
Jupyter Notebook
|
interactive.ipynb
|
benmaier/networks2021-hons-softwaredemo
|
f50112bb785123e4beed7077ef75283d72579a20
|
[
"MIT"
] | 1 |
2021-07-02T18:22:08.000Z
|
2021-07-02T18:22:08.000Z
|
interactive.ipynb
|
benmaier/networks2021-hons-softwaredemo
|
f50112bb785123e4beed7077ef75283d72579a20
|
[
"MIT"
] | null | null | null |
interactive.ipynb
|
benmaier/networks2021-hons-softwaredemo
|
f50112bb785123e4beed7077ef75283d72579a20
|
[
"MIT"
] | null | null | null | 23.4 | 120 | 0.506648 | true | 266 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.92523 | 0.699254 | 0.646971 |
__label__kor_Hang
| 0.181389 | 0.341461 |
# Batch Normalization
One way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train.
One idea along these lines is batch normalization which was proposed by [1] in 2015.
The idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However, even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.
The authors of [1] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [1] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.
It is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.
[1] [Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing
Internal Covariate Shift", ICML 2015.](https://arxiv.org/abs/1502.03167)
```python
# As usual, a bit of setup
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
def print_mean_std(x,axis=0):
print(' means: ', x.mean(axis=axis))
print(' stds: ', x.std(axis=axis))
print()
```
run the following from the cs231n directory and try again:
python setup.py build_ext --inplace
You may also need to restart your iPython kernel
```python
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.items():
print('%s: ' % k, v.shape)
```
X_train: (49000, 3, 32, 32)
y_train: (49000,)
X_val: (1000, 3, 32, 32)
y_val: (1000,)
X_test: (1000, 3, 32, 32)
y_test: (1000,)
## Batch normalization: forward
In the file `cs231n/layers.py`, implement the batch normalization forward pass in the function `batchnorm_forward`. Once you have done so, run the following to test your implementation.
Referencing the paper linked to above in [1] may be helpful!
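Before the test cell below, here is a minimal NumPy sketch of the standard training-time computation for orientation (variable names are mine; the assignment's `batchnorm_forward` additionally maintains running averages via `bn_param` and must handle the `'test'` mode described above):

```python
def batchnorm_forward_sketch(x, gamma, beta, eps=1e-5):
    # x: (N, D) minibatch; gamma, beta: (D,) learnable scale and shift
    mu = x.mean(axis=0)                      # per-feature mean of the minibatch
    var = x.var(axis=0)                      # per-feature (biased) variance
    x_hat = (x - mu) / np.sqrt(var + eps)    # zero-mean, unit-variance features
    out = gamma * x_hat + beta               # learnable re-scale and shift
    cache = (x_hat, gamma, mu, var, eps)     # saved for the backward pass
    return out, cache
```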
```python
# Check the training-time forward pass by checking means and variances
# of features both before and after batch normalization
# Simulate the forward pass for a two-layer network
np.random.seed(231)
N, D1, D2, D3 = 200, 50, 60, 3
X = np.random.randn(N, D1)
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
a = np.maximum(0, X.dot(W1)).dot(W2)
print('Before batch normalization:')
print_mean_std(a,axis=0)
gamma = np.ones((D3,))
beta = np.zeros((D3,))
# Means should be close to zero and stds close to one
print('After batch normalization (gamma=1, beta=0)')
a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})
print_mean_std(a_norm,axis=0)
gamma = np.asarray([1.0, 2.0, 3.0])
beta = np.asarray([11.0, 12.0, 13.0])
# Now means should be close to beta and stds close to gamma
print('After batch normalization (gamma=', gamma, ', beta=', beta, ')')
a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})
print_mean_std(a_norm,axis=0)
```
Before batch normalization:
means: [ -2.3814598 -13.18038246 1.91780462]
stds: [27.18502186 34.21455511 37.68611762]
After batch normalization (gamma=1, beta=0)
means: [5.32907052e-17 7.04991621e-17 1.85962357e-17]
stds: [0.99999999 1. 1. ]
After batch normalization (gamma= [1. 2. 3.] , beta= [11. 12. 13.] )
means: [11. 12. 13.]
stds: [0.99999999 1.99999999 2.99999999]
```python
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
np.random.seed(231)
N, D1, D2, D3 = 200, 50, 60, 3
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
bn_param = {'mode': 'train'}
gamma = np.ones(D3)
beta = np.zeros(D3)
for t in range(50):
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
batchnorm_forward(a, gamma, beta, bn_param)
bn_param['mode'] = 'test'
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
a_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print('After batch normalization (test-time):')
print_mean_std(a_norm,axis=0)
```
After batch normalization (test-time):
means: [-0.00856644 -0.02735023 -0.05918961]
stds: [1.06152724 1.02815831 0.95174132]
## Batch normalization: backward
Now implement the backward pass for batch normalization in the function `batchnorm_backward`.
To derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.
Once you have finished, run the following to numerically check your backward pass.
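If you get stuck, the following is a hedged sketch of one graph-style backward pass. It assumes the `(x, x_hat, mu, var, eps, gamma)` cache layout from the forward sketch above, which is a choice made for these notes rather than a requirement of the assignment.

```python
# Hedged sketch of a computation-graph batchnorm backward pass.
import numpy as np

def batchnorm_backward_sketch(dout, cache):
    x, x_hat, mu, var, eps, gamma = cache
    N, D = dout.shape
    std = np.sqrt(var + eps)

    # gradients of the learnable scale and shift (sum over the batch axis)
    dbeta = dout.sum(axis=0)
    dgamma = (dout * x_hat).sum(axis=0)

    # backprop into the normalized activations
    dx_hat = dout * gamma
    # branch through the variance node
    dvar = np.sum(dx_hat * (x - mu) * -0.5 * (var + eps) ** -1.5, axis=0)
    # branch through the mean node (it receives gradient from two paths)
    dmu = np.sum(-dx_hat / std, axis=0) + dvar * np.mean(-2.0 * (x - mu), axis=0)
    # combine the three paths that reach x
    dx = dx_hat / std + dvar * 2.0 * (x - mu) / N + dmu / N
    return dx, dgamma, dbeta
```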
```python
# Gradient check batchnorm backward pass
np.random.seed(231)
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]
fb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)
db_num = eval_numerical_gradient_array(fb, beta.copy(), dout)
_, cache = batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = batchnorm_backward(dout, cache)
#You should expect to see relative errors between 1e-13 and 1e-8
print('dx error: ', rel_error(dx_num, dx))
print('dgamma error: ', rel_error(da_num, dgamma))
print('dbeta error: ', rel_error(db_num, dbeta))
```
dx error: 1.7029235612572515e-09
dgamma error: 7.420414216247087e-13
dbeta error: 2.8795057655839487e-12
## Batch normalization: alternative backward
In class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For example, you can derive a very simple formula for the sigmoid function's backward pass by simplifying gradients on paper.
Surprisingly, it turns out that you can do a similar simplification for the batch normalization backward pass too!
In the forward pass, given a set of inputs $X=\begin{bmatrix}x_1\\x_2\\...\\x_N\end{bmatrix}$,
we first calculate the mean $\mu$ and variance $v$.
With $\mu$ and $v$ calculated, we can calculate the standard deviation $\sigma$ and normalized data $Y$.
The equations and graph illustration below describe the computation ($y_i$ is the i-th element of the vector $Y$).
\begin{align}
& \mu=\frac{1}{N}\sum_{k=1}^N x_k & v=\frac{1}{N}\sum_{k=1}^N (x_k-\mu)^2 \\
& \sigma=\sqrt{v+\epsilon} & y_i=\frac{x_i-\mu}{\sigma}
\end{align}
The meat of our problem during backpropagation is to compute $\frac{\partial L}{\partial X}$, given the upstream gradient we receive, $\frac{\partial L}{\partial Y}.$ To do this, recall the chain rule in calculus gives us $\frac{\partial L}{\partial X} = \frac{\partial L}{\partial Y} \cdot \frac{\partial Y}{\partial X}$.
The unknown/hard part is $\frac{\partial Y}{\partial X}$. We can find this by first deriving, step by step, our local gradients at
$\frac{\partial v}{\partial X}$, $\frac{\partial \mu}{\partial X}$,
$\frac{\partial \sigma}{\partial v}$,
$\frac{\partial Y}{\partial \sigma}$, and $\frac{\partial Y}{\partial \mu}$,
and then use the chain rule to compose these gradients (which appear in the form of vectors!) appropriately to compute $\frac{\partial Y}{\partial X}$.
If it's challenging to directly reason about the gradients over $X$ and $Y$ which require matrix multiplication, try reasoning about the gradients in terms of individual elements $x_i$ and $y_i$ first: in that case, you will need to come up with the derivations for $\frac{\partial L}{\partial x_i}$, by relying on the Chain Rule to first calculate the intermediate $\frac{\partial \mu}{\partial x_i}, \frac{\partial v}{\partial x_i}, \frac{\partial \sigma}{\partial x_i},$ then assemble these pieces to calculate $\frac{\partial y_i}{\partial x_i}$.
Make sure each of the intermediate gradient derivations is as simplified as possible, for ease of implementation.
After doing so, implement the simplified batch normalization backward pass in the function `batchnorm_backward_alt` and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.
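For reference, one common closed-form result looks like the sketch below (again assuming the cache layout used in the sketches above); your own simplification may be expressed differently but should be numerically equivalent.

```python
# Hedged sketch of the simplified ("alternative") batchnorm backward pass.
import numpy as np

def batchnorm_backward_alt_sketch(dout, cache):
    x, x_hat, mu, var, eps, gamma = cache
    N = dout.shape[0]
    std = np.sqrt(var + eps)

    dbeta = dout.sum(axis=0)                  # sum_k dout_k
    dgamma = (dout * x_hat).sum(axis=0)       # sum_k dout_k * x_hat_k
    # all of dL/dx collapses into one vectorized expression
    dx = (gamma / (N * std)) * (N * dout - dbeta - x_hat * dgamma)
    return dx, dgamma, dbeta
```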
```python
np.random.seed(231)
N, D = 100, 500
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
out, cache = batchnorm_forward(x, gamma, beta, bn_param)
t1 = time.time()
dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)
t2 = time.time()
dx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)
t3 = time.time()
print('dx difference: ', rel_error(dx1, dx2))
print('dgamma difference: ', rel_error(dgamma1, dgamma2))
print('dbeta difference: ', rel_error(dbeta1, dbeta2))
print('speedup: %.2fx' % ((t2 - t1) / (t3 - t2)))
```
dx difference: 1.0689044038190631e-12
dgamma difference: 0.0
dbeta difference: 0.0
speedup: 1.14x
## Fully Connected Nets with Batch Normalization
Now that you have a working implementation for batch normalization, go back to your `FullyConnectedNet` in the file `cs231n/classifiers/fc_net.py`. Modify your implementation to add batch normalization.
Concretely, when the `normalization` flag is set to `"batchnorm"` in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.
HINT: You might find it useful to define an additional helper layer similar to those in the file `cs231n/layer_utils.py`. If you decide to do so, do it in the file `cs231n/classifiers/fc_net.py`.
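As an illustration of the hint, a helper "sandwich" layer might look like the hedged sketch below. It assumes the usual `(out, cache)`-style layer functions (`affine_forward`, `batchnorm_forward`, `relu_forward` and their backward counterparts) provided by this assignment, so double-check the signatures in your own files.

```python
# Hedged sketch of an affine -> batchnorm -> ReLU helper layer.
from cs231n.layers import *  # affine/batchnorm/relu forward and backward functions

def affine_bn_relu_forward(x, w, b, gamma, beta, bn_param):
    a, fc_cache = affine_forward(x, w, b)
    an, bn_cache = batchnorm_forward(a, gamma, beta, bn_param)
    out, relu_cache = relu_forward(an)
    return out, (fc_cache, bn_cache, relu_cache)

def affine_bn_relu_backward(dout, cache):
    fc_cache, bn_cache, relu_cache = cache
    dan = relu_backward(dout, relu_cache)
    da, dgamma, dbeta = batchnorm_backward(dan, bn_cache)
    dx, dw, db = affine_backward(da, fc_cache)
    return dx, dw, db, dgamma, dbeta
```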
```python
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
# You should expect relative errors between 1e-4~1e-10 for W,
# relative errors between 1e-08~1e-10 for b,
# and relative errors between 1e-08~1e-09 for beta and gammas.
for reg in [0, 3.14]:
print('Running check with reg = ', reg)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64,
normalization='batchnorm')
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
if reg == 0: print()
```
Running check with reg = 0
Initial loss: 2.2611955101340957
W1 relative error: 1.10e-04
W2 relative error: 2.85e-06
W3 relative error: 4.05e-10
b1 relative error: 2.66e-07
b2 relative error: 1.67e-08
b3 relative error: 1.01e-10
beta1 relative error: 7.33e-09
beta2 relative error: 1.89e-09
gamma1 relative error: 6.96e-09
gamma2 relative error: 1.96e-09
Running check with reg = 3.14
Initial loss: 6.996533220108303
W1 relative error: 1.98e-06
W2 relative error: 2.29e-06
W3 relative error: 2.79e-08
b1 relative error: 1.94e-08
b2 relative error: 8.22e-07
b3 relative error: 2.10e-10
beta1 relative error: 6.65e-09
beta2 relative error: 4.23e-09
gamma1 relative error: 6.27e-09
gamma2 relative error: 5.28e-09
# Batchnorm for deep networks
Run the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.
```python
np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [100, 100, 100, 100, 100]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 2e-2
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization='batchnorm')
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)
print('Solver with batch norm:')
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True,print_every=20)
bn_solver.train()
print('\nSolver without batch norm:')
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=20)
solver.train()
```
Solver with batch norm:
(Iteration 1 / 200) loss: 2.340975
(Epoch 0 / 10) train acc: 0.111000; val_acc: 0.123000
(Epoch 1 / 10) train acc: 0.335000; val_acc: 0.272000
(Iteration 21 / 200) loss: 2.039365
(Epoch 2 / 10) train acc: 0.426000; val_acc: 0.294000
(Iteration 41 / 200) loss: 2.036710
(Epoch 3 / 10) train acc: 0.493000; val_acc: 0.326000
(Iteration 61 / 200) loss: 1.769536
(Epoch 4 / 10) train acc: 0.544000; val_acc: 0.309000
(Iteration 81 / 200) loss: 1.265761
(Epoch 5 / 10) train acc: 0.576000; val_acc: 0.313000
(Iteration 101 / 200) loss: 1.261353
(Epoch 6 / 10) train acc: 0.633000; val_acc: 0.316000
(Iteration 121 / 200) loss: 1.102834
(Epoch 7 / 10) train acc: 0.692000; val_acc: 0.314000
(Iteration 141 / 200) loss: 1.215466
(Epoch 8 / 10) train acc: 0.697000; val_acc: 0.295000
(Iteration 161 / 200) loss: 0.777501
(Epoch 9 / 10) train acc: 0.756000; val_acc: 0.320000
(Iteration 181 / 200) loss: 0.850938
(Epoch 10 / 10) train acc: 0.776000; val_acc: 0.326000
Solver without batch norm:
(Iteration 1 / 200) loss: 2.302332
(Epoch 0 / 10) train acc: 0.129000; val_acc: 0.131000
(Epoch 1 / 10) train acc: 0.283000; val_acc: 0.250000
(Iteration 21 / 200) loss: 2.041970
(Epoch 2 / 10) train acc: 0.316000; val_acc: 0.277000
(Iteration 41 / 200) loss: 1.900473
(Epoch 3 / 10) train acc: 0.373000; val_acc: 0.282000
(Iteration 61 / 200) loss: 1.713156
(Epoch 4 / 10) train acc: 0.390000; val_acc: 0.310000
(Iteration 81 / 200) loss: 1.662209
(Epoch 5 / 10) train acc: 0.434000; val_acc: 0.300000
(Iteration 101 / 200) loss: 1.696059
(Epoch 6 / 10) train acc: 0.535000; val_acc: 0.345000
(Iteration 121 / 200) loss: 1.557987
(Epoch 7 / 10) train acc: 0.530000; val_acc: 0.304000
(Iteration 141 / 200) loss: 1.432189
(Epoch 8 / 10) train acc: 0.628000; val_acc: 0.339000
(Iteration 161 / 200) loss: 1.033932
(Epoch 9 / 10) train acc: 0.661000; val_acc: 0.340000
(Iteration 181 / 200) loss: 0.901034
(Epoch 10 / 10) train acc: 0.726000; val_acc: 0.318000
Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.
```python
def plot_training_history(title, label, baseline, bn_solvers, plot_fn, bl_marker='.', bn_marker='.', labels=None):
"""utility function for plotting training history"""
plt.title(title)
plt.xlabel(label)
bn_plots = [plot_fn(bn_solver) for bn_solver in bn_solvers]
bl_plot = plot_fn(baseline)
num_bn = len(bn_plots)
for i in range(num_bn):
label='with_norm'
if labels is not None:
label += str(labels[i])
plt.plot(bn_plots[i], bn_marker, label=label)
label='baseline'
if labels is not None:
label += str(labels[0])
plt.plot(bl_plot, bl_marker, label=label)
plt.legend(loc='lower center', ncol=num_bn+1)
plt.subplot(3, 1, 1)
plot_training_history('Training loss','Iteration', solver, [bn_solver], \
lambda x: x.loss_history, bl_marker='o', bn_marker='o')
plt.subplot(3, 1, 2)
plot_training_history('Training accuracy','Epoch', solver, [bn_solver], \
lambda x: x.train_acc_history, bl_marker='-o', bn_marker='-o')
plt.subplot(3, 1, 3)
plot_training_history('Validation accuracy','Epoch', solver, [bn_solver], \
lambda x: x.val_acc_history, bl_marker='-o', bn_marker='-o')
plt.gcf().set_size_inches(15, 15)
plt.show()
```
# Batch normalization and initialization
We will now run a small experiment to study the interaction of batch normalization and weight initialization.
The first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.
```python
np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [50, 50, 50, 50, 50, 50, 50]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
bn_solvers_ws = {}
solvers_ws = {}
weight_scales = np.logspace(-4, 0, num=20)
for i, weight_scale in enumerate(weight_scales):
print('Running weight scale %d / %d' % (i + 1, len(weight_scales)))
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization='batchnorm')
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
bn_solver.train()
bn_solvers_ws[weight_scale] = bn_solver
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
solver.train()
solvers_ws[weight_scale] = solver
```
Running weight scale 1 / 20
Running weight scale 2 / 20
Running weight scale 3 / 20
Running weight scale 4 / 20
Running weight scale 5 / 20
Running weight scale 6 / 20
Running weight scale 7 / 20
Running weight scale 8 / 20
Running weight scale 9 / 20
Running weight scale 10 / 20
Running weight scale 11 / 20
Running weight scale 12 / 20
Running weight scale 13 / 20
Running weight scale 14 / 20
Running weight scale 15 / 20
Running weight scale 16 / 20
Running weight scale 17 / 20
Running weight scale 18 / 20
Running weight scale 19 / 20
Running weight scale 20 / 20
```python
# Plot results of weight scale experiment
best_train_accs, bn_best_train_accs = [], []
best_val_accs, bn_best_val_accs = [], []
final_train_loss, bn_final_train_loss = [], []
for ws in weight_scales:
best_train_accs.append(max(solvers_ws[ws].train_acc_history))
bn_best_train_accs.append(max(bn_solvers_ws[ws].train_acc_history))
best_val_accs.append(max(solvers_ws[ws].val_acc_history))
bn_best_val_accs.append(max(bn_solvers_ws[ws].val_acc_history))
final_train_loss.append(np.mean(solvers_ws[ws].loss_history[-100:]))
bn_final_train_loss.append(np.mean(bn_solvers_ws[ws].loss_history[-100:]))
plt.subplot(3, 1, 1)
plt.title('Best val accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best val accuracy')
plt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
plt.title('Best train accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best training accuracy')
plt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')
plt.legend()
plt.subplot(3, 1, 3)
plt.title('Final training loss vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Final training loss')
plt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')
plt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')
plt.legend()
plt.gca().set_ylim(1.0, 3.5)
plt.gcf().set_size_inches(15, 15)
plt.show()
```
## Inline Question 1:
Describe the results of this experiment. How does the scale of weight initialization affect models with/without batch normalization differently, and why?
## Answer:
With a large weight initialization scale, the loss becomes very high for the network without batch normalization. Batch normalization reduces this effect, and it also helps when the weight scale is very small.
# Batch normalization and batch size
We will now run a small experiment to study the interaction of batch normalization and batch size.
The first cell will train 6-layer networks both with and without batch normalization using different batch sizes. The second cell will plot training accuracy and validation set accuracy over time.
```python
def run_batchsize_experiments(normalization_mode):
np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [100, 100, 100, 100, 100]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
n_epochs=10
weight_scale = 2e-2
batch_sizes = [5,10,50]
lr = 10**(-3.5)
solver_bsize = batch_sizes[0]
print('No normalization: batch size = ',solver_bsize)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)
solver = Solver(model, small_data,
num_epochs=n_epochs, batch_size=solver_bsize,
update_rule='adam',
optim_config={
'learning_rate': lr,
},
verbose=False)
solver.train()
bn_solvers = []
for i in range(len(batch_sizes)):
b_size=batch_sizes[i]
print('Normalization: batch size = ',b_size)
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=normalization_mode)
bn_solver = Solver(bn_model, small_data,
num_epochs=n_epochs, batch_size=b_size,
update_rule='adam',
optim_config={
'learning_rate': lr,
},
verbose=False)
bn_solver.train()
bn_solvers.append(bn_solver)
return bn_solvers, solver, batch_sizes
batch_sizes = [5,10,50]
bn_solvers_bsize, solver_bsize, batch_sizes = run_batchsize_experiments('batchnorm')
```
No normalization: batch size = 5
Normalization: batch size = 5
Normalization: batch size = 10
Normalization: batch size = 50
```python
plt.subplot(2, 1, 1)
plot_training_history('Training accuracy (Batch Normalization)','Epoch', solver_bsize, bn_solvers_bsize, \
lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)
plt.subplot(2, 1, 2)
plot_training_history('Validation accuracy (Batch Normalization)','Epoch', solver_bsize, bn_solvers_bsize, \
lambda x: x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)
plt.gcf().set_size_inches(15, 10)
plt.show()
```
## Inline Question 2:
Describe the results of this experiment. What does this imply about the relationship between batch normalization and batch size? Why is this relationship observed?
## Answer:
As noted in the original paper, batch normalization works best with reasonably large batches. With very small batches, the per-batch mean and variance estimates are very noisy, so the normalization (and the running statistics used at test time) becomes unreliable and performance degrades.
# Layer Normalization
Batch normalization has proved to be effective in making networks easier to train, but the dependency on batch size makes it less useful in complex networks which have a cap on the input batch size due to hardware limitations.
Several alternatives to batch normalization have been proposed to mitigate this problem; one such technique is Layer Normalization [2]. Instead of normalizing over the batch, we normalize over the features. In other words, when using Layer Normalization, each feature vector corresponding to a single datapoint is normalized based on the sum of all terms within that feature vector.
[2] [Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. "Layer Normalization." stat 1050 (2016): 21.](https://arxiv.org/pdf/1607.06450.pdf)
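To make the batch-axis vs. feature-axis distinction concrete, here is a small illustrative snippet (an assumption of these notes, not part of the assignment): for a 2-D activation matrix of shape `(N, D)`, the only difference between the two schemes is the axis over which the statistics are computed.

```python
# Illustrative only: batchnorm normalizes each feature across the batch,
# layernorm normalizes each sample across its features.
import numpy as np

a = np.random.randn(4, 3)   # N=4 samples, D=3 features
eps = 1e-5

bn = (a - a.mean(axis=0)) / np.sqrt(a.var(axis=0) + eps)                                 # per-feature stats
ln = (a - a.mean(axis=1, keepdims=True)) / np.sqrt(a.var(axis=1, keepdims=True) + eps)   # per-sample stats

print(bn.mean(axis=0), bn.std(axis=0))   # ~0 and ~1 for every column (feature)
print(ln.mean(axis=1), ln.std(axis=1))   # ~0 and ~1 for every row (sample)
```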
## Inline Question 3:
Which of these data preprocessing steps is analogous to batch normalization, and which is analogous to layer normalization?
1. Scaling each image in the dataset, so that the RGB channels for each row of pixels within an image sum up to 1.
2. Scaling each image in the dataset, so that the RGB channels for all pixels within an image sum up to 1.
3. Subtracting the mean image of the dataset from each image in the dataset.
4. Setting all RGB values to either 0 or 1 depending on a given threshold.
## Answer:
[FILL THIS IN]
# Layer Normalization: Implementation
Now you'll implement layer normalization. This step should be relatively straightforward, as conceptually the implementation is almost identical to that of batch normalization. One significant difference though is that for layer normalization, we do not keep track of the moving moments, and the testing phase is identical to the training phase, where the mean and variance are directly calculated per datapoint.
Here's what you need to do:
* In `cs231n/layers.py`, implement the forward pass for layer normalization in the function `layernorm_forward`.
Run the cell below to check your results.
* In `cs231n/layers.py`, implement the backward pass for layer normalization in the function `layernorm_backward`.
Run the second cell below to check your results.
* Modify `cs231n/classifiers/fc_net.py` to add layer normalization to the `FullyConnectedNet`. When the `normalization` flag is set to `"layernorm"` in the constructor, you should insert a layer normalization layer before each ReLU nonlinearity.
Run the third cell below to run the batch size experiment on layer normalization.
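If you want a starting point, the sketch below mirrors the batchnorm sketches earlier in this notebook, with the reductions moved to the feature axis; it is a hedged illustration (the cache layout and `eps` handling are assumptions), not the official solution.

```python
# Hedged sketch of layer normalization forward and backward passes.
import numpy as np

def layernorm_forward_sketch(x, gamma, beta, ln_param):
    eps = ln_param.get('eps', 1e-5)
    mu = x.mean(axis=1, keepdims=True)           # per-sample mean
    var = x.var(axis=1, keepdims=True)           # per-sample variance
    x_hat = (x - mu) / np.sqrt(var + eps)
    out = gamma * x_hat + beta
    cache = (x_hat, var, eps, gamma)
    return out, cache

def layernorm_backward_sketch(dout, cache):
    x_hat, var, eps, gamma = cache
    D = dout.shape[1]
    std = np.sqrt(var + eps)                     # shape (N, 1)

    dbeta = dout.sum(axis=0)
    dgamma = (dout * x_hat).sum(axis=0)
    dx_hat = dout * gamma
    # same algebra as the simplified batchnorm backward pass, but the
    # reductions now run over the feature axis of each sample
    dx = (1.0 / (D * std)) * (D * dx_hat
                              - dx_hat.sum(axis=1, keepdims=True)
                              - x_hat * (dx_hat * x_hat).sum(axis=1, keepdims=True))
    return dx, dgamma, dbeta
```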
```python
# Check the training-time forward pass by checking means and variances
# of features both before and after layer normalization
# Simulate the forward pass for a two-layer network
np.random.seed(231)
N, D1, D2, D3 =4, 50, 60, 3
X = np.random.randn(N, D1)
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
a = np.maximum(0, X.dot(W1)).dot(W2)
print('Before layer normalization:')
print_mean_std(a,axis=1)
gamma = np.ones(D3)
beta = np.zeros(D3)
# Means should be close to zero and stds close to one
print('After layer normalization (gamma=1, beta=0)')
a_norm, _ = layernorm_forward(a, gamma, beta, {'mode': 'train'})
print_mean_std(a_norm,axis=1)
gamma = np.asarray([3.0,3.0,3.0])
beta = np.asarray([5.0,5.0,5.0])
# Now means should be close to beta and stds close to gamma
print('After layer normalization (gamma=', gamma, ', beta=', beta, ')')
a_norm, _ = layernorm_forward(a, gamma, beta, {'mode': 'train'})
print_mean_std(a_norm,axis=1)
```
Before layer normalization:
means: [-59.06673243 -47.60782686 -43.31137368 -26.40991744]
stds: [10.07429373 28.39478981 35.28360729 4.01831507]
After layer normalization (gamma=1, beta=0)
means: [ 4.81096644e-16 -7.40148683e-17 2.22044605e-16 -5.92118946e-16]
stds: [0.99999995 0.99999999 1. 0.99999969]
After layer normalization (gamma= [3. 3. 3.] , beta= [5. 5. 5.] )
means: [5. 5. 5. 5.]
stds: [2.99999985 2.99999998 2.99999999 2.99999907]
```python
# Gradient check layernorm backward pass
np.random.seed(231)
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
ln_param = {}
fx = lambda x: layernorm_forward(x, gamma, beta, ln_param)[0]
fg = lambda a: layernorm_forward(x, a, beta, ln_param)[0]
fb = lambda b: layernorm_forward(x, gamma, b, ln_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)
db_num = eval_numerical_gradient_array(fb, beta.copy(), dout)
_, cache = layernorm_forward(x, gamma, beta, ln_param)
dx, dgamma, dbeta = layernorm_backward(dout, cache)
#You should expect to see relative errors between 1e-12 and 1e-8
print('dx error: ', rel_error(dx_num, dx))
print('dgamma error: ', rel_error(da_num, dgamma))
print('dbeta error: ', rel_error(db_num, dbeta))
```
dx error: 1.0
dgamma error: 4.519489546032799e-12
dbeta error: 2.276445013433725e-12
# Layer Normalization and batch size
We will now run the previous batch size experiment with layer normalization instead of batch normalization. Compared to the previous experiment, you should see a markedly smaller influence of batch size on the training history!
```python
ln_solvers_bsize, solver_bsize, batch_sizes = run_batchsize_experiments('layernorm')
plt.subplot(2, 1, 1)
plot_training_history('Training accuracy (Layer Normalization)','Epoch', solver_bsize, ln_solvers_bsize, \
lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)
plt.subplot(2, 1, 2)
plot_training_history('Validation accuracy (Layer Normalization)','Epoch', solver_bsize, ln_solvers_bsize, \
lambda x: x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)
plt.gcf().set_size_inches(15, 10)
plt.show()
```
## Inline Question 4:
When is layer normalization likely to not work well, and why?
1. Using it in a very deep network
2. Having a very small dimension of features
3. Having a high regularization term
## Answer:
[FILL THIS IN]
| 2927f5541ad8aec21fb08e129ad5c1811d4f1a61 | 460,304 | ipynb | Jupyter Notebook | assignments/2019/assignment2/BatchNormalization.ipynb | comratvlad/cs231n.github.io | 63c72c3e8e88a6edfea7db7df604d715416ba15b | ["MIT"] | null | null | null | assignments/2019/assignment2/BatchNormalization.ipynb | comratvlad/cs231n.github.io | 63c72c3e8e88a6edfea7db7df604d715416ba15b | ["MIT"] | null | null | null | assignments/2019/assignment2/BatchNormalization.ipynb | comratvlad/cs231n.github.io | 63c72c3e8e88a6edfea7db7df604d715416ba15b | ["MIT"] | null | null | null | 402.715661 | 119,564 | 0.929466 | true | 9,035 | Qwen/Qwen-72B | 1. YES 2. YES | 0.731059 | 0.822189 | 0.601068 | __label__eng_Latn | 0.921406 | 0.234814 |
# A/B Testing from Scratch: Bayesian Approach
We reuse the simple problem of comparing two online ads campaigns (or treatments, user interfaces, or slot machines). We detail how a Bayesian A/B test is conducted and highlight the differences between it and the frequentist approach. Readers are encouraged to tinker with the widgets provided in order to explore the impact of each parameter.
```python
import numpy as np
import pandas as pd
#widgets
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from IPython.display import display
#plots
import matplotlib.pyplot as plt
from plotnine import *
from mizani import *
#stats
import scipy as sp
import statsmodels as sm
import warnings
warnings.filterwarnings('ignore')
import collections
```
## Start with A Problem
A typical situation marketers (research physicians, UX researchers, or gamblers) find themselves in is that they have two variations of ads (treatments, user interfaces, or slot machines) and want to find out which one has the better performance in the long run.
Practitioners know this as A/B testing and statisticians as **hypothesis testing**. Consider the following problem. We have been running an online ads campaign `A` for a period of time, but now we think a new ads variation might work better, so we run an experiment by dividing our audience in half: one half sees the existing campaign `A`, whereas the other sees a new campaign `B`. Our performance metric is conversion (sales) per click (ignore the [ads attribution problem](https://support.google.com/analytics/answer/1662518) for now). After the experiment has run for two months, we obtain the daily clicks and conversions of each campaign and determine which campaign has the better performance.
We simulate the aforementioned problem with both campaigns randomly getting about a hundred clicks per day. The secret we will pretend not to know is that the hypothetical campaign `B` has a slightly better conversion rate than `A` in the long run (10.5% vs 10%). With this synthetic data, we will explore some useful statistical concepts and exploit them for our Bayesian A/B testing.
```python
def gen_campaigns(p1,p2,nb_days,scaler,seed):
#generate fake data
np.random.seed(seed)
ns = np.random.triangular(50,100,150,size=nb_days*2).astype(int)
np.random.seed(seed)
es = np.random.randn(nb_days*2) / scaler
n1 = ns[:nb_days]
c1 = ((p1 + es[:nb_days]) * n1).astype(int)
n2 = ns[nb_days:]
c2 = ((p2 + es[nb_days:]) * n2).astype(int)
conv_days = pd.DataFrame({'click_day':range(nb_days),'click_a':n1,'conv_a':c1,'click_b':n2,'conv_b':c2})
conv_days = conv_days[['click_day','click_a','click_b','conv_a','conv_b']]
conv_days['cumu_click_a'] = conv_days.click_a.cumsum()
conv_days['cumu_click_b'] = conv_days.click_b.cumsum()
conv_days['cumu_conv_a'] = conv_days.conv_a.cumsum()
conv_days['cumu_conv_b'] = conv_days.conv_b.cumsum()
conv_days['cumu_rate_a'] = conv_days.cumu_conv_a / conv_days.cumu_click_a
conv_days['cumu_rate_b'] = conv_days.cumu_conv_b / conv_days.cumu_click_b
return conv_days
conv_days = gen_campaigns(p1 = 0.10,
p2 = 0.105,
nb_days = 24,
scaler=300,
seed = 1412) #god-mode
conv_days.head()
```
|   | click_day | click_a | click_b | conv_a | conv_b | cumu_click_a | cumu_click_b | cumu_conv_a | cumu_conv_b | cumu_rate_a | cumu_rate_b |
|---|-----------|---------|---------|--------|--------|--------------|--------------|-------------|-------------|-------------|-------------|
| 0 | 0         | 125     | 87      | 12     | 9      | 125          | 87           | 12          | 9           | 0.096000    | 0.103448    |
| 1 | 1         | 114     | 86      | 11     | 9      | 239          | 173          | 23          | 18          | 0.096234    | 0.104046    |
| 2 | 2         | 67      | 91      | 6      | 9      | 306          | 264          | 29          | 27          | 0.094771    | 0.102273    |
| 3 | 3         | 96      | 103     | 9      | 10     | 402          | 367          | 38          | 37          | 0.094527    | 0.100817    |
| 4 | 4         | 89      | 125     | 9      | 13     | 491          | 492          | 47          | 50          | 0.095723    | 0.101626    |
```python
rates_df = conv_days[['click_day','cumu_rate_a','cumu_rate_b']].melt(id_vars='click_day')
g = (ggplot(rates_df, aes(x='click_day', y='value', color='variable')) + geom_line() + theme_minimal() +
xlab('Hours of Experiment Run') + ylab('Cumulative Conversions / Cumulative Clicks'))
g
```
```python
#sum after 2 months
conv_df = pd.DataFrame({'campaign_id':['A','B'], 'clicks':[conv_days.click_a.sum(),conv_days.click_b.sum()],
'conv_cnt':[conv_days.conv_a.sum(),conv_days.conv_b.sum()]})
conv_df['conv_per'] = conv_df['conv_cnt'] / conv_df['clicks']
conv_df
```
|   | campaign_id | clicks | conv_cnt | conv_per |
|---|-------------|--------|----------|----------|
| 0 | A           | 2488   | 234      | 0.094051 |
| 1 | B           | 2209   | 222      | 0.100498 |
## Think Like A Bayesian
The core idea of Bayesian A/B testing is to formulate a posterior distribution for each variation. Recall that the frequentist approach does not do this; it instead assumes a null hypothesis that there is no difference between the variations and uses the data to determine the false positive rate. The advantage is that we can state hypotheses directly about the true values we are interested in, such as the conversion rate of each variation. The cost is that we can no longer lean on the central limit theorem to assume a normal distribution for our hypotheses; instead we need to construct our own distribution, called the **posterior distribution**, using Bayes' Rule:
\begin{align}
P(H|D) &= \frac{P(H \cap D)}{P(D)} \\
&= \frac{P(D|H)P(H)}{P(D)} \text{; chain rule of probability}\\
& = \frac{P(D|H)P(H)}{\sum_{j=1}^k P(D|H_j)P(H_j)} \text{; summing over all possible hypotheses}
\end{align}
where $H$ is the hypothesis or model and $D$ is the data or evidence
* $P(H|D)$ is the probability of the hypothesis being true given the observed data, called the **posterior**. This is the distribution we use to estimate the true values such as true conversion rates in the Bayesian approach.
* $P(D|H)$ is the probability of seeing the data given that the hypothesis is true, called the **likelihood**; it plays a role loosely analogous to the p-value used to reject the null hypothesis under the frequentist approach.
* $P(H)$ is the probability of the hypothesis being true aka our belief in the hypothesis, called the **prior**. Refer to a table of [conjugate priors](https://en.wikipedia.org/wiki/Conjugate_prior) to choose the suitable prior for your posterior and likelihood.
* $P(D)$ is the probability of the data being present, called the **evidence**.
## Derive The Posterior Distribution
In our case, the likelihood of seeing this set of click and conversion data, given that the true conversion rate is `p`, can be described by the probability mass function of a Bernoulli distribution:
\begin{align}
P(D|H) &= \prod_{i=1}^{n} p^{x_i} (1-p)^{1-x_i} \\
&= p^{\sum_{i=1}^{n} x_i} (1-p)^{\sum_{i=1}^{n} (1-x_i)}
\end{align}
where $x_i$ is the binary flag for conversion and `p` is the true conversion rate by the hypothesis $H$.
The prior is a beta distribution with the following probability density function:
\begin{align}
P(H) &= \frac{p^{\alpha-1} (1-p)^{\beta-1}}{B(\alpha,\beta)}
\end{align}
where $\alpha$ and $\beta$ are hyperparameters corresponding to the number of successes and failures respectively, and $B(\alpha, \beta)$ is the beta function that normalizes the density so that it integrates to 1 over the support $[0, 1]$. Intuitively, the beta distribution is shaped by the numbers of successes and failures encoded in $\alpha$ and $\beta$; moreover, larger values mean we are more certain about our distribution, resulting in lower variance:
```python
beta_df = pd.DataFrame({'x': [i/100 for i in range(100)],
'1_1': [sp.stats.beta.pdf(i/100, a=1,b=1) for i in range(100)],
'2_3': [sp.stats.beta.pdf(i/100, a=2,b=3) for i in range(100)],
'4_6': [sp.stats.beta.pdf(i/100, a=4,b=6) for i in range(100)],
'20_30': [sp.stats.beta.pdf(i/100, a=20,b=30) for i in range(100)],
}).melt(id_vars='x')
g = (ggplot(beta_df,aes(x='x',y='value',color='variable')) +
geom_line() + theme_minimal() +
xlab('Values') + ylab('Probability Density')
)
g
```
Here we notice that the evidence $P(D)$ and the normalizing factor $B(\alpha, \beta)$ are constants with respect to our hypothesis, so we can drop them and consider that our posterior distribution is proportional to:
\begin{align}
P(H|D) &\propto \left( p^{\sum_{i=1}^{n} x_i} (1-p)^{\sum_{i=1}^{n} (1-x_i)} \right) \left(p^{\alpha-1} (1-p)^{\beta-1} \right) \\
&\propto p^{\alpha + \sum_{i=1}^{n} x_i - 1} (1-p)^{\beta + \sum_{i=1}^{n} (1-x_i) -1}
\end{align}
We can see that our resulting terms are equivalent to an unnormalized beta distribution. Thus, by normalizing it, we can get a beta distribution as our posterior:
\begin{align}
P(H|D) &\propto p^{\alpha + \sum_{i=1}^{n} x_i - 1} (1-p)^{\beta + \sum_{i=1}^{n} (1-x_i) -1} \\
&= \frac{p^{\alpha + \sum_{i=1}^{n} x_i - 1} (1-p)^{\beta + \sum_{i=1}^{n} (1-x_i) -1}}{B(\alpha + \sum_{i=1}^{n} x_i, \beta + \sum_{i=1}^{n} (1-x_i))} \\
&= Beta(\alpha + \text{number of conversions}, \beta + \text{number of non-conversions})
\end{align}
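Because the Beta prior is conjugate to the Bernoulli likelihood, updating the posterior is just arithmetic on the two parameters. As a minimal sketch (assuming a $Beta(1, 9)$ prior for both campaigns, roughly encoding a 10% prior conversion rate), we can compute the posterior parameters and means directly from the totals in `conv_df`:

```python
# Minimal sketch: closed-form Beta posterior update per campaign.
a_prior, b_prior = 1, 9   # assumed prior ~ Beta(1, 9), i.e. roughly 10% conversion

for _, row in conv_df.iterrows():
    alpha_post = a_prior + row['conv_cnt']                 # prior successes + conversions
    beta_post = b_prior + row['clicks'] - row['conv_cnt']  # prior failures + non-conversions
    post_mean = alpha_post / (alpha_post + beta_post)
    print(f"{row['campaign_id']}: Beta({alpha_post}, {beta_post}), posterior mean = {post_mean:.4f}")
```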
## Choose The Right Prior
We can see that for events that act like Bernoulli trials, the posterior distribution takes a very convenient form. What we need to do is simply choose a reasonable set of $\alpha$ and $\beta$ for the prior. This is where our assumptions factor into a Bayesian A/B test. One way is to set the prior based on past campaign performance; for instance, if we have run this type of campaign before and had an average conversion rate of 10%, we can scale the prior according to how certain we are about the conversion rates. Note that different priors can represent 10%, for instance $Beta(1,9)$ and $Beta(10,90)$, but at different levels of certainty. See how $Beta(10,90)$ is more peaked around 10% than $Beta(1,9)$.
```python
def plot_beta(a=1,b=9,scaler=1):
beta_df = pd.DataFrame({'x': [i/100 for i in range(100)],
'value': [sp.stats.beta.pdf(i/100, a=a*scaler,b=b*scaler) for i in range(100)]})
g = (ggplot(beta_df,aes(x='x',y='value')) +
geom_line() + theme_minimal() +
xlab('Values') + ylab('Probability Density') +
ggtitle(f'alpha = {a*scaler}; beta={b*scaler}')
)
return g
widgets.interact(plot_beta,
a=widgets.IntSlider(min=1,max=100,step=1,value=1),
b=widgets.IntSlider(min=1,max=100,step=1,value=9),
scaler=widgets.FloatSlider(min=0.1,max=100,step=0.5,value=1))
```
interactive(children=(IntSlider(value=1, description='a', min=1), IntSlider(value=9, description='b', min=1), …
<function __main__.plot_beta(a=1, b=9, scaler=1)>
```python
conv_df
```
|   | campaign_id | clicks | conv_cnt | conv_per |
|---|-------------|--------|----------|----------|
| 0 | A           | 2488   | 234      | 0.094051 |
| 1 | B           | 2209   | 222      | 0.100498 |
Depending on what we choose as our priors for `A` and `B`, we can derive their posterior distributions as follows. Note that the prior has an impact on the certainty about the values of the posterior (peakedness) and on the expected values of the posterior themselves. For instance, even if your observed conversion rate is around 10%, if you give an absurdly strong prior like $Beta(999,1)$, implying a 99.9% conversion rate, your posteriors' expectations can end up around 30%.
```python
def plot_posterior(clicks, conv_cnt, a = 1, b = 1, scaler=1):
beta_df = pd.DataFrame({'x': [i/100 for i in range(100)]})
if not isinstance(a, collections.MutableSequence):
a = [a for i in range(len(clicks))]
b = [b for i in range(len(clicks))]
for v in range(len(clicks)):
beta_df[f'value_{v}'] = [sp.stats.beta.pdf(i/100, a=a[v]*scaler + conv_cnt[v], \
b=b[v]*scaler + clicks[v] - conv_cnt[v]) for i in range(100)]
beta_df = beta_df.melt(id_vars='x')
g = (ggplot(beta_df,aes(x='x',y='value',color='variable',group='variable')) +
geom_line() + theme_minimal() +
xlab('Values') + ylab('Probability Density') +
ggtitle(f'alpha = {[i*scaler for i in a]}; beta={[i*scaler for i in b]}')
)
return g
widgets.interact(plot_posterior,
clicks=fixed(conv_df.clicks),
conv_cnt=fixed(conv_df.conv_cnt),
a=widgets.IntSlider(min=1,max=1000,step=1,value=999),
b=widgets.IntSlider(min=1,max=1000,step=1,value=1),
scaler=widgets.FloatSlider(min=1,max=100,step=1,value=1))
```
interactive(children=(IntSlider(value=999, description='a', max=1000, min=1), IntSlider(value=1, description='…
<function __main__.plot_posterior(clicks, conv_cnt, a=1, b=1, scaler=1)>
## Who Wins by How Much
One way to determine which variation is better is to look at the posterior distributions and their expectations; however, that does not tell us the margin of difference. Since we have a posterior distribution for each variation, we can derive what people sometimes confuse with the frequentist p-value: **the probability that one variation is better than the other**. One way to do this is to use Monte Carlo simulation to draw a large number of samples from each posterior and then calculate the percentage of draws in which each variation wins.
```python
def sample_proportion(c,n,a=1,b=1,sim_size=100000): return np.random.beta(c+a,n-c+b,sim_size)
def proportion_test_b(c1,c2,n1,n2,a1=1,a2=1,b1=9,b2=9,sim_size=100000):
p1 = sample_proportion(c1,n1,a1,b1,sim_size)
p2 = sample_proportion(c2,n2,a2,b2,sim_size)
return (p1 > p2).mean()
def proportion_ratio(c1,c2,n1,n2,a1=1,a2=1,b1=9,b2=9,sim_size=100000):
p1 = sample_proportion(c1,n1,a1,b1,sim_size)
p2 = sample_proportion(c2,n2,a2,b2,sim_size)
return p1/p2
def proportion_ci_b(c1,c2,n1,n2,p_value=0.05,a1=1,a2=1,b1=9,b2=9,sim_size=100000):
ratios = proportion_ratio(c1,c2,n1,n2,a1,a2,b1,b2,sim_size)
return np.quantile(ratios,[p_value/2,1-p_value/2])
p_value= proportion_test_b(*conv_df.conv_cnt,*conv_df.clicks)
ratios = proportion_ratio(*conv_df.conv_cnt,*conv_df.clicks)
credible = proportion_ci_b(*conv_df.conv_cnt,*conv_df.clicks,p_value=0.05)
print(f'Probability that A is greater than B: {p_value}')
print(f'Average A/B ratio: {ratios.mean()}')
print(f'Credible interval of A/B ratio: {credible}')
```
Probability that A is greater than B: 0.22743
Average A/B ratio: 0.9394908467261973
Credible interval of A/B ratio: [0.78622231 1.11331594]
Plotting the ratio of A over B also gives us a look at the magnitude of the difference. For instance, below we can see that for about 80% of the ratio distribution `A` is worse than `B`, and in case we are wrong, `A` is at most about 20% better than `B`. And similar to the frequentist approach, we can calculate the range into which most of these ratios (say 95%) fall, called the **credible interval**.
```python
g = (ggplot(pd.DataFrame({'x':ratios}), aes(x='x')) + geom_histogram() +
theme_minimal() + geom_vline(xintercept=1,color='red') +
geom_vline(xintercept=credible[0],color='green') + geom_vline(xintercept=credible[1],color='green') +
xlab('Ratio of sampled conversion rates A / B')
)
g
```
## When to Stop: Value Remaining
Freedom from frequentist p-values means we do not run into the issue that an infinite number of samples *always* yields statistical significance, even when the true values are exactly the same. As we can see from the plot below, the probability of `A` beating `B`, obtained by Monte Carlo sampling from the posteriors, stays at about 50% when there is no true difference and gradually decreases when there is.
```python
conv_days2 = gen_campaigns(p1 = 0.10,
p2 = 0.10,
nb_days = 60,
scaler=300,
seed = 1412) #god-mode
conv_days2['prob_same'] = conv_days2.apply(lambda row: proportion_test_b(row['cumu_conv_a'],
row['cumu_conv_b'],row['cumu_click_a'],
row['cumu_click_b']),1)
conv_days3 = gen_campaigns(p1 = 0.10,
p2 = 0.11,
nb_days = 60,
scaler=300,
seed = 1412) #god-mode
conv_days3['prob_diff'] = conv_days3.apply(lambda row: proportion_test_b(row['cumu_conv_a'],
row['cumu_conv_b'],row['cumu_click_a'],
row['cumu_click_b']),1)
prob_df = pd.DataFrame({'click_day':conv_days2.click_day,'prob_same':conv_days2.prob_same,
'prob_diff':conv_days3.prob_diff}).melt(id_vars='click_day')
g = (ggplot(prob_df,aes(x='click_day',y='value',color='variable')) +
geom_line() + theme_minimal() +
geom_hline(yintercept=[0.9,0.1],color=['green','red']) +
xlab('Number of timesteps') + ylab('Probability of A beating B') +
scale_y_continuous(labels=formatters.percent_format()) +
annotate("text", label = "Above this line A is better than B", x = 20, y = 1, color = 'green') +
annotate("text", label = "Below this line B is better than A", x = 20, y = 0, color = 'red') +
ggtitle('Comparison between probabilities of A beating B when B is actually better and when both are the same')
)
g
```
The immediate question, then, is when we should stop and declare a winner. Technically, when there is a true difference we can stop at any point in time and we will end up choosing the winner with varying degrees of probability (a clear improvement over the frequentist approach, where most of the time we can only say we are not sure). We can also use a simple rule such as: if the probability of `A` beating `B` is lower than 10% or greater than 90%, declare a winner, as in the plot above.
There are several stopping criteria for Bayesian experiments, such as ROPE and expected loss, as described in [Bayesian A/B Testing: a step-by-step guide](http://www.claudiobellei.com/2017/11/02/bayesian-AB-testing/); here we use **value remaining** as introduced by [Google](https://support.google.com/analytics/answer/2846882?hl=en). Value remaining per round of the experiment is defined as:
$$V_t = \frac{rate_{max}-rate_{opt}}{rate_{opt}}$$
As the experiment goes on, we track the distribution of $V_t$ and stop when its $1-\alpha$ percentile falls below our threshold. Intuitively, this says that we are $(1-\alpha)$% confident that our "best" arm might be beaten by at most a margin equal to the threshold. For practical purposes, we use the 95th percentile and a threshold of 1%.
```python
def value_remaining(c1,c2,n1,n2,q=95,sim_size=100000,a1=1,a2=1,b1=9,b2=9):
p1 = sample_proportion(c1,n1,a1,b1,sim_size)[:,None]
p2 = sample_proportion(c2,n2,a2,b2,sim_size)[:,None]
p = np.concatenate([p1,p2],1)
p_max = p.max(1)
best_idx = np.argmax([p1.mean(),p2.mean()])
p_best = p[:,best_idx]
vs = (p_max-p_best)/p_best
return np.percentile(vs,q)
value_remaining(*conv_df.conv_cnt,*conv_df.clicks)
```
0.0835620586721222
You can see that in the case where the true difference is 1%, the value remaining gradually decreases below our 1% threshold. On the other hand, when the true difference is 0%, the value remaining always hovers around 10%.
```python
conv_days2['value_remaining'] = conv_days2.apply(lambda row: value_remaining(row['cumu_conv_a'],
row['cumu_conv_b'],row['cumu_click_a'],
row['cumu_click_b']),1)
conv_days3['value_remaining'] = conv_days3.apply(lambda row: value_remaining(row['cumu_conv_a'],
row['cumu_conv_b'],row['cumu_click_a'],
row['cumu_click_b']),1)
value_df = pd.DataFrame({'click_day':conv_days2.click_day,'value_same':conv_days2.value_remaining,
'value_diff':conv_days3.value_remaining}).melt(id_vars='click_day')
g = (ggplot(value_df,aes(x='click_day',y='value',color='variable')) +
geom_line() + theme_minimal() +
geom_hline(yintercept=0.01, color='red') +
scale_y_continuous(labels=formatters.percent_format(), breaks=[i/100 for i in range(0,101,10)]) +
xlab('Number of timesteps') + ylab('Value Remaining')
)
g
```
## References
* [An Introduction to Bayesian Thinking: A Companion to the Statistics with R Course](https://statswithr.github.io/book/)
* [Formulas for Bayesian A/B Testing](https://www.evanmiller.org/bayesian-ab-testing.html)
* [20 - Beta conjugate prior to Binomial and Bernoulli likelihoods](https://youtu.be/hKYvZF9wXkk)
* [Bayesian A/B Testing: A Hypothesis Test that Makes Sense](https://www.countbayesie.com/blog/2015/4/25/bayesian-ab-testing)
* [What's the difference between a confidence interval and a credible interval?](https://stats.stackexchange.com/questions/2272/whats-the-difference-between-a-confidence-interval-and-a-credible-interval)
* [Is Bayesian A/B Testing Immune to Peeking? Not Exactly](http://varianceexplained.org/r/bayesian-ab-testing/)
| d932b249e6f5a2817422fad209315de964ad9a31 | 224,258 | ipynb | Jupyter Notebook | notebooks/bayesian.ipynb | TeamTamoad/abtestoo | 90e903ddbe945034b8226aad05a74fb46efb5326 | ["Apache-2.0"] | 12 | 2019-04-23T03:12:39.000Z | 2020-09-16T06:00:44.000Z | notebooks/bayesian.ipynb | TeamTamoad/abtestoo | 90e903ddbe945034b8226aad05a74fb46efb5326 | ["Apache-2.0"] | null | null | null | notebooks/bayesian.ipynb | TeamTamoad/abtestoo | 90e903ddbe945034b8226aad05a74fb46efb5326 | ["Apache-2.0"] | 27 | 2020-10-08T19:22:58.000Z | 2021-11-29T11:09:45.000Z | 236.559072 | 54,636 | 0.891915 | true | 6,765 | Qwen/Qwen-72B | 1. YES 2. YES | 0.861538 | 0.766294 | 0.660191 | __label__eng_Latn | 0.948949 | 0.372176 |
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content-dl/blob/main/tutorials/W1D2_LinearDeepLearning/student/W1D2_Tutorial3.ipynb" target="_parent"></a>
# DL Neuromatch Academy: Week 1, Day 2, Tutorial 3
# Deep Linear Neural Networks
__Content creators:__ Andrew Saxe, Saeed Salehi, Vladimir Haltakov
__Content reviewers:__ Polina Turishcheva, Atnafu Lambebo, Yu-Fang Yang
__Content editors:__ Anoop Kulkarni
__Production editors:__ Khalid Almubarak, , Spiros Chavlis
---
#Tutorial Objectives
* Deep linear neural networks
* Learning dynamics and singular value decomposition
* Representational Similarity Analysis
* Illusory correlations & ethics.
```python
#@markdown Tutorial slides
# you should link the slides for all tutorial videos here (we will store pdfs on osf)
from IPython.display import HTML
HTML('')
```
---
# Setup
```python
# Imports
import numpy as np
import matplotlib.pyplot as plt
import random
from collections import OrderedDict
import torch
import torch.nn as nn
import torch.optim as optim
from tqdm.notebook import tqdm, trange
import time
from math import sqrt
! pip install treelib --quiet
from treelib import Node, Tree
```
```python
# @title Figure settings
# import ipywidgets as widgets
# from ipywidgets import interact, fixed, HBox, Layout, VBox, interactive, Label
# from ipywidgets import interact, IntSlider, FloatSlider, interact_manual
# from mpl_toolkits.mplot3d import Axes3D
from matplotlib import gridspec
from ipywidgets import interact, IntSlider, FloatSlider, interact_manual, fixed
from ipywidgets import FloatLogSlider, HBox, Layout, VBox, interactive, Label
from ipywidgets import interactive_output
import warnings
warnings.filterwarnings("ignore")
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
```
```python
#@title Plotting functions
def plot_x_y_hier_data(im1, im2, subplot_ratio=[1, 2]):
fig = plt.figure(figsize=(12, 5))
gs = gridspec.GridSpec(1, 2, width_ratios=subplot_ratio)
ax0 = plt.subplot(gs[0])
ax1 = plt.subplot(gs[1])
ax0.imshow(im1, cmap="cool")
ax1.imshow(im2, cmap="cool")
# plt.suptitle("The whole dataset as imshow plot", y=1.02)
ax0.set_title("Labels of all samples")
ax1.set_title("Features of all samples")
ax0.set_axis_off()
ax1.set_axis_off()
plt.tight_layout()
plt.show()
def plot_x_y_hier_one(im1, im2, subplot_ratio=[1, 2]):
fig = plt.figure(figsize=(12, 1))
gs = gridspec.GridSpec(1, 2, width_ratios=subplot_ratio)
ax0 = plt.subplot(gs[0])
ax1 = plt.subplot(gs[1])
ax0.imshow(im1, cmap="cool")
ax1.imshow(im2, cmap="cool")
ax0.set_title("Labels of a single sample")
ax1.set_title("Features of a single sample")
ax0.set_axis_off()
ax1.set_axis_off()
plt.tight_layout()
plt.show()
def plot_tree_data(im1, im2, label_list):
im1_dim1, im1_dim2 = im1.shape
fig = plt.figure(figsize=(12, 5))
gs = gridspec.GridSpec(1, 2, width_ratios=[1, 2])
ax0 = plt.subplot(gs[0])
ax1 = plt.subplot(gs[1])
ax0.imshow(im1, cmap="cool")
ax1.imshow(im2[:, -im1_dim1*2:], cmap="cool", vmin=0.0, vmax=1.0)
ax0.set_title("all the Labels")
ax1.set_title("last {} Features".format(-im1_dim1*2))
ax0.set_yticks(ticks=np.arange(im1_dim1))
ax0.set_yticklabels(labels=label_list)
ax0.set_xticks(ticks=np.arange(im1_dim1))
ax0.set_xticklabels(labels=item_names, rotation='vertical')
ax1.set_axis_off()
plt.tight_layout()
plt.show()
def plot_loss(loss_array, title="Training loss (Mean Squared Error)", c="r"):
plt.figure(figsize=(9, 5))
plt.plot(loss_array, color=c)
plt.xlabel("Epoch")
plt.ylabel("MSE")
plt.title(title)
plt.show()
def plot_loss_sv(loss_array, sv_array):
n_sing_values = sv_array.shape[1]
sv_array = sv_array / np.max(sv_array)
cmap = plt.cm.get_cmap("Set1", n_sing_values)
_, (plot1, plot2) = plt.subplots(2, 1, sharex=True, figsize=(10, 10))
plot1.set_title("Training loss (Mean Squared Error)")
plot1.plot(loss_array, color='r')
plot2.set_title("Evolution of singular values (modes)")
for i in range(n_sing_values):
plot2.plot(sv_array[:, i], c=cmap(i))
plot2.set_xlabel("Epoch")
plt.show()
def plot_loss_sv_twin(loss_array, sv_array):
n_sing_values = sv_array.shape[1]
sv_array = sv_array / np.max(sv_array)
cmap = plt.cm.get_cmap("winter", n_sing_values)
fig = plt.figure(figsize=(11, 6))
ax1 = plt.gca()
ax1.set_title("Learning Dynamics")
ax1.set_xlabel("Epoch")
ax1.set_ylabel("Mean Squared Error", c='r')
ax1.tick_params(axis='y', labelcolor='r')
ax1.plot(loss_array, color='r')
ax2 = ax1.twinx()
ax2.set_ylabel("Singular values (modes)", c='b')
ax2.tick_params(axis='y', labelcolor='b')
for i in range(n_sing_values):
ax2.plot(sv_array[:, i], c=cmap(i))
fig.tight_layout()
plt.show()
def plot_ills_sv_twin(ill_array, sv_array):
n_sing_values = sv_array.shape[1]
sv_array = sv_array / np.max(sv_array)
cmap = plt.cm.get_cmap("winter", n_sing_values)
fig = plt.figure(figsize=(11, 6))
ax1 = plt.gca()
ax1.set_title("Network evolution in learning the Illusory Correlations")
ax1.set_xlabel("Epoch")
ax1.set_ylabel("Illusory Correlations", c='r')
ax1.tick_params(axis='y', labelcolor='r')
ax1.plot(ill_array, color='r', linewidth=3)
ax1.set_ylim(-0.05, 1.0)
ax2 = ax1.twinx()
ax2.set_ylabel("Singular values (modes)", c='b')
ax2.tick_params(axis='y', labelcolor='b')
for i in range(n_sing_values):
ax2.plot(sv_array[:, i], c=cmap(i))
fig.tight_layout()
plt.show()
def plot_loss_sv_rsm(loss_array, sv_array, rsm_array, i_ep):
rsm_array = rsm_array / np.max(rsm_array, axis=0)
sv_array = sv_array / np.max(sv_array)
n_sing_values = sv_array.shape[1]
cmap = plt.cm.get_cmap("winter", n_sing_values)
fig = plt.figure(figsize=(15, 5))
gs = gridspec.GridSpec(1, 2, width_ratios=[2, 1])
ax0 = plt.subplot(gs[1])
ax0.yaxis.tick_right()
ax0.imshow(rsm_array[i_ep], cmap="Purples", vmin=0.0, vmax=1.1)
ax0.set_title("RSM at epoch {}".format(i_ep), fontsize=16)
# ax0.set_axis_off()
ax0.set_yticks(ticks=np.arange(n_sing_values))
ax0.set_yticklabels(labels=item_names)
# ax0.set_xticks([])
ax0.set_xticks(ticks=np.arange(n_sing_values))
ax0.set_xticklabels(labels=item_names, rotation='vertical')
ax1 = plt.subplot(gs[0])
ax1.set_title("Learning Dynamics", fontsize=16)
ax1.set_xlabel("Epoch")
ax1.set_ylabel("Mean Squared Error", c='r')
ax1.tick_params(axis='y', labelcolor='r')
ax1.plot(loss_array, color='r')
ax1.axvspan(i_ep-2, i_ep+2, alpha=0.2, color='m')
ax2 = ax1.twinx()
ax2.set_ylabel("Singular values", c='b')
ax2.tick_params(axis='y', labelcolor='b')
for i in range(n_sing_values):
ax2.plot(sv_array[:, i], c=cmap(i))
plt.show()
class SimpleTree:
def __init__(self, plot=False):
tree = Tree()
tree.create_node("Living things", 0)
tree.create_node("Animal", 1, parent=0)
tree.create_node("Plant", 2, parent=0)
tree.create_node("Fish", 3, parent=1)
tree.create_node("Bird", 4, parent=1)
tree.create_node("Flower", 5, parent=2)
tree.create_node("Tree", 6, parent=2)
tree.create_node("Goldfish", 7, parent=3)
tree.create_node("Tuna", 8, parent=3)
tree.create_node("Robin", 9, parent=4)
tree.create_node("Canary", 10, parent=4)
tree.create_node("Rose", 11, parent=5)
tree.create_node("Daisy", 12, parent=5)
tree.create_node("Pine", 13, parent=6)
tree.create_node("Oak", 14, parent=6)
self.tree = tree
if plot: self.plot()
def plot(self):
self.tree.show(line_type="ascii-em")
def rename(self, old, new, plot=False):
for nodes in self.tree.all_nodes():
if nodes.tag == old:
nodes.tag = new
break
if plot: self.plot()
```
```python
#@title Helper functions
seed = 2015 # LeCun, Bengio, & Hinton (2015)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
np.random.seed(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
class VariableDepthWidth(nn.Module):
def __init__(self, in_dim, out_dim, hid_dims=[], gamma=1e-12):
"""Variable depth linear network
Args:
in_dim (int): input dimension
out_dim (int): output dimension
hid_dims (list): a list, containing the number of neurons in each hidden layer
default is empty list (`[]`) for linear regression.
example: For 2 hidden layers, first with 5 and second with 7 neurons,
we use: `hid_dims = [5, 7]`
"""
super().__init__()
assert isinstance(in_dim, int)
assert isinstance(out_dim, int)
assert isinstance(hid_dims, list)
n_hidden_layers = len(hid_dims) # number of hidden layers
layers = OrderedDict()
if n_hidden_layers == 0: # linear regression
layers["map"] = nn.Linear(in_dim, out_dim, bias=False)
else: # shallow and deep linear neural net
layers["in->"] = nn.Linear(in_dim, hid_dims[0], bias=False)
for i in range(n_hidden_layers-1): # creating hidden layers
layers["hid {}".format(i+1)] = nn.Linear(hid_dims[i],
hid_dims[i+1],
bias=False)
layers["->out"] = nn.Linear(hid_dims[-1], out_dim, bias=False)
for k in layers: # re-initialization of the weights
sigma = gamma / sqrt(layers[k].weight.shape[0] + layers[k].weight.shape[1])
nn.init.normal_(layers[k].weight, std=sigma)
self.layers = nn.Sequential(layers)
def forward(self, input_tensor):
"""Forward pass
"""
return self.layers(input_tensor)
def build_tree(n_levels, n_branches, probability, to_np_array=True):
"""Builds a tree
"""
assert 0.0 <= probability <= 1.0
tree = {}
tree["level"] = [0]
for i in range(1, n_levels+1):
tree["level"].extend([i]*(n_branches**i))
tree["pflip"] = [probability]*len(tree["level"])
tree["parent"] = [None]
k = len(tree["level"])-1
for j in range(k//n_branches):
tree["parent"].extend([j]*n_branches)
if to_np_array:
tree["level"] = np.array(tree["level"])
tree["pflip"] = np.array(tree["pflip"])
tree["parent"] = np.array(tree["parent"])
return tree
def sample_from_tree(tree, n):
""" Generates n samples from a tree
"""
items = [i for i, v in enumerate(tree["level"]) if v == max(tree["level"])]
n_items = len(items)
x = np.zeros(shape=(n, n_items))
rand_temp = np.random.rand(n, len(tree["pflip"]))
flip_temp = np.repeat(tree["pflip"].reshape(1, -1), n, 0)
samp = (rand_temp > flip_temp) * 2 - 1
for i in range(n_items):
j = items[i]
prop = samp[:, j]
while tree["parent"][j] is not None:
j = tree["parent"][j]
prop = prop * samp[:, j]
x[:, i] = prop.T
return x
def generate_hsd():
# building the tree
n_branches = 2 # 2 branches at each node
probability = .15 # flipping probability
n_levels = 3 # number of levels (depth of tree)
tree = build_tree(n_levels, n_branches, probability, to_np_array=True)
tree["pflip"][0] = 0.5
n_samples = 10000 # Sample this many features
tree_labels = np.eye(n_branches**n_levels)
tree_features = sample_from_tree(tree, n_samples).T
return tree_labels, tree_features
def linear_regression(X, Y):
"""Analytical Linear regression
"""
assert isinstance(X, np.ndarray)
assert isinstance(Y, np.ndarray)
M, Dx = X.shape
N, Dy = Y.shape
  assert Dx == Dy
W = Y @ X.T @ np.linalg.inv(X @ X.T)
return W
# #@markdown Run this cell to define the train function!
def train_svd_rsa_track(model, in_features, out_features, n_epochs, lr, ill_i=0):
"""Training function
Args:
model (torch nn.Module): the neural network
in_features (torch.Tensor): features (input) with shape `torch.Size([batch_size, input_dim])`
out_features (torch.Tensor): targets (labels) with shape `torch.Size([batch_size, output_dim])`
n_epochs (int): number of training epochs
lr (float): learning rate
ill_i (int): index of illusory feature
Returns:
np.ndarray: record (evolution) of losses
np.ndarray: record (evolution) of singular values
np.ndarray: record (evolution) of representational similarity matrices
np.ndarray: record of network prediction for the last feature
"""
assert in_features.shape[0] == out_features.shape[0]
optimizer = optim.SGD(model.parameters(), lr=lr)
criterion = nn.MSELoss()
xd = in_features.shape[1]
loss_record = [] # losses
sv_record = [] # singular values
rsm_record = [] # represent sim mats
pred_record = [] # network prediction for the last feature
for i in range(n_epochs):
y_pred = model(in_features) # forward pass
loss = criterion(y_pred, out_features) # calculating the loss
optimizer.zero_grad() # reset all the graph gradients to zero
loss.backward() # back propagation of the error
optimizer.step() # gradient step
# calculating the W_tot by multiplying all layers' weights
W_tot = model.layers[-1].weight.detach() # starting from the last layer
for i in range(2, len(model.layers)+1):
W_tot = W_tot @ model.layers[-i].weight.detach()
U, Σ, V = torch.svd(W_tot) # performing the SVD!
# calculating representational similarity matrix
H1 = model.layers[0].weight.detach() @ in_features
RSM = H1.T @ H1
# network prediction of ill_i in_feature for the last feature
ill_pred = y_pred[ill_i, -1].detach().numpy()
loss_record.append(loss.item())
sv_record.append(Σ.numpy())
rsm_record.append(RSM.numpy())
pred_record.append(ill_pred)
return np.array(loss_record), np.array(sv_record), np.array(rsm_record), np.array(pred_record)
def add_feature(existing_features, new_feature):
assert isinstance(existing_features, np.ndarray)
assert isinstance(new_feature, list)
new_feature = np.array([new_feature]).T
# return np.hstack((tree_features, new_feature*2-1))
return np.hstack((tree_features, new_feature))
```
---
# Section 0: Prelude
## Exercise 0: Variable depth and width LNN
Throughout this tutorial, we will need several neural nets with different depths and widths. So first, let's create a model with variable depth and width.
This can easily be done using [`OrderedDict()`](https://docs.python.org/3/library/collections.html#collections.OrderedDict) and the `nn.Sequential` container. The model is defined by its input and output dimensions, and a list containing the width of each hidden layer. If the list is left empty, the neural network will perform a linear regression (coming up next). We also exclude the `bias` from all the layers.
We also take over the initialization. In PyTorch, we can use [`nn.init`](https://pytorch.org/docs/stable/nn.init.html) to initialize tensors from a given distribution. Here, we sample the weights from the following distribution:
$$\mathcal{N}\left(\mu=0, ~~\sigma=\gamma \sqrt{\dfrac{1}{n_{in} + n_{out}}} \right)$$
where $\gamma$ is given as an argument. The trailing underscore ("_") in `nn.init.normal_` and other such functions denotes an "in-place" operation. Note that `nn.Linear` layers are initialized at definition, so we re-initialize them (see the short sketch below).
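As a minimal standalone sketch of this re-initialization step (separate from the exercise below; the layer sizes are arbitrary):
```python
import torch.nn as nn
from math import sqrt

gamma = 1e-12
layer = nn.Linear(64, 100, bias=False)          # weights are initialized at definition
sigma = gamma / sqrt(sum(layer.weight.shape))   # gamma * sqrt(1 / (n_in + n_out))
nn.init.normal_(layer.weight, std=sigma)        # trailing "_" -> modifies the tensor in place
print(float(layer.weight.std()))                # close to sigma
```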
```python
class VariableDepthWidthExercise(nn.Module):
def __init__(self, in_dim, out_dim, hid_dims=[], gamma=1e-12):
"""Variable depth linear network
Args:
in_dim (int): input dimension
out_dim (int): ouput dimension
hid_dims (list): a list, containing the number of neurons in each hidden layer
default is empty list (`[]`) for linear regression.
example: For 2 hidden layers, first with 5 and second with 7 neurons,
we use: `hid_dims = [5, 7]`
"""
super().__init__()
assert isinstance(in_dim, int)
assert isinstance(out_dim, int)
assert isinstance(hid_dims, list)
n_hidden_layers = len(hid_dims) # number of hidden layers
layers = OrderedDict()
if n_hidden_layers == 0: # linear regression
layers["map"] = nn.Linear(in_dim, out_dim, bias=False)
else: # shallow and deep linear neural net
layers["in->"] = nn.Linear(in_dim, hid_dims[0], bias=False)
for i in range(n_hidden_layers-1): # creating hidden layers
#################################################
        ## Complete the hidden-layer loop of the VariableDepthWidthExercise class
# Complete the function and remove or comment the line below
raise NotImplementedError("Network model `DeepLNNExercise`")
#################################################
for k in layers: # re-initialization of the weights
sigma = gamma / sqrt(layers[k].weight.shape[0] + layers[k].weight.shape[1])
nn.init.normal_(layers[k].weight, std=sigma)
self.layers = nn.Sequential(layers)
def forward(self, input_tensor):
"""Forward pass
"""
return self.layers(input_tensor)
# # Uncomment and run
# print("Deep LNN:\n",
# VariableDepthWidthExercise(64, 100, [32, 16, 16, 32]))
# print("\nLinear Regression model:\n",
# VariableDepthWidthExercise(64, 100,[]))
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D2_LinearNN/solutions/W1D2_Tutorial3_Solution_cfc9c90c.py)
We have already prepared the training function (very similar to the one in Tutorial 1) for you. Just check that everything is okay.
```python
def train(model, in_features, out_features, n_epochs, lr,
criterion=None, optimizer=None, show_progress_bar=False):
"""Training function
Args:
model (torch nn.Module): the neural network
in_features (torch.Tensor): features (input) with shape `torch.Size([batch_size, input_dim])`
out_features (torch.Tensor): targets (labels) with shape `torch.Size([batch_size, output_dim])`
n_epochs (int): number of training epochs
criterion (function): loss function (default 'nn.MSELoss()')
optimizer(function): optimizer (default 'optim.SGD')
lr(float): learning rate
Returns:
list: record (evolution) of losses
"""
assert in_features.shape[0] == out_features.shape[0]
  loss_record = []  # for recording losses
if optimizer is None:
optimizer = optim.SGD(model.parameters(), lr=lr)
if criterion is None:
criterion = nn.MSELoss()
model.train() # we first put the model in training mode
for i in range(n_epochs):
y_pred = model(in_features) # forward pass
loss = criterion(y_pred, out_features) # calculating the loss
optimizer.zero_grad() # reset all the graph gradients to zero
loss.backward() # back propagation of the error
optimizer.step() # gradient step
loss_record.append(loss.item())
model.eval() # putting the model to evaluation mode
return loss_record
```
---
# Section 00: Analytical Linear Regression
Linear regression is a relatively simple optimization problem. Unlike most other models that we will see in this course, linear regression for mean squared loss can be solved analytically.
For $D$ samples (batch size), $\mathbf{X} \in \mathbb{R}^{M \times D}$, and $\mathbf{Y} \in \mathbb{R}^{N \times D}$, the goal of linear regression is to find $\mathbf{W} \in \mathbb{R}^{N \times M}$ such that:
$$\mathbf{Y} = \mathbf{W} ~ \mathbf{X} $$
Given the Squared Error loss function, we have:
\begin{equation}
Loss(\mathbf{W}) = ||\mathbf{Y} - \mathbf{W} ~ \mathbf{X}||^2
\end{equation}
So, using matrix notation, the optimization problem is given by:
\begin{align}
\mathbf{W^{*}} &= \underset{\mathbf{W}}{\mathrm{argmin}} \left( Loss (\mathbf{W}) \right) \\
&= \underset{\mathbf{W}}{\mathrm{argmin}} \left( ||\mathbf{Y} - \mathbf{W} ~ \mathbf{X}||^2 \right) \\
&= \underset{\mathbf{W}}{\mathrm{argmin}} \left( \left( \mathbf{Y} - \mathbf{W} ~ \mathbf{X}\right)^{\top} \left( \mathbf{Y} - \mathbf{W} ~ \mathbf{X}\right) \right)
\end{align}
To solve the minimization problem, we can simply set the derivative of the loss with respect to $\mathbf{W}$ to zero.
\begin{equation}
\dfrac{\partial Loss}{\partial \mathbf{W}} = 0
\end{equation}
Assuming that $\mathbf{X}\mathbf{X}^{\top}$ is full-rank and thus invertible, we can write:
\begin{equation}
\mathbf{W}^{\mathbf{*}} = \mathbf{Y} \mathbf{X}^{\top} \left( \mathbf{X} \mathbf{X}^{\top} \right) ^{-1}
\end{equation}
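A quick numerical sanity check of this closed-form solution (a standalone sketch with arbitrary sizes and noise-free targets; the exercise below asks you to wrap the same formula in a function):
```python
import numpy as np

np.random.seed(0)
W_true = np.random.randn(4, 3)              # true N x M mapping
X = np.random.randn(3, 100)                 # M x D design matrix (D samples)
Y = W_true @ X                              # noise-free targets, N x D
W_star = Y @ X.T @ np.linalg.inv(X @ X.T)   # the closed-form solution derived above
print(np.allclose(W_star, W_true))          # True
```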
## Exercise 00: Analytical solution to LR
Complete the function `linear_regression` for finding the analytical solution to linear regression.
```python
def linear_regression_exercise(X, Y):
"""Analytical Linear regression
Args:
X (np.ndarray): design matrix
    Y (np.ndarray): target outputs
return:
np.ndarray: estimated weights (mapping)
"""
assert isinstance(X, np.ndarray)
assert isinstance(Y, np.ndarray)
M, Dx = X.shape
N, Dy = Y.shape
  assert Dx == Dy
#################################################
## Complete the linear_regression_exercise function
# Complete the function and remove or comment the line below
raise NotImplementedError("Linear Regression `linear_regression_exercise`")
#################################################
W = ...
return W
W_true = np.random.randint(low=0, high=10, size=(3, 3)).astype(float)
X_train = np.random.rand(3, 37) # 37 samples
noise = np.random.normal(scale=0.01, size=(3, 37))
Y_train = W_true @ X_train + noise
# # Uncomment and run
# W_estimate = linear_regression_exercise(X_train, Y_train)
# print("True weights:\n", W_true)
# print("\nEstimated weights:\n", np.round(W_estimate, 1))
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D2_LinearNN/solutions/W1D2_Tutorial3_Solution_ba1e16d4.py)
---
# Section 1: Deep Linear Neural Nets
```python
#@title Video 1: Representation Learning (Intro)
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="MRPy6uZRxms", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
```
So far depth just seems to slow down the learning. And we know that a single nonlinear hidden layer (given enough neurons) has the potential to approximate any function. So it seems fair to ask: **What is depth good for?** One reason can be that shallow nonlinear neural networks hardly meet their true potential in practice.
In contrast, deep neural nets are often surprisingly powerful at learning complex functions without sacrificing generalization. A core intuition behind deep learning is that deep nets derive their power through learning internal representations. How does this work? To address representation learning, we have to go beyond the 1D chain, to a deep Linear Neural Network (LNN).
For this and the next couple of exercises, we use synthetically generated, hierarchically structured data produced by a *branching diffusion process* (see [this reference](https://www.pnas.org/content/pnas/suppl/2019/05/16/1820226116.DCSupplemental/pnas.1820226116.sapp.pdf) for more details).
<center> hierarchically structured data </center>
## Exercise 1: Training a deep LNN
This is a rather simple exercise. We will generate some hierarchically structured data, instantiate an LNN from the `VariableDepthWidth` class, and train it on the data.
**Important note**:
* Datasets are often generated as `numpy.ndarray` and passed to PyTorch, which needs `torch.Tensor` for training. You can use `torch.tensor(toy_data).float()` to convert the data to a float tensor.
```python
#@markdown #### Run to generate and visualize training samples from tree
tree_labels, tree_features = generate_hsd()
item_names = ['Goldfish', 'Tuna', 'Robin', 'Canary', 'Rose', 'Daisy', 'Pine', 'Oak']
plot_tree_data(tree_labels, tree_features, item_names)
# dimensions
print()
print("---------------------------------------------------------------")
print("Input Dimension: {}".format(tree_labels.shape[1]))
print("Output Dimension: {}".format(tree_features.shape[1]))
print("Number of samples: {}".format(tree_features.shape[0]))
```
```python
def exercise_1(η=100.0, epochs=250 , γ=1e-12):
"""Training a LNN
Args:
η (float): learning rate (default 100.0)
epochs (int): number of epochs (default 250)
γ (float): initialization scale (default 1e-12)
"""
n_hidden = [30]
dim_input = tree_labels.shape[1]
dim_output = tree_features.shape[1]
deep_model = VariableDepthWidth(in_dim=dim_input,
out_dim=dim_output,
hid_dims=n_hidden,
gamma=γ)
# convert (cast) data from np.ndarray to torch.Tensor
input_tensor = torch.tensor(tree_labels).float()
#################################################
## convert output_data from np.ndarray to torch.Tensor
# Complete the function and remove or comment the line below
raise NotImplementedError("Cast output_data as torch.Tensor")
#################################################
output_tensor = ...
training_losses = train(deep_model,
input_tensor,
output_tensor,
n_epochs=epochs,
lr=η)
plot_loss(training_losses)
# # Uncomment and run
# exercise_1()
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D2_LinearNN/solutions/W1D2_Tutorial3_Solution_2c03a4ed.py)
*Example output:*
**Question**: Why haven't we seen these "bumps" in training before? And should we look for them in the future? You can slide the widgets below and find your answer. Here, $\gamma$ is the initialization scale.
```python
#@markdown Make sure you execute this cell to enable the widget!
_ = interact(exercise_1,
η = FloatSlider(min=1.0, max=200.0, step=2.0, value=100.0,
continuous_update=False, readout_format='.1f', description='η'),
epochs = fixed(250),
γ = FloatLogSlider(min=-15, max=1, step=1, value=1e-12, base=10,
continuous_update=False, description='γ'),
)
```
---
# Section 2: Singular Value Decomposition (SVD)
```python
#@title Video 2: Singular Value Decomposition (SVD)
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="eTXNKMleEj8", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
```
In this section, we will go deeper into understanding the learning dynamics we just saw. First, note that a linear neural network performs sequential matrix multiplications, which can be simplified to:
\begin{align}
\mathbf{y} &= \mathbf{W}_{L}~\mathbf{W}_{L-1}~\dots~\mathbf{W}_{1} ~ \mathbf{x} \\
&= (\prod_{i=1}^{L}{\mathbf{W}_{i}}) ~ \mathbf{x} \\
&= \mathbf{W}_{tot} ~ \mathbf{x}
\end{align}
where $L$ denotes the number of layers in our network.
Why did we just call the learning progress "learning dynamics"? Learning through gradient descent is very much like the evolution of a dynamical system. Both are described by a set of differential equations (gradients). Dynamical systems often have a "time constant" which describes the rate of change, similar to the learning rate; only, instead of time, gradient descent evolves through epochs.
[Saxe et al. (2013)](https://arxiv.org/abs/1312.6120) showed that to analyse and understand the nonlinear learning dynamics of a deep LNN, we can use [Singular Value Decomposition (SVD)](https://en.wikipedia.org/wiki/Singular_value_decomposition) to decompose $\mathbf{W}_{tot}$ into orthogonal vectors, where the orthogonality of the vectors ensures their "individuality". This means we can break a deep, wide LNN into multiple deep, narrow LNNs, so that their activities are untangled from each other.
<br/>
__A Quick intro to SVD__
Any real-valued matrix $A$ (yes, ANY) can be decomposed (factorized) into 3 matrices:
\begin{equation}
\mathbf{A} = \mathbf{U} \mathbf{Σ} \mathbf{V}^{\top}
\end{equation}
where $U$ is an orthogonal matrix, $\Sigma$ is a diagonal matrix, and $V$ is again an orthogonal matrix. The diagonal elements of $\Sigma$ are called **singular values**.
The main difference between SVD and Eigenvalue Decomposition (EVD) is that EVD requires $A$ to be square and does not guarantee the eigenvectors to be orthogonal. For a complex-valued matrix $A$, the factorization changes to $A = UΣV^*$, where $U$ and $V$ are unitary matrices.
We strongly recommend the [Singular Value Decomposition (the SVD)](https://www.youtube.com/watch?v=mBcLRGuAFUk) by the amazing [Gilbert Strang](http://www-math.mit.edu/~gs/) if you would like to learn more.
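As a minimal sketch of the factorization itself (an arbitrary random matrix, independent of the network):
```python
import torch

A = torch.randn(5, 3)
U, S, V = torch.svd(A)                        # S holds only the diagonal of Sigma
A_rec = U @ torch.diag(S) @ V.T               # rebuild A from the three factors
print(torch.allclose(A, A_rec, atol=1e-5))    # True
print(S)                                      # singular values, largest first
```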
## Exercise 2. SVD
Let's put what we learned into practice. Here, we want to modify our training loop to perform the SVD on $\mathbf{W}_{tot}$ in every epoch, and record the singular values (the diagonal values of $\Sigma$). SVD is implemented both in PyTorch ([`torch.svd`](https://pytorch.org/docs/stable/generated/torch.svd.html)) and in NumPy ([`np.linalg.svd`](https://numpy.org/doc/stable/reference/generated/numpy.linalg.svd.html)), but we recommend the PyTorch method to avoid the conversion cost. Since $\Sigma$ is a diagonal matrix, libraries (e.g. PyTorch and NumPy) often return just the diagonal elements as a vector, not the whole matrix.
We have removed the progress bar and the optional loss and optimizer arguments to make the exercise "cleaner".
```python
def train_svd_exercise(model, in_features, out_features, n_epochs, lr):
"""Training function
Args:
model (torch nn.Module): the neural network
in_features (torch.Tensor): features (input) with shape `torch.Size([batch_size, input_dim])`
out_features (torch.Tensor): targets (labels) with shape `torch.Size([batch_size, output_dim])`
n_epochs (int): number of training epochs
lr(float): learning rate
Returns:
np.ndarray: record (evolution) of losses
np.ndarray: record (evolution) of singular values
"""
assert in_features.shape[0] == out_features.shape[0]
optimizer = optim.SGD(model.parameters(), lr=lr)
criterion = nn.MSELoss()
  loss_record = []  # for recording losses
  sv_record = []  # for recording singular values
for i in range(n_epochs):
y_pred = model(in_features) # forward pass
loss = criterion(y_pred, out_features) # calculating the loss
optimizer.zero_grad() # reset all the graph gradients to zero
loss.backward() # back propagation of the error
optimizer.step() # gradient step
# calculating the W_tot by multiplying all layers' weights
W_tot = model.layers[-1].weight.detach() # starting from the last layer
for i in range(2, len(model.layers)+1):
#################################################
## Complete the loop for calculating the W_tot
# Complete the function and remove or comment the line below
raise NotImplementedError("Calculate the W_tot")
#################################################
W_tot = ...
# performing the SVD!
#################################################
## calculate singular value decomposition of W_tot
# Complete the function and remove or comment the line below
raise NotImplementedError("Calculate the SVD for W_tot")
#################################################
U, Σ, V = ...
loss_record.append(loss.item())
sv_record.append(Σ.numpy())
return np.array(loss_record), np.array(sv_record)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D2_LinearNN/solutions/W1D2_Tutorial3_Solution_047b33fe.py)
```python
#@markdown Make sure you execute this cell to train the network and plot
dim_input = tree_labels.shape[1]
dim_output = tree_features.shape[1]
input_tensor = torch.tensor(tree_labels).float()
output_tensor = torch.tensor(tree_features).float()
deep_model = VariableDepthWidth(in_dim=dim_input,
out_dim=dim_output,
hid_dims=[30])
training_losses, singular_values, _, _ = train_svd_rsa_track(deep_model,
input_tensor,
output_tensor,
n_epochs=250,
lr=100.0)
plot_loss_sv_twin(training_losses, singular_values)
```
**Question**: Isn't this beautiful? For eigenvalue decomposition, the amount of variance explained by each eigenvector is proportional to the corresponding eigenvalue. What about the SVD? We definitely see that gradient descent guides the network to first learn the features that carry more information (i.e. have a higher singular value)!
---
# Section 3: Representational Similarity Analysis (RSA)
```python
#@title Video 3.1: Representational Similarity Analysis (RSA)
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="19seHV97WkI", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
```
The previous section ended with an interesting remark! The network (through gradient descent) seems to prioritize learning the features that explain most of the data, and gradually learns all the hidden representations. Given that we are training on hierarchically structured data, we may be able to see that progress as well.
To do so, we use the Representational Similarity Analysis (RSA) approach to understand the internal representation of our network. The main idea is that the activity of the hidden units (neurons) in the network should be similar when the network is presented with similar inputs. The exercise will help you build an intuition for this approach and for the dynamics of representation learning.
## Exercise 3: RSA
We need to modify our training function once more. The task is to calculate the similarity between the hidden layer activities (i.e. $~\mathbf{h_1} = \mathbf{W_1} \mathbf{x}~$) for all the inputs at every epoch. As a similarity measure, we can use the good old dot (scalar) product (which, for normalized vectors, coincides with the cosine similarity). To calculate the dot products between multiple vectors (which is our case), you can simply use matrix multiplication. Therefore, the Representational Similarity Matrix (RSM) for multiple input activities can be calculated as follows:
$$ RSM = \mathbf{H_1}^{\top} \mathbf{H_1} $$
where $\mathbf{H_1} = \mathbf{W_1} \mathbf{X}$.
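A standalone sketch of this computation with arbitrary small sizes (not the trained network from the exercise):
```python
import torch

torch.manual_seed(0)
W1 = torch.randn(30, 8)   # hidden_dim x input_dim
X = torch.eye(8)          # one-hot inputs, one column per item (as in the tree dataset)
H1 = W1 @ X               # hidden activities, one column per item
RSM = H1.T @ H1           # entry (i, j): similarity between the activities for items i and j
print(RSM.shape)          # torch.Size([8, 8])
```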
```python
def train_svd_rsa_exercise(model, in_features, out_features, n_epochs, lr):
"""Training function
Args:
model (torch nn.Module): the neural network
in_features (torch.Tensor): features (input) with shape `torch.Size([batch_size, input_dim])`
out_features (torch.Tensor): targets (labels) with shape `torch.Size([batch_size, output_dim])`
n_epochs (int): number of training epochs
lr(float): learning rate
Returns:
np.ndarray: record (evolution) of losses
np.ndarray: record (evolution) of singular values
np.ndarray: record (evolution) of representational similarity matrices
"""
assert in_features.shape[0] == out_features.shape[0]
optimizer = optim.SGD(model.parameters(), lr=lr)
criterion = nn.MSELoss()
  loss_record = []  # for recording losses
  sv_record = []  # for recording singular values
rsm_record = [] # for recording representational similarity matrices
for i in range(n_epochs):
y_pred = model(in_features) # forward pass
loss = criterion(y_pred, out_features) # calculating the loss
optimizer.zero_grad() # reset all the graph gradients to zero
loss.backward() # back propagation of the error
optimizer.step() # gradient step
# calculating the W_tot by multiplying all layers' weights
W_tot = model.layers[-1].weight.detach() # starting from the last layer
for i in range(2, len(model.layers)+1):
W_tot = W_tot @ model.layers[-i].weight.detach()
U, Σ, V = torch.svd(W_tot) # performing the SVD!
# calculating representational similarity matrix
H1 = model.layers[0].weight.detach() @ in_features
#################################################
## Use H1 to calculate the representational similarity matrix
# Complete the function and remove or comment the line below
raise NotImplementedError("Calculate the RSM")
#################################################
RSM = ...
loss_record.append(loss.item())
sv_record.append(Σ.numpy())
rsm_record.append(RSM.numpy())
return np.array(loss_record), np.array(sv_record), np.array(rsm_record)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D2_LinearNN/solutions/W1D2_Tutorial3_Solution_78a1d0b5.py)
```python
#@markdown Make sure you execute this cell to train the network
deep_model = VariableDepthWidth(in_dim=dim_input,
out_dim=dim_output,
hid_dims=[30])
training_losses, singular_values, rep_sim_mats, _ = train_svd_rsa_track(deep_model,
input_tensor,
output_tensor,
n_epochs=250,
lr=100.0)
```
Using the widget below, you can look at the representational similarity matrix at any point during training.
```python
#@markdown Make sure you execute this cell to enable the widget!
i_ep_slider = IntSlider(min=5, max=245, step=1, value=50,
continuous_update=False, description='Epoch',
layout=Layout(width='680px'))
widgets_ui = HBox([i_ep_slider])
widgets_out = interactive_output(plot_loss_sv_rsm,
{'loss_array': fixed(training_losses),
'sv_array': fixed(singular_values),
'rsm_array': fixed(rep_sim_mats),
'i_ep': i_ep_slider})
display(widgets_ui, widgets_out)
```
```python
#@title Video 3.2: Linear Regression
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="etsXyJJSru4", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
```
## Demonstration: Linear Regression vs. DLNN
A linear neural network with NO hidden layers is, at its core, very similar to linear regression. We also know that no matter how many hidden layers a linear network has, it can be collapsed into a linear regression (no hidden layers).
In this demonstration, we use the hierarchically structured data to:
* analytically find the mapping between features and labels
* train a zero-depth LNN to find the mapping
* compare them to the $W_{tot}$ from the already trained deep LNN
```python
# calculating the W_tot for deep network (already trained model)
deep_weight_tot = deep_model.layers[-1].weight.detach().numpy()
for i in range(2, len(deep_model.layers)+1):
deep_weight_tot = deep_weight_tot @ deep_model.layers[-i].weight.detach().numpy()
```
```python
# analytical estimation of the weights (map)
# our data has the batch as the first dimension, so we need to transpose it
analytical_weights = linear_regression(tree_labels.T, tree_features.T)
```
```python
# create a model instance of VariableDepthWidth
zero_depth_model = VariableDepthWidth(in_dim=dim_input,
out_dim=dim_output,
hid_dims=[])
# train the zero_depth_model
training_losses = train(zero_depth_model,
input_tensor,
output_tensor,
n_epochs=250,
lr=1000.0)
# trained weights from zero_depth_model
zero_depth_model_weights = zero_depth_model.layers[0].weight.detach().numpy()
plot_loss(training_losses, "Training loss for zero depth LNN", c="r")
```
```python
print("The final weights from all methods are approximately equal?! {}!\n".format(
(np.allclose(analytical_weights, zero_depth_model_weights, atol=1e-02) and \
np.allclose(analytical_weights, deep_weight_tot, atol=1e-02))))
```
As you may have guessed, they all arrive at the same results but through very different paths.
---
# Section 4: Illusory Correlations
```python
#@title Video 4.1: Illusory Correlations
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="t_-wmMjl9kk", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
```
So far, everything looks great: all our training runs are successful (training loss converging to zero) and very fast. We could even interpret the dynamics of our deep linear networks and relate them to the data. Unfortunately, this rarely happens in practice. Real-world problems often require very deep, nonlinear networks with many hyperparameters. And ordinarily, these complex networks take hours, if not days, to train.
Let's recall the training loss curves. There was often a long plateau (where the weights are stuck at a saddle point), followed by a sudden drop. For a very deep, complex neural network, such plateaus can last for hours of training, and we often decide to stop the training because we believe it is "as good as it gets"! This raises the question of whether the network has learned all the "intended" hidden representations. But more importantly, the network might find an illusory correlation between features that it has never seen.
To better understand this, let's do the next demonstration and exercise.
## Demonstration: Illusory Correlations
So far we have worked with a dataset that has 4 animals: Canary, Robin, Goldfish, and Tuna. These animals all have bones. Therefore, if we include a "has bones" feature, the network will learn it at the second level (i.e. second bump, second singular value convergence), which is fine.
What if the dataset had Shark instead of Goldfish? Sharks don't have bones (their skeletons are made of cartilage, which is much lighter than true bone and more flexible). Then we would have a feature which is *True* (i.e. +1) for Tuna, Robin, and Canary, but *False* (i.e. 0) for all the plants and the shark! Let's see what the network does.
First, we add the new feature to the targets. We then start training our LNN and, in every epoch, record the network's prediction for "sharks having bones".
```python
item_names = ['Shark', 'Tuna', 'Robin', 'Canary', 'Rose', 'Daisy', 'Pine', 'Oak']
has_bones = [0, 1, 1, 1, 0, 0, 0, 0]
tree_features = add_feature(tree_features, has_bones)
plot_tree_data(tree_labels, tree_features, item_names)
```
You can see the new feature shown in the last column of the plot above.
```python
#@markdown Make sure you execute this cell to train the network
dim_input = tree_labels.shape[1]
dim_output = tree_features.shape[1]
input_tensor = torch.tensor(tree_labels).float()
output_tensor = torch.tensor(tree_features).float()
deep_model = VariableDepthWidth(in_dim=dim_input,
out_dim=dim_output,
hid_dims=[30])
_, singular_values, _, ill_predictions = train_svd_rsa_track(deep_model,
input_tensor,
output_tensor,
n_epochs=250,
lr=100.0,
ill_i=0)
plot_ills_sv_twin(ill_predictions, singular_values)
```
It seems that the network starts by learning the "[alternative fact](https://en.wikipedia.org/wiki/Alternative_facts)" that sharks have bones, and in later epochs, as it learns deeper representations, it can see (learn) beyond the illusory correlation. It is important to remember that we never presented the network with any data saying that sharks have bones.
```python
#@title Video 4.2: Illusory Correlations Explained
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="QMuTlq-atlc", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
```
## Exercise 4: Illusory Correlations
This exercise is just for you to explore the idea of illusory correlations. Think of medical, natural or possibly social illusory correlations which can test the learning power of deep linear neural nets.
**Notes**: Before you start, there are a few important things to know:
* the generated data is independent of the tree labels, so the names are just for convenience.
* you can rename any node in the tree object (from the `SimpleTree` class) to help you keep track of your tree. This tree is also not the one generating the samples and is purely for convenience.
Here is our example for **Non-human Living things don't speak**:
```python
# this is just for plotting a tree and has no connection to data!
tree = SimpleTree() # creates a tree
tree.rename("Canary", "Parrot") # renames the Canary node to Parrot
tree.plot() # plots the tree
```
```python
item_names = ['Goldfish', 'Tuna', 'Robin', 'Parrot', 'Rose', 'Daisy', 'Pine', 'Oak']
can_NOT_speak = [1, 1, 1, 0, 1, 1, 1, 1] # creating the new feature
ill_id = 3 # the index of your feature
tree_labels, tree_features = generate_hsd() # sampling new data from the tree
tree_features = add_feature(tree_features, can_NOT_speak) # adding the feature
plot_tree_data(tree_labels, tree_features, item_names) # plot
```
```python
#@markdown Make sure you execute this cell to train the network and plot the output
dim_input = tree_labels.shape[1]
dim_output = tree_features.shape[1]
input_tensor = torch.tensor(tree_labels).float()
output_tensor = torch.tensor(tree_features).float()
deep_model = VariableDepthWidth(in_dim=dim_input,
out_dim=dim_output,
hid_dims=[30])
_, singular_values, _, ill_predictions = train_svd_rsa_track(deep_model,
input_tensor,
output_tensor,
n_epochs=250,
lr=100.0,
ill_i=ill_id)
plot_ills_sv_twin(ill_predictions, singular_values)
```
---
# Wrap up
```python
#@title Video 4.3: Outro
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="Y0JfyCtikhc", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
```
---
# Appendix
Generally, *regression* refers to a set of methods for modeling the mapping (relationship) between one (or more) independent variable(s) (i.e. features) and one (or more) dependent variable(s) (i.e. labels). For example, we might want to examine the relative impacts of calendar date, GPS coordinates, and time of the day (the independent variables) on air temperature (the dependent variable). Regression can also be used for predictive analysis, so the independent variables are also called predictors. When the model contains more than one predictor, the method is called *multiple regression*, and when it contains more than one dependent variable it is called *multivariate regression*. Regression problems pop up whenever we want to predict a numerical (usually continuous) value.
The independent variables are collected in a vector $\mathbf{x} \in \mathbb{R}^M$, where $M$ denotes the number of independent variables, while the dependent variables are collected in a vector $\mathbf{y} \in \mathbb{R}^N$, where $N$ denotes the number of dependent variables. The mapping between them is represented by the weight matrix $\mathbf{W} \in \mathbb{R}^{N \times M}$ and a bias vector $\mathbf{b} \in \mathbb{R}^{N}$ (generalizing to affine mappings).
The multivariate regression model can be written as:
\begin{equation}
\mathbf{y} = \mathbf{W} ~ \mathbf{x} + \mathbf{b}
\end{equation}
or it can be written in matrix format as:
\begin{equation}
\begin{bmatrix} y_{1} \\ y_{2} \\ \vdots \\ y_{N} \\ \end{bmatrix} = \begin{bmatrix} w_{1,1} & w_{1,2} & \dots & w_{1,M} \\ w_{2,1} & w_{2,2} & \dots & w_{2,M} \\ \vdots & \ddots & \ddots & \vdots \\ w_{N,1} & w_{N,2} & \dots & w_{N,M} \end{bmatrix} \begin{bmatrix} x_{1} \\ x_{2} \\ \vdots \\ x_{M} \\ \end{bmatrix} + \begin{bmatrix} b_{1} \\ b_{2} \\ \vdots \\b_{N} \\ \end{bmatrix}
\end{equation}
__Vectorized regression__
Linear regression can be simply extended to multi-samples ($D$) input-output mapping, which we can collect in a matrix $\mathbf{X} \in \mathbb{R}^{M \times D}$, sometimes called the design matrix. The sample dimension also shows up in the output matrix $\mathbf{Y} \in \mathbb{R}^{N \times D}$. Thus, linear regression takes the following form:
\begin{equation}
\mathbf{Y} = \mathbf{W} ~ \mathbf{X} + \mathbf{b}
\end{equation}
where the matrix $\mathbf{W} \in \mathbb{R}^{N \times M}$ and the vector $\mathbf{b} \in \mathbb{R}^{N}$ (broadcast over the sample dimension) are the parameters we want to find.
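A small illustration of this vectorized form with arbitrary sizes (the bias is stored as a column vector so that it broadcasts over the sample dimension):
```python
import numpy as np

M, N, D = 3, 2, 5            # number of inputs, outputs and samples
W = np.random.rand(N, M)     # weight matrix
b = np.random.rand(N, 1)     # bias column vector, broadcast over the D samples
X = np.random.rand(M, D)     # design matrix
Y = W @ X + b                # vectorized affine mapping
print(Y.shape)               # (2, 5)
```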
---
hexsha: ebe3adf65c48f44dc3e946552c67b8618a74b333 | size: 70,972 | ext: ipynb | lang: Jupyter Notebook | converted: true | num_tokens: 12,597
repo: MeRajat/course-content-dl @ cb659b29a4b0acd4bd0fb2705dd28b304c0a71cd | path: tutorials/W1D2_LinearDeepLearning/student/W1D2_Tutorial3.ipynb | licenses: CC-BY-4.0, BSD-3-Clause
stars: 1 (2021-07-04T21:41:03.000Z) | issues: null | forks: null (the issues and forks entries repeat the same path, repo, head and licenses)
avg_line_length: 39.82716 | max_line_length: 805 | alphanum_fraction: 0.577157
lm_name: Qwen/Qwen-72B | lm_label: "1. YES / 2. YES" | lm_q1_score: 0.672332 | lm_q2_score: 0.787931 | lm_q1q2_score: 0.529751 | text_lang: __label__eng_Latn (0.919626) | label: 0.069119
---
## Introduction
This tutorial illustrates the spectra computation for standard and pure B modes. We will only use
the `HEALPIX` pixellisation to go through the different steps of the generation.
The `HEALPIX` survey mask is a disk centered on longitude 30° and latitude 50° with a radius of 25
degrees. The `nside` value is set to 512 for this tutorial to reduce computation time.
## Preamble
`matplotlib` magic
```python
%matplotlib inline
```
Versions used for this tutorial
```python
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import healpy as hp
import pspy
print(" Numpy :", np.__version__)
print("Matplotlib :", mpl.__version__)
print(" healpy :", hp.__version__)
print(" pspy :", pspy.__version__)
```
Numpy : 1.18.0
Matplotlib : 3.1.2
healpy : 1.13.0
pspy : 1.2.0+4.gcb26dc1
Get default data dir from `pspy` and set Planck colormap as default
```python
from pixell import colorize
colorize.mpl_setdefault("planck")
```
## Generation of the templates, mask and apodisation type
We start by specifying the `HEALPIX` survey parameters, namely the longitude, latitude and patch size. The
`nside` value is set to 512.
```python
lon, lat = 30, 50
radius = 25
nside = 512
```
Given the `nside` value, we can set the $\ell$<sub>max</sub> value
```python
lmax = 3 * nside - 1
```
For this example, we will make use of 3 components : Temperature (spin 0) and polarisation Q and U
(spin 2)
```python
ncomp = 3
```
Given the parameters, we can generate the `HEALPIX` template as follow
```python
from pspy import so_map
template = so_map.healpix_template(ncomp, nside)
```
We also define the binary template for the window function pixels
```python
binary = so_map.healpix_template(ncomp=1, nside=nside)
vec = hp.pixelfunc.ang2vec(lon, lat, lonlat=True)
disc = hp.query_disc(nside, vec, radius=radius*np.pi/180)
binary.data[disc] = 1
```
## Generation of spectra
### Generate window
We then create an apodisation for the survey mask. We use a C1 apodisation with an apodisation size
of 5 degrees
```python
from pspy import so_window
window = so_window.create_apodization(binary, apo_type="C1", apo_radius_degree=5)
hp.mollview(window.data, title=None)
```
We can also have a look at the corresponding spin 1 and spin 2 window functions
```python
niter = 3
w1_plus, w1_minus, w2_plus, w2_minus = so_window.get_spinned_windows(window, lmax=lmax, niter=niter)
plt.figure(figsize=(8, 8))
kwargs = {"rot": (lon, lat, 0), "xsize": 3500, "reso": 1, "title": None}
hp.gnomview(w1_plus.data, **kwargs, sub=(2, 2, 1))
hp.gnomview(w1_minus.data, **kwargs, sub=(2, 2, 2))
hp.gnomview(w2_plus.data, **kwargs, sub=(2, 2, 3))
hp.gnomview(w2_minus.data, **kwargs, sub=(2, 2, 4))
```
### Binning file
We create a binning file with the following format: lmin, lmax, lmean
```python
import os
output_dir = "/tmp/tutorial_purebb"
os.makedirs(output_dir, exist_ok=True)
binning_file = os.path.join(output_dir, "binning.dat")
from pspy import pspy_utils
pspy_utils.create_binning_file(bin_size=50, n_bins=300, file_name=binning_file)
```
### Compute mode coupling matrix
For spin 0 and 2 the window needs to be a tuple made of two objects: the window used for spin 0 and the
one used for spin 2
```python
window_tuple = (window, window)
```
The windows (for `spin0` and `spin2`) are going to couple modes together, so we compute a mode coupling
matrix in order to undo this effect, given the binning file. We do it for both calculations, *i.e.*
standard and pure B modes
```python
from pspy import so_mcm
print("computing standard mode coupling matrix")
mbb_inv, Bbl = so_mcm.mcm_and_bbl_spin0and2(window_tuple,
binning_file,
lmax=lmax,
niter=niter,
type="Cl")
print("computing pure mode coupling matrix")
mbb_inv_pure, Bbl_pure = so_mcm.mcm_and_bbl_spin0and2(window_tuple,
binning_file,
lmax=lmax,
niter=niter,
type="Cl",
pure=True)
```
computing standard mode coupling matrix
computing pure mode coupling matrix
### Generation of ΛCDM power spectra
We first have to compute $C_\ell$ data using a cosmology code such as [CAMB](https://camb.readthedocs.io/en/latest/) and we need to install it
since this is not a prerequisite of `pspy`. We can do it within this notebook by executing the
following command
```python
%pip install -U camb
```
Requirement already up-to-date: camb in /home/garrido/Workdir/CMB/development/pspy/pyenv/lib/python3.8/site-packages (1.1.0)
Requirement already satisfied, skipping upgrade: scipy>=1.0 in /home/garrido/Workdir/CMB/development/pspy/pyenv/lib/python3.8/site-packages (from camb) (1.4.1)
Requirement already satisfied, skipping upgrade: six in /home/garrido/Workdir/CMB/development/pspy/pyenv/lib/python3.8/site-packages (from camb) (1.13.0)
Requirement already satisfied, skipping upgrade: sympy>=1.0 in /home/garrido/Workdir/CMB/development/pspy/pyenv/lib/python3.8/site-packages (from camb) (1.5)
Requirement already satisfied, skipping upgrade: numpy>=1.13.3 in /home/garrido/Workdir/CMB/development/pspy/pyenv/lib/python3.8/site-packages (from scipy>=1.0->camb) (1.18.0)
Requirement already satisfied, skipping upgrade: mpmath>=0.19 in /home/garrido/Workdir/CMB/development/pspy/pyenv/lib/python3.8/site-packages (from sympy>=1.0->camb) (1.1.0)
Note: you may need to restart the kernel to use updated packages.
To make sure everything goes well, we can import `CAMB` and check its version
```python
import camb
print("CAMB version:", camb.__version__)
```
CAMB version: 1.1.0
Now that `CAMB` is properly installed, we will produce $C_\ell$ data from $\ell$<sub>min</sub>=2 to
$\ell$<sub>max</sub>=10<sup>4</sup> for the following set of $\Lambda$CDM parameters
```python
ellmin, ellmax = 2, 10**4
ell = np.arange(ellmin, ellmax)
cosmo_params = {
"H0": 67.5,
"As": 1e-10*np.exp(3.044),
"ombh2": 0.02237,
"omch2": 0.1200,
"ns": 0.9649,
"Alens": 1.0,
"tau": 0.0544
}
pars = camb.set_params(**cosmo_params)
pars.set_for_lmax(ellmax, lens_potential_accuracy=1)
results = camb.get_results(pars)
powers = results.get_cmb_power_spectra(pars, CMB_unit="muK")
```
We finally have to write the $C_\ell$ values into a file to feed the `so_map.synfast` function for the
`HEALPIX` pixellisation template
```python
cl_file = os.path.join(output_dir, "cl_camb.dat")
np.savetxt(cl_file,
np.hstack([ell[:, np.newaxis], powers["total"][ellmin:ellmax]]))
```
## Running simulations
Given the parameters and data above, we will now run `n_sims` simulations to check the mean and
variance of the BB spectrum. For illustrative purposes, we will only run 10 simulations (~ a few minutes)
but, for reasonable comparisons, you should increase this number to a few tens of simulations.
We will do it for both calculations (standard and pure) and finally we will graphically compare
results
We first need to specify the order of the spectra to be used by `pspy`, although only the BB spectrum will
be used
```python
spectra = ["TT", "TE", "TB", "ET", "BT", "EE", "EB", "BE", "BB"]
```
and we define a dictionary of methods for each calculation type of the B mode spectrum
```python
from pspy import sph_tools
methods = {
"standard": {"alm" : sph_tools.get_alms, "mbb": mbb_inv, "bb": []},
"pure": {"alm": sph_tools.get_pure_alms, "mbb": mbb_inv_pure, "bb": []}
}
```
```python
from pspy import so_spectra
n_sims = 10
for i in range(n_sims):
cmb = template.synfast(cl_file)
for k, v in methods.items():
get_alm = v.get("alm")
alm = get_alm(cmb, window_tuple, niter, lmax)
ell, ps = so_spectra.get_spectra(alm, spectra=spectra)
ellb, ps_dict = so_spectra.bin_spectra(ell,
ps,
binning_file,
lmax,
type="Cl",
mbb_inv=v.get("mbb"),
spectra=spectra)
v["bb"] += [ps_dict["BB"]]
```
Let's plot the mean results against the theoretical value of the BB spectrum
```python
for k, v in methods.items():
v["mean"] = np.mean(v.get("bb"), axis=0)
v["std"] = np.std(v.get("bb"), axis=0)
from pspy import pspy_utils
ell_th, ps_theory = pspy_utils.ps_lensed_theory_to_dict(cl_file, output_type="Cl", lmax=lmax)
ps_theory_b = so_mcm.apply_Bbl(Bbl, ps_theory, spectra=spectra)
ps_theory_b_pure = so_mcm.apply_Bbl(Bbl_pure, ps_theory, spectra=spectra)
fac = ellb * (ellb + 1) / (2 * np.pi)
facth = ell_th * (ell_th + 1) / (2 * np.pi)
plt.figure(figsize=(7, 6))
grid = plt.GridSpec(4, 1, hspace=0, wspace=0)
main = plt.subplot(grid[:3], xticklabels=[], xlim=(0, 2*nside))
main.plot(ell_th[:lmax], ps_theory["BB"][:lmax] * facth[:lmax], color="grey")
main.errorbar(ellb, ps_theory_b["BB"] * fac, color="tab:red", label="binned theory BB")
main.errorbar(ellb, ps_theory_b_pure["BB"] * fac, color="tab:blue", label="binned theory BB pure")
main.errorbar(ellb, methods.get("standard").get("mean") * fac,
methods.get("standard").get("std") * fac, fmt=".", color="tab:red", label="mean BB")
main.errorbar(ellb, methods.get("pure").get("mean") * fac,
methods.get("pure").get("std") * fac, fmt=".", color="tab:blue", label="mean BB pure")
main.set(ylim=(-0.07, 0.17), ylabel=r"$D^{BB}_{\ell}$")
plt.legend(title=r"$n_{\rm sims}=%s$" % n_sims)
ratio = plt.subplot(grid[3], xlim=(0, 2*nside))
ratio.plot(ellb, methods.get("pure").get("std") / methods.get("standard").get("std"), ".-k")
ratio.set(ylabel=r"$\sigma^{\rm pure}_\ell/ \sigma_\ell$", xlabel=r"$\ell$");
ratio.axhline(1)
```
---
hexsha: d1f3e76e604304ed2e469b57d14edf0178a2fece | size: 298,164 | ext: ipynb | lang: Jupyter Notebook | converted: true | num_tokens: 2,888
stars repo: xgarrido/pspy @ 8c1c13828ca982a1747ddeed2ee9c35b09fd9f0b | path: notebooks/tutorial_purebb.ipynb | license: BSD-3-Clause | stars: 6 (2020-01-26T22:00:31.000Z to 2021-05-04T08:13:44.000Z)
issues repo: simonsobs/pspy @ b1faf15eb7c9f4c2bee80fe5cfafaab1d4bc6470 | same path and license | issues: 5 (2021-02-12T13:04:08.000Z to 2022-01-24T18:57:34.000Z)
forks repo: xgarrido/pspy @ 8c1c13828ca982a1747ddeed2ee9c35b09fd9f0b | same path and license | forks: 1 (2021-11-02T11:01:58.000Z)
avg_line_length: 440.419498 | max_line_length: 239,124 | alphanum_fraction: 0.941009
lm_name: Qwen/Qwen-72B | lm_label: "1. YES / 2. YES" | lm_q1_score: 0.72487 | lm_q2_score: 0.70253 | lm_q1q2_score: 0.509243 | text_lang: __label__eng_Latn (0.843819) | label: 0.021472
---
$\newcommand{\ve}[1]{\mathbf{#1}}$
$\newcommand{\ovo}{\overline{O}}$
$\def\Brack#1{\left[ #1 \right]}$
$\def\bra#1{\mathinner{\langle{#1}|}}$
$\def\ket#1{\mathinner{|{#1}\rangle}}$
$\def\braket#1{\mathinner{\langle{#1}\rangle}}$
$\def\Bra#1{\left<#1\right|}$
$\def\Ket#1{\left|#1\right>}$
$\def\KetC#1{\left|\left\{ #1 \right\} \right\rangle}$
$\def\BraC#1{\left\langle \left\{ #1 \right\} \right|}$
$\def\sen{\mathop{\mbox{\normalfont sen}}\nolimits}$
$\newcommand{\vac}{\ket{\text{vac}}}$
$\newcommand{\vacbra}{\bra{\text{vac}}}$
$\newcommand{\sinc}{\text{sinc}}$
<center> <h1>Quantum Non-Demolition Measurements (QND)</h1>
<h2> Non-destructive quantum measurements </h2></center>
<center><h1> Outline </h1></center>
* Beam splitter
* Interferometer
* Non-destructive measurements (just a brief look)
<center><h1> Beam splitter </h1></center>
A lossless beam splitter, like the one shown in the figure, has:
* Two input field modes
* Two output field modes
* The absence of losses guarantees that probability is conserved
* It is a unitary transformation that can be written as
\begin{equation}
|\psi' \rangle = U | \psi \rangle
\end{equation}
* It can be written in matrix form
$$\mathbf{U} = e^{i\kappa}\left[\begin{array}
{rr}
te^{i\delta_r} & -re^{-i\delta_t} \\
re^{i\delta_t} & te^{-i\delta_r} \\
\end{array}\right]
$$
The matrix form means that an incoming state $|0\rangle$ or $|1\rangle$ is transformed as
$$|0\rangle \rightarrow U_{00} |0\rangle + U_{10} |1\rangle $$
$$|1\rangle \rightarrow U_{01} |0\rangle + U_{11} |1\rangle$$
<center><h2> Some interesting facts about the beam splitter </h2></center>
* $t^2 + r^2 =1$
* The phase difference between the reflected and transmitted fields when the input state is $|0\rangle$ is $\delta_0=\delta_r-\delta_t$
* When the input state is $|1\rangle$, the phase difference is $\delta_1=-\delta_r+\delta_t \pm \pi$.
* Moreover, every beam splitter satisfies $\delta_0+\delta_1=\pm \pi$ (the unitarity of $\mathbf{U}$ is checked numerically in the short sketch below)
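A quick numerical check (with arbitrary illustrative values of $t$, $r$, $\kappa$, $\delta_r$ and $\delta_t$) that the matrix $\mathbf{U}$ written above is unitary whenever $t^2 + r^2 = 1$:
```python
import numpy as np

theta = 0.3
t, r = np.cos(theta), np.sin(theta)              # so that t**2 + r**2 == 1
kappa, d_r, d_t = 0.7, 0.2, 0.5                  # arbitrary phases
U = np.exp(1j * kappa) * np.array([
    [t * np.exp(1j * d_r), -r * np.exp(-1j * d_t)],
    [r * np.exp(1j * d_t),  t * np.exp(-1j * d_r)],
])
print(np.allclose(U.conj().T @ U, np.eye(2)))    # True: probability is conserved
```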
<center><h2> Special beam splitters </h2></center>
$$\mathbf{U}_1 = e^{i\kappa}\left[\begin{array}
{rr}
ir & t \\
t & ir \\
\end{array}\right], \quad
\mathbf{U}_2 = e^{i\kappa}\left[\begin{array}
{rr}
r & t \\
t & -r \\
\end{array}\right]
$$
* $U_1$ is not time-symmetric. What does this mean?
* But it is spatially symmetric for $r=t=1/\sqrt{2}$
* Is $U_2$ time-symmetric?
* And spatially symmetric?
## Plotting the qubit on the Bloch sphere
Recall that to plot the qubit on the Bloch sphere we need to obtain the projections onto each of the axes. Since we have the density matrix of the system, and since the state of a qubit can be written in general in terms of the Pauli matrices as
$$
\rho = \frac{1}{2} \left( \mathbf{1}+ \vec{r} \cdot \vec{\sigma} \right),
$$
where $\vec{\sigma} = (\sigma_1,\sigma_2, \sigma_3 )$ and $\vec{r} = (\sin \theta \cos \phi, \sin \theta \sin\phi, \cos \theta)$, obtaining the components $r_i$ is straightforward:
$$
r_i = \text{tr} [\rho \sigma_i]
$$
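A standalone check of this recipe (independent of the StrawberryFields code below), using the single-qubit state $(|0\rangle + |1\rangle)/\sqrt{2}$, whose Bloch vector should be $(1, 0, 0)$:
```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]])
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]])
psi = np.array([1, 1]) / np.sqrt(2)     # (|0> + |1>) / sqrt(2)
rho = np.outer(psi, psi.conj())         # density matrix of the pure state
r = [float(np.real(np.trace(rho @ s))) for s in (s1, s2, s3)]
print(r)                                # approximately [1.0, 0.0, 0.0]
```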
```python
import numpy as np
from scipy.linalg import expm, norm
import tensorflow as tf
import strawberryfields as sf
from strawberryfields.ops import *
from strawberryfields.backends.tfbackend.ops import partial_trace
# Pauli matrices
s1 = np.array([[0, 1],[ 1, 0]])
s2 = np.array([[0, -1j],[1j, 0]])
s3 = np.array([[1, 0],[0, -1]])
cutoff = 2
#entrada 1 del BS
psi = np.zeros([cutoff], dtype=np.complex128)
psi[0] = 1.0
psi[1] = 1.0
psi /= np.linalg.norm(psi)
#entrada 2 del BS
phi = np.zeros([cutoff],dtype=np.complex128)
phi[0] = 1.0
phi /= np.linalg.norm(phi)
```
```python
# conversion to the type required by TensorFlow
psi = tf.cast(psi, tf.complex64)
phi = tf.cast(phi,tf.complex64)
in_state = tf.tensordot(psi,phi,axes=0)
eng, q = sf.Engine(2)
with eng:
Ket(in_state) | q
BSgate(np.pi/4,0) | q
#state_out = eng.run('tf', cutoff_dim=cutoff,eval=False,modes=[1])
state_out = eng.run('tf', cutoff_dim=cutoff)
# Density matrix of the system and the reduced density matrices
rho=state_out.dm()
rhoA = np.einsum('ijll->ij', rho)
rhoB = np.einsum('kkij->ij', rho)
# Plot of p(n) for one of the output modes
import matplotlib.pyplot as plt
plt.bar(np.arange(cutoff), height=np.real_if_close(np.diag(rhoA)))
```
```python
# Plot on the Bloch sphere
def M(axis, theta):
    # Function that performs a rotation of angle theta about the given axis
return expm(np.cross(np.eye(3), axis/norm(axis)*theta))
from qutip import Bloch
b=Bloch()
vec = [[0,0,-1],[0,1,0],[0,0,1]]
b.add_vectors(vec)
npts=10;
v5, axis, theta = [0.1,0.5,0], [0,0,1],1.2
#v=v/norm(v)
v1= np.trace(rhoA@s1)
v2= np.trace(rhoA@s2)
v3= np.trace(rhoA@s3)
v = np.real_if_close([v1,v2,v3])
b.clear()
b.vector_color = ['r']
b.view = [-40,30]
#b.add_points(np.transpose(vecv))
b.add_vectors(v)
```
```python
b.show()
```
### Exercise:
* Write the transformations of the states $|0\rangle$ and $| 1\rangle$
* Obtain the transformations for a linear combination of states of the form $|\psi \rangle = \alpha |0 \rangle + \beta |1\rangle$
* Write a program using RBF that performs the transformation of these states.
* Can we build any gate from the set studied so far with what we have just seen about beam splitters?
PERSONAL NOTE: the parametrization of the coefficients still needs to be added
* Construct an arbitrary unitary transformation using phase retarders and the $U_2$ gate
<center><h2> Mach-Zehnder interferometer </h2></center>
A Mach-Zehnder interferometer is built as follows
- It is equivalent to introducing a "phase retarder" between two Hadamard gates (verify this). This can be done with a phase gate, which is equivalent to $H \phi(-\alpha) H$, where the phase $\alpha$ is a relative phase between the two possible optical paths
- Now consider a single photon entering the interferometer through the upper arm ($|0\rangle$), and a photon being detected in the upper arm at the output of the interferometer. One can show that the probability of obtaining this photon is given by
$$p_0 = \frac{1}{2} \left( 1+ \cos \alpha\right)$$
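A small sketch in the two-mode amplitude picture (the single-photon state is represented as a 2-vector of path amplitudes, an assumption made only for this illustration) that reproduces the formula above:
```python
import numpy as np

alpha = 1.1                                        # relative phase between the two arms
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)       # 50/50 beam splitter (Hadamard-like)
P = np.diag([1.0, np.exp(1j * alpha)])             # phase retarder in one arm
out = H @ P @ H @ np.array([1.0, 0.0])             # photon enters through the upper arm |0>
print(abs(out[0])**2, 0.5 * (1 + np.cos(alpha)))   # the two numbers agree
```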
<center><h2> Null ("non-destructive") measurements </h2></center>
We want to detect a bomb that is so sensitive that it explodes upon absorbing a single photon. That is, if we observed it, then merely by knowing it is there (having illuminated it in some way in order to observe it) we would have triggered its explosion. Is there any way to resolve this dilemma?
NOTE: add the survey and its results in this section
#### A null measurement is one in which the object being measured does not change when it has certain characteristics, but is modified when it has others. Consider the example in the figure, where an electron can only be excited if a photon arrives with the right polarization.
<center><h2> Implementation in Strawberry Fields </h2></center>
We want to detect the bomb shown in the figure
Coding and simulation workflow:
* Prepare an initial state in $|0\rangle$.
* Evolve it through the BS
* Compute the probability that the photon took the lower path, using $P_1 = |\langle 1 | \psi \rangle |^2$.
* Apply the null measurement, i.e. prepare the state again in mode $|0\rangle$ and evolve it through the BS
* Compute the final detection probabilities.
```python
# Prepare the inputs of the BS
psi = np.zeros([cutoff], dtype=np.complex128)
#psi[0] = 1.0
psi[1] = 1.0
psi /= np.linalg.norm(psi)
# input 2 of the BS
phi = np.zeros([cutoff],dtype=np.complex128)
phi[0] = 1.0
phi /= np.linalg.norm(phi)
psi = tf.cast(psi, tf.complex64)
phi = tf.cast(phi,tf.complex64)
in_state = tf.tensordot(psi,phi,axes=0)
eng, q = sf.Engine(2)
with eng:
Ket(in_state) | q
    BSgate(np.pi/4,0) | q # 2: effect of the BS on the input
    # Measure | q[1] # this strategy has the problem that
state_out = eng.run('tf', cutoff_dim=cutoff)
rho1=state_out.dm()
# Perform the measurement
MMM= np.tensordot(np.eye(2),np.array([0,1])[np.newaxis].T@np.array([0,1])[np.newaxis],axes=0)
Prob_Bomba=np.real_if_close(np.trace(np.trace(rho1@MMM)))
# Prepare the state again
# Send it through the beam splitter
# Measure n
eng, q = sf.Engine(2)
with eng:
Ket(in_state) | q
    BSgate(np.pi/4,0) | q # 2: effect of the BS on the input
Measure | q[0]
Measure | q[1]
#sess=tf.Session()
#with sess.as_default():
#psi_value=sess.run(psi)
# MMM=MMM.eval()
# rhoB2=rhoB.eval()
```
```python
```
0.4999999701976776
```python
np.trace(np.trace(state_out.dm()@np.tensordot(np.eye(2),np.eye(2),axes=0)))
```
(0.9999999403953552+0j)
```python
np.array([0,1])[np.newaxis].T@np.array([0,1])[np.newaxis]
```
array([[0, 0],
[0, 1]])
```python
a = np.array([5,4])[np.newaxis]
```
```python
a
```
array([[5, 4]])
```python
np.array([0,1])[np.newaxis].T@np.array([0,1])[np.newaxis]
```
array([[0, 0],
[0, 1]])
```python
np.eye(2)
```
array([[1., 0.],
[0., 1.]])
```python
MMM
```
array([[[[0., 0.],
[0., 1.]],
[[0., 0.],
[0., 0.]]],
[[[0., 0.],
[0., 0.]],
[[0., 0.],
[0., 1.]]]])
```python
```
---
hexsha: 9dfc0160f4afb1ca0b17a11725e968ca07490efe | size: 108,405 | ext: ipynb | lang: Jupyter Notebook | converted: true | num_tokens: 3,175
repo: ChekHub/ENS3 @ 245685f7de4c18a6323fcccbcefd869b2b102513 | path: lectures/Notebooks/Dia_4_QNDs.ipynb | license: MIT
stars: 2 (2019-05-12T00:05:52.000Z to 2019-05-12T00:13:59.000Z) | issues: null | forks: null (the issues and forks entries repeat the same path, repo, head and license)
avg_line_length: 171.255924 | max_line_length: 84,764 | alphanum_fraction: 0.896739
lm_name: Qwen/Qwen-72B | lm_label: "1. YES / 2. YES" | lm_q1_score: 0.805632 | lm_q2_score: 0.76908 | lm_q1q2_score: 0.619596 | text_lang: __label__spa_Latn (0.861091) | label: 0.277859
---
<b>Sketch the graph and obtain an equation of the parabola that satisfies the given conditions.</b>
<b>21. Vertex: $V(0,-2)$; directrix: $2x-3=0$</b>
<b>Rearranging the equation of the directrix</b><br><br>
$d: x = \frac{3}{2}$<br><br><br>
<b>From a sketch it is clear that the parabola's axis is parallel to the $x$-axis, so its equation has the form $(y-k)^2 = 2p(x-h)$</b><br><br>
<b>Substituting the vertex coordinates $x=0$ and $y=-2$</b><br><br>
$(y-(-2))^2 = 2p(x-0)$<br><br>
$(y+2)^2 = 2px$<br><br>
<b>Finding the value of $p$ using the point of the directrix $D(\frac{3}{2},-2)$: the distance from the vertex to the directrix equals $\frac{|p|}{2}$</b><br><br>
$\frac{|p|}{2} = \sqrt{(0-\frac{3}{2})^2+(-2-(-2))^2}$<br><br>
$\frac{|p|}{2} = \sqrt{(-\frac{3}{2})^2 + 0}$<br><br>
$\frac{p}{2} = \pm \sqrt{\frac{9}{4}}$<br><br>
<b>Since the directrix lies to the right of the vertex, the parabola opens towards negative $x$, so $p$ is negative</b><br><br>
$\frac{p}{2} = -\frac{3}{2}$<br><br>
$p = -3$<br><br>
<b>Substituting $p$ into the formula</b><br><br>
$(y+2)^2 = 2 \cdot (-3) \cdot x$<br><br>
$(y+2)^2 = -6x$<br><br>
$y^2 + 4y + 4 = -6x$<br><br>
$y^2 + 4y + 6x + 4 = 0$<br><br>
<b>Graph of the parabola</b>
```python
from sympy import *
from sympy.plotting import plot_implicit
x, y = symbols("x y")
plot_implicit(Eq((y+2)**2, -6*(x+0)), (x,-20,20), (y,-20,20),
title=u'Gráfico da parábola', xlabel='x', ylabel='y');
```
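As a quick numerical sanity check (a sketch; the focus $F(-\frac{3}{2},-2)$ below follows from the standard form with $p=-3$ and is not stated explicitly in the exercise), a point on the curve should be equidistant from the focus and from the directrix $x=\frac{3}{2}$:
```python
from sympy import Rational, sqrt

# focus and directrix implied by (y+2)^2 = -6x with p = -3
Fx, Fy = Rational(-3, 2), -2
x_directrix = Rational(3, 2)

# a point on the parabola: x = -6 gives (y+2)^2 = 36, so y = 4
px, py = -6, 4
dist_focus = sqrt((px - Fx)**2 + (py - Fy)**2)
dist_directrix = abs(px - x_directrix)
print(dist_focus, dist_directrix)  # both equal 15/2
```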
| 716ea3bb386ea627f429a81a7583dbd1ba045414 | 14,204 | ipynb | Jupyter Notebook | Problemas Propostos. Pag. 172 - 175/21.ipynb | mateuschaves/GEOMETRIA-ANALITICA | bc47ece7ebab154e2894226c6d939b7e7f332878 | ["MIT"] | 1 | 2020-02-03T16:40:45.000Z | 2020-02-03T16:40:45.000Z | Problemas Propostos. Pag. 172 - 175/21.ipynb | mateuschaves/GEOMETRIA-ANALITICA | bc47ece7ebab154e2894226c6d939b7e7f332878 | ["MIT"] | null | null | null | Problemas Propostos. Pag. 172 - 175/21.ipynb | mateuschaves/GEOMETRIA-ANALITICA | bc47ece7ebab154e2894226c6d939b7e7f332878 | ["MIT"] | null | null | null | 161.409091 | 11,812 | 0.881794 | true | 580 | Qwen/Qwen-72B | 1. YES 2. YES | 0.917303 | 0.882428 | 0.809453 | __label__por_Latn | 0.906719 | 0.718964 |
# Seminar 5
```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
%matplotlib inline
```
```python
plt.rcParams['figure.figsize'] = (15, 7)
```
## Linear classification
### Statement of the classification problem
Suppose we are given a training sample $X = \left\{ \left( x_i, y_i \right) \right\}_{i=1}^l, x_i \in \mathbb{X}, y_i \in \mathbb{Y}$ of $l$ object-answer pairs, where
$\mathbb{X}$ is the object space and
$\mathbb{Y}$ is the answer space.
### Logistic regression
As an upper bound on the threshold (0-1) loss, consider the logistic function:
$$\tilde{L}(M) = \log (1 + \exp(-M)).$$
Thus, we need to solve the following optimization problem:
$$\frac{1}{l} \sum_{i=1}^l \tilde{L} (M_i) = \frac{1}{l} \sum_{i=1}^l \log (1 + \exp (-y_i \langle w, x_i \rangle)) \to \min_w$$
The resulting learning method is called **logistic regression**.
One useful property of logistic regression, which we will study a bit later, is that in addition to the class label it also predicts the probability of belonging to each class, which can be useful in some problems.
**Example**: You work at a bank and want to issue loans only to clients who will repay them with probability at least 0.9.
Let us try to construct the loss function from different considerations.
If the algorithm $b(x) \in [0, 1]$ really outputs probabilities, then
they must be consistent with the sample.
From the algorithm's point of view, the probability that an object $x_i$ with class $y_i$ appears in the sample
equals $b(x_i)^{[y_i = +1]} (1 - b(x_i))^{[y_i = -1]}$.
Based on this, we can write down the likelihood of the sample (i.e., the probability of obtaining such a sample
from the algorithm's point of view):
$$
Q(a, X)
=
\prod_{i = 1}^{\ell}
b(x_i)^{[y_i = +1]} (1 - b(x_i))^{[y_i = -1]}.
$$
This likelihood can be used as a functional for training the algorithm,
with the caveat that it is more convenient to optimize its logarithm:
$$
-\sum_{i = 1}^{\ell} \left(
[y_i = +1] \log b(x_i)
+
[y_i = -1] \log (1 - b(x_i))
\right)
\to
\min
$$
This loss function is called the logarithmic loss (log-loss).
We want to predict probabilities, that is, the algorithm should output numbers in the interval [0, 1]. This is easy to achieve by setting $b(x) = \sigma(\langle w, x \rangle)$,
where $\sigma$ can be any monotonically non-decreasing function
with range $[0, 1]$.
We will use the sigmoid function: $\sigma(z) = \frac{1}{1 + \exp(-z)}$.
Thus, the larger the dot product $\langle w, x \rangle$,
the larger the predicted probability.
Let us substitute the transformed output of the linear model into the logarithmic loss:
\begin{align*}
-\sum_{i = 1}^{\ell} &\left(
[y_i = +1]
\log \frac{1}{1 + \exp(-\langle w, x_i \rangle)}
+
[y_i = -1]
\log \frac{\exp(-\langle w, x_i \rangle)}{1 + \exp(-\langle w, x_i \rangle)}
\right)
=\\
&=
-\sum_{i = 1}^{\ell} \left(
[y_i = +1]
\log \frac{1}{1 + \exp(-\langle w, x_i \rangle)}
+
[y_i = -1]
\log \frac{1}{1 + \exp(\langle w, x_i \rangle)}
\right)
=\\
&=
\sum_{i = 1}^{\ell} \left(
[y_i = +1]
\log (1 + \exp(-\langle w, x_i \rangle))
+
[y_i = -1]
\log (1 + \exp(\langle w, x_i \rangle))
\right)
=\\
&=
\sum_{i = 1}^{\ell}
\log \left(
1 + \exp(-y_i \langle w, x_i \rangle)
\right).
\end{align*}
The resulting function is exactly the logistic loss
mentioned at the beginning.
A linear classification model trained by minimizing this functional
is called logistic regression.
As the above reasoning shows, it optimizes
the likelihood of the sample and gives correct estimates of the probability of belonging to the positive class.
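A small numerical sketch (with made-up scores and labels, not part of the seminar data) confirming that the log-loss written through probabilities coincides with the logistic loss written through margins:
```python
import numpy as np

rng = np.random.RandomState(0)
scores = rng.randn(5)                 # <w, x_i> for 5 hypothetical objects
labels = np.array([1, -1, 1, 1, -1])

# log-loss through predicted probabilities b(x) = sigmoid(<w, x>)
probs = 1.0 / (1.0 + np.exp(-scores))
log_loss = -np.sum(np.where(labels == 1, np.log(probs), np.log(1 - probs)))

# logistic loss through margins M_i = y_i <w, x_i>
logistic_loss = np.sum(np.log(1 + np.exp(-labels * scores)))

print(np.allclose(log_loss, logistic_loss))  # True
```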
### Example: training a logistic regression
#### Detecting spam from the text of an email
Let us use linear classification models to build an algorithm that separates spam from normal mail. For the experiments we will use a small dataset from the [UCI](https://archive.ics.uci.edu/ml/datasets.html) repository. The objects in the dataset correspond to emails described by features derived from the email text; spam is the positive class, and a good email is the negative class.
```python
spam_data = pd.read_csv('spam_data.csv')
spam_data
X, y = spam_data.iloc[:, :-1].values, spam_data.iloc[:, -1].values
spam_data.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>word_freq_make</th>
<th>word_freq_address</th>
<th>word_freq_all</th>
<th>word_freq_3d</th>
<th>word_freq_our</th>
<th>word_freq_over</th>
<th>word_freq_remove</th>
<th>word_freq_internet</th>
<th>word_freq_order</th>
<th>word_freq_mail</th>
<th>...</th>
<th>char_freq_;</th>
<th>char_freq_(</th>
<th>char_freq_[</th>
<th>char_freq_!</th>
<th>char_freq_$</th>
<th>char_freq_#</th>
<th>capital_run_length_average</th>
<th>capital_run_length_longest</th>
<th>capital_run_length_total</th>
<th>spam</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.00</td>
<td>0.0</td>
<td>0.65</td>
<td>0.0</td>
<td>0.00</td>
<td>0.00</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>...</td>
<td>0.000</td>
<td>0.000</td>
<td>0.0</td>
<td>0.125</td>
<td>0.0</td>
<td>0.0</td>
<td>1.250</td>
<td>5</td>
<td>40</td>
<td>0</td>
</tr>
<tr>
<th>1</th>
<td>0.96</td>
<td>0.0</td>
<td>0.00</td>
<td>0.0</td>
<td>0.32</td>
<td>0.00</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>...</td>
<td>0.000</td>
<td>0.057</td>
<td>0.0</td>
<td>0.000</td>
<td>0.0</td>
<td>0.0</td>
<td>1.147</td>
<td>5</td>
<td>78</td>
<td>0</td>
</tr>
<tr>
<th>2</th>
<td>0.30</td>
<td>0.0</td>
<td>0.30</td>
<td>0.0</td>
<td>0.00</td>
<td>0.00</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>...</td>
<td>0.102</td>
<td>0.718</td>
<td>0.0</td>
<td>0.000</td>
<td>0.0</td>
<td>0.0</td>
<td>1.404</td>
<td>6</td>
<td>118</td>
<td>0</td>
</tr>
<tr>
<th>3</th>
<td>0.00</td>
<td>0.0</td>
<td>0.00</td>
<td>0.0</td>
<td>0.00</td>
<td>0.00</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>...</td>
<td>0.000</td>
<td>0.000</td>
<td>0.0</td>
<td>0.353</td>
<td>0.0</td>
<td>0.0</td>
<td>1.555</td>
<td>4</td>
<td>14</td>
<td>0</td>
</tr>
<tr>
<th>4</th>
<td>0.31</td>
<td>0.0</td>
<td>0.62</td>
<td>0.0</td>
<td>0.00</td>
<td>0.31</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>...</td>
<td>0.000</td>
<td>0.232</td>
<td>0.0</td>
<td>0.000</td>
<td>0.0</td>
<td>0.0</td>
<td>1.142</td>
<td>3</td>
<td>88</td>
<td>0</td>
</tr>
</tbody>
</table>
<p>5 rows × 58 columns</p>
</div>
```python
X.shape, y.shape
```
((4601, 57), (4601,))
### Training the logistic regression
Let us split the data into training and test sets in an 80/20 ratio and train a logistic regression using the [LogisticRegression](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) object.
```python
from sklearn.linear_model import LogisticRegression
X_train = X[:int(len(X) * 0.8)]
y_train = y[:int(len(X) * 0.8)]
X_test = X[int(len(X) * 0.8):]
y_test = y[int(len(X) * 0.8):]
```
```python
lr = LogisticRegression(max_iter=3000, solver='lbfgs', random_state=13)
lr.fit(X_train, y_train)
```
LogisticRegression(max_iter=3000, random_state=13)
```python
??LogisticRegression
```
```python
def pred_with_th(X, model, th=0.5):
    # clip the threshold to [0, 1]
    th = min(max(0, th), 1)
    # probability of the positive class
    probs = model.predict_proba(X)[:, 1]
    # label an object as positive only if its probability exceeds the threshold
    labels = probs > th
    return labels
```
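A hedged usage sketch of the helper above (it assumes the fitted `lr` and the `X_test`, `y_test` split defined earlier in this notebook): raising the threshold trades recall for precision, which changes the accuracy.
```python
from sklearn.metrics import accuracy_score

# compare accuracy at the default threshold and at a stricter one
for th in (0.5, 0.9):
    preds = pred_with_th(X_test, lr, th=th)
    print(th, accuracy_score(y_test, preds))
```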
Let us compute the accuracy with the corresponding function from the [sklearn.metrics](http://scikit-learn.org/stable/modules/classes.html#module-sklearn.metrics) module.
```python
spam_data['spam'].value_counts()
```
0 2788
1 1813
Name: spam, dtype: int64
```python
lr.predict(X_test)
```
```python
from sklearn.metrics import accuracy_score
print(accuracy_score(y_train, lr.predict(X_train)))
print(accuracy_score(y_test, lr.predict(X_test)))
```
0.9315217391304348
0.7937024972855592
What is the problem?
```python
X_train = X[:int(len(X) * 0.8)]
y_train = y[:int(len(X) * 0.8)]
X_test = X[int(len(X) * 0.8):]
y_test = y[int(len(X) * 0.8):]
```
```python
y_train.mean(), y_test.mean()
```
(0.24239130434782608, 1.0)
```python
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=13)
y_train.mean(), y_test.mean()
```
(0.39565217391304347, 0.38762214983713356)
```python
??train_test_split
```
```python
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=13)
y_train.mean(), y_test.mean()
```
(0.39402173913043476, 0.3941368078175896)
```python
# training
lr = LogisticRegression(max_iter=3000, solver='lbfgs')
lr.fit(X_train, y_train)
```
LogisticRegression(max_iter=3000)
```python
print(accuracy_score(y_train, lr.predict(X_train)))
print(accuracy_score(y_test, lr.predict(X_test)))
```
0.9380434782608695
0.9120521172638436
Now let us look at ROC-AUC:
```python
from sklearn.metrics import roc_auc_score
print(roc_auc_score(y_train, lr.predict_proba(X_train)[:, 1]))
print(roc_auc_score(y_test, lr.predict_proba(X_test)[:, 1]))
```
0.979321014380702
0.9591318858181029
```python
lr.predict_proba(X_train)
```
array([[0.96780045, 0.03219955],
[0.99409979, 0.00590021],
[0.69562986, 0.30437014],
...,
[0.99642941, 0.00357059],
[0.00467549, 0.99532451],
[0.6575597 , 0.3424403 ]])
Let us try to do better. Our algorithm has several hyperparameters: the type of regularization and the regularization coefficient. We will run a grid search over the hyperparameters: the algorithm will try every possible combination, compute the metric for each set, and return the best one.
```python
np.logspace(-5, 1)
```
array([1.00000000e-05, 1.32571137e-05, 1.75751062e-05, 2.32995181e-05,
3.08884360e-05, 4.09491506e-05, 5.42867544e-05, 7.19685673e-05,
9.54095476e-05, 1.26485522e-04, 1.67683294e-04, 2.22299648e-04,
2.94705170e-04, 3.90693994e-04, 5.17947468e-04, 6.86648845e-04,
9.10298178e-04, 1.20679264e-03, 1.59985872e-03, 2.12095089e-03,
2.81176870e-03, 3.72759372e-03, 4.94171336e-03, 6.55128557e-03,
8.68511374e-03, 1.15139540e-02, 1.52641797e-02, 2.02358965e-02,
2.68269580e-02, 3.55648031e-02, 4.71486636e-02, 6.25055193e-02,
8.28642773e-02, 1.09854114e-01, 1.45634848e-01, 1.93069773e-01,
2.55954792e-01, 3.39322177e-01, 4.49843267e-01, 5.96362332e-01,
7.90604321e-01, 1.04811313e+00, 1.38949549e+00, 1.84206997e+00,
2.44205309e+00, 3.23745754e+00, 4.29193426e+00, 5.68986603e+00,
7.54312006e+00, 1.00000000e+01])
```python
np.linspace(10**(-5), 10**(1))
```
array([1.00000000e-05, 2.04091429e-01, 4.08172857e-01, 6.12254286e-01,
8.16335714e-01, 1.02041714e+00, 1.22449857e+00, 1.42858000e+00,
1.63266143e+00, 1.83674286e+00, 2.04082429e+00, 2.24490571e+00,
2.44898714e+00, 2.65306857e+00, 2.85715000e+00, 3.06123143e+00,
3.26531286e+00, 3.46939429e+00, 3.67347571e+00, 3.87755714e+00,
4.08163857e+00, 4.28572000e+00, 4.48980143e+00, 4.69388286e+00,
4.89796429e+00, 5.10204571e+00, 5.30612714e+00, 5.51020857e+00,
5.71429000e+00, 5.91837143e+00, 6.12245286e+00, 6.32653429e+00,
6.53061571e+00, 6.73469714e+00, 6.93877857e+00, 7.14286000e+00,
7.34694143e+00, 7.55102286e+00, 7.75510429e+00, 7.95918571e+00,
8.16326714e+00, 8.36734857e+00, 8.57143000e+00, 8.77551143e+00,
8.97959286e+00, 9.18367429e+00, 9.38775571e+00, 9.59183714e+00,
9.79591857e+00, 1.00000000e+01])
Which metric is used is controlled by the `'scoring'` parameter.
```python
from sklearn.metrics import SCORERS
sorted(SCORERS.keys())
```
['accuracy',
'adjusted_mutual_info_score',
'adjusted_rand_score',
'average_precision',
'balanced_accuracy',
'completeness_score',
'explained_variance',
'f1',
'f1_macro',
'f1_micro',
'f1_samples',
'f1_weighted',
'fowlkes_mallows_score',
'homogeneity_score',
'jaccard',
'jaccard_macro',
'jaccard_micro',
'jaccard_samples',
'jaccard_weighted',
'max_error',
'mutual_info_score',
'neg_brier_score',
'neg_log_loss',
'neg_mean_absolute_error',
'neg_mean_gamma_deviance',
'neg_mean_poisson_deviance',
'neg_mean_squared_error',
'neg_mean_squared_log_error',
'neg_median_absolute_error',
'neg_root_mean_squared_error',
'normalized_mutual_info_score',
'precision',
'precision_macro',
'precision_micro',
'precision_samples',
'precision_weighted',
'r2',
'recall',
'recall_macro',
'recall_micro',
'recall_samples',
'recall_weighted',
'roc_auc',
'roc_auc_ovo',
'roc_auc_ovo_weighted',
'roc_auc_ovr',
'roc_auc_ovr_weighted',
'v_measure_score']
```python
from sklearn.model_selection import GridSearchCV
grid_searcher = GridSearchCV(
LogisticRegression(max_iter=3000, solver='liblinear', random_state=13), # fixed parameters are set here; the grid is below
param_grid={
'C': np.logspace(-5, 1),
'penalty': ['l1', 'l2']
},
cv=5,
scoring='roc_auc',
n_jobs = -1,
verbose = 5
)
```
The `cv=5` parameter means that 5-fold cross-validation will be used during the search for the optimal parameters. Let us recall what that is:
*Source: https://scikit-learn.org/stable/modules/cross_validation.html*
In our case the sample will be split into 5 parts, and on each of the 5 iterations one part becomes the test set while the rest becomes the training set. By computing the metrics on each iteration, we can average them at the end and obtain a fairly accurate estimate of the quality of our algorithm.
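A minimal sketch of what `cv=5` does under the hood, written with `KFold` directly (it reuses the `lr`, `X_train`, `y_train` objects defined above; the shuffling seed is an assumption):
```python
import numpy as np
from sklearn.base import clone
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import KFold

kf = KFold(n_splits=5, shuffle=True, random_state=13)
fold_scores = []
for train_idx, val_idx in kf.split(X_train):
    # train a fresh copy of the model on 4 folds, evaluate on the held-out fold
    model = clone(lr).fit(X_train[train_idx], y_train[train_idx])
    val_probs = model.predict_proba(X_train[val_idx])[:, 1]
    fold_scores.append(roc_auc_score(y_train[val_idx], val_probs))
print(np.mean(fold_scores))  # averaged ROC-AUC over the 5 folds
```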
```python
%%time
grid_searcher.fit(X_train, y_train);
```
Fitting 5 folds for each of 100 candidates, totalling 500 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 2 tasks | elapsed: 3.9s
[Parallel(n_jobs=-1)]: Done 56 tasks | elapsed: 4.4s
[Parallel(n_jobs=-1)]: Done 360 tasks | elapsed: 7.1s
[Parallel(n_jobs=-1)]: Done 485 out of 500 | elapsed: 8.6s remaining: 0.2s
[Parallel(n_jobs=-1)]: Done 500 out of 500 | elapsed: 8.8s finished
Wall time: 9.06 s
GridSearchCV(cv=5,
estimator=LogisticRegression(max_iter=3000, random_state=13,
solver='liblinear'),
n_jobs=-1,
param_grid={'C': array([1.00000000e-05, 1.32571137e-05, 1.75751062e-05, 2.32995181e-05,
3.08884360e-05, 4.09491506e-05, 5.42867544e-05, 7.19685673e-05,
9.54095476e-05, 1.26485522e-04, 1.67683294e-04, 2.22299648e-04,
2.94705170e-04, 3.90693994e-04, 5.17947468...
2.68269580e-02, 3.55648031e-02, 4.71486636e-02, 6.25055193e-02,
8.28642773e-02, 1.09854114e-01, 1.45634848e-01, 1.93069773e-01,
2.55954792e-01, 3.39322177e-01, 4.49843267e-01, 5.96362332e-01,
7.90604321e-01, 1.04811313e+00, 1.38949549e+00, 1.84206997e+00,
2.44205309e+00, 3.23745754e+00, 4.29193426e+00, 5.68986603e+00,
7.54312006e+00, 1.00000000e+01]),
'penalty': ['l1', 'l2']},
scoring='roc_auc', verbose=5)
Let us look at the results of the best model.
```python
print(roc_auc_score(y_train, grid_searcher.predict_proba(X_train)[:, 1]))
print(roc_auc_score(y_test, grid_searcher.predict_proba(X_test)[:, 1]))
```
0.9802989021184475
0.9604401789152522
The full results of the hyperparameter search:
```python
best_model = grid_searcher.best_estimator_
```
```python
pd.DataFrame(grid_searcher.cv_results_)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>mean_fit_time</th>
<th>std_fit_time</th>
<th>mean_score_time</th>
<th>std_score_time</th>
<th>param_C</th>
<th>param_penalty</th>
<th>params</th>
<th>split0_test_score</th>
<th>split1_test_score</th>
<th>split2_test_score</th>
<th>split3_test_score</th>
<th>split4_test_score</th>
<th>mean_test_score</th>
<th>std_test_score</th>
<th>rank_test_score</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.008593</td>
<td>0.001359</td>
<td>0.002798</td>
<td>0.000400</td>
<td>1e-05</td>
<td>l1</td>
<td>{'C': 1e-05, 'penalty': 'l1'}</td>
<td>0.747449</td>
<td>0.766457</td>
<td>0.788754</td>
<td>0.773601</td>
<td>0.730134</td>
<td>0.761279</td>
<td>0.020469</td>
<td>97</td>
</tr>
<tr>
<th>1</th>
<td>0.013790</td>
<td>0.001937</td>
<td>0.002399</td>
<td>0.000490</td>
<td>1e-05</td>
<td>l2</td>
<td>{'C': 1e-05, 'penalty': 'l2'}</td>
<td>0.853920</td>
<td>0.845400</td>
<td>0.847766</td>
<td>0.855969</td>
<td>0.819283</td>
<td>0.844467</td>
<td>0.013173</td>
<td>82</td>
</tr>
<tr>
<th>2</th>
<td>0.007796</td>
<td>0.001600</td>
<td>0.002998</td>
<td>0.000632</td>
<td>1.32571e-05</td>
<td>l1</td>
<td>{'C': 1.3257113655901082e-05, 'penalty': 'l1'}</td>
<td>0.747449</td>
<td>0.766457</td>
<td>0.788754</td>
<td>0.773601</td>
<td>0.730134</td>
<td>0.761279</td>
<td>0.020469</td>
<td>97</td>
</tr>
<tr>
<th>3</th>
<td>0.011392</td>
<td>0.002937</td>
<td>0.002400</td>
<td>0.000490</td>
<td>1.32571e-05</td>
<td>l2</td>
<td>{'C': 1.3257113655901082e-05, 'penalty': 'l2'}</td>
<td>0.859788</td>
<td>0.851979</td>
<td>0.854817</td>
<td>0.863584</td>
<td>0.825383</td>
<td>0.851110</td>
<td>0.013471</td>
<td>81</td>
</tr>
<tr>
<th>4</th>
<td>0.008796</td>
<td>0.000401</td>
<td>0.002797</td>
<td>0.000400</td>
<td>1.75751e-05</td>
<td>l1</td>
<td>{'C': 1.757510624854793e-05, 'penalty': 'l1'}</td>
<td>0.747449</td>
<td>0.766457</td>
<td>0.788754</td>
<td>0.773601</td>
<td>0.730134</td>
<td>0.761279</td>
<td>0.020469</td>
<td>97</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>95</th>
<td>0.053544</td>
<td>0.011785</td>
<td>0.001778</td>
<td>0.000392</td>
<td>5.68987</td>
<td>l2</td>
<td>{'C': 5.689866029018293, 'penalty': 'l2'}</td>
<td>0.975437</td>
<td>0.979017</td>
<td>0.967458</td>
<td>0.971656</td>
<td>0.982612</td>
<td>0.975236</td>
<td>0.005330</td>
<td>8</td>
</tr>
<tr>
<th>96</th>
<td>0.019588</td>
<td>0.001020</td>
<td>0.002202</td>
<td>0.000747</td>
<td>7.54312</td>
<td>l1</td>
<td>{'C': 7.543120063354607, 'penalty': 'l1'}</td>
<td>0.973566</td>
<td>0.975143</td>
<td>0.968432</td>
<td>0.972739</td>
<td>0.982194</td>
<td>0.974415</td>
<td>0.004480</td>
<td>24</td>
</tr>
<tr>
<th>97</th>
<td>0.068169</td>
<td>0.018043</td>
<td>0.002598</td>
<td>0.000799</td>
<td>7.54312</td>
<td>l2</td>
<td>{'C': 7.543120063354607, 'penalty': 'l2'}</td>
<td>0.975638</td>
<td>0.978359</td>
<td>0.967435</td>
<td>0.971788</td>
<td>0.982503</td>
<td>0.975145</td>
<td>0.005205</td>
<td>11</td>
</tr>
<tr>
<th>98</th>
<td>0.017791</td>
<td>0.003248</td>
<td>0.001999</td>
<td>0.000001</td>
<td>10</td>
<td>l1</td>
<td>{'C': 10.0, 'penalty': 'l1'}</td>
<td>0.973504</td>
<td>0.974733</td>
<td>0.968409</td>
<td>0.972708</td>
<td>0.982047</td>
<td>0.974280</td>
<td>0.004429</td>
<td>25</td>
</tr>
<tr>
<th>99</th>
<td>0.086053</td>
<td>0.025654</td>
<td>0.002996</td>
<td>0.001669</td>
<td>10</td>
<td>l2</td>
<td>{'C': 10.0, 'penalty': 'l2'}</td>
<td>0.975646</td>
<td>0.978143</td>
<td>0.967612</td>
<td>0.972050</td>
<td>0.982743</td>
<td>0.975239</td>
<td>0.005162</td>
<td>7</td>
</tr>
</tbody>
</table>
<p>100 rows × 15 columns</p>
</div>
The best hyperparameters:
```python
grid_searcher.best_params_
```
{'C': 1.0481131341546852, 'penalty': 'l1'}
The best cross-validation score of the model:
```python
grid_searcher.best_score_
```
0.9753796196072366
We can also extract the best model itself:
```python
lr = grid_searcher.best_estimator_
lr
```
LogisticRegression(C=1.0481131341546852, max_iter=3000, penalty='l1',
random_state=13, solver='liblinear')
We can obtain a cross-validation estimate of the model even without a hyperparameter search:
```python
from sklearn.model_selection import cross_val_score
cv_score = cross_val_score(lr, X_train, y_train, scoring='roc_auc', cv=5)
print(cv_score)
print(cv_score.mean())
```
[0.9750116 0.97936447 0.96785217 0.9722437 0.98242616]
0.9753796196072366
Instead of a grid search, we can sample hyperparameter values from a given distribution.
```python
from scipy.stats import uniform
from sklearn.model_selection import RandomizedSearchCV
lr = LogisticRegression(max_iter=3000, solver='liblinear', random_state=13)
distributions = dict(C=uniform(loc=0, scale=10),
penalty=['l1', 'l2'])
clf = RandomizedSearchCV(lr, distributions, n_iter=50, cv=5, scoring='roc_auc', random_state=13, n_jobs = -1)
```
```python
%%time
clf.fit(X_train, y_train)
```
Wall time: 6.93 s
RandomizedSearchCV(cv=5,
estimator=LogisticRegression(max_iter=3000, random_state=13,
solver='liblinear'),
n_iter=50, n_jobs=-1,
param_distributions={'C': <scipy.stats._distn_infrastructure.rv_frozen object at 0x0000023A10582CD0>,
'penalty': ['l1', 'l2']},
random_state=13, scoring='roc_auc')
```python
clf.best_estimator_
```
LogisticRegression(C=1.1152636790448922, max_iter=3000, penalty='l1',
random_state=13, solver='liblinear')
```python
clf.best_score_
```
0.975350239678367
```python
print(roc_auc_score(y_train, clf.predict_proba(X_train)[:, 1]))
print(roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```
0.9803387969692284
0.9603710615440821
For some models in `sklearn`, cross-validation can be applied directly:
```python
from sklearn.linear_model import LogisticRegressionCV
lr = LogisticRegressionCV(max_iter=3000, solver='lbfgs', cv=5, random_state=13)
lr.fit(X_train, y_train)
```
LogisticRegressionCV(cv=5, max_iter=3000, random_state=13)
```python
lr.C_
```
array([21.5443469])
```python
print(roc_auc_score(y_train, lr.predict_proba(X_train)[:, 1]))
print(roc_auc_score(y_test, lr.predict_proba(X_test)[:, 1]))
```
0.9805200247409928
0.9596946986976311
# SVM
Now let us consider a different approach to constructing the loss function,
based on maximizing the margin between the classes.
We will consider linear classifiers of the form
$$
a(x) = sign (\langle w, x \rangle + b), \qquad w \in R^d, b \in R.
$$
### The separable case
Assume there exist parameters $w_*$ and $b_*$
such that the corresponding classifier $a(x)$ makes no errors
on the training sample.
In this case the sample is said to be __linearly separable__.
Let a classifier $a(x) = sign (\langle w, x \rangle + b)$ be given.
Note that if we multiply both parameters $w$ and $b$
by the same positive constant,
the classifier does not change.
Let us use this freedom of choice and normalize the parameters so that
\begin{equation}
\label{eq:svmNormCond}
\min_{x \in X} | \langle w, x \rangle + b| = 1.
\end{equation}
One can show that the distance from an arbitrary point $x_0 \in R^d$ to the hyperplane
defined by this classifier equals
$$
\rho(x_0, a)
=
\frac{
|\langle w, x_0 \rangle + b|
}{
\|w\|
}.
$$
Then the distance from the hyperplane to the closest object of the training sample equals
$$
\min_{x \in X}
\frac{
|\langle w, x \rangle + b|
}{
\|w\|
}
=
\frac{1}{\|w\|} \min_{x \in X} |\langle w, x \rangle + b|
=
\frac{1}{\|w\|}.
$$
This quantity is also called the __margin__.
Thus, if the classifier separates the training sample without errors,
the width of its separating band equals $\frac{2}{\|w\|}$.
It is known that maximizing the width of the separating band
improves the generalization ability of the classifier.
Recall also that regularization, which penalizes a large weight norm, is likewise aimed at
improving generalization; and the larger the weight norm,
the smaller the width of the separating band.
So, we need to build a classifier that perfectly separates the training sample
while having the maximum margin.
Let us write down the corresponding optimization problem,
which defines the support vector machine for a linearly separable sample (hard margin support vector machine):
\begin{equation}
\label{eq:svmSep}
\left\{
\begin{aligned}
& \frac{1}{2} \|w\|^2 \to \min_{w, b} \\
& y_i \left(
\langle w, x_i \rangle + b
\right) \geq 1, \quad i = 1, \dots, \ell.
\end{aligned}
\right.
\end{equation}
### The non-separable case
Now consider the general case, when the sample
cannot be perfectly separated by a hyperplane.
This means that whatever $w$ and $b$ we take,
at least one of the constraints in the previous problem will be violated:
$$
\exists x_i \in X:\
y_i \left(
\langle w, x_i \rangle + b
\right) < 1.
$$
Let us make these constraints "soft" by introducing a penalty $\xi_i \geq 0$ for violating them:
$$
y_i \left(
\langle w, x_i \rangle + b
\right) \geq 1 - \xi_i, \quad i = 1, \dots, \ell.
$$
Note that if the margin of an object lies between zero and
one ($0 \leq y_i \left( \langle w, x_i \rangle + b \right) < 1$),
then the object is classified correctly but has a non-zero penalty $\xi > 0$.
Thus, we penalize objects for falling inside the separating band.
The quantity $\frac{1}{\|w\|}$ in this case is called the soft margin.
On the one hand we want to maximize the margin; on the other hand, we want to minimize
the penalty for imperfect separation of the sample, $\sum_{i = 1}^{\ell} \xi_i$.
These two goals contradict each other: as a rule, excessive fitting to the
sample leads to a small margin, and conversely, maximizing the margin
leads to a large training error.
As a compromise, we will minimize a weighted sum of these two quantities.
We arrive at the optimization problem
corresponding to the support vector machine for a linearly non-separable sample (soft margin support vector machine):
\begin{equation}
\label{eq:svmUnsep}
\left\{
\begin{aligned}
& \frac{1}{2} \|w\|^2 + C \sum_{i = 1}^{\ell} \xi_i \to \min_{w, b, \xi} \\
& y_i \left(
\langle w, x_i \rangle + b
\right) \geq 1 - \xi_i, \quad i = 1, \dots, \ell, \\
& \xi_i \geq 0, \quad i = 1, \dots, \ell.
\end{aligned}
\right.
\end{equation}
The larger the parameter $C$ here, the more strongly we fit to the training sample.
Let us study how the position of the separating hyperplane in the support vector machine depends on the value of the hyperparameter $C$.
We generate a two-dimensional synthetic dataset from two different normal distributions:
```python
class_size=500
mean0 = [7, 5]
cov0 = [[4, 0], [0, 1]] # diagonal covariance
mean1 = [0, 0]
cov1 = [[4, 0], [0, 2]]
data0 = np.random.multivariate_normal(mean0, cov0, class_size)
data1 = np.random.multivariate_normal(mean1, cov1, class_size)
data = np.vstack((data0, data1))
y = np.hstack((-np.ones(class_size), np.ones(class_size)))
plt.scatter(data0[:, 0], data0[:, 1], c='red', s=50)
plt.scatter(data1[:, 0], data1[:, 1], c='green', s=50)
plt.legend(['y = -1', 'y = 1'])
axes = plt.gca()
axes.set_xlim([-5,15])
axes.set_ylim([-5,10])
plt.show()
```
```python
from sklearn.svm import SVC
SVM_classifier = SVC(C=0.01, kernel='linear') # changing C here
SVM_classifier.fit(data, y)
```
SVC(C=0.01, kernel='linear')
```python
w_1 = SVM_classifier.coef_[0][0]
w_2 = SVM_classifier.coef_[0][1]
w_0 = SVM_classifier.intercept_[0]
plt.scatter(data0[:, 0], data0[:, 1], c='red', s=50)
plt.scatter(data1[:, 0], data1[:, 1], c='green', s=50)
plt.legend(['y = -1', 'y = 1'])
x_arr = np.linspace(-10, 15, 3000)
plt.plot(x_arr, -(w_0 + w_1 * x_arr) / w_2)
axes = plt.gca()
axes.set_xlim([-5,15])
axes.set_ylim([-5,10])
plt.show()
```
```python
plt.scatter(data0[:, 0], data0[:, 1], c='red', s=50, label='y = -1')
plt.scatter(data1[:, 0], data1[:, 1], c='green', s=50, label='y = +1')
#plt.legend(['y = -1', 'y = 1'])
x_arr = np.linspace(-10, 15, 3000)
colors = ['red', 'orange', 'green', 'blue', 'magenta']
for i, C in enumerate([0.0001, 0.01, 1, 100, 10000]):
SVM_classifier = SVC(C=C, kernel='linear')
SVM_classifier.fit(data, y)
w_1 = SVM_classifier.coef_[0][0]
w_2 = SVM_classifier.coef_[0][1]
w_0 = SVM_classifier.intercept_[0]
plt.plot(x_arr, -(w_0 + w_1 * x_arr) / w_2, color=colors[i], label='C='+str(C))
axes = plt.gca()
axes.set_xlim([-5,15])
axes.set_ylim([-5,10])
plt.legend(loc=0)
plt.show()
```
The hyperparameter $C$ controls what the classifier prioritizes: fitting the training sample or maximizing the width of the separating band. A sketch quantifying this trade-off follows the list below.
- For large values of $C$ the classifier fits the training data strongly, thereby narrowing the separating band.
- For small values of $C$ the classifier widens the separating band while allowing errors on some objects of the training sample.
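A small sketch making the trade-off explicit: the band width $2/\|w\|$ of the fitted linear SVM shrinks as $C$ grows (it reuses the synthetic `data`, `y` generated above).
```python
import numpy as np
from sklearn.svm import SVC

for C in (0.0001, 0.01, 1, 100):
    clf = SVC(C=C, kernel='linear').fit(data, y)
    width = 2.0 / np.linalg.norm(clf.coef_[0])  # width of the separating band
    print(C, width)
```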
```python
from scipy.stats import uniform
from sklearn.model_selection import RandomizedSearchCV
lr = SVC(C = C, kernel = 'linear')
# lr = SVC(C = C, kernel = 'rbf')
distributions = dict(C=uniform(loc=0, scale=10))
clf = RandomizedSearchCV(lr, distributions, n_iter=50, cv=5, scoring='roc_auc', random_state=13, n_jobs = -1, verbose = 5)
```
```python
clf.fit(X_train, y_train)
```
| dbbadc67d5df561a37e1a89133ab0a5a2607779b | 300,304 | ipynb | Jupyter Notebook | 04 ML/05 svm/sem05_logreg_svm.ipynb | ksetdekov/HSE_DS | 619d5b84f9d9e97b58ca1f12c5914ec65456c2c8 | ["MIT"] | 1 | 2020-09-26T18:48:11.000Z | 2020-09-26T18:48:11.000Z | 04 ML/05 svm/sem05_logreg_svm.ipynb | ksetdekov/HSE_DS | 619d5b84f9d9e97b58ca1f12c5914ec65456c2c8 | ["MIT"] | null | null | null | 04 ML/05 svm/sem05_logreg_svm.ipynb | ksetdekov/HSE_DS | 619d5b84f9d9e97b58ca1f12c5914ec65456c2c8 | ["MIT"] | null | null | null | 151.975709 | 102,896 | 0.858846 | true | 13,032 | Qwen/Qwen-72B | 1. YES 2. YES | 0.810479 | 0.771843 | 0.625563 | __label__rus_Cyrl | 0.078429 | 0.291723 |
### Exercises of Optimization
```python
# import Python libraries
import numpy as np
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import sympy as sym
from sympy.plotting import plot
import pandas as pd
from IPython.display import display
from IPython.core.display import Math
```
**1.) Find the extrema in the function $f(x)=x^3−7.5x^2+18x−10$ analytically and determine if they are minimum or maximum.**
```python
x = sym.symbols('x')
f_de_x = (x**3) - 7.5 * (x**2) + 18*x - 10
Fdiff = sym.expand(sym.diff(f_de_x, x))
roots = sym.solve(Fdiff, x)
display(Math(sym.latex('Roots:') + sym.latex(roots)))
```
$$Roots:\left [ 2.0, \quad 3.0\right ]$$
```python
# NOTE: np.array([1,0]) has integer dtype, so the exact value 3.5 at the second root
# is truncated to 3 in the printed output below; np.zeros(2) (float dtype) keeps the exact values.
f = np.array([1,0])
f[0] = (roots[0]**3) - 7.5 * (roots[0]**2) + 18*roots[0] - 10
f[1] = (roots[1]**3) - 7.5 * (roots[1]**2) + 18*roots[1] - 10
print("For the first root, f_de_x is", f[0])
print("For the second root, f_de_x is", f[1])
```
For the first root, f_de_x is 4
For the second root, f_de_x is 3
```python
print("So, the maximun of f_de_x is ", np.max(f))
print("and the minimun of f_de_x is ", np.min(f))
```
So, the maximun of f_de_x is 4
and the minimun of f_de_x is 3
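To decide minimum versus maximum more directly, the second-derivative test can be applied at each critical point (a sketch reusing the `sympy` symbols defined above): $f''(x)>0$ indicates a local minimum and $f''(x)<0$ a local maximum.
```python
# second-derivative test at the critical points found above
Fdiff2 = sym.diff(f_de_x, x, 2)   # f''(x) = 6x - 15
for r in roots:
    curvature = Fdiff2.subs(x, r)
    kind = "local minimum" if curvature > 0 else "local maximum"
    print(r, curvature, kind)     # x=2 -> maximum (f=4), x=3 -> minimum (f=3.5)
```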
**2.) Find the minimum in the $f(x)=x^3−7.5x^2+18x−10$ using the gradient descent algorithm.**
```python
# Adapted from https://en.wikipedia.org/wiki/Gradient_descent
# (the Wikipedia example uses f(x)=x^4-3x^3+2; here we minimize the cubic from exercise 1,
# whose local minimum is at x=3)
cur_x = 6 # The algorithm starts at x=6
gamma = 0.01 # step size multiplier
precision = 0.00001
step_size = 1 # initial step size
max_iters = 10000 # maximum number of iterations
iters = 0 # iteration counter
f = lambda x: (x**3) - 7.5 * (x**2) + 18*x - 10 # lambda function for f(x)
df = lambda x: 3*x**2 - 15*x + 18 # lambda function for the gradient of f(x)
while (step_size > precision) & (iters < max_iters):
    prev_x = cur_x
    cur_x -= gamma*df(prev_x)
    step_size = abs(cur_x - prev_x)
    iters += 1
print('True local minimum at {} with function value {}.'.format(3, f(3)))
print('Local minimum by gradient descent at {} with function value {}.'.format(cur_x, f(cur_x)))
```
True local minimum at 3 with function value 3.5.
Local minimum by gradient descent at 3.000323195755751 with function value 3.5000001567170003.
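For comparison, a minimal sketch using `scipy.optimize.minimize_scalar` on the same cubic, restricted to a bracket around the local minimum (the availability of SciPy is an assumption here):
```python
from scipy.optimize import minimize_scalar

f = lambda x: x**3 - 7.5*x**2 + 18*x - 10
res = minimize_scalar(f, bounds=(2.5, 6.0), method='bounded')
print(res.x, res.fun)  # approximately x = 3, f(x) = 3.5
```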
| d268d67c94bf1b442d4c31353fb536cacb75a458 | 4,415 | ipynb | Jupyter Notebook | courses/modsim2018/tasks/Task_ForLecture19.ipynb | raissabthibes/bmc | 840800fb94ea3bf188847d0771ca7197dfec68e3 | ["MIT"] | null | null | null | courses/modsim2018/tasks/Task_ForLecture19.ipynb | raissabthibes/bmc | 840800fb94ea3bf188847d0771ca7197dfec68e3 | ["MIT"] | null | null | null | courses/modsim2018/tasks/Task_ForLecture19.ipynb | raissabthibes/bmc | 840800fb94ea3bf188847d0771ca7197dfec68e3 | ["MIT"] | null | null | null | 25.818713 | 132 | 0.516648 | true | 784 | Qwen/Qwen-72B | 1. YES 2. YES | 0.969785 | 0.888759 | 0.861905 | __label__eng_Latn | 0.915101 | 0.840828 |
# Neural Networks
Initially, neural networks were inspired by the human brain. However, after some time the field stopped trying to emulate how the brain works and focused instead on finding the most appropriate neural network configurations for different tasks, including computer vision, natural language processing, and speech recognition.
A neural network can be described as a mathematical model for information processing.
- Information processing takes place in the neurons.
- Neurons are connected and exchange information and signals with each other through connection links.
- The connection links between neurons can be strong or weak, and this determines how the information is processed.
- Each neuron has an internal state that is determined by all of its connections with other neurons.
- Each neuron has an activation function that is computed on its state and determines the neuron's output signal.
Two main characteristics can be identified for a neural network.
- Network architecture: this describes the set of connections, that is, "feedforward", "recurrent", "multi/single layered", and so on.
- Learning: this describes what is commonly called training. The most common form is gradient descent with backpropagation.
A neuron is a mathematical function that takes one or more input values and computes an output as a numerical value.
A neuron is defined as
\begin{equation}
y=f(\sum_{i} x_i w_i +b)
\end{equation}
1. First the value $\sum x_i w_i$ is computed from the inputs $x_i$ and the weights $w_i$. Here $x_i$ are the numerical input values or the outputs of other neurons.
2. The weights $w_i$ are numerical values that represent the strength of the inputs or, alternatively, the strength of the connections between neurons.
3. The weight $b$ is a special value whose input is 1.
Then the weighted sum of the inputs is passed to the activation function $f$ (also known as the transfer function). There are many types of activation functions, but in general they are required to be nonlinear.
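A minimal sketch of a single neuron's forward pass with made-up inputs, weights, and a sigmoid activation (values are illustrative only):
```python
import numpy as np

def neuron(x, w, b, f=lambda a: 1.0 / (1.0 + np.exp(-a))):
    # weighted sum of the inputs plus bias, passed through the activation f
    return f(np.dot(x, w) + b)

x = np.array([0.5, -1.0, 2.0])   # hypothetical inputs
w = np.array([0.8, 0.2, -0.5])   # hypothetical weights
print(neuron(x, w, b=0.1))
```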
# Types of activation functions
- f(a)=a. This activation function is known as the identity.
- f(a)=0 or 1. This is the threshold (step) activation.
- f(a)=$\frac{1}{1+exp(-a)}$. This is the most commonly used activation function; it can be interpreted as the probability of a neuron firing.
- f(a)=$\frac{2}{1+exp(-a)}-1$: This activation function is called the bipolar sigmoid and is simply a logistic sigmoid rescaled and translated to the range {-1,1}.
- f(a)=$\frac{exp(a)-exp(-a)}{exp(a)+exp(-a)}$. This activation function is called the hyperbolic tangent.
- f(a)=$a$ if $a\geq 0$, or $0$ if $a<0$. This activation function (ReLU) is probably the closest to its biological counterpart. It is a mixture of the identity and a threshold function.
```python
# Exercise: implement the previous activation functions
import numpy as np
import matplotlib.pyplot as plt
a=np.linspace(-5,5,100)
def ida(a):
return a
def taf(a):
return np.sign(a)
def sigmoid(a):
return 1/(1+np.exp(-a))
def bip(a):
return 2/(1+np.exp(-a))-1
def tanh(a):
return np.tanh(a)
def relu(a):
return a*(np.sign(a)*0.5+0.5)
plt.plot(a,relu(a))
plt.show()
```
```python
import matplotlib.pyplot as plt
import numpy
weight_value=1000
# Modify this to change where the step function starts
bias_value_1=5000
# This determines where the step ends
bias_value_2=-5000
plt.axis([-10,10,-1,10])
print("the step function starts at {0} and ends at {1}".format(-bias_value_1/weight_value,-bias_value_2/weight_value))
inputs=numpy.arange(-10,10,0.01)
outputs=list()
for x in inputs:
y1=1.0/(1.0+numpy.exp(-weight_value*x-bias_value_1))
y2=1.0/(1.0+numpy.exp(-weight_value*x-bias_value_2))
w=7
y=y1*w-y2*w
outputs.append(y)
plt.plot(inputs,outputs,lw=2,color="black")
plt.show()
```
```python
from matplotlib.colors import ListedColormap
import numpy as np
def tanh(x): return (1.0-np.exp(-2*x))/(1.0+np.exp(-2*x))
def tanh_derivative(x):
return (1+tanh(x))*(1-tanh(x))
class NeuralNetwork:
def __init__(self, net_arch):
self.activation_func = tanh
self.activation_derivative = tanh_derivative
self.layers = len(net_arch)
self.steps_per_epoch = 1000
self.net_arch = net_arch
self.weights=[]
for layer in range(len(net_arch) - 1):
w = 2 * np.random.rand(net_arch[layer] + 1,net_arch[layer + 1]) - 1
self.weights.append(w)
def fit(self, data, labels, learning_rate=0.1, epochs=10):
ones = np.ones((1, data.shape[0]))
Z = np.concatenate((ones.T, data), axis=1)
training = epochs * self.steps_per_epoch
for k in range(training):
if k % self.steps_per_epoch == 0:
print("epochs: {}".format(k/self.steps_per_epoch))
for s in data:
print(s,nn.predict(s))
sample=np.random.randint(data.shape[0])
y=[Z[sample]]
for i in range(len(self.weights) - 1):
activation = np.dot(y[i], self.weights[i])
activation_f = self.activation_func(activation)
activation_f = np.concatenate((np.ones(1), np.array(activation_f)))
y.append(activation_f)
activation = np.dot(y[-1], self.weights[-1])
activation_f = self.activation_func(activation)
y.append(activation_f)
error = labels[sample] - y[-1]
delta_vec = [error * self.activation_derivative(y[-1])]
for i in range(self.layers - 2, 0, -1):
error = delta_vec[-1].dot(self.weights[i][1:].T)
error = error * self.activation_derivative(y[i][1:])
delta_vec.append(error)
delta_vec.reverse()
for i in range(len(self.weights)):
layer = y[i].reshape(1, nn.net_arch[i] + 1)
delta = delta_vec[i].reshape(1,nn.net_arch[i + 1] )
self.weights[i] += learning_rate * layer.T.dot(delta)
def predict(self, x):
val=np.concatenate((np.ones(1).T,np.array(x)))
for i in range(0,len(self.weights)):
val = self.activation_func(np.dot(val, self.weights[i]))
val = np.concatenate((np.ones(1).T, np.array(val)))
return val[1]
def plot_decision_regions(self,X,y,points=200):
markers=('o','^')
colors=('red','blue')
cmap=ListedColormap(colors)
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
resolution=max(x1_max-x1_min,x2_max-x2_min)/float(points)
xx1, xx2 = np.meshgrid(np.arange(x1_min,x1_max,resolution),np.arange(x2_min, x2_max,resolution))
input=np.array([xx1.ravel(),xx2.ravel()]).T
Z=np.empty(0)
for i in range(input.shape[0]):
val=nn.predict(np.array(input[i]))
if val < 0.5:
val = 0
if val >= 0.5:
val = 1
Z = np.append(Z, val)
Z = Z.reshape(xx1.shape)
plt.pcolormesh(xx1, xx2, Z, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
classes = ["False", "True"]
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0],
y=X[y == cl, 1],
alpha=1.0,
c=colors[idx],
edgecolors='black',
marker=markers[idx],
s=80,
label=classes[idx])
plt.xlabel('x-axis')
plt.ylabel('y-axis')
plt.legend(loc='upper left')
plt.show()
if __name__=='__main__':
np.random.seed(0)
nn = NeuralNetwork([2, 2, 1])
X = np.array([[0, 0],
[0, 1],
[1, 0],
[1, 1]])
y = np.array([0, 0, 1, 1])
nn.fit(X, y, epochs=10)
print("Final prediction")
for s in X:
print(s, nn.predict(s))
nn.plot_decision_regions(X, y)
```
# Feature learning
To illustrate how deep learning works, let us consider the task of recognizing a simple geometric figure, for example a cube, as seen in the following diagram. The cube is composed of edges (or lines) that intersect at vertices. Say that every possible point in three-dimensional space is associated with a neuron (forget for a moment that this would require an infinite number of neurons). All the points/neurons are in the first (input) layer of a multi-layer feed-forward network. An input point/neuron is active if the corresponding point lies on a line. The points/neurons that lie on a common line (edge) have strong positive connections to a single common edge/neuron in the next layer. Conversely, they have negative connections to all other neurons in the next layer. The only exception is the neurons that lie at the vertices. Each of those neurons lies simultaneously on three edges and is connected to its three corresponding neurons in the subsequent layer.
Now we have two hidden layers, with different levels of abstraction: the first for points and the second for edges. But this is not enough to encode a whole cube in the network. Let us try another layer for vertices. Here, every set of three active edges/neurons of the second layer that forms a vertex has a significant positive connection to a single common vertex/neuron of the third layer. Since an edge of the cube forms two vertices, each edge/neuron will have positive connections to two vertices/neurons and negative connections to all the others. Finally, we introduce the last hidden (cube) layer. The eight vertices/neurons that form a cube will have positive connections to a single cube/neuron of the cube layer:
The cube representation example is oversimplified, but we can draw several conclusions from it. One of them is that deep neural networks lend themselves well to hierarchically organized data. For example, an image consists of pixels, which form lines, edges, regions, and so on. This is also true for speech, where the building blocks are called phonemes, as well as for text, where we have characters, words, and sentences. In the previous example we deliberately dedicated layers to specific cube entities, but in practice we would not do that. Instead, a deep network "discovers" features automatically during training. These features may not be immediately obvious and, in general, would not be interpretable by humans. Also, we would not know the level of the features encoded in the different layers of the network. Our example is closer to classic machine learning algorithms, where the user has to rely on their own experience to select what they think are the best features. This process is called feature engineering, and it can be laborious and time-consuming. Letting a network discover features automatically is not only easier, but those features are highly abstract, which makes them less sensitive to noise. For example, human vision can recognize objects of different shapes and sizes, under different lighting conditions, and even when the view is partially obscured. We can recognize people with different haircuts and facial features, and even when they wear a hat or a scarf covering their mouth. Similarly, the abstract features the network learns will help it recognize faces better, even under more difficult conditions.
# Deep Learning Algorithms
We could define deep learning as a class of machine learning techniques in which information is processed in hierarchical layers to understand representations and features of the data at increasing levels of complexity. In practice, all deep learning algorithms are neural networks that share some common basic properties. They all consist of interconnected neurons organized in layers. Where they differ is in the network architecture (the way the neurons are organized in the network) and sometimes in the way they are trained. With that in mind, let us look at the main classes of neural networks. The following list is not exhaustive, but it represents the vast majority of algorithms in use today:
- Multilayer perceptrons (MLP): a neural network with feed-forward propagation, fully connected layers, and at least one hidden layer.
- Convolutional neural networks (CNN): a CNN is a neural network with several types of special layers. For example, convolutional layers apply a filter to the input image (or sound) by sliding that filter across the whole incoming signal, producing an n-dimensional activation map. There is some evidence that neurons in CNNs are organized similarly to how biological cells are organized in the visual cortex of the brain. We have mentioned CNNs several times so far, and that is no coincidence: today they outperform all other ML algorithms on a large number of computer vision and NLP tasks.
- Recurrent networks: this type of network has an internal state (or memory) based on all or part of the input data already fed to the network. The output of a recurrent network is a combination of its internal state (a memory of the inputs) and the latest input sample. At the same time, the internal state changes to incorporate the newly fed data. Because of these properties, recurrent networks are good candidates for tasks that work on sequential data, such as text or time-series data.
- Autoencoders: a class of unsupervised learning algorithms in which the output shape is the same as the input, which allows the network to better learn basic representations.
# Softmax and cross-entropy
The softmax function is a generalization of the concept of logistic regression to multiple classes. Consider the following formula:
\begin{equation}
F(x_i)=\frac{e^{x_i}}{\sum_{j=1}^{n}e^{x_j}}
\end{equation}
Here, i, j = 0, 1, 2, ..., n and $x_i$ represents each of n arbitrary real values corresponding to n mutually exclusive classes. The softmax "squashes" the input values into the interval (0, 1), similarly to the logistic function. But it has the additional property that the sum of all the squashed outputs adds up to 1. We can interpret the softmax outputs as a normalized probability distribution over the classes. It then makes sense to use a loss function that compares the difference between the estimated class probabilities and the true class distribution (this difference is known as cross-entropy). As mentioned in step 5 of this section, the true distribution is usually a one-hot encoded vector, where the true class has probability 1 and all the others have probability 0. The loss function that does this is called the cross-entropy loss:
\begin{equation}
H(p,q)=-\sum_{i=1}^{n}{p_i(x)\log(q_i(x))}
\end{equation}
Here, $q_i(x)$ is the estimated probability that the output belongs to class i (out of n total classes) and $p_i(x)$ is the true probability. When we use one-hot encoded target values for $p_i(x)$, only the target class has a non-zero value (1) and all the others are zeros. In this case the cross-entropy loss only captures the error on the target class and discards all other errors. For the sake of simplicity, we assume the formula
is applied to a single training sample.
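A short numerical sketch of the softmax and the cross-entropy loss against a one-hot target (the logits are made up for illustration):
```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))   # subtract the max for numerical stability
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])   # hypothetical class scores
target = np.array([1.0, 0.0, 0.0])   # one-hot encoded true class

q = softmax(logits)
cross_entropy = -np.sum(target * np.log(q))
print(q, cross_entropy)
```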
```python
import warnings
warnings.filterwarnings("ignore")
```
```python
import tensorflow
with tensorflow.device("/gpu:1"):
pass
# model definition here
#Here's an example:
#"/cpu:0": the main CPU of your machine
#"/gpu:0": the first GPU of your machine, if one exists
#"/gpu:1": the second GPU of your machine, if a second exists
#"/gpu:2": the third GPU of your machine, if a third exists, and so on
```
```python
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.utils import np_utils
(X_train,Y_train),(X_test,Y_test)=mnist.load_data()
plt.imshow(X_test[0])
X_train=X_train.reshape(60000,784)
X_test=X_test.reshape(10000,784)
import matplotlib.pyplot as plt
classes=10
Y_train=np_utils.to_categorical(Y_train,classes)
Y_test=np_utils.to_categorical(Y_test,classes)
print(Y_test[0])
input_size=784
batch_size=100
hidden_neurons=100
epochs=10
model=Sequential([Dense(hidden_neurons,input_dim=input_size), Activation('sigmoid'),Dense(classes),Activation('softmax')])
model.compile(loss='categorical_crossentropy',metrics=['accuracy'],optimizer='sgd')
model.fit(X_train,Y_train,batch_size=batch_size,epochs=epochs,verbose=1)
score=model.evaluate(X_test,Y_test,verbose=1)
print('Test accuracy:',score)
weights=model.layers[0].get_weights()
```
```python
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy
fig = plt.figure()
w = weights[0].T
for neuron in range(hidden_neurons):
ax = fig.add_subplot(10, 10, neuron + 1)
ax.axis("off")
ax.imshow(numpy.reshape(w[neuron], (28, 28)), cmap=cm.Greys_r)
plt.show()
```
```python
from keras.datasets import cifar10
from keras.layers.core import Dense, Activation
from keras.models import Sequential
from keras.utils import np_utils
(X_train, Y_train), (X_test, Y_test) = cifar10.load_data()
X_train = X_train.reshape(50000, 3072)
X_test = X_test.reshape(10000, 3072)
classes = 10
Y_train = np_utils.to_categorical(Y_train, classes)
Y_test = np_utils.to_categorical(Y_test, classes)
input_size = 3072
batch_size = 100
epochs = 100
model = Sequential([Dense(1024, input_dim=input_size),Activation('relu'),Dense(512),Activation('relu'),Dense(512),Activation('sigmoid'),Dense(classes),Activation('softmax')])
model.compile(loss='categorical_crossentropy',metrics=['accuracy'],optimizer='sgd')
model.fit(X_train, Y_train, batch_size=batch_size, epochs=epochs,validation_data=(X_test, Y_test), verbose=1)
```
Train on 50000 samples, validate on 10000 samples
Epoch 1/100
50000/50000 [==============================] - 5s 100us/step - loss: 2.1783 - acc: 0.1827 - val_loss: 2.0974 - val_acc: 0.2161
Epoch 2/100
50000/50000 [==============================] - 4s 76us/step - loss: 2.0231 - acc: 0.2486 - val_loss: 1.9859 - val_acc: 0.2604
Epoch 3/100
50000/50000 [==============================] - 4s 77us/step - loss: 1.9524 - acc: 0.2854 - val_loss: 1.9488 - val_acc: 0.2747
Epoch 4/100
50000/50000 [==============================] - 4s 77us/step - loss: 1.9187 - acc: 0.3005 - val_loss: 1.8829 - val_acc: 0.3214
Epoch 5/100
50000/50000 [==============================] - 4s 78us/step - loss: 1.8811 - acc: 0.3198 - val_loss: 1.8578 - val_acc: 0.3231
Epoch 6/100
50000/50000 [==============================] - 4s 76us/step - loss: 1.8533 - acc: 0.3292 - val_loss: 1.8288 - val_acc: 0.3348
Epoch 7/100
50000/50000 [==============================] - 4s 79us/step - loss: 1.8185 - acc: 0.3415 - val_loss: 1.8235 - val_acc: 0.3455
Epoch 8/100
50000/50000 [==============================] - 4s 77us/step - loss: 1.7925 - acc: 0.3528 - val_loss: 1.7932 - val_acc: 0.3574
Epoch 9/100
50000/50000 [==============================] - 4s 79us/step - loss: 1.7643 - acc: 0.3618 - val_loss: 1.7683 - val_acc: 0.3660
Epoch 10/100
50000/50000 [==============================] - 4s 79us/step - loss: 1.7477 - acc: 0.3680 - val_loss: 1.7113 - val_acc: 0.3798
Epoch 11/100
50000/50000 [==============================] - 4s 83us/step - loss: 1.7303 - acc: 0.3756 - val_loss: 1.7235 - val_acc: 0.3854
Epoch 12/100
50000/50000 [==============================] - 4s 79us/step - loss: 1.7080 - acc: 0.3866 - val_loss: 1.6981 - val_acc: 0.3877
Epoch 13/100
50000/50000 [==============================] - 4s 78us/step - loss: 1.6914 - acc: 0.3921 - val_loss: 1.6913 - val_acc: 0.3967
Epoch 14/100
50000/50000 [==============================] - 4s 78us/step - loss: 1.6767 - acc: 0.3951 - val_loss: 1.6676 - val_acc: 0.4020
Epoch 15/100
50000/50000 [==============================] - 4s 82us/step - loss: 1.6661 - acc: 0.4009 - val_loss: 1.6577 - val_acc: 0.4085
Epoch 16/100
50000/50000 [==============================] - 6s 118us/step - loss: 1.6457 - acc: 0.4126 - val_loss: 1.7004 - val_acc: 0.3944
Epoch 17/100
50000/50000 [==============================] - 4s 82us/step - loss: 1.6359 - acc: 0.4142 - val_loss: 1.6234 - val_acc: 0.4179
Epoch 18/100
50000/50000 [==============================] - 5s 90us/step - loss: 1.6218 - acc: 0.4163 - val_loss: 1.6255 - val_acc: 0.4162
Epoch 19/100
50000/50000 [==============================] - 4s 80us/step - loss: 1.6055 - acc: 0.4247 - val_loss: 1.6418 - val_acc: 0.4101
Epoch 20/100
50000/50000 [==============================] - 4s 79us/step - loss: 1.5900 - acc: 0.4304 - val_loss: 1.5921 - val_acc: 0.4355
Epoch 21/100
50000/50000 [==============================] - 4s 79us/step - loss: 1.5807 - acc: 0.4339 - val_loss: 1.5878 - val_acc: 0.4314
Epoch 22/100
50000/50000 [==============================] - 4s 79us/step - loss: 1.5672 - acc: 0.4397 - val_loss: 1.5928 - val_acc: 0.4314
Epoch 23/100
50000/50000 [==============================] - 4s 79us/step - loss: 1.5570 - acc: 0.4408 - val_loss: 1.5611 - val_acc: 0.4385
Epoch 24/100
50000/50000 [==============================] - 4s 84us/step - loss: 1.5523 - acc: 0.4444 - val_loss: 1.5583 - val_acc: 0.4416
Epoch 25/100
50000/50000 [==============================] - 4s 80us/step - loss: 1.5449 - acc: 0.4457 - val_loss: 1.5478 - val_acc: 0.4471
Epoch 26/100
50000/50000 [==============================] - 4s 80us/step - loss: 1.5267 - acc: 0.4555 - val_loss: 1.5635 - val_acc: 0.4446
Epoch 27/100
50000/50000 [==============================] - 4s 78us/step - loss: 1.5233 - acc: 0.4553 - val_loss: 1.5280 - val_acc: 0.4497
Epoch 28/100
50000/50000 [==============================] - 4s 88us/step - loss: 1.5114 - acc: 0.4575 - val_loss: 1.5298 - val_acc: 0.4555
Epoch 29/100
50000/50000 [==============================] - 4s 81us/step - loss: 1.5086 - acc: 0.4592 - val_loss: 1.5356 - val_acc: 0.4525
Epoch 30/100
50000/50000 [==============================] - 4s 82us/step - loss: 1.4947 - acc: 0.4649 - val_loss: 1.5176 - val_acc: 0.4576
Epoch 31/100
50000/50000 [==============================] - 4s 84us/step - loss: 1.4896 - acc: 0.4693 - val_loss: 1.4967 - val_acc: 0.4678
Epoch 32/100
50000/50000 [==============================] - 4s 84us/step - loss: 1.4837 - acc: 0.4698 - val_loss: 1.5119 - val_acc: 0.4597
Epoch 33/100
50000/50000 [==============================] - 5s 90us/step - loss: 1.4720 - acc: 0.4724 - val_loss: 1.4917 - val_acc: 0.4630
Epoch 34/100
50000/50000 [==============================] - 4s 84us/step - loss: 1.4652 - acc: 0.4748 - val_loss: 1.4913 - val_acc: 0.4689
Epoch 35/100
50000/50000 [==============================] - 4s 88us/step - loss: 1.4564 - acc: 0.4777 - val_loss: 1.4959 - val_acc: 0.4650
Epoch 36/100
50000/50000 [==============================] - 4s 83us/step - loss: 1.4468 - acc: 0.4819 - val_loss: 1.5042 - val_acc: 0.4574
Epoch 37/100
50000/50000 [==============================] - 4s 81us/step - loss: 1.4442 - acc: 0.4832 - val_loss: 1.4873 - val_acc: 0.4634
Epoch 38/100
50000/50000 [==============================] - 4s 85us/step - loss: 1.4353 - acc: 0.4842 - val_loss: 1.5064 - val_acc: 0.4617
Epoch 39/100
50000/50000 [==============================] - 4s 81us/step - loss: 1.4346 - acc: 0.4874 - val_loss: 1.4934 - val_acc: 0.4643
Epoch 40/100
50000/50000 [==============================] - 4s 84us/step - loss: 1.4173 - acc: 0.4946 - val_loss: 1.4545 - val_acc: 0.4791
Epoch 41/100
50000/50000 [==============================] - 4s 83us/step - loss: 1.4078 - acc: 0.4989 - val_loss: 1.4925 - val_acc: 0.4653
Epoch 42/100
50000/50000 [==============================] - 4s 82us/step - loss: 1.4132 - acc: 0.4958 - val_loss: 1.4676 - val_acc: 0.4753
Epoch 43/100
50000/50000 [==============================] - 5s 95us/step - loss: 1.4011 - acc: 0.4976 - val_loss: 1.4602 - val_acc: 0.4816
Epoch 44/100
50000/50000 [==============================] - 5s 100us/step - loss: 1.3979 - acc: 0.4978 - val_loss: 1.4389 - val_acc: 0.4843
Epoch 45/100
50000/50000 [==============================] - 5s 108us/step - loss: 1.3876 - acc: 0.5055 - val_loss: 1.4901 - val_acc: 0.4696
Epoch 46/100
50000/50000 [==============================] - 5s 96us/step - loss: 1.3902 - acc: 0.5007 - val_loss: 1.5024 - val_acc: 0.4686
Epoch 47/100
50000/50000 [==============================] - 5s 96us/step - loss: 1.3798 - acc: 0.5080 - val_loss: 1.4699 - val_acc: 0.4767
Epoch 48/100
50000/50000 [==============================] - 5s 97us/step - loss: 1.3802 - acc: 0.5057 - val_loss: 1.4573 - val_acc: 0.4829
Epoch 49/100
50000/50000 [==============================] - 4s 88us/step - loss: 1.3720 - acc: 0.5073 - val_loss: 1.4239 - val_acc: 0.4882
Epoch 50/100
50000/50000 [==============================] - 4s 86us/step - loss: 1.3632 - acc: 0.5137 - val_loss: 1.4383 - val_acc: 0.4867
Epoch 51/100
50000/50000 [==============================] - 4s 88us/step - loss: 1.3547 - acc: 0.5126 - val_loss: 1.4643 - val_acc: 0.4828
Epoch 52/100
50000/50000 [==============================] - 4s 90us/step - loss: 1.3466 - acc: 0.5180 - val_loss: 1.4495 - val_acc: 0.4878
Epoch 53/100
50000/50000 [==============================] - 5s 99us/step - loss: 1.3442 - acc: 0.5193 - val_loss: 1.4264 - val_acc: 0.4846
Epoch 54/100
50000/50000 [==============================] - 5s 96us/step - loss: 1.3385 - acc: 0.5211 - val_loss: 1.4610 - val_acc: 0.4885
Epoch 55/100
50000/50000 [==============================] - 5s 94us/step - loss: 1.3282 - acc: 0.5234 - val_loss: 1.4150 - val_acc: 0.4972
Epoch 56/100
50000/50000 [==============================] - 5s 97us/step - loss: 1.3255 - acc: 0.5248 - val_loss: 1.4380 - val_acc: 0.4874
Epoch 57/100
50000/50000 [==============================] - 5s 105us/step - loss: 1.3170 - acc: 0.5295 - val_loss: 1.3911 - val_acc: 0.5089
Epoch 58/100
50000/50000 [==============================] - 5s 96us/step - loss: 1.3143 - acc: 0.5304 - val_loss: 1.4412 - val_acc: 0.4866
Epoch 59/100
50000/50000 [==============================] - 4s 85us/step - loss: 1.3120 - acc: 0.5307 - val_loss: 1.4223 - val_acc: 0.4957
Epoch 60/100
50000/50000 [==============================] - 4s 84us/step - loss: 1.3089 - acc: 0.5336 - val_loss: 1.4069 - val_acc: 0.5013
Epoch 61/100
50000/50000 [==============================] - 4s 83us/step - loss: 1.2988 - acc: 0.5352 - val_loss: 1.4492 - val_acc: 0.4865
Epoch 62/100
50000/50000 [==============================] - 4s 84us/step - loss: 1.2934 - acc: 0.5361 - val_loss: 1.4143 - val_acc: 0.4940
Epoch 63/100
50000/50000 [==============================] - 4s 83us/step - loss: 1.2871 - acc: 0.5398 - val_loss: 1.3954 - val_acc: 0.5039
Epoch 64/100
50000/50000 [==============================] - 4s 82us/step - loss: 1.2796 - acc: 0.5413 - val_loss: 1.3905 - val_acc: 0.5097
Epoch 65/100
50000/50000 [==============================] - 4s 84us/step - loss: 1.2686 - acc: 0.5461 - val_loss: 1.4012 - val_acc: 0.5004
Epoch 66/100
50000/50000 [==============================] - 4s 82us/step - loss: 1.2606 - acc: 0.5502 - val_loss: 1.3996 - val_acc: 0.5117
Epoch 67/100
50000/50000 [==============================] - 4s 84us/step - loss: 1.2595 - acc: 0.5502 - val_loss: 1.4035 - val_acc: 0.5005
Epoch 68/100
50000/50000 [==============================] - 4s 83us/step - loss: 1.2558 - acc: 0.5513 - val_loss: 1.3989 - val_acc: 0.5105
Epoch 69/100
50000/50000 [==============================] - 4s 82us/step - loss: 1.2517 - acc: 0.5542 - val_loss: 1.4048 - val_acc: 0.5075
Epoch 70/100
50000/50000 [==============================] - 4s 83us/step - loss: 1.2454 - acc: 0.5569 - val_loss: 1.3911 - val_acc: 0.5111
Epoch 71/100
50000/50000 [==============================] - 4s 82us/step - loss: 1.2354 - acc: 0.5582 - val_loss: 1.3855 - val_acc: 0.5111
Epoch 72/100
50000/50000 [==============================] - 5s 94us/step - loss: 1.2333 - acc: 0.5568 - val_loss: 1.4110 - val_acc: 0.5014
Epoch 73/100
50000/50000 [==============================] - 5s 95us/step - loss: 1.2267 - acc: 0.5635 - val_loss: 1.4218 - val_acc: 0.5005
Epoch 74/100
50000/50000 [==============================] - 4s 87us/step - loss: 1.2239 - acc: 0.5626 - val_loss: 1.3804 - val_acc: 0.5072
Epoch 75/100
50000/50000 [==============================] - 5s 96us/step - loss: 1.2105 - acc: 0.5689 - val_loss: 1.3710 - val_acc: 0.5146
Epoch 76/100
50000/50000 [==============================] - 5s 91us/step - loss: 1.2092 - acc: 0.5706 - val_loss: 1.3825 - val_acc: 0.5142
Epoch 77/100
50000/50000 [==============================] - 6s 113us/step - loss: 1.2020 - acc: 0.5693 - val_loss: 1.4336 - val_acc: 0.4964
Epoch 78/100
50000/50000 [==============================] - 4s 88us/step - loss: 1.1982 - acc: 0.5705 - val_loss: 1.3763 - val_acc: 0.5213
Epoch 79/100
50000/50000 [==============================] - 4s 85us/step - loss: 1.1940 - acc: 0.5742 - val_loss: 1.3887 - val_acc: 0.5137
Epoch 80/100
50000/50000 [==============================] - 6s 122us/step - loss: 1.1866 - acc: 0.5778 - val_loss: 1.3790 - val_acc: 0.5155
Epoch 81/100
50000/50000 [==============================] - 4s 84us/step - loss: 1.1849 - acc: 0.5749 - val_loss: 1.4099 - val_acc: 0.5047
Epoch 82/100
50000/50000 [==============================] - 4s 86us/step - loss: 1.1762 - acc: 0.5804 - val_loss: 1.3757 - val_acc: 0.5191
Epoch 83/100
50000/50000 [==============================] - 4s 88us/step - loss: 1.1732 - acc: 0.5806 - val_loss: 1.3690 - val_acc: 0.5169
Epoch 84/100
50000/50000 [==============================] - 5s 93us/step - loss: 1.1674 - acc: 0.5845 - val_loss: 1.4169 - val_acc: 0.5098
Epoch 85/100
50000/50000 [==============================] - 5s 95us/step - loss: 1.1631 - acc: 0.5835 - val_loss: 1.3976 - val_acc: 0.5095
Epoch 86/100
50000/50000 [==============================] - 5s 98us/step - loss: 1.1570 - acc: 0.5871 - val_loss: 1.3729 - val_acc: 0.5203
Epoch 87/100
50000/50000 [==============================] - 5s 94us/step - loss: 1.1518 - acc: 0.5909 - val_loss: 1.3584 - val_acc: 0.5223
Epoch 88/100
50000/50000 [==============================] - 5s 93us/step - loss: 1.1447 - acc: 0.5916 - val_loss: 1.3580 - val_acc: 0.5280
Epoch 89/100
50000/50000 [==============================] - 5s 93us/step - loss: 1.1411 - acc: 0.5917 - val_loss: 1.4336 - val_acc: 0.5034
Epoch 90/100
50000/50000 [==============================] - 6s 123us/step - loss: 1.1349 - acc: 0.5958 - val_loss: 1.3716 - val_acc: 0.5118
Epoch 91/100
50000/50000 [==============================] - 4s 83us/step - loss: 1.1276 - acc: 0.5975 - val_loss: 1.3844 - val_acc: 0.5163
Epoch 92/100
50000/50000 [==============================] - 4s 85us/step - loss: 1.1183 - acc: 0.6021 - val_loss: 1.3758 - val_acc: 0.5182
Epoch 93/100
50000/50000 [==============================] - 4s 82us/step - loss: 1.1191 - acc: 0.5991 - val_loss: 1.3949 - val_acc: 0.5174
Epoch 94/100
50000/50000 [==============================] - 4s 86us/step - loss: 1.1087 - acc: 0.6046 - val_loss: 1.3686 - val_acc: 0.5222
Epoch 95/100
50000/50000 [==============================] - 5s 110us/step - loss: 1.0997 - acc: 0.6100 - val_loss: 1.3793 - val_acc: 0.5196
Epoch 96/100
50000/50000 [==============================] - 4s 82us/step - loss: 1.0935 - acc: 0.6097 - val_loss: 1.3982 - val_acc: 0.5105
Epoch 97/100
50000/50000 [==============================] - 5s 102us/step - loss: 1.0914 - acc: 0.6116 - val_loss: 1.3512 - val_acc: 0.5242
Epoch 98/100
50000/50000 [==============================] - 6s 127us/step - loss: 1.0754 - acc: 0.6188 - val_loss: 1.3880 - val_acc: 0.5252
Epoch 99/100
50000/50000 [==============================] - 5s 92us/step - loss: 1.0730 - acc: 0.6177 - val_loss: 1.3652 - val_acc: 0.5225
Epoch 100/100
50000/50000 [==============================] - 4s 83us/step - loss: 1.0640 - acc: 0.6201 - val_loss: 1.3744 - val_acc: 0.5254
<keras.callbacks.History at 0x27059a17e88>
```python
# Predicted vs. true test labels, then a row-normalized confusion matrix
# (each row: how the samples of one true class were classified, in percent).
print(model.predict_classes(X_test))
print(Y_test.reshape(-1))
cm = confusion_matrix(Y_test.reshape(-1), model.predict_classes(X_test))
cm / np.sum(cm, axis=1, keepdims=True) * 100   # keepdims divides each row by its own total
```
[3 8 0 ... 3 4 7]
[3 8 8 ... 5 1 7]
array([[64.1, 5.6, 3.5, 1.4, 3.6, 0.9, 1.8, 3.8, 12.7, 2.6],
[ 3.3, 71.1, 1.1, 1.2, 1.1, 1.3, 1. , 2.7, 6.8, 10.4],
[10.9, 2.4, 33.1, 5.7, 17.4, 8.4, 7.1, 11.5, 2. , 1.5],
[ 4.4, 3.3, 6.1, 27.5, 9.3, 21.1, 9.2, 12.1, 3.3, 3.7],
[ 7. , 2.1, 9.4, 4.3, 46.6, 4.9, 6.4, 15.2, 2.5, 1.6],
[ 2.8, 0.9, 7.3, 14.1, 9.5, 42.1, 6. , 13.4, 2.6, 1.3],
[ 2.2, 2.5, 5.4, 5.9, 14.3, 7.3, 53.4, 5.7, 1.1, 2.2],
[ 5.1, 1.5, 2.6, 3.1, 5.6, 5.3, 1.7, 70.8, 1.2, 3.1],
[10.7, 8.2, 1.2, 1.5, 2.8, 1.2, 0.7, 1.9, 67.9, 3.9],
[ 4.8, 25.9, 1. , 1.7, 0.6, 2.3, 1.5, 6.9, 6.5, 48.8]])
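The diagonal of this normalized matrix is the per-class recall. As a quick follow-up, here is a minimal sketch (not part of the original notebook) that extracts per-class and overall accuracy from the raw counts in `cm`:

```python
# Sketch: per-class recall and overall accuracy from the raw confusion matrix `cm`.
per_class = np.diag(cm) / cm.sum(axis=1)       # correct counts / true samples per class
for label, acc in enumerate(per_class):
    print(f"class {label}: {acc:.1%}")
print(f"overall accuracy: {np.trace(cm) / cm.sum():.1%}")
```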
```python
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib.gridspec as gridspec
import numpy
import random
fig = plt.figure()
outer_grid = gridspec.GridSpec(10, 10, wspace=0.0, hspace=0.0)
# Visualize the incoming weights of 100 randomly chosen first-layer units as
# 32x32 images (averaging over the 3 color channels of the flattened input).
weights = model.layers[0].get_weights()
w = weights[0].T                               # one weight vector per unit
for i, neuron in enumerate(random.sample(range(0, 1023), 100)):
    ax = plt.Subplot(fig, outer_grid[i])
    ax.imshow(numpy.mean(numpy.reshape(w[neuron], (32, 32, 3)), axis=2),
              cmap=cm.Greys_r)                 # plot the sampled unit, not just the first 100
ax.set_xticks([])
ax.set_yticks([])
fig.add_subplot(ax)
plt.show()
```
```python
# Exercise: use neural networks to classify
```
```python
```
```python
```
```python
import matplotlib.pyplot as plt
import numpy as np
mean = [0, 0]
cov = [[1, 0], [0, 100]] # diagonal covariance
cov2 = [[2, 0], [0, 1]] # diagonal covariance
x1, x2 = np.random.multivariate_normal(mean, cov, 100).T
x3, x4 = np.random.multivariate_normal(mean, cov2, 100).T
```
```python
from mpl_toolkits import mplot3d
zdata = 15 * np.random.random(1000)
xdata = np.sin(zdata) + 0.1 * np.random.randn(1000)
ydata = np.cos(zdata) + 0.1 * np.random.randn(1000)
fig = plt.figure()
ax = plt.axes(projection='3d')
ax.scatter3D(xdata, ydata, zdata, cmap='Greens');
```
```python
# X: a (3, 1000) data array presumably defined in an earlier cell (not shown here);
# np.cov treats each row as one variable and returns the 3x3 covariance matrix.
print(X.shape)
covX = np.cov(X)
print(covX)
```
(3, 1000)
[[ 2.79463640e-02 -4.03523147e-04 -6.37819697e-03]
[-4.03523147e-04 2.63174637e-02 6.63009104e-03]
[-6.37819697e-03 6.63009104e-03 9.83000217e-01]]
```python
from numpy import linalg as LA
w, v = LA.eig(covX)
print(v)
```
[[ 0.00668036 -0.97855495 -0.20587761]
[-0.00693215 0.20583193 -0.9785628 ]
[-0.99995366 -0.00796433 0.00540845]]
```python
```
```python
print(w)
print(v)
```
[0.98308879 0.02797933 0.02619592]
[[ 0.00668036 -0.97855495 -0.20587761]
[-0.00693215 0.20583193 -0.9785628 ]
[-0.99995366 -0.00796433 0.00540845]]
```python
fig = plt.figure(figsize=(10,10))
ax = plt.axes(projection='3d')
ax.scatter3D(xdata, ydata, zdata, cmap='Greens');
for i in range(len(v)):
    # np.linalg.eig returns the eigenvectors as the columns of v, i.e. v[:, i]
    ax.plot([0, v[0, i]], [0, v[1, i]], [0, v[2, i]])
plt.show()
```
```python
print(v.shape)
dat = np.dot(X.T, v)             # project each sample onto the eigenvectors (principal axes)
plt.scatter(dat[:,0], dat[:,1])  # scatter of the first two projected coordinates
```
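As a cross-check (a sketch only, assuming scikit-learn is installed and that `X` is the `(3, 1000)` array used above), sklearn's `PCA` should recover the same principal directions and variances as the manual eigendecomposition, up to sign and ordering:

```python
from sklearn.decomposition import PCA

pca = PCA(n_components=3)
scores = pca.fit_transform(X.T)      # PCA expects samples as rows
print(pca.components_)               # rows match the columns of v, up to sign/order
print(pca.explained_variance_)       # matches the eigenvalues w (sorted descending)
```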
```python
```
## Data Analytics
### Distribution Transformations in Python
#### Michael Pyrcz, Associate Professor, The University of Texas at Austin
##### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
### Data Analytics: Distribution Transformations
Here's a demonstration of how to build and apply distribution transformations in Python. This demonstration is part of the resources that I include for my courses in Spatial / Subsurface Data Analytics at the Cockrell School of Engineering at the University of Texas at Austin.
#### Distribution Transformations
Why do we do this?
* **Inference**: variable has expected shape
* **Data Preparation / Cleaning**: correcting for too few data and outliers
* **Theory**: a specific distribution assumption required for a method
How do we do it?
We apply this to all sample data, $x_{\alpha}$ $\forall$ $\alpha = 1,\ldots,n$.
\begin{equation}
y_{\alpha} = G^{-1}_Y\left(F_X(x_{\alpha})\right)
\end{equation}
where $X$ is the original feature with original cumulative distribution function $F_X$, and $Y$ is the transformed feature with transformed cumulative distribution function $G_Y$.
* Mapping from one distribution to another through percentiles
* This may be applied to any parametric or nonparametric distributions
* This is a rank preserving transform, e.g. the P50 of $X$ is the P50 of $Y$
I have a lecture on distribution transformations available on [YouTube](https://www.youtube.com/watch?v=ZDIpE3OkAIU&list=PLG19vXLQHvSB-D4XKYieEku9GQMQyAzjJ&index=14).
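Before the worked example below, here is a minimal, self-contained sketch of the mapping $y_{\alpha} = G^{-1}_Y\left(F_X(x_{\alpha})\right)$ for a standard normal target (a "normal score" transform). The lognormal sample and the $p = \text{rank}/(n+1)$ tail convention are illustrative choices, not taken from this notebook:

```python
import numpy as np
from scipy.stats import norm, rankdata

x = np.random.lognormal(mean=0.0, sigma=0.5, size=1000)  # any skewed feature
p = rankdata(x) / (len(x) + 1)                           # empirical F_X(x), open tails
y = norm.ppf(p)                                          # G_Y^{-1}(p), standard normal target
# ranks are preserved: np.argsort(x) and np.argsort(y) are identical
```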
#### Getting Started
Here's the steps to get setup in Python with the GeostatsPy package:
1. Install Anaconda 3 on your machine (https://www.anaconda.com/download/).
2. From Anaconda Navigator (within Anaconda3 group), go to the environment tab, click on base (root) green arrow and open a terminal.
3. In the terminal type: pip install geostatspy.
4. Open Jupyter and in the top block get started by copy and pasting the code block below from this Jupyter Notebook to start using the geostatspy functionality.
You will need to copy the data file to your working directory. It is available here:
* Tabular data - [sample_data.csv](https://github.com/GeostatsGuy/GeoDataSets/blob/master/sample_data.csv).
#### Importing Packages
We will need some standard packages. These should have been installed with Anaconda 3.
```python
import numpy as np # ndarrys for gridded data
import pandas as pd # DataFrames for tabular data
import os # set working directory, run executables
import matplotlib.pyplot as plt # plotting
from scipy import stats # summary statistics
import math # trigonometry etc.
import scipy.signal as signal # kernel for moving window calculation
import random # randon numbers
import seaborn as sns # matrix scatter plots
from scipy.stats import norm # Gaussian parametric distribution
from sklearn import preprocessing
import geostatspy.GSLIB as GSLIB
from ipywidgets import interactive # widgets and interactivity
from ipywidgets import widgets
from ipywidgets import Layout
from ipywidgets import Label
from ipywidgets import VBox, HBox
```
#### Set the Random Number Seed
Set the random number seed so that we have a repeatable workflow
```python
seed = 73073
```
#### Set the Working Directory
I always like to do this so I don't lose files and to simplify subsequent read and writes (avoid including the full address each time).
```python
#os.chdir("c:/PGE383") # set the working directory
```
#### Loading Tabular Data
Here's the command to load our comma delimited data file in to a Pandas' DataFrame object. For fun try misspelling the name. You will get an ugly, long error.
```python
#df = pd.read_csv('sample_data.csv') # load our data table
df = pd.read_csv('https://raw.githubusercontent.com/GeostatsGuy/GeoDataSets/master/sample_data.csv') # load from Dr. Pyrcz's GitHub repository
```
It worked, we loaded our file into our DataFrame called 'df'. But how do you really know that it worked? Visualizing the DataFrame would be useful and we already learned about these methods in this demo (https://git.io/fNgRW).
We can preview the DataFrame by printing a slice or by utilizing the 'head' DataFrame member function (with a nice and clean format, see below). With the slice we could look at any subset of the data table and with the head command, add parameter 'n=13' to see the first 13 rows of the dataset.
```python
df.head(n=6) # we could also use this command for a table preview
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>X</th>
<th>Y</th>
<th>Facies</th>
<th>Porosity</th>
<th>Perm</th>
<th>AI</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>100.0</td>
<td>900.0</td>
<td>1.0</td>
<td>0.100187</td>
<td>1.363890</td>
<td>5110.699751</td>
</tr>
<tr>
<th>1</th>
<td>100.0</td>
<td>800.0</td>
<td>0.0</td>
<td>0.107947</td>
<td>12.576845</td>
<td>4671.458560</td>
</tr>
<tr>
<th>2</th>
<td>100.0</td>
<td>700.0</td>
<td>0.0</td>
<td>0.085357</td>
<td>5.984520</td>
<td>6127.548006</td>
</tr>
<tr>
<th>3</th>
<td>100.0</td>
<td>600.0</td>
<td>0.0</td>
<td>0.108460</td>
<td>2.446678</td>
<td>5201.637996</td>
</tr>
<tr>
<th>4</th>
<td>100.0</td>
<td>500.0</td>
<td>0.0</td>
<td>0.102468</td>
<td>1.952264</td>
<td>3835.270322</td>
</tr>
<tr>
<th>5</th>
<td>100.0</td>
<td>400.0</td>
<td>0.0</td>
<td>0.110579</td>
<td>3.691908</td>
<td>5295.267191</td>
</tr>
</tbody>
</table>
</div>
#### Calculating and Plotting a CDF by Hand
Let's demonstrate the calculation and plotting of a non-parametric CDF by hand
1. make a copy of the feature as a 1D array (ndarray from NumPy)
2. sort the data in ascending order
3. assign cumulative probabilities based on the tail assumptions
4. plot cumulative probability vs. value
```python
por = df['Porosity'].copy(deep = True).values # make a deepcopy of the feature from the DataFrame
print('The ndarray has a shape of ' + str(por.shape) + '.')
por = np.sort(por) # sort the data in ascending order
n = por.shape[0] # get the number of data samples
cprob = np.zeros(n)
for i in range(0,n):
index = i + 1
cprob[i] = index / n # known upper tail
# cprob[i] = (index - 1)/n # known lower tail
# cprob[i] = (index - 1)/(n - 1) # known upper and lower tails
# cprob[i] = index/(n+1) # unknown tails
plt.subplot(111)
plt.plot(por,cprob, alpha = 0.2, c = 'black') # plot piecewise linear interpolation
plt.scatter(por,cprob,s = 30, alpha = 1.0, c = 'red', edgecolor = 'black') # plot the CDF points
plt.grid(); plt.xlim([0.05,0.25]); plt.ylim([0.0,1.0])
plt.xlabel("Porosity (fraction)"); plt.ylabel("Cumulative Probability"); plt.title("Non-parametric Porosity Cumulative Distribution Function")
plt.subplots_adjust(left=0.0, bottom=0.0, right=1.0, top=1.2, wspace=0.1, hspace=0.2)
plt.show()
```
#### Transformation to a Parametric Distribution
We can transform our data feature distribution to any parametric distribution with this workflow.
1. Calculate the cumulative probability value of each of our data values, $p_{\alpha} = F_x(x_\alpha)$, $\forall$ $\alpha = 1,\ldots, n$.
2. Apply the inverse of the target parametric cumulative distribution function (CDF) to calculate the transformed values. $y_{\alpha} = G_y^{-1}\left(F_x(x_\alpha)\right)$, $\forall$ $\alpha = 1,\ldots, n$.
```python
y = np.zeros(n)
for i in range(0,n):
y[i] = norm.ppf(cprob[i],loc=0.0,scale=1.0)
plt.subplot(121)
plt.plot(por,cprob, alpha = 0.2, c = 'black') # plot piecewise linear interpolation
plt.scatter(por,cprob,s = 30, alpha = 1.0, c = 'red', edgecolor = 'black') # plot the CDF points
plt.grid(); plt.xlim([0.05,0.25]); plt.ylim([0.0,1.0])
plt.xlabel("Porosity (fraction)"); plt.ylabel("Cumulative Probability"); plt.title("Non-parametric Porosity Cumulative Distribution Function")
plt.subplot(122)
plt.plot(y,cprob, alpha = 0.2, c = 'black') # plot piecewise linear interpolation
plt.scatter(y,cprob,s = 30, alpha = 1.0, c = 'blue', edgecolor = 'black') # plot the CDF points
plt.grid(); plt.xlim([-3.0,3.0]); plt.ylim([0.0,1.0])
plt.xlabel("Porosity (fraction)"); plt.ylabel("Cumulative Probability"); plt.title("After Distribution Transformation to Gaussian")
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.2, wspace=0.2, hspace=0.2)
plt.show()
```
Let's make an interactive version of this plot to visualize the transformation.
```python
# widgets and dashboard
l = widgets.Text(value=' Data Analytics, Distribution Transformation, Prof. Michael Pyrcz, The University of Texas at Austin',layout=Layout(width='950px', height='30px'))
data_index = widgets.IntSlider(min=1, max = n-1, value=1.0, step = 1.0, description = 'Data Index, $\\alpha$',orientation='horizontal', style = {'description_width': 'initial'}, continuous_update=False)
ui = widgets.VBox([l,data_index],)
def run_plot(data_index): # make data, fit models and plot
plt.subplot(131)
plt.plot(por,cprob, alpha = 0.2, c = 'black') # plot piecewise linear interpolation
plt.scatter(por,cprob,s = 20, alpha = 1.0, c = 'red', edgecolor = 'black') # plot the CDF points
plt.grid(); plt.xlim([0.05,0.25]); plt.ylim([0.0,1.0])
plt.xlabel("Original Feature, $x$"); plt.ylabel("Cumulative Probability"); plt.title("CDF Original Feature")
plt.plot([por[data_index-1],por[data_index-1]],[0.0,cprob[data_index-1]],color = 'red',linestyle='dashed')
plt.plot([por[data_index-1],3.0],[cprob[data_index-1],cprob[data_index-1]],color = 'red',linestyle='dashed')
plt.scatter(por[data_index-1],0,marker='s',s = 100, c = 'red', edgecolor = 'black', alpha = 1.0, zorder=200) # plot the CDF points
plt.annotate('x = ' + str(round(por[data_index-1],2)), xy=(por[data_index-1]+0.01, 0.01))
plt.annotate('p = ' + str(round(cprob[data_index-1],2)), xy=(0.225, cprob[data_index-1]+0.02))
plt.subplot(132)
plt.plot(y,cprob, alpha = 0.2, c = 'black') # plot piecewise linear interpolation
plt.scatter(y,cprob,s = 20, alpha = 1.0, c = 'blue', edgecolor = 'black') # plot the CDF points
plt.grid(); plt.xlim([-3.0,3.0]); plt.ylim([0.0,1.0])
plt.xlabel("Gaussian Transformed Feature, $y$"); plt.ylabel("Cumulative Probability"); plt.title("CDF After Distribution Transformation to Gaussian")
plt.plot([-3.0,y[data_index-1]],[cprob[data_index-1],cprob[data_index-1]],color = 'blue',linestyle='dashed')
plt.plot([y[data_index-1],y[data_index-1]],[0.0,cprob[data_index-1]],color = 'blue',linestyle='dashed')
plt.scatter(y[data_index-1],0,marker='v',s = 100, c = 'blue', edgecolor = 'black', alpha = 1.0, zorder=200) # plot the CDF points
plt.annotate('p = ' + str(round(cprob[data_index-1],2)), xy=(-2.90, cprob[data_index-1]+0.02))
plt.annotate('y = ' + str(round(y[data_index-1],2)), xy=(y[data_index-1]+0.3, 0.01))
plt.subplot(133)
plt.plot(por,y, alpha = 0.2, c = 'black') # plot piecewise linear interpolation
plt.grid(); plt.xlim([0.05,0.25]); plt.ylim([-3.0,3.0])
plt.xlabel("Original Porosity (fraction)"); plt.ylabel("Gaussian Transformed Porosity (N[fraction])"); plt.title("Q-Q Plot, Distribution Transformation")
#plt.plot([0.05,0.25],[0.05,0.25],color = 'red',linestyle='dashed', alpha = 0.4)
plt.plot([por[data_index-1],por[data_index-1]],[-3,y[data_index-1]],color = 'red',linestyle='dashed')
plt.plot([0.05,por[data_index-1]],[y[data_index-1],y[data_index-1]],color = 'blue',linestyle='dashed')
plt.scatter(por[data_index-1],y[data_index-1],marker='+',s = 700, c = 'purple', edgecolor = 'black', alpha = 1.0, zorder=200) # plot the CDF points
plt.scatter(por[data_index-1],y[data_index-1],marker='x',s = 500, c = 'purple', edgecolor = 'black', alpha = 1.0, zorder=200) # plot the CDF points
plt.scatter(por[data_index-1],y[data_index-1],s = 200, c = 'purple', edgecolor = 'black', alpha = 1.0, zorder=200) # plot the CDF points
plt.scatter(por,y,s = 20, c = 'purple', edgecolor = 'black', alpha = 0.7, zorder=100) # plot the CDF points
plt.scatter(por[data_index-1],-3,marker='s',s = 100, c = 'red', edgecolor = 'black', alpha = 1.0, zorder=200) # plot the CDF points
plt.scatter(0.05,y[data_index-1],marker='s',s = 100, c = 'blue', edgecolor = 'black', alpha = 1.0, zorder=200) # plot the CDF points
plt.subplots_adjust(left=0.0, bottom=0.0, right=3.0, top=1.1, wspace=0.2, hspace=0.2)
plt.show()
# connect the function to make the samples and plot to the widgets
interactive_plot = widgets.interactive_output(run_plot, {'data_index':data_index})
interactive_plot.clear_output(wait = True) # reduce flickering by delaying plot updating
```
### Interactive Data Analytics Distribution Transformation Demonstration
#### Michael Pyrcz, Associate Professor, The University of Texas at Austin
Select any data value and observe the distribution transform by mapping through cumulative probability.
### The Inputs
* **data_index** - the data index from 1 to n in the sorted ascending order
```python
display(ui, interactive_plot) # display the interactive plot
```
VBox(children=(Text(value=' Data Analytics, Distribution Tran…
Output()
#### Distribution Transform to a Non-Parametric Distribution
We can apply the mapping through cumulative probabilities to transform from any distribution to any other distribution.
* let's make a new data set by randomly sampling from the previous one and adding error
Then we can demonstrate transforming this dataset to match the original distribution
* this is mimicking the situation where we transform a dataset to match the distribution of a better sampled analog distribution
```python
n_sample = 30
df_sample = df.sample(n_sample,random_state = seed)
df_sample = df_sample.copy(deep = True) # make a deepcopy of the feature from the DataFrame
df_sample['Porosity'] = df_sample['Porosity'].values + np.random.normal(loc = 0.0, scale = 0.01, size = n_sample)
df_sample = df_sample.sort_values(by = 'Porosity') # sort the DataFrame
por_sample = df_sample['Porosity'].values
print('The sample ndarray has a shape of ' + str(por_sample.shape) + '.')
cprob_sample = np.zeros(n_sample)
for i in range(0,n_sample):
index = i + 1
cprob_sample[i] = index / n_sample # known upper tail
# cprob[i] = (index - 1)/n # known lower tail
# cprob[i] = (index - 1)/(n - 1) # known upper and lower tails
# cprob[i] = index/(n+1) # unknown tails
plt.subplot(121)
plt.plot(por_sample,cprob_sample, alpha = 0.2, c = 'black') # plot piecewise linear interpolation
plt.scatter(por_sample,cprob_sample,s = 30, alpha = 1.0, c = 'red', edgecolor = 'black') # plot the CDF points
plt.grid(); plt.xlim([0.05,0.25]); plt.ylim([0.0,1.0])
plt.xlabel("Porosity (fraction)"); plt.ylabel("Cumulative Probability"); plt.title("Sparse Sample with Noise Cumulative Distribution Function")
plt.subplot(122)
plt.plot(por,cprob, alpha = 0.2, c = 'black') # plot piecewise linear interpolation
plt.scatter(por,cprob,s = 30, alpha = 1.0, c = 'blue', edgecolor = 'black') # plot the CDF points
plt.grid(); plt.xlim([0.05,0.25]); plt.ylim([0.0,1.0])
plt.xlabel("Porosity (fraction)"); plt.ylabel("Cumulative Probability"); plt.title("Non-parametric Porosity Cumulative Distribution Function")
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.2, wspace=0.2, hspace=0.2)
plt.show()
```
Let's transform the values and show them on the target distribution.
```python
y_sample = np.zeros(n_sample)
for i in range(0,n_sample):
y_sample[i] = np.percentile(por,cprob_sample[i]*100, interpolation = 'linear') # piecewise linear interpolation of inverse of target CDF
plt.subplot(121)
plt.plot(por_sample,cprob_sample, alpha = 0.2, c = 'black') # plot piecewise linear interpolation
plt.scatter(por_sample,cprob_sample,s = 30, alpha = 1.0, c = 'red', edgecolor = 'black', zorder = 100) # plot the CDF points
plt.grid(); plt.xlim([0.05,0.25]); plt.ylim([0.0,1.0])
plt.xlabel("Porosity (fraction)"); plt.ylabel("Cumulative Probability"); plt.title("Sparse Sample with Noise Cumulative Distribution Function")
plt.subplot(122)
plt.plot(por,cprob, alpha = 0.6,c = 'black') # plot piecewise linear interpolation
plt.scatter(por,cprob,s = 20, c = 'red', edgecolor = 'black', alpha = 0.6) # plot the CDF points
plt.scatter(y_sample,cprob_sample,s = 60, c = 'white', alpha = .7, zorder = 99) # plot the CDF points
plt.scatter(y_sample,cprob_sample,s = 30, c = 'blue', edgecolor = 'black', alpha = 1.0, zorder = 100) # plot the CDF points
plt.grid(); plt.xlim([0.05,0.25]); plt.ylim([0.0,1.0])
plt.xlabel("Porosity (fraction)"); plt.ylabel("Cumulative Probability"); plt.title("Non-parametric Porosity Cumulative Distribution Function")
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.2, wspace=0.2, hspace=0.2)
plt.show()
```
Let's make an interactive version of this plot to visualize the transformation.
```python
# widgets and dashboard
l_sample = widgets.Text(value=' Data Analytics, Distribution Transformation, Prof. Michael Pyrcz, The University of Texas at Austin',layout=Layout(width='950px', height='30px'))
data_index_sample = widgets.IntSlider(min=1, max = n_sample, value=1.0, step = 1.0, description = 'Data Sample Index, $\\beta$',orientation='horizontal', style = {'description_width': 'initial'}, continuous_update=False)
ui_sample = widgets.VBox([l_sample,data_index_sample],)
def run_plot_sample(data_index_sample): # make data, fit models and plot
plt.subplot(131)
plt.plot(por_sample,cprob_sample, alpha = 0.2, c = 'black') # plot piecewise linear interpolation
plt.scatter(por_sample,cprob_sample,s = 30, alpha = 1.0, c = 'red', edgecolor = 'black',zorder = 100) # plot the CDF points
plt.grid(); plt.xlim([0.05,0.25]); plt.ylim([0.0,1.0])
plt.xlabel("Porosity (fraction)"); plt.ylabel("Cumulative Probability"); plt.title("Original Sparse Sample with Noise, Cumulative Distribution Function")
plt.plot([por_sample[data_index_sample-1],por_sample[data_index_sample-1]],[0.0,cprob_sample[data_index_sample-1]],color = 'red',linestyle='dashed')
plt.plot([por_sample[data_index_sample-1],3.0],[cprob_sample[data_index_sample-1],cprob_sample[data_index_sample-1]],color = 'red',linestyle='dashed')
plt.annotate('x = ' + str(round(por_sample[data_index_sample-1],2)), xy=(por_sample[data_index_sample-1]+0.003, 0.01))
plt.annotate('p = ' + str(round(cprob_sample[data_index_sample-1],2)), xy=(0.225, cprob_sample[data_index_sample-1]+0.02))
plt.subplot(132)
plt.plot(por,cprob, alpha = 0.2, c = 'black') # plot piecewise linear interpolation
plt.scatter(por,cprob,s = 30, c = 'blue', edgecolor = 'black', alpha = 1.0) # plot the CDF points
plt.grid(); plt.xlim([0.05,0.25]); plt.ylim([0.0,1.0])
plt.xlabel("Porosity (fraction)"); plt.ylabel("Cumulative Probability"); plt.title("Non-parametric Target Porosity Cumulative Distribution Function")
plt.plot([0.0,y_sample[data_index_sample-1]],[cprob_sample[data_index_sample-1],cprob_sample[data_index_sample-1]],color = 'blue',linestyle='dashed')
plt.plot([y_sample[data_index_sample-1],y_sample[data_index_sample-1]],[0.0,cprob_sample[data_index_sample-1]],color = 'blue',linestyle='dashed')
plt.annotate('p = ' + str(round(cprob_sample[data_index_sample-1],2)), xy=(0.053, cprob_sample[data_index_sample-1]+0.02))
plt.annotate('y = ' + str(round(y_sample[data_index_sample-1],2)), xy=(y_sample[data_index_sample-1]+0.003, 0.01))
plt.scatter(y_sample[data_index_sample-1],cprob_sample[data_index_sample-1],s = 200, c = 'white', alpha = 1.0, zorder=99) # plot the CDF points
plt.scatter(y_sample[data_index_sample-1],cprob_sample[data_index_sample-1],s = 70, c = 'blue', edgecolor = 'black', alpha = 1.0, zorder=100) # plot the CDF points
plt.subplot(133)
plt.plot(por_sample,y_sample, alpha = 0.4, c = 'black') # plot piecewise linear interpolation
plt.grid(); plt.xlim([0.05,0.25]); plt.ylim([0.05,0.25])
plt.xlabel("Original Porosity (fraction)"); plt.ylabel("Transformed Porosity (fraction)"); plt.title("Q-Q Plot for a Non-parametric Distribution Transformation")
plt.plot([0.05,0.25],[0.05,0.25],color = 'black',linestyle='dashed', alpha = 1.0)
plt.scatter(por_sample[data_index_sample-1],y_sample[data_index_sample-1],s = 120, c = 'white', alpha = 1.0, zorder=190) # plot the CDF points
plt.scatter(por_sample[data_index_sample-1],y_sample[data_index_sample-1],s = 80, c = 'purple', edgecolor = 'black', alpha = 1.0, zorder=200) # plot the CDF points
plt.scatter(por_sample,y_sample,s = 30, c = 'purple', edgecolor = 'black', alpha = 0.8, zorder=100) # plot the CDF points
plt.scatter(por_sample[data_index_sample-1],y_sample[data_index_sample-1],marker='+',s = 700, c = 'purple', edgecolor = 'black', alpha = 1.0, zorder=199) # plot the CDF points
plt.scatter(por_sample[data_index_sample-1],y_sample[data_index_sample-1],marker='x',s = 500, c = 'purple', edgecolor = 'black', alpha = 1.0, zorder=199) # plot the CDF points
plt.subplots_adjust(left=0.0, bottom=0.0, right=3.0, top=1.2, wspace=0.2, hspace=0.2)
plt.show()
# connect the function to make the samples and plot to the widgets
interactive_plot_s = widgets.interactive_output(run_plot_sample, {'data_index_sample':data_index_sample})
#interactive_plot_sample.clear_output(wait = True) # reduce flickering by delaying plot updating
```
### Interactive Data Analytics Distribution Transformation Demonstration
#### Michael Pyrcz, Associate Professor, The University of Texas at Austin
Select any data value and observe the distribution transform by mapping through cumulative probability.
#### The Inputs
* **data_index** - the data index from 1 to n in the sorted ascending order
```python
display(ui_sample, interactive_plot_s) # display the interactive plot
```
VBox(children=(Text(value=' Data Analytics, Distribution Tran…
Output()
To summarize, let's look at a DataFrame with the original noisy sample and the values transformed to match the original distribution.
* we're making and showing a table of original values, $x_{\beta}$ $\forall$ $\beta = 1, \ldots, n_{sample}$, and the transformed values, $y_{\beta}$ $\forall$ $\beta = 1, \ldots, n_{sample}$.
```python
df_sample['Transformed_Por'] = y_sample
df_sample.head(n=n_sample)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>X</th>
<th>Y</th>
<th>Facies</th>
<th>Porosity</th>
<th>Perm</th>
<th>AI</th>
<th>Transformed_Por</th>
</tr>
</thead>
<tbody>
<tr>
<th>80</th>
<td>900.0</td>
<td>100.0</td>
<td>0.0</td>
<td>0.078139</td>
<td>1.280257</td>
<td>4573.656072</td>
<td>0.081044</td>
</tr>
<tr>
<th>207</th>
<td>201.0</td>
<td>426.0</td>
<td>0.0</td>
<td>0.099933</td>
<td>0.400658</td>
<td>5263.542112</td>
<td>0.085867</td>
</tr>
<tr>
<th>3</th>
<td>100.0</td>
<td>600.0</td>
<td>0.0</td>
<td>0.102014</td>
<td>2.446678</td>
<td>5201.637996</td>
<td>0.091834</td>
</tr>
<tr>
<th>41</th>
<td>500.0</td>
<td>400.0</td>
<td>0.0</td>
<td>0.104302</td>
<td>6.312198</td>
<td>5515.918646</td>
<td>0.095628</td>
</tr>
<tr>
<th>218</th>
<td>251.0</td>
<td>416.0</td>
<td>0.0</td>
<td>0.111108</td>
<td>1.003374</td>
<td>5822.467914</td>
<td>0.099378</td>
</tr>
<tr>
<th>226</th>
<td>211.0</td>
<td>396.0</td>
<td>0.0</td>
<td>0.112444</td>
<td>6.368529</td>
<td>5725.334803</td>
<td>0.101987</td>
</tr>
<tr>
<th>47</th>
<td>600.0</td>
<td>700.0</td>
<td>0.0</td>
<td>0.113304</td>
<td>12.384496</td>
<td>3595.586977</td>
<td>0.103970</td>
</tr>
<tr>
<th>210</th>
<td>231.0</td>
<td>426.0</td>
<td>0.0</td>
<td>0.114971</td>
<td>5.584040</td>
<td>4919.074871</td>
<td>0.106704</td>
</tr>
<tr>
<th>189</th>
<td>201.0</td>
<td>456.0</td>
<td>0.0</td>
<td>0.115299</td>
<td>0.546396</td>
<td>5018.355476</td>
<td>0.108460</td>
</tr>
<tr>
<th>5</th>
<td>100.0</td>
<td>400.0</td>
<td>0.0</td>
<td>0.120576</td>
<td>3.691908</td>
<td>5295.267191</td>
<td>0.111491</td>
</tr>
<tr>
<th>72</th>
<td>900.0</td>
<td>900.0</td>
<td>0.0</td>
<td>0.120786</td>
<td>12.433996</td>
<td>6242.704810</td>
<td>0.113887</td>
</tr>
<tr>
<th>71</th>
<td>800.0</td>
<td>100.0</td>
<td>0.0</td>
<td>0.139345</td>
<td>7.739105</td>
<td>5274.532660</td>
<td>0.117984</td>
</tr>
<tr>
<th>53</th>
<td>600.0</td>
<td>100.0</td>
<td>1.0</td>
<td>0.146009</td>
<td>42.396044</td>
<td>4204.150893</td>
<td>0.121592</td>
</tr>
<tr>
<th>165</th>
<td>955.0</td>
<td>469.0</td>
<td>1.0</td>
<td>0.175355</td>
<td>26.197239</td>
<td>2889.196647</td>
<td>0.127131</td>
</tr>
<tr>
<th>245</th>
<td>690.0</td>
<td>529.0</td>
<td>1.0</td>
<td>0.179410</td>
<td>316.905689</td>
<td>4271.013148</td>
<td>0.137062</td>
</tr>
<tr>
<th>151</th>
<td>995.0</td>
<td>489.0</td>
<td>1.0</td>
<td>0.181649</td>
<td>460.494986</td>
<td>2792.804322</td>
<td>0.153759</td>
</tr>
<tr>
<th>84</th>
<td>955.0</td>
<td>559.0</td>
<td>1.0</td>
<td>0.184025</td>
<td>74.215058</td>
<td>3386.182722</td>
<td>0.176536</td>
</tr>
<tr>
<th>93</th>
<td>955.0</td>
<td>549.0</td>
<td>1.0</td>
<td>0.189082</td>
<td>374.298925</td>
<td>3181.557281</td>
<td>0.185270</td>
</tr>
<tr>
<th>99</th>
<td>925.0</td>
<td>539.0</td>
<td>1.0</td>
<td>0.189614</td>
<td>211.163296</td>
<td>3442.885245</td>
<td>0.188136</td>
</tr>
<tr>
<th>136</th>
<td>935.0</td>
<td>499.0</td>
<td>1.0</td>
<td>0.190983</td>
<td>523.287810</td>
<td>2579.032897</td>
<td>0.191655</td>
</tr>
<tr>
<th>111</th>
<td>955.0</td>
<td>529.0</td>
<td>1.0</td>
<td>0.193601</td>
<td>1113.971076</td>
<td>3177.635737</td>
<td>0.195998</td>
</tr>
<tr>
<th>174</th>
<td>955.0</td>
<td>459.0</td>
<td>1.0</td>
<td>0.195106</td>
<td>45.002088</td>
<td>3394.563038</td>
<td>0.198182</td>
</tr>
<tr>
<th>158</th>
<td>975.0</td>
<td>479.0</td>
<td>1.0</td>
<td>0.197169</td>
<td>69.336576</td>
<td>2493.128177</td>
<td>0.199315</td>
</tr>
<tr>
<th>125</th>
<td>1005.0</td>
<td>519.0</td>
<td>1.0</td>
<td>0.198837</td>
<td>54.667195</td>
<td>2577.714678</td>
<td>0.201943</td>
</tr>
<tr>
<th>150</th>
<td>985.0</td>
<td>489.0</td>
<td>1.0</td>
<td>0.203183</td>
<td>73.133040</td>
<td>2672.294567</td>
<td>0.206934</td>
</tr>
<tr>
<th>181</th>
<td>935.0</td>
<td>449.0</td>
<td>1.0</td>
<td>0.203501</td>
<td>368.507601</td>
<td>4249.477923</td>
<td>0.209051</td>
</tr>
<tr>
<th>113</th>
<td>975.0</td>
<td>529.0</td>
<td>1.0</td>
<td>0.214610</td>
<td>1548.094062</td>
<td>3167.185377</td>
<td>0.211638</td>
</tr>
<tr>
<th>141</th>
<td>985.0</td>
<td>499.0</td>
<td>1.0</td>
<td>0.228755</td>
<td>68.276148</td>
<td>2547.526113</td>
<td>0.215686</td>
</tr>
<tr>
<th>149</th>
<td>975.0</td>
<td>489.0</td>
<td>1.0</td>
<td>0.236303</td>
<td>21.109085</td>
<td>2412.875330</td>
<td>0.224214</td>
</tr>
<tr>
<th>129</th>
<td>955.0</td>
<td>509.0</td>
<td>1.0</td>
<td>0.241784</td>
<td>1525.247066</td>
<td>2512.061434</td>
<td>0.242298</td>
</tr>
</tbody>
</table>
</div>
It would be straightforward to modify the code above to perform general distribution transformations (a compact helper along these lines is sketched after this list):
* to a parametric distribution like Gaussian
* to a non-parametric distribution from actual data (build a CDF and interpolate between the data samples)
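As a sketch only (not from the original notebook), one possible helper that wraps both cases, reusing the percentile-mapping logic and the unknown-tails convention $p = \text{rank}/(n+1)$:

```python
import numpy as np
from scipy.stats import norm

def transform_distribution(x, target=None, loc=0.0, scale=1.0):
    """Map x through p = rank/(n+1); invert a Gaussian CDF if target is None,
    otherwise interpolate the empirical CDF of the reference sample `target`."""
    x = np.asarray(x)
    ranks = np.argsort(np.argsort(x))                       # 0, 1, ..., n-1 by value
    p = (ranks + 1) / (len(x) + 1)                          # unknown-tails convention
    if target is None:                                      # parametric Gaussian target
        return norm.ppf(p, loc=loc, scale=scale)
    return np.percentile(np.asarray(target), p * 100, interpolation='linear')

# e.g. transform_distribution(por_sample)              -> Gaussian transform
#      transform_distribution(por_sample, target=por)  -> match the original porosity CDF
```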
#### Comments
This was a basic demonstration of distribution transformations.
I have other demonstrations on the basics of working with DataFrames, ndarrays, univariate statistics, plotting data, declustering, data transformations, trend modeling and many other workflows available at [Python Demos](https://github.com/GeostatsGuy/PythonNumericalDemos) and a Python package for data analytics and geostatistics at [GeostatsPy](https://github.com/GeostatsGuy/GeostatsPy).
I hope this was helpful,
*Michael*
#### The Author:
### Michael Pyrcz, Associate Professor, University of Texas at Austin
*Novel Data Analytics, Geostatistics and Machine Learning Subsurface Solutions*
With over 17 years of experience in subsurface consulting, research and development, Michael has returned to academia driven by his passion for teaching and enthusiasm for enhancing engineers' and geoscientists' impact in subsurface resource development.
For more about Michael check out these links:
#### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
#### Want to Work Together?
I hope this content is helpful to those that want to learn more about subsurface modeling, data analytics and machine learning. Students and working professionals are welcome to participate.
* Want to invite me to visit your company for training, mentoring, project review, workflow design and / or consulting? I'd be happy to drop by and work with you!
* Interested in partnering, supporting my graduate student research or my Subsurface Data Analytics and Machine Learning consortium (co-PIs including Profs. Foster, Torres-Verdin and van Oort)? My research combines data analytics, stochastic modeling and machine learning theory with practice to develop novel methods and workflows to add value. We are solving challenging subsurface problems!
* I can be reached at mpyrcz@austin.utexas.edu.
I'm always happy to discuss,
*Michael*
Michael Pyrcz, Ph.D., P.Eng. Associate Professor The Hildebrand Department of Petroleum and Geosystems Engineering, Bureau of Economic Geology, The Jackson School of Geosciences, The University of Texas at Austin
#### More Resources Available at: [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
# Linear programming
### Announcements
+ Survey
+ Class of February 18 (Tuesday, Feb 19, 9-11, room to be defined)
+ Exam 1 (February 28)
+ Project (March 7)
> Linear programming is the field of mathematical optimization devoted to maximizing or minimizing (optimizing) linear functions, called the objective function, such that the variables of that function are subject to a set of constraints expressed as a system of linear equations or inequalities.
**References:**
- https://es.wikipedia.org/wiki/Programaci%C3%B3n_lineal
- https://docs.scipy.org/doc/scipy-0.18.1/reference/optimize.html
- http://bdigital.unal.edu.co/5037/4/guillermojimenezlozano.2006_Parte1.pdf
## 1. Historical notes
- 1826: Joseph Fourier anticipates linear programming. Carl Friedrich Gauss solves linear equations by "Gaussian" elimination.
- 1902: Gyula Farkas devises a method for solving systems of inequalities.
- It is not until World War II that linear programming is formulated as a mathematical model to plan expenses and returns, so as to reduce war costs and increase the enemy's losses. It was kept secret until 1947 (postwar).
- 1947: George Dantzig publishes the simplex algorithm and John von Neumann develops duality theory. Leonid Kantorovich is known to have formulated the theory independently as well.
- It was used by many industries for day-to-day planning.
**Up to this point, exponential solution times. What follows, polynomial time.**
- 1979: Leonid Khachiyan designed the so-called ellipsoid algorithm, with which he showed that the linear programming problem can be solved efficiently, that is, in polynomial time.
- 1984: Narendra Karmarkar introduces the interior-point method for solving linear programming problems.
## 2. Motivation
In the previous class we already mentioned that, when optimizing a function of several variables with constraints, the method of Lagrange multipliers can always be applied. However, this method becomes computationally very expensive as the number of variables grows.
Therefore, when the function to optimize and the constraints are linear, the solution methods that can be developed are computationally efficient, so it is useful to make the distinction.
## 3. Linear programming problems
### 3.1. Basic example
A multinational pharmaceutical company wants to make a nutritional compound based on two products A and B. Product A contains $30\%$ protein, $1\%$ fat and $10\%$ sugars. Product B contains $5\%$ protein, $7\%$ fat and $10\%$ sugars.
The compound must contain at least $25g$ of protein, $6g$ of fat and $30g$ of sugars. Product A costs $0.6$ m.u./g and product B costs $0.2$ m.u./g.
We want to find the amount in grams of each product so that the total cost is minimal.
Formulate the problem of deciding how much of each product to make as a linear programming problem.
#### Solution
Let:
- $x_A$: the number of grams of A to be produced, and
- $x_B$: the number of grams of B to be produced.
Note that what we want is to minimize $0.6x_A+0.2x_B$.
Constraints:
1. The compound must contain **at least** $25 g$ of protein: $30\%x_A+5\%x_B\geq 25 \Rightarrow 0.3x_A+0.05x_B\geq 25$.
2. The compound must contain **at least** $6 g$ of fat: $1\%x_A+7\%x_B\geq 6 \Rightarrow 0.01x_A+0.07x_B\geq 6$.
3. The compound must contain **at least** $30 g$ of sugars: $10\%x_A+10\%x_B\geq 30 \Rightarrow 0.1x_A+0.1x_B\geq 30$.
Finally, the problem can be expressed in the form explained above as:
\begin{equation}
\begin{array}{ll}
\min_{x_A,x_B} & 0.6x_A+0.2x_B \\
\text{s.t. } & -0.3x_A-0.05x_B\leq -25 \\
& -0.01x_A-0.07x_B\leq -6 \\
& -0.1x_A-0.1x_B\leq -30,
\end{array}
\end{equation}
or, equivalently,
\begin{equation}
\begin{array}{ll}
\min_{\boldsymbol{x}} & \boldsymbol{c}^\top\boldsymbol{x} \\
\text{s.t. } & \boldsymbol{A}_{eq}\boldsymbol{x}=\boldsymbol{b}_{eq} \\
& \boldsymbol{A}\boldsymbol{x}\leq\boldsymbol{b},
\end{array}
\end{equation}
with
- $\boldsymbol{c}=\left[0.6 \quad 0.2\right]^\top$,
- $\boldsymbol{A}=\left[\begin{array}{cc}-0.3 & -0.05 \\ -0.01 & -0.07\\ -0.1 & -0.1\end{array}\right]$, and
- $\boldsymbol{b}=\left[-25 \quad -6\quad -30\right]^\top$.
From here on, we will prefer the vector/matrix notation.
### 3.2. Basic example 2
A factory that builds car and truck bodies has two workshops.
+ In workshop A, building a truck body takes seven operator-days, while a car body requires two operator-days.
+ In workshop B, three operator-days are spent on either a truck or a car body.
Due to labor and machinery limitations, workshop A has $300$ operator-days available, and workshop B has $270$ operator-days.
If the profit obtained per truck is $600$ m.u. and per car $200$ m.u., how many units of each should be produced to maximize profit? (One possible formulation is sketched below.)
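One possible formulation (a sketch, with $x_1$ the number of truck bodies and $x_2$ the number of car bodies, both non-negative) is

\begin{equation}
\begin{array}{ll}
\max_{x_1,x_2} & 600x_1+200x_2 \\
\text{s.t. } & 7x_1+2x_2\leq 300 \\
& 3x_1+3x_2\leq 270 \\
& x_1,x_2\geq 0,
\end{array}
\end{equation}

which, by the note below on $\max$ vs. $\min$, is the same as minimizing $-600x_1-200x_2$ subject to the same constraints.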
### 3.3. In general
According to what was described above, a linear programming problem can be written in the following form:
\begin{equation}
\begin{array}{ll}
\min_{x_1,\dots,x_n} & c_1x_1+\dots+c_nx_n \\
\text{s.t. } & a^{eq}_{j,1}x_1+\dots+a^{eq}_{j,n}x_n=b^{eq}_j \text{ for } 1\leq j\leq m_1 \\
& a_{k,1}x_1+\dots+a_{k,n}x_n\leq b_k \text{ for } 1\leq k\leq m_2,
\end{array}
\end{equation}
where:
- $x_i$ for $i=1,\dots,n$ are the unknowns or decision variables,
- $c_i$ for $i=1,\dots,n$ are the coefficients of the function to be optimized,
- $a^{eq}_{j,i}$ for $j=1,\dots,m_1$ and $i=1,\dots,n$ are the coefficients of the equality constraints,
- $a_{k,i}$ for $k=1,\dots,m_2$ and $i=1,\dots,n$ are the coefficients of the inequality constraints,
- $b^{eq}_j$ for $j=1,\dots,m_1$ are known values that must be met exactly, and
- $b_k$ for $k=1,\dots,m_2$ are known values that must not be exceeded.
Equivalently, the problem can be written as
\begin{equation}
\begin{array}{ll}
\min_{\boldsymbol{x}} & \boldsymbol{c}^\top\boldsymbol{x} \\
\text{s.t. } & \boldsymbol{A}_{eq}\boldsymbol{x}=\boldsymbol{b}_{eq} \\
& \boldsymbol{A}\boldsymbol{x}\leq\boldsymbol{b},
\end{array}
\end{equation}
where:
- $\boldsymbol{x}=\left[x_1\quad\dots\quad x_n\right]^\top$,
- $\boldsymbol{c}=\left[c_1\quad\dots\quad c_n\right]^\top$,
- $\boldsymbol{A}_{eq}=\left[\begin{array}{ccc}a^{eq}_{1,1} & \dots & a^{eq}_{1,n}\\ \vdots & \ddots & \vdots\\ a^{eq}_{m_1,1} & \dots & a^{eq}_{m_1,n}\end{array}\right]$,
- $\boldsymbol{A}=\left[\begin{array}{ccc}a_{1,1} & \dots & a_{1,n}\\ \vdots & \ddots & \vdots\\ a_{m_2,1} & \dots & a_{m_2,n}\end{array}\right]$,
- $\boldsymbol{b}_{eq}=\left[b^{eq}_1\quad\dots\quad b^{eq}_{m_1}\right]^\top$, and
- $\boldsymbol{b}=\left[b_1\quad\dots\quad b_{m_2}\right]^\top$.
**Note:** the problem $\max_{\boldsymbol{x}}\boldsymbol{g}(\boldsymbol{x})$ is equivalent to $\min_{\boldsymbol{x}}-\boldsymbol{g}(\boldsymbol{x})$.
#### Alright, once the problem is formulated, how do we solve it?
This problem is simple since it involves only two variables, so the graphical solution is valid.
```python
# Import numpy and matplotlib.pyplot
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
```python
# Define the constraint and cost functions
def res1(xA):
return (25-0.3*xA)/0.05
def res2(xA):
return (6-0.01*xA)/0.07
def res3(xA):
return (30-0.1*xA)/0.1
# Define the cost (objective) function
def z(xA,xB):
return 0.6*xA + 0.2*xB
```
```python
# Evaluate the constraint functions
xA = np.linspace(0,400,200)
r1 = res1(xA)
r2 = res2(xA)
r3 = res3(xA)
```
```python
# Plot the constraints
plt.figure(figsize = (8,6))
plt.plot(xA,r1,'b--',label='res1')
plt.plot(xA,r2,'r-.',label='res2')
plt.plot(xA,r3,'g:',label='res3')
plt.legend(loc = 'best')
plt.xlabel('$x_A$')
plt.ylabel('$x_B$')
plt.axis([0,400,0,400])
plt.show()
```
```python
# Evaluate the cost function at two candidate corner points and compare
z(40,260), z(250,60)
```
(76.0, 162.0)
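The two points evaluated above appear to be read approximately off the plot. As a sketch (not part of the original notebook), the corner candidates can also be computed exactly by intersecting each pair of constraint lines and keeping the feasible ones:

```python
# Sketch: exact corner points of the feasible region from pairwise intersections
# of the constraint lines 0.3xA + 0.05xB = 25, 0.01xA + 0.07xB = 6, 0.1xA + 0.1xB = 30.
A_lines = np.array([[0.3, 0.05],
                    [0.01, 0.07],
                    [0.1, 0.1]])
b_lines = np.array([25.0, 6.0, 30.0])

for i, j in [(0, 2), (1, 2), (0, 1)]:
    vertex = np.linalg.solve(A_lines[[i, j]], b_lines[[i, j]])
    feasible = np.all(A_lines @ vertex >= b_lines - 1e-9)   # all ">=" constraints satisfied
    print(f"res{i+1} & res{j+1}: {vertex.round(2)}, feasible: {feasible}, cost: {z(*vertex):.2f}")
```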
```python
# Plot again, including the solution
```
**Activity.** Mónica makes earrings and small jewelry chains. She is so good that she sells everything she makes.
It takes her 30 minutes to make a pair of earrings and one hour to make a chain, and since Mónica is also a starving student, she only has 10 hours per week to make jewelry. Moreover, the material she buys is only enough to make 15 units (a pair of earrings counts as one unit) of jewelry per week.
The profit that selling the jewelry leaves her is $\$15$ per pair of earrings and $\$20$ per chain.
How many pairs of earrings and how many chains should Mónica make to maximize her profit?
Formulate the problem in the form explained above and obtain the graphical solution (it can be done by hand).
**Ten minutes: whoever does it first will explain it at the board and I will raise one of their homework grades to 100. They must come up and explain the problem.**
## 4. How is it solved in Python?
### 4.1 The `SciPy` library
`SciPy` is open-source, `Python`-based software for mathematics, science and engineering.
In particular, the following are some of its basic packages:
- `NumPy`
- `SymPy`
- `matplotlib`
- **The `SciPy` library**
- `pandas`
The **`SciPy` library** is one of the core packages and provides several efficient numerical routines, among them routines for numerical integration and optimization.
In this class, and for the rest of the module, we will be using the `optimize` module of the `SciPy` library.
**Let's import it**
```python
# Import the optimize module from the scipy library
import scipy.optimize as opt
```
The `optimize` module we just imported contains several functions for optimization and root finding ($f(x)=0$). Among them is the `linprog` function
```python
# The linprog function from the optimize module
help(opt.linprog)
```
Help on function linprog in module scipy.optimize._linprog:
linprog(c, A_ub=None, b_ub=None, A_eq=None, b_eq=None, bounds=None, method='simplex', callback=None, options=None)
Minimize a linear objective function subject to linear
equality and inequality constraints.
Linear Programming is intended to solve the following problem form::
Minimize: c^T * x
Subject to: A_ub * x <= b_ub
A_eq * x == b_eq
Parameters
----------
c : array_like
Coefficients of the linear objective function to be minimized.
A_ub : array_like, optional
2-D array which, when matrix-multiplied by ``x``, gives the values of
the upper-bound inequality constraints at ``x``.
b_ub : array_like, optional
1-D array of values representing the upper-bound of each inequality
constraint (row) in ``A_ub``.
A_eq : array_like, optional
2-D array which, when matrix-multiplied by ``x``, gives the values of
the equality constraints at ``x``.
b_eq : array_like, optional
1-D array of values representing the RHS of each equality constraint
(row) in ``A_eq``.
bounds : sequence, optional
``(min, max)`` pairs for each element in ``x``, defining
the bounds on that parameter. Use None for one of ``min`` or
``max`` when there is no bound in that direction. By default
bounds are ``(0, None)`` (non-negative)
If a sequence containing a single tuple is provided, then ``min`` and
``max`` will be applied to all variables in the problem.
method : str, optional
Type of solver. :ref:`'simplex' <optimize.linprog-simplex>`
and :ref:`'interior-point' <optimize.linprog-interior-point>`
are supported.
callback : callable, optional (simplex only)
If a callback function is provide, it will be called within each
iteration of the simplex algorithm. The callback must have the
signature ``callback(xk, **kwargs)`` where ``xk`` is the current
solution vector and ``kwargs`` is a dictionary containing the
following::
"tableau" : The current Simplex algorithm tableau
"nit" : The current iteration.
"pivot" : The pivot (row, column) used for the next iteration.
"phase" : Whether the algorithm is in Phase 1 or Phase 2.
"basis" : The indices of the columns of the basic variables.
options : dict, optional
A dictionary of solver options. All methods accept the following
generic options:
maxiter : int
Maximum number of iterations to perform.
disp : bool
Set to True to print convergence messages.
For method-specific options, see :func:`show_options('linprog')`.
Returns
-------
A `scipy.optimize.OptimizeResult` consisting of the following fields:
x : ndarray
The independent variable vector which optimizes the linear
programming problem.
fun : float
Value of the objective function.
slack : ndarray
The values of the slack variables. Each slack variable corresponds
to an inequality constraint. If the slack is zero, then the
corresponding constraint is active.
success : bool
Returns True if the algorithm succeeded in finding an optimal
solution.
status : int
An integer representing the exit status of the optimization::
0 : Optimization terminated successfully
1 : Iteration limit reached
2 : Problem appears to be infeasible
3 : Problem appears to be unbounded
nit : int
The number of iterations performed.
message : str
A string descriptor of the exit status of the optimization.
See Also
--------
show_options : Additional options accepted by the solvers
Notes
-----
This section describes the available solvers that can be selected by the
'method' parameter. The default method
is :ref:`Simplex <optimize.linprog-simplex>`.
:ref:`Interior point <optimize.linprog-interior-point>` is also available.
Method *simplex* uses the simplex algorithm (as it relates to linear
programming, NOT the Nelder-Mead simplex) [1]_, [2]_. This algorithm
should be reasonably reliable and fast for small problems.
.. versionadded:: 0.15.0
Method *interior-point* uses the primal-dual path following algorithm
as outlined in [4]_. This algorithm is intended to provide a faster
and more reliable alternative to *simplex*, especially for large,
sparse problems. Note, however, that the solution returned may be slightly
less accurate than that of the simplex method and may not correspond with a
vertex of the polytope defined by the constraints.
References
----------
.. [1] Dantzig, George B., Linear programming and extensions. Rand
Corporation Research Study Princeton Univ. Press, Princeton, NJ,
1963
.. [2] Hillier, S.H. and Lieberman, G.J. (1995), "Introduction to
Mathematical Programming", McGraw-Hill, Chapter 4.
.. [3] Bland, Robert G. New finite pivoting rules for the simplex method.
Mathematics of Operations Research (2), 1977: pp. 103-107.
.. [4] Andersen, Erling D., and Knud D. Andersen. "The MOSEK interior point
optimizer for linear programming: an implementation of the
homogeneous algorithm." High performance optimization. Springer US,
2000. 197-232.
.. [5] Andersen, Erling D. "Finding all linearly dependent rows in
large-scale linear programming." Optimization Methods and Software
6.3 (1995): 219-227.
.. [6] Freund, Robert M. "Primal-Dual Interior-Point Methods for Linear
Programming based on Newton's Method." Unpublished Course Notes,
March 2004. Available 2/25/2017 at
https://ocw.mit.edu/courses/sloan-school-of-management/15-084j-nonlinear-programming-spring-2004/lecture-notes/lec14_int_pt_mthd.pdf
.. [7] Fourer, Robert. "Solving Linear Programs by Interior-Point Methods."
Unpublished Course Notes, August 26, 2005. Available 2/25/2017 at
http://www.4er.org/CourseNotes/Book%20B/B-III.pdf
.. [8] Andersen, Erling D., and Knud D. Andersen. "Presolving in linear
programming." Mathematical Programming 71.2 (1995): 221-245.
.. [9] Bertsimas, Dimitris, and J. Tsitsiklis. "Introduction to linear
programming." Athena Scientific 1 (1997): 997.
.. [10] Andersen, Erling D., et al. Implementation of interior point
methods for large scale linear programming. HEC/Universite de
Geneve, 1996.
Examples
--------
Consider the following problem:
Minimize: f = -1*x[0] + 4*x[1]
Subject to: -3*x[0] + 1*x[1] <= 6
1*x[0] + 2*x[1] <= 4
x[1] >= -3
where: -inf <= x[0] <= inf
This problem deviates from the standard linear programming problem.
In standard form, linear programming problems assume the variables x are
non-negative. Since the variables don't have standard bounds where
0 <= x <= inf, the bounds of the variables must be explicitly set.
There are two upper-bound constraints, which can be expressed as
dot(A_ub, x) <= b_ub
The input for this problem is as follows:
>>> c = [-1, 4]
>>> A = [[-3, 1], [1, 2]]
>>> b = [6, 4]
>>> x0_bounds = (None, None)
>>> x1_bounds = (-3, None)
>>> from scipy.optimize import linprog
>>> res = linprog(c, A_ub=A, b_ub=b, bounds=(x0_bounds, x1_bounds),
... options={"disp": True})
Optimization terminated successfully.
Current function value: -22.000000
Iterations: 1
>>> print(res)
fun: -22.0
message: 'Optimization terminated successfully.'
nit: 1
slack: array([39., 0.])
status: 0
success: True
x: array([10., -3.])
Note that the reported optimum is -22.0: we minimized f = -1*x[0] + 4*x[1] directly, so no sign
change is needed. (If the original problem had been a maximization, we would minimize the negated objective and then negate `res.fun` to recover the maximum.)
which solves problems like the ones we have learned to formulate.
Important parameters:
+ c: Vector with the coefficients of the linear cost (objective) function to minimize.
+ A_ub: Matrix with the coefficients of $x$ for the inequality constraints.
+ b_ub: Vector with the right-hand-side values of each inequality constraint.
+ A_eq: Matrix with the coefficients of $x$ for the equality constraints.
+ b_eq: Vector with the right-hand-side values of each equality constraint.
+ bounds: (min, max) pairs for each element of $x$, defining the corresponding lower and upper bounds. The default is $(0, None)$, i.e. non-negative. A minimal call template is sketched right after this list.
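A minimal call template might look like the following (a sketch with a made-up toy problem, not the class example); it only assumes that NumPy and `scipy.optimize` are available:
```python
import numpy as np
import scipy.optimize as opt

# Toy problem: minimize x1 + 2*x2 subject to x1 + x2 >= 1 and x >= 0
c = np.array([1.0, 2.0])
A_ub = np.array([[-1.0, -1.0]])   # >= constraints are rewritten as <= by flipping signs
b_ub = np.array([-1.0])

res = opt.linprog(c, A_ub=A_ub, b_ub=b_ub)   # bounds default to (0, None) for each variable
print(res.x, res.fun)
```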
### 4.2 Solving the basic example with linprog
We already obtained the graphical solution. Let us contrast it with the solution given by `linprog`...
- $\boldsymbol{c}=\left[0.6 \quad 0.2\right]^\top$,
- $\boldsymbol{A}=\left[\begin{array}{cc}-0.3 & -0.05 \\ -0.01 & -0.07\\ -0.1 & -0.1\end{array}\right]$, y
- $\boldsymbol{b}=\left[-25 \quad -6\quad -30\right]^\top$.
```python
# Import numpy to create the matrices
import numpy as np
```
```python
# Create the matrices to solve the problem
c = np.array([0.6, 0.2])
A = np.array([[-0.3,-0.05],
[-0.01,-0.07],
[-0.1,-0.1]])
b = np.array([-25,-6,-30])
```
```python
```
```python
# Solve using linprog
import scipy.optimize as opt   # added so this cell also runs standalone; opt may already be imported earlier in the notebook
resultado = opt.linprog(c, A_ub=A,b_ub=b)
```
```python
# Show the result
resultado
```
fun: 75.99999999999987
message: 'Optimization terminated successfully.'
nit: 5
slack: array([ 0. , 12.6, 0. ])
status: 0
success: True
x: array([ 40., 260.])
```python
# Extract the solution vector
xs = resultado.x
xs[0]
```
39.99999999999983
**Conclusion**
- To minimize the cost of the nutritional compound based on products $A$ and $B$, one should produce $40$ grams of $A$ and $260$ grams of $B$.
- With that production, the total cost of the compound is $76$ monetary units.
**Activity.** Solve Mónica's sales example with `linprog`
```python
# Define the matrices
c = np.array([-15, -20])
A = np.array([[0.5,1],
[1,1]])
b = np.array([10,15])
```
```python
# Solve with linprog
resultado_Monica = opt.linprog(c, A_ub=A,b_ub=b)
```
```python
# Show the solution
resultado_Monica
```
fun: -250.0
message: 'Optimization terminated successfully.'
nit: 2
slack: array([0., 0.])
status: 0
success: True
x: array([10., 5.])
## 5. Transportation problem 1
- **Reference**: http://bdigital.unal.edu.co/5037/4/guillermojimenezlozano.2006_Parte1.pdf
A company has two factories, A and B, which manufacture a given product at a rate of 500 and 400 units per day, respectively. The product must then be distributed to three centers C, D and E, which require 200, 300 and 400 units, respectively. The costs of transporting one unit of the product from each factory to each distribution center are given in the following table:
Factory|C|D|E|Production (units)
:----|----|----|----|----
A| 50 m.u.|60 m.u.|10 m.u.|500 u
B| 25 m.u.|40 m.u.|20 m.u.|400 u
Demand|200|300|400|
**How should the transport be organized so that the total cost is as low as possible?**
Let us formulate the problem to be solved by linear programming with
- $x_1$: units transported from factory "A" to center "C"
- $x_2$: units transported from factory "A" to center "D"
- $x_3$: units transported from factory "A" to center "E"
- $x_4$: units transported from factory "B" to center "C"
- $x_5$: units transported from factory "B" to center "D"
- $x_6$: units transported from factory "B" to center "E"
We then have the following equations:
Production constraints:
- $x_1 + x_2 + x_3 \leq 500$
- $x_4 + x_5 + x_6 \leq 400$
Demand constraints:
- $x_1 + x_4 \geq 200$
- $x_2 + x_5 \geq 300$
- $x_3 + x_6 \geq 400$
The objective function is:
$$\min_{x_1,\dots,x_6}50x_1 + 60x_2 + 10x_3 + 25x_4 + 40x_5 + 20x_6$$
Solve with `linprog`
```python
# Matrices and bounds
c = np.array([50,60,10,25,40,20])
A = np.array([[1,1,1,0,0,0],
[0,0,0,1,1,1],
[-1,0,0,-1,0,0],
[0,-1,0,0,-1,0],
[0,0,-1,0,0,-1]])
b = np.array([500,400,-200,-300,-400])
```
```python
# Solve
resultado_transporte = opt.linprog(c, A_ub=A,b_ub=b)
```
```python
# Show the result
resultado_transporte
```
fun: 23000.0
message: 'Optimization terminated successfully.'
nit: 6
slack: array([-0., 0., 0., 0., 0.])
status: 0
success: True
x: array([ 0., 100., 400., 200., 200., 0.])
**Conclusion**
- The least-cost strategy is to ship $100$ units from factory "A" to center "D", $400$ units from factory "A" to center "E", $200$ units from factory "B" to center "C" and $200$ units from factory "B" to center "D". The total cost of this transport strategy is $23000$ m.u. (a quick cost check is sketched below).
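As a quick sanity check (a small sketch reusing the arrays `c` and `resultado_transporte` from the cells above), the reported cost can be recomputed directly from the solution vector:
```python
# Recompute the total transport cost from the optimal solution; should match resultado_transporte.fun
print(c @ resultado_transporte.x)
```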
## 6. Bond investment optimization
**Reference:**
```python
from IPython.display import YouTubeVideo
YouTubeVideo('gukxBus8lOs')
```
The objective of this problem is to determine the best investment strategy, given different bond types, the maximum amount that can be invested in each bond, the percentage return, and the years to maturity. There is also a fixed amount of money available ($\$750,000$). At least half of this money must be invested in bonds with 10 or more years to maturity. At most $25\%$ of this amount can be invested in each individual bond. Finally, there is another restriction that does not allow more than $35\%$ to be placed in high-risk bonds.
There are six (6) investment options, labelled $A_i$:
1. $A_1$: (Return rate = $8.65\%$; Years to maturity = 11; Risk = Low)
1. $A_2$: (Return rate = $9.50\%$; Years to maturity = 10; Risk = High)
1. $A_3$: (Return rate = $10.00\%$; Years to maturity = 6; Risk = High)
1. $A_4$: (Return rate = $8.75\%$; Years to maturity = 10; Risk = Low)
1. $A_5$: (Return rate = $9.25\%$; Years to maturity = 7; Risk = High)
1. $A_6$: (Return rate = $9.00\%$; Years to maturity = 13; Risk = Low)
What we want, then, is to maximize the return produced by the investment.
This problem can be solved with linear programming. Formally, it can be described as:
$$\max_{A_1,A_2,...,A_6}\sum^{6}_{i=1} A_iR_i,$$
where $A_i$ represents the amount invested in each option and $R_i$ represents the corresponding return rate.
Set up the constraints (they are written out explicitly just below for reference)...
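For reference, the constraints encoded in the matrices of the next cell are the following (bonds $A_1$, $A_2$, $A_4$ and $A_6$ have 10 or more years to maturity, while $A_2$, $A_3$ and $A_5$ are the high-risk options):
$$
\begin{align}
A_1 + A_2 + A_3 + A_4 + A_5 + A_6 &= 750,000 && \text{(all the money is invested)}\\
A_1 + A_2 + A_4 + A_6 &\geq \frac{750,000}{2} && \text{(at least half in bonds with 10+ years to maturity)}\\
A_2 + A_3 + A_5 &\leq 0.35 \times 750,000 && \text{(at most 35\% in high-risk bonds)}\\
0 \leq A_i &\leq 0.25 \times 750,000 && \text{(at most 25\% in each bond)}
\end{align}
$$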
```python
# Matrices and bounds
c = -np.array([8.65, 9.5,10, 8.75, 9.25, 9])/100
A_ub = np.array([[-1,-1,0,-1,0,-1],
[0,1,1,0,1,0]])
b_ub = np.array([-750000/2,0.35*750000])
A_eq = np.array([[1,1,1,1,1,1]])
b_eq = np.array([750000])
Ai_bounds = (0,750000/4)
```
```python
# Solve
resultado_bonos = opt.linprog(c,A_ub,b_ub,A_eq,b_eq,bounds=(Ai_bounds,Ai_bounds,Ai_bounds,Ai_bounds,Ai_bounds,Ai_bounds,))
```
```python
# Show the result
resultado_bonos
```
fun: -68887.5
message: 'Optimization terminated successfully.'
nit: 9
slack: array([187500., 0., 75000., 0., 0., 187500., 0.,
0.])
status: 0
success: True
x: array([112500., 75000., 187500., 187500., 0., 187500.])
Recall that in this problem we minimized $-\sum^{6}_{i=1} A_iR_i$. The return obtained is therefore:
```python
# The return obtained is the negative of the minimized objective
-resultado_bonos.fun
```
**Conclusion**
- The optimal strategy invests $\$112,500$ in $A_1$, $\$75,000$ in $A_2$, $\$187,500$ in $A_3$, $\$187,500$ in $A_4$, nothing in $A_5$, and $\$187,500$ in $A_6$; the resulting return is $\$68,887.50$.
## 7. Homework
### 7.1. Optimal Diet Design
We want to produce cat food as cheaply as possible while still making sure that the required nutritional analysis targets are met. The amount of each ingredient must therefore be varied so as to comply with the nutritional standards. The requirements are that, per $100$ grams, the food must contain **at least** $8$ grams of protein and $6$ grams of fat. Likewise, it must contain no more than $2$ grams of fiber and $0.4$ grams of salt.
The nutritional data can be obtained from the following table:
Ingredient|Protein|Fat|Fiber|Salt
:----|----|----|----|----
Chicken| 10.0%|08.0%|00.1%|00.2%
Beef| 20.0%|10.0%|00.5%|00.5%
Lamb|15.0%|11.0%|00.5%|00.7%
Rice| 00.0%|01.0%|10.0%|00.2%
Wheat| 04.0%|01.0%|15.0%|00.8%
Gel| 00.0%|00.0%|00.0%|00.0%
The costs of the products are:
Ingredient|Cost per gram
:----|----
Chicken|$\$$0.013
Beef|$\$$0.008
Lamb|$\$$0.010
Rice|$\$$0.002
Wheat|$\$$0.005
Gel|$\$$0.001
What we want to optimize in this case is the amount of each product to be used in the cat food while minimizing the total cost. To simplify the notation, use the following variables:
+ $x_1:$ grams of chicken
+ $x_2:$ grams of beef
+ $x_3:$ grams of lamb
+ $x_4:$ grams of rice
+ $x_5:$ grams of wheat
+ $x_6:$ grams of gel
The homework consists of formulating the linear programming problem that satisfies the cat's nutritional needs while minimizing the total cost, and solving it with `linprog`.
### 7.2. Another distribution problem (electric power)
The Federal Electricity Commission **(CFE)** has three generating plants available to satisfy the daily electricity demand of three cities: Guadalajara, León and Morelia. Plants $1$, $2$ and $3$ can supply $80$, $40$ and $60$ million kW per day, respectively. The needs of Guadalajara, León and Morelia are $70$, $40$ and $70$ million kW per day, respectively.
The costs associated with sending one million kW from each plant to each city are given in the following table.
-|Guadalajara|León|Morelia
:----|----|----|----
Plant 1|5|2|7
Plant 2|3|6|6
Plant 3|6|1|2
Finally, the constraints of the problem are given by the supply and demand capacities of each plant (in millions of kW) and each city.
To simplify the notation, use the following variables:
+ $x_1$: kW (in millions) distributed from Plant 1 to Guadalajara
+ $x_2$: kW (in millions) distributed from Plant 1 to León
+ $x_3$: kW (in millions) distributed from Plant 1 to Morelia
+ $x_4$: kW (in millions) distributed from Plant 2 to Guadalajara
+ $x_5$: kW (in millions) distributed from Plant 2 to León
+ $x_6$: kW (in millions) distributed from Plant 2 to Morelia
+ $x_7$: kW (in millions) distributed from Plant 3 to Guadalajara
+ $x_8$: kW (in millions) distributed from Plant 3 to León
+ $x_9$: kW (in millions) distributed from Plant 3 to Morelia
The homework consists of formulating the linear programming problem that satisfies the needs of all the cities while minimizing the distribution costs, and solving it with `linprog`.
You must create a Jupyter notebook (.ipynb file), name it LastName_FirstName, and upload it to Moodle.
**Due date to be defined**
<footer id="attribution" style="float:right; color:#808080; background:#fff;">
Created with Jupyter by Cristian Camilo Zapata Zuluaga
</footer>
|
3915df958c7879d44bfddc9e8d1f26533bf7a075
| 96,426 |
ipynb
|
Jupyter Notebook
|
Modulo1/Clase5_ProgramacionLineal.ipynb
|
ArellanoMCarlos/SimMat2019-1
|
c84b92a581916572352615806d31961a468da3d9
|
[
"MIT"
] | null | null | null |
Modulo1/Clase5_ProgramacionLineal.ipynb
|
ArellanoMCarlos/SimMat2019-1
|
c84b92a581916572352615806d31961a468da3d9
|
[
"MIT"
] | null | null | null |
Modulo1/Clase5_ProgramacionLineal.ipynb
|
ArellanoMCarlos/SimMat2019-1
|
c84b92a581916572352615806d31961a468da3d9
|
[
"MIT"
] | null | null | null | 83.630529 | 27,729 | 0.776761 | true | 9,280 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.76908 | 0.851953 | 0.65522 |
__label__spa_Latn
| 0.859449 | 0.360627 |
| | Pierre Proulx, ing, professeur|
|:---|:---|
|Département de génie chimique et de génie biotechnologique |** GCH200-Phénomènes d'échanges I **|
### Section 2.2, flow of a film of Newtonian fluid down an inclined plane.
> In this section it is important to review the momentum-flux concepts seen in Chapter 1. Here is a summary of the points that will be used in Section 2.2:
http://pierreproulx.espaceweb.usherbrooke.ca/images/GCH200_Ch1_resume.pdf
### Film of a Newtonian fluid, Section 2.2 of Transport Phenomena
>The derivation done in Transport Phenomena will be repeated here, developing the solutions with the symbolic calculator sympy and plotting the solution with sympy.plot.
>>A Newtonian fluid flows under the effect of gravity down an inclined plane:
>>The balance carried out on the fluid film is sketched as follows:
>Several of the terms shown in the figure above are zero:
$
\begin{equation*}
\boxed{ v_x=0 \quad v_y=0 \quad \tau_{zz}=0 \quad \tau_{yz}=0 }
\end{equation*}
$
>and furthermore, we have that
$
\begin{equation*}
\boxed{ \rho v_z v_z \quad \text{and} \quad p }
\end{equation*}
$
>do not vary with z, which implies that
$
\begin{equation*}
\boxed{\phi_{zz}=\text{constant} }
\end{equation*}
$
>In the treatment done in sympy we will use $\phi_{zz} = C $.
```python
#
# Pierre Proulx
#
# Set up the display and the symbolic computation tools
import sympy as sp                    # to use sympy, give it the alias sp
from IPython.display import *         # to use display, which formats the equations
sp.init_printing(use_latex=True)      # to use LaTeX, high-quality formatting
import matplotlib.pyplot as plt       # to use matplotlib, graphical representation tools
```
### We could also use
> sp.init_printing(use_latex=False)
### The result will be the same, but the equation formatting will be much less elegant.
```python
#
# definition of the symbolic variables
#
x,delta_x,L,W,rho,g,beta,mu,delta=sp.symbols('x,delta_x,L,W,rho,g,beta,mu,delta')
tau_xz = sp.symbols('tau_xz')
phi_xz = sp.symbols('phi_xz')
phi_zz = sp.symbols('phi_zz')
C1,C2 = sp.symbols('C1,C2')
```
#### Set up the balance:
```python
# Force balance
dV = W*L*delta_x
dAx = L*W
dAz = W*delta_x
bilan = dAx*(phi_xz(x)-phi_xz(x+delta_x))+dAz*(phi_zz(0)-phi_zz(L))+dV*rho*g*sp.cos(beta)
bilan = bilan/(L*W*delta_x)
# but phi_zz is the same at z=0 and z=L, so the phi_zz(0) - phi_zz(L) term is 0
bilan = bilan.subs((phi_zz(0)-phi_zz(L)), 0)
display(bilan)
```
> *In the balance obtained above, we take the limit as $\delta x \rightarrow 0$. We do this with **sympy**, but note that these are algebraic manipulations that could easily be done by hand. Follow the development closely by doing the same steps on a sheet of paper beside you. You will obtain the same result, even if your notation is a little different and your manipulations are not exactly the same. The usefulness of the symbolic calculator will become more and more evident as the problems become more complex.*
```python
eq1 = sp.limit(bilan, delta_x, 0)
display(eq1)
```
```python
eq1 = eq1.doit()                 # substitute the variable at the limit
display(eq1)
```
```python
eq1 = eq1.subs(phi_xz(x), tau_xz(x))   # replace phi by tau since convection is zero
display(eq1)
```
Newton's law of viscosity is then inserted to replace $\tau_{xz}$
```python
# Newton's law of viscosity
vz = sp.Function('v_z')(x)
newton = -mu*sp.Derivative(vz,x)
# Substitute it into the balance
eq2 = sp.Eq(eq1.subs(tau_xz(x), newton))
display(eq2)
```
```python
# The solution is obtained with sympy's dsolve function
eq3 = sp.dsolve(eq2, vz)
display(eq3)
```
>> Boundary conditions:
>> at x = $\delta$
>>> ${v_{z}}{\left (\delta \right )} = C_{1} + C_{2} (\delta) - \frac{g \rho (\delta)^{2}}{2 \mu} \cos{\left (\beta \right )}$ $=0$
>> and at x = 0
>>>$\frac {{dv_{z}}}{dx} = C_{2} - 2 \frac{g \rho (0)}{2 \mu} \cos{\left (\beta \right )}$ $=0$
> This is an easy exercise to do by hand; to prepare ourselves for more complex problems to come, let us see how to solve it with **sympy**
```python
# The rhs is the right-hand side of the equation; it is the part we are interested in
eq4=eq3.rhs
display('eq3 and eq4 (which is eq3.rhs)',eq3,eq4)
"""
Set up and solve the 2 boundary-condition equations for C1 and C2.
The general form:
sp.solve([equation1, equation2, ...],('variable1, variable2,...'))
"""
conditions = [ sp.Eq(eq4.diff(x).subs(x, 0), 0),   # equation 4 differentiated, at x=0, equals 0
               sp.Eq(eq4.subs(x, delta), 0) ]      # equation 4 at x=delta equals 0
display('The two boundary conditions',*conditions)
constantes=sp.solve(conditions, ('C1,C2'))         # to find C1 and C2
display('the constants C1 and C2', constantes)
vz=eq4.subs(constantes)
display('The profile, substituting C1 and C2 into eq4, is', vz)
vzp=eq3.subs(constantes)
display('or, in eq3', vzp)
```
```python
# And the velocity profile is obtained.
display(vz.simplify())
#
# We can now plot it, giving well-chosen numerical values to the parameters
#
dico = {'beta': sp.pi/4,
'delta':0.001,
'g': 9.81,
'rho': 1000,
'mu': 0.001}
vzmax = vz.subs(x, 0)
display(9.81*1000*0.001**2*.7/2/0.001)
vzmax = vzmax.subs(dico)
plt.rcParams['figure.figsize'] = 10,8
goptions={'title' : 'Parabolic velocity profile',
'ylabel': 'V/Vmax',
'xlabel': 'r/R'}
sp.plot(vz.subs(dico)/vzmax,(x,0,0.001),**goptions);
```
### From the velocity profile we can calculate:
* The force the fluid exerts on the plate:
$
\begin{equation*}
\boxed{ F = -\mu \bigg [\frac {dv_z}{dx}\bigg ]_{x=\delta} WL}
\end{equation*}
$
* The volumetric flow rate
$
\begin{equation*}
\boxed{ Q =\int_0 ^{W} \int_0 ^{\delta} v_z dx dy }
\end{equation*}
$
* The average velocity
$
\begin{equation*}
\boxed{ v_{z_{moyen}} = \frac {\int_0 ^{W} \int_0 ^{\delta} v_z dx dy }
{\int_0 ^{W} \int_0 ^{\delta} dx dy }}
\end{equation*}
$
### Let us use sympy
```python
force = -mu*vz.diff(x).subs(x, delta)*L*W
display(force)
```
>>> What is the weight of the liquid film?
```python
debit = sp.integrate(vz, (x, 0, delta))*W
display(debit)
```
```python
vzmoyen=debit/(delta*W)
display(vzmoyen)
```
```python
vzmax=vz.subs(x, 0)
display(vzmax)
```
```python
display(vzmoyen/vzmax)
```
|
b56fe9a98e0a1994bade82c8025563b8440d2fbd
| 75,446 |
ipynb
|
Jupyter Notebook
|
Chap-2-Section-2-2.ipynb
|
Spationaute/GCH200
|
55144f5b2a59a7240d36c985997387f5036149f7
|
[
"MIT"
] | null | null | null |
Chap-2-Section-2-2.ipynb
|
Spationaute/GCH200
|
55144f5b2a59a7240d36c985997387f5036149f7
|
[
"MIT"
] | null | null | null |
Chap-2-Section-2-2.ipynb
|
Spationaute/GCH200
|
55144f5b2a59a7240d36c985997387f5036149f7
|
[
"MIT"
] | null | null | null | 97.475452 | 21,380 | 0.825199 | true | 2,229 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.810479 | 0.798187 | 0.646914 |
__label__fra_Latn
| 0.888736 | 0.341328 |
[Principal component analysis (PCA)](https://en.wikipedia.org/wiki/Principal_component_analysis) is one of the most used techniques for exploratory data analysis and preprocessing.
There are different formulations of PCA. A fundamental concept that occurs in several formulations are covariance matrices. In this lab, we thus first take a look at them. Then we investigate different ways to compute the principal component (PC) directions and scores.
```python
# imports
import scipy as sp
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# for interactive figures, requires installation of ipympl
#%matplotlib ipympl
# default
%matplotlib inline
```
## Check package versions
```python
print('Package versions used: ')
import matplotlib as mpl
import sklearn
print('* Scipy: ', sp.__version__)
print('* Numpy: ', np.__version__)
print('* Matplotlib: ', mpl.__version__)
print('* Seaborn', sns.__version__)
print('* Sklearn', sklearn.__version__)
```
Package versions used:
* Scipy: 1.5.2
* Numpy: 1.17.4
* Matplotlib: 3.1.2
* Seaborn 0.11.1
* Sklearn 0.23.2
DICE packages:
* Scipy: 1.3.3
* Numpy: 1.17.4
* Matplotlib: 3.1.2
* Seaborn 0.10.0
* Sklearn 0.22.2.post1
Also tested with:
* Scipy: 1.5.2
* Numpy: 1.19.2
* Matplotlib: 3.3.2
* Seaborn 0.11.1
* Sklearn 0.23.2
# Covariance structure of data
## Generating covariance matrices
Covariance matrices are by definition [symmetric positive semi-definite](https://en.wikipedia.org/wiki/Positive-definite_matrix#Positive-semidefinite). One property of positive semi-definite matrices is that their eigenvalues are all non-negative.
The function `generate_spsd_matrix()` generates a random symmetric positive semi-definite matrix. The `random_seed` parameter can be used to set the numpy random seed in order to ensure reproducible results. Appendix A.9 "Positive Semi-definite and Definite Matrices" in the lecture notes explains why this function produces positive semi-definite matrices.
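As a one-line sketch of that argument: for any real matrix $A$ and any vector $\mathbf{v}$,
$$
\mathbf{v}^\top (A^\top A)\, \mathbf{v} = (A\mathbf{v})^\top (A\mathbf{v}) = \|A\mathbf{v}\|^2 \geq 0,
$$
and $A^\top A$ is symmetric, so it is positive semi-definite by definition.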
```python
def generate_spsd_matrix(d, random_seed=None):
"""
Reproducible random generation of symmetric
positive semi-definite matrices.
Parameters
----------
d : integer
Number of dimensions.
random_seed : integer
Random seed number.
Returns
----------
A : ndarray, shape (n,n)
Symmetric positive definite matrix.
"""
if random_seed is not None:
np.random.seed(random_seed)
A = np.random.randn(d,d)
return np.dot(A.T, A)
```
We also provide you with the following function that checks whether a given matrix is positive semi-definite. It uses `numpy.linalg.eigh()` function to compute the eigenvalue decomposition. Check out its [documentation](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html) to learn about its usage.
```python
def is_positive_semi_definite(a):
"""
Tests whether a matrix is symmetric positive
semi-definite.
Parameters
----------
a : ndarray, shape (n,n)
Symmetric matrix.
Returns
----------
True if matrix is positive semi-definite.
Raises
----------
ValueError
If the provided matrix is not real or symmetric.
"""
a = np.asarray(a)
# Check that matrix is real
if not np.all(np.isreal(a)):
raise ValueError("The provided matrix is \
not real.")
# Check that matrix is symmetric
is_symmetric = np.array_equal(a, a.T)
if is_symmetric is not True:
raise ValueError("The provided matrix is \
not symmetric.")
# Eigenvalue decomposition
eigval, _ = np.linalg.eigh(a)
if np.all(eigval >= 0):
return True
else:
return False
```
Let us check whether the functions do what we want them to do:
```python
# Example with a pos-def matrix
a = generate_spsd_matrix(d=10, random_seed=10)
if is_positive_semi_definite(a):
print("Matrix is positive semi-definite.")
else:
print("Matrix is not positive semi-definite.")
```
Matrix is positive semi-definite.
```python
# Example with a random symmetric matrix
b = np.random.standard_normal((5,5))
b = b + b.T # to make the matrix symmetric
if is_positive_semi_definite(b):
print("Matrix is positive semi-definite.")
else:
print("Matrix is not positive semi-definite.")
```
Matrix is not positive semi-definite.
## Question 1: Generating data with a given covariance matrix---the math
Suppose that you want to sample (i.e. generate) some data points with a specific mean vector and covariance matrix. We will assume that we have access to a method that generates random variables with zero mean and unit variance, e.g. `np.random.standard_normal` for Gaussian data.
We know that if a random variable variable $x$ has zero mean and unit variance, then $y = \sigma x + m$ has mean $m$ and variance $\sigma^2$ . Therefore, we can use this property to sample from an arbitrary univariate distribution with mean $m$ and variance $\sigma^2$ if we can sample from its "standardised" version of mean zero and variance 1.
By repeating the process described above $d$ times, we can sample from a $d$-dimensional distribution with an arbitrary mean vector and diagonal covariance matrix. But, how can we sample from a multivariate distribution with given covariance matrix $\mathbf{C}$? The answer is through decomposing the covariance matrix via its [eigen (or spectral) decomposition](https://en.wikipedia.org/wiki/Eigendecomposition_of_a_matrix). For a real symmetric matrix $\mathbf{C}$ (such as covariance matrices), the decomposition takes the following form
$$\mathbf{C} = \mathbf{U} \mathbf{\Lambda} \mathbf{U}^T$$
where $\mathbf{U}$ is an orthogonal matrix containing the eigenvectors of $\mathbf{C}$ and $\mathbf{\Lambda}$ is a diagonal matrix whose entries are the non-negative eigenvalues of $\mathbf{C}$. Now, you might wonder how this decomposition can help us sample from a multivariate distribution.
Assume that $\mathbf{x}$ is a multivariate random variable with zero mean and unit covariance matrix, and $\mathbf{C} = \mathbf{U} \mathbf{\Lambda} \mathbf{U}^T$ is the eigendecomposition of $\mathbf{C}$. Then
$$\mathbf{y} = \mathbf{U} \mathbf{\Lambda}^{1/2} \mathbf{x} + \mathbf{m}$$
has mean $\mathbf{m}$ and covariance $\mathbf{C} $.
Using properties shown in Chapter 1 of the lecture notes, verify that $\mathbf{y}$ has mean $\mathbf{m}$ and covariance $\mathbf{C} $.
## Answer:
Using Eq. (1.40) in Chapter 1 of the [DME notes](https://www.inf.ed.ac.uk/teaching/courses/dme/2021/lecture-notes.pdf), we know that for any multivariate random variable $\mathbf{x}$, the following holds:
$$
\text{Cov}[A\mathbf{x} + \mathbf{b}] = A \text{Cov}[\mathbf{x}]A^{T}
$$
Now, applying this to the random variable $\mathbf{y}$ defined in the question we yield:
$$
\begin{align}
\text{Cov}[\mathbf{y}] &= \text{Cov}[U\Lambda^{1/2}\mathbf{x} + \mathbf{m}]\\
&= U\Lambda^{1/2}\text{Cov}[\mathbf{x}](U\Lambda^{1/2})^T\\
&= U\Lambda^{1/2}\mathbb{I}\Lambda^{1/2}U^T\\
&= U\Lambda U^T\\ &
= C \quad \quad \quad \quad \text{Q.E.D.}
\end{align}
$$
Repeating this analysis for the mean, we note that $\mathbb{E}[\mathbf{x}] = 0$ and hence:
$$
\mathbb{E}[\mathbf{y}] = \mathbb{E}[\mathbf{m}] = \mathbf{m}
$$
Since $\mathbf{m}$ is constant with respect to the distribution we are taking the expectation under.
## Question 2: Generating data with a given covariance matrix---the code
The result above implies the following procedure to sample one data point from a multivariate distribution with mean $\mathbf{m}$ and covariance matrix $\mathbf{C}$ if sampling from a "standardised" distribution is possible:
1. Compute the eigendecomposition of the covariance matrix, so that $\mathbf{C} = \mathbf{U} \mathbf{\Lambda} \mathbf{U}^T$
2. Sample a data point $\mathbf{x} \in \mathbb{R}^d$ from the "standardised" distribution (i.e. zero mean, identity covariance matrix).
3. Compute $\mathbf{y} = \mathbf{U} \mathbf{\Lambda}^{1/2} \mathbf{x} + \mathbf{m}$.
Write a function that generates $n$ random samples from a multivariate normal distribution with given mean vector and covariance matrix. You should make use of the [`np.random.standard_normal`](https://numpy.org/doc/stable/reference/random/generated/numpy.random.standard_normal.html) function that generates samples from a standard multivariate gaussian distribution and the eigendecomposition of the covariance matrix. For computing the eigendecomposition of a symmetric matrix you should use the [`numpy.linalg.eigh()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html) function.
Finally, generate a 3 x 3 random covariance matrix `C` by using the `generate_positive_semi_definite()` function. Use the function you just wrote to generate 1 million random samples with zero mean and covariance matrix `C`. Compute the empirical covariance matrix of the data (you can use [`numpy.cov()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.cov.html)) and check that it is a good estimate of the true covariance matrix `C`.
**Important:**
There are different conventions on how data are stored in matrices: There is the "rows are variables, columns are observations" convention where the data matrix `X` has size $d \times n$, with $d$ denoting the dimensionality and $n$ the number of samples. This convention is used in the lecture notes and many text books, and e.g. by `numpy.cov` (see the default `rowvar=True`).
There is also the reverse convention where `X`is the transpose, i.e. it is a $n \times d$ matrix. This "rows are observations, columns are variables" is e.g. followed by `Pandas` and `scikit-learn`.
It is up to you which convention you follow. The first convention has the advantage that code more closely follows the math if you represent random vectors as column vectors (which is usually done). For example if we have a random (column) vector $\mathbf{x}$ of mean zero, the covariance is $\mathbb{E}[\mathbf{xx^\top}]$. If we work with $d \times n$ data matrices `X`, the formula can be implemented as `1/n X@X.T`, which is of the same form as the math formula. The second convention is advantageous if you use e.g. `Pandas` and `scikit-learn` libraries (and if you derived the math formula assuming the random vectors are row vectors). However, you have to transpose the math results if established using column vectors. For example $\mathbf{y} = \mathbf{U} \mathbf{\Lambda}^{1/2} \mathbf{x} + \mathbf{m}$ becomes
$$\mathbf{y}^\top = \mathbf{x}^\top (\mathbf{U}\mathbf{\Lambda}^{1/2})^\top + \mathbf{m}^\top = \mathbf{x}^\top \mathbf{\Lambda}^{1/2} \mathbf{U}^\top + \mathbf{m}^\top$$
Alternatively, you may just make sure that your functions and methods implement whatever interface you need, but possibly transpose input and output matrices at the beginning and end of your code, respectively. The transpose [returns a view in numpy](https://numpy.org/doc/stable/reference/generated/numpy.transpose.html) so that there are no memory issues in this approach.
As an exercise, follow here the $n \times d$ convention.
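As a small, self-contained illustration of the $n \times d$ convention (a sketch, not one of the lab questions), the empirical covariance can be obtained either with the built-in or written out by hand on the centred data matrix:
```python
import numpy as np

n, d = 1000, 3
X = np.random.standard_normal((n, d))    # n x d: rows are observations
Xc = X - X.mean(axis=0)                  # centre each column (feature)

C_builtin = np.cov(X, rowvar=False)      # n x d convention via rowvar=False
C_manual = Xc.T @ Xc / (n - 1)           # the same estimate written out by hand
print(np.allclose(C_builtin, C_manual))  # True
```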
```python
def sample_multivariate_normal_eig(mean, cov, n, random_seed=None):
"""
Sample from multivariate normal distribution
with given mean and covariance matrix by
using eigendecomposition of covariance matrix.
Parameters
----------
mean : array, shape (d,)
Mean vector.
cov : array, shape (d,d)
Covariance matrix.
n : integer
Number of samples.
random_seed : integer (optional)
Random seed.
Returns
----------
X : array, shape(n, d)
Random samples.
Raises
----------
ValueError
If the provided matrix is not positive definite or if dimension of the
provided mean and covariance matrix does not match.
"""
    if cov.shape[0] != mean.shape[0] or cov.shape[1] != mean.shape[0]:
        raise ValueError("The dimensions of the provided mean and covariance matrix do not match.")
    if not is_positive_semi_definite(cov):
        raise ValueError("The covariance matrix is not positive semi-definite.")
    if random_seed is not None:
        np.random.seed(random_seed)
    d = mean.shape[0]                                       # dimensionality
    Lambda, U = np.linalg.eigh(cov)                         # C = U diag(Lambda) U^T
    X = np.random.standard_normal((n, d))                   # standardised samples, shape (n, d)
    X = X @ np.diag(np.sqrt(Lambda)) @ U.T + mean[None, :]  # y^T = x^T Lambda^{1/2} U^T + m^T
    return X
```
Let's check that it works:
## Answer:
```python
n = int(1e6); d = 3
cov = generate_spsd_matrix(d, random_seed=24)
mean = np.ones(d)
X = sample_multivariate_normal_eig(mean, cov, n, random_seed=24)
```
### Close enough!
```python
np.cov(X, rowvar=False) # Estimated cov matrix
```
array([[3.07235317, 0.203425 , 0.08493394],
[0.203425 , 1.8242953 , 1.30008106],
[0.08493394, 1.30008106, 4.8130683 ]])
```python
cov # True cov matrix
```
array([[3.0670766 , 0.20434845, 0.0871184 ],
[0.20434845, 1.82705041, 1.30318069],
[0.0871184 , 1.30318069, 4.8151199 ]])
*Note: numpy built-in functions* Numpy implements, for example, the [`numpy.random.multivariate_normal()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.multivariate_normal.html) function that can be used to generate samples from a multivariate normal distribution with given mean vector and covariance matrix. You are encouraged to use such built-in functions whenever available, as they will most likely be highly optimised, and bug-free. Nevertheless, it is very useful to know what these functions do under the hood, and in some cases, the function that you need may not be available and you have to write your own.
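For instance, the sampling step above could be reproduced with the built-in, reusing the `mean`, `cov` and `n` defined in the answer to Question 2:
```python
# One-line alternative using numpy's built-in sampler
X_builtin = np.random.multivariate_normal(mean, cov, size=n)
print(np.cov(X_builtin, rowvar=False))   # again close to the true covariance matrix
```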
## Question 3: Analysing the covariance of data
You are provided with the following function `generate_gaussian_data()` that can be used to generate a multivariate Gaussian dataset with a given mean and covariance. When the mean and covariance are not defined, they are generated at random. The `random_seed` parameter can be used to ensure reproducible results. The function returns a tuple containing three items; the dataset, the true mean, and the true covariance matrix of the probability distribution the data were sampled from. Execute the cell below to load this function.
```python
def generate_gaussian_data(n_samples, n_features=None, mu=None, cov=None, random_seed=None):
"""
Generates a multivariate gaussian dataset.
Parameters
----------
n_samples : integer
Number of samples.
n_features : integer
Number of dimensions (features).
mu : array, optional (default random), shape (n_features,)
Mean vector of normal distribution.
cov : array, optional (default random), shape (n_features,n_features)
Covariance matrix of normal distribution.
random_seed : integer
Random seed.
Returns
-------
x : array, shape (n_samples, n_features)
Data matrix arranged in rows (i.e.
columns correspond to features and
rows to observations).
mu : array, shape (n_features,)
Mean vector of normal distribution.
cov : array, shape (n_features,n_features)
Covariance matrix of normal distribution.
Raises
------
ValueError when the shapes of mu and C are not compatible
with n_features.
"""
if random_seed is not None:
np.random.seed(random_seed)
if mu is None:
mu = np.random.randn(n_features,)
else:
if n_features is None:
n_features = mu.shape[0]
else:
if mu.shape[0] != n_features:
raise ValueError("Shape mismatch between mean and number of features.")
if cov is None:
cov = generate_spsd_matrix(n_features, random_seed=random_seed)
else:
if (cov.shape[0] != n_features) or (cov.shape[1] != n_features):
raise ValueError("Shape mismatch between covariance and number of features.")
x = np.random.multivariate_normal(mu, cov, n_samples)
return (x, mu, cov)
```
Generate a two-dimensional Gaussian data set with 1000 observations. The two Gaussian random variables should have mean zero, variances 1 and 2 respectively, and a correlation coefficient of 0.6.
Print the empirical mean, covariance and correlation matrices using numpy built-in functions. Look up the numpy [documentation](https://numpy.org/doc/) if you are unsure about the commands. Finally, use the seaborn [`jointplot()`](http://seaborn.pydata.org/generated/seaborn.jointplot.html) function to produce a joint scatter plot of the two variables. This function also shows the marginal histograms on the top and right hand sides of the plot. Label axes appropriately.
## Answer:
```python
# Set the mean, the two variances and the correlation coefficient
mu = np.zeros((2,))
v1 = 1
v2 = 2
rho = 0.6
# Creating the covariance matrix: cov(x1, x2) = rho * sqrt(v1 * v2)
covar = np.sqrt(v1*v2)*rho
cov = np.array([[v1, covar], [covar, v2]])
# Generate Gaussian data
x_2d, _, _ = generate_gaussian_data(n_samples=1000, n_features=2, mu=mu, cov=cov)
# Empirical summary statistics requested in the question
print("Empirical mean:\n", np.mean(x_2d, axis=0))
print("Empirical covariance:\n", np.cov(x_2d, rowvar=False))
print("Empirical correlation:\n", np.corrcoef(x_2d, rowvar=False))
# Plotting
g = sns.jointplot(data=x_2d,
                  x=x_2d[:,0],
                  y=x_2d[:,1],
                  color='dodgerblue',
                  height=8
                 )
g.set_axis_labels("Variable 1", "Variable 2")
```
# PCA with sklearn
Sklearn offers a class implementation of `pca`. Please spend a minute to read the [documentation](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html) of this class. The principal component (PC) directions of a dataset are computed by using the [`fit()`](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html#sklearn.decomposition.PCA.fit) method and stored row-wise in the `components_` attribute.
The PC scores can be computed by using the [`transform()`](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html#sklearn.decomposition.PCA.transform) method. The amount of variance explained by each of the selected components is stored into the `explained_variance_` attribute.
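Schematically, a typical usage pattern looks like the following (a minimal sketch on random data, separate from the questions below):
```python
import numpy as np
from sklearn.decomposition import PCA

X_demo = np.random.standard_normal((100, 5))   # 100 observations, 5 features

pca_demo = PCA(n_components=2)                 # keep the first two PCs
scores_demo = pca_demo.fit_transform(X_demo)   # fit() and transform() in one call

print(pca_demo.components_.shape)              # (2, 5): PC directions stored row-wise
print(scores_demo.shape)                       # (100, 2): PC scores
print(pca_demo.explained_variance_)            # variance explained by each PC
```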
## Data
We will use a 3-dimensional Gaussian dataset. Execute the cell below to generate the dataset and print the true mean and covariance matrix of the distribution the data was sampled from.
```python
# Generates a 3D dataset and prints true mean and covariance
x_3d, mu_true, C_true = generate_gaussian_data(n_samples=1000, n_features=3, random_seed=20)
print("Dataset consists of {} samples and {} variables/features.\n".format(x_3d.shape[0], x_3d.shape[1]))
print("True mean:\n{}\n".format(mu_true))
print("True covariance matrix:\n{}".format(C_true))
```
Dataset consists of 1000 samples and 3 variables/features.
True mean:
[0.88389311 0.19586502 0.35753652]
True covariance matrix:
[[ 7.15474605 1.79591767 -0.52284687]
[ 1.79591767 2.17265 -1.0294186 ]
[-0.52284687 -1.0294186 0.69419873]]
## Question 4: Computing all PCs
Create a `pca` instance and fit it on the dataset `x_3d`. Print the three PC directions as column vectors. Store the PC scores for `x_3d` in an array called `pc_scores`.
## Answer:
```python
from sklearn.decomposition import PCA
# Perform PCA with sklearn
pca = PCA()
pca.fit(x_3d)
pc_scores = pca.transform(x_3d)
# Transpose due to nxd convention of sklearn
print(pca.components_.T)
```
[[-0.94682757 -0.31374345 0.07129232]
[-0.30532736 0.80631739 -0.50658413]
[ 0.10145321 -0.50141532 -0.85923799]]
## Question 5: Computing a subset of PCs
Most often, we do not want to compute all PC directions, but only a few (i.e. dimensionality reduction). We can define the desired number of PCs by setting the [`n_components`]() parameter appropriately when we instantiate the `pca` class.
Initialise a `pca_new` object with 2 PCs and fit it on the dataset `x_3d`. Compute the corresponding PC scores and print the two PC directions.
*Hint: the 2 PC directions should be the same as the first 2 directions you computed in the previous question. The reason for this is ultimately that PCA by sequential and simultaneous variance maximisation give the same result.*
## Answer:
```python
pca_new = PCA(n_components=2).fit(x_3d) # Only get first 2 principle componants
pc_scores_new = pca_new.transform(x_3d) # Compute score
print(pca_new.components_.T)
```
[[-0.94682757 -0.31374345]
[-0.30532736 0.80631739]
[ 0.10145321 -0.50141532]]
# PCA from scratch
## Question 6: PCA via covariance matrix eigendecomposition
Now we want to implement PCA from scratch using the eigendecomposition of the covariance matrix. The procedure can be summarised as follows:
1. Compute the empirical covariance matrix.
2. Compute the eigendecomposition of the estimated covariance matrix.
3. Sort eigenvalues and associated eigenvectors, in eigenvalue descending order. The sorted eigenvectors correspond to the PC directions. If we want to reduce the dimensionality, we select the first `k` eigenvectors corresponding to the `k` largest eigenvalues (`k` < `d`).
4. To compute PC scores we project the centered data matrix (i.e. matrix product) onto the PC directions.
Some algorithms for eigendecompositions allow you to specify the number of eigenvectors and eigenvalues that should be computed. You then do not have to compute the complete eigendecomposition, which is wasteful if you are only interested in a few principle components. For example, while `numpy`'s `linalg.eigh` computes all eigenvectors and values ([documentation](https://numpy.org/doc/stable/reference/generated/numpy.linalg.eigh.html)), `scipy`'s `linalg.eigh` allows you extract only a subset using the `eigvals` keyword argument ([documentation](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.eigh.html)). Note that with scipy v1.5.0 `eigvals` is deprecated and `subset_by_index` should be used.
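For example, with SciPy >= 1.5 only the largest eigenpairs of a symmetric matrix can be requested directly (a small sketch reusing the `generate_spsd_matrix()` helper defined above; note that `eigh` returns eigenvalues in ascending order):
```python
from scipy.linalg import eigh

C_demo = generate_spsd_matrix(d=5, random_seed=0)          # any symmetric positive semi-definite matrix
k, d = 2, C_demo.shape[0]

# Request only the k largest eigenpairs (indices d-k to d-1)
vals, vecs = eigh(C_demo, subset_by_index=[d - k, d - 1])
vals, vecs = vals[::-1], vecs[:, ::-1]                     # re-order to descending eigenvalue order
```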
Compute and print the first two PC directions in the dataset `x_3d` by using the procedure described above, using either `scipy`'s or `numpy`'s `linalg.eigh`. Then compute the PC scores.
As happens very often when writing code, it is likely that there will be a few bugs in your implementation. To check that your code is correct, compare the computed PC directions and scores to the ones obtained with scikit-learn.
*Hint: you might (or might not) find that some of the PC directions/scores you have computed have opposite signs to the ones returned by the sklearn implementation. Do not worry about this, the two solutions are equivalent (why?). To make debugging easier, you are provided with the following function, `solutions_equivalent()` which tests whether two solutions are equivalent, regardless of their signs. Execute the following cell to load this function.*
```python
def solutions_equivalent(b1, b2):
"""
Checks whether two PC directions/scores
solutions are equivalent regardless of their .
respective signs.
Parameters
----------
s1 : array,
First solution.
s2 : array,
Second solution.
Returns
-------
True if solutions are equivalent.
Raises
------
ValueError if the two bases do not have
the same dimensionality.
"""
s1 = np.asarray(b1)
s2 = np.asarray(b2)
if s1.shape != s2.shape:
raise ValueError("Solutions must have the same dimensionality.")
for dim in range(s1.shape[1]):
if (np.allclose(s1[:,dim],s2[:,dim]) or np.allclose(s1[:,dim],-s2[:,dim])):
pass
else:
return False
return True
```
```python
WITH_SCIPY = False
k = 2 # no. of PCs to grab
# Centre data
mu_est = np.mean(x_3d, axis=0)
C_est = np.cov(x_3d, rowvar=False)
x_3d_centred = x_3d - mu_est
# Define param
d = C_est.shape[0]
# Get PCs
if WITH_SCIPY:
print("Using scipy...")
# Get eigvals and eigvecs
eigvals, eigvecs = sp.linalg.eigh(C_est, eigvals=[d-k, d-1])
    # Sort principal components (largest eigenvalue first)
pca_directs = eigvecs[:,::-1]
else:
print("Using numpy...")
eigvals, eigvecs = np.linalg.eigh(C_est)
order = np.argsort(eigvals)[::-1]
pca_directs = eigvecs[:, order[:k]]
# Compute scores by projecting the centred data onto the PC directions
pca_scores = x_3d_centred.dot(pca_directs)
# Tests
print("Principal components:\n{}".format(pca_directs))
print(solutions_equivalent(pca_new.components_.T, pca_directs)) # directions
print(solutions_equivalent(pc_scores_new, pca_scores)) #scores
```
Using numpy...
    Principal components:
[[-0.94682757 -0.31374345]
[-0.30532736 0.80631739]
[ 0.10145321 -0.50141532]]
True
True
## Question 7: PCA via data matrix singular value decomposition (SVD)
Assume we have a centred $d \times n$ data matrix $\mathbf{Y}$ with singular decomposition
$$\mathbf{Y = USV^\top} = \sum_{i=1}^r s_i \mathbf{u}_i \mathbf{v}_i^\top$$
with $s_1\ge s_2 \ge \ldots s_r$ and where $r$ is the rank of the matrix (typically equal to $d$). We have seen in the lecture notes that the best rank $k$ approximation of $\mathbf{Y}$ (measured by the Frobenius norm) is given by
$$\mathbf{\hat{Y}} = \sum_{i=1}^k s_i \mathbf{u}_i \mathbf{v}_i^\top$$
which means that we can just retain the $k$ terms with the largest singular values. Moreover, we have seen that
* the PC directions are given by the left singular vectors $\mathbf{u}_i$
* the PC scores are given by the scaled right singular values as $\mathbf{z}_i= s_i \mathbf{v}_i^\top$
* the variances of the PC scores are $\lambda_i = s_i^2/n$.
Compute the first two PC directions and scores in the dataset `x_3d` using `scipy.sparse.linalg.svds` ([documentation](https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.svds.html)). The method allows you to only compute $k$ singular vectors. This is also the method used by sklearn if you choose `arpack` as option for `svd_solver`. Alternatively, you may use `numpy`'s `linalg.svd` ([documentation](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.svd.html)) but this method computes the full SVD decomposition, which is wasteful if you are only interested in few components.
Compare the computed PC directions and scores to the ones obtained with scikit-learn.
*Hint: Since `x_3d` is a $n \times d$ matrix, you may best first transpose it to more easily implement the math above, and after the computation, transpose the computed scores back to $n \times k$ form*
```python
from scipy.sparse.linalg import svds
k = 2 # PCs to grab
# Centre and transpose
x_3d_centred = x_3d - np.mean(x_3d, axis=0)
Y = x_3d_centred.T # d x n
# SVD, which='LM' finds the largest singular values
uk, sk, vkt = svds(Y, k=k, which='LM')
# Sort since svds does not guarantee sorted outputs
order = np.argsort(sk)[::-1]
svd_pc_directs = uk[:, order]
svd_scores = sk[order, None] * vkt[order, :]
# Revert back to n x k
svd_scores = svd_scores.T
# Tests
print("Principal components:\n{}".format(svd_pc_directs))
print(solutions_equivalent(pca_new.components_.T, svd_pc_directs)) # directions
print(solutions_equivalent(pc_scores_new, svd_scores)) #scores
```
    Principal components:
[[-0.94682757 -0.31374345]
[-0.30532736 0.80631739]
[ 0.10145321 -0.50141532]]
True
True
# Image compression [optional]
In lecture, we have seen that the SVD allows us to find a low rank approximation of the data matrix. We here exemplify the low rank approximation property of the SVD on a image compression task.
Grey-scale images are represented in the digital world as 2D matrices, whose elements correspond to pixel intensities. We here approximate this matrix by a low-rank approximation through the SVD. If there are correlations between the pixels in the image (which happens to be the case for [natural images](http://www.naturalimagestatistics.net/)), then we should be able to achieve a relatively good reconstruction of the image by using only a few components.
Let us first load a sample image from the scipy package:
```python
# Load sample image
from scipy.misc import face
img = face(gray=True)
print("Image array dimensionality: {}".format(img.shape))
```
Image array dimensionality: (768, 1024)
We can visualise the image by using the matplotlib imshow function:
```python
# Show image
sns.set_style("white")
plt.figure()
plt.imshow(img, cmap=plt.cm.gray)
plt.axis("off")
plt.show()
```
## Question 8 [optional]
Write a function image_low_rank_approx() that takes as input an image (i.e. 2-dimensional array) and an integer k and reconstructs the image by using a k-rank SVD approximation.
```python
def image_low_rank_approx(img, k):
    # Centre and transpose
    mean_est = np.mean(img, axis=0)
    img = (img - mean_est).T # d x n
    # SVD, which='LM' finds the largest singular values
    uk, sk, vkt = svds(img, k=k, which='LM')
    # Sort since svds does not guarantee sorted outputs
    order = np.argsort(sk)[::-1]
    svd_pc_directs = uk[:, order]
    svd_scores = sk[order, None] * vkt[order, :]
    # Rank-k reconstruction; add the column means back before returning
    reconstructed_img = svd_pc_directs @ svd_scores
    return reconstructed_img.T + mean_est
```
## Question 9 [optional]
Perform a low-rank approximation of the image stored in img by using a varying number of ranks (i.e. from 1 to 500) and visualise the approximation. How many components do you roughly need to obtain a qualitatively decent approximation?
```python
# Your code goes here
rimg = image_low_rank_approx(img, 20)
```
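One possible way to compare several ranks side by side (a sketch assuming `img` and `image_low_rank_approx` from the cells above):
```python
# Compare reconstructions for an increasing number of ranks
ranks = [1, 5, 20, 50, 100, 500]
fig, axes = plt.subplots(2, 3, figsize=(12, 6))
for ax, k in zip(axes.ravel(), ranks):
    ax.imshow(image_low_rank_approx(img, k), cmap=plt.cm.gray)
    ax.set_title("k = {}".format(k))
    ax.axis("off")
plt.tight_layout()
plt.show()
```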
*Qualitatively, the reconstruction is already recognisable with a few tens of components; roughly 50 to 100 components give a decent approximation, and improvements beyond a few hundred are hard to see by eye.*
```python
sns.set_style("white")
plt.figure()
plt.imshow(rimg, cmap=plt.cm.gray)
plt.axis("off")
plt.show()
```
```python
```
```python
```
|
4d61edcddbf5b5408b5d3dcf9aaba6b7aed56187
| 320,652 |
ipynb
|
Jupyter Notebook
|
w2/02_Lab_2_Principal_component_analysis.ipynb
|
c-abbott/dme-2021
|
09bc0499c6e68864292e574485e11729449e7aa0
|
[
"MIT"
] | null | null | null |
w2/02_Lab_2_Principal_component_analysis.ipynb
|
c-abbott/dme-2021
|
09bc0499c6e68864292e574485e11729449e7aa0
|
[
"MIT"
] | null | null | null |
w2/02_Lab_2_Principal_component_analysis.ipynb
|
c-abbott/dme-2021
|
09bc0499c6e68864292e574485e11729449e7aa0
|
[
"MIT"
] | null | null | null | 278.34375 | 147,700 | 0.915394 | true | 7,707 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.870597 | 0.763484 | 0.664687 |
__label__eng_Latn
| 0.979897 | 0.382621 |
## Confidence Intervals and Hypothesis Testing in Python for Engineers and Geoscientists
### Michael Pyrcz, Associate Professor, University of Texas at Austin
#### Contacts: [Twitter/@GeostatsGuy](https://twitter.com/geostatsguy) | [GitHub/GeostatsGuy](https://github.com/GeostatsGuy) | [www.michaelpyrcz.com](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446)
This is a tutorial / demonstration of **Confidence Intervals and Hypothesis Testing in Python**. In Python, the SciPy package, specifically the Stats functions (https://docs.scipy.org/doc/scipy/reference/stats.html) provide excellent tools for efficient use of statistics.
I have previously provided these examples worked out by-hand in Excel (https://github.com/GeostatsGuy/LectureExercises/blob/master/Lecture7_CI_Hypoth_eg_R.xlsx) and also in R (https://github.com/GeostatsGuy/LectureExercises/blob/master/Lecture7_CI_Hypoth_eg.R). In all cases, I use the same dataset available as a comma delimited file (https://git.io/fxLAt).
This tutorial includes basic, typical confidence interval and hypothesis testing methods that would commonly be required for Engineers and Geoscientists including:
1. Student-t confidence interval for the mean
2. Student-t hypothesis test for difference in means (pooled variance)
3. Student-t hypothesis test for difference in means (difference variances), Welch's t Test
3. F-distribution hypothesis test for difference in variances
##### Caveats
I have not included all the details, specifically the test assumptions in this document. These are included in the accompanying course notes, Lec08_hypothesis.pdf.
#### Project Goal
0. Introduction to Python in Jupyter including setting a working directory, loading data into a Pandas DataFrame.
1. Learn the basics for working with confidence intervals and hypothesis testing in Python.
2. Demonstrate the efficiency of using Python and SciPy package for statistical analysis.
#### Load the required libraries
The following code loads the required libraries.
```python
import os # to set current working directory
import numpy as np # arrays and matrix math
import scipy.stats as st # statistical methods
import pandas as pd # DataFrames
```
If you get a package import error, you may have to first install some of these packages. This can usually be accomplished by opening up a command window on Windows and then typing 'python -m pip install [package-name]'. More assistance is available with the respective package docs.
#### Set the working directory
I always like to do this so I don't lose files and to simplify subsequent read and writes (avoid including the full address each time). Also, in this case make sure to place the required (see below) data file in this directory. When we are done with this tutorial we will write our new dataset back to this directory.
```python
os.chdir(r"C:\PGE337")                                   # set the working directory (raw string avoids backslash-escape issues)
```
#### Loading Data
Let's load the provided dataset. 'PorositySample2Units.csv' is available at https://github.com/GeostatsGuy/GeoDataSets. It is a comma delimited file with 20 porosity measures (as fractions) from 2 rock units from the subsurface. We load it with the pandas 'read_csv' function into a DataFrame we call 'df' and then preview it by printing a slice and by utilizing the 'head' DataFrame member function (with a nice and clean format, see below).
```python
#df = pd.read_csv("PorositySample2Units.csv") # read a .csv file in as a DataFrame
df = pd.read_csv(r"https://raw.githubusercontent.com/GeostatsGuy/GeoDataSets/master/PorositySample2Units.csv") # load data from Prof. Pyrcz's github
print(df.iloc[0:5,:]) # display first 4 samples in the table as a preview
df.head() # we could also use this command for a table preview
```
X1 X2
0 0.21 0.20
1 0.17 0.26
2 0.15 0.20
3 0.20 0.19
4 0.19 0.13
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>X1</th>
<th>X2</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.21</td>
<td>0.20</td>
</tr>
<tr>
<th>1</th>
<td>0.17</td>
<td>0.26</td>
</tr>
<tr>
<th>2</th>
<td>0.15</td>
<td>0.20</td>
</tr>
<tr>
<th>3</th>
<td>0.20</td>
<td>0.19</td>
</tr>
<tr>
<th>4</th>
<td>0.19</td>
<td>0.13</td>
</tr>
</tbody>
</table>
</div>
It is useful to review the summary statistics of our loaded DataFrame. That can be accomplished with the 'describe' DataFrame member function. We transpose to switch the axes for ease of visualization.
```python
df.describe().transpose()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>count</th>
<th>mean</th>
<th>std</th>
<th>min</th>
<th>25%</th>
<th>50%</th>
<th>75%</th>
<th>max</th>
</tr>
</thead>
<tbody>
<tr>
<th>X1</th>
<td>20.0</td>
<td>0.1645</td>
<td>0.027810</td>
<td>0.11</td>
<td>0.1500</td>
<td>0.17</td>
<td>0.19</td>
<td>0.21</td>
</tr>
<tr>
<th>X2</th>
<td>20.0</td>
<td>0.2000</td>
<td>0.045422</td>
<td>0.11</td>
<td>0.1675</td>
<td>0.20</td>
<td>0.23</td>
<td>0.30</td>
</tr>
</tbody>
</table>
</div>
Here we extract the X1 and X2 unit porosity samples from the DataFrame into separate arrays called 'X1' and 'X2' for convenience.
```python
X1 = df['X1']
X2 = df['X2']
```
#### Confidence Intervals
Let's first demonstrate the calculation of the confidence interval for the sample mean at a 95% confidence level. This could be interpreted as the interval over which there is a 95% confidence that it contains the true population. We use the student's t distribution as we assume we do not know the variance and the sample size is small.
\begin{equation}
\bar{x} \pm t_{\frac{\alpha}{2},n-1} \times \frac{s}{\sqrt{n}}
\end{equation}
```python
ci_95_x1 = st.t.interval(0.95, len(df)-1, loc=np.mean(X1), scale=st.sem(X1))
print('The confidence interval for the X1 interval is ' + str(ci_95_x1))
```
The confidence interval for the X1 interval is (0.1514843093952749, 0.17751569060472505)
One can check the Excel file linked above with the confidence interval calculated by hand and confirm that this result is correct.
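The same interval can also be spelled out term by term, which makes the correspondence with the formula above explicit (a small sketch reusing the `X1` samples loaded earlier):
```python
n = len(X1)
t_crit = st.t.ppf(0.975, df=n-1)                        # t_{alpha/2, n-1} for alpha = 0.05
half_width = t_crit * np.std(X1, ddof=1) / np.sqrt(n)   # t * s / sqrt(n), same as t_crit * st.sem(X1)
print(np.mean(X1) - half_width, np.mean(X1) + half_width)
```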
##### Hypothesis Testing
Now, let's try the t test, hypothesis test for difference in means. This test assumes that the variances are similar along with the data being Gaussian distributed (see the course notes for more on this). This is our test:
\begin{equation}
H_0: \mu_{X1} = \mu_{X2}
\end{equation}
\begin{equation}
H_1: \mu_{X1} \ne \mu_{X2}
\end{equation}
For the resulting t-statistic and p-value we run this command.
##### Pooled Variance t-test Difference in Means
```python
t_pooled, p_pooled = st.ttest_ind(X1,X2) # assuming equal variance
print('The t statistic is ' + str(t_pooled) + ' and the p-value is ' + str(p_pooled))
```
The t statistic is -2.9808897468855644 and the p-value is 0.004992130565788754
The p-value, $p$, is the probability of falling outside the symmetric interval defined by the t statistic. In other words, the reported $p$ is 2 x the cumulative probability of the t statistic applied to the sampling t distribution. Put another way, if one used $\pm t_{statistic}$ as thresholds, $p$ is the probability of being outside this symmetric interval. So we will reject the null hypothesis if $p \lt \alpha$. From the p-value alone it is clear that we would reject the null hypothesis and accept the alternative hypothesis that the means are not equal.
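To make this definition concrete, the reported p-value can be reproduced directly from the t statistic and the t sampling distribution (a small sketch reusing `t_pooled` from the cell above):
```python
dof = len(X1) + len(X2) - 2                        # degrees of freedom for the pooled test
p_manual = 2 * st.t.cdf(-abs(t_pooled), df=dof)    # two-sided: probability outside +/- |t|
print(p_manual)                                    # matches p_pooled
```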
In case you want to compare the t-statistic to t-critical, we can apply the inverse of the Student's t distribution at $\frac{\alpha}{2}$ and $1-\frac{\alpha}{2}$ to get the lower and upper critical values.
```python
t_critical = st.t.ppf([0.025,0.975], df=len(X1)+len(X2)-2)
print('The t crical lower and upper values are ' + str(t_critical))
```
The t crical lower and upper values are [-2.02439416 2.02439416]
We can observe that, as expected, the t-statistic is outside the t-critical interval. These results are exactly what we got when we worked out the problem by hand in Excel, but obtained much more efficiently!
##### Welch's t-test Difference in Means
Now let's try the t-test for a difference in means allowing for unequal variances; this is also known as Welch's t-test. All we have to do is set the parameter 'equal_var' to False; note that it defaults to True (as in the command above).
```python
st.ttest_ind(X1, X2, equal_var = False) # allowing for difference in variance
```
Ttest_indResult(statistic=-2.9808897468855644, pvalue=0.005502572350112331)
Once again we can see by $p$ that we will clearly reject the null hypothesis.
##### F-test Difference in Variances
Let's now compare the variances with the F-test for difference in variances.
\begin{equation}
H_0: \frac{\sigma^{2}_{X_2}}{\sigma^{2}_{X_1}} = 1.0
\end{equation}
\begin{equation}
H_1: \frac{\sigma^{2}_{X_2}}{\sigma^{2}_{X_1}} > 1.0
\end{equation}
Note, by ordering the variances we eliminate the case of $\sigma^{2}_{X_2} \lt \sigma^{2}_{X_1}$.
Details about the test are available in the course notes (along with assumptions such as the data being Gaussian distributed), and this example is also worked out by hand in the linked Excel workbook. We can accomplish the F-test with SciPy.Stats in one line of code: we calculate the ratio of the sample variances, ensuring that the larger variance is in the numerator, and get the degrees of freedom using the len() command, keeping the numerator degrees of freedom as 'dfn' and the denominator degrees of freedom as 'dfd'. The p-value is $1$ minus the cumulative probability of the observed ratio, since the test is configured as a single, right-tailed test.
```python
p_value = 1 - st.f.cdf(np.var(X2)/np.var(X1), dfn=len(X2)-1, dfd=len(X1)-1)
p_value
```
0.01918734806315381
Once again we would clearly reject the null hypothesis since $p \lt \alpha$ and conclude that the variances are not equal.
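If you prefer comparing the statistic to a critical value, as we did for the t-test, here is a small sketch (not part of the original workflow) of the right-tailed F critical value at $\alpha = 0.05$; note that with equal sample sizes the variance ratio is the same whether the biased or unbiased variance estimator is used:
```python
# Right-tailed F test: compare the variance ratio to the F critical value
f_statistic = np.var(X2, ddof=1) / np.var(X1, ddof=1)      # larger variance on top
f_critical = st.f.ppf(0.95, dfn=len(X2)-1, dfd=len(X1)-1)  # alpha = 0.05, right tail
print('F statistic = ' + str(f_statistic) + '; F critical = ' + str(f_critical))
```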
#### Comments
We are just scratching the surface of confidence intervals and hypothesis tests. Once again there are a lot of details left out of the problem formulation and assumptions; see the course notes for more coverage. By running the same confidence interval and hypothesis tests 1) by hand in Excel, 2) with R, and 3) with Python code, I hope this demonstration will enable and encourage more engineers and scientists to make these R and Python tools part of their common practice. I'm always happy to discuss,
*Michael*
Michael Pyrcz, Ph.D., P.Eng. Associate Professor The Hildebrand Department of Petroleum and Geosystems Engineering, Bureau of Economic Geology, The Jackson School of Geosciences, The University of Texas at Austin
On twitter I'm the @GeostatsGuy.
```python
```
|
b1ecf2330375dd1194a5d7a16ed6e1b521863e5f
| 18,328 |
ipynb
|
Jupyter Notebook
|
PythonDataBasics_Hypothesis.ipynb
|
caf3676/PythonNumericalDemos
|
206a3d876f79e137af88b85ba98aff171e8d8e06
|
[
"MIT"
] | 403 |
2017-10-15T02:07:38.000Z
|
2022-03-30T15:27:14.000Z
|
PythonDataBasics_Hypothesis.ipynb
|
caf3676/PythonNumericalDemos
|
206a3d876f79e137af88b85ba98aff171e8d8e06
|
[
"MIT"
] | 4 |
2019-08-21T10:35:09.000Z
|
2021-02-04T04:57:13.000Z
|
PythonDataBasics_Hypothesis.ipynb
|
caf3676/PythonNumericalDemos
|
206a3d876f79e137af88b85ba98aff171e8d8e06
|
[
"MIT"
] | 276 |
2018-06-27T11:20:30.000Z
|
2022-03-25T16:04:24.000Z
| 35.937255 | 667 | 0.553525 | true | 3,237 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.893309 | 0.833325 | 0.744417 |
__label__eng_Latn
| 0.988788 | 0.567861 |
<center>
<h1> INF285 - Computación Científica </h1>
<h2> Gradient Descent and Nonlinear Least-Square </h2>
<h2> <a href="#acknowledgements"> [S]cientific [C]omputing [T]eam </a> </h2>
<h2> Version: 1.02</h2>
</center>
<div id='toc' />
## Table of Contents
* [Introduction](#intro)
* [Gradient Descent](#GradientDescent)
* [Gradient Descent in 1D](#GradientDescent1D)
* [Gradient Descent for a 2D linear least-square problem](#GD_2D_LinearLeastSquare)
* [Gradient Descent for a 2D nonlinear least-square problem](#GD_2D_NonLinearLeastSquare)
* [Further Study](#FurtherStudy)
* [Acknowledgements](#acknowledgements)
```python
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as spla
%matplotlib inline
# https://scikit-learn.org/stable/modules/classes.html#module-sklearn.datasets
from sklearn import datasets
import ipywidgets as widgets
from ipywidgets import interact, interact_manual, RadioButtons
import matplotlib as mpl
mpl.rcParams['font.size'] = 14
mpl.rcParams['axes.labelsize'] = 20
mpl.rcParams['xtick.labelsize'] = 14
mpl.rcParams['ytick.labelsize'] = 14
M=8
```
<div id='intro' />
# Introduction
[Back to TOC](#toc)
This jupyter notebook presents the algorithm of Gradient Descent applied to non-linear least-square problems.
<div id='GradientDescent' />
# Gradient Descent
[Back to TOC](#toc)
The algorithm of Gradient Descent is used in Optimization, in particular in problems where we want to minimize a function (or, equivalently, maximize one by changing the sign of the function).
This algorithm considers a function $f(\mathbf{x}):\mathbb{R}^n \rightarrow \mathbb{R}$, which has at least a local minimum near the point $\mathbf{x}_0$.
The algorithm considers that we have access to the gradient of $f(\mathbf{x})$, i.e. $\nabla f(\mathbf{x})$, which indicates the direction of fastest increase of $f(\mathbf{x})$ at the point $\mathbf{x}$; equivalently, $-\nabla f(\mathbf{x})$ is the direction of fastest decrease.
Thus, the algorithm is the following,
- Select an initial guess, say $\mathbf{x}_0$
- Compute the direction of fastest decrease: $\mathbf{d}_0=-\nabla f(\mathbf{x}_0)$
- Update the approximation $\mathbf{x}_1=\mathbf{x}_0+\alpha\,\mathbf{d}_0$
- Iterate until a certain threshold is achieved.
where $\alpha$ is a scaling factor for the Gradient Descent step.
The coefficient $\alpha$ could also depend on the iteration number, so that it adapts as the iterations progress.
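Before moving to the interactive examples, here is a minimal, self-contained sketch of the update rule itself, written with a hypothetical quadratic objective used only for illustration:
```python
# Hypothetical objective f(x) = (x - 3)^2 with gradient f'(x) = 2*(x - 3)
grad_f = lambda x: 2.0 * (x - 3.0)

x = 0.0        # initial guess x_0
alpha = 0.1    # scaling factor (step size)
for _ in range(50):
    x = x - alpha * grad_f(x)   # x_{k+1} = x_k - alpha * grad f(x_k)
print(x)       # approaches the minimizer x = 3
```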
<div id='GradientDescent1D' />
# Gradient Descent in 1D
[Back to TOC](#toc)
To first explain the algorithm, consider the following 1D example:
$$
f(x) = (x - 2)\,\sin(2\,x) + x^2.
$$
We will first plot the function as follows:
```python
# Defining the function using a 'lambda' definition.
f = lambda x: (x - 2)*np.sin(2*x) + np.power(x,2)
# Defining the grid for plotting, the number '1000' indicates the number of points of the sample.
# Suggestion: Change it and see what happens! For instance, what about if you change it to 10?
xx = np.linspace(-3,3,1000)
# Plotting the function
plt.figure(figsize=(8,8))
plt.plot(xx,f(xx),'-',label=r'$f(x)$')
plt.grid(True)
plt.xlabel('$x$')
plt.legend(loc='best')
plt.show()
```
Now, we will create an interactive demonstration of Gradient Descent in 1D where you can define the initial guess $x_0$, the scaling factor $\alpha$ and the iteration number.
In this numerical experiment we will see the importance of the coefficient $\alpha$, and how it is related to the 'gradient' and the initial guess.
```python
def GD_1D(x0=2, alpha=1, n=0):
# Defining the function using a 'lambda' definition and its derivative.
f = lambda x: (x-2)*np.sin(2*x)+np.power(x,2)
fp = lambda x: 2*x+2*(x-2)*np.cos(2*x)+np.sin(2*x)
# Plotting the function and its derivative.
xx = np.linspace(-3,3,1000)
plt.figure(figsize=(14,7))
ax = plt.subplot(1,2,1)
plt.plot(xx,f(xx),'b-',label=r'$f(x)$')
# Warning: The 'alpha' parameter for the plt.plot function corresponds to
# a transparency parameter, it is not related to the alpha parameter of
# the Gradient Descent explained before.
plt.plot(xx,fp(xx),'r-',label=r"$f'(x)$", alpha=0.5)
plt.grid(True)
plt.xlabel('$x$')
plt.title('Plot in linear scale')
# Plotting outcome with no iterations
plt.plot(x0,f(x0),'k.',markersize=10,label=r'$x_i$')
plt.plot(x0,fp(x0),'m.',markersize=10,label=r"$f'(x_i)$: 'Gradient'")
ax = plt.subplot(1,2,2)
plt.semilogy(xx,np.abs(f(xx)),'b-',label=r"$|f(x)|$")
plt.semilogy(xx,np.abs(fp(xx)),'r-',label=r"$|f'(x)|$", alpha=0.5)
plt.grid(True)
plt.xlabel('$x$')
plt.title('Plot in logarithmic scale')
plt.semilogy(x0,np.abs(f(x0)),'k.',markersize=10,label=r'$x_i$')
plt.semilogy(x0,np.abs(fp(x0)),'m.',markersize=10,label=r"$|f'(x_i)|$: 'Gradient'")
# Computing steps of Gradient Descent
if n>0:
xi_output=np.zeros(n+1)
xi_output[0]=x0
for k in range(n):
fp_x0=fp(x0)
x1 = x0-alpha*fp_x0
xi_output[k+1]=x1
x0 = x1
ax = plt.subplot(1,2,1)
plt.plot(xi_output,f(xi_output),'k.-',markersize=10,label=r'$x_i$')
plt.plot(xi_output,fp(xi_output),'m.',markersize=10)
ax = plt.subplot(1,2,2)
plt.semilogy(xi_output,np.abs(f(xi_output)),'k.-',markersize=10,label=r'$x_i$')
plt.semilogy(xi_output,np.abs(fp(xi_output)),'m.',markersize=10)
# Plotting outcome
ax = plt.subplot(1,2,1)
plt.legend(loc='best')
ax = plt.subplot(1,2,2)
plt.legend(loc='best')
plt.show()
interact(GD_1D,x0=(-3,3,0.1), alpha=(0,10,0.01), n=(0,100,1))
```
interactive(children=(FloatSlider(value=2.0, description='x0', max=3.0, min=-3.0), FloatSlider(value=1.0, desc…
<function __main__.GD_1D(x0=2, alpha=1, n=0)>
What conclusions can be drawn?
The main conclusion that can be drawn is the importance of the selection of the parameter $\alpha$ for the success of the task of finding a minimum of a function.
Also, as usual, the initial guess $x_0$ helps us to select among different local minima.
Question to think about:
- What could happen if you normalize the 'gradient'? In 1D this would mean computing the coefficient $GN=\frac{f'(x_i)}{|f'(x_i)|}$, which gives us the 'direction' in which we should move (in 1D this is just the sign of the derivative); the coefficient $\alpha$ may then control a bit more the magnitude of each step from $x_i$ to $x_{i+1}$. So, how do we understand this? Implement it!
<div id='GD_2D_LinearLeastSquare' />
# Gradient Descent for a 2D linear least-square problem
[Back to TOC](#toc)
In this case we will solve the following least-square problem:
$$
\begin{equation}
\underbrace{\begin{bmatrix}
1 & x_1 \\
1 & x_2 \\
1 & x_3 \\
\vdots & \vdots \\
1 & x_m
\end{bmatrix}}_{\displaystyle{A}}
\underbrace{\begin{bmatrix}
a\\
b
\end{bmatrix}}_{\mathbf{x}}
=
\underbrace{\begin{bmatrix}
y_1 \\
y_2 \\
y_3 \\
\vdots\\
y_m
\end{bmatrix}}_{\displaystyle{\mathbf{b}}}.
\end{equation}
$$
This overdetermined linear least-square problem can be translated to the following form:
$$
\begin{equation}
E(a,b)=\left\|\mathbf{b}-A\,\mathbf{x}\right\|_2^2=\sum_{i=1}^m (y_i-a-b\,x_i)^2.
\end{equation}
$$
Now, to apply the Gradient Descent algorithm we need to compute the Gradient of $E(a,b)$ with respect to $a$ and $b$, which is the following,
$$
\begin{align*}
\frac{\partial E}{\partial a} &= \sum_{i=1}^m -2\,(y_i-a-b\,x_i),\\
\frac{\partial E}{\partial b} &= \sum_{i=1}^m -2\,x_i\,(y_i-a-b\,x_i).
\end{align*}
$$
Notice that in this case we must keep the "-" (minus) sign, since dropping it would change the direction of the Gradient.
Now, we have everything to apply the Gradient Descent in 2D.
For comparison purposes, we will also include the solution obtained by the normal equations.
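As a reference point for the iterations, the closed-form least-squares solution can be computed directly from the normal equations $A^T A\,\mathbf{x} = A^T \mathbf{b}$. The sketch below rebuilds the same synthetic data that the interactive function generates for $m=10$ (the variable names are illustrative and are not used by the widget code below):
```python
import numpy as np

# Same synthetic data as generated inside GD_2D_linear for m = 10
np.random.seed(0)
m = 10
xi = np.random.normal(size=m)
yi = -2 + xi + np.random.normal(loc=0, scale=0.5, size=m)

# Normal equations: solve (A^T A) x = A^T b for x = [a, b]
A = np.column_stack([np.ones(m), xi])
a_ls, b_ls = np.linalg.solve(A.T @ A, A.T @ yi)
print('Normal-equations solution: a =', a_ls, ', b =', b_ls)
```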
```python
def GD_2D_linear(a0=2, b0=2, alpha=0, n=0, m=10):
# Building data.
np.random.seed(0)
xi = np.random.normal(size=m)
yi = -2+xi+np.random.normal(loc=0, scale=0.5, size=m)
# Defining matrix A and the right-hand-side.
# Recall that we usually denote as b the right-hand-side but to avoid confusion with
# the coefficient b, we will just call it RHS.
A = np.ones((m,2))
A[:,1] = xi
RHS = yi
# Defining the Gradient
E = lambda a, b: np.sum(np.power(yi-a-b*xi,2))
G = lambda a, b: np.array([np.sum(-2*(yi-a-b*xi)), np.sum(-2*xi*(yi-a-b*xi))],dtype=float)
    # This function will help us to evaluate the Gradient on the points (X[i,j],Y[i,j])
def E_mG_XY(AA,BB):
Z = np.zeros_like(AA)
U = np.zeros_like(AA)
V = np.zeros_like(AA)
for i in range(m):
for j in range(m):
Z[i,j]=E(AA[i,j],BB[i,j])
uv = -G(AA[i,j],BB[i,j])
U[i,j] = uv[0]
V[i,j] = uv[1]
return Z, U, V
# Plotting the function and its gradient.
# Credits:
# https://matplotlib.org/stable/gallery/images_contours_and_fields/plot_streamplot.html
# https://scipython.com/blog/visualizing-a-vector-field-with-matplotlib/
x = np.linspace(-5,5,m)
AA, BB = np.meshgrid(x,x)
fig = plt.figure(figsize=(14,10))
Z, U, V = E_mG_XY(AA,BB)
cont = plt.contour(AA,BB,Z, 100)
stream = plt.streamplot(AA, BB, U, V, color=Z, linewidth=2, cmap='autumn', arrowstyle='->', arrowsize=2)
fig.colorbar(stream.lines)
fig.colorbar(cont)
plt.scatter(a0, b0, s=300, marker='.', c='k')
my_grad = G(a0,b0)
my_title = r'$\alpha=$ %.4f, $E(a,b)=$ %.4f, $\nabla E(a,b)=$ [%.4f, %.4f]' % (alpha, E(a0,b0), my_grad[0], my_grad[1])
plt.title(my_title)
# Computing steps of Gradient Descent
if n>0:
ab_output=np.zeros((n+1,2))
z0 = np.array([a0,b0],dtype=float)
z0[0] = a0
z0[1] = b0
ab_output[0,:]=z0
# The Gradient Descent Algorithm
for k in range(n):
G_E_0=G(z0[0],z0[1])
z1 = z0-alpha*G_E_0
ab_output[k+1,:]=z1
z0 = z1
plt.scatter(z1[0], z1[1], s=300, marker='.', c='k')
plt.plot(ab_output[:,0],ab_output[:,1],'k-')
my_grad = G(ab_output[-1,0],ab_output[-1,1])
my_title = r'$\alpha=$ %.4f, $E(a,b)=$ %.4f, $\nabla E(a,b)=$ [%.4f, %.4f]' % (alpha, E(ab_output[-1,0],ab_output[-1,1]), my_grad[0], my_grad[1])
plt.title(my_title)
plt.show()
interact(GD_2D_linear, a0=(-4,4,0.1), b0=(-4,4,0.1), alpha=(0,0.1,0.0001), n=(0,100,1), m=(10,100,10))
```
interactive(children=(FloatSlider(value=2.0, description='a0', max=4.0, min=-4.0), FloatSlider(value=2.0, desc…
<function __main__.GD_2D_linear(a0=2, b0=2, alpha=0, n=0, m=10)>
In the previous implementation we used the following notation:
- $n$: Number of iteration of Gradient Descent
- Black dot: Solution $[a_n,b_n]$ at $n$-th step of the Gradient Descent.
- Red-Yellow streamplot: Stream plot of the vector field generated by minus the Gradient of the error function $E(a,b)$
- Blue-Green contour: Contour plot of the error function $E(a,b)$.
Questions:
- Try: $\alpha=0.02$, $n=20$, and $m=10$. What do you observe? (keep the initialization values of $a_0$ and $b_0$)
- Try: $\alpha=0.04$, $n=20$, and $m=10$. What do you observe? (keep the initialization values of $a_0$ and $b_0$)
- Try: $\alpha=0.08$, $n=20$, and $m=10$. What do you observe? (keep the initialization values of $a_0$ and $b_0$)
- Can we use a large value of $\alpha$?
- How are $\alpha$ and the iteration number $n$ related?
<div id='GD_2D_NonLinearLeastSquare' />
# Gradient Descent for a 2D nonlinear least-square problem
[Back to TOC](#toc)
In this case, we will explore the use of the Gradient Descent algorithm applied to a nonlinear least-square problem with an exponential fit.
Let the function to be fit be,
$$
\begin{equation}
y(t) = a\,\exp(b\,t),
\end{equation}
$$
where the error function is defined as follows,
$$
\begin{equation}
E(a,b)=\sum_{i=1}^m (y_i-a\,\exp(b\,t_i))^2.
\end{equation}
$$
Now, to apply the Gradient Descent algorithm we need to compute the Gradient of $E(a,b)$ with respect to $a$ and $b$, which is the following,
$$
\begin{align*}
\frac{\partial E}{\partial a} &= \sum_{i=1}^m 2\,\exp(b\,t_i)(a\,\exp(b\,t_i)-y_i),\\
\frac{\partial E}{\partial b} &= \sum_{i=1}^m 2\,a\,\exp(b\,t_i)\,t_i\,(a\,\exp(b\,t_i)-y_i).
\end{align*}
$$
As you may expect, this approach may create very large values for the gradient, which are very challenging to handle numerically.
So, an alternative approach is the following, which we will call "The Variant":
- Select an initial guess, say $\mathbf{x}_0$
- Compute the direction of fastest decrease: $\mathbf{d}_0=-\nabla E(\mathbf{x}_0)$
- Update the approximation $\mathbf{x}_1=\mathbf{x}_0+\alpha\,\frac{\mathbf{d}_0}{\|\mathbf{d}_0\|}$
- Iterate until a certain threshold is achieved.
Thus, the only change is in the magnitude of the **direction** vector used.
In this case, it will be a unitary direction.
This brings the advantage that $\alpha$ now controls the **length** of the update.
This is useful when you want to control the increment, otherwise it may require a very fine tuning of the parameter (or in general hyperparameter tuning!).
```python
def GD_2D_nonlinear(a0=0.75, b0=0.75, alpha=0, n=0, m=10, TheVariantFlag=False):
# Building data.
np.random.seed(0)
a = 1.1
b = 0.23
y = lambda t: a*np.exp(b*t)
T = 10
ti = T*(np.random.rand(m)*2-1)
yi = y(ti)+np.random.normal(loc=0, scale=0.1, size=m)
# Defining the Gradient
E = lambda a, b: np.sum(np.power(yi-a*np.exp(b*ti),2))
G = lambda a, b: np.array([np.sum(2*np.exp(b*ti)*(a*np.exp(b*ti)-yi)), np.sum(2*a*np.exp(b*ti)*ti*(a*np.exp(b*ti)-yi))],dtype=float)
    # This function will help us to evaluate the Gradient on the points (X[i,j],Y[i,j])
def E_mG_XY(AA,BB):
Z = np.zeros_like(AA)
U = np.zeros_like(AA)
V = np.zeros_like(AA)
for i in range(m):
for j in range(m):
Z[i,j]=E(AA[i,j],BB[i,j])
uv = -G(AA[i,j],BB[i,j])
U[i,j] = uv[0]
V[i,j] = uv[1]
return Z, U, V
# Plotting the function and its gradient.
# Credits:
# https://matplotlib.org/stable/gallery/images_contours_and_fields/plot_streamplot.html
# https://scipython.com/blog/visualizing-a-vector-field-with-matplotlib/
x = np.linspace(-3,3,m)
AA, BB = np.meshgrid(x,x)
fig = plt.figure(figsize=(14,10))
Z, U, V = E_mG_XY(AA,BB)
cont = plt.contour(AA,BB,Z, 10)
stream = plt.streamplot(AA, BB, U, V, color=Z, linewidth=2, cmap='autumn', arrowstyle='->', arrowsize=2)
fig.colorbar(stream.lines)
fig.colorbar(cont)
plt.scatter(a0, b0, s=300, marker='.', c='k')
my_grad = G(a0,b0)
my_title = r'$\alpha=$ %.4f, $E(a,b)=$ %.4f, $\nabla E(a,b)=$ [%.4f, %.4f]' % (alpha, E(a0,b0), my_grad[0], my_grad[1])
plt.title(my_title)
# Computing steps of Gradient Descent
if n>0:
ab_output=np.zeros((n+1,2))
z0 = np.array([a0,b0],dtype=float)
z0[0] = a0
z0[1] = b0
ab_output[0,:]=z0
# The Gradient Descent Algorithm
for k in range(n):
G_E_0=G(z0[0],z0[1])
if not TheVariantFlag:
# Traditional GD
z1 = z0-alpha*G_E_0
else:
# The Variant! Why would this be useful?
z1 = z0-alpha*G_E_0/np.linalg.norm(G_E_0)
ab_output[k+1,:]=z1
z0 = z1
plt.scatter(z1[0], z1[1], s=300, marker='.', c='k')
plt.plot(ab_output[:,0],ab_output[:,1],'k-')
my_grad = G(ab_output[-1,0],ab_output[-1,1])
my_title = r'$\alpha=$ %.6f, $E(a,b)=$ %.4f, $\nabla E(a,b)=$ [%.4f, %.4f]' % (alpha, E(ab_output[-1,0],ab_output[-1,1]), my_grad[0], my_grad[1])
plt.title(my_title)
print('GD found:',ab_output[-1,0],ab_output[-1,1])
# Plotting the original data and the "transformed" solution
# Using the same notation from classnotes:
A = np.ones((m,2))
A[:,1]=ti
K_c2 =np.linalg.lstsq(A,np.log(yi), rcond=None)[0]
c1_ls = np.exp(K_c2[0])
c2_ls = K_c2[1]
print('Transformed Linear LS solution:',c1_ls, c2_ls)
plt.plot(c1_ls,c2_ls,'ms',markersize=20, label='Transformed Linear LS')
print('Original data:',a,b)
plt.plot(a,b,'bd',markersize=20, label='Original data')
plt.legend(loc='lower right')
plt.show()
radio_button_TheVariant=RadioButtons(
options=[('Traditional GD',False),('The Variant GD',True)],
value=False,
description='GD type:',
disabled=False
)
interact(GD_2D_nonlinear, a0=(-2,2,0.01), b0=(-2,2,0.01), alpha=(0,1,0.0001), n=(0,1000,1), m=(10,100,10), TheVariantFlag=radio_button_TheVariant)
```
interactive(children=(FloatSlider(value=0.75, description='a0', max=2.0, min=-2.0, step=0.01), FloatSlider(val…
<function __main__.GD_2D_nonlinear(a0=0.75, b0=0.75, alpha=0, n=0, m=10, TheVariantFlag=False)>
In the previous implementation we used the following notation:
- $n$: Number of iteration of Gradient Descent
- Black dot: Solution $[a_n,b_n]$ at $n$-th step of the Gradient Descent.
- Red-Yellow streamplot: Stream plot of the vector field generated by minus the Gradient of the error function $E(a,b)$
- Blue-Green contour: Contour plot of the error function $E(a,b)$.
<div id='FurtherStudy' />
# Further Study
[Back to TOC](#toc)
Another extension of the Gradient Descent is the so called _Stochastic Gradient Descent Method (SGD)_, very popular in Data Science, Machine Learning and Artificial Neural Networks (ANN) in general.
Here is an interesting reference: [Link](https://optimization.cbe.cornell.edu/index.php?title=Stochastic_gradient_descent); another good reference is the textbook _[Linear Algebra and Learning from Data](https://math.mit.edu/~gs/learningfromdata/)_ by Professor Gilbert Strang, page 359.
A simple way to understand SGD is as follows:
- Select an initial guess, say $\mathbf{x}_0$
- Select a sample of data $D_k$ from the dataset $D$, where $k$ indicates the number of _data points_ of the sample.
- Define the error only including the _data points_ from the sample $D_k$, and call it $E_k(\cdot)$
- Compute the direction of fastest decrease: $\mathbf{d}^{[k]}_0=-\nabla E_k(\mathbf{x}_0)$
- Update the approximation $\mathbf{x}_1=\mathbf{x}_0+\alpha\,\mathbf{d}^{[k]}_0$
- Iterate until a certain threshold is achieved.
So, the key point here is that we don't use the whole dataset $D$ to update the coefficients on each iteration. This clearly has the advantage that the computation is much faster, but the question that arises is: _would this affect the convergence?_ Answer: try it numerically! In general, this approximation behaves very well when used in ANN since in ANN we don't want to _overfit_ the coefficients to the dataset.
Notice that the size of the sample $k$ could even be $1$, which makes the computation very fast!
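A minimal sketch of SGD for the straight-line fit used earlier, with a sample size of $k=1$ per update (the data and names here are illustrative, not taken from the widgets above):
```python
import numpy as np

# Illustrative data for y ~ a + b*x
rng = np.random.default_rng(0)
xi = rng.normal(size=100)
yi = -2 + xi + rng.normal(scale=0.5, size=100)

a, b = 0.0, 0.0    # initial guess
alpha = 0.05       # learning rate
for _ in range(2000):
    i = rng.integers(len(xi))     # pick one data point (sample size k = 1)
    r = yi[i] - a - b * xi[i]     # residual of that single point
    a, b = a + alpha * 2 * r, b + alpha * 2 * r * xi[i]  # step along -grad E_k
print(a, b)        # wanders near the underlying a = -2, b = 1
```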
Before we finish, it is useful to make the connection between the terminology used here and the terminology used in ANN,
- Error function $E(\cdot)$ $\rightarrow$ Loss function $L(\cdot)=\frac{1}{m}E(\cdot)$. Notice however that the loss function $L(\cdot)$ in ANN may not have a quadratic form, for instance it could be $\frac{1}{m}\sum |y_i-a-b\,x_i|$, i.e. the sum of the absolutes values. And in general it may also consider _activator functions_ $\phi(\cdot)$ to model neurons, which modify the loss function as follows $\frac{1}{m}\sum \phi(y_i-a-b\,x_i)$.
- Coefficient $\alpha$ $\rightarrow$ It is called _learning rate_ in ANN, since it controls how fast the ANN _learns_ from samples. As we say in this jupyter notebook, it is very important for a good _training_.
- Adjusting coefficients $\rightarrow$ Training. This is the step where the ANN _learns_ from _samples_. Notice that in ANN a low error may not be required, since it may affect the _generalization capabilities_ of the ANN.
- A brief but useful explanation of Deep Learning is [here](https://math.mit.edu/%7Egs/learningfromdata/siam.pdf).
<div id='acknowledgements' />
# Acknowledgements
[Back to TOC](#toc)
* _Material created by professor Claudio Torres_ (`ctorres@inf.utfsm.cl`) DI UTFSM. November 2021.- v1.0.
* _Update November 2021 - v1.01 - C.Torres_ : Fixing TOC.
* _Update November 2021 - v1.02 - C.Torres_ : Fixing titles size, typos and adding further study section.
```python
```
|
30c3ecd488809a2bae7c32f908352c0a2644ed6c
| 55,581 |
ipynb
|
Jupyter Notebook
|
SC1v2/Bonus - 07-08 - Gradient Descent and Nonlinear Least-Square.ipynb
|
xavierutox/Scientific-Computing
|
bb5dd02362a7c2cdcaf2d24c348ab16c8533482c
|
[
"BSD-3-Clause"
] | 37 |
2017-06-05T21:01:15.000Z
|
2022-03-17T12:51:55.000Z
|
SC1v2/Bonus - 07-08 - Gradient Descent and Nonlinear Least-Square.ipynb
|
xavierutox/Scientific-Computing
|
bb5dd02362a7c2cdcaf2d24c348ab16c8533482c
|
[
"BSD-3-Clause"
] | null | null | null |
SC1v2/Bonus - 07-08 - Gradient Descent and Nonlinear Least-Square.ipynb
|
xavierutox/Scientific-Computing
|
bb5dd02362a7c2cdcaf2d24c348ab16c8533482c
|
[
"BSD-3-Clause"
] | 63 |
2017-10-02T21:21:30.000Z
|
2022-03-23T02:23:22.000Z
| 80.90393 | 26,888 | 0.753765 | true | 6,450 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.805632 | 0.727975 | 0.58648 |
__label__eng_Latn
| 0.905975 | 0.200921 |
Blankenbach Benchmark Case 1
======
Steady isoviscous thermal convection
----
Two-dimensional, incompressible, bottom heated, steady isoviscous thermal convection in a 1 x 1 box, see case 1 of Blankenbach *et al.* 1989 for details.
**This example introduces:**
1. Loading/Saving variables to disk.
2. Defining analysis tools.
3. Finding a steady state.
**Keywords:** Stokes system, advective diffusive systems, analysis tools
**References**
B. Blankenbach, F. Busse, U. Christensen, L. Cserepes, D. Gunkel, U. Hansen, H. Harder, G. Jarvis, M. Koch, G. Marquart, D. Moore, P. Olson, H. Schmeling and T. Schnaubelt. A benchmark comparison for mantle convection codes. Geophysical Journal International, 98, 1, 23–38, 1989
http://onlinelibrary.wiley.com/doi/10.1111/j.1365-246X.1989.tb05511.x/abstract
```python
import underworld as uw
from underworld import function as fn
import underworld.visualisation as vis
import math
import numpy as np
try:
from xvfbwrapper import Xvfb
vdisplay = Xvfb()
vdisplay.start()
except:
pass
```
Setup parameters
-----
```python
boxHeight = 1.0
boxLength = 1.0
# Set grid resolution.
res = 128
# Set max & min temperatures
tempMin = 0.0
tempMax = 1.0
```
Choose which Rayleigh number, see case 1 of Blankenbach *et al.* 1989 for details.
```python
case = "a"
if(case=="a"):
Ra=1.e4
eta0=1.e23
elif(case=="b"):
Ra=1.e5
eta0=1.e22
else:
Ra=1.e6
eta0=1.e21
```
Set input and output file directory
```python
inputPath = 'input/1_03_BlankenbachBenchmark/'
outputPath = 'output/'
# Make output directory if necessary.
if uw.mpi.rank==0:
import os
if not os.path.exists(outputPath):
os.makedirs(outputPath)
```
Create mesh and variables
------
```python
mesh = uw.mesh.FeMesh_Cartesian( elementType = ("Q1/dQ0"),
elementRes = (res, res),
minCoord = (0., 0.),
maxCoord = (boxLength, boxHeight))
velocityField = mesh.add_variable( nodeDofCount=2 )
pressureField = mesh.subMesh.add_variable( nodeDofCount=1 )
temperatureField = mesh.add_variable( nodeDofCount=1 )
temperatureDotField = mesh.add_variable( nodeDofCount=1 )
# initialise velocity, pressure and temperatureDot field
velocityField.data[:] = [0.,0.]
pressureField.data[:] = 0.
temperatureField.data[:] = 0.
temperatureDotField.data[:] = 0.
```
Set up material parameters and functions
-----
Set values and functions for viscosity, density and buoyancy force.
```python
# Set a constant viscosity.
viscosity = 1.
# Create our density function.
densityFn = Ra * temperatureField
# Define our vertical unit vector using a python tuple (this will be automatically converted to a function).
z_hat = ( 0.0, 1.0 )
# A buoyancy function.
buoyancyFn = densityFn * z_hat
```
Set initial temperature field
-----
The initial temperature field can be loaded from a pre-run steady state data set ( ``LoadFromFile = True`` ) or set to a sinusoidal perturbation ( ``LoadFromFile = False`` ).
```python
# Steady state temperature field to be loaded from data file.
LoadFromFile = True
```
**If loading steady state data set**
Data is stored in h5 format from a 64\*64 grid resolution model. Data has been saved for 3 different Rayleigh numbers, $Ra = 10^4$, $10^5$ or $10^6$.
Once loaded the data will need to be re-meshed onto a new grid, unless the new resolution is also 64\*64.
For more information on using meshes see the user guide.
```python
if(LoadFromFile == True):
# Setup mesh and temperature field for 64*64 data file.
mesh64 = uw.mesh.FeMesh_Cartesian( elementType = ("Q1/dQ0"),
elementRes = (64, 64),
minCoord = (0., 0.),
maxCoord = (boxLength, boxHeight),
partitioned = False )
temperatureField64 = mesh64.add_variable( nodeDofCount=1 )
# read in saved steady state temperature field data
if( case == "a" ):
temperatureField64.load(inputPath+'tempfield_inp_64_Ra1e4.h5')
print('Loading 64*64 for Ra = 1e4')
elif( case == "b" ):
temperatureField64.load(inputPath+'tempfield_inp_64_Ra1e5.h5')
print('Loading 64*64 for Ra = 1e5')
else:
temperatureField64.load(inputPath+'tempfield_inp_64_Ra1e6.h5')
print('Loading 64*64 for Ra = 1e6')
if( res==64 ): # no remeshing needed, copy directly
temperatureField.data[:] = temperatureField64.data[:]
else: # remeshing needed
temperatureField.data[:] = temperatureField64.evaluate(mesh)
```
**If using a sinusoidal perturbation**
```python
if(LoadFromFile == False):
temperatureField.data[:] = 0.
pertStrength = 0.1
deltaTemp = tempMax - tempMin
for index, coord in enumerate(mesh.data):
pertCoeff = math.cos( math.pi * coord[0]/boxLength ) * math.sin( math.pi * coord[1]/boxLength )
temperatureField.data[index] = tempMin + deltaTemp*(boxHeight - coord[1]) + pertStrength * pertCoeff
temperatureField.data[index] = max(tempMin, min(tempMax, temperatureField.data[index]))
```
**Show initial temperature field**
```python
fig = vis.Figure()
fig.append( vis.objects.Surface(mesh, temperatureField) )
fig.show()
```
Create boundary conditions
----------
Set temperature boundary conditions on the bottom ( ``MinJ`` ) and top ( ``MaxJ`` ).
```python
for index in mesh.specialSets["MinJ_VertexSet"]:
temperatureField.data[index] = tempMax
for index in mesh.specialSets["MaxJ_VertexSet"]:
temperatureField.data[index] = tempMin
```
Construct sets for both the horizontal and vertical walls. Combine the sets of vertices to make the ``I`` (left and right side walls) and ``J`` (top and bottom walls) sets.
```python
iWalls = mesh.specialSets["MinI_VertexSet"] + mesh.specialSets["MaxI_VertexSet"]
jWalls = mesh.specialSets["MinJ_VertexSet"] + mesh.specialSets["MaxJ_VertexSet"]
freeslipBC = uw.conditions.DirichletCondition( variable = velocityField,
indexSetsPerDof = (iWalls, jWalls) )
tempBC = uw.conditions.DirichletCondition( variable = temperatureField,
indexSetsPerDof = (jWalls,) )
```
System setup
-----
**Setup a Stokes system**
```python
stokes = uw.systems.Stokes( velocityField = velocityField,
pressureField = pressureField,
conditions = [freeslipBC,],
fn_viscosity = viscosity,
fn_bodyforce = buoyancyFn )
# get the default stokes equation solver
solver = uw.systems.Solver( stokes )
```
**Create an advection diffusion system**
```python
advDiff = uw.systems.AdvectionDiffusion( phiField = temperatureField,
phiDotField = temperatureDotField,
velocityField = velocityField,
fn_diffusivity = 1.0,
conditions = [tempBC,] )
```
Analysis tools
-----
**Nusselt number**
The Nusselt number is the ratio between convective and conductive heat transfer
\\[
Nu = -h \frac{ \int_0^l \partial_z T (x, z=h) dx}{ \int_0^l T (x, z=0) dx}
\\]
```python
nuTop = uw.utils.Integral( fn=temperatureField.fn_gradient[1],
mesh=mesh, integrationType='Surface',
surfaceIndexSet=mesh.specialSets["MaxJ_VertexSet"])
nuBottom = uw.utils.Integral( fn=temperatureField,
mesh=mesh, integrationType='Surface',
surfaceIndexSet=mesh.specialSets["MinJ_VertexSet"])
```
```python
nu = - nuTop.evaluate()[0]/nuBottom.evaluate()[0]
print('Nusselt number = {0:.6f}'.format(nu))
```
**RMS velocity**
The root mean squared velocity is defined by integrating over the entire simulation domain via
\\[
\begin{aligned}
v_{rms} = \sqrt{ \frac{ \int_V (\mathbf{v}.\mathbf{v}) dV } {\int_V dV} }
\end{aligned}
\\]
where $V$ denotes the volume of the box.
```python
intVdotV = uw.utils.Integral( fn.math.dot( velocityField, velocityField ), mesh )
vrms = math.sqrt( intVdotV.evaluate()[0] )
print('Initial vrms = {0:.3f}'.format(vrms))
```
Main simulation loop
-----
If the initial conditions are loaded from file then this loop will only take a single step. If you would like to run the entire simulation from a small perturbation then change the ``LoadFromFile`` variable above to equal ``False``. Warning: the simulation will take a long time to get to steady state.
```python
#initialise time, step, output arrays
time = 0.
step = 0
timeVal = []
vrmsVal = []
# starting from steady state == True
if(LoadFromFile == True):
step_end = 1
else:
step_end = 5000
# output frequency
step_output = max(1,min(100, step_end/10))
epsilon = 1.e-8
velplotmax = 0.0
nuLast = -1.0
```
```python
# define an update function
def update():
# Determining the maximum timestep for advancing the a-d system.
dt = advDiff.get_max_dt()
# Advect using this timestep size.
advDiff.integrate(dt)
return time+dt, step+1
```
```python
# Perform steps.
while step<=step_end:
# Solving the Stokes system.
solver.solve()
# Calculate & store the RMS velocity and Nusselt number.
vrms = math.sqrt( intVdotV.evaluate()[0] )
nu = - nuTop.evaluate()[0]/nuBottom.evaluate()[0]
vrmsVal.append(vrms)
timeVal.append(time)
velplotmax = max(vrms, velplotmax)
# print output statistics
if step%(step_end/step_output) == 0:
if(uw.mpi.rank==0):
print('steps = {0:6d}; time = {1:.3e}; v_rms = {2:.3f}; Nu = {3:.3f}; Rel change = {4:.3e}'
.format(step, time, vrms, nu, abs((nu - nuLast)/nu)))
# Check loop break conditions.
if(abs((nu - nuLast)/nu) < epsilon):
if(uw.mpi.rank==0):
print('steps = {0:6d}; time = {1:.3e}; v_rms = {2:.3f}; Nu = {3:.3f}; Rel change = {4:.3e}'
.format(step, time, vrms, nu, abs((nu - nuLast)/nu)))
break
nuLast = nu
# update
time, step = update()
```
Post analysis
-----
**Benchmark values**
The time loop above outputs $v_{rms}$ and $Nu$ as general statistics for the system. For comparison, the benchmark values for the RMS velocity and Nusselt number are shown below for different Rayleigh numbers. All benchmark values shown below were determined in Blankenbach *et al.* 1989 by extrapolation of numerical results.
| $Ra$ | $v_{rms}$ | $Nu$ | $q_1$ | $q_2$ |
| ------------- |:-------------:|:-----:|:-----:|:-----:|
| 10$^4$ | 42.865 | 4.884 | 8.059 | 0.589 |
| 10$^5$ | 193.215 | 10.535 | 19.079 | 0.723 |
| 10$^6$ | 833.990 | 21.972 | 45.964 | 0.877 |
```python
# Let's add a test to ensure things are working as expected
if case == "a":
if not np.isclose(nu,4.884,rtol=1.e-2):
raise RuntimeError("Model did not produce the expected Nusselt number.")
if not np.isclose(vrms,42.865,rtol=1.e-2):
        raise RuntimeError("Model did not produce the expected RMS velocity.")
```
**Resulting pressure field**
Use the same method as above to plot the new temperature field. This can also be used to plot the pressure field, or any other data structures of interest.
```python
figtemp = vis.Figure()
figtemp.append( vis.objects.Surface( mesh, pressureField ) )
figtemp.show()
```
**Plot the velocity vector field**
For this example the velocity field is interesting to see. This is visualised in two ways, firstly plotting a surface colour map of the velocity magnitude, and secondly the velocity vectors at points on the mesh. For aesthetics the vector arrows are scaled by a little more than the maximum $v_{rms}$ value found in the time loop above.
```python
fig2 = vis.Figure()
velmagfield = uw.function.math.sqrt( uw.function.math.dot( velocityField, velocityField ) )
fig2.append( vis.objects.VectorArrows(mesh, velocityField/(2.5*velplotmax), arrowHead=0.2, scaling=0.1) )
fig2.append( vis.objects.Surface(mesh, temperatureField) )
fig2.show()
```
Parallel friendly post analysis
----
When running underworld in parallel the data of each mesh variable is spread across all the processors. However often we will want to calculate a quantity based on data at specific points that may not all be on the same processor.
A solution is presented here which consists of saving the data from all processors to file, then reloading the mesh variable data using a new non-partitioned mesh. This enables all the data to be available to each processor. We will then carry out the post analysis using the first processor.
**Save temperature, pressure and velocity data**
Save the basic mesh variable data to files using the HDF5 format. This is the same file type as is loaded above.
```python
mesh.save(outputPath+"mesh.h5")
temperatureField.save(outputPath+'tempfield.h5')
pressureField.save(outputPath+'presfield.h5')
velocityField.save(outputPath+'velfield.h5')
```
**Construct new mesh and variable on non-partitioned mesh**
Read saved mesh variable data into a new mesh variable where the information is not partitioned across multiple processors. This means that we can use a single processor to access all the data and calculate some quantities of interest.
```python
# build a non-partitioned mesh with same box size
mesh0 = uw.mesh.FeMesh_Cartesian( elementType = ("Q1/dQ0"),
elementRes = (res, res),
minCoord = (0., 0.),
maxCoord = (boxLength, boxHeight),
partitioned = False )
# load previous mesh coordinate data onto new non-partitioned mesh
mesh0.load(outputPath+'mesh.h5')
# load T, P and V data onto the new mesh
# note that pressure is always on the submesh
temperatureField0 = mesh0.add_variable( nodeDofCount=1 )
pressureField0 = mesh0.subMesh.add_variable( nodeDofCount=1 )
velocityField0 = mesh0.add_variable( nodeDofCount=2 )
temperatureField0.load(outputPath+"tempfield.h5")
pressureField0.load(outputPath+"presfield.h5")
velocityField0.load(outputPath+"velfield.h5")
```
**Temperature gradient**
The final benchmarks in the Blankenbach paper involve the temperature gradient in the vertical direction ($\frac{\partial T}{\partial z}$). This is easy to find using the underworld functions, as shown below.
```python
if(uw.mpi.rank==0):
tempgradField = temperatureField0.fn_gradient
vertTGradField = - boxHeight * tempgradField[1] / tempMax # scaled for direct benchmarking below
```
**More benchmark values**
The vertical temperature gradient (above) is set up to be non-dimensional as per Blankenbach et al 1989. To compare to the benchmark values in their work the gradient is compared at the corners of the simulation box: $q_1$ at $x = 0$, $z = h$; $q_2$ at $x = l$, $z = h$; $q_3$ at $x = l$, $z = 0$; $q_4$ at $x = 0$, $z = 0$. Where $h$ = Box_Height and $l$ = Box_Length and the non-dimensional gradient field is given by
\\[
q = \frac{-h}{\Delta T} \left( \frac{\partial T}{\partial z} \right)
\\]
Provided the simulation is run to steady-state with sufficient resolution then the $q$ values should be close to the benchmark values given again below for different Rayleigh numbers.
| $Ra$ | $q_1$ | $q_2$ |
| ------------- |:-----:|:-----:|
| 10$^4$ | 8.059 | 0.589 |
| 10$^5$ | 19.079 | 0.723 |
| 10$^6$ | 45.964 | 0.877 |
```python
if(uw.mpi.rank==0):
q1 = vertTGradField.evaluate( (0., boxHeight))[0][0]
q2 = vertTGradField.evaluate( (boxLength, boxHeight))[0][0]
q3 = vertTGradField.evaluate( (boxLength, 0.))[0][0]
q4 = vertTGradField.evaluate( (0., 0.))[0][0]
print('Rayleigh number = {0:.1e}'.format(Ra))
print('q1 = {0:.3f}; q2 = {1:.3f}'.format(q1, q2))
print('q3 = {0:.3f}; q4 = {1:.3f}'.format(q3, q4))
```
```python
# Let's add a test to ensure things are working as expected
if case == "a":
if not np.isclose(q1,8.020,rtol=1.e-2):
raise RuntimeError("Model did not produce the expected q1.")
if not np.isclose(q2,0.589,rtol=1.e-2):
raise RuntimeError("Model did not produce the expected q2.")
```
**Save time and rms values**
The following command uses the ``numpy`` ``savetxt`` function to output all $v_{RMS}$ values as a function of time. This is particularly useful if you have run the simulation from the perturbed initial condition rather than the saved data file, as you can see the system coming to steady state.
The format for this text file is:
timeVal[0], vrmsVal[0]
timeVal[1], vrmsVal[1]
...
timeVal[N], vrmsVal[N]
```python
if(uw.mpi.rank==0):
np.savetxt(outputPath+'vrms.txt', np.c_[timeVal, vrmsVal], header="Time, VRMS" )
```
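If you later want to inspect the convergence history, the saved file can be read straight back with ``numpy``; a small sketch reusing ``np`` and ``outputPath`` from the cells above (the header line written by ``savetxt`` is skipped automatically):
```python
# Reload the saved time / v_rms history for later inspection
history = np.loadtxt(outputPath+'vrms.txt')
times, vrms_values = history[:,0], history[:,1]
print(times.shape, vrms_values.shape)
```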
**Calculate stress values for benchmark comparison**
Determine stress field for whole box in dimensionless units (King 2009)
\begin{equation}
\tau_{ij} = \eta \frac{1}{2} \left[ \frac{\partial v_j}{\partial x_i} + \frac{\partial v_i}{\partial x_j}\right]
\end{equation}
which for vertical normal stress becomes
\begin{equation}
\tau_{zz} = \eta \frac{1}{2} \left[ \frac{\partial v_z}{\partial z} + \frac{\partial v_z}{\partial z}\right] = \eta \frac{\partial v_z}{\partial z}
\end{equation}
which is implemented for the whole box in the functions defined below.
```python
# get topography from non-partitioned stress tensor
if(uw.mpi.rank==0):
stresstensorFn = 2.* stokes.fn_viscosity*fn.tensor.symmetric( velocityField0.fn_gradient ) - (1.,1.,0.)*pressureField0
verticalStressFn = stresstensorFn[1]
stress_zz_top = -verticalStressFn.evaluate(mesh0.specialSets["MaxJ_VertexSet"])
# subtract the average value for benchmark.
mean_sigma_zz_top = np.mean(stress_zz_top)
sigma_zz_top = stress_zz_top - mean_sigma_zz_top
```
Dimensionalise the stress from the vertical normal stress at the top of the box (King 2009)
$$
\sigma_{t} = \frac{\eta_0 \kappa}{\rho g h^2}\tau _{zz} \left( x, z=h\right)
$$
where all constants have been defined above. Finally calculate the topography, defined using $h = \sigma_{top} / (\rho g)$.
```python
# Set parameters in SI units
if(uw.mpi.rank==0):
grav = 10 # m.s^-2
height = 1.e6 # m
    rho = 4.0e3      # kg.m^-3
kappa = 1.0e-6 # m^2.s^-1
# dimensionalise
dim_sigma_zz_top = (eta0 * kappa / (height*height)) * sigma_zz_top
# find topography in [m]
topography = dim_sigma_zz_top / (rho * grav)
```
**Calculate x-coordinate at zero stress**
Calculate the zero point for the stress along the x-axis at the top of the box using the **interpolation function** from ``numpy``. Note that ``numpy`` requires that the first array input for ``np.interp`` must be increasing, so the negative of the topography is used.
```python
if(uw.mpi.rank==0):
xCoordFn = fn.input()[0]
x = xCoordFn.evaluate(mesh0.specialSets["MinJ_VertexSet"])
xIntercept = np.interp(0.0,-1.0*topography[:, 0],x[:, 0])
```
**Topography comparison**
Topography of the top boundary calculated in the left and right corners as given in Table 9 of Blankenbach et al 1989.
| $Ra$ | $\xi_1$ | $\xi_2$ | $x$ ($\xi = 0$) |
| ------------- |:-----------:|:--------:|:--------------:|
| 10$^4$ | 2254.02 | -2903.23 | 0.539372 |
| 10$^5$ | 1460.99 | -2004.20 | 0.529330 |
| 10$^6$ | 931.96 | -1283.80 | 0.506490 |
```python
if(uw.mpi.rank==0):
e1 = float(topography[0])
e2 = float(topography[len(topography)-1])
print('Rayleigh number = {0:.1e}'.format(Ra))
print('Topography[x=0],[x=max] = {0:.2f}, {1:.2f}'.format(e1, e2))
print('x(topo=0) = {0:.6f}'.format(xIntercept))
# output a summary file with benchmark values (useful for parallel runs)
np.savetxt(outputPath+'summary.txt', [Ra, e1, e2, xIntercept, q1, q2, q3, q4])
```
```python
# Let's add a test to ensure things are working as expected
if case == "a":
if not np.isclose(e1,2254.02,rtol=1.e-2):
raise RuntimeError("Model did not produce the expected xi1.")
if not np.isclose(e2,-2903.23,rtol=1.e-2):
raise RuntimeError("Model did not produce the expected xi2.")
```
|
868c778c68586755e0ae35337d6803b0ae810532
| 32,337 |
ipynb
|
Jupyter Notebook
|
Notebooks/Underworld/03_BlankenbachBenchmark.ipynb
|
underworld-geodynamics-cloud/underworld-cloud-droplet
|
5f786ae88cf42ecac980ad8fdc1c69bb389f948e
|
[
"MIT"
] | null | null | null |
Notebooks/Underworld/03_BlankenbachBenchmark.ipynb
|
underworld-geodynamics-cloud/underworld-cloud-droplet
|
5f786ae88cf42ecac980ad8fdc1c69bb389f948e
|
[
"MIT"
] | null | null | null |
Notebooks/Underworld/03_BlankenbachBenchmark.ipynb
|
underworld-geodynamics-cloud/underworld-cloud-droplet
|
5f786ae88cf42ecac980ad8fdc1c69bb389f948e
|
[
"MIT"
] | null | null | null | 32.272455 | 428 | 0.540372 | true | 5,713 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.763484 | 0.715424 | 0.546215 |
__label__eng_Latn
| 0.841927 | 0.107369 |
# Time Evolution: Split Operator Method
### Category: Prerequisites
### Prerequisites: Quantum Mechanics
When cleaning my apartment, sometimes I just grab the nearest dirty thing to me and try to do something to it. But that is not the most efficient way to get things done. If I'm planning, I'll first dedicate my attention to one problem, like putting clothing away, then rotate my attention to something else, like dirty dishes. I can keep focused on just one task and do it well. Each problem I solve optimally in shorter intervals instead of tackling everything at once.
That same principle applies to solving partial differential equations. [1] called this principle one of the big ideas of numerical computation. In numerics, we call it <b>Strang splitting</b>.
We will be applying Strang splitting to solve the Schrodinger equation, but people apply the same idea to a variety of problems, like ones with different timescales, length scales, or physical processes.
We will be using it to separate terms diagonal in position space from terms diagonal in momentum space.
\begin{equation}
\frac{\partial}{\partial t}y = L_1(y,t)+L_2(y,t)
\end{equation}
Over a small time step, the following approximation holds
\begin{equation}
y(\delta t)= e^{L_1(y,0) \delta t+L_2(y,0) \delta t}y(0)
\end{equation}
For <b>Strang splitting</b>, instead of applying both operators together, we break them up into two. I'll discuss non-commutativity later.
\begin{equation}
y(\delta t)= e^{L_1 (y,0) \delta t} e^{L_2(y,0) \delta t} y(0) = U_1 U_2 y(0)
\end{equation}
$U_1$ and $U_2$ are evolution operators. We can define
\begin{equation}
\tilde{y}(0) = U_2 y(0)
\end{equation}
so that
\begin{equation}
y (\delta t) = U_1 \tilde{y}(0)
\end{equation}
### Applying to Quantum Mechanics
Now let's take a look at the Schrodinger Equation:
\begin{equation}
i \hbar \frac{\partial}{\partial t} | \Psi \rangle = \mathcal{H} | \Psi \rangle
=
\left[ \frac{\hat{p}^2}{2m} + V(x) \right] | \Psi \rangle
=
\left[ \mathcal{H}_p + \mathcal{H}_x \right] | \Psi \rangle
\end{equation}
The Hamiltonian gets separated into position terms and momentum terms. For ease, let's define our unitary evolution operators,
\begin{equation}
U_p(\delta t)=e^{-\frac{i}{\hbar}\mathcal{H}_p \delta t}
\;\;\;\;\;
U_x (\delta t)= e^{-\frac{i}{\hbar}\mathcal{H}_x \delta t}
\end{equation}
I mentioned earlier that I would discuss non-commutativity. We need to do that now. We can't simply separate the evolution operator for the full Hamiltonian into two parts, because we would introduce errors proportional to the commutator. To leading order in the Baker-Campbell-Hausdorff expansion,
\begin{equation}
e^{A}e^{B}=e^{A+B+\frac{1}{2}[A,B]+\cdots}
\end{equation}
$e^{A+B}$ expanded has terms that look like $AB$ <b>and</b> $BA$, whereas $e^{A}e^{B}$ only has terms that look like $AB$. We lose the symmetry of the expression. We can gain back an order of accuracy by symmetrizing our formula, calculating a time step by
\begin{equation}
|\Psi (\delta t) \rangle =
U_x (\delta t/2) U_p (\delta t) U_x (\delta t/2) |\Psi (0) \rangle
\end{equation}
But the next step will then start with $U_x (\delta t/2)$!
\begin{equation}
|\Psi (2 \delta t) \rangle = \left(
U_x (\frac{\delta t}{2}) U_p (\delta t) U_x (\frac{\delta t}{2})\right)\left( U_x (\frac{\delta t}{2}) U_p (\delta t) U_x (\frac{\delta t}{2}) \right) |\Psi (0) \rangle
\end{equation}
\begin{equation}
= U_x (\frac{\delta t}{2}) U_p (\delta t) U_x (\delta t) U_p (\delta t) U_x (\frac{\delta t}{2}) |\Psi (0) \rangle
\end{equation}
All we need to do to add an order of accuracy is start the simulation with $U_x(\delta t/2)$ and end it with $U_x(\delta t/2)$, leaving everything else the same. Pretty remarkable you can get that much of an improvement for that little work. Once we apply this to a bunch of time steps, we get
\begin{equation}
U_x (\frac{\delta t}{2}) U_p (\delta t) \left( \prod_{n-1} U_x(\delta t) U_p (\delta t) \right) U_x (\frac{\delta t}{2}).
\end{equation}
We have to apply a few operators before starting the loop. Between the loop and a measurement, we have to apply an additional operator.
In the spatial domain, the momentum operator involves derivatives and is rather icky. But in the momentum domain, we only have to multiply by $k^2/2m$. Thanks to some nicely optimized libraries, we can just transform into the momentum domain with `fft`, solve the momentum problem there, and transform back with `ifft`.
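Written out with $\hbar = m = 1$ (as assumed below), a single symmetrized step is just three pointwise multiplications and two transforms:
\begin{equation}
\Psi(x, t+\delta t) \approx e^{-i V(x) \delta t/2}\, \mathcal{F}^{-1}\left[ e^{-i k^2 \delta t/2}\, \mathcal{F}\left[ e^{-i V(x) \delta t/2}\, \Psi(x, t) \right] \right].
\end{equation}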
## Rabi Oscillations
To demonstrate time evolution in a simple system with interesting physics, I chose to apply the split operator to Rabi Oscillations between two harmonic oscillators.
To get an idea of what will happen, we will use a qualitative model of two states weakly coupled to each other by a parameter $\epsilon$. If the two minima are sufficiently separated from each other, tunneling will happen slowly and will not significantly affect the shape of the eigenfunctions or their energies $E_0$. Instead of solving for the shape of the wavefunction, we solve a two-state Hamiltonian that looks like this,
\begin{equation}i \hbar \frac{\partial}{\partial t}
\begin{bmatrix}
| \phi_r \rangle \\
| \phi_l \rangle
\end{bmatrix}
= \begin{bmatrix}
E_0 & \epsilon \\
\epsilon & E_0 \\
\end{bmatrix}
\begin{bmatrix}
| \phi_r \rangle \\
| \phi_l \rangle
\end{bmatrix}
\end{equation}
The eigenvalues and corresponding eigenvectors of the matrix are,
\begin{equation}
\lambda_1 = E_0 + \epsilon \;\;\;\;\;\;
\lambda_2 = E_0 - \epsilon
\end{equation}
\begin{equation}
\vec{v}_{1} = \begin{bmatrix}
| \phi_r \rangle \\
| \phi_l \rangle
\end{bmatrix}
\;\;\;\;\;
\vec{v}_2 = \begin{bmatrix}
| \phi_r \rangle \\
- |\phi_l \rangle
\end{bmatrix}
\end{equation}
If a wavefunction starts purely in the right state, we want to choose a combination of our eigenvectors that sets the left state to zero at $t=0$. The resulting wavefunction will evolve as,
\begin{equation}
| \Psi (t) \rangle = \frac{1}{2}e^{-\frac{i}{\hbar} E_0 t } \left(
e^{-\frac{i}{\hbar} \epsilon t} \begin{bmatrix}
| \phi_r \rangle \\
| \phi_l \rangle
\end{bmatrix}
+ e^{\frac{i}{\hbar} \epsilon t} \begin{bmatrix}
| \phi_r \rangle \\
- |\phi_l \rangle
\end{bmatrix}
\right)
\end{equation}
\begin{equation}
= e^{-\frac{i}{\hbar} E_0 t } \begin{bmatrix}
\cos \left( \frac{\epsilon t}{\hbar} \right) | \phi_r \rangle \\
-i \sin \left( \frac{\epsilon t}{\hbar} \right) | \phi_l \rangle
\end{bmatrix}
\end{equation}
Thus this simple phenomenological model shows us how we can expect the wavefunction to move back and forth between the two wells in a cyclical manner, with an oscillation rate set by the tunneling coupling $\epsilon$.
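Squaring the two components gives the occupation probabilities,
\begin{equation}
P_r(t) = \cos^2 \left( \frac{\epsilon t}{\hbar} \right), \qquad
P_l(t) = \sin^2 \left( \frac{\epsilon t}{\hbar} \right),
\end{equation}
so the population returns fully to the initially occupied well after a period $T = \pi \hbar / \epsilon$. (The simulation below starts in the left well instead of the right, so the roles of $P_r$ and $P_l$ simply swap.)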
#### Packages
For the first time, I'm using `Plots.jl`, with `PlotlyJS` as the backend for `Plots`. I had to coax my computer a bit to get the packages to work, but the errors were specific to my computer. I chose to switch as I believe the `PlotlyJS` output will provide a better experience for those viewing the GitHub-Pages static site, though it might cause more trouble for anyone using the jupyter notebooks. If you are having trouble plotting yourself, I recommend just switching back to whatever package is easiest for you.
I used `Plots` to generate a gif for me directly, but I found my old method of generating a folder of .png files and then using `ffmpeg` from the command line much faster and easier.
```julia
using Plots
using FFTW
gr()
```
┌ Info: Recompiling stale cache file /home/shaula/.julia/compiled/v1.1/FFTW/PvIn2.ji for FFTW [7a1cc6ca-52ef-59f5-83cd-3a7055c09341]
└ @ Base loading.jl:1184
Plots.GRBackend()
## Input Parameters
```julia
# Set Time Parameters
t0=0
tf=40000
dt=.1
# Set Space Grid Parameters
dx=.1
xmax=8
# xmin will be -xmax. Making the situation symmetric
# How far seperated are the potential minima
seperation=6;
# minima at seperation/2 and -seperation/2
# How often we measure occupation and view the state
nmeasure=1000;
```
## Automatic Evaluation Parameters
Given the parameters above, we can calculate the following variables that we will use in the code.
Note: `k` gave me a bit of a headache. The algorithm depends quite a bit on the conventions `fft` decides to use ;(
Currently, I'm using odd `N`. You'll have to change the formula if you use even `N`.
```julia
t=collect(t0:dt:tf)
x=collect(-xmax:dx:xmax)
nt=length(t)
N=length(x)
k = [ collect(0:((N-1)/2)) ; collect(-(N-1)/2:-1) ] *2*π/(N*dx);
occupation=zeros(Complex{Float64},floor(Int,nt/nmeasure),2);
```
## The Potentials and Evolution Operators
```julia
Vx=.5*(abs.(x).-seperation/2).^2;
Vk=k.^2/2
Uxh=exp.(-im*Vx*dt/2);
Ux=exp.(-im*Vx*dt);
Uf=exp.(-im*Vk*dt);
"potentials and evolvers defined"
```
"potentials and evolvers defined"
```julia
plot(x, Vx)
plot!(xlabel="x", ylabel="V",
plot_title="Double Well Potential")
```
<div id="c451b5c8-dbab-4b08-9b8a-5098f18c95a9" class="plotly-graph-div"></div>
## The Unperturbed Wavefunctions
The ground state for a harmonic oscillator is a Gaussian
\begin{equation}
\langle x | \phi \rangle= \phi (x) = \frac{1}{\pi^{1/4}} e^{-\frac{x^2}{2}}
\end{equation}
We assume $\omega = \hbar = m = 1$ for sake of convenience.
```julia
ϕ(x)=π^(-.25)*exp(-x.^2/2)
```
ϕ (generic function with 1 method)
```julia
ϕl=ϕ.(x.+seperation/2);
ϕr=ϕ.(x.-seperation/2);
Ψ0=convert(Array{Complex{Float64},1},ϕl);
```
```julia
plot(x,ϕl,label="ϕl")
plot!(x,ϕr,label="ϕr")
plot!(xlabel="x", ylabel="ϕ",
plot_title="Left and Right Wavefunctions")
```
<div id="acd34b41-fa6e-454b-b796-db05b17dc5cc" class="plotly-graph-div"></div>
## FFT's
This algorithm runs a large number of Fast Fourier Transforms and Inverse Fast Fourier Transforms. To speed the process, we can tell the computer to spend some time, in the beginning, allocating the right amount of space and optimizing the routine for the particular size and type of array we are passing it.
The next cell does this, by using `plan_fft` and `plan_ifft` to generate objects that can act on our arrays as operators.
```julia
ft=plan_fft(Ψ0);
Ψf=ft*Ψ0;
ift=plan_ifft(Ψf);
```
## Occupancy of each state
To measure the occupancy of the total wavefunction in either the left or right well groundstate, I compute the value
\begin{equation}
c_r=\langle \Psi | \phi_r \rangle = \int \Psi^* (x) \phi_r(x) dx \;\;\;\;\;\;\; p_r=c_r^2
\end{equation}
\begin{equation}
c_l = \langle \Psi | \phi_l \rangle = \int \Psi^* (x) \phi_l (x) dx\;\;\;\;\;\;\; p_l = c_l^2
\end{equation}
The probability of being in the state is the coefficient squared.
Though in theory these values will always be real, numerical errors introduce small imaginary parts, and Julia will assume that the answer is complex. Therefore, we need to apply `abs` to make the numbers `Float64` instead of `Complex{Float64}`.
```julia
nmeas=1000
c=zeros(Float64,floor(Int,nt/nmeas),2);
```
```julia
# Uncomment the # lines to generate a gif. Note: It takes a long time
Ψ=Ψ0;
jj=1
# The operators we have to start off with
Ψ=Ψ.*Uxh
Psif=ft*Ψ
Psif=Psif.*Uf
Ψ=ift*Psif
#@gif for ii in 1:nt
for ii in 1:nt
Ψ=Ψ.*Ux
Psif=ft*Ψ
Psif=Psif.*Uf
Ψ=ift*Psif
if ii%nmeas == 0
# Every time we measure, we have to finish with a U_x half time step
Ψt=Ψ.*Uxh
c[jj,1]=abs(sum( conj(Ψt).*ϕl )) *dx
c[jj,2]=abs(sum( conj(Ψt).*ϕr )) *dx
jj+=1
end
#plot(x[21:141],Vx[21:141]/6, label="Vx scaled")
#plot!(x,abs(conj(Ψt).*Ψt), label="Wavefunction")
#plot!(xlabel="x", ylabel="Ψ",
# plot_title="Wavefunction evolution")
end
#end every nmeas
Ψ=Ψ.*Uxh;
```
```julia
plot(c[:,1].^2,label="Left Prob")
plot!(c[:,2].^2, label="Right Prob")
plot!(xlabel="x",ylabel="Probability",
plot_title="Rabi Oscillations for a Double Harmonic Oscillator")
```
<div id="b684b895-ceee-4075-a5a3-0e55439f9033" class="plotly-graph-div"></div>
PyPlot generated png's strung together:
Plots generated gif:
The faster one can perform Fourier Transforms, the faster one can perform this algorithm. Therefore, scientists, such as [2], will use multiple cores or GPU's.
In addition to real time evolution, algorithms like this can determine the ground state of an arbitrary system by imaginary time evolution. Soon, I will take the work covered here and look at this aspect of the algorithm.
[1] Glowinski, Roland, Stanley J. Osher, and Wotao Yin, eds. Splitting Methods in Communication, Imaging, Science, and Engineering. Springer, 2017.
[2] Heiko Bauke and Christoph H. Keitel. Accelerating the Fourier split operator method via graphics processing unit. Computer Physics Communications, 182(12):2454–2463 (2011)
```julia
```
|
7868672a59b98c652320e359b47d3d681feba9ad
| 55,970 |
ipynb
|
Jupyter Notebook
|
Prerequisites/Time-Evolution.ipynb
|
IanHawke/M4
|
2d841d4eb38f3d09891ed3c84e49858d30f2d4d4
|
[
"MIT"
] | null | null | null |
Prerequisites/Time-Evolution.ipynb
|
IanHawke/M4
|
2d841d4eb38f3d09891ed3c84e49858d30f2d4d4
|
[
"MIT"
] | null | null | null |
Prerequisites/Time-Evolution.ipynb
|
IanHawke/M4
|
2d841d4eb38f3d09891ed3c84e49858d30f2d4d4
|
[
"MIT"
] | null | null | null | 96.666667 | 19,138 | 0.704217 | true | 3,937 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.863392 | 0.843895 | 0.728612 |
__label__eng_Latn
| 0.969918 | 0.531141 |
```python
from IPython.display import HTML, display
```
# Simulating Planetary Orbits with a Symplectic Integrator
The name of this library, Fluxions, is in homage to Isaac Newton, whose early name for differential calculus was "the method of fluxions." (For an entertaining work of fiction that places the invention of calculus in historical context, I highly recommend Neal Stephenson's Baroque Cycle.) The problem Newton was trying to solve was calculating the motion of the planets around the sun under the influence of gravity. He succeeded in calculating a highly accurate approximation based on all the planets moving around the sun in elliptical orbits. This approximation effectively treats the sun as a very heavy stationary body around which all the planets orbit. It ignores the gravitational forces between planets and any movement of the sun itself. It's a good approximation because the sun is much heavier than the planets put together (approximately 99.8% of the mass of the solar system), but it's not perfect because Jupiter in particular is heavy enough to throw things off.
**Kepler's Laws for Elliptical Orbits**
Citation: http://hyperphysics.phy-astr.gsu.edu/hbase/kepler.html
## Numerical Solution of Differential Equations with Symplectic Integrators
How do scientists and engineers today compute planetary orbits to the highest standards of precision suitable for launching space vehicles? They use numerical integrators to solve Newton's equations of motion. In particular, they use a special class of numerical integrators called **symplectic integrators**. Most students in STEM fields who encounter differential equations are likely to see an unrepresentative sample of ones that are analytically tractable, such as a simple harmonic oscillator. It turns out that most differential equations of interest do **not** have analytical solutions, but have to be solved numerically instead. This is a mature field of study in applied mathematics, and many methods exist.
A simple numerical method is the highly intuitive Forward Euler Method, in which the equation is discretized in time:
Citation: https://en.wikipedia.org/wiki/Euler_method#/media/File:Euler_method.svg
As the picture suggests, discretization errors can gradually accumulate and the approximated trajectory can drift away from the true solution. These errors can be mitigated by using higher order methods with small step size, but unless an integrator is carefully constructed, it is likely to have a drift where it either gains or loses energy as the system evolves. This destroys its ability to make accurate calculations over a large number of steps.
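To make the energy drift concrete, here is a minimal sketch (not taken from the project code) of forward Euler applied to a simple harmonic oscillator; the exactly conserved energy $E = \tfrac{1}{2}(v^2 + x^2)$ inflates steadily as the integration proceeds.

```python
# Forward Euler on the simple harmonic oscillator x'' = -x.
# The true energy E = 0.5 * (v**2 + x**2) is conserved exactly, but forward
# Euler multiplies it by (1 + dt**2) on every step, so it grows without bound.
import numpy as np

dt, n_steps = 0.01, 10_000
x, v = 1.0, 0.0
energy = np.empty(n_steps)
for i in range(n_steps):
    x, v = x + v * dt, v - x * dt          # simultaneous update using the old x, v
    energy[i] = 0.5 * (v * v + x * x)

print(f"initial energy: 0.5   final energy: {energy[-1]:.3f}")  # noticeably larger than 0.5
```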
### Hamilton's Equations
No discussion of symplectic integrators would be complete without introducing Hamilton's equations. This is a formalism for solving the equations of motion in a physical system. The spatial coordinates are $q = (q_1, ... q_n)$ and their "conjugate momenta" are $p = (p_1, ... p_n)$.
$$\frac{dp_i}{dt} = -\frac{\partial \mathcal{H}}{\partial q_i} \\
\frac{dq_i}{dt} = +\frac{\partial \mathcal{H}}{\partial p_i}$$
The highly symmetrical dual structure of these equations gives Hamiltonian systems their special properties.
This can look like a bit much at first, but there's a physical intuition that helps to understand it:
* $q_i$ are the x, y, and z coordinates of different bodies in the problem
* $p_i$ are the x, y, and z coordinates of the momentum of the bodies; $p_i = m_i v_i$
For a classical mechanical system,
$$\mathcal{H} = T + U$$
where $T$ is the total kinetic energy of the system, and $U$ is the total potential energy of the system.
A very important special case is easier to solve: when the Hamiltonian is *separable* and *time invariant*.
In this case, the kinetic energy $T = T(p)$ depends only on the momenta $p$,
and the potential energy $U = U(q)$ depends only on the position.
It can be proven mathematically that any Hamiltonian system (i.e. a system that evolves according to these differential equations) has two special properties:
* It conserves volume in phase space, $dV = dp \; dq$
* It conserves energy $\mathcal{H}(p, q) = \mathcal{H}_0$
### Bad Idea: Simulating a Hamiltonian System with a Non-Symplectic Integrator
What happens if you try to simulate a Hamiltonian system with a non-symplectic integrator? It loses its special properties. The simulated solution will not conserve energy and volume in phase space.
Citation: https://www.av8n.com/physics/symplectic-integrator.htm
### What is a Symplectic Integrator?
The figure above shows a typical behavior of a non-symplectic integrator that is gradually leaking energy. If this were a planetary orbit, the simulation would show the planet crashing into the sun at a time when it should still enjoy a stable orbit.
A symplectic integrator on the other hand **respects** the two key symmetries of the Hamiltonian system.
* It conserves energy
* It conserves volume
### Conservation of Energy
Citation: https://www.av8n.com/physics/symplectic-integrator.htm
### Conservation of Volume in Phase Space
Here are visualizations of the fact that symplectic integrators conserve volume in phase space, but non-symplectic integrators do not:
Citation: https://www.av8n.com/physics/symplectic-integrator.htm
One way to think of a symplectic integrator is that it models the behavior of another symplectic system (one that conserves energy and volume in phase space) that is very close to the true system.
Whereas a non-symplectic integrator models a system close to the true system, but one that is not symplectic.
### Better Idea: Leapfrog Integration -- A Simple Symplectic Integrator
Fortunately there is a simple scheme for numerically solving separable Hamiltonian systems that is symplectic. It is called Leapfrog Integration. Here is a presentation in "traditional" coordinates $x$, $v$ and $a$ for position, velocity, and acceleration respectively. Please note that velocities are indexed at half-integer time steps.
\begin{align}
x_i &= x_{i-1} + v_{i-1/2} \Delta t \\
a_i &= F(x_i) / m_i \\
v_{i+1/2} &= v_{i-1/2} + a_i \Delta t
\end{align}
Here is an equivalent version with only integer indices that is better suited to direct translation into computer code.
This version of the equations was used in the planetary simulation we wrote.
\begin{align}
x_{i+1} &= x_i + v_i \Delta t + \frac{1}{2} a_i \Delta t^2 \\
v_{i+1} &= v_i + \frac{1}{2} \left( a_i + a_{i+1} \right) \Delta t
\end{align}
citation: https://en.wikipedia.org/wiki/Leapfrog_integration
This embarrassingly simple code is at the heart of the planetary simulation presented below:
```python
# Perform leapfrog integration simulation
# https://en.wikipedia.org/wiki/Leapfrog_integration
print(f'Performing leapfrog integration with {N} steps...')
for i in tqdm(range(N-1)):
# Positions at the next time step
q[i+1,:] = q[i,:] + v[i,:] * dt + 0.5 * a[i,:] * dt2
# Accelerations of each body in the system at the next time step
a[i+1,:] = accel_func(q[i+1])
# Velocities of each body at the next time step
v[i+1,:] = v[i,:] + 0.5 * (a[i,:] + a[i+1,:]) * dt
return q, v
```
## Modern Calculations of Planetary Orbits and Ephemerides
Planetary orbits can be described accurately with just a small number of parameters called **orbital elements**. These quantities date back to Johannes Kepler and are sometimes referred to as Keplerian elements in his honor. The reason an entire orbit can be described locally with just six parameters is that the orbits are very close to following elliptical paths as discovered by Kepler. Here is a picture illustrating the definitions of the orbital elements.
Citation: By Lasunncty at the English Wikipedia, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=8971052
Professionals working in astronomy and space exploration have collaborated over the years and built an excellent infrastructure for efficiently sharing data about the positions and orientations of celestial bodies. Important ideas that feed into this include standardization of time measurements and reference frames.
### Julian Days and Astronomical Time Measurements
Astronomy is one of the oldest sciences and historical records date back millennia. Astronomers have defined the concept of the Julian Day (https://en.wikipedia.org/wiki/Julian_day) as the number of days from the beginning of the Julian Period. The Julian period began on January 1, 4713 BC on the proleptic Julian Calendar. It's straightforward to find a conversion utility. If you write your own be sure to check it against a reference implementation on the web!
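A minimal conversion sketch (an illustration only, not the project's actual `julian_day` function) uses the fact that the Unix epoch 1970-01-01 00:00 UTC corresponds to Julian Day 2440587.5:

```python
# Hypothetical julian_day helper for illustration; check it against a reference
# implementation.  Unix epoch (1970-01-01T00:00 UTC) is Julian Day 2440587.5,
# so we simply count the days elapsed since then.
from datetime import datetime, timezone

def julian_day(dt: datetime) -> float:
    unix_epoch_jd = 2440587.5
    seconds = (dt - datetime(1970, 1, 1, tzinfo=timezone.utc)).total_seconds()
    return unix_epoch_jd + seconds / 86400.0

print(julian_day(datetime(2015, 2, 8, tzinfo=timezone.utc)))  # 2457061.5, matching the jplephem example below
```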
### Ecliptic Coordinate Systems and Astronomical Frames of Reference
We take it for granted that we can describe the location of an object on the earth to a high degree of precision with just two numbers, latitude and longitude. Pinning down the position of an object in the solar system is much more complicated because everything is always moving around. The barycenter of the solar system (center of mass) makes a convenient origin, but you also need to define axes. Astronomers have done this by standardizing on the ideas of an ecliptic coordinate system:
Citation: By Tfr000 (talk) 18:12, 20 June 2012 (UTC) - Own work, CC BY-SA 3.0,
https://commons.wikimedia.org/w/index.php?curid=19971787
The most common frame in use today is called **J2000.0 epoch** and is based on the coordinate system above with the mean equinox of the year 2000.
### Ephemerides and jplephem library
An ephemeris in astronomy (plural ephemerides) comes from the Greek word for "diaries." It refers to the positions of astronomical objects at given moments in time. Before the advent of computers, they were generated by a combination of astronomical observations and hand calculations. In modern times, computer simulations can produce very accurate ephemerides for important objects like the planets in the solar system. NASA and the JPL (Jet Propulsion Laboratory) at Caltech run a public service where they offer high quality ephemerides free to the public. This is offered through an interface called Horizons which can be found here: https://ssd.jpl.nasa.gov/horizons.cgi
Fortunately there is a package available on PyPI called **jplephem** available here: https://pypi.org/project/jplephem/.
This is a very cool package that lets you get to work doing astronomical calculations in minutes.
The most convenient way to use it is to download a data file once and save it locally. I downloaded the de430.bsp file here: https://naif.jpl.nasa.gov/pub/naif/generic_kernels/spk/
This file is 117 MB but includes positions of the planets between the years 1550 and 2650. I excerpted it down to 40 years to reduce the size to 4.26 MB so it would fit on GitHub.
Here is a quick example showing how easy it is to use (citation: jplephem documentation on PyPI)
```python
from jplephem.spk import SPK
kernel = SPK.open('../solar_system/resources/planets.bsp')
print(kernel)
```
File type DAF/SPK and format LTL-IEEE with 14 segments:
2451544.50..2466160.50 Solar System Barycenter (0) -> Mercury Barycenter (1)
2451536.50..2466160.50 Solar System Barycenter (0) -> Venus Barycenter (2)
2451536.50..2466160.50 Solar System Barycenter (0) -> Earth Barycenter (3)
2451536.50..2466160.50 Solar System Barycenter (0) -> Mars Barycenter (4)
2451536.50..2466160.50 Solar System Barycenter (0) -> Jupiter Barycenter (5)
2451536.50..2466160.50 Solar System Barycenter (0) -> Saturn Barycenter (6)
2451536.50..2466160.50 Solar System Barycenter (0) -> Uranus Barycenter (7)
2451536.50..2466160.50 Solar System Barycenter (0) -> Neptune Barycenter (8)
2451536.50..2466160.50 Solar System Barycenter (0) -> Pluto Barycenter (9)
2451536.50..2466160.50 Solar System Barycenter (0) -> Sun (10)
2451544.50..2466156.50 Earth Barycenter (3) -> Moon (301)
2451544.50..2466156.50 Earth Barycenter (3) -> Earth (399)
2287184.50..2688976.50 Mercury Barycenter (1) -> Mercury (199)
2287184.50..2688976.50 Venus Barycenter (2) -> Venus (299)
The documentation tells us that all coordinates are in the J2000.0 coordinate system. Distances are given in kilometers. The positions of the sun and 9 planets including Pluto are given vs. the barycenter of the solar system. Here's how to get the position of the earth on February 8, 2015, which is Julian day 2457061.5:
```python
position = kernel[0,3].compute(2457061.5)
print(position)
```
[-1.10369890e+08 8.93069592e+07 3.86931886e+07]
The system includes the ability to compute the velocity by differentiating a Chebyshev polynomial, as follows:
```python
position, velocity = kernel[0,3].compute_and_differentiate(2457061.5)
print(velocity)
```
[-1740702.19738961 -1781622.22600993 -772367.63928313]
## Putting it Together: Strategy to Simulate the Solar System
Interested readers can of course review the code. The highlights are:
* Install the jplephem package
* Download the de430.bsp data file with ephemerides from the JPL
* Write function to load the physical constants used: gravitational constant G, and masses of the sun and planets in kg (these are all available on Wikipedia)
* Write a utility function julian_day that converts a Python date to a julian day
* Write a function configuration(t0, t1, steps_per_day) that returns an array with time steps indexing the rows and positions and velocities across the columns. For the sun and 8 planets, there are 27 positions and 27 velocities: sun_x, sun_y, sun_z, mercury ..., venus ..., earth, ... ... neptune.
* Write a function accel(q) that computes the acceleration applied to all 27 spatial coordinates (note this is time invariant). This is for the "constructive" solution that doesn't use automatic differentiation, just Newton's equations directly (a hedged sketch of this idea appears after this list)
* Write a function make_force(q) that computes the forces applied to all the objects using the Fluxions library. This is done by building a single Fluxion for the gravitational potential energy U(q) once, and then differentiating it with respect to q.
* Write a function simulate_leapfrog() that takes among its arguments functions configuration_func and accel_func. This allows the same integration back end to be used for both the constructive and Fluxions approach to the problem.
* Write a function energy(q, v) that computes the total energy in the system to check that it really conserves energy
* Write a function mse that computes the mean squared error between a simulated path and a reference path pulled directly from the JPL data
* Write functions to plot a still image and generate a movie clip of the orbit
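For reference, here is a hedged sketch of the pairwise Newtonian gravity behind the `accel(q)` bullet above; the function signature, the explicit mass argument, and the array layout are assumptions made for illustration, not the project's actual code.

```python
# Hypothetical sketch of pairwise Newtonian accelerations (illustration only).
# q holds the flattened (x, y, z) coordinates of n bodies in meters; m is a
# length-n array of masses in kg.  The acceleration on body i is
# the sum over j != i of G * m_j * (r_j - r_i) / |r_j - r_i|**3.
import numpy as np

G = 6.67430e-11  # gravitational constant, m^3 kg^-1 s^-2

def accel(q: np.ndarray, m: np.ndarray) -> np.ndarray:
    n = m.size
    pos = q.reshape(n, 3)
    acc = np.zeros_like(pos)
    for i in range(n):
        r = pos - pos[i]                      # vectors from body i to every body
        d3 = np.linalg.norm(r, axis=1) ** 3
        d3[i] = np.inf                        # exclude the self-interaction term
        acc[i] = G * np.sum((m / d3)[:, None] * r, axis=0)
    return acc.reshape(-1)
```

In the project itself, `accel` presumably takes only `q`, with the masses and `G` baked in as the bullet list describes.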
That's it! The whole calculation is done in 271 lines of code in solar_system.py and 466 lines of code in eight_planets.py. Much of that is constants, comments, testing, and making the movie.
Due to bugs with the bash magic command on Windows 10, it's best to demo running the program directly from the console.
We can see that this program works! It simulated the sun and 8 planets at 16 time steps per day for all of 2018 in just 3 seconds. It has a mean squared error vs. the JPL simulation of 9.7E-5 astronomical units. (That's very close on this scale; the JPL model also accounts for additional bodies such as the moon, and the planets move around a lot.) The energy checks out: this system has an energy change on the order of 8.9e-10, while the JPL system shows a larger change on the order of 6.9E-7 because it accounts for additional objects. The orbits look essentially the same from the two sources.
```python
import io
import base64
video = io.open('../solar_system/movie/planets.mp4', 'r+b').read()
encoded = base64.b64encode(video)
# Embed the movie; minimal <video> template reconstructed here (the original HTML string was lost)
HTML(data='''<video width="640" controls>
                 <source src="data:video/mp4;base64,{0}" type="video/mp4" />
             </video>'''.format(encoded.decode('ascii')))
```
The soundtrack to this movie is from "The Planets" by Gustav Holst. This is the opening of the piece, "Jupiter: The Bringer of Jollity"
This notebook is part of https://github.com/AudioSceneDescriptionFormat/splines, see also http://splines.readthedocs.io/.
# Derivation of Non-Uniform Catmull--Rom Splines
Recursive algorithm developed by
<cite data-cite="barry1988recursive">Barry and Goldman (1988)</cite>,
according to
<cite data-cite="yuksel2011parameterization">Yuksel et al. (2011)</cite>, figure 3.
```python
import sympy as sp
sp.init_printing()
```
```python
from utility import NamedExpression, NamedMatrix
```
```python
x_1, x0, x1, x2 = sp.symbols('xbm_-1 xbm:3')
```
```python
t, t_1, t0, t1, t2 = sp.symbols('t t_-1 t:3')
```
```python
p_10 = NamedExpression('pbm_-1,0', x_1 * (t0 - t) / (t0 - t_1) + x0 * (t - t_1) / (t0 - t_1))
p_10
```
```python
p01 = NamedExpression('pbm_0,1', x0 * (t1 - t) / (t1 - t0) + x1 * (t - t0) / (t1 - t0))
p01
```
```python
p12 = NamedExpression('pbm_1,2', x1 * (t2 - t) / (t2 - t1) + x2 * (t - t1) / (t2 - t1))
p12
```
```python
p_101 = NamedExpression('pbm_-1,0,1', p_10.name * (t1 - t) / (t1 - t_1) + p01.name * (t - t_1) / (t1 - t_1))
p_101
```
```python
p012 = NamedExpression('pbm_0,1,2', p01.name * (t2 - t) / (t2 - t0) + p12.name * (t - t0) / (t2 - t0))
p012
```
```python
p = NamedExpression('pbm', p_101.name * (t1 - t) / (t1 - t0) + p012.name * (t - t0) / (t1 - t0))
p
```
```python
p = p.subs([p_101, p012]).subs([p_10, p01, p12])
p
```
```python
p_normalized = p.expr.subs(t, t * (t1 - t0) + t0)
```
```python
M_CR = NamedMatrix(
r'{M_\text{CR}}',
sp.Matrix([[c.expand().coeff(x).factor() for x in (x_1, x0, x1, x2)]
for c in p_normalized.as_poly(t).all_coeffs()]))
```
```python
deltas = [
(t_1, -sp.Symbol('Delta_-1')),
(t0, 0),
(t1, sp.Symbol('Delta0')),
(t2, sp.Symbol('Delta0') + sp.Symbol('Delta1'))
]
```
```python
M_CR.simplify().subs(deltas).factor()
```
```python
uniform = [
(sp.Symbol('Delta_-1'), 1),
(sp.Symbol('Delta0') , 1),
(sp.Symbol('Delta1') , 1),
]
```
```python
M_CR.subs(deltas).subs(uniform).pull_out(sp.S.Half).expr
```
```python
velocity = p.expr.diff(t)
```
```python
velocity.subs(t, t0).subs(deltas).factor()
```
```python
velocity.subs(t, t1).subs(deltas).factor()
```
in general:
\begin{equation}
\boldsymbol{\dot{x}}_i =
\frac{
(t_{i+1} - t_i)^2 (\boldsymbol{x}_i - \boldsymbol{x}_{i-1}) +
(t_i - t_{i-1})^2 (\boldsymbol{x}_{i+1} - \boldsymbol{x}_i)
}{
(t_{i+1} - t_i)(t_i - t_{i-1})(t_{i+1} - t_{i-1})
}
\end{equation}
You might encounter another way to write the equation for $\boldsymbol{\dot{x}}_0$
(e.g. at https://stackoverflow.com/a/23980479/):
```python
(x0 - x_1) / (t0 - t_1) - (x1 - x_1) / (t1 - t_1) + (x1 - x0) / (t1 - t0)
```
... but this is equivalent to the equation shown above:
```python
_.subs(deltas).factor()
```
Yet another way to skin this cat -- sometimes referred to as Bessel--Overhauser -- is to define the velocity of the left and right chords:
```python
v_left = (x0 - x_1) / (t0 - t_1)
v_right = (x1 - x0) / (t1 - t0)
```
... and then combine them in this way:
```python
((t1 - t0) * v_left + (t0 - t_1) * v_right) / (t1 - t_1)
```
Again, that's the same as we had above:
```python
_.subs(deltas).factor()
```
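As a quick consistency check (not part of the original notebook), the tangent at $t_0$ should reduce to the familiar uniform Catmull--Rom tangent $\frac{\boldsymbol{x}_1 - \boldsymbol{x}_{-1}}{2}$ once all parameter intervals are set to 1:

```python
# Sanity check: with uniform parameter intervals the tangent at t0 collapses to
# (x1 - x_-1) / 2.  Recall that the symbol x_1 in this notebook denotes x_{-1}.
tangent_t0 = velocity.subs(t, t0).subs(deltas).subs(uniform)
sp.simplify(tangent_t0 - (x1 - x_1) / 2)  # expected: 0
```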
# Lecture 4: Conditional Probability
## Stat 110, Prof. Joe Blitzstein, Harvard University
----
## Definitions
We continue with some basic definitions of _independence_ and _disjointness_:
#### Definition: independence & disjointness
> Events A and B are __independent__ if $P(A \cap B) = P(A)P(B)$. Knowing that event A occurs tells us nothing about event B.
>
> In contrast, events A and B are __disjoint__ if A occurring means that B cannot occur.
What about the case of events A, B and C?
> Events A, B and C are __independent__ if
>
> \begin{align}
> P(A \cap B) &= P(A)P(B), ~~ P(A \cap C) = P(A)P(C), ~~ P(B \cap C) = P(B)P(C) \\
> P(A \cap B \cap C) &= P(A)P(B)P(C)
> \end{align}
>
> So you need both _pair-wise independence and three-way independence_.
## Newton-Pepys Problem (1693)
Yet another famous example of probability that comes from a [gambling question](https://en.wikipedia.org/wiki/Newton%E2%80%93Pepys_problem).
We have fair dice. Which of the following events is most likely?
- $A$ ... at least one 6 with 6 dice
- $B$ ... at least two 6's with 12 dice
- $C$ ... at least three 6's with 18 dice
Let's solve for the probability of each event using independence.
\begin{align}
P(A) &= 1 - P(A^c) ~~~~ &\text{since the complement of at least one 6 is no 6's at all} \\
&= 1 - \left(\frac{5}{6}\right)^6 &\text{the 6 dice are independent, so we just multiply them all} \\
&\approx 0.665 \\
\\
P(B) &= 1 - P(\text{no 6's}) - P(\text{one 6}) \\
&= 1 - \left(\frac{5}{6}\right)^{12} - 12 \left(\frac{1}{6}\right)\left(\frac{5}{6}\right)^{11} &\text{... does this look familiar?}\\
&\approx 0.619 \\
\\
P(C) &= 1 - P(\text{no 6's}) - P(\text{one 6}) - P(\text{two 6's}) \\
&= 1 - \sum_{k=0}^{2} \binom{18}{k} \left(\frac{1}{6}\right)^k \left(\frac{5}{6}\right)^{18-k} &\text{... it's Binomial probability!} \\
&\approx 0.597
\end{align}
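These three numbers are easy to double-check numerically; here is a short sanity check using `scipy.stats.binom` (added for illustration, not part of the original lecture notes):

```python
# Numerical check of the Newton-Pepys probabilities with the Binomial distribution
from scipy.stats import binom

p_A = 1 - binom.pmf(0, 6, 1/6)    # at least one 6 in 6 rolls
p_B = 1 - binom.cdf(1, 12, 1/6)   # at least two 6's in 12 rolls
p_C = 1 - binom.cdf(2, 18, 1/6)   # at least three 6's in 18 rolls
print(round(p_A, 3), round(p_B, 3), round(p_C, 3))  # 0.665 0.619 0.597
```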
## Conditional Probability
> Conditioning is the soul of probability.
How do you update your beliefs when presented with new information? That's the question here.
Consider 2 events $A$ and $B$. We defined _conditional probability_ a $P(A|B)$, read _the probability of A given B_.
Suppose we just observed that $B$ occurred. If $A$ and $B$ are independent, then this observation tells us nothing new about $A$, and $P(A|B) = P(A)$. But if $A$ and $B$ are not independent, then the fact that $B$ happened is important information and we need to update our uncertainty about $A$ accordingly.
#### Definition: conditional probability
> \begin{align}
> \text{conditional probability } P(A|B) &= \frac{P(A \cap B)}{P(B)} &\text{if }P(B)\gt0
> \end{align}
Prof. Blitzstein gives examples of _Pebble World_ and _Frequentist World_ to help explain conditional probability, but I find that [Legos make things simple](https://www.countbayesie.com/blog/2015/2/18/bayes-theorem-with-lego).
## Theorem 1
The intersection of events $A$ and $B$ can be given by
\begin{align}
P(A \cap B) = P(B) P(A|B) = P(A) P(B|A)
\end{align}
Note that if $A$ and $B$ are independent, then conditioning on $B$ means nothing (and vice-versa) so $P(A|B) = P(A)$, and $P(A \cap B) = P(A) P(B)$.
## Theorem 2
\begin{align}
P(A_1, A_2, ... A_n) = P(A_1)P(A_2|A_1)P(A_3|A_1,A_2)...P(A_n|A_1,A_2,...,A_{n-1})
\end{align}
## Theorem 3: Bayes' Theorem
\begin{align}
P(A|B) = \frac{P(B|A)P(A)}{P(B)} ~~~~ \text{this follows from Theorem 1}
\end{align}
----
## Appendix A: Bayes' Rule Expressed in Terms of Odds
The _odds_ of an event with probability $p$ is $\frac{p}{1-p}$.
An event with probability $\frac{3}{4}$ can be described as having odds _3 to 1 in favor_, or _1 to 3 against_.
Let $H$ be the hypothesis, or the event we are interested in.
Let $D$ be the evidence (event) we gather in order to study $H$.
The _prior_ probability $P(H)$ is that for which $H$ is true __before__ we observe any new evidence $D$.
The _posterior_ probability $P(H|D)$ is, of course, that which is __after__ we observed new evidence.
The _likelihood ratio_ is defined as $\frac{P(D|H)}{P(D^c|H^c)}$
Applying Bayes' Rule, we can see how the _posterior odds_, _prior odds_ and _likelihood odds_ are related:
\begin{align}
P(H|D) &= \frac{P(D|H)P(H)}{P(D)} \\
\\
P(H^c|D) &= \frac{P(D|H^c)P(H^c)}{P(D)} \\
\\
\Rightarrow \underbrace{\frac{P(H|D)}{P(H^c|D)}}_{\text{posterior odds of H}} &= \underbrace{\frac{P(H)}{P(H^c)}}_{\text{prior odds of H}} \times \underbrace{\frac{P(D|H)}{P(D|H^c)}}_{\text{likelihood ratio}}
\end{align}
----
## Appendix B: Translating Odds into Probability
To go from _odds_ back to _probability_
\begin{align}
p = \frac{p/q}{1 + p/q} & &\text{ for } q = 1-p
\end{align}
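For example, odds of 3 to 1 in favor ($p/q = 3$) translate back to

\begin{align}
p = \frac{3}{1 + 3} = \frac{3}{4}
\end{align}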
----
View [Lecture 4: Conditional Probability | Statistics 110](http://bit.ly/2Mwpk11) on YouTube.
# Chapter 3.2 Calculus - Review
Here, we provide some examples of calculus.
More examples: https://scipy-lectures.org/packages/sympy.html
## 1 Calculate limits using Sympy
```python
# import library
import sympy as sym
# pythonic math expressions: add spaces, use single quotes and lowercases
# declaring variables
x, y, z, a, b, c = sym.symbols('x, y, z, a, b, c')
f = sym.sin(x) / x
print(sym.limit(f, x, 0))
n = sym.symbols('n')
print(sym.limit(((n + 3) / (n + 2)) ** n, n, sym.oo)) # oo is mathematical infinity
```
1
E
## 2.1 Calculate derivatives
```python
# declaring variables
x, y, z, a, b, c = sym.symbols('x, y, z, a, b, c')
# we take the derivative using diff
# diff (function, independent variable, number of derivatives)
exp1 = sym.diff(sym.sin(2 * x), x)
print('1st derivative w.r.t x: ', exp1)
exp2 = sym.diff(sym.sin(2*x), x, 2)
print('2nd derivative w.r.t x: ', exp2)
exp3 = sym.diff(sym.sin(2*x), x, 3)
print('3rd derivative w.r.t x: ', exp3)
exp4 = sym.diff(sym.sin(x*y), x, 2, y, 3)
print('\n', exp4)
```
1st derivative w.r.t x: 2*cos(2*x)
2nd derivative w.r.t x: -4*sin(2*x)
3rd derivative w.r.t x: -8*cos(2*x)
x*(x**2*y**2*cos(x*y) + 6*x*y*sin(x*y) - 6*cos(x*y))
```python
# notice the difference of the output format
sym.diff(sym.exp(-x) * sym.cos(3 - x))
```
$\displaystyle - e^{- x} \sin{\left(x - 3 \right)} - e^{- x} \cos{\left(x - 3 \right)}$
## 2.2 Lambdify: Evaluating Expressions Numerically
```python
x = sym.symbols('x')
f = x ** 4 + 7 * x ** 3 + 5 * x ** 2 - 17 * x + 3
f
```
$\displaystyle x^{4} + 7 x^{3} + 5 x^{2} - 17 x + 3$
```python
fLam = sym.lambdify('x', f)
fLam(1)
```
-1
```python
f2 = sym.diff(f)
f2
```
$\displaystyle 4 x^{3} + 21 x^{2} + 10 x - 17$
```python
f2Lam = sym.lambdify('x', f2)
f2Lam(1)
```
18
## 2.3 Partial derivatives of functions of several variables
```python
x, y, z = sym.symbols('x, y, z')
exp6 = sym.exp(x * y * z)
sym.diff(exp6, x)
```
$\displaystyle y z e^{x y z}$
```python
sym.diff(exp6, x, x)
```
$\displaystyle y^{2} z^{2} e^{x y z}$
```python
sym.diff(exp6, x, y)
```
$\displaystyle z \left(x y z + 1\right) e^{x y z}$
```python
sym.diff(exp6, x, y, z)
```
$\displaystyle \left(x^{2} y^{2} z^{2} + 3 x y z + 1\right) e^{x y z}$
## 3.1 Functional integration
```python
value = sym.integrate(sym.sin(x) * sym.cos(x), (x, 0, sym.pi / 2))
value
```
$\displaystyle \frac{1}{2}$
```python
import numpy as np
xs = np.linspace(-4,4,100)
point = -2
f = sym.sin(x) + sym.cos(x)
fLam = sym.lambdify('x', f)
fdLam = sym.lambdify('x', sym.diff(f))
derived = fLam(point) + (fdLam(point) * (xs - point))
f
derived
```
array([-2.31174544e+00, -2.27189489e+00, -2.23204434e+00, -2.19219379e+00,
-2.15234323e+00, -2.11249268e+00, -2.07264213e+00, -2.03279157e+00,
-1.99294102e+00, -1.95309047e+00, -1.91323992e+00, -1.87338936e+00,
-1.83353881e+00, -1.79368826e+00, -1.75383771e+00, -1.71398715e+00,
-1.67413660e+00, -1.63428605e+00, -1.59443549e+00, -1.55458494e+00,
-1.51473439e+00, -1.47488384e+00, -1.43503328e+00, -1.39518273e+00,
-1.35533218e+00, -1.31548163e+00, -1.27563107e+00, -1.23578052e+00,
-1.19592997e+00, -1.15607941e+00, -1.11622886e+00, -1.07637831e+00,
-1.03652776e+00, -9.96677203e-01, -9.56826650e-01, -9.16976098e-01,
-8.77125545e-01, -8.37274992e-01, -7.97424439e-01, -7.57573887e-01,
-7.17723334e-01, -6.77872781e-01, -6.38022228e-01, -5.98171676e-01,
-5.58321123e-01, -5.18470570e-01, -4.78620017e-01, -4.38769465e-01,
-3.98918912e-01, -3.59068359e-01, -3.19217806e-01, -2.79367254e-01,
-2.39516701e-01, -1.99666148e-01, -1.59815595e-01, -1.19965043e-01,
-8.01144899e-02, -4.02639372e-02, -4.13384443e-04, 3.94371683e-02,
7.92877211e-02, 1.19138274e-01, 1.58988827e-01, 1.98839379e-01,
2.38689932e-01, 2.78540485e-01, 3.18391038e-01, 3.58241590e-01,
3.98092143e-01, 4.37942696e-01, 4.77793249e-01, 5.17643801e-01,
5.57494354e-01, 5.97344907e-01, 6.37195460e-01, 6.77046012e-01,
7.16896565e-01, 7.56747118e-01, 7.96597671e-01, 8.36448223e-01,
8.76298776e-01, 9.16149329e-01, 9.55999882e-01, 9.95850434e-01,
1.03570099e+00, 1.07555154e+00, 1.11540209e+00, 1.15525265e+00,
1.19510320e+00, 1.23495375e+00, 1.27480430e+00, 1.31465486e+00,
1.35450541e+00, 1.39435596e+00, 1.43420651e+00, 1.47405707e+00,
1.51390762e+00, 1.55375817e+00, 1.59360873e+00, 1.63345928e+00])
```python
import matplotlib.pyplot as plt
plt.plot(xs, fLam(xs), lw = 2, color = 'k', zorder = 1, label = 'f(x)')
plt.scatter(point, fLam(point), color = 'r', zorder = 2, label = r'$f(x_0)$')
plt.plot(xs, derived, lw = 2, color = 'b', zorder =1, label = r'$f(x_0) + (\nabla_xf)(x_0)(x-x_0)$')
plt.axis([-4, 4, -3, 5])
plt.legend(loc = 1);
```
## 3.1.1 Use the Harvard Autograd library
grad and jacobian take a function as their argument.
More information: https://github.com/HIPS/autograd
```python
import autograd.numpy as np # a concise version of numpy
from autograd import grad, jacobian
x = np.array([5, 3], dtype = float)
def cost(x):
return x[0] ** 2 / x[1] - np.log(x[1])
gradient_cost = grad(cost)
jacobian_cost = jacobian(cost)
gradient_cost(x)
jacobian_cost(np.array([x, x, x]))
```
## 3.1.2 Or use the jacobian method available for matrices in sympy
```python
from sympy import sin, cos, Matrix
from sympy.abc import rho, phi
X = Matrix([rho * cos(phi), rho * sin(phi), rho ** 2])
Y = Matrix([rho, phi])
X.jacobian(Y)
```
$\displaystyle \left[\begin{matrix}\cos{\left(\phi \right)} & - \rho \sin{\left(\phi \right)}\\\sin{\left(\phi \right)} & \rho \cos{\left(\phi \right)}\\2 \rho & 0\end{matrix}\right]$
# Frequentist Inference Case Study - Part B
## Learning objectives
Welcome to Part B of the Frequentist inference case study! The purpose of this case study is to help you apply the concepts associated with Frequentist inference in Python. In particular, you'll practice writing Python code to apply the following statistical concepts:
* the _z_-statistic
* the _t_-statistic
* the difference and relationship between the two
* the Central Limit Theorem, including its assumptions and consequences
* how to estimate the population mean and standard deviation from a sample
* the concept of a sampling distribution of a test statistic, particularly for the mean
* how to combine these concepts to calculate a confidence interval
In the previous notebook, we used only data from a known normal distribution. **You'll now tackle real data, rather than simulated data, and answer some relevant real-world business problems using the data.**
## Hospital medical charges
Imagine that a hospital has hired you as their data scientist. An administrator is working on the hospital's business operations plan and needs you to help them answer some business questions.
In this assignment notebook, you're going to use frequentist statistical inference on a data sample to answer the questions:
* has the hospital's revenue stream fallen below a key threshold?
* are patients with insurance really charged different amounts than those without?
Answering that last question with a frequentist approach makes some assumptions, and requires some knowledge, about the two groups.
We are going to use some data on medical charges obtained from [Kaggle](https://www.kaggle.com/easonlai/sample-insurance-claim-prediction-dataset).
For the purposes of this exercise, assume the observations are the result of random sampling from our single hospital. Recall that in the previous assignment, we introduced the Central Limit Theorem (CLT), and its consequence that the distributions of sample statistics approach a normal distribution as $n$ increases. The amazing thing about this is that it applies to the sampling distributions of statistics that have been calculated from even highly non-normal distributions of data! Recall, also, that hypothesis testing is very much based on making inferences about such sample statistics. You're going to rely heavily on the CLT to apply frequentist (parametric) tests to answer the questions in this notebook.
```python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import t
from numpy.random import seed
medical = pd.read_csv('insurance2.csv')
```
```python
medical.shape
```
(1338, 8)
```python
medical.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>age</th>
<th>sex</th>
<th>bmi</th>
<th>children</th>
<th>smoker</th>
<th>region</th>
<th>charges</th>
<th>insuranceclaim</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>19</td>
<td>0</td>
<td>27.900</td>
<td>0</td>
<td>1</td>
<td>3</td>
<td>16884.92400</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>18</td>
<td>1</td>
<td>33.770</td>
<td>1</td>
<td>0</td>
<td>2</td>
<td>1725.55230</td>
<td>1</td>
</tr>
<tr>
<th>2</th>
<td>28</td>
<td>1</td>
<td>33.000</td>
<td>3</td>
<td>0</td>
<td>2</td>
<td>4449.46200</td>
<td>0</td>
</tr>
<tr>
<th>3</th>
<td>33</td>
<td>1</td>
<td>22.705</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>21984.47061</td>
<td>0</td>
</tr>
<tr>
<th>4</th>
<td>32</td>
<td>1</td>
<td>28.880</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>3866.85520</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
__Q1:__ Plot the histogram of charges and calculate the mean and standard deviation. Comment on the appropriateness of these statistics for the data.
__A:__
```python
estimate = medical.charges.mean()
sd = medical.charges.std() # Pandas standard deviation defaults to the unbiased estimator, i.e., ddof=1
print(f"The mean of charges is ${estimate:.2f}, and the standard deviation is ${sd:.2f}")
```
The mean of charges is $13270.42, and the standard deviation is $12110.01
```python
medical.charges.hist(figsize=(10, 8), bins='auto')
plt.axvline(estimate, color='r', label='mean')
plt.title('Histogram of medical charges')
plt.ylabel('Count of charges')
plt.xlabel('Amount charged')
plt.axvline(estimate + 2*sd, color='r', linestyle='--', label='+/- 2 sd')
plt.axvline(estimate - 2*sd, color='r', linestyle='--')
plt.legend();
```
This histogram shows clearly that the mean and standard deviation are misleading at best as summaries of the medical charges because the distribution of medical charges is highly right-skewed.
__Q2:__ The administrator is concerned that the actual average charge has fallen below 12,000, threatening the hospital's operational model. On the assumption that these data represent a random sample of charges, how would you justify that these data allow you to answer that question? And what would be the most appropriate frequentist test, of the ones discussed so far, to apply?
__A:__ This is a large sample (n=1338), and by the central limit theorem, the distribution of sample means is normally distributed, allowing us to quantify our uncertainty due to random (sampling) error by treating the sample mean as belonging to a normal distribution with sd = (standard deviation of our sample) / sqrt(n). Although this technically calls for a t test (since we do not know the population variance), the z-statistic and the t-statistic will be about the same, because we have 1337 degrees of freedom in our estimate. In other words, the t-distribution with dof=1337 is approximately normal.
__Q3:__ Given the nature of the administrator's concern, what is the appropriate confidence interval in this case? A ***one-sided*** or ***two-sided*** interval? (Refresh your understanding of this concept on p. 399 of the *AoS*). Calculate the critical value and the relevant 95% confidence interval for the mean, and comment on whether the administrator should be concerned.
__A:__ In this case, the administrator is concerned about the average charge falling below \\$12,000, so our null hypothesis is that the average charge is at or above \\$12,000, necessitating a one-sided t-test.
```python
h_null = 12000
n = medical.charges.count()
se = medical.charges.std() / np.sqrt(n)
t_score = (estimate - h_null) / se
p = t.cdf(-t_score, n-1)
conf_95 = np.round(estimate + t.ppf([0.025, 0.975], n-1) * se, 2)
```
```python
print(f"""One-tailed t-test:
The estimate of the mean is {estimate:.2f}, with a standard error of {se:.2f}.
The test statistic is {t_score:.2f}.
The one-tailed p-value for this test statistic is {p:.2e}.
The 95% confidence interval for the estimate is {conf_95}.
""")
```
One-tailed t-test:
The estimate of the mean is 13270.42, with a standard error of 331.07.
The test statistic is 3.84.
The one-tailed p-value for this test statistic is 6.51e-05.
The 95% confidence interval for the estimate is [12620.95 13919.89].
With 95% confidence, we estimate that the average charge is \\$13,270, with a 95% confidence interval from \\$12,621 to \\$13,920. Assuming that this sample is representative, we are highly confident that the average charge is not falling below \\$12,000.
The administrator then wants to know whether people with insurance really are charged a different amount to those without.
__Q4:__ State the null and alternative hypothesis here. Use the _t_-test for the difference between means, where the pooled standard deviation of the two groups is given by:
\begin{equation}
s_p = \sqrt{\frac{(n_0 - 1)s^2_0 + (n_1 - 1)s^2_1}{n_0 + n_1 - 2}}
\end{equation}
and the *t*-test statistic is then given by:
\begin{equation}
t = \frac{\bar{x}_0 - \bar{x}_1}{s_p \sqrt{1/n_0 + 1/n_1}}.
\end{equation}
(If you need some reminding of the general definition of ***t-statistic***, check out the definition on p. 404 of *AoS*).
What assumption about the variances of the two groups are we making here?
__A:__ The null hypothesis is that the mean charge for people with insurance is the same as the mean charge for people without insurance; the alternative hypothesis is that these two population means differ. In using this t-test, we are assuming homogeneity of variance between the two groups.
__Q5:__ Perform this hypothesis test both manually, using the above formulae, and then using the appropriate function from [scipy.stats](https://docs.scipy.org/doc/scipy/reference/stats.html#statistical-tests) (hint, you're looking for a function to perform a _t_-test on two independent samples). For the manual approach, calculate the value of the test statistic and then its probability (the p-value). Verify you get the same results from both.
__A:__
```python
ins = medical[medical.insuranceclaim == 1].charges
noins = medical[medical.insuranceclaim == 0].charges
y_bar_ins = ins.mean()
y_bar_noins = noins.mean()
theta_hat = y_bar_ins - y_bar_noins
n_ins = ins.count()
n_noins = noins.count()
sd_theta = np.sqrt(( (n_ins - 1) * ins.var() + (n_noins - 1) * noins.var() ) / (n_ins + n_noins - 2))
se_theta = (sd_theta) * np.sqrt(1/n_ins + 1/n_noins)
t_score = np.abs(theta_hat) / se_theta
p_value = 2 * t.cdf(-t_score, n_ins + n_noins - 2)
```
```python
print(f"My manually calculated test statistic is {t_score} and p-value is {p_value}")
```
My manually calculated test statistic is 11.89329903087671 and p-value is 4.461230231620972e-31
```python
from scipy.stats import ttest_ind
ttest_ind(ins, noins)
```
Ttest_indResult(statistic=11.893299030876712, pvalue=4.461230231620717e-31)
Congratulations! Hopefully you got the exact same numerical results. This shows that you correctly calculated the numbers by hand. Secondly, you used the correct function and saw that it's much easier to use. All you need to do is pass your data to it.
__Q6:__ Conceptual question: look through the documentation for statistical test functions in scipy.stats. You'll see the above _t_-test for a sample, but can you see an equivalent one for performing a *z*-test from a sample? Comment on your answer.
__A:__ Scipy doesn't provide an equivalent function for a z-test from a sample; the results from an equivalent z-test will approach the results from the t-test as the sample sizes approach infinity. The t distribution accomodates the sampling uncertainty with fatter tails than the normal distribution for low degrees of freedom.
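As an illustration (not part of the required answer), we can carry out the corresponding z-test by hand, reusing the pooled standard error computed above and swapping the t distribution for a standard normal; with samples this large the p-values are essentially identical:

```python
# Manual two-sided z-test using the pooled standard error from the cell above.
# The statistic is the same as the t statistic; only the reference distribution changes.
from scipy.stats import norm

z_score = np.abs(theta_hat) / se_theta
p_value_z = 2 * norm.cdf(-z_score)
print(f"z = {z_score:.2f}, p = {p_value_z:.2e}")
```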
## Learning outcomes
Having completed this project notebook, you now have good hands-on experience:
* using the central limit theorem to help you apply frequentist techniques to answer questions that pertain to very non-normally distributed data from the real world
* performing inference using such data to answer business questions
* forming a hypothesis and framing the null and alternative hypotheses
* testing this using a _t_-test
# Notebook 03: Inverse design parameterization
This notebook will introduce a few basic parameterization concepts for inverse design. The same mode converter device concept as in the previous notebook will be used here as an example.
*Parameterization* refers to how we are representing our device. In the previous notebook, we simply used a 2D array of permittivity values, $\epsilon_r[x,y]$.
However, we observed that this led to continuously varying features, which is not ideal for fabricating devices generated by our optimization.
In this notebook, we will use a modified parameterization of the device to encourage more *binarized* features in the optimized device.
However, as we will see, we will still need to tune various hyperparameters in order to get desirable results.
We begin by importing the necessary python packages:
```python
import numpy as np
import autograd.numpy as npa
import copy
import matplotlib as mpl
mpl.rcParams['figure.dpi']=100
import matplotlib.pylab as plt
from autograd.scipy.signal import convolve as conv
from skimage.draw import circle
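# note: newer scikit-image releases renamed skimage.draw.circle to skimage.draw.disk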
import ceviche
from ceviche import fdfd_ez, jacobian
from ceviche.optimizers import adam_optimize
from ceviche.modes import insert_mode
import collections
# Create a container for our slice coords to be used for sources and probes
Slice = collections.namedtuple('Slice', 'x y')
```
## Introduction: device parameterization
Here, our device will be parameterized by a 2D array of density values (a density distribution, referred to as $\rho[x,y]$).
This part of the parameterization actually looks very similar to the 2D array of permitivitty values we used in the previous notebook.
However, the difference is that we then definite two relatively simple operators that will be applied to this density. They are:
- **Blur operator**: A 2D convolution blur filter with a configurable radius; can be applied a configurable number of times
- **Projection operator**: A sigmoid-like nonlinear function for *binarizing* the output materials; has tunable slope and transition region
After these two operations are applied, the resulting density distribution, $\tilde{\rho}[x,y]$, will be a 2D array with values varying between `0.0` and `1.0`. A value of `0.0` represents a *background* material, while a value of `1.0` represents a *foreground* material.
The permittivity distribution, which represents the optical description of our device, is then constructed as:
\begin{equation}
\epsilon_r[x,y] = \epsilon_{\text{min}} + \left(\epsilon_{\text{max}} - \epsilon_{\text{min}} \right) \tilde{\rho}{[x,y]}
\end{equation}
We start by defining our parameterization operators:
```python
# Projection that drives rho towards a "binarized" design with values either 0 or 1
def operator_proj(rho, eta=0.5, beta=100, N=1):
"""Density projection
eta : Center of the projection between 0 and 1
beta : Strength of the projection
N : Number of times to apply the projection
"""
for i in range(N):
rho = npa.divide(npa.tanh(beta * eta) + npa.tanh(beta * (rho - eta)),
npa.tanh(beta * eta) + npa.tanh(beta * (1 - eta)))
return rho
# Blurring filter that results in smooth features of the structure
# First we define a function to create the kernel
def _create_blur_kernel(radius):
"""Helper function used below for creating the conv kernel"""
rr, cc = circle(radius, radius, radius+1)
kernel = np.zeros((2*radius+1, 2*radius+1), dtype=np.float)
kernel[rr, cc] = 1
return kernel/kernel.sum()
# Then we define the function to apply the operation
def operator_blur(rho, radius=2, N=1):
"""Blur operator implemented via two-dimensional convolution
radius : Radius of the circle for the conv kernel filter
N : Number of times to apply the filter
Note that depending on the radius, the kernel is not always a
perfect circle due to "pixelation" / stair casing
"""
kernel = _create_blur_kernel(radius)
for i in range(N):
# For whatever reason HIPS autograd doesn't support 'same' mode, so we need to manually crop the output
rho = conv(rho, kernel, mode='full')[radius:-radius,radius:-radius]
return rho
```
### Visualization of the blur
Below we visualize the blur operator applied to a random 2D array:
```python
# Specify a range of values for the blur radius
blur_radii = [2, 3, 4, 5, 6]
# Number of times to apply the blur filter
N_blur = 1
# Define a random 2D array to test our blur filter on
rho = np.random.rand(50, 50)
# Create a figure with panels to plot into
fig, axs = plt.subplots(2, len(blur_radii)+1,figsize=(12,4), constrained_layout=True)
# First, plot a dummy conv filter
axs[0,0].set_title('Initial random image')
axs[0,0].imshow(np.zeros((1,1)), vmin=0, cmap='Greys')
# And also plot the initial random image
axs[1,0].imshow(rho)
# Now, loop over the blur radii and visualize each result
for i, radius in enumerate(blur_radii):
kernel = _create_blur_kernel(radius)
kernel_pad = np.pad(kernel, (2+np.max(blur_radii)-radius, 2+np.max(blur_radii)-radius),
'constant', constant_values=(0, 0))
axs[0,i+1].imshow(kernel_pad, vmin=0)
axs[0,i+1].set_title('radius = %d' % radius)
rho_p = operator_blur(rho, radius=radius, N=N_blur)
axs[1,i+1].imshow(rho_p)
axs[0,0].set_yticks([])
axs[0,0].set_xticks([])
axs[0,0].set_ylabel('Conv filter')
axs[1,0].set_ylabel('Image')
fig.align_labels()
plt.show()
```
### Visualization of the projection
Below we visualize the projection operator:
```python
rho = np.linspace(-0.5, +1.5, 999)
# Visualize different values of the projection strength
plt.figure()
for beta in [5, 10, 50, 100]:
plt.plot(rho, operator_proj(rho, beta=beta), label=r"$\beta$ = %d" % beta)
plt.xlabel(r"Input $\rho$")
plt.ylabel(r"Projected $\hat{\rho}$")
plt.legend()
plt.show()
```
### Visualization of blur + projection
Finally, we visualize the combined projection and blur operation. Notice how the larger blur radius leads to larger and smoother features.
```python
# Specify a range of values for the blur radius
blur_radii = [2, 3, 4, 5, 6, 10]
# Number of times to apply the blur filter
N_blur = 1
# Number of times to apply the projection operator
N_proj = 1
# Specify beta value to use for projection
beta = 200
# Specify eta value to use for projection
eta = 0.5
# Define a random 2D array to test our blur filter on
rho = np.random.rand(100, 100)
# Create a figure with panels to plot into
fig, axs = plt.subplots(2, len(blur_radii)+1,figsize=(12,4), constrained_layout=True)
# First, plot a dummy conv filter
axs[0,0].set_title('Initial random image')
axs[0,0].imshow(np.zeros((1,1)), vmin=0, cmap='Greys')
# And also plot the initial random image
axs[1,0].imshow(rho)
# Now, loop over the blur radii and visualize each result
for i, radius in enumerate(blur_radii):
kernel = _create_blur_kernel(radius)
kernel_pad = np.pad(kernel, (2+np.max(blur_radii)-radius, 2+np.max(blur_radii)-radius),
'constant', constant_values=(0, 0))
axs[0,i+1].imshow(kernel_pad, vmin=0)
axs[0,i+1].set_title('radius = %d' % radius)
rho_p = operator_blur(rho, radius=radius, N=N_blur)
rho_p = operator_proj(rho_p, beta=beta, eta=eta, N=N_proj)
axs[1,i+1].imshow(rho_p)
axs[0,0].set_yticks([])
axs[0,0].set_xticks([])
axs[0,0].set_ylabel('Conv filter')
axs[1,0].set_ylabel('Image')
fig.align_labels()
plt.show()
```
---
## Simulation and optimization parameters
```python
# Angular frequency of the source in 1/s
omega=2*np.pi*200e12
# Spatial resolution in meters
dl=40e-9
# Number of pixels in x-direction
Nx=120
# Number of pixels in y-direction
Ny=120
# Number of pixels in the PMLs in each direction
Npml=20
# Minimum value of the relative permittivity
epsr_min=1.0
# Maximum value of the relative permittivity
epsr_max=12.0
# Radius of the smoothening features
blur_radius=5
# Number of times to apply the blur
N_blur=1
# Strength of the binarizing projection
beta=500.0
# Middle point of the binarizing projection
eta=0.5
# Number of times to apply the blur
N_proj=1
# Space between the PMLs and the design region (in pixels)
space=10
# Width of the waveguide (in pixels)
wg_width=12
# Length in pixels of the source/probe slices on each side of the center point
space_slice=8
# Number of epochs in the optimization
Nsteps=50
# Step size for the Adam optimizer
step_size=5e-3
```
## Utility functions
As in **Notebook 02**, here we define several utility functions for constructing the simulation domain, feed waveguides, design region, input source, and output probe,
```python
def init_domain(Nx, Ny, Npml, space=10, wg_width=10, space_slice=5):
"""Initializes the domain and design region
space : The space between the PML and the structure
wg_width : The feed and probe waveguide width
space_slice : The added space for the probe and source slices
"""
rho = np.zeros((Nx, Ny))
bg_rho = np.zeros((Nx, Ny))
design_region = np.zeros((Nx, Ny))
# Input waveguide
bg_rho[0:int(Npml+space),int(Ny/2-wg_width/2):int(Ny/2+wg_width/2)] = 1
# Input probe slice
input_slice = Slice(x=np.array(Npml+1),
y=np.arange(int(Ny/2-wg_width/2-space_slice), int(Ny/2+wg_width/2+space_slice)))
# Output waveguide
bg_rho[int(Nx-Npml-space)::,int(Ny/2-wg_width/2):int(Ny/2+wg_width/2)] = 1
# Output probe slice
output_slice = Slice(x=np.array(Nx-Npml-1),
y=np.arange(int(Ny/2-wg_width/2-space_slice), int(Ny/2+wg_width/2+space_slice)))
design_region[Npml+space:Nx-Npml-space, Npml+space:Ny-Npml-space] = 1
# Const init
rho[Npml+space:Nx-Npml-space, Npml+space:Ny-Npml-space] = 0.5
# Random init
# rho = design_region * np.random.rand(Nx, Ny)
return rho, bg_rho, design_region, input_slice, output_slice
def viz_sim(epsr, source, slices=[]):
"""Solve and visualize a simulation with permittivity 'epsr'
"""
simulation = fdfd_ez(omega, dl, epsr, [Npml, Npml])
Hx, Hy, Ez = simulation.solve(source)
fig, ax = plt.subplots(1, 2, constrained_layout=True, figsize=(6,3))
ceviche.viz.real(Ez, outline=epsr, ax=ax[0], cbar=False)
for sl in slices:
ax[0].plot(sl.x*np.ones(len(sl.y)), sl.y, 'b-')
ceviche.viz.abs(epsr, ax=ax[1], cmap='Greys');
plt.show()
return (simulation, ax)
def mask_combine_rho(rho, bg_rho, design_region):
"""Utility function for combining the design region rho and the background rho
"""
return rho*design_region + bg_rho*(design_region==0).astype(np.float)
def epsr_parameterization(rho, bg_rho, design_region, radius=2, N_blur=1, beta=100, eta=0.5, N_proj=1):
"""Defines the parameterization steps for constructing rho
"""
# Combine rho and bg_rho; Note: this is so the subsequent blur sees the waveguides
rho = mask_combine_rho(rho, bg_rho, design_region)
rho = operator_blur(rho, radius=radius, N=N_blur)
rho = operator_proj(rho, beta=beta, eta=eta, N=N_proj)
# Final masking undoes the blurring of the waveguides
rho = mask_combine_rho(rho, bg_rho, design_region)
return epsr_min + (epsr_max-epsr_min) * rho
def mode_overlap(E1, E2):
"""Defines an overlap integral between the sim field and desired field
"""
return npa.abs(npa.sum(npa.conj(E1)*E2))
```
## Simulate the initial structure
```python
# Initialize the parametrization rho and the design region
rho, bg_rho, design_region, input_slice, output_slice = \
init_domain(Nx, Ny, Npml, space=space, wg_width=wg_width, space_slice=space_slice)
# Compute the permittivity from rho_init, including blurring and projection
epsr_init = epsr_parameterization(rho, bg_rho, design_region, \
radius=blur_radius, N_blur=N_blur, beta=beta, eta=eta, N_proj=N_proj)
# Setup source
source = insert_mode(omega, dl, input_slice.x, input_slice.y, epsr_init, m=1)
# Setup probe
probe = insert_mode(omega, dl, output_slice.x, output_slice.y, epsr_init, m=2)
# Simulate initial device
simulation, ax = viz_sim(epsr_init, source, slices=[input_slice, output_slice])
# get normalization factor (field overlap before optimizing)
_, _, Ez = simulation.solve(source)
E0 = mode_overlap(Ez, probe)
```
## Run the mode converter optimization
```python
def objective(rho):
"""Objective function called by optimizer
1) Takes the density distribution as input
2) Constructs epsr
2) Runs the simulation
3) Returns the overlap integral between the output wg field
and the desired mode field
"""
rho = rho.reshape((Nx, Ny))
epsr = epsr_parameterization(rho, bg_rho, design_region, \
radius=blur_radius, N_blur=N_blur, beta=beta, eta=eta, N_proj=N_proj)
simulation.eps_r = epsr
_, _, Ez = simulation.solve(source)
return mode_overlap(Ez, probe) / E0
# Compute the gradient of the objective function using reverse-mode differentiation
objective_jac = jacobian(objective, mode='reverse')
# Maximize the objective function using an ADAM optimizer
(rho_optimum, loss) = adam_optimize(objective, rho.flatten(), objective_jac,
Nsteps=Nsteps, direction='max', step_size=step_size)
# Simulate optimal device
rho_optimum = rho_optimum.reshape((Nx, Ny))
epsr = epsr_parameterization(rho_optimum, bg_rho, design_region, \
radius=blur_radius, N_blur=N_blur, beta=beta, eta=eta, N_proj=N_proj)
viz_sim(epsr, source, slices=[input_slice, output_slice]);
```
## Visualizing the parameterization steps
We can take a look what each step of our parameterization actually looks like. Below we visualize the raw density distribution, the blurred density distribution, and finally, the projected device density.
```python
fig, axs = plt.subplots(1,3,constrained_layout=True,sharex=True,sharey=True, figsize=(8.5,3))
Z = mask_combine_rho(rho_optimum, bg_rho, design_region)
ceviche.viz.abs(Z, cmap='Greys', ax=axs[0])
axs[0].set_xlabel('')
axs[0].set_xticks([])
axs[0].set_ylabel('')
axs[0].set_yticks([])
axs[0].set_title('Raw input density')
Z = mask_combine_rho(rho_optimum, bg_rho, design_region)
Z = operator_blur(Z, radius=blur_radius, N=N_blur)
# Z = operator_proj(Z, beta=beta, eta=eta)
Z = mask_combine_rho(Z, bg_rho, design_region)
ceviche.viz.abs(Z, cmap='Greys', ax=axs[1])
axs[1].set_xlabel('')
axs[1].set_ylabel('')
axs[1].set_title('Blurred density')
Z = rho_optimum
Z = mask_combine_rho(Z, bg_rho, design_region)
Z = operator_blur(Z, radius=blur_radius, N=N_blur)
Z = operator_proj(Z, beta=beta, eta=eta, N=N_proj)
Z = mask_combine_rho(Z, bg_rho, design_region)
ceviche.viz.abs(Z, cmap='Greys', ax=axs[2])
axs[2].set_xlabel('')
axs[2].set_ylabel('')
axs[2].set_title('Projected density (final structure)');
```
## Penalizing the amount of material
We notice in the optimized device shown above that there seems to be a lot of unnecessary material. Here, we will try to eliminate some of this extra material by adding a penalty term to the objective function.
To add the penalty term, we only need to modify the objective function:
```python
def objective(rho, penalty_weight=1e8):
rho = rho.reshape((Nx, Ny))
epsr = epsr_parameterization(rho, bg_rho, design_region, \
radius=blur_radius, N_blur=N_blur, beta=beta, eta=eta, N_proj=N_proj)
simulation.eps_r = epsr
_, _, Ez = simulation.solve(source)
# This penalty term is directly proportional to the material area
# penalty = penalty_weight * (design_region*(epsr-1)).sum() # penalty_weight = 1e-10
# This penalty term is the L2-norm of the raw density
penalty = penalty_weight * npa.linalg.norm(rho)
return mode_overlap(Ez, probe) / E0 - penalty
```
```python
# Initialize the parametrization rho and the design region
rho, bg_rho, design_region, input_slice, output_slice = \
init_domain(Nx, Ny, Npml, space=space, wg_width=wg_width, space_slice=space_slice)
# Compute the permittivity from rho_init, including blurring and projection
epsr_init = epsr_parameterization(rho, bg_rho, design_region, \
radius=blur_radius, N_blur=N_blur, beta=beta, eta=eta, N_proj=N_proj)
# Setup source
source = insert_mode(omega, dl, input_slice.x, input_slice.y, epsr_init, m=1)
# Setup probe
probe = insert_mode(omega, dl, output_slice.x, output_slice.y, epsr_init, m=2)
# Simulate initial device
viz_sim(epsr_init, source, slices=[input_slice, output_slice])
# Run optimization
objective_jac = jacobian(objective, mode='reverse')
(rho_optimum, loss) = adam_optimize(objective, rho.flatten(), objective_jac,
Nsteps=Nsteps, direction='max', step_size=step_size)
# Simulate optimal device
rho_optimum = rho_optimum.reshape((Nx, Ny))
epsr_pen = epsr_parameterization(rho_optimum, bg_rho, design_region, \
radius=blur_radius, N_blur=N_blur, beta=beta, eta=eta, N_proj=N_proj)
viz_sim(epsr_pen, source, slices=[input_slice, output_slice]);
```
We can then quantify how much our penalization impacted the amount of material used in the final design.
```python
# Calculate areas
def calc_design_area(design_region, epsr_max, epsr_min, epsr, dl):
"""Computes the area of material used in the design region
"""
A = ((epsr-epsr_min)/epsr_max * design_region).sum() * dl**2 * 1e12
A_design_region = design_region.sum() * (dl)**2 * 1e12
return A, A_design_region
A_original, A_design_region = calc_design_area(design_region, epsr_max, epsr_min, epsr, dl)
A_pen, _ = calc_design_area(design_region, epsr_max, epsr_min, epsr_pen, dl)
# Print summary
print('Design region area: %.2f um^2' % A_design_region)
print('Unpenalized design area: %.2f um^2' % A_original)
print('Penalized design area: %.2f um^2' % A_pen)
print('---')
print('Improvement: %.2f%%' % (100*(1-A_pen/A_original)))
```
Design region area: 5.76 um^2
Unpenalized design area: 1.96 um^2
Penalized design area: 1.03 um^2
---
Improvement: 47.32%
| e41605cafc168bfa6a48ebfeacfad8a8ee0d112d | 378,110 | ipynb | Jupyter Notebook | 03_Invdes_parameterization.ipynb | fancompute/workshop-invdesign | 200eaa0abc3f691137e228e98ebb62446015ec38 | ["MIT"] | 57 | 2019-11-22T18:22:21.000Z | 2022-03-15T15:38:08.000Z | 03_Invdes_parameterization.ipynb | Ydeh22/workshop-invdesign | 200eaa0abc3f691137e228e98ebb62446015ec38 | ["MIT"] | 4 | 2019-12-14T16:57:42.000Z | 2021-04-01T05:41:30.000Z | 03_Invdes_parameterization.ipynb | Ydeh22/workshop-invdesign | 200eaa0abc3f691137e228e98ebb62446015ec38 | ["MIT"] | 20 | 2019-11-23T19:37:37.000Z | 2022-03-22T22:30:20.000Z | 421.057906 | 107,924 | 0.930462 | true | 5,093 | Qwen/Qwen-72B | 1. YES 2. YES | 0.924142 | 0.826712 | 0.763999 | __label__eng_Latn | 0.87413 | 0.613358 |
# Multi-trait LMMs
### Set up the environment
```python
%matplotlib inline
from warnings import simplefilter
simplefilter(action='ignore', category=FutureWarning)
import sys
import scipy as sp
import numpy as np
import scipy.stats as st
import pylab as pl
import pandas as pd
import h5py
sp.random.seed(0)
import limix.util as lmx_util
import limix.plot as lmx_plt
# Monkey-patch limix's manhattan plot so that peaks are not annotated in this notebook
def no_annotate(*args):
    pass
from limix.plot import manhattan
manhattan._annotate = no_annotate
```
### Download the data
First, load the yeast data, which have been imported into an hdf5 file.
To process your own data, use the limix command line binary (see [here](http://nbviewer.jupyter.org/github/limix/limix-tutorials/blob/master/preprocessing_QC/loading_files.ipynb) for an example).
```python
sys.path.append('./..')
import data as tutorial_data
file_name = tutorial_data.get_file('BYxRM')
```
### Set up the data object
The data object allows us to query the genotype and phenotype data.
```python
f = h5py.File(file_name, 'r')
pheno_group = f['phenotype']
pheno_df = pd.DataFrame(pheno_group['matrix'][:],
columns=np.char.decode(pheno_group['col_header']['phenotype_ID'][:]),
index=np.char.decode(pheno_group['row_header']['sample_ID'][:]))
```
```python
pheno_df.shape
```
(1008, 46)
```python
pheno_df.head()
```
    (output: first 5 rows of the 1008 × 46 phenotype DataFrame; columns range from Cadmium_Chloride, Caffeine, Calcium_Chloride, … to YPD, YPD:15C, YPD:37C, YPD:4C and Zeocin)
```python
geno_group = f['genotype']
chromosomes = geno_group['col_header']['chrom'][:]
positions = geno_group['col_header']['pos'][:]
geno_df = pd.DataFrame(geno_group['matrix'][:], columns=positions,
index=np.char.decode(geno_group['row_header']['sample_ID'][:]),
dtype='int64')
```
```python
geno_df.shape
```
(1008, 11623)
```python
geno_df.head()
```
    (output: first 5 rows of the 1008 × 11623 genotype DataFrame of 0/1 markers, indexed by sample ID and labelled by SNP position)
### Visualize the correlation among traits
```python
#Remove NaNs
filtered_pheno_df = pheno_df.dropna()
phenotype_names = filtered_pheno_df.columns
# center phenotype
normalized_pheno_df = (filtered_pheno_df - filtered_pheno_df.mean()) / filtered_pheno_df.std()
```
```python
pl.figure(figsize=[20,20])
Ce= np.cov(normalized_pheno_df.values.T)
pl.imshow(Ce,aspect='auto',interpolation='none')
pl.xticks(np.arange(len(phenotype_names)),phenotype_names,rotation=90)
pl.yticks(np.arange(len(phenotype_names)),phenotype_names,rotation=0)
pl.colorbar()
```
### Select a subset of the phenotypes
```python
phenotype_names = ['YPD:37C','YPD:15C','YPD:4C']
data_subsample = pheno_df[phenotype_names]
```
```python
phenotypes = data_subsample.dropna()
sample_idx = phenotypes.index.intersection(geno_df.index)
phenotypes = phenotypes.loc[sample_idx]
phenotypes = (phenotypes - phenotypes.mean()) / phenotypes.std()
phenotypes.describe()
```
                 YPD:37C        YPD:15C         YPD:4C
    count   8.040000e+02   8.040000e+02   8.040000e+02
    mean   -2.651279e-17   2.209399e-18   5.744438e-17
    std     1.000000e+00   1.000000e+00   1.000000e+00
    min    -2.651086e+00  -2.497574e+00  -2.020168e+00
    25%    -7.669938e-01  -7.652285e-01  -7.548557e-01
    50%     4.068578e-02  -7.631450e-02  -1.699190e-01
    75%     8.097791e-01   7.221308e-01   6.011856e-01
    max     2.867696e+00   2.745373e+00   3.382201e+00
We start by (further) examining the correlation among the phenotypes:
```python
# pairwise correlations of the three selected traits
pl.figure(figsize=[15,5])
pl.subplot(1,3,1)
pl.plot(phenotypes[phenotype_names[0]].values,phenotypes[phenotype_names[1]].values,'.')
pl.xlabel(phenotype_names[0])
pl.ylabel(phenotype_names[1])
pl.subplot(1,3,2)
pl.plot(phenotypes[phenotype_names[1]].values,phenotypes[phenotype_names[2]].values,'.')
pl.xlabel(phenotype_names[1])
pl.ylabel(phenotype_names[2])
pl.subplot(1,3,3)
pl.plot(phenotypes[phenotype_names[0]].values,phenotypes[phenotype_names[2]].values,'.')
pl.xlabel(phenotype_names[0])
pl.ylabel(phenotype_names[2])
```
# Variance Decomposition
Here, we show how to estimate the genetic and residual covariances using the limix class ``limix.vardec.VarianceDecomposition`` (see [here](https://www.pydoc.io/pypi/limix-1.0.6/autoapi/varDecomp/varianceDecomposition/index.html)).
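For reference, the model fitted below can be summarized as follows (our own summary of the standard multi-trait variance component model, up to the fixed-effect mean term; it is not quoted from the limix documentation):
\begin{equation}
\text{vec}(\mathbf{Y}) \sim \mathcal{N}\!\left(\mathbf{0},\; \mathbf{C}_g \otimes \mathbf{K} + \mathbf{C}_n \otimes \mathbf{I}_N\right),
\end{equation}
where $\mathbf{K}$ is the sample relatedness matrix, $\mathbf{C}_g$ the genetic trait covariance, and $\mathbf{C}_n$ the residual (noise) trait covariance.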
```python
from limix.vardec import VarianceDecomposition
```
```python
# genetic relatedness matrix
from limix.stats import linear_kinship, gower_norm
sample_relatedness = gower_norm(linear_kinship(geno_df.loc[sample_idx].values))
```
100%|██████████| 100/100 [00:00<00:00, 421.01it/s]
```python
# variance component model
vc = VarianceDecomposition(phenotypes.values)
vc.addFixedEffect()
vc.addRandomEffect(K=sample_relatedness)
vc.addRandomEffect(is_noise=True)
vc.optimize()
# retrieve genetic and noise covariance matrix
Cg = vc.getTraitCovar(0)
Cn = vc.getTraitCovar(1)
```
### Empirical Correlation
```python
Ce = np.corrcoef(phenotypes.T)
```
### Plot the correlation among the phenotypes (empirical), the genetic covariance among phenotypes, and the noise covariance.
```python
pl.figure(figsize=[15,5])
pl.subplot(1,3,1)
pl.imshow(Ce,aspect='auto',interpolation='none',vmin=-1,vmax=1)
pl.xticks(np.arange(len(phenotype_names)),phenotypes.columns)
pl.yticks(np.arange(len(phenotype_names)),phenotypes.columns)
pl.title('empirical correlation')
pl.subplot(1,3,2)
pl.imshow(Cg,aspect='auto',interpolation='none',vmin=-1,vmax=1)
pl.xticks(np.arange(len(phenotype_names)),phenotypes.columns)
pl.yticks(np.arange(len(phenotype_names)),phenotypes.columns)
pl.title('genetic covariance')
pl.subplot(1,3,3)
pl.imshow(Cn,aspect='auto',interpolation='none',vmin=-1,vmax=1)
pl.xticks(np.arange(len(phenotype_names)),phenotypes.columns)
pl.yticks(np.arange(len(phenotype_names)),phenotypes.columns)
pl.title('noise covariance')
pl.colorbar()
```
# Univariate association testing
As shown earlier, univariate (single-trait) association testing with linear mixed models can be performed with the function ``limix.qtl.qtl_test_lmm`` (see [here](https://limix.readthedocs.io/en/stable/qtl.html#linear-mixed-models)).
```python
from limix.qtl import qtl_test_lmm
```
```python
# load snp data
snps = geno_df.loc[sample_idx].values
positions = geno_df.columns
```
Run the LMM and convert the P-values into a pandas DataFrame.
```python
lmm = qtl_test_lmm(snps=snps,
pheno=phenotypes.values,
K=sample_relatedness)
pv_lmm = lmm.getPv()
pvalues_lmm = pd.DataFrame(data=pv_lmm.T,
index=positions,
columns=phenotype_names)
```
Plot the results from (univariate) GWAS using Manhattan plots.
```python
for p_ID in phenotype_names:
pl.figure(figsize=[15,4])
lmx_plt.plot_manhattan(pd.DataFrame(dict(pv=pvalues_lmm[p_ID].values,pos=positions,chrom=chromosomes,alpha=0.05)))
pl.title(p_ID)
```
# Multi-trait association testing
Multi-trait association testing with linear mixed models can be performed using the function ``limix.qtl.qtl_test_lmm_kronecker`` (see [here](https://limix.readthedocs.io/en/stable/qtl.html#limix.qtl.qtl_test_lmm_kronecker)).
Here we show how to perform the following multi-trait tests:
- any effect test (that is, a test to determine if a SNP has an effect on any of the phenotypes)
- common effect test (test to determine if a SNP has the same effect size and sign/direction for both phenotypes)
- specific effect test (a test to determine whether a SNP has an effect that is specific to one of the traits; useful for GxE)
### Any effect test
```python
N, P = phenotypes.values.shape
```
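Mirroring the notation used for the common effect test below, the any effect test compares a full trait design for the SNP against no SNP effect at all (this is our reading of the design matrices constructed in the next cell):
\begin{equation}
\mathbf{A}_1^\text{(snp)} = \mathbf{I}_{P},\;\;\;
\mathbf{A}_0^\text{(snp)} = \mathbf{0}
\end{equation}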
```python
covs = np.ones((N, 1)) #covariates
Acovs = np.eye(P) #the design matrix for the covariates
Asnps = np.eye(P) #the design matrix for the SNPs
K1r = sample_relatedness #the first sample-sample covariance matrix (non-noise)
```
```python
from limix.qtl import qtl_test_lmm_kronecker
```
```python
lmm, pvalues = qtl_test_lmm_kronecker(snps=snps,
phenos=phenotypes.values,
covs=covs,
Acovs=Acovs,
Asnps=Asnps,
K1r=K1r)
```
Convert the P-values into a DataFrame:
```python
pvalues = pd.DataFrame(data=pvalues.T,index=positions,columns=['multi_trait'])
```
### Plot the results from multi-trait GWAS using Manhattan plots.
```python
pl.figure(figsize=[15,4])
pl.title('Any effect test')
lmx_plt.plot_manhattan(pd.DataFrame(dict(pv=pvalues['multi_trait'].values,
pos=positions,chrom=chromosomes)))
```
### Common effect test
A common effect test is a 1 degree of freedom test and can be done by setting
\begin{equation}
\mathbf{A}_1^\text{(snp)} = \mathbf{1}_{1,P},\;\;\;
\mathbf{A}_0^\text{(snp)} = \mathbf{0}
\end{equation}
```python
covs = np.ones((N, 1)) #covariates
Acovs = np.eye(P) #the design matrix for the covariates
Asnps = np.ones((1,P)) #the design matrix for the SNPs
K1r = sample_relatedness #the first sample-sample covariance matrix (non-noise)
```
```python
lmm, pvalues_common = qtl_test_lmm_kronecker(snps=snps,
phenos=phenotypes.values,
#covs=covs,
#Acovs=Acovs,
Asnps=Asnps,
K1r=K1r)
```
Convert the P-values into a DataFrame:
```python
pvalues_common = pd.DataFrame(data=pvalues_common.T,index=positions,columns=['common'])
```
### Manhattan plot
```python
pl.figure(figsize=[15,4])
pl.title('common')
lmx_plt.plot_manhattan(pd.DataFrame(dict(pv=pvalues_common['common'].values,
pos=positions,chrom=chromosomes)))
```
### Testing for a specific effect
For a specific effect test for trait $p$,
the alternative model is set to have both a common and a specific effect
for trait $p$ from the SNP, while the null model has only a common effect.
It is a 1 degree of freedom test and,
in the particular case of $P=3$ traits and for $p=1$ (the second trait, as in the code below), it can be done by setting
\begin{equation}
\mathbf{A}_1^\text{(snp)} =
\begin{pmatrix}
1 & 1 & 1 \\
0 & 1 & 0
\end{pmatrix}
\;\;\;,
\mathbf{A}_0^\text{(snp)} = \mathbf{1}_{1,3}
\end{equation}
Specific effect tests can be performed using the function ``limix.qtl.qtl_test_interaction_lmm_kronecker`` (see [here](https://limix.readthedocs.io/en/stable/qtl.html#limix.qtl.qtl_test_interaction_lmm_kronecker)).
```python
Asnps0 = np.ones((1,P)) #the null model design matrix for the SNPs
Asnps1 = np.zeros((2,P)) #the alternative model design matrix for the SNPs
Asnps1[0,:] = 1.0
Asnps1[1,1] = 1.0
print("Design(0): \n"+str(Asnps0))
print("Design(Alt): \n"+str(Asnps1))
```
Design(0):
[[1. 1. 1.]]
Design(Alt):
[[1. 1. 1.]
[0. 1. 0.]]
```python
from limix.qtl import qtl_test_interaction_lmm_kronecker
pvalues_inter = qtl_test_interaction_lmm_kronecker(snps=snps,
phenos=phenotypes.values,
covs=covs,
Acovs=Acovs,
Asnps0=Asnps0,
Asnps1=Asnps1,
K1r=K1r)
```
Convert the P-values into a DataFrame:
```python
pvalues_inter = pd.DataFrame(data=np.concatenate(pvalues_inter).T,
index=positions,
columns=["specific","null_common","alternative_any"])
```
### Manhattan plot
```python
pl.figure(figsize=[15,4])
pl.title('specific')
lmx_plt.plot_manhattan(pd.DataFrame(dict(pv=pvalues_inter['specific'].values,
pos=positions,chrom=chromosomes,
alpha=0.1)))
```
| dab88598499ec3ba8527880789d274207fdd214d | 521,538 | ipynb | Jupyter Notebook | limix1/Lecture-12-Multi-Trait-Linear-Mixed-Model.ipynb | mahort/gwas-lecture | 59613e19a49d4cb1b4b446b077c2b30949f27347 | ["CC-BY-3.0"] | 17 | 2018-11-26T10:09:26.000Z | 2022-01-05T14:08:06.000Z | limix1/Lecture-12-Multi-Trait-Linear-Mixed-Model.ipynb | mahort/gwas-lecture | 59613e19a49d4cb1b4b446b077c2b30949f27347 | ["CC-BY-3.0"] | 1 | 2020-11-20T17:26:13.000Z | 2020-11-20T18:02:46.000Z | limix1/Lecture-12-Multi-Trait-Linear-Mixed-Model.ipynb | mahort/gwas-lecture | 59613e19a49d4cb1b4b446b077c2b30949f27347 | ["CC-BY-3.0"] | 14 | 2018-11-30T17:42:19.000Z | 2021-10-09T09:40:29.000Z | 353.824966 | 112,868 | 0.925125 | true | 6,450 | Qwen/Qwen-72B | 1. YES 2. YES | 0.810479 | 0.718594 | 0.582406 | __label__eng_Latn | 0.334571 | 0.191453 |
<a href="https://hub.callysto.ca/jupyter/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fcallysto%2Fcahiers-de-programmes&branch=master&subPath=Tutoriels/LaTeX.ipynb&depth=1" target="_parent"></a>
# Composition mathématique avec LaTeX
Cela ne servira que de brève introduction à la composition mathématique avec LaTeX et ne couvrira pas beaucoup d'autres aspects du langage de composition. Ce tutoriel couvrira les fondements de quelques erreurs courantes et comment les résoudre, et vous fournira un lien vers un aide-mémoire.
1. Pour mettre une variable en ligne, entourez-la de signes dollar comme: `$x$`, qui rend et $x$.
1. Pour composer une équation, entourez-la (sur une nouvelle ligne) en double dollar: `$$equation$$`
1. vous pouvez également utiliser cette syntaxe
```latex
\begin{equation}
votre_équation_ici
\end{equation}
```
1. Les symboles spéciaux commencent par une barre oblique inverse (`\`). Par exemple, la lettre grecque $\alpha$ est tapée comme `$\alpha$`.
1. Les fractions peuvent être composées avec `\frac{numerator}{denominator}`
1. Les racines carrées sont `\sqrt{stuff inside square root}`
Si vous voulez taper des variables en ligne, c'est aussi simple que $x$. Si vous voulez des symboles spéciaux en ligne, appelez-les simplement par leur nom, par exemple $\alpha$, $\gamma$, $\epsilon$, $\Delta$, $\nabla$ et ainsi de suite sont appelés en tapant «$\alpha$», `$\nabla$` etc. [Ce lien mène](http://tug.ctan.org/info/symbols/comprehensive/symbols-a4.pdf) vers une liste complète de symboles et leurs commandes LaTeX. Bien sûr, une recherche Google est parfois plus rapide.
# LaTeX basics
#### Double-click me
You can easily start equations
$$ c = \sqrt{a^2 + b^2}$$
Notice how, in the equation above, the variables appearing under the square root are enclosed in curly braces (`{ }`), and powers are written with carets (`^`).
#### Double-click me
You can also type equations using this syntax
\begin{equation}
a = \frac{x^3 +\Gamma}{\int e^{-x^2} dx}
\end{equation}
Notice how the numerator of the fraction appears inside curly braces and the denominator appears inside separate curly braces. Note that uppercase Greek letters are obtained by capitalizing the first letter of your LaTeX command.
#### Double-click me
Also note the exponent; this time we have included curly braces. That is because without them, only the first character becomes an exponent, as demonstrated below.
\begin{equation}
a = x^-10
\end{equation}
#### Double-click me
The above does not render as intended. However, we can fix this problem by adding curly braces around the exponent as follows:
\begin{equation}
a = x^{-10}
\end{equation}
#### Double-click me
You can also put vector arrows on variables
\begin{equation}
\vec{F} = m \vec{a}
\end{equation}
#### Double-click me
The asterisk symbol may not render the way you expect for multiplication in LaTeX, and it is often better to use `\times`. For example, compare
\begin{equation}
a * b = ab
\end{equation}
versus
\begin{equation}
a \times b = ab
\end{equation}
#### Double-click me
If you need to put parentheses around a fraction, your equation may sometimes not render well, for example
\begin{equation}
f(x) = a \times ( \frac{ x^2 }{ \sqrt{ x^2 + \tan(x) } } )
\end{equation}
To increase the size of your parentheses, you need to include the `\left` and `\right` commands as follows:
\begin{equation}
f(x) = a \times \left( \frac{ x^2 }{ \sqrt{ x^2 + \tan(x) } } \right)
\end{equation}
# Common LaTeX errors
### Long subscripts must be enclosed in curly braces to display correctly
##### Double-click me
Note how the equation
\begin{equation}
a = x_10
\end{equation}
does not render what you want, but
\begin{equation}
a = x_{10}
\end{equation}
does once you include the curly braces.
### Spaces in equations are ignored
##### Double-click me
Notice what happens to words included in the math environment
$$
If you use words as variables they become difficult to read = x_a^2
$$
If you need to include words in your equations, you can do so with `\text{}`
$$
\text{Now we can read this easily once the math is rendered} = x_a^2
$$
### You cannot include percent signs without a little extra work
##### Double-click me
If you want to include a percent sign in a mathematical equation, you have to escape it with a `\`, because a percent sign indicates a comment in LaTeX
$$
100 % This is because a percent sign is a comment in LaTeX!
$$
To escape the comment, simply add a backslash
$$ 100 \% $$
### Missing a curly brace
If you miss a curly brace when using a LaTeX command, for example in a fraction, your equation will not be rendered, as in the simple example below.
\begin{equation}
x = \frac{ a } { y
\end{equation}
To fix this problem, simply add the missing curly brace
\begin{equation}
x = \frac{ a } { y }
\end{equation}
Note, however, that whenever there is an error in your LaTeX it simply will not be rendered, as shown above, and it can sometimes be a little difficult to find the missing brace.
LaTeX is very extensive for mathematical typesetting, and we cannot cover everything here. However, it is quite intuitive and, with a little practice, somewhat faster than using the equation editors in other tools. Feel free to try creating your own equations now.
---
### Exercises
1. Fix the following so that it correctly represents the formula for the surface area of a sphere
$$
Area = 4 pi r2
$$
2. Fix the following so that it correctly represents the formula for the volume of a sphere
\begin{equation}
Volume = \frac{4} / {3} pi r3
\end{equation}
3. Make sure that pi appears as a symbol in questions 1 and 2.
4. In this sentence, modify the variables x, Y and y to make sure that they appear as LaTeX variables.
[License](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)
| a89b50dbf0b49ca72c8815a7e6d77325d6324a40 | 10,921 | ipynb | Jupyter Notebook | Tutoriels/LaTeX.ipynb | callysto/cahiers-de-programmes | 456045bfe7d28395ff31a4b0b89fe86260765605 | ["CC-BY-3.0"] | null | null | null | Tutoriels/LaTeX.ipynb | callysto/cahiers-de-programmes | 456045bfe7d28395ff31a4b0b89fe86260765605 | ["CC-BY-3.0"] | null | null | null | Tutoriels/LaTeX.ipynb | callysto/cahiers-de-programmes | 456045bfe7d28395ff31a4b0b89fe86260765605 | ["CC-BY-3.0"] | null | null | null | 31.025568 | 497 | 0.595641 | true | 1,892 | Qwen/Qwen-72B | 1. YES 2. YES | 0.705785 | 0.901921 | 0.636562 | __label__fra_Latn | 0.993005 | 0.317278 |
```{warning}
This book is a work in progress and should be considered currently to be in a
**pre**draft state. Work is actively taking place in preparation for October
2020.
If you happen to find this and notice any typos and/or have any suggestions
please open an issue on the github repo: <https://github.com/drvinceknight/pfm>
```
# Python for Mathematics
## Introduction
This book aims to introduce readers to programming for mathematics.
It is assumed that readers are used to solving high school mathematics problems
of the form:
---
```{admonition} Problem
Given the function $f:\mathbb{R}\to\mathbb{R}$ defined by
$f(x) = x ^ 2 - 3 x + 1$, obtain the global minimum of the function.
```
```{admonition} Solution
:class: tip
To solve this we need to apply our **mathematical knowledge** which tells us to:
1. Differentiate $f(x)$ to get $\frac{df}{dx}$;
2. Equate $\frac{df}{dx}=0$;
3. Use the second derivative test on the solution to the previous equation.
For each of those 3 steps we will usually make use of our **mathematical
techniques**:
1. Differentiate $f(x)$:
$$\frac{df}{dx} = 2 x - 3$$
2. Equate $\frac{df}{dx}=0$:
$$2x-3 =0 \Rightarrow x = 3/2$$
3. Use the second derivative test on the solution:
$$\frac{d^2f}{dx^2} = 2 > 0\text{ for all values of }x$$
Thus $x=3/2$ is the global minimum of the function.
```
```{attention}
As we progress as mathematicians **mathematical knowledge** is more prominent
than **mathematical technique**: often knowing what to do is the real problem as
opposed to having the technical ability to do it.
```
This is what this book will cover: **programming** allows us to instruct a
computer to carry out mathematical techniques.
We will for example learn how to solve the above problem by instructing a
computer which **mathematical technique** to carry out.
**This book will teach us how to give the correct instructions to a
computer.**
The following is an example, do not worry too much about the specific code used
for now:
1. Differentiate $f(x)$ to get $\frac{df}{dx}$;
```python
import sympy as sym
x = sym.Symbol("x")
sym.diff(x ** 2 - 3 * x + 1, x)
```
$\displaystyle 2 x - 3$
2. Equate $\frac{df}{dx}=0$:
```python
sym.solveset(2 * x - 3, x)
```
$\displaystyle \left\{\frac{3}{2}\right\}$
3. Use the second derivative test on the solution:
```python
sym.diff(x ** 2 - 3 * x + 1, x, 2)
```
$\displaystyle 2$
Since the second derivative is positive, this confirms that $x=3/2$ is the global minimum. {ref}`Knowledge versus technique <fig:knowledge_vs_technique>` gives a brief summary of this distinction.
```{figure} ./img/knowledge_vs_technique/main.png
---
width: 50%
name: fig:knowledge_vs_technique
---
Knowledge versus technique in this book.
```
## How this book is structured
Most programming texts introduce readers to the building blocks of
programming and build up to using more sophisticated tools for a specific
purpose.
This is akin to teaching someone how to forge metal so as to make a nail and
then slowly working up to using more sophisticated tools, such as power tools,
to build a house.
This book does things in a different way: we will start with using and
understanding tools that are helpful to mathematicians. In the later part of the
book we will cover the building blocks and you will be able to build your own
sophisticated tools.
The book is in two parts:
1. Tools for mathematics;
2. Building tools.
The first part of the book will not make use of any novel mathematics.
Instead we will consider a number of mathematics problem that are often covered
in secondary school.
- Algebraic manipulation
- Calculus (differentiation and integration)
- Permutations and combinations
- Probability
- Linear algebra
The questions we will tackle will be familiar in their presentation and
description. **What will be different** is that no **by hand** calculations will
be done. We will instead carry them all out using a programming language.
In the second part of the book you will be encouraged to build your own tools
to be able to tackle a problem type of your choice.
```{attention}
Every chapter will have 4 parts:
- A tutorial: you will be walked through solving a problem. You will be
specifically told what to do and what to expect.
- A how to section: this will be a shorter more succinct section that will
detail how to carry out specific things.
- A reference section: this will be a section with references to further
resources as well as background information about specific things in the
chapter.
- An exercise section: this will be a number of exercises that you can work on.
```
| decf84ac289ab9eb0f5ef7a40bb7625b46a2a775 | 7,495 | ipynb | Jupyter Notebook | book/.intro.md.bcp.ipynb | daffidwilde/pfm | dcf38faccee3c212c8394c36f4c093a2916d283e | ["MIT"] | 8 | 2020-09-24T21:02:41.000Z | 2020-10-14T08:37:21.000Z | book/.intro.md.bcp.ipynb | daffidwilde/pfm | dcf38faccee3c212c8394c36f4c093a2916d283e | ["MIT"] | 87 | 2020-09-21T15:54:23.000Z | 2021-12-19T23:26:15.000Z | book/.intro.md.bcp.ipynb | daffidwilde/pfm | dcf38faccee3c212c8394c36f4c093a2916d283e | ["MIT"] | 3 | 2020-10-02T09:21:27.000Z | 2021-07-08T14:46:27.000Z | 29.163424 | 91 | 0.563442 | true | 1,154 | Qwen/Qwen-72B | 1. YES 2. YES | 0.689306 | 0.793106 | 0.546692 | __label__eng_Latn | 0.999225 | 0.108479 |
Code for HW 2
```python
import numpy as np
```
Q4
```python
v0 = np.array([0.5, 0.5, 0.5, 0.5]).T
v1 = np.array([0.5, 0.5, -0.5, -0.5]).T
v2 = np.array([0.5, -0.5, 0.5, -0.5]).T
```
```python
A = 0.5*np.matrix([[1, 1, 1, 1],[1, 1, -1 ,-1],[1, -1, 1, -1]])
```
```python
y = np.matrix([-0.5, 0.5, 0.5, 1.5]).T
```
```python
from sympy import *
```
```python
a, b, c, d = symbols('a b c d')
v3 = np.array([a, b, c, d])
answer = solve([v0.dot(v3), v1.dot(v3), v2.dot(v3), v3.dot(v3)-1], (a, b, c, d))
```
```python
q1 = np.asarray(answer[0], dtype='float')
q2 = np.asarray(answer[1], dtype='float')
print(q1)
print('------------')
print(q2)
```
[-0.5 0.5 0.5 -0.5]
------------
[ 0.5 -0.5 -0.5 0.5]
```python
A1 = np.stack((v0, v1, v2,q1),axis = -1)
print(A1)
```
[[ 0.5 0.5 0.5 -0.5]
[ 0.5 0.5 -0.5 0.5]
[ 0.5 -0.5 0.5 0.5]
[ 0.5 -0.5 -0.5 -0.5]]
```python
A2 = np.stack((v0, v1, v2, q2),axis = -1)
print(A2)
```
[[ 0.5 0.5 0.5 0.5]
[ 0.5 0.5 -0.5 -0.5]
[ 0.5 -0.5 0.5 -0.5]
[ 0.5 -0.5 -0.5 0.5]]
y = Ac
c = inv(A)*y
```python
from numpy import linalg as la
```
```python
print("Shape of A matrix: ",A1.shape)
print("Shape of y: ", y.shape)
```
Shape of A matrix: (4, 4)
Shape of y: (4, 1)
Evaluating the coefficients needed to produce y given the basis matrices A1 and A2
```python
A1_prime = la.inv(A1)
print(np.dot(A1_prime,y))
```
[[ 1.]
[-1.]
[-1.]
[ 0.]]
```python
A2_prime = la.inv(A2)
print(np.dot(A2_prime,y))
```
[[ 1.]
[-1.]
[-1.]
[ 0.]]
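As a side note (our suggestion, not part of the assignment), `np.linalg.solve` avoids forming the explicit inverse and is numerically preferable:

```python
c1 = np.linalg.solve(A1, y)  # same coefficients as la.inv(A1) @ y
c2 = np.linalg.solve(A2, y)
print(c1, c2)
```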
## Q4
```python
y = np.reshape(y, (4,1))
```
```python
q1 = np.reshape(q1, (4,1))
q2 = np.reshape(q2, (4,1))
```
```python
v0 = np.reshape(v0, (4,1))
v1 = np.reshape(v1, (4,1))
v2 = np.reshape(v2, (4,1))
```
```python
Q1 = np.hstack((y,v1,v2,q1))
print(la.matrix_rank(Q1))
```
4
```python
Q2 = np.hstack((y,v0,v2,q1))
print(la.matrix_rank(Q2))
```
4
```python
Q3 = np.hstack((y,v0,v1,v2))
print(la.matrix_rank(Q3))
```
3
```python
Q4 = np.hstack((y,v1,v2,q1-2*v1))
print(la.matrix_rank(Q4))
```
4
Select options 1, 2 and 4.
| f999c0314a91230a9b6432dead96a5f13dbf8751 | 6,753 | ipynb | Jupyter Notebook | HaarBasis/HW 2.ipynb | AkshayPR244/Coursera-EPFL-Digital-Signal-Processing | bdf9c65e2c02f0a99336cbe60ebac919891e05e3 | ["MIT"] | 2 | 2020-07-24T03:16:36.000Z | 2020-09-25T10:21:00.000Z | HaarBasis/HW 2.ipynb | AkshayPR244/Coursera-EPFL-Digital-Signal-Processing | bdf9c65e2c02f0a99336cbe60ebac919891e05e3 | ["MIT"] | null | null | null | HaarBasis/HW 2.ipynb | AkshayPR244/Coursera-EPFL-Digital-Signal-Processing | bdf9c65e2c02f0a99336cbe60ebac919891e05e3 | ["MIT"] | 1 | 2021-03-23T19:37:53.000Z | 2021-03-23T19:37:53.000Z | 17.494819 | 86 | 0.423071 | true | 1,003 | Qwen/Qwen-72B | 1. YES 2. YES | 0.847968 | 0.835484 | 0.708463 | __label__eng_Latn | 0.109798 | 0.484329 |
# Vibration modes of a membrane in parabolic coordinates
```python
%matplotlib notebook
```
```python
import numpy as np
from scipy.linalg import eigh
from sympy import (symbols, lambdify, init_printing,
expand, Matrix, diff, integrate)
from sympy.utilities.lambdify import lambdify
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
```
```python
x, y, r, s= symbols('x y r s')
init_printing()
```
## Boundary conditions
The boundary conditions are satisfied by multiplying the trial functions by $b(x, y)$, which vanishes on the boundary.
```python
b = lambda x, y: (2*y - x**2 + 1)*(2*y - 1 + x**2)
bound = b(x, y)
```
```python
b_num = lambdify((x,y), bound, "numpy")
X, Y = np.mgrid[-1:1:200j, -0.5:0.5:200j]
bound_num = b_num(X,Y)
```
```python
fig = plt.figure(figsize=(6, 3))
plt.contourf(X, Y, bound_num, 12, cmap="RdYlBu", vmin=-1, vmax=1)
plt.contour(X, Y, bound_num, [0], colors="black")
plt.axis("image");
```
## Approximating functions
```python
def w_fun(x, y, m, n):
""" Trial function. """
    c = symbols('c:%d' % (m*n))  # this is how we define the coefficients c_i
w = []
for i in range(0, m):
for j in range(0, n):
w.append(x**i * y**j)
return w, c
def u_fun(x, y, m, n):
""" Complete function. Contains the boundary and trial functions. """
w, c = w_fun(x, y, m, n)
return [b(x, y) * phi for phi in w ], c
m = 10
n = 9
u, c = u_fun(x, y, m, n)
```
## Matrices and solution
```python
dudx = [diff(u[k], x) for k in range(len(c))]
dudy = [diff(u[k], y) for k in range(len(c))]
```
```python
Kaux = Matrix(m*n, m*n, lambda ii, jj: dudx[ii]*dudx[jj] + dudy[ii]*dudy[jj])
Maux = Matrix(m*n, m*n, lambda ii, jj: u[ii]*u[jj])
K = Matrix(m*n, m*n, lambda i,j: 0)
M = Matrix(m*n, m*n, lambda i,j: 0)
```
The integrals should be of the form
$$B_{ij} = \int\limits_{-1}^1\int\limits_{\frac{1}{2}(x^2 - 1)}^{\frac{1}{2}(1 - x^2)}
A_{ij} \,\mathrm{d}y\, \mathrm{d}x\, ,$$
where $A_{ij}$ is the corresponding stiffness or mass integrand defined above.
```python
for row in range(m*n):
for col in range(row, m*n):
K_inte = Kaux[row, col]
M_inte = Maux[row, col]
K_inte = integrate(K_inte, (y, (x**2 - 1)/2, (1 - x**2)/2), (x, -1, 1))
M_inte = integrate(M_inte, (y, (x**2 - 1)/2, (1 - x**2)/2), (x, -1, 1))
K[row, col] += K_inte
M[row, col] += M_inte
if row != col:
K[col, row] += K_inte
M[col, row] += M_inte
```
```python
Kn = np.array(K).astype(np.float64)
Mn = np.array(M).astype(np.float64)
```
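Assembling $K$ and $M$ sets up the generalized eigenvalue problem of the Ritz method (a one-line summary of what the next cell solves):
$$[K]\{c\} = k^2 [M]\{c\}\, ,$$
where the smallest eigenvalues $k^2$ approximate the squared vibration frequencies of the membrane.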
```python
vals, vecs = eigh(Kn, Mn, eigvals=(0,8))
vals
```
array([ 16.10095349, 30.93334613, 50.33837571, 50.33838703,
74.38718728, 74.38735248, 102.69815051, 102.75135697,
105.00010187])
## Visualization of the modes
```python
mask = np.ones_like(X)
mask[bound_num > 0] = np.nan
```
```python
plt.figure(figsize=(8, 8))
for i in range(8):
U = sum(vecs[j, i]*u[j] for j in range(m*n))
vecU = lambdify((x,y), U, "numpy")
Z = vecU(X,Y)*mask
Z_max = Z.max()
Z_max = max (Z_max, -Z.min())
plt.subplot(4, 2, i + 1)
plt.title(r"$k^2=%.2f$" % vals[i], size=12);
plt.contour(X, Y, bound_num, [0], colors="black")
plt.contourf(X, Y, Z, 12, cmap="RdYlBu", vmin=-1.2, vmax=1.2)
plt.axis("image")
plt.axis(False)
plt.savefig("../../../Documents/%dx%d.png" % (m, n), dpi=300)
```
## Analytic mass matrix
The mass matrix can be integrated analytically.
```python
from sympy import gamma
def mass_coeff(j, k, m, n):
coeff = (k + n + 2)*(k + n + 4)*(1 + (-1)**(j + m))*(1 + (-1)**(k + n))
coeff *= gamma(n + k + 1)*gamma((1 + m + j)/2)/gamma((13 + m + j + 2*n + 2*k)/2)
coeff /= 2**(k + n - 1)
return coeff
```
```python
mass_coeff(0, 0, 0, 0)
```
```python
vals
```
array([ 16.10095349, 30.93334613, 50.33837571, 50.33838703,
74.38718728, 74.38735248, 102.69815051, 102.75135697,
105.00010187])
| a5d20712f2212fc754d2fde3a28cde8dda6e7b61 | 297,282 | ipynb | Jupyter Notebook | variational/parabolic_membrane.ipynb | nicoguaro/FEM_resources | 32f032a4e096fdfd2870e0e9b5269046dd555aee | ["MIT"] | 28 | 2015-11-06T16:59:39.000Z | 2022-02-25T18:18:49.000Z | variational/parabolic_membrane.ipynb | oldninja/FEM_resources | e44f315be217fd78ba95c09e3c94b1693773c047 | ["MIT"] | null | null | null | variational/parabolic_membrane.ipynb | oldninja/FEM_resources | e44f315be217fd78ba95c09e3c94b1693773c047 | ["MIT"] | 9 | 2018-06-24T22:12:00.000Z | 2022-01-12T15:57:37.000Z | 153.317174 | 167,179 | 0.83741 | true | 1,489 | Qwen/Qwen-72B | 1. YES 2. YES | 0.868827 | 0.79053 | 0.686834 | __label__eng_Latn | 0.382669 | 0.434077 |
```julia
using Distributions
using Plots
using WebIO
WebIO.install_jupyter_nbextension()
using Interact
```
# Weber's Law
Weber's law (WL), initially reported in 1834 (Weber, 1834) and later formalized by Fechner (1860), states that the just-noticeable-difference (JND) between two stimuli grows linearly with the stimulus magnitude:
\begin{align}
\Delta_{\theta} = w \cdot \theta,
\end{align}
where $w = \frac{ \Delta_{\theta}}{\theta}$ is also known as the Weber fraction (WF).
It is important to note that the JND is a statement about a discrimination task. The JND $ \Delta_{\theta}$ is the difference in two stimuli $\theta_1 = \theta + \frac{ \Delta_{\theta}}{2}$ and $\theta_2 = \theta - \frac{ \Delta_{\theta}}{2}$ around a value $\theta$ that is noticed in a fraction $p$ of repetitions.
```julia
θ = 0:0.01:10
@manipulate for w=0.1:0.1:1.0
plot(θ, w*θ, title="Weber's law", xlabel="theta", ylabel="JND", ylim=(0,10), label="")
end
```
# Probabilistic Models of Perception
In probabilistic models of perception (Ma, Kording & Goldreich, unpublished), one typically assumes that observers have access to a noisy measurement $m$ of a presented stimulus $\theta$ with some probability distribution $p(m | \theta)$, which is often called the measurement (or noise) distribution. The observer's goal is then to compute an estimate of the presented stimulus $\hat \theta(m)$. Critically, while the observer's noisy measurement is a random variable, the estimate is a completely deterministic function of the measurement $m$. As an experimenter, we do not know the noisy internal measurement of an observer in any given trial. We can, however, compute the distribution of the estimates given a particular stimulus $p(\hat\theta | \theta)$.
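In symbols, the estimate distribution is obtained by pushing the measurement distribution through the deterministic estimator (a summary of this standard setup, not a quotation from the cited work):
\begin{align}
p(\hat\theta \mid \theta) = \int \delta\!\left(\hat\theta - \hat\theta(m)\right) p(m \mid \theta)\, \mathrm{d}m .
\end{align}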
JNDs are usually expressed in probabilistic models of perception in the following way. For a small $\Delta_{\theta}$, we can locally assume that the measurement distributions are Gaussian with equal variance $\sigma_\theta^2$. Then, in a 2-AFC discrimination task, the difference of the two measurements is Gaussian with variance $2\sigma_\theta^2$, and thus the probability of estimating that $\hat\theta_1 > \hat\theta_2$ is given by a cumulative Gaussian with mean zero and standard deviation $\sqrt{2}\,\sigma_\theta$, evaluated at $\theta_1 - \theta_2$:
\begin{align}
p(\hat\theta_1 > \hat\theta_2 \mid \theta_1, \theta_2) = \Phi(\theta_1 - \theta_2 \mid 0, \sqrt{2}\,\sigma_\theta) = \Phi(\Delta_{\theta} \mid 0, \sqrt{2}\,\sigma_\theta)
\end{align}
Thus, for a given discrimination probability $p$, we have $\Delta_{\theta} = \Phi^{-1}(p \mid 0, \sqrt{2}\,\sigma_\theta)$, from which it follows that
\begin{align}
\Delta_{\theta} \propto \sqrt{2}\,\sigma_\theta,
\end{align}
where the proportionality constant depends on $p$. The $\sqrt{2}$ factor results from the 2-AFC task and would be different for tasks with more choice alternatives. This means that the standard deviation of the assumed measurement distribution is the same as the discriminability (up to a constant factor).
If we now assume that WL holds (or better, if we have measured that it holds), we can interpret this as a statement about the standard deviation of the measurement distribution. Particularly, it means that we get a linear scaling of the standard deviation with the stimulus magnitude
\begin{align}
\sigma_\theta = w \cdot \theta.
\end{align}
This linear scaling of standard deviation is often used in combination with Gaussian measurement distributions in probabilistic observer models (e.g. Jazayri & Shadlen, 2010; Cicchini et al, 2012).
Keep in mind that we have not used the assumption of linearly scaled standard deviation in measuring the discrimination thresholds, but have assumed locally that the standard deviation is constant. For measurements of $\Delta_{\theta}$ at different reference stimuli, we can check whether WL holds.
An alternative to Gaussian noise with linearly scaled standard deviations is a logarithmic encoding of the stimuli. Here, the assumption is that $\psi = \ln{\theta}$ is Gaussian distributed with equal variance $\sigma^2_{\text{log}}$ everywhere. In this case, the measurements on the physical stimulus dimension follow a log-normal distribution:
\begin{align}
p(m | \theta) = \text{Lognormal}(m | \ln{\theta}, \sigma_{\text{log}}).
\end{align}
Since the standard deviation of the log-normal distribution is $\sqrt{[e^{\sigma_{\text{log}}^2} - 1] \, e^{2 \ln \theta + \sigma_{\text{log}}^2}}$, it can be written as a linear function of the stimulus magnitude: $\sigma_\theta = c \cdot \theta$ with $c = \sqrt{[e^{\sigma_{\text{log}}^2} - 1]\,e^{\sigma_{\text{log}}^2}}$.
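As a quick numerical check (our addition; the values of $\sigma_{\text{log}}$ and $\theta$ are arbitrary), the closed-form standard deviation above matches `std` from Distributions.jl and scales linearly with the stimulus:

```julia
# check that std(LogNormal(log(θ), σ)) equals c*θ with c = sqrt((exp(σ^2)-1)*exp(σ^2))
σ_log = 0.3
θ_test = 2.0
c = sqrt((exp(σ_log^2) - 1) * exp(σ_log^2))
std(LogNormal(log(θ_test), σ_log)) ≈ c * θ_test   # true
```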
In probabilistic models of perception, the log-normal assumption is a popular way of dealing with magnitude variables, for which WL scaling of standard deviation is a good approximation (e.g. Petzschner & Glasauer, 2011; Battaglia et al., 2011). This assumption makes computing estimates more convenient and avoids some problems of the Gaussian case, as we will see in the following sections.
```julia
measurement_norm(theta; wf=0.5) = Normal(theta, wf * theta)
measurement_logn(theta; wf=0.5) = LogNormal(log(theta), wf)
```
measurement_logn (generic function with 1 method)
For the standard deviations scaled by a WF of 0.5 as demonstrated above, the log-normal distribution is very different from the normal distribution. For other parameters, approximating the log-normal by the normal works rather well. Let us now examine for which range of parameters this approximation holds.
```julia
m = 0:0.01:10
@manipulate for θ=1.0:0.5:5, w=0.1:0.05:1.0
plot([θ, θ], [0., 0.5], linestyle=:dot, color=2, label="theta", xlabel="m", ylabel="p(m|theta)")
plot!(m, pdf.(measurement_norm(θ, wf=w), m), color=0, label="norm", ylim=(0, 1.0))
plot!(m, pdf.(measurement_logn(θ, wf=w), m), color=1, label="lognorm")
end
```
# Maximum-likelihood estimates
The observer does not have access to the actually presented stimulus $\theta_0$. If we assume that they still have knowledge about their own noise distribution, they can make use of the likelihood function. The likelihood function is the probability of the observed measurement $m$ as a function of the "true" stimulus $\theta$:
\begin{align}
\lambda(\theta; m) = p(m | \theta).
\end{align}
Note that the likelihood function is different from the measurement distribution, since it is a function of $\theta$. For the Gaussian with scaled standard deviation, the shape of the likelihood function is no longer Gaussian but has a heavier tail to the right due to the increasing variance.
```julia
θ = 0:0.01:10
@manipulate for m=1:0.5:5, w=0.1:0.05:1.0
plot([m, m], [0., 0.5], linestyle=:dot, color=2, label="m", xlabel="theta", ylabel="p(m|theta)")
plot!(θ, pdf.(measurement_norm.(θ, wf=w), m), label="norm", ylim=(0, 0.5), color=0)
plot!(θ, pdf.(measurement_logn.(θ, wf=w), m), label="lognorm", ylim=(0, 1.0), color=1)
end
```
An estimate $\hat\theta(m)$ is usually computed from this likelihood function (and possibly a prior distribution and a cost function). Here, we focus first on the simplest case, a maximum likelihood (ML) estimate $\hat\theta(m) = \text{argmax}_\theta \lambda(\theta; m)$ and compare the normal with scaled standard deviation and log-normal models.
## Normal distribution with scaled variance
For the normal distribution with scaled variance, the likelihood function is
\begin{align}
\lambda(\theta; m) = p(m | \theta) = \frac{1}{w\theta \sqrt{2\pi}} \exp\left\{\frac{-(m - \theta)^2}{2(w\theta)^2}\right\}.
\end{align}
The ML estimate can then be computed (see Jazayeri & Shadlen, 2010) as
\begin{align}
\hat\theta_{\text{MLE}} = \text{argmax}_\theta \lambda(\theta; m) = m \left[\frac{\sqrt{1 + 4w^2} - 1}{2 w^2} \right].
\end{align}
The proportionality constant is smaller than 1, resulting in a systematic underestimation of the true $\theta$, which becomes more severe the higher the WF.
## Log-normal distribution
Assuming that measurements are log-normally distributed is the same as assuming that the log of the measurements is normally distributed. The likelihood function in this case is
\begin{align}
\lambda(\theta; m) = p(m | \theta) = \frac{1}{m\sigma_{\text{log}}\sqrt{2\pi}} \exp\left\{ \frac{- (\ln m - \ln \theta)^2}{2 \sigma_{\text{log}}^2} \right\}.
\end{align}
Because the normalizing constant does not depend on $\theta$ anymore, the ML estimate is simply $\hat\theta(m) = m$.
```julia
using Optim
function maximum_likelihood(d, m; wf)
f(x) = -logpdf(d(x, wf=wf), m)
result = optimize(f, 0., 10)
return Optim.minimizer(result)
end
linreg(x, y) = [ones(size(x)) x] \ y
```
linreg (generic function with 1 method)
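As a sanity check (our addition; the test values are arbitrary), the numerical optimizer defined above should reproduce the closed-form ML estimate for the scaled-variance Gaussian:

```julia
# closed form: θ̂ = m*(sqrt(1+4w^2)-1)/(2w^2)
w_test, m_test = 0.5, 2.0
closed_form = m_test * (sqrt(1 + 4w_test^2) - 1) / (2w_test^2)
numeric = maximum_likelihood(measurement_norm, m_test, wf=w_test)
isapprox(closed_form, numeric; atol=1e-3)   # true
```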
```julia
m = 0.01:0.01:5
@manipulate for w=0.1:0.05:1.0
    # compute the ML estimates numerically for each measurement value m
theta_hat = [maximum_likelihood(measurement_norm, x, wf=w)[1] for x in m]
theta_hat_log = [maximum_likelihood(measurement_logn, x, wf=w)[1] for x in m]
plot(m, theta_hat, label="norm", color=0)
plot!(m, theta_hat_log, label="lognorm", color=1)
plot!(m, m, linestyle=:dash, label="theta^(m) = m", xlabel="m", ylabel="theta^")
end
```
# Posterior estimates
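As a minimal sketch of how posterior estimates could be computed (the grid, the log-uniform prior, and the helper `posterior_mean` are assumptions for illustration, not part of the analysis above):

```julia
# grid-based posterior mean under a log-uniform prior p(θ) ∝ 1/θ
function posterior_mean(d, m; wf, θ_grid=0.01:0.01:20)
    prior = 1 ./ θ_grid                       # unnormalized log-uniform prior
    like = [pdf(d(θ, wf=wf), m) for θ in θ_grid]
    post = prior .* like
    post ./= sum(post)                        # normalize on the grid
    return sum(post .* θ_grid)                # posterior-mean estimate
end

posterior_mean(measurement_logn, 2.0, wf=0.5)
```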
| c4e73eece48ce7ce5f32af500e6e4fa6552c5876 | 480,478 | ipynb | Jupyter Notebook | notebooks/Weber-ProbModels.ipynb | dominikstrb/Fechner.jl | 8761b988db4e153c6ebd3f115a91d07aad99a5db | ["MIT"] | 1 | 2021-11-22T19:49:54.000Z | 2021-11-22T19:49:54.000Z | notebooks/Weber-ProbModels.ipynb | dominikstrb/Fechner.jl | 8761b988db4e153c6ebd3f115a91d07aad99a5db | ["MIT"] | null | null | null | notebooks/Weber-ProbModels.ipynb | dominikstrb/Fechner.jl | 8761b988db4e153c6ebd3f115a91d07aad99a5db | ["MIT"] | null | null | null | 250.509906 | 63,324 | 0.642912 | true | 3,153 | Qwen/Qwen-72B | 1. YES 2. YES | 0.855851 | 0.817574 | 0.699722 | __label__eng_Latn | 0.953595 | 0.46402 |
```python
from thewalrus import hafnian, tor, quantum, samples, reduction, symplectic, threshold_detection_prob
import strawberryfields as sf
from strawberryfields.ops import *
import numpy as np
from sympy.utilities.iterables import multiset_permutations
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_formats=['svg']
import time
import random
import math
from itertools import *
```
```python
# functions
def prob_tor(cov, sub_mat):
    """Threshold-detection click probability from the Gaussian state covariance:
    the Torontonian of O = I - inv(Q_sub), divided by sqrt(det Q) of the full state."""
    Husimi_big = quantum.Qmat(cov)
dim = len(Husimi_big)
det_hu = np.linalg.det(Husimi_big)
denominator = det_hu**0.5
if np.any(sub_mat) == 0:
return((1/denominator).real)
else:
Husimi_sub = quantum.Qmat(sub_mat)
dim_sub = len(Husimi_sub)
O = np.eye(dim_sub) - np.linalg.inv(Husimi_sub)
Tor = tor(O)
return (Tor/denominator).real
def prob_exact(M):
m = len(M)
stat = np.zeros((m+1), dtype = np.float32)
for i in range(m+1):
permut = []
list_0 = [1 for j in range(i)]
list_0 += [0]*(m - i)
permut += multiset_permutations(list_0)
if i == 0:
stat[i] += 1.
else:
for j in range(len(permut)):
stat[i] += Z_i(M, permut[j])
for c in range(m):
for h in range(c+1, m+1):
stat[h] -= stat[c]*round(fact(m-c)/(fact(h-c)*fact(m - h)))
s = 0.
for i in range(m+1):
s += stat[i]
return stat/s
def prob_sectors_exact(M):
m = len(M)
Nu = 10*m
dnu = 2*np.pi/Nu
stat = np.zeros((m+1, Nu), dtype = np.complex128)
sectors = np.zeros((m+1, Nu), dtype = np.complex128)
for n in range(m+1):
permut = []
list_0 = [1 for j in range(n)]
list_0 += [0]*(m - n)
permut += multiset_permutations(list_0)
if n == 0:
for nu in range(Nu):
stat[n,nu] += 1.
else:
for nu in range(Nu):
for k in range(len(permut)):
stat[n,nu] += Z_i_v(M, nu*dnu, permut[k])
for k in range(m):
for h in range(k+1, m+1):
for nu in range(Nu):
stat[h,nu] -= stat[k,nu]*round(fact(m-k)/(fact(h-k)*fact(m - h)))
for n in range(m+1):
for j in range(Nu):
for k in range(Nu):
sectors[n,j] += stat[n,k]*np.exp(-1j*j*k*dnu)/Nu
return sectors.real
def Z_i(E, list_det):
list_det_1 = covert_01_0123(list_det)
Ei = red_mat(E, list_det_1)
E_i = Ei.conjugate()@Ei
# II = np.eye(len(E_i))
# Zi = (np.linalg.det(II - 4*E_i))**(-0.5)
eig = np.linalg.eigh(E_i)[0]
Zi = 1.
for i in range(len(eig)):
Zi *= (1 - 4*eig[i])**(-0.5)
return Zi.real
def Z(M):
M = M.conjugate().T@M
# II = np.eye(len(M))
# z = (np.linalg.det(II - 4*M))**(-0.5)
eig = np.linalg.eigh(M)[0]
z = 1.
for i in range(len(eig)):
z *= (1 - 4*eig[i])**(-0.5)
return z.real
def Z_i_v(E, nu, list_det):
Zi = 1.
list_det_1 = covert_01_0123(list_det)
Ei = red_mat(E, list_det_1)
E_i = Ei.conjugate()@Ei
# II = np.eye(len(E_i))
# Zi = (np.linalg.det(II - 4*np.exp(1j*nu)*E_i))**(-0.5)
eig = np.linalg.eigh(E_i)[0]
for i in range(len(eig)):
p = 4*np.exp(1j*nu)*eig[i]
Zi *= (1 - p)**(-0.5)
return Zi
def covert_01_0123(list_det):
#[0,1,1,0] - input
#[1,2] - output
list_det_1 = []
for i in range(len(list_det)):
if list_det[i] == 1:
list_det_1.append(i)
return list_det_1
def fact(x):
res = 1
for i in range(int(x)):
res *=(i+1)
return res
def red_mat(M_big, list_det): # [0, ..., n,k,l, ..., m-1] - numbers of clicked detectors
n = len(list_det)
small_mat = np.zeros((n, n), dtype = np.complex128)
# [0,+,0,+] == [1,3]
for i in range(n):
for j in range(n):
ind_i = list_det[i]
ind_j = list_det[j]
small_mat[i,j] = M_big[ind_i,ind_j]
return small_mat
```
```python
# parameters
path = r'data/demo'
data_VS = np.genfromtxt(path + '/Initial_state.dat')
m = len(data_VS) - 1
r = []
phi = []
for i in range(1, m+1):
r.append(data_VS[i,1])
for i in range(1, m+1):
phi.append(data_VS[i,2])
data_U = np.genfromtxt(path + '/Parmeters_of_interferometer.dat')
N_BS = len(data_U) - 1
N_PS = 2*N_BS
Phi_list_1 = np.zeros(N_BS)
Phi_list_2 = np.zeros(N_BS)
alfa_list = np.zeros(N_BS)
ind_list = []
for i in range(N_BS):
ind_list.append([int(data_U[i+1,0]),int(data_U[i+1,1])])
for i in range(N_BS):
Phi_list_1[i] = data_U[i+1,2]
for i in range(N_BS):
Phi_list_2[i] = data_U[i+1,3]
for i in range(N_BS):
alfa_list[i] = data_U[i+1,4]
data_M = np.genfromtxt(path + '/GBS_matrix.dat')
M = np.zeros((m, m),dtype=np.complex128)
real_part = []
imaginary_part = []
for i in range(m):
for k in range(0,2*m,2):
real_part.append(data_M[i,k])
for i in range(m):
for k in range(1,2*m+1,2):
imaginary_part.append(data_M[i,k])
for i in range(m*m):
M[i//m,i%m] = real_part[i] + 1j*imaginary_part[i]
print('Input:','\n')
print('n_modes = ', m ,'\n')
print('r = ', r,'\n')
print('phi = ', phi,'\n')
print('Interferometer:','\n')
print('N_bs = ',N_BS, ', N_ps = ', N_PS ,'\n')
```
Input:
n_modes = 8
r = [1.5, 1.5, 1.5, 1.5, 0.0, 0.0, 0.0, 0.0]
phi = [1.53672, 0.14297, 2.15673, 0.12049, 0.0, 0.0, 0.0, 0.0]
Interferometer:
N_bs = 320 , N_ps = 640
```python
possible_values = (0,1)
n_positions = m
sorted_combinations = combinations_with_replacement(possible_values, n_positions)
unique_permutations = set()
for combo in sorted_combinations:
for p in permutations(combo):
unique_permutations.add(p)
unique_permutations = sorted(unique_permutations)
print("Number of unique permutations: %i" % (len(unique_permutations)))
# for p in unique_permutations:
# print(p)
```
Number of unique permutations: 256
```python
# We check our exact results with the Walrus library https://the-walrus.readthedocs.io
# Here we want to verify that our expressions use the same normalization as the conventional GBS problem.
prog = sf.Program(m)
eng = sf.Engine("gaussian")
with prog.context as q:
for i in range(m):
Sgate(r[i], phi[i])| q[i]
for k in range(N_BS):
Rgate(Phi_list_1[k]) | q[ind_list[k][0]]
BSgate(alfa_list[k]) | (q[ind_list[k][1]], q[ind_list[k][0]])
Rgate(Phi_list_2[k]) | q[ind_list[k][1]]
state = eng.run(prog).state
mu = state.means()
cov = state.cov()
```
```python
# start_time = time.process_time()
print("P [s a m p l e] ", " walrus", " ours", '\n' )
comb = len(unique_permutations)
P_list_walrus = []
P_list_ours = []
P_sum_walrus = np.zeros(m+1)
P_sum_ours = prob_exact(M)
for p in unique_permutations:
k = list(p).count(1)
list_det = covert_01_0123(list(p))
if k != 0:
M_sub = red_mat(M, list_det)
norm = Z(M_sub)/Z(M)
P_list_ours.append(norm*prob_exact(M_sub)[k])
else:
P_list_ours.append(prob_exact(M)[k])
if k == m:
P_list_walrus.append(prob_tor(cov, cov))
P_sum_walrus[k] = prob_tor(cov, cov)
else:
P_list_walrus.append(threshold_detection_prob(mu, cov, list(p))) #threshold_detection_prob_parallel
P_sum_walrus[k] += threshold_detection_prob(mu, cov, list(p))
for k in range(comb):
print('P', list(unique_permutations[k]), ' = ', "{:.3e}".format(P_list_walrus[k]), ' | ', "{:.3e}".format(P_list_ours[k]) )
sum_prob_walrus = 0
sum_prob_ours = 0
for i in range(comb):
sum_prob_walrus += P_list_walrus[i]
sum_prob_ours += P_list_ours[i]
print('\n',"sum prob:","{:.3e}".format(sum_prob_walrus), '|', "{:.3e}".format(sum_prob_ours), '\n')
for i in range(m+1):
print('n =',i, '|', ' P_walrus = ', "{:.3e}".format(P_sum_walrus[i]), ' P_ours = ', "{:.3e}".format(P_sum_ours[i]))
# print('\n',"--- %s minutes ---" % ((time.process_time() - start_time)/60))
```
P [s a m p l e] walrus ours
/tmp/ipykernel_9/34118179.py:35: ComplexWarning: Casting complex values to real discards the imaginary part
P [0, 0, 0, 0, 0, 0, 0, 0] = 3.265e-02-3.366e-19j | 3.265e-02
P [0, 0, 0, 0, 0, 0, 0, 1] = 2.008e-03+1.218e-18j | 2.008e-03
P [0, 0, 0, 0, 0, 0, 1, 0] = 2.778e-04+3.372e-18j | 2.778e-04
P [0, 0, 0, 0, 0, 0, 1, 1] = 7.032e-03+1.001e-18j | 7.032e-03
P [0, 0, 0, 0, 0, 1, 0, 0] = 1.593e-05+5.658e-19j | 1.593e-05
P [0, 0, 0, 0, 0, 1, 0, 1] = 1.801e-03+7.970e-19j | 1.801e-03
P [0, 0, 0, 0, 0, 1, 1, 0] = 1.356e-03-6.354e-20j | 1.356e-03
P [0, 0, 0, 0, 0, 1, 1, 1] = 1.378e-03+1.873e-18j | 1.378e-03
P [0, 0, 0, 0, 1, 0, 0, 0] = 9.818e-04+2.048e-18j | 9.818e-04
P [0, 0, 0, 0, 1, 0, 0, 1] = 5.467e-03-1.260e-18j | 5.467e-03
P [0, 0, 0, 0, 1, 0, 1, 0] = 7.706e-03-3.006e-19j | 7.706e-03
P [0, 0, 0, 0, 1, 0, 1, 1] = 7.151e-03+2.101e-18j | 7.151e-03
P [0, 0, 0, 0, 1, 1, 0, 0] = 6.483e-04-4.510e-20j | 6.483e-04
P [0, 0, 0, 0, 1, 1, 0, 1] = 7.654e-04-2.847e-19j | 7.654e-04
P [0, 0, 0, 0, 1, 1, 1, 0] = 1.145e-03+2.629e-19j | 1.145e-03
P [0, 0, 0, 0, 1, 1, 1, 1] = 3.587e-03-9.623e-19j | 3.587e-03
P [0, 0, 0, 1, 0, 0, 0, 0] = 3.070e-05+8.215e-19j | 3.070e-05
P [0, 0, 0, 1, 0, 0, 0, 1] = 7.499e-03+6.073e-19j | 7.499e-03
P [0, 0, 0, 1, 0, 0, 1, 0] = 7.941e-04-4.154e-19j | 7.941e-04
P [0, 0, 0, 1, 0, 0, 1, 1] = 3.704e-03+2.135e-18j | 3.704e-03
P [0, 0, 0, 1, 0, 1, 0, 0] = 8.584e-04+2.636e-20j | 8.584e-04
P [0, 0, 0, 1, 0, 1, 0, 1] = 1.917e-03+8.746e-19j | 1.917e-03
P [0, 0, 0, 1, 0, 1, 1, 0] = 2.426e-04+1.549e-19j | 2.426e-04
P [0, 0, 0, 1, 0, 1, 1, 1] = 1.303e-03-1.188e-18j | 1.303e-03
P [0, 0, 0, 1, 1, 0, 0, 0] = 4.397e-04+2.415e-19j | 4.397e-04
P [0, 0, 0, 1, 1, 0, 0, 1] = 4.401e-03+1.485e-18j | 4.401e-03
P [0, 0, 0, 1, 1, 0, 1, 0] = 8.035e-04+8.786e-19j | 8.035e-04
P [0, 0, 0, 1, 1, 0, 1, 1] = 1.613e-02+2.275e-18j | 1.613e-02
P [0, 0, 0, 1, 1, 1, 0, 0] = 1.500e-04+9.998e-20j | 1.500e-04
P [0, 0, 0, 1, 1, 1, 0, 1] = 1.730e-03+4.776e-19j | 1.730e-03
P [0, 0, 0, 1, 1, 1, 1, 0] = 4.840e-04+2.448e-19j | 4.839e-04
P [0, 0, 0, 1, 1, 1, 1, 1] = 7.176e-03+3.253e-18j | 7.176e-03
P [0, 0, 1, 0, 0, 0, 0, 0] = 2.571e-03-3.059e-19j | 2.571e-03
P [0, 0, 1, 0, 0, 0, 0, 1] = 1.165e-03+6.790e-19j | 1.165e-03
P [0, 0, 1, 0, 0, 0, 1, 0] = 4.691e-03-1.089e-18j | 4.691e-03
P [0, 0, 1, 0, 0, 0, 1, 1] = 8.239e-03+2.193e-18j | 8.239e-03
P [0, 0, 1, 0, 0, 1, 0, 0] = 8.242e-04-5.374e-19j | 8.242e-04
P [0, 0, 1, 0, 0, 1, 0, 1] = 1.855e-04-8.375e-20j | 1.855e-04
P [0, 0, 1, 0, 0, 1, 1, 0] = 1.347e-03+1.211e-19j | 1.347e-03
P [0, 0, 1, 0, 0, 1, 1, 1] = 1.791e-03+3.301e-19j | 1.790e-03
P [0, 0, 1, 0, 1, 0, 0, 0] = 2.453e-03+3.357e-19j | 2.453e-03
P [0, 0, 1, 0, 1, 0, 0, 1] = 1.163e-03-1.347e-18j | 1.163e-03
P [0, 0, 1, 0, 1, 0, 1, 0] = 1.066e-02-1.466e-18j | 1.066e-02
P [0, 0, 1, 0, 1, 0, 1, 1] = 1.354e-02+2.686e-18j | 1.354e-02
P [0, 0, 1, 0, 1, 1, 0, 0] = 1.333e-04+9.042e-20j | 1.333e-04
P [0, 0, 1, 0, 1, 1, 0, 1] = 6.896e-04+7.503e-19j | 6.895e-04
P [0, 0, 1, 0, 1, 1, 1, 0] = 4.408e-03+1.976e-18j | 4.408e-03
P [0, 0, 1, 0, 1, 1, 1, 1] = 4.610e-03+1.736e-18j | 4.610e-03
P [0, 0, 1, 1, 0, 0, 0, 0] = 5.205e-06+7.785e-20j | 5.201e-06
P [0, 0, 1, 1, 0, 0, 0, 1] = 1.155e-03+5.395e-19j | 1.155e-03
P [0, 0, 1, 1, 0, 0, 1, 0] = 3.142e-04+7.166e-20j | 3.142e-04
P [0, 0, 1, 1, 0, 0, 1, 1] = 5.751e-03-4.664e-18j | 5.751e-03
P [0, 0, 1, 1, 0, 1, 0, 0] = 1.179e-04+1.520e-19j | 1.179e-04
P [0, 0, 1, 1, 0, 1, 0, 1] = 4.665e-04+7.920e-19j | 4.665e-04
P [0, 0, 1, 1, 0, 1, 1, 0] = 4.953e-04+2.498e-19j | 4.953e-04
P [0, 0, 1, 1, 0, 1, 1, 1] = 1.666e-03+2.846e-18j | 1.666e-03
P [0, 0, 1, 1, 1, 0, 0, 0] = 9.390e-05-2.008e-20j | 9.390e-05
P [0, 0, 1, 1, 1, 0, 0, 1] = 1.228e-03-3.004e-19j | 1.228e-03
P [0, 0, 1, 1, 1, 0, 1, 0] = 1.926e-03-7.707e-19j | 1.926e-03
P [0, 0, 1, 1, 1, 0, 1, 1] = 1.990e-02+1.264e-18j | 1.990e-02
P [0, 0, 1, 1, 1, 1, 0, 0] = 8.348e-05+1.675e-19j | 8.349e-05
P [0, 0, 1, 1, 1, 1, 0, 1] = 1.001e-03-2.205e-18j | 1.001e-03
P [0, 0, 1, 1, 1, 1, 1, 0] = 2.023e-03-4.905e-18j | 2.023e-03
P [0, 0, 1, 1, 1, 1, 1, 1] = 8.115e-03-2.603e-18j | 8.115e-03
P [0, 1, 0, 0, 0, 0, 0, 0] = 1.691e-03+1.708e-20j | 1.691e-03
P [0, 1, 0, 0, 0, 0, 0, 1] = 2.702e-03+3.925e-19j | 2.702e-03
P [0, 1, 0, 0, 0, 0, 1, 0] = 5.892e-04-1.003e-19j | 5.892e-04
P [0, 1, 0, 0, 0, 0, 1, 1] = 1.900e-03+1.079e-18j | 1.899e-03
P [0, 1, 0, 0, 0, 1, 0, 0] = 2.120e-04+1.223e-19j | 2.120e-04
P [0, 1, 0, 0, 0, 1, 0, 1] = 6.259e-04-9.090e-20j | 6.259e-04
P [0, 1, 0, 0, 0, 1, 1, 0] = 1.676e-04-3.098e-20j | 1.676e-04
P [0, 1, 0, 0, 0, 1, 1, 1] = 1.285e-03-1.691e-18j | 1.285e-03
P [0, 1, 0, 0, 1, 0, 0, 0] = 3.677e-03+1.325e-18j | 3.677e-03
P [0, 1, 0, 0, 1, 0, 0, 1] = 1.855e-03+6.805e-19j | 1.855e-03
P [0, 1, 0, 0, 1, 0, 1, 0] = 2.301e-03+1.026e-18j | 2.301e-03
P [0, 1, 0, 0, 1, 0, 1, 1] = 4.694e-03-1.320e-18j | 4.694e-03
P [0, 1, 0, 0, 1, 1, 0, 0] = 4.217e-04+1.364e-19j | 4.217e-04
P [0, 1, 0, 0, 1, 1, 0, 1] = 7.328e-04+1.843e-18j | 7.328e-04
P [0, 1, 0, 0, 1, 1, 1, 0] = 5.758e-04+8.166e-19j | 5.758e-04
P [0, 1, 0, 0, 1, 1, 1, 1] = 3.127e-03+4.385e-18j | 3.127e-03
P [0, 1, 0, 1, 0, 0, 0, 0] = 5.639e-03-5.830e-19j | 5.639e-03
P [0, 1, 0, 1, 0, 0, 0, 1] = 6.705e-03-1.066e-18j | 6.705e-03
P [0, 1, 0, 1, 0, 0, 1, 0] = 4.285e-04+6.237e-19j | 4.285e-04
P [0, 1, 0, 1, 0, 0, 1, 1] = 2.809e-03-3.157e-18j | 2.809e-03
P [0, 1, 0, 1, 0, 1, 0, 0] = 6.566e-04+1.341e-19j | 6.566e-04
P [0, 1, 0, 1, 0, 1, 0, 1] = 3.883e-03+1.744e-19j | 3.883e-03
P [0, 1, 0, 1, 0, 1, 1, 0] = 4.990e-04+5.612e-20j | 4.990e-04
P [0, 1, 0, 1, 0, 1, 1, 1] = 1.870e-03+6.699e-18j | 1.870e-03
P [0, 1, 0, 1, 1, 0, 0, 0] = 3.283e-03+1.045e-18j | 3.283e-03
P [0, 1, 0, 1, 1, 0, 0, 1] = 1.243e-02-2.424e-18j | 1.243e-02
P [0, 1, 0, 1, 1, 0, 1, 0] = 4.236e-03+6.295e-20j | 4.236e-03
P [0, 1, 0, 1, 1, 0, 1, 1] = 2.482e-02+1.605e-17j | 2.482e-02
P [0, 1, 0, 1, 1, 1, 0, 0] = 1.820e-03+8.229e-20j | 1.820e-03
P [0, 1, 0, 1, 1, 1, 0, 1] = 8.396e-03+4.332e-18j | 8.396e-03
P [0, 1, 0, 1, 1, 1, 1, 0] = 1.883e-03+1.220e-18j | 1.883e-03
P [0, 1, 0, 1, 1, 1, 1, 1] = 2.522e-02+4.735e-18j | 2.522e-02
P [0, 1, 1, 0, 0, 0, 0, 0] = 3.714e-03-5.448e-19j | 3.714e-03
P [0, 1, 1, 0, 0, 0, 0, 1] = 6.055e-04+2.314e-19j | 6.055e-04
P [0, 1, 1, 0, 0, 0, 1, 0] = 2.346e-03-6.696e-19j | 2.346e-03
P [0, 1, 1, 0, 0, 0, 1, 1] = 2.725e-03-5.492e-19j | 2.725e-03
P [0, 1, 1, 0, 0, 1, 0, 0] = 5.440e-04+1.969e-20j | 5.440e-04
P [0, 1, 1, 0, 0, 1, 0, 1] = 5.167e-04+3.575e-20j | 5.167e-04
P [0, 1, 1, 0, 0, 1, 1, 0] = 1.668e-03-5.143e-20j | 1.668e-03
P [0, 1, 1, 0, 0, 1, 1, 1] = 1.191e-03+5.177e-19j | 1.191e-03
P [0, 1, 1, 0, 1, 0, 0, 0] = 5.656e-04+2.593e-19j | 5.656e-04
P [0, 1, 1, 0, 1, 0, 0, 1] = 2.941e-03+7.798e-19j | 2.941e-03
P [0, 1, 1, 0, 1, 0, 1, 0] = 3.545e-03+8.309e-19j | 3.545e-03
P [0, 1, 1, 0, 1, 0, 1, 1] = 8.900e-03-3.648e-18j | 8.900e-03
P [0, 1, 1, 0, 1, 1, 0, 0] = 3.290e-04+4.770e-19j | 3.290e-04
P [0, 1, 1, 0, 1, 1, 0, 1] = 9.242e-04-2.445e-18j | 9.242e-04
P [0, 1, 1, 0, 1, 1, 1, 0] = 2.292e-03-7.315e-18j | 2.292e-03
P [0, 1, 1, 0, 1, 1, 1, 1] = 6.172e-03-1.394e-17j | 6.172e-03
P [0, 1, 1, 1, 0, 0, 0, 0] = 2.125e-03+7.101e-19j | 2.125e-03
P [0, 1, 1, 1, 0, 0, 0, 1] = 3.652e-03+1.540e-18j | 3.652e-03
P [0, 1, 1, 1, 0, 0, 1, 0] = 2.481e-03-1.515e-18j | 2.481e-03
P [0, 1, 1, 1, 0, 0, 1, 1] = 8.932e-03-6.922e-18j | 8.932e-03
P [0, 1, 1, 1, 0, 1, 0, 0] = 1.351e-03+5.250e-19j | 1.351e-03
P [0, 1, 1, 1, 0, 1, 0, 1] = 2.994e-03-1.133e-18j | 2.994e-03
P [0, 1, 1, 1, 0, 1, 1, 0] = 3.306e-03-1.242e-18j | 3.306e-03
P [0, 1, 1, 1, 0, 1, 1, 1] = 5.427e-03+2.217e-18j | 5.427e-03
P [0, 1, 1, 1, 1, 0, 0, 0] = 1.046e-03+1.423e-18j | 1.046e-03
P [0, 1, 1, 1, 1, 0, 0, 1] = 1.087e-02+3.346e-18j | 1.087e-02
P [0, 1, 1, 1, 1, 0, 1, 0] = 5.484e-03+3.136e-18j | 5.484e-03
P [0, 1, 1, 1, 1, 0, 1, 1] = 3.875e-02+1.329e-17j | 3.875e-02
P [0, 1, 1, 1, 1, 1, 0, 0] = 1.151e-03-1.625e-19j | 1.151e-03
P [0, 1, 1, 1, 1, 1, 0, 1] = 1.243e-02-5.113e-18j | 1.243e-02
P [0, 1, 1, 1, 1, 1, 1, 0] = 6.440e-03+8.596e-18j | 6.439e-03
P [0, 1, 1, 1, 1, 1, 1, 1] = 4.901e-02+9.975e-17j | 4.901e-02
P [1, 0, 0, 0, 0, 0, 0, 0] = 2.940e-04-7.697e-19j | 2.940e-04
P [1, 0, 0, 0, 0, 0, 0, 1] = 6.437e-04-3.277e-19j | 6.437e-04
P [1, 0, 0, 0, 0, 0, 1, 0] = 1.956e-03+3.622e-19j | 1.956e-03
P [1, 0, 0, 0, 0, 0, 1, 1] = 2.101e-03-5.057e-20j | 2.101e-03
P [1, 0, 0, 0, 0, 1, 0, 0] = 2.178e-04+1.908e-19j | 2.178e-04
P [1, 0, 0, 0, 0, 1, 0, 1] = 1.822e-04-1.827e-19j | 1.822e-04
P [1, 0, 0, 0, 0, 1, 1, 0] = 2.671e-04+1.598e-19j | 2.671e-04
P [1, 0, 0, 0, 0, 1, 1, 1] = 1.420e-03-2.044e-19j | 1.420e-03
P [1, 0, 0, 0, 1, 0, 0, 0] = 3.911e-04+1.264e-19j | 3.911e-04
P [1, 0, 0, 0, 1, 0, 0, 1] = 2.475e-04-4.085e-19j | 2.475e-04
P [1, 0, 0, 0, 1, 0, 1, 0] = 1.391e-03-1.246e-19j | 1.391e-03
P [1, 0, 0, 0, 1, 0, 1, 1] = 3.290e-03-1.177e-18j | 3.290e-03
P [1, 0, 0, 0, 1, 1, 0, 0] = 5.375e-05+1.310e-20j | 5.375e-05
P [1, 0, 0, 0, 1, 1, 0, 1] = 6.339e-05+4.339e-19j | 6.339e-05
P [1, 0, 0, 0, 1, 1, 1, 0] = 6.411e-04+3.943e-19j | 6.411e-04
P [1, 0, 0, 0, 1, 1, 1, 1] = 3.145e-03+4.560e-18j | 3.145e-03
P [1, 0, 0, 1, 0, 0, 0, 0] = 1.402e-04-1.103e-19j | 1.402e-04
P [1, 0, 0, 1, 0, 0, 0, 1] = 3.690e-04+2.089e-19j | 3.690e-04
P [1, 0, 0, 1, 0, 0, 1, 0] = 1.287e-04+2.260e-19j | 1.287e-04
P [1, 0, 0, 1, 0, 0, 1, 1] = 1.396e-03-2.604e-18j | 1.396e-03
P [1, 0, 0, 1, 0, 1, 0, 0] = 3.590e-05-4.962e-20j | 3.590e-05
P [1, 0, 0, 1, 0, 1, 0, 1] = 1.655e-04+3.184e-19j | 1.654e-04
P [1, 0, 0, 1, 0, 1, 1, 0] = 1.187e-04+2.259e-20j | 1.187e-04
P [1, 0, 0, 1, 0, 1, 1, 1] = 8.393e-04+2.386e-18j | 8.393e-04
P [1, 0, 0, 1, 1, 0, 0, 0] = 4.031e-05-2.796e-20j | 4.031e-05
P [1, 0, 0, 1, 1, 0, 0, 1] = 4.118e-04-1.607e-20j | 4.118e-04
P [1, 0, 0, 1, 1, 0, 1, 0] = 1.514e-04+4.419e-19j | 1.514e-04
P [1, 0, 0, 1, 1, 0, 1, 1] = 6.862e-03+9.339e-18j | 6.862e-03
P [1, 0, 0, 1, 1, 1, 0, 0] = 5.251e-05-2.093e-20j | 5.251e-05
P [1, 0, 0, 1, 1, 1, 0, 1] = 1.672e-04+1.954e-19j | 1.672e-04
P [1, 0, 0, 1, 1, 1, 1, 0] = 2.087e-04-7.650e-19j | 2.087e-04
P [1, 0, 0, 1, 1, 1, 1, 1] = 6.317e-03-8.196e-18j | 6.317e-03
P [1, 0, 1, 0, 0, 0, 0, 0] = 6.704e-04+3.904e-20j | 6.704e-04
P [1, 0, 1, 0, 0, 0, 0, 1] = 1.098e-04+3.074e-19j | 1.098e-04
P [1, 0, 1, 0, 0, 0, 1, 0] = 1.342e-03+1.553e-19j | 1.342e-03
P [1, 0, 1, 0, 0, 0, 1, 1] = 4.310e-03-4.422e-18j | 4.310e-03
P [1, 0, 1, 0, 0, 1, 0, 0] = 6.007e-05-2.741e-20j | 6.007e-05
P [1, 0, 1, 0, 0, 1, 0, 1] = 8.887e-05+3.315e-19j | 8.887e-05
P [1, 0, 1, 0, 0, 1, 1, 0] = 4.118e-04-6.450e-19j | 4.118e-04
P [1, 0, 1, 0, 0, 1, 1, 1] = 1.745e-03+1.206e-18j | 1.745e-03
P [1, 0, 1, 0, 1, 0, 0, 0] = 9.150e-05-1.092e-19j | 9.150e-05
P [1, 0, 1, 0, 1, 0, 0, 1] = 1.686e-04+6.851e-19j | 1.686e-04
P [1, 0, 1, 0, 1, 0, 1, 0] = 3.532e-03+3.003e-18j | 3.532e-03
P [1, 0, 1, 0, 1, 0, 1, 1] = 6.066e-03+8.754e-18j | 6.066e-03
P [1, 0, 1, 0, 1, 1, 0, 0] = 8.616e-05+3.020e-20j | 8.617e-05
P [1, 0, 1, 0, 1, 1, 0, 1] = 7.441e-05-1.267e-18j | 7.443e-05
P [1, 0, 1, 0, 1, 1, 1, 0] = 1.896e-03-1.153e-18j | 1.896e-03
P [1, 0, 1, 0, 1, 1, 1, 1] = 6.178e-03-2.302e-17j | 6.177e-03
P [1, 0, 1, 1, 0, 0, 0, 0] = 1.915e-05+6.245e-20j | 1.915e-05
P [1, 0, 1, 1, 0, 0, 0, 1] = 1.982e-04-6.586e-20j | 1.982e-04
P [1, 0, 1, 1, 0, 0, 1, 0] = 1.823e-04-1.450e-19j | 1.823e-04
P [1, 0, 1, 1, 0, 0, 1, 1] = 3.049e-03+6.365e-18j | 3.049e-03
P [1, 0, 1, 1, 0, 1, 0, 0] = 4.591e-05+1.089e-20j | 4.590e-05
P [1, 0, 1, 1, 0, 1, 0, 1] = 9.490e-05-1.466e-18j | 9.491e-05
P [1, 0, 1, 1, 0, 1, 1, 0] = 2.047e-04+4.632e-20j | 2.047e-04
P [1, 0, 1, 1, 0, 1, 1, 1] = 1.683e-03+1.105e-18j | 1.683e-03
P [1, 0, 1, 1, 1, 0, 0, 0] = 1.823e-05-3.279e-20j | 1.823e-05
P [1, 0, 1, 1, 1, 0, 0, 1] = 1.694e-04+6.595e-19j | 1.694e-04
P [1, 0, 1, 1, 1, 0, 1, 0] = 8.245e-04-5.268e-19j | 8.245e-04
P [1, 0, 1, 1, 1, 0, 1, 1] = 7.296e-03+7.318e-18j | 7.296e-03
P [1, 0, 1, 1, 1, 1, 0, 0] = 2.266e-05-1.166e-19j | 2.268e-05
P [1, 0, 1, 1, 1, 1, 0, 1] = 1.294e-04+2.074e-18j | 1.293e-04
P [1, 0, 1, 1, 1, 1, 1, 0] = 1.353e-03+6.948e-18j | 1.353e-03
P [1, 0, 1, 1, 1, 1, 1, 1] = 8.512e-03+3.913e-17j | 8.511e-03
P [1, 1, 0, 0, 0, 0, 0, 0] = 2.241e-03+1.075e-18j | 2.241e-03
P [1, 1, 0, 0, 0, 0, 0, 1] = 1.249e-03-3.089e-20j | 1.249e-03
P [1, 1, 0, 0, 0, 0, 1, 0] = 8.311e-04+2.282e-19j | 8.311e-04
P [1, 1, 0, 0, 0, 0, 1, 1] = 4.482e-03+1.034e-19j | 4.482e-03
P [1, 1, 0, 0, 0, 1, 0, 0] = 1.178e-04+1.384e-19j | 1.178e-04
P [1, 1, 0, 0, 0, 1, 0, 1] = 7.676e-04+2.143e-19j | 7.676e-04
P [1, 1, 0, 0, 0, 1, 1, 0] = 4.079e-04+6.638e-20j | 4.079e-04
P [1, 1, 0, 0, 0, 1, 1, 1] = 4.906e-03+4.401e-18j | 4.906e-03
P [1, 1, 0, 0, 1, 0, 0, 0] = 1.259e-03+1.252e-19j | 1.259e-03
P [1, 1, 0, 0, 1, 0, 0, 1] = 5.276e-04+3.729e-19j | 5.276e-04
P [1, 1, 0, 0, 1, 0, 1, 0] = 2.235e-03+1.884e-20j | 2.235e-03
P [1, 1, 0, 0, 1, 0, 1, 1] = 6.477e-03+2.621e-18j | 6.477e-03
P [1, 1, 0, 0, 1, 1, 0, 0] = 4.006e-04+3.589e-19j | 4.006e-04
P [1, 1, 0, 0, 1, 1, 0, 1] = 4.321e-04-8.341e-19j | 4.320e-04
P [1, 1, 0, 0, 1, 1, 1, 0] = 1.349e-03-1.731e-18j | 1.349e-03
P [1, 1, 0, 0, 1, 1, 1, 1] = 1.306e-02-1.073e-17j | 1.306e-02
P [1, 1, 0, 1, 0, 0, 0, 0] = 1.392e-03+3.166e-19j | 1.392e-03
P [1, 1, 0, 1, 0, 0, 0, 1] = 2.555e-03+2.922e-19j | 2.555e-03
P [1, 1, 0, 1, 0, 0, 1, 0] = 4.753e-04-7.285e-20j | 4.753e-04
P [1, 1, 0, 1, 0, 0, 1, 1] = 3.827e-03+4.809e-18j | 3.827e-03
P [1, 1, 0, 1, 0, 1, 0, 0] = 5.380e-04+1.576e-19j | 5.380e-04
P [1, 1, 0, 1, 0, 1, 0, 1] = 2.939e-03+1.256e-18j | 2.939e-03
P [1, 1, 0, 1, 0, 1, 1, 0] = 4.981e-04+1.633e-19j | 4.981e-04
P [1, 1, 0, 1, 0, 1, 1, 1] = 6.525e-03-3.784e-18j | 6.525e-03
P [1, 1, 0, 1, 1, 0, 0, 0] = 2.631e-03+8.952e-19j | 2.631e-03
P [1, 1, 0, 1, 1, 0, 0, 1] = 4.512e-03+4.545e-18j | 4.512e-03
P [1, 1, 0, 1, 1, 0, 1, 0] = 3.193e-03+4.798e-18j | 3.193e-03
P [1, 1, 0, 1, 1, 0, 1, 1] = 1.257e-02-8.113e-19j | 1.257e-02
P [1, 1, 0, 1, 1, 1, 0, 0] = 2.175e-03+1.783e-18j | 2.175e-03
P [1, 1, 0, 1, 1, 1, 0, 1] = 6.010e-03+2.559e-18j | 6.010e-03
P [1, 1, 0, 1, 1, 1, 1, 0] = 4.762e-03-9.731e-19j | 4.762e-03
P [1, 1, 0, 1, 1, 1, 1, 1] = 2.782e-02+7.536e-17j | 2.782e-02
P [1, 1, 1, 0, 0, 0, 0, 0] = 1.593e-03+1.292e-19j | 1.593e-03
P [1, 1, 1, 0, 0, 0, 0, 1] = 4.302e-04+4.718e-20j | 4.302e-04
P [1, 1, 1, 0, 0, 0, 1, 0] = 2.603e-03-8.021e-19j | 2.603e-03
P [1, 1, 1, 0, 0, 0, 1, 1] = 3.721e-03+2.946e-18j | 3.721e-03
P [1, 1, 1, 0, 0, 1, 0, 0] = 4.641e-04+2.329e-19j | 4.641e-04
P [1, 1, 1, 0, 0, 1, 0, 1] = 6.090e-04+4.417e-19j | 6.090e-04
P [1, 1, 1, 0, 0, 1, 1, 0] = 2.044e-03+3.820e-19j | 2.044e-03
P [1, 1, 1, 0, 0, 1, 1, 1] = 3.957e-03-2.415e-18j | 3.957e-03
P [1, 1, 1, 0, 1, 0, 0, 0] = 8.942e-04-1.490e-19j | 8.942e-04
P [1, 1, 1, 0, 1, 0, 0, 1] = 8.406e-04-5.086e-19j | 8.406e-04
P [1, 1, 1, 0, 1, 0, 1, 0] = 2.991e-03-1.779e-18j | 2.991e-03
P [1, 1, 1, 0, 1, 0, 1, 1] = 1.377e-02+1.295e-17j | 1.377e-02
P [1, 1, 1, 0, 1, 1, 0, 0] = 5.004e-04-1.686e-19j | 5.004e-04
P [1, 1, 1, 0, 1, 1, 0, 1] = 5.333e-04+3.393e-18j | 5.333e-04
P [1, 1, 1, 0, 1, 1, 1, 0] = 2.705e-03+1.527e-18j | 2.705e-03
P [1, 1, 1, 0, 1, 1, 1, 1] = 2.259e-02+2.612e-17j | 2.259e-02
P [1, 1, 1, 1, 0, 0, 0, 0] = 1.803e-03+1.165e-18j | 1.803e-03
P [1, 1, 1, 1, 0, 0, 0, 1] = 1.744e-03-1.610e-18j | 1.744e-03
P [1, 1, 1, 1, 0, 0, 1, 0] = 2.599e-03+7.010e-19j | 2.599e-03
P [1, 1, 1, 1, 0, 0, 1, 1] = 9.942e-03+1.976e-17j | 9.942e-03
P [1, 1, 1, 1, 0, 1, 0, 0] = 1.467e-03-1.052e-18j | 1.467e-03
P [1, 1, 1, 1, 0, 1, 0, 1] = 3.587e-03+4.007e-18j | 3.587e-03
P [1, 1, 1, 1, 0, 1, 1, 0] = 5.921e-03+3.961e-18j | 5.921e-03
P [1, 1, 1, 1, 0, 1, 1, 1] = 1.290e-02+1.930e-17j | 1.290e-02
P [1, 1, 1, 1, 1, 0, 0, 0] = 2.215e-03-7.218e-19j | 2.215e-03
P [1, 1, 1, 1, 1, 0, 0, 1] = 6.033e-03+3.134e-18j | 6.033e-03
P [1, 1, 1, 1, 1, 0, 1, 0] = 4.396e-03-3.884e-18j | 4.396e-03
P [1, 1, 1, 1, 1, 0, 1, 1] = 3.100e-02-5.153e-17j | 3.100e-02
P [1, 1, 1, 1, 1, 1, 0, 0] = 4.327e-03+5.781e-18j | 4.327e-03
P [1, 1, 1, 1, 1, 1, 0, 1] = 1.054e-02-1.708e-17j | 1.054e-02
P [1, 1, 1, 1, 1, 1, 1, 0] = 1.127e-02-1.760e-17j | 1.127e-02
P [1, 1, 1, 1, 1, 1, 1, 1] = 7.251e-02 | 7.251e-02
sum prob: 1.000e+00+2.917e-16j | 1.000e+00
n = 0 | P_walrus = 3.265e-02 P_ours = 3.265e-02
n = 1 | P_walrus = 7.869e-03 P_ours = 7.869e-03
n = 2 | P_walrus = 6.553e-02 P_ours = 6.553e-02
n = 3 | P_walrus = 8.247e-02 P_ours = 8.247e-02
n = 4 | P_walrus = 1.448e-01 P_ours = 1.448e-01
n = 5 | P_walrus = 1.969e-01 P_ours = 1.969e-01
n = 6 | P_walrus = 2.236e-01 P_ours = 2.236e-01
n = 7 | P_walrus = 1.736e-01 P_ours = 1.736e-01
n = 8 | P_walrus = 7.251e-02 P_ours = 7.251e-02
```python
n_cl = [i for i in range(m+1)]
plt.plot(n_cl, P_sum_walrus, label = 'the walrus')
plt.plot(n_cl, P_sum_ours, '.', label = 'our scheme')
plt.yscale('log')
plt.legend(prop={'size':13}, loc='lower center')
plt.xlabel(r'n',fontsize=20)
plt.ylabel(r'P',fontsize=20)
plt.show()
```
```python
# Calculating the probabilities via sectors
print("P [s a m p l e] ", " exact", " sum over sectors", '\n' )
# P[random sample]
N_comb = unique_permutations.index((1,1,0,1,0,1,0,1))
list_det = list(unique_permutations[N_comb])
n_tar = list_det.count(1)
list_det_ = covert_01_0123(list_det)
M_sub = red_mat(M, list_det_)
P_sectors = prob_sectors_exact(M_sub)
norm = 1/Z(M)
P_ = 0
for nu in range(10*n_tar):
P_ += P_sectors[n_tar,nu]*norm
print('P',list_det, ' = ', "{:.3e}".format(P_list_ours[N_comb]) ,' ', "{:.3e}".format(P_))
```
P [s a m p l e] exact sum over sectors
P [1, 1, 0, 1, 0, 1, 0, 1] = 2.939e-03 2.939e-03
```python
# Sectors for the last point of P(n) ( P[1,1,1,1,1,1] )
P_sectors = prob_sectors_exact(M)
plt.plot(n_cl, P_sum_ours, '-', label = 'our scheme')
for nu in range(m,m*10,m):
plt.plot([i for i in range(m+1)], [P_sectors[j,nu]/Z(M) for j in range(m+1)],'--' ,label = 'k='+str(2*nu))
plt.yscale('log')
plt.legend(prop={'size':10}, loc='lower left')
plt.xlabel(r'n',fontsize=20)
plt.ylabel(r'P',fontsize=20)
plt.ylim([10**(-6),10**(-0.3)])
plt.show()
```
```python
# Moments calculation
# Import minors
Nu = 10*m
data_minors = np.genfromtxt(path + r'/Minors0-1.dat')
data_minors2 = np.genfromtxt(path + r'/Minors2.dat')
data_minors3 = np.genfromtxt(path + r'/Minors3.dat')
data_minors4 = np.genfromtxt(path + r'/Minors4.dat')
p2 = round(fact(m)/(fact(m-2)*2))
p3 = round(fact(m)/(fact(m - 3)*fact(3)))
p4 = round(fact(m)/(fact(m - 4)*fact(4)))
Z_v_0 = np.zeros((Nu),dtype=np.complex128)
Z_v_1 = np.zeros((m, Nu),dtype=np.complex128)
Z_v_2 = np.zeros((p2, Nu),dtype=np.complex128)
Z_v_3 = np.zeros((p3, Nu),dtype=np.complex128)
Z_v_4 = np.zeros((p4, Nu),dtype=np.complex128)
for j in range(Nu):
Z_v_0[j] = data_minors[j,1:2] + 1j*data_minors[j,2:3]
for j in range(Nu):
for n in range(0,2*m,2):
Z_v_1[n//2,j] = data_minors[j,int(3+n)] + 1j*data_minors[j,int(4+n)]
for j in range(Nu):
for n in range(0,2*p2,2):
Z_v_2[n//2,j] = data_minors2[j,int(1+n)] + 1j*data_minors2[j,int(2+n)]
for j in range(Nu):
for n in range(0,2*p3,2):
Z_v_3[n//2,j] = data_minors3[j,int(1+(n))] + 1j*data_minors3[j,int(2+(n))]
for j in range(Nu):
for n in range(0,2*p4,2):
Z_v_4[n//2,j] = data_minors4[j,int(1+(n))] + 1j*data_minors4[j,int(2+(n))]
Z_v_0f = np.fft.fft(Z_v_0)/Nu
Z_v_1f = np.fft.fft(Z_v_1)/Nu
Z_v_2f = np.fft.fft(Z_v_2)/Nu
Z_v_3f = np.fft.fft(Z_v_3)/Nu
Z_v_4f = np.fft.fft(Z_v_4)/Nu
```
```python
# Moments calculation
mean_ = np.zeros(Nu)
disp_ = np.zeros(Nu)
m3_ = np.zeros(Nu)
m4_ = np.zeros(Nu)
m5_ = np.zeros(Nu)
def moment_formula(n, *args):
m = 0
for x in args:
moments = x
if n == 2:
m = moments[0] + 2*moments[1] - moments[0]**2
if n == 3:
m = moments[0] + 6*moments[1] + 6*moments[2] - 3*mean_[nu]*(moments[0] + 2*moments[1]) + 2*moments[0]**3
if n == 4:
m_2 = moments[0] + 2*moments[1]
m_3 = moments[0] + 6*moments[1] + 6*moments[2]
m_4 = moments[0] + 14*moments[1] + 36*moments[2] + 24*moments[3]
m = m_4 - 4*m_3*moments[0]- 3*m_2**2 + 12*m_2*moments[0]**2 - 6*moments[0]**4
return m
n_ij_v = np.zeros(Nu)
n_ijk_v = np.zeros(Nu)
n_ijkl_v = np.zeros(Nu)
n_ijklp_v = np.zeros(Nu)
ind_2 = []
ind_3 = []
ind_4 = []
for i in range(m):
for j in range(i+1, m):
ind_2.append([i,j])
for i in range(m):
for j in range(i+1, m):
for k in range(j+1, m):
ind_3.append([i,j,k])
for i in range(m):
for j in range(i+1, m):
for k in range(j+1, m):
for l in range(k+1, m):
ind_4.append([i,j,k,l])
for z in range(Nu):
for j in range(m):
mean_[z] += 1 - (Z_v_1f[j,z]/Z_v_0f[z]).real
for nu in range(Nu):
i_ = 0
for i in range(m):
for j in range(i+1, m):
n_ij_v[nu] += 1 - (( Z_v_1f[j,nu] + Z_v_1f[i,nu] - Z_v_2f[i_,nu])/Z_v_0f[nu]).real
i_ += 1
disp_[nu] = moment_formula(2, [mean_[nu], n_ij_v[nu]])
for nu in range(Nu):
i_= 0
for i in range(m):
for j in range(i+1, m):
for k in range(j+1, m):
z1 = ind_2.index([i,j])
z2 = ind_2.index([i,k])
z3 = ind_2.index([j,k])
n_ijk_v[nu] += 1 - ((Z_v_1f[i,nu] + Z_v_1f[j,nu] + Z_v_1f[k,nu] - Z_v_2f[z1,nu] - Z_v_2f[z2,nu] - Z_v_2f[z3,nu] + Z_v_3f[i_,nu])/Z_v_0f[nu]).real
i_ += 1
m3_[nu] = moment_formula(3, [mean_[nu], n_ij_v[nu], n_ijk_v[nu]])
for nu in range(Nu):
i_= 0
for i in range(m):
for j in range(i+1, m):
for k in range(j+1, m):
for l in range(k+1, m):
z1 = ind_2.index([i,j])
z2 = ind_2.index([i,k])
z3 = ind_2.index([i,l])
z4 = ind_2.index([j,k])
z5 = ind_2.index([k,l])
z6 = ind_2.index([j,l])
h1 = ind_3.index([i,j,k])
h2 = ind_3.index([j,k,l])
h3 = ind_3.index([i,k,l])
h4 = ind_3.index([i,j,l])
n_ijkl_v[nu] += 1 - ((Z_v_1f[i,nu] + Z_v_1f[j,nu] + Z_v_1f[k,nu] + Z_v_1f[l,nu] - Z_v_2f[z1,nu] - Z_v_2f[z2,nu] - Z_v_2f[z3,nu] - Z_v_2f[z4,nu] - Z_v_2f[z5,nu] - Z_v_2f[z6,nu] + Z_v_3f[h1,nu] + Z_v_3f[h2,nu] + Z_v_3f[h3,nu] + Z_v_3f[h4,nu] - Z_v_4f[i_,nu])/Z_v_0f[nu]).real
i_ += 1
m4_[nu] = moment_formula(4, [mean_[nu], n_ij_v[nu], n_ijk_v[nu], n_ijkl_v[nu]])
```
```python
# Approximation
old_settings = np.seterr(all='ignore')
def gauss_fun(x, *args):
for c in args:
c = args
if len(c) == 3:
res = c[0]*np.exp(-(x - c[1])**2/(2*c[2]))
if len(c) == 4:
res = c[0]*np.exp(-(x - c[1])**2/(2*c[2])) * np.exp(+ c[3]*(x - c[1])**3/(6*c[2]**3))
if len(c) == 5:
res = c[0]*np.exp(-(x - c[1])**2/(2*c[2])) * np.exp(+ c[3]*(x - c[1])**3/(6*c[2]**3)) * np.exp(+ c[4]*(x - c[1])**4/(8*c[2]**4))
return res
mu0 = np.zeros(Nu)
mu1 = np.zeros(Nu)
mu2 = np.zeros(Nu)
mu3 = np.zeros(Nu)
mu4 = np.zeros(Nu)
for nu in range(Nu):
mu0[nu] = (Z_v_0f[nu]/Z_v_0[0]).real
mu1[nu] = mean_[nu]
mu2[nu] = disp_[nu]
mu3[nu] = m3_[nu]
mu4[nu] = m4_[nu]
n_cut = int(m+1)
# 2 order
A_2 = np.zeros(Nu)
Mu1_2 = np.zeros(Nu)
Mu2_2 = np.zeros(Nu)
for nu in range(Nu):
A_2[nu] = mu0[nu]
Mu1_2[nu] = mu1[nu]
Mu2_2[nu] = mu2[nu]
for z in range(300):
s0 = 0
s1 = 0
s2 = 0
s3 = 0
s4 = 0
for j in range(n_cut):
s0 += gauss_fun(j, A_2[nu], Mu1_2[nu], Mu2_2[nu])
s1 += gauss_fun(j, A_2[nu], Mu1_2[nu], Mu2_2[nu])* j
s2 += gauss_fun(j, A_2[nu], Mu1_2[nu], Mu2_2[nu])* j**2
if s0==s0:
mu0_ = s0
mu1_ = s1/s0
mu2_ = s2/s0 - mu1_**2
A_2[nu] += 0.1*(mu0[nu] - mu0_ )
Mu1_2[nu] += 0.1*(mu1[nu] - mu1_)
Mu2_2[nu] += 0.1*(mu2[nu] - mu2_)
else:
A_2[nu] += 0
Mu1_2[nu] += 0
Mu2_2[nu] += 0
# 3 order
A_3 = np.zeros(Nu)
Mu1_3 = np.zeros(Nu)
Mu2_3 = np.zeros(Nu)
Mu3_3 = np.zeros(Nu)
for nu in range(Nu):
A_3[nu] = mu0[nu]
Mu1_3[nu] = mu1[nu]
Mu2_3[nu] = mu2[nu]
Mu3_3[nu] = 0
for z in range(500):
s0 = 0
s1 = 0
s2 = 0
s3 = 0
for j in range(n_cut):
s0 += gauss_fun(j, A_3[nu], Mu1_3[nu], Mu2_3[nu], Mu3_3[nu])
s1 += gauss_fun(j, A_3[nu], Mu1_3[nu], Mu2_3[nu], Mu3_3[nu])* j
s2 += gauss_fun(j, A_3[nu], Mu1_3[nu], Mu2_3[nu], Mu3_3[nu])* j**2
s3 += gauss_fun(j, A_3[nu], Mu1_3[nu], Mu2_3[nu], Mu3_3[nu])* j**3
if s0==s0:
mu0_ = s0
mu1_ = s1/s0
mu2_ = s2/s0 - mu1_**2
mu3_ = s3/s0 - 3*mu2_*mu1_ - mu1_**3
A_3[nu] += 0.05*(mu0[nu] - mu0_)
Mu1_3[nu] += 0.1*(mu1[nu] - mu1_ )
Mu2_3[nu] += 0.1*(mu2[nu] - mu2_)
Mu3_3[nu] += 0.05*(mu3[nu] - mu3_)
else:
A_3[nu] += 0
Mu1_3[nu] += 0
Mu2_3[nu] += 0
Mu3_3[nu] += 0
A_4 = np.zeros(Nu)
Mu1_4 = np.zeros(Nu)
Mu2_4 = np.zeros(Nu)
Mu3_4 = np.zeros(Nu)
Mu4_4 = np.zeros(Nu)
for nu in range(Nu):
A_4[nu] = mu0[nu]
Mu1_4[nu] = mu1[nu]
Mu2_4[nu] = mu2[nu]
Mu3_4[nu] = 0
Mu4_4[nu] = 0
for z in range(700):
s0 = 0
s1 = 0
s2 = 0
s3 = 0
s4 = 0
for j in range(n_cut):
s0 += gauss_fun(j, A_4[nu], Mu1_4[nu], Mu2_4[nu], Mu3_4[nu], Mu4_4[nu])
s1 += gauss_fun(j, A_4[nu], Mu1_4[nu], Mu2_4[nu], Mu3_4[nu], Mu4_4[nu])* j
s2 += gauss_fun(j, A_4[nu], Mu1_4[nu], Mu2_4[nu], Mu3_4[nu], Mu4_4[nu])* j**2
s3 += gauss_fun(j, A_4[nu], Mu1_4[nu], Mu2_4[nu], Mu3_4[nu], Mu4_4[nu])* j**3
s4 += gauss_fun(j, A_4[nu], Mu1_4[nu], Mu2_4[nu], Mu3_4[nu], Mu4_4[nu])* j**4
if s0==s0:
mu0_ = s0
mu1_ = s1/s0
mu2_ = s2/s0 - mu1_**2
mu3_ = s3/s0 - 3*mu2_*mu1_ - mu1_**3
mu4_ = s4/s0 - 4*mu3_*mu1_ - 3*mu2_**2 - 6*mu2_*mu1_**2 - mu1_**4
step_ini_0 = 0.1
step_ini_1 = 0.1
step_ini_2 = 0.1
step_ini_3 = 0.05
step_ini_4 = 0.008
A_4[nu] += step_ini_0*(mu0[nu] - mu0_)
Mu1_4[nu] += step_ini_1*(mu1[nu] - mu1_)
Mu2_4[nu] += step_ini_2*(mu2[nu] - mu2_)
Mu3_4[nu] += step_ini_3*(mu3[nu] - mu3_)
Mu4_4[nu] += step_ini_4*(mu4[nu] - mu4_)
else:
A_4[nu] += 0
Mu1_4[nu] += 0
Mu2_4[nu] += 0
Mu3_4[nu] += 0
Mu4_4[nu] += 0
```
```python
P_sectors = prob_sectors_exact(M)
k_ex = int(m)
for nu in range(k_ex,k_ex+1):
iN = 100
di = (m+0.1)/iN
line1, = plt.plot([i for i in range(m+1)], [P_sectors[j,nu]/Z(M) for j in range(m+1)],'.' ,label = 'exact')
line2, = plt.plot([i*di for i in range(iN)],[gauss_fun(i*di,A_2[nu], Mu1_2[nu], Mu2_2[nu]) for i in range(iN)],':', label = '2nd order')
line3, = plt.plot([i*di for i in range(iN)],[gauss_fun(i*di,A_3[nu], Mu1_3[nu], Mu2_3[nu], Mu3_3[nu]) for i in range(iN)],'-.', label = '3rd order')
line4, = plt.plot([i*di for i in range(iN)],[gauss_fun(i*di,A_4[nu], Mu1_4[nu], Mu2_4[nu], Mu3_4[nu], Mu4_4[nu]) for i in range(iN)],'--', label = '4th order')
plt.yscale('log')
plt.title('k='+str(2*k_ex), size = 15)
plt.legend(handles=[line1, line2,line3,line4], loc='upper left', prop={'size':13}, title_fontsize = '13' )
plt.xlabel('n', fontsize=20)
plt.ylabel('P', fontsize=20)
plt.xlim([4,8.3])
plt.ylim([10**(-3),10**(-1.5)])
plt.show()
```
```python
x_start = int(Nu*0.05)
x_fin = int(Nu/2)
line1, = plt.plot([2*i for i in range(x_start,x_fin, 1)], [P_sectors[m,j]/Z(M) for j in range(x_start,x_fin)],'.' ,label = 'exact')
line2, = plt.plot([2*i for i in range(x_start,x_fin, 1)], [gauss_fun(m, A_2[j], Mu1_2[j], Mu2_2[j]) for j in range(x_start,x_fin, 1)] , linewidth=2, linestyle = ':', label = '2nd order')
line3, = plt.plot([2*i for i in range(x_start,x_fin, 1)], [gauss_fun(m, A_3[j], Mu1_3[j], Mu2_3[j], Mu3_3[j]) for j in range(x_start,x_fin, 1)] , linestyle = '-.', label = '3rd order')
line4, = plt.plot([2*i for i in range(x_start,x_fin, 1)], [gauss_fun(m, A_4[z], Mu1_4[z], Mu2_4[z], Mu3_4[z], Mu4_4[z]) for z in range(x_start,x_fin, 1)] , linestyle = '--', label = '4th order')
plt.yscale('log')
plt.legend(handles=[line1, line2,line3,line4], prop={'size':13}, loc='lower center', title_fontsize = '13' )
plt.xlabel('k',fontsize=20)
plt.ylabel('P',fontsize=20)
plt.ylim([10**(-4),10**(-2.2)])
plt.show()
```
```python
# Warning!
# The choice of k_0 and k_cut (the cut-off over the number of sectors) is heuristic.
# The 3rd- and 4th-order approximations may give 'nan' results due to the form of the fitted function.
cut_off_ = np.zeros(Nu)
for i in range(Nu):
if gauss_fun(m, A_4[i],Mu1_4[i], Mu2_4[i],Mu3_4[i],Mu4_4[i] ) == gauss_fun(m, A_4[i],Mu1_4[i], Mu2_4[i],Mu3_4[i],Mu4_4[i] ): # excludes 'nan'
cut_off_[i] = gauss_fun(m, A_4[i],Mu1_4[i], Mu2_4[i],Mu3_4[i],Mu4_4[i] )
k_max = list(cut_off_).index(np.max(cut_off_))
# You can vary 'accur' to obtain more precise results
accur = 1000
# Let's find the left cut off over sectors
k_0 = 1
for j in range(int(Nu/10)):
if gauss_fun(m, A_4[j],Mu1_4[j], Mu2_4[j],Mu3_4[j], Mu4_4[j] ) < 10**(-12): # > 10**(-12) and A_2[i,j]!= 0 :
k_0 = j
# Let's find the right cut off over sectors
p_4 = 0
i = k_max
while cut_off_[k_max]/cut_off_[i] < accur and i < Nu - 1:
p_4 += gauss_fun(m, A_4[k], Mu1_4[k], Mu2_4[k], Mu3_4[k], Mu4_4[k])
i += 1
k_cut = i
# The probability computation for different orders of approximation
p_2 = 0
p_3 = 0
p_4 = 0
for k in range(k_0, k_cut):
p_2 += gauss_fun(m, A_2[k], Mu1_2[k], Mu2_2[k])
p_3 += gauss_fun(m, A_3[k], Mu1_3[k], Mu2_3[k], Mu3_3[k])
p_4 += gauss_fun(m, A_4[k], Mu1_4[k], Mu2_4[k], Mu3_4[k], Mu4_4[k])
p_4_new = p_4
if p_4 > 10*p_2 or (p_4 == p_4)==False:
while p_4_new > 10*p_2 and (p_4 == p_4)==False:
k_cut -= 1
p_4_new = 0
for k in range(k_0, k_cut):
p_4_new += gauss_fun(m, A_4[k], Mu1_4[k], Mu2_4[k], Mu3_4[k], Mu4_4[k])
p_4 = p_4_new
p_3 = 0
for k in range(k_0, k_cut):
p_3 += gauss_fun(m, A_3[k], Mu1_3[k], Mu2_3[k], Mu3_3[k])
print('K_cut_off = (' , k_0*2, ',', k_cut*2,')\n')
print('P_exact = ', "{:.3e}".format(P_list_ours[len(unique_permutations)-1]), '\n',' P_2 = ' , "{:.3e}".format(p_2), '\n', ' P_3 = ' , "{:.3e}".format(p_3), '\n',' P_4 = ' , "{:.3e}".format(p_4), '\n',)
```
K_cut_off = ( 4 , 98 )
P_exact = 7.251e-02
P_2 = 7.322e-02
P_3 = 7.314e-02
P_4 = 7.202e-02
```python
```
|
e95ac8c071c1a59129187b2a6822cb6616563813
| 241,339 |
ipynb
|
Jupyter Notebook
|
demo.ipynb
|
stacy8popova/PyGBSThr
|
d0a6ba4ad99dc23d1cfad581926b098b6a9222ed
|
[
"MIT"
] | 1 |
2021-11-29T08:59:18.000Z
|
2021-11-29T08:59:18.000Z
|
demo.ipynb
|
stacy8popova/PyGBSThr
|
d0a6ba4ad99dc23d1cfad581926b098b6a9222ed
|
[
"MIT"
] | null | null | null |
demo.ipynb
|
stacy8popova/PyGBSThr
|
d0a6ba4ad99dc23d1cfad581926b098b6a9222ed
|
[
"MIT"
] | null | null | null | 39.492554 | 305 | 0.431381 | true | 23,322 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.817574 | 0.795658 | 0.65051 |
__label__krc_Cyrl
| 0.274976 | 0.349683 |
# What's up with polynomial regression?
Why do we have to use this `PolynomialFeatures` thing from scikit? What does it do?
Let's imagine we have some data from which we know the true function we want our model to learn. This function is:
\begin{align}
y = 3x + 1x^2 -2
\end{align}
```python
import numpy
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
x_values = numpy.array([[-2], [-1], [0], [1], [2]])
y_values = numpy.array([[-4], [-4], [-2], [2], [8]])
```
```python
x_values
```
array([[-2],
[-1],
[ 0],
[ 1],
[ 2]])
Polynomial regression extends the linear model by adding extra predictors, obtained by raising each of the original predictors to a power. Our original predictors were:
```
[-2, -1, 0, 1, 2]
```
```python
transformer = PolynomialFeatures(degree=2)
x_values_transformed = transformer.fit_transform(x_values)
```
```python
x_values_transformed
```
array([[ 1., -2., 4.],
[ 1., -1., 1.],
[ 1., 0., 0.],
[ 1., 1., 1.],
[ 1., 2., 4.]])
What the transformer has done for each value of x is to expand it from a single number into an array of three numbers:
1. The bias (always 1.0, the feature in which all polynomial powers are zero)
2. The original value
3. The value, squared
It has *extended the linear model* by adding extra predictors.
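To see what this means concretely, the same feature matrix can be built by hand with NumPy. The snippet below is a quick sketch (reusing the `x_values` and `x_values_transformed` arrays from above): it stacks a bias column, the original values, and their squares, and confirms the result matches the transformer's output.

```python
# A minimal check: build the degree-2 feature matrix manually and compare it
# with the output of PolynomialFeatures.
manual_features = numpy.hstack([
    numpy.ones_like(x_values, dtype=float),  # bias column (x**0)
    x_values.astype(float),                  # original values (x**1)
    x_values.astype(float) ** 2,             # squared values (x**2)
])
print(numpy.allclose(manual_features, x_values_transformed))  # expect: True
```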
We can now hand the transformed values off to the linear regression model's ``fit`` method and it will understand that it needs to fit a second-degree polynomial instead of a straight line.
```python
model = LinearRegression()
model.fit(x_values_transformed,y_values)
```
LinearRegression()
Now we can predict. In the same way we transformed our `x` inputs with the `PolynomialFeatures` class before training the model, we will need to transform any `x` values for which we want to predict a `y`:
```python
# Values of x for which we want to predict a y
x_pred = [[3], [-3]]
x_pred_transformed = transformer.fit_transform(x_pred)
model.predict(x_pred_transformed)
```
array([[16.],
[-2.]])
Are these correct?
\begin{align}
y = (3 * 3) + 3^2 -2 \\
y = (3 * -3) + (-3)^2 -2
\end{align}
Yes, they are! Our model correctly learned the function! If you still aren't convinced from just two examples that the model has correctly learned the true function:
```python
intercept = model.intercept_[0]
slope = model.coef_[0]
print(f"Intercept is: {intercept}")
print(f"Slope is: {slope}")
```
Intercept is: -2.000000000000004
Slope is: [0. 3. 1.]
And our true function, again:
\begin{align}
y = 3x + 1x^2 -2
\end{align}
```python
```
|
44b4249dfb1cb69b0751cbd16b3b094d7f0e845b
| 5,803 |
ipynb
|
Jupyter Notebook
|
cmsc_210/examples/lecture_25/notebooks/PolynomialFeatureTransforms.ipynb
|
mazelife/cmsc-210
|
dbaa1604ef49bcfe5a70e09c17fbd243a8b80220
|
[
"MIT"
] | null | null | null |
cmsc_210/examples/lecture_25/notebooks/PolynomialFeatureTransforms.ipynb
|
mazelife/cmsc-210
|
dbaa1604ef49bcfe5a70e09c17fbd243a8b80220
|
[
"MIT"
] | 5 |
2022-01-16T23:30:12.000Z
|
2022-01-30T23:03:21.000Z
|
cmsc_210/examples/lecture_25/notebooks/PolynomialFeatureTransforms.ipynb
|
mazelife/cmsc-210
|
dbaa1604ef49bcfe5a70e09c17fbd243a8b80220
|
[
"MIT"
] | null | null | null | 23.589431 | 211 | 0.513527 | true | 757 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.952574 | 0.879147 | 0.837452 |
__label__eng_Latn
| 0.988029 | 0.784016 |
<a href="https://colab.research.google.com/github/john-s-butler-dit/Numerical-Analysis-Python/blob/master/Chapter%2001%20-%20Euler%20Methods/102_Euler_method_with_Theorems_nonlinear_Growth_function.ipynb" target="_parent"></a>
# Euler Method with Theorems Applied to Non-Linear Population Equations
The more general form of a first order Ordinary Differential Equation is:
\begin{equation}
y^{'}=f(t,y).
\end{equation}
This can be solved analytically by integrating both sides but this is not straight forward for most problems.
Numerical methods can be used to approximate the solution at discrete points.
In this notebook we will work through the Euler method for two initial value problems:
1. A non-linear sigmoidal population equation
\begin{equation}
y^{'}=0.2 y− 0.01 y^2 ,
\end{equation}
2. A non-linear sigmoidal population differential equation with a wiggle,
\begin{equation}
y^{'}=0.2 y-0.01 y^2+\sin(2\pi t).
\end{equation}
## Euler method
The simplest one step numerical method is the Euler Method named after the most prolific of mathematicians [Leonhard Euler](https://en.wikipedia.org/wiki/Leonhard_Euler) (15 April 1707 – 18 September 1783) .
The general Euler formula for to the first order differential equation
\begin{equation}
y^{'} = f(t,y),
\end{equation}
approximates the derivative at time point $t_i$,
\begin{equation}
y^{'}(t_i) \approx \frac{w_{i+1}-w_i}{t_{i+1}-t_{i}},
\end{equation}
where $w_i$ is the approximate solution of $y$ at time $t_i$.
This substitution changes the differential equation into a __difference__ equation of the form
\begin{equation}
\frac{w_{i+1}-w_i}{t_{i+1}-t_{i}}=f(t_i,w_i).
\end{equation}
Assuming uniform stepsize $t_{i+1}-t_{i}$ is replaced by $h$, re-arranging the equation gives
\begin{equation}
w_{i+1}=w_i+hf(t_i,w_i),
\end{equation}
This can be read as: the future value $w_{i+1}$ is approximated by the present value $w_i$ plus the time step $h$ multiplied by the input to the system, $f(t_i,w_i)$.
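For concreteness, the update rule can be written as a short generic helper (a sketch only; the notebook below inlines this loop directly rather than calling a function like this):

```python
import numpy as np

def euler(f, a, b, N, w0):
    "Generic Euler method: approximate the solution of w' = f(t, w) on [a, b] with N steps and w(a) = w0."
    h = (b - a) / N                          # uniform step size
    t = np.linspace(a, b, N + 1)             # discrete time points t_0, ..., t_N
    w = np.zeros(N + 1)
    w[0] = w0
    for i in range(N):
        w[i + 1] = w[i] + h * f(t[i], w[i])  # w_{i+1} = w_i + h f(t_i, w_i)
    return t, w
```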
```python
## Library
import numpy as np
import math
%matplotlib inline
import matplotlib.pyplot as plt # side-stepping mpl backend
import matplotlib.gridspec as gridspec # subplots
import warnings
import pandas as pd
warnings.filterwarnings("ignore")
```
# Non-linear population equation
The general form of the non-linear sigmoidal population growth differential equation is:
\begin{equation}
y^{'}=\alpha y-\beta y^2,
\end{equation}
where $\alpha$ is the growth rate and $\beta$ is the death rate. The initial population at time $ a $ is
\begin{equation}
y(a)=A,
\end{equation}
\begin{equation}
a\leq t \leq b.
\end{equation}
## Specific non-linear population equation
Given the growth rate $$\alpha=0.2,$$ and death rate $$\beta=0.01,$$ giving the specific differential equation,
\begin{equation}
y^{'}=0.2 y-0.01 y^2,
\end{equation}
The initial population at time $2000$ is
\begin{equation}
y(2000)=6,
\end{equation}
we are interested in the time period
\begin{equation}
2000\leq t \leq 2020.
\end{equation}
## Initial Condition
To get a specific solution to a first order initial value problem, an __initial condition__ is required.
For our population problem the initial population is 6 billion people:
\begin{equation}
y(2000)=6.
\end{equation}
## General Discrete Interval
The continuous time interval $a\leq t \leq b$ is discretised into $N+1$ points separated by a constant stepsize
\begin{equation}
h=\frac{b-a}{N}.
\end{equation}
## Specific Discrete Interval
Here the interval is $2000\leq t \leq 2020$ with $N=200$
\begin{equation}
h=\frac{2020-2000}{200}=0.1,
\end{equation}
this gives the 201 discrete points with stepsize h=0.1:
\begin{equation}
t_0=2000, \ t_1=2000.1, \ ..., \ t_{200}=2020,
\end{equation}
which is generalised to
\begin{equation}
t_i=2000+0.1\, i, \ \ \ i=0,1,...,200.
\end{equation}
The plot below illustrates the discrete time steps from 2000 to 2002.
```python
### Setting up time
t_end=2020.0
t_start=2000.0
N=200
h=(t_end-t_start)/(N)
time=np.arange(t_start,t_end+0.01,h)
fig = plt.figure(figsize=(10,4))
plt.plot(time,0*time,'o:',color='red')
plt.title('Illustration of discrete time points for h=%s'%(h))
plt.xlim((2000,2002))
plt.plot();
```
## Numerical approximation of Population growth
The differential equation is transformed using the Euler method into a difference equation of the form
\begin{equation}
w_{i+1}=w_{i}+h (\alpha w_i-\beta w_i\times w_i).
\end{equation}
This produces a series of approximate values $w_0, \ w_1, \ ..., w_{N}$.
For the specific example of the population equation the difference equation is,
\begin{equation}
w_{i+1}=w_{i}+0.1 (0.2 w_i-0.01 w_i\times w_i),
\end{equation}
where $i=0,1,2,...,199$, and $w_0=6$. From this initial condition the series is approximated.
```python
w=np.zeros(N+1)
w[0]=6
for i in range (0,N):
w[i+1]=w[i]+h*(0.2*w[i]-0.01*w[i]*w[i])
```
The plot below shows the Euler approximation $w$ in blue squares.
```python
fig = plt.figure(figsize=(10,4))
plt.plot(time,w,'s:',color='blue',label='Euler')
plt.xlim((min(time),max(time)))
plt.xlabel('time')
plt.legend(loc='best')
plt.title('Euler solution')
plt.plot();
```
### Table
The table below shows the iteration $i$, the discrete time point t[i], and the Euler approximation w[i] of the solution $y$ at time point t[i] for the non-linear population equation.
```python
d = {'time t[i]': time[0:10], 'Euler (w_i) ':w[0:10]}
df = pd.DataFrame(data=d)
df
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>time t[i]</th>
<th>Euler (w_i)</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>2000.0</td>
<td>6.000000</td>
</tr>
<tr>
<th>1</th>
<td>2000.1</td>
<td>6.084000</td>
</tr>
<tr>
<th>2</th>
<td>2000.2</td>
<td>6.168665</td>
</tr>
<tr>
<th>3</th>
<td>2000.3</td>
<td>6.253986</td>
</tr>
<tr>
<th>4</th>
<td>2000.4</td>
<td>6.339953</td>
</tr>
<tr>
<th>5</th>
<td>2000.5</td>
<td>6.426557</td>
</tr>
<tr>
<th>6</th>
<td>2000.6</td>
<td>6.513788</td>
</tr>
<tr>
<th>7</th>
<td>2000.7</td>
<td>6.601634</td>
</tr>
<tr>
<th>8</th>
<td>2000.8</td>
<td>6.690085</td>
</tr>
<tr>
<th>9</th>
<td>2000.9</td>
<td>6.779130</td>
</tr>
</tbody>
</table>
</div>
## Numerical Error
With a numerical solution there are two types of error:
* local truncation error at one time step;
* global error which is the propagation of local error.
### Derivation of Euler Local truncation error
The left hand side of an initial value problem, $\frac{dy}{dt}$, is approximated using __Taylor's theorem__ by expanding about a point $t_0$, giving:
\begin{equation}
y(t_1) = y(t_0)+(t_1-t_0)y^{'}(t_0) + \frac{(t_1-t_0)^2}{2!}y^{''}(\xi), \ \ \ \ \ \ \xi \in [t_0,t_1].
\end{equation}
Rearranging and letting $h=t_1-t_0$ the equation becomes
\begin{equation}
y^{'}(t_0)=\frac{y(t_1)-y(t_0)}{h}-\frac{h}{2}y^{''}(\xi).
\end{equation}
From this the local truncation error is
\begin{equation}
\tau \leq \frac{h}{2}M,
\end{equation}
where $|y^{''}(t)| \leq M$.
#### Derivation of Euler Local truncation error for the Population Growth
As the exact solution $y$ is unknown we cannot get an exact estimate of the second derivative
\begin{equation}
y'(t)=0.2 y-0.01 y^2,
\end{equation}
differentiate with respect to $t$,
\begin{equation}
y''(t)=0.2 y'-0.01 (2yy'),
\end{equation}
subbing the original equation gives
\begin{equation}
y''(t)=0.2 (0.2 y-0.01 y^2)-0.01 \big(2y(0.2 y-0.01 y^2)\big),
\end{equation}
which expresses the second derivative as a function of the exact solution $y$. This is still a problem as the value of $y$ is unknown; to sidestep this issue we assume the population satisfies $0\le y \le 20$, which gives
\begin{equation}
\max|y''|=M\leq 6,
\end{equation}
this gives a local truncation error for $h=0.1$ for our non-linear equation of
\begin{equation}
\tau=\frac{h}{2}6=0.3.
\end{equation}
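As a quick numerical sanity check on the bound $M\leq 6$, the expression for $y''$ can be evaluated over the assumed range $0\le y \le 20$ (a sketch; it simply confirms that the stated $M$ is a valid, if loose, upper bound):

```python
# evaluate y'' = 0.2(0.2y - 0.01y^2) - 0.01(2y(0.2y - 0.01y^2)) on 0 <= y <= 20
y_vals = np.linspace(0, 20, 2001)
y_prime_vals = 0.2*y_vals - 0.01*y_vals**2
y_double_prime_vals = 0.2*y_prime_vals - 0.01*(2*y_vals*y_prime_vals)
print("max |y''| on [0,20] =", np.max(np.abs(y_double_prime_vals)))  # well below M = 6
```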
```python
M=6
fig = plt.figure(figsize=(10,4))
plt.plot(time[0:2],0.1*M/2*np.ones(2),'v:'
,color='black',label='Upper Local Truncation')
plt.xlabel('time')
plt.ylim([0,0.1])
plt.legend(loc='best')
plt.title('Local Truncation Error')
plt.plot();
```
### Global truncation error for the population equation
For the population equation specific values $L$ and $M$ can be calculated.
In this case $f(t,y)=0.2y-0.01y^2$ is continuous and satisfies a Lipschitz Condition with constant
\begin{equation}
\left|\frac{\partial f(t,y)}{\partial y}\right|\leq L,
\end{equation}
\begin{equation}
\left|\frac{\partial (0.2y -0.01 y^2)}{\partial y}\right| = |0.2-0.01(2y)| \leq 0.8,
\end{equation}
on $D=\{(t,y)|2000\leq t \leq 2020, 0 < y < 20 \}$ and that a constant $M$
exists with the property that
\begin{equation}
|y^{''}(t)|\leq M\leq 6.
\end{equation}
__Specific Theorem Global Error__
Let $y(t)$ denote the unique solution of the Initial Value Problem
\begin{equation}
y^{'}=0.2 y-0.01 y^2, \ \ \ 2000\leq t \leq 2020, \ \ \ y(0)=6,
\end{equation}
and $w_0,w_1,...,w_N$ be the approx generated by the Euler method for some
positive integer N. Then for $i=0,1,...,N$ the error is:
\begin{equation}
|y(t_i)-w_i| \leq \frac{6 h}{2\times 0.8}|e^{0.8(t_i-2000)}-1|.
\end{equation}
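The right-hand side of this bound is easy to evaluate on the discrete time points (a sketch reusing `h` and `time` from above). Note that the bound grows exponentially in time and is typically far more pessimistic than the error actually observed:

```python
# evaluate the theoretical global error bound at each discrete time point
M_bound = 6.0
L_bound = 0.8
error_bound = (M_bound*h)/(2*L_bound)*np.abs(np.exp(L_bound*(time - 2000)) - 1)
fig = plt.figure(figsize=(10,4))
plt.plot(time, error_bound, ':', color='black', label='global error bound')
plt.xlabel('time')
plt.legend(loc='best')
plt.title('Theoretical global error bound for the Euler method')
plt.grid()
plt.show()
```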
# Non-linear population equation with a temporal oscillation
Given the specific population differential equation with a wiggle,
\begin{equation}
y^{'}=0.2 y-0.01 y^2+\sin(2\pi t),
\end{equation}
with the initial population at time $2000$ is
\begin{equation}
y(2000)=6,
\end{equation}
\begin{equation}
2000\leq t \leq 2020.
\end{equation}
For the specific example of the population equation the difference equation is
\begin{equation}
w_{i+1}=w_{i}+0.1 (0.2 w_i-0.01 w_i\times w_i+\sin(2 \pi t_i)),
\end{equation}
for $i=0,1,...,199$,
where $w_0=6$. From this initial condition the series is approximated.
The figure below shows the discrete solution.
```python
w=np.zeros(N+1)
w[0]=6
for i in range (0,N):
w[i+1]=w[i]+h*(0.2*w[i]-0.01*w[i]*w[i]+np.sin(2*np.pi*time[i]))
fig = plt.figure(figsize=(10,4))
plt.plot(time,w,'s:',color='blue',label='Euler')
plt.xlim((min(time),max(time)))
plt.xlabel('time')
plt.legend(loc='best')
plt.title('Euler solution')
plt.plot();
```
### Table
The table below shows the iteration $i$, the discrete time point t[i], and the Euler approximation w[i] of the solution $y$ at time point t[i] for the non-linear population equation with a temporal oscillation.
```python
d = {'time t_i': time[0:10], 'Euler (w_i) ':w[0:10]}
df = pd.DataFrame(data=d)
df
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>time t_i</th>
<th>Euler (w_i)</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>2000.0</td>
<td>6.000000</td>
</tr>
<tr>
<th>1</th>
<td>2000.1</td>
<td>6.084000</td>
</tr>
<tr>
<th>2</th>
<td>2000.2</td>
<td>6.227443</td>
</tr>
<tr>
<th>3</th>
<td>2000.3</td>
<td>6.408317</td>
</tr>
<tr>
<th>4</th>
<td>2000.4</td>
<td>6.590522</td>
</tr>
<tr>
<th>5</th>
<td>2000.5</td>
<td>6.737676</td>
</tr>
<tr>
<th>6</th>
<td>2000.6</td>
<td>6.827034</td>
</tr>
<tr>
<th>7</th>
<td>2000.7</td>
<td>6.858187</td>
</tr>
<tr>
<th>8</th>
<td>2000.8</td>
<td>6.853211</td>
</tr>
<tr>
<th>9</th>
<td>2000.9</td>
<td>6.848203</td>
</tr>
</tbody>
</table>
</div>
```python
```
|
405e935a2f5e3e8f618500e437847118f60406f4
| 68,768 |
ipynb
|
Jupyter Notebook
|
Chapter 01 - Euler Methods/102_Euler_method_with_Theorems_nonlinear_Growth_function.ipynb
|
john-s-butler-dit/Numerical-Analysis-Python
|
edd89141efc6f46de303b7ccc6e78df68b528a91
|
[
"MIT"
] | 69 |
2019-09-05T21:39:12.000Z
|
2022-03-26T14:00:25.000Z
|
Chapter 01 - Euler Methods/102_Euler_method_with_Theorems_nonlinear_Growth_function.ipynb
|
Zak2020/Numerical-Analysis-Python
|
edd89141efc6f46de303b7ccc6e78df68b528a91
|
[
"MIT"
] | null | null | null |
Chapter 01 - Euler Methods/102_Euler_method_with_Theorems_nonlinear_Growth_function.ipynb
|
Zak2020/Numerical-Analysis-Python
|
edd89141efc6f46de303b7ccc6e78df68b528a91
|
[
"MIT"
] | 13 |
2021-06-17T15:34:04.000Z
|
2022-01-14T14:53:43.000Z
| 90.010471 | 11,118 | 0.762113 | true | 4,206 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.810479 | 0.865224 | 0.701246 |
__label__eng_Latn
| 0.897841 | 0.467561 |
```python
%pylab inline
%config InlineBackend.figure_format = 'retina'
from ipywidgets import interact
```
# Question 1
The Lagrange interpolating polynomial is
$$ p(x) = \sum_{j=0}^{n}y_j L_j(x).$$
Show that the identity ,
$$ \sum_{j=0}^{n} L_j(x) = 1,$$
is true for all $x$.
**Hint: The answer requires no algebra. Use the fact that $f(x) = 1$ is a polynomial of degree zero and a Lagrange polynomial.**
------------------------------------------------------------------
# Question 2
## A.
Write a function for computing the barycentric weights
$$w_j = \left[\prod_{\substack{i=0 \\\\ i\neq j}}^{n}(x_j - x_i)\right]^{-1}.$$
Your function should take as input a vector containing the nodes $x_j$ and output the weights $w_j$ . Call your function `baryfit`. Write another function for evaluating the barycentric interpolant
$p(x).$
This function should take as input a vector containing the nodes $x_j$, a vector containing the corresponding barycentric weights $w_j$ (generated from your `baryfit` function), a vector containing the corresponding function values $y_j = f(x_j)$, and the location (or a vector of locations) of where the interpolant should be evaluated. The output of the function should be the value of the interpolating polynomial at all the evaluation points. Call this function `baryeval`.
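A minimal sketch of what such a pair of functions might look like is shown below. This is only an illustration of the formulas above (using the second, "true" form of the barycentric interpolation formula) and not a complete answer; in particular it assumes the evaluation points never coincide exactly with a node.

```python
import numpy as np

def baryfit(x):
    "Barycentric weights w_j = 1 / prod_{i != j} (x_j - x_i) for the nodes x."
    n = len(x)
    w = np.ones(n)
    for j in range(n):
        for i in range(n):
            if i != j:
                w[j] /= (x[j] - x[i])
    return w

def baryeval(x, w, y, xe):
    "Evaluate p(xe) = sum_j (w_j y_j / (xe - x_j)) / sum_j (w_j / (xe - x_j))."
    xe = np.atleast_1d(np.asarray(xe, dtype=float))
    C = 1.0 / (xe[:, None] - np.asarray(x)[None, :])  # assumes xe never hits a node exactly
    return (C @ (w * y)) / (C @ w)
```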
------------------------------------------------------------------
## B.
Using your `baryfit` function from part a, generate the barycentric weights for the following two sets of nodes:
1. $x_j = -1 + \frac{2j}{8}$, $j=0,1,\ldots, 8$
2. $x_j = -\cos(\frac{j\pi}{8})$, $j=0,1,\ldots, 8$
Plot the values of the weights versus the corresponding values of the nodes (i.e. plot $(x_j, w_j)$) for each of the node sets. Comment on the results.
------------------------------------------------------------------
## C.
For the two node sets from part B, use your `baryeval` function to evaluate the 8th degree polynomial interpolant of the function $f(x) = \vert x \vert$ at 101 equally spaced points between $[-1, 1]$. Plot the error ($p(x) − \vert x \vert$) in the polynomial interpolant at these evaluation points for each of the two node sets. Which node set seems to produce the best result? What criteria did you use to determine what ‘best’ means?
------------------------------------------------------------------
## D.
For certain sets of nodes $x_j$, it is possible to give explicit formulas for the barycentric weights $w_j$. The easiest case is when the nodes are equally spaced between $[−1,1]$, (i.e., $x_j =−1+\frac{2j}{n}$, $j=0,1,...,n$). Show that for these nodes
$$w_j = \frac{\left(\frac{n}{2}\right)^n(-1)^{n-j}}{n!}\binom{n}{j}$$
Note that since $w_j$ appear both in the numerator and denominator of the barycentric formula for $p(x)$, any factors common to all $w_j$ can be factored out. Thus, we can reduce the above expression for the barycentric weights to $w_j =(−1)^j\binom{n}{j}$.
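The reduced form is easy to check numerically against the product definition for a small $n$ (a quick sketch; the two sets of weights should agree up to a common scalar factor):

```python
import numpy as np
from math import comb

n = 8
x = np.array([-1 + 2*j/n for j in range(n + 1)])
# weights from the product definition
w_prod = np.array([1.0/np.prod([x[j] - x[i] for i in range(n + 1) if i != j])
                   for j in range(n + 1)])
# reduced closed form (-1)^j * binomial(n, j)
w_closed = np.array([(-1)**j * comb(n, j) for j in range(n + 1)])
# the ratio should be constant across j
print(w_prod / w_closed)
```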
------------------------------------------------------------------
```python
```
# Question 3
Suppose you are given the following experimental measurements:
\begin{align}
x_i &\qquad f(x_i) \\\\
0.08 &\qquad 0.6739 \\\\
0.46 &\qquad 2.4306 \\\\
1.00 &\qquad 0.0000 \\\\
1.51 &\qquad -1.0621 \\\\
2.05 &\qquad 0.0986
\end{align}
## A.
Write a python script (using the code from Q2) that approximates $f(0.75)$ using the following Lagrange polynomials:
\begin{align}
&\text{$P_1$ using $x_0 = 0.46$, and $x_1 = 1.00$;} \\\\
&\text{$P_2$ using $x_0 = 0.46$, $x_1 = 1.00$, and $x_2 = 1.51$;} \\\\
&\text{$P_3$ using $x_0 = 0.08$, $x_1 = 0.46$, $x_2 = 1.00$, and $x_3 = 1.51$;} \\\\
&\text{$P_4$} \text{ using all five points}.
\end{align}
------------------------------------------------------------------
## B.
The information in the above table corresponds to the function
\begin{equation}
f(x) = \sin(\pi x)e^{\cos(x)}.
\end{equation}
What is the absolute error of each of your approximations $P_i(0.75)$? Which Lagrange polynomial was the most accurate? Is this the result you expected?
------------------------------------------------------------------
```python
```
|
9111b6a787b67440bf9f48ffd9a64cf7a21794ab
| 5,863 |
ipynb
|
Jupyter Notebook
|
Homework 5 Problems.ipynb
|
newby-jay/MATH381-Fall2021-JupyterNotebooks
|
9181fb6e154081de26fb267e0794a67f60ae11a0
|
[
"Apache-2.0"
] | null | null | null |
Homework 5 Problems.ipynb
|
newby-jay/MATH381-Fall2021-JupyterNotebooks
|
9181fb6e154081de26fb267e0794a67f60ae11a0
|
[
"Apache-2.0"
] | null | null | null |
Homework 5 Problems.ipynb
|
newby-jay/MATH381-Fall2021-JupyterNotebooks
|
9181fb6e154081de26fb267e0794a67f60ae11a0
|
[
"Apache-2.0"
] | null | null | null | 41.288732 | 486 | 0.526181 | true | 1,211 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.800692 | 0.950411 | 0.760986 |
__label__eng_Latn
| 0.981189 | 0.606359 |
```
# default_exp oneDim
```
```
#hide
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib.cm as cm
plt.rcParams['figure.figsize'] = (10,6)
import sympy; sympy.init_printing()
# code for displaying matrices nicely
def display_matrix(m):
display(sympy.Matrix(m))
```
# oneDim
> Code for a 1-D problem.
```
#hide
from nbdev.showdoc import *
```
# 1 dimensional case (ODE)
We consider the following 1-D problem:
$$-\frac{d}{dx}\left(p(x)\frac{du(x)}{dx}\right)=f(x) \hspace{0.5cm}\forall x\in[0,1]$$
$$u(0)=u(1)=0$$
where $f$ is a random forcing term, assumed to be a GP in this work.
## Variational formulation
The variational formulation is given by:
$$a(u,v)=L(v)$$
where:
$$a(u,v)=\int_{0}^{1}pu^{\prime}v^{\prime}dx$$
and
$$L(v)=\int_{0}^{1}fvdx$$
We will make the following choices for $p,f$:
$$p(x)=1$$
$$f\sim\mathcal{G}\mathcal{P}(\bar{f},k_{f})$$
$$\bar{f}(x)=1$$
$$ k_{f}(x,y) = \sigma_f^{2}\exp\left(-\frac{|x-y|^2}{2l_f^2}\right)$$
$$ \sigma_{f} = 0.1$$
$$ l_f = 0.4 $$
## Difference between true prior mean and statFEM prior mean
Since the mean of $f$ is $\bar{f}(x)=1$ we have that the true mean of the solution $u$ is the solution of the ODE with forcing term set to the constant function 1. This has the exact analytic solution:
$$u(x)=\frac{1}{2}x(1-x)$$
as can be directly verified.
The FEM approximation to the solution distribution has mean $\boldsymbol{\Phi}(x)^{*}A^{-1}\bar{F}$ which is the solution to the approximate variational problem obtained by replacing $f$ with $\bar{f}$ in the linear form $L$.
We will utilise FEniCS to compute the error between these two as a function of $h$ the mesh size. To do this we first create a function `mean_assembler` which will assemble the mean for the statFEM prior
```
#export
from dolfin import *
import numpy as np
from scipy import integrate
from scipy.spatial.distance import cdist
from scipy.linalg import sqrtm
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import spsolve
from scipy.interpolate import interp1d
from joblib import Parallel, delayed
import multiprocessing
# code to assemble the mean for a given mesh size
def mean_assembler(h,f_bar):
"This function assembles the mean for the statFEM prior for our 1-D problem."
# get size of the grid
J = int(np.round(1/h))
# set up the mesh and function space for FEM
mesh = UnitIntervalMesh(J)
V = FunctionSpace(mesh,'Lagrange',1)
# set up boundary condition
def boundary(x, on_boundary):
return on_boundary
bc = DirichletBC(V, 0.0, boundary)
# set up the functions p and f
p = Constant(1.0)
f = f_bar
# set up the bilinear form for the variational problem
u = TrialFunction(V)
v = TestFunction(V)
a = inner(p*grad(u),grad(v))*dx
# set up the linear form
L = f*v*dx
# solve the variational problem
μ = Function(V)
solve(a == L, μ, bc)
return μ
```
`mean_assembler` takes in the mesh size `h` and the mean function `f_bar` for the forcing and computes the mean of the approximate statFEM prior, returning this as a FEniCS function.
> Important: `mean_assembler` requires `f_bar` to be represented as a FEniCS function/expression/constant.
Let's check that this is working:
```
h = 0.15
f_bar = Constant(1.0)
μ = mean_assembler(h,f_bar)
μ
```
```
# check the type of μ
assert type(μ) == function.function.Function
```
As explained above the true mean is the function $u(x)=\frac{1}{2}x(1-x)$. Let's check that the approximate mean resembles this by plotting both:
```
#hide_input
# use FEniCS to plot μ
x = np.linspace(0,1,100)
μ_true = 0.5*x*(1-x)
plt.plot(x,μ_true,label='true mean',color='red')
plot(μ,label='FEM approximation',color='blue')
plt.legend()
plt.xlabel(r'$x$')
plt.grid()
plt.show()
```
We can see that the FEM approximation does indeed resemble the true mean!
## Difference between true prior covariance and statFEM prior covariance
The solution $u$ has covariance function $c_u(x,y)$ given by the following expression:
$$c_u(x,y)=\int_{0}^{1}\int_{0}^{1}G(x,w)k_f(w,t)G(t,y)dtdw$$
Where $G(x,y)$ is the Green's function for our problem:
$$G(x,y) = x(1-y)\Theta(y-x) + (1-x)y\Theta(x-y) \quad \forall x,y\in[0,1]$$
(note: $\Theta(x)$ is the Heaviside Step function)
The statFEM covariance can be approximated as follows:
$$c_u^{\text{FEM}}(x,y)\approx\sum_{i,j=1}^{J}\varphi_{i}(x)Q_{ij}\varphi_{j}(y)$$
where $Q=A^{-1}MC_{f}M^{T}A^{-T}$ and where the $\{\varphi_{i}\}_{i=1}^{J}$ are the FE basis functions corresponding to the interior nodes of our domain. $C_f$ is the kernel matrix of $f$ (evaluated on the FEM grid).
The difference between the covariance operators we are interested in computing is the following contribution to the 2-Wasserstein distance between the true solution GP and the approximate FEM GP:
$$d_W(C_1,C_2) = \operatorname{tr} C_1 +\operatorname{tr} C_2-2\operatorname{tr}\sqrt{C_{1}^{1/2}C_{2}C_{1}^{1/2}}$$
where $C_1, C_2$ are the covariance operators corresponding to $c_u$ and $c_u^{\text{FEM}}$ respectively.
The above quantity will be approximated by fixing a fine grid and computing the cov matrices $\Sigma_1, \Sigma_2$ for the cov operators $C_1, C_2$, respectively, on this grid. We will then utilise the approximation:
$d_W(C_1,C_2)\approx \operatorname{tr} \Sigma_1 +\operatorname{tr} \Sigma_2-2\operatorname{tr}\sqrt{\Sigma_{1}^{1/2}\Sigma_{2}\Sigma_{1}^{1/2}}$
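Given two covariance matrices on a common grid, this trace expression can be evaluated directly with `scipy.linalg.sqrtm`. The function below is a small sketch (assuming `Sigma_1` and `Sigma_2` are symmetric positive semi-definite arrays of the same shape):

```
from scipy.linalg import sqrtm
import numpy as np

def wass_cov_distance(Sigma_1, Sigma_2):
    "Approximate the covariance contribution to the 2-Wasserstein distance as defined above."
    # matrix square root of the first covariance matrix
    root_1 = np.real(sqrtm(Sigma_1))
    # square root of Sigma_1^{1/2} Sigma_2 Sigma_1^{1/2}
    cross = np.real(sqrtm(root_1 @ Sigma_2 @ root_1))
    return np.trace(Sigma_1) + np.trace(Sigma_2) - 2.0*np.trace(cross)
```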
Thus, it will be necessary to write code to form the matrices $\Sigma_1,\Sigma_2$ above. The structure of the approximate $c_u^{\text{FEM}}$ will allow us to compute $\Sigma_2$ in a very efficient manner using FEniCS. This is achieved by noting that we can write:
$$c_u^{\text{FEM}}(x,y)\approx\boldsymbol{\phi}(x)^{T}Q\boldsymbol{\phi}(y)$$
where $\boldsymbol{\phi}(x):=\left(\varphi_1(x),\cdots,\varphi_J(x)\right)^{T}$
Written in this form, it is now easy to see that $\Sigma_2$, whose $ij$-th entry is given by $(\Sigma_2)_{ij}=\boldsymbol{\phi}(x_i)^{T}Q\boldsymbol{\phi}(x_j)$, can be expressed as follows:
$$\Sigma_2=\boldsymbol{\Phi}^{T}Q\boldsymbol{\Phi}$$
where $\boldsymbol{\Phi}$ is a $J\times N$ matrix whose $i$th column is given by $\boldsymbol{\phi}(x_i)$ where $\{x_i\}_{i=1}^{N}$ are the grid points.
Thus, provided we can efficiently compute the matrices $\boldsymbol{\Phi}$ and $Q$ with FEniCS, we can efficiently compute the difference between the covariances required.
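As a dense-matrix illustration of these formulas (ignoring sparsity and the FEniCS assembly for a moment; `A`, `M`, `C_f` and `Phi` are assumed here to be plain NumPy arrays on the interior dofs), the computation amounts to:
```
# Illustrative dense sketch only: Q = A^{-1} M C_f M^T A^{-T} and Σ2 = Φ^T Q Φ.
# A, M, C_f are assumed (J-1)x(J-1) arrays on the interior dofs, Phi a (J-1)xN array.
import numpy as np

def dense_Sigma2(A, M, C_f, Phi):
    Q = np.linalg.solve(A, M @ C_f @ M.T)   # A^{-1} (M C_f M^T)
    Q = np.linalg.solve(A, Q.T).T           # right-multiply by A^{-T}
    return Phi.T @ Q @ Phi
```
The FEniCS-based functions below do exactly this, but with sparse matrices and with $\boldsymbol{\Phi}$ assembled from the FE basis functions.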
In order to compute $\Sigma_1$ and the matrix $C_f$ needed for $Q$, we will need to construct a covariance matrix on a grid for a given cov function. We will thus first create a function `kernMat` which assembles the covariance matrix corresponding to the covariance function `k` on a grid `grid`.
```
#export
def kernMat(k,grid,parallel=True,translation_inv=False):
"Function to compute the covariance matrix $K$ corresponding to the covariance kernel $k$ on a grid. This matrix has $ij-$th entry $K_{ij}=k(x_i,x_j)$ where $x_i$ is the $i$-th point of the grid."
# get the length of the grid
n = len(grid)
# preallocate an n x n array of zeros to hold the cov matrix
K = np.zeros((n,n))
# check if the cov matrix should be computed in parallel
if parallel:
# compute the cov matrix in parallel by computing the upper triangular part column by column
# set up function to compute the ith column of the upper triangular part:
def processInput(i):
return np.array([k(grid[i],grid[j]) for j in range(i,n)])
# get the number of cpu cores present and compute the upper triangular columns in parallel
num_cores = multiprocessing.cpu_count()
results = Parallel(n_jobs=num_cores)(delayed(processInput)(i) for i in range(n))
# store the results in the appropriate positions in K
        for (i,v) in enumerate(results):  # processInput(i) returns k(x_i,x_j) for j = i,...,n-1, so fill row i from column i onwards
K[i,i:] = v
# only the upper triangular part has been formed, so use the symmetry of the cov mat to get full K:
K = K + K.T - np.diag(K.diagonal())
return K
elif translation_inv:
# reshape grid so that it has correct dimensions
grid = grid.reshape(n,1)
# compute the distance matrix D
D = cdist(grid,grid)
# evaluate the kernel function using D
K = k(D)
return K
else:
# compute the cov mat using a nested for loop
for i in range(n):
for j in range(i,n):
K[i,j] = k(grid[i],grid[j])
K = K + K.T - np.diag(K.diagonal())
return K
```
> Note: This function takes in two optional boolean arguments `parallel` and `translation_inv`. The first of these specifies whether or not the cov matrix should be computed in parallel and the second specifies whether or not the cov kernel is translation invariant. If it is, the covariance matrix is computed more efficiently using the `cdist` function from scipy.
Let's quickly test that this function is working by computing the cov matrix for white noise, which has kernel function $k(x,y)=\delta(x-y)$. Discretised on a grid of length $N$ this becomes the Kronecker delta, so the resulting matrix should be the $N\times N$ identity matrix.
```
# set up the kernel function
# set up tolerance for comparison
tol = 1e-16
def k(x,y):
if np.abs(x-y) < tol:
# x == y within the tolerance
return 1.0
else:
# x != y within the tolerance
return 0.0
# set up grid
N = 21
grid = np.linspace(0,1,N)
K = kernMat(k,grid,True,False) # parallel mode
# check that this is the N x N identity matrix
assert (K == np.eye(N)).all()
```
We now create a function `BigPhiMat` to utilise FEniCS to efficiently compute the matrix $\boldsymbol{\Phi}$ defined above.
```
#export
def BigPhiMat(J,grid):
"Function to compute the $\Phi$ matrix."
# create the FE mesh and function space
mesh = UnitIntervalMesh(J)
V = FunctionSpace(mesh,'Lagrange',1)
# get the tree for the mesh
tree = mesh.bounding_box_tree()
# set up a function to compute the ith column of Phi corresponding to the ith grid point
def Φ(i):
x = grid[i]
cell_index = tree.compute_first_entity_collision(Point(x))
cell = Cell(mesh,cell_index)
cell_global_dofs = V.dofmap().cell_dofs(cell_index)
vertex_coordinates = cell.get_vertex_coordinates()
cell_orientation = cell.orientation()
data = V.element().evaluate_basis_all(x,vertex_coordinates,cell_orientation)
return (data,cell_global_dofs,i*np.ones_like(cell_global_dofs))
# compute all the columns of Phi using the function above
res = [Φ(i) for i in range(len(grid))]
# assemble the sparse matrix Phi using the results
data = np.hstack([res[i][0] for i in range(len(grid))])
row = np.hstack([res[i][1] for i in range(len(grid))])
col = np.hstack([res[i][2] for i in range(len(grid))])
return csr_matrix((data,(row,col)),shape=(V.dim(),len(grid)))
```
`BigPhiMat` takes in two arguments: `J`, which controls the FE mesh size ($h=1/J$), and `grid` which is the grid in the definition of $\boldsymbol{\Phi}$. `BigPhiMat` returns $\boldsymbol{\Phi}$ as a sparse `csr_matrix` for memory efficiency.
> Note: Since FEniCS works with the FE functions corresponding to all the FE dofs, while our matrix $\Sigma_2$ only uses the FE functions corresponding to non-boundary dofs, we need to account for this in the code. See the source code for `BigPhiMat` to see how this is done.
We now create a function `cov_assembler` which assembles the approximate FEM covariance matrix on the grid.
```
#export
def cov_assembler(J,k_f,grid,parallel,translation_inv):
"Function to assemble the approximate FEM covariance matrix on the reference grid."
# set up mesh and function space
mesh = UnitIntervalMesh(J)
V = FunctionSpace(mesh,'Lagrange',1)
# set up FE grid
x_grid = V.tabulate_dof_coordinates()
# set up boundary condition
def boundary(x, on_boundary):
return on_boundary
bc = DirichletBC(V, 0.0, boundary)
# get the boundary and interior dofs
bc_dofs = bc.get_boundary_values().keys()
first, last = V.dofmap().ownership_range()
all_dofs = range(last - first)
interior_dofs = list(set(all_dofs) - set(bc_dofs))
bc_dofs = list(set(bc_dofs))
# set up the function p
p = Constant(1.0)
# get the mass and stiffness matrices as sparse csr_matrices
u = TrialFunction(V)
v = TestFunction(V)
mass_form = u*v*dx
a = inner(p*grad(u),grad(v))*dx
M = assemble(mass_form)
A = assemble(a)
M = as_backend_type(M).mat()
A = as_backend_type(A).mat()
M = csr_matrix(M.getValuesCSR()[::-1],shape=M.size)
A = csr_matrix(A.getValuesCSR()[::-1],shape=A.size)
# extract the submatrices corresponding to the interior dofs
M = M[interior_dofs,:][:,interior_dofs]
A = A[interior_dofs,:][:,interior_dofs]
# get the forcing cov matrix on the interior nodes of the grid
Σ_int = kernMat(k_f,x_grid[interior_dofs],parallel,translation_inv)
    # form the matrix Q in the definition of the approximate FEM cov mat
# Note: overwrite Σ_int for memory efficiency.
# Σ_int = M @ Σ_int @ M.T
Σ_int = Σ_int @ M.T
Σ_int = M @ Σ_int
# form Q (storing this in Σ_int directly for memory efficiency)
Σ_int = spsolve(A,Σ_int)
Σ_int = spsolve(A,Σ_int.T).T
# ensure Σ_int is symmetric
Σ_int = 0.5*(Σ_int + Σ_int.T)
# get big phi matrix on the grid (extracting only the rows corresponding to the
# interior dofs)
Phi = BigPhiMat(J,grid)[interior_dofs,:]
# assemble cov mat on grid using Phi and Σ_int
Σ = Phi.T @ Σ_int @ Phi
# ensure Σ is symmetric and return
Σ = 0.5*(Σ + Σ.T)
return Σ
```
`cov_assembler` takes in several arguments which are explained below:
- `J`: controls the FE mesh size ($h=1/J$)
- `k_f`: the covariance function for the forcing $f$
- `grid`: the reference grid where the FEM cov matrix should be computed on
- `parallel`: boolean argument indicating whether the intermediate computation of $C_f$ should be done in parallel
- `translation_inv`: boolean argument indicating whether the intermediate computation of $C_f$ should be computed assuming `k_f` is translation invariant or not
As a quick demonstration that the code is working, we will compute the true and approximate covariance matrices for a relatively coarse grid. We first set up functions to compute the true covariance matrix $\Sigma_1$:
```
# set up kernel functions for f
l_f = 0.4
σ_f = 0.1
def c_f(x,y):
return (σ_f**2)*np.exp(-(x-y)**2/(2*(l_f**2)))
# translation invariant form of c_f
def k_f(x):
return (σ_f**2)*np.exp(-(x**2)/(2*(l_f**2)))
# use quadrature for the true cov function
from scipy import integrate
# compute inner integral over t
def η(w,y):
I_1 = integrate.quad(lambda t: t*c_f(w,t),0.0,y)[0]
I_2 = integrate.quad(lambda t: (1-t)*c_f(w,t),y,1.0)[0]
return (1-y)*I_1 + y*I_2
# use this function η and compute the outer integral over w
def c_u(x,y):
I_1 = integrate.quad(lambda w: (1-w)*η(w,y),x,1.0)[0]
I_2 = integrate.quad(lambda w: w*η(w,y),0.0,x)[0]
return x*I_1 + (1-x)*I_2
```
With these functions we can now compute $\Sigma_1$ as follows:
```
# set up a reference grid
N = 21
grid = np.linspace(0,1,N)
# compute Σ_1 using c_u
Σ_1 = kernMat(c_u,grid,True,False)
```
We now use our function `cov_assembler` to compute $\Sigma_2$:
```
J = 20 # choose a FE mesh size
Σ_2 = cov_assembler(J,k_f,grid,False,True)
```
Let's plot heatmaps of both $\Sigma_1, \Sigma_2$ to compare:
```
#hide_input
vmin = min(Σ_1.min(), Σ_2.min())
vmax = max(Σ_1.max(), Σ_2.max())
plt.rcParams['figure.figsize'] = (12,6)
fig, axs = plt.subplots(ncols=3, gridspec_kw=dict(width_ratios=[4,4,0.2]))
sns.heatmap(Σ_1,cbar=False,
annot=False,
xticklabels=False,
yticklabels=False,
cmap=cm.viridis,
ax=axs[0])
axs[0].title.set_text(r'$\Sigma_1$')
sns.heatmap(Σ_2,cbar=False,
annot=False,
xticklabels=False,
yticklabels=False,
cmap=cm.viridis,
ax=axs[1])
axs[1].title.set_text(r'$\Sigma_2$')
fig.colorbar(axs[np.argmax([Σ_1.max(), Σ_2.max()])].collections[0], cax=axs[2])
plt.tight_layout()
plt.show()
```
Even with a relatively coarse reference grid and a relatively coarse FE space, the approximate FEM covariance is quite similar to the true covariance matrix, as can be seen from the heatmaps above. Let's also quantify how similar they are by utilising `np.linalg.norm` to compute the relative percentage difference:
```
#hide_input
print("Relative percentage difference is: %.2f" %(100*np.linalg.norm(Σ_1-Σ_2)/np.linalg.norm(Σ_1)) + "%")
```
Relative percentage difference is: 0.68%
## Posterior from incorporating sensor readings
Denote by $\nu_{i}=\mathcal{N}(m_{i},\Sigma_{i})$, where $i$ is either the symbol $\star$ or $h$, the true and statFEM prior respectively. When we take $u\sim\nu_{i}$ as our prior, the resulting posterior after incorporating the noisy sensor readings $\mathbf{v}$ at the locations $\{y_{j}\}_{j=1}^{s}$ is given by:
$$u|\mathbf{v}\sim\mathcal{N}\left(m_{u|\mathbf{v}}^{(i)},\Sigma_{u|\mathbf{v}}^{(i)}\right)$$
where we have:
$$m_{u|\mathbf{v}}^{(i)}=m_{i} + \Sigma_{i}S^{\dagger}(\epsilon^{2}I+S\Sigma_{i}S^{\dagger})^{-1}(\mathbf{v}-Sm_{i})$$
$$\Sigma_{u|\mathbf{v}}^{(i)}=\Sigma_{i} - \Sigma_{i}S^{\dagger}(\epsilon^{2}I+S\Sigma_{i}S^{\dagger})^{-1}S\Sigma_{i}$$
where $S$ is the operator which maps a function $g$ to the vector $\left(g(y_1),\cdots,g(y_s)\right)^{T}$.
For brevity we will denote the $s\times s$ matrix which appears above as $B_{\epsilon,i}:=\epsilon^{2}I+S\Sigma_{i}S^{\dagger}=\epsilon^{2}I+C_{Y,i}$ where we have also defined $C_{Y,i}:=S\Sigma_{i}S^{\dagger}$. This matrix has $pq$*-th* entry $c^{(i)}(y_{p},y_{q})$ where $c^{(i)}$ is the covariance function associated with the covariance operator $\Sigma_{i}$.
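Before treating the mean and covariance separately, it may help to see the finite-dimensional analogue of this update. The sketch below is illustrative only: it assumes the prior has already been discretised to a mean vector `m` and covariance matrix `Sigma` on a grid, with `S` a 0/1 matrix selecting the sensor locations from that grid.
```
# Illustrative sketch only: finite-dimensional version of the Gaussian update above.
import numpy as np

def gaussian_update(m, Sigma, S, v, ϵ):
    B = (ϵ**2)*np.eye(S.shape[0]) + S @ Sigma @ S.T   # B_ϵ = ϵ²I + S Σ Sᵀ
    K = Sigma @ S.T @ np.linalg.inv(B)                # Σ Sᵀ B_ϵ⁻¹
    m_post = m + K @ (v - S @ m)
    Sigma_post = Sigma - K @ S @ Sigma
    return m_post, Sigma_post
```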
### Posterior mean
Thus, our posterior mean in both cases has the form:
$$m^{(i)}_{u|\mathbf{v}}(x)=m_{i}(x)+\sum_{p,q=1}^{s}c^{(i)}(x,y_{p})\left(B_{\epsilon,i}^{-1}\right)_{pq}(v_{q}-m_{i}(y_{q}))$$
Note that this can be expressed as:
$$m^{(i)}_{u|\mathbf{v}}(x)=m_{i}(x) - \mathbf{c}^{(i)}(x)^{T}B_{\epsilon,i}^{-1}(\mathbf{m}^{(i)}-\mathbf{v})$$
where $\mathbf{m}^{(i)}:=Sm_{i}=(m_{i}(y_1),\cdots,m_{i}(y_s))^{T}$ and $\mathbf{c}^{(i)}(x):=(c^{(i)}(x,y_1),\cdots,c^{(i)}(x,y_s))^{T}$.
Thus, we require a function to evaluate the posterior means; we create a function `m_post` for this purpose.
```
# export
def m_post(x,m,c,v,Y,B):
"This function evalutes the posterior mean at the point $x$."
m_vect = np.array([m(y_i) for y_i in Y]).flatten()
c_vect = c(x).flatten()
# compute the update term
update = c_vect @ np.linalg.solve(B,m_vect-v)
# return m_post
return (m(x) - update)
```
`m_post` takes in several arguments which are explained below:
- `x`: point where the posterior mean will be evaluated
- `m`: function which computes the prior mean at a given point y
- `c`: function which returns the vector (c(x,y)) for y in Y (note: c is the prior covariance function)
- `v`: vector of noisy sensor readings
- `Y`: vector of sensor locations
- `B`: the matrix $\epsilon^{2}I+C_Y$ to be inverted in order to obtain the posterior
As a quick test to see if the code is working, note that if we choose $\mathbf{c}$ above to be constant at the $j$*-th* standard basis vector and if we take $B$ to be the identity matrix then we should obtain the function $m(x)-m(y_j)+v_j$. This will give us the $v_j$ when evaluated at $y_j$. We will test that this is indeed what we get:
```
# choose several prior mean functions to try
m_list = [lambda x: 1.0, lambda x: 0.5*x*(1.0-x), lambda x: np.sin(2*np.pi*x)]
# set up Y and B and v:
Y = np.linspace(0.01,0.99,11)
s = len(Y)
B = np.eye(s)
np.random.seed(42)
v = np.random.randn(s)
# test that we get v_j when evaluated at y_j (and when c is set to be j-th basis vector)
for j in range(s):
# define c to be j-th basis vector for all x
def c(x):
c_vect = np.zeros(s)
c_vect[j] = 1.0
return c_vect
# evaluate the posterior mean at jth sensor location and check that this is v_j
# (this check is done up to a tolerance tol)
tol = 1e-15
for m in m_list:
assert np.abs(m_post(Y[j],m,c,v,Y,B) - v[j]) < tol
```
### Difference between posterior means
In order to compute the difference between the posterior means we require some more code.
Firstly, we will need code to generate samples from a GP with mean $m$ and cov function $k$ on a grid. We write the function `sample_gp` for this purpose.
```
#export
def sample_gp(n_sim,m,k,grid,par=False,trans=True, tol=1e-9):
"Function to sample a GP with mean $m$ and cov $k$ on a grid."
# get length of grid
d = len(grid)
# construct mean vector
μ = np.array([m(x) for x in grid]).reshape(d,1)
# construct covariance matrix
Σ = kernMat(k,grid,parallel = par, translation_inv = trans)
# construct the cholesky decomposition Σ = GG^T
    # we add a small diagonal perturbation to Σ to ensure it
    # is strictly positive definite
G = np.linalg.cholesky(Σ + tol * np.eye(d))
# draw iid standard normal random vectors
Z = np.random.normal(size=(d,n_sim))
# construct samples from GP(m,k)
Y = G@Z + np.tile(μ,n_sim)
# return the sampled trajectories
return Y
```
`sample_gp` takes in several arguments which are explained below:
- `n_sim`: number of trajectories to be sampled
- `m`: mean function for the GP
- `k`: cov function for the GP
- `grid`: grid of points on which to sample the GP
- `par`: boolean argument indicating whether the computation of the cov matrix should be done in parallel
- `trans`: boolean argument indicating whether the computation of the cov matrix should be computed assuming `k` is translation invariant or not
- `tol`: controls the size of the tiny diagonal perturbation added to cov matrix to ensure it is strictly positive definite (defaults to `1e-9`)
As a quick demonstration that the code is working, let's generate 10 trajectories of white noise, using the kernel `k` from one of the previous tests:
```
#hide_input
# set up grid to sample on
N = 51
grid = np.linspace(0,1,N)
# set up mean
def m(x):
return 0.0
# sample the GP
n_sim = 10
np.random.seed(235)
samples = sample_gp(n_sim,m,k,grid,True,False)
plt.plot(grid,samples)
plt.grid()
plt.xlabel(r'$x$')
plt.title('White noise trajectories')
plt.show()
```
The next bit of code we require is code to generate noisy sensor readings from our system. We write the function `gen_sensor` for this purpose.
```
#export
def gen_sensor(ϵ,m,k,Y,u_quad,grid,par=False,trans=True,tol=1e-9,maxiter=50,require_f=False):
"Function to generate noisy sensor observations of the solution u on a sensor grid Y."
# get number of sensors from the sensor grid Y
s = len(Y)
# sample a single f on the grid
f_sim = sample_gp(1,m,k,grid,par=par,trans=trans,tol=tol)
# create solution function
# interpolate f to get a function
f = interp1d(grid,f_sim.flatten(),kind='cubic')
# use u_quad together with f to compute solution
def u(x):
return u_quad(x,f,maxiter=maxiter)
# get solution on grid Y:
u_Y = np.array([u(y_i) for y_i in Y])
# add N(0,ϵ^2) to each evaluation point
u_S = u_Y + ϵ*np.random.normal(size=s)
    # if the simulated trajectory of f is required, return it as well; otherwise just return u_S
if require_f:
return u_S, f_sim
else:
return u_S
```
`gen_sensor` takes in several arguments which are explained below:
- `ϵ`: controls the amount of sensor noise
- `m`: mean function for the forcing f
- `k`: cov function for the forcing f
- `Y`: vector of sensor locations
- `u_quad`: function to accurately compute the solution u given a realisation of the forcing f
- `grid`: grid where forcing f is sampled on
- `par`: boolean argument indicating whether the computation of the forcing cov matrix should be done in parallel
- `trans`: boolean argument indicating whether the computation of the forcing cov matrix should be computed assuming `k` is translation invariant or not
- `tol`: controls the size of the tiny diagonal perturbation added to forcing cov matrix to ensure it is strictly positive definite (defaults to `1e-9`)
- `maxiter`: parameter which controls the accuracy of the quadrature used in u_quad (defaults to `50`)
- `require_f` : boolean argument indicating whether or not to also return the realisation of the forcing f (defaults to `False`)
> Important: The function `u_quad` which is passed to `gen_sensor` is assumed to compute the solution using quadrature. This must be done in a particular way and will be demonstrated below. It is also important to choose a fine enough grid for the argument `grid` passed to `gen_sensor` as this affects the solution accuracy.
Let's demonstrate that this code is working. To start we note that due to the form of the Green's function for our problem, we can express the solution $u$ in terms of the forcing $f$ as follows:
$$u(x)=\int_{0}^{1}G(x,y)f(y)\mathrm{d}y=(1-x)\int_{0}^{x}yf(y)\mathrm{d}y+x\int_{x}^{1}(1-y)f(y)\mathrm{d}y$$
We will use this observation when setting up `u_quad` below. We now generate $s=20$ sensor observations with the sensors equally spaced in the interval $(0.01,0.99)$.
```
# set up mean and kernel functions for the forcing f
l_f = 0.4
σ_f = 0.1
def m_f(x):
return 1.0
def k_f(x):
return (σ_f**2)*np.exp(-(x**2)/(2*(l_f**2)))
# set up sensor grid and sensor noise level
s = 20
Y = np.linspace(0.01,0.99,s)
ϵ = 0.1
# set up grid to simulate f on
N = 40
grid = np.linspace(0,1,N+1)
# set up u_quad
def u_quad(x,f,maxiter=50):
I_1 = integrate.quadrature(lambda w: w*f(w), 0.0, x,maxiter=maxiter)[0]
I_2 = integrate.quadrature(lambda w: (1-w)*f(w),x, 1.0,maxiter=maxiter)[0]
return (1-x)*I_1 + x*I_2
# generate the sensor observations
np.random.seed(534)
v_dat,f_sim = gen_sensor(ϵ,m_f,k_f,Y,u_quad,grid,maxiter=200,require_f=True)
```
Plotting these sensor observations with the solution for this particular realisation of the forcing gives:
```
#hide_input
# plot the sensor readings as well as the true mean
x_range = np.linspace(0,1,100)
f = interp1d(grid,f_sim.flatten(),kind='cubic')
def u(x):
return u_quad(x,f,maxiter=200)
res = np.array([u(x) for x in x_range])
plt.scatter(Y,v_dat,c='r',label='noisy sensor observations')
plt.plot(x_range,res,c='b',label='solution')
plt.xlabel(r'$x$')
plt.ylim(-0.2,0.3)
plt.title('Noisy sensor observations')
plt.legend()
plt.grid()
plt.show()
```
The next bit of code needed in order to compute the difference between the posterior means is a way of comparing the two different mean functions. One possible solution is to subclass the `UserExpression` class in FEniCS to create custom FEniCS expressions from user-defined functions. This will allow us to use our function `m_post` together with `errornorm` from FEniCS to compute the L2 norm of the difference. We thus create a class called `MyExpression`.
```
#export
class MyExpression(UserExpression):
"Class to allow users to user their own functions to create a FEniCS UserExpression."
def eval(self, value, x):
value[0] = self.f(x)
def value_shape(self):
return ()
```
```
show_doc(MyExpression,title_level=4)
```
> <code>MyExpression</code>(**\*`args`**, **\*\*`kwargs`**) :: `UserExpression`
Class to allow users to use their own functions to create a FEniCS UserExpression.
We will now demonstrate how this works, building on the sensor observation example above.
```
# set up the true prior mean and the true prior cov needed for the true posterior
μ_true = Expression('0.5*x[0]*(1-x[0])',degree=2)
C_true_s = kernMat(c_u,Y.flatten())
def c_u_vect(x):
return np.array([c_u(x,y_i) for y_i in Y])
# set up matrix B for posterior
B_true = (ϵ**2)*np.eye(s) + C_true_s
# compute the true posterior mean
def true_post_mean(x):
return m_post(x,μ_true,c_u_vect,v_dat,Y,B_true)
# set up MyExpression object
μ_true_post = MyExpression()
μ_true_post.f = true_post_mean
μ_true_post
```
`μ_true_post` now works like a usual FEniCS expression/function. We can evaluate it at a point:
```
μ_true_post(0.3)
```
Or even evaluate it on the nodes of a FEniCS mesh:
```
μ_true_post.compute_vertex_values(mesh=UnitIntervalMesh(5))
```
array([0. , 0.08179325, 0.12278251, 0.12279555, 0.08181789,
0. ])
> Warning: A mesh needs to be passed when using `MyExpression` objects with certain FEniCS methods
We now require code which will create the matrix $C_{Y,h}$ and the function $\mathbf{c}^{(h)}$ required for the statFEM posterior mean. We will create the function `fem_cov_assembler_post` for this purpose.
```
#export
def fem_cov_assembler_post(J,k_f,Y,parallel,translation_inv):
"Function to create the matrix $C_{Y,h}$ and the vector function $c^{(h)}$ required for the statFEM posterior mean."
# set up mesh and function space
mesh = UnitIntervalMesh(J)
V = FunctionSpace(mesh,'Lagrange',1)
tree = mesh.bounding_box_tree()
# set up grid
x_grid = V.tabulate_dof_coordinates()
# set up boundary condition
def boundary(x, on_boundary):
return on_boundary
bc = DirichletBC(V, 0.0, boundary)
# get the boundary and interior dofs
bc_dofs = bc.get_boundary_values().keys()
first, last = V.dofmap().ownership_range()
all_dofs = range(last - first)
interior_dofs = list(set(all_dofs) - set(bc_dofs))
bc_dofs = list(set(bc_dofs))
# set up the function p
p = Constant(1.0)
# get the mass and stiffness matrices
u = TrialFunction(V)
v = TestFunction(V)
mass_form = u*v*dx
a = inner(p*grad(u),grad(v))*dx
M = assemble(mass_form)
A = assemble(a)
M = as_backend_type(M).mat()
A = as_backend_type(A).mat()
M = csr_matrix(M.getValuesCSR()[::-1],shape=M.size)
A = csr_matrix(A.getValuesCSR()[::-1],shape=A.size)
# extract the submatrices corresponding to the interior dofs
M = M[interior_dofs,:][:,interior_dofs]
A = A[interior_dofs,:][:,interior_dofs]
# get the forcing cov matrix on the interior nodes of the grid
Σ_int = kernMat(k_f,x_grid[interior_dofs],parallel,translation_inv)
    # form the matrix Q in the definition of the approximate FEM cov mat
# Note: overwrite Σ_int for memory efficiency
Σ_int = M @ Σ_int @ M.T
Σ_int = spsolve(A,Σ_int)
Σ_int = spsolve(A,Σ_int.T).T
# ensure Σ_int is symmetric
Σ_int = 0.5*(Σ_int + Σ_int.T)
# get big phi matrix on the sensor grid (only need the interior dofs)
Phi = BigPhiMat(J,Y)[interior_dofs,:]
# assemble the FEM cov mat on the sensor grid and ensure it is symmetric
Σ_s = Phi.T @ Σ_int @ Phi
Σ_s = 0.5*(Σ_s + Σ_s.T)
# set up function to yield the vector (c(x,y)) for y in Y
def Φ(x):
cell_index = tree.compute_first_entity_collision(Point(x))
cell_global_dofs = V.dofmap().cell_dofs(cell_index)
cell = Cell(mesh, cell_index)
vertex_coordinates = cell.get_vertex_coordinates()
cell_orientation = cell.orientation()
data = V.element().evaluate_basis_all(x,vertex_coordinates,cell_orientation)
col = np.zeros_like(cell_global_dofs)
res = csr_matrix((data,(cell_global_dofs,col)),shape=(V.dim(),1))[interior_dofs,:]
return res
def c_fem(x):
return Φ(x).T @ Σ_int @ Phi
#return Σ and c_fem
return Σ_s, c_fem
```
`fem_cov_assembler_post` takes in several arguments which are explained below:
- `J`: controls the FE mesh size ($h=1/J$)
- `k_f`: the covariance function for the forcing $f$
- `Y`: vector of sensor locations
- `parallel`: boolean argument indicating whether the computation of the forcing cov mat should be done in parallel
- `translation_inv`: boolean argument indicating whether the computation of the forcing cov mat should be computed assuming `k_f` is translation invariant or not
With all of this code in place, we can now finally write the function `m_post_fem_assembler` which will assemble the statFEM posterior mean function.
```
#export
def m_post_fem_assembler(J,f_bar,k_f,ϵ,Y,v_dat,par=False,trans=True):
"Function to assemble the statFEM posterior mean function."
# get number of sensors
s = len(Y)
# set up mesh and function space
mesh = UnitIntervalMesh(J)
V = FunctionSpace(mesh,'Lagrange',1)
# set up boundary condition
def boundary(x, on_boundary):
return on_boundary
bc = DirichletBC(V, 0.0, boundary)
# set up the functions p and f
p = Constant(1.0)
f = f_bar
# set up the bilinear form for the variational problem
u = TrialFunction(V)
v = TestFunction(V)
a = inner(p*grad(u),grad(v))*dx
# set up linear form
L = f*v*dx
# solve the variational problem
μ_fem = Function(V)
solve(a == L, μ_fem, bc)
# use fem_cov_assembler_post to obtain cov mat on sensor grid and function to compute vector
# (c(x,y)) for y in Y
B_fem_s, c_fem = fem_cov_assembler_post(J,k_f,Y.flatten(),parallel=par,translation_inv=trans)
# form B_fem_s by adding noise contribution
B_fem_s += (ϵ**2)*np.eye(s)
# assemble function to compute posterior mean and return
def m_post_fem(x):
return m_post(x,μ_fem,c_fem,v_dat,Y,B_fem_s)
return m_post_fem
```
`m_post_fem_assembler` takes in several arguments which are explained below:
- `J`: controls the FE mesh size ($h=1/J$)
- `f_bar`: the mean function for the forcing $f$
- `k_f`: the covariance function for the forcing $f$
- `ϵ`: controls the amount of sensor noise
- `Y`: vector of sensor locations
- `v_dat`: vector of noisy sensor observations
- `par`: boolean argument passed to `fem_cov_assembler_post`'s argument `parallel` (defaults to `False`)
- `trans`: boolean argument passed to `fem_cov_assembler_post`'s argument `translation_inv` (defaults to `True`)
> Important: `m_post_fem_assembler` requires `f_bar` to be represented as a FEniCS function/expression/constant.
Let's quickly check that this function is working.
```
J = 20
f_bar = Constant(1.0)
m_post_fem = m_post_fem_assembler(J,f_bar,k_f,ϵ,Y,v_dat)
# compute posterior mean at a location x in [0,1]
x = 0.3
m_post_fem(x)
```
Let's also plot the statFEM posterior mean together with the corresponding statFEM prior mean:
```
#hide_input
h = 1/J
m_prior = mean_assembler(h,f_bar)
x_range = np.linspace(0,1,100)
y_range = np.array([m_post_fem(x) for x in x_range])
plot(m_prior,label='prior')
plt.plot(x_range,y_range,label='posterior',c='r')
plt.grid()
plt.xlabel(r'$x$')
plt.title('statFEM prior and posterior means')
plt.legend()
plt.show()
```
### Posterior covariance
From the form of the posterior covariance operators $\Sigma_{u|\mathbf{v}}^{(i)}$ given in the section **"Posterior from incorporating sensor readings"** we can see that the posterior covariance functions both have the form:
$$c_{u|\mathbf{v}}^{(i)}(x,y) = c^{(i)}(x,y) - \sum_{p,q=1}^{s}c^{(i)}(x,y_p)(B_{\epsilon,i}^{-1})_{pq}c^{(i)}(y_q,y)$$
Note that this can be expressed as:
$$c_{u|\mathbf{v}}^{(i)}(x,y) = c^{(i)}(x,y) - \mathbf{c}^{(i)}(x)^{T}B_{\epsilon,i}^{-1}\mathbf{c}^{(i)}(y)$$
where we have utilised the fact that $c^{(i)}$ are covariance functions and are hence symmetric which allows us to put $\mathbf{c}^{(i)}(y)=(c^{(i)}(y,y_1),\cdots,c^{(i)}(y,y_s))^{T}=(c^{(i)}(y_1,y),\cdots,c^{(i)}(y_s,y))^{T}$.
Thus, we require a function to evaluate the posterior covariances. We will thus create a function `c_post` which evaluates the posterior covariances.
```
#export
def c_post(x,y,c,Y,B):
"This function evaluates the posterior covariance at $(x,y)$"
# compute vectors c_x and c_y:
c_x = np.array([c(x,y_i) for y_i in Y])
c_y = np.array([c(y_i,y) for y_i in Y])
# compute update term
update = c_x @ np.linalg.solve(B,c_y)
# return c_post
return (c(x,y) - update)
```
`c_post` takes in several arguments which are explained below:
- `x`,`y`: points to evaluate the covariance at
- `c`: function which returns the prior covariance at any given pair $(x,y)$
- `Y`: vector of sensor locations
- `B`: the matrix $\epsilon^{2}I+C_{Y}$ to be inverted in order to obtain the posterior
> Note: The function `c_post` will only be used for the true posterior covariances.
### Difference between posterior covariances
In order to compute the difference between the posterior covariances we require some more code. Since we will be comparing the posterior covariances on a fixed reference grid $\{x_{i}\}_{i=1}^{N}$, we will need to assemble the cov matrices on this grid, i.e. we will require the matrices $\tilde{C}_{X,i}$ with $pq$*-th* entry $c_{u|\mathbf{v}}^{(i)}(x_{p},x_{q})$ for $p,q=1,\cdots,N$. For statFEM this matrix can be efficiently assembled by exploiting the form of the statFEM prior and posterior covariance functions, i.e. by noting that we have:
$$\tilde{C}_{X,h} = \Sigma_{X} - \Sigma_{XY}B_{\epsilon,h}^{-1}\Sigma_{XY}^{T}$$
where $\Sigma_{X}:=\Phi_{X}^{T}Q\Phi_{X}$, $\Sigma_{XY}=\Phi_{X}^{T}Q\Phi_{Y}$ and where $\Phi_{X}$ is a $J\times N$ matrix whose $i$*-th* column is given by $\phi(x_{i})$ and similarly $\Phi_{Y}$ is a $J\times s$ matrix whose $i$*-th* column is given by $\phi(y_{i})$ and $Q$ is the matrix defined in the section **"Difference between the true prior covariance and the statFEM prior covariance"**.
Thus, we can use our function `BigPhiMat` to compute $\tilde{C}_{X,h}$ efficiently. We start by creating the function `post_fem_cov_assembler` which assembles the matrices $\Sigma_{X}, \Sigma_{XY}$, and $\Sigma_{Y}:=\Phi_{Y}^{T}Q\Phi_{Y}$ required for the statFEM posterior covariance.
```
#export
def post_fem_cov_assembler(J,k_f,grid,Y,parallel,translation_inv):
"Function which assembles the matrices $Σ_X,Σ_{XY}$, and $Σ_Y$ required for the statFEM posterior covariance."
# set up mesh and function space
mesh = UnitIntervalMesh(J)
V = FunctionSpace(mesh,'Lagrange',1)
# set up grid
x_grid = V.tabulate_dof_coordinates()
# set up boundary condition
def boundary(x, on_boundary):
return on_boundary
bc = DirichletBC(V, 0.0, boundary)
# get the boundary and interior dofs
bc_dofs = bc.get_boundary_values().keys()
first, last = V.dofmap().ownership_range()
all_dofs = range(last - first)
interior_dofs = list(set(all_dofs) - set(bc_dofs))
bc_dofs = list(set(bc_dofs))
# set up the function p
p = Constant(1.0)
# get the mass and stiffness matrices
u = TrialFunction(V)
v = TestFunction(V)
mass_form = u*v*dx
a = inner(p*grad(u),grad(v))*dx
M = assemble(mass_form)
A = assemble(a)
M = as_backend_type(M).mat()
A = as_backend_type(A).mat()
M = csr_matrix(M.getValuesCSR()[::-1],shape=M.size)
A = csr_matrix(A.getValuesCSR()[::-1],shape=A.size)
# extract the submatrices corresponding to the interior dofs
M = M[interior_dofs,:][:,interior_dofs]
A = A[interior_dofs,:][:,interior_dofs]
# get the forcing cov matrix on the interior nodes of the grid
Σ_int = kernMat(k_f,x_grid[interior_dofs],parallel,translation_inv)
    # form the matrix Q in the definition of the approximate FEM cov mat
# Note: overwrite Σ_int for memory efficiency
Σ_int = M @ Σ_int @ M.T
Σ_int = spsolve(A,Σ_int)
Σ_int = spsolve(A,Σ_int.T).T
# ensure Σ_int is symmetric
Σ_int = 0.5*(Σ_int + Σ_int.T)
    # get big phi matrix on the grid (only need the interior nodes)
Phi_grid = BigPhiMat(J,grid)[interior_dofs,:]
# get big phi matrix on the sensor grid (only need the interior nodes)
Phi_Y = BigPhiMat(J,Y)[interior_dofs,:]
# assemble the FEM cov mat on the sensor grid using Σ_int and Phi_Y
Σ_Y = Phi_Y.T @ Σ_int @ Phi_Y
# assemble the FEM cov mat on the grid using Σ_int and Phi_grid
Σ_X = Phi_grid.T @ Σ_int @ Phi_grid
# assemble cross term matrix (with ijth entry c(x_i,y_j))
Σ_XY = Phi_grid.T @ Σ_int @ Phi_Y
# return these sigma matrices
return Σ_Y, Σ_X, Σ_XY
```
`post_fem_cov_assembler` takes in several arguments which are explained below:
- `J`: controls the FE mesh size ($h=1/J$)
- `k_f`: the covariance function for the forcing $f$
- `grid`: the fixed reference grid $\{x_{i}\}_{i=1}^{N}$ on which to assemble the posterior cov mat
- `Y`: vector of sensor locations.
- `parallel`: boolean argument indicating whether the computation of the forcing cov mat should be done in parallel
- `translation_inv`: boolean argument indicating whether the computation of the forcing cov mat should be computed assuming `k_f` is translation invariant or not
Finally, we create the function `c_post_fem_assembler` which assembles the statFEM posterior cov mat on the reference grid using the matrices `post_fem_cov_assembler` returns.
```
#export
def c_post_fem_assembler(J,k_f,grid,Y,ϵ,par,trans):
"Function to assemble the statFEM posterior cov mat on a reference grid specified by grid."
# use post_fem_cov_assembler to get the sigma matrices needed for posterior cov mat
Σ_Y, Σ_X, Σ_XY = post_fem_cov_assembler(J,k_f,grid,Y,parallel=par,translation_inv=trans)
# create the matrix B
s = len(Y) # number of sensor points
B = (ϵ**2)*np.eye(s) + Σ_Y
#form the posterior cov matrix
update = Σ_XY @ np.linalg.solve(B,Σ_XY.T)
return Σ_X - update
```
Let's quickly demonstrate that this code is working by computing the statFEM posterior covariance matrix on a reference grid and comparing this to the corresponding statFEM prior.
```
# set up reference grid and J
N = 21
grid = np.linspace(0,1,N)
J = 20
# get statFEM prior cov mat on this grid
Σ_prior = cov_assembler(J,k_f,grid,False,True)
# get statFEM posterior cov mat on this grid
Σ_posterior = c_post_fem_assembler(J,k_f,grid,Y,ϵ,False,True)
```
```
#hide_input
vmin = min(Σ_prior.min(), Σ_posterior.min())
vmax = max(Σ_prior.max(), Σ_posterior.max())
plt.rcParams['figure.figsize'] = (12,6)
fig, axs = plt.subplots(ncols=3, gridspec_kw=dict(width_ratios=[4,4,0.2]))
sns.heatmap(Σ_prior,cbar=False,
annot=False,
xticklabels=False,
yticklabels=False,
cmap=cm.viridis,
ax=axs[0])
axs[0].title.set_text('statFEM prior covariance')
sns.heatmap(Σ_posterior,cbar=False,
annot=False,
xticklabels=False,
yticklabels=False,
cmap=cm.viridis,
ax=axs[1])
axs[1].title.set_text('statFEM posterior covariance')
fig.colorbar(axs[np.argmax([Σ_prior.max(), Σ_posterior.max()])].collections[0], cax=axs[2])
plt.tight_layout()
plt.show()
```
```
#hide
from nbdev.export import notebook2script; notebook2script()
```
Converted 00_oneDim.ipynb.
Converted index.ipynb.
# A study on spontaneous decay rate of an atom in presence of a square dielectric waveguide using BEM approach
In these notes, I calculate the Local Density of States (LDOS), or the imaginary part of the on-site Green's function, and hence the modified spontaneous emission rate of an atom in the presence of a square dielectric waveguide using the Boundary Element Method (BEM). The BEM code is from Prof. Alejandro Manjavacas's group.
This is an [IJulia notebook](https://github.com/JuliaLang/IJulia.jl), which provides a nice
browser-based [Jupyter](http://jupyter.org/) interface to the [Julia language](http://julialang.org/), a high-level dynamic language (similar to Matlab or Python+SciPy) for technical computing. The notebook allows us to combine code and results in one place.
We are only manipulating the generated data from the simulation results in this notebook. As a brief recap of the simulation process, I used a BEM code written in C++ by Alejandro's group, called `bem2D`, on a cluster computing system. A configuration C++ script is defined in the file `scripts_ldos.cpp`, which is in the same folder as `bem2D`. I compiled the script and put the generated executable into another folder called `p3`, for example, by
```
g++ scripts_ldos.cpp -lstdc++ -o ../p3/scripts_ldos
```
Then I ran the executable and submitted the generated PBS script to the cluster system to run the simulation:
```
cd ../p3/
./scripts_ldos
qsub ldos_N_1_lam_894_eps_4_0_a_300_b_300_c_5_x_0_y_350_q_0_2_401.pbs
```
Notice that the name of the PBS script is automatically generated based on the configuration parameters for this simulation, so the name will vary between simulations. After running the script, I got a set of data files: one has the same name as the PBS script but with a `.dat` extension and stores the data table of calculated LDOS values; there are another two `*.dat` files for the boundary geometry and the dielectric function distribution. We will look into those data files in the following sections.
## 3D dielectric waveguide simulated in 2D
Just a little more detail on the simulation itself: by assuming the waveguide is uniform along the z-axis (the light propagation direction), one can completely solve the boundary condition problem of dipole emission by simulating the field in a single layer of the xy cross section; the z-dependence of the field only contributes a phase factor. The data of the simulated result is stored in the `/data/` folder of this repo.
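In other words, under this uniformity assumption each field component can be taken to have the separable form
$$\mathbf{E}(x,y,z)=\mathbf{E}(x,y;k)\,e^{ikz},$$
where $k$ is the propagation constant along $z$; the 2D BEM problem is solved for each value of $k$, and quantities such as the LDOS are then obtained by integrating over $k$ (this is the $k$-integration carried out in the sections below).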
We can read in this data as a matrix of numbers with the `readdlm` function in Julia, using the `header=false` option, which means the first line is read as the beginning of the data entries rather than as a list of strings describing each column.
Now, let's plot the results. I'll use the [PyPlot](https://github.com/stevengj/PyPlot.jl) package in Julia, which is an interface to the sophisticated [matplotlib](http://matplotlib.org/) Python plotting library. We'll plot three things:
* The waveguide structure in terms of $\epsilon$ and the interface boundary between two media.
* Plot the LDOS components for a fixed dipole position.
* Calculate the waveguide-modified spontaneous decay rate when the dipole varies its position outside of the waveguide.
### Plotting the boundary and index of refraction profile of the waveguide in the xy cross section
Our boundary points are meshed in the file `/data/p3/geom_regions_a_300_b_300_c_5.dat`, which gives the positions where the equivalent charges and current sources are to be computed in the BEM simulation. The waveguide has a square cross-section of width $a=b=300nm$ ($nm$ is the unit of length) and an index of refraction of $n_1=n_{core}=2$ for the waveguide material and $n_2=n_{clad}=1$ for the vacuum clad.
It is good to plot out the mesh of the index of refraction in space and find out how good the mesh resolution is. This can be done by plotting the output eps file, which is in a simple data-table format ending in `.dat`.
The following will first print out some of the data in order to figure out the physical meaning of the columns. They should contain the coordinate and index of refraction information for the simulation.
```julia
boundary = readdlm("data/geom_regions_a_300_b_300_c_5.dat", header=false);
boundarypoints = boundary[:,1:3]
```
10201×3 Array{Float64,2}:
-180.0 -180.0 1.0
-180.0 -176.4 1.0
-180.0 -172.8 1.0
-180.0 -169.2 1.0
-180.0 -165.6 1.0
-180.0 -162.0 1.0
-180.0 -158.4 1.0
-180.0 -154.8 1.0
-180.0 -151.2 1.0
-180.0 -147.6 1.0
-180.0 -144.0 1.0
-180.0 -140.4 1.0
-180.0 -136.8 1.0
⋮
180.0 140.4 1.0
180.0 144.0 1.0
180.0 147.6 1.0
180.0 151.2 1.0
180.0 154.8 1.0
180.0 158.4 1.0
180.0 162.0 1.0
180.0 165.6 1.0
180.0 169.2 1.0
180.0 172.8 1.0
180.0 176.4 1.0
180.0 180.0 1.0
```julia
epsilon3D = readdlm("data/geom_a_300_b_300_c_5.dat", header=false);
epsilon2Dpoints = epsilon3D[:,[1,2,4]]
```
620×3 Array{Float64,2}:
0.966667 150.0 2.0
2.9 150.0 2.0
4.83333 150.0 2.0
6.76667 150.0 2.0
8.7 150.0 2.0
10.6333 150.0 2.0
12.5667 150.0 2.0
14.5 150.0 2.0
16.4333 150.0 2.0
18.3667 150.0 2.0
20.3 150.0 2.0
22.2333 150.0 2.0
24.1667 150.0 2.0
⋮
-150.0 142.1 2.0
-150.0 144.033 2.0
-149.938 -145.782 1.0
-149.455 -147.27 1.0
-148.536 -148.536 1.0
-147.27 -149.455 1.0
-145.782 -149.938 1.0
-145.782 149.938 1.0
-147.27 149.455 1.0
-148.536 148.536 1.0
-149.455 147.27 1.0
-149.938 145.782 1.0
Now we plot out the data in a 2D (xy) plane.
```julia
using PyPlot
#println(convert(Int64,floor(lenz/2)))
x = boundarypoints[:,1];
y = boundarypoints[:,2];
v_regions = boundarypoints[:,3];
lenx = length(x)
fig = figure("Boundary points plot",figsize=(10,10))
ax = fig[:add_subplot](1,2,1)
c = get_cmap("PRGn")
rgbs = [c(norm(value/2.)) for value in v_regions]
scatter(x,y,c=rgbs,linewidths=0,marker=".",s=5)
xlabel(L"x/nm")
ylabel(L"y/nm")
axis("image")
xlim(-250,250)
ylim(-250,250)
tight_layout()
title("meshing points (x,y)")
display(maximum(abs.(x)))
subplot(1,2,2)
x_eps = epsilon2Dpoints[:,1]; y_eps = epsilon2Dpoints[:,2]; v_eps = epsilon2Dpoints[:,3];
ax = fig[:add_subplot](1,2,2)
c = get_cmap("RdBu")
rgbs = [c(norm(value)) for value in v_eps]
scatter(x_eps,y_eps,c=rgbs,s=1)
xlabel(L"x/ nm")
ylabel(L"y/ nm")
axis("image")
xlim(-250,250)
ylim(-250,250)
tight_layout()
title("boundary (x,y)");
```
The first plot shows the 101$\times$101 meshing points; the plotted area is the computing region, a square extending from $-180nm$ to $180nm$ in both $x$ and $y$, with the waveguide region colored in green. The figure on the right covers the boundary points pretty densely, although the plot doesn't clearly resolve the rounded corners.
# Plotting the LDOS components with a fixed dipole position
To calculate the modified decay rates, we need the LDOS value at the dipole position. The result is calculated at a series of $k$ points. I expect to see a continuous positive curve for $k\in [0,1]\,\omega/c$ (the radiative mode regime) and a single positive spark in the $[1,2]\,\omega/c$ (guided mode) regime, given that the waveguide is a single-mode waveguide.
```julia
ldos = readdlm("data/ldos_N_1_lam_894_eps_4_0_a_300_b_300_c_5_x_0_y_350_q_0_2_401.dat", header=false);
#ldos = readdlm("data/ldos_N_1_lam_894_eps_4_0.001_a_300_b_300_c_5_x_0_y_350_q_0.1_4_101.dat", header=false);
ldosqpoints = ldos[:,[2,5,6,7,8]]
using PyPlot
#println(convert(Int64,floor(lenz/2)))
q = ldosqpoints[:,1]
ldosx = ldosqpoints[:,2]
ldosy = ldosqpoints[:,3]
ldosz = ldosqpoints[:,4]
ldos_av = ldosqpoints[:,5]
lenx = length(q)
fig = figure("LDOS q plot",figsize=(10,10))
ax = fig[:add_subplot](1,2,1)
cp = ax[:plot](q, ldosx, "b-", linewidth=2.0)
#ax[:clabel](cp, inline=1, fontsize=5)
xlabel(L"k/(\omega/c)")
ylabel(L"LDOS_x")
axis("image")
xlim(-0.0,2)
ylim(-2,3)
tight_layout()
gcf() # Needed for IJulia to plot inline
display(maximum(abs.(ldosx)))
ax = fig[:add_subplot](1,2,2)
cp = ax[:plot](q, ldos_av, "r-", linewidth=2.0)
#ax[:clabel](cp, inline=1, fontsize=5)
xlabel(L"k/(\omega/c)")
ylabel(L"LDOS")
axis("image")
xlim(-0.0,2)
ylim(-2,3)
tight_layout()
gcf() # Needed for IJulia to plot inline
display(maximum(abs.(ldosx)))
```
As you can see, ***there are negative sparks in the guided mode regime***. These could be a numerical error in the code and might be removable using some tricks.
Now we can integrate the $\mathrm{LDOS}_i$ values over $k$ along the whole axis to obtain the total decay rate or from $0$ to $\omega/c$ to obtain the radiative mode contribution for the decay rate. Each integral can be performed as a sum over all discrete points along the $k$-axis using the Trapezoid approximation as below:
$$\begin{align}\int \mathrm{LDOS}_i(k)\,dk &\approx \sum_{j=1}^{N-1} \frac{\mathrm{LDOS}_i(k_j)+\mathrm{LDOS}_i(k_{j+1})}{2}\Delta k\\
&= \left(\frac{\mathrm{LDOS}_i(k_1)+\mathrm{LDOS}_i(k_N)}{2}+\sum_{j=2}^{N-1}\mathrm{LDOS}_i(k_j)\right)\Delta k,\end{align}$$
where $N$ is the total number of data points along the $k$-axis, and $\Delta k=k_j-k_{j-1}=k_2-k_1$ is the (uniform) spacing of the $k$ grid.
In our case, the integrand is discontinuous at the point $k=k_0=\omega/c$. Therefore, we divide the integration range into the two regions $[0,n_2)k_0$ and $(n_2,n_1]k_0$, corresponding to the radiation mode and guided mode contributions, where $n_2=1$ is the index of refraction of the vacuum clad and $n_1=2$ is the index of refraction of the waveguide bulk material. Since there are sudden jumps in the guided mode regime, the integration may have some error using the current method, but it shouldn't be too large as the jumps occur over small intervals.
```julia
length_of_q=length(ldosqpoints[:,1])
breakpoint = Int(floor((length_of_q-1)/2)) # The breaking point of index is chosen under the fact that the radiation contribution part takes a half space for $n_1=2$.
using NumericalIntegration
ldos_x_rad = integrate(ldosqpoints[1:breakpoint,1],ldosqpoints[1:breakpoint,2])
ldos_x_guide = integrate(ldosqpoints[breakpoint:end,1],ldosqpoints[breakpoint:end,2])
ldos_x = ldos_x_rad+ldos_x_guide;
ldos_y_rad = integrate(ldosqpoints[1:breakpoint,1],ldosqpoints[1:breakpoint,3])
ldos_y_guide = integrate(ldosqpoints[breakpoint:end,1],ldosqpoints[breakpoint:end,3])
ldos_y = ldos_y_rad+ldos_y_guide;
ldos_z_rad = integrate(ldosqpoints[1:breakpoint,1],ldosqpoints[1:breakpoint,4])
ldos_z_guide = integrate(ldosqpoints[breakpoint:end,1],ldosqpoints[breakpoint:end,4])
ldos_z = ldos_z_rad+ldos_z_guide;
ldos_rad = integrate(ldosqpoints[1:breakpoint,1],ldosqpoints[1:breakpoint,5])
ldos_guide = integrate(ldosqpoints[breakpoint:end,1],ldosqpoints[breakpoint:end,5])
ldos_total = ldos_rad + ldos_guide;
# Print out the result.
@printf("Delta k = %4f k_0.\n",ldosqpoints[2,1]-ldosqpoints[1,1])
@printf("LDOS_x_rad=%5f, LDOS_x_guide=%5f, LDOS_x=%5f;\n",ldos_x_rad,ldos_x_guide,ldos_x)
@printf("LDOS_y_rad=%5f, LDOS_y_guide=%5f, LDOS_y=%5f;\n",ldos_y_rad,ldos_y_guide,ldos_y)
@printf("LDOS_z_rad=%5f, LDOS_z_guide=%5f, LDOS_z=%5f;\n",ldos_z_rad,ldos_z_guide,ldos_z)
@printf("LDOS_rad = %5f, LDOS_guide = %5f, LDOS = %5f.",ldos_rad,ldos_guide,ldos_total)
```
Delta k = 0.005000 k_0.
Notice that the result above was calculated using a $k$-resolution of $\Delta k= 0.005k_0$. We can compare the results above with a coarser gridding case with $\Delta k= 0.01k_0$. The LDOS values can be then calculated as below.
```julia
ldos = readdlm("data/ldos_N_1_lam_894_eps_4_0_a_300_b_300_c_5_x_0_y_350_q_0_2_201.dat", header=false);
ldosqpoints = ldos[:,[2,5,6,7,8]]
length_of_q=length(ldosqpoints[:,1])
breakpoint = Int(floor((length_of_q-1)/2)) # The breaking point of index is chosen under the fact that the radiation contribution part takes a half space for $n_1=2$.
using NumericalIntegration
ldos_x_rad = integrate(ldosqpoints[1:breakpoint,1],ldosqpoints[1:breakpoint,2])
ldos_x_guide = integrate(ldosqpoints[breakpoint:end,1],ldosqpoints[breakpoint:end,2])
ldos_x = ldos_x_rad+ldos_x_guide;
ldos_y_rad = integrate(ldosqpoints[1:breakpoint,1],ldosqpoints[1:breakpoint,3])
ldos_y_guide = integrate(ldosqpoints[breakpoint:end,1],ldosqpoints[breakpoint:end,3])
ldos_y = ldos_y_rad+ldos_y_guide;
ldos_z_rad = integrate(ldosqpoints[1:breakpoint,1],ldosqpoints[1:breakpoint,4])
ldos_z_guide = integrate(ldosqpoints[breakpoint:end,1],ldosqpoints[breakpoint:end,4])
ldos_z = ldos_z_rad+ldos_z_guide;
ldos_rad = integrate(ldosqpoints[1:breakpoint,1],ldosqpoints[1:breakpoint,5])
ldos_guide = integrate(ldosqpoints[breakpoint:end,1],ldosqpoints[breakpoint:end,5])
ldos_total = ldos_rad + ldos_guide;
# Print out the result.
display("Delta k = $(ldosqpoints[2,1]-ldosqpoints[1,1]) k_0.")
@printf("LDOS_x_rad=%5f, LDOS_x_guide=%5f, LDOS_x=%5f;\n",ldos_x_rad,ldos_x_guide,ldos_x)
@printf("LDOS_y_rad=%5f, LDOS_y_guide=%5f, LDOS_y=%5f;\n",ldos_y_rad,ldos_y_guide,ldos_y)
@printf("LDOS_z_rad=%5f, LDOS_z_guide=%5f, LDOS_z=%5f;\n",ldos_z_rad,ldos_z_guide,ldos_z)
@printf("LDOS_rad = %5f, LDOS_guide = %5f, LDOS = %5f.",ldos_rad,ldos_guide,ldos_total)
#ldosqpoints[breakpoint,2]
```
"Delta k = 0.01 k_0."
LDOS_x_rad=0.623963, LDOS_x_guide=-0.004968, LDOS_x=0.618995;
LDOS_y_rad=0.657902, LDOS_y_guide=-0.002447, LDOS_y=0.655455;
LDOS_z_rad=0.525345, LDOS_z_guide=0.012674, LDOS_z=0.538019;
LDOS_rad = 1.807210, LDOS_guide = 0.005258, LDOS = 1.812468.
We can see that there are considerable differences between the integrated LDOS values for the two cases. The main differences come from the guided mode contribution to the LDOS, and should be related to the negative sparks. Therefore, I suspect that I still need to find a way to rule out the spark errors in order to calculate the LDOS values accurately. The resolution of the latter case seems fine for the radiation contribution part, at least.
I didn't plot the details of the LDOS components for this case, but there are also negative sparks in the guided mode regime. The sparks disappear in the case of $\Delta k=0.05k_0$, but the integrals may not be accurate enough.
Another set of data is taken at the $r=330nm$ position with a slightly different setting (mainly, the rounded corners have a larger radius). We can plot the result below.
```julia
ldos = readdlm("data/ldos_N_1_lam_894_eps_4_0_a_300_b_300_c_6_x_330_y_0_q_0_2_201.dat", header=false);
#ldos = readdlm("data/ldos_N_1_lam_894_eps_4_0.001_a_300_b_300_c_5_x_0_y_350_q_0.1_4_101.dat", header=false);
ldosqpoints = ldos[:,[2,5,6,7,8]]
using PyPlot
#println(convert(Int64,floor(lenz/2)))
q = ldosqpoints[:,1]
ldosx = ldosqpoints[:,2]
ldosy = ldosqpoints[:,3]
ldosz = ldosqpoints[:,4]
ldos_av = ldosqpoints[:,5]
lenx = length(q)
fig = figure("LDOS q plot",figsize=(10,10))
ax = fig[:add_subplot](1,2,1)
cp = ax[:plot](q, ldosx, "b-", linewidth=2.0)
#ax[:clabel](cp, inline=1, fontsize=5)
xlabel(L"k/(\omega/c)")
ylabel(L"LDOS_x")
axis("image")
xlim(-0.0,2)
#ylim(-300,300)
tight_layout()
gcf() # Needed for IJulia to plot inline
display(maximum(abs.(ldosx)))
ax = fig[:add_subplot](1,2,2)
cp = ax[:plot](q, ldos_av, "r-", linewidth=2.0)
#ax[:clabel](cp, inline=1, fontsize=5)
xlabel(L"k/(\omega/c)")
ylabel(L"LDOS")
axis("image")
xlim(-0.0,2)
#ylim(-300,300)
tight_layout()
gcf() # Needed for IJulia to plot inline
display(maximum(abs.(ldosx)))
```
## LDOS components as a function of dipole position
We can also plot out the LDOS's when the dipole is placed at different locations along the x-axis. Here we are using some rough parameters just for demonstration purposes.
As plotted below, the dipole is changing position from $225$nm to $420$nm from the origin (center of the waveguide) along the x-axis.
```julia
# Load the simulated data.
ldos = readdlm("data/ldos_N_1_lam_894_eps_4_0_a_300_b_300_c_6_x0_225_x1_420_nx_14_y_0_q_0_2_201.dat", header=false)
ldosqpoints = ldos[:,[2,3,5,6,7,8]]
lenr = 14; rstart=225; rend=420;
lenq = 201;
breakpoint = Int(floor((lenq-1)/2)) # The breaking point of index is chosen under the fact that the radiation contribution part takes a half space for $n_1=2$.
rprime = linspace(rstart,rend,lenr);
ldos_rscan = zeros(lenq,5,lenr);
ldos_int = zeros(lenr,12);
ii=1;
using NumericalIntegration
for ri in rprime
ind=find(ldosqpoints[:,2].==ri);
ldos_rscan[:,:,ii]=ldosqpoints[ind,[1,3,4,5,6]];
ldos_x_rad = integrate(ldos_rscan[1:breakpoint,1,ii],ldos_rscan[1:breakpoint,2,ii])
ldos_x_guide = integrate(ldos_rscan[breakpoint:end,1,ii],ldos_rscan[breakpoint:end,2,ii])
ldos_x = ldos_x_rad+ldos_x_guide;
ldos_y_rad = integrate(ldos_rscan[1:breakpoint,1,ii],ldos_rscan[1:breakpoint,3,ii])
ldos_y_guide = integrate(ldos_rscan[breakpoint:end,1,ii],ldos_rscan[breakpoint:end,3,ii])
ldos_y = ldos_y_rad+ldos_y_guide;
ldos_z_rad = integrate(ldos_rscan[1:breakpoint,1,ii],ldos_rscan[1:breakpoint,4,ii])
ldos_z_guide = integrate(ldos_rscan[breakpoint:end,1,ii],ldos_rscan[breakpoint:end,4,ii])
ldos_z = ldos_z_rad+ldos_z_guide;
ldos_rad = integrate(ldos_rscan[1:breakpoint,1,ii],ldos_rscan[1:breakpoint,5,ii])
ldos_guide = integrate(ldos_rscan[breakpoint:end,1,ii],ldos_rscan[breakpoint:end,5,ii])
ldos_total = ldos_rad + ldos_guide;
ldos_int[ii,1]=ldos_x_rad;
ldos_int[ii,2]=ldos_x_guide;
ldos_int[ii,3]=ldos_x;
ldos_int[ii,4]=ldos_y_rad;
ldos_int[ii,5]=ldos_y_guide;
ldos_int[ii,6]=ldos_y;
ldos_int[ii,7]=ldos_z_rad;
ldos_int[ii,8]=ldos_z_guide;
ldos_int[ii,9]=ldos_z;
ldos_int[ii,10]=ldos_rad;
ldos_int[ii,11]=ldos_guide;
ldos_int[ii,12]=ldos_total;
ii+=1;
end
```
```julia
# Plot the LDOS's.
using PyPlot
fig = figure("LDOS(r') plot",figsize=(10,5))
ax = fig[:add_subplot](1,2,1)
cp = ax[:plot](rprime, ldos_int[:,3], "b:", linewidth=2.0)
cp = ax[:plot](rprime, ldos_int[:,6], "r:", linewidth=2.0)
cp = ax[:plot](rprime, ldos_int[:,9], "m:", linewidth=2.0)
cp = ax[:plot](rprime, ldos_int[:,12], "k-", linewidth=2.0)
xlabel(L"r\prime(nm)")
ylabel(L"LDOS")
ylim(-0.0,2.2)
xlim(205,420)
legend(["LDOS_x","LDOS_y","LDOS_z","LDOS_total"],loc="right")
ax = fig[:add_subplot](1,2,2)
cp = ax[:plot](rprime, ldos_int[:,10], "b:", linewidth=2.0)
cp = ax[:plot](rprime, ldos_int[:,11], "r:", linewidth=2.0)
cp = ax[:plot](rprime, ldos_int[:,12], "k-", linewidth=2.0)
xlabel(L"r\prime(nm)")
ylabel(L"LDOS")
ylim(-0.0,2.2)
xlim(205,420)
legend(["LDOS_rad","LDOS_guide","LDOS_total"],loc="right")
```
The guided mode contribution seems very small compared to the nanofiber case computed with the same BEM approach. As discussed in other tests, the BEM calculation of the guided mode contribution is not accurate because the dielectric function has a zero imaginary part (no loss) in the frequency domain, which causes an infinitely narrow peak in the LDOS and leads to an unphysical solution.
To avoid this difficulty, we will use another method to calculate the guided mode contribution to the Green's function tensor.
# Calculation of Green's function tensor using BEM
BEM can output the full local field components, so computing the radiative mode contribution to the full Green's function tensor is possible. With the Green's function tensor, one can then calculate the modified decay rates for dipoles oriented along arbitrary directions, including the dipole transitions corresponding to $\sigma_\pm$ and $\pi$ transitions.
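For reference, in the normalisation assumed by the code below (with $G_{0}$ the free-space Green's function at the dipole position, whose imaginary part is $\mathrm{Im}\,G_{0}=\tfrac{2}{3}(\omega/c)^{3}$, and $G_{\mathrm{ind}}$ the waveguide-induced part of the Green's function tensor), the decay rate of a dipole with unit orientation vector $\hat{\mathbf{e}}$ is evaluated as
$$\frac{\Gamma_{\hat{\mathbf{e}}}}{\Gamma_{0}}=1+\frac{\hat{\mathbf{e}}^{\dagger}\,\mathrm{Im}\,G_{\mathrm{ind}}(\mathbf{r}',\mathbf{r}')\,\hat{\mathbf{e}}}{\mathrm{Im}\,G_{0}},$$
while the orientation-averaged rate uses one third of the trace of $\mathrm{Im}\,G_{\mathrm{ind}}$ in the numerator.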
```julia
# Load data for different dipole positions.
rp_BEM=160:10:600;#[170,190,210,230,250,270,290,310,330,350,370,390,410,430,450,470];
lenrp=length(rp_BEM);
lendr=201;
E_dipolex = readdlm("data/dipolex_E_N_1_lam_894_eps_4_0_a_300_b_300_c_6_x_330_y_0_q_0_2_201.dat", header=false);
k_vec=E_dipolex[:,2];
Ex_dx=zeros(Complex{Float64},lendr,lenrp); Ey_dx=zeros(Complex{Float64},lendr,lenrp); Ez_dx=zeros(Complex{Float64},lendr,lenrp);
Ex_dy=zeros(Complex{Float64},lendr,lenrp); Ey_dy=zeros(Complex{Float64},lendr,lenrp); Ez_dy=zeros(Complex{Float64},lendr,lenrp);
Ex_dz=zeros(Complex{Float64},lendr,lenrp); Ey_dz=zeros(Complex{Float64},lendr,lenrp); Ez_dz=zeros(Complex{Float64},lendr,lenrp);
for ii=2:2:(lenrp-1)
E_dipolex = readdlm("data/dipolex_E_N_1_lam_894_eps_4_0.01_a_300_b_300_c_6_x_$(rp_BEM[ii])_y_0_q_0_2_201.dat", header=false); #",rp_list[ii],"
Ex_dx[:,ii]=E_dipolex[:,5]+im*E_dipolex[:,6];
Ey_dx[:,ii]=E_dipolex[:,7]+im*E_dipolex[:,8];
Ez_dx[:,ii]=E_dipolex[:,9]+im*E_dipolex[:,10];
E_dipoley = readdlm("data/dipoley_E_N_1_lam_894_eps_4_0.01_a_300_b_300_c_6_x_$(rp_BEM[ii])_y_0_q_0_2_201.dat", header=false); #",rp_list[ii],"
Ex_dy[:,ii]=E_dipoley[:,5]+im*E_dipoley[:,6];
Ey_dy[:,ii]=E_dipoley[:,7]+im*E_dipoley[:,8];
Ez_dy[:,ii]=E_dipoley[:,9]+im*E_dipoley[:,10];
E_dipolez = readdlm("data/dipolez_E_N_1_lam_894_eps_4_0.01_a_300_b_300_c_6_x_$(rp_BEM[ii])_y_0_q_0_2_201.dat", header=false); #",rp_list[ii],"
Ex_dz[:,ii]=E_dipolez[:,5]+im*E_dipolez[:,6];
Ey_dz[:,ii]=E_dipolez[:,7]+im*E_dipolez[:,8];
Ez_dz[:,ii]=E_dipolez[:,9]+im*E_dipolez[:,10];
end
for ii=1:2:lenrp
E_dipolex = readdlm("data/dipolex_E_N_1_lam_894_eps_4_0_a_300_b_300_c_6_x_$(rp_BEM[ii])_y_0_q_0_2_201.dat", header=false); #",rp_list[ii],"
Ex_dx[:,ii]=E_dipolex[:,5]+im*E_dipolex[:,6];
Ey_dx[:,ii]=E_dipolex[:,7]+im*E_dipolex[:,8];
Ez_dx[:,ii]=E_dipolex[:,9]+im*E_dipolex[:,10];
E_dipoley = readdlm("data/dipoley_E_N_1_lam_894_eps_4_0_a_300_b_300_c_6_x_$(rp_BEM[ii])_y_0_q_0_2_201.dat", header=false); #",rp_list[ii],"
Ex_dy[:,ii]=E_dipoley[:,5]+im*E_dipoley[:,6];
Ey_dy[:,ii]=E_dipoley[:,7]+im*E_dipoley[:,8];
Ez_dy[:,ii]=E_dipoley[:,9]+im*E_dipoley[:,10];
E_dipolez = readdlm("data/dipolez_E_N_1_lam_894_eps_4_0_a_300_b_300_c_6_x_$(rp_BEM[ii])_y_0_q_0_2_201.dat", header=false); #",rp_list[ii],"
Ex_dz[:,ii]=E_dipolez[:,5]+im*E_dipolez[:,6];
Ey_dz[:,ii]=E_dipolez[:,7]+im*E_dipolez[:,8];
Ez_dz[:,ii]=E_dipolez[:,9]+im*E_dipolez[:,10];
end
# Calculate the diagonal elements of the Green's function tensor from the radiation mode contribution.
c=2.99792458e8;
au=1.72e7; # This is the atomic unit in the CGS-units: $q/a_0^2$ statvolts/cm. In SI units, it becomes $e/(4π\varepsilon_0a_0^2)$ = 5.2e11 V/m.
lambda0=0.895e-6;
ω=2.0*pi*c/lambda0;
GFT_rad_ind=zeros(Complex{Float64},3,3,lenrp);
Gxx_rad_ind=zeros(Complex{Float64},lenrp);
Gxy_rad_ind=zeros(Complex{Float64},lenrp);
Gxz_rad_ind=zeros(Complex{Float64},lenrp);
Gyx_rad_ind=zeros(Complex{Float64},lenrp);
Gyy_rad_ind=zeros(Complex{Float64},lenrp);
Gyz_rad_ind=zeros(Complex{Float64},lenrp);
Gzx_rad_ind=zeros(Complex{Float64},lenrp);
Gzy_rad_ind=zeros(Complex{Float64},lenrp);
Gzz_rad_ind=zeros(Complex{Float64},lenrp);
G0=Inf + 2.0/3.0*(ω/c)^3*im; # The real part of the free-space Green's function diverges at the source point; only the imaginary part is used below.
gamma_rad_BEM_rp_average=zeros(lenrp);
gamma_rad_BEM_rp_sigmap=zeros(lenrp);
gamma_rad_BEM_rp_sigmam=zeros(lenrp);
gamma_rad_BEM_rp_pi=zeros(lenrp);
# Define the unitary dipole orientation vector.
e_dipole_sigmap=[-1.;-1.0*im;0]/sqrt(2);
e_dipole_sigmam=[1.; -1.0*im;0]/sqrt(2);
e_dipole_pi=[0.; 0.; 1.];
using NumericalIntegration
for ii=1:lenrp
Gxx_rad_ind[ii]=integrate(k_vec[1:breakpoint],Ex_dx[1:breakpoint,ii],Trapezoidal())*(ω/c)^3/pi^2*au*4.0/3.;#G0+
Gyy_rad_ind[ii]=integrate(k_vec[1:breakpoint],Ey_dy[1:breakpoint,ii],Trapezoidal())*(ω/c)^3/pi^2*au*4.0/3.;#G0+
Gzz_rad_ind[ii]=integrate(k_vec[1:breakpoint],Ez_dz[1:breakpoint,ii],Trapezoidal())*(ω/c)^3/pi^2*au*4.0/3.;#G0+
Gyx_rad_ind[ii]=integrate(k_vec[1:breakpoint],Ey_dx[1:breakpoint,ii],Trapezoidal())*(ω/c)^3/pi^2*au*4.0/3.;
Gzx_rad_ind[ii]=integrate(k_vec[1:breakpoint],Ez_dx[1:breakpoint,ii],Trapezoidal())*(ω/c)^3/pi^2*au*4.0/3.;
Gxy_rad_ind[ii]=integrate(k_vec[1:breakpoint],Ex_dy[1:breakpoint,ii],Trapezoidal())*(ω/c)^3/pi^2*au*4.0/3.;
Gzy_rad_ind[ii]=integrate(k_vec[1:breakpoint],Ez_dy[1:breakpoint,ii],Trapezoidal())*(ω/c)^3/pi^2*au*4.0/3.;
Gxz_rad_ind[ii]=integrate(k_vec[1:breakpoint],Ex_dz[1:breakpoint,ii],Trapezoidal())*(ω/c)^3/pi^2*au*4.0/3.;
Gyz_rad_ind[ii]=integrate(k_vec[1:breakpoint],Ey_dz[1:breakpoint,ii],Trapezoidal())*(ω/c)^3/pi^2*au*4.0/3.;
    # Assemble the 3x3 Green's function tensor for this dipole position.
    GFT_rad_ind[:,:,ii]=[Gxx_rad_ind[ii] Gxy_rad_ind[ii] Gxz_rad_ind[ii];
        Gyx_rad_ind[ii] Gyy_rad_ind[ii] Gyz_rad_ind[ii];
        Gzx_rad_ind[ii] Gzy_rad_ind[ii] Gzz_rad_ind[ii]];
end
# Calculate the relative decay rates (orientation-averaged and for the sigma+/- and pi dipoles) at each dipole position along y=0.
gamma0=imag(G0);
for ii =1:lenrp
gamma_rad_BEM_rp_average[ii]=1+trace(imag(GFT_rad_ind[:,:,ii]))/gamma0/3.;
gamma_rad_BEM_rp_sigmap[ii]=1+real((e_dipole_sigmap'*imag(GFT_rad_ind[:,:,ii])*e_dipole_sigmap)/gamma0)[1];
gamma_rad_BEM_rp_sigmam[ii]=1+real((e_dipole_sigmam'*imag(GFT_rad_ind[:,:,ii])*e_dipole_sigmam)/gamma0)[1];
gamma_rad_BEM_rp_pi[ii]=1+real((e_dipole_pi'*imag(GFT_rad_ind[:,:,ii])*e_dipole_pi)/gamma0)[1];
end
# Recalculate the diagonal GFT elements for a dipole at x=330 nm, y=0 using the lossless data set (eps = 4 + 0i).
E_dipolex = readdlm(join(["data/dipolex_E_N_1_lam_894_eps_4_0_a_300_b_300_c_6_x_","330","_y_0_q_0_2_201.dat"]), header=false)
Ex_dx_r0=E_dipolex[:,5]+im*E_dipolex[:,6];
Ey_dx_r0=E_dipolex[:,7]+im*E_dipolex[:,8];
Ez_dx_r0=E_dipolex[:,9]+im*E_dipolex[:,10];
E_dipoley = readdlm(join(["data/dipoley_E_N_1_lam_894_eps_4_0_a_300_b_300_c_6_x_","330","_y_0_q_0_2_201.dat"]), header=false)
Ex_dy_r0=E_dipoley[:,5]+im*E_dipoley[:,6];
Ey_dy_r0=E_dipoley[:,7]+im*E_dipoley[:,8];
Ez_dy_r0=E_dipoley[:,9]+im*E_dipoley[:,10];
E_dipolez = readdlm(join(["data/dipolez_E_N_1_lam_894_eps_4_0_a_300_b_300_c_6_x_","330","_y_0_q_0_2_201.dat"]), header=false)
Ex_dz_r0=E_dipolez[:,5]+im*E_dipolez[:,6];
Ey_dz_r0=E_dipolez[:,7]+im*E_dipolez[:,8];
Ez_dz_r0=E_dipolez[:,9]+im*E_dipolez[:,10];
Gxx_rad_r0=integrate(k_vec[1:breakpoint],Ex_dx_r0[1:breakpoint],Trapezoidal())*(ω/c)^3/pi^2*au*4.0/3.;
Gyy_rad_r0=integrate(k_vec[1:breakpoint],Ey_dy_r0[1:breakpoint],Trapezoidal())*(ω/c)^3/pi^2*au*4.0/3.;
Gzz_rad_r0=integrate(k_vec[1:breakpoint],Ez_dz_r0[1:breakpoint],Trapezoidal())*(ω/c)^3/pi^2*au*4.0/3.;
gamma_rad_BEM_r0=imag(Gxx_rad_r0+Gyy_rad_r0+Gzz_rad_r0)/3/gamma0;
# Plot out gamma_rad as a function of dipole position.
figure(figsize=(16,3));
subplot(1,2,1)
plot((1:breakpoint)/100.,real(Ex_dx[1:breakpoint,4]),"r-")
plot((1:breakpoint)/100.,imag(Ex_dx[1:breakpoint,4]),"b-")
ylabel(L"E(\beta)")
xlabel(L"\beta/k_0")
legend(["Real","Imag"],loc="upper left")
subplot(1,2,2)
a=300.;
#plot(rp0_test[1,:]/1.e-9, 1+sum(gamma_rad,2), "r-", linewidth=2.0)
plot(rp_BEM-a/2,real(gamma_rad_BEM_rp_average),"b-");
plot(330-a/2,real(gamma_rad_BEM_r0)+1.,"mo");
plot(rp_BEM-a/2,gamma_rad_BEM_rp_sigmap,"r--");
plot(rp_BEM-a/2,gamma_rad_BEM_rp_sigmam,"m.-");
plot(rp_BEM-a/2,gamma_rad_BEM_rp_pi,"k.");
xlabel("(r'-a/2)/nm");
ylabel(L"\Gamma_{rad}^{BEM}/\Gamma_0");
#ylim([0,1.2])
legend(["average",L"average_{no\,\,loss}",L"\sigma_+",L"\sigma_-",L"\pi"],loc="upper right",fontsize=12);
#gamma_rad_BEM_rp_sigmap
```
The first figure (on the left) shows how the real and imaginary parts of the $E_x$ component vary as a function of $\beta=k_z$ in the radiative-mode regime.
The second figure (on the right) shows the decay rates due to the non-guided modes, decomposed into the $\sigma_\pm$ and $\pi$ transitions, together with their orientation average.
In calculating the averaged non-guided mode induced decay rates, we define
$$\begin{align}
\frac{\Gamma_{rad}^{ave}}{\Gamma_0} &= 1+ \frac{\sum_{i=x,y,z}\mathrm{Im}\left[\mathbf{e}_i^*\cdot \mathbf{G}_{ind,rad}(\mathbf{r}',\mathbf{r}')\cdot \mathbf{e}_i\right]}{\sum_{i=x,y,z} \mathrm{Im}\left[\mathbf{e}_i^*\cdot \mathbf{G}_0(\mathbf{r}',\mathbf{r}')\cdot \mathbf{e}_i\right]}\\
&=1+ \frac{\sum_{i=x,y,z}\mathrm{Im}\left[\mathbf{e}_i^*\cdot \mathbf{G}_{ind,rad}(\mathbf{r}',\mathbf{r}')\cdot \mathbf{e}_i\right]}{3\mathrm{Im}\left[G_0(\mathbf{r}',\mathbf{r}')\right]}\\
&= 1+ \frac{\mathrm{Tr}\left\{\mathrm{Im}\left[ \mathbf{G}_{ind,rad}(\mathbf{r}',\mathbf{r}')\right]\right\}}{3\mathrm{Im}\left[G_0(\mathbf{r}',\mathbf{r}')\right]}
\end{align}$$
with the free-space Green's function $G_0(\mathbf{r}',\mathbf{r}';\omega)=\frac{2}{3}k_0^3$ (in CGS units, so that $\mathbf{G}_0=G_0\mathbb{1}$), and the waveguide-induced Green's function tensor elements obtained numerically as
$$ G_{ind,rad}^{ij}(\mathbf{r}',\mathbf{r}')=\frac{2k_0^2\,\mathrm{a.u.}}{3\pi^2}\int_{-k_0}^{k_0} d\beta\, E_j^i(\mathbf{r}')=\frac{4k_0^2\,\mathrm{a.u.}}{3\pi^2}\int_{0}^{k_0} d\beta\, E_j^i(\mathbf{r}'),$$
calculated via BEM by placing a unit dipole (in atomic units) oriented along the $j$ direction and measuring the $i$-th electric field component at the dipole position.
We have used $\varepsilon=4+0.01i$ (with a small loss) and $\varepsilon=4$ (without loss) to calculate the Green's function tensors and the corresponding orientation-averaged decay-rate contributions.
The two cases do not show a noticeable difference at the single sampled position, shown as the purple dot in the figure.
In calculating the corresponding contributions from the $\sigma_\pm$ and $\pi$ transitions of the atoms, we have defined
$$\begin{align}
\frac{\Gamma_{rad}^{\mathbf{e}_q}}{\Gamma_0} &= 1+ \frac{\mathrm{Im}\left[\mathbf{e}_q^*\cdot \mathbf{G}_{ind,rad}(\mathbf{r}',\mathbf{r}')\cdot \mathbf{e}_q\right]}{ \mathrm{Im}\left[\mathbf{e}_q^*\cdot \mathbf{G}_0(\mathbf{r}',\mathbf{r}')\cdot \mathbf{e}_q\right]}
=1+ \frac{\mathrm{Im}\left[\mathbf{e}_q^*\cdot \mathbf{G}_{ind,rad}(\mathbf{r}',\mathbf{r}')\cdot \mathbf{e}_q\right]}{\mathrm{Im}\left[G_0(\mathbf{r}',\mathbf{r}')\right]},
\end{align}$$
where the three orthogonal dipole transition bases
$$\begin{align}
\mathbf{e}_\pm &=\mp \frac{\mathbf{e}_{\tilde{x}}\pm i\mathbf{e}_{\tilde{y}}}{\sqrt{2}}\\
\mathbf{e}_0 &=\mathbf{e}_{\tilde{z}}
\end{align}$$
correspond to the $\sigma_\pm$ and $\pi$ transitions of the atoms.
These basis vectors depend on the choice of quantization axis; in the calculation above we take the $z$-axis (the waveguide axis) as the quantization axis, so that $\mathbf{e}_{\tilde{x}}=\mathbf{e}_x$, $\mathbf{e}_{\tilde{y}}=\mathbf{e}_y$ and $\mathbf{e}_{\tilde{z}}=\mathbf{e}_z$.
As the figure shows, when the atom is placed within roughly $200$ nm of the waveguide surface, the different dipole transitions exhibit noticeably different decay rates.
To wrap up, the total decay rate for a dipole orientation $\mathbf{e}_d$ satisfies
$$\begin{align}
\Gamma \propto \mathbf{e}_d^*\cdot \mathrm{Im}\left[\mathbf{G}(\mathbf{r}',\mathbf{r}')\right]\cdot \mathbf{e}_d,
\end{align}$$
where $$\mathbf{G}=\mathbf{G}_{hom}+\mathbf{G}_{inhom}=\mathbf{G}_{rad,free-space}+\mathbf{G}_{rad,ind}+\mathbf{G}_{gyd}$$
with $\mathbf{G}_{rad,free-space}=\mathbf{G}_0$ and $\mathrm{Im}\left[ \mathbf{G}_0(\mathbf{r}',\mathbf{r}')\right]=\frac{2}{3}k_0^3\,\mathbb{1}$.
In the end, the total decay rate decomposes as
$$\begin{align}\Gamma = \Gamma_{rad}+\Gamma_{gyd}=\Gamma_{rad,free-space}+\Gamma_{rad,ind}+\Gamma_{gyd}\end{align}$$
with $\Gamma_{rad,free-space}=\Gamma_0$.
```julia
# Export data to a MAT file.
using MAT
matopen("data/Julia_swg_GFT_decayrates_rad_D1.mat", "w") do matfile
write(matfile,"omega0",ω)
write(matfile,"rp_BEM",collect(rp_BEM))
write(matfile,"gamma0",gamma0)
write(matfile,"e_dipole_sigmap",e_dipole_sigmap)
write(matfile,"e_dipole_sigmam",e_dipole_sigmam)
write(matfile,"e_dipole_pi",e_dipole_pi)
write(matfile,"gamma_rad_BEM_rp_average",gamma_rad_BEM_rp_average)
write(matfile,"gamma_rad_BEM_rp_sigmap",gamma_rad_BEM_rp_sigmap)
write(matfile,"gamma_rad_BEM_rp_sigmam",gamma_rad_BEM_rp_sigmam)
write(matfile,"gamma_rad_BEM_rp_pi",gamma_rad_BEM_rp_pi)
write(matfile,"a",a)
write(matfile,"GFT_rad_rp",GFT_rad_ind)
write(matfile,"Gxx_rad_rp",Gxx_rad_ind)
write(matfile,"Gxy_rad_rp",Gxy_rad_ind)
write(matfile,"Gxz_rad_rp",Gxz_rad_ind)
write(matfile,"Gyx_rad_rp",Gyx_rad_ind)
write(matfile,"Gyy_rad_rp",Gyy_rad_ind)
write(matfile,"Gyz_rad_rp",Gyz_rad_ind)
write(matfile,"Gzx_rad_rp",Gzx_rad_ind)
write(matfile,"Gzy_rad_rp",Gzy_rad_ind)
write(matfile,"Gzz_rad_rp",Gzz_rad_ind)
end
```
# Mousai: An Open-Source General Purpose Harmonic Balance Solver
Theory and Algorithm
November, 2019
## Overview
A wide array of contemporary problems can be modeled by nonlinear ordinary differential equations whose solutions can be represented by Fourier series:
* **Limit cycle oscillation of wings/blades**
* Flapping motion of birds/insects/ornithopters
* Flagella (threadlike cellular structures that enable bacteria and other microorganisms to swim)
* Shaft rotation, especially including rubbing or nonlinear bearing contacts
* **Engines**
* Radio/sonar/radar electronics
* Wireless power transmission
* Power converters
* Boat/ship motions and interactions
* **Cardio systems** (heart/arteries/veins)
* Ultrasonic systems traversing nonlinear media
* Responses of composite materials or materials with cracks
* Near buckling behavior of vibrating columns
* Nonlinearities in power systems
* **Energy harvesting systems**
* **Wind turbines**
* Radio Frequency Integrated Circuits
* **Any system with nonlinear coatings/friction damping, air damping, etc.**
These can all be observed in a quick literature search on 'Harmonic Balance'.
## Theory:
### Linear Solution
- Most dynamic systems can be modeled in *state-space* form as a first-order differential equation
\begin{equation}
\dot{\mathbf{z}}(t)=\mathbf{f}(\mathbf{z}(t),\mathbf{u}(t))
\end{equation}
Mousai is designed to solve the cases where $\mathbf{u}(t)$ can be represented by
\begin{equation}
\mathbf{u}(t)=A_0 + \sum_{n=0}^{\infty}
A_{n}\sin(n \times \omega t) + B_n\cos(n \times \omega t)
\end{equation}
In practice this is represented in the form:
\begin{equation}
\mathbf{u}(t)=\sum_{n=-\infty}^{\infty}
\mathbf{U}_{n}e^{j(n \times \omega t)}
\end{equation}
where $U_n = \frac{A_n}{2}-j\frac{B_n}{2}$. The math is far simpler in this form, if less intuitive for the analyst, and it is closely related to the discrete Fourier transform, which makes it much better suited to computation.
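
As a quick numerical sanity check of this convention (illustrative only, not part of Mousai), the snippet below verifies that $A_n\cos(n\omega t)+B_n\sin(n\omega t)$ equals $U_n e^{jn\omega t}+\bar{U}_n e^{-jn\omega t}$ with $U_n=\frac{A_n}{2}-j\frac{B_n}{2}$; the numbers are arbitrary.

```python
# Illustrative check of the phasor convention (not part of Mousai): with
# U = A/2 - j*B/2, U*exp(j n w t) + conj(U)*exp(-j n w t) reproduces
# A*cos(n w t) + B*sin(n w t).  The numerical values are arbitrary.
import numpy as np

A, B, n, w = 0.7, -1.3, 2, 3.1
t = np.linspace(0, 2 * np.pi / w, 50)
U = A / 2 - 1j * B / 2
real_form = A * np.cos(n * w * t) + B * np.sin(n * w * t)
phasor_form = (U * np.exp(1j * n * w * t) + np.conj(U) * np.exp(-1j * n * w * t)).real
print(np.allclose(real_form, phasor_form))  # True
```
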
The solution is presumed to be of the form:
\begin{equation}
\mathbf{z}(t)=\sum_{n=-\infty}^{\infty}
\mathbf{Z}_{n}e^{j(n \times \omega t)}
\end{equation}
In practice this holds true even if the governing equation is nonlinear.
When the model is linear, we can use superposition to solve for one term at a time. This solution is very well known (See <cite data-cite="4722060/P65RNMFB"></cite>, <cite data-cite="4722060/FCQQEA5V"></cite> and <cite data-cite="4722060/WJ9D3EXN"></cite>).
For a linear system, the state equation reduces to
\begin{equation}
\dot{\mathbf{z}}(t)=A \mathbf{z}(t) + B \mathbf{u}(t)
\end{equation}
where
\begin{equation}
A = \frac{\partial\mathbf{f}(\mathbf{z}(t),\mathbf{u}(t))}{\partial\mathbf{z}(t)},\qquad
B = \frac{\partial\mathbf{f}(\mathbf{z}(t),\mathbf{u}(t))}{\partial\mathbf{u}(t)}\end{equation}
Taking the Fourier transform, this is
\begin{equation}j\omega\mathbf{Z}(\omega)=A\mathbf{Z}(\omega)+B\mathbf{U}(\omega)\end{equation}
The solution is:
\begin{equation}
\label{eq:linsoln}
\mathbf{Z}(\omega) = \left(Ij\omega-A\right)^{-1}B\mathbf{U}(\omega)
\end{equation}
where the magnitudes and phases of the elements of $\mathbf{Z}$ provide the amplitudes and phases of the harmonic response of each state at the frequency $\omega$.
Thus, each value $\mathbf{Z}_n(n\omega)$ is
\begin{equation}
\mathbf{Z}_n(n\omega) = \left(Ijn\omega-A\right)^{-1}B\mathbf{U}_n(n\omega)
\end{equation}
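
A minimal numpy check of this frequency-domain solution, using the damped linear oscillator $\ddot{x}+0.1\dot{x}+x=\sin(\omega t)$ as an assumed example (the matrices below are illustrative, not Mousai output):

```python
# Minimal numpy check of Z = (I*j*w - A)^{-1} B U for a single harmonic,
# using the damped linear oscillator x'' + 0.1 x' + x = sin(w t).
import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, -0.1]])      # state-space matrix for z = [x, v]
B = np.array([[0.0],
              [1.0]])
omega = 0.7
U1 = np.array([[1 / (2j)]])       # sin(w t) -> U_1 = -j/2 in the convention above
Z1 = np.linalg.solve(1j * omega * np.eye(2) - A, B @ U1)
print(2 * np.abs(Z1[0, 0]))       # displacement amplitude of the linear response
```

The magnitude and phase of `Z1[0, 0]` give the amplitude and phase of the displacement harmonic at this driving frequency, as stated above.
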
---
When a model is nonlinear, closed-form solutions do not typically exist, so numerical methods must be used to estimate the solution.
Numerically finding the oscillatory response, after dissipation of the transient response, requires **long** time marching.
- Without an energy dissipation term in the model this is not feasible, because transients do not attenuate with time. When it appears to work, it is because numerical dissipation enables it; that numerical dissipation is not physical, so the solution represents a system with an equivalent amount of dissipation, not the non-dissipative system being modeled.
- With dissipation, simulations require tens, hundreds, or thousands of cycles, so tens of thousands of time steps may be necessary to reach a time at which the transient is no longer noticeable.
- Time marching can also be inaccurate or unstable: numerical energy generation or loss can substantially affect the solution. (A brief time-marching illustration follows below.)
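
To illustrate the cost, here is a brute-force time-marching sketch for the Duffing oscillator used later in this notebook; `scipy.integrate.solve_ivp` and the cycle count below are illustrative choices, not part of Mousai.

```python
# Brute-force time marching for comparison (illustrative assumptions): many
# drive cycles must be integrated before the transient becomes negligible.
import numpy as np
from scipy.integrate import solve_ivp

omega = 0.7

def duffing(t, z):
    x, v = z
    return [v, -x - 0.1 * x**3 - 0.1 * v + np.sin(omega * t)]

n_cycles = 200                                   # enough cycles to pass the transient
t_end = n_cycles * 2 * np.pi / omega
sol = solve_ivp(duffing, (0, t_end), [0.0, 0.0], max_step=0.05)

last_cycle = sol.t > t_end - 2 * np.pi / omega   # keep only the final drive cycle
x_steady = sol.y[0][last_cycle]
print(0.5 * (x_steady.max() - x_steady.min()))   # approximate steady-state amplitude
```

By contrast, the harmonic balance solution developed below reaches the steady state directly, without integrating through the transient.
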
For a linear system in the frequency domain this is
\begin{equation}j\omega\mathbf{Z}(\omega)=\mathbf{f}(\mathbf{Z}(\omega),\mathbf{U}(\omega))\end{equation}
\begin{equation}j\omega\mathbf{Z}(\omega)=A\mathbf{Z}(\omega)+B\mathbf{U}(\omega)\end{equation}
where
\begin{equation}A = \frac{\partial \mathbf{f}(\mathbf{Z}(\omega),\mathbf{U}(\omega))}{\partial\mathbf{Z}(\omega)},\qquad
B = \frac{\partial \mathbf{f}(\mathbf{Z}(\omega),\mathbf{U}(\omega))}{\partial\mathbf{U}(\omega)}\end{equation}
are constant matrices.
The solution is:
\begin{equation}\mathbf{Z}(\omega) = \left(Ij\omega-A\right)^{-1}B\mathbf{U}(\omega)\end{equation}
where the magnitudes and phases of the elements of $\mathbf{Z}$ provide the amplitudes and phases of the harmonic response of each state at the frequency $\omega$.
### Nonlinear solution
- For a nonlinear system in the frequency domain we assume a Fourier series solution
\begin{equation}\mathbf{z}(t)=\lim_{N\to\infty}\sum_{n=-N}^{N}\mathbf{Z}_n e^{j n \omega t}\end{equation}
- $N=1$ for a single harmonic. $n=0$ is the constant term.
- This can be substituted into the governing equation to find $\dot{\mathbf{z}}(t)$:
\begin{equation}\dot{\mathbf{z}}(t)=\mathbf{f}(\mathbf{z}(t),\mathbf{u}(t))\end{equation}
    - In practice this is a function call to a finite element package, a CFD code, a Matlab function, or whatever your solver uses to obtain state derivatives
- We can also find $\dot{\mathbf{z}}(t)$ from the derivative of the Fourier Series:
\begin{equation}\dot{\mathbf{z}}(t)=\lim_{N\to\infty}\sum_{n=-N}^{N}j n \omega\mathbf{Z}_n e^{j n \omega t}\end{equation}
- The difference between these methods is zero when $\mathbf{Z}_n$ are correct.
\begin{equation}\mathbf{0} \approx\sum_{n=-N}^{N}j n\omega \mathbf{Z}_n e^{j n \omega t}-\mathbf{f}\left(\sum_{n=-N}^{N}\mathbf{Z}_n e^{j n \omega t},\mathbf{u}(t)\right)\end{equation}
- These operations are wrapped inside a function that returns this residual error.
- That function is handed to a Newton-Krylov nonlinear algebraic solver; a minimal sketch of this residual construction is shown below.
- Mousai can call any solver in the SciPy family, with the ability to pass parameters through to the solver *and* to the external derivative evaluator.
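
The sketch below illustrates the idea with a single-oscillator harmonic-balance residual built by time collocation and handed to SciPy's `newton_krylov`. It is a simplified stand-in, not Mousai's internal implementation, and the names (`hb_residual`, `n_harm`, ...) are invented for this illustration.

```python
# Minimal harmonic-balance sketch (not Mousai's internals): build a
# time-collocation residual for the Duffing oscillator and hand it to
# SciPy's Newton-Krylov solver.
import numpy as np
from scipy.optimize import newton_krylov

omega, n_harm = 1.2, 3
n_t = 2 * n_harm + 1                      # collocation points over one period
t = np.arange(n_t) * 2 * np.pi / (omega * n_t)
k = np.arange(1, n_harm + 1)

# Real Fourier basis [1, cos(k w t), sin(k w t)] and its time derivative,
# both evaluated at the collocation times.
B = np.hstack([np.ones((n_t, 1)),
               np.cos(np.outer(t, k * omega)),
               np.sin(np.outer(t, k * omega))])
dB = np.hstack([np.zeros((n_t, 1)),
                -np.sin(np.outer(t, k * omega)) * (k * omega),
                np.cos(np.outer(t, k * omega)) * (k * omega)])

def duffing(z, ti):
    """State derivative of the forced Duffing oscillator, z = [x, v]."""
    x, v = z
    return np.array([v, -x - 0.1 * x**3 - 0.1 * v + np.sin(omega * ti)])

def hb_residual(c):
    C = c.reshape(2, n_t).T               # one column of Fourier coefficients per state
    z = B @ C                             # states at the collocation times
    zdot_series = dB @ C                  # derivative of the Fourier ansatz
    zdot_state = np.array([duffing(z[i], t[i]) for i in range(n_t)])
    return (zdot_series - zdot_state).ravel()

coeffs = newton_krylov(hb_residual, np.zeros(2 * n_t), f_tol=1e-10)
x_amp = np.hypot(coeffs[1], coeffs[1 + n_harm])   # fundamental amplitude of x
print(x_amp)
```

Near resonance or on the upper branch of the Duffing response, a nonzero starting guess (for example, the solution found at a nearby frequency) is usually needed; the frequency sweeps below do exactly that through Mousai's `x0` argument.
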
## Examples:
### Duffing Oscillator
\begin{equation}\ddot{x}+0.1\dot{x}+x+0.1 x^3=\sin(\omega t)\end{equation}
```python
# Imports needed by this and the following cells.
import numpy as np
import matplotlib.pyplot as plt
from numpy import sin
import mousai as ms


# Define our function (Python)
def duff_osc_ss(x, params):
omega = params['omega']
t = params['cur_time']
xd = np.array([[x[1]],
[-x[0] - 0.1 * x[0]**3 - 0.1 * x[1] + 1 * sin(omega * t)]])
return xd
```
```python
# Arguments are name of derivative function, number of states, driving frequency,
# form of the equation, and number of harmonics
t, x, e, amps, phases = ms.hb_time(duff_osc_ss, num_variables=2, omega=.1,
eqform='first_order', num_harmonics=5)
print('Displacement amplitude is ', amps[0])
print('Velocity amplitude is ', amps[1])
```
Displacement amplitude is 0.9469563546008394
Velocity amplitude is 0.09469563544416415
#### Mousai can easily recreate the near-continuous response
````python
time, xc = ms.time_history(t, x)
````
```python
def pltcont():
time, xc = ms.time_history(t, x)
disp_plot, _ = plt.plot(time, xc.T[:, 0], t,
x.T[:, 0], '*b', label='Displacement')
vel_plot, _ = plt.plot(time, xc.T[:, 1], 'r',
t, x.T[:, 1], '*r', label='Velocity')
plt.legend(handles=[disp_plot, vel_plot])
plt.xlabel('Time (sec)')
plt.title('Response of Duffing Oscillator at 0.0159 rad/sec')
plt.ylabel('Response')
plt.legend
plt.grid(True)
```
```python
fig=plt.figure()
ax=fig.add_subplot(111)
time, xc = ms.time_history(t, x)
disp_plot, _ = ax.plot(time, xc.T[:, 0], t,
x.T[:, 0], '*b', label='Displacement')
vel_plot, _ = ax.plot(time, xc.T[:, 1], 'r',
t, x.T[:, 1], '*r', label='Velocity')
ax.legend(handles=[disp_plot, vel_plot])
ax.set_xlabel('Time (sec)')
ax.set_title('Response of Duffing Oscillator at 0.0159 rad/sec')
ax.set_ylabel('Response')
ax.legend
ax.grid(True)
```
```python
pltcont()# abbreviated plotting function
```
```python
time, xc = ms.time_history(t, x)
disp_plot, _ = plt.plot(time, xc.T[:, 0], t,
x.T[:, 0], '*b', label='Displacement')
vel_plot, _ = plt.plot(time, xc.T[:, 1], 'r',
t, x.T[:, 1], '*r', label='Velocity')
plt.legend(handles=[disp_plot, vel_plot])
plt.xlabel('Time (sec)')
plt.title('Response of Duffing Oscillator at 0.0159 rad/sec')
plt.ylabel('Response')
plt.legend
plt.grid(True)
```
```python
omega = np.arange(0, 3, 1 / 200) + 1 / 200
amp = np.zeros_like(omega)
amp[:] = np.nan
t, x, e, amps, phases = ms.hb_time(duff_osc_ss, num_variables=2,
omega=1 / 200, eqform='first_order', num_harmonics=1)
for i, freq in enumerate(omega):
# Here we try to obtain solutions, but if they don't work,
# we ignore them by inserting `np.nan` values.
    x = x - np.average(x)
try:
        t, x, e, amps, phases = ms.hb_time(duff_osc_ss, x0=x, omega=freq,
                                           eqform='first_order', num_harmonics=1)
amp[i] = amps[0]
except:
amp[i] = np.nan
if np.isnan(amp[i]):
break
plt.plot(omega, amp)
```
#### Let's sweep through driving frequencies to find a frequency response function
```python
omegal = np.arange(3, .03, -1 / 200) + 1 / 200
ampl = np.zeros_like(omegal)
ampl[:] = np.nan
t, x, e, amps, phases = ms.hb_time(duff_osc_ss, num_variables=2,
omega=3, eqform='first_order', num_harmonics=1)
for i, freq in enumerate(omegal):
# Here we try to obtain solutions, but if they don't work,
# we ignore them by inserting `np.nan` values.
x = x - np.average(x)
try:
t, x, e, amps, phases = ms.hb_time(duff_osc_ss, x0=x,
omega=freq, eqform='first_order', num_harmonics=1)
ampl[i] = amps[0]
except:
ampl[i] = np.nan
if np.isnan(ampl[i]):
break
```
```python
plt.plot(omega,amp, label='Up sweep')
plt.plot(omegal,ampl, label='Down sweep')
plt.legend()
plt.title('Amplitude versus frequency for Duffing Oscillator')
plt.xlabel('Driving frequency $\\omega$')
plt.ylabel('Amplitude')
plt.grid()
```
### Two degree of freedom system
$$\begin{bmatrix}1&0\\0&1\end{bmatrix}\begin{bmatrix}\ddot{x}_1\\ \ddot{x}_2\end{bmatrix}+\begin{bmatrix}2&-1 \\-1&2\end{bmatrix}\begin{bmatrix}{x}_1\\{x}_2\end{bmatrix}+\begin{bmatrix}\alpha x_{1}^{3}\\0\end{bmatrix}=\begin{bmatrix}0 \\A \sin(\omega t)\end{bmatrix}$$
```python
def two_dof_demo(x, params):
omega = params['omega']
t = params['cur_time']
force_amplitude = params['force_amplitude']
alpha = params['alpha']
    # The following could call an external code to obtain the state derivatives.
    # Note that, as written, the scalar forcing term is broadcast onto every state
    # equation; to force only the second mass (as in the equation above), add it
    # inside the last row instead.
    xd = np.array([[x[1]],
                   [-2 * x[0] - alpha * x[0]**3 + x[2]],
                   [x[3]],
                   [-2 * x[2] + x[0]]] + force_amplitude * np.sin(omega * t))
return xd
```
#### Let's find a response.
```python
parameters = {'force_amplitude': 0.2}
parameters['alpha'] = 0.4
t, x, e, amps, phases = ms.hb_time(two_dof_demo, num_variables=4,
omega=1.2, eqform='first_order', params=parameters)
amps
```
array([0.86696762, 0.89484597, 0.99030411, 1.04097851])
#### Or a parametric study of response amplitude versus nonlinearity.
```python
alpha = np.linspace(-1, .45, 2000)
amp = np.zeros_like(alpha)
for i, alphai in enumerate(alpha):
parameters['alpha'] = alphai
t, x, e, amps, phases = ms.hb_time(two_dof_demo, num_variables=4, omega=1.2,
eqform='first_order', params=parameters)
amp[i] = amps[0]
```
```python
plt.plot(alpha,amp)
plt.title('Amplitude of $x_1$ versus $\\alpha$')
plt.ylabel('Amplitude of $x_1$')
plt.xlabel('$\\alpha$')
plt.grid()
```
### Two degree of freedom system with Coulomb Damping
$$\begin{bmatrix}1&0\\0&1\end{bmatrix}\begin{bmatrix}\ddot{x}_1\\ \ddot{x}_2\end{bmatrix}+\begin{bmatrix}2&-1 \\-1&2\end{bmatrix}\begin{bmatrix}{x}_1\\{x}_2\end{bmatrix}+\begin{bmatrix}\mu |\dot{x}_{1}|\\0\end{bmatrix}=\begin{bmatrix}0 \\A \sin(\omega t)\end{bmatrix}$$
```python
def two_dof_coulomb(x, params):
omega = params['omega']
t = params['cur_time']
force_amplitude = params['force_amplitude']
mu = params['mu']
    # The following could call an external code to obtain the state derivatives.
    # Note that, as written, the scalar forcing term is broadcast onto every state
    # equation; to force only the second mass (as in the equation above), add it
    # inside the last row instead.
    xd = np.array([[x[1]],
                   [-2 * x[0] - mu * np.abs(x[1]) + x[2]],
                   [x[3]],
                   [-2 * x[2] + x[0]]] + force_amplitude * np.sin(omega * t))
return xd
```
```python
parameters = {'force_amplitude': 0.2}
parameters['mu'] = 0.1
t, x, e, amps, phases = ms.hb_time(two_dof_coulomb, num_variables=4,
omega=1.2, eqform='first_order', params=parameters)
amps
```
array([0.68916938, 0.68228248, 0.67299991, 0.66065019])
```python
mu = np.linspace(0, 1.0, 200)
amp = np.zeros_like(mu)
for i, mui in enumerate(mu):
parameters['mu'] = mui
t, x, e, amps, phases = ms.hb_time(two_dof_coulomb, num_variables=4, omega=1.2,
eqform='first_order', num_harmonics=3, params=parameters)
amp[i] = amps[0]
```
#### Too much Coulomb friction can increase the response.
* Did you know that?
* The damping also shifts the resonance.
```python
plt.plot(mu,amp)
plt.title('Amplitude of $x_1$ versus $\\mu$')
plt.ylabel('Amplitude of $x_1$')
plt.xlabel('$\\mu$')
plt.grid()
```
### But can I solve an equation in one line? Yes!!!
Damped Duffing oscillator in one command.
```python
out = ms.hb_time(lambda x, v,
params: np.array([[-x - .1 * x**3 - .1 * v + 1 *
sin(params['omega'] * params['cur_time'])]]),
num_variables=1, omega=.7, num_harmonics=1)
out[3][0]
```
1.4779630014433971
OK - that's a bit obtuse. I wouldn't do that normally, but Mousai can.
## How to get this?
* Install Scientific Python from [SciPy.org](https://www.scipy.org/install.html)
* AFRL: See your tech support to get the Enthought distribution installed
* See the Mousai [documents for](https://josephcslater.github.io/mousai/index.html) installation instructions
* `pip install mousai`
* AFRL: Talk to me- install is easy if I send you the files.
* See [Mousai on GitHub](https://github.com/josephcslater/mousai) (https://github.com/josephcslater/mousai)
## Conclusions
* Nonlinear frequency solutions are within reach of undergraduates
* Installation is trivial
* Already in use (GitHub logs indicate dozens of users)
* Custom special case and proprietary solvers such as BDamper can be replaced for free
* Research potential is about to be unleashed
## Future
* Add time-averaging method
  * currently requires a high number of harmonics for non-smooth systems
* Add masking of known harmonics (average is often fixed and known)
* Automated sweep control
* Branch following
* Condense the one-line method
* Evaluate on large scale problems
* Currently attempting to hook to ANSYS
* Parallelize
* Leverage CUDA
<div class="cite2c-biblio"></div>
# The Jupyter notebook
[IPython](https://ipython.org) provides a **kernel** for [Jupyter](https://jupyter.org).
Jupyter is the name for this notebook interface,
and the document format.
Notebooks can contain [Markdown](https://help.github.com/articles/markdown-basics/) like this cell here,
as well as mathematics rendered with [mathjax](https://mathjax.org):
$$
\frac{1}{\Bigl(\sqrt{\phi \sqrt{5}}-\phi\Bigr) e^{\frac25 \pi}} =
1+\frac{e^{-2\pi}} {1+\frac{e^{-4\pi}} {1+\frac{e^{-6\pi}}
{1+\frac{e^{-8\pi}} {1+\ldots} } } }
$$
```python
!head -n 32 "Intro to IPython.ipynb"
```
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# The Jupyter notebook\n",
"\n",
"[IPython](https://ipython.org) provides a **kernel** for [Jupyter](https://jupyter.org).\n",
"Jupyter is the name for this notebook interface,\n",
"and the document format.\n",
"\n",
"\n",
"\n",
"Notebooks can contain [Markdown](https://help.github.com/articles/markdown-basics/) like this cell here,\n",
"as well as mathematics rendered with [mathjax](https://mathjax.org):\n",
"\n",
"$$\n",
"\\frac{1}{\\Bigl(\\sqrt{\\phi \\sqrt{5}}-\\phi\\Bigr) e^{\\frac25 \\pi}} =\n",
"1+\\frac{e^{-2\\pi}} {1+\\frac{e^{-4\\pi}} {1+\\frac{e^{-6\\pi}}\n",
"{1+\\frac{e^{-8\\pi}} {1+\\ldots} } } } \n",
"$$"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!head -n 32 \"Intro to IPython.ipynb\""
]
[nbviewer](http://nbviewer.org) is a service that renders notebooks to HTML,
for sharing and reading notebooks on the Internet.
[This notebook](http://nbviewer.ipython.org/81c2a94563d102d93895) on nbviewer.
You can also convert notebooks to HTML and other formats locally with `jupyter nbconvert`.
When executing code in IPython, all valid Python syntax works as-is, but IPython provides a number of features designed to make the interactive experience more fluid and efficient.
## First things first: running code, getting help
In the notebook, to run a cell of code, hit `Shift-Enter`. This executes the cell and puts the cursor in the next cell below, or makes a new one if you are at the end. Alternately, you can use:
- `Alt-Enter` to force the creation of a new cell unconditionally (useful when inserting new content in the middle of an existing notebook).
- `Control-Enter` executes the cell and keeps the cursor in the same cell, useful for quick experimentation of snippets that you don't need to keep permanently.
```python
print("Hi")
```
Hi
```python
import time
for i in range(10):
print(i, end=' ')
time.sleep(1)
```
0 1 2 3 4 5 6 7 8 9
```python
i
```
9
Getting help:
```python
?
```
IPython -- An enhanced Interactive Python
=========================================
IPython offers a fully compatible replacement for the standard Python
interpreter, with convenient shell features, special commands, command
history mechanism and output results caching.
At your system command line, type 'ipython -h' to see the command line
options available. This document only describes interactive features.
GETTING HELP
------------
Within IPython you have various way to access help:
? -> Introduction and overview of IPython's features (this screen).
object? -> Details about 'object'.
object?? -> More detailed, verbose information about 'object'.
%quickref -> Quick reference of all IPython specific syntax and magics.
help -> Access Python's own help system.
If you are in terminal IPython you can quit this screen by pressing `q`.
MAIN FEATURES
-------------
* Access to the standard Python help with object docstrings and the Python
manuals. Simply type 'help' (no quotes) to invoke it.
* Magic commands: type %magic for information on the magic subsystem.
* System command aliases, via the %alias command or the configuration file(s).
* Dynamic object information:
Typing ?word or word? prints detailed information about an object. Certain
long strings (code, etc.) get snipped in the center for brevity.
Typing ??word or word?? gives access to the full information without
snipping long strings. Strings that are longer than the screen are printed
through the less pager.
The ?/?? system gives access to the full source code for any object (if
available), shows function prototypes and other useful information.
If you just want to see an object's docstring, type '%pdoc object' (without
quotes, and without % if you have automagic on).
* Tab completion in the local namespace:
At any time, hitting tab will complete any available python commands or
variable names, and show you a list of the possible completions if there's
no unambiguous one. It will also complete filenames in the current directory.
* Search previous command history in multiple ways:
- Start typing, and then use arrow keys up/down or (Ctrl-p/Ctrl-n) to search
through the history items that match what you've typed so far.
- Hit Ctrl-r: opens a search prompt. Begin typing and the system searches
your history for lines that match what you've typed so far, completing as
much as it can.
- %hist: search history by index.
* Persistent command history across sessions.
* Logging of input with the ability to save and restore a working session.
* System shell with !. Typing !ls will run 'ls' in the current directory.
* The reload command does a 'deep' reload of a module: changes made to the
module since you imported will actually be available without having to exit.
* Verbose and colored exception traceback printouts. See the magic xmode and
xcolor functions for details (just type %magic).
* Input caching system:
IPython offers numbered prompts (In/Out) with input and output caching. All
input is saved and can be retrieved as variables (besides the usual arrow
key recall).
The following GLOBAL variables always exist (so don't overwrite them!):
_i: stores previous input.
_ii: next previous.
_iii: next-next previous.
_ih : a list of all input _ih[n] is the input from line n.
Additionally, global variables named _i<n> are dynamically created (<n>
being the prompt counter), such that _i<n> == _ih[<n>]
For example, what you typed at prompt 14 is available as _i14 and _ih[14].
You can create macros which contain multiple input lines from this history,
for later re-execution, with the %macro function.
The history function %hist allows you to see any part of your input history
by printing a range of the _i variables. Note that inputs which contain
magic functions (%) appear in the history with a prepended comment. This is
because they aren't really valid Python code, so you can't exec them.
* Output caching system:
For output that is returned from actions, a system similar to the input
cache exists but using _ instead of _i. Only actions that produce a result
(NOT assignments, for example) are cached. If you are familiar with
Mathematica, IPython's _ variables behave exactly like Mathematica's %
variables.
The following GLOBAL variables always exist (so don't overwrite them!):
_ (one underscore): previous output.
__ (two underscores): next previous.
___ (three underscores): next-next previous.
Global variables named _<n> are dynamically created (<n> being the prompt
counter), such that the result of output <n> is always available as _<n>.
Finally, a global dictionary named _oh exists with entries for all lines
which generated output.
* Directory history:
Your history of visited directories is kept in the global list _dh, and the
magic %cd command can be used to go to any entry in that list.
* Auto-parentheses and auto-quotes (adapted from Nathan Gray's LazyPython)
1. Auto-parentheses
Callable objects (i.e. functions, methods, etc) can be invoked like
this (notice the commas between the arguments)::
In [1]: callable_ob arg1, arg2, arg3
and the input will be translated to this::
callable_ob(arg1, arg2, arg3)
This feature is off by default (in rare cases it can produce
undesirable side-effects), but you can activate it at the command-line
by starting IPython with `--autocall 1`, set it permanently in your
configuration file, or turn on at runtime with `%autocall 1`.
You can force auto-parentheses by using '/' as the first character
of a line. For example::
In [1]: /globals # becomes 'globals()'
Note that the '/' MUST be the first character on the line! This
won't work::
In [2]: print /globals # syntax error
In most cases the automatic algorithm should work, so you should
rarely need to explicitly invoke /. One notable exception is if you
are trying to call a function with a list of tuples as arguments (the
parenthesis will confuse IPython)::
In [1]: zip (1,2,3),(4,5,6) # won't work
but this will work::
In [2]: /zip (1,2,3),(4,5,6)
------> zip ((1,2,3),(4,5,6))
Out[2]= [(1, 4), (2, 5), (3, 6)]
IPython tells you that it has altered your command line by
displaying the new command line preceded by -->. e.g.::
In [18]: callable list
-------> callable (list)
2. Auto-Quoting
You can force auto-quoting of a function's arguments by using ',' as
the first character of a line. For example::
In [1]: ,my_function /home/me # becomes my_function("/home/me")
If you use ';' instead, the whole argument is quoted as a single
string (while ',' splits on whitespace)::
In [2]: ,my_function a b c # becomes my_function("a","b","c")
In [3]: ;my_function a b c # becomes my_function("a b c")
Note that the ',' MUST be the first character on the line! This
won't work::
In [4]: x = ,my_function /home/me # syntax error
Typing `object_name?` will print all sorts of details about any object, including docstrings, function definition lines (for call arguments) and constructor details for classes.
```python
import collections
collections.namedtuple?
```

    Signature: collections.namedtuple(typename, field_names, *, verbose=False, rename=False, module=None)
    Docstring:
Returns a new subclass of tuple with named fields.
>>> Point = namedtuple('Point', ['x', 'y'])
>>> Point.__doc__ # docstring for the new class
'Point(x, y)'
>>> p = Point(11, y=22) # instantiate with positional args or keywords
>>> p[0] + p[1] # indexable like a plain tuple
33
>>> x, y = p # unpack like a regular tuple
>>> x, y
(11, 22)
>>> p.x + p.y # fields also accessible by name
33
>>> d = p._asdict() # convert to a dictionary
>>> d['x']
11
>>> Point(**d) # convert from a dictionary
Point(x=11, y=22)
>>> p._replace(x=100) # _replace() is like str.replace() but targets named fields
    Point(x=100, y=22)
    File:      ~/conda/lib/python3.6/collections/__init__.py
    Type:      function
```python
collections.Counter??
```

    Init signature: collections.Counter(*args, **kwds)
    Source:
    class Counter(dict):
        '''Dict subclass for counting hashable items.  Sometimes called a bag
        or multiset.  Elements are stored as dictionary keys and their counts
        are stored as dictionary values.

        >>> c = Counter('abcdeabcdabcaba')  # count elements from a string

        >>> c.most_common(3)                # three most common elements
        [('a', 5), ('b', 4), ('c', 3)]
        >>> sorted(c)                       # list all unique elements
        ['a', 'b', 'c', 'd', 'e']
        >>> ''.join(sorted(c.elements()))   # list elements with repetitions
        'aaaaabbbbcccdde'
        >>> sum(c.values())                 # total of all counts
        15

        >>> c['a']                          # count of letter 'a'
        5
        >>> for elem in 'shazam':           # update counts from an iterable
        ...     c[elem] += 1                # by adding 1 to each element's count
        >>> c['a']                          # now there are seven 'a'
        7
        >>> del c['b']                      # remove all 'b'
        >>> c['b']                          # now there are zero 'b'
        0

        >>> d = Counter('simsalabim')       # make another counter
        >>> c.update(d)                     # add in the second counter
        >>> c['a']                          # now there are nine 'a'
        9

        >>> c.clear()                       # empty the counter
        >>> c
        Counter()

        Note:  If a count is set to zero or reduced to zero, it will remain
        in the counter until the entry is deleted or the counter is cleared:

        >>> c = Counter('aaabbc')
        >>> c['b'] -= 2                     # reduce the count of 'b' by two
        >>> c.most_common()                 # 'b' is still in, but its count is zero
        [('a', 3), ('c', 1), ('b', 0)]

        '''
        ...
[0;34m[0m [0;32mfor[0m [0melem[0m[0;34m,[0m [0mcount[0m [0;32min[0m [0mother[0m[0;34m.[0m[0mitems[0m[0;34m([0m[0;34m)[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0mself[0m[0;34m[[0m[0melem[0m[0;34m][0m [0;34m-=[0m [0mcount[0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0mself[0m[0;34m.[0m[0m_keep_positive[0m[0;34m([0m[0;34m)[0m[0;34m[0m
[0;34m[0m[0;34m[0m
[0;34m[0m [0;32mdef[0m [0m__ior__[0m[0;34m([0m[0mself[0m[0;34m,[0m [0mother[0m[0;34m)[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;34m'''Inplace union is the maximum of value from either counter.[0m
[0;34m[0m
[0;34m >>> c = Counter('abbb')[0m
[0;34m >>> c |= Counter('bcc')[0m
[0;34m >>> c[0m
[0;34m Counter({'b': 3, 'c': 2, 'a': 1})[0m
[0;34m[0m
[0;34m '''[0m[0;34m[0m
[0;34m[0m [0;32mfor[0m [0melem[0m[0;34m,[0m [0mother_count[0m [0;32min[0m [0mother[0m[0;34m.[0m[0mitems[0m[0;34m([0m[0;34m)[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0mcount[0m [0;34m=[0m [0mself[0m[0;34m[[0m[0melem[0m[0;34m][0m[0;34m[0m
[0;34m[0m [0;32mif[0m [0mother_count[0m [0;34m>[0m [0mcount[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0mself[0m[0;34m[[0m[0melem[0m[0;34m][0m [0;34m=[0m [0mother_count[0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0mself[0m[0;34m.[0m[0m_keep_positive[0m[0;34m([0m[0;34m)[0m[0;34m[0m
[0;34m[0m[0;34m[0m
[0;34m[0m [0;32mdef[0m [0m__iand__[0m[0;34m([0m[0mself[0m[0;34m,[0m [0mother[0m[0;34m)[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0;34m'''Inplace intersection is the minimum of corresponding counts.[0m
[0;34m[0m
[0;34m >>> c = Counter('abbb')[0m
[0;34m >>> c &= Counter('bcc')[0m
[0;34m >>> c[0m
[0;34m Counter({'b': 1})[0m
[0;34m[0m
[0;34m '''[0m[0;34m[0m
[0;34m[0m [0;32mfor[0m [0melem[0m[0;34m,[0m [0mcount[0m [0;32min[0m [0mself[0m[0;34m.[0m[0mitems[0m[0;34m([0m[0;34m)[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0mother_count[0m [0;34m=[0m [0mother[0m[0;34m[[0m[0melem[0m[0;34m][0m[0;34m[0m
[0;34m[0m [0;32mif[0m [0mother_count[0m [0;34m<[0m [0mcount[0m[0;34m:[0m[0;34m[0m
[0;34m[0m [0mself[0m[0;34m[[0m[0melem[0m[0;34m][0m [0;34m=[0m [0mother_count[0m[0;34m[0m
[0;34m[0m [0;32mreturn[0m [0mself[0m[0;34m.[0m[0m_keep_positive[0m[0;34m([0m[0;34m)[0m[0;34m[0m[0m
[0;31mFile:[0m ~/conda/lib/python3.6/collections/__init__.py
[0;31mType:[0m type
```python
c = collections.Counter('abcdeabcdabcaba')
```
```python
c.most_common?
```
Signature: c.most_common(n=None)
Docstring:
List the n most common elements and their counts from the most
common to the least. If n is None, then list all element counts.
>>> Counter('abcdeabcdabcaba').most_common(3)
[('a', 5), ('b', 4), ('c', 3)]
File:      ~/conda/lib/python3.6/collections/__init__.py
Type:      method
```python
c.most_common(2)
```
[('a', 5), ('b', 4)]
With `*` you can do a wildcard search:
```python
*int*?
```
FloatingPointError
int
print
```python
import numpy as np
np.*array?
```
np.array
np.asanyarray
np.asarray
np.ascontiguousarray
np.asfarray
np.asfortranarray
np.chararray
np.ndarray
np.numarray
np.recarray
An IPython quick reference card:
```python
%quickref
```
IPython -- An enhanced Interactive Python - Quick Reference Card
================================================================
obj?, obj?? : Get help, or more help for object (also works as
?obj, ??obj).
?foo.*abc* : List names in 'foo' containing 'abc' in them.
%magic : Information about IPython's 'magic' % functions.
Magic functions are prefixed by % or %%, and typically take their arguments
without parentheses, quotes or even commas for convenience. Line magics take a
single % and cell magics are prefixed with two %%.
Example magic function calls:
%alias d ls -F : 'd' is now an alias for 'ls -F'
alias d ls -F : Works if 'alias' not a python name
alist = %alias : Get list of aliases to 'alist'
cd /usr/share : Obvious. cd -<tab> to choose from visited dirs.
%cd?? : See help AND source for magic %cd
%timeit x=10 : time the 'x=10' statement with high precision.
%%timeit x=2**100
x**100 : time 'x**100' with a setup of 'x=2**100'; setup code is not
counted. This is an example of a cell magic.
System commands:
!cp a.txt b/ : System command escape, calls os.system()
cp a.txt b/ : after %rehashx, most system commands work without !
cp ${f}.txt $bar : Variable expansion in magics and system commands
files = !ls /usr : Capture system command output
files.s, files.l, files.n: "a b c", ['a','b','c'], 'a\nb\nc'
History:
_i, _ii, _iii : Previous, next previous, next next previous input
_i4, _ih[2:5] : Input history line 4, lines 2-4
exec _i81 : Execute input history line #81 again
%rep 81 : Edit input history line #81
_, __, ___ : previous, next previous, next next previous output
_dh : Directory history
_oh : Output history
%hist : Command history of current session.
%hist -g foo : Search command history of (almost) all sessions for 'foo'.
%hist -g : Command history of (almost) all sessions.
%hist 1/2-8 : Command history containing lines 2-8 of session 1.
%hist 1/ ~2/ : Command history of session 1 and 2 sessions before current.
%hist ~8/1-~6/5 : Command history from line 1 of 8 sessions ago to
line 5 of 6 sessions ago.
%edit 0/ : Open editor to execute code with history of current session.
Autocall:
f 1,2 : f(1,2) # Off by default, enable with %autocall magic.
/f 1,2 : f(1,2) (forced autoparen)
,f 1 2 : f("1","2")
;f 1 2 : f("1 2")
Remember: TAB completion works in many contexts, not just file names
or python names.
The following magic functions are currently available:
%alias:
Define an alias for a system command.
%alias_magic:
::
%autocall:
Make functions callable without having to type parentheses.
%automagic:
Make magic functions callable without having to type the initial %.
%autosave:
Set the autosave interval in the notebook (in seconds).
%bookmark:
Manage IPython's bookmark system.
%cat:
Alias for `!cat`
%cd:
Change the current working directory.
%clear:
Clear the terminal.
%colors:
Switch color scheme for prompts, info system and exception handlers.
%config:
configure IPython
%connect_info:
Print information for connecting other clients to this kernel
%cp:
Alias for `!cp`
%debug:
::
%dhist:
Print your history of visited directories.
%dirs:
Return the current directory stack.
%doctest_mode:
Toggle doctest mode on and off.
%ed:
Alias for `%edit`.
%edit:
Bring up an editor and execute the resulting code.
%env:
Get, set, or list environment variables.
%gui:
Enable or disable IPython GUI event loop integration.
%hist:
Alias for `%history`.
%history:
::
%killbgscripts:
Kill all BG processes started by %%script and its family.
%ldir:
Alias for `!ls -F -G -l %l | grep /$`
%less:
Show a file through the pager.
%lf:
Alias for `!ls -F -l -G %l | grep ^-`
%lk:
Alias for `!ls -F -l -G %l | grep ^l`
%ll:
Alias for `!ls -F -l -G`
%load:
Load code into the current frontend.
%load_ext:
Load an IPython extension by its module name.
%loadpy:
Alias of `%load`
%logoff:
Temporarily stop logging.
%logon:
Restart logging.
%logstart:
Start logging anywhere in a session.
%logstate:
Print the status of the logging system.
%logstop:
Fully stop logging and close log file.
%ls:
Alias for `!ls -F -G`
%lsmagic:
List currently available magic functions.
%lx:
Alias for `!ls -F -l -G %l | grep ^-..x`
%macro:
Define a macro for future re-execution. It accepts ranges of history,
%magic:
Print information about the magic function system.
%man:
Find the man page for the given command and display in pager.
%matplotlib:
::
%mkdir:
Alias for `!mkdir`
%more:
Show a file through the pager.
%mv:
Alias for `!mv`
%namespace:
Load one or more predefined namespace
%notebook:
::
%page:
Pretty print the object and display it through a pager.
%pastebin:
Upload code to Github's Gist paste bin, returning the URL.
%pdb:
Control the automatic calling of the pdb interactive debugger.
%pdef:
Print the call signature for any callable object.
%pdoc:
Print the docstring for an object.
%pfile:
Print (or run through pager) the file where an object is defined.
%pinfo:
Provide detailed information about an object.
%pinfo2:
Provide extra detailed information about an object.
%pip:
%popd:
Change to directory popped off the top of the stack.
%pprint:
Toggle pretty printing on/off.
%precision:
Set floating point precision for pretty printing.
%prun:
Run a statement through the python code profiler.
%psearch:
Search for object in namespaces by wildcard.
%psource:
Print (or run through pager) the source code for an object.
%pushd:
Place the current dir on stack and change directory.
%pwd:
Return the current working directory path.
%pycat:
Show a syntax-highlighted file through a pager.
%pylab:
::
%qtconsole:
Open a qtconsole connected to this kernel.
%quickref:
Show a quick reference sheet
%recall:
Repeat a command, or get command to input line for editing.
%rehashx:
Update the alias table with all executable files in $PATH.
%reload_ext:
Reload an IPython extension by its module name.
%rep:
Alias for `%recall`.
%rerun:
Re-run previous input
%reset:
Resets the namespace by removing all names defined by the user, if
%reset_selective:
Resets the namespace by removing names defined by the user.
%rm:
Alias for `!rm`
%rmdir:
Alias for `!rmdir`
%run:
Run the named file inside IPython as a program.
%save:
Save a set of lines or a macro to a given filename.
%sc:
Shell capture - run shell command and capture output (DEPRECATED use !).
%set_env:
Set environment variables. Assumptions are that either "val" is a
%store:
Lightweight persistence for python variables.
%sx:
Shell execute - run shell command and capture output (!! is short-hand).
%system:
Shell execute - run shell command and capture output (!! is short-hand).
%tb:
Print the last traceback with the currently active exception mode.
%tic:
Start a timer
%time:
Time execution of a Python statement or expression.
%timeit:
Time execution of a Python statement or expression
%toc:
Stop and print the timer started by the last call to %tic
%unalias:
Remove an alias
%unload_ext:
Unload an IPython extension by its module name.
%who:
Print all interactive variables, with some minimal formatting.
%who_ls:
Return a sorted list of all interactive variables.
%whos:
Like %who, but gives some extra information about each variable.
%xdel:
Delete a variable, trying to clear it from anywhere that
%xmode:
Switch modes for the exception handlers.
%%!:
Shell execute - run shell command and capture output (!! is short-hand).
%%HTML:
Alias for `%%html`.
%%SVG:
Alias for `%%svg`.
%%bash:
%%bash script magic
%%capture:
::
%%debug:
::
%%file:
Alias for `%%writefile`.
%%html:
::
%%javascript:
Run the cell block of Javascript code
%%js:
Run the cell block of Javascript code
%%latex:
Render the cell as a block of latex
%%markdown:
Render the cell as Markdown text block
%%perl:
%%perl script magic
%%prun:
Run a statement through the python code profiler.
%%pypy:
%%pypy script magic
%%python:
%%python script magic
%%python2:
%%python2 script magic
%%python3:
%%python3 script magic
%%ruby:
%%ruby script magic
%%script:
::
%%sh:
%%sh script magic
%%svg:
Render the cell as an SVG literal
%%sx:
Shell execute - run shell command and capture output (!! is short-hand).
%%system:
Shell execute - run shell command and capture output (!! is short-hand).
%%time:
Time execution of a Python statement or expression.
%%timeit:
Time execution of a Python statement or expression
%%writefile:
::
## Tab completion
Tab completion, especially for attributes, is a convenient way to explore the structure of any object you’re dealing with. Simply type `object_name.<TAB>` to view the object’s attributes. Besides Python objects and keywords, tab completion also works on file and directory names.
```python
np.array_equal
```
## The interactive workflow: input, output, history
```python
2+10
```
12
```python
_+10
```
62
You can suppress the storage and rendering of output if you append `;` to the last cell (this comes in handy when plotting with matplotlib, for example):
```python
10+20;
```
```python
_
```
The output is stored in `_N` and `Out[N]` variables:
```python
Out[21]
```
```python
_15 == Out[15]
```
`%history` lets you view and search your history
```python
%history -n 1-5
```
1: !head -n 32 "Intro to IPython.ipynb"
2: print("Hi")
3:
import time
for i in range(10):
print(i, end=' ')
time.sleep(1)
4: i
5: ?
```python
%history?
```
Docstring:
::
%history [-n] [-o] [-p] [-t] [-f FILENAME] [-g [PATTERN [PATTERN ...]]]
[-l [LIMIT]] [-u]
[range [range ...]]
Print input history (_i<n> variables), with most recent last.
By default, input history is printed without line numbers so it can be
directly pasted into an editor. Use -n to show them.
By default, all input history from the current session is displayed.
Ranges of history can be indicated using the syntax:
``4``
Line 4, current session
``4-6``
Lines 4-6, current session
``243/1-5``
Lines 1-5, session 243
``~2/7``
Line 7, session 2 before current
``~8/1-~6/5``
From the first line of 8 sessions ago, to the fifth line of 6
sessions ago.
Multiple ranges can be entered, separated by spaces
The same syntax is used by %macro, %save, %edit, %rerun
Examples
--------
::
In [6]: %history -n 4-6
4:a = 12
5:print a**2
6:%history -n 4-6
positional arguments:
range
optional arguments:
-n print line numbers for each input. This feature is
only available if numbered prompts are in use.
-o also print outputs for each input.
-p print classic '>>>' python prompts before each input.
This is useful for making documentation, and in
conjunction with -o, for producing doctest-ready
output.
-t print the 'translated' history, as IPython understands
it. IPython filters your input and converts it all
into valid Python source before executing it (things
like magics or aliases are turned into function calls,
for example). With this option, you'll see the native
history instead of the user-entered version: '%cd /'
will be seen as 'get_ipython().run_line_magic("cd",
"/")' instead of '%cd /'.
-f FILENAME FILENAME: instead of printing the output to the
screen, redirect it to the given file. The file is
always overwritten, though *when it can*, IPython asks
for confirmation first. In particular, running the
command 'history -f FILENAME' from the IPython
Notebook interface will replace FILENAME even if it
already exists *without* confirmation.
-g <[PATTERN [PATTERN ...]]>
treat the arg as a glob pattern to search for in
(full) history. This includes the saved history
(almost all commands ever written). The pattern may
contain '?' to match one unknown character and '*' to
match any number of unknown characters. Use '%hist -g'
to show full saved history (may be very long).
-l <[LIMIT]> get the last n lines from all sessions. Specify n as a
single arg, or the default is the last 10 lines.
-u when searching history using `-g`, show only unique
history.
File:      ~/dev/ip/ipython/IPython/core/magics/history.py
## Accessing the underlying operating system
```python
import os
print(os.getcwd())
```
/Users/minrk/dev/simula/tools-meetup/2018-05-25-jupyter
```python
!pwd
```
/Users/minrk/dev/simula/tools-meetup/2018-05-25-jupyter
```python
!ls -la
```
total 3808
drwxr-xr-x 7 minrk staff 224 May 25 14:19 .
drwxr-xr-x 11 minrk staff 352 May 25 13:27 ..
drwxr-xr-x 5 minrk staff 160 May 25 13:58 .ipynb_checkpoints
-rw-r--r-- 1 minrk staff 104035 May 25 14:19 Intro to IPython.ipynb
-rw-r--r-- 1 minrk staff 500312 May 25 14:09 Lorenz Differential Equations.ipynb
-rw-r--r-- 1 minrk staff 1311739 May 25 14:00 Profiling and Optimizing with IPython.ipynb
-rw-r--r--@ 1 minrk staff 22868 Sep 23 2015 jupyter-logo.png
```python
ls
```
Intro to IPython.ipynb
Lorenz Differential Equations.ipynb
Profiling and Optimizing with IPython.ipynb
jupyter-logo.png
```python
files = !ls
print("My current directory's files:")
print(files)
```
My current directory's files:
['Intro to IPython.ipynb', 'Lorenz Differential Equations.ipynb', 'Profiling and Optimizing with IPython.ipynb', 'jupyter-logo.png']
```python
for f in files:
print(f * 2)
```
Intro to IPython.ipynbIntro to IPython.ipynb
Lorenz Differential Equations.ipynbLorenz Differential Equations.ipynb
Profiling and Optimizing with IPython.ipynbProfiling and Optimizing with IPython.ipynb
jupyter-logo.pngjupyter-logo.png
```python
!echo $files
```
[Intro to IPython.ipynb, Lorenz Differential Equations.ipynb, Profiling and Optimizing with IPython.ipynb, jupyter-logo.png]
```python
!echo {files[0].upper()}
```
INTRO TO IPYTHON.IPYNB
Note that all this is available even in multiline blocks:
```python
import os
for i,f in enumerate(files):
if f.endswith('ipynb'):
!echo {"%02d" % i} - "{os.path.splitext(f)[0]}"
else:
print('--')
```
00 - Intro to IPython
01 - Lorenz Differential Equations
02 - Profiling and Optimizing with IPython
--
```python
%history -t 34
```
## Beyond Python: magic functions
The IPython 'magic' functions are a set of commands, invoked by prepending one or two `%` signs to their name, that live in a namespace separate from your normal Python variables and provide a more command-like interface. They take flags with `--` and arguments without quotes, parentheses or commas. The motivation behind this system is two-fold:
- To provide an orthogonal namespace for controlling IPython itself and exposing other system-oriented functionality.
- To expose a calling mode that requires minimal verbosity and typing while working interactively. Thus the inspiration taken from the classic Unix shell style for commands.
Line vs cell magics:
```python
%timeit list(range(1000))
```
13 µs ± 26.6 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
```python
%%timeit
list(range(10))
list(range(100))
```
1.54 µs ± 45.3 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
Line magics can be used even inside code blocks:
```python
for i in range(1, 5):
size = i*100
print('size:', size, end=' ')
%timeit list(range(size))
```
size: 100 1.05 µs ± 30.8 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
size: 200 1.5 µs ± 41.9 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
size: 300 2.52 µs ± 128 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
size: 400 4.13 µs ± 138 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
```python
%timeit time.sleep(0.1)
```
104 ms ± 770 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
```python
%timeit?
```
Docstring:
Time execution of a Python statement or expression
Usage, in line mode:
%timeit [-n<N> -r<R> [-t|-c] -q -p<P> -o] statement
or in cell mode:
%%timeit [-n<N> -r<R> [-t|-c] -q -p<P> -o] setup_code
code
code...
Time execution of a Python statement or expression using the timeit
module. This function can be used both as a line and cell magic:
- In line mode you can time a single-line statement (though multiple
ones can be chained with using semicolons).
- In cell mode, the statement in the first line is used as setup code
(executed but not timed) and the body of the cell is timed. The cell
body has access to any variables created in the setup code.
Options:
-n<N>: execute the given statement <N> times in a loop. If this value
is not given, a fitting value is chosen.
-r<R>: repeat the loop iteration <R> times and take the best result.
Default: 3
-t: use time.time to measure the time, which is the default on Unix.
This function measures wall time.
-c: use time.clock to measure the time, which is the default on
Windows and measures wall time. On Unix, resource.getrusage is used
instead and returns the CPU user time.
-p<P>: use a precision of <P> digits to display the timing result.
Default: 3
-q: Quiet, do not print result.
-o: return a TimeitResult that can be stored in a variable to inspect
the result in more details.
Examples
--------
::
In [1]: %timeit pass
8.26 ns ± 0.12 ns per loop (mean ± std. dev. of 7 runs, 100000000 loops each)
In [2]: u = None
In [3]: %timeit u is None
29.9 ns ± 0.643 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
In [4]: %timeit -r 4 u == None
In [5]: import time
In [6]: %timeit -n1 time.sleep(2)
The times reported by %timeit will be slightly higher than those
reported by the timeit.py script when variables are accessed. This is
due to the fact that %timeit executes the statement in the namespace
of the shell, compared with timeit.py, which uses a single setup
statement to import function or create variables. Generally, the bias
does not matter as long as results from timeit.py are not mixed with
those from %timeit.
File:      ~/dev/ip/ipython/IPython/core/magics/execution.py
Magics can do anything they want with their input, so it doesn't have to be valid Python:
```bash
%%bash
echo "My shell is:" $SHELL
echo "My disk usage is:"
df -h
```
My shell is: /usr/local/bin/bash
My disk usage is:
Filesystem Size Used Avail Capacity iused ifree %iused Mounted on
/dev/disk1s1 466Gi 353Gi 108Gi 77% 3738151 9223372036851037656 0% /
devfs 194Ki 194Ki 0Bi 100% 670 0 100% /dev
/dev/disk1s4 466Gi 4.0Gi 108Gi 4% 4 9223372036854775803 0% /private/var/vm
map -hosts 0Bi 0Bi 0Bi 100% 0 0 100% /net
map auto_home 0Bi 0Bi 0Bi 100% 0 0 100% /home
```ruby
%%ruby
puts "hello"
```
hello
Another interesting cell magic: create any file you want locally from the notebook:
```python
%%writefile test.txt
This is a test file!
It can contain anything I want...
And more...
```
Writing test.txt
```python
!cat test.txt
```
This is a test file!
It can contain anything I want...
And more...
Let's see what other magics are currently defined in the system:
```python
%lsmagic
```
Available line magics:
%alias %alias_magic %autocall %automagic %autosave %bookmark %cat %cd %clear %colors %config %connect_info %cp %debug %dhist %dirs %doctest_mode %ed %edit %env %gui %hist %history %killbgscripts %ldir %less %lf %lk %ll %load %load_ext %loadpy %logoff %logon %logstart %logstate %logstop %ls %lsmagic %lx %macro %magic %man %matplotlib %mkdir %more %mv %namespace %notebook %page %pastebin %pdb %pdef %pdoc %pfile %pinfo %pinfo2 %popd %pprint %precision %prun %psearch %psource %pushd %pwd %pycat %pylab %qtconsole %quickref %recall %rehashx %reload_ext %rep %rerun %reset %reset_selective %rm %rmdir %run %save %sc %set_env %store %sx %system %tb %tic %time %timeit %toc %unalias %unload_ext %who %who_ls %whos %xdel %xmode
Available cell magics:
%%! %%HTML %%SVG %%bash %%capture %%debug %%file %%html %%javascript %%js %%latex %%markdown %%perl %%prun %%pypy %%python %%python2 %%python3 %%ruby %%script %%sh %%svg %%sx %%system %%time %%timeit %%writefile
Automagic is ON, % prefix IS NOT needed for line magics.
```python
%%capture?
```
Docstring:
::
%capture [--no-stderr] [--no-stdout] [--no-display] [output]
run the cell, capturing stdout, stderr, and IPython's rich display() calls.
positional arguments:
output The name of the variable in which to store output. This is a
utils.io.CapturedIO object with stdout/err attributes for the
text of the captured output. CapturedOutput also has a show()
method for displaying the output, and __call__ as well, so you
can use that to quickly display the output. If unspecified,
captured output is discarded.
optional arguments:
--no-stderr Don't capture stderr.
--no-stdout Don't capture stdout.
--no-display Don't capture IPython's rich display.
File:      ~/dev/ip/ipython/IPython/core/magics/execution.py
```python
import math
math.pi
```
3.141592653589793
```python
%precision 2
math.pi
```
3.14
```python
%precision?
```
## Running normal Python code: execution and errors
Not only can you input normal Python code, you can even paste straight from a Python or IPython shell session:
```python
>>> # Fibonacci series:
... # the sum of two elements defines the next
... a, b = 0, 1
>>> while b < 10:
... print(b)
... a, b = b, a+b
```
```python
import sys
for i in range(32):
print(i, end=' ')
time.sleep(0.1)
```
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
And when your code produces errors, you can control how they are displayed with the `%xmode` magic:
```python
%%writefile mod.py
def f(x):
return 1.0/(x-1)
def g(y):
return f(y+1)
```
Writing mod.py
Now let's call the function `g` with an argument that would produce an error:
```python
import mod
mod.g(0)
```
```python
%xmode verbose
```
Exception reporting mode: Verbose
```python
mod.g(0)
```
## Raw Input in the notebook
Since 1.0, the IPython notebook web application supports `raw_input`, which for example allows us to invoke the `%debug` magic in the notebook:
```python
%debug
```
> /Users/minrk/dev/simula/tools-meetup/2018-05-25-jupyter/mod.py(3)f()
      1 
      2 def f(x):
----> 3     return 1.0/(x-1)
      4 
      5 def g(y):

1
> /Users/minrk/dev/simula/tools-meetup/2018-05-25-jupyter/mod.py(6)g()
      2 def f(x):
      3     return 1.0/(x-1)
      4 
      5 def g(y):
----> 6     return f(y+1)

> /Users/minrk/dev/simula/tools-meetup/2018-05-25-jupyter/mod.py(3)f()
      1 
      2 def f(x):
----> 3     return 1.0/(x-1)
      4 
      5 def g(y):

> /Users/minrk/dev/simula/tools-meetup/2018-05-25-jupyter/mod.py(6)g()
      2 def f(x):
      3     return 1.0/(x-1)
      4 
      5 def g(y):
----> 6     return f(y+1)

> <ipython-input-51-9fa96bd6b3b6>(1)<module>()
----> 1 mod.g(0)

> /Users/minrk/dev/simula/tools-meetup/2018-05-25-jupyter/mod.py(6)g()
      2 def f(x):
      3     return 1.0/(x-1)
      4 
      5 def g(y):
----> 6     return f(y+1)

<function f at 0x10b423400>
0
--KeyboardInterrupt--
--KeyboardInterrupt--
Don't forget to exit your debugging session. Raw input can of course be used to ask for user input:
```python
colour = input('What is your favourite colour? ')
print('colour is:', colour)
```
colour is: x
## Running code in other languages with special `%%` magics
```perl
%%perl
@months = ("July", "August", "September");
print $months[0];
```
```ruby
%%ruby
name = "world"
puts "Hello #{name.capitalize}!"
```
## Plotting in the notebook
This magic configures matplotlib to render its figures inline:
```python
%matplotlib inline
```
```python
import numpy as np
import matplotlib.pyplot as plt
```
```python
x = np.linspace(0, 2*np.pi, 300)
y = np.sin(x**2)
plt.plot(x, y)
plt.title("A little chirp")
fig = plt.gcf() # let's keep the figure object around for later...
```
### Widgets
```python
from ipywidgets import interact
@interact
def show_args(num=5, text='hello', check=True):
print(locals())
```
interactive(children=(IntSlider(value=5, description='num', max=15, min=-5), Text(value='hello', description='…
```python
%timeit time.sleep(0.1)
```
```python
%history -t 72
```
```python
import sympy
from sympy import Symbol, Eq, factor
x = Symbol('x')
sympy.init_printing(use_latex='mathjax')
x
```
$$x$$
```python
@interact(n=(1,21))
def factorit(n):
return Eq(x**n-1, factor(x**n-1))
```
interactive(children=(IntSlider(value=11, description='n', max=21, min=1), Output()), _dom_classes=('widget-in…
```python
```
|
fd8ee59931871ad9c3b753a1b6ccb72cab202d6c
| 281,403 |
ipynb
|
Jupyter Notebook
|
2018-05-25-jupyter/Intro to IPython.ipynb
|
Anastasiia-Grishina/simula-tools-meetup
|
2a1d661e818fb31750ced15170797d6ad47c7996
|
[
"Unlicense"
] | 9 |
2018-04-20T13:12:08.000Z
|
2021-11-08T09:28:22.000Z
|
2018-05-25-jupyter/Intro to IPython.ipynb
|
Anastasiia-Grishina/simula-tools-meetup
|
2a1d661e818fb31750ced15170797d6ad47c7996
|
[
"Unlicense"
] | 1 |
2019-05-03T14:44:19.000Z
|
2019-05-03T14:44:19.000Z
|
2018-05-25-jupyter/Intro to IPython.ipynb
|
Anastasiia-Grishina/simula-tools-meetup
|
2a1d661e818fb31750ced15170797d6ad47c7996
|
[
"Unlicense"
] | 5 |
2018-04-20T13:13:49.000Z
|
2021-10-31T07:55:35.000Z
| 99.400565 | 145,036 | 0.779174 | true | 31,896 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.705785 | 0.83762 | 0.59118 |
__label__eng_Latn
| 0.58017 | 0.211839 |
## Rosenbrock
The definition can be found in <cite data-cite="rosenbrock"></cite>. It is a non-convex function, introduced by Howard H. Rosenbrock in 1960 and also known as Rosenbrock's valley or Rosenbrock's banana function.
**Definition**
\begin{align}
\begin{split}
f(x) &=& \sum_{i=1}^{n-1} \bigg[100 (x_{i+1}-x_i^2)^2+(x_i - 1)^2 \bigg] \\
&&-2.048 \leq x_i \leq 2.048 \quad i=1,\ldots,n
\end{split}
\end{align}
**Optimum**
$$f(x^*) = 0 \; \text{at} \; x^* = (1,\ldots,1) $$
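As a quick sanity check of the definition and optimum above (added here, not part of the pymoo example), the sum can be evaluated directly with NumPy; the helper name `rosenbrock` is ours.
```python
import numpy as np
def rosenbrock(x):
    """Evaluate the Rosenbrock sum for a 1-D point x with n >= 2 components."""
    x = np.asarray(x, dtype=float)
    return np.sum(100.0*(x[1:] - x[:-1]**2)**2 + (x[:-1] - 1.0)**2)
# the optimum x* = (1, ..., 1) gives f(x*) = 0
print(rosenbrock(np.ones(5)))        # 0.0
print(rosenbrock([-1.0, 1.0, 1.0]))  # 4.0: only the (x_1 - 1)^2 term contributes
```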
**Contour**
```python
import numpy as np
from pymoo.factory import get_problem, get_visualization
problem = get_problem("rosenbrock", n_var=2)
get_visualization("fitness-landscape", problem, angle=(45, 45), _type="surface").show()
```
```python
```
|
91c2bd9fd79054cad36dff668ceca6a0e581698a
| 377,300 |
ipynb
|
Jupyter Notebook
|
doc/source/problems/single/rosenbrock.ipynb
|
gabicavalcante/pymoo
|
1711ce3a96e5ef622d0116d6c7ea4d26cbe2c846
|
[
"Apache-2.0"
] | 11 |
2018-05-22T17:38:02.000Z
|
2022-02-28T03:34:33.000Z
|
doc/source/problems/single/rosenbrock.ipynb
|
gabicavalcante/pymoo
|
1711ce3a96e5ef622d0116d6c7ea4d26cbe2c846
|
[
"Apache-2.0"
] | 15 |
2022-01-03T19:36:36.000Z
|
2022-03-30T03:57:58.000Z
|
doc/source/problems/single/rosenbrock.ipynb
|
gabicavalcante/pymoo
|
1711ce3a96e5ef622d0116d6c7ea4d26cbe2c846
|
[
"Apache-2.0"
] | 3 |
2021-11-22T08:01:47.000Z
|
2022-03-11T08:53:58.000Z
| 3,042.741935 | 374,816 | 0.964551 | true | 267 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.904651 | 0.841826 | 0.761558 |
__label__eng_Latn
| 0.714133 | 0.607687 |
# Nonlinear vibrations<br>Part 2
## Van der Pol equation
### Limiting cycle
Method of averaging:
The van der Pol equation for an autonomous system with negative damping is
$$\ddot{x}-\epsilon(1-x^2)\dot{x}+x=0$$
We use the method of averaging to estimate the amplitude in the case of small nonlinearity $\epsilon \ll 1$:
\begin{aligned}
x&=r\cos(t+\phi)\\
\dot{x}&=-r\sin(t+\phi)\\
\ddot{x}&=-\dot{r}\sin(t+\phi)-r\dot{\phi}\cos(t+\phi)-x
\end{aligned}
where equation for $\dot{x}$ holds if
$$\dot{r}\cos(t+\phi)-r\dot{\phi}\sin(t+\phi)=0$$
Substituting these equations into van der Pol equation yields
$$\dot{r}\sin(t+\phi)+r\dot{\phi}\cos(t+\phi)=-\epsilon(1-x^2)\dot{x}$$
The system can be solved for $\dot{r}$ and $\dot{\phi}$
\begin{aligned}
\dot{r}&=-\epsilon\sin(t+\phi)(1-x^2)\dot{x}\\
\dot{\phi}&=-\frac{1}{r}\epsilon\cos(t+\phi)(1-x^2)\dot{x}
\end{aligned}
Let $\tau=t+\phi$, where $\phi$ is treated as almost constant compared with $t$ over one period of $\sin t$. In this case
\begin{aligned}
\dot{r}&\approx-\epsilon r\frac{1}{2\pi}\int_0^{2\pi}\sin\tau\cdot(1-r^2\cos^2\tau)\sin\tau d\tau=\epsilon\frac{r}{2} \left(1-\frac{r^2}{4}\right)\\
\dot{\phi}&\approx-\epsilon\frac{1}{2\pi}\int_0^{2\pi}\cos\tau\cdot(1-r^2\cos^2\tau)\sin\tau d\tau=0
\end{aligned}
Steady-state solutions of $\dot{r}=0$ are the amplitudes $r=0$ and $r=2$. The first is an unstable singular point, while $r=2$ corresponds to a stable limiting cycle.
$$\dot{r}=\epsilon\frac{r}{2} \left(1-\frac{r^2}{4}\right)\quad\implies\quad\frac{dr}{\frac{r}{2}\left(1-\left(\frac{r}{2}\right)^2\right)}=\epsilon dt$$
$$z=\frac{r}{2}, \quad \frac{2 dz}{z(1-z^2)}=\epsilon dt$$
$$\frac{2}{z(1-z^2)}=\frac{2}{z}-\left(\frac{1}{z-1}+\frac{1}{z+1}\right)$$
$$2\int_{z_0}^z\frac{dz}{z}-\left(\int_{z_0}^z\frac{dz}{z-1}+\int_{z_0}^z\frac{dz}{z+1}\right)=\epsilon t$$
$$2\log\frac{z}{z_0}-\left(\log\frac{z-1}{z_0-1}+\log\frac{z+1}{z_0+1}\right)=\epsilon t$$
$$\frac{z_0^2}{z_0^2-1}\cdot\frac{z^2-1}{z^2}=e^{-\epsilon t}$$
$$1-\frac{4}{r^2}=\left(1-\frac{4}{r_0^2}\right)e^{-\epsilon t}$$
$$r=\frac{2}{\sqrt{1-\left(1-4/r_0^2\right)e^{-\epsilon t}}}$$
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
import sympy as sp
```
```python
r, phi, eps, t = sp.symbols('r, \phi, \epsilon, t')
```
```python
x = r * sp.cos(t)
dx = -r * sp.sin(t)
F = eps * (1 - x**2) * dx
dr = -sp.sin(t) * F
dphi = -sp.cos(t) * F
```
```python
F
```
$\displaystyle - \epsilon r \left(- r^{2} \cos^{2}{\left(t \right)} + 1\right) \sin{\left(t \right)}$
```python
eq1 = (sp.integrate(dr, (t, 0, 2*sp.pi))/2/sp.pi).simplify()
eq1
```
$\displaystyle \frac{\epsilon r \left(4 - r^{2}\right)}{8}$
```python
sp.integrate(dphi, (t,0,2*sp.pi))
```
$\displaystyle 0$
## Relaxation oscillations
### Rayleigh's equation to van der Pol relation
$$\ddot{u}-\epsilon\left(\dot{u}-\frac{1}{3}\dot{u}^3\right)+u=0$$
differentiate Rayleigh's equation:
$$\dddot{u}-\epsilon\left(\ddot{u}-\dot{u}^2\ddot{u}\right)+\dot{u}=0$$
and substituting $\dot{u} = v$ results in the van der Pol equation:
$$\ddot{v}-\epsilon(1-v^2)\dot{v}+v=0$$
### Limiting cycle
The substitution
$$\dot{u} = v,\quad\xi=u/\epsilon,\quad\dot{\xi}=v/\epsilon$$
into Rayleigh's equation gives a first-order system of equations:
\begin{aligned}
\frac{1}{\epsilon}\dot{v}&=v-\frac{1}{3}v^3-\xi\\
\dot{\xi}&=\frac{v}{\epsilon}
\end{aligned}
Equation for the trajectories on phase plane:
$$\frac{1}{\epsilon^2}\frac{dv}{d\xi}=\frac{v-v^3/3-\xi}{v}$$
### Period
The limiting cycle is $\xi=v-\frac{1}{3}v^3$. From $\dot{\xi}=v/\epsilon$ we find
$$dt=\epsilon\frac{d\xi}{v}$$
The period can be estimated as an integral over the limiting cycle.
The symmetry of the curve can be taken into account as well:
$$T=\epsilon\oint\frac{d\xi}{v}=2\epsilon\int_{v_1}^{v_2}\frac{d\xi}{v}=2\epsilon\int_{v_1}^{v_2}\left(\frac{1}{v}-v\right)dv$$
Find the integration limits. Point $v_2$ satisfies $d\xi/dv=1-v_2^2=0$, thus $v_2=\pm 1$.
Another point can be found from equation
$$\xi(-1)=-2/3=v_1-v_1^3/3$$
$$v_1^3-3v_1-2=0\quad\implies\quad v_1=2$$
$$T=2\epsilon\int_2^1\left(\frac{1}{v}-v\right)dv=2\epsilon\left(\frac{3}{2}-\log 2\right)\approx 1.614\epsilon$$
## Numerical solution
```python
# for numerical integration
def vdp(z, t, e):
return [ z[1], -z[0] + e*(1-z[0]**2)*z[1] ]
```
```python
eps = 0.3
r0 = 1e-3
t = np.linspace(0, 100, 1000)
sol = odeint(vdp, [r0, 0], t, args=(eps,))
plt.figure(figsize=(12,5))
plt.subplot(1,2,1)
plt.plot(t, sol[:,0], 'b',
t, 2/np.sqrt(1-(1-4/r0**2)*np.exp(-eps*t)), 'm')
plt.subplot(1,2,2)
plt.plot(sol[:,0], sol[:,1])
plt.grid(True)
plt.show()
```
```python
t = np.linspace(0, 100, 1000)
sol1 = odeint(vdp, [0.1, 0], t, args=(0.1,))
sol2 = odeint(vdp, [3.0, 0], t, args=(0.1,))
plt.figure(figsize=(7,7))
plt.plot(sol1[:,0],sol1[:,1],'b',sol2[:,0],sol2[:,1],'r')
plt.grid(True)
plt.xlabel('$x$')
plt.ylabel('$\dot{x}$')
plt.title('Limiting cycle')
plt.show()
```
```python
r0 = 1e-3
eps = 1
t = np.linspace(0, 30, 300)
sol = odeint(vdp, [r0, 0], t, args=(eps,))
plt.figure(figsize=(12,5))
plt.subplot(1,2,1)
plt.plot(t, sol[:,0], 'b',
t, 2/np.sqrt(1-(1-4/r0**2)*np.exp(-eps*t)), 'm')
plt.subplot(1,2,2)
plt.plot(sol[:,0], sol[:,1])
plt.grid(True)
plt.show()
```
Relaxation oscillations, similar to those of a multivibrator, appear at $\epsilon \gg 1$.
The period depends on $\epsilon$.
```python
r0 = 0.01
eps = 10
t = np.linspace(0, 60, 2000)
sol = odeint(vdp, [r0, 0], t, args=(eps,))
plt.figure(figsize=(12,5))
plt.subplot(1,2,1)
plt.plot(t, sol[:,0])
plt.subplot(1,2,2)
plt.plot(sol[:,0], sol[:,1])
plt.grid(True)
plt.show()
```
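The dependence of the period on $\epsilon$ can be checked against the estimate $T\approx 2\epsilon(3/2-\log 2)\approx 1.614\epsilon$ derived above. The short cell below is added here and is not part of the original notebook: it redefines the right-hand side so it runs standalone, measures the period of the $\epsilon=10$ solution from zero crossings, and prints both numbers. Since the formula is only the leading-order result for large $\epsilon$, the measured period comes out somewhat larger.
```python
import numpy as np
from scipy.integrate import odeint
def vdp_rhs(z, t, e):
    # same right-hand side as vdp() above, repeated so this cell runs standalone
    return [z[1], -z[0] + e*(1 - z[0]**2)*z[1]]
eps = 10
t = np.linspace(0, 100, 20001)
x = odeint(vdp_rhs, [2.0, 0.0], t, args=(eps,))[:, 0]
# upward zero crossings of x(t); their spacing is the period
idx = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]
T_num = np.mean(np.diff(t[idx][1:]))   # skip the first crossing (transient)
T_est = 2*(1.5 - np.log(2))*eps        # leading-order estimate ~ 1.614*eps
print('leading-order estimate T ~ %.2f' % T_est)
print('measured period        T ~ %.2f' % T_num)
```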
## Notes
"Dynamics of a System Exhibiting the Global Bifurcation of a Limit Cycle at Infinity" by W.L.Keith and R.H.Rand, Int. J. Non-Linear Mechanics, 20:325-338 (1985)
```python
```
|
01172105afa411f36bb4dc425c639143b29bd7a5
| 273,509 |
ipynb
|
Jupyter Notebook
|
vdp-oscillator/vdp-1.ipynb
|
vr050714/nonlinear-vibration-seminar
|
663584d46708857383b637610e54fafa753250e2
|
[
"CC0-1.0"
] | 1 |
2021-05-26T05:38:38.000Z
|
2021-05-26T05:38:38.000Z
|
vdp-oscillator/vdp-1.ipynb
|
vr050714/nonlinear-vibration-seminar
|
663584d46708857383b637610e54fafa753250e2
|
[
"CC0-1.0"
] | null | null | null |
vdp-oscillator/vdp-1.ipynb
|
vr050714/nonlinear-vibration-seminar
|
663584d46708857383b637610e54fafa753250e2
|
[
"CC0-1.0"
] | null | null | null | 526.992293 | 98,828 | 0.944525 | true | 2,374 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.913677 | 0.863392 | 0.788861 |
__label__eng_Latn
| 0.253613 | 0.67112 |
# Quick Start: Simulating the motion of a simple pendulum
* Next >> None.
* Prev >> [1_modeling](https://github.com/yfur/basic-mechanics-python/blob/master/1_modeling/1_modeling.ipynb)
We simulate the motion of a [simple pendulum](https://ja.wikipedia.org/wiki/%E6%8C%AF%E3%82%8A%E5%AD%90#.E5.8D.98.E6.8C.AF.E3.82.8A.E5.AD.90).
## 0. Introduction
Although "mechanics simulation" can be done in many different ways, here we take one of them as an example. We divide the work needed for a simulation into three processes:
1. ** Modeling **
2. ** Numerical computation **
3. ** Visualization **
First, "modeling". This is the process of deriving, for example, Newton's equation of motion from the physical model under consideration.
Next, "numerical computation". This is the process of having a computer calculate the motion using a Python program. In this process the computer computes the pendulum's motion based on the given model information and reports the result as numbers. For example, information such as
> 0 seconds after the start of the simulation, the pendulum is at 10 [rad]
>
> 1 second after the start of the simulation, the pendulum is at 0 [rad]
>
> 2 seconds after the start of the simulation, the pendulum is at -10 [rad]
>
> 3 seconds after the start of the simulation, the pendulum is at 0 [rad]
>
> ...
is generated as a matrix like
| time | position |
|:------:|:---------:|
| 0 | 10 |
| 1 | 0 |
| 2 | -10 |
| 3 | 0 |
Finally, "visualization". The numerical data obtained from the numerical computation is just a list of numbers, and it is hard to understand the motion of the object from it alone. To make the motion intuitively understandable, we generate graphs and animations from this numerical data.
## 1. Modeling
Consider the usual [mechanical model of a simple pendulum](https://ja.wikipedia.org/wiki/%E6%8C%AF%E3%82%8A%E5%AD%90#.E5.8D.98.E6.8C.AF.E3.82.8A.E5.AD.90) shown in the figure. The pendulum angle is $\theta$, the pendulum length is $l$, the mass of the bob at the tip of the pendulum is $m$, and the gravitational acceleration acting vertically downward is $g$.
The force acting on the bob in its direction of motion is
\begin{align}
F = -mg\sin\theta
\end{align}
Since the acceleration of the bob in its direction of motion is $l\ddot{\theta}$, Newton's equation of motion ($ma=F$) is
\begin{align}
m l\ddot{\theta}= -mg\sin\theta
\end{align}
and therefore
\begin{align}
\ddot{\theta} = -\frac{g}{l}\sin\theta
\end{align}
Note that the simple-pendulum model contains **variables**, whose values change with time, and **constants**, whose values do not: $\theta$ is the former, while $m$, $l$, and $g$ are the latter. What we accomplish in ** 1. Modeling ** is to rearrange the equation of motion with respect to the **variables** and obtain an expression of the form "(time-derivative terms of the variables) $=$ (all other terms)".
## 2. Numerical computation
In ** 1. Modeling ** we derived the equation of motion for the variable $\theta$; in ** 2. Numerical computation ** we use it to construct an **ordinary differential equation**.
First, stack $\theta$ and its first time derivative $\dot{\theta}$ into a column vector and call it $s$. (I suppose it cannot be helped that matrices do not render nicely on GitHub?)
\begin{align}
s =
\begin{bmatrix}
\theta \\ \dot{\theta}
\end{bmatrix}
\end{align}
Next, express the first time derivative of $s$. Here, thanks to the equation of motion obtained above, $\ddot{\theta}$ can be written in a form that does not contain $\ddot{\theta}$ itself.
\begin{align}
\frac{d}{dt} s =
\begin{bmatrix}
\dot{\theta} \\ \ddot{\theta}
\end{bmatrix} =
\begin{bmatrix}
\dot{\theta} \\ -\frac{g}{l}\sin\theta
\end{bmatrix}
\end{align}
The ordinary differential equation obtained in this way is in a form well suited to numerical computation. Concretely, by setting $\theta = $ `s[0]` and $\dot{\theta} = $ `s[1]`, we can write it as a Python function
```
def odefunc(s, t):
theta = s[0]
dtheta = s[1]
ddtheta = -g/l*sin(theta) # <- Equation of motion
return np.r_[dtheta, ddtheta]
```
and, having described the ODE this way, run
```
s = odeint(odefunc, s_init, t)
```
to solve it numerically. Here [`odeint`](https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.integrate.odeint.html) is the ODE-solving function defined in the `integrate` module of the [SciPy](https://www.scipy.org/) library.
```python
import numpy as np
from scipy.integrate import odeint
from math import sin
''' constants '''
m = 1 # mass of the pendulum [kg]
l = 1 # length of the pendulum [m]
g = 10 # Gravitational acceleration [m/s^2]
''' time setting '''
t_end = 10 # simulation time [s]
t_fps = 50 # frame per second. This value means smoothness of produced graph and animation
t_step = 1/t_fps
t = np.arange(0, t_end, t_step)
''' initial value '''
theta_init = 0 # initial value of theta [rad]
dtheta_init = 1 # initial value of dot theta [rad/s]
s_init = np.r_[theta_init, dtheta_init]
def odefunc(s, t):
theta = s[0]
dtheta = s[1]
ddtheta = -g/l*sin(theta) # <- Equation of motion
return np.r_[dtheta, ddtheta]
s = odeint(odefunc, s_init, t)
print('ODE calculation finished.')
```
ODE calculation finished.
The above is the program that performs the numerical computation.
```
print(np.c_[t, s])
```
Running this command produces a table like
| time | pendulum angle | pendulum angular velocity |
|:------:|:---------:|:---------:|
| 0. | 0. | 1. |
|... | ... | ... |
| 9.98 | -0.018 | 0.998 |
so we can check, for each time, what the pendulum angle and the pendulum angular velocity are.
```python
print(np.c_[t, s])
```
[[ 0. 0. 1. ]
[ 0.02 0.01998668 0.99800074]
[ 0.04 0.03989343 0.99201173]
...,
[ 9.94 -0.05725082 0.98347963]
[ 9.96 -0.03747992 0.99295215]
[ 9.98 -0.01755919 0.9984571 ]]
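Before moving on to visualization, a brief cross-check (added here, not part of the original notebook): with the initial values used above ($\theta_0 = 0$, $\dot{\theta}_0 = 1$ rad/s, $g = 10$, $l = 1$) the swing amplitude is only about $\dot{\theta}_0/\sqrt{g/l} \approx 0.32$ rad, so the small-angle solution $\theta(t) \approx (\dot{\theta}_0/\omega)\sin(\omega t)$ with $\omega = \sqrt{g/l}$ should stay close to the numerical result. The cell is self-contained and recomputes the trajectory.
```python
import numpy as np
from scipy.integrate import odeint
from math import sin
g, l = 10.0, 1.0                       # same constants as in the cell above
omega = np.sqrt(g/l)                   # small-angle angular frequency
t = np.arange(0, 10, 1/50)
def odefunc(s, t):
    return np.r_[s[1], -g/l*sin(s[0])]
s = odeint(odefunc, np.r_[0.0, 1.0], t)         # full nonlinear solution
theta_small = (1.0/omega)*np.sin(omega*t)       # small-angle approximation
print('max |difference| [rad]:', np.max(np.abs(s[:, 0] - theta_small)))
```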
## 3. Visualization
The numerical computation is finished, but a list of numbers alone does not convey the motion intuitively. So we create an animation movie showing how the pendulum moves and a graph showing how the pendulum angle changes over time.
The function `animfunc` shown below generates the animation movie. Its basic principle is that of a flip book: for each movie frame it draws the pendulum and stores the picture, over and over again. The per-frame update is handled by the `update_figure` function.
```python
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from math import cos
def animfunc(s, t):
''' Create mp4 movie file of a pendulum '''
plt.close()
fig = plt.figure()
plt.axis('scaled')
plt.xlim(-1, 1)
plt.ylim(-1.5, .5)
plt.grid('on')
draw_ceiling, = plt.plot([-2, 2], [0, 0], c='k', lw=2)
draw_pendulum, = plt.plot([], [], lw=4, c='b')
draw_mass, = plt.plot([], [], lw=2, marker='o', ms=20, mew=4, mec='b', mfc='c')
indicate_time = plt.text(-0.3, 0.25, [], fontsize=12)
def update_figure(i):
''' Set data of each movie frame '''
mass_x = l*sin(s[i, 0])
mass_y = - l*cos(s[i, 0])
pendlum_x = [0, mass_x]
pendlum_y = [0, mass_y]
draw_pendulum.set_data(pendlum_x, pendlum_y)
draw_mass.set_data(mass_x, mass_y)
indicate_time.set_text('t = {0:4.2f} [s]'.format(t[i]))
''' Create a movie file '''
line_ani = animation.FuncAnimation(fig, update_figure, frames=len(t))
line_ani.save('./pendulum.mp4', fps=t_fps)
print('pendulum.mp4 created')
```
Running `animfunc` saves an animation movie such as `pendulum.mp4`.
```python
animfunc(s, t)
```
pendulum.mp4 created
If, on the other hand, you want to show the pendulum's motion as a graph, you can, for example, plot it as follows. Running the next program saves an image file such as `pendulum_graph.png`.
```python
plt.figure()
plt.plot(t, s[:, 0])
plt.xlabel('t [s]')
plt.ylabel('theta [rad]')
plt.savefig('pendulum_graph.png')
```
## Closing remarks
The above is an example of simulating the motion of a simple pendulum with Python. The code contains only the minimum elements needed to run a simulation, but more advanced computations and representations are of course possible.
## References
###### todo
* Finished version of the simple pendulum program
* Add reference information
```python
```
|
68cfb6b89ec11d0ae62fa18b947ef18fb5d8896a
| 10,381 |
ipynb
|
Jupyter Notebook
|
0_quickstart/.ipynb_checkpoints/0_quickstart-checkpoint.ipynb
|
yfur/basic-mechanics-python
|
fb313b01a116180a249a1f78e28aa5685030b2ea
|
[
"Apache-2.0"
] | 1 |
2021-09-17T11:34:59.000Z
|
2021-09-17T11:34:59.000Z
|
0_quickstart/0_quickstart.ipynb
|
yfur/basic-mechanics-python
|
fb313b01a116180a249a1f78e28aa5685030b2ea
|
[
"Apache-2.0"
] | null | null | null |
0_quickstart/0_quickstart.ipynb
|
yfur/basic-mechanics-python
|
fb313b01a116180a249a1f78e28aa5685030b2ea
|
[
"Apache-2.0"
] | null | null | null | 27.756684 | 216 | 0.51103 | true | 3,380 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.917303 | 0.737158 | 0.676197 |
__label__yue_Hant
| 0.546338 | 0.409364 |
# Description of the problem and solution
Task 1 was to predict a person's age from brain-image data: a standard regression problem. The original dataset included 832 features as well as many NaN values and a few outliers, so a careful preprocessing stage was needed to obtain a well-defined dataset for the regression model. The first step was imputation: each NaN value was filled with the median of its feature column. The median is preferred over other statistics (e.g. the mean) because the dataset contains many outliers (e.g. for the values 1, 2, _, 5, 20 the median is 3 while the mean is 7). The next step was feature extraction. Using the "autofeat" library (paper: https://arxiv.org/pdf/1901.07329.pdf), we extracted the 21 most important features. The algorithm loops over correlating features with the target, selecting promising features, training a Lasso regression model on them, and keeping the good features with non-zero regression weights. We updated the datasets by keeping only these 21 features and used them to train the final regression model. Several outlier-detection techniques were tried, but we decided to keep the outliers and use a tree-based method for the final model: tree methods are robust to outliers, and this avoids the risk of excluding important features or points from the dataset. The "ExtraTreesRegressor" model from the "sklearn" package was used and fine-tuned based on the R2 score on our validation set. The final model scored >0.6 on cross-validated validation sets and 0.6812 on the ETH submission leaderboard, while the hard baseline was set to 0.65 by the Advanced Machine Learning Task 1 team.
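As a compact illustration of the pipeline described above, the sketch below strings the three stages together (median imputation, autofeat feature selection, extra-trees regression). It is only a sketch under assumptions: the exact tuned hyperparameters are not stated here, the helper name `run_pipeline` and its argument names are ours, and autofeat's `FeatureSelector` is assumed to expose the usual sklearn-style `fit_transform`/`transform` interface.
```python
from autofeat import FeatureSelector
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import cross_val_score
def run_pipeline(X_train, y_train, X_test):
    # 1) imputation: fill NaNs with the training-column medians (robust to outliers)
    medians = X_train.median()
    X_train = X_train.fillna(medians)
    X_test = X_test.fillna(medians)
    # 2) feature selection with autofeat's FeatureSelector
    fsel = FeatureSelector(verbose=0)
    X_train_sel = fsel.fit_transform(X_train, y_train)
    X_test_sel = fsel.transform(X_test)
    # 3) tree-based regressor, robust to the remaining outliers
    model = ExtraTreesRegressor(n_estimators=500, random_state=0)
    print('CV R2:', cross_val_score(model, X_train_sel, y_train, scoring='r2', cv=5).mean())
    model.fit(X_train_sel, y_train)
    return model.predict(X_test_sel)
```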
# Include all the necessary packages
```python
!pip install autofeat
from sklearn.metrics import r2_score
try:
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from autofeat import FeatureSelector
from sklearn.model_selection import train_test_split
from sklearn.ensemble import ExtraTreesRegressor
```
Requirement already satisfied: autofeat in /usr/local/lib/python3.6/dist-packages (0.2.5)
Requirement already satisfied: pint in /usr/local/lib/python3.6/dist-packages (from autofeat) (0.9)
Requirement already satisfied: pandas>=0.24.0 in /usr/local/lib/python3.6/dist-packages (from autofeat) (0.25.2)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.6/dist-packages (from autofeat) (0.21.3)
Requirement already satisfied: sympy in /usr/local/lib/python3.6/dist-packages (from autofeat) (1.1.1)
Requirement already satisfied: joblib in /usr/local/lib/python3.6/dist-packages (from autofeat) (0.14.0)
Requirement already satisfied: numpy in /tensorflow-2.0.0/python3.6 (from autofeat) (1.17.3)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from autofeat) (0.16.0)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.24.0->autofeat) (2018.9)
Requirement already satisfied: python-dateutil>=2.6.1 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.24.0->autofeat) (2.6.1)
Requirement already satisfied: scipy>=0.17.0 in /usr/local/lib/python3.6/dist-packages (from scikit-learn->autofeat) (1.3.1)
Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.6/dist-packages (from sympy->autofeat) (1.1.0)
Requirement already satisfied: six>=1.5 in /tensorflow-2.0.0/python3.6 (from python-dateutil>=2.6.1->pandas>=0.24.0->autofeat) (1.12.0)
# Load the data from the CSV files
```python
column_names_x = ['id']
for i in range(832):
column_names_x.append('x'+str(i))
raw_dataset_x = pd.read_csv('/content/X_train.csv', names=column_names_x,
na_values = "?", comment='\t',
sep=",", skipinitialspace=True, skiprows=True)
dataset_x = raw_dataset_x.copy()
dataset_x.tail()
```
[Output of `dataset_x.tail()`: an HTML-rendered table showing the last 5 rows (id 1207-1211) of the 833-column training frame (columns id, x0 ... x831, middle columns elided), with several NaN entries visible; the raw HTML markup of the table is not reproduced here.]
<td>2.550404</td>
<td>916.073375</td>
<td>1.080536e+06</td>
<td>9.101128</td>
<td>1013.211838</td>
<td>12517.066013</td>
<td>10990.064488</td>
<td>10349.678752</td>
<td>10096.876996</td>
<td>NaN</td>
<td>322.445715</td>
<td>10.599132</td>
<td>10.350638</td>
<td>962.755003</td>
<td>1093.734877</td>
<td>105353.550546</td>
<td>106061.798352</td>
<td>9302.374002</td>
<td>10.949418</td>
<td>109317.619776</td>
<td>2.496408</td>
</tr>
</tbody>
</table>
<p>5 rows × 833 columns</p>
</div>
```python
column_names_y = ['id','y']
raw_dataset_y = pd.read_csv('/content/y_train.csv', names=column_names_y,
na_values = "?", comment='\t',
sep=",", skipinitialspace=True, skiprows=True)
dataset_y = raw_dataset_y.copy()
dataset_y.tail()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>id</th>
<th>y</th>
</tr>
</thead>
<tbody>
<tr>
<th>1207</th>
<td>1207.0</td>
<td>66.0</td>
</tr>
<tr>
<th>1208</th>
<td>1208.0</td>
<td>73.0</td>
</tr>
<tr>
<th>1209</th>
<td>1209.0</td>
<td>74.0</td>
</tr>
<tr>
<th>1210</th>
<td>1210.0</td>
<td>78.0</td>
</tr>
<tr>
<th>1211</th>
<td>1211.0</td>
<td>64.0</td>
</tr>
</tbody>
</table>
</div>
# Print the missing values
```python
missing_values = dataset_x.isna().sum()
print (missing_values)
```
id 0
x0 81
x1 103
x2 92
x3 91
...
x827 83
x828 78
x829 98
x830 84
x831 92
Length: 833, dtype: int64
# Split the data into training and test data
```python
# Split using sklearn.model_selection
x_train, x_test, y_train, y_test = train_test_split(dataset_x, dataset_y, test_size=0.2, random_state = 100)
```
```python
train_stats = x_train.describe()
train_stats.pop("id")
train_stats = train_stats.transpose()
train_stats
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>count</th>
<th>mean</th>
<th>std</th>
<th>min</th>
<th>25%</th>
<th>50%</th>
<th>75%</th>
<th>max</th>
</tr>
</thead>
<tbody>
<tr>
<th>x0</th>
<td>909.0</td>
<td>99849.359545</td>
<td>9534.020258</td>
<td>65533.368423</td>
<td>93818.485147</td>
<td>100183.062423</td>
<td>105994.290528</td>
<td>130226.576502</td>
</tr>
<tr>
<th>x1</th>
<td>885.0</td>
<td>3698.730375</td>
<td>943.683864</td>
<td>180.312021</td>
<td>3076.550570</td>
<td>3651.110055</td>
<td>4303.892503</td>
<td>7265.213902</td>
</tr>
<tr>
<th>x2</th>
<td>891.0</td>
<td>99975.389109</td>
<td>9540.065988</td>
<td>68544.573581</td>
<td>93937.346571</td>
<td>99386.035114</td>
<td>106102.200889</td>
<td>132221.045067</td>
</tr>
<tr>
<th>x3</th>
<td>898.0</td>
<td>999.944996</td>
<td>100.903669</td>
<td>694.745271</td>
<td>935.303439</td>
<td>999.571797</td>
<td>1068.606823</td>
<td>1434.200505</td>
</tr>
<tr>
<th>x4</th>
<td>890.0</td>
<td>10001.743350</td>
<td>1001.473353</td>
<td>6681.561828</td>
<td>9339.312428</td>
<td>10021.924636</td>
<td>10646.003276</td>
<td>13560.223285</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>x827</th>
<td>906.0</td>
<td>104990.967566</td>
<td>2761.209013</td>
<td>100015.768596</td>
<td>102783.100024</td>
<td>104986.305216</td>
<td>107352.370223</td>
<td>109999.847537</td>
</tr>
<tr>
<th>x828</th>
<td>907.0</td>
<td>6827.704539</td>
<td>1387.835714</td>
<td>1696.036569</td>
<td>6002.316474</td>
<td>6835.947954</td>
<td>7652.607118</td>
<td>11276.075121</td>
</tr>
<tr>
<th>x829</th>
<td>884.0</td>
<td>10.021817</td>
<td>0.982265</td>
<td>6.899008</td>
<td>9.378562</td>
<td>9.977236</td>
<td>10.676450</td>
<td>13.188278</td>
</tr>
<tr>
<th>x830</th>
<td>902.0</td>
<td>104960.353706</td>
<td>2845.423469</td>
<td>100003.049706</td>
<td>102653.373914</td>
<td>104838.184005</td>
<td>107428.898901</td>
<td>109993.046071</td>
</tr>
<tr>
<th>x831</th>
<td>893.0</td>
<td>2.269127</td>
<td>0.169559</td>
<td>1.589261</td>
<td>2.173057</td>
<td>2.291077</td>
<td>2.374205</td>
<td>2.846222</td>
</tr>
</tbody>
</table>
<p>832 rows × 8 columns</p>
</div>
# Fill the NaN in the training data set with the median values of each column
```python
x_train = x_train.fillna(x_train.median())
x_test = x_test.fillna(x_test.median())
missing_values = x_train.isna().sum()
print (missing_values)
```
id 0
x0 0
x1 0
x2 0
x3 0
..
x827 0
x828 0
x829 0
x830 0
x831 0
Length: 833, dtype: int64
# Remove the unnecessary "id" label
```python
y_train.pop("id")
y_test.pop("id")
```
1143 1143.0
941 941.0
365 365.0
467 467.0
615 615.0
...
156 156.0
689 689.0
28 28.0
69 69.0
1198 1198.0
Name: id, Length: 243, dtype: float64
# Feature Extraction using autofeat
```python
fsel = FeatureSelector(featsel_runs=4,
max_it=150,
w_thr=1e-6,
keep=None,
n_jobs=1,
verbose=1)
new_X = fsel.fit_transform(pd.DataFrame(x_train, columns=column_names_x), y_train)
print(new_X.columns)
df_train = pd.DataFrame(x_train, columns=column_names_x)
df_test = pd.DataFrame(x_test, columns=column_names_x)
```
/usr/local/lib/python3.6/dist-packages/sklearn/utils/validation.py:724: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
y = column_or_1d(y, warn=True)
[featsel] Scaling data...done.
[featsel] 220/833 features after univariate filtering
[featsel] Feature selection run 1/4
[featsel] Feature selection run 2/4
[featsel] Feature selection run 3/4
[featsel] Feature selection run 4/4
[featsel] 28 features after 4 feature selection runs
[featsel] 28 features after correlation filtering
[featsel] 21 features after noise filtering
[featsel] 21 final features selected (including 0 original keep features).
Index(['x400', 'x635', 'x757', 'x516', 'x809', 'x214', 'x556', 'x617', 'x93',
'x346', 'x596', 'x255', 'x309', 'x252', 'x292', 'x738', 'x537', 'x593',
'x474', 'x614', 'x502'],
dtype='object')
# Keep only the necessary features
```python
dataset_selected_x = x_train.copy()
x_test_selected= x_test.copy()
#accepted=[400, 757, 635, 516, 132, 15, 809, 116, 214,
#556, 617, 93, 346, 596, 309, 252, 292, 474,
#593, 614]
accepted=[400, 635, 757, 516, 809, 214, 556, 617, 93,
346, 596, 255, 309, 252, 292, 738, 537, 593,
474, 614, 502]
for j in range(832):
if (j in accepted):
print (j)
else:
del dataset_selected_x['x'+str(j)]
del x_test_selected['x'+str(j)]
train_stats = dataset_selected_x.describe()
train_stats.pop("id")
train_stats = train_stats.transpose()
train_stats
```
93
214
252
255
292
309
346
400
474
502
516
537
556
593
596
614
617
635
738
757
809
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>count</th>
<th>mean</th>
<th>std</th>
<th>min</th>
<th>25%</th>
<th>50%</th>
<th>75%</th>
<th>max</th>
</tr>
</thead>
<tbody>
<tr>
<th>x93</th>
<td>969.0</td>
<td>3.519955e+03</td>
<td>6.123839e+02</td>
<td>1.789851e+03</td>
<td>3.122307e+03</td>
<td>3.492040e+03</td>
<td>3.882096e+03</td>
<td>5.902057e+03</td>
</tr>
<tr>
<th>x214</th>
<td>969.0</td>
<td>1.686814e+03</td>
<td>4.246669e+02</td>
<td>3.857756e+02</td>
<td>1.438003e+03</td>
<td>1.657997e+03</td>
<td>1.929010e+03</td>
<td>3.719025e+03</td>
</tr>
<tr>
<th>x252</th>
<td>969.0</td>
<td>5.153136e+03</td>
<td>6.443209e+03</td>
<td>-1.066389e+04</td>
<td>2.036529e+03</td>
<td>3.051817e+03</td>
<td>5.396992e+03</td>
<td>5.218476e+04</td>
</tr>
<tr>
<th>x255</th>
<td>969.0</td>
<td>1.063172e+04</td>
<td>1.847941e+03</td>
<td>3.258064e+03</td>
<td>9.580092e+03</td>
<td>1.053304e+04</td>
<td>1.170004e+04</td>
<td>1.742768e+04</td>
</tr>
<tr>
<th>x292</th>
<td>969.0</td>
<td>1.289525e+05</td>
<td>1.561997e+04</td>
<td>7.115248e+04</td>
<td>1.201669e+05</td>
<td>1.282218e+05</td>
<td>1.377145e+05</td>
<td>1.982293e+05</td>
</tr>
<tr>
<th>x309</th>
<td>969.0</td>
<td>1.360934e+04</td>
<td>2.104294e+03</td>
<td>1.787475e+03</td>
<td>1.241801e+04</td>
<td>1.355888e+04</td>
<td>1.480791e+04</td>
<td>2.142239e+04</td>
</tr>
<tr>
<th>x346</th>
<td>969.0</td>
<td>7.264234e+03</td>
<td>1.238998e+03</td>
<td>2.195927e+03</td>
<td>6.577637e+03</td>
<td>7.355575e+03</td>
<td>8.085946e+03</td>
<td>1.121581e+04</td>
</tr>
<tr>
<th>x400</th>
<td>969.0</td>
<td>2.419177e+00</td>
<td>1.566607e-01</td>
<td>1.441218e+00</td>
<td>2.325897e+00</td>
<td>2.416256e+00</td>
<td>2.507563e+00</td>
<td>3.029658e+00</td>
</tr>
<tr>
<th>x474</th>
<td>969.0</td>
<td>2.138704e+05</td>
<td>3.363320e+04</td>
<td>6.580233e+04</td>
<td>1.951336e+05</td>
<td>2.113702e+05</td>
<td>2.311579e+05</td>
<td>4.824998e+05</td>
</tr>
<tr>
<th>x502</th>
<td>969.0</td>
<td>7.341196e+13</td>
<td>5.051213e+13</td>
<td>-8.384285e+13</td>
<td>4.129257e+13</td>
<td>6.245895e+13</td>
<td>9.198935e+13</td>
<td>3.816907e+14</td>
</tr>
<tr>
<th>x516</th>
<td>969.0</td>
<td>2.633410e+00</td>
<td>2.851324e-01</td>
<td>1.586758e+00</td>
<td>2.461447e+00</td>
<td>2.632617e+00</td>
<td>2.810227e+00</td>
<td>3.709460e+00</td>
</tr>
<tr>
<th>x537</th>
<td>969.0</td>
<td>2.131250e+05</td>
<td>3.401347e+04</td>
<td>6.499225e+04</td>
<td>1.936266e+05</td>
<td>2.105931e+05</td>
<td>2.305157e+05</td>
<td>4.810740e+05</td>
</tr>
<tr>
<th>x556</th>
<td>969.0</td>
<td>6.058210e+03</td>
<td>7.982839e+02</td>
<td>3.040824e+03</td>
<td>5.588225e+03</td>
<td>5.998493e+03</td>
<td>6.472923e+03</td>
<td>9.103862e+03</td>
</tr>
<tr>
<th>x593</th>
<td>969.0</td>
<td>6.306444e+04</td>
<td>3.517018e+04</td>
<td>2.381309e+04</td>
<td>4.825502e+04</td>
<td>5.195309e+04</td>
<td>5.625009e+04</td>
<td>2.232860e+05</td>
</tr>
<tr>
<th>x596</th>
<td>969.0</td>
<td>1.994849e+04</td>
<td>2.631193e+03</td>
<td>9.693873e+03</td>
<td>1.845468e+04</td>
<td>1.980071e+04</td>
<td>2.136755e+04</td>
<td>3.037075e+04</td>
</tr>
<tr>
<th>x614</th>
<td>969.0</td>
<td>1.219718e+06</td>
<td>1.863793e+05</td>
<td>4.090431e+05</td>
<td>1.112420e+06</td>
<td>1.220616e+06</td>
<td>1.319038e+06</td>
<td>1.976749e+06</td>
</tr>
<tr>
<th>x617</th>
<td>969.0</td>
<td>1.528155e+03</td>
<td>6.897489e+02</td>
<td>-6.478282e+02</td>
<td>1.039647e+03</td>
<td>1.425202e+03</td>
<td>1.899843e+03</td>
<td>5.028253e+03</td>
</tr>
<tr>
<th>x635</th>
<td>969.0</td>
<td>2.552238e+00</td>
<td>2.211141e-01</td>
<td>1.536237e+00</td>
<td>2.441985e+00</td>
<td>2.571545e+00</td>
<td>2.685138e+00</td>
<td>3.319348e+00</td>
</tr>
<tr>
<th>x738</th>
<td>969.0</td>
<td>4.070505e+05</td>
<td>5.419321e+04</td>
<td>1.500802e+05</td>
<td>3.777463e+05</td>
<td>4.051612e+05</td>
<td>4.389639e+05</td>
<td>6.508670e+05</td>
</tr>
<tr>
<th>x757</th>
<td>969.0</td>
<td>2.556026e+00</td>
<td>2.138400e-01</td>
<td>1.513769e+00</td>
<td>2.448038e+00</td>
<td>2.577740e+00</td>
<td>2.684761e+00</td>
<td>3.292584e+00</td>
</tr>
<tr>
<th>x809</th>
<td>969.0</td>
<td>8.301919e+01</td>
<td>1.026332e+02</td>
<td>-1.034091e+02</td>
<td>3.267047e+01</td>
<td>5.403747e+01</td>
<td>9.901100e+01</td>
<td>1.245880e+03</td>
</tr>
</tbody>
</table>
</div>
# Regression model
```python
rfr = ExtraTreesRegressor(n_jobs=1, max_depth=None, n_estimators=180, random_state=0, min_samples_split=3, max_features=None)
rfr.fit(dataset_selected_x, np.ravel(y_train))
y_pred = rfr.predict(x_test_selected)
score = r2_score(y_test, y_pred)
print(score)
```
0.6037280104368132
# Export the file with our predictions for submission
```python
#accepted=[400, 757, 635, 516, 132, 15, 809, 116, 214,
#556, 617, 93, 346, 596, 309, 252, 292, 474,
#593, 614]
accepted=[400, 635, 757, 516, 809, 214, 556, 617, 93,
346, 596, 255, 309, 252, 292, 738, 537, 593,
474, 614, 502]
column_names_x = ['id']
for i in range(832):
column_names_x.append('x'+str(i))
raw_dataset_x_test = pd.read_csv('/content/X_test.csv', names=column_names_x,
na_values = "?", comment='\t',
sep=",", skipinitialspace=True, skiprows=True)
dataset_x_test = raw_dataset_x_test.copy()
dataset_x_test.tail()
dataset_x_test = dataset_x_test.fillna(dataset_x_test.median())
dataset_selected_x_test = dataset_x_test.copy()
for j in range(832):
if (j in accepted):
print (j)
else:
del dataset_selected_x_test['x'+str(j)]
predictions = rfr.predict(dataset_selected_x_test)
index = 0.0
with open('predictions.txt', 'w') as f:
f.write("%s\n" % "id,y")
for predict in predictions:
writing_str = str(index)+','+str(predict.item(0))
f.write("%s\n" % writing_str)
index = index + 1
```
93
214
252
255
292
309
346
400
474
502
516
537
556
593
596
614
617
635
738
757
809
|
3ed804506389baf26e7796d1971f4dc27cdc3fce
| 67,892 |
ipynb
|
Jupyter Notebook
|
Task 1/Task_1_AML.ipynb
|
KonstantinosBarmpas/Advanced-Machine-Learning-Projects
|
61839d3933c3299666536b4daff53344af214b84
|
[
"MIT"
] | 13 |
2020-10-15T19:45:05.000Z
|
2022-01-15T19:38:29.000Z
|
Task 1/Task_1_AML.ipynb
|
KonstantinosBarmpas/Advanced-Machine-Learning-Projects
|
61839d3933c3299666536b4daff53344af214b84
|
[
"MIT"
] | null | null | null |
Task 1/Task_1_AML.ipynb
|
KonstantinosBarmpas/Advanced-Machine-Learning-Projects
|
61839d3933c3299666536b4daff53344af214b84
|
[
"MIT"
] | 12 |
2020-09-27T13:15:00.000Z
|
2021-11-22T17:29:55.000Z
| 38.270575 | 1,795 | 0.350071 | true | 12,794 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.766294 | 0.705785 | 0.540839 |
__label__yue_Hant
| 0.182175 | 0.094879 |
# Single-qubit geometric gates
In this notebook we create a single-qubit geometric gate between the $|0\rangle$ and $|2\rangle$ state of the transmon. The ideal unitary operator describing the single-qubit geometric gate in the $\{|0\rangle, |2\rangle\}$ basis is
\begin{align}
U_g = \begin{pmatrix}
\cos\theta & e^{i\phi}\sin\theta \\
e^{-i\phi}\sin\theta & -\cos\theta
\end{pmatrix}
\end{align}
where $\theta$ and $\phi$ are controllable rotation angles.
The rotation that a geometric gate implements depends only on the path that the quantum system takes in Hilbert space during the time evolution. This gate is implemented by simultaneously driving a $2\pi$ rotation between the $|0\rangle$ to $|1\rangle$ and the $|1\rangle$ to $|2\rangle$ transitions which differ in frequency by the anharmonicity of the transmon. We therefore require properly calibrated $X$ gates between $|0\rangle$ to $|1\rangle$ and between $|1\rangle$ to $|2\rangle$ which we label $X_{01}$ and $X_{12}$, respectively. The backend provides us with a properly calibrated $X_{01}$ gate. By modulating the pulse that implements $X_{01}$ with the anharmonicity of the transmon and by scaling its amplitude we can implement a calibrated $X_{12}$ gate that has the same duration as $X_{01}$. This modulation requires a bandwidth comparable to the anharmonicity, which is typically -400 ~ -300 MHz for transmon devices. To avoid calibrating $2\pi$ rotations we implement the geometric gate by two back-to-back simultaneously driven $X_{01}$ and $X_{12}$ gates. The pulse envelope of the $X_{01}$ and $X_{12}$ gates are scaled by the complex parameters $a$ and $b$ which control the rotation angles $\theta$ and $\phi$ on the Bloch sphere spanned by $|0\rangle$ and $|2\rangle$ according to
\begin{align}
e^{i\phi}\tan\frac{\theta}{2}=\frac{a}{b}.
\end{align}
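As a quick illustration of this relation (not part of the calibration workflow below), the parametrization $a=e^{i\phi}\sin(\theta/2)$, $b=\cos(\theta/2)$ is one choice that satisfies both the relation above and the normalization $|a|^2+|b|^2=1$ that we will enforce later. A minimal NumPy sketch, with the function name `geometric_amplitudes` being our own choice:

```python
import numpy as np

def geometric_amplitudes(theta, phi):
    """Return (a, b) with e^{i phi} tan(theta/2) = a/b and |a|^2 + |b|^2 = 1."""
    a = np.exp(1j * phi) * np.sin(theta / 2)
    b = np.cos(theta / 2)
    return a, b

# Example: a theta = pi/2 rotation with phi = 0 on the (|0>, |2>) Bloch sphere
a, b = geometric_amplitudes(np.pi / 2, 0.0)
print(a, b, abs(a)**2 + abs(b)**2)
```

For $\phi=0$ this reduces to $a=\sin(\theta/2)$ and $b=\sqrt{1-a^2}$, which is the form used when we scan the rotation angle at the end of this notebook.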
To implement the geometric gate we need to do the following steps.
1. Identify the frequency of the $|1\rangle$ to $|2\rangle$ transition using spectroscopy.
2. Identify the amplitude of the $\pi$-pulse between the $|1\rangle$ to $|2\rangle$ transition.
3. Build a discriminator using Qiskit-Ignis to discriminate the $|0\rangle$, $|1\rangle$, and $|2\rangle$.
4. Implement and measure a single-qubit geometric gate.
References:
- Abdumalikov *et al.*, Nature **496**, 482-485 (2013)
- Egger *et al*, Phys. Rev. Applied **11**, 014017 (2019)
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
import qiskit.pulse as pulse
from qiskit.compiler import assemble
# Qiskit pulse
from qiskit.pulse import (MemorySlot, Acquire, DriveChannel, Schedule, SamplePulse,
InstructionScheduleMap, Play, MeasureChannel, AcquireChannel)
from qiskit.pulse.pulse_lib import Gaussian
from qiskit.qobj.utils import MeasLevel, MeasReturnType
from qiskit.scheduler import measure_all
from qiskit.visualization.pulse.qcstyle import SchedStyle
# Ignis discriminator fitter tools
from sklearn.svm import SVC
from qiskit.ignis.mitigation.measurement import CompleteMeasFitter, MeasurementFilter
from qiskit.ignis.measurement.discriminator.filters import DiscriminationFilter
from qiskit.ignis.measurement.discriminator.iq_discriminators import SklearnIQDiscriminator
%matplotlib inline
```
### Load a high resolution pulse backend
```python
import warnings
warnings.filterwarnings('ignore')
from qiskit.tools.jupyter import *
from qiskit import IBMQ
IBMQ.load_account()
hub='update-here'
group='update-here'
project='update-here'
provider = IBMQ.get_provider(hub=hub, group=group, project=project)
backend = provider.get_backend('ibmq_armonk')
```
```python
backend
```
<IBMQBackend('ibmq_armonk') from IBMQ(hub='ibm-q-internal', group='dev-qiskit', project='pulse-testing')>
```python
config = backend.configuration()
defaults = backend.defaults()
inst_map = defaults.circuit_instruction_map
style = SchedStyle(figsize=(16, 5)) # schedule style plotting
```
Extract the default Xp and measurement pulse for the qubit that we are going to use.
```python
xp = inst_map.get('x', (0,)).instructions[0][1].pulse
d0 = DriveChannel(0)
dt = config.dt
shots = 1024
```
```python
def add_modulation(pulse, freq: float, dt: float, scale: float = 1.) -> SamplePulse:
"""
Add a modulation to the pulse.
Args:
        pulse: The pulse whose samples will be modulated (a SamplePulse or a parametric pulse).
        freq: The frequency of the modulation to add to the samples.
        dt: The cycle time.
        scale: A factor used to scale the samples.
Returns: SamplePulse with the added modulation.
"""
if not isinstance(pulse, SamplePulse):
samples = pulse.get_sample_pulse().samples
else:
samples = pulse.samples
modulated_samples = []
for i, amp in enumerate(samples):
modulated_samples.append(scale * amp * np.exp(2.0j*np.pi*freq*i*dt))
return SamplePulse(modulated_samples)
def get_job_data(job, average: bool, qubit: int, scale_factor=1):
"""Retrieve data from a job that has already run.
Args:
job (Job): The job whose data you want.
average: If True, gets the data assuming data is an average.
If False, gets the data assuming it is for single shots.
Return:
list: List containing job result data.
"""
job_results = job.result(timeout=120) # timeout parameter set to 120 s
result_data = []
for i in range(len(job_results.results)):
if average: # get avg data
result_data.append(job_results.get_memory(i)[qubit]*scale_factor)
else: # get single data
result_data.append(job_results.get_memory(i)[:, qubit]*scale_factor)
return result_data
def lorenz(x, a, q_freq, b, c):
return (a / np.pi) * (b / ((x - q_freq)**2 + b**2)) + c
```
## Calibration of an X gate between $|1\rangle$ and $|2\rangle$
### Find the $|1\rangle$ to $|2\rangle$ transition
We use spectroscopy to find the transition between $|1\rangle$ and $|2\rangle$.
First, we apply a pi-pulse to the qubit to prepare the $|1\rangle$ state.
Next, we apply a longer and weaker spectroscopic pulse with a frequency detuning $\delta f$ from the $|0\rangle$ to $|1\rangle$ transition and measure.
By repeating this experiment for different values of $\delta f$ we can identify the frequency detuning of the transition between $|1\rangle$ and $|2\rangle$ which is called the anharmonicity $\alpha$.
**Note**: Here we will drive a transition at +400 MHz which corresponds to an image (created by the IQ mixer) of the anharmonicity. Since the LO is +50 MHz above the $|0\rangle$ to $|1\rangle$ transition frequency of the qubit and the true qubit anharmonicity is approximately -350 MHz, the +400 MHz offset thus corresponds to an image of the anharmonicity. This image transition of the anharmonicity has proven superior at creating a $\pi$-pulse between the $|1\rangle$ and $|2\rangle$ states.
```python
exps = 74
# Frequency range to scan in GHz
# Anharmonicity is around -349 MHz and its image is around +400 MHz.
frequency_offsets = np.linspace(0.396, 0.404, exps)
schedules = []
for freq in frequency_offsets:
    sched = Schedule(name='Frequency %f MHz' % (freq*1e3))
spec_pulse = add_modulation(Gaussian(3200, 0.1, 800), freq*1e9, dt)
#spec_pulse = add_modulation(xp, freq*1e9, dt, scale=0.88)
sched += Play(xp, d0)
sched += Play(spec_pulse, d0)
sched += measure_all(backend) << sched.duration
schedules.append(sched)
```
```python
schedules[32].draw(plot_range=[0, 7500], style=style, channels=[DriveChannel(0), MeasureChannel(0)])
```
```python
qobj = assemble(schedules, backend, meas_level=1,
meas_return=MeasReturnType.AVERAGE,
shots=shots)
```
```python
job = backend.run(qobj)
job.job_id()
```
'5e8af77747a99200182d40cc'
```python
job.status()
```
<JobStatus.DONE: 'job has successfully run'>
```python
result = get_job_data(job, True, 0, 1e-15)
spec_signal = abs(np.array(result))
spec_signal = np.average(spec_signal) - spec_signal
```
```python
popt, pcov = curve_fit(lorenz, frequency_offsets, spec_signal, [0.015, 0.4, 0.0045, 0.1])
anharmonicity = popt[1]*1e9
```
```python
anharmonicity
```
399668282.76144534
```python
fig = plt.figure()
plt.rcParams['font.size'] = 20
plt.plot(frequency_offsets, lorenz(frequency_offsets, *popt))
plt.plot(frequency_offsets, spec_signal, 'ok')
plt.xlabel('Frequency detuning (GHz)')
plt.ylabel('Signal (arb units.)')
plt.title('Anharmonicity %.2f MHz' % (anharmonicity/1e6));
```
## Calibrate a $\pi$-pulse between 1 and 2
Since we now know the anharmonicity $\alpha$ of the transmon, we can apply a frequency modulation $e^{i\alpha t}$ to the pulse envelope $\Omega(t)$ that implements an $X$ gate between $|0\rangle$ and $|1\rangle$ to drive the $|1\rangle$ to $|2\rangle$ transition.
However, the resulting pulse will not have the correct amplitude to implement a $\pi$-pulse between $|1\rangle$ and $|2\rangle$.
To calibrate this pulse we scan a scaling factor $\eta$ of the pulse envelope $\eta\,\Omega(t)\,e^{i\alpha t}$ to measure a Rabi oscillation between $|1\rangle$ and $|2\rangle$.
```python
xp12_scales = np.linspace(0., 1.6, 74)
schedules = []
for amp in xp12_scales:
sched = Schedule(name='Amplitude {:.3f} (\\% AWG output)'.format(amp))
xp12 = add_modulation(xp, anharmonicity, dt, scale=amp)
sched += Play(xp, d0)
sched += Play(xp12, d0)
sched += measure_all(backend) << sched.duration
schedules.append(sched)
```
```python
schedules[73].draw(plot_range=[0, 2000], style=style, channels=[d0, MeasureChannel(0)])
```
```python
qobj = assemble(schedules, backend, meas_level=1,
meas_return=MeasReturnType.AVERAGE,
shots=1024)
```
```python
job = backend.run(qobj)
job.job_id()
```
'5e8af8695b8bdd00181fb5c7'
```python
job.status()
```
<JobStatus.DONE: 'job has successfully run'>
```python
def amp_func(x, a, b, c):
"""Function used to fit the amplitude scan."""
return a*np.cos(np.pi*x/b)+c
```
```python
signal = np.real(np.array(get_job_data(job, True, 0, 1e-15)))
popt, pcov = curve_fit(amp_func, xp12_scales, signal, [0.8, 2.0, 0.5])
xp12_scale = abs(popt[1])
```
```python
fig = plt.figure()
plt.plot(xp12_scales, amp_func(xp12_scales, *popt))
plt.plot(xp12_scales, signal, 'ok')
plt.plot([xp12_scale]*2, [min(signal), max(signal)])
plt.xlabel('Amplitude of xp12 relative to xp')
plt.ylabel('Signal (arb. units)')
plt.title('Relative amplitude %.2f %%' % (xp12_scale*100));
```
## Create a 0, 1, 2 discriminator with Qiskit Ignis
We should now have a calibrated $\pi$-pulse between the $|1\rangle$ and $|2\rangle$ states.
We check this by preparing calibration schedules to see if we can distinguish between the $|0\rangle$, $|1\rangle$ and $|2\rangle$ states in the IQ plane.
This measurement is then used to build a 0-1-2 discriminator with Qiskit-Ignis.
Note that we also added a small modulation to the readout pulse to improve the efficiency of the 0-1-2 discriminator.
```python
# The measurement pulse may not be setup to distinguish between 0, 1, and 2
# This line of code allows us to adjust the frequency and amplitude
# of the measurement pulse
meas = measure_all(backend).instructions[1][1].pulse
meas_freq_shift = -0.075e6
meas_pulse = add_modulation(meas, meas_freq_shift, dt, scale=0.9)
meas_sched = Schedule(name='meas %f' % meas_freq_shift)
meas_sched += Play(meas_pulse, MeasureChannel(0))
meas_sched += Acquire(len(meas.samples), AcquireChannel(0), MemorySlot(0))
schedules = []
cal0 = Schedule(name='cal_0')
cal0 += meas_sched << cal0.duration
schedules.append(cal0)
cal1 = Schedule(name='cal_1')
cal1 += Play(xp, d0)
cal1 += meas_sched << cal1.duration
schedules.append(cal1)
# Create the calibrated pi-pulse between |1> and |2>
xp12 = add_modulation(xp, anharmonicity, dt, scale=xp12_scale)
cal2 = Schedule(name='cal_2')
cal2 += Play(xp, d0)
cal2 += Play(xp12, d0)
cal2 += meas_sched << cal2.duration
schedules.append(cal2)
```
```python
schedules[2].draw(style=style, channels=[d0, MeasureChannel(0), AcquireChannel(0)])
```
```python
qobj = assemble(schedules, backend, meas_level=1,
meas_return=MeasReturnType.SINGLE,
shots=1024)
```
```python
job = backend.run(qobj)
job.job_id()
```
'5e8af9100f58730019b2191b'
```python
job.status()
```
<JobStatus.DONE: 'job has successfully run'>
```python
cal_result_disc = job.result(timeout=3600)
```
```python
svc = SVC(C=0.01, kernel="rbf", gamma="scale")
svc_discriminator = SklearnIQDiscriminator(svc, cal_result_disc, [0], ['0', '1', '2'])
filter012 = DiscriminationFilter(svc_discriminator)
```
The data in the figure below should show three clearly separated clusters in the IQ plane which correspond to $|0\rangle$, $|1\rangle$, and $|2\rangle$. Since the discriminator must discriminate three states we use the support vector machine provided by Scikit-learn.
```python
fig, ax = plt.subplots(1, 1, figsize=(8,5))
svc_discriminator.plot(ax, flag_misclassified=False, show_boundary=True);
```
This measurement can also be used to correct for readout errors using the readout error mitigation tools provided by Qiskit Ignis. In the code below the discriminator returns the states encoded in binary, therefore 0, 1, and 2 are represented by the strings '0', '1', and '10', respectively.
```python
cal_counts = filter012.apply(cal_result_disc).get_counts()
cal_matrix = np.array([[cal_counts[0].get('0',0), cal_counts[1].get('0',0), cal_counts[2].get('0',0)],
[cal_counts[0].get('1',0), cal_counts[1].get('1',0), cal_counts[2].get('1',0)],
[cal_counts[0].get('10',0), cal_counts[1].get('10',0), cal_counts[2].get('10',0)]])/shots
meas_filter = MeasurementFilter(cal_matrix, ['0', '1', '10'])
```
```python
plt.imshow(cal_matrix, cmap='gray')
plt.xlabel('Prepared state')
plt.ylabel('Measured state');
```
## Geometric single-qubit gates
We now have all the tools needed to implement a single-qubit geometric gate between the $|0\rangle$ and $|2\rangle$ states. This is done by *simultaneously* applying two back-to-back $\pi$-pulses (which corresponds to a $2\pi$ rotation) between the $|0\rangle$ and $|1\rangle$ states and the $|1\rangle$ and $|2\rangle$ states. By using two $\pi$-pulses we save ourselves the task of having to calibrate the $2\pi$-pulses. The pulses that implement the single-qubit geometric gate have the envelope
\begin{align}
a\,\Omega(t)+b\,\eta\,\Omega(t)\,e^{i\alpha t}.
\end{align}
The complex scalars $a$ and $b$ define the rotation angle $\theta$ and phase $\phi$ of the unitary gate that will be implemented on the Bloch sphere created by $|0\rangle$ and $|2\rangle$. $\theta$ and $\phi$ are related to $a$ and $b$ by
\begin{align}
e^{i\phi}\tan\frac{\theta}{2}=\frac{a}{b}.
\end{align}
Note that the pulses that create the single-qubit geometric gate cannot be implemented with the `SetFrequency` instruction.
```python
def geometric(a: complex, b: complex, xp: SamplePulse, xp12: SamplePulse, name='') -> Schedule:
"""
Create a single-qubit geometric gate between the 0 and 2 state.
In principle the geometric gate is built from a 2pi rotation between 0 and
1 and a 2pi rotation between 1 and 2. Since we have not calibrated the 2pi
    rotations we will emulate them with two back-to-back pi rotations.
The geometric gate creates a rotation on the Bloch Sphere between |0> and |2>.
The rotation angle theta and phase phi on the (0,2) Bloch sphere are
given by e^(i*phi)*tan(theta/2)=a/b.
Args:
a: Parameter to control the geometric rotation.
b: Parameter to control the geometric rotation.
xp: Calibrated X gate between |0> and |1>.
xp12: Calibrated X gate between |1> and |2>.
Returns: A schedules with the geometric gate.
"""
if abs(abs(a)**2 + abs(b)**2 - 1.0) > 1e-4:
        raise ValueError('a and b must satisfy |a|**2 + |b|**2 = 1.')
geom_samples = a*xp.samples + b*xp12.samples
geom = SamplePulse(geom_samples)
sched = Schedule(name='Geometric (%f, %f) ' % (a, b)+name)
sched += Play(geom, d0)
sched += Play(geom, d0)
sched += meas_sched << sched.duration
return sched
```
```python
schedules = [cal0, cal1, cal2]
angles = np.linspace(0., np.pi, 70)
for angle in angles:
a = np.sin(angle/2)
schedules.append(geometric(a, np.sqrt(1.0-a*a), xp, xp12))
```
The schedule below shows the single-qubit geometric gate, implemented as two back-to-back superpositions of $X_{01}$ and $X_{12}$. These pulses can only be implemented on high resolution devices and cannot be implemented with the `SetFrequency` instruction.
```python
schedules[65].draw(plot_range=[0, 1500], style=style, channels=[d0, MeasureChannel(0)])
```
```python
qobj = assemble(schedules, backend, meas_level=1,
meas_return=MeasReturnType.SINGLE,
shots=1024)
```
```python
job = backend.run(qobj)
job.job_id()
```
'5e8afa81d8deff0019b54e4c'
```python
job.status()
```
<JobStatus.DONE: 'job has successfully run'>
```python
geom_result = job.result(timeout=3600)
```
To analyse the data we first apply the 012 discriminator that we built with Qiskit-Ignis. Next, we employ our measurement error mitigation filter to correct the readout.
```python
# Apply discriminator
geom_discriminated = filter012.apply(geom_result)
# Mitigate readout errors
geom_mitigated = meas_filter.apply(geom_discriminated)
# Compute the population counts
counts = geom_mitigated.get_counts()
pop0 = [cnt.get('0',0)/shots for cnt in counts[3:]]
pop1 = [cnt.get('1',0)/shots for cnt in counts[3:]]
pop2 = [cnt.get('10',0)/shots for cnt in counts[3:]]
```
```python
fig, ax = plt.subplots(1, 2, figsize=(14,5), tight_layout=True)
svc_discriminator.plot(ax[0], flag_misclassified=False, show_boundary=True, show_fitting_data=False);
for idx in range(len(angles)):
iqs = geom_result.get_memory(idx+3)
ax[0].scatter(np.real(iqs), np.imag(iqs), s=3, color='k',alpha=0.05)
ax[0].set_title('')
ax[1].plot(np.linspace(0, np.pi, 100), np.cos(np.linspace(0, np.pi, 100))**2, '--k')
ax[1].plot(np.linspace(0, np.pi, 100), np.sin(np.linspace(0, np.pi, 100))**2, '--k', label='Ideal')
ax[1].plot(angles, pop0, 'oC0', label='$|0\\rangle$')
ax[1].plot(angles, pop1, 'oC2', label='$|1\\rangle$')
ax[1].plot(angles, pop2, 'oC1', label='$|2\\rangle$')
ax[1].set_xlabel('Rotation angle (rad)')
ax[1].set_ylabel('Transmon population')
ax[1].set_xticks([0, np.pi/2, np.pi])
ax[1].set_xticklabels(['0', '$\pi/2$', '$\pi$'])
ax[1].set_ylim([0, 1.3])
ax[1].set_yticks([0, 0.25, 0.50, 0.75, 1.0])
ax[1].legend(ncol=4, loc=1, fontsize=16, columnspacing=0);
```
```python
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
<h3>Version Information</h3><table><tr><th>Qiskit Software</th><th>Version</th></tr><tr><td>Qiskit</td><td>None</td></tr><tr><td>Terra</td><td>0.13.0.dev0+4bf9fd5</td></tr><tr><td>Aer</td><td>0.4.1</td></tr><tr><td>Ignis</td><td>0.3.0.dev0+5400ab2</td></tr><tr><td>Aqua</td><td>None</td></tr><tr><td>IBM Q Provider</td><td>0.5.0</td></tr><tr><th>System information</th></tr><tr><td>Python</td><td>3.7.3 (default, Mar 27 2019, 22:11:17)
[GCC 7.3.0]</td></tr><tr><td>OS</td><td>Linux</td></tr><tr><td>CPUs</td><td>4</td></tr><tr><td>Memory (Gb)</td><td>7.775188446044922</td></tr><tr><td colspan='2'>Fri Apr 03 15:06:15 2020 CEST</td></tr></table>
<div style='width: 100%; background-color:#d5d9e0;padding-left: 10px; padding-bottom: 10px; padding-right: 10px; padding-top: 5px'><h3>This code is a part of Qiskit</h3><p>© Copyright IBM 2017, 2020.</p><p>This code is licensed under the Apache License, Version 2.0. You may<br>obtain a copy of this license in the LICENSE.txt file in the root directory<br> of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.<p>Any modifications or derivative works of this code must retain this<br>copyright notice, and modified files need to carry a notice indicating<br>that they have been altered from the originals.</p></div>
```python
```
|
863dd42dccc98e75db35fc4c98597beb77eaa006
| 513,607 |
ipynb
|
Jupyter Notebook
|
terra/qis_adv/single_qubit_geometric_gates.ipynb
|
YumaNK/qiskit-community-tutorials
|
491fbb7ef1f99772d25eb6eacb4340ef1ac75253
|
[
"Apache-2.0"
] | 293 |
2020-05-29T17:03:04.000Z
|
2022-03-31T07:09:50.000Z
|
terra/qis_adv/single_qubit_geometric_gates.ipynb
|
YumaNK/qiskit-community-tutorials
|
491fbb7ef1f99772d25eb6eacb4340ef1ac75253
|
[
"Apache-2.0"
] | 30 |
2020-06-23T19:11:32.000Z
|
2021-12-20T22:25:54.000Z
|
terra/qis_adv/single_qubit_geometric_gates.ipynb
|
YumaNK/qiskit-community-tutorials
|
491fbb7ef1f99772d25eb6eacb4340ef1ac75253
|
[
"Apache-2.0"
] | 204 |
2020-06-08T12:55:52.000Z
|
2022-03-31T08:37:14.000Z
| 469.047489 | 145,396 | 0.940762 | true | 5,995 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.771844 | 0.672332 | 0.518935 |
__label__eng_Latn
| 0.917112 | 0.043989 |
<!-- dom:TITLE: PHY321: Time-dependent Forces and Fourier Series, begin two-body problems -->
# PHY321: Time-dependent Forces and Fourier Series, begin two-body problems
<!-- dom:AUTHOR: [Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/) at Department of Physics and Astronomy and Facility for Rare Ion Beams (FRIB), Michigan State University, USA & Department of Physics, University of Oslo, Norway -->
<!-- Author: -->
**[Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/)**, Department of Physics and Astronomy and Facility for Rare Ion Beams (FRIB), Michigan State University, USA and Department of Physics, University of Oslo, Norway
Date: **Mar 7, 2021**
Copyright 1999-2021, [Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/). Released under CC Attribution-NonCommercial 4.0 license
## Aims and Overarching Motivation
Driven oscillations and resonances with numerical examples and Fourier Series
### Monday
**Reading suggestion**: Taylor sections 5.6-5.8.
### Wednesday
Summary oscillations and resonances with examples and Fourier Series
**Reading suggestion**: Taylor chapter 5.
### Friday
Begin two-body problems
**Reading suggestion**: Taylor section 8.2.
## Numerical Studies of Driven Oscillations
Solving the problem of driven oscillations numerically gives us much
more flexibility to study different types of driving forces. We can
reuse our earlier code by simply adding a driving force. If we stay in
the $x$-direction only this can be easily done by adding a term
$F_{\mathrm{ext}}(x,t)$. Note that we have kept it rather general
here, allowing for both a spatial and a temporal dependence.
Before we dive into the code, we need to briefly remind ourselves
about the equations we started with for the case with damping, namely
$$
m\frac{d^2x}{dt^2} + b\frac{dx}{dt}+kx(t) =0,
$$
with no external force applied to the system.
Let us now for simplicty assume that our external force is given by
$$
F_{\mathrm{ext}}(t) = F_0\cos{(\omega t)},
$$
where $F_0$ is a constant (what is its dimension?) and $\omega$ is the frequency of the applied external driving force.
**Small question:** would you expect energy to be conserved now?
Introducing the external force into our lovely differential equation
and dividing by $m$ and introducing $\omega_0=\sqrt{k/m}$ we have
$$
\frac{d^2x}{dt^2} + \frac{b}{m}\frac{dx}{dt}+\omega_0^2x(t) =\frac{F_0}{m}\cos{(\omega t)},
$$
Thereafter we introduce a dimensionless time $\tau = t\omega_0$
and a dimensionless frequency $\tilde{\omega}=\omega/\omega_0$. We have then
$$
\frac{d^2x}{d\tau^2} + \frac{b}{m\omega_0}\frac{dx}{d\tau}+x(\tau) =\frac{F_0}{m\omega_0^2}\cos{(\tilde{\omega}\tau)},
$$
Introducing a new amplitude $\tilde{F} =F_0/(m\omega_0^2)$ (check dimensionality again) we have
$$
\frac{d^2x}{d\tau^2} + \frac{b}{m\omega_0}\frac{dx}{d\tau}+x(\tau) =\tilde{F}\cos{(\tilde{\omega}\tau)}.
$$
Our final step, as we did in the case of various types of damping, is
to define $\gamma = b/(2m\omega_0)$ and rewrite our equations as
$$
\frac{d^2x}{d\tau^2} + 2\gamma\frac{dx}{d\tau}+x(\tau) =\tilde{F}\cos{(\tilde{\omega}\tau)}.
$$
This is the equation we will code below using the Euler-Cromer method.
```python
# Common imports; these are normally defined in a setup cell at the top of the notes,
# repeated here so that this cell runs on its own.
import numpy as np
import matplotlib.pyplot as plt
from math import ceil, cos

def save_fig(name):
    # minimal stand-in for the save_fig helper used throughout these notes
    plt.savefig(name + ".png")

DeltaT = 0.001
#set up arrays
tfinal = 20 # final time (in dimensionless units)
n = ceil(tfinal/DeltaT)
# set up arrays for t, v, and x
t = np.zeros(n)
v = np.zeros(n)
x = np.zeros(n)
# Initial conditions as one-dimensional arrays of time
x0 = 1.0
v0 = 0.0
x[0] = x0
v[0] = v0
gamma = 0.2
Omegatilde = 0.5
Ftilde = 1.0
# Start integrating using Euler-Cromer's method
for i in range(n-1):
# Set up the acceleration
# Here you could have defined your own function for this
a = -2*gamma*v[i]-x[i]+Ftilde*cos(t[i]*Omegatilde)
# update velocity, time and position
v[i+1] = v[i] + DeltaT*a
x[i+1] = x[i] + DeltaT*v[i+1]
t[i+1] = t[i] + DeltaT
# Plot position as function of time
fig, ax = plt.subplots()
ax.set_ylabel('x[m]')
ax.set_xlabel('t[s]')
ax.plot(t, x)
fig.tight_layout()
save_fig("ForcedBlockEulerCromer")
plt.show()
```
In the above example we have focused on the Euler-Cromer method. This
method has a local truncation error which is proportional to $\Delta t^2$
and thereby a global error which is proportional to $\Delta t$.
We can improve this by using the Runge-Kutta family of
methods. The widely popular Runge-Kutta to fourth order or just **RK4**
has indeed a much better truncation error. The RK4 method has a global
error which is proportional to $\Delta t^4$.
Let us revisit this method and see how we can implement it for the above example.
## Differential Equations, Runge-Kutta methods
Runge-Kutta (RK) methods are based on Taylor expansion formulae, but yield
in general better algorithms for solutions of an ordinary differential equation.
The basic philosophy is that it provides an intermediate step in the computation of $y_{i+1}$.
To see this, consider first the following definitions
<!-- Equation labels as ordinary links -->
<div id="_auto1"></div>
$$
\begin{equation}
\frac{dy}{dt}=f(t,y),
\label{_auto1} \tag{1}
\end{equation}
$$
and
<!-- Equation labels as ordinary links -->
<div id="_auto2"></div>
$$
\begin{equation}
y(t)=\int f(t,y) dt,
\label{_auto2} \tag{2}
\end{equation}
$$
and
<!-- Equation labels as ordinary links -->
<div id="_auto3"></div>
$$
\begin{equation}
y_{i+1}=y_i+ \int_{t_i}^{t_{i+1}} f(t,y) dt.
\label{_auto3} \tag{3}
\end{equation}
$$
To demonstrate the philosophy behind RK methods, let us consider
the second-order RK method, RK2.
The first approximation consists in Taylor expanding $f(t,y)$
around the center of the integration interval $t_i$ to $t_{i+1}$,
that is, at $t_i+h/2$, $h$ being the step.
Using the midpoint formula for an integral,
defining $y(t_i+h/2) = y_{i+1/2}$ and
$t_i+h/2 = t_{i+1/2}$, we obtain
<!-- Equation labels as ordinary links -->
<div id="_auto4"></div>
$$
\begin{equation}
\int_{t_i}^{t_{i+1}} f(t,y) dt \approx hf(t_{i+1/2},y_{i+1/2}) +O(h^3).
\label{_auto4} \tag{4}
\end{equation}
$$
This means in turn that we have
<!-- Equation labels as ordinary links -->
<div id="_auto5"></div>
$$
\begin{equation}
y_{i+1}=y_i + hf(t_{i+1/2},y_{i+1/2}) +O(h^3).
\label{_auto5} \tag{5}
\end{equation}
$$
However, we do not know the value of $y_{i+1/2}$. The next approximation is therefore to use Euler's
method to approximate $y_{i+1/2}$. We have then
<!-- Equation labels as ordinary links -->
<div id="_auto6"></div>
$$
\begin{equation}
y_{(i+1/2)}=y_i + \frac{h}{2}\frac{dy}{dt}=y(t_i) + \frac{h}{2}f(t_i,y_i).
\label{_auto6} \tag{6}
\end{equation}
$$
This means that we can define the following algorithm for
the second-order Runge-Kutta method, RK2.
<!-- Equation labels as ordinary links -->
<div id="_auto7"></div>

$$
\begin{equation}
k_1=hf(t_i,y_i),
\label{_auto7} \tag{7}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="_auto8"></div>
$$
\begin{equation}
k_2=hf(t_{i+1/2},y_i+k_1/2),
\label{_auto8} \tag{8}
\end{equation}
$$
with the final value
<!-- Equation labels as ordinary links -->
<div id="_auto9"></div>
$$
\begin{equation}
y_{i+i}\approx y_i + k_2 +O(h^3).
\label{_auto9} \tag{9}
\end{equation}
$$
The difference with respect to the previous one-step methods
is that we now need an intermediate step in our evaluation,
namely $t_i+h/2 = t_{(i+1/2)}$ where we evaluate the derivative $f$.
This involves more operations, but the gain is a better stability
in the solution.
The fourth-order Runge-Kutta, RK4, has the following algorithm
$$
k_1=hf(t_i,y_i)\hspace{0.5cm} k_2=hf(t_i+h/2,y_i+k_1/2)
$$
$$
k_3=hf(t_i+h/2,y_i+k_2/2)\hspace{0.5cm} k_4=hf(t_i+h,y_i+k_3)
$$
with the final result
$$
y_{i+1}=y_i +\frac{1}{6}\left( k_1 +2k_2+2k_3+k_4\right).
$$
Thus, the algorithm consists in first calculating $k_1$
with $t_i$, $y_i$ and $f$ as inputs. Thereafter, we increase the step
size by $h/2$ and calculate $k_2$, then $k_3$ and finally $k_4$. The global error goes as $O(h^4)$.
However, at this stage, if we keep adding different methods in our
main program, the code will quickly become messy and ugly. Before we
proceed thus, we will now introduce functions that embody the various
methods for solving differential equations. This means that we can
separate out these methods in own functions and files (and later as classes and more
generic functions) and simply call them when needed. Similarly, we
could easily encapsulate various forces or other quantities of
interest in terms of functions. To see this, let us bring up the code
we developed above for the simple sliding block, but now only with the simple forward Euler method. We introduce
two functions, one for the simple Euler method and one for the
force.
Note that here the forward Euler method does not know the specific force function to be called.
It simply receives the force function as an argument. We can easily change the force by adding another function.
```python
def ForwardEuler(v,x,t,n,Force):
for i in range(n-1):
v[i+1] = v[i] + DeltaT*Force(v[i],x[i],t[i])
x[i+1] = x[i] + DeltaT*v[i]
t[i+1] = t[i] + DeltaT
```
```python
def SpringForce(v,x,t):
# note here that we have divided by mass and we return the acceleration
return -2*gamma*v-x+Ftilde*cos(t*Omegatilde)
```
It is easy to add a new method like the Euler-Cromer
```python
def ForwardEulerCromer(v,x,t,n,Force):
for i in range(n-1):
a = Force(v[i],x[i],t[i])
v[i+1] = v[i] + DeltaT*a
x[i+1] = x[i] + DeltaT*v[i+1]
t[i+1] = t[i] + DeltaT
```
and the Velocity Verlet method (be careful with time-dependence here, it is not an ideal method for non-conservative forces)
```python
def VelocityVerlet(v,x,t,n,Force):
for i in range(n-1):
a = Force(v[i],x[i],t[i])
        x[i+1] = x[i] + DeltaT*v[i]+0.5*DeltaT*DeltaT*a
        anew = Force(v[i],x[i+1],t[i]+DeltaT)
v[i+1] = v[i] + 0.5*DeltaT*(a+anew)
t[i+1] = t[i] + DeltaT
```
Finally, we can now add the Runge-Kutta2 method via a new function
```python
def RK2(v,x,t,n,Force):
for i in range(n-1):
# Setting up k1
k1x = DeltaT*v[i]
k1v = DeltaT*Force(v[i],x[i],t[i])
# Setting up k2
vv = v[i]+k1v*0.5
xx = x[i]+k1x*0.5
k2x = DeltaT*vv
k2v = DeltaT*Force(vv,xx,t[i]+DeltaT*0.5)
# Final result
x[i+1] = x[i]+k2x
v[i+1] = v[i]+k2v
t[i+1] = t[i]+DeltaT
```
Finally, we can now add the fourth-order Runge-Kutta method, RK4, via a new function
```python
def RK4(v,x,t,n,Force):
for i in range(n-1):
# Setting up k1
k1x = DeltaT*v[i]
k1v = DeltaT*Force(v[i],x[i],t[i])
# Setting up k2
vv = v[i]+k1v*0.5
xx = x[i]+k1x*0.5
k2x = DeltaT*vv
k2v = DeltaT*Force(vv,xx,t[i]+DeltaT*0.5)
# Setting up k3
vv = v[i]+k2v*0.5
xx = x[i]+k2x*0.5
k3x = DeltaT*vv
k3v = DeltaT*Force(vv,xx,t[i]+DeltaT*0.5)
# Setting up k4
vv = v[i]+k3v
xx = x[i]+k3x
k4x = DeltaT*vv
k4v = DeltaT*Force(vv,xx,t[i]+DeltaT)
# Final result
x[i+1] = x[i]+(k1x+2*k2x+2*k3x+k4x)/6.
v[i+1] = v[i]+(k1v+2*k2v+2*k3v+k4v)/6.
t[i+1] = t[i] + DeltaT
```
The Runge-Kutta family of methods are particularly useful when we have a time-dependent acceleration.
If we have forces which depend only on the spatial degrees of freedom (no velocity and/or time-dependence), then energy conserving methods like the Velocity Verlet or the Euler-Cromer method are preferred. As soon as we introduce an explicit time-dependence and/or add dissipative forces like friction or air resistance, then methods like the family of Runge-Kutta methods are well suited for this.
The code below uses the Runge-Kutta4 methods.
```python
DeltaT = 0.001
#set up arrays
tfinal = 20 # final time (in dimensionless units)
n = ceil(tfinal/DeltaT)
# set up arrays for t, v, and x
t = np.zeros(n)
v = np.zeros(n)
x = np.zeros(n)
# Initial conditions (can change to more than one dim)
x0 = 1.0
v0 = 0.0
x[0] = x0
v[0] = v0
gamma = 0.2
Omegatilde = 0.5
Ftilde = 1.0
# Start integrating using the RK4 method
# Note that we define the force function as a SpringForce
RK4(v,x,t,n,SpringForce)
# Plot position as function of time
fig, ax = plt.subplots()
ax.set_ylabel('x[m]')
ax.set_xlabel('t[s]')
ax.plot(t, x)
fig.tight_layout()
save_fig("ForcedBlockRK4")
plt.show()
```
<!-- !split -->
## Principle of Superposition and Periodic Forces (Fourier Transforms)
If one has several driving forces, $F(t)=\sum_n F_n(t)$, one can find
the particular solution to each $F_n$, $x_{pn}(t)$, and the particular
solution for the entire driving force is
<!-- Equation labels as ordinary links -->
<div id="_auto10"></div>
$$
\begin{equation}
x_p(t)=\sum_nx_{pn}(t).
\label{_auto10} \tag{10}
\end{equation}
$$
This is known as the principle of superposition. It only applies when
the homogeneous equation is linear. If there were an anharmonic term
such as $x^3$ in the homogeneous equation, then when one summed various
solutions, the term $(\sum_n x_n)^3$ would generate cross
terms. Superposition is especially useful when $F(t)$ can be written
as a sum of sinusoidal terms, because the solutions for each
sinusoidal (sine or cosine) term is analytic, as we saw above.
Driving forces are often periodic, even when they are not
sinusoidal. Periodicity implies that for some time $\tau$
$$
\begin{eqnarray}
F(t+\tau)=F(t).
\end{eqnarray}
$$
One example of a non-sinusoidal periodic force is a square wave. Many
components in electric circuits are non-linear, e.g. diodes, which
makes many wave forms non-sinusoidal even when the circuits are being
driven by purely sinusoidal sources.
The code here shows a typical example of such a square wave generated using the functionality included in the **scipy** Python package. We have used a period of $\tau=0.2$.
```python
%matplotlib inline
import numpy as np
import math
from scipy import signal
import matplotlib.pyplot as plt
# number of points
n = 500
# start and final times
t0 = 0.0
tn = 1.0
# Period
t = np.linspace(t0, tn, n, endpoint=False)
SqrSignal = np.zeros(n)
SqrSignal = 1.0+signal.square(2*np.pi*5*t)
plt.plot(t, SqrSignal)
plt.ylim(-0.5, 2.5)
plt.show()
```
For the sinusoidal example studied in the previous week the
period is $\tau=2\pi/\omega$. However, higher harmonics can also
satisfy the periodicity requirement. In general, any force that
satisfies the periodicity requirement can be expressed as a sum over
harmonics,
<!-- Equation labels as ordinary links -->
<div id="_auto11"></div>
$$
\begin{equation}
F(t)=\frac{f_0}{2}+\sum_{n>0} f_n\cos(2n\pi t/\tau)+g_n\sin(2n\pi t/\tau).
\label{_auto11} \tag{11}
\end{equation}
$$
We can write down the answer for
$x_{pn}(t)$, by substituting $f_n/m$ or $g_n/m$ for $F_0/m$. By
writing each factor $2n\pi t/\tau$ as $n\omega t$, with $\omega\equiv
2\pi/\tau$,
<!-- Equation labels as ordinary links -->
<div id="eq:fourierdef1"></div>
$$
\begin{equation}
\label{eq:fourierdef1} \tag{12}
F(t)=\frac{f_0}{2}+\sum_{n>0}f_n\cos(n\omega t)+g_n\sin(n\omega t).
\end{equation}
$$
The solutions for $x(t)$ then come from replacing $\omega$ with
$n\omega$ for each term in the particular solution,
$$
\begin{eqnarray}
x_p(t)&=&\frac{f_0}{2k}+\sum_{n>0} \alpha_n\cos(n\omega t-\delta_n)+\beta_n\sin(n\omega t-\delta_n),\\
\nonumber
\alpha_n&=&\frac{f_n/m}{\sqrt{((n\omega)^2-\omega_0^2)^2+4\beta^2n^2\omega^2}},\\
\nonumber
\beta_n&=&\frac{g_n/m}{\sqrt{((n\omega)^2-\omega_0^2)^2+4\beta^2n^2\omega^2}},\\
\nonumber
\delta_n&=&\tan^{-1}\left(\frac{2\beta n\omega}{\omega_0^2-n^2\omega^2}\right).
\end{eqnarray}
$$
Because the forces have been applied for a long time, any non-zero
damping eliminates the homogeneous parts of the solution, so one need
only consider the particular solution for each $n$.
The problem will considered solved if one can find expressions for the
coefficients $f_n$ and $g_n$, even though the solutions are expressed
as an infinite sum. The coefficients can be extracted from the
function $F(t)$ by
<!-- Equation labels as ordinary links -->
<div id="eq:fourierdef2"></div>
$$
\begin{eqnarray}
\label{eq:fourierdef2} \tag{13}
f_n&=&\frac{2}{\tau}\int_{-\tau/2}^{\tau/2} dt~F(t)\cos(2n\pi t/\tau),\\
\nonumber
g_n&=&\frac{2}{\tau}\int_{-\tau/2}^{\tau/2} dt~F(t)\sin(2n\pi t/\tau).
\end{eqnarray}
$$
To check the consistency of these expressions and to verify
Eq. ([13](#eq:fourierdef2)), one can insert the expansion of $F(t)$ in
Eq. ([12](#eq:fourierdef1)) into the expression for the coefficients in
Eq. ([13](#eq:fourierdef2)) and see whether
$$
\begin{eqnarray}
f_n&=?&\frac{2}{\tau}\int_{-\tau/2}^{\tau/2} dt~\left\{
\frac{f_0}{2}+\sum_{m>0}f_m\cos(m\omega t)+g_m\sin(m\omega t)
\right\}\cos(n\omega t).
\end{eqnarray}
$$
Immediately, one can throw away all the terms with $g_m$ because they
convolute an even and an odd function. The term with $f_0/2$
disappears because $\cos(n\omega t)$ is equally positive and negative
over the interval and will integrate to zero. For all the terms
$f_m\cos(m\omega t)$ appearing in the sum, one can use angle addition
formulas to see that $\cos(m\omega t)\cos(n\omega
t)=(1/2)(\cos[(m+n)\omega t]+\cos[(m-n)\omega t]$. This will integrate
to zero unless $m=n$. In that case the $m=n$ term gives
<!-- Equation labels as ordinary links -->
<div id="_auto12"></div>
$$
\begin{equation}
\int_{-\tau/2}^{\tau/2}dt~\cos^2(m\omega t)=\frac{\tau}{2},
\label{_auto12} \tag{14}
\end{equation}
$$
and
$$
\begin{eqnarray}
f_n&=?&\frac{2}{\tau}\int_{-\tau/2}^{\tau/2} dt~f_n/2\\
\nonumber
&=&f_n~\checkmark.
\end{eqnarray}
$$
The same method can be used to check for the consistency of $g_n$.
Consider the driving force:
<!-- Equation labels as ordinary links -->
<div id="_auto13"></div>
$$
\begin{equation}
F(t)=At/\tau,~~-\tau/2<t<\tau/2,~~~F(t+\tau)=F(t).
\label{_auto13} \tag{15}
\end{equation}
$$
Find the Fourier coefficients $f_n$ and $g_n$ for all $n$ using Eq. ([13](#eq:fourierdef2)).
Because $F(t)$ is an odd function only the sine terms contribute, i.e. $f_n=0$ for all $n$. One can find $g_n$ by integrating by parts,
<!-- Equation labels as ordinary links -->
<div id="eq:fouriersolution"></div>
$$
\begin{eqnarray}
\label{eq:fouriersolution} \tag{16}
g_n&=&\frac{2}{\tau}\int_{-\tau/2}^{\tau/2}dt~\sin(n\omega t) \frac{At}{\tau}\\
\nonumber
u&=&t,~dv=\sin(n\omega t)dt,~v=-\cos(n\omega t)/(n\omega),\\
\nonumber
g_n&=&\frac{-2A}{n\omega \tau^2}\int_{-\tau/2}^{\tau/2}dt~\cos(n\omega t)
+\left.2A\frac{-t\cos(n\omega t)}{n\omega\tau^2}\right|_{-\tau/2}^{\tau/2}.
\end{eqnarray}
$$
The first term is zero because $\cos(n\omega t)$ will be equally
positive and negative over the interval. Using the fact that
$\omega\tau=2\pi$,
$$
\begin{eqnarray}
g_n&=&-\frac{2A}{2n\pi}\cos(n\omega\tau/2)\\
\nonumber
&=&-\frac{A}{n\pi}\cos(n\pi)\\
\nonumber
&=&\frac{A}{n\pi}(-1)^{n+1}.
\end{eqnarray}
$$
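As a quick numerical cross-check of this result (an addition to the text, with $A=\tau=1$ chosen arbitrarily), we can evaluate Eq. ([13](#eq:fourierdef2)) for the sawtooth force by quadrature and compare with $A(-1)^{n+1}/(n\pi)$:

```python
import numpy as np
from scipy.integrate import quad

A, tau = 1.0, 1.0
omega = 2*np.pi/tau

def F(t):
    return A*t/tau

for n in range(1, 6):
    # g_n from Eq. (13), evaluated numerically
    gn_numerical, _ = quad(lambda t: 2.0/tau*F(t)*np.sin(n*omega*t), -tau/2, tau/2)
    gn_analytic = A/(n*np.pi)*(-1)**(n+1)
    print(n, gn_numerical, gn_analytic)
```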
## Fourier Series
More text will come here, chapter 5.7-5.8 of Taylor are discussed
during the lectures. The code here uses the Fourier series discussed
in chapter 5.7 for a square wave signal. The equations for the
coefficients are discussed in Taylor section 5.7, see Example
5.4. The code here visualizes the various approximations given by
the Fourier series compared with a square wave with period $T=0.2$, width
$0.1$ and max value $F=2$. We see that when we increase the number of
components in the Fourier series, the Fourier series approximation gets closer and closer to the square wave signal.
```python
import numpy as np
import math
from scipy import signal
import matplotlib.pyplot as plt
# number of points
n = 500
# start and final times
t0 = 0.0
tn = 1.0
# Period
T =0.2
# Max value of square signal
Fmax= 2.0
# Width of signal
Width = 0.1
t = np.linspace(t0, tn, n, endpoint=False)
SqrSignal = np.zeros(n)
FourierSeriesSignal = np.zeros(n)
SqrSignal = 1.0+signal.square(2*np.pi*5*t+np.pi*Width/T)
a0 = Fmax*Width/T
FourierSeriesSignal = a0
Factor = 2.0*Fmax/np.pi
for i in range(1,500):
FourierSeriesSignal += Factor/(i)*np.sin(np.pi*i*Width/T)*np.cos(i*t*2*np.pi/T)
plt.plot(t, SqrSignal)
plt.plot(t, FourierSeriesSignal)
plt.ylim(-0.5, 2.5)
plt.show()
```
## Solving differential equations with Fourier series
Material to be added.
## Response to Transient Force
Consider a particle at rest in the bottom of an underdamped harmonic
oscillator, that then feels a sudden impulse, or change in momentum,
$I=F\Delta t$ at $t=0$. This increases the velocity immediately by an
amount $v_0=I/m$ while not changing the position. One can then solve
the trajectory by solving the equations with initial
conditions $v_0=I/m$ and $x_0=0$. This gives
<!-- Equation labels as ordinary links -->
<div id="_auto14"></div>
$$
\begin{equation}
x(t)=\frac{I}{m\omega'}e^{-\beta t}\sin\omega't, ~~t>0.
\label{_auto14} \tag{17}
\end{equation}
$$
Here, $\omega'=\sqrt{\omega_0^2-\beta^2}$. For an impulse $I_i$ that
occurs at time $t_i$ the trajectory would be
<!-- Equation labels as ordinary links -->
<div id="_auto15"></div>
$$
\begin{equation}
x(t)=\frac{I_i}{m\omega'}e^{-\beta (t-t_i)}\sin[\omega'(t-t_i)] \Theta(t-t_i),
\label{_auto15} \tag{18}
\end{equation}
$$
where $\Theta(t-t_i)$ is a step function, i.e. $\Theta(x)$ is zero for
$x<0$ and unity for $x>0$. If there were several impulses linear
superposition tells us that we can sum over each contribution,
<!-- Equation labels as ordinary links -->
<div id="_auto16"></div>
$$
\begin{equation}
x(t)=\sum_i\frac{I_i}{m\omega'}e^{-\beta(t-t_i)}\sin[\omega'(t-t_i)]\Theta(t-t_i)
\label{_auto16} \tag{19}
\end{equation}
$$
Now one can consider a series of impulses at times separated by
$\Delta t$, where each impulse is given by $F_i\Delta t$. The sum
above now becomes an integral,
<!-- Equation labels as ordinary links -->
<div id="eq:Greeny"></div>
$$
\begin{eqnarray}\label{eq:Greeny} \tag{20}
x(t)&=&\int_{-\infty}^\infty dt'~F(t')\frac{e^{-\beta(t-t')}\sin[\omega'(t-t')]}{m\omega'}\Theta(t-t')\\
\nonumber
&=&\int_{-\infty}^\infty dt'~F(t')G(t-t'),\\
\nonumber
G(\Delta t)&=&\frac{e^{-\beta\Delta t}\sin[\omega' \Delta t]}{m\omega'}\Theta(\Delta t)
\end{eqnarray}
$$
The quantity
$e^{-\beta(t-t')}\sin[\omega'(t-t')]/m\omega'\Theta(t-t')$ is called a
Green's function, $G(t-t')$. It describes the response at $t$ due to a
force applied at a time $t'$, and is a function of $t-t'$. The step
function ensures that the response does not occur before the force is
applied. One should remember that the form for $G$ would change if the
oscillator were either critically- or over-damped.
When performing the integral in Eq. ([20](#eq:Greeny)) one can use
angle addition formulas to factor out the part with the $t'$
dependence in the integrand,
<!-- Equation labels as ordinary links -->
<div id="eq:Greeny2"></div>
$$
\begin{eqnarray}
\label{eq:Greeny2} \tag{21}
x(t)&=&\frac{1}{m\omega'}e^{-\beta t}\left[I_c(t)\sin(\omega't)-I_s(t)\cos(\omega't)\right],\\
\nonumber
I_c(t)&\equiv&\int_{-\infty}^t dt'~F(t')e^{\beta t'}\cos(\omega't'),\\
\nonumber
I_s(t)&\equiv&\int_{-\infty}^t dt'~F(t')e^{\beta t'}\sin(\omega't').
\end{eqnarray}
$$
If the time $t$ is beyond any time at which the force acts,
$F(t'>t)=0$, the coefficients $I_c$ and $I_s$ become independent of
$t$.
Consider an undamped oscillator ($\beta\rightarrow 0$), with
characteristic frequency $\omega_0$ and mass $m$, that is at rest
until it feels a force described by a Gaussian form,
$$
\begin{eqnarray*}
F(t)&=&F_0 \exp\left\{\frac{-t^2}{2\tau^2}\right\}.
\end{eqnarray*}
$$
For large times ($t\gg\tau$), where the force has died off, find
$x(t)$. Solve for the coefficients $I_c$ and $I_s$ in
Eq. ([21](#eq:Greeny2)). Because the Gaussian is an even function,
$I_s=0$, and one need only solve for $I_c$,
$$
\begin{eqnarray*}
I_c&=&F_0\int_{-\infty}^\infty dt'~e^{-t^{\prime 2}/(2\tau^2)}\cos(\omega_0 t')\\
&=&\Re F_0 \int_{-\infty}^\infty dt'~e^{-t^{\prime 2}/(2\tau^2)}e^{i\omega_0 t'}\\
&=&\Re F_0 \int_{-\infty}^\infty dt'~e^{-(t'-i\omega_0\tau^2)^2/(2\tau^2)}e^{-\omega_0^2\tau^2/2}\\
&=&F_0\tau \sqrt{2\pi} e^{-\omega_0^2\tau^2/2}.
\end{eqnarray*}
$$
The third step involved completing the square, and the final step used the fact that the integral
$$
\begin{eqnarray*}
\int_{-\infty}^\infty dx~e^{-x^2/2}&=&\sqrt{2\pi}.
\end{eqnarray*}
$$
To see that this integral is true, consider the square of the integral, which you can change to polar coordinates,
$$
\begin{eqnarray*}
I&=&\int_{-\infty}^\infty dx~e^{-x^2/2}\\
I^2&=&\int_{-\infty}^\infty dxdy~e^{-(x^2+y^2)/2}\\
&=&2\pi\int_0^\infty rdr~e^{-r^2/2}\\
&=&2\pi.
\end{eqnarray*}
$$
Finally, the expression for $x$ from Eq. ([21](#eq:Greeny2)) is
$$
\begin{eqnarray*}
x(t\gg\tau)&=&\frac{F_0\tau}{m\omega_0} \sqrt{2\pi} e^{-\omega_0^2\tau^2/2}\sin(\omega_0t).
\end{eqnarray*}
$$
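A minimal numerical check of this result: evaluate the Green's-function integral directly for the undamped oscillator and compare with the closed-form expression above (the parameter values below are arbitrary choices).
```python
import numpy as np

F0, tau, m, omega0 = 1.0, 0.7, 1.0, 2.0     # arbitrary parameters

def x_exact(t):
    # closed-form result derived above, valid for t >> tau
    return F0*tau*np.sqrt(2*np.pi)/(m*omega0)*np.exp(-omega0**2*tau**2/2)*np.sin(omega0*t)

def x_green(t, n=20000):
    # direct evaluation of x(t) = int dt' F(t') G(t-t') in the limit beta -> 0
    tp = np.linspace(-10*tau, t, n)
    F = F0*np.exp(-tp**2/(2*tau**2))
    G = np.sin(omega0*(t - tp))/(m*omega0)  # undamped Green's function, t > t'
    return np.trapz(F*G, tp)

for t in [5.0, 7.5, 10.0]:
    print(t, x_exact(t), x_green(t))
```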
## The classical pendulum and scaling the equations
Let us end our discussion of oscillations with another classical case, the pendulum.
The angular equation of motion of the pendulum is given by
Newton's equation and with no external force it reads
<!-- Equation labels as ordinary links -->
<div id="_auto17"></div>
$$
\begin{equation}
ml\frac{d^2\theta}{dt^2}+mg\sin(\theta)=0,
\label{_auto17} \tag{22}
\end{equation}
$$
with an angular velocity and acceleration given by
<!-- Equation labels as ordinary links -->
<div id="_auto18"></div>
$$
\begin{equation}
v=l\frac{d\theta}{dt},
\label{_auto18} \tag{23}
\end{equation}
$$
and
<!-- Equation labels as ordinary links -->
<div id="_auto19"></div>
$$
\begin{equation}
a=l\frac{d^2\theta}{dt^2}.
\label{_auto19} \tag{24}
\end{equation}
$$
We do however expect that the motion will gradually come to an end due to a viscous drag torque acting on the pendulum.
In the presence of the drag, the above equation becomes
<!-- Equation labels as ordinary links -->
<div id="eq:pend1"></div>
$$
\begin{equation}
ml\frac{d^2\theta}{dt^2}+\nu\frac{d\theta}{dt} +mg\sin(\theta)=0, \label{eq:pend1} \tag{25}
\end{equation}
$$
where $\nu$ is now a positive constant parameterizing the viscosity
of the medium in question. In order to maintain the motion against
viscosity, it is necessary to add some external driving force.
We choose here a periodic driving force. The last equation becomes then
<!-- Equation labels as ordinary links -->
<div id="eq:pend2"></div>
$$
\begin{equation}
ml\frac{d^2\theta}{dt^2}+\nu\frac{d\theta}{dt} +mg\sin(\theta)=A\sin(\omega t), \label{eq:pend2} \tag{26}
\end{equation}
$$
with $A$ and $\omega$ two constants representing the amplitude and
the angular frequency respectively. The latter is called the driving frequency.
We define
$$
\omega_0=\sqrt{g/l},
$$
the so-called natural frequency and the new dimensionless quantities
$$
\hat{t}=\omega_0t,
$$
with the dimensionless driving frequency
$$
\hat{\omega}=\frac{\omega}{\omega_0},
$$
and introducing the quantity $Q$, called the *quality factor*,
$$
Q=\frac{mg}{\omega_0\nu},
$$
and the dimensionless amplitude
$$
\hat{A}=\frac{A}{mg}
$$
## More on the Pendulum
We have
$$
\frac{d^2\theta}{d\hat{t}^2}+\frac{1}{Q}\frac{d\theta}{d\hat{t}}
+\sin(\theta)=\hat{A}\cos(\hat{\omega}\hat{t}).
$$
This equation can in turn be recast in terms of two coupled first-order differential equations as follows
$$
\frac{d\theta}{d\hat{t}}=\hat{v},
$$
and
$$
\frac{d\hat{v}}{d\hat{t}}=-\frac{\hat{v}}{Q}-\sin(\theta)+\hat{A}\cos(\hat{\omega}\hat{t}).
$$
These are the equations to be solved. The factor $Q$ represents the
number of oscillations of the undriven system that must occur before
its energy is significantly reduced due to the viscous drag. The
amplitude $\hat{A}$ is measured in units of the maximum possible
gravitational torque while $\hat{\omega}$ is the angular frequency of
the external torque measured in units of the pendulum's natural
frequency.
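These dimensionless equations are straightforward to integrate numerically. Below is a minimal sketch using scipy's `solve_ivp`; the values of $Q$, $\hat{A}$, $\hat{\omega}$ and the initial conditions are arbitrary choices, not ones prescribed in the text.
```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

Q, A_hat, omega_hat = 2.0, 0.9, 2.0/3.0      # arbitrary example parameters

def rhs(t_hat, y):
    theta, v_hat = y
    return [v_hat, -v_hat/Q - np.sin(theta) + A_hat*np.cos(omega_hat*t_hat)]

sol = solve_ivp(rhs, [0.0, 100.0], [0.2, 0.0], max_step=0.05)
plt.plot(sol.t, sol.y[0])
plt.xlabel(r'$\hat{t}$')
plt.ylabel(r'$\theta$')
plt.show()
```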
# Two-body Problems
The gravitational potential energy and forces involving two masses $a$ and $b$ are
$$
\begin{eqnarray}
V_{ab}&=&-\frac{Gm_am_b}{|\boldsymbol{r}_a-\boldsymbol{r}_b|},\\
\nonumber
F_{ba}&=&-\frac{Gm_am_b}{|\boldsymbol{r}_a-\boldsymbol{r}_b|^2}\hat{r}_{ab},\\
\nonumber
\hat{r}_{ab}&=&\frac{\boldsymbol{r}_b-\boldsymbol{r}_a}{|\boldsymbol{r}_a-\boldsymbol{r}_b|}.
\end{eqnarray}
$$
Here $G=6.67\times 10^{-11}$ Nm$^2$/kg$^2$, and $F_{ba}$ is the force
on $b$ due to $a$. By inspection, one can see that the force on $b$
due to $a$ and the force on $a$ due to $b$ are equal and opposite. The
net potential energy for a large number of masses would be
<!-- Equation labels as ordinary links -->
<div id="_auto20"></div>
$$
\begin{equation}
V=\sum_{a<b}V_{ab}=\frac{1}{2}\sum_{a\ne b}V_{ab}.
\label{_auto20} \tag{27}
\end{equation}
$$
## Relative and Center of Mass Motion
Thus far, we have considered the trajectory as if the force is
centered around a fixed point. For two bodies interacting only with
one another, both masses circulate around the center of mass. One
might think that solutions would become more complex when both
particles move, but we will see here that the problem can be reduced
to one with a single body moving according to a fixed force by
expressing the trajectories for $\boldsymbol{r}_1$ and $\boldsymbol{r}_2$ into the
center-of-mass coordinate $\boldsymbol{R}_{\rm cm}$ and the relative
coordinate $\boldsymbol{r}$,
$$
\begin{eqnarray}
\boldsymbol{R}_{\rm cm}&\equiv&\frac{m_1\boldsymbol{r}_1+m_2\boldsymbol{r}_2}{m_1+m_2},\\
\nonumber
\boldsymbol{r}&\equiv&\boldsymbol{r}_1-\boldsymbol{r_2}.
\end{eqnarray}
$$
Here, we assume the two particles interact only with one another, so
$\boldsymbol{F}_{12}=-\boldsymbol{F}_{21}$ (where $\boldsymbol{F}_{ij}$ is the force on $i$
due to $j$). The equations of motion then become
$$
\begin{eqnarray}
\ddot{\boldsymbol{R}}_{\rm cm}&=&\frac{1}{m_1+m_2}\left\{m_1\ddot{\boldsymbol{r}}_1+m_2\ddot{\boldsymbol{r}}_2\right\}\\
\nonumber
&=&\frac{1}{m_1+m_2}\left\{\boldsymbol{F}_{12}+\boldsymbol{F}_{21}\right\}=0.\\
\ddot{\boldsymbol{r}}&=&\ddot{\boldsymbol{r}}_1-\ddot{\boldsymbol{r}}_2=\left(\frac{\boldsymbol{F}_{12}}{m_1}-\frac{\boldsymbol{F}_{21}}{m_2}\right)\\
\nonumber
&=&\left(\frac{1}{m_1}+\frac{1}{m_2}\right)\boldsymbol{F}_{12}.
\end{eqnarray}
$$
The first expression simply states that the center of mass coordinate
$\boldsymbol{R}_{\rm cm}$ moves at a fixed velocity. The second expression
can be rewritten in terms of the reduced mass $\mu$.
$$
\begin{eqnarray}
\mu \ddot{\boldsymbol{r}}&=&\boldsymbol{F}_{12},\\
\frac{1}{\mu}&=&\frac{1}{m_1}+\frac{1}{m_2},~~~~\mu=\frac{m_1m_2}{m_1+m_2}.
\end{eqnarray}
$$
Thus, one can treat the trajectory as a one-body problem where the
reduced mass is $\mu$, and a second trivial problem for the center of
mass. The reduced mass is especially convenient when one is
considering gravitational problems because then
$$
\begin{eqnarray}
\mu \ddot{r}&=&-\frac{Gm_1m_2}{r^2}\hat{r}\\
\nonumber
&=&-\frac{GM\mu}{r^2}\hat{r},~~~M\equiv m_1+m_2.
\end{eqnarray}
$$
For the gravitational problem, the reduced mass then falls out and the
trajectory depends only on the total mass $M$.
The kinetic energy and momenta also have analogues in center-of-mass
coordinates. The total and relative momenta are
$$
\begin{eqnarray}
\boldsymbol{P}&\equiv&\boldsymbol{p}_1+\boldsymbol{p}_2=M\dot{\boldsymbol{R}}_{\rm cm},\\
\nonumber
\boldsymbol{q}&\equiv&\mu\dot{\boldsymbol{r}}.
\end{eqnarray}
$$
With these definitions, a little algebra shows that the kinetic energy becomes
$$
\begin{eqnarray}
T&=&\frac{1}{2}m_1|\boldsymbol{v}_1|^2+\frac{1}{2}m_2|\boldsymbol{v}_2|^2\\
\nonumber
&=&\frac{1}{2}M|\dot{\boldsymbol{R}}_{\rm cm}|^2
+\frac{1}{2}\mu|\dot{\boldsymbol{r}}|^2\\
\nonumber
&=&\frac{P^2}{2M}+\frac{q^2}{2\mu}.
\end{eqnarray}
$$
The standard strategy is to transform into the center of mass frame,
then treat the problem as one of a single particle of mass $\mu$
undergoing a force $\boldsymbol{F}_{12}$. Scattering angles can also be
expressed in this frame, then transformed into the lab frame. In
practice, one sees examples in the literature where $d\sigma/d\Omega$ is
expressed in both the "center-of-mass" and in the "laboratory"
frame.
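A small numerical check (with made-up masses and velocities, purely for illustration) that the kinetic energy splits into a center-of-mass piece and a relative piece, as claimed above:
```python
import numpy as np

m1, m2 = 3.0, 5.0                         # made-up masses
v1 = np.array([1.0, -2.0, 0.5])           # made-up velocities
v2 = np.array([-0.3, 0.8, 1.1])

M = m1 + m2
mu = m1*m2/M
V_cm = (m1*v1 + m2*v2)/M                  # velocity of the center of mass
v_rel = v1 - v2                           # relative velocity

T_direct = 0.5*m1*v1 @ v1 + 0.5*m2*v2 @ v2
T_split = 0.5*M*V_cm @ V_cm + 0.5*mu*v_rel @ v_rel
print(T_direct, T_split)                  # the two expressions agree
```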
## Deriving Elliptical Orbits
Kepler's laws state that a gravitational orbit should be an ellipse
with the source of the gravitational field at one focus. Deriving this
is surprisingly messy. To do this, we first use angular momentum
conservation to transform the equations of motion so that it is in
terms of $r$ and $\theta$ instead of $r$ and $t$. The overall strategy
is to
1. Find equations of motion for $r$ and $t$ with no angle ($\theta$) mentioned, i.e. $d^2r/dt^2=\cdots$. Angular momentum conservation will be used, and the equation will involve the angular momentum $L$.
2. Use angular momentum conservation to find an expression for $\dot{\theta}$ in terms of $r$.
3. Use the chain rule to convert the equation of motion for $r$, an expression involving $r,\dot{r}$ and $\ddot{r}$, to one involving $r,dr/d\theta$ and $d^2r/d\theta^2$. This is quite complicated because the expressions will also involve a substitution $u=1/r$, so that one finds an expression in terms of $u$ and $\theta$.
4. Once $u(\theta)$ is found, you need to show that this can be converted to the familiar form for an ellipse.
The equations of motion give
<!-- Equation labels as ordinary links -->
<div id="eq:radialeqofmotion"></div>
$$
\begin{eqnarray}
\label{eq:radialeqofmotion} \tag{28}
\frac{d}{dt}r^2&=&\frac{d}{dt}(x^2+y^2)=2x\dot{x}+2y\dot{y}=2r\dot{r},\\
\nonumber
\dot{r}&=&\frac{x}{r}\dot{x}+\frac{y}{r}\dot{y},\\
\nonumber
\ddot{r}&=&\frac{x}{r}\ddot{x}+\frac{y}{r}\ddot{y}
+\frac{\dot{x}^2+\dot{y}^2}{r}
-\frac{\dot{r}^2}{r}.
\end{eqnarray}
$$
Recognizing that the numerator of the third term is the velocity squared, and that it can be written in polar coordinates,
<!-- Equation labels as ordinary links -->
<div id="_auto21"></div>
$$
\begin{equation}
v^2=\dot{x}^2+\dot{y}^2=\dot{r}^2+r^2\dot{\theta}^2,
\label{_auto21} \tag{29}
\end{equation}
$$
one can write $\ddot{r}$ as
<!-- Equation labels as ordinary links -->
<div id="eq:radialeqofmotion2"></div>
$$
\begin{eqnarray}
\label{eq:radialeqofmotion2} \tag{30}
\ddot{r}&=&\frac{F_x\cos\theta+F_y\sin\theta}{m}+\frac{\dot{r}^2+r^2\dot{\theta}^2}{r}-\frac{\dot{r}^2}{r}\\
\nonumber
&=&\frac{F}{m}+\frac{r^2\dot{\theta}^2}{r}\\
\nonumber
m\ddot{r}&=&F+\frac{L^2}{mr^3}.
\end{eqnarray}
$$
This derivation used the fact that the force was radial,
$F=F_r=F_x\cos\theta+F_y\sin\theta$, and that angular momentum is
$L=mrv_{\theta}=mr^2\dot{\theta}$. The term $L^2/mr^3=mv^2/r$ behaves
like an additional force. Sometimes this is referred to as a
centrifugal force, but it is not a force. Instead, it is the
consequence of considering the motion in a rotating (and therefore
accelerating) frame.
Now, we switch to the particular case of an attractive inverse square
force, $F=-\alpha/r^2$, and show that the trajectory, $r(\theta)$, is
an ellipse. To do this we transform derivatives w.r.t. time to
derivatives w.r.t. $\theta$ using the chain rule combined with angular
momentum conservation, $\dot{\theta}=L/mr^2$.
<!-- Equation labels as ordinary links -->
<div id="eq:rtotheta"></div>
$$
\begin{eqnarray}
\label{eq:rtotheta} \tag{31}
\dot{r}&=&\frac{dr}{d\theta}\dot{\theta}=\frac{dr}{d\theta}\frac{L}{mr^2},\\
\nonumber
\ddot{r}&=&\frac{d^2r}{d\theta^2}\dot{\theta}^2
+\frac{dr}{d\theta}\left(\frac{d}{dr}\frac{L}{mr^2}\right)\dot{r}\\
\nonumber
&=&\frac{d^2r}{d\theta^2}\left(\frac{L}{mr^2}\right)^2
-2\frac{dr}{d\theta}\frac{L}{mr^3}\dot{r}\\
\nonumber
&=&\frac{d^2r}{d\theta^2}\left(\frac{L}{mr^2}\right)^2
-\frac{2}{r}\left(\frac{dr}{d\theta}\right)^2\left(\frac{L}{mr^2}\right)^2
\end{eqnarray}
$$
Equating the two expressions for $\ddot{r}$ in Eq.s ([30](#eq:radialeqofmotion2)) and ([31](#eq:rtotheta)) eliminates all the derivatives w.r.t. time, and provides a differential equation with only derivatives w.r.t. $\theta$,
<!-- Equation labels as ordinary links -->
<div id="eq:rdotdot"></div>
$$
\begin{equation}
\label{eq:rdotdot} \tag{32}
\frac{d^2r}{d\theta^2}\left(\frac{L}{mr^2}\right)^2
-\frac{2}{r}\left(\frac{dr}{d\theta}\right)^2\left(\frac{L}{mr^2}\right)^2
=\frac{F}{m}+\frac{L^2}{m^2r^3},
\end{equation}
$$
that when solved yields the trajectory, i.e. $r(\theta)$. Up to this
point the expressions work for any radial force, not just forces that
fall as $1/r^2$.
The trick to simplifying this differential equation for the inverse
square problems is to make a substitution, $u\equiv 1/r$, and rewrite
the differential equation for $u(\theta)$.
$$
\begin{eqnarray}
r&=&1/u,\\
\nonumber
\frac{dr}{d\theta}&=&-\frac{1}{u^2}\frac{du}{d\theta},\\
\nonumber
\frac{d^2r}{d\theta^2}&=&\frac{2}{u^3}\left(\frac{du}{d\theta}\right)^2-\frac{1}{u^2}\frac{d^2u}{d\theta^2}.
\end{eqnarray}
$$
Plugging these expressions into Eq. ([32](#eq:rdotdot)) gives an
expression in terms of $u$, $du/d\theta$, and $d^2u/d\theta^2$. After
some tedious algebra,
<!-- Equation labels as ordinary links -->
<div id="_auto22"></div>
$$
\begin{equation}
\frac{d^2u}{d\theta^2}=-u-\frac{F m}{L^2u^2}.
\label{_auto22} \tag{33}
\end{equation}
$$
For the attractive inverse square law force, $F=-\alpha u^2$,
<!-- Equation labels as ordinary links -->
<div id="_auto23"></div>
$$
\begin{equation}
\frac{d^2u}{d\theta^2}=-u+\frac{m\alpha}{L^2}.
\label{_auto23} \tag{34}
\end{equation}
$$
The solution has two arbitrary constants, $A$ and $\theta_0$,
<!-- Equation labels as ordinary links -->
<div id="eq:Ctrajectory"></div>
$$
\begin{eqnarray}
\label{eq:Ctrajectory} \tag{35}
u&=&\frac{m\alpha}{L^2}+A\cos(\theta-\theta_0),\\
\nonumber
r&=&\frac{1}{(m\alpha/L^2)+A\cos(\theta-\theta_0)}.
\end{eqnarray}
$$
The radius will be at a minimum when $\theta=\theta_0$ and at a
maximum when $\theta=\theta_0+\pi$. The constant $A$ is related to the
eccentricity of the orbit. When $A=0$ the radius is a constant
$r=L^2/(m\alpha)$, and the motion is circular. If one solved the
force-balance condition for a circular orbit, $mv^2/r=\alpha/r^2$, using the
substitution $v=L/(mr)$, one would reproduce the expression
$r=L^2/(m\alpha)$.
The form describing the trajectory in
Eq. ([35](#eq:Ctrajectory)) can be identified as an ellipse with one
focus at the origin (the center of the force) by using the definition of
an ellipse as the set of points for which the sum of the distances to
the two foci is a constant. Calling that constant sum $2D$, the
distance between the two foci $2a$, and putting one focus at the
origin,
$$
\begin{eqnarray}
2D&=&r+\sqrt{(r\cos\theta-2a)^2+r^2\sin^2\theta},\\
\nonumber
4D^2+r^2-4Dr&=&r^2+4a^2-4ar\cos\theta,\\
\nonumber
r&=&\frac{D^2-a^2}{D-a\cos\theta}=\frac{1}{D/(D^2-a^2)-a\cos\theta/(D^2-a^2)}.
\end{eqnarray}
$$
By inspection, this is the same form as Eq. ([35](#eq:Ctrajectory)) with $D/(D^2-a^2)=m\alpha/L^2$ and $a/(D^2-a^2)=A$.
Let us remind ourselves about what an ellipse is before we proceed.
```python
import numpy as np
from matplotlib import pyplot as plt
from math import pi
u=1. #x-position of the center
v=0.5 #y-position of the center
a=2. #radius on the x-axis
b=1.5 #radius on the y-axis
t = np.linspace(0, 2*pi, 100)
plt.plot( u+a*np.cos(t) , v+b*np.sin(t) )
plt.grid(color='lightgray',linestyle='--')
plt.show()
```
## Effective or Centrifugal Potential
The total energy of a particle is
$$
\begin{eqnarray}
E&=&U(r)+\frac{1}{2}mv_\theta^2+\frac{1}{2}m\dot{r}^2\\
\nonumber
&=&U(r)+\frac{1}{2}mr^2\dot{\theta}^2+\frac{1}{2}m\dot{r}^2\\
\nonumber
&=&U(r)+\frac{L^2}{2mr^2}+\frac{1}{2}m\dot{r}^2.
\end{eqnarray}
$$
The second term then contributes to the energy like an additional
repulsive potential. The term is sometimes referred to as the
"centrifugal" potential, even though it is actually the kinetic energy
of the angular motion. Combined with $U(r)$, it is sometimes referred
to as the "effective" potential,
$$
\begin{eqnarray}
U_{\rm eff}(r)&=&U(r)+\frac{L^2}{2mr^2}.
\end{eqnarray}
$$
Note that if one treats the effective potential like a real potential, one would expect to be able to generate an effective force,
$$
\begin{eqnarray}
F_{\rm eff}&=&-\frac{d}{dr}U(r) -\frac{d}{dr}\frac{L^2}{2mr^2}\\
\nonumber
&=&F(r)+\frac{L^2}{mr^3}=F(r)+m\frac{v_\perp^2}{r},
\end{eqnarray}
$$
which indeed matches the form for $m\ddot{r}$ in Eq. ([30](#eq:radialeqofmotion2)), which included the **centrifugal** force.
The following code plots this effective potential for a simple choice of parameters, with a standard gravitational potential $-\alpha/r$. Here we have chosen $L=m=\alpha=1$.
```python
# Common imports
import numpy as np
from math import *
import matplotlib.pyplot as plt
Deltax = 0.01
#set up arrays
xinitial = 0.3
xfinal = 5.0
alpha = 1.0 # strength of the attractive -alpha/r potential
m = 1.0 # mass, you can change these
AngMom = 1.0 # The angular momentum
n = ceil((xfinal-xinitial)/Deltax)
x = np.zeros(n)
for i in range(n):
    x[i] = xinitial+i*Deltax
V = np.zeros(n)
V = -alpha/x+0.5*AngMom*AngMom/(m*x*x)
# Plot potential
fig, ax = plt.subplots()
ax.set_xlabel('r[m]')
ax.set_ylabel('V[J]')
ax.plot(x, V)
fig.tight_layout()
plt.show()
```
|
48d72eec3c5bd01d06d8bd5a2389891d31293565
| 68,128 |
ipynb
|
Jupyter Notebook
|
doc/pub/week10/ipynb/.ipynb_checkpoints/week10-checkpoint.ipynb
|
Shield94/Physics321
|
9875a3bf840b0fa164b865a3cb13073aff9094ca
|
[
"CC0-1.0"
] | 20 |
2020-01-09T17:41:16.000Z
|
2022-03-09T00:48:58.000Z
|
doc/pub/week10/ipynb/.ipynb_checkpoints/week10-checkpoint.ipynb
|
Shield94/Physics321
|
9875a3bf840b0fa164b865a3cb13073aff9094ca
|
[
"CC0-1.0"
] | 6 |
2020-01-08T03:47:53.000Z
|
2020-12-15T15:02:57.000Z
|
doc/pub/week10/ipynb/.ipynb_checkpoints/week10-checkpoint.ipynb
|
Shield94/Physics321
|
9875a3bf840b0fa164b865a3cb13073aff9094ca
|
[
"CC0-1.0"
] | 33 |
2020-01-10T20:40:55.000Z
|
2022-02-11T20:28:41.000Z
| 29.841437 | 1,111 | 0.525408 | true | 13,870 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.752013 | 0.819893 | 0.61657 |
__label__eng_Latn
| 0.960675 | 0.270829 |
# Perfect captive
# Purpose
If the mathematical model is not correct, or too little data is available, this may lead to parameter drift, so that the parameters in the mathematical model change depending on how the fitted data have been sampled.
This notebook showcases the perfect case, where you have the correct model and enough data in a captive test to identify the correct model.
# Methodology
* Sample data of forces from model
* Fit the parameters of the same model to this data
* Are the parameters correct?
* Is the simulation correct?
# Setup
```python
# %load imports.py
## Local packages:
%matplotlib inline
%load_ext autoreload
%autoreload 2
%config Completer.use_jedi = False ## (To fix autocomplete)
## External packages:
import pandas as pd
pd.options.display.max_rows = 999
pd.options.display.max_columns = 999
pd.set_option("display.max_columns", None)
import numpy as np
np.set_printoptions(linewidth=150)
import numpy as np
import os
import matplotlib.pyplot as plt
#if os.name == 'nt':
# plt.style.use('presentation.mplstyle') # Windows
import plotly.express as px
import plotly.graph_objects as go
import seaborn as sns
import sympy as sp
from sympy.physics.mechanics import (dynamicsymbols, ReferenceFrame,
Particle, Point)
from sympy.physics.vector.printing import vpprint, vlatex
from IPython.display import display, Math, Latex
from src.substitute_dynamic_symbols import run, lambdify
import pyro
import sklearn
import pykalman
from statsmodels.sandbox.regression.predstd import wls_prediction_std
import statsmodels.api as sm
from scipy.integrate import solve_ivp
## Local packages:
from src.data import mdl
from src.symbols import *
from src.parameters import *
import src.symbols as symbols
from src import prime_system
from src.models.regression import ForceRegression, results_summary_to_dataframe
from src.models.diff_eq_to_matrix import DiffEqToMatrix
from src.visualization.regression import show_pred, show_pred_captive
from src.visualization.plot import track_plot,captive_plot
## Load models:
# (Uncomment these for faster loading):
import src.models.vmm_abkowitz as vmm
#import src.models.vmm_martin as vmm_simpler
from src.models.vmm import ModelSimulator
from src.data.wpcc import ship_parameters, df_parameters, ps, ship_parameters_prime, ps_ship, scale_factor
from src.models.captive_variation import variate
```
```python
#format the book
import src.visualization.book_format as book_format
book_format.set_style()
```
## Load model
```python
model = ModelSimulator.load('../models/model_VCT_abkowitz.pkl')
```
### Run a zigzag simulation with the model
```python
u0_=2
angle_deg = 35
result = model.zigzag(u0=u0_, angle=angle_deg)
```
```python
```
```python
result.track_plot();
result.plot(compare=False);
```
```python
df_result = result.result.copy()
df_result_prime = model.prime_system.prime(df_result, U=df_result['U'])
```
## Parameter variation (captive test)
```python
len(model.parameters)
```
```python
variation_keys = ['u','v','r','delta']
df_inputs = variate(df=df_result_prime, variation_keys=variation_keys, N=3)
df_outputs = model.forces(df_inputs)
df_captive_all = pd.concat([df_inputs,df_outputs], axis=1)
```
```python
len(df_inputs)
```
```python
3**(len(variation_keys))
```
## Fit model
```python
reg_all = ForceRegression(vmm=model, data=df_captive_all)
display(reg_all.show_pred_X())
display(reg_all.show_pred_Y())
display(reg_all.show_pred_N())
```
### Create a simulation model from the regression model
```python
added_masses_ = {key:value for key,value in model.parameters.items() if 'dot' in key}
added_masses = pd.DataFrame(added_masses_, index=['prime']).transpose()
model_all = reg_all.create_model(added_masses=added_masses, ship_parameters=model.ship_parameters,
ps=model.prime_system, control_keys=['delta'])
```
### Resimulate with the regressed model
```python
result_all = model_all.simulate(df_result)
```
```python
result_all.plot_compare();
```
```python
df_compare_parameters =pd.DataFrame()
df_compare_parameters['model'] = model.parameters
df_compare_parameters['model captive all'] = model_all.parameters
df_compare_parameters['model_abs'] = df_compare_parameters['model'].abs()
df_compare_parameters.sort_values(by='model_abs', ascending=False, inplace=True)
df_compare_parameters.drop(columns=['model_abs'], inplace=True)
df_compare_parameters = df_compare_parameters.divide(df_compare_parameters['model'], axis=0)
df_compare_parameters['dof'] = pd.Series(df_compare_parameters.index).apply(lambda x:x[0]).values
for dof, df_ in df_compare_parameters.groupby(by='dof', sort=False):
fig,ax=plt.subplots()
fig.set_size_inches(10,2)
df_.plot(kind='bar', ax=ax)
fig.suptitle(dof)
```
```python
```
|
5fb24f24754c7d298cead786b571a908bda50e93
| 8,869 |
ipynb
|
Jupyter Notebook
|
notebooks/21.04_perfect_captive.ipynb
|
martinlarsalbert/wPCC
|
16e0d4cc850d503247916c9f5bd9f0ddb07f8930
|
[
"MIT"
] | null | null | null |
notebooks/21.04_perfect_captive.ipynb
|
martinlarsalbert/wPCC
|
16e0d4cc850d503247916c9f5bd9f0ddb07f8930
|
[
"MIT"
] | null | null | null |
notebooks/21.04_perfect_captive.ipynb
|
martinlarsalbert/wPCC
|
16e0d4cc850d503247916c9f5bd9f0ddb07f8930
|
[
"MIT"
] | null | null | null | 25.34 | 220 | 0.581464 | true | 1,148 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.760651 | 0.822189 | 0.625399 |
__label__eng_Latn
| 0.655172 | 0.291341 |
# Bracketing Methods (Bisection example)
Consider the refrigeration tank example from Belegundu and Chandrupatla [1].
We want to minimize the cost of a cylindrical refrigeration tank that must have
a volume of 50 m$^3$.
The costs of the tank are
- Circular ends cost \$10 per m$^2$
- Cylindrical walls cost \$6 per m$^2$
- Refrigerator costs \$80 per m$^2$ over its life
Let $d$ be the tank diameter, and $L$ the height.
\begin{align}
f &= 10 \left(\frac{2 \pi d^2}{4}\right) + 6 (\pi d L) + 80 \left( \frac{2
\pi d^2}{4} + \pi d L \right)\\
&= 45 \pi d^2 + 86 \pi d L
\end{align}
However, $L$ is a function of $d$ because the volume is constrained. We could
add a constraint to the problem
\begin{align}
\frac{\pi d^2}{4} L = V
\end{align}
but it is easier to express $V$ as a function of $d$ and make the problem
unconstrained.
\begin{align}
L &= \frac{4 V}{\pi d^2}\\
&= \frac{200}{\pi d^2}
\end{align}
Thus the optimization can be expressed as
\begin{align*}
\textrm{minimize} &\quad 45 \pi d^2 + \frac{17200}{d}\\
\textrm{with respect to} &\quad d \\
\textrm{subject to} &\quad d \ge 0
\end{align*}
One-dimensional optimization problems are silly of course; we can just find the
minimum by looking at a plot. However, we use a one-dimensional example to
illustrate line searches. A line search seeks an approximate minimum to a one-dimensional optimization problem within an N-dimensional space.
We will use bisection to find the minimum of this function. This is a recursive
function.
```python
from math import fabs
def bisection(x1, x2, f1, f2, fh, sizevec):
    """
    This function finds the root of a function using bisection.

    Parameters
    ----------
    x1 : float
        lower bound
    x2 : float
        upper bound
    f1 : float
        function value at lower bound
    f2 : float
        function value at upper bound
        f1 * f2 must be < 0 in order to contain a root.
        Currently this is left up to the user to check.
    fh : function handle
        should be of form f = fh(x)
        where f is the function value
    sizevec : list
        input an empty list and the interval size
        will be appended at each iteration

    Returns
    -------
    xroot : float
        root of function fh
    """

    # divide interval in half
    x = 0.5*(x1 + x2)

    # save in iteration history
    sizevec.append(x2 - x1)

    # if interval is small, then we have converged
    if (fabs(x2 - x1) < 1e-6):
        return x

    # evaluate function at the new point (midpoint of interval)
    f = fh(x)

    # determine which side of the interval our root is in
    if (f*f1 < 0):  # left bracket applies
        x2 = x
        f2 = f
    else:  # right bracket applies
        x1 = x
        f1 = f

    # recursively call bisection with our new interval
    return bisection(x1, x2, f1, f2, fh, sizevec)
```
We are interested in optimization, so we don't want to find the root of our
function, but rather the "root" of the derivative as a potential minimum point.
Let's define our objective function, its derivative, and solve for the minimum.
```python
%matplotlib inline
import numpy as np
from math import pi
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
def func(d):
    return 45*pi*d**2 + 17200.0/d

def deriv(d):
    return 90*pi*d - 17200.0/d**2

# choose starting interval
d1 = 1.0
d2 = 10.0

# evaluate the derivative at the interval endpoints
g1 = deriv(d1)
g2 = deriv(d2)

# check that our bracket is ok
assert(g1*g2 < 0)
# find optimal point
size = []
dopt = bisection(d1, d2, g1, g2, deriv, size)
# plot function
dvec = np.linspace(d1, d2, 200)
plt.figure()
plt.plot(dvec, func(dvec)/1e3)
plt.plot(dopt, func(dopt)/1e3, 'r*', markersize=12)
plt.xlabel('diameter (m)')
plt.ylabel('cost (thousands of dollars)')
# plot convergence history (interval size)
plt.figure()
plt.semilogy(size)
plt.xlabel('iteration')
plt.ylabel('interval size')
```
Note the linear convergence behavior.
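As a sanity check (not part of the original example), this particular objective also has a closed-form stationary point, obtained by setting the derivative to zero: $90\pi d = 17200/d^2$, i.e. $d^* = (17200/90\pi)^{1/3}$. Comparing it with the bisection result computed above:
```python
from math import pi

# closed-form stationary point of f(d) = 45*pi*d**2 + 17200/d
d_analytic = (17200.0/(90.0*pi))**(1.0/3.0)
print('analytic optimum:', d_analytic)
print('bisection result:', dopt)   # dopt from the cell above
```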
[1] Belegundu, A. D. and Chandrupatla, T. R., Optimization Concepts and Applications in Engineering, Cambridge University Press, Mar 2011.
|
164eb530e59640e762528d5a0a7769a57d1f9426
| 58,346 |
ipynb
|
Jupyter Notebook
|
LineSearch.ipynb
|
BYUFLOWLab/MDOnotebooks
|
49344cb874a52cd67cc04ebb728195fa025d5590
|
[
"MIT"
] | 4 |
2017-03-13T23:22:32.000Z
|
2017-08-10T14:15:31.000Z
|
LineSearch.ipynb
|
BYUFLOWLab/MDOnotebooks
|
49344cb874a52cd67cc04ebb728195fa025d5590
|
[
"MIT"
] | null | null | null |
LineSearch.ipynb
|
BYUFLOWLab/MDOnotebooks
|
49344cb874a52cd67cc04ebb728195fa025d5590
|
[
"MIT"
] | 1 |
2019-03-12T11:31:01.000Z
|
2019-03-12T11:31:01.000Z
| 236.218623 | 28,090 | 0.903678 | true | 1,228 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.92079 | 0.893309 | 0.82255 |
__label__eng_Latn
| 0.980195 | 0.749392 |
$\newcommand{\xv}{\mathbf{x}}
\newcommand{\tv}{\mathbf{t}}
\newcommand{\wv}{\mathbf{w}}
\newcommand{\Chi}{\mathcal{X}}
\newcommand{\R}{\rm I\!R}
\newcommand{\sign}{\text{sign}}
\newcommand{\Tm}{\mathbf{T}}
\newcommand{\Xm}{\mathbf{X}}
\newcommand{\Im}{\mathbf{I}}
$
### ITCS6155
# Linear Model
**Supervised Learning**:
$$ f: \mathcal{X} \rightarrow y $$
Supervised learning can be formulated as above. When we want to predict tomorrow's temperature, for instance, we might look for data that we can use as an input $\mathcal{X}$, such as humidity, the history of temperature changes, air pressure, and vapor pressure, along with the output (today's temperature) $y$. Once we have recorded or found the data, we can build a table as follows.
humidity (%) | last year's temperature (ºF) | yesterday's air pressure (inHG) | vapor pressure (inHG) | **Today's Temp** (ºF)
---|---|---|---|---
23 | 72 | 30.12 | 0.79 | 76
15 | 82 | 29.32 | 0.68 | 81
| | ... | |
Here, we note that the output is *today's*, not *tomorrow's*, temperature.
As we discussed in the first lecture, a machine learning model *learns* from data or experiences.
This learning is called *"training"*, and the data used for training are called *training samples*.
In this example, the table holds the training samples that we will feed into our models.
To maintain the right relation between input data and output prediction, however, the humidity and pressure values from yesterday are paired with the output, today's temperature.
Once you have data to play with, you can apply learning algorithms ($f$) to find parameters.
The model with the learned parameters is the *hypothesis*, and will be your model for prediction.
From today's measurements, applying the hypothesis model simply generates the prediction output.
When the training is successful, it is more likely to produce a good estimate.
## Linear Model
Linear model can be defined as a Euclidean dot product between two vectors:
$$
\begin{align}
f(\xv; \wv) &= \wv^\top \xv = \sum_{i=0}^D w_i x_i \\
&= w_0 x_0 + w_1 x_1 + \cdots + w_D x_D
\end{align}
$$
where $\wv$ is a weight vector and $\xv$ is an input vector.
When the input is one dimensional, the model represents a straight line, so it is called *linear*.
Assume that we have $N$ data observations $\xv_i$ and target outputs $t_i$, for $i = 1, \cdots, N$.
The simplest model that we can think of is the constant model, $f(\xv) = c$, where $c$ is any scalar.
In this case, all the weights in the linear model are zero except for the bias $w_0 = c$.
The linearity in the parameters $\wv$ makes optimization based on derivatives solvable analytically.
The model limits complexity, so its representational power is also limited; however, this simplicity can help prevent overfitting, especially when you have sparsely sampled data.
### Dot Product
The dot product, also known as the inner product or scalar product, multiplies each pair of corresponding elements of the two vectors and sums the products. Geometrically, it can also be interpreted in terms of the cosine of the angle between the two vectors.
Thus, it can be written as
$$
\wv^\top \xv = \| \wv \| \| \xv \| \cos(\theta),
$$
where $\theta$ is the angle between two vectors.
Thus, when the vectors are unit vectors, the dot product is simply the cosine of the angle between them.
<center>(from wikipedia)</center>
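A quick numpy check of this geometric interpretation (the vectors and the angle below are arbitrary choices):
```python
import numpy as np

theta = np.deg2rad(40.0)                      # arbitrary angle
a = np.array([1.0, 0.0])                      # unit vector along the x-axis
b = np.array([np.cos(theta), np.sin(theta)])  # unit vector at angle theta
print(a @ b, np.cos(theta))                   # both print cos(40 degrees)
```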
### Regression
The target output $t$ is a real number ($y, t \in \R$), as we discussed in the last lecture. Thus, training $f(\xv; \wv)$ means generating $y$ values close to the target outputs. We will discuss linear regression in more detail in the next section.
### Classification
Classification has discrete values as target outputs. In the case of binary classification, you have two target values (i.e., $t \in \{ -1, 1 \}$). Since $y$ can be any real value, we can cap the model to generate discrete values as below:
$$ y = \sign ( f(\xv; \wv) ).$$
### Advantages of Linear Model
- Simple
- Stable
- Avoid Overfitting
- Scalable
# Practice
**Finish this exercise and submit on Canvas.**
Q: Write Python code that creates two vectors, $\wv$ and $\xv$, as follows:
$
\xv = \begin{bmatrix}
4.0 \\
2.3 \\
1.2 \\
5.8
\end{bmatrix},
\wv = \begin{bmatrix}
0.8 \\
0.1 \\
0.53 \\
0.33
\end{bmatrix}
$
```python
# TODO
```
Q: Write a function *linear_model(x, w)* that returns the result of dot product.
```python
def linear_model(x, w):
    # TODO: fill in here
    pass
```
Q: Pass the $\wv$ and $\xv$ and print the output of the linear model.
```python
# TODO
```
# Linear Regression
In this note, we solve regression problems using the linear model as follows.
For instance, we have example data as follows.
The goal we want to achieve in this problem is to find the best fit to all the data.
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
```python
X = np.linspace(0,10, 101)
T = 2 * X + 4+ np.random.rand(101) * 5
```
```python
def data_scatter(k=101):
    plt.plot(T[:k], '.')
    plt.xticks(range(0, 101, 20)[:k], range(0, 11, 2)[:k])
```
```python
data_scatter()
```
When we use a linear model, there can be multiple options. One of them, and the simplest solution, is the average value.
```python
mean = np.mean(T)
data_scatter()
plt.plot([0, 100],[mean, mean], 'r-')
```
When the data are linear, or when we need a simple solution, the linear model can suggest a better fit.
For instance, a one-dimensional affine model can be written as
$$
f(x; a, b) = a x + b.
$$
Unifying the weight symbol with $w$,
$$
f(x; \wv) = w_1 x + w_0.
$$
Considering multiple inputs, we can extend the scalar input $x$ to an input vector $\xv$ with a dummy input $x_0 = 1$:
$$
\begin{align}
f(\xv; \wv) &= w_D x_D + \cdots + w_1 x_1 + w_0 \\
&= \sum_{i=0}^{D} w_i x_i \quad\text{where } x_0 = 1\\
&= \wv^\top \xv.
\end{align}
$$
#### Error (Cost) Function
When we define $\wv \in \R^D$, some choices of these $D$ real numbers fit the data better than others.
Here, the word "*best*" can be vague, so we need to define what *best* means.
The sum-of-squares error function is defined as follows:
$$
E(\wv) = \sum_{i=1}^N \Big( f(\xv_i; \wv) - t_i \Big)^2
$$
This error function says that we want to minimize the sum of squared distances between the target values and the model outputs. Squaring the errors keeps the fit from being dominated by a good match on just a few samples, since leaving other samples with large errors would hurt the objective.
## Least Squares
The parameter that gives best fit will be
$$
\wv^* = \arg\min_\wv \sum_{i=1}^{N} \Big( f(\xv_i; \wv) - t_i \Big)^2
$$
Since the error function is quadratic, the problem can be solved analytically by simply setting the derivative with respect to $\wv$ to zero.
For this, let us prepare the data in matrix form.
The target values are collected in the vector $\tv$, and the input samples in the matrix $\Xm$.
$$
\begin{align}
\tv &= [t_1, t_2, \cdots, t_N]^\top \\
\\
\wv &= [w_0, w_1, \cdots, w_D]^\top \\
\\
\Xm &= \begin{bmatrix}
x_{10} & x_{11} & x_{12} & \dots & x_{1D} \\
x_{20} & x_{21} & x_{22} & \dots & x_{2D} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_{N0} & x_{N1} & x_{N2} & \dots & x_{ND}
\end{bmatrix}
\end{align}
$$
where the first column is one's, $\xv_{*0} = [1, 1, \dots, 1]^\top$.
With this matrix, $f(\xv; \wv)$ can be written in matrix form as:
$$
f(\xv; \wv) = \Xm \wv.
$$
Thus, the error function can be
$$
\begin{align}
E(\wv) &= \sum_{i=1}^N \Big(f(\xv_i; \wv) - t_i \Big)^2 \\
\\
&= (\Xm \wv - \tv)^\top (\Xm \wv - \tv) \\
\\
&= \wv^\top \Xm^\top \Xm \wv - 2 \tv^\top \Xm \wv + \tv^\top \tv
\end{align}
$$
because $\tv^\top \Xm \wv$ is a scalar, so symmetric.
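Before taking the derivative, here is a small numerical check (with random data, purely for illustration) that the sum form, the matrix form, and the expanded form of $E(\wv)$ agree:
```python
import numpy as np

rng = np.random.default_rng(0)
X_demo = rng.normal(size=(5, 3))   # 5 samples, 3 inputs (the first column could be ones)
t_demo = rng.normal(size=5)
w_demo = rng.normal(size=3)

E_sum = np.sum((X_demo @ w_demo - t_demo)**2)
E_matrix = (X_demo @ w_demo - t_demo) @ (X_demo @ w_demo - t_demo)
E_expanded = w_demo @ X_demo.T @ X_demo @ w_demo - 2*t_demo @ X_demo @ w_demo + t_demo @ t_demo
print(E_sum, E_matrix, E_expanded)  # all three values agree
```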
Now, let us do the derivative.
$$
\begin{align}
\frac{\partial E(\wv)}{\partial \wv} &= \frac{\partial (\Xm \wv - \tv)^\top (\Xm \wv - \tv)}{\partial \wv} \\
\\
&= \frac{\partial (\wv^\top \Xm^\top \Xm \wv - 2 \tv^\top \Xm \wv + \tv^\top \tv )}{\partial \wv} \\
\\
&= \frac{\partial (\wv^\top \Xm^\top \Xm \wv)}{\partial \wv} - 2 \Xm^\top \tv \\
\\
&= \Xm^\top \Xm \wv + (\Xm^\top \Xm)^\top \wv - 2 \Xm^\top \tv \\
\\
&= 2 \Xm^\top \Xm \wv - 2 \Xm^\top \tv
\end{align}
$$
Setting this to zero,
$$
\begin{align}
2 \Xm^\top \Xm \wv - 2 \Xm^\top \tv &= 0\\
\\
\Xm^\top \Xm \wv &= \Xm^\top \tv\\
\\
\wv &= \big(\Xm^\top \Xm\big)^{-1} \Xm^\top \tv
\end{align}
$$
# Practice
Implement the least squares model and apply to the simulated data X and T.
Consider using **np.linalg.inv**, **np.linalg.solve**, **np.linalg.lstsq**.
After getting the parameter w, plot the approximation line.
```python
import numpy as np
N = X.shape[0]
# TODO: code for finding w
# First create X1 by adding 1's column to X
X1 =
# Next, using inverse, solve, lstsq function to get w*
w =
```
array([ 6.63917738, 1.99506176])
```python
# TODO: Write codes to generate the plot as below.
# call data_scatter function to present the training data
# then, write a plotting code showing the linear line
```
## Least Mean Squares (LMS)
Previously we observed that least squares uses all the available data for training, i.e., for finding the best fit.
This can often be computationally costly, especially with large data sets. When the data set is sufficiently large, we can consider *sequential* or *online* learning.
During the online learning process, we introduce the data points one by one and update the parameters. Using the updated parameters, the model makes a new estimate, and we repeat these steps.
For this, we start with an initial guess $\wv$ and change it as we read more data, until it converges.
When $k$ represents the steps for the repetition,
$$
\wv^{(k+1)} = \wv^{(k)} - \alpha \nabla E_k
$$
where $E_k$ is the error for the $k$'th sample and $\alpha$ is a learning rate.
This is called *stochastic gradient descent* or *sequential gradient descent*.
For the $k$'th sample $\xv_k$, the gradient for the sum-of-squares error is
$$
\begin{align}
\nabla E_k = \frac{\partial E}{\partial \wv^{(k)}} &= \frac{\partial }{\partial \wv^{(k)}}\Big( f(\xv_k; \wv^{(k)}) - t_k \Big)^2 \\
&= 2 \Big( f(\xv_k; \wv^{(k)}) - t_k \Big) \frac{\partial }{\partial \wv^{(k)}} \Big( f(\xv_k; \wv^{(k)}) - t_k \Big) \\
&= 2 \Big( {\wv^{(k)}}^\top \xv_k - t_k \Big) \frac{\partial }{\partial \wv^{(k)}} \Big( {\wv^{(k)}}^\top \xv_k - t_k \Big) \\
&= 2\Big( {\wv^{(k)}}^\top \xv_k - t_k \Big) \xv_k.
\end{align}
$$
This gives the following update rule for each sample:
$$
\wv^{(k+1)} = \wv^{(k)} - \alpha \Big( {\wv^{(k)}}^\top \xv_k - t_k \Big) \xv_k.
$$
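For illustration only (the practice below asks you to build the full loop yourself), a single LMS update written in numpy looks like this; the sample and target values here are made up:
```python
import numpy as np

alpha = 0.01                        # learning rate
w_lms = np.zeros(2)                 # [w0, w1], initial guess
x_k = np.array([1.0, 3.5])          # one sample with the dummy input 1 prepended
t_k = 2*3.5 + 4                     # its (noise-free) target

w_lms = w_lms - alpha*(w_lms @ x_k - t_k)*x_k   # one sequential update
print(w_lms)
```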
# Practice
Implement the LMS for the simulated samples X, and show the plot.
```python
import IPython.display as ipd # for display and clear_output
# initial weights with random values
w = np.random.rand(X1.shape[1])
# learning rate
alpha = 0.01
fig = plt.figure()
# sequential learning
for k in range(N):
    # TODO: online update of weights
    plt.clf()
    data_scatter(k+1)
    # TODO: Plot the current model's estimation in a line
    ipd.clear_output(wait=True)
    ipd.display(fig)
ipd.clear_output(wait=True)
```
|
e07d8a481e03a89a6ea631a9d35ee102b61104fb
| 56,922 |
ipynb
|
Jupyter Notebook
|
reading_assignments/questions/1_Note-Linear Model.ipynb
|
biqar/Fall-2020-ITCS-8156-MachineLearning
|
ce14609327e5fa13f7af7b904a69da3aa3606f37
|
[
"MIT"
] | null | null | null |
reading_assignments/questions/1_Note-Linear Model.ipynb
|
biqar/Fall-2020-ITCS-8156-MachineLearning
|
ce14609327e5fa13f7af7b904a69da3aa3606f37
|
[
"MIT"
] | null | null | null |
reading_assignments/questions/1_Note-Linear Model.ipynb
|
biqar/Fall-2020-ITCS-8156-MachineLearning
|
ce14609327e5fa13f7af7b904a69da3aa3606f37
|
[
"MIT"
] | null | null | null | 94.87 | 12,154 | 0.811338 | true | 3,598 |
Qwen/Qwen-72B
|
1. YES
2. YES
| 0.805632 | 0.833325 | 0.671353 |
__label__eng_Latn
| 0.989409 | 0.398109 |
## Model Based Submodular Selection
Author: Jacob Schreiber <jmschreiber91@gmail.com>
Submodular selection is the task of identifying a representative subset of samples from a large set, and apricot focuses on the use of these algorithms to identify a good subset of data that can be used for the purpose of training machine learning models. However, submodular functions can also be coupled with a trained machine learning model and a feature attribution algorithm in order to identify the subset of samples that the machine learning model thinks are the most important for training.
In order to understand how this works, let's review the feature based submodular algorithm that's implemented in apricot. These functions greedily add samples to the growing subset that utilize a diversity of features. The equation that the feature based functions optimize is as follows:
\begin{equation}
f(X) = \sum\limits_{u \in U} w_{u} \phi_{u} \left( \sum\limits_{x \in X} m_{u}(x_{u}) \right)
\end{equation}
In this equation, $U$ is the set of features, or dimensions, of a sample, and $u$ refers to a specific feature. $X$ refers to the original data set that we are selecting from and $x$ refers to a single sample from that data set. $w$ is a vector of weights that indicate how important each feature is, with $w_{u}$ being a scalar referring to how important feature $u$ is. Frequently these weights are uniform. $\phi$ refers to a set of saturating functions, such as $sqrt(X)$ or $log(X + 1)$, that have diminishing returns the larger X gets.
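To make the equation concrete, here is a tiny pure-numpy sketch of the greedy gain computation for one candidate sample, using a square root as the saturating function and uniform weights. This is only an illustration of the math, not apricot's actual implementation.
```python
import numpy as np

rng = np.random.default_rng(0)
X_toy = np.abs(rng.normal(size=(6, 4)))   # 6 samples, 4 non-negative feature values
w_u = np.ones(X_toy.shape[1])             # uniform feature weights
phi = np.sqrt                             # a saturating (concave) function

selected = [0, 2]                         # indices already in the subset
per_feature_totals = X_toy[selected].sum(axis=0)
f_current = np.sum(w_u*phi(per_feature_totals))

candidate = 3
gain = np.sum(w_u*phi(per_feature_totals + X_toy[candidate])) - f_current
print(gain)   # greedy selection adds the sample with the largest gain
```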
The default implementation in apricot for $m_{u}$ is the identity function $m_{u}(x_{u}) = x_{u}$, meaning that a sample will have the most gain when it has a high value in a feature that is not yet well represented with high values in the already included samples. What if, instead of using the raw feature value, we used the feature *attribution* as determined by some attribution algorithm? This would give us a subset of samples that exhibit a diversity of importances.
```python
%pylab inline
import seaborn; seaborn.set_style('whitegrid')
import shap
from sklearn.linear_model import LogisticRegression
from shap import LinearExplainer
from apricot import FeatureBasedSelection
```
Populating the interactive namespace from numpy and matplotlib
Let's start off with creating two clusters of data and seeing what happens when we use a feature based function off of them natively.
```python
X = numpy.concatenate([numpy.random.normal((7, 8), 1, size=(100, 2)),
numpy.random.normal((10, 5), 1, size=(100, 2))])
y = numpy.concatenate([numpy.zeros(100), numpy.ones(100)])
Xi, yi = FeatureBasedSelection(20).fit_transform(X, y)
```
```python
plt.figure(figsize=(8, 6))
plt.title("Submodular selection on Gaussian blobs", fontsize=16)
plt.scatter(X[:100, 0], X[:100, 1], marker='+', label='Negative Class')
plt.scatter(X[100:, 0], X[100:, 1], marker='+', label='Positive Class')
plt.scatter(Xi[yi == 0, 0], Xi[yi == 0, 1], label='Selected Negative Samples')
plt.scatter(Xi[yi == 1, 0], Xi[yi == 1, 1], label="Selected Positive Samples")
plt.legend(fontsize=14, loc=(1, 0.3))
plt.show()
```
That's unfortunate. It looks like the selected samples are just those with the highest feature values. This is not entirely true due to the non-linearity that's applied during selection, but a good approximation is that if you run a diagonal line with slope y = -x down from the top right corner of the plot, the first 20 points it hits are the ones selected. This is a downside of using a feature based function natively.
Well, our goal here was not to select on the feature naively, but rather to identify features that a trained machine learning model thought were important. However, this term "important" can have two meanings. It can be the samples that are the most obvious samples from one or another class, perhaps the most representative, or it can be those that are on the boundary. Let's look at the first case first. In order to calculate feature importances we will use the package shap.
The first thing to do is to train our model. In our case let's train a logistic regression model.
```python
model = LogisticRegression().fit(X, y)
```
Now we need to get the attributions. These values sum up to the final prediction and attribute to each feature the importance of that measurement.
```python
X_shap = LinearExplainer(model, X).shap_values(X)
plt.figure(figsize=(12, 4))
plt.subplot(121)
plt.title("Original Data", fontsize=14)
plt.scatter(X[y == 0, 0], X[y == 0, 1], s=10)
plt.scatter(X[y == 1, 0], X[y == 1, 1], s=10)
plt.subplot(122)
plt.title("Attributions", fontsize=14)
plt.scatter(X_shap[y == 0, 0], X_shap[y == 0, 1], s=10)
plt.scatter(X_shap[y == 1, 0], X_shap[y == 1, 1], s=10)
plt.xlabel("x-axis attribution", fontsize=14)
plt.ylabel("y-axis attribution", fontsize=14)
plt.show()
```
Unfortunately these values correspond to the log odds and so can be negative when they are predicting the negative class. We can correct for this by taking the absolute value of the attribution. That will ensure that the values are entirely positive and that the higher in magnitude they are, the most important that feature was for the prediction.
```python
X_shap = numpy.abs(X_shap)
plt.scatter(X_shap[y == 0, 0], X_shap[y == 0, 1], s=10, label="Negative Samples")
plt.scatter(X_shap[y == 1, 0], X_shap[y == 1, 1], s=10, label="Positive Samples")
plt.xlabel("|x-axis attribution|", fontsize=14)
plt.ylabel("|y-axis attribution|", fontsize=14)
plt.legend(fontsize=14, loc=(1.01, 0.4))
plt.show()
```
In this absolute attribution space we will still be selecting samples as if we were running a line with slope y = -x down from the top of the plot, but the transformed values should be more amenable to selection in that manner. Now, let's select the 20 most important samples according to the model. We can compare this to the samples that the model is most confident in, as ranked by maximum predicted probability.
```python
Xi1, yi1 = FeatureBasedSelection(20).fit_transform(X, y)
selector = FeatureBasedSelection(20)
selector.fit_transform(X_shap)
Xi2 = X[selector.indices]
yi2 = y[selector.indices]
y_pred = model.predict_proba(X).max(axis=1)
idx = numpy.argsort(y_pred)[::-1][:20]
Xi3 = X[idx]
yi3 = y[idx]
plt.figure(figsize=(14, 4))
plt.subplot(131)
plt.title("Selection in the original space", fontsize=16)
plt.scatter(X[y == 0, 0], X[y == 0, 1], s=10, label="Negative Samples")
plt.scatter(X[y == 1, 0], X[y == 1, 1], s=10, label="Positive Samples")
plt.scatter(Xi1[yi1 == 0, 0], Xi1[yi1 == 0, 1], s=20, label="Negative Selected Samples")
plt.scatter(Xi1[yi1 == 1, 0], Xi1[yi1 == 1, 1], s=20, label="Positive Selected Samples")
plt.subplot(132)
plt.title("Selection in the transformed space", fontsize=16)
plt.scatter(X[y == 0, 0], X[y == 0, 1], s=10, label="Negative Samples")
plt.scatter(X[y == 1, 0], X[y == 1, 1], s=10, label="Positive Samples")
plt.scatter(Xi2[yi2 == 0, 0], Xi2[yi2 == 0, 1], s=20, label="Negative Selected Samples")
plt.scatter(Xi2[yi2 == 1, 0], Xi2[yi2 == 1, 1], s=20, label="Positive Selected Samples")
plt.subplot(133)
plt.title("Selection in the transformed space", fontsize=16)
plt.scatter(X[y == 0, 0], X[y == 0, 1], s=10, label="Negative Samples")
plt.scatter(X[y == 1, 0], X[y == 1, 1], s=10, label="Positive Samples")
plt.scatter(Xi3[yi3 == 0, 0], Xi3[yi3 == 0, 1], s=20, label="Negative Selected Samples")
plt.scatter(Xi3[yi3 == 1, 0], Xi3[yi3 == 1, 1], s=20, label="Positive Selected Samples")
plt.legend(fontsize=14, loc=(1, 0.5))
plt.show()
```
It certainly looks like using the absolute attributions in this manner is yielding samples that are more representative of the most confident samples. However, it's unclear what the differences between using attributions versus using the model predictions alone correspond to, and whether one is better than the other in this simple case.
Another way that one might want to select samples is based on the *least confident samples*, i.e., identifying those samples that lie near the decision boundary. This can be particularly useful for the identification of samples that a model would be uncertain about. We can use a similar procedure, but after taking the absolute value of the attribution values, we multiply all attributions for a sample by the *minimum predicted class probability*. This has the effect of reducing values for samples where the model is very confident, while preserving values when the model is uncertain. The submodular selection algorithm will then preferentially select samples that have a diversity of features being relevant for the prediction, but only for uncertain samples.
```python
X_shap = LinearExplainer(model, X).shap_values(X)
X_shap = (numpy.abs(X_shap).T * model.predict_proba(X).min(axis=1)).T
Xi1, yi1 = FeatureBasedSelection(20).fit_transform(X, y)
selector = FeatureBasedSelection(20)
selector.fit_transform(X_shap)
Xi2 = X[selector.indices]
yi2 = y[selector.indices]
y_pred = model.predict_proba(X).min(axis=1)
idx = numpy.argsort(y_pred)[::-1][:20]
Xi3 = X[idx]
yi3 = y[idx]
plt.figure(figsize=(14, 4))
plt.subplot(131)
plt.title("Selection on feature values", fontsize=14)
plt.scatter(X[y == 0, 0], X[y == 0, 1], s=10, label="Negative Samples")
plt.scatter(X[y == 1, 0], X[y == 1, 1], s=10, label="Positive Samples")
plt.scatter(Xi1[yi1 == 0, 0], Xi1[yi1 == 0, 1], s=20, label="Negative Selected Samples")
plt.scatter(Xi1[yi1 == 1, 0], Xi1[yi1 == 1, 1], s=20, label="Positive Selected Samples")
plt.subplot(132)
plt.title("Selection on feature attributions", fontsize=14)
plt.scatter(X[y == 0, 0], X[y == 0, 1], s=10, label="Negative Samples")
plt.scatter(X[y == 1, 0], X[y == 1, 1], s=10, label="Positive Samples")
plt.scatter(Xi2[yi2 == 0, 0], Xi2[yi2 == 0, 1], s=20, label="Negative Selected Samples")
plt.scatter(Xi2[yi2 == 1, 0], Xi2[yi2 == 1, 1], s=20, label="Positive Selected Samples")
plt.subplot(133)
plt.title("Least confident samples", fontsize=16)
plt.scatter(X[y == 0, 0], X[y == 0, 1], s=10, label="Negative Samples")
plt.scatter(X[y == 1, 0], X[y == 1, 1], s=10, label="Positive Samples")
plt.scatter(Xi3[yi3 == 0, 0], Xi3[yi3 == 0, 1], s=20, label="Negative Selected Samples")
plt.scatter(Xi3[yi3 == 1, 0], Xi3[yi3 == 1, 1], s=20, label="Positive Selected Samples")
plt.legend(fontsize=14, loc=(1, 0.3))
plt.show()
```
It looks like, again, using the attributions is much more meaningful than running submodular selection on the original feature values. One could argue that performing submodular selection on the attributions yields a more diverse sampling near the decision boundary than using those samples that the model is the least confident about, but it's a bit subjective to say that.
Let's try to solidify a situation in which submodular selection on the attribution values is better than selecting the least confident samples. In this case, we'll build a data set where samples overlap significantly in one part of the decision boundary and not in another portion.
```python
numpy.random.seed(2)
X = numpy.concatenate([numpy.random.uniform((5.6, 5), (8, 6.3), size=(100, 2)),
numpy.random.uniform((4, 5), (5.8, 8), size=(200, 2)),
numpy.random.uniform(6, 8.1, size=(300, 2))])
y = numpy.concatenate([numpy.zeros(300), numpy.ones(300)])
plt.scatter(X[y == 0, 0], X[y == 0, 1], s=5, label="Negative Samples")
plt.scatter(X[y == 1, 0], X[y == 1, 1], c='r', s=5, label="Positive Samples")
plt.legend(fontsize=14, loc=(1, 0.4))
plt.xlim(4, 8)
plt.ylim(5, 8)
plt.show()
```
On this data set we can see that the blue and red samples overlap on the portion of the decision boundary where the y-axis separates the two classes, but that the samples don't overlap on the portion where the x-axis separates the two classes. A machine learning model built on this data will be less sure about the overlapping samples.
```python
from sklearn.svm import SVC
from shap import KernelExplainer
model = SVC(probability=True).fit(X, y)
explainer = KernelExplainer(model.predict_proba, X)
X_shap = explainer.shap_values(X)
X_shap = (numpy.abs(X_shap[1] - X_shap[0]).T * model.predict_proba(X).min(axis=1)).T
selector = FeatureBasedSelection(50)
selector.fit_transform(X_shap)
Xi2 = X[selector.indices]
y_pred = model.predict_proba(X).min(axis=1)
idx = numpy.argsort(y_pred)[::-1]
xx, yy = np.meshgrid(np.arange(4, 8.1, 0.1), np.arange(5, 9.1, 0.1))
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
```
100%|██████████| 600/600 [00:05<00:00, 115.12it/s]
Now let's plot the attributions for each sample, the decision boundaries, and the selected samples using submodular selection versus the least confident samples.
```python
plt.figure(figsize=(14, 4))
plt.subplot(131)
plt.title("Feature Attributions", fontsize=16)
plt.scatter(X_shap[y == 0, 0], X_shap[y == 0, 1], s=5)
plt.scatter(X_shap[y == 1, 0], X_shap[y == 1, 1], c='r', s=5)
plt.scatter(X_shap[selector.indices][:, 0], X_shap[selector.indices][:, 1], c='g', s=20)
plt.xlabel("x-axis attribution", fontsize=12)
plt.ylabel("y-axis attribution", fontsize=12)
plt.subplot(132)
plt.title("Selection on Attribution", fontsize=16)
plt.contourf(xx, yy, Z, cmap='RdBu_r', linewidths=0.3, alpha=0.25)
plt.scatter(X[y == 0, 0], X[y == 0, 1], s=5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], c='r', s=5)
plt.scatter(Xi2[:, 0], Xi2[:, 1], s=20, color='g', label="Selected Samples")
plt.xlim(4, 8)
plt.ylim(5, 8)
plt.subplot(133)
plt.title("Least confident samples", fontsize=16)
plt.contourf(xx, yy, Z, cmap='RdBu_r', linewidths=0.3, alpha=0.25)
plt.scatter(X[y == 0, 0], X[y == 0, 1], s=5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], c='r', s=5)
plt.scatter(X[idx[:50], 0], X[idx[:50], 1], s=20, color='#FF6600', label="Selected Samples")
plt.xlim(4, 8)
plt.ylim(5, 8)
plt.savefig("img/attributionselection.png")
plt.show()
```
It looks like submodular selection identifies six samples from the x-axis dominated portion of the decision boundary, whereas taking the least confident samples selects only a single one there.
| a4aedfaa6ade51b78cd22a2eee2e5dced1e72340 | 345,760 | ipynb | Jupyter Notebook | tutorials/3. Model-Based Selection.ipynb | domoritz/apricot | 6dff8d08dee9145ec6c7e3e79d77efa0bbf19474 | ["MIT"] | null | null | null | tutorials/3. Model-Based Selection.ipynb | domoritz/apricot | 6dff8d08dee9145ec6c7e3e79d77efa0bbf19474 | ["MIT"] | null | null | null | tutorials/3. Model-Based Selection.ipynb | domoritz/apricot | 6dff8d08dee9145ec6c7e3e79d77efa0bbf19474 | ["MIT"] | null | null | null | 704.195519 | 75,350 | 0.935687 | true | 4,017 | Qwen/Qwen-72B | 1. YES 2. YES | 0.76908 | 0.661923 | 0.509072 | __label__eng_Latn | 0.98845 | 0.021074 |
# Game instructions
Consider the following board game: A game board has 12 spaces. The swine senses the Christmas spirit and manages to run away from home a couple of weeks beforehand. Fortunately for it, the butcher is a bit of a drunkard and easily distracted. The swine starts on space 7, and the butcher on space 1. On each game turn a 6-sided die is rolled. On a result of 1 to 3, the swine moves that many spaces forward. On a result of 5 or 6, the butcher moves that many spaces forward. On a result of 4, both advance one space forward. The swine wins if it reaches the river at space 12 (the final roll does not have to be exact, moving past space 12 is OK). The butcher wins if he catches up with the swine (or moves past it).
What are the probabilities of winning for the swine and the butcher?
Your assignment is to create a mathematical or statistical model to find these probabilities, and implement the solution as a computer program in whatever language you like. You will present it during the interview and we will discuss it with you.
Consider the following questions as well:
- Can you make your model easily extendable for different initial conditions (board size and initial positions)?
- Pros and cons of the approach?
- Can you say something about how long the game takes (also under different initial conditions)?
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import itertools
```
# Model: dynamic programming
#### Initialize game parameters
```python
board_size = 12
swine_start = 7
butcher_start = 1
if butcher_start >= swine_start:
raise ValueError('Error in starting positions: The swine has to start ahead of the butcher.')
elif swine_start >= board_size:
raise ValueError('Error in starting positions: The river has to lie ahead of the swine.')
```
#### DP formulation
Assume a fixed board size. Let $s$ be the position of the swine on the board, $b$ the position of the butcher on the board, and $F(s, b)$ the probability that the swine wins the game if the swine is at space $s$ and the butcher at space $b$.
The recurrence relation can be formulated as follows:
\begin{align}
F(s, b \mid s \geq \text{board size}) &= 1 && \text{Swine has won with 100\% probability if the end of the board is reached} \\
F(s, b \mid s \leq b) &= 0 && \text{Swine wins with 0\% probability if the butcher is ahead or at the same space} \\
F(s, b) &= 1/6 \cdot F(s+1, b) && \text{DP recurrence relation} \\
&\quad + 1/6 \cdot F(s+2, b) \\
&\quad + 1/6 \cdot F(s+3, b) \\
&\quad + 1/6 \cdot F(s+1, b+1) \\
&\quad + 1/6 \cdot F(s, b+5) \\
&\quad + 1/6 \cdot F(s, b+6)
\end{align}
The equations express that the swine's probability of winning is a weighted combination of its winning chances in all possible subsequent states.
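As a quick illustrative hand calculation (not computed in the notebook): with the swine at space 11 and the butcher at space 6, any roll of 1–4 puts the swine on or past space 12, while a roll of 5 or 6 moves the butcher to space 11 or 12 and catches the swine, so
$$F(11, 6) = \frac{4}{6}\cdot 1 + \frac{2}{6}\cdot 0 = \frac{2}{3}.$$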
#### Prefill winning probabilities known at start
```python
swine_positions = np.arange(swine_start, board_size+1)
butcher_positions = np.arange(butcher_start, board_size+1)
probs = pd.DataFrame(np.nan*np.ones(shape=(len(swine_positions), len(butcher_positions))), index=swine_positions, columns=butcher_positions)
# Swine has won with 100% probability if end of board is reached.
probs.loc[board_size, :] = 1.0
# Swine wins with 0% probability if the butcher is ahead or at the same square.
# I.e. upper right triangle should be zeros.
mask = np.ones(probs.shape, dtype='bool')
triu = np.triu_indices(n=probs.shape[0], m=probs.shape[0])
mask[tuple([triu[0], triu[1] + probs.shape[1] - probs.shape[0]])] = False
probs.where(mask, other=0.0, inplace=True)
```
```python
probs
```
|    | 1   | 2   | 3   | 4   | 5   | 6   | 7   | 8   | 9   | 10  | 11  | 12  |
|----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 7  | NaN | NaN | NaN | NaN | NaN | NaN | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 8  | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 9  | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 0.0 | 0.0 | 0.0 | 0.0 |
| 10 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 0.0 | 0.0 | 0.0 |
| 11 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 0.0 | 0.0 |
| 12 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.0 |
#### Solve DP
```python
def F(probs, sp, bp):
"""Calculate the swine's probability of winning based on the current swine and butcher position.
Arguments
- probs: dataframe of swine's winning chances (known and unknown)
- sp: swine's current position
- bp: butcher's current position
"""
# Check that the current positions are not lower than the starting positions.
if sp < probs.index.min():
sp = probs.index.min()
print('Swine position lower than starting position: using swine starting position ({}) instead.'.format(swine_start))
if bp < probs.columns.min():
bp = probs.columns.min()
print('Butcher position lower than starting position: using butcher starting position ({}) instead.'.format(butcher_start))
# Check that neither the swine nor the butcher has already reached the end of the board.
# If so, reset position to the last space on the board.
if sp > probs.index.max():
# print('Swine position exceeds board length: using highest possible position instead.')
sp = probs.index.max()
if bp > probs.columns.max():
# print('Butcher position exceeds board length: using highest possible position instead.')
bp = probs.columns.max()
# If the requested probability is already known: return from storage.
if not np.isnan(probs.loc[sp, bp]):
return probs.loc[sp, bp]
# Else: calculate the requested probability according to the DP recurrence, store and return.
else:
prob = 1/6 * F(probs, sp+1, bp) \
+ 1/6 * F(probs, sp+2, bp) \
+ 1/6 * F(probs, sp+3, bp) \
+ 1/6 * F(probs, sp+1, bp+1) \
+ 1/6 * F(probs, sp, bp+5) \
+ 1/6 * F(probs, sp, bp+6)
probs.loc[sp, bp] = prob
return prob
```
```python
swine_winning_prob = F(probs, swine_start, butcher_start)
print('The swine and the butcher start at space {} and {} respectively. The board is {} spaces long.'.format(swine_start, butcher_start, board_size))
print('The probability that the swine wins is {:.1f}%.'.format(swine_winning_prob*100))
```
The swine and the butcher start at space 7 and 1 respectively. The board is 12 spaces long.
The probability that the swine wins is 51.2%.
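As a rough cross-check of the DP result, a Monte Carlo simulation of the same rules can be run. This is not part of the original notebook, just an illustrative sketch; the simulated win rate should land close to the 51.2% obtained from the DP.

```python
import random

def simulate_game(board_size=12, swine_start=7, butcher_start=1, n_games=200_000, seed=0):
    """Monte Carlo estimate of the swine's winning probability (illustrative cross-check)."""
    rng = random.Random(seed)
    swine_wins = 0
    for _ in range(n_games):
        swine, butcher = swine_start, butcher_start
        while True:
            roll = rng.randint(1, 6)
            if roll <= 3:
                swine += roll
            elif roll == 4:
                swine += 1
                butcher += 1
            else:
                butcher += roll
            if swine >= board_size:
                swine_wins += 1
                break
            if butcher >= swine:
                break
    return swine_wins / n_games

print('Simulated swine winning probability: {:.1%}'.format(simulate_game()))
```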
| b0b1f31011d80ff94471c0a6d78e4ab2d8afb62a | 12,936 | ipynb | Jupyter Notebook | dp.ipynb | MeekeRoet/swine-escape | 4201354dbc6cef7f84b6c6b7ad395292b29cccf8 | ["MIT"] | null | null | null | dp.ipynb | MeekeRoet/swine-escape | 4201354dbc6cef7f84b6c6b7ad395292b29cccf8 | ["MIT"] | null | null | null | dp.ipynb | MeekeRoet/swine-escape | 4201354dbc6cef7f84b6c6b7ad395292b29cccf8 | ["MIT"] | null | null | null | 35.152174 | 716 | 0.474103 | true | 2,486 | Qwen/Qwen-72B | 1. YES 2. YES | 0.921922 | 0.90599 | 0.835252 | __label__eng_Latn | 0.975705 | 0.778903 |
<h1 style='text-align:center'>Communication Channel Simulation Using the Erceg Model</h1>
```python
import numpy as np
import random
import math
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from PIL import Image
```
The Erceg model was built and studied from experimental data collected by AT&T Wireless Services at 95 stations across the USA operating at 1.9 GHz. The model defines three terrain categories that cause signal loss. Category A represents hilly terrain with a high density of obstacles in the signal path, which corresponds to a high path loss. Category C represents flat terrain with a low density of obstacles in the signal path, corresponding to a low path loss, while Category B consists of hilly terrain with a low density of obstacles or flat terrain with a considerable density of obstacles; in essence, Category B is an intermediate case between A and C, with a median loss compared to the other two.
For all three categories the median path loss is given by the same equation, valid for $d > d_0$:
$$
P_L(dB) = 20\log_{10}(4\pi d_0/\lambda)+10\gamma \log_{10}(d/d_0)+s
$$
where $\lambda$ is the wavelength of the signal, $s$ is the shadowing term, and $\gamma$ is the path-loss exponent, which depends on the terrain category and is given by the following equation
$$
\gamma = a - bh_b + c/h_b
$$
$h_b$ is the base-station height in meters (typically between 10 and 80 m), $d_0 = 100$ m, and the nominal values of a, b and c depend on the terrain category, according to the following table:
|Parameter|Category A|Category B|Category C|
|---------|----------|----------|----------|
|a|4.6|3|3.6|
|b|0.0076|0.0065|0.005|
|c|12.6|17.1|20|
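To make this formula easy to reuse for the additional simulations discussed later, it can be wrapped in a small helper. This is an illustrative sketch (the function name and defaults are chosen here, not taken from the original notebook), using $\lambda = c/f$:

```python
def erceg_path_loss(d, f, gamma, d0=100, s=6, c_light=3e8):
    """Median Erceg path loss in dB for distance d (m), frequency f (Hz),
    path-loss exponent gamma, reference distance d0 (m) and shadowing s (dB)."""
    wavelength = c_light / f
    return 20*np.log10(4*math.pi*d0/wavelength) + 10*gamma*np.log10(d/d0) + s
```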
Definition of the variables $d_0$, $h_b$ and $s$
```python
d0 = 100
hb = 50
s = 6
f = 1900000000
```
Definition of the distances to be simulated
```python
d = np.arange(1, 1000, 1)
```
Construction of the category parameter arrays
```python
a = np.array([4.6, 3, 3.6])
b = np.array([0.0076, 0.0065, 0.005])
c = np.array([12.6, 17.1, 20])
```
Array with the gamma values
```python
gamma = [a[i]-b[i]*hb+c[i]/hb for i in range(3)]
```
With these values in place, we can analyze the response for each of the three terrain categories
```python
v1 = 20*np.log10(4*math.pi*d0*f/300000000)+10*gamma[0]*np.log10(d/d0)+s
v2 = 20*np.log10(4*math.pi*d0*f/300000000)+10*gamma[1]*np.log10(d/d0)+s
v3 = 20*np.log10(4*math.pi*d0*f/300000000)+10*gamma[2]*np.log10(d/d0)+s
```
Plotting the results
```python
plt.plot(d, v1, label='Category A')
plt.plot(d, v2, label='Category B')
plt.plot(d, v3, label='Category C')
plt.xlabel('Distance between the mobile station and the base station [m]')
plt.ylabel('Path loss at the mobile station [dB]')
plt.legend()
# plt.title('Path loss as a function of the distance between the mobile station and the base station')
```
Analyzing the curves as a function of the distance between the base station and the mobile device, we can see that for distances $d<d_0$ the curves behave similarly, showing comparable power losses. This is a strong indication that, in this regime, the factors that make up the Erceg model yield practically the same values regardless of the terrain category through which the signal propagates. It is interesting to note that the differences between the curves for each category become evident when $d>2d_0$, where the curves clearly diverge.
Another interesting simulation is to fix the distance between the base station and the mobile station, for the three possible situations $d<d_0$, $d=d_0$ and $d>d_0$, and examine the behavior as the frequency varies. Some limits are important here: the Erceg model was designed to work with frequencies close to 2 GHz, so it does not make sense to simulate the whole spectrum. The simulation was therefore centered at 2 GHz with a range of $\pm$300 MHz; that is, the path loss at a fixed distance was simulated for frequencies from 1.7 GHz to 2.3 GHz in steps of 100 kHz.
```python
fval = np.arange(1700000000, 2300000000, 100000)
fv1 = 20*np.log10(4*math.pi*d0*fval/300000000)+10*gamma[0]*np.log10(100/d0)+s
fv2 = 20*np.log10(4*math.pi*d0*fval/300000000)+10*gamma[1]*np.log10(100/d0)+s
fv3 = 20*np.log10(4*math.pi*d0*fval/300000000)+10*gamma[2]*np.log10(100/d0)+s
fv4 = 20*np.log10(4*math.pi*d0*fval/300000000)+10*gamma[0]*np.log10(50/d0)+s
fv5 = 20*np.log10(4*math.pi*d0*fval/300000000)+10*gamma[1]*np.log10(50/d0)+s
fv6 = 20*np.log10(4*math.pi*d0*fval/300000000)+10*gamma[2]*np.log10(50/d0)+s
fv7 = 20*np.log10(4*math.pi*d0*fval/300000000)+10*gamma[0]*np.log10(200/d0)+s
fv8 = 20*np.log10(4*math.pi*d0*fval/300000000)+10*gamma[1]*np.log10(200/d0)+s
fv9 = 20*np.log10(4*math.pi*d0*fval/300000000)+10*gamma[2]*np.log10(200/d0)+s
```
```python
plt.plot(fval, fv1, label='Category A, d=d0')
plt.plot(fval, fv2, label='Category B, d=d0')
plt.plot(fval, fv3, label='Category C, d=d0')
plt.plot(fval, fv4, label='Category A, d<d0')
plt.plot(fval, fv5, label='Category B, d<d0')
plt.plot(fval, fv6, label='Category C, d<d0')
plt.plot(fval, fv7, label='Category A, d>d0')
plt.plot(fval, fv8, label='Category B, d>d0')
plt.plot(fval, fv9, label='Category C, d>d0')
plt.xlabel('Signal frequency [Hz]')
plt.ylabel('Path loss at the mobile station [dB]')
plt.legend()
```
Looking at the resulting curves, it stands out that when $d=d_0$ the curves are identical; that is, regardless of the category, varying the frequency produces the same result. This behavior is predicted by the model, because with $d=d_0$ the term that depends on the terrain category vanishes, leaving the loss dependent only on the frequency.
For distances $d<d_0$ the model shows lower losses than the other simulated cases, making this the ideal usage scenario. For distances $d>d_0$ the model shows higher losses when compared with the other curves.
Another possible simulation would be to vary the tower height while fixing the frequency and a specific distance $d$; this would make it possible to understand how the tower height relates to the path loss (a sketch of this simulation is given below).
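The original notebook does not implement this, but a minimal sketch under the same model is shown below. The fixed distance of 200 m and the range of heights are illustrative choices, and the code reuses `a`, `b`, `c`, `d0`, `f` and `s` defined above.

```python
hb_vals = np.arange(10, 81, 1)   # typical base-station heights of 10-80 m
d_fixed = 200                    # fixed distance in meters (illustrative choice)

for cat, label in zip(range(3), ['Category A', 'Category B', 'Category C']):
    # Path-loss exponent as a function of the base-station height for this category
    gamma_hb = a[cat] - b[cat]*hb_vals + c[cat]/hb_vals
    loss = 20*np.log10(4*math.pi*d0*f/300000000) + 10*gamma_hb*np.log10(d_fixed/d0) + s
    plt.plot(hb_vals, loss, label=label)

plt.xlabel('Base station height [m]')
plt.ylabel('Path loss at the mobile station [dB]')
plt.legend()
plt.show()
```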
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.210.3876&rep=rep1&type=pdf
http://morse.colorado.edu/~tlen5510/text/classwebch3.html
https://www.mathworks.com/matlabcentral/fileexchange/39322-erceg-model
https://jupyter-notebook.readthedocs.io/en/stable/examples/Notebook/Working%20With%20Markdown%20Cells.html
<h1 style='text-align:center'>Multipath Propagation Simulation</h1>
$$X(t)=X_c(t)+jX_s(t)$$
$$X_c(t)=\sqrt{\dfrac{2}{M}}\sum_{n=1}^{M}\cos (\Psi_n)\cos (\omega_d t \cos a_n + \phi)$$
$$X_s(t)=\sqrt{\dfrac{2}{M}}\sum_{n=1}^{M}\sin (\Psi_n)\cos (\omega_d t \cos a_n + \phi)$$
$$a_n = \dfrac{2\pi n - \pi + \theta}{4M}$$
where $\theta$, $\phi$ and $\Psi_n$ are statistically independent and uniformly distributed over $[-\pi,\pi)$ for all $n$. The simulation in the paper was tested with $M = 8$.
The expression can be manipulated so as to gather all the factors into a single expression.
$$
X_k(t)= \sqrt{\dfrac{2}{M}} \left( \left\{ \sum_{n=1}^{M}\cos (\Psi_{n,k})\cos \left[\omega_d t \cos \left(\dfrac{2\pi n - \pi + \theta}{4M}\right) + \phi_k \right] \right\} + j\left\{ \sum_{n=1}^{M}\sin (\Psi_{n,k})\cos \left[\omega_d t \cos \left(\dfrac{2\pi n - \pi + \theta}{4M}\right) + \phi_k \right] \right\} \right)
$$
First we need to import the libraries and generate uniform distributions for $\theta$, $\phi$ and $\Psi_n$ over $[-\pi,\pi)$.
```python
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd

fig, ax = plt.subplots(figsize=(25, 15))

M = 8
const = np.sqrt(2/M)
ts = np.arange(0, 100, step=0.025)
Nstats = [10, 50, 100]
colors = ['r', 'g', 'b']
trials = {}

for i, Nstat in enumerate(Nstats):
    for N in range(Nstat):
        # Draw new independent uniform phases for each realization of X_c(t)
        Xc_t = []
        theta_multi = np.random.uniform(-np.pi, np.pi)
        phi_multi = np.random.uniform(-np.pi, np.pi)
        psi_multi = np.random.uniform(-np.pi, np.pi, size=M)

        def return_an(n):
            return (2*np.pi*n - np.pi + theta_multi) / (4*M)

        for t in ts:
            # X_c(t) = sqrt(2/M) * sum_n cos(psi_n) * cos(w_d t cos(a_n) + phi);
            # the phase phi is added outside cos(a_n), matching the formula above
            Xc = np.sum(np.array([np.cos(psi_multi[m-1]) * np.cos(2*np.pi*t*np.cos(return_an(m)) + phi_multi)
                                  for m in range(1, M+1)]))
            Xc_t.append(const*Xc)

        trials[N] = np.array(Xc_t)

    # Average the realizations and plot the normalized autocorrelation of X_c(t)
    df = pd.DataFrame(trials)
    averages = df.mean(axis=1)
    c = np.correlate(averages.values, averages.values, mode='same')
    sns.lineplot(ts, c / c.max(), color=colors[i], label='Nstat ' + str(Nstat), ax=ax)

ax.set_xlim(50, 65)
ax.legend()
ax.set_xticklabels(list(range(0, 15, 2)))
ax.set_xlabel('Normalized time')
ax.set_ylabel('Autocorrelation of X_c(t)')
```
```python
```
```python
```
```python
```
```python
E_0 = np.sqrt(2)
M = 8
C_n = 1/np.sqrt(4*M + 2)
ts = np.arange(0, 30, step=0.025)
phi_n = np.random.uniform(-np.pi, np.pi)  # kept from the original cell, unused below

def return_an(n):
    # Deterministic arrival angles; this is an assumed fix for the original broken line,
    # chosen to be consistent with the normalization C_n = 1/sqrt(4M+2)
    return 2*np.pi*n / (4*M + 2)

gc_t = []
for t in ts:
    gc = E_0 * np.sum(np.array([C_n * np.cos(2*np.pi*t*np.cos(return_an(m)))
                                for m in range(1, M+1)]))
    gc_t.append(gc)

fig, ax = plt.subplots(figsize=(25, 15))
c = np.correlate(np.array(gc_t), np.array(gc_t), mode='same')
sns.lineplot(ts, c / c.max(), ax=ax)
ax.set_xlim(15, 30)
```
```python
```
```python
```
| de05f8820f821f3042a22caea9437cf83133085f | 315,050 | ipynb | Jupyter Notebook | Channel-Simulation/Erceg-Model.ipynb | JoaoPedroPP/Channel-Simulation-and-OFDM-Study | 7b9bc7422c59012bd73ef6f33c4a18a24be4d309 | ["Apache-2.0"] | 1 | 2021-04-22T07:22:57.000Z | 2021-04-22T07:22:57.000Z | Channel-Simulation/Erceg-Model.ipynb | JoaoPedroPP/Channel-Simulation-and-OFDM-Study | 7b9bc7422c59012bd73ef6f33c4a18a24be4d309 | ["Apache-2.0"] | null | null | null | Channel-Simulation/Erceg-Model.ipynb | JoaoPedroPP/Channel-Simulation-and-OFDM-Study | 7b9bc7422c59012bd73ef6f33c4a18a24be4d309 | ["Apache-2.0"] | 2 | 2019-08-22T22:58:38.000Z | 2019-08-23T02:00:10.000Z | 628.842315 | 166,772 | 0.948281 | true | 3,258 | Qwen/Qwen-72B | 1. YES 2. YES | 0.754915 | 0.754915 | 0.569897 | __label__por_Latn | 0.95664 | 0.162391 |
## Histograms of Oriented Gradients (HOG)
As we saw with the ORB algorithm, we can use keypoints to do keypoint-based matching in order to detect objects in images. These types of algorithms work great when you want to detect objects that have a lot of consistent internal features that are not affected by the background. For example, these algorithms work well for facial detection because faces have a lot of consistent internal features that don’t get affected by the image background, such as the eyes, nose, and mouth. However, these types of algorithms don’t work so well when attempting more general object recognition, say, pedestrian detection in images. The reason is that people don’t have consistent internal features, like faces do, because the body shape and style of every person is different (see Fig. 1). This means that every person is going to have a different set of internal features, and so we need something that can more generally describe a person.
*Fig. 1. - Pedestrians.*
One option is to try to detect pedestrians by their contours instead. Detecting objects in images by their contours (boundaries) is very challenging because we have to deal with the difficulties brought about by the contrast between the background and the foreground. For example, suppose you wanted to detect a pedestrian in an image that is walking in front of a white building and she is wearing a white coat and black pants (see Fig. 2). We can see in Fig. 2, that since the background of the image is mostly white, the black pants are going to have a very high contrast, but the coat, since it is white as well, is going to have very low contrast. In this case, detecting the edges of pants is going to be easy but detecting the edges of the coat is going to be very difficult. This is where **HOG** comes in. HOG stands for **Histograms of Oriented Gradients** and it was first introduced by Navneet Dalal and Bill Triggs in 2005.
*Fig. 2. - High and Low Contrast.*
The HOG algorithm works by creating histograms of the distribution of gradient orientations in an image and then normalizing them in a very special way. This special normalization is what makes HOG so effective at detecting the edges of objects even in cases where the contrast is very low. These normalized histograms are put together into a feature vector, known as the HOG descriptor, that can be used to train a machine learning algorithm, such as a Support Vector Machine (SVM), to detect objects in images based on their boundaries (edges). Due to its great success and reliability, HOG has become one of the most widely used algorithms in computer vision for object detection.
In this notebook, you will learn:
* How the HOG algorithm works
* How to use OpenCV to create a HOG descriptor
* How to visualize the HOG descriptor.
# The HOG Algorithm
As its name suggests, the HOG algorithm, is based on creating histograms from the orientation of image gradients. The HOG algorithm is implemented in a series of steps:
1. Given the image of particular object, set a detection window (region of interest) that covers the entire object in the image (see Fig. 3).
2. Calculate the magnitude and direction of the gradient for each individual pixel in the detection window.
3. Divide the detection window into connected *cells* of pixels, with all cells being of the same size (see Fig. 3). The size of the cells is a free parameter and it is usually chosen so as to match the scale of the features that want to be detected. For example, in a 64 x 128 pixel detection window, square cells 6 to 8 pixels wide are suitable for detecting human limbs.
4. Create a Histogram for each cell, by first grouping the gradient directions of all pixels in each cell into a particular number of orientation (angular) bins; and then adding up the gradient magnitudes of the gradients in each angular bin (see Fig. 3). The number of bins in the histogram is a free parameter and it is usually set to 9 angular bins.
5. Group adjacent cells into *blocks* (see Fig. 3). The number of cells in each block is a free parameter and all blocks must be of the same size. The distance between each block (known as the stride) is a free parameter but it is usually set to half the block size, in which case you will get overlapping blocks (*see video below*). The HOG algorithm has been shown empirically to work better with overlapping blocks.
6. Use the cells contained within each block to normalize the cell histograms in that block (see Fig. 3). If you have overlapping blocks this means that most cells will be normalized with respect to different blocks (*see video below*). Therefore, the same cell may have several different normalizations.
7. Collect all the normalized histograms from all the blocks into a single feature vector called the HOG descriptor.
8. Use the resulting HOG descriptors from many images of the same type of object to train a machine learning algorithm, such as an SVM, to detect that type of object in images. For example, you could use the HOG descriptors from many images of pedestrians to train an SVM to detect pedestrians in images. The training is done with both positive and negative examples of the object you want to detect in images.
9. Once the SVM has been trained, a sliding window approach is used to try to detect and locate objects in images. Detecting an object in the image entails finding the part of the image that looks similar to the HOG pattern learned by the SVM (a minimal detection sketch using OpenCV's pre-trained people detector is shown after the figures below).
*Fig. 3. - HOG Diagram.*
*Vid. 1. - HOG Animation.*
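To illustrate steps 8 and 9 above without training an SVM from scratch, OpenCV ships a people detector that was pre-trained on HOG features. The snippet below is a minimal, illustrative sketch rather than part of this notebook; the image path is a placeholder.

```python
import cv2

# Load an image to search for pedestrians (placeholder path)
image = cv2.imread('./images/pedestrians.jpg')

# HOG descriptor using OpenCV's pre-trained pedestrian SVM
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# Sliding-window detection over an image pyramid
rects, weights = hog.detectMultiScale(image, winStride=(8, 8), padding=(8, 8), scale=1.05)

# Draw a box around each detection
for (x, y, w, h) in rects:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
```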
# Why The HOG Algorithm Works
As we learned above, HOG creates histograms by adding the magnitude of the gradients in particular orientations in localized portions of the image called *cells*. By doing this we guarantee that stronger gradients will contribute more to the magnitude of their respective angular bin, while the effects of weak and randomly oriented gradients resulting from noise are minimized. In this manner the histograms tell us the dominant gradient orientation of each cell.
### Dealing with contrast
Now, the magnitude of the dominant orientation can vary widely due to variations in local illumination and the contrast between the background and the foreground.
To account for the background-foreground contrast differences, the HOG algorithm tries to detect edges locally. In order to do this, it defines groups of cells, called **blocks**, and normalizes the histograms using this local group of cells. By normalizing locally, the HOG algorithm can detect the edges in each block very reliably; this is called **block normalization**.
In addition to using block normalization, the HOG algorithm also uses overlapping blocks to increase its performance. By using overlapping blocks, each cell contributes several independent components to the final HOG descriptor, where each component corresponds to a cell being normalized with respect to a different block. This may seem redundant but, it has been shown empirically that by normalizing each cell several times with respect to different local blocks, the performance of the HOG algorithm increases dramatically.
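As a small numerical illustration of block normalization (not from the original notebook; the histogram values below are made up), the histograms of the cells in a block are concatenated and L2-normalized together:

```python
import numpy as np

# Two 9-bin cell histograms belonging to the same block (made-up values)
cell_1 = np.array([0.0, 4.0, 1.0, 0.0, 0.0, 2.0, 0.0, 0.0, 1.0])
cell_2 = np.array([9.0, 1.0, 0.0, 0.0, 3.0, 0.0, 0.0, 0.0, 0.0])

# Concatenate the cell histograms into a single block vector
block = np.concatenate([cell_1, cell_2])

# L2-normalize the block; a small epsilon avoids division by zero
eps = 1e-6
block_normalized = block / np.sqrt(np.sum(block**2) + eps**2)

print(block_normalized)
```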
### Loading Images and Importing Resources
The first step in building our HOG descriptor is to load the required packages into Python and to load our image.
We start by using OpenCV to load an image of a triangle tile. Since the `cv2.imread()` function loads images as BGR, we will convert our image to RGB so we can display it with the correct colors. As usual, we will convert our BGR image to gray scale for analysis.
```python
import cv2
import numpy as np
import matplotlib.pyplot as plt
# Set the default figure size
plt.rcParams['figure.figsize'] = [17.0, 7.0]
# Load the image
image = cv2.imread('./images/triangle_tile.jpeg')
# Convert the original image to RGB
original_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# Convert the original image to gray scale
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Print the shape of the original and gray scale images
print('The original image has shape: ', original_image.shape)
print('The gray scale image has shape: ', gray_image.shape)
# Display the images
plt.subplot(121)
plt.imshow(original_image)
plt.title('Original Image')
plt.subplot(122)
plt.imshow(gray_image, cmap='gray')
plt.title('Gray Scale Image')
plt.show()
```
The original image has shape: (250, 250, 3)
The gray scale image has shape: (250, 250)
# Creating The HOG Descriptor
We will be using OpenCV’s `HOGDescriptor` class to create the HOG descriptor. The parameters of the HOG descriptor are set up using the `HOGDescriptor()` function. The parameters of the `HOGDescriptor()` function and their default values are given below:
`cv2.HOGDescriptor(win_size = (64, 128),
block_size = (16, 16),
block_stride = (8, 8),
cell_size = (8, 8),
nbins = 9,
win_sigma = DEFAULT_WIN_SIGMA,
threshold_L2hys = 0.2,
gamma_correction = true,
nlevels = DEFAULT_NLEVELS)`
Parameters:
* **win_size** – *Size*
Size of detection window in pixels (*width, height*). Defines the region of interest. Must be an integer multiple of cell size.
* **block_size** – *Size*
Block size in pixels (*width, height*). Defines how many cells are in each block. Must be an integer multiple of cell size and it must be smaller than the detection window. The smaller the block the finer detail you will get.
* **block_stride** – *Size*
Block stride in pixels (*horizontal, vertical*). It must be an integer multiple of cell size. The `block_stride` defines the distance between adjacent blocks, for example, 8 pixels horizontally and 8 pixels vertically. A longer `block_stride` makes the algorithm run faster (because fewer blocks are evaluated) but the algorithm may not perform as well.
* **cell_size** – *Size*
Cell size in pixels (*width, height*). Determines the size of your cell. The smaller the cell the finer detail you will get.
* **nbins** – *int*
Number of bins for the histograms. Determines the number of angular bins used to make the histograms. With more bins you capture more gradient directions. HOG uses unsigned gradients, so the angular bins will have values between 0 and 180 degrees.
* **win_sigma** – *double*
Gaussian smoothing window parameter. The performance of the HOG algorithm can be improved by smoothing the pixels near the edges of the blocks by applying a Gaussian spatial window to each pixel before computing the histograms.
* **threshold_L2hys** – *double*
L2-Hys (Lowe-style clipped L2 norm) normalization method shrinkage. The L2-Hys method is used to normalize the blocks and it consists of an L2-norm followed by clipping and a renormalization. The clipping limits the maximum value of the descriptor vector for each block to have the value of the given threshold (0.2 by default). After the clipping the descriptor vector is renormalized as described in *IJCV*, 60(2):91-110, 2004.
* **gamma_correction** – *bool*
Flag to specify whether the gamma correction preprocessing is required or not. Performing gamma correction slightly increases the performance of the HOG algorithm.
* **nlevels** – *int*
Maximum number of detection window increases.
As we can see, the `cv2.HOGDescriptor()` function supports a wide range of parameters. The first few arguments (`block_size, block_stride, cell_size`, and `nbins`) are probably the ones you are most likely to change. The other parameters can be safely left at their default values and you will get good results.
In the code below, we will use the `cv2.HOGDescriptor()` function to set the cell size, block size, block stride, and the number of bins for the histograms of the HOG descriptor. We will then use the `.compute(image)` method to compute the HOG descriptor (feature vector) for the given `image`.
```python
# Specify the parameters for our HOG descriptor
# Cell Size in pixels (width, height). Must be smaller than the size of the detection window
# and must be chosen so that the resulting Block Size is smaller than the detection window.
cell_size = (6, 6)
# Number of cells per block in each direction (x, y). Must be chosen so that the resulting
# Block Size is smaller than the detection window
num_cells_per_block = (2, 2)
# Block Size in pixels (width, height). Must be an integer multiple of Cell Size.
# The Block Size must be smaller than the detection window
block_size = (num_cells_per_block[0] * cell_size[0],
num_cells_per_block[1] * cell_size[1])
# Calculate the number of cells that fit in our image in the x and y directions
x_cells = gray_image.shape[1] // cell_size[0]
y_cells = gray_image.shape[0] // cell_size[1]
# Horizontal distance between blocks in units of Cell Size. Must be an integer and it must
# be set such that (x_cells - num_cells_per_block[0]) / h_stride = integer.
h_stride = 1
# Vertical distance between blocks in units of Cell Size. Must be an integer and it must
# be set such that (y_cells - num_cells_per_block[1]) / v_stride = integer.
v_stride = 1
# Block Stride in pixels (horizantal, vertical). Must be an integer multiple of Cell Size
block_stride = (cell_size[0] * h_stride, cell_size[1] * v_stride)
# Number of gradient orientation bins
num_bins = 9
# Specify the size of the detection window (Region of Interest) in pixels (width, height).
# It must be an integer multiple of Cell Size and it must cover the entire image. Because
# the detection window must be an integer multiple of cell size, depending on the size of
# your cells, the resulting detection window might be slightly smaller than the image.
# This is perfectly ok.
win_size = (x_cells * cell_size[0] , y_cells * cell_size[1])
# Print the shape of the gray scale image for reference
print('\nThe gray scale image has shape: ', gray_image.shape)
print()
# Print the parameters of our HOG descriptor
print('HOG Descriptor Parameters:\n')
print('Window Size:', win_size)
print('Cell Size:', cell_size)
print('Block Size:', block_size)
print('Block Stride:', block_stride)
print('Number of Bins:', num_bins)
print()
# Set the parameters of the HOG descriptor using the variables defined above
hog = cv2.HOGDescriptor(win_size, block_size, block_stride, cell_size, num_bins)
# Compute the HOG Descriptor for the gray scale image
hog_descriptor = hog.compute(gray_image)
```
The gray scale image has shape: (250, 250)
HOG Descriptor Parameters:
Window Size: (246, 246)
Cell Size: (6, 6)
Block Size: (12, 12)
Block Stride: (6, 6)
Number of Bins: 9
# Number of Elements In The HOG Descriptor
The resulting HOG Descriptor (feature vector), contains the normalized histograms from all cells from all blocks in the detection window concatenated in one long vector. Therefore, the size of the HOG feature vector will be given by the total number of blocks in the detection window, multiplied by the number of cells per block, times the number of orientation bins:
<span class="mathquill">
\begin{equation}
\mbox{total_elements} = (\mbox{total_number_of_blocks})\mbox{ } \times \mbox{ } (\mbox{number_cells_per_block})\mbox{ } \times \mbox{ } (\mbox{number_of_bins})
\end{equation}
</span>
If we don’t have overlapping blocks (*i.e.* the `block_stride` equals the `block_size`), the total number of blocks can be easily calculated by dividing the size of the detection window by the block size. However, in the general case we have to take into account the fact that we have overlapping blocks. To find the total number of blocks in the general case (*i.e.* for any `block_stride` and `block_size`), we can use the formula given below:
<span class="mathquill">
\begin{equation}
\mbox{Total}_i = \left( \frac{\mbox{block_size}_i}{\mbox{block_stride}_i} \right)\left( \frac{\mbox{window_size}_i}{\mbox{block_size}_i} \right) - \left [\left( \frac{\mbox{block_size}_i}{\mbox{block_stride}_i} \right) - 1 \right]; \mbox{ for } i = x,y
\end{equation}
</span>
Where <span class="mathquill">Total$_x$</span> is the total number of blocks along the width of the detection window, and <span class="mathquill">Total$_y$</span> is the total number of blocks along the height of the detection window. This formula for <span class="mathquill">Total$_x$</span> and <span class="mathquill">Total$_y$</span> takes into account the extra blocks that result from overlapping. After calculating <span class="mathquill">Total$_x$</span> and <span class="mathquill">Total$_y$</span>, we can get the total number of blocks in the detection window by multiplying <span class="mathquill">Total$_x$ $\times$ Total$_y$</span>. The above formula can be simplified considerably because the `block_size`, `block_stride`, and `window_size` are all defined in terms of the `cell_size`. By making all the appropriate substitutions and cancellations the above formula reduces to:
<span class="mathquill">
\begin{equation}
\mbox{Total}_i = \left(\frac{\mbox{cells}_i - \mbox{num_cells_per_block}_i}{N_i}\right) + 1\mbox{ }; \mbox{ for } i = x,y
\end{equation}
</span>
Where <span class="mathquill">cells$_x$</span> is the total number of cells along the width of the detection window, and <span class="mathquill">cells$_y$</span>, is the total number of cells along the height of the detection window. And <span class="mathquill">$N_x$</span> is the horizontal block stride in units of `cell_size` and <span class="mathquill">$N_y$</span> is the vertical block stride in units of `cell_size`.
Let's calculate what the number of elements for the HOG feature vector should be and check that it matches the shape of the HOG Descriptor calculated above.
```python
# Calculate the total number of blocks along the width of the detection window
tot_bx = np.uint32(((x_cells - num_cells_per_block[0]) / h_stride) + 1)
# Calculate the total number of blocks along the height of the detection window
tot_by = np.uint32(((y_cells - num_cells_per_block[1]) / v_stride) + 1)
# Calculate the total number of elements in the feature vector
tot_els = (tot_bx) * (tot_by) * num_cells_per_block[0] * num_cells_per_block[1] * num_bins
# Print the total number of elements the HOG feature vector should have
print('\nThe total number of elements in the HOG Feature Vector should be: ',
tot_bx, 'x',
tot_by, 'x',
num_cells_per_block[0], 'x',
num_cells_per_block[1], 'x',
num_bins, '=',
tot_els)
# Print the shape of the HOG Descriptor to see that it matches the above
print('\nThe HOG Descriptor has shape:', hog_descriptor.shape)
print()
```
The total number of elements in the HOG Feature Vector should be: 40 x 40 x 2 x 2 x 9 = 57600
The HOG Descriptor has shape: (57600, 1)
# Visualizing The HOG Descriptor
We can visualize the HOG Descriptor by plotting the histogram associated with each cell as a collection of vectors. To do this, we will plot each bin in the histogram as a single vector whose magnitude is given by the height of the bin and its orientation is given by the angular bin that its associated with. Since any given cell might have multiple histograms associated with it, due to the overlapping blocks, we will choose to average all the histograms for each cell to produce a single histogram for each cell.
OpenCV has no easy way to visualize the HOG Descriptor, so we have to do some manipulation first in order to visualize it. We will start by reshaping the HOG Descriptor in order to make our calculations easier. We will then compute the average histogram of each cell and finally we will convert the histogram bins into vectors. Once we have the vectors, we plot the corresponding vectors for each cell in an image.
The code below produces an interactive plot so that you can interact with the figure. The figure contains:
* the grayscale image,
* the HOG Descriptor (feature vector),
* a zoomed-in portion of the HOG Descriptor, and
* the histogram of the selected cell.
**You can click anywhere on the gray scale image or the HOG Descriptor image to select a particular cell**. Once you click on either image a *magenta* rectangle will appear showing the cell you selected. The Zoom Window will show you a zoomed in version of the HOG descriptor around the selected cell; and the histogram plot will show you the corresponding histogram for the selected cell. The interactive window also has buttons at the bottom that allow for other functionality, such as panning, and giving you the option to save the figure if desired. The home button returns the figure to its default value.
**NOTE**: If you are running this notebook in the Udacity workspace, there is around a 2 second lag in the interactive plot. This means that if you click in the image to zoom in, it will take about 2 seconds for the plot to refresh.
```python
%matplotlib notebook
import copy
import matplotlib.patches as patches
# Set the default figure size
plt.rcParams['figure.figsize'] = [9.8, 9]
# Reshape the feature vector to [blocks_y, blocks_x, num_cells_per_block_x, num_cells_per_block_y, num_bins].
# The blocks_x and blocks_y will be transposed so that the first index (blocks_y) referes to the row number
# and the second index to the column number. This will be useful later when we plot the feature vector, so
# that the feature vector indexing matches the image indexing.
hog_descriptor_reshaped = hog_descriptor.reshape(tot_bx,
tot_by,
num_cells_per_block[0],
num_cells_per_block[1],
num_bins).transpose((1, 0, 2, 3, 4))
# Print the shape of the feature vector for reference
print('The feature vector has shape:', hog_descriptor.shape)
# Print the reshaped feature vector
print('The reshaped feature vector has shape:', hog_descriptor_reshaped.shape)
# Create an array that will hold the average gradients for each cell
ave_grad = np.zeros((y_cells, x_cells, num_bins))
# Print the shape of the ave_grad array for reference
print('The average gradient array has shape: ', ave_grad.shape)
# Create an array that will count the number of histograms per cell
hist_counter = np.zeros((y_cells, x_cells, 1))
# Add up all the histograms for each cell and count the number of histograms per cell
for i in range (num_cells_per_block[0]):
for j in range(num_cells_per_block[1]):
ave_grad[i:tot_by + i,
j:tot_bx + j] += hog_descriptor_reshaped[:, :, i, j, :]
hist_counter[i:tot_by + i,
j:tot_bx + j] += 1
# Calculate the average gradient for each cell
ave_grad /= hist_counter
# Calculate the total number of vectors we have in all the cells.
len_vecs = ave_grad.shape[0] * ave_grad.shape[1] * ave_grad.shape[2]
# Create an array that has num_bins equally spaced between 0 and 180 degress in radians.
deg = np.linspace(0, np.pi, num_bins, endpoint = False)
# Each cell will have a histogram with num_bins. For each cell, plot each bin as a vector (with its magnitude
# equal to the height of the bin in the histogram, and its angle corresponding to the bin in the histogram).
# To do this, create rank 1 arrays that will hold the (x,y)-coordinate of all the vectors in all the cells in the
# image. Also, create the rank 1 arrays that will hold all the (U,V)-components of all the vectors in all the
# cells in the image. Create the arrays that will hold all the vector positons and components.
U = np.zeros((len_vecs))
V = np.zeros((len_vecs))
X = np.zeros((len_vecs))
Y = np.zeros((len_vecs))
# Set the counter to zero
counter = 0
# Use the cosine and sine functions to calculate the vector components (U,V) from their maginitudes. Remember the
# cosine and sine functions take angles in radians. Calculate the vector positions and magnitudes from the
# average gradient array
for i in range(ave_grad.shape[0]):
for j in range(ave_grad.shape[1]):
for k in range(ave_grad.shape[2]):
U[counter] = ave_grad[i,j,k] * np.cos(deg[k])
V[counter] = ave_grad[i,j,k] * np.sin(deg[k])
X[counter] = (cell_size[0] / 2) + (cell_size[0] * i)
Y[counter] = (cell_size[1] / 2) + (cell_size[1] * j)
counter = counter + 1
# Create the bins in degress to plot our histogram.
angle_axis = np.linspace(0, 180, num_bins, endpoint = False)
angle_axis += ((angle_axis[1] - angle_axis[0]) / 2)
# Create a figure with 4 subplots arranged in 2 x 2
fig, ((a,b),(c,d)) = plt.subplots(2,2)
# Set the title of each subplot
a.set(title = 'Gray Scale Image\n(Click to Zoom)')
b.set(title = 'HOG Descriptor\n(Click to Zoom)')
c.set(title = 'Zoom Window', xlim = (0, 18), ylim = (0, 18), autoscale_on = False)
d.set(title = 'Histogram of Gradients')
# Plot the gray scale image
a.imshow(gray_image, cmap = 'gray')
a.set_aspect(aspect = 1)
# Plot the feature vector (HOG Descriptor)
b.quiver(Y, X, U, V, color = 'white', headwidth = 0, headlength = 0, scale_units = 'inches', scale = 5)
b.invert_yaxis()
b.set_aspect(aspect = 1)
b.set_facecolor('black')
# Define function for interactive zoom
def onpress(event):
#Unless the left mouse button is pressed do nothing
if event.button != 1:
return
# Only accept clicks for subplots a and b
if event.inaxes in [a, b]:
# Get mouse click coordinates
x, y = event.xdata, event.ydata
# Select the cell closest to the mouse click coordinates
cell_num_x = np.uint32(x / cell_size[0])
cell_num_y = np.uint32(y / cell_size[1])
# Set the edge coordinates of the rectangle patch
edgex = x - (x % cell_size[0])
edgey = y - (y % cell_size[1])
# Create a rectangle patch that matches the the cell selected above
rect = patches.Rectangle((edgex, edgey),
cell_size[0], cell_size[1],
linewidth = 1,
edgecolor = 'magenta',
facecolor='none')
# A single patch can only be used in a single plot. Create copies
# of the patch to use in the other subplots
rect2 = copy.copy(rect)
rect3 = copy.copy(rect)
# Update all subplots
a.clear()
a.set(title = 'Gray Scale Image\n(Click to Zoom)')
a.imshow(gray_image, cmap = 'gray')
a.set_aspect(aspect = 1)
a.add_patch(rect)
b.clear()
b.set(title = 'HOG Descriptor\n(Click to Zoom)')
b.quiver(Y, X, U, V, color = 'white', headwidth = 0, headlength = 0, scale_units = 'inches', scale = 5)
b.invert_yaxis()
b.set_aspect(aspect = 1)
b.set_facecolor('black')
b.add_patch(rect2)
c.clear()
c.set(title = 'Zoom Window')
c.quiver(Y, X, U, V, color = 'white', headwidth = 0, headlength = 0, scale_units = 'inches', scale = 1)
c.set_xlim(edgex - cell_size[0], edgex + (2 * cell_size[0]))
c.set_ylim(edgey - cell_size[1], edgey + (2 * cell_size[1]))
c.invert_yaxis()
c.set_aspect(aspect = 1)
c.set_facecolor('black')
c.add_patch(rect3)
d.clear()
d.set(title = 'Histogram of Gradients')
d.grid()
d.set_xlim(0, 180)
d.set_xticks(angle_axis)
d.set_xlabel('Angle')
d.bar(angle_axis,
ave_grad[cell_num_y, cell_num_x, :],
180 // num_bins,
align = 'center',
alpha = 0.5,
linewidth = 1.2,
edgecolor = 'k')
fig.canvas.draw()
# Create a connection between the figure and the mouse click
fig.canvas.mpl_connect('button_press_event', onpress)
plt.show()
```
The feature vector has shape: (57600, 1)
The reshaped feature vector has shape: (40, 40, 2, 2, 9)
The average gradient array has shape: (41, 41, 9)
<IPython.core.display.Javascript object>
# Understanding The Histograms
Let's take a look at a couple of snapshots of the above figure to see if the histograms for the selected cell make sense. Let's start looking at a cell that is inside a triangle and not near an edge:
*Fig. 4. - Histograms Inside a Triangle.*
In this case, since the triangle is nearly all of the same color, there shouldn't be any dominant gradient in the selected cell. As we can clearly see in the Zoom Window and the histogram, this is indeed the case. We have many gradients but none of them clearly dominates over the others.
Now let’s take a look at a cell that is near a horizontal edge:
*Fig. 5. - Histograms Near a Horizontal Edge.*
Remember that edges are areas of an image where the intensity changes abruptly. In these cases, we will have a high intensity gradient in some particular direction. This is exactly what we see in the corresponding histogram and Zoom Window for the selected cell. In the Zoom Window, we can see that the dominant gradient is pointing up, almost at 90 degrees, since that’s the direction in which there is a sharp change in intensity. Therefore, we should expect to see the 90-degree bin in the histogram to dominate strongly over the others. This is in fact what we see.
Now let’s take a look at a cell that is near a vertical edge:
*Fig. 6. - Histograms Near a Vertical Edge.*
In this case we expect the dominant gradient in the cell to be horizontal, close to 180 degrees, since that’s the direction in which there is a sharp change in intensity. Therefore, we should expect to see the 170-degree bin in the histogram dominate strongly over the others. This is what we see in the histogram, but we also see that there is another dominant gradient in the cell, namely the one in the 10-degree bin. The reason for this is that the HOG algorithm uses unsigned gradients, which means 0 degrees and 180 degrees are considered the same. Therefore, when the histograms are being created, angles between 160 and 180 degrees contribute proportionally to both the 10-degree bin and the 170-degree bin. This results in there being two dominant gradients in the cell near the vertical edge instead of just one.
To conclude let’s take a look at a cell that is near a diagonal edge.
*Fig. 7. - Histograms Near a Diagonal Edge.*
To understand what we are seeing, let’s first remember that gradients have an *x*-component and a *y*-component, just like vectors. Therefore, the resulting orientation of a gradient is going to be given by the vector sum of its components. For this reason, on vertical edges the gradients are horizontal, because they only have an x-component, as we saw in Figure 6. While on horizontal edges the gradients are vertical, because they only have a y-component, as we saw in Figure 5. Consequently, on diagonal edges, the gradients are also going to be diagonal because both the *x* and *y* components are non-zero. Since the diagonal edges in the image are close to 45 degrees, we should expect to see a dominant gradient orientation in the 50-degree bin. This is in fact what we see in the histogram but, just like in Figure 6, we see there are two dominant gradients instead of just one. The reason for this is that when the histograms are being created, angles that are near the boundaries of bins contribute proportionally to the adjacent bins. For example, a gradient with an angle of 40 degrees is right in the middle of the 30-degree and 50-degree bins. Therefore, the magnitude of the gradient is split evenly into the 30-degree and 50-degree bins. This results in there being two dominant gradients in the cell near the diagonal edge instead of just one.
Now that you know how HOG is implemented, in the workspace you will find a notebook named *Examples*. In there, you will be able to set your own parameters for the HOG descriptor for various images. Have fun!
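As a simple starting point for that exploration (an illustrative sketch that reuses `cv2` and `gray_image` from above, not code from the *Examples* notebook), you could compare the descriptor length for a few cell sizes:

```python
# Compare HOG descriptor lengths for a few cell sizes (keeping 2 x 2 cells per block)
for cell in [(4, 4), (6, 6), (8, 8)]:
    block = (2*cell[0], 2*cell[1])
    stride = cell
    win = ((gray_image.shape[1] // cell[0]) * cell[0],
           (gray_image.shape[0] // cell[1]) * cell[1])
    hog = cv2.HOGDescriptor(win, block, stride, cell, 9)
    print('Cell size', cell, '-> descriptor length', hog.compute(gray_image).shape[0])
```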
| e6329139415ef5c9e43860dc689dcfc555bbb679 | 360,732 | ipynb | Jupyter Notebook | 1_4_Feature_Vectors/3_1. HOG.ipynb | matijazigic/CVND_Exercises | 52b9cdd76f64d5e5cb3454a657c8b64df06d0490 | ["MIT"] | null | null | null | 1_4_Feature_Vectors/3_1. HOG.ipynb | matijazigic/CVND_Exercises | 52b9cdd76f64d5e5cb3454a657c8b64df06d0490 | ["MIT"] | null | null | null | 1_4_Feature_Vectors/3_1. HOG.ipynb | matijazigic/CVND_Exercises | 52b9cdd76f64d5e5cb3454a657c8b64df06d0490 | ["MIT"] | null | null | null | 244.729986 | 285,087 | 0.8812 | true | 7,811 | Qwen/Qwen-72B | 1. YES 2. YES | 0.754915 | 0.79053 | 0.596783 | __label__eng_Latn | 0.997525 | 0.224857 |