Dataset columns: markdown (string, 0–37k chars), code (string, 1–33.3k chars), path (string, 8–215 chars), repo_name (string, 6–77 chars), license (15 classes).
Download the Fashion MNIST dataset
fashion_mnist = tf.keras.datasets.fashion_mnist

(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

# Adding a dimension to the array -> new shape == (28, 28, 1)
# We are doing this because the first layer in our model is a convolutional
# layer and it requires a 4D input (batch_size, height, width, channels).
# batch_size dimension will be added later on.
train_images = train_images[..., None]
test_images = test_images[..., None]

# Getting the images in [0, 1] range.
train_images = train_images / np.float32(255)
test_images = test_images / np.float32(255)
site/zh-cn/tutorials/distribute/custom_training.ipynb
tensorflow/docs-l10n
apache-2.0
Create a strategy to distribute the variables and the graph. How does the tf.distribute.MirroredStrategy strategy work? All of the variables and the model graph are replicated on the replicas. Input is evenly distributed across the replicas. Each replica calculates the loss and gradients for the input it received. The gradients are synced across all the replicas by summing them. After the sync, the same update is made to the copies of the variables on each replica. Note: You can put all of the code below inside a single cell. We split it into several code cells for illustration purposes.
# If the list of devices is not specified in the
# `tf.distribute.MirroredStrategy` constructor, it will be auto-detected.
strategy = tf.distribute.MirroredStrategy()

print ('Number of devices: {}'.format(strategy.num_replicas_in_sync))
site/zh-cn/tutorials/distribute/custom_training.ipynb
tensorflow/docs-l10n
apache-2.0
Set up the input pipeline. Export the graph and the variables to the platform-agnostic SavedModel format. After your model is saved, you can load it with or without the scope.
BUFFER_SIZE = len(train_images)

BATCH_SIZE_PER_REPLICA = 64
GLOBAL_BATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync

EPOCHS = 10
site/zh-cn/tutorials/distribute/custom_training.ipynb
tensorflow/docs-l10n
apache-2.0
Create the datasets and distribute them:
train_dataset = tf.data.Dataset.from_tensor_slices((train_images, train_labels)).shuffle(BUFFER_SIZE).batch(GLOBAL_BATCH_SIZE)
test_dataset = tf.data.Dataset.from_tensor_slices((test_images, test_labels)).batch(GLOBAL_BATCH_SIZE)

train_dist_dataset = strategy.experimental_distribute_dataset(train_dataset)
test_dist_dataset = strategy.experimental_distribute_dataset(test_dataset)
site/zh-cn/tutorials/distribute/custom_training.ipynb
tensorflow/docs-l10n
apache-2.0
Create the model. Use tf.keras.Sequential to create a model. You can also use the Model Subclassing API to do this.
def create_model():
  model = tf.keras.Sequential([
      tf.keras.layers.Conv2D(32, 3, activation='relu'),
      tf.keras.layers.MaxPooling2D(),
      tf.keras.layers.Conv2D(64, 3, activation='relu'),
      tf.keras.layers.MaxPooling2D(),
      tf.keras.layers.Flatten(),
      tf.keras.layers.Dense(64, activation='relu'),
      tf.keras.layers.Dense(10)
    ])

  return model

# Create a checkpoint directory to store the checkpoints.
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
site/zh-cn/tutorials/distribute/custom_training.ipynb
tensorflow/docs-l10n
apache-2.0
Define the loss function

Normally, on a single machine with 1 GPU/CPU, the loss is divided by the number of examples in the batch of input. So, how should the loss be calculated when using a tf.distribute.Strategy?

For example, say you have 4 GPUs and a batch size of 64. One batch of input is distributed across the replicas (4 GPUs), with each replica getting an input of size 16.

The model on each replica does a forward pass with its respective input and calculates the loss. Now, instead of dividing the loss by the number of examples in its respective input (BATCH_SIZE_PER_REPLICA = 16), the loss should be divided by the GLOBAL_BATCH_SIZE (64).

Why do this? Because after the gradients are calculated on each replica, they are synced across the replicas by summing them.

How to do this in TensorFlow? If you're writing a custom training loop, as in this tutorial, you should sum the per-example losses and divide the sum by the GLOBAL_BATCH_SIZE: scale_loss = tf.reduce_sum(loss) * (1. / GLOBAL_BATCH_SIZE), or you can use tf.nn.compute_average_loss, which takes the per-example loss, optional sample weights, and GLOBAL_BATCH_SIZE as arguments and returns the scaled loss.

If you are using regularization losses in your model, you need to scale the loss value by the number of replicas. You can do this with the tf.nn.scale_regularization_loss function.

Using tf.reduce_mean is not recommended. Doing so divides the loss by the actual per-replica batch size, which may vary from step to step.

This reduction and scaling is done automatically in Keras model.compile and model.fit.

If you use tf.keras.losses classes (as in the example below), the loss reduction needs to be explicitly specified to be either NONE or SUM. AUTO and SUM_OVER_BATCH_SIZE are disallowed when used with tf.distribute.Strategy. AUTO is disallowed because the user should explicitly think about what reduction they want, to make sure it is correct in the distributed case. SUM_OVER_BATCH_SIZE is disallowed because currently it would only divide by the per-replica batch size and leave dividing by the number of replicas to the user, which might be easy to miss. So instead, the user is asked to do the reduction explicitly.

If labels is multi-dimensional, average the per_example_loss across the number of elements in each sample. For example, if the shape of predictions is (batch_size, H, W, n_classes) and labels is (batch_size, H, W), you will need to update per_example_loss like: per_example_loss /= tf.cast(tf.reduce_prod(tf.shape(labels)[1:]), tf.float32)

Caution: Verify the shape of your loss. Loss functions in tf.losses/tf.keras.losses typically return the average over the last dimension of the input. The loss classes wrap these functions. Passing reduction=Reduction.NONE when creating an instance of a loss class means "no additional reduction". For categorical losses with an example input shape of [batch, W, H, n_classes], the n_classes dimension is reduced. For pointwise losses like losses.mean_squared_error or losses.binary_crossentropy, include a dummy axis so that [batch, W, H, 1] is reduced to [batch, W, H]. Without the dummy axis, [batch, W, H] will be incorrectly reduced to [batch, W].
with strategy.scope():
  # Set reduction to `none` so we can do the reduction afterwards and divide by
  # global batch size.
  loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
      from_logits=True,
      reduction=tf.keras.losses.Reduction.NONE)
  def compute_loss(labels, predictions):
    per_example_loss = loss_object(labels, predictions)
    return tf.nn.compute_average_loss(per_example_loss, global_batch_size=GLOBAL_BATCH_SIZE)
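The note above about multi-dimensional labels is not exercised by compute_loss, which assumes scalar labels. Purely as an illustration of that remark (this helper is hypothetical and not part of the original notebook), a variant could first average the per-example loss over the elements of each sample and then apply the same global-batch-size scaling:

# Hedged sketch: a compute_loss variant for multi-dimensional labels, e.g.
# predictions of shape (batch_size, H, W, n_classes) and labels of shape
# (batch_size, H, W). Not part of the original notebook; it simply follows the
# formulas quoted in the text above.
def compute_loss_multidim(labels, predictions):
  per_example_loss = loss_object(labels, predictions)  # shape (batch_size, H, W)
  # Average over the number of elements in each sample, as recommended above.
  per_example_loss /= tf.cast(tf.reduce_prod(tf.shape(labels)[1:]), tf.float32)
  # Sum and divide by the global batch size, i.e. the scale_loss rule above.
  return tf.reduce_sum(per_example_loss) * (1. / GLOBAL_BATCH_SIZE)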
site/zh-cn/tutorials/distribute/custom_training.ipynb
tensorflow/docs-l10n
apache-2.0
Define metrics to track loss and accuracy. These metrics track the test loss and the training and test accuracy. You can use .result() to get the accumulated statistics at any time.
with strategy.scope():
  test_loss = tf.keras.metrics.Mean(name='test_loss')

  train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
      name='train_accuracy')
  test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
      name='test_accuracy')
site/zh-cn/tutorials/distribute/custom_training.ipynb
tensorflow/docs-l10n
apache-2.0
Training loop
# model, optimizer, and checkpoint must be created under `strategy.scope`.
with strategy.scope():
  model = create_model()

  optimizer = tf.keras.optimizers.Adam()

  checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model)

def train_step(inputs):
  images, labels = inputs

  with tf.GradientTape() as tape:
    predictions = model(images, training=True)
    loss = compute_loss(labels, predictions)

  gradients = tape.gradient(loss, model.trainable_variables)
  optimizer.apply_gradients(zip(gradients, model.trainable_variables))

  train_accuracy.update_state(labels, predictions)
  return loss

def test_step(inputs):
  images, labels = inputs

  predictions = model(images, training=False)
  t_loss = loss_object(labels, predictions)

  test_loss.update_state(t_loss)
  test_accuracy.update_state(labels, predictions)

# `run` replicates the provided computation and runs it
# with the distributed input.
@tf.function
def distributed_train_step(dataset_inputs):
  per_replica_losses = strategy.run(train_step, args=(dataset_inputs,))
  return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses,
                         axis=None)

@tf.function
def distributed_test_step(dataset_inputs):
  return strategy.run(test_step, args=(dataset_inputs,))

for epoch in range(EPOCHS):
  # TRAIN LOOP
  total_loss = 0.0
  num_batches = 0
  for x in train_dist_dataset:
    total_loss += distributed_train_step(x)
    num_batches += 1
  train_loss = total_loss / num_batches

  # TEST LOOP
  for x in test_dist_dataset:
    distributed_test_step(x)

  if epoch % 2 == 0:
    checkpoint.save(checkpoint_prefix)

  template = ("Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, "
              "Test Accuracy: {}")
  print (template.format(epoch+1, train_loss,
                         train_accuracy.result()*100, test_loss.result(),
                         test_accuracy.result()*100))

  test_loss.reset_states()
  train_accuracy.reset_states()
  test_accuracy.reset_states()
site/zh-cn/tutorials/distribute/custom_training.ipynb
tensorflow/docs-l10n
apache-2.0
Things to note in the example above: We iterate over train_dist_dataset and test_dist_dataset using a for x in ... construct. The scaled loss is the return value of distributed_train_step. This value is aggregated across replicas using tf.distribute.Strategy.reduce, and then across batches by summing the return values of the tf.distribute.Strategy.reduce calls. tf.keras.Metrics should be updated inside train_step and test_step, which are executed by tf.distribute.Strategy.experimental_run_v2. Restore the latest checkpoint and test. A model checkpointed with a tf.distribute.Strategy can be restored with or without a strategy.
eval_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
      name='eval_accuracy')

new_model = create_model()
new_optimizer = tf.keras.optimizers.Adam()

test_dataset = tf.data.Dataset.from_tensor_slices((test_images, test_labels)).batch(GLOBAL_BATCH_SIZE)

@tf.function
def eval_step(images, labels):
  predictions = new_model(images, training=False)
  eval_accuracy(labels, predictions)

checkpoint = tf.train.Checkpoint(optimizer=new_optimizer, model=new_model)
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))

for images, labels in test_dataset:
  eval_step(images, labels)

print ('Accuracy after restoring the saved model without strategy: {}'.format(
    eval_accuracy.result()*100))
site/zh-cn/tutorials/distribute/custom_training.ipynb
tensorflow/docs-l10n
apache-2.0
Alternative ways of iterating over a dataset. Using iterators. If you want to iterate over a given number of steps rather than through the entire dataset, you can create an iterator with iter and explicitly call next on it. You can choose to iterate over the dataset both inside and outside of a tf.function. Here is a small snippet demonstrating iteration of the dataset outside the tf.function using an iterator.
for _ in range(EPOCHS):
  total_loss = 0.0
  num_batches = 0
  train_iter = iter(train_dist_dataset)

  for _ in range(10):
    total_loss += distributed_train_step(next(train_iter))
    num_batches += 1
  average_train_loss = total_loss / num_batches

  template = ("Epoch {}, Loss: {}, Accuracy: {}")
  print (template.format(epoch+1, average_train_loss, train_accuracy.result()*100))
  train_accuracy.reset_states()
site/zh-cn/tutorials/distribute/custom_training.ipynb
tensorflow/docs-l10n
apache-2.0
Iterating inside a tf.function. You can also iterate over the entire input train_dist_dataset inside a tf.function using a for x in ... construct, or by creating an iterator as above. The example below demonstrates wrapping one epoch of training in a tf.function and iterating over train_dist_dataset inside the function.
@tf.function
def distributed_train_epoch(dataset):
  total_loss = 0.0
  num_batches = 0
  for x in dataset:
    per_replica_losses = strategy.run(train_step, args=(x,))
    total_loss += strategy.reduce(
      tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)
    num_batches += 1
  return total_loss / tf.cast(num_batches, dtype=tf.float32)

for epoch in range(EPOCHS):
  train_loss = distributed_train_epoch(train_dist_dataset)

  template = ("Epoch {}, Loss: {}, Accuracy: {}")
  print (template.format(epoch+1, train_loss, train_accuracy.result()*100))

  train_accuracy.reset_states()
site/zh-cn/tutorials/distribute/custom_training.ipynb
tensorflow/docs-l10n
apache-2.0
plot_images() is used to plot several images in the same figure. It supports many configurations and has many options available to customize the resulting output. The function returns a list of matplotlib axes, which can be used to further customize the figure. Some examples are given below. Default usage A common usage for plot_images() is to view the different slices of a multidimensional image (a hyperimage):
import scipy.ndimage
image = hs.signals.Image(np.random.random((2, 3, 512, 512)))
for i in range(2):
    for j in range(3):
        image.data[i,j,:] = scipy.misc.ascent()*(i+0.5+j)
axes = image.axes_manager
axes[2].name = "x"
axes[3].name = "y"
axes[2].units = "nm"
axes[3].units = "nm"
image.metadata.General.title = 'multi-dimensional Lena'
hs.plot.plot_images(image, tight_layout=True)
hyperspy/tests/drawing/test_plot_image.ipynb
to266/hyperspy
gpl-3.0
Specified labels By default, plot_images() will attempt to auto-label the images based on the Signal titles. The labels (and title) can be customized with the label and suptitle arguments. In this example, the axes labels and ticks are also disabled with axes_decor:
import scipy.ndimage
image = hs.signals.Image(np.random.random((2, 3, 512, 512)))
for i in range(2):
    for j in range(3):
        image.data[i,j,:] = scipy.misc.ascent()*(i+0.5+j)
axes = image.axes_manager
axes[2].name = "x"
axes[3].name = "y"
axes[2].units = "nm"
axes[3].units = "nm"
image.metadata.General.title = 'multi-dimensional Lena'
hs.plot.plot_images(image, suptitle='Custom figure title',
                    label=['Image 1', 'Image 2', 'Image 3',
                           'Image 4', 'Image 5', 'Image 6'],
                    axes_decor=None, tight_layout=True)
hyperspy/tests/drawing/test_plot_image.ipynb
to266/hyperspy
gpl-3.0
List of images plot_images() can also be used to easily plot a list of Images, comparing different Signals, including RGB images. This example also demonstrates how to wrap labels using labelwrap (for preventing overlap) and using a single colorbar for all the Images, as opposed to multiple individual ones:
import scipy.ndimage

# load red channel of raccoon as an image
image0 = hs.signals.Image(scipy.misc.ascent()[:,:,0])
image0.metadata.General.title = 'Rocky Raccoon - R'
axes0 = image0.axes_manager
axes0[0].name = "x"
axes0[1].name = "y"
axes0[0].units = "mm"
axes0[1].units = "mm"

# load lena into 2x3 hyperimage
image1 = hs.signals.Image(np.random.random((2, 3, 512, 512)))
image1.metadata.General.title = 'multi-dimensional Lena'
for i in range(2):
    for j in range(3):
        image1.data[i,j,:] = scipy.misc.ascent()*(i+0.5+j)
axes1 = image1.axes_manager
axes1[2].name = "x"
axes1[3].name = "y"
axes1[2].units = "nm"
axes1[3].units = "nm"

# load green channel of raccoon as an image
image2 = hs.signals.Image(scipy.misc.ascent()[:,:,1])
image2.metadata.General.title = 'Rocky Raccoon - G'
axes2 = image2.axes_manager
axes2[0].name = "x"
axes2[1].name = "y"
axes2[0].units = "mm"
axes2[1].units = "mm"

# load rgb image
rgb = hs.signals.Spectrum(scipy.misc.ascent())
rgb.change_dtype("rgb8")
rgb.metadata.General.title = 'RGB'
axesRGB = rgb.axes_manager
axesRGB[0].name = "x"
axesRGB[1].name = "y"
axesRGB[0].units = "nm"
axesRGB[1].units = "nm"

hs.plot.plot_images([image0, image1, image2, rgb], tight_layout=True,
                    #colorbar='single',
                    labelwrap=20)
hyperspy/tests/drawing/test_plot_image.ipynb
to266/hyperspy
gpl-3.0
Real-world use Another example for this function is plotting EDS line intensities. Using a spectrum image with EDS data, one can use the following commands to get a representative figure of the line intensities. This example also demonstrates changing the colormap (with cmap), adding scalebars to the plots (with scalebar), and changing the padding between the images. The padding is specified as a dictionary, which is used to call matplotlib.figure.Figure.subplots_adjust() (see documentation). Note, this padding can also be changed interactively by clicking on the subplots_adjust button (<img src="plot_images_subplots.png" style="display:inline-block;vertical-align:bottom">) in the GUI (button may be different when using different graphical backends). The sample and the data used are described in P. Burdet, et al., Acta Materialia, 61, p. 3090-3098 (2013) (see http://infoscience.epfl.ch/record/185861/). Further information is available in the Hyperspy EDS tutorial: http://nbviewer.ipython.org/github/hyperspy/hyperspy-demos/blob/master/electron_microscopy/EDS/Hyperpsy_EDS_TEM_tutorial_CAM_2015.ipynb
from urllib import urlretrieve
url = 'http://cook.msm.cam.ac.uk//~hyperspy//EDS_tutorial//'
urlretrieve(url + 'core_shell.hdf5', 'core_shell.hdf5')

si_EDS = hs.load("core_shell.hdf5")
im = si_EDS.get_lines_intensity()
hs.plot.plot_images(
    im, tight_layout=True, cmap='RdYlBu_r', axes_decor='off',
    colorbar='single', scalebar='all', scalebar_color='black',
    suptitle_fontsize=16,
    padding={'top':0.8, 'bottom':0.10, 'left':0.05,
             'right':0.85, 'wspace':0.20, 'hspace':0.10})
hyperspy/tests/drawing/test_plot_image.ipynb
to266/hyperspy
gpl-3.0
Predicting fuel efficiency: regression

<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://www.tensorflow.org/tutorials/keras/regression"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
  </td>
  <td>
    <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/keras/regression.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
  </td>
  <td>
    <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/keras/regression.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
  </td>
  <td>
    <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/keras/regression.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
  </td>
</table>

Note: This document was translated by the TensorFlow community. Because community translations are best-effort, there is no guarantee that they exactly match the official English documentation or reflect its latest state. If you can improve this translation, please send a pull request to the tensorflow/docs-l10n GitHub repository. To volunteer to write or review translations, please email docs-ko@tensorflow.org.

In a regression problem, the aim is to predict the output of a continuous value, such as a price or a probability. In contrast, a classification problem aims to select one class from a list of classes (for example, recognizing which fruit is in a picture that contains an apple or an orange).

This notebook uses the Auto MPG dataset to build a model that predicts the fuel efficiency of late-1970s and early-1980s automobiles. To do this, we provide the model with descriptions of cars from that period, including attributes such as the number of cylinders, displacement, horsepower, and curb weight.

This example uses the tf.keras API; see the Keras guide for details.
# Install the seaborn package for plotting the scatterplot matrix
!pip install seaborn

import pathlib

import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

import tensorflow as tf

from tensorflow import keras
from tensorflow.keras import layers

print(tf.__version__)
site/ko/tutorials/keras/regression.ipynb
tensorflow/docs-l10n
apache-2.0
The Auto MPG dataset. This dataset is available from the UCI Machine Learning Repository. Get the data. First download the dataset.
dataset_path = keras.utils.get_file("auto-mpg.data", "http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data")
dataset_path
site/ko/tutorials/keras/regression.ipynb
tensorflow/docs-l10n
apache-2.0
Read the data using pandas.
column_names = ['MPG','Cylinders','Displacement','Horsepower','Weight',
                'Acceleration', 'Model Year', 'Origin']
raw_dataset = pd.read_csv(dataset_path, names=column_names,
                          na_values = "?", comment='\t',
                          sep=" ", skipinitialspace=True)

dataset = raw_dataset.copy()
dataset.tail()
site/ko/tutorials/keras/regression.ipynb
tensorflow/docs-l10n
apache-2.0
Clean the data. The dataset contains some missing values.
dataset.isna().sum()
site/ko/tutorials/keras/regression.ipynb
tensorflow/docs-l10n
apache-2.0
To keep things simple, drop the rows with missing values.
dataset = dataset.dropna()
site/ko/tutorials/keras/regression.ipynb
tensorflow/docs-l10n
apache-2.0
The "Origin" column is categorical, not numeric, so convert it to a one-hot encoding:
origin = dataset.pop('Origin')

dataset['USA'] = (origin == 1)*1.0
dataset['Europe'] = (origin == 2)*1.0
dataset['Japan'] = (origin == 3)*1.0
dataset.tail()
site/ko/tutorials/keras/regression.ipynb
tensorflow/docs-l10n
apache-2.0
Split the data into training and test sets. Now split the dataset into a training set and a test set. The test set is used for the final evaluation of the model.
train_dataset = dataset.sample(frac=0.8,random_state=0)
test_dataset = dataset.drop(train_dataset.index)
site/ko/tutorials/keras/regression.ipynb
tensorflow/docs-l10n
apache-2.0
Inspect the data. Select a few columns from the training set and look at them as a scatterplot matrix.
sns.pairplot(train_dataset[["MPG", "Cylinders", "Displacement", "Weight"]], diag_kind="kde")
site/ko/tutorials/keras/regression.ipynb
tensorflow/docs-l10n
apache-2.0
Also check the overall statistics:
train_stats = train_dataset.describe()
train_stats.pop("MPG")
train_stats = train_stats.transpose()
train_stats
site/ko/tutorials/keras/regression.ipynb
tensorflow/docs-l10n
apache-2.0
Split features from labels. Separate the target value, or "label", from the features. This label is the value the model will be trained to predict.
train_labels = train_dataset.pop('MPG')
test_labels = test_dataset.pop('MPG')
site/ko/tutorials/keras/regression.ipynb
tensorflow/docs-l10n
apache-2.0
Normalize the data. Look again at the train_stats statistics above and note how different the range of each feature is. It is good practice to normalize features that use different scales and ranges. Although the model might converge without feature normalization, it makes training harder and produces a model that depends on the units of the inputs. Note: The statistics are intentionally generated from the training set only; they are also used to normalize the test set. This projects the test set into the same distribution the model was trained on.
def norm(x):
  return (x - train_stats['mean']) / train_stats['std']
normed_train_data = norm(train_dataset)
normed_test_data = norm(test_dataset)
site/ko/tutorials/keras/regression.ipynb
tensorflow/docs-l10n
apache-2.0
This normalized data is what we will use to train the model. Caution: The statistics used to normalize the inputs here (the mean and standard deviation), just like the one-hot encoding, must be applied to any other data fed to the model. That includes the test set as well as live data when the model is used in production. The model. Build the model. Let's build the model. Here, we'll use a Sequential model with two densely connected hidden layers and an output layer that returns a single continuous value. The model-building steps are wrapped in a build_model function so that a second model is easy to create later.
def build_model():
  model = keras.Sequential([
    layers.Dense(64, activation='relu', input_shape=[len(train_dataset.keys())]),
    layers.Dense(64, activation='relu'),
    layers.Dense(1)
  ])

  optimizer = tf.keras.optimizers.RMSprop(0.001)

  model.compile(loss='mse',
                optimizer=optimizer,
                metrics=['mae', 'mse'])
  return model

model = build_model()
site/ko/tutorials/keras/regression.ipynb
tensorflow/docs-l10n
apache-2.0
Inspect the model. Use the .summary method to print a simple description of the model.
model.summary()
site/ko/tutorials/keras/regression.ipynb
tensorflow/docs-l10n
apache-2.0
Now try out the model. Take a batch of 10 examples from the training set and call model.predict on it.
example_batch = normed_train_data[:10]
example_result = model.predict(example_batch)
example_result
site/ko/tutorials/keras/regression.ipynb
tensorflow/docs-l10n
apache-2.0
It seems to be working: the results have the expected shape and type. Train the model. Train the model for 1,000 epochs, and record the training and validation accuracy in the history object.
# Print a dot (.) at the end of each epoch to show training progress
class PrintDot(keras.callbacks.Callback):
  def on_epoch_end(self, epoch, logs):
    if epoch % 100 == 0: print('')
    print('.', end='')

EPOCHS = 1000

history = model.fit(
  normed_train_data, train_labels,
  epochs=EPOCHS, validation_split = 0.2, verbose=0,
  callbacks=[PrintDot()])
site/ko/tutorials/keras/regression.ipynb
tensorflow/docs-l10n
apache-2.0
Visualize the model's training progress using the statistics stored in the history object.
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
hist.tail()

import matplotlib.pyplot as plt

def plot_history(history):
  hist = pd.DataFrame(history.history)
  hist['epoch'] = history.epoch

  plt.figure(figsize=(8,12))

  plt.subplot(2,1,1)
  plt.xlabel('Epoch')
  plt.ylabel('Mean Abs Error [MPG]')
  plt.plot(hist['epoch'], hist['mae'], label='Train Error')
  plt.plot(hist['epoch'], hist['val_mae'], label = 'Val Error')
  plt.ylim([0,5])
  plt.legend()

  plt.subplot(2,1,2)
  plt.xlabel('Epoch')
  plt.ylabel('Mean Square Error [$MPG^2$]')
  plt.plot(hist['epoch'], hist['mse'], label='Train Error')
  plt.plot(hist['epoch'], hist['val_mse'], label = 'Val Error')
  plt.ylim([0,20])
  plt.legend()
  plt.show()

plot_history(history)
site/ko/tutorials/keras/regression.ipynb
tensorflow/docs-l10n
apache-2.0
The graph shows little improvement in the model after a few hundred epochs. Let's update the model.fit call to automatically stop training when the validation score stops improving. We'll use an EarlyStopping callback that checks the training condition every epoch. If a set number of epochs passes without improvement, training stops automatically. You can learn more about this callback here.
model = build_model()

# The patience parameter is the number of epochs to check for improvement
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)

history = model.fit(normed_train_data, train_labels, epochs=EPOCHS,
                    validation_split = 0.2, verbose=0, callbacks=[early_stop, PrintDot()])

plot_history(history)
site/ko/tutorials/keras/regression.ipynb
tensorflow/docs-l10n
apache-2.0
The graph shows that on the validation set the average error is about +/- 2 MPG. Is this good? We'll leave that decision up to you. Let's see how well the model generalizes on the test set, which we did not use when training the model. This tells us how well we can expect the model to perform when it is used in the real world:
loss, mae, mse = model.evaluate(normed_test_data, test_labels, verbose=2)

print("Mean absolute error on the test set: {:5.2f} MPG".format(mae))
site/ko/tutorials/keras/regression.ipynb
tensorflow/docs-l10n
apache-2.0
Make predictions. Finally, predict MPG values using the samples in the test set:
test_predictions = model.predict(normed_test_data).flatten()

plt.scatter(test_labels, test_predictions)
plt.xlabel('True Values [MPG]')
plt.ylabel('Predictions [MPG]')
plt.axis('equal')
plt.axis('square')
plt.xlim([0,plt.xlim()[1]])
plt.ylim([0,plt.ylim()[1]])
_ = plt.plot([-100, 100], [-100, 100])
site/ko/tutorials/keras/regression.ipynb
tensorflow/docs-l10n
apache-2.0
It looks like the model predicts reasonably well. Let's take a look at the error distribution.
error = test_predictions - test_labels
plt.hist(error, bins = 25)
plt.xlabel("Prediction Error [MPG]")
_ = plt.ylabel("Count")
site/ko/tutorials/keras/regression.ipynb
tensorflow/docs-l10n
apache-2.0
As always, let's do imports and initialize a logger and a new Bundle.
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt

logger = phoebe.logger('error')

b = phoebe.default_binary()
2.3/tutorials/ltte.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Now let's add a light curve dataset to see how ltte affects the timings of eclipses.
b.add_dataset('lc', times=phoebe.linspace(-0.05, 0.05, 51), dataset='lc01')
2.3/tutorials/ltte.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Relevant Parameters The 'ltte' parameter in context='compute' defines whether light travel time effects are taken into account or not.
print(b['ltte@compute'])
2.3/tutorials/ltte.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Comparing with and without ltte In order to have a binary system with any noticeable ltte effects, we'll set a somewhat extreme mass-ratio and semi-major axis.
b['sma@binary'] = 100
b['q'] = 0.1
2.3/tutorials/ltte.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
We'll just ignore the fact that this will be a completely unphysical system since we'll leave the radii and temperatures alone despite somewhat ridiculous masses - but since the masses and radii disagree so much, we'll have to abandon atmospheres and use blackbody.
b.set_value_all('atm', 'blackbody')
b.set_value_all('ld_mode', 'manual')
b.set_value_all('ld_func', 'logarithmic')

b.run_compute(irrad_method='none', ltte=False, model='ltte_off')
b.run_compute(irrad_method='none', ltte=True, model='ltte_on')

afig, mplfig = b.plot(show=True)
2.3/tutorials/ltte.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
OMA, 2nd midterm exam 2012/2013. Problem 1. A rectangle ABCD is inscribed in a semicircle of radius 1 so that vertices A and B lie on the diameter and vertices C and D lie on the arc of the semicircle. What should the side lengths of the rectangle be so that its area is maximal?
%%tikz s 400,400 -sc 1.2 -f png
\draw [domain=0:180] plot ({cos(\x)}, {sin(\x)});
\draw (-1,0) -- (1, 0);
\draw [color=red] (-0.5, 0) -- node[below, color=black] {2a} ++ (1, 0);
\draw [color=red] (-0.5, 0.8660254037844386) -- (0.5, 0.8660254037844386);
\draw [color=red] (-0.5, 0) -- node[left, color=black] {b} ++ (0, 0.8660254037844386);
\draw [color=red] (0.5, 0.8660254037844386) -- (0.5, 0);
oma/kolokviji/OMA, 2. kolokvij 2012_2013.ipynb
mrcinv/matpy
gpl-2.0
We maximize the area function $P = 2ab$, subject to $a^2 + b^2 = 1$. Instead of the area we will maximize its square (which attains its maximum at the same point as the original function).
P = sympy.symbols('P', cls=sympy.Function)
eq1 = Eq(P(b), (2*a*b)**2)
eq2 = Eq(a**2+b**2, 1)
equation = Eq(P(b), solve([eq1, eq2], P(b), a**2)[P(b)])
equation

P = sympy.lambdify(b, equation.rhs)
x = sympy.symbols('x', positive=True)
solve(Eq(P(x).diff(x), 0))[0]
oma/kolokviji/OMA, 2. kolokvij 2012_2013.ipynb
mrcinv/matpy
gpl-2.0
Problem 2. Let $$f(x,y)=3x^2-3y^2+8xy-6x-8y+3.$$ Compute the gradient of $f(x,y)$.
x, y = sympy.symbols('x y')
f = lambda x, y: 3*x**2 - 3*y**2 + 8*x*y-6*x-8*y+3
f(x,y).diff(x), f(x,y).diff(y)
oma/kolokviji/OMA, 2. kolokvij 2012_2013.ipynb
mrcinv/matpy
gpl-2.0
Compute the stationary points of $f(x,y)$.
sympy.solve([f(x,y).diff(x), f(x,y).diff(y)])
oma/kolokviji/OMA, 2. kolokvij 2012_2013.ipynb
mrcinv/matpy
gpl-2.0
Problem 3. Compute the derivative of the function $$\frac{\cos(x)}{\sin(x)}.$$
x = sympy.symbols('x')
f = lambda x: sympy.cos(x)/sympy.sin(x)
sympy.simplify(f(x).diff())
oma/kolokviji/OMA, 2. kolokvij 2012_2013.ipynb
mrcinv/matpy
gpl-2.0
Using substitution, compute the indefinite integral $$\int \frac{\cos(x)}{\sin(x)}\,dx.$$
sympy.simplify(f(x).integrate())
oma/kolokviji/OMA, 2. kolokvij 2012_2013.ipynb
mrcinv/matpy
gpl-2.0
In the computation above, besides the constant, the absolute value inside the $\log$ is missing (sympy works over the complex numbers), so the correct result is $$ \frac{1}{2}\log(\sin^2(x)) + C = \log(|\sin(x)|) + C.$$ Using the rule for integration by parts, compute $$\int\frac{x}{\sin^2(x)}\,dx.$$
x = sympy.symbols('x')
f = lambda x: x/sympy.sin(x)**2
sympy.simplify(f(x).integrate())
oma/kolokviji/OMA, 2. kolokvij 2012_2013.ipynb
mrcinv/matpy
gpl-2.0
This solution, too, can be simplified to $$ \int\frac{x}{\sin^2(x)}\,dx = \log(|\sin(x)|) - x\cot(x) + C.$$ Problem 4. Draw the region bounded by the curves $y=e^{2x}$ and $y=-e^{2x}+4$, and compute its area.
from matplotlib import pyplot as plt
import numpy as np

x = sympy.symbols('x')
f = lambda x: np.exp(2*x)
g = lambda x: -np.exp(2*x)+4

fig, ax = plt.subplots()
xs = np.linspace(0,0.6)
ax.fill_between(xs, f(xs), g(xs), where = f(xs)>=g(xs), facecolor='green', interpolate=True)
ax.fill_between(xs, f(xs), g(xs), where = f(xs)<=g(xs), facecolor='red', interpolate=True)
plt.title("Regions between the two curves.")
oma/kolokviji/OMA, 2. kolokvij 2012_2013.ipynb
mrcinv/matpy
gpl-2.0
We need to compute the area of the red region.
x = sympy.symbols('x', real=True)
f = lambda x: sympy.E**(2*x)
g = lambda x: -sympy.E**(2*x)+4
intersection = sympy.solve(sympy.Eq(f(x), g(x)))[0]
result = sympy.integrate(g(x)-f(x), (x, 0, intersection))
result
result.evalf()
oma/kolokviji/OMA, 2. kolokvij 2012_2013.ipynb
mrcinv/matpy
gpl-2.0
Hamilton (1989) switching model of GNP

This replicates Hamilton's (1989) seminal paper introducing Markov-switching models. The model is an autoregressive model of order 4 in which the mean of the process switches between two regimes. It can be written:

$$ y_t = \mu_{S_t} + \phi_1 (y_{t-1} - \mu_{S_{t-1}}) + \phi_2 (y_{t-2} - \mu_{S_{t-2}}) + \phi_3 (y_{t-3} - \mu_{S_{t-3}}) + \phi_4 (y_{t-4} - \mu_{S_{t-4}}) + \varepsilon_t $$

Each period, the regime transitions according to the following matrix of transition probabilities:

$$ P(S_t = s_t | S_{t-1} = s_{t-1}) = \begin{bmatrix} p_{00} & p_{10} \\ p_{01} & p_{11} \end{bmatrix} $$

where $p_{ij}$ is the probability of transitioning from regime $i$ to regime $j$.

The model class is MarkovAutoregression in the time-series part of statsmodels. In order to create the model, we must specify the number of regimes with k_regimes=2, and the order of the autoregression with order=4. The default model also includes switching autoregressive coefficients, so here we also need to specify switching_ar=False to avoid that.

After creation, the model is fit via maximum likelihood estimation. Under the hood, good starting parameters are found using a number of steps of the expectation maximization (EM) algorithm, and a quasi-Newton (BFGS) algorithm is applied to quickly find the maximum.
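To make the switching-mean AR(4) equation above concrete before estimating it, here is a small simulation sketch; the parameter values (P, mu, phi, and the noise scale) are made up for illustration and are not Hamilton's estimates.

import numpy as np

# Hedged illustration: simulate one path of a two-regime switching-mean AR(4).
np.random.seed(0)
T = 200
P = np.array([[0.90, 0.10],    # P[i, j] = Prob(S_t = j | S_{t-1} = i), row-stochastic
              [0.25, 0.75]])
mu = np.array([1.0, -0.5])     # regime-dependent means (illustrative)
phi = np.array([0.01, -0.06, -0.25, -0.21])  # AR coefficients (illustrative)

S = np.zeros(T, dtype=int)
y = np.zeros(T)
eps = 0.8 * np.random.randn(T)
for t in range(4, T):
    S[t] = np.random.choice(2, p=P[S[t - 1]])
    # y_t = mu_{S_t} + sum_k phi_k * (y_{t-k} - mu_{S_{t-k}}) + eps_t
    dev = sum(phi[k] * (y[t - 1 - k] - mu[S[t - 1 - k]]) for k in range(4))
    y[t] = mu[S[t]] + dev + eps[t]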
# Get the RGNP data to replicate Hamilton
dta = pd.read_stata('https://www.stata-press.com/data/r14/rgnp.dta').iloc[1:]
dta.index = pd.DatetimeIndex(dta.date, freq='QS')
dta_hamilton = dta.rgnp

# Plot the data
dta_hamilton.plot(title='Growth rate of Real GNP', figsize=(12,3))

# Fit the model
mod_hamilton = sm.tsa.MarkovAutoregression(dta_hamilton, k_regimes=2, order=4, switching_ar=False)
res_hamilton = mod_hamilton.fit()
res_hamilton.summary()
v0.12.1/examples/notebooks/generated/markov_autoregression.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
We plot the filtered and smoothed probabilities of a recession. Filtered refers to an estimate of the probability at time $t$ based on data up to and including time $t$ (but excluding time $t+1, ..., T$). Smoothed refers to an estimate of the probability at time $t$ using all the data in the sample. For reference, the shaded periods represent the NBER recessions.
fig, axes = plt.subplots(2, figsize=(7,7))

ax = axes[0]
ax.plot(res_hamilton.filtered_marginal_probabilities[0])
ax.fill_between(usrec.index, 0, 1, where=usrec['USREC'].values, color='k', alpha=0.1)
ax.set_xlim(dta_hamilton.index[4], dta_hamilton.index[-1])
ax.set(title='Filtered probability of recession')

ax = axes[1]
ax.plot(res_hamilton.smoothed_marginal_probabilities[0])
ax.fill_between(usrec.index, 0, 1, where=usrec['USREC'].values, color='k', alpha=0.1)
ax.set_xlim(dta_hamilton.index[4], dta_hamilton.index[-1])
ax.set(title='Smoothed probability of recession')

fig.tight_layout()
v0.12.1/examples/notebooks/generated/markov_autoregression.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
From the estimated transition matrix we can calculate the expected duration of a recession versus an expansion.
print(res_hamilton.expected_durations)
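The numbers printed above come from the geometric distribution implied by the diagonal of the transition matrix: if $p_{ii}$ is the probability of staying in regime $i$, the expected duration of that regime is $1/(1-p_{ii})$. A minimal sketch of that calculation with illustrative persistence probabilities (not the fitted estimates):

import numpy as np

# Hedged illustration: expected regime durations from persistence probabilities.
p_persist = np.array([0.75, 0.90])            # illustrative Prob(stay in regime i)
expected_duration = 1.0 / (1.0 - p_persist)   # geometric expected duration
print(expected_duration)                      # -> [ 4. 10.] periods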
v0.12.1/examples/notebooks/generated/markov_autoregression.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
In this case, it is expected that a recession will last about one year (4 quarters) and an expansion about two and a half years.

Kim, Nelson, and Startz (1998) Three-state Variance Switching

This model demonstrates estimation with regime heteroskedasticity (switching of variances) and no mean effect. The dataset can be reached at http://econ.korea.ac.kr/~cjkim/MARKOV/data/ew_excs.prn.

The model in question is:

$$ \begin{align} y_t & = \varepsilon_t \\ \varepsilon_t & \sim N(0, \sigma_{S_t}^2) \end{align} $$

Since there is no autoregressive component, this model can be fit using the MarkovRegression class. Since there is no mean effect, we specify trend='nc'. There are hypothesized to be three regimes for the switching variances, so we specify k_regimes=3 and switching_variance=True (by default, the variance is assumed to be the same across regimes).
# Get the dataset
ew_excs = requests.get('http://econ.korea.ac.kr/~cjkim/MARKOV/data/ew_excs.prn').content
raw = pd.read_table(BytesIO(ew_excs), header=None, skipfooter=1, engine='python')
raw.index = pd.date_range('1926-01-01', '1995-12-01', freq='MS')

dta_kns = raw.loc[:'1986'] - raw.loc[:'1986'].mean()

# Plot the dataset
dta_kns[0].plot(title='Excess returns', figsize=(12, 3))

# Fit the model
mod_kns = sm.tsa.MarkovRegression(dta_kns, k_regimes=3, trend='nc', switching_variance=True)
res_kns = mod_kns.fit()
res_kns.summary()
v0.12.1/examples/notebooks/generated/markov_autoregression.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Below we plot the probabilities of being in each of the regimes; only in a few periods is a high-variance regime probable.
fig, axes = plt.subplots(3, figsize=(10,7))

ax = axes[0]
ax.plot(res_kns.smoothed_marginal_probabilities[0])
ax.set(title='Smoothed probability of a low-variance regime for stock returns')

ax = axes[1]
ax.plot(res_kns.smoothed_marginal_probabilities[1])
ax.set(title='Smoothed probability of a medium-variance regime for stock returns')

ax = axes[2]
ax.plot(res_kns.smoothed_marginal_probabilities[2])
ax.set(title='Smoothed probability of a high-variance regime for stock returns')

fig.tight_layout()
v0.12.1/examples/notebooks/generated/markov_autoregression.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Filardo (1994) Time-Varying Transition Probabilities

This model demonstrates estimation with time-varying transition probabilities. The dataset can be reached at http://econ.korea.ac.kr/~cjkim/MARKOV/data/filardo.prn.

In the above models we have assumed that the transition probabilities are constant across time. Here we allow the probabilities to change with the state of the economy. Otherwise, the model is the same Markov autoregression of Hamilton (1989).

Each period, the regime now transitions according to the following matrix of time-varying transition probabilities:

$$ P(S_t = s_t | S_{t-1} = s_{t-1}) = \begin{bmatrix} p_{00,t} & p_{10,t} \\ p_{01,t} & p_{11,t} \end{bmatrix} $$

where $p_{ij,t}$ is the probability of transitioning from regime $i$, to regime $j$ in period $t$, and is defined to be:

$$ p_{ij,t} = \frac{\exp\{ x_{t-1}' \beta_{ij} \}}{1 + \exp\{ x_{t-1}' \beta_{ij} \}} $$

Instead of estimating the transition probabilities as part of maximum likelihood, the regression coefficients $\beta_{ij}$ are estimated. These coefficients relate the transition probabilities to a vector of pre-determined or exogenous regressors $x_{t-1}$.
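Before loading the data below, it may help to see the logistic mapping in isolation. The following sketch uses made-up values (beta_ij and x_lag are not estimates or data from the Filardo model) to show how a coefficient vector and the lagged regressor translate into a time-varying transition probability:

import numpy as np

# Hedged illustration of p_{ij,t} = exp(x_{t-1}' beta_ij) / (1 + exp(x_{t-1}' beta_ij)).
beta_ij = np.array([1.5, -0.8])                       # constant + slope (made up)
x_lag = np.column_stack([np.ones(5),                  # constant term
                         np.array([-1.0, -0.5, 0.0, 0.5, 1.0])])  # lagged regressor
z = x_lag @ beta_ij
p_ij_t = np.exp(z) / (1 + np.exp(z))                  # one probability per period
print(p_ij_t)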
# Get the dataset
filardo = requests.get('http://econ.korea.ac.kr/~cjkim/MARKOV/data/filardo.prn').content
dta_filardo = pd.read_table(BytesIO(filardo), sep=' +', header=None, skipfooter=1, engine='python')
dta_filardo.columns = ['month', 'ip', 'leading']
dta_filardo.index = pd.date_range('1948-01-01', '1991-04-01', freq='MS')

dta_filardo['dlip'] = np.log(dta_filardo['ip']).diff()*100
# Deflated pre-1960 observations by ratio of std. devs.
# See hmt_tvp.opt or Filardo (1994) p. 302
std_ratio = dta_filardo['dlip']['1960-01-01':].std() / dta_filardo['dlip'][:'1959-12-01'].std()
dta_filardo['dlip'][:'1959-12-01'] = dta_filardo['dlip'][:'1959-12-01'] * std_ratio

dta_filardo['dlleading'] = np.log(dta_filardo['leading']).diff()*100
dta_filardo['dmdlleading'] = dta_filardo['dlleading'] - dta_filardo['dlleading'].mean()

# Plot the data
dta_filardo['dlip'].plot(title='Standardized growth rate of industrial production', figsize=(13,3))
plt.figure()
dta_filardo['dmdlleading'].plot(title='Leading indicator', figsize=(13,3));
v0.12.1/examples/notebooks/generated/markov_autoregression.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
The time-varying transition probabilities are specified by the exog_tvtp parameter. Here we demonstrate another feature of model fitting - the use of a random search for MLE starting parameters. Because Markov switching models are often characterized by many local maxima of the likelihood function, performing an initial optimization step can be helpful to find the best parameters. Below, we specify that 20 random perturbations from the starting parameter vector are examined and the best one used as the actual starting parameters. Because of the random nature of the search, we seed the random number generator beforehand to allow replication of the result.
mod_filardo = sm.tsa.MarkovAutoregression(
    dta_filardo.iloc[2:]['dlip'], k_regimes=2, order=4, switching_ar=False,
    exog_tvtp=sm.add_constant(dta_filardo.iloc[1:-1]['dmdlleading']))

np.random.seed(12345)
res_filardo = mod_filardo.fit(search_reps=20)
res_filardo.summary()
v0.12.1/examples/notebooks/generated/markov_autoregression.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Below we plot the smoothed probability of the economy operating in a low-production state, and again include the NBER recessions for comparison.
fig, ax = plt.subplots(figsize=(12,3))

ax.plot(res_filardo.smoothed_marginal_probabilities[0])
ax.fill_between(usrec.index, 0, 1, where=usrec['USREC'].values, color='gray', alpha=0.2)
ax.set_xlim(dta_filardo.index[6], dta_filardo.index[-1])
ax.set(title='Smoothed probability of a low-production state');
v0.12.1/examples/notebooks/generated/markov_autoregression.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Using the time-varying transition probabilities, we can see how the expected duration of a low-production state changes over time:
res_filardo.expected_durations[0].plot( title='Expected duration of a low-production state', figsize=(12,3));
v0.12.1/examples/notebooks/generated/markov_autoregression.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Import the AlignIO package, which is the package for handling files containing multiple alignments in various formats (including clustal, which is the format of the input file).
from Bio import AlignIO
laboratorio/lezione17-09dic21/esercizio4-biopython.ipynb
bioinformatica-corso/lezioni
cc0-1.0
Read the input alignment. The AlignIO package provides the read function for reading an alignment: AlignIO.read(input_file_name, format). It returns an object of type MultipleSeqAlignment, which is an iterable object containing SeqRecord objects, one for each row of the alignment that was read.
alignment = AlignIO.read("mafft-alignments.clustalw", "clustal")
laboratorio/lezione17-09dic21/esercizio4-biopython.ipynb
bioinformatica-corso/lezioni
cc0-1.0
The length of the input alignment (the number of columns of the alignment matrix) is:
alignment.get_alignment_length()
laboratorio/lezione17-09dic21/esercizio4-biopython.ipynb
bioinformatica-corso/lezioni
cc0-1.0
Convert the object into a list of SeqRecord objects.
alignment = list(alignment)
alignment
laboratorio/lezione17-09dic21/esercizio4-biopython.ipynb
bioinformatica-corso/lezioni
cc0-1.0
Remove the leading gaps. Find the longest prefix consisting only of - symbols among the rows of the alignment. Assuming that prefix has length g, remove the prefix of length g from every row of the alignment. For example, the following alignment of three rows:
GTATGTGTCATGTTTTTGCTA
--ATGTGTCATG-TTT-----
----GTGTCATGTTTTTG---
has a longest all-gap prefix of length g=4 (third row). Removing a prefix of length 4 from every row gives:
GTGTCATGTTTTTGCTA
GTGTCATG-TTT-----
GTGTCATGTTTTTG---
import re

gap_list = [re.findall('^-+', str(row.seq)) for row in alignment]
gap_size_list = [len(gap[0]) for gap in gap_list if gap]
gap_size_list[:0] = [0]
leading_gaps = max(gap_size_list)

alignment = [row[leading_gaps:] for row in alignment]
alignment
laboratorio/lezione17-09dic21/esercizio4-biopython.ipynb
bioinformatica-corso/lezioni
cc0-1.0
Remove the trailing gaps. Find the longest suffix consisting only of - symbols among the rows of the alignment. Assuming that suffix has length g, remove the suffix of length g from every row. For example, the following alignment of three rows:
GTGTCATGTTTTTGCTA
GTGTCATG-TTT-----
GTGTCATGTTTTTG---
has a longest all-gap suffix of length g=5 (second row). Removing a suffix of length 5 from every row gives:
GTGTCATGTTTT
GTGTCATG-TTT
GTGTCATGTTTT
gap_list = [re.findall('-+$', str(row.seq)) for row in alignment]
gap_size_list = [len(gap[0]) for gap in gap_list if gap]
gap_size_list[:0] = [0]
trailing_gaps = max(gap_size_list)

alignment = [row[:len(row)-trailing_gaps] for row in alignment]
alignment
laboratorio/lezione17-09dic21/esercizio4-biopython.ipynb
bioinformatica-corso/lezioni
cc0-1.0
Create the list of genome identifiers.
index_list = [row.id for row in alignment]
index_list
laboratorio/lezione17-09dic21/esercizio4-biopython.ipynb
bioinformatica-corso/lezioni
cc0-1.0
Create the dictionary containing the data used to build the data frame. key: 1-based position of the variant (position of the column in the input alignment). value: list of the aligned symbols involved in the variant (the first symbol must be the one from the reference, while the empty string must be inserted for a genome that shows no difference from the reference).
df_data = {}
reference = alignment.pop(0)

for (i,c) in enumerate(reference):
    variant_list = []
    is_variant = False
    for row in alignment:
        variant = ''
        if row[i] != c and row[i] in {'A', 'C', 'G', 'T'}:
            is_variant = True
            variant = row[i]
        variant_list.append(variant)

    if is_variant:
        variant_list[:0] = [c]
        df_data[str(i+leading_gaps+1)] = variant_list

df_data
laboratorio/lezione17-09dic21/esercizio4-biopython.ipynb
bioinformatica-corso/lezioni
cc0-1.0
Create the data frame: df = pd.DataFrame(df_data, index = index_list)
import pandas as pd

df = pd.DataFrame(df_data, index = index_list)
df
laboratorio/lezione17-09dic21/esercizio4-biopython.ipynb
bioinformatica-corso/lezioni
cc0-1.0
Extract the genome with the most variants and the one with the fewest variants. Determine the list of the number of variants per genome (for all genomes except the reference).
variants_per_genome = [len(list(filter(lambda x: x!='', list(row)))) for row in df.values]
variants_per_genome.pop(0)
variants_per_genome
laboratorio/lezione17-09dic21/esercizio4-biopython.ipynb
bioinformatica-corso/lezioni
cc0-1.0
Alternatively:
variants_per_genome = [df.shape[1]-list(df.loc[index]).count('') for index in index_list[1:]]
variants_per_genome
laboratorio/lezione17-09dic21/esercizio4-biopython.ipynb
bioinformatica-corso/lezioni
cc0-1.0
Extract the genome with the most variants.
index_list[variants_per_genome.index(max(variants_per_genome))+1]
laboratorio/lezione17-09dic21/esercizio4-biopython.ipynb
bioinformatica-corso/lezioni
cc0-1.0
Extract the genome with the fewest variants.
index_list[variants_per_genome.index(min(variants_per_genome))+1]
laboratorio/lezione17-09dic21/esercizio4-biopython.ipynb
bioinformatica-corso/lezioni
cc0-1.0
Alternatively, to extract the genome with the fewest variants:
null_df = pd.DataFrame((df == '').sum(axis=1), columns=['difference'])
null_df[1:][null_df[1:]['difference'] == null_df[1:]['difference'].max()]
laboratorio/lezione17-09dic21/esercizio4-biopython.ipynb
bioinformatica-corso/lezioni
cc0-1.0
Alternatively, to extract the genome with the most variants:
null_df[1:][null_df[1:]['difference'] == null_df[1:]['difference'].min()]
laboratorio/lezione17-09dic21/esercizio4-biopython.ipynb
bioinformatica-corso/lezioni
cc0-1.0
Determine the data frame of the "complete" variants. Select from the previous data frame only the columns corresponding to "complete" variants.
df_complete = df[[col for col in df.columns if all(df[col] != '')]]
df_complete
laboratorio/lezione17-09dic21/esercizio4-biopython.ipynb
bioinformatica-corso/lezioni
cc0-1.0
Determine the data frame of the "stable" variants. Select from the previous data frame only the columns corresponding to "stable" variants.
df_stable = df_complete[[col for col in df_complete.columns if len(df_complete[col][1:].unique()) == 1]]
df_stable
laboratorio/lezione17-09dic21/esercizio4-biopython.ipynb
bioinformatica-corso/lezioni
cc0-1.0
Get the list of positions where there is a gap in the reference genome.
ref_gaps = [col for col in df.columns if df[col][0] == '-']
ref_gaps
laboratorio/lezione17-09dic21/esercizio4-biopython.ipynb
bioinformatica-corso/lezioni
cc0-1.0
Get the list of positions where there is a gap in at least one of the genomes (other than the reference).
other_gaps = [col for col in df.columns if any(df[col][1:] == '-')]
other_gaps
laboratorio/lezione17-09dic21/esercizio4-biopython.ipynb
bioinformatica-corso/lezioni
cc0-1.0
As a rule, I always conduct statistical simulations to make sure the functions I have written actually perform the way I expect them to when the null is known. If you can't get your method to work on a data generating procedure of your choosing, it should not leave the statistical laboratory! In the simulations below, $\mu_y = 0$, and $\mu_x$ will vary from zero to 0.2. At the same time, both variance homoskedasticity ($\sigma_y = \sigma_x$) and heteroskedasticity ($\sigma_y \neq \sigma_x$) will be assessed. To further ensure the approach works, the respective sample sizes, $n$ and $m$, for each of the nsim=100K experiments will be a random integer between 25 and 75. In order to avoid an inner loop and rely on pure numpy vectorization, a data matrix of dimension 75 x 100000 will be generated. To account for the different sample sizes, if $n$ or $m$ is less than 75, the corresponding difference in rows will be set as a missing value np.NaN. The np.nanmean and np.nanstd functions will be used to handle missing values. Note that in all of the subsequent simulations, the type-I error rate target will be fixed to 5% ($\alpha=0.05$), and 100K simulations will be run.
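The simulation code below calls rvec, cvec, tdist_2dist, and prop_CI, which were defined earlier in the post and are not reproduced in this excerpt. Purely as a reading aid, here is a hedged sketch of what rvec, cvec, and tdist_2dist presumably do, inferred from how they are used below (the originals may differ in detail); prop_CI presumably wraps a binomial proportion confidence interval.

import numpy as np
from scipy import stats

def rvec(x):
    """Assumed helper: reshape a 1-D array into a (1, n) row vector."""
    return np.atleast_2d(x)

def cvec(x):
    """Assumed helper: reshape a 1-D array into an (n, 1) column vector."""
    return np.atleast_2d(x).T

def tdist_2dist(mu1, mu2, se1, se2, n1, n2, var_eq=False):
    """Sketch of a two-sample t-test from summary statistics.

    Returns the t-statistic and two-sided p-value, using pooled-variance
    degrees of freedom when var_eq=True and the Welch approximation otherwise.
    """
    var1, var2 = se1**2, se2**2
    if var_eq:
        nu = n1 + n2 - 2
        sp2 = ((n1 - 1) * var1 + (n2 - 1) * var2) / nu        # pooled variance
        se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
    else:
        se = np.sqrt(var1 / n1 + var2 / n2)
        nu = (var1 / n1 + var2 / n2) ** 2 / (
            (var1 / n1) ** 2 / (n1 - 1) + (var2 / n2) ** 2 / (n2 - 1))  # Welch-Satterthwaite
    dstat = (mu1 - mu2) / se
    pval = 2 * stats.t(df=nu).sf(np.abs(dstat))               # two-sided p-value
    return dstat, pval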
# Parameters of simulations
nsim = 100000
alpha = 0.05
nlow, nhigh = 25, 75
n1, n2 = np.random.randint(nlow, nhigh+1, nsim), np.random.randint(nlow, nhigh+1, nsim)
se1, se2 = np.exp(np.random.randn(nsim)), np.exp(np.random.randn(nsim))
mu_seq = np.arange(0,0.21,0.01)
tt_seq, method_seq = np.repeat(['eq','neq'],2), np.tile(['neq','eq'],2)

holder = []
np.random.seed(1234)
for mu in mu_seq:
    # Generate random data
    x1 = mu + se1*np.random.randn(nhigh, nsim)
    x2a = se1 * np.random.randn(nhigh, nsim)
    x2b = se2 * np.random.randn(nhigh, nsim)
    idx = np.tile(np.arange(nhigh),[nsim,1]).T
    # Find which rows to set to missing
    idx1, idx2 = idx < rvec(n1), idx < rvec(n2)
    x1, x2a, x2b = np.where(idx1, x1, np.nan), np.where(idx2, x2a, np.nan), np.where(idx2, x2b, np.nan)
    mu_hat1, mu_hat2a, mu_hat2b = np.nanmean(x1, 0), np.nanmean(x2a, 0), np.nanmean(x2b, 0)
    se_hat1, se_hat2a, se_hat2b = np.nanstd(x1, 0, ddof=1), np.nanstd(x2a, 0, ddof=1), np.nanstd(x2b, 0, ddof=1)
    # Calculate statistics and p-values
    tstat_neq_a, pval_neq_a = tdist_2dist(mu_hat1, mu_hat2a, se_hat1, se_hat2a, n1, n2, False)
    tstat_eq_a, pval_eq_a = tdist_2dist(mu_hat1, mu_hat2a, se_hat1, se_hat2a, n1, n2, True)
    tstat_neq_b, pval_neq_b = tdist_2dist(mu_hat1, mu_hat2b, se_hat1, se_hat2b, n1, n2, False)
    tstat_eq_b, pval_eq_b = tdist_2dist(mu_hat1, mu_hat2b, se_hat1, se_hat2b, n1, n2, True)
    # Find hypothesis rejection probability
    power_neq_a, power_eq_a = np.mean(pval_neq_a < alpha), np.mean(pval_eq_a < alpha)
    power_neq_b, power_eq_b = np.mean(pval_neq_b < alpha), np.mean(pval_eq_b < alpha)
    power_seq = np.array([power_neq_a, power_eq_a, power_neq_b, power_eq_b])
    holder.append(pd.DataFrame({'mu':mu,'tt':tt_seq,'method':method_seq, 'power':power_seq}))

# Power comparison
di_method = {'eq':'Equal','neq':'Not Equal'}
res_power = pd.concat(holder).assign(nsim=nsim)
res_power[['tt','method']] = res_power[['tt','method']].apply(lambda x: x.map(di_method))
res_power = res_power.rename(columns={'tt':'Variance'}).assign(nreject=lambda x: (x.power*x.nsim).astype(int))
res_power = pd.concat([res_power.drop(columns=['nsim','nreject']),
                       pd.concat(prop_CI(count=res_power.nreject,nobs=nsim,method='beta'),1)],1)
res_power.rename(columns={0:'lb',1:'ub'}, inplace=True)

plotnine.options.figure_size = (8, 3.5)
gg_power_ttest = (ggplot(res_power,aes(x='mu',y='power',color='method')) + theme_bw() +
                  geom_line() + geom_hline(yintercept=0.05,linetype='--') +
                  scale_color_discrete(name='Variance assumption') +
                  geom_linerange(aes(ymin='lb',ymax='ub')) +
                  ggtitle('Vertical lines show 95% CI') +
                  labs(y='Prob. of rejecting null',x='Mean difference') +
                  facet_wrap('~Variance',labeller=label_both) +
                  theme(legend_position=(0.5,-0.1),legend_direction='horizontal'))
gg_power_ttest
_rmd/extra_unequalvar/unequalvar.ipynb
erikdrysdale/erikdrysdale.github.io
mit
Figure 1 above shows that the tdist_2dist function is working as expected. When the variances of $x$ and $y$ are equivalent, there is no difference in performance between approaches. When the mean difference is zero, the probability of rejecting the null is exactly equivalent to the level of the test (5%). However, when the variances differ, using the degrees-of-freedom calculation that assumes they are equal leads to an inflated type-I error rate, whereas using the adjustment from Welch's t-test gets to the right nominal level.

(2) Checking power calculations

After checking that the function's test statistic has the right nominal coverage on simulated data, I find it useful to check whether the power of the test can be predicted for different values of the alternative hypothesis. For some test statistics this is not possible to do analytically, since the distribution of the test statistic under the alternative may not be known. However, for the student-t distribution, a difference in true means amounts to a noncentral t-distribution.

$$ \begin{align} T &= \frac{Z + c}{\sqrt{V/\nu}} \\ T &\sim \text{nct}(\nu,c) \\ Z &\sim N(0,1), \hspace{3mm} V\sim \chi^2(\nu), \hspace{3mm} \mu \neq 0 \end{align} $$

The statistic $d$ from \eqref{eq:dstat} can be modified to match the noncentral t-distribution:

$$ \begin{align} d + \underbrace{\frac{\mu_x - \mu_y}{\sqrt{\sigma^2_x/n + \sigma^2_y/m}}}_{c}. \end{align} $$

The power simulations below will fix $n=25$, $m=75$, and unit variances when $\sigma_x=\sigma_y$, and $\sigma_x=1$ and $\sigma_y=2$ in the heteroskedastic case.
n1, n2 = 25, 75
se1 = 1
se2a, se2b = se1, se1 + 1
var1, var2a, var2b = se1**2, se2a**2, se2b**2
# ddof under different assumptions
nu_a = n1 + n2 - 2
nu_b = (var1/n1 + var2b/n2)**2 / ( (var1/n1)**2/(n1-1) + (var2b/n2)**2/(n2-1) )
mu_seq = np.round(np.arange(0, 1.1, 0.1),2)

# Pre-calculate power
crit_ub_a, crit_lb_a = stats.t(df=nu_a).ppf(1-alpha/2), stats.t(df=nu_a).ppf(alpha/2)
crit_ub_b, crit_lb_b = stats.t(df=nu_b).ppf(1-alpha/2), stats.t(df=nu_b).ppf(alpha/2)
lam_a = np.array([mu/np.sqrt(var1*(1/n1 + 1/n2)) for mu in mu_seq])
lam_b = np.array([mu/np.sqrt((var1/n1 + var2b/n2)) for mu in mu_seq])
dist_alt_a, dist_alt_b = stats.nct(df=nu_a, nc=lam_a), stats.nct(df=nu_b, nc=lam_b)
power_a = (1-dist_alt_a.cdf(crit_ub_a)) + dist_alt_a.cdf(crit_lb_a)
power_b = (1-dist_alt_b.cdf(crit_ub_b)) + dist_alt_b.cdf(crit_lb_b)
dat_theory = pd.concat([pd.DataFrame({'mu':mu_seq,'theory':power_a,'method':'eq'}),
                        pd.DataFrame({'mu':mu_seq,'theory':power_b,'method':'neq'})])

# Run simulations to confirm
np.random.seed(1234)
holder = []
for mu in mu_seq:
    x1 = mu + se1 * np.random.randn(n1, nsim)
    x2a = se2a * np.random.randn(n2, nsim)
    x2b = se2b * np.random.randn(n2, nsim)
    mu_hat1, mu_hat2a, mu_hat2b = x1.mean(0), x2a.mean(0), x2b.mean(0)
    se_hat1, se_hat2a, se_hat2b = x1.std(0,ddof=1), x2a.std(0, ddof=1), x2b.std(0, ddof=1)
    stat_a, pval_a = tdist_2dist(mu_hat1, mu_hat2a, se_hat1, se_hat2a, n1, n2, var_eq=True)
    stat_b, pval_b = tdist_2dist(mu_hat1, mu_hat2b, se_hat1, se_hat2b, n1, n2, var_eq=False)
    reject_a, reject_b = np.mean(pval_a < 0.05), np.mean(pval_b < 0.05)
    holder.append(pd.DataFrame({'mu': mu,'method':['eq','neq'], 'power': [reject_a, reject_b]}))

res_theory = pd.concat(holder).merge(dat_theory).sort_values(['method','mu']).reset_index(None, True)
res_theory = res_theory.assign(nreject=lambda x: (x.power*nsim).astype(int))
res_theory = pd.concat([res_theory.drop(columns='nreject'),
                        pd.concat(prop_CI(count=res_theory.nreject,nobs=nsim,method='beta'),1)],1)
res_theory.rename(columns={0:'lb',1:'ub','method':'Variance'}, inplace=True)
res_theory = res_theory.assign(Variance=lambda x: x.Variance.map(di_method))

plotnine.options.figure_size = (8, 3.5)
gg_power_theory = (ggplot(res_theory,aes(x='theory',y='power')) + theme_bw() +
                   geom_point() + geom_linerange(aes(ymin='lb',ymax='ub')) +
                   facet_wrap('~Variance', labeller=label_both) +
                   theme(legend_position=(0.5, -0.1), legend_direction='horizontal') +
                   labs(x='Expected power',y='Actual power') +
                   scale_y_continuous(limits=[0,1]) + scale_x_continuous(limits=[0,1]) +
                   geom_abline(slope=1,intercept=0,color='blue',linetype='--'))
gg_power_theory
_rmd/extra_unequalvar/unequalvar.ipynb
erikdrysdale/erikdrysdale.github.io
mit
Figure 2 shows that the power calculations line up exactly with the analytical expectations for both equal and unequal variances. Having thoroughly validated the type-I and type-II errors of this function, we can now move on to testing whether the means from multiple normal distributions are equal.

(3) F-test for equality of means

Suppose there are $K$ normal data vectors: $x_1=(x_{1,1},\dots,x_{1,n_1})$ to $x_K=(x_{K,1},\dots,x_{K,n_K})$, and we want to test the null hypothesis of $\mu_1 = \mu_2 = \dots = \mu_K$ against an alternative hypothesis that there is at least 1 inequality in the means, where $x_{k,i} \sim N(\mu_k,\sigma^2_k)$. As before, the variances of each vector may or may not be equal.

When the variances are equal, the sum of squared differences between the total mean and any one group mean will be chi-square. Similarly, the sum of the sample variances will also have a chi-square distribution. Hence, the F-test for equality of means is the ratio of the variation "between" versus "within" the groups,

$$ \begin{align} R &= \frac{\frac{1}{K-1}\sum_{k=1}^K n_k (\bar x_k - \bar x)^2 }{\frac{1}{N-K}\sum_{k=1}^K (n_k - 1)\hat\sigma^2_k}, \\ R &\sim F(K-1, N-K) \hspace{3mm} \text{ if } \sigma^2_k = \sigma^2 \hspace{3mm} \forall k \in \{1,\dots,K\} \end{align} $$

where $N = \sum_k n_k$. To account for heteroskedasticity in the data (i.e. non-equal variances), both the test and degrees of freedom need to be modified using an approach Welch proposed in 1951.

$$ \begin{align} R_W &= \frac{\frac{1}{K-1}\sum_{k=1}^K w_k (\bar x_k - \bar x_w)^2 }{1 + \frac{2}{3}((K-2)\nu)}, \\ w_k &= n_k / \hat\sigma^2_k \\ \bar x_w &= \frac{\sum_{k=1}^K w_k \bar x_k}{\sum_{k=1}^K w_k} \\ \nu &= \frac{3\cdot \sum_{k=1}^K \frac{1}{n_k - 1} \Big( 1 - \frac{w_k}{\sum_{k=1}^K w_k} \Big)^2 }{K^2-1} \\ R_W &\sim F(K-1, 1/\nu) \hspace{3mm} \text{ if } \sigma^2_k \neq \sigma^2_{-k} \hspace{3mm} \text{for at least one }k \end{align} $$

The fdist_anova function below carries out an F-test for the equality of means using only the empirical means, standard deviations, and sample sizes, for either variance assumption. In R this would be equivalent to using aov for equal variances or oneway.test for unequal variances. In python, it will replicate the scipy.stats.f_oneway function (for equal variances). I am unaware of a python function that does a Welch adjustment (if you know of one, please message me and I will provide an update with this information). As before, because the function only relies on the moments of the data, it can be fully vectorized to handle matrices of means, variances, and sample sizes.

The simulation below assesses how well the two F-test approaches (homoskedasticity vs heteroskedasticity) do when the ground-truth variances are either all equal or vary. To vary the signal in the data, I generate the $K$ different means from $(-\mu,\dots,0,\dots,\mu)$, where $\mu$ is referred to as "mean dispersion" in the subsequent figures.
def fdist_anova(mus, ses, ns, var_eq=False):
    lshape = len(mus.shape)
    assert lshape <= 2
    assert mus.shape == ses.shape
    if len(ns.shape) == 1:
        ns = cvec(ns.copy())
    else:
        assert ns.shape == mus.shape
    if lshape == 1:
        mus = cvec(mus.copy())
        ses = cvec(ses.copy())
    vars = ses ** 2  # variance
    n, k = ns.sum(0), len(ns)  # Total samples and groups
    df1, df2 = (k - 1), (n - k)
    if var_eq:  # classical anova
        xbar = np.atleast_2d(np.sum(mus * ns, 0) / n)
        vb = np.sum(ns*(xbar - mus)**2,0) / df1  # numerator is variance between
        vw = np.sum((vars * (ns - 1)), 0) / df2  # den is variance within
        fstat = vb / vw
        pval = stats.f(dfn=df1,dfd=df2).sf(fstat)
    else:
        w = ns / vars
        xbar = np.sum(w * mus, 0) / np.sum(w,0)
        num = np.sum(w * (xbar - mus) ** 2,0) / df1
        v = 3*np.sum((1-w/w.sum(0))**2 / (ns-1),0) / (k**2 - 1)
        den = 1 + 2*((k-2)*v)/3
        fstat = num / den
        pval = stats.f(dfn=df1, dfd=1/v).sf(fstat)
    return fstat, pval

nlow, niter = 25, 5
k_seq = [5, 7, 9]
disp_seq = np.round(np.arange(0, 0.51, 0.1),2)
dgp_seq = np.repeat(['eq', 'neq'], 2)
method_seq = np.tile(['eq', 'neq'], 2)

holder = []
np.random.seed(1)
for k in k_seq:
    n_seq = np.arange(nlow, nlow+k * niter, niter)
    n_seq = np.tile(n_seq, [nsim, 1]).T
    nhigh = np.max(n_seq)
    dim_3d = [1, 1, k]
    for disp in disp_seq:
        mu_k = np.linspace(-disp, disp, num=k)
        se_k1 = np.repeat(1,k).reshape(dim_3d)
        se_k2 = np.exp(np.random.randn(k)).reshape(dim_3d)
        X1 = mu_k + se_k1 * np.random.randn(nhigh,nsim,k)
        X2 = mu_k + se_k2 * np.random.randn(nhigh, nsim, k)
        idx = np.tile(np.arange(nhigh),[k,nsim,1]).T <= np.atleast_3d(n_seq).T
        X1, X2 = np.where(idx, X1, np.nan), np.where(idx, X2, np.nan)
        # Calculate means and variance : (k x nsim)
        mu_X1, mu_X2 = np.nanmean(X1, 0).T, np.nanmean(X2, 0).T
        se_X1, se_X2 = np.nanstd(X1, 0, ddof=1).T, np.nanstd(X2, 0, ddof=1).T
        assert n_seq.shape == mu_X1.shape == se_X1.shape
        # Calculate significance
        fstat_eq1, pval_eq1 = fdist_anova(mus=mu_X1, ses=se_X1, ns=n_seq, var_eq=True)
        fstat_neq1, pval_neq1 = fdist_anova(mus=mu_X1, ses=se_X1, ns=n_seq, var_eq=False)
        fstat_eq2, pval_eq2 = fdist_anova(mus=mu_X2, ses=se_X2, ns=n_seq, var_eq=True)
        fstat_neq2, pval_neq2 = fdist_anova(mus=mu_X2, ses=se_X2, ns=n_seq, var_eq=False)
        reject_eq1, reject_neq1 = np.mean(pval_eq1 < alpha), np.mean(pval_neq1 < alpha)
        reject_eq2, reject_neq2 = np.mean(pval_eq2 < alpha), np.mean(pval_neq2 < alpha)
        reject_seq = [reject_eq1, reject_neq1, reject_eq2, reject_neq2]
        tmp = pd.DataFrame({'k':k,'disp':disp,'dgp':dgp_seq,'method':method_seq,'reject':reject_seq})
        # print(tmp)
        holder.append(tmp)

res_f = pd.concat(holder).reset_index(None,True)
res_f[['dgp','method']] = res_f[['dgp','method']].apply(lambda x: x.map(di_method),0)
res_f.rename(columns={'dgp':'Variance'}, inplace=True)

plotnine.options.figure_size = (8, 6)
gg_fdist = (ggplot(res_f, aes(x='disp',y='reject',color='method.astype(str)')) +
            theme_bw() + geom_line() + geom_point() +
            facet_grid('k~Variance',labeller=label_both) +
            labs(x='Mean dispersion',y='Prob. of rejecting null') +
            geom_hline(yintercept=0.05,linetype='--') +
            scale_y_continuous(limits=[0,1]) +
            scale_color_discrete(name='Variance assumption'))
gg_fdist
_rmd/extra_unequalvar/unequalvar.ipynb
erikdrysdale/erikdrysdale.github.io
mit
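For a quick sense of the interface, here is a minimal usage sketch of the fdist_anova function defined above; the group means, standard deviations, and sample sizes below are purely hypothetical numbers chosen for illustration.

# Minimal usage sketch of fdist_anova (hypothetical summary statistics).
import numpy as np

mus = np.array([0.0, 0.3, 0.5])   # group means
ses = np.array([1.0, 1.5, 2.0])   # group standard deviations
ns = np.array([30, 40, 50])       # group sample sizes

fstat_eq, pval_eq = fdist_anova(mus, ses, ns, var_eq=True)   # classical ANOVA
fstat_w, pval_w = fdist_anova(mus, ses, ns, var_eq=False)    # Welch adjustment
print(pval_eq, pval_w)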
The simulations in Figure 3 show a similar finding to that of the t-test: when the ground-truth variances are equal, there are almost no differences between the tests, and the expected 5% false positive rate occurs when the means are equal. However, in the unequal-variance situation, the assumption of homoskedasticity leads to an inflated type-I error rate (as was the case for the t-test), but also lower power when the null is false (which was not the case for the t-test). Using the Welch adjustment is better in both cases. The one surprising finding is that the power of the test is not monotonically increasing in the heteroskedastic case. I am not completely sure why this happens. One theory is that since a higher mean dispersion leads to a higher variance of $\bar{x}_w$, the ratio of the degrees of freedom may be more stable for lower values of $\mu$, leading to a more consistent rejection rate. (4) Quick sanity checks After confirming the frequentist properties of a test statistic, it is worthwhile comparing the results of any custom function against similar functions from other libraries. The tdist_2dist function will be compared to its scipy counterpart on the Iris dataset.
from sklearn import datasets ix, iy = datasets.load_iris(return_X_y=True) v1, v2 = ix[:,0], ix[:,1] k = 1 all_stats = [stats.ttest_ind(v1, v2, equal_var=True)[k], tdist_2dist(v1.mean(), v2.mean(), v1.std(ddof=1), v2.std(ddof=1), len(v1), len(v2), var_eq=True)[k], stats.ttest_ind(v1, v2, equal_var=False)[k], tdist_2dist(v1.mean(), v2.mean(), v1.std(ddof=1), v2.std(ddof=1), len(v1), len(v2), var_eq=False)[k]] pd.DataFrame({'test':'t-test', 'method':np.tile(['scipy','custom'],2), 'pval':all_stats})
_rmd/extra_unequalvar/unequalvar.ipynb
erikdrysdale/erikdrysdale.github.io
mit
So far so good. Next, we'll use rpy2 to get the results from R, which supports equal and unequal variances through two different functions (aov and oneway.test, respectively).
import rpy2.robjects as robjects moments_x = pd.DataFrame({'x':ix[:,0],'y':iy}).groupby('y').x.describe()[['mean','std','count']] all_stats = [np.array(robjects.r('summary(aov(Sepal.Length~Species,iris))[[1]][1, 5]'))[0], fdist_anova(moments_x['mean'], moments_x['std'], moments_x['count'], var_eq=True)[1][0], np.array(robjects.r('oneway.test(Sepal.Length~Species,iris)$p.value'))[0], fdist_anova(moments_x['mean'], moments_x['std'], moments_x['count'], var_eq=False)[1][0]] pd.DataFrame({'test':'F-test', 'method':np.tile(['R','custom'],2), 'pval':all_stats})
_rmd/extra_unequalvar/unequalvar.ipynb
erikdrysdale/erikdrysdale.github.io
mit
Once again the results are identical to the benchmark functions. (5) Application to AUROC inference The empirical AUROC has an asymptotically normal distribution. Consequently, the difference between two AUROCs will also have an asymptotically normal distribution. For small sample sizes, the Hanley and McNeil adjustment to the AUROC standard error obtains slightly better coverage. For a review of the notation and meaning of the AUROC, see a previous post here.
$$
\begin{align}
AUC &= \frac{1}{n_1 n_0} \sum_{i: y_i = 1} \sum_{j: y_j=0} I(s_i > s_j) \
\sigma_{N} &= \sqrt{\frac{n_1 + n_0 + 1}{12\cdot n_1 n_0}} \
\sigma_{HM} &= \sqrt{\frac{AUC\cdot (1-AUC) + q_1 + q_0}{n_1 n_0}} \
q_1 &= (n_1 - 1)\cdot ( AUC / (2-AUC) - AUC^2) \
q_0 &= (n_0 - 1)\cdot ( 2\cdot AUC^2 / (1+AUC) - AUC^2)
\end{align}
$$
The standard error from the normal approximation ($\sigma_N$) is only a function of the positive ($n_1$) and negative ($n_0$) class sample sizes, whereas the Hanley and McNeil adjustment ($\sigma_{HM}$) uses the empirical AUROC as well. The previous t- and F-tests relied on the fact that the sample mean has a variance that is $O(1/n)$ so that $\bar x \sim N(\mu, \sigma^2/n)$. As can be seen from either formula, the sample variance for the AUROC cannot be as neatly written as a function of the sample size. We can still appeal to the t-test, the only difference being that the sample size is built into the variance estimate:
$$
\begin{align}
\frac{AUC_A - AUC_B}{\sqrt{\sigma^2_{HM_A} + \sigma^2_{HM_B}}} &\sim N(0,1) \hspace{3mm} \text{ if $H_0$ is true}
\end{align}
$$
In the simulation below, scores will come from one of two distributions. The negative class will have 200 samples ($n_0$) drawn from a standard normal. The positive class will have 100 samples ($n_1$) drawn from either a standard normal (for the null distribution) or a normal with a mean at or above zero. The difference in AUROCs between these two distributions will be evaluated. Since the null distribution will have an (average) AUROC of 50%, the difference between these distributions will be above zero when the mean of the alternative is greater than zero.
n1, n0 = 100, 200 n = n1 + n0 n1n0 = n1 * n0 mu_seq = np.round(np.arange(0, 1.01, 0.1),2) def se_auroc_hanley(auroc, n1, n0): q1 = (n1 - 1) * ((auroc / (2 - auroc)) - auroc ** 2) q0 = (n0 - 1) * ((2 * auroc ** 2) / (1 + auroc) - auroc ** 2) se_auroc = np.sqrt((auroc * (1 - auroc) + q1 + q0) / (n1 * n0)) return se_auroc def se_auroc_normal(n1, n0): return np.sqrt( (n1 + n0 + 1) / (12 * n1 * n0) ) np.random.seed(1) holder = [] for mu in mu_seq: x1_null, x0 = np.random.randn(n1, nsim), np.random.randn(n0, nsim) x1 = mu + np.random.randn(n1, nsim) x, x_null = np.concatenate((x1, x0)), np.concatenate((x1_null, x0)) auc = (np.sum(stats.rankdata(x, axis=0)[:n1],0) - n1*(n1+1)/2) / n1n0 auc_null = (np.sum(stats.rankdata(x_null, axis=0)[:n1], 0) - n1 * (n1 + 1) / 2) / n1n0 se_HM, se_null_HM = se_auroc_hanley(auc, n1, n0), se_auroc_hanley(auc_null, n1, n0) se_N = se_auroc_normal(n1, n0) # Do pairwise t-test dauc = auc - auc_null t_score_HM = dauc / np.sqrt(se_HM**2 + se_null_HM**2) t_score_N = dauc / np.sqrt(2 * se_N**2) dist_null = stats.t(df=2*n - 2) pval_HM = 2 * np.minimum(dist_null.sf(t_score_HM), dist_null.cdf(t_score_HM)) pval_N = 2 * np.minimum(dist_null.sf(t_score_N), dist_null.cdf(t_score_N)) reject_HM, reject_N = np.mean(pval_HM < alpha), np.mean(pval_N < alpha) tmp = pd.DataFrame({'method':['HM','N'],'mu':mu, 'reject':[reject_HM, reject_N]}) holder.append(tmp) # Merge and analyse res_auc = pd.concat(holder).reset_index(None, True) res_auc = res_auc.assign(auc=lambda x: stats.norm.cdf(x.mu/np.sqrt(2)), nreject=lambda x: (x.reject*nsim).astype(int)) res_auc = pd.concat([res_auc.drop(columns='nreject'), pd.concat(prop_CI(count=res_auc.nreject,nobs=nsim,method='beta'),1)],1) res_auc.rename(columns={0:'lb',1:'ub'},inplace=True) # plot plotnine.options.figure_size = (5, 4) gg_auc = (ggplot(res_auc,aes(x='auc',y='reject',color='method')) + theme_bw() + geom_line() + labs(x='Alternative hypothesis AUROC',y='Prob. of rejecting null') + geom_hline(yintercept=0.05,linetype='--') + geom_linerange(aes(ymin='lb',ymax='ub')) + scale_color_discrete(name='Method',labels=['Hanley-McNeil','Normal'])) gg_auc
_rmd/extra_unequalvar/unequalvar.ipynb
erikdrysdale/erikdrysdale.github.io
mit
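As an aside, the two standard-error formulas can also be used on their own, for example to put an approximate 95% confidence interval around a single AUROC estimate. The sketch below reuses se_auroc_hanley and se_auroc_normal from the cell above with a hypothetical AUROC value and the same class sizes.

# Sketch: approximate 95% CI for a single (hypothetical) AUROC estimate
# using the two standard errors defined above.
import numpy as np
from scipy import stats

auc_hat = 0.75                      # hypothetical point estimate
se_hm = se_auroc_hanley(auc_hat, n1, n0)
se_n = se_auroc_normal(n1, n0)
z = stats.norm.ppf(0.975)
print(auc_hat + np.array([-1, 1]) * z * se_hm)   # Hanley-McNeil CI
print(auc_hat + np.array([-1, 1]) * z * se_n)    # normal-approximation CI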
Example 2
# Example 2
data = ['ACME', 50, 91.1, (2012, 12, 21)]
name, shares, price, date = data
print(name)
print(date)

name, shares, price, (year, mon, day) = data
print(name)
print(year)
print(mon)
print(day)
notebooks/ch01/01_unpacking_a_sequence_into_variables.ipynb
tuanavu/python-cookbook-3rd
mit
Example 3 If there is a mismatch in the number of elements, you'll get an error.
# Example 3 # error with mismatch in number of elements p = (4, 5) x, y, z = p
notebooks/ch01/01_unpacking_a_sequence_into_variables.ipynb
tuanavu/python-cookbook-3rd
mit
Example 4 Unpacking actually works with any object that happens to be iterable, not just tuples or lists. This includes strings, files, iterators, and generators.
# Example 4: string
s = 'Hello'
a, b, c, d, e = s
print(a)
print(b)
print(e)
notebooks/ch01/01_unpacking_a_sequence_into_variables.ipynb
tuanavu/python-cookbook-3rd
mit
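Since the text above notes that unpacking also works with iterators and generators, here is a small illustrative addition (not part of the original recipe) showing the same pattern with a generator.

# Illustrative addition: unpacking a generator (hypothetical helper) works
# exactly the same way as unpacking a string or list.
def three_values():
    yield 1
    yield 2
    yield 3

a, b, c = three_values()
print(a)
print(b)
print(c)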
Example 5 Discard certain values
# Example 5
# discard certain values
data = ['ACME', 50, 91.1, (2012, 12, 21)]
_, shares, price, _ = data
print(shares)
print(price)
notebooks/ch01/01_unpacking_a_sequence_into_variables.ipynb
tuanavu/python-cookbook-3rd
mit
From Wikipedia: In the mathematics of shuffling playing cards, the Gilbert–Shannon–Reeds model is a probability distribution on riffle shuffle permutations that has been reported to be a good match for experimentally observed outcomes of human shuffling, and that forms the basis for a recommendation that a deck of cards should be riffled seven times in order to thoroughly randomize it. ... The deck of cards is cut into two packets... [t]hen, one card at a time is repeatedly moved from the bottom of one of the packets to the top of the shuffled deck. Here we implement the Gilbert–Shannon–Reeds model, and verify this recommendation of seven shuffles. Note that the functions below have doctest examples. To test the functions, just run pytest in the top level of the repository. First, define a function to determine how many cards to split into our right hand.
def get_random_number_for_right_deck(n: int, seed: int=None, ) -> int: """ Return the number of cards to split into the right sub-deck. :param n: one above the highest number that could be returned by this function. :param seed: optional seed for the random number generator to enable deterministic behavior. :return: a random integer (between 1 and n-1) that represents the desired number of cards. Examples: >>> get_random_number_for_right_deck(n=5, seed=0, ) 1 """ random = sklearn.utils.check_random_state(seed=seed, ) return random.randint(low=1, high=n, )
Gilbert-Shannon-Reeds.ipynb
proinsias/gilbert-shannon-reeds
mit
Next, define a function to determine which hand to drop a card from.
def should_drop_from_right_deck(n_left: int, n_right:int, seed: int=None, ) -> bool: """ Determine whether we drop a card from the right or left sub-deck. Either `n_left` or `n_right` (or both) must be greater than zero. :param n_left: the number of cards in the left sub-deck. :param n_right: the number of cards in the right sub-deck. :param seed: optional seed for the random number generator to enable deterministic behavior. :return: True if we should drop a card from the right sub-deck, False otherwise. Examples: >>> should_drop_from_right_deck(n_left=32, n_right=5, seed=0, ) True >>> should_drop_from_right_deck(n_left=0, n_right=5, ) True >>> should_drop_from_right_deck(n_left=7, n_right=0, ) False >>> should_drop_from_right_deck(n_left=0, n_right=0, ) Traceback (most recent call last): ... ValueError: Either `n_left` or `n_right` (or both) must be greater than zero. """ if n_left > 0 and n_right > 0: # There are cards left in both sub-decks, so pick a # sub-deck at random. random = sklearn.utils.check_random_state(seed=seed, ) num = random.randint(low=0, high=2, ) boolean = (num == 0) return boolean elif n_left == 0 and n_right > 0: # There are no more cards in the left sub-deck, only # the right sub-deck, so we drop from the right sub-deck. return True elif n_left > 0 and n_right == 0: # There are no more cards in the right sub-deck, only # the left sub-deck, so we drop from the left sub-deck. return False else: # There are no more cards in either sub-deck. raise ValueError ('Either `n_left` or `n_right` '\ '(or both) must be greater than zero.')
Gilbert-Shannon-Reeds.ipynb
proinsias/gilbert-shannon-reeds
mit
Now we can implement the 'Gilbert–Shannon–Reeds' shuffle.
def shuffle(deck: np.array, seed: int=None, ) -> np.array: """ Shuffle the input 'deck' using the Gilbert–Shannon–Reeds method. :param seq: the input sequence of integers. :param seed: optional seed for the random number generator to enable deterministic behavior. :return: A new deck containing shuffled integers from the input deck. Examples: >>> shuffle(deck=np.array([0, 7, 3, 8, 4, 9, ]), seed=0, ) array([4, 8, 3, 7, 0, 9]) """ # First randomly divide the 'deck' into 'left' and 'right' # 'sub-decks'. num_cards_in_deck = len(deck) orig_num_cards_right_deck = get_random_number_for_right_deck( n=num_cards_in_deck, seed=seed, ) # By definition of get_random_number_for_right_deck(): n_right = orig_num_cards_right_deck n_left = num_cards_in_deck - orig_num_cards_right_deck shuffled_deck = np.empty(num_cards_in_deck, dtype=int) # We will drop a card n times. for index in range(num_cards_in_deck): drop_from_right_deck = should_drop_from_right_deck( n_left=n_left, n_right=n_right, seed=seed, ) if drop_from_right_deck is True: # Drop from the bottom of right sub-deck # onto the shuffled pile. shuffled_deck[index] = deck[n_right - 1] n_right = n_right - 1 else: # Drop from the bottom of left sub-deck # onto the shuffled pile. shuffled_deck[index] = deck[ orig_num_cards_right_deck + n_left - 1 ] n_left = n_left - 1 return shuffled_deck
Gilbert-Shannon-Reeds.ipynb
proinsias/gilbert-shannon-reeds
mit
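Before running the full experiment, here is a quick usage sketch (not part of the original notebook) showing a single riffle of an ordered 52-card deck with the shuffle function defined above.

# Quick usage sketch: a single Gilbert-Shannon-Reeds riffle of an ordered deck.
import numpy as np

deck = np.arange(52)
once_shuffled = shuffle(deck)
print(once_shuffled[:10])   # first ten cards after one riffle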
Finally, we run some experiments to confirm the recommendation of seven shuffles for a deck of 52 cards.
num_cards = 52 max_num_shuffles = 20 num_decks = 10000 # Shuffling the cards using a uniform probability # distribution results in the same expected frequency # for each card in each deck position. uniform_rel_freqs = np.full( shape=[num_cards, num_cards], fill_value=1./num_cards, ) def calculate_differences( num_shuffles: int ) -> typing.Tuple[np.float64, np.float64, np.float64,]: """ Calculate differences between observed and uniform distributions. :param The number of times to shuffle the deck each time. :return Three metrics for differences between the observed and uniform relative frequencies. """ shuffled_decks = np.empty(shape=[num_decks, num_cards], ) # First create a random deck. orig_deck = np.array(range(num_cards)) np.random.shuffle(orig_deck) for i in range(num_decks): # Now shuffle this deck using the Gilbert–Shannon–Reeds method. new_deck = orig_deck for j in range(num_shuffles): new_deck = shuffle(new_deck) shuffled_decks[i] = new_deck # Calculate the relative frequencies of each card in each position. rel_freqs = np.empty(shape=[num_cards, num_cards], ) for i in range(num_cards): col = shuffled_decks[:, i] # Make sure that each card appears at least once in this # position, by first adding the entire deck, and then # subtracting 1 from the total counts of each card in # this position. col = np.append(col, orig_deck) col_freqs = sp.stats.itemfreq(col)[:, 1] col_freqs = col_freqs - 1 rel_freqs[i] = col_freqs / num_decks # Here I use three metrics for differences between the # observed and uniform relative frequencies: # * The sum of the squared element-wise differences, # * The relative information entropy, and # * The Kolmogorov-Smirnov statistic. sum_squared = np.sum(np.square(np.subtract(uniform_rel_freqs, rel_freqs))) entropy = sp.stats.entropy(rel_freqs.flatten(), uniform_rel_freqs.flatten()) kstest = sp.stats.kstest(rel_freqs.flatten(), 'uniform').statistic return sum_squared, entropy, kstest # Now run the experiment using all our CPUs! num_cpus = max(mp.cpu_count() - 2, 1) with mp.Pool(num_cpus) as p: results = p.map(calculate_differences, range(1, max_num_shuffles+1)) results = np.array(results) sums_squared = results[:, 0] entropies = results[:, 1] kstests = results[:, 2]
Gilbert-Shannon-Reeds.ipynb
proinsias/gilbert-shannon-reeds
mit
The KS statistics are of most use here. You can see how the statistic approaches its maximum value around num_shuffles = 7.
fs = 14 fig, ax = plt.subplots(figsize=(8, 6), dpi=300) ax.scatter(range(1, max_num_shuffles + 1), kstests, ); ax.xaxis.set_major_locator(matplotlib.ticker.MaxNLocator(integer=True)) ax.set_xlabel('Number of Shuffles', fontsize=fs, ) ax.set_ylabel('Kolmogorov-Smirnov Statistic', fontsize=fs, ) ax.set_xlim([0, max_num_shuffles + 1]) plt.show(); fig, ax = plt.subplots(figsize=(8, 6), dpi=300) ax.scatter(range(1, max_num_shuffles + 1), sums_squared, ); ax.xaxis.set_major_locator(matplotlib.ticker.MaxNLocator(integer=True)) ax.set_xlabel('Number of Shuffles', fontsize=fs, ) ax.set_ylabel('Sum of the Squared Differences', fontsize=fs, ) ax.set_xlim([0, max_num_shuffles + 1]) plt.show(); fig, ax = plt.subplots(figsize=(8, 6), dpi=300) ax.scatter(range(1, max_num_shuffles + 1), entropies, ); ax.xaxis.set_major_locator(matplotlib.ticker.MaxNLocator(integer=True)) ax.set_xlabel('Number of Shuffles', fontsize=fs, ) ax.set_ylabel('Relative Information Entropy', fontsize=fs, ) ax.set_xlim([0, max_num_shuffles + 1]) plt.show();
Gilbert-Shannon-Reeds.ipynb
proinsias/gilbert-shannon-reeds
mit
As always, let's do imports and initialize a logger and a new bundle.
import phoebe
import numpy as np

# initialize a logger (as mentioned above) and a new default binary bundle
logger = phoebe.logger()

b = phoebe.default_binary()
2.3/examples/eccentric_ellipsoidal.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Now we need a highly eccentric system that nearly overflows at periastron and is slightly eclipsing.
b.set_value('q', value=0.7) b.set_value('period', component='binary', value=10) b.set_value('sma', component='binary', value=25) b.set_value('incl', component='binary', value=0) b.set_value('ecc', component='binary', value=0.9) print(b.filter(qualifier='requiv*', context='component')) b.set_value('requiv', component='primary', value=1.1) b.set_value('requiv', component='secondary', value=0.9)
2.3/examples/eccentric_ellipsoidal.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Adding Datasets We'll add light curve, orbit, and mesh datasets.
b.add_dataset('lc', compute_times=phoebe.linspace(-2, 2, 201), dataset='lc01') b.add_dataset('orb', compute_times=phoebe.linspace(-2, 2, 201)) anim_times = phoebe.linspace(-2, 2, 101) b.add_dataset('mesh', compute_times=anim_times, coordinates='uvw', dataset='mesh01')
2.3/examples/eccentric_ellipsoidal.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Running Compute
b.run_compute(irrad_method='none')
2.3/examples/eccentric_ellipsoidal.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Plotting
afig, mplfig = b.plot(kind='lc', x='phases', t0='t0_perpass', show=True)
2.3/examples/eccentric_ellipsoidal.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Now let's make a nice figure. Let's go through these options: * time: make the plot at this single time * z: by default, orbits plot in 2d, but since we're overplotting with a mesh, we want the z-ordering to be correct, so we'll have them plot with w-coordinates in the z-direction. * c: (will be ignored by the mesh): set the color to blue for the primary and red for the secondary (will only affect the orbits as the light curve is not tagged with any component). * fc: (will be ignored by everything but the mesh): set the facecolor to be blue for the primary and red for the secondary. * ec: disable drawing the edges of the triangles in a separate color. We could also set this to 'none', but then we'd be able to "see-through" the triangle edges. * uncover: for the orbit, uncover based on the current time. * trail: for the orbit, let's show a "trail" behind the current position. * highlight: disable highlighting for the orbit, since the mesh will be in the same position. * tight_layout: use matplotlib's tight layout to ensure we have enough padding between axes to see the labels.
afig, mplfig = b.plot(time=0.0, z={'orb': 'ws'}, c={'primary': 'blue', 'secondary': 'red'}, fc={'primary': 'blue', 'secondary': 'red'}, ec='face', uncover={'orb': True}, trail={'orb': 0.1}, highlight={'orb': False}, tight_layout=True, show=True)
2.3/examples/eccentric_ellipsoidal.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Now let's animate the same figure in time. We'll use the same arguments as the static plot above, with the following exceptions:
* times: pass our array of times that we want the animation to loop over.
* pad_aspect: pad_aspect doesn't work with animations, so we'll disable it to avoid the warning messages.
* animate: self-explanatory.
* save: we could use show=True, but that doesn't always play nicely with jupyter notebooks.
* save_kwargs: you may need to change these for your setup; to create a gif, passing {'writer': 'imagemagick'} is often useful.
afig, mplfig = b.plot(times=anim_times, z={'orb': 'ws'}, c={'primary': 'blue', 'secondary': 'red'}, fc={'primary': 'blue', 'secondary': 'red'}, ec='face', uncover={'orb': True}, trail={'orb': 0.1}, highlight={'orb': False}, tight_layout=True, pad_aspect=False, animate=True, save='eccentric_ellipsoidal.gif', save_kwargs={'writer': 'imagemagick'})
2.3/examples/eccentric_ellipsoidal.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
The following are all Theano-defined symbolic types:
# assumes `import theano.tensor as T` (the standard alias) from an earlier cell;
# repeated here so the snippet is self-contained
import theano.tensor as T

A = T.matrix('A')
b = T.scalar('b')
v = T.vector('v')

# print the symbolic type of each variable
print(A.type)
print(b.type)
print(v.type)
TheanoLearning/TheanoLearning/theano_demo.ipynb
shengshuyang/PCLCombinedObjectDetection
gpl-2.0
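To see these symbolic types in action, here is a minimal sketch (assuming Theano is installed) that compiles and evaluates a small expression combining the three variables; allow_input_downcast=True lets plain NumPy inputs be accepted regardless of the configured float precision.

import numpy as np
import theano

# Compile a function computing a matrix-vector product plus a scalar offset.
f = theano.function([A, v, b], T.dot(A, v) + b, allow_input_downcast=True)

# Evaluate on concrete NumPy inputs.
print(f(np.eye(2), np.ones(2), 0.5))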