
[Open in Colab](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Healthcare/10.Clinical_Relation_Extraction.ipynb)
# Clinical Relation Extraction Model
## Colab Setup
```
import json
with open('workshop_license_keys_365.json') as f:
    license_keys = json.load(f)
license_keys.keys()
import os
# Install java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"]
! java -version
secret = license_keys['SECRET']
os.environ['SPARK_NLP_LICENSE'] = license_keys['SPARK_NLP_LICENSE']
os.environ['AWS_ACCESS_KEY_ID']= license_keys['AWS_ACCESS_KEY_ID']
os.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY']
version = license_keys['PUBLIC_VERSION']
jsl_version = license_keys['JSL_VERSION']
! pip install --ignore-installed -q pyspark==2.4.4
! python -m pip install --upgrade spark-nlp-jsl==$jsl_version --extra-index-url https://pypi.johnsnowlabs.com/$secret
! pip install --ignore-installed -q spark-nlp==$version
import sparknlp
print (sparknlp.version())
import json
import os
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
spark = sparknlp_jsl.start(secret)
```
## Posology Relation Extraction
This is a demonstration of using Spark NLP for extracting posology relations. The following relations are supported:

- DRUG-DOSAGE
- DRUG-FREQUENCY
- DRUG-ADE (Adverse Drug Events)
- DRUG-FORM
- DRUG-ROUTE
- DRUG-DURATION
- DRUG-REASON
- DRUG-STRENGTH

The model has been validated against the posology dataset described in (Magge, Scotch, & Gonzalez-Hernandez, 2018).
| Relation | Recall | Precision | F1 | F1 (Magge, Scotch, & Gonzalez-Hernandez, 2018) |
| --- | --- | --- | --- | --- |
| DRUG-ADE | 0.66 | 1.00 | **0.80** | 0.76 |
| DRUG-DOSAGE | 0.89 | 1.00 | **0.94** | 0.91 |
| DRUG-DURATION | 0.75 | 1.00 | **0.85** | 0.92 |
| DRUG-FORM | 0.88 | 1.00 | **0.94** | 0.95* |
| DRUG-FREQUENCY | 0.79 | 1.00 | **0.88** | 0.90 |
| DRUG-REASON | 0.60 | 1.00 | **0.75** | 0.70 |
| DRUG-ROUTE | 0.79 | 1.00 | **0.88** | 0.95* |
| DRUG-STRENGTH | 0.95 | 1.00 | **0.98** | 0.97 |
*Magge, Scotch, Gonzalez-Hernandez (2018) collapsed DRUG-FORM and DRUG-ROUTE into a single relation.
```
import os
import re
import pyspark
import sparknlp
import sparknlp_jsl
import functools
import json
import numpy as np
from scipy import spatial
import pyspark.sql.functions as F
import pyspark.sql.types as T
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from sparknlp_jsl.annotator import *
from sparknlp.annotator import *
from sparknlp.base import *
```
**Build a pipeline using Spark NLP pretrained models and the relation extraction model optimized for posology.**
The precision of the RE model is controlled by `setMaxSyntacticDistance(4)`, which sets the maximum syntactic distance between named entities to 4. A larger value improves recall at the expense of precision. A value of 4 yields perfect precision on the benchmark above (the model produces no false positives) and reasonably good recall.
```
documenter = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentencer = SentenceDetector()\
.setInputCols(["document"])\
.setOutputCol("sentences")
tokenizer = sparknlp.annotators.Tokenizer()\
.setInputCols(["sentences"])\
.setOutputCol("tokens")
words_embedder = WordEmbeddingsModel()\
.pretrained("embeddings_clinical", "en", "clinical/models")\
.setInputCols(["sentences", "tokens"])\
.setOutputCol("embeddings")
pos_tagger = PerceptronModel()\
.pretrained("pos_clinical", "en", "clinical/models") \
.setInputCols(["sentences", "tokens"])\
.setOutputCol("pos_tags")
ner_tagger = NerDLModel()\
.pretrained("ner_posology", "en", "clinical/models")\
.setInputCols("sentences", "tokens", "embeddings")\
.setOutputCol("ner_tags")
ner_chunker = NerConverter()\
.setInputCols(["sentences", "tokens", "ner_tags"])\
.setOutputCol("ner_chunks")
dependency_parser = DependencyParserModel()\
.pretrained("dependency_conllu", "en")\
.setInputCols(["sentences", "pos_tags", "tokens"])\
.setOutputCol("dependencies")
reModel = RelationExtractionModel()\
.pretrained("posology_re", "en", "clinical/models")\
.setInputCols(["embeddings", "pos_tags", "ner_chunks", "dependencies"])\
.setOutputCol("relations")\
.setMaxSyntacticDistance(4)
pipeline = Pipeline(stages=[
documenter,
sentencer,
tokenizer,
words_embedder,
pos_tagger,
ner_tagger,
ner_chunker,
dependency_parser,
reModel
])
```
**Create empty dataframe**
```
empty_data = spark.createDataFrame([[""]]).toDF("text")
```
**Create a light pipeline for annotating free text**
```
model = pipeline.fit(empty_data)
lmodel = sparknlp.base.LightPipeline(model)
```
**Sample free text**
```
text = """
The patient was prescribed 1 unit of Advil for 5 days after meals. The patient was also
given 1 unit of Metformin daily.
He was seen by the endocrinology service and she was discharged on 40 units of insulin glargine at night ,
12 units of insulin lispro with meals , and metformin 1000 mg two times a day.
"""
results = lmodel.fullAnnotate(text)
```
**Show extracted relations**
```
for rel in results[0]["relations"]:
    print("{}({}={} - {}={})".format(
        rel.result,
        rel.metadata['entity1'],
        rel.metadata['chunk1'],
        rel.metadata['entity2'],
        rel.metadata['chunk2']
    ))
import pandas as pd
def get_relations_df(results):
    rel_pairs = []
    for rel in results[0]['relations']:
        rel_pairs.append((
            rel.result,
            rel.metadata['entity1'],
            rel.metadata['entity1_begin'],
            rel.metadata['entity1_end'],
            rel.metadata['chunk1'],
            rel.metadata['entity2'],
            rel.metadata['entity2_begin'],
            rel.metadata['entity2_end'],
            rel.metadata['chunk2'],
            rel.metadata['confidence']
        ))
    rel_df = pd.DataFrame(rel_pairs, columns=['relation','entity1','entity1_begin','entity1_end','chunk1','entity2','entity2_begin','entity2_end','chunk2', 'confidence'])
    return rel_df
rel_df = get_relations_df (results)
rel_df
text ="""A 28-year-old female with a history of gestational diabetes mellitus diagnosed eight years prior to presentation and subsequent type two diabetes mellitus ( T2DM ),
one prior episode of HTG-induced pancreatitis three years prior to presentation, associated with an acute hepatitis , and obesity with a body mass index ( BMI ) of 33.5 kg/m2 , presented with a one-week history of polyuria , polydipsia , poor appetite , and vomiting . Two weeks prior to presentation , she was treated with a five-day course of amoxicillin for a respiratory tract infection . She was on metformin , glipizide , and dapagliflozin for T2DM and atorvastatin and gemfibrozil for HTG . She had been on dapagliflozin for six months at the time of presentation. Physical examination on presentation was significant for dry oral mucosa ; significantly , her abdominal examination was benign with no tenderness , guarding , or rigidity . Pertinent laboratory findings on admission were : serum glucose 111 mg/dl , bicarbonate 18 mmol/l , anion gap 20 , creatinine 0.4 mg/dL , triglycerides 508 mg/dL , total cholesterol 122 mg/dL , glycated hemoglobin ( HbA1c ) 10% , and venous pH 7.27 . Serum lipase was normal at 43 U/L . Serum acetone levels could not be assessed as blood samples kept hemolyzing due to significant lipemia . The patient was initially admitted for starvation ketosis , as she reported poor oral intake for three days prior to admission . However , serum chemistry obtained six hours after presentation revealed her glucose was 186 mg/dL , the anion gap was still elevated at 21 , serum bicarbonate was 16 mmol/L , triglyceride level peaked at 2050 mg/dL , and lipase was 52 U/L . The β-hydroxybutyrate level was obtained and found to be elevated at 5.29 mmol/L - the original sample was centrifuged and the chylomicron layer removed prior to analysis due to interference from turbidity caused by lipemia again . The patient was treated with an insulin drip for euDKA and HTG with a reduction in the anion gap to 13 and triglycerides to 1400 mg/dL , within 24 hours . Her euDKA was thought to be precipitated by her respiratory tract infection in the setting of SGLT2 inhibitor use . The patient was seen by the endocrinology service and she was discharged on 40 units of insulin glargine at night , 12 units of insulin lispro with meals , and metformin 1000 mg two times a day . It was determined that all SGLT2 inhibitors should be discontinued indefinitely .
She had close follow-up with endocrinology post discharge .
"""
annotations = lmodel.fullAnnotate(text)
rel_df = get_relations_df (annotations)
rel_df
```
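The columns returned by `get_relations_df` are plain strings, so casting `confidence` makes filtering and sorting easier. A small follow-up sketch (not part of the original notebook):
```
# Hedged follow-up: cast confidence to float and keep only higher-confidence relations
rel_df['confidence'] = rel_df['confidence'].astype(float)
rel_df[rel_df['confidence'] > 0.5].sort_values('confidence', ascending=False).head(20)
```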
## Clinical RE
### The set of relations defined in the 2010 i2b2 relation challenge
- TrIP: A certain treatment has improved or cured a medical problem (eg, ‘infection resolved with antibiotic course’)
- TrWP: A patient's medical problem has deteriorated or worsened because of or in spite of a treatment being administered (eg, ‘the tumor was growing despite the drain’)
- TrCP: A treatment caused a medical problem (eg, ‘penicillin causes a rash’)
- TrAP: A treatment administered for a medical problem (eg, ‘Dexamphetamine for narcolepsy’)
- TrNAP: The administration of a treatment was avoided because of a medical problem (eg, ‘Ralafen which is contra-indicated because of ulcers’)
- TeRP: A test has revealed some medical problem (eg, ‘an echocardiogram revealed a pericardial effusion’)
- TeCP: A test was performed to investigate a medical problem (eg, ‘chest x-ray done to rule out pneumonia’)
- PIP: Two problems are related to each other (eg, ‘Azotemia presumed secondary to sepsis’)
```
clinical_ner_tagger = sparknlp.annotators.NerDLModel()\
.pretrained("ner_clinical", "en", "clinical/models")\
.setInputCols("sentence", "tokens", "embeddings")\
.setOutputCol("ner_tags")
clinical_re_Model = RelationExtractionModel()\
.pretrained("re_clinical", "en", 'clinical/models')\
.setInputCols(["embeddings", "pos_tags", "ner_chunks", "dependencies"])\
.setOutputCol("relations")\
.setMaxSyntacticDistance(4)\
.setRelationPairs(["problem-test", "problem-treatment"]) # we can set the possible relation pairs (if not set, all the relations will be calculated)
loaded_pipeline = Pipeline(stages=[
documenter,
sentencer,
tokenizer,
words_embedder,
pos_tagger,
clinical_ner_tagger,
ner_chunker,
dependency_parser,
clinical_re_Model
])
loaded_model = loaded_pipeline.fit(empty_data)
loaded_lmodel = LightPipeline(loaded_model)
text ="""A 28-year-old female with a history of gestational diabetes mellitus diagnosed eight years prior to presentation and subsequent type two diabetes mellitus ( T2DM ),
one prior episode of HTG-induced pancreatitis three years prior to presentation, associated with an acute hepatitis , and obesity with a body mass index ( BMI ) of 33.5 kg/m2 , presented with a one-week history of polyuria , polydipsia , poor appetite , and vomiting . Two weeks prior to presentation , she was treated with a five-day course of amoxicillin for a respiratory tract infection . She was on metformin , glipizide , and dapagliflozin for T2DM and atorvastatin and gemfibrozil for HTG . She had been on dapagliflozin for six months at the time of presentation. Physical examination on presentation was significant for dry oral mucosa ; significantly , her abdominal examination was benign with no tenderness , guarding , or rigidity . Pertinent laboratory findings on admission were : serum glucose 111 mg/dl , bicarbonate 18 mmol/l , anion gap 20 , creatinine 0.4 mg/dL , triglycerides 508 mg/dL , total cholesterol 122 mg/dL , glycated hemoglobin ( HbA1c ) 10% , and venous pH 7.27 . Serum lipase was normal at 43 U/L . Serum acetone levels could not be assessed as blood samples kept hemolyzing due to significant lipemia . The patient was initially admitted for starvation ketosis , as she reported poor oral intake for three days prior to admission . However , serum chemistry obtained six hours after presentation revealed her glucose was 186 mg/dL , the anion gap was still elevated at 21 , serum bicarbonate was 16 mmol/L , triglyceride level peaked at 2050 mg/dL , and lipase was 52 U/L . The β-hydroxybutyrate level was obtained and found to be elevated at 5.29 mmol/L - the original sample was centrifuged and the chylomicron layer removed prior to analysis due to interference from turbidity caused by lipemia again . The patient was treated with an insulin drip for euDKA and HTG with a reduction in the anion gap to 13 and triglycerides to 1400 mg/dL , within 24 hours . Her euDKA was thought to be precipitated by her respiratory tract infection in the setting of SGLT2 inhibitor use . The patient was seen by the endocrinology service and she was discharged on 40 units of insulin glargine at night , 12 units of insulin lispro with meals , and metformin 1000 mg two times a day . It was determined that all SGLT2 inhibitors should be discontinued indefinitely .
She had close follow-up with endocrinology post discharge .
"""
annotations = loaded_lmodel.fullAnnotate(text)
rel_df = get_relations_df (annotations)
rel_df[rel_df.relation!="O"]
```
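For larger volumes of notes, the fitted pipeline can also be applied to a Spark DataFrame instead of going through a `LightPipeline`. A hedged sketch, reusing the `loaded_model` fitted above (the input column name `text` comes from the `DocumentAssembler` defined earlier):
```
# Hedged sketch: annotate a Spark DataFrame with the fitted clinical pipeline
text_df = spark.createDataFrame([[text]]).toDF("text")
spark_annotations = loaded_model.transform(text_df)
spark_annotations.select("relations.result").show(truncate=False)
```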
## Train a Relation Extraction Model
```
data = spark.read.option("header","true").format("csv").load("i2b2_clinical_relfeatures.csv")
data.show(10)
#Annotation structure
annotationType = T.StructType([
T.StructField('annotatorType', T.StringType(), False),
T.StructField('begin', T.IntegerType(), False),
T.StructField('end', T.IntegerType(), False),
T.StructField('result', T.StringType(), False),
T.StructField('metadata', T.MapType(T.StringType(), T.StringType()), False),
T.StructField('embeddings', T.ArrayType(T.FloatType()), False)
])
# UDF to convert training data rows into NER chunk annotations
@F.udf(T.ArrayType(annotationType))
def createTrainAnnotations(begin1, end1, begin2, end2, chunk1, chunk2, label1, label2):
    entity1 = sparknlp.annotation.Annotation("chunk", begin1, end1, chunk1, {'entity': label1.upper(), 'sentence': '0'}, [])
    entity2 = sparknlp.annotation.Annotation("chunk", begin2, end2, chunk2, {'entity': label2.upper(), 'sentence': '0'}, [])
    entity1.annotatorType = "chunk"
    entity2.annotatorType = "chunk"
    return [entity1, entity2]
#list of valid relations
rels = ["TrIP", "TrAP", "TeCP", "TrNAP", "TrCP", "PIP", "TrWP", "TeRP"]
#a query to select list of valid relations
valid_rel_query = "(" + " OR ".join(["rel = '{}'".format(rel) for rel in rels]) + ")"
data = data\
.withColumn("begin1i", F.expr("cast(firstCharEnt1 AS Int)"))\
.withColumn("end1i", F.expr("cast(lastCharEnt1 AS Int)"))\
.withColumn("begin2i", F.expr("cast(firstCharEnt2 AS Int)"))\
.withColumn("end2i", F.expr("cast(lastCharEnt2 AS Int)"))\
.where("begin1i IS NOT NULL")\
.where("end1i IS NOT NULL")\
.where("begin2i IS NOT NULL")\
.where("end2i IS NOT NULL")\
.where(valid_rel_query)\
.withColumn(
"train_ner_chunks",
createTrainAnnotations(
"begin1i", "end1i", "begin2i", "end2i", "chunk1", "chunk2", "label1", "label2"
).alias("train_ner_chunks", metadata={'annotatorType': "chunk"}))
train_data = data.where("dataset='train'")
test_data = data.where("dataset='test'")
documenter = sparknlp.DocumentAssembler()\
.setInputCol("sentence")\
.setOutputCol("document")
sentencer = SentenceDetector()\
.setInputCols(["document"])\
.setOutputCol("sentences")
tokenizer = sparknlp.annotators.Tokenizer()\
.setInputCols(["sentences"])\
.setOutputCol("tokens")\
words_embedder = WordEmbeddingsModel()\
.pretrained("embeddings_clinical", "en", "clinical/models")\
.setInputCols(["sentences", "tokens"])\
.setOutputCol("embeddings")
pos_tagger = PerceptronModel()\
.pretrained("pos_clinical", "en", "clinical/models") \
.setInputCols(["sentences", "tokens"])\
.setOutputCol("pos_tags")
dependency_parser = sparknlp.annotators.DependencyParserModel()\
.pretrained("dependency_conllu", "en")\
.setInputCols(["document", "pos_tags", "tokens"])\
.setOutputCol("dependencies")
# set training params and upload model graph (see ../Healthcare/8.Generic_Classifier.ipynb)
reApproach = sparknlp_jsl.annotator.RelationExtractionApproach()\
.setInputCols(["embeddings", "pos_tags", "train_ner_chunks", "dependencies"])\
.setOutputCol("relations")\
.setLabelColumn("rel")\
.setEpochsNumber(50)\
.setBatchSize(200)\
.setLearningRate(0.001)\
.setModelFile("/content/RE.in1200D.out20.pb")\
.setFixImbalance(True)\
.setValidationSplit(0.2)\
.setFromEntity("begin1i", "end1i", "label1")\
.setToEntity("begin2i", "end2i", "label2")
finisher = sparknlp.Finisher()\
.setInputCols(["relations"])\
.setOutputCols(["relations_out"])\
.setCleanAnnotations(False)\
.setValueSplitSymbol(",")\
.setAnnotationSplitSymbol(",")\
.setOutputAsArray(False)
train_pipeline = Pipeline(stages=[
documenter, sentencer, tokenizer, words_embedder, pos_tagger,
dependency_parser, reApproach, finisher
])
rel_model = train_pipeline.fit(train_data)
rel_model.stages[-2]
rel_model.stages[-2].write().overwrite().save('custom_RE_model')
result = rel_model.transform(test_data)
# Per-relation recall: share of gold relations recovered for each true label
recall = result\
    .groupBy("rel")\
    .agg(F.avg(F.expr("IF(rel = relations_out, 1, 0)")).alias("recall"))\
    .select(
        F.col("rel").alias("relation"),
        F.format_number("recall", 2).alias("recall"))
recall.show()
# Per-relation precision: share of predicted relations that match the gold label
precision = result\
    .where("relations_out <> ''")\
    .groupBy("relations_out")\
    .agg(F.avg(F.expr("IF(rel = relations_out, 1, 0)")).alias("precision"))\
    .select(
        F.col("relations_out").alias("relation"),
        F.format_number("precision", 2).alias("precision"))
precision.show()
result_df = result.select(F.explode(F.arrays_zip('relations.result', 'relations.metadata')).alias("cols")) \
.select(F.expr("cols['0']").alias("relation"),
F.expr("cols['1']['entity1']").alias("entity1"),
F.expr("cols['1']['entity1_begin']").alias("entity1_begin"),
F.expr("cols['1']['entity1_end']").alias("entity1_end"),
F.expr("cols['1']['chunk1']").alias("chunk1"),
F.expr("cols['1']['entity2']").alias("entity2"),
F.expr("cols['1']['entity2_begin']").alias("entity2_begin"),
F.expr("cols['1']['entity2_end']").alias("entity2_end"),
F.expr("cols['1']['chunk2']").alias("chunk2"),
F.expr("cols['1']['confidence']").alias("confidence")
)
result_df.show(50, truncate=100)
```
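As a rough follow-up (not part of the original notebook), per-relation F1 can be derived by joining the recall and precision tables on the relation name. The sketch below recomputes both tables from `result` without `format_number` so the arithmetic stays numeric:
```
# Hedged sketch: per-relation F1 on the transformed test data
recall_df = result\
    .groupBy("rel")\
    .agg(F.avg(F.expr("IF(rel = relations_out, 1, 0)")).alias("recall"))\
    .withColumnRenamed("rel", "relation")
precision_df = result\
    .where("relations_out <> ''")\
    .groupBy("relations_out")\
    .agg(F.avg(F.expr("IF(rel = relations_out, 1, 0)")).alias("precision"))\
    .withColumnRenamed("relations_out", "relation")
f1_df = recall_df.join(precision_df, "relation")\
    .withColumn("f1", F.expr("2 * precision * recall / (precision + recall)"))
f1_df.select("relation",
    F.format_number("recall", 2).alias("recall"),
    F.format_number("precision", 2).alias("precision"),
    F.format_number("f1", 2).alias("f1")).show()
```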
## Load trained model from disk
```
loaded_re_Model = RelationExtractionModel() \
.load("custom_RE_model")\
.setInputCols(["embeddings", "pos_tags", "ner_chunks", "dependencies"]) \
.setOutputCol("relations")\
.setRelationPairs(["problem-test", "problem-treatment"])\
.setPredictionThreshold(0.9)\
.setMaxSyntacticDistance(4)
trained_pipeline = Pipeline(stages=[
documenter,
sentencer,
tokenizer,
words_embedder,
pos_tagger,
clinical_ner_tagger,
ner_chunker,
dependency_parser,
loaded_re_Model
])
empty_data = spark.createDataFrame([[""]]).toDF("sentence")
loaded_re_model = trained_pipeline.fit(empty_data)
text ="""A 28-year-old female with a history of gestational diabetes mellitus diagnosed eight years prior to presentation and subsequent type two diabetes mellitus ( T2DM ),
one prior episode of HTG-induced pancreatitis three years prior to presentation, associated with an acute hepatitis , and obesity with a body mass index ( BMI ) of 33.5 kg/m2 , presented with a one-week history of polyuria , polydipsia , poor appetite , and vomiting . Two weeks prior to presentation , she was treated with a five-day course of amoxicillin for a respiratory tract infection . She was on metformin , glipizide , and dapagliflozin for T2DM and atorvastatin and gemfibrozil for HTG . She had been on dapagliflozin for six months at the time of presentation. Physical examination on presentation was significant for dry oral mucosa ; significantly , her abdominal examination was benign with no tenderness , guarding , or rigidity . Pertinent laboratory findings on admission were : serum glucose 111 mg/dl , bicarbonate 18 mmol/l , anion gap 20 , creatinine 0.4 mg/dL , triglycerides 508 mg/dL , total cholesterol 122 mg/dL , glycated hemoglobin ( HbA1c ) 10% , and venous pH 7.27 . Serum lipase was normal at 43 U/L . Serum acetone levels could not be assessed as blood samples kept hemolyzing due to significant lipemia . The patient was initially admitted for starvation ketosis , as she reported poor oral intake for three days prior to admission . However , serum chemistry obtained six hours after presentation revealed her glucose was 186 mg/dL , the anion gap was still elevated at 21 , serum bicarbonate was 16 mmol/L , triglyceride level peaked at 2050 mg/dL , and lipase was 52 U/L . The β-hydroxybutyrate level was obtained and found to be elevated at 5.29 mmol/L - the original sample was centrifuged and the chylomicron layer removed prior to analysis due to interference from turbidity caused by lipemia again . The patient was treated with an insulin drip for euDKA and HTG with a reduction in the anion gap to 13 and triglycerides to 1400 mg/dL , within 24 hours . Her euDKA was thought to be precipitated by her respiratory tract infection in the setting of SGLT2 inhibitor use . The patient was seen by the endocrinology service and she was discharged on 40 units of insulin glargine at night , 12 units of insulin lispro with meals , and metformin 1000 mg two times a day . It was determined that all SGLT2 inhibitors should be discontinued indefinitely .
She had close follow-up with endocrinology post discharge .
"""
loaded_re_model_light = LightPipeline(loaded_re_model)
annotations = loaded_re_model_light.fullAnnotate(text)
rel_df = get_relations_df (annotations)
rel_df[rel_df.relation!="O"]
```
# Exploring Clojure and Java interop
## Calling Java from Clojure
### Importing Java classes into Clojure
```
; The general form of import statements is as follows
"""
(import & import-symbols-or-lists)
"""
; Importing two Java classes individually
; The quote (') reader macro instructs the runtime not to evaluate the symbol
(import 'java.util.Date 'java.text.SimpleDateFormat)
; Importing two Java classes as a sequence
(import '[java.util Date Set])
; Using the :import keyword to import classes in a namespace
(ns com.clojureinaction.book (:import (java.util Set Date)))
(ns user)
```
### Creating instances
```
; Using the new special form to instantiate a class (like in Java)
(import '(java.text SimpleDateFormat))
(def sdf (new SimpleDateFormat "yyyy-MM-dd"))
; Using a trailing dot to instantiate a class
(def sdf (SimpleDateFormat. "yyyy-MM-dd"))
```
### Accessing methods and fields
```
; Using a leading dot to access an instance method
(defn date-from-date-string [date-string]
(let [sdf (SimpleDateFormat. "yyyy-MM-dd")]
(.parse sdf date-string)))
; Using a slash to access a static method
(Long/parseLong "12321") ; The syntax is (Classname/staticMethod args*)
; Using a slash to access a static field
(import '(java.util Calendar))
(Calendar/JANUARY)
```
### Macros and the dot special form
```
; General form of calling static methods
"""
(. ClassnameSymbol methodSymbol args*)
"""
; Example
(. System getenv "PATH")
; General form of calling instance methods
"""
(. instanceExpr methodSymbol args*)
"""
; Example
(import '(java.util Random))
(def rnd (Random.))
(. rnd nextInt 10)
; General form of calling static and instance fields
"""
(. ClassnameSymbol memberSymbol)
(. instanceExpr memberSymbol)
"""
; Example
(. Calendar DECEMBER)
```
### The Dot-Dot macro
Using the '.' special form to chain Java method calls (hard to read)
```
(import '(java.util Calendar TimeZone))
(. (. (Calendar/getInstance) (getTimeZone)) (getDisplayName))
```
Using the '..' macro to chain method calls (easier to read)
```
(.. (Calendar/getInstance) (getTimeZone) (getDisplayName))
```
### The Doto macro
```
; Applying a method repeatedly to a single Java object (note the code duplication)
(import '(java.util Calendar))
(defn the-past-midnight-1 []
(let [calendar-obj (Calendar/getInstance)]
(.set calendar-obj Calendar/AM_PM Calendar/AM)
(.set calendar-obj Calendar/HOUR 0)
(.set calendar-obj Calendar/MINUTE 0)
(.set calendar-obj Calendar/SECOND 0)
(.set calendar-obj Calendar/MILLISECOND 0)
(.getTime calendar-obj)))
; Using the doto macro to apply a method repeatedly to a single Java object
; (the code duplication was removed)
(defn the-past-midnight-2 []
(let [calendar-obj (Calendar/getInstance)]
(doto calendar-obj
(.set Calendar/AM_PM Calendar/AM)
(.set Calendar/HOUR 0)
(.set Calendar/MINUTE 0)
(.set Calendar/SECOND 0)
(.set Calendar/MILLISECOND 0))
(.getTime calendar-obj)))
```
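Since `doto` returns the object it was given, the local binding in the example above can be dropped entirely. A hedged variant (not from the book):
```
; Hedged variant: doto returns the calendar object itself, so no let binding is needed
(defn the-past-midnight-3 []
  (.getTime (doto (Calendar/getInstance)
              (.set Calendar/AM_PM Calendar/AM)
              (.set Calendar/HOUR 0)
              (.set Calendar/MINUTE 0)
              (.set Calendar/SECOND 0)
              (.set Calendar/MILLISECOND 0))))
```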
### The memfn macro
```
; Using a Java method as a normal function
(map (fn [x] (.getBytes x)) ["amit" "rob" "kyle"])
; Using a Java method as an anonymous function
(map #(.getBytes %) ["amit" "rob" "kyle"])
; Using the memfn macro instead of the anonymous function
(map (memfn getBytes) ["amit" "rob" "kyle"])
; Calling a Java method without type hints
(.subSequence "Clojure" 2 5)
; Calling a Java method with type hints
((memfn ^String subSequence ^Long start ^Long end) "Clojure" 2 5)
```
### The bean macro
```
; Converting Java bean objects to immutable Clojure maps
(import '[java.util Calendar])
(bean (Calendar/getInstance))
```
### Working with Java arrays
```
(def tokens (.split "clojure.in.action" "\\."))
```
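The value returned by `.split` is a plain Java `String[]`, which Clojure can inspect with its array functions or treat as a seq. A short hedged sketch:
```
; Hedged sketch: inspecting the Java array returned by .split
(alength tokens) ; => 3
(aget tokens 0)  ; => "clojure"
(seq tokens)     ; => ("clojure" "in" "action")
```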
### Implementing interfaces and extending classes
The proxy macro
```
; Implementing the MouseAdapter class with proxy
(import 'java.awt.event.MouseAdapter)
(proxy [MouseAdapter] []
(mousePressed [event]
(println "Hey!")))
; The general form of the proxy macro is as follows
"""
(proxy [class-and-interfaces] [args] fs+)
"""
```
The reify macro
```
; Creating an instance of Java’s FileFilter interface
(reify java.io.FileFilter
(accept [this f]
(.isDirectory f)))
```
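The object returned by `reify` is an ordinary Java instance, so it can be handed straight to a Java API that expects the interface. A hedged usage sketch:
```
; Hedged usage sketch: pass the reified FileFilter to java.io.File/listFiles
(def dirs-only
  (reify java.io.FileFilter
    (accept [this f]
      (.isDirectory f))))
(seq (.listFiles (java.io.File. ".") dirs-only))
```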
## Compiling Clojure code to Java bytecode
The initial directory structure of the project
```
root
classes
src
com
curry
utils
calculators.clj
```
```
; Contents of the calculators.clj file
"""
(ns com.curry.utils.calculators (:gen-class))
(defn present-value [data]
(println \"calculating present value...\")
"""
; Compiling the defined namespace
(compile 'com.curry.utils.calculators)
```
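Note that `compile` writes the generated class files to the directory named by `*compile-path*` ("classes" by default), and that directory must already exist and be on the classpath. A hedged sketch, assuming the `classes` directory from the layout above:
```
; Hedged sketch: compile with an explicit output directory
(binding [*compile-path* "classes"]
  (compile 'com.curry.utils.calculators))
```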
The new directory structure of the project
```
root
classes
src
com
curry
utils
calculators.clj
calc
dcf.clj
fcf.clj
```
```
; Contents of dcf.clj
"""
(in-ns 'com.curry.utils.calculators)
(defn discounted-cash-flow [data]
(println \"calculating discounted cash flow...\"))
"""
; Contents of fcf.clj
"""
(in-ns 'com.curry.utils.calculators)
(defn free-cash-flow [data]
(println \"calculating free cash flow...\"))
"""
; The new contents of calculators.clj
"""
(ns com.curry.utils.calculators (:gen-class))
(load \"calc/fcf\")
(load \"calc/dcf\")
(defn present-value [data]
(println \"calculating present value...\"))
"""
```
### Creating Java classes and interfaces using gen-class and gen-interface
```
; An abstract Java class that will be used to illustrate
; how gen-class works
"""
package com.gentest;
public abstract class AbstractJavaClass {
public AbstractJavaClass(String a, String b) {
System.out.println(\"Constructor: a, b\");
}
public AbstractJavaClass(String a) {
System.out.println(\"Constructor: a\");
}
public abstract String getCurrentStatus();
public String getSecret() {
return \"The Secret\";
}
}
"""
; Using the AbstractJavaClass defined above from Clojure code
"""
(ns com.gentest.gen-clojure
(:import (com.gentest AbstractJavaClass))
(:gen-class
:name com.gentest.ConcreteClojureClass
:extends com.gentest.AbstractJavaClass
:constructors {[String] [String]
[String String] [String String]}
:implements [Runnable]
:init initialize
:state localState
:methods [[stateValue [] String]]))
(defn -initialize
([s1]
(println \"Init value:\" s1)
[[s1 \"default\"] (ref s1)])
([s1 s2]
(println \"Init values:\" s1 \",\" s2)
[[s1 s2] (ref s2)]))
(defn -getCurrentStatus [this]
\"getCurrentStatus from - com.gentest.ConcreteClojureClass\")
(defn -stateValue [this]
@(.localState this))
(defn -run [this]
(println \"In run!\")
(println \"I'm a\" (class this))
(dosync (ref-set (.localState this) \"GO\")))
(defn -main []
(let [g (new com.gentest.ConcreteClojureClass \"READY\")]
(println (.getCurrentStatus g))
(println (.getSecret g))
(println (.stateValue g)))
(let [g (new com.gentest.ConcreteClojureClass \"READY\" \"SET\")]
(println (.stateValue g))
(.start (Thread. g))
(Thread/sleep 1000)
(println (.stateValue g))))
"""
; To compile and test the code above, evaluate the compile form in the REPL,
; then run the generated class with java from the command line
(compile 'com.gentest.gen-clojure)
; java com.gentest.ConcreteClojureClass
```
Leiningen project file for ConcreteClojureClass
```
"""
(defproject gentest \"0.1.0\"
:dependencies [[org.clojure/clojure \"1.6.0\"]]
; Place our \"AbstractJavaClass.java\" and \"gen-clojure.clj\" files under
; the src/com/gentest directory.
:source-paths [\"src\"]
:java-source-paths [\"src\"]
; :aot is a list of clojure namespaces to compile.
:aot [com.gentest.gen-clojure]
; This is the java class \"lein run\" should execute.
:main com.gentest.ConcreteClojureClass)
"""
```
## Calling Clojure from Java
```
; Clojure function, defined in the clj.script.examples namespace
(ns clj.script.examples)
(defn print-report [user-name]
(println "Report for:" user-name) 10)
; Using the function above from Java code
"""
import clojure.lang.RT;
import clojure.lang.Var;
public class Driver {
public static void main(String[] args) throws Exception {
RT.loadResourceScript(\"clojure_script.clj\");
Var report = RT.var(\"clj.script.examples\", \"print-report\");
Integer result = (Integer) report.invoke(\"Siva\");
System.out.println(\"Result: \" + result);
}
}
"""
```
___
*Copyright Pierian Data. For more information, visit [www.pieriandata.com](http://www.pieriandata.com).*
___
# RNN Exercise Solutions
**TASK: IMPORT THE BASIC LIBRARIES YOU THINK YOU WILL USE**
```
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
```
## Data
Info about this data set: https://fred.stlouisfed.org/series/IPN31152N

- Units: Index 2012=100, Not Seasonally Adjusted
- Frequency: Monthly
- NAICS: 31152
- Source Code: IP.N31152.N

The industrial production (IP) index measures the real output of all relevant establishments located in the United States, regardless of their ownership, but not those located in U.S. territories.

Suggested Citation: Board of Governors of the Federal Reserve System (US), Industrial Production: Nondurable Goods: Ice cream and frozen dessert [IPN31152N], retrieved from FRED, Federal Reserve Bank of St. Louis; https://fred.stlouisfed.org/series/IPN31152N, November 16, 2019.
# Project Tasks
**TASK: Read in the data set "Frozen_Dessert_Production.csv" from the Data folder. Figure out how to set the date to a datetime index column.**
```
# CODE HERE
df = pd.read_csv('../Data/Frozen_Dessert_Production.csv',index_col='DATE',parse_dates=True)
df.head()
```
**Task: Change the column name to Production**
```
#CODE HERE
df.columns = ['Production']
df.head()
```
**TASK: Plot out the time series**
```
#CODE HERE
df.plot(figsize=(12,8))
```
## Train Test Split
**TASK: Figure out the length of the data set**
```
#CODE HERE
len(df)
```
**TASK: Split the data into a train/test split where the test set is the last 24 months of data.**
```
#CODE HERE
test_size = 24
test_ind = len(df)- test_size
train = df.iloc[:test_ind]
test = df.iloc[test_ind:]
len(test)
```
## Scale Data
**TASK: Use a MinMaxScaler to scale the train and test sets into scaled versions.**
```
# CODE HERE
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
# IGNORE WARNING ITS JUST CONVERTING TO FLOATS
# WE ONLY FIT TO TRAINING DATA, OTHERWISE WE ARE CHEATING BY ASSUMING INFO ABOUT THE TEST SET
scaler.fit(train)
scaled_train = scaler.transform(train)
scaled_test = scaler.transform(test)
```
# Time Series Generator
**TASK: Create a TimeseriesGenerator object based on the scaled_train data. The batch length is up to you, but it should be at least 18 to capture a full year of seasonality.**
```
#CODE HERE
from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator
length = 18
n_features=1
generator = TimeseriesGenerator(scaled_train, scaled_train, length=length, batch_size=1)
```
### Create the Model
**TASK: Create a Keras Sequential Model with as many LSTM units you want and a final Dense Layer.**
```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense,LSTM
# define model
model = Sequential()
model.add(LSTM(100, activation='relu', input_shape=(length, n_features)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
model.summary()
```
**TASK: Create a generator for the scaled test/validation set. NOTE: Double check that your batch length makes sense for the size of the test set as mentioned in the RNN Time Series video.**
```
# CODE HERE
validation_generator = TimeseriesGenerator(scaled_test,scaled_test, length=length, batch_size=1)
```
**TASK: Create an EarlyStopping callback based on val_loss.**
```
#CODE HERE
from tensorflow.keras.callbacks import EarlyStopping
early_stop = EarlyStopping(monitor='val_loss',patience=2)
```
**TASK: Fit the model to the generator and let the EarlyStopping callback dictate the number of epochs, so feel free to set the epochs parameter high.**
```
# CODE HERE
# fit model
# Note: fit_generator() is deprecated in newer TensorFlow versions;
# model.fit() accepts generators directly since TF 2.1.
model.fit_generator(generator, epochs=20,
                    validation_data=validation_generator,
                    callbacks=[early_stop])
```
**TASK: Plot the history of the loss that occurred during training.**
```
# CODE HERE
loss = pd.DataFrame(model.history.history)
loss.plot()
```
## Evaluate on Test Data
**TASK: Forecast predictions for your test data range (the last 24 months of the entire dataset). Remember to invert your scaling transformations. Your final result should be a DataFrame with two columns: the true test values and the predictions.**
```
# CODE HERE
test_predictions = []
first_eval_batch = scaled_train[-length:]
current_batch = first_eval_batch.reshape((1, length, n_features))
for i in range(len(test)):
# get prediction 1 time stamp ahead ([0] is for grabbing just the number instead of [array])
current_pred = model.predict(current_batch)[0]
# store prediction
test_predictions.append(current_pred)
# update batch to now include prediction and drop first value
current_batch = np.append(current_batch[:,1:,:],[[current_pred]],axis=1)
true_predictions = scaler.inverse_transform(test_predictions)
test['Predictions'] = true_predictions
test
```
**TASK: Plot your predictions versus the True test values. (Your plot may look different than ours).**
```
# CODE HERE
test.plot()
```
**TASK: Calculate your RMSE.**
```
from sklearn.metrics import mean_squared_error
np.sqrt(mean_squared_error(test['Production'],test['Predictions']))
```
# Note! Check out the end of the video solutions lecture to see a discussion on improving these results!
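The improvements discussed at the end of the solutions lecture are not reproduced in this notebook. As a rough, hypothetical sketch (not the lecture's exact recipe), two common tweaks are a more patient EarlyStopping callback with a higher epoch cap and a stacked LSTM; the helper below assumes the `generator` and `validation_generator` objects created above.
```
# A minimal sketch of common tweaks (assumptions, not the lecture's solution):
# a stacked LSTM, more allowed epochs, and a more patient EarlyStopping callback.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM
from tensorflow.keras.callbacks import EarlyStopping

def build_tweaked_model(length, n_features=1):
    model = Sequential()
    # return_sequences=True lets us stack a second recurrent layer on top
    model.add(LSTM(100, activation='relu', return_sequences=True,
                   input_shape=(length, n_features)))
    model.add(LSTM(50, activation='relu'))
    model.add(Dense(1))
    model.compile(optimizer='adam', loss='mse')
    return model

# Usage (uncomment to train; reuses the generators defined earlier):
# model = build_tweaked_model(length=18)
# early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
# model.fit(generator, epochs=100, validation_data=validation_generator, callbacks=[early_stop])
```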
```
import pyspark
sc = pyspark.SparkContext(master="spark://10.0.0.3:6060")
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
```
http://10.0.0.3:4040/
```
from pyspark.sql import functions as F
import pyspark
import numpy as np
import sys
def seedKernel(data, dataIdValue, centroids, k, metric):
point = dataIdValue[1]
d = sys.maxsize
for j in range(len(centroids)):
temp_dist = metric(point, data[centroids[j]])
d = min(d, temp_dist)
return int(d)
def seedClusters(data, dataFrame, k, metric):
centroids = list(np.random.choice(data.shape[0], 1, replace=False))
for i in range(k - 1):
print("clusterSeed", i)
dist = []
mK = dataFrame.rdd.map(lambda dataIdValue: seedKernel(data, dataIdValue, centroids, k, metric))
mK_collect = mK.collect()
dist = np.array(mK_collect)
next_centroid = np.argmax(dist)
centroids.append(next_centroid)
dist = []
return centroids
def nearestCenteroidKernel(dataIdValue, centeroidIdValues, metric):
dataId, dataValue = dataIdValue
dataNp = np.asarray(dataValue)
distances = []
for centeroidId, centeroidValue in centeroidIdValues:
centeroidNp = np.asarray(centeroidValue)
distance = metric(dataNp, centeroidNp)
distances.append(distance)
distances = np.asarray(distances)
closestCenteroid = np.argmin(distances)
return int(closestCenteroid)
def optimiseClusterMembershipSpark(data, dataFrame, n, metric, intitalClusterIndices=None):
dataShape = data.shape
dataRDD = dataFrame.rdd
lengthOfData = dataShape[0]
if intitalClusterIndices is None:
index = np.random.choice(lengthOfData, n, replace=False)
else:
index = intitalClusterIndices
listIndex = [int(i) for i in list(index)]
centeroidIdValues = [(i,data[index[i]]) for i in range(len(index))]
dataRDD = dataRDD.filter(lambda dataIdValue: int(dataIdValue["id"]) not in listIndex)
associatedClusterPoints = dataRDD.map(lambda dataIdValue: (dataIdValue[0],nearestCenteroidKernel(dataIdValue, centeroidIdValues, metric)))
clusters = associatedClusterPoints.toDF(["id", "bestC"]).groupBy("bestC").agg(F.collect_list("id").alias("cluster"))
return index, clusters
def costKernel(data, testCenteroid, clusterData, metric):
cluster = np.asarray(clusterData)
lenCluster = cluster.shape[0]
lenFeature = data.shape[1]
testCenteroidColumn = np.zeros(shape=(lenCluster, lenFeature), dtype=data.dtype)
newClusterColumn = np.zeros(shape=(lenCluster, lenFeature), dtype=data.dtype)
for i in range(0, lenCluster):
newClusterColumn[i] = data[cluster[i]]
testCenteroidColumn[i] = data[int(testCenteroid)]
pairwiseDistance = metric(newClusterColumn, testCenteroidColumn)# (np.absolute(newClusterColumn-testCenteroidColumn).sum(axis=1))# metric(newClusterColumn, testCenteroidColumn)
cost = np.sum(pairwiseDistance)
return float(cost) #newClusterColumn.shape[1]
def optimiseCentroidSelectionSpark(data, dataFrame, centeroids, clustersFrames, metric):
dataRDD = dataFrame.rdd
dataShape = data.shape
newCenteroidIds = []
totalCost = 0
for clusterIdx in range(len(centeroids)):
print("clusterOpIdx", clusterIdx)
oldCenteroid = centeroids[clusterIdx]
clusterFrame = clustersFrames.filter(clustersFrames.bestC == clusterIdx).select(F.explode(clustersFrames.cluster))
clusterData = clusterFrame.collect()
if clusterData:
clusterData = [clusterData[i].col for i in range(len(clusterData))]
else:
clusterData = []
cluster = np.asarray(clusterData)
costData = clusterFrame.rdd.map(lambda pointId: (pointId[0], costKernel(data, pointId[0], clusterData, metric)))
cost = costData.map(lambda pointIdCost: pointIdCost[1]).sum()
totalCost = totalCost + cost
pointResult = costData.sortBy(lambda pointId_Cost: pointId_Cost[1]).take(1)
if (pointResult):
bestPoint = pointResult[0][0]
else:
bestPoint = oldCenteroid
newCenteroidIds.append(bestPoint)
return (newCenteroidIds, totalCost)
#vector metrics
def hammingVector(stack1, stack2):
return (stack1 != stack2).sum(axis=1)
def euclideanVector(stack1, stack2):
return (np.absolute(stack2-stack1)).sum(axis=1)
# point metrics
def euclideanPoint(p1, p2):
return np.sum((p1 - p2)**2)
def hammingPoint(p1, p2):
return np.sum((p1 != p2))
def fit(sc, data, nRegions = 2, metric = "euclidean", seeding = "heuristic"):
if metric == "euclidean":
pointMetric = euclideanPoint
vectorMetric = euclideanVector
elif metric == "hamming":
pointMetric = hammingPoint
vectorMetric = hammingVector
else:
print("unsuported metric")
return
dataN = np.asarray(data)
seeds = None
dataFrame = sc.parallelize(data).zipWithIndex().map(lambda xy: (xy[1],xy[0])).toDF(["id", "vector"]).cache()
if (seeding == "heuristic"):
seeds = list(seedClusters(dataN, dataFrame, nRegions, pointMetric))
lastCenteroids, lastClusters = optimiseClusterMembershipSpark(dataN, dataFrame, nRegions, pointMetric, seeds)
lastCost = float('inf')
iteration = 0
escape = False
while not escape:
iteration = iteration + 1
currentCenteroids, currentCost = optimiseCentroidSelectionSpark(dataN, dataFrame, lastCenteroids, lastClusters, vectorMetric)
currentCenteroids, currentClusters = optimiseClusterMembershipSpark(dataN, dataFrame, nRegions, pointMetric, currentCenteroids)
print((currentCost<lastCost, currentCost, lastCost, currentCost - lastCost))
if (currentCost<lastCost):
print(("iteration",iteration,"cost improving...", currentCost, lastCost))
lastCost = currentCost
lastCenteroids = currentCenteroids
lastClusters = currentClusters
else:
print(("iteration",iteration,"cost got worse or did not improve", currentCost, lastCost))
escape = True
# lastClusters holds the best cluster assignment found by the loop above
bc = lastClusters.collect()
unpackedClusters = [bc[i].cluster for i in range(len(bc))]
return (lastCenteroids, unpackedClusters)
import numpy as np #maths
visualFeatureVocabulary = None
visualFeatureVocabularyList = None
with open("data/ORBvoc.txt", "r") as fin:
extractedFeatures = list(map(lambda x: x.split(" ")[2:-2], fin.readlines()[1:]))
dedupedFeatureStrings = set()
for extractedFeature in extractedFeatures:
strRep = ".".join(extractedFeature)
dedupedFeatureStrings.add(strRep)
finalFeatures = []
for dedupedFeatureStr in list(dedupedFeatureStrings):
finalFeatures.append([int(i) for i in dedupedFeatureStr.split(".")])
visualFeatureVocabulary = np.asarray(finalFeatures, dtype=np.uint8)
visualFeatureVocabularyList = list(finalFeatures)
print(visualFeatureVocabulary.shape)
%%time
#ret = fit(sc, visualFeatureVocabularyList, 4, "hamming")
#ret = KMedoids.fit(sc, visualFeatureVocabularyList, 4, "hamming")
#ret[1].show()
#ret[0]
from pyclustering.cluster import cluster_visualizer
from pyclustering.utils import read_sample
from pyclustering.samples.definitions import FCPS_SAMPLES
from pyclustering.samples.definitions import SIMPLE_SAMPLES
sample = read_sample(FCPS_SAMPLES.SAMPLE_GOLF_BALL)
%%time
#visualizer = cluster_visualizer()
bestCentroids, bestClusters = fit(sc, sample, 10) #"hamming"
#visualizer.append_clusters(bestClusters, sample)
#visualizer.show()
#print(bestClustersData)
```
# Population Estimation Notebook
- Notebook for creating population estimates for the years between official US censuses.
- ETL Script: ```db_estimate_script.py```
```
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from config import DATABASE_URI
from models import NameEntry, StateEntry
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import us
```
## Extract Data From PostgreSQL Database
- Data: [Wikipedia](https://en.wikipedia.org/wiki/List_of_U.S._states_and_territories_by_historical_population)
- Scraping script: ```db_population_script.py```
```
# Create PostgreSQL database connection with SQLAlchemy
engine = create_engine(DATABASE_URI)
Session = sessionmaker(bind=engine)
# Get data from database
s = Session()
q = s.query(StateEntry.state, StateEntry.year, StateEntry.population).all()
s.close()
# Convert to DataFrame
pops = [ {'state': v[0], 'year': v[1], 'population': v[2]} for v in q ]
pops = pd.DataFrame(pops)
# Sample
pops.head(3)
# Sample
sns.scatterplot(data=pops.loc[pops['state'] == 'New York'], x='year', y='population')
plt.show()
```
## Estimate Population from Data
- Create function to estimate state populations for years in between census years.
- Estimates are derived from a simple linear population slope.
```
def get_pop_est(state):
'''
Function to create population estimates between census years
'''
years_ = range(1960, 2010)
ps = np.array(pops.loc[pops['state'] == state]['population'])
# Population slope between census data years
ms = np.diff(ps) / 10
# Initial population of decade
cs = ps[:-1]
# Create estimates through matrix operations
ests = np.round((np.arange(0, 10).reshape(-1, 1) * ms + np.ones(10).reshape(-1,1) * cs)).T
ests = [
{'state': state, 'year': year, 'population': int(est)}
for year, est in zip(years_, ests.flatten())
]
return ests
# Sample
get_pop_est('New York')[:5]
```
## Extract and Clean Data
- Data from: [US Census Bureau - State Populations + DC (2010-2019)](https://www.census.gov/data/tables/time-series/demo/popest/2010s-state-total.html)
```
# Set states
states = [ f'{state.name}' for state in us.states.STATES ] + ['District of Columbia']
# Load excel file
tens = pd.read_excel('data/nst-est2019-01.xlsx', index_col=0) # Note: years in 3rd row
tens.head(3)
# Select rows with states or DC (Example: '.Alabama')
tens = tens.loc[[ '.' + state for state in states ]]
tens.index.name = 'state'
# Select columns with yearly population data and rename columns
tens = tens[[ f'Unnamed: {i}' for i in range(3, 13) ]]
tens.columns = range(2010, 2020)
# Reset index and clean state names
tens.reset_index(inplace=True)
tens['state'] = [ state[1:] for state in tens['state'] ]
# Sample
tens.head(3)
# Make DataFrame wide to long
df = tens.melt(id_vars='state', value_vars=range(2010, 2020), var_name='year', value_name='population')
# Sample
df.head(3)
# Concat estimates onto US Census Bureau data
for state in states:
df = pd.concat([df, pd.DataFrame(get_pop_est(state))], ignore_index=True)
# Sample
sns.scatterplot(data=df.loc[df['state'] == 'New York'], x='year', y='population')
plt.show()
```
<h2>Caveat</h2>
Web sites often change the format of their pages, so this may not always work. If it doesn't, rework the examples after examining the HTML content of the page (most browsers will let you see the HTML source - look for a "page source" option - though you might have to turn on developer mode in your browser preferences; for example, on Chrome you need to click the "developer mode" check box under Extensions in the Preferences/Options menu).
<h3>Import necessary modules</h3>
```
import requests
from bs4 import BeautifulSoup
# !pip3 install ipywidgets
# import javascript  # unused stray import (not needed for this notebook)
from ipywidgets import widgets
from IPython.display import display
# text = widgets.Text()
text = widgets.Text(
# continuous_update=False
# value='Hello World',
# placeholder='Type something',
# description='String:',
# disabled=False
)
display(text)
def handle_submit(sender):
print(text.value)
# text.on_submit(handle_submit)
# text.(handle_submit)
text.observe(handle_submit)
```
<h3>The http request response cycle</h3>
```
url = "http://www.epicurious.com/search/Tofu Chili"
response = requests.get(url)
if response.status_code == 200:
print("Success")
else:
print("Failure")
a = input()
print(a)
keywords = input("Please enter the things you want to see in a recipe")
# keywords='tufu chili'
url = "http://www.epicurious.com/search/" + keywords
response = requests.get(url)
if response.status_code == 200:
print("Success")
else:
print("Failure")
```
<h3>Set up the BeautifulSoup object</h3>
```
results_page = BeautifulSoup(response.content,'lxml')
print(results_page.prettify())
```
<h3>BS4 functions</h3>
<h4>find_all finds all instances of a specified tag</h4>
<h4>returns a result_set (a list)</h4>
```
all_a_tags = results_page.find_all('a')
print(type(all_a_tags))
```
<h4>find finds the first instance of a specified tag</h4>
<h4>returns a bs4 element</h4>
```
div_tag = results_page.find('div')
print(div_tag)
type(div_tag)
```
<h4>bs4 functions can be recursively applied on elements</h4>
```
div_tag.find('a')
```
<h4>Both find as well as find_all can be qualified by css selectors</h4>
<li>using selector=value
<li>using a dictionary
```
#When using this method and looking for 'class' use 'class_' (because class is a reserved word in python)
#Note that we get a list back because find_all returns a list
results_page.find_all('article',class_="recipe-content-card")
#Since we're using a string as the key, the fact that class is a reserved word is not a problem
#We get an element back because find returns an element
results_page.find('article',{'class':'recipe-content-card'})
```
<h4>get_text() returns the marked up text (the content) enclosed in a tag.</h4>
<li>returns a string
```
results_page.find('article',{'class':'recipe-content-card'}).get_text()
```
<h4>get returns the value of a tag attribute</h4>
<li>returns a string
```
recipe_tag = results_page.find('article',{'class':'recipe-content-card'})
recipe_link = recipe_tag.find('a')
print("a tag:",recipe_link)
link_url = recipe_link.get('href')
print("link url:",link_url)
print(type(link_url))
```
<h1>A function that returns a list containing recipe names, recipe descriptions (if any) and recipe urls</h1>
```
def get_recipes(keywords):
recipe_list = list()
import requests
from bs4 import BeautifulSoup
url = "http://www.epicurious.com/search/" + keywords
response = requests.get(url)
if not response.status_code == 200:
return None
try:
results_page = BeautifulSoup(response.content,'lxml')
recipes = results_page.find_all('article',class_="recipe-content-card")
for recipe in recipes:
recipe_link = "http://www.epicurious.com" + recipe.find('a').get('href')
recipe_name = recipe.find('a').get_text()
try:
recipe_description = recipe.find('p',class_='dek').get_text()
except:
recipe_description = ''
recipe_list.append((recipe_name,recipe_link,recipe_description))
return recipe_list
except:
return None
get_recipes("Tofu chili")
get_recipes('Nothing')
```
<h2>Let's write a function that</h2>
<h3>given a recipe link</h3>
<h3>returns a dictionary containing the ingredients and preparation instructions</h3>
```
recipe_link = "http://www.epicurious.com" + '/recipes/food/views/spicy-lemongrass-tofu-233844'
def get_recipe_info(recipe_link):
recipe_dict = dict()
import requests
from bs4 import BeautifulSoup
try:
response = requests.get(recipe_link)
if not response.status_code == 200:
return recipe_dict
result_page = BeautifulSoup(response.content,'lxml')
ingredient_list = list()
prep_steps_list = list()
for ingredient in result_page.find_all('li',class_='ingredient'):
ingredient_list.append(ingredient.get_text())
for prep_step in result_page.find_all('li',class_='preparation-step'):
prep_steps_list.append(prep_step.get_text().strip())
recipe_dict['ingredients'] = ingredient_list
recipe_dict['preparation'] = prep_steps_list
return recipe_dict
except:
return recipe_dict
get_recipe_info(recipe_link)
```
<h2>Construct a list of dictionaries for all recipes</h2>
```
def get_all_recipes(keywords):
results = list()
all_recipes = get_recipes(keywords)
for recipe in all_recipes:
recipe_dict = get_recipe_info(recipe[1])
recipe_dict['name'] = recipe[0]
recipe_dict['description'] = recipe[2]
results.append(recipe_dict)
return(results)
get_all_recipes("Tofu chili")
```
<h1>Logging in to a web server</h1>
<h2>Get username and password</h2>
<li>Best to store in a file for reuse
<li>You will need to set up your own login and password and place them in a file called wikidata.txt
<li>Line one of the file should contain your username
<li>Line two your password
```
with open('wikidata.txt') as f:
contents = f.read().split('\n')
username = contents[0]
password = contents[1]
```
<h3>Construct an object that contains the data to be sent to the login page</h3>
```
payload = {
'wpName': username,
'wpPassword': password,
'wploginattempt': 'Log in',
'wpEditToken': "+\\",
'title': "Special:UserLogin",
'authAction': "login",
'force': "",
'wpForceHttps': "1",
'wpFromhttp': "1",
#'wpLoginToken': '', #We need to read this from the page
}
```
<h3>get the value of the login token</h3>
```
def get_login_token(response):
soup = BeautifulSoup(response.text, 'lxml')
token = soup.find('input',{'name':"wpLoginToken"}).get('value')
return token
```
<h3>Setup a session, login, and get data</h3>
```
with requests.session() as s:
response = s.get('https://en.wikipedia.org/w/index.php?title=Special:UserLogin&returnto=Main+Page')
payload['wpLoginToken'] = get_login_token(response)
#Send the login request
response_post = s.post('https://en.wikipedia.org/w/index.php?title=Special:UserLogin&action=submitlogin&type=login',
data=payload)
#Get another page and check if we’re still logged in
response = s.get('https://en.wikipedia.org/wiki/Special:Watchlist')
data = BeautifulSoup(response.content,'lxml')
print(data.find('div',class_='mw-changeslist').get_text())
```
# Tutorial 12: Meta-Learning - Learning to Learn
* **Author:** Phillip Lippe
* **License:** CC BY-SA
* **Generated:** 2021-10-10T18:35:50.818431
In this tutorial, we will discuss algorithms that learn models which can quickly adapt to new classes and/or tasks with few samples.
This area of machine learning is called _Meta-Learning_, which aims at "learning to learn".
Learning from very few examples is a natural task for humans. In contrast to current deep learning models, we need to see only a few examples of a police car or firetruck to recognize them in daily traffic.
This is a crucial ability since in real-world applications, it is rarely the case that the data stays static and does not change over time.
For example, an object detection system for mobile phones trained on data from 2000 will have trouble detecting today's common mobile phones, and thus needs to adapt to new data without excessive labeling effort.
The optimization techniques we have discussed so far struggle with this because they only aim at obtaining good performance on a test set that contains data similar to the training set.
However, what if the test set has classes that we do not have in the training set?
Or what if we want to test the model on a completely different task?
We will discuss and implement three common Meta-Learning algorithms for such situations.
This notebook is part of a lecture series on Deep Learning at the University of Amsterdam.
The full list of tutorials can be found at https://uvadlc-notebooks.rtfd.io.
---
Open in [Google Colab](https://colab.research.google.com/github/PytorchLightning/lightning-tutorials/blob/publication/.notebooks/course_UvA-DL/12-meta-learning.ipynb)
Give us a ⭐ [on Github](https://www.github.com/PytorchLightning/pytorch-lightning/)
| Check out [the documentation](https://pytorch-lightning.readthedocs.io/en/latest/)
| Join us [on Slack](https://join.slack.com/t/pytorch-lightning/shared_invite/zt-pw5v393p-qRaDgEk24~EjiZNBpSQFgQ)
## Setup
This notebook requires some packages besides pytorch-lightning.
```
# ! pip install --quiet "torch>=1.6, <1.9" "matplotlib" "torchmetrics>=0.3" "seaborn" "torchvision" "pytorch-lightning>=1.3"
```
<div class="center-wrapper"><div class="video-wrapper"><iframe src="https://www.youtube.com/embed/035rkmT8FfE" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></div></div>
Meta-Learning offers solutions to these situations, and we will discuss three popular algorithms: __Prototypical Networks__ ([Snell et al., 2017](https://arxiv.org/pdf/1703.05175.pdf)), __Model-Agnostic Meta-Learning / MAML__ ([Finn et al., 2017](http://proceedings.mlr.press/v70/finn17a.html)), and __Proto-MAML__ ([Triantafillou et al., 2020](https://openreview.net/pdf?id=rkgAGAVKPr)).
We will focus on the task of few-shot classification where the training and test set have distinct sets of classes.
For instance, we would train the model on the binary classifications of cats-birds and flowers-bikes, but during test time, the model would need to learn from 4 examples each the difference between dogs and otters, two classes we have not seen during training (Figure credit - [Lilian Weng](https://lilianweng.github.io/lil-log/2018/11/30/meta-learning.html)).
<center width="100%"><img src="https://github.com/PyTorchLightning/lightning-tutorials/raw/main/course_UvA-DL/12-meta-learning/few-shot-classification.png" width="800px"></center>
A different setup, which is very common in Reinforcement Learning and recently Natural Language Processing, is to aim at few-shot learning of a completely new task.
For example, a robot agent that has learned to run, jump and pick up boxes should quickly adapt to collecting and stacking boxes.
In NLP, we can think of a model that was trained on sentiment classification, hate speech detection and sarcasm classification, and should adapt to classifying the emotion of a text.
All methods we will discuss in this notebook can be easily applied to these settings since we only use a different definition of a 'task'.
For few-shot classification, we consider a task to be distinguishing between $M$ novel classes.
In the task-adaptation setting described above, we would not only have novel classes, but also a completely different dataset.
First of all, let's start with importing our standard libraries. We will again be using PyTorch Lightning.
```
import json
import os
import random
import urllib.request
from collections import defaultdict
from copy import deepcopy
from statistics import mean, stdev
from urllib.error import HTTPError
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pytorch_lightning as pl
import seaborn as sns
import torch
import torch.nn.functional as F
import torch.optim as optim
import torch.utils.data as data
import torchvision
from IPython.display import set_matplotlib_formats
from PIL import Image
from pytorch_lightning.callbacks import LearningRateMonitor, ModelCheckpoint
from torchvision import transforms
from torchvision.datasets import CIFAR100, SVHN
from tqdm.auto import tqdm
plt.set_cmap("cividis")
# %matplotlib inline
set_matplotlib_formats("svg", "pdf") # For export
matplotlib.rcParams["lines.linewidth"] = 2.0
sns.reset_orig()
# Import tensorboard
# %load_ext tensorboard
# Path to the folder where the datasets are/should be downloaded (e.g. CIFAR10)
DATASET_PATH = os.environ.get("PATH_DATASETS", "data/")
# Path to the folder where the pretrained models are saved
CHECKPOINT_PATH = os.environ.get("PATH_CHECKPOINT", "saved_models/MetaLearning/")
# Setting the seed
pl.seed_everything(42)
# Ensure that all operations are deterministic on GPU (if used) for reproducibility
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
print("Device:", device)
```
Training the models in this notebook can take between 2 and 8 hours, and evaluating some of the algorithms takes a couple of minutes.
Hence, we download pre-trained models and results below.
```
# Github URL where saved models are stored for this tutorial
base_url = "https://raw.githubusercontent.com/phlippe/saved_models/main/tutorial16/"
# Files to download
pretrained_files = [
"ProtoNet.ckpt",
"ProtoMAML.ckpt",
"tensorboards/ProtoNet/events.out.tfevents.ProtoNet",
"tensorboards/ProtoMAML/events.out.tfevents.ProtoMAML",
"protomaml_fewshot.json",
"protomaml_svhn_fewshot.json",
]
# Create checkpoint path if it doesn't exist yet
os.makedirs(CHECKPOINT_PATH, exist_ok=True)
# For each file, check whether it already exists. If not, try downloading it.
for file_name in pretrained_files:
file_path = os.path.join(CHECKPOINT_PATH, file_name)
if "/" in file_name:
os.makedirs(file_path.rsplit("/", 1)[0], exist_ok=True)
if not os.path.isfile(file_path):
file_url = base_url + file_name
print("Downloading %s..." % file_url)
try:
urllib.request.urlretrieve(file_url, file_path)
except HTTPError as e:
print(
"Something went wrong. Please try to download the file from the GDrive folder, or contact the author with the full output including the following error:\n",
e,
)
```
## Few-shot classification
We start our implementation by discussing the dataset setup.
In this notebook, we will use CIFAR100 which we have already seen in Tutorial 6.
CIFAR100 has 100 classes each with 600 images of size $32\times 32$ pixels.
Instead of splitting the training, validation and test set over examples, we will split them over classes: we will use 80 classes for training, 10 for validation and 10 for testing.
Our overall goal is to obtain a model that can distinguish between the 10 test classes after seeing only very few examples.
First, let's load the dataset and visualize some examples.
```
# Loading CIFAR100 dataset
cifar_train_set = CIFAR100(root=DATASET_PATH, train=True, download=True, transform=transforms.ToTensor())
cifar_test_set = CIFAR100(root=DATASET_PATH, train=False, download=True, transform=transforms.ToTensor())
# Visualize some examples
NUM_IMAGES = 12
cifar_images = [cifar_train_set[np.random.randint(len(cifar_train_set))][0] for idx in range(NUM_IMAGES)]
cifar_images = torch.stack(cifar_images, dim=0)
img_grid = torchvision.utils.make_grid(cifar_images, nrow=6, normalize=True, pad_value=0.9)
img_grid = img_grid.permute(1, 2, 0)
plt.figure(figsize=(8, 8))
plt.title("Image examples of the CIFAR100 dataset")
plt.imshow(img_grid)
plt.axis("off")
plt.show()
plt.close()
```
### Data preprocessing
Next, we need to prepare the dataset in the training, validation and test split as mentioned before.
The torchvision package gives us the training and test set as two separate dataset objects.
The next code cells will merge the original training and test set, and then create the new train-val-test split.
```
# Merging original training and test set
cifar_all_images = np.concatenate([cifar_train_set.data, cifar_test_set.data], axis=0)
cifar_all_targets = torch.LongTensor(cifar_train_set.targets + cifar_test_set.targets)
```
To have an easier time handling the dataset, we define our own, simple dataset class below.
It takes a set of images, labels/targets, and image transformations, and
returns the corresponding images and labels element-wise.
```
class ImageDataset(data.Dataset):
def __init__(self, imgs, targets, img_transform=None):
"""
Inputs:
imgs - Numpy array of shape [N,32,32,3] containing all images.
targets - PyTorch array of shape [N] containing all labels.
img_transform - A torchvision transformation that should be applied
to the images before returning. If none, no transformation
is applied.
"""
super().__init__()
self.img_transform = img_transform
self.imgs = imgs
self.targets = targets
def __getitem__(self, idx):
img, target = self.imgs[idx], self.targets[idx]
img = Image.fromarray(img)
if self.img_transform is not None:
img = self.img_transform(img)
return img, target
def __len__(self):
return self.imgs.shape[0]
```
Now, we can create the class splits.
We will assign the classes randomly to training, validation and test, and use an 80%-10%-10% split.
```
pl.seed_everything(0) # Set seed for reproducibility
classes = torch.randperm(100) # Returns random permutation of numbers 0 to 99
train_classes, val_classes, test_classes = classes[:80], classes[80:90], classes[90:]
```
To get an intuition of the validation and test classes, we print the class names below:
```
# Printing validation and test classes
idx_to_class = {val: key for key, val in cifar_train_set.class_to_idx.items()}
print("Validation classes:", [idx_to_class[c.item()] for c in val_classes])
print("Test classes:", [idx_to_class[c.item()] for c in test_classes])
```
As we can see, the classes have quite some variety and some classes might be easier to distinguish than others.
For instance, in the test classes, 'pickup_truck' is the only vehicle while the classes 'mushroom', 'worm' and 'forest' might be harder to keep apart.
Remember that we want to learn the classification of those ten classes from the 80 other classes in our training set and only a few examples from the actual test classes.
We will experiment with the number of examples per class.
Finally, we can create the training, validation and test dataset according to our split above.
For this, we create dataset objects of our previously defined class `ImageDataset`.
```
def dataset_from_labels(imgs, targets, class_set, **kwargs):
class_mask = (targets[:, None] == class_set[None, :]).any(dim=-1)
return ImageDataset(imgs=imgs[class_mask], targets=targets[class_mask], **kwargs)
```
As in our experiments before on CIFAR in Tutorial 5, 6 and 9, we normalize the dataset.
Additionally, we use small augmentations during training to prevent overfitting.
```
DATA_MEANS = (cifar_train_set.data / 255.0).mean(axis=(0, 1, 2))
DATA_STD = (cifar_train_set.data / 255.0).std(axis=(0, 1, 2))
test_transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize(DATA_MEANS, DATA_STD)])
# For training, we add some augmentation.
train_transform = transforms.Compose(
[
transforms.RandomHorizontalFlip(),
transforms.RandomResizedCrop((32, 32), scale=(0.8, 1.0), ratio=(0.9, 1.1)),
transforms.ToTensor(),
transforms.Normalize(DATA_MEANS, DATA_STD),
]
)
train_set = dataset_from_labels(cifar_all_images, cifar_all_targets, train_classes, img_transform=train_transform)
val_set = dataset_from_labels(cifar_all_images, cifar_all_targets, val_classes, img_transform=test_transform)
test_set = dataset_from_labels(cifar_all_images, cifar_all_targets, test_classes, img_transform=test_transform)
```
### Data sampling
The strategy of how to use the available training data for learning few-shot adaptation is crucial in meta-learning.
All three algorithms that we discuss here have a similar idea: simulate few-shot learning during training.
Specifically, at each training step, we randomly select a small number of classes, and sample a small number of examples for each class.
This represents our few-shot training batch, which we also refer to as **support set**.
Additionally, we sample a second set of examples from the same classes, and refer to this batch as **query set**.
Our training objective is to classify the query set correctly from seeing the support set and its corresponding labels.
The main difference between our three methods (ProtoNet, MAML, and Proto-MAML) is in how they use the support set to adapt to the training classes.
This subsection summarizes the code that is needed to create such training batches.
In PyTorch, we can specify the data sampling procedure by so-called `Sampler` ([documentation](https://pytorch.org/docs/stable/data.html#data-loading-order-and-sampler)).
Samplers are iterable objects that return indices in the order in which the data elements should be sampled.
In our previous notebooks, we usually used the option `shuffle=True` in the `data.DataLoader` objects which creates a sampler returning the data indices in a random order.
Here, we focus on samplers that return batches of indices that correspond to support and query set batches.
Below, we implement such a sampler.
```
class FewShotBatchSampler:
def __init__(self, dataset_targets, N_way, K_shot, include_query=False, shuffle=True, shuffle_once=False):
"""
Inputs:
dataset_targets - PyTorch tensor of the labels of the data elements.
N_way - Number of classes to sample per batch.
K_shot - Number of examples to sample per class in the batch.
include_query - If True, returns batch of size N_way*K_shot*2, which
can be split into support and query set. Simplifies
the implementation of sampling the same classes but
distinct examples for support and query set.
shuffle - If True, examples and classes are newly shuffled in each
iteration (for training)
shuffle_once - If True, examples and classes are shuffled once in
the beginning, but kept constant across iterations
(for validation)
"""
super().__init__()
self.dataset_targets = dataset_targets
self.N_way = N_way
self.K_shot = K_shot
self.shuffle = shuffle
self.include_query = include_query
if self.include_query:
self.K_shot *= 2
self.batch_size = self.N_way * self.K_shot # Number of overall images per batch
# Organize examples by class
self.classes = torch.unique(self.dataset_targets).tolist()
self.num_classes = len(self.classes)
self.indices_per_class = {}
self.batches_per_class = {} # Number of K-shot batches that each class can provide
for c in self.classes:
self.indices_per_class[c] = torch.where(self.dataset_targets == c)[0]
self.batches_per_class[c] = self.indices_per_class[c].shape[0] // self.K_shot
# Create a list of classes from which we select the N classes per batch
self.iterations = sum(self.batches_per_class.values()) // self.N_way
self.class_list = [c for c in self.classes for _ in range(self.batches_per_class[c])]
if shuffle_once or self.shuffle:
self.shuffle_data()
else:
# For testing, we iterate over classes instead of shuffling them
sort_idxs = [
i + p * self.num_classes for i, c in enumerate(self.classes) for p in range(self.batches_per_class[c])
]
self.class_list = np.array(self.class_list)[np.argsort(sort_idxs)].tolist()
def shuffle_data(self):
# Shuffle the examples per class
for c in self.classes:
perm = torch.randperm(self.indices_per_class[c].shape[0])
self.indices_per_class[c] = self.indices_per_class[c][perm]
# Shuffle the class list from which we sample. Note that this way of shuffling
# does not prevent choosing the same class twice in a batch. However, for
# training and validation, this is not a problem.
random.shuffle(self.class_list)
def __iter__(self):
# Shuffle data
if self.shuffle:
self.shuffle_data()
# Sample few-shot batches
start_index = defaultdict(int)
for it in range(self.iterations):
class_batch = self.class_list[it * self.N_way : (it + 1) * self.N_way] # Select N classes for the batch
index_batch = []
for c in class_batch: # For each class, select the next K examples and add them to the batch
index_batch.extend(self.indices_per_class[c][start_index[c] : start_index[c] + self.K_shot])
start_index[c] += self.K_shot
if self.include_query: # If we return support+query set, sort them so that they are easy to split
index_batch = index_batch[::2] + index_batch[1::2]
yield index_batch
def __len__(self):
return self.iterations
```
Now, we can create our intended data loaders by passing an object of `FewShotBatchSampler` as `batch_sampler=...` input to the PyTorch data loader object.
For our experiments, we will use a 5-class 4-shot training setting.
This means that each support set contains 5 classes with 4 examples each, i.e., 20 images overall.
Usually, it is good to keep the number of shots equal to the number that you aim to test on.
However, we will experiment later with different numbers of shots, and hence, we pick 4 as a compromise for now.
To get the best performing model, it is recommended to treat the
number of training shots as a hyperparameter in a grid search.
```
N_WAY = 5
K_SHOT = 4
train_data_loader = data.DataLoader(
train_set,
batch_sampler=FewShotBatchSampler(train_set.targets, include_query=True, N_way=N_WAY, K_shot=K_SHOT, shuffle=True),
num_workers=4,
)
val_data_loader = data.DataLoader(
val_set,
batch_sampler=FewShotBatchSampler(
val_set.targets, include_query=True, N_way=N_WAY, K_shot=K_SHOT, shuffle=False, shuffle_once=True
),
num_workers=4,
)
```
For simplicity, we implemented the sampling of a support and query set as sampling a support set with twice the number of examples.
After sampling a batch from the data loader, we need to split it into a support and query set.
We can summarize this step in the following function:
```
def split_batch(imgs, targets):
support_imgs, query_imgs = imgs.chunk(2, dim=0)
support_targets, query_targets = targets.chunk(2, dim=0)
return support_imgs, query_imgs, support_targets, query_targets
```
Finally, to ensure that our implementation of the data sampling process is correct, we can sample a batch and visualize its support and query set.
What we would like to see is that the support and query set have the same classes, but distinct examples.
```
imgs, targets = next(iter(val_data_loader)) # We use the validation set since it does not apply augmentations
support_imgs, query_imgs, _, _ = split_batch(imgs, targets)
support_grid = torchvision.utils.make_grid(support_imgs, nrow=K_SHOT, normalize=True, pad_value=0.9)
support_grid = support_grid.permute(1, 2, 0)
query_grid = torchvision.utils.make_grid(query_imgs, nrow=K_SHOT, normalize=True, pad_value=0.9)
query_grid = query_grid.permute(1, 2, 0)
fig, ax = plt.subplots(1, 2, figsize=(8, 5))
ax[0].imshow(support_grid)
ax[0].set_title("Support set")
ax[0].axis("off")
ax[1].imshow(query_grid)
ax[1].set_title("Query set")
ax[1].axis("off")
fig.suptitle("Few Shot Batch", weight="bold")
fig.show()
plt.close(fig)
```
As we can see, the support and query set have the same five classes, but different examples.
The models will be tasked to classify the examples in the query set by learning from the support set and its labels.
With the data sampling in place, we can now start to implement our first meta-learning model: Prototypical Networks.
## Prototypical Networks
<div class="center-wrapper"><div class="video-wrapper"><iframe src="https://www.youtube.com/embed/LhZGPOtTd_Y" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></div></div>
The Prototypical Network, or ProtoNet for short, is a metric-based meta-learning algorithm which operates similarly to nearest-neighbor classification.
Metric-based meta-learning methods classify a new example $\mathbf{x}$ based on some distance function $d_{\varphi}$ between $\mathbf{x}$ and all elements in the support set.
ProtoNets implements this idea with the concept of prototypes in a learned feature space.
First, ProtoNet uses an embedding function $f_{\theta}$ to encode each input in the support set into a $L$-dimensional feature vector.
Next, for each class $c$, we collect the feature vectors of all examples with label $c$, and average their feature vectors.
Formally, we can define this as:
$$\mathbf{v}_c=\frac{1}{|S_c|}\sum_{(\mathbf{x}_i,y_i)\in S_c}f_{\theta}(\mathbf{x}_i)$$
where $S_c$ is the part of the support set $S$ for which $y_i=c$, and $\mathbf{v}_c$ represents the _prototype_ of class $c$.
The prototype calculation is visualized below for a 2-dimensional feature space and 3 classes (Figure credit - [Snell et al.](https://arxiv.org/pdf/1703.05175.pdf)).
The colored dots represent encoded support elements with color-corresponding class label, and the black dots next to the class label are the averaged prototypes.
<center width="100%"><img src="https://github.com/PyTorchLightning/lightning-tutorials/raw/main/course_UvA-DL/12-meta-learning/protonet_classification.svg" width="300px"></center>
Based on these prototypes, we want to classify a new example.
Remember that since we want to learn the encoding function $f_{\theta}$, this classification must be differentiable and hence, we need to define a probability distribution across classes.
For this, we will make use of the distance function $d_{\varphi}$: the closer a new example $\mathbf{x}$ is to a prototype $\mathbf{v}_c$, the higher the probability for $\mathbf{x}$ belonging to class $c$.
Formally, we can simply use a softmax over the distances of $\mathbf{x}$ to all class prototypes:
$$p(y=c\vert\mathbf{x})=\text{softmax}(-d_{\varphi}(f_{\theta}(\mathbf{x}), \mathbf{v}_c))=\frac{\exp\left(-d_{\varphi}(f_{\theta}(\mathbf{x}), \mathbf{v}_c)\right)}{\sum_{c'\in \mathcal{C}}\exp\left(-d_{\varphi}(f_{\theta}(\mathbf{x}), \mathbf{v}_{c'})\right)}$$
Note that the negative sign is necessary since we want to increase the probability for close-by vectors and have a low probability for distant vectors.
We train the network $f_{\theta}$ based on the cross entropy error of the training query set examples.
Thereby, the gradient flows through both the prototypes $\mathbf{v}_c$ and the query set encodings $f_{\theta}(\mathbf{x})$.
For the distance function $d_{\varphi}$, we can choose any function as long as it is differentiable with respect to both of its inputs.
The most common choice, which we also use here, is the squared Euclidean distance, but there have been several works on other distance functions as well.
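To make the two steps above concrete, here is a minimal, self-contained sketch with random tensors; it is purely illustrative (the tensor sizes and names are made up for this example) and not part of the tutorial's pipeline, which follows below.
```
# Illustrative sketch with random tensors (not part of the tutorial pipeline):
# computing prototypes and classifying a query example via a softmax over negative distances.
import torch
import torch.nn.functional as F

feats = torch.randn(6, 4)                   # encoded support examples f_theta(x_i), 2 per class
targets = torch.tensor([0, 0, 1, 1, 2, 2])  # class labels of the support examples
classes = torch.unique(targets)
prototypes = torch.stack([feats[targets == c].mean(dim=0) for c in classes])  # v_c, shape [3, 4]

query = torch.randn(1, 4)                   # a single encoded query example f_theta(x)
dist = ((query[:, None, :] - prototypes[None, :, :]) ** 2).sum(dim=-1)  # squared Euclidean distances
print(F.softmax(-dist, dim=-1))             # p(y=c|x): higher probability for closer prototypes
```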
### ProtoNet implementation
Now that we know how a ProtoNet works in principle, let's look at how we can apply it to our specific problem of few-shot image classification, and implement it below.
First, we need to define the encoder function $f_{\theta}$.
Since we work with CIFAR images, we can take a look back at Tutorial 5 where we compared common Computer Vision architectures, and choose one of the best performing ones.
Here, we go with a DenseNet since it is in general more parameter efficient than ResNet.
Luckily, we do not need to implement DenseNet ourselves again and can rely on torchvision's model package instead.
We use common hyperparameters of 64 initial feature channels, add 32 per block, and use a bottleneck size of 64 (i.e. 2 times the growth rate).
We use 4 stages of 6 layers each, which results in overall about 1 million parameters.
Note that the torchvision package assumes that the last layer is used for classification and hence calls its output size `num_classes`.
However, we can instead just use it as the feature space of ProtoNet, and choose an arbitrary dimensionality.
We will use the same network for other algorithms in this notebook to ensure a fair comparison.
```
def get_convnet(output_size):
convnet = torchvision.models.DenseNet(
growth_rate=32,
block_config=(6, 6, 6, 6),
bn_size=2,
num_init_features=64,
num_classes=output_size, # Output dimensionality
)
return convnet
```
Next, we can look at implementing ProtoNet.
We will define it as a PyTorch Lightning module to make use of all functionalities of PyTorch Lightning.
The first step during training is to encode all images in a batch with our network.
Next, we calculate the class prototypes from the support set (function `calculate_prototypes`), and classify the query set examples according to the prototypes (function `classify_feats`).
Keep in mind that we use the data sampling described before, such that the support and query set are stacked together in the batch.
Thus, we use our previously defined function `split_batch` to split them apart.
The full code can be found below.
```
class ProtoNet(pl.LightningModule):
def __init__(self, proto_dim, lr):
"""Inputs.
proto_dim - Dimensionality of prototype feature space
lr - Learning rate of Adam optimizer
"""
super().__init__()
self.save_hyperparameters()
self.model = get_convnet(output_size=self.hparams.proto_dim)
def configure_optimizers(self):
optimizer = optim.AdamW(self.parameters(), lr=self.hparams.lr)
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[140, 180], gamma=0.1)
return [optimizer], [scheduler]
@staticmethod
def calculate_prototypes(features, targets):
# Given a stack of feature vectors and labels, return class prototypes
# features - shape [N, proto_dim], targets - shape [N]
classes, _ = torch.unique(targets).sort() # Determine which classes we have
prototypes = []
for c in classes:
p = features[torch.where(targets == c)[0]].mean(dim=0) # Average class feature vectors
prototypes.append(p)
prototypes = torch.stack(prototypes, dim=0)
# Return the 'classes' tensor to know which prototype belongs to which class
return prototypes, classes
def classify_feats(self, prototypes, classes, feats, targets):
# Classify new examples with prototypes and return classification error
dist = torch.pow(prototypes[None, :] - feats[:, None], 2).sum(dim=2) # Squared euclidean distance
preds = F.log_softmax(-dist, dim=1)
labels = (classes[None, :] == targets[:, None]).long().argmax(dim=-1)
acc = (preds.argmax(dim=1) == labels).float().mean()
return preds, labels, acc
def calculate_loss(self, batch, mode):
# Determine training loss for a given support and query set
imgs, targets = batch
features = self.model(imgs) # Encode all images of support and query set
support_feats, query_feats, support_targets, query_targets = split_batch(features, targets)
prototypes, classes = ProtoNet.calculate_prototypes(support_feats, support_targets)
preds, labels, acc = self.classify_feats(prototypes, classes, query_feats, query_targets)
loss = F.cross_entropy(preds, labels)
self.log("%s_loss" % mode, loss)
self.log("%s_acc" % mode, acc)
return loss
def training_step(self, batch, batch_idx):
return self.calculate_loss(batch, mode="train")
def validation_step(self, batch, batch_idx):
self.calculate_loss(batch, mode="val")
```
For validation, we use the same principle as training and sample support and query sets from the hold-out 10 classes.
However, this gives us noisy scores depending on which query sets are paired with which support sets.
This is why we will use a different strategy during testing.
For validation, our training strategy is sufficient since it is much
faster than testing, and gives a good estimate of the training
generalization as long as we keep the support-query sets constant across
validation iterations.
### Training
After implementing the model, we can already start training it.
We use our common PyTorch Lightning training function, and train the model for 200 epochs.
The training function takes `model_class` as input argument, i.e. the
PyTorch Lightning module class that should be trained, since we will
reuse this function for other algorithms as well.
```
def train_model(model_class, train_loader, val_loader, **kwargs):
trainer = pl.Trainer(
default_root_dir=os.path.join(CHECKPOINT_PATH, model_class.__name__),
gpus=1 if str(device) == "cuda:0" else 0,
max_epochs=200,
callbacks=[
ModelCheckpoint(save_weights_only=True, mode="max", monitor="val_acc"),
LearningRateMonitor("epoch"),
],
progress_bar_refresh_rate=0,
)
trainer.logger._default_hp_metric = None
# Check whether pretrained model exists. If yes, load it and skip training
pretrained_filename = os.path.join(CHECKPOINT_PATH, model_class.__name__ + ".ckpt")
if os.path.isfile(pretrained_filename):
print("Found pretrained model at %s, loading..." % pretrained_filename)
# Automatically loads the model with the saved hyperparameters
model = model_class.load_from_checkpoint(pretrained_filename)
else:
pl.seed_everything(42) # To be reproducable
model = model_class(**kwargs)
trainer.fit(model, train_loader, val_loader)
model = model_class.load_from_checkpoint(
trainer.checkpoint_callback.best_model_path
) # Load best checkpoint after training
return model
```
Below is the training call for our ProtoNet.
We use a 64-dimensional feature space.
Larger feature spaces turned out to give noisier results since the squared Euclidean distance grows with the dimensionality in expectation, while smaller feature spaces might not allow for enough flexibility.
We recommend loading the pre-trained model here first, but feel free to play around with the hyperparameters yourself.
```
protonet_model = train_model(
ProtoNet, proto_dim=64, lr=2e-4, train_loader=train_data_loader, val_loader=val_data_loader
)
```
We can also take a closer look at the TensorBoard below.
```
# Opens tensorboard in notebook. Adjust the path to your CHECKPOINT_PATH if needed
# # %tensorboard --logdir ../saved_models/tutorial16/tensorboards/ProtoNet/
```
<center width="100%"><img src="https://github.com/PyTorchLightning/lightning-tutorials/raw/main/course_UvA-DL/12-meta-learning/tensorboard_screenshot_ProtoNet.png" width="1100px"></center>
In contrast to standard supervised learning, we see that ProtoNet does not overfit as much as we would expect.
The validation accuracy is of course lower than the average training accuracy, but the training loss does not stick close to zero.
This is because no training batch is like any other, and we also mix new examples into the support and query sets each time.
This gives us slightly different prototypes in every iteration, and makes it harder for the network to fully overfit.
### Testing
Our goal of meta-learning is to obtain a model that can quickly adapt to a new task, or in this case, new classes to distinguish between.
To test this, we will use our trained ProtoNet and adapt it to the 10 test classes.
Thereby, we pick $k$ examples per class from which we determine the prototypes, and test the classification accuracy on all other examples.
This can be seen as using the $k$ examples per class as support set, and the rest of the dataset as a query set.
We iterate through the dataset such that each example is included in a support set exactly once.
The average performance over all support sets tells us how well we can expect ProtoNet to perform when seeing only $k$ examples per class.
During training, we used $k=4$.
In testing, we will experiment with $k=\{2,4,8,16,32\}$ to get a better sense of how $k$ influences the results.
We would expect that we achieve higher accuracies the more examples we have in the support set, but we don't know how it scales.
Hence, let's first implement a function that executes the testing procedure for a given $k$:
```
@torch.no_grad()
def test_proto_net(model, dataset, data_feats=None, k_shot=4):
"""Inputs.
model - Pretrained ProtoNet model
dataset - The dataset on which the test should be performed.
Should be instance of ImageDataset
data_feats - The encoded features of all images in the dataset.
If None, they will be newly calculated, and returned
for later usage.
k_shot - Number of examples per class in the support set.
"""
model = model.to(device)
model.eval()
num_classes = dataset.targets.unique().shape[0]
exmps_per_class = dataset.targets.shape[0] // num_classes # We assume uniform example distribution here
# The encoder network remains unchanged across k-shot settings. Hence, we only need
# to extract the features for all images once.
if data_feats is None:
# Dataset preparation
dataloader = data.DataLoader(dataset, batch_size=128, num_workers=4, shuffle=False, drop_last=False)
img_features = []
img_targets = []
for imgs, targets in tqdm(dataloader, "Extracting image features", leave=False):
imgs = imgs.to(device)
feats = model.model(imgs)
img_features.append(feats.detach().cpu())
img_targets.append(targets)
img_features = torch.cat(img_features, dim=0)
img_targets = torch.cat(img_targets, dim=0)
# Sort by classes, so that we obtain tensors of shape [num_classes, exmps_per_class, ...]
# Makes it easier to process later
img_targets, sort_idx = img_targets.sort()
img_targets = img_targets.reshape(num_classes, exmps_per_class).transpose(0, 1)
img_features = img_features[sort_idx].reshape(num_classes, exmps_per_class, -1).transpose(0, 1)
else:
img_features, img_targets = data_feats
# We iterate through the full dataset in two manners. First, to select the k-shot batch.
# Second, to evaluate the model on all other examples
accuracies = []
for k_idx in tqdm(range(0, img_features.shape[0], k_shot), "Evaluating prototype classification", leave=False):
# Select support set and calculate prototypes
k_img_feats = img_features[k_idx : k_idx + k_shot].flatten(0, 1)
k_targets = img_targets[k_idx : k_idx + k_shot].flatten(0, 1)
prototypes, proto_classes = model.calculate_prototypes(k_img_feats, k_targets)
# Evaluate accuracy on the rest of the dataset
batch_acc = 0
for e_idx in range(0, img_features.shape[0], k_shot):
if k_idx == e_idx: # Do not evaluate on the support set examples
continue
e_img_feats = img_features[e_idx : e_idx + k_shot].flatten(0, 1)
e_targets = img_targets[e_idx : e_idx + k_shot].flatten(0, 1)
_, _, acc = model.classify_feats(prototypes, proto_classes, e_img_feats, e_targets)
batch_acc += acc.item()
batch_acc /= img_features.shape[0] // k_shot - 1
accuracies.append(batch_acc)
return (mean(accuracies), stdev(accuracies)), (img_features, img_targets)
```
Testing ProtoNet is relatively quick once we have processed all images. Hence, we can do it directly in this notebook:
```
protonet_accuracies = dict()
data_feats = None
for k in [2, 4, 8, 16, 32]:
protonet_accuracies[k], data_feats = test_proto_net(protonet_model, test_set, data_feats=data_feats, k_shot=k)
print(
"Accuracy for k=%i: %4.2f%% (+-%4.2f%%)"
% (k, 100.0 * protonet_accuracies[k][0], 100 * protonet_accuracies[k][1])
)
```
Before discussing the results above, let's first plot the accuracies over number of examples in the support set:
```
def plot_few_shot(acc_dict, name, color=None, ax=None):
sns.set()
if ax is None:
fig, ax = plt.subplots(1, 1, figsize=(5, 3))
ks = sorted(list(acc_dict.keys()))
mean_accs = [acc_dict[k][0] for k in ks]
std_accs = [acc_dict[k][1] for k in ks]
ax.plot(ks, mean_accs, marker="o", markeredgecolor="k", markersize=6, label=name, color=color)
ax.fill_between(
ks,
[m - s for m, s in zip(mean_accs, std_accs)],
[m + s for m, s in zip(mean_accs, std_accs)],
alpha=0.2,
color=color,
)
ax.set_xticks(ks)
ax.set_xlim([ks[0] - 1, ks[-1] + 1])
ax.set_xlabel("Number of shots per class", weight="bold")
ax.set_ylabel("Accuracy", weight="bold")
if len(ax.get_title()) == 0:
ax.set_title("Few-Shot Performance " + name, weight="bold")
else:
ax.set_title(ax.get_title() + " and " + name, weight="bold")
ax.legend()
return ax
ax = plot_few_shot(protonet_accuracies, name="ProtoNet", color="C1")
plt.show()
plt.close()
```
As we initially expected, the performance of ProtoNet indeed increases the more samples we have.
However, even with just two samples per class, we classify almost half of the images correctly, which is well above random accuracy (10%).
The curve shows an exponentially dampened trend, meaning that adding two extra examples at $k=2$ has a much higher impact than adding two extra samples when we already have $k=16$.
Nonetheless, we can say that ProtoNet adapts fairly well to new classes.
## MAML and ProtoMAML
<div class="center-wrapper"><div class="video-wrapper"><iframe src="https://www.youtube.com/embed/xKcA6g-esH4" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></div></div>
The second meta-learning algorithm we will look at is MAML, short for Model-Agnostic Meta-Learning.
MAML is an optimization-based meta-learning algorithm, which means that it tries to adjust the standard optimization procedure to a few-shot setting.
The idea of MAML is relatively simple: given a model, support and query set during training, we optimize the model for $m$ steps on the support set, and evaluate the gradients of the query loss with respect to the original model's parameters.
For the same model, we do it for a few different support-query sets and accumulate the gradients.
This results in learning a model that provides a good initialization for being quickly adapted to the training tasks.
If we denote the model parameters with $\theta$, we can visualize the procedure as follows (Figure credit - [Finn et al. ](http://proceedings.mlr.press/v70/finn17a.html)).
<center width="100%"><img src="https://github.com/PyTorchLightning/lightning-tutorials/raw/main/course_UvA-DL/12-meta-learning/MAML_figure.svg" width="300px"></center>
The full algorithm of MAML is therefore as follows.
At each training step, we sample a batch of tasks, i.e., a batch of support-query set pairs.
For each task $\mathcal{T}_i$, we optimize a model $f_{\theta}$ on the support set via SGD, and denote this model as $f_{\theta_i'}$.
We refer to this optimization as _inner loop_.
Using this new model, we calculate the gradients of the query loss of $f_{\theta_i'}$ with respect to the original parameters $\theta$.
These gradients are accumulated over all tasks, and used to update $\theta$.
This is called _outer loop_ since we iterate over tasks.
The full MAML algorithm is summarized below (Figure credit - [Finn et al. ](http://proceedings.mlr.press/v70/finn17a.html)).
<center width="100%"><img src="https://github.com/PyTorchLightning/lightning-tutorials/raw/main/course_UvA-DL/12-meta-learning/MAML_algorithm.svg" width="400px"></center>
To obtain gradients for the initial parameters $\theta$ from the optimized model $f_{\theta_i'}$, we actually need second-order gradients, i.e. gradients of gradients, as the support set gradients depend on $\theta$ as well.
This makes MAML computationally expensive, especially when using multiple inner loop steps.
A simpler, yet almost equally well performing alternative is First-Order MAML (FOMAML) which only uses first-order gradients.
This means that the second-order gradients are ignored, and we can calculate the outer loop gradients (line 10 in algorithm 2) simply by calculating the gradients with respect to $\theta_i'$, and using those as the update for $\theta$.
Hence, the new update rule becomes:
$$\theta\leftarrow\theta-\beta\sum_{\mathcal{T}_i\sim p(\mathcal{T})}\nabla_{\theta_i'}\mathcal{L}_{\mathcal{T}_i}(f_{\theta_i'})$$
Note the change of $\theta$ to $\theta_i'$ for $\nabla$.
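To make the first-order update rule concrete, the following sketch shows one FOMAML outer-loop step. It is illustrative only: `model` (an `nn.Module`), `tasks` (a list of support/query tensor tuples), and `loss_fn` are assumptions rather than objects defined in this tutorial, and the actual implementation we use is the ProtoMAML module further below.
```
# Minimal first-order MAML (FOMAML) sketch for a single outer-loop step. Purely illustrative:
# `model`, `tasks` (list of (support_x, support_y, query_x, query_y) tuples) and `loss_fn`
# are assumed to exist; the tutorial's real implementation follows as part of ProtoMAML.
from copy import deepcopy

import torch

def fomaml_outer_step(model, tasks, loss_fn, inner_lr=0.1, outer_lr=1e-3, inner_steps=1):
    outer_grads = [torch.zeros_like(p) for p in model.parameters()]
    for support_x, support_y, query_x, query_y in tasks:
        local_model = deepcopy(model)  # theta_i' starts at theta
        inner_opt = torch.optim.SGD(local_model.parameters(), lr=inner_lr)
        for _ in range(inner_steps):  # inner loop: adapt the copy on the support set
            inner_opt.zero_grad()
            loss_fn(local_model(support_x), support_y).backward()
            inner_opt.step()
        local_model.zero_grad()
        loss_fn(local_model(query_x), query_y).backward()  # query loss at theta_i'
        for g, p_local in zip(outer_grads, local_model.parameters()):
            g += p_local.grad  # first-order: treat gradients at theta_i' as gradients at theta
    with torch.no_grad():  # outer loop: update theta with the accumulated gradients
        for p, g in zip(model.parameters(), outer_grads):
            p -= outer_lr * g / len(tasks)
```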
### ProtoMAML
A problem of MAML is how to design the output classification layer.
In case the tasks have different numbers of classes, we need to initialize the output layer with zeros or randomly in every iteration.
Even if we always have the same number of classes, we just start from random predictions.
This requires several inner loop steps to reach a reasonable classification result.
To overcome this problem, Triantafillou et al. (2020) propose to combine the merits of Prototypical Networks and MAML.
Specifically, we can use prototypes to initialize our output layer to have a strong initialization.
Specifically, it can be shown that the softmax over Euclidean distances can be reformulated as a linear layer followed by a softmax.
To see this, let's first write out the negative euclidean distance between a feature vector $f_{\theta}(\mathbf{x}^{*})$ of a new data point $\mathbf{x}^{*}$ to a prototype $\mathbf{v}_c$ of class $c$:
$$
-||f_{\theta}(\mathbf{x}^{*})-\mathbf{v}_c||^2=-f_{\theta}(\mathbf{x}^{*})^Tf_{\theta}(\mathbf{x}^{*})+2\mathbf{v}_c^{T}f_{\theta}(\mathbf{x}^{*})-\mathbf{v}_c^T\mathbf{v}_c
$$
We perform the classification across all classes $c\in\mathcal{C}$ and take a softmax on the distance.
Hence, any term that is the same for all classes can be removed without changing the output probabilities.
In the equation above, this is true for $-f_{\theta}(\mathbf{x}^{*})^Tf_{\theta}(\mathbf{x}^{*})$ since it is independent of any class prototype.
Thus, we can write:
$$
-||f_{\theta}(\mathbf{x}^{*})-\mathbf{v}_c||^2=2\mathbf{v}_c^{T}f_{\theta}(\mathbf{x}^{*})-||\mathbf{v}_c||^2+\text{constant}
$$
Taking a second look at the equation above, it looks a lot like a linear layer.
For this, we use $\mathbf{W}_{c,\cdot}=2\mathbf{v}_c$ and $b_c=-||\mathbf{v}_c||^2$ which gives us the linear layer $\mathbf{W}f_{\theta}(\mathbf{x}^{*})+\mathbf{b}$.
Hence, if we initialize the output weight with twice the prototypes, and the biases by the negative squared L2 norm of the prototypes, we start with a Prototypical Network.
MAML allows us to adapt this layer and the rest of the network further.
In the following, we will implement First-Order ProtoMAML for few-shot classification.
The implementation of MAML would be the same except the output layer initialization.
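As a quick sanity check of the equivalence derived above, the following snippet compares the two formulations on random tensors; it is purely illustrative and not part of the training code below.
```
# Illustrative check with random tensors (not part of the training code): a linear layer with
# W_c = 2 v_c and b_c = -||v_c||^2 gives the same class probabilities as the softmax over
# negative squared Euclidean distances, since the dropped term is constant across classes.
import torch
import torch.nn.functional as F

prototypes = torch.randn(5, 64)  # v_c for N_way=5 classes
feats = torch.randn(3, 64)       # f_theta(x*) for three query examples

dist = ((feats[:, None, :] - prototypes[None, :, :]) ** 2).sum(dim=-1)
probs_protonet = F.softmax(-dist, dim=-1)

init_weight = 2 * prototypes                  # W
init_bias = -(prototypes**2).sum(dim=-1)      # b
probs_linear = F.softmax(F.linear(feats, init_weight, init_bias), dim=-1)

print(torch.allclose(probs_protonet, probs_linear, atol=1e-4))  # True (up to numerical precision)
```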
### ProtoMAML implementation
For implementing ProtoMAML, we can follow Algorithm 2 with minor modifications.
At each training step, we first sample a batch of tasks, and a support and query set for each task.
In our case of few-shot classification, this means that we simply sample multiple support-query set pairs from our sampler.
For each task, we finetune our current model on the support set.
However, since we need to remember the original parameters for the other tasks, the outer loop gradient update and future training steps, we need to create a copy of our model, and finetune only the copy.
We can copy a model by using standard Python functions like `deepcopy`.
The inner loop is implemented in the function `adapt_few_shot` in the PyTorch Lightning module below.
After finetuning the model, we apply it on the query set and calculate the first-order gradients with respect to the original parameters $\theta$.
In contrast to simple MAML, we also have to consider the gradients with respect to the output layer initialization, i.e. the prototypes, since they directly rely on $\theta$.
To realize this efficiently, we take two steps.
First, we calculate the prototypes by applying the original model, i.e. not the copied model, on the support elements.
When initializing the output layer, we detach the prototypes to stop the gradients.
This is because in the inner loop itself, we do not want to consider gradients through the prototypes back to the original model.
However, after the inner loop is finished, we re-attach the computation graph of the prototypes by writing `output_weight = (output_weight - init_weight).detach() + init_weight`.
While this line does not change the value of the variable `output_weight`, it adds a dependency on the prototype initialization `init_weight` to its computation graph.
Thus, if we call `.backward` on `output_weight`, we will automatically calculate the first-order gradients with respect to the prototype initialization in the original model.
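To see this re-attach trick in isolation, here is a minimal, hypothetical example with a single scalar parameter standing in for the encoder; the names are made up for this illustration and it is not part of the ProtoMAML code.
```
# Standalone, hypothetical illustration of the re-attach trick with scalar tensors.
import torch

theta = torch.tensor(1.0, requires_grad=True)   # stands in for the encoder parameters
init_weight = 2 * theta                         # prototype-based init, depends on theta
output_weight = init_weight.detach().clone().requires_grad_()  # inner loop adapts a detached copy

output_weight = output_weight - 0.5             # pretend the inner loop changed the copy

# Re-attach: the value stays the same, but output_weight now depends on init_weight (and theta)
output_weight = (output_weight - init_weight).detach() + init_weight

output_weight.backward()
print(theta.grad)  # tensor(2.) -> first-order gradients flow back through the initialization
```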
After calculating all gradients and summing them together in the original model, we can take a standard optimizer step.
PyTorch Lightning's `training_step` is, however, designed to return a loss tensor on which `.backward` is called automatically.
Since this is not possible here, we need to perform the optimization step ourselves.
All details can be found in the code below.
For implementing (Proto-)MAML with second-order gradients, it is recommended to use libraries such as [$\nabla$higher](https://github.com/facebookresearch/higher) from Facebook AI Research.
For simplicity, we stick with first-order methods here.
```
class ProtoMAML(pl.LightningModule):
def __init__(self, proto_dim, lr, lr_inner, lr_output, num_inner_steps):
"""Inputs.
proto_dim - Dimensionality of prototype feature space
lr - Learning rate of the outer loop Adam optimizer
lr_inner - Learning rate of the inner loop SGD optimizer
lr_output - Learning rate for the output layer in the inner loop
num_inner_steps - Number of inner loop updates to perform
"""
super().__init__()
self.save_hyperparameters()
self.model = get_convnet(output_size=self.hparams.proto_dim)
def configure_optimizers(self):
optimizer = optim.AdamW(self.parameters(), lr=self.hparams.lr)
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[140, 180], gamma=0.1)
return [optimizer], [scheduler]
def run_model(self, local_model, output_weight, output_bias, imgs, labels):
# Execute a model with given output layer weights and inputs
feats = local_model(imgs)
preds = F.linear(feats, output_weight, output_bias)
loss = F.cross_entropy(preds, labels)
acc = (preds.argmax(dim=1) == labels).float()
return loss, preds, acc
def adapt_few_shot(self, support_imgs, support_targets):
# Determine prototype initialization
support_feats = self.model(support_imgs)
prototypes, classes = ProtoNet.calculate_prototypes(support_feats, support_targets)
support_labels = (classes[None, :] == support_targets[:, None]).long().argmax(dim=-1)
# Create inner-loop model and optimizer
local_model = deepcopy(self.model)
local_model.train()
local_optim = optim.SGD(local_model.parameters(), lr=self.hparams.lr_inner)
local_optim.zero_grad()
# Create output layer weights with prototype-based initialization
init_weight = 2 * prototypes
init_bias = -torch.norm(prototypes, dim=1) ** 2
output_weight = init_weight.detach().requires_grad_()
output_bias = init_bias.detach().requires_grad_()
# Optimize inner loop model on support set
for _ in range(self.hparams.num_inner_steps):
# Determine loss on the support set
loss, _, _ = self.run_model(local_model, output_weight, output_bias, support_imgs, support_labels)
# Calculate gradients and perform inner loop update
loss.backward()
local_optim.step()
# Update output layer via SGD
output_weight.data -= self.hparams.lr_output * output_weight.grad
output_bias.data -= self.hparams.lr_output * output_bias.grad
# Reset gradients
local_optim.zero_grad()
output_weight.grad.fill_(0)
output_bias.grad.fill_(0)
# Re-attach computation graph of prototypes
output_weight = (output_weight - init_weight).detach() + init_weight
output_bias = (output_bias - init_bias).detach() + init_bias
return local_model, output_weight, output_bias, classes
def outer_loop(self, batch, mode="train"):
accuracies = []
losses = []
self.model.zero_grad()
# Determine gradients for batch of tasks
for task_batch in batch:
imgs, targets = task_batch
support_imgs, query_imgs, support_targets, query_targets = split_batch(imgs, targets)
# Perform inner loop adaptation
local_model, output_weight, output_bias, classes = self.adapt_few_shot(support_imgs, support_targets)
# Determine loss of query set
query_labels = (classes[None, :] == query_targets[:, None]).long().argmax(dim=-1)
loss, preds, acc = self.run_model(local_model, output_weight, output_bias, query_imgs, query_labels)
# Calculate gradients for query set loss
if mode == "train":
loss.backward()
for p_global, p_local in zip(self.model.parameters(), local_model.parameters()):
p_global.grad += p_local.grad # First-order approx. -> add gradients of finetuned and base model
accuracies.append(acc.mean().detach())
losses.append(loss.detach())
# Perform update of base model
if mode == "train":
opt = self.optimizers()
opt.step()
opt.zero_grad()
self.log("%s_loss" % mode, sum(losses) / len(losses))
self.log("%s_acc" % mode, sum(accuracies) / len(accuracies))
def training_step(self, batch, batch_idx):
self.outer_loop(batch, mode="train")
return None # Returning None means we skip the default training optimizer steps by PyTorch Lightning
def validation_step(self, batch, batch_idx):
# Validation requires to finetune a model, hence we need to enable gradients
torch.set_grad_enabled(True)
self.outer_loop(batch, mode="val")
torch.set_grad_enabled(False)
```
### Training
To train ProtoMAML, we need to change our sampling slightly.
Instead of a single support-query set batch, we need to sample multiple.
To implement this, we use yet another sampler, which aggregates multiple batches from a `FewShotBatchSampler` and returns them together.
Additionally, we define a `collate_fn` for our data loader which takes the stack of support-query set images, and returns the tasks as a list.
This makes the tasks easier to process in the PyTorch Lightning module defined above.
The implementation of the sampler can be found below.
```
class TaskBatchSampler:
def __init__(self, dataset_targets, batch_size, N_way, K_shot, include_query=False, shuffle=True):
"""
Inputs:
dataset_targets - PyTorch tensor of the labels of the data elements.
batch_size - Number of tasks to aggregate in a batch
N_way - Number of classes to sample per batch.
K_shot - Number of examples to sample per class in the batch.
include_query - If True, returns batch of size N_way*K_shot*2, which
can be split into support and query set. Simplifies
the implementation of sampling the same classes but
distinct examples for support and query set.
shuffle - If True, examples and classes are newly shuffled in each
iteration (for training)
"""
super().__init__()
self.batch_sampler = FewShotBatchSampler(dataset_targets, N_way, K_shot, include_query, shuffle)
self.task_batch_size = batch_size
self.local_batch_size = self.batch_sampler.batch_size
def __iter__(self):
# Aggregate multiple batches before returning the indices
batch_list = []
for batch_idx, batch in enumerate(self.batch_sampler):
batch_list.extend(batch)
if (batch_idx + 1) % self.task_batch_size == 0:
yield batch_list
batch_list = []
def __len__(self):
return len(self.batch_sampler) // self.task_batch_size
def get_collate_fn(self):
# Returns a collate function that converts one big tensor into a list of task-specific tensors
def collate_fn(item_list):
imgs = torch.stack([img for img, target in item_list], dim=0)
targets = torch.stack([target for img, target in item_list], dim=0)
imgs = imgs.chunk(self.task_batch_size, dim=0)
targets = targets.chunk(self.task_batch_size, dim=0)
return list(zip(imgs, targets))
return collate_fn
```
With this sampler, the creation of the data loaders is straightforward.
Note that since many images need to be loaded for a training batch, it is recommended to use fewer workers than usual.
```
# Training constant (same as for ProtoNet)
N_WAY = 5
K_SHOT = 4
# Training set
train_protomaml_sampler = TaskBatchSampler(
train_set.targets, include_query=True, N_way=N_WAY, K_shot=K_SHOT, batch_size=16
)
train_protomaml_loader = data.DataLoader(
train_set, batch_sampler=train_protomaml_sampler, collate_fn=train_protomaml_sampler.get_collate_fn(), num_workers=2
)
# Validation set
val_protomaml_sampler = TaskBatchSampler(
val_set.targets,
include_query=True,
N_way=N_WAY,
K_shot=K_SHOT,
batch_size=1, # We do not update the parameters, hence the batch size is irrelevant here
shuffle=False,
)
val_protomaml_loader = data.DataLoader(
val_set, batch_sampler=val_protomaml_sampler, collate_fn=val_protomaml_sampler.get_collate_fn(), num_workers=2
)
```
Now, we are ready to train our ProtoMAML.
We use the same feature space size as for ProtoNet, but can use a higher learning rate since the outer loop gradients are accumulated over 16 batches.
The inner loop learning rate is set to 0.1, which is much higher than the outer loop lr because we use SGD in the inner loop instead of Adam.
Commonly, the learning rate for the output layer is chosen higher than for the base model if the base model is very deep or pre-trained.
However, for our setup, we observed no noticeable impact of using a different learning rate than for the base model.
The number of inner loop updates is another crucial hyperparameter, and depends on the similarity of our training tasks.
Since all tasks are on images from the same dataset, we notice that a single inner loop update achieves similar performance to 3 or 5 updates while training considerably faster.
However, especially in RL and NLP, a larger number of inner loop steps is often needed.
```
protomaml_model = train_model(
ProtoMAML,
proto_dim=64,
lr=1e-3,
lr_inner=0.1,
lr_output=0.1,
num_inner_steps=1, # Often values between 1 and 10
train_loader=train_protomaml_loader,
val_loader=val_protomaml_loader,
)
```
Let's have a look at the training TensorBoard.
```
# Opens tensorboard in notebook. Adjust the path to your CHECKPOINT_PATH if needed
# # %tensorboard --logdir ../saved_models/tutorial16/tensorboards/ProtoMAML/
```
<center width="100%"><img src="https://github.com/PyTorchLightning/lightning-tutorials/raw/main/course_UvA-DL/12-meta-learning/tensorboard_screenshot_ProtoMAML.png" width="1100px"></center>
One obvious difference to ProtoNet is that the loss curves look much less noisy.
This is because we average the outer loop gradients over multiple tasks, and thus have a smoother training curve.
Additionally, we only have 15k training iterations after 200 epochs.
This is again because of the task batches, which cause 16 times fewer iterations.
However, each iteration has seen 16 times more data in this experiment.
Thus, we still have a fair comparison between ProtoMAML and ProtoNet.
Looking only at the validation accuracy, one might assume that ProtoNet performs better than ProtoMAML, but we have to verify that with proper testing below.
### Testing
We test ProtoMAML in the same manner as ProtoNet, namely by picking random examples in the test set as support sets and using the rest of the dataset as the query set.
Instead of just calculating the prototypes for all examples, we need to finetune a separate model for each support set.
This is why this process is more expensive than for ProtoNet, and in our case, testing $k=\{2,4,8,16,32\}$ can take almost an hour.
Hence, we provide evaluation files besides the pretrained models.
```
def test_protomaml(model, dataset, k_shot=4):
pl.seed_everything(42)
model = model.to(device)
num_classes = dataset.targets.unique().shape[0]
# Data loader for full test set as query set
full_dataloader = data.DataLoader(dataset, batch_size=128, num_workers=4, shuffle=False, drop_last=False)
# Data loader for sampling support sets
sampler = FewShotBatchSampler(
dataset.targets, include_query=False, N_way=num_classes, K_shot=k_shot, shuffle=False, shuffle_once=False
)
sample_dataloader = data.DataLoader(dataset, batch_sampler=sampler, num_workers=2)
# We iterate through the full dataset in two manners. First, to select the k-shot batch.
# Second, to evaluate the model on all other examples
accuracies = []
for (support_imgs, support_targets), support_indices in tqdm(
zip(sample_dataloader, sampler), "Performing few-shot finetuning"
):
support_imgs = support_imgs.to(device)
support_targets = support_targets.to(device)
# Finetune new model on support set
local_model, output_weight, output_bias, classes = model.adapt_few_shot(support_imgs, support_targets)
with torch.no_grad(): # No gradients for query set needed
local_model.eval()
batch_acc = torch.zeros((0,), dtype=torch.float32, device=device)
# Evaluate all examples in test dataset
for query_imgs, query_targets in full_dataloader:
query_imgs = query_imgs.to(device)
query_targets = query_targets.to(device)
query_labels = (classes[None, :] == query_targets[:, None]).long().argmax(dim=-1)
_, _, acc = model.run_model(local_model, output_weight, output_bias, query_imgs, query_labels)
batch_acc = torch.cat([batch_acc, acc.detach()], dim=0)
# Exclude support set elements
for s_idx in support_indices:
batch_acc[s_idx] = 0
batch_acc = batch_acc.sum().item() / (batch_acc.shape[0] - len(support_indices))
accuracies.append(batch_acc)
return mean(accuracies), stdev(accuracies)
```
In contrast to training, it is recommended to use many more inner loop updates during testing.
During training, we are not interested in getting the best model from the inner loop, but the model which can provide the best gradients.
Hence, one update might already be sufficient during training, but for testing, a larger number of updates often gives a considerable performance boost.
Thus, we change the inner loop updates to 200 before testing.
```
protomaml_model.hparams.num_inner_steps = 200
```
Now, we can test our model.
For the pre-trained models, we provide a json file with the results to reduce evaluation time.
```
protomaml_result_file = os.path.join(CHECKPOINT_PATH, "protomaml_fewshot.json")
if os.path.isfile(protomaml_result_file):
# Load pre-computed results
with open(protomaml_result_file) as f:
protomaml_accuracies = json.load(f)
protomaml_accuracies = {int(k): v for k, v in protomaml_accuracies.items()}
else:
# Perform same experiments as for ProtoNet
protomaml_accuracies = dict()
for k in [2, 4, 8, 16, 32]:
protomaml_accuracies[k] = test_protomaml(protomaml_model, test_set, k_shot=k)
# Export results
with open(protomaml_result_file, "w") as f:
json.dump(protomaml_accuracies, f, indent=4)
for k in protomaml_accuracies:
print(
"Accuracy for k=%i: %4.2f%% (+-%4.2f%%)"
% (k, 100.0 * protomaml_accuracies[k][0], 100.0 * protomaml_accuracies[k][1])
)
```
Again, let's plot the results in our plot from before.
```
ax = plot_few_shot(protonet_accuracies, name="ProtoNet", color="C1")
plot_few_shot(protomaml_accuracies, name="ProtoMAML", color="C2", ax=ax)
plt.show()
plt.close()
```
We can observe that ProtoMAML is indeed able to outperform ProtoNet for $k>4$.
This is because with more samples, it becomes more relevant to also adapt the base model's parameters.
Meanwhile, for $k=2$, ProtoMAML achieves lower performance than ProtoNet.
This is likely also related to choosing 200 inner loop updates, since with more updates comes a greater risk of overfitting to the support set.
Nonetheless, the high standard deviation for $k=2$ makes it hard to draw any statistically valid conclusion.
Overall, we can conclude that ProtoMAML slightly outperforms ProtoNet for larger shot counts.
However, one disadvantage of ProtoMAML is its much longer training and testing time.
ProtoNet provides a simple, efficient, yet strong baseline for
ProtoMAML, and might be the better solution in situations where limited
resources are available.
## Domain adaptation
So far, we have evaluated our meta-learning algorithms on the same dataset on which we have trained them.
However, meta-learning algorithms are especially interesting when we want to move from one dataset to another.
So, what happens if we apply them to a dataset quite different from CIFAR?
This is what we try out below, and evaluate ProtoNet and ProtoMAML on the SVHN dataset.
### SVHN dataset
The Street View House Numbers (SVHN) dataset is a real-world image dataset for house number detection.
It is similar to MNIST in having the classes 0 to 9, but is more difficult due to its real-world setting and possibly distracting digits to the left and right of the labeled one.
Let's first load the dataset, and visualize some images to get an impression of the dataset.
```
SVHN_test_dataset = SVHN(root=DATASET_PATH, split="test", download=True, transform=transforms.ToTensor())
# Visualize some examples
NUM_IMAGES = 12
SVHN_images = [SVHN_test_dataset[np.random.randint(len(SVHN_test_dataset))][0] for idx in range(NUM_IMAGES)]
SVHN_images = torch.stack(SVHN_images, dim=0)
img_grid = torchvision.utils.make_grid(SVHN_images, nrow=6, normalize=True, pad_value=0.9)
img_grid = img_grid.permute(1, 2, 0)
plt.figure(figsize=(8, 8))
plt.title("Image examples of the SVHN dataset")
plt.imshow(img_grid)
plt.axis("off")
plt.show()
plt.close()
```
Each image is labeled with one class between 0 and 9 representing the main digit in the image.
Can our ProtoNet and ProtoMAML learn to classify the digits from only a few examples?
This is what we will test out below.
The images have the same size as CIFAR, so we can use them without changes.
We first prepare the dataset, for which we take the first 500 images per class.
For this dataset, we use our test functions as before to get an estimated performance for different number of shots.
```
imgs = np.transpose(SVHN_test_dataset.data, (0, 2, 3, 1))
targets = SVHN_test_dataset.labels
# Limit number of examples to 500 to reduce test time
min_label_count = min(500, np.bincount(SVHN_test_dataset.labels).min())
idxs = np.concatenate([np.where(targets == c)[0][:min_label_count] for c in range(1 + targets.max())], axis=0)
imgs = imgs[idxs]
targets = torch.from_numpy(targets[idxs]).long()
svhn_fewshot_dataset = ImageDataset(imgs, targets, img_transform=test_transform)
svhn_fewshot_dataset.imgs.shape
```
### Experiments
First, we can apply ProtoNet to the SVHN dataset:
```
protonet_svhn_accuracies = dict()
data_feats = None
for k in [2, 4, 8, 16, 32]:
protonet_svhn_accuracies[k], data_feats = test_proto_net(
protonet_model, svhn_fewshot_dataset, data_feats=data_feats, k_shot=k
)
print(
"Accuracy for k=%i: %4.2f%% (+-%4.2f%%)"
% (k, 100.0 * protonet_svhn_accuracies[k][0], 100 * protonet_svhn_accuracies[k][1])
)
```
It becomes clear that the results are much lower than the ones on CIFAR, and just slightly above random for $k=2$.
How about ProtoMAML?
We again provide evaluation files since the evaluation can take several minutes to complete.
```
protomaml_result_file = os.path.join(CHECKPOINT_PATH, "protomaml_svhn_fewshot.json")
if os.path.isfile(protomaml_result_file):
# Load pre-computed results
with open(protomaml_result_file) as f:
protomaml_svhn_accuracies = json.load(f)
protomaml_svhn_accuracies = {int(k): v for k, v in protomaml_svhn_accuracies.items()}
else:
# Perform same experiments as for ProtoNet
protomaml_svhn_accuracies = dict()
for k in [2, 4, 8, 16, 32]:
protomaml_svhn_accuracies[k] = test_protomaml(protomaml_model, svhn_fewshot_dataset, k_shot=k)
# Export results
with open(protomaml_result_file, "w") as f:
json.dump(protomaml_svhn_accuracies, f, indent=4)
for k in protomaml_svhn_accuracies:
print(
"Accuracy for k=%i: %4.2f%% (+-%4.2f%%)"
% (k, 100.0 * protomaml_svhn_accuracies[k][0], 100.0 * protomaml_svhn_accuracies[k][1])
)
```
While ProtoMAML shows similar performance to ProtoNet for $k\leq 4$, it considerably outperforms ProtoNet for more than 8 shots.
This is because we can adapt the base model, which is crucial when the data does not fit the original training data.
For $k=32$, ProtoMAML achieves $13\%$ higher classification accuracy than ProtoNet which already starts to flatten out.
We can see the trend more clearly in our plot below.
```
ax = plot_few_shot(protonet_svhn_accuracies, name="ProtoNet", color="C1")
plot_few_shot(protomaml_svhn_accuracies, name="ProtoMAML", color="C2", ax=ax)
plt.show()
plt.close()
```
## Conclusion
In this notebook, we have discussed meta-learning algorithms that learn to adapt to new classes and/or tasks with just a few samples.
We have discussed three popular algorithms, namely ProtoNet, MAML and ProtoMAML.
On the few-shot image classification task of CIFAR100, ProtoNet and ProtoMAML turned out to perform similarly well, with slight benefits for ProtoMAML at larger shot sizes.
However, for out-of-distribution data (SVHN), the ability to optimize the base model proved to be crucial and gave ProtoMAML considerable performance gains over ProtoNet.
Nonetheless, ProtoNet offers other advantages compared to ProtoMAML, namely a very cheap training and test cost as well as a simpler implementation.
Hence, it is recommended to consider whether the additional complexity of ProtoMAML is worth the extra training and testing cost, or whether ProtoNet is already sufficient for the task at hand.
### References
[1] Snell, Jake, Kevin Swersky, and Richard S. Zemel.
"Prototypical networks for few-shot learning."
NeurIPS 2017.
([link](https://arxiv.org/pdf/1703.05175.pdf))
[2] Chelsea Finn, Pieter Abbeel, Sergey Levine.
"Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks."
ICML 2017.
([link](http://proceedings.mlr.press/v70/finn17a.html))
[3] Triantafillou, Eleni, Tyler Zhu, Vincent Dumoulin, Pascal Lamblin, Utku Evci, Kelvin Xu, Ross Goroshin et al.
"Meta-dataset: A dataset of datasets for learning to learn from few examples."
ICLR 2020.
([link](https://openreview.net/pdf?id=rkgAGAVKPr))
## Congratulations - Time to Join the Community!
Congratulations on completing this notebook tutorial! If you enjoyed this and would like to join the Lightning
movement, you can do so in the following ways!
### Star [Lightning](https://github.com/PyTorchLightning/pytorch-lightning) on GitHub
The easiest way to help our community is just by starring the GitHub repos! This helps raise awareness of the cool
tools we're building.
### Join our [Slack](https://join.slack.com/t/pytorch-lightning/shared_invite/zt-pw5v393p-qRaDgEk24~EjiZNBpSQFgQ)!
The best way to keep up to date on the latest advancements is to join our community! Make sure to introduce yourself
and share your interests in `#general` channel
### Contributions !
The best way to contribute to our community is to become a code contributor! At any time you can go to
[Lightning](https://github.com/PyTorchLightning/pytorch-lightning) or [Bolt](https://github.com/PyTorchLightning/lightning-bolts)
GitHub Issues page and filter for "good first issue".
* [Lightning good first issue](https://github.com/PyTorchLightning/pytorch-lightning/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
* [Bolt good first issue](https://github.com/PyTorchLightning/lightning-bolts/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
* You can also contribute your own notebooks with useful examples !
### Great thanks from the entire Pytorch Lightning Team for your interest !
def split_batch(imgs, targets):
support_imgs, query_imgs = imgs.chunk(2, dim=0)
support_targets, query_targets = targets.chunk(2, dim=0)
return support_imgs, query_imgs, support_targets, query_targets
imgs, targets = next(iter(val_data_loader)) # We use the validation set since it does not apply augmentations
support_imgs, query_imgs, _, _ = split_batch(imgs, targets)
support_grid = torchvision.utils.make_grid(support_imgs, nrow=K_SHOT, normalize=True, pad_value=0.9)
support_grid = support_grid.permute(1, 2, 0)
query_grid = torchvision.utils.make_grid(query_imgs, nrow=K_SHOT, normalize=True, pad_value=0.9)
query_grid = query_grid.permute(1, 2, 0)
fig, ax = plt.subplots(1, 2, figsize=(8, 5))
ax[0].imshow(support_grid)
ax[0].set_title("Support set")
ax[0].axis("off")
ax[1].imshow(query_grid)
ax[1].set_title("Query set")
ax[1].axis("off")
fig.suptitle("Few Shot Batch", weight="bold")
fig.show()
plt.close(fig)
def get_convnet(output_size):
convnet = torchvision.models.DenseNet(
growth_rate=32,
block_config=(6, 6, 6, 6),
bn_size=2,
num_init_features=64,
num_classes=output_size, # Output dimensionality
)
return convnet
class ProtoNet(pl.LightningModule):
def __init__(self, proto_dim, lr):
"""Inputs.
proto_dim - Dimensionality of prototype feature space
lr - Learning rate of Adam optimizer
"""
super().__init__()
self.save_hyperparameters()
self.model = get_convnet(output_size=self.hparams.proto_dim)
def configure_optimizers(self):
optimizer = optim.AdamW(self.parameters(), lr=self.hparams.lr)
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[140, 180], gamma=0.1)
return [optimizer], [scheduler]
@staticmethod
def calculate_prototypes(features, targets):
# Given a stack of features vectors and labels, return class prototypes
# features - shape [N, proto_dim], targets - shape [N]
classes, _ = torch.unique(targets).sort() # Determine which classes we have
prototypes = []
for c in classes:
p = features[torch.where(targets == c)[0]].mean(dim=0) # Average class feature vectors
prototypes.append(p)
prototypes = torch.stack(prototypes, dim=0)
# Return the 'classes' tensor to know which prototype belongs to which class
return prototypes, classes
def classify_feats(self, prototypes, classes, feats, targets):
# Classify new examples with prototypes and return classification error
dist = torch.pow(prototypes[None, :] - feats[:, None], 2).sum(dim=2) # Squared euclidean distance
preds = F.log_softmax(-dist, dim=1)
labels = (classes[None, :] == targets[:, None]).long().argmax(dim=-1)
acc = (preds.argmax(dim=1) == labels).float().mean()
return preds, labels, acc
def calculate_loss(self, batch, mode):
# Determine training loss for a given support and query set
imgs, targets = batch
features = self.model(imgs) # Encode all images of support and query set
support_feats, query_feats, support_targets, query_targets = split_batch(features, targets)
prototypes, classes = ProtoNet.calculate_prototypes(support_feats, support_targets)
preds, labels, acc = self.classify_feats(prototypes, classes, query_feats, query_targets)
loss = F.cross_entropy(preds, labels)
self.log("%s_loss" % mode, loss)
self.log("%s_acc" % mode, acc)
return loss
def training_step(self, batch, batch_idx):
return self.calculate_loss(batch, mode="train")
def validation_step(self, batch, batch_idx):
self.calculate_loss(batch, mode="val")
def train_model(model_class, train_loader, val_loader, **kwargs):
trainer = pl.Trainer(
default_root_dir=os.path.join(CHECKPOINT_PATH, model_class.__name__),
gpus=1 if str(device) == "cuda:0" else 0,
max_epochs=200,
callbacks=[
ModelCheckpoint(save_weights_only=True, mode="max", monitor="val_acc"),
LearningRateMonitor("epoch"),
],
progress_bar_refresh_rate=0,
)
trainer.logger._default_hp_metric = None
# Check whether pretrained model exists. If yes, load it and skip training
pretrained_filename = os.path.join(CHECKPOINT_PATH, model_class.__name__ + ".ckpt")
if os.path.isfile(pretrained_filename):
print("Found pretrained model at %s, loading..." % pretrained_filename)
# Automatically loads the model with the saved hyperparameters
model = model_class.load_from_checkpoint(pretrained_filename)
else:
        pl.seed_everything(42)  # To be reproducible
model = model_class(**kwargs)
trainer.fit(model, train_loader, val_loader)
model = model_class.load_from_checkpoint(
trainer.checkpoint_callback.best_model_path
) # Load best checkpoint after training
return model
protonet_model = train_model(
ProtoNet, proto_dim=64, lr=2e-4, train_loader=train_data_loader, val_loader=val_data_loader
)
# Opens tensorboard in notebook. Adjust the path to your CHECKPOINT_PATH if needed
# # %tensorboard --logdir ../saved_models/tutorial16/tensorboards/ProtoNet/
@torch.no_grad()
def test_proto_net(model, dataset, data_feats=None, k_shot=4):
"""Inputs.
model - Pretrained ProtoNet model
dataset - The dataset on which the test should be performed.
Should be instance of ImageDataset
data_feats - The encoded features of all images in the dataset.
If None, they will be newly calculated, and returned
for later usage.
k_shot - Number of examples per class in the support set.
"""
model = model.to(device)
model.eval()
num_classes = dataset.targets.unique().shape[0]
exmps_per_class = dataset.targets.shape[0] // num_classes # We assume uniform example distribution here
# The encoder network remains unchanged across k-shot settings. Hence, we only need
# to extract the features for all images once.
if data_feats is None:
# Dataset preparation
dataloader = data.DataLoader(dataset, batch_size=128, num_workers=4, shuffle=False, drop_last=False)
img_features = []
img_targets = []
for imgs, targets in tqdm(dataloader, "Extracting image features", leave=False):
imgs = imgs.to(device)
feats = model.model(imgs)
img_features.append(feats.detach().cpu())
img_targets.append(targets)
img_features = torch.cat(img_features, dim=0)
img_targets = torch.cat(img_targets, dim=0)
# Sort by classes, so that we obtain tensors of shape [num_classes, exmps_per_class, ...]
# Makes it easier to process later
img_targets, sort_idx = img_targets.sort()
img_targets = img_targets.reshape(num_classes, exmps_per_class).transpose(0, 1)
img_features = img_features[sort_idx].reshape(num_classes, exmps_per_class, -1).transpose(0, 1)
else:
img_features, img_targets = data_feats
    # We iterate through the full dataset in two ways: first, to select the k-shot batch;
    # second, to evaluate the model on all other examples.
accuracies = []
for k_idx in tqdm(range(0, img_features.shape[0], k_shot), "Evaluating prototype classification", leave=False):
# Select support set and calculate prototypes
k_img_feats = img_features[k_idx : k_idx + k_shot].flatten(0, 1)
k_targets = img_targets[k_idx : k_idx + k_shot].flatten(0, 1)
prototypes, proto_classes = model.calculate_prototypes(k_img_feats, k_targets)
# Evaluate accuracy on the rest of the dataset
batch_acc = 0
for e_idx in range(0, img_features.shape[0], k_shot):
if k_idx == e_idx: # Do not evaluate on the support set examples
continue
e_img_feats = img_features[e_idx : e_idx + k_shot].flatten(0, 1)
e_targets = img_targets[e_idx : e_idx + k_shot].flatten(0, 1)
_, _, acc = model.classify_feats(prototypes, proto_classes, e_img_feats, e_targets)
batch_acc += acc.item()
batch_acc /= img_features.shape[0] // k_shot - 1
accuracies.append(batch_acc)
return (mean(accuracies), stdev(accuracies)), (img_features, img_targets)
protonet_accuracies = dict()
data_feats = None
for k in [2, 4, 8, 16, 32]:
protonet_accuracies[k], data_feats = test_proto_net(protonet_model, test_set, data_feats=data_feats, k_shot=k)
print(
"Accuracy for k=%i: %4.2f%% (+-%4.2f%%)"
% (k, 100.0 * protonet_accuracies[k][0], 100 * protonet_accuracies[k][1])
)
def plot_few_shot(acc_dict, name, color=None, ax=None):
sns.set()
if ax is None:
fig, ax = plt.subplots(1, 1, figsize=(5, 3))
ks = sorted(list(acc_dict.keys()))
mean_accs = [acc_dict[k][0] for k in ks]
std_accs = [acc_dict[k][1] for k in ks]
ax.plot(ks, mean_accs, marker="o", markeredgecolor="k", markersize=6, label=name, color=color)
ax.fill_between(
ks,
[m - s for m, s in zip(mean_accs, std_accs)],
[m + s for m, s in zip(mean_accs, std_accs)],
alpha=0.2,
color=color,
)
ax.set_xticks(ks)
ax.set_xlim([ks[0] - 1, ks[-1] + 1])
ax.set_xlabel("Number of shots per class", weight="bold")
ax.set_ylabel("Accuracy", weight="bold")
if len(ax.get_title()) == 0:
ax.set_title("Few-Shot Performance " + name, weight="bold")
else:
ax.set_title(ax.get_title() + " and " + name, weight="bold")
ax.legend()
return ax
ax = plot_few_shot(protonet_accuracies, name="ProtoNet", color="C1")
plt.show()
plt.close()
class ProtoMAML(pl.LightningModule):
def __init__(self, proto_dim, lr, lr_inner, lr_output, num_inner_steps):
"""Inputs.
proto_dim - Dimensionality of prototype feature space
lr - Learning rate of the outer loop Adam optimizer
lr_inner - Learning rate of the inner loop SGD optimizer
lr_output - Learning rate for the output layer in the inner loop
num_inner_steps - Number of inner loop updates to perform
"""
super().__init__()
self.save_hyperparameters()
self.model = get_convnet(output_size=self.hparams.proto_dim)
def configure_optimizers(self):
optimizer = optim.AdamW(self.parameters(), lr=self.hparams.lr)
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[140, 180], gamma=0.1)
return [optimizer], [scheduler]
def run_model(self, local_model, output_weight, output_bias, imgs, labels):
# Execute a model with given output layer weights and inputs
feats = local_model(imgs)
preds = F.linear(feats, output_weight, output_bias)
loss = F.cross_entropy(preds, labels)
acc = (preds.argmax(dim=1) == labels).float()
return loss, preds, acc
def adapt_few_shot(self, support_imgs, support_targets):
# Determine prototype initialization
support_feats = self.model(support_imgs)
prototypes, classes = ProtoNet.calculate_prototypes(support_feats, support_targets)
support_labels = (classes[None, :] == support_targets[:, None]).long().argmax(dim=-1)
# Create inner-loop model and optimizer
local_model = deepcopy(self.model)
local_model.train()
local_optim = optim.SGD(local_model.parameters(), lr=self.hparams.lr_inner)
local_optim.zero_grad()
# Create output layer weights with prototype-based initialization
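        # (This initialization follows from expanding the squared Euclidean distance:
        #  -||f(x) - p_k||^2 = 2 p_k^T f(x) - ||p_k||^2 - ||f(x)||^2, where the last term
        #  is the same for every class k, so a linear layer with weight 2*p_k and bias
        #  -||p_k||^2 reproduces the prototype classifier's class ranking.)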
init_weight = 2 * prototypes
init_bias = -torch.norm(prototypes, dim=1) ** 2
output_weight = init_weight.detach().requires_grad_()
output_bias = init_bias.detach().requires_grad_()
# Optimize inner loop model on support set
for _ in range(self.hparams.num_inner_steps):
# Determine loss on the support set
loss, _, _ = self.run_model(local_model, output_weight, output_bias, support_imgs, support_labels)
# Calculate gradients and perform inner loop update
loss.backward()
local_optim.step()
# Update output layer via SGD
output_weight.data -= self.hparams.lr_output * output_weight.grad
output_bias.data -= self.hparams.lr_output * output_bias.grad
# Reset gradients
local_optim.zero_grad()
output_weight.grad.fill_(0)
output_bias.grad.fill_(0)
# Re-attach computation graph of prototypes
output_weight = (output_weight - init_weight).detach() + init_weight
output_bias = (output_bias - init_bias).detach() + init_bias
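        # (The subtract-detach-add trick keeps the adapted values while routing gradients
        #  only through init_weight/init_bias, so the outer loop can backpropagate into
        #  the prototypes and hence into the encoder.)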
return local_model, output_weight, output_bias, classes
def outer_loop(self, batch, mode="train"):
accuracies = []
losses = []
self.model.zero_grad()
# Determine gradients for batch of tasks
for task_batch in batch:
imgs, targets = task_batch
support_imgs, query_imgs, support_targets, query_targets = split_batch(imgs, targets)
# Perform inner loop adaptation
local_model, output_weight, output_bias, classes = self.adapt_few_shot(support_imgs, support_targets)
# Determine loss of query set
query_labels = (classes[None, :] == query_targets[:, None]).long().argmax(dim=-1)
loss, preds, acc = self.run_model(local_model, output_weight, output_bias, query_imgs, query_labels)
# Calculate gradients for query set loss
if mode == "train":
loss.backward()
for p_global, p_local in zip(self.model.parameters(), local_model.parameters()):
p_global.grad += p_local.grad # First-order approx. -> add gradients of finetuned and base model
accuracies.append(acc.mean().detach())
losses.append(loss.detach())
# Perform update of base model
if mode == "train":
opt = self.optimizers()
opt.step()
opt.zero_grad()
self.log("%s_loss" % mode, sum(losses) / len(losses))
self.log("%s_acc" % mode, sum(accuracies) / len(accuracies))
def training_step(self, batch, batch_idx):
self.outer_loop(batch, mode="train")
return None # Returning None means we skip the default training optimizer steps by PyTorch Lightning
def validation_step(self, batch, batch_idx):
# Validation requires to finetune a model, hence we need to enable gradients
torch.set_grad_enabled(True)
self.outer_loop(batch, mode="val")
torch.set_grad_enabled(False)
class TaskBatchSampler:
def __init__(self, dataset_targets, batch_size, N_way, K_shot, include_query=False, shuffle=True):
"""
Inputs:
dataset_targets - PyTorch tensor of the labels of the data elements.
batch_size - Number of tasks to aggregate in a batch
N_way - Number of classes to sample per batch.
K_shot - Number of examples to sample per class in the batch.
include_query - If True, returns batch of size N_way*K_shot*2, which
can be split into support and query set. Simplifies
the implementation of sampling the same classes but
distinct examples for support and query set.
shuffle - If True, examples and classes are newly shuffled in each
iteration (for training)
"""
super().__init__()
self.batch_sampler = FewShotBatchSampler(dataset_targets, N_way, K_shot, include_query, shuffle)
self.task_batch_size = batch_size
self.local_batch_size = self.batch_sampler.batch_size
def __iter__(self):
# Aggregate multiple batches before returning the indices
batch_list = []
for batch_idx, batch in enumerate(self.batch_sampler):
batch_list.extend(batch)
if (batch_idx + 1) % self.task_batch_size == 0:
yield batch_list
batch_list = []
def __len__(self):
return len(self.batch_sampler) // self.task_batch_size
def get_collate_fn(self):
# Returns a collate function that converts one big tensor into a list of task-specific tensors
def collate_fn(item_list):
imgs = torch.stack([img for img, target in item_list], dim=0)
targets = torch.stack([target for img, target in item_list], dim=0)
imgs = imgs.chunk(self.task_batch_size, dim=0)
targets = targets.chunk(self.task_batch_size, dim=0)
return list(zip(imgs, targets))
return collate_fn
# Training constants (same as for ProtoNet)
N_WAY = 5
K_SHOT = 4
# Training set
train_protomaml_sampler = TaskBatchSampler(
train_set.targets, include_query=True, N_way=N_WAY, K_shot=K_SHOT, batch_size=16
)
train_protomaml_loader = data.DataLoader(
train_set, batch_sampler=train_protomaml_sampler, collate_fn=train_protomaml_sampler.get_collate_fn(), num_workers=2
)
# Validation set
val_protomaml_sampler = TaskBatchSampler(
val_set.targets,
include_query=True,
N_way=N_WAY,
K_shot=K_SHOT,
batch_size=1, # We do not update the parameters, hence the batch size is irrelevant here
shuffle=False,
)
val_protomaml_loader = data.DataLoader(
val_set, batch_sampler=val_protomaml_sampler, collate_fn=val_protomaml_sampler.get_collate_fn(), num_workers=2
)
protomaml_model = train_model(
ProtoMAML,
proto_dim=64,
lr=1e-3,
lr_inner=0.1,
lr_output=0.1,
num_inner_steps=1, # Often values between 1 and 10
train_loader=train_protomaml_loader,
val_loader=val_protomaml_loader,
)
# Opens tensorboard in notebook. Adjust the path to your CHECKPOINT_PATH if needed
# # %tensorboard --logdir ../saved_models/tutorial16/tensorboards/ProtoMAML/
def test_protomaml(model, dataset, k_shot=4):
pl.seed_everything(42)
model = model.to(device)
num_classes = dataset.targets.unique().shape[0]
# Data loader for full test set as query set
full_dataloader = data.DataLoader(dataset, batch_size=128, num_workers=4, shuffle=False, drop_last=False)
# Data loader for sampling support sets
sampler = FewShotBatchSampler(
dataset.targets, include_query=False, N_way=num_classes, K_shot=k_shot, shuffle=False, shuffle_once=False
)
sample_dataloader = data.DataLoader(dataset, batch_sampler=sampler, num_workers=2)
    # We iterate through the full dataset in two ways: first, to select the k-shot batch;
    # second, to evaluate the model on all other examples.
accuracies = []
for (support_imgs, support_targets), support_indices in tqdm(
zip(sample_dataloader, sampler), "Performing few-shot finetuning"
):
support_imgs = support_imgs.to(device)
support_targets = support_targets.to(device)
# Finetune new model on support set
local_model, output_weight, output_bias, classes = model.adapt_few_shot(support_imgs, support_targets)
with torch.no_grad(): # No gradients for query set needed
local_model.eval()
batch_acc = torch.zeros((0,), dtype=torch.float32, device=device)
# Evaluate all examples in test dataset
for query_imgs, query_targets in full_dataloader:
query_imgs = query_imgs.to(device)
query_targets = query_targets.to(device)
query_labels = (classes[None, :] == query_targets[:, None]).long().argmax(dim=-1)
_, _, acc = model.run_model(local_model, output_weight, output_bias, query_imgs, query_labels)
batch_acc = torch.cat([batch_acc, acc.detach()], dim=0)
# Exclude support set elements
for s_idx in support_indices:
batch_acc[s_idx] = 0
batch_acc = batch_acc.sum().item() / (batch_acc.shape[0] - len(support_indices))
accuracies.append(batch_acc)
return mean(accuracies), stdev(accuracies)
protomaml_model.hparams.num_inner_steps = 200
protomaml_result_file = os.path.join(CHECKPOINT_PATH, "protomaml_fewshot.json")
if os.path.isfile(protomaml_result_file):
# Load pre-computed results
with open(protomaml_result_file) as f:
protomaml_accuracies = json.load(f)
protomaml_accuracies = {int(k): v for k, v in protomaml_accuracies.items()}
else:
# Perform same experiments as for ProtoNet
protomaml_accuracies = dict()
for k in [2, 4, 8, 16, 32]:
protomaml_accuracies[k] = test_protomaml(protomaml_model, test_set, k_shot=k)
# Export results
with open(protomaml_result_file, "w") as f:
json.dump(protomaml_accuracies, f, indent=4)
for k in protomaml_accuracies:
print(
"Accuracy for k=%i: %4.2f%% (+-%4.2f%%)"
% (k, 100.0 * protomaml_accuracies[k][0], 100.0 * protomaml_accuracies[k][1])
)
ax = plot_few_shot(protonet_accuracies, name="ProtoNet", color="C1")
plot_few_shot(protomaml_accuracies, name="ProtoMAML", color="C2", ax=ax)
plt.show()
plt.close()
SVHN_test_dataset = SVHN(root=DATASET_PATH, split="test", download=True, transform=transforms.ToTensor())
# Visualize some examples
NUM_IMAGES = 12
SVHN_images = [SVHN_test_dataset[np.random.randint(len(SVHN_test_dataset))][0] for idx in range(NUM_IMAGES)]
SVHN_images = torch.stack(SVHN_images, dim=0)
img_grid = torchvision.utils.make_grid(SVHN_images, nrow=6, normalize=True, pad_value=0.9)
img_grid = img_grid.permute(1, 2, 0)
plt.figure(figsize=(8, 8))
plt.title("Image examples of the SVHN dataset")
plt.imshow(img_grid)
plt.axis("off")
plt.show()
plt.close()
imgs = np.transpose(SVHN_test_dataset.data, (0, 2, 3, 1))
targets = SVHN_test_dataset.labels
# Limit number of examples to 500 to reduce test time
min_label_count = min(500, np.bincount(SVHN_test_dataset.labels).min())
idxs = np.concatenate([np.where(targets == c)[0][:min_label_count] for c in range(1 + targets.max())], axis=0)
imgs = imgs[idxs]
targets = torch.from_numpy(targets[idxs]).long()
svhn_fewshot_dataset = ImageDataset(imgs, targets, img_transform=test_transform)
svhn_fewshot_dataset.imgs.shape
protonet_svhn_accuracies = dict()
data_feats = None
for k in [2, 4, 8, 16, 32]:
protonet_svhn_accuracies[k], data_feats = test_proto_net(
protonet_model, svhn_fewshot_dataset, data_feats=data_feats, k_shot=k
)
print(
"Accuracy for k=%i: %4.2f%% (+-%4.2f%%)"
% (k, 100.0 * protonet_svhn_accuracies[k][0], 100 * protonet_svhn_accuracies[k][1])
)
protomaml_result_file = os.path.join(CHECKPOINT_PATH, "protomaml_svhn_fewshot.json")
if os.path.isfile(protomaml_result_file):
# Load pre-computed results
with open(protomaml_result_file) as f:
protomaml_svhn_accuracies = json.load(f)
protomaml_svhn_accuracies = {int(k): v for k, v in protomaml_svhn_accuracies.items()}
else:
# Perform same experiments as for ProtoNet
protomaml_svhn_accuracies = dict()
for k in [2, 4, 8, 16, 32]:
protomaml_svhn_accuracies[k] = test_protomaml(protomaml_model, svhn_fewshot_dataset, k_shot=k)
# Export results
with open(protomaml_result_file, "w") as f:
json.dump(protomaml_svhn_accuracies, f, indent=4)
for k in protomaml_svhn_accuracies:
print(
"Accuracy for k=%i: %4.2f%% (+-%4.2f%%)"
% (k, 100.0 * protomaml_svhn_accuracies[k][0], 100.0 * protomaml_svhn_accuracies[k][1])
)
ax = plot_few_shot(protonet_svhn_accuracies, name="ProtoNet", color="C1")
plot_few_shot(protomaml_svhn_accuracies, name="ProtoMAML", color="C2", ax=ax)
plt.show()
plt.close()
```
import sys as sys
sys.path.append('C:/Users/Fernando-Bluesquare/Desktop/Data Science/data_pipelines-master/src')
import re
import audit.dhis as dhis
import audit.completeness as cplt
import processing.data_process as dp
from processing.data_aggregation_process import *
import pandas as pd
import psycopg2 as pypg
%load_ext autoreload
%autoreload 2
hivdr = dhis.dhis_instance(dbname, user, host, psswrd)
```
### Build the total number of patients
```
# Filter to get the list of matching data element names and their corresponding uids for the two sources, in the JSON-like structure that is used later on
art_pnls=['PNLS-DRUG-ABC + 3TC + EFV','PNLS-DRUG-ABC + 3TC + EFV sex','PNLS-DRUG-ABC + 3TC + LPV/r','PNLS-DRUG-ABC + 3TC + LPV/r sex','PNLS-DRUG-ABC + 3TC + NVP',
'PNLS-DRUG-ABC + 3TC + NVP sex', 'PNLS-DRUG-AZT + 3TC + LPV/r', 'PNLS-DRUG-AZT + 3TC + LPV/r sex','PNLS-DRUG-AZT+3TC+EFV', 'PNLS-DRUG-AZT+3TC+EFV sex',
'PNLS-DRUG-AZT+3TC+NVP', 'PNLS-DRUG-AZT+3TC+NVP sex','PNLS-DRUG-TDF + 3TC + LPV/r', 'PNLS-DRUG-TDF + 3TC + LPV/r sex', 'PNLS-DRUG-TDF + FTC + NVP',
'PNLS-DRUG-TDF + FTC + NVP sex','PNLS-DRUG-TDF+ FTC + EFV', 'PNLS-DRUG-TDF+ FTC + EFV sex', 'PNLS-DRUG-TDF+3TC+EFV sex', 'PNLS-DRUG-TDF+3TC+NVP sex','PNLS-DRUG-Autres (à préciser)',
'PNLS-DRUG-Autres (à préciser) sex']
art_cordaid = ['ABC + 3TC + EFV', 'ABC + 3TC + LPV/r', 'ABC + 3TC + NVP', 'Autres (à préciser)', 'AZT + 3TC + LPV/r', 'AZT+3TC+ EFV',
'AZT+3TC+NVP', 'TDF + 3TC + LPV/r', 'TDF + FTC + NVP', 'TDF+ FTC + EFV', 'TDF+3TC+EFV', 'TDF+3TC+NVP', 'TDF+FTC+LPV+rt']
art_pnls_withoutpre=[datel[10:] for datel in art_pnls]
datavar_json={}
for datel in art_cordaid:
pnls_el=[str('PNLS-DRUG-')+ str(matchel) for matchel in art_pnls_withoutpre if ((str(datel) in matchel) and ('sex' in matchel)) ]
if len(pnls_el) >0:
pnls_el=pnls_el[0]
else:
pnls_el=None
dict_row= {'sources':{'cordaid':{'elementname':str(datel),'cat_filter':None},'pnls':{'elementname':pnls_el,'cat_filter':None}},'preferred_source':'cordaid','methods':{'last':'quarterly'}}
datavar_json.update({str(datel):dict_row})
datavar_json['AZT+3TC+ EFV']['sources']['pnls']['elementname']='PNLS-DRUG-AZT+3TC+EFV sex'
for key in datavar_json.keys():
for source in datavar_json[str(key)]['sources'].keys():
if datavar_json[str(key)]['sources'][str(source)]['elementname']:
datavar_json[str(key)]['sources'][str(source)]['uid']=hivdr.dataelement.query('name=="'+datavar_json[str(key)]['sources'][str(source)]['elementname']+'"')['uid'].iloc[0]
datavar_json['Patients HIV']= {
'sources':{
'cordaid':{'elementname':'Nombre de patients encore sous TARV dans la structure',
'uid':'Yj8caUQs178',
'cat_filter':'Anciens cas'
},
'pnls':{'elementname':'PNLS-ARV-Patients encoresous TARV dans la structure',
'uid':'Dd2G5zI0o0a',
'cat_filter':'AC'
}
},
'preferred_source':'cordaid',
'methods':{
'last':'quarterly'
}
}
def json_to_dict_df(datavar_json,reconcile_only=False):
from objectpath import Tree
import pandas as pd
jsonTree=Tree(datavar_json)
renconcile_dict={}
dict_of_df={}
for dataelement in jsonTree.execute('$*').keys():
        # Build one combined dataframe for this data element across the two sources
data_united_df=pd.DataFrame()
for dbsource in jsonTree.execute('$*["'+str(dataelement)+'"].sources').keys():
uid_element=jsonTree.execute('$*["'+str(dataelement)+'"].sources["'+str(dbsource)+'"].uid')
if uid_element:
df = hivdr.get_data(jsonTree.execute('$*["'+str(dataelement)+'"].sources["'+str(dbsource)+'"].uid'))
df['value'] = pd.to_numeric(df['value'],'integer')
cat_filter= jsonTree.execute('$*["'+str(dataelement)+'"].sources["'+str(dbsource)+'"].cat_filter')
if cat_filter:
df_filter = df.catcomboname.str.contains(cat_filter)
df_agg = (df[df_filter][['value', 'monthly', 'dataelementid', 'uidorgunit', 'enddate']]
.groupby(by=['uidorgunit', 'dataelementid', 'monthly', 'enddate']).sum().reset_index()
)
else:
df_agg = (df[['value', 'monthly', 'dataelementid', 'uidorgunit', 'enddate']]
.groupby(by=['uidorgunit', 'dataelementid', 'monthly', 'enddate']).sum().reset_index()
)
df_agg['source']=dbsource
data_united_df=data_united_df.append(df_agg)
reconcile_element_dict={}
def reconcilegroupby(groupdf):
subreconcile_dict={}
for dbsource in jsonTree.execute('$*["'+str(dataelement)+'"].sources').keys():
orgdict={dbsource:groupdf.query('source=="'+str(dbsource)+'"')}
subreconcile_dict.update(orgdict)
reconcile_element_dict[str(groupdf.name)]=subreconcile_dict
reconcile_subdf= dp.measured_serie(subreconcile_dict, 'stock', jsonTree.execute('$*["'+str(dataelement)+'"].preferred_source'))
reconcile_subdf.reconcile_series()
return reconcile_subdf.preferred_serie
reconcile_element_df=data_united_df.groupby(by=['uidorgunit']).apply(reconcilegroupby).reset_index(drop=True)
if not reconcile_only:
renconcile_dict[str(dataelement)]={'data_element_dict':reconcile_element_dict,'data_element_reconciledf':reconcile_element_df}
else:
renconcile_dict[str(dataelement)]=reconcile_element_df
return renconcile_dict
%%time
dict_df_patients=json_to_dict_df(datavar_json,reconcile_only=True)
def process_agg(dict_df_patients,datavar_json,datelment_list=['value']):
from objectpath import Tree
jsonTree=Tree(datavar_json)
dict_of_processed_df_patients={}
for element_key in dict_df_patients.keys():
methods_dict=jsonTree.execute('$*["'+str(element_key)+'"].methods')
for method_key in methods_dict.keys():
pro_df=processing(datanames_list=datelment_list,dat_df=dict_df_patients[element_key])
processed_df=aggregation(datel_list=datelment_list,df_data=pro_df,method=method_key,agg_period_type=methods_dict[method_key])
dict_of_processed_df_patients[str(element_key)]={str(method_key):processed_df}
return dict_of_processed_df_patients
%%time
dict_of_processed_df_patients=process_agg(dict_df_patients,datavar_json)
element_creation_df=data_element_creation_csv(dict_of_processed_df_patients)
element_creation_df.to_csv('./data_element_creation.csv',index=False,encoding ='utf-8')
df_csv=output_csv(dict_of_processed_df_patients,hivdr)
df_csv.to_csv('./data_values_df.csv',index=False,encoding ='utf-8')
```
## Looking at stock outs
```
r_exclude =['PNLS-DRUG-CTX 480 / 960 mg ces - Bt 500 ces', 'PNLS-DRUG-CTX 480 mg ces - Bt 1000 ces',
'PNLS-DRUG-Hepatitis, HBsAg, Determine Kit, 100 Tests','PNLS-DRUG-Hepatitis, HCV, Rapid Device, Serum/Plasma/Whole Blood, kit de 40 Tests',
'PNLS-DRUG-HIV 1/2, Double Check Gold, Kit de 100 test','PNLS-DRUG-HIV 1+2, Determine Complete, Kit de 100 tests',
'PNLS-DRUG-HIV 1+2, Uni-Gold HIV, Kit de 20 tests', 'PNLS-DRUG-INH 100 mg; 300 mg - Cés', 'PNLS-DRUG-INH 50mg/5 ml - Sol.Orale',
'PNLS-DRUG-Syphilis RPR Kit, kit de 100 Tests Determine syph','PNLS-DRUG-CTX 96 mg / ml - Inj']
#Selection of Variables
catlab_sources_dict={'SO':'Sortie','RS':'Nbr de jours RS','EN':'Entrée','SI':'Stock Initial'}
datelemlist_sources_dict={}
for key in catlab_sources_dict.keys():
c_cordaid = hivdr.dataelement[hivdr.dataelement.name.str.contains(str(key)+' -')]
cco = hivdr.categoryoptioncombo.categoryoptioncomboid[hivdr.categoryoptioncombo.name.str.contains(catlab_sources_dict[key])].iloc[0]
pnls_cco_uid = hivdr.categoryoptioncombo.uid[hivdr.categoryoptioncombo.name.str.contains(catlab_sources_dict[key])].iloc[0]
cc = hivdr.categorycombos_optioncombos.categorycomboid[hivdr.categorycombos_optioncombos.categoryoptioncomboid == cco].iloc[0]
c_pnls = hivdr.dataelement[hivdr.dataelement.categorycomboid == cc]
datelemlist_sources_dict[str(key)]={'cordaid':c_cordaid,'pnls':c_pnls,'pnls_uid':pnls_cco_uid}
datelemlist_sources_dict.keys()
def initial_merge(datelemlist_sources_dict):
srce_dict=datelemlist_sources_dict
for key in srce_dict.keys():
srce_dict[key]['cordaid']['stand_name']=srce_dict[key]['cordaid'].name.str.replace(str(key)+' -','')
srce_dict[key]['cordaid']['stand_name']=srce_dict[key]['cordaid'].stand_name.str.replace(' ','').str.lower()
srce_dict[key]['pnls']['stand_name']=srce_dict[key]['pnls'].name.str.replace(" ",'')
srce_dict[key]['pnls']['stand_name']=srce_dict[key]['pnls'].stand_name.str.replace("PNLS-DRUG-",'').str.lower()
srce_dict[key]['merge_dict_list']= srce_dict[key]['pnls'].merge(srce_dict[key]['cordaid'], on='stand_name', suffixes=['_pnls', '_cordaid'])
srce_dict[key]['pnls'].columns = ['uid_pnls', 'name_pnls', 'dataelementid_pnls' , 'categorycomboid_pnls', 'stand_name']
srce_dict[key]['cordaid'].columns = ['uid_cordaid', 'name_cordaid', 'dataelementid_cordaid' , 'categorycomboid_cordaid', 'stand_name']
return srce_dict
datelemlist_processed_sources_dict=initial_merge(datelemlist_sources_dict)
# Cases of misspelling to be included in the merged dataframe
for key in datelemlist_processed_sources_dict.keys():
merge_list_df=datelemlist_processed_sources_dict[key]['merge_dict_list']
list_pnls=datelemlist_processed_sources_dict[key]['pnls']
list_cordaid=datelemlist_processed_sources_dict[key]['cordaid']
for leftout in ['(200/50 mg)','(300/200 mg)','NVP 50 mg']:
pnls_leftout_values=list_pnls.drop('stand_name', axis=1)[list_pnls.name_pnls.str.contains(leftout)].reset_index(drop=True)
coraid_leftout_values=list_cordaid[list_cordaid.name_cordaid.str.contains(leftout)].reset_index(drop=True)
concat_row=pd.concat([pnls_leftout_values,coraid_leftout_values],axis=1)
merge_list_df = merge_list_df.append(concat_row)
datelemlist_processed_sources_dict[key]['merge_dict_list'] = merge_list_df[~merge_list_df.name_pnls.isin(r_exclude)]
def get_hivdrable(de_id):
data = hivdr.get_data(de_id)
data = data[['dataelementid', 'monthly', 'uidorgunit', 'catcomboid', 'value']]
data.columns = ['dataElement' , 'monthly' , 'orgUnit' , 'categoryOptionCombo', 'value']
return data
def datadf_into_meds_source_dict(processed_sources_df,pnls_stock_uid):
meds_dict ={}
for line in processed_sources_df.name_cordaid:
print(line)
id_cordaid = processed_sources_df.uid_cordaid[processed_sources_df.name_cordaid == line].iloc[0]
id_pnls = processed_sources_df.uid_pnls[processed_sources_df.name_cordaid == line].iloc[0]
cordaid_data = get_hivdrable(id_cordaid)
pnls_data = get_hivdrable(id_pnls)
pnls_data = pnls_data[pnls_data.categoryOptionCombo == pnls_stock_uid]
meds_dict[str(line)[5:]]={'cordaid':cordaid_data,'pnls':pnls_data}
return meds_dict
%%time
large_meds_dict={}
for key in datelemlist_processed_sources_dict.keys():
rs_meds_dict=datadf_into_meds_source_dict(datelemlist_processed_sources_dict[key]['merge_dict_list'],datelemlist_processed_sources_dict[key]['pnls_uid'])
large_meds_dict[str(key)]=rs_meds_dict
import re
def pills_per_package(line):
if re.search('[Cc][EeÉé][Ss]$',line):
line=line[-7:3]
line=re.sub('[Oo]|" "','', line)
return float(line)
else:
return None
# In the end this part was not used, following a discussion with Antoine
def measurement_boxes(meds_label,datadf):
    pppck=pills_per_package(meds_label)
if pppck:
datadf.value = pd.to_numeric(datadf.value)
datadf['boxes']=datadf.value/pppck
return datadf
def reconcilegroupby(groupdf):
subreconcile_dict={}
for dbsource in ['cordaid','pnls']:
orgdict={dbsource:groupdf.query('source=="'+str(dbsource)+'"')}
subreconcile_dict.update(orgdict)
reconcile_subdf= dp.measured_serie(subreconcile_dict, 'stock', 'cordaid')
reconcile_subdf.reconcile_series()
return reconcile_subdf.preferred_serie
def get_preferes_series_from_meds(large_meds_dict):
renconcile_dict={}
for typedata in large_meds_dict.keys():
renconcile_dict[str(typedata)]={}
for medicine in large_meds_dict[typedata].keys():
data_united_df=pd.DataFrame()
for dbsource in ['cordaid','pnls']:
df_agg=large_meds_dict[typedata][medicine][dbsource]
df_agg['source']=str(dbsource)
data_united_df=data_united_df.append(df_agg)
reconcile_element_df=data_united_df.groupby(by=['orgUnit']).apply(reconcilegroupby).reset_index(drop=True)
renconcile_dict[str(typedata)].update({str(medicine):reconcile_element_df})
return renconcile_dict
%%time
renconcile_dict_meds=get_preferes_series_from_meds(large_meds_dict)
renconcile_dict_transpose={}
for medicine in renconcile_dict_meds['SO'].keys():
df_list=[]
for typedata in renconcile_dict_meds.keys():
df_row=renconcile_dict_meds[typedata][medicine][['monthly','orgUnit','value']]
df_row=df_row.rename(index=str, columns={'value':str(typedata),'orgUnit':'uidorgunit'})
df_list.append(df_row)
unified_df=df_list[0]
for df_typedata in df_list[1:]:
unified_df=unified_df.merge(df_typedata,how='outer',on=['monthly','uidorgunit'])
renconcile_dict_transpose[str(medicine)]=unified_df
def mean_consumption_stockout(dict_df_med):
dict_of_processed_df_patients={}
for medicine in dict_df_med.keys():
med_df_processed=processing(datanames_list=['SO','RS','EN','SI'],dat_df=dict_df_med[str(medicine)])
meancm_df=mean_consumation_adjusted(med_df_processed[['uidorgunit','enddate','SO','RS']],data_el=medicine)
combination_df=aggregation(datel_list=['EN','SI'],df_data=med_df_processed,method='combination',agg_period_type='monthly')
stock_df=meancm_df.merge(combination_df,on=['uidorgunit','period'],how='outer')
stock_df=stock_df.merge(med_df_processed[['uidorgunit','monthly','RS']].rename(index=str, columns={'monthly':'period'}),on=['uidorgunit','period'],how='outer')
stockout_df=stockout(stock_df,data_elem=str(medicine),type_period='monthly')
dict_of_processed_df_patients[str(medicine)]={'AMC':meancm_df,'combination':combination_df,'stockout':stockout_df}
return dict_of_processed_df_patients
%%time
dict_of_processed_df_patients=mean_consumption_stockout(renconcile_dict_transpose)
def mean_consumption_stockout(dict_df_med):
dict_of_processed_df_patients={}
for medicine in dict_df_med.keys():
med_df_processed=processing(datanames_list=['SO','RS','EN','SI'],dat_df=dict_df_med[str(medicine)])
meancm_df=mean_consumation_adjusted(med_df_processed[['uidorgunit','enddate','SO','RS']],data_el=medicine)
combination_df=aggregation(datel_list=['EN','SI'],df_data=med_df_processed,method='combination',agg_period_type='monthly')
stock_df=meancm_df.merge(combination_df,on=['uidorgunit','period'],how='outer')
stock_df=stock_df.merge(med_df_processed[['uidorgunit','monthly','RS']].rename(index=str, columns={'monthly':'period'}),on=['uidorgunit','period'],how='outer')
stockout_df=stockout(stock_df,data_elem=str(medicine),type_period='monthly')
dict_of_processed_df_patients[str(medicine)]={'AMC':meancm_df,'combination':combination_df,'stockout':stockout_df}
return dict_of_processed_df_patients
element_creation_df=data_element_creation_csv(dict_of_processed_df_patients)
element_creation_df.to_csv('./data_element_creation_drugs.csv',index=False,encoding ='utf-8')
hivdr = dhis.dhis_instance(dbname, user, host, psswrd)
df_csv=output_csv(dict_of_processed_df_patients,hivdr)
```
Method 1) Run the .py file directly
Problem: when the call is wrapped in a function, the parameter is not substituted into the `!python` command inside `loops`.
We passed "우리는" in as `n`, but the generated text starts with the literal string "n" (see the interpolation sketch after the code block below).
```
!python generator.py --temperature=1.5 --tmp_sent="우리" --text_size=100 --loops=1 --load_path="./checkpoint/KoGPT2_checkpoint_80000.tar"
def loops(n):
for i in range(n):
!python generator.py --temperature=1.5 --tmp_sent="나는" --text_size=100 --loops=1 --load_path="./checkpoint/KoGPT2_checkpoint_80000.tar"
loops(3)
import random
low=round(random.uniform(0.5, 1.0),1)
mid=round(random.uniform(1.0,2.0),1)
high=round(random.uniform(2.0,5.0),1)
def loops(creativity,n):
for i in range(3):
!python generator.py --temperature=creativity --tmp_sent=n --text_size=100 --loops=1 --load_path="./checkpoint/KoGPT2_checkpoint_80000.tar"
import pandas as pd
df=pd.read_csv('./train.csv')
df.head()
df_lyrics=df['lyrics'].tolist()
df_lyrics[0:5]
lyrics_ngram=[df_lyrics[i].split(' ')[0:2] for i in range(len(df_lyrics))]
lyrics_ngram[0:5]
def loops(n):
for i in range(3):
!python generator.py --temperature=0.7 --tmp_sent=n --text_size=100 --loops=1 --load_path="./checkpoint/KoGPT2_checkpoint_80000.tar"
loops(lyrics_ngram[0][0])
lyrics_ngram[0][0]
```
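One way to make method 1 work (a sketch, not from the original notebook): IPython expands `{expression}` placeholders inside `!` shell commands using the surrounding Python namespace, so the actual argument values can be interpolated into the command string.
```
# Hypothetical rewrite of loops(): the {placeholders} are expanded by IPython
# before the shell command runs, so the argument values reach generator.py.
def loops(creativity, sent, n):
    for i in range(n):
        !python generator.py --temperature={creativity} --tmp_sent="{sent}" --text_size=100 --loops=1 --load_path="./checkpoint/KoGPT2_checkpoint_80000.tar"
loops(1.5, "우리는", 3)
```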
# ---------------------------------------------------
Method 2) Import the module and call it
Problem: the argparse parser used in the .py file does not work in Jupyter; switching to easydict might solve it.
Reference: https://worthpreading.tistory.com/56
```
from generator import main
main(temperature=0.7,tmp_sent="우리",text_size=100,loops=1,load_path="./checkpoint/KoGPT2_checkpoint_80000.tar")
```
# ---------------------------------------------------
Method 2-1) Import the module and call it + easydict = generator2.py (a sketch of the easydict idea follows the code block below)
```
from generator2 import main
main(temperature=0.7,tmp_sent="우리",text_size=100,loops=1,load_path="./checkpoint/KoGPT2_checkpoint_80000.tar")
temp=1.0
sent="우리는"
for i in range(3):
main(temperature=temp,tmp_sent=sent,text_size=100,loops=1,load_path="./checkpoint/KoGPT2_checkpoint_80000.tar")
```
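generator2.py itself is not shown in this notebook; presumably it replaces the argparse parser with easydict so that `main()` can be called with keyword arguments. A minimal sketch of that idea (the argument names mirror the calls above; everything else is an assumption, not the actual file):
```
# Sketch only, not the real generator2.py: collect keyword arguments into an
# EasyDict so the rest of the original script can keep reading args.<name>.
from easydict import EasyDict
def main(temperature=0.7, tmp_sent="우리", text_size=100, loops=1, load_path=""):
    args = EasyDict({
        "temperature": temperature,
        "tmp_sent": tmp_sent,
        "text_size": text_size,
        "loops": loops,
        "load_path": load_path,
    })
    # ... the generation code would then read args.temperature, args.tmp_sent, etc.
    return args
```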
# --------------------------------------------------
Method 3) Run the .py file directly and type the parameters in by hand
Drawback: cumbersome
# --------------------------------------------------
### Continuing with method 2-1, which worked
#### 1) Load the train dataset and store the first 2-3 words of each lyric
```
import pandas as pd
df_original=pd.read_csv('./train.csv')
df_original.head()
df=df_original
df=df.drop(['score','genre'],axis=1)
df_lyrics=df['lyrics'].tolist()
df_lyrics[0:5]
lyrics_ngram=[df_lyrics[i].split(' ')[0:3] for i in range(len(df_lyrics))]
lyrics_ngram[0:5]
print(' '.join(lyrics_ngram[0][0:2]))
print(' '.join(lyrics_ngram[0]))
lyrics_bigram=[]
lyrics_trigram=[]
for i in range(len(lyrics_ngram)):
lyrics_bigram.append(' '.join(lyrics_ngram[i][0:2]))
lyrics_trigram.append(' '.join(lyrics_ngram[i]))
print(lyrics_bigram[0:5])
print(lyrics_trigram[0:5])
df['bigram']=lyrics_bigram
df['trigram']=lyrics_trigram
df.head()
```
#### 2) Save the sentences generated from each bigram/trigram at low, mid, and high temperature
To-do list:
~~1. Train the model further~~
~~2. Change generator2.py to return instead of print, so all outputs can be stored in a DataFrame~~
__Why temperatures 1.0, 3.0, and 5.0 were chosen__ (temperature only rescales the logits before sampling; see the sketch below):
- https://medium.com/analytics-vidhya/understanding-the-gpt-2-source-code-part-1-4481328ee10b
reports that GPT-2 mostly produces fluent, context-appropriate sentences in the 0.2-1.5 range.
On our dataset, however, that range mostly reproduced sentences from the train set verbatim.
- https://github.com/gyunggyung/KoGPT2-FineTuning
the KoGPT2 fine-tuning repo we based our work on uses 1.0 as the default and 5.0 for plagiarism-free generation.
Since we are generating Korean text as well, we judged this setting closer to our goal.
We therefore set 1.0 as low, 5.0 as high, and the midpoint 3.0 as mid.
- Both 1.0 and 3.0 still mostly returned sentences identical to the train data
-> let's also try 10.0!
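For reference, a generic sketch of what temperature does during sampling (standard GPT-2-style sampling, not the actual code of generator2.py): the logits are divided by the temperature before the softmax, so higher values flatten the distribution and give more varied, less train-set-like text.
```
import torch
import torch.nn.functional as F
def sample_next_token(logits, temperature=1.0):
    # logits: unnormalized scores over the vocabulary for the next token.
    # Higher temperature -> flatter distribution -> more diverse/novel output;
    # lower temperature -> sharper distribution -> more copying of frequent training phrases.
    probs = F.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1)
```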
```
# test
from generator2 import main
main(temperature=5.0,tmp_sent="사랑",text_size=100,loops=1,load_path="./checkpoint/KoGPT2_checkpoint_240000.tar")
from generator2 import main
list_re=[]
for i in range(3):
a=main(temperature=5.0,tmp_sent="사랑",text_size=100,loops=1,load_path="./checkpoint/KoGPT2_checkpoint_240000.tar")
list_re.append(a)
print(list_re)
from generator2 import main
list_re=[]
a=main(temperature=5.0,tmp_sent="사랑",text_size=100,loops=1,load_path="./checkpoint/KoGPT2_checkpoint_240000.tar")
list_re.append(a)
list_re
main(temperature=10.0,tmp_sent="사랑",text_size=100,loops=1,load_path="./checkpoint/KoGPT2_checkpoint_240000.tar")
# 1. bi + low
list_bi_low=[]
for sent in lyrics_bigram:
a=main(temperature=1.0,tmp_sent=sent,text_size=100,loops=1,load_path="./checkpoint/KoGPT2_checkpoint_240000.tar")
list_bi_low.append(a)
df['bigram+low']=list_bi_low
df.head()
# 2. bi + mid
list_bi_mid=[]
for sent in lyrics_bigram:
a=main(temperature=3.0,tmp_sent=sent,text_size=100,loops=1,load_path="./checkpoint/KoGPT2_checkpoint_240000.tar")
list_bi_mid.append(a)
df['bigram+mid']=list_bi_mid
df.head()
# 3. bi + high
list_bi_high=[]
for sent in lyrics_bigram:
a=main(temperature=5.0,tmp_sent=sent,text_size=100,loops=1,load_path="./checkpoint/KoGPT2_checkpoint_240000.tar")
list_bi_high.append(a)
df['bigram+high']=list_bi_high
df.head()
# 4. tri + low
list_tri_low=[]
for sent in lyrics_trigram:
a=main(temperature=1.0,tmp_sent=sent,text_size=100,loops=1,load_path="./checkpoint/KoGPT2_checkpoint_240000.tar")
list_tri_low.append(a)
df['trigram+low']=list_tri_low
df.head()
# 5. tri + mid
list_tri_mid=[]
for sent in lyrics_trigram:
a=main(temperature=3.0,tmp_sent=sent,text_size=100,loops=1,load_path="./checkpoint/KoGPT2_checkpoint_240000.tar")
list_tri_mid.append(a)
df['trigram+mid']=list_tri_mid
df.head()
# 6. tri + high
list_tri_high=[]
for sent in lyrics_trigram:
a=main(temperature=5.0,tmp_sent=sent,text_size=100,loops=1,load_path="./checkpoint/KoGPT2_checkpoint_240000.tar")
list_tri_high.append(a)
df['trigram+high']=list_tri_high
df.head()
df.to_csv("result.csv",encoding='utf-8-sig')
```
# -------------------------------------------------------
```
import pandas as pd
df_res=pd.read_csv('./result.csv')
df_res.head()
df_res=df_res.drop(['Unnamed: 0','bigram','trigram'],axis=1)
df_res.head()
column_names=['bigram+low','bigram+mid','bigram+high','trigram+low','trigram+mid','trigram+high']
for column_name in column_names:
for i in range(len(df_res)):
df_res[column_name][i]=df_res[column_name][i].replace('</s>','.')
df_res.head()
for column_name in column_names:
for i in range(len(df_res)):
df_res[column_name][i]=df_res[column_name][i].replace('..','.')
df_res.head()
df_res.to_csv("result2.csv",encoding='utf-8-sig',index=False)
```
# --------------------------------------------------
```
# 7. bi + highest
from generator2 import main
list_bi_highest=[]
for sent in lyrics_bigram:
a=main(temperature=10.0,tmp_sent=sent,text_size=100,loops=1,load_path="./checkpoint/KoGPT2_checkpoint_240000.tar")
list_bi_highest.append(a)
df['bigram+highest']=list_bi_highest
df.head()
# 8. tri + highest
list_tri_highest=[]
for sent in lyrics_trigram:
a=main(temperature=10.0,tmp_sent=sent,text_size=100,loops=1,load_path="./checkpoint/KoGPT2_checkpoint_240000.tar")
list_tri_highest.append(a)
df['trigram+highest']=list_tri_highest
df.head()
column_names=['bigram+highest','trigram+highest']
df=df.drop(['bigram','trigram'],axis=1)
for column_name in column_names:
for i in range(len(df)):
df[column_name][i]=df[column_name][i].replace('</s>','.')
pd.set_option('display.max_colwidth', -1)
df.head()
for column_name in column_names:
for i in range(len(df)):
df[column_name][i]=df[column_name][i].replace('..','.')
df.head()
df.to_csv("result3.csv",encoding='utf-8-sig',index=False)
```
```
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set()
import numpy as np
import re
from sklearn import tree
from sklearn.linear_model import LogisticRegression
```
### Read the files into Pandas DataFrames
```
dfTrain=pd.read_csv("../datasets/titanic/titanic_train.csv") # training data
dfTest=pd.read_csv("../datasets/titanic/titanic_test.csv") # test data
dfTrain.info()
dfTrain['Age'].apply(lambda x:1 if x > 18 else 0)
```
### Check whether each column contains repeated values
```
dfTrain.shape
dfTrain.apply(lambda x:x.unique().shape[0],axis=0)
set(dfTrain["Pclass"])
dfTrain.shape
(dfTrain.apply(lambda x:x.unique().shape[0],axis=0)/dfTrain.shape[0]).plot(kind='bar',rot=45)
```
* In the figure above, a small y value for a column means that the column's values repeat heavily, so the column is likely a categorical variable. A y value of 1 means the column has no repeated values at all, so it may be an index or a continuous variable.
### Check whether any column contains missing values
```
dfTrain.isnull().any()
dfTrain.isnull().sum().plot(kind='bar',rot=45,title='number of missing values')
```
The plot above shows that the Age, Cabin, and Embarked columns contain missing values.
```
cmap=sns.light_palette("navy", reverse=False)
sns.heatmap(dfTrain.isnull().astype(np.int8),yticklabels=False,cmap=cmap)
```
### Do sex (Sex), passenger class (Pclass), and age (Age) affect survival (Survived)?
```
def trans(x):
if x<=12:
return "children"
elif x>12:
return "non_children"
else:
return np.NaN
dfTrain["AgeInfo"]=dfTrain["Age"].apply(trans)
dfTest["AgeInfo"]=dfTest["Age"].apply(trans)
dfTmp=dfTrain.groupby(["Pclass","Sex"])["Survived"].agg([np.mean,np.std,np.sum,len])
dfTmp=dfTmp.reset_index()
dfTmp
fig,axes=plt.subplots(1,3,figsize=(10,3),sharey=True)
groups=dfTmp.groupby("Pclass")
for idx,(name,group) in enumerate(groups):
axes[idx].bar(x=group["Sex"],height=group["mean"],
color=["darkgreen","darkblue"])
axes[idx].set_title("Pclass = %i"%name)
```
* In every passenger class, the female survival rate is at least twice the male survival rate.
With Seaborn, the same figure can be produced with a single command:
```
sns.catplot(data=dfTrain,col="Pclass",x="Sex",y="Survived",kind="bar")
g=sns.catplot(data=dfTrain,col="Pclass",x="Sex",y="Survived",kind="bar")
g=sns.catplot(data=dfTrain,col="Pclass",x="Sex",hue="AgeInfo",y="Survived",kind="bar")
g=sns.countplot("Pclass",hue="Sex",data=dfTrain)
g=sns.countplot("Pclass",hue="AgeInfo",data=dfTrain)
dfTrain["famSize"]=dfTrain["SibSp"]+dfTrain["Parch"]
dfTest["famSize"]=dfTest["SibSp"]+dfTest["Parch"]
g=sns.countplot("Pclass",hue="famSize",data=dfTrain)
```
* Third class has the most passengers traveling alone and, compared with the other classes, also somewhat larger families.
```
ax=dfTrain[["famSize","Survived"]].groupby("famSize").count().plot(kind="bar")
```
* Passengers traveling alone, with no family aboard, make up the majority; few passengers have more than two relatives on board.
```
g=sns.catplot(x="famSize",y="Survived",data=dfTrain,kind="bar",ci=None)
g.set_ylabels("Survival Rate")
g.set_xlabels("Family Size")
```
* Small families (1-3 members) were more likely to survive.
```
g=sns.catplot(x="famSize",y="Survived",hue='Sex',data=dfTrain,kind="bar",ci=None)
g.set_ylabels("Survival Rate")
g.set_xlabels("Family Size")
```
* For family sizes $\leq 3$, the male survival rate increases with family size.
```
g=sns.catplot(x="famSize",y="Survived",hue='AgeInfo',
data=dfTrain[["famSize","Survived","AgeInfo"]].dropna(how="any"),
kind="bar",ci=None)
g.set_ylabels("Survival Rate")
g.set_xlabels("Family Size")
```
* Children have a higher survival rate than non-children.
This does not necessarily hold for very large families, but for large families the number of children in the sample is very small, so that comparison may not be meaningful.
```
g=sns.countplot("famSize",hue='AgeInfo',
data=dfTrain[["famSize","AgeInfo"]].dropna(how="any"))
```
---
### Cabin
```
print("座艙資料筆數=\t", len( dfTrain["Cabin"] ) )
print("座艙空值數=\t",dfTrain["Cabin"].isnull().sum() )
dfTrain["Cabin"].unique()
```
There are too many distinct cabin numbers, so for now I keep only the letter prefix. The numeric part might also carry useful information, so we may consider using it later (an optional sketch appears a little further below).
```
def extractCabinLabel(name):
try:
matched=re.search("([A-z])(.*)",name)
label=matched.groups()[0]
except:
label=np.NaN
return label
dfTrain["Cabin"]=dfTrain["Cabin"].apply(extractCabinLabel)
print( dfTrain["Cabin"].unique() )
print( dfTrain["Embarked"].unique() )
groups=dfTrain[["Cabin","Embarked"] ].groupby("Embarked")
for name,group in groups:
print(name,group["Cabin"].isnull().sum())
```
* For many of the passengers who boarded at port S, we do not know which cabin they occupied.
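Following up on the earlier note about the numeric part of the cabin string, here is an optional sketch (not used in the rest of this analysis). Since `Cabin` has already been overwritten with its letter, we go back to the raw file; the column name `CabinNum` is made up for illustration:
```
# Optional sketch: also keep the numeric part of the original cabin string.
rawCabin = pd.read_csv("../datasets/titanic/titanic_train.csv")["Cabin"]
dfTrain["CabinNum"] = rawCabin.str.extract(r"[A-Za-z](\d+)", expand=False).astype(float)
dfTrain["CabinNum"].describe()
```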
### Does cabin (Cabin) affect survival (Survived)?
We want to know whether cabin is one of the factors affecting the survival rate. Below we use Seaborn's sns.catplot() (the successor to the older sns.factorplot()) to explore this:
```
sns.catplot(x="Cabin",y="Survived",data=dfTrain[["Cabin","Survived"]].dropna(how="any"),
kind="violin",order=["A","B","C","D","E","F","G","T"])
```
We can also compute the survival rate for each cabin section directly:
```
dfTrain[["Cabin","Survived"]].dropna(how="any").groupby("Cabin").mean().plot(kind="bar",rot=0)
g=sns.catplot(x="Cabin",y="Survived",hue="Pclass",
data=dfTrain[["Cabin","Survived","Pclass"]].dropna(how="any"),
kind="violin",
order=["A","B","C","D","E","F","G","T"],size=5,aspect=2)
g.fig.suptitle("Survived v.s. Cabin")
g.fig.subplots_adjust(top=0.9)
```
* The plot above shows that as the cabin letter moves from A to G, the passenger class tends to drop accordingly.
---
### Correlation between survival (Survived) and the other numeric variables
```
corDf=dfTrain.corr()
corDf["Survived"]
corDf["Survived"].apply(lambda x:np.abs(x)).sort_values(ascending=False)
```
* The table above shows that, among the numeric variables, Pclass and Fare are the ones most correlated with Survived.
The correlation matrix can also be drawn as a heatmap:
```
plt.figure(figsize=(10, 10))
g=sns.heatmap(corDf, vmax=.8, linewidths=0.01,
square=True,annot=True,cmap='YlGnBu',linecolor="white")
plt.title('Correlation between features');
```
---
```
from google.colab import drive
drive.mount('/content/gdrive')
import os
os.chdir('/content/gdrive/My Drive/finch/tensorflow1/free_chat/chinese_lccc/main')
%tensorflow_version 1.x
import tensorflow as tf
import numpy as np
print("TensorFlow Version", tf.__version__)
print('GPU Enabled:', tf.test.is_gpu_available())
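# The remainder of this cell builds a prediction-only LSTM seq2seq graph
# (bidirectional LSTM encoder, Bahdanau-attention decoder with beam search,
# tied input/output embeddings) and exports it as a SavedModel via tf.estimator.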
def rnn_cell():
def cell_fn():
cell = tf.nn.rnn_cell.LSTMCell(params['rnn_units'],
initializer=tf.orthogonal_initializer())
return cell
if params['dec_layers'] > 1:
cells = []
for i in range(params['dec_layers']):
if i == params['dec_layers'] - 1:
cells.append(cell_fn())
else:
cells.append(tf.nn.rnn_cell.ResidualWrapper(cell_fn(), residual_fn=lambda i,o: tf.concat((i,o), -1)))
return tf.nn.rnn_cell.MultiRNNCell(cells)
else:
return cell_fn()
def dec_cell(enc_out, enc_seq_len):
attn = tf.contrib.seq2seq.BahdanauAttention(
num_units = params['rnn_units'],
memory = enc_out,
memory_sequence_length = enc_seq_len)
return tf.contrib.seq2seq.AttentionWrapper(
cell = rnn_cell(),
attention_mechanism = attn,
attention_layer_size = params['rnn_units'])
class TiedDense(tf.layers.Layer):
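    # Output projection whose weight matrix is tied to (shared with) the token
    # embedding, so the decoder's softmax layer reuses the embedding parameters.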
def __init__(self, tied_embed, out_dim):
super().__init__()
self.tied_embed = tied_embed
self.out_dim = out_dim
def build(self, input_shape):
self.bias = self.add_weight(name='bias',
shape=[self.out_dim],
trainable=True)
super().build(input_shape)
def call(self, inputs):
x = tf.matmul(inputs, self.tied_embed, transpose_b=True)
x = tf.nn.bias_add(x, self.bias)
return x
def compute_output_shape(self, input_shape):
return input_shape[:-1].concatenate(self.out_dim)
def forward(features, labels, mode):
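    # Inference graph: embedding lookup -> bidirectional LSTM encoder ->
    # attention-wrapped decoder with beam search; returns the top-k beam ids.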
words = features['words'] if isinstance(features, dict) else features
words_len = tf.count_nonzero(words, 1, dtype=tf.int32)
is_training = (mode == tf.estimator.ModeKeys.TRAIN)
batch_sz = tf.shape(words)[0]
mask = tf.sign(words)
with tf.variable_scope('Embedding'):
embedding = tf.Variable(np.load('../vocab/char.npy'),
dtype=tf.float32,
name='fasttext_vectors')
x = tf.nn.embedding_lookup(embedding, words)
with tf.variable_scope('Encoder'):
encoder = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(
params['rnn_units'], return_state=True, return_sequences=True, zero_output_for_mask=True))
enc_out, state_fw_h, state_fw_c, state_bw_h, state_bw_c = encoder(x, mask=mask)
enc_state = tf.concat((tf.reduce_max(enc_out, 1), state_fw_h, state_bw_h), axis=-1)
enc_state = tf.layers.dense(enc_state, params['rnn_units'], params['activation'], name='state_fc')
enc_state = tf.nn.rnn_cell.LSTMStateTuple(c=enc_state, h=enc_state)
if params['dec_layers'] > 1:
enc_state = tuple(params['dec_layers'] * [enc_state])
with tf.variable_scope('Decoder'):
output_proj = TiedDense(embedding, len(params['char2idx'])+1)
enc_out_t = tf.contrib.seq2seq.tile_batch(enc_out, params['beam_width'])
enc_state_t = tf.contrib.seq2seq.tile_batch(enc_state, params['beam_width'])
enc_seq_len_t = tf.contrib.seq2seq.tile_batch(words_len, params['beam_width'])
cell = dec_cell(enc_out_t, enc_seq_len_t)
init_state = cell.zero_state(batch_sz*params['beam_width'], tf.float32).clone(
cell_state=enc_state_t)
decoder = tf.contrib.seq2seq.BeamSearchDecoder(
cell = cell,
embedding = embedding,
start_tokens = tf.tile(tf.constant([1], tf.int32), [batch_sz]),
end_token = 2,
initial_state = init_state,
beam_width = params['beam_width'],
output_layer = output_proj,
length_penalty_weight = params['length_penalty'],
coverage_penalty_weight = params['coverage_penalty'],)
decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(
decoder = decoder,
maximum_iterations = params['max_len'],)
return decoder_output.predicted_ids[:, :, :params['top_k']]
def model_fn(features, labels, mode, params):
logits_or_ids = forward(features, labels, mode)
if mode == tf.estimator.ModeKeys.PREDICT:
return tf.estimator.EstimatorSpec(mode, predictions=logits_or_ids)
params = {
'model_dir': '../model/lstm_seq2seq',
'export_dir': '../model/lstm_seq2seq_export',
'vocab_path': '../vocab/char.txt',
'rnn_units': 300,
'max_len': 30,
'activation': tf.nn.relu,
'dec_layers': 1,
'beam_width': 10,
'top_k': 3,
'length_penalty': .0,
'coverage_penalty': .0,
}
def serving_input_receiver_fn():
words = tf.placeholder(tf.int32, [None, None], 'words')
features = {'words': words}
receiver_tensors = features
return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)
def get_vocab(f_path):
word2idx = {}
with open(f_path) as f:
for i, line in enumerate(f):
line = line.rstrip('\n')
word2idx[line] = i
return word2idx
params['char2idx'] = get_vocab(params['vocab_path'])
params['idx2char'] = {idx: char for char, idx in params['char2idx'].items()}
estimator = tf.estimator.Estimator(model_fn, params['model_dir'])
estimator.export_saved_model(params['export_dir'], serving_input_receiver_fn)
```
# Linear Algebra
:label:`sec_linear-algebra`
Now that you can store and manipulate data,
let us briefly review the subset of basic linear algebra
that you will need to understand and implement
most of the models covered in this book.
Below, we introduce the basic mathematical objects, arithmetic,
and operations in linear algebra,
expressing each of them through mathematical notation
and the corresponding implementation in code.
## Scalars
If you never studied linear algebra or machine learning,
then your past experience with math probably consisted
of thinking about one number at a time.
And, if you ever balanced a checkbook
or even paid for dinner at a restaurant
then you already know how to do basic things
like adding and multiplying pairs of numbers.
For example, the temperature in Palo Alto is $52$ degrees Fahrenheit.
Formally, we call values consisting
of just one numerical quantity *scalars*.
If you wanted to convert this value to Celsius
(the metric system's more sensible temperature scale),
you would evaluate the expression $c = \frac{5}{9}(f - 32)$, setting $f$ to $52$.
In this equation, each of the terms---$5$, $9$, and $32$---are scalar values.
The placeholders $c$ and $f$ are called *variables*
and they represent unknown scalar values.
In this book, we adopt the mathematical notation
where scalar variables are denoted
by ordinary lower-cased letters (e.g., $x$, $y$, and $z$).
We denote the space of all (continuous) *real-valued* scalars by $\mathbb{R}$.
For expedience, we will punt on rigorous definitions
of what precisely *space* is,
but just remember for now that the expression $x \in \mathbb{R}$
is a formal way to say that $x$ is a real-valued scalar.
The symbol $\in$ can be pronounced "in"
and simply denotes membership in a set.
Analogously, we could write $x, y \in \{0, 1\}$
to state that $x$ and $y$ are numbers
whose value can only be $0$ or $1$.
(**A scalar is represented by a tensor with just one element.**)
In the next snippet, we instantiate two scalars
and perform some familiar arithmetic operations with them,
namely addition, multiplication, division, and exponentiation.
```
import tensorflow as tf
x = tf.constant([3.0])
y = tf.constant([2.0])
x + y, x * y, x / y, x**y
```
## Vectors
[**You can think of a vector as simply a list of scalar values.**]
We call these values the *elements* (*entries* or *components*) of the vector.
When our vectors represent examples from our dataset,
their values hold some real-world significance.
For example, if we were training a model to predict
the risk that a loan defaults,
we might associate each applicant with a vector
whose components correspond to their income,
length of employment, number of previous defaults, and other factors.
If we were studying the risk of heart attacks hospital patients potentially face,
we might represent each patient by a vector
whose components capture their most recent vital signs,
cholesterol levels, minutes of exercise per day, etc.
In math notation, we will usually denote vectors as bold-faced,
lower-cased letters (e.g., $\mathbf{x}$, $\mathbf{y}$, and $\mathbf{z})$.
We work with vectors via one-dimensional tensors.
In general tensors can have arbitrary lengths,
subject to the memory limits of your machine.
```
x = tf.range(4)
x
```
We can refer to any element of a vector by using a subscript.
For example, we can refer to the $i^\mathrm{th}$ element of $\mathbf{x}$ by $x_i$.
Note that the element $x_i$ is a scalar,
so we do not bold-face the font when referring to it.
Extensive literature considers column vectors to be the default
orientation of vectors, as does this book.
In math, a vector $\mathbf{x}$ can be written as
$$\mathbf{x} =\begin{bmatrix}x_{1} \\x_{2} \\ \vdots \\x_{n}\end{bmatrix},$$
:eqlabel:`eq_vec_def`
where $x_1, \ldots, x_n$ are elements of the vector.
In code,
we (**access any element by indexing into the tensor.**)
```
x[3]
```
### Length, Dimensionality, and Shape
Let us revisit some concepts from :numref:`sec_ndarray`.
A vector is just an array of numbers.
And just as every array has a length, so does every vector.
In math notation, if we want to say that a vector $\mathbf{x}$
consists of $n$ real-valued scalars,
we can express this as $\mathbf{x} \in \mathbb{R}^n$.
The length of a vector is commonly called the *dimension* of the vector.
As with an ordinary Python array,
we [**can access the length of a tensor**]
by calling Python's built-in `len()` function.
```
len(x)
```
When a tensor represents a vector (with precisely one axis),
we can also access its length via the `.shape` attribute.
The shape is a tuple that lists the length (dimensionality)
along each axis of the tensor.
(**For tensors with just one axis, the shape has just one element.**)
```
x.shape
```
Note that the word "dimension" tends to get overloaded
in these contexts and this tends to confuse people.
To clarify, we use the dimensionality of a *vector* or an *axis*
to refer to its length, i.e., the number of elements of a vector or an axis.
However, we use the dimensionality of a tensor
to refer to the number of axes that a tensor has.
In this sense, the dimensionality of some axis of a tensor
will be the length of that axis.
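As a small added illustration of this distinction (using a throwaway tensor `T`), the following tensor has two axes, so its dimensionality as a tensor is 2, while each axis has its own length.
```
T = tf.zeros((2, 3))
tf.rank(T), T.shape  # the rank (number of axes) is 2; the axis lengths are 2 and 3
```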
## Matrices
Just as vectors generalize scalars from order zero to order one,
matrices generalize vectors from order one to order two.
Matrices, which we will typically denote with bold-faced, capital letters
(e.g., $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$),
are represented in code as tensors with two axes.
In math notation, we use $\mathbf{A} \in \mathbb{R}^{m \times n}$
to express that the matrix $\mathbf{A}$ consists of $m$ rows and $n$ columns of real-valued scalars.
Visually, we can illustrate any matrix $\mathbf{A} \in \mathbb{R}^{m \times n}$ as a table,
where each element $a_{ij}$ belongs to the $i^{\mathrm{th}}$ row and $j^{\mathrm{th}}$ column:
$$\mathbf{A}=\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \\ \end{bmatrix}.$$
:eqlabel:`eq_matrix_def`
For any $\mathbf{A} \in \mathbb{R}^{m \times n}$, the shape of $\mathbf{A}$
is ($m$, $n$) or $m \times n$.
Specifically, when a matrix has the same number of rows and columns,
its shape becomes a square; thus, it is called a *square matrix*.
We can [**create an $m \times n$ matrix**]
by specifying a shape with two components $m$ and $n$
when calling any of our favorite functions for instantiating a tensor.
```
A = tf.reshape(tf.range(20), (5, 4))
A
```
We can access the scalar element $a_{ij}$ of a matrix $\mathbf{A}$ in :eqref:`eq_matrix_def`
by specifying the indices for the row ($i$) and column ($j$),
such as $[\mathbf{A}]_{ij}$.
When the scalar elements of a matrix $\mathbf{A}$, such as in :eqref:`eq_matrix_def`, are not given,
we may simply use the lower-case letter of the matrix $\mathbf{A}$ with the index subscript, $a_{ij}$,
to refer to $[\mathbf{A}]_{ij}$.
To keep notation simple, commas are inserted to separate indices only when necessary,
such as $a_{2, 3j}$ and $[\mathbf{A}]_{2i-1, 3}$.
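In code (a quick added check, remembering that indices start at 0), an individual entry of the matrix defined above can be read by indexing with a row and a column:
```
A[1, 2]  # the entry with row index 1 and column index 2 (a_23 in 1-based math notation)
```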
Sometimes, we want to flip the axes.
When we exchange a matrix's rows and columns,
the result is called the *transpose* of the matrix.
Formally, we signify a matrix $\mathbf{A}$'s transpose by $\mathbf{A}^\top$
and if $\mathbf{B} = \mathbf{A}^\top$, then $b_{ij} = a_{ji}$ for any $i$ and $j$.
Thus, the transpose of $\mathbf{A}$ in :eqref:`eq_matrix_def` is
an $n \times m$ matrix:
$$
\mathbf{A}^\top =
\begin{bmatrix}
a_{11} & a_{21} & \dots & a_{m1} \\
a_{12} & a_{22} & \dots & a_{m2} \\
\vdots & \vdots & \ddots & \vdots \\
a_{1n} & a_{2n} & \dots & a_{mn}
\end{bmatrix}.
$$
Now we access a (**matrix's transpose**) in code.
```
tf.transpose(A)
```
As a special type of the square matrix,
[**a *symmetric matrix* $\mathbf{A}$ is equal to its transpose:
$\mathbf{A} = \mathbf{A}^\top$.**]
Here we define a symmetric matrix `B`.
```
B = tf.constant([[1, 2, 3], [2, 0, 4], [3, 4, 5]])
B
```
Now we compare `B` with its transpose.
```
B == tf.transpose(B)
```
Matrices are useful data structures:
they allow us to organize data that have different modalities of variation.
For example, rows in our matrix might correspond to different houses (data examples),
while columns might correspond to different attributes.
This should sound familiar if you have ever used spreadsheet software or
have read :numref:`sec_pandas`.
Thus, although the default orientation of a single vector is a column vector,
in a matrix that represents a tabular dataset,
it is more conventional to treat each data example as a row vector in the matrix.
And, as we will see in later chapters,
this convention will enable common deep learning practices.
For example, along the outermost axis of a tensor,
we can access or enumerate minibatches of data examples,
or just data examples if no minibatch exists.
## Tensors
Just as vectors generalize scalars, and matrices generalize vectors, we can build data structures with even more axes.
[**Tensors**]
("tensors" in this subsection refer to algebraic objects)
(**give us a generic way of describing $n$-dimensional arrays with an arbitrary number of axes.**)
Vectors, for example, are first-order tensors, and matrices are second-order tensors.
Tensors are denoted with capital letters of a special font face
(e.g., $\mathsf{X}$, $\mathsf{Y}$, and $\mathsf{Z}$)
and their indexing mechanism (e.g., $x_{ijk}$ and $[\mathsf{X}]_{1, 2i-1, 3}$) is similar to that of matrices.
Tensors will become more important when we start working with images,
which arrive as $n$-dimensional arrays with 3 axes corresponding to the height, width, and a *channel* axis for stacking the color channels (red, green, and blue). For now, we will skip over higher order tensors and focus on the basics.
```
X = tf.reshape(tf.range(24), (2, 3, 4))
X
```
## Basic Properties of Tensor Arithmetic
Scalars, vectors, matrices, and tensors ("tensors" in this subsection refer to algebraic objects)
of an arbitrary number of axes
have some nice properties that often come in handy.
For example, you might have noticed
from the definition of an elementwise operation
that any elementwise unary operation does not change the shape of its operand.
Similarly,
[**given any two tensors with the same shape,
the result of any binary elementwise operation
will be a tensor of that same shape.**]
For example, adding two matrices of the same shape
performs elementwise addition over these two matrices.
```
A = tf.reshape(tf.range(20, dtype=tf.float32), (5, 4))
B = A # No cloning of `A` to `B` by allocating new memory
A, A + B
```
Specifically,
[**elementwise multiplication of two matrices is called their *Hadamard product***]
(math notation $\odot$).
Consider matrix $\mathbf{B} \in \mathbb{R}^{m \times n}$ whose element of row $i$ and column $j$ is $b_{ij}$. The Hadamard product of matrices $\mathbf{A}$ (defined in :eqref:`eq_matrix_def`) and $\mathbf{B}$
$$
\mathbf{A} \odot \mathbf{B} =
\begin{bmatrix}
a_{11} b_{11} & a_{12} b_{12} & \dots & a_{1n} b_{1n} \\
a_{21} b_{21} & a_{22} b_{22} & \dots & a_{2n} b_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} b_{m1} & a_{m2} b_{m2} & \dots & a_{mn} b_{mn}
\end{bmatrix}.
$$
```
A * B
```
[**Multiplying or adding a tensor by a scalar**] also does not change the shape of the tensor,
where each element of the operand tensor will be added or multiplied by the scalar.
```
a = 2
X = tf.reshape(tf.range(24), (2, 3, 4))
a + X, (a * X).shape
```
## Reduction
:label:`subseq_lin-alg-reduction`
One useful operation that we can perform with arbitrary tensors
is to
calculate [**the sum of their elements.**]
In mathematical notation, we express sums using the $\sum$ symbol.
To express the sum of the elements in a vector $\mathbf{x}$ of length $d$,
we write $\sum_{i=1}^d x_i$.
In code, we can just call the function for calculating the sum.
```
x = tf.range(4, dtype=tf.float32)
x, tf.reduce_sum(x)
```
We can express [**sums over the elements of tensors of arbitrary shape.**]
For example, the sum of the elements of an $m \times n$ matrix $\mathbf{A}$ could be written $\sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij}$.
```
A.shape, tf.reduce_sum(A)
```
By default, invoking the function for calculating the sum
*reduces* a tensor along all its axes to a scalar.
We can also [**specify the axes along which the tensor is reduced via summation.**]
Take matrices as an example.
To reduce the row dimension (axis 0) by summing up elements of all the rows,
we specify `axis=0` when invoking the function.
Since the input matrix reduces along axis 0 to generate the output vector,
the dimension of axis 0 of the input is lost in the output shape.
```
A_sum_axis0 = tf.reduce_sum(A, axis=0)
A_sum_axis0, A_sum_axis0.shape
```
Specifying
`axis=1` will reduce the column dimension (axis 1) by summing up elements of all the columns.
Thus, the dimension of axis 1 of the input is lost in the output shape.
```
A_sum_axis1 = tf.reduce_sum(A, axis=1)
A_sum_axis1, A_sum_axis1.shape
```
Reducing a matrix along both rows and columns via summation
is equivalent to summing up all the elements of the matrix.
```
tf.reduce_sum(A, axis=[0, 1]) # Same as `tf.reduce_sum(A)`
```
[**A related quantity is the *mean*, which is also called the *average*.**]
We calculate the mean by dividing the sum by the total number of elements.
In code, we could just call the function for calculating the mean
on tensors of arbitrary shape.
```
tf.reduce_mean(A), tf.reduce_sum(A) / tf.size(A).numpy()
```
Likewise, the function for calculating the mean can also reduce a tensor along the specified axes.
```
tf.reduce_mean(A, axis=0), tf.reduce_sum(A, axis=0) / A.shape[0]
```
### Non-Reduction Sum
:label:`subseq_lin-alg-non-reduction`
However,
sometimes it can be useful to [**keep the number of axes unchanged**]
when invoking the function for calculating the sum or mean.
```
sum_A = tf.reduce_sum(A, axis=1, keepdims=True)
sum_A
```
For instance,
since `sum_A` still keeps its two axes after summing each row, we can (**divide `A` by `sum_A` with broadcasting.**)
```
A / sum_A
```
If we want to calculate [**the cumulative sum of elements of `A` along some axis**], say `axis=0` (row by row),
we can call the `cumsum` function. This function will not reduce the input tensor along any axis.
```
tf.cumsum(A, axis=0)
```
## Dot Products
So far, we have only performed elementwise operations, sums, and averages. And if this was all we could do, linear algebra probably would not deserve its own section. However, one of the most fundamental operations is the dot product.
Given two vectors $\mathbf{x}, \mathbf{y} \in \mathbb{R}^d$, their *dot product* $\mathbf{x}^\top \mathbf{y}$ (or $\langle \mathbf{x}, \mathbf{y} \rangle$) is a sum over the products of the elements at the same position: $\mathbf{x}^\top \mathbf{y} = \sum_{i=1}^{d} x_i y_i$.
```
y = tf.ones(4, dtype=tf.float32)
x, y, tf.tensordot(x, y, axes=1)
```
Note that
(**we can express the dot product of two vectors equivalently by performing an elementwise multiplication and then a sum:**)
```
tf.reduce_sum(x * y)
```
Dot products are useful in a wide range of contexts.
For example, given some set of values,
denoted by a vector $\mathbf{x} \in \mathbb{R}^d$
and a set of weights denoted by $\mathbf{w} \in \mathbb{R}^d$,
the weighted sum of the values in $\mathbf{x}$
according to the weights $\mathbf{w}$
could be expressed as the dot product $\mathbf{x}^\top \mathbf{w}$.
When the weights are non-negative
and sum to one (i.e., $\sum_{i=1}^{d} w_i = 1$),
the dot product expresses a *weighted average*.
After normalizing two vectors to have the unit length,
the dot products express the cosine of the angle between them.
We will formally introduce this notion of *length* later in this section.
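As a short added sketch (the weights `w` below are made up for illustration), both of these uses can be written directly with dot products on the vectors `x` and `y` defined above:
```
w = tf.constant([0.1, 0.2, 0.3, 0.4])      # non-negative weights that sum to one
weighted_avg = tf.tensordot(x, w, axes=1)  # weighted average of the values in x
# cosine of the angle between x and y, using only dot products
cos_angle = tf.tensordot(x, y, axes=1) / tf.sqrt(
    tf.tensordot(x, x, axes=1) * tf.tensordot(y, y, axes=1))
weighted_avg, cos_angle
```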
## Matrix-Vector Products
Now that we know how to calculate dot products,
we can begin to understand *matrix-vector products*.
Recall the matrix $\mathbf{A} \in \mathbb{R}^{m \times n}$
and the vector $\mathbf{x} \in \mathbb{R}^n$
defined and visualized in :eqref:`eq_matrix_def` and :eqref:`eq_vec_def` respectively.
Let us start off by visualizing the matrix $\mathbf{A}$ in terms of its row vectors
$$\mathbf{A}=
\begin{bmatrix}
\mathbf{a}^\top_{1} \\
\mathbf{a}^\top_{2} \\
\vdots \\
\mathbf{a}^\top_m \\
\end{bmatrix},$$
where each $\mathbf{a}^\top_{i} \in \mathbb{R}^n$
is a row vector representing the $i^\mathrm{th}$ row of the matrix $\mathbf{A}$.
[**The matrix-vector product $\mathbf{A}\mathbf{x}$
is simply a column vector of length $m$,
whose $i^\mathrm{th}$ element is the dot product $\mathbf{a}^\top_i \mathbf{x}$:**]
$$
\mathbf{A}\mathbf{x}
= \begin{bmatrix}
\mathbf{a}^\top_{1} \\
\mathbf{a}^\top_{2} \\
\vdots \\
\mathbf{a}^\top_m \\
\end{bmatrix}\mathbf{x}
= \begin{bmatrix}
\mathbf{a}^\top_{1} \mathbf{x} \\
\mathbf{a}^\top_{2} \mathbf{x} \\
\vdots\\
\mathbf{a}^\top_{m} \mathbf{x}\\
\end{bmatrix}.
$$
We can think of multiplication by a matrix $\mathbf{A}\in \mathbb{R}^{m \times n}$
as a transformation that projects vectors
from $\mathbb{R}^{n}$ to $\mathbb{R}^{m}$.
These transformations turn out to be remarkably useful.
For example, we can represent rotations
as multiplications by a square matrix.
As we will see in subsequent chapters,
we can also use matrix-vector products
to describe the most intensive calculations
required when computing each layer in a neural network
given the values of the previous layer.
Expressing matrix-vector products in code with tensors,
we use the `matvec` function.
When we call `tf.linalg.matvec(A, x)` with a matrix `A` and a vector `x`,
the matrix-vector product is performed.
Note that the column dimension of `A` (its length along axis 1)
must be the same as the dimension of `x` (its length).
```
A.shape, x.shape, tf.linalg.matvec(A, x)
```
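As an added aside illustrating the rotation claim above (a small sketch, not part of the original text), multiplying the vector $(1, 0)$ by a $2 \times 2$ rotation matrix for $90^\circ$ yields approximately $(0, 1)$:
```
import math

theta = math.pi / 2  # rotate by 90 degrees
R = tf.constant([[math.cos(theta), -math.sin(theta)],
                 [math.sin(theta),  math.cos(theta)]])
tf.linalg.matvec(R, tf.constant([1.0, 0.0]))  # approximately [0., 1.]
```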
## Matrix-Matrix Multiplication
If you have gotten the hang of dot products and matrix-vector products,
then *matrix-matrix multiplication* should be straightforward.
Say that we have two matrices $\mathbf{A} \in \mathbb{R}^{n \times k}$ and $\mathbf{B} \in \mathbb{R}^{k \times m}$:
$$\mathbf{A}=\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1k} \\
a_{21} & a_{22} & \cdots & a_{2k} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nk} \\
\end{bmatrix},\quad
\mathbf{B}=\begin{bmatrix}
b_{11} & b_{12} & \cdots & b_{1m} \\
b_{21} & b_{22} & \cdots & b_{2m} \\
\vdots & \vdots & \ddots & \vdots \\
b_{k1} & b_{k2} & \cdots & b_{km} \\
\end{bmatrix}.$$
Denote by $\mathbf{a}^\top_{i} \in \mathbb{R}^k$
the row vector representing the $i^\mathrm{th}$ row of the matrix $\mathbf{A}$,
and let $\mathbf{b}_{j} \in \mathbb{R}^k$
be the column vector from the $j^\mathrm{th}$ column of the matrix $\mathbf{B}$.
To produce the matrix product $\mathbf{C} = \mathbf{A}\mathbf{B}$, it is easiest to think of $\mathbf{A}$ in terms of its row vectors and $\mathbf{B}$ in terms of its column vectors:
$$\mathbf{A}=
\begin{bmatrix}
\mathbf{a}^\top_{1} \\
\mathbf{a}^\top_{2} \\
\vdots \\
\mathbf{a}^\top_n \\
\end{bmatrix},
\quad \mathbf{B}=\begin{bmatrix}
\mathbf{b}_{1} & \mathbf{b}_{2} & \cdots & \mathbf{b}_{m} \\
\end{bmatrix}.
$$
Then the matrix product $\mathbf{C} \in \mathbb{R}^{n \times m}$ is produced as we simply compute each element $c_{ij}$ as the dot product $\mathbf{a}^\top_i \mathbf{b}_j$:
$$\mathbf{C} = \mathbf{AB} = \begin{bmatrix}
\mathbf{a}^\top_{1} \\
\mathbf{a}^\top_{2} \\
\vdots \\
\mathbf{a}^\top_n \\
\end{bmatrix}
\begin{bmatrix}
\mathbf{b}_{1} & \mathbf{b}_{2} & \cdots & \mathbf{b}_{m} \\
\end{bmatrix}
= \begin{bmatrix}
\mathbf{a}^\top_{1} \mathbf{b}_1 & \mathbf{a}^\top_{1}\mathbf{b}_2& \cdots & \mathbf{a}^\top_{1} \mathbf{b}_m \\
\mathbf{a}^\top_{2}\mathbf{b}_1 & \mathbf{a}^\top_{2} \mathbf{b}_2 & \cdots & \mathbf{a}^\top_{2} \mathbf{b}_m \\
\vdots & \vdots & \ddots &\vdots\\
\mathbf{a}^\top_{n} \mathbf{b}_1 & \mathbf{a}^\top_{n}\mathbf{b}_2& \cdots& \mathbf{a}^\top_{n} \mathbf{b}_m
\end{bmatrix}.
$$
[**We can think of the matrix-matrix multiplication $\mathbf{AB}$ as simply performing $m$ matrix-vector products and stitching the results together to form an $n \times m$ matrix.**]
In the following snippet, we perform matrix multiplication on `A` and `B`.
Here, `A` is a matrix with 5 rows and 4 columns,
and `B` is a matrix with 4 rows and 3 columns.
After multiplication, we obtain a matrix with 5 rows and 3 columns.
```
B = tf.ones((4, 3), tf.float32)
tf.matmul(A, B)
```
Matrix-matrix multiplication can be simply called *matrix multiplication*, and should not be confused with the Hadamard product.
## Norms
:label:`subsec_lin-algebra-norms`
Some of the most useful operators in linear algebra are *norms*.
Informally, the norm of a vector tells us how *big* a vector is.
The notion of *size* under consideration here
concerns not dimensionality
but rather the magnitude of the components.
In linear algebra, a vector norm is a function $f$ that maps a vector
to a scalar, satisfying a handful of properties.
Given any vector $\mathbf{x}$,
the first property says
that if we scale all the elements of a vector
by a constant factor $\alpha$,
its norm also scales by the *absolute value*
of the same constant factor:
$$f(\alpha \mathbf{x}) = |\alpha| f(\mathbf{x}).$$
The second property is the familiar triangle inequality:
$$f(\mathbf{x} + \mathbf{y}) \leq f(\mathbf{x}) + f(\mathbf{y}).$$
The third property simply says that the norm must be non-negative:
$$f(\mathbf{x}) \geq 0.$$
That makes sense, as in most contexts the smallest *size* for anything is 0.
The final property requires that the smallest norm is achieved and only achieved
by a vector consisting of all zeros.
$$\forall i, [\mathbf{x}]_i = 0 \Leftrightarrow f(\mathbf{x})=0.$$
You might notice that norms sound a lot like measures of distance.
And if you remember Euclidean distances
(think Pythagoras' theorem) from grade school,
then the concepts of non-negativity and the triangle inequality might ring a bell.
In fact, the Euclidean distance is a norm:
specifically it is the $L_2$ norm.
Suppose that the elements in the $n$-dimensional vector
$\mathbf{x}$ are $x_1, \ldots, x_n$.
[**The $L_2$ *norm* of $\mathbf{x}$ is the square root of the sum of the squares of the vector elements:**]
(**$$\|\mathbf{x}\|_2 = \sqrt{\sum_{i=1}^n x_i^2},$$**)
where the subscript $2$ is often omitted in $L_2$ norms, i.e., $\|\mathbf{x}\|$ is equivalent to $\|\mathbf{x}\|_2$. In code,
we can calculate the $L_2$ norm of a vector as follows.
```
u = tf.constant([3.0, -4.0])
tf.norm(u)
```
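As an added sanity check (the second vector and the scaling factor below are chosen arbitrarily), the three norm properties listed above can be verified numerically for `u`:
```
v = tf.constant([1.0, 2.0])  # an arbitrary second vector for the check
alpha = -2.0
(tf.norm(alpha * u), abs(alpha) * tf.norm(u),  # scaling: both values equal 10
 tf.norm(u + v) <= tf.norm(u) + tf.norm(v),    # triangle inequality
 tf.norm(u) >= 0)                              # non-negativity
```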
In deep learning, we work more often
with the squared $L_2$ norm.
You will also frequently encounter [**the $L_1$ *norm***],
which is expressed as the sum of the absolute values of the vector elements:
(**$$\|\mathbf{x}\|_1 = \sum_{i=1}^n \left|x_i \right|.$$**)
As compared with the $L_2$ norm,
it is less influenced by outliers.
To calculate the $L_1$ norm, we compose
the absolute value function with a sum over the elements.
```
tf.reduce_sum(tf.abs(u))
```
Both the $L_2$ norm and the $L_1$ norm
are special cases of the more general $L_p$ *norm*:
$$\|\mathbf{x}\|_p = \left(\sum_{i=1}^n \left|x_i \right|^p \right)^{1/p}.$$
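As an added sketch (the choice $p = 3$ is arbitrary), an $L_p$ norm can be computed directly from this definition:
```
p = 3.0
tf.reduce_sum(tf.abs(u)**p)**(1 / p)  # the L_3 norm of u, computed from the definition
```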
Analogous to $L_2$ norms of vectors,
[**the *Frobenius norm* of a matrix $\mathbf{X} \in \mathbb{R}^{m \times n}$**]
is the square root of the sum of the squares of the matrix elements:
[**$$\|\mathbf{X}\|_F = \sqrt{\sum_{i=1}^m \sum_{j=1}^n x_{ij}^2}.$$**]
The Frobenius norm satisfies all the properties of vector norms.
It behaves as if it were an $L_2$ norm of a matrix-shaped vector.
Invoking the following function will calculate the Frobenius norm of a matrix.
```
tf.norm(tf.ones((4, 9)))
```
### Norms and Objectives
:label:`subsec_norms_and_objectives`
While we do not want to get too far ahead of ourselves,
we can plant some intuition already about why these concepts are useful.
In deep learning, we are often trying to solve optimization problems:
*maximize* the probability assigned to observed data;
*minimize* the distance between predictions
and the ground-truth observations.
Assign vector representations to items (like words, products, or news articles)
such that the distance between similar items is minimized,
and the distance between dissimilar items is maximized.
Oftentimes, the objectives, perhaps the most important components
of deep learning algorithms (besides the data),
are expressed as norms.
## More on Linear Algebra
In just this section,
we have taught you all the linear algebra
that you will need to understand
a remarkable chunk of modern deep learning.
There is a lot more to linear algebra
and a lot of that mathematics is useful for machine learning.
For example, matrices can be decomposed into factors,
and these decompositions can reveal
low-dimensional structure in real-world datasets.
There are entire subfields of machine learning
that focus on using matrix decompositions
and their generalizations to high-order tensors
to discover structure in datasets and solve prediction problems.
But this book focuses on deep learning.
And we believe you will be much more inclined to learn more mathematics
once you have gotten your hands dirty
deploying useful machine learning models on real datasets.
So while we reserve the right to introduce more mathematics much later on,
we will wrap up this section here.
If you are eager to learn more about linear algebra,
you may refer to either the
[online appendix on linear algebraic operations](https://d2l.ai/chapter_appendix-mathematics-for-deep-learning/geometry-linear-algebraic-ops.html)
or other excellent resources :cite:`Strang.1993,Kolter.2008,Petersen.Pedersen.ea.2008`.
## Summary
* Scalars, vectors, matrices, and tensors are basic mathematical objects in linear algebra.
* Vectors generalize scalars, and matrices generalize vectors.
* Scalars, vectors, matrices, and tensors have zero, one, two, and an arbitrary number of axes, respectively.
* A tensor can be reduced along the specified axes by `sum` and `mean`.
* Elementwise multiplication of two matrices is called their Hadamard product. It is different from matrix multiplication.
* In deep learning, we often work with norms such as the $L_1$ norm, the $L_2$ norm, and the Frobenius norm.
* We can perform a variety of operations over scalars, vectors, matrices, and tensors.
## Exercises
1. Prove that the transpose of a matrix $\mathbf{A}$'s transpose is $\mathbf{A}$: $(\mathbf{A}^\top)^\top = \mathbf{A}$.
1. Given two matrices $\mathbf{A}$ and $\mathbf{B}$, show that the sum of transposes is equal to the transpose of a sum: $\mathbf{A}^\top + \mathbf{B}^\top = (\mathbf{A} + \mathbf{B})^\top$.
1. Given any square matrix $\mathbf{A}$, is $\mathbf{A} + \mathbf{A}^\top$ always symmetric? Why?
1. We defined the tensor `X` of shape (2, 3, 4) in this section. What is the output of `len(X)`?
1. For a tensor `X` of arbitrary shape, does `len(X)` always correspond to the length of a certain axis of `X`? What is that axis?
1. Run `A / A.sum(axis=1)` and see what happens. Can you analyze the reason?
1. When traveling between two points in Manhattan, what is the distance that you need to cover in terms of the coordinates, i.e., in terms of avenues and streets? Can you travel diagonally?
1. Consider a tensor with shape (2, 3, 4). What are the shapes of the summation outputs along axis 0, 1, and 2?
1. Feed a tensor with 3 or more axes to the `linalg.norm` function and observe its output. What does this function compute for tensors of arbitrary shape?
[Discussions](https://discuss.d2l.ai/t/196)
# Setup
```
!pip install -U tf-nightly-2.0-preview
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
def plot_series(time, series, format="-", start=0, end=None, label=None):
plt.plot(time[start:end], series[start:end], format, label=label)
plt.xlabel("Time")
plt.ylabel("Value")
if label:
plt.legend(fontsize=14)
plt.grid(True)
```
# Trend and Seasonality
```
def trend(time, slope=0):
return slope * time
```
Let's create a time series that just trends upward:
```
time = np.arange(4 * 365 + 1)
baseline = 10
series = trend(time, 0.1)
plt.figure(figsize=(10, 6))
plot_series(time, series)
plt.show()
```
Now let's generate a time series with a seasonal pattern:
```
def seasonal_pattern(season_time):
"""Just an arbitrary pattern, you can change it if you wish"""
return np.where(season_time < 0.4,
np.cos(season_time * 2 * np.pi),
1 / np.exp(3 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
"""Repeats the same pattern at each period"""
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
baseline = 10
amplitude = 40
series = seasonality(time, period=365, amplitude=amplitude)
plt.figure(figsize=(10, 6))
plot_series(time, series)
plt.show()
```
Now let's create a time series with both trend and seasonality:
```
slope = 0.05
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
plt.figure(figsize=(10, 6))
plot_series(time, series)
plt.show()
```
# Noise
In practice few real-life time series have such a smooth signal. They usually have some noise, and the signal-to-noise ratio can sometimes be very low. Let's generate some white noise:
```
def white_noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
noise_level = 5
noise = white_noise(time, noise_level, seed=42)
plt.figure(figsize=(10, 6))
plot_series(time, noise)
plt.show()
```
Now let's add this white noise to the time series:
```
series += noise
plt.figure(figsize=(10, 6))
plot_series(time, series)
plt.show()
```
All right, this looks realistic enough for now. Let's try to forecast it. We will split it into two periods: the training period and the validation period (in many cases, you would also want to have a test period). The split will be at time step 1000.
```
split_time = 1000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
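# Added sketch: a naive "persistence" baseline that forecasts each validation
# step with the previous observed value, scored with mean absolute error.
naive_forecast = series[split_time - 1:-1]
print("Naive MAE:", np.mean(np.abs(x_valid - naive_forecast)))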
def autocorrelation(time, amplitude, seed=None):
rnd = np.random.RandomState(seed)
φ1 = 0.5
φ2 = -0.1
ar = rnd.randn(len(time) + 50)
ar[:50] = 100
for step in range(50, len(time) + 50):
ar[step] += φ1 * ar[step - 50]
ar[step] += φ2 * ar[step - 33]
return ar[50:] * amplitude
def autocorrelation(time, amplitude, seed=None):
rnd = np.random.RandomState(seed)
φ = 0.8
ar = rnd.randn(len(time) + 1)
for step in range(1, len(time) + 1):
ar[step] += φ * ar[step - 1]
return ar[1:] * amplitude
series = autocorrelation(time, 10, seed=42)
plot_series(time[:200], series[:200])
plt.show()
series = autocorrelation(time, 10, seed=42) + trend(time, 2)
plot_series(time[:200], series[:200])
plt.show()
series = autocorrelation(time, 10, seed=42) + seasonality(time, period=50, amplitude=150) + trend(time, 2)
plot_series(time[:200], series[:200])
plt.show()
series = autocorrelation(time, 10, seed=42) + seasonality(time, period=50, amplitude=150) + trend(time, 2)
series2 = autocorrelation(time, 5, seed=42) + seasonality(time, period=50, amplitude=2) + trend(time, -1) + 550
series[200:] = series2[200:]
#series += noise(time, 30)
plot_series(time[:300], series[:300])
plt.show()
def impulses(time, num_impulses, amplitude=1, seed=None):
rnd = np.random.RandomState(seed)
    impulse_indices = rnd.randint(len(time), size=num_impulses)  # honor num_impulses instead of a hard-coded 10
series = np.zeros(len(time))
for index in impulse_indices:
series[index] += rnd.rand() * amplitude
return series
series = impulses(time, 10, seed=42)
plot_series(time, series)
plt.show()
def autocorrelation(source, φs):
ar = source.copy()
max_lag = len(φs)
for step, value in enumerate(source):
for lag, φ in φs.items():
if step - lag > 0:
ar[step] += φ * ar[step - lag]
return ar
signal = impulses(time, 10, seed=42)
series = autocorrelation(signal, {1: 0.99})
plot_series(time, series)
plt.plot(time, signal, "k-")
plt.show()
signal = impulses(time, 10, seed=42)
series = autocorrelation(signal, {1: 0.70, 50: 0.2})
plot_series(time, series)
plt.plot(time, signal, "k-")
plt.show()
series_diff1 = series[1:] - series[:-1]
plot_series(time[1:], series_diff1)
from pandas.plotting import autocorrelation_plot
autocorrelation_plot(series)
from statsmodels.tsa.arima_model import ARIMA
model = ARIMA(series, order=(5, 1, 0))
model_fit = model.fit(disp=0)
print(model_fit.summary())
df = pd.read_csv("sunspots.csv", parse_dates=["Date"], index_col="Date")
series = df["Monthly Mean Total Sunspot Number"].asfreq("1M")
series.head()
series.plot(figsize=(12, 5))
series["1995-01-01":].plot()
series.diff(1).plot()
plt.axis([0, 100, -50, 50])
from pandas.plotting import autocorrelation_plot
autocorrelation_plot(series)
autocorrelation_plot(series.diff(1)[1:])
autocorrelation_plot(series.diff(1)[1:].diff(11 * 12)[11*12+1:])
plt.axis([0, 500, -0.1, 0.1])
autocorrelation_plot(series.diff(1)[1:])
plt.axis([0, 50, -0.1, 0.1])
116.7 - 104.3
[series.autocorr(lag) for lag in range(1, 50)]
from pandas.plotting import autocorrelation_plot
series_diff = series
for lag in range(50):
series_diff = series_diff[1:] - series_diff[:-1]
autocorrelation_plot(series_diff)
import pandas as pd
series_diff1 = pd.Series(series[1:] - series[:-1])
autocorrs = [series_diff1.autocorr(lag) for lag in range(1, 60)]
plt.plot(autocorrs)
plt.show()
```
|
github_jupyter
|
!pip install -U tf-nightly-2.0-preview
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
def plot_series(time, series, format="-", start=0, end=None, label=None):
plt.plot(time[start:end], series[start:end], format, label=label)
plt.xlabel("Time")
plt.ylabel("Value")
if label:
plt.legend(fontsize=14)
plt.grid(True)
def trend(time, slope=0):
return slope * time
time = np.arange(4 * 365 + 1)
baseline = 10
series = trend(time, 0.1)
plt.figure(figsize=(10, 6))
plot_series(time, series)
plt.show()
def seasonal_pattern(season_time):
"""Just an arbitrary pattern, you can change it if you wish"""
return np.where(season_time < 0.4,
np.cos(season_time * 2 * np.pi),
1 / np.exp(3 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
"""Repeats the same pattern at each period"""
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
baseline = 10
amplitude = 40
series = seasonality(time, period=365, amplitude=amplitude)
plt.figure(figsize=(10, 6))
plot_series(time, series)
plt.show()
slope = 0.05
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
plt.figure(figsize=(10, 6))
plot_series(time, series)
plt.show()
def white_noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
noise_level = 5
noise = white_noise(time, noise_level, seed=42)
plt.figure(figsize=(10, 6))
plot_series(time, noise)
plt.show()
series += noise
plt.figure(figsize=(10, 6))
plot_series(time, series)
plt.show()
split_time = 1000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
def autocorrelation(time, amplitude, seed=None):
rnd = np.random.RandomState(seed)
φ1 = 0.5
φ2 = -0.1
ar = rnd.randn(len(time) + 50)
ar[:50] = 100
for step in range(50, len(time) + 50):
ar[step] += φ1 * ar[step - 50]
ar[step] += φ2 * ar[step - 33]
return ar[50:] * amplitude
def autocorrelation(time, amplitude, seed=None):
rnd = np.random.RandomState(seed)
φ = 0.8
ar = rnd.randn(len(time) + 1)
for step in range(1, len(time) + 1):
ar[step] += φ * ar[step - 1]
return ar[1:] * amplitude
series = autocorrelation(time, 10, seed=42)
plot_series(time[:200], series[:200])
plt.show()
series = autocorrelation(time, 10, seed=42) + trend(time, 2)
plot_series(time[:200], series[:200])
plt.show()
series = autocorrelation(time, 10, seed=42) + seasonality(time, period=50, amplitude=150) + trend(time, 2)
plot_series(time[:200], series[:200])
plt.show()
series = autocorrelation(time, 10, seed=42) + seasonality(time, period=50, amplitude=150) + trend(time, 2)
series2 = autocorrelation(time, 5, seed=42) + seasonality(time, period=50, amplitude=2) + trend(time, -1) + 550
series[200:] = series2[200:]
#series += noise(time, 30)
plot_series(time[:300], series[:300])
plt.show()
def impulses(time, num_impulses, amplitude=1, seed=None):
rnd = np.random.RandomState(seed)
    impulse_indices = rnd.randint(len(time), size=num_impulses)
series = np.zeros(len(time))
for index in impulse_indices:
series[index] += rnd.rand() * amplitude
return series
series = impulses(time, 10, seed=42)
plot_series(time, series)
plt.show()
def autocorrelation(source, φs):
ar = source.copy()
max_lag = len(φs)
for step, value in enumerate(source):
for lag, φ in φs.items():
if step - lag > 0:
ar[step] += φ * ar[step - lag]
return ar
signal = impulses(time, 10, seed=42)
series = autocorrelation(signal, {1: 0.99})
plot_series(time, series)
plt.plot(time, signal, "k-")
plt.show()
signal = impulses(time, 10, seed=42)
series = autocorrelation(signal, {1: 0.70, 50: 0.2})
plot_series(time, series)
plt.plot(time, signal, "k-")
plt.show()
series_diff1 = series[1:] - series[:-1]
plot_series(time[1:], series_diff1)
from pandas.plotting import autocorrelation_plot
autocorrelation_plot(series)
from statsmodels.tsa.arima_model import ARIMA
model = ARIMA(series, order=(5, 1, 0))
model_fit = model.fit(disp=0)
print(model_fit.summary())
df = pd.read_csv("sunspots.csv", parse_dates=["Date"], index_col="Date")
series = df["Monthly Mean Total Sunspot Number"].asfreq("1M")
series.head()
series.plot(figsize=(12, 5))
series["1995-01-01":].plot()
series.diff(1).plot()
plt.axis([0, 100, -50, 50])
from pandas.plotting import autocorrelation_plot
autocorrelation_plot(series)
autocorrelation_plot(series.diff(1)[1:])
autocorrelation_plot(series.diff(1)[1:].diff(11 * 12)[11*12+1:])
plt.axis([0, 500, -0.1, 0.1])
autocorrelation_plot(series.diff(1)[1:])
plt.axis([0, 50, -0.1, 0.1])
116.7 - 104.3
[series.autocorr(lag) for lag in range(1, 50)]
# Signature of pandas.read_csv, pasted here from the docs for reference (not executable as-is):
# pd.read_csv(filepath_or_buffer, sep=',', delimiter=None, header='infer', names=None, index_col=None, usecols=None, squeeze=False, prefix=None, mangle_dupe_cols=True, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skipinitialspace=False, skiprows=None, skipfooter=0, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=False, skip_blank_lines=True, parse_dates=False, infer_datetime_format=False, keep_date_col=False, date_parser=None, dayfirst=False, iterator=False, chunksize=None, compression='infer', thousands=None, decimal=b'.', lineterminator=None, quotechar='"', quoting=0, doublequote=True, escapechar=None, comment=None, encoding=None, dialect=None, tupleize_cols=None, error_bad_lines=True, warn_bad_lines=True, delim_whitespace=False, low_memory=True, memory_map=False, float_precision=None)
# Read a comma-separated values (csv) file into DataFrame.
from pandas.plotting import autocorrelation_plot
series_diff = series
for lag in range(50):
series_diff = series_diff[1:] - series_diff[:-1]
autocorrelation_plot(series_diff)
import pandas as pd
series_diff1 = pd.Series(series[1:] - series[:-1])
autocorrs = [series_diff1.autocorr(lag) for lag in range(1, 60)]
plt.plot(autocorrs)
plt.show()
| 0.735926 | 0.965218 |
```
import sys
from PyQt5 import QtCore, QtGui, QtWidgets,uic,Qt
from PyQt5.QtWidgets import QLayout, QSizePolicy,QApplication, QWidget, QListWidget, QVBoxLayout, QLabel, QPushButton, QListWidgetItem, QHBoxLayout
import pymongo
import datetime
item_list =list()
data_client = pymongo.MongoClient("mongodb://localhost/")
ds_db = data_client["dataseed_db"]
ds_user = ds_db["user"]
ds_datasets = ds_db["dataset"]
# curr_datasets = ds_datasets.find_many()
for x in ds_datasets.find():
print(type(x))
item_list.append(x)
curr_user = ds_user.find_one()
qtCreatorFile_pur = "purchase_window.ui" # Enter file here.
Ui_MainWindow_pur, QtBaseClass = uic.loadUiType(qtCreatorFile_pur)
class PurchaseWindow(QtWidgets.QMainWindow, Ui_MainWindow_pur):
def __init__(self):
QtWidgets.QMainWindow.__init__(self)
        Ui_MainWindow_pur.__init__(self)
self.setupUi(self)
# self.purchase_btn.clicked.connect(self.CalculateTax)
# self.sell_btn.clicked.connect(self.CalculateTax2)
qtCreatorFile = "homepage.ui" # Enter file here.
Ui_MainWindow, QtBaseClass = uic.loadUiType(qtCreatorFile)
filterItem = ['All item','Medical','Movies','General']
# dummy_dataset = [{
# "uploaded_by": 123,
# "full_description": "This dataset refers to data of genes and etc.",
# "category":"Medical",
# "short_description":"This data is nice med.",
# "data_location": r"C:\Users\SAAD PC\Documents\DSci Project\diabetes.csv",
# "data_size":"5 GB",
# "status":"For Sale",
# "cost":30500,
# "uploaded_on_date_time":datetime.datetime.now()}]
original_list_item = item_list.copy()
class MyApp(QtWidgets.QMainWindow, Ui_MainWindow):
def __init__(self):
QtWidgets.QMainWindow.__init__(self)
Ui_MainWindow.__init__(self)
self.setupUi(self)
self.clickme.clicked.connect(self.searchItem)
self.searchBox.setText('')
# self.ItemListView.addItems(ItemList)
self.filterBox.addItems(filterItem)
self.filterBox.currentIndexChanged.connect(self.selectionchange)
# self.label.setText('DataSeed-Homepage')
# self.ItemListView.setStyleSheet( "QListWidget::item {margin-bottom:10px}")
self.renderList()
self.ItemListView.itemDoubleClicked.connect(self.itemclicked)
self.purchase_window = uic.loadUi("purchase_window.ui")
self.searchBox.returnPressed.connect(self.clickme.click)
def itemclicked(self,iteem):
print("item clicked: ",iteem)
i=0;
while i<len(item_list):
if(self.ItemListView.item(i)== iteem):
break
i=i+1
self.PurchaseWindowOpen(i)
# mainpg.hide()
# pg1.show()
# pg1.requestButton.hide()
# pg1.Ltitle.setText("Saad DB se "+str(i)+"th entry ka show kara do")
def PurchaseWindowOpen(self,item_index):
print(item_list[item_index])
self.purchase_window.show()
self.purchase_window.label1.setText(item_list[item_index]['short_description'])
        self.purchase_window.label2.setText(str(item_list[item_index]['cost']))
def Search_Query(self,query):
search_list =[]
search_query = query
global item_list
for x in item_list:
temp = list(x.values())
for y in temp:
# print(y)
try:
if search_query in y:
search_list.append(x)
# print('breaking after \n',x)
break
except:
pass
search_list
item_list=[]
item_list = search_list.copy()
self.SearchresultLabel.setText(str(len(search_list)) + " result(s) found")
def searchItem(self):
global original_list_item
global item_list
item_list = original_list_item.copy()
print('check')
query = self.searchBox.text()
self.Search_Query(query)
self.renderList()
def renderList(self):
self.ItemListView.clear()
for i in range(0,len(item_list)):
self.renderListItem(i)
def renderListItem(self,i):
layout = QHBoxLayout()
layout.setSizeConstraint(QLayout.SetMinimumSize);
item = QListWidgetItem(self.ItemListView)
label = QLabel(str(i+1)+ ") " + item_list[i]['short_description'] + "\n" + "Uploaded By: " + str(item_list[i]['uploaded_by']) + "\n" +"Rating: 3/5" )
label.setStyleSheet("height:fit-content;font-size:12pt;font-style: normal;font-weight:100;");
# label.setSizePolicy(QSizePolicy.Preferred, QSizePolicy.Minimum)
label.setWordWrap(True);
label2 = QLabel("Data Size: " + item_list[i]['data_size'] + '\nStatus: ' + item_list[i]['status'])
label2.setStyleSheet("height:fit-content;font-size:12pt;text-align:right;");
# label2.setStyleSheet("color: white; background: red;,text-align:right;");
label2.setAlignment(QtCore.Qt.AlignCenter)
# label2.setSizePolicy(QSizePolicy.Minimum, QSizePolicy.Ignored)
# label2.setSizePolicy(QSizePolicy.Preferred, QSizePolicy.Minimum)
label2.setWordWrap(True)
layout.addWidget(label)
layout.addWidget(label2)
widget = QWidget()
widget.setStyleSheet("height:fit-content;,width:100%");
widget.setLayout(layout);
item.setSizeHint(layout.sizeHint())
self.ItemListView.addItem(item)
self.ItemListView.setItemWidget(item,widget)
# strr=''
# for i in range(0,3):
# for item in dummy_dataset:
# for key,value in item.items():
# strr += key + ' ' + str(value) + '\n'
def selectionchange(self,i):
global original_list_item
global item_list
strr = self.filterBox.currentText()
# print(strr)
item_list = original_list_item.copy()
if strr != "All item":
self.Search_Query(strr)
self.renderList()
if __name__ == "__main__":
x=12
app = QtWidgets.QApplication(sys.argv)
window = MyApp()
window.show()
sys.exit(app.exec_())
```
|
github_jupyter
|
import sys
from PyQt5 import QtCore, QtGui, QtWidgets,uic,Qt
from PyQt5.QtWidgets import QLayout, QSizePolicy,QApplication, QWidget, QListWidget, QVBoxLayout, QLabel, QPushButton, QListWidgetItem, QHBoxLayout
import pymongo
import datetime
item_list =list()
data_client = pymongo.MongoClient("mongodb://localhost/")
ds_db = data_client["dataseed_db"]
ds_user = ds_db["user"]
ds_datasets = ds_db["dataset"]
# curr_datasets = ds_datasets.find_many()
for x in ds_datasets.find():
print(type(x))
item_list.append(x)
curr_user = ds_user.find_one()
qtCreatorFile_pur = "purchase_window.ui" # Enter file here.
Ui_MainWindow_pur, QtBaseClass = uic.loadUiType(qtCreatorFile_pur)
class PurchaseWindow(QtWidgets.QMainWindow, Ui_MainWindow_pur):
def __init__(self):
QtWidgets.QMainWindow.__init__(self)
        Ui_MainWindow_pur.__init__(self)
self.setupUi(self)
# self.purchase_btn.clicked.connect(self.CalculateTax)
# self.sell_btn.clicked.connect(self.CalculateTax2)
qtCreatorFile = "homepage.ui" # Enter file here.
Ui_MainWindow, QtBaseClass = uic.loadUiType(qtCreatorFile)
filterItem = ['All item','Medical','Movies','General']
# dummy_dataset = [{
# "uploaded_by": 123,
# "full_description": "This dataset refers to data of genes and etc.",
# "category":"Medical",
# "short_description":"This data is nice med.",
# "data_location": r"C:\Users\SAAD PC\Documents\DSci Project\diabetes.csv",
# "data_size":"5 GB",
# "status":"For Sale",
# "cost":30500,
# "uploaded_on_date_time":datetime.datetime.now()}]
original_list_item = item_list.copy()
class MyApp(QtWidgets.QMainWindow, Ui_MainWindow):
def __init__(self):
QtWidgets.QMainWindow.__init__(self)
Ui_MainWindow.__init__(self)
self.setupUi(self)
self.clickme.clicked.connect(self.searchItem)
self.searchBox.setText('')
# self.ItemListView.addItems(ItemList)
self.filterBox.addItems(filterItem)
self.filterBox.currentIndexChanged.connect(self.selectionchange)
# self.label.setText('DataSeed-Homepage')
# self.ItemListView.setStyleSheet( "QListWidget::item {margin-bottom:10px}")
self.renderList()
self.ItemListView.itemDoubleClicked.connect(self.itemclicked)
self.purchase_window = uic.loadUi("purchase_window.ui")
self.searchBox.returnPressed.connect(self.clickme.click)
def itemclicked(self,iteem):
print("item clicked: ",iteem)
i=0;
while i<len(item_list):
if(self.ItemListView.item(i)== iteem):
break
i=i+1
self.PurchaseWindowOpen(i)
# mainpg.hide()
# pg1.show()
# pg1.requestButton.hide()
# pg1.Ltitle.setText("Saad DB se "+str(i)+"th entry ka show kara do")
def PurchaseWindowOpen(self,item_index):
print(item_list[item_index])
self.purchase_window.show()
self.purchase_window.label1.setText(item_list[item_index]['short_description'])
        self.purchase_window.label2.setText(str(item_list[item_index]['cost']))
def Search_Query(self,query):
search_list =[]
search_query = query
global item_list
for x in item_list:
temp = list(x.values())
for y in temp:
# print(y)
try:
if search_query in y:
search_list.append(x)
# print('breaking after \n',x)
break
except:
pass
search_list
item_list=[]
item_list = search_list.copy()
self.SearchresultLabel.setText(str(len(search_list)) + " result(s) found")
def searchItem(self):
global original_list_item
global item_list
item_list = original_list_item.copy()
print('check')
query = self.searchBox.text()
self.Search_Query(query)
self.renderList()
def renderList(self):
self.ItemListView.clear()
for i in range(0,len(item_list)):
self.renderListItem(i)
def renderListItem(self,i):
layout = QHBoxLayout()
layout.setSizeConstraint(QLayout.SetMinimumSize);
item = QListWidgetItem(self.ItemListView)
label = QLabel(str(i+1)+ ") " + item_list[i]['short_description'] + "\n" + "Uploaded By: " + str(item_list[i]['uploaded_by']) + "\n" +"Rating: 3/5" )
label.setStyleSheet("height:fit-content;font-size:12pt;font-style: normal;font-weight:100;");
# label.setSizePolicy(QSizePolicy.Preferred, QSizePolicy.Minimum)
label.setWordWrap(True);
label2 = QLabel("Data Size: " + item_list[i]['data_size'] + '\nStatus: ' + item_list[i]['status'])
label2.setStyleSheet("height:fit-content;font-size:12pt;text-align:right;");
# label2.setStyleSheet("color: white; background: red;,text-align:right;");
label2.setAlignment(QtCore.Qt.AlignCenter)
# label2.setSizePolicy(QSizePolicy.Minimum, QSizePolicy.Ignored)
# label2.setSizePolicy(QSizePolicy.Preferred, QSizePolicy.Minimum)
label2.setWordWrap(True)
layout.addWidget(label)
layout.addWidget(label2)
widget = QWidget()
widget.setStyleSheet("height:fit-content;,width:100%");
widget.setLayout(layout);
item.setSizeHint(layout.sizeHint())
self.ItemListView.addItem(item)
self.ItemListView.setItemWidget(item,widget)
# strr=''
# for i in range(0,3):
# for item in dummy_dataset:
# for key,value in item.items():
# strr += key + ' ' + str(value) + '\n'
def selectionchange(self,i):
global original_list_item
global item_list
strr = self.filterBox.currentText()
# print(strr)
item_list = original_list_item.copy()
if strr != "All item":
self.Search_Query(strr)
self.renderList()
if __name__ == "__main__":
x=12
app = QtWidgets.QApplication(sys.argv)
window = MyApp()
window.show()
sys.exit(app.exec_())
| 0.060599 | 0.117826 |
# `chart_ipynb` Time Series
`chart_ipynb` provides additional functions specifically for time series data. Instead of using the function directly, we first set up the data and options manually to show how it works.
In `time_series`, we use [`pandas_datareader`](https://pandas-datareader.readthedocs.io/en/latest/) to read stock data from [`quandl`](https://pandas-datareader.readthedocs.io/en/latest/readers/quandl.html) by default. Other data sources can also be used to access different kinds of data ([more details can be found here](https://pandas-datareader.readthedocs.io/en/latest/remote_data.html)).
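For instance, a minimal sketch (not part of the original example) of pulling a non-stock series through a different `pandas_datareader` source, here FRED's quarterly GDP series:
```
import datetime
import pandas_datareader.data as web

start = datetime.datetime(2017, 1, 1)
end = datetime.datetime(2018, 1, 1)

# Same DataReader call, different source string: 'fred' instead of 'quandl'.
gdp = web.DataReader('GDP', 'fred', start, end)
gdp.head()
```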
```
from chart_ipynb import utils
from chart_ipynb.chart_framework import ChartSuperClass
import numpy as np
import pandas as pd
import pandas_datareader
import pandas_datareader.data as web
import datetime
import time
```
For more than a handful of free requests, `quandl` requires an API key, which you can obtain by creating an account on the [official website](https://www.quandl.com/). The API key can be found under your Account Settings.
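The cells below hard-code the key for simplicity; a safer pattern, sketched here as an assumption about your environment rather than a requirement of `chart_ipynb`, is to read it from an environment variable:
```
import os

# Hypothetical environment variable name; set it in your shell before launching Jupyter.
api_key = os.environ.get('QUANDL_API_KEY', '')
```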
In the following examples, we will use data of Apple, Amazon, Google.
```
api_key = '1JFowowyzc-FnajAsDkY'
start = datetime.datetime(2017,1,1)
end = datetime.datetime(2018,1,1)
aapl = web.DataReader('AAPL',"quandl", start, end, api_key = api_key)
amzn = web.DataReader('AMZN',"quandl", start, end, api_key = api_key)
googl = web.DataReader('GOOGL',"quandl", start, end, api_key = api_key)
aapl.head()
amzn.head()
googl.head()
```
We need to format the data into the structure expected by the chart initialization function.
The following function returns two values: a list of price values and a list of date strings.
```
def data_format(dataset, val_col):
"""
dataset: pd.DataFrame
val_col: the column name for the target value. e.g 'Close'
"""
data = dataset[val_col]
idx_reset_df = dataset.reset_index()
if 'Date' not in idx_reset_df.columns:
return 'please rename the date columns to "Date"'
sort_df = idx_reset_df.sort_values(by='Date')
sort_df['Date']=sort_df['Date'].astype(str)
return list(sort_df[val_col]), list(sort_df['Date'])
aapl_val, aapl_label = data_format(aapl, 'Close')
amzn_val, amzn_label = data_format(amzn, 'Close')
googl_val, googl_label = data_format(googl, 'Close')
```
`utils` provides helper functions to create the dataset and data structures.
The `label` in each dataset is the company's ticker symbol, while `labels` in the data is a list of date strings that will be rendered along the x axis.
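Conceptually, what gets assembled is a Chart.js-style structure; the sketch below only illustrates that shape (with made-up placeholder numbers), not `chart_ipynb`'s internal representation:
```
# Illustrative Chart.js-style payload: one shared list of x-axis labels, one dataset per ticker.
chart_data = {
    'labels': ['2017-01-03', '2017-01-04', '2017-01-05'],
    'datasets': [
        {'label': 'AAPL', 'data': [100.0, 101.5, 102.2]},
        {'label': 'AMZN', 'data': [750.0, 748.3, 755.1]},
    ],
}
```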
```
dataset1 = utils.dataset(
label = 'AAPL',
backgroundColor = utils.color_rgb('red',0.5),
borderColor = utils.color_rgb('red'),
data = aapl_val,
type = 'line',
pointRadius = 0,
fill = False,
lineTension = 0,
borderWidth = 2
)
dataset2 = utils.dataset(
label = 'AMZN',
backgroundColor = utils.color_rgb('blue',0.5),
borderColor = utils.color_rgb('blue'),
data = amzn_val,
type = 'line',
pointRadius = 0,
fill = False,
lineTension = 0,
borderWidth = 2
)
dataset3 = utils.dataset(
label = 'GOOGL',
backgroundColor = utils.color_rgb('green',0.5),
borderColor = utils.color_rgb('green'),
data = googl_val,
type = 'line',
pointRadius = 0,
fill = False,
lineTension = 0,
borderWidth = 2
)
data = utils.data(
labels = aapl_label,
datasets = [dataset1,dataset2,dataset3]
)
```
The configuration is required to initialize the chart. It contains the type, data, and options.
The value of type is a string indicating the kind of chart, such as 'line', 'bar', or 'bubble'. The options dictionary covers many more features, including the title, legend, scales, and elements.
```
config = utils.config(
type = 'line',
data = data,
options = utils.options(
animation = {
'duration': 0
},
scales = {
'xAxes': [{
'display':True,
'scaleLabel':{
'display':True,
'labelString':'Date'
}
,'ticks': {
'major': {
'enabled': True,
'fontStyle': 'bold'
},
'source': 'data',
'autoSkip': True,
'autoSkipPadding': 10,
'maxRotation': 60,
},
}],
'yAxes': [{
'gridLines': {
'drawBorder': False
},
'scaleLabel': {
'display': True,
'labelString': 'Closing price ($)'
}
}]
},
)
)
```
After setting up the configuration, we are ready to initialize the line chart by creating a ChartSuperClass object which is the super class for all the charts in `chart_ipynb`.
```
line_chart = ChartSuperClass()
line_chart.initialize_chart(width=800, config=config)
line_chart
```
## `time_series` line chart
Now, we can directly use the `time_series_Chart` function provided in `time_series` to create a line chart.
`time_series_Chart` supports two types of charts: line and bar.
```
time_series_Chart(_chart_type, ticker_symbol, val_col, date_col = None,
start=None, end=None,
data_provide = False, input_dataset = None,
website = None, api_key = None,
multi_axis = False, axis_label = None, stacked = False,
options = None, xAxes = None, yAxes = None,
colors=None, backgroundColor = None, borderColor = None,
title = None,
fill = False,
width=800,
**other_arguments
)
```
- `_chart_type`: the type of chart, 'line' or 'bar'
- `ticker_symbol`: when using the built-in stock data, the company's ticker symbol; when providing your own data, the dataset name shown in the legend
- `val_col`: the name of the value column
- `date_col`: the name of the date column
- `start`: start date, a string in 'yyyy-m-d' format
- `end`: end date, a string in 'yyyy-m-d' format
- `data_provide`: whether you provide your own data; default is False
- `input_dataset`: your own dataset(s); required when data_provide=True
- `website`: the website from which to access the data
- `api_key`: API key used to access the data from the website
- `multi_axis`: only works for two datasets
- `axis_label`: axis label
- `stacked`: only works for bar charts and multi_axis = False
- More arguments are described [here](https://github.com/AaronWatters/Chart_ipynb/blob/master/chart_ipynb/time_series.py)
```
from chart_ipynb import time_series
```
## Stock Closing prices from quandl
### Line Chart
```
start = '2017-1-1'
end = '2018-1-1'
symbols = ['AAPL','AMZN','GOOGL']
colors = ['red', 'blue', 'green']
col = 'Close'
time_series.time_series_Chart('line', symbols, col, start = start, end = end, colors = colors,
website='quandl', title='Closing Price ($)')
```
### Stacked Bar Chart
```
time_series.time_series_Chart('bar', symbols, col, start = start, end = end, colors = ['violet', 'midnightblue', 'cyan'],
website='quandl',stacked=True, title='Closing Price ($)')
```
## Stock Open prices from quandl - Multi axis
```
start = '2017-1-1'
end = '2018-1-1'
symbols = ['AAPL','AMZN']
colors = ['purple', 'brown']
val_col = 'Open'
time_series.time_series_Chart('line', symbols, val_col, start = start, end = end,
website='quandl', multi_axis = True,
colors = colors,
title = 'Opening Price ($)')
```
## Self Provide Datasets
The following example shows how the function works when using your own datasets.
### Line Chart
```
symbols = ['AAPL','AMZN', 'GOOGL']
input_dataset = [aapl, amzn, googl]
colors = ['salmon','seagreen','royalblue']
val_col = 'Close'
date_col = 'Date'
time_series.time_series_Chart('line', symbols, val_col,
date_col = date_col,
start = start, end = end,
data_provide=True,
input_dataset = input_dataset,
colors = colors,
title = "Closing Price ($)")
```
### Bar Chart
```
colors = ['salmon','seagreen','royalblue']
time_series.time_series_Chart('bar', symbols, val_col,
date_col = date_col,
start = start, end = end,
data_provide=True,
input_dataset = input_dataset,
stacked=True,
colors = colors,
title = "Closing Price ($)")
```
|
github_jupyter
|
from chart_ipynb import utils
from chart_ipynb.chart_framework import ChartSuperClass
import numpy as np
import pandas as pd
import pandas_datareader
import pandas_datareader.data as web
import datetime
import time
api_key = '1JFowowyzc-FnajAsDkY'
start = datetime.datetime(2017,1,1)
end = datetime.datetime(2018,1,1)
aapl = web.DataReader('AAPL',"quandl", start, end, api_key = api_key)
amzn = web.DataReader('AMZN',"quandl", start, end, api_key = api_key)
googl = web.DataReader('GOOGL',"quandl", start, end, api_key = api_key)
aapl.head()
amzn.head()
googl.head()
def data_format(dataset, val_col):
"""
dataset: pd.DataFrame
val_col: the column name for the target value. e.g 'Close'
"""
data = dataset[val_col]
idx_reset_df = dataset.reset_index()
if 'Date' not in idx_reset_df.columns:
return 'please rename the date columns to "Date"'
sort_df = idx_reset_df.sort_values(by='Date')
sort_df['Date']=sort_df['Date'].astype(str)
return list(sort_df[val_col]), list(sort_df['Date'])
aapl_val, aapl_label = data_format(aapl, 'Close')
amzn_val, amzn_label = data_format(amzn, 'Close')
googl_val, googl_label = data_format(googl, 'Close')
dataset1 = utils.dataset(
label = 'AAPL',
backgroundColor = utils.color_rgb('red',0.5),
borderColor = utils.color_rgb('red'),
data = aapl_val,
type = 'line',
pointRadius = 0,
fill = False,
lineTension = 0,
borderWidth = 2
)
dataset2 = utils.dataset(
label = 'AMZN',
backgroundColor = utils.color_rgb('blue',0.5),
borderColor = utils.color_rgb('blue'),
data = amzn_val,
type = 'line',
pointRadius = 0,
fill = False,
lineTension = 0,
borderWidth = 2
)
dataset3 = utils.dataset(
label = 'GOOGL',
backgroundColor = utils.color_rgb('green',0.5),
borderColor = utils.color_rgb('green'),
data = googl_val,
type = 'line',
pointRadius = 0,
fill = False,
lineTension = 0,
borderWidth = 2
)
data = utils.data(
labels = aapl_label,
datasets = [dataset1,dataset2,dataset3]
)
config = utils.config(
type = 'line',
data = data,
options = utils.options(
animation = {
'duration': 0
},
scales = {
'xAxes': [{
'display':True,
'scaleLabel':{
'display':True,
'labelString':'Date'
}
,'ticks': {
'major': {
'enabled': True,
'fontStyle': 'bold'
},
'source': 'data',
'autoSkip': True,
'autoSkipPadding': 10,
'maxRotation': 60,
},
}],
'yAxes': [{
'gridLines': {
'drawBorder': False
},
'scaleLabel': {
'display': True,
'labelString': 'Closing price ($)'
}
}]
},
)
)
line_chart = ChartSuperClass()
line_chart.initialize_chart(width=800, config=config)
line_chart
time_series_Chart(_chart_type, ticker_symbol, val_col, date_col = None,
start=None, end=None,
data_provide = False, input_dataset = None,
website = None, api_key = None,
multi_axis = False, axis_label = None, stacked = False,
options = None, xAxes = None, yAxes = None,
colors=None, backgroundColor = None, borderColor = None,
title = None,
fill = False,
width=800,
**other_arguments
)
from chart_ipynb import time_series
start = '2017-1-1'
end = '2018-1-1'
symbols = ['AAPL','AMZN','GOOGL']
colors = ['red', 'blue', 'green']
col = 'Close'
time_series.time_series_Chart('line', symbols, col, start = start, end = end, colors = colors,
website='quandl', title='Closing Price ($)')
time_series.time_series_Chart('bar', symbols, col, start = start, end = end, colors = ['violet', 'midnightblue', 'cyan'],
website='quandl',stacked=True, title='Closing Price ($)')
start = '2017-1-1'
end = '2018-1-1'
symbols = ['AAPL','AMZN']
colors = ['purple', 'brown']
val_col = 'Open'
time_series.time_series_Chart('line', symbols, val_col, start = start, end = end,
website='quandl', multi_axis = True,
colors = colors,
title = 'Opening Price ($)')
symbols = ['AAPL','AMZN', 'GOOGL']
input_dataset = [aapl, amzn, googl]
colors = ['salmon','seagreen','royalblue']
val_col = 'Close'
date_col = 'Date'
time_series.time_series_Chart('line', symbols, val_col,
date_col = date_col,
start = start, end = end,
data_provide=True,
input_dataset = input_dataset,
colors = colors,
title = "Closing Price ($)")
colors = ['salmon','seagreen','royalblue']
time_series.time_series_Chart('bar', symbols, val_col,
date_col = date_col,
start = start, end = end,
data_provide=True,
input_dataset = input_dataset,
stacked=True,
colors = colors,
title = "Closing Price ($)")
| 0.380299 | 0.963848 |
```
import requests
from pprint import pprint
from IPython.display import display, HTML, clear_output
import time
# Build the field-reference table for A-share quotes: a_refer
a_list = ['name',
'open_price',
'yesterday_price',
'current_price',
'max', 'min',
'buy1', 'sell1',
          'volume', # trading volume
          'amount', # turnover (trade value)
          'buy1_volume', # bid-1 order volume
          'buy1', # bid-1 price
'buy2_volume', 'buy2', 'buy3_volume', 'buy3', 'buy4_volume', 'buy4', 'buy5_volume', 'buy5',
          'sell1_volume', # ask-1 order volume
          'sell1', # ask-1 price
'sell2_volume', 'sell2', 'sell3_volume', 'sell3', 'sell4_volume', 'sell4', 'sell5_volume', 'sell5',
'date', 'time'
]
a_refer = {}
for i in range(len(a_list)):
a_refer[a_list[i]] = i
# a_refer
# Build the field-reference table for Hong Kong (rt_) quotes: rt_refer
rt_list = ['name_en', 'name',
'open_price', 'yesterday_price', 'max', 'min', 'current_price',
           'up_price', 'up_rate', # price change and percentage change
'buy1', 'sell1', 'volume', 'amount',
           'pe', 'dividend_yield', # P/E ratio and dividend yield
'max_of_52weeks', 'min_of_52weeks',
'date', 'time'
]
rt_refer = {}
for i in range(len(rt_list)):
rt_refer[rt_list[i]] = i
# rt_refer
# Build the field-reference table for US (gb_) quotes: gb_refer
gb_list = ['name', 'current_price', 'up_rate', # percentage change
           'datetime', # date and time
           'up_price', # price change
'open_price', 'max', 'min', 'max_of_52weeks', 'min_of_52weeks',
'volume', 'volume_of_10days', 'maket_value',
'earn_per_stock', 'pe', 'unknown1', 'beta', 'uknown2', 'uknown3',
'capital_stock', 'unknown4',
'close_price', 'uknown5', 'uknown6', 'known7',
'UTC_time', 'yesterday_price', 'unkown8'
]
gb_refer = {}
for i in range(len(gb_list)):
gb_refer[gb_list[i]] = i
# gb_refer
stock_list = {
# 'gb_tal': '好未来',
# 'gb_$dji': '道琼斯',
'rt_hkHSI': '恒生指数',
'sh000001': '上证指数',
'sh000016': '上证50',
'sh000300': '沪深300',
'sh000827': '中证环保',
'sh000905': '中证500',
'sh000991': '全指医药',
'sh513030': '德国30',
'sh513050': '中概互联',
'sz159934': '黄金ETF',
'sz162411': '华宝油气',
'sz399006': '创业板指',
'sz399396': '国证食品',
'sz399812': '养老产业',
'sz399932': '中证消费',
'sz399967': '中证军工',
'sz399971': '中证传媒',
'sz399975': '证券公司'
}
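# Sina's quote endpoint returns one 'var hq_str_<code>="...";' line per requested symbol.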
url = 'http://hq.sinajs.cn/list=' + ','.join([key for key in stock_list])
html_model ="""<div style="margin: 3px; overflow: hidden;">
<div style="float:left;position:relative;">
<div style="color:white;height:25px;width:75px;text-align: center;padding-left:2px;padding-right:2px;margin-right:2px;z-index:11;
background-color:#e2d5d5;color:#000000;font-weight: bold;float:left;">{name}</div> <!-- 名称 -->
<div style="color:white;height:25px;width:55px;text-align: center;padding-left:2px;padding-right:2px;margin-right:2px;z-index:11;
background-color:#e2d5d5;color:#000000;font-weight: bold;float:left;">{code}</div> <!-- 代码 -->
<div style="color:white;height:25px;width:75px;text-align: right;padding-left:2px;padding-right:2px;margin-right:2px;z-index:11;
background-color:#adadad;color: #353030;float:left;">{yesterday_price}</div> <!-- 昨日收盘价 -->
<div style="color:white;height:25px;width:75px;text-align: right;padding-left:2px;padding-right:2px;margin-right:2px;z-index:11;
background-color:#6666ff;float:left;">{current_price}</div> <!-- 现价 -->
<!-- 前5个糅合一起的区间 -->
</div>
<div style="float:left;position:relative;">
<div style="color:{color};width:56px;height:25px;text-align: center;padding-left:2px;margin-right:2px;z-index:1;
background-color:{flash_color};position:absolute;font-weight: bold;border: 1.2px solid {color};">{up_p_r}</div>
<!-- 涨跌幅color up -->
<div style="margin-left:60px;width:{total_width}px;height:25px;z-index:1;
background-image:linear-gradient(to right, #6666ff, white, {color});position:absolute;">
<div style="float:left;line-height:25px;text-align:left;font-size:11px;color:white;">{min_p_r}</div>
<div style="float:right;line-height:25px;text-align:right;font-size:11px;color:white;">{max_p_r}</div>
</div> <!-- 涨跌停展示区的宽度,内部显示最低价和最高价 width,min,max -->
<div style="margin-left:{middle_px}px;width:2px;height:25px;z-index:4;
background-color:#FFFFFF;position:absolute;"></div> <!-- 中间0%%处的小白条 -->
<div style="margin-left:{bar_px}px;width:{bar_width}px;height:25px;z-index:2;
background-color:#E7D9A0;position:absolute;"></div> <!-- 最高价和最低价的区间展示bar_left, bar_width -->
<div style="margin-left:{recent_px[0]}px;width:5px;height:31px;z-index:3;
background-color:#ebeb0e;position:absolute;top:-3px;opacity:0.2;border-right:2px outset blue;"></div>
<div style="margin-left:{recent_px[1]}px;width:5px;height:31px;z-index:3;
background-color:#ebeb0e;position:absolute;top:-3px;opacity:0.4;border-right:2px outset blue;"></div>
<div style="margin-left:{recent_px[2]}px;width:5px;height:31px;z-index:3;
background-color:#ebeb0e;position:absolute;top:-3px;opacity:0.5;border-right:2px outset blue;"></div>
<div style="margin-left:{recent_px[3]}px;width:5px;height:31px;z-index:3;
background-color:#ebeb0e;position:absolute;top:-3px;opacity:0.7;border-right:2px outset blue;"></div>
<div style="margin-left:{recent_px[4]}px;width:5px;height:31px;z-index:5;
background-color:#ebeb0e;position:absolute;top:-3px;opacity:1;border-right:2px outset blue;"></div>
<!-- 位置坐标 5个 -->
</div>
</div>""" # 需要参数如下
# name, code, yesterday_price, current_price, up(p/r), total_width, min,max(p/r), bar_start, bar_width
# px_recent[0], px_recent[1], px_recent[2], px_recent[3], px_recent[4],
# up/min/max each come in rate and price variants; all values are strings
stock_cache = { } # dict caching the latest info fetched for every symbol
for key in stock_list:
stock_cache[key] = {'info':[], 'price_info': [], 'rate_info': [], 'recent_rates': [0, 0, 0, 0, 0]}
def update_stock(mark_rate = 1):
    r_list = requests.get(url).content.decode('gbk').split('\n')[:-1] # drop the trailing empty line
    # stock_all_dic = {} # dict mapping every stock code to its name
    for r in r_list:
        r_sp = r[11:-1].split('=')
        # strip the 'var hq_str_' prefix and trailing ';', split on '=' into code and payload, then split the payload on ','
        code = r_sp[0]
        result_l = r_sp[1][1:-1].split(',') # the individual quote fields
        # print(result)
        refer = {} # pick the field-reference dict that matches the code prefix
s_code = ''
if code[:2] == 'rt':
refer = rt_refer
s_code = code.split('_')[-1]
elif code[:2] == 'gb':
refer = gb_refer
s_code = code.split('_')[-1]
else:
refer = a_refer
s_code = code[-6:]
name = result_l[refer['name']]
yesterday_price = float(result_l[refer['yesterday_price']])
current_price = float(result_l[refer['current_price']])
price_change = current_price - yesterday_price
change_rate = price_change / yesterday_price
min_price = float(result_l[refer['min']])
max_price = float(result_l[refer['max']])
min_rate = (min_price - yesterday_price) / yesterday_price
max_rate = (max_price - yesterday_price) / yesterday_price
stock_cache[code]['info'] = [name, s_code,
format_point(current_price, yesterday_price),
format_point(current_price, current_price)
]
stock_cache[code]['price_info'] = [
format_point(current_price, price_change),
format_point(current_price, min_price),
format_point(current_price, max_price),
]
stock_cache[code]['rate_info'] = [change_rate,
min_rate,
max_rate,
]
        if mark_rate: # when sampling this tick, push the new rate and drop the oldest (stack-style)
del stock_cache[code]['recent_rates'][0]
stock_cache[code]['recent_rates'].append(change_rate)
# result_dic[code] = result_l[]
        # stock_all_dic[code] = result_l[refer['name']] # used to refresh the name of every stock
# pprint(stock_all_dic)
def format_point(price, float_num):
return '%.2f' % float_num if price > 100 else '%.3f' % float_num
count = 0
start_px = 60
px_for_10percent = 200
while(True):
output_price = ''
output_rate = ''
output_price_flash = ''
output_rate_flash = ''
# sortby = up_rate_reversed
    count = 0 if count == 30 else count + 1 # refresh the sampled rates roughly every 20 s
update_stock(mark_rate = 1 if count == 1 else 0)
stock_cache_tuple = sorted(stock_cache.items(), key = lambda item: float(item[1]['rate_info'][0]))
    # a list of tuples: the first element of each is the code, the second is its info dict
price_format_list = []
rate_format_list = []
for stock in stock_cache_tuple:
color = 'green' if stock[1]['rate_info'][0] < 0 else 'red'
delta_rate = stock[1]['recent_rates'][-1] - stock[1]['recent_rates'][-2]
flash_color = ''
if delta_rate > 0.0001:
flash_color = 'red'
elif delta_rate < -0.0001:
flash_color = 'green'
else:
flash_color = 'white'
format_dic = {'name': stock[1]['info'][0],
'code' : stock[1]['info'][1],
'yesterday_price' : stock[1]['info'][2],
'current_price' : stock[1]['info'][3],
'total_width': '%d'%(2*px_for_10percent),
'middle_px': '%d' % (start_px + px_for_10percent),
'bar_px': '%.0f' % (start_px + (1+stock[1]['rate_info'][1]*10) * px_for_10percent),
'bar_width': '%.0f' % ((stock[1]['rate_info'][2] - stock[1]['rate_info'][1]) *10 * px_for_10percent),
'recent_px': ['%.0f' % (start_px + (1 + i * 10) * px_for_10percent) for i in stock[1]['recent_rates']],
'color': color,
'flash_color': flash_color,
'up_p_r': stock[1]['price_info'][0],
'min_p_r': stock[1]['price_info'][1],
'max_p_r': stock[1]['price_info'][2],
}
output_price_flash += html_model.format(**format_dic)
format_dic['flash_color'] = 'white'
output_price += html_model.format(**format_dic)
format_dic.update({
'flash_color': flash_color,
'up_p_r': '%.2f%%'%(100*stock[1]['rate_info'][0]),
'min_p_r': '%.2f%%'%(stock[1]['rate_info'][1]*100),
'max_p_r': '%.2f%%'%(stock[1]['rate_info'][2]*100),
})
output_rate_flash += html_model.format(**format_dic)
format_dic['flash_color'] = 'white'
output_rate += html_model.format(**format_dic)
for i in range(2):
clear_output()
display(HTML(output_price_flash))
time.sleep(0.3)
clear_output()
display(HTML(output_price))
time.sleep(2)
clear_output()
display(HTML(output_rate_flash))
time.sleep(0.3)
clear_output()
display(HTML(output_rate))
time.sleep(3)
stock_cache
print(output_rate_flash)
# Validation block for the fetched raw quote string format
s = 'HSCEI,恒生中国企业指数,11771.591,11815.000,11774.591,11694.320,11737.960,-77.040,-0.650,0.000,0.000,13554505.728,0,0.000,0.000,12589.530,9761.600,2019/04/10,12:05:00,,,,,,'
sp = s.split(',')
for key in rt_refer:
print('%-15s----|%s'% (key, sp[rt_refer[key]]))
```
## 华宝油气 (sz162411)
### Support levels
- 0.5
- 0.46
### Resistance levels
- 0.73
- 0.63
# Grid points (used in the sketch below)
- 0.44
- 0.50
- 0.54
- 0.58
- 0.62
- 0.71
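As a small illustration (an addition, not part of the original notes), the grid levels above can be used to find the rungs that bracket a quoted price:
```
# Hypothetical helper: find the grid rungs immediately below and above a given price.
grid = [0.44, 0.50, 0.54, 0.58, 0.62, 0.71]

def bracket(price, levels):
    below = [lv for lv in levels if lv <= price]
    above = [lv for lv in levels if lv > price]
    return (max(below) if below else None, min(above) if above else None)

print(bracket(0.56, grid))  # (0.54, 0.58)
```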
|
github_jupyter
|
import requests
from pprint import pprint
from IPython.display import display, HTML, clear_output
import time
# 得到A股基本信息参考表 a_refer
a_list = ['name',
'open_price',
'yesterday_price',
'current_price',
'max', 'min',
'buy1', 'sell1',
'volume', # 成交量
'amount', # 成交额
'buy1_volume', # 买1申请量
'buy1', # 买1价格
'buy2_volume', 'buy2', 'buy3_volume', 'buy3', 'buy4_volume', 'buy4', 'buy5_volume', 'buy5',
'sell1_volume', # 卖1申请量
'sell1', # 卖1价格
'sell2_volume', 'sell2', 'sell3_volume', 'sell3', 'sell4_volume', 'sell4', 'sell5_volume', 'sell5',
'date', 'time'
]
a_refer = {}
for i in range(len(a_list)):
a_refer[a_list[i]] = i
# a_refer
# 得到港股基本信息参考表 rt_refer
rt_list = ['name_en', 'name',
'open_price', 'yesterday_price', 'max', 'min', 'current_price',
'up_price', 'up_rate', # 涨跌价与涨跌百分比幅度
'buy1', 'sell1', 'volume', 'amount',
'pe', 'dividend_yield', # 市盈率、周息率
'max_of_52weeks', 'min_of_52weeks',
'date', 'time'
]
rt_refer = {}
for i in range(len(rt_list)):
rt_refer[rt_list[i]] = i
# rt_refer
# 得到美股基本信息参考表 gb_refer
gb_list = ['name', 'current_price', 'up_rate', # 涨跌百分比幅度
'datetime', # 日期和时间
'up_price', # 上涨价格
'open_price', 'max', 'min', 'max_of_52weeks', 'min_of_52weeks',
'volume', 'volume_of_10days', 'maket_value',
'earn_per_stock', 'pe', 'unknown1', 'beta', 'uknown2', 'uknown3',
'capital_stock', 'unknown4',
'close_price', 'uknown5', 'uknown6', 'known7',
'UTC_time', 'yesterday_price', 'unkown8'
]
gb_refer = {}
for i in range(len(gb_list)):
gb_refer[gb_list[i]] = i
# gb_refer
stock_list = {
# 'gb_tal': '好未来',
# 'gb_$dji': '道琼斯',
'rt_hkHSI': '恒生指数',
'sh000001': '上证指数',
'sh000016': '上证50',
'sh000300': '沪深300',
'sh000827': '中证环保',
'sh000905': '中证500',
'sh000991': '全指医药',
'sh513030': '德国30',
'sh513050': '中概互联',
'sz159934': '黄金ETF',
'sz162411': '华宝油气',
'sz399006': '创业板指',
'sz399396': '国证食品',
'sz399812': '养老产业',
'sz399932': '中证消费',
'sz399967': '中证军工',
'sz399971': '中证传媒',
'sz399975': '证券公司'
}
url = 'http://hq.sinajs.cn/list=' + ','.join([key for key in stock_list])
html_model ="""<div style="margin: 3px; overflow: hidden;">
<div style="float:left;position:relative;">
<div style="color:white;height:25px;width:75px;text-align: center;padding-left:2px;padding-right:2px;margin-right:2px;z-index:11;
background-color:#e2d5d5;color:#000000;font-weight: bold;float:left;">{name}</div> <!-- 名称 -->
<div style="color:white;height:25px;width:55px;text-align: center;padding-left:2px;padding-right:2px;margin-right:2px;z-index:11;
background-color:#e2d5d5;color:#000000;font-weight: bold;float:left;">{code}</div> <!-- 代码 -->
<div style="color:white;height:25px;width:75px;text-align: right;padding-left:2px;padding-right:2px;margin-right:2px;z-index:11;
background-color:#adadad;color: #353030;float:left;">{yesterday_price}</div> <!-- 昨日收盘价 -->
<div style="color:white;height:25px;width:75px;text-align: right;padding-left:2px;padding-right:2px;margin-right:2px;z-index:11;
background-color:#6666ff;float:left;">{current_price}</div> <!-- 现价 -->
<!-- 前5个糅合一起的区间 -->
</div>
<div style="float:left;position:relative;">
<div style="color:{color};width:56px;height:25px;text-align: center;padding-left:2px;margin-right:2px;z-index:1;
background-color:{flash_color};position:absolute;font-weight: bold;border: 1.2px solid {color};">{up_p_r}</div>
<!-- 涨跌幅color up -->
<div style="margin-left:60px;width:{total_width}px;height:25px;z-index:1;
background-image:linear-gradient(to right, #6666ff, white, {color});position:absolute;">
<div style="float:left;line-height:25px;text-align:left;font-size:11px;color:white;">{min_p_r}</div>
<div style="float:right;line-height:25px;text-align:right;font-size:11px;color:white;">{max_p_r}</div>
</div> <!-- 涨跌停展示区的宽度,内部显示最低价和最高价 width,min,max -->
<div style="margin-left:{middle_px}px;width:2px;height:25px;z-index:4;
background-color:#FFFFFF;position:absolute;"></div> <!-- 中间0%%处的小白条 -->
<div style="margin-left:{bar_px}px;width:{bar_width}px;height:25px;z-index:2;
background-color:#E7D9A0;position:absolute;"></div> <!-- 最高价和最低价的区间展示bar_left, bar_width -->
<div style="margin-left:{recent_px[0]}px;width:5px;height:31px;z-index:3;
background-color:#ebeb0e;position:absolute;top:-3px;opacity:0.2;border-right:2px outset blue;"></div>
<div style="margin-left:{recent_px[1]}px;width:5px;height:31px;z-index:3;
background-color:#ebeb0e;position:absolute;top:-3px;opacity:0.4;border-right:2px outset blue;"></div>
<div style="margin-left:{recent_px[2]}px;width:5px;height:31px;z-index:3;
background-color:#ebeb0e;position:absolute;top:-3px;opacity:0.5;border-right:2px outset blue;"></div>
<div style="margin-left:{recent_px[3]}px;width:5px;height:31px;z-index:3;
background-color:#ebeb0e;position:absolute;top:-3px;opacity:0.7;border-right:2px outset blue;"></div>
<div style="margin-left:{recent_px[4]}px;width:5px;height:31px;z-index:5;
background-color:#ebeb0e;position:absolute;top:-3px;opacity:1;border-right:2px outset blue;"></div>
<!-- 位置坐标 5个 -->
</div>
</div>""" # 需要参数如下
# name, code, yesterday_price, current_price, up(p/r), total_width, min,max(p/r), bar_start, bar_width
# px_recent[0], px_recent[1], px_recent[2], px_recent[3], px_recent[4],
# 需要区分rate/price: up,min,max, 所有结果均为string
stock_cache = { } # 用于存储当前获得所有信息的字典
for key in stock_list:
stock_cache[key] = {'info':[], 'price_info': [], 'rate_info': [], 'recent_rates': [0, 0, 0, 0, 0]}
def update_stock(mark_rate = 1):
r_list = requests.get(url).content.decode('gbk').split('\n')[:-1] # 除去最终的空行
# stock_all_dic = {} # 用于更新最终的股票代码和名字的字典
for r in r_list:
r_sp = r[11:-1].split('=')
# 除去头尾部信息,根据 '=' 切割得到代码和结果,再根据 ',' 切割得到结果
code = r_sp[0]
result_l = r_sp[1][1:-1].split(',') # 得到最终的每份信息
# print(result)
refer = {} # 根据不同的代码选择不同的解构字典
s_code = ''
if code[:2] == 'rt':
refer = rt_refer
s_code = code.split('_')[-1]
elif code[:2] == 'gb':
refer = gb_refer
s_code = code.split('_')[-1]
else:
refer = a_refer
s_code = code[-6:]
name = result_l[refer['name']]
yesterday_price = float(result_l[refer['yesterday_price']])
current_price = float(result_l[refer['current_price']])
price_change = current_price - yesterday_price
change_rate = price_change / yesterday_price
min_price = float(result_l[refer['min']])
max_price = float(result_l[refer['max']])
min_rate = (min_price - yesterday_price) / yesterday_price
max_rate = (max_price - yesterday_price) / yesterday_price
stock_cache[code]['info'] = [name, s_code,
format_point(current_price, yesterday_price),
format_point(current_price, current_price)
]
stock_cache[code]['price_info'] = [
format_point(current_price, price_change),
format_point(current_price, min_price),
format_point(current_price, max_price),
]
stock_cache[code]['rate_info'] = [change_rate,
min_rate,
max_rate,
]
if mark_rate: # 如果需要打点,则栈式输入
del stock_cache[code]['recent_rates'][0]
stock_cache[code]['recent_rates'].append(change_rate)
# result_dic[code] = result_l[]
# stock_all_dic[code] = result_l[refer['name']] # 用于更新所有股票的name内容
# pprint(stock_all_dic)
def format_point(price, float_num):
return '%.2f' % float_num if price > 100 else '%.3f' % float_num
count = 0
start_px = 60
px_for_10percent = 200
while(True):
output_price = ''
output_rate = ''
output_price_flash = ''
output_rate_flash = ''
# sortby = up_rate_reversed
count = 0 if count == 30 else count + 1 # 20s 更新一次
update_stock(mark_rate = 1 if count == 1 else 0)
stock_cache_tuple = sorted(stock_cache.items(), key = lambda item: float(item[1]['rate_info'][0]))
# 得到元组,每一个地方的第一位为code,后面是一个dict
price_format_list = []
rate_format_list = []
for stock in stock_cache_tuple:
color = 'green' if stock[1]['rate_info'][0] < 0 else 'red'
delta_rate = stock[1]['recent_rates'][-1] - stock[1]['recent_rates'][-2]
flash_color = ''
if delta_rate > 0.0001:
flash_color = 'red'
elif delta_rate < -0.0001:
flash_color = 'green'
else:
flash_color = 'white'
format_dic = {'name': stock[1]['info'][0],
'code' : stock[1]['info'][1],
'yesterday_price' : stock[1]['info'][2],
'current_price' : stock[1]['info'][3],
'total_width': '%d'%(2*px_for_10percent),
'middle_px': '%d' % (start_px + px_for_10percent),
'bar_px': '%.0f' % (start_px + (1+stock[1]['rate_info'][1]*10) * px_for_10percent),
'bar_width': '%.0f' % ((stock[1]['rate_info'][2] - stock[1]['rate_info'][1]) *10 * px_for_10percent),
'recent_px': ['%.0f' % (start_px + (1 + i * 10) * px_for_10percent) for i in stock[1]['recent_rates']],
'color': color,
'flash_color': flash_color,
'up_p_r': stock[1]['price_info'][0],
'min_p_r': stock[1]['price_info'][1],
'max_p_r': stock[1]['price_info'][2],
}
output_price_flash += html_model.format(**format_dic)
format_dic['flash_color'] = 'white'
output_price += html_model.format(**format_dic)
format_dic.update({
'flash_color': flash_color,
'up_p_r': '%.2f%%'%(100*stock[1]['rate_info'][0]),
'min_p_r': '%.2f%%'%(stock[1]['rate_info'][1]*100),
'max_p_r': '%.2f%%'%(stock[1]['rate_info'][2]*100),
})
output_rate_flash += html_model.format(**format_dic)
format_dic['flash_color'] = 'white'
output_rate += html_model.format(**format_dic)
for i in range(2):
clear_output()
display(HTML(output_price_flash))
time.sleep(0.3)
clear_output()
display(HTML(output_price))
time.sleep(2)
clear_output()
display(HTML(output_rate_flash))
time.sleep(0.3)
clear_output()
display(HTML(output_rate))
time.sleep(3)
stock_cache
print(output_rate_flash)
# 获取数据的验证块
s = 'HSCEI,恒生中国企业指数,11771.591,11815.000,11774.591,11694.320,11737.960,-77.040,-0.650,0.000,0.000,13554505.728,0,0.000,0.000,12589.530,9761.600,2019/04/10,12:05:00,,,,,,'
sp = s.split(',')
for key in rt_refer:
print('%-15s----|%s'% (key, sp[rt_refer[key]]))
| 0.138316 | 0.219944 |
```
import graphlab as gl
```
# Get the data set of product reviews
```
products = gl.SFrame(r'C:\Users\Iskndraniii73\MachineLearning\Classification_Week3\BabyProducts\B_amazon_baby.sframe/')
```
# Explore the data
```
products.head()
```
# Add word count vector
```
products['Word_Count'] = gl.text_analytics.count_words(products['review'])
products.head()
```
# Explore the data further
```
gl.canvas.set_target('ipynb')
products['name'].show()
```
# Separate the most popular product reviews (Giraffe)
```
giraffe_reviews = products[ products['name'] == 'Vulli Sophie the Giraffe Teether' ]
len(giraffe_reviews)
giraffe_reviews['rating'].show(view = 'Categorical')
```
# Explore the rating of all the data
```
products['rating'].show('Categorical')
# remove Neutral (3*)
products = products [products['rating'] != 3]
products.show()
products['rating'].show('Categorical')
# Define Positive (>=4) and Negative (<=2)
products['sentiment'] = products['rating'] >= 4
products.head()
```
# Aggregate the products with their review counts
```
agg_prod_rev = products.groupby('name',operations={'count':gl.aggregate.COUNT()}).sort('count',ascending = False)
agg_prod_rev
```
# Training the Classifier Model
## Split the data into training & testing
```
train_data , test_data = products.random_split(0.8,seed = 0)
```
## Build a model, train it on the training data, and validate it on the testing data
```
review_classifier = gl.logistic_classifier.create(train_data,target='sentiment',features=['Word_Count'],validation_set=test_data,max_iterations=11)
```
# Predicting the sentiment of the product reviews
```
products['predicted_sentiment'] = review_classifier.predict(products,output_type='probability')
```
## Remove neutral reviews and add sentiment to the giraffe_reviews data
```
giraffe_reviews = giraffe_reviews[ giraffe_reviews['rating']!=3 ]
giraffe_reviews['sentiment'] = giraffe_reviews['rating'] >= 4
```
## Predicting sentiment for giraffe_reviews
```
giraffe_reviews['predicted_sentiment'] = review_classifier.predict(giraffe_reviews,output_type='probability')
giraffe_reviews = giraffe_reviews.sort('predicted_sentiment',ascending=False)
```
# Evaluate the Model (this should be done before using the model to predict)
```
review_classifier.evaluate(test_data,metric='roc_curve')
review_classifier.show(view='Evaluation')
giraffe_reviews[0]['review']
giraffe_reviews[1]['review']
giraffe_reviews[-1]['review']
giraffe_reviews[-2]['review']
giraffe_reviews[500]['review']
giraffe_reviews[500]['predicted_sentiment']
giraffe_reviews[500]['rating']
```
|
github_jupyter
|
import graphlab as gl
products = gl.SFrame(r'C:\Users\Iskndraniii73\MachineLearning\Classification_Week3\BabyProducts\B_amazon_baby.sframe/')
products.head()
products['Word_Count'] = gl.text_analytics.count_words(products['review'])
products.head()
gl.canvas.set_target('ipynb')
products['name'].show()
giraffe_reviews = products[ products['name'] == 'Vulli Sophie the Giraffe Teether' ]
len(giraffe_reviews)
giraffe_reviews['rating'].show(view = 'Categorical')
products['rating'].show('Categorical')
# remove Neutral (3*)
products = products [products['rating'] != 3]
products.show()
products['rating'].show('Categorical')
# Define Positive (>=4) and Negative (<=2)
products['sentiment'] = products['rating'] >= 4
products.head()
agg_prod_rev = products.groupby('name',operations={'count':gl.aggregate.COUNT()}).sort('count',ascending = False)
agg_prod_rev
train_data , test_data = products.random_split(0.8,seed = 0)
review_classifier = gl.logistic_classifier.create(train_data,target='sentiment',features=['Word_Count'],validation_set=test_data,max_iterations=11)
products['predicted_sentiment'] = review_classifier.predict(products,output_type='probability')
giraffe_reviews = giraffe_reviews[ giraffe_reviews['rating']!=3 ]
giraffe_reviews['sentiment'] = giraffe_reviews['rating'] >= 4
giraffe_reviews['predicted_sentiment'] = review_classifier.predict(giraffe_reviews,output_type='probability')
giraffe_reviews = giraffe_reviews.sort('predicted_sentiment',ascending=False)
review_classifier.evaluate(test_data,metric='roc_curve')
review_classifier.show(view='Evaluation')
giraffe_reviews[0]['review']
giraffe_reviews[1]['review']
giraffe_reviews[-1]['review']
giraffe_reviews[-2]['review']
giraffe_reviews[500]['review']
giraffe_reviews[500]['predicted_sentiment']
giraffe_reviews[500]['rating']
| 0.252108 | 0.897695 |
# Visualize Parameter Space with Exhaustive Optimizer
ITK optimizers are commonly used to select suitable values for various parameters, such as choosing how to transform a moving image to register with a fixed image. A variety of image metrics and transform classes are available to guide the optimization process, each of which may employ parameters unique to its own implementation. It is often useful to visualize how changes in parameters will impact the metric value and the optimization process.
The `ExhaustiveOptimizer` class exists to evaluate a metric over a windowed parameter space of fixed step size. This example shows how to use `ExhaustiveOptimizerv4` with the `MeanSquaresImageToImageMetricv4` metric and `Euler2DTransform` transform to survey performance over a parameter space and visualize the results with `matplotlib`.
```
import os
import sys
import itertools
from math import pi, sin, cos, sqrt
from urllib.request import urlretrieve
import matplotlib.pyplot as plt
import numpy as np
import itk
from itkwidgets import view, compare, checkerboard, cm
module_path = os.path.abspath(os.path.join('.'))
if module_path not in sys.path:
sys.path.append(module_path)
```
### Get sample data to register
In this example we seek to transform an image of an orange to overlay on the image of an apple. We will eventually use the `MeanSquaresImageToImageMetricv4` class to inform the optimizer about how the two images are related given the current parameter state. We can visualize the fixed and moving images with `ITKWidgets`.
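Conceptually, the mean-squares metric is the average squared intensity difference over the compared region; the sketch below shows only that idea (ITK's implementation additionally handles sampling, interpolation, and the transform):
```
import numpy as np

# Toy version of a mean-squares image metric: lower is better, 0 means identical intensities.
def mean_squares(fixed, moving):
    fixed = np.asarray(fixed, dtype=float)
    moving = np.asarray(moving, dtype=float)
    return np.mean((fixed - moving) ** 2)

mean_squares([[1, 2], [3, 4]], [[1, 2], [3, 6]])  # 1.0
```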
```
fixed_img_path = 'apple.jpg'
moving_img_path = 'orange.jpg'
if not os.path.exists(fixed_img_path):
url = 'https://data.kitware.com/api/v1/file/5cad1aec8d777f072b181870/download'
urlretrieve(url, fixed_img_path)
if not os.path.exists(moving_img_path):
url = 'https://data.kitware.com/api/v1/file/5cad1aed8d777f072b181879/download'
urlretrieve(url, moving_img_path)
fixed_img = itk.imread(fixed_img_path, itk.F)
moving_img = itk.imread(moving_img_path, itk.F)
compare(fixed_img, moving_img, ui_collapsed=True)
```
### Define and Initialize the Transform
In this example we will use an `Euler2DTransform` instance to represent how the moving image will be sampled from the fixed image. The [Euler2DTransform](https://itk.org/Doxygen/html/classitk_1_1Euler2DTransform.html) documentation shows that the transform has three parameters, first a rotation around a fixed center, followed by a 2D translation. We use a `CenteredTransformInitializer` to estimate what may be a "good" fixed center point at which to define the transform prior to conducting optimization.
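As a quick sanity check of that parameterization (a standalone NumPy sketch, not ITK's implementation), a point is rotated about the fixed center and then translated:
```
import numpy as np

# p' = R(angle) @ (p - center) + center + translation
def euler2d(point, angle, translation, center):
    c, s = np.cos(angle), np.sin(angle)
    rotation = np.array([[c, -s], [s, c]])
    return rotation @ (np.asarray(point) - center) + center + np.asarray(translation)

euler2d([10.0, 0.0], angle=np.pi / 2, translation=[5.0, 0.0], center=[0.0, 0.0])  # array([ 5., 10.])
```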
```
dimension = 2
FixedImageType = itk.Image[itk.F, dimension]
MovingImageType = itk.Image[itk.F, dimension]
TransformType = itk.Euler2DTransform[itk.D]
OptimizerType = itk.ExhaustiveOptimizerv4[itk.D]
MetricType = itk.MeanSquaresImageToImageMetricv4[FixedImageType, MovingImageType]
TransformInitializerType = \
itk.CenteredTransformInitializer[itk.MatrixOffsetTransformBase[itk.D,2,2],
FixedImageType, MovingImageType]
RegistrationType = itk.ImageRegistrationMethodv4[FixedImageType,MovingImageType]
transform = TransformType.New()
initializer = TransformInitializerType.New(
Transform=transform,
FixedImage=fixed_img,
MovingImage=moving_img,
)
initializer.InitializeTransform()
```
### Run Optimization
We rely on the `ExhaustiveOptimizerv4` class to visualize the parameter space. For this example we choose to visualize the metric value over the first two parameters only, so we set the number of steps in the third dimension to zero. The angle and translation parameters are measured on different scales, so we set the optimizer to take steps of reasonable size along each dimension. An observer is used to save the results of each step.
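Assuming the exhaustive search walks every combination of ±N steps of the given size around the initial parameters, the small sketch below gives a feel for the grid and how many metric evaluations to expect:
```
import numpy as np

# Offsets explored per parameter: steps of `scale` from -N to +N around the initial value.
steps = [10, 10, 0]
scales = [0.1, 1.0, 1.0]
offsets = [scale * np.arange(-n, n + 1) for n, scale in zip(steps, scales)]
print([len(o) for o in offsets])                 # [21, 21, 1]
print(int(np.prod([len(o) for o in offsets])))   # 441 metric evaluations
```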
```
metric_results = dict()
metric = MetricType.New()
optimizer = OptimizerType.New()
optimizer.SetNumberOfSteps([10,10,0])
scales = optimizer.GetScales()
scales.SetSize(3)
scales.SetElement(0, 0.1)
scales.SetElement(1, 1.0)
scales.SetElement(2, 1.0)
optimizer.SetScales(scales)
def collect_metric_results():
metric_results[tuple(optimizer.GetCurrentPosition())] = \
optimizer.GetCurrentValue()
optimizer.AddObserver(itk.IterationEvent(), collect_metric_results)
registration = RegistrationType.New(Metric=metric,
Optimizer=optimizer,
FixedImage=fixed_img,
MovingImage=moving_img,
InitialTransform=transform,
NumberOfLevels=1)
registration.Update()
print(f'MinimumMetricValue: {optimizer.GetMinimumMetricValue():.4f}\t'
f'MaximumMetricValue: {optimizer.GetMaximumMetricValue():.4f}\n'
f'MinimumMetricValuePosition: {list(optimizer.GetMinimumMetricValuePosition())}\t'
f'MaximumMetricValuePosition: {list(optimizer.GetMaximumMetricValuePosition())}\n'
f'StopConditionDescription: {optimizer.GetStopConditionDescription()}\t')
```
### Visualize Parameter Space as 2D Scatter Plot
We can use `matplotlib` to view the results of each discrete optimizer step as a 2D scatter plot. In this case the horizontal axis represents the angle that the image is rotated about its fixed center in radians and the vertical axis represents the horizontal distance that the image is translated after rotation. The value of `MeanSquaresImageToImageMetricv4` for each transformation is represented via color gradients. We can also directly plot optimizer extrema for visualization.
```
fig = plt.figure()
ax = plt.axes()
ax.scatter([x[0] for x in metric_results.keys()],
[x[1] for x in metric_results.keys()],
c=list(metric_results.values()),
cmap='coolwarm');
ax.plot(optimizer.GetMinimumMetricValuePosition().GetElement(0),
optimizer.GetMinimumMetricValuePosition().GetElement(1),
'wx')
ax.plot(optimizer.GetMaximumMetricValuePosition().GetElement(0),
optimizer.GetMaximumMetricValuePosition().GetElement(1),
'k^')
```
### Visualize Parameter Space as 3D Surface
We can also plot results in 3D space with `numpy` and `matplotlib`. In this example we use `np.meshgrid` to define the parameter domain and define corresponding metric results as an accompanying `numpy` array. The resulting graph can be used to visualize gradients and visually identify extrema.
```
x_unique = list(set(x for (x,y,_) in metric_results.keys()))
y_unique = list(set(y for (x,y,_) in metric_results.keys()))
x_unique.sort()
y_unique.sort()
X, Y = np.meshgrid(x_unique, y_unique)
Z = np.array([[metric_results[(x,y,0)] for x in x_unique] for y in y_unique])
np.shape(Z)
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot_surface(X,Y,Z,cmap='coolwarm')
```
### Clean up
```
os.remove(fixed_img_path)
os.remove(moving_img_path)
```
# Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
## Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
* A really good [conceptual overview](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/) of word2vec from Chris McCormick
* [First word2vec paper](https://arxiv.org/pdf/1301.3781.pdf) from Mikolov et al.
* [NIPS paper](http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) with improvements for word2vec also from Mikolov et al.
* An [implementation of word2vec](http://www.thushv.com/natural_language_processing/word2vec-part-1-nlp-with-deep-learning-with-tensorflow-skip-gram/) from Thushan Ganegedara
* TensorFlow [word2vec tutorial](https://www.tensorflow.org/tutorials/word2vec)
## Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.

To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are the embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.

Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an **embedding lookup** and the number of hidden units is the **embedding dimension**.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
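To make the "lookup is just a shortcut" point concrete, here is a small sketch with made-up numbers showing that multiplying a one-hot vector by the weight matrix gives the same result as plain row indexing:
```
import numpy as np

vocab_size, embed_dim = 5, 3
embedding = np.random.rand(vocab_size, embed_dim)  # the "lookup table" (weight matrix)

word_idx = 2                      # integer id of some word
one_hot = np.zeros(vocab_size)
one_hot[word_idx] = 1

via_matmul = one_hot @ embedding  # matrix multiplication with a one-hot vector...
via_lookup = embedding[word_idx]  # ...is identical to grabbing that row directly

print(np.allclose(via_matmul, via_lookup))  # True
```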
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called **Word2Vec** uses the embedding layer to find vector representations of words that contain semantic meaning.
## Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red", will have vectors near each other. There are two architectures for implementing word2vec: CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
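For instance, with a fixed window of size 2, the (input, target) training pairs for a made-up sentence would look like the sketch below (the batching code later in the notebook samples a random window size instead):
```
# Illustrative (input, target) pairs with a fixed window of 2 around each word
sentence = "the quick brown fox jumps over the lazy dog".split()
window = 2
pairs = []
for i, word in enumerate(sentence):
    context = sentence[max(0, i - window):i] + sentence[i + 1:i + window + 1]
    pairs.extend((word, target) for target in context)
print(pairs[:5])
# [('the', 'quick'), ('the', 'brown'), ('quick', 'the'), ('quick', 'brown'), ('quick', 'fox')]
```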
First up, importing packages.
```
import time
import numpy as np
import tensorflow as tf
import utils
```
Load the [text8 dataset](http://mattmahoney.net/dc/textdata.html), a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the `data` folder. Then you can extract it and delete the archive file to save storage space.
```
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
```
C:\Users\spark\Desktop\text8>head text8
anarchism originated as a term of abuse first used against early working class radicals including the diggers of the english revolution and the sans culottes of the french revolution whilst the term is still used in a pejorative way to describe any act that used violent means to destroy the organization of society ... (output truncated; the file is one long line of lowercase text with no punctuation)
## Preprocessing
Here I'm fixing up the text to make training easier. This comes from the `utils` module I wrote. The `preprocess` function converts any punctuation into tokens, so a period is changed to ` <PERIOD> `. In this data set, there aren't any periods, but it will help with other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
```
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
```
And here I'm creating dictionaries to convert words to integers and back again, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0, the next most frequent is 1, and so on. The words are converted to integers and stored in the list `int_words`.
```
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
```
## Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
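For intuition, with the threshold $t = 10^{-5}$ used in the solution below, a very frequent word that makes up 1% of the corpus ($f(w_i) = 0.01$) is discarded with probability

$$ P(w_i) = 1 - \sqrt{\frac{10^{-5}}{0.01}} \approx 0.97, $$

while any word with $f(w_i) \le t$ is never discarded.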
I'm going to leave this up to you as an exercise. Check out my solution to see how I did it.
> **Exercise:** Implement subsampling for the words in `int_words`. That is, go through `int_words` and discard each word with the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to `train_words`.
```
from collections import Counter
import random
threshold = 1e-5
word_counts = Counter(int_words)
total_count = len(int_words)
freqs = {word: count/total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts}
train_words = [word for word in int_words if random.random() < (1 - p_drop[word])]
```
## Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From [Mikolov et al.](https://arxiv.org/pdf/1301.3781.pdf):
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
> **Exercise:** Implement a function `get_target` that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
```
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
R = np.random.randint(1, window_size+1)
start = idx - R if (idx - R) > 0 else 0
stop = idx + R
target_words = set(words[start:idx] + words[idx+1:stop+1])
return list(target_words)
```
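As a quick sanity check, calling the function on a hypothetical toy list of word ids should return a randomly sized window around the chosen index:
```
# Hypothetical sanity check on a toy list of word ids
toy_words = list(range(10))
print(get_target(toy_words, idx=5, window_size=3))
# e.g. [2, 3, 4, 6, 7, 8] when R happens to be 3
```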
Here's a function that returns batches for our network. The idea is that it grabs `batch_size` words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function, by the way, which helps save memory.
```
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
```
## Building the graph
From [Chris McCormick's blog](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/), we can see the general structure of our network.

The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the `inputs` and `labels` placeholders like normal.
> **Exercise:** Assign `inputs` and `labels` using `tf.placeholder`. We're going to be passing in integers, so set the data types to `tf.int32`. The batches we're passing in will have varying sizes, so set the batch sizes to [`None`]. To make things work later, you'll need to set the second dimension of `labels` to `None` or `1`.
```
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32, [None], name='inputs')
labels = tf.placeholder(tf.int32, [None, None], name='labels')
```
## Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
> **Exercise:** Tensorflow provides a convenient function [`tf.nn.embedding_lookup`](https://www.tensorflow.org/api_docs/python/tf/nn/embedding_lookup) that does this lookup for us. You pass in the embedding matrix and a tensor of integers, and it returns the rows of the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use `tf.nn.embedding_lookup` to get the embedding tensors. For the embedding matrix, I suggest you initialize it with uniform random numbers between -1 and 1 using [tf.random_uniform](https://www.tensorflow.org/api_docs/python/tf/random_uniform).
```
n_vocab = len(int_to_vocab)
n_embedding = 200 # Number of embedding features
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs)
```
## Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called ["negative sampling"](http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf). Tensorflow has a convenient function to do this, [`tf.nn.sampled_softmax_loss`](https://www.tensorflow.org/api_docs/python/tf/nn/sampled_softmax_loss).
> **Exercise:** Below, create weights and biases for the softmax layer. Then, use [`tf.nn.sampled_softmax_loss`](https://www.tensorflow.org/api_docs/python/tf/nn/sampled_softmax_loss) to calculate the loss. Be sure to read the documentation to figure out how it works.
```
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(n_vocab))
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b,
labels, embed,
n_sampled, n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
```
## Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and a few uncommon words. Then, we'll print out the words closest to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
```
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keepdims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
```
Restore the trained network if you need to:
```
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
```
## Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out [this post from Christopher Olah](http://colah.github.io/posts/2014-10-Visualizing-MNIST/) to learn more about T-SNE and other ways to visualize high-dimensional data.
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
```
```
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
from azureml.core import Model
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
model = ws.models['driver_model.pkl']
print(model.name, 'version', model.version)
import os
folder_name = 'driver-service'
# Create a folder for the web service files
experiment_folder = './' + folder_name
os.makedirs(folder_name, exist_ok=True)
print(folder_name, 'folder created.')
from azureml.core.conda_dependencies import CondaDependencies
# Add the dependencies for our model (AzureML defaults are already included)
myenv = CondaDependencies()
myenv.add_conda_package("scikit-learn")
myenv.add_conda_package("lightgbm")
# Save the environment config as a .yml file
env_file = folder_name + "/driver_env.yml"
with open(env_file,"w") as f:
f.write(myenv.serialize_to_string())
print("Saved dependency info in", env_file)
# Print the .yml file
with open(env_file,"r") as f:
print(f.read())
from azureml.core.webservice import AciWebservice
from azureml.core.model import InferenceConfig
# Configure the scoring environment
inference_config = InferenceConfig(runtime= "python",
source_directory = folder_name,
entry_script="score.py",
conda_file="driver_env.yml")
deployment_config = AciWebservice.deploy_configuration(cpu_cores = 1, memory_gb = 1)
service_name = "driver-service"
service = Model.deploy(ws, service_name, [model], inference_config, deployment_config)
service.wait_for_deployment(True)
print(service.state)
print(service.state)
print(service.get_logs())
for webservice_name in ws.webservices:
print(webservice_name)
import json
# This time our input is an array of two feature arrays
x_new = [[0,1,8,1,0,0,1,0,0,0,0,0,0,0,12,1,0,0,0.5,0.3,0.610327781,7,1,-1,0,-1,1,1,1,2,1,65,1,0.316227766,0.669556409,0.352136337,3.464101615,0.1,0.8,0.6,1,1,6,3,6,2,9,1,1,1,12,0,1,1,0,0,1],
[4,2,5,1,0,0,0,0,1,0,0,0,0,0,5,1,0,0,0.9,0.5,0.771362431,4,1,-1,0,0,11,1,1,0,1,103,1,0.316227766,0.60632002,0.358329457,2.828427125,0.4,0.5,0.4,3,3,8,4,10,2,7,2,0,3,10,0,0,1,1,0,1]]
# Convert the array or arrays to a serializable list in a JSON document
input_json = json.dumps({"data": x_new})
# Call the web service, passing the input data
predictions = service.run(input_data = input_json)
# Get the predicted classes.
predicted_classes = predictions['result']
for i in range(len(x_new)):
print ("Driver {}".format(x_new[i]), predicted_classes[i] )
service.delete()
```
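The `InferenceConfig` above points to a `score.py` entry script in the `driver-service` folder that isn't shown here. A minimal sketch of what such a script typically looks like, assuming the registered model was serialized with joblib and that the service should return a `result` list as the client code above expects, might be:
```
# score.py (hypothetical sketch of the entry script referenced by the InferenceConfig)
import json
import joblib
import numpy as np
from azureml.core.model import Model

def init():
    # Called once when the container starts: locate and load the registered model
    global model
    model_path = Model.get_model_path('driver_model.pkl')
    model = joblib.load(model_path)

def run(raw_data):
    # Called per request: parse the JSON payload, score it, and return the predictions
    data = np.array(json.loads(raw_data)['data'])
    predictions = model.predict(data)
    return {'result': predictions.tolist()}
```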
# Lesson 0: Welcome to Jupyter Notebooks!
If you want to learn how to use this tool you've come to the right place. This article will teach you all you need to know to use Jupyter Notebooks effectively. You only need to go through Section 1 to learn the basics and you can go into Section 2 if you want to further increase your productivity.
You might be reading this tutorial in a web page (maybe GitHub or the course's webpage). We strongly suggest reading this tutorial in a (yes, you guessed it) Jupyter Notebook. This way you will be able to actually *try* the different commands we will introduce here.
## Redistribution Notice
This Jupyter Notebook tutorial was shamelessly copied from the [fast.ai](https://www.fast.ai/) Course v3. The original can be found in the [fast.ai GitHub repository](https://github.com/fastai/course-v3/blob/master/nbs/dl1/00_notebook_tutorial.ipynb). Some changes were made to the document, including updated paths to images, removal of the "Cell Tricks" section, and the addition of this notice. Also, because this course will not cover neural networks, the examples referencing the fast.ai code have been adjusted or removed. Full credit for the production of this material goes to the fast.ai authors.
## Section 1: Need to Know
### Introduction
Let's build up from the basics, what is a Jupyter Notebook? Well, you are reading one. It is a document made of cells. You can write like I am writing now (markdown cells) or you can perform calculations in Python (code cells) and run them like this:
```
1+1
```
Cool huh? This combination of prose and code makes Jupyter Notebook ideal for experimentation: we can see the rationale for each experiment, the code and the results in one comprehensive document. In fast.ai, each lesson is documented in a notebook and you can later use that notebook to experiment yourself.
Other renowned institutions in academia and industry use Jupyter Notebook: Google, Microsoft, IBM, Bloomberg, Berkeley and NASA among others. Even Nobel-winning economists [use Jupyter Notebooks](https://paulromer.net/jupyter-mathematica-and-the-future-of-the-research-paper/) for their experiments, and some suggest that Jupyter Notebooks will be the [new format for research papers](https://www.theatlantic.com/science/archive/2018/04/the-scientific-paper-is-obsolete/556676/).
### Writing
A type of cell in which you can write like this is called _Markdown_. [_Markdown_](https://en.wikipedia.org/wiki/Markdown) is a very popular markup language. To specify that a cell is _Markdown_ you need to click in the drop-down menu in the toolbar and select _Markdown_.
Click on the '+' button on the left and select _Markdown_ from the toolbar.
Now you can type your first _Markdown_ cell. Write 'My first markdown cell' and press run.

You should see something like this:
My first markdown cell
Now try making your first _Code_ cell: follow the same steps as before but don't change the cell type (when you add a cell its default type is _Code_). Type something like 3/2. You should see '1.5' as output.
```
3/2
```
### Modes
If you made a mistake in your *Markdown* cell and you have already run it, you will notice that you cannot edit it just by clicking on it. This is because you are in **Command Mode**. Jupyter Notebooks have two distinct modes:
1. **Edit Mode**: Allows you to edit a cell's content.
2. **Command Mode**: Allows you to edit the notebook as a whole and use keyboard shortcuts but not edit a cell's content.
You can toggle between these two by either pressing <kbd>ESC</kbd> and <kbd>Enter</kbd> or clicking outside a cell or inside it (you need to double click if it's a Markdown cell). You can always tell which mode you're in, since the current cell has a green border in **Edit Mode** and a blue border in **Command Mode**. Try it!
### Other Important Considerations
1. Your notebook is autosaved every 120 seconds. If you want to manually save it you can just press the save button on the upper left corner or press <kbd>s</kbd> in **Command Mode**.

2. To know if your kernel is computing or not, you can check the dot in the upper right corner. If the dot is full, it means that the kernel is working; if not, it is idle. You can hover the mouse over it to see the state of the kernel displayed.

3. There are a couple of shortcuts you must know about which we use **all** the time (always in **Command Mode**). These are:
<kbd>Shift</kbd>+<kbd>Enter</kbd>: Runs the code or markdown on a cell
<kbd>Up Arrow</kbd>+<kbd>Down Arrow</kbd>: Toggle across cells
<kbd>b</kbd>: Create new cell
<kbd>0</kbd>+<kbd>0</kbd>: Reset Kernel
You can find more shortcuts in the Shortcuts section below.
4. You may need to use a terminal in a Jupyter Notebook environment (for example to git pull on a repository). That is very easy to do, just press 'New' in your Home directory and 'Terminal'. Don't know how to use the Terminal? We made a tutorial for that as well. You can find it [here](https://course.fast.ai/terminal_tutorial.html).

That's it. This is all you need to know to use Jupyter Notebooks. That said, we have more tips and tricks below ↓↓↓
## Section 2: Going deeper
### Markdown formatting
#### Italics, Bold, Inline, Blockquotes and Links
The five most important concepts to format your text appropriately when using markdown are the following (the short code sketch after this list shows one way to render them from a code cell):
1. *Italics*: Surround your text with '\_' or '\*'
2. **Bold**: Surround your text with '\__' or '\**'
3. `inline`: Surround your text with '\`'
4. > blockquote: Place '\>' before your text.
5. [Links](https://course.fast.ai/): Surround the text you want to link with '\[\]' and place the link adjacent to the text, surrounded with '()'
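If you prefer experimenting from a _Code_ cell, the same markup can also be rendered programmatically. This is just an optional sketch using IPython's display utilities (not required for the rest of the tutorial):
```
from IPython.display import Markdown
# Returning a Markdown object as the last expression of a cell renders it
Markdown("*Italics*, **Bold**, `inline` and a [link](https://course.fast.ai/)\n\n> and a blockquote")
```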
#### Headings
Notice that including a hashtag before the text in a markdown cell makes the text a heading. The number of hashtags you include will determine the priority of the header ('#' is level one, '##' is level two, '###' is level three and '####' is level four). We will add three new cells with the '+' button on the left to see how every level of heading looks.
Double click on some headings and find out what level they are!
#### Lists
There are three types of lists in markdown.
Ordered list:
1. Step 1
2. Step 1B
3. Step 3
Unordered list
* learning rate
* cycle length
* weight decay
Task list
- [x] Learn Jupyter Notebooks
- [x] Writing
- [x] Modes
- [x] Other Considerations
- [ ] Change the world
Double click on each to see how they are built!
### Code Capabilities
**Code** cells are different than **Markdown** cells in that they have an output cell. This means that we can _keep_ the results of our code within the notebook and share them. Let's say we want to show a graph that explains the result of an experiment. We can just run the necessary cells and save the notebook. The output will be there when we open it again! Try it out by running the next four cells.
```
# Import necessary libraries
import matplotlib.pyplot as plt
a = 1
b = a + 1
c = b + a + 1
d = c + b + a + 1
a, b, c ,d
plt.plot([a,b,c,d])
plt.show()
```
### Running the app locally
You may be running Jupyter Notebook from an interactive coding environment like Gradient, Sagemaker or Salamander. You can also run a Jupyter Notebook server from your local computer. What's more, if you have installed Anaconda you don't even need to install Jupyter (if not, just `pip install jupyter`).
You just need to run `jupyter notebook` in your terminal. Remember to run it from a folder that contains all the folders/files you will want to access. You will be able to open, view and edit files located within the directory in which you run this command but not files in parent directories.
If a browser tab does not open automatically once you run the command, you should CTRL+CLICK the link starting with 'http://localhost:' and this will open a new tab in your default browser.
### Creating a notebook
Click on 'New' in the upper right corner and 'Python 3' in the drop-down list (we are going to use a [Python kernel](https://github.com/ipython/ipython) for all our experiments).

Note: You will sometimes hear people talking about the Notebook 'kernel'. The 'kernel' is just the Python engine that performs the computations for you.
### Shortcuts and tricks
#### Command Mode Shortcuts
There are a couple of useful keyboard shortcuts in `Command Mode` that you can leverage to make Jupyter Notebook faster to use. Remember that you can switch back and forth between `Command Mode` and `Edit Mode` with <kbd>Esc</kbd> and <kbd>Enter</kbd>.
<kbd>m</kbd>: Convert cell to Markdown
<kbd>y</kbd>: Convert cell to Code
<kbd>D</kbd>+<kbd>D</kbd>: Delete the cell (if it's not the only cell), or delete the content of the cell and reset it to a Code cell (if it's the only cell left)
<kbd>o</kbd>: Toggle between hide or show output
<kbd>Shift</kbd>+<kbd>Arrow up/Arrow down</kbd>: Selects multiple cells. Once you have selected them you can operate on them like a batch (run, copy, paste etc).
<kbd>Shift</kbd>+<kbd>M</kbd>: Merge selected cells.
<kbd>Shift</kbd>+<kbd>Tab</kbd>: [press these two buttons at the same time, once] Tells you which parameters to pass on a function
<kbd>Shift</kbd>+<kbd>Tab</kbd>: [press these two buttons at the same time, three times] Gives additional information on the method
#### Line Magics
Line magics are functions that you can run on cells and take as an argument the rest of the line from where they are called. You call them by placing a '%' sign before the command. The most useful ones are:
`%matplotlib inline`: This command ensures that all matplotlib plots will be plotted in the output cell within the notebook and will be kept in the notebook when saved.
`%reload_ext autoreload`, `%autoreload 2`: Reload all modules before executing a new line. If a module is edited, it is not necessary to rerun the import commands, the modules will be reloaded automatically.
These three commands are often called together at the beginning of every notebook.
```
%matplotlib inline
%reload_ext autoreload
%autoreload 2
```
`%timeit`: Runs a line ten thousand times and displays the average time it took to run it.
```
%timeit [i+1 for i in range(1000)]
```
`%debug`: Allows you to inspect a function which is showing an error using the [Python debugger](https://docs.python.org/3/library/pdb.html).
```
for i in range(1000):
a = i+1
b = 'string'
c = b+1
%debug
```
```
from decouple import config
from qiskit import IBMQ
from datetime import datetime
import pprint
import json
IBMQ.load_account()
IBMQ.providers()
provider = IBMQ.get_provider(hub='strangeworks-hub', group='qc-com', project='runtime')
print(provider.backends())
import pandas as pd
df = pd.read_csv('qiskit_runtime/qka/aux_file/dataset_graph7.csv',sep=',', header=None) # alternative problem: dataset_graph10.csv
data = df.values
import numpy as np
# choose number of training and test samples per class:
num_train = 10
num_test = 10
# extract training and test sets and sort them by class label
train = data[:2*num_train, :]
test = data[2*num_train:2*(num_train+num_test), :]
ind=np.argsort(train[:,-1])
x_train = train[ind][:,:-1]
y_train = train[ind][:,-1]
ind=np.argsort(test[:,-1])
x_test = test[ind][:,:-1]
y_test = test[ind][:,-1]
from qiskit_runtime.qka import FeatureMap
d = np.shape(data)[1]-1 # feature dimension is twice the qubit number
em = [[0,2],[3,4],[2,5],[1,4],[2,3],[4,6]] # we'll match this to the 7-qubit graph
# em = [[0,1],[2,3],[4,5],[6,7],[8,9],[1,2],[3,4],[5,6],[7,8]] # we'll match this to the 10-qubit graph
fm = FeatureMap(feature_dimension=d, entangler_map=em) # define the feature map
initial_point = [0.1] # set the initial parameter for the feature map
from qiskit.tools.visualization import circuit_drawer
circuit_drawer(fm.construct_circuit(x=x_train[0], parameters=initial_point),
output='text', fold=200)
C = 1 # SVM soft-margin penalty
maxiters = 10 # number of SPSA iterations
initial_layout = [0, 1, 2, 3, 4, 5, 6] # see figure above for the 7-qubit graph
# initial_layout = [9, 8, 11, 14, 16, 19, 22, 25, 24, 23] # see figure above for the 10-qubit graph
print(provider.runtime.program('quantum-kernel-alignment'))
def interim_result_callback(job_id, interim_result):
print(f"interim result: {interim_result}\n")
# backend = provider.get_backend('ibmq_qasm_simulator')
backend = provider.get_backend('ibm_nairobi')
program_inputs = {
'feature_map': fm,
'data': x_train,
'labels': y_train,
'initial_kernel_parameters': initial_point,
'maxiters': maxiters,
'C': C,
'initial_layout': initial_layout
}
options = {'backend_name': backend.name()}
job = provider.runtime.run(program_id="quantum-kernel-alignment",
options=options,
inputs=program_inputs,
callback=interim_result_callback,
)
print(job.job_id())
result2 = job.result()
program_inputs = {
'feature_map': fm,
'data': x_train,
'labels': y_train,
'initial_kernel_parameters': initial_point,
'maxiters': maxiters,
'C': C,
'initial_layout': initial_layout
}
options = {'backend_name': 'ibmq_qasm_simulator'}
job = provider.runtime.run(program_id="quantum-kernel-alignment",
options=options,
inputs=program_inputs,
callback=None,
)
print(job.job_id())
result3 = job.result()
result3
```
# 📃 Solution for Exercise M7.01
In this exercise we will define dummy classification baselines and use them
as reference to assess the relative predictive performance of a given model
of interest.
We illustrate those baselines with the help of the Adult Census dataset,
using only the numerical features for the sake of simplicity.
```
import pandas as pd
adult_census = pd.read_csv("../datasets/adult-census-numeric-all.csv")
data, target = adult_census.drop(columns="class"), adult_census["class"]
```
First, define a `ShuffleSplit` cross-validation strategy taking half of the
samples as a testing set at each round. Let us use 10 cross-validation rounds.
```
# solution
from sklearn.model_selection import ShuffleSplit
cv = ShuffleSplit(n_splits=10, test_size=0.5, random_state=0)
```
Next, create a machine learning pipeline composed of a transformer to
standardize the data followed by a logistic regression classifier.
```
# solution
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
classifier = make_pipeline(StandardScaler(), LogisticRegression())
```
Compute the cross-validation (test) scores for the classifier on this
dataset. Store the results in a pandas Series as we did in the previous notebook.
```
# solution
from sklearn.model_selection import cross_validate
cv_results_logistic_regression = cross_validate(
classifier, data, target, cv=cv, n_jobs=2
)
test_score_logistic_regression = pd.Series(
cv_results_logistic_regression["test_score"], name="Logistic Regression"
)
test_score_logistic_regression
```
Now, compute the cross-validation scores of a dummy classifier that
constantly predicts the most frequent class observed in the training set. Please
refer to the online documentation for the [sklearn.dummy.DummyClassifier
](https://scikit-learn.org/stable/modules/generated/sklearn.dummy.DummyClassifier.html)
class.
Store the results in a second pandas Series.
```
# solution
from sklearn.dummy import DummyClassifier
most_frequent_classifier = DummyClassifier(strategy="most_frequent")
cv_results_most_frequent = cross_validate(
most_frequent_classifier, data, target, cv=cv, n_jobs=2
)
test_score_most_frequent = pd.Series(
cv_results_most_frequent["test_score"], name="Most frequent class predictor"
)
test_score_most_frequent
```
Now that we collected the results from the baseline and the model,
concatenate the test scores as columns of a single pandas dataframe.
```
# solution
all_test_scores = pd.concat(
[test_score_logistic_regression, test_score_most_frequent],
axis='columns',
)
all_test_scores
```
Next, plot the histogram of the cross-validation test scores for both
models with the help of [pandas built-in plotting
function](https://pandas.pydata.org/pandas-docs/stable/user_guide/visualization.html#histograms).
What conclusions do you draw from the results?
```
# solution
import numpy as np
import matplotlib.pyplot as plt
bins = np.linspace(start=0.5, stop=1.0, num=100)
all_test_scores.plot.hist(bins=bins, density=True, edgecolor="black")
plt.legend(bbox_to_anchor=(1.05, 0.8), loc="upper left")
plt.xlabel("Accuracy (%)")
_ = plt.title("Distribution of the CV scores")
```
We observe that the two histograms are well separated. Therefore the dummy
classifier with the strategy `most_frequent` has significantly lower accuracy
than the logistic regression classifier. We conclude that the logistic
regression model can successfully find predictive information in the input
features to improve upon the baseline.
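Before moving on, this separation can also be quantified numerically. The following is a small optional check (not part of the original exercise) that summarizes the scores collected in `all_test_scores`:
```
# optional check: mean and standard deviation of the CV scores per model
all_test_scores.aggregate(["mean", "std"])
```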
Change the `strategy` of the dummy classifier to `"stratified"` and compute the
results. Similarly, compute the scores for `strategy="uniform"` and then plot
the distributions together with the other results.
Are those new baselines better than the previous one? Why is this the case?
Please refer to the scikit-learn documentation on
[sklearn.dummy.DummyClassifier](
https://scikit-learn.org/stable/modules/generated/sklearn.dummy.DummyClassifier.html)
to find out about the meaning of the `"stratified"` and `"uniform"`
strategies.
```
# solution
stratified_dummy = DummyClassifier(strategy="stratified")
cv_results_stratified = cross_validate(
stratified_dummy, data, target, cv=cv, n_jobs=2
)
test_score_dummy_stratified = pd.Series(
cv_results_stratified["test_score"], name="Stratified class predictor"
)
# solution
uniform_dummy = DummyClassifier(strategy="uniform")
cv_results_uniform = cross_validate(
uniform_dummy, data, target, cv=cv, n_jobs=2
)
test_score_dummy_uniform = pd.Series(
cv_results_uniform["test_score"], name="Uniform class predictor"
)
all_test_scores = pd.concat(
[
test_score_logistic_regression,
test_score_most_frequent,
test_score_dummy_stratified,
test_score_dummy_uniform,
],
axis='columns',
)
all_test_scores.plot.hist(bins=bins, density=True, edgecolor="black")
plt.legend(bbox_to_anchor=(1.05, 0.8), loc="upper left")
plt.xlabel("Accuracy (%)")
_ = plt.title("Distribution of the test scores")
```
We see that using `strategy="stratified"`, the results are much worse than
with the `most_frequent` strategy. Since the classes are imbalanced,
predicting the most frequent class means that we will be right for the
proportion of samples in that class (~75% of the samples). However, the
`"stratified"` strategy randomly generates predictions while respecting the
training set's class distribution, resulting in some wrong predictions even
for the most frequent class, hence we obtain a lower accuracy.
This is even more so for `strategy="uniform"`: this strategy assigns
class labels uniformly at random. Therefore, on a binary classification
problem, the cross-validation accuracy is 50% on average, which is the
weakest of the three dummy baselines.
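To make these orders of magnitude concrete, here is a small back-of-the-envelope sketch (assuming the majority class represents roughly 75% of the samples, as stated above):
```
# expected accuracy of each dummy strategy on this binary problem (sketch)
p = 0.75                                   # assumed proportion of the majority class
print("most_frequent:", p)                 # always predict the majority class
print("stratified   :", p**2 + (1 - p)**2) # sample predictions from the class priors
print("uniform      :", 0.5)               # predict each class with probability 1/2
```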
Note: one could argue that the `"uniform"` or `strategy="stratified"`
strategies are both valid ways to define a "chance level" baseline accuracy
for this classification problem, because they make predictions "by chance".
Another way to define a chance level would be to use the
[sklearn.model_selection.permutation_test_score](https://scikit-learn.org/stable/auto_examples/model_selection/plot_permutation_tests_for_classification.html)
utility of scikit-learn. Instead of using a dummy classifier, this function
compares the cross-validation accuracy of a model of interest to the
cross-validation accuracy of this same model but trained on randomly permuted
class labels. The `permutation_test_score` therefore defines a chance level
that depends on the choice of the class and hyper-parameters of the estimator
of interest. When training on such randomly permuted labels, many machine
learning estimators would end up approximately behaving much like the
`DummyClassifier(strategy="most_frequent")` by always predicting the majority
class, irrespective of the input features. As a result, this
`"most_frequent"` baseline is sometimes called the "chance level" for
imbalanced classification problems, even though its predictions are
completely deterministic and do not involve much "chance" anymore.
Defining the chance level using `permutation_test_score` is quite
computation-intensive because it requires fitting many non-dummy models on
random permutations of the data. Using dummy classifiers as baselines is
often enough for practical purposes. For imbalanced classification problems,
the `"most_frequent"` strategy is the strongest of the three baselines and
therefore the one we should use.
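For completeness, here is a minimal sketch of that alternative, reusing the `classifier`, `data`, `target` and `cv` objects defined above (the number of permutations is kept small because each one requires a full cross-validation):
```
# sketch: chance level estimated by refitting the model on permuted labels
from sklearn.model_selection import permutation_test_score

score, permutation_scores, pvalue = permutation_test_score(
    classifier, data, target, cv=cv, n_permutations=30, n_jobs=2
)
print(f"Model score: {score:.3f}")
print(f"Chance level (permuted labels): {permutation_scores.mean():.3f}")
```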
# Let's kill off `Runner`
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
#export
from exp.nb_09 import *
AvgStats
```
## Imagenette data
[Jump_to lesson 11 video](https://course.fast.ai/videos/?lesson=11&t=6571)
```
path = datasets.untar_data(datasets.URLs.IMAGENETTE_160)
tfms = [make_rgb, ResizeFixed(128), to_byte_tensor, to_float_tensor]
bs=64
il = ImageList.from_files(path, tfms=tfms)
sd = SplitData.split_by_func(il, partial(grandparent_splitter, valid_name='val'))
ll = label_by_func(sd, parent_labeler, proc_y=CategoryProcessor())
data = ll.to_databunch(bs, c_in=3, c_out=10, num_workers=4)
cbfs = [partial(AvgStatsCallback,accuracy),
CudaCallback,
partial(BatchTransformXCallback, norm_imagenette)]
nfs = [32]*4
```
Having a Runner is great but not essential when the `Learner` already has everything needed in its state. We implement everything inside it directly instead of building a second object.
##### In Lesson 12 Jeremy Howard revisited material in the cell below [Jump_to lesson 12 video](https://course.fast.ai/videos/?lesson=12&t=65)
```
#export
def param_getter(m): return m.parameters()
class Learner():
def __init__(self, model, data, loss_func, opt_func=sgd_opt, lr=1e-2, splitter=param_getter,
cbs=None, cb_funcs=None):
self.model,self.data,self.loss_func,self.opt_func,self.lr,self.splitter = model,data,loss_func,opt_func,lr,splitter
self.in_train,self.logger,self.opt = False,print,None
# NB: Things marked "NEW" are covered in lesson 12
# NEW: avoid need for set_runner
self.cbs = []
self.add_cb(TrainEvalCallback())
self.add_cbs(cbs)
self.add_cbs(cbf() for cbf in listify(cb_funcs))
def add_cbs(self, cbs):
for cb in listify(cbs): self.add_cb(cb)
def add_cb(self, cb):
cb.set_runner(self)
setattr(self, cb.name, cb)
self.cbs.append(cb)
def remove_cbs(self, cbs):
for cb in listify(cbs): self.cbs.remove(cb)
def one_batch(self, i, xb, yb):
try:
self.iter = i
self.xb,self.yb = xb,yb; self('begin_batch')
self.pred = self.model(self.xb); self('after_pred')
self.loss = self.loss_func(self.pred, self.yb); self('after_loss')
if not self.in_train: return
self.loss.backward(); self('after_backward')
self.opt.step(); self('after_step')
self.opt.zero_grad()
except CancelBatchException: self('after_cancel_batch')
finally: self('after_batch')
def all_batches(self):
self.iters = len(self.dl)
try:
for i,(xb,yb) in enumerate(self.dl): self.one_batch(i, xb, yb)
except CancelEpochException: self('after_cancel_epoch')
def do_begin_fit(self, epochs):
self.epochs,self.loss = epochs,tensor(0.)
self('begin_fit')
def do_begin_epoch(self, epoch):
self.epoch,self.dl = epoch,self.data.train_dl
return self('begin_epoch')
def fit(self, epochs, cbs=None, reset_opt=False):
# NEW: pass callbacks to fit() and have them removed when done
self.add_cbs(cbs)
# NEW: create optimizer on fit(), optionally replacing existing
if reset_opt or not self.opt: self.opt = self.opt_func(self.splitter(self.model), lr=self.lr)
try:
self.do_begin_fit(epochs)
for epoch in range(epochs):
if not self.do_begin_epoch(epoch): self.all_batches()
with torch.no_grad():
self.dl = self.data.valid_dl
if not self('begin_validate'): self.all_batches()
self('after_epoch')
except CancelTrainException: self('after_cancel_train')
finally:
self('after_fit')
self.remove_cbs(cbs)
ALL_CBS = {'begin_batch', 'after_pred', 'after_loss', 'after_backward', 'after_step',
'after_cancel_batch', 'after_batch', 'after_cancel_epoch', 'begin_fit',
'begin_epoch', 'begin_validate', 'after_epoch',
'after_cancel_train', 'after_fit'}
def __call__(self, cb_name):
res = False
assert cb_name in self.ALL_CBS
for cb in sorted(self.cbs, key=lambda x: x._order): res = cb(cb_name) and res
return res
#export
class AvgStatsCallback(Callback):
def __init__(self, metrics):
self.train_stats,self.valid_stats = AvgStats(metrics,True),AvgStats(metrics,False)
def begin_epoch(self):
self.train_stats.reset()
self.valid_stats.reset()
def after_loss(self):
stats = self.train_stats if self.in_train else self.valid_stats
with torch.no_grad(): stats.accumulate(self.run)
def after_epoch(self):
#We use the logger function of the `Learner` here, it can be customized to write in a file or in a progress bar
self.logger(self.train_stats)
self.logger(self.valid_stats)
cbfs = [partial(AvgStatsCallback,accuracy),
CudaCallback,
partial(BatchTransformXCallback, norm_imagenette)]
#export
def get_learner(nfs, data, lr, layer, loss_func=F.cross_entropy,
cb_funcs=None, opt_func=sgd_opt, **kwargs):
model = get_cnn_model(data, nfs, layer, **kwargs)
init_cnn(model)
return Learner(model, data, loss_func, lr=lr, cb_funcs=cb_funcs, opt_func=opt_func)
learn = get_learner(nfs, data, 0.4, conv_layer, cb_funcs=cbfs)
%time learn.fit(1)
```
## Check everything works
Let's check our previous callbacks still work.
```
cbfs += [Recorder]
learn = get_learner(nfs, data, 0.4, conv_layer, cb_funcs=cbfs)
phases = combine_scheds([0.3, 0.7], cos_1cycle_anneal(0.2, 0.6, 0.2))
sched = ParamScheduler('lr', phases)
learn.fit(1, sched)
learn.recorder.plot_lr()
learn.recorder.plot_loss()
```
## Export
```
!./notebook2script.py 09b_learner.ipynb
```
### Deep CNN on the CIFAR10 dataset (due: July 3)
```
import keras, os
from keras.datasets import cifar10
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten, Conv2D, MaxPooling2D
batch_size = 32
num_classes = 10
epochs = 100
data_augmentation = True
num_predictions = 20
save_dir = os.path.join(os.getcwd(), 'saved_models')
model_name = 'keras_cifar10_trained_model.h5'
# (X, Y training data), (X, Y testing data)
(X_train, Y_train), (X_test, Y_test) = cifar10.load_data()
# (train(test)_samples, ...)
print('(X, Y)_train_shape:', X_train.shape, '|', Y_train.shape)
print('(X, Y)_test_shape :', X_test.shape, '|', Y_test.shape)
# Convert class vectors to binary class matrices
Y_train = keras.utils.to_categorical(Y_train, num_classes)
Y_test = keras.utils.to_categorical(Y_test, num_classes)
model = Sequential()
# inner layer (if "layers" is 0, then this is the input layer)
for layers in range(2):
model.add(Conv2D(32, (3, 3),
padding='same',
input_shape=X_train.shape[1:]))
model.add(Activation('relu'))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
# output layer
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes))
model.add(Activation('softmax'))
# initiate RMSprop optimizer
optimizer = keras.optimizers.rmsprop(lr=0.0001, decay=0.00001)
# train the model with RMSprop
model.compile(loss='categorical_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
if not data_augmentation:
print('=== Not using data_augmentation ===')
model.fit(X_train, Y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(X_test, Y_test),
shuffle=True)
else:
print('=== Real-time data_augmentation ===')
data_genarator = ImageDataGenerator(featurewise_center=False,
samplewise_center=False,
featurewise_std_normalization=False,
samplewise_std_normalization=False,
zca_epsilon=0.00001,
zca_whitening=False,
rotation_range=0,
width_shift_range=0.1,
height_shift_range=0.1,
shear_range=0.,
zoom_range=0.,
channel_shift_range=0.,
fill_mode='nearest',
cval=0.,
horizontal_flip=True,
vertical_flip=False,
rescale=None,
preprocessing_function=None,
data_format=None,
validation_split=0.0)
data_genarator.fit(X_train)
generator = data_genarator.flow(X_train, Y_train,
batch_size = batch_size)
model.fit_generator(generator,
steps_per_epoch=len(generator),
epochs=epochs,
validation_data=(X_test, Y_test),
workers=4)
"""
if not os.path.isdir(save_dir):
os.makedirs(save_dir)
model_path = os.path.join(save_dir, model_name)
model.save(model_path)
print('Saved trained model at \"%s\".' % model_path)
"""
# Score trained model.
scores = model.evaluate(X_test, Y_test, verbose=1)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
print(model.history.history['val_acc'])
import matplotlib.pyplot as plt
# plotting accuracy
plt.plot(model.history.history['acc'])
plt.plot(model.history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# plotting occurred loss
plt.plot(model.history.history['loss'])
plt.plot(model.history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
```
# Predicting a Pulsar Star
For this project we'll analyze the **Predicting a Pulsar Star** dataset from [Kaggle](https://www.kaggle.com/pavanraj159/predicting-a-pulsar-star). The data contains the following fields:
- Mean of the integrated profile
- Standard deviation of the integrated profile
- Excess kurtosis of the integrated profile
- Skewness of the integrated profile
- Mean of the DM-SNR curve
- Standard deviation of the DM-SNR curve
- Excess kurtosis of the DM-SNR curve
- Skewness of the DM-SNR curve
- target class, 0 (negative) and 1 (positive)
### Import libraries:
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
```
Reading files:
```
stars = pd.read_csv("pulsar_stars.csv")
stars.head()
stars.info()
stars.describe()
```
### EDA
```
plt.figure(figsize = (12, 8))
sns.pairplot(stars,hue='target_class',palette='Dark2')
plt.figure(figsize = (12, 8))
sns.heatmap(stars.corr(), cmap='magma', annot=True)
```
From the heatmap and pairplot we see strong correlations with the target class; a small numerical check follows the lists below.
*Positive:*
- Excess kurtosis of the integrated profile
- Skewness of the integrated profile
*Negative:*
- Mean of the integrated profile
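To double-check these visual impressions, the correlations with the target can also be ranked numerically (an optional sketch, not part of the original analysis):
```
# optional: rank the features by their correlation with the target class
stars.corr()['target_class'].drop('target_class').sort_values()
```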
### Split train and test data
```
from sklearn.model_selection import train_test_split
X = stars.drop('target_class',axis=1)
y = stars['target_class']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)
```
### Support Vector Machine Classifier
Call the SVC() model from sklearn and fit the model to the training data.
```
#connect classification report,confusion_matrix libraries in advance
from sklearn.metrics import classification_report,confusion_matrix
from sklearn.svm import SVC
svc_model = SVC()
svc_model.fit(X_train,y_train)
predictions_svm = svc_model.predict(X_test)
print(confusion_matrix(y_test,predictions_svm))
print(classification_report(y_test,predictions_svm))
```
### Random Forest
```
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(n_estimators=100)
rfc.fit(X_train, y_train)
predictions_rfc = rfc.predict(X_test)
print(confusion_matrix(y_test,predictions_rfc))
print(classification_report(y_test,predictions_rfc))
```
### Logistic Regression
```
from sklearn.linear_model import LogisticRegression
logmodel = LogisticRegression()
logmodel.fit(X_train,y_train)
predictions_log = logmodel.predict(X_test)
print(confusion_matrix(y_test,predictions_log))
print(classification_report(y_test,predictions_log))
```
### K Nearest Neighbors
```
from sklearn.neighbors import KNeighborsClassifier
```
Choose the best number of neighbors:
```
error_rate = []
# Will take some time
for i in range(1,20):
knn = KNeighborsClassifier(n_neighbors = i)
knn.fit(X_train, y_train)
pred_i = knn.predict(X_test)
error_rate.append(np.mean(pred_i != y_test))
plt.figure(figsize=(10,6))
plt.plot(range(1,20),error_rate)
plt.title('Error Rate vs. K Value')
plt.xlabel('K')
plt.ylabel('Error Rate')
knn = KNeighborsClassifier(n_neighbors=10)
knn.fit(X_train,y_train)
prediction_knn = knn.predict(X_test)
print(confusion_matrix(y_test,prediction_knn))
print(classification_report(y_test,prediction_knn))
```
```
!wget "https://he-public-data.s3.ap-southeast-1.amazonaws.com/shell_dataset.zip"
```
# GET DATA
```
!unzip -q "shell_dataset.zip"
!unzip -q "dataset/train.zip"
!unzip -q "dataset/test.zip"
import pandas as pd
test_data = pd.read_csv("dataset/test.csv")
train_data = pd.read_csv("train/train.csv")
sample = pd.read_csv("dataset/sample_submission.csv")
```
# CLOUD COVER %
```
import cv2
import numpy as np
import os
import matplotlib.pyplot as plt
import seaborn as sns
from datetime import datetime
```
### sample 1
```
img = cv2.resize(cv2.imread('/content/train/0101/0101142000.jpg'),(300,300))
white = np.array([255, 255, 255])
lowerBound = np.array([50,95,95])
mask = cv2.inRange(img, lowerBound, white)
masked = cv2.bitwise_and(img, img, mask=mask)
masked = cv2.cvtColor(masked,cv2.COLOR_BGR2GRAY)
RGB_image = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.subplot(1,2,1)
plt.axis('off')
plt.imshow(RGB_image)
plt.subplot(1,2,2)
plt.axis('off')
plt.imshow(masked,cmap='gray')
```
The error is around 10% on most sky images that contain the sun.
### sample 2
```
img = cv2.resize(cv2.imread('/content/train/0101/0101110000.jpg'),(300,300))
white = np.array([255, 255, 255])
lowerBound = np.array([50,95,95])
mask = cv2.inRange(img, lowerBound, white)
masked = cv2.bitwise_and(img, img, mask=mask)
masked = cv2.cvtColor(masked,cv2.COLOR_BGR2GRAY)
RGB_image = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.subplot(1,2,1)
plt.axis('off')
plt.imshow(RGB_image)
plt.subplot(1,2,2)
plt.axis('off')
plt.imshow(masked,cmap='gray')
```
The error is around 5% for a fully (100%) covered sky.
### sample 3
```
img = cv2.resize(cv2.imread('/content/train/0101/0101135000.jpg'),(300,300))
white = np.array([255, 255, 255])
lowerBound = np.array([50,95,95])
mask = cv2.inRange(img, lowerBound, white)
masked = cv2.bitwise_and(img, img, mask=mask)
masked = cv2.cvtColor(masked,cv2.COLOR_BGR2GRAY)
RGB_image = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) #only needed to display in plt
plt.subplot(1,2,1)
plt.axis('off')
plt.imshow(RGB_image)
plt.subplot(1,2,2)
plt.axis('off')
plt.imshow(masked,cmap='gray')
```
The error is around 30% for the clear-sky case.
## Function to get Percentage
```
def get_cloud_cov(img_path):
    # Load the sky image and resize it to a fixed 300x300 frame
    img = cv2.resize(cv2.imread(img_path),(300,300))
    # Keep only the whitish/grey pixels (between the lower bound and pure white),
    # which we treat as cloud, and mask out everything else
    white = np.array([255, 255, 255])
    lowerBound = np.array([50,95,95])
    mask = cv2.inRange(img, lowerBound, white)
    masked = cv2.bitwise_and(img, img, mask=mask)
    masked = cv2.cvtColor(masked,cv2.COLOR_BGR2GRAY)
    # Ratio of cloud pixels to the circular fisheye area (radius 150 px), as a percentage
    percent = int(100*(np.count_nonzero(masked)/(np.pi*150**2)))
    if percent > 100:
        return 100
    else:
        return percent
```
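As a quick sanity check, the function can be applied to the three sample images shown above; the returned percentages should roughly match the visual estimates discussed there (a usage sketch, not part of the original notebook):
```
# usage sketch: cloud cover percentage for the three sample images above
for path in ['/content/train/0101/0101142000.jpg',
             '/content/train/0101/0101110000.jpg',
             '/content/train/0101/0101135000.jpg']:
    print(path, get_cloud_cov(path), '%')
```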
# Train images cloud percentage calculation
```
train_data.head(5)
train_data['time_MST'] = pd.to_datetime(train_data['MST'], format='%H:%M')
train_data = train_data.set_index('time_MST').between_time('07:40:00', '16:40:00').reset_index().reindex(columns=train_data.columns)
train_data.drop('time_MST',axis=1,inplace=True)
train_data.head(5)
len(train_data)
for i, row in enumerate(train_data.index):
#prepare path
date = datetime.strptime(train_data.loc[row,'DATE (MM/DD)']+'/2020','%m/%d/%Y').date()
time = datetime.strptime(train_data.loc[row,'MST'],'%H:%M').time()
train_data.loc[row,'date_time'] = datetime.combine(date,time)
train_data = train_data.set_index('date_time')
train_data = train_data.resample('10min').mean()
train_data.dropna(inplace = True)
train_data.head(5)
train_data['Total Cloud Cover [%]'] = np.NaN
for i, row in enumerate(train_data.index):
#prepare path
date = row.date()
time = row.time()
date = date.strftime('%m%d')
time = time.strftime('%H%M%S')
path = 'train/'+date+'/'+date+time+'.jpg'
try:
train_data.iloc[i,7] = get_cloud_cov(path)
except:
continue
train_data['Total Cloud Cover [%]'].fillna(np.mean(train_data['Total Cloud Cover [%]']),inplace=True)
train_data.head(5)
train_data.describe()['Total Cloud Cover [%]']
train_data['Total Cloud Cover [%]'] = train_data['Total Cloud Cover [%]'].astype(np.int64)
train_data.info()
train_data.to_csv('train_filled.csv')
```
# Explore
```
plt.figure(figsize=(23,5))
train_data['Total Cloud Cover [%]'].plot()
train_data['Total Cloud Cover [%]'].hist(bins=10,range=[0,100])
plt.xticks(np.arange(0,101,10))
train_data.corr()['Total Cloud Cover [%]']
```
# Split Data
```
```
# Model building
```
from statsmodels.tsa.arima_model import ARIMA
model=ARIMA(endog=train_data['Total Cloud Cover [%]'],exog=train_data,order=(1,0,1))
history = model.fit(disp=0)
print(history.summary())
```
# Directly using test weather data
```
test_data.head(5)
path = '/content/test/2/weather_data.csv'
wd = pd.read_csv(path)
todays_date = datetime.now().date()
index = pd.date_range(todays_date, periods=361, freq='1min')
wd = wd.set_index(index)
wd.drop('Time [Mins]',inplace=True,axis=1)
wd = wd.resample('10min').mean()
wd.head(5)
wd.describe()
wd['Total Cloud Cover [%]'].plot()
model=ARIMA(endog=wd['Total Cloud Cover [%]'],exog=wd,order=(1,1,1))
history = model.fit(disp=0)
print(history.summary())
for i,row in enumerate(test_data.index):
folder = test_data.loc[row,'']
```
```
# Dependencies
import numpy as np
import pandas as pd
import networkx as nx
import matplotlib.pyplot as plt
import matplotlib.colors as mc
from modules.dataset import load_words, load_tweets
from modules.network import get_edges, map_words, get_degree
# Set up default colors
colors=[*mc.TABLEAU_COLORS.values()]
%matplotlib inline
```
# Dataset
## Dataset creation
```
# Load words dataset table
words = load_words('data/database/words.csv')
words.head()
# Retrieve dictionaries mapping lemma tuples to numeric value
w2i, i2w = map_words(words)
# Map lemmas to node numbers
words['node'] = words.apply(lambda w: w2i[(w.text, w.pos)], axis=1)
words.head()
# Load tweets dataset table
tweets = load_tweets('data/database/tweets.csv')
tweets.head()
```
## Dataset statistics
### Number of tweets
```
# Define words from tweets of 2017 and the ones from tweets of 2018
tweets_2017 = tweets.id_str[tweets.created_at.dt.year == 2017].values
tweets_2018 = tweets.id_str[tweets.created_at.dt.year == 2018].values
# Show tweets distribution
fig, ax = plt.subplots(figsize=(7.5, 5))
_ = ax.set_title('Tweet count for 2017 and 2018 analyzed period', fontsize=18)
_ = ax.bar(['2017'], [len(tweets_2017)])
_ = ax.bar(['2018'], [len(tweets_2018)])
_ = plt.savefig('images/analysis/tweet_counts.png')
_ = plt.show()
```
### Words count
```
# Show word counts in tweets of 2017 and 2018 respectively
fig, ax = plt.subplots(figsize=(7.5, 5))
_ = ax.set_title('Word count for 2017 and 2018 analyzed period', fontsize=18)
_ = ax.bar(['2017'], sum(words.tweet.isin(tweets_2017)))
_ = ax.bar(['2018'], sum(words.tweet.isin(tweets_2018)))
_ = plt.savefig('images/analysis/words_counts.png')
_ = plt.show()
```
### Unique words count
```
# Show unique word counts in tweets of 2017 and 2018 respectively
unique_words_2017 = words.text[words.tweet.isin(tweets_2017)].unique()
unique_words_2018 = words.text[words.tweet.isin(tweets_2018)].unique()
fig, ax = plt.subplots(figsize=(7.5, 5))
_ = ax.set_title('Word count for 2017 and 2018 analyzed period')
_ = ax.bar(['2017'], unique_words_2017.shape[0])
_ = ax.bar(['2018'], unique_words_2018.shape[0])
_ = plt.savefig('images/analysis/nodes_counts.png')
_ = plt.show()
```
### Tweets lengths distributions
The histograms show the distribution of tweet lengths in the 2017 and 2018 networks. The difference between the two distributions is due to the fact that in November 2017 Twitter doubled the maximum allowed tweet length in terms of characters.
```
# Compute length of each tweet, for either words and characters
tweets_ = tweets.loc[:, ['id_str']]
tweets_['len_words'] = tweets.apply(lambda t: len(t.text.split(' ')), axis=1)
tweets_['len_chars'] = tweets.apply(lambda t: len(t.text), axis=1)
# Get 2017 and 2018 tweets
tweets_2017_ = tweets_[tweets_['id_str'].isin(tweets_2017)]
tweets_2018_ = tweets_[tweets_['id_str'].isin(tweets_2018)]
# Show distribution of words number per tweet in 2017 and 2018
fig, axs = plt.subplots(1, 2, figsize=(10, 5))
# Word lengths
_ = axs[0].set_title('Number of words per tweet',fontsize=18)
_ = axs[0].hist(tweets_2017_['len_words'], bins=25, density=True, alpha=.7)
_ = axs[0].hist(tweets_2018_['len_words'], bins=50, density=True, alpha=.7)
_ = axs[0].legend(['Tweet length in 2017', 'Tweet length in 2018'])
# Charactes lengths
_ = axs[1].set_title('Number of characters per tweet', fontsize=18)
_ = axs[1].hist(tweets_2017_['len_chars'], bins=25, density=True, alpha=.7)
_ = axs[1].hist(tweets_2018_['len_chars'], bins=50, density=True, alpha=.7)
_ = axs[1].legend(['Tweet length in 2017', 'Tweet length in 2018'])
# Make plot
_ = plt.savefig('images/analysis/tweet_len_distr.png')
_ = plt.show()
```
# Network creation
## Edges creation
```
# Define years under examination
years = [2017, 2018]
# Define edges for 2017 and 2018 (as Pandas DataFrames)
edges = dict()
# Define edges for each network
for y in years:
# Get id of tweets for current year
tweet_ids = tweets.id_str[tweets.created_at.dt.year == y]
# Compute edges for current year
edges[y] = get_edges(words[words.tweet.isin(tweet_ids)])
# Save vocabularies to disk
np.save('data/edges_w2i.npy', w2i) # Save tuple to index vocabulary
np.save('data/edges_i2w.npy', i2w) # Save index to tuple vocabulary
# Save edges to disk
edges_ = [*years]
# Loop through each edges table
for i, y in enumerate(years):
# Add year column
edges_[i] = edges[y].copy()
edges_[i]['year'] = y
# Concatenate DataFrames
edges_ = pd.concat(edges_, axis=0)
# Save dataframe to disk
edges_.to_csv('data/database/edges.csv', index=False)
print('Edges for 2017\'s network')
edges[2017].head()
print('Edges for 2018\'s network')
edges[2018].head()
```
## Adjacency matrices
Compute the adjacency matrices for both the 2017 and 2018 networks.
Note: adjacency matrices are saved by default to avoid recomputing.
```
# Define networks container
network = dict()
# Create networks
for y in years:
network[y] = nx.from_pandas_edgelist(edges[y], source='node_x', target='node_y',
edge_attr=True, create_using=nx.Graph)
# Get numpy adjacency matrices
adj_matrix = dict()
for y in years:
adj_matrix[y] = nx.to_numpy_matrix(network[y])
# Show adjacency matrices
fig, axs = plt.subplots(1, 2, figsize=(15, 5))
# Print adjacency matrix for each network
for i, y in enumerate(years):
_ = axs[i].set_title('{:d}\'s network adjacency matrix'.format(y))
_ = axs[i].imshow(np.minimum(adj_matrix[y], np.ones(adj_matrix[y].shape)))
_ = plt.show()
adj_matrix[2017].shape
```
## Summary statistics
Compute the mean, standard deviation, and density of the adjacency matrix for both the 2017 and 2018 networks.
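With $A$ the $n \times n$ adjacency matrix of a network, the quantities computed in the cell below are

$$ \mu = \frac{1}{n^2}\sum_{i,j} A_{ij}, \qquad \sigma = \sqrt{\frac{\sum_{i,j}\big(A_{ij}-\mu\big)^2}{n^2-1}}, \qquad \rho = \frac{1}{n^2}\sum_{i,j}\min\big(A_{ij},\,1\big), $$

where $\rho$ is the density of the binarized matrix, i.e. the fraction of ordered node pairs (out of $n^2$) that are connected.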
```
# Initialize summary statistics
mean = {}
density = {}
std = {}
# Compute mean and density
for y in years:
x = adj_matrix[y] # Get adjacency matrix for current network
n = x.shape[0] # Get dimension of the adjacency matrix
mean[y] = x.sum() / n**2
std[y] = ( ((x - mean[y])**2).sum() / (n**2 - 1) )**.5
density[y] = np.minimum(x, np.ones((n, n))).sum() / n**2
# Print out results
for y in years:
print('{:d}\'s network has mean={:.04f}, standard deviation={:.04f} and density={:.04f}'.format(y, mean[y], std[y], density[y]))
# Show summary statistics graphically
fig, axs = plt.subplots(1, 3, figsize=(12, 5))
_ = axs[0].set_title('Mean',fontsize=15)
_ = axs[1].set_title('Standard deviation',fontsize=15)
_ = axs[2].set_title('Density',fontsize=15)
# Print scores for either 2017 and 2018
for y in years:
_ = axs[0].bar(str(y), mean[y])
_ = axs[1].bar(str(y), std[y])
_ = axs[2].bar(str(y), density[y])
# Make plot
_ = plt.savefig('images/analysis/net_stats.png')
_ = plt.show()
```
## Degrees analysis
```
# Compare degrees graphically
fig, ax = plt.subplots(figsize=(30, 5))
_ = fig.suptitle('Distribution of the networks degrees')
_ = ax.hist(get_degree(network[2017]), bins=500, alpha=0.7)
_ = ax.hist(get_degree(network[2018]), bins=500, alpha=0.7)
_ = ax.set_xlim(0, 200)
_ = ax.legend(['Degree of the network in 2017',
'Degree of the network in 2018'])
_ = plt.savefig('images/analysis/degree_hist.png')
_ = plt.show()
# Define function for computing degree analysis (compute pdf, cdf, ...)
def make_degree_analysis(network):
"""
Input:
- degrees Pandas Series node (index) maps to its degree (value)
Output:
- degree: list of degrees
- counts: list containing count for each degree
- pdf (probability distribution function): list
- cdf (cumulative distribution function): list
"""
# Get number of times a degree appeared in the network
degree = get_degree(network)
degree, count = np.unique(degree.values, return_counts=True)
pdf = count / np.sum(count) # Compute pdf
cdf = list(1 - np.cumsum(pdf))[:-1] + [0] # Compute cdf
# Return computed statistics
return degree, count, pdf, cdf
# Define function for plotting degree analysis
def plot_degree_analysis(network):
# Initialize plot
fig, axs = plt.subplots(1, 3, figsize=(12, 5))
_ = axs[0].set_title('Probability Distribution',fontsize=14)
_ = axs[1].set_title('Log-log Probability Distribution',fontsize=14)
_ = axs[2].set_title('Log-log Cumulative Distribution',fontsize=14)
# Create plot for each network
for i, y in enumerate(network.keys()):
# Compute degree statistics
k, count, pdf, cdf = make_degree_analysis(network[y])
# Make plots
_ = axs[0].plot(k, pdf, 'o', alpha=.7)
_ = axs[1].loglog(k, pdf, 'o', alpha=.7)
_ = axs[2].loglog(k, cdf, 'o', alpha=.7)
# Show plots
_ = [axs[i].legend([str(y) for y in network.keys()], loc='upper right') for i in range(3)]
_ = plt.savefig('images/analysis/degree_distr.png')
_ = plt.show()
# Plot pdf, cdf, log-log, ... of each network
plot_degree_analysis(network)
```
# Scale-free property
## Power law estimation
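The cell below estimates the power-law exponent with the standard maximum-likelihood formula applied above a chosen saturation degree $k_{\mathrm{sat}}$, together with the usual expected-cutoff expression:

$$ \hat{\gamma} = 1 + N\left[\sum_{i}\ln\frac{k_i}{k_{\mathrm{sat}}}\right]^{-1}, \qquad c = (\hat{\gamma}-1)\,k_{\mathrm{sat}}^{\,\hat{\gamma}-1}, \qquad k_{\mathrm{cut}} \approx k_{\mathrm{sat}}\, N^{1/(\hat{\gamma}-1)}, $$

where the sum runs over the $N$ degrees retained after the saturation cut.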
```
# Estimate power law parameters for each network
# Initialize power law parameters
power_law = {
2017: {'k_sat': 4},
2018: {'k_sat': 7}
}
# Define parameters for each network
for i, y in enumerate(years):
# Get the unique values of degree and their counts
degree = get_degree(network[y])
k, count = np.unique(degree, return_counts=True)
k_sat = power_law[y]['k_sat']
# Define minimum and maximum k (degree)
power_law[y]['k_min'] = k_min = np.min(k)
power_law[y]['k_max'] = k_max = np.max(k)
# Estimate parameters
n = degree[k_sat:].shape[0]
gamma = 1 + n / np.sum(np.log(degree[k_sat:] / k_sat))
c = (gamma - 1) * k_sat ** (gamma - 1)
# Compute cutoff
cutoff = k_sat * n ** (1 / (gamma - 1))
# Store parameters
power_law[y]['gamma'] = gamma
power_law[y]['c'] = c
power_law[y]['cutoff'] = cutoff
# Print out coefficients
for y in power_law.keys():
# Retrieve parameters
gamma, c, cutoff = power_law[y]['gamma'], power_law[y]['c'], power_law[y]['cutoff']
k_min, k_max = power_law[y]['k_min'], power_law[y]['k_max']
# Print results
out = 'Power law estimated parameters for {:d}\'s network:\n'
out += ' gamma={:.03f}, c={:.03f}, cutoff={:.03f}, min.degree={:d}, max.degree={:d}'
print(out.format(y, gamma, c, cutoff, k_min, k_max))
# Define regression lines values for either 2017 and 2018 distributions
# Define regression lines container
regression_line = {}
# Define maximum degree, for both years together
k_max = np.max([power_law[y]['k_max'] for y in power_law.keys()])
# Compute regression lines
for y in power_law.keys():
# Retrieve parameters gamma and c
gamma = power_law[y]['gamma']
c = power_law[y]['c']
# Compute regression line
regression_line[y] = c * np.arange(1, k_max) ** (1 - gamma) / (gamma - 1)
# Plot results
fig, ax = plt.subplots(figsize=(12, 5))
_ = ax.set_title('Log-log Cumulative Distribution Function',fontsize=15)
# Print every network
for i, y in enumerate(power_law.keys()):
# Retrieve degree analysis values
k, count, pdf, cdf = make_degree_analysis(network[y])
# Print dots
_ = ax.loglog(k, cdf, 'o', alpha=.7, color=colors[i])
# Print regression line
_ = ax.loglog(np.arange(1, k_max), regression_line[y], color=colors[i])
# Make plot
_ = ax.legend(['2017']*2 + ['2018']*2, loc='lower left')
_ = plt.savefig('images/analysis/power_law.png')
_ = plt.show()
```
# Small-world property
## Connected components
```
# Extract cardinality of connected components and diameter of the giant component for both nets
"""# Initialize components container
connected_components = {}
# Compute giant component for every network
for i, y in enumerate(network.keys()):
# Compute connected component
cc = sorted(nx.connected_components(network[y]), key=len, reverse=True)
# Compute diameter of the giant component
d = nx.diameter(network[y].subgraph(cc[0]))
# Store the tuple (giant component, cardinality, diameter)
connected_components[y] = []
connected_components[y].append({
'component': cc[0],
'size': len(cc[0]),
'diameter': d
})
# Store each component
for component in cc[1:]:
# Add component, without diameter
connected_components[y].append({
'component': component,
'size': len(component)
})
# Save connected components to disk
np.save('data/connected_components.npy', connected_components)"""
# Load connected components from file
connected_components = np.load('data/connected_components.npy', allow_pickle=True).item()
# Show connected components info for each year
for y in years:
# Retrieve connected component
cc = connected_components[y]
# Show giant component info
print('Network {:d}'.format(y))
print('Giant component has cardinality={:d} and diameter={:d}'.format(cc[0]['size'], cc[0]['diameter']))
# Store each component
for j, component in enumerate(cc):
if j == 0: continue
# Show other components
print('Connected component nr {:d} has cardinality={:d}'.format(j + 1, component['size']))
print()
```
## Clustering coefficient
```
# Compute and show clustering coefficients
# Compute clustering coefficients
clust_coef = {y: pd.Series(nx.clustering(network[y], weight='weight')) for y in years}
# Make plot
fig, axs = plt.subplots(1, 2, figsize=(15, 8), sharey=True)
# Loop through each network
for i, y in enumerate(years):
cc = clust_coef[y]
_ = axs[i].set_title('{:d}\'s network'.format(y))
_ = axs[i].plot(cc.index.values, cc.values, 'x', mec=colors[i])
_ = axs[i].grid()
# Show plot
_ = plt.savefig('images/analysis/clust_coeff.png')
_ = plt.show()
giant = {y: connected_components[y][0]['component'] for y in years}
# Compute the average shortest path length for both nets
L = {y: nx.average_shortest_path_length(network[y].subgraph(giant[y]), weight='counts', method='floyd-warshall-numpy') for y in years}
for y in years:
print('Network {:d}'.format(y))
N = len(network[y].nodes)
print('log N: {:.4f}'.format( np.log(N) ))
print('log log N: {:.4f}'.format( np.log( np.log(N) ) ))
print('Average shortest path length: {:.4f}'.format(L[y]))
print('Average clustering coefficient: {:.4f}'.format(np.mean(clust_coef[y])))
print()
```
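As a reading aid for the printout above: the usual small-world benchmark is an average shortest path length that grows roughly logarithmically with the number of nodes, while scale-free networks with $2 < \gamma < 3$ are expected to be "ultra-small", which is why the cell prints both $\ln N$ and $\ln \ln N$ next to $L$:

$$ L_{\text{small-world}} \sim \ln N, \qquad L_{\text{ultra-small}} \sim \ln \ln N . $$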
# Ranking
## Ranking of words
### Ranking by degree
```
# Define subset (first n)
best = 20
# Make plot
fig, axs = plt.subplots(1, 2, figsize=(15, 5))
# Plot each network
for i, y in enumerate(years):
degree = get_degree(network[y]).sort_values(ascending=False)
_ = axs[i].set_title('Best nodes in {:d}\'s network'.format(y))
_ = axs[i].bar(degree.index[:best].map(lambda x: str(i2w[x])), degree.values[:best], color=colors[i])
_ = axs[i].tick_params(axis='x', labelrotation=60)
# Show plot
_ = plt.savefig('images/analysis/words_rank_degree.png', bbox_inches='tight')
_ = plt.show()
```
### Ranking by betweenness
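As a reminder, the betweenness centrality computed (and cached) below is

$$ C_B(v) = \sum_{s \neq v \neq t} \frac{\sigma_{st}(v)}{\sigma_{st}}, $$

where $\sigma_{st}$ is the number of shortest paths between $s$ and $t$ and $\sigma_{st}(v)$ counts those passing through $v$; with `weight='weight'` the shortest paths are computed on the weighted graph.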
```
"""# Compute betweenness centrality measure for nodes (on giant components)
betweenness = {}
for y in years:
# Define giant component subgraph
# giant_component = connected_components[y][0]['component']
# subgraph = nx.induced_subgraph(network[y], giant_component)
# Compute betweenness
betweenness[y] = nx.betweenness_centrality(network[y], weight='weight')
# Save betweenness as numpy array
np.save('data/betweenness.npy', betweenness)"""
# Load betweenness
betweenness = np.load('data/betweenness.npy', allow_pickle=True).item()
# Define subset (first n)
best = 20
# Make plot
fig, axs = plt.subplots(1, 2, figsize=(15, 5))
for i, y in enumerate(years):
btw = pd.Series(betweenness[y]).sort_values(ascending=False)
_ = axs[i].set_title('Best nodes in {:d}\'s network'.format(y))
_ = axs[i].bar(btw.index[:best].map(lambda x: str(i2w[x])), btw.values[:best], color=colors[i])
_ = axs[i].tick_params(axis='x', labelrotation=60)
_ = plt.savefig('images/analysis/words_rank_btw.png', bbox_inches='tight')
_ = plt.show()
```
## Ranking of verbs
```
# Define verbs dictionary
verbs2i = {w: w2i[w] for w in w2i.keys() if w[1] == 'V'}
```
### Ranking by degree
```
# Define subset (first n)
best = 20
# Make plot
fig, axs = plt.subplots(1, 2, figsize=(15, 5))
# Plot each network
for i, y in enumerate(years):
degree = get_degree( network[y].subgraph(list(set(network[y].nodes()) & set(verbs2i.values()))) ).sort_values(ascending=False)
_ = axs[i].set_title('Best verbs in {:d}\'s network'.format(y))
_ = axs[i].bar(degree.index[:best].map(lambda x: str(i2w[x])), degree.values[:best], color=colors[i])
_ = axs[i].tick_params(axis='x', labelrotation=60)
# Show plot
_ = plt.savefig('images/analysis/verbs_rank_degree.png', bbox_inches='tight')
_ = plt.show()
```
### Ranking by betweenness
```
# Compute betweenness centrality measure for nodes (on giant components)
"""betweenness_verbs = {}
for y in years:
# Define giant component subgraph
giant_component = connected_components[y][0]['component']
subgraph = nx.induced_subgraph(network[y], giant_component)
# Compute betweenness
betweenness_verbs[y] = nx.betweenness_centrality(subgraph.subgraph(list(set(network[y].nodes()) & set(verbs2i.values())))
,weight='weight')
# Save betweenness as numpy array
np.save('data/betweenness_verbs.npy', betweenness_verbs)"""
# Load betweenness
betweenness_verbs = np.load('data/betweenness_verbs.npy', allow_pickle=True).item()
# Define subset (first n)
best = 20
# Make plot
fig, axs = plt.subplots(1, 2, figsize=(15, 5))
for i, y in enumerate(years):
btw = pd.Series(betweenness_verbs[y]).sort_values(ascending=False)
_ = axs[i].set_title('Best verbs in {:d}\'s network'.format(y))
_ = axs[i].bar(btw.index[:best].map(lambda x: str(i2w[x])), btw.values[:best], color=colors[i])
_ = axs[i].tick_params(axis='x', labelrotation=60)
_ = plt.savefig('images/analysis/verbs_rank_btw.png', bbox_inches='tight')
_ = plt.show()
```
## Analysis of ranking changes
### All words
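The change rate used in this section for a node $v$ present in both giant components is

$$ \text{rate}(v) = \frac{x_{2017}(v) - x_{2018}(v)}{x_{2017}(v) + x_{2018}(v)} \in [-1, 1], $$

where $x$ is either the betweenness or the degree: $+1$ means the score is non-zero only in 2017, $-1$ only in 2018, and $0$ means the two years are equal.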
```
# Define subset (first n)
best = 100
nodes = set(network[2017].subgraph(connected_components[2017][0]['component']).nodes) & set(network[2018].subgraph(
connected_components[2018][0]['component']).nodes)
# Define percentage of change for btw
rate_btw = { node : (betweenness[2017][node] - betweenness[2018][node]) / (betweenness[2017][node] + betweenness[2018][node])
for node in nodes if betweenness[2017][node] + betweenness[2018][node] != 0 }
# Define percentage of change for degree
degree17 = get_degree(network[2017]).sort_values(ascending=False)
degree18 = get_degree(network[2018]).sort_values(ascending=False)
rate_degree = { node : (degree17[node] - degree18[node]) / (degree17[node] + degree18[node]) for node in nodes }
# Make plot
fig, axs = plt.subplots(2, 1, figsize=(15, 10))
rate_btw = pd.Series(rate_btw).sort_values(ascending=False)
_ = axs[0].set_title('Words with highest percentage of change in betweenness')
_ = axs[0].bar(rate_btw.index[:best].map(lambda x: str(i2w[x])), rate_btw.values[:best])
_ = axs[0].tick_params(axis='x', labelrotation=90)
rate_degree = pd.Series(rate_degree).sort_values(ascending=False)
_ = axs[1].set_title('Words with highest percentage of change in degree')
_ = axs[1].bar(rate_degree.index[:best].map(lambda x: str(i2w[x])), rate_degree.values[:best])
_ = axs[1].tick_params(axis='x', labelrotation=90)
_ = plt.tight_layout()
_ = plt.show()
btw1 = sum(rate_btw == 1)/len(rate_btw)
btw2 = sum(rate_btw == -1)/len(rate_btw)
btw3 = sum(rate_btw == 0)/len(rate_btw)
deg1 = sum(rate_degree == 1)/len(rate_degree)
deg2 = sum(rate_degree == -1)/len(rate_degree)
deg3 = sum(rate_degree == 0)/len(rate_degree)
# Show node differences
fig, ax = plt.subplots(1,2,figsize=(15, 5), sharey=True)
_ = ax[0].set_title('Significant values for the betweenness change rate')
_ = ax[0].bar(['% words with rate = 1'], [btw1])
_ = ax[0].bar(['% words with rate = -1'], [btw2])
_ = ax[0].bar(['% words with rate = 0'], [btw3])
_ = ax[1].set_title('Significant values for the degree change rate')
_ = ax[1].bar(['% words with rate = 1'], [deg1])
_ = ax[1].bar(['% words with rate = -1'], [deg2])
_ = ax[1].bar(['% words with rate = 0'], [deg3])
_ = plt.tight_layout()
_ = plt.savefig('images/analysis/words_change_rank.png')
_ = plt.show()
```
### Verbs
```
# Define subset (first n)
best = 100
nodes_verbs = set(network[2017].subgraph( set(connected_components[2017][0]['component']) & set(verbs2i.values()) ).nodes) & set(network[2018].subgraph( set(connected_components[2018][0]['component']) & set(verbs2i.values()) ).nodes)
# Define percentage of change for btw
rate_btw_verbs = { node : (betweenness_verbs[2017][node] - betweenness_verbs[2018][node]) / (betweenness_verbs[2017][node] + betweenness_verbs[2018][node])
for node in nodes_verbs if betweenness_verbs[2017][node] + betweenness_verbs[2018][node] != 0 }
# Define percentage of change for degree
degree17 = get_degree(network[2017]).sort_values(ascending=False)
degree18 = get_degree(network[2018]).sort_values(ascending=False)
rate_degree_verbs = { node : (degree17[node] - degree18[node]) / (degree17[node] + degree18[node]) for node in nodes_verbs }
# Make plot
fig, axs = plt.subplots(2, 1, figsize=(15, 10))
rate_btw_verbs = pd.Series(rate_btw_verbs).sort_values(ascending=False)
_ = axs[0].set_title('Verbs with highest percentage of change in betweenness')
_ = axs[0].bar(rate_btw_verbs.index[:best].map(lambda x: str(i2w[x])), rate_btw_verbs.values[:best])
_ = axs[0].tick_params(axis='x', labelrotation=90)
rate_degree_verbs = pd.Series(rate_degree_verbs).sort_values(ascending=False)
_ = axs[1].set_title('Verbs with highest percentage of change in degree')
_ = axs[1].bar(rate_degree_verbs.index[:best].map(lambda x: str(i2w[x])), rate_degree_verbs.values[:best])
_ = axs[1].tick_params(axis='x', labelrotation=90)
_ = plt.tight_layout()
_ = plt.show()
btw1 = sum(rate_btw_verbs == 1)/len(rate_btw_verbs)
btw2 = sum(rate_btw_verbs == -1)/len(rate_btw_verbs)
btw3 = sum(rate_btw_verbs == 0)/len(rate_btw_verbs)
deg1 = sum(rate_degree_verbs == 1)/len(rate_degree_verbs)
deg2 = sum(rate_degree_verbs == -1)/len(rate_degree_verbs)
deg3 = sum(rate_degree_verbs == 0)/len(rate_degree_verbs)
# Show node differences
fig, ax = plt.subplots(1,2,figsize=(15, 5), sharey=True)
_ = ax[0].set_title('Significant values for the betweenness change rate')
_ = ax[0].bar(['% verbs with rate = 1'], [btw1])
_ = ax[0].bar(['% verbs with rate = -1'], [btw2])
_ = ax[0].bar(['% verbs with rate = 0'], [btw3])
_ = ax[1].set_title('Significant values for the degree change rate')
_ = ax[1].bar(['% verbs with rate = 1'], [deg1])
_ = ax[1].bar(['% verbs with rate = -1'], [deg2])
_ = ax[1].bar(['% verbs with rate = 0'], [deg3])
_ = plt.tight_layout()
_ = plt.savefig('images/analysis/verbs_change_rank.png')
_ = plt.show()
```
## Selected words
```
sel_words = [('young', 'A'), ('harassment', 'N'), # big words that change size
('empower','V'), ('initiative', 'N'), ('discuss','V'), ('education', 'N'), ('dream','N'), ('dignity','N'), #positive 1
('include','V'), ('safe','A'), ('prevent', 'V'), ('security','N'), #positive 2
('work','V'), ('assault','N'), ('flee','V'), ('abuse','N')] #specific
mask = []
for w in sel_words:
if not w2i[w] in nodes:
print(' "{}" word not in both networks'.format(w))
print()
else:
#print(' "{}" word degree change rate: {}'.format(w, rate_degree[w2i[w]]))
print(' "{}" word btw change rate: {}'.format(w, rate_btw[w2i[w]]))
print()
```
## Difference between sets of nodes
```
x17 = len(set(network[2017].nodes) - set(network[2018].nodes))/len(set(network[2017].nodes)) * 100
print('Percentage of words in 2017 but not in 2018: {:d} %'.format(int(x17)))
x18 = len(set(network[2018].nodes) - set(network[2017].nodes)) / len(set(network[2018])) * 100
print('Percentage of words in 2018 but not in 2017: {:d} %'.format(int(x18)))
# Show node differences
fig, ax = plt.subplots(figsize=(7.5, 5))
_ = ax.set_title('Difference between sets of nodes')
_ = ax.bar(['2017 without 2018'], [x17])
_ = ax.bar(['2018 without 2017'], [x18])
_ = plt.savefig('images/analysis/node_sets_difference.png')
_ = plt.show()
```
# Assortativity
### Degree assortativity
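The coefficient reported below is essentially the Pearson correlation of the degrees at the two endpoints of every edge (here computed with the `counts` edge weight):

$$ r = \frac{\operatorname{cov}(k_u, k_v)}{\sigma_{k_u}\,\sigma_{k_v}}, \qquad (u, v) \in E, \qquad r \in [-1, 1]; $$

positive $r$ means high-degree words tend to co-occur with other high-degree words, negative $r$ means hubs attach preferentially to low-degree words.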
```
print('Assortativity coefficient 2017:',nx.degree_assortativity_coefficient( network[2017], weight = 'counts' ))
print('Assortativity coefficient 2018:',nx.degree_assortativity_coefficient( network[2018], weight = 'counts' ))
```
### Node assortativity by attribute
```
print('Assortativity coefficient 2017:', nx.degree_assortativity_coefficient(network[2017].subgraph(
list(set(network[2017].nodes) & set(verbs2i.values()))), weight='counts'))
print('Assortativity coefficient 2018:', nx.degree_assortativity_coefficient(network[2018].subgraph(
list(set(network[2018].nodes) & set(verbs2i.values()))), weight='counts'))
```
```
#%run get_data.ipynb
#!pip3 install import_ipynb
#!pip3 install mpl_finance
import import_ipynb
import get_data as get
import numpy as np
import pandas as pd
import datetime as dt
import matplotlib.pyplot as plt
from mpl_finance import candlestick_ohlc
import matplotlib.dates as mdates
import os
import matplotlib as mpl
from timeit import default_timer as timer
start = timer()
def money_volume_flow(ticker):
'''Calculates the money_volume_flow and creates
a column on existing csv file and adds to it using the
inbuilt get.add_col function'''
df = pd.DataFrame()
data = get.get_data(ticker)
money_flow_volume_list = np.zeros(len(data))
for i in range(len(data)):
high = data.iloc[i]['High']
low = data.iloc[i]['Low']
close = data.iloc[i]['Close']
volume = data.iloc[i]['Volume']
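# Money Flow Multiplier (Chaikin); note the denominator is zero whenever High == Low,
# which makes that row inf/NaN.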
money_multiplier = ((close - low) - (high - close))/(high - low)
money_flow_volume = money_multiplier*volume
money_flow_volume_list[i] = money_flow_volume
#data['Money Flow Volume'] = money_flow_volume_list
df['Money Flow Volume'] = money_flow_volume_list
return df
if __name__=='__main__':
print(money_volume_flow('JNJ').head())
end = timer()
time_elapsed = end - start
print('\nTime taken to run the code: {}\n'.format(round(time_elapsed,3)))
def ADL(ticker):
'''Calculates the Accumulation Distribution Line
It also adds the Money Flow Volume and ADL columns to the csv file
by using the get.add_col function'''
df = money_volume_flow(ticker)
get.add_col(df,ticker,heading = 'Money Flow Volume')
ADL_list = [] #np.zeros(len(df),dtype = 'float64')
ADL_df = pd.DataFrame()
running_total = 0
for value in df.values.flatten():
running_total += value
ADL_list.append(running_total)
ADL_df['ADL'] = ADL_list
get.add_col(ADL_df,ticker,heading = 'ADL')
return ADL_df
if __name__=="__main__":
print(ADL('AAPL').tail(5))
def chaikin_oscillator(ticker,time_frame = (3,10)):
if os.path.exists('stock_dfs/{}.csv'.format(ticker)):
df = pd.read_csv('stock_dfs/{}.csv'.format(ticker),
parse_dates=True,index_col=0)
if not 'ADL' in df.columns:
ADL(ticker);
df = pd.read_csv('stock_dfs/{}.csv'.format(ticker),
parse_dates=True,index_col=0)
else:
ADL(ticker);
df = pd.read_csv('stock_dfs/{}.csv'.format(ticker),
parse_dates=True,index_col=0)
data = df['ADL']
low = min(time_frame)
high = max(time_frame)
ema_low = get.exponential_moving_average(data,window = low)
ema_high = get.exponential_moving_average(data,window = high)
chi_osc = ema_low - ema_high
get.add_col(chi_osc,ticker,heading = 'Chaikin Oscillator')
return chi_osc
if __name__=="__main__":
print(chaikin_oscillator('MSFT').tail(10))
def count(ticker):
'''Counts the number of positive and negative money flow volume'''
if os.path.exists('stock_dfs/{}.csv'.format(ticker)):
df = pd.read_csv('stock_dfs/{}.csv'.format(ticker),parse_dates=True,index_col=0)
if not 'Money Flow Volume' in df.columns:
data = money_volume_flow(ticker)['Money Flow Volume']
else:
data = df['Money Flow Volume']
else:
data = money_volume_flow(ticker)['Money Flow Volume']
#df = pd.read_csv('stock_dfs/{}.csv'.format(ticker),parse_dates=True,index_col=0)
count = {'Up':0 , 'Down' : 0}
#print(data)
count['Up'] = len(data[data>=0])
count['Down'] = len(data) - count['Up']
return count
if __name__=='__main__':
print(count('FB'))
'''img_source1 = mpl.image.imread('chiOsc_plots/GOOG_chiOsc_plot.png')
fig = plt.figure(figsize=(13,10),facecolor= '#07000d')
ax1 = plt.subplot2grid((6,2),(0,0),rowspan = 3, colspan = 1,facecolor='#07000d')
ax2 = plt.subplot2grid((6,2),(0,1),rowspan = 3, colspan = 1, sharex = ax1,facecolor='#07000d')
#ax1 = plt.subplot(121)
#ax2 = plt.subplot(122)
ax1.imshow(img_source1)
ax2.imshow(img_source1)
fig.subplots_adjust(left = 0.1,right = 0.93,top = 0.95,wspace = 0.2,hspace=0.09)
if __name__=='__main__':
plt.show()
fig.savefig('GOOG_plot.png',facecolor = fig.get_facecolor(),format='png') #, dpi=500 '''
```
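For reference, the notebook above computes the Chaikin Money Flow Multiplier, the Money Flow Volume, the Accumulation/Distribution Line (the running sum of Money Flow Volume), and the Chaikin oscillator (fast EMA minus slow EMA of the ADL, 3/10 periods by default). Below is a minimal, vectorized pandas-only sketch of the same calculation; it assumes an OHLCV DataFrame with the column names used above and approximates `get.exponential_moving_average` with `Series.ewm`, which may not match that helper's exact smoothing.

```
# Sketch only: pandas-only version of the indicators above (column names and
# EMA settings are assumptions, not taken from the get_data helpers).
import pandas as pd

def chaikin_from_ohlcv(df, span_fast=3, span_slow=10):
    mfm = ((df['Close'] - df['Low']) - (df['High'] - df['Close'])) / (df['High'] - df['Low'])
    mfv = mfm * df['Volume']                     # Money Flow Volume
    adl = mfv.cumsum()                           # Accumulation/Distribution Line
    fast = adl.ewm(span=span_fast, adjust=False).mean()
    slow = adl.ewm(span=span_slow, adjust=False).mean()
    return pd.DataFrame({'MFV': mfv, 'ADL': adl, 'Chaikin Oscillator': fast - slow})
```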
```
import os
import os.path
import glob
import cv2
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (5, 5)
def binarize(image):
return 0 * (image < 128) + 1 * (image >= 128)
def difference(path1, path2, list1, list2):
flag = 0
for i in range(len(list1)):
image1 = binarize(plt.imread(path1 + list1[i]))
image2 = binarize(plt.imread(path2 + list2[i]))
if (image1 != image2).any():
flag = 1
print(i)
if flag:
return False
else:
return True
```
### Check one pair of images
```
org_path = '/Users/vladarozova/Dropbox/New experiment/Images/tiff/Cytosoft 2 kPa/Combination B/'
res_path = '/Users/vladarozova/Dropbox/New experiment/Analysis/cell-profiler/cellmasks/'
originals = ['A1-5-WGA-mask.tif']
results = ['cellmasks_39.jpeg']
print(difference(org_path, res_path, originals, results))
N = 0
original = binarize(plt.imread(org_path + originals[N]))
result = binarize(plt.imread(res_path + results[N]))
diff = original - result
x, y = diff.nonzero()
x, y
# Plot the original
plt.imshow(original);
plt.scatter(y, x);
# Plot cell profiler output
plt.imshow(result);
plt.scatter(y, x);
```
### Compare the results against the originals
**Settings**
```
def get_org_path(s, c):
return '../../Images/tiff/Cytosoft ' + s + ' kPa/Combination ' + c + '/'
def get_res_path(s):
return '../cell-profiler/cellmasks/' + s + '/'
def compare(list1, list2):
for i in range(len(list1)):
image1 = cv2.imread(list1[i])
image2 = cv2.imread(list2[i])
if not (image1 == image2).all():
print(list1[i])
print(list2[i])
stiffness = ["0.2", "2", "16", "32", "64"]
for s in stiffness:
print("Comparing images at stiffness:", s, '...\n')
originals = glob.glob(get_org_path(s, "B") + '*WGA-mask.tif')
originals.extend(glob.glob(get_org_path(s, "C") + '*WGA-mask.tif'))
results = glob.glob(get_res_path(s) + '*WGA-mask.tiff')
assert len(originals)==len(results)
originals.sort()
results.sort()
compare(originals, results)
originals
results
for filename in results:
img = cv2.imread(filename)
# img = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)[1]
image1 = cv2.imread(originals[0])
image2 = cv2.imread(results[0])
assert (image1 == image2).all()
```
**Compare images**
#### Load Cell Profiler results
```
stiffness = ["0.2", "0.5", "16", "2", "32", "64", "8"]
n0 = 0
for s in stiffness:
originals = []
org_path = '/Users/vladarozova/Dropbox/New experiment/Images/tiff/Cytosoft ' + s + ' kPa/Combination B/'
# List of original masks
os.chdir(org_path)
originals = glob.glob('*WGA-mask.tif')
# Sort the list
originals.sort()
# Number of images
n = len(originals)
if difference(org_path, res_path, originals, results[n0 : n0 + n]):
print("Stiffness {} kPa. All the images are pairwise equal.".format(s))
else:
print("Check the masks!")
n0 = n0 + n
# Path to cell profiler results
res_path = '/Users/vladarozova/Dropbox/New experiment/Analysis/cell-profiler/cellmasks/'
# List of Cell Profiler results
os.chdir(res_path)
results = glob.glob('*.jpeg')
# Sort the list
results.sort()
```
#### Compare
### Compare cell profiler outputs: primary vs secondary object
```
# Path to cell profiler results
res_path = '/Users/vladarozova/Dropbox/New experiment/Analysis/cell-profiler/cellmasks/'
# Lists of Cell Profiler results
os.chdir(res_path)
list1 = glob.glob('cellmasks1*')
list2 = glob.glob('cellmasks2*')
# Sort the lists
list1.sort()
list2.sort()
print(difference(res_path, res_path, list1, list2))
```
### Check the pairs that are not equal
```
org_path = res_path
org_list = list1
res_list = list2
N = 64
print(org_list[N])
print(res_list[N])
original = binarize(plt.imread(org_path + org_list[N]))
result = binarize(plt.imread(res_path + res_list[N]))
diff = original - result
x, y = diff.nonzero()
x, y
# Plot the original
plt.imshow(original);
plt.scatter(y, x);
# Plot cell profiler output
plt.imshow(result);
plt.scatter(y, x);
print(diff.sum())
plt.imshow(diff);
plt.scatter(y, x);
original[x, y]
result[x, y]
x_lower = x.min() - 5
x_upper = x.max() + 5
y_lower = y.min() - 5
y_upper = y.max() + 5
plt.imshow(original[x_lower : x_upper, y_lower : y_upper]);
plt.imshow(result[x_lower : x_upper, y_lower : y_upper]);
plt.imshow(diff[x_lower : x_upper, y_lower : y_upper]);
```
### Check an image
```
path = '/Users/vladarozova/Dropbox/New experiment/Images/tiff/Cytosoft 64 kPa/Combination B/'
name = 'A1-2-WGA-mask.tif'
image = plt.imread(path + name)
plt.imshow(image);
plt.scatter(691, 427);
```
```
!pip install de_sim
```
<!-- :Author: Arthur Goldberg <Arthur.Goldberg@mssm.edu> -->
<!-- :Date: 2020-07-13 -->
<!-- :Copyright: 2020, Karr Lab -->
<!-- :License: MIT -->
# A stochastic epidemic model
<font size="4">
Epidemics occur when infectious diseases spread through a susceptible population.
Models that classify individuals by their infectious state are used to study the dynamics of epidemics.
The simplest approach considers three infectious states:
<br>
* *Susceptible*: a person who can become infected if exposed
* *Infectious*: a person who is infected, and can transmit the infection to a susceptible person
* *Recovered*: a person who has recovered from an infection, and cannot be reinfected
Dynamic analyses of epidemics are called Susceptible, Infectious, or Recovered (SIR) models.
SIR models are described by the initial population of people in each state and the rates at which they transition between states.

*SIR model states and transitions*
S and I represent the number of individuals in states Susceptible and Infectious, respectively. β and γ are model parameters.
We present a stochastic SIR model that demonstrates the core features of DE-Sim.
The SIR model uses DE-Sim to implement a continuous-time Markov chain model, as described in section 3 of Allen (2017).
Let's implement and use the SIR model.

First, define the event messages
</font>
```
"DE-Sim implementation of an SIR epidemic model"
import enum
import numpy
import de_sim
class StateTransitionType(enum.Enum):
" State transition types "
s_to_i = 'Transition from Susceptible to Infectious'
i_to_r = 'Transition from Infectious to Recovered'
class TransitionMessage(de_sim.EventMessage):
"Message for all model transitions"
transition_type: StateTransitionType
MESSAGE_TYPES = [TransitionMessage]
```

<font size="4">
Next, define a simulation object. It has these attributes:
<br>
* s (int): number of susceptible subjects
* i (int): number of infectious subjects
* N (int): total number of subjects (a constant)
* beta (float): SIR beta parameter
* gamma (float): SIR gamma parameter
* random_state (numpy.random.RandomState): a random state
</font>
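<font size="4">
The `schedule_next_event` method below performs one step of the standard Gillespie (direct) simulation of this two-transition Markov chain. Written out, with $s$ and $i$ the current counts, this simply restates the rates computed in the code:
</font>

$$
\begin{aligned}
\text{rate}(S \to I) &= \beta\, s\, i / N, \qquad \text{rate}(I \to R) = \gamma\, i, \\
\lambda &= \beta\, s\, i / N + \gamma\, i, \qquad \tau \sim \text{Exponential}(\lambda), \\
P(\text{next transition is } S \to I) &= \frac{\beta\, s\, i / N}{\lambda}.
\end{aligned}
$$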
```
class SIR(de_sim.SimulationObject):
    """Implement a Susceptible, Infectious, or Recovered (SIR)
    epidemic model"""

    def __init__(self, name, s, i, N, beta, gamma):
        " Initialize an SIR instance "
        self.s = s
        self.i = i
        self.N = N
        self.beta = beta
        self.gamma = gamma
        self.random_state = numpy.random.RandomState()
        super().__init__(name)

    def init_before_run(self):
        " Send the initial events "
        self.schedule_next_event()

    def schedule_next_event(self):
        " Schedule the next SIR event "
        rates = {'s_to_i': self.beta * self.s * self.i / self.N,
                 'i_to_r': self.gamma * self.i}
        lambda_val = rates['s_to_i'] + rates['i_to_r']
        if lambda_val == 0:
            # no transitions remain
            return
        tau = self.random_state.exponential(1.0/lambda_val)
        prob_s_to_i = rates['s_to_i'] / lambda_val
        if self.random_state.random_sample() < prob_s_to_i:
            self.send_event(tau, self, TransitionMessage(StateTransitionType.s_to_i))
        else:
            self.send_event(tau, self, TransitionMessage(StateTransitionType.i_to_r))

    def handle_state_transition(self, event):
        " Handle an infectious state transition event "
        transition_type = event.message.transition_type
        if transition_type is StateTransitionType.s_to_i:
            self.s -= 1
            self.i += 1
        elif transition_type is StateTransitionType.i_to_r:
            self.i -= 1
        self.schedule_next_event()

    event_handlers = [(TransitionMessage, 'handle_state_transition')]

    # register the message types sent
    messages_sent = MESSAGE_TYPES


from de_sim.checkpoint import AccessCheckpoints, Checkpoint
from de_sim.simulation_checkpoint_object import (AccessStateObjectInterface,
                                                 CheckpointSimulationObject)

class AccessSIRObjectState(AccessStateObjectInterface):
    """ Get the state of the simulation

    Attributes:
        sir (`obj`): an SIR object
        random_state (`numpy.random.RandomState`): a random state
    """

    def __init__(self, sir):
        self.sir = sir
        self.random_state = sir.random_state

    def get_checkpoint_state(self, time):
        """ Get the simulation's state

        Args:
            time (`float`): current time; ignored
        """
        return dict(s=self.sir.s,
                    i=self.sir.i)

    def get_random_state(self):
        " Get the simulation's random state "
        return self.random_state.get_state()
```

<font size="4">
The next cell defines code to run the SIR model and visualize its predictions.
</font>
```
import pandas

class RunSIR(object):

    def __init__(self, checkpoint_dir):
        self.checkpoint_dir = checkpoint_dir

    def simulate(self, recording_period, max_time, **sir_args):
        """ Create and run an SIR simulation

        Args:
            recording_period (`float`): interval between state checkpoints
            max_time (`float`): simulation end time
            sir_args (`dict`): arguments for an SIR object
        """
        # create a simulator
        simulator = de_sim.Simulator()

        # create an SIR instance
        self.sir = sir = SIR(**sir_args)
        simulator.add_object(sir)

        # create a checkpoint simulation object
        access_state_object = AccessSIRObjectState(sir)
        checkpointing_obj = CheckpointSimulationObject('checkpointing_obj',
                                                       recording_period,
                                                       self.checkpoint_dir,
                                                       access_state_object)
        simulator.add_object(checkpointing_obj)

        # initialize simulation, which sends the SIR instance an initial event message
        simulator.initialize()

        # run the simulation
        event_num = simulator.simulate(max_time).num_events

    def last_checkpoint(self):
        """ Get the last checkpoint of the last simulation run

        Returns:
            `Checkpoint`: the last checkpoint of the last simulation run
        """
        access_checkpoints = AccessCheckpoints(self.checkpoint_dir)
        last_checkpoint_time = access_checkpoints.list_checkpoints()[-1]
        return access_checkpoints.get_checkpoint(time=last_checkpoint_time)

    def history_to_dataframe(self):
        fields = ('s', 'i', 'r')
        hist = []
        index = []
        access_checkpoints = AccessCheckpoints(self.checkpoint_dir)
        for checkpoint_time in access_checkpoints.list_checkpoints():
            state = access_checkpoints.get_checkpoint(time=checkpoint_time).state
            state_as_list = [state['s'], state['i'], self.sir.N - state['s'] - state['i']]
            hist.append(dict(zip(fields, state_as_list)))
            index.append(checkpoint_time)
        # index the history by checkpoint time so plots show simulation time on the x-axis
        return pandas.DataFrame(hist, index=index)
```

<font size="4">
Use the model to view an epidemic's predictions.
We use parameters from Allen (2017), and print and plot the trajectory of a single simulation.
Since the model is stochastic, each run produces a different trajectory.
</font>
```
import tempfile

sir_args = dict(name='sir',
                s=98,
                i=2,
                N=100,
                beta=0.3,
                gamma=0.15)
with tempfile.TemporaryDirectory() as tmpdirname:
    run_sir = RunSIR(tmpdirname)
    run_sir.simulate(10, 100, **sir_args)

    # print and plot an epidemic's predicted trajectory
    sir_data_frame = run_sir.history_to_dataframe()
    axes = sir_data_frame.plot()
    axes.set_xlabel("Time")
    rv = axes.set_ylabel("Population")
```

<font size="4">
An important prediction generated by the SIR model is the severity of the epidemic, which can be summarized by the fraction of people who became infected.
We run an ensemble of simulations and examine the predicted distribution of severity.
</font>
```
import math

num_sims = 100
infection_rates = []
for _ in range(num_sims):
    with tempfile.TemporaryDirectory() as tmpdirname:
        run_sirs = RunSIR(tmpdirname)
        run_sirs.simulate(recording_period=10, max_time=60, **sir_args)

        # infection rate = infectious + recovered
        # N = s + i + r => i + r = N - s
        final_state = run_sirs.last_checkpoint().state
        N = sir_args['N']
        infection_rate = (N - final_state['s'])/N
        infection_rates.append(infection_rate)

import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(111)
rv = plt.hist(infection_rates)
ax.set_title('Infection rate distribution')
ax.set_xlabel('Infection rate')
rv = ax.set_ylabel('Frequency')
```
<font size="4">
As predicted by Allen's (2017) analysis, for the parameters in `sir_args` the infection rate distribution is bimodal. Most of the epidemics infect a majority of the population and a small fraction of them (Allen predicts 25%) burn out and infect only a minority of the population.
</font>

<font size="4">
The simple model above only touches the surface of epidemic modeling. Many extensions are possible:
* A spatial model with multiple geographic areas: each area would be represented by an instance of SIR (a minimal sketch follows below).
* An extension of the spatial model that also represents travel between geographic areas
* A model that represents individuals in more states, such as multiple infectious states which distinguish between asymptomatic and symptomatic individuals, with a lower transmission parameter β for symptomatic individuals who would likely isolate while recovering
* A model that can model both small and large populations: it would use the stochastic approach above to integrate small populations and ODEs to integrate large populations. Models and simulators that use multiple integration methods are called *multi-algorithmic*.
We encourage you to experiment with different parameters for this model and build your own models!
</font>
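<font size="4">
As an illustration of the first extension above, here is a minimal, hypothetical sketch (not part of the original tutorial): it adds two independent SIR areas to one simulator. The area names and population sizes are invented, and travel between areas would require additional message types that are not shown.
</font>
```
# Hypothetical multi-area sketch: each geographic area is an independent SIR instance.
# Assumes the SIR class defined above; no travel (coupling) between areas is modeled yet.
simulator = de_sim.Simulator()
areas = dict(city=dict(s=980, i=20, N=1000),
             suburb=dict(s=495, i=5, N=500))
for area_name, pops in areas.items():
    simulator.add_object(SIR(name=area_name, beta=0.3, gamma=0.15, **pops))
simulator.initialize()
print(simulator.simulate(60).num_events)   # total number of events across both areas
```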
**References**
Allen, L.J., 2017. A primer on stochastic epidemic models: Formulation, numerical
simulation, and analysis. Infectious Disease Modelling, 2(2), pp.128-142.
# ARC Tools
### Visualize 2D torsion scan
#### input parameters:
```
path = 'path/to/directed/scan/file.yml'
label = 'ethanol'
cmap = 'Blues'
cmap = 'RdBu_r'
resolution = 80
from arc.plotter import plot_2d_rotor_scan
from arc.common import read_yaml_file
from arc.plotter import draw_structure
from arc.species.converter import str_to_xyz
import numpy as np
%matplotlib notebook
content = read_yaml_file(path)
plot_2d_rotor_scan(content, path='/home/alongd/Code/ARC/Projects/directed_rotors/',
label=label, cmap=cmap, resolution=resolution)
```
Optional arguments for cmap:
Accent, Accent_r, Blues, Blues_r, BrBG, BrBG_r, BuGn, BuGn_r, BuPu, BuPu_r, CMRmap, CMRmap_r, Dark2, Dark2_r,
GnBu, GnBu_r, Greens, Greens_r, Greys, Greys_r, OrRd, OrRd_r, Oranges, Oranges_r, PRGn, PRGn_r, Paired,
Paired_r, Pastel1, Pastel1_r, Pastel2, Pastel2_r, PiYG, PiYG_r, PuBu, PuBuGn, PuBuGn_r, PuBu_r, PuOr, PuOr_r,
PuRd, PuRd_r, Purples, Purples_r, RdBu, RdBu_r, RdGy, RdGy_r, RdPu, RdPu_r, RdYlBu, RdYlBu_r, RdYlGn, RdYlGn_r,
Reds, Reds_r, Set1, Set1_r, Set2, Set2_r, Set3, Set3_r, Spectral, Spectral_r, Wistia, Wistia_r, YlGn, YlGnBu,
YlGnBu_r, YlGn_r, YlOrBr, YlOrBr_r, YlOrRd, YlOrRd_r, afmhot, afmhot_r, autumn, autumn_r, binary, binary_r,
bone, bone_r, brg, brg_r, bwr, bwr_r, cividis, cividis_r, cool, cool_r, coolwarm, coolwarm_r, copper, copper_r,
cubehelix, cubehelix_r, flag, flag_r, gist_earth, gist_earth_r, gist_gray, gist_gray_r, gist_heat, gist_heat_r,
gist_ncar, gist_ncar_r, gist_rainbow, gist_rainbow_r, gist_stern, gist_stern_r, gist_yarg, gist_yarg_r, gnuplot,
gnuplot2, gnuplot2_r, gnuplot_r, gray, gray_r, hot, hot_r, hsv, hsv_r, inferno, inferno_r, jet, jet_r, magma,
magma_r, nipy_spectral, nipy_spectral_r, ocean, ocean_r, pink, pink_r, plasma, plasma_r, prism, prism_r,
rainbow, rainbow_r, seismic, seismic_r, spring, spring_r, summer, summer_r, tab10, tab10_r, tab20, tab20_r,
tab20b, tab20b_r, tab20c, tab20c_r, terrain, terrain_r, viridis, viridis_r, winter, winter_r
#### Select dihedral combination to view the respective conformer:
```
phi0 = 115.01
phi1 = -115.01
def find_nearest(array, value):
    array = np.asarray(array)
    idx = (np.abs(array - value)).argmin()
    return array[idx]
phis0 = np.array(sorted(list(set([float(key[0]) for key in content['directed_scan'].keys()]))), np.float64)
phis1 = np.array(sorted(list(set([float(key[1]) for key in content['directed_scan'].keys()]))), np.float64)
phi0 = find_nearest(phis0, phi0)
phi1 = find_nearest(phis1, phi1)
print(f'Showing the respective conformer for phi0 = {phi0}, phi1 = {phi1}')
xyz = str_to_xyz(content['directed_scan'][tuple(['{:.2f}'.format(phi0), '{:.2f}'.format(phi1)])]['xyz'])
draw_structure(xyz)
```
<img style="float: center; width: 100%" src="https://raw.githubusercontent.com/andrejkk/TalksImgs/master/FrontSlideUpperBan.png">
<p style="margin-bottom:2cm;"></p>
<center>
<H1> 12. Design and implementation of experiments </H1>
<br>
<H3> Andrej Košir, Lucami, FE </H3>
<H4> Contact: prof. dr. Andrej Košir, andrej.kosir@lucami.fe.uni-lj.si, skype=akosir_sid </H4>
</center>
<p style="margin-bottom:2cm;"></p>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
<div style="display:flex;font-weight:bold;font-size:0.9em;">
<div style="flex:1;width:50%;"> 09. Design and implementation of experiments </div>
<div style="flex:1;width:50%;text-align:right;"> </div>
</div>
### Sections and learning outcomes
#### Goal:
To understand the significance of user experience and to learn the main challenges of experimental design, implementation and analysis with users
#### Learning outcomes
Understand basics of experimental design.
Understand the process of experimental design.
Understand statistical hypothesis testing with user experiments.
<p style="margin-bottom:2cm;"></p>
<div style="width:100%;text-align:right;font-weight:bold;font-size:1.2em;"> 1 </div>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
<div style="display:flex;font-weight:bold;font-size:0.9em;">
<div style="flex:1;width:50%;"> 09. Design and implementation of experiments </div>
<div style="flex:1;width:50%;text-align:right;"> 09. 1. Introduction </div>
</div>
## Content
09.1. Introduction
■ The problem and the relevance of user testing $\large{*}$
■ What is experimental design $\large{*}$
■ Experimental and non-experimental designs $\large{*}$
■ Relevant aspects of experimental design with users
09.2. Statistical experimental design
■ Basic scheme $\large{*}$
■ ANOVA design
09.3. Success metrics
09.3.1. Development of success metric
■ Design requirements of success metric $\large{*}$
■ Creating an initial version of the questionnaire
■ Factor analysis and selection of questions
■ Psychometric characteristics and success metrics $\large{*}$
09.3.2. Psychometric characteristics
■ Questionnaires and psychometric characteristics
■ Validity $\large{*}$
■ Reliability $\large{*}$
09.4. Design of the study / experiment
■ Introduction
■ Step 1: Defining the objectives of the experiment $\large{*}$
■ Step 2: Cost functions - success metrics $\large{*}$
■ Step 3: Determination of factors $\large{*}$
■ Step 4: Determining the experimental scenario $\large{*}$
■ Step 5: Determination of criteria and selection of test subjects $\large{*}$
■ Step 6: Implementation of the experiment environment $\large{*}$
■ Step 7: Analysis of results: psychometric characteristics $\large{*}$
■ Step 8: Analysis of results: hypothesis testing $\large{*}$
<p style="margin-bottom:1cm;"></p>
<div style="width:100%;text-align:right;font-weight:bold;font-size:1.2em;"> 2 </div>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
<div style="display:flex;font-weight:bold;font-size:0.9em;">
<div style="flex:1;width:50%;"> 09. Design and implementation of experiments </div>
<div style="flex:1;width:50%;text-align:right;"> 09. 1. Introduction </div>
</div>
## ■ The problem and the relevance of user testing
#### Problem: Is the communication device (Alexa versus simulator) a relevant factor in the quality of a speech interface?
Theories cannot answer this question. **The only option is user testing.**
Why:
- human behavior contains **true randomness** (not merely missing information);
- there are no good simulators of human behavior (yet);
#### Interpretation of results
Example of results 1:
<table style="width:20%">
  <tr>
    <th>Device</th>
    <th>Success metric $o$</th>
  </tr>
  <tr>
    <td>Alexa Echo</td>
    <td>4.2</td>
  </tr>
  <tr>
    <td>PC</td>
    <td>2.1</td>
  </tr>
</table>
Example of results 2:
<table style="width:20%">
<tr>
<th>Device</th>
<th>Success metric $o$</th>
</tr>
<tr>
<td>Alexa Echo</td>
<td>3.8</td>
</tr>
<tr>
<td>PC</td>
<td>3.7</td>
</tr>
</table>
##### Problem: Is the difference between $3.7$ and $3.8$ true (Echo is better than the PC) or just a coincidence?
- even if there is no real difference, we never get the same values!
- the term **true** means that the difference would be consistently preserved through several repetitions of experiments!
Solution: **statistical hypothesis testing**
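As a minimal illustration (with invented ratings, not data from the study), a two-sample t-test in Python decides whether such a difference could plausibly be a coincidence:
```
# Hypothetical illustration: are mean ratings of about 3.8 (Echo) and 3.7 (PC) truly different?
# The ratings below are simulated for the example only.
import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
echo_ratings = rng.normal(loc=3.8, scale=0.9, size=30)   # 30 simulated test users per device
pc_ratings = rng.normal(loc=3.7, scale=0.9, size=30)

t_stat, p_val = stats.ttest_ind(echo_ratings, pc_ratings)
print('t = {:.2f}, p = {:.3f}'.format(t_stat, p_val))
# A large p value means the observed difference may well be just a coincidence.
```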
<p style="margin-bottom:1cm;"></p>
<div style="width:100%;text-align:right;font-weight:bold;font-size:1.2em;"> 3 </div>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
<div style="display:flex;font-weight:bold;font-size:0.9em;">
<div style="flex:1;width:50%;"> 09. Design and implementation of experiments </div>
<div style="flex:1;width:50%;text-align:right;"> 09. 1. Introduction </div>
</div>
## ■ What is experimental design?
The elements of the experimental design are as follows:
##### 1. Dependent variable (response) $c$:
The criterion function, performance indicator, etc. that gives the main result. In our case, the success metric of the conversation.
##### 2. Controlled factor(s) $F_e$:
Independent variables that we control **in the experiment**. In our case this is the communication device, a controlled factor with levels {PC, Alexa Echo}.
##### 3. Nuisance factor(s):
These are unavoidable, undesirable influences on the experimental results. We try to neutralize them in several ways;
##### 4. Input noise $\varepsilon$:
This is the noise that represents unknown nuisance factors and true randomness: the randomness of human behavior, sensor noise, etc.
<img style="float: center; width: 50%" src="https://raw.githubusercontent.com/andrejkk/UPK_DataImgs/master/ExperimentalDesignBox.png">
#### Def.: Design of experiments (DOE) is an effective experimental design process that involves the selection of a criterion function, the design of experimental procedures, and an analysis of the obtained results, leading to valid and objective conclusions.
With this:
- Procedures also include determining the criteria for selecting test persons (participants), the required number of test persons, and so on. They also include an exact experiment flow;
- before the implementation, we create **a statistical plan** for the analysis of the data that will answer the research questions;
- the conclusions of an experiment extend only to the population that the test set represents;
<p style="margin-bottom:1cm;"></p>
<div style="width:100%;text-align:right;font-weight:bold;font-size:1.2em;"> 4 </div>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
<div style="display:flex;font-weight:bold;font-size:0.9em;">
<div style="flex:1;width:50%;"> 09. Design and implementation of experiments </div>
<div style="flex:1;width:50%;text-align:right;"> 09. 1. Introduction </div>
</div>
## ■ Experimental and non-experimental designs
#### Eksperimental design
This is an experiment (with end users), where we **compare** two or more options
- user groups
- system/device versions
- situations
- ... <br>
among them. For this, we need to control at least one of the **factors**, that is to control its value.
In this respect, it is important that the other **factors** (impacts on the result) are adequately addressed.
#### Non-Experimental desing
This is an experiment in which we observe the results **without controlling the factors** and **do not interfere** with the factors during the course of the study.
<p style="margin-bottom:2cm;"></p>
<div style="width:100%;text-align:right;font-weight:bold;font-size:1.2em;"> 5 </div>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
<div style="display:flex;font-weight:bold;font-size:0.9em;">
<div style="flex:1;width:50%;"> 09. Design and implementation of experiments </div>
<div style="flex:1;width:50%;text-align:right;"> 09. 1. Introduction </div>
</div>
## ■ Relevant aspects of experimental design with users
#### Experimental goal for the test user
What goal does the test user have in mind during the experiment?
In our example: depending on the topic of conversation, e.g. easy and quick information retrieval.
#### Test users
Which group do they represent: age group, social group, skill level, etc.
How many test users do we need? This is the subject of a priori analysis of statistical power.
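A minimal, hypothetical sketch of such an a priori power analysis, using the `statsmodels` package (not used elsewhere in these notes); the effect size, significance level and power are assumptions chosen for the example:
```
# Hypothetical a priori power analysis: how many test users per group are needed to
# detect a medium effect (Cohen's d = 0.5) at alpha = 0.05 with power 0.8?
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print('Required test users per group: about', int(round(n_per_group)))   # roughly 64
```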
#### Randomization
Randomization of interfering factors is crucial for the validity of the results. Without randomization we only have a **quasiexperiment**.
#### Test user scenarios
The test user scenario must be
- "imaginable" for the user
- clear on the instructions - uncontrolled surprises add noise to the results
#### Technical implementation of the experiment
Technically, the experiment must be at least solidly implemented; otherwise problems with the experimental setup attract the test persons' attention and distort the results.
<p style="margin-bottom:2cm;"></p>
<div style="width:100%;text-align:right;font-weight:bold;font-size:1.2em;"> 6 </div>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
<div style="display:flex;font-weight:bold;font-size:0.9em;">
<div style="flex:1;width:50%;"> 09. Design and implementation of experiments </div>
<div style="flex:1;width:50%;text-align:right;"> 09.2. Statistical experimental design </div>
</div>
## 09.2. Statistical experimental design
■ Basic scheme
■ ANOVA design
<p style="margin-bottom:2cm;"></p>
<div style="width:100%;text-align:right;font-weight:bold;font-size:1.2em;"> 7 </div>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
<div style="display:flex;font-weight:bold;font-size:0.9em;">
<div style="flex:1;width:50%;"> 09. Design and implementation of experiments </div>
<div style="flex:1;width:50%;text-align:right;"> 09.2. Statistical experimental design </div>
</div>
## ■ Basic scheme
<img style="float: center; width: 50%" src="https://raw.githubusercontent.com/andrejkk/UPK_DataImgs/master/ExperimentalDesignBox.png">
#### Statistical experimental design
The design of the data analysis leads to statistical testing of the selected hypotheses; such designs include
- ANOVA
- Latin square
- ...
#### Selected terminology
Since ANOVA is the basic scheme, the terminology below is derived from it.
##### Regarding the control of the factors
- fixed effect: the experimental factor is controlled
- random effect: the value of the experimental factor is randomized
- mixed effect model: we have factors with fixed and random effects
##### Regarding the number of factors
- one-way: the design has one controlled factor
- two-way: the design has two controlled factors
<p style="margin-bottom:1cm;"></p>
<div style="width:100%;text-align:right;font-weight:bold;font-size:1.2em;"> 8 </div>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
<div style="display:flex;font-weight:bold;font-size:0.9em;">
<div style="flex:1;width:50%;"> 09. Design and implementation of experiments </div>
<div style="flex:1;width:50%;text-align:right;"> 09.2. Statistical experimental design </div>
</div>
## ■ ANOVA design
$\def\ovr#1{{\overline{#1}}}$
$\def\o#1{{\overline{#1}}}$
$\def\SS{{\mbox{SS}}}$
$\def\Var{{\mbox{Var}}}$
#### Controlled and nuisance factors
Types:
- fixed effect: the experimental factor is controlled
- random effect: the value of the experimental factor is random
- mixed effect model: we have factors with fixed and random effects
Factors values:
- discrete values
- two or more values
#### Assumptions
The variables are normally distributed.
#### Null hypothesis and test
We cover only the variant of **single factor** with **fixed effect**.
Null hypothesis:
$$ H_{0} = [\ovr{y}_{G_1} = \cdots = \ovr{y}_{G_I}], $$
Test: F-test.
**We have**
###### Notations
- $n$ is the number of experiments performed at each value of the controlled factor $F_e$;
- $a$ is the number of values of the controlled factor, $F_e \in \{1, 2, \ldots, a \}$;
- $y_{ij}$ is the result of the $j$-th experiment, $j = 1, \ldots, n$, at the factor value $F_e = i$
- $y_{i.} = \sum_{j = 1}^n y_{ij}$ is the sum over the experiments at factor level $i$
- $\o{y}_{i.} = y_{i.}/n$ is the average at factor level $i$
- $y_{.j} = \sum_{i = 1}^a y_{ij}$ is the sum over the factor levels
- $\o{y}_{.j} = y_{.j}/a$ is the average over the factor levels
- $y_{..} = \sum_{i, j} y_{ij}$ is the total sum
- $\o{y}_{..} = y_{..}/(an)$ is the total average
##### The sum of squares
- total sum of squares
$$ \SS_{tot} = \sum_{i=1}^a \sum_{j=1}^n (y_{ij} - \o{y}_{..})^2 $$
- the sum of the squares of the factor
$$ \SS_{trt} = n\sum_{i = 1}^a (\o{y}_{i.} - \o{y_{..}})^2 $$
- Then, the total sum of squares can be divided into the sum of the factor and the error $\SS_{err}$, that is
$$ \SS_{tot} = \SS_{trt} + \SS_{err}. $$
##### Statistics F
Statistics
$$ F_0 = \frac{\SS_{trt} / (a-1)}{\SS_{err} / (a(n-1))} $$
is distributed according to $F(a-1, a(n-1))$ (equivalently $F(a-1, N-a)$ with $N = an$), from which we calculate the $p$ value.
<p style="margin-bottom:1cm;"></p>
<div style="width:100%;text-align:right;font-weight:bold;font-size:1.2em;"> 9 </div>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
```
# Define functions
def get_ANOVA_SS(data_df):
    a,n = data_df.shape
    N = a*n
    y_pp = 1.0*data_df.sum().sum()
    y_ip = 1.0*data_df.sum(axis=1)
    # SS total
    SS_T = (data_df**2).sum().sum() - (y_pp**2)/N
    # SS treatment
    SS_trt = (1.0/n)*(y_ip**2).sum() - (y_pp**2)/N
    # SS error
    SS_err = SS_T - SS_trt
    # Report
    return SS_trt, SS_err, SS_T
## ANOVA
import numpy as np
import pandas as pd
from scipy.stats import f
# Load data
post_fn = 'https://raw.githubusercontent.com/andrejkk/UPK_DataImgs/master/ANOVAtestData31.csv'
data_df = pd.read_csv(post_fn, sep=';', encoding='utf8')
# Summs of squares
SS_trt, SS_err, SS_T = get_ANOVA_SS(data_df)
print (SS_trt, SS_err, SS_T)
# F-stat
a,n = data_df.shape
N = a*n
F_0 = (SS_trt/(a-1)) / (SS_err/(N-a))
# P-value
pVal = 1-f.cdf(F_0, a-1, N-a)
print ('p value = ', pVal)
```
<div style="display:flex;font-weight:bold;font-size:0.9em;">
<div style="flex:1;width:50%;"> 09. Design and implementation of experiments </div>
<div style="flex:1;width:50%;text-align:right;"> 09.3. Success metrics </div>
</div>
## 09.3. Success metrics
09.3.1. Development of success metric
■ Design requirements of success metric
■ Creating an initial version of the questionnaire
■ Implementation of experiment and data acquisition
■ Factor analysis and selection of questions
■ Psychometric characteristics and success metrics
<br>
_Literature:_ [J. R. Lewis, M. L. Hardzinski: Investigating the psychometric properties of the Speech User
Interface Service Quality questionnaire, Int J Speech Technol, 18:479–487, 2015.](https://link.springer.com/article/10.1007/s10772-015-9289-1)
<p style="margin-bottom:2cm;"></p>
<div style="width:100%;text-align:right;font-weight:bold;font-size:1.2em;"> 10 </div>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
<div style="display:flex;font-weight:bold;font-size:0.9em;">
<div style="flex:1;width:50%;"> 09. Design and implementation of experiments </div>
<div style="flex:1;width:50%;text-align:right;"> 09.3.1. Development of success metric </div>
</div>
## ■ Design requirements of a success metric
#### Def.: A novel success metric is based on the desired characteristics, that is, the important aspects it should cover. From these aspects we form an initial questionnaire. It is essential that, as the starting point, we use EXISTING metrics that are closest to the one we are developing.
#### Terminology
- metric: the measure by which we measure the selected aspect of the phenomenon, in our case of the quality of service
- psychometric characteristics: quantitative measures of the quality of the success-metric instrument
#### Design requirements of success metrics
Includes the following steps
1. Study of related metrics:
- research with sufficiently detailed experiments
- psychometric characteristics already achieved
2. Choice of important aspects
- from existing research
- important aspects for our framework
#### Example
1. Prior studies (sections 1.3 and 1.4, pages 480–482):
- Mean Opinion Score (MOS)
- Subjective Assessment of Speech System Interfaces (SASSI)
- Speech User Interface Service Quality (SUISQ)
2. Selected aspects prior to the study (Section 2, pages 482–483)
- aspects of SUISQ
- psychometric characteristics:
  - reliability (Section 2.2.1)
  - construct validity (Section 2.2.2)
  - criterion validity (Section 2.2.3)
  - sensitivity (Section 2.2.4)
<p style="margin-bottom:2cm;"></p>
<div style="width:100%;text-align:right;font-weight:bold;font-size:1.2em;"> 11 </div>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
<div style="display:flex;font-weight:bold;font-size:0.9em;">
<div style="flex:1;width:50%;"> 09. Design and implementation of experiments </div>
<div style="flex:1;width:50%;text-align:right;"> 09.3.1. Development of success metric </div>
</div>
## ■ Creating an initial version of the questionnaire
The initial version of the questionnaire is based on
1. Selected aspects of pre-studies
- we select only groups of questions from the studies
2. Questions added according to our specific requirements
- even here, where possible, we refer to existing questionnaires
It is important that **the questions are well defined**, otherwise the psychometric analysis will eliminate them.
<p style="margin-bottom:2cm;"></p>
<div style="width:100%;text-align:right;font-weight:bold;font-size:1.2em;"> 12 </div>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
<div style="display:flex;font-weight:bold;font-size:0.9em;">
<div style="flex:1;width:50%;"> 09. Design and implementation of experiments </div>
<div style="flex:1;width:50%;text-align:right;"> 09.3.1. Development of success metric </div>
</div>
## ■ Implementation of experiment and data acquisition
With an initial version of the questionnaire, we perform an experiment with a **representative set of test users**.
The obtained data are arranged in a format suitable for factor analysis.
Depending on its nature, we execute the experiment
- in the lab
- online platforms
- Amazon Mechanical Turk https://www.mturk.com/
- Clickworker https://www.clickworker.com/
- social networks
#### Example
Execution of the experiment (Section 3, page 483).
<p style="margin-bottom:2cm;"></p>
<div style="width:100%;text-align:right;font-weight:bold;font-size:1.2em;"> 13 </div>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
<div style="display:flex;font-weight:bold;font-size:0.9em;">
<div style="flex:1;width:50%;"> 09. Design and implementation of experiments </div>
<div style="flex:1;width:50%;text-align:right;"> 09.3.1. Development of success metric </div>
</div>
## ■ Factor analysis and selection of questions
Factor analysis is a statistical procedure that
1. Defines latent factors:
- the aspects
2. Assesses the importance of the questions
- questions are grouped into aspects
- poorly contributing questions are excluded
3. Assesses the quality of the entire grouping
- if the initial list of questions was not well constructed, the factor loading matrix **does not have a clear structure** and the procedure was not successful (see the sketch below).
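A minimal, hypothetical sketch of such an exploratory factor analysis using scikit-learn (the `responses` array, the number of factors, and the 0.3 loading threshold are invented for illustration; the cited study used its own tool chain):
```
# Hypothetical sketch of an exploratory factor analysis on questionnaire answers.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.RandomState(0)
responses = rng.randint(1, 8, size=(200, 25))           # invented 7-point answers, 200 respondents
fa = FactorAnalysis(n_components=4, random_state=0)     # look for 4 latent aspects
fa.fit(responses)

loadings = fa.components_.T                             # questions x factors loading matrix
# Questions whose largest absolute loading is small contribute little and are candidates to drop.
# (With purely random answers most questions will look weak; real data would show structure.)
weak_questions = np.where(np.abs(loadings).max(axis=1) < 0.3)[0]
print('Candidate questions to drop:', weak_questions)
```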
#### Example
Factor analysis that produced the questionnaire / instrument **Speech User
Interface Service Quality (SUISQ)** (Section 4, page 484).
<p style="margin-bottom:2cm;"></p>
<div style="width:100%;text-align:right;font-weight:bold;font-size:1.2em;"> 14 </div>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
<div style="display:flex;font-weight:bold;font-size:0.9em;">
<div style="flex:1;width:50%;"> 09. Design and implementation of experiments </div>
<div style="flex:1;width:50%;text-align:right;"> 09.3.1. Development of success metric </div>
</div>
## ■ Psychometric characteristics and success metrics
#### Psychometric characteristics
See the following subsection.
##### Example
In Section 3 (page 483), the results for the psychometric characteristics of the proposed questionnaire **SUISQ-R** are given. They are
- reliability (Section 3.2.2)
- sensitivity (Section 3.2.5)
#### Calculate the success metric
The factor matrix also provides the **weights** by which the numerical values of the responses are multiplied; in this way we estimate the individual latent variables (dimensions).
The weights obtained are listed below. Multiplying the answers by these weights gives the estimates of the dimensions (latent factors) UGO, CSB, SC and V for individual respondents.
|UGO | CSB | SC | V |
|-|-|-|-|
|0.858 | 0.228 | 0.146 | -0.124 |
|0.834 | 0.205 | 0.117 | -0.088 |
|0.834 | 0.245 | 0.159 | -0.088 |
|0.831 | 0.155 | 0.078 | -0.089 |
|0.805 | 0.19 | 0.031 | -0.073 |
|0.8 | 0.18 | 0.162 | -0.025 |
|0.799 | 0.219 | 0.028 | -0.098 |
|0.794 | 0.164 | 0.297 | -0.105 |
|0.628 | 0.439 | -0.009 | -0.099 |
|0.336 | 0.758 | 0.041 | -0.099 |
|0.256 | 0.739 | 0.316 | -0.105 |
|0.127 | 0.736 | 0.079 | -0.214 |
|0.188 | 0.726 | 0.434 | -0.084 |
|0.355 | 0.711 | -0.054 | -0.041 |
|0.271 | 0.668 | 0.4 | -0.156 |
|0.29 | 0.648 | 0.482 | -0.15 |
|0.26 | 0.599 | 0.447 | -0.163 |
|0.096 | 0.139 | 0.808 | -0.054 |
|0.164 | 0.242 | 0.797 | -0.14 |
|0.127 | 0.238 | 0.658 | -0.045 |
|0.027 | -0.004 | 0.585 | 0.121 |
|-0.139 | -0.161 | -0.036 | 0.73 |
|-0.185 | 0.084 | -0.084 | 0.706 |
|0.075 | -0.199 | 0.011 | 0.701 |
|-0.223 | -0.431 | 0.029 | 0.655 |
<p style="margin-bottom:2cm;"></p>
<div style="width:100%;text-align:right;font-weight:bold;font-size:1.2em;"> 15 </div>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
```
## Estimate quality of speech interface
# User Goal Orientation = UGO: 8 items, a = .92
# Customer Service Behaviors = CSB: 8 items, a = .89
# Speech Characteristics = SC: 5 items
# Verbosity = V: 4 items, a = .69
import numpy as np
import pandas as pd
# Load data
post_fn = 'https://raw.githubusercontent.com/andrejkk/UPK_DataImgs/master/PredPost-vprasalnik(dvogovor)(Responses).csv'
data_df = pd.read_csv(post_fn, header=0, sep=';', encoding='utf8')
#qs_weights_fn = 'https://raw.githubusercontent.com/andrejkk/UPK_DataImgs/master/QualityOfConvSys_Weights.cvs'
#qs_weights_df = pd.read_csv(qs_weights_fn, header=0, sep=';', encoding='utf8')
qs_weights_df = pd.DataFrame(
[[0.858, 0.228, 0.146, -0.124],
[0.834, 0.205, 0.117, -0.088],
[0.834, 0.245, 0.159, -0.088],
[0.831, 0.155, 0.078, -0.089],
[0.805, 0.19, 0.031, -0.073],
[0.8, 0.18, 0.162, -0.025],
[0.799, 0.219, 0.028, -0.098],
[0.794, 0.164, 0.297, -0.105],
[0.628, 0.439, -0.009, -0.099],
[0.336, 0.758, 0.041, -0.099],
[0.256, 0.739, 0.316, -0.105],
[0.127, 0.736, 0.079, -0.214],
[0.188, 0.726, 0.434, -0.084],
[0.355, 0.711, -0.054, -0.041],
[0.271, 0.668, 0.4, -0.156],
[0.29, 0.648, 0.482, -0.15],
[0.26, 0.599, 0.447, -0.163],
[0.096, 0.139, 0.808, -0.054],
[0.164, 0.242, 0.797, -0.14],
[0.127, 0.238, 0.658, -0.045],
[0.027, -0.004, 0.585, 0.121],
[-0.139, -0.161, -0.036, 0.73],
[-0.185, 0.084, -0.084, 0.706],
[0.075, -0.199, 0.011, 0.701],
[-0.223, -0.431, 0.029, 0.655]])
# Selectors
post_qs_inds = list(range(11,36))
data_qs_df = data_df.iloc[:, post_qs_inds]
# Estimate latent variables
speech_quality_est = data_qs_df.dot(qs_weights_df.values)
speech_quality_est.columns = ['UGO', 'CSB', 'SC', 'V']
speech_quality_est
```
<div style="display:flex;font-weight:bold;font-size:0.9em;">
<div style="flex:1;width:50%;"> 09. Design and implementation of experiments </div>
<div style="flex:1;width:50%;text-align:right;"> 09.3.2. Psychometric characteristics </div>
</div>
## 09.3.2. Psychometric characteristics
■ Questionnaires and psychometric characteristics
■ Validity
■ Reliability
<p style="margin-bottom:2cm;"></p>
<div style="width:100%;text-align:right;font-weight:bold;font-size:1.2em;"> 16 </div>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
<div style="display:flex;font-weight:bold;font-size:0.9em;">
<div style="flex:1;width:50%;"> 09. Design and implementation of experiments </div>
<div style="flex:1;width:50%;text-align:right;"> 09.3.2. Psychometric characteristics </div>
</div>
## ■ Questionnaires and psychometric characteristics
#### Problem
Questionnaires can give **useless results** because answered questions are not related to the real situation. There are several reasons for this:
1. Concepts in questions or questions themselves **are not well defined**: the terms in the human language can simultaneously carry multiple meanings and if they are not specified well, the answers refer to different concepts, objects, etc.
2. In the question it is not clear what it refers to - no reference is given: questions may concern the quality of the service, the interface, the content, etc.;
3. Questions may be offensive or disruptive for the test person
Consequently, it is imperative to be skeptical about individual questions.
#### Solution
Psychometric characteristics detect questions that **do not work**, i.e. that do not give meaningful answers. They capture properties of the answers that must always hold (e.g. stability across multiple questions), so good psychometric characteristics are **a necessary condition for the usability of the responses**, but not a sufficient one.
<p style="margin-bottom:2cm;"></p>
<div style="width:100%;text-align:right;font-weight:bold;font-size:1.2em;"> 17 </div>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
<div style="display:flex;font-weight:bold;font-size:0.9em;">
<div style="flex:1;width:50%;"> 09. Design and implementation of experiments </div>
<div style="flex:1;width:50%;text-align:right;"> 09.3.2. Psychometric characteristics </div>
</div>
## ■ Validity
#### Def.: The instrument is valid if it really measures the variable (concept / phenomenon) for which it is intended.
#### Checking validity
Validity cannot be easily verified with statistical formulas. It is mainly evaluated by:
1. Evaluation of human experts in the field;
2. Sufficiently high correlation (e.g., $0.3$) with variables to which we expect the measured variable to be linked;
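A minimal, hypothetical sketch of the second check (the metric values and the related variable are invented; in practice both come from the experiment):
```
# Hypothetical check of criterion-related validity: correlate the metric with a related variable.
import numpy as np

rng = np.random.RandomState(1)
metric_scores = rng.normal(size=100)                                   # invented metric values
related_var = 0.5 * metric_scores + rng.normal(scale=1.0, size=100)    # invented related variable

r = np.corrcoef(metric_scores, related_var)[0, 1]
print('correlation = {:.2f}; a value above about 0.3 supports validity'.format(r))
```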
<p style="margin-bottom:2cm;"></p>
<div style="width:100%;text-align:right;font-weight:bold;font-size:1.2em;"> 18 </div>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
<div style="display:flex;font-weight:bold;font-size:0.9em;">
<div style="flex:1;width:50%;"> 09. Design and implementation of experiments </div>
<div style="flex:1;width:50%;text-align:right;"> 09.3.2. Psychometric characteristics </div>
</div>
## ■ Reliability
#### Def.: Reliability refers to the consistency of the measure, that is, whether repeated measurement of the same concept / phenomenon gives sufficiently similar results.
#### Types of reliability
1. Reliability over time - test-retest reliability: repeated testing should give similar results (_not applicable to the case of a dialogue with Alexa_)
2. Internal consistency: whether the answers to the questions are sufficiently interrelated (_crucial in the case of a dialogue with Alexa_ - see below)
3. Consistency between researchers (inter-rater reliability) (_useful in the case of a dialogue with Alexa_)
#### Internal consistency - split half method
Answers the question: **Are the answers / data stable?**
##### Determination process
$\def\s{{\sigma}}$
With this method for given data:
1. The questionnaire items are randomly divided into two halves
2. Calculate the correlation $r_{12}$ between the scores of the two halves
3. According to the Spearman-Brown formula, we calculate the reliability coefficient
- assuming equal standard deviations
$$ r_{tt} = \frac{2 r_{12}}{1 + r_{12}} $$
- for unequal standard deviations
$$ r_{tt} = \frac{4\s_1\s_2 r_{12}}{\s_1^2 + \s_2^2 + 2\s_1 \s_2 r_{12}}, $$
where $\s_1$ and $\s_2$ are the standard deviations of the two halves.
#### Internal consistency - Cronbach alpha
Answers the question: **Do all the questions / data together measure a coherent concept?**
##### How to determine it
The results (answers) of the $K$ questions $X_i$ are summed into
$$ Y = X_1 + \cdots + X_K, $$
we calculate the standard deviations $\s_{X_i}$ of the individual questions and $\s_Y$ of the sum, and compute
$$ \alpha = \frac{K}{K-1} \left(1-\frac{\sum_{i=1}^K \s_{X_i}^2}{\s_Y^2}\right) $$
<p style="margin-bottom:0.5cm;"></p>
<div style="width:100%;text-align:right;font-weight:bold;font-size:1.2em;"> 19 </div>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
```
## Measurement characteristics
import numpy as np
import pandas as pd
import random
# Load data
post_fn = 'https://raw.githubusercontent.com/andrejkk/UPK_DataImgs/master/PredPost-vprasalnik(dvogovor)(Responses).csv'
#post_fn = 'https://raw.githubusercontent.com/andrejkk/UPK_DataImgs/master/PostQs_Responses.csv'
data_df = pd.read_csv(post_fn, header=0, sep=';', encoding='utf8')
# Selectors
pre_qs_inds = list(range(5,11))
post_qs_inds = list(range(11,36))
# ==============================================================================================
## Validitiy
# We estimate is OK
# ==============================================================================================
## Reliability
# Parallel forms - inter method reliability: divide test tasks to two parts and corelate anwsers
# Split half
np.random.shuffle(post_qs_inds)
perm_inds_1, perm_inds_2 = post_qs_inds[0:12], post_qs_inds[12:24]
anws_1, anws_2 = data_df.iloc[:, perm_inds_1], data_df.iloc[:, perm_inds_2]
# Get correlation coefficient
half1_pd = anws_1.sum(axis=1)
half2_pd = anws_2.sum(axis=1)
corr = np.corrcoef(half1_pd, half2_pd)[0,1]
# ==============================================================================================
print ('====================================================================================')
print ('== For instruments')
# Spearman–Brown prediction formula
# Equal variances
req_tt = 2*corr / (1.0 + corr)
print ('Equal var, r_tt=', req_tt)
# Non-equal variances
sd1, sd2 = np.std(half1_pd), np.std(half2_pd)
rneq_tt = 4*sd1*sd2*corr / (sd1**2 + sd2**2 + 2*sd1*sd2*corr)
print ('Std 1 = ', sd1)
print ('Std 2 = ', sd2)
print ('Not equal var, r_tt=', rneq_tt)
## ==============================================================
# Internal consistency - Cronbach alpha
# Verifies the instrument as a whole
post_qs_df = data_df.iloc[:, 11:36]
_,K = post_qs_df.shape
X = post_qs_df.sum(axis=1)
ss_X = np.var(X, ddof=1)
ss_Yi = np.var(post_qs_df, ddof=1)
ss_Y = ss_Yi.sum()
Cronb_a = (K/(K-1))*(1.0 - ss_Y/ss_X)
print ('Cronbach Alpha =', Cronb_a)
# ==============================================================================================
print ('')
print ('===============================================================================')
print ('== For quality metrics')
## ==============================================================
# Internal consistency - Cronbach alpha
_,K = speech_quality_est.shape
X = speech_quality_est.sum(axis=1)
ss_X = np.var(X, ddof=1)
ss_Yi = np.var(speech_quality_est, ddof=1)
ss_Y = ss_Yi.sum()
Cronb_a = (K/(K-1))*(1.0 - ss_Y/ss_X)
print ('Cronbach Alpha =', Cronb_a)
```
<div style="display:flex;font-weight:bold;font-size:0.9em;">
<div style="flex:1;width:50%;"> 09. Design and implementation of experiments </div>
<div style="flex:1;width:50%;text-align:right;"> 09.4. Design of the study / experiment </div>
</div>
## 09.4. Design of the study / experiment
■ Introduction
■ Step 1: Defining the objectives of the experiment
■ Step 2: Cost functions - success metrics
■ Step 3: Choice of the statistical design and determination of factors
■ Step 4: Determining the experimental scenario
■ Step 5: Criteria for and selection of test subjects
■ Step 6: Implementation of the experiment environment
■ Step 7: Analysis of results: psychometric characteristics
■ Step 8: Analysis of results: hypothesis testing
<p style="margin-bottom:2cm;"></p>
<div style="width:100%;text-align:right;font-weight:bold;font-size:1.2em;"> 20 </div>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
<div style="display:flex;font-weight:bold;font-size:0.9em;">
<div style="flex:1;width:50%;"> 09. Design and implementation of experiments </div>
<div style="flex:1;width:50%;text-align:right;"> 09.4. Design of the study / experiment </div>
</div>
## ■ Introduction
Instructions for experiments and studies are derived from **the definition of western science**:
- a result is scientific if it comes from a correctly performed experiment
- results can be **interpreted**
- the experiment must be reproducible
- sufficiently detailed
- accessible test data
- the result of the repetition matches the original experiment
<p style="margin-bottom:2cm;"></p>
<div style="width:100%;text-align:right;font-weight:bold;font-size:1.2em;"> 21 </div>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
<div style="display:flex;font-weight:bold;font-size:0.9em;">
<div style="flex:1;width:50%;"> 09. Design and implementation of experiments </div>
<div style="flex:1;width:50%;text-align:right;"> 09.4. Design of the study / experiment </div>
</div>
## ■ Step 1: Defining the objectives of the experiment
Experimental and observational studies have different objectives:
#### The goal of an experimental study
The goal is to answer the **research question**. This is the question that the experiment answers.
In our case, the research question is _"Does the device (simulator or Echo) affect the perceived quality of the conversation?"_
#### The objective of an observational study
The aim of an observational study is
- building **models of relations between independent and dependent variables**
- analysing real-life interactions between independent and dependent variables
- ...
#### Case Study and Population Study
Case study
- deals with a small number of cases
- results **are not generalizable** to any population.
Population study
- has a sample of test persons that **represents a known population**, e.g.
- the elderly with cognitive impairments
- recreational athletes 14-18 years old
- ...
- has a sufficiently large sample, so that the conclusions are valid with a high degree of reliability
<p style="margin-bottom:2cm;"></p>
<div style="width:100%;text-align:right;font-weight:bold;font-size:1.2em;"> 22 </div>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
<div style="display:flex;font-weight:bold;font-size:0.9em;">
<div style="flex:1;width:50%;"> 09. Design and implementation of experiments </div>
<div style="flex:1;width:50%;text-align:right;"> 09.4. Design of the study / experiment </div>
</div>
## ■ Step 2: Cost functions - success metrics
#### Def.: A measure of performance is any metric that measures the performance of the system that we are analyzing.
The measure of performance should **measure the aspects for which it is intended**.
The performance measure (success metric) should give estimates that
- provide ordered values, which allow separation between better and worse variants / implementations
- have an interpretation
Performance measures have different sources (a minimal example follows below):
- established measures in the field
- new constructions
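To make this concrete, the sketch below computes one common success metric, the Mean Opinion Score (MOS), on hypothetical 5-point ratings; the variants and scores are invented for illustration and are not taken from the course data.
```
import pandas as pd

# Hypothetical 5-point quality ratings for two system variants (illustration only)
ratings = pd.DataFrame({'variant': ['A']*5 + ['B']*5,
                        'score':   [4, 5, 3, 4, 4, 2, 3, 3, 2, 4]})

# Success metric: Mean Opinion Score (MOS) per variant
mos = ratings.groupby('variant')['score'].mean()
print(mos.sort_values(ascending=False))   # ordered values: separates better from worse variants
```
The ordered, interpretable values (average rating on the original 1-5 scale) are exactly what the two requirements above ask for.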
<p style="margin-bottom:2cm;"></p>
<div style="width:100%;text-align:right;font-weight:bold;font-size:1.2em;"> 23 </div>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
<div style="display:flex;font-weight:bold;font-size:0.9em;">
<div style="flex:1;width:50%;"> 09. Design and implementation of experiments </div>
<div style="flex:1;width:50%;text-align:right;"> 09.4. Design of the study / experiment </div>
</div>
## ■ Step 3: Selection of the statistical design and determination of factors
#### Statistical experimental design
There are several different designs, see section 09.3
- ANOVA
- Latin square
- fractional factorial designs
#### Factors
Depending on the goal of the experiment, we determine the factors - **variables that influence the outcome of the study or experiment** as measured by the criterion function (success metrics).
Factors are determined based on knowledge of the field (a minimal factorial-plan sketch follows at the end of this step).
##### Factors of the experimental plan
We have
- controlled factors
- nuisance factors
##### Factors of the observational study
There are no controlled factors in the observational study.
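To make Step 3 concrete, here is a minimal sketch of a full-factorial plan built from two hypothetical controlled factors; the run order is randomized, which is a common way to spread the influence of nuisance factors. The factor names and levels are invented for illustration.
```
import itertools
import numpy as np
import pandas as pd

# Hypothetical controlled factors (illustration only)
factors = {'device': ['simulator', 'Echo'],
           'task':   ['short dialog', 'long dialog']}

# Full-factorial design: every combination of factor levels
design = pd.DataFrame(list(itertools.product(*factors.values())),
                      columns=list(factors.keys()))
design['run_order'] = np.random.permutation(len(design)) + 1   # randomize against nuisance factors
print(design.sort_values('run_order'))
```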
<p style="margin-bottom:2cm;"></p>
<div style="width:100%;text-align:right;font-weight:bold;font-size:1.2em;"> 24 </div>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
<div style="display:flex;font-weight:bold;font-size:0.9em;">
<div style="flex:1;width:50%;"> 09. Design and implementation of experiments </div>
<div style="flex:1;width:50%;text-align:right;"> 09.4. Design of the study / experiment </div>
</div>
## ■ Step 4: Determining the experimental scenario
#### Def.: The experimental scenario is a scenario performed by test persons in order to obtain relevant measurements and results.
#### The "imaginable goal" of the experiment
- the user's experimental goal is the **goal that test subjects have in mind during the execution of the experiment**, not the experimenter's goal!
- it gives **the reference frame** for the results of the experiment - the measurements and the answers to the questions
- directly related to
- experimental service
- experimental content
#### Guidelines for the experimental scenario
- the experimental scenario must be imaginable for the test persons:
- subjects aged 60+ and an "escape room" scenario do not go together
- ...
- the experimental scenario should be simple enough to
- be presented with instructions
- keep test persons from experiencing **unforeseen surprises** - every surprise breaks the test person's frame of reference and increases the unreliability of the responses (both answers and psychophysiological measurements)
- clear instructions and guides are important, so that they do not lead the test persons into a **surprise** during the performance
- every **option for how events can unfold** is a source of new information about the user
- e.g. the "fast-forward" option when viewing a movie reveals whether the test person decided to use it
- appropriate cognitive effort of the test subjects:
- too small: the subjects are uninterested
- too big: defence mechanisms obscure the results
<p style="margin-bottom:2cm;"></p>
<div style="width:100%;text-align:right;font-weight:bold;font-size:1.2em;"> 25 </div>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
<div style="display:flex;font-weight:bold;font-size:0.9em;">
<div style="flex:1;width:50%;"> 09. Design and implementation of experiments </div>
<div style="flex:1;width:50%;text-align:right;"> 09.4. Design of the study / experiment </div>
</div>
## ■ Step 5: Criteria for and selection of test subjects
#### Representativeness of the results
Experiment results are **representative** for a given population, if
- test persons (subjects) represent all relevant subgroups of this population
- the sample of test persons is large enough
#### Explicit criteria for selecting test subjects
This is a description and justification for the selection of test subjects:
- description of the selection criteria for the test subjects:
- demography: age, gender, ...
- skills with technology, ...
- ...
- how we accessed the test subjects:
- phone
- a random selection from an existing registry
- ...
- criteria and procedure for excluding test subjects
- during the experiment: non-response
- after the experiment: we detect that the subject did not take the task seriously (describe how)
- ...
The procedure for selecting test persons is a prerequisite for interpreting the results.
<p style="margin-bottom:2cm;"></p>
<div style="width:100%;text-align:right;font-weight:bold;font-size:1.2em;"> 26 </div>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
<div style="display:flex;font-weight:bold;font-size:0.9em;">
<div style="flex:1;width:50%;"> 09. Design and implementation of experiments </div>
<div style="flex:1;width:50%;text-align:right;"> 09.4. Design of the study / experiment </div>
</div>
## ■ Step 6: Implementation of the experiment environment
#### Implementation of the environment
- the application we are testing
- experiment management system
- data capture system: sensors, back-end, questionnaires, ...
#### Execution
- a trial series for
- eliminating interfering factors for the test persons
- testing the data capture
- experimental analysis of the results
- performance measurement
- post-interview
<p style="margin-bottom:2cm;"></p>
<div style="width:100%;text-align:right;font-weight:bold;font-size:1.2em;"> 27 </div>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
<div style="display:flex;font-weight:bold;font-size:0.9em;">
<div style="flex:1;width:50%;"> 09. Design and implementation of experiments </div>
<div style="flex:1;width:50%;text-align:right;"> 09.4. Design of the study / experiment </div>
</div>
## ■ Step 7: Analysis of results: psychometric characteristics
Acceptable psychometric characteristics are a necessary condition for the validity of the obtained results.
See section 09.3.2.
<p style="margin-bottom:2cm;"></p>
<div style="width:100%;text-align:right;font-weight:bold;font-size:1.2em;"> 28 </div>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
<div style="display:flex;font-weight:bold;font-size:0.9em;">
<div style="flex:1;width:50%;"> 09. Design and implementation of experiments </div>
<div style="flex:1;width:50%;text-align:right;"> 09.4. Design of the study / experiment </div>
</div>
## ■ Step 8: Analysis of results: model building, analysis of relations, hypothesis testing
#### Observational study: model construction and analysis of relations
The results of an observational study provide
- a **learning and / or test set** for building models using machine learning and statistical learning (a minimal sketch follows below)
- **data** for the analysis of links between phenomena - variables
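As a minimal sketch of such model building (with synthetic data invented for illustration, not observational data from the course), a simple regression is fitted on a learning set and checked on a test set:
```
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic observational data: one independent and one dependent variable (illustration only)
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))            # independent variable
y = 2.0*X[:, 0] + rng.normal(0, 1, size=200)     # dependent variable with noise

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)              # learning set -> model of the relation
print('R^2 on the test set:', model.score(X_test, y_test))   # test set -> how well the model generalizes
```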
#### Experimental Principles: Hypothesis Testing
An experimental design produces data for **hypothesis testing**
- from the **research question** we formulate a **working hypothesis**
- our example of talking to Amazon Alexa:
- research question: does the interface (Echo or simulator) influence the perception of the quality of the conversation?
- working hypothesis: the interface affects the perception of the quality of the dialog
- on the basis of a working hypothesis we form a **null hypothesis**
- our example of talking to Alexa:
- null hypothesis
$$ H_0: \overline{y}_S = \overline{y}_E, $$
where $ \overline{y}_S $ is the average of the results of the group with the simulator and the $\overline{y}_E$ is the average of the group results with the Echo device.
#### Example: One-way ANOVA with a fixed effect
<p style="margin-bottom:2cm;"></p>
<div style="width:100%;text-align:right;font-weight:bold;font-size:1.2em;"> 29 </div>
<img src="https://raw.githubusercontent.com/andrejkk/ORvTK_SlidesImgs/master/footer_full.jpg">
```
# Define functions
def get_ANOVA_SS(data_df):
a,n = data_df.shape
N = a*n
y_pp = 1.0*data_df.sum().sum()
y_ip = 1.0*data_df.sum(axis=1)
# SS total
SS_T = (data_df**2).sum().sum() - (y_pp**2)/N
# SS treatment
SS_trt = (1.0/n)*(y_ip**2).sum() - (y_pp**2)/N
# SS error
SS_err = SS_T - SS_trt
# Report
return SS_trt, SS_err, SS_T
## Analysis of results
import numpy as np
import pandas as pd
import random
import matplotlib.pyplot as plt
# Load data
post_fn = 'https://raw.githubusercontent.com/andrejkk/UPK_DataImgs/master/PredPost-vprasalnik(dvogovor)(Responses).csv'
data_df = pd.read_csv(post_fn, header=0, sep=';', encoding='utf8')
# Selectors
post_qs_inds = list(range(11,36))
anws_df = data_df.iloc[:, post_qs_inds]
# Plot answers
anws_df.plot(figsize=(20,10))
# one-way ANOVA fixed factor
speech_quality_est
## ANOVA
import pandas as pd
from scipy.stats import f
latent_fs = ['UGO', 'CSB', 'SC', 'V']
groups = {}
groups['PC'] = [0, 1, 2, 3, 4, 5, 6, 7]
groups['Echo'] = [8, 9, 10, 11, 12, 13, 14, 15]
# For all vars
for fs_n in latent_fs:
# Select groups
sq_est_PC = speech_quality_est[fs_n][groups['PC']]
sq_est_Echo = speech_quality_est[fs_n][groups['Echo']]
# Compute SS
    curr_df = pd.DataFrame([sq_est_PC.to_numpy(), sq_est_Echo.to_numpy()])  # 2 groups x 8 subjects
    SS_trt, SS_err, SS_T = get_ANOVA_SS(curr_df)
    # F-stat: degrees of freedom from the group table (a groups, n subjects per group)
    a,n = curr_df.shape
    N = a*n
F_0 = (SS_trt/(a-1)) / (SS_err/(N-a))
# P-value
p_val = 1-f.cdf(F_0, a-1, N-a)
# Report
print ('Latent var:', fs_n, 'p-val=', p_val)
## CUT ======================
```
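As a sanity check on the hand-computed sums of squares, the same one-way comparison can be run with `scipy.stats.f_oneway`, which returns the F statistic and p-value directly. This is a minimal sketch that assumes the `speech_quality_est` table and the `groups` indices defined in the cells above:
```
from scipy.stats import f_oneway

# Cross-check of the hand-computed one-way ANOVA, per latent variable
for fs_n in ['UGO', 'CSB', 'SC', 'V']:
    g_pc   = speech_quality_est[fs_n].iloc[groups['PC']]
    g_echo = speech_quality_est[fs_n].iloc[groups['Echo']]
    F_0, p_val = f_oneway(g_pc, g_echo)
    print('Latent var:', fs_n, 'F =', round(F_0, 3), 'p-val =', round(p_val, 4))
```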
```
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pylab as plt
from skimage import io
from skimage.data import rocket
import stretchablecorr as sc
# ======
# Crop
# ======
x, y = (322, 150)
plt.figure();
plt.imshow(rocket());
print(rocket().shape)
plt.plot(x, y, 'sr');
C, ij = sc.crop(rocket(), (x, y), 50)
plt.imshow(C);
print(ij)
C, ij = sc.crop(rocket(), (322.2, 150.8), 50)
print(ij)
# ============
# get shifts
# ============
I = rocket().mean(axis=2)
window_half_size = 30
A, ij = sc.crop(I, (322, 150), window_half_size)
B, ij = sc.crop(I, (323, 153), window_half_size)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12,4))
ax1.imshow(A);
ax2.imshow(B);
# true shifts = -1, -3
sc.get_shifts(A, B, window_half_size, window_half_size,
window_half_size=15,
offset=(0.0, 0.0),
coarse_search=False,
upsample_factor=100,
method='skimage')
sc.get_shifts(A, B, window_half_size, window_half_size,
window_half_size=10,
offset=(0.0, 0.0),
coarse_search=True,
upsample_factor=100,
method='skimage')
sc.get_shifts(A, B, window_half_size, window_half_size,
window_half_size=10,
coarse_search=True,
method='opti')
window_half_size = 5
dx_span, dy_span, phase_corr, res = sc.output_cross_correlation(A, B, upsamplefactor=5, phase=False)
displ, err = sc.get_shifts(A, B, window_half_size, window_half_size,
window_half_size=window_half_size,
coarse_search=False,
phase=False,
method='opti')
print(displ)
plt.figure();
plt.pcolor(dx_span, dy_span, phase_corr);
plt.plot(*displ[::1], 'rx')
plt.colorbar(); plt.axis('equal');
plt.xlim([-3, 3]);
plt.ylim([-3, 3]);
```
### Benchmark
```
%%timeit
sc.get_shifts(A, B, window_half_size, window_half_size,
window_half_size=10,
coarse_search=True,
method='opti')
# 2.1 ms ± 18.8 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# 1.88 ms ± 8.03 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%%timeit
sc.get_shifts(A, B, window_half_size, window_half_size,
window_half_size=15,
upsample_factor=10,
coarse_search=False,
method='skimage')
# upsample 100: 4.19 ms ± 226 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# upsample 50: 1.16 ms ± 25.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
# upsample10: 800 µs ± 4.12 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%%timeit
sc.get_shifts(A, B, window_half_size, window_half_size,
window_half_size=15,
coarse_search=False,
phase=False,
method='opti')
# 3.09 ms ± 35.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# 1.25 ms ± 22 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
# 766 µs ± 140 µs per loop (mean ± std. dev. of 7 runs, 1 loop each) with jit
# 1.04 ms ± 5.21 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) with jit
%%timeit
sc.get_shifts(A, B, window_half_size, window_half_size,
window_half_size=10,
upsample_factor=100,
coarse_search=False,
method='skimage')
# 2.88 ms ± 48.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
```
### Error estimation
```
I = io.imread('./images/PerlinNoise2d.png').mean(axis=2)
I = I[::3, ::3]
plt.imshow(I); plt.title('Perlin noise')
def construct_key(p):
name = f"{p['window_half_size']}px"
if p['coarse_search']:
name += ' coarse'
if p['phase']:
name += ' phase'
else:
name += ' CC'
name += ' ' + p['method']
return name
def sample_one(A, B, sigma, params):
B_prime = B + sigma*np.std(B)*np.random.randn(*B.shape)
x, y = np.array(A.T.shape)/2
u, errors = sc.get_shifts(A, B_prime, x, y, **params)
return u, errors
def sample_N(A, B, sigma, params, N, random=False):
errors = []
displ = []
for _ in range(N):
try:
if random:
sigma = 0.1 + np.random.rand()*10
u, err = sample_one(A, B, sigma, params)
errors.append(err)
displ.append(u)
except ValueError:
pass
return np.array(displ), np.array(errors)
def avg_radius(u):
d = np.sqrt(np.sum((u-u.mean(axis=0))**2, axis=1))
return d.mean()
params = {'window_half_size': 20,
'coarse_search':False,
'phase': False,
'method':'opti' }
displ, errors = sample_N(I, I, 0.2, params, 1500, random=True)
displ = np.sqrt(np.sum(displ**2, axis=1))
plt.title('FRAE (Hessian)')
plt.loglog(displ, errors[:, 1], '.')
plt.title('z-score')
plt.semilogx(displ, errors[:, 0], '.')
plt.loglog(errors[:, 1], errors[:, 2], '.')
# compute averages
stored_results = {}
params = {'window_half_size': 20,
'coarse_search':False,
'phase': False,
'method':'opti' }
sigma_span = np.logspace(-1, 1, 20)
results = {'params':params,
'avg_radius':[],
'mean_errors':[]}
for sigma in sigma_span:
print('sigma:', sigma, end='\r')
displ, errors = sample_N(I, I, sigma, params, 100)
results['avg_radius'].append( avg_radius(displ) )
results['mean_errors'].append( errors.mean(axis=0) )
results['avg_radius'] = np.array(results['avg_radius'])
results['mean_errors'] = np.vstack(results['mean_errors'])
stored_results[construct_key(results['params'])] = results
for results in stored_results.values():
# Graph
linewidth = 3
fig, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize=(9,6))
ax1.axhline(y=1, color='black', linewidth=1)
ax1.loglog(sigma_span, results['avg_radius'], '-o', label='truth (MC)', linewidth=linewidth, color='red')
ax1.set_ylabel('actual error [px]\n (MC sampling)')
ax1.loglog(sigma_span, results['mean_errors'][:, 1], linewidth=linewidth)
ax1.set_xticks([])
ax1.grid(axis='x', which='both')
ax2.semilogx(sigma_span, results['mean_errors'][:, 0], linewidth=linewidth, color='black')
ax2.set_ylabel('z-score');
ax2.set_xticks([])
ax2.grid(axis='x', which='both');
ax3.loglog(sigma_span, results['mean_errors'][:, 1], linewidth=linewidth)
ax3.loglog(sigma_span, results['mean_errors'][:, 2], linewidth=linewidth)
ax3.set_ylabel('FRAE [px]');
ax3.grid(axis='x', which='both')
ax1.set_xlim([min(sigma_span), max(sigma_span)])
ax2.set_xlim([min(sigma_span), max(sigma_span)])
ax3.set_xlim([min(sigma_span), max(sigma_span)])
plt.xlabel('sigma - noise level');
ax1.set_title(construct_key(results['params']));
```
Conclusions:
- CC is at least one order of magnitude better than phase-corr
- the z-score works when errors > 1 px, and saturates below that
- FRAE is strange...
```
for results in stored_results.values():
# Graph
linewidth=2
plt.figure();
plt.xlabel('error [px]\n truth (MC sampling)')
plt.plot(results['avg_radius'], results['mean_errors'][:, 0], linewidth=linewidth)
plt.ylabel('z-score');
plt.title(construct_key(results['params']));
```
## Draft
```
def estimate(window_half_size, sigma, phase, N=100):
dxy_err = np.vstack([sample(window_half_size, sigma=sigma, phase=phase) for _ in range(N)])
dxy = dxy_err[:, :2]
eps_MC = np.sqrt(np.sum((dxy - dxy.mean(axis=0))**2, axis=1)).mean()
err_estimate = dxy_err[:, 2]
return eps_MC, err_estimate[0]
sigma_span = np.logspace(-1, 3, 15)
errs = np.vstack([estimate(10, s, phase=True, N=100) for s in sigma_span])
fig, (ax1, ax2) = plt.subplots(2, )
plt.title('phase opti')
ax1.semilogx(sigma_span, errs[:, 1], label='estimate')
ax1.set_ylabel('z-score');
ax2.semilogx(sigma_span, errs[:, 0], '-o', label='true (MC)');
ax2.set_ylabel('shift error [px]');
plt.legend();
plt.xlabel('noise level');
cube, image_names = sc.load_image_sequence('./images/HS2_01/')
def custom_entropy(A):
p = (A - A.min())/A.ptp()
p = p/np.sum(p)
p = p[p>0]
return -np.sum( p*np.log(p) )/np.log(A.size)
def scd_moment(A, x, y):
i, j = np.arange(A.shape[0]), np.arange(A.shape[1])
i_grid, j_grid = np.meshgrid(i, j)
pi = (i_grid - y)**2
pj = (j_grid - x)**2
d = np.sqrt(pi + pj)
return np.average(d, weights=A)
I = cube[20]
window_half_size = 15
A, ij = sc.crop(I, (222, 150), window_half_size)
B, ij = sc.crop(I+4, (222, 150), window_half_size)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12,4))
ax1.imshow(A);
ax2.imshow(B);
dx_span, dy_span, phase_corr, res = sc.output_cross_correlation(A, B, upsamplefactor=5, phase=False)
displ, err = sc.get_shifts(A, B, *np.array(A.shape)/2,
                           window_half_size=14,
                           coarse_search=False,
                           phase=False,
                           method='opti')
plt.pcolor(dx_span, dy_span, phase_corr);
plt.plot(*displ[::-1], 'rx')
plt.colorbar(); plt.axis('equal');
i_grid, j_grid = np.meshgrid(dy_span, dx_span)
pi = (i_grid - dy)**2
pj = (j_grid - dx)**2
d = np.sqrt(pi + pj)**(1/2)
print(np.average(d, weights=np.log(phase_corr)-np.log(phase_corr).min()))
print('f ', res.fun)
print('max ', phase_corr.max())
print('mean', phase_corr.mean())
print('std ', phase_corr.std())
print((phase_corr.max() - phase_corr.mean())/phase_corr.std())
print(scd_moment(phase_corr, dx, dy))
shape = (61, 61)
# alternative test arrays: single impulses on a constant background...
A = np.zeros(shape)+1
A[10, 10] = 2
B = np.zeros(shape)+1
B[15, 15] = 3
# ...or pure noise
A = np.random.randn(*shape)
B = np.random.randn(*shape)  # np.ones_like(A)
print(np.sum(A))
ddx = np.diff(phase_corr, axis=0)
ddy = np.diff(phase_corr, axis=1)
mask_x = np.sign( ddx[:-1, :] ) > np.sign( ddx[1:, :] )
mask_y = np.sign( ddy[:, :-1] ) > np.sign( ddy[:, 1:] )
peaks = mask_x[:, 1:-1]&mask_y[1:-1]
phase_corr_peak = phase_corr[1:-1, 1:-1].copy()
phase_corr_peak[~peaks] = 0
plt.imshow(phase_corr_peak);
from skimage.morphology import local_maxima
plt.imshow( local_maxima(phase_corr) )
plt.pcolor(dx_span, dy_span, np.log(phase_corr));
plt.colorbar(); plt.axis('equal');
sc.get_shifts(A, B, *np.array(A.shape)/2,
              window_half_size=14,
              coarse_search=False,
              phase=False,
              method='opti')
sc.get_shifts(A, B, window_half_size, window_half_size,
              window_half_size=5,
              coarse_search=False,
              method='opti')
def sample(window_half_size, sigma, phase):
    B_prime = B + sigma*np.std(B)*np.random.randn(*B.shape)
    x, y = np.array(A.T.shape)/2
    dx, dy, err = sc.get_shifts(A, B_prime, x, y,
                                window_half_size=window_half_size,
                                coarse_search=False,
                                phase=phase,
                                method='opti')
    return dx, dy, err
def estimate(window_half_size, sigma, phase, N=100):
dxy_err = np.vstack([sample(window_half_size, sigma=sigma, phase=phase) for _ in range(N)])
dxy = dxy_err[:, :2]
eps_MC = np.sqrt(np.sum((dxy - dxy.mean(axis=0))**2, axis=1)).mean()
err_estimate = dxy_err[:, 2]
return eps_MC, err_estimate[0]
sigma_span = np.logspace(-1, 3, 15)
errs = np.vstack([estimate(10, s, phase=True, N=100) for s in sigma_span])
fig, (ax1, ax2) = plt.subplots(2, )
plt.title('phase opti')
ax1.semilogx(sigma_span, errs[:, 1], label='estimate')
ax1.set_ylabel('z-score');
ax2.semilogx(sigma_span, errs[:, 0], '-o', label='true (MC)');
ax2.set_ylabel('shift error [px]');
plt.legend();
plt.xlabel('noise level');
sigma_span = np.logspace(-1, 2, 15)
errs = np.vstack([estimate(15, s, phase=False, N=100) for s in sigma_span])
plt.title('cc opti')
plt.loglog(sigma_span, 1/errs[:, 1], label='FRAE')
#plt.semilogx(sigma_span, errs[:, 0], '-o', label='true (MC)');
plt.legend(); plt.ylabel('shift error [px]'); plt.xlabel('noise level');
sigma_span = np.logspace(-1, 2, 15)
errs = np.vstack([estimate(15, s, phase=False, N=100) for s in sigma_span])
plt.title('CC opti')
plt.loglog(sigma_span, errs[:, 0], label='true (MC)')
plt.loglog(sigma_span, errs[:, 1], '-o', label='FRAE estimate');
plt.legend(); plt.ylabel('shift error [px]'); plt.xlabel('noise level');
I = cube[0]
J = cube[10]
window_half_size = 100
x, y = 100, 106
A, ij = sc.crop(I, (x, y), window_half_size)
B, ij = sc.crop(J, (x, y), window_half_size)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12,4))
ax1.imshow(A);
ax2.imshow(B);
window_half_size_span = np.arange(5, 70, 5)
res = [sc.get_shifts(I, J, x, y,
                     window_half_size=whs,
                     coarse_search=True,
                     phase=True,
                     method='opti')
       for whs in window_half_size_span]
err = [row[1] for row in res]
err = np.vstack(err)
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(9,6))
ax1.plot(window_half_size_span, err[:, 0]);
ax2.plot(window_half_size_span, err[:, 1]);
ax1.set_ylabel('z-score');
ax2.set_ylabel('FRAE [px]');
plt.xlabel('window half size');
plt.plot(window_half_size_span, res[:, 0]-res[:, 0].mean());
plt.plot(window_half_size_span, res[:, 1]-res[:, 1].mean());
window_half_size = 50
A, ij = sc.crop(I, (232, 150), window_half_size)
B, ij = sc.crop(J, (232, 150), window_half_size)
dx_span, dy_span, phase_corr, res = sc.output_cross_correlation(A, B, upsamplefactor=1, phase=False)
print('z_score', (phase_corr.max()-phase_corr.mean())/phase_corr.std())
plt.pcolor(dx_span, dy_span, phase_corr);
plt.colorbar(); plt.axis('equal');
def sample(window_half_size, sigma, phase):
    B_prime = B + sigma*np.std(B)*np.random.randn(*B.shape)   # noise step, as in the earlier sample()
    x, y = np.array(A.T.shape)/2
    dx, dy, err = sc.get_shifts(A, B_prime, x, y,
                                window_half_size=window_half_size,
                                coarse_search=False,
                                phase=phase,
                                method='opti')
    return dx, dy, err
def estimate(window_half_size, sigma, phase, N=100):
dxy_err = np.vstack([sample(window_half_size, sigma=sigma, phase=phase) for _ in range(N)])
dxy = dxy_err[:, :2]
eps_MC = np.sqrt(np.sum((dxy - dxy.mean(axis=0))**2, axis=1)).mean()
err_estimate = dxy_err[:, 2]
return eps_MC, err_estimate[0]
```
# Analyze Respiratory Rate Variability (RRV)
This example can be referenced by [citing the package](https://github.com/neuropsychology/NeuroKit#citation).
Respiratory Rate Variability (RRV), or the variation in respiratory rhythm, is a crucial index of general health and of respiratory complications. This example shows how to use NeuroKit to perform RRV analysis.
## Download Data and Extract Relevant Signals
```
# Load NeuroKit and other useful packages
import neurokit2 as nk
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = 15, 5 # Bigger images
```
In this example, we will download a dataset that contains electrocardiogram, respiratory, and electrodermal activity signals, and extract only the respiratory (RSP) signal.
```
# Get data
data = pd.read_csv("https://raw.githubusercontent.com/neuropsychology/NeuroKit/master/data/bio_eventrelated_100hz.csv")
rsp = data["RSP"]
nk.signal_plot(rsp, sampling_rate=100) # Visualize
```
You now have the raw RSP signal in the shape of a vector (i.e., a one-dimensional array). You can then clean it using `rsp_clean()` and extract the inhalation peaks of the signal using `rsp_peaks()`. This will output 1) a *dataframe* indicating the occurrences of inhalation peaks and exhalation troughs ("1" marked in a list of zeros), and 2) a *dictionary* showing the samples of peaks and troughs.
*Note: As the dataset has a frequency of 100Hz, make sure the `sampling_rate` is also set to 100Hz. It is critical that you specify the correct sampling rate of your signal throughout all the processing functions.*
```
# Clean signal
cleaned = nk.rsp_clean(rsp, sampling_rate=100)
# Extract peaks
df, peaks_dict = nk.rsp_peaks(cleaned)
info = nk.rsp_fixpeaks(peaks_dict)
formatted = nk.signal_formatpeaks(info, desired_length=len(cleaned),peak_indices=info["RSP_Peaks"])
nk.signal_plot(pd.DataFrame({"RSP_Raw": rsp, "RSP_Clean": cleaned}), sampling_rate=100, subplots=True)
candidate_peaks = nk.events_plot(peaks_dict['RSP_Peaks'], cleaned)
fixed_peaks = nk.events_plot(info['RSP_Peaks'], cleaned)
# Extract rate
rsp_rate = nk.rsp_rate(cleaned, peaks_dict, sampling_rate=100)
# Visualize
nk.signal_plot(rsp_rate, sampling_rate=100)
plt.ylabel('BPM')
```
## Analyse RRV
Now that we have extracted the respiratory rate signal and the peaks dictionary, you can feed these into `rsp_rrv()`. This outputs a variety of RRV indices covering time-domain, frequency-domain, and nonlinear features. Time-domain features include RMSSD (the root mean square of successive differences) and SDBB (the standard deviation of the breath-to-breath intervals). Power spectral measures (e.g., LF, HF, LFHF) and entropy measures (e.g., sample entropy, SampEn, where smaller values indicate that the respiratory rate is regular and predictable) are examples of frequency-domain and nonlinear features, respectively.
A Poincaré plot is also shown when setting `show=True`, plotting each breath-to-breath interval against the next successive one. It shows the distribution of successive respiratory rates.
```
rrv = nk.rsp_rrv(rsp_rate, info, sampling_rate=100, show=True)
rrv
```
This is a simple visualization tool for short-term (SD1) and long-term variability (SD2) in respiratory rhythm.
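For intuition, the main time-domain and Poincaré indices can also be computed by hand from the breath-to-breath intervals. This is a minimal sketch, assuming that `info['RSP_Peaks']` holds the inhalation peak samples found above and that the signal is sampled at 100 Hz; it illustrates the formulas and is not a replacement for `rsp_rrv()`.
```
# Breath-to-breath (BB) intervals in milliseconds, from the peak samples found above
bb = np.diff(info["RSP_Peaks"]) / 100 * 1000    # sampling rate = 100 Hz
diff_bb = np.diff(bb)

sdbb = np.std(bb, ddof=1)                        # SDBB: standard deviation of BB intervals
rmssd = np.sqrt(np.mean(diff_bb ** 2))           # RMSSD: root mean square of successive differences

# Poincare descriptors: short-term (SD1) and long-term (SD2) variability
sd1 = np.sqrt(0.5 * np.var(diff_bb, ddof=1))
sd2 = np.sqrt(2 * np.var(bb, ddof=1) - 0.5 * np.var(diff_bb, ddof=1))
print(f"SDBB={sdbb:.1f} ms, RMSSD={rmssd:.1f} ms, SD1={sd1:.1f} ms, SD2={sd2:.1f} ms")
```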
### See documentation for full reference
RRV method taken from: Soni et al. (2019)
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
%matplotlib notebook
fruits = pd.read_table('fruit_data_with_colors.txt')
fruits.head()
# create a mapping from fruit label value to fruit name to make results easier to interpret
look_up_fruit_name = dict(zip(fruits.fruit_label.unique(), fruits.fruit_name.unique()))
look_up_fruit_name
fruits.shape
# Split the data into training and testing
X = fruits[['mass', 'width', 'height', 'color_score']]
y = fruits['fruit_label']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 0)
# plotting a scatter matrix
from matplotlib import cm
cmap = cm.get_cmap('gnuplot')
scatter = pd.plotting.scatter_matrix(X_train, c = y_train, marker = 'o', s=40, hist_kwds={'bins':15}, figsize=(9,9), cmap = cmap)
# plotting a 3D scatter plot
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection = '3d')
ax.scatter(X_train['width'], X_train['height'], X_train['color_score'], c = y_train, marker = 'o', s=100)
ax.set_xlabel('width')
ax.set_ylabel('height')
ax.set_zlabel('color_score')
plt.show()
X = fruits[['mass', 'width', 'height']]
y = fruits['fruit_label']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# Create classifier object
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors = 5)
# Train the classifier using the training data
knn.fit(X_train, y_train)
# Estimate the accuracy of the classifier on future data, using the test data
knn.score(X_test, y_test)
# Use the trained k-NN classifier model to classify new, previously unseen objects
# first example: a small fruit with mass 20g, width 4.3 cm, height 5.5 cm
fruit_prediction = knn.predict([[20, 4.3, 5.5]])
look_up_fruit_name[fruit_prediction[0]]
fruit_prediction = knn.predict([[100, 6.3, 8.5]])
look_up_fruit_name[fruit_prediction[0]]
# plot the decision boundaries of the k-NN classifier
from adspy_shared_utilities import plot_fruit_knn
plot_fruit_knn(X_train, y_train, 5, 'uniform')
plot_fruit_knn(X_train, y_train, 1, 'uniform')
plot_fruit_knn(X_train, y_train, 10, 'uniform')
# How sensitive is k-NN classification accuracy to the choice of the 'k' parameter
k_range = range(1, 20)
scores = []
for k in k_range:
knn = KNeighborsClassifier(n_neighbors = k)
knn.fit(X_train, y_train)
scores.append(knn.score(X_test, y_test))
plt.figure()
plt.xlabel('k')
plt.ylabel('accuracy')
plt.scatter(k_range, scores)
plt.xticks([0,5,10,15,20])
# How sensitive is k-NN classification accuracy to the train/test split proportion
t = [0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
knn = KNeighborsClassifier(n_neighbors = 5)
plt.figure()
for s in t:
scores = []
for i in range(1, 1000):
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 1-s)
knn.fit(X_train, y_train)
scores.append(knn.score(X_test, y_test))
plt.plot(s, np.mean(scores), 'bo')
plt.xlabel('Training set proportion (%)')
plt.ylabel('accuracy');
```
# DCGAN
*Zhiang Chen, April 2017*
Using the package: https://github.com/sugyan/tf-dcgan
### 1. Import packages
```
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
import matplotlib.pyplot as plt
import random
import operator
import time
import os
import math
import deepdish as dd
from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets
from math import *
import time
from dcgan import DCGAN
from datetime import datetime
```
### 2. Import data
```
wd = os.getcwd()
os.chdir('..')
file_name = 'resized_depth_data2.h5'
save = dd.io.load(file_name)
train_objects = save['train_objects']
train_orientations = save['train_orientations']
train_values = save['train_values']
valid_objects = save['valid_objects']
valid_orientations = save['valid_orientations']
valid_values = save['valid_values']
test_objects = save['test_objects']
test_orientations = save['test_orientations']
test_values = save['test_values']
value2object = save['value2object']
object2value = save['object2value']
del save
os.chdir(wd)
print('training dataset', train_objects.shape, train_orientations.shape, train_values.shape)
print('validation dataset', valid_objects.shape, valid_orientations.shape, valid_values.shape)
print('testing dataset', test_objects.shape, test_orientations.shape, test_values.shape)
```
### 3. Shuffle data
```
image_size = 48
def randomize(dataset, classes, angles):
permutation = np.random.permutation(classes.shape[0])
shuffled_dataset = dataset[permutation,:,:]
shuffled_classes = classes[permutation]
shuffled_angles = angles[permutation]
return shuffled_dataset, shuffled_classes, shuffled_angles
train_dataset, train_classes, train_angles = randomize(train_values, train_objects, train_orientations)
valid_dataset, valid_classes, valid_angles = randomize(valid_values, valid_objects, valid_orientations)
test_dataset, test_classes, test_angles = randomize(test_values, test_objects, test_orientations)
train_dataset = train_dataset[:150000,:,:]
train_angles = train_angles[:150000,:]
train_classes = train_classes[:150000,:]
valid_dataset = valid_dataset[:5000,:,:]
valid_angles = valid_angles[:5000,:]
valid_classes = valid_classes[:5000,:]
test_dataset = test_dataset[:5000,:,:]
test_angles = test_angles[:5000,:]
test_classes = test_classes[:5000,:]
train_dataset = train_dataset.reshape(-1,image_size,image_size,1)
test_dataset = test_dataset.reshape(-1,image_size,image_size,1)
n_samples = train_dataset.shape[0]
```
### 4. DCGAN
```
FLAGS = tf.app.flags.FLAGS
tf.app.flags.DEFINE_string('logdir', 'logdir',
"""Directory where to write event logs and checkpoint.""")
tf.app.flags.DEFINE_integer('max_steps', 80000,
"""Number of batches to run.""")
tf.app.flags.DEFINE_string('images_dir', 'images',
"""Directory where to write generated images.""")
np.random.seed(0)
tf.set_random_seed(0)
s_size = 3 # s_size*2**4 == image_size
dcgan = DCGAN(s_size=s_size)
batch_size = dcgan.batch_size #128
min_queue_examples = 5000
train_images = tf.train.shuffle_batch([train_dataset], \
batch_size=batch_size, \
capacity=min_queue_examples + 3 * batch_size, \
min_after_dequeue=min_queue_examples, \
enqueue_many = True)
test_images = tf.train.shuffle_batch([test_dataset], \
batch_size=batch_size, \
capacity=min_queue_examples + 3 * batch_size, \
min_after_dequeue=min_queue_examples, \
enqueue_many = True)
losses = dcgan.loss(train_images)
# feature matching: penalize the distance between the mean discriminator features
# of generated images and of real training images (cf. Salimans et al., 2016)
graph = tf.get_default_graph()
features_g = tf.reduce_mean(graph.get_tensor_by_name('dg/d/conv4/outputs:0'), 0)  # generated-batch features
features_t = tf.reduce_mean(graph.get_tensor_by_name('dt/d/conv4/outputs:0'), 0)  # real-batch features
losses[dcgan.g] += tf.multiply(tf.nn.l2_loss(features_g - features_t), 0.05)
tf.summary.scalar('g loss', losses[dcgan.g])
tf.summary.scalar('d loss', losses[dcgan.d])
train_op = dcgan.train(losses, learning_rate=0.0001)
summary_op = tf.summary.merge_all()
g_saver = tf.train.Saver(dcgan.g.variables, max_to_keep=15)
d_saver = tf.train.Saver(dcgan.d.variables, max_to_keep=15)
g_checkpoint_path = os.path.join(FLAGS.logdir, 'g.ckpt')  # no leading '/': an absolute component would make os.path.join discard logdir
d_checkpoint_path = os.path.join(FLAGS.logdir, 'd.ckpt')
config = tf.ConfigProto()
config.gpu_options.allow_growth=True
config.log_device_placement = True
config.gpu_options.allocator_type = 'BFC'
with tf.Session(config=config) as sess:
summary_writer = tf.summary.FileWriter(FLAGS.logdir, graph=sess.graph)
# restore or initialize generator
sess.run(tf.global_variables_initializer())
'''
if os.path.exists(g_checkpoint_path):
print('restore variables:')
for v in dcgan.g.variables:
print(' ' + v.name)
g_saver.restore(sess, g_checkpoint_path)
if os.path.exists(d_checkpoint_path):
print('restore variables:')
for v in dcgan.d.variables:
print(' ' + v.name)
d_saver.restore(sess, d_checkpoint_path)
'''
try:
g_saver.restore(sess, './logdir/g.ckpt')
print ('restore variables:')
for v in dcgan.g.variables:
print(' ' + v.name)
    except Exception:
        print("g using random initialization")
try:
d_saver.restore(sess, './logdir/d.ckpt')
print('restore variables:')
for v in dcgan.d.variables:
print(' ' + v.name)
    except Exception:
        print("d using random initialization")
# setup for monitoring
sample_z = sess.run(tf.random_uniform([dcgan.batch_size, dcgan.z_dim], minval=-1.0, maxval=1.0))
images = dcgan.sample_images(5, 5, inputs=sample_z)
# start training
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    dcgan.retrain  # NOTE: bare attribute reference without a call; likely a leftover
for step in range(FLAGS.max_steps):
start_time = time.time()
_, g_loss, d_loss = sess.run([train_op, losses[dcgan.g], losses[dcgan.d]])
duration = time.time() - start_time
if step%20 == 0:
print('{}: step {:5d}, loss = (G: {:.8f}, D: {:.8f}) ({:.3f} sec/batch)'.format(
datetime.now(), step, g_loss, d_loss, duration))
# save generated images
if step % 100 == 0:
# summary
summary_str = sess.run(summary_op)
summary_writer.add_summary(summary_str, step)
# sample images
filename = os.path.join(FLAGS.images_dir, '%05d.jpg' % step)
with open(filename, 'wb') as f:
f.write(sess.run(images))
# save variables
if (step+1) == FLAGS.max_steps:
g_saver.save(sess, './logdir/g.ckpt')
d_saver.save(sess, './logdir/d.ckpt')
coord.request_stop()
coord.join(threads)
```
Integration of datasets from different batches is often a central step in a single-cell analysis pipeline. In this notebook we are going to use a conditional variational autoencoder (CVAE) to integrate a single-cell dataset with significant batch effects. As demonstrated by scVI ([Lopez 18](https://www.nature.com/articles/s41592-018-0229-2.epdf?author_access_token=5sMbnZl1iBFitATlpKkddtRgN0jAjWel9jnR3ZoTv0P1-tTjoP-mBfrGiMqpQx63aBtxToJssRfpqQ482otMbBw2GIGGeinWV4cULBLPg4L4DpCg92dEtoMaB1crCRDG7DgtNrM_1j17VfvHfoy1cQ%3D%3D)), CVAEs are very well suited for the integration of single-cell data. By injecting the condition label into the encoder and decoder layers, the network is incentivized to learn only the variation in the dataset that cannot be explained by the condition label.
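To make the conditioning idea concrete, here is a minimal Keras sketch (independent of LatentLego, and omitting the sampling and KL machinery of a full VAE) of how a one-hot batch label can be concatenated onto the inputs of both the encoder and the decoder. All layer sizes and dimensions below are illustrative assumptions, not the settings used later in this notebook.
```
import tensorflow as tf
from tensorflow.keras import layers

# Illustrative (assumed) dimensions only
n_genes, n_batches, latent_dim = 2000, 4, 10

# Encoder: concatenate the one-hot batch label with the expression vector
x_in = layers.Input(shape=(n_genes,), name="expression")
c_in = layers.Input(shape=(n_batches,), name="condition")
h = layers.Concatenate()([x_in, c_in])
h = layers.Dense(128, activation="relu")(h)
z = layers.Dense(latent_dim, name="latent")(h)

# Decoder: inject the same label again, so z does not need to encode the batch
h_dec = layers.Concatenate()([z, c_in])
h_dec = layers.Dense(128, activation="relu")(h_dec)
x_out = layers.Dense(n_genes, activation="softplus")(h_dec)

cvae_sketch = tf.keras.Model(inputs=[x_in, c_in], outputs=x_out)
cvae_sketch.summary()
```
Because the batch label is always available to the decoder, reconstructing batch-specific expression patterns does not require storing them in the latent code, which is what drives the batch correction.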
```
import numpy as np
import scanpy as sc
import tensorflow.keras as keras
from sklearn import preprocessing as pp
from latent.models import NegativeBinomialVAE as NBVAE;
```
We import all necessary dependencies, including the `NegativeBinomialVAE` from LatentLego. Now we load the dataset with `scanpy`. Here, we use the pancreas dataset from [this scanpy tutorial](https://scanpy-tutorials.readthedocs.io/en/latest/integrating-data-using-ingest.html#Pancreas), since it contains strong batch effects and has been used in various papers on data integration.
```
adata = sc.read('data/pancreas.h5ad', backup_url='https://www.dropbox.com/s/qj1jlm9w10wmt0u/pancreas.h5ad?dl=1')
print(adata);
```
## Data preprocessing
As a first step, we preprocess the data and visualize it using UMAP, so we can appreciate the batch effects.
```
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
sc.pp.pca(adata)
sc.pp.neighbors(adata)
sc.tl.umap(adata);
```
Now we plot the UMAP representation of the data.
```
sc.pl.umap(adata, color=['batch', 'celltype'])
```
We can clearly see that a major part of the variation in this dataset is driven by batch, and cell types from different batches do not co-cluster at all. This is a good indicator that integration of the different batches is necessary for downstream analysis.
## Preparing the data
Currently, LatentLego is mostly an add-on to TensorFlow/Keras and provides no interface for working with `AnnData` objects directly. I will probably add that in a future version, though. For now, we have to extract and prepare the model inputs manually.
```
# Select highly variable genes
highvar = adata.raw.var.index.isin(adata.var.index)
# Extract unscaled data
X_use = np.array(adata.raw.X[:, highvar].todense())
# Calculate size factors
n_umis = X_use.sum(1)
size_factors = n_umis / np.median(n_umis)
# Get batch label and format to one-hot encoded matrix
cond = adata.obs['batch'].values
le = pp.LabelEncoder()
cond = le.fit_transform(cond)
cond = keras.utils.to_categorical(cond)
print(cond)
```
## Fit the model
Now we prepare the model as well as a callback for early stopping and the optimizer. With `conditional = 'all'`, we tell the model to inject the condition into every hidden layer of the encoder and decoder networks. We also use a conditional version of the VAMP prior ([Tomczak 2017](https://arxiv.org/abs/1705.07120)) that has been shown to perform well on single-cell data ([Dony 2020](https://icml-compbio.github.io/2020/papers/WCBICML2020_paper_37.pdf)).
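As a brief reminder of what the VAMP prior does (following [Tomczak 2017](https://arxiv.org/abs/1705.07120)): instead of a standard normal, the prior is a mixture of the encoder's variational posteriors evaluated at $K$ learned pseudo-inputs $u_k$, with $K$ corresponding to `n_pseudoinputs = 50` below:

$$p_\lambda(z) = \frac{1}{K} \sum_{k=1}^{K} q_\phi\bigl(z \mid u_k\bigr)$$

The conditional variant used here adapts this construction to the conditional setting; see the cited papers for the details of that adaptation.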
```
# Initiate keras callback function and optimizer
es_callback = keras.callbacks.EarlyStopping(
monitor='loss', min_delta=0.001, patience=10, verbose=0, mode='auto',
baseline=None, restore_best_weights=False
)
optimizer = keras.optimizers.Adam(learning_rate=0.0002)
# Initiate autoencoder
autoencoder = NBVAE(
x_dim = X_use.shape[1],
encoder_units = [256, 128],
decoder_units = [128, 256],
latent_dim = 10,
kld_weight = 1e-3,
conditional = 'all',
prior = 'vamp',
n_pseudoinputs = 50,
dispersion = 'gene'
)
autoencoder.compile(optimizer=optimizer, run_eagerly=False)
```
Now we train the Keras model (depending on where you are running this, this might take a while ;))
```
history = autoencoder.fit(
[X_use, cond, size_factors],
batch_size = 50,
epochs = 100,
use_multiprocessing = True,
workers = 30,
callbacks = [es_callback],
verbose = False
);
```
Now we can use the `.transform()` method to obtain the latent representation. We'll add that to the `AnnData` object and use UMAP to further reduce it to 2D.
```
latent = autoencoder.transform([X_use, cond])
adata.obsm['X_ae'] = latent
sc.pp.neighbors(adata, use_rep='X_ae', n_neighbors=30)
sc.tl.umap(adata, min_dist=0.1, spread=0.5)
```
And plot the result:
```
p = sc.pl.scatter(adata, show=False, basis='umap', color=['celltype', 'batch'])
```
We can see that the batches are integrated much better while preserving a good separation of cell types. We can of course also use a 'regular' variational autoencoder with a standard normal prior instead of the VAMP prior. However, with the same weight on the KLD loss, the latent representation will be 'smoother' and the cell types are less well separated.
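The smoothing effect described above comes from the KL term of the objective. Schematically (the exact loss implemented by LatentLego may differ in details such as size-factor handling and the negative binomial parameterization), both models optimize a weighted ELBO of the form

$$\mathcal{L}(x, c) = \mathbb{E}_{q_\phi(z \mid x, c)}\bigl[-\log p_\theta(x \mid z, c)\bigr] + \beta\, D_{\mathrm{KL}}\bigl(q_\phi(z \mid x, c) \,\|\, p(z)\bigr),$$

where $\beta$ is the `kld_weight`. With the same $\beta$, the rigid standard normal prior $p(z) = \mathcal{N}(0, I)$ pulls all posteriors toward a single Gaussian, while the VAMP mixture leaves more room for well-separated clusters.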
```
# Initiate autoencoder
autoencoder = NBVAE(
x_dim = X_use.shape[1],
encoder_units = [256, 128],
decoder_units = [128, 256],
latent_dim = 10,
kld_weight = 1e-3,
conditional = 'all',
dispersion = 'gene'
)
autoencoder.compile(optimizer=optimizer, run_eagerly=False)
history = autoencoder.fit(
[X_use, cond, size_factors],
batch_size = 50,
epochs = 100,
use_multiprocessing = True,
workers = 30,
callbacks = [es_callback],
verbose = False
);
latent = autoencoder.transform([X_use, cond])
adata.obsm['X_ae'] = latent
sc.pp.neighbors(adata, use_rep='X_ae', n_neighbors=30)
sc.tl.umap(adata, min_dist=0.1, spread=0.5)
p = sc.pl.scatter(adata, show=False, basis='umap', color=['celltype', 'batch'])
```
```
from __future__ import absolute_import, division, print_function
import glob
import logging
import os
import random
import json
import numpy as np
import torch
from torch.utils.data import (DataLoader, RandomSampler, SequentialSampler,
TensorDataset)
import math  # needed for math.ceil() in the warmup-step calculation below
from torch.utils.data.distributed import DistributedSampler
from tqdm import tqdm_notebook, trange
from tensorboardX import SummaryWriter
from pytorch_transformers import (WEIGHTS_NAME, BertConfig, BertForSequenceClassification, BertTokenizer,
XLMConfig, XLMForSequenceClassification, XLMTokenizer,
XLNetConfig, XLNetForSequenceClassification, XLNetTokenizer,
RobertaConfig, RobertaForSequenceClassification, RobertaTokenizer)
from pytorch_transformers import AdamW, WarmupLinearSchedule
from utils import (convert_examples_to_features,
output_modes, processors)
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
args = {
'data_dir': 'data/',
'model_type': 'xlnet',
'model_name': 'xlnet-base-cased',
'task_name': 'binary',
'output_dir': 'outputs/',
'cache_dir': 'cache/',
'do_train': True,
'do_eval': True,
'fp16': True,
'fp16_opt_level': 'O1',
'max_seq_length': 128,
'output_mode': 'classification',
'train_batch_size': 8,
'eval_batch_size': 8,
'gradient_accumulation_steps': 1,
'num_train_epochs': 1,
'weight_decay': 0,
'learning_rate': 4e-5,
'adam_epsilon': 1e-8,
'warmup_ratio': 0.06,
'warmup_steps': 0,
'max_grad_norm': 1.0,
'logging_steps': 50,
'evaluate_during_training': False,
'save_steps': 2000,
'eval_all_checkpoints': True,
'overwrite_output_dir': False,
'reprocess_input_data': True,
'notes': 'Using Yelp Reviews dataset'
}
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
args
with open('args.json', 'w') as f:
json.dump(args, f)
if os.path.exists(args['output_dir']) and os.listdir(args['output_dir']) and args['do_train'] and not args['overwrite_output_dir']:
raise ValueError("Output directory ({}) already exists and is not empty. Use --overwrite_output_dir to overcome.".format(args['output_dir']))
MODEL_CLASSES = {
'bert': (BertConfig, BertForSequenceClassification, BertTokenizer),
'xlnet': (XLNetConfig, XLNetForSequenceClassification, XLNetTokenizer),
'xlm': (XLMConfig, XLMForSequenceClassification, XLMTokenizer),
'roberta': (RobertaConfig, RobertaForSequenceClassification, RobertaTokenizer)
}
config_class, model_class, tokenizer_class = MODEL_CLASSES[args['model_type']]
config = config_class.from_pretrained(args['model_name'], num_labels=2, finetuning_task=args['task_name'])
tokenizer = tokenizer_class.from_pretrained(args['model_name'])
model = model_class.from_pretrained(args['model_name'])
model.to(device);
task = args['task_name']
if task in processors.keys() and task in output_modes.keys():
processor = processors[task]()
label_list = processor.get_labels()
num_labels = len(label_list)
else:
raise KeyError(f'{task} not found in processors or in output_modes. Please check utils.py.')
def load_and_cache_examples(task, tokenizer, evaluate=False):
processor = processors[task]()
output_mode = args['output_mode']
mode = 'dev' if evaluate else 'train'
cached_features_file = os.path.join(args['data_dir'], f"cached_{mode}_{args['model_name']}_{args['max_seq_length']}_{task}")
if os.path.exists(cached_features_file) and not args['reprocess_input_data']:
logger.info("Loading features from cached file %s", cached_features_file)
features = torch.load(cached_features_file)
else:
logger.info("Creating features from dataset file at %s", args['data_dir'])
label_list = processor.get_labels()
examples = processor.get_dev_examples(args['data_dir']) if evaluate else processor.get_train_examples(args['data_dir'])
if __name__ == "__main__":
features = convert_examples_to_features(examples, label_list, args['max_seq_length'], tokenizer, output_mode,
cls_token_at_end=bool(args['model_type'] in ['xlnet']), # xlnet has a cls token at the end
cls_token=tokenizer.cls_token,
cls_token_segment_id=2 if args['model_type'] in ['xlnet'] else 0,
sep_token=tokenizer.sep_token,
sep_token_extra=bool(args['model_type'] in ['roberta']), # roberta uses an extra separator b/w pairs of sentences, cf. github.com/pytorch/fairseq/commit/1684e166e3da03f5b600dbb7855cb98ddfcd0805
pad_on_left=bool(args['model_type'] in ['xlnet']), # pad on the left for xlnet
pad_token=tokenizer.convert_tokens_to_ids([tokenizer.pad_token])[0],
pad_token_segment_id=4 if args['model_type'] in ['xlnet'] else 0)
logger.info("Saving features into cached file %s", cached_features_file)
torch.save(features, cached_features_file)
all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
all_input_mask = torch.tensor([f.input_mask for f in features], dtype=torch.long)
all_segment_ids = torch.tensor([f.segment_ids for f in features], dtype=torch.long)
if output_mode == "classification":
all_label_ids = torch.tensor([f.label_id for f in features], dtype=torch.long)
elif output_mode == "regression":
all_label_ids = torch.tensor([f.label_id for f in features], dtype=torch.float)
dataset = TensorDataset(all_input_ids, all_input_mask, all_segment_ids, all_label_ids)
return dataset
def train(train_dataset, model, tokenizer):
tb_writer = SummaryWriter()
train_sampler = RandomSampler(train_dataset)
train_dataloader = DataLoader(train_dataset, sampler=train_sampler, batch_size=args['train_batch_size'])
t_total = len(train_dataloader) // args['gradient_accumulation_steps'] * args['num_train_epochs']
no_decay = ['bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
{'params': [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], 'weight_decay': args['weight_decay']},
{'params': [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}
]
warmup_steps = math.ceil(t_total * args['warmup_ratio'])
args['warmup_steps'] = warmup_steps if args['warmup_steps'] == 0 else args['warmup_steps']
optimizer = AdamW(optimizer_grouped_parameters, lr=args['learning_rate'], eps=args['adam_epsilon'])
scheduler = WarmupLinearSchedule(optimizer, warmup_steps=args['warmup_steps'], t_total=t_total)
if args['fp16']:
try:
from apex import amp
except ImportError:
raise ImportError("Please install apex from https://www.github.com/nvidia/apex to use fp16 training.")
model, optimizer = amp.initialize(model, optimizer, opt_level=args['fp16_opt_level'])
logger.info("***** Running training *****")
logger.info(" Num examples = %d", len(train_dataset))
logger.info(" Num Epochs = %d", args['num_train_epochs'])
logger.info(" Total train batch size = %d", args['train_batch_size'])
logger.info(" Gradient Accumulation steps = %d", args['gradient_accumulation_steps'])
logger.info(" Total optimization steps = %d", t_total)
global_step = 0
tr_loss, logging_loss = 0.0, 0.0
model.zero_grad()
train_iterator = trange(int(args['num_train_epochs']), desc="Epoch")
for _ in train_iterator:
epoch_iterator = tqdm_notebook(train_dataloader, desc="Iteration")
for step, batch in enumerate(epoch_iterator):
model.train()
batch = tuple(t.to(device) for t in batch)
inputs = {'input_ids': batch[0],
'attention_mask': batch[1],
'token_type_ids': batch[2] if args['model_type'] in ['bert', 'xlnet'] else None, # XLM don't use segment_ids
'labels': batch[3]}
outputs = model(**inputs)
loss = outputs[0] # model outputs are always tuple in pytorch-transformers (see doc)
print("\r%f" % loss, end='')
if args['gradient_accumulation_steps'] > 1:
loss = loss / args['gradient_accumulation_steps']
if args['fp16']:
with amp.scale_loss(loss, optimizer) as scaled_loss:
scaled_loss.backward()
torch.nn.utils.clip_grad_norm_(amp.master_params(optimizer), args['max_grad_norm'])
else:
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), args['max_grad_norm'])
tr_loss += loss.item()
if (step + 1) % args['gradient_accumulation_steps'] == 0:
                optimizer.step()
                scheduler.step()  # Update learning rate schedule after the optimizer step
model.zero_grad()
global_step += 1
if args['logging_steps'] > 0 and global_step % args['logging_steps'] == 0:
# Log metrics
if args['evaluate_during_training']: # Only evaluate when single GPU otherwise metrics may not average well
results = evaluate(model, tokenizer)
for key, value in results.items():
tb_writer.add_scalar('eval_{}'.format(key), value, global_step)
tb_writer.add_scalar('lr', scheduler.get_lr()[0], global_step)
tb_writer.add_scalar('loss', (tr_loss - logging_loss)/args['logging_steps'], global_step)
logging_loss = tr_loss
if args['save_steps'] > 0 and global_step % args['save_steps'] == 0:
# Save model checkpoint
output_dir = os.path.join(args['output_dir'], 'checkpoint-{}'.format(global_step))
if not os.path.exists(output_dir):
os.makedirs(output_dir)
model_to_save = model.module if hasattr(model, 'module') else model # Take care of distributed/parallel training
model_to_save.save_pretrained(output_dir)
logger.info("Saving model checkpoint to %s", output_dir)
return global_step, tr_loss / global_step
from sklearn.metrics import mean_squared_error, matthews_corrcoef, confusion_matrix
from scipy.stats import pearsonr
def get_mismatched(labels, preds):
mismatched = labels != preds
examples = processor.get_dev_examples(args['data_dir'])
wrong = [i for (i, v) in zip(examples, mismatched) if v]
return wrong
def get_eval_report(labels, preds):
mcc = matthews_corrcoef(labels, preds)
tn, fp, fn, tp = confusion_matrix(labels, preds).ravel()
return {
"mcc": mcc,
"tp": tp,
"tn": tn,
"fp": fp,
"fn": fn
}, get_mismatched(labels, preds)
def compute_metrics(task_name, preds, labels):
assert len(preds) == len(labels)
return get_eval_report(labels, preds)
def evaluate(model, tokenizer, prefix=""):
# Loop to handle MNLI double evaluation (matched, mis-matched)
eval_output_dir = args['output_dir']
results = {}
EVAL_TASK = args['task_name']
eval_dataset = load_and_cache_examples(EVAL_TASK, tokenizer, evaluate=True)
if not os.path.exists(eval_output_dir):
os.makedirs(eval_output_dir)
eval_sampler = SequentialSampler(eval_dataset)
eval_dataloader = DataLoader(eval_dataset, sampler=eval_sampler, batch_size=args['eval_batch_size'])
# Eval!
logger.info("***** Running evaluation {} *****".format(prefix))
logger.info(" Num examples = %d", len(eval_dataset))
logger.info(" Batch size = %d", args['eval_batch_size'])
eval_loss = 0.0
nb_eval_steps = 0
preds = None
out_label_ids = None
for batch in tqdm_notebook(eval_dataloader, desc="Evaluating"):
model.eval()
batch = tuple(t.to(device) for t in batch)
with torch.no_grad():
inputs = {'input_ids': batch[0],
'attention_mask': batch[1],
'token_type_ids': batch[2] if args['model_type'] in ['bert', 'xlnet'] else None, # XLM don't use segment_ids
'labels': batch[3]}
outputs = model(**inputs)
tmp_eval_loss, logits = outputs[:2]
eval_loss += tmp_eval_loss.mean().item()
nb_eval_steps += 1
if preds is None:
preds = logits.detach().cpu().numpy()
out_label_ids = inputs['labels'].detach().cpu().numpy()
else:
preds = np.append(preds, logits.detach().cpu().numpy(), axis=0)
out_label_ids = np.append(out_label_ids, inputs['labels'].detach().cpu().numpy(), axis=0)
eval_loss = eval_loss / nb_eval_steps
if args['output_mode'] == "classification":
preds = np.argmax(preds, axis=1)
elif args['output_mode'] == "regression":
preds = np.squeeze(preds)
result, wrong = compute_metrics(EVAL_TASK, preds, out_label_ids)
results.update(result)
output_eval_file = os.path.join(eval_output_dir, "eval_results.txt")
with open(output_eval_file, "w") as writer:
logger.info("***** Eval results {} *****".format(prefix))
for key in sorted(result.keys()):
logger.info(" %s = %s", key, str(result[key]))
writer.write("%s = %s\n" % (key, str(result[key])))
return results, wrong
if args['do_train']:
train_dataset = load_and_cache_examples(task, tokenizer)
global_step, tr_loss = train(train_dataset, model, tokenizer)
logger.info(" global_step = %s, average loss = %s", global_step, tr_loss)
if args['do_train']:
if not os.path.exists(args['output_dir']):
os.makedirs(args['output_dir'])
logger.info("Saving model checkpoint to %s", args['output_dir'])
model_to_save = model.module if hasattr(model, 'module') else model # Take care of distributed/parallel training
model_to_save.save_pretrained(args['output_dir'])
tokenizer.save_pretrained(args['output_dir'])
torch.save(args, os.path.join(args['output_dir'], 'training_args.bin'))
results = {}
if args['do_eval']:
checkpoints = [args['output_dir']]
if args['eval_all_checkpoints']:
checkpoints = list(os.path.dirname(c) for c in sorted(glob.glob(args['output_dir'] + '/**/' + WEIGHTS_NAME, recursive=True)))
logging.getLogger("pytorch_transformers.modeling_utils").setLevel(logging.WARN) # Reduce logging
logger.info("Evaluate the following checkpoints: %s", checkpoints)
for checkpoint in checkpoints:
global_step = checkpoint.split('-')[-1] if len(checkpoints) > 1 else ""
model = model_class.from_pretrained(checkpoint)
model.to(device)
result, wrong_preds = evaluate(model, tokenizer, prefix=global_step)
result = dict((k + '_{}'.format(global_step), v) for k, v in result.items())
results.update(result)
results
```
```
%%writefile app.py
import streamlit as st
import tensorflow as tf
import cv2
from PIL import Image ,ImageOps
import numpy as np
from tensorflow.keras.preprocessing.image import load_img, img_to_array
@st.cache(allow_output_mutation=True)
def predict(img):
IMAGE_SIZE = 224
classes = ['Apple - Apple scab', 'Apple - Black rot',
'Apple - Cedar apple rust', 'Apple - healthy', 'Background without leaves',
'Blueberry - healthy', 'Cherry - Powdery mildew', 'Cherry - healthy',
'Corn - Cercospora leaf spot Gray leaf spot', 'Corn - Common rust',
'Corn - Northern Leaf Blight', 'Corn - healthy', 'Grape - Black rot',
'Grape - Esca (Black Measles)', 'Grape - Leaf blight (Isariopsis Leaf Spot)',
'Grape - healthy', 'Orange - Haunglongbing (Citrus greening)',
'Peach - Bacterial spot', 'Peach - healthy', 'Pepper, bell - Bacterial spot',
'Pepper, bell - healthy', 'Potato - Early blight', 'Potato - Late blight',
'Potato - healthy', 'Raspberry - healthy', 'Soybean - healthy',
'Squash - Powdery mildew', 'Strawberry - Leaf scorch', 'Strawberry - healthy',
'Tomato - Bacterial spot', 'Tomato - Early blight', 'Tomato - Late blight',
'Tomato - Leaf Mold', 'Tomato - Septoria leaf spot',
'Tomato - Spider mites Two-spotted spider mite', 'Tomato - Target Spot',
'Tomato - Tomato Yellow Leaf Curl Virus', 'Tomato - Tomato mosaic virus',
'Tomato - healthy']
model_path = r'model'
model = tf.keras.models.load_model(model_path)
img = Image.open(img)
img = img.resize((IMAGE_SIZE, IMAGE_SIZE))
img = img_to_array(img)
img = img.reshape((1, IMAGE_SIZE, IMAGE_SIZE, 3))
img = img/255.
class_probabilities = model.predict(x=img)
class_probabilities = np.squeeze(class_probabilities)
prediction_index = int(np.argmax(class_probabilities))
prediction_class = classes[prediction_index]
prediction_probability = class_probabilities[prediction_index] * 100
prediction_probability = round(prediction_probability, 2)
return prediction_class, prediction_probability
def load_model():
model=tf.keras.models.load_model('my_model.hdf5')
return model
model2=load_model()
def import_and_predict(image_data , model):
size=(256,256)
image = ImageOps.fit(image_data,size,Image.ANTIALIAS)
img=np.asarray(image)
img_reshape=img[np.newaxis,...]
prediction=model2.predict(img_reshape)
return prediction
st.markdown('<style>body{text-align: center;}</style>', unsafe_allow_html=True)
# Main app interface
st.title('Plant and Soil Classification')
st.write('By Kareem Negm')
st.image('appimage2.jpg')
img = st.file_uploader(label='Upload leaf image (PNG, JPG or JPEG)', type=['png', 'jpg', 'jpeg'])
st.write('Please specify the type of classifier (soil or plant)')
if img is not None:
    predict_button = st.button(label='Plant Disease Classifier')
prediction_class, prediction_probability = predict(img)
if predict_button:
st.image(image=img.read(), caption='Uploaded image')
st.subheader('Prediction')
st.info(f'Classification: {prediction_class}, Accuracy: {prediction_probability}%')
if prediction_class=='Tomato - Bacterial spot':
url = 'https://www.nei.nih.gov/learn-about-eye-health/eye-conditions-and-diseases/cataracts'
if st.button('Guidance page '):
st.write('the url: %s' % url)
elif prediction_class=='Tomato - Early blight':
url2 = 'https://www.pesches.com/blogs/news/how-to-fight-early-blight'
if st.button('Guidance page '):
st.write('the url: %s' % url2)
elif prediction_class=='Tomato - Late blight':
url3 = 'https://www.gardentech.com/disease/late-blight'
if st.button('Guidance page'):
st.write('the url: %s' % url3)
elif prediction_class=='Tomato - Leaf Mold':
url2 = 'https://www.rhs.org.uk/advice/profile?pid=468'
if st.button('Guidance page '):
st.write('the url: %s' % url2)
elif prediction_class=='Tomato - Septoria leaf spot':
url3 = 'https://www.missouribotanicalgarden.org/gardens-gardening/your-garden/help-for-the-home-gardener/advice-tips-resources/pests-and-problems/diseases/fungal-spots/septoria-leaf-spot-of-tomato.aspx'
if st.button('Guidance page'):
st.write('the url: %s' % url3)
elif prediction_class=='Tomato - Spider mites Two-spotted spider mite':
url2 = 'https://www.gardeningknowhow.com/plant-problems/pests/insects/two-spotted-spider-mite-control.htm'
if st.button('Guidance page '):
st.write('the url: %s' % url2)
elif prediction_class=='Tomato - Target Spot':
url3 = 'https://www.searlesgardening.com.au/control-target-spot-plants-and-vegetables'
if st.button('Guidance page'):
st.write('the url: %s' % url3)
elif prediction_class=='Tomato - Tomato Yellow Leaf Curl Virus':
url2 = 'https://www2.ipm.ucanr.edu/agriculture/tomato/tomato-yellow-leaf-curl/'
if st.button('Guidance page '):
st.write('the url: %s' % url2)
elif prediction_class=='Tomato - Tomato mosaic virus':
url3 = 'https://www.planetnatural.com/pest-problem-solver/plant-disease/mosaic-virus/'
if st.button('Guidance page'):
st.write('the url: %s' % url3)
predict_button2 = st.button(label='Soil Classifier')
if predict_button2:
image=Image.open(img)
st.image(image)
st.subheader('Prediction')
predictions=import_and_predict(image,model2)
class_names=['clay soil', 'gravel soil', 'loam soil', 'sand soil']
score = tf.nn.softmax(predictions[0])
st.info(f'Classification: {class_names[np.argmax(predictions)]}, Accuracy: { 100 * np.max(score)}%')
```
```
import escher
import escher.urls
import cobra
import cobra.test
import json
import os
from IPython.display import HTML
from copy import deepcopy
d = escher.urls.root_directory
print('Escher directory: %s' % d)
```
### Embed an Escher map in an IPython notebook
```
escher.list_available_maps()
b = escher.Builder(map_name='iJO1366.Fatty acid beta-oxidation')
b.display_in_notebook()
```
### Plot FBA solutions in Escher
```
model = cobra.io.load_json_model("iJO1366.json") # E. coli metabolic model
FBA_Solution = model.optimize() # FBA of the original model
print('Original Growth rate: %.9f' % FBA_Solution.f)
b = escher.Builder(map_name='iJO1366.Fatty acid beta-oxidation',
reaction_data=FBA_Solution.x_dict,
# color and size according to the absolute value
reaction_styles=['color', 'size', 'abs', 'text'],
# change the default colors
reaction_scale=[{'type': 'min', 'color': '#cccccc', 'size': 4},
{'type': 'mean', 'color': '#0000dd', 'size': 20},
{'type': 'max', 'color': '#ff0000', 'size': 40}],
# only show the primary metabolites
hide_secondary_metabolites=True,
highlight_missing = True)
b.display_in_notebook()
#b.display_in_browser()
# MAP EDITION
model_knockout = model.copy()
cobra.manipulation.delete_model_genes(model_knockout, ["b0693"]) #ODC - speF
cobra.manipulation.delete_model_genes(model_knockout, ["b2965"]) #ODC - speC
cobra.manipulation.delete_model_genes(model_knockout, ["b2937"]) #Agmatinase - speB.
knockout_FBA_solution = model_knockout.optimize() # FBA of the knockout
print('Knockout Growth rate: %.9f' % knockout_FBA_solution.f)
#PASS THE MODEL TO A NEW BUILDER
b = escher.Builder(map_name='iJO1366.Fatty acid beta-oxidation',
reaction_data=knockout_FBA_solution.x_dict,
# color and size according to the absolute value
reaction_styles=['color', 'size', 'abs', 'text'],
# change the default colors
reaction_scale=[{'type': 'min', 'color': '#cccccc', 'size': 4},
{'type': 'mean', 'color': '#0000dd', 'size': 20},
{'type': 'max', 'color': '#ff0000', 'size': 40}],
# only show the primary metabolites
hide_secondary_metabolites=True,
highlight_missing = True)
b.display_in_notebook()
#b.display_in_browser()
```
```
!pip install -U tensorflow-addons
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow_addons as tfa
import matplotlib.pyplot as plt
import cv2
import os
import scipy.io
import shutil
```
### Hyperparameters
```
image_size = 224
patch_size = 32
input_shape = (image_size, image_size, 3)
learning_rate = 0.001
weight_decay = 0.0001
batch_size = 32
num_epochs = 100
num_patches = (image_size // patch_size) ** 2
projection_dim = 64
num_heads = 4
# Size of the transformer layers
transformer_units = [
projection_dim * 2,
projection_dim,
]
transformer_layers = 4
mlp_head_units = [2048, 1024, 512, 64, 32] # Size of the dense layers
```
### Prepare dataset
```
path_to_download_file = keras.utils.get_file(
fname='caltech_101_zipped',
origin="https://data.caltech.edu/tindfiles/serve/e41f5188-0b32-41fa-801b-d1e840915e80/",
extract=True,
archive_format='zip',
cache_dir='./'
)
shutil.unpack_archive('datasets/caltech-101/101_ObjectCategories.tar.gz', './')
shutil.unpack_archive('datasets/caltech-101/Annotations.tar', './')
path_images = '101_ObjectCategories/airplanes/'
path_annot = 'Annotations/Airplanes_Side_2/'
image_paths = [f for f in os.listdir(path_images) if os.path.isfile(os.path.join(path_images, f))]
annot_paths = [f for f in os.listdir(path_annot) if os.path.isfile(os.path.join(path_annot, f))]
image_paths.sort()
annot_paths.sort()
image_paths[:10], annot_paths[:10]
images, targets = [], []
for i in range(len(annot_paths)):
annot = scipy.io.loadmat(os.path.join(path_annot, annot_paths[i]))['box_coord'][0]
top_left_x, top_left_y = annot[2], annot[0]
bottom_right_x, bottom_right_y = annot[3], annot[1]
image = keras.utils.load_img(os.path.join(path_images, image_paths[i]))
(w, h) = image.size[:2]
# Resize train images
if i < int(len(annot_paths) * 0.8):
image = image.resize((image_size, image_size))
images.append(keras.utils.img_to_array(image))
# Apply relative scaling
targets.append((
float(top_left_x) / w,
float(top_left_y) / h,
float(bottom_right_x) / w,
float(bottom_right_y) / h
))
(x_train, y_train) = (
np.asarray(images[: int(len(images) * 0.8)]),
np.asarray(targets[: int(len(targets) * 0.8)])
)
(x_test, y_test) = (
np.asarray(images[int(len(images) * 0.8) :]),
np.asarray(targets[int(len(targets) * 0.8) :])
)
```
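An added sanity-check cell (not in the original example): it reports how many samples ended up in each split. Note that only the training images were resized above, so the test images keep their original sizes and are resized again at evaluation time.
```
# Report the sizes of the train/test splits built above
print('Training samples:', len(x_train), '| target array shape:', y_train.shape)
print('Test samples    :', len(x_test), '| target array shape:', y_test.shape)
```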
### MLP layer
```
def mlp(x, hidden_units, dropout_rate):
for units in hidden_units:
x = layers.Dense(units, activation=tf.nn.gelu)(x)
x = layers.Dropout(dropout_rate)(x)
return x
```
### Patch creation layer
```
class Patches(layers.Layer):
def __init__(self, patch_size):
super().__init__()
self.patch_size = patch_size
def call(self, images):
batch_size = tf.shape(images)[0]
patches = tf.image.extract_patches(
images=images,
sizes=[1, self.patch_size, self.patch_size, 1],
strides=[1, self.patch_size, self.patch_size, 1],
rates=[1, 1, 1, 1],
padding='VALID'
)
return tf.reshape(patches, [batch_size, -1, patches.shape[-1]])
```
#### Display patches
```
plt.figure(figsize=(4, 4))
plt.imshow(x_train[0].astype('uint8'))
plt.axis('off')
patches = Patches(patch_size)(tf.convert_to_tensor([x_train[0]]))
print(f'Image size: {image_size}x{image_size}')
print(f'Patch size: {patch_size}x{patch_size}')
print(f'{patches.shape[1]} patches per image')
print(f'{patches.shape[-1]} elements per patch')
print(f'Patches shape: {patches.shape}')
n = int(np.sqrt(patches.shape[1]))
plt.figure(figsize=(4, 4))
for i, patch in enumerate(patches[0]):
ax = plt.subplot(n, n, i + 1)
patch_img = tf.reshape(patch, (patch_size, patch_size, 3))
plt.imshow(patch_img.numpy().astype('uint8'))
plt.axis('off')
```
### Patch encoder
```
class PatchEncoder(layers.Layer):
def __init__(self, num_patches, projection_dim):
super().__init__()
self.num_patches = num_patches
self.projection = layers.Dense(projection_dim)
self.position_embedding = layers.Embedding(
input_dim=num_patches, output_dim=projection_dim
)
def call(self, patch):
positions = tf.range(start=0, limit=self.num_patches, delta=1)
encoded = self.projection(patch) + self.position_embedding(positions)
return encoded
```
### Build the ViT model
```
def create_vit_object_detector(
input_shape,
patch_size,
num_patches,
projection_dim,
num_heads,
transformer_units,
transformer_layers,
mlp_head_units
):
inputs = layers.Input(shape=input_shape)
patches = Patches(patch_size)(inputs)
encoded_patches = PatchEncoder(num_patches, projection_dim)(patches)
for _ in range(transformer_layers):
# Layer norm
x1 = layers.LayerNormalization(epsilon=1e-6)(encoded_patches)
# MHA
attention_output = layers.MultiHeadAttention(
num_heads, projection_dim, dropout=0.1
)(x1, x1) # self attention
# Skip connection
x2 = layers.Add()([attention_output, encoded_patches])
# Layer norm
x3 = layers.LayerNormalization(epsilon=1e-6)(x2)
# MLP
x3 = mlp(x3, transformer_units, 0.1)
# Skip connection
encoded_patches = layers.Add()([x3, x2])
# Output of transformer blocks: [batch_size, num_patches, projection_dim]
# Create a [batch_size, projection_dim] tensor
# step1: layer norm
# step2: flatten [batch_size, num_patches * projection_dim]
representation = layers.LayerNormalization(epsilon=1e-6)(encoded_patches)
representation = layers.Flatten()(representation)
representation = layers.Dropout(0.3)(representation)
print(representation.get_shape())
# mlp
features = mlp(representation, mlp_head_units, dropout_rate=0.3)
# Final four neurons that output bounding box
bounding_box = layers.Dense(4)(features)
return keras.Model(inputs=inputs, outputs=bounding_box)
```
### Run the experiment
```
def run_experiment(model, learning_rate, weight_decay, batch_size, num_epochs):
optimizer = tfa.optimizers.AdamW(
learning_rate=learning_rate, weight_decay=weight_decay
)
model.compile(optimizer=optimizer, loss=keras.losses.MeanSquaredError())
checkpoint_filepath = './'
checkpoint_callback = keras.callbacks.ModelCheckpoint(
checkpoint_filepath, monitor='val_loss',
save_best_only=True, save_weights_only=True
)
history = model.fit(
x=x_train, y=y_train,
batch_size=batch_size,
epochs=num_epochs,
validation_split=0.1,
callbacks=[
checkpoint_callback, keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)
]
)
return history
vit_object_detector = create_vit_object_detector(
input_shape, patch_size, num_patches, projection_dim, num_heads,
transformer_units, transformer_layers, mlp_head_units
)
history = run_experiment(vit_object_detector, learning_rate, weight_decay, batch_size, num_epochs)
```
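An added sketch (not in the original example) to visualize the training and validation loss curves stored in the `history` object returned above.
```
# Plot the MSE loss curves recorded during training
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='val loss')
plt.xlabel('Epoch')
plt.ylabel('MSE loss')
plt.legend()
plt.show()
```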
### Evaluate the model
```
def bounding_box_iou(box_predicted, box_truth):
top_x_intersect = max(box_predicted[0], box_truth[0])
top_y_intersect = max(box_predicted[1], box_truth[1])
bottom_x_intersect = min(box_predicted[2], box_truth[2])
bottom_y_intersect = min(box_predicted[3], box_truth[3])
intersection_area = max(0, bottom_x_intersect - top_x_intersect + 1) * max(0, bottom_y_intersect - top_y_intersect + 1)
box_predicted_area = \
(box_predicted[2] - box_predicted[0] + 1) * \
(box_predicted[3] - box_predicted[1] + 1)
box_truth_area = \
(box_truth[2] - box_truth[0] + 1) * \
(box_truth[3] - box_truth[1] + 1)
return intersection_area / float(box_predicted_area + box_truth_area - intersection_area)
import matplotlib.patches as plot_patches
def get_bbox(coords, w, h):
top_left_x, top_left_y = int(coords[0] * w), int(coords[1] * h)
bottom_right_x, bottom_right_y = int(coords[2] * w), int(coords[3] * h)
bbox = [top_left_x, top_left_y, bottom_right_x, bottom_right_y]
return bbox
def draw_bbox(bbox, ax, is_preds):
top_left_x, top_left_y = bbox[:2]
bottom_right_x, bottom_right_y = bbox[2:]
rect = plot_patches.Rectangle(
(top_left_x, top_left_y),
bottom_right_x - top_left_x,
bottom_right_y - top_left_y,
facecolor='none',
edgecolor='red',
linewidth=1
)
label = 'Predicted' if is_preds else 'Target'
ax.add_patch(rect)
ax.set_xlabel(
label + ': ' +
str(top_left_x) + ', ' +
str(top_left_y) + ', ' +
str(bottom_right_x) + ', ' +
        str(bottom_right_y)
)
mean_iou = 0.0
for i, input_image in enumerate(x_test[:10]):
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 15))
im = input_image
ax1.imshow(im.astype('uint8'))
ax2.imshow(im.astype('uint8'))
input_image = cv2.resize(
input_image, (image_size, image_size)
)
input_image = np.expand_dims(input_image, axis=0)
preds = vit_object_detector.predict(input_image)[0]
(h, w) = im.shape[0:2]
box_predicted = get_bbox(preds, w, h)
draw_bbox(box_predicted, ax1, is_preds=True)
# Draw truth bounding box
box_truth = get_bbox(y_test[i], w, h)
draw_bbox(box_truth, ax2, is_preds=False)
mean_iou += bounding_box_iou(box_predicted, box_truth)
print(f'mean_iou: {mean_iou / len(x_test[:10])}')
```
# CHAracter Recognition in Natural Images (CHARIN)
### This notebook is an attempt to walk through the entire code step-by-step, explaining the different blocks, to give an overview of the project.
```
%load_ext autoreload
%autoreload 2
import os
os.getcwd()
import sys, glob, shutil
os.chdir(os.path.dirname(os.getcwd()))
os.getcwd()
```
#### Adding the "src/networks" folder to the path to enable in-line imports of the network files using importlib
```
import os, sys
sys.path.append(os.path.abspath('./src/networks'))
# To handle OOM errors by limiting GPU memory usage
import tensorflow as tf
import keras.backend.tensorflow_backend as ktf
from keras import backend as K
def get_session():
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction= 0.9,
allow_growth=True)
return tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
ktf.set_session(get_session())
#Standard imports
import pandas as pd
import importlib
import pickle
import collections
import numpy as np
from keras.models import load_model
from keras.optimizers import Adam, RMSprop, Nadam, SGD
```
### Loading all the custom functions that we have written. These keep the training script tidy and make debugging easier.
```
#Custom imports
import config
from src.training import data_loader
from src.training.data_generator import DataGenerator
from src.training.keras_callbacks import get_callbacks
from src.training.training_modes import training_scratch, training_checkpoint, fine_tune, transfer_learning
from src.training.keras_history import generate_stats
from src.training.plots import save_plots
```
## Reading Config
```
base_path = config.base_path
exp_name = config.exp_name
#Params
#Constants
size = config.size
classes = config.nclasses
chs = config.chs
#Training Params
epochs = config.epochs
learning_rate = config.learning_rate
batch_size = config.batch_size
initial_epoch = config.initial_epoch
f = open(config.class_weights_path, 'rb')
class_weights = pickle.load(f)
print("class_weights are:")
print(collections.OrderedDict(sorted(class_weights.items())))
training_frm_scratch = config.training_frm_scratch
training_frm_chkpt = config.training_frm_chkpt
fine_tuning = config.fine_tuning
transfer_lr = config.transfer_lr
trial = config.trial
if sum((training_frm_scratch, training_frm_chkpt, fine_tuning, transfer_lr)) != 1:
raise Exception("Conflicting training modes")
```
## Building data source
```
X_train, y_train, X_val, y_val, X_test, y_test = data_loader.build_source(base_path)
len(X_train), len(X_val)
if trial:
print("Running in trail mode")
samples = config.samples
X_train = X_train[:samples]
y_train = y_train[:samples]
X_val = X_val[:samples]
y_val = y_val[:samples]
X_test = X_test[:samples]
y_test = y_test[:samples]
```
## Data Generator
```
train_spe = int(np.floor(len(X_train)/ batch_size)) #spe = Steps per epoch
val_spe = int(np.floor(len(X_val)/batch_size))
print(train_spe, val_spe)
# Initialise training and validation generators
preprocess = getattr(importlib.import_module(config.model),"pre_process")
train_generator = DataGenerator(base_path, file_paths =X_train, labels =y_train, preprocess = preprocess,
batch_size = batch_size, dim=(size,size), n_channels=chs, n_classes= classes,
shuffle=True)
validation_generator = DataGenerator(base_path, file_paths =X_val, labels =y_val, preprocess = preprocess,
batch_size = batch_size, dim=(size,size), n_channels= chs, n_classes= classes,
shuffle=True)
X_t,y_t = train_generator.__getitem__(2)
X_v,y_v = validation_generator.__getitem__(2)
X_t.shape, y_t.shape, X_v.shape, y_v.shape
```
### Defining a superset of loss, optimiser and metric functions. The user can select any of these options via the config file.
```
loss_class = {'cat_cross': 'categorical_crossentropy',
              'sp_cat_cross': 'sparse_categorical_crossentropy'}
metric_class = {'acc':'accuracy'}
optimiser_class = {'adam': (Adam, {}),
'nadam': (Nadam, {}),
'rmsprop': (RMSprop, {}),
'sgd':(SGD, {'decay':1e-6, 'momentum':0.90, 'nesterov':True})}
```
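To make the selection mechanism concrete, here is a hedged usage sketch of how these dictionaries could be used to compile a model from config values. The chosen keys (`'sgd'`, `'cat_cross'`, `'acc'`) and the throwaway `demo_model` are illustrative assumptions; in this project the actual selection happens inside the functions imported from `src.training.training_modes`.
```
from keras.models import Sequential
from keras.layers import Dense

# Hypothetical usage sketch (the chosen keys would normally come from the config file)
opt_cls, opt_kwargs = optimiser_class['sgd']
optimizer = opt_cls(lr=learning_rate, **opt_kwargs)      # Keras 2.x optimizers accept `lr`
demo_model = Sequential([Dense(classes, activation='softmax', input_shape=(size * size * chs,))])
demo_model.compile(optimizer=optimizer,
                   loss=loss_class['cat_cross'],
                   metrics=[metric_class['acc']])
```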
## Initialise Model
```
if training_frm_scratch:
model, gpu_model = training_scratch(optimiser_class, loss_class, metric_class)
elif training_frm_chkpt:
model, gpu_model = training_checkpoint()
elif fine_tuning:
model, gpu_model = fine_tune(optimiser_class, loss_class, metric_class)
elif transfer_lr:
model, gpu_model = transfer_learning(optimiser_class, loss_class, metric_class)
```
### Print the model params
```
print("Model training params:")
trainable_count = int(np.sum([K.count_params(p) for p in set(model.trainable_weights)]))
non_trainable_count = int(np.sum([K.count_params(p) for p in set(model.non_trainable_weights)]))
params = (trainable_count + non_trainable_count,trainable_count, non_trainable_count)
print('Total params: {:,}'.format(params[0]))
print('Trainable params: {:,}'.format(params[1]))
print('Non-trainable params: {:,}'.format(params[2]))
```
### Set the callbacks to be used for training
```
#Set callbacks
callbacks_list = get_callbacks(model)
```
## Start/Resume training
```
# Start/resume training
if config.no_of_gpu > 1:
history = gpu_model.fit_generator(steps_per_epoch= train_spe,
generator=train_generator,
epochs=epochs,
workers=4,
use_multiprocessing=True,
validation_data = validation_generator,
validation_steps = val_spe,
initial_epoch = initial_epoch,
class_weight = class_weights,
callbacks = callbacks_list)
else:
history = model.fit_generator(steps_per_epoch= train_spe,
generator=train_generator,
epochs=epochs,
workers=4,
use_multiprocessing=True,
validation_data = validation_generator,
validation_steps = val_spe,
initial_epoch = initial_epoch,
class_weight = class_weights,
callbacks = callbacks_list)
#Save final complete model
filename = "model_ep_"+str(int(epochs))+"_batch_"+str(int(batch_size))
model.save("./data/"+exp_name+"/"+filename+".h5")
print("Saved complete model file at: ", filename+"_model"+".h5")
#Save history
history_to_save = generate_stats(history, config)
pd.DataFrame(history_to_save).to_csv("./data/"+exp_name+"/"+filename + "_train_results.csv")
save_plots(history, exp_name)
```
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Array/eigen_analysis.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Array/eigen_analysis.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=Array/eigen_analysis.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Array/eigen_analysis.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
# Compute the Principal Components of a Landsat 8 image.
# Load a landsat 8 image, select the bands of interest.
image = ee.Image('LANDSAT/LC8_L1T/LC80440342014077LGN00') \
.select(['B2', 'B3', 'B4', 'B5', 'B6', 'B7', 'B10', 'B11'])
# Display the input imagery and the region in which to do the PCA.
region = image.geometry()
Map.centerObject(ee.FeatureCollection(region), 10)
Map.addLayer(ee.Image().paint(region, 0, 2), {}, 'Region')
Map.addLayer(image, {'bands': ['B5', 'B4', 'B2'], 'min': 0, 'max': 20000}, 'Original Image')
# Set some information about the input to be used later.
scale = 30
bandNames = image.bandNames()
# Mean center the data to enable a faster covariance reducer
# and an SD stretch of the principal components.
meanDict = image.reduceRegion(**{
'reducer': ee.Reducer.mean(),
'geometry': region,
'scale': scale,
'maxPixels': 1e9
})
means = ee.Image.constant(meanDict.values(bandNames))
centered = image.subtract(means)
# This helper function returns a list of new band names.
def getNewBandNames(prefix):
seq = ee.List.sequence(1, bandNames.length())
return seq.map(lambda b: ee.String(prefix).cat(ee.Number(b).int().format()))
# This function accepts mean centered imagery, a scale and
# a region in which to perform the analysis. It returns the
# Principal Components (PC) in the region as a new image.
def getPrincipalComponents(centered, scale, region):
# Collapse the bands of the image into a 1D array per pixel.
arrays = centered.toArray()
# Compute the covariance of the bands within the region.
covar= arrays.reduceRegion(**{
'reducer': ee.Reducer.centeredCovariance(),
'geometry': region,
'scale': scale,
'maxPixels': 1e9
})
# Get the 'array' covariance result and cast to an array.
# This represents the band-to-band covariance within the region.
covarArray = ee.Array(covar.get('array'))
# Perform an eigen analysis and slice apart the values and vectors.
eigens = covarArray.eigen()
# This is a P-length vector of Eigenvalues.
eigenValues = eigens.slice(1, 0, 1)
# This is a PxP matrix with eigenvectors in rows.
eigenVectors = eigens.slice(1, 1)
# Convert the array image to 2D arrays for matrix computations.
arrayImage = arrays.toArray(1)
# Left multiply the image array by the matrix of eigenvectors.
principalComponents = ee.Image(eigenVectors).matrixMultiply(arrayImage)
# Turn the square roots of the Eigenvalues into a P-band image.
sdImage = ee.Image(eigenValues.sqrt()) \
.arrayProject([0]).arrayFlatten([getNewBandNames('sd')])
# Turn the PCs into a P-band image, normalized by SD.
return principalComponents \
.arrayProject([0]) \
.arrayFlatten([getNewBandNames('pc')]) \
        .divide(sdImage)
# Get the PCs at the specified scale and in the specified region
pcImage = getPrincipalComponents(centered, scale, region)
Map.addLayer(pcImage.select(0), {}, 'Image')
for i in range(0, bandNames.length().getInfo()):
band = pcImage.bandNames().get(i).getInfo()
Map.addLayer(pcImage.select([band]), {'min': -2, 'max': 2}, band)
```
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
# FAQs for Regression, MAP and MLE
* So far we have focused on regression. We began with the polynomial regression example where we have training data $\mathbf{X}$ and associated training labels $\mathbf{t}$ and we use these to estimate weights, $\mathbf{w}$ to fit a polynomial curve through the data:
\begin{equation}
y(x, \mathbf{w}) = \sum_{j=0}^M w_j x^j
\end{equation}
* We derived how to estimate the weights using both maximum likelihood estimation (MLE) and maximum a-posteriori estimation (MAP).
* Then, last class we said that we can generalize this further using basis functions (instead of only raising x to the jth power):
\begin{equation}
y(x, \mathbf{w}) = \sum_{j=0}^M w_j \phi_j(x)
\end{equation}
where $\phi_j(\cdot)$ is any basis function you choose to use on the data.
* *Why is regression useful?*
* Regression is a common type of machine learning problem where we want to map inputs to a value (instead of a class label). For example, in our first class we mapped silhouettes of individuals to their age. So regression is an important technique whenever you want to map from a data set to another value of interest. *Can you think of other examples of regression problems?*
* *Why would I want to use other basis functions?*
* So, we began with the polynomial curve fitting example just so we can have a concrete example to work through but polynomial curve fitting is not the best approach for every problem. You can think of the basis functions as methods to extract useful features from your data. For example, if it is more useful to compute distances between data points (instead of raising each data point to various powers), then you should do that instead!
* *Why did we go through all the math derivations? You could've just provided the MLE and MAP solution to us since that is all we need in practice to code this up.*
* In practice, you may have unique requirements for a particular problem and will need to decide upon and set up a different data likelihood and prior. For example, we assumed Gaussian noise for our regression example with a Gaussian zero-mean prior on the weights. You may have an application in which you know the noise is Gamma distributed and have other requirements for the weights that you want to incorporate into the prior. Knowing the process used to derive the estimate for the weights in this case is a helpful guide for deriving your solution. (Also, on a practical note for the course, stepping through the math served as a quick review of various linear algebra, calculus and statistics topics that will be useful throughout the course.)
* *What is overfitting and why is it bad?*
* The goal of a supervised machine learning algorithm is to be able to learn a mapping from inputs to desired outputs from training data. When you overfit, you memorize your training data such that you can recreate the samples perfectly. This often comes about when you have a model that is more complex than your underlying true model and/or you do not have the data to support such a complex model. However, you do this at the cost of generalization. When you overfit, you do very well on training data but poorly on test (or unseen) data. So, to have useful trained machine learning model, you need to avoid overfitting. You can avoid overfitting through a number of ways. The methods we discussed in class are using *enough* data and regularization. Overfitting is related to the "bias-variance trade-off" (discussed in section 3.2 of the reading). There is a trade-off between bias and variance. Complex models have low bias and high variance (which is another way of saying, they fit the training data very well but may oscillate widely between training data points) where as rigid (not-complex-enough) models have high bias and low variance (they do not oscillate widely but may not fit the training data very well either).
* *What is the goal of MLE and MAP?*
* MLE and MAP are general approaches for estimating parameter values. For example, you may have data from some unknown distribution that you would like to model as best you can with a Gaussian distribution. You can use MLE or MAP to estimate the Gaussian parameters to fit the data and determine your estimate at what the true (but unknown) distribution is.
* *Why would you use MAP over MLE (or vice versa)?*
* As we saw in class, MAP is a method to add in other terms to trade off against the data likelihood during optimization. It is a mechanism to incorporate our "prior belief" about the parameters. In our example in class, we used the MAP solution for the weights in regression to help prevent overfitting by imposing the assumptions that the weights should be small in magnitude. When you have enough data, the MAP and the MLE solution converge to the same solution. The amount of data you need for this to occur varies based on how strongly you impose the prior (which is done using the variance of the prior distribution).
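As a concrete illustration of the last two answers, here is a minimal numerical sketch (added here, not part of the original notes) comparing the MLE and MAP weight estimates for the regression setting above; with little data the two differ, and they converge as more data is added:
```
import numpy as np

np.random.seed(0)
N, M = 20, 6                                            # number of samples, number of polynomial terms
x = np.random.uniform(-1, 1, N)
t = np.sin(2 * np.pi * x) + 0.1 * np.random.randn(N)    # noisy targets
Phi = np.vander(x, M, increasing=True)                  # polynomial design matrix (1, x, ..., x^5)

lam = 1e-2                                              # ratio of noise variance to prior variance
w_mle = np.linalg.solve(Phi.T @ Phi, Phi.T @ t)                     # maximum likelihood weights
w_map = np.linalg.solve(lam * np.eye(M) + Phi.T @ Phi, Phi.T @ t)   # MAP (regularized) weights
print('MLE weights:', np.round(w_mle, 2))
print('MAP weights:', np.round(w_map, 2))
```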
# Probabilistic Generative Models
* So far we have focused on regression. Today we will begin to discuss classification.
* Suppose we have training data from two classes, $C_1$ and $C_2$, and we would like to train a classifier that assigns incoming test points a label indicating whether they belong to class 1 or class 2.
* There are *many* classifiers in the machine learning literature. We will cover a few in this class. Today we will focus on probabilistic generative approaches for classification.
* A *generative* approach for classification is one in which we estimate the parameters for distributions that generate the data for each class. Then, when we have a test point, we can compute the posterior probability of that point belonging to each class and assign the point to the class with the highest posterior probability.
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal
%matplotlib inline
mean1 = [-1.5, -1]
mean2 = [1, 1]
cov1 = [[1,0], [0,2]]
cov2 = [[2,.1],[.1,.2]]
N1 = 250
N2 = 100
def generateData(mean1, mean2, cov1, cov2, N1=100, N2=100):
# We are generating data from two Gaussians to represent two classes.
# In practice, we would not do this - we would just have data from the problem we are trying to solve.
class1X = np.random.multivariate_normal(mean1, cov1, N1)
class2X = np.random.multivariate_normal(mean2, cov2, N2)
fig = plt.figure()
ax = fig.add_subplot(*[1,1,1])
ax.scatter(class1X[:,0], class1X[:,1], c='r')
ax.scatter(class2X[:,0], class2X[:,1])
plt.show()
return class1X, class2X
class1X, class2X = generateData(mean1, mean2,cov1,cov2, N1,N2)
```
In the data we generated above, we have a "red" class and a "blue" class. When we are given a test sample, we will want to assign the label of either red or blue.
We can compute the posterior probability for class $C_1$ as follows:
\begin{eqnarray}
p(C_1 | x) &=& \frac{p(x|C_1)p(C_1)}{p(x)}\\
&=& \frac{p(x|C_1)p(C_1)}{p(x|C_1)p(C_1) + p(x|C_2)p(C_2)}\\
\end{eqnarray}
We can similarly compute the posterior probability for class $C_2$:
\begin{eqnarray}
p(C_2 | x) &=& \frac{p(x|C_2)p(C_2)}{p(x|C_1)p(C_1) + p(x|C_2)p(C_2)}\\
\end{eqnarray}
Note that $p(C_1|x) + p(C_2|x) = 1$.
So, to train the classifier, what we need is to determine the parametric forms and estimate the parameters for $p(x|C_1)$, $p(x|C_2)$, $p(C_1)$ and $p(C_2)$.
For example, we can assume that the data from both $C_1$ and $C_2$ are distributed according to Gaussian distributions. In this case,
\begin{eqnarray}
p(\mathbf{x}|C_k) = \frac{1}{(2\pi)^{D/2}}\frac{1}{|\Sigma|^{1/2}}\exp\left\{ - \frac{1}{2} (\mathbf{x}-\mu_k)^T\Sigma_k^{-1}(\mathbf{x}-\mu_k)\right\}
\end{eqnarray}
Given the assumption of the Gaussian form, how would you estimate the parameters for $p(x|C_1)$ and $p(x|C_2)$? *You can use the maximum likelihood estimates for the mean and covariance!*
The MLE estimate for the mean of class $C_k$ is:
\begin{eqnarray}
\mu_{k,MLE} = \frac{1}{N_k} \sum_{n \in C_k} \mathbf{x}_n
\end{eqnarray}
where $N_k$ is the number of training data points that belong to class $C_k$
The MLE estimate for the covariance of class $C_k$ is:
\begin{eqnarray}
\Sigma_k = \frac{1}{N_k} \sum_{n \in C_k} (\mathbf{x}_n - \mu_{k,MLE})(\mathbf{x}_n - \mu_{k,MLE})^T
\end{eqnarray}
We can determine the values for $p(C_1)$ and $p(C_2)$ from the number of data points in each class:
\begin{eqnarray}
p(C_k) = \frac{N_k}{N}
\end{eqnarray}
where $N$ is the total number of data points.
```
#Estimate the mean and covariance for each class from the training data
mu1 = np.mean(class1X, axis=0)
print(mu1)
cov1 = np.cov(class1X.T)
print(cov1)
mu2 = np.mean(class2X, axis=0)
print(mu2)
cov2 = np.cov(class2X.T)
print(cov2)
# Estimate the prior for each class
pC1 = class1X.shape[0]/(class1X.shape[0] + class2X.shape[0])
print(pC1)
pC2 = class2X.shape[0]/(class1X.shape[0] + class2X.shape[0])
print(pC2)
#We now have all parameters needed and can compute values for test samples
from scipy.stats import multivariate_normal
x = np.linspace(-5, 4, 100)
y = np.linspace(-6, 6, 100)
xm,ym = np.meshgrid(x, y)
X = np.dstack([xm,ym])
#look at the pdf for class 1
y1 = multivariate_normal.pdf(X, mean=mu1, cov=cov1)
plt.imshow(y1)
#look at the pdf for class 2
y2 = multivariate_normal.pdf(X, mean=mu2, cov=cov2);
plt.imshow(y2)
#Look at the posterior for class 1
pos1 = (y1*pC1)/(y1*pC1 + y2*pC2 );
plt.imshow(pos1)
#Look at the posterior for class 2
pos2 = (y2*pC2)/(y1*pC1 + y2*pC2 );
plt.imshow(pos2)
#Look at the decision boundary
plt.imshow(pos1>pos2)
```
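As a concrete illustration of the assignment rule described above, here is a short added sketch that classifies a single hypothetical test point by comparing the two posteriors, reusing the parameters estimated in the cells above:
```
# Classify a single hypothetical test point using the parameters estimated above
x_new = np.array([0.0, 0.5])                                    # assumed test sample (illustrative)
p1 = multivariate_normal.pdf(x_new, mean=mu1, cov=cov1) * pC1   # unnormalized posterior of C1
p2 = multivariate_normal.pdf(x_new, mean=mu2, cov=cov2) * pC2   # unnormalized posterior of C2
print('p(C1|x) = %.3f, p(C2|x) = %.3f' % (p1 / (p1 + p2), p2 / (p1 + p2)))
print('Assigned class:', 'C1 (red)' if p1 > p2 else 'C2 (blue)')
```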
*How did we come up with using the MLE solution for the mean and variance? How did we determine how to compute $p(C_1)$ and $p(C_2)$?*
* We can define a likelihood for this problem and maximize it!
\begin{eqnarray}
p(\mathbf{t}, \mathbf{X}|\pi, \mu_1, \mu_2, \Sigma_1, \Sigma_2) = \prod_{n=1}^N \left[\pi N(x_n|\mu_1, \Sigma_1)\right]^{t_n}\left[(1-\pi)N(x_n|\mu_2, \Sigma_2) \right]^{1-t_n}
\end{eqnarray}
* *How would we maximize this?* As usual, we would use our "trick" and take the log of the likelihood function. Then, we would take the derivative with respect to each parameter we are interested in, set the derivative to zero, and solve for the parameter of interest.
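* For example, carrying out this procedure for the prior parameter $\pi$ (a step not written out above) recovers the estimate we used earlier:
\begin{eqnarray}
\frac{\partial}{\partial \pi} \ln p(\mathbf{t}, \mathbf{X}|\pi, \mu_1, \mu_2, \Sigma_1, \Sigma_2) = \sum_{n=1}^N \left( \frac{t_n}{\pi} - \frac{1-t_n}{1-\pi}\right) = 0 \quad \Rightarrow \quad \pi_{MLE} = \frac{1}{N}\sum_{n=1}^N t_n = \frac{N_1}{N}
\end{eqnarray}
which matches $p(C_1) = \frac{N_1}{N}$ above.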
## Reading Assignment: Read Section 4.2 and Section 2.5.2
# Riskfolio-Lib Tutorial:
<br>__[Financionerioncios](https://financioneroncios.wordpress.com)__
<br>__[Orenji](https://www.orenj-i.net)__
<br>__[Riskfolio-Lib](https://riskfolio-lib.readthedocs.io/en/latest/)__
<br>__[Dany Cajas](https://www.linkedin.com/in/dany-cajas/)__
<a href='https://ko-fi.com/B0B833SXD' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://cdn.ko-fi.com/cdn/kofi1.png?v=2' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
## Tutorial 21: Constraints on Return and Risk Measures
## 1. Downloading the data:
```
import numpy as np
import pandas as pd
import yfinance as yf
import warnings
warnings.filterwarnings("ignore")
pd.options.display.float_format = '{:.4%}'.format
# Date range
start = '2016-01-01'
end = '2019-12-30'
# Tickers of assets
assets = ['JCI', 'TGT', 'CMCSA', 'CPB', 'MO', 'APA', 'MMC', 'JPM',
'ZION', 'PSA', 'BAX', 'BMY', 'LUV', 'PCAR', 'TXT', 'TMO',
'DE', 'MSFT', 'HPQ', 'SEE', 'VZ', 'CNP', 'NI', 'T', 'BA']
assets.sort()
# Downloading data
data = yf.download(assets, start = start, end = end)
data = data.loc[:,('Adj Close', slice(None))]
data.columns = assets
# Calculating returns
Y = data[assets].pct_change().dropna()
display(Y.head())
```
## 2. Estimating Mean Variance Portfolios
### 2.1 Calculating the portfolio that maximizes Sharpe ratio.
```
import riskfolio as rp
# Building the portfolio object
port = rp.Portfolio(returns=Y)
# Calculating optimal portfolio
# Select method and estimate input parameters:
method_mu='hist' # Method to estimate expected returns based on historical data.
method_cov='hist' # Method to estimate covariance matrix based on historical data.
port.assets_stats(method_mu=method_mu, method_cov=method_cov, d=0.94)
# Estimate optimal portfolio:
model='Classic' # Could be Classic (historical), BL (Black Litterman) or FM (Factor Model)
rm = 'MV' # Risk measure used, this time will be variance
obj = 'Sharpe' # Objective function, could be MinRisk, MaxRet, Utility or Sharpe
hist = True # Use historical scenarios for risk measures that depend on scenarios
rf = 0 # Risk free rate
l = 0 # Risk aversion factor, only useful when obj is 'Utility'
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.T)
```
### 2.2 Plotting portfolio composition
```
# Plotting the composition of the portfolio
ax = rp.plot_pie(w=w, title='Sharpe Mean Variance', others=0.05, nrow=25, cmap = "tab20",
height=6, width=10, ax=None)
```
### 2.3 Calculate Efficient Frontier
```
points = 50 # Number of points of the frontier
frontier = port.efficient_frontier(model=model, rm=rm, points=points, rf=rf, hist=hist)
display(frontier.T.head())
# Plotting the efficient frontier in Std. Dev. dimension
label = 'Max Risk Adjusted Return Portfolio' # Title of point
mu = port.mu # Expected returns
cov = port.cov # Covariance matrix
returns = port.returns # Returns of the assets
ax = rp.plot_frontier(w_frontier=frontier, mu=mu, cov=cov, returns=returns, rm=rm,
rf=rf, alpha=0.05, cmap='viridis', w=w, label=label,
marker='*', s=16, c='r', height=6, width=10, ax=None)
# Plotting the efficient frontier in CVaR dimension
ax = rp.plot_frontier(w_frontier=frontier, mu=mu, cov=cov, returns=returns, rm='CVaR',
rf=rf, alpha=0.05, cmap='viridis', w=w, label=label,
marker='*', s=16, c='r', height=6, width=10, ax=None)
```
We can see that in this case, the efficient frontier made using mean-variance optimization has a similar form when we plot it using CVaR as risk measure.
```
# Plotting the efficient frontier in Max Drawdown dimension
ax = rp.plot_frontier(w_frontier=frontier, mu=mu, cov=cov, returns=returns, rm='MDD',
rf=rf, alpha=0.05, cmap='viridis', w=w, label=label,
marker='*', s=16, c='r', height=6, width=10, ax=None)
```
We can see that in this case, the efficient frontier made using mean-variance optimization looks like a snake when we plot it using Max Drawdown as risk measure.
## 3. Building Portfolios with Constraints on Return and Risk Measures
### 3.1 Estimating Risk Limits for the Available Set of Assets
In this first step we estimate the minimum and maximum values of each risk measure. I recommend this step because in large-scale problems it is not practical to build the entire efficient frontier; it is faster to find only the first and last points of the frontier for each risk measure. Using these portfolios we can find the lowest and highest values of each risk measure that can be obtained with the available set of assets.
```
risk = ['MV', 'CVaR', 'MDD']
label = ['Std. Dev.', 'CVaR', 'Max Drawdown']
alpha = 0.05
for i in range(3):
limits = port.frontier_limits(model=model, rm=risk[i], rf=rf, hist=hist)
risk_min = rp.Sharpe_Risk(limits['w_min'], cov=cov, returns=returns, rm=risk[i], rf=rf, alpha=alpha)
risk_max = rp.Sharpe_Risk(limits['w_max'], cov=cov, returns=returns, rm=risk[i], rf=rf, alpha=alpha)
if 'Drawdown' in label[i]:
factor = 1
else:
factor = 252**0.5
print('\nMin Return ' + label[i] + ': ', (mu @ limits['w_min']).item() * 252)
print('Max Return ' + label[i] + ': ', (mu @ limits['w_max']).item() * 252)
print('Min ' + label[i] + ': ', risk_min * factor)
print('Max ' + label[i] + ': ', risk_max * factor)
```
We can see from the information above that if our objective function uses Std. Dev. as the risk measure, we can only obtain returns between 12.85% and 31.17%, and Std. Dev. between 10.37% and 21.92%. The same applies to the other risk measures. This is very useful because, for example, if we put a constraint of max CVaR below 23.75%, the optimization problem doesn't have a solution.
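Before imposing the constraints in the next subsection, here is an added sketch of how these limits can serve as a feasibility check; the proposed bound is just an illustrative number and the calls mirror the ones used in the loop above.
```
# Added feasibility sketch: check a candidate annualized CVaR cap against the attainable range
proposed_upper_cvar = 0.26                                   # illustrative annual CVaR bound
limits_cvar = port.frontier_limits(model=model, rm='CVaR', rf=rf, hist=hist)
cvar_min = rp.Sharpe_Risk(limits_cvar['w_min'], cov=cov, returns=returns,
                          rm='CVaR', rf=rf, alpha=0.05) * 252**0.5
if proposed_upper_cvar < cvar_min:
    print('Infeasible: the cap is below the minimum attainable CVaR of %.2f%%' % (100 * cvar_min))
else:
    print('Feasible: the cap is above the minimum attainable CVaR of %.2f%%' % (100 * cvar_min))
```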
### 3.2 Calculating the portfolio that maximizes the Sharpe ratio with constraints on Return, CVaR and Max Drawdown
```
rm = 'MV' # Risk measure
# Constraint on minimum Return
port.lowerret = 0.16/252 # We transform annual return to daily return
# Constraint on maximum CVaR
port.upperCVaR = 0.26/252**0.5 # We transform annual CVaR to daily CVaR
# Constraint on maximum Max Drawdown
port.uppermdd = 0.131 # Drawdown risk measures don't need to be rescaled
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.T)
```
### 3.3 Plotting portfolio composition
```
ax = rp.plot_pie(w=w, title='Sharpe Mean CVaR', others=0.05, nrow=25, cmap = "tab20",
height=6, width=10, ax=None)
```
### 3.4 Calculate Efficient Frontier
```
points = 50 # Number of points of the frontier
frontier = port.efficient_frontier(model=model, rm=rm, points=points, rf=rf, hist=hist)
display(frontier.T.head())
# Plotting the efficient frontier in Std. Dev. dimension
label = 'Max Risk Adjusted Return Portfolio' # Title of point
ax = rp.plot_frontier(w_frontier=frontier, mu=mu, cov=cov, returns=returns, rm=rm,
rf=rf, alpha=0.05, cmap='viridis', w=w, label=label,
marker='*', s=16, c='r', height=6, width=10, ax=None)
```
We can see that the new efficient frontier has a lower bound on returns of 16%.
```
# Plotting the efficient frontier in CVaR dimension
ax = rp.plot_frontier(w_frontier=frontier, mu=mu, cov=cov, returns=returns, rm='CVaR',
rf=rf, alpha=0.05, cmap='viridis', w=w, label=label,
marker='*', s=16, c='r', height=6, width=10, ax=None)
```
We can see that the new efficient frontier has an upper bound on CVaR of 26%.
```
# Plotting the efficient frontier in Max Drawdown dimension
ax = rp.plot_frontier(w_frontier=frontier, mu=mu, cov=cov, returns=returns, rm='MDD',
rf=rf, alpha=0.05, cmap='viridis', w=w, label=label,
marker='*', s=16, c='r', height=6, width=10, ax=None)
```
We can see that the new efficient frontier has an upper bound on Max Drawdown of 13.1%.
```
import numpy as np
import pandas as pd
import datetime as dt

# Load the raw parking-occupancy collection sheet, indexed by the absolute spot number
all_parking_data = pd.read_csv("collection_sheet_unmodified.csv", index_col='Absolute Spot Number')
display(all_parking_data)

# sort_values is not in-place by default, so assign the result back
all_parking_data = all_parking_data.sort_values(by=['Street Name'])
display(all_parking_data)

# Occupied / unoccupied counts at 8:00 AM, grouped by route and side of street, and by street name
display(all_parking_data.loc[all_parking_data['8:00 AM'] == 'Occupied'].groupby(['Route Number', 'Side of Street']).count()[['8:00 AM']])
display(all_parking_data.loc[all_parking_data['8:00 AM'] == 'Unoccupied'].groupby(['Route Number', 'Side of Street']).count()[['8:00 AM']])
display(all_parking_data.loc[all_parking_data['8:00 AM'] == 'Occupied'].groupby(['Street Name']).count()[['8:00 AM']])

# Combine the occupied and unoccupied counts for 8:00 AM into a single frame
df2 = all_parking_data.loc[all_parking_data['8:00 AM'] == 'Occupied'].groupby(['Route Number', 'Side of Street']).count()[['8:00 AM']]
df3 = all_parking_data.loc[all_parking_data['8:00 AM'] == 'Unoccupied'].groupby(['Route Number', 'Side of Street']).count()[['8:00 AM']]
df2 = df2.rename(columns={'8:00 AM': '8:00 AM Occupied'})
df2['8:00 AM Unoccupied'] = df3['8:00 AM']
df2

def Occupancy(time):
    """Occupied, unoccupied and total spot counts at `time`, grouped by route, side of street and street name."""
    df2 = all_parking_data.loc[all_parking_data[time] == 'Occupied'].groupby(['Route Number', 'Side of Street', 'Street Name']).count()[[time]]
    df3 = all_parking_data.loc[all_parking_data[time] == 'Unoccupied'].groupby(['Route Number', 'Side of Street', 'Street Name']).count()[[time]]
    df2 = df2.rename(columns={time: time + ' Occupied'})
    df2[time + ' Unoccupied'] = df3[time]
    df2['Total Spots'] = df2[time + ' Unoccupied'] + df2[time + ' Occupied']
    return df2.sort_values('Route Number')

def Occupancy2(time):
    """Occupied, unoccupied and total spot counts at `time`, grouped by street name only."""
    df2 = all_parking_data.loc[all_parking_data[time] == 'Occupied'].groupby(['Street Name']).count()[[time]]
    df3 = all_parking_data.loc[all_parking_data[time] == 'Unoccupied'].groupby(['Street Name']).count()[[time]]
    df2 = df2.rename(columns={time: time + ' Occupied'})
    df2[time + ' Unoccupied'] = df3[time]
    df2['Total Spots'] = df2[time + ' Unoccupied'] + df2[time + ' Occupied']
    return df2.sort_values('Street Name')

Occupancy('4:00 PM')
Occupancy2('4:00 PM')

# Build one aggregated table of occupied counts per street for every observed hour
all_times = ['6:00 AM', '7:00 AM', '8:00 AM', '9:00 AM', '10:00 AM', '11:00 AM',
             '12:00 PM', '1:00 PM', '2:00 PM', '3:00 PM', '4:00 PM', '5:00 PM',
             '6:00 PM', '7:00 PM', '8:00 PM']
all_times_occ = [t + ' Occupied' for t in all_times]
all_cols = all_times_occ
new_df = pd.DataFrame(columns=all_cols)

add_once = True
for time in all_times:
    temp_df = Occupancy2(time)
    if add_once:
        # Copy the per-street total only once (it should not change across hours)
        add_once = False
        new_df['Total Spots'] = temp_df['Total Spots']
    new_df[time + ' Occupied'] = temp_df[time + ' Occupied']

# Streets with no occupied (or no unoccupied) spots at a given hour produce NaN counts
new_df = new_df.fillna(0)

# Move 'Total Spots' to the front
total = new_df['Total Spots']
new_df.drop(labels=['Total Spots'], axis=1, inplace=True)
new_df.insert(0, 'Total Spots', total)
new_df

new_df.to_csv('Aggregated_Bar_Chart.csv')
```
# Working with text corpora
Your text data usually comes in the form of (long) plain text strings that are stored in one or several files on disk. The [Corpus](api.rst#tmtoolkit-corpus) class is for loading and managing *plain text* corpora, i.e. a set of documents with a label and their content as text strings. It resembles a [Python dictionary](https://docs.python.org/3/tutorial/datastructures.html#dictionaries) with additional functionality.
Let's import the `Corpus` class first:
```
from tmtoolkit.corpus import Corpus
```
## Loading text data
Several methods are implemented to load text data from different sources:
- load built-in datasets
- load plain text files (".txt files")
- load folder(s) with plain text files
- load a tabular (i.e. CSV or Excel) file containing document IDs and texts
- load a ZIP file containing plain text or tabular files
We can create a `Corpus` object directly by immediately loading a dataset using one of the `Corpus.from_...` methods. This is what we've done when we used `corpus = Corpus.from_builtin_corpus('en-NewsArticles')` in the [previous chapter](getting_started.ipynb). Let's load a folder with example documents. Make sure that the path is relative to the current working directory. The data for these examples can be downloaded from [GitHub](https://github.com/WZBSocialScienceCenter/tmtoolkit/tree/master/doc/source/data).
<div class="alert alert-info">
Note
If you want to work with "rich text documents", i.e. formatted, non-plain text sources such as PDFs, Word documents, HTML files, etc. you must convert them to one of the supported formats first. For example you can use the [pdftotext](https://www.mankier.com/1/pdftotext) command from the Linux package `poppler-utils` to convert from PDF to plain text files or [pandoc](https://pandoc.org/) to convert from Word or HTML to plain text.
</div>
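For instance, here is a minimal sketch of such a conversion step. It assumes `pdftotext` is installed and uses a hypothetical file name, writing the result into the example corpus folder:
```
import subprocess

# Convert a (hypothetical) PDF into a plain text file that Corpus.from_folder() can pick up.
subprocess.run(["pdftotext", "report.pdf", "data/corpus_example/report.txt"], check=True)
```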
```
corpus = Corpus.from_folder('data/corpus_example')
corpus
```
Again, we can have a look which document labels were created and print one sample document:
```
corpus.doc_labels
corpus['sample1']
```
Now let's look at *all* documents' text. Since we have a very small corpus, printing all text out shouldn't be a problem. We can iterate through all documents by using the `items()` method because a `Corpus` object behaves like a `dict`. We will write a small function for this because we'll reuse this later and one of the most important principles when writing code is [DRY – don't repeat yourself](https://en.wikipedia.org/wiki/Don't_repeat_yourself).
```
def print_corpus(c):
"""Print all documents and their text in corpus `c`"""
for doc_label, doc_text in c.items():
print(doc_label, ':')
print(doc_text)
print('---\n')
print_corpus(corpus)
```
Another option is to create a `Corpus` object by passing a dictionary of already obtained data and optionally add further documents using the `Corpus.add_...` methods. We can also create an empty `Corpus` and then add documents:
```
corpus = Corpus()
corpus
corpus.add_files('data/corpus_example/sample1.txt')
corpus.doc_labels
```
See how we created an empty corpus first and then added a single document. Also note that this time the document label is different: it's prefixed by a normalized version of the path to the document. We can alter the `doc_label_fmt` argument of [Corpus.add_files()](api.rst#tmtoolkit.corpus.Corpus.add_files) to control how document labels are generated. But first, let's remove the previously loaded document from the corpus. Since a `Corpus` instance behaves like a Python `dict`, we can use `del`:
```
del corpus['data_corpus_example-sample1']
corpus
```
Now we use a modified `doc_label_fmt` parameter value to generate document labels only from the file name and not from the full path to the document. This time we load three files:
```
corpus.add_files(['data/corpus_example/sample1.txt',
'data/corpus_example/sample2.txt',
'data/corpus_example/sample3.txt'],
doc_label_fmt='{basename}')
corpus.doc_labels
```
As noted in the beginning, there are more `add_...` and `from_...` methods to load text data from different sources. See the [Corpus API](api.rst#tmtoolkit-corpus) for details.
<div class="alert alert-info">
Note
Please be aware of the difference between the `add_...` and `from_...` methods: the former *modifies* a given Corpus instance, whereas the latter *creates* a new Corpus instance.
</div>
## Corpus properties and methods
A `Corpus` object provides several helpful properties that summarize the plain text data and several methods to manage the documents.
### Number of documents and characters
Let's start with the number of documents in the corpus. There are two ways to obtain this value:
```
len(corpus)
corpus.n_docs
```
Another important property is the number of characters per document:
```
corpus.doc_lengths
```
### Characters used in the corpus
The `unique_characters` property returns the set of characters that occur at least once in the corpus' documents.
```
corpus.unique_characters
```
This is helpful if you want to check if there are strange characters in your documents that you may want to replace or remove. For example, I included a Unicode smiley ☺ in the first document (which may not be rendered correctly in your browser) that we can remove using [Corpus.remove_characters()](api.rst#tmtoolkit.corpus.Corpus.remove_characters).
```
corpus['sample1']
corpus.remove_characters('☺')
corpus['sample1']
```
[Corpus.filter_characters()](api.rst#tmtoolkit.corpus.Corpus.filter_characters) behaves similarly to the method used above, but by default removes *all* characters that are not in a whitelist of allowed characters.
[Corpus.replace_characters()](api.rst#tmtoolkit.corpus.Corpus.replace_characters) also allows you to replace certain characters with others. With [Corpus.apply()](api.rst#tmtoolkit.corpus.Corpus.apply) you can perform any custom text transformation on each document.
There are more filtering methods: [Corpus.filter_by_min_length()](api.rst#tmtoolkit.corpus.Corpus.filter_by_min_length) / [Corpus.filter_by_max_length()](api.rst#tmtoolkit.corpus.Corpus.filter_by_max_length) allow you to remove documents that are too short or too long.
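As an illustration, here is a hedged sketch of how these methods might be combined. The exact argument formats below are assumptions and should be checked against the linked API documentation:
```
# Hedged sketch: the argument formats are assumptions; see the API documentation linked above.
corpus.replace_characters({'“': '"', '”': '"'})   # e.g. replace typographic quotes with plain ones
corpus.apply(lambda text: text.strip())           # apply an arbitrary text transformation to each document
corpus.filter_by_min_length(50)                   # keep only documents with at least 50 characters
```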
<div class="alert alert-info">
Note
These methods already go in the direction of "text preprocessing", which is the topic of the next chapter and is implemented in the [tmtoolkit.preprocess](api.rst#tmtoolkit-preprocess) module. However, the methods in `Corpus` differ substantially from the `preprocess` module, as the `Corpus` methods work on untokenized plain text strings whereas the `preprocess` functions and methods work on document *tokens* (e.g. individual words) and therefore provide a much richer set of tools. However, sometimes it is necessary to do things like removing certain characters *before* tokenization, e.g. when such characters confuse the tokenizer.
</div>
### Splitting by paragraphs
Another helpful method is [Corpus.split_by_paragraphs()](api.rst#tmtoolkit.corpus.Corpus.split_by_paragraphs). This allows splitting each document of the corpus by paragraph.
Again, let's have a look at our current corpus' documents:
```
print_corpus(corpus)
```
As we can see, `sample1` contains one paragraph, `sample2` two and `sample3` three paragraphs. Now we can split those and get the expected number of documents (each paragraph is then an individual document):
```
corpus.split_by_paragraphs()
corpus
```
Our newly created six documents:
```
print_corpus(corpus)
```
You can further customize the splitting process by tweaking the parameters, e.g. the minimum number of line breaks used to detect paragraphs (default is two line breaks).
### Sampling a corpus
Finally you can sample the documents in a corpus using [Corpus.sample()](api.rst#tmtoolkit.corpus.Corpus.sample). To get a random sample of three documents from our corpus:
```
corpus.sample(3)
corpus.doc_labels
```
Note that this returns a new `Corpus` instance by default. You can pass `as_corpus=False` if you only need a Python dict.
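For example, a quick sketch of drawing the sample as a plain dictionary instead of a new `Corpus`:
```
# With as_corpus=False, sample() gives back a plain dict of document label -> document text.
sampled_docs = corpus.sample(3, as_corpus=False)
type(sampled_docs)
```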
The [next chapter](preprocessing.ipynb) will show how to apply several text preprocessing functions to a corpus.
```
import random
import os
import shutil
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.optim as optim
import torchvision.transforms as transforms
import torch.nn.functional as F
from torch.autograd import Variable
import numpy as np
import matplotlib.pyplot as plt
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
import torchvision.datasets as dsets
import torchvision
random.seed(42)
class resBlock(nn.Module):
def __init__(self, in_channels=64, out_channels=64, k=3, s=1, p=1):
super(resBlock, self).__init__()
self.conv1 = nn.Conv2d(in_channels, out_channels, k, stride=s, padding=p)
self.bn1 = nn.BatchNorm2d(out_channels)
self.conv2 = nn.Conv2d(out_channels, out_channels, k, stride=s, padding=p)
self.bn2 = nn.BatchNorm2d(out_channels)
def forward(self, x):
y = F.relu(self.bn1(self.conv1(x)))
return self.bn2(self.conv2(y)) + x
class resTransposeBlock(nn.Module):
def __init__(self, in_channels=64, out_channels=64, k=3, s=1, p=1):
super(resTransposeBlock, self).__init__()
self.conv1 = nn.ConvTranspose2d(in_channels, out_channels, k, stride=s, padding=p)
self.bn1 = nn.BatchNorm2d(out_channels)
self.conv2 = nn.ConvTranspose2d(out_channels, out_channels, k, stride=s, padding=p)
self.bn2 = nn.BatchNorm2d(out_channels)
def forward(self, x):
y = F.relu(self.bn1(self.conv1(x)))
return self.bn2(self.conv2(y)) + x
class VGG19_extractor(nn.Module):
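    # Wraps a pretrained VGG19 and exposes three increasingly deep feature maps, used later as perceptual-loss targets.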
def __init__(self, cnn):
super(VGG19_extractor, self).__init__()
self.features1 = nn.Sequential(*list(cnn.features.children())[:3])
self.features2 = nn.Sequential(*list(cnn.features.children())[:5])
self.features3 = nn.Sequential(*list(cnn.features.children())[:12])
def forward(self, x):
return self.features1(x), self.features2(x), self.features3(x)
vgg19_exc = VGG19_extractor(torchvision.models.vgg19(pretrained=True))
vgg19_exc = vgg19_exc.cuda()
```
### Designing Encoder (E)
```
class Encoder(nn.Module):
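    # Encoder: two stride-2 convolutions (reducing spatial size by roughly 4x) interleaved with residual blocks, ending in a single-channel feature map.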
def __init__(self, n_res_blocks=5):
super(Encoder, self).__init__()
self.n_res_blocks = n_res_blocks
self.conv1 = nn.Conv2d(3, 64, 3, stride=2, padding=1)
for i in range(n_res_blocks):
self.add_module('residual_block_1' + str(i+1), resBlock(in_channels=64, out_channels=64, k=3, s=1, p=1))
self.conv2 = nn.Conv2d(64, 32, 3, stride=2, padding=1)
for i in range(n_res_blocks):
self.add_module('residual_block_2' + str(i+1), resBlock(in_channels=32, out_channels=32, k=3, s=1, p=1))
self.conv3 = nn.Conv2d(32, 8, 3, stride=1, padding=1)
for i in range(n_res_blocks):
self.add_module('residual_block_3' + str(i+1), resBlock(in_channels=8, out_channels=8, k=3, s=1, p=1))
self.conv4 = nn.Conv2d(8, 1, 3, stride=1, padding=1)
def forward(self, x):
y = F.relu(self.conv1(x))
for i in range(self.n_res_blocks):
y = F.relu(self.__getattr__('residual_block_1'+str(i+1))(y))
y = F.relu(self.conv2(y))
for i in range(self.n_res_blocks):
y = F.relu(self.__getattr__('residual_block_2'+str(i+1))(y))
y = F.relu(self.conv3(y))
for i in range(self.n_res_blocks):
y = F.relu(self.__getattr__('residual_block_3'+str(i+1))(y))
y = self.conv4(y)
return y
E1 = Encoder(n_res_blocks=10)
```
### Designing Decoder (D)
```
class Decoder(nn.Module):
def __init__(self, n_res_blocks=5):
super(Decoder, self).__init__()
self.n_res_blocks = n_res_blocks
self.conv1 = nn.ConvTranspose2d(1, 8, 3, stride=1, padding=1)
for i in range(n_res_blocks):
self.add_module('residual_block_1' + str(i+1), resTransposeBlock(in_channels=8, out_channels=8, k=3, s=1, p=1))
self.conv2 = nn.ConvTranspose2d(8, 32, 3, stride=1, padding=1)
for i in range(n_res_blocks):
self.add_module('residual_block_2' + str(i+1), resTransposeBlock(in_channels=32, out_channels=32, k=3, s=1, p=1))
self.conv3 = nn.ConvTranspose2d(32, 64, 3, stride=2, padding=1)
for i in range(n_res_blocks):
self.add_module('residual_block_3' + str(i+1), resTransposeBlock(in_channels=64, out_channels=64, k=3, s=1, p=1))
self.conv4 = nn.ConvTranspose2d(64, 3, 3, stride=2, padding=1)
def forward(self, x):
y = F.relu(self.conv1(x))
for i in range(self.n_res_blocks):
y = F.relu(self.__getattr__('residual_block_1'+str(i+1))(y))
y = F.relu(self.conv2(y))
for i in range(self.n_res_blocks):
y = F.relu(self.__getattr__('residual_block_2'+str(i+1))(y))
y = F.relu(self.conv3(y))
for i in range(self.n_res_blocks):
y = F.relu(self.__getattr__('residual_block_3'+str(i+1))(y))
y = self.conv4(y)
return y
D1 = Decoder(n_res_blocks=10)
```
### Putting it in a box: the VAE
```
class VAE(nn.Module):
def __init__(self, encoder, decoder):
super(VAE, self).__init__()
self.E = encoder
self.D = decoder
self._enc_mu = nn.Linear(11*11, 128)
self._enc_log_sigma = nn.Linear(11*11, 128)
self._din_layer = nn.Linear(128, 11*11)
def _sample_latent(self, h_enc):
'''
Return the latent normal sample z ~ N(mu, sigma^2)
'''
mu = self._enc_mu(h_enc)
# print('mu size : ', mu.size())
log_sigma = self._enc_log_sigma(h_enc)
# print('log_sigma size : ', log_sigma.size())
sigma = torch.exp(log_sigma)
# print('sigma size : ', sigma.size())
std_z = torch.from_numpy(np.random.normal(0, 1, size=sigma.size())).float()
self.z_mean = mu
self.z_sigma = sigma
return mu + sigma * Variable(std_z, requires_grad=False).cuda() # Reparameterization trick
def forward(self, x):
        n_batch = x.size()[0]
        h_enc = self.E(x)
        indim1_D, indim2_D = h_enc.size()[2], h_enc.size()[3]
        # print('h_enc size : ', h_enc.size())
        h_enc = h_enc.view(n_batch, 1, -1)
        # print('h_enc size : ', h_enc.size())
        z = self._sample_latent(h_enc)
        # print('z_size : ', z.size())
        z = self._din_layer(z)
        # print('z_size : ', z.size())
        z = z.view(n_batch, 1, indim1_D, indim2_D)
# print('z_size : ', z.size())
# print('Dz_size : ', self.D(z).size())
return self.D(z)
V = VAE(E1, D1)
V = V.cuda()
class AE(nn.Module):
def __init__(self, encoder, decoder):
super(AE, self).__init__()
self.E = encoder
self.D = decoder
def forward(self, x):
h_enc = self.E(x)
return self.D(h_enc)
A = AE(E1, D1)
```
#### loading the saved model
```
A.load_state_dict(torch.load('../saved_models/newTest3.pth'))
A = A.cuda()
```
### Dataloading and stuff
```
mytransform1 = transforms.Compose(
[transforms.RandomCrop((41,41)),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
def mynorm2(x):
m1 = torch.min(x)
m2 = torch.max(x)
if m2-m1 < 1e-6:
return x
else:
return (x-m1)/(m2-m1)
mytransform2 = transforms.Compose(
[transforms.RandomCrop((121,121)),
# transforms.Lambda( lambda x : Image.fromarray(gaussian_filter(x, sigma=(10,10,0)) )),
# transforms.Resize((41,41)),
transforms.ToTensor(),
transforms.Lambda( lambda x : mynorm2(x) )])
trainset = dsets.ImageFolder(root='../sample_dataset/train/',transform=mytransform2)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=8, shuffle=True, num_workers=2)
testset = dsets.ImageFolder(root='../sample_dataset/test/',transform=mytransform2)
testloader = torch.utils.data.DataLoader(testset, batch_size=8, shuffle=True, num_workers=2)
# functions to show an image
def imshow(img):
#img = img / 2 + 0.5
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
def imshow2(img):
m1 = torch.min(img)
m2 = torch.max(img)
# img = img/m2
if m2-m1 < 1e-6:
img = img/m2
else:
img = (img-m1)/(m2-m1)
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter) #all the images under the same 'unlabeled' folder
# print(labels)
# show images
imshow(torchvision.utils.make_grid(images))
```
### training thingy
```
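# latent_loss is the KL divergence KL(N(mu, sigma^2) || N(0, 1)) used to regularize a VAE's latent code; it is defined here but left unused in the AE training runs below.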
def latent_loss(z_mean, z_stddev):
mean_sq = z_mean * z_mean
stddev_sq = z_stddev * z_stddev
return 0.5 * torch.mean(mean_sq + stddev_sq - torch.log(stddev_sq) - 1)
def save_model(model, model_name):
try:
os.makedirs('../saved_models')
except OSError:
pass
torch.save(model.state_dict(), '../saved_models/'+model_name)
print('model saved at '+'../saved_models/'+model_name)
# dataloader = iter(trainloader)
testiter = iter(trainloader)
testX, _ = next(testiter)
def eval_model(model):
X = testX
print('input looks like ...')
plt.figure()
imshow(torchvision.utils.make_grid(X))
X = Variable(X).cuda()
Y = model(X)
print('output looks like ...')
plt.figure()
imshow2(torchvision.utils.make_grid(Y.data.cpu()))
def train(model, rec_interval=2, disp_interval=20, eval_interval=1):
nepoch = 500
Criterion1 = nn.MSELoss()
Criterion2 = nn.L1Loss()
optimizer = optim.Adam(model.parameters(), lr=1e-5)
loss_track = []
for eph in range(nepoch):
dataloader = iter(trainloader)
print('starting epoch {} ...'.format(eph))
for i, (X, _) in enumerate(dataloader):
X = Variable(X).cuda()
optimizer.zero_grad()
reconX = model(X)
# KLTerm = latent_loss(model.z_mean, model.z_sigma)
reconTerm = Criterion1(reconX, X) + Criterion2(reconX, X)
# loss = reconTerm + 100*KLTerm
loss =reconTerm
loss.backward()
optimizer.step()
if i%rec_interval == 0:
                loss_track.append(loss.item())
if i%disp_interval == 0:
# print('epoch : {}, iter : {}, KLterm : {}, reconTerm : {}, totalLoss : {}'.format(eph, i, KLTerm.data[0], reconTerm.data[0], loss.data[0]))
                print('epoch : {}, iter : {}, reconTerm : {}, totalLoss : {}'.format(eph, i, reconTerm.item(), loss.item()))
# if eph%eval_interval == 0:
# print('after epoch {} ...'.format(eph))
# eval_model(model)
return loss_track
def train_ae(model, rec_interval=2, disp_interval=20, eval_interval=1):
nepoch = 1000
Criterion2 = nn.MSELoss()
Criterion1 = nn.L1Loss()
optimizer = optim.Adam(model.parameters(), lr=1e-5)
loss_track = []
for eph in range(nepoch):
dataloader = iter(trainloader)
print('starting epoch {} ...'.format(eph))
for i, (X, _) in enumerate(dataloader):
X = Variable(X).cuda()
optimizer.zero_grad()
reconX = model(X)
l2 = Criterion2(reconX, X)
# l1 = Criterion1(reconX, X)
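            # Perceptual loss: compare the deepest extracted VGG19 feature maps of the input and its reconstruction.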
t1, t2, t3 = vgg19_exc(X)
rt1, rt2, rt3 = vgg19_exc(reconX)
# t1 = Variable(t1.data)
# rt1 = Variable(rt1.data)
# t2 = Variable(t2.data)
# rt2 = Variable(rt2.data)
t3 = Variable(t3.data)
rt3 = Variable(rt3.data)
# vl1 = Criterion2(rt1, t1)
# vl2 = Criterion2(rt2, t2)
vl3 = Criterion2(rt3, t3)
reconTerm = 10*l2 + vl3
loss = reconTerm
loss.backward()
optimizer.step()
if i%rec_interval == 0:
                loss_track.append(loss.item())
if i%disp_interval == 0:
                print('epoch: {}, iter: {}, L2term: {}, vl3: {}, totalLoss: {}'.format(
                    eph, i, l2.item(), vl3.item(), loss.item()))
return loss_track
```
#### Notes on training
It seems that the combination of L1 and L2 losses is not helping, and that features from the deeper layers of VGG19 are more effective than features from the shallow levels.
```
loss_track = train_ae(A, disp_interval=100)
plt.plot(loss_track)
eval_model(A)
save_model(A, 'AE_VGGFeatX1.pth')
```
#### experiments with the sigmoid and BCE
```
class AE1(nn.Module):
def __init__(self, encoder, decoder):
super(AE1, self).__init__()
self.E = encoder
self.D = decoder
def forward(self, x):
h_enc = self.E(x)
return F.relu(self.D(h_enc))
A1 = AE1(E1, D1)
A1 = A1.cuda()
def train_ae_logsmax(model, rec_interval=2, disp_interval=20, eval_interval=1):
nepoch = 100
Criterion2 = nn.MSELoss()
Criterion1 = nn.L1Loss()
optimizer = optim.Adam(model.parameters(), lr=1e-5)
loss_track = []
for eph in range(nepoch):
dataloader = iter(trainloader)
print('starting epoch {} ...'.format(eph))
for i, (X, _) in enumerate(dataloader):
X = Variable(X).cuda()
optimizer.zero_grad()
reconX = model(X)
l2 = Criterion2(reconX, X)
# l1 = Criterion1(reconX, X)
t1, t2, t3 = vgg19_exc(X)
rt1, rt2, rt3 = vgg19_exc(reconX)
# t1 = Variable(t1.data)
# rt1 = Variable(rt1.data)
# t2 = Variable(t2.data)
# rt2 = Variable(rt2.data)
t3 = Variable(t3.data)
rt3 = Variable(rt3.data)
# vl1 = Criterion2(rt1, t1)
# vl2 = Criterion2(rt2, t2)
vl3 = Criterion2(rt3, t3)
reconTerm = 10*l2 + vl3
loss = reconTerm
loss.backward()
optimizer.step()
if i%rec_interval == 0:
                loss_track.append(loss.item())
if i%disp_interval == 0:
                print('epoch: {}, iter: {}, L2term: {}, vl3: {}, totalLoss: {}'.format(
                    eph, i, l2.item(), vl3.item(), loss.item()))
return loss_track
loss_track1 = train_ae_logsmax(A1, disp_interval=100)
```
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import pprint
%matplotlib inline
```
## Dataset Source
### The dataset used for this analysis, along with its license information, can be found via the link below
- DATASETLINK = [dataset_link](https://www.kaggle.com/libinmathew264/youtube-top-4000-channels-based-on-subscribers)
```
df = pd.read_csv('YouTube.csv', encoding='ISO-8859-1')
df.head()
df.columns
len(df.columns.values)
```
#### missing data across the whole dataset
```
df.isnull().sum().plot(kind = 'bar')
```
### Part I: The most sought-after content by the YouTube audience
- Looking at the variety of content available on YouTube, one might think that anything can work here. But the numbers tell a different story: a few specific content segments account for most of what the YouTube audience across the world consumes.
- For example, in terms of the number of channels per segment, the “Entertainment” section is at the top, followed by music and games, which indicates how dominant these segments are on this platform.
#### which type of channels mostly exist on youtube
```
# which type of channels mostly exist on youtube
popular_channels = df.channeltype.value_counts().to_frame('count')
display(popular_channels)
```
- These numbers clearly show that entertainment, music, and games are the most consumed content on the YouTube platform, and that any channel producing this kind of content will quickly be accepted by the audience.
```
popular_channels.plot(kind = 'bar')
```
### Part II: Curating content for a segment-specific audience
- But what if we do not have a choice and must stick to one particular segment? What if we specialize in a certain kind of content and do not have the freedom to shift to the top three consumer segments? How, then, can we strategize and curate our content for maximum outreach to the audience?
The first step is to identify where your audience is located and what kind of content you need to create within that particular segment.
Take the case of the education industry.
#### which type of channels has most subscribers
```
# which type of channels has most subscribers
most_subscribed_channels = df.groupby('channeltype').sum()['subscribers'].sort_values(ascending = False).to_frame('subscribers_count')
#most_subscribed_channels.reset_index(level=0, inplace=True)
most_subscribed_channels
```
- The countries with the highest number of subscribers to educational content on YouTube are the United States of America, India, and Mexico.
- Among the reasons to consider: the US has the fastest internet connectivity and a strong focus on online educational mediums.
- Mexico shows recent trends of transforming its education system, with budget allocated to both offline and online mediums of education.
- Based on these trends, you can easily find foreign multinational YouTubers catering to the audiences of India and China, simply because these audiences are so large in number, widely spread across the globe, and generally accepting of a broad range of content.
```
#most_subscribed_channels.plot(kind = 'bar')
most_subscribed_channels.plot(kind = 'bar')
```
#### which country has most subscribers for education channel type
```
# which country has most subscribers for education channel type
education_subscriber = df[df.channeltype == 'Education'].groupby('country').sum()['subscribers'].sort_values(ascending = False).to_frame('Educational_Subscriber')
education_subscriber
# NOTE INDIA is the second largest educational subscriber
education_subscriber.plot(kind = 'bar', figsize=(10, 5), title="Top Educational Subscribers")
```
#### which country has most subscribers for Music channel type
```
# which country has most subscribers for Music channel type
music_subscriber = df[df.channeltype == 'Music'].groupby('country').sum()['subscribers'].sort_values(ascending = False).to_frame('Music_Subscriber')
music_subscriber
```
#### Top 10 music subscriber based countries
```
# Top 10 music subscriber based countries
music_subscriber[:10].plot(kind = 'bar', rot=0, figsize=(15, 4), title="Music Subscribers")
```
#### which country has most subscribers for Sports channel type
```
# which country has most subscribers for Sports channel type
sports_subscriber = df[df.channeltype == 'Sports'].groupby('country').sum()['subscribers'].sort_values(ascending = False).to_frame('Sports_Subscriber')
sports_subscriber
sports_subscriber.plot(kind = 'bar', figsize=(15, 4), title="Sports Subscriber")
```
#### Visualizing the correlation between subscriber segments, grouped by country
- For this, we first merge the per-segment subscriber counts for each country into a single dataframe.
Some segments have no subscribers at all in a given country, so the merge produces missing (null) values in the dataframe.
We fill those with 0 using `fillna`, since the values are subscriber counts.
```
# merging all segments into a dataframe
channel_subscribers = None
for chl_type in df.channeltype.dropna().unique():
print(chl_type)
df_ = df[df.channeltype == chl_type].groupby('country').sum()['subscribers'].to_frame(chl_type)
print(df_.shape)
if channel_subscribers is None:
channel_subscribers = df_
else:
channel_subscribers = channel_subscribers.join(df_)
df_channel_subscribers = channel_subscribers.fillna(0)
df_channel_subscribers
df_channel_subscribers.shape
# Correlation of channeltype subscribers count
sns.heatmap(df_channel_subscribers.corr())
```
### Part III: Monetising your content
The final destination for any content medium is to monetize at some level.
In terms of revenue, some of the highest-earning channels on the YouTube platform are T-Series, Cocomelon — Nursery Rhymes, SET India, Vlad and Nikita, and Zee TV, which again points to the repeated consumption of entertainment and music channels.
We can safely infer that most of the content consumed on YouTube, both repeatedly and uniquely, belongs to the entertainment and music categories.
```
highest_earning = df.sort_values('MonthlyEarningsMin', ascending=False)
highest_earning = highest_earning[['name', 'MonthlyEarningsMin']]
highest_earning = highest_earning.set_index('name')
highest_earning.head(15).plot(kind = 'bar', figsize=(10, 5), title="Highest Earning")
```
#### Top 25 highest-earning channels
```
highest_earning.head(25)
```
# Alphalens Tear Sheets for the Factor "Historical Returns" (Period: 1/1/2018 to 4/30/2020, 28 Months)
```
# Import all required libraries
# Quantopian Libraries
from quantopian.pipeline import Pipeline
from quantopian.research import run_pipeline
from quantopian.pipeline.filters import QTradableStocksUS
from quantopian.pipeline.data import factset, USEquityPricing
from quantopian.pipeline.classifiers.fundamentals import Sector
from quantopian.pipeline.factors import Returns, SimpleMovingAverage, CustomFactor, RSI
# Alpha Lens libraries
from alphalens.performance import mean_information_coefficient
from alphalens.utils import get_clean_factor_and_forward_returns
from alphalens.tears import create_information_tear_sheet, create_returns_tear_sheet
```
## Define Your Alpha Factor Here
Spend your time in this cell creating good factors; then simply run the rest of the notebook to analyze them.
```
# Define the factor (Historic Returns) for the Pipeline function
# The Date Range used is Jan-1,2018 to Apr-30,2020 (ie 28 months)
def make_pipeline():
return Pipeline(
columns={
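            # Returns(window_length=2) computes each asset's most recent 1-day (close-to-close) return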
'hist_returns' : Returns(window_length=2) ,
},
screen=QTradableStocksUS()
)
# Set the Start and End Dates for Factor Data
start_date='2018-01-01'
end_date='2020-04-30'
# Now run the Pipe
factor_data = run_pipeline(make_pipeline(), start_date, end_date)
# Display factor data and notice the columns: Date, Equity and Historical Returns
print(type(factor_data))
print(len(factor_data))
factor_data.head()
# Set the Start and End Dates and Get Pricing Data for the equities in the Factor Data
start_date='2018-01-01'
end_date='2020-05-31' # End Date for Pricing Data should be > End Date for Factor Data for Forward Returns
pricing_data = get_pricing(factor_data.index.levels[1], start_date, end_date, fields='open_price')
# Display Pricing data and notice the columns: Date, Equity and Price
print(type(pricing_data))
print(len(pricing_data))
pricing_data.head()
# Merge the Factor Data and Pricing Data using the Function from Alphalens
merged_data = get_clean_factor_and_forward_returns(
factor = factor_data['hist_returns'],
prices = pricing_data,
quantiles=5,
periods=(1,),
max_loss=100,
)
# Display Merged data and notice the columns
print(type(merged_data))
print(len(merged_data))
merged_data.head()
# Create Information Tear Sheets using Alphalens
create_information_tear_sheet(merged_data)
# Create Returns Tear Sheets using Alphalens
create_returns_tear_sheet(merged_data)
```
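The same workflow can be reused for other factors by editing only `make_pipeline()`. As a hedged sketch (assuming the same Quantopian research environment and the imports above; the `make_rsi_pipeline` name is purely illustrative), one could swap in the built-in `RSI` factor and rerun the rest of the notebook unchanged:
```
# Illustrative sketch only: define an alternative factor pipeline using RSI
def make_rsi_pipeline():
    return Pipeline(
        columns={'rsi': RSI(window_length=14)},
        screen=QTradableStocksUS()
    )

# It would then be run exactly like the original pipeline:
# factor_data = run_pipeline(make_rsi_pipeline(), start_date, end_date)
```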
# Automatic Speech Recognition combined with Speaker Diarization
```
"""
You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
# If you're using Google Colab and not running locally, run this cell.
## Install dependencies
!pip install wget
!apt-get install sox libsndfile1 ffmpeg
!pip install unidecode
# ## Install NeMo
BRANCH = 'r1.0.0rc1'
!python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[asr]
## Install TorchAudio
!pip install torchaudio -f https://download.pytorch.org/whl/torch_stable.html
```
# Introduction
In the early years, speaker diarization algorithms were developed for speech recognition on multispeaker audio recordings to enable speaker-adaptive processing, but over time they also gained value as a stand-alone application that provides speaker-specific meta information for downstream tasks such as audio retrieval.
Automatic Speech Recognition output, when combined with speaker labels, has proven immensely useful in many tasks, ranging from analyzing telephonic conversations to transcribing meetings.
In this tutorial we demonstrate how to get ASR transcriptions combined with speaker labels, along with voice activity time stamps, using the NeMo ASR collection.
For a detailed understanding of transcribing words with ASR, refer to this [ASR tutorial](https://github.com/NVIDIA/NeMo/blob/main/tutorials/asr/01_ASR_with_NeMo.ipynb); for a detailed understanding of speaker diarization, refer to this [Diarization inference](https://github.com/NVIDIA/NeMo/blob/main/tutorials/speaker_recognition/Speaker_Diarization_Inference.ipynb) tutorial.
Let's first import NeMo ASR and other libraries for visualization purposes.
```
import nemo.collections.asr as nemo_asr
import numpy as np
from IPython.display import Audio, display
import librosa
import os
import wget
import matplotlib.pyplot as plt
```
We demonstrate this tutorial using a merged an4 audio file that has two speakers (male and female) speaking dates in different formats. If the data does not already exist locally, download it and listen to it.
```
ROOT = os.getcwd()
data_dir = os.path.join(ROOT,'data')
os.makedirs(data_dir, exist_ok=True)
an4_audio_url = "https://nemo-public.s3.us-east-2.amazonaws.com/an4_diarize_test.wav"
if not os.path.exists(os.path.join(data_dir,'an4_diarize_test.wav')):
AUDIO_FILENAME = wget.download(an4_audio_url, data_dir)
else:
AUDIO_FILENAME = os.path.join(data_dir,'an4_diarize_test.wav')
signal, sample_rate = librosa.load(AUDIO_FILENAME, sr=None)
display(Audio(signal,rate=sample_rate))
def show_figure(signal,text='Audio',overlay_color=[]):
fig,ax = plt.subplots(1,1)
fig.set_figwidth(20)
fig.set_figheight(2)
plt.scatter(np.arange(len(signal)),signal,s=1,marker='o',c='k')
if len(overlay_color):
plt.scatter(np.arange(len(signal)),signal,s=1,marker='o',c=overlay_color)
fig.suptitle(text, fontsize=16)
plt.xlabel('time (secs)', fontsize=18)
plt.ylabel('signal strength', fontsize=14);
plt.axis([0,len(signal),-0.5,+0.5])
time_axis,_ = plt.xticks();
plt.xticks(time_axis[:-1],time_axis[:-1]/sample_rate);
```
plot the audio
```
show_figure(signal)
```
We start our demonstration by first transcribing the audio using the pretrained model `QuartzNet15x5Base-En`, and we use the CTC output probabilities to get timestamps for the spoken words. We then use these timestamps to get speaker label information from the speaker diarization model.
Download and load the pretrained QuartzNet ASR model
```
#Load model
asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name='QuartzNet15x5Base-En', strict=False)
```
Transcribe the audio
```
files = [AUDIO_FILENAME]
transcript = asr_model.transcribe(paths2audio_files=files)[0]
print(f'Transcript: "{transcript}"')
```
Get CTC log probabilities with output labels
```
# softmax implementation in NumPy
def softmax(logits):
e = np.exp(logits - np.max(logits))
return e / e.sum(axis=-1).reshape([logits.shape[0], 1])
# let's do inference once again but without decoder
logits = asr_model.transcribe(files, logprobs=True)[0]
probs = softmax(logits)
# 20ms is duration of a timestep at output of the model
time_stride = 0.02
# get model's alphabet
labels = list(asr_model.decoder.vocabulary) + ['blank']
labels[0] = 'space'
```
We use the CTC labels for voice activity detection. To detect speech and non-speech segments in the audio, we use the blank and space labels in the CTC outputs: consecutive runs of space or blank labels longer than a threshold are considered non-speech segments.
```
blanks = []
state = ''
idx_state = 0
if np.argmax(probs[0]) == 28:
state = 'blank'
for idx in range(1, probs.shape[0]):
current_char_idx = np.argmax(probs[idx])
if state == 'blank' and current_char_idx != 0 and current_char_idx != 28:
blanks.append([idx_state, idx-1])
state = ''
if state == '':
if current_char_idx == 28:
state = 'blank'
idx_state = idx
if state == 'blank':
blanks.append([idx_state, len(probs)-1])
threshold = 20  # minimum width (in frames) to count as non-speech activity
non_speech = list(filter(lambda x: x[1] - x[0] > threshold, blanks))
# get timestamps for space symbols
spaces = []
state = ''
idx_state = 0
if np.argmax(probs[0]) == 0:
state = 'space'
for idx in range(1, probs.shape[0]):
current_char_idx = np.argmax(probs[idx])
if state == 'space' and current_char_idx != 0 and current_char_idx != 28:
spaces.append([idx_state, idx-1])
state = ''
if state == '':
if current_char_idx == 0:
state = 'space'
idx_state = idx
if state == 'space':
    spaces.append([idx_state, len(probs)-1])
# calibration offset for timestamps: 180 ms
offset = -0.18
# split the transcript into words
words = transcript.split()
```
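As a quick sanity check on the unit conversions used below (a sketch based on the `offset` and `time_stride` values defined above): each CTC frame spans 20 ms, so the -0.18 s calibration offset corresponds to -9 frames, and a frame index converts to seconds by adding that frame offset and multiplying by the stride.
```
# Worked example of the frame-to-seconds conversion (sketch, values from above)
frame_offset_example = offset / time_stride            # -0.18 / 0.02 = -9 frames
print((100 + frame_offset_example) * time_stride)      # frame 100 -> 1.82 s
```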
Frame-level time stamps for non-speech segments
```
print(non_speech)
```
Write the result to an RTTM-type file for later use in extracting speaker labels
```
frame_offset=offset/time_stride
speech_labels=[]
uniq_id = os.path.basename(AUDIO_FILENAME).split('.')[0]
with open(uniq_id+'.rttm','w') as f:
for idx in range(len(non_speech)-1):
start = (non_speech[idx][1]+frame_offset)*time_stride
end = (non_speech[idx+1][0]+frame_offset)*time_stride
f.write("SPEAKER {} 1 {:.3f} {:.3f} <NA> <NA> speech <NA>\n".format(uniq_id,start,end-start))
speech_labels.append("{:.3f} {:.3f} speech".format(start,end))
if non_speech[-1][1] < len(probs):
start = (non_speech[-1][1]+frame_offset)*time_stride
end = (len(probs)+frame_offset)*time_stride
f.write("SPEAKER {} 1 {:.3f} {:.3f} <NA> <NA> speech <NA>\n".format(uniq_id,start,end-start))
speech_labels.append("{:.3f} {:.3f} speech".format(start,end))
```
Time stamps for speech frames
```
print(speech_labels)
COLORS="b g c m y".split()
def get_color(signal,speech_labels,sample_rate=16000):
c=np.array(['k']*len(signal))
for time_stamp in speech_labels:
start,end,label=time_stamp.split()
start,end = int(float(start)*16000),int(float(end)*16000),
if label == "speech":
code = 'red'
else:
code = COLORS[int(label.split('_')[-1])]
c[start:end]=code
return c
```
With voice activity time stamps extracted from the CTC outputs, here we show the detected speech regions in <span style="color:red">**red**</span> and the background (non-speech) signal in **black**
```
color=get_color(signal,speech_labels)
show_figure(signal,'an4 audio signal with vad',color)
```
We use a helper function from the speaker utils to convert the voice activity RTTM file into a manifest, which is then used to diarize with the speaker diarizer clustering inference model
```
from nemo.collections.asr.parts.speaker_utils import write_rttm2manifest
output_dir = os.path.join(ROOT, 'oracle_vad')
os.makedirs(output_dir,exist_ok=True)
oracle_manifest = os.path.join(output_dir,'oracle_manifest.json')
write_rttm2manifest(paths2audio_files=files,
paths2rttm_files=[uniq_id+'.rttm'],
manifest_file=oracle_manifest)
!cat {output_dir}/oracle_manifest.json
```
Set up diarizer model
```
from omegaconf import OmegaConf
MODEL_CONFIG = os.path.join(data_dir,'speaker_diarization.yaml')
if not os.path.exists(MODEL_CONFIG):
config_url = "https://raw.githubusercontent.com/NVIDIA/NeMo/main/examples/speaker_recognition/conf/speaker_diarization.yaml"
MODEL_CONFIG = wget.download(config_url,data_dir)
config = OmegaConf.load(MODEL_CONFIG)
pretrained_speaker_model='speakerdiarization_speakernet'
config.diarizer.paths2audio_files = files
config.diarizer.out_dir = output_dir #Directory to store intermediate files and prediction outputs
config.diarizer.speaker_embeddings.model_path = pretrained_speaker_model
# Ignoring vad we just need to pass the manifest file we created
config.diarizer.speaker_embeddings.oracle_vad_manifest = oracle_manifest
```
Diarize the audio at provided time stamps
```
from nemo.collections.asr.models import ClusteringDiarizer
oracle_model = ClusteringDiarizer(cfg=config);
oracle_model.diarize();
from nemo.collections.asr.parts.speaker_utils import rttm_to_labels
pred_rttm=os.path.join(output_dir,'pred_rttms',uniq_id+'.rttm')
labels=rttm_to_labels(pred_rttm)
print("speaker labels with time stamps\n",labels)
```
Now let us see the audio plot color coded per speaker
```
color=get_color(signal,labels)
show_figure(signal,'audio with speaker labels',color)
display(Audio(signal,rate=16000))
```
Finally, combine the transcript with the time stamps and speaker label information
```
pos_prev = 0
idx=0
start_point,end_point,speaker=labels[idx].split()
print("{} [{:.2f} - {:.2f} sec]".format(speaker,float(start_point),float(end_point)),end=" ")
for j, spot in enumerate(spaces):
pos_end = offset + (spot[0]+spot[1])/2*time_stride
if pos_prev < float(end_point):
print(words[j],end=" ")
else:
print()
idx+=1
start_point,end_point,speaker=labels[idx].split()
print("{} [{:.2f} - {:.2f} sec]".format(speaker,float(start_point),float(end_point)),end=" ")
print(words[j],end=" ")
pos_prev = pos_end
print(words[j+1],end=" ")
```
```
from tueplots import fonts, figsizes
import matplotlib.pyplot as plt
# Increase the resolution of all the plots below
plt.rcParams.update({"figure.dpi": 150})
# "Better" figure size to display the font-changes
plt.rcParams.update(figsizes.icml2022(column="half"))
```
Fonts in `tueplots` follow the same interface as the other settings.
There are some pre-defined font recipes for a few journals, and they return dictionaries that are compatible with `matplotlib.pyplot.rcParams.update()`.
```
fonts.neurips2021()
```
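Because each recipe is a plain dictionary, font and figure-size settings can be combined in a single `rcParams.update()` call. A minimal sketch (assuming both recipes exist in the installed `tueplots` version):
```
# Merge a font recipe and a figure-size recipe into one rcParams update (sketch)
plt.rcParams.update({**fonts.icml2022(), **figsizes.icml2022(column="full")})
```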
Compare the following default font to some of the alternatives that we provide:
```
fig, ax = plt.subplots()
ax.plot([1.0, 2.0], [3.0, 4.0])
ax.set_title("Title")
ax.set_xlabel("xlabel $\int f(x) dx$")
ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$")
plt.show()
plt.rcParams.update(fonts.jmlr2001_tex(family="serif"))
fig, ax = plt.subplots()
ax.plot([1.0, 2.0], [3.0, 4.0])
ax.set_title("Title")
ax.set_xlabel("xlabel $\int_a^b f(x) dx$")
ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$")
plt.show()
plt.rcParams.update(fonts.jmlr2001_tex(family="sans-serif"))
fig, ax = plt.subplots()
ax.plot([1.0, 2.0], [3.0, 4.0])
ax.set_title("Title")
ax.set_xlabel("xlabel $\int_a^b f(x) dx$")
ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$")
plt.show()
plt.rcParams.update(fonts.neurips2021())
fig, ax = plt.subplots()
ax.plot([1.0, 2.0], [3.0, 4.0])
ax.set_title("Title")
ax.set_xlabel("xlabel $\int_a^b f(x) dx$")
ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$")
plt.show()
plt.rcParams.update(fonts.neurips2021(family="sans-serif"))
fig, ax = plt.subplots()
ax.plot([1.0, 2.0], [3.0, 4.0])
ax.set_title("Title")
ax.set_xlabel("xlabel $\int_a^b f(x) dx$")
ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$")
plt.show()
plt.rcParams.update(fonts.neurips2021_tex(family="sans-serif"))
fig, ax = plt.subplots()
ax.plot([1.0, 2.0], [3.0, 4.0])
ax.set_title("Title")
ax.set_xlabel("xlabel $\int_a^b f(x) dx$")
ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$")
plt.show()
plt.rcParams.update(fonts.neurips2021_tex(family="serif"))
fig, ax = plt.subplots()
ax.plot([1.0, 2.0], [3.0, 4.0])
ax.set_title("Title")
ax.set_xlabel("xlabel $\int_a^b f(x) dx$")
ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$")
plt.show()
plt.rcParams.update(fonts.beamer_moml())
fig, ax = plt.subplots()
ax.plot([1.0, 2.0], [3.0, 4.0])
ax.set_title("Title")
ax.set_xlabel("xlabel $\int_a^b f(x) dx$")
ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$")
plt.show()
with plt.rc_context(fonts.icml2022()):
fig, ax = plt.subplots()
ax.plot([1.0, 2.0], [3.0, 4.0])
ax.set_title("Title")
ax.set_xlabel("xlabel $\int_a^b f(x) dx$")
ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$")
plt.show()
with plt.rc_context(fonts.icml2022_tex(family="sans-serif")):
fig, ax = plt.subplots()
ax.plot([1.0, 2.0], [3.0, 4.0])
ax.set_title("Title")
ax.set_xlabel("xlabel $\int_a^b f(x) dx$")
ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$")
plt.show()
with plt.rc_context(fonts.icml2022_tex(family="serif")):
fig, ax = plt.subplots()
ax.plot([1.0, 2.0], [3.0, 4.0])
ax.set_title("Title")
ax.set_xlabel("xlabel $\int_a^b f(x) dx$")
ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$")
plt.show()
plt.rcParams.update(fonts.aistats2022_tex(family="serif"))
fig, ax = plt.subplots()
ax.plot([1.0, 2.0], [3.0, 4.0])
ax.set_title("Title")
ax.set_xlabel("xlabel $\int_a^b f(x) dx$")
ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$")
plt.show()
plt.rcParams.update(fonts.aistats2022_tex(family="sans-serif"))
fig, ax = plt.subplots()
ax.plot([1.0, 2.0], [3.0, 4.0])
ax.set_title("Title")
ax.set_xlabel("xlabel $\int_a^b f(x) dx$")
ax.set_ylabel("ylabel $x \sim \mathcal{N}(x)$")
plt.show()
```
# Using regionprops_3d to analyze properties of each pore
The ``regionprops`` function included in *Scikit-image* is pretty thorough, and recent versions of *Scikit-image* (>0.14) vastly increase support for 3D images. Nonetheless, there are still a handful of features and properties useful for porous media analysis that it does not cover. The ``regionprops_3d`` function in *PoreSpy* aims to address this need, and its use is illustrated here.
```
import numpy as np
import porespy as ps
import scipy.ndimage as spim
import matplotlib.pyplot as plt
ps.visualization.set_mpl_style()
%matplotlib inline
```
## Generating a test image
Start by generating a test image using the ``generators`` module in *PoreSpy*.
```
# NBVAL_IGNORE_OUTPUT
np.random.seed(1)
im = ps.generators.blobs(shape=[200, 200], porosity=0.6, blobiness=1)
plt.subplots(1, 1, figsize=(6, 6))
fig = plt.imshow(im, cmap=plt.cm.inferno)
```
## Segmenting void space into regions for individual analysis
Next, we need to segment the image into discrete pores, which will become the *regions* that we analyze. For this purpose we'll use the SNOW algorithm, which helps to find true local maxima in the distance transform; these are used as markers in the *watershed* segmentation.
```
# NBVAL_IGNORE_OUTPUT
snow = ps.filters.snow_partitioning(im=im, return_all=True)
regions = snow.regions*snow.im
# NBVAL_IGNORE_OUTPUT
plt.subplots(1, 1, figsize=(6, 6))
fig = plt.imshow(regions, cmap=plt.cm.inferno)
```
## Applying regionprops_3d
Now that the void space has been segmented into discrete regions, it's possible to extract information about each region using ``regionprops_3d``.
> **NOTE**: *PoreSpy* calls the *Scikit-image* ``regionprops`` function internally, and uses many of its values in subsequent calculations. The result returned by ``regionprops_3d`` is the same as that of ``regionprops`` in *Scikit-image*, but with additional information added to each region.
```
# NBVAL_IGNORE_OUTPUT
props = ps.metrics.regionprops_3D(regions)
```
> **NOTE:** The ``regionprops_3d`` function in *PoreSpy* is compatible with the ``regionprops`` function in *Scikit-image*, which returns the results in a somewhat confusing format. An object is created for each region, and the properties of that region can be accessed as attributes of the object (e.g. ``obj[10].area``). This makes it somewhat annoying, since all the ``area`` values cannot be accessed as a single array (*PoreSpy* has a function to address this, described below), but there is another larger *gotcha*: the region objects are collected in a list like ``[obj1, obj2, ...]``, **BUT** all regions labelled with 0 are ignored (which is the solid phase in this example), so the object located in position 0 of the list corresponds to region 1. Hence, users must be careful to index into the list correctly.
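A quick way to confirm the indexing described in the note above is to check the ``label`` attribute carried by each region object (a short sketch using the ``props`` list from the previous cell):
```
# props[i] corresponds to region label i + 1, since label 0 (solid) is skipped
print(props[0].label)    # -> 1, the first non-background region
print(props[-1].label)   # -> the number of regions found
```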
## Listing all available properties
Let's look at some of the properties for the regions, starting by printing a list of all available properties for a given region:
```
r = props[0]
attrs = [a for a in r.__dir__() if not a.startswith('_')]
print(attrs)
```
## Analyze properties for a single region
Now let's look at some of the properties for each region:
```
# NBVAL_IGNORE_OUTPUT
# View an image of the region in isolation
plt.subplots(1, 1, figsize=(6, 6))
plt.imshow(r.image)
# NBVAL_IGNORE_OUTPUT
# View an image of the region's border and largest inscribed sphere together
plt.subplots(1, 1, figsize=(6, 6))
plt.imshow(r.border + 0.5*r.inscribed_sphere)
```
One of the most useful properties is the convex image, which is an image of the region with all the depressions in the boundary filled in. This is useful because one can compare it to the actual region and learn about the shape of the region. One such metric is the *solidity* which is defined as the ratio of pixels in the region to pixels of the convex hull image.
```
# NBVAL_IGNORE_OUTPUT
plt.subplots(1, 1, figsize=(6, 6))
fig = plt.imshow(r.image + 1.0*r.convex_image)
print(f"Solidity: {r.solidity:.3f}")
```
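Since *solidity* is the ratio of region pixels to convex-hull pixels, it can be reproduced directly from the two boolean images as a sanity check (a quick sketch):
```
# Recompute solidity by hand: region pixels divided by convex-hull pixels
solidity_check = r.image.sum() / r.convex_image.sum()
print(f"Recomputed solidity: {solidity_check:.3f}")
```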
## Extracting one property for all regions as an array
As mentioned above, the *list* of objects returned by the ``regionprops_3d`` function is a bit inconvenient for accessing one piece of information for all regions at once. *PoreSpy* has a function called ``props_to_DataFrame`` which helps in this regard by generating a Pandas DataFrame object with all of the *key metrics* listed as Numpy arrays in each column. *Key metrics* refers to scalar values like area and solidity.
```
df = ps.metrics.props_to_DataFrame(props)
```
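The cell above only assigns the DataFrame, so nothing is displayed. To actually see which scalar metrics made it into the table, one might add something like the following (a sketch):
```
# Peek at the scalar metrics that were kept as DataFrame columns
print(df.columns.tolist())
df.head()
```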
As can be seen above, there are fewer items in this DataFrame than on the regionprops objects. This is because only scalar values are kept (e.g. images are ignored), and some of the metrics were not valid (e.g. intensity_image).
With this DataFrame in hand, we can now look a histograms of various properties:
```
plt.figure(figsize=[8, 4])
plt.subplot(1, 3, 1)
fig = plt.hist(df['volume'])
plt.subplot(1, 3, 2)
fig = plt.hist(df['solidity'])
plt.subplot(1, 3, 3)
fig = plt.hist(df['sphericity'])
```
Another useful feature of the Pandas DataFrame is the ability to look at all metrics for a given pore at once, which is done by looking at a single row in all columns:
```
df.iloc[0]
```
## Creating a composite image of region images
Another useful function available in *PoreSpy* is ``prop_to_image``, which can create an image from the various subimages available on each region.
```
# NBVAL_IGNORE_OUTPUT
# Create an image of maximally inscribed spheres
sph = ps.metrics.prop_to_image(regionprops=props, shape=im.shape, prop='inscribed_sphere')
plt.subplots(1, 1, figsize=(6, 6))
fig = plt.imshow(sph + 0.5*(~im) , cmap=plt.cm.inferno)
plt.show()
```
## Creating a colorized image based on region properties
The ``prop_to_image`` function can also accept a scalar property which will result in an image of the regions colorized according to the local value of that property.
```
# NBVAL_IGNORE_OUTPUT
# Create an image colorized by solidity
sph = ps.metrics.prop_to_image(regionprops=props, shape=im.shape, prop='solidity')
plt.subplots(1, 1, figsize=(6, 6))
fig = plt.imshow(sph + 0.5*(~im) , cmap=plt.cm.jet)
```
An interesting result can be seen where the regions at the edges are darker, signifying higher *solidity*. This is because their straight edges conform exactly to their convex hulls.
```
# Enable importing of utilities
import sys
sys.path.append('..')
%matplotlib inline
from pylab import rcParams
rcParams['figure.figsize'] = 10, 6
```
# Analyzing Rainfall near Lake Chad
This tutorial is focused on solving the problem of determining **when** the rainy season starts and ends over Lake Chad.
[Future notebooks in this series]() deal with analyzing the Lake Chad region *before* and *after* the rainy season to determine how much the rainy season contributes to the surface area of Lake Chad.
### What to expect from this notebook
- Introduction to precipitation data.
- Exposure to datacube
- Exposure to `xarrays`
- Visualizing time series data
- curve fitting to determine start and end dates of the rainy season.
### Algorithmic process
1. create Datacube object
2. define boundaries of study area
3. use boundaries to load data
4. create a time series representation of data
5. curve fit to find rainy season start and end
# Creating the datacube object
The following code connects to the datacube and accepts `chad_rainfall` as an app-name.
```
import datacube
dc = datacube.Datacube(app = "chad_rainfall")
```
This object is the main interface to your stored and ingested data. It can handle complicated things like reprojecting data with varying resolutions and orientations. It can also be used to explore existing datasets. In this tutorial, it is only used for loading data from the datacube
## Loading GPM data
A small dataset is easier to work with than the entirety of Lake Chad. The region you're about to load contains GPM measurements for a small area of Lake Chad near the mouth of its largest contributing river. The code below displays the bounds of the region but doesn't load it.
```
from utils.data_cube_utilities.dc_display_map import display_map
display_map(latitude = (12.75, 13.0),longitude = (14.25, 14.5))
```
## Setting boundaries of our load
```
## Define Geographic boundaries using a (min,max) tuple.
latitude = (12.75, 13.0)
longitude = (14.25, 14.5)
## Specify a date range using a (min,max) tuple
from datetime import datetime
time = (datetime(2015,1,1), datetime(2016,1,2))
## define the name you gave your data while it was being "ingested", as well as the platform it was captured on.
product = 'gpm_imerg_gis_daily_global'
platform = 'GPM'
```
<br>
It's simple to intuit what the **latitude**, **longitude** and **time** bounds will get you: a bounded and gridded dataset containing our rainy season. Each square in the diagram below represents the smallest spatial unit in our imagery. This smallest unit is often referred to as a **pixel**.
<br>

While defining space and time bounds is simple to understand, it may be more complicated to pick up on what **product** and **platform** are. Platform tells you how/where the data is produced. Product is a key used to choose which representation of that platform's data you wish to index.
For the sake of this tutorial, think of **product** and **platform** as shorthand names we used to look up data that is:
- produced on a **GPM** platform.
- represented using **gpm_imerg_gis_daily_global** settings or types.
The representation reflects personal preferences that define, for example, the resolution of each pixel, how pixels are sampled, and the sort of geometric projections this grid of pixels undergoes to assume its current shape. Scripts for adding more general product types/representations are available [here](), but aren't necessary to understanding this stage of the tutorial.
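If you are unsure which product and platform names are available in your installation, the datacube object itself can be queried. A minimal sketch (assuming a standard Open Data Cube setup):
```
# List the products known to this datacube instance, with their metadata
dc.list_products()
```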
## Loading the data
```
# Load precipitation data using the parameters defined above
gpm_data = dc.load(latitude = latitude, longitude = longitude, time = time, product = product, platform = platform)
```
## Exploring precipitation data
The code above should have loaded an [xarray]() containing your GPM data. An xarray data-structure is essentially a wrapper for high-dimensional data-arrays. One of its main uses is the coupling of different data with a shared set of coordinates.
Conceptually, you can imagine GPM's xarray looking like this:
<br>

<br>
Each latitude-longitude coordinate pair will have, `total_precipitation`, `liquid_precipitation`, `ice_precipitation` and `percent_liquid` measurements taken for them.
An Xarray Dataset will store each of these measurements in separate grids and share a single set of coordinates among all measurements.
To get some detailed information about an xarray, a `print()` statement will give you a readout on its structure.
<br>
```
print( gpm_data )
```
<br>
Using the readout above we can quickly gain a summary of the dataset we just loaded by examining (a short programmatic example follows this list):
- **Coordinates**
a readout of individual coordinates. In this case, it's three lists of `latitude`, `longitude`, and `time` values.
- **Dimensions**
a readout of how large each dimension is. In this case, we've loaded in a 3 by 3 area of land and have 295 acquisitions over the loaded time range. This hints that this is a daily, not hourly or monthly, precipitation product.
- **Data Variables**
a readout of what sort of data is stored. In this case, each `latitude`, `longitude`, and `time` point will store four types of data: one for each of the `total_precipitation`, `liquid_precipitation`, `ice_precipitation`, and `percent_liquid` variables.
- **Data Size**
Each entry has a size/type associated, e.g. `int32`, `int8`, `float64`. You can use this to manage the memory footprint of your object.
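The same pieces of information can also be pulled out programmatically rather than read off the printout; a short sketch:
```
# Programmatic access to the same information shown in the printout (sketch)
print(gpm_data.dims)                        # sizes of latitude, longitude and time
print(list(gpm_data.data_vars))             # the four precipitation variables
print(gpm_data.total_precipitation.dtype)   # storage type of one variable
```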
# Computing the daily average of precipitation
The xarray that you're using has several built-in functions to efficiently run large scale operations on all points within the xarray. The following code makes use of one of those built-in functions to determine the average precipitation in an area for each time slice
```
mean_precipitation = gpm_data.mean(dim = ['latitude', 'longitude'])
```
The code above will:
- take the average of all measurements along the `latitude` dimension, leaving values on a **time * longitude** grid.
- then take the average along the `longitude` dimension, leaving a single value per **time** coordinate.
The diagram below should detail the operations of averaging these areas.

Our new xarray dataset will retain a single dimension called time. Each point in time will store a value representing the average of all pixel values at that time.
Take a look at the print statement below.
```
print(mean_precipitation)
```
Things you should notice:
- We're still representing mean_values using an xarray
- only time is displayed under the **coordinates** section.
**latitude** and **longitude** are essentially dropped.
- The averaging operation was performed on all datasets; `total_precipitation`, `liquid_precipitation`,`ice_precipitation`, and `percent_liquid`
# Displaying time series data
Xarray Datasets store several [data-arrays](http://xarray.pydata.org/en/stable/generated/xarray.DataArray.html).
The code below neatly extracts a **total_precipitation** data-array from our **mean_precipitation** Dataset.
<br>
```
mean_total_precipitation = mean_precipitation.total_precipitation
```
<br>
The new representation is also an xarray data-structure. When printed it looks something like this.
<br>
```
print(mean_total_precipitation)
```
<br>
For time series plotting we care about extracting **time** coordinates, and the data-array **values**
<br>
```
times = mean_total_precipitation.time.values
values = mean_total_precipitation.values
```
<br>
The next line of code plots a time series graph of our values
<br>
```
import matplotlib.pyplot as plt
plt.plot(times, values)
```
# Determining the bounds of the rainy season
The section above displayed daily precipitation values observed in 2015.
The shape would fit a bell curve very well. The following code deals with fitting a bell curve to our time series.
<br>
```
#Code for this algorithm is out of scope for this tutorial on datacube and is abstracted away for demonstration purposes.
import demo.curve_fit_gaussian as curve_fit
curve_fit.plot_fit(times, values, standard_deviations = 2)
```
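The ``demo.curve_fit_gaussian`` module is the authoritative implementation here and is intentionally abstracted away. As a rough idea of what such a fit involves, a minimal sketch with ``scipy`` might look like the following (an illustrative assumption, not the code used above):
```
# Minimal sketch of a Gaussian fit to the daily precipitation time series
import numpy as np
from scipy.optimize import curve_fit as scipy_curve_fit

def gaussian(t, amplitude, mean, std):
    return amplitude * np.exp(-((t - mean) ** 2) / (2 * std ** 2))

# Convert datetime64 acquisition times to "days since first acquisition"
t_days = (times - times[0]) / np.timedelta64(1, 'D')
popt, _ = scipy_curve_fit(gaussian, t_days, values, p0=[values.max(), t_days.mean(), 30.0])
amplitude, mean, std = popt
# Rainy-season bounds as mean +/- N standard deviations (here N = 2)
season_start, season_end = mean - 2 * std, mean + 2 * std
```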
<br>
We pick two points that are equidistant from the center/peak of this curve to act as our bounding points for the rainy season.
<br>
```
curve_fit.get_bounds(times, values, standard_deviations = 2)
```
It appears that **JUNE** and **OCTOBER** should be adequate bounds for the rainy season.
# Next Steps
This notebook served as an introduction to datacube and xarray datasets. Now that we have the extent of our rainy season, you can proceed to the [next notebook](igarss_chad_02.ipynb), in which Landsat 7 data is split into pre- and post-rainy-season datasets and cleaned up in preparation for [water detection](igarss_chad_03.ipynb). The entire notebook has been condensed down to about a dozen lines of code below.
```
import datacube
from datetime import datetime
import demo.curve_fit_gaussian as curve_fit
dc = datacube.Datacube(app = "chad_rainfall")
latitude = (12.75, 13.0)
longitude = (14.25, 14.5)
time = (datetime(2015,1,1), datetime(2016,1,2))
product = 'gpm_imerg_gis_daily_global'
platform = 'GPM'
gpm_data = dc.load(latitude = latitude, longitude = longitude, time = time, product = product, platform = platform)
mean_precipitation = gpm_data.mean(dim = ['latitude', 'longitude'])
times = mean_precipitation.time.values
values = mean_precipitation.total_precipitation.values
curve_fit.get_bounds(times, values, standard_deviations = 3)
```
# Physionet 2017 | ECG Rhythm Classification
## 4. Train Model (CPU Test)
### Sebastian D. Goodfellow, Ph.D.
<br>
# Setup Noteboook
```
# Import 3rd party libraries
import os
import sys
import numpy as np
import pickle
# Deep learning libraries
import tensorflow as tf
# Import local Libraries
sys.path.insert(0, r'C:\Users\sebig\Documents\code\deep_ecg')
from utils.plotting.time_series import plot_time_series_widget
from utils.data.labels.one_hot_encoding import one_hot_encoding
from utils.devices.device_check import print_device_counts, get_device_names
from train.train import train
from model.model import Model
# Configure Notebook
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
%load_ext autoreload
%autoreload 2
```
# 1. Load ECG Dataset
```
# Set path
path = os.path.join(os.path.dirname(os.getcwd()), 'data', 'training')
# Set sample rate
fs = 300
# Unpickle
with open(os.path.join(path, 'training_60s.pickle'), "rb") as input_file:
data = pickle.load(input_file)
# Get training data
x_train = data['data_train'].values.reshape(data['data_train'].shape[0], data['data_train'].shape[1], 1)
y_train = data['labels_train']['label_int'].values.reshape(data['labels_train'].shape[0], 1).astype(int)
# Get validation data
x_val = data['data_val'].values.reshape(data['data_val'].shape[0], data['data_val'].shape[1], 1)
y_val = data['labels_val']['label_int'].values.reshape(data['labels_val'].shape[0], 1).astype(int)
# Print dimensions
print('x_train dimensions: ' + str(x_train.shape))
print('y_train dimensions: ' + str(y_train.shape))
print('x_val dimensions: ' + str(x_val.shape))
print('y_val dimensions: ' + str(y_val.shape))
# One hot encoding array dimensions
y_train_1hot = one_hot_encoding(labels=y_train.ravel(), classes=len(np.unique(y_train.ravel())))
y_val_1hot = one_hot_encoding(labels=y_val.ravel(), classes=len(np.unique(y_val.ravel())))
# Print dimensions
print('x_train dimensions: ' + str(x_train.shape))
print('y_train dimensions: ' + str(y_train.shape))
print('y_train_1hot dimensions: ' + str(y_train_1hot.shape))
print('x_val dimensions: ' + str(x_val.shape))
print('y_val dimensions: ' + str(y_val.shape))
print('y_val_1hot dimensions: ' + str(y_val_1hot.shape))
# Label lookup
label_lookup = {'N': 0, 'A': 1, 'O': 2, '~': 3}
# Label dimensions
print('Train: Classes: ' + str(np.unique(y_train.ravel())))
print('Train: Count: ' + str(np.bincount(y_train.ravel())))
print('Val: Classes: ' + str(np.unique(y_val.ravel())))
print('Val: Count: ' + str(np.bincount(y_val.ravel())))
# Label dictionary
label_list = ['Normal Sinus Rhythm', 'Atrial Fibrillation', 'Other Rhythm']
# Plot time series
plot_time_series_widget(time_series=x_train, labels=y_train, fs=fs, label_list=label_list)
```
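The ``one_hot_encoding`` helper used above is a local utility; conceptually, it maps each integer label to an indicator row. A minimal sketch of that transformation (an illustration, not the utility's actual source):
```
import numpy as np

def one_hot_sketch(labels, classes):
    # Each integer label k becomes a row with a single 1 in column k
    return np.eye(classes, dtype=int)[labels]

print(one_hot_sketch(np.array([0, 2, 1, 3]), classes=4))
```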
# 2. Device Check
```
# Get GPU count
print_device_counts()
```
# 3. Initialize Model
```
# Set save path for graphs, summaries, and checkpoints
save_path = r'C:\Users\sebig\Desktop\tensorboard\deep_ecg\test'
# Set model name
model_name = 'test_10'
# Maximum number of checkpoints to keep
max_to_keep = 20
# Set random seeds
seed = 0
tf.set_random_seed(seed)
# Get training dataset dimensions
(m, length, channels) = x_train.shape
# Get number of label classes
classes = y_train_1hot.shape[1]
# Choose network
network_name = 'DeepECG'
# Set network inputs
network_parameters = dict(
length=length,
channels=channels,
classes=classes,
seed=seed,
)
# Create model
model = Model(
model_name=model_name,
network_name=network_name,
network_parameters=network_parameters,
save_path=save_path,
max_to_keep=max_to_keep
)
```
# 4. Train Model
```
# Set hyper-parameters
epochs = 10
minibatch_size = 1
learning_rate = 0.001
# Train model
train(model=model, x_train=x_train[0:1], y_train=y_train_1hot[0:1], x_val=x_val[0:1], y_val=y_val_1hot[0:1],
learning_rate=learning_rate, epochs=epochs, mini_batch_size=minibatch_size)
```
```
from keras import applications
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers
from keras.models import Sequential, Model
from keras.layers import Dropout, Flatten, Dense, Conv2D, MaxPooling2D
from keras import backend as k
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint
import numpy as np
import h5py
img_width, img_height = 256, 256
### Build the network
model = Sequential()
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1', input_shape=(256, 256, 3)))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2'))
model.add(MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool'))
# Block 2
model.add(Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1'))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2'))
model.add(MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool'))
model.add(Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1'))
model.add(Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2'))
model.add(Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3'))
model.add(MaxPooling2D((2,2), strides=(2,2), name='block3_pool'))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1'))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2'))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3'))
model.add(MaxPooling2D((2,2), strides=(2,2), name='block4_pool'))
model.summary()
layer_dict = dict([(layer.name, layer) for layer in model.layers])
a = [layer.name for layer in model.layers]
print(a)
weights_path = './vgg19_weights.h5'
# Pretrained VGG19 weights: https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg19_weights_tf_dim_ordering_tf_kernels.h5
f = h5py.File(weights_path, 'r')  # opened read-only; model.load_weights() below reads the file directly
model.load_weights(weights_path, by_name=True)
layer_count = 0
weights_dict = []
for layer in model.layers:
weights = layer.get_weights()
weights_dict.append(weights)
layer_count = layer_count + 1
print("Model Layer name : {}".format(str(layer.name)))
print("The total number layers is : " + str(layer_count))
for i in layer_dict.keys():
index = a.index(i)
model.layers[index].set_weights(weights_dict[index])
model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1'))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2'))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3'))
model.add(MaxPooling2D((2,2), strides=(2,2), name='block5_pool'))
model.add(Flatten())
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
model.summary()
batch_size = 16
adam = Adam(lr=0.00002)
model.compile(loss='mse', optimizer=adam, metrics=['accuracy'])
filepath="./models_vgg/checkpoint_model-{epoch:02d}-{val_acc:.2f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
callbacks_list = [checkpoint]
#history_object = model.fit(np.array(X_train), y_train_one_hot, batch_size=32, epochs=10, verbose=1, validation_split=0.2)
# Keras 2 expects steps_per_epoch (number of batches per epoch), not samples_per_epoch
history_object = model.fit_generator(datagen.flow(X_train, y_train_one_hot, batch_size=batch_size),
                                     validation_data=datagen_validation.flow(X_train, y_train_one_hot, batch_size=32),
                                     steps_per_epoch=len(X_train) // batch_size,
                                     epochs=60,
                                     verbose=1,
                                     validation_steps=int(len(X_train) / batch_size),
                                     callbacks=callbacks_list)
model_path = 'models_vgg/model_VGG_tuned_v8_10-4.h5'
model.save(model_path)
print("Model saved to {}".format(model_path))
```
# Uncertainty Forest: How to Run Tutorial
This set of two tutorials (`uncertaintyforest_running_example.ipynb` and `uncertaintyforest_fig1.ipynb`) will explain the UncertaintyForest class. After following both tutorials, you should have the ability to run UncertaintyForest code on your own machine and generate Figure 1 from [this paper](https://arxiv.org/pdf/1907.00325.pdf).
If you haven't already, take a look at the installation and package setup tutorial (`Installation-and-Package-Setup-Tutorial.ipynb`) to set up the progressive learning package.
## Simply Running the Uncertainty Forest class
### *Goal: Train the UncertaintyForest classifier on some training data and produce a metric of accuracy on some test data*
### 1: First, we'll import required packages and set some parameters for the forest.
```
from proglearn.forest import UncertaintyForest
from proglearn.sims import generate_gaussian_parity
# Real Params.
n_train = 10000 # number of training data points
n_test = 1000 # number of testing data points
num_trials = 10 # number of trials
n_estimators = 100 # number of estimators
```
#### We've done a lot. Can we just run it now? Yes!
### 2: Creating & Training our UncertaintyForest
First, generate our data:
```
X, y = generate_gaussian_parity(n_train+n_test)
```
Now, split that data into training and testing sets. We don't want to accidentally train on our test data.
```
X_train = X[0:n_train] # Takes the first n_train number of data points and saves as X_train
y_train = y[0:n_train] # same as above for the labels
X_test = X[n_train:] # Takes the remainder of the data (n_test data points) and saves as X_test
y_test = y[n_train:] # same as above for the labels
```
Then, create our forest:
```
UF = UncertaintyForest(n_estimators = n_estimators)
```
Then fit our learner:
```
UF.fit(X_train, y_train)
```
Well, we're done. Exciting right?
### 3: Producing a Metric of Accuracy for Our Learner
We've now created our learner and trained it. But to actually show if what we did is effective at predicting the class labels of the data, we'll create some test data (with the same distribution as the train data) and see if we classify it correctly.
```
X_test, y_test = generate_gaussian_parity(n_test) # creates the test data
predictions = UF.predict(X_test) # predict the class labels of the test data
```
To see the learner's accuracy, we'll now compare the predictions with the actual test data labels. We'll find the number correct and divide by the number of data.
```
accuracy = sum(predictions == y_test)/n_test
```
And, let's take a look at our accuracy:
```
print(accuracy)
```
Ta-da. That's an uncertainty forest at work.
## What's next? --> See a metric on the power of uncertainty forest by generating Figure 1 from [this paper](https://arxiv.org/pdf/1907.00325.pdf)
### To do this, check out `uncertaintyforest_fig1`
# SageMaker Inference Recommender - XGBoost
## 1. Introduction
SageMaker Inference Recommender is a new capability of SageMaker that reduces the time required to get machine learning (ML) models in production by automating load tests and optimizing model performance across instance types. You can use Inference Recommender to select a real-time inference endpoint that delivers the best performance at the lowest cost.
Get started with Inference Recommender on SageMaker in minutes while selecting an instance and get an optimized endpoint configuration in hours, eliminating weeks of manual testing and tuning time.
## 2. Setup
Note that we are using the `conda_python3` kernel in SageMaker Notebook Instances. This is running Python 3.6. If you'd like to use the same setup, in the AWS Management Console, go to the Amazon SageMaker console. Choose Notebook Instances, and click create a new notebook instance. Upload the current notebook and set the kernel. You can also run this in SageMaker Studio Notebooks with the `Python 3 (Data Science)` kernel.
In the next steps, you'll import standard methods and libraries as well as set variables that will be used in this notebook. The `get_execution_role` function retrieves the AWS Identity and Access Management (IAM) role you created at the time of creating your notebook instance.
```
from sagemaker import get_execution_role, Session, image_uris
import boto3
import time
region = boto3.Session().region_name
role = get_execution_role()
sm_client = boto3.client("sagemaker", region_name=region)
sagemaker_session = Session()
print(region)
```
## 3. Machine learning model details
Inference Recommender uses metadata about your ML model to recommend the best instance types and endpoint configurations for deployment. You can provide as much or as little information as you'd like but the more information you provide, the better your recommendations will be.
ML Frameworks: `TENSORFLOW, PYTORCH, XGBOOST, SAGEMAKER-SCIKIT-LEARN`
ML Domains: `COMPUTER_VISION, NATURAL_LANGUAGE_PROCESSING, MACHINE_LEARNING`
Example ML Tasks: `CLASSIFICATION, REGRESSION, IMAGE_CLASSIFICATION, OBJECT_DETECTION, SEGMENTATION, MASK_FILL, TEXT_CLASSIFICATION, TEXT_GENERATION, OTHER`
```
# ML framework details
framework = "XGBOOST"
framework_version = "1.2.0"
# model name as standardized by model zoos or a similar open source model
model_name = "xgboost"
# ML model details
ml_domain = "MACHINE_LEARNING"
ml_task = "CLASSIFICATION"
```
## 4. Create a model archive
SageMaker models need to be packaged in `.tar.gz` files. When your SageMaker Endpoint is provisioned, the files in the archive will be extracted and put in `/opt/ml/model/` on the Endpoint.
In this step, there are two optional tasks:
(1) Train a sample XGBoost model on synthetic data
(2) Package the model artifact as a `model.tar.gz` archive
These tasks are provided as a sample reference but can and should be modified when using your own trained models with Inference Recommender.
### Optional: Train an XGBoost model
Let's quickly train an XGBoost model. If you already have a model, you can skip this step and proceed to the next section.
For the purposes of this notebook, we are training an XGBoost model on random data.
```
# Install sklearn and XGBoost
!pip3 install -U scikit-learn xgboost==1.2.0 --quiet
# Import required libraries
import numpy as np
from numpy import loadtxt
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
# Generate dummy data to perform binary classification
seed = 7
features = 50 # number of features
samples = 10000 # number of samples
X = np.random.rand(samples, features).astype("float32")
Y = np.random.randint(2, size=samples)
test_size = 0.1
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=test_size, random_state=seed)
model = XGBClassifier()
model.fit(X_train, y_train)
model_fname = "xgboost.model"
model.save_model(model_fname)
```
### Create a tarball
To bring your own XGBoost model, SageMaker expects a single archive file in .tar.gz format, containing a model file and optionally inference code.
```
model_archive_name = "model.tar.gz"
!tar -cvpzf {model_archive_name} 'xgboost.model'
```
### Upload to S3
We now have a model archive ready. We need to upload it to S3 before we can use it with Inference Recommender. We will use the SageMaker Python SDK to handle the upload.
```
# model package tarball (model artifact + inference code)
model_url = sagemaker_session.upload_data(path=model_archive_name, key_prefix="model")
print("model uploaded to: {}".format(model_url))
```
## 5. Create a sample payload archive
We need to create an archive that contains individual files that Inference Recommender can send to your SageMaker Endpoints. Inference Recommender will randomly sample files from this archive so make sure it contains a similar distribution of payloads you'd expect in production. Note that your inference code must be able to read in the file formats from the sample payload.
*Here we are only adding a single CSV file for the example. In your own use case(s), it's recommended to add a variety of samples that is representative of your payloads.*
```
payload_archive_name = "payload.tar.gz"
print(X_test.shape)
batch_size = 100
np.savetxt("sample.csv", X_test[0:batch_size, :], delimiter=",")
!wc -l sample.csv
```
### Create a tarball
```
!tar -cvzf {payload_archive_name} sample.csv
```
### Upload to S3
Next, we'll upload the packaged payload examples (payload.tar.gz) that was created above to S3. The S3 location will be used as input to our Inference Recommender job later in this notebook.
```
sample_payload_url = sagemaker_session.upload_data(path=payload_archive_name, key_prefix="payload")
```
## 6. Register model in Model Registry
In order to use Inference Recommender, you must have a versioned model in SageMaker Model Registry. To register a model in the Model Registry, you must have a model artifact packaged in a tarball and an inference container image. Registering a model includes the following steps:
1) **Create Model Group:** This is a one-time task per machine learning use case. A Model Group contains one or more versions of your packaged model.
2) **Register Model Version/Package:** This task is performed for each new packaged model version.
### Container image URL
If you don't have an inference container image, you can use one of the prebuilt containers provided by AWS, such as the open source AWS [Deep Learning Containers (DLCs)](https://github.com/aws/deep-learning-containers), to serve your ML model. The code below retrieves the SageMaker XGBoost container image for your region and framework version.
```
dlc_uri = image_uris.retrieve("xgboost", region, "1.2-1")
dlc_uri
```
### Create Model Group
```
model_package_group_name = "{}-cpu-models-".format(framework) + str(round(time.time()))
model_package_group_description = "{} models".format(ml_task.lower())
model_package_group_input_dict = {
"ModelPackageGroupName": model_package_group_name,
"ModelPackageGroupDescription": model_package_group_description,
}
create_model_package_group_response = sm_client.create_model_package_group(
**model_package_group_input_dict
)
print(
"ModelPackageGroup Arn : {}".format(create_model_package_group_response["ModelPackageGroupArn"])
)
```
### Register Model Version/Package
In this step, you'll register your pretrained model that was packaged in the prior steps as a new version in SageMaker Model Registry. First, you'll configure the model package/version identifying which model package group this new model should be registered within as well as identify the initial approval status. You'll also identify the domain and task for your model. These values were set earlier in the notebook
where `ml_domain = 'MACHINE_LEARNING'` and `ml_task = 'CLASSIFICATION'`
*Note: `ModelApprovalStatus` is a configuration parameter that can be used in conjunction with SageMaker Projects to trigger an automated deployment pipeline.*
```
model_package_description = "{} {} inference recommender".format(framework, model_name)
model_approval_status = "PendingManualApproval"
create_model_package_input_dict = {
"ModelPackageGroupName": model_package_group_name,
"Domain": ml_domain.upper(),
"Task": ml_task.upper(),
"SamplePayloadUrl": sample_payload_url,
"ModelPackageDescription": model_package_description,
"ModelApprovalStatus": model_approval_status,
}
```
### Set up inference specification
You'll now set up the inference specification configuration for your model version. This contains information on how the model should be hosted.
Inference Recommender expects a single input MIME type for sending requests. Learn more about [common inference data formats on SageMaker](https://docs.aws.amazon.com/sagemaker/latest/dg/cdf-inference.html). This MIME type will be sent in the Content-Type header when invoking your endpoint.
```
input_mime_types = ["text/csv"]
```
If you specify a set of instance types below (i.e., a non-empty list), Inference Recommender will restrict its recommendations to those instance types. For this example, we provide a list of common CPU instance types used with XGBoost.
```
supported_realtime_inference_types = [
"ml.m4.2xlarge",
"ml.c5.2xlarge",
"ml.c5.xlarge",
"ml.c5.9xlarge",
]
modelpackage_inference_specification = {
"InferenceSpecification": {
"Containers": [
{
"Image": dlc_uri,
"Framework": framework.upper(),
"FrameworkVersion": framework_version,
"NearestModelName": model_name,
}
],
"SupportedContentTypes": input_mime_types, # required, must be non-null
"SupportedResponseMIMETypes": [],
"SupportedRealtimeInferenceInstanceTypes": supported_realtime_inference_types, # optional
}
}
# Specify the model data
modelpackage_inference_specification["InferenceSpecification"]["Containers"][0][
"ModelDataUrl"
] = model_url
```
Now that you've configured the model package, the next step is to create the model package/version in SageMaker Model Registry
```
create_model_package_input_dict.update(modelpackage_inference_specification)
create_mode_package_response = sm_client.create_model_package(**create_model_package_input_dict)
model_package_arn = create_mode_package_response["ModelPackageArn"]
print("ModelPackage Version ARN : {}".format(model_package_arn))
```
## 7. Create an Inference Recommender Default Job
Now with your model in Model Registry, you can kick off a 'Default' job to get instance recommendations. This only requires your `ModelPackageVersionArn` and comes back with recommendations within 45 minutes.
The output is a list of instance type recommendations with associated environment variables, cost, throughput and latency metrics.
```
job_name = model_name + "-instance-" + str(round(time.time()))
job_description = "{} {}".format(framework, model_name)
job_type = "Default"
print(job_name)
rv = sm_client.create_inference_recommendations_job(
JobName=job_name,
JobDescription=job_description, # optional
JobType=job_type,
RoleArn=role,
InputConfig={"ModelPackageVersionArn": model_package_arn},
)
print(rv)
```
## 8. Instance Recommendation Results
Each inference recommendation includes `InstanceType`, `InitialInstanceCount`, `EnvironmentParameters` which are tuned environment variable parameters for better performance. We also include performance and cost metrics such as `MaxInvocations`, `ModelLatency`, `CostPerHour` and `CostPerInference`. We believe these metrics will help you narrow down to a specific endpoint configuration that suits your use case.
Example:
- If your motivation is overall price-performance with an emphasis on throughput, focus on the `CostPerInference` metric.
- If your motivation is a balance between latency and throughput, focus on the `ModelLatency` and `MaxInvocations` metrics.
| Metric | Description |
| --- | --- |
| ModelLatency | The interval of time taken by a model to respond as viewed from SageMaker. This interval includes the local communication times taken to send the request and to fetch the response from the container of a model and the time taken to complete the inference in the container. <br /> Units: Microseconds |
| MaxInvocations | The maximum number of InvokeEndpoint requests sent to a model endpoint. <br /> Units: None |
| CostPerHour | The estimated cost per hour for your real-time endpoint. <br /> Units: US Dollars |
| CostPerInference | The estimated cost per inference for your real-time endpoint. <br /> Units: US Dollars |
```
import pprint
import pandas as pd
finished = False
while not finished:
inference_recommender_job = sm_client.describe_inference_recommendations_job(JobName=job_name)
if inference_recommender_job["Status"] in ["COMPLETED", "STOPPED", "FAILED"]:
finished = True
else:
print("In progress")
time.sleep(300)
if inference_recommender_job["Status"] == "FAILED":
    print("Inference recommender job failed")
    print("Failed Reason: {}".format(inference_recommender_job["FailedReason"]))
else:
print("Inference recommender job completed")
data = [
{**x["EndpointConfiguration"], **x["ModelConfiguration"], **x["Metrics"]}
for x in inference_recommender_job["InferenceRecommendations"]
]
df = pd.DataFrame(data)
df.drop("VariantName", inplace=True, axis=1)
pd.set_option("max_colwidth", 400)
df.head()
```
## 9. Conclusion
This notebook discussed how to use SageMaker Inference Recommender with an XGBoost model to help determine the right CPU instance to reduce costs and maximize performance. The notebook walked you through training a quick XGBoost model, registering your model in Model Registry, and creating an Inference Recommender Default job to get recommendations. You can modify the batch size, features and instance types to match your own ML workload as well as bring your own XGBoost model for testing.
# Matrix representation of quantum circuits - notations and gotchas
> A case study by building a tensor network to match qiskit conventions
- toc: true
- badges: true
- comments: true
- categories: [qiskit, tensor networks, quantum concepts]
- image: images/upsidedown.jpg
# Intro
Usually, for experimenting with quantum circuits I use `qiskit`. Like any higher-level environment it is very convenient for common tasks, but it may turn out too inflexible for unusual use cases. A somewhat opposite approach is to use much lower-level tools, gaining flexibility at the expense of convenience. Currently I want to use Google's `tensornetwork` [package](https://tensornetwork.readthedocs.io/en/latest/) for simulations and training of quantum circuits, but this requires building from scratch many things that come for free in `qiskit`. It also forces you to become explicit about the conventions for the matrix representation of quantum circuits. As long as you stay within a single framework this may not be an issue, but for debugging, and for comparing different frameworks, it becomes unavoidable. So I always anticipated that a day would come when I would need to face my fears and order all the terms in a tensor product by hand. Now that I seem to be past the difficult part, I'd better write this down in case I need to do something similar in the future.
## Defining the problem
OK, so what is the problem? Consider the following simple circuit built with `qiskit`:
```
#collapse
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Operator, Statevector
qc = QuantumCircuit(2)
qc.x(0)
qc.y(1)
qc.cx(0,1)
qc.draw(output='mpl')
```
It is not hard or ambiguous to interpret what this circuit does by inspecting the diagram. Say the input state is $q_0=|0\rangle$, $q_1=|1\rangle$. After $X$ acts on $q_0$ it becomes $q_0\to X |0\rangle=|1\rangle$. Similarly, $q_1$ after $Y$ becomes $q_1\to Y|1\rangle=-i |0\rangle$. Since $q_0$ is now "on", the CNOT gate flips the state of $q_1$ further to $q_1 \to -i|1\rangle$. So the end result is that $q_0=|0\rangle, q_1=|1\rangle$ is transformed to $q_0=|1\rangle, q_1=-i|1\rangle$. Or perhaps a picture says it better

Similarly, we can work out what the circuit does for other computational basis states which by linearity fully fixes the action of the circuit. Although quite explicit, this is a clumsy description. This is why the matrix notation is usually used. And indeed, we can obtain the matrix corresponding to our quantum circuit quite easily in `qiskit`:
```
U_qs = Operator(qc).data
U_qs
```
It is important to realize that a number of conventions must be chosen before such explicit matrix representation can be written down. In particular, I will emphasize two points I tripped over while studying this: ordering of the qubit states in the tensor product or "vertical ordering" and ordering of operators or "horizontal ordering".
<img src="myimages/upsidedown.jpg" alt="Drawing" style="width: 400px;"/>
In the rest of the post I will clarify what are the conventions used in `qiskit` and how to reproduce the circuit with the `tensornetwork` library.
# States: vertical ordering
## Single qubit states
First we need to give matrix representations to two basis states of a single qubit. Here I think it is quite uncontroversial to choose
\begin{align}
|0\rangle = \begin{pmatrix}1\\0\end{pmatrix},\qquad |1\rangle = \begin{pmatrix}0\\1\end{pmatrix} \label{kets}
\end{align}
These are the "ket" vectors. Their "bra" counterparts are
\begin{align}
\langle 0| = \begin{pmatrix}1 & 0\end{pmatrix}, \qquad \langle 1| = \begin{pmatrix}0 & 1\end{pmatrix} \label{bras}
\end{align}
With these, the following operators can be computed
\begin{align}
|0\rangle\langle 0| = \begin{pmatrix}1 & 0 \\ 0 & 0\end{pmatrix},\qquad |0\rangle\langle 1| = \begin{pmatrix}0 & 1 \\ 0 & 0\end{pmatrix} \nonumber\\ |1\rangle\langle 0| = \begin{pmatrix}0 & 0 \\ 1 & 0\end{pmatrix},\qquad |1\rangle\langle 1| = \begin{pmatrix}0 & 0 \\ 0 & 1\end{pmatrix} \label{ketbras}
\end{align}
## Multiple qubit states
When there is more than a single qubit things become a bit more interesting and potentially confusing. For example, the combined Hilbert space of two qubits $\mathcal{H}_2$ is a tensor product of single-qubit Hilbert spaces $\mathcal{H}_2 = \mathcal{H}_1 \otimes \mathcal{H}_1$ but we need to decide which qubit goes first and which goes second. In `qiskit` a convention is adopted that additional qubits join from the *left*, i.e. when we have two qubits as here
```
#collapse
qc01 = QuantumCircuit(2)
qc01.draw(output='mpl')
```
The state of the system is $|q_1\rangle\otimes |q_0\rangle$ (this is of course only literally true for [non-entangled states](https://idnm.github.io/blog/quantum%20concepts/qiskit/2021/07/12/Entanglement.html), but it is enough to define everything on the computational basis states). OK, but how do we translate this into the matrix representation? States in a tensor product of vector spaces can be represented via the [Kronecker product](https://en.wikipedia.org/wiki/Kronecker_product), which is not symmetric under permutation of its arguments. The best way to explain how the Kronecker product works is, as usual, through examples:
\begin{align}
\begin{pmatrix} 1 \\ 0 \end{pmatrix} \otimes \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} a \\ b \\ 0 \\ 0 \end{pmatrix},\qquad \begin{pmatrix} 0 \\ 1 \end{pmatrix} \otimes \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 0\\ 0\\ a \\ b \end{pmatrix}
\end{align}
Result for generic left vector can be obtained by linearity
\begin{align}
\begin{pmatrix} x \\ y \end{pmatrix} \otimes \begin{pmatrix} a \\ b \end{pmatrix} = x \begin{pmatrix} 1 \\ 0 \end{pmatrix} \otimes \begin{pmatrix} a \\ b\end{pmatrix} +y\begin{pmatrix} 0 \\ 1 \end{pmatrix} \otimes \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} x a\\ x b\\ y a \\ y b \end{pmatrix} = \begin{pmatrix} x \begin{pmatrix} a\\ b\end{pmatrix} \\ y \begin{pmatrix} a \\ b\end{pmatrix} \end{pmatrix}
\end{align}
The last notation here is a bit informal, but it shows what happens: one simply substitutes the right vector into each element of the left vector, multiplied by the corresponding component of the left vector. The Kronecker product is defined in the same way for matrices of arbitrary size, not just for two vectors.
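If you prefer to check such manipulations numerically, numpy's `np.kron` implements exactly this convention. Here is a small sketch (the concrete numbers standing in for $a$ and $b$ are arbitrary):
```
# A quick numerical check of the Kronecker product examples above
ket0 = np.array([1, 0])
ket1 = np.array([0, 1])
ab = np.array([3, 7])     # stands for the generic vector (a, b)
print(np.kron(ket0, ab))  # [3 7 0 0] -- the right vector lands in the upper block
print(np.kron(ket1, ab))  # [0 0 3 7] -- the right vector lands in the lower block
```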
So, now we can compute matrix representations of states in the computation basis of two-qubit system
\begin{align}
|00\rangle = \begin{pmatrix}1\\0 \end{pmatrix} \otimes \begin{pmatrix}1\\0 \end{pmatrix} = \begin{pmatrix}1\\0\\0\\0\end{pmatrix},\quad |01\rangle = \begin{pmatrix}1\\0 \end{pmatrix} \otimes \begin{pmatrix}0\\1 \end{pmatrix} = \begin{pmatrix}0\\1\\0\\0\end{pmatrix} \label{01}\\
|10\rangle = \begin{pmatrix}0\\1\end{pmatrix} \otimes \begin{pmatrix}1\\0 \end{pmatrix} = \begin{pmatrix}0\\0\\1\\0\end{pmatrix},\quad |11\rangle = \begin{pmatrix}0\\1\end{pmatrix} \otimes \begin{pmatrix}0\\1 \end{pmatrix} = \begin{pmatrix}0\\0\\0\\1\end{pmatrix}
\end{align}
There is a useful relation between the index of the non-zero element $n$ in the four-dimensional representation and the computational basis bitstring $q_1q_0$, namely $n=2q_1+q_0$. I.e. the bitstring $q_1q_0$ is the binary representation of the index $n$. This extends to arbitrary number of qubits, for example since $101$ is $5$ in binary representation it follows
\begin{align}
|101\rangle = \begin{pmatrix}0\\0\\0\\0\\0\\1\\0\\0 \end{pmatrix} \label{101}
\end{align}
(try to obtain this from the two tensor products!)
Don't believe me? OK, let's check! In `qiskit` there is a convenient function to construct a vector representation from a bit string which we will take advantage of. First start with a two-qubit example:
```
s01 = Statevector.from_label('01')
s01.data
```
Comparing to \eqref{01} we find agreement. Similarly,
```
s101 = Statevector.from_label('101')
s101.data
```
Again, this is in agreement with \eqref{101}.
However, I am not sure that this relation is sufficient to justify the ordering of the tensor products. To me it is much more natural to read the circuit from top to bottom and construct the Hilbert spaces accordingly, say $\mathcal{H}_0\otimes \mathcal{H}_1 \otimes \mathcal{H}_2 \dots$ instead of $\cdots \mathcal{H}_2\otimes \mathcal{H}_1\otimes \mathcal{H}_0$. Later I will change the ordering of the tensor product to my liking, but for now we stick with the `qiskit` one. Now, with conventions for states in place we can proceed to operators.
# Operators: horizontal ordering
One can say that convention for states representation and ordering of tensor products is a "vertical" convention. There is also a "horizontal" convention which might be potentially confusing. Consider the following circuit
```
#collapse
qc123 = QuantumCircuit(1)
qc123.rx(1, 0)
qc123.ry(2, 0)
qc123.rz(3, 0)
qc123.draw(output='mpl')
```
Here, the operator $R_x$ is applied first, the operator $R_y$ second and $R_z$ last. So in mathematical notation the circuit corresponds to $R_z R_y R_x$ and *not* to $R_x R_y R_z$. I think that the circuit notation is actually better. We think and write from left to right, and this is also the direction that time flows on paper. When another thing happens, we write it to the right, and it would be convenient to apply the corresponding operator also to the right. I have heard real mathematicians complain about this issue, but I guess we are stuck with it for now.
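As a quick sanity check of this "horizontal" convention, we can multiply the individual rotation matrices in the mathematical order and compare with the operator `qiskit` assigns to `qc123`. This is just a small verification sketch; it assumes only the imports made earlier plus the standard rotation gates from `qiskit.circuit.library`:
```
from qiskit.circuit.library import RXGate, RYGate, RZGate

U_circuit = Operator(qc123).data
U_math = Operator(RZGate(3)).data @ Operator(RYGate(2)).data @ Operator(RXGate(1)).data
print(np.allclose(U_circuit, U_math))  # True: the circuit corresponds to Rz Ry Rx
```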
# Paper-and-pencil computation
With the set up in place we can compute the circuit of interest by hands. For convenience I plot it here once again:
```
#collapse
qc.draw(output='mpl')
```
OK, so what is the unitary matrix corresponding to this circuit? It is
\begin{align}
U = CNOT_{01} \cdot (Y\otimes X)
\end{align}
Here
\begin{multline}
CNOT_{01} = \mathbb{1}\otimes |0\rangle\langle 0|+X\otimes |1\rangle\langle 1|=\\\begin{pmatrix}1&0\\0&1\end{pmatrix}\otimes \begin{pmatrix}1&0\\0&0\end{pmatrix}+\begin{pmatrix}0&1\\1&0\end{pmatrix}\otimes \begin{pmatrix}0&0\\0&1\end{pmatrix}=\begin{pmatrix}1&0&0&0\\0&0&0&1\\0&0&1&0\\0&1&0&0\end{pmatrix}
\end{multline}
and
\begin{align}
Y\otimes X = \begin{pmatrix} 0& -i\\i&0\end{pmatrix} \otimes \begin{pmatrix} 0& 1\\1&0\end{pmatrix}=\begin{pmatrix}0&0&0&-i\\0&0&-i&0\\0&i&0&0\\i&0&0&0\end{pmatrix}
\end{align}
Multiplying them together gives
\begin{align}
U = \begin{pmatrix}0 & 0 & 0 & -i \\ i&0&0&0 \\ 0 & i & 0 & 0 \\ 0 & 0 & -i & 0\end{pmatrix}
\end{align}
Alright, so this is indeed the matrix that `qiskit` computes:
```
U_qs
```
We can now also check that the states evolve as we expected. For example, recall that we computed that our quantum circuit maps $q_0 =|0\rangle, q_1 =|1\rangle$ to $q_0 =|1\rangle, q_1 =|1\rangle$ with an overall phase $-i$. Agreement with `qiskit` can be checked as follows:
```
qs_state = Statevector.from_label('10').evolve(qc).data
our_state = -1j*Statevector.from_label('11').data
np.allclose(qs_state, our_state)
```
# Implementation with `tensornetworks`
I will not give a proper introduction to tensor networks but just make some digressions I think should be helpful as we go along.
First thing we will need are the matrices defining $X, Y$ and $CNOT$ gates. Let us introduce them.
```
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
CNOT = np.array([[1, 0, 0, 0],
[0, 0, 0, 1],
[0, 0, 1, 0],
[0, 1, 0, 0]]).reshape(2,2,2,2)
```
## An aside about reshaping
Note that, as usually written, $CNOT$ is a $4\times4$ matrix. But since as a quantum gate it acts on two qubits, it should rather be a four-legged tensor. This is the purpose of the reshaping operation. At first the reshaping might be a bit tricky, so let me illustrate it with an example. Introduce two $4\times4$ matrices and define their product:
```
A = np.random.rand(4,4)
B = np.random.rand(4,4)
AB = A @ B
```
Now define the corresponding four-legged tensors.
```
import tensornetwork as tn
a = tn.Node(A.reshape(2,2,2,2))
b = tn.Node(B.reshape(2,2,2,2))
```
By contracting the legs (or "edges" in terminology of `tensornetworks`) appropriately, we can reproduce the matrix multiplication. First the code:
```
a[2] ^ b[0]
a[3] ^ b[1]
ab = tn.contractors.greedy([a, b], output_edge_order=[a[0], a[1], b[2], b[3]]).tensor
```
We can check that the contraction performed in this way exactly reproduces the matrix multiplication of original $4\times4$ matrices:
```
np.allclose(AB, ab.reshape(4,4))
```
This can be interpreted graphically as follows. First, the reshaping procedure can be thought of as splitting each of two four-dimensional legs of the original matrix into two two-dimensional ones

The labels on the legs have nothing to do with qubit states, these are just indices of edges as assigned by `tn.Node` operation on our matrices. The matrix multiplication of the original matrices in terms of four-legged tensors then can be drawn as follows

The index arrangements in the last part explain why we connected the edges in our code the way we did. This is something to watch out for. For example, connecting edges of two identity tensors in the wrong way may produce a $SWAP$ gate.
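To make the warning about wrongly connected identity tensors concrete, here is a tiny numpy illustration (not part of the original example): permuting the two input legs of the reshaped two-qubit identity turns it into a $SWAP$ gate.
```
# Exchanging the two "input" legs of the reshaped 2-qubit identity produces a SWAP gate
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])
id_tensor = np.identity(4).reshape(2, 2, 2, 2)
miswired = id_tensor.transpose(0, 1, 3, 2)  # swap the two input legs
print(np.allclose(miswired.reshape(4, 4), SWAP))  # True
```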
## Tensor product ordering
The matrix representation of a tensor diagram like this

also comes with a convention for the ordering of tensor products. In `tensornetwork` as well as in my opinion it is natural to order top-down, i.e. the above diagram is $U\otimes \mathbb{1}$ instead of $\mathbb{1}\otimes U$ as is adopted in `qiskit`.
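Here is a one-line check of the `qiskit` side of this statement, reusing the `X` matrix and the imports from above: a gate acting on the top wire $q_0$ of a two-qubit circuit is represented as $\mathbb{1}\otimes U$ in `qiskit`'s ordering.
```
# X acting on qubit 0 of a 2-qubit circuit equals kron(I, X) in qiskit's convention
qc_demo = QuantumCircuit(2)
qc_demo.x(0)
print(np.allclose(Operator(qc_demo).data, np.kron(np.identity(2), X)))  # True
```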
## Circuit from tensor network
Alright, now we are in a position to reproduce the circuit unitary from the tensor network with nodes `x`, `y` and `cnot`:
```
# Make tensors from matrices
x, y, cnot = list(map(tn.Node, [X, Y, CNOT]))
# Connect edges properly
cnot[2] ^ y[0]
cnot[3] ^ x[0]
# Perform the contraction ~ matrix multiplication
U_tn = tn.contractors.greedy([cnot, x, y], output_edge_order=[cnot[0], cnot[1], y[1], x[1]]).tensor
```
This way of contracting the edges corresponds to the following diagram:

Note that this is basically the original circuit with both the vertical and the horizontal directions reversed. The horizontal reversal is due to mathematical vs circuit notation (circuit is better!) and the vertical reversal is due to the mismatch between `qiskit` and `tensornetwork` ordering of tensor product (`tensornetwork`'s is better!). We can check that the unitary we obtain from this tensor network agrees with `qiskit`'s
```
np.allclose(U_tn.reshape(4,4), U_qs)
```
## A better way
I find all this misalignment very inconvenient and hard to debug. Ideally I want to look at the quantum circuit and construct the corresponding tensor network just as I read a text: from left to right and from top to bottom. Here I propose a solution which seems much more satisfactory to me. We will deal with horizontal reversal by first defining edges and then applying gates to them. This way we can read the circuit from left to right and simply add new gates, just as in `qiskit`. I will not try to revert the vertical direction directly, because I find it hard to think upside down. Instead, for comparison with `qiskit` I will use a built-in `reverse_bits` method.
So let's start by defining a function that applies a given gate to a collection of qubits (this is a slight modification of an [example](https://tensornetwork.readthedocs.io/en/latest/quantum_circuit.html) from the `tensornetwork` docs):
```
def apply_gate(qubits, gate_tensor, positions):
gate = tn.Node(gate_tensor)
assert len(gate.edges) == 2*len(positions), 'Gate size does not match positions provided.'
for i, p in enumerate(positions):
# Connect RIGHT legs of the gate to the active qubits
gate[i+len(positions)] ^ qubits[p]
        # Reassign the active qubits to the corresponding LEFT legs of the gate
qubits[p] = gate[i]
```
Importantly, here, in contrast to the official docs, we append the gate from the *left*, so that a sequence of application of some $G_1$ followed by $G_2$ is equivalent to the application of $G_2\cdot G_1$. Now there is one more subtlety. Previously we used matrix representation of $CNOT$ assuming that the uppermost qubit comes last in the tensor product. Now that we decided to turn this convention upside down our matrix representation of $CNOT$ must be $CNOT =|0\rangle\langle 0|\otimes \mathbb{1}+|1\rangle\langle 1|\otimes X$ or explicitly
```
CNOT = np.array([[1, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, 0, 1],
[0, 0, 1, 0]]).reshape(2,2,2,2)
```
With that we are ready to reconstruct our original circuit in a convenient way:
```
# The context manager `NodeCollection` is a bit of a magic trick
# which keeps track of all tensors in the network automatically.
all_nodes = []
with tn.NodeCollection(all_nodes):
# I do not know how to create 'abstract' edges in `tensornetworks`.
# Instead, I create an identity tensor and use its edges to apply new gates to.
id0 = tn.Node(np.identity(4).reshape(2,2,2,2))
qubits0 = id0.edges[2:4]
qubits = id0.edges[0:2]
apply_gate(qubits, X, [0])
apply_gate(qubits, Y, [1])
apply_gate(qubits, CNOT, [0,1])
```
Now let us check!
```
U_tn = tn.contractors.greedy(all_nodes, output_edge_order=qubits+qubits0).tensor.reshape(4,4)
U_reversed_qs = Operator(qc.reverse_bits()).data
np.allclose(U_tn, U_reversed_qs)
```
Wohoo, it worked! If that looked simple to you I'm happy. It took me several hours of debugging to finally match the two matrices. Just to make sure, let me conclude with a more complicated example.
```
qc3 = QuantumCircuit(3)
qc3.x(0)
qc3.cx(0, 1)
qc3.y(1)
qc3.x(2)
qc3.cx(2, 1)
qc3.y(2)
qc3.draw(output='mpl')
```
As you can see, constructing the tensor network analog now works more or less identically:
```
all_nodes = []
with tn.NodeCollection(all_nodes):
id0 = tn.Node(np.identity(8).reshape(2,2,2,2,2,2))
qubits0 = id0.edges[3:6]
qubits = id0.edges[0:3]
# The essential part
apply_gate(qubits, X, [0])
apply_gate(qubits, CNOT, [0, 1])
apply_gate(qubits, Y, [1])
apply_gate(qubits, X, [2])
apply_gate(qubits, CNOT, [2, 1])
apply_gate(qubits, Y, [2])
```
And now we compare:
```
U3_tn = tn.contractors.greedy(all_nodes, output_edge_order=qubits+qubits0).tensor.reshape(8,8)
U3_qs_reversed = Operator(qc3.reverse_bits()).data
np.allclose(U3_tn, U3_qs_reversed)
```
Alright, this resounding **True** is the best way to conclude that comes to mind. I owe many thanks to [Ilia Luchnikov](https://github.com/LuchnikovI) for the help with the `tensornetwork` library. Any questions are welcome in the comments!
# Effect of the sample size in cross-validation
In the previous notebook, we presented the general cross-validation framework
and how to assess if a predictive model is underfitting, overfitting, or
generalizing. Besides these aspects, it is also important to understand how
the different errors are influenced by the number of samples available.
In this notebook, we will illustrate this aspect by looking at the variability of
the different errors.
Let's first load the data and create the same model as in the previous
notebook.
```
from sklearn.datasets import fetch_california_housing
housing = fetch_california_housing(as_frame=True)
data, target = housing.data, housing.target
target *= 100 # rescale the target in k$
```
<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;">Note</p>
<p class="last">If you want a deeper overview regarding this dataset, you can refer to the
Appendix - Datasets description section at the end of this MOOC.</p>
</div>
```
from sklearn.tree import DecisionTreeRegressor
regressor = DecisionTreeRegressor()
```
## Learning curve
To understand the impact of the number of samples available for training on
the statistical performance of a predictive model, it is possible to
synthetically reduce the number of samples used to train the predictive model
and check the training and testing errors.
Therefore, we can vary the number of samples in the training set and repeat
the experiment. The training and testing scores can be plotted similarly to
the validation curve, but instead of varying a hyperparameter, we vary the
number of training samples. This curve is called the **learning curve**.
It gives information regarding the benefit of adding new training samples
to improve a model's statistical performance.
Let's compute the learning curve for a decision tree and vary the
proportion of the training set from 10% to 100%.
```
import numpy as np
train_sizes = np.linspace(0.1, 1.0, num=5, endpoint=True)
train_sizes
```
We will use a `ShuffleSplit` cross-validation to assess our predictive model.
```
from sklearn.model_selection import ShuffleSplit
cv = ShuffleSplit(n_splits=30, test_size=0.2)
```
Now, we are all set to carry out the experiment.
```
from sklearn.model_selection import learning_curve
results = learning_curve(
regressor, data, target, train_sizes=train_sizes, cv=cv,
scoring="neg_mean_absolute_error", n_jobs=2)
train_size, train_scores, test_scores = results[:3]
# Convert the scores into errors
train_errors, test_errors = -train_scores, -test_scores
```
Now, we can plot the learning curve.
```
import matplotlib.pyplot as plt
plt.errorbar(train_size, train_errors.mean(axis=1),
yerr=train_errors.std(axis=1), label="Training error")
plt.errorbar(train_size, test_errors.mean(axis=1),
yerr=test_errors.std(axis=1), label="Testing error")
plt.legend()
plt.xscale("log")
plt.xlabel("Number of samples in the training set")
plt.ylabel("Mean absolute error (k$)")
_ = plt.title("Learning curve for decision tree")
```
On this learning curve, we see that the more samples we add to the training
set, the lower the error becomes. With this curve, we are looking for the
plateau beyond which adding new samples no longer helps, or assessing the
potential gain of adding more samples to the training set.
For this dataset we notice that our decision tree model would really benefit
from additional datapoints to reduce the amount of over-fitting and hopefully
reduce the testing error even further.
## Summary
In the notebook, we learnt:
* the influence of the number of samples in a dataset, especially on the
variability of the errors reported when running the cross-validation;
* about the learning curve that is a visual representation of the capacity
of a model to improve by adding new samples.
Doc title: **Cross-Border E-Business Platform Charges Report Series (Shopee)**
Article notes: This is a series of documents introducing the charges of the major cross-border e-commerce platforms. It is meant as preparation for running a business on these platforms and is aimed at newcomers. This installment covers the Shopee platform.
Last modified date: 2019-08-06 19:32:15
# Brief Intro
Shopee is the e-commerce platform of SEA, a NYSE-listed company. It has been promoted aggressively in Southeast Asia since 2017 and has been the fastest-growing e-commerce platform in the region over the past two years. It currently ranks second in Southeast Asian e-commerce transaction volume, behind only Lazada.
Shopee covers Singapore, Malaysia, the Philippines, Taiwan, Indonesia, Thailand and Vietnam (largely the same markets as Lazada).
Shopee's seller back end is something of a hybrid of AliExpress and Lazada: like Lazada, each country/region is a separate sub-site; like AliExpress, products are managed centrally, store discounts and platform campaigns can be configured, and there is keyword advertising similar to AliExpress's pay-per-click ads.
# Charges
For Chinese cross-border sellers in 2019, Shopee charges no platform usage fee, no annual fee and no deposit. It uses a tiered commission scheme, charging a commission plus a payment handling fee on each order. New sellers are currently exempt from commission for their first three months.
## Commission Fees
Shopee's commission rates, effective from July 16, 2019, are as follows:

## Transaction Fees
Shopee's transaction fee rates are as follows:

## SLS Logistics Fees
Like Lazada, Shopee also has requirements on the logistics channels used, as shown below:

Shopee's recommended logistics channel is SLS (Shopee Logistics Service). There are currently four SLS consolidation warehouses in China: Shenzhen, Shanghai, Yiwu and Quanzhou.
The SLS cross-border rates are as follows (all based on a 0.5 kg parcel; since most Southeast Asian countries use zone-based postage, some sites show a range of possible shipping fees):
- Taiwan, NTD$75-85 (store pickup vs. home delivery)
- Malaysia, MYR$11.3-12.3
- Indonesia, IDR$70000-100000
- Philippines, PHP$374-424
- Singapore, SGD$6.55
- Thailand, THB$130-330
- Vietnam, VND$60000-85000
## Product Promotion Fees
Shopee's product promotion options include:
- Keyword advertising (Paid Ads)
- Store discounts and vouchers
- Platform campaigns
- Off-platform traffic and fan-base building
- Local-language customer service support (Thai, Indonesian and Vietnamese; free for new sellers for an initial period, after which some sites charge a 2% fee)
Since these costs are optional, they are not taken into account in the baseline pricing estimation in this article.
# Pricing Estimation
Shopee costs and fees can be split into the following parts:
(1) Fixed costs
Including: production cost, shipping, commission, transaction fee
(2) Variable costs
Including: advertising, local-language customer service fees
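Since the commission and the transaction fee are both charged as a percentage of the selling price, the break-even (zero-profit) price $P$ must satisfy $P = C_{\text{prod}} + C_{\text{domestic}} + C_{\text{SLS}} + (r_{\text{commission}} + r_{\text{fee}})\,P$, i.e.
$$P = \frac{C_{\text{prod}} + C_{\text{domestic}} + C_{\text{SLS}}}{1 - r_{\text{commission}} - r_{\text{fee}}}$$
where the production and domestic shipping costs are first converted into the local currency. This is the formula implemented in the code below.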
```
# Fee settings
# Production cost (RMB)
RMB_COP = 20
# Domestic shipping cost (RMB)
RMB_DLC = 3
# Malaysia SLS cross-border shipping fee and exchange rate
# (a pair of shoes is assumed to weigh about 0.5 kg; the highest shipping fee is used, same below; MYR)
MYR_SLS = 12.3
MYR_EXCHANGE_RATE = 1.65
# Indonesia SLS cross-border shipping fee and exchange rate (IDR)
IDR_SLS = 100000
IDR_EXCHANGE_RATE = 0.0005
# Philippines SLS cross-border shipping fee and exchange rate (PHP)
PHP_SLS = 424
PHP_EXCHANGE_RATE = 0.15
# Singapore SLS cross-border shipping fee and exchange rate (SGD)
SGD_SLS = 6.55
SGD_EXCHANGE_RATE = 5
# Thailand SLS cross-border shipping fee and exchange rate (THB)
THB_SLS = 330
THB_EXCHANGE_RATE = 0.2
# Vietnam SLS cross-border shipping fee and exchange rate (VND)
VND_SLS = 85000
VND_EXCHANGE_RATE = 0.0003
# Taiwan SLS cross-border shipping fee and exchange rate (NTD)
NTD_SLS = 85
NTD_EXCHANGE_RATE = 0.25
# Commission rate
CR_RATE = 0.06
# Transaction fee rate
PR_RATE = 0.02
# Break-even (zero-profit) price calculation
# Formula: cost price = production cost + domestic shipping + cross-border shipping + commission + transaction fee
print('Shopee estimation results:\n')
# Malaysia (MYR)
COST_MYR = round(((RMB_COP + RMB_DLC) / MYR_EXCHANGE_RATE + MYR_SLS) / (1 - CR_RATE - PR_RATE), 2)
print('Malaysia estimated cost price: MYR${:.2f}, approx. RMB ¥{:.2f}.\n'.format(COST_MYR, COST_MYR * MYR_EXCHANGE_RATE))
# Indonesia (IDR)
COST_IDR = round(((RMB_COP + RMB_DLC) / IDR_EXCHANGE_RATE + IDR_SLS) / (1 - CR_RATE - PR_RATE), 2)
print('Indonesia estimated cost price: IDR${:.2f}, approx. RMB ¥{:.2f}.\n'.format(COST_IDR, COST_IDR * IDR_EXCHANGE_RATE))
# Philippines (PHP)
COST_PHP = round(((RMB_COP + RMB_DLC) / PHP_EXCHANGE_RATE + PHP_SLS) / (1 - CR_RATE - PR_RATE), 2)
print('Philippines estimated cost price: PHP${:.2f}, approx. RMB ¥{:.2f}.\n'.format(COST_PHP, COST_PHP * PHP_EXCHANGE_RATE))
# Singapore (SGD)
COST_SGD = round(((RMB_COP + RMB_DLC) / SGD_EXCHANGE_RATE + SGD_SLS) / (1 - CR_RATE - PR_RATE), 2)
print('Singapore estimated cost price: SGD${:.2f}, approx. RMB ¥{:.2f}.\n'.format(COST_SGD, COST_SGD * SGD_EXCHANGE_RATE))
# Thailand (THB)
COST_THB = round(((RMB_COP + RMB_DLC) / THB_EXCHANGE_RATE + THB_SLS) / (1 - CR_RATE - PR_RATE), 2)
print('Thailand estimated cost price: THB${:.2f}, approx. RMB ¥{:.2f}.\n'.format(COST_THB, COST_THB * THB_EXCHANGE_RATE))
# Vietnam (VND)
COST_VND = round(((RMB_COP + RMB_DLC) / VND_EXCHANGE_RATE + VND_SLS) / (1 - CR_RATE - PR_RATE), 2)
print('Vietnam estimated cost price: VND${:.2f}, approx. RMB ¥{:.2f}.\n'.format(COST_VND, COST_VND * VND_EXCHANGE_RATE))
# Taiwan (NTD)
COST_NTD = round(((RMB_COP + RMB_DLC) / NTD_EXCHANGE_RATE + NTD_SLS) / (1 - CR_RATE - PR_RATE), 2)
print('Taiwan estimated cost price: NTD${:.2f}, approx. RMB ¥{:.2f}.\n'.format(COST_NTD, COST_NTD * NTD_EXCHANGE_RATE))
```
**Below are the results of the same estimation under identical conditions for Lazada:**
<img src='lazada_cost_analysis_result.png' align='left'>
<hr>
# Some Thoughts
(1) Shopee has been ramping up its investment in the Southeast Asian market over the past two years. The three-month commission-free period and a fairly sensible seller back end make it reasonably attractive to sellers.
(2) One thing the estimates above make clear is that, because Shopee is at a disadvantage on logistics, the total cost of free-shipping products is considerably higher on Shopee than on Lazada, so for the same free-shipping product Shopee cannot be priced below Lazada. Selling without free shipping (optionally with a free-shipping threshold) is therefore probably more realistic on Shopee.
(3) As for listing requirements, Shopee does not emphasize design requirements for product pages. However, its back end does offer "shop decoration" and similar features, and given how the Southeast Asian e-commerce market has developed over the past two years, investing in visual design for shops and product pages will inevitably become a trend.
(4) For cross-border logistics, Shopee's SLS works much like Lazada's LGS: once an order is received, the goods are packed and sent to the corresponding domestic consolidation warehouse. Both platforms also have warehouses in Shenzhen and Yiwu, two of China's most active cross-border trade hubs.
(5) For customer service, Shopee offers local-language support for the Thai, Vietnamese and Indonesian markets. New sellers can use it free of charge for a period, after which a service fee applies (currently 2% of the order value, the same as the transaction fee). For Singapore, the Philippines and Malaysia, Shopee recommends that buyers and sellers communicate in English, so the seller handles customer service; Taiwan orders use Chinese-language customer service.
# Appendix: Reference Links
- [Shopee official seller site](https://shopee.cn/)
- [Shopee seller registration](https://shopee.cn/seller)
[Back to catalog](e_biz_charges_report_catalog.ipynb)
# Bills and votes
This notebook aims to:
1. Generate CSV files of bill cosponsorship and Senate vote data for the 105th-115th Congresses, using the ProPublica Congress API.
```
import numpy as np
import pandas as pd
pd.options.mode.chained_assignment = None
import matplotlib.pyplot as plt
import requests
ALL_LAWS_PATH = '../data/all-votes/laws_20years.csv'
VOTES_115 = '../data/all-votes/votes_115.csv'
API_KEY = 'EaNt0652GV92i9U9Mlhs0ggCwLPyRB23bc6qAeyX'
all_laws = pd.read_csv(ALL_LAWS_PATH)
def get_cosponsors_idx(bills):
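    # Split each bill's raw cosponsor records into separate lists of member IDs by
    # chamber title (Sen. / Rep. / Del.), store them in three new columns, and drop
    # the raw 'cosponsors' column.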
bills['cosponsors_sen'] = ''
bills['cosponsors_rep'] = ''
bills['cosponsors_del'] = ''
for i in range(len(bills)):
cosponsors_list_sen = []
cosponsors_list_rep = []
cosponsors_list_del = []
for j in range(len(bills.cosponsors.iloc[i])):
if bills.cosponsors.iloc[i][j]['cosponsor_title'] == 'Sen.':
cosponsors_list_sen.append(bills.cosponsors.iloc[i][j]['cosponsor_id'])
elif bills.cosponsors.iloc[i][j]['cosponsor_title'] == 'Rep.':
cosponsors_list_rep.append(bills.cosponsors.iloc[i][j]['cosponsor_id'])
elif bills.cosponsors.iloc[i][j]['cosponsor_title'] == 'Del.':
cosponsors_list_del.append(bills.cosponsors.iloc[i][j]['cosponsor_id'])
bills.cosponsors_sen.iloc[i] = cosponsors_list_sen
bills.cosponsors_rep.iloc[i] = cosponsors_list_rep
bills.cosponsors_del.iloc[i] = cosponsors_list_del
bills = bills.drop('cosponsors', axis = 1)
return bills
def gen_bill_csv_file(congress_number):
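    # For every unique bill of the given congress, fetch cosponsorship details from
    # the ProPublica API, merge them onto the laws table, tidy a few columns and
    # write the result to ../data/bills/bills_<congress>.csv.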
bills = all_laws[(all_laws['congress'] == congress_number) & (all_laws['bill_api_uri'].notnull())]
bills_ = bills['bill_api_uri'].unique()
bills_sponsorship = []
for i in range(0, len(bills_)):
url = bills_[i][:-5] + '/cosponsors.json'
d = {}
try:
req = requests.get(url, headers={'X-API-Key': API_KEY}).json()
results = req['results'][0]
d['bill_url'] = bills_[i]
d['sponsor_party'] = results['sponsor_party']
d['sponsor_id'] = results['sponsor_id']
d['sponsor_title'] = results['sponsor_title']
d['committees'] = results['committees']
d['number_of_cosponsors'] = results['number_of_cosponsors']
d['cosponsors_by_party'] = results['cosponsors_by_party']
d['cosponsors'] = results['cosponsors']
bills_sponsorship.append(d)
        except Exception:
            pass
bills_sponsorship = pd.DataFrame(bills_sponsorship)
bills = bills.merge(bills_sponsorship, left_on = 'bill_api_uri', right_on = 'bill_url')
bills = bills.drop(['result','total_no', 'total_not_voting', 'total_yes', 'cosponsors_by_party','number_of_cosponsors','congress'], axis=1)
bills = bills[bills['sponsor_id'] !='']
bills = bills[bills['sponsor_title'] !='']
bills = bills[bills['sponsor_party'] != 'ID']
bills = get_cosponsors_idx(bills)
bills.to_csv('../data/bills/bills_{}.csv'.format(congress_number), index = False)
for congress_number in range(105, 116):
gen_bill_csv_file(congress_number);
```
## Get votes senate
```
def get_votes(congress_number):
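    # Download the individual member positions for every recorded vote of the given
    # congress and save them all to ../data/votes/votes_<congress>.csv.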
votes = list(all_laws[all_laws.congress == congress_number]['vote_uri'])
df = pd.DataFrame()
for i, url in enumerate(votes):
req = requests.get(url, headers={'X-API-Key': API_KEY}).json()
d = pd.DataFrame(req['results']['votes']['vote']['positions'])
d['vote_uri'] = url
df = pd.concat([df, d])
df.to_csv('../data/votes/votes_{}.csv'.format(congress_number), index=False)
for congress_number in range(105, 116):
get_votes(congress_number);
```
[](https://colab.research.google.com/github/eirasf/GCED-AA2/blob/main/lab6/lab6-parte1.ipynb)
# Lab 6: Convolutional neural networks - Part 1 - FF vs CNN
### Prerequisites. Install packages
For the first part of this Lab 6 we will need TensorFlow and TensorFlow-Datasets. As usual, we will also fix the random seed to ensure that the experiments are reproducible.
```
import tensorflow as tf
import tensorflow_datasets as tfds
#Set the seed so that the results can be reproduced
import os
import numpy as np
import random
seed=1234567
os.environ['PYTHONHASHSEED']=str(seed)
tf.random.set_seed(seed)
np.random.seed(seed)
random.seed(seed)
```
We also import the APIs we are going to use so that the code is more readable.
```
#Keras API, the Sequential model and the Dense layer
from tensorflow import keras
from keras.models import Sequential
from keras.layers import Dense
#For plotting
from matplotlib import pyplot
```
### Loading the dataset
This time we will work with the *mnist* image dataset, which contains handwritten digits.
```
import tensorflow_datasets as tfds
# The with_info=True parameter lets us access information about the dataset
# Load the mnist dataset. We will use the first 80% of the train split for ds_train and the remaining 20% for ds_val. ds_test will take the test split.
(ds_train, ds_test, ds_val), ds_info = tfds.load(..., with_info=True, as_supervised=True)
# That information includes the class names and the image dimensions
NUM_CLASSES = ds_info.features['label'].num_classes
nombres_clases = ds_info.features['label'].names
dimensiones = ds_info.features['image'].shape
print("Hay %d clases"%NUM_CLASSES)
# Para comprobar que se ha cargado tomamos un elemento y lo mostramos
ej_imagen, ej_etiqueta = next(iter(ds_train.take(1)))
pyplot.imshow(ej_imagen[:,:,0])
pyplot.xlabel(nombres_clases[ej_etiqueta.numpy()])
pyplot.show()
```
## Preprocessing the data
The label supplied by the dataset is numeric. However, we will predict a vector with as many components as there are classes, where each component estimates the probability that the example belongs to that class. We therefore have to convert the supplied label to one-hot encoding with [tf.one_hot](https://www.tensorflow.org/api_docs/python/tf/one_hot).
In addition, each colour channel of each pixel is given as an integer between 0 and 255. For training it is preferable to use values between 0 and 1, so we must scale the image by dividing its tensor by 255.
**HINT: the number of classes was stored earlier in the NUM_CLASSES variable**
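For reference, here is a tiny, self-contained illustration of `tf.one_hot` on a toy tensor (not part of the exercise itself):
```
import tensorflow as tf

# Three numeric labels converted to one-hot vectors with depth (number of classes) 5
labels = tf.constant([0, 2, 4])
print(tf.one_hot(labels, depth=5))
# rows: [1,0,0,0,0], [0,0,1,0,0], [0,0,0,0,1]
```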
```
dimensiones = ej_imagen.shape
## TODO: convert the labels to one-hot encoding.
ds_train = ds_train.map(lambda image, label: (tf.cast(image,tf.float32)/255.0, ...))
ds_test = ...
ds_val = ...
```
## Fitting the data with a feed-forward neural network
We will model the data with a feed-forward network with layers of 40, 25 and 16 units (all with ReLU activation). Bear in mind that the images are rank-3 tensors (their `shape` is (28,28,1)), while the input to our Dense layers must be a rank-1 tensor. To adapt the input, we will "flatten" the image tensors from `shape` (28,28,1) to `shape` (784) using a `Flatten` layer.
Finally, the output of our model must have as many components as there are classes in the dataset. Since we want the output to approximate the probability of each class, the usual choice would be a *softmax* activation, but in this case, for training efficiency, it is better to leave a linear output and later tell the loss function that the outputs come in that format.
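As a quick, exercise-independent illustration of what `Flatten` does to the image shape:
```
import tensorflow as tf

# A batch of one dummy 28x28x1 image becomes a batch of one 784-component vector
x = tf.zeros((1, 28, 28, 1))
print(tf.keras.layers.Flatten()(x).shape)  # (1, 784)
```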
```
# TODO - Create the model described above
model = ...
#Build the model and show its summary
model.build()
print(model.summary())
# VERIFICATION
assert model.count_params()==33011, 'Check your model architecture'
```
### Training the model
We will set the loss function, the optimizer (Adam with the default LR) and the metric used to evaluate the trained model's performance (categorical accuracy).
Since we are trying to predict one class among several, our loss function must be the [categorical cross-entropy](https://www.tensorflow.org/api_docs/python/tf/keras/losses/CategoricalCrossentropy). This is where we tell it that our network's output is not a set of values between 0 and 1, but real values that should be used as *logits* by the softmax function.
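A small standalone illustration of the `from_logits=True` behaviour (toy values only):
```
import tensorflow as tf

# With from_logits=True the loss applies softmax internally, so raw network
# outputs (unbounded real values) can be passed directly.
loss_fn = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
y_true = tf.constant([[0.0, 1.0, 0.0]])
logits = tf.constant([[1.0, 3.0, 0.5]])
print(float(loss_fn(y_true, logits)))
```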
```
#TODO - Compile the model with the parameters described above
model.compile(...)
```
As always, we will train the model with `model.fit`. Before that, we must tell our dataset to make batches of 128 elements. We will also have it shuffle the data using a buffer of 5 times the batch size. The shuffling must be done before batching, so that individual elements are shuffled rather than whole batches.
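A tiny, generic `tf.data` example of the shuffle-then-batch order (toy dataset, arbitrary values):
```
import tensorflow as tf

ds = tf.data.Dataset.range(10)
ds = ds.shuffle(buffer_size=5).batch(2)  # shuffle individual elements, then batch them
for batch in ds:
    print(batch.numpy())
```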
```
# TODO - Shuffle the datasets and split them into batches.
ds_train_batch = ...
ds_val_batch = ...
# TODO - Train the model. 16 epochs will be enough.
# Also have it report the loss and accuracy measurements on the validation set,
# so we can tell whether the model is overfitting.
history = model.fit(...)
# plot training history
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='val')
pyplot.legend()
pyplot.title('Loss')
pyplot.show()
# plot training history
pyplot.plot(history.history['categorical_accuracy'], label='train')
pyplot.plot(history.history['val_categorical_accuracy'], label='val')
pyplot.legend()
pyplot.title('Accuracy')
pyplot.show()
```
### Checking performance
We will use the test set to check our model's ability to generalize.
```
# TODO - Evaluate the model on the test set. You will first need to batch the test set.
print("Evaluation on the TEST set:")
ds_test_batch = ...
...
```
If everything went well, you should have obtained an accuracy on the test set comparable to those obtained on the training and validation sets, which indicates that the model generalizes well to other data from the original set, but... do we have a good model?
Let's check the model's robustness by applying small shifts to the original images. We will use a small model that applies a random translation of up to 10% of the image size to each of the test images, with the help of Keras's `RandomTranslation` preprocessing layer.
```
from tensorflow.keras.layers.experimental.preprocessing import RandomTranslation
translator = Sequential(
[RandomTranslation(height_factor=0.1, width_factor=0.1)]
)
# TODO - Apply the translator network to each image
ds_test_desplazado = ds_test_batch.map(lambda image, label: (...,label))
# TODO - Take one element of ds_test_desplazado and display it with pyplot
ej_imagen_desplazada = next(iter(ds_test_desplazado.take(1)))[0][0]
pyplot.imshow(ej_imagen_desplazada[:,:,0])
pyplot.xlabel(nombres_clases[ej_etiqueta.numpy()])
pyplot.title('shifted image')
pyplot.show()
```
Now let's check the accuracy on this new set of slightly shifted images.
```
print("Evaluación sobre el conjunto TEST DESPLAZADO:")
...
```
If everything went well, you should have seen that these small translations are enough to make the model's accuracy drop substantially. Feed-forward networks are not robust to this kind of perturbation.
## Comparison with a convolutional network
Now declare a convolutional model with the following architecture:
1. [2D convolution](https://keras.io/api/layers/convolution_layers/convolution2d/) with 8 filters and stride 3, with ReLU activation
1. [2D pooling](https://keras.io/api/layers/pooling_layers/max_pooling2d/) taking the maximum of each 2x2 group
1. [2D convolution](https://keras.io/api/layers/convolution_layers/convolution2d/) with 8 filters and stride 3, with ReLU activation
1. [2D pooling](https://keras.io/api/layers/pooling_layers/max_pooling2d/) taking the maximum of each 2x2 group
1. Dense layer (requires flattening first) with 32 units and ReLU activation
Run the following cell and repeat the compilation, training and subsequent checks to observe the difference.
```
# TODO
model = ...
#Build the model and show its summary
model.build()
print(model.summary())
# VERIFICATION
assert model.count_params()==7426, 'Check your model architecture'
```
### Reflections on the comparison
- What did you observe about the performance?
- How many parameters does the convolutional network have compared to the *feed-forward* one?
- How has the running time changed?
- Is this network more robust to the shifts?
```
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
```
# Pie Charts
```
plt.figure(figsize=(9,9))
tickets = [48 , 30 , 20 , 15]
labels = ['Low' , 'Medium' , 'High' , 'Critical']
colors = ['#8BC34A','#D4E157','#FFB300','#FF7043']
plt.pie (tickets , labels= labels , colors= colors , startangle=45)
plt.show()
```
#### Display percentage and actual value in Pie Chart
```
# Display percentage in Pie Chart using autopct='%1.1f%%'
plt.figure(figsize=(8,8))
tickets = [48 , 30 , 20 , 15]
labels = ['Low' , 'Medium' , 'High' , 'Critical']
colors = ['#7CB342','#C0CA33','#FFB300','#F57C00']
plt.pie (tickets , labels= labels , colors= colors , startangle=45 , shadow=True, autopct='%1.1f%%', explode=[0,0 , 0 , 0])
plt.show()
plt.figure(figsize=(8,8))
tickets = [48 , 30 , 20 , 15]
total = np.sum(tickets)
labels = ['Low' , 'Medium' , 'High' , 'Critical']
def val_per(x):
return '{:.2f}%\n({:.0f})'.format(x, total*x/100)
colors = ['#7CB342','#C0CA33','#FFB300','#F57C00']
plt.pie (tickets , labels= labels , colors= colors , startangle=45 , shadow=True, autopct=val_per, explode=[0,0 , 0 , 0])
plt.show()
```
#### Explode Slice in Pie Chart
```
#Explode 4th Slice
plt.figure(figsize=(8,8))
tickets = [48 , 30 , 20 , 15]
labels = ['Low' , 'Medium' , 'High' , 'Critical']
colors = ['#7CB342','#C0CA33','#FFB300','#F57C00']
# explode = [0,0,0,0.1] will explode the fourth slice
plt.pie (tickets , labels= labels , colors= colors , startangle=45 , autopct='%1.1f%%' , shadow=True, explode=[0,0 , 0 , 0.1])
plt.show()
#Explode 3rd & 4th Slice
plt.figure(figsize=(8,8))
tickets = [48 , 30 , 20 , 15]
label = ['Low' , 'Medium' , 'High' , 'Critical']
color = ['#7CB342','#C0CA33','#FFB300','#F57C00']
# explode = [0,0,0.1,0.1] will explode the 3rd & 4th slice
plt.pie (tickets , labels= label , colors= color , startangle=45 ,autopct='%1.1f%%', shadow=True, explode=[0,0 , 0.1 , 0.1])
plt.legend()
plt.show()
```
#### Display multiple pie plots in one figure
```
fig = plt.figure(figsize=(20,6))
tickets = [48 , 30 , 20 , 15]
priority = ['Low' , 'Medium' , 'High' , 'Critical']
status = ['Resolved' , 'Cancelled' , 'Pending' , 'Assigned']
company = ['IBM' , 'Microsoft', 'BMC' , 'Apple']
colors = ['#8BC34A','#D4E157','#FFB300','#FF7043']
plt.subplot(1,3,1)
plt.pie (tickets , labels= priority , colors= colors , startangle=45)
plt.subplot(1,3,2)
plt.pie (tickets , labels= status , colors= colors , startangle=45)
plt.subplot(1,3,3)
plt.pie (tickets , labels= company , colors= colors , startangle=45)
plt.show()
fig = plt.figure(figsize=(20,13))
tickets = [48 , 30 , 20 , 15]
priority = ['Low' , 'Medium' , 'High' , 'Critical']
status = ['Resolved' , 'Cancelled' , 'Pending' , 'Assigned']
company = ['IBM' , 'Microsoft', 'BMC' , 'Apple']
colors = ['#8BC34A','#D4E157','#FFB300','#FF7043']
plt.subplot(2,3,1)
plt.pie (tickets , labels= priority , colors= colors , startangle=45 , autopct='%1.1f%%')
plt.subplot(2,3,2)
plt.pie (tickets , labels= status , colors= colors , startangle=45 , autopct='%1.1f%%')
plt.subplot(2,3,3)
plt.pie (tickets , labels= company , colors= colors , startangle=45 , autopct='%1.1f%%')
plt.subplot(2,3,4)
plt.pie (tickets , labels= priority , colors= colors , startangle=45, autopct='%1.1f%%')
plt.subplot(2,3,5)
plt.pie (tickets , labels= status , colors= colors , startangle=45 ,autopct='%1.1f%%')
plt.subplot(2,3,6)
plt.pie (tickets , labels= company , colors= colors , startangle=45, autopct='%1.1f%%')
plt.show()
```
```
from mplsoccer.pitch import Pitch
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import math
import os
plt.style.use('ggplot')
%config InlineBackend.figure_format='retina'
pitch = Pitch(layout=(1, 2), figsize=(16, 9), orientation='vertical', view='half',
pad_left=-25, pad_right=-25, pad_bottom=-35,
label=True, axis=True,
goal_type='box')
def calculate_visible(x, y, pitch):
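    # Returns (start angle, visible angle) in degrees for a shot taken from (x, y):
    # the visible angle is the angle subtended by the goal mouth, and the start
    # angle is used later to position the drawn arc on the pitch.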
left_post, right_post = pitch.goal_right
d1 = max(abs(left_post[1] - y), abs(right_post[1] - y))
d2 = abs(pitch.right - x)
d3 = abs(pitch.center_width - y)
angle_start = math.degrees(math.atan2(d2, d1))
goal_width = abs(right_post - left_post)[1]
angle_deg = math.atan2(goal_width * d2 , (d2**2 + d3**2 - (goal_width / 2.) ** 2))
if angle_deg < 0:
angle_deg = math.pi + angle_deg
angle_deg = round(math.degrees(angle_deg), 1)
angle_start = round(angle_start, 1)
return angle_start, angle_deg
x, y = (100, 30)
x1, y1 = (108, 50)
pitch = Pitch(layout=(1, 2), figsize=(16, 9), orientation='vertical', view='half', #line_color='white',
pad_left=-25, pad_right=-25, pad_bottom=-35, #pitch_color='grass', stripe=True,
label=True, axis=True,
goal_type='box')
fig, axes = pitch.draw()
left_post, right_post = pitch.goal_right
mid_goal_y = pitch.center_width
arrowstyle = dict(arrowstyle="-|>", connectionstyle="arc3,rad=-0.1", ec="blue", fc='blue')
for ax in axes:
pitch.scatter([x, x1], [y, y1], ax=ax, s=400, marker='football', zorder=2)
pitch.annotate(f'o = distance \n to mid \n line {abs(mid_goal_y - y1)}',
(106, 45), ax=ax, fontsize=12, color='blue', ha='center', va='center')
pitch.annotate(f'a = distance \n to goal \n line = {abs(pitch.right - x1)}',
(111, 37), ax=ax, fontsize=12, color='blue', ha='center', va='center')
# first angle
pitch.lines(x, y, left_post[0], mid_goal_y,
lw=1, color='#3F3F3F', ax=axes[0], zorder=1.5, linestyle='--')
mid_angle1 = round(math.degrees(math.atan2(abs(mid_goal_y - y), abs(pitch.right - x))), 1)
arc3 = patches.Arc((mid_goal_y, pitch.right), 10, 10, angle=270, theta1=-mid_angle1, theta2=0,
ec='#3F3F3F', linewidth=2, fill=False, zorder=2)
axes[0].add_patch(arc3)
pitch.annotate(f'{mid_angle1}°',(113.5, 38.6), ax=axes[0], fontsize=12, color='#3F3F3F', ha='center', va='center')
mid_angle2 = round(math.degrees(math.atan2(abs(mid_goal_y - y1), abs(pitch.right - x1))), 1)
distance2 = round((abs(mid_goal_y - y1)**2 + abs(pitch.right - x1)**2)**0.5, 1)
arc4 = patches.Arc((mid_goal_y, pitch.right), 10, 10, angle=270, theta1=0, theta2=mid_angle2,
ec='blue', linewidth=2, fill=False, zorder=2)
axes[0].add_patch(arc4)
pitch.annotate(f'{mid_angle2}°',(116.5, 41.5), ax=axes[0], fontsize=12, color='blue', ha='center', va='center')
pitch.arrows([x1, x1, x1], [y1, y1, mid_goal_y], [left_post[0], x1, left_post[0]], [mid_goal_y, mid_goal_y, mid_goal_y],
width=2, color='blue', ax=axes[0], headlength=10, headwidth=10)
pitch.annotate(distance2, (113, 47.5), ax=axes[0], fontsize=12, color='blue', ha='center', va='center')
pitch.annotate(r'angle = $\tan^{-1}$(o/a)', xy=(117, 41.5), xytext=(119, 48),
arrowprops=arrowstyle, ax=axes[0], fontsize=12, color='blue', ha='center', va='center')
distance_annotation = r'distance = $\sqrt{o^\mathsf{2} + a^\mathsf{2}}$'
pitch.annotate(distance_annotation, xy=(113.5, 48), xytext=(117, 50),
arrowprops=arrowstyle, ax=axes[0], fontsize=12, color='blue', ha='center', va='center')
axes[0].set_title('Angle and distance to the mid-goal', fontsize=18, pad=10)
# second angle
pitch.lines([x, x], [y, y], [left_post[0], right_post[0]], [left_post[1], right_post[1]],
lw=1, ax=axes[1], color='#3F3F3F', zorder=1.5, linestyle='--')
pitch.arrows([x1, x1], [y1, y1], [left_post[0], right_post[0]], [left_post[1], right_post[1]], ax=axes[1],
width=2, color='blue', headlength=10, headwidth=10)
angle_start1, angle_deg1 = calculate_visible(x, y, pitch)
arc1 = patches.Arc((y, x), 10, 10, angle=0, theta1=angle_start1, theta2=angle_start1 + angle_deg1,
ec='#3F3F3F', linewidth=2, fill=False, zorder=2)
axes[1].add_patch(arc1)
pitch.annotate(f'{angle_deg1}°',(x+3, y+2), ax=axes[1], fontsize=12, ha='center', va='center')
angle_start2, angle_deg2 = calculate_visible(x1, y1, pitch)
arc2 = patches.Arc((y1, x1), 10, 10, angle=0, theta1= 180 - angle_deg2 - angle_start2,
theta2= 180 - angle_start2, ec='blue', linewidth=2, fill=False, zorder=2)
axes[1].add_patch(arc2)
pitch.annotate(f'{angle_deg2}°',(x1+3, y1-2.5), ax=axes[1], fontsize=12, color='blue', ha='center', va='center')
pitch.arrows([x1, x1], [y1, mid_goal_y], [x1, left_post[0]], [mid_goal_y, mid_goal_y],
width=2, color='blue', ax=axes[1], headlength=10, headwidth=10)
angle_annotation = r'angle = $tan^{-1}\frac{goal width * o}{o^2 + a^2 - (goal width/2)^2}$'
pitch.annotate(angle_annotation, xy=(110.5, 47), xytext=(100, 45),
arrowprops=arrowstyle, ax=axes[1], fontsize=16, color='blue', ha='center', va='center')
axes[1].set_title('Visible angle to the goal posts', fontsize=18, pad=10)
fig.savefig(os.path.join('..', 'figures', '03_angle_and_distance.png'), bbox_inches = 'tight', pad_inches = 0.1)
```
# 06 Changing a model
The first thing you might have noticed about the previous notebook is that we didn't define the model. Behind the scenes, a basic model was created, encapsulated inside a simulation, and run by the solver. The reason for this is that, currently, the distributed solvers handle functions that create objects inside worker processes better than being passed objects directly. This adds a little complication to the solving process, but hopefully not much. This notebook demonstrates how to change the model by passing a `sim_func`.
```
try:
import liionpack as lp
except:
!pip install -q git+https://github.com/pybamm-team/liionpack.git@main
import liionpack as lp
import pybamm
import numpy as np
```
Liionpack includes a couple of pre-defined `sim_func` functions, so let's check them out first.
The idea is that the function is called behind the scenes by the solvers: it receives the `parameter_values` dictionary and returns a simulation that is ready to build and use.
```
import inspect
lines = inspect.getsource(lp.basic_simulation)
print(lines)
```
The basic ingredients for a simulation in PyBaMM are the model, parameter_values and solver. We can optionally add events to the liionpack simulation by explicitly adding them as variables using the helper function `add_events_to_model`. We also need to use the casadi solver; other modes have not currently been tested, but "safe" mode is known to work.
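For illustration, here is a minimal sketch of what a custom `sim_func` could look like. The hypothetical `my_simulation` below simply follows the pattern and signature of `lp.basic_simulation` shown above (take `parameter_values`, return a `pybamm.Simulation`); the choice of the SPMe model and the exact helper calls are assumptions for the sketch, not a prescribed recipe.
```
import pybamm
import liionpack as lp


def my_simulation(parameter_values=None):
    # Hypothetical sim_func: swap in whichever PyBaMM model you want the pack to use
    model = pybamm.lithium_ion.SPMe()
    # Optionally expose the variables liionpack uses as events (helper mentioned above)
    model = lp.add_events_to_model(model)
    # "safe" mode is the casadi mode known to work with the distributed solvers
    solver = pybamm.CasadiSolver(mode="safe")
    return pybamm.Simulation(
        model=model,
        parameter_values=parameter_values,
        solver=solver,
    )
```
Such a function would then be passed to `lp.solve(..., sim_func=my_simulation, ...)` in the same way `lp.thermal_simulation` is used further down.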
Let's run `lp.basic_simulation` to get a simulation and solve it outside of the normal liionpack solving process, to get an idea of the model output for a single cell.
```
sim = lp.basic_simulation()
sim.solve([0, 3600])
sim.plot()
```
Another pre-defined `sim_func` is the `thermal_simulation`
```
lines = inspect.getsource(lp.thermal_simulation)
print(lines)
```
The thermal model does not have to be run with heat transfer coefficients as inputs but this is the easiest way to alter cooling conditions within the pack.
```
tSim = lp.thermal_simulation()
htcs = [5.0, 10.0, 20.0]
sols = []
for htc in htcs:
sols.append(
tSim.solve(
[0, 3600], inputs={"Total heat transfer coefficient [W.m-2.K-1]": htc}
)
)
pybamm.QuickPlot(
solutions=sols,
output_variables=["Terminal voltage [V]", "Volume-averaged cell temperature [K]"],
labels=htcs,
).dynamic_plot()
```
To use the bespoke functions in liionpack let's first create a netlist and then solve it.
```
netlist = lp.setup_circuit(Np=10, Ns=1, Rb=1e-3, Rc=1e-2, Ri=1e-2, V=3.6, I=20)
parameter_values = pybamm.ParameterValues(chemistry=pybamm.parameter_sets.Chen2020)
experiment = pybamm.Experiment(
operating_conditions=["Discharge at 20 A for 1 hour"], period="1 minute"
)
output = lp.solve(
netlist=netlist,
parameter_values=parameter_values,
experiment=experiment,
sim_func=lp.thermal_simulation,
output_variables=["Volume-averaged cell temperature [K]"],
inputs={"Total heat transfer coefficient [W.m-2.K-1]": np.ones(10) * 10.0},
)
```
The input array must be the same length as the number of cells in the pack. Here we have solved with every battery having the same heat transfer coefficient, so the thermal differences arise from the different cell currents.
```
lp.plot_output(output)
output = lp.solve(
netlist=netlist,
parameter_values=parameter_values,
experiment=experiment,
sim_func=lp.thermal_simulation,
output_variables=["Volume-averaged cell temperature [K]"],
inputs={
"Total heat transfer coefficient [W.m-2.K-1]": np.linspace(1, 10, 10) * 10.0
},
)
```
We can easily prescribe a different heat transfer coefficient for each cell and see the results.
```
lp.plot_output(output)
```
How have the salaries of MLB Hall of Famers changed over time? I will use Sean Lahman's Baseball Database to explore the following questions:
1. Has the pay of Hall of Famers, when adjusted for inflation, increased over time?
2. How does the pay of Hall of Famers evolve over their careers, and has this changed over time?
3. Who is the highest paid Hall of Famer of all-time?
4. Who is the lowest paid Hall of Famer of all-time?
## 1. Has the pay of Hall of Famers, when adjusted for inflation, increased over time?
The Lahman database only has salary data going back to the 1970s, but baseball-reference.com has much more. I obtained this data for a large number of players. Here's what it looks like:
```
import pandas as pd
import numpy as np
salaries = pd.read_csv('./../data/Salaries/salary.csv')
salaries.head()
salaries.shape
```
How many unique players?
```
len(salaries.bbrefID.unique())
```
Now let's establish a connection to the Lahman Database. I have this database loaded into a psql database on an AWS instance. I'll connect to it here.
```
from sqlalchemy import create_engine
import getpass
passw = getpass.getpass("Password Please: ")
cnx = create_engine('postgresql://adam:%s@52.23.226.111:5432/baseball'%passw)
```
Here are the tables in the database:
```
print ', '.join(pd.read_sql_query("select table_name from information_schema.tables where table_schema = 'public';",cnx).table_name.tolist())
```
Let's take a look at the hall of fame table.
```
hall_of_fame = pd.read_sql_query('select * from hall_of_fame;',cnx)
hall_of_fame.head()
hall_of_fame.votedby.value_counts()
hall_of_fame.category.value_counts()
```
I'll only consider Players. Also, I'll exclude players from the Negro League since I do not have salary data on them.
I'll make a python set of all the player ids of these hall of famers.
```
hall = set(hall_of_fame[(hall_of_fame.inducted=='Y') &
(hall_of_fame.category=='Player') &
(hall_of_fame.votedby!='Negro League')].player_id)
hall.discard(u'griffcl01') ## he was not inducted as a player: http://www.baseball-reference.com/players/g/griffcl01.shtml
len(hall)
```
Now let's filter the salary table to just Hall of Famers. We need to first match the bbref IDs to the player_id that the Lahman database uses.
```
player = pd.read_sql_query('select * from player;',cnx)
bbid_to_pid = {b:p for b,p in zip(player.bbref_id,player.player_id)}
pid_to_name = {p:(fn,ln) for p,fn,ln in zip(player.player_id,player.name_first,player.name_last)}
salaries.insert(0,'player_id',[bbid_to_pid[bbid] for bbid in salaries.bbrefID])
salaries = salaries[salaries.player_id.isin(hall)].reset_index(drop=True)
salaries.head(3)
salaries.shape
```
Let's see if we have data on all 225 Hall of Famers...
```
len(salaries.player_id.unique())
```
Ok, that's not bad. Let's see how many null values there are for salary.
```
sum(salaries.Salary.isnull())
```
Yikes, that's a lot. We'll have to figure out a smart way to deal with that.
Let's see some of the oldest data.
```
salaries.sort_values('Year').head(7)
```
Some of the null values are a result of the fact that, in years when a player played on multiple teams, one of the entries has a null salary. After converting Salary to a number, I'll group by player_id and Year and see how many truly missing Salary entries we have.
```
salaries.Salary = pd.to_numeric(salaries.Salary.str.replace('$','').str.replace(',',''))
salaries.Year = salaries.Year.astype(int)
salaries.head(3)
unique_player_years = salaries.groupby(['player_id','Year']).sum().shape[0]
null_player_years = sum(salaries.groupby(['player_id','Year']).sum().Salary.isnull())
print unique_player_years, null_player_years, float(null_player_years)/unique_player_years
```
Still 39% of the data is missing. Eventually I will impute this data and try to do it in a way that makes sense. First, let's start visualizing the data a little bit. Let's aggregate the mean salary by year.
```
counts = salaries.dropna().groupby('Year',as_index=False).count()[['Year','Salary']]
counts.head(5)
counts.tail(3)
```
To avoid too noisy a picture, I'm going to restrict the mean salaries to years in which we have at least 4 players' salaries.
```
mean_salaries = salaries.dropna().groupby('Year',as_index=False).mean()[counts.Salary>3]
mean_salaries.head(3)
```
I'll plot the average HOF salary across time. Actually, I'll plot the log of salary since that will make it easier to visualize.
```
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
plt.plot(mean_salaries.Year,np.log10(mean_salaries.Salary));
plt.xlabel('Year');
plt.ylabel('Log of Average Salary');
plt.title('Log of Average Salary of Hall of Famers');
```
OK, this is to be expected: the average salary has been increasing through time. It is also interesting to see that the average salary did not rise from about 1926 to about 1940 and then dropped around World War 2, which matches expectations.
Let's plot this next to the Consumer Price Index to see it in some context. Since the CPI did not start being officially tracked until 1913, I obtained some CPI data from [here](http://www.econ.yale.edu/~shiller/data.htm). This is the data used by Robert Shiller in *Irrational Exuberance*. It happens to start exactly in 1871, which is the first year of MLB. I'll read this in and then do the necessary steps to get it plotted next to the average salary.
```
cpi = pd.read_csv('./../data/Salaries/ie_data.csv')[['Date','CPI']]
cpi.head(3)
```
I'll just use March of every year as I don't need month-by-month data.
```
cpi = cpi[cpi.Date.astype(str).str.endswith('03')].reset_index(drop=True)
cpi.Date = cpi.Date.astype(int)
cpi.columns = ['Year','CPI']
cpi.head(3)
```
Now I want to see how the salary of a base year compares to the rest of the years if we adjust for inflation. I'll use 1881 as the base year for now since it is the first year we have non-null data for. I'll calculate how much 1,372 dollars in 1881 corresponds to in each of the other years. Then I'll plot the result.
```
cpi.insert(len(cpi.columns),'Base-1881',cpi.CPI.values/cpi[cpi.Year==1881].CPI.values)
cpi.head(4)
adjusted_base_salary = pd.merge(cpi,mean_salaries,on='Year')
adjusted_base_salary['1881_Salary_Adjusted'] = adjusted_base_salary.iloc[0].Salary*adjusted_base_salary['Base-1881']
adjusted_base_salary.head(4)
plt.plot(adjusted_base_salary.Year,np.log10(adjusted_base_salary.Salary),label='Average Salary');
plt.plot(adjusted_base_salary.Year,np.log10(adjusted_base_salary['1881_Salary_Adjusted']),label='CPI');
plt.plot()
plt.xlabel('Year');
plt.ylabel('Log Dollars');
plt.title('Log of Average Salary Versus Log CPI');
plt.legend();
```
Ok, so we can see quite clearly that the average salary of hall of famers has outpaced the rate of inflation. Let's see this one other way. Let's put all the average salaries in 2009 dollars (the last year in this data) and then plot the average through time.
```
adjusted_base_salary.insert(3,'Base-2009',adjusted_base_salary.iloc[-1].CPI/adjusted_base_salary.CPI)
adjusted_base_salary.head(2)
adjusted_base_salary['Salary_in_2009_dollars'] = adjusted_base_salary.Salary * adjusted_base_salary['Base-2009']
adjusted_base_salary.head(2)
plt.plot(adjusted_base_salary.Year,np.log10(adjusted_base_salary.Salary_in_2009_dollars),label='Average Salary');
plt.plot()
plt.xlabel('Year');
plt.ylabel('Log Dollars');
plt.title('Log of Average Salary in 2009 Dollars');
plt.legend();
```
Since log10 of 10 million is 7, this corresponds to Hall of Famers making around 10 million dollars a year on average over their careers in the mid 2000s. Back before the turn of the 20th century, Hall of Fame players were only making between 30 and 100 thousand dollars in 2009 dollars.
Salaries have increased tremendously over the past 40 years. Hall-of-Fame-caliber players now average 10 times more per year over the course of their careers than Hank Aaron made at the peak of his earning power.
I feel as though I have satisfactorily answered the first of my four driving questions. Now on to the rest.
## 2. How does the pay of Hall of Famers evolve over their careers, and has this changed over time?
We would like to impute the missing data. Here's my plan for doing so: compute the ratios between the salaries a player earned in different seasons of their career, average these ratios across all players to get a typical career earnings trajectory, and then use that trajectory to impute the missing values.
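In symbols (my notation, just to pin the plan down): for a player with 2010-dollar salaries $s_i$ and $s_j$ observed in career years $i$ and $j$, collect the ratios $r_{ij} = s_i / s_j$ across all players, fit a smooth curve $\hat{r}(\cdot)$ to the average ratio relative to the first season, and impute a missing $s_i$ as the mean of $\frac{\hat{r}(i)}{\hat{r}(j)}\,s_j$ over that player's observed seasons $j$.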
Because the common career trajectory might be changing through time, I'll bin the data into 6 bins like this:
1. 1871-1899
2. 1900-1919
3. 1920-1939
4. 1940-1959
5. 1960-1979
6. 1980-2010
First, we will have to drop the players who have all missing values for their salaries since we have no info on them.
```
players_to_drop = salaries.groupby('player_id',as_index=False).count()['player_id'][(salaries.groupby('player_id').count().Salary==0).values]
players_to_drop
salaries = salaries[~salaries.player_id.isin(players_to_drop)].reset_index(drop=True)
```
Next, let me insert a column for the year of the career and adjust all the salaries for inflation.
```
salaries.insert(3,'Year_of_career',np.zeros(len(salaries)))
for bbref in pd.unique(salaries.bbrefID):
salaries.ix[salaries.bbrefID==bbref,'Year_of_career'] = range(1,sum(salaries.bbrefID==bbref)+1)
cpi.insert(len(cpi.columns),'Base-2010',cpi[cpi.Year==2010].CPI.values/cpi.CPI.values)
year_to_base_2010 = {y:b for y,b in zip(cpi.Year,cpi['Base-2010'])}
salaries.insert(len(salaries.columns),'Salary-2010',[year_to_base_2010[y]*s for y,s in zip(salaries.Year,salaries.Salary)])
salaries.head(3)
```
Now I'm going to drop the duplicate player-Year combinations. I need to make sure to drop only the null row and not the non-null row, so first I'll sort by player, Year and Salary and then drop the duplicates.
```
salaries = salaries.sort_values(['player_id','Year','Salary'])
salaries = salaries.drop_duplicates(subset=['player_id','Year'],keep='first')
```
Now I'm going to calculate, for every pair of seasons in each player's career, the ratio of the two seasons' salaries (in 2010 dollars), wherever both are available. In particular, the first column of this matrix holds each season's ratio to the first-year salary.
```
max_seasons = salaries.Year_of_career.max().astype(int)
A = pd.DataFrame({'%d' % i : [[] for _ in range(max_seasons)] for i in range(1,max_seasons+1)})[['%d' % i for i in range(1,max_seasons+1)]]
for player_id,df in salaries.groupby('player_id'):
for year1 in df.Year_of_career:
for year2 in df.Year_of_career:
ratio = df[df.Year_of_career==year1]['Salary-2010'].values/df[df.Year_of_career==year2]['Salary-2010'].values
if np.isnan(ratio):
continue
A.iloc[int(year1)-1,int(year2)-1].append(ratio[0])
```
Time to plot the data.
```
x,y,w = [],[],[]
for u,arr in enumerate(A['1']):
if len(arr)<=3:
continue
else:
x.append(u+1)
y.append(np.mean(arr))
w.append(1/np.std(arr) if np.std(arr)!=0 else 1)
from scipy.interpolate import InterpolatedUnivariateSpline
s = InterpolatedUnivariateSpline(x, y, w, k=1)
plt.scatter(x,y);
plt.plot(x,s(x));
plt.title('Average Hall of Famer Earning Trajectory')
plt.xlabel('Year of Career')
plt.ylabel('Ratio to First Year Salary');
```
Now I'll use this average trajectory to impute all the missing salary data. For each missing season, I'll produce an estimate from every season of that player's salary that is available, and then take the average of those estimates.
```
for player_id,df in salaries.groupby('player_id'):
for year1 in df.Year_of_career:
if np.isnan(df[df.Year_of_career==year1]['Salary-2010'].values[0]):
impute = []
for year2 in df.Year_of_career:
if np.isnan(df[df.Year_of_career==year2]['Salary-2010'].values[0]):
continue
else:
impute.append(s(year1)/s(year2) * df[df.Year_of_career==year2]['Salary-2010'].values[0])
salaries.loc[(salaries.player_id==player_id) & (salaries.Year_of_career==year1),'Salary-2010'] = np.mean(impute)
sum(salaries['Salary-2010'].isnull())
```
Yay! No more nulls. Now let's bin the data into our 6 bins and then visualize the career earnings trajectories for each bin.
```
salaries.insert(len(salaries.columns),'Bin_1',salaries.Year<1900)
salaries.insert(len(salaries.columns),'Bin_2',np.logical_and(salaries.Year>=1900,salaries.Year<1920))
salaries.insert(len(salaries.columns),'Bin_3',np.logical_and(salaries.Year>=1920,salaries.Year<1940))
salaries.insert(len(salaries.columns),'Bin_4',np.logical_and(salaries.Year>=1940,salaries.Year<1960))
salaries.insert(len(salaries.columns),'Bin_5',np.logical_and(salaries.Year>=1960,salaries.Year<1980))
salaries.insert(len(salaries.columns),'Bin_6',salaries.Year>1980)
for b in range(1,7):
base_salary = salaries[salaries['Bin_%d' % b]].groupby('Year_of_career',as_index=False).mean().iloc[0]['Salary-2010']
x = salaries[salaries['Bin_%d' % b]].groupby('Year_of_career',as_index=False).mean().Year_of_career
y = salaries[salaries['Bin_%d' % b]].groupby('Year_of_career',as_index=False).mean()['Salary-2010']/base_salary
plt.plot(x,y,label='Bin %d' % b)
plt.legend();
plt.xlabel('Year of Career')
plt.ylabel("Ratio to First Year's Salary")
plt.title('Career Earnings Trajectory Across Six Time Periods');
```
So not only are Hall of Famers making more money than ever, the ratio of their salary during the peak of their careers to their rookie salary is higher as well.
## 3. Who is the highest paid Hall of Famer of all-time?
Who is the highest paid Hall of Famer of all time? Well, if what I've seen so far has taught me anything, it's that the pay of Hall of Famers has changed substantially throughout the history of baseball. How to answer this question can definitely be debated, but I think players should be compared to their peers to control for aspects of the game changing over time.
To answer this question I will look at two metrics:
* highest single season of pay
* highest average pay
I will take a nearest neighbor approach to this question. My X will have two features: the year a player's career started and the year it ended. For each player I'll find his k nearest neighbors and compare their average metric to the player's own metric. The players with the largest positive differences I'll take to be the highest paid.
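Concretely, this is just what the uniform-weight kNN regression below computes: with $N_k(i)$ the $k$ nearest neighbors of player $i$ in (first season, last season) space and $p_j$ the pay metric of player $j$, each player is scored by

$$d_i = p_i - \frac{1}{k}\sum_{j \in N_k(i)} p_j.$$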
First I'll make a nice dataframe with player_id, first season of career, last season of career, highest single season pay and average pay.
```
first_season = {pl:yr for pl,yr in zip(salaries.groupby('player_id').min()['Year'].index,salaries.groupby('player_id').min()['Year'])}
last_season = {pl:yr for pl,yr in zip(salaries.groupby('player_id').max()['Year'].index,salaries.groupby('player_id').max()['Year'])}
highest_season_pay = {pl:pa for pl,pa in zip(salaries.groupby('player_id').max()['Salary-2010'].index,salaries.groupby('player_id').max()['Salary-2010'])}
ave_pay = {pl:pa for pl,pa in zip(salaries.groupby('player_id').mean()['Salary-2010'].index,salaries.groupby('player_id').mean()['Salary-2010'])}
salaries_new = pd.DataFrame({'player_id':pd.unique(salaries.player_id),'first_season':[first_season[p] for p in pd.unique(salaries.player_id)],'last_season':[last_season[p] for p in pd.unique(salaries.player_id)],'highest_season_pay':[highest_season_pay[p] for p in pd.unique(salaries.player_id)],'ave_pay':[ave_pay[p] for p in pd.unique(salaries.player_id)]})
salaries_new = salaries_new[['player_id','first_season','last_season','highest_season_pay','ave_pay']]
salaries_new.head(3)
```
Let's try k equals 8 nearest neighbors first.
```
from sklearn.neighbors import KNeighborsRegressor
knn = KNeighborsRegressor(n_neighbors=8, weights='uniform')
d_larg = {}
for player_id in pd.unique(salaries_new.player_id):
X = salaries_new[salaries_new.player_id!=player_id].iloc[:,1:3].values
y = salaries_new[salaries_new.player_id!=player_id].iloc[:,-2]
knn.fit(X,y)
d_larg[player_id] = (salaries_new[salaries_new.player_id==player_id].iloc[:,-2].values - knn.predict(salaries_new[salaries_new.player_id==player_id].iloc[:,1:3].values))[0]
```
Top 5 Players:
```
for key in sorted(d_larg,key=d_larg.get,reverse=True)[:5]:
    print(key, ' '.join(pid_to_name[key]), d_larg[key])
```
Let's try k equals 12 also.
```
knn = KNeighborsRegressor(n_neighbors=12, weights='uniform')
d_larg = {}
for player_id in pd.unique(salaries_new.player_id):
X = salaries_new[salaries_new.player_id!=player_id].iloc[:,1:3].values
y = salaries_new[salaries_new.player_id!=player_id].iloc[:,-2]
knn.fit(X,y)
d_larg[player_id] = (salaries_new[salaries_new.player_id==player_id].iloc[:,-2].values - knn.predict(salaries_new[salaries_new.player_id==player_id].iloc[:,1:3].values))[0]
for key in sorted(d_larg,key=d_larg.get,reverse=True)[:5]:
    print(key, ' '.join(pid_to_name[key]), d_larg[key])
```
Seems pretty robust to the choice of k. This metric really favors pitchers. Now let's look at the average pay metric, using k = 10 this time.
```
knn = KNeighborsRegressor(n_neighbors=10, weights='uniform')
d_ave = {}
for player_id in pd.unique(salaries_new.player_id):
X = salaries_new[salaries_new.player_id!=player_id].iloc[:,1:3].values
y = salaries_new[salaries_new.player_id!=player_id].iloc[:,-1]
knn.fit(X,y)
d_ave[player_id] = (salaries_new[salaries_new.player_id==player_id].iloc[:,-1].values - knn.predict(salaries_new[salaries_new.player_id==player_id].iloc[:,1:3].values))[0]
for key in sorted(d_ave,key=d_ave.get,reverse=True)[:5]:
    print(key, ' '.join(pid_to_name[key]), d_ave[key])
```
According to this analysis, Pedro Martinez is the highest paid Hall of Famer of all time.
## 4. Who is the lowest paid Hall of Famer of all-time?
I'll conclude by showing the lowest paid Hall of Famers of all time by both metrics. For brevity's sake I'll just reuse the dictionaries computed above (k = 12 for highest single-season pay, k = 10 for average pay).
```
for key in sorted(d_larg,key=d_larg.get)[:5]:
    print(key, ' '.join(pid_to_name[key]), d_larg[key])
for key in sorted(d_ave,key=d_ave.get)[:5]:
    print(key, ' '.join(pid_to_name[key]), d_ave[key])
```
According to this analysis, either Craig Biggio or Roberto Alomar is the lowest paid Hall of Famer of all time, depending on which metric you use.
# Lab 12
**SID: 11912725**
**Name: 周民涛**
```
import matplotlib.pyplot as plt
import numpy as np
map_matrix = np.load("lab12_map_matrix.npy")
plt.imshow(map_matrix)
plt.show()
map_matrix
# map_matrix
# map_matrix.shape # (20,20,3)
height = map_matrix.shape[0]
width = map_matrix.shape[1]
start_rgb = (255, 255, 255)
obstacle_rgb = (0, 30, 0)
goal_rgb = (255, 0, 0)
start_indices = np.argwhere(np.all(map_matrix == start_rgb, axis=-1))
obstacle_indices = np.argwhere(np.all(map_matrix == obstacle_rgb, axis=-1))
goal_indices = np.argwhere(np.all(map_matrix == goal_rgb, axis=-1))
dx = [0, 1, 0, -1]
dy = [1, 0, -1, 0]
actions = [0, 1, 2, 3]
states = [(x, y) for x in range(height) for y in range(width)]
gamma = 0.8
theta = 0.0001
reward_matrix = np.load('lab12_reward_matrix.npy')
plt.imshow(reward_matrix)
plt.show()
# reward_matrix
reward_matrix.shape # (20,20)
v = np.zeros_like(reward_matrix)
policy = np.zeros_like(reward_matrix, dtype=int)
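# Greedy sweep: improve the policy at every state, then back up V under the
# improved policy, repeating until the largest value change falls below theta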
while True:
delta = 0
for s in states:
best_action_value = -np.inf
best_action = -1
x, y = s
for action in actions:
new_x = x + dx[action]
new_y = y + dy[action]
if 0 <= new_x < height and 0 <= new_y < width:
value = reward_matrix[x][y] + gamma * v[new_x, new_y]
if best_action_value < value:
best_action_value = value
best_action = action
action = policy[x][y]
new_x = x + dx[action]
new_y = y + dy[action]
if 0 <= new_x < height and 0 <= new_y < width and best_action_value <= reward_matrix[x][y] + gamma * v[
new_x, new_y]:
continue
policy[x][y] = best_action
if delta < np.abs(best_action_value - v[s]):
delta = np.abs(best_action_value - v[s])
for s in states:
x, y = s
action = policy[x][y]
new_x = x + dx[action]
new_y = y + dy[action]
if 0 <= new_x < height and 0 <= new_y < width:
v[x][y] = reward_matrix[x][y] + gamma * v[new_x, new_y]
else:
v[x][y] = -np.inf
if delta < theta:
break
next_index = list(start_indices[0])
while next_index not in goal_indices.tolist():
map_matrix[next_index[0], next_index[1], :] = 255
action = policy[next_index[0], next_index[1]]
next_index[0] += dx[action]
next_index[1] += dy[action]
# print the shortest path
plt.imshow(map_matrix)
plt.show()
```
## Questions
1. In Reinforcement Learning (RL), the problem to be solved is described as a Markov Decision Process (MDP). Theoretical results in RL rely on the MDP description being a correct match to the problem. If your problem is well described as an MDP, then RL may be a good framework for finding solutions (the backup rule the code above relies on is written out after this list).
2. The two required properties of dynamic programming are:
* Optimal substructure: optimal solution of the sub-problem can be used to solve the overall problem.
* Overlapping sub-problems: sub-problems recur many times. Solutions of sub-problems can be cached and reused.
When the problem doesn't satisfy these conditions, dynamic programming methods perform poorly.
3. Dynamic programming methods can be applied if the conditions from the second question are satisfied and the problem is a Markov process.
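For reference, the update the loop above is working toward is the deterministic Bellman optimality backup (the reward here depends only on the current cell):

$$V(s) \leftarrow \max_{a}\,\big[\,R(s) + \gamma\, V(s'_a)\,\big], \qquad \pi(s) \leftarrow \arg\max_{a}\,\big[\,R(s) + \gamma\, V(s'_a)\,\big],$$

where $s'_a$ is the cell reached by taking action $a$ from $s$, and iteration stops once the largest change in $V$ falls below $\theta$.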
# Unsupervised Learning
Install/update seaborn by uncommenting the following cell (make sure to restart the kernel after installation is complete):
```
#!conda install seaborn
import matplotlib.pyplot as plt
import seaborn as sns
# the following line gets the bucket name attached to our cluster
bucket = spark._jsc.hadoopConfiguration().get("fs.gs.system.bucket")
# specifying the path to our bucket where the data is located (no need to edit this path anymore)
data = "gs://" + bucket + "/notebooks/jupyter/data/"
print(data)
df = spark.read.format("csv")\
.option("header", "true")\
.option("inferSchema", "true")\
.load(data + "iris.csv")\
.coalesce(5)
df = df.drop('_c0')
df.cache()
df.show(1)
df.printSchema()
print("This datasets consists of {} rows.".format(df.count()))
sample_df = df.sample(fraction=0.6, seed=843)
sample_df = sample_df.toPandas()
sns.pairplot(sample_df, hue='species');
```
## Principal Component Analysis - PCA
### Unsupervised learning example: Iris dimensionality reduction
As an example of an unsupervised learning problem, let's take a look at reducing the dimensionality of the Iris data so that it is more easily visualizable. Recall that the Iris data is four dimensional: there are four features recorded for each sample. We will reduce it to two dimensions, but first we need to assemble those columns into a single `features` vector. Since this is unsupervised learning, we won't need a label:
```
from pyspark.ml.feature import RFormula
supervised = RFormula(formula="species ~ .")
fittedRF = supervised.fit(df) # fit the transformer
preparedDF = fittedRF.transform(df) # transform
preparedDF = preparedDF.drop("label") # we don't really need a label
preparedDF.show(3)
from pyspark.ml.feature import PCA as PCAml
from pyspark.ml.linalg import Vectors
pca = PCAml(k=2, inputCol="features", outputCol="pca")
model = pca.fit(preparedDF)
transformed = model.transform(preparedDF)
transformed.show(2, False)
```
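If you want to check how much of the total variance the two components capture, the fitted `PCAModel` exposes the explained variance ratios and the loading matrix (shown here only as an optional sanity check; it is not part of the original walkthrough):
```
# Optional: inspect the variance explained by PC1 and PC2, and their loadings
print(model.explainedVariance)  # DenseVector with one entry per component
print(model.pc)                 # 4 x 2 matrix of principal-component loadings
```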
Let's visualize our data in the PC1 and PC2 axes:
```
from pyspark.sql.functions import udf
from pyspark.sql.types import FloatType
firstElement=udf(lambda v:float(v[0]),FloatType())
secondElement=udf(lambda v:float(v[1]),FloatType())
pcaDF = transformed.select(firstElement('pca'), secondElement('pca'), 'species').toPandas()
pcaDF.columns = ['pc1', 'pc2', 'species']
pcaDF.head()
sns.set(rc={'figure.figsize':(8,6)}) # Figure size
sns.scatterplot(data=pcaDF, x='pc1', y='pc2', hue='species');
```
## k-means
𝘬-means is one of the most popular clustering algorithms. In this algorithm, a user-specified number of cluster centers (𝘬) are initialized at randomly chosen points in the dataset. Each point is then assigned to a cluster based on its proximity (measured in Euclidean distance) to the nearest center. Once this assignment happens, the center of each cluster (called the centroid) is recomputed as the mean of its assigned points, and the process repeats: all points are reassigned, and new centroids are computed. We repeat this process for a finite number of iterations or until convergence (i.e., when the centroid locations stop changing). This does not, however, mean that our clusters are always sensible. For instance, a given “logical” cluster of data might be split right down the middle simply because of the starting points of two distinct clusters. Thus, it is often a good idea to perform multiple runs of 𝘬-means starting with different initializations.
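To make that assign-then-recompute loop concrete, here is a minimal NumPy sketch of plain Lloyd's algorithm on toy 2-D data (this is only an illustration; it is not the Spark implementation used below, and it does no empty-cluster handling):
```
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(300, 2)                                  # toy 2-D data
k = 3
centroids = X[rng.choice(len(X), k, replace=False)]    # random initialization

for _ in range(100):
    # assignment step: each point goes to its nearest centroid (Euclidean distance)
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # update step: recompute each centroid as the mean of its assigned points
    new_centroids = np.array([X[labels == c].mean(axis=0) for c in range(k)])
    if np.allclose(new_centroids, centroids):          # converged: centroids stopped moving
        break
    centroids = new_centroids
```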
Below we will use the `transformed` dataframe to create the clusters. Note that only the `features` column will be used, even though we are passing the entire dataframe; i.e., the labels (`species`) won't be used in this clustering algorithm.
```
from pyspark.ml.clustering import KMeans
from pyspark.ml.evaluation import ClusteringEvaluator
# Trains a k-means model.
kmeans = KMeans(k=3, seed=843)
model = kmeans.fit(transformed)
# Make predictions
predictions = model.transform(transformed)
# Evaluate clustering by computing Silhouette score
evaluator = ClusteringEvaluator()
silhouette = evaluator.evaluate(predictions)
print("Silhouette with squared euclidean distance = " + str(silhouette))
# Shows the result.
centers = model.clusterCenters()
print("Cluster Centers: ")
for center in centers:
print(center)
```
We will now visualize our clusters in the PC1 and PC2 axes and compare them with the `species` column:
```
from pyspark.sql.functions import udf
from pyspark.sql.types import FloatType
firstElement=udf(lambda v:float(v[0]),FloatType())
secondElement=udf(lambda v:float(v[1]),FloatType())
kMeansDf = predictions.select('*', firstElement('pca'), secondElement('pca')).toPandas()
kMeansDf.columns = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width',
'species', 'features', 'pca', 'prediction', 'pc1', 'pc2']
kMeansDf.head()
sns.scatterplot(data=kMeansDf, x='pc1', y='pc2', hue='species', style='prediction');
```
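For a quick numeric version of that comparison, a pandas crosstab of species against cluster assignment works well (this uses the `kMeansDf` pandas dataframe built above):
```
import pandas as pd

# Rows: true species; columns: k-means cluster id
print(pd.crosstab(kMeansDf['species'], kMeansDf['prediction']))
```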
```
# Setting up a custom stylesheet in IJulia
file = open("style.css") # A .css file in the same folder as this notebook file
styl = readstring(file) # Read the file # New in 0.6
HTML("$styl") # Output as HTML
```
# Functions
<h2>In this lesson</h2>
- [Introduction](#Introduction)
- [Outcomes](#Outcomes)
- [Single expression functions](#Single-expression-functions)
- [Multiple expression functions](#Multiple-expression-functions)
- [Optional arguments](#Optional-arguments)
- [Using keyword arguments to bypass the order problem](#Using-keyword-arguments-to-bypass-the-order-problem)
- [Functions with a variable number of arguments](#Functions-with-a-variable-number-of-arguments)
- [Passing arrays as function arguments](#Passing-arrays-as-function-arguments)
- [Type parameters](#Type-parameters)
- [Stabby functions and do blocks](#Stabby-functions-and-do-blocks)
- [Using functions as arguments](#Using-functions-as-arguments)
<hr>
<h2>Introduction</h2>
Julia is by and large a functional language. Most of what we do is simply passing arguments to functions. When we call a function we actually choose from a whole bunch of them: Julia decides which one it is going to use based on the argument types (there is a lookup table for every function, which is stored with the function). Julia generates low-level code based on your computer's instruction set. So, when you create a function such as...
```
function cbd(a)
return a^3
end
```
... a whole bunch of methods are created (the different implementations of a function are called **methods**). When the function is called with an integer argument, Julia will generate code that uses the CPU's integer multiplication instruction set and when a floating point value is used, the floating point multiplication instruction set will be targeted.
Multiple dispatch refers to calling the right implementation of a function based on the arguments. Note that only the positional arguments are used to look up the correct method. When the function is used again, but with different argument types, a new method is selected. This is called **overloading**.
Let's look at all the methods that can be called for the +() function.
```
# Usual syntax
3 + 4
# Functional syntax
+(3, 4)
```
[Back to the top](#In-this-lesson)
<hr>
<h2>Outcomes</h2>
After successfully completing this lecture, you will be able to:
- Create single expression functions
- Create multiple expression functions
- Add optional arguments with default values to functions
- Create keyword arguments
- Create function with a variable number of arguments
- Pass arrays and tuples to functions
- Specify argument types
- Create stabby functions
- Use functions as arguments
[Back to the top](#In-this-lesson)
<hr>
<h2>Single expression functions</h2>
We can mimic mathematical functions in Julia. One of the first mathematical functions we all came across was $ f \left( x \right) = {x}^{2} $. This is simple to do in Julia.
```
f(x) = x^2
```
Julia reports our function, named `f`, and notes that it has one method. We used an argument placeholder called `x`. Let's take a look at the single method that was created.
```
methods(f)
```
We can call our function and pass it a valid argument.
```
f(3)
```
As expected we get the solution $ {3}^{2} = 9 $.
[Back to the top](#In-this-lesson)
<hr>
<h2>Multiple expression functions</h2>
With single expression functions it was convenient to use the shortcut (almost mathematical) syntax we used above. If we want a function to do a few more things, even have flow control, we have to use function syntax. In the first example below we will have a function that takes two arguments and performs two tasks (has two expressions).
```
# Declaring the block of code as a function using the function keyword, giving it a name,
# and listing the arguments
function mltpl(x, y)
print("The first value is $x and the second value is $y.\n$x x $y is:")
# The dollar signs are placeholders for the argument values
# The \n combination indicates a new-line
return x * y
end
# Indentation happened automatically in IJulia
# Calling the function and passing values for the two arguments
mltpl(3, 4)
```
We can omit the `return` keyword. If so, only the last expression before the `end` is returned (together with any output from `print()` calls), although the rest is still executed.
```
function mltpl2(x, y)
print("Blah, blah,... Multiply!")
x * y
end
mltpl2(3, 4)
# Now, let's get a bit crazy
function mltpl3(x, y)
print("More blah, blah...")
x + y
x * y
end
mltpl3(3, 4)
```
So the `x + y` was not returned.
This is not to say that in Julia only a single value is returned when omitting the return keyword. Have a look at the next example.
```
function math_func(a, b)
print("This function will return addition, subtraction and multiplication of the values $a and $b\.")
a + b, a - b, a * b
end
# Calling math_func(), which will return a tuple
math_func(3, 4)
```
This can be very useful in a Julia program. This is how we might use it:
```
ans1, ans2, ans3 = math_func(3, 4)
ans1
ans2
ans3
```
[Back to the top](#In-this-lesson)
<hr>
<h2>Optional arguments</h2>
A default value can be passed to an argument when defining a function.
```
function func(a, b, c = 100)
print(" We have the values $a, $b, and $c.")
end
```
We can either omit the last argument when we call the function, in which case the default value is used, or we can pass our own value.
```
# Calling the function, but omitting the last argument
func(1, 10)
# The last argument can be overwritten with a new value
func(1, 2, 3)
```
[Back to the top](#In-this-lesson)
<hr>
<h2>Using keyword arguments to bypass the order problem</h2>
We can create functions with many, many arguments. The problem is, we might forget the argument order when calling the function and passing values to it. To solve this problem, the semicolon (;) can be used (usually after the ordered arguments). Let's take a look.
```
# A most ridiculously long print statement (apologies)
function func2(a, b, c = 100 ; p = 100, q = "red")
print("The first ordered argument value is $(a).", "\n")
print("The second ordered argumnent is $(b).", "\n")
print("The third ordered argument was optional.", "\n")
print("If you see a value of 100 here, you either passed a value of 100 or omitted it: $(c).", "\n")
print("Let's see what happend to the keyword p: $(p).", "\n")
print("Let's see what happens to the keyword q: $(q).", "\n")
print("Oh yes, let's also return something useful, like multiplying $(a) and $(b), yielding:", "\n")
return a * b
end
# Calling just the first two ordered arguments
func2(3, 4)
# Calling something else for c
func2(3, 4, 5)
# Now let's have some fun with the keyword arguments
func2(3, 4, p = pi)
# Now for q
func2(3, 4, 2, q = "Hello!")
# Mixing the keyword around (as long as we use their names)
func2(3, 4, 2, q = "It works!", p = exp(1))
```
The keyword arguments can be placed anywhere; simply use their names. The positional values before the semicolon, though, still have to be supplied, and in the correct order, even when keyword arguments are interspersed among them.
```
# And finally, we go bananas!
func2(q = "Bananas!", 3, 4, p = sqrt(3), 2)
```
[Back to the top](#In-this-lesson)
<hr>
<h2>Functions with a variable number of arguments</h2>
We can use three dots, as in `...` (called a splat or ellipsis), to indicate none, one, or many arguments. Let's have a look.
```
function func3(args...)
print("I can tell you how many arguments you passed: $(length(args)).")
end
# Calling nothing, nothing, nothing. Hello! Is anyone home?
func3()
# Someone's home!
func3(1000000)
# It's Julia!
func3("Julia")
func3("Hello", "Julia")
func3("Julia", "is", 1, "in", "a", 1000000, "!")
```
The splat or ellipsis, as an indicator that a function accepts any number of arguments, can solve some problems. In the example below we will pass a list of strings as an argument and see what happens.
```
function surgery(string_array)
string_items = join(string_array, ", ", " and ")
print("Today I performed the following operations: $string_items\!")
end
# Passing two arguments
surgery(["colonic resection", "appendectomy"])
# What if I forget the square brackets []
# The join() function will act on the characters in the string
surgery("appendectomy")
# Now we don't restrict the number of arguments
function splat_surgery(stringsss...)
string_items = join(stringsss, ", ", " and ")
print("Today I performed the following operations: $string_items\!")
end
splat_surgery("appendectomy")
# We can even just add strings without it being part of an array
splat_surgery("colonic resection", "appendectomy", "omentopexy", "cholecystectomy")
```
For the sake of clarity, look at the following example to see what Julia does with the `args...` arguments. You will note that they are actually collected into a tuple.
```
function argues(a, b, s...)
print("The argument values are: $a, $b, and $s")
end
# The first two values, 3 and 4, have proper assignment, but the rest will be in a tuple
argues(3, 4, 5, 6, 7, 8, "Julia")
# Now for an empty tuple
argues(3, 4)
```
Now for some real fun. We can combine keywords and splats. Have a look at this.
```
# Creating a function that only contains keywords, but they are
# splats (can we use the term splats?)
function fun_func(; a...)
a
end
# Calling the fun_func() function, remembering to give the keywords names
fun_func(var1 = "Julia", var2 = "Language", val1 = 3)
```
We now have a collection of (key, value) tuples, with each key coming from the name we gave the keyword argument. Moreover, each key is actually a symbol, which you can recognize by the colon (:) preceding it.
[Back to the top](#In-this-lesson)
<hr>
<h2>Passing arrays as function arguments</h2>
Once a function is defined, an array of values can be passed to it using the map function.
```
# Creating an array
xvals = [-3, -2.5, -2, -1.5, -1, -0.5, 0, 0.5, 1, 1.5, 2, 2.5, 3];
# Creating the function
function sqr(a)
return a^2
end
# Mapping the array to the function
map(sqr, xvals)
```
Mapping is not always required. Some inbuilt Julia functions do element-wise operations on arrays anyway, and this is also a lot faster. In the first example we will map the array of integers from $ 1 $ to $ 10,000 $ to the trigonometric sine function. We'll use `@time` to time how long the mapping takes and then repeat the exercise using the inbuilt element-wise operation of the sine function.
```
@time map(sin, collect(1:10000));
@time sin.(collect(1:10000)); # New in 0.6
```
Arrays or tuples can cause problems when passed to a function. The following won't work:
```
array_1 = [3, 4]
tuple_1 = (3, 4)
function h(x,y)
return 3 * x + 2 * y
end
h(array_1)
h(tuple_1)
```
This problem used to be solved with the apply() function, but that has been deprecated. Now, just use the splat or ellipsis.
```
array_1 = [3,4]
tuple_1 = (3, 4);
function h(x, y)
return 3 * x + 2 * y
end
h(array_1...)
h(tuple_1...)
```
With the exception of numbers and characters (or other plain data), values of arguments are passed by reference only and are not copied. They can therefore be altered. We find a good example in arrays. Have a look at this example.
```
# Creating an array
array_primes = [2, 3, 5, 7, 11, 13, 17, 19];
# Creating a function that inserts a new value at the end of the array
function add_ele(a)
push!(a, 23)
end
# Calling the function and adding the array array_primes as an argument
add_ele(array_primes)
```
[Back to the top](#In-this-lesson)
<hr>
<h2>Type parameters</h2>
It is possible to limit a function to accepting only certain argument types.
```
function m(x::Int)
return 3 * x
end
# Calling the function with an integer
m(3)
# Checking the methods of m()
methods(m)
```
We can even include type information in the method definition.
```
function arg_test{T <: Real}(x::T)
print("$x is of type $T")
end
```
A little explanation of the function above is probably required. The curly braces part, `{}`, goes between the function name and the argument parentheses, `()`. `T` is used by convention. Above we use the `{T <: Real}` syntax, which means the type can be Real or any subtype of Real. We can be more specific, e.g. only allow integers, either by restricting the parameter, `{T <: Integer}`, or by annotating the argument directly, `x::Int`.
```
arg_test(3)
arg_test(3 // 7)
```
Here is an example of using two arguments that can be of any type, as long as they are of the same type.
```
function ident_types{T}(a::T, b::T)
return +(a, b)
end
# Adding two complex numbers
ident_types(2 + 3im, 1 + 0im)
```
If we were to use the following...
```
ident_types(2 + 3im, 1)
```
... this error would occur:
```
LoadError: MethodError: `ident_types` has no method matching ident_types(::Complex{Int64}, ::Int64)
Closest candidates are:
ident_types{T}(::T, !Matched::T)
while loading In[97], in expression starting on line 1
```
[Back to the top](#In-this-lesson)
<hr>
<h2>Stabby functions and do blocks</h2>
Stabby lambda functions, as they are called, are quick-and-dirty functions. They are examples of anonymous functions, so called because they don't have a name. The do block is also a form of anonymous function. Let's look at some examples.
```
# The Julia syntax uses the -> character combinations, hence stabby!
x -> 2x^2 + 3x - 2
```
We can now use the `map()` function to apply the values in an array to this stabby function. Note that, being anonymous, the stabby function cannot be called by name.
```
map(x -> 2x^2 + 3x - 2, [1, 2, 3, 4, 5])
```
There is another way of achieving this using `do`.
```
# Let's do something
map([1, 2, 3, 4, 5]) do x
2x^2 + 3x - 2
end
```
The `do` block can do some more!
```
map([3, 6, 9, 10, 11]) do x
if mod(x, 3) == 0
100x
elseif mod(x, 3) == 1
200x
else
mod(x, 3) == 2
300x
end
end
```
[Back to the top](#In-this-lesson)
<hr>
<h2>Using functions as arguments</h2>
As the title of this section implies, we can pass a function as an argument. The receiving function can then call the function that was passed in.
```
# First function
function string_func(s)
str = s()
print("I love $str\!")
end
# Second function
function luv()
return("Julia")
end
string_func(luv)
# Calling the function string_func
# Passing a function as an argument, which then calls that function
# The called luv function returns the string Julia, which is now the argument of the originally called function
```
[Back to the top](#In-this-lesson)
|
github_jupyter
|
# Setting up a custom stylesheet in IJulia
file = open("style.css") # A .css file in the same folder as this notebook file
styl = readstring(file) # Read the file # New in 0.6
HTML("$styl") # Output as HTML
function cbd(a)
return a^3
end
# Usual syntax
3 + 4
# Functional syntax
+(3, 4)
f(x) = x^2
methods(f)
f(3)
# Declaring the block of code as a function using the function keyword, giving it a name,
# and listing the arguments
function mltpl(x, y)
print("The first value is $x and the second value is $y.\n$x x $y is:")
# The dollar signs are placeholders for the argument values
# The \n combination indicates a new-line
return x * y
end
# Indentation happened automatically in IJulia
# Calling the function and passing values for the two arguments
mltpl(3, 4)
function mltpl2(x, y)
print("Blah, blah,... Multiply!")
x * y
end
mltpl2(3, 4)
# Now, let's get a bit crazy
function mltpl3(x, y)
print("More blah, blah...")
x + y
x * y
end
mltpl3(3, 4)
function math_func(a, b)
print("This function will return addition, subtraction and multiplication of the values $a and $b\.")
a + b, a - b, a * b
end
# Calling math_func(), which will return a tuple
math_func(3, 4)
ans1, ans2, ans3 = math_func(3, 4)
ans1
ans2
ans3
function func(a, b, c = 100)
print(" We have the values $a, $b, and $c.")
end
# Calling the function, but omitting the last argument
func(1, 10)
# The last argument can be overwritten with a new value
func(1, 2, 3)
# A most ridiculously long print statement (apologies)
function func2(a, b, c = 100 ; p = 100, q = "red")
print("The first ordered argument value is $(a).", "\n")
print("The second ordered argumnent is $(b).", "\n")
print("The third ordered argument was optional.", "\n")
print("If you see a value of 100 here, you either passed a value of 100 or omitted it: $(c).", "\n")
print("Let's see what happend to the keyword p: $(p).", "\n")
print("Let's see what happens to the keyword q: $(q).", "\n")
print("Oh yes, let's also return something useful, like multiplying $(a) and $(b), yielding:", "\n")
return a * b
end
# Calling just the first two ordered arguments
func2(3, 4)
# Calling something else for c
func2(3, 4, 5)
# Now let's have some fun with the keyword arguments
func2(3, 4, p = pi)
# Now for q
func2(3, 4, 2, q = "Hello!")
# Mixing the keyword around (as long as we use their names)
func2(3, 4, 2, q = "It works!", p = exp(1))
# And finally, we go bananas!
func2(q = "Bananas!", 3, 4, p = sqrt(3), 2)
function func3(args...)
print("I can tell you how many arguments you passed: $(length(args)).")
end
# Calling nothing, nothing, nothing. Hello! Is anyone home?
func3()
# Someone's home!
func3(1000000)
# It's Julia!
func3("Julia")
func3("Hello", "Julia")
func3("Julia", "is", 1, "in", "a", 1000000, "!")
function surgery(string_array)
string_items = join(string_array, ", ", " and ")
print("Today I performed the following operations: $string_items\!")
end
# Passing two arguments
surgery(["colonic resection", "appendectomy"])
# What if I forget the square brackets []
# The join() function will act on the characters in the string
surgery("appendectomy")
# Now we don't restrict the number of arguments
function splat_surgery(stringsss...)
string_items = join(stringsss, ", ", " and ")
print("Today I performed the following operations: $string_items\!")
end
splat_surgery("appendectomy")
# We can even just add strings without it being part of an array
splat_surgery("colonic resection", "appendectomy", "omentopexy", "cholecystectomy")
function argues(a, b, s...)
print("The argument values are: $a, $b, and $s")
end
# The first two values, 3 and 4, have proper assignment, but the rest will be in a tuple
argues(3, 4, 5, 6, 7, 8, "Julia")
# Now for an empty tuple
argues(3, 4)
# Creating a function that only contains keywords, but they are
# splats (can we use the term splats?)
function fun_func(; a...)
a
end
# Calling the fun_func() function, remembering to give the keywords names
fun_func(var1 = "Julia", var2 = "Language", val1 = 3)
# Creating an array
xvals = [-3, -2.5, -2, -1.5, -1, -0.5, 0, 0.5, 1, 1.5, 2, 2.5, 3];
# Creating the function
function sqr(a)
return a^2
end
# Mapping the array to the function
map(sqr, xvals)
@time map(sin, collect(1:10000));
@time sin.(collect(1:10000)); # New in 0.6
array_1 = [3, 4]
tuple1 = (3, 4)
function h(x,y)
return 3 * x + 2 * y
end
h(array_1)
h(tuple_1)
array_1 = [3,4]
tuple_1 = (3, 4);
function h(x, y)
return 3 * x + 2 * y
end
h(array_1...)
h(tuple_1...)
# Creating an array
array_primes = [2, 3, 5, 7, 11, 13, 17, 19];
# Creating afunction that inserts a new value at the end of the array
function add_ele(a)
push!(a, 23)
end
# Calling the function and adding the array array_primes as an argument
add_ele(array_primes)
function m(x::Int)
return 3 * x
end
# Calling the function with an integer
m(3)
# Checking the methods of m()
methods(m)
function arg_test{T <: Real}(x::T)
print("$x is of type $T")
end
arg_test(3)
arg_test(3 // 7)
function ident_types{T}(a::T, b::T)
return +(a, b)
end
# Adding two complex numbers
ident_types(2 + 3im, 1 + 0im)
ident_types(2 + 3im, 1)
LoadError: MethodError: `ident_types` has no method matching ident_types(::Complex{Int64}, ::Int64)
Closest candidates are:
ident_types{T}(::T, !Matched::T)
while loading In[97], in expression starting on line 1
# The Julia syntax uses the -> character combinations, hence stabby!
x -> 2x^2 + 3x - 2
map(x -> 2x^2 + 3x - 2, [1, 2, 3, 4, 5])
# Let's do something
map([1, 2, 3, 4, 5]) do x
2x^2 + 3x - 2
end
map([3, 6, 9, 10, 11]) do x
if mod(x, 3) == 0
100x
elseif mod(x, 3) == 1
200x
else
mod(x, 3) == 2
300x
end
end
# First function
function string_func(s)
str = s()
print("I love $str\!")
end
# Second function
function luv()
return("Julia")
end
string_func(luv)
# Calling the function string_func
# Passing a function as an argument, which then calls that function
# The called luv function returns the string Julia, which is now the argument of the originally called function
| 0.675978 | 0.959687 |
# Simple Scatter Plots
Another commonly used plot type is the simple scatter plot, a close cousin of the line plot. Instead of the points being joined by line segments, here each point is represented individually with a marker such as a dot, circle, or other shape.
We'll start by setting up the notebook for plotting and importing the functions we will use:
```
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import numpy as np
```
## Scatter plots with ``plt.plot``
In the previous notebook, we saw the dual interface Matplotlib offers for producing plots: ``plt.plot``/``ax.plot``.
Just as we used it before to draw line plots, we can also use it to draw scatter plots; all we need to do is specify a marker:
```
help(plt.plot)
x = np.linspace(0, 10, 30)
y = np.sin(x)
plt.plot(x, y, ':.k');
```
The third argument in the function call is a character that represents the type of symbol used for plotting. Just as we previously specified options such as ``-`` or ``--`` to control the line style, the marker style has its own set of short codes. The full list of available symbols can be seen in the ``plt.plot`` documentation, or in Matplotlib's online documentation.
Most of the possibilities are fairly intuitive, and we'll show some of the most common ones here:
```
var1 = 20
"asda {}".format(var1)
np.random.seed(0)
for marker in ['o', '.', ',', 'x', '+', 'v', '^', '<', '>', 's', 'd']:
plt.plot(np.random.rand(5), np.random.rand(5), marker,
label="marker='{}'".format(marker))
plt.legend(numpoints=1)
plt.xlim(0, 1.8);
```
For even more possibilities, these character codes can be combined with line and color codes to plot points along with a line connecting them:
```
plt.plot(x, y, '-ok')
```
The parameters of ``plt.plot`` can be used to specify a wide range of properties of the lines and markers:
```
plt.plot(x, y, '-p',
color='gray',
linewidth=4,
markerfacecolor='yellow',
markersize=15,
markeredgecolor='blue',
markeredgewidth=2);
```
This flexibility of the ``plt.plot`` function allows for a wide variety of possible visualization options.
For a more complete description, I would recommend consulting the ``plt.plot`` documentation.
## Scatter plots with ``plt.scatter``
Another of the most powerful tools for creating scatter plots is the ``plt.scatter`` function, which can be used very similarly to the ``plt.plot`` function:
```
plt.scatter(x, y, marker='o');
```
**The main difference between ``plt.scatter`` and ``plt.plot`` is that the former can be used to create scatter plots where the properties of each individual point (size, marker color, edge color, etc.) can be controlled individually or mapped to the data.**
Let's demonstrate this by creating a random scatter plot with points of many colors and sizes.
To better see the overlapping results, we'll also use the ``alpha`` keyword to adjust the transparency level:
```
x
rng = np.random.RandomState(10)
x = rng.randn(100)
y = rng.randn(100)
colors = rng.rand(100)
sizes = 1000 * rng.rand(100)
plt.scatter(x, y, c=colors, s=sizes, alpha=0.3,
cmap='viridis')
plt.colorbar(); # show color scale
```
Notice that the color argument is automatically mapped to a color scale (shown by the ``colorbar()`` command), and that the size argument is specified in pixels.
In this way, the color and size of the points can be used to convey information in the visualization, in order to display multidimensional data.
For example, we could use the Iris data from Scikit-Learn, where each sample is one of three types of flowers whose petal and sepal sizes have been carefully measured:
```
from sklearn.datasets import load_iris
iris = load_iris()
features = iris.data.T
plt.scatter(features[0], features[1], alpha=0.5,
s=100*features[3], c=iris.target, cmap='viridis')
plt.xlabel(iris.feature_names[0])
plt.ylabel(iris.feature_names[1]);
```
We can see that this scatter plot gives us the ability to simultaneously explore four different dimensions of the data: the (x, y) location of each point corresponds to the sepal length and width, the size of the point is related to the petal width, and the color is related to the particular species of flower.
Multicolor, multifeature scatter plots like this can be useful both for exploration and for presentation of data.
## ``plot`` vs. ``scatter``: A note on efficiency
We have now covered the main functions available in Matplotlib for drawing line and scatter plots, ``plt.plot`` and ``plt.scatter``, but... why should you use one over the other?
While it doesn't matter much for small amounts of data, as datasets begin to grow to many thousands of points, ``plt.plot`` can be noticeably more efficient than ``plt.scatter``. The reason is that ``plt.scatter`` has the ability to render a different size and/or color for each point, so the renderer must do the extra work of constructing each point individually.
In ``plt.plot``, on the other hand, the points are always essentially clones of each other, so the work of determining the appearance of the points is done only once for the whole dataset.
For large datasets, the difference between these two can lead to vastly different performance, and for this reason ``plt.plot`` should be preferred over ``plt.scatter`` for large datasets.
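As a rough, informal check (exact numbers will vary with the backend, the machine, and how much per-point styling you request), you can time a draw of the same points with both functions:
```
import time
import numpy as np
import matplotlib.pyplot as plt

pts = np.random.rand(100000, 2)

fig, ax = plt.subplots()
t0 = time.time()
ax.plot(pts[:, 0], pts[:, 1], '.')
fig.canvas.draw()
print("plot:    %.3f s" % (time.time() - t0))

fig, ax = plt.subplots()
t0 = time.time()
ax.scatter(pts[:, 0], pts[:, 1], s=1)
fig.canvas.draw()
print("scatter: %.3f s" % (time.time() - t0))
```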
## Exercise 1
1. Create a vector ``x`` of 40 points defined on [-5, 5]
2. Plot the function $y = 10^{\sin(x+\pi/2)}$ with a line
3. On the same figure, plot the same function again, this time with green *-shaped markers
4. Create another figure where you draw a line with its markers, but in a single plotting call rather than two as before. Use an 'o' marker and the color red (one possible solution is sketched below)
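A possible solution sketch for Exercise 1 (the variable names are my own; any equivalent Matplotlib calls are fine):
```
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-5, 5, 40)          # 1. 40 points on [-5, 5]
y = 10 ** np.sin(x + np.pi / 2)     # y = 10^sin(x + pi/2)

# 2. and 3. line plus green '*' markers on the same figure
plt.plot(x, y)
plt.plot(x, y, '*', color='green')
plt.show()

# 4. a single call drawing a red line with 'o' markers
plt.figure()
plt.plot(x, y, '-o', color='red')
plt.show()
```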
## Exercise 2
So far we have been generating points from a vector and plotting a function. However, as we have done in this notebook, we can also plot points that don't have to come from an evenly spaced vector and its function; they can be independent sets of points, such as two columns of a DataFrame.
In this case, we are going to load the data from the file "data_to_plot.csv", which has an ``x`` column and a ``y`` column that you should pass to the plot. Plot these points with a black ``.`` marker. Add as many extras as you like: a legend, a title for the plot...
```
import pandas as pd
df = pd.read_csv("data_to_plot.csv")
plt.plot(df['x'].values, df['y'].values, '.k', label='data')
plt.legend()
plt.title('data_to_plot.csv');
```
## Exercise 3
To produce the plots in this notebook, we used a test dataset from sklearn containing various flower measurements. Next, we will use one of the datasets we have already seen and analyze it graphically:
1. Read the cars file ('coches.csv') that you have in this folder
2. Plot, on the same figure, horsepower ("hp") against the quarter-mile time ("qsec") and against miles per gallon ("mpg").
    1. The "qsec" variable should be drawn with 'o' markers and a red dashed line (marker and line)
    2. The "mpg" variable should be drawn with green '^' markers of size 8 and a solid blue line
3. Use the ``scatter`` method to plot 'hp' against 'qsec', with the color mapped to 'mpg', the size mapped to 'gear' (scaled by a factor of your choosing so the points show up larger) and the 'viridis' colormap. Show the color bar to the right of the plot, as we did before. Add any visual extras you like.
```
# 1.
import numpy as np
import pandas as pd
df = pd.read_csv("coches.csv")
df
# 2.
df = df.sort_values(by='hp')
plt.plot(df['hp'], df['qsec'], '--or', label='qsec')
plt.plot(df['hp'], df['mpg'], color = 'b', marker='^', markeredgecolor='g', markersize=8, markerfacecolor='g', linestyle='solid', label='mpg')
plt.xlabel('Cv')
plt.ylabel('Unds')
plt.legend();
# 3.
plt.scatter(df['qsec'], df['hp'], s= df.gear*100, c= df.mpg, alpha= 0.3, cmap= 'viridis')
plt.xlabel('Quarter Mile per Sec')
plt.ylabel('Horse Power')
plt.title('Car Efficiency')
plt.colorbar();
```
# Meta-Analytic Coactivation Modeling
```
# First, import the necessary modules and functions
import os
from datetime import datetime
import matplotlib.pyplot as plt
from myst_nb import glue
from repo2data.repo2data import Repo2Data
import nimare
start = datetime.now()
# Install the data if running locally, or points to cached data if running on neurolibre
DATA_REQ_FILE = os.path.join("../binder/data_requirement.json")
FIG_DIR = os.path.abspath("../images")
# Download data
repo2data = Repo2Data(DATA_REQ_FILE)
data_path = repo2data.install()
data_path = os.path.join(data_path[0], "data")
# Now, load the Datasets we will use in this chapter
neurosynth_dset = nimare.dataset.Dataset.load(os.path.join(data_path, "neurosynth_dataset.pkl.gz"))
```
Meta-analytic coactivation modeling (MACM) {cite:p}`Laird2009-gc,Robinson2010-iv,Eickhoff2010-vx`, also known as meta-analytic connectivity modeling, uses meta-analytic data to measure the co-occurrence of activations between brain regions, providing evidence of functional connectivity across tasks.
In coordinate-based MACM, whole-brain studies within the database are selected based on whether or not they report at least one peak in a region of interest specified for the analysis.
These studies are then subjected to a meta-analysis, often comparing the selected studies to those remaining in the database.
In this way, the significance of each voxel in the analysis corresponds to whether there is greater convergence of foci at the voxel among studies which also report foci in the region of interest than those which do not.
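As a sketch of that comparison, the selected studies can be contrasted against the rest of the database by slicing the complementary set of study IDs. The snippet below is illustrative only and is not run elsewhere in this chapter: it assumes the `neurosynth_dset`, `amygdala_ids`, and `dset_amygdala` objects created later in this section, and it assumes that NiMARE exposes an `ALESubtraction` estimator with a `fit(dataset1, dataset2)` signature.
```
# Illustrative sketch: contrast ROI studies against the remaining studies
from nimare.meta.cbma.ale import ALESubtraction

non_roi_ids = sorted(set(neurosynth_dset.ids) - set(amygdala_ids))
dset_non_roi = neurosynth_dset.slice(non_roi_ids)

sub_meta = ALESubtraction(n_iters=10)  # very few iterations, for illustration only
sub_results = sub_meta.fit(dset_amygdala, dset_non_roi)
```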
<!-- TODO: Determine appropriate citation style here. -->
MACM results have historically been accorded a similar interpretation to task-related functional connectivity (e.g., {cite:p}`Hok2015-lt,Kellermann2013-en`), although this approach is quite removed from functional connectivity analyses of task fMRI data (e.g., beta-series correlations, psychophysiological interactions, or even seed-to-voxel functional connectivity analyses on task data).
Nevertheless, MACM analyses do show high correspondence with resting-state functional connectivity {cite:p}`Reid2017-ez`.
MACM has been used to characterize the task-based functional coactivation of the cerebellum {cite:p}`Riedel2015-tx`, lateral prefrontal cortex {cite:p}`Reid2016-ba`, fusiform gyrus {cite:p}`Caspers2014-ja`, and several other brain regions.
Within NiMARE, MACMs can be performed by selecting studies in a Dataset based on the presence of activation within a target mask or coordinate-centered sphere.
In this section, we will perform two MACMs: one with a target mask and one with a coordinate-centered sphere.
For the former, we use {py:meth}`nimare.dataset.Dataset.get_studies_by_mask`.
For the latter, we use {py:meth}`nimare.dataset.Dataset.get_studies_by_coordinate`.
```
# Create Dataset only containing studies with peaks within the amygdala mask
amygdala_mask = os.path.join(data_path, "amygdala_roi.nii.gz")
amygdala_ids = neurosynth_dset.get_studies_by_mask(amygdala_mask)
dset_amygdala = neurosynth_dset.slice(amygdala_ids)
# Create Dataset only containing studies with peaks within the sphere ROI
sphere_ids = neurosynth_dset.get_studies_by_coordinate([[24, -2, -20]], r=6)
dset_sphere = neurosynth_dset.slice(sphere_ids)
import numpy as np
from nilearn import input_data, plotting
# In order to plot a sphere with a precise radius around a coordinate with
# nilearn, we need to use a NiftiSpheresMasker
mask_img = neurosynth_dset.masker.mask_img
sphere_masker = input_data.NiftiSpheresMasker([[24, -2, -20]], radius=6, mask_img=mask_img)
sphere_masker.fit(mask_img)
sphere_img = sphere_masker.inverse_transform(np.array([[1]]))
fig, axes = plt.subplots(figsize=(6, 4), nrows=2)
display = plotting.plot_roi(
amygdala_mask,
annotate=False,
draw_cross=False,
axes=axes[0],
figure=fig,
)
axes[0].set_title("Amygdala ROI")
display = plotting.plot_roi(
sphere_img,
annotate=False,
draw_cross=False,
axes=axes[1],
figure=fig,
)
axes[1].set_title("Spherical ROI")
glue("figure_macm_rois", fig, display=False)
```
```{glue:figure} figure_macm_rois
:name: figure_macm_rois
:align: center
Region of interest masks for (1) a target mask-based MACM and (2) a coordinate-based MACM.
```
Once the `Dataset` has been reduced to studies with coordinates within the mask or sphere requested, any of the supported CBMA Estimators can be run.
```
from nimare import meta
meta_amyg = meta.cbma.ale.ALE(kernel__sample_size=20)
results_amyg = meta_amyg.fit(dset_amygdala)
meta_sphere = meta.cbma.ale.ALE(kernel__sample_size=20)
results_sphere = meta_sphere.fit(dset_sphere)
meta_results = {
"Amygdala ALE MACM": results_amyg.get_map("z", return_type="image"),
"Sphere ALE MACM": results_sphere.get_map("z", return_type="image"),
}
fig, axes = plt.subplots(figsize=(6, 4), nrows=2)
for i_meta, (name, file_) in enumerate(meta_results.items()):
display = plotting.plot_stat_map(
file_,
annotate=False,
axes=axes[i_meta],
cmap="Reds",
cut_coords=[24, -2, -20],
draw_cross=False,
figure=fig,
)
axes[i_meta].set_title(name)
colorbar = display._cbar
colorbar_ticks = colorbar.get_ticks()
if colorbar_ticks[0] < 0:
new_ticks = [colorbar_ticks[0], 0, colorbar_ticks[-1]]
else:
new_ticks = [colorbar_ticks[0], colorbar_ticks[-1]]
colorbar.set_ticks(new_ticks, update_ticks=True)
glue("figure_macm", fig, display=False)
```
```{glue:figure} figure_macm
:name: figure_macm
:align: center
Unthresholded z-statistic maps for (1) the target mask-based MACM and (2) the coordinate-based MACM.
```
```
end = datetime.now()
print(f"macm.md took {end - start} to build.")
```
# Natural Language Inference: Using Attention
:label:`sec_natural-language-inference-attention`
We introduced the natural language inference task and the SNLI dataset in :numref:`sec_natural-language-inference-and-dataset`. In view of the many models based on complex and deep architectures, Parikh et al. proposed to address natural language inference with attention mechanisms and called it a "decomposable attention model" :cite:`Parikh.Tackstrom.Das.ea.2016`. This results in a model without recurrent or convolutional layers, which achieved the best result at the time on the SNLI dataset with far fewer parameters. In this section, we will describe and implement this attention-based method (with MLPs) for natural language inference, as depicted in :numref:`fig_nlp-map-nli-attention`.

:label:`fig_nlp-map-nli-attention`
## The Model
Simpler than preserving the order of tokens in premises and hypotheses, we can just align tokens in one text sequence to every token in the other, and vice versa, and then compare and aggregate such information to predict the logical relationship between a premise and a hypothesis. Similar to the alignment of tokens between source and target sentences in machine translation, the alignment of tokens between premises and hypotheses can be accomplished flexibly by attention mechanisms.

:label:`fig_nli_attention`
:numref:`fig_nli_attention` depicts the natural language inference method using attention mechanisms. At a high level, it consists of three jointly trained steps: attending, comparing, and aggregating. We will illustrate them step by step below.
```
import paddle
from paddle import nn
from paddle.nn import functional as F
from d2l import paddle as d2l
```
### Attending
The first step is to align tokens in one text sequence to each token in the other sequence. Suppose that the premise is "I do need sleep" and the hypothesis is "I am tired". Due to semantic similarity, we may wish to align "I" in the hypothesis with "I" in the premise, and align "tired" in the hypothesis with "sleep" in the premise. Likewise, we may wish to align "I" in the premise with "I" in the hypothesis, and align "need" and "sleep" in the premise with "tired" in the hypothesis. Note that such alignment is a *soft* alignment using weighted averages, where ideally large weights are associated with the tokens to be aligned. For ease of demonstration, :numref:`fig_nli_attention` shows such alignment in a *hard* way.
Now we describe the soft alignment using attention mechanisms in more detail. Denote by $\mathbf{A} = (\mathbf{a}_1, \ldots, \mathbf{a}_m)$ and $\mathbf{B} = (\mathbf{b}_1, \ldots, \mathbf{b}_n)$ the premise and hypothesis, whose numbers of tokens are $m$ and $n$, respectively, where $\mathbf{a}_i, \mathbf{b}_j \in \mathbb{R}^{d}$ ($i = 1, \ldots, m, j = 1, \ldots, n$) is a $d$-dimensional word vector. For the soft alignment, we compute the attention weights $e_{ij} \in \mathbb{R}$ as
$$e_{ij} = f(\mathbf{a}_i)^\top f(\mathbf{b}_j),$$
:eqlabel:`eq_nli_e`
where the function $f$ is the MLP defined in the `mlp` function below. The output dimension of $f$ is specified by the `num_hiddens` argument of `mlp`.
```
def mlp(num_inputs, num_hiddens, flatten):
net = []
net.append(nn.Dropout(0.2))
net.append(nn.Linear(num_inputs, num_hiddens))
net.append(nn.ReLU())
if flatten:
net.append(nn.Flatten(start_axis=1))
net.append(nn.Dropout(0.2))
net.append(nn.Linear(num_hiddens, num_hiddens))
net.append(nn.ReLU())
if flatten:
net.append(nn.Flatten(start_axis=1))
return nn.Sequential(*net)
```
It is worth noting that, in :eqref:`eq_nli_e`, $f$ takes $\mathbf{a}_i$ and $\mathbf{b}_j$ as inputs separately, rather than taking a pair of them together. This *decomposition* trick leads to only $m + n$ applications of $f$ (linear complexity) rather than $mn$ applications (quadratic complexity).
Normalizing the attention weights in :eqref:`eq_nli_e`, we compute the weighted average of all the token vectors in the hypothesis to obtain the representation of the hypothesis that is softly aligned with the token indexed by $i$ in the premise:
$$
\boldsymbol{\beta}_i = \sum_{j=1}^{n}\frac{\exp(e_{ij})}{ \sum_{k=1}^{n} \exp(e_{ik})} \mathbf{b}_j.
$$
Likewise, we compute the soft alignment of the premise tokens for each token indexed by $j$ in the hypothesis:
$$
\boldsymbol{\alpha}_j = \sum_{i=1}^{m}\frac{\exp(e_{ij})}{ \sum_{k=1}^{m} \exp(e_{kj})} \mathbf{a}_i.
$$
Below we define the `Attend` class to compute the soft alignment of the hypotheses (`beta`) with the input premises `A`, and the soft alignment of the premises (`alpha`) with the input hypotheses `B`.
```
class Attend(nn.Layer):
def __init__(self, num_inputs, num_hiddens, **kwargs):
super(Attend, self).__init__(**kwargs)
self.f = mlp(num_inputs, num_hiddens, flatten=False)
def forward(self, A, B):
        # Shape of A/B: (batch size, no. of tokens in sequence A/B, embed_size)
        # Shape of f_A/f_B: (batch size, no. of tokens in sequence A/B, num_hiddens)
f_A = self.f(A)
f_B = self.f(B)
        # Shape of e: (batch size, no. of tokens in sequence A, no. of tokens in sequence B)
e = paddle.bmm(f_A, f_B.transpose([0, 2, 1]))
        # Shape of beta: (batch size, no. of tokens in sequence A, embed_size),
        # where sequence B is softly aligned with each token (axis 1 of beta) of sequence A
beta = paddle.bmm(F.softmax(e, axis=-1), B)
        # Shape of alpha: (batch size, no. of tokens in sequence B, embed_size),
        # where sequence A is softly aligned with each token (axis 1 of alpha) of sequence B
alpha = paddle.bmm(F.softmax(e.transpose([0, 2, 1]), axis=-1), A)
return beta, alpha
```
### Comparing
In the next step, we compare a token in one sequence with the other sequence that is softly aligned with that token. Note that in soft alignment, all the tokens from one sequence, though probably with different attention weights, are compared with a token in the other sequence. For ease of demonstration, :numref:`fig_nli_attention` pairs tokens with aligned tokens in a *hard* way. For example, if the attending step determines that "need" and "sleep" in the premise are both aligned with "tired" in the hypothesis, then the pair ("tired", "need sleep") is compared.
In the comparing step, we feed the concatenation (operator $[\cdot, \cdot]$) of tokens from one sequence and aligned tokens from the other sequence into a function $g$ (an MLP):
$$\mathbf{v}_{A,i} = g([\mathbf{a}_i, \boldsymbol{\beta}_i]), i = 1, \ldots, m\\ \mathbf{v}_{B,j} = g([\mathbf{b}_j, \boldsymbol{\alpha}_j]), j = 1, \ldots, n.$$
:eqlabel:`eq_nli_v_ab`
In :eqref:`eq_nli_v_ab`, $\mathbf{v}_{A,i}$ is the comparison between token $i$ in the premise and all the hypothesis tokens that are softly aligned with it, while $\mathbf{v}_{B,j}$ is the comparison between token $j$ in the hypothesis and all the premise tokens that are softly aligned with it. The following `Compare` class defines the comparing step.
```
class Compare(nn.Layer):
def __init__(self, num_inputs, num_hiddens, **kwargs):
super(Compare, self).__init__(**kwargs)
self.g = mlp(num_inputs, num_hiddens, flatten=False)
def forward(self, A, B, beta, alpha):
V_A = self.g(paddle.concat([A, beta], axis=2))
V_B = self.g(paddle.concat([B, alpha], axis=2))
return V_A, V_B
```
### Aggregating
Now we have two sets of comparison vectors $\mathbf{v}_{A,i}$ ($i = 1, \ldots, m$) and $\mathbf{v}_{B,j}$ ($j = 1, \ldots, n$). In the last step, we aggregate this information to infer the logical relationship. We begin by summing up both sets of comparison vectors:
$$
\mathbf{v}_A = \sum_{i=1}^{m} \mathbf{v}_{A,i}, \quad \mathbf{v}_B = \sum_{j=1}^{n}\mathbf{v}_{B,j}.
$$
Next we feed the concatenation of both summarization results into a function $h$ (an MLP) to obtain the classification result for the logical relationship:
$$
\hat{\mathbf{y}} = h([\mathbf{v}_A, \mathbf{v}_B]).
$$
The aggregation step is defined in the following `Aggregate` class.
```
class Aggregate(nn.Layer):
def __init__(self, num_inputs, num_hiddens, num_outputs, **kwargs):
super(Aggregate, self).__init__(**kwargs)
self.h = mlp(num_inputs, num_hiddens, flatten=True)
self.linear = nn.Linear(num_hiddens, num_outputs)
def forward(self, V_A, V_B):
        # Sum up both sets of comparison vectors
V_A = V_A.sum(axis=1)
V_B = V_B.sum(axis=1)
        # Feed the concatenation of both summarization results into an MLP
Y_hat = self.linear(self.h(paddle.concat([V_A, V_B], axis=1)))
return Y_hat
```
### Putting It All Together
By putting the attending, comparing, and aggregating steps together, we define the decomposable attention model to jointly train these three steps.
```
class DecomposableAttention(nn.Layer):
def __init__(self, vocab, embed_size, num_hiddens, num_inputs_attend=100,
num_inputs_compare=200, num_inputs_agg=400, **kwargs):
super(DecomposableAttention, self).__init__(**kwargs)
self.embedding = nn.Embedding(len(vocab), embed_size)
self.attend = Attend(num_inputs_attend, num_hiddens)
self.compare = Compare(num_inputs_compare, num_hiddens)
        # There are 3 possible outputs: entailment, contradiction, and neutral
self.aggregate = Aggregate(num_inputs_agg, num_hiddens, num_outputs=3)
def forward(self, X):
premises, hypotheses = X
A = self.embedding(premises)
B = self.embedding(hypotheses)
beta, alpha = self.attend(A, B)
V_A, V_B = self.compare(A, B, beta, alpha)
Y_hat = self.aggregate(V_A, V_B)
return Y_hat
```
## Training and Evaluating the Model
Now we will train and evaluate the defined decomposable attention model on the SNLI dataset. We begin by reading the dataset.
### Reading the Dataset
We download and read the SNLI dataset using the function defined in :numref:`sec_natural-language-inference-and-dataset`. The batch size and sequence length are set to $256$ and $50$, respectively.
```
batch_size, num_steps = 256, 50
train_iter, test_iter, vocab = d2l.load_data_snli(batch_size, num_steps)
```
### Creating the Model
We use the pretrained 100-dimensional GloVe embedding to represent the input tokens. Thus, we predefine the dimension of the vectors $\mathbf{a}_i$ and $\mathbf{b}_j$ in :eqref:`eq_nli_e` as 100. The output dimension of the functions $f$ in :eqref:`eq_nli_e` and $g$ in :eqref:`eq_nli_v_ab` is set to 200. Then we create a model instance, initialize its parameters, and load the GloVe embedding to initialize the vectors of the input tokens.
```
embed_size, num_hiddens, devices = 100, 200, d2l.try_all_gpus()
net = DecomposableAttention(vocab, embed_size, num_hiddens)
glove_embedding = d2l.TokenEmbedding('glove.6b.100d')
embeds = glove_embedding[vocab.idx_to_token]
net.embedding.weight.set_value(embeds);
```
### Training and Evaluating the Model
In contrast to the `split_batch` function of :numref:`sec_multi_gpu`, which takes single inputs such as text sequences (or images), minibatches here carry multiple inputs, namely premises and hypotheses, which would be handled by a `split_batch_multi_inputs` function (a hedged sketch is given below).
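The Paddle training utility `d2l.train_ch13` used below already handles `(premise, hypothesis)` tuples, so the helper is not strictly needed here; the following is only a minimal sketch of what such a splitting function could look like, assuming the batch size divides evenly across `devices` and leaving device placement to the training loop.
```
def split_batch_multi_inputs(X, y, devices):
    """Sketch: split a multi-input minibatch X = (premises, hypotheses)
    into len(devices) shards along the batch dimension."""
    shards = [paddle.split(x, len(devices)) for x in X]
    X_shards = list(zip(*shards))          # one (premise, hypothesis) pair per device
    y_shards = paddle.split(y, len(devices))
    return X_shards, y_shards
```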
Now we can train and evaluate the model on the SNLI dataset.
```
lr, num_epochs = 0.001, 4
trainer = paddle.optimizer.Adam(learning_rate=lr, parameters=net.parameters())
loss = nn.CrossEntropyLoss(reduction="none")
d2l.train_ch13(net, train_iter, test_iter, loss, trainer, num_epochs,
devices)
```
### Using the Model
Finally, we define the prediction function to output the logical relationship between a pair of premise and hypothesis.
```
#@save
def predict_snli(net, vocab, premise, hypothesis):
"""预测前提和假设之间的逻辑关系"""
net.eval()
premise = paddle.to_tensor(vocab[premise], place=d2l.try_gpu())
hypothesis = paddle.to_tensor(vocab[hypothesis], place=d2l.try_gpu())
label = paddle.argmax(net([premise.reshape((1, -1)),
hypothesis.reshape((1, -1))]), axis=1)
return 'entailment' if label == 0 else 'contradiction' if label == 1 \
else 'neutral'
```
We can use the trained model to obtain the natural language inference result for a sample pair of sentences.
```
predict_snli(net, vocab, ['he', 'is', 'good', '.'], ['he', 'is', 'bad', '.'])
```
## Summary
* The decomposable attention model consists of three steps for predicting the logical relationship between premises and hypotheses: attending, comparing, and aggregating.
* With attention mechanisms, we can align tokens in one text sequence to every token in the other, and vice versa. Such alignment is soft, using weighted averages, where ideally large weights are associated with the tokens to be aligned.
* The decomposition trick leads to a more desirable linear complexity than quadratic complexity when computing the attention weights.
* We can use pretrained word vectors as the input representation for downstream natural language processing tasks, such as natural language inference.
## Exercises
1. Train the model with other combinations of hyperparameters. Can you get better accuracy on the test set?
1. What are the major drawbacks of the decomposable attention model for natural language inference?
1. Suppose that we want to obtain the level of semantic similarity (e.g., a continuous value between 0 and 1) for any pair of sentences. How shall we collect and label the dataset? Can you design a model with attention mechanisms?
[Discussions](https://discuss.d2l.ai/t/5728)
# Data manipulation with *pandas*
## Introduction
*Pandas* is a library for reading, processing, and manipulating data in *Python* whose functions are very similar to spreadsheet software such as _Microsoft Excel_, _LibreOffice Calc_, and _Apple Numbers_. Besides being free to use, it has countless advantages. To learn more about its capabilities, see the library's [official page](https://pandas.pydata.org/about/index.html).
In this part of our course we will learn two new data structures that *pandas* introduces:
* *Series* and
* *DataFrame*.
A *DataFrame* is a tabular data structure with labeled rows and columns.
| | Weight | Height | Age | Gender |
| :------------- |:-------------:| :-----:|:------:|:-----:|
| Ana | 55 | 162 | 20 | `female` |
| João | 80 | 178 | 19 | `male` |
| Maria | 62 | 164 | 21 | `female` |
| Pedro | 67 | 165 | 22 | `male`|
| Túlio | 73 | 171 | 20 | `male` |
The columns of a *DataFrame* are one-dimensional vectors of type *Series*, while the rows are labeled by a special data structure called an *index*. In *pandas*, *index* objects are customized lists of labels that allow fast lookups and some important operations.
To use these data structures, we import the *numpy* library under its usual placeholder *np* and *pandas* under its usual placeholder *pd*.
```
import numpy as np
import pandas as pd
```
## *Series*
*Series*:
* are vectors, i.e., one-dimensional *arrays*;
* have an *index* for each entry (and are very efficient at operating on them);
* can hold any data type (`int`, `str`, `float`, etc.).
### Creating a *Series* object
The standard way is to use the *Series* function of the pandas library:
```python
serie_exemplo = pd.Series(dados_de_interesse, index=indice_de_interesse)
```
In the example above, `dados_de_interesse` can be:
* a dictionary (an object of type `dict`);
* a list (an object of type `list`);
* a *numpy* `array` object;
* a scalar, such as the integer 1.
### Creating *Series* from dictionaries
```
dicionario_exemplo = {'Ana':20, 'João': 19, 'Maria': 21, 'Pedro': 22, 'Túlio': 20}
pd.Series(dicionario_exemplo)
```
Note that the *index* was taken from the dictionary keys. So, in this example, the *index* is "Ana", "João", "Maria", "Pedro", and "Túlio". The order of the *index* follows the insertion order in the dictionary.
We can supply a new *index* to the dictionary we just created
```
pd.Series(dicionario_exemplo, index=['Maria', 'Maria', 'ana', 'Paula', 'Túlio', 'Pedro'])
```
Missing data are flagged with a special value. The default *pandas* marker for missing data is `NaN` (*not a number*).
### Creating *Series* from lists
```
lista_exemplo = [1,2,3,4,5]
pd.Series(lista_exemplo)
```
If no *index* is given, *pandas* automatically assigns the values `0, 1, ..., N-1`, where `N` is the number of elements in the list.
### Creating *Series* from *numpy* *arrays*
```
array_exemplo = np.array([1,2,3,4,5])
pd.Series(array_exemplo)
```
### Supplying an *index* when creating the *Series*
The total number of elements in the *index* must equal the size of the *array*. Otherwise, an error is raised.
```
pd.Series(array_exemplo, index=['a','b','c','d','e','f'])
pd.Series(array_exemplo, index=['a','b','c','d','e'])
```
Moreover, the elements in the *index* do not need to be unique.
```
pd.Series(array_exemplo, index=['a','a','b','b','c'])
```
An error occurs only if an operation that relies on the uniqueness of the *index* elements is performed, such as the `reindex` method.
```
series_exemplo = pd.Series(array_exemplo, index=['a','a','b','b','c'])
series_exemplo.reindex(['b','a','c','d','e']) # 'a' e 'b' duplicados na origem
```
### Creating *Series* from scalars
```
pd.Series(1, index=['a', 'b', 'c', 'd'])
```
In this case, an index **must** be provided!
### *Series* behave like *numpy* *arrays*
A *pandas* *Series* behaves like a one-dimensional *numpy* *array*. It can be used as an argument for most *numpy* functions. The difference is that the *index* comes along.
Example:
```
series_exemplo = pd.Series(array_exemplo, index=['a','b','c','d','e'])
series_exemplo[2]
series_exemplo[:2]
np.log(series_exemplo)
```
More examples:
```
serie_1 = pd.Series([1,2,3,4,5])
serie_2 = pd.Series([4,5,6,7,8])
serie_1 + serie_2
serie_1 * 2 - serie_2 * 3
```
Just like *numpy* *arrays*, *pandas* *Series* also have a *dtype* (data type) attribute.
```
series_exemplo.dtype
```
If you want to use the data of a *pandas* *Series* as a *numpy* *array*, just use the `to_numpy` method to convert it.
```
series_exemplo.to_numpy()
```
### *Series* behave like dictionaries
We can access the elements of a *Series* through the keys given in the *index*.
```
series_exemplo
series_exemplo['a']
```
We can add new elements associated with new keys.
```
series_exemplo['f'] = 6
series_exemplo
'f' in series_exemplo
'g' in series_exemplo
```
In this example, we try to access a nonexistent key, so an error occurs.
```
series_exemplo['g']
series_exemplo.get('g')
```
However, we can use the `get` method to handle keys that may not exist and supply a *numpy* `NaN` as a fallback value if, indeed, no value is assigned.
```
series_exemplo.get('g',np.nan)
```
### The `name` attribute
A *pandas* *Series* has an optional `name` attribute that lets us identify the object. It is quite useful in operations involving *DataFrames*.
```
serie_com_nome = pd.Series(dicionario_exemplo, name = "Idade")
serie_com_nome
```
### The `date_range` function
In many situations, the indices can be organized as dates. The `date_range` function creates indices from dates. Some arguments of this function are:
- `start`: a `str` containing the date that serves as the left boundary of the dates. Default: `None`
- `end`: a `str` containing the date that serves as the right boundary of the dates. Default: `None`
- `freq`: the frequency to be considered. For example, days (`D`), hours (`H`), weeks (`W`), month ends (`M`), month starts (`MS`), year ends (`Y`), year starts (`YS`), etc. Multiples can also be used (e.g. `5H`, `2Y`, etc.). Default: `None`.
- `periods`: number of periods to be considered (the period is determined by the `freq` argument).
Below we give examples of `date_range` with different date formats.
```
pd.date_range(start='1/1/2020', freq='W', periods=10)
pd.date_range(start='2010-01-01', freq='2Y', periods=10)
pd.date_range('1/1/2020', freq='5H', periods=10)
pd.date_range(start='2010-01-01', freq='3YS', periods=3)
```
The following example creates two *Series* with random values associated with a 10-day span.
```
indice_exemplo = pd.date_range('2020-01-01', periods=10, freq='D')
serie_1 = pd.Series(np.random.randn(10),index=indice_exemplo)
serie_2 = pd.Series(np.random.randn(10),index=indice_exemplo)
```
## *DataFrame*
As we said earlier, the *DataFrame* is the second fundamental structure of *pandas*. A *DataFrame*:
- is a table, i.e., it is two-dimensional;
- has each of its columns formed by a *pandas* *Series*;
- may contain *Series* of different data types.
### Creating a *DataFrame*
The standard way to create a *DataFrame* is through the function of the same name.
```python
df_exemplo = pd.DataFrame(dados_de_interesse, index = indice_de_interesse,
columns = colunas_de_interesse)
```
When creating a *DataFrame*, we can specify
- `index`: labels for the rows (the *index* attributes of the *Series*).
- `columns`: labels for the columns (the *name* attributes of the *Series*).
In the _template_, `dados_de_interesse` can be
* a dictionary of:
  * one-dimensional *numpy* *arrays*;
  * lists;
  * dictionaries;
  * *pandas* *Series*.
* a two-dimensional *numpy* *array*;
* a *pandas* *Series*;
* another *DataFrame*.
### Creating a *DataFrame* from dictionaries of *Series*
In this creation method, the *Series* in the dictionary do not need to have the same number of elements. The *index* of the *DataFrame* is given by the **union** of the *index* of all the *Series* contained in the dictionary.
Example:
```
serie_Idade = pd.Series({'Ana':20, 'João': 19, 'Maria': 21, 'Pedro': 22}, name="Idade")
serie_Peso = pd.Series({'Ana':55, 'João': 80, 'Maria': 62, 'Pedro': 67, 'Túlio': 73}, name="Peso")
serie_Altura = pd.Series({'Ana':162, 'João': 178, 'Maria': 162, 'Pedro': 165, 'Túlio': 171}, name="Altura")
dicionario_series_exemplo = {'Idade': serie_Idade, 'Peso': serie_Peso, 'Altura': serie_Altura}
df_dict_series = pd.DataFrame(dicionario_series_exemplo)
df_dict_series
```
Compare this result with creating a spreadsheet by the usual means. Note how much flexibility we have to create or modify a table.
Let's look at examples of how to access ranges of data in the table.
```
pd.DataFrame(dicionario_series_exemplo, index=['Ana','Maria'])
pd.DataFrame(dicionario_series_exemplo, index=['Ana','Maria'], columns=['Peso','Altura'])
```
In this example, we add the `IMC` (BMI) column, still without computed values.
```
pd.DataFrame(dicionario_series_exemplo, index=['Ana','Maria','Paula'],
columns=['Peso','Altura','IMC'])
df_exemplo_IMC = pd.DataFrame(dicionario_series_exemplo,
columns=['Peso','Altura','IMC'])
```
Now we show how the IMC values can be computed directly via vectorized computation on the *Series*.
```
df_exemplo_IMC['IMC']=round(df_exemplo_IMC['Peso']/(df_exemplo_IMC['Altura']/100)**2,2)
df_exemplo_IMC
```
### Creating a *DataFrame* from dictionaries of lists or *numpy* *arrays*
In this creation method, the *arrays* or lists **must** have the same length. If the *index* is not given, it is assigned similarly to what happens with *Series* objects.
Example with a dictionary of lists:
```
dicionario_lista_exemplo = {'Idade': [20,19,21,22,20],
'Peso': [55,80,62,67,73],
'Altura': [162,178,162,165,171]}
pd.DataFrame(dicionario_lista_exemplo)
```
More examples:
```
pd.DataFrame(dicionario_lista_exemplo, index=['Ana','João','Maria','Pedro','Túlio'])
```
Examples with a dictionary of *numpy* *arrays*:
```
dicionario_array_exemplo = {'Idade': np.array([20,19,21,22,20]),
'Peso': np.array([55,80,62,67,73]),
'Altura': np.array([162,178,162,165,171])}
pd.DataFrame(dicionario_array_exemplo)
```
More examples:
```
pd.DataFrame(dicionario_array_exemplo, index=['Ana','João','Maria','Pedro','Túlio'])
```
### Creating a *DataFrame* from a *pandas* *Series*
In this case, the *DataFrame* has the same *index* as the *pandas* *Series* and a single column.
```
series_exemplo = pd.Series({'Ana':20, 'João': 19, 'Maria': 21, 'Pedro': 22, 'Túlio': 20})
pd.DataFrame(series_exemplo)
```
If the *Series* has a `name` attribute, it becomes the name of the *DataFrame* column.
```
series_exemplo_Idade = pd.Series({'Ana':20, 'João': 19, 'Maria': 21, 'Pedro': 22, 'Túlio': 20}, name="Idade")
pd.DataFrame(series_exemplo_Idade)
```
### Creating a *DataFrame* from a list of *pandas* *Series*
In this case, the data from the list are entered into the *DataFrame* row by row.
```
pd.DataFrame([serie_Peso, serie_Altura, serie_Idade])
```
We can fix the orientation using the `transpose` method.
```
pd.DataFrame([serie_Peso, serie_Altura, serie_Idade]).transpose()
```
### Creating a *DataFrame* from files
To create a *DataFrame* from a file, we need functions of the form `pd.read_FORMATO`, where `FORMATO` indicates the format to be imported, assuming the *pandas* library was imported as `pd`.
The most common formats are:
* *csv* (comma-separated values),
* *xls* or *xlsx* (Microsoft Excel formats),
* *hdf5* (commonly used in *big data*),
* *json* (commonly used in web pages).
The corresponding reading functions are:
* `pd.read_csv`,
* `pd.read_excel`,
* `pd.read_hdf`,
* `pd.read_json`,
respectively.
Of all of them, the most used function is `read_csv`. It has several arguments. Let's look at the most common ones:
* `file_path_or_buffer`: the path of the file to be read. It can be an internet address.
* `sep`: the separator between data entries. The default separator is `,`. Another commonly found separator is `\t` (TAB).
* `index_col`: the column to be used to form the *index*. The default is `None`, but it can be changed to another value.
* `names`: names of the columns to be used. The default is `None`.
* `header`: number of the row to be used as the column names. The default is `infer` (i.e., it tries to deduce them automatically). If column names are passed via `names`, then `header` is automatically treated as `None`.
**Example:** consider the file `data/exemplo_data.csv` containing:
```
,coluna_1,coluna_2
2020-01-01,-0.4160923582996922,1.8103644347460834
2020-01-02,-0.1379696602473578,2.5785204825192785
2020-01-03,0.5758273450544708,0.06086648807755068
2020-01-04,-0.017367186564883633,1.2995865328684455
2020-01-05,1.3842792448510655,-0.3817320973859929
2020-01-06,0.5497056238566345,-1.308789022968975
2020-01-07,-0.2822962331437976,-1.6889791765925102
2020-01-08,-0.9897300598660013,-0.028120707936426497
2020-01-09,0.27558240737928663,-0.1776585993494299
2020-01-10,0.6851316082235455,0.5025348904591399
```
To read the file above, simply do:
```
df_exemplo_0 = pd.read_csv('data/exemplo_data.csv')
df_exemplo_0
```
In the previous example, the columns were named correctly except for the first one, which we would like to use as the *index*. In that case we do:
```
df_exemplo = pd.read_csv('data/exemplo_data.csv', index_col=0)
df_exemplo
```
### The *DataFrame* `head` method
The `head` method, without arguments, lets us view the first 5 rows of the *DataFrame*.
```
df_exemplo.head()
```
If an argument with value `n` is passed, the first `n` rows are printed.
```
df_exemplo.head(2)
df_exemplo.head(7)
```
### The *DataFrame* `tail` method
The `tail` method, without arguments, returns the last 5 rows of the *DataFrame*.
```
df_exemplo.tail()
```
If an argument with value `n` is passed, the last `n` rows are printed.
```
df_exemplo.tail(2)
df_exemplo.tail(7)
```
### Attributes of *Series* and *DataFrames*
Commonly used attributes of *Series* and *DataFrames* are:
* `shape`: gives the dimensions of the object in question (*Series* or *DataFrame*) in a format consistent with the `shape` attribute of a *numpy* *array*.
* `index`: gives the index of the object. In the case of a *DataFrame*, these are the row labels.
* `columns`: gives the columns (only available for *DataFrames*)
Example:
```
df_exemplo.shape
serie_1.shape
df_exemplo.index
serie_1.index
df_exemplo.columns
```
If we want the data contained in the *index* or in the *Series*, we can use the `.array` property.
```
serie_1.index.array
df_exemplo.columns.array
```
If we want the data as a *numpy* `array`, we should use the `.to_numpy()` method.
Example:
```
serie_1.index.to_numpy()
df_exemplo.columns.to_numpy()
```
The `.to_numpy()` method is also available for *DataFrames*:
```
df_exemplo.to_numpy()
```
The *numpy* function `asarray()` is compatible with *pandas* *index*, *columns*, and *DataFrames*:
```
np.asarray(df_exemplo.index)
np.asarray(df_exemplo.columns)
np.asarray(df_exemplo)
```
### Information about the columns of a *DataFrame*
To get a brief description of the columns of a *DataFrame*, we use the `info` method.
Example:
```
df_exemplo.info()
```
### Creating files from *DataFrames*
To create files from *DataFrames*, just use methods of the form `DataFrame.to_FORMATO`, where `FORMATO` indicates the format to be exported, again assuming the *pandas* library was imported as `pd`.
For the file types above, the corresponding export methods are:
* `.to_csv` ('file_path'),
* `.to_excel` ('file_path'),
* `.to_hdf` ('file_path'),
* `.to_json` ('file_path'),
where `file_path` is a `str` containing the path of the file to be exported.
Example:
To export to the file `exemplo_novo.csv`, we apply the `.to_csv` method to the *DataFrame* `df_exemplo`:
```
df_exemplo.to_csv('data/exemplo_novo.csv')
```
### Example: COVID-19 in Paraíba (PB)
Daily COVID-19 data for the state of Paraíba:
*Source: https://superset.plataformatarget.com.br/superset/dashboard/microdados/*
```
dados_covid_PB = pd.read_csv('https://superset.plataformatarget.com.br/superset/explore_json/?form_data=%7B%22slice_id%22%3A1550%7D&csv=true',
sep=',', index_col=0)
dados_covid_PB.info()
dados_covid_PB.head()
dados_covid_PB.tail()
dados_covid_PB['estado'] = 'PB'
dados_covid_PB.head()
dados_covid_PB.to_csv('data/dadoscovidpb.csv')
```
```
%matplotlib inline
import astra
import numpy as np
import pylab as plt
import os
import glob
import matplotlib
font = {'size' : 18}
matplotlib.rc('font', **font)
from scipy.signal import medfilt
def log_progress(sequence, every=None, size=None):
from ipywidgets import IntProgress, HTML, VBox
from IPython.display import display
is_iterator = False
if size is None:
try:
size = len(sequence)
except TypeError:
is_iterator = True
if size is not None:
if every is None:
if size <= 200:
every = 1
else:
                every = size // 200  # every 0.5%
else:
assert every is not None, 'sequence is iterator, set every'
if is_iterator:
progress = IntProgress(min=0, max=1, value=1)
progress.bar_style = 'info'
else:
progress = IntProgress(min=0, max=size, value=0)
label = HTML()
box = VBox(children=[label, progress])
display(box)
index = 0
try:
for index, record in enumerate(sequence, 1):
if index == 1 or index % every == 0:
if is_iterator:
label.value = '{index} / ?'.format(index=index)
else:
progress.value = index
label.value = u'{index} / {size}'.format(
index=index,
size=size
)
yield record
except:
progress.bar_style = 'danger'
raise
else:
progress.bar_style = 'success'
progress.value = index
        label.value = str(index or '?')
def images_diff(im1, im2):
assert(im1.shape==im2.shape)
rec_diff = np.zeros(shape=(im1.shape[0],im1.shape[1],3), dtype='float32')
im1_t = im1.copy()
im1_t = (im1_t-im1_t.min())/(im1_t.max()-im1_t.min())
im2_t = im2.copy()
im2_t = (im2_t-im2_t.min())/(im2_t.max()-im2_t.min())
# nrecon_rec_t[nrecon_rec_t<0] = 0
diff_rec = im1_t-im2_t
rec_diff[...,0] = diff_rec*(diff_rec>0)
rec_diff[...,1] = -diff_rec*(diff_rec<0)
rec_diff[...,2] = rec_diff[...,1]
return rec_diff
!ls /home/makov/diskmnt/big/yaivan/RC/MMC1_2.82um_/
!ls /home/makov/diskmnt/big/yaivan/MMC_1/_tmp/nrecon/bh_0_rc_0/
!ls /home/makov/diskmnt/big/yaivan/Sand/Reconstructed/
def get_bh_level(nf):
return(int(os.path.split(nf)[-1].split('_')[1]))
def get_rc_level(nf):
return(int(os.path.split(nf)[-1].split('_')[3]))
def get_data(folder):
try:
data_file = glob.glob(os.path.join(folder, '*_sino*.tif'))[0]
# print(data_file)
sinogram = plt.imread(data_file).astype('float32')
data_file = glob.glob(os.path.join(folder, '*_sinoraw_*.tif'))[0]
sinraw = plt.imread(data_file).astype('float32')
rec_file = glob.glob(os.path.join(folder, '*_rec*.png'))[0]
rec = plt.imread(rec_file).astype('float32')
    except Exception as e:
        # Report the folder that failed before re-raising
        print(folder)
        raise e
return sinogram, sinraw, rec
objects = []
# objects.append({'name':'MMC_1',
# 'data_root':'/home/makov/diskmnt/big/yaivan/RC/MMC1_2.82um_/',
# 'rc_ref':16})
# objects.append({'name':'Sand',
# 'data_root':'/home/makov/diskmnt/big/yaivan/RC/Chieftain_Unc_2.8_/',
# 'rc_ref':20})
# objects.append({'name':'HP_Stage',
# 'data_root':'/home/makov/diskmnt/big/yaivan/RC/S2-Barnett@HP_P1_2.99um_/',
# 'rc_ref':20})
objects.append({'name':'Model object',
'data_root':'/home/makov/diskmnt/big/yaivan/RC/cube_/',
'rc_ref':0})
for rc_object in objects:
    # data_root = '/home/makov/diskmnt/big/yaivan/MMC_1/_tmp/nrecon/'
data_root= rc_object['data_root']
# nrecon_root_folder = os.path.join(data_root,'_tmp','nrecon')
nrecon_folders = glob.glob(os.path.join(data_root, 'bh_*_rc_*'))
nrecon_folders = [nf for nf in nrecon_folders if os.path.isdir(nf)]
    print(len(nrecon_folders))
for nf in nrecon_folders:
        print(get_rc_level(nf), end=' ')
    print()
sino = {}
sinoraw = {}
rec ={}
for nf in log_progress(nrecon_folders):
rc_level = get_rc_level(nf)
sino[rc_level], sinoraw[rc_level], rec[rc_level] = get_data(nf)
h={}
    for k, v in log_progress(list(rec.items())):
r = rec[k]
h[k], _ = np.histogram(r,bins=1000)
x = []
y = []
    for k, v in h.items():
x.append(k)
y.append(np.sum(v**2))
plt.figure(figsize=(10,7))
plt.title('{} Reference RC:{}'.format(rc_object['name'],rc_object['rc_ref']))
plt.plot(x,y,'o')
plt.ylabel('Sum of hist^2')
plt.xlabel('RC')
plt.grid(True)
plt.show()
sino = {}
sinoraw = {}
rec ={}
for nf in log_progress(nrecon_folders):
rc_level = get_rc_level(nf)
sino[rc_level], sinoraw[rc_level], rec[rc_level] = get_data(nf)
h={}
for k, v in log_progress(list(rec.items())):
r = rec[k]
h[k], _ = np.histogram(r,bins=1000)
x = []
y = []
for k, v in h.items():
x.append(k)
y.append(np.sum(v**2))
plt.figure(figsize=(10,7))
plt.plot(x,y,'o')
plt.grid(True)
plt.show()
```
<a href="https://colab.research.google.com/github/nickprock/corso_data_science/blob/master/machine_learning_pills/02_unsupervised/06_dimensionality_reduction.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Dimensionality Reduction
Dimensionality reduction techniques are very useful when we have many features and, for that reason, the model cannot find patterns in the data.
Several such techniques exist, but in this notebook we will look at the most widely used one, Principal Component Analysis (PCA).
It works when there are linear relationships between the variables, hence with continuous numerical variables. **The goal is to reduce dimensionality with the minimum loss of information.**
The algorithm for dimensionality reduction with PCA is roughly the following (a NumPy sketch of these steps is given after the figure below):
1. Standardize the *d-dimensional* dataset
2. Build the covariance matrix
3. Decompose the covariance matrix into its **eigenvalues** and **eigenvectors**
4. Sort the eigenvalues in decreasing order to rank the corresponding eigenvectors
5. Select the *k* eigenvectors corresponding to the *k* largest eigenvalues, where *k* is the dimensionality of the new feature subspace, with $k \leq d$
6. Build the projection matrix *W* from the top *k* eigenvectors
7. Transform the input dataset to obtain the new feature space.
<br>

<br>
[Image Credits](https://stats.stackexchange.com/questions/320743/why-are-eigenvectors-the-principal-components-in-principal-component-analysis)
<br>
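To make the steps above concrete, here is a minimal NumPy sketch of the eigendecomposition route (illustrative only; the rest of the notebook uses scikit-learn's `PCA`, which relies on an SVD instead):
```
import numpy as np

def pca_sketch(X, k):
    # 1. standardize the d-dimensional dataset
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    # 2.-3. covariance matrix and its eigendecomposition
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xs, rowvar=False))
    # 4.-5. sort by decreasing eigenvalue and keep the k leading eigenvectors
    order = np.argsort(eigvals)[::-1][:k]
    # 6. projection matrix W (d x k)
    W = eigvecs[:, order]
    # 7. project onto the new feature space
    return Xs @ W

# e.g. pca_sketch(load_iris().data, 2).shape -> (150, 2)
```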
A very simple example to explain PCA is the fish one. We have fish that can be measured by:
* length
* height
* weight
These three dimensions are correlated with one another, so they can be summarized by a single new dimension:
* overall size
### Dataset
For this example we will again use the Iris dataset, already seen in the decision tree notebook.
```
import numpy as np
from sklearn.decomposition import PCA
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
```
Let's reproduce the decision tree experiment, but with all the variables.
The dataset is simple and the accuracy will be equal to 1.
The goal of this notebook is not an improvement, but to show how little information is lost when using PCA.
```
iris = load_iris()
x = iris.data
y = iris.target
train_x, test_x, train_y, test_y = train_test_split(x, y, test_size = 0.2, random_state = 42)
clf = DecisionTreeClassifier()
clf.fit(train_x, train_y)
import time
start_time = time.time()
yhat = clf.predict(test_x)
print("execution time: ", time.time() - start_time)
print("\n")
print("accuracy: ", accuracy_score(test_y, yhat))
```
### PCA: choosing the number of components a priori
In this case the hyperparameter to set is the number of components itself.
We want two dimensions; we will see what the components look like.
Note that we apply *fit_transform* on the training set and only *transform* on the test set; this holds for every transformation. Train and test could have a different scale of values, which would mislead the estimator.
```
pca = PCA(n_components=2)
train_x_pca = pca.fit_transform(train_x)
test_x_pca = pca.transform(test_x)
import matplotlib.pyplot as plt
for i in np.unique(train_y):
mask = train_y == i
plt.scatter(train_x_pca[mask, 0], train_x_pca[mask, 1], label=i)
clf.fit(train_x_pca, train_y)
import time
start_time = time.time()
yhat_pca = clf.predict(test_x_pca)
print("execution time: ", time.time() - start_time)
print("\n")
print("accuracy: ", accuracy_score(test_y, yhat_pca))
```
### PCA: choosing the explained variance
Another hyperparameter that can be chosen is how much of the information in the original dataset we want to keep.
It should usually be above at least 75% to avoid large errors, and preferably 90% or more. In this case a single component would explain almost 90% of the variability, so to get three components we use 99% as the level.
When the explained variance is used, *svd_solver*, which performs the [**Singular Value Decomposition**](https://it.wikipedia.org/wiki/Decomposizione_ai_valori_singolari), must be set to "*full*".
```
pca = PCA(n_components=0.99, svd_solver="full")
train_x_pca = pca.fit_transform(train_x)
test_x_pca = pca.transform(test_x)
plt.figure(figsize=(18,10))
for i in np.unique(train_y):
mask = train_y == i
plt.subplot(3,1,1)
plt.scatter(train_x_pca[mask,0], train_x_pca[mask,1], label = i)
plt.subplot(3,1,2)
plt.scatter(train_x_pca[mask,0], train_x_pca[mask,2], label = i)
plt.subplot(3,1,3)
plt.scatter(train_x_pca[mask,1], train_x_pca[mask,2], label = i)
plt.show()
clf.fit(train_x_pca, train_y)
import time
start_time = time.time()
yhat_pca = clf.predict(test_x_pca)
print("execution time: ", time.time() - start_time)
print("\n")
print("accuracy: ", accuracy_score(test_y, yhat_pca))
```
### Exercise
<br>

<br>
[Image Credits](https://dataaspirant.com/2017/01/09/knn-implementation-r-using-caret-package/)
<br>
1. Create a notebook using the [Wine](http://archive.ics.uci.edu/ml/datasets/wine) dataset and build a classifier with and without PCA.
2. Evaluate the classification results
3. Find the optimal number of components / level of explained variance
4. Scatterplot of the result
**N.B. The class label [1, 2, 3] is in the first column of the file.**
# Computing gradients and derivatives in PyTorch
>"Making good use of the `gradient` argument in PyTorch's `backward` function"
- toc: true
- badges: true
- comments: true
- categories: [mathematics]
---
tags: mathematics pytorch gradients backward automatic differentiation vector-Jacobian product backpropagation
---
# tl;dr
The `backward` function in `PyTorch` can be used to compute the derivatives or gradients of functions. Since `backward` computes vector-Jacobian products, the appropriate vector must be determined. In other words, the correct `gradient` argument must be passed to `backward`; only in the simplest cases can `gradient` be omitted, in which case `backward` chooses the appropriate value itself.
This notebook explains vector-Jacobian products and how to choose the `gradient` argument in the `backward` function in the general case.
# A brief overview
In the case of a function taking a scalar and returning a scalar, the use of the `backward` function is quite straightforward:
```
# collapse-hide
import torch
x = torch.tensor(1., requires_grad=True)
y = x**2
y.backward()
print(f"Derivative at a single point:")
print(x.grad.data)
```
However, when
- the function is **multi-valued** (e.g. vector- or matrix-valued); or
- one wishes to compute the derivative of a function at **multiple** points,
then the `gradient` argument in `backward` must be suitably chosen. For example:
```
# collapse-hide
import torch
x = torch.linspace(-2, 2, 5, requires_grad=True)
y = x**2
gradient = torch.ones_like(y)
y.backward(gradient)
print("Derivative at multiple points:")
print(x.grad.data)
```
Indeed, more precisely, the `backward` function computes vector-Jacobian products, which is not explicit in the function's doc string:
```
# collapse-hide
print("First line of `torch.Tensor.backward` doc string:")
print("\""+ torch.Tensor.backward.__doc__.split("\n")[0] + "\"")
```
although some explanations are given [in this official tutorial](https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html#gradients). The crucial point is therefore to choose the appropriate vector, which is passed to the `backward` function in its `gradient` argument:
```
# collapse-hide
import inspect
import torch
print(f"torch.Tensor.backward{inspect.signature(torch.Tensor.backward)}")
print("...")
print("\n".join(torch.Tensor.backward.__doc__.split("\n")[11:18]))
print("...")
```
There is a way around specifying the `gradient` argument. Revisiting the example above, the derivative at multiple points can be equivalently calculated by adding a `sum()`:
```
# collapse-hide
import torch
x = torch.linspace(-2, 2, 5, requires_grad=True)
y = (x**2).sum()
y.backward()
print("Derivative at multiple points:")
print(x.grad.data)
```
Here, the `backward` method is invoked on a different `tensor`:
```
(x**2).backward()
```
if `x` contains a single input,
vs
```
(x**2).sum().backward()
```
if `x` contains multiple inputs.
On the other hand, when the `gradient` argument is passed, the same command is used to compute the derivatives whether `x` contains one or multiple inputs:
```
y = (x**2)
y.backward(torch.ones_like(y))
```
Roughly speaking, the difference between the two methods, namely setting `gradient=torch.ones_like(y)` or adding `sum()`, is in the order of the summation and differentiation.
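As a quick sanity check (a minimal sketch), both routes produce the same derivatives at the same points:
```
import torch

x1 = torch.linspace(-2, 2, 5, requires_grad=True)
y1 = x1**2
y1.backward(torch.ones_like(y1))   # vector-Jacobian product with v = (1, ..., 1)

x2 = torch.linspace(-2, 2, 5, requires_grad=True)
(x2**2).sum().backward()           # differentiate after summing

print(torch.equal(x1.grad, x2.grad))  # True
```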
# Usage examples of the `backward` function
The derivative of the **scalar**, **univariate** function $f(x)=x^2$ at a **single** point $x=1$:
```
import torch
x = torch.tensor(1., requires_grad=True)
y = x**2
y.backward()
x.grad
```
The derivative of the **scalar**, **univariate** function $f(x)=x^2$ at **multiple** points $x= -2, -1, \dots, 2$:
```
import torch
x = torch.linspace(-2, 2, 5, requires_grad=True)
y = x**2
v = torch.ones_like(y)
y.backward(v)
x.grad
```
The gradient of the **scalar**, **multivariate** function $f(x_1, x_2)=3x_1^2 + 5x_2^2$ at a **single** point $(x_1, x_2)=(-1, 2)$:
```
import torch
x = torch.tensor([-1., 2.], requires_grad=True)
w = torch.tensor([3., 5.])
y = (x*x*w).sum()
y.backward()
x.grad
```
The gradient of the **scalar**, **multivariate** function $f(x_1, x_2) = -x_1^2 + x_2^2$ at **multiple** points $(x_1, x_2)$:
```
import torch
x = torch.arange(6, dtype=float).view(3, 2).requires_grad_(True)
w = torch.tensor([-1, 1])
y = (x*x*w).sum(1)
v = torch.ones_like(y)
y.backward(v)
x.grad
```
The _derivatives_ of the **vector-valued**, **univariate** function $f(x)= (-x^3, 5x)$ at a **single** point $x=1$, i.e. the derivative of
- its first component function $f_1(x)=-x^3$; and
- its second component function $f_2(x)=5x$.
```
# collapse-hide
import torch
x = torch.tensor(1., requires_grad=True)
y = torch.stack([-x**3, 5*x])
v1 = torch.tensor([1., 0.])
y.backward(v1, retain_graph=True)
print(f"f_1'({x.data.item()}) = {x.grad.data.item():>4}")
x.grad.zero_()
v2 = torch.tensor([0., 1.])
y.backward(v2)
print(f"f_2'({x.data.item()}) = {x.grad.data.item():>4}")
```
The _derivatives_ of the **vector-valued**, **univariate** function $f(x)= (-x^3, 5x)$ at **multiple** points, i.e. the derivative of
- its first component function $f_1(x)=-x^3$; and
- its second component function $f_2(x)=5x$.
```
# collapse-hide
import torch
import itertools
x = torch.arange(3, dtype=float, requires_grad=True)
y = torch.stack([-x**3, 5*x])
ranges = [range(_) for _ in y.shape]
v1 = torch.tensor([1. if i == 0 else 0. for i, j in itertools.product(*ranges)]).view(*y.shape)
y.backward(v1, retain_graph=True)
print(f"Derivative of f_1(x)=-3x^2 at the points {tuple(x.data.view(-1).tolist())}:")
print(x.grad)
x.grad.zero_()
v2 = torch.tensor([1. if i == 1 else 0. for i, j in itertools.product(*ranges)]).view(*y.shape)
y.backward(v2)
print(f"\nDerivative of f_2(x)=5x at the points {tuple(x.data.view(-1).tolist())}:")
print(x.grad)
```
The _gradients_ of the **vector-valued**, **multivariate** function
$$
f(x_1, \dots, x_n) = (x_1 + \dots + x_n\,, x_1^2 + \dots + x_n^2)
$$
at a **single** point $(x_1, \dots, x_n)$, i.e. the gradient of
- its first component function $f_1(x_1, \dots, x_n) = x_1 + \dots + x_n$; and
- its second component function $f_2(x_1, \dots, x_n) = x_1^2 + \dots + x_n^2$.
```
# collapse-show
import torch
x = torch.arange(4, dtype=float, requires_grad=True)
y = torch.stack([x.sum(), (x**2).sum()])
print(f"x : {tuple(x.data.tolist())}")
print(f"y = (y_1, y_2) : {tuple(y.data.tolist())}")
v1 = torch.tensor([1., 0.])
y.backward(v1, retain_graph=True)
print(f"gradient of y_1 : {tuple(x.grad.data.tolist())}")
x.grad.zero_()
v2 = torch.tensor([0., 1.])
y.backward(v2)
print(f"gradient of y_2 : {tuple(x.grad.data.tolist())}")
```
The _gradients_ of the **vector-valued**, **multivariate** function
$$
f(x_1, \dots, x_n) = (x_1 + \dots + x_n\,, x_1^2 + \dots + x_n^2)
$$
at **multiple** points, i.e. the gradient of
- its first component function $f_1(x_1, \dots, x_n) = x_1 + \dots + x_n$; and
- its second component function $f_2(x_1, \dots, x_n) = x_1^2 + \dots + x_n^2$.
```
# collapse-show
import torch
import itertools
x = torch.arange(4*3, dtype=float).view(-1,4).requires_grad_(True)
y = torch.stack([x.sum(1), (x**2).sum(1)])
print("x:")
print(x.data)
print("y:")
print(y.data)
print()
ranges = [range(_) for _ in y.shape]
v1 = torch.tensor([1. if i == 0 else 0. for i, j in itertools.product(*ranges)]).view(*y.shape)
y.backward(v1, retain_graph=True)
print("Gradients of the f1 at multiple points:")
print(x.grad)
x.grad.zero_()
print()
v2 = torch.tensor([1. if i == 1 else 0. for i, j in itertools.product(*ranges)]).view(*y.shape)
y.backward(v2)
print("Gradients of the f2 at multiple points:")
print(x.grad)
```
# Mathematical preliminaries
## Scalars, vectors, matrices, and tensors
- A **scalar** is a real number. It is usually denoted with $x$.
- An **$n$-dimensional vector** is a list $(x_1, \dots, x_n)$ of scalars.
- An **$m$-by-$n$ matrix** is an array with $m$ rows and $n$ columns of scalars:
$$
\begin{bmatrix}w_{1,1}&\dots&w_{1,n}\\\vdots&\ddots&\vdots\\w_{m,1}&\dots&w_{m,n}\end{bmatrix}
$$
- A **column vector** of length $n$ is a $n$-by-$1$ matrix:
$$\begin{bmatrix}x_1\\\vdots\\x_n\end{bmatrix}$$
Note that it is distinct from its vector counterpart $(x_1, \dots, x_n)$.
- A **row vector** of length $n$ is a $1$-by-$n$ matrix:
$$\begin{bmatrix}x_1&\dots&x_n\end{bmatrix}$$
Note that it is distinct from its vector and column vector counterparts.
>Note:
For convenience, we may denote a vector, a column vector, or a row vector with a single symbol, typically $x$.
In another post we establish the following correspondence between these mathematical entities and their `tensor` counterparts in `PyTorch`:
|mathematical name|mathematical notation|`tensor` shape|`tensor` dimension|
|---|---|---|---|
|scalar|$x$|`()`|`0`|
|vector|$(x_1, \dots, x_n)$|`(n,)`|`1`|
|matrix|$\begin{bmatrix}w_{1,1}&\dots&w_{1,n}\\\vdots&\ddots&\vdots\\w_{m,1}&\dots&w_{m,n}\end{bmatrix}$|`(m,n)`| `2`|
|column vector|$\begin{bmatrix}x_1\\\vdots\\x_n\end{bmatrix}$|`(n,1)`|`2`|
|row vector|$\begin{bmatrix}x_1&\dots&x_n\end{bmatrix}$|`(1,n)`|`2`|
## Mathematical functions
- We consider functions which are mappings from scalars, vectors, or matrices to scalars, vectors, or matrices. It is generically denoted $y=f(x)$.
- A **scalar** function $y=f(x)$ is a function returning a scalar, i.e. $y$ is a scalar.
- A **vector-valued** function $y=f(x)$ is a function returning a vector, i.e. $y$ is a vector. We often write
$$f(x) = (f_1(x), \dots, f_m(x))$$
if the output is $m$-dimensional, where each of $f_1(x), \dots, f_m(x)$ is a scalar function.
- A **univariate** function $y=f(x)$ is a function depending on a scalar $x$.
- A **multivariate** function $y=f(x)$ is a function depending on a vector $x=(x_1, \dots, x_n)$.
In summary
|$y=f(x)$|scalar-valued|vector-valued|
|---|---|---|
|**univariate**|$x$ is a scalar<br>$y$ is a scalar|$x$ is a scalar<br>$y$ is a vector|
|**multivariate**|$x$ is a vector<br>$y$ is a scalar|$x$ is a vector<br>$y$ is a vector|
## Differentiation
### Basic definitions
We do not recall the definitions for:
- the **derivative** $f'(x)$ of a scalar, uni-variate function $y=f(x)$ evaluated at a scalar $x$;
- the **partial derivatives** $\frac{\partial f}{\partial x_i}(x)$, $i=1, \dots, n$, of a scalar, multivariate function $y=f(x)$ with respect to the variables $x_1, \dots, x_n$, and evaluated at $x=(x_1, \dots, x_n)$.
### Derivatives of vector-valued, univariate functions
The **derivative** of a vector-valued, uni-variate function $y=f(x)$ evaluated at a scalar $x$ is the vertical concatenation of the derivatives of its component functions:
$$f'(x) = \begin{bmatrix}f_1'(x)\\\vdots\\f_m'(x)\end{bmatrix}$$
### Gradients
The **gradient** of a scalar-valued function $y=f(x)$, is the *row* vector of its partial derivatives:
$$\nabla f(x) = \begin{bmatrix}\frac{\partial f}{\partial x_1}(x)&\dots&\frac{\partial f}{\partial x_n}(x)\end{bmatrix}$$
with length $n$ if $x$ is $n$-dimensional: $x=(x_1, \dots, x_n)$.
### Jacobians
The **Jacobian** of a vector-valued, multivariate function $y=f(x)$ is the vertical concatenation of the gradients of the component functions $f_1, \dots, f_m$:
$$J_f(x)
\,=\,
\begin{bmatrix}
\nabla f_1(x)\\\vdots\\\nabla f_m(x)
\end{bmatrix}
\,=\,
\begin{bmatrix}
\frac{\partial f_1}{\partial x_1}(x)&\dots&\frac{\partial f_1}{\partial x_n}(x)\\
\vdots&\ddots&\vdots\\
\frac{\partial f_m}{\partial x_1}(x)&\dots&\frac{\partial f_m}{\partial x_n}(x)
\end{bmatrix}
$$
It is thus an $m$-by-$n$ matrix, i.e. with $m$ rows and $n$ columns.
#### Special case: $m=1$
In case $m=1$, the Jacobian agrees with the gradient of a scalar, multivariate function:
$$J_f(x) = \nabla f(x)$$
#### Special case: $n=1$
In case $n=1$, the Jacobian agrees with the derivative of a vector-valued, univariate function.
$$J_f(x) = \begin{bmatrix}f_1'(x)\\\vdots\\f_m'(x)\end{bmatrix}$$
## Vector-Jacobian products
Given a vector-valued, multivariate function $y=f(x)$ and a _column_ vector
$v=\begin{bmatrix}v_1\\\vdots\\v_m\end{bmatrix}$,
the **vector-Jacobian product** is the matrix multiplication
$$v^\top J_f(x) \,=\,
\begin{bmatrix}
v_1&\dots&v_m
\end{bmatrix}
\begin{bmatrix}
\frac{\partial f_1}{\partial x_1}(x)&\dots&\frac{\partial f_1}{\partial x_n}(x)\\
\vdots&\ddots&\vdots\\
\frac{\partial f_m}{\partial x_1}(x)&\dots&\frac{\partial f_m}{\partial x_n}(x)
\end{bmatrix}
$$
which is then a _row_ vector of length $n$.
### Special case
If $v^\top$ happens to be the gradient of a scalar-valued function $z=\ell(y)$ evaluated at $f(x)$, i.e. $v = \nabla \ell(y)$ where $y=f(x)$, then
\begin{equation}
v^\top J_f(x)
\,=\,\nabla (\ell\circ f)(x)
\end{equation}
In other words, $v^\top J_f(x)$ is the gradient of the composition of the function $\ell$ with the function $f$.
>Note:
The vector-Jacobian product can be generalized to cases where $x$ and $y$ are (mathematical) tensors of higher dimensions. This generalization is in fact used in some of the examples of this post.
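As an illustration of this special case (a minimal sketch, with an arbitrary choice $\ell(y_1, y_2)=2y_1+3y_2$ that is not taken from the text above), the vector-Jacobian product with $v=\nabla \ell(y)$ indeed matches the gradient of the composition $\ell\circ f$ computed directly:
```
import torch

# f(x1, x2) = (x1 + x2, x1^2 + x2^2) and l(y1, y2) = 2*y1 + 3*y2
x = torch.tensor([1., 2.], requires_grad=True)
y = torch.stack([x.sum(), (x**2).sum()])

v = torch.tensor([2., 3.])          # gradient of l, evaluated at y = f(x)
y.backward(v)
vjp = x.grad.clone()                # vector-Jacobian product v^T J_f(x)

x.grad.zero_()
z = 2 * x.sum() + 3 * (x**2).sum()  # the composition l(f(x))
z.backward()

print(vjp, x.grad)                  # both: tensor([ 8., 14.])
```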
### Application: Gradients of vector-valued functions
If $y=f(x)=(f_1(x), \dots, f_m(x))$ is a vector-valued, multivariate function, one computes the gradients $\nabla f_1(x), \dots, \nabla f_m(x)$ one at a time, each time with a suitable vector $v$. Indeed, fix $i$ between $1$ and $m$, and define $\ell_i(y)=y_i$ the function selecting the $i$-th coordinate of $y=(y_1, \dots, y_m)$, so that
$$f_i(x) = \ell_i(f(x))\,.$$
Noting that
$$\nabla \ell_i(y) = \begin{bmatrix}0&\cdots&0&1&0&\cdots&0\end{bmatrix}$$
where the only non-zero coordinate is in the $i$-th position, then
$$
\begin{align}
\nabla \ell_i(f(x))J_f(x)
& =
\begin{bmatrix}0&\cdots&0&1&0&\cdots&0\end{bmatrix}
\begin{bmatrix}
\frac{\partial f_1}{\partial x_1}(x)&\dots&\frac{\partial f_1}{\partial x_n}(x)\\
\vdots&\ddots&\vdots\\
\frac{\partial f_m}{\partial x_1}(x)&\dots&\frac{\partial f_m}{\partial x_n}(x)
\end{bmatrix}\\
&=
\begin{bmatrix}\frac{\partial f_i}{\partial x_1}(x)&\dots&\frac{\partial f_i}{\partial x_n}(x)\end{bmatrix}
\end{align}
$$
### Application: Derivatives at multiple points
To evaluate the derivative of a scalar, univariate function $f(x)$ at multiple sample points $x^{(1)}, \dots, x^{(N)}$, we create a *new*, vector-valued and multivariate function
$$F(x)=\begin{bmatrix}f\left(x^{(1)}\right)\\ \vdots \\ f\left(x^{(N)}\right)\end{bmatrix}
\qquad\textrm{where}\qquad
x\,=\,(x^{(1)}, \dots, x^{(N)})\,.$$
Thus, its Jacobian is
$$J_F(x)=\begin{bmatrix}
f'(x^{(1)})&&&&\\
&\ddots&&&\\
&&f'(x^{(j)})&&\\
&&&\ddots&\\
&&&&f'(x^{(N)})\end{bmatrix}
$$
where all off-diagonal terms are $0$.
Thus, setting $v=\begin{bmatrix}1\\\vdots\\1\end{bmatrix}$, we obtain the derivatives of $f$ evaluated at the $N$ sample points $x^{(1)}\,, \dots\,, x^{(N)}$:
$$\begin{bmatrix}f'(x^{(1)})&\dots& f'(x^{(j)})&\cdots& f'(x^{(N)})\end{bmatrix}
=\left[1\,,\dots\,,1\right]
J_F(x)\,.$$
The interpretation here is that the resulting row vector contains the derivatives of $f$ at the samples $x^{(1)}$ to $x^{(N)}$.
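The diagonal structure of $J_F$ can be checked explicitly with `torch.autograd.functional.jacobian` (a small sketch, assuming a PyTorch version where this helper is available):
```
import torch
from torch.autograd.functional import jacobian

def F(x):
    return x**2   # applies f(x) = x^2 to each of the N sample points

x = torch.linspace(-2, 2, 5)
J = jacobian(F, x)            # (5, 5) matrix with f'(x^(j)) = 2*x^(j) on the diagonal
print(J)
print(torch.ones(5) @ J)      # v^T J_F(x) with v = (1, ..., 1): the five derivatives
```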
### The trick with `sum()`
The trick of adding `sum()` before calling `backward` differs from the previous application only in the order of the operations performed: the summation happens before the differentiation.
From a scalar, univariate function $y=f(x)$, construct a new scalar, multivariate function
$$G(x_1, \dots, x_N) = f(x_1) + \dots + f(x_N)$$
Using the rules of vector calculus, the gradient of $G$ at an $N$-dimensional point $(x_1, \dots, x_N)$ is
$$
\begin{align}
\nabla G(x) & = \begin{bmatrix}\frac{\partial G}{\partial x_1}(x)&\cdots&\frac{\partial G}{\partial x_N}\end{bmatrix}\\
& = \begin{bmatrix}f'(x_1)&\cdots&f'(x_N)\end{bmatrix}
\end{align}
$$
The interpretation here is that the resulting row vector contains the gradient of $G$ at the $N$-dimensional point $(x_1, \dots, x_N)$.
# Computing gradients with `PyTorch`
A mathematical function is a mapping, which strictly speaking should simply be denoted $f$. Writing $y=f(x)$ merely suggests that the typical input will be denoted $x$ and the corresponding output will be denoted $y$; taken literally, $y=f(x)$ asserts the equality between a value $y$ and the evaluation of the function $f$ at the value $x$.
In `PyTorch`, the primary objects are `tensor`s, which can represent (mathematical) scalars, vectors, and matrices (as well as mathematical tensors). The way a `PyTorch` function calculates a `tensor`, generically denoted `y` and called the output, from another `tensor`, generically denoted `x` and called the input, reflects the action of a mathematical function $f$ (or $y=f(x)$).
Conversely, a mathematical function $f$ can be evaluated at $x$ using `PyTorch`, and furthermore `PyTorch` makes it possible to evaluate the derivative or gradient of $f$ at $x$ via the method `backward`. More specifically, the `backward` function performs vector-Jacobian products, where the vector corresponds to the `gradient` argument. The key point in using `backward` is thus to understand how to choose the `gradient` argument.
The mathematical preliminaries above show how `gradient` should be chosen. There are two key points:
1. `gradient` has the same shape as `y`;
1. `gradient` is populated with `0.`'s and `1.`'s, where the locations of the `1.`'s correspond to the outputs (and hence the sample inputs) of interest; a small helper that builds such a `gradient` tensor is sketched below.
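For instance, the selection pattern used repeatedly in the examples below can be packaged into a small helper (a sketch; `select_output` is not part of PyTorch):
```
import torch

def select_output(y, index, dim=0):
    # Tensor of the same shape as y, filled with 0.'s except for 1.'s along the
    # slice `index` of dimension `dim` -- suitable as the `gradient` argument.
    v = torch.zeros_like(y)
    v.select(dim, index).fill_(1.)
    return v

x = torch.arange(3., requires_grad=True)
y = torch.stack([-x**3, 5*x])
y.backward(select_output(y, 1))   # derivative of f_2(x) = 5x at the three points
print(x.grad)                     # tensor([5., 5., 5.])
```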
# Examples revisited
>Note:
The variable `v` is passed to the `gradient` argument in all our examples.
For the derivative of a scalar, univariate function evaluated at a single point, we choose `gradient=torch.tensor(1.)`, which is the default value:
```
import torch
x = torch.tensor(1., requires_grad=True)
y = x**2
v = torch.ones_like(y)
y.backward()
print(f"Shape of x : {tuple(x.shape)}")
print(f"Shape of y : {tuple(y.shape)}")
print(f"gradient argument : {v}")
```
Note that if `x` is cast as a `1`-dimensional `tensor`, then (in this particular example) `y` is also a `1`-dimensional `tensor`:
```
import torch
x = torch.tensor([1.], requires_grad=True)
y = x**2
v = torch.ones_like(y)
y.backward()
print(f"Shape of x : {tuple(x.shape)}")
print(f"Shape of y : {tuple(y.shape)}")
print(f"gradient argument : {v}")
```
Similarly if `x` is cast as a `2`-dimensional `tensor`:
```
import torch
x = torch.tensor([[1.]], requires_grad=True)
y = x**2
v = torch.ones_like(y)
y.backward()
print(f"Shape of x : {tuple(x.shape)}")
print(f"Shape of y : {tuple(y.shape)}")
print(f"gradient argument : {v}")
```
For the derivative of a scalar, univariate function evaluated at multiple points, `gradient` contains all `1.`'s and has the same shape as `y`:
```
import torch
x = torch.linspace(-1, 1, 5, requires_grad=True)
y = x**2
v = torch.ones_like(y)
y.backward(v)
print(f"Shape of x : {tuple(x.shape)}")
print(f"Shape of y : {tuple(y.shape)}")
print(f"gradient argument : {v}")
```
Casting `x` into a different shape changes the shape of `y`, and thus of `gradient`:
```
import torch
x = torch.linspace(-2, 2, 5).view(-1,1).requires_grad_(True)
y = x**2
v = torch.ones_like(y)
y.backward(v)
print(f"Shape of x : {tuple(x.shape)}")
print(f"Shape of y : {tuple(y.shape)}")
print(f"gradient argument : ")
print(v)
```
For the derivative of a vector-valued, univariate function evaluated at a single point, the derivative of each component function is calculated one at a time, and `gradient` consists of all `0.`'s except for one `1.`, which is located at a position corresponding to the component function. In the example below, the function is in fact *matrix-valued*, namely we calculate the derivative of
$$f(x) = \begin{bmatrix}1&x\\x^2&x^3\\x^4&x^5\end{bmatrix}\qquad \textrm{at}\quad x\,=\,1\,.$$
```
# collapse-show
import torch
import itertools
x = torch.tensor(1., requires_grad=True)
y = torch.stack([x**i for i in range(6)]).view(3,2)
ranges = [range(_) for _ in y.shape]
print("x:")
print(x.data)
print("\ny:")
print(y.data)
derivatives = torch.zeros_like(y)
for i, j in itertools.product(*ranges):
v = torch.zeros_like(y)
v[i,j] = 1.
if x.grad is not None: x.grad.zero_()
y.backward(v, retain_graph=True)
derivatives[i,j] = x.grad.item()
print("\nDerivatives:")
print(derivatives)
```
>Note:
The use of `for` loops can be avoided.
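For instance (a sketch using `torch.autograd.functional.jacobian`, which is one way to avoid the explicit loop), all derivatives of the matrix-valued example above can be obtained in a single call:
```
import torch
from torch.autograd.functional import jacobian

def f(x):
    return torch.stack([x**i for i in range(6)]).view(3, 2)

x = torch.tensor(1.)
print(jacobian(f, x))   # same (3, 2) layout as y, holding all the derivatives at x = 1
```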
For the gradient of a scalar, multivariate function evaluated at a single point, `gradient=torch.tensor(1.)`:
```
import torch
x = torch.tensor([-1., 2.], requires_grad=True)
w = torch.tensor([3., 5.])
y = (x*x*w).sum()
v = torch.ones_like(y)
y.backward()
print(f"Shape of x : {tuple(x.shape)}")
print(f"Shape of y : {tuple(y.shape)}")
print(f"gradient argument : {v}")
```
In the following example, the input `x` is a `(3,2)`-tensor:
```
x = torch.arange(6, dtype=float).view(3,2).requires_grad_(True)
y = (x**2).sum()
v = torch.ones_like(y)
y.backward(v)
print(f"Shape of x: {tuple(x.shape)}")
print(f"Shape of y: {tuple(y.shape)}")
print(f"gradient argument: {v}")
print("x:")
print(x.data)
print("x.grad:")
print(x.grad.data)
```
# 1st Project Assignment - Linear Regression
© Thomas Robert Holy 2019
<br>
Version 1.0.0
<br><br>
Visit me on GitHub: https://github.com/trh0ly
<br>
Kaggle Link: https://www.kaggle.com/c/data-driven-business-analytics/leaderboard
## Basic Settings
### Importing the Libraries
```
import numpy as np
import numpy.ma as ma
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import LabelEncoder
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn import preprocessing
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
import datetime as dt
import seaborn as sns
import sys
```
### Display Settings
```
%%javascript
IPython.OutputArea.auto_scroll_threshold = 9999;
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
pd.set_option('display.width', 350)
plt.rcParams['figure.figsize'] = (12, 6) # makes the plots larger
```
## Defining Helper Functions
### Defining the One-Hot-Encoding Function
```
# Definition of a function that performs the one-hot encoding
#------------
# Arguments:
# - df: DataFrame to be processed
# - df_row: DataFrame column to be processed
# - make_df:
# ---> If False, the transformed values are returned as an array
# ---> If True, a DataFrame is returned
# - overwrite_inv_encoded:
# ---> If None, the column names of the returned DataFrame are derived from the
#      feature values in the original DataFrame
# ---> If != None, the column names are overwritten manually
#------------
def onehot_encoder_func(df, df_row, make_df=False, overwrite_inv_encoded=None):
    values = np.array(df[df_row]) # Convert the DataFrame column into a NumPy array
    label_encoder = LabelEncoder() # Define the label encoder
    integer_encoded = label_encoder.fit_transform(values.ravel()) # Fit the label encoder on the values
    onehot_encoder = OneHotEncoder(sparse=False, handle_unknown='ignore') # Define the one-hot encoder
    integer_encoded = integer_encoded.reshape(len(integer_encoded), 1) # Reshape the integer-encoded values
    onehot_encoded = onehot_encoder.fit_transform(integer_encoded) # Fit the one-hot encoder on the values
    """
    If make_df == False:
    return the one-hot encoded
    values as an array
    """
    if make_df == False:
        return onehot_encoded # Return the one-hot encoded array
    """
    If make_df == True:
    return the one-hot encoded
    values as a DataFrame
    """
    if make_df == True:
        """
        If overwrite_inv_encoded == None:
        the DataFrame columns are not
        overwritten manually. The column names
        are derived from the feature values
        of the original DataFrame.
        """
        if overwrite_inv_encoded == None:
            counter, array = 0, [] # Initialize the counter and the main array
            # For loop that builds an array of arrays, each containing
            # a single 1 at position i, for i = 0 to len(onehot_encoded[0]) - 1
            for i in range(0, len(onehot_encoded[0])):
                temp = [0] * len(onehot_encoded[0]) # Temporary array
                temp[i] = 1 # Entry i of the temporary array is set to 1
                array.append(temp) # The generated array is appended to the main array
            inv_encoded = onehot_encoder.inverse_transform(array) # Inverse transform of the one-hot encoder
            inv_encoded = label_encoder.inverse_transform(inv_encoded.astype(int).ravel()) # Inverse transform of the label encoder
            # Build the DataFrame with the one-hot encoded values and the original feature values as column names
            encoded_df = pd.DataFrame(onehot_encoded, dtype=float, columns=list(inv_encoded), index=df.index)
            return encoded_df # Return the DataFrame
        """
        If overwrite_inv_encoded != None:
        the DataFrame columns are overwritten
        manually, provided the passed array
        contains enough column names.
        """
        if overwrite_inv_encoded != None:
            # Check whether the length of the manually supplied array equals
            # the number of columns resulting from the one-hot encoded values.
            case = len(overwrite_inv_encoded) == (len(onehot_encoded[0]))
            # If case == True the columns are overwritten as requested
            if case == True:
                # Build the DataFrame with the one-hot encoded values and the manually supplied column names
                encoded_df = pd.DataFrame(onehot_encoded, dtype=float, columns=list(overwrite_inv_encoded), index=df.index)
                return encoded_df # Return the DataFrame
            # ERROR MESSAGE if "overwrite_inv_encoded" is not long enough
            else:
                ERROR = 'ERROR Len of "overwrite_inv_encoded" has to be {}!'.format(len(onehot_encoded[0]))
                return ERROR
    # ERROR MESSAGE if "make_df" does not receive a valid argument
    else:
        print('ERROR: "make_df" needs an Argument!')
```
### Defining a Function for the Regression Model
```
# Definition of a function that runs the regression with polynomial features of degree n
#------------
# Arguments:
# - n: degree passed to PolynomialFeatures (highest power of the explanatory variables)
# - manu_data:
# ---> If None, all columns of the DataFrame are included
# ---> If a list of column names, only this manual selection is used
# - log:
# ---> If True, the intermediate steps are printed
# - plt:
# ---> If True, a histogram of the "Preis" column is plotted
# - submit:
# ---> If True, the predicted values are saved to a .csv file
#------------
def regression(n, manu_data=None, log=False, plt=False, submit=False):
    #--------------------------------
    start = dt.datetime.now() # Record the start time
    #--------------------------------
    train_data = pd.read_csv('train.csv', index_col=0) # Read the dataset
    train_data['Verhandlungsbasis'].fillna(0.0, inplace=True) # Fill NaN values in the "Verhandlungsbasis" column
    train_data['Kilometer'] = train_data['Kilometer'] / 1000.0 # Scale the "Kilometer" column (assigned; the bare division had no effect)
    #--------------------------------
    """
    If log == True, print
    the cleaned DataFrame
    """
    if log == True:
        print(train_data.head())
    #--------------------------------
    # Run the one-hot encoder over the columns
    # "Privatverkauf", "Finanzierung" and "Hersteller"
    onehot_encoded = onehot_encoder_func(train_data, 'Privatverkauf', make_df=False)
    train_data['Privatverkauf'] = onehot_encoded
    onehot_encoded = onehot_encoder_func(train_data, 'Finanzierung', make_df=False)
    train_data['Finanzierung'] = onehot_encoded
    onehot_encoded = onehot_encoder_func(train_data, 'Hersteller', make_df=True)
    #--------------------------------
    # Drop the "Hersteller" column, since it is replaced by the one-hot encoding,
    # and join the DataFrame resulting from the one-hot encoding
    train_data = train_data.drop('Hersteller', axis=1)
    train_data = train_data.join(onehot_encoded)
    #--------------------------------
    """
    If log == True, print
    the transformed DataFrame
    """
    if log == True:
        print(train_data.head())
    #--------------------------------
    # Build and fit the (optionally restricted) model
    # Create a pipeline that is used for the prediction
    model = Pipeline([('add_x_square', PolynomialFeatures(degree=n)), # degree = n adds terms up to X^n to the feature matrix
                      ('linear_regression', LinearRegression()),]) # the regression model
    #--------------------------------
    """
    If manu_data == None, fit
    the model on all columns of
    the loaded DataFrame
    """
    if manu_data == None:
        model.fit(train_data.drop('Preis',axis='columns'),train_data.Preis) # fit all parameters in the pipeline
    """
    If manu_data != None, fit
    the model on the manually
    selected columns
    """
    if manu_data != None:
        case_1 = len(manu_data) <= len(train_data.columns)
        case_2 = len(manu_data) > 2
        """
        If case_1 and case_2 hold,
        the model is fitted on the
        manually selected columns
        """
        if case_1 == True and case_2 == True:
            X = train_data[manu_data]
            model.fit(X.drop('Preis',axis='columns'),X.Preis) # fit the manually specified parameters in the pipeline
        # If the conditions are not met, an error message is printed
        else:
            print('ERROR Len of "manu_data" has to be 2 < len(manu_data) < {}!'.format(len(train_data.columns)))
            sys.exit() # called, so execution actually stops here
    #--------------------------------
    # Apply the trained model to the training data
    """
    If manu_data == None, the prediction
    is made on the basis of all columns
    present in the dataset
    """
    if manu_data == None:
        pred_train = model.predict(train_data.drop('Preis',axis='columns'))
    """
    If manu_data != None, the
    prediction is made on the basis
    of the manually selected data
    """
    if manu_data != None:
        pred_train = model.predict(X.drop('Preis',axis='columns'))
    #--------------------------------
    # Compute the MSE between the predicted price and the actual price
    mse = mean_squared_error(train_data['Preis'], pred_train)
    #--------------------------------
    df_test = pd.read_csv('test.csv', index_col=0) # Read the test dataset
    df_test['Verhandlungsbasis'].fillna(0.0, inplace=True) # Fill NaN values in the "Verhandlungsbasis" column
    df_test['Kilometer'] = df_test['Kilometer'] / 1000.0 # Scale the "Kilometer" column (assigned; the bare division had no effect)
    #--------------------------------
    # Run the one-hot encoder over the columns
    # "Privatverkauf", "Finanzierung" and "Hersteller"
    onehot_encoded = onehot_encoder_func(df_test, 'Privatverkauf', make_df=False)
    df_test['Privatverkauf'] = onehot_encoded
    onehot_encoded = onehot_encoder_func(df_test, 'Finanzierung', make_df=False)
    df_test['Finanzierung'] = onehot_encoded
    onehot_encoded = onehot_encoder_func(df_test, 'Hersteller', make_df=True)
    #--------------------------------
    # Drop the "Hersteller" column, since it is replaced by the one-hot encoding,
    # and join the DataFrame resulting from the one-hot encoding
    df_test = df_test.drop('Hersteller', axis=1)
    df_test = df_test.join(onehot_encoded)
    """
    If manu_data != None, the test
    dataset is reduced to the manually
    selected columns
    """
    if manu_data != None:
        manu_data.remove('Preis')
        df_test = df_test[manu_data]
        manu_data.append('Preis')
    #--------------------------------
    pred = model.predict(df_test) # Apply the model to the test data
    df_test['Preis'] = pred # Fill the "Preis" column of the test dataset with the predicted values
    #--------------------------------
    end = dt.datetime.now() # Record the end time
    print('Elapsed time: {}.'.format(end - start))
    print("Mean squared error of " + str(n) + " degree model: %.2f \n" % mse)
    #--------------------------------
    """
    If log == True, print
    the transformed DataFrame
    """
    if log == True:
        print(df_test.head())
    """
    If submit == True, save the
    predicted prices to a .csv file
    """
    if submit == True:
        df_submission = df_test['Preis'].reset_index()
        df_submission.to_csv('./submission_' + str(n) + '-degree_model.csv', index=False)
        """
        If log == True, print
        the submission DataFrame
        """
        if log == True:
            print(df_submission.head())
    #--------------------------------
    """
    If plt == True, a histogram of
    the "Preis" column is plotted
    """
    if plt == True:
        hist_train_data = df_test.hist(column=['Preis'])
    #--------------------------------
    return mse # Return the computed MSE
```
## Applying a (Multivariate) Linear Regression
### Running the "regression" Function in a For Loop and Recording the MSEs
- 0 = simple linear regression
<br>
- 1 = regression model with the explanatory variables X, X^2
<br>
- 2 = regression model with the explanatory variables X through X^3
<br>
- ...
<br>
- n = regression model with the explanatory variables X through X^(n+1) (see the short `PolynomialFeatures` sketch after this list)
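As an illustration (a small sketch, independent of the car dataset), the `add_x_square` step simply expands each sample with the corresponding polynomial terms:
```
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# For degree=2 and two input features (a, b), PolynomialFeatures returns
# [1, a, b, a^2, a*b, b^2] for every sample (the leading 1 is the bias column).
X_toy = np.array([[2.0, 3.0]])
print(PolynomialFeatures(degree=2).fit_transform(X_toy))
# [[1. 2. 3. 4. 6. 9.]]
```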
```
mse_liste = [] # List that stores the MSEs
# For loop that runs the "regression" function n times and
# stores the resulting MSE in the "mse_liste" list each time
for i in range(1,8 + 1):
mse = regression(i,log=False,submit=False)
mse_liste.append(mse)
#--------------------------------
# Print the results
print('The following MSEs were computed:\n{}'.format(mse_liste))
print('The smallest MSE has index {} (i.e. {} degrees) and equals {}.'.format(np.argmin(mse_liste), int(np.argmin(mse_liste) + 1), min(mse_liste)))
```
### Run and Submit of the Model with the Smallest MSE
```
"""
Kaggle Score: 0.92166
"""
_ = regression(3, log=True, submit=True, plt=True)
```
## Further Considerations
### Heatmap for Manually Selecting Relevant Features
The heatmap shows how strongly each feature correlates with the price.
Based on this, we can decide which features should be used to predict the price.
```
train_data = pd.read_csv('train.csv', index_col=0) # Read the dataset
train_data['Verhandlungsbasis'].fillna(0.0, inplace=True) # Fill NaN values
#--------------------------------
# Apply the one-hot encoder function
onehot_encoded = onehot_encoder_func(train_data, 'Privatverkauf', make_df=False)
train_data['Privatverkauf'] = onehot_encoded
onehot_encoded = onehot_encoder_func(train_data, 'Finanzierung', make_df=False)
train_data['Finanzierung'] = onehot_encoded
onehot_encoded = onehot_encoder_func(train_data, 'Hersteller', make_df=True)
train_data = train_data.drop('Hersteller', axis=1)
train_data = train_data.join(onehot_encoded)
#--------------------------------
# Plot the heatmap
X = train_data.drop('Preis',axis='columns') # Independent columns
y = train_data['Preis'] # Target column
corrmat = train_data.corr() # Get correlations of each features in dataset
top_corr_features = corrmat.index
plt.figure(figsize=(10,10)) # Figure size
ax = sns.heatmap(train_data[top_corr_features].corr(),annot=True,cmap="RdYlGn") # Plot heat map
bottom, top = ax.get_ylim() # Get the y-axis limits
ax.set_ylim(bottom + 0.5, top - 0.5) # Widen the limits (workaround for cropped heatmap rows)
```
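For a quicker, text-only view of the same information, the correlations with the target column can also be sorted directly (a small sketch reusing the transformed `train_data` from the cell above):
```
# Correlation of every feature with the target "Preis", sorted by strength
print(train_data.corr()['Preis'].sort_values(ascending=False))
```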
### Running the "regression" Function in a For Loop and Recording the MSEs with a "Manual" Feature Selection
```
# Selection of relevant features based on the heatmap (own choice)
data = ['Kilometer', 'Zylinder', 'Liter', 'Tueren', 'Verhandlungsbasis',
'BMW', 'Daimler', 'Fiat', 'Ford', 'Volkswagen', 'Preis']
mse_liste2 = [] # List that stores the MSEs
# For loop that runs the "regression" function n times and
# stores the resulting MSE in the "mse_liste2" list each time
for i in range(1,8 + 1):
mse = regression(i, data, log=False, submit=False, plt=False)
mse_liste2.append(mse)
#--------------------------------
# Print the results
print('The following MSEs were computed:\n{}'.format(mse_liste2))
print('The smallest MSE has index {} (i.e. {} degrees) and equals {}.'.format(np.argmin(mse_liste2), int(np.argmin(mse_liste2) + 1), min(mse_liste2)))
print('In the first model, which was not manually restricted, the smallest MSE was {}.'.format(min(mse_liste)))
```
### Run and Submit of the Model with the Best Fit
```
"""
Kaggle Score: 0.97180
- Larger MSE than before
- Better fit on the test dataset
--> The previous model was overfitted
"""
_ = regression(2, data, log=True, submit=True, plt=True)
```
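One way to see this overfitting without a Kaggle submission is to compare cross-validated MSEs on the training data instead of the in-sample MSE (a sketch, assuming the one-hot encoded `train_data` from the heatmap cell above and the same pipeline layout as in `regression`):
```
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Compare the degree-2 and degree-3 pipelines by 5-fold cross-validated MSE
for degree in (2, 3):
    pipe = Pipeline([('add_x_square', PolynomialFeatures(degree=degree)),
                     ('linear_regression', LinearRegression())])
    scores = cross_val_score(pipe, train_data.drop('Preis', axis='columns'),
                             train_data.Preis, scoring='neg_mean_squared_error', cv=5)
    print(degree, -scores.mean())
```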
## Linear Lasso Regression
### Importing Additional Packages
```
from sklearn import preprocessing
from sklearn import linear_model
from sklearn.model_selection import GridSearchCV
```
### Defining Additional Helper Functions
#### Function for Reading, Cleaning, and One-Hot Encoding the Data
```
# Definition of a function that reads, cleans, and one-hot encodes the data
#------------
# Arguments:
# - manu_data:
# ---> If = None: all loaded columns are kept in the DataFrame
# ---> If != None: only the given selection of columns is kept in the DataFrame
#------------
def read_and_make_data_fit(manu_data=None):
    #-------------------------------------------------------------------------
    # Training dataset
    train_data = pd.read_csv('train.csv', index_col=0) # Read the dataset
    train_data['Verhandlungsbasis'].fillna(0.0, inplace=True) # Fill NaN values in the "Verhandlungsbasis" column
    train_data['Kilometer'] = train_data['Kilometer'] / 1000.0 # Scale the "Kilometer" column (assigned; the bare division had no effect)
    #--------------------------------
    # One-hot encoding
    onehot_encoded = onehot_encoder_func(train_data, 'Privatverkauf', make_df=False)
    train_data['Privatverkauf'] = onehot_encoded
    onehot_encoded = onehot_encoder_func(train_data, 'Finanzierung', make_df=False)
    train_data['Finanzierung'] = onehot_encoded
    onehot_encoded = onehot_encoder_func(train_data, 'Hersteller', make_df=True)
    #--------------------------------
    # Join the DataFrames
    train_data = train_data.drop('Hersteller', axis=1)
    train_data = train_data.join(onehot_encoded)
    #--------------------------------
    # Optionally select columns
    if manu_data != None:
        train_data = train_data.filter(manu_data, axis=1)
    #-------------------------------------------------------------------------
    # Test dataset
    df_test = pd.read_csv('test.csv', index_col=0) # Read the test dataset
    df_test['Verhandlungsbasis'].fillna(0.0, inplace=True) # Fill NaN values in the "Verhandlungsbasis" column
    df_test['Kilometer'] = df_test['Kilometer'] / 1000.0 # Scale the "Kilometer" column (assigned; the bare division had no effect)
    #--------------------------------
    # One-hot encoding
    onehot_encoded = onehot_encoder_func(df_test, 'Privatverkauf', make_df=False)
    df_test['Privatverkauf'] = onehot_encoded
    onehot_encoded = onehot_encoder_func(df_test, 'Finanzierung', make_df=False)
    df_test['Finanzierung'] = onehot_encoded
    onehot_encoded = onehot_encoder_func(df_test, 'Hersteller', make_df=True)
    #--------------------------------
    # Join the DataFrames
    df_test = df_test.drop('Hersteller', axis=1)
    df_test = df_test.join(onehot_encoded)
    #--------------------------------
    # Optionally select columns
    if manu_data != None:
        df_test = df_test.filter(manu_data, axis=1)
    return train_data, df_test
```
#### Function for Building the Model
```
# Definition of a function that runs a grid search and returns the score of the best hyperparameter alpha (= lambda)
#------------
# Arguments:
# - alpha_space: range of alpha parameters passed to the grid search
# - degrees: degree controlling the model complexity
# - cvs: number of cross-validation folds
# - threads: number of computations run in parallel
# - manu_data:
# ---> If = None: all loaded columns are kept in the DataFrame
# ---> If != None: only the given selection of columns is kept in the DataFrame
# - log:
# ---> If True, the intermediate steps are printed
# - plt:
# ---> If True, a histogram of the "Preis" column is plotted
# - submit:
# ---> If True, the predicted values are saved to a .csv file
#------------
def lasso_model_grid_search(alpha_space, degrees=2, cvs=5, threads=8, manu_data=None, submit=False, plt=False, log=False):
    start = dt.datetime.now() # Record the start time
    #--------------------------------
    # Read the data
    trainData, testData = read_and_make_data_fit(manu_data)
    #--------------------------------
    # Build the pipeline
    lasso_pipe_loop = Pipeline([
        ('add_x_square', PolynomialFeatures(degree=degrees,include_bias=False)), # degree = degrees adds the higher-order terms to the feature matrix
        ('scaler', preprocessing.StandardScaler()),
        ('lasso_regression', linear_model.Lasso(fit_intercept=True, max_iter = 100000))]) # the regression model
    #--------------------------------
    # Define the lambda / alpha grid
    lasso_pipe_parameter = [
        {'lasso_regression__alpha': alpha_space}]
    #--------------------------------
    # Grid search
    grid_search_lasso = GridSearchCV(
        estimator=lasso_pipe_loop,
        param_grid=lasso_pipe_parameter,
        scoring='neg_mean_squared_error',
        cv=cvs,
        n_jobs=threads,
        iid=False)
    #--------------------------------
    # Fit on the training dataset
    _ = grid_search_lasso.fit(trainData.drop('Preis',axis='columns'),trainData.Preis)
    #--------------------------------
    # Apply the model to the test dataset and prepare the submission
    pred = grid_search_lasso.predict(testData)
    testData['Preis'] = pred
    df_submission = testData['Preis'].reset_index()
    end = dt.datetime.now() # Record the end time
    print('Elapsed time: {}.'.format(end - start))
    #--------------------------------
    if submit == True:
        df_submission.to_csv('./submission_lambda_' + str(degrees) + '-degrees.csv', index=False)
    #--------------------------------
    # Print the best hyperparameter and optionally plot
    print("Best hyperparameter lambda: {} ".format(grid_search_lasso.best_params_['lasso_regression__alpha'] ))
    print("Score (negative MSE) for the best hyperparameter: {} \n".format(grid_search_lasso.best_score_))
    if log == True:
        print(df_submission.head())
    #--------------------------------
    if plt == True:
        #_ = pd.DataFrame(grid_search_lasso.cv_results_)[['rank_test_score']].plot()
        hist_train_data = testData.hist(column=['Preis'])
    return grid_search_lasso.best_score_
```
### Running the "lasso_model_grid_search" Function in a For Loop and Recording the MSEs
```
alpha_range = np.logspace(start=0, stop=10, num=1000)
cvsss = 5
data = ['Kilometer', 'Zylinder', 'Liter', 'Tueren', 'Verhandlungsbasis',
'BMW', 'Daimler', 'Fiat', 'Ford', 'Volkswagen', 'Preis']
lasso_mse_liste = []
for i in range(1,3 + 1):
mse = lasso_model_grid_search(alpha_space=alpha_range, degrees=i, cvs=cvsss, threads=10, manu_data=data, submit=False, plt=False, log=False)
lasso_mse_liste.append(mse)
#--------------------------------
# Print the results
print('The following (negative) MSE scores were computed:\n{}'.format(lasso_mse_liste))
print('The smallest MSE has index {} (i.e. {} degrees) and equals {}.'.format(np.argmax(lasso_mse_liste), int(np.argmax(lasso_mse_liste) + 1), max(lasso_mse_liste)))
```
### Run and Submit of the Model with the Best Fit
```
"""
Kaggle Score: 0.97154
-> Very close to the top score of 0.97180
"""
_ = lasso_model_grid_search(alpha_space=alpha_range, degrees=2, cvs=cvsss, threads=10, manu_data=data, submit=True, plt=True, log=True)
```
|
github_jupyter
|
import numpy as np
import numpy.ma as ma
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import LabelEncoder
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn import preprocessing
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
import datetime as dt
import seaborn as sns
import sys
%%javascript
IPython.OutputArea.auto_scroll_threshold = 9999;
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
pd.set_option('display.width', 350)
plt.rcParams['figure.figsize'] = (12, 6) # macht die Plots größer
# Definition einer Funktion, die das One-Hot-Encoding durchführt
#------------
# Argumente:
# - df: DataFrame welcher bearbeitet werden soll
# - df_row: DataFrame-Spalte die bearbeitet werden soll
# - make_df:
# ---> Wenn False, dann werden die transformierten Werte als Array zurückgegeben
# ---> Wenn True, dann wird ein DataFrame zurückgegeben
# - overwrite_inv_encoded:
# ---> Wenn None, dann werden Spaltennamen des DataFrames auf Grundlage der Merkmals-
# ausprägungen im originalen DataFrame ermittelt
# ---> Wenn != None, dann werden die Spalten manuell überschrieben
#------------
def onehot_encoder_func(df, df_row, make_df=False, overwrite_inv_encoded=None):
values = np.array(df[df_row]) # Transformation der DataFrame-Saplte in ein Numpy Array
label_encoder = LabelEncoder() # Definition des Label-Encoders
integer_encoded = label_encoder.fit_transform(values.ravel()) # Label-Encoder auf Values fitten
onehot_encoder = OneHotEncoder(sparse=False, handle_unknown='ignore') # Definition des One-Hot-Encoders
integer_encoded = integer_encoded.reshape(len(integer_encoded), 1) # Reshape der Integer-Encoded Values
onehot_encoded = onehot_encoder.fit_transform(integer_encoded) # One-Hot-Encoder auf Values fitten
"""
Wenn make_df == False:
Rückgabe der One-Hot-Encoded
Values als Array
"""
if make_df == False:
return onehot_encoded # Rückgabe One-Hot-Endcoded Array
"""
Wenn make_df == True:
Rückgabe der One-Hot-Encoded
Values als DataFrame
"""
if make_df == True:
"""
Wenn overwrite_inv_encoded == None:
Spalten des DataFrame werden nicht
manuell überschrieben. Die Spaltennamen
werden aus den Mermalsausprägungen des
originalen DataFrame gewonnen.
"""
if overwrite_inv_encoded == None:
counter, array = 0, [] # Zähler, Main-Array werden inizialisiert
# For-Schleife, welche ein Arrays mit Arrays generiert, welche
# von i=0 bis i=len(onehot_encoded) jeweils eine 1 enthalten
for i in range(0, len(onehot_encoded[0])):
temp = [0] * len(onehot_encoded[0]) # Temporäres Array
temp[i] = 1 # Variable i im temporären Array wird 1 gesetzt
array.append(temp) # Generiertes Array wird dem Main-Array angefügt
inv_encoded = onehot_encoder.inverse_transform(array) # Inverse Transformation One-Hot-Encoder
inv_encoded = label_encoder.inverse_transform(inv_encoded.astype(int).ravel()) # Inverse Transformation Label-Encoder
# Generierung des DataFrames mit den One-Hot-Encoded Values und originalen Merkmalsausprägungen als Spalten-Namen
encoded_df = pd.DataFrame(onehot_encoded, dtype=float, columns=list(inv_encoded), index=df.index)
return encoded_df # Rückgabe DataFrame
"""
Wenn overwrite_inv_encoded != None:
Spalten des DataFrame werden manuell
überschrieben sofern Übergebene Array
genügend Spaltennamen enthält.
"""
if overwrite_inv_encoded != None:
# Prüfung ob die Länge des manuell festgelegten Arrays gleich
# der Länge eines Arrays ist, welches aus den One-Hot-Endcoded
# Values resultiert.
case = len(overwrite_inv_encoded) == (len(onehot_encoded[0]))
"""
# Sofern case = True werden die Spalten
wie gewünscht überschrieben
"""
if case == True:
# Generierung des DataFrames mit den One-Hot-Encoded Values und manuell überschriebenen Spalten-Namen
encoded_df = pd.DataFrame(onehot_encoded, dtype=float, columns=list(overwrite_inv_encoded), index=df.index)
return encoded_df # Rückgabe DataFrame
"""
ERROR MESSAGE sofern "overwrite_inv_encoded"
nicht lang genug.
"""
else:
ERROR = 'ERROR Len of "overwrite_inv_encoded" have to be {}!'.format(len(onehot_encoded[0]))
return ERROR
"""
ERROR MESSAGE sofern "make_df"
kein gültiges Argument erhält.
"""
else:
print('ERROR: "make_df" needs an Argument!')
# Definition einer Funktion welche die Regression mit n erklärenden Variablen durchführt
#------------
# Argumente:
# - degrees: Anzahl erklärender Variablen
# - manuell:
# ---> Wenn False, dann Einbeziehung aller Spalten des DataFrames
# ---> Wenn True, dann wird manuelle Auswahl aktiviert
# - log:
# ---> Wenn True, dann ausgabe der Zwischenschritte
# - plt:
# ---> Wenn True, damm ausgabe eines Histograms für die Spalte Preis
# - submit:
# --->Wenn True, dann werden berechnetet Werte in .csv gespeichert
#------------
def regression(n, manu_data=None, log=False, plt=False, submit=False):
#--------------------------------
start = dt.datetime.now() # Startzeit ermitteln
#--------------------------------
train_data = pd.read_csv('train.csv', index_col=0) # Datensatz einlesen
train_data['Verhandlungsbasis'].fillna(0.0, inplace=True) # NaN-Werte in der Spalte "Verhandlungsbasis" füllen
train_data['Kilometer'] / 1000.0 # Spalte "Kilometer" teilen
#--------------------------------
"""
Wenn log == True, dann
Ausgabe bereinigter Dataframe
"""
if log == True:
print(train_data.head())
#--------------------------------
# One-Hot-Encoder über die Spalten
# "Privatverkauf", "Finanzierung" und "Hersteller" jagen
onehot_encoded = onehot_encoder_func(train_data, 'Privatverkauf', make_df=False)
train_data['Privatverkauf'] = onehot_encoded
onehot_encoded = onehot_encoder_func(train_data, 'Finanzierung', make_df=False)
train_data['Finanzierung'] = onehot_encoded
onehot_encoded = onehot_encoder_func(train_data, 'Hersteller', make_df=True)
#--------------------------------
# Spalte "Hersteller" aus DataFrame entfernen, da durch One-Hot-Encoder ersetzt
# und aus One-Hot-Encoding resultierender DataFrame anfügen
train_data = train_data.drop('Hersteller', axis=1)
train_data = train_data.join(onehot_encoded)
#--------------------------------
"""
Wenn log == True, dann
Ausgabe transformierter Dataframe
"""
if log == True:
print(train_data.head())
#--------------------------------
# (Manuelles) Modell Erstellen und fitten
# Erstellen einer Pipeline, die für die Prognose verwendet wird
model = Pipeline([('add_x_square', PolynomialFeatures(degree=n)), # degree = n fügt X^n der Featurematrix hinzu
('linear_regression', LinearRegression()),]) # das Regressionsmodell
#--------------------------------
"""
Wenn manu_data == None, dann
fitten des Modells auf alle Spalten
des eingelesenen DataFrames
"""
if manu_data == None:
model.fit(train_data.drop('Preis',axis='columns'),train_data.Preis) # fitten aller Parameter in der Pipeline
"""
Wenn manu_data != None, dann
fitten des Modells auf die manuell
ausgewählten Spalten
"""
if manu_data != None:
case_1 = len(manu_data) <= len(train_data.columns)
case_2 = len(manu_data) > 2
"""
Sofern case_1 und case_2 die Bedigungen
erfüllen, wird das Modell auf die manuell
ausgewählten Spalten gefittet
"""
if case_1 == True and case_2 == True:
X = train_data[manu_data]
            model.fit(X.drop('Preis',axis='columns'),X.Preis)  # fit the manually specified parameters in the pipeline
"""
Sofern die Bedinungen nicht
erfüllt werden, wird eine
Fehlermeldung zurückgegeben
"""
else:
            print('ERROR: len of "manu_data" has to satisfy 2 < len(manu_data) <= {}!'.format(len(train_data.columns)))
            return None  # stop here, since no valid column selection was made
#--------------------------------
    # Apply the trained model to the training data
    # If manu_data is None, the prediction is based on all columns present in the data set
if manu_data == None:
pred_train = model.predict(train_data.drop('Preis',axis='columns'))
"""
Sofern manu_data != None wird
Prognose auf Basis der manuell
ausgewählten Daten gefittet
"""
if manu_data != None:
pred_train = model.predict(X.drop('Preis',axis='columns'))
#--------------------------------
    # Compute the MSE between the predicted and the actual price
mse = mean_squared_error(train_data['Preis'], pred_train)
#--------------------------------
    df_test = pd.read_csv('test.csv', index_col=0)  # read the test data set
    df_test['Verhandlungsbasis'].fillna(0.0, inplace=True)  # fill NaN values in the "Verhandlungsbasis" column
    df_test['Kilometer'] = df_test['Kilometer'] / 1000.0  # scale the "Kilometer" column down by a factor of 1000
#--------------------------------
    # Run the one-hot encoder over the columns
    # "Privatverkauf", "Finanzierung" and "Hersteller"
onehot_encoded = onehot_encoder_func(df_test, 'Privatverkauf', make_df=False)
df_test['Privatverkauf'] = onehot_encoded
onehot_encoded = onehot_encoder_func(df_test, 'Finanzierung', make_df=False)
df_test['Finanzierung'] = onehot_encoded
onehot_encoded = onehot_encoder_func(df_test, 'Hersteller', make_df=True)
#--------------------------------
# Spalte "Hersteller" aus DataFrame entfernen, da durch One-Hot-Encoder ersetzt
# und aus One-Hot-Encoding resultierender DataFrame anfügen
df_test = df_test.drop('Hersteller', axis=1)
df_test = df_test.join(onehot_encoded)
"""
Sofern manu_data != None wird
Testdatensatz nur auf alle manuell
ausgewählen Spalten reduziert
"""
if manu_data != None:
manu_data.remove('Preis')
df_test = df_test[manu_data]
manu_data.append('Preis')
#--------------------------------
    pred = model.predict(df_test)  # apply the model to the test data
    df_test['Preis'] = pred  # fill the "Preis" column of the test data with the predicted values
#--------------------------------
    end = dt.datetime.now()  # record end time
    print('Elapsed time: {}.'.format(end - start))
print("Mean squared error of " + str(n) + " degree model: %.2f \n" % mse)
#--------------------------------
"""
Wenn log == True, dann
Ausgabe transformierter Dataframe
"""
if log == True:
print(df_test.head())
"""
Wenn submit == True, dann
Speicherung der prognostizierten
Preise in eienr .csv-Datei
"""
if submit == True:
df_submission = df_test['Preis'].reset_index()
df_submission.to_csv('./submission_' + str(n) + '-degree_model.csv', index=False)
"""
Wenn log == True, dann
Ausgabe transformierter Dataframe
"""
if log == True:
print(df_submission.head())
#--------------------------------
"""
Wenn plt == True, dann
Ausgabe wird ein Histogramm der
Spalte "Preis" geplottet
"""
if plt == True:
hist_train_data = df_test.hist(column=['Preis'])
#--------------------------------
    return mse  # return the computed MSE
mse_liste = []  # list that stores the MSEs
# Loop that runs the "regression" function for degrees 1 to 8 and
# stores each resulting MSE in the list "mse_liste"
for i in range(1,8 + 1):
mse = regression(i,log=False,submit=False)
mse_liste.append(mse)
#--------------------------------
# Print the results
print('The following MSEs were computed:\n{}'.format(mse_liste))
print('The smallest MSE has index {} (i.e. {} degrees) and equals {}.'.format(np.argmin(mse_liste), int(np.argmin(mse_liste) + 1), min(mse_liste)))
# Kaggle score: 0.92166
_ = regression(3, log=True, submit=True, plt=True)
train_data = pd.read_csv('train.csv', index_col=0)  # read the data set
train_data['Verhandlungsbasis'].fillna(0.0, inplace=True)  # fill NaN values
#--------------------------------
# Apply the one-hot encoder function
onehot_encoded = onehot_encoder_func(train_data, 'Privatverkauf', make_df=False)
train_data['Privatverkauf'] = onehot_encoded
onehot_encoded = onehot_encoder_func(train_data, 'Finanzierung', make_df=False)
train_data['Finanzierung'] = onehot_encoded
onehot_encoded = onehot_encoder_func(train_data, 'Hersteller', make_df=True)
train_data = train_data.drop('Hersteller', axis=1)
train_data = train_data.join(onehot_encoded)
#--------------------------------
# Plot a heatmap
X = train_data.drop('Preis',axis='columns')  # independent columns
y = train_data['Preis']  # target column
corrmat = train_data.corr()  # correlations between the features in the data set
top_corr_features = corrmat.index
plt.figure(figsize=(10,10))  # figure size
ax = sns.heatmap(train_data[top_corr_features].corr(),annot=True,cmap="RdYlGn")  # plot heat map
bottom, top = ax.get_ylim()  # get the y-axis limits
ax.set_ylim(bottom + 0.5, top - 0.5)  # widen the limits so the first and last rows are not cut off
# Selection of relevant features based on the heatmap (own choice)
data = ['Kilometer', 'Zylinder', 'Liter', 'Tueren', 'Verhandlungsbasis',
'BMW', 'Daimler', 'Fiat', 'Ford', 'Volkswagen', 'Preis']
mse_liste2 = []  # list that stores the MSEs
# Loop that runs the "regression" function for degrees 1 to 8 and
# stores each resulting MSE in the list "mse_liste2"
for i in range(1,8 + 1):
mse = regression(i, data, log=False, submit=False, plt=False)
mse_liste2.append(mse)
#--------------------------------
# Print the results
print('The following MSEs were computed:\n{}'.format(mse_liste2))
print('The smallest MSE has index {} (i.e. {} degrees) and equals {}.'.format(np.argmin(mse_liste2), int(np.argmin(mse_liste2) + 1), min(mse_liste2)))
print('In the first model, which was not restricted manually, the smallest MSE was {}.'.format(min(mse_liste)))
# Kaggle score: 0.97180
# - larger training MSE than before
# - better fit on the test data set
# --> the previous model was overfitted
_ = regression(2, data, log=True, submit=True, plt=True)
from sklearn import preprocessing
from sklearn import linear_model
from sklearn.model_selection import GridSearchCV
# Definition of a function that reads, cleans and one-hot encodes the data
#------------
# Arguments:
# - manu_data:
# ---> If None: all loaded columns are kept in the DataFrame
# ---> If not None: only the selected columns are kept in the DataFrame
#------------
def read_and_make_data_fit(manu_data=None):
#-------------------------------------------------------------------------
    # Training data set
    train_data = pd.read_csv('train.csv', index_col=0)  # read the data set
    train_data['Verhandlungsbasis'].fillna(0.0, inplace=True)  # fill NaN values in the "Verhandlungsbasis" column
    train_data['Kilometer'] = train_data['Kilometer'] / 1000.0  # scale the "Kilometer" column down by a factor of 1000
#--------------------------------
# One-Hot-Encoding
onehot_encoded = onehot_encoder_func(train_data, 'Privatverkauf', make_df=False)
train_data['Privatverkauf'] = onehot_encoded
onehot_encoded = onehot_encoder_func(train_data, 'Finanzierung', make_df=False)
train_data['Finanzierung'] = onehot_encoded
onehot_encoded = onehot_encoder_func(train_data, 'Hersteller', make_df=True)
#--------------------------------
    # Join the DataFrames
train_data = train_data.drop('Hersteller', axis=1)
train_data = train_data.join(onehot_encoded)
#--------------------------------
    # Optionally select only the requested columns
if manu_data != None:
train_data = train_data.filter(manu_data, axis=1)
#-------------------------------------------------------------------------
    # Test data set
    df_test = pd.read_csv('test.csv', index_col=0)  # read the test data set
    df_test['Verhandlungsbasis'].fillna(0.0, inplace=True)  # fill NaN values in the "Verhandlungsbasis" column
    df_test['Kilometer'] = df_test['Kilometer'] / 1000.0  # scale the "Kilometer" column down by a factor of 1000
#--------------------------------
# One-Hot-Encoding
onehot_encoded = onehot_encoder_func(df_test, 'Privatverkauf', make_df=False)
df_test['Privatverkauf'] = onehot_encoded
onehot_encoded = onehot_encoder_func(df_test, 'Finanzierung', make_df=False)
df_test['Finanzierung'] = onehot_encoded
onehot_encoded = onehot_encoder_func(df_test, 'Hersteller', make_df=True)
#--------------------------------
    # Join the DataFrames
df_test = df_test.drop('Hersteller', axis=1)
df_test = df_test.join(onehot_encoded)
#--------------------------------
    # Optionally select only the requested columns
if manu_data != None:
df_test = df_test.filter(manu_data, axis=1)
return train_data, df_test
# Definition of a function that runs a grid search and returns the best hyperparameter alpha (= lambda)
#------------
# Arguments:
# - alpha_space: range of alpha values passed to the grid search
# - degrees: polynomial degree controlling the model complexity
# - cvs: number of cross-validation folds
# - threads: number of computations run in parallel
# - manu_data:
# ---> If None: all loaded columns are kept in the DataFrame
# ---> If not None: only the selected columns are kept in the DataFrame
# - log:
# ---> If True, intermediate results are printed
# - plt:
# ---> If True, a histogram of the "Preis" column is plotted
# - submit:
# ---> If True, the predicted values are saved to a .csv file
#------------
def lasso_model_grid_search(alpha_space, degrees=2, cvs=5, threads=8, manu_data=None, submit=False, plt=False, log=False):
    start = dt.datetime.now()  # record start time
    #--------------------------------
    # Read the data
trainData, testData = read_and_make_data_fit(manu_data)
#--------------------------------
    # Build the pipeline
lasso_pipe_loop = Pipeline([
        ('add_x_square', PolynomialFeatures(degree=degrees, include_bias=False)),  # degree = n adds X^n to the feature matrix
        ('scaler', preprocessing.StandardScaler()),
        ('lasso_regression', linear_model.Lasso(fit_intercept=True, max_iter=100000))])  # the regression model
#--------------------------------
    # Define the lambda / alpha grid
lasso_pipe_parameter = [
{'lasso_regression__alpha': alpha_space}]
#--------------------------------
    # Grid search
grid_search_lasso = GridSearchCV(
estimator=lasso_pipe_loop,
param_grid=lasso_pipe_parameter,
scoring='neg_mean_squared_error',
cv=cvs,
n_jobs=threads,
iid=False)
#--------------------------------
    # Fit on the training data set
_ = grid_search_lasso.fit(trainData.drop('Preis',axis='columns'),trainData.Preis)
#--------------------------------
    # Apply the model to the test data set and build the submission
pred = grid_search_lasso.predict(testData)
testData['Preis'] = pred
df_submission = testData['Preis'].reset_index()
    end = dt.datetime.now()  # record end time
    print('Elapsed time: {}.'.format(end - start))
#--------------------------------
if submit == True:
df_submission.to_csv('./submission_lambda_' + str(degrees) + '-degrees.csv', index=False)
#--------------------------------
    # Print the best hyperparameter and plot
    print("Best hyperparameter lambda: {} ".format(grid_search_lasso.best_params_['lasso_regression__alpha'] ))
    print("Score (negative MSE) for the best hyperparameter: {} \n".format(grid_search_lasso.best_score_))
if log == True:
print(df_submission.head())
#--------------------------------
if plt == True:
#_ = pd.DataFrame(grid_search_lasso.cv_results_)[['rank_test_score']].plot()
hist_train_data = testData.hist(column=['Preis'])
return grid_search_lasso.best_score_
alpha_range = np.logspace(start=0, stop=10, num=1000)
cvsss = 5
data = ['Kilometer', 'Zylinder', 'Liter', 'Tueren', 'Verhandlungsbasis',
'BMW', 'Daimler', 'Fiat', 'Ford', 'Volkswagen', 'Preis']
lasso_mse_liste = []
for i in range(1,3 + 1):
mse = lasso_model_grid_search(alpha_space=alpha_range, degrees=i, cvs=cvsss, threads=10, manu_data=data, submit=False, plt=False, log=False)
lasso_mse_liste.append(mse)
#--------------------------------
# Print the results
print('The following scores were computed:\n{}'.format(lasso_mse_liste))
print('The best score (i.e. the smallest MSE) has index {} (i.e. {} degrees) and equals {}.'.format(np.argmax(lasso_mse_liste), int(np.argmax(lasso_mse_liste) + 1), max(lasso_mse_liste)))
# Kaggle score: 0.97154
# -> very close to the top score of 0.97180
_ = lasso_model_grid_search(alpha_space=alpha_range, degrees=2, cvs=cvsss, threads=10, manu_data=data, submit=True, plt=True, log=True)
# Quickstart: Restaurant tipping demo
Demo built to match MATLAB's restaurant tipping example, which I'll dare to call the Hello World of Fuzzy Logic.
For the original explanation visit https://www.mathworks.com/help/fuzzy/working-from-the-command-line.html.
```
import zadeh
```
## System definition
Define the **input variables**.
```
service = zadeh.FuzzyVariable(
zadeh.FloatDomain("service", 0, 10, 100),
{
"poor": zadeh.GaussianFuzzySet(1.5, 0),
"good": zadeh.GaussianFuzzySet(1.5, 5),
"excellent": zadeh.GaussianFuzzySet(1.5, 10),
},
)
service.plot()
food = zadeh.FuzzyVariable(
zadeh.FloatDomain("food", 0, 10, 100),
{
"rancid": zadeh.TrapezoidalFuzzySet(-2, 0, 1, 3),
"delicious": zadeh.TrapezoidalFuzzySet(7, 9, 10, 12),
},
)
food.plot()
```
Define the **output variable**.
```
tip = zadeh.FuzzyVariable(
zadeh.FloatDomain("tip", 0, 30, 100),
{
"cheap": zadeh.TriangularFuzzySet(0, 5, 10),
"average": zadeh.TriangularFuzzySet(10, 15, 20),
"generous": zadeh.TriangularFuzzySet(20, 25, 30),
},
)
tip.plot()
```
Define the **rules**.
```
# "|"" is fuzzy OR, ">>" if fuzzy implication
rule_set = [
((service == "poor") | (food == "rancid")) >> (tip == "cheap"),
(service == "good") >> (tip == "average"),
((service == "excellent") | (food == "delicious")) >> (tip == "generous"),
]
for rule in rule_set:
print(rule)
```
**Build** the system.
```
fis = zadeh.FIS([food, service], rule_set, tip)
```
## Usage
Single prediction
```
fis.get_crisp_output({"food": 9, "service": 7})
```
Automatically generate an interactive explorer using ipywidgets (only visible when running the notebook)
```
fis.get_interactive()
```
Plot the surface
```
import matplotlib.pyplot as plt
fis.plot_2d(food, service)
plt.tight_layout()
```
Produce a plot which can be used to explain the rules for a set of values:
```
fis.plot_rules({"food": 9, "service": 7})
plt.show()
for rule in fis.rules:
print(rule)
```
Each row corresponds to a rule, in order:
- In the first one there is (almost) no activation of any of the clauses. Hence, the output is null.
- In the second one there is some activation of the antecedent. Hence, the "average" membership function is the output, bounded by the activation value.
- In the third one, two clauses are joined by an "or", so the activation is the larger of the two, as reflected in the output.
In the last row, last column, the combination of the rule outputs is shown, as well as the centroid which defines the crisp output.
```
import pandas as pd
import os
import numpy as np
from string_search import *
data_dir = r'C:\Users\ozano\Desktop\senet'
data_path = os.path.join(data_dir, 'results_me.csv')
df = pd.read_csv(data_path, sep = ';')
cols_to_use = ['AD', 'ADRES']
df = df[cols_to_use]
df.shape
```
## Preprocess
```
for col in df.columns:
df[col] = df[col].apply(lambda x: preprocess(x))
df[col] = '#' + df[col] + '#'
df.head(10)
```
## Get N-Grams
```
df_ngram = pd.DataFrame()
for col in df.columns:
df_ngram[col] = df[col].apply(lambda x: get_n_grams(x))
df_ngram.head()
```
## Create Index
```
import math
def get_n_gram_length(x):
return max(1, len(x))
n_index_tokens = np.array([4, 7, 10, 15, 20, 30, 50, 70])
labels = ['AD', 'ADRES']
length_data = {}
index_data = {label: [] for label in labels}
for label in labels:
length_data[label] = df_ngram[label].apply(get_n_gram_length).values
for n_index_token in n_index_tokens:
for label in labels:
index_data[label].append(create_ngram_index(df_ngram[label].values, n_index_tokens = n_index_token))
```
## Search
```
#%%timeit
# 1 search 10us, full search 400us
from time import time
start_time = time()
input_string = 'palazoglu sok n 2 sisli'
search_person = False
search_label = 'ADRES'#'AD'#'ADRES'
# mehmet kocamanoglu erkan calik
# mehmet caliskan ahmet doger
# metin aydinhusamettin aydin
# mehmet ali emirbayer
# mehmet caliskan
# med egitimdersane yayincilik basimm pazarlama ve d
# mer su urunleri hayvancilik nakliye pazarlama ithalat ihrac
# mer ihracat
# yildiz mahdogus 1ara skhibrhm eren apk3 d13
# bagcilar mah alinak sitesi otopark karsisi no 8 kat 4 baglar
# istoc 3 ada no 56mahmutbey bagcilar istanbul
# istasyon mh19sk 20a etimesgut ankara
input_string = '#' + input_string + '#'
input_n_grams = get_n_grams(input_string)
input_ngram_count = len(input_n_grams)
index_size = len(n_index_tokens) - 1
if not search_person:
for i, n_index_token in enumerate(n_index_tokens[:-1]):
if input_ngram_count <= n_index_token:
index_size = i
break
print(f'Input n_gram count: {input_ngram_count}')
print(f'Searching with index size: {n_index_tokens[index_size]}')
matches = get_matches(input_n_grams, index_data[search_label][index_size], length_data[search_label])
match_bins = bin_matches(matches, get_sorted = False)
match_values = get_match_values(match_bins, df[search_label].values)
while not search_person and len(match_values[1.0]) == 0 and len(match_values[0.8]) == 0 and index_size < len(n_index_tokens) - 1:
index_size = index_size + 1
print(f'Searching with index size: {n_index_tokens[index_size]}')
matches = get_matches(input_n_grams, index_data[search_label][index_size], length_data[search_label])
match_bins = bin_matches(matches, get_sorted = False)
match_values = get_match_values(match_bins, df[search_label].values)
print('Time: {:.3f} ms'.format((time() - start_time) * 1000))  # elapsed wall-clock time, converted to milliseconds
```
## Get values
```
match_values[1.0]
match_values[0.8]
match_values[0.6]
match_values[0.4][:20] # Shows first n
match_values[0.0][:20] # Shows first n
```
# Google Data Analytics Capstone - Case Study 1
* Author: Wenny
* Date: 2021/11/05
## Import essential libs
```
import pandas as pd
import numpy as np
STORAGE_PATH = "/home/nfs_home/mammoth/GoogleAnalyticCerts" ## Path to all the historical data
```
## Prepare
### Guiding questions
#### How is the data organized?
```
df_202004 = pd.read_csv(f"{STORAGE_PATH}/202004-divvy-tripdata.csv")
df_202004.head(5)
df_202104 = pd.read_csv(f"{STORAGE_PATH}/202104-divvy-tripdata.csv")
df_202104.head(5)
df_202104.columns
```
#### How did you verify the data’s integrity?
```
import os
CSV_Files = [File for File in os.listdir(STORAGE_PATH) if File[-4:]==".csv"]
Dataset = [pd.read_csv(os.path.join(STORAGE_PATH, File)) for File in CSV_Files] ## Loading all historical data, beware of the memory usage.
Columns = [data.columns.tolist() for data in Dataset] ## Column names of each loaded file
for data,columns in zip(CSV_Files,Columns):
    print(data,columns)
```
#### Are there any problems with the data?
```
Dataset[0].isnull().any()
# Missing value
for name,data in zip(CSV_Files,Dataset):
if data.isnull().values.any():
print(f"Null value detected in {name}.")
print(Dataset[0].loc[Dataset[0].isna().any(axis=1)])
# Proportion of members and casual users
print("Porportions of member, casual, and others:")
for name,data in zip(CSV_Files,Dataset):
if 'member_casual' not in data.columns:
print(f"{name} does not have \"member_casual\" column!")
continue
num_member = len(data[data['member_casual']=="member"])
num_casual = len(data[data['member_casual']=="casual"])
num_others = len(data[(data['member_casual']!="member")&(data['member_casual']!="casual")])
num_total = len(data)
print(f"{name}: {num_member/num_total}/{num_casual/num_total}/{num_others/num_total}")
```
## Process
### Guiding questions
#### Have you ensured your data’s integrity?
```
# Integrate all Trips files and station files separately
Trips_Files = [File for File in os.listdir(STORAGE_PATH) if not "Stations" in File and File[-4:]==".csv"]
Station_Files = [File for File in os.listdir(STORAGE_PATH) if "Stations" in File and File[-4:]==".csv"]
Trips_Dataset = [pd.read_csv(os.path.join(STORAGE_PATH, File)) for File in Trips_Files]
Station_Dataset = [pd.read_csv(os.path.join(STORAGE_PATH, File)) for File in Station_Files]
# Rename columns in Divvy_Trips_2018_Q1.csv
for data in Trips_Dataset:
data.rename(columns={"01 - Rental Details Rental ID":"trip_id",
"01 - Rental Details Local Start Time":"start_time",
"01 - Rental Details Local End Time":"end_time",
"01 - Rental Details Bike ID":"bikeid",
"01 - Rental Details Duration In Seconds Uncapped":"tripduration",
"03 - Rental Start Station ID":"from_station_id",
"03 - Rental Start Station Name":"from_station_name",
"02 - Rental End Station ID":"to_station_id",
"02 - Rental End Station Name":"to_station_name",
"User Type":"usertype",
"Member Gender":"gender",
"05 - Member Details Member Birthday Year":"birthyear"},inplace=True)
# Integrate the entire trips dataset:
## Columns with ID were removed
## tripduration = ended_at - started_at (if missing)
## usertype = df['member_casual']=='member' ? 'Subscriber' : 'Customer' (if missing)
## Season, Weekday, Start_hour
from datetime import datetime
from tqdm import tqdm
def get_duration_sec(t1, t2):
t1 = datetime.strptime(t1, "%Y-%m-%d %H:%M:%S")
t2 = datetime.strptime(t2, "%Y-%m-%d %H:%M:%S")
return int((t2 - t1).seconds)
Integrated_dataset = []
for df in tqdm(Trips_Dataset):
if not "usertype" in df.columns:
df.rename(columns={'member_casual':'usertype'},inplace=True)
df['usertype'].replace({'member':'Subscriber','casual':'Customer'},inplace=True)
if "starttime" in df.columns:
df.rename(columns={'starttime':'start_time'},inplace=True)
df.rename(columns={'stoptime':'end_time'},inplace=True)
if "started_at" in df.columns:
df.rename(columns={'started_at':'start_time'},inplace=True)
df.rename(columns={'ended_at':'end_time'},inplace=True)
if not "tripduration" in df.columns:
df['tripduration'] = df[['start_time','end_time']].apply(lambda x:get_duration_sec(*x),axis=1)
df['tripduration'] = [float(val.replace(',','')) if type(val)==str else val for val in df['tripduration'].values]
season = []
weekday = []
start_hour = []
for date in df['start_time']:
if "/" in date: # 4/30/2016 23:59
try:
time = datetime.strptime(date, "%m/%d/%Y %H:%M:%S")
except:
time = datetime.strptime(date, "%m/%d/%Y %H:%M")
else:
try: # 2020-03-07 15:25:55
time = datetime.strptime(date, "%Y-%m-%d %H:%M:%S")
except: # 2020-03-07 15:25
time = datetime.strptime(date, "%Y-%m-%d %H:%M")
season.append(time.month /4)
weekday.append(time.strftime('%A'))
start_hour.append(time.hour)
df["season"] = season
df["weekday"] = weekday
df["start_hour"] = start_hour
Integrated_dataset.append(df[["start_time","end_time","tripduration","season","weekday","start_hour","usertype"]])
Integrated_dataset = pd.concat(Integrated_dataset)
# Remove usertype "Dependent", which the meaning is not described.
Integrated_dataset = Integrated_dataset[~(Integrated_dataset["usertype"]=="Dependent")]
```
#### How can you verify that your data is clean and ready to analyze?
```
# No Nans
Integrated_dataset.isna().any()
#dtype of tripduration is float64
Integrated_dataset.dtypes
#"usertype" now containes only two possible values
Integrated_dataset['usertype'].unique()
```
## Analyze
#### What surprises did you discover in the data?
```
Integrated_dataset.head(10)
# As a whole
import matplotlib.pyplot as plt
Subscribers = Integrated_dataset[Integrated_dataset['usertype']=="Subscriber"]
Customers = Integrated_dataset[Integrated_dataset['usertype']=="Customer"]
Subscribers.describe()
Customers.describe()
plt.boxplot([Subscribers['tripduration'].values,Customers['tripduration'].values], showfliers=False)
plt.xticks([1,2],['Subscribers','Customers'])
plt.ylabel("trip duration (secs)")
plt.title("Box plot for trip duration between different usertypes")
# In the view of 7 days of a week:
Weekday = ["Monday","Tuesday","Wednesday","Thursday","Friday","Saturday","Sunday"]
Sub_Weekday_cnt = [len(Subscribers[Subscribers['weekday']==Day]) for Day in Weekday]
Cus_Weekday_cnt = [len(Customers[Customers['weekday']==Day]) for Day in Weekday]
x = np.arange(len(Weekday))
width = 0.3
plt.bar(x, Sub_Weekday_cnt, width, color='tab:blue', label='Subscriber')
plt.bar(x + width, Cus_Weekday_cnt, width, color='tab:red', label='Customers')
plt.xticks(x + width / 2, Weekday,rotation=45)
plt.ylabel('Number of records')
plt.title('Number of records in weekday')
plt.legend(bbox_to_anchor=(1,1), loc='upper left')
plt.show()
# In the view of the 24 hours of a day:
Hours = list(range(0,24))
Sub_Hour_cnt = [len(Subscribers[Subscribers['start_hour']==Hour]) for Hour in Hours]
Cus_Hour_cnt = [len(Customers[Customers['start_hour']==Hour]) for Hour in Hours]
x = np.arange(len(Hours))
width = 0.1
plt.bar(x, Sub_Hour_cnt, width, color='tab:blue', label='Subscriber')
plt.bar(x + width, Cus_Hour_cnt, width, color='tab:red', label='Customers')
plt.xticks(x + width / 2, Hours, rotation=45)
plt.xlabel('Hours')
plt.ylabel('Number of records')
plt.title('Number of records for every hour')
plt.legend(bbox_to_anchor=(1,1), loc='upper left')
plt.show()
```
# Function(s) to determine desired AB outcome
```
# Import statements
import pandas as pd
```
### Create df of desired outcomes indexed by baserunners and number of outs
```
def outcome_df() -> pd.DataFrame:
'''
Returns pandas dataframe of desired outcomes sorted by outs+baserunners.
TODO: Account for balls and strikes, as well, maybe using multiindexing
'''
# GB=groundball, FB=flyball, Whiffs=swing+miss
df = pd.DataFrame(data={'Outs':[0,1,2], 'First':['GB','GB','FB'], 'Second':['FB','FB','FB'],
'Third':['Whiffs','Whiffs','FB'], 'First and Second':['GB','GB','FB'],
'First and Third':['Whiffs','GB','FB'], 'Second and Third':['Whiffs','Whiffs','FB'],
'Bases Loaded':['Whiffs','GB','FB'], 'Bases Empty':['FB','FB','FB']}
).set_index('Outs')
return(df)
```
### Create dataframe of best pitches to throw to achieve a desired outcome
```
def pitch_for_outcome() -> pd.DataFrame:
'''
Returns pandas dataframe of best pitches to throw sorted by desired outcome.
NOTE: This is a hard-coded version of what we want to intelligently infer/predict using data!!!
'''
df = pd.DataFrame(data={'Outcome':['GB','FB','Whiffs'], 'Pitch':['Curveball','Fastball','Slider'],
'Location':['Low','Up','Low and Away']}).set_index('Outcome')
return(df)
```
### Convert boolean baserunner columns from game_state dataframe into readable strings (good for keys and/or column names)
```
def to_baserunners(game_state_row: tuple) -> str:
'''
Converts baserunner info from row of game_state df to string key for later column matching.
    >>> to_baserunners((..., First=False, Second=True, Third=True, ...))
    'Second and Third'
'''
# Logic to translate baserunner info from game_state_row
first = game_state_row.First
second = game_state_row.Second
third = game_state_row.Third
if first:
if second:
if third:
baserunners = 'Bases Loaded'
else:
baserunners = 'First and Second'
elif third:
baserunners = 'First and Third'
else:
baserunners = 'First'
else:
if second:
if third:
baserunners = 'Second and Third'
else:
baserunners = 'Second'
elif third:
baserunners = 'Third'
else:
baserunners = 'Bases Empty'
# Return baserunner info string
return(baserunners)
```
### Convert exact ball-strike count to type of count (batter ahead, even, first pitch, etc.)
```
def count_converter(balls:int = None, strikes:int = None) -> str:
'''
Converts ball-strike count to type/description of count. Takes two integers inputs, returns string.
If no input given, returns 'All Counts'.
Note: 'First Pitch' and 'Two Strikes' take priority over other categories. 'Full Count' not supported.
>>> count_converter(2,2)
'Even'
>>> count_converter(0,0)
'First Pitch'
'''
    # Conversion logic ('All Counts' when no count is supplied)
    if balls is None or strikes is None:
        count_type = 'All Counts'
    elif balls == 0 and strikes == 0:
        count_type = 'First Pitch'
    elif strikes == 2:
        count_type = 'Two Strikes'
    elif balls > strikes:
        count_type = 'Batter Ahead'
    elif strikes > balls:
        count_type = 'Pitcher Ahead'
    else:
        count_type = 'Even'
# Return count_type string
return(count_type)
```
### Select desired outcomes for each situation in a game_state dataframe
```
def desired_outcomes(game_state: pd.DataFrame) -> pd.Series:
'''
Returns pandas series of desired AB outcomes based on given game state dataframe.
Return values may include 'GB','FB','Whiffs','BA' (groundball, flyball, swing+miss, min batting average).
'''
# Dataframe of desired outcomes sorted by outs+baserunners
best_outcomes = outcome_df()
# Loop through game_state observations, extract relevant info, match with desired outcome
desired_outcomes = []
for row in game_state.itertuples():
outs = row.Outs
baserunners = to_baserunners(row)
desired_outcomes.append(best_outcomes.loc[outs,baserunners])
    # Return desired outcomes as pandas series
return(pd.Series(desired_outcomes).rename('Desired Outcome'))
```
### Choose pitch selection based on desired outcome
```
def desired_pitch(desired_outcomes: pd.Series) -> pd.DataFrame:
'''
Returns pandas dataframe of desired pitches to achieve desired outcomes in a given game state.
Dataframe includes pitch type and location.
'''
# Dataframe of desired pitches sorted by desired outcome
best_pitches = pitch_for_outcome()
# Loop through desired outcomes and match with best pitch selection
pitch_selections = pd.DataFrame(columns=best_pitches.columns)
for outcome in desired_outcomes:
pitch_selections = pitch_selections.append(best_pitches.loc[outcome],ignore_index=True)
# Return pitch selections
return(pitch_selections)
```
### Test case (change cell from markdown to code and run)
import project_path
import import_ipynb
from WhichPitch.Lib import sim_data
game_states = sim_data.sim_game_state(10)
outcomes = desired_outcomes(game_states)
pitches = desired_pitch(outcomes)
pd.concat([game_states,outcomes,pitches],axis=1)
# Dijkstra's Algorithm
In the "Greedy Algorithms" lesson, we implemented the **Dijkstra's Algorithm** to find the distance of each node from the given source node. In this exercise, you'll implement the same **Dijkstra's algorithm to find the length of the shortest path between a given pair of nodes,** but this time we will have a better time complexity. First, let's build the graph.
## Graph Representation
In order to run Dijkstra's Algorithm, we'll need to add distance to each edge. We'll use the `GraphEdge` class below to represent each edge between a pair of nodes. You are free to create your own implementation of an undirected graph.
```
# Helper Class
class GraphEdge(object):
def __init__(self, destinationNode, distance):
self.node = destinationNode
self.distance = distance
```
The new graph representation should look like this:
```
# Helper Classes
class GraphNode(object):
def __init__(self, val):
self.value = val
self.edges = []
def add_child(self, node, distance):
self.edges.append(GraphEdge(node, distance))
def remove_child(self, del_node):
if del_node in self.edges:
self.edges.remove(del_node)
class Graph(object):
def __init__(self, node_list):
self.nodes = node_list
# adds an edge between node1 and node2 in both directions
def add_edge(self, node1, node2, distance):
if node1 in self.nodes and node2 in self.nodes:
node1.add_child(node2, distance)
node2.add_child(node1, distance)
def remove_edge(self, node1, node2):
if node1 in self.nodes and node2 in self.nodes:
node1.remove_child(node2)
node2.remove_child(node1)
```
### Exercise - Write the function definition here
Using what you've learned, implement Dijkstra's Algorithm
```
import math
def dijkstra(graph, start_node, end_node):
# Create a dictionary that stores the distance to all nodes in the form of node:distance as key:value
# Assume the initial distance to all nodes is infinity.
# Use math.inf as a predefined constant equal to positive infinity
distance_dict = {node: math.inf for node in graph.nodes}
# Build a dictionary that will store the "shortest" distance to all nodes, wrt the start_node
shortest_distance = {}
distance_dict[start_node] = 0
while distance_dict:
# Sort the distance_dict, and pick the key:value having smallest distance
current_node, node_distance = sorted(distance_dict.items(), key=lambda x: x[1])[0]
# Remove the current node from the distance_dict, and store the same key:value in shortest_distance
shortest_distance[current_node] = distance_dict.pop(current_node)
        # Check for each neighbour of current_node: if the distance_to_neighbour is smaller than the already stored distance, update the distance_dict
for edge in current_node.edges:
if edge.node in distance_dict:
distance_to_neighbour = node_distance + edge.distance
if distance_dict[edge.node] > distance_to_neighbour:
distance_dict[edge.node] = distance_to_neighbour
return shortest_distance[end_node]
```
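The solution above re-sorts the remaining distances on every iteration. To actually get the improved time complexity mentioned at the start of this exercise, the tentative distances are usually kept in a min-heap instead. The sketch below is one possible variant using Python's built-in `heapq` module together with the `Graph`, `GraphNode`, and `GraphEdge` classes defined above; it is an illustrative alternative, not part of the original exercise.
```
import heapq
import math

def dijkstra_with_heap(graph, start_node, end_node):
    # Tentative distances, initialised to infinity for every node
    distance = {node: math.inf for node in graph.nodes}
    distance[start_node] = 0
    visited = set()
    # Heap entries are (distance, counter, node); the counter breaks ties,
    # since GraphNode objects themselves are not orderable
    counter = 0
    heap = [(0, counter, start_node)]
    while heap:
        node_distance, _, current_node = heapq.heappop(heap)
        if current_node in visited:
            continue  # stale entry for an already finalised node
        visited.add(current_node)
        if current_node is end_node:
            return node_distance  # shortest distance to end_node found
        for edge in current_node.edges:
            new_distance = node_distance + edge.distance
            if new_distance < distance[edge.node]:
                distance[edge.node] = new_distance
                counter += 1
                heapq.heappush(heap, (new_distance, counter, edge.node))
    return distance[end_node]
```
On the test graphs below this returns the same distances as `dijkstra`, but each edge is pushed to the heap at most once, which gives roughly O(E log V) behaviour.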
### Test - Let's test your function
```
# Test Case 1:
# Create a graph
node_u = GraphNode('U')
node_d = GraphNode('D')
node_a = GraphNode('A')
node_c = GraphNode('C')
node_i = GraphNode('I')
node_t = GraphNode('T')
node_y = GraphNode('Y')
graph = Graph([node_u, node_d, node_a, node_c, node_i, node_t, node_y])
# add_edge() function will add an edge between node1 and node2 in both directions
graph.add_edge(node_u, node_a, 4)
graph.add_edge(node_u, node_c, 6)
graph.add_edge(node_u, node_d, 3)
graph.add_edge(node_d, node_c, 4)
graph.add_edge(node_a, node_i, 7)
graph.add_edge(node_c, node_i, 4)
graph.add_edge(node_c, node_t, 5)
graph.add_edge(node_i, node_y, 4)
graph.add_edge(node_t, node_y, 5)
# Shortest Distance from U to Y is 14
print('Shortest Distance from {} to {} is {}'.format(node_u.value, node_y.value, dijkstra(graph, node_u, node_y)))
# Test Case 2
node_A = GraphNode('A')
node_B = GraphNode('B')
node_C = GraphNode('C')
graph = Graph([node_A, node_B, node_C])
graph.add_edge(node_A, node_B, 5)
graph.add_edge(node_B, node_C, 5)
graph.add_edge(node_A, node_C, 10)
# Shortest Distance from A to C is 10
print('Shortest Distance from {} to {} is {}'.format(node_A.value, node_C.value, dijkstra(graph, node_A, node_C)))
# Test Case 3
node_A = GraphNode('A')
node_B = GraphNode('B')
node_C = GraphNode('C')
node_D = GraphNode('D')
node_E = GraphNode('E')
graph = Graph([node_A, node_B, node_C, node_D, node_E])
graph.add_edge(node_A, node_B, 3)
graph.add_edge(node_A, node_D, 2)
graph.add_edge(node_B, node_D, 4)
graph.add_edge(node_B, node_E, 6)
graph.add_edge(node_B, node_C, 1)
graph.add_edge(node_C, node_E, 2)
graph.add_edge(node_E, node_D, 1)
# Shortest Distance from A to C is 4
print('Shortest Distance from {} to {} is {}'.format(node_A.value, node_C.value, dijkstra(graph, node_A, node_C)))
```
# Lab 03.2: Filtering Data
This lab is presented with some revisions from [Dennis Sun at Cal Poly](https://web.calpoly.edu/~dsun09/index.html) and his [Data301 Course](http://users.csc.calpoly.edu/~dsun09/data301/lectures.html)
### When you have filled out all the questions, submit via [Tulane Canvas](https://tulane.instructure.com/)
```
%matplotlib inline
import pandas as pd
pd.options.display.max_rows = 5
titanic_df = pd.read_csv('../data/titanic.csv')
titanic_df.head()
```
In the previous chapter, we only analyzed one variable at a time, but we always analyzed _all_ of the observations in a data set. But what if we want to analyze, say, only the passengers on the Titanic who were _male_? To do this, we have to **filter** the data. That is, we have to remove the rows of the `titanic_df` `DataFrame` where `sex` is not equal to `"male"`. In this section, we will learn several ways to obtain such a subsetted `DataFrame`.
## Two Ways to Filter a DataFrame
One way to filter a `pandas` `DataFrame`, that uses a technique we learned in Chapter 1, is to set the filtering variable as the index and select the value you want using `.loc`.
So for example, if we wanted a `DataFrame` with just the male passengers, we could do:
```
males = titanic_df.set_index("sex").loc["male"]
males
males.age.plot.hist()
```
The more common way to filter a `DataFrame` is to use a **boolean mask**. A boolean mask is simply a `Series` of booleans whose index matches the index of the `DataFrame`.
The easiest way to create a boolean mask is to use one of the standard comparison operators `==`, `<`, `>`, and `!=` on an existing column in the `DataFrame`. For example, the following code produces a boolean mask that is equal to `True` for the male passengers and `False` otherwise.
```
titanic_df.sex == "male"
```
Notice that the equality operator `==` is not being used in the usual sense, i.e., to determine whether the object `titanic_df.sex` is the string `"male"`. This makes no sense, since `titanic_df.sex` is a `Series`. Instead, the equality operator is being _broadcast_ over the elements of `titanic_df.sex`. As a result, we end up with a `Series` of booleans that indicates whether _each_ element of `titanic_df.sex` is equal to `"male"`.
This boolean mask can then be passed into a `DataFrame` to obtain just the subset of rows where the mask equals `True`.
```
titanic_df[titanic_df.sex == "male"]
```
How can we tell that it worked? For one, notice that the index is missing the numbers 0 and 2; that's because passengers 0 and 2 in the original `DataFrame` were female. Also, the index goes up to 1308, but there are only 843 rows in this `DataFrame`.
In this new `DataFrame`, the variable `sex` should only take on one value, `"male"`. Let's check this.
```
titanic_df[titanic_df.sex == "male"]["sex"].value_counts()
```
Now we can analyze this subsetted `DataFrame` using the techniques we learned in Chapter 1. For example, the following code produces a histogram of the ages of the male passengers on the Titanic:
```
titanic_df[titanic_df.sex == "male"].age.plot.hist()
```
Boolean masks are also compatible with `.loc` and `.iloc`:
```
titanic_df.loc[titanic_df.sex == "male"]
```
The ability to pass a boolean mask into `.loc` or `.iloc` is useful if we want to select columns at the same time that we are filtering rows. For example, the following code returns the ages of the male passengers:
```
titanic_df.loc[titanic_df.sex == "male", "age"]
```
Of course, this result could be obtained another way; we could first apply the boolean mask and then select the column from the subsetted `DataFrame`, the same way we would select a column from any other `DataFrame`:
```
titanic_df[titanic_df.sex == "male"]["age"]
```
### Speed Comparison
We've just seen two ways to filter a `DataFrame`. Which is better?
One consideration is that the first method forces you to set the index of your `DataFrame` to the variable you want to filter on. If your `DataFrame` already has a natural index, you might not want to replace that index just to be able to filter the data.
Another consideration is speed. Let's test the runtimes of the two options by using the `%timeit` magic. (**Warning:** The cell below will take a while to run, since `timeit` will run each command multiple times and report the mean and standard deviation of the runtimes.)
```
%timeit titanic_df.set_index("sex").loc["male"].age.mean()
%timeit titanic_df[titanic_df.sex == "male"].age.mean()
```
So boolean masking is also significantly faster than re-indexing and selecting. All things considered, boolean masking is the best way to filter your data.
### Working with Boolean Series
Remember that a boolean mask is a `Series` of booleans. A boolean variable is usually regarded as categorical, but it can also be regarded as quantitative, where `True`s are 1s and `False`s are 0s. For example, the following command actually produces a `Series` of 0s and 3s.
```
(titanic_df.sex == "male") * 3
```
How can we use this dual nature of booleans to our advantage? In Chapter 1.2, we saw how functions like `.sum()` and `.mean()` could be applied to a binary categorical variable whose categories are coded as 0 and 1, such as the `survived` variable in the Titanic data set. The sum tells us the _number_ of observations in category 1, while the mean tells us the _proportion_ in category 1.
Since boolean `Series` are essentially variables of 0s and 1s, the command
```
(titanic_df.sex == "male").sum()
```
returns the _number_ of observations where `sex == "male"` and
```
(titanic_df.sex == "male").mean()
```
returns the _proportion_ of observations where `sex == "male"`. Check that these answers are correct by some other method.
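For instance, one way to cross-check these numbers is `value_counts`, with and without normalization (a quick sanity check rather than a new technique):
```
titanic_df.sex.value_counts()
titanic_df.sex.value_counts(normalize=True)
```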
## Filtering on Multiple Criteria
What if we want to visualize the age distribution of male _survivors_ on the Titanic? To answer this question, we have to filter the `DataFrame` on two variables, `sex` and `survived`.
We can filter on two or more criteria by combining boolean masks using logical operators. First, let's get the boolean masks for the two filters of interest:
```
titanic_df.sex == "male"
titanic_df.survived == 1
```
Now, we want to combine these two boolean masks into a single mask that is `True` only when _both_ masks are `True`. This can be accomplished with the logical operator `&`.
```
(titanic_df.sex == "male") & (titanic_df.survived == 1)
```
Verify for yourself that the `True` values in this `Series` correspond to observations where _both_ masks were True.
**Warning:** Notice the parentheses around each boolean mask above. These parentheses are necessary because of operator precedence. In Python, the logical operator `&` has higher precedence than the comparison operator `==`, so the command
`titanic_df.sex == "male" & titanic_df.survived == 1`
will be interpreted as
`titanic_df.sex == ("male" & titanic_df.survived) == 1`
and result in an error. Python does not know how to evaluate `("male" & titanic_df.survived)`, since the logical operator `&` is not defined between a `str` and a `Series`.
The parentheses ensure that Python evaluates the boolean masks first and the logical operator second:
`(titanic_df.sex == "male") & (titanic_df.survived == 1)`.
It is very easy to forget these parentheses. Unfortunately, the error message that you get is not particularly helpful for debugging the code. If you don't believe me, just try running the offending command (without parentheses)!
Now with the boolean mask in hand, we can plot the age distribution of male survivors on the Titanic:
```
titanic_df[(titanic_df.sex == "male") & (titanic_df.survived == 1)].age.plot.hist()
```
Notice the peak between 0 and 10. A disproportionate number of young children survived because they were given priority to board the lifeboats.
Besides `&`, there are two other logical operators, `|` and `~`, that can be used to modify and combine boolean masks.
- `&` means "and"
- `|` means "or"
- `~` means "not"
Like `&`, `|` and `~` operate elementwise on boolean `Series`. Examples are provided below.
```
# male OR survived
(titanic_df.sex == "male") | (titanic_df.survived == 1)
# equivalent to (titanic_df.sex != "male")
~(titanic_df.sex == "male")
```
Notice how we use parentheses to ensure that the boolean mask is evaluated before the logical operators.
# Exercises
Exercises 1-3 deal with the Titanic data set.
**Exercise 1.** Is there any advantage to selecting the column at the same time you apply the boolean mask? In other words, is the second option below any faster than the first?
1. `titanic_df[titanic_df.sex == "female"].age`
2. `titanic_df.loc[titanic_df.sex == "female", "age"]`
Use the `%timeit` magic to compare the runtimes of these two options.
```
%timeit titanic_df[titanic_df.sex == "female"].age
%timeit titanic_df.loc[titanic_df.sex == "female", "age"]
# the results show that the second option is faster by a few hundred microseconds. I suppose this might make a difference if
# it were being used on a much larger data set, but as it is there really is no noticeable advantage to using either of these.
```
**Exercise 2.** Produce a graphic that compares the age distribution of the males who survived with the age distribution of the males who did not.
```
titanic_df[(titanic_df.sex == "male")&(titanic_df.survived == 0)].age.plot.hist() #blue
titanic_df[(titanic_df.sex == "male")&(titanic_df.survived == 1)].age.plot.hist() #orange
```
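The two histograms above share the same axes, so the second partly hides the first. One possible refinement (a sketch that relies on standard matplotlib keyword arguments passed through pandas) is to make the bars semi-transparent and add a legend:
```
ax = titanic_df[(titanic_df.sex == "male") & (titanic_df.survived == 0)].age.plot.hist(alpha=0.5, label="did not survive")
titanic_df[(titanic_df.sex == "male") & (titanic_df.survived == 1)].age.plot.hist(alpha=0.5, label="survived", ax=ax)
ax.legend();
```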
**Exercise 3.** What proportion of 1st class passengers survived? What proportion of 3rd class passengers survived? See if you can use boolean masks to do this.
```
# proportion of 1st class passengers who survived: filter to the class first, then take the mean of `survived`
display(titanic_df[titanic_df.pclass == 1].survived.mean())
# same idea for 3rd class passengers
display(titanic_df[titanic_df.pclass == 3].survived.mean())
```
Exercises 4-7 ask you to analyze the Tips data set (`../data/tips.csv`). The following code reads the data into a `DataFrame` called `tips_df` and creates a new column called `tip_percent` out of the `tip` and `total_bill` columns. This new column represents the tip as a percentage of the total bill (as a number between 0 and 1).
```
tips_df = pd.read_csv("../data/tips.csv")
tips_df["tip_percent"] = tips_df.tip / tips_df.total_bill
tips_df
```
**Exercise 4.** Calculate the average tip percentage paid by parties of 4 or more.
```
tips_df.rename(columns={"size":"party"}, inplace=True)  # "size" clashes with the DataFrame .size attribute, so rename it
tips_df[tips_df.party >= 4].tip_percent.mean()
```
**Exercise 5.** Make a visualization comparing the distribution of tip percentages left by males and females. How do they compare?
```
tips_df[tips_df.sex == "Male"]["tip_percent"].plot.hist(bins=50) #blue
tips_df[tips_df.sex == "Male"]["tip_percent"].plot.density(xlim=(0, 0.8)) #orange
tips_df[tips_df.sex == "Female"]["tip_percent"].plot.hist(bins=50) #green
tips_df[tips_df.sex == "Female"]["tip_percent"].plot.density(xlim=(0, 0.8)) #red
# both sexes have roughly the same tip-percentage distribution, but male payers appear far more often in the data.
```
**Exercise 6.** What is the average table size on weekdays? (_Hint:_ There are at least two ways to create the appropriate boolean mask: using the `|` logical operator and using the `.isin()` method. See if you can do it both ways.)
```
tips_df[(tips_df.day == "Thur")|(tips_df.day == "Fri")].party.mean()
# Thursday and Friday are the only weekdays that actually show up in the data
```
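The hint also mentions `.isin()`. A sketch of the same calculation with that method (still assuming Thursday and Friday are the only weekdays in the data):
```
tips_df[tips_df.day.isin(["Thur", "Fri"])].party.mean()
```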
**Exercise 7.** Calculate the average table size for each day of the week. On which day of the week does the waiter serve the largest parties, on average?
```
display(tips_df[(tips_df.day == "Thur")].party.mean())
display(tips_df[(tips_df.day == "Fri")].party.mean())
display(tips_df[(tips_df.day == "Sat")].party.mean())
display(tips_df[(tips_df.day == "Sun")].party.mean())
# on average, the waiter serves the largest parties on Sundays
```
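The repetition above can also be avoided with a short loop over the days that actually appear in the data; this is just an equivalent sketch built on the same boolean-mask idea:
```
for day in tips_df.day.unique():
    print(day, tips_df[tips_df.day == day].party.mean())
```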
### When you have filled out all the questions, submit via [Tulane Canvas](https://tulane.instructure.com/)
# Specifying Primitive Options
By default, DFS will apply primitives across all dataframes and columns. This behavior can be altered through a few different parameters. Dataframes and columns can be optionally ignored or included for an entire DFS run or on a per-primitive basis, enabling greater control over features and less run time overhead.
```
import featuretools as ft
from featuretools.tests.testing_utils import make_ecommerce_entityset
es = make_ecommerce_entityset()
features_list = ft.dfs(entityset=es,
target_dataframe_name='customers',
agg_primitives=['mode'],
trans_primitives=['weekday'],
features_only=True)
features_list
```
## Specifying Options for an Entire Run
The `ignore_dataframes` and `ignore_columns` parameters of DFS control dataframes and columns that should be ignored for all primitives. This is useful for ignoring columns or dataframes that don't relate to the problem or otherwise shouldn't be included in the DFS run.
```
# ignore the 'log' and 'cohorts' dataframes entirely
# ignore the 'birthday' column in 'customers' and the 'device_name' column in 'sessions'
features_list = ft.dfs(entityset=es,
target_dataframe_name='customers',
agg_primitives=['mode'],
trans_primitives=['weekday'],
ignore_dataframes=['log', 'cohorts'],
ignore_columns={'sessions': ['device_name'],
'customers': ['birthday']},
features_only=True)
features_list
```
DFS completely ignores the `log` and `cohorts` dataframes when creating features. It also ignores the columns `device_name` and `birthday` in `sessions` and `customers` respectively. However, both of these options can be overridden by individual primitive options in the `primitive_options` parameter.
## Specifying for Individual Primitives
Options for individual primitives or groups of primitives are set by the `primitive_options` parameter of DFS. This parameter maps any desired options to specific primitives. In the case of conflicting options, options set at this level will override options set at the entire DFS run level, and the include options will always take priority over their ignore counterparts.
Using the string primitive name or the primitive type will apply the options to all primitives of the same name. You can also set options for a specific instance of a primitive by using the primitive instance as a key in the `primitive_options` dictionary. Note, however, that specifying options for a specific instance will result in that instance ignoring any options set for the generic primitive through options with the primitive name or class as the key.
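As a minimal sketch of the instance-keyed form (assuming the `Mode` primitive class can be imported from `featuretools.primitives`), options can be attached to one particular primitive instance like this:
```
from featuretools.primitives import Mode

special_mode = Mode()  # this specific instance gets its own options
features_list = ft.dfs(entityset=es,
                       target_dataframe_name='customers',
                       agg_primitives=[special_mode],
                       trans_primitives=['weekday'],
                       primitive_options={
                           special_mode: {'include_dataframes': ['sessions']},
                       },
                       features_only=True)
features_list
```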
### Specifying Dataframes for Individual Primitives
Which dataframes to include/ignore can also be specified for a single primitive or a group of primitives. Dataframes can be ignored using the `ignore_dataframes` option in `primitive_options`, while dataframes to explicitly include are set by the `include_dataframes` option. When `include_dataframes` is given, all dataframes not listed are ignored by the primitive. No columns from any excluded dataframe will be used to generate features with the given primitive.
```
# ignore the 'cohorts' and 'log' dataframes, but only for the primitive 'mode'
# include only the 'customers' dataframe for the primitives 'weekday' and 'day'
features_list = ft.dfs(entityset=es,
target_dataframe_name='customers',
agg_primitives=['mode'],
trans_primitives=['weekday', 'day'],
primitive_options={
'mode': {'ignore_dataframes': ['cohorts', 'log']},
('weekday', 'day'): {'include_dataframes': ['customers']}
},
features_only=True)
features_list
```
In this example, DFS would only use the `customers` dataframe for both `weekday` and `day`, and would use all dataframes except `cohorts` and `log` for `mode`.
### Specifying Columns for Individual Primitives
Specific columns can also be explicitly included/ignored for a primitive or group of primitives. Columns to ignore are set by the `ignore_columns` option, while columns to include are set by `include_columns`. When the `include_columns` option is set, no other columns from that dataframe will be used to make features with the given primitive.
```
# Include the columns 'product_id' and 'zipcode', 'device_type', and 'cancel_reason' for 'mode'
# Ignore the columns 'signup_date' and 'cancel_date' for 'weekday'
features_list = ft.dfs(entityset=es,
target_dataframe_name='customers',
agg_primitives=['mode'],
trans_primitives=['weekday'],
primitive_options={
'mode': {'include_columns': {'log': ['product_id', 'zipcode'],
'sessions': ['device_type'],
'customers': ['cancel_reason']}},
'weekday': {'ignore_columns': {'customers': ['signup_date', 'cancel_date']}}
},
features_only=True)
features_list
```
Here, `mode` will only use the columns `product_id` and `zipcode` from the dataframe `log`, `device_type`
from the dataframe `sessions`, and `cancel_reason` from `customers`. For any other dataframe, `mode` will use all
columns. The `weekday` primitive will use all columns in all dataframes except for `signup_date` and `cancel_date`
from the `customers` dataframe.
### Specifying GroupBy Options
GroupBy Transform Primitives also have the additional options `include_groupby_dataframes`, `ignore_groupby_dataframes`, `include_groupby_columns`, and `ignore_groupby_columns`. These options are used to specify dataframes and columns to include/ignore as groupings for inputs. By default, DFS only groups by foreign key columns. Specifying `include_groupby_columns` overrides this default, and will only group by columns given. On the other hand, `ignore_groupby_columns` will continue to use only the foreign key columns, ignoring any columns specified that are also foreign key columns. Note that if including non-foreign key columns to group by, the included columns must be categorical columns.
```
features_list = ft.dfs(entityset=es,
target_dataframe_name='log',
agg_primitives=[],
trans_primitives=[],
groupby_trans_primitives=['cum_sum', 'cum_count'],
primitive_options={
'cum_sum': {'ignore_groupby_columns': {'log': ['product_id']}},
'cum_count': {'include_groupby_columns': {'log': ['product_id',
'priority_level']},
'ignore_groupby_dataframes': ['sessions']}
},
features_only=True)
features_list
```
We ignore `product_id` as a groupby for `cum_sum` but still use any other foreign key columns in that or any other dataframe. For `cum_count`, we use only `product_id` and `priority_level` as groupbys. Note that `cum_sum` doesn't use
`priority_level` because it's not a foreign key column, but we explicitly include it for `cum_count`. Finally, note that specifying groupby options doesn't affect what features the primitive is applied to. For example, `cum_count` ignores the dataframe `sessions` for groupbys, but the feature `<Feature: CUM_COUNT(sessions.device_name) by product_id>` is still made. The groupby is from the target dataframe `log`, so the feature is valid given the associated options. To ignore the `sessions` dataframe for `cum_count`, the `ignore_dataframes` option for `cum_count` would need to include `sessions`.
## Specifying for each Input for Multiple Input Primitives
For primitives that take multiple columns as input, such as `Trend`, the above options can be specified for each input by passing them in as a list. If only one option dictionary is given, it is used for all inputs. The length of the list provided must match the number of inputs the primitive takes.
```
features_list = ft.dfs(entityset=es,
target_dataframe_name='customers',
agg_primitives=['trend'],
trans_primitives=[],
primitive_options={
'trend': [{'ignore_columns': {'log': ['value_many_nans']}},
{'include_columns': {'customers': ['signup_date'],
'log': ['datetime']}}]
},
features_only=True)
features_list
```
Here, we pass in a list of primitive options for trend. We ignore the column `value_many_nans` for the first input
to `trend`, and include the column `signup_date` from `customers` for the second input.
<h1 align='center'> Project: Hotel booking demand exploration
# <font color='#347b98'> 1. Data source
- Data collected directly from Kaggle as a single file with almost 120 thousand rows and 32 columns.
- Original source link: https://www.sciencedirect.com/science/article/pii/S2352340918315191
- Kaggle source link: https://www.kaggle.com/jessemostipak/hotel-booking-demand
- The data is relatively clean and nicely structured.
```
# Imported all packages that are needed for the project
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
from pandas_profiling import ProfileReport
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_validate
from sklearn.pipeline import Pipeline
import warnings
warnings.filterwarnings('ignore')
```
## Load Data
```
df = pd.read_csv('path\hotel_bookings.csv')
df.head()
df.info() # Check all data types
```
## EDA
```
# Compare the numbers between cancelled and not cancelled.
plt.figure(figsize=(8,6))
sns.countplot(y='is_canceled', data=df);
plt.title('Number of cancellation or no cancellation');
```
- This bar chart shows the counts of cancelled and non-cancelled bookings. Cancelled bookings number roughly 60% of non-cancelled ones. Because this column is the target label, we do not need to resample it.
```
# Describe the cancellation distribution from month to month
sort_months = df.groupby('arrival_date_month')['is_canceled'].sum().sort_values()
df_months_cancelled = pd.DataFrame(sort_months).reset_index()
plt.figure(figsize=(8,6))
pic = sns.barplot(x='arrival_date_month', y='is_canceled',data=df_months_cancelled);
pic.set_title('Total numbers of cancellation in 3 years for each month');
pic.set_xticklabels(labels = df_months_cancelled['arrival_date_month'] ,rotation=-45);
pic.set(xlabel = 'Months', ylabel = 'Number of cancellation');
```
- Across the three years from 2015 to 2017, August has the largest number of cancellations, followed by July and May. January has the fewest.
```
# create linechart to identify the relationship between length of lead time and cancellation rate.
df_copy = df.copy()
bins = [-1,5,10,15,20,30,40,50,100,200,300,400,500,1000]
labels = ['less or equal to 5 days', '6-10 days','11-15 days', '16-20 days','21-30 days','31-40 days','41-50 days',
          '51-100 days','101-200 days', '201-300 days','301-400 days', '401-500 days','over 500 days']
df_copy['lead_time_binned'] = pd.cut(df_copy['lead_time'], bins = bins, labels = labels)
lead_time_cancel_count = df_copy.groupby('lead_time_binned')['is_canceled'].count()
lead_time_cancel_count = pd.DataFrame(lead_time_cancel_count).reset_index()
df_cancel = df_copy[df_copy['is_canceled'] == 1]
df_cancel = df_cancel.groupby('lead_time_binned')['is_canceled'].count()
df_cancel = pd.DataFrame(df_cancel).reset_index()
df_cancel_with_total = df_cancel.merge(lead_time_cancel_count, how='left',on='lead_time_binned')
df_cancel_with_total = df_cancel_with_total.rename(columns = {'is_canceled_x':'number_cancel','is_canceled_y':'total_number'})
df_cancel_with_total['cancel_rate'] = df_cancel_with_total['number_cancel'] / df_cancel_with_total['total_number']
plt.figure(figsize=(8,6))
pic_2 = sns.lineplot(x='lead_time_binned', y='cancel_rate', data=df_cancel_with_total, palette = 'RdBu_r');
pic_2.set_xticklabels(labels = df_cancel_with_total['lead_time_binned'], rotation=-45);
pic_2.set_title('cancellation rate classified by lead time categories');
```
- Binning the lead time into a handful of segments reduces the number of categories and makes it easier to see the trend in the cancellation ratio (# of cancellations / total # of reservations). Unsurprisingly, the longer the lead time, the more likely customers are to cancel their reservations. The 31-50 day and 301-500 day ranges are two flat stretches where the cancellation rate stays stable and even drops a bit.
```
# Indicate differences between hotel types with cancellation for stays in weekend nights.
plt.figure(figsize=(8,6))
sns.violinplot(x='hotel', y='stays_in_weekend_nights',data= df, hue='is_canceled',palette="Set3");
# # Indicate differences between hotel types with cancellation for stays in week nights.
plt.figure(figsize=(8,6))
sns.violinplot(x='hotel', y='stays_in_week_nights',data= df, hue='is_canceled',palette="Set3");
```
- The two violin plots above show that the two hotel types have different cancellation distributions for week nights and weekend nights.
- median (the white dot on the violin plot)
- interquartile range (the black bar in the center of the violin)
- the lower/upper adjacent values (the black lines stretching from the bar) mark the boundary used to identify outliers.
```
# Cancellation can be related to whether adults have babies and children or not.
def have_kids(series_1, series_2):
lst = []
for i in range(len(series_1)):
        if series_1[i] == 0 and series_2[i] == 0:  # use 'and' here; '&' binds tighter than '==' and gives the wrong check
lst.append(0)
else:
lst.append(1)
return lst
df_cy = df.copy()
df_cy['have_kids'] = pd.Series(have_kids(df['children'], df['babies']))
plt.figure(figsize=(8,6));
sns.countplot(x='is_canceled', hue='have_kids',data=df_cy, palette='RdBu_r');
```
- Create a new feature showing whether adults travel with babies, children, or both. The cancellation ratio does not appear to have a particularly strong relationship with this new feature.
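As a side note, the same flag can be built without an explicit loop. This is only a hedged, vectorized alternative; the few missing `children` values are treated as zero here:
```
df_cy['have_kids'] = ((df_cy['children'].fillna(0) > 0) | (df_cy['babies'] > 0)).astype(int)
```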
```
labels = ['Aviation','Complementary','Corporate','Direct','Groups','Offline TA/TO','Online TA','Undefined']
cancelled = df_cy[df_cy['is_canceled']==1].groupby('market_segment')['is_canceled'].value_counts().to_numpy()
not_cancelled = df_cy[df_cy['is_canceled']==0][['market_segment', 'is_canceled']] \
.append({'market_segment': 'Undefined', 'is_canceled': 0}, ignore_index=True) \
.groupby('market_segment')['is_canceled'].value_counts()
not_cancelled['Undefined'] = 0
not_cancelled.to_numpy()
width = 0.6
fig, ax = plt.subplots()
plt.xticks(rotation=-45)
ax.bar(labels, cancelled, width, label='Cancelled');
ax.bar(labels,not_cancelled, width, label='Not cancelled');
```
- The chart above should be a stacked bar plot showing the total numbers of cancelled and non-cancelled bookings for each market segment, but I could not get the bars to stack; a hedged sketch of one possible fix is shown below.
- The next few cells contain code I prepared while working on the stacked bar plot; I left them here as a reminder.
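Assuming the `labels`, `cancelled`, and `not_cancelled` values built above line up in the same market-segment order, one way to stack the bars is to pass the first series as `bottom=` in the second `bar` call (a sketch, not verified against this exact data):
```
width = 0.6
fig, ax = plt.subplots(figsize=(8, 6))
plt.xticks(rotation=-45)
ax.bar(labels, cancelled, width, label='Cancelled')
ax.bar(labels, not_cancelled, width, bottom=cancelled, label='Not cancelled')
ax.legend();
```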
```
cancelled = df_cy[df_cy['is_canceled']==1].groupby('market_segment')['is_canceled'].value_counts()#.to_numpy()
cancelled
not_cancelled= df_cy[df_cy['is_canceled']==0].groupby('market_segment')['is_canceled'].value_counts()# \
#.append(pd.Series([0],index=['Undefined']))#.to_numpy()
not_cancelled
#.reindex(df_cy['market_segment'].unique(), fill_value=0)
not_cancelled = df_cy[df_cy['is_canceled']==0][['market_segment', 'is_canceled']] \
.append({'market_segment': 'Undefined', 'is_canceled': 0}, ignore_index=True) \
.groupby('market_segment')['is_canceled'].value_counts()
not_cancelled['Undefined'] = 0
not_cancelled.to_numpy()
cancelled = df_cy[df_cy['is_canceled']==1].groupby('market_segment')['is_canceled'].value_counts().to_numpy()
cancelled
df_cy[df_cy['is_canceled']==0]['market_segment'].value_counts().append(pd.Series([0],index=['Undefined']))#.to_numpy()
df_cy[df_cy['is_canceled']==1]['market_segment'].value_counts()#.to_numpy()
list(set(df_cy['market_segment'].values))
```
## Missing Values Handling
```
df.isnull().sum()
```
- As we can see, there are many missing values, but they are concentrated in just a few columns.
- Drop `agent` and `company` directly because both are ID columns.
- Drop `country` for now, because where a customer comes from is not an obvious signal for whether they will cancel.
- The `children` column has only 4 missing values, a very small fraction of the data, so fill them with zero.
```
df_cop = df.copy()
df_cop = df_cop.drop(columns = ['company','agent','country','reservation_status','reservation_status_date'])
df_cop['children'].fillna(value=0, inplace=True)
df_cop.isnull().sum()
```
## Check correlation of numerical variables with the `is_canceled` column
As shown below, `arrival_date_week_number`, `children`, `stays_in_weekend_nights`, and `arrival_date_day_of_month` are less correlated with cancellation.
Some of these columns can still be converted and used to create new features, so I will feature-engineer some of them later on.
```
df_cop.corr()['is_canceled'].sort_values(ascending=False)
```
## Feature Engineering
- Convert categorical variables to numerical variables with one-hot encoding (a label encoder is not used because these variables have no ordinal relationship).
- Creating new features based on existing features.
```
# Create a columns called lead time binned to split lead time into groups.
bins = [-1,5,10,15,20,30,40,50,100,200,300,400,500,1000]
labels = ['less or equal to 5 days', '6-10 days','11-15 days', '16-20 days','21-30 days',
'31-40 days','41-50 days','51-100 dasy','101-200 days', '201-300 days','301-400 days',
'401-500 days','over 500 days']
df_cop['lead_time_binned'] = pd.cut(df_cop['lead_time'], bins = bins, labels = labels)
df_cop = pd.get_dummies(columns = ['hotel','arrival_date_month','meal','market_segment',
'distribution_channel','lead_time_binned','deposit_type','customer_type']
,data=df_cop,drop_first=True)
df_cop
# create a feature called family_size containing the total number of people in the booking
df_cop['family_size'] = df_cop['adults'] + df_cop['babies'] + df_cop['children']
# create a column which shows whether the reserved room type matches the assigned room type:
# 1 means it matches, 0 means it does not
def match_type(s1, s2):
    # compare the two room-type columns element by element
    return [1 if a == b else 0 for a, b in zip(s1, s2)]
df_cop['room_type_matches'] = match_type(df_cop['reserved_room_type'], df_cop['assigned_room_type'])
dff = df_cop
dff = dff.drop(columns=['reserved_room_type','assigned_room_type'])
dff = dff.drop(columns = ['deposit_type_Non Refund'])
dff
```
## machine learning
```
dff.info()
```
## Baseline Model - XGBoost / Logistic Regression
```
# Baseline1 - XGBoost
from xgboost import XGBClassifier
X = dff.drop(columns='is_canceled')
y = dff['is_canceled']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, stratify = y)
xgb = XGBClassifier()
xgb.fit(X_train, y_train)
y_pred_train = xgb.predict(X_train)
y_pred = xgb.predict(X_test)
print(classification_report(y_train, y_pred_train))
print(classification_report(y_test, y_pred))
# Baseline2 - Logistic regression
X = dff.drop(columns='is_canceled')
y = dff['is_canceled']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, stratify = y)
lr = LogisticRegression()
lr.fit(X_train, y_train)
y_train_pred = lr.predict(X_train)
y_pred = lr.predict(X_test)
print(classification_report(y_train, y_train_pred))
print(classification_report(y_test, y_pred))
lr.coef_[0]
pic = pd.DataFrame(lr.coef_[0], index=X.columns).sort_values(by=[0]).reset_index()
pic
# plot the coeff of each variables
plt.figure(figsize=(12,12));
sns.barplot(y='index', x=0,data=pic);
```
- The bar chart above shows the distribution of the coefficients. The total number of special requests has a significantly negative effect on the model, followed by booking changes. Among the positive coefficients, previous cancellations and the room-type match feature contribute the most.
# Use GridSearch to find the best hyperparameters for each model, then use a pipeline to double-check how each model performs
## Logistic Regression
```
# Use gridsearch to find best hyper parameters
X = dff.drop(columns='is_canceled')
y = dff['is_canceled']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, stratify = y)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
param = {
'penalty':['l1','l2']
}
lr = LogisticRegression()
gs = GridSearchCV(lr, param, cv=5, scoring='f1', n_jobs=-1, verbose=2)
gs.fit(X_train_scaled, y_train)
gs.best_params_
gsbm = gs.best_estimator_
y_train_pred = gsbm.predict(X_train_scaled)
y_pred = gsbm.predict(X_test_scaled)
print(classification_report(y_train, y_train_pred))
print(classification_report(y_test, y_pred))
# use pipeline to apply LR to predict cancellation.
X = dff.drop(columns='is_canceled')
y = dff['is_canceled']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, stratify = y)
# scaler = StandardScaler()
# X_train_scaled = scaler.fit_transform(X_train)
# X_test_scaled = scaler.transform(X_test)
pipe = Pipeline([
('scaling', StandardScaler()),
('LogisticRegression', LogisticRegression(penalty = 'l2'))
])
scores = cross_validate(pipe, X_train, y_train, cv=5, scoring=['f1', 'accuracy'])  # the pipeline scales internally, so pass the unscaled split
print(pd.DataFrame(scores))
pipe.fit(X_train, y_train)
y_train_pred = pipe.predict(X_train)
y_pred = pipe.predict(X_test)
print(classification_report(y_train, y_train_pred))
print(classification_report(y_test, y_pred))
```
- Compared to the baseline models, the f1-score, precision, and recall all increase significantly after scaling and cross-validation. The model has low bias and low variance, so logistic regression looks like a good fit for this problem.
## Decision Tree
```
# Use gridsearch to find best hyper parameters
X = dff.drop(columns='is_canceled')
y = dff['is_canceled']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=41, stratify = y)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
param = {
'criterion':['gini','entropy'],
'max_depth':[5,10,15,20,25,30],
    'min_samples_split': [2,3,4,5,8,10],
'min_samples_leaf':[1,2,3,4,5,6],
'max_features' :["sqrt", "log2"]
}
dt = DecisionTreeClassifier()
gs = GridSearchCV(dt, param, cv=5, scoring='f1', n_jobs=-1, verbose=2)
gs.fit(X_train_scaled, y_train)
gs.best_params_
gsbm = gs.best_estimator_
y_train_pred = gsbm.predict(X_train_scaled)
y_pred = gsbm.predict(X_test_scaled)
print(classification_report(y_train, y_train_pred))
print(classification_report(y_test, y_pred))
# use pipeline to apply DT to predict cancellation.
X = dff.drop(columns='is_canceled')
y = dff['is_canceled']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, stratify = y)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
pipe = Pipeline([
#('scaling', StandardScaler()),
('DecisionTree', DecisionTreeClassifier(criterion= 'entropy',max_depth= 25,max_features= 'sqrt',min_samples_leaf= 1,min_samples_split= 3))
])
scores = cross_validate(pipe, X_train_scaled, y_train, cv=5, scoring=['f1', 'accuracy'])
print(pd.DataFrame(scores))
pipe.fit(X_train_scaled, y_train)
y_train_pred = pipe.predict(X_train_scaled)
y_pred = pipe.predict(X_test_scaled)
print(classification_report(y_train, y_train_pred))
print(classification_report(y_test, y_pred))
```
- Decision trees overfit easily, and this case is no exception: the model scores highly on the training set but drops about 7% on the test set. Still, compared to logistic regression, the decision tree seems to perform somewhat better overall.
## Random Forest
```
# Use gridsearch to find best hyper parameters
X = dff.drop(columns='is_canceled')
y = dff['is_canceled']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, stratify = y)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
param = {
# 'criterion':['gini','entropy'],
'max_depth':[10,20,30,40,50,None],
'n_estimators':[100,200,300,400]
# 'min_samples_split': [2,3,4,5,8.10],
# 'min_samples_leaf':[1,2,3,4,5,6],
# 'max_features' :["sqrt", "log2"]
}
rf = RandomForestClassifier()
gs = GridSearchCV(rf, param, cv=5, scoring='f1', n_jobs=-1, verbose=2)
gs.fit(X_train_scaled, y_train)
gs.best_params_
gsbm = gs.best_estimator_
y_train_pred = gsbm.predict(X_train_scaled)
y_pred = gsbm.predict(X_test_scaled)
print(classification_report(y_train, y_train_pred))
print(classification_report(y_test, y_pred))
# use pipeline to apply RF to predict cancellation.
X = dff.drop(columns='is_canceled')
y = dff['is_canceled']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, stratify = y)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
pipe = Pipeline([
#('scaling', StandardScaler()),
('RandomForest', RandomForestClassifier(n_estimators = 300))
])
scores = cross_validate(pipe, X_train_scaled, y_train, cv=5, scoring=['f1', 'accuracy'])
print(pd.DataFrame(scores))
pipe.fit(X_train_scaled, y_train)
y_train_pred = pipe.predict(X_train_scaled)
y_pred = pipe.predict(X_test_scaled)
print(classification_report(y_train, y_train_pred))
print(classification_report(y_test, y_pred))
```
- Random forest gets a better score on the training set than the decision tree, but it has the same problem - overfitting.
## Interpretation - Permutation Importance
```
X = dff.drop(columns='is_canceled')
y = dff['is_canceled']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, stratify = y)
# scaler = StandardScaler()
# X_train_scaled = scaler.fit_transform(X_train)
# X_test_scaled = scaler.transform(X_test)
pipe = Pipeline([
('scaling', StandardScaler()),
('LogisticRegression', LogisticRegression(penalty = 'l2'))
])
scores = cross_validate(pipe, X_train, y_train, cv=5, scoring=['f1', 'accuracy'])  # the pipeline scales internally, so pass the unscaled split
print(pd.DataFrame(scores))
pipe.fit(X_train, y_train)
y_train_pred = pipe.predict(X_train)
y_pred = pipe.predict(X_test)
print(classification_report(y_train, y_train_pred))
print(classification_report(y_test, y_pred))
import eli5
from eli5.sklearn import PermutationImportance
perm = PermutationImportance(pipe, random_state=1)
perm.fit(X_train, y_train)
eli5.show_weights(perm, feature_names=X_train.columns.tolist())
```
- Even though the logistic regression model does not have the highest score on either the training or the test set, it is the most balanced model, with low bias and low variance; the other two overfit. Because of time limits, all three models could still be improved by doing more EDA and feature engineering, which should reduce the variance of the two tree-based models.
- The permutation importance chart shows that the arrival date week number affects the LR model the most, followed by the number of special requests; in other words, arrival week is the most predictive feature for this model. The number of special requests also stood out in the coefficient chart, where it had the most negative impact on the baseline LR model.
# End-to-End Demonstration: Google Classroom
*Goal*: run the Google Classroom Extractor and then upload the results into an `LMS` database.
## Software Requirements
1. Be sure to install Python 3.9; if you have multiple versions, make sure that the `python` command runs version 3.9.x. You can confirm your version by running `python --version` at a command prompt.
1. Microsoft SQL Server 2017 or 2019, in Windows or Linux.
## Getting Started
1. Confirm you have [poetry](https://python-poetry.org) installed (`poetry --version`).
1. Follow the [Google Classroom setup instructions](../google-classroom/README.md) in order to create a `service-account.json` file.
1. Follow the [notebook instructions](README.md) to install dependencies used by this notebook.
1. Create an `LMS` database in SQL Server (a minimal Python sketch for this step is shown below).
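If you would rather create the `LMS` database from Python than from SQL Server Management Studio, the following minimal sketch shows one way to do it. It assumes `pyodbc` and the "ODBC Driver 17 for SQL Server" ODBC driver are installed, that SQL Server is reachable on `localhost`, and that Windows integrated security is available; adjust the connection string for your environment.
```
# Hedged sketch: create the LMS database (raises an error if it already exists)
import pyodbc

connection = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;Trusted_Connection=yes",
    autocommit=True,  # CREATE DATABASE cannot run inside a transaction
)
connection.execute("CREATE DATABASE LMS;")
connection.close()
```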
```
# Load some utilities
from IPython.display import display, Markdown
# Setup logging
import logging
import sys
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
```
## Prepare Input Data
Update the variables in the next block as needed.
```
CLASSROOM_ACCOUNT = "admin@ibamonitoring.org"
START_DATE = "2020-08-17"
END_DATE = "2021-05-23"
LOG_LEVEL = "INFO"
OUTPUT_DIRECTORY = "gc-data"
SYNC_DATABASE_DIRECTORY=OUTPUT_DIRECTORY
DB_ENGINE = "mssql"
DB_SERVER = "localhost"
DB_NAME = "LMS"
DB_PORT = 1433
EXTRACT_ASSIGNMENTS = True
EXTRACT_ACTIVITIES = False
EXTRACT_ATTENDANCE = False
EXTRACT_GRADES = False
```
## Run the Google Classroom Extractor
```
from edfi_google_classroom_extractor.helpers.arg_parser import MainArguments as gc_args
from edfi_google_classroom_extractor import facade
arguments = gc_args(
classroom_account=CLASSROOM_ACCOUNT,
log_level=LOG_LEVEL,
output_directory=OUTPUT_DIRECTORY,
usage_start_date=START_DATE,
usage_end_date=END_DATE,
sync_database_directory=SYNC_DATABASE_DIRECTORY,
extract_assignments=EXTRACT_ASSIGNMENTS,
extract_activities=EXTRACT_ACTIVITIES,
extract_attendance=EXTRACT_ATTENDANCE,
extract_grades=EXTRACT_GRADES,
)
facade.run(arguments)
```
## Run the Learning Management System Data Store Loader (LMS-DS-Loader)
The default setup below uses Windows integrated security. For username/password security, please review the commented-out code.
```
from edfi_lms_ds_loader.helpers.argparser import MainArguments as lms_args
from edfi_lms_ds_loader import loader_facade
arguments = lms_args(
OUTPUT_DIRECTORY,
DB_ENGINE,
LOG_LEVEL
)
arguments.set_connection_string_using_integrated_security(
DB_SERVER,
DB_PORT,
DB_NAME,
)
# For password auth, comment out the line above and uncomment this one:
# arguments.set_connection_string(
# DB_SERVER,
# DB_PORT,
# DB_NAME,
# USERNAME,
# PASSWORD,
# )
loader_facade.run_loader(arguments)
```
This example creates a fake in-memory particle dataset and then loads it as a yt dataset using the `load_particles` function.
Our "fake" dataset will be numpy arrays filled with normally distributed random particle positions and uniform particle masses. Since real data is often scaled, I arbitrarily multiply the positions by 1e6 to show how to deal with scaled data.
```
import numpy as np
n_particles = 5000000
ppx, ppy, ppz = 1e6*np.random.normal(size=[3, n_particles])
ppm = np.ones(n_particles)
```
The `load_particles` function accepts a dictionary populated with particle data fields loaded in memory as numpy arrays or python lists:
```
data = {'particle_position_x': ppx,
'particle_position_y': ppy,
'particle_position_z': ppz,
'particle_mass': ppm}
```
To hook up with yt's internal field system, the dictionary keys must be 'particle_position_x', 'particle_position_y', 'particle_position_z', and 'particle_mass', as well as any other particle field provided by one of the particle frontends.
The `load_particles` function transforms the `data` dictionary into an in-memory yt `Dataset` object, providing an interface for further analysis with yt. The example below illustrates how to load the data dictionary we created above.
```
import yt
from yt.units import parsec, Msun
bbox = 1.1*np.array([[min(ppx), max(ppx)], [min(ppy), max(ppy)], [min(ppz), max(ppz)]])
ds = yt.load_particles(data, length_unit=parsec, mass_unit=1e8*Msun, bbox=bbox)
```
The `length_unit` and `mass_unit` are the conversion from the units used in the `data` dictionary to CGS. I've arbitrarily chosen one parsec and 10^8 Msun for this example.
The `n_ref` parameter controls how many particles must accumulate in an octree cell to trigger refinement. A larger `n_ref` will decrease Poisson noise at the cost of resolution in the octree.
Finally, the `bbox` parameter is a bounding box in the units of the dataset that contains all of the particles. This is used to set the size of the base octree block.
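For instance, one could pass all three arguments together; the sketch below simply reuses the arrays defined above, and `n_ref=64` is an arbitrary illustrative choice rather than a recommendation:
```
# Same data as above, but with an explicit octree refinement threshold.
ds_coarse = yt.load_particles(data, length_unit=parsec, mass_unit=1e8*Msun,
                              n_ref=64, bbox=bbox)
```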
This new dataset acts like any other yt `Dataset` object, and can be used to create data objects and query for yt fields. This example shows how to access "deposit" fields:
```
ad = ds.all_data()
# This is generated with "cloud-in-cell" interpolation.
cic_density = ad["deposit", "all_cic"]
# These three are based on nearest-neighbor cell deposition
nn_density = ad["deposit", "all_density"]
nn_deposited_mass = ad["deposit", "all_mass"]
particle_count_per_cell = ad["deposit", "all_count"]
ds.field_list
ds.derived_field_list
slc = yt.SlicePlot(ds, 2, ('deposit', 'all_cic'))
slc.set_width((8, 'Mpc'))
```
Finally, one can specify multiple particle types in the `data` dictionary by setting the field names to be field tuples (if no type is specified, the default particle field type is `"io"`):
```
n_star_particles = 1000000
n_dm_particles = 2000000
ppxd, ppyd, ppzd = 1e6*np.random.normal(size=[3, n_dm_particles])
ppmd = np.ones(n_dm_particles)
ppxs, ppys, ppzs = 5e5*np.random.normal(size=[3, n_star_particles])
ppms = 0.1*np.ones(n_star_particles)
data2 = {('dm', 'particle_position_x'): ppxd,
('dm', 'particle_position_y'): ppyd,
('dm', 'particle_position_z'): ppzd,
('dm', 'particle_mass'): ppmd,
('star', 'particle_position_x'): ppxs,
('star', 'particle_position_y'): ppys,
('star', 'particle_position_z'): ppzs,
('star', 'particle_mass'): ppms}
ds2 = yt.load_particles(data2, length_unit=parsec, mass_unit=1e8*Msun, n_ref=256, bbox=bbox)
```
We now have separate `"dm"` and `"star"` particles, as well as their deposited fields:
```
slc = yt.SlicePlot(ds2, 2, [('deposit', 'dm_cic'), ('deposit', 'star_cic')])
slc.set_width((8, 'Mpc'))
```
# Causal Effect for Logistic Regression
## Import and settings
In this example, we need to import `numpy`, `pandas`, and `graphviz` in addition to `lingam`.
```
import numpy as np
import pandas as pd
import graphviz
import lingam
from lingam.utils import make_prior_knowledge
print([np.__version__, pd.__version__, graphviz.__version__, lingam.__version__])
np.set_printoptions(precision=3, suppress=True)
np.random.seed(0)
```
## Utility function
We define a utility function to draw the directed acyclic graph.
```
def make_graph(adjacency_matrix, labels=None):
idx = np.abs(adjacency_matrix) > 0.01
dirs = np.where(idx)
d = graphviz.Digraph(engine='dot')
names = labels if labels else [f'x{i}' for i in range(len(adjacency_matrix))]
for to, from_, coef in zip(dirs[0], dirs[1], adjacency_matrix[idx]):
d.edge(names[from_], names[to], label=f'{coef:.2f}')
return d
```
## Test data
We use the 'Wine Quality Data Set' (https://archive.ics.uci.edu/ml/datasets/Wine+Quality).
```
X = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv', sep=';')
X['quality'] = np.where(X['quality']>5, 1, 0)
print(X.shape)
X.head()
```
## Causal Discovery
To run causal discovery, we create a `DirectLiNGAM` object and call the `fit` method.
```
pk = make_prior_knowledge(
n_variables=len(X.columns),
sink_variables=[11])
model = lingam.DirectLiNGAM(prior_knowledge=pk)
model.fit(X)
labels = [f'{i}. {col}' for i, col in enumerate(X.columns)]
make_graph(model.adjacency_matrix_, labels)
```
## Prediction Model
We create a logistic regression model because the target is a discrete variable.
```
from sklearn.linear_model import LogisticRegression
target = 11 # quality
features = [i for i in range(X.shape[1]) if i != target]
reg = LogisticRegression(solver='liblinear')
reg.fit(X.iloc[:, features], X.iloc[:, target])
```
## Identification of Feature with Greatest Causal Influence on Prediction
To identify the feature with the greatest intervention effect on the prediction, we create a `CausalEffect` object and call the `estimate_effects_on_prediction` method.
```
ce = lingam.CausalEffect(model)
effects = ce.estimate_effects_on_prediction(X, target, reg)
df_effects = pd.DataFrame()
df_effects['feature'] = X.columns
df_effects['effect_plus'] = effects[:, 0]
df_effects['effect_minus'] = effects[:, 1]
df_effects
max_index = np.unravel_index(np.argmax(effects), effects.shape)
print(X.columns[max_index[0]])
```
## Estimation of Optimal Intervention
The `estimate_optimal_intervention` method of `CausalEffect` is available only for linear regression models.
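Since our target here is discrete and we used logistic regression, we do not call it in this notebook. As a rough sketch only, with a continuous target and a linear model the call would look something like the code below; the intervention index, the desired value, and the exact argument order are assumptions to verify against the lingam documentation.
```
from sklearn.linear_model import LinearRegression
# Hypothetical continuous setting (sketch only): fit a linear model instead
# of the logistic regression used above.
lin_reg = LinearRegression()
lin_reg.fit(X.iloc[:, features], X.iloc[:, target])
# Illustrative numbers: intervene on feature 0, aiming for a target value of 1.
intervention_index = 0
desired_value = 1
ce.estimate_optimal_intervention(X, target, lin_reg, intervention_index, desired_value)
```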
```
!pip install dynet
!git clone https://github.com/neubig/nn4nlp-code.git
from __future__ import print_function
import time
from collections import defaultdict
import random
import sys
import argparse
import dynet as dy
import numpy as np
parser = argparse.ArgumentParser(description='BiLSTM variants.')
parser.add_argument('--teacher', action='store_true')
parser.add_argument('--perceptron', action='store_true')
parser.add_argument('--cost', action='store_true')
parser.add_argument('--hinge', action='store_true')
parser.add_argument('--schedule', action='store_true')
opts = ['--teacher']
args = parser.parse_args(opts)
use_teacher_forcing = args.teacher
use_structure_perceptron = args.perceptron
use_cost_augmented = args.cost
use_hinge = args.hinge
use_schedule = args.schedule
print("Training BiLSTM %s teacher forcing (%s schedule), %s structured perceptron loss, %s augmented cost, %s margin."
% ("with" if use_teacher_forcing else "without",
"with" if use_schedule else "without",
"with" if use_structure_perceptron else "without",
"with" if use_cost_augmented else "without",
"with" if use_hinge else "without"
)
)
# format of files: each line is "word1|tag1 word2|tag2 ..."
train_file = "nn4nlp-code/data/tags/train.txt"
dev_file = "nn4nlp-code/data/tags/dev.txt"
w2i = defaultdict(lambda: len(w2i))
t2i = defaultdict(lambda: len(t2i))
def read(fname):
"""
Read tagged file
"""
with open(fname, "r") as f:
for line in f:
words, tags = [], []
for wt in line.strip().split():
w, t = wt.split('|')
words.append(w2i[w])
tags.append(t2i[t])
yield (words, tags)
class AlwaysTrueSampler:
"""
An always-true sampler that only samples from the true distribution.
"""
def sample_true(self):
return True
def decay(self):
pass
class ScheduleSampler:
"""
A linear schedule sampler.
"""
def __init__(self, start_rate=1, min_rate=0.2, decay_rate=0.1):
self.min_rate = min_rate
self.iter = 0
self.decay_rate = decay_rate
self.start_rate = start_rate
self.reach_min = False
self.sample_rate = start_rate
def decay_func(self):
if not self.reach_min:
self.sample_rate = self.start_rate - self.iter * self.decay_rate
if self.sample_rate < self.min_rate:
self.reach_min = True
self.sample_rate = self.min_rate
def decay(self):
self.iter += 1
self.decay_func()
print("Sample rate is now %.2f" % self.sample_rate)
def sample_true(self):
return random.random() < self.sample_rate
# Read the data
train = list(read(train_file))
unk_word = w2i["<unk>"]
w2i = defaultdict(lambda: unk_word, w2i)
unk_tag = t2i["<unk>"]
start_tag = t2i["<start>"]
t2i = defaultdict(lambda: unk_tag, t2i)
nwords = len(w2i)
ntags = len(t2i)
dev = list(read(dev_file))
# DyNet Starts
model = dy.Model()
trainer = dy.AdamTrainer(model)
# Model parameters
EMBED_SIZE = 64
TAG_EMBED_SIZE = 16
HIDDEN_SIZE = 128
assert HIDDEN_SIZE % 2 == 0
# Lookup parameters for word embeddings
LOOKUP = model.add_lookup_parameters((nwords, EMBED_SIZE))
if use_teacher_forcing:
TAG_LOOKUP = model.add_lookup_parameters((ntags, TAG_EMBED_SIZE))
if use_schedule:
sampler = ScheduleSampler()
else:
sampler = AlwaysTrueSampler()
# Word-level BiLSTM is just a composition of two LSTMs.
if use_teacher_forcing:
    fwdLSTM = dy.SimpleRNNBuilder(1, EMBED_SIZE + TAG_EMBED_SIZE, HIDDEN_SIZE // 2, model)  # Forward LSTM (integer division keeps the size an int)
else:
    fwdLSTM = dy.SimpleRNNBuilder(1, EMBED_SIZE, HIDDEN_SIZE // 2, model)  # Forward LSTM
# We cannot insert the previous predicted tag into the backward LSTM anyway.
bwdLSTM = dy.SimpleRNNBuilder(1, EMBED_SIZE, HIDDEN_SIZE // 2, model)  # Backward LSTM
# Word-level softmax
W_sm = model.add_parameters((ntags, HIDDEN_SIZE))
b_sm = model.add_parameters(ntags)
# Calculate the scores for one example
def calc_scores(words):
"""
Calculate scores using BiLSTM.
:param words:
:return:
"""
dy.renew_cg()
word_embs = [LOOKUP[x] for x in words]
# Transduce all batch elements with an LSTM
fwd_init = fwdLSTM.initial_state()
fwd_word_reps = fwd_init.transduce(word_embs)
bwd_init = bwdLSTM.initial_state()
bwd_word_reps = bwd_init.transduce(reversed(word_embs))
combined_word_reps = [dy.concatenate([f, b]) for f, b in zip(fwd_word_reps, reversed(bwd_word_reps))]
# Softmax scores
W = dy.parameter(W_sm)
b = dy.parameter(b_sm)
scores = [dy.affine_transform([b, W, x]) for x in combined_word_reps]
return scores
def calc_scores_with_previous_tag(words, referent_tags=None):
"""
Calculate scores using previous tag as input. If the referent tags are provided, then we will sample from previous
referent tag or previous system prediction.
:param words:
:param referent_tags:
:return:
"""
dy.renew_cg()
word_embs = [LOOKUP[x] for x in words]
# Transduce all batch elements for the backward LSTM, using the original word embeddings.
bwd_init = bwdLSTM.initial_state()
bwd_word_reps = bwd_init.transduce(reversed(word_embs))
# Softmax scores
W = dy.parameter(W_sm)
b = dy.parameter(b_sm)
scores = []
# Transduce one by one for the forward LSTM
fwd_init = fwdLSTM.initial_state()
s_fwd = fwd_init
prev_tag = start_tag
index = 0
for word, bwd_word_rep in zip(word_embs, reversed(bwd_word_reps)):
# Concatenate word and tag representation just as training.
fwd_input = dy.concatenate([word, TAG_LOOKUP[prev_tag]])
s_fwd = s_fwd.add_input(fwd_input)
combined_rep = dy.concatenate([s_fwd.output(), bwd_word_rep])
score = dy.affine_transform([b, W, combined_rep])
prediction = np.argmax(score.npvalue())
if referent_tags:
if sampler.sample_true():
prev_tag = referent_tags[index]
else:
prev_tag = prediction
index += 1
else:
prev_tag = prediction
scores.append(score)
return scores
def mle(scores, tags):
losses = [dy.pickneglogsoftmax(score, tag) for score, tag in zip(scores, tags)]
return dy.esum(losses)
def hamming_cost(predictions, reference):
return sum(p != r for p, r in zip(predictions, reference))
def calc_sequence_score(scores, tags):
return dy.esum([score[tag] for score, tag in zip(scores, tags)])
def hamming_augmented_decode(scores, reference):
"""
Local decoding with hamming cost.
:param scores: Local decoding scores.
:param reference: Referent tag result.
:return:
"""
augmented_result = []
for score, referent_tag in zip(scores, reference):
origin_scores = score.npvalue()
cost = np.ones(origin_scores.shape)
cost[referent_tag] = 0
augmented_result.append(np.argmax(np.add(origin_scores, cost)))
return augmented_result
def perceptron_loss(scores, reference):
if use_cost_augmented:
predictions = hamming_augmented_decode(scores, reference)
else:
predictions = [np.argmax(score.npvalue()) for score in scores]
margin = dy.scalarInput(-2)
if predictions != reference:
reference_score = calc_sequence_score(scores, reference)
prediction_score = calc_sequence_score(scores, predictions)
if use_cost_augmented:
# One could actually get the hamming augmented value during decoding, but we didn't do it here for
# demonstration purposes.
hamming = dy.scalarInput(hamming_cost(predictions, reference))
loss = prediction_score + hamming - reference_score
else:
loss = prediction_score - reference_score
if use_hinge:
loss = dy.emax([dy.scalarInput(0), loss - margin])
return loss
else:
return dy.scalarInput(0)
# Calculate MLE loss for one example
def calc_loss(scores, tags):
if use_structure_perceptron:
return perceptron_loss(scores, tags)
else:
return mle(scores, tags)
# Calculate number of tags correct for one example
def calc_correct(scores, tags):
correct = [np.argmax(score.npvalue()) == tag for score, tag in zip(scores, tags)]
return sum(correct)
# Perform training
for ITER in range(100):
random.shuffle(train)
start = time.time()
this_sents = this_words = this_loss = this_correct = 0
for sid in range(0, len(train)):
this_sents += 1
if this_sents % int(1000) == 0:
print("train loss/word=%.4f, acc=%.2f%%, word/sec=%.4f" % (
this_loss / this_words, 100 * this_correct / this_words, this_words / (time.time() - start)),
file=sys.stderr)
# train on the example
words, tags = train[sid]
# choose whether to use teacher forcing
if use_teacher_forcing:
scores = calc_scores_with_previous_tag(words, tags)
else:
scores = calc_scores(words)
loss_exp = calc_loss(scores, tags)
this_correct += calc_correct(scores, tags)
this_loss += loss_exp.scalar_value()
this_words += len(words)
loss_exp.backward()
trainer.update()
# Decay the schedule sampler if using schedule sampling.
sampler.decay()
# Perform evaluation
start = time.time()
this_sents = this_words = this_loss = this_correct = 0
for words, tags in dev:
this_sents += 1
# choose whether to use teacher forcing
if use_teacher_forcing:
scores = calc_scores_with_previous_tag(words)
else:
scores = calc_scores(words)
loss_exp = calc_loss(scores, tags)
this_correct += calc_correct(scores, tags)
this_loss += loss_exp.scalar_value()
this_words += len(words)
print("dev loss/word=%.4f, acc=%.2f%%, word/sec=%.4f" % (
this_loss / this_words, 100 * this_correct / this_words, this_words / (time.time() - start)), file=sys.stderr)
```
# Introduction to Deep Learning with PyTorch
In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
## Neural Networks
Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.
<img src="assets/simple_neuron.png" width=400px>
Mathematically this looks like:
$$
\begin{align}
y &= f(w_1 x_1 + w_2 x_2 + b) \\
y &= f\left(\sum_i w_i x_i \right)
\end{align}
$$
With vectors this is the dot/inner product of two vectors:
$$
h = \begin{bmatrix}
x_1 \, x_2 \cdots x_n
\end{bmatrix}
\cdot
\begin{bmatrix}
w_1 \\
w_2 \\
\vdots \\
w_n
\end{bmatrix}
$$
### Stack them up!
We can assemble these unit neurons into layers and stacks, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.
<img src='assets/multilayer_diagram_weights.png' width=450px>
We can express this mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated
$$
\vec{h} = [h_1 \, h_2] =
\begin{bmatrix}
x_1 \, x_2 \cdots \, x_n
\end{bmatrix}
\cdot
\begin{bmatrix}
w_{11} & w_{12} \\
w_{21} &w_{22} \\
\vdots &\vdots \\
w_{n1} &w_{n2}
\end{bmatrix}
$$
The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply
$$
y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)
$$
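Just to make the matrix form above concrete, here is a minimal sketch with made-up sizes and random weights, using a sigmoid as the activation function $f$; this is only an illustration, and the rest of this notebook focuses on the tensor basics you need for it.
```
import torch

torch.manual_seed(7)            # for reproducibility of this sketch

x = torch.randn(1, 3)           # one sample with 3 input features
W1 = torch.randn(3, 2)          # input -> hidden weights
W2 = torch.randn(2, 1)          # hidden -> output weights
B1 = torch.randn(1, 2)          # hidden bias
B2 = torch.randn(1, 1)          # output bias

h = torch.sigmoid(torch.mm(x, W1) + B1)   # hidden layer activations
y = torch.sigmoid(torch.mm(h, W2) + B2)   # network output
print(y)
```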
## Tensors
It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.
<img src="assets/tensor_examples.svg" width=600px>
With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import torch
import helper
```
First, let's see how we work with PyTorch tensors. These are the fundamental data structures of neural networks and PyTorch, so it's important to understand how they work.
```
x = torch.rand(3, 2)
x
y = torch.ones(x.size())
y
z = x + y
z
```
In general, PyTorch tensors behave similarly to Numpy arrays. They are zero-indexed and support slicing.
```
z[0]
z[:, 1:]
```
Tensors typically come with two forms of a method: one that returns a new tensor, and one that performs the operation in place. That is, the values in memory for that tensor are changed without creating a new tensor. In-place methods always end with an underscore, for example `z.add()` versus `z.add_()`.
```
# Return a new tensor z + 1
z.add(1)
# z tensor is unchanged
z
# Add 1 and update z tensor in-place
z.add_(1)
# z has been updated
z
```
### Reshaping
Reshaping tensors is a really common operation. First, to get the size and shape of a tensor, use `.size()`. Then, to reshape a tensor, use `.resize_()`. Notice the underscore: reshaping with `.resize_()` happens in place.
```
z.size()
z
z.resize_(2, 3)
z
```
## Numpy to Torch and back
Converting between Numpy arrays and Torch tensors is super simple and useful. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
```
a = np.random.rand(4,3)
a
b = torch.from_numpy(a)
b
b.numpy()
```
The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
```
# Multiply PyTorch Tensor by 2, in place
b.mul_(2)
# Numpy array matches new values from Tensor
a
```
# Chapter 12 - Tuples
-------------------------------
A tuple is a group of one or more values that are treated as a whole. This chapter explains how to recognize and use tuples.
---
## Using tuples
A tuple is a group of one or more values, separated by commas. Normally, tuples are written with parentheses around them, but the parentheses are not actually necessary (except in circumstances where otherwise confusion would arise). For example:
```
t1 = ("apple", "orange")
print( type( t1 ) )
t2 = "banana", "cherry"
print( type( t2 ))
```
You can mix data types within tuples. You can even put tuples in tuples.
```
t1 = ("apple", 3, 1.4)
t2 = ("apple", 3, 1.4, ("banana", 5))
```
To find out how many elements a tuple contains, you can use the `len()` function.
```
t1 = ("apple", "orange")
t2 = ("apple", 3, 1.4)
t3 = ("apple", 3, 1.4, ("banana", 5))
print( len( t1 ) )
print( len( t2 ) )
print( len( t3 ) )
```
Note that in this example, the length of `t3` is 4, and not 5. The last element of `t3` is the tuple `("banana", 5)`, which counts as one element.
You can use a `for` loop to access individual elements of a tuple in sequence.
```
t1 = ("apple", 3, 1.4, ("banana", 5))
for element in t1:
print( element )
```
You can also use the `max()` and `min()` functions to get the maximum and the minimum, respectively, from a tuple of numbers. You can sum the elements of a tuple using the `sum()` function.
```
t1 = (327, 419, 101, 667, 925, 225)
print( max( t1 ) )
print( min( t1 ) )
print( sum( t1 ) )
```
You can test whether an element is part of a tuple by using the `in` operator.
```
t1 = ("apple", "banana", "cherry")
print( "banana" in t1 )
print( "orange" in t1 )
```
### Tuple assignments
As you have seen, you can create a tuple by assigning comma-separated values to a variable. Parentheses around it are optional. What if you want to create a tuple with only one element?
```
t1 = ("apple")
print( type( t1 ) )
```
As you can see, putting parentheses around the element does not work, as parentheses are optional. Python introduced a little trick to create a tuple with only one element, and that is that you indicate that it is a tuple by placing a comma after the value. This is rather unintuitive and I would even say "degenerate", but historically this was the solution that an early version of Python introduced, and for compatibility reasons it was not changed.
```
t1 = ("apple",)
print( type( t1 ) )
print( len( t1 ) )
```
Python allows you to place a tuple left of the assignment operator. This is an exception to the rule that only one variable can be placed left of an assignment. The values at the right side are copied one-by-one to the left side, left to right.
```
t1, t2 = "apple", "banana"
print( t1 )
print( t2 )
```
You can place parentheses around the values at the right side, and/or parentheses around the variables at the left side, which makes no difference.
If you place more variables at the left side than values at the right side, you get a runtime error. The same holds for placing fewer (unless you place just one, as shown above). However, you can create tuples at the right side by placing parentheses.
```
t1, t2 = ("apple", "banana"), "cherry"
print( t1 )
print( t2 )
```
### Tuple indices
Just like with strings, you can access the individual elements of a tuple using indices. Where with strings the individual elements are characters, for tuples they are the values. For instance:
```
t1 = ("apple", "banana", "cherry", "durian")
print( t1[2] )
```
You can even use slices, with the same rules as for strings (if you do not remember, check the previous chapter again). A slice of a tuple is another tuple. For example:
```
t1 = ("apple", "banana", "cherry", "durian", "orange")
print( t1[1:4] )
```
Since tuples are indexed, an alternative for a `for` loop to access the individual elements of a tuple is to loop over the indices.
```
t1 = ("apple", "banana", "cherry", "durian", "orange")
i = 0
while i < len( t1 ):
print( t1[i] )
i += 1
```
**Exercise**: Write a `for` loop that displays all the values of the elements of a tuple, and also displays their index.
```
# Values with index.
t1 = ("apple", "banana", "cherry", "durian", "orange")
```
### Tuple comparisons
You can compare two tuples with each other by using the regular comparison operators. These operators first compare the first elements of the two tuples. If these are different, then the comparison will determine which one is "lower" based on the rules for their data types, and result in `True` or `False`. If they are equal, the second elements will be compared, etcetera.
```
t1 = ( "apple", "banana" )
t2 = ( "apple", "banana" )
t3 = ( "apple", "cherry" )
t4 = ( "apple", "banana", "cherry" )
print( t1 == t2 )
print( t1 < t3 )
print( t1 > t4 )
print( t3 > t4 )
```
### Tuple return values
In the chapter about functions, you learned that functions can return multiple values. If you code something like that, what actually happens is that the function is returning a tuple. To deal with such return values, you assign them to variables as explained under "tuple assignments" above.
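For instance, here is a small illustration; the function `smallest_and_largest()` below is just an example made up for this purpose:
```
# Unpacking a tuple returned by a function.
def smallest_and_largest( numlist ):
    return min( numlist ), max( numlist )

low, high = smallest_and_largest( (327, 419, 101, 667, 925, 225) )
print( low )
print( high )
```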
---
## Tuples are immutable
Just like strings, tuples are immutable. This means that you cannot assign a new value to one element of a tuple. The example below will produce a runtime error when run.
```
t1 = ("apple", "banana", "cherry", "durian")
t1[0] = "orange"
```
---
## Applications of tuples
Tuples are not used often in Python code (except as return values of functions). A logical application of tuples would be to deal with values that always occur in small collections. However, object orientation offers many tools and techniques to deal with such small collections, which means that programmers usually revert to object orientation when they need something like that. Object orientation follows in a later chapter.
For the moment, here is an example of the use of tuples in an application. Suppose that you have to write a program that deals with geometric figures in 2-dimensional space. A concept that you need is that of a point: a location in 2D space that is identified by two coordinates. Rather than write functions that always require a separate X-coordinate and a separate Y-coordinate, you can specify that coordinates are always communicated in the form of tuples.
```
from math import sqrt
# Returns the distance between two points in 2-dimensional space.
# The points are the parameters of the function; each point is a tuple of two numeric values.
def distance( p1, p2 ):
return sqrt( (p1[0] - p2[0])**2 + (p1[1] - p2[1])**2 )
point1 = (1,2)
point2 = (5,5)
print( "Distance between", point1, "and", point2, "is", distance( point1, point2 ) )
```
An advantage of using tuples to communicate coordinates is that it is relatively easy to write functions that can deal with coordinates in higher-dimensional spaces too.
```
from math import sqrt
# Distance between two points in N-dimensional space.
# The points should have the same dimension, i.e., they are tuples of
# numeric values, and they should have the same length.
def distance( p1, p2 ):
total = 0
for i in range( len( p1 ) ):
total += (p1[i] - p2[i])**2
return sqrt( total )
# 1-dimensional space
point1 = (1,)
point2 = (5,)
print( "1D: Distance between", point1, "and", point2, "is", distance( point1, point2 ) )
# 2-dimensional space
point1 = (1,2)
point2 = (5,5)
print( "2D: Distance between", point1, "and", point2, "is", distance( point1, point2 ) )
# 3-dimensional space
point1 = (1,2,4)
point2 = (5,5,8)
print( "3D: Distance between", point1, "and", point2, "is", distance( point1, point2 ) )
```
---
## What you learned
In this chapter, you learned about:
- Tuples
- Tuple assignments
- Tuple indices
- Immutability of tuples
- Applications of tuples
-------------
## Exercises
### Exercise 12.1
A complex number is a number of the form `a + bi`, whereby `a` and `b` are constants, and `i` is a special value that is defined as the square root of `-1`. Of course, you never try to actually calculate what the square root of `-1` is, as that gives a runtime error; in complex numbers, you always let the `i` remain. For instance, the complex number `3 + 2i` cannot be simplified any further. Addition of two complex numbers `a + bi` and `c + di` is defined as `(a + c) + (b + d)i`. Represent a complex number as a tuple of two numeric values, and create a function that calculates the addition of two complex numbers.
```
# Adding complex numbers.
```
### Exercise 12.2
Multiplication of two complex numbers `a + bi` and `c + di` is defined as `(a*c - b*d) + (a*d + b*c)i`. Write a function that calculates the multiplication of two complex numbers.
```
# Multiplying complex numbers.
```
### Exercise 12.3
Consider the definition of a new datatype. The new datatype is the "inttuple". An inttuple is defined as being either an integer, or a tuple consisting of inttuples. You see an example of an inttuple in the code block below. Write a function that prints all the integer values stored in an inttuple. Hint: Since the inttuple is defined recursively, a recursive function is probably the right approach. Use the `isinstance()` function (explained in the chapter on functions) to determine whether you are dealing with an integer or a tuple. If you do this correctly, the function will print the numbers 1 to 20 sequentially.
```
# Processing inttuples
inttuple = ( 1, 2, ( 3, 4 ), 5, ( ( 6, 7, 8, ( 9, 10 ), 11 ), 12, 13 ), ( ( 14, 15, 16 ), ( 17, 18, 19, 20 ) ) )
```
-------------------------------------------------------------
End of Chapter 12. Version 1.2.
# Test stuff and explore FCN with VGG16 example
```
import os.path
import tensorflow as tf
import helper
import cv2
import numpy as np
import scipy.misc
import warnings
from distutils.version import LooseVersion
import project_tests as tests
from IPython.display import display, HTML
display(HTML(data="""
<style>
div#notebook-container { width: 75%; }
div#menubar-container { width: 75%; }
div#maintoolbar-container { width: 75%; }
</style>"""))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
data_dir = './data'
runs_dir = './runs'
# Download pretrained vgg model
helper.maybe_download_pretrained_vgg(data_dir)
# Path to vgg model
vgg_tag = 'vgg16'
vgg_path = os.path.join(data_dir, 'vgg')
def getGraph(sess):
tf.saved_model.loader.load(sess, [vgg_tag], vgg_path)
graph = tf.get_default_graph()
return graph
```
## Print all variables / layer names in graph
```
tf.reset_default_graph()
with tf.Session() as sess:
graph = getGraph(sess)
saver = tf.train.Saver()
saver.restore(sess, "./runs/semantic_segmentation_model.ckpt")
for i in graph.get_operations():
print("{}\n\t{}".format(i.name, i.values()))
```
## Get trainable variables
```
for var in tf.trainable_variables():
print(var)
```
## Get global variables
```
for var in tf.global_variables():
print(var)
```
## Test inference on video
Necessary imports and initialization:
```
from moviepy.editor import VideoFileClip
from IPython.display import HTML
from moviepy.editor import *
import helper
import project_tests as tests
from main import load_vgg, layers, optimize, train_nn, test_nn
tf.reset_default_graph()
model_checkpoint = "./runs/semantic_segmentation_model.ckpt"
num_classes = 2
image_shape = (160, 576)
video_fps = 30
video_output_folder = "videos_output/"
videos = [
"data/project_video.mp4",
"data/challenge_video.mp4",
"data/harder_challenge_video.mp4"
]
def process_video_image(sess, logits, keep_prob, image_input_op, image_src, image_shape):
# first crop image to correct aspect of `image_shape`
image_src_shape = image_src.shape
new_y = (image_shape[0] * image_src_shape[1]) // image_shape[1]
image_crop = image_src[new_y:,:]
image_resized = scipy.misc.imresize(image_crop, image_shape)
feed_dict = {keep_prob: 1.0,
image_input_op: [image_resized]}
im_softmax = sess.run([tf.nn.softmax(logits)],
feed_dict=feed_dict)
im_softmax = im_softmax[0][:, 1].reshape(image_shape[0], image_shape[1])
segmentation = (im_softmax > 0.5).reshape(image_shape[0], image_shape[1], 1)
mask = np.dot(segmentation, np.array([[0, 255, 0, 127]]))
mask = scipy.misc.toimage(mask, mode="RGBA")
street_im = scipy.misc.toimage(image_resized)
street_im.paste(mask, box=None, mask=mask)
return np.asarray(street_im)
video_fps = 10
clip_part = (0.0, 6.0)
tf.reset_default_graph()
# TF placeholders
correct_label = tf.placeholder(tf.int32, [None, None, None, num_classes], name="correct_label")
learning_rate = tf.placeholder(tf.float32, name="learning_rate")
with tf.Session() as sess:
# Path to vgg model
vgg_path = os.path.join(data_dir, "vgg")
# Build NN using load_vgg, layers
image_input, keep_prob, vgg_layer3_out, vgg_layer4_out, vgg_layer7_out = load_vgg(sess, vgg_path)
last_layer = layers(vgg_layer3_out, vgg_layer4_out, vgg_layer7_out, num_classes)
tvars = tf.trainable_variables()
trainable_vars = [var for var in tvars if "fc6" not in var.name and
"fc7" not in var.name]
# Set-up optimizer
return_list = optimize(last_layer, correct_label, learning_rate, num_classes, trainable_vars)
logits_op, train_op, loss_op, mean_iou_value, mean_iou_update_op = return_list
    graph = tf.get_default_graph()  # needed below to look up tensors by name
saver = tf.train.Saver()
try:
saver.restore(sess, model_checkpoint)
except:
print("Couldn't load model last checkpoint ({}).".format(model_checkpoint))
print("You need to either provide the required checkpoint files or train the network from scratch!")
input_image_op = graph.get_tensor_by_name("image_input:0")
logits_op = graph.get_tensor_by_name("decoder_logits:0")
keep_prob = graph.get_tensor_by_name("keep_prob:0")
for video in videos:
if not os.path.exists(video_output_folder):
os.makedirs(video_output_folder)
result_path = video_output_folder + os.path.basename(video)
if not os.path.isfile(video):
print("Video {} doesn't exist!".format(video))
else:
clip1 = VideoFileClip(video) #.subclip(*clip_part)
video_slowdown_factor = video_fps / clip1.fps
clip1 = clip1.fx(vfx.speedx, video_slowdown_factor)
white_clip = clip1.fl_image(lambda img: process_video_image(sess, logits_op, keep_prob, input_image_op, img, image_shape))
%time white_clip.write_videofile(result_path, audio=False, fps=video_fps)
HTML("""<video width="960" height="540" controls><source src="{0}"></video>""".format("videos_output/harder_challenge_video.mp4"))
tf.reset_default_graph()
# TF placeholders
correct_label = tf.placeholder(tf.int32, [None, None, None, num_classes], name="correct_label")
learning_rate = tf.placeholder(tf.float32, name="learning_rate")
# Create a builder
builder = tf.saved_model.builder.SavedModelBuilder('./saved/')
with tf.Session() as sess:
# Path to vgg model
vgg_path = os.path.join(data_dir, "vgg")
# Build NN using load_vgg, layers
image_input, keep_prob, vgg_layer3_out, vgg_layer4_out, vgg_layer7_out = load_vgg(sess, vgg_path)
last_layer = layers(vgg_layer3_out, vgg_layer4_out, vgg_layer7_out, num_classes)
tvars = tf.trainable_variables()
trainable_vars = [var for var in tvars if "fc6" not in var.name and
"fc7" not in var.name]
# Set-up optimizer
return_list = optimize(last_layer, correct_label, learning_rate, num_classes, trainable_vars)
logits_op, train_op, loss_op, mean_iou_value, mean_iou_update_op = return_list
#graph = tf.get_default_graph()
saver = tf.train.Saver()
try:
saver.restore(sess, model_checkpoint)
except:
print("Couldn't load model last checkpoint ({}).".format(model_checkpoint))
print("You need to either provide the required checkpoint files or train the network from scratch!")
builder.add_meta_graph_and_variables(sess,
[tf.saved_model.tag_constants.TRAINING],
signature_def_map=None,
assets_collection=None)
builder.save()
```
# Using Schemas in Kosh
This notebook shows how to use schemas in Kosh to validate your metadata.
```
import kosh
import os
kosh_example_sql_file = "kosh_schemas_example.sql"
# Create and open a new store (erase if exists)
store = kosh.create_new_db(kosh_example_sql_file)
# create a dataset
dataset = store.create()
```
Let's create a schema to validate our metadata.
A schema object takes two dictionaries as input: one for the required attributes and one for the optional attributes.
For each attribute we need to provide a validation function or valid values:
- If the validation is a callable, it is applied to the attribute's value and must return True.
- If the validation is an instance of `type`, the attribute's value must be an instance of that type.
- Otherwise, the attribute's value must match the validation value.
It is also possible to declare multiple validations for a single attribute by defining them as a list; if any one of them passes, the attribute is considered valid.
Let's create a validation schema that requires our datasets to have the attribute "must" with any value, and allows an optional attribute "maybe" that must be one of 1 or "yes" (True also passes, since True == 1 in Python).
```
required = {"must": None}
optional = {"maybe": [1, "yes"]}
schema = kosh.KoshSchema(required, optional)
```
Our current (blank) dataset will not validate; we can try it as follows:
```
try:
schema.validate(dataset)
except ValueError as err:
print("As expected, we failed to validate with error:", err)
# Let's add the attribute
dataset.must = "I have must"
# Validation now passes
schema.validate(dataset)
```
Now let's require `must` to be an integer
```
required = {"must": int}
optional = {"maybe": [1, "yes"]}
schema = kosh.KoshSchema(required, optional)
# it does not validate anymore
try:
schema.validate(dataset)
except ValueError as err:
print("As expected, it now fails to validate with error:", err)
dataset
# Let's fix this
dataset.must = 5
# It now validates
schema.validate(dataset)
# Note that any extra attribute is ok but will not be checked for validation
dataset.any = "hi"
schema.validate(dataset)
# We can now enforce this schema subsequently
dataset.schema = schema
# Now we cannot set `must` to a bad value
try:
dataset.must = 7.6
except ValueError as err:
print("Failed to set attribute as it did not validate (must be int). Error:", err)
# Still at 5
dataset.must
```
Note that when setting the schema attribute, all existing attributes of the dataset will be checked.
```
dataset2 = store.create()
dataset2.must = 7.6
try:
dataset2.schema = schema
except:
pass
# Similarly optional attribute must validate
try:
dataset.maybe = "b"
except ValueError as err:
print("Optional attributes must validate as well. Error:", err)
dataset.maybe = "yes"
dataset.maybe = 1
```
Sometimes we need more complex validation; let's create a simple validation function
```
def isYes(value):
if isinstance(value, str):
return value.lower()[0] == "y"
elif isinstance(value, int):
return value == 1
required = {"must": int}
optional = {"maybe": isYes}
schema = kosh.KoshSchema(required, optional)
dataset.schema = schema
dataset.maybe = "y"
```
We can also pass a list of possible validations
```
def isNo(value):
if isinstance(value, str):
return value.lower()[0] == "n"
elif isinstance(value, int):
return value == 0
required = {"must": int}
optional = {"maybe": [isYes, isNo, "oui"]}
schema = kosh.KoshSchema(required, optional)
dataset.schema = schema
dataset.maybe = "N"
dataset.maybe = 'No'
dataset.maybe = 'oui'
dataset.maybe = 'Yes'
```
|
github_jupyter
|
import kosh
import os
kosh_example_sql_file = "kosh_schemas_example.sql"
# Create and open a new store (erase if exists)
store = kosh.create_new_db(kosh_example_sql_file)
# create a dataset
dataset = store.create()
required = {"must": None}
optional = {"maybe": [1, "yes"]}
schema = kosh.KoshSchema(required, optional)
try:
schema.validate(dataset)
except ValueError as err:
print("As expected, we failed to validate with error:", err)
# Let's add the attribute
dataset.must = "I have must"
# Validation now passes
schema.validate(dataset)
required = {"must": int}
optional = {"maybe": [1, "yes"]}
schema = kosh.KoshSchema(required, optional)
# it does not validate anymore
try:
schema.validate(dataset)
except ValueError as err:
print("As expected, it now fails to validate with error:", err)
dataset
# Let's fix this
dataset.must = 5
# It now validates
schema.validate(dataset)
# Note that any extra attribute is ok but will not be checked for validation
dataset.any = "hi"
schema.validate(dataset)
# We can now enforce this schema subsequently
dataset.schema = schema
# Now we cannot set `must` to a bad value
try:
dataset.must = 7.6
except ValueError as err:
print("Failed to set attribute as it did not validate (must be int). Error:", err)
# Still at 5
dataset.must
dataset2 = store.create()
dataset2.must = 7.6
try:
dataset2.schema = schema
except:
pass
# Similarly optional attribute must validate
try:
dataset.maybe = "b"
except ValueError as err:
print("Optional attributes must validate as well. Error:", err)
dataset.maybe = "yes"
dataset.maybe = 1
def isYes(value):
if isinstance(value, str):
return value.lower()[0] == "y"
elif isinstance(value, int):
return value == 1
required = {"must": int}
optional = {"maybe": isYes}
schema = kosh.KoshSchema(required, optional)
dataset.schema = schema
dataset.maybe = "y"
def isNo(value):
if isinstance(value, str):
return value.lower()[0] == "n"
elif isinstance(value, int):
return value == 0
required = {"must": int}
optional = {"maybe": [isYes, isNo, "oui"]}
schema = kosh.KoshSchema(required, optional)
dataset.schema = schema
dataset.maybe = "N"
dataset.maybe = 'No'
dataset.maybe = 'oui'
dataset.maybe = 'Yes'
| 0.407333 | 0.929184 |
# Navigation
---
In this notebook, you will learn how to use the Unity ML-Agents environment for the first project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893).
### 1. Start the Environment
We begin by importing some necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
```
from unityagents import UnityEnvironment
import numpy as np
```
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.
- **Mac**: `"path/to/Banana.app"`
- **Windows** (x86): `"path/to/Banana_Windows_x86/Banana.exe"`
- **Windows** (x86_64): `"path/to/Banana_Windows_x86_64/Banana.exe"`
- **Linux** (x86): `"path/to/Banana_Linux/Banana.x86"`
- **Linux** (x86_64): `"path/to/Banana_Linux/Banana.x86_64"`
- **Linux** (x86, headless): `"path/to/Banana_Linux_NoVis/Banana.x86"`
- **Linux** (x86_64, headless): `"path/to/Banana_Linux_NoVis/Banana.x86_64"`
For instance, if you are using a Mac, then you downloaded `Banana.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:
```
env = UnityEnvironment(file_name="Banana.app")
```
```
env = UnityEnvironment(file_name="./Banana_Linux/Banana.x86_64")
```
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
```
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
```
### 2. Examine the State and Action Spaces
The simulation contains a single agent that navigates a large environment. At each time step, it has four actions at its disposal:
- `0` - walk forward
- `1` - walk backward
- `2` - turn left
- `3` - turn right
The state space has `37` dimensions and contains the agent's velocity, along with ray-based perception of objects around the agent's forward direction. A reward of `+1` is provided for collecting a yellow banana, and a reward of `-1` is provided for collecting a blue banana.
Run the code cell below to print some information about the environment.
```
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents in the environment
print('Number of agents:', len(env_info.agents))
# number of actions
action_size = brain.vector_action_space_size
print('Number of actions:', action_size)
# examine the state space
state = env_info.vector_observations[0]
print('States look like:', state)
state_size = len(state)
print('States have length:', state_size)
```
### 3. Take Random Actions in the Environment
In the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.
Once this cell is executed, you will watch the agent's performance as it selects an action (uniformly) at random at each time step. A window should pop up that allows you to observe the agent as it moves through the environment.
Of course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!
```
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
state = env_info.vector_observations[0] # get the current state
score = 0 # initialize the score
while True:
action = np.random.randint(action_size) # select an action
env_info = env.step(action)[brain_name] # send the action to the environment
next_state = env_info.vector_observations[0] # get the next state
reward = env_info.rewards[0] # get the reward
done = env_info.local_done[0] # see if episode has finished
score += reward # update the score
state = next_state # roll over the state to next time step
if done: # exit loop if episode finished
break
print("Score: {}".format(score))
```
When finished, you can close the environment.
```
env.close()
```
### 4. It's Your Turn!
Now it's your turn to train your own agent to solve the environment! When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:
```python
env_info = env.reset(train_mode=True)[brain_name]
```
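A minimal sketch of what such a training loop might look like is shown below. Note that the `Agent` object (with `act` and `step` methods), the epsilon schedule, and the +13 target score are assumptions for illustration and are not defined in this notebook:

```python
from collections import deque
import numpy as np

def train(agent, n_episodes=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995):
    """Illustrative training loop for a hypothetical value-based agent."""
    scores = []                          # score of each episode
    scores_window = deque(maxlen=100)    # last 100 scores
    eps = eps_start                      # exploration rate
    for i_episode in range(1, n_episodes + 1):
        env_info = env.reset(train_mode=True)[brain_name]        # reset in training mode
        state = env_info.vector_observations[0]
        score = 0
        while True:
            action = agent.act(state, eps)                        # assumed epsilon-greedy policy
            env_info = env.step(action)[brain_name]
            next_state = env_info.vector_observations[0]
            reward = env_info.rewards[0]
            done = env_info.local_done[0]
            agent.step(state, action, reward, next_state, done)   # assumed learning update
            state = next_state
            score += reward
            if done:
                break
        scores.append(score)
        scores_window.append(score)
        eps = max(eps_end, eps_decay * eps)                       # decay exploration over time
        if np.mean(scores_window) >= 13.0:                        # common target for this project
            print("Environment solved in {} episodes".format(i_episode))
            break
    return scores
```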
|
github_jupyter
|
from unityagents import UnityEnvironment
import numpy as np
env = UnityEnvironment(file_name="Banana.app")
env = UnityEnvironment(file_name="./Banana_Linux/Banana.x86_64")
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents in the environment
print('Number of agents:', len(env_info.agents))
# number of actions
action_size = brain.vector_action_space_size
print('Number of actions:', action_size)
# examine the state space
state = env_info.vector_observations[0]
print('States look like:', state)
state_size = len(state)
print('States have length:', state_size)
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
state = env_info.vector_observations[0] # get the current state
score = 0 # initialize the score
while True:
action = np.random.randint(action_size) # select an action
env_info = env.step(action)[brain_name] # send the action to the environment
next_state = env_info.vector_observations[0] # get the next state
reward = env_info.rewards[0] # get the reward
done = env_info.local_done[0] # see if episode has finished
score += reward # update the score
state = next_state # roll over the state to next time step
if done: # exit loop if episode finished
break
print("Score: {}".format(score))
env.close()
env_info = env.reset(train_mode=True)[brain_name]
| 0.256273 | 0.983832 |
<h2 style='color:blue' align='center'>Transfer learning in image classification</h2>
**In this notebook we will use transfer learning: we take a pre-trained model from Google's TensorFlow Hub and re-train it on the flowers dataset. Using a pre-trained model saves a lot of time and computational budget for the new classification problem at hand.**
```
# Install tensorflow_hub using pip install tensorflow_hub first
import numpy as np
import cv2
import PIL.Image as Image
import os
import matplotlib.pylab as plt
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
```
**Make predictions using the ready-made model (without any training)**
```
IMAGE_SHAPE = (224, 224)
classifier = tf.keras.Sequential([
hub.KerasLayer("https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/4", input_shape=IMAGE_SHAPE+(3,))
])
gold_fish = Image.open("goldfish.jpg").resize(IMAGE_SHAPE)
gold_fish
gold_fish = np.array(gold_fish)/255.0
gold_fish.shape
gold_fish[np.newaxis, ...]
result = classifier.predict(gold_fish[np.newaxis, ...])
result.shape
predicted_label_index = np.argmax(result)
predicted_label_index
# tf.keras.utils.get_file('ImageNetLabels.txt','https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt')
image_labels = []
with open("ImageNetLabels.txt", "r") as f:
image_labels = f.read().splitlines()
image_labels[:5]
image_labels[predicted_label_index]
```
<h3 style='color:purple'>Load flowers dataset</h3>
```
dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
data_dir = tf.keras.utils.get_file('flower_photos', origin=dataset_url, cache_dir='.', untar=True)
# cache_dir indicates where to download data. I specified . which means current directory
# untar true will unzip it
data_dir
import pathlib
data_dir = pathlib.Path(data_dir)
data_dir
list(data_dir.glob('*/*.jpg'))[:5]
image_count = len(list(data_dir.glob('*/*.jpg')))
print(image_count)
roses = list(data_dir.glob('roses/*'))
roses[:5]
Image.open(str(roses[1]))
tulips = list(data_dir.glob('tulips/*'))
Image.open(str(tulips[0]))
```
<h3 style='color:purple'>Read flower images from disk into numpy arrays using OpenCV</h3>
```
flowers_images_dict = {
'roses': list(data_dir.glob('roses/*')),
'daisy': list(data_dir.glob('daisy/*')),
'dandelion': list(data_dir.glob('dandelion/*')),
'sunflowers': list(data_dir.glob('sunflowers/*')),
'tulips': list(data_dir.glob('tulips/*')),
}
flowers_labels_dict = {
'roses': 0,
'daisy': 1,
'dandelion': 2,
'sunflowers': 3,
'tulips': 4,
}
flowers_images_dict['roses'][:5]
str(flowers_images_dict['roses'][0])
img = cv2.imread(str(flowers_images_dict['roses'][0]))
img.shape
cv2.resize(img,(224,224)).shape
X, y = [], []
for flower_name, images in flowers_images_dict.items():
for image in images:
img = cv2.imread(str(image))
resized_img = cv2.resize(img,(224,224))
X.append(resized_img)
y.append(flowers_labels_dict[flower_name])
X = np.array(X)
y = np.array(y)
```
<h3 style='color:purple'>Train test split</h3>
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
```
<h3 style='color:purple'>Preprocessing: scale images</h3>
```
X_train_scaled = X_train / 255
X_test_scaled = X_test / 255
```
**Make predictions using the pre-trained model on the new flowers dataset**
```
X[0].shape
IMAGE_SHAPE+(3,)
x0_resized = cv2.resize(X[0], IMAGE_SHAPE)
x1_resized = cv2.resize(X[1], IMAGE_SHAPE)
x2_resized = cv2.resize(X[2], IMAGE_SHAPE)
plt.axis('off')
plt.imshow(X[0])
plt.axis('off')
plt.imshow(X[1])
plt.axis('off')
plt.imshow(X[2])
predicted = classifier.predict(np.array([x0_resized, x1_resized, x2_resized]))
predicted = np.argmax(predicted, axis=1)
predicted
image_labels[795]
```
<h3 style='color:purple'>Now take the pre-trained model and retrain it using the flower images</h3>
```
feature_extractor_model = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
pretrained_model_without_top_layer = hub.KerasLayer(
feature_extractor_model, input_shape=(224, 224, 3), trainable=False)
num_of_flowers = 5
model = tf.keras.Sequential([
pretrained_model_without_top_layer,
tf.keras.layers.Dense(num_of_flowers)
])
model.summary()
model.compile(
optimizer="adam",
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['acc'])
model.fit(X_train_scaled, y_train, epochs=5)
model.evaluate(X_test_scaled,y_test)
```
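As a quick sanity check (a minimal sketch reusing objects already defined above), we can map the retrained model's predictions back to flower names for a few test images:

```
# Invert the label dictionary defined earlier to map indices back to flower names
label_to_flower = {v: k for k, v in flowers_labels_dict.items()}

# Predict on a handful of scaled test images and compare with the true labels
sample_logits = model.predict(X_test_scaled[:5])
sample_preds = np.argmax(sample_logits, axis=1)
print("Predicted:", [label_to_flower[p] for p in sample_preds])
print("True:     ", [label_to_flower[t] for t in y_test[:5]])
```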
|
github_jupyter
|
# Install tensorflow_hub using pip install tensorflow_hub first
import numpy as np
import cv2
import PIL.Image as Image
import os
import matplotlib.pylab as plt
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
IMAGE_SHAPE = (224, 224)
classifier = tf.keras.Sequential([
hub.KerasLayer("https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/4", input_shape=IMAGE_SHAPE+(3,))
])
gold_fish = Image.open("goldfish.jpg").resize(IMAGE_SHAPE)
gold_fish
gold_fish = np.array(gold_fish)/255.0
gold_fish.shape
gold_fish[np.newaxis, ...]
result = classifier.predict(gold_fish[np.newaxis, ...])
result.shape
predicted_label_index = np.argmax(result)
predicted_label_index
# tf.keras.utils.get_file('ImageNetLabels.txt','https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt')
image_labels = []
with open("ImageNetLabels.txt", "r") as f:
image_labels = f.read().splitlines()
image_labels[:5]
image_labels[predicted_label_index]
dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
data_dir = tf.keras.utils.get_file('flower_photos', origin=dataset_url, cache_dir='.', untar=True)
# cache_dir indicates where to download data. I specified . which means current directory
# untar true will unzip it
data_dir
import pathlib
data_dir = pathlib.Path(data_dir)
data_dir
list(data_dir.glob('*/*.jpg'))[:5]
image_count = len(list(data_dir.glob('*/*.jpg')))
print(image_count)
roses = list(data_dir.glob('roses/*'))
roses[:5]
Image.open(str(roses[1]))
tulips = list(data_dir.glob('tulips/*'))
Image.open(str(tulips[0]))
flowers_images_dict = {
'roses': list(data_dir.glob('roses/*')),
'daisy': list(data_dir.glob('daisy/*')),
'dandelion': list(data_dir.glob('dandelion/*')),
'sunflowers': list(data_dir.glob('sunflowers/*')),
'tulips': list(data_dir.glob('tulips/*')),
}
flowers_labels_dict = {
'roses': 0,
'daisy': 1,
'dandelion': 2,
'sunflowers': 3,
'tulips': 4,
}
flowers_images_dict['roses'][:5]
str(flowers_images_dict['roses'][0])
img = cv2.imread(str(flowers_images_dict['roses'][0]))
img.shape
cv2.resize(img,(224,224)).shape
X, y = [], []
for flower_name, images in flowers_images_dict.items():
for image in images:
img = cv2.imread(str(image))
resized_img = cv2.resize(img,(224,224))
X.append(resized_img)
y.append(flowers_labels_dict[flower_name])
X = np.array(X)
y = np.array(y)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
X_train_scaled = X_train / 255
X_test_scaled = X_test / 255
X[0].shape
IMAGE_SHAPE+(3,)
x0_resized = cv2.resize(X[0], IMAGE_SHAPE)
x1_resized = cv2.resize(X[1], IMAGE_SHAPE)
x2_resized = cv2.resize(X[2], IMAGE_SHAPE)
plt.axis('off')
plt.imshow(X[0])
plt.axis('off')
plt.imshow(X[1])
plt.axis('off')
plt.imshow(X[2])
predicted = classifier.predict(np.array([x0_resized, x1_resized, x2_resized]))
predicted = np.argmax(predicted, axis=1)
predicted
image_labels[795]
feature_extractor_model = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
pretrained_model_without_top_layer = hub.KerasLayer(
feature_extractor_model, input_shape=(224, 224, 3), trainable=False)
num_of_flowers = 5
model = tf.keras.Sequential([
pretrained_model_without_top_layer,
tf.keras.layers.Dense(num_of_flowers)
])
model.summary()
model.compile(
optimizer="adam",
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['acc'])
model.fit(X_train_scaled, y_train, epochs=5)
model.evaluate(X_test_scaled,y_test)
| 0.713232 | 0.927626 |

OpenDreamKit review meeting 26 April 2017, Brussels
# Micromagnetic standard problem 3
## Problem specification
This problem is to calculate a single domain limit of a cubic magnetic particle. This is the size $L$ of equal energy for the so-called flower state (which one may also call a splayed state or a modified single-domain state) on the one hand, and the vortex or curling state on the other hand.
Geometry:
A cube with edge length, $L$, expressed in units of the intrinsic length scale, $l_\text{ex} = \sqrt{A/K_\text{m}}$, where $K_\text{m}$ is a magnetostatic energy density, $K_\text{m} = \frac{1}{2}\mu_{0}M_\text{s}^{2}$.
Material parameters:
- uniaxial anisotropy $K_\text{u}$ with $K_\text{u} = 0.1 K_\text{m}$, and with the easy axis directed parallel to a principal axis of the cube (0, 0, 1),
- exchange energy constant is $A = \frac{1}{2}\mu_{0}M_\text{s}^{2}l_\text{ex}^{2}$.
More details about the standard problem 3 can be found in Ref. 1.
## Simulation
```
import discretisedfield as df
import oommfc as oc
import numpy as np
%matplotlib inline
def m_init_flower(pos):
"""Function for initiaising the flower state."""
x, y, z = pos[0]/1e-9, pos[1]/1e-9, pos[2]/1e-9
mx = 0
my = 2*z - 1
mz = -2*y + 1
norm_squared = mx**2 + my**2 + mz**2
if norm_squared <= 0.05:
return (1, 0, 0)
else:
return (mx, my, mz)
def m_init_vortex(pos):
"""Function for initialising the vortex state."""
x, y, z = pos[0]/1e-9, pos[1]/1e-9, pos[2]/1e-9
mx = 0
my = np.sin(np.pi/2 * (x-0.5))
mz = np.cos(np.pi/2 * (x-0.5))
return (mx, my, mz)
def minimise_system_energy(L, m_init):
print("Working on L={} ({})".format(L, m_init.__name__))
N = 10 # discretisation in one dimension
cubesize = 100e-9 # cube edge length (m)
cellsize = cubesize/N # discretisation in all three dimensions.
lex = cubesize/L # exchange length.
Km = 1e6 # magnetostatic energy density (J/m**3)
Ms = np.sqrt(2*Km/oc.mu0) # magnetisation saturation (A/m)
A = 0.5 * oc.mu0 * Ms**2 * lex**2 # exchange energy constant
K = 0.1*Km # Uniaxial anisotropy constant
u = (0, 0, 1) # Uniaxial anisotropy easy-axis
p1 = (0, 0, 0) # Minimum sample coordinate.
p2 = (cubesize, cubesize, cubesize) # Maximum sample coordinate.
cell = (cellsize, cellsize, cellsize) # Discretisation.
mesh = oc.Mesh(p1=(0, 0, 0), p2=(cubesize, cubesize, cubesize),
cell=(cellsize, cellsize, cellsize)) # Create a mesh object.
system = oc.System(name="stdprob3")
system.hamiltonian = oc.Exchange(A) \
+ oc.UniaxialAnisotropy(K, u) \
+ oc.Demag()
system.m = df.Field(mesh, value=m_init, norm=Ms)
md = oc.MinDriver() # minimise system energy
md.drive(system)
return system
```
### Compute relaxed magnetisation states with one function call
**Vortex** state:
```
system = minimise_system_energy(8, m_init_vortex)
print("Total energy is {}J".format(system.total_energy()))
system.m.plot_slice('y', 50e-9, xsize=4);
```
**Flower** state:
```
system = minimise_system_energy(8, m_init_flower)
print("Total energy is {}J".format(system.total_energy()))
system.m.plot_slice('x', 50e-9, xsize=4);
```
### Compute table and plot for energy crossing
```
L_array = np.linspace(8, 9, 9) # values of L for which
# the system is relaxed.
vortex_energies = []
flower_energies = []
for L in L_array: # iterate through simulation data points
vortex = minimise_system_energy(L, m_init_vortex)
flower = minimise_system_energy(L, m_init_flower)
vortex_energies.append(vortex.total_energy())
flower_energies.append(flower.total_energy())
# Plot the results
import matplotlib.pyplot as plt
plt.plot(L_array, vortex_energies, 'o-', label='vortex')
plt.plot(L_array, flower_energies, 'o-', label='flower')
plt.xlabel('L (lex)')
plt.ylabel('E')
plt.xlim([8.0, 9.0])
plt.grid()
plt.legend()
```
We now know that the energy crossing occurs between $8l_\text{ex}$ and $9l_\text{ex}$, so a root finding algorithm can be used to find the exact crossing.
```
from scipy.optimize import bisect
def energy_difference(L):
vortex = minimise_system_energy(L, m_init_vortex)
flower = minimise_system_energy(L, m_init_flower)
return vortex.total_energy() - flower.total_energy()
xtol=0.1
cross_section = bisect(energy_difference, 8, 9, xtol=xtol)
print("The transition between vortex and flower states\n"
"occurs at {}*lex +-{}".format(cross_section, xtol))
```
## References
[1] µMAG Site Directory http://www.ctcms.nist.gov/~rdm/mumag.org.html
|
github_jupyter
|
import discretisedfield as df
import oommfc as oc
import numpy as np
%matplotlib inline
def m_init_flower(pos):
"""Function for initiaising the flower state."""
x, y, z = pos[0]/1e-9, pos[1]/1e-9, pos[2]/1e-9
mx = 0
my = 2*z - 1
mz = -2*y + 1
norm_squared = mx**2 + my**2 + mz**2
if norm_squared <= 0.05:
return (1, 0, 0)
else:
return (mx, my, mz)
def m_init_vortex(pos):
"""Function for initialising the vortex state."""
x, y, z = pos[0]/1e-9, pos[1]/1e-9, pos[2]/1e-9
mx = 0
my = np.sin(np.pi/2 * (x-0.5))
mz = np.cos(np.pi/2 * (x-0.5))
return (mx, my, mz)
def minimise_system_energy(L, m_init):
print("Working on L={} ({})".format(L, m_init.__name__))
N = 10 # discretisation in one dimension
cubesize = 100e-9 # cube edge length (m)
cellsize = cubesize/N # discretisation in all three dimensions.
lex = cubesize/L # exchange length.
Km = 1e6 # magnetostatic energy density (J/m**3)
Ms = np.sqrt(2*Km/oc.mu0) # magnetisation saturation (A/m)
A = 0.5 * oc.mu0 * Ms**2 * lex**2 # exchange energy constant
K = 0.1*Km # Uniaxial anisotropy constant
u = (0, 0, 1) # Uniaxial anisotropy easy-axis
p1 = (0, 0, 0) # Minimum sample coordinate.
p2 = (cubesize, cubesize, cubesize) # Maximum sample coordinate.
cell = (cellsize, cellsize, cellsize) # Discretisation.
mesh = oc.Mesh(p1=(0, 0, 0), p2=(cubesize, cubesize, cubesize),
cell=(cellsize, cellsize, cellsize)) # Create a mesh object.
system = oc.System(name="stdprob3")
system.hamiltonian = oc.Exchange(A) \
+ oc.UniaxialAnisotropy(K, u) \
+ oc.Demag()
system.m = df.Field(mesh, value=m_init, norm=Ms)
md = oc.MinDriver() # minimise system energy
md.drive(system)
return system
system = minimise_system_energy(8, m_init_vortex)
print("Total energy is {}J".format(system.total_energy()))
system.m.plot_slice('y', 50e-9, xsize=4);
system = minimise_system_energy(8, m_init_flower)
print("Total energy is {}J".format(system.total_energy()))
system.m.plot_slice('x', 50e-9, xsize=4);
L_array = np.linspace(8, 9, 9) # values of L for which
# the system is relaxed.
vortex_energies = []
flower_energies = []
for L in L_array: # iterate through simulation data points
vortex = minimise_system_energy(L, m_init_vortex)
flower = minimise_system_energy(L, m_init_flower)
vortex_energies.append(vortex.total_energy())
flower_energies.append(flower.total_energy())
# Plot the results
import matplotlib.pyplot as plt
plt.plot(L_array, vortex_energies, 'o-', label='vortex')
plt.plot(L_array, flower_energies, 'o-', label='flower')
plt.xlabel('L (lex)')
plt.ylabel('E')
plt.xlim([8.0, 9.0])
plt.grid()
plt.legend()
from scipy.optimize import bisect
def energy_difference(L):
vortex = minimise_system_energy(L, m_init_vortex)
flower = minimise_system_energy(L, m_init_flower)
return vortex.total_energy() - flower.total_energy()
xtol=0.1
cross_section = bisect(energy_difference, 8, 9, xtol=xtol)
print("The transition between vortex and flower states\n"
"occurs at {}*lex +-{}".format(cross_section, xtol))
| 0.7865 | 0.951414 |
```
import Bio.PDB as PDB
import os
import itertools
import numpy as np
import json
import itertools
from collections import defaultdict
import matplotlib.pyplot as plt
from matplotlib import colors
from matplotlib import rc, font_manager
from matplotlib.ticker import MultipleLocator, FormatStrFormatter
#H-bond strengths/distance strong 2.2-2.5 medium 2.5-3.2
#Recognised max pipi_distance for pi-pi stacking is 4
Max_OHOH = 3.2
Max_pipi = 4.0
##Designed for testing pdb files containing only Tyr residues (extracted from the original file)
p = PDB.PDBParser()
parent_directory = "./"
pdb_file_directory = parent_directory+"Complete_biological_units_CC_Tyr/"
save_file_directory_OHOH = parent_directory+"OHOH_test/"
save_file_directory_pipi = parent_directory+"pipi_test/"
PDBS_OHOH = {}
PDBS_pipi = {}
for pdb_entry in os.listdir(pdb_file_directory):
Distance_OHOH = {}
Distance_pipi = {}
if pdb_entry[-3:] != "pdb":
print(pdb_entry)
pass
else:
structure = p.get_structure(str(pdb_entry), pdb_file_directory+pdb_entry)
residue_combinations = itertools.combinations(structure.get_residues(), 2)
pairs_measured = defaultdict(list)
for residue1, residue2 in residue_combinations:
### Some pdb files form permutations instead of combinations, therefore this ensures duplications are filtered out
pairs_measured[residue1.parent.id+str(residue1.id[1])].append(residue2.parent.id+str(residue2.id[1]))
if (residue1.parent.id+str(residue1.id[1]) in pairs_measured[residue2.parent.id+str(residue2.id[1])] or
residue1.parent.id+str(residue1.id[1]) == residue2.parent.id+str(residue2.id[1])):
continue
else:
residue1_string = [x.id for x in residue1.get_unpacked_list()]
residue2_string = [x.id for x in residue2.get_unpacked_list()]
if 'OH' in residue1_string and 'OH' in residue2_string:
PDBS_OHOH[pdb_entry+"_"+residue1.parent.id+str(residue1.id[1])+residue2.parent.id+str(residue2.id[1])]=(
residue1['OH']-residue2['OH'])
else:
print("Missing OH in "+pdb_entry)
if 'CG' in residue1_string and 'CZ' in residue1_string and 'CG' in residue2_string and 'CZ' in residue2_string:
Ring_center1 = (residue1['CG'].get_coord()+residue1['CZ'].get_coord())/2
Ring_center2 = (residue2['CG'].get_coord()+residue2['CZ'].get_coord())/2
pipi_value = np.sqrt(np.sum(np.square(Ring_center1-Ring_center2)))
PDBS_pipi[pdb_entry+"_"+residue1.parent.id+str(residue1.id[1])+residue2.parent.id+str(residue2.id[1])]=(
pipi_value)
##Find examples within limit of pi-pi interaction distances and OHOH interaction distances
OHOH_interactions = []
for OHOH_entry in PDBS_OHOH:
if PDBS_OHOH[OHOH_entry] <= Max_OHOH:
OHOH_interactions.append(OHOH_entry)
pipi_interactions = []
for pipi_entry in PDBS_pipi:
if PDBS_pipi[pipi_entry] <= Max_pipi:
pipi_interactions.append(pipi_entry)
OHOH_interactions
pipi_interactions
font = {'fontname':'Arial','size' : 14}
params = {'text.usetex': False, 'mathtext.fontset': 'stixsans'}
plt.rcParams.update(params)
fontProperties = {'size' : 15}
rc('font',**fontProperties)
#Plot histogram of OH OH distances up to 10 angstrom
fig1, ax1 = plt.subplots()
ax1.hist(PDBS_OHOH.values(), range=(0,10), bins=20)
ax1.set_xlabel(r'Tyrosine OH-OH distance ($\AA $)')
ax1.set_ylabel(r'Number of examples')
ax1.set_title(r'Tyrosine OH-OH distances in coiled coils')
plt.savefig(parent_directory+"OHOH_histogram.png", bbox_inches='tight', dpi=600)
fig3, ax3 = plt.subplots()
ax3.hist(PDBS_pipi.values(), range=(0,10), bins=20)
ax3.set_xlabel(r'Tyrosine $\pi $-$\pi $ distance ($\AA $)')
ax3.set_ylabel(r'Number of examples')
ax3.set_title(r'Tyrosine $\pi $-$\pi $ distances in coiled coils')
plt.savefig(parent_directory+"pipi_histogram.png", bbox_inches='tight', dpi=600)
```
|
github_jupyter
|
import Bio.PDB as PDB
import os
import itertools
import numpy as np
import json
import itertools
from collections import defaultdict
import matplotlib.pyplot as plt
from matplotlib import colors
from matplotlib import rc, font_manager
from matplotlib.ticker import MultipleLocator, FormatStrFormatter
#H-bond strengths/distance strong 2.2-2.5 medium 2.5-3.2
#Recognised max pipi_distance for pi-pi stacking is 4
Max_OHOH = 3.2
Max_pipi = 4.0
##Designed for testing pdb files containing only Tyr residues (extracted from the original file)
p = PDB.PDBParser()
parent_directory = "./"
pdb_file_directory = parent_directory+"Complete_biological_units_CC_Tyr/"
save_file_directory_OHOH = parent_directory+"OHOH_test/"
save_file_directory_pipi = parent_directory+"pipi_test/"
PDBS_OHOH = {}
PDBS_pipi = {}
for pdb_entry in os.listdir(pdb_file_directory):
Distance_OHOH = {}
Distance_pipi = {}
if pdb_entry[-3:] != "pdb":
print(pdb_entry)
pass
else:
structure = p.get_structure(str(pdb_entry), pdb_file_directory+pdb_entry)
residue_combinations = itertools.combinations(structure.get_residues(), 2)
pairs_measured = defaultdict(list)
for residue1, residue2 in residue_combinations:
### Some pdb files form permutations instead of combinations, therefore this ensures duplications are filtered out
pairs_measured[residue1.parent.id+str(residue1.id[1])].append(residue2.parent.id+str(residue2.id[1]))
if (residue1.parent.id+str(residue1.id[1]) in pairs_measured[residue2.parent.id+str(residue2.id[1])] or
residue1.parent.id+str(residue1.id[1]) == residue2.parent.id+str(residue2.id[1])):
continue
else:
residue1_string = [x.id for x in residue1.get_unpacked_list()]
residue2_string = [x.id for x in residue2.get_unpacked_list()]
if 'OH' in residue1_string and 'OH' in residue2_string:
PDBS_OHOH[pdb_entry+"_"+residue1.parent.id+str(residue1.id[1])+residue2.parent.id+str(residue2.id[1])]=(
residue1['OH']-residue2['OH'])
else:
print("Missing OH in "+pdb_entry)
if 'CG' in residue1_string and 'CZ' in residue1_string and 'CG' in residue2_string and 'CZ' in residue2_string:
Ring_center1 = (residue1['CG'].get_coord()+residue1['CZ'].get_coord())/2
Ring_center2 = (residue2['CG'].get_coord()+residue2['CZ'].get_coord())/2
pipi_value = np.sqrt(np.sum(np.square(Ring_center1-Ring_center2)))
PDBS_pipi[pdb_entry+"_"+residue1.parent.id+str(residue1.id[1])+residue2.parent.id+str(residue2.id[1])]=(
pipi_value)
##Find examples within limit of pi-pi interaction distances and OHOH interaction distances
OHOH_interactions = []
for OHOH_entry in PDBS_OHOH:
if PDBS_OHOH[OHOH_entry] <= Max_OHOH:
OHOH_interactions.append(OHOH_entry)
pipi_interactions = []
for pipi_entry in PDBS_pipi:
if PDBS_pipi[pipi_entry] <= Max_pipi:
pipi_interactions.append(pipi_entry)
OHOH_interactions
pipi_interactions
font = {'fontname':'Arial','size' : 14}
params = {'text.usetex': False, 'mathtext.fontset': 'stixsans'}
plt.rcParams.update(params)
fontProperties = {'size' : 15}
rc('font',**fontProperties)
#Plot histogram of OH OH distances up to 10 angstrom
fig1, ax1 = plt.subplots()
ax1.hist(PDBS_OHOH.values(), range=(0,10), bins=20)
ax1.set_xlabel(r'Tyrosine OH-OH distance ($\AA $)')
ax1.set_ylabel(r'Number of examples')
ax1.set_title(r'Tyrosine OH-OH distances in coiled coils')
plt.savefig(parent_directory+"OHOH_histogram.png", bbox_inches='tight', dpi=600)
fig3, ax3 = plt.subplots()
ax3.hist(PDBS_pipi.values(), range=(0,10), bins=20)
ax3.set_xlabel(r'Tyrosine $\pi $-$\pi $ distance ($\AA $)')
ax3.set_ylabel(r'Number of examples')
ax3.set_title(r'Tyrosine $\pi $-$\pi $ distances in coiled coils')
plt.savefig(parent_directory+"pipi_histogram.png", bbox_inches='tight', dpi=600)
| 0.322206 | 0.262996 |
# Working with Different Types of Data
Let's start by first loading some data.
The variable `data` points to where the data is located; modify it as needed.
```
data = "gs://is843/notebooks/jupyter/data/"
df = spark.read.format("csv")\
.option("header", "true")\
.option("inferSchema", "true")\
.load(data + "retail-data/by-day/2010-12-01.csv")
df.createOrReplaceTempView("dfTable")
df.printSchema()
df.show(5)
print(df.count(), 'rows')
```
## Converting to Spark Types
The `lit()` function converts a Python value to its corresponding Spark representation. Here's how we can convert a couple of different kinds of Python values to their respective Spark types:
```
from pyspark.sql.functions import lit
df.select(lit(5), lit("five"), lit(5.0))
```
There is no function needed for SQL:
```
spark.sql("""
SELECT 5, "five", 5.0
""")
```
## Working with Booleans
Booleans are essential when it comes to data analysis because they are the foundation for all filtering. Boolean statements consist of four elements: *and*, *or*, *true*, and *false*. We use these simple structures to build logical statements that evaluate to either *true* or *false*. These statements are often used as conditional requirements for when a row of data must either pass the test (evaluate to true) or else it will be filtered out.
Let’s use our retail dataset to explore working with Booleans. We can specify equality as well as less-than or greater-than:
```
from pyspark.sql.functions import col
df.where(col("InvoiceNo") != 536365)\
.select("InvoiceNo", "Description")\
.show(5, False)
df.where("InvoiceNo <> 536365").select("InvoiceNo", "Description").show(5, False)
```
Although you can specify your statements explicitly by using **and** if you like, they're often easier to understand and to read if you specify them serially. *or* statements need to be specified in the same statement:
```
from pyspark.sql.functions import instr
priceFilter = col("UnitPrice") > 600
descripFilter = instr(df.Description, "POSTAGE") >= 1 # instr(): Locate the position of the first occurrence of substr column in the given string.
df.where(df.StockCode.isin("DOT")).where(priceFilter | descripFilter).show()
```
Equivalent SQL:
```sql
SELECT * FROM dfTable
WHERE StockCode in ("DOT") AND
(UnitPrice > 600 OR instr(Description, "POSTAGE") >= 1)
```
```
from pyspark.sql.functions import instr
DOTCodeFilter = col("StockCode") == "DOT"
priceFilter = col("UnitPrice") > 600
descripFilter = instr(col("Description"), "POSTAGE") >= 1
df.withColumn("isExpensive", DOTCodeFilter & (priceFilter | descripFilter))\
.where("isExpensive")\
.select("unitPrice", "isExpensive").show(5)
spark.sql("""
SELECT
UnitPrice,
(StockCode = 'DOT' AND (UnitPrice > 600 OR instr(Description, "POSTAGE") >= 1)) as isExpensive
FROM dfTable
WHERE (StockCode = 'DOT' AND (UnitPrice > 600 OR instr(Description, "POSTAGE") >= 1))
""").show()
```
Notice how we did not need to specify our filter as an expression and how we could use a column name without any extra work.
If you’re coming from a SQL background, all of these statements should seem quite familiar. Indeed, all of them can be expressed as a where clause. In fact, it’s often easier to just express filters as SQL statements than using the programmatic DataFrame interface and Spark SQL allows us to do this without paying any performance penalty. For example, the following statement uses SQL commands within `expr()`:
```
from pyspark.sql.functions import expr
df.withColumn("isExpensive", expr("NOT UnitPrice <= 250"))\
.where("isExpensive")\
.select("Description", "UnitPrice").show(5)
```
## Working with Numbers
When working with big data, the second most common task you will do after filtering things is counting things. For the most part, we simply need to express our computation, and that should be valid assuming that we’re working with numerical data types.
To fabricate a contrived example, let’s imagine that we found out that we mis-recorded the quantity in our retail dataset and the true quantity is equal to $(Current\_Quantity * Unit\_Price)^2 + 5$. This will introduce our first numerical function as well as the `pow()` function that raises a column to the expressed power:
```
from pyspark.sql.functions import expr, pow
fabricatedQuantity = pow(col("Quantity") * col("UnitPrice"), 2) + 5
df.select(expr("CustomerId"), fabricatedQuantity.alias("realQuantity")).show(2)
```
Notice that we were able to multiply our columns together because they were both numerical. Naturally we can add and subtract as necessary, as well. In fact, we can do all of this as a SQL expression, as well:
```
df.selectExpr(
"CustomerId",
"(POWER((Quantity * UnitPrice), 2.0) + 5) as realQuantity").show(2)
```
In SQL:
```sql
SELECT customerId, (POWER((Quantity * UnitPrice), 2.0) + 5) as realQuantity
FROM dfTable
```
Another common numerical task is rounding. If you'd like to just round to a whole number, oftentimes you can cast the value to an integer and that will work just fine. However, Spark also has more detailed functions for performing this explicitly and to a certain level of precision. In the following example, note that `round` rounds up on ties (2.5 becomes 3), whereas `bround` uses banker's rounding (2.5 becomes 2):
```
from pyspark.sql.functions import lit, round, bround
df.select(round(lit("2.5")), bround(lit("2.5"))).show(1)
```
In SQL
```sql
SELECT round(2.5), bround(2.5)
```
Another numerical task is to compute the correlation of two columns. For example, we can see the Pearson correlation coefficient for two columns to see if cheaper things are typically bought in greater quantities. We can do this through a function as well as through the DataFrame statistic methods:
```
df.stat.corr("Quantity", "UnitPrice")
from pyspark.sql.functions import corr
df.select(corr("Quantity", "UnitPrice")).show()
```
In SQL
```sql
SELECT corr(Quantity, UnitPrice) FROM dfTable
```
Another common task is to compute summary statistics for a column or set of columns. We can use the `describe` method to achieve exactly this. This will take all numerical and string columns and calculate the count, mean, standard deviation, min, and max:
```
df.describe(['Quantity', 'UnitPrice', 'Country']).show()
```
If you need these exact numbers, you can also perform this as an aggregation yourself by importing the functions and applying them to the columns that you need:
```
from pyspark.sql.functions import count, mean, stddev_pop, min, max
```
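For example, here is a minimal sketch applying those imported aggregation functions to a column of our DataFrame:

```
df.select(count("Quantity"), mean("Quantity"), stddev_pop("Quantity"),
          min("Quantity"), max("Quantity")).show()
```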
There are a number of statistical functions available in the `StatFunctions` Package (accessible using stat as we see in the code block below). These are DataFrame methods that you can use to calculate a variety of different things. For instance, you can calculate either exact or approximate quantiles of your data using the `approxQuantile` method:
```
colName = "UnitPrice"
quantileProbs = [0.5]
relError = 0.05
df.stat.approxQuantile("UnitPrice", quantileProbs, relError)
```
Finding frequent items for columns:
```
df.stat.freqItems(["StockCode", "Quantity"]).show()
```
As a last note, we can also add a unique ID to each row by using the function `monotonically_increasing_id`. This function generates a unique value for each row, starting with 0:
```
from pyspark.sql.functions import monotonically_increasing_id
df.select(monotonically_increasing_id(), "StockCode", "Quantity", "UnitPrice").show(5)
```
## Working with Strings
The `initcap` function will capitalize every word in a given string, with the first letter of each word in uppercase, all other letters in lowercase:
```
from pyspark.sql.functions import initcap
df.select(initcap(col("Description"))).show(5, False)
```
You can cast strings in uppercase and lowercase, as well:
```
from pyspark.sql.functions import lower, upper
df.select(col("Description"),
lower(col("Description")),
upper(col("Description"))).show(2, False)
```
In SQL
```sql
SELECT Description, lower(Description), upper(Description) FROM dfTable
```
Another trivial task is adding or removing spaces around a string. You can do this by using `lpad`, `ltrim`, `rpad` and `rtrim`, `trim`:
```
from pyspark.sql.functions import lit, ltrim, rtrim, rpad, lpad, trim
df.select(
ltrim(lit(" HELLO ")).alias("ltrim"),
rtrim(lit(" HELLO ")).alias("rtrim"),
trim(lit(" HELLO ")).alias("trim"),
lpad(lit("HELLO"), 3, " ").alias("lp"),
rpad(lit("HELLO"), 10, " ").alias("rp")).show(1)
```
In SQL
```sql
SELECT
ltrim(' HELLLOOOO '),
rtrim(' HELLLOOOO '),
trim(' HELLLOOOO '),
lpad('HELLOOOO ', 3, ' '),
rpad('HELLOOOO ', 10, ' ')
FROM dfTable
```
Note that if lpad or rpad takes a number less than the length of the string, it will always remove values from the right side of the string.
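For example, a small illustration of that truncation behavior:

```
from pyspark.sql.functions import lit, lpad, rpad
# "HELLO" padded to a width of 3 is truncated to "HEL" in both cases
df.select(lpad(lit("HELLO"), 3, " ").alias("lpad_3"),
          rpad(lit("HELLO"), 3, " ").alias("rpad_3")).show(1)
```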
## Regular Expressions
Probably one of the most frequently performed tasks is searching for the existence of one string in another or replacing all mentions of a string with another value. This is often done with a tool called *regular expressions* that exists in many programming languages.
Spark takes advantage of the complete power of Java regular expressions. There are two key functions in Spark that you’ll need in order to perform regular expression tasks: `regexp_extract` and `regexp_replace`. These functions extract values and replace values, respectively.
Let’s explore how to use the `regexp_replace` function to replace substitute color names in our description column:
```
from pyspark.sql.functions import regexp_replace
regex_string = "BLACK|WHITE|RED|GREEN|BLUE"
df.select(
regexp_replace(col("Description"), regex_string, "COLOR").alias("color_clean"),
col("Description")).show(2)
```
In SQL
```sql
SELECT
regexp_replace(Description, 'BLACK|WHITE|RED|GREEN|BLUE', 'COLOR') as
color_clean, Description
FROM dfTable
```
Another task might be to replace given characters with other characters. Building this as a regular expression could be tedious, so Spark also provides the `translate` function to replace these values. This is done at the character level and will replace all instances of a character with the indexed character in the replacement string:
```
from pyspark.sql.functions import translate
df.select(translate(col("Description"), "LEET", "1337"),col("Description"))\
.show(2, False)
```
In SQL
```sql
SELECT translate(Description, 'LEET', '1337'), Description FROM dfTable
```
We can also perform something similar, like pulling out the first mentioned color:
```
from pyspark.sql.functions import regexp_extract
extract_str = "(BLACK|WHITE|RED|GREEN|BLUE)"
df.select(
regexp_extract(col("Description"), extract_str, 1).alias("color_clean"),
col("Description")).show(5, False)
```
In SQL
```sql
SELECT regexp_extract(Description, '(BLACK|WHITE|RED|GREEN|BLUE)', 1),
Description
FROM dfTable
```
Sometimes, rather than extracting values, we simply want to check for their existence. We can do this with the `instr` method on each column. This will return a *Boolean* declaring whether the value you specify is in the column’s string:
```
from pyspark.sql.functions import instr
containsBlack = instr(col("Description"), "BLACK") >= 1
containsWhite = instr(col("Description"), "WHITE") >= 1
df.withColumn("hasSimpleColor", containsBlack | containsWhite)\
.where("hasSimpleColor")\
.select("Description").show(3, False)
```
In SQL
```sql
SELECT Description FROM dfTable
WHERE instr(Description, 'BLACK') >= 1 OR instr(Description, 'WHITE') >= 1
```
This is trivial with just two values, but it becomes more complicated when there are more values.
Let’s work through this in a more rigorous way and take advantage of Spark’s ability to accept a dynamic number of arguments. When we convert a list of values into a set of arguments and pass them into a function, we use a language feature called varargs. Using this feature, we can effectively unravel an array of arbitrary length and pass it as arguments to a function.
We can also do this quite easily in Python. In this case, we're going to use a different function, `locate`, that returns the integer location (1-based). We then convert that to a Boolean before using it as the same basic feature:
```
from pyspark.sql.functions import expr, locate
simpleColors = ["black", "white", "red", "green", "blue"]
def color_locator(column, color_string):
return locate(color_string.upper(), column)\
.cast("boolean")\
.alias("is_" + color_string)
selectedColumns = [color_locator(df.Description, c) for c in simpleColors]
selectedColumns.append(expr("*")) # has to be a Column type
df.select(*selectedColumns).show(3, False)
df.select(*selectedColumns).where(expr("is_white OR is_red"))\
.select("Description").show(3, False)
df.select(*selectedColumns)
```
## Working with Dates and Timestamps
Let’s begin with the basics and get the current date and the current timestamps:
```
from pyspark.sql.functions import current_date, current_timestamp
dateDF = spark.range(10)\
.withColumn("today", current_date())\
.withColumn("now", current_timestamp())
dateDF.createOrReplaceTempView("dateTable")
dateDF.printSchema()
dateDF.show(5, False)
```
Notice that `current_timestamp()` records the current time at the moment the action, `show()`, takes place, not when the transformation that created `dateDF` was defined.
Also, notice that the date/time is being recorded in GMT. You should always keep in mind which time zone is being used. If necessary, you can set a session-local time zone through the SQL configuration (the `spark.sql.session.timeZone` property).
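For example, a minimal sketch of switching the session time zone (the configuration key `spark.sql.session.timeZone` is assumed here to be the standard Spark SQL property; check the documentation for your Spark version):

```
# Change the session-local time zone and observe the effect on current_timestamp()
spark.conf.set("spark.sql.session.timeZone", "America/New_York")
dateDF.select(current_timestamp()).show(1, False)
```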
Now that we have a simple DataFrame to work with, let’s add and subtract five days from today. These functions take a column and then the number of days to either add or subtract as the arguments:
```
from pyspark.sql.functions import date_add, date_sub
dateDF.select(date_sub(col("today"), 5), date_add(col("today"), 5)).show(1)
```
In SQL
```sql
SELECT date_sub(today, 5), date_add(today, 5) FROM dateTable
```
Another common task is to take a look at the difference between two dates. We can do this with the `datediff` function that will return the number of days in between two dates. Most often we just care about the days, and because the number of days varies from month to month, there also exists a function, `months_between`, that gives you the number of months between two dates:
```
from pyspark.sql.functions import datediff, months_between, to_date
dateDF.withColumn("week_ago", date_sub(col("today"), 7))\
.select("week_ago", "today", datediff(col("week_ago"), col("today"))).show(1)
dateDF.select(
to_date(lit("2020-01-01")).alias("start"),
to_date(lit("2021-03-01")).alias("end"))\
.select("start", "end", months_between(col("start"), col("end"))).show(1)
```
In SQL
```sql
SELECT to_date('2020-01-01'), months_between('2020-01-01', '2021-03-01'),
datediff('2020-01-01', '2021-03-01')
FROM dateTable
```
Notice that we introduced a new function: the `to_date` function. The `to_date` function allows you to convert a string to a date, optionally with a specified format. We specify our format in the [Java SimpleDateFormat](https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html) which will be important to reference if you use this function:
```
from pyspark.sql.functions import to_date, lit
spark.range(5).withColumn("date", lit("2017-01-01"))\
.select(to_date(col("date"))).show(1)
```
Spark will not throw an error if it cannot parse the date; rather, it will just return null. This can be a bit tricky in larger pipelines because you might be expecting your data in one format and getting it in another. To illustrate, let’s take a look at the date format that has switched from year-month-day to year-day-month. Spark will fail to parse this date and silently return null instead:
```
dateDF.select(to_date(lit("2016-20-12")),to_date(lit("2017-12-11"))).show(1)
```
We find this to be an especially tricky situation for bugs because some dates might match the correct format, whereas others do not. In the previous example, notice how the second date appears as December 11th instead of the correct day, November 12th. Spark doesn't throw an error because it cannot know whether the days are mixed up or whether that specific row is incorrect.
Let’s fix this pipeline, step by step, and come up with a robust way to avoid these issues entirely. The first step is to remember that we need to specify our date format according to the Java SimpleDateFormat standard.
We will use two functions to fix this: `to_date` and `to_timestamp`. The former optionally expects a format, whereas the latter requires one:
```
from pyspark.sql.functions import to_date
dateFormat = "yyyy-dd-MM"
cleanDateDF = spark.range(1).select(
to_date(lit("2017-12-11"), dateFormat).alias("date"),
to_date(lit("2017-20-12"), dateFormat).alias("date2"))
cleanDateDF.show()
cleanDateDF.createOrReplaceTempView("dateTable2")
```
In SQL
```sql
SELECT to_date(date, 'yyyy-dd-MM'), to_date(date2, 'yyyy-dd-MM'), to_date(date)
FROM dateTable2
```
Now let’s use an example of to_timestamp, which always requires a format to be specified:
```
from pyspark.sql.functions import to_timestamp
cleanDateDF.select(to_timestamp(col("date"), "yyyy-dd-MM")).show()
```
After we have our date or timestamp in the correct format and type, comparing between them is actually quite easy. We just need to be sure to either use a date/timestamp type or specify our string according to the right format of *yyyy-MM-dd* if we’re comparing a date:
```
cleanDateDF.where(col("date2") > lit("2017-12-12")).show()
```
One minor point is that we can also set this as a string, which Spark parses to a literal:
```
cleanDateDF.filter(col("date2") > "2017-12-12").show()
```
It is important to point out that a good practice is to parse the values explicitly instead of relying on implicit conversions.
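A small sketch of the difference, reusing the objects defined above; the explicit form makes the intended format unambiguous:

```
from pyspark.sql.functions import to_date, lit, col
# Implicit: Spark coerces the string literal for the comparison
cleanDateDF.where(col("date2") > "2017-12-12").show()
# Explicit: parse the literal to a date ourselves before comparing
cleanDateDF.where(col("date2") > to_date(lit("2017-12-12"), "yyyy-MM-dd")).show()
```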
## Working with Nulls in Data
As a best practice, you should always use nulls to represent missing or empty data in your DataFrames. Spark can optimize working with null values more than it can if you use empty strings or other values. The primary way of interacting with null values, at DataFrame scale, is to use the `.na` subpackage on a DataFrame. There are also several functions for performing operations and explicitly specifying how Spark should handle null values.
**WARNING**
Nulls are a challenging part of all programming, and Spark is no exception. Being explicit is always better than being implicit when handling null values. When we declare a column as not having a null type, that is not actually enforced. To reiterate, when you define a schema in which all columns are declared to not have null values, Spark will not enforce that and will happily let null values into that column. The nullable signal is simply to help Spark SQL optimize for handling that column. If you have null values in columns that should not have null values, you can get an incorrect result or see strange exceptions that can be difficult to debug.
There are two things you can do with null values: you can explicitly drop nulls or you can fill them with a value (globally or on a per-column basis). Let's experiment with each of these. But first we will examine whether we have null values or not.
### `isNull`
`isNull(expr)` - Returns true if expr is null, or false otherwise.
```
df.where(col("UnitPrice").isNull()).show()
df.where(col("CustomerID").isNull()).show(5)
```
Let's get a count of how many null values exist in each column using a loop and `isNull` function:
```
[(c, df.where(col(c).isNull()).count()) for c in df.columns]
print("DataFrame df consists of {} records, out of which {} records have a missing CustomerID, {} have a missing Description, and {} are missing both!"\
.format(df.count(),
df.where(col("CustomerID").isNull()).count(),
df.where(col("Description").isNull()).count(),
df.where(col("CustomerID").isNull() & col("Description").isNull()).count()))
df.where(col("CustomerID").isNull() & col("Description").isNull()).show()
```
### Coalesce
Spark includes the `coalesce` function, which allows you to select the first non-null value from a set of columns. Here it returns the Description, falling back to the CustomerId for rows where the Description is null:
```
from pyspark.sql.functions import coalesce
df.select(coalesce(col("Description"), col("CustomerId"))).show()
```
### `ifnull`, `nullIf`, `nvl`, and `nvl2`
There are several other SQL functions that you can use to achieve similar things.
The `ifnull` and `nvl` functions are synonyms: ifnull(expr1, expr2) - Returns expr2 if expr1 is null, or expr1 otherwise.
```
spark.sql("""
SELECT
ifnull(null, 'expr2'),
ifnull('expr1', 'expr2'),
nvl(null, 'expr2')
FROM dfTable LIMIT 1
""").show()
```
`nullif`: nullif(expr1, expr2) - Returns null if expr1 equals to expr2, or expr1 otherwise.
```
spark.sql("""
SELECT
nullif('expr1', 'expr1'),
nullif('expr1', 'expr2')
FROM dfTable LIMIT 1
""").show()
```
`nvl2`: nvl2(expr1, expr2, expr3) - Returns expr2 if expr1 is not null, or expr3 otherwise.
```
spark.sql("""
SELECT
nvl2('expr1', 'expr2', "expr3"),
nvl2(null, 'expr2', "expr3")
FROM dfTable LIMIT 1
""").show()
```
Naturally, we can use these in select expressions on DataFrames, as well.
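For example, a small sketch using `selectExpr` (the alias names are just illustrative):
```python
# ifnull and nvl2 work inside selectExpr just as they do in a SQL query
df.selectExpr(
    "ifnull(Description, 'No Description') as desc_filled",
    "nvl2(CustomerID, 'has id', 'missing id') as id_status").show(5, False)
```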
### Negating conditions within a filter
We can use `~` within `where()` to negate a filter. For instance, combined with `isNull()`, it keeps only the rows where the value is not null:
```
df.where(~col("UnitPrice").isNull()).show(5)
```
### `drop`
The simplest function is `drop`, which removes rows that contain nulls. The default is to drop any row in which any value is null:
```
df.na.drop()
```
Equivalent to
```python
df.na.drop("any")
```
In SQL, we have to do this column by column:
```sql
SELECT * FROM dfTable WHERE Description IS NOT NULL
```
Specifying "`any`" as an argument drops a row if any of the values are null.
Let's check and see how many rows were actually dropped:
```
df.count() - df.na.drop().count()
```
Which is what we expected, based on the above summary.
Using “all” drops the row only if all values are `null` or `NaN` for that row:
```
df.na.drop("all")
```
Since we don't have any rows in which all columns are null, this doesn't drop anything:
```
df.na.drop("all").count()
```
We can also apply this to certain sets of columns by passing in an array of columns:
```
df.na.drop("all", subset=["CustomerID", "Description"])
```
The following code shows that the above command will drop 10 records that have both *CustomerID* and *Description* missing.
```
df.count() - df.na.drop("all", subset=["CustomerID", "Description"]).count()
```
### `fill`
Using the `fill` function, you can "fill" one or more columns with a set of values. This can be done by specifying a map, that is, a particular value and a set of columns.
For example, to fill all null values in columns of type String, you might specify a string:
```
df.printSchema()
df.where(col("Description").isNull())\
.na.fill("No Value").show(5)
```
We could do the same for columns of type Integer or Double by passing a numeric value, for example df.na.fill(0.0) (the Scala equivalents are df.na.fill(5:Integer) and df.na.fill(5:Double)). To specify particular columns, we pass in a list of column names, as shown in the sketch after the next code block:
```
df.where(col("CustomerID").isNull())\
.na.fill(0.0).show(5)
```
Note that in the code above the filter is only there for display purposes; the fill is applied to every column of type Double.
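To restrict the fill to particular columns, a minimal sketch using the `subset` argument (the column choice here is just illustrative):
```python
from pyspark.sql.functions import col

# Fill nulls only in CustomerID; Description (and every other column) is left untouched
df.na.fill(0.0, subset=["CustomerID"])\
  .where(col("Description").isNull()).show(5)
```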
We can also do this with a Python dictionary, where the key is the column name and the value is the value we would like to use to fill null values:
```
fill_cols_vals = {"CustomerID": 0.0, "Description" : "No Value"}
df2 = df.na.fill(fill_cols_vals)
df2.where(col("CustomerID").isNull() | col("Description").isNull()).show()
```
We can see that all the null values have been replaced with the non-null values we specified for df2.
### `replace`
In addition to replacing null values like we did with `drop` and `fill`, there are more flexible options that you can use with more than just null values. Probably the most common use case is to replace all values in a certain column according to their current value. The only requirement is that this value be the same type as the original value:
```
df.na.replace([""], ["UNKNOWN"], "Description")
```
In this particular example there was no empty string in Description to replace with UNKNOWN:
```
df.na.replace([""], ["UNKNOWN"], "Description").where("Description = 'UNKNOWN'").count()
```
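The same mechanism works for non-null values as well. As a hedged sketch, the replacement below assumes the value "United Kingdom" appears in the Country column of this dataset:
```python
# Replace a specific non-null value in the Country column
df.na.replace(["United Kingdom"], ["UK"], "Country")\
  .where("Country = 'UK'").select("Country").show(3)
```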
## Ordering
You can use `asc_nulls_first`, `desc_nulls_first`, `asc_nulls_last`, or `desc_nulls_last` to specify where you would like your null values to appear in an ordered DataFrame.
```
from pyspark.sql.functions import desc_nulls_first
df.orderBy(col("Description").desc_nulls_first()).show()
```
SQL equivalent:
```sql
SELECT * FROM dfTable
ORDER BY Description DESC NULLS FIRST
LIMIT 15
```
## Working with Complex Types
Complex types can help you organize and structure your data in ways that make more sense for the problem that you are hoping to solve. There are three kinds of complex types: structs, arrays, and maps.
### Structs
You can think of structs as DataFrames within DataFrames. A worked example will illustrate this more clearly. We can create a struct by wrapping a set of columns in parentheses in a query:
```
df.selectExpr("(Description, InvoiceNo) as complex", "*").show(5)
```
Which is equivalent to:
```python
df.selectExpr("struct(Description, InvoiceNo) as complex", "*")
```
Let's make a new DataFrame by wrapping two columns, and including two other columns from our df:
```
from pyspark.sql.functions import struct
complexDF = df.select(struct("Description", "InvoiceNo").alias("complex"), "StockCode", "CustomerID")
complexDF.createOrReplaceTempView("complexDF")
complexDF.show(5, False)
```
We now have a DataFrame with a column complex. We can query it just as we might another DataFrame, the only difference is that we use a dot syntax to do so, or the column method `getField`:
We can access and expand all the columns by:
```
complexDF.select("complex.Description", "complex.InvoiceNo", "StockCode", "CustomerID").show(5, False)
complexDF.select(col("complex").getField("Description")).show(2)
complexDF.printSchema()
```
We can also query all values in the struct by using `*`. This brings up all the columns to the top-level DataFrame:
```
complexDF.select("complex.*").show(2)
```
In SQL
```sql
SELECT complex.* FROM complexDF
```
### Arrays
To define arrays, let’s work through a use case. With our current data, our objective is to take every single word in our Description column and convert that into a row in our DataFrame.
The first task is to turn our Description column into a complex type, an array.
### `split`
We do this by using the split function and specify the delimiter:
```
from pyspark.sql.functions import split
df.select(split(col("Description"), " ")).show(2)
```
In SQL
```sql
SELECT split(Description, ' ') FROM dfTable
```
This is quite powerful because Spark allows us to manipulate this complex type as another column. We can also query the values of the array using Python-like syntax:
```
df.select(split(col("Description"), " ").alias("array_col"))\
.selectExpr("array_col[0]").show(2)
```
In SQL
```sql
SELECT split(Description, ' ')[0] FROM dfTable
```
### Array Length
We can determine the array’s length by querying for its size:
```
from pyspark.sql.functions import size
df.select(size(split(col("Description"), " "))).show(2)
```
### `array_contains`
We can also see whether this array contains a value:
```
from pyspark.sql.functions import array_contains
df.select(array_contains(split(col("Description"), " "), "WHITE")).show(5)
```
However, this does not solve our current problem. To convert a complex type into a set of rows (one per value in our array), we need to use the explode function.
### `explode`
The `explode` function takes a column that consists of arrays and creates one row (with the rest of the values duplicated) per value in the array. The figure below illustrates the process.
<img src="https://github.com/soltaniehha/Big-Data-Analytics-for-Business/blob/master/figs/06-01-Exploding-a-column-of-text.png?raw=true" width="900" align="left"/>
```
from pyspark.sql.functions import split, explode
df.withColumn("splitted", split(col("Description"), " "))\
.withColumn("exploded", explode(col("splitted")))\
.select("Description", "InvoiceNo", "exploded").show(13, False)
```
### Maps
Maps are created by using the `create_map` function (`map` in SQL) and key-value pairs of columns. You then can select them just like you might select from an array:
```
from pyspark.sql.functions import create_map
df.select(create_map(col("Description"), col("InvoiceNo")).alias("complex_map")).show(5, False)
```
In SQL
```sql
SELECT map(Description, InvoiceNo) as complex_map FROM dfTable
WHERE Description IS NOT NULL
```
```
df.select(create_map(col("Description"), col("InvoiceNo")).alias("complex_map"))\
.selectExpr("*", "complex_map['WHITE METAL LANTERN']").show(5, False)
```
You can also explode map types, which will turn them into columns:
```
df.select(create_map(col("Description"), col("InvoiceNo")).alias("complex_map"))\
.selectExpr("explode(complex_map)").show(2, False)
```
## User-Defined Functions
One of the most powerful things that you can do in Spark is define your own functions. These user-defined functions (UDFs) make it possible for you to write your own custom transformations using Python or Scala and even use external libraries. UDFs can take and return one or more columns as input. Spark UDFs are incredibly powerful because you can write them in several different programming languages; you do not need to create them in an esoteric format or domain-specific language. They’re just functions that operate on the data, record by record. By default, these functions are registered as temporary functions to be used in that specific SparkSession or Context.
Although you can write UDFs in Scala, Python, or Java, there are performance considerations that you should be aware of. To illustrate this, we’re going to walk through exactly what happens when you create a UDF, pass it to Spark, and then execute code using that UDF.
The first step is the actual function. We’ll create a simple one for this example. Let’s write a power3 function that takes a number and raises it to a power of three:
```
def power3(double_value):
return double_value ** 3
power3(2.0)
```
In this trivial example, we can see that our functions work as expected. We are able to provide an individual input and produce the expected result (with this simple test case). Thus far, our expectations for the input are high: it must be a specific type and cannot be a null value.
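If we wanted the function to tolerate missing values, a minimal sketch (the name `power3_safe` is hypothetical) could simply pass nulls through:
```python
def power3_safe(value):
    # Return None for missing inputs instead of raising an error
    if value is None:
        return None
    return value ** 3

power3_safe(None), power3_safe(2.0)
```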
Now that we’ve created these functions and tested them, we need to register them with Spark so that we can use them on all of our worker machines. Spark will serialize the function on the driver and transfer it over the network to all executor processes. This happens regardless of language.
When you use the function, Spark starts a Python process on the worker, serializes all of the data to a format that Python can understand (remember, it was in the JVM earlier), executes the function row by row on that data in the Python process, and then finally returns the results of the row operations to the JVM and Spark. The figure below provides an overview of the process:
<img src="https://github.com/soltaniehha/Big-Data-Analytics-for-Business/blob/master/figs/06-01-UDF.png?raw=true" width="700" align="center"/>
**Warning:** Starting this Python process is expensive. The real cost is in serializing the data to Python.
We need to register the function to make it available as a DataFrame function:
```
from pyspark.sql.functions import udf
power3udf = udf(power3)
```
Then, we can use it in our DataFrame code:
```
from pyspark.sql.functions import col
udfExampleDF = spark.range(5).toDF("num")
udfExampleDF.createOrReplaceTempView("udfExampleDFTable")
udfExampleDF.select(power3udf(col("num"))).show(5)
```
At this juncture, we can use this only as a DataFrame function. That is to say, we can’t use it within a string expression, only on an expression. However, we can also register this UDF as a Spark SQL function. This is valuable because it makes it simple to use this function within SQL as well as across languages.
```
spark.udf.register("power3py", power3)
udfExampleDF.selectExpr("power3py(num)").show()
```
In SQL:
```
spark.sql("""
SELECT num, power3py(num) from udfExampleDFTable
""").show()
```
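As noted in the warning above, row-at-a-time Python UDFs pay a serialization cost on every record. A hedged alternative, assuming `pyarrow` is installed, is a vectorized (pandas) UDF that moves data in Arrow batches; the name `power3_vectorized` is just illustrative:
```python
from pyspark.sql.functions import col, pandas_udf, PandasUDFType

# Vectorized UDF: receives a pandas Series per batch instead of one row at a time
@pandas_udf("long", PandasUDFType.SCALAR)
def power3_vectorized(s):
    return s ** 3

udfExampleDF.select(power3_vectorized(col("num"))).show()
```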
For a complete list of **pyspark.sql.functions** visit [Spark documentation page](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=w#module-pyspark.sql.functions).