Dataset columns: repo_name (string, lengths 6–77), path (string, lengths 8–215), license (15 classes), content (string, lengths 335–154k)
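Each record below carries these four fields. As a minimal sketch of how such records could be read back, assuming the dump is stored as JSON Lines with exactly these field names (the file name and storage format are assumptions, not stated anywhere in this dump):

import json
from collections import Counter

def iter_records(path):
    # Assumes one JSON object per line with keys repo_name, path, license, content.
    with open(path, encoding='utf-8') as fh:
        for line in fh:
            rec = json.loads(line)
            yield rec['repo_name'], rec['path'], rec['license'], rec['content']

# Example: count records per license (the file name is hypothetical).
license_counts = Counter(lic for _, _, lic, _ in iter_records('notebooks.jsonl'))
print(license_counts.most_common())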
ES-DOC/esdoc-jupyterhub
notebooks/ncc/cmip6/models/noresm2-mm/seaice.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'ncc', 'noresm2-mm', 'seaice') """ Explanation: ES-DOC CMIP6 Model Properties - Seaice MIP Era: CMIP6 Institute: NCC Source ID: NORESM2-MM Topic: Seaice Sub-Topics: Dynamics, Thermodynamics, Radiative Processes. Properties: 80 (63 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:24 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.model.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties --> Model 2. Key Properties --> Variables 3. Key Properties --> Seawater Properties 4. Key Properties --> Resolution 5. Key Properties --> Tuning Applied 6. Key Properties --> Key Parameter Values 7. Key Properties --> Assumptions 8. Key Properties --> Conservation 9. Grid --> Discretisation --> Horizontal 10. Grid --> Discretisation --> Vertical 11. Grid --> Seaice Categories 12. Grid --> Snow On Seaice 13. Dynamics 14. Thermodynamics --> Energy 15. Thermodynamics --> Mass 16. Thermodynamics --> Salt 17. Thermodynamics --> Salt --> Mass Transport 18. Thermodynamics --> Salt --> Thermodynamics 19. Thermodynamics --> Ice Thickness Distribution 20. Thermodynamics --> Ice Floe Size Distribution 21. Thermodynamics --> Melt Ponds 22. Thermodynamics --> Snow Processes 23. Radiative Processes 1. Key Properties --> Model Name of seaice model used. 1.1. Model Overview Is Required: TRUE    Type: STRING    Cardinality: 1.1 Overview of sea ice model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.model.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE    Type: STRING    Cardinality: 1.1 Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.variables.prognostic') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sea ice temperature" # "Sea ice concentration" # "Sea ice thickness" # "Sea ice volume per grid cell area" # "Sea ice u-velocity" # "Sea ice v-velocity" # "Sea ice enthalpy" # "Internal ice stress" # "Salinity" # "Snow temperature" # "Snow depth" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 2. Key Properties --> Variables List of prognostic variable in the sea ice model. 2.1. Prognostic Is Required: TRUE    Type: ENUM    Cardinality: 1.N List of prognostic variables in the sea ice component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS-10" # "Constant" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 3. Key Properties --> Seawater Properties Properties of seawater relevant to sea ice 3.1. Ocean Freezing Point Is Required: TRUE    Type: ENUM    Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.2. Ocean Freezing Point Value Is Required: FALSE    Type: FLOAT    Cardinality: 0.1 If using a constant seawater freezing point, specify this value. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --> Resolution Resolution of the sea ice grid 4.1. Name Is Required: TRUE    Type: STRING    Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. Canonical Horizontal Resolution Is Required: TRUE    Type: STRING    Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.3. Number Of Horizontal Gridpoints Is Required: TRUE    Type: INTEGER    Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Key Properties --> Tuning Applied Tuning applied to sea ice model component 5.1. Description Is Required: TRUE    Type: STRING    Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.2. Target Is Required: TRUE    Type: STRING    Cardinality: 1.1 What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.3. Simulations Is Required: TRUE    Type: STRING    Cardinality: 1.1 *Which simulations had tuning applied, e.g. all, not historical, only pi-control? * End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.4. Metrics Used Is Required: TRUE    Type: STRING    Cardinality: 1.1 List any observed metrics used in tuning model/parameters End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.5. Variables Is Required: FALSE    Type: STRING    Cardinality: 0.1 Which variables were changed during the tuning process? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Ice strength (P*) in units of N m{-2}" # "Snow conductivity (ks) in units of W m{-1} K{-1} " # "Minimum thickness of ice created in leads (h0) in units of m" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6. Key Properties --> Key Parameter Values Values of key parameters 6.1. Typical Parameters Is Required: FALSE    Type: ENUM    Cardinality: 0.N What values were specificed for the following parameters if used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.2. Additional Parameters Is Required: FALSE    Type: STRING    Cardinality: 0.N If you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.description') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Key Properties --> Assumptions Assumptions made in the sea ice model 7.1. Description Is Required: TRUE    Type: STRING    Cardinality: 1.N General overview description of any key assumptions made in this model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.2. On Diagnostic Variables Is Required: TRUE    Type: STRING    Cardinality: 1.N Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.3. Missing Processes Is Required: TRUE    Type: STRING    Cardinality: 1.N List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables? 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Key Properties --> Conservation Conservation in the sea ice component 8.1. Description Is Required: TRUE    Type: STRING    Cardinality: 1.1 Provide a general description of conservation methodology. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.properties') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Energy" # "Mass" # "Salt" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.2. Properties Is Required: TRUE    Type: ENUM    Cardinality: 1.N Properties conserved in sea ice by the numerical schemes. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.budget') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.3. Budget Is Required: TRUE    Type: STRING    Cardinality: 1.1 For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3 End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 8.4. Was Flux Correction Used Is Required: TRUE    Type: BOOLEAN    Cardinality: 1.1 Does conservation involved flux correction? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.5. Corrected Conserved Prognostic Variables Is Required: TRUE    Type: STRING    Cardinality: 1.1 List any variables which are conserved by more than the numerical scheme alone. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Ocean grid" # "Atmosphere Grid" # "Own Grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 9. Grid --> Discretisation --> Horizontal Sea ice discretisation in the horizontal 9.1. Grid Is Required: TRUE    Type: ENUM    Cardinality: 1.1 Grid on which sea ice is horizontal discretised? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Structured grid" # "Unstructured grid" # "Adaptive grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 9.2. Grid Type Is Required: TRUE    Type: ENUM    Cardinality: 1.1 What is the type of sea ice grid? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite differences" # "Finite elements" # "Finite volumes" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 9.3. Scheme Is Required: TRUE    Type: ENUM    Cardinality: 1.1 What is the advection scheme? 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 9.4. Thermodynamics Time Step Is Required: TRUE    Type: INTEGER    Cardinality: 1.1 What is the time step in the sea ice model thermodynamic component in seconds. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 9.5. Dynamics Time Step Is Required: TRUE    Type: INTEGER    Cardinality: 1.1 What is the time step in the sea ice model dynamic component in seconds. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.6. Additional Details Is Required: FALSE    Type: STRING    Cardinality: 0.1 Specify any additional horizontal discretisation details. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Zero-layer" # "Two-layers" # "Multi-layers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10. Grid --> Discretisation --> Vertical Sea ice vertical properties 10.1. Layering Is Required: TRUE    Type: ENUM    Cardinality: 1.N What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 10.2. Number Of Layers Is Required: TRUE    Type: INTEGER    Cardinality: 1.1 If using multi-layers specify how many. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 10.3. Additional Details Is Required: FALSE    Type: STRING    Cardinality: 0.1 Specify any additional vertical grid details. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 11. Grid --> Seaice Categories What method is used to represent sea ice categories ? 11.1. Has Mulitple Categories Is Required: TRUE    Type: BOOLEAN    Cardinality: 1.1 Set to true if the sea ice model has multiple sea ice categories. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11.2. Number Of Categories Is Required: TRUE    Type: INTEGER    Cardinality: 1.1 If using sea ice categories specify how many. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.3. 
Category Limits Is Required: TRUE    Type: STRING    Cardinality: 1.1 If using sea ice categories specify each of the category limits. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.4. Ice Thickness Distribution Scheme Is Required: TRUE    Type: STRING    Cardinality: 1.1 Describe the sea ice thickness distribution scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.other') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.5. Other Is Required: FALSE    Type: STRING    Cardinality: 0.1 If the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 12. Grid --> Snow On Seaice Snow on sea ice details 12.1. Has Snow On Ice Is Required: TRUE    Type: BOOLEAN    Cardinality: 1.1 Is snow on ice represented in this model? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 12.2. Number Of Snow Levels Is Required: TRUE    Type: INTEGER    Cardinality: 1.1 Number of vertical levels of snow on ice? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.3. Snow Fraction Is Required: TRUE    Type: STRING    Cardinality: 1.1 Describe how the snow fraction on sea ice is determined End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.4. Additional Details Is Required: FALSE    Type: STRING    Cardinality: 0.1 Specify any additional details related to snow on ice. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.horizontal_transport') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Incremental Re-mapping" # "Prather" # "Eulerian" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Dynamics Sea Ice Dynamics 13.1. Horizontal Transport Is Required: TRUE    Type: ENUM    Cardinality: 1.1 What is the method of horizontal advection of sea ice? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Incremental Re-mapping" # "Prather" # "Eulerian" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. Transport In Thickness Space Is Required: TRUE    Type: ENUM    Cardinality: 1.1 What is the method of sea ice transport in thickness space (i.e. in thickness categories)? 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Hibler 1979" # "Rothrock 1975" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.3. Ice Strength Formulation Is Required: TRUE    Type: ENUM    Cardinality: 1.1 Which method of sea ice strength formulation is used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.redistribution') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Rafting" # "Ridging" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.4. Redistribution Is Required: TRUE    Type: ENUM    Cardinality: 1.N Which processes can redistribute sea ice (including thickness)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.rheology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Free-drift" # "Mohr-Coloumb" # "Visco-plastic" # "Elastic-visco-plastic" # "Elastic-anisotropic-plastic" # "Granular" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.5. Rheology Is Required: TRUE    Type: ENUM    Cardinality: 1.1 Rheology, what is the ice deformation formulation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pure ice latent heat (Semtner 0-layer)" # "Pure ice latent and sensible heat" # "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)" # "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14. Thermodynamics --> Energy Processes related to energy in sea ice thermodynamics 14.1. Enthalpy Formulation Is Required: TRUE    Type: ENUM    Cardinality: 1.1 What is the energy formulation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pure ice" # "Saline ice" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.2. Thermal Conductivity Is Required: TRUE    Type: ENUM    Cardinality: 1.1 What type of thermal conductivity is used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Conduction fluxes" # "Conduction and radiation heat fluxes" # "Conduction, radiation and latent heat transport" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.3. Heat Diffusion Is Required: TRUE    Type: ENUM    Cardinality: 1.1 What is the method of heat diffusion? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heat Reservoir" # "Thermal Fixed Salinity" # "Thermal Varying Salinity" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.4. Basal Heat Flux Is Required: TRUE    Type: ENUM    Cardinality: 1.1 Method by which basal ocean heat flux is handled? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.5. Fixed Salinity Value Is Required: FALSE    Type: FLOAT    Cardinality: 0.1 If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.6. Heat Content Of Precipitation Is Required: TRUE    Type: STRING    Cardinality: 1.1 Describe the method by which the heat content of precipitation is handled. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.7. Precipitation Effects On Salinity Is Required: FALSE    Type: STRING    Cardinality: 0.1 If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15. Thermodynamics --> Mass Processes related to mass in sea ice thermodynamics 15.1. New Ice Formation Is Required: TRUE    Type: STRING    Cardinality: 1.1 Describe the method by which new sea ice is formed in open water. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. Ice Vertical Growth And Melt Is Required: TRUE    Type: STRING    Cardinality: 1.1 Describe the method that governs the vertical growth and melt of sea ice. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Floe-size dependent (Bitz et al 2001)" # "Virtual thin ice melting (for single-category)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.3. Ice Lateral Melting Is Required: TRUE    Type: ENUM    Cardinality: 1.1 What is the method of sea ice lateral melting? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.4. Ice Surface Sublimation Is Required: TRUE    Type: STRING    Cardinality: 1.1 Describe the method that governs sea ice surface sublimation. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.5. Frazil Ice Is Required: TRUE    Type: STRING    Cardinality: 1.1 Describe the method of frazil ice formation. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 16. Thermodynamics --> Salt Processes related to salt in sea ice thermodynamics. 16.1. Has Multiple Sea Ice Salinities Is Required: TRUE    Type: BOOLEAN    Cardinality: 1.1 Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 16.2. Sea Ice Salinity Thermal Impacts Is Required: TRUE    Type: BOOLEAN    Cardinality: 1.1 Does sea ice salinity impact the thermal properties of sea ice? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Prescribed salinity profile" # "Prognostic salinity profile" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17. Thermodynamics --> Salt --> Mass Transport Mass transport of salt 17.1. Salinity Type Is Required: TRUE    Type: ENUM    Cardinality: 1.1 How is salinity determined in the mass transport of salt calculation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 17.2. Constant Salinity Value Is Required: FALSE    Type: FLOAT    Cardinality: 0.1 If using a constant salinity value specify this value in PSU? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.3. Additional Details Is Required: FALSE    Type: STRING    Cardinality: 0.1 Describe the salinity profile used. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Prescribed salinity profile" # "Prognostic salinity profile" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18. Thermodynamics --> Salt --> Thermodynamics Salt thermodynamics 18.1. Salinity Type Is Required: TRUE    Type: ENUM    Cardinality: 1.1 How is salinity determined in the thermodynamic calculation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 18.2. Constant Salinity Value Is Required: FALSE    Type: FLOAT    Cardinality: 0.1 If using a constant salinity value specify this value in PSU? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18.3. 
Additional Details Is Required: FALSE    Type: STRING    Cardinality: 0.1 Describe the salinity profile used. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Virtual (enhancement of thermal conductivity, thin ice melting)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19. Thermodynamics --> Ice Thickness Distribution Ice thickness distribution details. 19.1. Representation Is Required: TRUE    Type: ENUM    Cardinality: 1.1 How is the sea ice thickness distribution represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Parameterised" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20. Thermodynamics --> Ice Floe Size Distribution Ice floe-size distribution details. 20.1. Representation Is Required: TRUE    Type: ENUM    Cardinality: 1.1 How is the sea ice floe-size represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20.2. Additional Details Is Required: FALSE    Type: STRING    Cardinality: 0.1 Please provide further details on any parameterisation of floe-size. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 21. Thermodynamics --> Melt Ponds Characteristics of melt ponds. 21.1. Are Included Is Required: TRUE    Type: BOOLEAN    Cardinality: 1.1 Are melt ponds included in the sea ice model? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flocco and Feltham (2010)" # "Level-ice melt ponds" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 21.2. Formulation Is Required: TRUE    Type: ENUM    Cardinality: 1.1 What method of melt pond formulation is used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Albedo" # "Freshwater" # "Heat" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 21.3. Impacts Is Required: TRUE    Type: ENUM    Cardinality: 1.N What do melt ponds have an impact on? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') # PROPERTY VALUE(S): # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 22. Thermodynamics --> Snow Processes Thermodynamic processes in snow on sea ice 22.1. Has Snow Aging Is Required: TRUE    Type: BOOLEAN    Cardinality: 1.N Set to True if the sea ice model has a snow aging scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.2. Snow Aging Scheme Is Required: FALSE    Type: STRING    Cardinality: 0.1 Describe the snow aging scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 22.3. Has Snow Ice Formation Is Required: TRUE    Type: BOOLEAN    Cardinality: 1.N Set to True if the sea ice model has snow ice formation. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.4. Snow Ice Formation Scheme Is Required: FALSE    Type: STRING    Cardinality: 0.1 Describe the snow ice formation scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.5. Redistribution Is Required: TRUE    Type: STRING    Cardinality: 1.1 What is the impact of ridging on snow cover? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Single-layered heat diffusion" # "Multi-layered heat diffusion" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.6. Heat Diffusion Is Required: TRUE    Type: ENUM    Cardinality: 1.1 What is the heat diffusion through snow methodology in sea ice thermodynamics? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Delta-Eddington" # "Parameterized" # "Multi-band albedo" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23. Radiative Processes Sea Ice Radiative Processes 23.1. Surface Albedo Is Required: TRUE    Type: ENUM    Cardinality: 1.1 Method used to handle surface albedo. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Delta-Eddington" # "Exponential attenuation" # "Ice radiation transmission per category" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.2. Ice Radiation Transmission Is Required: TRUE    Type: ENUM    Cardinality: 1.N Method by which solar radiation through sea ice is handled. End of explanation """
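The ES-DOC notebook above ships only empty templates: each cell selects a property with DOC.set_id and then expects a value via DOC.set_value. As a minimal sketch of what a few completed cells might look like, the values below are illustrative placeholders, not the documented NorESM2-MM configuration:

# Hypothetical example of completing ES-DOC property cells (placeholder values).
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
DOC.set_value("TEOS-10")   # must be one of the valid choices listed in that cell

DOC.set_id('cmip6.seaice.key_properties.resolution.name')
DOC.set_value("example-grid-1deg")   # placeholder grid name

# Authors and publication status are set the same way:
DOC.set_author("Jane Doe", "jane.doe@example.org")   # placeholder author
DOC.set_publication_status(0)                        # 0 = do not publish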
xpharry/Udacity-DLFoudation
tutorials/sentiment_network/.ipynb_checkpoints/Sentiment Classification - Project 1 Solution-checkpoint.ipynb
mit
def pretty_print_review_and_label(i): print(labels[i] + "\t:\t" + reviews[i][:80] + "...") g = open('reviews.txt','r') # What we know! reviews = list(map(lambda x:x[:-1],g.readlines())) g.close() g = open('labels.txt','r') # What we WANT to know! labels = list(map(lambda x:x[:-1].upper(),g.readlines())) g.close() len(reviews) reviews[0] labels[0] """ Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network by Andrew Trask Twitter: @iamtrask Blog: http://iamtrask.github.io What You Should Already Know neural networks, forward and back-propagation stochastic gradient descent mean squared error and train/test splits Where to Get Help if You Need it Re-watch previous Udacity Lectures Leverage the recommended Course Reading Material - Grokking Deep Learning (40% Off: traskud17) Shoot me a tweet @iamtrask Tutorial Outline: Intro: The Importance of "Framing a Problem" Curate a Dataset Developing a "Predictive Theory" PROJECT 1: Quick Theory Validation Transforming Text to Numbers PROJECT 2: Creating the Input/Output Data Putting it all together in a Neural Network PROJECT 3: Building our Neural Network Understanding Neural Noise PROJECT 4: Making Learning Faster by Reducing Noise Analyzing Inefficiencies in our Network PROJECT 5: Making our Network Train and Run Faster Further Noise Reduction PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary Analysis: What's going on in the weights? Lesson: Curate a Dataset End of explanation """ print("labels.txt \t : \t reviews.txt\n") pretty_print_review_and_label(2137) pretty_print_review_and_label(12816) pretty_print_review_and_label(6267) pretty_print_review_and_label(21934) pretty_print_review_and_label(5297) pretty_print_review_and_label(4998) """ Explanation: Lesson: Develop a Predictive Theory End of explanation """ from collections import Counter import numpy as np positive_counts = Counter() negative_counts = Counter() total_counts = Counter() for i in range(len(reviews)): if(labels[i] == 'POSITIVE'): for word in reviews[i].split(" "): positive_counts[word] += 1 total_counts[word] += 1 else: for word in reviews[i].split(" "): negative_counts[word] += 1 total_counts[word] += 1 positive_counts.most_common() pos_neg_ratios = Counter() for term,cnt in list(total_counts.most_common()): if(cnt > 100): pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1) pos_neg_ratios[term] = pos_neg_ratio for word,ratio in pos_neg_ratios.most_common(): if(ratio > 1): pos_neg_ratios[word] = np.log(ratio) else: pos_neg_ratios[word] = -np.log((1 / (ratio+0.01))) # words most frequently seen in a review with a "POSITIVE" label pos_neg_ratios.most_common() # words most frequently seen in a review with a "NEGATIVE" label list(reversed(pos_neg_ratios.most_common()))[0:30] """ Explanation: Project 1: Quick Theory Validation End of explanation """
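The cells above turn raw word counts into a sentiment score per word: ratios above 1 map to positive log values and ratios below 1 map to negative ones, so strongly positive and strongly negative words land roughly symmetrically around zero. A small worked example with made-up counts (not taken from the reviews dataset):

import numpy as np

# Made-up counts for illustration only.
pos, neg = 500, 100                     # a word seen mostly in POSITIVE reviews
ratio = pos / float(neg + 1)            # ~4.95
print(np.log(ratio))                    # ~+1.60

pos, neg = 100, 500                     # the mirror-image NEGATIVE word
ratio = pos / float(neg + 1)            # ~0.20
print(-np.log(1 / (ratio + 0.01)))      # ~-1.56, roughly symmetric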
quietcoolwu/python-playground
notebooks/Module_Inspect.ipynb
mit
#coding: UTF-8 import sys # module: sys points to this module object def foo(): pass # function: foo points to this function object class Cat(object): # class: Cat points to this class object def __init__(self, name='kitty'): self.name = name def sayHi(self): # instance method: sayHi points to this method object, accessed as Class.sayHi or instance.sayHi print self.name, 'says Hi!' # the field called name is accessed as instance.name cat = Cat() # cat is an instance object of the Cat class print Cat.sayHi # accessed through the class name, an instance method is unbound print cat.sayHi # accessed through an instance, the method is bound """ Explanation: Introspection in the Python Runtime Sometimes we need to call a method of an object, or assign to one of its fields, when the method or field name cannot be fixed at coding time and has to be passed in as a string. This mechanism is called reflection (turning things around and letting the object tell us what it is), or introspection, and it is used to obtain information about unknown objects at runtime. Reflection sounds like an intimidating term, and in most languages it is a comparatively involved topic that is usually taught as an advanced subject; in Python it is very simple and hardly feels different from ordinary code. Functions and methods obtained through reflection can be called directly with parentheses as usual, and a class obtained this way can be instantiated directly. A field obtained this way cannot be assigned to directly, however, because what you actually get is another reference to the same place, and assignment only rebinds that reference. End of explanation """ cat = Cat('kitty') print cat.name # access an instance attribute cat.sayHi() # call an instance method print dir(cat) # list the instance's attribute names if hasattr(cat, 'name'): # check whether the instance has this attribute setattr(cat, 'name', 'tiger') # same as: cat.name = 'tiger' print getattr(cat, 'name') # same as: print cat.name getattr(cat, 'sayHi')() # same as: cat.sayHi() """ Explanation: Accessing an Object's Attributes The built-in functions below can be used to inspect or access an object's attributes. They work on any object, not just the Cat instance in this example; in Python, everything is an object. End of explanation """ co = cat.sayHi.func_code print co print co.co_argcount # 1 print co.co_names # ('name',) print co.co_varnames # ('self',) print co.co_flags & 0b100 # 0 """ Explanation: Code objects (func_code) <pre> A code object can be obtained by compiling class source code, function source code, or a simple statement. co_argcount: total number of ordinary parameters, not counting the * parameter or the ** parameter. co_names: tuple of the global and attribute names referenced by the code. co_varnames: tuple of all local variable names (including the parameters). co_filename: name of the file the source code lives in. co_flags: a number whose individual bits carry specific information. The most interesting bits are 0b100 (0x4) and 0b1000 (0x8): if co_flags & 0b100 != 0 the function takes a *args parameter; if co_flags & 0b1000 != 0 it takes a **kwargs parameter. In addition, if co_flags & 0b100000 (0x20) != 0, this is a generator function. </pre> End of explanation """ def add(x, y=1): f = sys._getframe() # same as inspect.currentframe() print locals() print f.f_locals # same as locals() print f.f_back # <frame object at 0x...> return x+y add(2) """ Explanation: Stack frames (frame) A frame object represents one frame of the function call stack while the program is running. A function has no attribute that points to its frame, because the frame only exists while the function is being called; a generator, by contrast, is returned by a function call, so it does carry an attribute pointing to its frame. To get the frame associated with a function you must capture it while the function has been called and has not yet returned. You can obtain the current frame with the sys module's _getframe() function or the inspect module's currentframe() function. All of the attributes listed here are read-only. 1. f_back: the previous frame in the call stack. 2. f_code: the code object associated with this frame. 3. f_locals: on the current frame this is the same as the built-in locals(), but you can fetch another frame first and use this attribute to get that frame's locals(). 4. f_globals: same as the built-in globals() on the current frame, but again you can fetch another frame first. End of explanation """ def div(x, y): try: return x/y except: print sys.exc_info() tb = sys.exc_info()[2] # return (exc_type, exc_value, traceback) print tb print tb.tb_lineno # line number of "return x/y" div(1, 0) """ Explanation: Tracebacks (traceback) A traceback is the object used to walk back through the stack when an exception occurs, the counterpart of a frame. It is only built when an exception is raised, and an uncaught exception keeps propagating to the enclosing frames, so you need a try block to get hold of this object. You can obtain it with the sys module's exc_info() function, which returns a tuple whose elements are the exception type, the exception object and the traceback. All traceback attributes are read-only. 1. tb_next: the next traceback object. 2. tb_frame: the stack frame this traceback corresponds to. 3. tb_lineno: the line number of this traceback. End of explanation """ import inspect def add(x, y=1, *z): print inspect.getargvalues(inspect.currentframe()) return x + y + sum(z) add(2) def recurse(limit): local_variable = '.' * limit # locals includes local_variable, which is not a parameter of recurse(): '.' print(limit, inspect.getargvalues(inspect.currentframe())) if limit <= 0: return recurse(limit - 1) return local_variable if __name__ == '__main__': recurse(2) """ Explanation: Using the inspect Module to Examine Stack Frames Besides introspection of code objects, the inspect module includes functions for examining the runtime environment while a function is executing. Most of these functions deal with the call stack and operate on call frames. Each frame record in the stack contains: 1. the frame object 2. the filename of the file the code lives in 3. the line number currently being executed in that file 4. the name of the function being called 5. a list of context lines from the source 6. the index of the current line within that list. This is the information that appears in a traceback when an exception is raised. It is also useful for logging and debugging, because the stack frames reveal the argument values of each function. currentframe() returns the frame at the top of the stack, corresponding to the current function. getargvalues() returns a tuple containing: 1. the argument names 2. the name of the variable-argument parameter 3. a dict of the frame's local values, named locals, where each key:value pair is a variable name and its value. Combined, these can be used to show the arguments and local variables of different functions on the call stack. End of explanation """
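The notebook above targets Python 2 (print statements, the func_code attribute). For reference, a minimal sketch of the same introspection calls in Python 3, using only the standard library:

import inspect
import sys


class Cat:
    def __init__(self, name='kitty'):
        self.name = name

    def say_hi(self):
        print(self.name, 'says Hi!')


cat = Cat()
print(getattr(cat, 'name'))          # attribute access by string name
getattr(cat, 'say_hi')()             # call a method looked up by name

co = cat.say_hi.__code__             # func_code was renamed __code__ in Python 3
print(co.co_argcount, co.co_varnames)


def add(x, y=1):
    frame = inspect.currentframe()   # same idea as sys._getframe()
    print(frame.f_locals)            # {'x': 2, 'y': 1, 'frame': ...}
    return x + y


add(2)

try:
    1 / 0
except ZeroDivisionError:
    tb = sys.exc_info()[2]           # (exc_type, exc_value, traceback)
    print(tb.tb_lineno)              # line number where the exception was raised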
gururajl/deep-learning
language-translation/dlnd_language_translation.ipynb
mit
""" DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) len(source_text) """ Explanation: Language Translation In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the Data Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. End of explanation """ view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) """ Explanation: Explore the Data Play around with view_sentence_range to view different parts of the data. End of explanation """ def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. :param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # TODO: Implement Function #from IPython.core.debugger import Tracer; Tracer()() source_id_text = [[source_vocab_to_int[w] for w in sen.split()] for sen in source_text.split('\n')] target_id_text = [[target_vocab_to_int[w] for w in sen.split()] + [target_vocab_to_int['<EOS>']] for sen in target_text.split('\n')] return source_id_text, target_id_text """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) """ Explanation: Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of target_text. This will help the neural network predict when the sentence should end. You can get the &lt;EOS&gt; word id by doing: python target_vocab_to_int['&lt;EOS&gt;'] You can get other word ids using source_vocab_to_int and target_vocab_to_int. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) """ Explanation: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. 
End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper import problem_unittests as tests (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() """ Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf from tensorflow.python.layers.core import Dense # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) """ Explanation: Check the Version of TensorFlow and Access to GPU This will check to make sure you have the correct version of TensorFlow and access to a GPU End of explanation """ def model_inputs(): """ Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences. :return: Tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) """ # TODO: Implement Function input_data = tf.placeholder(tf.int32, [None, None], name='input') targets = tf.placeholder(tf.int32, [None, None], name='targets') lr = tf.placeholder(tf.float32, name='learning_rate') keep_prob = tf.placeholder(tf.float32, name='keep_prob') target_sequence_length = tf.placeholder(tf.int32, (None,), name='target_sequence_length') max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_len') source_sequence_length = tf.placeholder(tf.int32, (None,), name='source_sequence_length') return input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) """ Explanation: Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below: - model_inputs - process_decoder_input - encoding_layer - decoding_layer_train - decoding_layer_infer - decoding_layer - seq2seq_model Input Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: Input text placeholder named "input" using the TF Placeholder name parameter with rank 2. Targets placeholder with rank 2. Learning rate placeholder with rank 0. Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0. Target sequence length placeholder named "target_sequence_length" with rank 1 Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0. 
Source sequence length placeholder named "source_sequence_length" with rank 1 Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) End of explanation """ def process_decoder_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for encoding :param target_data: Target Placehoder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ # TODO: Implement Function # strip the last col stripped = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) # prepend with <GO> processed = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), stripped], 1) return processed """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_encoding_input(process_decoder_input) """ Explanation: Process Decoder Input Implement process_decoder_input by removing the last word id from each batch in target_data and concat the GO ID to the begining of each batch. End of explanation """ from imp import reload reload(tests) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source data :param encoding_embedding_size: embedding size of source data :return: tuple (RNN output, RNN state) """ # TODO: Implement Function embed = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size) #LSTM cell def build_cell(rnn_size): lstm = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=123)) dropout = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) return dropout cell = tf.contrib.rnn.MultiRNNCell([build_cell(rnn_size) for _ in range(num_layers)]) enc_output, enc_state = tf.nn.dynamic_rnn(cell, embed, sequence_length=source_sequence_length, dtype=tf.float32) return enc_output, enc_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) """ Explanation: Encoding Implement encoding_layer() to create a Encoder RNN layer: * Embed the encoder input using tf.contrib.layers.embed_sequence * Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper * Pass cell and embedded input to tf.nn.dynamic_rnn() End of explanation """ def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id """ # TODO: Implement Function helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, sequence_length=target_sequence_length, time_major=False) decoder = 
tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, encoder_state, output_layer) decoder_output = tf.contrib.seq2seq.dynamic_decode(decoder, impute_finished=True, maximum_iterations=max_summary_length)[0] return decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) """ Explanation: Decoding - Training Create a training decoding layer: * Create a tf.contrib.seq2seq.TrainingHelper * Create a tf.contrib.seq2seq.BasicDecoder * Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode End of explanation """ def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param max_target_sequence_length: Maximum length of target sequences :param vocab_size: Size of decoder/target vocabulary :param decoding_scope: TenorFlow Variable Scope for decoding :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id """ # TODO: Implement Function start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens') inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id) # Basic decoder inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, encoder_state, output_layer) # Perform dynamic decoding using the decoder inference_decoder_output = tf.contrib.seq2seq.dynamic_decode(inference_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length)[0] return inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) """ Explanation: Decoding - Inference Create inference decoder: * Create a tf.contrib.seq2seq.GreedyEmbeddingHelper * Create a tf.contrib.seq2seq.BasicDecoder * Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode End of explanation """ def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): """ Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :param decoding_embedding_size: Decoding embedding size :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # TODO: Implement Function dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) #LSTM cell def build_cell(rnn_size): lstm = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, 
seed=123)) dropout = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) return dropout dec_cell = tf.contrib.rnn.MultiRNNCell([build_cell(rnn_size) for _ in range(num_layers)]) output_layer = Dense(target_vocab_size, kernel_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1)) with tf.variable_scope("decode"): decoder_output = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) with tf.variable_scope("decode", reuse=True): decoder_infer = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], max_target_sequence_length, target_vocab_size, output_layer, batch_size, keep_prob) return decoder_output, decoder_infer """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) """ Explanation: Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Embed the target sequences Construct the decoder LSTM cell (just like you constructed the encoder cell above) Create an output layer to map the outputs of the decoder to the elements of our vocabulary Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits. Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits. Note: You'll need to use tf.variable_scope to share variables between training and inference. End of explanation """ def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sentence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param source_sequence_length: Sequence Lengths of source sequences in the batch :param target_sequence_length: Sequence Lengths of target sequences in the batch :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Decoder embedding size :param dec_embedding_size: Encoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # TODO: Implement Function _, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, enc_embedding_size) dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size) train_decoder_output, infer_decoder_output = decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) return train_decoder_output, infer_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) """ Explanation: Build the Neural Network Apply the functions you implemented above to: Encode the 
input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size). Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function. Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function. End of explanation """ # Number of Epochs epochs = 8 # Batch Size batch_size = 128 # RNN Size rnn_size = 128 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 200 decoding_embedding_size = 200 # Learning Rate learning_rate = 0.01 # Dropout Keep Probability keep_probability = 0.7 display_step = 200 """ Explanation: Neural Network Training Hyperparameters Tune the following parameters: Set epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set num_layers to the number of layers. Set encoding_embedding_size to the size of the embedding for the encoder. Set decoding_embedding_size to the size of the embedding for the decoder. Set learning_rate to the learning rate. Set keep_probability to the Dropout keep probability Set display_step to state how many steps between each debug output statement End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = tf.identity(inference_logits.sample_id, name='predictions') masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) """ Explanation: Build the Graph Build the graph using the neural network you implemented. 
End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ def pad_sentence_batch(sentence_batch, pad_int): """Pad sentences with <PAD> so that each sentence of a batch has the same length""" max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch] def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int): """Batch targets, sources, and the lengths of their sentences together""" for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size # Slice the right amount for the batch sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] # Pad pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) # Need the lengths for the _lengths parameters pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths """ Explanation: Batch and pad the source and target sequences End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') """ Explanation: 
Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) """ Explanation: Save Parameters Save the batch_size and save_path parameters for inference. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() """ Explanation: Checkpoint End of explanation """ def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ # TODO: Implement Function input = [vocab_to_int[word] if word in vocab_to_int else vocab_to_int['<UNK>'] for word in sentence.lower().split()] return input """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) """ Explanation: Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences. Convert the sentence to lowercase Convert words into ids using vocab_to_int Convert words not in the vocabulary, to the &lt;UNK&gt; word id. End of explanation """ translate_sentence = 'he saw a old yellow truck .' """ DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('predictions:0') target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0') source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size, target_sequence_length: [len(translate_sentence)*2]*batch_size, source_sequence_length: [len(translate_sentence)]*batch_size, keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in translate_logits])) print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits]))) """ Explanation: Translate This will translate translate_sentence from English to French. End of explanation """
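The prediction printed above can include word ids that come after the end-of-sentence marker. As an optional post-processing step that is not part of the original project, the short sketch below trims the predicted sequence at the first <EOS> id before joining the words; it only uses variables already defined in this notebook (translate_logits, target_vocab_to_int, target_int_to_vocab).

# Optional sketch: cut the prediction at the first <EOS> token so that only the
# translated sentence itself is printed.
eos_id = target_vocab_to_int['<EOS>']
trimmed_ids = []
for word_id in translate_logits:
    if word_id == eos_id:
        break
    trimmed_ids.append(word_id)
print('Trimmed French Words: {}'.format(' '.join(target_int_to_vocab[i] for i in trimmed_ids)))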
AllenDowney/ThinkStats2
code/chap13ex.ipynb
gpl-3.0
from os.path import basename, exists def download(url): filename = basename(url) if not exists(filename): from urllib.request import urlretrieve local, _ = urlretrieve(url, filename) print("Downloaded " + local) download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkstats2.py") download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkplot.py") import thinkstats2 import thinkplot import numpy as np import pandas as pd try: import empiricaldist except ImportError: !pip install empiricaldist """ Explanation: Chapter 13 Examples and Exercises from Think Stats, 2nd Edition http://thinkstats2.com Copyright 2016 Allen B. Downey MIT License: https://opensource.org/licenses/MIT End of explanation """ download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/nsfg.py") download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dct") download( "https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dat.gz" ) import nsfg preg = nsfg.ReadFemPreg() complete = preg.query("outcome in [1, 3, 4]").prglngth cdf = thinkstats2.Cdf(complete, label="cdf") """ Explanation: Survival analysis If we have an unbiased sample of complete lifetimes, we can compute the survival function from the CDF and the hazard function from the survival function. Here's the distribution of pregnancy length in the NSFG dataset. End of explanation """ download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/survival.py") import survival def MakeSurvivalFromCdf(cdf, label=""): """Makes a survival function based on a CDF. cdf: Cdf returns: SurvivalFunction """ ts = cdf.xs ss = 1 - cdf.ps return survival.SurvivalFunction(ts, ss, label) sf = MakeSurvivalFromCdf(cdf, label="survival") print(cdf[13]) print(sf[13]) """ Explanation: The survival function is just the complementary CDF. End of explanation """ thinkplot.Plot(sf) thinkplot.Cdf(cdf, alpha=0.2) thinkplot.Config(loc="center left") """ Explanation: Here's the CDF and SF. End of explanation """ hf = sf.MakeHazardFunction(label="hazard") print(hf[39]) thinkplot.Plot(hf) thinkplot.Config(ylim=[0, 0.75], loc="upper left") """ Explanation: And here's the hazard function. End of explanation """ download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemResp.dct") download( "https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemResp.dat.gz" ) resp6 = nsfg.ReadFemResp() """ Explanation: Age at first marriage We'll use the NSFG respondent file to estimate the hazard function and survival function for age at first marriage. End of explanation """ resp6.cmmarrhx.replace([9997, 9998, 9999], np.nan, inplace=True) resp6["agemarry"] = (resp6.cmmarrhx - resp6.cmbirth) / 12.0 resp6["age"] = (resp6.cmintvw - resp6.cmbirth) / 12.0 """ Explanation: We have to clean up a few variables. End of explanation """ complete = resp6[resp6.evrmarry == 1].agemarry.dropna() ongoing = resp6[resp6.evrmarry == 0].age """ Explanation: And the extract the age at first marriage for people who are married, and the age at time of interview for people who are not. End of explanation """ from collections import Counter def EstimateHazardFunction(complete, ongoing, label="", verbose=False): """Estimates the hazard function by Kaplan-Meier. 
http://en.wikipedia.org/wiki/Kaplan%E2%80%93Meier_estimator complete: list of complete lifetimes ongoing: list of ongoing lifetimes label: string verbose: whether to display intermediate results """ if np.sum(np.isnan(complete)): raise ValueError("complete contains NaNs") if np.sum(np.isnan(ongoing)): raise ValueError("ongoing contains NaNs") hist_complete = Counter(complete) hist_ongoing = Counter(ongoing) ts = list(hist_complete | hist_ongoing) ts.sort() at_risk = len(complete) + len(ongoing) lams = pd.Series(index=ts, dtype=float) for t in ts: ended = hist_complete[t] censored = hist_ongoing[t] lams[t] = ended / at_risk if verbose: print(t, at_risk, ended, censored, lams[t]) at_risk -= ended + censored return survival.HazardFunction(lams, label=label) """ Explanation: The following function uses Kaplan-Meier to estimate the hazard function. End of explanation """ hf = EstimateHazardFunction(complete, ongoing) thinkplot.Plot(hf) thinkplot.Config(xlabel="Age (years)", ylabel="Hazard") sf = hf.MakeSurvival() thinkplot.Plot(sf) thinkplot.Config(xlabel="Age (years)", ylabel="Prob unmarried", ylim=[0, 1]) """ Explanation: Here is the hazard function and corresponding survival function. End of explanation """ def EstimateMarriageSurvival(resp): """Estimates the survival curve. resp: DataFrame of respondents returns: pair of HazardFunction, SurvivalFunction """ # NOTE: Filling missing values would be better than dropping them. complete = resp[resp.evrmarry == 1].agemarry.dropna() ongoing = resp[resp.evrmarry == 0].age hf = EstimateHazardFunction(complete, ongoing) sf = hf.MakeSurvival() return hf, sf def ResampleSurvival(resp, iters=101): """Resamples respondents and estimates the survival function. resp: DataFrame of respondents iters: number of resamples """ _, sf = EstimateMarriageSurvival(resp) thinkplot.Plot(sf) low, high = resp.agemarry.min(), resp.agemarry.max() ts = np.arange(low, high, 1 / 12.0) ss_seq = [] for _ in range(iters): sample = thinkstats2.ResampleRowsWeighted(resp) _, sf = EstimateMarriageSurvival(sample) ss_seq.append(sf.Probs(ts)) low, high = thinkstats2.PercentileRows(ss_seq, [5, 95]) thinkplot.FillBetween(ts, low, high, color="gray", label="90% CI") """ Explanation: Quantifying uncertainty To see how much the results depend on random sampling, we'll use a resampling process again. End of explanation """ ResampleSurvival(resp6) thinkplot.Config( xlabel="Age (years)", ylabel="Prob unmarried", xlim=[12, 46], ylim=[0, 1], loc="upper right", ) """ Explanation: The following plot shows the survival function based on the raw data and a 90% CI based on resampling. End of explanation """ download( "https://github.com/AllenDowney/ThinkStats2/raw/master/code/1995FemRespData.dat.gz" ) download( "https://github.com/AllenDowney/ThinkStats2/raw/master/code/2006_2010_FemRespSetup.dct" ) download( "https://github.com/AllenDowney/ThinkStats2/raw/master/code/2006_2010_FemResp.dat.gz" ) resp5 = survival.ReadFemResp1995() resp6 = survival.ReadFemResp2002() resp7 = survival.ReadFemResp2010() resps = [resp5, resp6, resp7] """ Explanation: The SF based on the raw data falls outside the 90% CI because the CI is based on weighted resampling, and the raw data is not. You can confirm that by replacing ResampleRowsWeighted with ResampleRows in ResampleSurvival. More data To generate survivial curves for each birth cohort, we need more data, which we can get by combining data from several NSFG cycles. 
End of explanation """ def AddLabelsByDecade(groups, **options): """Draws fake points in order to add labels to the legend. groups: GroupBy object """ thinkplot.PrePlot(len(groups)) for name, _ in groups: label = "%d0s" % name thinkplot.Plot([15], [1], label=label, **options) def EstimateMarriageSurvivalByDecade(groups, **options): """Groups respondents by decade and plots survival curves. groups: GroupBy object """ thinkplot.PrePlot(len(groups)) for _, group in groups: _, sf = EstimateMarriageSurvival(group) thinkplot.Plot(sf, **options) def PlotResampledByDecade(resps, iters=11, predict_flag=False, omit=None): """Plots survival curves for resampled data. resps: list of DataFrames iters: number of resamples to plot predict_flag: whether to also plot predictions """ for i in range(iters): samples = [thinkstats2.ResampleRowsWeighted(resp) for resp in resps] sample = pd.concat(samples, ignore_index=True) groups = sample.groupby("decade") if omit: groups = [(name, group) for name, group in groups if name not in omit] # TODO: refactor this to collect resampled estimates and # plot shaded areas if i == 0: AddLabelsByDecade(groups, alpha=0.7) if predict_flag: PlotPredictionsByDecade(groups, alpha=0.1) EstimateMarriageSurvivalByDecade(groups, alpha=0.1) else: EstimateMarriageSurvivalByDecade(groups, alpha=0.2) """ Explanation: The following is the code from survival.py that generates SFs broken down by decade of birth. End of explanation """ PlotResampledByDecade(resps) thinkplot.Config( xlabel="Age (years)", ylabel="Prob unmarried", xlim=[13, 45], ylim=[0, 1] ) """ Explanation: Here are the results for the combined data. End of explanation """ def PlotPredictionsByDecade(groups, **options): """Groups respondents by decade and plots survival curves. groups: GroupBy object """ hfs = [] for _, group in groups: hf, sf = EstimateMarriageSurvival(group) hfs.append(hf) thinkplot.PrePlot(len(hfs)) for i, hf in enumerate(hfs): if i > 0: hf.Extend(hfs[i - 1]) sf = hf.MakeSurvival() thinkplot.Plot(sf, **options) """ Explanation: We can generate predictions by assuming that the hazard function of each generation will be the same as for the previous generation. End of explanation """ PlotResampledByDecade(resps, predict_flag=True) thinkplot.Config( xlabel="Age (years)", ylabel="Prob unmarried", xlim=[13, 45], ylim=[0, 1] ) """ Explanation: And here's what that looks like. End of explanation """ preg = nsfg.ReadFemPreg() complete = preg.query("outcome in [1, 3, 4]").prglngth print("Number of complete pregnancies", len(complete)) ongoing = preg[preg.outcome == 6].prglngth print("Number of ongoing pregnancies", len(ongoing)) hf = EstimateHazardFunction(complete, ongoing) sf1 = hf.MakeSurvival() """ Explanation: Remaining lifetime Distributions with difference shapes yield different behavior for remaining lifetime as a function of age. End of explanation """ rem_life1 = sf1.RemainingLifetime() thinkplot.Plot(rem_life1) thinkplot.Config( title="Remaining pregnancy length", xlabel="Weeks", ylabel="Mean remaining weeks" ) """ Explanation: Here's the expected remaining duration of a pregnancy as a function of the number of weeks elapsed. After week 36, the process becomes "memoryless". 
End of explanation """ hf, sf2 = EstimateMarriageSurvival(resp6) func = lambda pmf: pmf.Percentile(50) rem_life2 = sf2.RemainingLifetime(filler=np.inf, func=func) thinkplot.Plot(rem_life2) thinkplot.Config( title="Years until first marriage", ylim=[0, 15], xlim=[11, 31], xlabel="Age (years)", ylabel="Median remaining years", ) """ Explanation: And here's the median remaining time until first marriage as a function of age. End of explanation """ def CleanData(resp): """Cleans respondent data. resp: DataFrame """ resp.cmdivorcx.replace([9998, 9999], np.nan, inplace=True) resp["notdivorced"] = resp.cmdivorcx.isnull().astype(int) resp["duration"] = (resp.cmdivorcx - resp.cmmarrhx) / 12.0 resp["durationsofar"] = (resp.cmintvw - resp.cmmarrhx) / 12.0 month0 = pd.to_datetime("1899-12-15") dates = [month0 + pd.DateOffset(months=cm) for cm in resp.cmbirth] resp["decade"] = (pd.DatetimeIndex(dates).year - 1900) // 10 CleanData(resp6) married6 = resp6[resp6.evrmarry == 1] CleanData(resp7) married7 = resp7[resp7.evrmarry == 1] """ Explanation: Exercises Exercise: In NSFG Cycles 6 and 7, the variable cmdivorcx contains the date of divorce for the respondent’s first marriage, if applicable, encoded in century-months. Compute the duration of marriages that have ended in divorce, and the duration, so far, of marriages that are ongoing. Estimate the hazard and survival curve for the duration of marriage. Use resampling to take into account sampling weights, and plot data from several resamples to visualize sampling error. Consider dividing the respondents into groups by decade of birth, and possibly by age at first marriage. End of explanation """
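As one possible starting point for this exercise (not taken from the book's text), the sketch below reuses EstimateHazardFunction from earlier in the chapter, treating marriages that ended in divorce as complete lifetimes and marriages still intact at the interview as censored. It assumes the duration, durationsofar, and notdivorced columns created by CleanData above; resampling with sampling weights and splitting by birth decade are left as the next step.

def EstimateDivorceSurvival(married):
    """Estimates hazard and survival curves for marriage duration.

    married: DataFrame of respondents who have ever been married
    returns: pair of HazardFunction, SurvivalFunction
    """
    # Marriages that ended in divorce are complete observations; marriages that
    # are still intact at the time of interview are censored (ongoing).
    complete = married[married.notdivorced == 0].duration.dropna()
    ongoing = married[married.notdivorced == 1].durationsofar.dropna()
    hf = EstimateHazardFunction(complete, ongoing)
    return hf, hf.MakeSurvival()

hf_divorce, sf_divorce = EstimateDivorceSurvival(married6)
thinkplot.Plot(sf_divorce)
thinkplot.Config(xlabel='Duration of marriage (years)', ylabel='Prob still married', ylim=[0, 1])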
antoniomezzacapo/qiskit-tutorial
qiskit/ignis/relaxation_and_decoherence.ipynb
apache-2.0
import qiskit as qk import numpy as np from scipy.optimize import curve_fit from qiskit.tools.qcvv.fitters import exp_fit_fun, osc_fit_fun, plot_coherence from qiskit.wrapper.jupyter import * # Load saved IBMQ accounts qk.IBMQ.load_accounts() # backend and token settings backend = qk.IBMQ.get_backend('ibmq_16_melbourne') # the device to run on shots = 1024 # the number of shots in the experiment # function for padding with QId gates def pad_QId(circuit,N,qr): # circuit to add to, N = number of QId gates to add, qr = qubit reg for ii in range(N): circuit.barrier(qr) circuit.iden(qr) return circuit """ Explanation: <img src="../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left"> Relaxation and Decoherence The latest version of this notebook is available on https://github.com/qiskit/qiskit-tutorial. Contributors Martin Sandberg, Hanhee Paik, Antonio Córcoles, Doug McClure, and Jay Gambetta Introduction The interaction of quantum systems with their environment imposes certain limits and constraints on the study of their dynamics. The level of isolation of a quantum system dictates the rate at which it can exchange energy with its environment. This means that a quantum system will not hold a particular state for an arbitrary time, but will in general exchange energy with its environment and relax (or excite) to another state with different energy. This brings a trade-off in terms of controllability: a system that does not exchange much energy with the environment will keep its state for longer, but it will be more difficult to access and manipulate. Interaction with the environment can also result in decoherence, a process that does not result in energy exchange but that transforms quantum coherent states into classical mixed states. These processes, energy relaxation and decoherence, are typically described by timescales referred to as $T_1$ and $T_2$, respectively. 
End of explanation """ # Select qubit whose T1 is to be measured qubit=1 # Creating registers qr = qk.QuantumRegister(5) cr = qk.ClassicalRegister(5) # the delay times are all set in terms of single-qubit gates # so we need to calculate the time from these parameters params = backend.properties()['qubits'][qubit] pulse_length=params['gateTime']['value'] # single-qubit gate time buffer_length=params['buffer']['value'] # spacing between pulses unit = params['gateTime']['unit'] steps=10 gates_per_step=120 max_gates=(steps-1)*gates_per_step+1 tot_length=buffer_length+pulse_length time_per_step=gates_per_step*tot_length qc_dict={} for ii in range(steps): step_num='step_%s'%(str(ii)) qc_dict.update({step_num:qk.QuantumCircuit(qr, cr)}) qc_dict[step_num].x(qr[qubit]) qc_dict[step_num]=pad_QId(qc_dict[step_num],gates_per_step*ii,qr[qubit]) qc_dict[step_num].barrier(qr[qubit]) qc_dict[step_num].measure(qr[qubit], cr[qubit]) circuits=list(qc_dict.values()) %%qiskit_job_status # run the program status = backend.status() if status['operational'] == False or status['pending_jobs'] > 10: print('Warning: the selected backend appears to be busy or unavailable at present; consider choosing a different one if possible') t1_job=qk.execute(circuits, backend, shots=shots) # arrange the data from the run result_t1 = t1_job.result() keys_0_1=list(result_t1.get_counts(qc_dict['step_0']).keys())# get the key of the excited state '00001' data=np.zeros(len(qc_dict.keys())) # numpy array for data sigma_data = np.zeros(len(qc_dict.keys())) # change unit from ns to microseconds plot_factor=1 if unit.find('ns')>-1: plot_factor=1000 punit='$\\mu$s' xvals=time_per_step*np.linspace(0,len(qc_dict.keys()),len(qc_dict.keys()))/plot_factor # calculate the time steps in microseconds for ii,key in enumerate(qc_dict.keys()): # get the data in terms of counts for the excited state normalized to the total number of counts data[ii]=float(result_t1.get_counts(qc_dict[key])[keys_0_1[1]])/shots sigma_data[ii] = np.sqrt(data[ii]*(1-data[ii]))/np.sqrt(shots) # fit the data to an exponential fitT1, fcov = curve_fit(exp_fit_fun, xvals, data, bounds=([-1,2,0], [1., 500, 1])) ferr = np.sqrt(np.diag(fcov)) plot_coherence(xvals, data, sigma_data, fitT1, exp_fit_fun, punit, 'T$_1$ ', qubit) print("a: " + str(round(fitT1[0],2)) + u" \u00B1 " + str(round(ferr[0],2))) print("T1: " + str(round(fitT1[1],2))+ " µs" + u" \u00B1 " + str(round(ferr[1],2)) + ' µs') print("c: " + str(round(fitT1[2],2)) + u" \u00B1 " + str(round(ferr[2],2))) """ Explanation: Measurement of $T_1$ Let's measure the relaxation time ($T_1$ time) of one of our qubits. To do this, we simply run a series of experiments in which we place the qubit in the excited state ($|1\rangle$) and measure its state after a delay time that is varied across the set of experiments. The probability of obtaining the state $|1\rangle$ decays exponentially as the delay time is increased; the characteristic time of this exponential is defined as $T_1$. The IBM Q Experience does not currently support delays of arbitrary length, so for now, we just append a series of identity operations after the initial excitation pulse. Each identity operation has the same duration of a single-qubit gate and is followed by a -shorter- buffer time. These parameters are backend-dependent. 
End of explanation """ str(params['T1']['value']) +' ' + params['T1']['unit'] """ Explanation: The last calibration of $T_1$ was measured to be End of explanation """ # Select qubit on which to measure T2* qubit=1 # Creating registers qr = qk.QuantumRegister(5) cr = qk.ClassicalRegister(5) params = backend.properties()['qubits'][qubit] pulse_length=params['gateTime']['value'] # single-qubit gate time buffer_length=params['buffer']['value'] # spacing between pulses unit = params['gateTime']['unit'] steps=35 gates_per_step=20 max_gates=(steps-1)*gates_per_step+2 num_osc=5 tot_length=buffer_length+pulse_length time_per_step=gates_per_step*tot_length qc_dict={} for ii in range(steps): step_num='step_%s'%(str(ii)) qc_dict.update({step_num:qk.QuantumCircuit(qr, cr)}) qc_dict[step_num].h(qr[qubit]) qc_dict[step_num]=pad_QId(qc_dict[step_num],gates_per_step*ii,qr[qubit]) qc_dict[step_num].u1(2*np.pi*num_osc*ii/(steps-1),qr[qubit]) qc_dict[step_num].h(qr[qubit]) qc_dict[step_num].barrier(qr[qubit]) qc_dict[step_num].measure(qr[qubit], cr[qubit]) circuits=list(qc_dict.values()) %%qiskit_job_status # run the program status = backend.status() if status['operational'] == False or status['pending_jobs'] > 10: print('Warning: the selected backend appears to be busy or unavailable at present; consider choosing a different one if possible') t2star_job=qk.execute(circuits, backend, shots=shots) # arrange the data from the run result_t2star = t2star_job.result() keys_0_1=list(result_t2star.get_counts(qc_dict['step_0']).keys())# get the key of the excited state '00001' # change unit from ns to microseconds plot_factor=1 if unit.find('ns')>-1: plot_factor=1000 punit='$\\mu$s' xvals=time_per_step*np.linspace(0,len(qc_dict.keys()),len(qc_dict.keys()))/plot_factor # calculate the time steps data=np.zeros(len(qc_dict.keys())) # numpy array for data sigma_data = np.zeros(len(qc_dict.keys())) for ii,key in enumerate(qc_dict.keys()): # get the data in terms of counts for the excited state normalized to the total number of counts data[ii]=float(result_t2star.get_counts(qc_dict[key])[keys_0_1[1]])/shots sigma_data[ii] = np.sqrt(data[ii]*(1-data[ii]))/np.sqrt(shots) fitT2s, fcov = curve_fit(osc_fit_fun, xvals, data, p0=[0.5, 100, 1/10, np.pi, 0], bounds=([0.3,0,0,0,0], [0.5, 200, 1/2,2*np.pi,1])) ferr = np.sqrt(np.diag(fcov)) plot_coherence(xvals, data, sigma_data, fitT2s, osc_fit_fun, punit, '$T_2^*$ ', qubit) print("a: " + str(round(fitT2s[0],2)) + u" \u00B1 " + str(round(ferr[0],2))) print("T2*: " + str(round(fitT2s[1],2))+ " µs"+ u" \u00B1 " + str(round(ferr[1],2)) + ' µs') print("f: " + str(round(10**3*fitT2s[2],3)) + 'kHz' + u" \u00B1 " + str(round(10**6*ferr[2],3)) + 'kHz') print("phi: " + str(round(fitT2s[3],2)) + u" \u00B1 " + str(round(ferr[3],2))) print("c: " + str(round(fitT2s[4],2)) + u" \u00B1 " + str(round(ferr[4],2))) """ Explanation: Measurement of $T_2^*$ We can also measure the coherence time of our qubits. In order to do this, we place the qubit in a superposition state and let it evolve before measuring in the $X$-basis. We will see that as time increases, the qubit evolves from a pure superposition state $|\Psi_s\rangle = |0 + 1\rangle$ to a mixture state $|\Psi_m\rangle = |0\rangle + |1\rangle$ with no phase information. In the actual experiment, we change the phase of the pulse before the measurement in order to create oscillations in the observed dynamics. 
If we just did two Hadamard gates separated by a delay, we would observe a decay of characteristic time $T^*_2$, but with a strong dependence on any deviation of the calibrated qubit frequency from the actual one. By implementing the qubit pulses with different phases, we shift the frequency dependence into the oscillating feature of the dynamics, and can fit the decaying envelope for a more faithful measure of the coherence time. End of explanation """ # Select qubit to measure T2 echo on qubit=1 # Creating registers qr = qk.QuantumRegister(5) cr = qk.ClassicalRegister(5) params = backend.properties()['qubits'][qubit] pulse_length=params['gateTime']['value'] # single-qubit gate time buffer_length=params['buffer']['value'] # spacing between pulses unit = params['gateTime']['unit'] steps=18 gates_per_step=28 tot_length=buffer_length+pulse_length max_gates=(steps-1)*2*gates_per_step+3 time_per_step=(2*gates_per_step)*tot_length qc_dict={} for ii in range(steps): step_num='step_%s'%(str(ii)) qc_dict.update({step_num:qk.QuantumCircuit(qr, cr)}) qc_dict[step_num].h(qr[qubit]) qc_dict[step_num]=pad_QId(qc_dict[step_num],gates_per_step*ii,qr[qubit]) qc_dict[step_num].x(qr[qubit]) qc_dict[step_num]=pad_QId(qc_dict[step_num],gates_per_step*ii,qr[qubit]) qc_dict[step_num].h(qr[qubit]) qc_dict[step_num].barrier(qr[qubit]) qc_dict[step_num].measure(qr[qubit], cr[qubit]) circuits=list(qc_dict.values()) %%qiskit_job_status # run the program status = backend.status() if status['operational'] == False or status['pending_jobs'] > 10: print('Warning: the selected backend appears to be busy or unavailable at present; consider choosing a different one if possible') t2echo_job=qk.execute(circuits, backend, shots=shots) # arrange the data from the run result_t2echo = t2echo_job.result() keys_0_1=list(result_t2echo.get_counts(qc_dict['step_0']).keys())# get the key of the excited state '00001' # change unit from ns to microseconds plot_factor=1 if unit.find('ns')>-1: plot_factor=1000 punit='$\\mu$s' xvals=time_per_step*np.linspace(0,len(qc_dict.keys()),len(qc_dict.keys()))/plot_factor # calculate the time steps data=np.zeros(len(qc_dict.keys())) # numpy array for data sigma_data = np.zeros(len(qc_dict.keys())) for ii,key in enumerate(qc_dict.keys()): # get the data in terms of counts for the excited state normalized to the total number of counts data[ii]=float(result_t2echo.get_counts(qc_dict[key])[keys_0_1[1]])/shots sigma_data[ii] = np.sqrt(data[ii]*(1-data[ii]))/np.sqrt(shots) fitT2e, fcov = curve_fit(exp_fit_fun, xvals, data, bounds=([-1,10,0], [1, 150, 1])) ferr = np.sqrt(np.diag(fcov)) plot_coherence(xvals, data, sigma_data, fitT2e, exp_fit_fun, punit, '$T_{2echo}$ ', qubit) print("a: " + str(round(fitT2e[0],2)) + u" \u00B1 " + str(round(ferr[0],2))) print("T2: " + str(round(fitT2e[1],2))+ ' µs' + u" \u00B1 " + str(round(ferr[1],2)) + ' µs') print("c: " + str(round(fitT2e[2],2)) + u" \u00B1 " + str(round(ferr[2],2))) """ Explanation: Measurement of $T_2$ Echo We have referred to the previous experiment's characteristic time as $T^*_2$ and not $T_2$ by analogy to nuclear magnetic resonance (NMR). Indeed, one can isolate different frequency components to the decoherence process by devising increasingly elaborated pulse sequences. To illustrate the analogy with NMR, one can think about an ensemble of nuclear spins precessing in an external DC magnetic field. Due to field inhomogeneities, each spin might precess with a slightly different Larmor frequency. 
This certainly will affect the observed coherence time of the ensemble. However, it is possible to echo away this low-frequency decoherence process by applying a pi-pulse to the system halfway through the delay. The effect of this pi-pulse is to reverse the direction of the precession of each individual spin due to field inhomogeneities. Thus, the spins that had precessed more now start precessing in the opposite direction faster than the spins that had precessed less, and after an equal delay, all the spins in the system recover the initial coherence, except for other, higher-frequency, decoherence mechanisms. Here, we are measuring only a single qubit rather than an ensemble of spins, but coherence measurements require averaging an ensemble of measurements in order to eliminate projection noise, and run-to-run fluctuations in the qubit's frequency will similarly manifest themselves as decoherence if they are not canceled out. By running this $T_2$ echo sequence, we can therefore remove low-frequency components of the decoherence. End of explanation """ str(params['T2']['value']) +' ' + params['T2']['unit'] """ Explanation: The last calibration of $T_2$ was measured to be End of explanation """ # Select qubit for CPMG measurement of T2 qubit=1 # Creating registers qr = qk.QuantumRegister(5) cr = qk.ClassicalRegister(5) params = backend.properties()['qubits'][qubit] pulse_length=params['gateTime']['value'] # single-qubit gate time buffer_length=params['buffer']['value'] # spacing between pulses unit = params['gateTime']['unit'] steps=10 gates_per_step=18 num_echo=5 # has to be odd number to end up in excited state at the end tot_length=buffer_length+pulse_length time_per_step=((num_echo+1)*gates_per_step+num_echo)*tot_length max_gates=num_echo*(steps-1)*gates_per_step+num_echo+2 qc_dict={} for ii in range(steps): step_num='step_%s'%(str(ii)) qc_dict.update({step_num:qk.QuantumCircuit(qr, cr)}) qc_dict[step_num].h(qr[qubit]) for iii in range(num_echo): qc_dict[step_num]=pad_QId(qc_dict[step_num], gates_per_step*ii, qr[qubit]) qc_dict[step_num].x(qr[qubit]) qc_dict[step_num]=pad_QId(qc_dict[step_num], gates_per_step*ii, qr[qubit]) qc_dict[step_num].h(qr[qubit]) qc_dict[step_num].barrier(qr[qubit]) qc_dict[step_num].measure(qr[qubit], cr[qubit]) circuits=list(qc_dict.values()) %%qiskit_job_status # run the program status = backend.status() if status['operational'] == False or status['pending_jobs'] > 10: print('Warning: the selected backend appears to be busy or unavailable at present; consider choosing a different one if possible') t2cpmg_job=qk.execute(circuits, backend, shots=shots) # arrange the data from the run result_t2cpmg = t2cpmg_job.result() keys_0_1=list(result_t2cpmg.get_counts(qc_dict['step_0']).keys())# get the key of the excited state '00001' # change unit from ns to microseconds plot_factor=1 if unit.find('ns')>-1: plot_factor=1000 punit='$\\mu$s' xvals=time_per_step*np.linspace(0,len(qc_dict.keys()),len(qc_dict.keys()))/plot_factor # calculate the time steps data=np.zeros(len(qc_dict.keys())) # numpy array for data sigma_data = np.zeros(len(qc_dict.keys())) for ii,key in enumerate(qc_dict.keys()): # get the data in terms of counts for the excited state normalized to the total number of counts data[ii]=float(result_t2cpmg.get_counts(qc_dict[key])[keys_0_1[1]])/shots sigma_data[ii] = np.sqrt(data[ii]*(1-data[ii]))/np.sqrt(shots) fitT2cpmg, fcov = curve_fit(exp_fit_fun, xvals, data, bounds=([-1,10,0], [1, 150, 1])) ferr = np.sqrt(np.diag(fcov)) plot_coherence(xvals, data, 
sigma_data, fitT2cpmg, exp_fit_fun, punit, '$T_{2cpmg}$ ', qubit) print("a: " + str(round(fitT2cpmg[0],2)) + u" \u00B1 " + str(round(ferr[0],2))) print("T2: " + str(round(fitT2cpmg[1],2))+ ' µs' + u" \u00B1 " + str(round(ferr[1],2)) + ' µs') print("c: " + str(round(fitT2cpmg[2],2)) + u" \u00B1 " + str(round(ferr[2],2))) """ Explanation: CPMG measurement As explained above, the echo sequence removes low-frequency decoherence mechanisms. This noise-filtering procedure can be extended with increased number of pi-pulses within the delay. In the following experiment, we implement an echo experiment with seven pi-pulses during the delay between the initial and final pulses. This kind of echo with several pi-pulses is referred to as a CPMG experiment, after Carr, Purcell, Meiboom, and Gill. End of explanation """
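As a small optional summary that is not part of the original notebook, the sketch below collects the timescales fitted in the cells above (fitT1, fitT2s, fitT2e, fitT2cpmg) and checks them against the physical bound T2 <= 2*T1. It uses plain Python only and assumes all four fits were run in this session.

# Optional summary: gather the fitted timescales and sanity-check the T2 <= 2*T1 bound.
summary = {'T1': fitT1[1], 'T2*': fitT2s[1], 'T2 echo': fitT2e[1], 'T2 CPMG': fitT2cpmg[1]}
for label, value in summary.items():
    print('{:8s}: {:6.2f} microseconds'.format(label, value))
if fitT2e[1] > 2 * fitT1[1]:
    print('Warning: the fitted T2 echo exceeds the 2*T1 limit; check the fits above.')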
GoogleCloudPlatform/mlops-with-vertex-ai
07-prediction-serving.ipynb
apache-2.0
import os from datetime import datetime import tensorflow as tf from google.cloud import aiplatform as vertex_ai """ Explanation: 07 - Prediction Serving The purpose of the notebook is to show how to use the deployed model for online and batch prediction. The notebook covers the following tasks: 1. Test the endpoints for online prediction. 2. Use the uploaded custom model for batch prediction. 3. Run a the batch prediction pipeline using Vertex Pipelines. Setup Import libraries End of explanation """ PROJECT = '[your-project-id]' # Change to your project id. REGION = 'us-central1' # Change to your region. BUCKET = '[your-bucket-name]' # Change to your bucket name. if PROJECT == "" or PROJECT is None or PROJECT == "[your-project-id]": # Get your GCP project id from gcloud shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT = shell_output[0] if BUCKET == "" or BUCKET is None or BUCKET == "[your-bucket-name]": # Get your bucket name to GCP project id BUCKET = PROJECT # Try to create the bucket if it doesn't exists ! gsutil mb -l $REGION gs://$BUCKET print("") print("Project ID:", PROJECT) print("Region:", REGION) print("Bucket name:", BUCKET) """ Explanation: Setup Google Cloud project End of explanation """ VERSION = 'v01' DATASET_DISPLAY_NAME = 'chicago-taxi-tips' MODEL_DISPLAY_NAME = f'{DATASET_DISPLAY_NAME}-classifier-{VERSION}' ENDPOINT_DISPLAY_NAME = f'{DATASET_DISPLAY_NAME}-classifier' SERVE_BQ_DATASET_NAME = 'playground_us' # Change to your serving BigQuery dataset name. SERVE_BQ_TABLE_NAME = 'chicago_taxitrips_prep' # Change to your serving BigQuery table name. """ Explanation: Set configurations End of explanation """ vertex_ai.init( project=PROJECT, location=REGION, staging_bucket=BUCKET ) endpoint_name = vertex_ai.Endpoint.list( filter=f'display_name={ENDPOINT_DISPLAY_NAME}', order_by="update_time")[-1].gca_resource.name endpoint = vertex_ai.Endpoint(endpoint_name) test_instances = [ { "dropoff_grid": ["POINT(-87.6 41.9)"], "euclidean": [2064.2696], "loc_cross": [""], "payment_type": ["Credit Card"], "pickup_grid": ["POINT(-87.6 41.9)"], "trip_miles": [1.37], "trip_day": [12], "trip_hour": [16], "trip_month": [2], "trip_day_of_week": [4], "trip_seconds": [555] } ] predictions = endpoint.predict(test_instances).predictions for prediction in predictions: print(prediction) explanations = endpoint.explain(test_instances).explanations for explanation in explanations: print(explanation) """ Explanation: 1. Making Online Predicitons End of explanation """ WORKSPACE = f"gs://{BUCKET}/{DATASET_DISPLAY_NAME}/" SERVING_DATA_DIR = os.path.join(WORKSPACE, 'serving_data') SERVING_INPUT_DATA_DIR = os.path.join(SERVING_DATA_DIR, 'input_data') SERVING_OUTPUT_DATA_DIR = os.path.join(SERVING_DATA_DIR, 'output_predictions') if tf.io.gfile.exists(SERVING_DATA_DIR): print("Removing previous serving data...") tf.io.gfile.rmtree(SERVING_DATA_DIR) print("Creating serving data directory...") tf.io.gfile.mkdir(SERVING_DATA_DIR) print("Serving data directory is ready.") """ Explanation: 2. 
Batch Prediction End of explanation """ from src.common import datasource_utils from src.preprocessing import etl LIMIT = 10000 sql_query = datasource_utils.get_serving_source_query( bq_dataset_name=SERVE_BQ_DATASET_NAME, bq_table_name=SERVE_BQ_TABLE_NAME, limit=LIMIT ) print(sql_query) job_name = f"extract-{DATASET_DISPLAY_NAME}-serving-{datetime.now().strftime('%Y%m%d%H%M%S')}" args = { 'job_name': job_name, #'runner': 'DataflowRunner', 'sql_query': sql_query, 'exported_data_prefix': os.path.join(SERVING_INPUT_DATA_DIR, "data-"), 'temporary_dir': os.path.join(WORKSPACE, 'tmp'), 'gcs_location': os.path.join(WORKSPACE, 'bq_tmp'), 'project': PROJECT, 'region': REGION, 'setup_file': './setup.py' } tf.get_logger().setLevel('ERROR') print("Data extraction started...") etl.run_extract_pipeline(args) print("Data extraction completed.") !gsutil ls {SERVING_INPUT_DATA_DIR} """ Explanation: Extract serving data to Cloud Storage as JSONL End of explanation """ model_name = vertex_ai.Model.list( filter=f'display_name={MODEL_DISPLAY_NAME}', order_by="update_time")[-1].gca_resource.name job_resources = { "machine_type": 'n1-standard-2', #'accelerator_count': 1, #'accelerator_type': 'NVIDIA_TESLA_T4' "starting_replica_count": 1, "max_replica_count": 10, } job_display_name = f"{MODEL_DISPLAY_NAME}-prediction-job-{datetime.now().strftime('%Y%m%d%H%M%S')}" vertex_ai.BatchPredictionJob.create( job_display_name=job_display_name, model_name=model_name, gcs_source=SERVING_INPUT_DATA_DIR + '/*.jsonl', gcs_destination_prefix=SERVING_OUTPUT_DATA_DIR, instances_format='jsonl', predictions_format='jsonl', sync=True, **job_resources, ) """ Explanation: Submit the batch prediction job End of explanation """ WORKSPACE = f"gs://{BUCKET}/{DATASET_DISPLAY_NAME}/" ARTIFACT_STORE = os.path.join(WORKSPACE, 'tfx_artifacts') PIPELINE_NAME = f'{MODEL_DISPLAY_NAME}-predict-pipeline' """ Explanation: 3. Run the batch prediction pipeline using Vertex Pipelines End of explanation """ os.environ["PROJECT"] = PROJECT os.environ["REGION"] = REGION os.environ["GCS_LOCATION"] = f"gs://{BUCKET}/{DATASET_DISPLAY_NAME}" os.environ["MODEL_DISPLAY_NAME"] = MODEL_DISPLAY_NAME os.environ["PIPELINE_NAME"] = PIPELINE_NAME os.environ["ARTIFACT_STORE_URI"] = ARTIFACT_STORE os.environ["BATCH_PREDICTION_BQ_DATASET_NAME"] = SERVE_BQ_DATASET_NAME os.environ["BATCH_PREDICTION_BQ_TABLE_NAME"] = SERVE_BQ_TABLE_NAME os.environ["SERVE_LIMIT"] = "1000" os.environ["BEAM_RUNNER"] = "DirectRunner" os.environ["TFX_IMAGE_URI"] = f"gcr.io/{PROJECT}/{DATASET_DISPLAY_NAME}:{VERSION}" import importlib from src.tfx_pipelines import config importlib.reload(config) for key, value in config.__dict__.items(): if key.isupper(): print(f'{key}: {value}') """ Explanation: Set the pipeline configurations for the Vertex AI run End of explanation """ !echo $TFX_IMAGE_URI !gcloud builds submit --tag $TFX_IMAGE_URI . --timeout=15m --machine-type=e2-highcpu-8 """ Explanation: (Optional) Build the ML container image This is the TFX runtime environment for the training pipeline steps. 
End of explanation """ from src.tfx_pipelines import runner pipeline_definition_file = f'{config.PIPELINE_NAME}.json' pipeline_definition = runner.compile_prediction_pipeline(pipeline_definition_file) """ Explanation: Compile pipeline End of explanation """ from kfp.v2.google.client import AIPlatformClient pipeline_client = AIPlatformClient( project_id=PROJECT, region=REGION) pipeline_client.create_run_from_job_spec( job_spec_path=pipeline_definition_file ) """ Explanation: Submit run to Vertex Pipelines End of explanation """
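As an optional follow-up that is not part of the original notebook, the sketch below peeks at the batch prediction results written to Cloud Storage earlier. It assumes the batch prediction job above has finished, reuses SERVING_OUTPUT_DATA_DIR and the tensorflow import from the setup cells, and the prediction.results file pattern is an assumption about how Vertex AI names its JSONL output shards.

# Optional sketch: list the JSONL result shards and print a few predictions.
# The '*/prediction.results*' pattern is an assumed naming convention.
import json
result_files = tf.io.gfile.glob(SERVING_OUTPUT_DATA_DIR + '/*/prediction.results*')
print('Found {} result shards'.format(len(result_files)))
if result_files:
    with tf.io.gfile.GFile(result_files[0], 'r') as reader:
        for _, line in zip(range(3), reader):
            print(json.loads(line))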
IS-ENES-Data/submission_forms
test/Templates/DKRZ_CDP_submission_form.ipynb
apache-2.0
from dkrz_forms import form_widgets form_widgets.show_status('form-submission') """ Explanation: Generic DKRZ CMIP Data Pool (CDP) ingest form This form is intended to request data to be made locally available in the DKRZ national data archive. If the requested data is available via ESGF, please use the specific ESGF replication form. Please provide information on the following aspects of your data ingest request: * scientific context of data * data access policies to be supported * technical details, like * amount of data * source of data End of explanation """ MY_LAST_NAME = "...." # e.g. MY_LAST_NAME = "schulz" #------------------------------------------------- from dkrz_forms import form_handler, form_widgets form_info = form_widgets.check_pwd(MY_LAST_NAME) sf = form_handler.init_form(form_info) form = sf.sub.entity_out.form_info """ Explanation: Please provide your last name and password to unlock your form End of explanation """ # (informal) type of data form.data_type = "...." # e.g. model data, observational data, .. # free text describing scientific context of data form.scientific_context = "..." # free text describing the expected usage as part of the DKRZ CMIP Data pool form.usage = "...." # free text describing access rights (who is allowed to read the data) form.access_rights = "...." # generic terms of policy information form.terms_of_use = "...." # e.g. unrestricted, restricted # any additional comment on context form.access_group = "...." form.context_comment = "...." """ Explanation: some context information Please provide some generic context information about the data, which should be available as part of the DKRZ CMIP Data Pool (CDP) End of explanation """ # information on where the data is stored and can be accessed # e.g. file system path if on DKRZ storage, url etc. if on web accessible resources (cloud, thredds server, ..) form.data_path = "...." # timing constraints, when the data ingest should be completed # (e.g. because the data source is only accessible in a specific time frame) form.best_ingest_before = "...." # directory structure information, especially form.directory_structure = "..." # e.g. institute/experiment/file.nc form.directory_structure_convention = "..." # e.g. CMIP5, CMIP6, CORDEX, your_convention_name form.directory_structure_comment = "..." # free text, e.g. with link describing the directory structure convention you used # metadata information form.metadata_convention_name = "..." # e.g. CF1.6 etc. None if not applicable form.metadata_comment = "..." # information about metadata, e.g. links to metadata info etc. """ Explanation: technical information concerning your request End of explanation """ # to be completed .. """ Explanation: Check your submission form Please evaluate the following cell to check your submission form. In case of errors, please go up to the corresponding information cells and update your information accordingly... End of explanation """ form_handler.save_form(sf,"..my comment..") # edit my comment info """ Explanation: Save your form: your form will be stored (the form name consists of your last name plus your keyword) End of explanation """ form_handler.email_form_info(sf) form_handler.form_submission(sf) """ Explanation: Officially submit your form: the form will be submitted to the DKRZ team for processing, and you will receive a confirmation email with a reference to your online form for future modifications End of explanation """
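Before saving and submitting, it can help to scan the fields filled in above. The short sketch below is not part of the original form: it uses only plain Python on the form object created earlier, and the list of field names simply mirrors the attributes set in the cells above.

# Optional sanity check (not part of the original form): print the fields set above
# so that leftover placeholders ("..." or "....") stand out before submission.
fields = ['data_type', 'scientific_context', 'usage', 'access_rights', 'terms_of_use',
          'access_group', 'context_comment', 'data_path', 'best_ingest_before',
          'directory_structure', 'directory_structure_convention',
          'directory_structure_comment', 'metadata_convention_name', 'metadata_comment']
for name in fields:
    value = getattr(form, name, None)
    note = '  <-- looks like a placeholder' if value in (None, '...', '....') else ''
    print('{:32s}: {}{}'.format(name, value, note))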
abhishekraok/GraphMap
notebook/Getting_Started.ipynb
apache-2.0
%pylab inline import sys import os sys.path.insert(0,'..') import graphmap """ Explanation: GraphMap Getting Started This notebook shows how to get started using GraphMap. End of explanation """ from graphmap.graphmap_main import GraphMap from graphmap.memory_persistence import MemoryPersistence G = GraphMap(MemoryPersistence()) """ Explanation: First let us import the module and create a GraphMap that persists in memory. End of explanation """ from graphmap.graph_helpers import NodeLink seattle_skyline_image_url = 'https://upload.wikimedia.org/wikipedia/commons/thumb/2/2f/Space_Needle002.jpg/640px-Space_Needle002.jpg' mt_tacoma_image_url = 'https://upload.wikimedia.org/wikipedia/commons/thumb/a/a2/Mount_Rainier_from_the_Silver_Queen_Peak.jpg/1024px-Mount_Rainier_from_the_Silver_Queen_Peak.jpg' seattle_node_link = NodeLink('seattle') mt_tacoma_node_link = NodeLink('tacoma') G.create_node(root_node_link=seattle_node_link, image_value_link=seattle_skyline_image_url) G.create_node(root_node_link=mt_tacoma_node_link, image_value_link=mt_tacoma_image_url) """ Explanation: Let us create two nodes with images of the Seattle skyline and Mt. Tacoma from Wikimedia. End of explanation """ seattle_pil_image_result = G.get_image_at_quad_key(root_node_link=seattle_node_link, resolution=256, quad_key='') mt_tacoma_pil_image_result = G.get_image_at_quad_key(root_node_link=mt_tacoma_node_link, resolution=256, quad_key='') import matplotlib.pyplot as plt plt.imshow(seattle_pil_image_result.value) plt.figure() plt.imshow(mt_tacoma_pil_image_result.value) """ Explanation: Now that we have created the 'seattle' node, let's see how it looks. End of explanation """ insert_quad_key = '13' created_node_link_result = G.connect_child(root_node_link=seattle_node_link, quad_key=insert_quad_key, child_node_link=mt_tacoma_node_link,) print(created_node_link_result) """ Explanation: Let us insert the 'tacoma' node into the 'seattle' node at the top right. The quad key we will use is '13': 1 corresponds to the top right quadrant, and within that quadrant we insert at the bottom right, hence 3. End of explanation """ created_node_link = created_node_link_result.value new_seattle_image_result = G.get_image_at_quad_key(created_node_link, resolution=256, quad_key='') new_seattle_image_result plt.imshow(new_seattle_image_result.value) """ Explanation: Let us see how the new 'seattle' node looks after the insertion. End of explanation """
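As an optional check that is not part of the original notebook, we can zoom into the quadrant where the 'tacoma' node was attached by requesting the image at quad key '13' on the newly created node, reusing the same get_image_at_quad_key call shown above. If the connection worked, this should display the Mt. Tacoma image again.

# Optional verification sketch: render only the '13' quadrant of the new node.
# This reuses objects defined above (G, created_node_link) and the API call
# already demonstrated in this notebook.
zoomed_result = G.get_image_at_quad_key(created_node_link, resolution=256, quad_key='13')
plt.figure()
plt.imshow(zoomed_result.value)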
pfschus/fission_bicorrelation
methods/build_det_df_angles_pairs.ipynb
mit
%%javascript $.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js') """ Explanation: <h1 id="tocheading">Table of Contents</h1> <div id="toc"></div> End of explanation """ %%html <img src="fig/setup.png",width=80%,height=80%> """ Explanation: Chi-Nu Array Detector Angles Author: Patricia Schuster Date: Fall 2016/Winter 2017 Institution: University of Michigan NERS Email: pfschus@umich.edu What are we doing today? Goal: Import and analyze the angles between all of the detector pairs in the Chi-Nu array. As a reminder, this is what the Chi-Nu array looks like: End of explanation """ 45*44/2 """ Explanation: There are 45 detectors in this array, making for 990 detector pairs: End of explanation """ # Import packages import os.path import time import numpy as np np.set_printoptions(threshold=np.nan) # print entire matrices import sys import inspect import matplotlib.pyplot as plt import scipy.io as sio from tqdm import * import pandas as pd import seaborn as sns sns.set_palette('spectral') sns.set_style(style='white') sys.path.append('../scripts/') import bicorr as bicorr %load_ext autoreload %autoreload 2 """ Explanation: In order to characterize the angular distribution of the neutrons and gamma-rays emitted in a fission interaction, we are going to analyze the data from pairs of detectors at different angles from one another. In this notebook I am going to import the detector angle data that Matthew provided me and explore the data. 1) Import the angular data to a dictionary 2) Visualize the angular data 3) Find detector pairs in a given angular range 4) Generate pairs vs. angle ranges End of explanation """ help(bicorr.build_ch_lists) chList, fcList, detList, num_dets, num_det_pairs = bicorr.build_ch_lists(print_flag = True) """ Explanation: Step 1: Initialize pandas DataFrame with detector pairs The detector pair angles are stored in a file lanl_detector_angles.mat. Write a function to load it as an array and then generate a pandas DataFrame This was done before in bicorr.build_dict_det_pair(). Replace with a pandas dataFrame. Columns will be: Detector 1 Detector 2 Index in bicorr_hist_master Angle between detectors We can add more columns later very easily. Load channel lists Use the function bicorr.built_ch_lists() to generate numpy arrays with all of the channel numbers: End of explanation """ det_df = pd.DataFrame(columns=('d1', 'd2', 'd1d2', 'angle')) """ Explanation: Initialize dataFrame with detector channel numbers End of explanation """ # Fill pandas dataFrame with d1, d2, and d1d2 count = 0 det_pair_chs = np.zeros(num_det_pairs,dtype=np.int) # Loop through all detector pairs for i1 in np.arange(0,num_dets): det1ch = detList[i1] for i2 in np.arange(i1+1,num_dets): det2ch = detList[i2] det_df.loc[count,'d1' ] = det1ch det_df.loc[count,'d2' ] = det2ch det_df.loc[count,'d1d2'] = 100*det1ch+det2ch count = count+1 det_df.head() plt.plot(det_df['d1d2'],det_df.index,'.k') plt.xlabel('Detector pair (100*det1ch+det2ch)') plt.ylabel('Index in det_df') plt.title('Mapping between detector pair and index') plt.show() """ Explanation: The pandas dataFrame should have 990 entries, one for each detector pair. Generate this. 
End of explanation """ ax = det_df.plot('d1','d2',kind='scatter', marker = 's',edgecolor='none',s=13, c='d1d2') plt.xlim([0,50]) plt.ylim([0,50]) ax.set_aspect('equal') plt.xlabel('Detector 1 channel') plt.ylabel('Detector 2 channel') plt.show() """ Explanation: Visualize the dataFrame so far Try using the built-in pandas.DataFrame.plot method. End of explanation """ bicorr.plot_det_df(det_df, which=['index']) """ Explanation: There are some problems with displaying the labels, so instead I will use matplotlib directly. I am writing a function to generate this plot since I will likely want to view it a lot. End of explanation """ os.listdir('../meas_info/') """ Explanation: Step 2: Fill angles column The lanl_detector_angles.mat file is located in my measurements folder: End of explanation """ det2detAngle = sio.loadmat('../meas_info/lanl_detector_angles.mat')['det2detAngle'] det2detAngle.shape plt.pcolormesh(det2detAngle, cmap = "viridis") cbar = plt.colorbar() cbar.set_label('Angle (degrees)') plt.xlabel('Detector 1') plt.ylabel('Detector 2') plt.show() """ Explanation: What does this file look like? Import the .mat file and take a look. End of explanation """ for pair in det_df.index: det_df.loc[pair,'angle'] = det2detAngle[int(det_df.loc[pair,'d1'])][int(det_df.loc[pair,'d2'])] det_df.head() """ Explanation: The array currently is ndets x ndets with an angle at every index. This is twice as many entries as we need because pairs are repeated at (d1,d2) and (d2,d1). Loop through the pairs and store the angle to the dataframe. Fill the 'angle' column of the DataFrame: End of explanation """ bicorr.plot_det_df(det_df,which=['angle']) """ Explanation: Visualize the angular data End of explanation """ dict_pair_to_index, dict_index_to_pair = bicorr.build_dict_det_pair() dict_pair_to_angle = bicorr.build_dict_pair_to_angle(dict_pair_to_index,foldername='../../measurements/') det1ch_old, det2ch_old, angle_old = bicorr.unpack_dict_pair_to_angle(dict_pair_to_angle) """ Explanation: Verify accuracy of pandas method Make use of git to checkout old versions. Previously, I generated a dictionary that mapped the detector pair d1d2 index to the angle. Verify that the new method using pandas is producing the same array of angles. Old version using channel lists, dictionary End of explanation """ det_df = bicorr.load_det_df() det1ch_new = det_df['d1'].values det2ch_new = det_df['d2'].values angle_new = det_df['angle'].values """ Explanation: New method using pandas det_df End of explanation """ plt.plot([0,180],[0,180],'r') plt.plot(angle_old, angle_new, '.k') plt.xlabel('Angle old (degrees)') plt.ylabel('Angle new (degrees)') plt.title('Compare angles from new and old method') plt.show() """ Explanation: Compare the two End of explanation """ np.sum((angle_old - angle_new) < 0.001) """ Explanation: Are the angle vectors within 0.001 degrees of each other? If so, then consider the two equal. End of explanation """ det_df.head() """ Explanation: Yes, consider them the same. Step 3: Extract information from det_df I need to exact information from det_df using the pandas methods. What are a few things I want to do? End of explanation """ d = 8 ind_mask = (det_df['d2'] == d) # Get a glimpse of the mask's first five elements ind_mask.head() # View the mask entries that are equal to true ind_mask[ind_mask] """ Explanation: Return rows that meet a given condition There are two primary methods for accessing rows in the dataFrame that meet certain conditions. 
In our case, the conditions may be which detector pairs or which angle ranges we want to access. Return a True/False mask indicating which entries meet the conditions Return a pandas Index structure containing the indices of those entries As an example, I will look for rows in which d2=8. As a note, this will not be all entries in which channel 8 was involved because there are other pairs in which d1=8 that will not be included. Return the rows Start with the mask method, which can be used to store our conditions. End of explanation """ ind = det_df.index[ind_mask] print(ind) """ Explanation: The other method is to use the .index method to return a pandas index structure. Pull the indices from det_df using the mask. End of explanation """ np.sum(ind_mask) """ Explanation: Count the number of rows Using the mask End of explanation """ len(ind) """ Explanation: Using the index structure End of explanation """ # A single detector, may be d1 or d2 d = 8 ind_mask = (det_df['d1']==d) | (det_df['d2']==d) ind = det_df.index[ind_mask] """ Explanation: Extract information for a single detector Find indices for that detector End of explanation """ det_df[ind_mask].head() """ Explanation: These lines can be accessed in det_df directly. End of explanation """ det_df_this_det = det_df.loc[ind,['d1','d2']] det_df_this_det.head() """ Explanation: Return a list of the other detector pair Since the detector may be d1 or d2, I may need to return a list of the other pair, regardless of the order. How can I generate an array of the other detector in the pair? End of explanation """ det_df_this_det['dN'] = det_df_this_det.d1 * det_df_this_det.d2 / d det_df_this_det.head() plt.plot(det_df_this_det['dN'],'.k') plt.xlabel('Array in dataFrame') plt.ylabel('dN (other channel)') plt.title('Other channel for pairs including ch '+str(d)) plt.show() """ Explanation: This is a really stupid method, but I can multiply the two detectors together and then divide by 8 to divide out that channel. End of explanation """ plt.plot(det_df.loc[ind,'angle'],'.k') plt.xlabel('Index') plt.ylabel('Angle between pairs') plt.title('Angle for pairs including ch '+ str(d)) plt.show() plt.plot(det_df_this_det['dN'],det_df.loc[ind,'angle'],'.k') plt.axvline(d,color='r') plt.xlabel('dN (other channel)') plt.ylabel('Angle between pairs') plt.title('Angle for pairs including ch '+ str(d)) plt.show() """ Explanation: Return the angles End of explanation """ d1 = 1 d2 = 4 if d2 < d1: print('Warning: d2 < d1. Channels inverted') ind_mask = (det_df['d1']==d1) & (det_df['d2']==d2) ind = det_df.index[ind_mask] det_df[ind_mask] det_df[ind_mask]['angle'] """ Explanation: Extract information for a given pair Find indices for that pair End of explanation """ bicorr.d1d2_index(det_df,4,1) """ Explanation: I will write a function that returns the index. End of explanation """ bicorr_data = bicorr.load_bicorr(bicorr_path = '../2017_01_09_pfs_build_bicorr_hist_master/1/bicorr1') bicorr_data.shape det_df = bicorr.load_det_df() dict_pair_to_index, dict_index_to_pair = bicorr.build_dict_det_pair() d1 = 4 d2 = 8 print(dict_pair_to_index[100*d1+d2]) print(bicorr.d1d2_index(det_df,d1,d2)) """ Explanation: Compare to speed of dictionary For a large number of detector pairs, which is faster for retrieving the indices? 
End of explanation """ start_time = time.time() for i in tqdm(np.arange(bicorr_data.size),ascii=True): d1 = bicorr_data[i]['det1ch'] d2 = bicorr_data[i]['det2ch'] index = dict_pair_to_index[100*d1+d2] print(time.time()-start_time) """ Explanation: Loop through bicorr_data and generate the index for all pairs. Using the dictionary method End of explanation """ start_time = time.time() for i in tqdm(np.arange(bicorr_data.size),ascii=True): d1 = bicorr_data[i]['det1ch'] d2 = bicorr_data[i]['det2ch'] index = bicorr.d1d2_index(det_df,d1,d2) print(time.time()-start_time) """ Explanation: Using the pandas dataFrame method End of explanation """ det_df.index det_df.head() det_df[['d1d2','d2']].head() dict_index_to_pair = det_df['d1d2'].to_dict() dict_pair_to_index = {v: k for k, v in dict_index_to_pair.items()} dict_pair_to_angle = pd.Series(det_df['angle'].values,index=det_df['d1d2']).to_dict() """ Explanation: I'm not going to run this because tqdm says it will take approximately 24 minutes. So instead I should go with the dict method. But I would like to produce the dictionary from the pandas array directly. Produce dictionaries from det_df Instead of relying on dict_pair_to_index all the time, I will generate it on the fly when filling bicorr_hist_master in build_bicorr_hist_master since that function requires generating the index so many times. The three dictionaries that I need are: dict_pair_to_index dict_index_to_pair dict_pair_to_angle End of explanation """ help(bicorr.build_dict_det_pair) dict_pair_to_index, dict_index_to_pair, dict_pair_to_angle = bicorr.build_dict_det_pair(det_df) """ Explanation: Functionalize these dictionaries so I can produce them on the fly. End of explanation """ det_df.to_pickle('../meas_info/det_df_pairs_angles.pkl') det_df.to_csv('../meas_info/det_df_pairs_angles.csv',index = False) """ Explanation: Instructions: Save, load det_df file I'm going to store the dataFrame using to_pickle. At this point, it only contains information on the pairs and angles. No bin column has been added. End of explanation """ help(bicorr.load_det_df) det_df = bicorr.load_det_df() det_df.head() det_df = bicorr.load_det_df() bicorr.plot_det_df(det_df, show_flag = True, which = ['index']) bicorr.plot_det_df(det_df, show_flag = True, which = ['angle']) """ Explanation: Revive the dataFrame from the .pkl file. Write a function to do this automatically. Option to display plots. End of explanation """
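"""
Explanation: The notebook goals above include finding detector pairs in a given angular range. A minimal sketch of that selection, using the same boolean-mask pattern already demonstrated on det_df, is added below. The 15-25 degree window is an arbitrary example value, not one taken from the original analysis.
End of explanation
"""
det_df = bicorr.load_det_df()

# Boolean mask over the angle column, exactly like the d1/d2 masks above
angle_min, angle_max = 15.0, 25.0
ind_mask = (det_df['angle'] > angle_min) & (det_df['angle'] < angle_max)
ind = det_df.index[ind_mask]

print(len(ind), 'detector pairs between', angle_min, 'and', angle_max, 'degrees')
det_df.loc[ind, ['d1', 'd2', 'angle']].head()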
transcranial/keras-js
notebooks/layers/convolutional/UpSampling3D.ipynb
mit
data_in_shape = (2, 2, 2, 3) L = UpSampling3D(size=(2, 2, 2), data_format='channels_last') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(260) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['convolutional.UpSampling3D.0'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } """ Explanation: UpSampling3D [convolutional.UpSampling3D.0] size 2x2x2 upsampling on 2x2x2x3 input, data_format='channels_last' End of explanation """ data_in_shape = (2, 2, 2, 3) L = UpSampling3D(size=(2, 2, 2), data_format='channels_first') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(261) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['convolutional.UpSampling3D.1'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } """ Explanation: [convolutional.UpSampling3D.1] size 2x2x2 upsampling on 2x2x2x3 input, data_format='channels_first' End of explanation """ data_in_shape = (2, 1, 3, 2) L = UpSampling3D(size=(1, 3, 2), data_format='channels_last') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(252) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['convolutional.UpSampling3D.2'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } """ Explanation: [convolutional.UpSampling3D.2] size 1x3x2 upsampling on 2x1x3x2 input, data_format='channels_last' End of explanation """ data_in_shape = (2, 1, 3, 3) L = UpSampling3D(size=(2, 1, 2), data_format='channels_first') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(253) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) 
print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['convolutional.UpSampling3D.3'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } """ Explanation: [convolutional.UpSampling3D.3] size 2x1x2 upsampling on 2x1x3x3 input, data_format='channels_first' End of explanation """ data_in_shape = (2, 1, 3, 2) L = UpSampling3D(size=2, data_format='channels_last') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(254) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['convolutional.UpSampling3D.4'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } """ Explanation: [convolutional.UpSampling3D.4] size 2 upsampling on 2x1x3x2 input, data_format='channels_last' End of explanation """ import os filename = '../../../test/data/layers/convolutional/UpSampling3D.json' if not os.path.exists(os.path.dirname(filename)): os.makedirs(os.path.dirname(filename)) with open(filename, 'w') as f: json.dump(DATA, f) print(json.dumps(DATA)) """ Explanation: export for Keras.js tests End of explanation """
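"""
Explanation: As a quick sanity check to accompany the fixtures above (not part of the exported test data): for channels_last input, UpSampling3D repeats entries along the three spatial axes, so its output should match np.repeat applied axis by axis. The shapes and seed below mirror the first test case.
End of explanation
"""
data_in_shape = (2, 2, 2, 3)
sizes = (2, 2, 2)

layer_0 = Input(shape=data_in_shape)
layer_1 = UpSampling3D(size=sizes, data_format='channels_last')(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)

np.random.seed(260)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))[0]

# build the expected output by repeating along each spatial axis
expected = data_in
for axis, size in enumerate(sizes):
    expected = np.repeat(expected, size, axis=axis)

print('matches np.repeat:', np.allclose(result, expected))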
gkvoelkl/python-sonic
python-sonic.ipynb
mit
from psonic import * """ Explanation: python-sonic - Programming Music with Python, Sonic Pi or Supercollider Python-Sonic is a simple Python interface for Sonic Pi, which is a really great piece of music software created by Sam Aaron (http://sonic-pi.net). At the moment Python-Sonic works with Sonic Pi. It is planned that it will work with Supercollider, too. If you like it, use it. If you have some suggestions, tell me (gkvoelkl@nelson-games.de). Installation First you need Python 3 (https://www.python.org) - Python 3.5 should work, because it's the development environment Then Sonic Pi (https://sonic-pi.net) - That makes the sound Module python-osc (https://pypi.python.org/pypi/python-osc) - Connection between Python and Sonic Pi Server And this module python-sonic - simply copy the source or try installing it. That should work. Limitations You have to start Sonic Pi first before you can use it with python-sonic Only the notes from C2 to C6 Changelog |Version | | |--------------|------------------------------------------------------------------------------------------| | 0.2.0 | Some changes for Sonic Pi 2.11. Simpler multi-threading with decorator @in_thread. Messaging with cue and sync. | | 0.3.0 | OSC Communication | | 0.3.1. | Update, sort and duration of samples | | 0.3.2. | Restructured | | 0.4.0 | Changes communication ports and recording | Communication The python-sonic API communicates with Sonic Pi over UDP and two ports. One port is an internal Sonic Pi port and could be changed. For older Sonic Pi versions you have to set the ports explicitly Examples Many of the examples are inspired by the help menu in Sonic Pi. End of explanation """ play(70) #play MIDI note 70 """ Explanation: The first sound End of explanation """ play(72) sleep(1) play(75) sleep(1) play(79) """ Explanation: Some more notes End of explanation """ play(C5) sleep(0.5) play(D5) sleep(0.5) play(G5) """ Explanation: In more traditional music notation End of explanation """ play(Fs5) sleep(0.5) play(Eb5) """ Explanation: Play sharp notes like F# or flat ones like Eb End of explanation """ play(72,amp=2) sleep(0.5) play(74,pan=-1) #left """ Explanation: Play louder (parameter amp) or from a different direction (parameter pan) End of explanation """ use_synth(SAW) play(38) sleep(0.25) play(50) sleep(0.5) use_synth(PROPHET) play(57) sleep(0.25) """ Explanation: Different synthesizer sounds End of explanation """ play (60, attack=0.5, decay=1, sustain_level=0.4, sustain=2, release=0.5) sleep(4) """ Explanation: ADSR (Attack, Decay, Sustain and Release) Envelope End of explanation """ sample(AMBI_LUNAR_LAND, amp=0.5) sample(LOOP_AMEN,pan=-1) sleep(0.877) sample(LOOP_AMEN,pan=1) sample(LOOP_AMEN,rate=0.5) sample(LOOP_AMEN,rate=1.5) sample(LOOP_AMEN,rate=-1)#back sample(DRUM_CYMBAL_OPEN,attack=0.01,sustain=0.3,release=0.1) sample(LOOP_AMEN,start=0.5,finish=0.8,rate=-0.2,attack=0.3,release=1) """ Explanation: Play some samples End of explanation """ import random for i in range(5): play(random.randrange(50, 100)) sleep(0.5) for i in range(3): play(random.choice([C5,E5,G5])) sleep(1) """ Explanation: Play some random notes End of explanation """ from psonic import * number_of_pieces = 8 for i in range(16): s = random.randrange(0,number_of_pieces)/number_of_pieces #sample starts at 0.0 and finishes at 1.0 f = s + (1.0/number_of_pieces) sample(LOOP_AMEN,beat_stretch=2,start=s,finish=f) sleep(2.0/number_of_pieces) """ Explanation: Sample slicing End of explanation """ while True: if one_in(2): sample(DRUM_HEAVY_KICK) sleep(0.5) else:
sample(DRUM_CYMBAL_CLOSED) sleep(0.25) """ Explanation: An infinite loop and if End of explanation """ import random from psonic import * from threading import Thread def bass_sound(): c = chord(E3, MAJOR7) while True: use_synth(PROPHET) play(random.choice(c), release=0.6) sleep(0.5) def snare_sound(): while True: sample(ELEC_SNARE) sleep(1) bass_thread = Thread(target=bass_sound) snare_thread = Thread(target=snare_sound) bass_thread.start() snare_thread.start() while True: pass """ Explanation: If you want to hear more than one sound at a time, use Threads. End of explanation """ from psonic import * from threading import Thread, Condition from random import choice def random_riff(condition): use_synth(PROPHET) sc = scale(E3, MINOR) while True: s = random.choice([0.125,0.25,0.5]) with condition: condition.wait() #Wait for message for i in range(8): r = random.choice([0.125, 0.25, 1, 2]) n = random.choice(sc) co = random.randint(30,100) play(n, release = r, cutoff = co) sleep(s) def drums(condition): while True: with condition: condition.notifyAll() #Message to threads for i in range(16): r = random.randrange(1,10) sample(DRUM_BASS_HARD, rate=r) sleep(0.125) condition = Condition() random_riff_thread = Thread(name='consumer1', target=random_riff, args=(condition,)) drums_thread = Thread(name='producer', target=drums, args=(condition,)) random_riff_thread.start() drums_thread.start() input("Press Enter to continue...") """ Explanation: Every function bass_sound and snare_sound have its own thread. Your can hear them running. End of explanation """ from psonic import * from random import choice tick = Message() @in_thread def random_riff(): use_synth(PROPHET) sc = scale(E3, MINOR) while True: s = random.choice([0.125,0.25,0.5]) tick.sync() for i in range(8): r = random.choice([0.125, 0.25, 1, 2]) n = random.choice(sc) co = random.randint(30,100) play(n, release = r, cutoff = co) sleep(s) @in_thread def drums(): while True: tick.cue() for i in range(16): r = random.randrange(1,10) sample(DRUM_BASS_HARD, rate=r) sleep(0.125) random_riff() drums() input("Press Enter to continue...") from psonic import * tick = Message() @in_thread def metronom(): while True: tick.cue() sleep(1) @in_thread def instrument(): while True: tick.sync() sample(DRUM_HEAVY_KICK) metronom() instrument() while True: pass """ Explanation: To synchronize the thread, so that they play a note at the same time, you can use Condition. One function sends a message with condition.notifyAll the other waits until the message comes condition.wait. 
More simple with decorator @in_thread End of explanation """ from psonic import * play ([64, 67, 71], amp = 0.3) sleep(1) play ([E4, G4, B4]) sleep(1) """ Explanation: Play a list of notes End of explanation """ play(chord(E4, MINOR)) sleep(1) play(chord(E4, MAJOR)) sleep(1) play(chord(E4, MINOR7)) sleep(1) play(chord(E4, DOM7)) sleep(1) """ Explanation: Play chords End of explanation """ play_pattern( chord(E4, 'm7')) play_pattern_timed( chord(E4, 'm7'), 0.25) play_pattern_timed(chord(E4, 'dim'), [0.25, 0.5]) """ Explanation: Play arpeggios End of explanation """ play_pattern_timed(scale(C3, MAJOR), 0.125, release = 0.1) play_pattern_timed(scale(C3, MAJOR, num_octaves = 2), 0.125, release = 0.1) play_pattern_timed(scale(C3, MAJOR_PENTATONIC, num_octaves = 2), 0.125, release = 0.1) """ Explanation: Play scales End of explanation """ import random from psonic import * s = scale(C3, MAJOR) s s.reverse() play_pattern_timed(s, 0.125, release = 0.1) random.shuffle(s) play_pattern_timed(s, 0.125, release = 0.1) """ Explanation: The function scale returns a list with all notes of a scale. So you can use list methodes or functions. For example to play arpeggios descending or shuffeld. End of explanation """ from psonic import * from threading import Thread def my_loop(): play(60) sleep(1) def looper(): while True: my_loop() looper_thread = Thread(name='looper', target=looper) looper_thread.start() input("Press Enter to continue...") """ Explanation: Live Loop One of the best in SONIC PI is the Live Loop. While a loop is playing music you can change it and hear the change. Let's try it in Python, too. End of explanation """ def my_loop(): use_synth(TB303) play (60, release= 0.3) sleep (0.25) def my_loop(): use_synth(TB303) play (chord(E3, MINOR), release= 0.3) sleep(0.5) def my_loop(): use_synth(TB303) sample(DRUM_BASS_HARD, rate = random.uniform(0.5, 2)) play(random.choice(chord(E3, MINOR)), release= 0.2, cutoff=random.randrange(60, 130)) sleep(0.25) """ Explanation: Now change the function my_loop und you can hear it. End of explanation """ from psonic import * from threading import Thread, Condition from random import choice def loop_foo(): play (E4, release = 0.5) sleep (0.5) def loop_bar(): sample (DRUM_SNARE_SOFT) sleep (1) def live_loop_1(condition): while True: with condition: condition.notifyAll() #Message to threads loop_foo() def live_loop_2(condition): while True: with condition: condition.wait() #Wait for message loop_bar() condition = Condition() live_thread_1 = Thread(name='producer', target=live_loop_1, args=(condition,)) live_thread_2 = Thread(name='consumer1', target=live_loop_2, args=(condition,)) live_thread_1.start() live_thread_2.start() input("Press Enter to continue...") def loop_foo(): play (A4, release = 0.5) sleep (0.5) def loop_bar(): sample (DRUM_HEAVY_KICK) sleep (0.125) """ Explanation: To stop the sound you have to end the kernel. In IPython with Kernel --> Restart Now with two live loops which are synch. 
End of explanation """ from psonic import * from threading import Thread, Condition, Event def loop_foo(): play (E4, release = 0.5) sleep (0.5) def loop_bar(): sample (DRUM_SNARE_SOFT) sleep (1) def live_loop_1(condition,stop_event): while not stop_event.is_set(): with condition: condition.notifyAll() #Message to threads loop_foo() def live_loop_2(condition,stop_event): while not stop_event.is_set(): with condition: condition.wait() #Wait for message loop_bar() condition = Condition() stop_event = Event() live_thread_1 = Thread(name='producer', target=live_loop_1, args=(condition,stop_event)) live_thread_2 = Thread(name='consumer1', target=live_loop_2, args=(condition,stop_event)) live_thread_1.start() live_thread_2.start() input("Press Enter to continue...") stop_event.set() """ Explanation: If would be nice if we can stop the loop with a simple command. With stop event it works. End of explanation """ sc = Ring(scale(E3, MINOR_PENTATONIC)) def loop_foo(): play (next(sc), release= 0.1) sleep (0.125) sc2 = Ring(scale(E3,MINOR_PENTATONIC,num_octaves=2)) def loop_bar(): use_synth(DSAW) play (next(sc2), release= 0.25) sleep (0.25) """ Explanation: More complex live loops End of explanation """ import random from psonic import * from threading import Thread, Condition, Event def live_1(): pass def live_2(): pass def live_3(): pass def live_4(): pass def live_loop_1(condition,stop_event): while not stop_event.is_set(): with condition: condition.notifyAll() #Message to threads live_1() def live_loop_2(condition,stop_event): while not stop_event.is_set(): with condition: condition.wait() #Wait for message live_2() def live_loop_3(condition,stop_event): while not stop_event.is_set(): with condition: condition.wait() #Wait for message live_3() def live_loop_4(condition,stop_event): while not stop_event.is_set(): with condition: condition.wait() #Wait for message live_4() condition = Condition() stop_event = Event() live_thread_1 = Thread(name='producer', target=live_loop_1, args=(condition,stop_event)) live_thread_2 = Thread(name='consumer1', target=live_loop_2, args=(condition,stop_event)) live_thread_3 = Thread(name='consumer2', target=live_loop_3, args=(condition,stop_event)) live_thread_4 = Thread(name='consumer3', target=live_loop_3, args=(condition,stop_event)) live_thread_1.start() live_thread_2.start() live_thread_3.start() live_thread_4.start() input("Press Enter to continue...") """ Explanation: Now a simple structure with four live loops End of explanation """ def live_1(): sample(BD_HAUS,amp=2) sleep(0.5) pass def live_2(): #sample(AMBI_CHOIR, rate=0.4) #sleep(1) pass def live_3(): use_synth(TB303) play(E2, release=4,cutoff=120,cutoff_attack=1) sleep(4) def live_4(): notes = scale(E3, MINOR_PENTATONIC, num_octaves=2) for i in range(8): play(random.choice(notes),release=0.1,amp=1.5) sleep(0.125) """ Explanation: After starting the loops you can change them End of explanation """ stop_event.set() """ Explanation: And stop. 
End of explanation """ from psonic import * synth(SINE, note=D4) synth(SQUARE, note=D4) synth(TRI, note=D4, amp=0.4) detune = 0.7 synth(SQUARE, note = E4) synth(SQUARE, note = E4+detune) detune=0.1 # Amplitude shaping synth(SQUARE, note = E2, release = 2) synth(SQUARE, note = E2+detune, amp = 2, release = 2) synth(GNOISE, release = 2, amp = 1, cutoff = 60) synth(GNOISE, release = 0.5, amp = 1, cutoff = 100) synth(NOISE, release = 0.2, amp = 1, cutoff = 90) """ Explanation: Creating Sound End of explanation """ from psonic import * with Fx(SLICER): synth(PROPHET,note=E2,release=8,cutoff=80) synth(PROPHET,note=E2+4,release=8,cutoff=80) with Fx(SLICER, phase=0.125, probability=0.6,prob_pos=1): synth(TB303, note=E2, cutoff_attack=8, release=8) synth(TB303, note=E3, cutoff_attack=4, release=8) synth(TB303, note=E4, cutoff_attack=2, release=8) """ Explanation: Next Step Using FX Not implemented yet End of explanation """ from psonic import * """ Explanation: OSC Communication (Sonic Pi Ver. 3.x or better) In Sonic Pi version 3 or better you can work with messages. End of explanation """ run("""live_loop :foo do use_real_time a, b, c = sync "/osc*/trigger/prophet" synth :prophet, note: a, cutoff: b, sustain: c end """) """ Explanation: First you need a programm in the Sonic Pi server that receives messages. You can write it in th GUI or send one with Python. End of explanation """ send_message('/trigger/prophet', 70, 100, 8) stop() """ Explanation: Now send a message to Sonic Pi. End of explanation """ from psonic import * # start recording start_recording() play(chord(E4, MINOR)) sleep(1) play(chord(E4, MAJOR)) sleep(1) play(chord(E4, MINOR7)) sleep(1) play(chord(E4, DOM7)) sleep(1) # stop recording stop_recording # save file save_recording('/Volumes/jupyter/python-sonic/test.wav') """ Explanation: Recording With python-sonic you can record wave files. End of explanation """ from psonic import * #Inspired by Steve Reich Clapping Music clapping = [1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0] for i in range(13): for j in range(4): for k in range(12): if clapping[k] ==1 : sample(DRUM_SNARE_SOFT,pan=-0.5) if clapping[(i+k)%12] == 1: sample(DRUM_HEAVY_KICK,pan=0.5) sleep (0.25) """ Explanation: More Examples End of explanation """
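"""
Explanation: One more small pattern in the same spirit as the examples above, combining only calls that already appear in this notebook (use_synth, play, sample, sleep, scale and random.choice). The synth, scale and timing values are arbitrary choices for illustration.
End of explanation
"""
import random
from psonic import *

def play_riff(tonic=E3, steps=8, step_time=0.25):
    notes = scale(tonic, MINOR_PENTATONIC)
    use_synth(TB303)
    for i in range(steps):
        if i % 4 == 0:
            sample(DRUM_BASS_HARD)  # kick on every fourth step
        play(random.choice(notes), release=0.2)
        sleep(step_time)

for _ in range(4):
    play_riff()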
sarvex/PythonMachineLearning
Chapter 2/EstimatorCV Objects.ipynb
isc
from sklearn.datasets import load_iris from sklearn.cross_validation import train_test_split iris = load_iris() X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, random_state=0) from sklearn.feature_selection import RFE from sklearn.linear_model import LogisticRegression feature_elimination_lr = RFE(LogisticRegression(C=100), n_features_to_select=2) feature_elimination_lr.fit(X_train, y_train) feature_elimination_lr.score(X_test, y_test) from sklearn.grid_search import GridSearchCV param_grid = {'n_features_to_select': range(1, 5)} grid_search = GridSearchCV(feature_elimination_lr, param_grid, cv=5) grid_search.fit(X_train, y_train) grid_search.score(X_test, y_test) grid_search.best_params_ from sklearn.feature_selection import RFECV rfecv = RFECV(LogisticRegression(C=100)).fit(X_train, y_train) rfecv.score(X_test, y_test) rfecv.n_features_ """ Explanation: EstimatorCV Objects for Efficient Parameter Search Recursive Feature Elimination End of explanation """ import numpy as np from sklearn.datasets import make_regression X, y = make_regression(noise=60, random_state=0) X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0) from sklearn.linear_model import Lasso, LassoCV lasso = Lasso().fit(X_train, y_train) print("lasso score with default alpha: %f" % lasso.score(X_test, y_test)) lassocv = LassoCV().fit(X_train, y_train) print("lasso score with automatic alpha: %f" % lassocv.score(X_test, y_test)) grid_search = GridSearchCV(Lasso(), param_grid={'alpha': np.logspace(-5, 1, 20)}) grid_search.fit(X_train, y_train) print("lasso score with grid-searched alpha: %f" % grid_search.score(X_test, y_test)) print("best alpha found by LassoCV: %f" % lassocv.alpha_) print("best alpha found by GridSearchCV: %f" % grid_search.best_params_['alpha']) %timeit Lasso().fit(X_train, y_train) %timeit LassoCV().fit(X_train, y_train) %timeit grid_search.fit(X_train, y_train) """ Explanation: Efficient hyper-parameter selection for Lasso End of explanation """
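"""
Explanation: A further sketch in the same spirit: LogisticRegressionCV tunes its regularization strength C internally, analogous to LassoCV tuning alpha, provided your scikit-learn version ships it. The Cs grid and cv=3 below are arbitrary illustrative choices, and the iris split is rebuilt here because X_train/X_test currently hold the regression data.
End of explanation
"""
import numpy as np
from sklearn.datasets import load_iris
from sklearn.cross_validation import train_test_split
from sklearn.grid_search import GridSearchCV
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV

iris = load_iris()
Xc_train, Xc_test, yc_train, yc_test = train_test_split(iris.data, iris.target, random_state=0)

lrcv = LogisticRegressionCV(Cs=np.logspace(-3, 3, 13)).fit(Xc_train, yc_train)
print("LogisticRegressionCV test score: %f" % lrcv.score(Xc_test, yc_test))

grid_search = GridSearchCV(LogisticRegression(), param_grid={'C': np.logspace(-3, 3, 13)}, cv=3)
grid_search.fit(Xc_train, yc_train)
print("grid-searched LogisticRegression test score: %f" % grid_search.score(Xc_test, yc_test))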
pycomlink/pycomlink
notebooks/Blackout gap detection examples.ipynb
bsd-3-clause
import matplotlib.pyplot as plt import numpy as np import pycomlink as pycml import xarray as xr from tqdm import tqdm import urllib.request import io import pycomlink.processing.blackout_gap_detection as blackout_detection # Do show xarray.Dataset representation as text because gitlab/github # do not (yet) render the html output correctly (or sometimes not at all...) xr.set_options(display_style="text"); # get data from 500 CMLs and eleven days data_path = pycml.io.examples.get_example_data_path() cmls = xr.open_dataset(data_path + "/example_cml_data.nc") # get recieved signal levels values without default fill values (-99 and -99.9) rsl = cmls.rsl.where(cmls.rsl > -99).isel(channel_id=0) gap_start_list = [] gap_end_list = [] gap_mask_list = [] for cml_id in tqdm(rsl.cml_id): # select individual RSL time series rsl_i = rsl.sel(cml_id=cml_id) # check whether RSL before and after gaps are below rsl_threshold # here we take -65 dBm as threshold as all CMLs have a median RSL > -60 dBm gap_start, gap_end = blackout_detection.get_blackout_start_and_end( rsl=rsl_i.values, rsl_threshold=-65 ) # create a mask for all gaps fullflilling the criteria above mask = blackout_detection.created_blackout_gap_mask_from_start_end_markers( rsl_i.values, gap_start, gap_end ) mask_reverse = blackout_detection.created_blackout_gap_mask_from_start_end_markers( rsl_i.values[::-1], gap_end[::-1], gap_start[::-1] ) mask = mask | mask_reverse[::-1] gap_start_list.append(gap_start) gap_end_list.append(gap_end) gap_mask_list.append(mask) # parse gap starts, ends and the mask to one xarray dataset with the CML data rsl = rsl.to_dataset() rsl["gap_start"] = (["cml_id", "time"], gap_start_list) rsl["gap_end_list"] = (["cml_id", "time"], gap_end_list) rsl["mask"] = (["cml_id", "time"], gap_mask_list) # get the CMLs with the many blackout minutes (here more than 35) rsl_blackouts = rsl.isel(cml_id=(np.array(gap_mask_list).sum(axis=1) > 35)) rsl_blackouts """ Explanation: Blackout gap detection example notebook This notebook shows 1. the usage of a blackout gap detection on a dataset of 500 CMLs and a 10 day period 2. two example CMLs with which have a very high and a very low number of blackout gaps End of explanation """ # plot the three CMLs with most blackouts minutes (more than 35 minutes) for cml_id in rsl_blackouts.cml_id: rsl_blackouts.rsl.sel(cml_id=cml_id, time="2018-05-13").plot( figsize=(10, 4), label="rsl" ) (rsl_blackouts.mask.sel(cml_id=cml_id, time="2018-05-13") * -20).plot( label="detected blackout" ) plt.legend() """ Explanation: CMLs with more than 35 minutes of blackout within the 10-day example data End of explanation """ # load data e.g. with. 
curl -0` or `wget` !curl -O https://zenodo.org/record/6337557/files/blackout_example_cmls.nc !curl -O https://zenodo.org/record/6337557/files/blackout_example_radar_reference.nc # open dataset with xarray cmls = xr.open_dataset('blackout_example_cmls.nc') reference = xr.open_dataset('blackout_example_radar_reference.nc') """ Explanation: Investiagte blackouts for two CMLs over three years Data is available from this zenodo repository and has to be downloaded for the following analysis End of explanation """ cmls reference # remove default rsl values cmls["rsl"] = cmls.rsl.where(cmls.rsl > -99).isel(channel_id=0) cmls["tsl"] = cmls.tsl.where(cmls.rsl > -99).isel(channel_id=0) # define a plotting function def plt_ts(rsl, tsl, ref, mask, start, ax=None): if ax is None: fig, ax = plt.subplots() ay = ax.twinx() ay.bar(np.arange(len(ref)), ref * 60, color="#045a8d", alpha=0.6) ay.set_ylim(-10, 180) ay.set_yticks([0, 50, 100, 150]) ay.set_ylabel("rainfall intensity \n[mm/h]", color="#045a8d") ax.plot(tsl, color="#238b45", label="transmitted \nsignal level", lw=2) ax.plot(rsl, color="#cc4c02", label="recieved \nsignal level", lw=2) ax.plot((mask * -5) + 33.5, lw=4, color="black") ax.plot((mask + 100), color="black", lw=3, label="detected \nblackout gap") ax.set_ylim(-92, 30) ax.set_title("") ax.set_xticks([0, 30, 60, 90, 120, 150, 180, 210, 240]) ax.set_xlabel("time [minutes]") ax.legend( loc="center left", ncol=3, ) ax.set_title(str(start.values)[0:10] + " " + str(start.values)[11:16]) ax.set_yticks([20, 0, -20, -40, -60, -80]) ax.set_ylabel("signal level\n[dBm]") plt.show() for cml_id in cmls.cml_id: rsl = cmls.sel(cml_id=cml_id).rsl tsl = cmls.sel(cml_id=cml_id).tsl ref = reference.sel(cml_id=cml_id) # using the blackout gap detection as in the example above gap_start, gap_end = blackout_detection.get_blackout_start_and_end( rsl=rsl.values, rsl_threshold=-65 ) mask = blackout_detection.created_blackout_gap_mask_from_start_end_markers( rsl.values, gap_start, gap_end ) mask_reverse = blackout_detection.created_blackout_gap_mask_from_start_end_markers( rsl.values[::-1], gap_end[::-1], gap_start[::-1] ) mask = mask | mask_reverse[::-1] rsl = rsl.to_dataset() rsl["mask"] = ("time", mask) print( "For CML with the id " + str(cml_id.values) + " there are " + str(mask.sum()) + " detected blackout minutes." ) print("Plotting all detected gaps:") # remove gap_start times which are less than 60 minutes after another gap_start very_close_gaps = ( np.diff(rsl.sel(time=gap_start).time.values) / 60000000000 ).astype(int) < 60 gap_time = rsl.sel(time=gap_start).time[~np.append(very_close_gaps, np.array(True))] # plot each gap for gap_start_time in gap_time: start = gap_start_time - np.timedelta64(2, "h") end = gap_start_time + np.timedelta64(2, "h") plt_ts( rsl=rsl.rsl.sel(time=slice(start, end)), tsl=tsl.sel(time=slice(start, end)), ref=ref.rainfall_amount.sel(time=slice(start, end)), mask=rsl.mask.sel(time=slice(start, end)), start=start, ) print("#################################################\n") """ Explanation: The data consists of RSL and TSL from two CMLs with one minutes resolution over three years and respective path-averaged reference data from RADKLIM-YW, a gauge-adjusted, climatologically correct weather radar product from the German Weather Service. End of explanation """
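"""
Explanation: An additional summary sketch (not from the original notebook): total blackout minutes and the longest single gap per CML, recomputed with exactly the detection calls used above. itertools.groupby is only used to measure the longest run of consecutive True values in the mask.
End of explanation
"""
from itertools import groupby

def longest_run(mask):
    # length of the longest run of consecutive True values in a boolean array
    runs = [sum(1 for _ in grp) for val, grp in groupby(mask) if val]
    return max(runs) if runs else 0

for cml_id in cmls.cml_id:
    rsl_values = cmls.sel(cml_id=cml_id).rsl.values
    gap_start, gap_end = blackout_detection.get_blackout_start_and_end(
        rsl=rsl_values, rsl_threshold=-65
    )
    mask = blackout_detection.created_blackout_gap_mask_from_start_end_markers(
        rsl_values, gap_start, gap_end
    )
    mask_reverse = blackout_detection.created_blackout_gap_mask_from_start_end_markers(
        rsl_values[::-1], gap_end[::-1], gap_start[::-1]
    )
    mask = mask | mask_reverse[::-1]
    print(str(cml_id.values), "- total blackout minutes:", int(mask.sum()),
          "- longest gap (minutes):", longest_run(mask))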
vzg100/Post-Translational-Modification-Prediction
.ipynb_checkpoints/Phosphorylation Chemical Tests - MLP-checkpoint.ipynb
mit
from pred import Predictor from pred import sequence_vector from pred import chemical_vector """ Explanation: Template for test End of explanation """ par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"] for i in par: print("y", i) y = Predictor() y.load_data(file="Data/Training/clean_s.csv") y.process_data(vector_function="chemical", amino_acid="S", imbalance_function=i, random_data=0) y.supervised_training("mlp_adam") y.benchmark("Data/Benchmarks/phos.csv", "S") del y print("x", i) x = Predictor() x.load_data(file="Data/Training/clean_s.csv") x.process_data(vector_function="chemical", amino_acid="S", imbalance_function=i, random_data=1) x.supervised_training("mlp_adam") x.benchmark("Data/Benchmarks/phos.csv", "S") del x """ Explanation: Controlling for Random Negative vs Sans Random in Imbalanced Techniques using S, T, and Y Phosphorylation. Included is N Phosphorylation; however, no benchmarks are available yet. Training data is from phospho.elm and benchmarks are from dbptm. End of explanation """ par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"] for i in par: print("y", i) y = Predictor() y.load_data(file="Data/Training/clean_Y.csv") y.process_data(vector_function="chemical", amino_acid="Y", imbalance_function=i, random_data=0) y.supervised_training("mlp_adam") y.benchmark("Data/Benchmarks/phos.csv", "Y") del y print("x", i) x = Predictor() x.load_data(file="Data/Training/clean_Y.csv") x.process_data(vector_function="chemical", amino_acid="Y", imbalance_function=i, random_data=1) x.supervised_training("mlp_adam") x.benchmark("Data/Benchmarks/phos.csv", "Y") del x """ Explanation: Y Phosphorylation End of explanation """ par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"] for i in par: print("y", i) y = Predictor() y.load_data(file="Data/Training/clean_t.csv") y.process_data(vector_function="chemical", amino_acid="T", imbalance_function=i, random_data=0) y.supervised_training("mlp_adam") y.benchmark("Data/Benchmarks/phos.csv", "T") del y print("x", i) x = Predictor() x.load_data(file="Data/Training/clean_t.csv") x.process_data(vector_function="chemical", amino_acid="T", imbalance_function=i, random_data=1) x.supervised_training("mlp_adam") x.benchmark("Data/Benchmarks/phos.csv", "T") del x """ Explanation: T Phosphorylation End of explanation """
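"""
Explanation: The three benchmark blocks above only differ in the training file and the amino acid, so the same runs can be written once. run_benchmarks is a helper name chosen here for illustration; it wraps only the Predictor calls already used in this notebook.
End of explanation
"""
def run_benchmarks(training_file, amino_acid,
                   benchmark_file="Data/Benchmarks/phos.csv",
                   imbalance_functions=("pass", "ADASYN", "SMOTEENN",
                                        "random_under_sample", "ncl", "near_miss")):
    for imbalance_function in imbalance_functions:
        for random_data in (0, 1):
            print(amino_acid, imbalance_function, "random_data =", random_data)
            p = Predictor()
            p.load_data(file=training_file)
            p.process_data(vector_function="chemical", amino_acid=amino_acid,
                           imbalance_function=imbalance_function,
                           random_data=random_data)
            p.supervised_training("mlp_adam")
            p.benchmark(benchmark_file, amino_acid)
            del p

run_benchmarks("Data/Training/clean_s.csv", "S")
run_benchmarks("Data/Training/clean_Y.csv", "Y")
run_benchmarks("Data/Training/clean_t.csv", "T")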
privong/pythonclub
sessions/01-introduction/Software I.ipynb
gpl-3.0
import numpy as np x = np.array([1, 5, 3, 4, 2]) x """ Explanation: Software I : Anaconda, AstroPy, and libraries We will make extensive use of Python and various associated libraries and so the first thing we need to ensure is that we all have a common setup and are using the same software. The Python distribution that we have decided to use is <i>Anaconda</i> which can be downloaded from <a href="http://continuum.io/downloads">here</a> (although we hope that you have already done this prior to the school). Make sure that you installed the Python 2.7 version for your operating system (there is nothing wrong with Python 3.x but it is slightly different syntactically). Python 2 will be supported until 2020. We promote to write code that supports Python 2.7 and 3.x simultaneously, and you can use the future package explained <a href="http://python-future.org">here</a> For installing Anaconda in Linux follow this <a href="https://www.continuum.io/downloads#linux">link</a> For installing Anaconda in Mac follow this <a href="https://www.continuum.io/downloads#osx">link</a> For installing Anaconda in Windows follow this <a href="https://www.continuum.io/downloads#windows">link</a> We will follow the LSST Data Managment Style guide explained <a href="https://developer.lsst.io/coding/python_style_guide.html#pep-8-is-the-baseline-coding-style">here</a> Installing packages One of the advantages of the <i>Anaconda</i> distribution is that it comes with many of the most commonly-used Python packages, such as <a href="http://www.numpy.org">numpy</a>, <a href="http://www.scipy.org">scipy</a>, and <a href="http://scikit-learn.org">scikit-learn</a>, preinstalled. However, if you do need to install a new package then it is very straightforward: you can either use the Anaconda installation tool <i>conda</i> or the generic Python tool <i>pip</i> (both use a different registry of available packages and sometimes a particular package will not available via one tool but will be via the other). For example, <a href="https://github.com/bwlewis/irlbpy">irlbpy</a> is a superfast algorithm for finding the largest eigenvalues (and corresponding eigenvectors) of very large matrices. We can try to install it first with <i>conda</i>: <code>conda install irlbpy</code> but this will not find it: <code>Fetching package metadata: .... Error: No packages found in current osx-64 channels matching: irlbpy You can search for this package on Binstar with <br/> binstar search -t conda irlbpy </code> so instead we try with <i>pip</i>: <code>pip install irlbpy</code> In the event that both fail, you always just download the package source code and then install it manually with: <code>python install setup.py</code> in the appropriate source directory. We'll now take a brief look at a few of the main Python packages. Python interpreter The standard way to use the Python programming language is to use the Python interpreter to run python code. The python interpreter is a program that reads and execute the python code in files passed to it as arguments. At the command prompt, the command python is used to invoke the Python interpreter. For example, to run a file my-program.py that contains python code from the command prompt, use:: $ python my-program.py We can also start the interpreter by simply typing python at the command line, and interactively type python code into the interpreter. 
<img src="images/python-screenshot.jpg" width="600"> This is often how we want to work when developing scientific applications, or when doing small calculations. But the standard python interpreter is not very convenient for this kind of work, due to a number of limitations. IPython IPython is an interactive shell that addresses the limitation of the standard python interpreter, and it is a work-horse for scientific use of python. It provides an interactive prompt to the python interpreter with a greatly improved user-friendliness. <img src="images/ipython-screenshot.jpg" width="600"> Some of the many useful features of IPython includes: Command history, which can be browsed with the up and down arrows on the keyboard. Tab auto-completion. In-line editing of code. Object introspection, and automatic extract of documentation strings from python objects like classes and functions. Good interaction with operating system shell. Support for multiple parallel back-end processes, that can run on computing clusters or cloud services like Amazon EE2. Jupyter notebook <a href="http://ipython.org/notebook.html">Jupyter notebook</a> is an HTML-based notebook environment for Python, similar to Mathematica or Maple. It is based on the IPython shell, but provides a cell-based environment with great interactivity, where calculations can be organized and documented in a structured way. <img src="images/ipython-notebook-screenshot.jpg" widt="600"> Although using a web browser as graphical interface, IPython notebooks are usually run locally, from the same computer that run the browser. To start a new Jupyter notebook session, run the following command: $ jupyter notebook from a directory where you want the notebooks to be stored. This will open a new browser window (or a new tab in an existing window) with an index page where existing notebooks are shown and from which new notebooks can be created. Usually, the URL for the Jupyter notebook is http://localhost:8888 NumPy <a href="http://www.numpy.org">NumPy</a> is the main Python package for working with N-dimensional arrays. Any list of numbers can be recast as a NumPy array: End of explanation """ print x.min(), x.max(), x.sum(), x.argmin(), x.argmax() """ Explanation: Arrays have a number of useful methods associated with them: End of explanation """ np.sin(x * np.pi / 180.) """ Explanation: and NumPy functions can act on arrays in an elementwise fashion: End of explanation """ np.arange(1, 10, 0.5) np.linspace(1, 10, 5) np.logspace(1, 3, 5) """ Explanation: Ranges of values are easily produced: End of explanation """ help(np.random.random) np.random.random(10) """ Explanation: Random numbers are also easily generated in the half-open interval [0, 1): End of explanation """ np.random.normal(loc = 2.5, scale = 5, size = 10) """ Explanation: or from one of the large number of statistical distributions provided: End of explanation """ x = np.random.normal(size = 100) np.where(x > 3.) """ Explanation: Another useful method is the <i>where</i> function for identifying elements that satisfy a particular condition: End of explanation """ x = np.array([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]]) np.sin(x) """ Explanation: Of course, all of these work equally well with multidimensional arrays. 
End of explanation """ data = np.loadtxt("data/sample_data.csv", delimiter = ",", skiprows = 3) data """ Explanation: Data can also be automatically loaded from a file into a Numpy array via the <i>loadtxt</i> or <i>genfromtxt</i> methods: End of explanation """ import matplotlib.pyplot as plt %matplotlib inline x=np.array([1,2,3,4,5,6,7,8,9,10]) y=x**2 plt.plot(x,y) plt.xlabel('X-axis title') plt.ylabel('Y-axis title') # evenly sampled time at 200ms intervals t = np.arange(0., 5., 0.2) # red dashes, blue squares and green triangles plt.plot(t, t, 'r--', t, t**2, 'bs', t, t**3, 'g^') help(plt.hist) x=10+5*np.random.randn(10000) n, bins, patches=plt.hist(x,bins=30) """ Explanation: Matplotlib <a href="http://matplotlib.org/">Matplotlib</a> is Python's most popular and comprehensive plotting library that is especially useful in combination with NumPy/SciPy. End of explanation """ from astropy import units as u from astropy.coordinates import SkyCoord c = SkyCoord(ra = 10.625 * u.degree, dec = 41.2 * u.degree, frame = 'icrs') print c.to_string('hmsdms') print c.galactic from astropy.cosmology import WMAP9 as cosmo print cosmo.comoving_distance(1.25), cosmo.luminosity_distance(1.25) """ Explanation: AstroPy <a href="http://www.astropy.org">AstroPy</a> aims to provide a core set of subpackages to specifically support astronomy. These include methods to work with image and table data formats, e.g., FITS, VOTable, etc., along with astronomical coordinate and unit systems, and cosmological calculations. End of explanation """ from astropy.io import fits hdulist = fits.open('data/sample_image.fits') hdulist.info() data=hdulist[0].data data from astropy.io.votable import parse votable = parse('data/sample_votable.xml') table = votable.get_first_table() fields=table.fields data=table.array print fields print data """ Explanation: You can read FITS images and VOTables using astropy End of explanation """ from astroquery.ned import Ned coo = SkyCoord(ra = 56.38, dec = 38.43, unit = (u.deg, u.deg)) result = Ned.query_region(coo, radius = 0.07 * u.deg) set(result.columns['Type']) """ Explanation: A useful affiliated package is <a href="https://astroquery.readthedocs.org">Astroquery</a> which provides tools for querying astronomical web forms and databases. This is not part of the regular AstroPy distribution and needs to be installed separately. Whereas many data archives have standardized VO interfaces to support data access, Astroquery mimics a web browser and provides access via an archive's form interface. This can be useful as not all provided information is necesarily available via the VO. For example, the <a href="http://ned.ipac.caltech.edu">NASA Extragalactic Database</a> is a very useful human-curated resource for extragalactic objects. However, a lot of the information that is available via the web pages is not available through an easy programmatic API. Let's say that we want to get the list of object types associated with a particulae source: End of explanation """ f = lambda x: np.cos(-x ** 2 / 9.) x = np.linspace(0, 10, 11) y = f(x) from scipy.interpolate import interp1d f1 = interp1d(x, y) f2 = interp1d(x, y, kind = 'cubic') from scipy.integrate import quad print quad(f1, 0, 10) print quad(f2, 0, 10) print quad(f, 0, 10) """ Explanation: SciPy <a href="http://www.scipy.org">SciPy</a> provides a number of subpackages that deal with common operations in scientific computing, such as numerical integration, optimization, interpolation, Fourier transforms and linear algebra. 
End of explanation """
PyDataMadrid2016/Conference-Info
talks_materials/20160410_1215_Whoosh_a_fast_pure_Python_search_engine_library/whooshNotebook.ipynb
mit
from IPython.display import Image Image(filename='files/screenshot.png') from IPython.display import Image Image(filename='files/whoosh.jpg') """ Explanation: Whoosh: a fast pure-Python search engine library PyData Madrid 2016.04.10 Who am I? Claudia Guirao Fernández @claudiaguirao Background: Double degree in Law and Business Administration Data Scientist at PcComponentes.com Professional learning enthusiast End of explanation """ import csv catalog = csv.DictReader(open('files/catalogo_head.csv')) print list(catalog)[0].keys() catalog = csv.DictReader(open('files/catalogo_head.csv')) for product in catalog: print product["Codigo"] + ' - ' + product["Articulo"] + ' - ' + product["Categoria"] """ Explanation: Means the sound made by something that is moving quickly Whoosh, so fast and easy that even a lawyer could manage it What is Whoosh? Whoosh is a library of classes and functions for indexing text and then searching the index. It allows you to develop custom search engines for your content. Whoosh is fast, but uses only pure Python, so it will run anywhere Python runs, without requiring a compiler. It's a programmer library for creating a search engine. It allows indexing, choosing the level of information stored for each term in each field, parsing search queries, choosing scoring algorithms, etc. but... All indexed text in Whoosh must be unicode. Only runs in 2.7 NOT in python 3 Why Whoosh instead of Elastic Search? Why I personally chose Whoosh instead of other high-performance solutions: I was mainly focused on index / search definition 12k documents approx., MB instead of GB, "small data" Fast development No compilers, no Java If you are a beginner, you have no team, you need a fast solution, you need to work in isolation or you have a small project, this is your solution; otherwise Elastic Search might be your tech. Development stages Data treatment Schema Index Search Other stuff Data Treatment Data set is available in csv format at www.pccomponentes.com > mi panel de cliente > descargar tarifa It is in Latin script, it has special characters and missing values No tags, emphasis and laboured phrasing, lots of irrelevant information mixed with the relevant information. TONS OF FUN!
End of explanation """ Image(filename='files/tf.png') Image(filename='files/idf.png') Image(filename='files/tfidf.png') from nltk.corpus import stopwords import csv stop_words_spa = stopwords.words("spanish") stop_words_eng = stopwords.words("english") with open('files/adjetivos.csv', 'rb') as f: reader = csv.reader(f) adjetivos=[] for row in reader: for word in row: adjetivos.append(word) import math #tf-idf functions: def tf(word, blob): return float(blob.words.count(word))/float(len(blob.words)) def idf(word, bloblist): return (float(math.log(len(bloblist)))/float(1 + n_containing(word, bloblist))) def n_containing(word, bloblist): return float(sum(1 for blob in bloblist if word in blob)) def tfidf(word, blob, bloblist): return float(tf(word, blob)) * float(idf(word, bloblist)) import csv from textblob import TextBlob as tb catalog = csv.DictReader(open('files/catalogo_head.csv')) bloblist = [] for product in catalog: text =unicode(product["Articulo"], encoding="utf-8", errors="ignore").lower() text = ' '.join([word for word in text.split() if word not in stop_words_spa]) # remove Spanish stopwords text = ' '.join([word for word in text.split() if word not in stop_words_eng]) #remove English stopwords text = ' '.join([word for word in text.split() if word not in adjetivos]) # remove meaningless adjectives value = tb(text) # bag of words bloblist.append(value) tags = [] for blob in bloblist: scores = {word: tfidf(word, blob, bloblist) for word in blob.words} sorted_words = sorted(scores.items(), key=lambda x: x[1], reverse=True) terms = '' for word, score in sorted_words[:3]: terms = terms+word+' ' tags.append(terms) for t in tags: print unicode(t) """ Explanation: TAGS Document: each product Corpus: catalog TF-IDF term frequency–inverse document frequency, reflects how important a word is to a document in a collection or corpus End of explanation """ from whoosh.lang.porter import stem print "stemming: "+stem("analyse") from whoosh.lang.morph_en import variations print "variations: " print list(variations("analyse"))[0:5] import csv catalog = csv.DictReader(open('files/catalogo_contags.csv')) print list(catalog)[0].keys() from whoosh.index import create_in from whoosh.analysis import StemmingAnalyzer from whoosh.fields import * catalog = csv.DictReader(open('files/catalogo_contags.csv')) data_set = [] for row in catalog: row["Categoria"] = unicode(row["Categoria"], encoding="utf-8", errors="ignore").lower() row["Articulo"] =unicode(row["Articulo"], encoding="utf-8", errors="ignore").lower() row["Articulo"] = ' '.join([word for word in row["Articulo"].split() if word not in stop_words_spa]) row["Articulo"] = ' '.join([word for word in row["Articulo"].split() if word not in stop_words_eng]) row["Articulo"] = ' '.join([word for word in row["Articulo"].split() if word not in adjetivos]) row["tags"] = unicode(row["tags"], encoding="utf-8", errors="ignore") row["Ean"] = unicode(row["Ean"], encoding="utf-8", errors="ignore") row["Codigo"] = unicode(row["Codigo"], encoding="utf-8", errors="ignore") row["PVP"] = float(row["PVP"]) row["Plazo"] = unicode(row["Plazo"], encoding="utf-8", errors="ignore") data_set.append(row) print str(len(data_set)) + ' products' schema = Schema(Codigo=ID(stored=True), Ean=TEXT(stored=True), Categoria=TEXT(analyzer=StemmingAnalyzer(minsize=3), stored=True), Articulo=TEXT(analyzer=StemmingAnalyzer(minsize=3), field_boost=2.0, stored=True), Tags=KEYWORD(field_boost=1.0, stored=True), PVP=NUMERIC(sortable = True), Plazo = TEXT(stored=True)) """ Explanation: Other ideas 
Use the search engine as tagger e.g. all products with the word "kids" will be tagged as "child" ("niños" o "infantil") Use the database as tagger e.g. all smartphones below 150€ tagged as "cheap" * Teamwork is always better * *I had to collaborate with other departments, SEO and Cataloging * Schema Types of fields: TEXT : for body text, allows phrase searching. KEYWORD: space- or comma-separated keywords, tags ID: single unit, e.g. prod NUMERIC: int, long, or float, sortable format DATETIME: sortable BOOLEAN: users to search for yes, no, true, false, 1, 0, t or f. Field boosting. * Is a multiplier applied to the score of any term found in the field.* Form diversity Stemming (great if you are working English) Removes suffixes Variation (great if you are working English) Encodes the words in the index in a base form End of explanation """ from whoosh import index from datetime import datetime start = datetime.now() ix = create_in("indexdir", schema) #clears the index #on a directory with an existing index will clear the current contents of the index writer = ix.writer() for product in data_set: writer.add_document(Codigo=unicode(product["Codigo"]), Ean=unicode(product["Ean"]), Categoria=unicode(product["Categoria"]), Articulo=unicode(product["Articulo"]), Tags=unicode(product["tags"]), PVP=float(product["PVP"])) writer.commit() finish = datetime.now() time = finish-start print time """ Explanation: Index Whoosh allows you to: - Create an index object in accordance with the schema - Merge segments: an efficient way to add documents - Delete documents in index: writer.delete_document(docnum) - Update documents: writer.update_document - Incremental index End of explanation """ Image(filename='files/screenshot_files.png') """ Explanation: 12k documents aprox stored in 10mb, index created in less than 15 seconds, depending on the computer End of explanation """ from whoosh.qparser import MultifieldParser, OrGroup qp = MultifieldParser(["Categoria", "Articulo", "Tags", "Ean", "Codigo", "Tags"], # all selected fields schema=ix.schema, # with my schema group=OrGroup) # OR instead AND user_query = 'Cargador de coche USB' user_query = unicode(user_query, encoding="utf-8", errors="ignore") user_query = user_query.lower() user_query = ' '.join([word for word in user_query.split() if word not in stop_words_spa]) user_query = ' '.join([word for word in user_query.split() if word not in stop_words_eng]) print "this is our query: " + user_query q = qp.parse(user_query) print "this is our parsed query: " + str(q) with ix.searcher() as searcher: results = searcher.search(q) print str(len(results))+' hits' print results[0]["Codigo"]+' - '+results[0]["Articulo"]+' - '+results[0]["Categoria"] """ Explanation: Search Parsing Scoring: The default is BM25F, but you can change it. myindex.searcher(weighting=scoring.TF_IDF()) Sorting: by scoring, by relevance, custom metrics Filtering: e.g. by category Paging: let you set up number of articles by page and ask for a specific page number Parsing Convert a query string submitted by a user into query objects Default parser: QueryParser("content", schema=myindex.schema) MultifieldParser: Returns a QueryParser configured to search in multiple fields Whoosh also allows you to customize your parser. 
End of explanation """ with ix.searcher() as searcher: print ''' ----------- word-scoring sorting ------------ ''' results = searcher.search(q) for hit in results: print hit["Articulo"]+' - '+str(hit["PVP"])+' eur' print ''' --------------- PVP sorting ----------------- ''' results = searcher.search(q, sortedby="PVP") for hit in results: print hit["Articulo"]+' - '+str(hit["PVP"])+' eur' """ Explanation: Sorting We can sort by any field that is previously marked as sortable in the schema. python PVP=NUMERIC(sortable=True) End of explanation """
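As a final sketch for this notebook (not part of the original), the search features mentioned above can be combined in a single call: an alternative scoring model, a filter restricted to one category, and paging. The category value u"accesorios" is a made-up example; the index, parser and field names reuse the ones defined earlier.

from whoosh import scoring
from whoosh.query import Term

with ix.searcher(weighting=scoring.TF_IDF()) as searcher:
    allowed = Term("Categoria", u"accesorios")   # hypothetical category value
    page = searcher.search_page(q, 1, pagelen=10, filter=allowed)
    for hit in page:
        print hit["Articulo"]+' - '+str(hit["PVP"])+' eur'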
vzg100/Post-Translational-Modification-Prediction
.ipynb_checkpoints/Phosphorylation Sequence Tests -MLP -dbptm+ELM-checkpoint.ipynb
mit
from pred import Predictor from pred import sequence_vector from pred import chemical_vector """ Explanation: Template for test End of explanation """ par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"] for i in par: print("y", i) y = Predictor() y.load_data(file="Data/Training/clean_s_filtered.csv") y.process_data(vector_function="sequence", amino_acid="S", imbalance_function=i, random_data=0) y.supervised_training("mlp_adam") y.benchmark("Data/Benchmarks/phos.csv", "S") del y print("x", i) x = Predictor() x.load_data(file="Data/Training/clean_s_filtered.csv") x.process_data(vector_function="sequence", amino_acid="S", imbalance_function=i, random_data=1) x.supervised_training("mlp_adam") x.benchmark("Data/Benchmarks/phos.csv", "S") del x """ Explanation: Controlling for Random Negatve vs Sans Random in Imbalanced Techniques using S, T, and Y Phosphorylation. Included is N Phosphorylation however no benchmarks are available, yet. Training data is from phospho.elm and benchmarks are from dbptm. End of explanation """ par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"] for i in par: print("y", i) y = Predictor() y.load_data(file="Data/Training/clean_Y_filtered.csv") y.process_data(vector_function="sequence", amino_acid="Y", imbalance_function=i, random_data=0) y.supervised_training("mlp_adam") y.benchmark("Data/Benchmarks/phos.csv", "Y") del y print("x", i) x = Predictor() x.load_data(file="Data/Training/clean_Y_filtered.csv") x.process_data(vector_function="sequence", amino_acid="Y", imbalance_function=i, random_data=1) x.supervised_training("mlp_adam") x.benchmark("Data/Benchmarks/phos.csv", "Y") del x """ Explanation: Y Phosphorylation End of explanation """ par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"] for i in par: print("y", i) y = Predictor() y.load_data(file="Data/Training/clean_t_filtered.csv") y.process_data(vector_function="sequence", amino_acid="T", imbalance_function=i, random_data=0) y.supervised_training("mlp_adam") y.benchmark("Data/Benchmarks/phos.csv", "T") del y print("x", i) x = Predictor() x.load_data(file="Data/Training/clean_t_filtered.csv") x.process_data(vector_function="sequence", amino_acid="T", imbalance_function=i, random_data=1) x.supervised_training("mlp_adam") x.benchmark("Data/Benchmarks/phos.csv", "T") del x """ Explanation: T Phosphorylation End of explanation """
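The three per-residue blocks above differ only in the training file and the residue letter, so they could be folded into one helper. This is only a refactoring sketch reusing the Predictor calls and file paths already shown; it does not change the experiment.

def run_benchmarks(train_file, residue,
                   methods=("pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss")):
    for method in methods:
        for random_data in (0, 1):
            print(residue, method, "random_data =", random_data)
            p = Predictor()
            p.load_data(file=train_file)
            p.process_data(vector_function="sequence", amino_acid=residue,
                           imbalance_function=method, random_data=random_data)
            p.supervised_training("mlp_adam")
            p.benchmark("Data/Benchmarks/phos.csv", residue)
            del p

# for example: run_benchmarks("Data/Training/clean_s_filtered.csv", "S")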
roatienza/Deep-Learning-Experiments
versions/2020/cnn/code/cnn-functional.ipynb
mit
from __future__ import absolute_import from __future__ import division from __future__ import print_function import numpy as np import tensorflow from tensorflow.keras.layers import Dense, Dropout, Input from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten from tensorflow.keras.models import Model from tensorflow.keras.datasets import mnist from tensorflow.keras.utils import to_categorical # load MNIST dataset (x_train, y_train), (x_test, y_test) = mnist.load_data() # from sparse label to categorical num_labels = len(np.unique(y_train)) y_train = to_categorical(y_train) y_test = to_categorical(y_test) # reshape and normalize input images image_size = x_train.shape[1] x_train = np.reshape(x_train,[-1, image_size, image_size, 1]) x_test = np.reshape(x_test,[-1, image_size, image_size, 1]) x_train = x_train.astype('float32') / 255 x_test = x_test.astype('float32') / 255 """ Explanation: Using Functional API to build CNN We start introducing the concept of Functional API as an alternative way of building keras models. In the previous examples on MLP and CNN on MNIST, we used Sequential API. The Sequential API is fine if we are building simple models wherein there is a single point for input and single point for output. In advanced models, this is not sufficient since we build more complex graphs possibly with multiple inputs and outputs. In such cases, Functional API is the method of choice. Functional API builds upon the concept of function composition: \begin{equation} y = f_n \circ f_{n-1} \circ \ldots \circ f_1(x) \end{equation} The output of one function becomes the input of the next function and so on. We can also have a function with multiple outputs that become inputs to multiple functions. Or, we can have multiple functional blocks with multiple separate inputs that are combined into one or more outputs. In the following example, we will show how to build a model made of 3-Conv2D-1-Dense using Functional API. Similar to the previous example on CNN on MNIST, let us do the initializations, and input and label pre-processing. Since this is similar to the CNN model that we built using Sequential API, it is expected that its performance is more or less the same. End of explanation """ # network parameters input_shape = (image_size, image_size, 1) batch_size = 128 kernel_size = 3 filters = 64 """ Explanation: Hyper-parameters The hyper-parameters are similar to the one used in CNN on MNIST example. End of explanation """ # use functional API to build cnn layers inputs = Input(shape=input_shape) y = Conv2D(filters=filters, kernel_size=kernel_size, activation='relu', padding='same')(inputs) y = MaxPooling2D()(y) y = Conv2D(filters=filters, kernel_size=kernel_size, activation='relu', padding='same')(y) y = MaxPooling2D()(y) y = Conv2D(filters=filters, kernel_size=kernel_size, activation='relu', padding='same')(y) """ Explanation: Actual Model Building Using the Functional API Keras layer y = Conv2D(x), we can stack 3 CNN layers together to form a simple backbone network. The y = MaxPooling2D(x) is used to compress the learned feature maps. With compressed feature maps, the CNN learns new representations with a bigger receptive field. In Functional API, the output of one layer becomes the input of the next layer. For example if the first layer is y2 = Conv2D(y1), then the next layer is y3 = Conv2D(y2). To save from variable name pollution, we normally reuse the same variable name (eg. y) as shown below. 
In Sequential model building, we use the add() method of a model to stack multiple layers together.
End of explanation
"""

# image to vector before connecting to dense layer
y = Flatten()(y)
# dropout regularization
#y = Dropout(dropout)(y)
outputs = Dense(num_labels, activation='softmax')(y)

# build the model by supplying inputs/outputs
model = Model(inputs=inputs, outputs=outputs)
# network model in text
model.summary()

"""
Explanation: Head is a Dense Layer
Since we are doing logistic regression, we need to flatten the output of the 3-layer CNN so that we can generate the right number of logits to model a 10-class categorical distribution. This is the same concept that we used in MLP.
End of explanation
"""

# classifier loss, SGD optimizer, classifier accuracy
model.compile(loss='categorical_crossentropy',
              optimizer='sgd',
              metrics=['accuracy'])
# train the model with input images and labels
model.fit(x_train,
          y_train,
          validation_data=(x_test, y_test),
          epochs=20,
          batch_size=batch_size)

# model accuracy on test dataset
score = model.evaluate(x_test, y_test, batch_size=batch_size)
print("\nTest accuracy: %.1f%%" % (100.0 * score[1]))

"""
Explanation: Model Training and Validation
The last step is similar to our MLP example. We compile the model and perform training by calling fit. The model validation is the last step.
End of explanation
"""
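To make the multiple-inputs/outputs argument above concrete, here is a compact sketch (not part of the MNIST run, and not trained here) of a two-branch network that the Sequential API cannot express: the same input feeds two convolutional branches with different kernel sizes, and the branches are merged before the classifier head. It reuses the layers already imported; concatenate is the only extra import.

from tensorflow.keras.layers import concatenate

left = Conv2D(filters=32, kernel_size=3, activation='relu', padding='same')(inputs)
left = MaxPooling2D()(left)
right = Conv2D(filters=32, kernel_size=5, activation='relu', padding='same')(inputs)
right = MaxPooling2D()(right)
merged = concatenate([left, right])
merged = Flatten()(merged)
branch_outputs = Dense(num_labels, activation='softmax')(merged)
branch_model = Model(inputs=inputs, outputs=branch_outputs)
branch_model.summary()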
newsapps/public-notebooks
Monthly homicides and shootings report.ipynb
mit
import os import requests def get_table_data(table_name): url = '%stable/json/%s' % (os.environ['NEWSROOMDB_URL'], table_name) try: r = requests.get(url) return r.json() except: print 'doh' return get_table_data(table_name) homicides = get_table_data('homicides') shootings = get_table_data('shootings') print 'Found %d homicides and %d shootings' % (len(homicides), len(shootings)) """ Explanation: First, we need to connect to NewsroomDB and download all shootings and homicides. End of explanation """ from datetime import date, datetime today = date.today() homicides_this_month = {} for h in homicides: try: dt = datetime.strptime(h['Occ Date'], '%Y-%m-%d') except ValueError: continue if dt.month == today.month: if dt.year not in homicides_this_month: homicides_this_month[dt.year] = [] homicides_this_month[dt.year].append(h) shootings_this_month = {} for s in shootings: try: dt = datetime.strptime(s['Date'], '%Y-%m-%d') except ValueError: continue if dt.month == today.month: if dt.year not in shootings_this_month: shootings_this_month[dt.year] = [] shootings_this_month[dt.year].append(s) for year in sorted(shootings_this_month.keys(), reverse=True): try: s = len(shootings_this_month[year]) except: s = 0 try: h = len(homicides_this_month[year]) except: h = 0 print '%d:\t%d shootings\t\t%d homicides' % (year, s, h) """ Explanation: Right now, we're interested in all shootings and homicides for the current month. So filter the lists based on whatever that month is. End of explanation """ from datetime import date, timedelta test_date = date.today() one_day = timedelta(days=1) shooting_days = {} for shooting in shootings: if shooting['Date'] not in shooting_days: shooting_days[shooting['Date']] = 0 shooting_days[shooting['Date']] += 1 while test_date.year > 2013: if test_date.strftime('%Y-%m-%d') not in shooting_days: print 'No shootings on %s' % test_date test_date -= one_day """ Explanation: Now let's find the days without a shooting. End of explanation """ from datetime import date, timedelta test_date = date.today() one_day = timedelta(days=1) homicide_days = {} for homicide in homicides: if homicide['Occ Date'] not in homicide_days: homicide_days[homicide['Occ Date']] = 0 homicide_days[homicide['Occ Date']] += 1 while test_date.year > 2013: if test_date.strftime('%Y-%m-%d') not in homicide_days: print 'No homicides on %s' % test_date test_date -= one_day """ Explanation: And days without a homicide. End of explanation """ coordinates = [] for homicide in homicides: if not homicide['Occ Date'].startswith('2015-'): continue # Since the format of this field is (x, y) (or y, x? I always confuse the two) we need to extract just x and y try: coordinates.append( (homicide['Geocode Override'][1:-1].split(',')[0], homicide['Geocode Override'][1:-1].split(',')[1])) except: # Not valid/expected lat/long format continue print len(coordinates) for coordinate in coordinates: print '%s,%s' % (coordinate[0].strip(), coordinate[1].strip()) """ Explanation: Let's get the latitudes and longitudes of every murder in 2015 (Manya Brachear asked). End of explanation """
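A possible follow-up, not part of the original report: write those 2015 coordinates out to a CSV file so they can be dropped into a mapping tool. The output filename is arbitrary.

import csv

with open('homicide_coordinates_2015.csv', 'wb') as f:
    writer = csv.writer(f)
    writer.writerow(['latitude', 'longitude'])
    for lat, lng in coordinates:
        writer.writerow([lat.strip(), lng.strip()])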
lfsimoes/beam_paco__gtoc5
usage_demos.ipynb
mit
# https://esa.github.io/pykep/ # https://github.com/esa/pykep # https://pypi.python.org/pypi/pykep/ import PyKEP as pk import numpy as np from tqdm import tqdm, trange import matplotlib.pylab as plt %matplotlib inline import seaborn as sns plt.rcParams['figure.figsize'] = 10, 8 from gtoc5 import * from gtoc5.multiobjective import * from gtoc5.phasing import * from paco import * from paco_traj import * from experiments import * from experiments__paco import * """ Explanation: Ants in Space! An introduction to the code in beam_paco__gtoc5 Luís F. Simões 2017-04 <h1 id="tocheading">Table of Contents</h1> <div id="toc"></div> End of explanation """ %load_ext watermark %watermark -v -m -p PyKEP,numpy,scipy,tqdm,pandas,matplotlib,seaborn # https://github.com/rasbt/watermark """ Explanation: Taking a look at our Python environment. End of explanation """ from urllib.request import urlretrieve import gzip urlretrieve('http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/tsp/eil101.tsp.gz', filename='eil101.tsp.gz'); """ Explanation: Solving a TSPLIB problem with P-ACO In this section we show the steps to solve a Travelling Salesman Problem (TSP) instance with the Population-based Ant Colony Optimization (P-ACO) algorithm. We'll use a TSP instance downloaded from the TSPLIB: eil101 (symmetric, 101 cities, total distance in best known solution: 629). End of explanation """ with gzip.open('eil101.tsp.gz') as f: xy_locs = np.loadtxt(f, skiprows=6, usecols=(1,2), comments='EOF', dtype=np.int) nr_cities = len(xy_locs) xy_locs[:5] """ Explanation: Load each city's (x, y) coordinates. End of explanation """ distances = np.zeros((nr_cities, nr_cities)) for a in range(nr_cities): for b in range(a, nr_cities): distances[a,b] = distances[b,a] = np.linalg.norm(xy_locs[a] - xy_locs[b]) distances[:4, :4] """ Explanation: Calculate distances matrix. End of explanation """ rng, seed = initialize_rng(seed=None) print('Seed:', seed) path_handler = tsp_path(distances, random_state=rng) aco = paco(path_handler.nr_nodes, path_handler, random_state=rng) """ Explanation: Instantiate the TSP "path handler" with this distances matrix, and P-ACO, with its default parameters. End of explanation """ %time (quality, best) = aco.solve(nr_generations=100) quality """ Explanation: Solve it. End of explanation """ %time (quality, best) = aco.solve(nr_generations=400, reinitialize=False) quality """ Explanation: Continue refining the solution for a few more generations. End of explanation """ xy = np.vstack([xy_locs[best], xy_locs[best][0]]) # to connect back to the start line, = plt.plot(xy[:,0], xy[:,1], 'go-') """ Explanation: Let's see what we found. End of explanation """ t = mission_to_1st_asteroid(1712) add_asteroid(t, 4893) """ Explanation: Basic steps for assembling GTOC5 trajectories The two primary functions for assembling a GTOC5 trajectory are here mission_to_1st_asteroid() and add_asteroid(). The first initializes the mission's data structure with the details of the Earth launch leg, that takes the spacecraft towards the mission's first asteroid. Subsequently, via multiple calls to add_asteroid(), the mission is extended with additional exploration targets. Each call to add_asteroid() creates a rendezvous leg towards the specified asteroid, immediately followed by a flyby of that same asteroid, and so increases the mission's overall score by 1. Here's an example of a mission that launches towards asteroid 1712, and moves next to asteroid 4893. 
The True value returned by add_asteroid() indicates that a feasible transfer leg was found, and asteroid 4893 was therefore added to the mission. End of explanation """ score(t), final_mass(t), tof(t) * DAY2YEAR """ Explanation: We can evaluate this trajectory with respect to its score (number of asteroids fully explored), final mass (in kg), and time of flight (here converted from days to years). End of explanation """ resource_rating(t) """ Explanation: An aggregation of the mission's mass and time costs can be obtained with resource_rating(). It measures the extent to which the mass and time budgets available for the mission have been depleted by the trajectory. It produces a value of 1.0 at the start of the mission, and a value of 0.0 when the mission has exhausted its 3500 kg of available mass, or its maximum duration of 15 years. End of explanation """ score(t) + resource_rating(t) """ Explanation: As the score increments discretely by 1.0 with each added asteroid, and the resource rating evaluates mass and time available in a range of [0, 1], both can be combined to give a single-objective evaluation of the trajectory, that should be maximized: End of explanation """ print(seq(t)) print(seq(t, incl_flyby=False)) """ Explanation: Calling seq(), we can see either the full sequence of asteroids visited in each leg, or just the distinct asteroids visited in the mission. In this example, we see that the mission starts on Earth (id 0), performs a rendezvous with asteroid 1712, followed by a flyby of the same asteroid, and then repeats the pattern at asteroid 4893. End of explanation """ t[-1] """ Explanation: The trajectory data structure built by mission_to_1st_asteroid() and add_asteroid() is a list of tuples summarizing the evolution of the spacecraft's state. It provides the minimal sufficient information from which a more detailed view can be reproduced, if so desired. Each tuple contains: asteroid ID spacecraft mass epoch the leg's $\Delta T$ the leg's $\Delta V$ The mass and epoch values correspond to the state at the given asteroid, at the end of a rendezvous or self-fly-by leg, after deploying the corresponding payload. The $\Delta T$ and $\Delta V$ values refer to that leg that just ended. End of explanation """ pk.epoch(t[-1][2], 'mjd') """ Explanation: Epochs are given here as Modified Julian Dates (MJD), and can be converted as: End of explanation """ import os from copy import copy def greedy_step(traj): traj_asts = set(seq(traj, incl_flyby=False)) progress_bar_args = dict(leave=False, file=os.sys.stdout, desc='attempting score %d' % (score(traj)+1)) extended = [] for a in trange(len(asteroids), **progress_bar_args): if a in traj_asts: continue tt = copy(traj) if add_asteroid(tt, next_ast=a, use_cache=False): extended.append(tt) return max(extended, key=resource_rating, default=[]) # measure time taken at one level to attempt legs towards all asteroids (that aren't already in the traj.) %time _ = greedy_step(mission_to_1st_asteroid(1712)) def greedy_search(first_ast): t = mission_to_1st_asteroid(first_ast) while True: tt = greedy_step(t) if tt == []: # no more asteroids could be added return t t = tt %time T = greedy_search(first_ast=1712) score(T), resource_rating(T), final_mass(T), tof(T) * DAY2YEAR print(seq(T, incl_flyby=False)) """ Explanation: Greedy search In this section we perform a Greedy search for a GTOC5 trajectory. We'll start by going to asteroid 1712. Then, and at every following step, we attempt to create legs towards all still available asteroids. 
Among equally-scored alternatives, we greedily pick the one with highest resource rating to adopt into the trajectory, and continue from there. Search stops when no feasible legs are found that can take us to another asteroid. This will happen either because no solutions were found that would allow for a leg to be created, or because adding a found solution would require the spacecraft to exceed the mission's mass or time budgets. End of explanation """ t = mission_to_1st_asteroid(1712) """ Explanation: Greedy search gave us a trajectory that is able to visit 14 distinct asteroids. However, by the 14th, the spacecraft finds itself unable to find a viable target to fly to next, even though it has 84.8 kg of mass still available (the spacecraft itself weighs 500 kg, so the mission cannot go below that value), and 2 years remain in its 15 year mission. Phasing indicators A big disadvantage of the approach followed above is the high computational cost of deciding which asteroid to go to next. It entails the optimization of up to 7075 legs, only to then pick a single one and discard all the other results. An alternative is to use one of the indicators available in gtoc5/phasing.py. They can provide an indication of how likely a specific asteroid is to be an easily reachable target. End of explanation """ r = rate__orbital_2(dep_ast=t[-1][0], dep_t=t[-1][2], leg_dT=125) r[seq(t)] = np.inf # (exclude bodies already visited) """ Explanation: We use here the (improved) orbital phasing indicator to rate destinations with respect to the estimated ΔV of hypothetical legs that would depart from dep_ast, at epoch dep_t, towards each possible asteroid, arriving there within leg_dT days. We don't know exactly how long the transfer time chosen by add_asteroid() would be, but we take leg_dT=125 days as reference transfer time. End of explanation """ r.argsort()[:5] """ Explanation: Below are the 5 asteroids the indicator estimates would be most easily reachable. As we've seen above in the results from the greedy search, asteroid 4893, here the 2nd best rated alternative, would indeed be the target reachable with lowest ΔV. End of explanation """ [add_asteroid(copy(t), a) for a in r.argsort()[:5]] """ Explanation: The indicator is however not infallible. If we attempt to go from asteroid 1712 towards each of these asteroids, we find that none of them are actually reachable, except for 4893! Still, the indicator allows us to narrow our focus considerably. 
End of explanation """ def narrowed_greedy_step(traj, top=10): traj_asts = set(seq(traj, incl_flyby=False)) extended = [] ratings = rate__orbital_2(dep_ast=traj[-1][0], dep_t=traj[-1][2], leg_dT=125) for a in ratings.argsort()[:top]: if a in traj_asts: continue tt = copy(traj) if add_asteroid(tt, next_ast=a, use_cache=False): extended.append(tt) return max(extended, key=resource_rating, default=[]) def narrowed_greedy_search(first_ast, **kwargs): t = mission_to_1st_asteroid(first_ast) while True: tt = narrowed_greedy_step(t, **kwargs) if tt == []: # no more asteroids could be added return t t = tt # measure time taken at one level to attempt legs towards the best `top` asteroids %time _ = narrowed_greedy_step(mission_to_1st_asteroid(1712), top=10) %time T = narrowed_greedy_search(first_ast=1712, top=10) score(T), resource_rating(T), final_mass(T), tof(T) * DAY2YEAR print(seq(T, incl_flyby=False)) """ Explanation: Armed with the indicator, we can reimplement the greedy search, so it will only optimize legs towards a number of top rated alternatives, and then proceed with the best out of those. End of explanation """ gtoc_ph = init__path_handler(multiobj_evals=True) # configuring Beam P-ACO to behave as a deterministic multi-objective Beam Search _args = { 'beam_width': 20, 'branch_factor': 250, 'alpha': 0.0, # 0.0: no pheromones used 'beta': 1.0, 'prob_greedy': 1.0, # 1.0: deterministic, greedy branching decisions } bpaco = beam_paco_pareto(nr_nodes=len(asteroids), path_handler=gtoc_ph, random_state=None, **_args) # start the search # given we're running the algoritm in deterministic mode, we execute it for a single generation %time best_pf = bpaco.solve(nr_generations=1) # being this a `_pareto` class, .best returns a Pareto front # pick the first solution from the Pareto front best_eval, best = best_pf[0] # Evaluation of the best found solution # (score, mass consumed, time of flight) best_eval # sequence of asteroids visited (0 is the Earth) print(seq(best, incl_flyby=False)) # mission data structure, up to the full scoring of the first two asteroids best[:5] """ Explanation: We were able to find another score 14 trajectory, but this time it took us ~1 second, whereas before it was taking us 2 and a half minutes. Finding a GTOC5 trajectory of score 17 with Beam Search End of explanation """ %%javascript $.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js') // https://github.com/kmahelona/ipython_notebook_goodies """ Explanation: Generate the Table of Contents End of explanation """
PollyP/CAPE-ratios
notebooks/Using CAPE ratios to make investment decisions.ipynb
mit
%matplotlib inline import IPython.html.widgets as widgets import IPython.display as display import matplotlib.pyplot as plt import matplotlib.pylab as pylab import pandas as pd pd.set_option('display.float_format', lambda x: '%.4f' % x) pylab.rcParams['figure.figsize'] = 14, 8 pd.set_option('display.width', 400) # load the CAPE data and clean it up a bit. # source: http://www.econ.yale.edu/~shiller/data.htm cape_data = pd.read_csv('CAPE data', skiprows=range(0,8), skip_blank_lines=True, \ names=['date','s&p comp price','s&p comp div','s&p comp earnings','CPI',\ 'date fraction','int rate GS10','real price','real div','real earnings','CAPE','na']) cape_data = cape_data[0:1737] cape_data.sort('date',inplace=True) cape_data.drop('na', axis=1,inplace=True) cape_data.set_index('date') cape_data[['s&p comp price', 'CPI', 'int rate GS10']] = cape_data[['s&p comp price', 'CPI','int rate GS10']].astype(float) cape_data = cape_data.iloc[948:] # look at sample data since 1950 """ Explanation: Using CAPE ratios as market timing indicators CAPE ratios are a valuation metric for the U.S. S&P 500 equity indexes, invented by Benjamin Graham and David Dodd and popularized by the behavioral economist and Nobel laureate Robert Shiller. It seeks to improve on the standard price-to-earnings ratio by using the moving ten-year inflation-adjusted earnings average instead, and thereby reducing the impacts of short-term market volatility. Higher CAPE ratios imply lower real returns over the long term. When the CAPE ratios climb, many people [get concerned about unsupportable valuations and the increased potential for stomach-churning drops] (http://www.cnbc.com/2015/09/03/risk-of-big-stock-drops-grows-robert-shiller.html). As I write this, we're in the mid-twenties, significantly above the long-term average of 17. (Though not so out of line with most of the post-2000 CAPE ratios, with the notable exception of the 2008 meltdown.) Still, Professor Shiller is careful to say that CAPE ratios aren't a short-term market timing tool. But I am the kind of person who, when told, "do not use CAPE ratios as short-term market timing tool!" immediately wonders, "Huh. I wonder if I can use the CAPE ratios as a short-term market timing tool?" Wouldn't it be great if there were some strategy where you sell when the CAPE ratio gets over X and buy back in when it goes below Y and beat the buy-and-hold performance? Fortunately, Professor Shiller makes his data available, so it's easy to find out. Download the data and load it into a DataFrame We're using Python, and its very useful data analysis library Pandas. Its DataFrame structure is a natural for time series processing like this. We'll do some plotting, too; for that, we'll use the matplotlib library. End of explanation """ cape_data.head() """ Explanation: Let's see how the dataset is structured. End of explanation """ temp_df = cape_data.copy() # create a column that holds the price in 2015 dollars CPI2015 = temp_df.iloc[-1]['CPI'] temp_df['2015 price'] = temp_df['s&p comp price'] * ( CPI2015 / temp_df['CPI'] ) * .06 # ... and a column that holds the 1 year PE temp_df['1PE'] = temp_df['s&p comp price'] / temp_df['s&p comp earnings'] # plot the price, CAPE, and 1Y PE ax = temp_df.plot( x='date', y='2015 price', color='Black') temp_df.plot( x='date',y='CAPE', color='Red', ax = ax ) temp_df.plot( x='date', y='1PE', color='Blue', ax=ax) ax.legend(['S&P comp. 
price, 2015 dollars (scaled)','CAPE (10PE)','1YPE'], loc='upper left') """ Explanation: Now let's plot the price, CAPE, and 1YPE data. We'll normalize the price data, adjusting for 2015 dollars and scaling it to make it readable next to the earnings ratios. End of explanation """ def split_periods( sell_threshold, buy_threshold, df, debug=False ): """ Given an input dataframe, return two lists of subset dataframes. In the first list are all the investable periods, and in the second list are the noninvestable periods. Investability is determined by CAPE ratio trigger values. Inputs: sell_threshold - the CAPE ratio point at which to end the investable period buy_threshold - the CAPE ratio at which to start the investable period df - the initial dataset with the CAPE ratio column debug - optional boolean for heavy debugging output Outputs: Returns two lists. The first is a list of subset dataframes that meet the investable period criteria, and second is a list of subset dataframes that don't meet the investable period critera. """ invested_state = True invested_periods = list() noninvested_periods = list() current_period = list() # add an 'is invested' column df['is_invested'] = None for index, row in df.iterrows(): if row['CAPE'] > sell_threshold and invested_state == True: # cape too high, cashing out. add the previous invested period to the invested periods list invested_state = False if debug: print "sell threshold, appending " + str( df.loc[ df['date'].isin( current_period) ] ) invested_periods.append( df.loc[ df['date'].isin( current_period) ] ) current_period = list() if row['CAPE'] < buy_threshold and invested_state == False: # cape low enough to buy back in. add the previous noninvested period to the noninvested periods list invested_state = True if debug: print "buy threshold, appending " + str( df.loc[ df['date'].isin( current_period) ] ) noninvested_periods.append( df.loc[ df['date'].isin( current_period) ] ) current_period = list() # set this row's 'is invested' state current_period.append( row['date'] ) df.loc[index,'is_invested'] = invested_state # don't forget rows at the end of the dataset if invested_state == True: if debug: print "end of df: appending to invested: " + str( df.loc[ df['date'].isin( current_period) ] ) invested_periods.append( df.loc[ df['date'].isin( current_period) ] ) else: if debug: print "end of df: appending to noninvested: " + str( df.loc[ df['date'].isin( current_period) ] ) noninvested_periods.append( df.loc[ df['date'].isin( current_period) ] ) return (invested_periods,noninvested_periods) """ Explanation: OK, time to write some code to test our sell- and buy-trigger hypothesis. Implement the core functions we'll be using We'll need a split up the input data into two groups. The first group will hold all the periods where we have not hit our sell trigger and are invested in the market. The second group holds all the periods where we hit the sell trigger but have not yet hit our next buy trigger, and thus are not invested in the market. End of explanation """ def compute_gain_or_loss( df ): ''' compute the difference from the price in the first row to the price in the last row, adding in dividends for the total. ''' if len(df) == 0: return 0.0 divs = df['s&p comp div'] / 12 # dividends are annualized; break it down to months. 
divs_sum = divs.sum() if len(df) == 1: return divs_sum return df.iloc[-1]['s&p comp price'] - df.iloc[0]['s&p comp price'] + divs_sum """ Explanation: For each investable period, we'll need to figure out the price gain or loss over that period. We'll be collecting dividends, so we'll add that into the total. End of explanation """ def compute_interest( df ): ''' for the time period in the dataframe, compute the interest that would be accrued ''' intr = df['int rate GS10'] / 12 # to get monthly gs10 interest rate intr = intr / 10 # to approximate short term interest rate return intr.sum() """ Explanation: For each noninvestable period, we'll be parking our money in a safe short-term interest bearing account. We'll write a function to calculate that amount. End of explanation """ def compute_passive_gains( df ): ''' compute the total gains or losses for the dataframe ''' gains_list = map( compute_gain_or_loss, [ df, ] ) return sum(gains_list) def compute_cape_strategy_gains( sell_threshold, buy_threshold, df, debug = False ): ''' compute the total gains and losses for the dataframe, using the sell and buy thresholds to time the investable periods ''' # split the dataframe into invested and noninvested periods, using the thresholds (invested_periods,noninvested_periods) = split_periods(sell_threshold, buy_threshold, df, debug ) if debug: print "\n\n===cape invested periods" + str(invested_periods) + "\n====" print "\n\n===cape noninvested periods " + str(noninvested_periods) + "\n====" # compute the gain or loss for each invested period gain_list = map( compute_gain_or_loss, invested_periods ) # compute the interest accrued for each noninvested period int_list = map( compute_interest, noninvested_periods ) return sum(gain_list) + sum(int_list) """ Explanation: Compute_passive_gains() calculates the gains for the baseline buy-and-hold case. It returns the change in price plus the dividends over the entire time period. Compute_cape_strategy_gains() splits the input data into investable and noninvestable periods, based on its sell and buy thresholds given as parameters. For each of the investable periods, it computes the gain or loss plus the dividends. For each of the noninvestable periods, it computes the interest accrued. It sums these together and returns this as the total gain for the CAPE strategy. End of explanation """ sample_df = cape_data.copy() results = [] # compute the buy and hold results baseline_gains = compute_passive_gains( sample_df ) # compute the CAPE strategy results for various sell and buy thresholds for s in range(20,50): for b in range( 10, 30 ): if b > s: continue cape_gains = compute_cape_strategy_gains( s, b, sample_df ) results.append( [ baseline_gains - cape_gains, s, b ] ) results_df = pd.DataFrame( data=results, columns=['baseline - cape return', 'sell threshold', 'buy threshold']) """ Explanation: Do the calculations We calculate the baseline buy-and-hold case, and then loop through a range of possible sell and buy threshold values for the CAPE strategy. We store the difference between the baseline gain and the CAPE strategy in a DataFrame. A positive number means you do better with buy-and-hold than with the given CAPE strategy; a negative number means the CAPE strategy wins. 
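As a quick way to see where each strategy dominates (not part of the original analysis), the results table can be sorted on that difference column: the most negative rows are the threshold pairs where the CAPE strategy beats buy-and-hold by the widest margin, and the most positive rows are where it loses the most. This mirrors the notebook's existing pandas usage.

print results_df.sort('baseline - cape return').head()
print results_df.sort('baseline - cape return').tail()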
End of explanation """ baseline_df = results_df[ results_df['baseline - cape return'] > 0 ] ax = baseline_df.plot(kind='scatter', x='sell threshold',y='buy threshold', \ color='Blue', s = baseline_df['baseline - cape return'] * .05) cape_df = results_df[ results_df['baseline - cape return'] <= 0 ] cape_df.plot(kind='scatter', x='sell threshold',y='buy threshold', \ color='Red', s = cape_df['baseline - cape return'] * -.05 , ax = ax ) """ Explanation: Plot the results Now we plot the results. The axes indicate the sell and buy threshold parameters tested with the cape strategy. The dots tell you how the test came out: blue means buy-and-hold wins, and red means the CAPE strategy wins. The size of the dot gives you a relative indicator of how much winning is going on; the bigger the dot, the bigger the win. End of explanation """ # import and options %matplotlib inline import IPython.html.widgets as widgets import IPython.display as display import matplotlib.pyplot as plt import matplotlib.pylab as pylab import pandas as pd pd.set_option('display.float_format', lambda x: '%.4f' % x) pylab.rcParams['figure.figsize'] = 14, 8 pd.set_option('display.width', 400) # read in test file for tests 1 and 2 test_data = pd.read_csv('testdata.csv', skiprows=range(0,8), skip_blank_lines=True, \ names=['date','s&p comp price','s&p comp div','s&p comp earnings','CPI',\ 'date fraction','int rate GS10','real price','real div','real earnings','CAPE','na']) test_data.drop('na', axis=1,inplace=True) test_data.sort('date', inplace=True) test_data.set_index('date') ############################# # # test 1 # ############################# print "\n\ntest 1: baseline and cape strategies are the same" sample_df = test_data.copy() results = [] baseline_gains = compute_passive_gains( sample_df ) print "baseline gains " + str(baseline_gains) assert baseline_gains == 1012.0 print "doing cape computation now" sell_threshold = 25.0 buy_threshold = 20.0 cape_gains = compute_cape_strategy_gains( sell_threshold, buy_threshold, sample_df ) print "cape_gains " + str(cape_gains) assert cape_gains == 1012.0 results.append( [ baseline_gains - cape_gains, sell_threshold, buy_threshold ] ) results_df = pd.DataFrame( data=results, columns=['baseline - cape return', 'sell threshold', 'buy threshold']) assert len(results_df) == 1 assert results_df.iloc[0]['baseline - cape return'] == 0.0 assert results_df.iloc[0]['sell threshold'] == sell_threshold assert results_df.iloc[0]['buy threshold'] == buy_threshold print "test 1 passed" ############################# # # test 2 # ############################# print "\n\ntest 2: cape strategy always in cash" sample_df = test_data.copy() results = [] baseline_gains = compute_passive_gains( sample_df ) print "baseline gains " + str(baseline_gains) assert baseline_gains == 1012.0 print "doing cape computation now" sell_threshold = 15.0 buy_threshold = 10.0 cape_gains = compute_cape_strategy_gains( sell_threshold, buy_threshold, sample_df ) print "cape_gains " + str(cape_gains) assert cape_gains == 0.6 results.append( [ baseline_gains - cape_gains, sell_threshold, buy_threshold ] ) results_df = pd.DataFrame( data=results, columns=['baseline - cape return', 'sell threshold', 'buy threshold']) assert len(results_df) == 1 assert results_df.iloc[0]['baseline - cape return'] == 1011.4 assert results_df.iloc[0]['sell threshold'] == sell_threshold assert results_df.iloc[0]['buy threshold'] == buy_threshold print "test 2 passed" # read in test file for test 3 test_data = pd.read_csv('testdata2.csv', 
skiprows=range(0,8), skip_blank_lines=True, \ names=['date','s&p comp price','s&p comp div','s&p comp earnings','CPI',\ 'date fraction','int rate GS10','real price','real div','real earnings','CAPE','na']) test_data.drop('na', axis=1,inplace=True) test_data.sort('date', inplace=True) test_data.set_index('date') ############################# # # test 3 # ############################# print "\n\ntest 3: cape strategy sometimes in cash" sample_df = test_data.copy() results = [] baseline_gains = compute_passive_gains( sample_df ) print "baseline gains " + str(baseline_gains) assert baseline_gains == 12.0 print "doing cape computation now" sell_threshold = 22.0 buy_threshold = 21.0 cape_gains = compute_cape_strategy_gains( sell_threshold, buy_threshold, sample_df, debug = True ) print "cape_gains " + str(cape_gains) assert cape_gains == 1008.20 results.append( [ baseline_gains - cape_gains, sell_threshold, buy_threshold ] ) results_df = pd.DataFrame( data=results, columns=['baseline - cape return', 'sell threshold', 'buy threshold']) assert len(results_df) == 1 assert results_df.iloc[0]['baseline - cape return'] == -996.2 assert results_df.iloc[0]['sell threshold'] == sell_threshold assert results_df.iloc[0]['buy threshold'] == buy_threshold print "test 3 passed" """ Explanation: What this tells me is that you can use the CAPE ratio to beat buy-and-hold -- but only when it's very, very high. Which isn't terribly often: the CAPE ratio has been over 35 4.3% of the 1950-2015 timeframe, all of it during the dotcom boom of 1998-2001. Looking at other stock market debacles, Black Tuesday's 1929 CAPE ratio was around 30, and Black Monday's 1987 CAPE ratio was in the high teens. When we look at the plot, we see that buy-and-hold easily beats a CAPE strategy that uses either of those numbers as sell triggers. You see where I'm going with this, right? CAPE ratios are not, in fact, useful market timing tools. Also, it's really, really, really hard to beat buy-and-hold. Really. Effin. Hard. Appendix: further reading Professor Shiller's most well-known popular book is Irrational Exuberance, where he describes his theory of behavioral economics and asset bubbles in particular. Professor Shiller was a fascinating interviewee on Barry Ritholtz's Masters in Business podcast. Appendix: tests End of explanation """
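As a rough cross-check of the "time spent above a given CAPE level" percentages quoted in the conclusion above, one could compute the share of months directly (a sketch; it assumes the CAPE column was parsed as numeric, which the earlier plot suggests):

print "share of months since 1950 with CAPE > 35: %.1f%%" % (100.0 * (cape_data['CAPE'].astype(float) > 35).mean())
print "share of months since 1950 with CAPE > 30: %.1f%%" % (100.0 * (cape_data['CAPE'].astype(float) > 30).mean())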
phoebe-project/phoebe2-docs
2.3/tutorials/nelder_mead.ipynb
gpl-3.0
#!pip install -I "phoebe>=2.3,<2.4" import phoebe from phoebe import u # units import numpy as np logger = phoebe.logger('error') """ Explanation: Advanced: Nelder-Mead Optimizer Setup Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab). End of explanation """ b = phoebe.default_binary() b.set_value('ecc', 0.2) b.set_value('per0', 25) b.set_value('teff@primary', 7000) b.set_value('teff@secondary', 6000) b.set_value('sma@binary', 7) b.set_value('incl@binary', 80) b.set_value('q', 0.3) b.set_value('t0_supconj', 0.1) b.set_value('requiv@primary', 2.0) b.set_value('vgamma', 80) lctimes = phoebe.linspace(0, 10, 1005) rvtimes = phoebe.linspace(0, 10, 105) b.add_dataset('lc', compute_times=lctimes) b.add_dataset('rv', compute_times=rvtimes) b.add_compute('ellc', compute='fastcompute') b.set_value_all('ld_mode', 'lookup') b.run_compute(compute='fastcompute') fluxes = b.get_value('fluxes@model') + np.random.normal(size=lctimes.shape) * 0.01 fsigmas = np.ones_like(lctimes) * 0.02 rvsA = b.get_value('rvs@primary@model') + np.random.normal(size=rvtimes.shape) * 10 rvsB = b.get_value('rvs@secondary@model') + np.random.normal(size=rvtimes.shape) * 10 rvsigmas = np.ones_like(rvtimes) * 20 b = phoebe.default_binary() b.add_dataset('lc', compute_phases=phoebe.linspace(0,1,201), times=lctimes, fluxes=fluxes, sigmas=fsigmas, dataset='lc01') b.add_dataset('rv', compute_phases=phoebe.linspace(0,1,201), times=rvtimes, rvs={'primary': rvsA, 'secondary': rvsB}, sigmas=rvsigmas, dataset='rv01') b.add_compute('ellc', compute='fastcompute') b.set_value_all('ld_mode', 'lookup') """ Explanation: Create fake "observations" We'll create the same fake "observations" as in the Inverse Paper Examples. For the sake of efficiency for this example, we'll use the ellc backend. In practice, this is only safe for areas of the parameter space where phoebe and ellc are in sufficient agreement. End of explanation """ b.set_value('sma@binary', 7+0.5) b.set_value('incl@binary', 80+10) b.set_value('q', 0.3+0.1) b.set_value('t0_supconj', 0.1) b.set_value('requiv@primary', 2.0-0.3) b.run_compute(compute='fastcompute', model='orig_model') _ = b.plot(x='phases', show=True) """ Explanation: We'll set the initial model to be close to the correct values, but not exact. In practice, we would instead have to use a combination of manual tweaking, LC estimators, and RV_estimators to get in the rough parameter space of the solution before starting to use optimizers. End of explanation """ b.add_solver('optimizer.nelder_mead', solver='nm_solver') """ Explanation: Nelder-Mead Options As with any solver, we call b.add_solver and this time pass optimizer.nelder_mead. End of explanation """ print(b.filter(solver='nm_solver')) """ Explanation: This adds new parameters to our bundle with the options for running nelder-mead. End of explanation """ b.set_value('compute', solver='nm_solver', value='fastcompute') """ Explanation: Here we get to choose which set of compute-options will be used while optimizing (we'll choose the ellc compute options we called 'fastcompute' just for the sake of efficiency) End of explanation """ b.set_value('maxiter', 1000) b.set_value('maxfev', 1000) """ Explanation: We'll also set maxiter and maxfev to smaller values (these are just passed directly to scipy.optimize.minimize. 
End of explanation """ print(b.get_parameter('fit_parameters')) print(b.get_parameter('initial_values')) """ Explanation: The fit_parameters parameter takes a list of twigs (see the general concepts tutorial for a refresher). These parameters will be those that are optimized. By default, each parameter will start at its current face-value. To change the starting positions, you can either change the face-values in the bundle, or pass alternate starting positions to initial_values (as a dictionary of twig-value pairs). End of explanation """ b.set_value('fit_parameters', ['teff', 'requiv']) b.get_value('fit_parameters', expand=True) """ Explanation: fit_parameters does accept partial-twigs as well as wildcard matching. To see the full list of matched parameters for the current set value, you can pass expand=True to get_value. End of explanation """ b.set_value('fit_parameters', ['q', 'vgamma', 't0_supconj']) b.get_value('fit_parameters', expand=True) """ Explanation: For this example, let's try to fit the RV first, so we'll optimize q, vgamma, and t0_supconj. End of explanation """ print(b.filter(qualifier='enabled', compute='fastcompute')) b.disable_dataset('lc01', compute='fastcompute') print(b.filter(qualifier='enabled', compute='fastcompute')) b.run_solver('nm_solver', solution='nm_sol') """ Explanation: Note that the optimizer options also contains the ability to set "priors". If set (to the label of a distribution set, these will be included in the cost function and can be used to limit the range that a parameter will be allowed to explore within the optimizer. Running Nelder-Mead Once our options are set, we can call b.run_solver, pass the label of our solver ('nm_solver' in this case), and optionally a label for the resulting "solution" (same as labeling the results from b.run_compute with a "model"). In this case, we want to fit only the RVs, so we'll disable the LCs in the compute options that we're using. End of explanation """ print(b.filter(solution='nm_sol')) """ Explanation: Interpreting the Returned Solution We can now look at the new parameters this has added to the bundle: End of explanation """ print(b.adopt_solution(trial_run=True)) """ Explanation: And then adopt the final parameter values, via b.adopt_solution. By passing trial_run=True, we can see what changes will be made without actually changing the face-values in the bundle. End of explanation """ print(b.adopt_solution(trial_run=True, adopt_parameters=['q'])) """ Explanation: Note that by default, all fitted parameters will be adopted. But we can change this by setting adopt_parameters (in the solution) or by passing adopt_parameters directly to adopt_solution. End of explanation """ b.run_compute(compute='fastcompute', sample_from='nm_sol', model='nm_model') _ = b.plot(kind='rv', x='phases', linestyle={'model': 'solid'}, color={'nm_model': 'red', 'model_orig': 'green'}, show=True) """ Explanation: To see the affect of these new parameter-values on the model without adopting their values, we can also pass the solution directly to run_compute with sample_from. End of explanation """ b.run_compute(compute='phoebe01', sample_from='nm_sol', model='nm_model_phoebe') _ = b.plot(model='nm_model*', kind='rv', x='phases', linestyle={'model': 'solid'}, color={'nm_model': 'red', 'nm_model_phoebe': 'blue'}, show=True) """ Explanation: Just by looking, we can see that this isn't quite perfect yet and could use some additional optimization, but is definitely a step in the right direction! 
If you're using an alternate backend for the sake of efficiency (as we are here), it is always a good idea to compare against a full robust forward model before adopting results (as various backends/options have different assumptions and the wrappers calling these other codes are not perfect 1:1 representations).
End of explanation
"""

b.adopt_solution('nm_sol')

print(b.filter(qualifier=['q', 'vgamma', 't0_supconj'], context=['component', 'system']))

"""
Explanation: Here, for example, we see that our 'fastcompute' is ignoring the Rossiter-McLaughlin effect. In practice, since we have data in this region, this would be a cause for concern. For this example, our fake data was created using the same 'fastcompute' options... so we won't worry about it.
Adopting Solution
To "permanently" adopt these proposed changes, we call b.adopt_solution (and can optionally pass remove_solution=True to clean up the solution parameters if they will no longer be needed).
End of explanation
"""
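One possible continuation, not part of this tutorial: with the RV-driven values adopted, re-enable the light curve and re-run the optimizer on the temperature and radius twigs used earlier in this notebook to refine the remaining parameters. Treat this purely as a workflow sketch.

b.enable_dataset('lc01', compute='fastcompute')
b.set_value('fit_parameters', ['teff@primary', 'requiv@primary'])
b.run_solver('nm_solver', solution='nm_sol2')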
wcmckee/signinlca
.ipynb_checkpoints/pggNumAdd-checkpoint.ipynb
mit
import random  # needed for random.randint below; missing from the original checkpoint

# assumed reconstruction (not in the original checkpoint): the code below compares
# ranumlis against othgues but never defines it. Following the markdown description
# ("create two random lists of numbers 0/9, 10/19, ... to 100"), build it the same
# way as othgues: one random draw per decade.
ranumlis = [random.randint(dec * 10, dec * 10 + 9) for dec in range(10)]

#for ronum in ranumlis:
#    print ronum

randict = dict()

othgues = []

othlow = 0
othhigh = 9

for ranez in range(10):
    randxz = random.randint(othlow, othhigh)
    othgues.append(randxz)
    othlow = (othlow + 10)
    othhigh = (othhigh + 10)

#print othgues

tenlis = ['zero', 'ten', 'twenty', 'thirty', 'fourty', 'fifty',
          'sixty', 'seventy', 'eighty', 'ninety']

#for telis in tenlis:
#    for diez in dieci:
#        print telis

#randict
"""
Explanation: Testing of playing pyguessgame. Generates random numbers and plays a game.
Create two random lists of numbers 0/9,10/19,20/29 etc to 100. Compare the two lists. If win mark, if lose mark.
Debian
End of explanation
"""
for ronum in ranumlis:
    #print ronum
    if ronum in othgues:
        print (str(ronum) + ' You Win!')
    else:
        print (str(ronum) + ' You Lose!')

#dieci = dict()

#for ranz in range(10):
    #print str(ranz) + str(1)#
#    dieci.update({str(ranz) + str(1): str(ranz)})
#    for numz in range(10):
        #print str(ranz) + str(numz)
#        print numz

#print zetoo

#for diez in dieci:
#    print diez

#for sinum in ranumlis:
#    print str(sinum) + (str('\n'))
    #if str(sinum) in othhigh:
    #    print 'Win'

#import os

#os.system('sudo adduser joemanz --disabled-login --quiet -D')

#uslis = os.listdir('/home/wcmckee/signinlca/usernames/')

#print ('User List: ')
#for usl in uslis:
#    print usl
#    os.system('sudo adduser ' + usl + ' ' + '--disabled-login --quiet')
#    os.system('sudo mv /home/wcmckee/signinlca/usernames/' + usl + ' ' + '/home/' + usl + ' ')

#print dieci
"""
Explanation: Makes dict with keys pointing to the 10s numbers. The value needs the list of random numbers updated. Currently it just adds the number one.
How to add the random number list?
End of explanation
"""
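One way to answer the question above (a sketch): pair each tens-name with the random number drawn for that decade. It assumes the cells above have run, so tenlis and othgues each hold ten entries in matching decade order.

randict = dict(zip(tenlis, othgues))
print(randict)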
AndysDeepAbstractions/deep-learning
image-classification/dlnd_image_classification.ipynb
mit
""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm import problem_unittests as tests import tarfile cifar10_dataset_folder_path = 'cifar-10-batches-py' # Use Floyd's cifar-10 dataset if present floyd_cifar10_location = '/input/cifar-10/python.tar.gz' if isfile(floyd_cifar10_location): tar_gz_path = floyd_cifar10_location else: tar_gz_path = 'cifar-10-python.tar.gz' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(tar_gz_path): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar: urlretrieve( 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz', tar_gz_path, pbar.hook) if not isdir(cifar10_dataset_folder_path): with tarfile.open(tar_gz_path) as tar: tar.extractall() tar.close() tests.test_folder_path(cifar10_dataset_folder_path) """ Explanation: Image Classification In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images. Get the Data Run the following cell to download the CIFAR-10 dataset for python. End of explanation """ %matplotlib inline %config InlineBackend.figure_format = 'retina' import helper import numpy as np # Explore the dataset batch_id = 1 sample_id = 5 helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id) """ Explanation: Explore the Data The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following: * airplane * automobile * bird * cat * deer * dog * frog * horse * ship * truck Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch. Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions. End of explanation """ def normalize(x): """ Normalize a list of sample image data in the range of 0 to 1 : x: List of image data. The image shape is (32, 32, 3) : return: Numpy array of normalize data """ # TODO: Implement Function a = 0 b = 1 grayscale_min = 0 grayscale_max = 255 return a + ( ( (x - grayscale_min)*(b - a) )/( grayscale_max - grayscale_min ) ) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_normalize(normalize) """ Explanation: Implement Preprocess Functions Normalize In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x. 
End of explanation """ def one_hot_encode(x): """ One hot encode a list of sample labels. Return a one-hot encoded vector for each label. : x: List of sample Labels : return: Numpy array of one-hot encoded labels """ # TODO: Implement Function # TODO: Implement Function import numpy numpy.set_printoptions(threshold=numpy.nan) from sklearn.preprocessing import LabelBinarizer # Turn labels into numbers and apply One-Hot Encoding encoder = LabelBinarizer() encoder.fit([0,1,2,3,4,5,6,7,8,9]) #print("encoder.classes_ : {}".format(encoder.classes_)) #print("x[0:12] : {}".format(x[0:12])) x = encoder.transform(x) # Change to float32, so it can be multiplied against the features in TensorFlow, which are float32 x = x.astype(np.float32) #print("x[0:12] : \n{}".format(x[0:12])) #print("\ntype is : {} should be : {} ?".format(type(x).__module__,np.__name__)) return x """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_one_hot_encode(one_hot_encode) """ Explanation: One-hot encode Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function. Hint: Don't reinvent the wheel. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ # Preprocess Training, Validation, and Testing Data helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode) """ Explanation: Randomize Data As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset. Preprocess all the data and save it Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import pickle import problem_unittests as tests import helper # Load the Preprocessed Validation data valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb')) """ Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. End of explanation """ import tensorflow as tf def neural_net_image_input(image_shape): """ Return a Tensor for a batch of image input : image_shape: Shape of the images : return: Tensor for image input. """ # TODO: Implement Function features = tf.placeholder(tf.float32, shape=[None, image_shape[0], image_shape[1], image_shape[2]],name = 'x') return features def neural_net_label_input(n_classes): """ Return a Tensor for a batch of label input : n_classes: Number of classes : return: Tensor for label input. """ # TODO: Implement Function labels = tf.placeholder(tf.float32, shape=[None, n_classes],name = 'y') return labels def neural_net_keep_prob_input(): """ Return a Tensor for keep probability : return: Tensor for keep probability. 
""" # TODO: Implement Function keep_prob = tf.placeholder(tf.float32, name='keep_prob') return keep_prob """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tf.reset_default_graph() tests.test_nn_image_inputs(neural_net_image_input) tests.test_nn_label_inputs(neural_net_label_input) tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input) """ Explanation: Build the network For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project. Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup. However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. Let's begin! Input The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions * Implement neural_net_image_input * Return a TF Placeholder * Set the shape using image_shape with batch size set to None. * Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_label_input * Return a TF Placeholder * Set the shape using n_classes with batch size set to None. * Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_keep_prob_input * Return a TF Placeholder for dropout keep probability. * Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder. These names will be used at the end of the project to load your saved model. Note: None for shapes in TensorFlow allow for a dynamic size. 
End of explanation """ def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides): """ Apply convolution then max pooling to x_tensor :param x_tensor: TensorFlow Tensor :param conv_num_outputs: Number of outputs for the convolutional layer :param conv_ksize: kernal size 2-D Tuple for the convolutional layer :param conv_strides: Stride 2-D Tuple for convolution :param pool_ksize: kernal size 2-D Tuple for pool :param pool_strides: Stride 2-D Tuple for pool : return: A tensor that represents convolution and max pooling of x_tensor """ #conv_ksize is missing in description # Create the weight and bias biases = tf.Variable(tf.zeros(conv_num_outputs)) weights_depth = x_tensor.shape.as_list()[-1] weights_dim = [conv_ksize[0], conv_ksize[1], weights_depth, conv_num_outputs] #print((x_tensor.shape)) #print(weights_dim) weights = tf.Variable(tf.truncated_normal(weights_dim)) # Apply Convolution conv_strides = [1, conv_strides[0], conv_strides[1], 1] # (batch, height, width, depth) padding = 'SAME' conv_layer = tf.nn.conv2d(x_tensor, weights, conv_strides, padding) # Add bias conv_layer = tf.nn.bias_add(conv_layer, biases) # Apply activation function conv_layer = tf.nn.relu(conv_layer) filter_shape = [1, pool_ksize[0], pool_ksize[1], 1] pool_strides = [1, pool_strides[0], pool_strides[1], 1] padding = 'VALID' pool = tf.nn.max_pool(conv_layer, filter_shape, pool_strides, padding) return pool """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_con_pool(conv2d_maxpool) """ Explanation: Convolution and Max Pooling Layer Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling: * Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor. * Apply a convolution to x_tensor using weight and conv_strides. * We recommend you use same padding, but you're welcome to use any padding. * Add bias * Add a nonlinear activation to the convolution. * Apply Max Pooling using pool_ksize and pool_strides. * We recommend you use same padding, but you're welcome to use any padding. Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers. End of explanation """ def flatten(x_tensor): """ Flatten x_tensor to (Batch Size, Flattened Image Size) : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions. : return: A tensor of size (Batch Size, Flattened Image Size). """ # TODO: Implement Function dimensions = (x_tensor.get_shape().as_list()[1:4]) prod = 1 for dimension in dimensions: prod *= dimension x_tensor = tf.reshape(x_tensor, [-1,prod]) return x_tensor """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_flatten(flatten) """ Explanation: Flatten Layer Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. End of explanation """ def fully_conn(x_tensor, num_outputs): """ Apply a fully connected layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. 
: num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. """ # TODO: Implement Function tensor_out = tf.contrib.layers.fully_connected(x_tensor, num_outputs) return tensor_out """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_fully_conn(fully_conn) """ Explanation: Fully-Connected Layer Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. End of explanation """ def output(x_tensor, num_outputs): """ Apply a output layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. """ # TODO: Implement Function return fully_conn(x_tensor, num_outputs) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_output(output) """ Explanation: Output Layer Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. Note: Activation, softmax, or cross entropy should not be applied to this. End of explanation """ def conv_net(x, keep_prob): """ Create a convolutional neural network model : x: Placeholder tensor that holds image data. : keep_prob: Placeholder tensor that hold dropout keep probability. 
: return: Tensor that represents logits """ # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers # Play around with different number of outputs, kernel size and stride # Function Definition from Above: # x_tensor = x #:param x_tensor: TensorFlow Tensor conv_num_outputs = 4 #:param conv_num_outputs: Number of outputs for the convolutional layer conv_strides = (1,1) #:param conv_strides: Stride 2-D Tuple for convolution pool_ksize = (2,2) #:param pool_ksize: kernal size 2-D Tuple for pool pool_strides = (1,1) #:param pool_strides: Stride 2-D Tuple for pool conv_ksize = (3,3) x_tensor = conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides) conv_num_outputs = 8 #:param conv_num_outputs: Number of outputs for the convolutional layer x_tensor = conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides) conv_num_outputs = 16 #:param conv_num_outputs: Number of outputs for the convolutional layer x_tensor = conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides) # TODO: Apply a Flatten Layer # Function Definition from Above: # x_tensor = flatten(x_tensor) # TODO: Apply 1, 2, or 3 Fully Connected Layers # Play around with different number of outputs # Function Definition from Above: # num_outputs = 80 x_tensor = fully_conn(x_tensor, num_outputs) x_tensor = tf.nn.dropout(x_tensor, keep_prob) num_outputs = 40 x_tensor = fully_conn(x_tensor, num_outputs) x_tensor = tf.nn.dropout(x_tensor, keep_prob) num_outputs = 20 x_tensor = tf.nn.dropout(x_tensor, keep_prob) x_tensor = fully_conn(x_tensor, num_outputs) # TODO: Apply an Output Layer # Set this to the number of classes # Function Definition from Above: # num_outputs = 10 x_tensor = output(x_tensor, num_outputs) # TODO: return output return x_tensor """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ ############################## ## Build the Neural Network ## ############################## # Remove previous weights, bias, inputs, etc.. tf.reset_default_graph() # Inputs x = neural_net_image_input((32, 32, 3)) y = neural_net_label_input(10) keep_prob = neural_net_keep_prob_input() # Model logits = conv_net(x, keep_prob) # Name logits Tensor, so that is can be loaded from disk after training logits = tf.identity(logits, name='logits') # Loss and Optimizer cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y)) optimizer = tf.train.AdamOptimizer().minimize(cost) # Accuracy correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy') tests.test_conv_net(conv_net) """ Explanation: Create Convolutional Model Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model: Apply 1, 2, or 3 Convolution and Max Pool layers Apply a Flatten Layer Apply 1, 2, or 3 Fully Connected Layers Apply an Output Layer Return the output Apply TensorFlow's Dropout to one or more layers in the model using keep_prob. 
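As an optional side check (not required by the project), it helps to see how the spatial dimensions flow through the stack chosen above: a 'SAME' convolution with stride 1 keeps the 32x32 size, while each 'VALID' 2x2 max-pool with stride 1 shrinks it by one. The plain-Python sketch below just does that arithmetic with the layer sizes used in conv_net, so you know what size flatten() will produce.

```python
import math

def conv_same_out(size, stride):
    # 'SAME' padding: output spatial size = ceil(input / stride)
    return math.ceil(size / stride)

def pool_valid_out(size, ksize, stride):
    # 'VALID' padding: output spatial size = floor((input - ksize) / stride) + 1
    return (size - ksize) // stride + 1

size, depth = 32, 3
for conv_depth in (4, 8, 16):           # conv_num_outputs used above
    size = conv_same_out(size, 1)       # conv_strides = (1, 1), 'SAME'
    size = pool_valid_out(size, 2, 1)   # pool_ksize = (2, 2), pool_strides = (1, 1), 'VALID'
    depth = conv_depth
    print('after block:', size, 'x', size, 'x', depth)

print('flattened size:', size * size * depth)   # what flatten() hands to the dense layers
```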
End of explanation """ def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch): """ Optimize the session on a batch of images and labels : session: Current TensorFlow session : optimizer: TensorFlow optimizer function : keep_probability: keep probability : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data """ # TODO: Implement Function #x = neural_net_image_input((32, 32, 3)) #y = neural_net_label_input(10) #keep_prob = neural_net_keep_prob_input() #print(label_batch) config=tf.ConfigProto(#allow_soft_placement=True, log_device_placement=True, device_count = {'GPU': 8}) sess = tf.Session(config=config) feed_dict={keep_prob:keep_probability,x:feature_batch,y:label_batch} session.run(optimizer,feed_dict=feed_dict) #pass """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_train_nn(train_neural_network) """ Explanation: Train the Neural Network Single Optimization Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following: * x for image input * y for labels * keep_prob for keep probability for dropout This function will be called for each batch, so tf.global_variables_initializer() has already been called. Note: Nothing needs to be returned. This function is only optimizing the neural network. End of explanation """ def print_stats(session, feature_batch, label_batch, cost, accuracy): """ Print information about loss and validation accuracy : session: Current TensorFlow session : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data : cost: TensorFlow cost function : accuracy: TensorFlow accuracy function """ # TODO: Implement Function feed_dict={keep_prob:1.,x:feature_batch,y:label_batch} #cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y)) #optimizer = tf.train.AdamOptimizer().minimize(cost) # Calculate batch loss and accuracy loss = session.run(cost,feed_dict=feed_dict) # Should this be done on validation data? valid_acc = sess.run(accuracy,feed_dict=feed_dict) print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(loss,valid_acc)) #pass """ Explanation: Show Stats Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy. End of explanation """ # TODO: Tune Parameters epochs = 50 batch_size = 64 keep_probability = 0.95 """ Explanation: Hyperparameters Tune the following parameters: * Set epochs to the number of iterations until the network stops learning or start overfitting * Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory: * 64 * 128 * 256 * ... 
* Set keep_probability to the probability of keeping a node using dropout End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ print('Checking the Training on a Single Batch...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): batch_i = 1 for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size): train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels) print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='') print_stats(sess, batch_features, batch_labels, cost, accuracy) """ Explanation: Train on a Single CIFAR-10 Batch Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ save_model_path = './image_classification' print('Training...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): # Loop over all batches n_batches = 5 for batch_i in range(1, n_batches + 1): for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size): train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels) print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='') print_stats(sess, batch_features, batch_labels, cost, accuracy) # Save Model saver = tf.train.Saver() save_path = saver.save(sess, save_model_path) """ Explanation: Fully Train the Model Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches. 
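The full training loop relies on helper.load_preprocess_training_batch and, later, helper.batch_features_labels; the helper module itself is not shown in this notebook. Purely to illustrate the mini-batching idea those helpers implement, here is a small stand-alone sketch — the real helper may differ in details, so treat this as an assumption, not the project's actual code.

```python
import numpy as np

def batch_features_labels_sketch(features, labels, batch_size):
    # Yield the data one mini-batch at a time so it fits in memory.
    for start in range(0, len(features), batch_size):
        end = min(start + batch_size, len(features))
        yield features[start:end], labels[start:end]

# Tiny demo with fake data shaped like the preprocessed CIFAR-10 arrays.
fake_features = np.zeros((10, 32, 32, 3), dtype=np.float32)
fake_labels = np.zeros((10, 10), dtype=np.float32)
for f, l in batch_features_labels_sketch(fake_features, fake_labels, batch_size=4):
    print(f.shape, l.shape)   # two batches of 4, then a final batch of 2
```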
End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ %matplotlib inline %config InlineBackend.figure_format = 'retina' import tensorflow as tf import pickle import helper import random # Set batch size if not already set try: if batch_size: pass except NameError: batch_size = 64 save_model_path = './image_classification' n_samples = 4 top_n_predictions = 3 def test_model(): """ Test the saved model against the test dataset """ test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb')) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load model loader = tf.train.import_meta_graph(save_model_path + '.meta') loader.restore(sess, save_model_path) # Get Tensors from loaded model loaded_x = loaded_graph.get_tensor_by_name('x:0') loaded_y = loaded_graph.get_tensor_by_name('y:0') loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') loaded_logits = loaded_graph.get_tensor_by_name('logits:0') loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0') # Get accuracy in batches for memory limitations test_batch_acc_total = 0 test_batch_count = 0 for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size): test_batch_acc_total += sess.run( loaded_acc, feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0}) test_batch_count += 1 print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count)) # Print Random Samples random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples))) random_test_predictions = sess.run( tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions), feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0}) helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions) test_model() """ Explanation: Checkpoint The model has been saved to disk. Test Model Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. End of explanation """
highb/deep-learning
gan_mnist/Intro_to_GANs_Exercises.ipynb
mit
%matplotlib inline import pickle as pkl import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data') """ Explanation: Generative Adversarial Network In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits! GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out: Pix2Pix CycleGAN A whole list The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator, it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistiguishable from real data to the discriminator. The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to contruct it's fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator. The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates an real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow. End of explanation """ def model_inputs(real_dim, z_dim): inputs_real = tf.placeholder(dtype=tf.float32, shape=(None, real_dim), name="inputs_real") inputs_z = tf.placeholder(dtype=tf.float32, shape=(None, z_dim), name="inputs_z") return inputs_real, inputs_z """ Explanation: Model Inputs First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks. Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively. End of explanation """ def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01): ''' Build the generator network. Arguments --------- z : Input tensor for the generator out_dim : Shape of the generator output n_units : Number of units in hidden layer reuse : Reuse the variables with tf.variable_scope alpha : leak parameter for leaky ReLU Returns ------- out, logits: ''' with tf.variable_scope('generator', reuse=reuse): # finish this # Hidden layer h1 = tf.layers.dense(inputs=z, units=n_units, activation=None) # Leaky ReLU h1 = tf.maximum(alpha * h1, h1) # Logits and tanh output logits = tf.layers.dense(h1, out_dim) out = tf.tanh(logits) return out """ Explanation: Generator network Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. 
A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values. Variable Scope Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks. We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again. To use tf.variable_scope, you use a with statement: python with tf.variable_scope('scope_name', reuse=False): # code here Here's more from the TensorFlow documentation to get another look at using tf.variable_scope. Leaky ReLU TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one . For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x: $$ f(x) = max(\alpha * x, x) $$ Tanh Output The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1. Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope. End of explanation """ def discriminator(x, n_units=128, reuse=False, alpha=0.01): ''' Build the discriminator network. Arguments --------- x : Input tensor for the discriminator n_units: Number of units in hidden layer reuse : Reuse the variables with tf.variable_scope alpha : leak parameter for leaky ReLU Returns ------- out, logits: ''' with tf.variable_scope('discriminator', reuse=reuse): # finish this # Hidden layer h1 = tf.layers.dense(inputs=x, units=n_units, activation=None) # Leaky ReLU h1 = tf.maximum(alpha * h1, h1) logits = tf.layers.dense(h1, 1, activation=None) out = tf.sigmoid(logits) return out, logits """ Explanation: Discriminator The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer. Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope. 
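Because the same discriminator weights must score both the real batch and the generator's fake batch, the reuse flag is what keeps TensorFlow from allocating a second set of variables. The stand-alone sketch below (TensorFlow 1.x style, same as this notebook; the tiny layer sizes are arbitrary) shows that calling such a scoped network twice with reuse=True leaves only one set of trainable variables. It builds in its own graph so it does not touch the GAN graph.

```python
import tensorflow as tf

def tiny_discriminator(x, reuse=False):
    with tf.variable_scope('discriminator', reuse=reuse):
        h1 = tf.layers.dense(x, 4, name='h1')
        return tf.layers.dense(h1, 1, name='out')

demo_graph = tf.Graph()
with demo_graph.as_default():
    real_images = tf.placeholder(tf.float32, (None, 3))
    fake_images = tf.placeholder(tf.float32, (None, 3))

    out_real = tiny_discriminator(real_images)               # creates the variables
    out_fake = tiny_discriminator(fake_images, reuse=True)   # reuses the same ones

    d_vars = [v for v in tf.trainable_variables()
              if v.name.startswith('discriminator')]
    print(len(d_vars))               # 4 (two kernels + two biases), not 8
    print([v.name for v in d_vars])
```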
End of explanation """ # Size of input image to discriminator input_size = 784 # 28x28 MNIST images flattened # Size of latent vector to generator z_size = 100 # Sizes of hidden layers in generator and discriminator g_hidden_size = 128 d_hidden_size = 128 # Leak factor for leaky ReLU alpha = 0.01 # Label smoothing smooth = 0.1 """ Explanation: Hyperparameters End of explanation """ tf.reset_default_graph() # Create our input placeholders input_real, input_z = model_inputs(input_size, z_size) # Generator network here g_model = generator(input_z, input_size) # g_model is the generator output # Disriminator network here d_model_real, d_logits_real = discriminator(input_real) d_model_fake, d_logits_fake = discriminator(g_model, reuse=True) """ Explanation: Build network Now we're building the network from the functions defined above. First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z. Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes. Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True). Exercise: Build the network from the functions you defined earlier. End of explanation """ # Calculate losses real_labels = tf.ones_like(d_logits_real) * (1 - smooth) fake_labels = tf.zeros_like(d_logits_real) gen_labels = tf.ones_like(d_logits_fake) d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits( logits=d_logits_real, labels=real_labels)) d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits( logits=d_logits_fake, labels=fake_labels)) d_loss = d_loss_real + d_loss_fake g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits( logits=d_logits_fake, labels=gen_labels)) """ Explanation: Discriminator and Generator Losses Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like python tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)) For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth) The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that. 
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images. Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator. End of explanation """ # Optimizers learning_rate = 0.002 # Get the trainable_variables, split into G and D parts t_vars = tf.trainable_variables() g_vars = [var for var in t_vars if var.name.find("generator") != -1] d_vars = [var for var in t_vars if var.name.find("discriminator") != -1] d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars) g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars) """ Explanation: Optimizers We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph. For the generator optimizer, we only want to generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance). We can do something similar with the discriminator. All the variables in the discriminator start with discriminator. Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list. Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that update the network variables separately. 
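To see concretely that var_list really does freeze everything it leaves out, here is a small self-contained TensorFlow 1.x sketch with two toy variables (not the GAN itself, and kept in its own graph so it does not interfere with the one built above): one gradient step is taken on a loss that depends on both variables, but only the variable passed in var_list moves.

```python
import tensorflow as tf

var_list_demo = tf.Graph()
with var_list_demo.as_default():
    a = tf.Variable(1.0, name='toy_generator_weight')
    b = tf.Variable(1.0, name='toy_discriminator_weight')
    loss = tf.square(a) + tf.square(b)

    # Only `a` is handed to the optimizer, so only `a` can change.
    step = tf.train.GradientDescentOptimizer(0.1).minimize(loss, var_list=[a])
    init = tf.global_variables_initializer()

with tf.Session(graph=var_list_demo) as sess:
    sess.run(init)
    sess.run(step)
    print(sess.run([a, b]))   # [0.8, 1.0] -- b was left untouched
```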
End of explanation """ batch_size = 100 epochs = 100 samples = [] losses = [] saver = tf.train.Saver(var_list = g_vars) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) # Get images, reshape and rescale to pass to D batch_images = batch[0].reshape((batch_size, 784)) batch_images = batch_images*2 - 1 # Sample random noise for G batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size)) # Run optimizers _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z}) _ = sess.run(g_train_opt, feed_dict={input_z: batch_z}) # At the end of each epoch, get the losses and print them out train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images}) train_loss_g = g_loss.eval({input_z: batch_z}) print("Epoch {}/{}...".format(e+1, epochs), "Discriminator Loss: {:.4f}...".format(train_loss_d), "Generator Loss: {:.4f}".format(train_loss_g)) # Save losses to view after training losses.append((train_loss_d, train_loss_g)) # Sample from generator as we're training for viewing afterwards sample_z = np.random.uniform(-1, 1, size=(16, z_size)) gen_samples = sess.run( generator(input_z, input_size, reuse=True), feed_dict={input_z: sample_z}) samples.append(gen_samples) saver.save(sess, './checkpoints/generator.ckpt') # Save training generator samples with open('train_samples.pkl', 'wb') as f: pkl.dump(samples, f) """ Explanation: Training End of explanation """ %matplotlib inline import matplotlib.pyplot as plt fig, ax = plt.subplots() losses = np.array(losses) plt.plot(losses.T[0], label='Discriminator') plt.plot(losses.T[1], label='Generator') plt.title("Training Losses") plt.legend() """ Explanation: Training loss Here we'll check out the training losses for the generator and discriminator. End of explanation """ def view_samples(epoch, samples): fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True) for ax, img in zip(axes.flatten(), samples[epoch]): ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) im = ax.imshow(img.reshape((28,28)), cmap='Greys_r') return fig, axes # Load samples from generator taken while training with open('train_samples.pkl', 'rb') as f: samples = pkl.load(f) """ Explanation: Generator samples from training Here we can view samples of images from the generator. First we'll look at images taken while training. End of explanation """ _ = view_samples(-1, samples) """ Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make. End of explanation """ rows, cols = 10, 6 fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True) for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes): for img, ax in zip(sample[::int(len(sample)/cols)], ax_row): ax.imshow(img.reshape((28,28)), cmap='Greys_r') ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) """ Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion! 
End of explanation """ saver = tf.train.Saver(var_list=g_vars) with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) sample_z = np.random.uniform(-1, 1, size=(16, z_size)) gen_samples = sess.run( generator(input_z, input_size, reuse=True), feed_dict={input_z: sample_z}) view_samples(0, [gen_samples]) """ Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3. Sampling from the generator We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! End of explanation """
mne-tools/mne-tools.github.io
0.24/_downloads/548b4fc45f1ed79527138879cd79d3c8/muscle_detection.ipynb
bsd-3-clause
# Authors: Adonay Nunes <adonay.s.nunes@gmail.com> # Luke Bloy <luke.bloy@gmail.com> # License: BSD-3-Clause import os.path as op import matplotlib.pyplot as plt import numpy as np from mne.datasets.brainstorm import bst_auditory from mne.io import read_raw_ctf from mne.preprocessing import annotate_muscle_zscore # Load data data_path = bst_auditory.data_path() raw_fname = op.join(data_path, 'MEG', 'bst_auditory', 'S01_AEF_20131218_01.ds') raw = read_raw_ctf(raw_fname, preload=False) raw.crop(130, 160).load_data() # just use a fraction of data for speed here raw.resample(300, npad="auto") """ Explanation: Annotate muscle artifacts Muscle contractions produce high frequency activity that can mask brain signal of interest. Muscle artifacts can be produced when clenching the jaw, swallowing, or twitching a cranial muscle. Muscle artifacts are most noticeable in the range of 110-140 Hz. This example uses :func:~mne.preprocessing.annotate_muscle_zscore to annotate segments where muscle activity is likely present. This is done by band-pass filtering the data in the 110-140 Hz range. Then, the envelope is taken using the hilbert analytical signal to only consider the absolute amplitude and not the phase of the high frequency signal. The envelope is z-scored and summed across channels and divided by the square root of the number of channels. Because muscle artifacts last several hundred milliseconds, a low-pass filter is applied on the averaged z-scores at 4 Hz, to remove transient peaks. Segments above a set threshold are annotated as BAD_muscle. In addition, the min_length_good parameter determines the cutoff for whether short spans of "good data" in between muscle artifacts are included in the surrounding "BAD" annotation. End of explanation """ raw.notch_filter([50, 100]) # The threshold is data dependent, check the optimal threshold by plotting # ``scores_muscle``. threshold_muscle = 5 # z-score # Choose one channel type, if there are axial gradiometers and magnetometers, # select magnetometers as they are more sensitive to muscle activity. annot_muscle, scores_muscle = annotate_muscle_zscore( raw, ch_type="mag", threshold=threshold_muscle, min_length_good=0.2, filter_freq=[110, 140]) """ Explanation: Notch filter the data: <div class="alert alert-info"><h4>Note</h4><p>If line noise is present, you should perform notch-filtering *before* detecting muscle artifacts. See `tut-section-line-noise` for an example.</p></div> End of explanation """ fig, ax = plt.subplots() ax.plot(raw.times, scores_muscle) ax.axhline(y=threshold_muscle, color='r') ax.set(xlabel='time, (s)', ylabel='zscore', title='Muscle activity') """ Explanation: Plot muscle z-scores across recording End of explanation """ order = np.arange(144, 164) raw.set_annotations(annot_muscle) raw.plot(start=5, duration=20, order=order) """ Explanation: View the annotations End of explanation """
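The detection recipe described at the top (band-pass 110–140 Hz, Hilbert envelope, per-channel z-score, channel combination, 4 Hz smoothing, threshold) can be illustrated on synthetic data without MNE. The sketch below is only a rough NumPy/SciPy re-statement of that idea — it is not the actual annotate_muscle_zscore implementation, and the filter orders and threshold here are simplistic placeholders.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(0)
sfreq = 300.0                                   # matches the resampled rate above
n_channels, n_times = 10, int(30 * sfreq)
data = rng.standard_normal((n_channels, n_times))
data[:, 4000:4300] += 5 * rng.standard_normal((n_channels, 300))   # fake "muscle" burst

# 1. band-pass in the muscle band (110-140 Hz)
b, a = butter(4, [110 / (sfreq / 2), 140 / (sfreq / 2)], btype='bandpass')
filtered = filtfilt(b, a, data, axis=-1)

# 2. amplitude envelope via the analytic signal
envelope = np.abs(hilbert(filtered, axis=-1))

# 3. z-score per channel, then combine across channels
z = (envelope - envelope.mean(axis=-1, keepdims=True)) / envelope.std(axis=-1, keepdims=True)
combined = z.sum(axis=0) / np.sqrt(n_channels)

# 4. smooth with a 4 Hz low-pass and apply a threshold
b_lp, a_lp = butter(4, 4 / (sfreq / 2), btype='lowpass')
scores = filtfilt(b_lp, a_lp, combined)
print('samples above threshold:', np.sum(scores > 5))
```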
vinutah/UGAN
02_code/Warmup To GANs.ipynb
mit
import numpy as np a = np.zeros((2,2)) a a.shape np.reshape(a, (1,4)) b = np.ones((2,2)) b np.sum(b, axis=1) """ Explanation: Thinking Axes for ML-Packages Model Specification or Writing a configuration file Choice of Grammer JSON Caffe Google DistBelief CNTK Programmatic generation Writing Code Choice of a High Level Language Lua Torch Python Theano TensorFlow Rich Community Library Infrastructure TensorFlow vs Theano Theano Deep learning librray Has python wrapper Inspiration for TensorFlow Academic Project Been longer More stable ? TensorFlow Very similar system Better support for distributed systems Google's Project Started slow compared to Theano Pickedup heat after open-sourcing What is TensorFlow ? Its a deep learning library rite ? Yes, Its a open-source deep learning library Why was it not called DeepFlow ? Its more general Provides primitives for defining functions on Tensors Automatically computing derivatives of arbitrary functions on Tensors One can use TensorFlow to solve PDEs But whats with the Flow ? TensorFlow programs are structured into 2 phases A Graph Construction Phase A Graph Execution Phase Graph Construction Phase Assembles a graph, a computation graph Think of Tensors Flowing throught this graph, undergoing transformation at each node Graph Execution Phase Uses a session to execute operations that are specified in this graph. Wait.. Whats a Tensor ? Formally, Tensors are multilinear maps from vector spaces to the real numbers Add more formalism. Ideas A scalar is a tensor A vector is a tensor A matrix is a tensor A Tensor can be represented as A multidimensional array of number So need a N-d array library Why not just use Numpy ? Yes, It has Ndarray Support But cannot create tensor Functions Cannot automatically compute derivatives No GPU support So TensorFlow is a Feature rich N-d Array Library, nothing more Thinking in Numpy-Land End of explanation """ import tensorflow as tf tf.InteractiveSession() a = tf.zeros((2,2)) a b = tf.ones((2,2)) b tf.reduce_sum(b, reduction_indices=1).eval() a.get_shape() tf.reshape(a, (1,4)).eval() """ Explanation: Thinking in TensorFlow-Land End of explanation """ # TensorFlow computations define a computation graph # This means no numerical value until evaluated explicitly a = np.zeros((2,2)) a ta = tf.zeros((2,2)) print(a) print(ta) print(ta.eval()) """ Explanation: Numpy to TensorFlow Dictionary Explicit Evaluation End of explanation """ # A session object encapsulates the environment in which # Tensor objects are evaluated a = tf.constant(5.0) a b = tf.constant(6.0) b c = a * b with tf.Session() as session: print(session.run(c)) print(c.eval) """ Explanation: Session Object End of explanation """ w1 = tf.ones((2,2)) w1 w2 = tf.Variable(tf.zeros((2,2)), name='weights') w2 with tf.Session() as sess: # this is a tensor flow constant print(sess.run(w1)) # this is a tensor flow variable # please note the init call sess.run(tf.global_variables_initializer()) print(sess.run(w2)) """ Explanation: NOTE-1 tf.InteractiveSession() syntactic sugar for keeping a default session open in jupyter or ipython NOTE-2 session.run(c) is an example of a tensorFlow Fetch coming up soon Computation Graph Idea Repeated to stress.. 
Tensorflow program structured as A Graph construction phase A Graph execution phase This uses a session to execute operations in the graph All computations add nodes to global default graph Variables All tensors we have used previously have been constant tensors None of the tensors we have used untill now were variables Lets define our first ever tensor variable End of explanation """ W = tf.Variable(tf.zeros((2,2)), name="weights") # variable objects can be initialized from constants R = tf.Variable(tf.random_normal((2,2)), name="random_weights") # variable objects can be initialized from random values with tf.Session() as sess: sess.run(tf.global_variables_initializer()) print(sess.run(W)) print(sess.run(R)) """ Explanation: TensorFlow variables must be initialized before they have values! Please contrast with constant tensors. End of explanation """ state = tf.Variable(0, name="counter") new_value = tf.add(state, tf.constant(1)) # think of this as doing # new_value = state + 1 update = tf.assign(state, new_value) # think of this as state = new_value with tf.Session() as sess: sess.run(tf.global_variables_initializer()) print(sess.run(state)) for _ in range(3): sess.run(update) print(sess.run(state)) # # state = 0 # # print(state) # for _ in range(3) # state = state + 1 # print(state) # """ Explanation: Updating Variable State End of explanation """ input1 = tf.constant(3.0) input2 = tf.constant(2.0) input3 = tf.constant(5.0) intermed = tf.add(input2, input3) mul = tf.multiply(input1, intermed) with tf.Session() as sess: result = sess.run([mul, intermed]) print(result) # calling sess.run(var) on a tf.Session() object # retrives its value. # if you want to retrieve multiple variables # simultaneously can do # like sess.run([var1, var2]) """ Explanation: Fetching Variable State End of explanation """ a = np.zeros((3,3)) ta = tf.convert_to_tensor(a) with tf.Session() as sess: print(sess.run(ta)) """ Explanation: Input External Data into TensorFlow All Previous examples have manually defined tensors How to get external data sets into TensorFlow land ? Import from numpy works ... End of explanation """ # define placeholder objects for data entry input1 = tf.placeholder(tf.float32) input2 = tf.placeholder(tf.float32) output = tf.multiply(input1, input2) with tf.Session() as sess: print(sess.run([output], feed_dict={input1:[7.], input2:[2.] })) # fetch value of input from computation graph # feed data into the compuation graph """ Explanation: Placeholders Getting the data with tf.convert_to_tensor() is cool, but as you see it does not scale. Use Dummy nodes that provide entry points for data to the computational graph This takes us to tf.placeholder variables Now we need a mapping from tf.placeholder variables or their names to data like numpy arrays, lists, etc.. 
This takes us to feed_dict This is a python dictionary End of explanation """ # with tf.variable_scope("foo"): # with tf.variable_scope("bar"): # v = tf.get_variable("v", [1]) # assert v.name == "foo/bar/v:0" """ Explanation: Feed Dictionaries Variable Scope Complicated tensorFlow models can have 100's of variables tf.variable_scope() provides simple name-spacing to avoid clashes tf.get_variable() creates/accesses variables from withing a variable scope Variable scope is a simple type of namespacing that adds prefixes to variable names within scope End of explanation """ # with tf.variable_scope("foo"): # v = tf.get_variable("v", [1]) # tf.get_variable_scope().reuse_variables() # v1 = tf.get_variable("v", [1]) # assert v1 == v """ Explanation: Variable scope control variable reuse End of explanation """ # # with tf.variable_scope("foo"): # v = tf.get_variable("v", [1]) # assert v.name == "foo/v:0" # # # with tf.variable_scope("foo"): # v = tf.get_variable("v", [1]) # with tf.variable_scope("foo", reuse=True): # v1 = tf.get_variable("v", [1]) # assert v1 = v # """ Explanation: Get Variable Behaviour of get_variable() depends on reuse is set to false create and return new variable reuse is set to true search for existing variable with given name raise ValueError if none found End of explanation """ # this is our first tensorflow line of code # 1. its a tensorflow constant ! # welcome to tensorflow land x = tf.constant(35, name='x') """ Explanation: Simple TensorFlow Scripts TensorFlow Constants End of explanation """ y = x + 5 print(y) """ Explanation: Build the computation Graph End of explanation """ with tf.Session() as session: print(session.run(y)) %matplotlib inline import matplotlib.image as mpimg import matplotlib.pyplot as plt # which image filename = "ganesha.jpg" # load image raw_image_data = mpimg.imread(filename) # Lord Ganesha was a scribe for Mr. Veda Vyasa # who was narrating the Mahabharata. # # Later today we want to see if GAN's can learn # the joint distribution over b/w images of Ganesha's # # For now, here is how, our god looks like ... # # Notice that there are 13 discrete features. plt.imshow(raw_image_data) """ Explanation: Run the Computation Graph End of explanation """ # create a # 1, tenforflow constant (last time) # 2. tensorflow variable (now) x = tf.Variable(raw_image_data, name='x') # tf.initialize_all_variables() was deprecated recently model = tf.global_variables_initializer() with tf.Session() as session: # perform a basic operation transpose_op = tf.transpose(x, perm=[1, 0, 2]) session.run(model) result = session.run(transpose_op) # he may not like it, but here is the transpose plt.imshow(result) """ Explanation: TensorFlow Variables Stepping into TensorFlow-land from Numpy-land End of explanation """ x = tf.placeholder("float", 3) # size is optional, but helps y = x * 2 with tf.Session() as session: result = session.run(y, feed_dict={x: [1, 2, 3]}) print(result) """ Explanation: TensorFlow Placeholders End of explanation """ x = tf.placeholder("float", [None, 3]) # size can be multidimensional # None means , you dont know the size now # Like Data sets used in ML # You dont want to hardcode the number of samples y = x * 2 x_data = [[1, 2, 3], [4, 5, 6],] # # this is 2 by 3 # can be 3 by 3 # can be 4 by 3 ... 
with tf.Session() as session:
    result = session.run(y, feed_dict={x: x_data})
    print(result)
"""
Explanation: 2D Placeholder
End of explanation
"""
image = tf.placeholder("uint8", [None, None, 3])
# Flip the image along its first (height) axis; tf.reverse takes axis indices here
# (older TF releases used a boolean mask such as [True, False, False] instead).
reverse = tf.reverse(image, [0])

with tf.Session() as session:
    result = session.run(reverse, feed_dict={image: raw_image_data})

print(result.shape)
plt.imshow(result)
plt.show()
"""
Explanation: 3D Placeholder
End of explanation
"""
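As a final stand-alone check (an addition, not part of the original notebook), the same 3-D placeholder pattern can be exercised without the ganesha.jpg file by feeding a small synthetic uint8 array; the transpose op mirrors the earlier image example.

```python
import numpy as np
import tensorflow as tf

image = tf.placeholder("uint8", [None, None, 3])
swapped = tf.transpose(image, perm=[1, 0, 2])   # swap height and width, keep channels

fake_image = np.random.randint(0, 256, size=(4, 6, 3), dtype=np.uint8)
with tf.Session() as session:
    result = session.run(swapped, feed_dict={image: fake_image})

print(fake_image.shape, '->', result.shape)     # (4, 6, 3) -> (6, 4, 3)
```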
ampl/amplpy
notebooks/gspread.ipynb
bsd-3-clause
from google.colab import auth auth.authenticate_user() """ Explanation: AMPLPY: Using Google Sheets Documentation: http://amplpy.readthedocs.io GitHub Repository: https://github.com/ampl/amplpy PyPI Repository: https://pypi.python.org/pypi/amplpy Jupyter Notebooks: https://github.com/ampl/amplpy/tree/master/notebooks This notebook uses the snippet from https://colab.research.google.com/notebooks/snippets/sheets.ipynb in order to load data from the Goolge Sheet at https://docs.google.com/spreadsheets/d/1sTyJdgnMCrmuZDtUjs-cOpRLoKgByM8U-lHieNBNaRY/edit?usp=sharing Autheticate in order to use Google Sheets End of explanation """ !pip install -q amplpy ampltools gspread --upgrade """ Explanation: Setup End of explanation """ MODULES=['ampl', 'coin'] from ampltools import cloud_platform_name, ampl_notebook from amplpy import AMPL, register_magics if cloud_platform_name() is None: ampl = AMPL() # Use local installation of AMPL else: ampl = ampl_notebook(modules=MODULES) # Install AMPL and use it register_magics(ampl_object=ampl) # Evaluate %%ampl_eval cells with ampl.eval() """ Explanation: Google Colab & Kaggle interagration End of explanation """ %%ampl_eval option version; """ Explanation: Use %%ampl_eval to evaluate AMPL commands End of explanation """ %%ampl_eval set NUTR; set FOOD; param cost {FOOD} > 0; param f_min {FOOD} >= 0; param f_max {j in FOOD} >= f_min[j]; param n_min {NUTR} >= 0; param n_max {i in NUTR} >= n_min[i]; param amt {NUTR,FOOD} >= 0; var Buy {j in FOOD} >= f_min[j], <= f_max[j]; minimize Total_Cost: sum {j in FOOD} cost[j] * Buy[j]; subject to Diet {i in NUTR}: n_min[i] <= sum {j in FOOD} amt[i,j] * Buy[j] <= n_max[i]; """ Explanation: Define the model End of explanation """ import gspread from google.auth import default creds, _ = default() gclient = gspread.authorize(creds) def open_spreedsheet(name): if name.startswith('https://'): return gclient.open_by_url(name) return gclient.open(name) """ Explanation: Instatiate gspread client End of explanation """ # spreedsheet = open_spreedsheet('DietModelSheet') spreedsheet = open_spreedsheet('https://docs.google.com/spreadsheets/d/1sTyJdgnMCrmuZDtUjs-cOpRLoKgByM8U-lHieNBNaRY/edit?usp=sharing') def get_worksheet_values(name): return spreedsheet.worksheet(name).get_values(value_render_option='UNFORMATTED_VALUE') """ Explanation: Open speedsheet using name or URL End of explanation """ import pandas as pd def table_to_dataframe(rows): return pd.DataFrame(rows[1:], columns=rows[0]).set_index(rows[0][0]) def matrix_to_dataframe(rows, tr=False): col_labels = rows[0][1:] row_labels = [row[0] for row in rows[1:]] def label(pair): return pair if not tr else (pair[1], pair[0]) data = { label((rlabel, clabel)): rows[i+1][j+1] for i, rlabel in enumerate(row_labels) for j, clabel in enumerate(col_labels)} df = pd.Series(data).reset_index() df.columns = ['index1', 'index2', rows[0][0]] return df.set_index(['index1', 'index2']) """ Explanation: Define auxiliar functions to convert data from worksheets into dataframes End of explanation """ rows = get_worksheet_values('FOOD') df = table_to_dataframe(rows) ampl.set_data(df, set_name='FOOD') # send the data to AMPL df """ Explanation: Load data from the first worksheet End of explanation """ rows = get_worksheet_values('NUTR') df = table_to_dataframe(rows) ampl.set_data(df, set_name='NUTR') # Send the data to AMPL df """ Explanation: Load the data from the second worksheet End of explanation """ rows = get_worksheet_values('amt') df = matrix_to_dataframe(rows, tr=True) ampl.set_data(df) # 
Send the data to AMPL df """ Explanation: Load the data from the third worksheet End of explanation """ %%ampl_eval option solver cbc; solve; display Buy; """ Explanation: Use %%ampl_eval to solve the model with cbc End of explanation """ ampl.var['Buy'].get_values().to_pandas() """ Explanation: Retrieve the solution as a pandas dataframe End of explanation """
mne-tools/mne-tools.github.io
0.20/_downloads/aa221dc65413caee3ba4b18802f88d21/plot_topo_compare_conditions.ipynb
bsd-3-clause
# Authors: Denis Engemann <denis.engemann@gmail.com> # Alexandre Gramfort <alexandre.gramfort@inria.fr> # License: BSD (3-clause) import matplotlib.pyplot as plt import mne from mne.viz import plot_evoked_topo from mne.datasets import sample print(__doc__) data_path = sample.data_path() """ Explanation: Compare evoked responses for different conditions In this example, an Epochs object for visual and auditory responses is created. Both conditions are then accessed by their respective names to create a sensor layout plot of the related evoked responses. End of explanation """ raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' tmin = -0.2 tmax = 0.5 # Setup for reading the raw data raw = mne.io.read_raw_fif(raw_fname) events = mne.read_events(event_fname) # Set up amplitude-peak rejection values for MEG channels reject = dict(grad=4000e-13, mag=4e-12) # Create epochs including different events event_id = {'audio/left': 1, 'audio/right': 2, 'visual/left': 3, 'visual/right': 4} epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks='meg', baseline=(None, 0), reject=reject) # Generate list of evoked objects from conditions names evokeds = [epochs[name].average() for name in ('left', 'right')] """ Explanation: Set parameters End of explanation """ colors = 'blue', 'red' title = 'MNE sample data\nleft vs right (A/V combined)' plot_evoked_topo(evokeds, color=colors, title=title, background_color='w') plt.show() """ Explanation: Show topography for two different conditions End of explanation """
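The reason evokeds can be built for just ('left', 'right') even though four event types were epoched is MNE's '/'-separated tag matching: selecting 'left' picks up every condition whose name contains that tag. The toy snippet below only illustrates the grouping idea in plain Python; it is not MNE's actual selection code.

```python
event_id = {'audio/left': 1, 'audio/right': 2,
            'visual/left': 3, 'visual/right': 4}

def matching_conditions(tag, event_id):
    # A condition matches if the tag is one of its '/'-separated parts.
    return [name for name in event_id if tag in name.split('/')]

for tag in ('left', 'right'):
    print(tag, '->', matching_conditions(tag, event_id))
# left -> ['audio/left', 'visual/left']
# right -> ['audio/right', 'visual/right']
```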
JustinShenk/genre-melodies
create_dataset.ipynb
mit
import os import shutil import spotipy import pickle import pandas as pd import numpy as np %matplotlib notebook import matplotlib.pyplot as plt import seaborn as sns from sklearn.manifold import TSNE from sklearn.decomposition import PCA from collections import Counter if not os.path.exists('clean_midi'): # Download the 'Clean MIDI' dataset from http://colinraffel.com/projects/lmd/ from six.moves import urllib import StringIO import gzip import tarfile FILE_URL = 'http://hog.ee.columbia.edu/craffel/lmd/clean_midi.tar.gz' response = urllib.request.urlopen(FILE_URL) print("INFO: Downloaded {}".format(FILE_URL)) compressedFile = StringIO.StringIO() compressedFile.write(response.read()) compressedFile.seek(0) decompressedFile = gzip.GzipFile(fileobj=compressedFile, mode='rb') OUTFILE_PATH = 'clean_midi.tar' with open(OUTFILE_PATH, 'wb') as outfile: outfile.write(decompressedFile.read()) tar = tarfile.open(OUTFILE_PATH) tar.extractall() tar.close() print("INFO: Extracted data") else: print("INFO: Found `clean_midi` directory") """ Explanation: Generate genre-based melodies using Magenta Download the Lakh MIDI Dataset (http://hog.ee.columbia.edu/craffel/lmd/) End of explanation """ if not os.path.exists("genres.p"): # Use Spotify's API to genre lookup. Login first and get your OAuth token: # https://developer.spotify.com/web-api/search-item/ # NOTE: Replace `AUTH` value with your token. AUTH = "ENTER-MY-AUTH-KEY" # Get artists from folder names artists = [item for item in os.listdir('clean_midi') if not item.startswith('.')] sp = spotipy.Spotify(auth=AUTH) genres = {} for i,artist in enumerate(artists): try: results = sp.search(q=artist, type='artist',limit=1) items = results['artists']['items'] genre_list = items[0]['genres'] if len(items) else items['genres'] genres[artist] = genre_list if i < 5: print("INFO: Preview {}/5".format(i + 1), artist, genre_list[:5]) except Exception as e: print("INFO: ", artist, "not included: ", e) # Save to pickle file pickle.dump(genres,open("genres.p","wb")) else: # Load genres meta-data genres = pickle.load(open("genres.p","rb")) """ Explanation: Preprocessing Create author-genre dictionary for preprocessing and analysis End of explanation """ # Get the most common genres flattened_list = [item for sublist in list(genres.values()) for item in sublist] c = Counter(flattened_list) c.most_common()[:20] """ Explanation: Examine distribution of genres in dataset End of explanation """ # Convert labels to vectors categories = set(sorted(list(flattened_list))) df = pd.DataFrame(columns=categories) for author,genre_list in genres.items(): row = pd.Series(np.zeros(len(categories)),name=author) for ind,genre in enumerate(categories): if genre in genre_list: row[ind] = 1 d = pd.DataFrame(row).T d.columns = categories df = pd.concat([df,d]) df = df.reindex_axis(sorted(df.columns), axis=1) # Assign label for each author corresponding with meta-genre (eg, rock, classical) def getStyle(genre_substring): """Get data where features contain `genre_substring`.""" style_index = np.asarray([genre_substring in x for x in df.columns]) style_array = df.iloc[:,style_index].any(axis=1) return style_array # Create array of color/labels color_array = np.zeros(df.shape[0]) genre_labels = ['other','rock','metal','pop','mellow','country', 'rap','classical'] for i,g in enumerate(genre_labels): if g == 'other': pass else: color_array[np.where(getStyle(g))] = i """ Explanation: Load author-genre metadata into dataframe End of explanation """ # 2-dimensions model = TSNE(random_state=0) 
np.set_printoptions(suppress=True) X_tsne = model.fit_transform(df.values) X_pca = PCA().fit_transform(df.values) # 3-dimensions model3d = TSNE(n_components=3, random_state=0) np.set_printoptions(suppress=True) X_tsne3d = model3d.fit_transform(df.values) X_pca3d = PCA(n_components=3).fit_transform(df.values) %pylab inline CMAP_NAME = 'Set1' cmap = matplotlib.cm.get_cmap(CMAP_NAME,lut=max(color_array)+1) figure(figsize=(10, 5)) suptitle('Artist-genre visualization (2-D)') axes_tsne = [] axes_pca = [] # TSNE subplot(121) title('TSNE') for l in set(color_array): ax = plt.scatter(X_tsne[color_array==l][:, 0], X_tsne[color_array==l][:, 1],c=cmap(l/max(color_array)), s=5) axes_tsne.append(ax) legend(handles=axes_tsne, labels=genre_labels,frameon=True,markerscale=2) # PCA subplot(122) title('PCA') for l in set(color_array): ax = plt.scatter(X_pca[color_array==l][:, 0], X_pca[color_array==l][:, 1],c=cmap(l/max(color_array)), s=5) axes_pca.append(ax) legend(handles=axes_pca, labels=genre_labels,frameon=True,markerscale=2) plt.show() from mpl_toolkits.mplot3d import Axes3D fig = plt.figure(figsize=(10, 5)) suptitle('Artist-genre visualization (3-D)') ax = fig.add_subplot(121, projection='3d') title('TSNE') axes_tsne = [] axes_pca = [] scatter_proxies = [] for l in set(color_array): ax.scatter(X_tsne3d[color_array==l][:, 0], X_tsne3d[color_array==l][:, 1],X_tsne3d[color_array==l][:, 2],c=cmap(l/max(color_array)), s=5,zdir='x') axes_tsne.append(fig.gca()) proxy = matplotlib.lines.Line2D([0],[0], linestyle="none", c=cmap(l/max(color_array)), marker = 'o') scatter_proxies.append(proxy) legend(handles=scatter_proxies, labels=genre_labels,numpoints=1,frameon=True,markerscale=0.8) ax2 = fig.add_subplot(122, projection='3d') title('PCA') for l in set(color_array): ax2.scatter(X_pca3d[color_array==l][:, 0], X_pca3d[color_array==l][:, 1], X_pca3d[color_array==l][:, 2],c=cmap(l/max(color_array)),s=5) axes_pca.append(fig.gca()) legend(handles=scatter_proxies, labels=genre_labels,numpoints=1,frameon=True,markerscale=0.8) plt.show() """ Explanation: Visualize author classification by genre using TSNE and PCA End of explanation """ MIDI_DIR = os.path.join(os.getcwd(),'clean_midi') def get_artists(genre): """Get artists with label `genre`.""" artists = [artist for artist, gs in genres.items() if genre in gs] return artists # Get artist with genres 'soft rock' and 'disco' genre_data = {} metal = get_artists('metal') classical = get_artists('classical') genre_data['metal'] = metal genre_data['classical'] = classical # Copy artists to a genre-specific folder for genre, artists in genre_data.items(): try: for artist in artists: _genre = genre.replace(' ','_').replace('&','n') shutil.copytree(os.path.join(MIDI_DIR,artist),os.path.join(os.getcwd(),'subsets',_genre,artist)) except Exception as e: print(e) """ Explanation: Choose 2 genres with many artists and with unlikely overlap Metal and classical are two candidates. End of explanation """
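After the copy step it can be worth confirming how many artist folders and MIDI files actually landed in each genre subset, since artists missing from the Lakh folder are silently skipped by the try/except above. The small check below is an optional addition (it assumes the copy cell has been run and the subsets directory exists), not part of the original notebook.

```python
import os

SUBSET_DIR = os.path.join(os.getcwd(), 'subsets')
for genre in sorted(os.listdir(SUBSET_DIR)):
    genre_dir = os.path.join(SUBSET_DIR, genre)
    if not os.path.isdir(genre_dir):
        continue
    artists = os.listdir(genre_dir)
    n_midi = sum(len([f for f in files if f.lower().endswith('.mid')])
                 for _, _, files in os.walk(genre_dir))
    print('{}: {} artists, {} MIDI files'.format(genre, len(artists), n_midi))
```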
SJSlavin/phys202-2015-work
assignments/assignment05/InteractEx02.ipynb
mit
%matplotlib inline from matplotlib import pyplot as plt import numpy as np from IPython.html.widgets import interact, interactive, fixed from IPython.display import display """ Explanation: Interact Exercise 2 Imports End of explanation """ # YOUR CODE HERE def plot_sin1(a, b): x = np.linspace(0, 4*np.pi, 200) ax = plt.subplot(111) plt.plot(x, np.sin(a*x + b)) plt.xlim((0, 4*np.pi)) plt.ylim((-1.1, 1.1)) plt.xticks([0, np.pi, 2*np.pi, 3*np.pi, 4*np.pi], ["0", "$\pi$", "$2\pi$", "$3\pi$", "$4\pi$"]) plt.tick_params(axis = "x", direction = "out", length = 5) plt.tick_params(axis = "y", direction = "out", length = 5) plt.grid(True) ax.spines["right"].set_visible(False) ax.spines["top"].set_visible(False) plot_sin1(5, 3.4) """ Explanation: Plotting with parameters Write a plot_sin1(a, b) function that plots $sin(ax+b)$ over the interval $[0,4\pi]$. Customize your visualization to make it effective and beautiful. Customize the box, grid, spines and ticks to match the requirements of this data. Use enough points along the x-axis to get a smooth plot. For the x-axis tick locations use integer multiples of $\pi$. For the x-axis tick labels use multiples of pi using LaTeX: $3\pi$. End of explanation """ # YOUR CODE HERE interact(plot_sin1, a=(0.0, 5.0, 0.1), b=(-5.0, 5.0, 0.1)) assert True # leave this for grading the plot_sine1 exercise """ Explanation: Then use interact to create a user interface for exploring your function: a should be a floating point slider over the interval $[0.0,5.0]$ with steps of $0.1$. b should be a floating point slider over the interval $[-5.0,5.0]$ with steps of $0.1$. End of explanation """ # YOUR CODE HERE def plot_sine2(a, b, style="b"): #fig = plot_sin1(a, b) #plt.figure(fig) #plt.set_linestyle = style x = np.linspace(0, 4*np.pi, 200) ax = plt.subplot(111) plt.plot(x, np.sin(a*x + b), style) plt.xlim((0, 4*np.pi)) plt.ylim((-1.1, 1.1)) plt.xticks([0, np.pi, 2*np.pi, 3*np.pi, 4*np.pi], ["0", "$\pi$", "$2\pi$", "$3\pi$", "$4\pi$"]) plt.tick_params(axis = "x", direction = "out", length = 5) plt.tick_params(axis = "y", direction = "out", length = 5) plt.grid(True) ax.spines["right"].set_visible(False) ax.spines["top"].set_visible(False) plot_sine2(4.0, -1.0, 'r--') """ Explanation: In matplotlib, the line style and color can be set with a third argument to plot. Examples of this argument: dashed red: r-- blue circles: bo dotted black: k. Write a plot_sine2(a, b, style) function that has a third style argument that allows you to set the line style of the plot. The style should default to a blue line. End of explanation """ # YOUR CODE HERE interact(plot_sine2, a=(0.0, 5.0, 0.1), b=(-5.0, 5.0, 0.1), style={'dotted blue line':"b--", "black circles":"ko", "red triangles":"r^"}) assert True # leave this for grading the plot_sine2 exercise """ Explanation: Use interact to create a UI for plot_sine2. Use a slider for a and b as above. Use a drop down menu for selecting the line style between a dotted blue line line, black circles and red triangles. End of explanation """
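# Aside (not part of the graded exercises): the widgets module imported above also
# provides `interactive` and `fixed`. A minimal sketch that reuses plot_sine2 with the
# line style pinned to red dashes while keeping the two sliders:
w = interactive(plot_sine2, a=(0.0, 5.0, 0.1), b=(-5.0, 5.0, 0.1), style=fixed('r--'))
display(w)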
tensorflow/docs-l10n
site/ja/guide/migrate.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2018 The TensorFlow Authors. End of explanation """ import tensorflow as tf import tensorflow.compat.v1 as v1 import tensorflow_datasets as tfds g = v1.Graph() with g.as_default(): in_a = v1.placeholder(dtype=v1.float32, shape=(2)) in_b = v1.placeholder(dtype=v1.float32, shape=(2)) def forward(x): with v1.variable_scope("matmul", reuse=v1.AUTO_REUSE): W = v1.get_variable("W", initializer=v1.ones(shape=(2,2)), regularizer=lambda x:tf.reduce_mean(x**2)) b = v1.get_variable("b", initializer=v1.zeros(shape=(2))) return W * x + b out_a = forward(in_a) out_b = forward(in_b) reg_loss=v1.losses.get_regularization_loss(scope="matmul") with v1.Session(graph=g) as sess: sess.run(v1.global_variables_initializer()) outs = sess.run([out_a, out_b, reg_loss], feed_dict={in_a: [1, 0], in_b: [0, 1]}) print(outs[0]) print() print(outs[1]) print() print(outs[2]) """ Explanation: TensorFlow 1 のコードを TensorFlow 2 に移行する <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/guide/migrate"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org で表示</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/guide/migrate.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Google Colab で実行</a></td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/guide/migrate.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub でソースを表示</a></td> <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/guide/migrate.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a></td> </table> 本ドキュメントは、低レベル TensorFlow API のユーザーを対象としています。高レベル API(tf.keras)をご使用の場合は、コードを TensorFlow 2.x と完全互換にするためのアクションはほとんどまたはまったくありません。 オプティマイザのデフォルトの学習率を確認してください。 メトリクスが記録される「名前」が変更されている可能性があることに注意してください。 TensorFlow 2.x で 1.X のコードを未修正で実行することは、(contrib を除き)依然として可能です。 python import tensorflow.compat.v1 as tf tf.disable_v2_behavior() しかし、これでは TensorFlow 2.0 で追加された改善の多くを活用できません。このガイドでは、コードのアップグレード、さらなる単純化、パフォーマンス向上、そしてより容易なメンテナンスについて説明します。 自動変換スクリプト このドキュメントで説明される変更を実装する前に行うべき最初のステップは、アップグレードスクリプトを実行してみることです。 これはコードを TensorFlow 2.x にアップグレードする際の初期パスとしては十分ですが、v2 特有のコードに変換するわけではありません。コードは依然として tf.compat.v1 エンドポイントを使用して、プレースホルダー、セッション、コレクション、その他 1.x スタイルの機能へのアクセスが可能です。 トップレベルの動作の変更 tf.compat.v1.disable_v2_behavior() を使用することで TensorFlow 2.x でコードが機能する場合でも、対処すべきグローバルな動作の変更があります。主な変更点は次のとおりです。 Eager execution、v1.enable_eager_execution(): 暗黙的に tf.Graph を使用するコードは失敗します。このコードは必ず with tf.Graph().as_default() コンテキストでラップしてください。 リソース変数、v1.enable_resource_variables(): 一部のコードは、TensorFlow 参照変数によって有効化される非決定的な動作に依存する場合があります。 リソース変数は書き込み中にロックされるため、より直感的な一貫性を保証します。 これによりエッジケースでの動作が変わる場合があります。 これにより余分なコピーが作成されるため、メモリ使用量が増える可能性があります。 これを無効にするには、use_resource=False を tf.Variable コンストラクタに渡します。 テンソルの形状、v1.enable_v2_tensorshape(): TensorFlow 2.x 
は、テンソルの形状の動作を簡略化されており、t.shape[0].value の代わりに t.shape[0] とすることができます。簡単な変更なので、すぐに修正しておくことをお勧めします。例については TensorShape をご覧ください。 制御フロー、v1.enable_control_flow_v2(): TensorFlow 2.x 制御フローの実装が簡略化されたため、さまざまなグラフ表現を生成します。問題が生じた場合には、バグを報告してください。 TensorFlow 2.x のコードを作成する このガイドでは、TensorFlow 1.x のコードを TensorFlow 2.x に変換するいくつかの例を確認します。これらの変更によって、コードがパフォーマンスの最適化および簡略化された API 呼び出しを活用できるようになります。 それぞれのケースのパターンは次のとおりです。 1. v1.Session.run 呼び出しを置き換える すべての v1.Session.run 呼び出しは、Python 関数で置き換える必要があります。 feed_dictおよびv1.placeholderは関数の引数になります。 fetch は関数の戻り値になります。 Eager execution では、pdb などの標準的な Python ツールを使用して、変換中に簡単にデバッグできます。 次に、tf.function デコレータを追加して、グラフで効率的に実行できるようにします。 この機能についての詳細は、AutoGraph ガイドをご覧ください。 注意点: v1.Session.run とは異なり、tf.function は固定のリターンシグネチャを持ち、常にすべての出力を返します。これによってパフォーマンスの問題が生じる場合は、2 つの個別の関数を作成します。 tf.control_dependencies または同様の演算は必要ありません。tf.function は、記述された順序で実行されたかのように動作します。たとえば、tf.Variable 割り当てと tf.assert は自動的に実行されます。 「モデルを変換する」セクションには、この変換プロセスの実際の例が含まれています。 2. Python オブジェクトを変数と損失の追跡に使用する TensorFlow 2.x では、いかなる名前ベースの変数追跡もまったく推奨されていません。 変数の追跡には Python オブジェクトを使用します。 v1.get_variable の代わりに tf.Variable を使用してください。 すべてのv1.variable_scopeは Python オブジェクトに変換が可能です。通常は次のうちの 1 つになります。 tf.keras.layers.Layer tf.keras.Model tf.Module tf.Graph.get_collection(tf.GraphKeys.VARIABLES) などの変数のリストを集める必要がある場合には、Layer および Model オブジェクトの .variables と .trainable_variables 属性を使用します。 これら Layer クラスと Model クラスは、グローバルコレクションの必要性を除去した別のプロパティを幾つか実装します。.losses プロパティは、tf.GraphKeys.LOSSES コレクション使用の置き換えとなります。 詳細は Keras ガイドをご覧ください。 警告 : 多くの tf.compat.v1 シンボルはグローバルコレクションを暗黙的に使用しています。 3. トレーニングループをアップグレードする ご利用のユースケースで動作する最高レベルの API を使用してください。独自のトレーニングループを構築するよりも tf.keras.Model.fit の選択を推奨します。 これらの高レベル関数は、独自のトレーニングループを書く場合に見落とされやすい多くの低レベル詳細を管理します。例えば、それらは自動的に正則化損失を集めて、モデルを呼び出す時にtraining=True引数を設定します。 4. データ入力パイプラインをアップグレードする データ入力には tf.data データセットを使用してください。それらのオブジェクトは効率的で、表現力があり、TensorFlow とうまく統合します。 次のように、tf.keras.Model.fit メソッドに直接渡すことができます。 python model.fit(dataset, epochs=5) また、標準的な Python で直接にイテレートすることもできます。 python for example_batch, label_batch in dataset: break 5. 
compat.v1シンボルを移行する tf.compat.v1モジュールには、元のセマンティクスを持つ完全な TensorFlow 1.x API が含まれています。 TensorFlow 2 アップグレードスクリプトは、変換が安全な場合、つまり v2 バージョンの動作が完全に同等であると判断できる場合は、シンボルを 2.0 と同等のものに変換します。(たとえば、これらは同じ関数なので、v1.arg_max の名前を tf.argmax に変更します。) コードの一部を使用してアップグレードスクリプトを実行した後に、compat.v1 が頻出する可能性があります。 コードを調べ、それらを手動で同等の v2 のコードに変換する価値はあります。(該当するものがある場合には、ログに表示されているはずです。) モデルを変換する 低レベル変数 & 演算子実行 低レベル API の使用例を以下に示します。 変数スコープを使用して再利用を制御する。 v1.get_variableで変数を作成する。 コレクションに明示的にアクセスする。 次のようなメソッドでコレクションに暗黙的にアクセスする。 v1.global_variables v1.losses.get_regularization_loss v1.placeholder を使用してグラフ入力のセットアップをする。 Session.runでグラフを実行する。 変数を手動で初期化する。 変換前 TensorFlow 1.x を使用したコードでは、これらのパターンは以下のように表示されます。 End of explanation """ W = tf.Variable(tf.ones(shape=(2,2)), name="W") b = tf.Variable(tf.zeros(shape=(2)), name="b") @tf.function def forward(x): return W * x + b out_a = forward([1,0]) print(out_a) out_b = forward([0,1]) regularizer = tf.keras.regularizers.l2(0.04) reg_loss=regularizer(W) """ Explanation: 変換後 変換されたコードでは : 変数はローカル Python オブジェクトです。 forward関数は依然として計算を定義します。 Session.run呼び出しはforwardへの呼び出しに置き換えられます。 パフォーマンス向上のためにオプションでtf.functionデコレータを追加可能です。 どのグローバルコレクションも参照せず、正則化は手動で計算されます。 セッションやプレースホルダーはありません。 End of explanation """ def model(x, training, scope='model'): with v1.variable_scope(scope, reuse=v1.AUTO_REUSE): x = v1.layers.conv2d(x, 32, 3, activation=v1.nn.relu, kernel_regularizer=lambda x:0.004*tf.reduce_mean(x**2)) x = v1.layers.max_pooling2d(x, (2, 2), 1) x = v1.layers.flatten(x) x = v1.layers.dropout(x, 0.1, training=training) x = v1.layers.dense(x, 64, activation=v1.nn.relu) x = v1.layers.batch_normalization(x, training=training) x = v1.layers.dense(x, 10) return x train_data = tf.ones(shape=(1, 28, 28, 1)) test_data = tf.ones(shape=(1, 28, 28, 1)) train_out = model(train_data, training=True) test_out = model(test_data, training=False) print(train_out) print() print(test_out) """ Explanation: tf.layersベースのモデル v1.layersモジュールは、変数を定義および再利用するv1.variable_scopeに依存するレイヤー関数を含めるために使用されます。 変換前 End of explanation """ model = tf.keras.Sequential([ tf.keras.layers.Conv2D(32, 3, activation='relu', kernel_regularizer=tf.keras.regularizers.l2(0.04), input_shape=(28, 28, 1)), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Flatten(), tf.keras.layers.Dropout(0.1), tf.keras.layers.Dense(64, activation='relu'), tf.keras.layers.BatchNormalization(), tf.keras.layers.Dense(10) ]) train_data = tf.ones(shape=(1, 28, 28, 1)) test_data = tf.ones(shape=(1, 28, 28, 1)) train_out = model(train_data, training=True) print(train_out) test_out = model(test_data, training=False) print(test_out) # Here are all the trainable variables. len(model.trainable_variables) # Here is the regularization loss. 
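# (these are collected automatically from the kernel_regularizer set on the Conv2D layer above)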
model.losses """ Explanation: 変換後 レイヤーの単純なスタックが tf.keras.Sequentialにぴったり収まります。(より複雑なモデルについてはカスタムレイヤーとモデルおよび Functional API をご覧ください。) モデルが変数と正則化損失を追跡します。 v1.layersからtf.keras.layersへの直接的なマッピングがあるため、変換は一対一対応でした。 ほとんどの引数はそのままです。しかし、以下の点は異なります。 training引数は、それが実行される時点でモデルによって各レイヤーに渡されます。 元のmodel関数への最初の引数(入力 x)はなくなりました。これはオブジェクトレイヤーがモデルの呼び出しからモデルの構築を分離するためです。 また以下にも注意してください。 tf.contribからの初期化子の正則化子を使用している場合は、他よりも多くの引数変更があります。 コードはコレクションに書き込みを行わないため、v1.losses.get_regularization_lossなどの関数はそれらの値を返さなくなり、トレーニングループが壊れる可能性があります。 End of explanation """ def model(x, training, scope='model'): with v1.variable_scope(scope, reuse=v1.AUTO_REUSE): W = v1.get_variable( "W", dtype=v1.float32, initializer=v1.ones(shape=x.shape), regularizer=lambda x:0.004*tf.reduce_mean(x**2), trainable=True) if training: x = x + W else: x = x + W * 0.5 x = v1.layers.conv2d(x, 32, 3, activation=tf.nn.relu) x = v1.layers.max_pooling2d(x, (2, 2), 1) x = v1.layers.flatten(x) return x train_out = model(train_data, training=True) test_out = model(test_data, training=False) """ Explanation: 変数とv1.layersの混在 既存のコードは低レベルの TensorFlow 1.x 変数と演算子に高レベルのv1.layersが混ざっていることがよくあります。 変換前 End of explanation """ # Create a custom layer for part of the model class CustomLayer(tf.keras.layers.Layer): def __init__(self, *args, **kwargs): super(CustomLayer, self).__init__(*args, **kwargs) def build(self, input_shape): self.w = self.add_weight( shape=input_shape[1:], dtype=tf.float32, initializer=tf.keras.initializers.ones(), regularizer=tf.keras.regularizers.l2(0.02), trainable=True) # Call method will sometimes get used in graph mode, # training will get turned into a tensor @tf.function def call(self, inputs, training=None): if training: return inputs + self.w else: return inputs + self.w * 0.5 custom_layer = CustomLayer() print(custom_layer([1]).numpy()) print(custom_layer([1], training=True).numpy()) train_data = tf.ones(shape=(1, 28, 28, 1)) test_data = tf.ones(shape=(1, 28, 28, 1)) # Build the model including the custom layer model = tf.keras.Sequential([ CustomLayer(input_shape=(28, 28, 1)), tf.keras.layers.Conv2D(32, 3, activation='relu'), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Flatten(), ]) train_out = model(train_data, training=True) test_out = model(test_data, training=False) """ Explanation: 変換後 このコードを変換するには、前の例で示したレイヤーからレイヤーへのマッピングのパターンに従います。 一般的なパターンは次の通りです。 __init__でレイヤーパラメータを収集する。 buildで変数を構築する。 callで計算を実行し、結果を返す。 v1.variable_scopeは事実上それ自身のレイヤーです。従ってtf.keras.layers.Layerとして書き直します。詳細はガイドをご覧ください。 End of explanation """ datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True) mnist_train, mnist_test = datasets['train'], datasets['test'] """ Explanation: 注意点: サブクラス化された Keras モデルとレイヤーは v1 グラフ(自動制御依存性なし)と eager モードの両方で実行される必要があります。 call()をtf.function()にラップして、AutoGraph と自動制御依存性を得るようにします。 training引数を受け取ってcallすることを忘れないようにしてください。 それはtf.Tensorである場合があります。 それは Python ブール型である場合があります。 self.add_weight()を使用して、コンストラクタまたはModel.buildでモデル変数を作成します。 Model.buildでは、入力形状にアクセスできるため、適合する形状で重みを作成できます。 tf.keras.layers.Layer.add_weightを使用すると、Keras が変数と正則化損失を追跡できるようになります。 オブジェクトにtf.Tensorsを保持してはいけません。 それらはtf.functionまたは eager コンテキスト内のいずれかで作成される可能性がありますが、それらのテンソルは異なる振る舞いをします。 状態にはtf.Variableを使用してください。これは常に両方のコンテキストから使用可能です。 tf.Tensorsは中間値専用です。 Slim &amp; contrib.layers に関する注意 古い TensorFlow 1.x コードの大部分は Slim ライブラリを使用しており、これはtf.contrib.layersとして TensorFlow 1.x でパッケージ化されていました。 contribモジュールに関しては、TensorFlow 2.x ではtf.compat.v1内でも、あっても利用できなくなりました。Slim を使用したコードの TensorFlow 2.x への変換は、v1.layersを使用したレポジトリの変換よりも複雑です。現実的には、まず最初に Slim コードをv1.layersに変換してから 
Keras に変換するほうが賢明かもしれません。 arg_scopesを除去します。すべての引数は明示的である必要があります。 それらを使用する場合、 normalizer_fnとactivation_fnをそれら自身のレイヤーに分割します。 分離可能な畳み込みレイヤーは 1 つまたはそれ以上の異なる Keras レイヤー(深さ的な、ポイント的な、分離可能な Keras レイヤー)にマップします。 Slim とv1.layersには異なる引数名とデフォルト値があります。 一部の引数には異なるスケールがあります。 Slim 事前トレーニング済みモデルを使用する場合は、tf.keras.applicationsから Keras 事前トレーニング済みモデル、または元の Slim コードからエクスポートされた TensorFlow ハブの TensorFlow 2 SavedModel をお試しください。 一部のtf.contribレイヤーはコアの TensorFlow に移動されていない可能性がありますが、代わりに TensorFlow アドオンパッケージに移動されています。 トレーニング tf.kerasモデルにデータを供給する方法は沢山あります。それらは Python ジェネレータと Numpy 配列を入力として受け取ります。 モデルへのデータ供給方法として推奨するのは、データ操作用の高パフォーマンスクラスのコレクションを含むtf.dataパッケージの使用です。 依然としてtf.queueを使用している場合、これらは入力パイプラインとしてではなく、データ構造としてのみサポートされます。 データセットを使用する TensorFlow Dataset パッケージ(tfds)には、事前定義されたデータセットをtf.data.Datasetオブジェクトとして読み込むためのユーティリティが含まれています。 この例として、tfdsを使用して MNISTdataset を読み込んでみましょう。 End of explanation """ BUFFER_SIZE = 10 # Use a much larger value for real code. BATCH_SIZE = 64 NUM_EPOCHS = 5 def scale(image, label): image = tf.cast(image, tf.float32) image /= 255 return image, label """ Explanation: 次に、トレーニング用のデータを準備します。 各画像をリスケールする。 例の順序をシャッフルする。 画像とラベルのバッチを集める。 End of explanation """ train_data = mnist_train.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE) test_data = mnist_test.map(scale).batch(BATCH_SIZE) STEPS_PER_EPOCH = 5 train_data = train_data.take(STEPS_PER_EPOCH) test_data = test_data.take(STEPS_PER_EPOCH) image_batch, label_batch = next(iter(train_data)) """ Explanation: 例を短く保つために、データセットをトリミングして 5 バッチのみを返すようにします。 End of explanation """ model = tf.keras.Sequential([ tf.keras.layers.Conv2D(32, 3, activation='relu', kernel_regularizer=tf.keras.regularizers.l2(0.02), input_shape=(28, 28, 1)), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Flatten(), tf.keras.layers.Dropout(0.1), tf.keras.layers.Dense(64, activation='relu'), tf.keras.layers.BatchNormalization(), tf.keras.layers.Dense(10) ]) # Model is the full model w/o custom layers model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) model.fit(train_data, epochs=NUM_EPOCHS) loss, acc = model.evaluate(test_data) print("Loss {}, Accuracy {}".format(loss, acc)) """ Explanation: Keras トレーニングループを使用する トレーニングプロセスの低レベル制御が不要な場合は、Keras 組み込みのfit、evaluate、predictメソッドの使用が推奨されます。これらのメソッドは(シーケンシャル、関数型、またはサブクラス化)実装を問わず、モデルをトレーニングするための統一インターフェースを提供します。 これらのメソッドには次のような優位点があります。 Numpy 配列、Python ジェネレータ、tf.data.Datasetsを受け取ります。 正則化と活性化損失を自動的に適用します。 マルチデバイストレーニングのためにtf.distributeをサポートします。 任意の callable は損失とメトリクスとしてサポートします。 tf.keras.callbacks.TensorBoardのようなコールバックとカスタムコールバックをサポートします。 自動的に TensorFlow グラフを使用し、高性能です。 ここにDatasetを使用したモデルのトレーニング例を示します。(この機能ついての詳細はチュートリアルをご覧ください。) End of explanation """ # Model is the full model w/o custom layers model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) for epoch in range(NUM_EPOCHS): #Reset the metric accumulators model.reset_metrics() for image_batch, label_batch in train_data: result = model.train_on_batch(image_batch, label_batch) metrics_names = model.metrics_names print("train: ", "{}: {:.3f}".format(metrics_names[0], result[0]), "{}: {:.3f}".format(metrics_names[1], result[1])) for image_batch, label_batch in test_data: result = model.test_on_batch(image_batch, label_batch, # return accumulated metrics reset_metrics=False) metrics_names = model.metrics_names print("\neval: ", "{}: {:.3f}".format(metrics_names[0], result[0]), "{}: {:.3f}".format(metrics_names[1], result[1])) """ Explanation: ループを自分で書く Keras 
モデルのトレーニングステップは動作していても、そのステップの外でより制御が必要な場合は、データ イテレーション ループでtf.keras.Model.train_on_batchメソッドの使用を検討してみてください。 tf.keras.callbacks.Callbackとして、多くのものが実装可能であることに留意してください。 このメソッドには前のセクションで言及したメソッドの優位点の多くがありますが、外側のループのユーザー制御も与えます。 tf.keras.Model.test_on_batchまたはtf.keras.Model.evaluateを使用して、トレーニング中のパフォーマンスをチェックすることも可能です。 注意: train_on_batchとtest_on_batchは、デフォルトで単一バッチの損失とメトリクスを返します。reset_metrics=Falseを渡すと累積メトリックを返しますが、必ずメトリックアキュムレータを適切にリセットすることを忘れないようにしてくだい。また、AUCのような一部のメトリクスは正しく計算するためにreset_metrics=Falseが必要なことも覚えておいてください。 上のモデルのトレーニングを続けます。 End of explanation """ model = tf.keras.Sequential([ tf.keras.layers.Conv2D(32, 3, activation='relu', kernel_regularizer=tf.keras.regularizers.l2(0.02), input_shape=(28, 28, 1)), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Flatten(), tf.keras.layers.Dropout(0.1), tf.keras.layers.Dense(64, activation='relu'), tf.keras.layers.BatchNormalization(), tf.keras.layers.Dense(10) ]) optimizer = tf.keras.optimizers.Adam(0.001) loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) @tf.function def train_step(inputs, labels): with tf.GradientTape() as tape: predictions = model(inputs, training=True) regularization_loss=tf.math.add_n(model.losses) pred_loss=loss_fn(labels, predictions) total_loss=pred_loss + regularization_loss gradients = tape.gradient(total_loss, model.trainable_variables) optimizer.apply_gradients(zip(gradients, model.trainable_variables)) for epoch in range(NUM_EPOCHS): for inputs, labels in train_data: train_step(inputs, labels) print("Finished epoch", epoch) """ Explanation: <a name="custom_loop"></a> トレーニングステップをカスタマイズする より多くの柔軟性と制御を必要とする場合、独自のトレーニングループを実装することでそれが可能になります。以下の 3 つのステップを踏みます。 Python ジェネレータかtf.data.Datasetをイテレートして例のバッチを作成します。 tf.GradientTapeを使用して勾配を集めます。 tf.keras.optimizersの 1 つを使用して、モデルの変数に重み更新を適用します。 留意点: サブクラス化されたレイヤーとモデルのcallメソッドには、常にtraining引数を含めます。 training引数を確実に正しくセットしてモデルを呼び出します。 使用方法によっては、モデルがデータのバッチ上で実行されるまでモデル変数は存在しないかもしれません。 モデルの正則化損失などを手動で処理する必要があります。 v1 と比べて簡略化されている点に注意してください : 変数初期化子を実行する必要はありません。作成時に変数は初期化されます。 たとえtf.function演算が eager モードで振る舞う場合でも、手動の制御依存性を追加する必要はありません。 End of explanation """ cce = tf.keras.losses.CategoricalCrossentropy(from_logits=True) cce([[1, 0]], [[-1.0,3.0]]).numpy() """ Explanation: 新しいスタイルのメトリクスと損失 TensorFlow 2.x では、メトリクスと損失はオブジェクトです。Eager で実行的にtf.function内で動作します。 損失オブジェクトは呼び出し可能で、(y_true, y_pred) を引数として期待します。 End of explanation """ # Create the metrics loss_metric = tf.keras.metrics.Mean(name='train_loss') accuracy_metric = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy') @tf.function def train_step(inputs, labels): with tf.GradientTape() as tape: predictions = model(inputs, training=True) regularization_loss=tf.math.add_n(model.losses) pred_loss=loss_fn(labels, predictions) total_loss=pred_loss + regularization_loss gradients = tape.gradient(total_loss, model.trainable_variables) optimizer.apply_gradients(zip(gradients, model.trainable_variables)) # Update the metrics loss_metric.update_state(total_loss) accuracy_metric.update_state(labels, predictions) for epoch in range(NUM_EPOCHS): # Reset the metrics loss_metric.reset_states() accuracy_metric.reset_states() for inputs, labels in train_data: train_step(inputs, labels) # Get the metric results mean_loss=loss_metric.result() mean_accuracy = accuracy_metric.result() print('Epoch: ', epoch) print(' loss: {:.3f}'.format(mean_loss)) print(' accuracy: {:.3f}'.format(mean_accuracy)) """ Explanation: メトリックオブジェクトには次のメソッドがあります 。 Metric.update_state() — 新しい観測を追加する Metric.result() — 観測値が与えられたとき、メトリックの現在の結果を得る 
Metric.reset_states() — すべての観測をクリアする オブジェクト自体は呼び出し可能です。呼び出しはupdate_stateと同様に新しい観測の状態を更新し、メトリクスの新しい結果を返します。 メトリックの変数を手動で初期化する必要はありません。また、TensorFlow 2.x は自動制御依存性を持つため、それらについても気にする必要はありません。 次のコードは、メトリックを使用してカスタムトレーニングループ内で観測される平均損失を追跡します。 End of explanation """ model.compile( optimizer = tf.keras.optimizers.Adam(0.001), loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics = ['acc', 'accuracy', tf.keras.metrics.SparseCategoricalAccuracy(name="my_accuracy")]) history = model.fit(train_data) history.history.keys() """ Explanation: <a id="keras_metric_names"></a> Keras メトリック名 TensorFlow 2.x では、Keras モデルはメトリクス名の処理に関してより一貫性があります。 メトリクスリストで文字列を渡すと、まさにその文字列がメトリクスのnameとして使用されます。これらの名前は<br>model.fitによって返される履歴オブジェクトと、keras.callbacksに渡されるログに表示されます。これはメトリクスリストで渡した文字列に設定されています。 End of explanation """ def wrap_frozen_graph(graph_def, inputs, outputs): def _imports_graph_def(): tf.compat.v1.import_graph_def(graph_def, name="") wrapped_import = tf.compat.v1.wrap_function(_imports_graph_def, []) import_graph = wrapped_import.graph return wrapped_import.prune( tf.nest.map_structure(import_graph.as_graph_element, inputs), tf.nest.map_structure(import_graph.as_graph_element, outputs)) """ Explanation: これはmetrics=["accuracy"]を渡すとdict_keys(['loss', 'acc'])になっていた、以前のバージョンとは異なります。 Keras オプティマイザ v1.train.AdamOptimizerやv1.train.GradientDescentOptimizerなどのv1.train内のオプティマイザは、tf.keras.optimizers内に同等のものを持ちます。 v1.trainをkeras.optimizersに変換する オプティマイザを変換する際の注意事項を次に示します。 オプティマイザをアップグレードすると、古いチェックポイントとの互換性がなくなる可能性があります。 epsilon のデフォルトはすべて1e-8ではなく1e-7になりました。(これはほとんどのユースケースで無視できます。) v1.train.GradientDescentOptimizerはtf.keras.optimizers.SGDで直接置き換えが可能です。 v1.train.MomentumOptimizerはモメンタム引数(tf.keras.optimizers.SGD(..., momentum=...))を使用してSGDオプティマイザで直接置き換えが可能です。 v1.train.AdamOptimizerを変換してtf.keras.optimizers.Adamを使用することが可能です。<code>beta1</code>引数とbeta2引数の名前は、beta_1とbeta_2に変更されています。 v1.train.RMSPropOptimizerはtf.keras.optimizers.RMSpropに変換可能です。 decay引数の名前はrhoに変更されています。 v1.train.AdadeltaOptimizerはtf.keras.optimizers.Adadeltaに直接変換が可能です。 tf.train.AdagradOptimizerは tf.keras.optimizers.Adagradに直接変換が可能です。 tf.train.FtrlOptimizerはtf.keras.optimizers.Ftrlに直接変換が可能です。accum_nameおよびlinear_name引数は削除されています。 tf.contrib.AdamaxOptimizerとtf.contrib.NadamOptimizerは tf.keras.optimizers.Adamaxとtf.keras.optimizers.Nadamに直接変換が可能です。beta1引数とbeta2引数の名前は、beta_1とbeta_2に変更されています。 一部のtf.keras.optimizersの新しいデフォルト <a id="keras_optimizer_lr"></a> 警告: モデルの収束挙動に変化が見られる場合には、デフォルトの学習率を確認してください。 optimizers.SGD、optimizers.Adam、またはoptimizers.RMSpropに変更はありません。 次のデフォルトの学習率が変更されました。 optimizers.Adagrad 0.01 から 0.001 へ optimizers.Adadelta 1.0 から 0.001 へ optimizers.Adamax 0.002 から 0.001 へ optimizers.Nadam 0.002 から 0.001 へ TensorBoard TensorFlow 2 には、TensorBoard で視覚化するための要約データを記述するために使用されるtf.summary API の大幅な変更が含まれています。新しいtf.summaryの概要については、TensorFlow 2 API を使用した複数のチュートリアルがあります。これには、TensorBoard TensorFlow 2 移行ガイドも含まれています。 保存と読み込み <a id="checkpoints"></a> チェックポイントの互換性 TensorFlow 2.x はオブジェクトベースのチェックポイントを使用します。 古いスタイルの名前ベースのチェックポイントは、注意を払えば依然として読み込むことができます。コード変換プロセスは変数名変更という結果になるかもしれませんが、回避方法はあります。 最も単純なアプローチは、チェックポイント内の名前と新しいモデルの名前を揃えて並べることです。 変数にはすべて依然として設定が可能なname引数があります。 Keras モデルはまた name引数を取り、それらの変数のためのプレフィックスとして設定されます。 v1.name_scope関数は、変数名のプレフィックスの設定に使用できます。これはtf.variable_scopeとは大きく異なります。これは名前だけに影響するもので、変数と再利用の追跡はしません。 ご利用のユースケースで動作しない場合は、v1.train.init_from_checkpointを試してみてください。これはassignment_map引数を取り、古い名前から新しい名前へのマッピングを指定します。 注意 : 
読み込みを遅延できるオブジェクトベースのチェックポイントとは異なり、名前ベースのチェックポイントは関数が呼び出される時に全ての変数が構築されていることを要求します。一部のモデルは、buildを呼び出すかデータのバッチでモデルを実行するまで変数の構築を遅延します。 TensorFlow Estimatorリポジトリには事前作成された Estimator のチェックポイントを TensorFlow 1.X から 2.0 にアップグレードするための変換ツールが含まれています。これは、同様のユースケースのツールを構築する方法の例として有用な場合があります。 保存されたモデルの互換性 保存されたモデルには、互換性に関する重要な考慮事項はありません。 TensorFlow 1.x saved_models は TensorFlow 2.x で動作します。 TensorFlow 2.x saved_models は全ての演算がサポートされていれば TensorFlow 1.x で動作します。 Graph.pb または Graph.pbtxt 未加工のGraph.pbファイルを TensorFlow 2.x にアップグレードする簡単な方法はありません。確実な方法は、ファイルを生成したコードをアップグレードすることです。 ただし、「凍結グラフ」(変数が定数に変換されたtf.Graph)がある場合、v1.wrap_functionを使用してconcrete_functionへの変換が可能です。 End of explanation """ path = tf.keras.utils.get_file( 'inception_v1_2016_08_28_frozen.pb', 'http://storage.googleapis.com/download.tensorflow.org/models/inception_v1_2016_08_28_frozen.pb.tar.gz', untar=True) """ Explanation: たとえば、次のような凍結された Inception v1 グラフ(2016 年)があります。 End of explanation """ graph_def = tf.compat.v1.GraphDef() loaded = graph_def.ParseFromString(open(path,'rb').read()) """ Explanation: tf.GraphDefを読み込みます。 End of explanation """ inception_func = wrap_frozen_graph( graph_def, inputs='input:0', outputs='InceptionV1/InceptionV1/Mixed_3b/Branch_1/Conv2d_0a_1x1/Relu:0') """ Explanation: これをconcrete_functionにラップします。 End of explanation """ input_img = tf.ones([1,224,224,3], dtype=tf.float32) inception_func(input_img).shape """ Explanation: 入力としてテンソルを渡します。 End of explanation """ # Define the estimator's input_fn def input_fn(): datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True) mnist_train, mnist_test = datasets['train'], datasets['test'] BUFFER_SIZE = 10000 BATCH_SIZE = 64 def scale(image, label): image = tf.cast(image, tf.float32) image /= 255 return image, label[..., tf.newaxis] train_data = mnist_train.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE) return train_data.repeat() # Define train &amp; eval specs train_spec = tf.estimator.TrainSpec(input_fn=input_fn, max_steps=STEPS_PER_EPOCH * NUM_EPOCHS) eval_spec = tf.estimator.EvalSpec(input_fn=input_fn, steps=STEPS_PER_EPOCH) """ Explanation: Estimator Estimator でトレーニングする Estimator は TensorFlow 2.0 でサポートされています。 Estimator を使用する際には、TensorFlow 1.x. 
からのinput_fn()、tf.estimator.TrainSpec、tf.estimator.EvalSpecを使用できます。 ここに train と evaluate specs を伴う input_fn を使用する例があります。 input_fn と train/eval specs を作成する End of explanation """ def make_model(): return tf.keras.Sequential([ tf.keras.layers.Conv2D(32, 3, activation='relu', kernel_regularizer=tf.keras.regularizers.l2(0.02), input_shape=(28, 28, 1)), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Flatten(), tf.keras.layers.Dropout(0.1), tf.keras.layers.Dense(64, activation='relu'), tf.keras.layers.BatchNormalization(), tf.keras.layers.Dense(10) ]) model = make_model() model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) estimator = tf.keras.estimator.model_to_estimator( keras_model = model ) tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec) """ Explanation: Keras モデル定義を使用する TensorFlow 2.x で Estimator を構築する方法には、いくつかの違いがあります。 モデルは Keras を使用して定義することを推奨します。次にtf.keras.estimator.model_to_estimatorユーティリティを使用して、モデルを Estimator に変更します。次のコードは Estimator を作成してトレーニングする際に、このユーティリティをどのように使用するかを示します。 End of explanation """ def my_model_fn(features, labels, mode): model = make_model() optimizer = tf.compat.v1.train.AdamOptimizer() loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) training = (mode == tf.estimator.ModeKeys.TRAIN) predictions = model(features, training=training) if mode == tf.estimator.ModeKeys.PREDICT: return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions) reg_losses = model.get_losses_for(None) + model.get_losses_for(features) total_loss=loss_fn(labels, predictions) + tf.math.add_n(reg_losses) accuracy = tf.compat.v1.metrics.accuracy(labels=labels, predictions=tf.math.argmax(predictions, axis=1), name='acc_op') update_ops = model.get_updates_for(None) + model.get_updates_for(features) minimize_op = optimizer.minimize( total_loss, var_list=model.trainable_variables, global_step=tf.compat.v1.train.get_or_create_global_step()) train_op = tf.group(minimize_op, update_ops) return tf.estimator.EstimatorSpec( mode=mode, predictions=predictions, loss=total_loss, train_op=train_op, eval_metric_ops={'accuracy': accuracy}) # Create the Estimator &amp; Train estimator = tf.estimator.Estimator(model_fn=my_model_fn) tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec) """ Explanation: 注意 : Keras で重み付きメトリクスを作成し、model_to_estimatorを使用してそれらを Estimator API で重み付きメトリクスを変換することはサポートされません。それらのメトリクスは、add_metrics関数を使用して Estimator 仕様で直接作成する必要があります。 カスタム model_fn を使用する 保守する必要がある既存のカスタム Estimator model_fn を持つ場合には、model_fnを変換して Keras モデルを使用できるようにすることが可能です。 しかしながら、互換性の理由から、カスタムmodel_fnは依然として1.x スタイルのグラフモードで動作します。これは eager execution はなく自動制御依存性もないことも意味します。 注意: 長期的には、特にカスタムの model_fn を使って、tf.estimator から移行することを計画する必要があります。代替の API は tf.keras と tf.distribute です。トレーニングの一部に Estimator を使用する必要がある場合は、tf.keras.estimator.model_to_estimator コンバータを使用して keras.Model から <code>Estimator</code> を作成する必要があります。 <a name="minimal_changes"></a> 最小限の変更で model_fn をカスタマイズする TensorFlow 2.0 でカスタムmodel_fnを動作させるには、既存のコードの変更を最小限に留めたい場合、optimizersやmetricsなどのtf.compat.v1シンボルを使用することができます。 カスタムmodel_fnで Keras モデルを使用することは、それをカスタムトレーニングループで使用することに類似しています。 mode引数を基に、training段階を適切に設定します。 モデルのtrainable_variablesをオプティマイザに明示的に渡します。 しかし、カスタムループと比較して、重要な違いがあります。 Model.lossesを使用する代わりにModel.get_losses_forを使用して損失を抽出します。 Model.get_updates_forを使用してモデルの更新を抽出します。 注意 : 「更新」は各バッチの後にモデルに適用される必要がある変更です。例えば、layers.BatchNormalizationレイヤーの平均と分散の移動平均などです。 次のコードはカスタムmodel_fnから Estimator を作成し、これらの懸念事項をすべて示しています。 End of explanation """ def 
my_model_fn(features, labels, mode): model = make_model() training = (mode == tf.estimator.ModeKeys.TRAIN) loss_obj = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) predictions = model(features, training=training) # Get both the unconditional losses (the None part) # and the input-conditional losses (the features part). reg_losses = model.get_losses_for(None) + model.get_losses_for(features) total_loss=loss_obj(labels, predictions) + tf.math.add_n(reg_losses) # Upgrade to tf.keras.metrics. accuracy_obj = tf.keras.metrics.Accuracy(name='acc_obj') accuracy = accuracy_obj.update_state( y_true=labels, y_pred=tf.math.argmax(predictions, axis=1)) train_op = None if training: # Upgrade to tf.keras.optimizers. optimizer = tf.keras.optimizers.Adam() # Manually assign tf.compat.v1.global_step variable to optimizer.iterations # to make tf.compat.v1.train.global_step increased correctly. # This assignment is a must for any `tf.train.SessionRunHook` specified in # estimator, as SessionRunHooks rely on global step. optimizer.iterations = tf.compat.v1.train.get_or_create_global_step() # Get both the unconditional updates (the None part) # and the input-conditional updates (the features part). update_ops = model.get_updates_for(None) + model.get_updates_for(features) # Compute the minimize_op. minimize_op = optimizer.get_updates( total_loss, model.trainable_variables)[0] train_op = tf.group(minimize_op, *update_ops) return tf.estimator.EstimatorSpec( mode=mode, predictions=predictions, loss=total_loss, train_op=train_op, eval_metric_ops={'Accuracy': accuracy_obj}) # Create the Estimator &amp; Train. estimator = tf.estimator.Estimator(model_fn=my_model_fn) tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec) """ Explanation: TensorFlow 2.x シンボルでmodel_fnをカスタマイズする TensorFlow 1.x シンボルをすべて削除し、カスタムmodel_fn をネイティブの TensorFlow 2.x にアップグレードする場合は、オプティマイザとメトリクスをtf.keras.optimizersとtf.keras.metricsにアップグレードする必要があります。 カスタムmodel_fnでは、上記の変更に加えて、さらにアップグレードを行う必要があります。 v1.train.Optimizer の代わりに tf.keras.optimizers を使用します。 損失が呼び出し可能(関数など)な場合は、Optimizer.minimize()を使用してtrain_op/minimize_opを取得します。 train_op/minimize_opを計算するには、 損失がスカラー損失Tensor(呼び出し不可)の場合は、Optimizer.get_updates()を使用します。返されるリストの最初の要素は目的とするtrain_op/minimize_opです。 損失が呼び出し可能(関数など)な場合は、Optimizer.minimize()を使用してtrain_op/minimize_opを取得します。 評価にはtf.compat.v1.metricsの代わりにtf.keras.metricsを使用します。 上記のmy_model_fnの例では、2.0 シンボルの移行されたコードは次のように表示されます。 End of explanation """ ! 
curl -O https://raw.githubusercontent.com/tensorflow/estimator/master/tensorflow_estimator/python/estimator/tools/checkpoint_converter.py """ Explanation: 事前作成された Estimator tf.estimator.DNN*、tf.estimator.Linear*、 tf.estimator.DNNLinearCombined*のファミリーに含まれる事前作成された Estimator は、依然として TensorFlow 2.0 API でもサポートされていますが、一部の引数が変更されています。 input_layer_partitioner: v2 で削除されました。 loss_reduction: tf.compat.v1.losses.Reductionの代わりにtf.keras.losses.Reductionに更新されました。デフォルト値もtf.compat.v1.losses.Reduction.SUMからtf.keras.losses.Reduction.SUM_OVER_BATCH_SIZEに変更されています。 optimizer、dnn_optimizer、linear_optimizer: これらの引数はtf.compat.v1.train.Optimizerの代わりにtf.keras.optimizersに更新されています。 上記の変更を移行するには : TensorFlow 2.x では配布戦略が自動的に処理するため、input_layer_partitionerの移行は必要ありません。 loss_reductionについてはtf.keras.losses.Reductionでサポートされるオプションを確認してください。 optimizer 引数の場合: 1) optimizer、dnn_optimizer、または linear_optimizer 引数を渡さない場合、または 2) optimizer 引数を string としてコードに指定しない場合、デフォルトで tf.keras.optimizers が使用されるため、何も変更する必要はありません。 optimizer引数については、optimizer、dnn_optimizer、linear_optimizer引数を渡さない場合、またはoptimizer引数をコード内の内のstringとして指定する場合は、何も変更する必要はありません。デフォルトでtf.keras.optimizersを使用します。それ以外の場合は、tf.compat.v1.train.Optimizerから対応するtf.keras.optimizersに更新する必要があります。 チェックポイントコンバータ <a id="checkpoint_converter"></a> tf.keras.optimizersは異なる変数セットを生成してチェックポイントに保存するするため、keras.optimizersへの移行は TensorFlow 1.x を使用して保存されたチェックポイントを壊してしまいます。TensorFlow 2.x への移行後に古いチェックポイントを再利用できるようにするには、チェックポイントコンバータツールをお試しください。 End of explanation """ ! python checkpoint_converter.py -h """ Explanation: ツールにはヘルプが組み込まれています。 End of explanation """ # Create a shape and choose an index i = 0 shape = tf.TensorShape([16, None, 256]) shape """ Explanation: <a id="tensorshape"></a> TensorShape このクラスはtf.compat.v1.Dimensionオブジェクトの代わりにintを保持することにより単純化されました。従って、.value()を呼び出してintを取得する必要はありません。 個々のtf.compat.v1.Dimensionオブジェクトは依然としてtf.TensorShape.dimsからアクセス可能です。 以下に TensorFlow 1.x と TensorFlow 2.x 間の違いを示します。 End of explanation """ value = shape[i] value """ Explanation: TensorFlow 1.x で次を使っていた場合: python value = shape[i].value Then do this in TensorFlow 2.x: End of explanation """ for value in shape: print(value) """ Explanation: TensorFlow 1.x で次を使っていた場合: python for dim in shape: value = dim.value print(value) TensorFlow 2.0 では次のようにします: End of explanation """ other_dim = 16 Dimension = tf.compat.v1.Dimension if shape.rank is None: dim = Dimension(None) else: dim = shape.dims[i] dim.is_compatible_with(other_dim) # or any other dimension method shape = tf.TensorShape(None) if shape: dim = shape.dims[i] dim.is_compatible_with(other_dim) # or any other dimension method """ Explanation: TensorFlow 1.x で次を使っていた場合(またはその他の次元のメソッドを使用していた場合): python dim = shape[i] dim.assert_is_compatible_with(other_dim) TensorFlow 2.0 では次のようにします: End of explanation """ print(bool(tf.TensorShape([]))) # Scalar print(bool(tf.TensorShape([0]))) # 0-length vector print(bool(tf.TensorShape([1]))) # 1-length vector print(bool(tf.TensorShape([None]))) # Unknown-length vector print(bool(tf.TensorShape([1, 10, 100]))) # 3D tensor print(bool(tf.TensorShape([None, None, None]))) # 3D tensor with no known dimensions print() print(bool(tf.TensorShape(None))) # A tensor with unknown rank. """ Explanation: tf.TensorShape のブール型の値は、階数がわかっている場合は Trueで、そうでない場合はFalseです。 End of explanation """
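# One API discussed above but not shown in code is the reworked tf.summary used for
# TensorBoard logging. A minimal, self-contained sketch in TensorFlow 2.x (the log
# directory name and the dummy loss values are only illustrative):
writer = tf.summary.create_file_writer("logs/demo")
with writer.as_default():
    for step in range(5):
        dummy_loss = 1.0 / (step + 1)  # stand-in for a real training loss
        tf.summary.scalar("loss", dummy_loss, step=step)
    writer.flush()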
lalonica/PhD
vehicles/VehiclesTimeCycles-sample.ipynb
gpl-3.0
%matplotlib inline from pandas import Series, DataFrame import pandas as pd from itertools import * import itertools import numpy as np import csv import math import matplotlib.pyplot as plt from matplotlib import pylab from scipy.signal import hilbert, chirp import scipy import networkx as nx """ Explanation: Loading the necessary libraries End of explanation """ c_dataset = ['vID','fID', 'tF', 'Time', 'lX', 'lY', 'gX', 'gY', 'vLen', 'vWid', 'vType','vVel', 'vAcc', 'vLane', 'vPrec', 'vFoll', 'spac','headway' ] dataset = pd.read_table('D:\\zzzLola\\PhD\\DataSet\\US101\\coding\\dataset_meters_sample.txt', sep=r"\s+", header=None, names=c_dataset) dataset """ Explanation: Loading the dataset 0750-0805 Description of the dataset is at: D:/zzzLola/PhD/DataSet/US101/US101_time_series/US-101-Main-Data/vehicle-trajectory-data/trajectory-data-dictionary.htm End of explanation """ numV = dataset['vID'].unique() len(numV) numTS = dataset['Time'].unique() len(numTS) """ Explanation: What is the number of different vehicles for the 15 min How many timestamps? Are the timestamps of the vehicles matched? To transfor the distaces, veloc and acceleration to meters, m/s. To compute the distances all to all. Compute the time cycles. End of explanation """ dataset['tF'].describe() des_all = dataset.describe() des_all #des_all.to_csv('D:\\zzzLola\\PhD\\DataSet\\US101\\coding\\description_allDataset.csv', sep='\t', encoding='utf-8') #dataset.to_csv('D:\\zzzLola\\PhD\\DataSet\\US101\\coding\\dataset_meters.txt', sep='\t', encoding='utf-8') #table.groupby('YEARMONTH').CLIENTCODE.nunique() v_num_lanes = dataset.groupby('vID').vLane.nunique() v_num_lanes[v_num_lanes > 1].count() v_num_lanes[v_num_lanes == 1].count() dataset[:10] """ Explanation: 15min = 900 s = 9000 ms // 9529ms = 952.9s = 15min 52.9s The actual temporal length of this dataset is 15min 52.9s. Looks like the timestamp of the vehicles is matches. Which make sense attending to the way the data is obtained. There is no GPS on the vehicles, but from cameras synchronized localized at different buildings. For every time stamp, check how many vehicles are accelerating when the one behind is also or not... : - vehicle_acceleration vs precedin_vehicl_acceleration - vehicle_acceleration vs follower_vehicl_acceleration When is a vehicle changing lanes? 
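(One possible sketch for the lane-change question, assuming rows are time-ordered within each vehicle: count how often vLane changes between consecutive rows, e.g. dataset.groupby('vID')['vLane'].apply(lambda s: (s != s.shift()).sum() - 1).)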
End of explanation """ #len(dataTime) """ Explanation: def calculateDistance(x1,y1,x2,y2): dist = math.sqrt((x2 - x1)2 + (y2 - y1)2) return dist result = df1.append(df2) count = 0 dist = 0 create an empty dataframe index = pd.date_range(todays_date-datetime.timedelta(10), periods=10, freq='D') columns_dist = ['vIDa','Timea', 'gXa', 'gYa', 'vTypea','vVela', 'vAcca', 'vLanea', 'vPreca', 'vFolla', 'vIDb','Timeb', 'gXb', 'gYb', 'vTypeb','vVelb', 'vAccb', 'vLaneb', 'vPrecb', 'vFollb'] df_ = pd.DataFrame(index=index, columns=columns) df_dist = pd.DataFrame(columns=columns_dist) df_dist = df_dist.fillna(0) # with 0s rather than NaNs Fill the dataframe df = df.append(data) times = dataset['Time'].unique() for time in times: print 'Time %i ' %time dataTime = dataset.loc[dataset['Time'] == time] row_iterator = dataTime.iterrows() for index, row in row_iterator: if index+1 &gt; len(dataTime)-1: print 'The index is %i ' %index print row['vID'] print dataTime.iloc[index+1]['vID'] #while row.notnull == True: # last = row_iterator.next() # print last #if ((index+1)): # j=index+1 # print 'The index+1 is: %i' %j # for j, row in dataTime.iterrows(): # #dist = calculateDistance(dataTime[index,'gX'],dataTime[index,'gY'],dataTime[j,'gX'],dataTime[j,'gY'],) # #i_data = array_data.tolist # #dist_med = (array_data[i, 3], array_data[i, 0], array_data[j,0], dist, array_data[i, 10], array_data[i, 11], # #array_data[i, 13],array_data[i, 14], array_data[i, 15]) # #dist_list.append(dist_med) # count = len(dataTime) #print ('The count is: %i' %count) #count = 0 #dist = calculateDistance() End of explanation """ data = dataset.set_index("vID") data[:13] #Must be before, I guess. dataset = dataset.drop(['fID','tF','lX','lY','vLen','vWid','spac','headway'], axis=1) dataset """ Explanation: if i+1 > len(df)-1: pass elif (df.loc[i+1,'a_d'] == df.loc [i,'a_d']): pass elif (df.loc [i+2,'station'] == df.loc [i,'station'] and (df.loc [i+2,'direction'] == df.loc [i,'direction'])): pass else: df.loc[i,'value_id'] = value_id import pandas as pd from itertools import izip df = pd.DataFrame(['AA', 'BB', 'CC'], columns = ['value']) for id1, id2 in izip(df.iterrows(),df.ix[1:].iterrows()): print id1[1]['value'] print id2[1]['value'] https://docs.python.org/3.1/library/itertools.html http://stackoverflow.com/questions/25715627/itertools-selecting-in-pandas-based-on-previous-three-rows-or-previous-element https://pymotw.com/2/itertools/ Calculation of DISTANCES End of explanation """ times = dataset['Time'].unique() data = pd.DataFrame() data = data.fillna(0) # with 0s rather than NaNs dTime = pd.DataFrame() for time in times: print 'Time %i ' %time dataTime0 = dataset.loc[dataset['Time'] == time] list_vIDs = dataTime0.vID.tolist() #print list_vIDs dataTime = dataTime0.set_index("vID") #index_dataTime = dataTime.index.values #print dataTime perm = list(permutations(list_vIDs,2)) #print perm dist = pd.DataFrame([((((dataTime.loc[p[0],'gX'] - dataTime.loc[p[1],'gX']))**2) + (((dataTime.loc[p[0],'gY'] - dataTime.loc[p[1],'gY']))**2))**0.5 for p in perm] , index=perm, columns = {'dist'}) #dist['time'] = time ##Matrix with dist and time #merge dataTime with distances dist['FromTo'] = dist.index dist['vID'] = dist.FromTo.str[0] dist['To'] = dist.FromTo.str[1] dataTimeDist = pd.merge(dataTime0,dist, on = 'vID') dataTimeDist = dataTimeDist.drop(['gX','gY'], axis=1) print dataTimeDist data = data.append(dataTimeDist) data """ Explanation: This code works!! 
NO TOCAR End of explanation """ def save_graph(graph,file_name): #initialze Figure plt.figure(num=None, figsize=(20, 20), dpi=80) plt.axis('off') fig = plt.figure(1) pos = nx.spring_layout(graph) nx.draw_networkx_nodes(graph,pos) nx.draw_networkx_edges(graph,pos) nx.draw_networkx_labels(graph,pos) #cut = 1.00 #xmax = cut * max(xx for xx, yy in pos.values()) #ymax = cut * max(yy for xx, yy in pos.values()) #plt.xlim(0, xmax) #plt.ylim(0, ymax) plt.savefig(file_name,bbox_inches="tight") pylab.close() del fig times = dataset['Time'].unique() data = pd.DataFrame() data = data.fillna(0) # with 0s rather than NaNs data_graph = pd.DataFrame() data_graph = data.fillna(0) dTime = pd.DataFrame() for time in times: #print 'Time %i ' %time dataTime0 = dataset.loc[dataset['Time'] == time] list_vIDs = dataTime0.vID.tolist() #print list_vIDs dataTime = dataTime0.set_index("vID") #index_dataTime = dataTime.index.values #print dataTime perm = list(permutations(list_vIDs,2)) #print perm dist = [((((dataTime.loc[p[0],'gX'] - dataTime.loc[p[1],'gX']))**2) + (((dataTime.loc[p[0],'gY'] - dataTime.loc[p[1],'gY']))**2))**0.5 for p in perm] dataDist = pd.DataFrame(dist , index=perm, columns = {'dist'}) #Convert the matrix into a square matrix #Create the fields vID and To dataDist['FromTo'] = dataDist.index dataDist['vID'] = dataDist.FromTo.str[0] dataDist['To'] = dataDist.FromTo.str[1] #I multi dataDist['inv_dist'] = (1/dataDist.dist)*100 #Delete the intermediate FromTo field dataDist = dataDist.drop('FromTo', 1) #With pivot and the 3 columns I can generate the square matrix #Here is where I should have the condition of the max distance: THRESHOLD dataGraph = dataDist.pivot(index='vID', columns='To', values = 'inv_dist').fillna(0) print dataDist #graph = nx.from_numpy_matrix(dataGraph.values) #graph = nx.relabel_nodes(graph, dict(enumerate(dataGraph.columns))) #save_graph(graph,'my_graph+%i.png' %time) #print dataDist #data = data.append(dist) """ Explanation: Computing the GRAPH IT WORKS DO NOT TOUCH!! 
End of explanation """ def save_graph(graph,my_weight,file_name): #initialze Figure plt.figure(num=None, figsize=(20, 20), dpi=80) plt.axis('off') fig = plt.figure(1) pos = nx.spring_layout(graph,weight='my_weight') #spring_layout(graph) nx.draw_networkx_nodes(graph,pos) nx.draw_networkx_edges(graph,pos) nx.draw_networkx_labels(graph,pos) #cut = 1.00 #xmax = cut * max(xx for xx, yy in pos.values()) #ymax = cut * max(yy for xx, yy in pos.values()) #plt.xlim(0, xmax) #plt.ylim(0, ymax) plt.savefig(file_name,bbox_inches="tight") pylab.close() del fig times = dataset['Time'].unique() data = pd.DataFrame() data = data.fillna(0) # with 0s rather than NaNs dTime = pd.DataFrame() for time in times: #print 'Time %i ' %time dataTime0 = dataset.loc[dataset['Time'] == time] list_vIDs = dataTime0.vID.tolist() #print list_vIDs dataTime = dataTime0.set_index("vID") #index_dataTime = dataTime.index.values #print dataTime perm = list(permutations(list_vIDs,2)) #print perm dist = [((((dataTime.loc[p[0],'gX'] - dataTime.loc[p[1],'gX']))**2) + (((dataTime.loc[p[0],'gY'] - dataTime.loc[p[1],'gY']))**2))**0.5 for p in perm] dataDist = pd.DataFrame(dist , index=perm, columns = {'dist'}) #Create the fields vID and To dataDist['FromTo'] = dataDist.index dataDist['From'] = dataDist.FromTo.str[0] dataDist['To'] = dataDist.FromTo.str[1] #I multiply by 100 in order to scale the number dataDist['weight'] = (1/dataDist.dist)*100 #Delete the intermediate FromTo field dataDist = dataDist.drop('FromTo', 1) graph = nx.from_pandas_dataframe(dataDist, 'From','To',['weight']) save_graph(graph,'weight','000_my_graph+%i.png' %time) dataDist graph[1917][1919]['weight'] """ Explanation: Using from_pandas_dataframe End of explanation """
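# A vectorized sketch of the same all-to-all distance computation used in the loops
# above, based on scipy's cdist. The 50 m threshold for keeping an edge is purely
# illustrative (this is where the THRESHOLD condition mentioned earlier would go),
# and the weight mirrors the (1/dist)*100 scaling used above.
from scipy.spatial.distance import cdist

def distance_edges(frame, threshold=50.0):
    """Return (From, To, dist, weight) pairs for vehicles closer than `threshold` metres."""
    ids = frame['vID'].values
    xy = frame[['gX', 'gY']].values
    d = cdist(xy, xy)                      # all-to-all Euclidean distances
    i, j = np.where((d > 0) & (d < threshold))
    return pd.DataFrame({'From': ids[i], 'To': ids[j],
                         'dist': d[i, j], 'weight': 100.0 / d[i, j]})

# Example for a single timestamp (mirrors one iteration of the loops above):
# edges = distance_edges(dataset.loc[dataset['Time'] == times[0]])
# g = nx.from_pandas_dataframe(edges, 'From', 'To', ['weight'])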
econ-ark/HARK
examples/HowWeSolveIndShockConsumerType/HowWeSolveIndShockConsumerType.ipynb
apache-2.0
from HARK.ConsumptionSaving.ConsIndShockModel import IndShockConsumerType, init_lifecycle import numpy as np import matplotlib.pyplot as plt LifecycleExample = IndShockConsumerType(**init_lifecycle) LifecycleExample.cycles = 1 # Make this consumer live a sequence of periods exactly once LifecycleExample.solve() """ Explanation: How we solve a model defined by the IndShockConsumerType class The IndShockConsumerType reprents the work-horse consumption savings model with temporary and permanent shocks to income, finite or infinite horizons, CRRA utility and more. In this DemARK we take you through the steps involved in solving one period of such a model. The inheritance chains can be a little long, so figuring out where all the parameters and methods come from can be a bit confusing. Hence this map! The intention is to make it easier to know how to inheret from IndShockConsumerType in the sense that you know where to look for specific solver logic, but also so you know can figure out which methods to overwrite or supplement in your own AgentType and solver! The solveConsIndShock function In HARK, a period's problem is always solved by the callable (function or callable object instance) stored in the field solve_one_period. In the case of IndShockConsumerType, this function is called solveConsIndShock. The function accepts a number of arguments, that it uses to construct an instance of either a ConsIndShockSolverBasic or a ConsIndShockSolver. These solvers both have the methods prepare_to_solve and solve, that we will have a closer look at in this notebook. This means, that the logic of solveConsIndShock is basically: Check if cubic interpolation (CubicBool) or construction of the value function interpolant (vFuncBool) are requested. Construct an instance of ConsIndShockSolverBasic if neither are requested, else construct a ConsIndShockSolver. Call this solver. Call solver.prepare_to_solve() Call solver.solve() and return the output as the current solution. Two types of solvers As mentioned above, solve_one_period will construct an instance of the class ConsIndShockSolverBasicor ConsIndShockSolver. The main difference is whether it uses cubic interpolation or if it explicitly constructs a value function approximation. The choice and construction of a solver instance is bullet 1) from above. What happens in upon construction Neither of the two solvers have their own __init__. ConsIndShockSolver inherits from ConsIndShockSolverBasic that in turn inherits from ConsIndShockSetup. ConsIndShockSetup inherits from ConsPerfForesightSolver, which itself is just an Object, so we get the inheritance structure ConsPerfForesightSolver $\leftarrow$ ConsIndShockSetup $\leftarrow$ ConsIndShockSolverBasic $\leftarrow$ ConsIndShockSolver When one of the two classes in the end of the inheritance chain is called, it will call ConsIndShockSetup.__init__(args...). This takes a whole list of fixed inputs that then gets assigned to the object through a ConsIndShockSetup.assign_parameters(solution_next,IncomeDstn,LivPrb,DiscFac,CRRA,Rfree,PermGroFac,BoroCnstArt,aXtraGrid,vFuncBool,CubicBool) call, that then calls ConsPerfForesightSolver.assign_parameters(self,solution_next,DiscFac,LivPrb,CRRA,Rfree,PermGroFac) We're getting kind of detailed here, but it is simply to help us understand the inheritance structure. The methods are quite straight forward, and simply assign the list of variables to self. 
The ones that do not get assigned by the ConsPerfForesightSolver method gets assign by the ConsIndShockSetup method instead. After all the input parameters are set, we update the utility function definitions. Remember, that we restrict ourselves to CRRA utility functions, and these are parameterized with the scalar we call CRRA in HARK. We use the two-argument CRRA utility (and derivatives, inverses, etc) from HARK.utilities, so we need to create a lambda (an anonymous function) according to the fixed CRRA we have chosen. This gets done through a call to ConsIndShockSetup.defUtilityFuncs() that itself calls ConsPerfForesightSolver.defUtilityFuncs() Again, we wish to emphasize the inheritance structure. The method in ConsPerfForesightSolver defines the most basic utility functions (utility, its marginal and its marginal marginal), and ConsIndShockSolver adds additional functions (marginal of inverse, inverse of marginal, marginal of inverse of marginal, and optionally inverse if vFuncBool is true). To sum up, the __init__ method lives in ConsIndShockSetup, calls assign_parameters and defUtilityFuncs from ConsPerfForesightSolver and defines its own methods with the same names that adds some methods used to solve the IndShockConsumerType using EGM. The main things controlled by the end-user are whether cubic interpolation should be used, CubicBool, and if the value function should be explicitly formed, vFuncBool. Prepare to solve We are now in bullet 2) from the list above. The prepare_to_solve method is all about grabbing relevant information from next period's solution, calculating some limiting solutions. It comes from ConsIndShockSetup and calls two methods: ConsIndShockSetup.setAndUpdateValues(self.solution_next,self.IncomeDstn,self.LivPrb,self.DiscFac) ConsIndShockSetup.defBoroCnst(self.BoroCnstArt) First, we have setAndUpdateValues. The main purpose is to grab the relevant vectors that represent the shock distributions, the effective discount factor, and value function (marginal, level, marginal marginal depending on the options). It also calculates some limiting marginal propensities to consume and human wealth levels. Second, we have defBoroCnst. As the name indicates, it calculates the natural borrowing constraint, handles artificial borrowing constraints, and defines the consumption function where the constraint binds (cFuncNowCnst). To sum, prepare_to_solve sets up the stochastic environment an borrowing constraints the consumer might face. It also grabs interpolants from "next period"'s solution. Solve it! The last method solveConsIndShock will call from the solver is solve. This method essentially has four steps: 1. Pre-processing for EGM: solver.prepare_to_calc_EndOfPrdvP 1. First step of EGM: solver.calc_EndOfPrdvP 1. Second step of EGM: solver.make_basic_solution 1. Add MPC and human wealth: solver.add_MPC_and_human_wealth Pre-processing for EGM prepare_to_calc_EndOfPrdvP Find relevant values of end-of-period asset values (according to aXtraGrid and natural borrowing constraint) and next period values implied by current period end-of-period assets and stochastic elements. The method stores the following in self: values of permanent shocks in PermShkVals_temp shock probabilities in ShkPrbs_temp next period resources in mNrmNext current grid of end-of-period assets in aNrmNow The method also returns aNrmNow. The definition is in ConsIndShockSolverBasic and is not overwritten in ConsIndShockSolver. 
First step of EGM calc_EndOfPrdvP Find the marginal value of having some level of end-of-period assets today. End-of-period assets as well as stochastics imply next-period resources at the beginning of the period, calculated above. Return the result as EndOfPrdvP. Second step of EGM make_basic_solution Apply inverse marginal utility function to nodes from about to find (m, c) pairs for the new consumption function in get_points_for_interpolation and create the interpolants in use_points_for_interpolation. The latter constructs the ConsumerSolution that contains the current consumption function cFunc, the current marginal value function vPfunc, and the smallest possible resource level mNrmMinNow. Add MPC and human wealth add_MPC_and_human_wealth Add values calculated in defBoroCnst now that we have a solution object to put them in. Special to the non-Basic solver We are now done, but in the ConsIndShockSolver (non-Basic!) solver there are a few extra steps. We add steady state m, and depending on the values of vFuncBool and CubicBool we also add the value function and the marginal marginal value function. Let's try it in action! First, we define a standard lifecycle model, solve it and then End of explanation """ from HARK.utilities import plot_funcs plot_funcs([LifecycleExample.solution[0].cFunc],LifecycleExample.solution[0].mNrmMin,10) """ Explanation: Let's have a look at the solution in time period second period. We should then be able to End of explanation """ from HARK.ConsumptionSaving.ConsIndShockModel import ConsIndShockSolverBasic solver = ConsIndShockSolverBasic(LifecycleExample.solution[1], LifecycleExample.IncShkDstn[0], LifecycleExample.LivPrb[0], LifecycleExample.DiscFac, LifecycleExample.CRRA, LifecycleExample.Rfree, LifecycleExample.PermGroFac[0], LifecycleExample.BoroCnstArt, LifecycleExample.aXtraGrid, LifecycleExample.vFuncBool, LifecycleExample.CubicBool) solver.prepare_to_solve() """ Explanation: Let us then create a solver for the first period. End of explanation """ solver.DiscFacEff solver.PermShkMinNext """ Explanation: Many important values are now calculated and stored in solver, such as the effective discount factor, the smallest permanent income shock, and more. End of explanation """ plot_funcs([solver.cFuncNowCnst],solver.mNrmMinNow,10) """ Explanation: These values were calculated in setAndUpdateValues. In defBoroCnst that was also called, several things were calculated, for example the consumption function defined by the borrowing constraint. End of explanation """ solver.prepare_to_calc_EndOfPrdvP() """ Explanation: Then, we set up all the grids, grabs the discrete shock distributions, and state grids in prepare_to_calc_EndOfPrdvP. End of explanation """ EndOfPrdvP = solver.calc_EndOfPrdvP() """ Explanation: Then we calculate the marginal utility of next period's resources given the stochastic environment and current grids. End of explanation """ solution = solver.make_basic_solution(EndOfPrdvP,solver.aNrmNow,solver.make_linear_cFunc) """ Explanation: Then, we essentially just have to construct the (resource, consumption) pairs by completing the EGM step, and constructing the interpolants by using the knowledge that the limiting solutions are those of the perfect foresight model. This is done with make_basic_solution as discussed above. End of explanation """ solver.add_MPC_and_human_wealth(solution) """ Explanation: Lastly, we add the MPC and human wealth quantities we calculated in the method that prepared the solution of this period. 
End of explanation """ plot_funcs([LifecycleExample.solution[0].cFunc, solution.cFunc],LifecycleExample.solution[0].mNrmMin,10) """ Explanation: All that is left is to verify that the solution in solution is identical to LifecycleExample.solution[0]. We can plot them against each other: End of explanation """ eval_grid = np.linspace(0, 20, 200) LifecycleExample.solution[0].cFunc(eval_grid) - solution.cFunc(eval_grid) """ Explanation: Although it's probably even clearer if we just subtract the function values from each other at some grid. End of explanation """
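To boil that comparison down to a single number, one can also look at the largest absolute gap on the grid; a small follow-up along these lines (the tolerance is an arbitrary choice for this sketch, the two solutions should agree essentially exactly):

```python
import numpy as np

eval_grid = np.linspace(0, 20, 200)
max_gap = float(np.max(np.abs(LifecycleExample.solution[0].cFunc(eval_grid)
                              - solution.cFunc(eval_grid))))
print(f"Largest absolute difference on the grid: {max_gap:.2e}")
assert max_gap < 1e-10  # arbitrary tolerance for this illustrative check
```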
ES-DOC/esdoc-jupyterhub
notebooks/pcmdi/cmip6/models/pcmdi-test-1-0/seaice.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'pcmdi', 'pcmdi-test-1-0', 'seaice') """ Explanation: ES-DOC CMIP6 Model Properties - Seaice MIP Era: CMIP6 Institute: PCMDI Source ID: PCMDI-TEST-1-0 Topic: Seaice Sub-Topics: Dynamics, Thermodynamics, Radiative Processes. Properties: 80 (63 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:36 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.model.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties --&gt; Model 2. Key Properties --&gt; Variables 3. Key Properties --&gt; Seawater Properties 4. Key Properties --&gt; Resolution 5. Key Properties --&gt; Tuning Applied 6. Key Properties --&gt; Key Parameter Values 7. Key Properties --&gt; Assumptions 8. Key Properties --&gt; Conservation 9. Grid --&gt; Discretisation --&gt; Horizontal 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Seaice Categories 12. Grid --&gt; Snow On Seaice 13. Dynamics 14. Thermodynamics --&gt; Energy 15. Thermodynamics --&gt; Mass 16. Thermodynamics --&gt; Salt 17. Thermodynamics --&gt; Salt --&gt; Mass Transport 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics 19. Thermodynamics --&gt; Ice Thickness Distribution 20. Thermodynamics --&gt; Ice Floe Size Distribution 21. Thermodynamics --&gt; Melt Ponds 22. Thermodynamics --&gt; Snow Processes 23. Radiative Processes 1. Key Properties --&gt; Model Name of seaice model used. 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of sea ice model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.model.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.variables.prognostic') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sea ice temperature" # "Sea ice concentration" # "Sea ice thickness" # "Sea ice volume per grid cell area" # "Sea ice u-velocity" # "Sea ice v-velocity" # "Sea ice enthalpy" # "Internal ice stress" # "Salinity" # "Snow temperature" # "Snow depth" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Variables List of prognostic variable in the sea ice model. 2.1. 
Prognostic Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the sea ice component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS-10" # "Constant" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Seawater Properties Properties of seawater relevant to sea ice 3.1. Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.2. Ocean Freezing Point Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant seawater freezing point, specify this value. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Resolution Resolution of the sea ice grid 4.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.3. Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Tuning Applied Tuning applied to sea ice model component 5.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.2. Target Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.3. Simulations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 *Which simulations had tuning applied, e.g. all, not historical, only pi-control? * End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.4. Metrics Used Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List any observed metrics used in tuning model/parameters End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.5. Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Which variables were changed during the tuning process? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Ice strength (P*) in units of N m{-2}" # "Snow conductivity (ks) in units of W m{-1} K{-1} " # "Minimum thickness of ice created in leads (h0) in units of m" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6. Key Properties --&gt; Key Parameter Values Values of key parameters 6.1. Typical Parameters Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N What values were specificed for the following parameters if used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.2. Additional Parameters Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.description') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Key Properties --&gt; Assumptions Assumptions made in the sea ice model 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General overview description of any key assumptions made in this model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.2. 
On Diagnostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.3. Missing Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Key Properties --&gt; Conservation Conservation in the sea ice component 8.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Provide a general description of conservation methodology. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.properties') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Energy" # "Mass" # "Salt" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.2. Properties Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Properties conserved in sea ice by the numerical schemes. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.budget') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.3. Budget Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3 End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 8.4. Was Flux Correction Used Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does conservation involved flux correction? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.5. Corrected Conserved Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List any variables which are conserved by more than the numerical scheme alone. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Ocean grid" # "Atmosphere Grid" # "Own Grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 9. Grid --&gt; Discretisation --&gt; Horizontal Sea ice discretisation in the horizontal 9.1. 
Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Grid on which sea ice is horizontal discretised? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Structured grid" # "Unstructured grid" # "Adaptive grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 9.2. Grid Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the type of sea ice grid? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite differences" # "Finite elements" # "Finite volumes" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 9.3. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the advection scheme? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 9.4. Thermodynamics Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the time step in the sea ice model thermodynamic component in seconds. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 9.5. Dynamics Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the time step in the sea ice model dynamic component in seconds. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.6. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional horizontal discretisation details. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Zero-layer" # "Two-layers" # "Multi-layers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10. Grid --&gt; Discretisation --&gt; Vertical Sea ice vertical properties 10.1. Layering Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 10.2. Number Of Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using multi-layers specify how many. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 10.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional vertical grid details. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 11. Grid --&gt; Seaice Categories What method is used to represent sea ice categories ? 11.1. Has Mulitple Categories Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Set to true if the sea ice model has multiple sea ice categories. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11.2. Number Of Categories Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using sea ice categories specify how many. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.3. Category Limits Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using sea ice categories specify each of the category limits. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.4. Ice Thickness Distribution Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the sea ice thickness distribution scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.other') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.5. Other Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 12. Grid --&gt; Snow On Seaice Snow on sea ice details 12.1. Has Snow On Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is snow on ice represented in this model? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 12.2. 
Number Of Snow Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels of snow on ice? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.3. Snow Fraction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how the snow fraction on sea ice is determined End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.4. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional details related to snow on ice. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.horizontal_transport') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Incremental Re-mapping" # "Prather" # "Eulerian" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Dynamics Sea Ice Dynamics 13.1. Horizontal Transport Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of horizontal advection of sea ice? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Incremental Re-mapping" # "Prather" # "Eulerian" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. Transport In Thickness Space Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of sea ice transport in thickness space (i.e. in thickness categories)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Hibler 1979" # "Rothrock 1975" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.3. Ice Strength Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Which method of sea ice strength formulation is used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.redistribution') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Rafting" # "Ridging" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.4. Redistribution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which processes can redistribute sea ice (including thickness)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.rheology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Free-drift" # "Mohr-Coloumb" # "Visco-plastic" # "Elastic-visco-plastic" # "Elastic-anisotropic-plastic" # "Granular" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.5. Rheology Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Rheology, what is the ice deformation formulation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pure ice latent heat (Semtner 0-layer)" # "Pure ice latent and sensible heat" # "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)" # "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14. Thermodynamics --&gt; Energy Processes related to energy in sea ice thermodynamics 14.1. Enthalpy Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the energy formulation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pure ice" # "Saline ice" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.2. Thermal Conductivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What type of thermal conductivity is used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Conduction fluxes" # "Conduction and radiation heat fluxes" # "Conduction, radiation and latent heat transport" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.3. Heat Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of heat diffusion? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heat Reservoir" # "Thermal Fixed Salinity" # "Thermal Varying Salinity" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.4. Basal Heat Flux Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method by which basal ocean heat flux is handled? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.5. Fixed Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.6. Heat Content Of Precipitation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method by which the heat content of precipitation is handled. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.7. 
Precipitation Effects On Salinity Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15. Thermodynamics --&gt; Mass Processes related to mass in sea ice thermodynamics 15.1. New Ice Formation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method by which new sea ice is formed in open water. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. Ice Vertical Growth And Melt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method that governs the vertical growth and melt of sea ice. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Floe-size dependent (Bitz et al 2001)" # "Virtual thin ice melting (for single-category)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.3. Ice Lateral Melting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of sea ice lateral melting? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.4. Ice Surface Sublimation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method that governs sea ice surface sublimation. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.5. Frazil Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method of frazil ice formation. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 16. Thermodynamics --&gt; Salt Processes related to salt in sea ice thermodynamics. 16.1. Has Multiple Sea Ice Salinities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 16.2. 
Sea Ice Salinity Thermal Impacts Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does sea ice salinity impact the thermal properties of sea ice? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Prescribed salinity profile" # "Prognostic salinity profile" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17. Thermodynamics --&gt; Salt --&gt; Mass Transport Mass transport of salt 17.1. Salinity Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is salinity determined in the mass transport of salt calculation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 17.2. Constant Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant salinity value specify this value in PSU? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the salinity profile used. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Prescribed salinity profile" # "Prognostic salinity profile" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics Salt thermodynamics 18.1. Salinity Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is salinity determined in the thermodynamic calculation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 18.2. Constant Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant salinity value specify this value in PSU? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the salinity profile used. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Virtual (enhancement of thermal conductivity, thin ice melting)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19. Thermodynamics --&gt; Ice Thickness Distribution Ice thickness distribution details. 19.1. 
Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is the sea ice thickness distribution represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Parameterised" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20. Thermodynamics --&gt; Ice Floe Size Distribution Ice floe-size distribution details. 20.1. Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is the sea ice floe-size represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20.2. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Please provide further details on any parameterisation of floe-size. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 21. Thermodynamics --&gt; Melt Ponds Characteristics of melt ponds. 21.1. Are Included Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are melt ponds included in the sea ice model? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flocco and Feltham (2010)" # "Level-ice melt ponds" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 21.2. Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What method of melt pond formulation is used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Albedo" # "Freshwater" # "Heat" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 21.3. Impacts Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N What do melt ponds have an impact on? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') # PROPERTY VALUE(S): # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 22. Thermodynamics --&gt; Snow Processes Thermodynamic processes in snow on sea ice 22.1. Has Snow Aging Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Set to True if the sea ice model has a snow aging scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.2. Snow Aging Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow aging scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 22.3. Has Snow Ice Formation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Set to True if the sea ice model has snow ice formation. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.4. Snow Ice Formation Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow ice formation scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.5. Redistribution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the impact of ridging on snow cover? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Single-layered heat diffusion" # "Multi-layered heat diffusion" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.6. Heat Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the heat diffusion through snow methodology in sea ice thermodynamics? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Delta-Eddington" # "Parameterized" # "Multi-band albedo" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23. Radiative Processes Sea Ice Radiative Processes 23.1. Surface Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method used to handle surface albedo. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Delta-Eddington" # "Exponential attenuation" # "Ice radiation transmission per category" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.2. Ice Radiation Transmission Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Method by which solar radiation through sea ice is handled. End of explanation """
amkatrutsa/MIPT-Opt
Spring2022/hb_acc_grad.ipynb
mit
import liboptpy.base_optimizer as base import numpy as np import liboptpy.unconstr_solvers.fo as fo import liboptpy.step_size as ss import matplotlib.pyplot as plt %matplotlib inline plt.rc("text", usetex=True) class HeavyBall(base.LineSearchOptimizer): def __init__(self, f, grad, step_size, beta, **kwargs): super().__init__(f, grad, step_size, **kwargs) self._beta = beta def get_direction(self, x): self._current_grad = self._grad(x) return -self._current_grad def _f_update_x_next(self, x, alpha, h): if len(self.convergence) < 2: return x + alpha * h else: return x + alpha * h + self._beta * (x - self.convergence[-2]) def get_stepsize(self): return self._step_size.get_stepsize(self._grad_mem[-1], self.convergence[-1], len(self.convergence)) np.random.seed(42) n = 100 A = np.random.randn(n, n) A = A.T.dot(A) x_true = np.random.randn(n) b = A.dot(x_true) f = lambda x: 0.5 * x.dot(A.dot(x)) - b.dot(x) grad = lambda x: A.dot(x) - b A_eigvals = np.linalg.eigvalsh(A) L = np.max(A_eigvals) mu = np.min(A_eigvals) print(L, mu) print("Condition number = {}".format(L / mu)) alpha_opt = 4 / (np.sqrt(L) + np.sqrt(mu))**2 beta_opt = np.maximum((1 - np.sqrt(alpha_opt * L))**2, (1 - np.sqrt(alpha_opt * mu))**2) print(alpha_opt, beta_opt) beta_test = 0.95 methods = { "GD fixed": fo.GradientDescent(f, grad, ss.ConstantStepSize(1 / L)), "GD Armijo": fo.GradientDescent(f, grad, ss.Backtracking("Armijo", rho=0.5, beta=0.1, init_alpha=1.)), r"HB, $\beta = {}$".format(beta_test): HeavyBall(f, grad, ss.ConstantStepSize(1 / L), beta=beta_test), "HB optimal": HeavyBall(f, grad, ss.ConstantStepSize(alpha_opt), beta = beta_opt), "CG": fo.ConjugateGradientQuad(A, b) } x0 = np.random.randn(n) max_iter = 5000 tol = 1e-6 for m in methods: _ = methods[m].solve(x0=x0, max_iter=max_iter, tol=tol) figsize = (10, 8) fontsize = 26 plt.figure(figsize=figsize) for m in methods: plt.semilogy([np.linalg.norm(grad(x)) for x in methods[m].get_convergence()], label=m) plt.legend(fontsize=fontsize, loc="best") plt.xlabel("Number of iteration, $k$", fontsize=fontsize) plt.ylabel(r"$\| f'(x_k)\|_2$", fontsize=fontsize) plt.xticks(fontsize=fontsize) _ = plt.yticks(fontsize=fontsize) for m in methods: print(m) %timeit methods[m].solve(x0=x0, max_iter=max_iter, tol=tol) """ Explanation: Beyond gradient descent, vol. 2 Концепция оптимальных методов, метод тяжёлого шарика ускоренный метод Нестерова Схема получения оценок снизу на сложность методов и задач Фиксируем класс функций $\mathcal{F}$ Фиксируем класс методов оптимизации $\mathcal{M}$ Ищем настолько плохую функцию из класса $\mathcal{F}$, что любой метод из класса $\mathcal{M}$ сходится не лучше, чем некоторая оценка Такая оценка называется оценкой снизу Пример Фиксируем класс методов Рассмотрим такие методы, что $$ x_{k+1} = x_0 + \texttt{span}{f'(x_0), \ldots, f'(x_k)} $$ Далее в рамках этого семинара для краткости только такие методы будем называть методами первого порядка Весь последующий анализ НЕ применим, если $$ x_{k+1} = x_0 + G(f'(x_0), \ldots, f'(x_k)), $$ где $G$ - некоторая нелинейная функция С такими методами мы познакомимся на одном из ближайших занятий Фиксируем класс функций Выпуклые функции с липшицевым градиентом Теорема. Существует выпуклая функция с Липшицевым градиентом, такая что $$ f(x_t) - f^ \geq \frac{3L\|x_0 - x^\|_2^2}{32(t+1)^2}, $$ где $x_k = x_0 + \texttt{span}{f'(x_0), \ldots, f'(x_{k-1})}$, $1 \leq k \leq t$ Привести пример такой функции и доказать эту теорему Вам надо в домашнем задании. Сильно выпуклые функции с липшицевым градиентом Теорема. 
Существует сильно выпуклая функция с Липшицевым градиентом, такая что $$ f(x_t) - f^ \geq \frac{\mu}{2}\left( \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1} \right)^{2t}\|x_0 - x^\|_2^2, $$ где $\kappa = \frac{L}{\mu}$ и $x_k = x_0 + \texttt{span}{f'(x_0), \ldots, f'(x_{k-1})}$, $1 \leq k \leq t$ Привести пример такой функции и доказать эту теорему Вам надо в домашнем задании. Оценки сходимости известных методов Оценки сходимости для градиентного спуска: напоминание Пусть $f(x)$ дифференцируема на $\mathbb{R}^n$ $f(x)$ выпукла $f'(x)$ удовлетворяет условию Липшица с константой $L$ $\alpha = \dfrac{1}{L}$ Тогда $$ f(x_k) - f^ \leq \dfrac{2L \| x_0 - x^\|^2_2}{k+4} $$ Пусть $f(x)$ дифференцируема на $\mathbb{R}^n$, градиент $f(x)$ удовлетворяет условию Липшица с константой $L$ $f(x)$ является сильно выпуклой с константой $\mu$ $\alpha = \dfrac{2}{\mu + L}$ Тогда для градиентного спуска выполнено: $$ \| x_k - x^ \|_2 \leq \left( \dfrac{\kappa - 1}{\kappa + 1} \right)^k \|x_0 - x^\|_2 $$ $$f(x_k) - f^ \leq \dfrac{L}{2} \left( \dfrac{\kappa - 1}{\kappa + 1} \right)^{2k} \| x_0 - x^\|^2_2, $$ где $\kappa = \frac{L}{\mu}$ Оценки сходимости для метода сопряжённых градиентов: напоминание Для сильно выпуклой квадратичной функции $$ f(x) = \frac{1}{2}x^{\top}Ax - b^{\top}x $$ и метода сопряжённых градиентов справедлива следующая оценка сходимости $$ 2 (f_k - f^) = \| x_{k} - x^ \|_A \leq 2\left( \dfrac{\sqrt{\kappa(A)} - 1}{\sqrt{\kappa(A)} + 1} \right)^k \|x_0 - x^*\|_A, $$ где $\kappa(A) = \frac{\lambda_1(A)}{\lambda_n(A)} = \frac{L}{\mu}$ - число обусловленности матрицы $A$, $\lambda_1(A) \geq ... \geq \lambda_n(A) > 0$ - собственные значения матрицы $A$ Can we do better? Существует ли метод, который сходится в соответствии с нижними оценками для - произвольной сильно выпуклой функции с липшицевым градиентом (не только квадратичной)? - произвольной выпуклой функции с липшицевым градиентом? Метод тяжёлого шарика (heavy-ball method) Предложен в 1964 г. Б.Т. Поляком <img src="./polyak.jpeg"> Для квадратичной целевой функции зигзагообразное поведение градиентного спуска обусловлено неоднородностью направлений Давайте учитывать предыдущие направления для поиска новой точки Метод тяжёлого шарика $$ x_{k+1} = x_k - \alpha_k f'(x_k) + {\color{red}{\beta_k(x_k - x_{k-1})}} $$ Помимо параметра шага вдоль антиградиента $\alpha_k$ появился ещё один параметр $\beta_k$ Геометрическая интерпретация метода тяжёлого шарика Картинка отсюда <img src="./heavy_ball.png" width=600 align="center"> Теорема сходимости Пусть $f$ сильно выпукла с Липшицевым градиентом. Тогда для $$ \alpha_k = \frac{4}{(\sqrt{L} + \sqrt{\mu})^2} $$ и $$ \beta_k = \max(|1 - \sqrt{\alpha_k L}|^2, |1 - \sqrt{\alpha_k \mu}|^2) $$ справедлива следующая оценка сходимости $$ \left\| \begin{bmatrix} x_{k+1} - x^ \ x_k - x^ \end{bmatrix} \right\|_2 \leq \left( \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1} \right)^k \left \| \begin{bmatrix} x_1 - x^ \ x_0 - x^ \end{bmatrix} \right \|_2 $$ Совпадает с оценкой снизу для методов первого порядка! 
Оптимальные параметры $\alpha_k$ и $\beta_k$ определяются через неизвестные константы $L$ и $\mu$ Схема доказательства Перепишем метод как \begin{align} \begin{bmatrix} x_{k+1}\ x_k \end{bmatrix} = \begin{bmatrix} (1 + \beta_k)I & -\beta_k I\ I & 0 \end{bmatrix} \begin{bmatrix} x_k\ x_{k-1} \end{bmatrix} + \begin{bmatrix} -\alpha_k f'(x_k)\ 0 \end{bmatrix} \end{align} Используем теорему из анализа \begin{align} \begin{bmatrix} x_{k+1} - x^\ x_k - x^ \end{bmatrix} = \underbrace{ \begin{bmatrix} (1 + \beta_k)I - \alpha_k \int_0^1 f''(x(\tau))d\tau & -\beta_k I\ I & 0 \end{bmatrix}}_{=A_k}\begin{bmatrix} x_k - x^\ x_{k-1} - x^\end{bmatrix}, \end{align} где $x(\tau) = x_k + \tau(x^* - x_k) $ - В силу интегральной теоремы о среднем $A_k(x) = \int_0^1 f''(x(\tau))d\tau = f''(z)$, поэтому $L$ и $\mu$ ограничивают спектр $A_k(x)$ - Сходимость зависит от спектрального радиуса матрицы итераций $A_k$ Получим оценку на спектр $A_k$ $$ A_k = \begin{bmatrix} (1 + \beta_k)I - \alpha_k A(x_k) & -\beta_k I \ I & 0 \end{bmatrix} $$ Пусть $A(x_k) = U\Lambda(x_k) U^{\top}$, поскольку гессиан - симметричная матрица, тогда $$ \begin{bmatrix} U^{\top} & 0 \ 0 & U^{\top} \end{bmatrix} \begin{bmatrix} (1 + \beta_k)I - \alpha_k A(x_k) & -\beta_k I \ I & 0 \end{bmatrix} \begin{bmatrix} U & 0\ 0 & U \end{bmatrix} = \begin{bmatrix} (1 + \beta_k)I - \alpha_k \Lambda(x_k) & -\beta_k I \ I & 0 \end{bmatrix} = \hat{A}_k $$ Ортогональное преобразование не меняет спектральный радиус матрицы Далее сделаем перестановку строк и столбцов так, чтобы $$ \hat{A}_k \simeq \mathrm{diag}(T_1, \ldots, T_n), $$ где $T_i = \begin{bmatrix} 1 + \beta_k - \alpha_k \lambda_i & -\beta_k \ 1 & 0 \end{bmatrix}$ и $\simeq$ обозначает равенство спектральных радиусов поскольку матрица перестановки является ортогональной Покажем как сделать такую перестановку на примере матрицы $4 \times 4$ $$ \begin{bmatrix}a & 0 & c & 0 \ 0 & b & 0 & c \ 1 & 0 & 0 & 0\ 0 & 1 & 0 & 0 \end{bmatrix} \rightarrow \begin{bmatrix}a & 0 & c & 0 \ 1 & 0 & 0 & 0 \ 0 & b & 0 & c \ 0 & 1 & 0 & 0 \end{bmatrix} \to \begin{bmatrix}a & c & 0 & 0 \ 1 & 0 & 0 & 0 \ 0 & 0 & b & c \ 0 & 0 & 1 & 0 \end{bmatrix} $$ Свели задачу к оценке спектрального радиуса блочно-диагональной матрицы $\hat{A}_k$ $\rho(\hat{A}k) = \max\limits{i=1,\ldots,n} { |\lambda_1(T_i)|, |\lambda_2(T_i)|} $ Характеристическое уравнение для $T_i$ $$ \beta_k - u (1 + \beta_k - \alpha_k \lambda_i - u) = 0 \quad u^2 - u(1 + \beta_k - \alpha_k\lambda_i) + \beta_k = 0 $$ - Дальнейшее изучение распределения корней и их границ даёт оценку из условия теоремы Эксперименты Тестовая задача 1 $$ f(x) = \frac{1}{2}x^{\top}Ax - b^{\top}x \to \min_x, $$ где матрица $A$ плохо обусловлена, но положительно определена! 
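Before looking at the experiments, a quick quantitative feel for the difference between the $\kappa$-dependent and $\sqrt{\kappa}$-dependent rates discussed above may help. The small sketch below (with purely illustrative constants) compares the per-iteration contraction factors of plain gradient descent and of the heavy-ball / lower-bound rate, and the number of iterations each needs to shrink the error by a factor of $10^6$:

```python
import numpy as np

L, mu = 1e4, 1.0                 # illustrative Lipschitz and strong convexity constants
kappa = L / mu

q_gd = (kappa - 1) / (kappa + 1)                      # gradient descent contraction factor
q_hb = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)    # heavy ball / lower bound factor

target = 1e-6
iters_gd = int(np.ceil(np.log(target) / np.log(q_gd)))
iters_hb = int(np.ceil(np.log(target) / np.log(q_hb)))
print(f"kappa = {kappa:.0f}: gradient descent ~{iters_gd} iterations, heavy ball ~{iters_hb}")
```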
End of explanation """ n = 300 m = 1000 import sklearn.datasets as skldata import jax import jax.numpy as jnp import scipy.optimize as scopt from jax.config import config config.update("jax_enable_x64", True) X, y = skldata.make_classification(n_classes=2, n_features=n, n_samples=m, n_informative=n//3) C = 1 @jax.jit def f(w): return jnp.linalg.norm(w)**2 / 2 + C * jnp.mean(jnp.logaddexp(jnp.zeros(X.shape[0]), -y * (X @ w))) # def grad(w): # denom = scspec.expit(-y * X.dot(w)) # return w - C * X.T.dot(y * denom) / X.shape[0] autograd_f = jax.jit(jax.grad(f)) x0 = jnp.ones(n) print("Initial function value = {}".format(f(x0))) print("Initial gradient norm = {}".format(jnp.linalg.norm(autograd_f(x0)))) alpha_test = 5e-3 beta_test = 0.9 methods = { r"GD, $\alpha_k = {}$".format(alpha_test): fo.GradientDescent(f, autograd_f, ss.ConstantStepSize(alpha_test)), "GD Armijo": fo.GradientDescent(f, autograd_f, ss.Backtracking("Armijo", rho=0.7, beta=0.1, init_alpha=1.)), r"HB, $\beta = {}$".format(beta_test): HeavyBall(f, autograd_f, ss.ConstantStepSize(alpha_test), beta=beta_test), } # x0 = np.random.rand(n) # x0 = jnp.zeros(n) x0 = jax.random.normal(jax.random.PRNGKey(0), (X.shape[1],)) max_iter = 400 tol = 1e-3 for m in methods: print(m) _ = methods[m].solve(x0=x0, max_iter=max_iter, tol=tol) scopt_cg_array = [] def callback(x, arr): arr.append(x) scopt_cg_callback = lambda x: callback(x, scopt_cg_array) x = scopt.minimize(f, x0, tol=tol, method="CG", jac=autograd_f, callback=scopt_cg_callback, options={"maxiter": max_iter}) x = x.x figsize = (10, 8) fontsize = 26 plt.figure(figsize=figsize) for m in methods: plt.semilogy([np.linalg.norm(autograd_f(x)) for x in methods[m].get_convergence()], label=m) plt.semilogy([np.linalg.norm(autograd_f(x)) for x in scopt_cg_array], label="CG PR") plt.legend(fontsize=fontsize, loc="best") plt.xlabel("Number of iteration, $k$", fontsize=fontsize) plt.ylabel(r"$\| f'(x_k)\|_2$", fontsize=fontsize) plt.xticks(fontsize=fontsize) _ = plt.yticks(fontsize=fontsize) for m in methods: print(m) %timeit methods[m].solve(x0=x0, max_iter=max_iter, tol=tol) """ Explanation: Тестовая задача 2 $$ f(w) = \frac12 \|w\|2^2 + C \frac1m \sum{i=1}^m \log (1 + \exp(- y_i \langle x_i, w \rangle)) \to \min_w $$ End of explanation """ import numpy as np import liboptpy.unconstr_solvers.fo as fo import liboptpy.step_size as ss import liboptpy.base_optimizer as base import matplotlib.pyplot as plt %matplotlib inline np.random.seed(42) n = 100 A = np.random.randn(n, n) A = A.T.dot(A) A_eigvals = np.linalg.eigvalsh(A) mu = np.min(A_eigvals) A = A - (mu - 1e-6) * np.eye(n) x_true = np.random.randn(n) b = A.dot(x_true) f = lambda x: 0.5 * x.dot(A.dot(x)) - b.dot(x) grad = lambda x: A.dot(x) - b A_eigvals = np.linalg.eigvalsh(A) L = np.max(A_eigvals) mu = np.min(A_eigvals) print(L, mu) class HeavyBall(base.LineSearchOptimizer): def __init__(self, f, grad, step_size, beta, **kwargs): super().__init__(f, grad, step_size, **kwargs) self._beta = beta def get_direction(self, x): self._current_grad = self._grad(x) return -self._current_grad def _f_update_x_next(self, x, alpha, h): if len(self.convergence) < 2: return x + alpha * h else: return x + alpha * h + self._beta * (x - self.convergence[-2]) def get_stepsize(self): return self._step_size.get_stepsize(self._grad_mem[-1], self.convergence[-1], len(self.convergence)) beta_test = 0.8 methods = { "GD fixed": fo.GradientDescent(f, grad, ss.ConstantStepSize(1 / L)), "GD Armijo": fo.GradientDescent(f, grad, ss.Backtracking("Armijo", rho=0.5, 
beta=0.1, init_alpha=1.)), r"HB, $\beta = {}$".format(beta_test): HeavyBall(f, grad, ss.ConstantStepSize(1 / L), beta=beta_test), "Nesterov": fo.AcceleratedGD(f, grad, ss.ConstantStepSize(1 / L)), } x0 = np.random.randn(n) max_iter = 10000 tol = 1e-6 for m in methods: _ = methods[m].solve(x0=x0, max_iter=max_iter, tol=tol) figsize = (10, 8) fontsize = 26 plt.figure(figsize=figsize) for m in methods: plt.semilogy([np.linalg.norm(grad(x)) for x in methods[m].get_convergence()], label=m) # plt.semilogy([np.linalg.norm(x - x_true) for x in methods[m].get_convergence()], label=m) # plt.semilogy([f(x) - f(x_true) for x in methods[m].get_convergence()], label=m) plt.legend(fontsize=fontsize, loc="best") plt.xlabel("Number of iteration, $k$", fontsize=fontsize) # plt.ylabel(r"$\| f'(x_k)\|_2$", fontsize=fontsize) plt.xticks(fontsize=fontsize) _ = plt.yticks(fontsize=fontsize) for m in methods: print(m) %timeit methods[m].solve(x0=x0, max_iter=max_iter, tol=tol) """ Explanation: Главное про метод тяжёлого шарика Двухшаговый метод Не обязательно монотонный Параметры зависят от неизвестных констант Решает проблему осцилляций для плохо обусловленных задач Сходимость для сильно выпуклых функций совпадает с оптимальной оценкой Ускоренный метод Нестерова Предложен в 1983 г. Ю.Е. Нестеровым <img src="./nesterov.jpeg"> Одна из возможных форм записи \begin{equation} \begin{split} & y_0 = x_0 \ & x_{k+1} = y_k - \alpha_k f'(y_k)\ & y_{k+1} = x_{k+1} + \frac{k}{k + 3} (x_{k+1} - x_k) \end{split} \end{equation} Сравните с методом тяжёлого шарика Tакже не обязательно монотонен Для любителей геометрии есть альтернативный метод под названием geometric descent с такой же скоростью сходимости Геометрическая интерпретация ускоренного метода Нестерова <img src="nesterov_plot.png" width=600> Теорема сходимости Пусть $f$ выпукла с Липшицевым градиентом, а шаг $\alpha_k = \frac{1}{L}$. Тогда ускоренный метод Нестерова сходится как $$ f(x_k) - f^ \leq \frac{2L \|x_0 - x^\|_2^2}{(k+1)^2} $$ Пусть $f$ сильно выпукла с липшицевым градиентом. 
Тогда ускоренный метод Нестерова при шаге $\alpha_k = \frac{1}{L}$ сходится как $$ f(x_k) - f^* \leq L\|x_k - x_0\|_2^2 \left(1 - \frac{1}{\sqrt{\kappa}} \right)^k $$ Тестовая задача $$ f(x) = \frac{1}{2}x^{\top}Ax - b^{\top}x \to \min_x, $$ где матрица $A$ положительно полуопределённая, то есть функция НЕ является сильно выпуклой End of explanation """ beta_test = 0.9 methods = { "GD Armijo": fo.GradientDescent(f, grad, ss.Backtracking("Armijo", rho=0.5, beta=0.1, init_alpha=1.)), r"HB, $\beta = {}$".format(beta_test): HeavyBall(f, grad, ss.ConstantStepSize(1 / L), beta=beta_test), "Nesterov": fo.AcceleratedGD(f, grad, ss.ConstantStepSize(1e-3)), "Nesterov adaptive": fo.AcceleratedGD(f, grad, ss.Backtracking(rule_type="Lipschitz", rho=0.5, init_alpha=1)), } x0 = np.zeros(n) max_iter = 2000 tol = 1e-6 for m in methods: _ = methods[m].solve(x0=x0, max_iter=max_iter, tol=tol) figsize = (10, 8) fontsize = 26 plt.figure(figsize=figsize) for m in methods: plt.semilogy([np.linalg.norm(grad(x)) for x in methods[m].get_convergence()], label=m) plt.legend(fontsize=fontsize, loc="best") plt.xlabel("Number of iteration, $k$", fontsize=fontsize) plt.ylabel(r"$\| f'(x_k)\|_2$", fontsize=fontsize) plt.xticks(fontsize=fontsize) _ = plt.yticks(fontsize=fontsize) for m in methods: print(m) %timeit methods[m].solve(x0=x0, max_iter=max_iter, tol=tol) """ Explanation: Главное про ускоренный метод Нестерова Теоретически оптимальная скорость сходимости для выпуклых и сильно выпуклых функций Необходимо подбирать шаг Волнообразное поведение может быть подавлено с помощью рестартов Нелинейные композиции градиентов могут дать более быстрые на практике методы (см самое начало семинара) Оригинальное доказательство сложно понять на интуитивном уровне, есть гораздо более простые подходы, будут рассказаны позднее Адаптивный выбор константы $L$ В методе Нестерова шаг у градиента равен $\frac{1}{L}$, где $L$ - константа Липшица градиента Однако она обычно неизвестна НО! 
We do have a condition on this constant, namely $$ f(y) \leq f(x) + \langle f'(x), y - x \rangle + \frac{L}{2}\|x - y\|_2^2 $$ This is analogous to the Armijo rule and other rules of that kind. See the lecturer's book for more details on this method. ```python def backtracking_L(f, h, grad, x, L0, rho): L = L0 fx = f(x) gradx = grad(x) while True: y = x - 1 / L * h if f(y) <= fx - 1 / L * gradx.dot(h) + 1 / (2 * L) * h.dot(h): break else: L = L * rho return L ``` Experiments End of explanation """ import jax import jax.numpy as jnp from jax.config import config config.update("jax_enable_x64", True) import sklearn.datasets as skldata n = 300 m = 1000 X, y = skldata.make_classification(n_classes=2, n_features=n, n_samples=m, n_informative=n//3, random_state=42) C = 1 @jax.jit def f(w): return jnp.linalg.norm(w)**2 / 2 + C * jnp.mean(jnp.logaddexp(jnp.zeros(X.shape[0]), -y * (X @ w))) # def grad(w): # denom = scspec.expit(-y * X.dot(w)) # return w - C * X.T.dot(y * denom) / X.shape[0] autograd_f = jax.jit(jax.grad(f)) x0 = jnp.ones(n) print("Initial function value = {}".format(f(x0))) print("Initial gradient norm = {}".format(jnp.linalg.norm(autograd_f(x0)))) beta_test = 0.9 L_trial = (1 + C*100) methods = { "GD Armijo": fo.GradientDescent(f, autograd_f, ss.Backtracking("Armijo", rho=0.5, beta=0.01, init_alpha=1.)), r"HB, $\beta = {}$".format(beta_test): HeavyBall(f, autograd_f, ss.ConstantStepSize(1 / L_trial), beta=beta_test), "Nesterov": fo.AcceleratedGD(f, autograd_f, ss.ConstantStepSize(1 / L_trial)), "Nesterov adaptive": fo.AcceleratedGD(f, autograd_f, ss.Backtracking(rule_type="Lipschitz", rho=0.5, init_alpha=1)), } max_iter = 2000 tol = 1e-6 for m in methods: _ = methods[m].solve(x0=x0, max_iter=max_iter, tol=tol) figsize = (10, 8) fontsize = 26 plt.figure(figsize=figsize) for m in methods: plt.semilogy([np.linalg.norm(autograd_f(x)) for x in methods[m].get_convergence()], label=m) plt.legend(fontsize=fontsize, loc="best") plt.xlabel("Number of iteration, $k$", fontsize=fontsize) plt.ylabel(r"$\| f'(x_k)\|_2$", fontsize=fontsize) plt.xticks(fontsize=fontsize) _ = plt.yticks(fontsize=fontsize) for m in methods: print(m) %timeit methods[m].solve(x0=x0, max_iter=max_iter, tol=tol) """ Explanation: Experiment on a non-quadratic problem $$ f(w) = \frac12 \|w\|_2^2 + C \frac1m \sum_{i=1}^m \log (1 + \exp(- y_i \langle x_i, w \rangle)) \to \min_w $$ End of explanation """
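"""
Explanation: The experiments above rely on the course's helper modules (fo, ss, HeavyBall), which are not shown in this notebook. As a purely illustrative, self-contained sketch, and not the course's reference implementation, the cell below implements Nesterov's accelerated gradient with the backtracking estimate of the Lipschitz constant described above, applied to a small random quadratic. The rule for shrinking L again, the factor rho=2 and the test problem are all assumptions made for this example.
End of explanation
"""
import numpy as np

def accelerated_gd_backtracking(f, grad, x0, L0=1.0, rho=2.0, max_iter=500, tol=1e-6):
    # Nesterov's accelerated gradient; L is adjusted with the descent-lemma test
    x, y, L = x0.copy(), x0.copy(), L0
    for k in range(max_iter):
        g = grad(y)
        if np.linalg.norm(g) < tol:
            break
        # increase L until f(y - g/L) <= f(y) - ||g||^2 / (2L)
        while f(y - g / L) > f(y) - g.dot(g) / (2 * L):
            L *= rho
        x_next = y - g / L
        y = x_next + k / (k + 3) * (x_next - x)
        x = x_next
        L /= rho  # heuristic: let the estimate shrink again on the next iteration
    return x

np.random.seed(0)
B = np.random.randn(20, 10)
A = B.T @ B                      # positive semidefinite test matrix
b = np.random.randn(10)
f_test = lambda x: 0.5 * x @ A @ x - b @ x
grad_test = lambda x: A @ x - b
x_hat = accelerated_gd_backtracking(f_test, grad_test, np.zeros(10))
print(np.linalg.norm(grad_test(x_hat)))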
mne-tools/mne-tools.github.io
0.20/_downloads/eea7e38645d4176f944e2f8d02a34fde/plot_run_ica.ipynb
bsd-3-clause
# Authors: Denis Engemann <denis.engemann@gmail.com> # # License: BSD (3-clause) import mne from mne.preprocessing import ICA, create_ecg_epochs from mne.datasets import sample print(__doc__) """ Explanation: Compute ICA components on epochs ICA is fit to MEG raw data. We assume that the non-stationary EOG artifacts have already been removed. The sources matching the ECG are automatically found and displayed. <div class="alert alert-info"><h4>Note</h4><p>This example does quite a bit of processing, so even on a fast machine it can take about a minute to complete.</p></div> End of explanation """ data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' raw = mne.io.read_raw_fif(raw_fname, preload=True) raw.pick_types(meg=True, eeg=False, exclude='bads', stim=True) raw.filter(1, 30, fir_design='firwin') # longer + more epochs for more artifact exposure events = mne.find_events(raw, stim_channel='STI 014') epochs = mne.Epochs(raw, events, event_id=None, tmin=-0.2, tmax=0.5) """ Explanation: Read and preprocess the data. Preprocessing consists of: MEG channel selection 1-30 Hz band-pass filter epoching -0.2 to 0.5 seconds with respect to events End of explanation """ ica = ICA(n_components=0.95, method='fastica').fit(epochs) ecg_epochs = create_ecg_epochs(raw, tmin=-.5, tmax=.5) ecg_inds, scores = ica.find_bads_ecg(ecg_epochs) ica.plot_components(ecg_inds) """ Explanation: Fit ICA model using the FastICA algorithm, detect and plot components explaining ECG artifacts. End of explanation """ ica.plot_properties(epochs, picks=ecg_inds) """ Explanation: Plot properties of ECG components: End of explanation """
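"""
Explanation: A possible follow-up, not part of the original example: mark the detected ECG components for exclusion and apply the ICA solution to a copy of the epochs, then compare the averages before and after cleaning. Averaging over all conditions here is only for illustration.
End of explanation
"""
# mark the ECG-related components and remove them from a preloaded copy of the data
ica.exclude = ecg_inds
epochs_clean = ica.apply(epochs.copy().load_data())
# quick before/after comparison of the evoked response
epochs.average().plot()
epochs_clean.average().plot()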
phoebe-project/phoebe2-docs
2.2/tutorials/ecc.ipynb
gpl-3.0
!pip install -I "phoebe>=2.2,<2.3" """ Explanation: Eccentricity (Volume Conservation) Setup Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release). End of explanation """ %matplotlib inline import phoebe from phoebe import u # units import numpy as np import matplotlib.pyplot as plt logger = phoebe.logger() b = phoebe.default_binary() """ Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details. End of explanation """ print(b.get_parameter(qualifier='ecc')) print(b.get_parameter(qualifier='ecosw', context='component')) print(b.get_parameter(qualifier='esinw', context='component')) """ Explanation: Relevant Parameters End of explanation """ print(b.get_parameter(qualifier='ecosw', context='constraint')) print(b.get_parameter(qualifier='esinw', context='constraint')) """ Explanation: Relevant Constraints End of explanation """ b.add_dataset('mesh', times=np.linspace(0,1,11), columns=['volume']) b.set_value('ecc', 0.2) b.run_compute() print(b['volume@primary@model']) afig, mplfig = b['mesh01'].plot(x='times', y='volume', show=True) b.remove_dataset('mesh01') """ Explanation: Influence on Meshes (volume conservation) End of explanation """ b.add_dataset('rv', times=np.linspace(0,1,51)) b.run_compute() afig, mplfig = b['rv@model'].plot(show=True) b.remove_dataset('rv01') """ Explanation: Influence on Radial Velocities End of explanation """ b.add_dataset('lc', times=np.linspace(0,1,51)) b.run_compute() afig, mplfig = b['lc@model'].plot(show=True) """ Explanation: Influence on Light Curves (fluxes) End of explanation """
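"""
Explanation: As a quick illustrative check, not part of the original tutorial: if the default constraints have their usual form, the constrained parameters should satisfy ecosw = ecc * cos(per0) and esinw = ecc * sin(per0). The component name 'binary' and the unit handling below are assumptions about the default binary bundle.
End of explanation
"""
ecc = b.get_value(qualifier='ecc', component='binary', context='component')
per0 = b.get_value(qualifier='per0', component='binary', context='component', unit=u.rad)
print(ecc * np.cos(per0), b.get_value(qualifier='ecosw', component='binary', context='component'))
print(ecc * np.sin(per0), b.get_value(qualifier='esinw', component='binary', context='component'))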
yashdeeph709/Algorithms
PythonBootCamp/Complete-Python-Bootcamp-master/.ipynb_checkpoints/Statements Assessment Test - Solutions-checkpoint.ipynb
apache-2.0
st = 'Print only the words that start with s in this sentence' for word in st.split(): if word[0] == 's': print word """ Explanation: Statements Assessment Solutions Use for, split(), and if to create a Statement that will print out words that start with 's': End of explanation """ range(0,11,2) """ Explanation: Use range() to print all the even numbers from 0 to 10. End of explanation """ [x for x in range(1,50) if x%3 == 0] """ Explanation: Use List comprehension to create a list of all numbers between 1 and 50 that are divisble by 3. End of explanation """ st = 'Print every word in this sentence that has an even number of letters' for word in st.split(): if len(word)%2 == 0: print word+" <-- has an even length!" """ Explanation: Go through the string below and if the length of a word is even print "even!" End of explanation """ for num in xrange(1,101): if num % 5 == 0 and num % 3 == 0: print "FizzBuzz" elif num % 3 == 0: print "Fizz" elif num % 5 == 0: print "Buzz" else: print num """ Explanation: Write a program that prints the integers from 1 to 100. But for multiples of three print "Fizz" instead of the number, and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz". End of explanation """ st = 'Create a list of the first letters of every word in this string' [word[0] for word in st.split()] """ Explanation: Use List Comprehension to create a list of the first letters of every word in the string below: End of explanation """
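"""
Explanation: The solutions above are written for Python 2 (print statements, xrange, and range returning a list). As an aside, and only as rewrites that are not part of the original assessment, a few of the same solutions look like this under Python 3.
End of explanation
"""
st = 'Print only the words that start with s in this sentence'
for word in st.split():
    if word[0] == 's':
        print(word)

print(list(range(0, 11, 2)))  # range is lazy in Python 3, so wrap it in list()

for num in range(1, 101):
    if num % 3 == 0 and num % 5 == 0:
        print("FizzBuzz")
    elif num % 3 == 0:
        print("Fizz")
    elif num % 5 == 0:
        print("Buzz")
    else:
        print(num)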
samuelsinayoko/kaggle-housing-prices
ols/regression_full.ipynb
mit
dfnum = pd.read_csv('transformed_numerical_dataset_imputed.csv', index_col=['Dataset','Id']) dfnum.head() dfcat = pd.read_csv('cleaned_categorical_vars_with_colz_sorted_by_goodness.csv', index_col=['Dataset','Id']) dfcat.head() dfcat.head() df = pd.concat([dfnum, dfcat.iloc[:, :ncat]], axis=1) df.shape """ Explanation: Load data Generated in notebook data_exploration_numerical_features.ipynb End of explanation """ target = pd.read_csv('../data/train_target.csv') scaler = sk.preprocessing.StandardScaler() def transform_target(target): logtarget = np.log1p(target / 1000) return scaler.fit_transform(logtarget) def inverse_transform_target(target_t): logtarget = scaler.inverse_transform(target_t) return np.expm1(logtarget) * 1000 target_t = transform_target(target) # Test assert all(target == inverse_transform_target(target_t)) """ Explanation: Recreate transformed (standardized) sale price End of explanation """ data = df.loc['train',:].copy() data['SalePrice'] = target_t data.columns desc = 'SalePrice' + \ ' ~ ' + \ ' + '.join(data.drop('SalePrice', axis=1).iloc[:, :-ncat]) + \ ' + ' + \ ' + '.join('C({})'.format(col) for col in data.drop('SalePrice', axis=1).iloc[:, -ncat:]) desc """ Explanation: Ordinary Least Squares End of explanation """ regression2 = smapi.ols(desc, data=data).fit() regression2.summary() """ Explanation: As can be seen below, using more numerical values improves R-squared to 0.88 which is pretty good, though there's of course a risk of overfitting. End of explanation """ def get_data(X, y): df = X.copy() df['SalePrice'] = y return df def ols3(X, y): data = get_data(X, y) return smapi.ols(desc, data=data) """ Explanation: Cross validation End of explanation """ submission_t = regression2.predict(df.loc['test',:]) """ Explanation: Make a submission End of explanation """ submission = inverse_transform_target(submission_t) submission def save(filename, submission): df = pd.DataFrame(data={ "Id": np.arange(len(submission)) + 1461, "SalePrice": submission }) df.to_csv(filename, index=False) save('ols_full_{}.csv'.format(ncat), submission) """ Explanation: Scale the result End of explanation """
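"""
Explanation: The cross-validation section above defines helpers but does not show a scoring loop. Below is a minimal sketch of one possible loop, reusing the data and desc objects already defined; the fold count, the random seed and the RMSE-on-transformed-target metric are assumptions, and folds containing categorical levels unseen in the training split may need extra handling.
End of explanation
"""
from sklearn.model_selection import KFold

kf = KFold(n_splits=5, shuffle=True, random_state=0)
rmses = []
for train_idx, test_idx in kf.split(data):
    train, test = data.iloc[train_idx], data.iloc[test_idx]
    # refit the same formula on the training fold and score on the held-out fold
    fit = smapi.ols(desc, data=train).fit()
    pred = fit.predict(test)
    rmses.append(np.sqrt(np.mean((pred - test['SalePrice']) ** 2)))
print(np.mean(rmses), np.std(rmses))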
mne-tools/mne-tools.github.io
0.16/_downloads/plot_lcmv_beamformer_volume.ipynb
bsd-3-clause
# Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr> # # License: BSD (3-clause) import numpy as np import matplotlib.pyplot as plt import mne from mne.datasets import sample from mne.beamformer import make_lcmv, apply_lcmv from nilearn.plotting import plot_stat_map from nilearn.image import index_img print(__doc__) # sphinx_gallery_thumbnail_number = 3 """ Explanation: Compute LCMV inverse solution on evoked data in volume source space Compute LCMV inverse solution on an auditory evoked dataset in a volume source space. It stores the solution in a nifti file for visualisation, e.g. with Freeview. End of explanation """ data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif' fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-vol-7-fwd.fif' # Get epochs event_id, tmin, tmax = [1, 2], -0.2, 0.5 # Setup for reading the raw data raw = mne.io.read_raw_fif(raw_fname, preload=True) raw.info['bads'] = ['MEG 2443', 'EEG 053'] # 2 bads channels events = mne.read_events(event_fname) # Set up pick list: gradiometers and magnetometers, excluding bad channels picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True, eog=True, exclude='bads') # Pick the channels of interest raw.pick_channels([raw.ch_names[pick] for pick in picks]) # Re-normalize our empty-room projectors, so they are fine after subselection raw.info.normalize_proj() # Read epochs proj = False # already applied epochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=(None, 0), preload=True, proj=proj, reject=dict(grad=4000e-13, mag=4e-12, eog=150e-6)) evoked = epochs.average() # Visualize sensor space data evoked.plot_joint(ts_args=dict(time_unit='s'), topomap_args=dict(time_unit='s')) """ Explanation: Data preprocessing: End of explanation """ # Read regularized noise covariance and compute regularized data covariance noise_cov = mne.compute_covariance(epochs, tmin=tmin, tmax=0, method='shrunk') data_cov = mne.compute_covariance(epochs, tmin=0.04, tmax=0.15, method='shrunk') # Read forward model forward = mne.read_forward_solution(fname_fwd) # Compute weights of free orientation (vector) beamformer with weight # normalization (neural activity index, NAI). Providing a noise covariance # matrix enables whitening of the data and forward solution. Source orientation # is optimized by setting pick_ori to 'max-power'. # weight_norm can also be set to 'unit-noise-gain'. Source orientation can also # be 'normal' (but only when using a surface-based source space) or None, # which computes a vector beamfomer. Note, however, that not all combinations # of orientation selection and weight normalization are implemented yet. filters = make_lcmv(evoked.info, forward, data_cov, reg=0.05, noise_cov=noise_cov, pick_ori='max-power', weight_norm='nai') # Apply this spatial filter to the evoked data. stc = apply_lcmv(evoked, filters, max_ori_out='signed') """ Explanation: Compute covariance matrices, fit and apply spatial filter. 
End of explanation """ # take absolute values for plotting stc.data[:, :] = np.abs(stc.data) # Save result in stc files stc.save('lcmv-vol') stc.crop(0.0, 0.2) # Save result in a 4D nifti file img = mne.save_stc_as_volume('lcmv_inverse.nii.gz', stc, forward['src'], mri_resolution=False) t1_fname = data_path + '/subjects/sample/mri/T1.mgz' # Plotting with nilearn ###################################################### # Based on the visualization of the sensor space data (gradiometers), plot # activity at 88 ms idx = stc.time_as_index(0.088) plot_stat_map(index_img(img, idx), t1_fname, threshold=0.45, title='LCMV (t=%.3f s.)' % stc.times[idx]) # plot source time courses with the maximum peak amplitudes at 88 ms plt.figure() plt.plot(stc.times, stc.data[np.argsort(np.max(stc.data[:, idx], axis=1))[-40:]].T) plt.xlabel('Time (ms)') plt.ylabel('LCMV value') plt.show() """ Explanation: Plot source space activity: End of explanation """
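"""
Explanation: The comments above note that weight_norm can also be set to 'unit-noise-gain'. As an illustrative variant, not part of the original example, the same filter can be recomputed with that normalization and its amplitudes compared against the NAI solution; only the peak value is printed here.
End of explanation
"""
filters_ung = make_lcmv(evoked.info, forward, data_cov, reg=0.05,
                        noise_cov=noise_cov, pick_ori='max-power',
                        weight_norm='unit-noise-gain')
stc_ung = apply_lcmv(evoked, filters_ung, max_ori_out='signed')
print(np.abs(stc_ung.data).max())  # compare the scale with the NAI beamformer above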
eneskemalergin/OldBlog
_oldnotebooks/Introduction_to_Pandas-2.ipynb
mit
import pandas as pd SpotCrudePrices_2013_Data= { 'U.K. Brent' : {'2013-Q1':112.9, '2013-Q2':103.0, '2013-Q3':110.1, '2013-Q4':109.4}, 'Dubai': {'2013-Q1':108.1, '2013-Q2':100.8, '2013-Q3':106.1,'2013-Q4':106.7}, 'West Texas Intermediate': {'2013-Q1':94.4, '2013-Q2':94.2, '2013-Q3':105.8,'2013-Q4':97.4}} SpotCrudePrices_2013=pd.DataFrame.from_dict(SpotCrudePrices_2013_Data) SpotCrudePrices_2013 """ Explanation: Hello all, I am back with another of my notes on Pandas, Today I will focus on indexing and selection of data from pandas object. It's really important since effective use of pandas requires a good knowledge of the indexing and selection of data. In the last post while introduding data structures, I talked about basic indexing, I will show here as well for the sake of completeness. Basic Indexing End of explanation """ dubaiPrices=SpotCrudePrices_2013['Dubai'] dubaiPrices """ Explanation: We can select the prices for the available time periods of Dubai crude oil by using the [] operator: End of explanation """ SpotCrudePrices_2013[['West Texas Intermediate','U.K. Brent']] """ Explanation: We can also pass a list of columns to the [] operator in order to select the columns in a particular order: End of explanation """ SpotCrudePrices_2013.Dubai """ Explanation: Rows cannot be selected with the bracket operator [] in a DataFrame. One can retrieve values from a Series, DataFrame, or Panel directly as an attribute using dot operator End of explanation """ SpotCrudePrices_2013.columns=['Dubai','UK_Brent','West_Texas_Intermediate'] SpotCrudePrices_2013 SpotCrudePrices_2013.UK_Brent """ Explanation: However, this only works if the index element is a valid Python identifier, Dubai in this case is valid but U.K. Brent is not. We can change the names to valid identifiers: End of explanation """ SpotCrudePrices_2013[[1]] """ Explanation: We can also select prices by specifying a column index number to select column 1 (U.K. Brent) End of explanation """ SpotCrudePrices_2013[2:] # Reverse the order of rows in DataFrame SpotCrudePrices_2013[::-1] # Selecting Dubai's data as Pandas Series dubaiPrices = SpotCrudePrices_2013['Dubai'] # Obtain the last 3 rows or all rows but the first: dubaiPrices[1:] # Obtain all rows but the last dubaiPrices[:-1] # Reverse the rows dubaiPrices[::-1] """ Explanation: We can slice a range by using the [] operator. The syntax of the slicing operator exactly matches that of NumPy's: ar[startIndex: endIndex: stepValue] For a DataFrame, [] slices across rows, Obrain all rows starting from index 2: End of explanation """ NYC_SnowAvgsData={'Months' : ['January','February','March','April', 'November', 'December'], 'Avg SnowDays' : [4.0,2.7,1.7,0.2,0.2,2.3], 'Avg Precip. (cm)' : [17.8,22.4,9.1,1.5,0.8,12.2], 'Avg Low Temp. (F)' : [27,29,35,45,42,32] } NYC_SnowAvgs = pd.DataFrame(NYC_SnowAvgsData, index=NYC_SnowAvgsData['Months'], columns=['Avg SnowDays','Avg Precip. (cm)','Avg Low Temp. (F)']) NYC_SnowAvgs # Using a single label: NYC_SnowAvgs.loc['January'] # Using a list of labels NYC_SnowAvgs.loc[['January', 'April']] # Using a Label range: NYC_SnowAvgs.loc['January' : 'March'] """ Explanation: Label, Integer, and Mixed Indexing In addition to the standard indexing operator [] and attribute operator, there are operators provided in pandas to make the job of indexing easier and more convenient. By label indexing, we generally mean indexing by a header name, which tends to be a string value in most cases. 
These operators are as follows: The .loc operator: It allows label-oriented indexing The .iloc operator: It allows integer-based indexing The .ix operator: It allows mixed label and integer-based indexing Label-Oriented Indexing The .loc operator supports pure label-based indexing. End of explanation """ NYC_SnowAvgs.loc[:,'Avg SnowDays'] # to select a specific coordinate value NYC_SnowAvgs.loc['March','Avg SnowDays'] # Alternative Style NYC_SnowAvgs.loc['March']['Avg SnowDays'] # Without using loc function, square bracket as follows NYC_SnowAvgs['Avg SnowDays']['March'] """ Explanation: Note that while using the .loc , .iloc , and .ix operators on a DataFrame, the row index must always be specified first. This is the opposite of the [] operator, where only columns can be selected directly. End of explanation """ NYC_SnowAvgs.loc['March'] """ Explanation: We can use the .loc operator to select the rows instead: End of explanation """ # Selecting months have less than one snow day average NYC_SnowAvgs.loc[NYC_SnowAvgs['Avg SnowDays']<1,:] # brand of crude priced above 110 a barrel for row 2013-Q1 SpotCrudePrices_2013.loc[:,SpotCrudePrices_2013.loc['2013-Q1']>110] # Using 2 .loc for more precise selection, how cool is that """ Explanation: We can use selection with boolean statements, while we are selecting in Pandas. End of explanation """ SpotCrudePrices_2013.loc['2013-Q1']>110 """ Explanation: Note that the preceding arguments involve the Boolean operators &lt; and &gt; that actually evaluate the Boolean arrays, for example: End of explanation """ import scipy.constants as phys import math sci_values=pd.DataFrame([[math.pi, math.sin(math.pi),math.cos(math.pi)], [math.e,math.log(math.e), phys.golden], [phys.c,phys.g,phys.e], [phys.m_e,phys.m_p,phys.m_n]], index=list(range(0,20,5))) sci_values # Select first two rows by using integer slicing sci_values.iloc[:2] sci_values.iloc[2,0:2] """ Explanation: Integer-Oriented Indexing The .iloc operator supports integer-based positional indexing. It accepts the following as inputs: A single integer, for example, 7 A list or array of integers, for example, [2,3] A slice object with integers, for example, 1:4 End of explanation """ sci_values.iloc[10] sci_values.loc[10] # To Slice out a specific row sci_values.iloc[2:3,:] # TO obtain a cross-section using an integer position sci_values.iloc[3] """ Explanation: Note that the arguments to .iloc are strictly positional and have nothing to do with the index values. we should use the label-indexing operator .loc instead... End of explanation """ sci_values.iloc[3,0] sci_values.iat[3,0] %timeit sci_values.iloc[3,0] %timeit sci_values.iat[3,0] """ Explanation: The .iat and .at operators can be used for a quick selection of scalar values. They are faster than them but not really common End of explanation """ stockIndexDataDF=pd.read_csv('stock_index_closing.csv') stockIndexDataDF """ Explanation: Mixed Indexing with .ix operator The .ix operator behaves like a mixture of the .loc and .iloc operators, with the .loc behavior taking precedence. 
It takes the following as possible inputs: A single label or integer A list of integeres or labels An integer slice or label slice A Boolean array In the following examples I will use this data set imported from csv TradingDate,Nasdaq,S&amp;P 500,Russell 2000 2014/01/30,4123.13,1794.19,1139.36 2014/01/31,4103.88,1782.59,1130.88 2014/02/03,3996.96,1741.89,1094.58 2014/02/04,4031.52,1755.2,1102.84 2014/02/05,4011.55,1751.64,1093.59 2014/02/06,4057.12,1773.43,1103.93 End of explanation """ stockIndexDF=stockIndexDataDF.set_index('TradingDate') stockIndexDF # Using a single label stockIndexDF.ix['2014/01/30'] # Using a list of labels stockIndexDF.ix[['2014/01/30', '2014/02/06']] type(stockIndexDF.ix['2014/01/30']) type(stockIndexDF.ix[['2014/01/30']]) """ Explanation: What we see from the preceding example is that the DataFrame created has an integer-based row index. We promptly set the index to be the trading date to index it based on the trading date so that we can use the .ix operator: End of explanation """ # Using a label-based slice: tradingDates=stockIndexDataDF.TradingDate stockIndexDF.ix[tradingDates[:3]] # Using a single integer: stockIndexDF.ix[0] # Using a list of integers: stockIndexDF.ix[[0,2]] # Using an integer slice: stockIndexDF.ix[1:3] # Using an boolean array stockIndexDF.ix[stockIndexDF['Russell 2000']>1100] """ Explanation: For the former, the indexer is a scalar; for the latter, the indexer is a list. A list indexer is used to select multiple columns. A multi-column slice of a DataFrame can only result in another DataFrame since it is 2D; hence, what is returned in the latter case is a DataFrame. End of explanation """ sharesIndexDataDF=pd.read_csv('stock_index_closing.csv') sharesIndexDataDF # Create a MultiIndex from trading date and priceType columns sharesIndexDF=sharesIndexDataDF.set_index(['TradingDate','PriceType']) mIndex = sharesIndexDF.index mIndex sharesIndexDF """ Explanation: As in the case of .loc , the row index must be specified first for the .ix operator. We now turn to the topic of MultiIndexing. Multi-level or hierarchical indexing is useful because it enables the pandas user to select and massage data in multiple dimensions by using data structures such as Series and DataFrame. End of explanation """ mIndex.get_level_values(0) mIndex.get_level_values(1) """ Explanation: Upon inspection, we see that the MultiIndex consists of a list of tuples. Applying the get_level_values function with the appropriate argument produces a list of the labels for each level of the index: End of explanation """ # Getting All Price Type of date sharesIndexDF.ix['2014/02/21'] # Getting specific PriceType of date sharesIndexDF.ix['2014/02/21','open'] # We can slice on first level sharesIndexDF.ix['2014/02/21':'2014/02/24'] # But if we can slice at lower level: sharesIndexDF.ix[('2014/02/21','open'):('2014/02/24','open')] """ Explanation: You can achieve hierarchical indexing with a MultiIndexed DataFrame: End of explanation """ sharesIndexDF.sortlevel(0).ix[('2014/02/21','open'):('2014/02/24','open')] """ Explanation: However, this results in KeyError with a rather strange error message. The key lesson to be learned here is that the current incarnation of MultiIndex requires the labels to be sorted for the lower-level slicing routines to work correctly. In order to do this, you can utilize the sortlevel() method, which sorts the labels of an axis within a MultiIndex. To be on the safe side, sort first before slicing with a MultiIndex. 
Thus, we can do the following: End of explanation """ # Swapping level 0 and 1 in x axis swappedDF=sharesIndexDF[:7].swaplevel(0, 1, axis=0) swappedDF """ Explanation: The swaplevel function enables levels within the MultiIndex to be swapped: End of explanation """ reorderedDF=sharesIndexDF[:7].reorder_levels(['PriceType','TradingDate'],axis=0) reorderedDF """ Explanation: The reorder_levels function is more general, allowing you to specify the order of the levels: End of explanation """ # Selecting price type close which are bigger than 4300 in Nasdaq sharesIndexDataDF.ix[(sharesIndexDataDF['PriceType']=='close')&(sharesIndexDataDF['Nasdaq']>4300) ] """ Explanation: Boolean Indexing We use Boolean indexing to filter or select parts of the data. OR operator is | AND operator is &amp; NOT operator is ~ These operators must be grouped using parentheses when used together. End of explanation """ # Ww can also do this extensively highSelection=sharesIndexDataDF['PriceType']=='high' NasdaqHigh=sharesIndexDataDF['Nasdaq']<4300 sharesIndexDataDF.ix[highSelection & NasdaqHigh] """ Explanation: You can also create Boolean conditions in which you use arrays to filter out parts of the data: End of explanation """ # Check values in Series stockSeries=pd.Series(['NFLX','AMZN','GOOG','FB','TWTR']) stockSeries.isin(['AMZN','FB']) # We can use the sub selecting to selecting true values stockSeries[stockSeries.isin(['AMZN','FB'])] # Dictionary to create a dataframe australianMammals= {'kangaroo': {'Subclass':'marsupial','Species Origin':'native'}, 'flying fox' : {'Subclass':'placental','Species Origin':'native'}, 'black rat': {'Subclass':'placental','Species Origin':'invasive'}, 'platypus' : {'Subclass':'monotreme','Species Origin':'native'}, 'wallaby' :{'Subclass':'marsupial','Species Origin':'native'}, 'palm squirrel' : {'Subclass':'placental','Origin':'invasive'}, 'anteater': {'Subclass':'monotreme', 'Origin':'native'}, 'koala': {'Subclass':'marsupial', 'Origin':'native'}} ozzieMammalsDF = pd.DataFrame(australianMammals) ozzieMammalsDF aussieMammalsDF=ozzieMammalsDF.T # Transposing the data frame aussieMammalsDF # Selecting native animals aussieMammalsDF.isin({'Subclass':['marsupial'],'Origin':['native']}) """ Explanation: The isin and anyall methods enable user to achieve more with Boolean indexing that the standart operators used in the preceding sections. The isin method takes a list of values and returns a Boolean array with True at the positions within the Series or DataFrame that match the values in the list. End of explanation """ import numpy as np np.random.seed(100) # Setting random generator to 100 so we can generate same results later normvals = pd.Series([np.random.normal() for i in np.arange(10)]) normvals # Return values bigger than 0 normvals[normvals>0] # Return values bigger than 0, prints the same shape # by putting NaN to other places normvals.where(normvals>0) # Creating DataFrame with set random values np.random.seed(100) normDF = pd.DataFrame([[round(np.random.normal(),3) for i in np.arange(5)] for j in range(3)], columns=['0','30','60','90','120']) normDF # For DataFrames we get same shape no matter we use normDF[normDF>0] # For DataFrames we get same shape no matter we use normDF.where(normDF>0) # The inverse operation of the where is mask normDF.mask(normDF>0) """ Explanation: where() method The where() method is used to ensure that the result of Boolean filtering is the same shape as the original data. End of explanation """
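"""
Explanation: Two small illustrative additions to the where() discussion, not from the original text: where() and mask() both accept an other argument that substitutes a replacement value instead of NaN. Note also that the .ix indexer used earlier in this notebook is deprecated in modern pandas in favour of .loc and .iloc.
End of explanation
"""
# replace the filtered-out entries with 0 instead of NaN
normDF.where(normDF > 0, other=0)
# mask() is the complement: it replaces entries where the condition IS True
normDF.mask(normDF > 0, other=0)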
biof-309-python/BIOF309-2016-Fall
Week_02/Week 2 - 04 - Homework.ipynb
mit
# This sequence is the first 100 nucleotides of the Influenza H1N1 Virus segment 8 flu_ns1_seq = 'GTGACAAAGACATAATGGATCCAAACACTGTGTCAAGCTTTCAGGTAGATTGCTTTCTTTGGCATGTCCGCAAACGAGTTGCAGACCAAGAACTAGGTGA' """ Explanation: Week 2 Homework We have seen this week how to print and manipulate text strings in Python. Let's use the skills we have learned to write a program to calculate the GC percentage of a DNA sequence. Recall that the GC percentage of a DNA sequence can be a sign that we are looking at a gene. Pseudocode Pseudocode is the term used to describe a draft outline of a program written in plain English (or whatever language you write it in :-) ). We use pseudocode to discuss the functionality of the program as well as key elements in the program. Starting a program by using pseudocode can help to get your logic down quickly without having to be concerned with the exact details or syntax of the programming language. Write a python program to calculate a GC percentage End of explanation """ # Write your code here (if you wish) """ Explanation: Pseudocode: - Count the number of "C"s in the above sequence - Count the number of "G"s in the above sequence - Add "C" and "G" counts together - Count the total number of nucleotides in the sequence - Divide the total number of "C" and "G" nucleotides by the total number of nucleotides - Print the percentage NOTE: Please get into the good habit of commenting your code and describing what you are going to do or are doing. There must be at least one comment in your code. End of explanation """ %%writefile GC_calculator.py #Paste Code here """ Explanation: If you would like to create a file with your source code, paste it in the cell below and run. Please remember to add your name to the file. End of explanation """
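"""
Explanation: One possible implementation of the pseudocode above, shown only as an illustrative sketch; the variable names are my own and are not part of the assignment.
End of explanation
"""
# count the G and C nucleotides, then express them as a percentage of the sequence length
g_count = flu_ns1_seq.count('G')
c_count = flu_ns1_seq.count('C')
gc_percent = 100.0 * (g_count + c_count) / len(flu_ns1_seq)
print("GC percentage: {:.1f}%".format(gc_percent))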
Naereen/notebooks
Février 2021 un mini challenge arithmético-algorithmique.ipynb
mit
// ceci est du code Java 9 et pas Python ! // On a besoin des dépendances suivantes : import java.util.Calendar; // pour Calendar.FEBRUARY, .YEAR, .MONDAY import java.util.GregorianCalendar; // pour import java.util.stream.IntStream; // pour cet IntStream // ceci est du code Java 9 et pas Python ! IntStream.rangeClosed(1994, 2077) //.parallel() // ce .parallel() est inutile, il y a trop peu de valeurs .mapToObj(i -> new GregorianCalendar(i, Calendar.FEBRUARY, 1)) .filter(calendar -> !calendar.isLeapYear(calendar.get(Calendar.YEAR))) .filter(calendar -> calendar.get(Calendar.DAY_OF_WEEK) == Calendar.MONDAY) .count(); """ Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#Février-2021-un-mini-challenge-arithmético-algorithmique" data-toc-modified-id="Février-2021-un-mini-challenge-arithmético-algorithmique-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Février 2021 un mini challenge arithmético-algorithmique</a></span><ul class="toc-item"><li><span><a href="#Challenge-:" data-toc-modified-id="Challenge-:-1.1"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Challenge :</a></span></li><li><span><a href="#Réponse-en-Java-(par-un-de-mes-élèves-de-L3-SIF)" data-toc-modified-id="Réponse-en-Java-(par-un-de-mes-élèves-de-L3-SIF)-1.2"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Réponse en Java (par <a href="http://www.dit.ens-rennes.fr/" target="_blank">un de mes élèves de L3 SIF</a>)</a></span></li><li><span><a href="#Réponse-en-Bash-(par-Lilian-Besson)-?" data-toc-modified-id="Réponse-en-Bash-(par-Lilian-Besson)-?-1.3"><span class="toc-item-num">1.3&nbsp;&nbsp;</span>Réponse en Bash (par <a href="https://perso.crans.org/besson/" target="_blank">Lilian Besson</a>) ?</a></span></li><li><span><a href="#Réponse-en-Python-(par-Lilian-Besson)" data-toc-modified-id="Réponse-en-Python-(par-Lilian-Besson)-1.4"><span class="toc-item-num">1.4&nbsp;&nbsp;</span>Réponse en Python (par <a href="https://perso.crans.org/besson/" target="_blank">Lilian Besson</a>)</a></span></li><li><span><a href="#Réponse-en-OCaml-(par-Lilian-Besson)" data-toc-modified-id="Réponse-en-OCaml-(par-Lilian-Besson)-1.5"><span class="toc-item-num">1.5&nbsp;&nbsp;</span>Réponse en OCaml (par <a href="https://perso.crans.org/besson/" target="_blank">Lilian Besson</a>)</a></span></li><li><span><a href="#Réponse-en-Rust-(par-un-de-mes-élèves-(Théo-Degioanni))" data-toc-modified-id="Réponse-en-Rust-(par-un-de-mes-élèves-(Théo-Degioanni))-1.6"><span class="toc-item-num">1.6&nbsp;&nbsp;</span>Réponse en Rust (par <a href="https://github.com/Moxinilian" target="_blank">un de mes élèves (Théo Degioanni)</a>)</a></span></li><li><span><a href="#Conclusion" data-toc-modified-id="Conclusion-1.7"><span class="toc-item-num">1.7&nbsp;&nbsp;</span>Conclusion</a></span></li><li><span><a href="#Challenge-(pour-les-futurs-agrégs-maths)" data-toc-modified-id="Challenge-(pour-les-futurs-agrégs-maths)-1.8"><span class="toc-item-num">1.8&nbsp;&nbsp;</span>Challenge (pour les futurs agrégs maths)</a></span><ul class="toc-item"><li><span><a href="#Une-première-réponse" data-toc-modified-id="Une-première-réponse-1.8.1"><span class="toc-item-num">1.8.1&nbsp;&nbsp;</span>Une première réponse</a></span></li></ul></li></ul></li></ul></div> Février 2021 un mini challenge arithmético-algorithmique Challenge : Le lundi 01 février 2021, j'ai donné à mes élèves de L3 et M1 du département informatique de l'ENS Rennes le challenge suivant : Mini challenge algorithmique pour les passionnés en 
manque de petits exercices de code : (optionnel) Vous avez dû observer que ce mois de février est spécial parce que le 1er février est un lundi, et qu'il a exactement 4 lundis, 4 mardis, 4 mercredis, 4 jeudis, 4 vendredis, 4 samedis et 4 dimanches. Question : Comptez le nombre de mois de février répondant à ce critère (je n'ai pas trouvé de nom précis), depuis l'année de création de l'ENS Rennes (1994, enfin pour Cachan antenne Bretagne) jusqu'à 2077 (1994 et 2077 inclus). Auteur : Lilian Besson License : MIT Date : 01/02/2021 Cours : ALGO2 @ ENS Rennes <span style="color:red;">Attention : ce notebook est déclaré avec le kernel Python, mais certaines sections (Java, OCaml et Rust) ont été exécutées avec le kernel correspondant. La coloration syntaxique multi-langage n'est pas (encore?) supportée, désolé d'avance.</span> Réponse en Java (par un de mes élèves de L3 SIF) Oui, on peut utiliser Java dans des notebooks ! Voir ce poste de blogue, et ce kernel IJava. Moi je trouve ça chouette, et je m'en suis servi en INF1 au semestre dernier Il en avait trouvé 9. Date et heure : lundi 01 février, 20h32. End of explanation """ System.out.println("Test d'une cellule en Java dans un notebook déclaré comme Java"); // ==> ça marche ! %%java System.out.println("Test d'une cellule en Java dans un notebook déclaré comme Python"); // cela ne marche pas ! # On peut aussi écrire une cellule Python qui fait appel à une commande Bash !echo 'System.out.println("\nTest d\'une ligne de Java dans un notebook déclaré comme Python");' | jshell -q %%bash # voir une commande Bash directement ! # mais uniquement depuis un notebook Python echo 'System.out.println("\nok");' | jshell -q """ Explanation: Si les cellules précédentes ne s'exécute pas, a priori c'est normal : ce notebook est déclaré en Python ! Il faudrait utiliser une des astuces suivantes, mais flemme. End of explanation """ %%bash ncal February 2021 """ Explanation: Réponse en Bash (par Lilian Besson) ? En bidouillant avec des programmes en lignes de commandes tels que cal et des grep on devrait pouvoir s'en sortir facilement. Ça tient même en une ligne ! Date et heure : 01/02/2021, 21h16 End of explanation """ %%bash time for ((annee=1994; annee<=2077; annee+=1)); do ncal February $annee \ | grep 'lu 1 8 15 22' \ | grep -v 29; done \ | wc -l %%bash for ((annee=1994; annee<=2077; annee+=1)); do ncal February $annee | grep 'lu 1 8 15 22' | grep -v 29; done | wc -l """ Explanation: En recherchant exactement cette chaîne "lu 1 8 15 22" et en excluant 29 des lignes trouvées, on obtient la réponse : End of explanation """ import calendar def filter_annee(annee): return ( set(calendar.Calendar(annee).itermonthdays2(annee, 2)) & {(1,0), (28, 6), (29, 0)} ) == {(1, 0), (28, 6)} filter_annee(2020), filter_annee(2021), filter_annee(2022) """ Explanation: Réponse en Python (par Lilian Besson) Avec le module calendar on pourrait faire comme en Bash : imprimer les calendriers, et rechercher des chaînes particulières... mais ce n'est pas très propre. Essayons avec ce même module mais en écrivant une solution fonctionnelle ! Date et heure : lundi 01 février, 21h40. 
End of explanation """ %%time len(list(filter(filter_annee, ( annee for annee in range(1994, 2077 + 1) # if not calendar.isleap(annee) # en fait c'est inutile ) ))) """ Explanation: Et donc on a juste à compter les années, de 1994 à 2077 inclus, qui ne sont pas des années bissextiles et qui satisfont le filtre : End of explanation """ (* cette cellule est en OCaml *) (* avec la solution en Bash et Sys.command... *) Sys.command "bash -c \"for ((annee=1994; annee<=2077; annee+=1)); do ncal February \\$annee | grep 'lu 1 8 15 22' | grep -v 29; done | wc -l\"";; (* mais ça ne compte pas ! *) """ Explanation: Réponse en OCaml (par Lilian Besson) En installant et utilisant ocaml-calendar cela ne doit pas être trop compliqué. On peut s'inspirer du code Java ci-dessus, qui a une approche purement fonctionnelle. Date et heure : ? End of explanation """ type day = int and dayofweek = int and month = int and year = int ;; type date = { d: day; q: dayofweek; m: month; y: year };; let is_not_bissextil (y: year) : bool = (y mod 4 != 0) || (y mod 100 == 0 && y mod 400 != 0) ;; is_not_bissextil 2019;; is_not_bissextil 2020;; is_not_bissextil 2021;; (* Ce Warning:8 est volontaire ! *) let length_of_month (m: month) (y: year) : int = match m with | 4 | 6 | 9 | 11 -> 30 | 1 | 3 | 5 | 7 | 8 | 10 | 12 -> 31 | 2 -> if is_not_bissextil(y) then 28 else 29 ;; length_of_month 2 2019;; (* 28 *) length_of_month 2 2020;; (* 29 *) length_of_month 2 2021;; (* 28 *) let next_dayofweek (q: dayofweek) = 1 + (q mod 7) ;; next_dayofweek 1;; (* Monday => Tuesday *) next_dayofweek 2;; (* Tuesday => Wednesday *) next_dayofweek 3;; (* Wednesday => Thursday *) next_dayofweek 4;; (* Thursday => Friday *) next_dayofweek 5;; (* Friday => Saturday *) next_dayofweek 6;; (* Saturday => Sunday *) next_dayofweek 7;; (* Sunday => Monday *) let next_day { d; q; m; y } = let l_o_m = length_of_month m y in if (d = 31 && m = 12) then { d=1; q=next_dayofweek q; m=1; y=y+1 } else begin if (d = l_o_m) then { d=1; q=next_dayofweek q; m=m+1; y=y } else { d=d+1; q=next_dayofweek q; m=m; y=y } end ;; let rec iterate (n: int) (f: 'a -> 'a) (x: 'a): 'a = match n with | 0 -> x (* identité *) | 1 -> f(x) | n -> iterate (n-1) f (f(x)) ;; let start_of_next_month { d; q; m; y } = let l_o_m = length_of_month m y in let nb_nextday = l_o_m - d + 1 in if (m = 12) then { d=1; q=iterate nb_nextday next_dayofweek q; m=1; y=y+1 } else { d=1; q=iterate nb_nextday next_dayofweek q; m=m+1; y=y } ;; """ Explanation: On pourrait faire en calculant manuellement les quantièmes du 01/01/YYYY pour YYYY entre 1994 et 2077. End of explanation """ Sys.command "ncal Jan 1994 | grep ' 1'"; let start_date = {d=1; q=6; m=1; y=1994};; (* 01/01/1994 était un samedi, q=6 *) (* let start_date = {d=1; q=3; m=1; y=2020};; (* 01/01/2020 était un mercredi, q=3 *) *) let end_date = {d=31; q=0; m=1; y=2077};; (* on se fiche du quantième ici ! *) (* let end_date = {d=31; q=0; m=1; y=2021};; (* on se fiche du quantième ici ! *) *) let aujourdhui = ref start_date;; (* les champs sont pas mutables, go reference *) let nb_bon_mois_fevrier = ref 0;; let aujourdhuis = ref [];; let solutions = ref [];; while (!aujourdhui.y <= end_date.y || !aujourdhui.y <= end_date.y || !aujourdhui.y <= end_date.y) do if (!aujourdhui.d = 1 && !aujourdhui.m = 2) then begin (* on a un début de février, est-ce qu'il vérifie les critères ? 
*) let date_suivante = iterate 27 next_day !aujourdhui in let date_suivante_p1 = next_day date_suivante in if ( date_suivante.d = 28 && date_suivante.m = 2 && date_suivante_p1.d != 29 (* année pas bisextile *) && !aujourdhui.q = 1 (* mois février commence par lundi *) ) then begin solutions := !aujourdhui :: !solutions; incr nb_bon_mois_fevrier; end; end; (* on a un jour quelconque, on avance d'un mois *) aujourdhui := start_of_next_month !aujourdhui; aujourdhuis := !aujourdhui :: !aujourdhuis; done;; !nb_bon_mois_fevrier;; """ Explanation: Je vais tricher un peu et utiliser la connaissance que le 01/01/1994 est un samedi : End of explanation """ !solutions """ Explanation: On peut même facilement vérifier les années qui ont été trouvées, et donc vérifier que 2021 est dedans. J'ai aussi eu la chance d'observer ce phénomène en 1999 (mais je ne me souviens pas l'avoir remarqué, j'avais 6 ans !) et en 2010 (et je me souviens l'avoir remarqué, normal j'étais en MP* et on adore ce genre de coïncidences). End of explanation """ :dep chrono = "0.4" use chrono::TimeZone; use chrono::Datelike; fn main() { let n = (1994..=2077) .filter(|n| chrono::Utc.ymd_opt(*n, 2, 29) == chrono::LocalResult::None) .map(|n| chrono::Utc.ymd(n, 2, 1)) .filter(|t| t.weekday() == chrono::Weekday::Mon) .count(); println!("{}", n); } main() """ Explanation: (le kernel ocaml-jupyter est génial, mais plante un peu, j'ai galéré à écrire cette douzaine de cellules sans devoir relancer Jupyter plusieurs fois... bug signalé, résolution en cours...) Réponse en Rust (par un de mes élèves (Théo Degioanni)) Le code Rust proposé peut être executé depuis le bac à sable Rust : https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=2ab9c57e9d114a344363e21f9493bf22 Mais on peut aussi utiliser le kernel Jupyter proposé par le projet evcxr de Google : il faut installer Rust, je n'avais jamais fait j'ai donc suivi rustup.rs le site officiel de présentation de l'installation de Rust ; puis j'ai suivi les explications pour installer le kernel sur GitHub @google/evcxr ; puis j'ai écrit cette cellule ci-dessous, j'ai changé le Noyau en Rust, et j'ai exécuté la cellule ; notez qu'avec l'extension jupyter nbextension ExecuteTime, j'ai vu que la première cellule avait pris quasiment 10 secondes... mais je pense qu'il s'agit du temps d'installer et de compilation du module chrono (je ne suis pas encore très familier avec Rust). Les exécutions suivantes prennent environ 300ms pour la définition (y a-t-il recompilation même si le texte ne change pas ?) de la fonction ; Et environ 700ms pour l'exécution. C'est bien plus lent que les 35 ms de mon code naïf en OCaml (qui est juste interprété et pas compilé), que les 10 ms de Python, ou les 100 ms de Bash. Mais pas grave ! Date et heure : lundi 01/02/2021, 21h20 End of explanation """
mne-tools/mne-tools.github.io
stable/_downloads/1537c1215a3e40187a4513e0b5f1d03d/eeg_csd.ipynb
bsd-3-clause
# Authors: Alex Rockhill <aprockhill@mailbox.org> # # License: BSD-3-Clause import numpy as np import matplotlib.pyplot as plt import mne from mne.datasets import sample print(__doc__) data_path = sample.data_path() """ Explanation: Transform EEG data using current source density (CSD) This script shows an example of how to use CSD :footcite:PerrinEtAl1987,PerrinEtAl1989,Cohen2014,KayserTenke2015. CSD takes the spatial Laplacian of the sensor signal (derivative in both x and y). It does what a planar gradiometer does in MEG. Computing these spatial derivatives reduces point spread. CSD transformed data have a sharper or more distinct topography, reducing the negative impact of volume conduction. End of explanation """ meg_path = data_path / 'MEG' / 'sample' raw = mne.io.read_raw_fif(meg_path / 'sample_audvis_raw.fif') raw = raw.pick_types(meg=False, eeg=True, eog=True, ecg=True, stim=True, exclude=raw.info['bads']).load_data() events = mne.find_events(raw) raw.set_eeg_reference(projection=True).apply_proj() """ Explanation: Load sample subject data End of explanation """ raw_csd = mne.preprocessing.compute_current_source_density(raw) raw.plot() raw_csd.plot() """ Explanation: Plot the raw data and CSD-transformed raw data: End of explanation """ raw.plot_psd() raw_csd.plot_psd() """ Explanation: Also look at the power spectral densities: End of explanation """ event_id = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3, 'visual/right': 4, 'smiley': 5, 'button': 32} epochs = mne.Epochs(raw, events, event_id=event_id, tmin=-0.2, tmax=.5, preload=True) evoked = epochs['auditory'].average() """ Explanation: CSD can also be computed on Evoked (averaged) data. Here we epoch and average the data so we can demonstrate that. End of explanation """ times = np.array([-0.1, 0., 0.05, 0.1, 0.15]) evoked_csd = mne.preprocessing.compute_current_source_density(evoked) evoked.plot_joint(title='Average Reference', show=False) evoked_csd.plot_joint(title='Current Source Density') """ Explanation: First let's look at how CSD affects scalp topography: End of explanation """ fig, ax = plt.subplots(4, 4) fig.subplots_adjust(hspace=0.5) fig.set_size_inches(10, 10) for i, lambda2 in enumerate([0, 1e-7, 1e-5, 1e-3]): for j, m in enumerate([5, 4, 3, 2]): this_evoked_csd = mne.preprocessing.compute_current_source_density( evoked, stiffness=m, lambda2=lambda2) this_evoked_csd.plot_topomap( 0.1, axes=ax[i, j], outlines='skirt', contours=4, time_unit='s', colorbar=False, show=False) ax[i, j].set_title('stiffness=%i\nλ²=%s' % (m, lambda2)) """ Explanation: CSD has parameters stiffness and lambda2 affecting smoothing and spline flexibility, respectively. Let's see how they affect the solution: End of explanation """
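"""
Explanation: CSD can also be computed directly on the Epochs object, which is convenient when later analyses are done on single trials. This cell is an illustrative addition, not part of the original tutorial, and uses the default stiffness and lambda2 values.
End of explanation
"""
epochs_csd = mne.preprocessing.compute_current_source_density(epochs)
epochs_csd['auditory'].average().plot_joint(title='CSD-transformed auditory response')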
ppham27/MLaPP-solutions
chap04/17.ipynb
mit
%matplotlib inline import numpy as np import pandas as pd import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap # benchmark sklearn implementations, these are much faster from sklearn.discriminant_analysis import LinearDiscriminantAnalysis from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis # my own implementation, results are identical but runs much slower import DiscriminantAnalysis raw_data = pd.read_csv("heightWeightData.txt", header=None, names=["gender", "height", "weight"]) raw_data.info() raw_data.head() """ Explanation: LDA/QDA on height/weight data We're asked to fit a Linear Discriminant Analysis (LDA) and Quadratic Discriminant Analysis (QDA) model to the height/weight data and compute the the misclassification rate. See my implementation of these algorithms on GitHub. The data can be found here. End of explanation """ plt.figure(figsize=(8,8)) labels = {1: 'Male', 2: 'Female'} colors = {1: 'blue', 2: 'red'} def plot_height_weight(title="Weight vs Height", ax=None): if ax == None: ax = plt.gca() for name, group in raw_data.groupby('gender'): ax.scatter(group.height, group.weight, color=colors[name], label=labels[name]) ax.set_title(title) ax.set_xlabel('height') ax.set_ylabel('weight') ax.legend(loc='upper left') ax.grid() plot_height_weight() plt.show() """ Explanation: Let's plot the data, coloring males blue and females red, too. End of explanation """ x_min = 50 x_max = 85 y_min = 50 y_max = 300 X = raw_data[['height', 'weight']].as_matrix() y = raw_data['gender'].as_matrix() xx, yy = np.meshgrid(np.linspace(x_min, x_max, num=200, endpoint=True), np.linspace(y_min, y_max, num=200, endpoint=True)) cmap_light = ListedColormap(['#AAAAFF','#FFAAAA']) def plot_height_weight_mesh(xx, yy, Z, comment=None, title=None, ax=None): if ax == None: ax = plt.gca() ax.pcolormesh(xx, yy, Z, cmap=cmap_light) if title == None: plot_height_weight(ax=ax) else: plot_height_weight(ax=ax, title = title) ax.set_xlim([x_min, x_max]) ax.set_ylim([y_min, y_max]) if comment != None: ax.text(0.95, 0.05, comment, transform=ax.transAxes, verticalalignment="bottom", horizontalalignment="right", fontsize=14) def decimal_to_percent(x, decimals=2): return '{0:.2f}%'.format(np.round(100*x, decimals=2)) """ Explanation: Let's define a few globals to help with the plotting and model fitting. End of explanation """ # qda = QuadraticDiscriminantAnalysis(store_covariances=True) # sklearn implementation qda = DiscriminantAnalysis.QDA() # my implementation qda.fit(X, y) qda_misclassification = 1 - qda.score(X, y) qda_Z = qda.predict(np.c_[xx.ravel(), yy.ravel()]) qda_Z = qda_Z.reshape(xx.shape) plt.figure(figsize=(8,8)) plot_height_weight_mesh(xx, yy, qda_Z, title="Weight vs Height: QDA", comment="Misclassification rate: " + decimal_to_percent(qda_misclassification)) plt.show() """ Explanation: Let's try the QDA model first. We follow the conventions of the sklearn implementation. Let $X$ be our data or design matrix and $y$ be our class label. The rows of $X$ consist of the vectors $\mathbf{x}i$, which are the individual observations. We let $y_i \sim \mathrm{Multinomial}\left(\theta_1,\ldots,\theta_K\right)$, where $\sum{k=1}^K \theta_k = 1$. We let $\mathbf{x}_i \mid y_i = k \sim \mathcal{N}\left(\mathbf{\mu}_k, \Sigma_k\right)$, which is a multivariate normal. 
We use the following estimates for the parameters: \begin{align} \hat{\theta}_k &= \frac{N_k}{N} \\ \hat{\mu}_k &= \frac{1}{N_k}\sum_{\{i~:~ y_i = k\}}\mathbf{x}_i \\ \hat{\Sigma}_k &= \frac{1}{N_k - 1}\sum_{\{i~:~y_i = k\}} \left(\mathbf{x}_i - \hat{\mu}_k\right)\left(\mathbf{x}_i - \hat{\mu}_k\right)^\intercal. \end{align} $N$ is total number of observations, and $N_k$ is the number of observations of class $k$. Thus, for each class $k$, we compute the proportion of observations that are of that class, the sample mean, and the unbiased estimate for covariance. End of explanation """ # lda = LinearDiscriminantAnalysis(store_covariance=True) # sklearn implementation lda = DiscriminantAnalysis.LDA() # my implementation lda.fit(X, y) lda_misclassification = 1 - lda.score(X, y) lda_Z = lda.predict(np.c_[xx.ravel(), yy.ravel()]) lda_Z = lda_Z.reshape(xx.shape) plt.figure(figsize=(8,8)) plot_height_weight_mesh(xx, yy, lda_Z, title="Weight vs Height: LDA", comment="Misclassification rate: " + decimal_to_percent(lda_misclassification)) plt.show() """ Explanation: Let's look at LDA now. We compute $\hat{\theta}_k$ and $\hat{\mu}_k$ in the same manner. However, now we have that all covariances are equal, that is, $\hat{\Sigma} = \hat{\Sigma}_k$ for all $k$. Let $p$ be the number of features, that is, the number of columns in $X$. First, we note that \begin{align} \log p\left(\mathcal{D} \mid \boldsymbol\mu, \Sigma,\boldsymbol\theta\right) &= \sum_{i=1}^N \log p\left(\mathbf{x}_i,y_i \mid \boldsymbol\mu, \Sigma,\boldsymbol\theta\right) = \sum_{k = 1}^K \sum_{\{i~:~y_i = k\}} \log p\left(\mathbf{x}_i, y_i = k \mid \mu_k, \Sigma, \theta_k\right) \\ &= \sum_{k = 1}^K \sum_{\{i~:~y_i = k\}} \left[\log p(y_i = k) + \log p\left(\mathbf{x}_i \mid \mu_k, \Sigma, y_i=k\right)\right] \\ &= \sum_{k = 1}^K \sum_{\{i~:~y_i = k\}} \left[\log \theta_k - \frac{p}{2}\log 2\pi - \frac{1}{2}\log|\Sigma| - \frac{1}{2}\left(\mathbf{x}_i - \mu_k\right)^\intercal\Sigma^{-1}\left(\mathbf{x}_i - \mu_k\right)\right] \\ &= -\frac{Np}{2}\log 2\pi -\frac{N}{2}\log|\Sigma| + \sum_{k = 1}^K \left(N_k\log\theta_k - \frac{1}{2}\sum_{\{i~:~y_i = k\}}\left(\mathbf{x}_i - \mu_k\right)^\intercal\Sigma^{-1}\left(\mathbf{x}_i - \mu_k\right)\right). \end{align} Let $\Lambda = \Sigma^{-1}$. The MLE is invariant with regard to reparameterization, so after isolating the terms that involve $\Sigma$ we can focus on maximizing \begin{align} l(\Lambda) &= \frac{N}{2}\log|\Lambda| - \frac{1}{2}\sum_{k = 1}^K \sum_{\{i~:~y_i = k\}}\left(\mathbf{x}_i - \mu_k\right)^\intercal\Lambda\left(\mathbf{x}_i - \mu_k\right) \\ &= \frac{N}{2}\log|\Lambda| - \frac{1}{2}\sum_{k = 1}^K \sum_{\{i~:~y_i = k\}}\operatorname{tr}\left(\left(\mathbf{x}_i - \mu_k\right)\left(\mathbf{x}_i - \mu_k\right)^\intercal\Lambda\right), \end{align} where we have used the fact that $\left(\mathbf{x}_i - \mu_k\right)^\intercal\Sigma^{-1}\left(\mathbf{x}_i - \mu_k\right)$ is a scalar, so we can replace it with the trace, and then, we apply the fact that the trace remains the same after cyclic permutations. Now, we note these two identities to help us take the derivative with respect to $\Lambda$, \begin{align} \frac{\partial}{\partial\Lambda}\log|\Lambda| &= \left(\Lambda^{-1}\right)^\intercal \\ \frac{\partial}{\partial\Lambda}\operatorname{tr}\left(A\Lambda\right) &= A^\intercal. \end{align} Thus, we'll have that \begin{align} l^\prime(\Lambda) &= \frac{N}{2}\left(\Lambda^{-1}\right)^\intercal - \frac{1}{2}\sum_{k=1}^K\left(\sum_{\{i~:~y_i = k\}}\left(\mathbf{x}_i - \mu_k\right)\left(\mathbf{x}_i - \mu_k\right)^\intercal\right) \\ &= \frac{N}{2}\Sigma - \frac{1}{2}\sum_{k=1}^K\left(\sum_{\{i~:~y_i = k\}}\left(\mathbf{x}_i - \mu_k\right)\left(\mathbf{x}_i - \mu_k\right)^\intercal\right) \end{align} since $\Lambda^{-1} = \Sigma$ and $\Sigma$ is symmetric. Setting $l^\prime(\Lambda) = 0$ and solving for $\Sigma$, we have that \begin{equation} \Sigma = \frac{1}{N} \sum_{k=1}^K\left(\sum_{\{i~:~y_i = k\}}\left(\mathbf{x}_i - \mu_k\right)\left(\mathbf{x}_i - \mu_k\right)^\intercal\right). \end{equation} In this manner our estimate is \begin{equation} \hat{\Sigma} = \frac{1}{N} \sum_{k=1}^K\left(\sum_{\{i~:~y_i = k\}}\left(\mathbf{x}_i - \hat{\mu}_k\right)\left(\mathbf{x}_i - \hat{\mu}_k\right)^\intercal\right). \end{equation} For whatever reason, sklearn uses the biased MLE estimate for covariance in LDA, but it uses the unbiased estimate for covariance in QDA. End of explanation """ fig = plt.figure(figsize=(14,8)) ax1 = fig.add_subplot(1,2,1) ax2 = fig.add_subplot(1,2,2) plot_height_weight_mesh(xx, yy, qda_Z, title="Weight vs Height: QDA", ax=ax1, comment="Misclassification rate: " + decimal_to_percent(qda_misclassification)) plot_height_weight_mesh(xx, yy, lda_Z, title="Weight vs Height: LDA", ax=ax2, comment="Misclassification rate: " + decimal_to_percent(lda_misclassification)) fig.suptitle('Comparison of QDA and LDA', fontsize=18) plt.show() """ Explanation: Clearly, we see that the two methods give us different decision boundaries. However, for these data, the misclassification rate is the same. End of explanation """
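"""
Explanation: As a quick numerical illustration of the pooled-covariance formula derived above (my own check, not part of the original solution), the MLE can be computed directly from X and y with NumPy; whether it matches a library's stored covariance depends on the biased versus unbiased normalization noted above.
End of explanation
"""
classes = np.unique(y)
means = {k: X[y == k].mean(axis=0) for k in classes}
# sum of within-class scatter matrices, divided by the total sample size N
pooled = sum((X[y == k] - means[k]).T.dot(X[y == k] - means[k]) for k in classes) / len(X)
print(pooled)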
ernestyalumni/CompPhys
crack/crack.ipynb
apache-2.0
def fibonacci(n): if (n == 0): return 0 elif (n == 1): return 1 elif (n==2): return 1 return fibonacci(n-1) + fibonacci(n-2) [fibonacci(n) for n in range(15)] """ Explanation: cf. Gayle Laakmann McDowell. Cracking the Coding Interview: 189 Programming Questions and Solutions. 6th Edition. CareerCup; 6th edition (July 1, 2015). ISBN-13: 978-0984782857 Trees and Graphs; Ch. 4 Trees and Graphs of McDowell (2015) cf. Chapter 8, Recursion and Dynamic Programming. Consider the Fibonacci numbers and the recurrence relation $$ F_n = F_{n-1} + F_{n-2} $$ The problem appears to be expressed as follows, given $n \in \mathbb{Z}^+$, for some finite $S \subset \mathbb{Z}^+$, $F:\mathbb{Z}^+ \to \mathbb{K}$, we want some recurrence relation $f$ $$ F(n):= f(n,S,F) $$ For notation, use $$ f(n) = f(n-1) + f(n-2) $$ with $f(1) = f(2) = 1$. Consider $$ \begin{aligned} & f(3)=f(2) + f(1) \ & f(4) = f(3) + f(2) = (f(2) + f(1)) + f(2) \ & f(5) = f(4) + f(3) = (f(3) + f(2) ) + (f(2) + f(1)) = (f(2) + f(1) + f(2) ) + (f(2) + f(1)) \end{aligned} $$ End of explanation """ def paragraphize(inputFunction): def newFunction(): return "<p>" + inputFunction() + "</p>" return newFunction @paragraphize def getText(): return "Here is some text!" """ Explanation: Decorators cf. https://jeremykun.com/2012/01/12/a-spoonful-of-python/ A decorator is a way to hide pre- or post-processing to a function. A decorator accepts a function $f$ as input, and returns a function which potentially does something else, but perhaps uses $f$ in an interesting way. e.g. End of explanation """ getText() """ Explanation: in other words, it is shorthand for the following statement: getText = paragraphize(getText) End of explanation """ def f(*args): print args print args[0] return args f((3,4,2)) f(3,4,2) """ Explanation: review of * notation for multiple arguments, unpacking the multiple comma-separated values End of explanation """ def memoize(f): cache = {} def memoizedFunction(*args): if args not in cache: cache[args] = f(*args) return cache[args] # this next line allows us to access the cache from other parts of the code by # attaching it to memoizedFunction as an attribute memoizedFunction.cache = cache return memoizedFunction @memoize def fibonacci(n): if n == 0: return 0 elif n <= 2: return 1 else: return fibonacci(n-1) + fibonacci(n-2) [fibonacci(n) for n in range(15)] def memoization(f): memo = {} def memoizedf(i): if i not in memo: memo[i] = f(i) return memo[i] # this next line allows us to access the cache from other parts of the code by # attaching it to memoizedf as an attribute memoizedf.memo = memo return memoizedf @memoization def fibonacci(n): if n == 0: return 0 elif n <= 2: return 1 else: return fibonacci(n-1) + fibonacci(n-2) [fibonacci(n) for n in range(15)] """ Explanation: Top-Down Dynamic Programming (or Memoization) End of explanation """ def fibonacci(n): if (n == 0): return 0 elif (n == 1): return 1 memo = [0 for _ in range(n)] memo[0] = 0 memo[1] = 1 for i in range(2,n): memo[i] = memo[i-1]+memo[i-2] return memo[n-1]+memo[n-2] [fibonacci(n) for n in range(15)] """ Explanation: Bottom-Up Dynamic Programming cf. pp. 
134 Chapter 8 - Recursion and Dynamic Programming End of explanation """ def fibonacci(n): if (n==0): return 0 f_nm2 = 0 # f(n-2) = 0; f(2-2)=f(0)=0 f_nm1 = 1 # f(n-1) = 1; f(2-1)=f(1)=1 for idx in range(2,n): f_n = f_nm2 + f_nm1 # f(n) f_nm2 = f_nm1 f_nm1 = f_n return f_nm2 + f_nm1 [fibonacci(n) for n in range(15)] [fibonacci(n) for n in range(15)][1:] """ Explanation: If you really think about how this works, you only use memo[i] for memo[i+1] and memo[i+2]. You don't need it after that. Therefore, we can get rid of the memo table and just store a few variables. End of explanation """ def f(n,steps): if n == 0: return 1 elif (n < 0) or (steps == []): return 0 else: s_0 = steps[0] # pop out the first step to try f_n = f(n-s_0,steps) + f(n,steps[1:]) return f_n steps_eg = [1,2,3] print( [f(n,steps_eg) for n in range(15)] ) """ Explanation: Interview Questions; for Chapter 8, Recursion and Dynamic Programming 8.1 Triple Step: staircase with $n$ steps, can hop either 1,2, or 3 steps each time. End of explanation """ def f(n): if (n==0): return 1 elif (n==1): return 1 elif (n==2): return 2 else: return f(n-3)+f(n-2)+f(n-1) print( [f(n) for n in range(15)] ) def memoize(f): memo={} def memof(i): # i \in 0,1,...n-1 if i not in memo: memo[i] = f(i) return memo[i] memof.memo = memo return memof @memoize def f(n): if (n==0): return 1 elif (n==1): return 1 elif (n==2): return 2 else: return f(n-3)+f(n-2)+f(n-1) print( [f(n) for n in range(38)] ) """ Explanation: Brute force solution that was given in the back Let's think about this with the following question: What is the very last step that is done? Let $S=(1,2,3)$; in general, $S \in \textbf{FiniteSet}$, s.t. $s_i< s_j$ , \, $\forall \, 0 \leq i < j \leq |S|-1$. Let $n$ steps. If we thought about (assumed that) all of the paths to $n$th step, we can get up to $n$th step by any of the following: Consider $f(n) \equiv $ number of ways to hop up $n$ stairs, given $S$, set of possible steps. $$ f(n) = f(n-3) + f(n-2) + f(n-1) $$ And so considering the initial case, $$ \begin{aligned} & f(0) = 1 \ & f(1) = 1 \ & f(2) = 2 \ & f(3) = f(0) + f(1)+f(2) = 4 \ & f(4) = 1+2+4= 7 \end{aligned} $$ End of explanation """ def memoize(f): memo={} def memof(i): # i \in 0,1,...n-1 if i not in memo: memo[i] = f(i) print( "In memo, key: %d f(i) : %d was missing \n" % (i, f(i))) print( "In memof, key: %d memo[i] : %d \n" % (i,memo[i])) return memo[i] memof.memo = memo # print( "In memoize, memo : ", memo, "\n") print(memo) return memof @memoize def f(n): if (n==0): return 1 elif (n==1): return 1 elif (n==2): return 2 else: return f(n-3)+f(n-2)+f(n-1) for n in range(6): #print( "In f", f(n) ) # print("In f : %d \n", f(n)) print(f(n)) """ Explanation: The above is the solution. Here is the print out: End of explanation """ def backtrace(pos,paths,off,v): if v == []: return None elif (pos == (0,0)): return 1, paths elif (pos[0] <0 ): return None elif (pos[1] <0): return None elif pos in paths: return None else: """ Explanation: 8.2 Robot in a Grid: End of explanation """
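The backtrace sketch above is left unfinished. As a hedged, self-contained illustration of the same top-down dynamic-programming idea (this is not the author's intended solution, and it ignores off-limit cells for brevity), here is one way to count the robot's paths from cell (r, c) to the origin when it may only move up or left:

def count_paths(r, c, memo=None):
    # number of ways to reach (0, 0) from (r, c) moving only up or left
    if memo is None:
        memo = {}
    if r == 0 and c == 0:
        return 1
    if r < 0 or c < 0:
        return 0
    if (r, c) not in memo:
        memo[(r, c)] = count_paths(r - 1, c, memo) + count_paths(r, c - 1, memo)
    return memo[(r, c)]

[count_paths(n, n) for n in range(6)]  # 1, 2, 6, 20, 70, 252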
rochelleterman/scrape-interwebz
3_Beautiful_Soup/1_bs_workbook.ipynb
mit
# import required modules
import requests
from bs4 import BeautifulSoup
from datetime import datetime
import time
import re
import sys
"""
Explanation: Webscraping with Beautiful Soup
Intro
In this tutorial, we'll be scraping information on the state senators of Illinois, available here, as well as the list of bills each senator has sponsored (e.g., here).
The Tools
Requests
Beautiful Soup
End of explanation
"""
# make a GET request
req = requests.get('http://www.ilga.gov/senate/default.asp')

# read the content of the server's response
src = req.text
"""
Explanation: Part 1: Using Beautiful Soup
1.1 Make a Get Request and Read in HTML
We use the requests library to:
1. make a GET request to the page
2. read in the html of the page
End of explanation
"""
# parse the response into an HTML tree
soup = BeautifulSoup(src, 'lxml')

# take a look
print(soup.prettify()[:1000])
"""
Explanation: 1.2 Soup it
Now we use the BeautifulSoup function to parse the response into an HTML tree. This returns an object (called a soup object) which contains all of the HTML in the original document.
End of explanation
"""
# find all elements in a certain tag
# these two lines of code are equivalent

# soup.find_all("a")
"""
Explanation: 1.3 Find Elements
BeautifulSoup has a number of functions to find things on a page. Like other webscraping tools, Beautiful Soup lets you find elements by their:
HTML tags
HTML Attributes
CSS Selectors
Let's search first for HTML tags. The function find_all searches the soup tree to find all the elements with a particular HTML tag, and returns all of those elements.
What does the example below do?
End of explanation
"""
# soup.find_all("a")
# soup("a")
"""
Explanation: NB: Because find_all() is the most popular method in the Beautiful Soup search API, you can use a shortcut for it. If you treat the BeautifulSoup object as though it were a function, then it's the same as calling find_all() on that object. These two lines of code are equivalent:
End of explanation
"""
# Get only the 'a' tags in 'sidemenu' class
soup("a", class_="sidemenu")
"""
Explanation: That's a lot! Many elements on a page will have the same html tag. For instance, if you search for everything with the a tag, you're likely to get a lot of stuff, much of which you don't want. What if we wanted to search for HTML tags ONLY with certain attributes, like particular CSS classes?
We can do this by adding an additional argument to find_all. In the example below, we are finding all the a tags, and then filtering those with class = "sidemenu".
End of explanation
"""
# get elements with "a.sidemenu" CSS Selector.
soup.select("a.sidemenu")
"""
Explanation: Oftentimes a more efficient way to search and find things on a website is by CSS selector. For this we have to use a different method, select(). Just pass a string into .select() to get all elements with that string as a valid CSS selector.
In the example above, we can use "a.sidemenu" as a CSS selector, which returns all a tags with class sidemenu.
End of explanation
"""
# YOUR CODE HERE
"""
Explanation: Challenge 1
Find all the &lt;a&gt; elements in class mainmenu
End of explanation
"""
# this is a list
soup.select("a.sidemenu")

# we first want to get an individual tag object
first_link = soup.select("a.sidemenu")[0]

# check out its class
type(first_link)
"""
Explanation: 1.4 Get Attributes and Text of Elements
Once we identify elements, we want to access the information in that element.
Oftentimes this means two things: Text Attributes Getting the text inside an element is easy. All we have to do is use the text member of a tag object: End of explanation """ print(first_link.text) """ Explanation: It's a tag! Which means it has a text member: End of explanation """ print(first_link['href']) """ Explanation: Sometimes we want the value of certain attributes. This is particularly relevant for a tags, or links, where the href attribute tells us where the link goes. You can access a tag’s attributes by treating the tag like a dictionary: End of explanation """ # YOUR CODE HERE """ Explanation: Challenge 2 Find all the href attributes (url) from the mainmenu. End of explanation """ # make a GET request req = requests.get('http://www.ilga.gov/senate/default.asp?GA=98') # read the content of the server’s response src = req.text # soup it soup = BeautifulSoup(src, "lxml") """ Explanation: Part 2 Believe it or not, that's all you need to scrape a website. Let's apply these skills to scrape http://www.ilga.gov/senate/default.asp?GA=98 NB: we're just going to scrape the 98th general assembly. Our goal is to scrape information on each senator, including their: - name - district - party 2.1 First, make the get request and soup it. End of explanation """ # get all tr elements rows = soup.find_all("tr") len(rows) """ Explanation: 2.2 Find the right elements and text. Now let's try to get a list of rows in that table. Remember that rows are identified by the tr tag. End of explanation """ # returns every ‘tr tr tr’ css selector in the page rows = soup.select('tr tr tr') print(rows[2].prettify()) """ Explanation: But remember, find_all gets all the elements with the tr tag. We can use smart CSS selectors to get only the rows we want. End of explanation """ # select only those 'td' tags with class 'detail' row = rows[2] detailCells = row.select('td.detail') detailCells """ Explanation: We can use the select method on anything. Let's say we want to find everything with the CSS selector td.detail in an item of the list we created above. End of explanation """ # Keep only the text in each of those cells rowData = [cell.text for cell in detailCells] """ Explanation: Most of the time, we're interested in the actual text of a website, not its tags. Remember, to get the text of an HTML element, use the text member. End of explanation """ # check em out print(rowData[0]) # Name print(rowData[3]) # district print(rowData[4]) # party """ Explanation: Now we can combine the beautifulsoup tools with our basic python skills to scrape an entire web page. End of explanation """ # make a GET request req = requests.get('http://www.ilga.gov/senate/default.asp?GA=98') # read the content of the server’s response src = req.text # soup it soup = BeautifulSoup(src, "lxml") # Create empty list to store our data members = [] # returns every ‘tr tr tr’ css selector in the page rows = soup.select('tr tr tr') # loop through all rows for row in rows: # select only those 'td' tags with class 'detail' detailCells = row.select('td.detail') # get rid of junk rows if len(detailCells) is not 5: continue # Keep only the text in each of those cells rowData = [cell.text for cell in detailCells] # Collect information name = rowData[0] district = int(rowData[3]) party = rowData[4] # Store in a tuple tup = (name,district,party) # Append to list members.append(tup) len(members) """ Explanation: 2.3 Loop it all together Let's use a for loop to get 'em all! 
End of explanation """ # make a GET request req = requests.get('http://www.ilga.gov/senate/default.asp?GA=98') # read the content of the server’s response src = req.text # soup it soup = BeautifulSoup(src, "lxml") # Create empty list to store our data members = [] # returns every ‘tr tr tr’ css selector in the page rows = soup.select('tr tr tr') # loop through all rows for row in rows: # select only those 'td' tags with class 'detail' detailCells = row.select('td.detail') # get rid of junk rows if len(detailCells) is not 5: continue # Keep only the text in each of those cells rowData = [cell.text for cell in detailCells] # Collect information name = rowData[0] district = int(rowData[3]) party = rowData[4] # YOUR CODE HERE. # Store in a tuple tup = (name, district, party, full_path) # Append to list members.append(tup) # Uncomment to test # members[:5] """ Explanation: Challege 3: Get HREF element pointing to members' bills. The code above retrieves information on: - the senator's name - their district number - and their party We now want to retrieve the URL for each senator's list of bills. The format for the list of bills for a given senator is: http://www.ilga.gov/senate/SenatorBills.asp + ? + GA=98 + &MemberID=memberID + &Primary=True to get something like: http://www.ilga.gov/senate/SenatorBills.asp?MemberID=1911&GA=98&Primary=True You should be able to see that, unfortunately, memberID is not currently something pulled out in our scraping code. Your initial task is to modify the code above so that we also retrieve the full URL which points to the corresponding page of primary-sponsored bills, for each member, and return it along with their name, district, and party. Tips: To do this, you will want to get the appropriate anchor element (&lt;a&gt;) in each legislator's row of the table. You can again use the .select() method on the row object in the loop to do this — similar to the command that finds all of the td.detail cells in the row. Remember that we only want the link to the legislator's bills, not the committees or the legislator's profile page. The anchor elements' HTML will look like &lt;a href="/senate/Senator.asp/..."&gt;Bills&lt;/a&gt;. The string in the href attribute contains the relative link we are after. You can access an attribute of a BeatifulSoup Tag object the same way you access a Python dictionary: anchor['attributeName']. (See the <a href="http://www.crummy.com/software/BeautifulSoup/bs4/doc/#tag">documentation</a> for more details). NOTE: There are a lot of different ways to use BeautifulSoup to get things done; whatever you need to do to pull that HREF out is fine. Posting on the etherpad is recommended for discussing different strategies. I've started out the code for you. Fill it in where it says #YOUR CODE HERE (Save the path into an object called full_path End of explanation """ # YOUR FUNCTION HERE # Uncomment to test you3 code! # senateMembers = get_members('http://www.ilga.gov/senate/default.asp?GA=98') # len(senateMembers) """ Explanation: Challenge 4: Make a function Turn the code above into a function that accepts a URL, scrapes the URL for its senators, and returns a list of tuples containing information about each senator. 
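If you get stuck, here is one possible, untested sketch (not the only correct answer); in particular, which anchor in each row actually points to the member's bills page is an assumption you should verify against the live HTML:

def get_members(url):
    # possible solution sketch for Challenge 4
    src = requests.get(url).text
    soup = BeautifulSoup(src, "lxml")
    members = []
    for row in soup.select('tr tr tr'):
        detailCells = row.select('td.detail')
        if len(detailCells) != 5:
            continue
        rowData = [cell.text for cell in detailCells]
        name, district, party = rowData[0], int(rowData[3]), rowData[4]
        anchors = row.select('a')
        # assumption: the second anchor in the row is the "Bills" link
        bills_href = anchors[1]['href'] if len(anchors) > 1 else ''
        full_path = 'http://www.ilga.gov' + bills_href
        members.append((name, district, party, full_path))
    return members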
End of explanation """ # COMPLETE THIS FUNCTION def get_bills(url): src = requests.get(url).text soup = BeautifulSoup(src) rows = soup.select('tr') bills = [] for row in rows: # YOUR CODE HERE tup = (bill_id, description, champber, last_action, last_action_date) bills.append(tup) return(bills) # uncomment to test your code: # test_url = senateMembers[0][3] # get_bills(test_url)[0:5] """ Explanation: Part 3: Scrape Bills 3.1 Writing a Scraper Function Now we want to scrape the webpages corresponding to bills sponsored by each bills. Write a function called get_bills(url) to parse a given Bills URL. This will involve: requesting the URL using the <a href="http://docs.python-requests.org/en/latest/">requests</a> library using the features of the BeautifulSoup library to find all of the &lt;td&gt; elements with the class billlist return a list of tuples, each with: description (2nd column) chamber (S or H) (3rd column) the last action (4th column) the last action date (5th column) I've started the function for you. Fill in the rest. End of explanation """ # YOUR CODE HERE # Uncomment to test # bills_dict[52] """ Explanation: 3.2 Get all the bills Finally, create a dictionary bills_dict which maps a district number (the key) onto a list_of_bills (the value) eminating from that district. You can do this by looping over all of the senate members in members_dict and calling get_bills() for each of their associated bill URLs. NOTE: please call the function time.sleep(0.5) for each iteration of the loop, so that we don't destroy the state's web site. End of explanation """
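A hedged sketch of what that final loop might look like, assuming senateMembers holds the (name, district, party, full_path) tuples from Challenge 4 and get_bills() is the completed function above:

# One possible sketch for building bills_dict
bills_dict = {}
for name, district, party, bills_url in senateMembers:
    bills_dict[district] = get_bills(bills_url)
    time.sleep(0.5)  # be gentle with the state's web site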
vikashvverma/machine-learning
mlbasic/UnSupervised/Project/customer_segments.ipynb
mit
# Import libraries necessary for this project import numpy as np import pandas as pd from IPython.display import display # Allows the use of display() for DataFrames # Import supplementary visualizations code visuals.py import visuals as vs # Pretty display for notebooks %matplotlib inline # Load the wholesale customers dataset try: data = pd.read_csv("customers.csv") data.drop(['Region', 'Channel'], axis = 1, inplace = True) print("Wholesale customers dataset has {} samples with {} features each.".format(*data.shape)) except: print("Dataset could not be loaded. Is the dataset missing?") """ Explanation: Machine Learning Engineer Nanodegree Unsupervised Learning Project: Creating Customer Segments Welcome to the third project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully! In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode. Getting Started In this project, you will analyze a dataset containing data on various customers' annual spending amounts (reported in monetary units) of diverse product categories for internal structure. One goal of this project is to best describe the variation in the different types of customers that a wholesale distributor interacts with. Doing so would equip the distributor with insight into how to best structure their delivery service to meet the needs of each customer. The dataset for this project can be found on the UCI Machine Learning Repository. For the purposes of this project, the features 'Channel' and 'Region' will be excluded in the analysis — with focus instead on the six product categories recorded for customers. Run the code block below to load the wholesale customers dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported. End of explanation """ # Display a description of the dataset display(data.describe()) """ Explanation: Data Exploration In this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project. Run the code block below to observe a statistical description of the dataset. 
Note that the dataset is composed of six important product categories: 'Fresh', 'Milk', 'Grocery', 'Frozen', 'Detergents_Paper', and 'Delicatessen'. Consider what each category represents in terms of products you could purchase. End of explanation """ # TODO: Select three indices of your choice you wish to sample from the dataset indices = [1, 23, 123] # indices = [1, 14, 171] # Create a DataFrame of the chosen samples samples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True) print("Chosen samples of wholesale customers dataset:") display(samples) """ Explanation: Implementation: Selecting Samples To get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add three indices of your choice to the indices list which will represent the customers to track. It is suggested to try different sets of samples until you obtain customers that vary significantly from one another. End of explanation """ from sklearn.model_selection import train_test_split from sklearn.tree import DecisionTreeRegressor # TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature new_data = data.drop(['Fresh'], axis = 1) # TODO: Split the data into training and testing sets(0.25) using the given feature as the target # Set a random state. X_train, X_test, y_train, y_test = train_test_split(new_data, data['Fresh'], test_size=0.25, random_state=7) # TODO: Create a decision tree regressor and fit it to the training set regressor = DecisionTreeRegressor() regressor.fit(X_train, y_train) # TODO: Report the score of the prediction using the testing set score = regressor.score(X_test, y_test) print(score) """ Explanation: Question 1 Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers. What kind of establishment (customer) could each of the three samples you've chosen represent? Hint: Examples of establishments include places like markets, cafes, delis, wholesale retailers, among many others. Avoid using names for establishments, such as saying "McDonalds" when describing a sample customer as a restaurant. You can use the mean values for reference to compare your samples with. The mean values are as follows: Fresh: 12000.2977 Milk: 5796.2 Grocery: 7951.3 Detergents_paper: 2881.4 Delicatessen: 1524.8 Knowing this, how do your samples compare? Does that help in driving your insight into what kind of establishments they might be? Answer: Chosen simple represent following establishment: - Index 1: This establishment seem to be a cafe where spending is spread across different category which are around mean. - Index 23: This establishment seems to be wholesale retailers as it has most of the spending amount well above mean. - Index 123: This establishment seems to be a restaurant(perhaps vegetarian) as it has most spendings on Fresh, Milk and Grocery. Implementation: Feature Relevance One interesting thought to consider is if one (or more) of the six product categories is actually relevant for understanding customer purchasing. That is to say, is it possible to determine whether customers purchasing some amount of one category of products will necessarily purchase some proportional amount of another category of products? 
We can make this determination quite easily by training a supervised regression learner on a subset of the data with one feature removed, and then score how well that model can predict the removed feature. In the code block below, you will need to implement the following: - Assign new_data a copy of the data by removing a feature of your choice using the DataFrame.drop function. - Use sklearn.cross_validation.train_test_split to split the dataset into training and testing sets. - Use the removed feature as your target label. Set a test_size of 0.25 and set a random_state. - Import a decision tree regressor, set a random_state, and fit the learner to the training data. - Report the prediction score of the testing set using the regressor's score function. End of explanation """ # Produce a scatter matrix for each pair of features in the data pd.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde'); import seaborn as sns; sns.heatmap(data.corr()) """ Explanation: Question 2 Which feature did you attempt to predict? What was the reported prediction score? Is this feature necessary for identifying customers' spending habits? Hint: The coefficient of determination, R^2, is scored between 0 and 1, with 1 being a perfect fit. A negative R^2 implies the model fails to fit the data. If you get a low score for a particular feature, that lends us to beleive that that feature point is hard to predict using the other features, thereby making it an important feature to consider when considering relevance. Answer: - I attempted to predict feature Fresh - The score for the prediction is: -0.524 - It seems this feature(Fresh) is necessary to identify customer's spending habit as predicting this feature based on other feature is quite difficult resulting in a negative score. Visualize Feature Distributions To get a better understanding of the dataset, we can construct a scatter matrix of each of the six product features present in the data. If you found that the feature you attempted to predict above is relevant for identifying a specific customer, then the scatter matrix below may not show any correlation between that feature and the others. Conversely, if you believe that feature is not relevant for identifying a specific customer, the scatter matrix might show a correlation between that feature and another feature in the data. Run the code block below to produce a scatter matrix. End of explanation """ # TODO: Scale the data using the natural logarithm log_data = np.log(data) # TODO: Scale the sample data using the natural logarithm log_samples = np.log(samples) # Produce a scatter matrix for each pair of newly-transformed features pd.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde'); """ Explanation: Question 3 Using the scatter matrix as a reference, discuss the distribution of the dataset, specifically talk about the normality, outliers, large number of data points near 0 among others. If you need to sepearate out some of the plots individually to further accentuate your point, you may do so as well. Are there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed? Hint: Is the data normally distributed? Where do most of the data points lie? 
You can use corr() to get the feature correlations and then visualize them using a heatmap(the data that would be fed into the heatmap would be the correlation values, for eg: data.corr()) to gain further insight. Answer: - As it appears from the scatter plot above: - Most of the points, one feature compared to other, lies near origin. This essentially mean most of the spending in one feature are not related to other feature. However some of the features seem to be lineary correlated e.g. Grocery and Detergents_Paper, Grocery and Milk and Grocery & Milk and Detergetnts_Paper. - There seems to be few outliers in our data set which might mean high value of max value for several of our features. e.g. Milk and Detergents_Paper seem to be a somewhat lineraly correlated but there are few outliers(Detergents_Paper) to far right having no correlation with Milk. - Also it seems to there are not correlations between Delicatessen and other features. - Following features seem to be somewhat correlated: - Detergents_Paper and Grocery - Milk and Grocery - Milk and Detergents_Paper - The feature(Fresh) I tried to predict does not seem to have correlation with any of the feature, hence the score is negative. - As we can see some clear correlation between some features, there are some features which are nowhere correlated. In fact there seem to be a bit negative correlation between some features. Data Preprocessing In this section, you will preprocess the data to create a better representation of customers by performing a scaling on the data and detecting (and optionally removing) outliers. Preprocessing data is often times a critical step in assuring that results you obtain from your analysis are significant and meaningful. Implementation: Feature Scaling If data is not normally distributed, especially if the mean and median vary significantly (indicating a large skew), it is most often appropriate to apply a non-linear scaling — particularly for financial data. One way to achieve this scaling is by using a Box-Cox test, which calculates the best power transformation of the data that reduces skewness. A simpler approach which can work in most cases would be applying the natural logarithm. In the code block below, you will need to implement the following: - Assign a copy of the data to log_data after applying logarithmic scaling. Use the np.log function for this. - Assign a copy of the sample data to log_samples after applying logarithmic scaling. Again, use np.log. End of explanation """ # Display the log-transformed sample data display(log_samples) """ Explanation: Observation After applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before). Run the code below to see how the sample data has changed after having the natural logarithm applied to it. 
End of explanation """ display(log_data.describe()) # For each feature find the data points with extreme high or low values for feature in log_data.keys(): # TODO: Calculate Q1 (25th percentile of the data) for the given feature Q1 = np.percentile(log_data[feature], 25) # TODO: Calculate Q3 (75th percentile of the data) for the given feature Q3 = np.percentile(log_data[feature], 75) # TODO: Use the interquartile range to calculate an outlier step (1.5 times the interquartile range) step = 1.5 * (Q3 - Q1) # Display the outliers # print(step) print("Data points considered outliers for the feature '{}':".format(feature, step)) display(log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))]) # OPTIONAL: Select the indices for data points you wish to remove outliers = [65, 66, 75, 128, 154] # Remove the outliers, if any were specified good_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True) """ Explanation: Implementation: Outlier Detection Detecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results which take into consideration these data points. There are many "rules of thumb" for what constitutes an outlier in a dataset. Here, we will use Tukey's Method for identfying outliers: An outlier step is calculated as 1.5 times the interquartile range (IQR). A data point with a feature that is beyond an outlier step outside of the IQR for that feature is considered abnormal. In the code block below, you will need to implement the following: - Assign the value of the 25th percentile for the given feature to Q1. Use np.percentile for this. - Assign the value of the 75th percentile for the given feature to Q3. Again, use np.percentile. - Assign the calculation of an outlier step for the given feature to step. - Optionally remove data points from the dataset by adding indices to the outliers list. NOTE: If you choose to remove any outliers, ensure that the sample data does not contain any of these points! Once you have performed this implementation, the dataset will be stored in the variable good_data. End of explanation """ from sklearn.decomposition import PCA # TODO: Apply PCA by fitting the good data with the same number of dimensions as features pca = PCA() pca.fit(good_data) # TODO: Transform log_samples using the PCA fit above pca_samples = pca.transform(log_samples) # Generate PCA results plot pca_results = vs.pca_results(good_data, pca) """ Explanation: Question 4 Are there any data points considered outliers for more than one feature based on the definition above? Should these data points be removed from the dataset? If any data points were added to the outliers list to be removed, explain why. Hint: If you have datapoints that are outliers in multiple categories think about why that may be and if they warrant removal. Also note how k-means is affected by outliers and whether or not this plays a factor in your analysis of whether or not to remove them. Answer: - There are many data points which are considered outliers for more than one feature. e.g point at indices 65, 66, 75, 128 and 154. Since these are reported as outlier with respect to more than one feature, it strongly suggest to be outlier. - Outliers should be removed from data set as it might affect our model significantly and we might not be able create a better cluster. For an example in following image outlier is marked as A and outlier is marked as red. 
It is clear that original centroid is marked correctly, however if we consider Sum of Squared distance then centroid will move towards the outlier resulting in a not so good location for centroid. - Many data points are added to the list of outliers. I beleive this is necessary as it might affect k-means clustering and we might not have a good model. Feature Transformation In this section you will use principal component analysis (PCA) to draw conclusions about the underlying structure of the wholesale customer data. Since using PCA on a dataset calculates the dimensions which best maximize variance, we will find which compound combinations of features best describe customers. Implementation: PCA Now that the data has been scaled to a more normal distribution and has had any necessary outliers removed, we can now apply PCA to the good_data to discover which dimensions about the data best maximize the variance of features involved. In addition to finding these dimensions, PCA will also report the explained variance ratio of each dimension — how much variance within the data is explained by that dimension alone. Note that a component (dimension) from PCA can be considered a new "feature" of the space, however it is a composition of the original features present in the data. In the code block below, you will need to implement the following: - Import sklearn.decomposition.PCA and assign the results of fitting PCA in six dimensions with good_data to pca. - Apply a PCA transformation of log_samples using pca.transform, and assign the results to pca_samples. End of explanation """ # Display sample log-data after having a PCA transformation applied display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values)) """ Explanation: Question 5 How much variance in the data is explained in total by the first and second principal component? How much variance in the data is explained by the first four principal components? Using the visualization provided above, talk about each dimension and the cumulative variance explained by each, stressing upon which features are well represented by each dimension(both in terms of positive and negative variance explained). Discuss what the first four dimensions best represent in terms of customer spending. Hint: A positive increase in a specific dimension corresponds with an increase of the positive-weighted features and a decrease of the negative-weighted features. The rate of increase or decrease is based on the individual feature weights. 
Answer: - The variance explained by first and second principal components is: 0.7315 - The variance explained by first four principal components is: 0.931 - The above visualization providesfollowing insights in for different dimension: - Dimension 1: - Cummulative variance is 0.4895 - Detergents_Paper is well represented - Milk and Grocery also have some contribution - Fresh and Frozen have negative contribution which means it is not represented well by this Dimension - Dimension 2: - Cummulative variance is 0.2420 - Fresh is well represented - Frozen and Delicatessen also have significant contribution - Dimension 3: - Cummulative variance is 0.1063 - Frozen is well represented - Delicatessen has some contribution in opposite direction - Dimension 4: - Cummulative variance is 0.0932 - Frozen is well represented - Delicatessen has some contribution in opposite direction - Dimension 5: - Cummulative variance is 0.0465 - Milk is well represented - Detergents_Paper and Delicatessen has some contribution in opposite direction - Dimension 6: - Cummulative variance is 0.0225 - Grocery is well represented - Milk and Detergents_Paper have some contribution in opposite direction Observation Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it in six dimensions. Observe the numerical value for the first four dimensions of the sample points. Consider if this is consistent with your initial interpretation of the sample points. End of explanation """ # TODO: Apply PCA by fitting the good data with only two dimensions pca = PCA(n_components=2).fit(good_data) # TODO: Transform the good data using the PCA fit above reduced_data = pca.transform(good_data) # TODO: Transform log_samples using the PCA fit above pca_samples = pca.transform(log_samples) # Create a DataFrame for the reduced data reduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2']) """ Explanation: Implementation: Dimensionality Reduction When using principal component analysis, one of the main goals is to reduce the dimensionality of the data — in effect, reducing the complexity of the problem. Dimensionality reduction comes at a cost: Fewer dimensions used implies less of the total variance in the data is being explained. Because of this, the cumulative explained variance ratio is extremely important for knowing how many dimensions are necessary for the problem. Additionally, if a signifiant amount of variance is explained by only two or three dimensions, the reduced data can be visualized afterwards. In the code block below, you will need to implement the following: - Assign the results of fitting PCA in two dimensions with good_data to pca. - Apply a PCA transformation of good_data using pca.transform, and assign the results to reduced_data. - Apply a PCA transformation of log_samples using pca.transform, and assign the results to pca_samples. End of explanation """ # Display sample log-data after applying PCA transformation in two dimensions display(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2'])) """ Explanation: Observation Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remains unchanged when compared to a PCA transformation in six dimensions. 
End of explanation """ # Create a biplot vs.biplot(good_data, reduced_data, pca) """ Explanation: Visualizing a Biplot A biplot is a scatterplot where each data point is represented by its scores along the principal components. The axes are the principal components (in this case Dimension 1 and Dimension 2). In addition, the biplot shows the projection of the original features along the components. A biplot can help us interpret the reduced dimensions of the data, and discover relationships between the principal components and original features. Run the code cell below to produce a biplot of the reduced-dimension data. End of explanation """ from sklearn.cluster import KMeans from sklearn.metrics import silhouette_score # TODO: Apply your clustering algorithm of choice to the reduced data clusterer = KMeans(n_clusters=2, random_state=7) clusterer.fit(reduced_data) # TODO: Predict the cluster for each data point preds = clusterer.predict(reduced_data) # TODO: Find the cluster centers centers = clusterer.cluster_centers_ # TODO: Predict the cluster for each transformed sample data point sample_preds = clusterer.predict(pca_samples) # TODO: Calculate the mean silhouette coefficient for the number of clusters chosen score = silhouette_score(reduced_data, clusterer.labels_) print(score) """ Explanation: Observation Once we have the original feature projections (in red), it is easier to interpret the relative position of each data point in the scatterplot. For instance, a point the lower right corner of the figure will likely correspond to a customer that spends a lot on 'Milk', 'Grocery' and 'Detergents_Paper', but not so much on the other product categories. From the biplot, which of the original features are most strongly correlated with the first component? What about those that are associated with the second component? Do these observations agree with the pca_results plot you obtained earlier? Clustering In this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale. Question 6 What are the advantages to using a K-Means clustering algorithm? What are the advantages to using a Gaussian Mixture Model clustering algorithm? Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why? Hint: Think about the differences between hard clustering and soft clustering and which would be appropriate for our dataset. Answer: - Advantages of K-Means clustering: - Easy to implement - Works well with large data set - Advantages of Gaussian Mixture Model clustering: - Well for clustering data where some data points might belong to multiple cluster - Fastest algorithm for learning mixture model - I think of using K-Means clustering because it works well with large data set and easy to implement. Since it's easy to implement, it would be good to use and evaluate. Implementation: Creating Clusters Depending on the problem, the number of clusters that you expect to be in the data may already be known. When the number of clusters is not known a priori, there is no guarantee that a given number of clusters best segments the data, since it is unclear what structure exists in the data — if any. 
However, we can quantify the "goodness" of a clustering by calculating each data point's silhouette coefficient. The silhouette coefficient for a data point measures how similar it is to its assigned cluster from -1 (dissimilar) to 1 (similar). Calculating the mean silhouette coefficient provides for a simple scoring method of a given clustering. In the code block below, you will need to implement the following: - Fit a clustering algorithm to the reduced_data and assign it to clusterer. - Predict the cluster for each data point in reduced_data using clusterer.predict and assign them to preds. - Find the cluster centers using the algorithm's respective attribute and assign them to centers. - Predict the cluster for each sample data point in pca_samples and assign them sample_preds. - Import sklearn.metrics.silhouette_score and calculate the silhouette score of reduced_data against preds. - Assign the silhouette score to score and print the result. End of explanation """ # Display the results of the clustering from implementation vs.cluster_results(reduced_data, preds, centers, pca_samples) """ Explanation: Question 7 Report the silhouette score for several cluster numbers you tried. Of these, which number of clusters has the best silhouette score? Answer: Silhoutte score for different clusters: - When number of cluster is: - 2: 0.4262 - 3: 0.3969 - 4: 0.3339 - 5: 0.3522 - 6: 0.3625 - When number of cluster is 2, Silhutte score is best(0.4262). Cluster Visualization Once you've chosen the optimal number of clusters for your clustering algorithm using the scoring metric above, you can now visualize the results by executing the code block below. Note that, for experimentation purposes, you are welcome to adjust the number of clusters for your clustering algorithm to see various visualizations. The final visualization provided should, however, correspond with the optimal number of clusters. End of explanation """ # TODO: Inverse transform the centers log_centers = pca.inverse_transform(centers) # TODO: Exponentiate the centers true_centers = np.exp(log_centers) # Display the true centers segments = ['Segment {}'.format(i) for i in range(0,len(centers))] true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys()) true_centers.index = segments display(true_centers) display(data.describe()) display(samples) """ Explanation: Implementation: Data Recovery Each cluster present in the visualization above has a central point. These centers (or means) are not specifically data points from the data, but rather the averages of all the data points predicted in the respective clusters. For the problem of creating customer segments, a cluster's center point corresponds to the average customer of that segment. Since the data is currently reduced in dimension and scaled by a logarithm, we can recover the representative customer spending from these data points by applying the inverse transformations. In the code block below, you will need to implement the following: - Apply the inverse transform to centers using pca.inverse_transform and assign the new centers to log_centers. - Apply the inverse function of np.log to log_centers using np.exp and assign the true centers to true_centers. 
End of explanation """ # Display the predictions for i, pred in enumerate(sample_preds): print("Sample point", i, "predicted to be in Cluster", pred) """ Explanation: Question 8 Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project(specifically looking at the mean values for the various feature points). What set of establishments could each of the customer segments represent? Hint: A customer who is assigned to 'Cluster X' should best identify with the establishments represented by the feature set of 'Segment X'. Think about what each segment represents in terms their values for the feature points chosen. Reference these values with the mean values to get some perspective into what kind of establishment they represent. Answer: - Segment 0 seems to represent a cafe where the requirement is spread across category which are around mean. - Segment 1 seem to represent a small retailer which might explain low usage of Detergents_Paper. Question 9 For each sample point, which customer segment from Question 8 best represents it? Are the predictions for each sample point consistent with this?* Run the code block below to find which cluster each sample point is predicted to be. End of explanation """ # Display the clustering results based on 'Channel' data vs.channel_results(reduced_data, outliers, pca_samples) """ Explanation: Answer: - All the three samples seem to be present in Segment 0 - Sample 0 and Sample 2 seem to be correctly predicted but I am bit skeptical about Sample 1 as it has a bit high value of Milk which is inclined towards Segment 0. But given the value of Milk and Grocery has high contribution towards Segment 0, it might belong to Segment 0 which is as predicted above. Conclusion In this final section, you will investigate ways that you can make use of the clustered data. First, you will consider how the different groups of customers, the customer segments, may be affected differently by a specific delivery scheme. Next, you will consider how giving a label to each customer (which segment that customer belongs to) can provide for additional features about the customer data. Finally, you will compare the customer segments to a hidden variable present in the data, to see whether the clustering identified certain relationships. Question 10 Companies will often run A/B tests when making small changes to their products or services to determine whether making that change will affect its customers positively or negatively. The wholesale distributor is considering changing its delivery service from currently 5 days a week to 3 days a week. However, the distributor will only make this change in delivery service for customers that react positively. How can the wholesale distributor use the customer segments to determine which customers, if any, would react positively to the change in delivery service?* Hint: Can we assume the change affects all customers equally? How can we determine which group of customers it affects the most? Answer: I think customer in Segment 0 will react positively to these changes: - Grocery, Detergents_Paper and Milk contribute heavly to Segment 0 as evident by Biplot above in this document. These materials are less prone to get affected if stored for longer times. e.g. 
Poeple tends to buy Grocery once in a week or month, hence changing the delivery period from 5 days a week to 3 days a week might not impact the customers in this segment. - Customers in Segment 1 has more inclination towards Frozen and Fresh. Storing a Fresh material for longer time and reducing the deliver from 5 days a week to 3 days a week might not receive positive response from this customer segment. Question 11 Additional structure is derived from originally unlabeled data when using clustering techniques. Since each customer has a customer segment it best identifies with (depending on the clustering algorithm applied), we can consider 'customer segment' as an engineered feature for the data. Assume the wholesale distributor recently acquired ten new customers and each provided estimates for anticipated annual spending of each product category. Knowing these estimates, the wholesale distributor wants to classify each new customer to a customer segment to determine the most appropriate delivery service. How can the wholesale distributor label the new customers using only their estimated product spending and the customer segment* data? Hint: A supervised learner could be used to train on the original customers. What would be the target variable? Answer: - I think based on the data provided by new customer our target variable would be Segment 0 or Segment 1. - Since we now have Segment 0 and Segment 1 for existing data. This can act as a label and we can use the features provided by new customers to predict which labels these new customers belong to. Visualizing Underlying Distributions At the beginning of this project, it was discussed that the 'Channel' and 'Region' features would be excluded from the dataset so that the customer product categories were emphasized in the analysis. By reintroducing the 'Channel' feature to the dataset, an interesting structure emerges when considering the same PCA dimensionality reduction applied earlier to the original dataset. Run the code block below to see how each data point is labeled either 'HoReCa' (Hotel/Restaurant/Cafe) or 'Retail' the reduced space. In addition, you will find the sample points are circled in the plot, which will identify their labeling. End of explanation """
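Looking back at Question 11, here is a hedged, illustrative sketch (not part of the original project code) of how the engineered segment labels could drive a supervised classifier; new_customers is a hypothetical DataFrame of the ten new customers' estimated annual spending, not a variable defined above.

from sklearn.linear_model import LogisticRegression

# Train on the existing customers: PCA-reduced spending -> predicted segment label
segment_clf = LogisticRegression()
segment_clf.fit(reduced_data, preds)

# For new customers, apply the same preprocessing chain before predicting a segment:
# new_reduced = pca.transform(np.log(new_customers))
# new_segments = segment_clf.predict(new_reduced)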
jealdana/Open-Notebooks
PythonWars/Python Wars.ipynb
gpl-3.0
import webbrowser import requests import bs4 import csv import pandas as pd res = requests.get('http://mytowntutors.com/2014/04/star-wars-the-clone-wars/') print(res.status_code == requests.codes.ok) print("Number of lines in te downloaded page: %i"%len(res.text)) print("The first 20 thousand lines for a quick assesment. Hint: by using the finder it will give you an idea of the tags related to the info.") print(res.text[:20000]) # quick assesment """ Explanation: Episode 1 - The Python Menace Description: In this tutorial we will crawl quotes from Star Wars The clone wars, clean the format and automate a daily random email. Activities included in this tutorial: Web scraping and automated mailing Python and SQL Topics: Data mining Automation Natural Language Processing (NLP) SQL Web scraping Machine learning Path of the Python Menace: Get the force in the quotes Send the quotes through email Episode 2 Classify the quotes by topics using Natural language processing + Machine Learning Select a specific topic and receive a quote Use jupyter buttons Save the logs to the SQL palace Theme: Starwars, python The website is from mytowntutors.com : quotes link End of explanation """ starwars_website = bs4.BeautifulSoup(res.text) starwars_quotes = starwars_website.select('li') print("Number of elements that have li tag" + str(len(starwars_quotes))); starwars_quotes[126].getText(); # Since the quotes are together we need to find the first # quote and the last. In this case were 19 and 127 respectively starwar_quotes_selected = [] for quote in starwars_quotes[19:127]: starwar_quotes_selected.append(quote.getText()) starwar_quotes_selected """ Explanation: <a id="GettingQuotes"></a> Getting the quotes Extracting the quotes End of explanation """ starwar_quotes_selected_cleaned = [] for quote_number in range(len(starwar_quotes_selected)): starwar_quotes_selected_cleaned.append(starwar_quotes_selected[quote_number].encode("utf-8").split(":")[1].strip("\xe2\x80\x9d").strip(" \xe2\x80\x9c").replace("\xe2\x80\x99","'").replace("\xe2\x80\x94",",")) starwar_quotes_selected_cleaned """ Explanation: Cleaning the quotes End of explanation """ path = "./" with open(path+"starwar_quotes.txt", "wb") as f: for item in starwar_quotes_selected_cleaned: f.write("%s\n" % item) # writer = csv.writer(f) # writer.writerows([starwar_quotes_selected_cleaned]) pd.read_table(path+"starwar_quotes.txt",header=None) """ Explanation: Mailing the quotes Get the record of the last 15 quotes. We will randomize the quotes, those that are not in the last 15 sent quotes, so you don't get the same quotes in the same order over and over. # Hint: circular buffer for sent quotes. Mail the quote End of explanation """
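The mailing step itself is not implemented above. A hedged sketch is shown below: the SMTP host, addresses and credentials are placeholders, the 15-quote "circular buffer" is kept in memory only, and it assumes the quotes file written earlier.

import random
import smtplib
from email.mime.text import MIMEText

quotes = [line.strip() for line in open(path + "starwar_quotes.txt") if line.strip()]
recently_sent = []  # acts as a circular buffer of the last 15 quotes sent

def pick_quote():
    # choose a random quote that was not among the last 15 sent
    candidates = [q for q in quotes if q not in recently_sent]
    quote = random.choice(candidates)
    recently_sent.append(quote)
    if len(recently_sent) > 15:
        recently_sent.pop(0)
    return quote

def mail_quote(sender, receiver, password):
    msg = MIMEText(pick_quote())
    msg['Subject'] = 'Star Wars quote of the day'
    msg['From'] = sender
    msg['To'] = receiver
    server = smtplib.SMTP('smtp.example.com', 587)  # placeholder SMTP host
    server.starttls()
    server.login(sender, password)
    server.sendmail(sender, [receiver], msg.as_string())
    server.quit()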
cslab-org/cslab
static/teaching/pattern/PR_Your_Name.ipynb
mit
%matplotlib inline
# code to sample a random number between 0 & 1
# Try running this multiple times by pressing Ctrl-Enter
import numpy as np
import matplotlib.pyplot as plt
print np.random.random()
"""
Explanation: Your Name:
Roll Number:
1. Linear Discriminant Analysis
In this part, we will do LDA on a synthetic data set. That means we will generate the data ourselves and then fit a linear classifier to this data.
Step1: Create data set
We are going to sample 500 points each from three 2d gaussian distributions. The means of the three gaussians are $\mu_1 = [a, b]^T$, $\mu_2 = [a+2, b+4]^T$ and $\mu_3 = [a+4, b]^T$ respectively, where a is the last digit of your roll number and b is the second last digit of your roll number. <br>
Similarly the covariance matrices are $\Sigma_1 = \Sigma_2 = \Sigma_3 = I$ <br>
To generate points from 2d gaussians, we should first know how to generate random numbers.
How to generate random numbers?
Use the numpy random package.
End of explanation
"""
print np.random.randn()
"""
Explanation: How to sample from a gaussian?
Use the randn function to sample from a 1D gaussian with mean 0 and variance 1.
End of explanation
"""
points = np.random.normal(3, 1, 1000)

# A histogram plot. It looks like a gaussian distribution centered around 3
plt.hist(points)
plt.show()
"""
Explanation: Let's sample 1000 points!
Use random.normal(mu, sigma, number of points). Let's assume the mean is 3.
End of explanation
"""
mean = np.array([3, 3])
cov = np.eye(2)  # the identity matrix
points = np.random.multivariate_normal(mean, cov, 1000)

# scatter plot with x axis as the first column of points and y axis as the second column
plt.scatter(points[:, 0], points[:, 1])
plt.show()
"""
Explanation: Generate samples from a 2D gaussian
Use random.multivariate_normal(mean, cov, 100) to generate 100 points from a multivariate gaussian
End of explanation
"""
d1 = np.random.multivariate_normal([3, 0], cov, 500)
d2 = np.random.multivariate_normal([5, 4], cov, 500)
d3 = np.random.multivariate_normal([7, 0], cov, 500)
data = np.vstack([d1, d2, d3])

plt.scatter(d1[:, 0], d1[:, 1], color='red')
plt.scatter(d2[:, 0], d2[:, 1], color='blue')
plt.scatter(d3[:, 0], d3[:, 1], color='green')
plt.show()
"""
Explanation: Sample from three different 2D gaussians
The means of the three gaussians should be $\mu_1 = [a, b]^T$, $\mu_2 = [a+2, b+4]^T$ and $\mu_3 = [a+4, b]^T$ respectively, where a is the last digit of your roll number and b is *the second last digit of your roll number*. <br>
Similarly the covariance matrices are $\Sigma_1 = \Sigma_2 = \Sigma_3 = I$ <br>
End of explanation
"""
# Generate 100 points
points = np.array([])
for i in range(100):  # sample 100 points
    if np.random.rand() > 0.5:
        points = np.append(points, np.random.normal(2,1))
    else:
        points = np.append(points, np.random.normal(6,1))
plt.hist(points)
plt.show()
"""
Explanation: Step2: Estimate the Parameters
Estimate 3 means and a covariance matrix from data
We have assumed that $\Sigma = \sigma^2 I$. <br>
Convince yourself that the Maximum Likelihood Estimate for $\sigma^2$ is $\frac{1}{2n}\sum\limits_{i=1}^n (x_i-\mu)^T(x_i-\mu)$, where $n$ is the number of samples. <br>
Let's compute the maximum likelihood estimates for the three sets of data points (generated from 3 different gaussians) separately, denote them as $\hat\sigma_1^2$, $\hat\sigma_2^2$ and $\hat\sigma_3^2$, and then take the combined estimate as the average of the three estimates.
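A possible sketch of Step 2 (illustrative only, not part of the original workbook), using the d1, d2 and d3 arrays sampled above:

# estimate the three class means and the shared sigma^2 under Sigma = sigma^2 * I
mu1_hat = d1.mean(axis=0)
mu2_hat = d2.mean(axis=0)
mu3_hat = d3.mean(axis=0)

def sigma2_mle(points, mu):
    # (1 / 2n) * sum_i (x_i - mu)^T (x_i - mu)
    diffs = points - mu
    return (diffs * diffs).sum() / (2.0 * len(points))

sigma2_hat = (sigma2_mle(d1, mu1_hat) + sigma2_mle(d2, mu2_hat) + sigma2_mle(d3, mu3_hat)) / 3.0
print(sigma2_hat)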
Step3: Draw the Decision Boundaries Refer your notes/textbook to convince yourself that in the particular case where all the normal distributions have the same prior and the same covariance matrix of the form $\sigma^2I$, the discriminant functions are given by $$g_i(x) = \mu_i^Tx - \frac{1}{2}\mu_i^T\mu_i$$Find the point at which $g_1(x) = g_2(x) = g_3(x)$ <br> Draw the three decision boundaries by solving $g_1(x) = g_2(x)$, $g_1(x) = g_3(x)$ and $g_2(x) = g_3(x)$ 2. Parzen Window Gaussian kernel smoothing The kernel density model is given by $$p(x) = \frac{1}{N} \sum_{i=1}^N \frac{1}{(2\pi h^2)^{D/2}} exp\left(\frac{- (x-x_i)^T(x-x_i)}{2h^2}\right) \ $$ where D is the dimension (which is 2 here), h is the standard deviation parameter we have to set, and N is the total number of samples. Density estimation in 1 dimension Let's generate data from a mixture of two 1D gaussians as follows. Toss a fair coin, if the outcome is heads, sample a data point from the first gaussian, otherwise sample from the second gaussian. The two gaussians have a mean 2 and 4 and a standard deviation of 1. End of explanation """ h = 0.08 X = np.arange(-2, 10, 0.02) # for each point in x, we have compute its pdf Y = np.array([]) N = len(points) for x in X: t = 0 for xi in points: t += np.exp(-(x-xi)**2/(2*h*h)) y = (t/(2*np.pi*h*h)**0.5)/N Y = np.append(Y, y) plt.plot(X, Y) plt.show() """ Explanation: Parzen window estimation Our x ranges approximately from -2 to 10. The pdf is given by $p(x) = \frac{1}{N} \sum\limits_{i=1}^N \frac{1}{(2\pi h^2)^{1/2}} exp\left(\frac{- (x-x_i)^2}{2h^2}\right) \ $ for every value of x. In order to plot the estimated density, we compute the above pdf for a range of x, starting from -2 till 10, incrementing x by 0.02. Choose different values for the smoothing parameter h to get the best density estimate. (Try h=0.08, 0.1, 0.15 etc.) What value of h gives the bimodal distribution? End of explanation """ from mpl_toolkits.mplot3d import Axes3D """ Explanation: Density estimation in 2 Dimension Similarly do density estimation for the above data set which we sampled from 3 2d gaussians. Note: It will be computationally expensive to calculate the density for all the points in the 2D plane. So do density estimation for points in the square [c-2, c+2]x[d-2, d+2] where (c,d) denotes the coordinates of the meeting point of the three discriminant lines in the Linear Discriminant Analysis we have done above. End of explanation """
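A hedged sketch of that 2D Parzen-window estimate: the grid step, the smoothing parameter h, and the meeting point (c, d) are placeholders you should replace with your own values, and data is assumed to be the stacked (1500, 2) array sampled earlier.

h = 0.3          # smoothing parameter, to be tuned
c, d = 5.0, 1.5  # placeholder meeting point from Step 3; replace with your own value
xs = np.arange(c - 2, c + 2, 0.1)
ys = np.arange(d - 2, d + 2, 0.1)
XX, YY = np.meshgrid(xs, ys)
ZZ = np.zeros_like(XX)
N = len(data)
for i in range(XX.shape[0]):
    for j in range(XX.shape[1]):
        # kernel density at grid point (XX[i, j], YY[i, j])
        diff = data - np.array([XX[i, j], YY[i, j]])
        sq_dist = (diff ** 2).sum(axis=1)
        ZZ[i, j] = np.exp(-sq_dist / (2 * h * h)).sum() / (2 * np.pi * h * h * N)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(XX, YY, ZZ)
plt.show()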
mtasende/Machine-Learning-Nanodegree-Capstone
notebooks/prod/n09_dyna_q_learner.ipynb
mit
# Basic imports import os import pandas as pd import matplotlib.pyplot as plt import numpy as np import datetime as dt import scipy.optimize as spo import sys from time import time from sklearn.metrics import r2_score, median_absolute_error from multiprocessing import Pool %matplotlib inline %pylab inline pylab.rcParams['figure.figsize'] = (20.0, 10.0) %load_ext autoreload %autoreload 2 sys.path.append('../../') import recommender.simulator as sim from utils.analysis import value_eval from recommender.agent import Agent from functools import partial NUM_THREADS = 1 LOOKBACK = 252*2 + 28 STARTING_DAYS_AHEAD = 20 POSSIBLE_FRACTIONS = [0.0, 1.0] DYNA = 20 # Get the data SYMBOL = 'SPY' total_data_train_df = pd.read_pickle('../../data/data_train_val_df.pkl').stack(level='feature') data_train_df = total_data_train_df[SYMBOL].unstack() total_data_test_df = pd.read_pickle('../../data/data_test_df.pkl').stack(level='feature') data_test_df = total_data_test_df[SYMBOL].unstack() if LOOKBACK == -1: total_data_in_df = total_data_train_df data_in_df = data_train_df else: data_in_df = data_train_df.iloc[-LOOKBACK:] total_data_in_df = total_data_train_df.loc[data_in_df.index[0]:] # Create many agents index = np.arange(NUM_THREADS).tolist() env, num_states, num_actions = sim.initialize_env(total_data_in_df, SYMBOL, starting_days_ahead=STARTING_DAYS_AHEAD, possible_fractions=POSSIBLE_FRACTIONS) agents = [Agent(num_states=num_states, num_actions=num_actions, random_actions_rate=0.98, random_actions_decrease=0.999, dyna_iterations=DYNA, name='Agent_{}'.format(i)) for i in index] def show_results(results_list, data_in_df, graph=False): for values in results_list: total_value = values.sum(axis=1) print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(total_value)))) print('-'*100) initial_date = total_value.index[0] compare_results = data_in_df.loc[initial_date:, 'Close'].copy() compare_results.name = SYMBOL compare_results_df = pd.DataFrame(compare_results) compare_results_df['portfolio'] = total_value std_comp_df = compare_results_df / compare_results_df.iloc[0] if graph: plt.figure() std_comp_df.plot() """ Explanation: In this notebook a Q learner with dyna will be trained and evaluated. The Q learner recommends when to buy or sell shares of one particular stock, and in which quantity (in fact it determines the desired fraction of shares in the total portfolio value). End of explanation """ print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_in_df['Close'].iloc[STARTING_DAYS_AHEAD:])))) # Simulate (with new envs, each time) n_epochs = 4 for i in range(n_epochs): tic = time() env.reset(STARTING_DAYS_AHEAD) results_list = sim.simulate_period(total_data_in_df, SYMBOL, agents[0], starting_days_ahead=STARTING_DAYS_AHEAD, possible_fractions=POSSIBLE_FRACTIONS, verbose=False, other_env=env) toc = time() print('Epoch: {}'.format(i)) print('Elapsed time: {} seconds.'.format((toc-tic))) print('Random Actions Rate: {}'.format(agents[0].random_actions_rate)) show_results([results_list], data_in_df) env.reset(STARTING_DAYS_AHEAD) results_list = sim.simulate_period(total_data_in_df, SYMBOL, agents[0], learn=False, starting_days_ahead=STARTING_DAYS_AHEAD, possible_fractions=POSSIBLE_FRACTIONS, other_env=env) show_results([results_list], data_in_df, graph=True) """ Explanation: Let's show the symbols data, to see how good the recommender has to be. 
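# For reference only: `value_eval` above is this project's own helper (utils.analysis), and its
# exact implementation is not shown here. The sketch below illustrates a common way to compute
# the kind of annualised Sharpe ratio and cumulative return reported in the next cells, assuming
# a zero risk-free rate and 252 trading days per year; it is an illustration, not the project's code.
def sharpe_and_cum_ret(values, samples_per_year=252):
    values = pd.Series(np.asarray(values).ravel())
    daily_ret = values.pct_change().dropna()          # simple daily returns
    sharpe = np.sqrt(samples_per_year) * daily_ret.mean() / daily_ret.std()
    cum_ret = values.iloc[-1] / values.iloc[0] - 1.0  # total return over the period
    return sharpe, cum_ret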
End of explanation """ TEST_DAYS_AHEAD = 20 env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD) tic = time() results_list = sim.simulate_period(total_data_test_df, SYMBOL, agents[0], learn=False, starting_days_ahead=TEST_DAYS_AHEAD, possible_fractions=POSSIBLE_FRACTIONS, verbose=False, other_env=env) toc = time() print('Epoch: {}'.format(i)) print('Elapsed time: {} seconds.'.format((toc-tic))) print('Random Actions Rate: {}'.format(agents[0].random_actions_rate)) show_results([results_list], data_test_df, graph=True) """ Explanation: Let's run the trained agent, with the test set First a non-learning test: this scenario would be worse than what is possible (in fact, the q-learner can learn from past samples in the test set without compromising the causality). End of explanation """ env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD) tic = time() results_list = sim.simulate_period(total_data_test_df, SYMBOL, agents[0], learn=True, starting_days_ahead=TEST_DAYS_AHEAD, possible_fractions=POSSIBLE_FRACTIONS, verbose=False, other_env=env) toc = time() print('Epoch: {}'.format(i)) print('Elapsed time: {} seconds.'.format((toc-tic))) print('Random Actions Rate: {}'.format(agents[0].random_actions_rate)) show_results([results_list], data_test_df, graph=True) """ Explanation: And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few). End of explanation """ print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_test_df['Close'].iloc[TEST_DAYS_AHEAD:])))) """ Explanation: What are the metrics for "holding the position"? End of explanation """ import pickle with open('../../data/simple_q_learner_fast_learner_full_training.pkl', 'wb') as best_agent: pickle.dump(agents[0], best_agent) """ Explanation: Conclusion: End of explanation """
amolsharma99/UdacityDeepLearningClass
1_notmnist.ipynb
mit
# These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import matplotlib.pyplot as plt import numpy as np import os import sys import tarfile from IPython.display import display, Image from scipy import ndimage from sklearn.linear_model import LogisticRegression from six.moves.urllib.request import urlretrieve from six.moves import cPickle as pickle """ Explanation: Deep Learning Assignment 1 The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later. This notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST. End of explanation """ url = 'http://yaroslavvb.com/upload/notMNIST/' def maybe_download(filename, expected_bytes, force=False): """Download a file if not present, and make sure it's the right size.""" if force or not os.path.exists(filename): filename, _ = urlretrieve(url + filename, filename) statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified', filename) else: raise Exception( 'Failed to verify' + filename + '. Can you get to it with a browser?') return filename train_filename = maybe_download('notMNIST_large.tar.gz', 247336696) test_filename = maybe_download('notMNIST_small.tar.gz', 8458043) """ Explanation: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine. End of explanation """ num_classes = 10 np.random.seed(133) def maybe_extract(filename, force=False): root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz if os.path.isdir(root) and not force: # You may override by setting force=True. print('%s already present - Skipping extraction of %s.' % (root, filename)) else: print('Extracting data for %s. This may take a while. Please wait.' % root) tar = tarfile.open(filename) sys.stdout.flush() tar.extractall() tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if os.path.isdir(os.path.join(root, d))] if len(data_folders) != num_classes: raise Exception( 'Expected %d folders, one per class. Found %d instead.' % ( num_classes, len(data_folders))) print(data_folders) return data_folders train_folders = maybe_extract(train_filename) test_folders = maybe_extract(test_filename) """ Explanation: Extract the dataset from the compressed .tar.gz file. This should give you a set of directories, labelled A through J. End of explanation """ image_size = 28 # Pixel width and height. pixel_depth = 255.0 # Number of levels per pixel. 
def load_letter(folder, min_num_images): """Load the data for a single letter label.""" image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dtype=np.float32) image_index = 0 print(folder) for image in os.listdir(folder): image_file = os.path.join(folder, image) try: image_data = (ndimage.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) dataset[image_index, :, :] = image_data image_index += 1 except IOError as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') num_images = image_index dataset = dataset[0:num_images, :, :] if num_images < min_num_images: raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) return dataset def maybe_pickle(data_folders, min_num_images_per_class, force=False): dataset_names = [] for folder in data_folders: set_filename = folder + '.pickle' dataset_names.append(set_filename) if os.path.exists(set_filename) and not force: # You may override by setting force=True. print('%s already present - Skipping pickling.' % set_filename) else: print('Pickling %s.' % set_filename) dataset = load_letter(folder, min_num_images_per_class) try: with open(set_filename, 'wb') as f: pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', set_filename, ':', e) return dataset_names train_datasets = maybe_pickle(train_folders, 45000) test_datasets = maybe_pickle(test_folders, 1800) """ Explanation: Problem 1 Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display. Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size. We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. A few images might not be readable, we'll just skip them. 
End of explanation """ def make_arrays(nb_rows, img_size): if nb_rows: dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32) labels = np.ndarray(nb_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def merge_datasets(pickle_files, train_size, valid_size=0): num_classes = len(pickle_files) valid_dataset, valid_labels = make_arrays(valid_size, image_size) train_dataset, train_labels = make_arrays(train_size, image_size) vsize_per_class = valid_size // num_classes tsize_per_class = train_size // num_classes start_v, start_t = 0, 0 end_v, end_t = vsize_per_class, tsize_per_class end_l = vsize_per_class+tsize_per_class for label, pickle_file in enumerate(pickle_files): try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) # let's shuffle the letters to have random validation and training set np.random.shuffle(letter_set) if valid_dataset is not None: valid_letter = letter_set[:vsize_per_class, :, :] valid_dataset[start_v:end_v, :, :] = valid_letter valid_labels[start_v:end_v] = label start_v += vsize_per_class end_v += vsize_per_class train_letter = letter_set[vsize_per_class:end_l, :, :] train_dataset[start_t:end_t, :, :] = train_letter train_labels[start_t:end_t] = label start_t += tsize_per_class end_t += tsize_per_class except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise return valid_dataset, valid_labels, train_dataset, train_labels train_size = 200000 valid_size = 10000 test_size = 10000 valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets( train_datasets, train_size, valid_size) _, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size) print('Training:', train_dataset.shape, train_labels.shape) print('Validation:', valid_dataset.shape, valid_labels.shape) print('Testing:', test_dataset.shape, test_labels.shape) """ Explanation: Problem 2 Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot. Problem 3 Another check: we expect the data to be balanced across classes. Verify that. Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. The labels will be stored into a separate array of integers 0 through 9. Also create a validation dataset for hyperparameter tuning. End of explanation """ def randomize(dataset, labels): permutation = np.random.permutation(labels.shape[0]) shuffled_dataset = dataset[permutation,:,:] shuffled_labels = labels[permutation] return shuffled_dataset, shuffled_labels train_dataset, train_labels = randomize(train_dataset, train_labels) test_dataset, test_labels = randomize(test_dataset, test_labels) valid_dataset, valid_labels = randomize(valid_dataset, valid_labels) """ Explanation: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match. 
End of explanation """ pickle_file = 'notMNIST.pickle' try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_dataset, 'train_labels': train_labels, 'valid_dataset': valid_dataset, 'valid_labels': valid_labels, 'test_dataset': test_dataset, 'test_labels': test_labels, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise statinfo = os.stat(pickle_file) print('Compressed pickle size:', statinfo.st_size) """ Explanation: Problem 4 Convince yourself that the data is still good after shuffling! Finally, let's save the data for later reuse: End of explanation """
ES-DOC/esdoc-jupyterhub
notebooks/dwd/cmip6/models/sandbox-3/atmos.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'dwd', 'sandbox-3', 'atmos') """ Explanation: ES-DOC CMIP6 Model Properties - Atmos MIP Era: CMIP6 Institute: DWD Source ID: SANDBOX-3 Topic: Atmos Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. Properties: 156 (127 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:57 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties --&gt; Overview 2. Key Properties --&gt; Resolution 3. Key Properties --&gt; Timestepping 4. Key Properties --&gt; Orography 5. Grid --&gt; Discretisation 6. Grid --&gt; Discretisation --&gt; Horizontal 7. Grid --&gt; Discretisation --&gt; Vertical 8. Dynamical Core 9. Dynamical Core --&gt; Top Boundary 10. Dynamical Core --&gt; Lateral Boundary 11. Dynamical Core --&gt; Diffusion Horizontal 12. Dynamical Core --&gt; Advection Tracers 13. Dynamical Core --&gt; Advection Momentum 14. Radiation 15. Radiation --&gt; Shortwave Radiation 16. Radiation --&gt; Shortwave GHG 17. Radiation --&gt; Shortwave Cloud Ice 18. Radiation --&gt; Shortwave Cloud Liquid 19. Radiation --&gt; Shortwave Cloud Inhomogeneity 20. Radiation --&gt; Shortwave Aerosols 21. Radiation --&gt; Shortwave Gases 22. Radiation --&gt; Longwave Radiation 23. Radiation --&gt; Longwave GHG 24. Radiation --&gt; Longwave Cloud Ice 25. Radiation --&gt; Longwave Cloud Liquid 26. Radiation --&gt; Longwave Cloud Inhomogeneity 27. Radiation --&gt; Longwave Aerosols 28. Radiation --&gt; Longwave Gases 29. Turbulence Convection 30. Turbulence Convection --&gt; Boundary Layer Turbulence 31. Turbulence Convection --&gt; Deep Convection 32. Turbulence Convection --&gt; Shallow Convection 33. Microphysics Precipitation 34. Microphysics Precipitation --&gt; Large Scale Precipitation 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics 36. Cloud Scheme 37. Cloud Scheme --&gt; Optical Cloud Properties 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution 40. Observation Simulation 41. Observation Simulation --&gt; Isscp Attributes 42. Observation Simulation --&gt; Cosp Attributes 43. Observation Simulation --&gt; Radar Inputs 44. Observation Simulation --&gt; Lidar Inputs 45. Gravity Waves 46. Gravity Waves --&gt; Orographic Gravity Waves 47. Gravity Waves --&gt; Non Orographic Gravity Waves 48. Solar 49. Solar --&gt; Solar Pathways 50. Solar --&gt; Solar Constant 51. Solar --&gt; Orbital Parameters 52. 
Solar --&gt; Insolation Ozone 53. Volcanos 54. Volcanos --&gt; Volcanoes Treatment 1. Key Properties --&gt; Overview Top level key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "AGCM" # "ARCM" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of atmospheric model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "primitive equations" # "non-hydrostatic" # "anelastic" # "Boussinesq" # "hydrostatic" # "quasi-hydrostatic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the atmosphere. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Resolution Characteristics of the model resolution 2.1. Horizontal Resolution Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 2.4. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on the computational grid. 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.high_top') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 2.5. High Top Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Timestepping Characteristics of the atmosphere model time stepping 3.1. Timestep Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the dynamics, e.g. 30 min. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.2. Timestep Shortwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the shortwave radiative transfer, e.g. 1.5 hours. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.3. Timestep Longwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the longwave radiative transfer, e.g. 3 hours. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.orography.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "modified" # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Orography Characteristics of the model orography 4.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the orography. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.orography.changes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "related to ice sheets" # "related to tectonics" # "modified mean" # "modified variance if taken into account in model (cf gravity waves)" # TODO - please enter value(s) """ Explanation: 4.2. Changes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N If the orography type is modified describe the time adaptation changes. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Grid --&gt; Discretisation Atmosphere grid discretisation 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of grid discretisation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "spectral" # "fixed grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6. Grid --&gt; Discretisation --&gt; Horizontal Atmosphere discretisation in the horizontal 6.1. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "finite elements" # "finite volumes" # "finite difference" # "centered finite difference" # TODO - please enter value(s) """ Explanation: 6.2. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "second" # "third" # "fourth" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.3. Scheme Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation function order End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "filter" # "pole rotation" # "artificial island" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.4. Horizontal Pole Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal discretisation pole singularity treatment End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Gaussian" # "Latitude-Longitude" # "Cubed-Sphere" # "Icosahedral" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.5. Grid Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "isobaric" # "sigma" # "hybrid sigma-pressure" # "hybrid pressure" # "vertically lagrangian" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 7. Grid --&gt; Discretisation --&gt; Vertical Atmosphere discretisation in the vertical 7.1. Coordinate Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Type of vertical coordinate system End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Dynamical Core Characteristics of the dynamical core 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of atmosphere dynamical core End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the dynamical core of the model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Adams-Bashforth" # "explicit" # "implicit" # "semi-implicit" # "leap frog" # "multi-step" # "Runge Kutta fifth order" # "Runge Kutta second order" # "Runge Kutta third order" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.3. Timestepping Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestepping framework type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "surface pressure" # "wind components" # "divergence/curl" # "temperature" # "potential temperature" # "total water" # "water vapour" # "water liquid" # "water ice" # "total water moments" # "clouds" # "radiation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of the model prognostic variables End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 9. Dynamical Core --&gt; Top Boundary Type of boundary layer at the top of the model 9.1. Top Boundary Condition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary condition End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.2. Top Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary heat treatment End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.3. Top Wind Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary wind treatment End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10. Dynamical Core --&gt; Lateral Boundary Type of lateral boundary condition (if the model is a regional model) 10.1. Condition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Type of lateral boundary condition End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11. Dynamical Core --&gt; Diffusion Horizontal Horizontal diffusion scheme 11.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal diffusion scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "iterated Laplacian" # "bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.2. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal diffusion scheme method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heun" # "Roe and VanLeer" # "Roe and Superbee" # "Prather" # "UTOPIA" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12. Dynamical Core --&gt; Advection Tracers Tracer advection scheme 12.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Tracer advection scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Eulerian" # "modified Euler" # "Lagrangian" # "semi-Lagrangian" # "cubic semi-Lagrangian" # "quintic semi-Lagrangian" # "mass-conserving" # "finite volume" # "flux-corrected" # "linear" # "quadratic" # "quartic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.2. Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme characteristics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "dry mass" # "tracer mass" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.3. Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme conserved quantities End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Priestley algorithm" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.4. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracer advection scheme conservation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "VanLeer" # "Janjic" # "SUPG (Streamline Upwind Petrov-Galerkin)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Dynamical Core --&gt; Advection Momentum Momentum advection scheme 13.1. 
Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Momentum advection schemes name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "2nd order" # "4th order" # "cell-centred" # "staggered grid" # "semi-staggered grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme characteristics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa D-grid" # "Arakawa E-grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.3. Scheme Staggering Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme staggering type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Angular momentum" # "Horizontal momentum" # "Enstrophy" # "Mass" # "Total energy" # "Vorticity" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.4. Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme conserved quantities End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.5. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme conservation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.aerosols') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "sulphate" # "nitrate" # "sea salt" # "dust" # "ice" # "organic" # "BC (black carbon / soot)" # "SOA (secondary organic aerosols)" # "POM (particulate organic matter)" # "polar stratospheric ice" # "NAT (nitric acid trihydrate)" # "NAD (nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particle)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14. Radiation Characteristics of the atmosphere radiation process 14.1. Aerosols Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Aerosols whose radiative effect is taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15. Radiation --&gt; Shortwave Radiation Properties of the shortwave radiation scheme 15.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of shortwave radiation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.3. Spectral Integration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Shortwave radiation scheme spectral integration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.4. Transport Calculation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Shortwave radiation transport calculation methods End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.5. Spectral Intervals Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Shortwave radiation scheme number of spectral intervals End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16. Radiation --&gt; Shortwave GHG Representation of greenhouse gases in the shortwave radiation scheme 16.1. Greenhouse Gas Complexity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.2. 
ODS Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.3. Other Flourinated Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17. Radiation --&gt; Shortwave Cloud Ice Shortwave radiative properties of ice crystals in clouds 17.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with cloud ice crystals End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud ice crystals in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud ice crystals in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18. Radiation --&gt; Shortwave Cloud Liquid Shortwave radiative properties of liquid droplets in clouds 18.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with cloud liquid droplets End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud liquid droplets in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19. Radiation --&gt; Shortwave Cloud Inhomogeneity Cloud inhomogeneity in the shortwave radiation scheme 19.1. Cloud Inhomogeneity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for taking into account horizontal cloud inhomogeneity End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20. Radiation --&gt; Shortwave Aerosols Shortwave radiative properties of aerosols 20.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with aerosols End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of aerosols in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20.3. 
Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to aerosols in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 21. Radiation --&gt; Shortwave Gases Shortwave radiative properties of gases 21.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with gases End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22. Radiation --&gt; Longwave Radiation Properties of the longwave radiation scheme 22.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of longwave radiation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the longwave radiation scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.3. Spectral Integration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Longwave radiation scheme spectral integration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.4. Transport Calculation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Longwave radiation transport calculation methods End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 22.5. Spectral Intervals Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Longwave radiation scheme number of spectral intervals End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23. 
Radiation --&gt; Longwave GHG Representation of greenhouse gases in the longwave radiation scheme 23.1. Greenhouse Gas Complexity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.2. ODS Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.3. Other Flourinated Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 24. Radiation --&gt; Longwave Cloud Ice Longwave radiative properties of ice crystals in clouds 24.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with cloud ice crystals End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 24.2. Physical Reprenstation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud ice crystals in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 24.3. 
Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud ice crystals in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25. Radiation --&gt; Longwave Cloud Liquid Longwave radiative properties of liquid droplets in clouds 25.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with cloud liquid droplets End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud liquid droplets in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud liquid droplets in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26. Radiation --&gt; Longwave Cloud Inhomogeneity Cloud inhomogeneity in the longwave radiation scheme 26.1. Cloud Inhomogeneity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for taking into account horizontal cloud inhomogeneity End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27. Radiation --&gt; Longwave Aerosols Longwave radiative properties of aerosols 27.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with aerosols End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of aerosols in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to aerosols in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 28. Radiation --&gt; Longwave Gases Longwave radiative properties of gases 28.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with gases End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29. Turbulence Convection Atmosphere Convective Turbulence and Clouds 29.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of atmosphere convection and turbulence End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Mellor-Yamada" # "Holtslag-Boville" # "EDMF" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30. Turbulence Convection --&gt; Boundary Layer Turbulence Properties of the boundary layer turbulence scheme 30.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Boundary layer turbulence scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "TKE prognostic" # "TKE diagnostic" # "TKE coupled with water" # "vertical profile of Kz" # "non-local diffusion" # "Monin-Obukhov similarity" # "Coastal Buddy Scheme" # "Coupled with convection" # "Coupled with gravity waves" # "Depth capped at cloud base" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Boundary layer turbulence scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.3. Closure Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boundary layer turbulence scheme closure order End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 30.4. Counter Gradient Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Uses boundary layer turbulence scheme counter gradient End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 31. Turbulence Convection --&gt; Deep Convection Properties of the deep convection scheme 31.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Deep convection scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "adjustment" # "plume ensemble" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Deep convection scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CAPE" # "bulk" # "ensemble" # "CAPE/WFN based" # "TKE/CIN based" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.3. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Deep convection scheme method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vertical momentum transport" # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "updrafts" # "downdrafts" # "radiative effect of anvils" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.4. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical processes taken into account in the parameterisation of deep convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.5. Microphysics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Microphysics scheme for deep convection. 
Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32. Turbulence Convection --&gt; Shallow Convection Properties of the shallow convection scheme 32.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Shallow convection scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "cumulus-capped boundary layer" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N shallow convection scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "same as deep (unified)" # "included in boundary layer turbulence" # "separate diagnosis" # TODO - please enter value(s) """ Explanation: 32.3. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 shallow convection scheme method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32.4. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical processes taken into account in the parameterisation of shallow convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32.5. Microphysics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Microphysics scheme for shallow convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 33. Microphysics Precipitation Large Scale Cloud Microphysics and Precipitation 33.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of large scale cloud microphysics and precipitation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 34. Microphysics Precipitation --&gt; Large Scale Precipitation Properties of the large scale precipitation scheme 34.1. 
Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name of the large scale precipitation parameterisation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "liquid rain" # "snow" # "hail" # "graupel" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 34.2. Hydrometeors Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Precipitating hydrometeors taken into account in the large scale precipitation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics Properties of the large scale cloud microphysics scheme 35.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name of the microphysics parameterisation scheme used for large scale clouds. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mixed phase" # "cloud droplets" # "cloud ice" # "ice nucleation" # "water vapour deposition" # "effect of raindrops" # "effect of snow" # "effect of graupel" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 35.2. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Large scale cloud microphysics processes End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 36. Cloud Scheme Characteristics of the cloud scheme 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of the atmosphere cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 36.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "atmosphere_radiation" # "atmosphere_microphysics_precipitation" # "atmosphere_turbulence_convection" # "atmosphere_gravity_waves" # "atmosphere_solar" # "atmosphere_volcano" # "atmosphere_cloud_simulator" # TODO - please enter value(s) """ Explanation: 36.3. Atmos Coupling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Atmosphere components that are linked to the cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 36.4. Uses Separate Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "entrainment" # "detrainment" # "bulk cloud" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 36.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Processes included in the cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 36.6. Prognostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a prognostic scheme? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 36.7. Diagnostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a diagnostic scheme? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud amount" # "liquid" # "ice" # "rain" # "snow" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 36.8. Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List the prognostic variables used by the cloud scheme, if applicable. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "random" # "maximum" # "maximum-random" # "exponential" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 37. Cloud Scheme --&gt; Optical Cloud Properties Optical cloud properties 37.1. Cloud Overlap Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account overlapping of cloud layers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.2. Cloud Inhomogeneity Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account cloud inhomogeneity End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) """ Explanation: 38. 
Cloud Scheme --&gt; Sub Grid Scale Water Distribution Sub-grid scale water distribution 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 38.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 38.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) """ Explanation: 38.4. Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale water distribution coupling with convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) """ Explanation: 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution Sub-grid scale ice distribution 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 39.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 39.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) """ Explanation: 39.4. Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale ice distribution coupling with convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.observation_simulation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 40. Observation Simulation Characteristics of observation simulation 40.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of observation simulator characteristics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "no adjustment" # "IR brightness" # "visible optical depth" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 41. Observation Simulation --&gt; Isscp Attributes ISSCP Characteristics 41.1. Top Height Estimation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Cloud simulator ISSCP top height estimation methodUo End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "lowest altitude level" # "highest altitude level" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 41.2. Top Height Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator ISSCP top height direction End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Inline" # "Offline" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 42. Observation Simulation --&gt; Cosp Attributes CFMIP Observational Simulator Package attributes 42.1. Run Configuration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP run configuration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 42.2. Number Of Grid Points Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of grid points End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 42.3. Number Of Sub Columns Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of sub-cloumns used to simulate sub-grid variability End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 42.4. Number Of Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of levels End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 43. Observation Simulation --&gt; Radar Inputs Characteristics of the cloud radar simulator 43.1. Frequency Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar frequency (Hz) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "surface" # "space borne" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 43.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 43.3. Gas Absorption Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses gas absorption End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 43.4. Effective Radius Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses effective radius End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "ice spheres" # "ice non-spherical" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 44. Observation Simulation --&gt; Lidar Inputs Characteristics of the cloud lidar simulator 44.1. Ice Types Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator lidar ice type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "max" # "random" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 44.2. Overlap Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Cloud simulator lidar overlap End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 45. Gravity Waves Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources. 45.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of gravity wave parameterisation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Rayleigh friction" # "Diffusive sponge layer" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 45.2. 
Sponge Layer Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sponge layer in the upper levels in order to avoid gravity wave reflection at the top. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "continuous spectrum" # "discrete spectrum" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 45.3. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background wave distribution End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "effect on drag" # "effect on lifting" # "enhanced topography" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 45.4. Subgrid Scale Orography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Subgrid scale orography effects taken into account. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 46. Gravity Waves --&gt; Orographic Gravity Waves Gravity waves generated due to the presence of orography 46.1. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the orographic gravity wave scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "linear mountain waves" # "hydraulic jump" # "envelope orography" # "low level flow blocking" # "statistical sub-grid scale variance" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 46.2. Source Mechanisms Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Orographic gravity wave source mechanisms End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "non-linear calculation" # "more than two cardinal directions" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 46.3. Calculation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Orographic gravity wave calculation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "includes boundary layer ducting" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 46.4. Propagation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Orographic gravity wave propogation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 46.5. Dissipation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Orographic gravity wave dissipation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 47. Gravity Waves --&gt; Non Orographic Gravity Waves Gravity waves generated by non-orographic processes. 47.1. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the non-orographic gravity wave scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convection" # "precipitation" # "background spectrum" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 47.2. Source Mechanisms Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Non-orographic gravity wave source mechanisms End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "spatially dependent" # "temporally dependent" # TODO - please enter value(s) """ Explanation: 47.3. Calculation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Non-orographic gravity wave calculation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 47.4. Propagation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Non-orographic gravity wave propogation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 47.5. Dissipation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Non-orographic gravity wave dissipation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 48. Solar Top of atmosphere solar insolation characteristics 48.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of solar insolation of the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "SW radiation" # "precipitating energetic particles" # "cosmic rays" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 49. Solar --&gt; Solar Pathways Pathways for solar forcing of the atmosphere 49.1. Pathways Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Pathways for the solar forcing of the atmosphere model domain End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) """ Explanation: 50. Solar --&gt; Solar Constant Solar constant and top of atmosphere insolation characteristics 50.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the solar constant. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 50.2. Fixed Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the solar constant is fixed, enter the value of the solar constant (W m-2). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 50.3. Transient Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 solar constant transient characteristics (W m-2) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) """ Explanation: 51. Solar --&gt; Orbital Parameters Orbital parameters and top of atmosphere insolation characteristics 51.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of orbital parameters End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 51.2. Fixed Reference Date Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date for fixed orbital parameters (yyyy) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 51.3. Transient Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of transient orbital parameters End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Berger 1978" # "Laskar 2004" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 51.4. 
Computation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method used for computing orbital parameters. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 52. Solar --&gt; Insolation Ozone Impact of solar insolation on stratospheric ozone 52.1. Solar Ozone Impact Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does top of atmosphere insolation impact on stratospheric ozone? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 53. Volcanos Characteristics of the implementation of volcanoes 53.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of the implementation of volcanic effects in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "high frequency solar constant anomaly" # "stratospheric aerosols optical thickness" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 54. Volcanos --&gt; Volcanoes Treatment Treatment of volcanoes in the atmosphere 54.1. Volcanoes Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How volcanic effects are modeled in the atmosphere. End of explanation """
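The cells in this document only mark values as TODO. Purely as a hypothetical illustration of how the notebook API is used (the value chosen below is a placeholder taken from the listed valid choices, not the documented configuration of this model), a property would be completed like this:
# Hypothetical illustration only -- placeholder value, not the documented model configuration.
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
DOC.set_value("stratospheric aerosols optical thickness")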
jArumugam/python-notes
P11Iterators and Generators Homework.ipynb
mit
def gensquares(N):
    for num in xrange(1,N):
        yield num**2

for x in gensquares(10):
    print x
"""
Explanation: Iterators and Generators Homework
Problem 1
Create a generator that generates the squares of numbers up to some number N.
End of explanation
"""
import random

random.randint(1,10)

def rand_num(low,high,n):
    for num in xrange(n):
        yield random.randint(low,high)

for num in rand_num(1,10,12):
    print num
"""
Explanation: Problem 2
Create a generator that yields "n" random numbers between a low and high number (that are inputs).
Note: Use the random library. For example:
End of explanation
"""
s = 'hello'

s_iter = iter(s)

next(s_iter)

next(s_iter)

#code here
"""
Explanation: Problem 3
Use the iter() function to convert the string below
End of explanation
"""
my_list = [1,2,3,4,5]

gencomp = (item for item in my_list if item > 3)

for item in gencomp:
    print item
"""
Explanation: Problem 4
Explain a use case for a generator using a yield statement where you would not want to use a normal function with a return statement.
A generator is preferable when iterating over a very large sequence (for example the rows of a large matrix or data frame, or the lines of a huge file): yielding produces one item at a time, so the whole result never has to be built and returned in memory at once.
Extra Credit!
Can you explain what gencomp is in the code below? (Note: We never covered this in lecture! You will have to do some googling/Stack Overflowing!)
End of explanation
"""
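A short sketch of the extra-credit answer (not part of the original homework): gencomp is a generator expression. Like a generator function, it produces its items lazily, one at a time, instead of building the whole filtered list in memory the way a list comprehension does.
# Extra-credit sketch (added for illustration): gencomp is a generator expression.
my_list = [1,2,3,4,5]

gencomp = (item for item in my_list if item > 3)   # lazy generator expression
listcomp = [item for item in my_list if item > 3]  # eager list comprehension

print type(gencomp)  # <type 'generator'>
print next(gencomp)  # 4 -- values are computed only when requested
print listcomp       # [4, 5] -- all values computed up front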
clumdee/clumdee.github.io
assets/img/twitterBNK48/code_with_pyspark.ipynb
mit
from pyspark.sql import SparkSession
from pyspark.sql.functions import lower, split, explode, substring, count
from datetime import datetime

# create SparkSession
spark = SparkSession.builder.appName('streamTwitterTags').getOrCreate()

# connect and get tweets
tweets = spark.readStream.format("socket").option("host", "127.0.0.1").option("port", 5555).load()

# convert to lowercase and split the words
words_train = tweets.select(split(lower(tweets.value), " ").alias("value"))

# 'explode' a list of words into rows with single word
words_df = words_train.select(explode(words_train.value).alias('word_explode'))

# keep only rows that the word starts with '#'
hashtags = words_df.filter(substring(words_df.word_explode,0,1)=='#')
hashtags = hashtags.select(hashtags.word_explode.alias('hashtag'))

# count hashtags
count_hashtags = hashtags.groupBy('hashtag').count()
count_hashtags_order = count_hashtags.orderBy(count_hashtags['count'].desc())

# start streaming
start_time = datetime.now()
query = count_hashtags_order.writeStream.outputMode("complete").format("memory").queryName("topTagsTable").start()
"""
Explanation: Exploring the popularity of the BNK48 idols in real time from Twitter hashtags with Python
An example of streaming and handling the data with PySpark (2.2.0).
Technical part 0 -- Workflow
The main workflow is the same as streaming with a plain Python networking interface; the changes are in Technical parts 2 and 3, which use PySpark instead:
1. Pull the data from Twitter with Tweepy.
2. Stream and process the DataFrame with PySpark.
3. Rank the hashtags and build a simple real-time display.
Using PySpark pays off when the data gets large, because the computation can be distributed.
Technical part 1 -- Pulling the data from Twitter with Tweepy
Create a script (streamingTwitterTags.py) that connects to Twitter and pulls the tweets, as shown in the main blog post.
Technical part 2 -- Streaming and processing the DataFrame with PySpark
Streaming the data in a Jupyter Notebook with PySpark is done as in the cell below. In summary:
* Line 6: create the SparkSession.
* Line 9: connect to the streaming socket, filling in the "host" and "port" values that were set in streamingTwitterTags.py.
* Lines 11-23: process the streamed tweets; the value on line 22 is a PySpark DataFrame with all hashtags ordered from most to least frequent, collected from every tweet that contains '#bnk48' (as configured in streamingTwitterTags.py).
* Lines 26-27: record the start time and start streaming, writing the results to a table named "topTagsTable".
End of explanation
"""
spark.sql("select * from topTagsTable").limit(11).show()
"""
Explanation: Once we have both the script that streams the tweets (streamingTwitterTags.py) and the code that processes the data in the Jupyter Notebook, we are ready to test the system, starting by running the streaming script first.
<img src="https://raw.githubusercontent.com/clumdee/clumdee.github.io/master/assets/img/twitterBNK48/twitterStreaming_run.png" alt="twitterStreaming_run" style="width: 700px;"/>
Then run the PySpark code in the Jupyter Notebook. With that, our code starts pulling tweets from Twitter and building the DataFrame. If you want to see what the DataFrame looks like at any given moment, you can query it with the command below.
End of explanation
"""
import time
from datetime import timedelta
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
from IPython import display
%matplotlib inline

# set streaming period
stream_period = 10 # in minutes
finish_time = start_time + timedelta(minutes=stream_period)

# interactively query in-memory table
while datetime.now() < finish_time:
    # set wait time between iteration
    time.sleep(10)

    # get top hashtags
    top_hashtags_sql = spark.sql("select * from topTagsTable").limit(11)
    # convert top hashtags DataFrame to Pandas DataFrame
    top_hashtags = top_hashtags_sql.toPandas()

    # number of '#bnk48'
    bnk48_count = top_hashtags[top_hashtags['hashtag']=='#bnk48']['count'].values

    # create bar chart ranking top ten hashtags related to '#bnk48'
    fig, ax = plt.subplots(1,1,figsize=(10,6))
    top_hashtags[top_hashtags['hashtag']!='#bnk48'].plot(kind='bar', x='hashtag', y='count', legend=False, ax=ax)
    ax.set_title("Top 10 hashtags related to #BNK48 (%d counts)" % bnk48_count, fontsize=18)
    ax.set_xlabel("Hashtag", fontsize=18)
    ax.set_ylabel("Count", fontsize=18)
    ax.set_xticklabels(ax.get_xticklabels(), {"fontsize":14}, rotation=30)
    ax.yaxis.set_major_locator(MaxNLocator(integer=True)) # show only integer yticks
    plt.yticks(fontsize=14)

    # clear previous output, print start time and current time, and plot the current chart
    display.clear_output(wait=True)
    print("start time:", start_time.strftime('%Y-%m-%d %H:%M:%S'))
    print("current time:", datetime.now().strftime('%Y-%m-%d %H:%M:%S'))
    plt.show()
"""
Explanation: That query pulls up the first 11 hashtags. You may need to let the code run for a little while first (a few seconds is enough) so that there is data in the DataFrame to show; otherwise you will just see an empty DataFrame. Seeing the DataFrame like this is already useful, but it is even better to take the streamed data and draw a chart that displays the results in real time.
Technical part 3 -- Ranking the hashtags and building a simple real-time display
Once we have the PySpark DataFrame, the simplest way to display it is to convert the data to a Pandas DataFrame and let Pandas + Matplotlib draw the chart. We want the chart to keep updating at the interval we set, so the code looks roughly like this.
End of explanation
"""
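Not covered in the original post: after the monitoring loop ends, the streaming query keeps running in the background. A minimal, hypothetical cleanup cell (assuming the query and spark objects created above) could look like this:
# Hypothetical cleanup sketch (not in the original notebook): stop the streaming
# query and release the SparkSession once we are done monitoring the hashtags.
print(query.status)  # current state of the streaming query
query.stop()         # stop updating the in-memory "topTagsTable"
spark.stop()         # shut down the SparkSession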
computational-class/cjc2016
code/tba/Introduction-to-Non-Personalized-Recommenders.ipynb
mit
from IPython.core.display import Image Image(filename='/Users/chengjun/GitHub/cjc2016/figure/recsys_arch.png') """ Explanation: Introduction to Non-Personalized Recommenders The recommendation problem Recommenders have been around since at least 1992. Today we see different flavours of recommenders, deployed across different verticals: Amazon Netflix Facebook Last.fm. What exactly do they do? Definitions from the literature In a typical recommender system people provide recommendations as inputs, which the system then aggregates and directs to appropriate recipients. -- Resnick and Varian, 1997 Collaborative filtering simply means that people collaborate to help one another perform filtering by recording their reactions to documents they read. -- Goldberg et al, 1992 In its most common formulation, the recommendation problem is reduced to the problem of estimating ratings for the items that have not been seen by a user. Intuitively, this estimation is usually based on the ratings given by this user to other items and on some other information [...] Once we can estimate ratings for the yet unrated items, we can recommend to the user the item(s) with the highest estimated rating(s). -- Adomavicius and Tuzhilin, 2005 Driven by computer algorithms, recommenders help consumers by selecting products they will probably like and might buy based on their browsing, searches, purchases, and preferences. -- Konstan and Riedl, 2012 Notation $U$ is the set of users in our domain. Its size is $|U|$. $I$ is the set of items in our domain. Its size is $|I|$. $I(u)$ is the set of items that user $u$ has rated. $-I(u)$ is the complement of $I(u)$ i.e., the set of items not yet seen by user $u$. $U(i)$ is the set of users that have rated item $i$. $-U(i)$ is the complement of $U(i)$. Goal of a recommendation system $ \newcommand{\argmax}{\mathop{\rm argmax}\nolimits} \forall{u \in U},\; i^* = \argmax_{i \in -I(u)} [S(u,i)] $ Problem statement The recommendation problem in its most basic form is quite simple to define: |-------------------+-----+-----+-----+-----+-----| | user_id, movie_id | m_1 | m_2 | m_3 | m_4 | m_5 | |-------------------+-----+-----+-----+-----+-----| | u_1 | ? | ? | 4 | ? | 1 | |-------------------+-----+-----+-----+-----+-----| | u_2 | 3 | ? | ? | 2 | 2 | |-------------------+-----+-----+-----+-----+-----| | u_3 | 3 | ? | ? | ? | ? | |-------------------+-----+-----+-----+-----+-----| | u_4 | ? | 1 | 2 | 1 | 1 | |-------------------+-----+-----+-----+-----+-----| | u_5 | ? | ? | ? | ? | ? | |-------------------+-----+-----+-----+-----+-----| | u_6 | 2 | ? | 2 | ? | ? | |-------------------+-----+-----+-----+-----+-----| | u_7 | ? | ? | ? | ? | ? | |-------------------+-----+-----+-----+-----+-----| | u_8 | 3 | 1 | 5 | ? | ? | |-------------------+-----+-----+-----+-----+-----| | u_9 | ? | ? | ? | ? | 2 | |-------------------+-----+-----+-----+-----+-----| Given a partially filled matrix of ratings ($|U|x|I|$), estimate the missing values. Challenges Availability of item metadata Content-based techniques are limited by the amount of metadata that is available to describe an item. There are domains in which feature extraction methods are expensive or time consuming, e.g., processing multimedia data such as graphics, audio/video streams. In the context of grocery items for example, it's often the case that item information is only partial or completely missing. 
Examples include: Ingredients Nutrition facts Brand Description County of origin New user problem A user has to have rated a sufficient number of items before a recommender system can have a good idea of what their preferences are. In a content-based system, the aggregation function needs ratings to aggregate. New item problem Collaborative filters rely on an item being rated by many users to compute aggregates of those ratings. Think of this as the exact counterpart of the new user problem for content-based systems. Data sparsity When looking at the more general versions of content-based and collaborative systems, the success of the recommender system depends on the availability of a critical mass of user/item iteractions. We get a first glance at the data sparsity problem by quantifying the ratio of existing ratings vs $|U|x|I|$. A highly sparse matrix of interactions makes it difficult to compute similarities between users and items. As an example, for a user whose tastes are unusual compared to the rest of the population, there will not be any other users who are particularly similar, leading to poor recommendations. Flow chart: the big picture End of explanation """ import pandas as pd unames = ['user_id', 'username'] users = pd.read_table('/Users/chengjun/GitHub/cjc2016/data/users_set.dat', sep='|', header=None, names=unames) rnames = ['user_id', 'course_id', 'rating'] ratings = pd.read_table('/Users/chengjun/GitHub/cjc2016/data/ratings.dat', sep='|', header=None, names=rnames) mnames = ['course_id', 'title', 'avg_rating', 'workload', 'university', 'difficulty', 'provider'] courses = pd.read_table('/Users/chengjun/GitHub/cjc2016/data/cursos.dat', sep='|', header=None, names=mnames) # show how one of them looks ratings.head(10) # show how one of them looks users[:5] courses[:5] """ Explanation: The CourseTalk dataset: loading and first look Loading of the CourseTalk database. The CourseTalk data is spread across three files. Using the pd.read_table method we load each file: End of explanation """ coursetalk = pd.merge(pd.merge(ratings, courses), users) coursetalk coursetalk.ix[0] """ Explanation: Using pd.merge we get it all into one big DataFrame. End of explanation """ dir(pivot_table) from pandas import pivot_table mean_ratings = pivot_table(coursetalk, values = 'rating', columns='provider', aggfunc='mean') mean_ratings.order(ascending=False) """ Explanation: Collaborative filtering: generalizations of the aggregation function Non-personalized recommendations Groupby The idea of groupby is that of split-apply-combine: split data in an object according to a given key; apply a function to each subset; combine results into a new object. 
To get mean course ratings grouped by the provider, we can use the pivot_table method: End of explanation """ ratings_by_title = coursetalk.groupby('title').size() ratings_by_title[:10] active_titles = ratings_by_title.index[ratings_by_title >= 20] active_titles[:10] """ Explanation: Now let's filter down to courses that received at least 20 ratings (a completely arbitrary number); To do this, I group the data by course_id and use size() to get a Series of group sizes for each title: End of explanation """ mean_ratings = coursetalk.pivot_table('rating', columns='title', aggfunc='mean') mean_ratings """ Explanation: The index of titles receiving at least 20 ratings can then be used to select rows from mean_ratings above: End of explanation """ mean_ratings.ix[active_titles].order(ascending=False) """ Explanation: By computing the mean rating for each course, we will order with the highest rating listed first. End of explanation """ mean_ratings = coursetalk.pivot_table('rating', index='title',columns='provider', aggfunc='mean') mean_ratings[:10] mean_ratings['coursera'][active_titles].order(ascending=False)[:10] """ Explanation: To see the top courses among Coursera students, we can sort by the 'Coursera' column in descending order: End of explanation """ # transform the ratings frame into a ratings matrix ratings_mtx_df = coursetalk.pivot_table(values='rating', index='user_id', columns='title') ratings_mtx_df.ix[ratings_mtx_df.index[:15], ratings_mtx_df.columns[:15]] """ Explanation: Now, let's go further! How about rank the courses with the highest percentage of ratings that are 4 or higher ? % of ratings 4+ Let's start with a simple pivoting example that does not involve any aggregation. We can extract a ratings matrix as follows: End of explanation """ ratings_gte_4 = ratings_mtx_df[ratings_mtx_df>=4.0] # with an integer axis index only label-based indexing is possible ratings_gte_4.ix[ratings_gte_4.index[:15], ratings_gte_4.columns[:15]] """ Explanation: Let's extract only the rating that are 4 or higher. End of explanation """ ratings_gte_4_pd = pd.DataFrame({'total': ratings_mtx_df.count(), 'gte_4': ratings_gte_4.count()}) ratings_gte_4_pd.head(10) ratings_gte_4_pd['gte_4_ratio'] = (ratings_gte_4_pd['gte_4'] * 1.0)/ ratings_gte_4_pd.total ratings_gte_4_pd.head(10) ranking = [(title,total,gte_4, score) for title, total, gte_4, score in ratings_gte_4_pd.itertuples()] for title, total, gte_4, score in sorted(ranking, key=lambda x: (x[3], x[2], x[1]) , reverse=True)[:10]: print title, total, gte_4, score """ Explanation: Now picking the number of total ratings for each course and the count of ratings 4+ , we can merge them into one DataFrame. End of explanation """ ratings_by_title = coursetalk.groupby('title').size() ratings_by_title.order(ascending=False)[:10] """ Explanation: Let's now go easy. Let's count the number of ratings for each course, and order with the most number of ratings. End of explanation """ for title, total, gte_4, score in sorted(ranking, key=lambda x: (x[2], x[3], x[1]) , reverse=True)[:10]: print title, total, gte_4, score """ Explanation: Considering this information we can sort by the most rated ones with highest percentage of 4+ ratings. 
End of explanation """ course_users = coursetalk.pivot_table('rating', index='title', columns='user_id') course_users.ix[course_users.index[:15], course_users.columns[:15]] """ Explanation: Finally using the formula above that we learned, let's find out what the courses that most often occur wit the popular MOOC An introduction to Interactive Programming with Python by using the method "x + y/ x" . For each course, calculate the percentage of Programming with python raters who also rated that course. Order with the highest percentage first, and voilá we have the top 5 moocs. End of explanation """ ratings_by_course = coursetalk[coursetalk.title == 'An Introduction to Interactive Programming in Python'] ratings_by_course.set_index('user_id', inplace=True) """ Explanation: First, let's get only the users that rated the course An Introduction to Interactive Programming in Python End of explanation """ their_ids = ratings_by_course.index their_ratings = course_users[their_ids] course_users[their_ids].ix[course_users[their_ids].index[:15], course_users[their_ids].columns[:15]] """ Explanation: Now, for all other courses let's filter out only the ratings from users that rated the Python course. End of explanation """ course_count = their_ratings.ix['An Introduction to Interactive Programming in Python'].count() sims = their_ratings.apply(lambda profile: profile.count() / float(course_count) , axis=1) """ Explanation: By applying the division: number of ratings who rated Python Course and the given course / total of ratings who rated the Python Course we have our percentage. End of explanation """ sims.order(ascending=False)[1:][:10] """ Explanation: Ordering by the score, highest first excepts the first one which contains the course itself. End of explanation """
flohorovicic/pynoddy
docs/notebooks/1-Simulation.ipynb
gpl-2.0
from IPython.core.display import HTML css_file = 'pynoddy.css' HTML(open(css_file, "r").read()) %matplotlib inline # Basic settings import sys, os import subprocess sys.path.append("../..") # Now import pynoddy import pynoddy import importlib importlib.reload(pynoddy) import pynoddy.output import pynoddy.history # determine path of repository to set paths corretly below repo_path = os.path.realpath('../..') """ Explanation: Simulation of a Noddy history and visualisation of output This example shows how the module pynoddy.history can be used to compute the model, and how simple visualisations can be generated with pynoddy.output. End of explanation """ # Change to sandbox directory to store results os.chdir(os.path.join(repo_path, 'sandbox')) # Path to exmaple directory in this repository example_directory = os.path.join(repo_path,'examples') # Compute noddy model for history file history_file = 'simple_two_faults.his' history = os.path.join(example_directory, history_file) output_name = 'noddy_out' # call Noddy # NOTE: Make sure that the noddy executable is accessible in the system!! print(subprocess.Popen(['noddy.exe', history, output_name, 'BLOCK'], shell=False, stderr=subprocess.PIPE, stdout=subprocess.PIPE).stdout.read()) # """ Explanation: Compute the model The simplest way to perform the Noddy simulation through Python is simply to call the executable. One way that should be fairly platform independent is to use Python's own subprocess module: End of explanation """ pynoddy.compute_model(history, output_name) """ Explanation: For convenience, the model computation is wrapped into a Python function in pynoddy: End of explanation """ N1 = pynoddy.output.NoddyOutput(output_name) """ Explanation: Note: The Noddy call from Python is, to date, calling Noddy through the subprocess function. In a future implementation, this call could be substituted with a full wrapper for the C-functions written in Python. Therefore, using the member function compute_model is not only easier, but also the more "future-proof" way to compute the Noddy model. Loading Noddy output files Noddy simulations produce a variety of different output files, depending on the type of simulation. The basic output is the geological model. Additional output files can contain geophysical responses, etc. Loading the output files is simplified with a class class container that reads all relevant information and provides simple methods for plotting, model analysis, and export. To load the output information into a Python object: End of explanation """ print("The model has an extent of %.0f m in x-direction, with %d cells of width %.0f m" % (N1.extent_x, N1.nx, N1.delx)) """ Explanation: The object contains the calculated geology blocks and some additional information on grid spacing, model extent, etc. For example: End of explanation """ N1.plot_section('y', figsize = (5,3)) """ Explanation: Plotting sections through the model The NoddyOutput class has some basic methods for the visualisation of the generated models. To plot sections through the model: End of explanation """ N1.export_to_vtk() """ Explanation: Export model to VTK A simple possibility to visualise the modeled results in 3-D is to export the model to a VTK file and then to visualise it with a VTK viewer, for example Paraview. To export the model, simply use: End of explanation """
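As a small extra check on the computed model (a sketch appended here, not part of the original notebook; it assumes the NoddyOutput object exposes the computed geology block as a NumPy array attribute named block, as in recent pynoddy versions):
import numpy as np

block = N1.block                                   # 3-D array of geology unit ids
units, counts = np.unique(block, return_counts=True)
for u, c in zip(units, counts):
    # rough volume fraction occupied by each modelled unit
    print("unit %d fills %.1f%% of the model volume" % (u, 100.0 * c / block.size))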
NekuSakuraba/my_capstone_research
subjects/diffusion maps/Diffusion Maps 02.ipynb
mit
n = 3 X, y = make_blobs(n_samples=n, cluster_std=.1, centers=[[1,1]]) X """ Explanation: Diffusion Distance <br /> A distance function between any two points based on the random walk on the graph [1]. Diffusion map <br /> Low dimensional description of the data by the first few eigenvectors [1]. End of explanation """ L = k(X, .9) """ Explanation: Define a pairwise similarity matrix between points... End of explanation """ D = diag(L) """ Explanation: and a diagonal normalization matrix $D_{i,i} = \sum_j L_{i,j}$ End of explanation """ M = inv(D).dot(L) """ Explanation: Matrix M <br /> $M = D^{-1}L$ End of explanation """ Ms = diag(D, .5).dot(M).dot(diag(D,-.5)) """ Explanation: The matrix M is adjoint to a symmetric matrix <br /> $M_s = D^{1/2}MD^{-1/2}$ M and M<sub>s</sub> share the same eigenvalues. <br /> Since M<sub>s</sub> is symmetric, it is diagonalizable and has a set of n real eigenvalues {$\lambda_{j=0}^{n-1}$} whose corresponding eigenvectors form an orthonormal basis of $\mathbf{R}^n$. <br /> The left and right eigenvectors of M, denoted $\phi_j$ and $\psi_j$ are related to those of M<sub>s</sub>. $$ \phi_j = \mathbf{v}_j D^{1/2}, \psi_j = \mathbf{v}_j D^{-1/2} $$ End of explanation """ p0 = np.eye(n) """ Explanation: Now we utilize the fact that by constrution M is a stochastic matrix End of explanation """ e = p0 for i in range(1000): e = e.dot(M) print e p1 = p0.dot(M) p1 w, v = eig(M) w = w.real print w print v # sorting the eigenvalues and vectors temp = {_:(w[_], v[:,_]) for _ in range(len(w))} w = [] v = [] for _ in sorted(temp.items(), key=lambda x:x[1], reverse=True): w.append(_[1][0]) v.append(_[1][1]) w = np.array(w) v = np.array(v).T print w print v psi = v / v[:,0] print psi """ Explanation: *The stationary probability distribution $\Phi_0$ * End of explanation """ diffmap = (w.reshape(-1,1) * psi.T).T[:,1:] diffmap """ Explanation: Diffusion Map $$ \Psi_t(x) = (\lambda_1^t\psi(x), \lambda_2^t\psi(x), ..., \lambda_k^t\psi(x)) $$ End of explanation """ dt0 = pairwise_distances(diffmap)**2 dt0 """ Explanation: Diffusion Distance Defined by Euclidean distance in the diffusion map $$ D_t^2(x_0, x_1) = ||\Psi_t(x_0) - \Psi_t(x_1)||^2 $$ End of explanation """ dt = [] for i in range(n): _ = [] for j in range(n): _.append(sum((p1[i]-p1[j])**2 / v[:,0]**2)) dt.append(_) dt = np.array(dt) dt (dt0 - dt) print M M.sum(axis=1) w, v = eig(M) w = w.real print w print v p0*w[0]*v[:,0]**2 + p0*w[1]*v[:,1]**2 + p0*w[2]*v[:,2]**2 """ Explanation: Diffusion Distance [2] Defined by probability distribution on time t. $$ D_t^2(x_0, x_1) = ||p(t, y|x_0) - p(t, y|x_1)||_w^2 \ = \sum_y (p(t, y|x_0) - p(t, y|x_1))^2 w(y) $$ End of explanation """
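The pairwise similarity function k(X, eps) used at the top of this notebook is not defined in the excerpt; a common choice for diffusion maps is a Gaussian (heat) kernel on the pairwise distances. A minimal sketch, written to be self-contained and using the same pairwise_distances helper the notebook already relies on (the bandwidth eps is the tuning parameter passed as .9 above):
import numpy as np
from sklearn.metrics import pairwise_distances

def k(X, eps):
    """Gaussian kernel: L_ij = exp(-||x_i - x_j||^2 / eps)."""
    d2 = pairwise_distances(X) ** 2
    return np.exp(-d2 / eps)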
mayank-johri/LearnSeleniumUsingPython
Section 2 - Advance Python/Chapter S2.01 - Functional Programming/04_functools.ipynb
gpl-3.0
def power(base, exponent): return base ** exponent def square(base): return power(base, 2) def cube(base): return power(base, 3) """ Explanation: functools The functools module is for higher-order functions: functions that act on or return other functions. In general, any callable object can be treated as a function for the purposes of this module. Common functions in functools are as follows partial reduce partial functools.partial does the follows: Makes a new version of a function with one or more arguments already filled in. New version of a function documents itself. End of explanation """ from functools import partial square = partial(power, exponent=2) cube = partial(power, exponent=3) print(square(2)) print(cube(2)) print(square(2, exponent=4)) print(cube(2, exponent=9)) from functools import partial def multiply(x,y): return x * y # create a new function that multiplies by 2 db2 = partial(multiply,2) print(db2(4)) db4 = partial(multiply, 4) print(db4(3)) from functools import partial #---------------------------------------------------------------------- def add(x, y): """""" return x + y #---------------------------------------------------------------------- def multiply(x, y): """""" return x * y #---------------------------------------------------------------------- def run(func): """""" print (func()) #---------------------------------------------------------------------- def main(): """""" a1 = partial(add, 1, 2) m1 = partial(multiply, 5, 8) run(a1) run(m1) if __name__ == "__main__": main() def another_function(func): """ A function that accepts another function """ def wrapper(): """ A wrapping function """ val = "The result of %s is %s" % (func(), eval(func()) ) return val return wrapper #---------------------------------------------------------------------- @another_function def a_function(): """A pretty useless function""" return "1+1" #---------------------------------------------------------------------- if __name__ == "__main__": print (a_function.__name__) print (a_function.__doc__) print(a_function()) from functools import wraps #---------------------------------------------------------------------- def another_function(func): """ A function that accepts another function """ @wraps(func) def wrapper(): """ A wrapping function """ val = "The result of %s is %s" % (func(), eval(func()) ) return val return wrapper #---------------------------------------------------------------------- @another_function def a_function(): """A pretty useless function""" return "1+1" #---------------------------------------------------------------------- if __name__ == "__main__": #a_function() print (a_function.__name__) print (a_function.__doc__) print(a_function()) """ Explanation: Now lets see the magic of partial End of explanation """ import functools def myfunc1(a, b=2): print ('\tcalled myfunc1 with:', (a, b)) return def myfunc(a, b=2): """Docstring for myfunc().""" print ('\tcalled myfunc with:', (a, b)) return def show_details(name, f): """Show details of a callable object.""" print ('%s:' % name) print ('\tobject:', f) print ('\t__name__:',) try: print (f.__name__) except AttributeError: print ('(no __name__)') print ('\t__doc__', repr(f.__doc__)) print return show_details('myfunc1', myfunc1) print("~"*20) show_details('myfunc', myfunc) p1 = functools.partial(myfunc, b=4) print("+"*20) show_details('raw wrapper', p1) print("^"*20) print ('Updating wrapper:') print ('\tassign:', functools.WRAPPER_ASSIGNMENTS) print ('\tupdate:', functools.WRAPPER_UPDATES) print("*"*20) 
functools.update_wrapper(p1, myfunc)
show_details('updated wrapper', p1)
""" Explanation: Here we import wraps from the functools module and use it as a decorator for the nested wrapper function inside another_function, so that the __name__ and __doc__ of the decorated function are mapped onto the wrapper function.
update_wrapper
The partial object does not have __name__ or __doc__ attributes by default, and without those attributes decorated functions are more difficult to debug. Using update_wrapper() copies or adds attributes from the original function to the partial object.
End of explanation """
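The chapter's introduction lists reduce alongside partial as one of the common functools helpers, but no example of it appears above; a minimal sketch:
from functools import reduce
from operator import add

numbers = [1, 2, 3, 4, 5]
total = reduce(add, numbers, 0)                   # ((((0+1)+2)+3)+4)+5 -> 15
product = reduce(lambda x, y: x * y, numbers, 1)  # folds the list with multiplication -> 120
print(total, product)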
Imperial-College-Data-Science-Society/Scientific-Python
notebooks/Scientific-Python.ipynb
mit
import numpy as np """ Explanation: Introduction to scientific Python Numpy Exercise difficulty rating: [1] easy. ~1 min [2] medium. ~2-3 minutes [3] difficult. Suitable for a standalone project. Numpy is imported and aliased to np by convention End of explanation """ a = np.array([0, 1, 2]) # a rank 1 array of integers """ Explanation: Array creation The basic data structure provided by Numpy is the ndarray (n-dimensional array). Each array can store items of the same datatype (e.g. integers, floats, etc.). Arrays can be created in a number of ways: from a Python iterable (usually a list) End of explanation """ a = np.arange(start=0., stop=1., step=.1) # a rank 1 array of floats # note that start is included but stop is not b = np.arange(10) # a rank 1 array of ints from 0 to 9 """ Explanation: using arange (similar to Python's builtin range but returns an array): End of explanation """ a = np.linspace(start=0, stop=5, num=10) # a rank 1 array of 10 evenly-spaced floats between 0 and 5 # this can be thought of as a linear axis of a graph with evenly-spaced ticks b = np.logspace(-6, -1, 6) # same as above but on logarithmic scale """ Explanation: using linspace and logspace End of explanation """ zeros = np.zeros(5) # rank 1 array of 5 zeros ones = np.ones(3) # it's in the name eye = np.eye(8) # 8x8 identity matrix """ Explanation: using built-in helper functions End of explanation """ # answer: x = np.linspace(0, 60, 61) y = np.logspace(0, 6, 7) """ Explanation: Exercises create a set of axes for a graph as Numpy arrays. Let the x axis be uniformly spaced from 0 to 60 with scale 1 (i.e. one tick per value) and the y axis be on log scale and represent the powers of 10 from 0 to 6 [1]. End of explanation """ a = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) print(a[0]) # prints [0, 1, 2] print(a[0, 0]) # prints 0 print(a[:, 1]) # prints [1, 4, 7] """ Explanation: Array indexing and shapes Basic array indexing is similar to Python lists. However, arrays can be indexed along multiple dimensions. End of explanation """ a = np.linspace(0, 9, 10) print(a[0:5]) # prints [0., 1., 2., 3., 4.] b = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11]]) print('-'*12) print(b[:, 0:2]) # prints [[0, 1] # [3, 4] # [6, 7] # [9, 10]] """ Explanation: Here : denotes all elements along the axis. For a 2D array (i.e. a matrix) axis 0 is along rows and axis 1 along columns. Slicing Arrays support slicing along each axis, which works very similarly to Python lists. End of explanation """ a = np.arange(10) print(a[a < 5]) # prints [0, 1, 2, 3, 4] print(a[a % 2 == 0]) # prints [0, 2, 4, 6, 8] """ Explanation: Boolean indexing Arrays can also be indexed using boolean values End of explanation """ a = np.array([0, 1, 2]) print(a.shape) # prints 3 b = np.array([[[0], [1], [2]], [[3], [4], [5]], [[6], [7], [8]]]) print(b.shape) # prints (3, 3, 1) """ Explanation: Shapes The shape of the array is a tuple of size (array.ndim), where each element is the size of the array along that axis. End of explanation """ a = np.arange(9).reshape((3, 3)) print(a) # prints [[0 1 2] # [3 4 5] # [6 7 8]] """ Explanation: The shape of the array can be changed using the reshape method. End of explanation """ try: a = np.arange(5).reshape((3, 3)) except ValueError as e: print(e) """ Explanation: Note that the total number of elements in the new array has to be equal to the number of elements in the initial array. The code below will throw an error. 
End of explanation """ a = np.array([[23, 324, 21, 116], [ 0, 55, 232, 122], [42, 43, 44, 45], [178, 67, 567, 55]]) # answer first_column = a[:, 0] third_row = a[2] """ Explanation: Exercises get the first column and the third row of the following array [1]. End of explanation """ # answer a = np.arange(0, 51) divisible_by_7 = a[a % 7 == 0] """ Explanation: find all numbers divisible by 7 between 0 and 50 [1]. End of explanation """ a = np.arange(9).reshape(3, 3) b = np.array([10, 11, 12]) print(a + 1) # operation is performed on the whole array print('-'*12) print(a - 5) print('-'*12) print(b * 3) print('-'*12) print(b / 2) """ Explanation: Mathematical operations and broadcasting Basic arithmetics Numpy arrays support the standard mathematical operations. End of explanation """ print(a * a) # multiply each element of a by itself print('-'*12) print(a + b) # add b elementwise to every row of a print('-'*12) print(a[:, 0] + b) # add b only to the first column of a """ Explanation: Broadcasting Operations involving 2 or more arrays are also possible. However, they must obey the rules of broadcasting. End of explanation """ a = np.random.normal(0, 1, (4, 3)) # np.random.normal creates an array of normally-distributed random numbers # with given mean and standard deviation (e.g. 0 and 1) and in given shape. print(np.mean(a)) # a number close to 0 print('-'*12) print(np.mean(a) == a.mean()) # many functions are also implemented as methods in the ndarray class print('-'*12) print(np.std(a)) # close to 1 print('-'*12) print(np.sum(a)) # Compute sum of all elements print('-'*12) print(np.sum(a, axis=0)) # Compute sum of each column print('-'*12) print(np.sum(a, axis=1)) # Compute sum of each row """ Explanation: Notice that in the last 2 examples above we could perform the operations even though the arrays did not have the same shape. This is one of the most powerful features of Numpy that allows for very efficient computations. You can read more about broadcasting in the official documentation and here. Built-in functions Numpy has efficient implementations of many standard mathematical and statistical functions. End of explanation """ # answer a = np.random.normal(5, 10, (6, 6)) print(np.mean(a)) print(a.std()) """ Explanation: Exercises create a 6x6 array of normally distributed random numbers with mean 5 and standard deviation 10. Print its computed mean and standard deviation [1]. End of explanation """ # answer a += 5 """ Explanation: add 5 to every element of the array from the previous exercise [1]. End of explanation """ b = np.array([0, 1, 0, 1, 0, 1]) # answer a[:, 2] *= b """ Explanation: multiply the third column of the array from the previous exercise by the given array [1]. End of explanation """ # answer a = np.eye(7) * 7 """ Explanation: create a 7x7 matrix with 7 on the leading diagonal and 0 everywhere else (note: you might find the eye function useful) [1]. End of explanation """ # person 1 2 3 4 # month spending = np.array([[450.55, 340.67, 1023.98, 765.30], # 1 [430.46, 315.99, 998.48, 760.78], # 2 [470.30, 320.34, 1013.67, 774.50], # 3 [445.62, 400.60, 1020.20, 799.45], # 4 [432.01, 330.13, 1011.76, 750.91]]) # 5 # answer total_person_2 = np.sum(spending[:, 1]) mean_all = np.mean(spending, axis=0) """ Explanation: the following array represents the spending, in pounds of 4 people over 5 months (rows represent time and columns the individuals). Compute the total spending of person 2. 
Compute the average spending in each month for each person (note: use the axis argument) [1]. End of explanation """ v = np.array([1,2,3]) w = np.array([4,5]) # answer outer = np.reshape(v, (3, 1)) * w """ Explanation: the outer product of two vectors u and v can be obtained by multiplying each element of u by each element of v. Compute the outer product of the given vectors (note: use the reshape function) [2]. End of explanation """ u = np.array([4, 2, 5]) # a row vector in R^3, shape (3,) v = np.array([.5, .3, .87]) a = np.array([[3, -2, 1], [9, 6, 10], [6, -4, 3]]) # a 3x3 matrix b = np.array([[.25, .2], [4.3, .1], [ 1., .82]]) # a 3x2 matrix print(u.dot(v)) # the dot product of vectors print('-'*12) print(a.dot(u)) # the matrix vector product print('-'*12) print(a.dot(b)) # matrix multiplication print('-'*12) print(np.linalg.norm(u)) # norm aka magnitude of a vector print('-'*12) print(np.outer(u, v)) # uv^T print('-'*12) print(np.transpose(b)) # equivalent to b.T print('-'*12) print(u.T) # note that this does not turn a row vector into column vector # i.e. the shape of u.T is still (3,) print('-'*12) print(np.linalg.inv(a)) # find the inverse of a matrix, a^-1 print('-'*12) print(np.linalg.solve(a, u)) # solve the linear system ax = u for x """ Explanation: Linear algebra Numpy has extensive support for linear algebra operations. End of explanation """ u = np.array([.4, .23, .01]) v = np.array([.12, 1.1, .5]) # answer def dist(u, v): return np.sqrt(np.sum((u - v) ** 2)) # or def dist(u, v): return np.linalg.norm(u - v) """ Explanation: Exercises write a function to compute the Euclidean distance between two vectors. Evaluate it on the given data [1]. End of explanation """ def f_true(x): return 2 * x + 5 def f(x): return f_true(x) + np.random.normal(0, 1, xx.shape[0]) xx = np.linspace(-10, 10) X = np.ones((xx.shape[0], 2)) X[:, 1] = xx y = f(xx) test = np.array([-5.5, 1.2, 4.8, 9.]) # answer W = (np.linalg.inv(X.T.dot(X)).dot(X.T)).dot(y) # or W = np.linalg.pinv(X).dot(y) y_hat = test * W[1] + W[0] """ Explanation: the closed-form solution for the linear regression problem can be written as $W = (X^{T}X)^{-1}X^{T}y$ Compute the weight matrix $W$ for the given data. Check if your weight and bias are close to the true values (2, 5). Compute the estimated values $\hat{y}$ for the test dataset [2]. End of explanation """ # a non-vectorized function def log_pos_p1(x): """Compute the natural log + 1 of positive elements in x. Return 0 for elements < 0.""" result = np.zeros_like(x) # create an array of zeros with the same shape and type as x for i, e in np.ndenumerate(x): # ndenumerate returns tuples (index, element) if e > 0: result[i] = np.log(e) + 1 return result def log_pos_p1_vectorized(x): # same as above but faster result = np.zeros_like(x) result[x > 0] = np.log(x[x > 0]) + 1 return result x = np.arange(-1000., 1000.) %timeit log_pos_p1(x) %timeit log_pos_p1_vectorized(x) # check if the results are correct result_slow = log_pos_p1(x) result_vectorized = log_pos_p1_vectorized(x) np.allclose(result_slow, result_vectorized) """ Explanation: Vectorized functions and speed The functions in Numpy, including the mathematical operators are implemented in efficient C code operating on whole arrays. Therfore, it is usually a good idea to avoid element-by-element computations in a for or while loop. Functions that operate on whole arrays are referred to as vectorized. 
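As a further quick illustration of that speed gap (an added example; exact timings will vary by machine), compare NumPy's vectorized reduction against a Python-level loop over the same data:
big = np.arange(1_000_000, dtype=float)
%timeit big.sum()          # vectorized reduction implemented in C
%timeit sum(big)           # Python-level loop over a million elements, far slower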
End of explanation """ def relu(x): result = np.zeros_like(x) # create an array of 0s with the same shape and type as x for i, e in np.ndenumerate(x): # ndenumerate returns tuples (index, element) if e > 0: result[i] = e return result x = np.arange(-10000., 10000.) # answer def relu_vectorized(x): result = np.zeros_like(x) result[x > 0] = x[x > 0] return result # or def relu_vectorized(x): return np.where(x > 0, x, 0) # np.where returns elements from the first array if condition is true, # second otherwise """ Explanation: Exercises the rectified linear unit (aka ReLU) is a commonly used activation function in machine learning. It can be computed using the following Python function: python def relu(x): result = np.zeros_like(x) # create an array of 0s with the same shape and type as x for i, e in np.ndenumerate(x): # ndenumerate returns tuples (index, element) if x &gt; 0: result[i] = e return result Vectorize this function. Evaluate the running time using the %timeit command on the data below [2]. End of explanation """ def f(x): return x ** 3 a = 0 b = 1 n = 1000 # answer def trapz(f, a, b, n): h = (b - a) / n x = np.linspace(1., n-1, n) return (h/2) * f(a) + (h/2) * f(b) + h * np.sum(f(a + x * h)) """ Explanation: the integral of a function can be numerically aproximated using the trapezoidal rule, defined as: $\displaystyle\int_{a}^{b}f(x)dx \approx \frac{h}{2}f(a) + \frac{h}{2}f(b) + h\displaystyle\sum_{i=1}^{n-1}f(a + ih), \space h = \frac{b - a}{n}$ Write a vectorized function to integrate a function using the trapezoidal rule. It should accept as arguments the function to integrate, the lower and upper bounds and the number of iterations $n$. Do not use an explicit for-loop for the summation. Use your function to intergrate $x^3$ from 0 to 1 using $n=1000$ (the true value of this integral is $\frac{1}{4}$) [2]. End of explanation """ # show plots without need of calling `.show()` %matplotlib inline # scientific computing library import numpy as np # visualization tools import matplotlib.pyplot as plt import seaborn as sns # prettify plots plt.rcParams['figure.figsize'] = [20.0, 5.0] sns.set_palette(sns.color_palette("muted")) sns.set_style("ticks") # supress warnings import warnings warnings.filterwarnings('ignore') # set random seed np.random.seed(0) """ Explanation: Challenges Write a simple gradient descent optimizer using numpy [3]. Use it to fit a linear regression model to the sample data from the previous exercises [2]. You might find the notebook from the first workshop on linear models useful. Write a function to compute the elements of the Mandelbrot set with given resolution [3]. Estimate the area of the computed set [2]. Make the function as fast as possible using vectorization [3]. Logistic regression is a popular model used to estimate the probability of a binary outcome (i.e. 0 or 1). It belongs to a broader class of generalized linear models. Write a logistic regression solver using Numpy [3]. You can adopt multiple approaches, e.g. gradient descent from the first challenge or algorithms like Newton's method or iteratively-reweighted least squares (Wikipedia is a good place to start). Test your implementation on the classic Titanic dataset or any other binary classification problem. Compare the performance of your model to a standard implementation (e.g. scikit-learn) using a metric of your choice. Compare the speed of your method to the reference implementation and try to make it as fast as possible (using vectorization, faster algorithms, etc.) 
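A minimal sketch for the first challenge above (plain batch gradient descent on the X, y arrays from the earlier linear-regression exercise; the learning rate and iteration count are arbitrary choices, not tuned values):
def gradient_descent(X, y, lr=0.01, n_iter=2000):
    """Minimize mean squared error by batch gradient descent."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(n_iter):
        grad = X.T.dot(X.dot(w) - y) / n   # gradient of the MSE with respect to w
        w -= lr * grad
    return w

w_gd = gradient_descent(X, y)
print(w_gd)   # should be close to the closed-form result (bias ~5, slope ~2)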
Matplotlib Visualization As its name conveys, matplotlib is plotting library inspired by the MATLAB plotting API.<br> seaborn is a wrapper library for matplotlib, encapsulating some of the low-level functionalities of it.<br><br> This is a beginers-level tutorial to matplotlib using seaborn palettes for "prettifying" the plots. Similar results can be replicated by just using matplotlib. inline mode When working on an interactive environment, such as IPython (and Jupyter), there is the option of draw plots without calling the .show() method, which is necessary for script mode. To do so, use the oneliner below. End of explanation """ # x-axis variable x = np.linspace(-2*np.pi, 2*np.pi) # y-axis variable y = np.cos(x) # making sure that the length of the two variables match assert(len(x) == len(y)) # visualize plt.plot(x, y); # WARNING: Don't forget this line in `script` mode # plt.show() """ Explanation: Simple Plot How to plot a 2D plot of two variables $x$ and $y$.<br><br> Warning<br> The length of the two variables should match, such that len(x) == len(y). Single End of explanation """ # x-axis variable x = np.linspace(-2*np.pi, 2*np.pi) # y-axis variable y_cos = np.cos(x) y_sin = np.sin(x) # visualize plt.plot(x, y_cos) plt.plot(x, y_sin); # WARNING: Don't forget this line in `script` mode # plt.show() """ Explanation: Multiple Plot two functions $f$ and $g$ against the independent variable $x$ on the same graph. End of explanation """ # x-axis variable x = np.linspace(-2*np.pi, 2*np.pi) # y-axis variable y_cos = np.cos(x) y_sin = np.sin(x) # visualize plt.plot(x, y_cos, label="cos(x)") # `label` argument is used in the plot legend to represent this curve plt.plot(x, y_sin, label="sin(x)") # meta data plt.title("Simple Plot") plt.xlabel("independent variable") plt.ylabel("function values") plt.legend(); # WARNING: Don't forget this line in `script` mode # plt.show() """ Explanation: Metadata Provide "metadata" for our plot, such as: * title * axes labels * legend End of explanation """ # x-axis variable x = np.linspace(-2*np.pi, 2*np.pi) # y-axis variable y_cos = np.cos(x) y_sin = np.sin(x) y_mix = y_cos + y_sin # visualize plt.plot(x, y_cos, linestyle="-", marker="o", label="cos(x)", linewidth=1) plt.plot(x, y_sin, linestyle="--", marker="x", label="sin(x)", linewidth=3) plt.plot(x, y_mix, linestyle="-.", marker="*", label="cos(x) + sin(x)", color='orange') # meta data plt.title("Simple Plot") plt.xlabel("independent variable") plt.ylabel("function values") plt.legend(); # WARNING: Don't forget this line in `script` mode # plt.show() """ Explanation: Styling Advanced styling and formating options are available, such as: * marker style * line style * color * linewidth End of explanation """ ## SOLUTION # activation function definitions def sigmoid(z): return 1/(1 + np.exp(-z)) # x-axis variable x = np.linspace(-10, 10) # y-axis variable y_sigm = sigmoid(x) y_tanh = np.tanh(x) # visualize plt.plot(x, y_sigm, linestyle="-", marker="o", label="sigm(x)", linewidth=1) plt.plot(x, y_tanh, linestyle="--", marker="x", label="tanh(x)", linewidth=3) # meta data plt.title("Activation Functions Plots") plt.xlabel("independent variable") plt.ylabel("function values") plt.legend(); # WARNING: Don't forget this line in `script` mode # plt.show() """ Explanation: Task (Activation Functions Plots) In neural networks, various activation functions are used, some of with are illustrated below! 
TODO<br> Choose 2 activation functions, * implement them (as we did for $cos(x)$ and $sin(x)$ functions) * plot them in the same graph, using plt.plot Make sure you provide a title, axes label and a legend! Don't forget to put your personal tone to this task and style the plot!! End of explanation """ # uniform random variable u = np.random.uniform(-1, 1, 1000) # visualize plt.hist(u, label="Uniform Distribution") # metadata plt.title("Matplotlib Histogram") plt.xlabel("random variable (binned)") plt.ylabel("frequency") plt.legend(); # WARNING: Don't forget this line in `script` mode # plt.show() """ Explanation: Histogram In the ICDSS Machine Learning Workshop: Linear Models, we plotted the histograms of the daily returns for three istruments (AAPL, GOOG, SPY) in order to get an idea of the underlying distribution of our stochastic processes. This is not the only user-case!<br><br> When working with random variables, in a stochastic setting, histograms are a very representative way to visualize the random variables PDFs.<br><br> Warning<br> Histograms are visualizing the distribution of a single random variable, so make sure you notice the difference between: * plt.plot(x, y) <br> <br> * plt.hist(z) plt.hist End of explanation """ # uniform random variable u = np.random.uniform(-1, 1, 1000) # visualize sns.distplot(u, label="Uniform Distribution") # metadata plt.title("Seaborn Distribution Plot") plt.xlabel("random variable (binned)") plt.ylabel("frequency") plt.legend(); # WARNING: Don't forget this line in `script` mode # plt.show() """ Explanation: sns.distplot seaborn wraps plt.hist method and applies also some KDE (Kernel Distribution Estimation), providing a better insight into the distribution of the random variable. End of explanation """ ## SOLUTION # guassian normal random variable z = np.random.normal(0, 1, 1000) p = np.random.poisson(5, 1000) # visualize sns.distplot(z, label="Gaussian Normal Distribution") sns.distplot(p, label="Poisson Distribution") # metadata plt.title("Stochastic Processes Distribution Plots") plt.xlabel("random variable (binned)") plt.ylabel("frequency") plt.legend(); # WARNING: Don't forget this line in `script` mode # plt.show() """ Explanation: Task (Stochastic Processes Distributions Plots) We covered only the uniform distribution, but there are many more already implemented in NumPy. Generate similar plots for other distributions of your choice. TODO<br> Choose 2 distributions, * implement them (as we did for uniform distribution) * plot them in the same graph, using plt.hist or sns.distplot Make sure you provide a title, axes label and a legend! Don't forget to put your personal tone to this task and style the plot!! 
End of explanation """ # scientific computing library import numpy as np # optimization package from scipy.optimize import minimize # visualization tools import matplotlib.pyplot as plt import seaborn as sns # show plots without need of calling `.show()` %matplotlib inline # prettify plots plt.rcParams['figure.figsize'] = [20.0, 5.0] sns.set_palette(sns.color_palette("muted")) sns.set_style("ticks") # supress warnings import warnings warnings.filterwarnings('ignore') # set random seed np.random.seed(0) """ Explanation: SciPy Optimization End of explanation """ # function definition def f(x): return x**2 - 2*x + 5*np.sin(x) """ `minimize(fun, x0, method, tol)`: Parameters ---------- fun: callable Objective function x0: numpy.ndarray Initial guess str: string or callable Type of solver, consult [docs](https://docs.scipy.org/doc/scipy-0.19.1/reference/generated/scipy.optimize.minimize.html) tol: float Tolerance for termination Returns ------- result: OptimizeResult x: numpy.ndarray Minimum success: bool Status of optomization message: string Description of cause of termination """ result = minimize(fun=f, x0=np.random.randn()) # report print("Minimizer:", result.x) print("Message:", result.message) print("Success:", result.success) # visualization t = np.linspace(-5, 5) plt.plot(t, f(t), label='Objective Function') plt.plot(result.x, f(result.x), 'ro', label='Global Minimum') plt.legend(); """ Explanation: Find global minimizer of function $f$: $$f(x) = x^{2} - 2x + 5sin(x), \quad x \in \mathcal{R}$$ End of explanation """ ## SOLUTION def h(x): return 2 - 9*x + x**2 _result = minimize(fun=h, x0=np.random.randn()) # report print("Minimizer:", _result.x) print("Message:", _result.message) print("Success:", _result.success) # visualization _t = np.linspace(-2, 10) plt.plot(_t, h(_t), label='Objective Function') plt.plot(_result.x, h(_result.x), 'ro', label='Global Minimum') plt.legend(); """ Explanation: Task (Polynomial Optimization) Convex optimization is a useful tool that can be applied to various objective functions. TODO<br> Use scipy.optimize.minimize function to minimize $f(x) = 2 - 9x + x^2, x \in \mathcal{R}$. 
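As a quick sanity check on this task (an added note, not part of the original notebook): setting the derivative $-9 + 2x$ to zero gives the analytic minimizer $x = 4.5$, which the numerical result should reproduce.
x_star = 9 / 2                 # analytic minimizer of 2 - 9x + x^2
print(x_star, h(x_star))       # 4.5 -18.25, to compare with _result.x above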
End of explanation """ # target function def g_true(x): return 7*x - 3 """ Explanation: Linear Regression Find a function $f: \mathcal{X} \rightarrow \mathcal{Y}$, such that: $$f(\mathbf{X}) = \mathbf{X} * w \approx \mathbf{y}$$ True target function: $$g_{true}(x) = 7x - 3, \quad x \in \mathcal{R}$$ End of explanation """ # observation function def g(x): return g_true(x) + np.random.normal(0, 1, len(x)) """ Explanation: Observed target function: $$g(x) = 7x - 3 + \epsilon, \quad x \in \mathcal{R} \text{ and } \epsilon \sim \mathcal{N}(0,1)$$ End of explanation """ x = np.linspace(-10, 10) y = g(x) """ Explanation: Dataset End of explanation """ # add a columns of ones in x # x -> X # [1] -> [1, 1] # [3] -> [1, 3] # [7] -> [1, 7] X = np.ones((len(x), 2)) X[:, 1] = x """ Explanation: Preprocessing End of explanation """ def single_add(x, y, z): # do stuff with x, y, z return x + y + z print("Single Addition: 1 + 2 + 3 =", single_add(1,2,3)) def high_order_add(x, y): def cyrrying_add(z): return x + y + z return cyrrying_add print("Cyrrying Addition: 1 + 2 + 3 =", high_order_add(1,2)(3)) """ Explanation: Functioncal Programming Note Use high-order functions to abstract calculations End of explanation """ # high-order function def loss(X, y): # mean squared error loss function def _mse(w): assert(len(X) == len(y)) k = len(X) sum = 0 for (xi, yi) in zip(X, y): sum += (yi - np.dot(xi, w))**2 return sum/(2*k) return _mse result = minimize(loss(X, y), [0,0]) w = result.x print("Optimal weights:", w) print("True weights:", [-3, 7]) plt.title("Linear Model") plt.xlabel("x") plt.ylabel("y") plt.plot(x, y, 'o', label='Observations') plt.plot(x, np.dot(X, w), '--', label='Linear Model') plt.plot(x, g_true(x), label='True Target', alpha=0.5) plt.legend(); """ Explanation: Mean Squared Error (L2-Norm Error) $$MSE(\mathbf{X}, \mathbf{y}, w) = \frac{1}{2k} \sum_{i=1}^{k} (y_{i} - w_{i} * x_{i})^{2}$$ End of explanation """ from mpl_toolkits.mplot3d import Axes3D __loss = loss(X, y) b, m = np.meshgrid(np.linspace(-10, 30, 50), np.linspace(-10, 30, 50)) zs = np.array([__loss([_b, _m]) for _b, _m in zip(np.ravel(b), np.ravel(m))]) Z = zs.reshape(m.shape) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.plot_surface(m, b, Z, rstride=1, cstride=1, alpha=0.5) ax.scatter(w[1], w[0], __loss(w), color='red') ax.set_title('Optimization Surface') ax.set_xlabel('$w_{0}$') ax.set_ylabel('$w_{1}$') ax.set_zlabel('error'); """ Explanation: 3D Optimization Surface End of explanation """ ## SOLUTION # high-order function def _loss(X, y): # mean squared error loss function def _l1(w): assert(len(X) == len(y)) k = len(X) sum = 0 for (xi, yi) in zip(X, y): sum += np.abs(yi - np.dot(xi, w)) return sum/(2*k) return _l1 _result = minimize(_loss(X, y), [0,0]) _w = result.x print("Optimal weights:", _w) print("True weights:", [-3, 7]) plt.title("Linear Model") plt.xlabel("x") plt.ylabel("y") plt.plot(x, y, 'o', label='Observations') plt.plot(x, np.dot(X, _w), '--', label='Linear Model') plt.plot(x, g_true(x), label='True Target', alpha=0.5) plt.legend(); """ Explanation: Task (L1-Norm Error) Mean Squared Error (MSE) is often called also L2-Norm Error, since the square (L2-Norm) of the prediction error is used. 
The L1-Norm Error term is used for the absolute value of the prediction error, such that: $$E_{L1}(\mathbf{X}, \mathbf{y}, w) = \frac{1}{2k} \sum_{i=1}^{k} |y_{i} - w_{i} * x_{i}|$$ TODO<br> Use scipy.optimize.minimize function to minimize the L1-Norm Error of the provided data (X, y).<br><br> <em>Hint</em>: Reimplement the loss function to calculate the L1-Norm Error, instead of the MSE. End of explanation """
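For completeness, a compact end-to-end version of the L1 fit (a sketch reusing the X, y data and the _loss factory defined above; note that the fitted weights here are read from the new optimization result itself):
res_l1 = minimize(_loss(X, y), np.zeros(2))
w_l1 = res_l1.x                                    # weights from the L1 objective
print(w_l1)                                        # expect values near the true [-3, 7]
plt.plot(x, y, 'o', label='Observations')
plt.plot(x, X.dot(w_l1), '--', label='L1 fit')
plt.legend();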
dietmarw/EK5312_ElectricalMachines
Chapman/Ch6-Example_6-06.ipynb
unlicense
%pylab notebook """ Explanation: Electric Machinery Fundamentals 5th edition Chapter 6 (Code examples) Example 6-6: Creates and plot of the torque-speed curve of an induction motor with a double-cage rotor design as depicted in Figure 6-29. Note: You should first click on "Cell &rarr; Run All" in order that the plots get generated. Import the PyLab namespace (provides set of useful commands and constants like Pi) End of explanation """ r1 = 0.641 # Stator resistance x1 = 0.750 # Stator reactance r2 = 0.300 # Rotor resistance for single cage motor r2i = 0.400 # Rotor resistance for inner cage of double-cage motor r2o = 3.200 # Rotor resistance for outercage of double-cage motor x2 = 0.500 # Rotor reactance for single cage motor x2i = 3.300 # Rotor reactance for inner cage of double-cage motor x2o = 0.500 # Rotor reactance for outer cage of double-cage motor xm = 26.3 # Magnetization branch reactance v_phase = 460 / sqrt(3) # Phase voltage n_sync = 1800 # Synchronous speed (r/min) w_sync = n_sync * 2*pi/60 # Synchronous speed (rad/s) """ Explanation: First, initialize the values needed in this program. End of explanation """ v_th = v_phase * ( xm / sqrt(r1**2 + (x1 + xm)**2) ) z_th = ((1j*xm) * (r1 + 1j*x1)) / (r1 + 1j*(x1 + xm)) r_th = real(z_th) x_th = imag(z_th) """ Explanation: Calculate the Thevenin voltage and impedance from Equations 7-41a: $$ V_{TH} = V_\phi \frac{X_M}{\sqrt{R_1^2 + (X_1 + X_M)^2}} $$ and 7-43: $$ Z_{TH} = \frac{jX_m (R_1 + jX_1)}{R_1 + j(X_1 + X_M)} $$ End of explanation """ s = linspace(0, 1, 50) # slip s[0] = 0.001 # avoid divide-by-zero problems nm = (1 - s) * n_sync # mechanical speed """ Explanation: Now calculate the torque-speed characteristic for many slips between 0 and 1. End of explanation """ t_ind1 = ((3 * v_th**2 * r2/s) / (w_sync * ((r_th + r2/s)**2 + (x_th + x2)**2))) """ Explanation: Calculate torque for the single-cage rotor using: $$ \tau_\text{ind} = \frac{3 V_{TH}^2 R_2 / s}{\omega_\text{sync}[(R_{TH} + R_2/s)^2 + (X_{TH} + X_2)^2]} $$ End of explanation """ y_r = 1/(r2i + 1j*s*x2i) + 1/(r2o + 1j*s*x2o) z_r = 1/y_r # Effective rotor impedance r2eff = real(z_r) # Effective rotor resistance x2eff = imag(z_r) # Effective rotor reactance """ Explanation: Calculate resistance and reactance of the double-cage rotor at this slip, and then use those values to calculate the induced torque. End of explanation """ t_ind2 = ((3 * v_th**2 * (r2eff) / s) / (w_sync * ((r_th + (r2eff)/s)**2 + (x_th + x2eff)**2))) """ Explanation: Calculate induced torque for double-cage rotor. End of explanation """ rc('text', usetex=True) # enable LaTeX commands for plot plot(nm, t_ind1,'b', nm, t_ind2,'k--', lw=2) xlabel(r'$\mathbf{n_{m}}\ [rpm]$') ylabel(r'$\mathbf{\tau_{ind}}\ [Nm]$') title ('Induction motor torque-speed characteristic') legend ((r'Single-cage design','Double-cage design'), loc = 3); grid() """ Explanation: Plot the torque-speed curve: End of explanation """
gwsb-istm-6212-fall-2016/syllabus-and-schedule
lectures/week-11/20161122-lecture-notes.ipynb
cc0-1.0
sc """ Explanation: A brief tour of Spark Apache Spark is "a fast and general engine for large-scale data processing." It comes from the broader Hadoop ecosystem but can be used in a near-standalone mode, which we'll use here. This is a Jupyter notebook with PySpark enabled. To enable PySpark, you need to have Spark available, and certain environment variables set. On an Ubuntu-16.04 machine, where you've downloaded the latest Spark: % sudo apt-get install openjdk-9-jre-headless % tar xvzf spark-2.0.2-bin-hadoop2.7.tgz % export PATH=`pwd`/spark-2.0.2-bin-hadoop2.7/bin:$PATH % export PYSPARK_DRIVER_PYTHON=jupyter % export PYSPARK_DRIVER_PYTHON_OPTS='notebook' pyspark % pyspark This file tells Jupyter exactly how to connect with Spark and Python. Start a new Python notebook in Jupyter after it opens up, and note that you are getting a little more output from Jupyter in your shell window than normal. That's Spark - it's a little wordy. Getting started with SparkContext and RDDs Working with Spark, everything goes through a SparkContext object. It's available (after several seconds of startup time - look at the shell window where you started the Jupyter notebook and you'll see a lot of Spark startup messages) as the object sc: End of explanation """ !wget https://s3.amazonaws.com/capitalbikeshare-data/2015-Q4-cabi-trip-history-data.zip !unzip 2015-Q4-cabi-trip-history-data.zip !mv 2015-Q4-Trips-History-Data.csv q4.csv !wc -l q4.csv """ Explanation: Everything you do with Spark here will go through this object. It is a feature of pySpark to define and make this available in the shell environment, and the Jupyter kernel makes that available through a notebook. The key construct in Spark is a Resilient Distributed Dataset, or RDD. An RDD leverages the data- and computing resource-tracking capabilities of the Hadoop infrastructure layer to make a dataset available in RAM. This is a key enhancement over the Hadoop or broader Map/Reduce model where data for every step of computation comes from disk and returns to disk between steps. Using RAM like this makes everything go faster. Another key concept in Spark is that it will split up your data and handling the low-level details of mapping, shuffling, and reducing data for you. Rather than the Hadoop style code we saw previously, Spark introduces a few language constructs that are easy to learn and work with. Let's go through those basics now. First, let's load up data. The simplest way is to use the SparkContext to access a text file. Let's visit our Bikeshare data one last time. End of explanation """ !csvcut -n q4.csv !head -5 q4.csv | csvlook """ Explanation: This is the same trip dataset we looked at previously, with the familiar format: End of explanation """ from operator import add """ Explanation: Python modules Always good to bring your imports together in one place. End of explanation """ rides = sc.textFile('q4.csv') """ Explanation: To prep the data for use as an RDD, we just need one line: End of explanation """ rides.count() """ Explanation: See how quickly it returned? That's because (as we learned from Hari, Nisha, and Mokeli) the processing of the data is deferred - we haven't actually done anything with it but prepare it for use as an RDD within Spark. Let's do something simple first. End of explanation """ station_pairs = rides \ .map(lambda r: r.split(",")) \ .map(lambda r: ((r[4], r[6]), 1)) """ Explanation: That took a little bit longer. To see why, we can jump over to the Spark UI. 
On my machine right now, it's available at http://localhost:4040/jobs/ but note that that URL might not work for you - it's just local to my machine. (Explore the Spark UI a little bit) You can find your local Spark UI by examining the text output from the same shell window we looked back at a little while ago. The one where you started Jupyter and saw all the Spark startup information will now have a bunch of lines about the job we processed. Scroll back and look for something like this up near the top: INFO SparkUI: Started SparkUI at http://localhost:4040/jobs/ Whatever that URL is on your VM, that's where you can find the Spark UI on your host. Reviewing the resulting data from that one simple job -- counting lines -- tells us a lot about what the Hadoop environment and Spark on top of it do automatically for us. Remember that point about how these tools make it easier for us to write code that uses parallel computing without having to be experts? Let's do something a little more interesting. This is a CSV file, so let's count ride stations pairs. To do this we need to map each input line and extract the start and stop station, then we need to reduce that by aggregating the count lines. Fortunately we can do that with the Python keywords map (which maps input data to an output), filter (which selects or filters some data from a larger set based on a test), and lambda (which defines "anonymous" functions inline). These are common functional programming constructs and date back many, many decades, so they are a natural fit here because the Map/Reduce paradigm itself is a functional model. First, we split up the data rows, our map step. End of explanation """ key_func = lambda k, v: -v station_counts = station_pairs \ .reduceByKey(add) \ .takeOrdered(10, key=lambda r: -r[1]) station_counts """ Explanation: Several things to note here: That was instantaneous. We haven't computed anything yet - this is "lazy evaluation". lambda takes an input (here r) and returns a result (here the split array once, then a tuple of the two station names with a counter, 1). It's like you're defining a function right in the middle of other code and not giving it a name. That's why they're called "anonymous" or "inline" functions. We are chaining two map commands together. This should look familiar - it's just like piping. This leaves us with a mapped data structure that needs to be reduced. End of explanation """ !csvcut -c5,7 q4.csv | sort | uniq -c | sort -rn | head """ Explanation: (look at the Spark UI again for this job!) There we go, the counts of station pairs. Lots of people riding around the mall, and from Eastern Market to Lincoln Park (my neighborhood!). What just happened? We imported the add operation for use as a parameter to reduceByKey reduceByKey is a Spark verb that lets us reduce mapped data, in this case, something like a GROUP BY operation, where we operate on the RDD provided using the function passed in as a parameter: add We finally execute the whole operation with takeOrdered, which invokes the full computation and takes the top 10 results. We calculate the top 10 with the anonymous sort key function lambda r: -r[1] which returns a descending sort result set of the added up key/count pairs by their counts. We show the result, station_counts There we go! We just computed something using Hadoop and Spark. Of course we can do the same thing with csvcut | sort | uniq -c | sort -rn | head, right? 
End of explanation """ def get_duration_min(ms): return int(ms) / (60 * 1000) get_duration_min('696038') """ Explanation: Which took longer? Why? Computing some numbers Counting is well and good, but let's do a little math and get some basic descriptive statistics. Let's reimplement our milliseconds-to-minutes conversion, then find what the central tendencies of the duration might be. Great. Now to compute the duration in minutes, which we'll get in seconds from a datediff, then divide by 60. End of explanation """ import numpy as np from pyspark.mllib.stat import Statistics """ Explanation: We'll use this function in our pipeline in a moment. Next we need to reach for Spark Statistics: End of explanation """ header = rides.first() rides = rides.filter(lambda x: x != header) """ Explanation: To generate summary statistics, we need to first create a vector of data, in this case our duration counts. First we need a little trick to skip the first line, our header row. End of explanation """ durations = rides \ .map(lambda r: r.split(",")) \ .map(lambda r: np.array([get_duration_min(r[0])])) durations.take(5) """ Explanation: Next we have to get our durations from the source data. Note that we next have to wrap these in NumPy vectors. End of explanation """ summary = Statistics.colStats(durations) print(summary.mean()) print(summary.variance()) print(summary.numNonzeros()) """ Explanation: Now we just feed that to a Summarizer: End of explanation """ !wget https://github.com/gwsb-istm-6212-fall-2016/syllabus-and-schedule/raw/master/exercises/pg2500.txt !mv pg2500.txt siddhartha.txt siddhartha_text = sc.textFile('siddhartha.txt') word_counts = siddhartha_text \ .flatMap(lambda line: line.lower().split(" ")) \ .map(lambda word: (word, 1)) \ .reduceByKey(add) word_counts.takeOrdered(25, lambda w: -w[1]) """ Explanation: Counting words We can revisit our word count example as well. Let's reach back for another familiar dataset: End of explanation """ many_texts = sc.textFile('texts/*.txt') word_counts = many_texts \ .flatMap(lambda line: line.lower().split(" ")) \ .map(lambda word: (word, 1)) \ .reduceByKey(add) word_count """ Explanation: What do you see that we could improve in this word count? End of explanation """
musketeer191/job_analytics
parse_title.ipynb
gpl-3.0
df = pd.read_csv(DATA_DIR + 'doc_index_filter.csv') titles = list(df['title'].unique()) n_title = len(titles) print('# unique titles: %d' % n_title) title_stats = pd.read_csv(DATA_DIR + 'stats_job_titles.csv') """ Explanation: Data loading: End of explanation """ def parseBatch(b, start=None, end=None): ''' @brief: parse a batch of 100 titles @return: DF containing results of parsing the 100 titles in batch b ''' print('Parsing batch {}...'.format(b)) if (not start) and (not end): start = 100*b; end = start + 100 batch_titles = titles[start:end] frames = [parse(t) for t in batch_titles] res = pd.concat(frames) res.reset_index(inplace=True); del res['index'] time.sleep(3) # defensive save result of the batch fname = DATA_DIR + 'tmp/b{}.csv'.format(b) res.to_csv(fname, index=False) print('\t saved result to tmp file') return res n_batch = n_title/100; remainder = n_title % 100 frames = [parseBatch(b) for b in range(n_batch)] rem_titles = titles[-remainder:] last_frame = pd.concat([parse(t) for t in rem_titles]) frames.append(last_frame) res = pd.concat(frames) print res.shape """ Explanation: Parsing job titles We need to divide the parsing process into several medium-size batches as this is a good practice when we are using a web service or access web/remote server. The division also allows us to easily locate a bug when it occurs. End of explanation """ print('# dups in parsing result: %d' %sum(res.duplicated('title'))) res[res.duplicated('title')] """ Explanation: Check for duplicates in results: End of explanation """ invalid_titles = res.title[res.duplicated('title')].unique() print('# invalid titles: %d' %len(invalid_titles)) pd.DataFrame({'title': invalid_titles}).to_csv(DATA_DIR + 'invalid_titles.csv', index=False) """ Explanation: Save invalid titles: End of explanation """ res = res.drop_duplicates('title') print res.shape res.domain.fillna('', inplace=True); res.position.fillna('', inplace=True) res.pri_func.fillna('', inplace=True); res.sec_func.fillna('', inplace=True) res.to_csv(DATA_DIR + 'parsed_titles.csv', index=False) """ Explanation: Rm dups due to invalid titles and replace NAs by empty strings (to avoid NAs destroying standard titles later): End of explanation """ def camelCase(s): return s.title() def joinValidParts(row): ''' @brief: Only join valid parts with data, parts which are empty str are removed ''' parts = [row.position, row.domain, row.sec_func, row.pri_func] valid_parts = [p for p in parts if p != ''] # print('# valid parts: %d' %len(valid_parts)) return ' '.join(valid_parts) # This naive join will cause confusing results res['std_title'] = res.position + ' ' + res.domain + ' ' + res.sec_func + ' ' + res.pri_func """ Explanation: Standardize Job Titles The standard form for all titles is: position + domain + (secondary function) + primary function. End of explanation """ row = res.query('non_std_title != std_title').iloc[0] my_util.tagChanges(row.non_std_title, row.title) """ Explanation: The following helps to detect the confusing problem: non_std_title seems identical with title. End of explanation """ row = res.query('non_std_title == "Site Engineer"').iloc[0] print joinValidParts(row) ' '.join([row.position, row.domain, row.sec_func, row.pri_func]) res['std_title'] = res.apply(joinValidParts, axis=1) res.std_title = res.std_title.apply(camelCase) """ Explanation: It shows that the problem is due to NA parts, e.g. sec-func, which add spaces (almost invisible to naked eyes) to the std-ized version. 
Thus we need a better join which combines only valid parts. That made me create joinValidParts(). End of explanation """ res.rename(columns={'std_title': 'title'}, inplace=True) res.to_csv(DATA_DIR + 'parsed_titles.csv', index=False) stdForm = dict(zip(res.non_std_title, res.title)) print('# non-std titles which can be standardized: %d' %len(stdForm.keys())) uniq_std_titles = np.unique((stdForm.values())) print('# unique standard titles: %d' % len(uniq_std_titles)) """ Explanation: From now on, title means standard title. End of explanation """ def toStd(t): if t not in stdForm.keys(): return t else: if stdForm[t] == '': # for titles which could not be parsed return t else: return stdForm[t] # return stdForm[t] if t in stdForm.keys() else t df = pd.read_csv(DATA_DIR + 'doc_index_filter.csv') df.columns uniq_non_std_titles = df['non_std_title'].unique() # sum([t not in stdForm.keys() for t in uniq_non_std_titles]) tmp = [t for t in uniq_non_std_titles if t not in stdForm.keys()] tmp[:5] del df['title'] print("# unique non-std titles in data: %d" % df.non_std_title.nunique()) df['title'] = map(toStd, df['non_std_title']) """ Explanation: Standardizing Job Titles of Posts End of explanation """ df.query('non_std_title != title')[['non_std_title', 'title']].head() any(df.title == '') df.to_csv(DATA_DIR + 'doc_index_filter.csv') """ Explanation: Sanity check if the std-ation works: End of explanation """ by_title_agg = df.groupby('title').agg({'job_id': 'nunique'}) by_title_agg.rename(columns={'job_id': 'n_post'}, inplace=True) by_title_agg.reset_index(inplace=True) by_title_agg.head() titles2_ = by_title_agg.query('n_post >= 2') print('# titles with >= 2 posts: %d' %titles2_.shape[0]) title_df = pd.merge(titles2_, res) print(title_df.shape[0]) # res.columns title_df.columns title_df.to_csv(DATA_DIR + 'new_titles_2posts_up.csv', index=False) """ Explanation: Re-query job titles with at least 2 posts: End of explanation """ by_domain = res.groupby('domain') print('# domains: {}'.format(by_domain.ngroups)) by_domain_agg = by_domain.agg({'title': len}) by_domain_agg = by_domain_agg.add_prefix('n_').reset_index() by_domain_agg.sort_values('n_title', ascending=False, inplace=True) by_domain_agg.describe().round(1) """ Explanation: Domains in Job Titles End of explanation """ domains_2 = by_domain_agg.query('n_title >= 2') print('# domains with at least 2 job titles: %d' %len(domains_2)) by_domain_agg.to_csv(DATA_DIR + 'stats_domains.csv', index=False) """ Explanation: Though the total no. of domains is large, we actually only interested in domains with at least $2$ job titles. The reason is the domains with only $1$ title give us no pairwise similarity score. 
End of explanation """ by_pri_func = res.groupby('pri_func') print('# primary functions: {}'.format(by_pri_func.ngroups)) by_pri_func_agg = by_pri_func.agg({'title': len}) by_pri_func_agg = by_pri_func_agg.add_prefix('n_').reset_index() by_pri_func_agg.sort_values('n_title', ascending=False, inplace=True) by_pri_func_agg.to_csv(DATA_DIR + 'stats_pri_funcs.csv', index=False) by_pri_func_agg.describe().round(1) by_pri_func_agg.head(10) """ Explanation: Primary Functions in Job Titles End of explanation """ # r0 = requests.post(parse_url, auth=(user, pwd), # json={"job_title":"accountant", "verbose": False}) # print r0.status_code # print r0.json()['output'] r1 = requests.post(parse_url, auth=(user, pwd), json={"job_title":"software developer", "verbose": False}) print r1.status_code print r1.json()['output'] # r2 = requests.post(parse_url, auth=(user, pwd), # json={"job_title":"pre-school teacher", "verbose": False}) # print r2.status_code # print r2.json()['output'] # r3 = requests.post(parse_url, auth=(user, pwd), # json={"job_title":"Assistant Civil and Structural Engineer", # "verbose": False}) r4 = requests.post(parse_url, auth=(user, pwd), json={"job_title": "Security Analyst, Information Technology", "verbose": False}) print r4.json()['output'] r5 = requests.post(parse_url, auth=(user, pwd), json={"job_title": "Data analyst and scientist", "verbose": False}) print r5.json()['output'] """ Explanation: Test parser web service (src by Koon Han) End of explanation """
intel-analytics/analytics-zoo
apps/tfnet/image_classification_inference.ipynb
apache-2.0
from zoo.common.nncontext import * sc = init_nncontext("Tfnet Example") import sys import tensorflow as tf sys.path slim_path = "/path/to/yourdownload/models/research/slim" # Please set this to the directory where you clone the tensorflow models repository sys.path.append(slim_path) """ Explanation: Image classification using tensorflow pre-trained model Tensorflow-Slim image classification model library provides both the implementation and pre-trianed checkpoint many popular convolution neural nets for image classification. Using TFNet in Analytics-Zoo, we can easily load these pre-trained model and make distributed inference with only a few lines of code. Add the slim image classification model library to $PYTHONPATH End of explanation """ from nets import inception slim = tf.contrib.slim images = tf.placeholder(dtype=tf.float32, shape=(None, 224, 224, 3)) with slim.arg_scope(inception.inception_v1_arg_scope()): logits, end_points = inception.inception_v1(images, num_classes=1001, is_training=False) sess = tf.Session() saver = tf.train.Saver() saver.restore(sess, "/path/to/yourdownload/checkpoint/inception_v1.ckpt") # You need to edit this path to the checkpoint you downloaded """ Explanation: Construct the inference graph and restore the checkpoint End of explanation """ from zoo.util.tf import export_tf export_tf(sess, "/path/to/yourdownload/models/tfnet/", inputs=[images], outputs=[logits]) """ Explanation: Export the graph as a frozen inference graph The export_tf utility function will frozen the tensorflow graph, strip unused operation according to the inputs and outputs and save it to the specified directory along with the input/output tensor names. End of explanation """ from zoo.tfpark import TFNet model = TFNet.from_export_folder("/path/to/yourdownload/models/tfnet") """ Explanation: Load to Analytics-Zoo End of explanation """ import cv2 import numpy as np import json im = cv2.imread("test.jpg") im = cv2.resize(im, (224, 224)) im = (im - 127.0) / 128.0 im = np.expand_dims(im, 0) import json with open("imagenet_class_index.json") as f: class_idx = json.load(f) idx2label = [class_idx[str(k)][1] for k in range(len(class_idx))] import numpy as np result = model.predict(im).first() print(idx2label[np.argmax(result, 0)]) """ Explanation: Test it on one image End of explanation """ tf.reset_default_graph()# if you want to test your code, you can use it to reset your graph and to avoid some mistakes images = tf.placeholder(dtype=tf.float32, shape=(None, 224, 224, 3)) with slim.arg_scope(inception.inception_v1_arg_scope()): logits, end_points = inception.inception_v1(images, num_classes=1001) sess = tf.Session() saver = tf.train.Saver() saver.restore(sess, "/path/to/yourdownload/checkpoint/inception_v1.ckpt")# You need to edit this path to the checkpoint you downloaded """ Explanation: Fine tune on the data Construct the inference graph and restore the checkpoint End of explanation """ from zoo.util.tf import export_tf avg_pool = end_points['AvgPool_0a_7x7'] export_tf(sess, "/path/to/yourdownload/models/tfnet/", inputs=[images], outputs=[avg_pool]) """ Explanation: Remove the dense layer of the graph(Inception_v1) and export the graph left as a frozen inference graph The export_tf utility function will frozen the tensorflow graph, strip unused operation according to the inputs and outputs and save it to the specified directory along with the input/output tensor names. 
In this example, we use "AvgPool_0a_7x7" as our end point, freeze the graph according to "AvgPool_0a_7x7" and export this graph to the specified directory
End of explanation
"""

from zoo.tfpark import TFNet
amodel = TFNet.from_export_folder("/path/to/yourdownload/models/tfnet/")
"""
Explanation: Load to Analytics-Zoo
Load the frozen graph from the directory specified above
End of explanation
"""

from bigdl.nn.layer import Sequential, Transpose, Contiguous, Linear, ReLU, SoftMax, Reshape, View, MulConstant, SpatialAveragePooling

full_model = Sequential()
full_model.add(Transpose([(2,4), (2,3)]))
scalar = 1. / 255
full_model.add(MulConstant(scalar))
full_model.add(Contiguous())
full_model.add(amodel)
full_model.add(View([1024]))
full_model.add(Linear(1024, 2))
"""
Explanation: Add the layers you want to the model
Use a Sequential container to build the model. First transpose the input data from NCHW to NHWC, then multiply it by the scaling constant and make the tensor contiguous. Finally, add the frozen graph and append a linear layer after it.
End of explanation
"""

import re

from bigdl.nn.criterion import CrossEntropyCriterion
from pyspark import SparkConf
from pyspark.ml import Pipeline
from pyspark.sql.functions import col, udf
from pyspark.sql.types import DoubleType, StringType

from zoo.common.nncontext import *
from zoo.feature.image import *
from zoo.pipeline.api.keras.layers import Dense, Input, Flatten
from zoo.pipeline.api.keras.models import *
from zoo.pipeline.api.net import *
from zoo.pipeline.nnframes import *

image_path = "file:///path/toyourdownload/dogs-vs-cats/train/*.jpg"
imageDF = NNImageReader.readImages(image_path, sc)

getName = udf(lambda row: row[0], StringType())

def label(name):
    if 'cat' in name:
        return 1.0
    elif 'dog' in name:
        return 2.0
    else:
        raise ValueError("file name format is not correct: %s" % name)

getLabel = udf(lambda name: label(name), DoubleType())

labelDF = imageDF.withColumn("name", getName(col("image"))) \
        .withColumn("label", getLabel(col('name')))
(trainingDF, validationDF) = labelDF.randomSplit([0.9, 0.1])
labelDF.select("name", "label").show(10, False)
"""
Explanation: Get the data
Set the path to the dogs-vs-cats data you downloaded, then read the images from that path. Use UDFs to extract the file name and derive a numeric label for each image. Finally, split the data into two groups: training data and validation data.
End of explanation
"""

trainingDF.groupBy("label").count().show()
"""
Explanation: Show the distribution of the training data
End of explanation
"""

transformer = ChainedPreprocessing(
    [RowToImageFeature(), ImageResize(224, 224),
     ImageChannelNormalize(123.0, 117.0, 104.0), ImageMatToTensor(), ImageFeatureToTensor()])

from bigdl.optim.optimizer import *
from bigdl.nn.criterion import *
import datetime as dt

app_name = 'classification cat vs dog' + dt.datetime.now().strftime("%Y%m%d-%H%M%S")
train_summary = TrainSummary(log_dir='./log/',
                             app_name=app_name)

classifier = NNClassifier(full_model, CrossEntropyCriterion(), transformer)\
        .setFeaturesCol("image")\
        .setCachingSample(False)\
        .setLearningRate(0.003)\
        .setBatchSize(16)\
        .setMaxEpoch(9)\
        .setTrainSummary(train_summary)
# BatchSize should be a multiple of the physical core number passed on the command line


from pyspark.ml import Pipeline
import os
import datetime as dt
if not os.path.exists("./log"):
    os.makedirs("./log")

print("Saving logs to ", app_name)

pipeline = Pipeline(stages=[classifier])

trainedModel = pipeline.fit(trainingDF)
"""
Explanation: Use a pipeline to train the data
Combine the preprocessing steps together.
Before creating the classifier, build the summary's application name so that the logs are stored under the right name. Then configure the classifier's arguments and save the logs to the log directory. Use the configured classifier to create a pipeline, and use this pipeline as your model to train on the data.
End of explanation
"""

# predict using the validation data
predictionDF = trainedModel.transform(validationDF).cache()

# calculate the accuracy and the test error
correct = predictionDF.filter("label=prediction").count()
overall = predictionDF.count()
accuracy = correct * 1.0 / overall

print("Test Error = %g " % (1.0 - accuracy))
"""
Explanation: Test the model on validation data
Use the validation data to test the model, and print the test error
End of explanation
"""

# make the plots render inline on this page
import matplotlib
matplotlib.use('Agg')
%pylab inline
from matplotlib import pyplot as plt
from matplotlib.pyplot import imshow
"""
Explanation: Make the plots visible on this page. Repeatedly running this cell without restarting the kernel will result in a warning.
End of explanation
"""

# read the loss from the training summary and plot it
loss = np.array(train_summary.read_scalar("Loss"))
plt.figure(figsize = (12,12))
plt.plot(loss[:,0], loss[:,1], label='loss')
plt.xlim(0, loss.shape[0]+10)
plt.grid(True)
plt.title("loss")
"""
Explanation: Show the change in loss as a plot.
End of explanation
"""

samplecat = predictionDF.filter(predictionDF.prediction == 1.0).limit(2).collect()
sampledog = predictionDF.filter(predictionDF.prediction == 2.0).sort("label", ascending=False).limit(2).collect()
"""
Explanation: Collect two examples of each class.
End of explanation
"""

from IPython.display import Image, display
for cat in samplecat:
    print("prediction: {}".format(cat.prediction))
    display(Image(cat.image.origin[5:], height=256, width=256))
"""
Explanation: Show two cat pictures and their predictions.
End of explanation
"""

# show the two dog pictures and their predictions as follows
from IPython.display import Image, display
for dog in sampledog:
    print("prediction: {}".format(dog.prediction))
    display(Image(dog.image.origin[5:], height=256, width=256))
"""
Explanation: Show two dog pictures and their predictions.
End of explanation
"""
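# --- Optional illustration (not part of the original notebook) ---
# A single accuracy number hides how the errors are distributed between the two
# classes. As a quick sanity check, the same prediction DataFrame can be grouped
# into a small confusion table using only standard Spark DataFrame operations.
# This minimal sketch assumes `predictionDF` from the cells above is still available.
confusion = predictionDF.groupBy("label", "prediction").count().orderBy("label", "prediction")
confusion.show()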
quantopian/research_public
notebooks/lectures/Estimating_Covariance_Matrices/notebook.ipynb
apache-2.0
import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import scipy.stats as stats from sklearn import covariance """ Explanation: Estimation of Covariance Matrices By Christopher van Hoecke and Max Margenot Part of the Quantopian Lecture Series: www.quantopian.com/lectures github.com/quantopian/research_public Volatility has long been a thorn in the side of investors in the market. Successfully measuring volatility would allow for more accurate modeling of the returns and more stable investments leading to greater returns, but forecasting volatility accurately is a difficult problem. Measuring Volatility Volatility needs to be forward-looking and predictive in order to make smart decisions. Unfortunately, simply taking the historical standard deviation of an individual asset's returns falls short when we take into account need for robustness to the future. When we scale the problem up to the point where we need to forecast the volatility for many assets, it gets even harder. To model how a portfolio overall changes, it is important to look not only at the volatility of each asset in the portfolio, but also at the pairwise covariances of every asset involved. The relationship between two or more assets provides valuable insights and a path towards reduction of overall portfolio volatility. A large number of assets with low covariance would assure they decrease or increase independently of each other. Indepedent assets have less of an impact on our portfolio's volatility as they give us true diversity and help us avoid position concentration risk. Covariance In statistics and probability, the covariance is a measure of the joint variability of two random variables. When random variables exhibit similar behavior, there tends to be a high covariance between them. Mathematically, we express the covariance of X with respect to Y as: $$ COV(X, Y) = E[(X - E[X])(Y - E[Y])]$$ Notice that if we take the covariance of $X$ with itself, we get: $$ COV(X, X) = E[(X - E[X])(X - E[X])] = E[(X - E[X])^2] = VAR(X) $$ We can use covariance to quantify the similarities between different assets in much the same way. If two assets have a high covariance, they will generally behave the same way. Assets with particularly high covariance can essentially replace each other. Covariance matrices form the backbone of Modern Portfolio theory (MPT). MPT focuses on maximizing return for a given level of risk, making essential the methods with which we estimate that risk. We use covariances to quantify the joint risk of assets, forming how we view the risk of an entire portfolio. What is key is that investing in assets that have high pairwise covariances provides little diversification because of how closely their fluctuations are related. End of explanation """ # Generate random values of x X = np.random.normal(size = 1000) epsilon = np.random.normal(0, 3, size = len(X)) Y = 5*X + epsilon product = (X - np.mean(X))*(Y - np.mean(Y)) expected_value = np.mean(product) print 'Value of the covariance between X and Y:', expected_value """ Explanation: Let's take the covariance of two closely related variables, $X$ and $Y$. Say that $X$ is some randomly drawn set and that $Y = 5X + \epsilon$, where $\epsilon$ is some extra noise. We can compute the covariance using the formula above to get a clearer picture of how $X$ evolves with respect to asset $Y$. End of explanation """ np.cov([X, Y]) """ Explanation: We can also compute the covariance between $X$ and $Y$ with a single function. 
End of explanation """ print np.var(X), np.var(Y) """ Explanation: This gives us the covariance matrix between $X$ and $Y$. The diagonals are their respective variances and the indices $(i, j)$ refer to the covariance between assets indexed $i$ and $j$. End of explanation """ # scatter plot of X and y from statsmodels import regression import statsmodels.api as sm def linreg(X,Y): # Running the linear regression X = sm.add_constant(X) model = regression.linear_model.OLS(Y, X).fit() a = model.params[0] b = model.params[1] X = X[:, 1] # Return summary of the regression and plot results X2 = np.linspace(X.min(), X.max(), 100) Y_hat = X2 * b + a plt.scatter(X, Y, alpha=0.3) # Plot the raw data plt.plot(X2, Y_hat, 'r', alpha=0.9); # Add the regression line, colored in red plt.xlabel('X Value') plt.ylabel('Y Value') return model.summary() linreg(X, Y) plt.scatter(X, Y) plt.title('Scatter plot and linear equation of x as a function of y') plt.xlabel('X') plt.ylabel('Y') plt.legend(['Linear equation', 'Scatter Plot']); """ Explanation: In this case, we only have two assets so we only have indices $(0, 1)$ and $(1, 0)$. Covariance matrices are symmetric, since $COV(X, Y) = COV(Y, X)$, which is why the off-diagonals mirror each other. We can intuitively think of this as how much $Y$ changes when $X$ changes and vice-versa. As such, our covariance value of about 5 could have been anticipated from the definition of the relationship between $X$ and $Y$. Here is a scatterplot between $X$ and $Y$ with a line of best fit down the middle. End of explanation """ # Four asset example of the covariance matrix. start_date = '2016-01-01' end_date = '2016-02-01' returns = get_pricing( ['SBUX', 'AAPL', 'GS', 'GILD'], start_date=start_date, end_date=end_date, fields='price' ).pct_change()[1:] returns.columns = map(lambda x: x.symbol, returns.columns) print 'Covariance matrix:' print returns.cov() """ Explanation: Between the covariance, the linear regression, and our knowledge of how $X$ and $Y$ are related, we can easily assess the relationship between our toy variables. With real data, there are two main complicating factors. The first is that we are exmaining significantly more relationships. The second is that we do not know any of their underlying relationships. These hindrances speak to the benefit of having accurate estimates of covariance matrices. The Covariance Matrix As the number of assets we are curious about increases, so too do the dimensions of the covariance matrix that describes their relationships. If we take the covariance between $N$ assets, we will get out a $N \times N$ covariance matrix. This allows us to efficiently express the relationships between many arrays at once. As with the simple $2\times 2$ case, the $i$-th diagonal is the variance of the $i$-th asset and the values at $(i, j)$ and $(j, i)$ refer to the covariance between asset $i$ and asset $j$. We display this with the following notation: $$ \Sigma = \left[\begin{matrix} VAR(X_1) & COV(X_1, X_2) & \cdots & COV(X_1, X_N) \ COV(X_2, X_0) & VAR(X_2) & \cdots & COV(X_2, X_N) \ \vdots & \vdots & \ddots & \vdots \ COV(X_N, X_1) & COV(X_N, X_2) & \cdots & VAR(X_N) \end{matrix}\right] $$ When trying to find the covariance of many assets, it quickly becomes apparent why the matrix notation is more favorable. End of explanation """ # Getting the return data of assets. 
start = '2016-01-01'
end = '2016-02-01'

symbols = ['AAPL', 'MSFT', 'BRK-A', 'GE', 'FDX', 'SBUX']
prices = get_pricing(symbols, start_date = start, end_date = end, fields = 'price')
prices.columns = map(lambda x: x.symbol, prices.columns)
returns = prices.pct_change()[1:]
returns.head()
"""
Explanation: Why does all this matter?
We measure the covariance of the assets in our portfolio to make sure we have an accurate picture of the risks involved in holding those assets together. We want to apportion our capital amongst these assets in such a way as to minimize our exposure to the risks associated with each individual asset and to neutralize exposure to systematic risk. This is done through the process of portfolio optimization. Portfolio optimization routines go through exactly this process, finding the appropriate weights for each asset given its risks. Mean-variance optimization, a staple of MPT, does exactly this.
Estimating the covariance matrix becomes critical when using methods that rely on it, as we cannot know the true statistical relationships underlying our chosen assets. The stability and accuracy of these estimates are essential to getting stable weights that encapsulate our risks and intentions.
Unfortunately, the most obvious way to calculate a covariance matrix estimate, the sample covariance, is notoriously unstable. If we have fewer time observations of our assets than the number of assets ($T < N$), the estimate becomes especially unreliable. The extreme values react more strongly to changes, and as the extreme values of the covariance jump around, our optimizers are perturbed, giving us inconsistent weights. This is a problem when we are trying to make many independent bets on many assets to improve our risk exposures through diversification. Even if we have more time elements than assets that we are trading, we can run into issues, as the time component may span multiple regimes, giving us covariance matrices that are still inaccurate.
The solution in many cases is to use a robust formulation of the covariance matrix. If we can estimate a covariance matrix that still captures the relationships between assets and is simultaneously more stable, then we can have more faith in the output of our optimizers. A main way that we handle this is by using some form of a shrinkage estimator.
Shrinkage Estimators
The concept of shrinkage stems from the need for stable covariance matrices. The basic way we "shrink" a matrix is to reduce the extreme values of the sample covariance matrix by pulling them closer to the center. Practically, we take a linear combination of the sample covariance matrix and a constant array representing the center. Given a sample covariance matrix, $\textbf{S}$, the mean variance, $\mu$, and the shrinkage constant $\delta$, the shrunk estimated covariance is mathematically defined as:
$$(1 - \delta)\textbf{S} + \delta\mu\textbf{1}$$
We restrict $\delta$ such that $0 \leq \delta \leq 1$, making this a weighted average between the sample covariance and the mean variance matrix. The optimal value of $\delta$ has been tackled several times. For our purposes, we will use the formulation by Ledoit and Wolf.
Ledoit-Wolf Estimator.
In their paper, Ledoit and Wolf proposed an optimal $\delta$:
$$\hat\delta^* = \max\{0, \min\{\frac{\hat\kappa}{T},1\}\}$$
$\hat\kappa$ has a mathematical formulation that is beyond the scope of this lecture, but you can find its definition in the paper.
The Ledoit-Wolf Estimator is the robust covariance estimate that uses this optimal $\hat\delta^*$ to shrink the sample covariance matrix. We can draw an implementation of it directly from scikit-learn for easy use. End of explanation """ in_sample_lw = covariance.ledoit_wolf(returns)[0] print in_sample_lw """ Explanation: Here we calculate the in-sample Ledoit-Wolf estimator. End of explanation """ oos_start = '2016-02-01' oos_end = '2016-03-01' oos_prices = get_pricing(symbols, start_date = oos_start, end_date = oos_end, fields = 'price') oos_prices.columns = map(lambda x: x.symbol, oos_prices.columns) oos_returns = oos_prices.pct_change()[1:] out_sample_lw = covariance.ledoit_wolf(oos_returns)[0] lw_errors = sum(abs(np.subtract(in_sample_lw, out_sample_lw))) print "Average Ledoit-Wolf error: ", np.mean(lw_errors) """ Explanation: Calculating Errors We can quantify the difference between the in and out-of-sample estimates by taking the absolute difference element-by-element for the two matrices. We represent this mathematically as: $$ \frac{1}{n} \sum_{i=1}^{n} |a_i - b_i| $$ First, we calculate the out-of-sample estimate and then we compare. End of explanation """ sample_errors = sum(abs(np.subtract(returns.cov().values, oos_returns.cov().values))) print 'Average sample covariance error: ', np.mean(sample_errors) print 'Error improvement of LW over sample: {0:.2f}%'.format((np.mean(sample_errors/lw_errors)-1)*100) """ Explanation: Comparing to Sample Matrix We can check how much of an improvement this is by comparing the errors with the erros of the sample covariance. End of explanation """ sns.boxplot( data = pd.DataFrame({ 'Sample Covariance Error': sample_errors, 'Ledoit-Wolf Error': lw_errors }) ) plt.title('Box Plot of Errors') plt.ylabel('Error'); """ Explanation: We can see that the improvement of Ledoit-Wolf over the sample covariance is pretty solid. This translates into decreased volatility and turnover rate in our portfolio, and thus increased returns when using the shrunk covariance matrix. End of explanation """ start_date = '2016-01-01' end_date = '2017-06-01' symbols = [ 'SPY', 'XLF', 'XLE', 'XLU','XLK', 'XLI', 'XLB', 'GE', 'GS', 'BRK-A', 'JPM', 'AAPL', 'MMM', 'BA', 'CSCO','KO', 'DIS','DD', 'XOM', 'INTC', 'IBM', 'NKE', 'MSFT', 'PG', 'UTX', 'HD', 'MCD', 'CVX', 'AXP','JNJ', 'MRK', 'CAT', 'PFE', 'TRV', 'UNH', 'WMT', 'VZ', 'QQQ', 'BAC', 'F', 'C', 'CMCSA', 'MS', 'ORCL', 'PEP', 'HON', 'GILD', 'LMT', 'UPS', 'HP', 'FDX', 'GD', 'SBUX' ] prices = get_pricing(symbols, start_date=start_date, end_date=end_date, fields='price') prices.columns = map(lambda x: x.symbol, prices.columns) returns = prices.pct_change()[1:] dates = returns.resample('M').first().index """ Explanation: Adding More Assets Now we bring this to more assets over a longer time period. Let's see how the errors change over a series of months. End of explanation """ sample_covs = [] lw_covs = [] for i in range(1, len(dates)): sample_cov = returns[dates[i-1]:dates[i]].cov().values sample_covs.append(sample_cov) lw_cov = covariance.ledoit_wolf(returns[dates[i-1]:dates[i]])[0] lw_covs.append(lw_cov) """ Explanation: Here we calculate our different covariance estimates. 
End of explanation """ lw_diffs = [] for pair in zip(lw_covs[:-1], lw_covs[1:]): diff = np.mean(np.sum(np.abs(pair[0] - pair[1]))) lw_diffs.append(diff) sample_diffs = [] for pair in zip(sample_covs[:-1], sample_covs[1:]): diff = np.mean(np.sum(np.abs(pair[0] - pair[1]))) sample_diffs.append(diff) """ Explanation: Here we calculate the error for each time period. End of explanation """ plt.plot(dates[2:], lw_diffs) plt.plot(dates[2:], sample_diffs) plt.xlabel('Time') plt.ylabel('Mean Error') plt.legend(['Ledoit-Wolf Errors', 'Sample Covariance Errors']); """ Explanation: And here we plot the errors over time! End of explanation """
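# --- Optional illustration (not part of the original lecture) ---
# To make the shrinkage formula from the "Shrinkage Estimators" section concrete,
# here is a minimal, hand-rolled sketch of (1 - delta) * S + delta * mu * target.
# Assumptions: `returns` and `np` from the cells above are available, the target
# matrix (mu times the identity) mirrors scikit-learn's ShrunkCovariance reading of
# the mu*1 notation, and delta = 0.2 is an arbitrary shrinkage intensity chosen
# purely for illustration; the Ledoit-Wolf estimator used above picks it optimally.
S = returns.cov().values
mu = np.trace(S) / S.shape[0]      # mean variance across assets
delta = 0.2                        # arbitrary, for illustration only
shrunk_manual = (1 - delta) * S + delta * mu * np.eye(S.shape[0])
print shrunk_manual[:3, :3]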
mne-tools/mne-tools.github.io
0.23/_downloads/4a4a8e5bd5ae7cafea93a04d8c0a0d00/psf_ctf_vertices_lcmv.ipynb
bsd-3-clause
# Author: Olaf Hauk <olaf.hauk@mrc-cbu.cam.ac.uk> # # License: BSD (3-clause) import mne from mne.datasets import sample from mne.beamformer import make_lcmv, make_lcmv_resolution_matrix from mne.minimum_norm import get_cross_talk print(__doc__) data_path = sample.data_path() subjects_dir = data_path + '/subjects/' fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif' fname_cov = data_path + '/MEG/sample/sample_audvis-cov.fif' fname_evo = data_path + '/MEG/sample/sample_audvis-ave.fif' raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' # Read raw data raw = mne.io.read_raw_fif(raw_fname) # only pick good EEG/MEG sensors raw.info['bads'] += ['EEG 053'] # bads + 1 more picks = mne.pick_types(raw.info, meg=True, eeg=True, exclude='bads') # Find events events = mne.find_events(raw) # event_id = {'aud/l': 1, 'aud/r': 2, 'vis/l': 3, 'vis/r': 4} event_id = {'vis/l': 3, 'vis/r': 4} tmin, tmax = -.2, .25 # epoch duration epochs = mne.Epochs(raw, events, event_id=event_id, tmin=tmin, tmax=tmax, picks=picks, baseline=(-.2, 0.), preload=True) del raw # covariance matrix for pre-stimulus interval tmin, tmax = -.2, 0. cov_pre = mne.compute_covariance(epochs, tmin=tmin, tmax=tmax, method='empirical') # covariance matrix for post-stimulus interval (around main evoked responses) tmin, tmax = 0.05, .25 cov_post = mne.compute_covariance(epochs, tmin=tmin, tmax=tmax, method='empirical') info = epochs.info del epochs # read forward solution forward = mne.read_forward_solution(fname_fwd) # use forward operator with fixed source orientations mne.convert_forward_solution(forward, surf_ori=True, force_fixed=True, copy=False) # read noise covariance matrix noise_cov = mne.read_cov(fname_cov) # regularize noise covariance (we used 'empirical' above) noise_cov = mne.cov.regularize(noise_cov, info, mag=0.1, grad=0.1, eeg=0.1, rank='info') """ Explanation: Compute cross-talk functions for LCMV beamformers Visualise cross-talk functions at one vertex for LCMV beamformers computed with different data covariance matrices, which affects their cross-talk functions. 
End of explanation """ # compute LCMV beamformer filters for pre-stimulus interval filters_pre = make_lcmv(info, forward, cov_pre, reg=0.05, noise_cov=noise_cov, pick_ori=None, rank=None, weight_norm=None, reduce_rank=False, verbose=False) # compute LCMV beamformer filters for post-stimulus interval filters_post = make_lcmv(info, forward, cov_post, reg=0.05, noise_cov=noise_cov, pick_ori=None, rank=None, weight_norm=None, reduce_rank=False, verbose=False) """ Explanation: Compute LCMV filters with different data covariance matrices End of explanation """ rm_pre = make_lcmv_resolution_matrix(filters_pre, forward, info) rm_post = make_lcmv_resolution_matrix(filters_post, forward, info) # compute cross-talk functions (CTFs) for one target vertex sources = [3000] stc_pre = get_cross_talk(rm_pre, forward['src'], sources, norm=True) stc_post = get_cross_talk(rm_post, forward['src'], sources, norm=True) verttrue = [forward['src'][0]['vertno'][sources[0]]] # pick one vertex del forward """ Explanation: Compute resolution matrices for the two LCMV beamformers End of explanation """ brain_pre = stc_pre.plot('sample', 'inflated', 'lh', subjects_dir=subjects_dir, figure=1, clim=dict(kind='value', lims=(0, .2, .4))) brain_pre.add_text(0.1, 0.9, 'LCMV beamformer with pre-stimulus\ndata ' 'covariance matrix', 'title', font_size=16) # mark true source location for CTFs brain_pre.add_foci(verttrue, coords_as_verts=True, scale_factor=1., hemi='lh', color='green') """ Explanation: Visualize Pre: End of explanation """ brain_post = stc_post.plot('sample', 'inflated', 'lh', subjects_dir=subjects_dir, figure=2, clim=dict(kind='value', lims=(0, .2, .4))) brain_post.add_text(0.1, 0.9, 'LCMV beamformer with post-stimulus\ndata ' 'covariance matrix', 'title', font_size=16) brain_post.add_foci(verttrue, coords_as_verts=True, scale_factor=1., hemi='lh', color='green') """ Explanation: Post: End of explanation """
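# --- Optional follow-up (not part of the original example) ---
# Instead of judging the two cross-talk functions only visually, one can also
# quantify how similar they are. This is a minimal sketch that correlates the
# CTF amplitude patterns; it assumes `stc_pre` and `stc_post` from above are
# still in memory.
import numpy as np
ctf_corr = np.corrcoef(stc_pre.data.ravel(), stc_post.data.ravel())[0, 1]
print('Correlation between pre- and post-stimulus CTFs: %.3f' % ctf_corr)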
feststelltaste/software-analytics
notebooks/Mining performance HotSpots with JProfiler, jQAssistant, Neo4j and Pandas.ipynb
gpl-3.0
with open (r'input/spring-petclinic/JDBC_Probe_Hot_Spots_jmeter_test.xml') as log: [print(line[:97] + "...") for line in log.readlines()[:10]] """ Explanation: Mining performance hotspots with JProfiler, jQAssistant, Neo4j and Pandas TL;DR I show how I determine the parts of an application that trigger unnecessary SQL statements by using graph analysis of a call tree. Introduction General claim We don't need more tools that show use more problems in our software. We need ways to determine the right problem to solve. But often we just fix the symptoms on the surface rather than the underlying problems. I find that approach non-professional and want to do my part to improve this situation by delivering root cause analysis of symptoms to get to the real problems in our software. What you will see In this notebook, I'll show you one of my approaches for mining performance problems based on application runtime performance analysis. In general, I use this approach to make the point that there are severe design errors that have a negative influence on the application's overall performance. In this example, I show you how I determine the reasons behind a massive amount of executed SQL statements in an application step by step. The key idea is to use graph analysis to analyze call stacks that were created by a profiling tool. With this approach, I'm not only able to show the hotspots that are involved in the performance issue (because the hotspots show that some SQL statements take long to execute or are executed too often), but also the reasons behind these hotspots. I achieve this by extracting various additional information like the web requests, application's entry points and the triggers within the application causing the hotspots. This is very helpful to determine the most critical parts in the application and gives you a hint where you could start improving immediately. I use this analysis at work to determine the biggest performance bottleneck in a medium sized application (~700 kLOCs). Based on the results, we work out possible improvements for that specific hotspot, create a prototypical fix for it, measure the fix's impact and, if the results are convincing, roll out the fix for that problem application wide (and work on the next performance bottleneck and so on). I hope you'll see that this is a very reasonable approach albeit the simplified use case that I show in this blog post/notebook. Used software Before we start, I want to briefly introduce you to the setup that I use for this analysis: Fork of the Spring sample project PetClinic as application to torture Tomcat 8 installation as servlet container for the application (standalone, for easier integration of the profiling tool) JMeter load testing tool for executing some requests JProfiler profiler for recording performance measures jQAssistant static analysis tool for reading in call trees Neo4j graph database and Cypher graph query language for executing the graph analysis Pandas, py2neo and Bokeh on Jupyter* as documentation, execution and analysis environment The first ones are dependent on the environment and programming language you use. jQAssistant, Neo4j and Pandas are my default environment for software analytics so far. I'll show your how all those tools fit together. So let's get started! *actually, what you see here, is the result of an executed Jupyter notebook, too. You can find that notebook on GitHub. Performance Profiling As a prerequisite for this analysis, we need performance profiling data gathered by a profiler. 
A profiler will be integrated into the runtime environment (e. g. Java Virtual Machine) of your application and measures diverse properties like method execution time, number of web service calls, executed SQL statements etc. Additionally, we need something that uses or clicks through our application to get some numbers. In my case, I run the Spring PetClinic performance test using JMeter. As profiling tool, I use JProfiler to record some performance measures while the test was running. <p><tt>&lt;advertisment&gt;</tt><br /> At this point, I want to thank ej-technologies for providing me with a [free open-source license](https://www.ej-technologies.com/buy/jprofiler/openSource) for JProfiler that enables this blog post in exchange of mentioning their product: <a href="http://www.ej-technologies.com/products/jprofiler/overview.html"> ![](https://www.ej-technologies.com/images/product_banners/jprofiler_large.png)</a> JProfiler is a great commercial tool for profiling Java application and costs around 400 €. It really worth the money because it gives you deep insights how your application performs under the hood. <tt>&lt;/advertisment&gt;</tt> </p> Also outside the advertisement block, I personally like JProfiler a lot because it does what it does very very good. Back to the article. The recording of the measures starts before the execution of the performance test and stops after the test has finished successfully. The result is stored in a file as so-called "snapshot". The use of a snapshot enables you to repeat your analysis over and over again with exactly the same performance measures. What we usually need for performance analysis is a recorded runtime stack of all method calls as a call tree. A call tree shows you a tree of the called methods. Below, you can see the call tree for the called methods with their measured CPU wall clock time (aka the real time that is spent in that method) and the number of invocations for a complete test run: With such a view, you see which parts of your application call which classes and methods by drilling down the hierarchy by hand: But there is more: You can also "turn around" the call tree and list all the so-called "HotSpots". Technically, e. g. for CPU HotSpots, JProfiler sums up all the measurements for the method call leafs that take longer than 0.1% of all method calls. With this view, you see the application's hotspots immediately: These views are also available for other measures like web service calls, file accesses or DB calls, that is shown below: This is the data that we need for our SQL statement analysis. The big problem is, that we can't easily see where all those SQL statements come from because we just see the isolated SQL statements. And this is where our journey begins... Reading XML into Neo4j The input data For further processing, we export such a call tree into a XML file (via the JProfiler GUI or the command line tool jpexport). If we export the data of the SQL hotspots (incl. the complete call tree) with JProfiler, we'll get a XML file like the following: End of explanation """ import pandas as pd from py2neo import Graph graph = Graph() """ Explanation: For a full version of this file, see GitHub: https://github.com/feststelltaste/software-analytics/blob/master/notebooks/input/spring-petclinic/JDBC_Probe_Hot_Spots_jmeter_test.xml This file consists of all the information that we've seen in the JProfiler GUI, but as XML elements and attributes. 
And here comes the great part: The content itself is graph-like because the XML elements are nested! So the <tt>&lt;tree&gt;</tt> element contains the <tt>&lt;hotspot&gt;</tt> elements that contain the <tt>&lt;node&gt;</tt> elements and so on. A case for a graph database like Neo4j! But how do we get that XML file into our Neo4j database? jQAssistant to the rescue! Scanning with jQAssistant jQAssistant is a great and versatile tool for scanning various graph-like structured data into Neo4j (see my experiences with jQAssistant so far for more information). I just downloaded the version 1.1.3, added the binary to my <tt>PATH</tt> system variable and executed the following command (works for jQAssistant versions prior to 1.2.0, I haven't figured it out how to do it with newer versions yet): <pre> jqassistant scan -f xml:document::JDBC_Probe_Hot_Spots_jmeter_test.xml </pre> This will import the XML structure as a graph into the Neo4j graph database that is used by jQAssistant under the hood. Exploring the data So, if we want to have a quick look at the stored data, we can start jQAssistant's Neo4j embedded instance via <pre> jqassistant server </pre> open <tt>http://localhost:7474</tt>, click in the overview menu at the label <tt>File</tt>, click on some nodes and you will see something like this: It shows the content of the XML file from above as a graph in Neo4j: * The pink node is the entry point &ndash; the XML file. * To the right, there is the first XML element <tt>&lt;tree&gt;</tt> in that file, connected by the <tt>HAS_ROOT_ELEMENT</tt> relationship. * The <tt>&lt;tree&gt;</tt> element has some attributes, connected by <tt>HAS_ATTRIBUTE</tt>. * From the <tt>&lt;tree&gt;</tt> element, there are multiple outgoing relationships with various <tt>&lt;hotspot&gt;</tt> nodes, containing some information about the executed SQLs in the referenced attributes. * The attributes that are connected to these elements contain the values that we need for our purpose later on. So, for example the attribute with the name <tt>value</tt> contains the executed SQL statement: The attribute with the name <tt>count</tt> contains the number of executions of a SQL statement: Each element or attribute is also labeled correspondingly with <tt>Element</tt> or <tt>Attribute</tt>. Looking at real data I want to show you the data from the database in a more nicer way. So, we load our main libraries and initialize the connection to Neo4j database by creating a <tt>Graph</tt> object (for more details on this have a look at this blog post) End of explanation """ query=""" MATCH (e:Element)-[:HAS_ATTRIBUTE]->(a:Attribute) WHERE a.value = "SELECT id, name FROM types ORDER BY name" WITH e as node MATCH (node)-[:HAS_ATTRIBUTE]->(all:Attribute) RETURN all.name, all.value """ pd.DataFrame(graph.run(query).data()) """ Explanation: We execute a simple query for one XML element and it's relationships to its attributes. For example, if we want to display the data of this <tt>&lt;hotspot</tt> element xml &lt;hotspot leaf="false" value="SELECT id, name FROM types ORDER BY name" time="78386" count="107"&gt; as graph, we get that information from all the attributes of an element (don't worry about the syntax of the following two Cypher statements. They are just there to show you the underlying data as an example). 
End of explanation """ query=""" MATCH (e:Element)-[:HAS_ATTRIBUTE]->(a:Attribute) WHERE id(e) = 12 //just select an arbitrary node RETURN a.name, a.value """ pd.DataFrame(graph.run(query).data()) """ Explanation: As seen in the picture with the huge graph above, each <tt>&lt;hotspot&gt;</tt> node refers to the further <tt>&lt;node&gt;</tt>s, that call the hotspots. In our case, these nodes are the methods in our application that are responsible for the executions of the SQL statements. If we list all attributes of such a node, we've got plenty of information of the callees of the hotspots. For example these nodes contain information about the method name (<tt>method</tt>) or the number of executed SQL statements (<tt>count</tt>): End of explanation """ query=""" MATCH (n:Node)-[r:CALLS|CREATED_FROM|LEADS_TO]->() DELETE r, n RETURN COUNT(r), COUNT(n)""" graph.run(query).data() """ Explanation: Prepare performance analysis Because it's a bit cumbersome to work at the abstraction level of the XML file, let's enrich this graph with a few better concepts for mining performance problems. Clean up (optional) Before executing the first statements, we clean up any preexisting data from previous queries. This is only necessary when you execute this notebook several times on the same data store (like me). It makes the results repeatable and thus more reproducible (a property we should generally strive for!). End of explanation """ query = """ MATCH (n:Element {name: "node"}), (n)-[:HAS_ATTRIBUTE]->(classAttribut:Attribute {name : "class"}), (n)-[:HAS_ATTRIBUTE]->(methodAttribut:Attribute {name : "methodName"}), (n)-[:HAS_ATTRIBUTE]->(countAttribut:Attribute {name : "count"}), (n)-[:HAS_ATTRIBUTE]->(timeAttribut:Attribute {name : "time"}) CREATE (x:Node:Call { fqn: classAttribut.value, class: SPLIT(classAttribut.value,".")[-1], method: methodAttribut.value, count: toFloat(countAttribut.value), time: toFloat(timeAttribut.value) })-[r:CREATED_FROM]->(n) RETURN COUNT(x), COUNT(r) """ graph.run(query).data() """ Explanation: Consolidate the data We create some new nodes that contain all the information from the XML part of the graph that we need. We simply copy the values of some attributes to new <tt>Call</tt> nodes. In our Cypher query, we first retrieve all <tt>&lt;node&gt;</tt> elements (identified by their "name" property) and some attributes that we need for our analysis. For each relevant information item, we create a variable to retrieve the information later on: cypher MATCH (n:Element {name: "node"}), (n)-[:HAS_ATTRIBUTE]-&gt;(classAttribut:Attribute {name : "class"}), (n)-[:HAS_ATTRIBUTE]-&gt;(methodAttribut:Attribute {name : "methodName"}), (n)-[:HAS_ATTRIBUTE]-&gt;(countAttribut:Attribute {name : "count"}), (n)-[:HAS_ATTRIBUTE]-&gt;(timeAttribut:Attribute {name : "time"}) For each <tt>&lt;node&gt;</tt> element we've found, we tag the nodes with the label <tt>Node</tt> to have a general marker for the JProfiler measurements (which is "node" by coincidence) and mark all nodes that contain information about the calling classes and methods with the label <tt>Call</tt>: cypher CREATE (x:Node:Call { We also copy the relevant information from the <tt>&lt;node&gt;</tt> element's attributes into the new nodes. We put the value of the class attribute (that consists of the Java package name and the class name) into the <tt>fqn</tt> (full qualified name) property and add just the name of the class in the <tt>class</tt> property (just for displaying reasons in the end). 
The rest is copied as well, including some type conversions for <tt>count</tt> and <tt>time</tt>: cypher fqn: classAttribut.value, class: SPLIT(classAttribut.value,".")[-1], method: methodAttribut.value, count: toFloat(countAttribut.value), time: toFloat(timeAttribut.value) }) Additionally, we track the origin of the information by a <tt>CREATED_FROM</tt> relationship to connect the new nodes later on: cypher -[r:CREATED_FROM]-&gt;(n) So, the complete query looks like the following and will be executed against the Neo4j data store: End of explanation """ query = """ MATCH (n:Element { name: "hotspot"}), (n)-[:HAS_ATTRIBUTE]->(valueAttribut:Attribute {name : "value"}), (n)-[:HAS_ATTRIBUTE]->(countAttribut:Attribute {name : "count"}), (n)-[:HAS_ATTRIBUTE]->(timeAttribut:Attribute {name : "time"}) WHERE n.name = "hotspot" CREATE (x:Node:HotSpot { value: valueAttribut.value, count: toFloat(countAttribut.value), time: toFloat(timeAttribut.value) })-[r:CREATED_FROM]->(n) RETURN COUNT(x), COUNT(r) """ graph.run(query).data() """ Explanation: We do the same for the <tt>&lt;hotspot&gt;</tt> elements. Here, the attributes are a little bit different, because we are gathering data from the hotspots that contain information about the executed SQL statements: End of explanation """ query=""" MATCH (outerNode:Node)-[:CREATED_FROM]-> (outerElement:Element)-[:HAS_ELEMENT]-> (innerElement:Element)<-[:CREATED_FROM]-(innerNode:Node) CREATE (outerNode)<-[r:CALLS]-(innerNode) RETURN COUNT(r) """ graph.run(query).data() """ Explanation: Now, we have many new nodes in our database that are aren't directly connected. E. g. a <tt>Call</tt> node looks like this: So, let's connect them. How? We've saved that information with our <tt>CREATED_FROM</tt> relationship: This information can be used to connect the <tt>Call</tt> nodes as well as the <tt>HotSpot</tt> nodes. End of explanation """ query=""" MATCH (x) WHERE x.fqn = "_jprofiler_annotation_class" AND x.method STARTS WITH "HTTP" SET x:Request RETURN COUNT(x) """ graph.run(query).data() """ Explanation: And there we have it! Just click in the Neo4j browser on the relationship CALLS and the you'll see our call tree from JProfiler as call graph in Neo4j, ready for root cause analysis! Root Cause Analysis Conceptual model All the work before was just there to get a nice graph model that feels more natural. Now comes the analysis part: As mentioned in the introduction, we don't only want the hotspots that signal that something awkward happened, but also * the trigger in our application of the hotspot combined with * the information about the entry point (e. g. 
where in our application does the problem happen) and * (optionally) the request that causes the problem (to be able to localize the problem) Speaking in graph terms, we need some specific nodes of our call tree graph with the following information: * <tt>HotSpot</tt>: The executed SQL statement aka the <tt>HotSpot</tt> node * <tt>Trigger</tt>: The executor of the SQL statement in our application aka the <tt>Call</tt> node with the last class/method call that starts with our application's package name * <tt>Entry</tt>: The first call of our own application code aka the <tt>Call</tt> node that starts also with our application's package name * <tt>Request</tt>: The <tt>Call</tt> node with the information about the HTTP request (optional, but because JProfiler delivers this information as well, we use it here in this example) These points in the call tree should give us enough information that we can determine where to look for improvements in our application. Challenges There is one thing that is a little bit tricky to implement: It's to model what we see as "last" and "first" in Neo4j / Cypher. Because we are using the package name of a class to identify our own application, there are many <tt>Call</tt> nodes in one call graph part that have that package name. Neo4j would (rightly) return too many results (for us) because it would deliver one result for each match. And a match is also given when a <tt>Call</tt> node within our application matches the package name. So, how do we mark the first and last nodes of our application code? Well, take one step at a time. Before we are doing anything awkward, we are trying to store all the information that we know into the graph before executing our analysis. I always favor this approach instead of trying to find a solution with complicated cypher queries, where you'll probably mix up things easily. Preparing the query First, we can identify the request, that triggers the SQL statement, because we configured JProfiler to include that information in our call tree. We simply label them with the label <tt>Request</tt>. Identifying all request End of explanation """ query=""" MATCH request_to_entry=shortestPath((request:Request)-[:CALLS*]->(entry:Call)) WHERE entry.fqn STARTS WITH "org.springframework.samples.petclinic" AND SINGLE(n IN NODES(request_to_entry) WHERE exists(n.fqn) AND n.fqn STARTS WITH "org.springframework.samples.petclinic") SET entry:Entry RETURN COUNT(entry) """ graph.run(query).data() """ Explanation: Identifying all entry points into the application Next, we identify the entry points aka first nodes of our application. We can achieve this by first searching all the shortest paths between the already existing <tt>Request</tt> nodes and all the nodes that have our package name. From all these subgraphs, we take only the single subgraph that has only a single node with the package name of our application. This is the first node that occurs in the call graph when starting from the request (I somehow can feel that there is a more elegant way to do this. If so, please let me know!). We mark that nodes as <tt>Entry</tt> nodes. 
End of explanation """ query=""" MATCH trigger_to_hotspot=shortestPath((trigger:Call)-[:CALLS*]->(hotspot:HotSpot)) WHERE trigger.fqn STARTS WITH "org.springframework.samples.petclinic" AND SINGLE(n IN NODES(trigger_to_hotspot) WHERE exists(n.fqn) AND n.fqn STARTS WITH "org.springframework.samples.petclinic") SET trigger:Trigger RETURN count(trigger) """ graph.run(query).data() """ Explanation: With the same approach, we can mark all the calls that trigger the execution of the SQL statements with the label <tt>Trigger</tt>. End of explanation """ query=""" MATCH (request:Request)-[:CALLS*]-> (entry:Entry)-[:CALLS*]-> (trigger:Trigger)-[:CALLS*]-> (hotspot:HotSpot) CREATE UNIQUE (request)-[leads1:LEADS_TO]-> (entry)-[leads2:LEADS_TO]-> (trigger)-[leads3:LEADS_TO]->(hotspot) RETURN count(leads1), count(leads2), count(leads3) """ graph.run(query).data() """ Explanation: After marking all the relevant nodes, we connect them via a new relationshop <tt>LEADS_TO</tt> to enable more elegant queries later on. End of explanation """ query=""" MATCH (request:Request)-[:LEADS_TO]-> (entry:Entry)-[:LEADS_TO]-> (trigger:Trigger)-[:LEADS_TO]-> (hotspot:HotSpot) RETURN request.method as request, request.count as sql_count, entry.class as entry_class, entry.method as entry_method, trigger.class as trigger_class, trigger.method as trigger_method, hotspot.value as sql, hotspot.count as sql_count_sum """ hotspots = pd.DataFrame(graph.run(query).data()) hotspots.head() """ Explanation: Getting results All the previous steps where needed to enable this simple query, that gives us the spots in the application, that lead eventually to the hotspots! End of explanation """ sqls_per_method = hotspots.groupby([ 'request', 'entry_class', 'entry_method', 'trigger_class', 'trigger_method']).agg( {'sql_count' : 'sum', 'request' : 'count'}) sqls_per_method """ Explanation: The returned data consists of * <tt>request</tt>: the name of the HTTP request * <tt>sql_count</tt>: the number of SQL statements caused by this HTTP request * <tt>entry_class</tt>: the class name of the entry point into our application * <tt>entry_method</tt>: the method name of the entry point into our application * <tt>trigger_class</tt>: the class name of the exit point out of our application * <tt>trigger_method</tt>: the method name of the exit point out of our application * <tt>sql</tt>: the executed SQL statement * <tt>sql_count_sum</tt>: the amount of all executed SQL statements of the same kind A look at a subgraph If we take a look at the Neo4j browser and execute the statement from above but returning the nodes, aka cypher MATCH (request:Request)-[:LEADS_TO]-&gt; (entry:Entry)-[:LEADS_TO]-&gt; (trigger:Trigger)-[:LEADS_TO]-&gt; (hotspot:HotSpot) RETURN request, entry, trigger, hotspot we get a nice overview of all our performance hotspots, e. g. With this graphical view, it's easy to see the connection between the requests, our application code and the hotspots. Albeit this view is nice for exploration, it's not actionable. So let's use Pandas to shape the data to knowledge! In-depth analysis First, we have a look which parts in the application trigger all the SQLs. We simply group some columns to get a more dense overview: End of explanation """ hotspots['table'] = hotspots['sql'].\ str.upper().str.extract( r".*(FROM|INTO|UPDATE) ([\w\.]*)", expand=True)[1] hotspots['table'].value_counts() """ Explanation: We see immediately that we have an issue with the loading of the pet's owners via the <tt>OwnerController</tt>. 
Let's look at the problem from another perspective: What kind of data is loaded by whom from the tables. We simply chop the SQL and extract just the name of the database table (in fact, the regex is so simple that some of the tables weren't identified. But because these are special cases, we can ignore them): End of explanation """ grouped_by_entry_class_and_table = hotspots.groupby(['entry_class', 'table'])[['sql_count']].sum() grouped_by_entry_class_and_table """ Explanation: And group the hotspots accordingly. End of explanation """ from bokeh.charts import Donut, show, output_notebook plot_data = grouped_by_entry_class_and_table.reset_index() d = Donut(plot_data, label=['entry_class', 'table'], values='sql_count', text_font_size='8pt', hover_text='sql_count' ) output_notebook() show(d) """ Explanation: Now we made the problem more obvious: The class <tt>OwnerController</tt> works heavily with the <tt>PETS</tt> table and the pet's <tt>TYPES</tt> table. Surely an error in our program. Let's visualize the problem with a nice donut chart in Bokeh: End of explanation """ hotspots.groupby(['trigger_class', 'trigger_method'])[['sql_count']].sum().sort_values( by='sql_count', ascending=False).head(5) """ Explanation: You could also have a look at the most problematic spot in the application by grouping the data by the class and the method that triggers the execution of the most SQL statements. End of explanation """
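# --- Optional add-on (not part of the original analysis) ---
# When reporting the findings, it can help to express each trigger's share of all
# executed SQL statements as a percentage instead of a raw count. This is a small
# pandas sketch; it assumes the `hotspots` DataFrame from the cells above.
sql_per_trigger = hotspots.groupby(['trigger_class', 'trigger_method'])[['sql_count']].sum()
sql_per_trigger['share_in_percent'] = (sql_per_trigger['sql_count'] / sql_per_trigger['sql_count'].sum() * 100).round(1)
sql_per_trigger.sort_values(by='sql_count', ascending=False).head(5)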
shirishr/My-Progress-at-Machine-Learning
Udacity_Machine_Learning/finding_donors/finding_donors.ipynb
mit
# Import libraries necessary for this project import numpy as np import pandas as pd from time import time from IPython.display import display # Allows the use of display() for DataFrames # Import supplementary visualization code visuals.py import visuals as vs # Pretty display for notebooks %matplotlib inline # Load the Census dataset data = pd.read_csv("census.csv") display(len(data)) # Success - Display the first record display(data.head(n=5)) """ Explanation: Machine Learning Engineer Nanodegree Supervised Learning Project: Finding Donors for CharityML Welcome to the second project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully! In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode. Getting Started In this project, you will employ several supervised algorithms of your choice to accurately model individuals' income using data collected from the 1994 U.S. Census. You will then choose the best candidate algorithm from preliminary results and further optimize this algorithm to best model the data. Your goal with this implementation is to construct a model that accurately predicts whether an individual makes more than $50,000. This sort of task can arise in a non-profit setting, where organizations survive on donations. Understanding an individual's income can help a non-profit better understand how large of a donation to request, or whether or not they should reach out to begin with. While it can be difficult to determine an individual's general income bracket directly from public sources, we can (as we will see) infer this value from other publically available features. The dataset for this project originates from the UCI Machine Learning Repository. The datset was donated by Ron Kohavi and Barry Becker, after being published in the article "Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid". You can find the article by Ron Kohavi online. The data we investigate here consists of small changes to the original dataset, such as removing the 'fnlwgt' feature and records with missing or ill-formatted entries. Exploring the Data Run the code cell below to load necessary Python libraries and load the census data. Note that the last column from this dataset, 'income', will be our target label (whether an individual makes more than, or at most, $50,000 annually). All other columns are features about each individual in the census database. 
End of explanation """ # TODO: Total number of records n_records = len(data) # TODO: Number of records where individual's income is more than $50,000 n_greater_50k = len(np.where(data['income']=='>50K')[0]) # TODO: Number of records where individual's income is at most $50,000 n_at_most_50k = len(np.where(data['income']=='<=50K')[0]) # TODO: Percentage of individuals whose income is more than $50,000 greater_percent = float(n_greater_50k)/float(n_records)*100 # Print the results print "Total number of records: {}".format(n_records) print "Individuals making more than $50,000: {}".format(n_greater_50k) print "Individuals making at most $50,000: {}".format(n_at_most_50k) print "Percentage of individuals making more than $50,000: {:.2f}%".format(greater_percent) """ Explanation: Implementation: Data Exploration A cursory investigation of the dataset will determine how many individuals fit into either group, and will tell us about the percentage of these individuals making more than \$50,000. In the code cell below, you will need to compute the following: - The total number of records, 'n_records' - The number of individuals making more than \$50,000 annually, 'n_greater_50k'. - The number of individuals making at most \$50,000 annually, 'n_at_most_50k'. - The percentage of individuals making more than \$50,000 annually, 'greater_percent'. Hint: You may need to look at the table above to understand how the 'income' entries are formatted. End of explanation """ # Split the data into features and target label income_raw = data['income'] features_raw = data.drop('income', axis = 1) # Visualize skewed continuous features of original data vs.distribution(data) """ Explanation: Preparing the Data Before data can be used as input for machine learning algorithms, it often must be cleaned, formatted, and restructured — this is typically known as preprocessing. Fortunately, for this dataset, there are no invalid or missing entries we must deal with, however, there are some qualities about certain features that must be adjusted. This preprocessing can help tremendously with the outcome and predictive power of nearly all learning algorithms. Transforming Skewed Continuous Features A dataset may sometimes contain at least one feature whose values tend to lie near a single number, but will also have a non-trivial number of vastly larger or smaller values than that single number. Algorithms can be sensitive to such distributions of values and can underperform if the range is not properly normalized. With the census dataset two features fit this description: 'capital-gain' and 'capital-loss'. Run the code cell below to plot a histogram of these two features. Note the range of the values present and how they are distributed. End of explanation """ # Log-transform the skewed features skewed = ['capital-gain', 'capital-loss'] features_raw[skewed] = data[skewed].apply(lambda x: np.log(x + 1)) # Visualize the new log distributions vs.distribution(features_raw, transformed = True) """ Explanation: For highly-skewed feature distributions such as 'capital-gain' and 'capital-loss', it is common practice to apply a <a href="https://en.wikipedia.org/wiki/Data_transformation_(statistics)">logarithmic transformation</a> on the data so that the very large and very small values do not negatively affect the performance of a learning algorithm. Using a logarithmic transformation significantly reduces the range of values caused by outliers. 
Care must be taken when applying this transformation, however: the logarithm of 0 is undefined, so we must translate the values by a small amount above 0 to apply the logarithm successfully.
Run the code cell below to perform a transformation on the data and visualize the results. Again, note the range of values and how they are distributed.
End of explanation
"""

# Log-transform the skewed features
skewed = ['capital-gain', 'capital-loss']
features_raw[skewed] = data[skewed].apply(lambda x: np.log(x + 1))

# Visualize the new log distributions
vs.distribution(features_raw, transformed = True)
"""
Explanation: Normalizing Numerical Features
In addition to performing transformations on features that are highly skewed, it is often good practice to perform some type of scaling on numerical features. Applying a scaling to the data does not change the shape of each feature's distribution (such as 'capital-gain' or 'capital-loss' above); however, normalization ensures that each feature is treated equally when applying supervised learners. Note that once scaling is applied, observing the data in its raw form will no longer have the same original meaning, as exemplified below.
Run the code cell below to normalize each numerical feature. We will use sklearn.preprocessing.MinMaxScaler for this.
End of explanation
"""

# Import sklearn.preprocessing.MinMaxScaler
from sklearn.preprocessing import MinMaxScaler

# Initialize a scaler, then apply it to the (already log-transformed) features
scaler = MinMaxScaler()
numerical = ['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']
features_raw[numerical] = scaler.fit_transform(features_raw[numerical])
# Note to myself: MinMaxScaler computes (X - Xmin) / (Xmax - Xmin)

# Show an example of a record with scaling applied
display(features_raw.head(n = 1))
"""
Explanation: Implementation: Data Preprocessing
From the table in Exploring the Data above, we can see there are several features for each record that are non-numeric. Typically, learning algorithms expect input to be numeric, which requires that non-numeric features (called categorical variables) be converted. One popular way to convert categorical variables is by using the one-hot encoding scheme. One-hot encoding creates a "dummy" variable for each possible category of each non-numeric feature. For example, assume someFeature has three possible entries: A, B, or C. We then encode this feature into someFeature_A, someFeature_B and someFeature_C.
|   | someFeature |                            | someFeature_A | someFeature_B | someFeature_C |
| :-: | :-: |                            | :-: | :-: | :-: |
| 0 | B |                            | 0 | 1 | 0 |
| 1 | C | ----> one-hot encode ----> | 0 | 0 | 1 |
| 2 | A |                            | 1 | 0 | 0 |
Additionally, as with the non-numeric features, we need to convert the non-numeric target label, 'income', to numerical values for the learning algorithm to work. Since there are only two possible categories for this label ("<=50K" and ">50K"), we can avoid using one-hot encoding and simply encode these two categories as 0 and 1, respectively.
In the code cell below, you will need to implement the following:
- Use pandas.get_dummies() to perform one-hot encoding on the 'features_raw' data.
- Convert the target label 'income_raw' to numerical entries.
- Set records with "<=50K" to 0 and records with ">50K" to 1.
End of explanation """ # Import train_test_split from sklearn.cross_validation import train_test_split # Split the 'features' and 'income' data into training and testing sets X_train, X_test, y_train, y_test = train_test_split(features, income, test_size = 0.2, random_state = 0) # Show the results of the split print "Training set has {} samples.".format(X_train.shape[0]) print "Testing set has {} samples.".format(X_test.shape[0]) """ Explanation: Shuffle and Split Data Now all categorical variables have been converted into numerical features, and all numerical features have been normalized. As always, we will now split the data (both features and their labels) into training and test sets. 80% of the data will be used for training and 20% for testing. Run the code cell below to perform this split. End of explanation """ # TODO: Calculate accuracy accuracy = float(n_greater_50k)/float(n_records) # TODO: Calculate F-score using the formula above for beta = 0.5 beta = 0.5 recall = 1.0 fscore = (1+beta**2)*(accuracy*recall)/(beta**2*accuracy+recall) # Print the results print "Naive Predictor: [Accuracy score: {:.4f}, F-score: {:.4f}]".format(accuracy, fscore) """ Explanation: Evaluating Model Performance In this section, we will investigate four different algorithms, and determine which is best at modeling the data. Three of these algorithms will be supervised learners of your choice, and the fourth algorithm is known as a naive predictor. Metrics and the Naive Predictor CharityML, equipped with their research, knows individuals that make more than \$50,000 are most likely to donate to their charity. Because of this, CharityML is particularly interested in predicting who makes more than \$50,000 accurately. It would seem that using accuracy as a metric for evaluating a particular model's performace would be appropriate. Additionally, identifying someone that does not make more than \$50,000 as someone who does would be detrimental to CharityML, since they are looking to find individuals willing to donate. Therefore, a model's ability to precisely predict those that make more than \$50,000 is more important than the model's ability to recall those individuals. We can use F-beta score as a metric that considers both precision and recall: $$ F_{\beta} = (1 + \beta^2) \cdot \frac{precision \cdot recall}{\left( \beta^2 \cdot precision \right) + recall} $$ In particular, when $\beta = 0.5$, more emphasis is placed on precision. This is called the F$_{0.5}$ score (or F-score for simplicity). Looking at the distribution of classes (those who make at most \$50,000, and those who make more), it's clear most individuals do not make more than \$50,000. This can greatly affect accuracy, since we could simply say "this person does not make more than \$50,000" and generally be right, without ever looking at the data! Making such a statement would be called naive, since we have not considered any information to substantiate the claim. It is always important to consider the naive prediction for your data, to help establish a benchmark for whether a model is performing well. That been said, using that prediction would be pointless: If we predicted all people made less than \$50,000, CharityML would identify no one as donors. Question 1 - Naive Predictor Performace If we chose a model that always predicted an individual made more than \$50,000, what would that model's accuracy and F-score be on this dataset? 
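For reference, the arithmetic can be checked by hand from the counts computed earlier: 11,208 of the 45,222 records are labeled '>50K', so a model that always predicts '>50K' has accuracy $\approx 11208/45222 \approx 0.2478$, precision $\approx 0.2478$ and recall $= 1$, and therefore $$ F_{0.5} = (1 + 0.5^2) \cdot \frac{0.2478 \cdot 1}{\left( 0.5^2 \cdot 0.2478 \right) + 1} \approx 0.2917 $$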
Note: You must use the code cell below and assign your results to 'accuracy' and 'fscore' to be used later. End of explanation """ # TODO: Import two metrics from sklearn - fbeta_score and accuracy_score from sklearn.metrics import fbeta_score, accuracy_score from time import time def train_predict(learner, sample_size, X_train, y_train, X_test, y_test): ''' inputs: - learner: the learning algorithm to be trained and predicted on - sample_size: the size of samples (number) to be drawn from training set - X_train: features training set - y_train: income training set - X_test: features testing set - y_test: income testing set ''' results = {} # TODO: Fit the learner to the training data using slicing with 'sample_size' X_train = X_train[:sample_size] y_train = y_train[:sample_size] #Several comments below talk about first 300 training samples. I assume it is controlled by the sample_size start = time() # Get start time learner = learner.fit(X_train, y_train) end = time() # Get end time # TODO: Calculate the training time results['train_time'] = end - start # TODO: Get the predictions on the test set, # then get predictions on the first 300 training samples start = time() # Get start time predictions_test = learner.predict(X_test) predictions_train = learner.predict(X_train) end = time() # Get end time # TODO: Calculate the total prediction time results['pred_time'] = end - start # TODO: Compute accuracy on the first 300 training samples results['acc_train'] = accuracy_score(y_train, predictions_train) # TODO: Compute accuracy on test set results['acc_test'] = accuracy_score(y_test, predictions_test) # TODO: Compute F-score on the the first 300 training samples results['f_train'] = fbeta_score(y_train, predictions_train, beta=beta) # TODO: Compute F-score on the test set results['f_test'] = fbeta_score(y_test, predictions_test, beta=beta) # Success print "{} trained on {} samples.".format(learner.__class__.__name__, sample_size) # Return the results return results """ Explanation: Supervised Learning Models The following supervised learning models are currently available in scikit-learn that you may choose from: - Gaussian Naive Bayes (GaussianNB) - Decision Trees - Ensemble Methods (Bagging, AdaBoost, Random Forest, Gradient Boosting) - K-Nearest Neighbors (KNeighbors) - Stochastic Gradient Descent Classifier (SGDC) - Support Vector Machines (SVM) - Logistic Regression Question 2 - Model Application List three of the supervised learning models above that are appropriate for this problem that you will test on the census data. For each model chosen - Describe one real-world application in industry where the model can be applied. (You may need to do research for this — give references!) - What are the strengths of the model; when does it perform well? - What are the weaknesses of the model; when does it perform poorly? - What makes this model a good candidate for the problem, given what you know about the data? 
Answer: model | real-world application | strength | weakness | why it's a good candidate --- | --- | --- | --- | --- Logistic Regression | Cancer prediction based on patient characteristics | predictions on small datasets small number of features with this can be efficient and fast | When data contains features with complexity, unless the features are carefully selected and finetuned this may suffer from under / over - fitting | This model may be suitable since the expected output is categorical and binary on top Gaussian Naive Bayes | say automatic sorting of incoming e-mail into different categories | Simple model yet works in many complex real-world situations and requires a small number of training data| Ideal for features with no relationship with each other | It's simplicity is its power k-Nearest Neighbor | Applications that call for pattern recognition to determine outcome (Including recognizing faces of people in pictures) | Simplest and yet effective (facial recognition !!) | It is instance-based and lazy learning. It is sensitive to the local structure of the data | With 102 features and 45,222 observations, k-NN will be manageable computationally Implementation - Creating a Training and Predicting Pipeline To properly evaluate the performance of each model you've chosen, it's important that you create a training and predicting pipeline that allows you to quickly and effectively train models using various sizes of training data and perform predictions on the testing data. Your implementation here will be used in the following section. In the code block below, you will need to implement the following: - Import fbeta_score and accuracy_score from sklearn.metrics. - Fit the learner to the sampled training data and record the training time. - Perform predictions on the test data X_test, and also on the first 300 training points X_train[:300]. - Record the total prediction time. - Calculate the accuracy score for both the training subset and testing set. - Calculate the F-score for both the training subset and testing set. - Make sure that you set the beta parameter! 
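As a usage sketch once train_predict is implemented (clf stands for any one of the classifiers initialized in the next section, and 300 is an arbitrary sample size): results = train_predict(clf, 300, X_train, y_train, X_test, y_test), after which results['acc_test'] and results['f_test'] hold the test-set accuracy and F-score.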
End of explanation """ # TODO: Import the three supervised learning models from sklearn from sklearn.linear_model import LogisticRegression from sklearn.neighbors import KNeighborsClassifier from sklearn.naive_bayes import GaussianNB # TODO: Initialize the three models seed=7 clf_A = LogisticRegression(random_state=seed) clf_B = GaussianNB() clf_C = KNeighborsClassifier() # TODO: Calculate the number of samples for 1%, 10%, and 100% of the training data n_train = len(y_train) samples_1 = n_train*1/100 samples_10 = n_train*1/10 samples_100 = n_train*1/1 # Collect results on the learners results = {} for clf in [clf_A, clf_B, clf_C]: clf_name = clf.__class__.__name__ results[clf_name] = {} for i, samples in enumerate([samples_1, samples_10, samples_100]): results[clf_name][i] = \ train_predict(clf, samples, X_train, y_train, X_test, y_test) # Run metrics visualization for the three supervised learning models chosen vs.evaluate(results, accuracy, fscore) from sklearn.metrics import confusion_matrix import seaborn as sns import matplotlib.pyplot as plt for clf in [clf_A, clf_B, clf_C]: model = clf cm = confusion_matrix(y_test.values, model.predict(X_test)) # view with a heatmap fig = plt.figure() sns.heatmap(cm, annot=True, cmap='Blues', xticklabels=['no', 'yes'], yticklabels=['no', 'yes']) plt.ylabel('True label') plt.xlabel('Predicted label') plt.title('Confusion matrix for:\n{}'.format(model.__class__.__name__)); """ Explanation: Implementation: Initial Model Evaluation In the code cell, you will need to implement the following: - Import the three supervised learning models you've discussed in the previous section. - Initialize the three models and store them in 'clf_A', 'clf_B', and 'clf_C'. - Use a 'random_state' for each model you use, if provided. - Note: Use the default settings for each model — you will tune one specific model in a later section. - Calculate the number of records equal to 1%, 10%, and 100% of the training data. - Store those values in 'samples_1', 'samples_10', and 'samples_100' respectively. Note: Depending on which algorithms you chose, the following implementation may take some time to run! End of explanation """ # TODO: Import 'GridSearchCV', 'make_scorer', and any other necessary libraries from sklearn.grid_search import GridSearchCV from sklearn.metrics import make_scorer # TODO: Initialize the classifier model = LogisticRegression(random_state=7) # TODO: Create the parameters list you wish to tune param_grid = {'solver': ['sag', 'lbfgs', 'newton-cg'], 'C': [0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]} """ Algorithm to use in the optimization problem. For small datasets, ‘liblinear’ is a good choice, whereas ‘sag’ is faster for large ones. For multiclass problems, only ‘newton-cg’, ‘sag’ and ‘lbfgs’ handle multinomial loss; ‘liblinear’ is limited to one-versus-rest schemes. ‘newton-cg’, ‘lbfgs’ and ‘sag’ only handle L2 penalty.(L1 regularization and L2 regularization are two closely related techniques that can be used by machine learning (ML) training algorithms to reduce model overfitting) ‘liblinear’ might be slower in LogisticRegressionCV because it does not handle warm-starting. Note that ‘sag’ fast convergence is only guaranteed on features with approximately the same scale. You can preprocess the data with a scaler from sklearn.preprocessing. 
""" # TODO: Make an fbeta_score scoring object scorer = make_scorer(fbeta_score, beta=beta) # TODO: Perform grid search on the classifier using 'scorer' as the scoring method grid = GridSearchCV(estimator=model, param_grid=param_grid, scoring=scorer) # TODO: Fit the grid search object to the training data and find the optimal parameters grid_fit = grid.fit(X_train, y_train) # Get the estimator best_clf = grid_fit.best_estimator_ # Make predictions using the unoptimized and model predictions = (clf.fit(X_train, y_train)).predict(X_test) best_predictions = best_clf.predict(X_test) # Report the before-and-afterscores print "Unoptimized model\n------" print "Accuracy score on testing data: {:.4f}".format(accuracy_score(y_test, predictions)) print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, predictions, beta = 0.5)) print "\nOptimized Model\n------" print "Final accuracy score on the testing data: {:.4f}".format(accuracy_score(y_test, best_predictions)) print "Final F-score on the testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5)) """ Explanation: Improving Results In this final section, you will choose from the three supervised learning models the best model to use on the student data. You will then perform a grid search optimization for the model over the entire training set (X_train and y_train) by tuning at least one parameter to improve upon the untuned model's F-score. Question 3 - Choosing the Best Model Based on the evaluation you performed earlier, in one to two paragraphs, explain to CharityML which of the three models you believe to be most appropriate for the task of identifying individuals that make more than \$50,000. Hint: Your answer should include discussion of the metrics, prediction/training time, and the algorithm's suitability for the data. Answer: I believe "Logistic Regression" is the most appropriate model. It has low training/testing speed, and accuracy/F-scores of test data with different training sets sizes is consistent. Training scores and testing scores are similar suggesting no overfitting. Its prediction precision for high-income is 1300/1790=72.63% and its recall is 1300/2180 = 59.63% (False positive and false negative are low) Although "k-Nearest Neighbor" has a slightly higer accuracy and f-score, it takes lot longer in terms of processing time. If time is not a critical factor (as it would be for a fly-by-wire airplane or a self-driving car) ,I would perfer this model. Its prediction precision for high-income is 1300/2240=58.04% and its recall is 1300/1690 = 76.92%. (False positive and false negative are low) "Naive Bayers with Gaussian" model is the fastest with least variance of scores between training and testing, however, does poorly when dataset is smaller. It prediction precision for high-income is 2100/3600=58.33% and its recall is 2100/5500 = 38.18%. (False positive is not low) Question 4 - Describing the Model in Layman's Terms In one to two paragraphs, explain to CharityML, in layman's terms, how the final model chosen is supposed to work. Be sure that you are describing the major qualities of the model, such as how the model is trained and how the model makes a prediction. Avoid using advanced mathematical or technical jargon, such as describing equations or discussing the algorithm implementation. Answer: The final model chosen is called Logistic Regression. Logistics regression works with odds. What are the odds that someone's income is more than $50,000 (high-income)? 
It depends on what we know about the person. It also depends on what we have learnt about persons who are high-income and low-income. We have 45,222 persons, of which 11,208 are known to be high-income. Our model needs to learn. It needs to figure out the odds of a person being high-income given the fact that the person is a college graduate. It needs to figure out the odds of a person being high-income given that the person is young, or given that the person is married, or given that the person has a job or owns a business. We will let our model read 80% of our 45,222 observations, randomly selected so that they cover married as well as not married or divorced, full-time workers (40 hours per week) as well as part-time workers, young as well as old. See the picture of a cube below: it is an imaginary space for a model that considers age (one edge of the cube), marital status (a second edge of the cube) and, say, education (the third edge of the cube). This model will read the data and place it as blue balls for low-income and red balls for high-income. <img src="files/separable.png"> See the plane that separates the red and the blue balls. This smart model has figured out where that plane should be, what its shape should be and what its angle should be to best separate the red and the blue balls. It did that in what we call the training phase, after it read and analyzed the data. We need to know how well the model will perform when it has to figure out the income level of a new person. This is the purpose of the 20% of the data that we held back. Now, in the second phase, which we call validation, we ask it to make its guess without telling it the actual income level of these 20%. Are these persons high-income or low-income? This is how we estimate its accuracy and also its reliability / dependability. We don't want our model to falsely identify someone as high-income when they are not, or to falsely identify someone as low-income when they are not. We also don't want someone exactly matching the attributes of a known high-income person to be identified as low-income, and vice versa. See below a 2 x 2 matrix where one side is true high-income (yes/no) and the other side is predicted high-income (yes/no). <img src="files/Confusion_LR.png"> The way to read this figure is that our model predicts someone as high-income 1300 times out of 1790. It also predicts someone as low-income 6300 times out of 7180. If someone is truly high-income, it correctly recognizes them 1300 times out of 2180. If someone is truly low-income, it correctly recognizes them 6300 times out of 6790. Eventually we will decide how many edges (such as age, education and marital status) our decision space should have. Remember our cube had only 3 edges, but our mathematical model can have more. However, it is best not to have too many: not all of them are important, and we can afford to drop some without significant loss in accuracy, which would also make our prediction process faster. Once finalized, these specific attributes, such as the age, education and marital status of a person, will be provided to the model whenever a new person is to be examined, and our model will then predict whether the person is high-income (or not). Implementation: Model Tuning Fine tune the chosen model. Use grid search (GridSearchCV) with at least one important parameter tuned with at least 3 different values. You will need to use the entire training set for this.
In the code cell below, you will need to implement the following: - Import sklearn.grid_search.GridSearchCV and sklearn.metrics.make_scorer. - Initialize the classifier you've chosen and store it in clf. - Set a random_state if one is available to the same state you set before. - Create a dictionary of parameters you wish to tune for the chosen model. - Example: parameters = {'parameter' : [list of values]}. - Note: Avoid tuning the max_features parameter of your learner if that parameter is available! - Use make_scorer to create an fbeta_score scoring object (with $\beta = 0.5$). - Perform grid search on the classifier clf using the 'scorer', and store it in grid_obj. - Fit the grid search object to the training data (X_train, y_train), and store it in grid_fit. Note: Depending on the algorithm chosen and the parameter list, the following implementation may take some time to run! End of explanation """ # TODO: Import a supervised learning model that has 'feature_importances_' from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier, GradientBoostingClassifier # TODO: Train the supervised model on the training set model = GradientBoostingClassifier().fit(X_train, y_train) # TODO: Extract the feature importances importances = model.feature_importances_ # Plot vs.feature_plot(importances, X_train, y_train) """ Explanation: Question 5 - Final Model Evaluation What is your optimized model's accuracy and F-score on the testing data? Are these scores better or worse than the unoptimized model? How do the results from your optimized model compare to the naive predictor benchmarks you found earlier in Question 1? Note: Fill in the table below with your results, and then provide discussion in the Answer box. Results: | Metric | Benchmark Predictor | Unoptimized Model | Optimized Model | | :------------: | :-----------------: | :---------------: | :-------------: | | Accuracy Score | 0.2478 | 0.8201 | 0.8494 | | F-score | 0.2917 | 0.6317 | 0.7008 | Answer: The optimized model have better accuracy and F-score than unoptimized model. The improvement is small but an improvement nonetheless ! The optimized model is much improvement over the benchmark predicator (naive bayes) in terms of accuracy and F-score. Feature Importance An important task when performing supervised learning on a dataset like the census data we study here is determining which features provide the most predictive power. By focusing on the relationship between only a few crucial features and the target label we simplify our understanding of the phenomenon, which is most always a useful thing to do. In the case of this project, that means we wish to identify a small number of features that most strongly predict whether an individual makes at most or more than \$50,000. Choose a scikit-learn classifier (e.g., adaboost, random forests) that has a feature_importance_ attribute, which is a function that ranks the importance of features according to the chosen classifier. In the next python cell fit this classifier to training set and use this attribute to determine the top 5 most important features for the census dataset. Question 6 - Feature Relevance Observation When Exploring the Data, it was shown there are thirteen available features for each individual on record in the census data. Of these thirteen records, which five features do you believe to be most important for prediction, and in what order would you rank them and why? Answer: My guess is based on what I see in my church. 
People who donate significantly (high income, >$50K) are: elderly, happily married, professional, working 9-5 jobs (40 hr week) and well-educated. Hence, in my mind, the features in order of diminishing priority are: age, marital-status, occupation, hours-per-week and education_level Implementation - Extracting Feature Importance Choose a scikit-learn supervised learning algorithm that has a feature_importances_ attribute available for it. This attribute is a function that ranks the importance of each feature when making predictions based on the chosen algorithm. In the code cell below, you will need to implement the following: - Import a supervised learning model from sklearn if it is different from the three used earlier. - Train the supervised model on the entire training set. - Extract the feature importances using '.feature_importances_'. End of explanation """ # Import functionality for cloning a model from sklearn.base import clone # Reduce the feature space X_train_reduced = X_train[X_train.columns.values[(np.argsort(importances)[::-1])[:5]]] X_test_reduced = X_test[X_test.columns.values[(np.argsort(importances)[::-1])[:5]]] # Train on the "best" model found from grid search earlier clf = (clone(best_clf)).fit(X_train_reduced, y_train) # Make new predictions reduced_predictions = clf.predict(X_test_reduced) # Report scores from the final model using both versions of data print "Final Model trained on full data\n------" print "Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, best_predictions)) print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5)) print "\nFinal Model trained on reduced data\n------" print "Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, reduced_predictions)) print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, reduced_predictions, beta = 0.5)) """ Explanation: Question 7 - Extracting Feature Importance Observe the visualization created above which displays the five most relevant features for predicting if an individual makes at most or above \$50,000. How do these five features compare to the five features you discussed in Question 6? If you were close to the same answer, how does this visualization confirm your thoughts? If you were not close, why do you think these features are more relevant? Answer: Wow!! This is an eye-opener. Education is the biggest factor that determines if someone is a donor; next is age, then being happily married, and then capital-loss and capital-gain! Occupation is immaterial!! Hours worked is immaterial!!! What matters is whether you suffered a Wall Street loss or gain!!!! My takeaway from this is that you are scarred if you experience a Wall Street loss, OR you are generous when you experience a Wall Street gain (is a lottery win a capital gain? ...maybe). I did expect occupation to play a role in this, but I have been proven wrong. Maybe the people in my church are capital-gainers (which I would not know) and they get grouped into the >$50K bracket. Feature Selection How does a model perform if we only use a subset of all the available features in the data? With fewer features required to train, the expectation is that training and prediction time is much lower, at the cost of performance metrics. From the visualization above, we see that the top five most important features contribute more than half of the importance of all features present in the data. This hints that we can attempt to reduce the feature space and simplify the information required for the model to learn.
The code cell below will use the same optimized model you found earlier, and train it on the same training set with only the top five important features. End of explanation """
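# Optional follow-up (a sketch, not part of the original project template): since CharityML
# cares most about precision for the '>50K' class, we can also read precision and recall for
# the reduced-feature model straight off its confusion matrix, reusing the confusion_matrix
# import from earlier.
cm_reduced = confusion_matrix(y_test, reduced_predictions)
tn, fp, fn, tp = cm_reduced.ravel()
print "Reduced model precision for '>50K': {:.2%}, recall: {:.2%}".format(float(tp) / (tp + fp), float(tp) / (tp + fn))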
julienchastang/unidata-python-workshop
notebooks/MetPy_Advanced/Isentropic Analysis.ipynb
mit
from siphon.catalog import TDSCatalog cat = TDSCatalog('http://thredds.ucar.edu/thredds/catalog/grib/' 'NCEP/GFS/Global_0p5deg/catalog.xml') best = cat.datasets['Best GFS Half Degree Forecast Time Series'] """ Explanation: <a name="top"></a> <div style="width:1000 px"> <div style="float:right; width:98 px; height:98px;"> <img src="https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;"> </div> <h1>Advanced MetPy: Isentropic Analysis</h1> <div style="clear:both"></div> </div> <hr style="height:2px;"> Overview: Teaching: 30 minutes Exercises: 30 minutes Objectives <a href="#download">Download GFS output from TDS</a> <a href="#interpolation">Interpolate GFS output to an isentropic level</a> <a href="#ascent">Calculate regions of isentropic ascent and descent</a> <a name="download"></a> Downloading GFS Output First we need some grids of values to work with. We can do this by dowloading information from the latest run of the GFS available on Unidata's THREDDS data server. First we access the catalog for the half-degree GFS output, and look for the dataset called the "Best GFS Half Degree Forecast Time Series". This dataset combines multiple sets of model runs to yield a time series of output with the shortest forecast offset. End of explanation """ subset_access = best.subset() query = subset_access.query() """ Explanation: Next, we set up access to request subsets of data from the model. This uses the NetCDF Subset Service (NCSS) to make requests from the GRIB collection and get results in netCDF format. End of explanation """ sorted(v for v in subset_access.variables if v.endswith('isobaric')) """ Explanation: Let's see what variables are available. Instead of just printing subset_access.variables, we can ask Python to only display variables that end with "isobaric", which is how the TDS denotes GRIB fields that are specified on isobaric levels. End of explanation """ from datetime import datetime query.time(datetime.utcnow()) query.variables('Temperature_isobaric', 'Geopotential_height_isobaric', 'u-component_of_wind_isobaric', 'v-component_of_wind_isobaric', 'Relative_humidity_isobaric') query.lonlat_box(west=-130, east=-50, south=10, north=60) query.accept('netcdf4') """ Explanation: Now we put together the "query"--the way we ask for data we want. We give ask for a wide box of data over the U.S. for the time step that's closest to now. We also request temperature, height, winds, and relative humidity. By asking for netCDF4 data, the result is compressed, so the download is smaller. End of explanation """ nc = subset_access.get_data(query) """ Explanation: Now all that's left is to actually make the request for data: End of explanation """ from xarray.backends import NetCDF4DataStore import xarray as xr ds = xr.open_dataset(NetCDF4DataStore(nc)) """ Explanation: Open the returned netCDF data using XArray: End of explanation """ import metpy.calc as mpcalc from metpy.units import units import numpy as np """ Explanation: <a name="interpolation"></a> Isentropic Interpolation Now let's take what we've downloaded, and use it to make an isentropic map. In this case, we're interpolating from one vertical coordinate, pressure, to another: potential temperature. MetPy has a function isentropic_interpolation that can do this for us. First, let's start with a few useful imports. 
End of explanation """ ds = ds.metpy.parse_cf() temperature = ds['Temperature_isobaric'][0] data_proj = temperature.metpy.cartopy_crs """ Explanation: Let's parse out the metadata for the isobaric temperature and get the projection information. We also index with 0 to get the first, and only, time: End of explanation """ lat = temperature.metpy.y lon = temperature.metpy.x # Need to adjust units on humidity because '%' causes problems ds['Relative_humidity_isobaric'].attrs['units'] = 'percent' rh = ds['Relative_humidity_isobaric'][0] height = ds['Geopotential_height_isobaric'][0] u = ds['u-component_of_wind_isobaric'][0] v = ds['v-component_of_wind_isobaric'][0] # Can have different vertical levels for wind and thermodynamic variables # Find and select the common levels press = temperature.metpy.vertical common_levels = np.intersect1d(press, u.metpy.vertical) temperature = temperature.metpy.sel(vertical=common_levels) u = u.metpy.sel(vertical=common_levels) v = v.metpy.sel(vertical=common_levels) # Get common pressure levels as a data array press = press.metpy.sel(vertical=common_levels) """ Explanation: Let's pull out the grids out into some shorter variable names. End of explanation """ isen_level = np.array([320]) * units.kelvin isen_press, isen_u, isen_v = mpcalc.isentropic_interpolation(isen_level, press, temperature, u, v) """ Explanation: Next, we perform the isentropic interpolation. At a minimum, this must be given one or more isentropic levels, the 3-D temperature field, and the pressure levels of the original field; it then returns the 3D array of pressure values (2D slices for each isentropic level). You can also pass addition fields which will be interpolated to these levels as well. Below, we interpolate the winds (and pressure) to the 320K isentropic level: End of explanation """ # Need to squeeze() out the size-1 dimension for the isentropic level isen_press = isen_press.squeeze() isen_u = isen_u.squeeze() isen_v = isen_v.squeeze() %matplotlib inline import matplotlib.pyplot as plt import cartopy.crs as ccrs import cartopy.feature as cfeature # Create a plot and basic map projection fig = plt.figure(figsize=(14, 8)) ax = fig.add_subplot(1, 1, 1, projection=ccrs.LambertConformal(central_longitude=-100)) # Contour the pressure values for the isentropic level. We keep the handle # for the contour so that we can have matplotlib label the contours levels = np.arange(300, 1000, 25) cntr = ax.contour(lon, lat, isen_press, transform=data_proj, colors='black', levels=levels) cntr.clabel(fmt='%d') # Set up slices to subset the wind barbs--the slices below are the same as `::5` # We put these here so that it's easy to change and keep all of the ones below matched # up. 
lon_slice = slice(None, None, 5) lat_slice = slice(None, None, 3) ax.barbs(lon[lon_slice], lat[lat_slice], isen_u[lat_slice, lon_slice].to('knots').magnitude, isen_v[lat_slice, lon_slice].to('knots').magnitude, transform=data_proj, zorder=2) ax.add_feature(cfeature.LAND) ax.add_feature(cfeature.OCEAN) ax.add_feature(cfeature.COASTLINE) ax.add_feature(cfeature.BORDERS, linewidth=2) ax.add_feature(cfeature.STATES, linestyle=':') ax.set_extent((-120, -70, 25, 55), crs=data_proj) """ Explanation: Let's plot the results and see what it looks like: End of explanation """ # Needed to make numpy broadcasting work between 1D pressure and other 3D arrays # Use .metpy.unit_array to get numpy array with units rather than xarray DataArray pressure_for_calc = press.metpy.unit_array[:, None, None] # # YOUR CODE: Calculate mixing ratio using something from mpcalc # # Take the return and convert manually to units of 'dimenionless' #mixing.ito('dimensionless') # # YOUR CODE: Interpolate all the data # # Squeeze the returned arrays #isen_press = isen_press.squeeze() #isen_mixing = isen_mixing.squeeze() #isen_u = isen_u.squeeze() #isen_v = isen_v.squeeze() # Create Plot -- same as before fig = plt.figure(figsize=(14, 8)) ax = fig.add_subplot(1, 1, 1, projection=ccrs.LambertConformal(central_longitude=-100)) levels = np.arange(300, 1000, 25) cntr = ax.contour(lon, lat, isen_press, transform=data_proj, colors='black', levels=levels) cntr.clabel(fmt='%d') lon_slice = slice(None, None, 8) lat_slice = slice(None, None, 8) ax.barbs(lon[lon_slice], lat[lat_slice], isen_u[lat_slice, lon_slice].to('knots').magnitude, isen_v[lat_slice, lon_slice].to('knots').magnitude, transform=data_proj, zorder=2) # # YOUR CODE: Contour/Contourf the mixing ratio values # ax.add_feature(cfeature.LAND) ax.add_feature(cfeature.OCEAN) ax.add_feature(cfeature.COASTLINE) ax.add_feature(cfeature.BORDERS, linewidth=2) ax.add_feature(cfeature.STATES, linestyle=':') ax.set_extent((-120, -70, 25, 55), crs=data_proj) """ Explanation: Exercise Let's add some moisture information to this plot. Feel free to choose a different isentropic level. Calculate the mixing ratio (using the appropriate function from mpcalc) Call isentropic_interpolation with mixing ratio--you should copy the one from above and add mixing ratio to the call so that it interpolates everything. contour (in green) or contourf your moisture information on the map alongside pressure You'll want to refer to the MetPy API documentation to see what calculation functions would help you. End of explanation """ # %load solutions/mixing.py """ Explanation: Solution End of explanation """ isen_press = mpcalc.smooth_gaussian(isen_press.squeeze(), 9) isen_u = mpcalc.smooth_gaussian(isen_u.squeeze(), 9) isen_v = mpcalc.smooth_gaussian(isen_v.squeeze(), 9) """ Explanation: <a name="ascent"></a> Calculating Isentropic Ascent Air flow across isobars on an isentropic surface represents vertical motion. We can use MetPy to calculate this ascent for us. Since calculating this involves taking derivatives, first let's smooth the input fields using a gaussian_filter. End of explanation """ # Use .values because we don't care about using DataArray dx, dy = mpcalc.lat_lon_grid_deltas(lon.values, lat.values) """ Explanation: Next, we need to take our grid point locations which are in degrees, and convert them to grid spacing in meters--this is what we need to pass to functions taking derivatives. 
End of explanation """ lift = -mpcalc.advection(isen_press, [isen_u, isen_v], [dx, dy], dim_order='yx') """ Explanation: Now we can calculate the isentropic ascent. $\omega$ is given by: $$\omega = \left(\frac{\partial P}{\partial t}\right)_\theta + \vec{V} \cdot \nabla P + \frac{\partial P}{\partial \theta}\frac{d\theta}{dt}$$ Note, the second term of the above equation is just pressure advection (negated). Therefore, we can use MetPy to calculate this as: End of explanation """ # YOUR CODE GOES HERE """ Explanation: Exercise Use contourf to plot the isentropic lift alongside the isobars and wind barbs. You probably want to convert the values of lift to microbars/s. End of explanation """ # %load solutions/lift.py """ Explanation: Solution End of explanation """
brian-rose/env-415-site
notes/TransientWarming.ipynb
mit
%matplotlib notebook import numpy as np import matplotlib.pyplot as plt import climlab import netCDF4 as nc """ Explanation: ENV / ATM 415: Climate Laboratory Exploring the rate of climate change Tuesday April 11, 2016 End of explanation """ # Need the ozone data again for our Radiative-Convective model ozone_filename = 'ozone_1.9x2.5_L26_2000clim_c091112.nc' ozone = nc.Dataset(ozone_filename) lat = ozone.variables['lat'][:] lon = ozone.variables['lon'][:] lev = ozone.variables['lev'][:] O3_zon = np.mean( ozone.variables['O3'],axis=(0,3) ) O3_global = np.sum( O3_zon * np.cos(np.deg2rad(lat)), axis=1 ) / np.sum( np.cos(np.deg2rad(lat) ) ) #steps_per_year = 20 steps_per_year = 180 # parameter set 1 -- store in a dictionary for easy re-use p1 = {'albedo_sfc': 0.22, 'adj_lapse_rate': 6., 'timestep': climlab.constants.seconds_per_year/steps_per_year, 'qStrat': 5E-6, 'relative_humidity': 0.77} # parameter set 2 p2 = {'albedo_sfc': 0.2025, 'adj_lapse_rate': 7., 'timestep': climlab.constants.seconds_per_year/steps_per_year, 'qStrat': 2E-7, 'relative_humidity': 0.6} # Make a list of the two parameter sets param = [p1, p2] # And a list of two corresponding Radiative-Convective models! slab = [] for p in param: model = climlab.BandRCModel(lev=lev, **p) model.absorber_vmr['O3'] = O3_global slab.append(model) """ Explanation: So far in this class we have talked a lot about Equilibrium Climate Sensitivity: the surface warming that is necessary to bring the planetary energy budget back into balance (energy in = energy out) after a doubling of atmospheric CO2. Although this concept is very important, it is not the only important measure of climate change, and not the only question for which we need to apply climate models. Consider two basic facts about climate change in the real world: There is no sudden, abrupt doubling of CO2. Instead, CO2 and other radiative forcing agents change gradually over time. The timescale for adjustment to equilibrium is very long because of the heat capacity of the deep oceans. We will now extend our climate model to deal with both of these issues simultaneously. Two versions of Radiative-Convective Equilibrium with different climate sensitivities We are going to use the BandRCModel but set it up with two slightly different sets of parameters. End of explanation """ for model in slab: model.integrate_converge() print 'The equilibrium surface temperatures are:' print 'Model 0: %0.2f K' %slab[0].Ts print 'Model 1: %0.2f K' %slab[1].Ts """ Explanation: Run both models out to equilibrium and check their surface temperatures: End of explanation """ # We will make copies of each model and double CO2 in the copy slab_2x = [] for model in slab: model_2x = climlab.process_like(model) model_2x.absorber_vmr['CO2'] *= 2. model_2x.integrate_converge() slab_2x.append(model_2x) # Climate sensitivity DeltaT = [] for n in range(len(slab)): DeltaT.append(slab_2x[n].Ts - slab[n].Ts) print 'The equilibrium climate sensitivities are:' print 'Model 0: %0.2f K' %DeltaT[0] print 'Model 1: %0.2f K' %DeltaT[1] """ Explanation: Ok so our two models (by construction) start out with nearly identical temperatures. Now we double CO2 and calculate the Equilibrium Climate Sensitivity: End of explanation """ slab[0].depth_bounds """ Explanation: So Model 0 is more sensitive than Model 1. It has a larger system gain, or a more positive overall climate feedback. This is actually due to differences in how we have parameterized the water vapor feedback in the two models. 
We could look at this more carefully if we wished. Time to reach equilibrium These models reached their new equilibria in just a few years. Why is that? Because they have very little heat capacity: End of explanation """ # Create the domains ocean_bounds = np.arange(0., 2010., 100.) depthax = climlab.Axis(axis_type='depth', bounds=ocean_bounds) levax = climlab.Axis(axis_type='lev', points=lev) atm = climlab.domain.Atmosphere(axes=levax) ocean = climlab.domain.Ocean(axes=depthax) # Model 0 has a higher ocean heat diffusion coefficient -- # a more efficent deep ocean heat sink ocean_diff = [5.E-4, 3.5E-4] # List of deep ocean models deep = [] for n in range(len(param)): # Create the state variables Tinitial_ocean = slab[n].Ts * np.ones(ocean.shape) Tocean = climlab.Field(Tinitial_ocean.copy(), domain=ocean) Tatm = climlab.Field(slab[n].Tatm.copy(), domain=atm) # By declaring Ts to be a numpy view of the first element of the array Tocean # Ts becomes effectively a dynamic reference to that element and is properly updated Ts = Tocean[0:1] atm_state = {'Tatm': Tatm, 'Ts': Ts} model = climlab.BandRCModel(state=atm_state, **param[n]) model.set_state('Tocean', Tocean) diff = climlab.dynamics.diffusion.Diffusion(state={'Tocean': model.Tocean}, K=ocean_diff[n], diffusion_axis='depth', **param[n]) model.add_subprocess('OHU', diff) model.absorber_vmr['O3'] = O3_global deep.append(model) print deep[0] """ Explanation: The "ocean" in these models is just a "slab" of water 1 meter deep. That's all we need to calculate the equilibrium temperatures, but it tells us nothing about the timescales for climate change in the real world. For this, we need a deep ocean that can exchange heat with the surface. Transient warming scenarios in column models with ocean heat uptake We are now going to build two new models. The atmosphere (radiative-convective model) will be identical to the two "slab" models we just used. But these will be coupled to a column of ocean water 2000 m deep! We will parameterize the ocean heat uptake as a diffusive mixing process. Much like when we discussed the diffusive parameterization for atmospheric heat transport -- we are assuming that ocean dynamics result in a vertical mixing of heat from warm to cold temperatures. The following code will set this up for us. We will make one more assumption, just for the sake of illustration: The more sensitive model (Model 0) is also more efficent at taking up heat into the deep ocean End of explanation """ # This code will generate a 'live plot' showing the transient warming in the two models. # The figure will update itself after every 5 years of simulation num_years = 400 years = np.arange(num_years+1) Tsarray = [] Tocean = [] netrad = [] for n in range(len(param)): thisTs = np.nan * np.zeros(num_years+1) thisnetrad = np.nan * np.zeros(num_years+1) thisTocean = np.nan * np.zeros((deep[n].Tocean.size, num_years+1)) thisTs[0] = deep[n].Ts thisnetrad[0] = deep[n].ASR - deep[n].OLR thisTocean[:, 0] = deep[n].Tocean Tsarray.append(thisTs) Tocean.append(thisTocean) netrad.append(thisnetrad) CO2initial = deep[0].absorber_vmr['CO2'][0] CO2array = np.nan * np.zeros(num_years+1) CO2array[0] = CO2initial * 1E6 colorlist = ['b', 'r'] co2color = 'k' num_axes = len(param) + 1 fig, ax = plt.subplots(num_axes, figsize=(12,14)) # Twin the x-axis twice to make independent y-axes. topaxes = [ax[0], ax[0].twinx(), ax[0].twinx()] # Make some space on the right side for the extra y-axis. 
fig.subplots_adjust(right=0.85) # Move the last y-axis spine over to the right by 10% of the width of the axes topaxes[-1].spines['right'].set_position(('axes', 1.1)) # To make the border of the right-most axis visible, we need to turn the frame # on. This hides the other plots, however, so we need to turn its fill off. topaxes[-1].set_frame_on(True) topaxes[-1].patch.set_visible(False) for n, model in enumerate(slab_2x): topaxes[0].plot(model.Ts*np.ones_like(Tsarray[n]), '--', color=colorlist[n]) topaxes[0].set_ylabel('Surface temperature (K)') topaxes[0].set_xlabel('Years') topaxes[0].set_title('Transient warming scenario: 1%/year CO2 increase to doubling, followed by CO2 stabilization', fontsize=14) topaxes[0].legend(['Model 0', 'Model 1'], loc='lower right') topaxes[1].plot(CO2array, color=co2color) topaxes[1].set_ylabel('CO2 (ppm)', color=co2color) for tl in topaxes[1].get_yticklabels(): tl.set_color(co2color) topaxes[1].set_ylim(300., 1000.) topaxes[2].set_ylabel('Radiative forcing (W/m2)', color='b') for tl in topaxes[2].get_yticklabels(): tl.set_color('b') topaxes[2].set_ylim(0, 4) contour_levels = np.arange(-1, 4.5, 0.25) for n in range(len(param)): cax = ax[n+1].contourf(years, deep[n].depth, Tocean[n] - Tsarray[n][0], levels=contour_levels) ax[n+1].invert_yaxis() ax[n+1].set_ylabel('Depth (m)') ax[n+1].set_xlabel('Years') fig.subplots_adjust(bottom=0.12) cbar_ax = fig.add_axes([0.25, 0.02, 0.5, 0.03]) fig.colorbar(cax, cax=cbar_ax, orientation='horizontal'); # Increase CO2 by 1% / year for 70 years (until doubled), and then hold constant for y in range(num_years): if deep[0].absorber_vmr['CO2'][0] < 2 * CO2initial: for model in deep: model.absorber_vmr['CO2'] *= 1.01 CO2array[y+1] = deep[0].absorber_vmr['CO2'][0] * 1E6 for n, model in enumerate(deep): model.integrate_years(1, verbose=False) Tsarray[n][y+1] = deep[n].Ts Tocean[n][:, y+1] = deep[n].Tocean netrad[n][y+1] = deep[n].ASR - deep[n].OLR # is it time to update the plots? plot_freq = 5 if y%plot_freq == 0: for n, model in enumerate(deep): topaxes[0].plot(Tsarray[n], color=colorlist[n]) topaxes[2].plot(netrad[n], ':', color=colorlist[n]) for n in range(len(param)): cax = ax[n+1].contourf(years, deep[n].depth, Tocean[n] - Tsarray[n][0], levels=contour_levels) topaxes[1].plot(CO2array, color=co2color) fig.canvas.draw() """ Explanation: Now consider the CO2 increase. In the real world, CO2 has been increasing every year since the beginning of industrialization. Future CO2 concentrations depend on collective choices made by human societies about how much fossil fuel to extract and burn. We will set up a simple scenario. Suppose that CO2 increases by 1% of its existing concentration every year until it reaches 2x its initial concentration. This takes about 70 years. After 70 years, we assume that all anthropogenic emissions, and CO2 concentration is stabilized at the 2x level. What happens to the surface temperature? How do the histories of surface and deep ocean temperature compare in our two models? We are going to simulation 400 years of transient global warming in the two models. End of explanation """
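# A quick closing diagnostic (a sketch using only quantities defined above): how much of each
# model's equilibrium warming has actually been realized by the end of the 400-year run?
# Tsarray holds the transient surface temperatures and DeltaT the equilibrium climate
# sensitivities computed for the corresponding slab models.
for n in range(len(param)):
    realized = Tsarray[n][-1] - Tsarray[n][0]
    print 'Model %d: %0.2f K realized of %0.2f K equilibrium warming (%.0f%%)' % (n, realized, float(DeltaT[n]), 100. * realized / float(DeltaT[n]))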
aaai2018-paperid-62/aaai2018-paperid-62
parameter_figures.ipynb
mit
import pandas as pd file = 'data/evaluations.csv' conversion_dict = {'research_type': lambda x: int(x == 'E')} evaluation_data = pd.read_csv(file, sep=',', header=0, index_col=0, converters=conversion_dict) print('Samples per conference\n{}'.format(evaluation_data.groupby('conference').size())) """ Explanation: Figure generation for all parameters This Jupyter notebook generates figures to show the coverage for each parameter. The data We start by loading the CSV file into a pandas DataFrame. End of explanation """ import numpy as np import matplotlib.pyplot as plt import matplotlib matplotlib.style.use('ggplot') %matplotlib notebook colors = matplotlib.cm.get_cmap().colors len_colors = len(colors) def plot_bars(data, keys, elements, filename, figsize=(4,4)): plot_scores = [] for (key, element) in zip(keys, elements): plot_scores.append(data[element].mean(axis=0)) fig = plt.figure(figsize=figsize) ax = plt.subplot(111) N = len(plot_scores) ind = np.arange(N) width = 0.7 plot_colors = colors[0:len_colors:int(len_colors/N)] ax.bar(ind+0.5, plot_scores, width, align='center', alpha=0.5, color=plot_colors) for x, y in zip(ind, plot_scores): ax.text(x+0.5, 0.95, '{0:.0%}'.format(y), ha='center', va='top', size=12) ax.set_xlim(0,N) ax.set_xticks(ind+0.5) ax.set_xticklabels(keys, rotation=35) ax.set_ylim(0, 1.0) ax.set_yticks([0.25, 0.50, 0.75, 1.0]) ax.set_yticklabels(['25%', r'50%', '75%', '100%'], fontdict={'horizontalalignment': 'right'}) plt.tight_layout() plt.savefig('figures/{}.png'.format(filename), format='png', bbox_inches='tight') evaluation_data = evaluation_data.groupby('research_type').get_group(1) keys = ['Results', 'Test', 'Valid-\nation', 'Train'] columns = ['results', 'test', 'validation', 'train'] plot_bars(evaluation_data[columns], keys, columns, 'freq_data', figsize=(4,3)) keys = ['Pseudo\ncode', 'Research\nquestion', 'Research\nmethod', 'Objective/\nGoal', 'Problem'] columns = ['pseudocode', 'research_question', 'research_method', 'goal/objective', 'problem_description'] plot_bars(evaluation_data[columns], keys, columns, 'freq_method', figsize=(4,3)) keys = ['Exp.\ncode', 'Exp.\nsetup', 'SW\ndep.', 'HW\nspec.', 'Method\ncode', 'Prediction', 'Hypothesis'] columns = ['open_experiment_code', 'experiment_setup', 'software_dependencies', 'hardware_specification', 'open_source_code', 'prediction', 'hypothesis'] method_data = evaluation_data[columns] plot_bars(method_data, keys, columns, 'freq_experiment', figsize=(5,3)) """ Explanation: Generation We will generate figures for four different categorisations: method, data, and experiment. The categories consist of the following variables: (method) problem, objective/goal, research method, research questions, and pseudo code; (data) training, validation, test, and results data; (experiment) hypothesis, prediction, method source code, hardware specification, software dependencies, experiment setup, experiment source code. End of explanation """ import IPython import platform print('Python version: {}'.format(platform.python_version())) print('IPython version: {}'.format(IPython.__version__)) print('matplotlib version: {}'.format(matplotlib.__version__)) print('numpy version: {}'.format(np.__version__)) print('pandas version: {}'.format(pd.__version__)) """ Explanation: Versions Here's a generated output to keep track of software versions used to run this Jupyter notebook. End of explanation """
DJCordhose/ai
notebooks/tensorflow/fashion_mnist_tpu.ipynb
mit
import tensorflow as tf import numpy as np (x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data() # add empty color dimension x_train = np.expand_dims(x_train, -1) x_test = np.expand_dims(x_test, -1) """ Explanation: View in Colaboratory Fashion MNIST with Keras and TPUs Let's try out using tf.keras and Cloud TPUs to train a model on the fashion MNIST dataset. First, let's grab our dataset using tf.keras.datasets. Talken from https://colab.research.google.com/github/tensorflow/tpu/blob/master/tools/colab/fashion_mnist.ipynb End of explanation """ model = tf.keras.models.Sequential() model.add(tf.keras.layers.BatchNormalization(input_shape=x_train.shape[1:])) model.add(tf.keras.layers.Conv2D(64, (5, 5), padding='same', activation='elu')) model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2,2))) model.add(tf.keras.layers.Dropout(0.25)) model.add(tf.keras.layers.BatchNormalization(input_shape=x_train.shape[1:])) model.add(tf.keras.layers.Conv2D(128, (5, 5), padding='same', activation='elu')) model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2))) model.add(tf.keras.layers.Dropout(0.25)) model.add(tf.keras.layers.BatchNormalization(input_shape=x_train.shape[1:])) model.add(tf.keras.layers.Conv2D(256, (5, 5), padding='same', activation='elu')) model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2,2))) model.add(tf.keras.layers.Dropout(0.25)) model.add(tf.keras.layers.Flatten()) model.add(tf.keras.layers.Dense(256)) model.add(tf.keras.layers.Activation('elu')) model.add(tf.keras.layers.Dropout(0.5)) model.add(tf.keras.layers.Dense(10)) model.add(tf.keras.layers.Activation('softmax')) model.summary() """ Explanation: Defining our model We will use a standard conv-net for this example. We have 3 layers with drop-out and batch normalization between each layer. End of explanation """ import os tpu_model = tf.contrib.tpu.keras_to_tpu_model( model, strategy=tf.contrib.tpu.TPUDistributionStrategy( tf.contrib.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR']) ) ) tpu_model.compile( optimizer=tf.train.AdamOptimizer(learning_rate=1e-3, ), loss=tf.keras.losses.sparse_categorical_crossentropy, metrics=['sparse_categorical_accuracy'] ) def train_gen(batch_size): while True: offset = np.random.randint(0, x_train.shape[0] - batch_size) yield x_train[offset:offset+batch_size], y_train[offset:offset + batch_size] tpu_model.fit_generator( train_gen(1024), epochs=10, steps_per_epoch=100, validation_data=(x_test, y_test), ) """ Explanation: Training on the TPU We're ready to train! We first construct our model on the TPU, and compile it. Here we demonstrate that we can use a generator function and fit_generator to train the model. You can also pass in x_train and y_train to tpu_model.fit() instead. 
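For example, a minimal sketch of that alternative (the batch size here is an assumption chosen to match the generator above, and is typically kept divisible by the number of TPU cores): tpu_model.fit(x_train, y_train, batch_size=1024, epochs=10, validation_data=(x_test, y_test))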
End of explanation """ LABEL_NAMES = ['t_shirt', 'trouser', 'pullover', 'dress', 'coat', 'sandal', 'shirt', 'sneaker', 'bag', 'ankle_boots'] cpu_model = tpu_model.sync_to_cpu() from matplotlib import pyplot %matplotlib inline def plot_predictions(images, predictions): n = images.shape[0] nc = int(np.ceil(n / 4)) f, axes = pyplot.subplots(nc, 4) for i in range(nc * 4): y = i // 4 x = i % 4 axes[x, y].axis('off') label = LABEL_NAMES[np.argmax(predictions[i])] confidence = np.max(predictions[i]) if i > n: continue axes[x, y].imshow(images[i]) axes[x, y].text(0.5, 0.5, label + '\n%.3f' % confidence, fontsize=14) pyplot.gcf().set_size_inches(8, 8) plot_predictions(np.squeeze(x_test[:16]), cpu_model.predict(x_test[:16])) """ Explanation: Checking our results (inference) Now that we're done training, let's see how well we can predict fashion categories! Keras/TPU prediction isn't working due to a small bug (fixed in TF 1.12!), but we can predict on the CPU to see how our results look. End of explanation """
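# Optional check (a sketch): overall test-set accuracy of the CPU copy of the model,
# computed directly from the predicted class probabilities.
probs = cpu_model.predict(x_test)
print('Test accuracy: %.4f' % np.mean(np.argmax(probs, axis=1) == y_test))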
anhaidgroup/py_entitymatching
notebooks/guides/end_to_end_em_guides/Basic EM Workflow.ipynb
bsd-3-clause
import sys sys.path.append('/Users/pradap/Documents/Research/Python-Package/anhaid/py_entitymatching/') import py_entitymatching as em import pandas as pd import os # Display the versions print('python version: ' + sys.version ) print('pandas version: ' + pd.__version__ ) print('magellan version: ' + em.__version__ ) """ Explanation: Introduction This IPython notebook explains a basic workflow two tables using py_entitymatching. Our goal is to come up with a workflow to match DBLP and ACM datasets. Specifically, we want to achieve precision greater than 95% and get recall greater than 90%. The datasets contain information about the conference papers published in top databse conferences. First, we need to import py_entitymatching package and other libraries as follows: End of explanation """ # Get the paths path_A = em.get_install_path() + os.sep + 'datasets' + os.sep + 'end-to-end' + os.sep + 'dblp_demo.csv' path_B = em.get_install_path() + os.sep + 'datasets' + os.sep + 'end-to-end' + os.sep + 'acm_demo.csv' # Load csv files as dataframes and set the key attribute in the dataframe A = em.read_csv_metadata(path_A, key='id') B = em.read_csv_metadata(path_B, key='id') print('Number of tuples in A: ' + str(len(A))) print('Number of tuples in B: ' + str(len(B))) print('Number of tuples in A X B (i.e the cartesian product): ' + str(len(A)*len(B))) A.head(2) B.head(2) # Display the key attributes of table A and B. em.get_key(A), em.get_key(B) """ Explanation: Matching two tables typically consists of the following three steps: 1. Reading the input tables 2. Blocking the input tables to get a candidate set 3. Matching the tuple pairs in the candidate set Read Input Tables We begin by loading the input tables. For the purpose of this guide, we use the datasets that are included with the package. End of explanation """ # Blocking plan # A, B -- AttrEquivalence blocker [year] --------------------| # |---> candidate set # A, B -- Overlap blocker [title]---------------------------| # Create attribute equivalence blocker ab = em.AttrEquivalenceBlocker() # Block tables using 'year' attribute : same year include in candidate set C1 = ab.block_tables(A, B, 'paper year', 'paper year', l_output_attrs=['title', 'authors', 'paper year'], r_output_attrs=['title', 'authors', 'paper year'] ) len(C1) # Initialize overlap blocker ob = em.OverlapBlocker() # Block over title attribute C2 = ob.block_tables(A, B, 'title', 'title', show_progress=False, overlap_size=2) len(C2) # Combine the outputs from attr. equivalence blocker and overlap blocker C = em.combine_blocker_outputs_via_union([C1, C2]) len(C) """ Explanation: Block Tables To Get Candidate Set Before we do the matching, we would like to remove the obviously non-matching tuple pairs from the input tables. This would reduce the number of tuple pairs considered for matching. py_entitymatching provides four different blockers: (1) attribute equivalence, (2) overlap, (3) rule-based, and (4) black-box. The user can mix and match these blockers to form a blocking sequence applied to input tables. For the matching problem at hand, we know that two conference papers published in different years cannot match, or if there are errors in the year then there should be at least some overlap between the paper titles. So we decide the apply the following blocking plan: End of explanation """ # Sample candidate set S = em.sample_table(C, 450) """ Explanation: Match Tuple Pairs in Candidate Set In this step, we would want to match the tuple pairs in the candidate set. 
Specifically, we use learning-based method for matching purposes. This typically involves the following four steps: Sampling and labeling the candidate set Splitting the labeled data into development and evaluation set Selecting the best learning based matcher using the development set Evaluating the selected matcher using the evaluation set Sampling and labeling the candidate set First, we randomly sample 450 tuple pairs for labeling purposes. End of explanation """ # Label S #G = em.label_table(S, 'label') """ Explanation: Next, we label the sampled candidate set. Specify we would enter 1 for a match and 0 for a non-match. End of explanation """ # Load the pre-labeled data path_G = em.get_install_path() + os.sep + 'datasets' + os.sep + 'end-to-end' + os.sep + 'labeled_data_demo.csv' G = em.read_csv_metadata(path_G, key='_id', ltable=A, rtable=B, fk_ltable='ltable_id', fk_rtable='rtable_id') len(G) """ Explanation: For the purposes of this guide, we will load in a pre-labeled dataset (of 450 tuple pairs) included in this package. End of explanation """ # Split S into development set (I) and evaluation set (J) IJ = em.split_train_test(G, train_proportion=0.7, random_state=0) I = IJ['train'] J = IJ['test'] """ Explanation: Splitting the labeled data into development and evaluation set In this step, we split the labeled data into two sets: development (I) and evaluation (J). Specifically, the development set is used to come up with the best learning-based matcher and the evaluation set used to evaluate the selected matcher on unseen data. End of explanation """ # Create a set of ML-matchers dt = em.DTMatcher(name='DecisionTree', random_state=0) svm = em.SVMMatcher(name='SVM', random_state=0) rf = em.RFMatcher(name='RF', random_state=0) lg = em.LogRegMatcher(name='LogReg', random_state=0) ln = em.LinRegMatcher(name='LinReg') """ Explanation: Selecting the best learning-based matcher Selecting the best learning-based matcher typically involves the following steps: Creating a set of learning-based matchers Creating features Converting the development set into feature vectors Selecting the best learning-based matcher using k-fold cross validation Creating a Set of Learning-based Matchers End of explanation """ # Generate features feature_table = em.get_features_for_matching(A, B, validate_inferred_attr_types=False) # List the names of the features generated feature_table['feature_name'] """ Explanation: Creating Features Next, we need to create a set of features for the development set. py_entitymatching provides a way to automatically generate features based on the attributes in the input tables. For the purposes of this guide, we use the automatically generated features. 
End of explanation """ # Convert the I into a set of feature vectors using F H = em.extract_feature_vecs(I, feature_table=feature_table, attrs_after='label', show_progress=False) # Display first few rows H.head(3) """ Explanation: Converting the Development Set to Feature Vectors End of explanation """ # Select the best ML matcher using CV result = em.select_matcher([dt, rf, svm, ln, lg], table=H, exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'], k=5, target_attr='label', metric_to_select_matcher='precision', random_state=0) result['cv_stats'] # Select the best ML matcher using CV result = em.select_matcher([dt, rf, svm, ln, lg], table=H, exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'], k=5, target_attr='label', metric_to_select_matcher='recall', random_state=0) result['cv_stats'] """ Explanation: Selecting the Best Matcher Using Cross-validation Now, we select the best matcher using k-fold cross-validation. For the purposes of this guide, we use five fold cross validation and use 'precision' and 'recall' metric to select the best matcher. End of explanation """ # Convert J into a set of feature vectors using feature table L = em.extract_feature_vecs(J, feature_table=feature_table, attrs_after='label', show_progress=False) """ Explanation: We observe that the best matcher (RF) is getting us to the precision and recall that we expect (i.e P > 95% and R > 90%). So, we select this matcher and now we can proceed on to evaluating the best matcher on the unseen data (the evaluation set). Evaluating the Matching Output Evaluating the matching outputs for the evaluation set typically involves the following four steps: 1. Converting the evaluation set to feature vectors 2. Training matcher using the feature vectors extracted from the development set 3. Predicting the evaluation set using the trained matcher 4. Evaluating the predicted matches Converting the Evaluation Set to Feature Vectors As before, we convert to the feature vectors (using the feature table and the evaluation set) End of explanation """ # Train using feature vectors from I dt.fit(table=H, exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'], target_attr='label') """ Explanation: Training the Selected Matcher Now, we train the matcher using all of the feature vectors from the development set. For the purposes of this guide we use random forest as the selected matcher. End of explanation """ # Predict on L predictions = dt.predict(table=L, exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'], append=True, target_attr='predicted', inplace=False) """ Explanation: Predicting the Matches Next, we predict the matches for the evaluation set (using the feature vectors extracted from it). End of explanation """ # Evaluate the predictions eval_result = em.eval_matches(predictions, 'label', 'predicted') em.print_eval_summary(eval_result) """ Explanation: Evaluating the Matching Output Finally, we evaluate the accuracy of predicted outputs End of explanation """
ihmeuw/dismod_mr
examples/consistent_data_from_vivarium_artifact.ipynb
agpl-3.0
np.random.seed(123456) # if dismod_mr is not installed, it should possible to use # !conda install --yes pymc # !pip install dismod_mr import dismod_mr # you also need one more pip installable package # !pip install vivarium_public_health import vivarium_public_health """ Explanation: Consistent models in DisMod-MR from Vivarium artifact draw Take i, r, f, p from a Vivarium artifact, and make a consistent version of them. See how it compares to the original. End of explanation """ from vivarium_public_health.dataset_manager import Artifact art = Artifact('/share/costeffectiveness/artifacts/obesity/obesity.hdf') art.keys def format_for_dismod(df, data_type): df = df.query('draw==0 and sex=="Female" and year_start==2017').copy() df['data_type'] = data_type df['area'] = 'all' df['standard_error'] = 0.001 df['upper_ci'] = np.nan df['lower_ci'] = np.nan df['effective_sample_size'] = 10_000 df['sex'] = 'total' df = df.rename({'age_group_start': 'age_start', 'age_group_end': 'age_end',}, axis=1) return df p = format_for_dismod(art.load('cause.ischemic_heart_disease.prevalence'), 'p') i = format_for_dismod(art.load('cause.ischemic_heart_disease.incidence'), 'i') f = format_for_dismod(art.load('cause.ischemic_heart_disease.excess_mortality'), 'f') m_all = format_for_dismod(art.load('cause.all_causes.cause_specific_mortality'), 'm_all') csmr = format_for_dismod(art.load('cause.ischemic_heart_disease.cause_specific_mortality'), 'csmr') # could also try 'pf' dm = dismod_mr.data.ModelData() dm.input_data = pd.concat([p, i, f, m_all, csmr ], ignore_index=True) for rate_type in 'ifr': dm.set_knots(rate_type, [0,40,60,80,90,100]) dm.set_level_value('i', age_before=30, age_after=101, value=0) dm.set_increasing('i', age_start=50, age_end=100) dm.set_level_value('p', value=0, age_before=30, age_after=101) dm.set_level_value('r', value=0, age_before=101, age_after=101) dm.input_data.data_type.value_counts() dm.setup_model(rate_model='normal', include_covariates=False) import pymc as pm m = pm.MAP(dm.vars) %%time m.fit(verbose=1) from IPython.core.pylabtools import figsize figsize(11, 5.5) dm.plot() !date """ Explanation: Consistent fit with all data Let's start with a consistent fit of the simulated PD data. This includes data on prevalence, incidence, and SMR, and the assumption that remission rate is zero. All together this counts as four different data types in the DisMod-II accounting. End of explanation """
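As a side note, the data wrangling inside format_for_dismod (selecting one draw, sex and year, then renaming the age columns) can be illustrated in isolation. The toy DataFrame below is fabricated purely for demonstration; only the column names follow the artifact layout used above.

# Toy illustration of the filter-and-rename pattern used in format_for_dismod.
import pandas as pd
toy = pd.DataFrame({'draw': [0, 0, 1],
                    'sex': ['Female', 'Male', 'Female'],
                    'year_start': [2017, 2017, 2017],
                    'age_group_start': [0, 0, 0],
                    'age_group_end': [5, 5, 5],
                    'value': [0.01, 0.02, 0.03]})
subset = toy.query('draw == 0 and sex == "Female" and year_start == 2017').copy()
subset = subset.rename({'age_group_start': 'age_start', 'age_group_end': 'age_end'}, axis=1)
subset['data_type'] = 'p'  # same kind of flag the helper attaches
print(subset)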
Olsthoorn/IHE-python-course-2017
exercises/Apr18/TimeSeriesSampling.ipynb
gpl-2.0
import pandas as pd import matplotlib.pyplot as plt import numpy as np """ Explanation: <figure> <IMG SRC="../../logo/logo.png" WIDTH=250 ALIGN="right"> </figure> IHE Python course, 2017 Time series manipulation T.N.Olsthoorn, April 18, 2017 Most scientists and engineers, including hydrologists, physisists, electronic engineers, social scientists and economists are often faced with time series that bear information that is to be extracted or to be used in predictions. Pandas has virtually all the tools that are required to handle time series, while keeping dates and data strictly connected. These time series loaded into pandas then form the basis of further analysis. Loading into pandas can be done with pd.read_csv, pd.read_table, pd.read_excel as we used before as well as with numerous other functions ready to be using in pandas. Just use tab-complition to see al the possibilities End of explanation """ [d for d in dir(pd) if d.startswith("read")] pd.read_table() [d for d in dir(pd) if d.startswith("read")] """ Explanation: Show which reading functions pandas has as onboard methods. We can use a coprehension to select what we want: End of explanation """ cd python/IHEcourse2017/exercises/Apr18/ pwd """ Explanation: Hence there's a large number of possibilities. Move to the directory with the examples. Then print pwd to see if you're there. Notice, the first part of the pwd command will be different on your computer. End of explanation """ ls """ Explanation: See if we have a csv datafile, which is a long year groundwater head series in the south of the Netherlands (chosen more or less at random for its length). End of explanation """ import os os.path.isfile("B50E0133001_1.csv") """ Explanation: It's not a bad habit to use os to verify that the file exists. End of explanation """ pb = pd.read_csv("B50E0133001_1.csv") pb.head() """ Explanation: Ok, now we will naively try to read it in using pd.read_csv. This may fail or not. If it fails we sharpen the knife by adding or using one or more options provided by pd.read_csv. End of explanation """ pb = pd.read_csv("B50E0133001_1.csv", skiprows=9) pb.head() """ Explanation: Obviously, the read_csv above failed. Upon inspection of the file in an editor, we see that the top is a mess. Not really, but at least we want to sktip this part and get to the actual time series data of interest further down in the file. So let's skip a few rows (too few, but we can correct step by step) End of explanation """ pb = pd.read_csv("B50E0133001_1.csv", skiprows=11) pb.head() """ Explanation: Ok, we got some top table in the file. See which line pd thought was the header. Ok. skip a few more lines. End of explanation """ pb = pd.read_csv("B50E0133001_1.csv", skiprows=15) pb.head() """ Explanation: Now we really got the first table in the file, but this is not the one we need. On line 3 we see the desired header line. So skip 3 more lines to get there. End of explanation """ pb = pd.read_csv("B50E0133001_1.csv", skiprows=15, index_col="Peildatum") pb.head() """ Explanation: This is fine. At least a good start. But we want "Peildatum" as our index. So: End of explanation """ pb = pd.read_csv("B50E0133001_1.csv", skiprows=15, index_col="Peildatum", parse_dates=True) pb.head() """ Explanation: Better, but the idex still consists of strings and not of dates. 
Therefore, tell read_csv to part the dates: End of explanation """ pb = pd.read_csv("B50E0133001_1.csv", skiprows=15, index_col="Peildatum", parse_dates=True, dayfirst=True) pb.head() pb.head() """ Explanation: Problem is that some dates will be messed up as pandas will by default interprete dates as mm-dd-yyyyy, while we have dd-mm-yyyy. For some dates this does not matter but for other dates this is ambiguous unless it is specified that the dates start with the day instead of the month. End of explanation """ pb = pd.read_csv("B50E0133001_1.csv", skiprows=15, index_col="Peildatum", parse_dates=True, dayfirst=True, usecols=["Stand (cm t.o.v. NAP)"]) pb.head() """ Explanation: So far so good. Now do some clean-up as we only need the 6th column with the head above national datum. We can tell read_csv what columns to use by specifying a list of headers. First trial End of explanation """ pb = pd.read_csv("B50E0133001_1.csv", skiprows=15, index_col="Peildatum", parse_dates=True, dayfirst=True, usecols=["Peildatum", "Stand (cm t.o.v. NAP)"]) pb.head() """ Explanation: This failed, because we now have to specify all columns we want to use. This should include the columne "Peildatum". So add it to the list. End of explanation """ pb.columns = ["NAP"] pb.head() """ Explanation: This is fine. We now have a one-column dataFrame with the proper index. For English speakers, change the column header for better readability. End of explanation """ print(type(pb)) print(type(pb['NAP'])) """ Explanation: Check that pb is still a data frame, and only when we select one column from a dataFrame it becomes a series. End of explanation """ pb = pb['NAP'] print(type(pb)) """ Explanation: So select this column to get a time series. End of explanation """ pb.plot() plt.show() # default color is blue, and default plot is line. """ Explanation: Dataframes and series can immediately be plotted. Of course, you may also plot titles on the axes and above the plot. But because of lazyness, I leave this out for this exercise. End of explanation """ pb.resample("AS").mean().head() pb.resample("AS-APR").mean().head() """ Explanation: The next problem is to get the mean of the highest three measurements within each hydrological year, which starts on April 1 and ends at March 31. This requires resampling the data per hydrologic year. Which can be done with aliases put in the rule of the resample function of pandas series and dataFrames. Here are options: Offset aliases (previously alled time rules) that can be used or resampling a time series or a dataFrame http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases B business day frequency C custom business day frequency (experimental) D calendar day frequency W weekly frequency M month end frequency SM semi-month end frequency (15th and end of month) BM business month end frequency CBM custom business month end frequency MS month start frequency SMS semi-month start frequency (1st and 15th) BMS business month start frequency CBMS custom business month start frequency Q quarter end frequency BQ business quarter endfrequency QS quarter start frequency BQS business quarter start frequency A year end frequency BA business year end frequency AS year start frequency BAS business year start frequency BH business hour frequency H hourly frequency T minutely frequency S secondly frequency L milliseonds U microseconds N nanoseconds But fo sample at some arbitrary interval we need anchored offsets as the resample rule. Here are the options. 
Anchored offsets http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases For some frequencies you can specify an anchoring suffix: Alias Description W-SUN weekly frequency (sundays). Same as ‘W’ W-MON weekly frequency (mondays) W-TUE weekly frequency (tuesdays) W-WED weekly frequency (wednesdays) W-THU weekly frequency (thursdays) W-FRI weekly frequency (fridays) W-SAT weekly frequency (saturdays) (B)Q(S)-DEC quarterly frequency, year ends in December. Same as ‘Q’ (B)Q(S)-JAN quarterly frequency, year ends in January (B)Q(S)-FEB quarterly frequency, year ends in February (B)Q(S)-MAR quarterly frequency, year ends in March (B)Q(S)-APR quarterly frequency, year ends in April (B)Q(S)-MAY quarterly frequency, year ends in May (B)Q(S)-JUN quarterly frequency, year ends in June (B)Q(S)-JUL quarterly frequency, year ends in July (B)Q(S)-AUG quarterly frequency, year ends in August (B)Q(S)-SEP quarterly frequency, year ends in September (B)Q(S)-OCT quarterly frequency, year ends in October (B)Q(S)-NOV quarterly frequency, year ends in November (B)A(S)-DEC annual frequency, anchored end of December. Same as ‘A’ (B)A(S)-JAN annual frequency, anchored end of January (B)A(S)-FEB annual frequency, anchored end of February (B)A(S)-MAR annual frequency, anchored end of March (B)A(S)-APR annual frequency, anchored end of April (B)A(S)-MAY annual frequency, anchored end of May (B)A(S)-JUN annual frequency, anchored end of June (B)A(S)-JUL annual frequency, anchored end of July (B)A(S)-AUG annual frequency, anchored end of August (B)A(S)-SEP annual frequency, anchored end of September (B)A(S)-OCT annual frequency, anchored end of October (B)A(S)-NOV annual frequency, anchored end of November To see this at work. Resample the time series by hydrological year and compute the mean head in every hydrological year. This can be done as follows: End of explanation """ Z = pb.resample("AS-APR") type(Z) """ Explanation: This uses Groupby functionality. Which we'll inspect next. In fact, pb.resample(...) yields a DatetimeIndexResampler End of explanation """ [z for z in dir(Z) if not z.startswith("_")] """ Explanation: This resampler has its own functinality that can be used. This fucntionality is shown here: End of explanation """ Z.max().plot(label="max") Z.mean().plot(label="mean") Z.min().plot(label="min") plt.title("The max, mean and min of the head in each hydrological year") plt.legend(loc='best') plt.show() Z.max() for z in Z: print(z) """ Explanation: It's now easy to plot the resampled data using several of the functions, like so: Notice that Z.mean() is a pandas series so that Z.mean().plot() is plot method of the pandas series. End of explanation """ print(Z.agg.__doc__) """ Explanation: Insteresting is the agg function (which is an abbreviation of aggregate function). Here is its documentation: End of explanation """ def highest3(z): """returns mean of highest 3 values using np.argsort""" I = np.argsort(z)[-3:] return z[I].mean() def highest3a(z): """returns mean of highest 3 values using np.sort""" z = np.sort(z) return z[-3:].mean() # Apply print("Using np.argsort") highest = pb.resample("AS-APR").agg(highest3) highest.columns = ["mean_highest_value"] print(highest.head()) print("\nUsing np.sort") highesta = pb.resample("AS-APR").agg(highest3a) highesta.columns = ["mean_highest_value"] print(highesta.head()) """ Explanation: So we can use any function and apply it on the data grouped by the resampler. 
These data are the time series consisting of the data that fall in the interval between the last resample moment and the currrent one. Let's try this out to get the three highest values of any hydrological year. For this we define our own function, called highest3. It works by taking z which should be a time series, one consisting of any of the hydrological years in your long-year time series. We use argsort to the indices of the ordered values (we could also directly use the values themselves, but it's good to know argsort exists). The series is sorted from low to high, so we take the last 3 values, i.e. the highest 3. Notice that this also works in Python of the total number of values is less than three, so we don't need to check this. Then we return the mean of these highest three values. That's all. End of explanation """ def h_and_l_3(z): z = np.sort(z) # rounding off for a nicer list, but is not necessary return (np.round(z[ :3].mean()), np.round(z[-3:].mean())) # Apply h_and_l = pb.resample("AS-APR").agg(h_and_l_3) h_and_l.columns = ["mean_lowest_and_highest_values"] h_and_l.head() """ Explanation: This, of course, solves the problem. Which means we could just as well also compute the lowest 3 at the same time. And why not also remember the highest and lowest 3 End of explanation """ def h3(z): """Returns a tuple of the three highest value within sampling interval""" return (z[np.argsort(z)[-3:]],) Z.agg(h3).head() """ Explanation: The above functions all reduce, that is, they all aggreate the data held by the resampler for each sampling interval to a single value (or a tuple) End of explanation """ Z.apply(h3).head() """ Explanation: This does indeed give a tuple of the three highest values within each sampling interval, but we can't plot these values easily on the graph of the time series. Other functionality of the sampler are the indices, i.e. Z.indices. This yields a dictionary with the indices into the overall time series that belong to each resampled timestamp. Therefore we can readily find the values that belong to each hydrological year. End of explanation """ dd = list() def h33(z): """Returns a tuple of the three highest value within sampling interval""" #print(type(z)) dd.append(z[z.argsort()[-3:]]) return # the time series are put in the list dd Z.apply(h33).head() #Z.agg(h33).head() # alternative works just as well # for instance show dd[3] print(type(dd[3])) dd[3] """ Explanation: So appy() works the same as agg() at least here. If we want to plot the three highest points in each hydrlogical year, we could make a list with the sub time series that consist of the three highest points with there timestamp as index. Then, each item in this list is a series consisting of three values, which we may plot one after the other. End of explanation """ pb.plot() # plot all data as a line for d in dd: #plot sub time series of the three highest points d.plot(marker='o') plt.show() """ Explanation: The next step is to plot them. But we first plot the entire data set as a line. Then we plot each sub time series as small circles. The adjacent hydrological years then have a different color. End of explanation """ dd dd = list() # the entire time series in each sampling interval. dd3 = list() # only the three highest values in each sampling interval. def h33(z): """Returns a tuple of the three highest value within sampling interval Notice that this function just used append() to generate a list as a side-effect. It effectively consists of two lines and returns nothing. 
""" # z is what the sampler Z yields while resampling the original time series # It isthe sub-time series that falls in the running interval. # With tis append we get a list of the sub time series. dd.append(z[:]) # z[:] forces a copy # you can do an argsort on z. This yields a time series with the same index # but with as values the index in the original series. You can see it if # you print it here or make a list of these index-time series. dd3.append(z[z.argsort()[-3:]]) return # Here we apply the function by calling the method .agg() of the sampler Z. # The method receives the just created function as input. It applies this function # on every iteration, that is on every sub-time series. # Each time the function h33 is called it appends to the lists dd and ddr. # The sampler Z method agg calls the funcion h33 for every sample interval. # You may be tempted to insert a print statement in the function to see that # this is what actually happens. Z.apply(h33) # Then plot the sub-time series in the lists in dd and dd3. # We make sure to use the same color for all points in the same # hydrological year in both dd and dd3. # The subseries in dd are plotted as a line, those in dd3 as small circles. clr = 'brgkmcy'; i=0 # colors to use for d3, d in zip(dd3, dd): d.plot(marker='.', color=clr[i]) # all data in hydrological year d3.plot(marker='o', color=clr[i]) # highest three i += 1 if i==len(clr): i=0 # set i to 0 when colors are exhausted. plt.title("measurements per hydr. yr with the 3 highest accentuated") plt.xlabel('time') plt.ylabel('cm above national datum NAP') plt.show() """ Explanation: If we want to color the data in the same hydrological year in the same color, then we also make a list of all data in each sampling interval next to the list of the three highest values. Each item in dd has the complete time series of the interval, each item in dd3 has a tiem series of the three highest values alone. The append within the function is away of using a side-effect to get things done. It's a bit sneaky, not very elegant. But it works: End of explanation """ print("The sub time series for 1964\n") print(dd[10]) print("\nThe indices that sort this sub series. It is itself a time series") dd[10].argsort() """ Explanation: Show that the argort works to get the indices that sort the time series End of explanation """ # Don't need this, but just to make sure refresh our sampler Z = pb.resample("AS-APR") """ Explanation: Instead of appending to the list dd and dd3 sneakyly behind the scene (hidden inside the function, that is as a side effect of the function), we can also aim in achieveing the same thing head-on. This can be done using the indices of each sub-timeseries, which is also a functionality of the sample. End of explanation """ Idict = Z.indices type(Idict) """ Explanation: The resampler object Z also has a method indices, which yields a dictionary with the indices of the values that fall in each sampling interval. The indices are the absolute indices, i.e. they point into the large, original time series. Let's see how this works. First generate the dictionary. End of explanation """ pb.ix[3] # Show the indices for one of the keys of the Idict for k in Idict.keys(): print(k) # the key print() print(Idict[k]) # the indices print() print(pb.ix[Idict[k]]) # the values beloning to these indices break """ Explanation: A dict has keys. 
So let's show one item in this dict like so: End of explanation """ I # Show the indices for one of the keys of the Idict fig, ax = plt.subplots() clr = "brgkmcy"; i=0 Idict = Z.indices for k in Idict.keys(): I = Idict[k] # The indices belonging to this key k ax.plot(pb.ix[I].index, pb.ix[I].values, color=clr[i]) # The values have dimension [1,n] so use values[0] to get a 1D array of indices J = np.argsort(pb.ix[I].values[0])[-3:] # Need a comprehension to get the indexes because # indexing like I[J] is not allowed for lists Idx = [I[j] for j in J] ax.plot(pb.index[Idx], pb.values[Idx], color=clr[i], marker='o') i += 1; if i==len(clr): i=0 # plot the hydrological year boundaries as vertical grey lines ylim = ax.get_ylim() for k in Idict.keys(): i = Idict[k][-1] ax.plot(pb.index[[i, i]], ylim, color=[0.8, 0.8, 0.8]) plt.show() #pb.ix[I].plot(ax=ax) # the values beloning to these indices (can't omit the legend) """ Explanation: This implies that we can now plot each sub time series like so: To plot them together with the boundaries of each hydrological year, we first plot the data as a colored line, within each hydrological year. Then we plot the vertical lines that separate the hydrological years. The lines are colored light grey using color=[R, G, B] where R, G and B are all 0.8. ax=get_ylim() gets the extremes of the vertical axis, which are then used to draw the vertical lines. End of explanation """
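A compact alternative worth noting (an addition, not from the original exercise): the mean of the three highest heads per hydrological year can also be obtained with the Series method nlargest inside resample().apply(), assuming pb is still the 'NAP' head series built above.

# Mean of the 3 highest (and 3 lowest) heads per hydrological year, one line each.
highest3_compact = pb.resample("AS-APR").apply(lambda z: z.nlargest(3).mean())
print(highest3_compact.head())
lowest3_compact = pb.resample("AS-APR").apply(lambda z: z.nsmallest(3).mean())
print(lowest3_compact.head())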
ctralie/TUMTopoTimeSeries2016
Image Patches.ipynb
apache-2.0
import numpy as np %matplotlib notebook import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D from matplotlib.offsetbox import OffsetImage, AnnotationBbox from ripser import ripser from persim import plot_diagrams from GeomUtils import getGreedyPerm import sys sys.path.append("DREiMac/dreimac") from ProjectiveCoordinates import ProjectiveCoords, getStereoProjCodim1 import warnings warnings.filterwarnings('ignore') """ Explanation: Image Patches In this module, you will explore the topology of different collections of image patches. Each image patch is a square $d \times d$ region of pixels. Each pixel can be thought of as a dimension, so each patch lives in $\mathbb{R}^{d \times d}$, and a collection of patches can be thought of as a Euclidean point cloud in $\mathbb{R}^{d \times d}$ First, we perform all of the necessary library imports. End of explanation """ def getPatches(I, dim): """ Given an image I, return all of the dim x dim patches in I :param I: An M x N image :param d: The dimension of the square patches :returns P: An (M-d+1)x(N-d+1)x(d^2) array of all patches """ #http://stackoverflow.com/questions/13682604/slicing-a-numpy-image-array-into-blocks shape = np.array(I.shape*2) strides = np.array(I.strides*2) W = np.asarray(dim) shape[I.ndim:] = W shape[:I.ndim] -= W - 1 if np.any(shape < 1): raise ValueError('Window size %i is too large for image'%dim) P = np.lib.stride_tricks.as_strided(I, shape=shape, strides=strides) P = np.reshape(P, [P.shape[0]*P.shape[1], dim*dim]) return P def imscatter(X, P, dim, zoom=1): """ Plot patches in specified locations in R2 Parameters ---------- X : ndarray (N, 2) The positions of each patch in R2 P : ndarray (N, dim*dim) An array of all of the patches dim : int The dimension of each patch """ #https://stackoverflow.com/questions/22566284/matplotlib-how-to-plot-images-instead-of-points ax = plt.gca() for i in range(P.shape[0]): patch = np.reshape(P[i, :], (dim, dim)) x, y = X[i, :] im = OffsetImage(patch, zoom=zoom, cmap = 'gray') ab = AnnotationBbox(im, (x, y), xycoords='data', frameon=False) ax.add_artist(ab) ax.update_datalim(X) ax.autoscale() ax.set_xticks([]) ax.set_yticks([]) def plotPatches(P, zoom = 1): """ Plot patches in a best fitting rectangular grid """ N = P.shape[0] d = int(np.sqrt(P.shape[1])) dgrid = int(np.ceil(np.sqrt(N))) ex = np.arange(dgrid) x, y = np.meshgrid(ex, ex) X = np.zeros((N, 2)) X[:, 0] = x.flatten()[0:N] X[:, 1] = y.flatten()[0:N] imscatter(X, P, d, zoom) """ Explanation: We now define a few functions which will help us to sample patches from an image and to plot a collection of patches End of explanation """ # First create an image of a disc res = 50 R = res/2 [I, J] = np.meshgrid(np.arange(res) ,np.arange(res)) Im = ((I-R)**2 + (J-R)**2) < (0.5*R*R) Im = 1.0*Im plt.imshow(Im, interpolation='none', cmap='gray') plt.show() """ Explanation: Example 1: Patches On A Disc First, we start off by sampling patches from an image representing a disc End of explanation """ dim = 5 P = getPatches(Im, dim) #Remove redundant patches to cut down on computation time toKeep = [0] XSqr = np.sum(P**2, 1) D = XSqr[:, None] + XSqr[None, :] - 2*P.dot(P.T) for i in range(1, D.shape[0]): if np.sum(D[i, 0:i] == 0) > 0: continue toKeep.append(i) P = P[np.array(toKeep), :] plt.figure(figsize=(8, 8)) plotPatches(P, zoom=3) ax = plt.gca() ax.set_facecolor((0.7, 0.7, 0.7)) plt.show() """ Explanation: Now, sample all unique 5x5 patches from this image, for a collection of patches which lives in 25 dimensional Euclidean 
space End of explanation """ plt.figure() dgms = ripser(P, maxdim=2)['dgms'] plot_diagrams(dgms) plt.show() """ Explanation: Now, let's compute persistence diagrams up to H2 for this collection of patches Based on the diagrams, what shape do the patches concentrate on? Can you arrange the patches on that shape? What happens if you get rid of the constant all black or all white patches, and you normalize the rest of the patches to have unit norm? What topological manifold is this? End of explanation """ def getLinePatches(dim, NAngles, NOffsets, sigma): N = NAngles*NOffsets P = np.zeros((N, dim*dim)) thetas = np.linspace(0, np.pi, NAngles+1)[0:NAngles] #ps = np.linspace(-0.5*np.sqrt(2), 0.5*np.sqrt(2), NOffsets) ps = np.linspace(-1, 1, NOffsets) idx = 0 [Y, X] = np.meshgrid(np.linspace(-0.5, 0.5, dim), np.linspace(-0.5, 0.5, dim)) for i in range(NAngles): c = np.cos(thetas[i]) s = np.sin(thetas[i]) for j in range(NOffsets): patch = X*c + Y*s + ps[j] patch = np.exp(-patch**2/sigma**2) P[idx, :] = patch.flatten() idx += 1 return P P = getLinePatches(dim=10, NAngles = 16, NOffsets = 16, sigma=0.25) plt.figure(figsize=(8, 8)) plotPatches(P, zoom=2) ax = plt.gca() ax.set_facecolor((0.7, 0.7, 0.7)) plt.show() """ Explanation: Example 2: Oriented Line Segments We now examine the collection of patches which hold oriented, blurry line segments that are varying distances from the center of the patch. First, let's start by setting up the patches. Below, the "dim" variable sets the patch resolution, and the "sigma" variable sets the blurriness (a larger sigma means blurrier line segments). End of explanation """ dgmsz2 = ripser(P, coeff=2, maxdim=2)['dgms'] dgmsz3 = ripser(P, coeff=3, maxdim=2)['dgms'] plt.figure(figsize=(12, 6)) plt.subplot(121) plot_diagrams(dgmsz2) plt.title("$\mathbb{Z}/2$") plt.subplot(122) plot_diagrams(dgmsz3) plt.title("$\mathbb{Z}/3$") plt.show() """ Explanation: Now let's compute persistence diagrams for this collection of patches. This time, we will compute with both $\mathbb{Z}/2$ coefficients and $\mathbb{Z}/3$ coefficients up to H2. Based on the persistence diagrams, what shape do the patches appear to concentrate on? Can you arrange the patches on this shape to explain why? What happens to the persistence diagrams when you make sigma very small and the patches become sharper, or when you make sigma close to 1 and the patches become very blurry? Can you explain what's happening geometrically? 
End of explanation """ def plotProjBoundary(): t = np.linspace(0, 2*np.pi, 200) plt.plot(np.cos(t), np.sin(t), 'c') plt.axis('equal') ax = plt.gca() ax.arrow(-0.1, 1, 0.001, 0, head_width = 0.15, head_length = 0.2, fc = 'c', ec = 'c', width = 0) ax.arrow(0.1, -1, -0.001, 0, head_width = 0.15, head_length = 0.2, fc = 'c', ec = 'c', width = 0) ax.set_facecolor((0.35, 0.35, 0.35)) P = getLinePatches(dim=10, NAngles = 200, NOffsets = 200, sigma=0.25) # Construct projective coordinates object pcoords = ProjectiveCoords(P, n_landmarks=200) # Figure out index of maximum persistence dot in H1 I1 = pcoords.dgms_[1] cocycle_idx = np.argsort(I1[:, 0] - I1[:, 1])[0:1] # Perform projective coordinates using the representative cocycle from that point res = pcoords.get_coordinates(proj_dim=2, perc=0.99, cocycle_idx=cocycle_idx) X = res['X'] idx = getGreedyPerm(X, 400)['perm'] SFinal = getStereoProjCodim1(X[idx, :]) P = P[idx, :] plt.figure(figsize=(8, 8)) imscatter(SFinal, P, 10) plotProjBoundary() plt.show() """ Explanation: Now we will look at these patches using "projective coordinates" (finding a map to $RP^2$). End of explanation """ def getNaturalImagePatches(ndir, nsharp, dim): N = ndir*nsharp t = np.linspace(0, 2*np.pi, ndir+1)[0:ndir] a, b = np.cos(t), np.sin(t) t = np.linspace(0, 2*np.pi, nsharp+1)[0:nsharp] c, d = np.cos(t), np.sin(t) a, b, c, d = a.flatten(), b.flatten(), c.flatten(), d.flatten() hdim = int((dim-1)/2) xr = np.linspace(-1, 1, dim) X, Y = np.meshgrid(xr, xr) P = np.zeros((N, dim*dim)) idx = 0 for i in range(a.size): for j in range(c.size): proj = a[i]*X + b[i]*Y p = c[j]*proj + d[j]*(proj**2) P[idx, :] = p.flatten() idx += 1 return P res = 15 dim = 8 P = getNaturalImagePatches(res, res, dim) plt.figure(figsize=(8, 8)) plotPatches(P, zoom = 2) ax = plt.gca() ax.set_facecolor((0.15, 0.15, 0.15)) plt.show() """ Explanation: Example 3: Natural Image Patches We will now generate a set of patches that occur in "natural images," which are essentially gradients from dark to light in different directions, which are centered at the patch. 
End of explanation """ res = 200 dgmsz2 = ripser(P, coeff=2, maxdim=2)['dgms'] dgmsz3 = ripser(P, coeff=3, maxdim=2)['dgms'] I1 = dgmsz2[1] I2 = dgmsz3[1] I1 = I1[np.argsort(I1[:, 0]-I1[:, 1]), :] I2 = I2[np.argsort(I2[:, 0]-I2[:, 1]), :] print("Max 2 for Z2:\n%s"%I1[0:2, :]) print("\nMax 2 for Z3:\n%s"%I2[0:2, :]) plt.figure(figsize=(12, 6)) plt.subplot(121) plot_diagrams(dgmsz2) plt.title("$\mathbb{Z}/2$") plt.subplot(122) plot_diagrams(dgmsz3) plt.title("$\mathbb{Z}/3$") plt.show() """ Explanation: Now let's look at the persistent homology of this collection of patches with $\mathbb{Z} / 2\mathbb{Z}$ $\mathbb{Z} / 3\mathbb{Z}$ coefficients Questions What topological manifold do these patches concentrate on, based on what you see in the persistence diagrams (hint: be careful that two points may be on top of each other in H1 for $\mathbb{Z} / 2\mathbb{Z}$) End of explanation """ res = 500 dim = 8 P = getNaturalImagePatches(res, res, dim) pcoords = ProjectiveCoords(P, n_landmarks=200) # Figure out index of maximum persistence dot in H1 I1 = pcoords.dgms_[1] cocycle_idx = np.argsort(I1[:, 0] - I1[:, 1])[0:1] # Perform projective coordinates using the representative cocycle from that point res = pcoords.get_coordinates(proj_dim=2, perc=0.99, cocycle_idx=cocycle_idx) X = res['X'] idx = getGreedyPerm(X, 400)['perm'] SFinal = getStereoProjCodim1(X[idx, :]) P = P[idx, :] plt.figure(figsize=(8, 4)) plt.subplot(121) plot_diagrams(pcoords.dgms_[1], labels=['H1']) plt.subplot(122) imscatter(SFinal, P, dim) plotProjBoundary() plt.show() """ Explanation: Now let's look at these points with projective coordinates End of explanation """
mtat76/atm-py
examples/instruments_POPS_housekeeping.ipynb
mit
from atmPy.instruments.POPS import housekeeping %matplotlib inline """ Explanation: Introduction This module is in charge of reading the POPS housekeeping file and converting it to a TimeSeries instance. Imports End of explanation """ filename = './data/POPS_housekeeping.csv' hk = housekeeping.read_csv(filename) """ Explanation: Reading a housekeeping file End of explanation """ out = hk.plot_all() """ Explanation: Done! hk is an instance of TimeSeries and you can do with it what ever the instance is capable of (see here). E.g. plot stuff. End of explanation """
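For a quick look at the raw housekeeping columns without atmPy, the same file can also be opened with plain pandas. This is only a sketch: the delimiter and the position of the time column depend on the POPS file format, so index_col (and possibly sep) are assumptions to adjust.

# Peek at the raw housekeeping file with pandas alone; adjust index_col/sep to the actual layout.
import pandas as pd
raw = pd.read_csv(filename, index_col=0, parse_dates=True)
print(raw.columns.tolist())
raw.plot(subplots=True, figsize=(10, 8))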
milancurcic/lunch-bytes
Fall_2018/LB22/NeuralNetDemo.ipynb
cc0-1.0
import numpy as np import pandas as pd from sklearn import model_selection def xfer(wsum): out = 1.0 / (1.0 + np.exp(-wsum)) return out def ErrHid(output, weights, outerr): dt = np.dot(weights, outerr) ErrorHid = output * (1.0 - output) * dt return ErrorHid def ErrOut(output, targets): ErrorOut = output * (1.0 - output) * (targets - output) return ErrorOut def WgtAdj(weights, responsibities, values, learnrate): weights += (learnrate * np.outer(values, responsibilities)) return weights def BiasAdj(bias, responsibilities, learnrate): bias += (learn * responsibilities) return bias def run(examples, targets, weightsI2H, weightsH2O, biasHID, biasOUT): HiddenLayerInput = np.dot(examples, weightsI2H) + biasHID HiddenLayerOutput = xfer(wsum=HiddenLayerInput) OutputLayerInput = np.dot(HiddenLayerOutput, weightsH2O) + biasOUT Output = xfer(wsum=OutputLayerInput) return Output def nnet(examples, targets, hidden, learn): # Set number of attributes passed: number of columns in 'examples' NumAtt = examples.shape[1] # Set number of output neurons: number of columns in 'targets' NumOut = train_targets.shape[1] # Randomly initialize weight matrices (replace any zeros with 0.1) weightsI2H = np.random.uniform(-1,1, size=(NumAtt,hidden)) weightsI2H[weightsI2H==0] = 0.1 weightsH2O = np.random.uniform(-1,1,size=(hidden,NumOut)) weightsH2O[weightsH2O==0] = 0.1 # Randomly initialize biases for hidden and output layer neurons biasHID = np.random.uniform(-1,1,size=hidden) biasOUT = np.random.uniform(-1,1,size=NumOut) # Loop to pass each training example for ex in range(len(examples)): # Forward propagate examples HiddenLayerInput = np.dot(examples[ex,:], weightsI2H) + biasHID HiddenLayerOutput = xfer(wsum=HiddenLayerInput) OutputLayerInput = np.dot(HiddenLayerOutput, weightsH2O) + biasOUT Output = xfer(wsum=OutputLayerInput) # Back-proagate error: calculaterror responsibilities for output and hidden layer neurons OutputErr = ErrOut(output=Output, targets=targets[ex,:]) HiddenErr = ErrHid(output=HiddenLayerOutput, weights=weightsH2O, outerr=OutputErr) # Adjust weights weightsI2H += learn * np.outer(examples[ex,:], HiddenErr) weightsH2O += learn * np.outer(HiddenLayerOutput, OutputErr) biasHID += learn * HiddenErr biasOUT += learn * OutputErr # Correctly classified? if (train_targets[ex,0] < train_targets[ex,1]) == (Output[0] < Output[1]): print('Correct') else: print('Incorrect') return weightsI2H, weightsH2O, biasHID, biasOUT """ Explanation: Create and test a simple neural network Lunch Bytes (LB22) • October 26, 2018 • Matt Grossi (matthew.grossi at rsmas.miami.edu) This document provides a practical follow-up to my talk, Peeking under the hood of an artificial neural network. Let's set up a simple neural network! First we define functions for the routines we'll be using frequently. The workflow for [most of] these come from the presentation slides. Disclaimer: There are undoubtedly far more efficient ways to carry out these operations. I've chosen overly descriptive variables names to be as descriptive (and hopefully as helpful) as possible. I have not thoroughly checked this over for correct performance, typos, etc. End of explanation """ data = pd.read_csv("~/Documents/Classes/RSMAS/MachLearn/Project1/banknote.csv", names=['var', 'skew', 'curt', 'ent', 'class-yes']) data.head(3) data.tail(3) """ Explanation: Let's test this on a data set from the UC Irving Machine Learning Respository (https://archive.ics.uci.edu/ml/index.php). 
This handy respository contains hundreds of pre-classified data sets that are ideal for testing machine learning algorithms. Consider the banknote authentication data set. The documentation states that it contains 1372 examples of banknote-like specimens that are classified as either authentic (class 1) or not authentic (class 0) based on four attributes: variance, skewness, curtosis, and entropy of the image. End of explanation """ data = data.sample(frac=1).reset_index(drop=True) """ Explanation: Before feeding a neural net, we need to set up our data. First, we note that these data are sorted by class (the last column, 0s and 1s), as is often the case with pre-classified data. It is good practice to shuffle the data to help ensure that the distribution of classes is roughly the same in both the training and testing subsets. End of explanation """ data['class-no'] = np.where(data['class-yes']==0, 1, 0) data.head() """ Explanation: We can think of these data as having two classes: authentic and not authentic. Our data set already contains a column that contains 1s whenever the example is in the authentic class. Now let's add a column that has a 1 when the example is in the not authentic class. Then print the first few lines to make sure we have what we think we have. End of explanation """ data_norm = (data - data.min(axis=0)) / (data.max(axis=0) - data.min(axis=0)) """ Explanation: Next we need to normalize our data such that all columns are between 0 and 1: End of explanation """ train, test = model_selection.train_test_split(data_norm, test_size = 0.2) train_examples = np.array(train.iloc[:,0:4], dtype=np.float64) train_targets = np.array(train[['class-yes', 'class-no']], dtype=np.float64) test_examples = np.array(test.iloc[:,0:4], dtype=np.float64) test_targets = np.array(test[['class-yes', 'class-no']], dtype=np.float64) """ Explanation: Now, split the data into training and testing subsets. Here we choose to use 80% of the examples for training and 20% for testing. Finally, separate into examples, which will contain the attributes, and the targets, which will contain the class flag. End of explanation """ #train_targets[np.where(train_targets==1)] = 0.8 #train_targets[np.where(train_targets==0)] = 0.2 #test_targets[np.where(test_targets==1)] = 0.8 #test_targets[np.where(test_targets==0)] = 0.2 """ Explanation: Finally, replace targets 1 and 0 with 0.8 and 0.2. (Why?) End of explanation """ I2H, H2O, bHID, bOUT = nnet(examples=train_examples, targets=train_targets, hidden=5, learn=0.05) """ Explanation: Now train the neural net! End of explanation """ out = run(examples=test_examples, targets=test_targets, weightsI2H=I2H, weightsH2O=H2O, biasHID=bHID, biasOUT=bOUT) out[0:5,] """ Explanation: For instructive purposes, the function prints whether each example is correctly classified or not. One can see the variation in performance from example to example. Keep in mind that after each incorrect classification, slight adjustments are made to the internal weights. The function 'nnet' loops through every training example once, representing one training epoch. In practice, the network would be trained over many (sometimes hundreds) of epochs, depending on the number of examples and the complexity of the data. At the end of each epoch, the model is run on the testing data and the mean squared error (MSE) is calculated over all of these training examples to assess the model performance. As the model learns, the MSE should decrease. Just for fun, let's run our neural net on the test examples. 
Notice that 'nnet' outputs a weight matrix and bias vector for each layer. All we need to do to run the model is forward-propagate the testing examples using these final weights and biases. No weight adjustments will be made, because we are running a trained model. See the 'run' function above. End of explanation """ # Predicted classes predicted_classes = np.array(pd.DataFrame(out).idxmax(axis=1)) print(predicted_classes) # Target classes target_classes = np.array(pd.DataFrame(test_targets).idxmax(axis=1)) print(target_classes) """ Explanation: What do these numbers mean? Remember that our target matrix consists of 0.8s and 0.2s (originally 1s and 0s) and correspond to whether the example was in the class authentic or not authentic. Our neural network had 2 output neurons, one for each class. The columns of 'out' therefore represent these two classes, in the same order as in the 'data'. Because this is a simple classification problem, we can choose for each example the class (i.e., column) with the higher number: End of explanation """ sum(target_classes == predicted_classes)/len(test_targets) * 100 """ Explanation: Remember that python uses zero-based indexing. Thus, a 0 here means the first column contains the highest number, meaning the example is authentic. This is admittedly a little confusing. How does this compare to the actual testing data? The neural net is correct if the predicted class agrees with the actual class: End of explanation """
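As a small addition to the demo, the test-set mean squared error mentioned above can be computed directly from out and test_targets, alongside the percent accuracy already derived from the class comparison.

# Test-set MSE of the single-epoch network, plus the accuracy restated with a mean.
mse = np.mean((out - test_targets)**2)
print('Test-set MSE after one epoch:', mse)
accuracy = np.mean(target_classes == predicted_classes) * 100
print('Test-set accuracy (%):', accuracy)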
AllenDowney/ThinkStats2
workshop/effect_size.ipynb
gpl-3.0
%matplotlib inline import numpy import scipy.stats import matplotlib.pyplot as plt from ipywidgets import interact, interactive, fixed import ipywidgets as widgets # seed the random number generator so we all get the same results numpy.random.seed(17) """ Explanation: Effect Size Examples and exercises for a tutorial on statistical inference. Copyright 2016 Allen Downey License: Creative Commons Attribution 4.0 International End of explanation """ mu1, sig1 = 178, 7.7 male_height = scipy.stats.norm(mu1, sig1) mu2, sig2 = 163, 7.3 female_height = scipy.stats.norm(mu2, sig2) """ Explanation: Part One To explore statistics that quantify effect size, we'll look at the difference in height between men and women. I used data from the Behavioral Risk Factor Surveillance System (BRFSS) to estimate the mean and standard deviation of height in cm for adult women and men in the U.S. I'll use scipy.stats.norm to represent the distributions. The result is an rv object (which stands for random variable). End of explanation """ def eval_pdf(rv, num=4): mean, std = rv.mean(), rv.std() xs = numpy.linspace(mean - num*std, mean + num*std, 100) ys = rv.pdf(xs) return xs, ys """ Explanation: The following function evaluates the normal (Gaussian) probability density function (PDF) within 4 standard deviations of the mean. It takes and rv object and returns a pair of NumPy arrays. End of explanation """ xs, ys = eval_pdf(male_height) plt.plot(xs, ys, label='male', linewidth=4, color='C0') xs, ys = eval_pdf(female_height) plt.plot(xs, ys, label='female', linewidth=4, color='C1') plt.xlabel('height (cm)'); """ Explanation: Here's what the two distributions look like. End of explanation """ male_sample = male_height.rvs(1000) female_sample = female_height.rvs(1000) """ Explanation: Let's assume for now that those are the true distributions for the population. I'll use rvs to generate random samples from the population distributions. Note that these are totally random, totally representative samples, with no measurement error! End of explanation """ mean1, std1 = male_sample.mean(), male_sample.std() mean1, std1 """ Explanation: Both samples are NumPy arrays. Now we can compute sample statistics like the mean and standard deviation. End of explanation """ mean2, std2 = female_sample.mean(), female_sample.std() mean2, std2 """ Explanation: The sample mean is close to the population mean, but not exact, as expected. End of explanation """ difference_in_means = male_sample.mean() - female_sample.mean() difference_in_means # in cm """ Explanation: And the results are similar for the female sample. Now, there are many ways to describe the magnitude of the difference between these distributions. An obvious one is the difference in the means: End of explanation """ # Solution goes here """ Explanation: On average, men are 14--15 centimeters taller. For some applications, that would be a good way to describe the difference, but there are a few problems: Without knowing more about the distributions (like the standard deviations) it's hard to interpret whether a difference like 15 cm is a lot or not. The magnitude of the difference depends on the units of measure, making it hard to compare across different studies. There are a number of ways to quantify the difference between distributions. A simple option is to express the difference as a percentage of the mean. Exercise 1: what is the relative difference in means, expressed as a percentage? 
End of explanation """ simple_thresh = (mean1 + mean2) / 2 simple_thresh """ Explanation: STOP HERE: We'll regroup and discuss before you move on. Part Two An alternative way to express the difference between distributions is to see how much they overlap. To define overlap, we choose a threshold between the two means. The simple threshold is the midpoint between the means: End of explanation """ thresh = (std1 * mean2 + std2 * mean1) / (std1 + std2) thresh """ Explanation: A better, but slightly more complicated threshold is the place where the PDFs cross. End of explanation """ male_below_thresh = sum(male_sample < thresh) male_below_thresh """ Explanation: In this example, there's not much difference between the two thresholds. Now we can count how many men are below the threshold: End of explanation """ female_above_thresh = sum(female_sample > thresh) female_above_thresh """ Explanation: And how many women are above it: End of explanation """ male_overlap = male_below_thresh / len(male_sample) female_overlap = female_above_thresh / len(female_sample) male_overlap, female_overlap """ Explanation: The "overlap" is the area under the curves that ends up on the wrong side of the threshold. End of explanation """ misclassification_rate = (male_overlap + female_overlap) / 2 misclassification_rate """ Explanation: In practical terms, you might report the fraction of people who would be misclassified if you tried to use height to guess sex, which is the average of the male and female overlap rates: End of explanation """ # Solution goes here # Solution goes here """ Explanation: Another way to quantify the difference between distributions is what's called "probability of superiority", which is a problematic term, but in this context it's the probability that a randomly-chosen man is taller than a randomly-chosen woman. Exercise 2: Suppose I choose a man and a woman at random. What is the probability that the man is taller? HINT: You can zip the two samples together and count the number of pairs where the male is taller, or use NumPy array operations. End of explanation """ def CohenEffectSize(group1, group2): """Compute Cohen's d. group1: Series or NumPy array group2: Series or NumPy array returns: float """ diff = group1.mean() - group2.mean() n1, n2 = len(group1), len(group2) var1 = group1.var() var2 = group2.var() pooled_var = (n1 * var1 + n2 * var2) / (n1 + n2) d = diff / numpy.sqrt(pooled_var) return d """ Explanation: Overlap (or misclassification rate) and "probability of superiority" have two good properties: As probabilities, they don't depend on units of measure, so they are comparable between studies. They are expressed in operational terms, so a reader has a sense of what practical effect the difference makes. Cohen's effect size There is one other common way to express the difference between distributions. Cohen's $d$ is the difference in means, standardized by dividing by the standard deviation. Here's the math notation: $ d = \frac{\bar{x}_1 - \bar{x}_2} s $ where $s$ is the pooled standard deviation: $s = \sqrt{\frac{n_1 s^2_1 + n_2 s^2_2}{n_1+n_2}}$ Here's a function that computes it: End of explanation """ CohenEffectSize(male_sample, female_sample) """ Explanation: Computing the denominator is a little complicated; in fact, people have proposed several ways to do it. This implementation uses the "pooled standard deviation", which is a weighted average of the standard deviations of the two groups. And here's the result for the difference in height between men and women. 
End of explanation """ def overlap_superiority(control, treatment, n=1000): """Estimates overlap and superiority based on a sample. control: scipy.stats rv object treatment: scipy.stats rv object n: sample size """ control_sample = control.rvs(n) treatment_sample = treatment.rvs(n) thresh = (control.mean() + treatment.mean()) / 2 control_above = sum(control_sample > thresh) treatment_below = sum(treatment_sample < thresh) overlap = (control_above + treatment_below) / n superiority = (treatment_sample > control_sample).mean() return overlap, superiority """ Explanation: Most people don't have a good sense of how big $d=1.9$ is, so let's make a visualization to get calibrated. Here's a function that encapsulates the code we already saw for computing overlap and probability of superiority. End of explanation """ def plot_pdfs(cohen_d=2): """Plot PDFs for distributions that differ by some number of stds. cohen_d: number of standard deviations between the means """ control = scipy.stats.norm(0, 1) treatment = scipy.stats.norm(cohen_d, 1) xs, ys = eval_pdf(control) plt.fill_between(xs, ys, label='control', color='C1', alpha=0.5) xs, ys = eval_pdf(treatment) plt.fill_between(xs, ys, label='treatment', color='C0', alpha=0.5) o, s = overlap_superiority(control, treatment) plt.text(0, 0.05, 'overlap ' + str(o)) plt.text(0, 0.15, 'superiority ' + str(s)) plt.show() #print('overlap', o) #print('superiority', s) """ Explanation: Here's the function that takes Cohen's $d$, plots normal distributions with the given effect size, and prints their overlap and superiority. End of explanation """ plot_pdfs(2) """ Explanation: Here's an example that demonstrates the function: End of explanation """ slider = widgets.FloatSlider(min=0, max=4, value=2) interact(plot_pdfs, cohen_d=slider); """ Explanation: And an interactive widget you can use to visualize what different values of $d$ mean: End of explanation """
astrojhgu/mcupy
example/bimodal/README.ipynb
bsd-3-clause
%matplotlib inline from mcupy.graph import * from mcupy.utils import * from mcupy.nodes import * from mcupy.jagsparser import * import scipy import seaborn import pylab """ Explanation: A bimodal example This is a sample to infer the parameters of a bimodal model, which is a mixture of two Normal distribution components. The data is read from data6.2.1.dat.R, which is from First of course, import necessary packages. End of explanation """ data=parseJagsDataFile('data6.2.1.dat.R') obsval=data['obsval'] err=data['err'] """ Explanation: Then read the data from a jags data file End of explanation """ dummy=pylab.hist(obsval,bins=10) """ Explanation: Then Let's plot the histogram of the data. End of explanation """ g=Graph() p=FixedUniformNode(1e-5,1-1e-5).withTag("p") sig1=FixedUniformNode(1e-10,10).withTag("sig1") sig2=FixedUniformNode(1e-10,10).withTag("sig2") cent1=FixedUniformNode(4,10).withTag("cent1") cent2Upper=ConstNode(10+1e-6).withTag("cent2Upper") cent2=UniformNode(cent1,cent2Upper).withTag("cent2") for i in range(0,len(obsval)): b=BernNode(p).inGroup("b") cent=CondNode(b,cent1,cent2).inGroup("cent") sig=CondNode(b,sig1,sig2).inGroup("sig") val=NormalNode(cent,sig).inGroup("val") obsvalNode=NormalNode(val,ConstNode(err[i])).withObservedValue(obsval[i]).inGroup("obsval") g.addNode(obsvalNode) """ Explanation: Then compose the Bayesian network End of explanation """ display_graph(g) """ Explanation: Show the structure of the graph to check it. End of explanation """ monP=g.getMonitor(p) monCent1=g.getMonitor(cent1) monCent2=g.getMonitor(cent2) monSig1=g.getMonitor(sig1) monSig2=g.getMonitor(sig2) """ Explanation: Declare some monitors to record the results. End of explanation """ results=[] for i in log_progress(range(0,10000)): g.sample() for i in log_progress(range(0,10000)): g.sample() results.append([monP.get(),monCent1.get(),monCent2.get(),monSig1.get(),monSig2.get()]) results=scipy.array(results) """ Explanation: Burn 10000 times and sample 10000 times. End of explanation """ dummy=pylab.hist(results[:,0],bins=100) dummy=pylab.hist(results[:,1],bins=100) dummy=pylab.hist(results[:,2],bins=100) dummy=pylab.hist(results[:,3],bins=100) dummy=pylab.hist(results[:,4],bins=100) seaborn.jointplot(results[:,1],results[:,2],kind='hex') seaborn.jointplot(results[:,0],results[:,1],kind='hex') """ Explanation: Plot the results. End of explanation """
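Beyond the histograms and joint plots, it can help to reduce the posterior samples to a few summary numbers. A short sketch, assuming results is the (n_samples, 5) array collected above with columns p, cent1, cent2, sig1, sig2:

# Posterior summaries (mean, median, 95% interval) for each sampled parameter.
import numpy as np
names = ['p', 'cent1', 'cent2', 'sig1', 'sig2']
for k, name in enumerate(names):
    samples = results[:, k]
    lo, med, hi = np.percentile(samples, [2.5, 50.0, 97.5])
    print('%-6s mean=%8.4f median=%8.4f 95%% interval=(%8.4f, %8.4f)'
          % (name, samples.mean(), med, lo, hi))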
hktxt/MachineLearning
Backpropagation.ipynb
gpl-3.0
%run "readonly/BackpropModule.ipynb" # PACKAGE import numpy as np import matplotlib.pyplot as plt # PACKAGE # First load the worksheet dependencies. # Here is the activation function and its derivative. sigma = lambda z : 1 / (1 + np.exp(-z)) d_sigma = lambda z : np.cosh(z/2)**(-2) / 4 # This function initialises the network with it's structure, it also resets any training already done. def reset_network (n1 = 6, n2 = 7, random=np.random) : global W1, W2, W3, b1, b2, b3 W1 = random.randn(n1, 1) / 2 W2 = random.randn(n2, n1) / 2 W3 = random.randn(2, n2) / 2 b1 = random.randn(n1, 1) / 2 b2 = random.randn(n2, 1) / 2 b3 = random.randn(2, 1) / 2 # This function feeds forward each activation to the next layer. It returns all weighted sums and activations. def network_function(a0) : z1 = W1 @ a0 + b1 a1 = sigma(z1) z2 = W2 @ a1 + b2 a2 = sigma(z2) z3 = W3 @ a2 + b3 a3 = sigma(z3) return a0, z1, a1, z2, a2, z3, a3 # This is the cost function of a neural network with respect to a training set. def cost(x, y) : return np.linalg.norm(network_function(x)[-1] - y)**2 / x.size """ Explanation: Backpropagation Instructions In this assignment, you will train a neural network to draw a curve. The curve takes one input variable, the amount travelled along the curve from 0 to 1, and returns 2 outputs, the 2D coordinates of the position of points on the curve. To help capture the complexity of the curve, we shall use two hidden layers in our network with 6 and 7 neurons respectively. You will be asked to complete functions that calculate the Jacobian of the cost function, with respect to the weights and biases of the network. Your code will form part of a stochastic steepest descent algorithm that will train your network. Matrices in Python Recall from assignments in the previous course in this specialisation that matrices can be multiplied together in two ways. Element wise: when two matrices have the same dimensions, matrix elements in the same position in each matrix are multiplied together In python this uses the '$*$' operator. python A = B * C Matrix multiplication: when the number of columns in the first matrix is the same as the number of rows in the second. In python this uses the '$@$' operator python A = B @ C This assignment will not test which ones to use where, but it will use both in the starter code presented to you. There is no need to change these or worry about their specifics. How to submit To complete the assignment, edit the code in the cells below where you are told to do so. Once you are finished and happy with it, press the Submit Assignment button at the top of this worksheet. Test your code using the cells at the bottom of the notebook before you submit. Please don't change any of the function names, as these will be checked by the grading script. Feed forward In the following cell, we will define functions to set up our neural network. Namely an activation function, $\sigma(z)$, it's derivative, $\sigma'(z)$, a function to initialise weights and biases, and a function that calculates each activation of the network using feed-forward. Recall the feed-forward equations, $$ \mathbf{a}^{(n)} = \sigma(\mathbf{z}^{(n)}) $$ $$ \mathbf{z}^{(n)} = \mathbf{W}^{(n)}\mathbf{a}^{(n-1)} + \mathbf{b}^{(n)} $$ In this worksheet we will use the logistic function as our activation function, rather than the more familiar $\tanh$. $$ \sigma(\mathbf{z}) = \frac{1}{1 + \exp(-\mathbf{z})} $$ There is no need to edit the following cells. They do not form part of the assessment. 
You may wish to study how it works though. Run the following cells before continuing. End of explanation """ # GRADED FUNCTION # Jacobian for the third layer weights. There is no need to edit this function. def J_W3 (x, y) : # First get all the activations and weighted sums at each layer of the network. a0, z1, a1, z2, a2, z3, a3 = network_function(x) # We'll use the variable J to store parts of our result as we go along, updating it in each line. # Firstly, we calculate dC/da3, using the expressions above. J = 2 * (a3 - y) # Next multiply the result we've calculated by the derivative of sigma, evaluated at z3. J = J * d_sigma(z3) # Then we take the dot product (along the axis that holds the training examples) with the final partial derivative, # i.e. dz3/dW3 = a2 # and divide by the number of training examples, for the average over all training examples. J = J @ a2.T / x.size # Finally return the result out of the function. return J # In this function, you will implement the jacobian for the bias. # As you will see from the partial derivatives, only the last partial derivative is different. # The first two partial derivatives are the same as previously. # ===YOU SHOULD EDIT THIS FUNCTION=== def J_b3 (x, y) : # As last time, we'll first set up the activations. a0, z1, a1, z2, a2, z3, a3 = network_function(x) # Next you should implement the first two partial derivatives of the Jacobian. # ===COPY TWO LINES FROM THE PREVIOUS FUNCTION TO SET UP THE FIRST TWO JACOBIAN TERMS=== J = 2 * (a3 - y) J = J * d_sigma(z3) # For the final line, we don't need to multiply by dz3/db3, because that is multiplying by 1. # We still need to sum over all training examples however. # There is no need to edit this line. J = np.sum(J, axis=1, keepdims=True) / x.size return J """ Explanation: Backpropagation In the next cells, you will be asked to complete functions for the Jacobian of the cost function with respect to the weights and biases. We will start with layer 3, which is the easiest, and work backwards through the layers. We'll define our Jacobians as, $$ \mathbf{J}{\mathbf{W}^{(3)}} = \frac{\partial C}{\partial \mathbf{W}^{(3)}} $$ $$ \mathbf{J}{\mathbf{b}^{(3)}} = \frac{\partial C}{\partial \mathbf{b}^{(3)}} $$ etc., where $C$ is the average cost function over the training set. i.e., $$ C = \frac{1}{N}\sum_k C_k $$ You calculated the following in the practice quizzes, $$ \frac{\partial C}{\partial \mathbf{W}^{(3)}} = \frac{\partial C}{\partial \mathbf{a}^{(3)}} \frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{z}^{(3)}} \frac{\partial \mathbf{z}^{(3)}}{\partial \mathbf{W}^{(3)}} ,$$ for the weight, and similarly for the bias, $$ \frac{\partial C}{\partial \mathbf{b}^{(3)}} = \frac{\partial C}{\partial \mathbf{a}^{(3)}} \frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{z}^{(3)}} \frac{\partial \mathbf{z}^{(3)}}{\partial \mathbf{b}^{(3)}} .$$ With the partial derivatives taking the form, $$ \frac{\partial C}{\partial \mathbf{a}^{(3)}} = 2(\mathbf{a}^{(3)} - \mathbf{y}) $$ $$ \frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{z}^{(3)}} = \sigma'({z}^{(3)})$$ $$ \frac{\partial \mathbf{z}^{(3)}}{\partial \mathbf{W}^{(3)}} = \mathbf{a}^{(2)}$$ $$ \frac{\partial \mathbf{z}^{(3)}}{\partial \mathbf{b}^{(3)}} = 1$$ We'll do the J_W3 ($\mathbf{J}_{\mathbf{W}^{(3)}}$) function for you, so you can see how it works. You should then be able to adapt the J_b3 function, with help, yourself. End of explanation """ # GRADED FUNCTION # Compare this function to J_W3 to see how it changes. 
# There is no need to edit this function. def J_W2 (x, y) : #The first two lines are identical to in J_W3. a0, z1, a1, z2, a2, z3, a3 = network_function(x) J = 2 * (a3 - y) # the next two lines implement da3/da2, first σ' and then W3. J = J * d_sigma(z3) J = (J.T @ W3).T # then the final lines are the same as in J_W3 but with the layer number bumped down. J = J * d_sigma(z2) J = J @ a1.T / x.size return J # As previously, fill in all the incomplete lines. # ===YOU SHOULD EDIT THIS FUNCTION=== def J_b2 (x, y) : a0, z1, a1, z2, a2, z3, a3 = network_function(x) J = 2 * (a3 - y) J = J * d_sigma(z3) J = (J.T @ W3).T J = J * d_sigma(z2) J = np.sum(J, axis=1, keepdims=True) / x.size return J """ Explanation: We'll next do the Jacobian for the Layer 2. The partial derivatives for this are, $$ \frac{\partial C}{\partial \mathbf{W}^{(2)}} = \frac{\partial C}{\partial \mathbf{a}^{(3)}} \left( \frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{a}^{(2)}} \right) \frac{\partial \mathbf{a}^{(2)}}{\partial \mathbf{z}^{(2)}} \frac{\partial \mathbf{z}^{(2)}}{\partial \mathbf{W}^{(2)}} ,$$ $$ \frac{\partial C}{\partial \mathbf{b}^{(2)}} = \frac{\partial C}{\partial \mathbf{a}^{(3)}} \left( \frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{a}^{(2)}} \right) \frac{\partial \mathbf{a}^{(2)}}{\partial \mathbf{z}^{(2)}} \frac{\partial \mathbf{z}^{(2)}}{\partial \mathbf{b}^{(2)}} .$$ This is very similar to the previous layer, with two exceptions: * There is a new partial derivative, in parentheses, $\frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{a}^{(2)}}$ * The terms after the parentheses are now one layer lower. Recall the new partial derivative takes the following form, $$ \frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{a}^{(2)}} = \frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{z}^{(3)}} \frac{\partial \mathbf{z}^{(3)}}{\partial \mathbf{a}^{(2)}} = \sigma'(\mathbf{z}^{(3)}) \mathbf{W}^{(3)} $$ To show how this changes things, we will implement the Jacobian for the weight again and ask you to implement it for the bias. End of explanation """ # GRADED FUNCTION # Fill in all incomplete lines. # ===YOU SHOULD EDIT THIS FUNCTION=== def J_W1 (x, y) : a0, z1, a1, z2, a2, z3, a3 = network_function(x) J = 2 * (a3 - y) J = J * d_sigma(z3) J = (J.T @ W3).T J = J * d_sigma(z2) J = (J.T @ W2).T J = J * d_sigma(z1) J = J @ a0.T / x.size return J # Fill in all incomplete lines. # ===YOU SHOULD EDIT THIS FUNCTION=== def J_b1 (x, y) : a0, z1, a1, z2, a2, z3, a3 = network_function(x) J = 2 * (a3 - y) J = J * d_sigma(z3) J = (J.T @ W3).T J = J * d_sigma(z2) J = (J.T @ W2).T J = J * d_sigma(z1) J = np.sum(J, axis=1, keepdims=True) / x.size return J """ Explanation: Layer 1 is very similar to Layer 2, but with an addition partial derivative term. 
$$ \frac{\partial C}{\partial \mathbf{W}^{(1)}} = \frac{\partial C}{\partial \mathbf{a}^{(3)}} \left( \frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{a}^{(2)}} \frac{\partial \mathbf{a}^{(2)}}{\partial \mathbf{a}^{(1)}} \right) \frac{\partial \mathbf{a}^{(1)}}{\partial \mathbf{z}^{(1)}} \frac{\partial \mathbf{z}^{(1)}}{\partial \mathbf{W}^{(1)}} ,$$ $$ \frac{\partial C}{\partial \mathbf{b}^{(1)}} = \frac{\partial C}{\partial \mathbf{a}^{(3)}} \left( \frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{a}^{(2)}} \frac{\partial \mathbf{a}^{(2)}}{\partial \mathbf{a}^{(1)}} \right) \frac{\partial \mathbf{a}^{(1)}}{\partial \mathbf{z}^{(1)}} \frac{\partial \mathbf{z}^{(1)}}{\partial \mathbf{b}^{(1)}} .$$ You should be able to adapt lines from the previous cells to complete both the weight and bias Jacobian. End of explanation """ x, y = training_data() reset_network() """ Explanation: Test your code before submission To test the code you've written above, run all previous cells (select each cell, then press the play button [ ▶| ] or press shift-enter). You can then use the code below to test out your function. You don't need to submit these cells; you can edit and run them as much as you like. First, we generate training data, and generate a network with randomly assigned weights and biases. End of explanation """ plot_training(x, y, iterations=50000, aggression=7, noise=1) """ Explanation: Next, if you've implemented the assignment correctly, the following code will iterate through a steepest descent algorithm using the Jacobians you have calculated. The function will plot the training data (in green), and your neural network solutions in pink for each iteration, and orange for the last output. It takes about 50,000 iterations to train this network. We can split this up though - 10,000 iterations should take about a minute to run. Run the line below as many times as you like. End of explanation """
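Optionally (this sketch is not part of the graded assignment) you can gain extra confidence in the hand-derived Jacobians with a finite-difference check; it assumes the network, cost and training data from the cells above are already in scope, and it only checks J_b3, but the other Jacobians can be checked the same way.

```python
import numpy as np

def numerical_J_b3(x, y, eps=1e-6):
    """Central-difference estimate of dC/db3, element by element."""
    approx = np.zeros_like(b3)
    for i in range(b3.shape[0]):
        b3[i, 0] += eps
        c_plus = cost(x, y)
        b3[i, 0] -= 2 * eps
        c_minus = cost(x, y)
        b3[i, 0] += eps  # restore the original bias
        approx[i, 0] = (c_plus - c_minus) / (2 * eps)
    return approx

# The analytic and numerical Jacobians should agree to several decimal places.
print(np.max(np.abs(J_b3(x, y) - numerical_J_b3(x, y))))
```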
turbomanage/training-data-analyst
quests/serverlessml/07_caip/solution/export_data.ipynb
apache-2.0
%%bash export PROJECT=$(gcloud config list project --format "value(core.project)") echo "Your current GCP Project Name is: "$PROJECT import os PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1 # Do not change these os.environ["PROJECT"] = PROJECT os.environ["REGION"] = REGION os.environ["BUCKET"] = PROJECT + '-ml' # DEFAULT BUCKET WILL BE PROJECT ID -ml if PROJECT == "your-gcp-project-here": print("Don't forget to update your PROJECT name! Currently:", PROJECT) """ Explanation: Exporting data from BigQuery to Google Cloud Storage In this notebook, we export BigQuery data to GCS so that we can reuse our Keras model that was developed on CSV data. End of explanation """ %%bash ## Create a BigQuery dataset for serverlessml if it doesn't exist datasetexists=$(bq ls -d | grep -w serverlessml) if [ -n "$datasetexists" ]; then echo -e "BigQuery dataset already exists, let's not recreate it." else echo "Creating BigQuery dataset titled: serverlessml" bq --location=US mk --dataset \ --description 'Taxi Fare' \ $PROJECT:serverlessml echo "\nHere are your current datasets:" bq ls fi ## Create new ML GCS bucket if it doesn't exist already... exists=$(gsutil ls -d | grep -w gs://${PROJECT}-ml/) if [ -n "$exists" ]; then echo -e "Bucket exists, let's not recreate it." else echo "Creating a new GCS bucket." gsutil mb -l ${REGION} gs://${PROJECT}-ml echo -e "\nHere are your current buckets:" gsutil ls fi """ Explanation: Create BigQuery dataset and GCS Bucket If you haven't already, create the the BigQuery dataset and GCS Bucket we will need. End of explanation """ %%bigquery CREATE OR REPLACE TABLE serverlessml.feateng_training_data AS SELECT (tolls_amount + fare_amount) AS fare_amount, pickup_datetime, pickup_longitude AS pickuplon, pickup_latitude AS pickuplat, dropoff_longitude AS dropofflon, dropoff_latitude AS dropofflat, passenger_count*1.0 AS passengers, 'unused' AS key FROM `nyc-tlc.yellow.trips` WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 1000)) = 1 AND trip_distance > 0 AND fare_amount >= 2.5 AND pickup_longitude > -78 AND pickup_longitude < -70 AND dropoff_longitude > -78 AND dropoff_longitude < -70 AND pickup_latitude > 37 AND pickup_latitude < 45 AND dropoff_latitude > 37 AND dropoff_latitude < 45 AND passenger_count > 0 """ Explanation: Create BigQuery tables Let's create a table with 1 million examples. Note that the order of columns is exactly what was in our CSV files. End of explanation """ %%bigquery CREATE OR REPLACE TABLE serverlessml.feateng_valid_data AS SELECT (tolls_amount + fare_amount) AS fare_amount, pickup_datetime, pickup_longitude AS pickuplon, pickup_latitude AS pickuplat, dropoff_longitude AS dropofflon, dropoff_latitude AS dropofflat, passenger_count*1.0 AS passengers, 'unused' AS key FROM `nyc-tlc.yellow.trips` WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2 AND trip_distance > 0 AND fare_amount >= 2.5 AND pickup_longitude > -78 AND pickup_longitude < -70 AND dropoff_longitude > -78 AND dropoff_longitude < -70 AND pickup_latitude > 37 AND pickup_latitude < 45 AND dropoff_latitude > 37 AND dropoff_latitude < 45 AND passenger_count > 0 """ Explanation: Make the validation dataset be 1/10 the size of the training dataset. 
End of explanation """ %%bash OUTDIR=gs://$BUCKET/quests/serverlessml/data echo "Deleting current contents of $OUTDIR" gsutil -m -q rm -rf $OUTDIR echo "Extracting training data to $OUTDIR" bq --location=US extract \ --destination_format CSV \ --field_delimiter "," --noprint_header \ serverlessml.feateng_training_data \ $OUTDIR/taxi-train-*.csv echo "Extracting validation data to $OUTDIR" bq --location=US extract \ --destination_format CSV \ --field_delimiter "," --noprint_header \ serverlessml.feateng_valid_data \ $OUTDIR/taxi-valid-*.csv gsutil ls -l $OUTDIR !gsutil cat gs://$BUCKET/quests/serverlessml/data/taxi-train-000000000000.csv | head -2 """ Explanation: Export the tables as CSV files Change the BUCKET variable below to match a bucket that you own. End of explanation """
ThomasProctor/Slide-Rule-Data-Intensive
UdacityMachineLearning/Lessons 1-3 mini-projects.ipynb
mit
import sklearn.naive_bayes as nb """ Explanation: Naive Bayes End of explanation """ alabels_test=np.array(labels_test) alabels_test.shape features_test.shape """ Explanation: My own exploration of the data End of explanation """ features_test[:10].sum(axis=1) features_test.sum()/features_test.shape[0] """ Explanation: This indicates axis 0 is emails, and 1 is words(features?)? End of explanation """ %%time model=nb.GaussianNB() model.fit(features_train,labels_train) %%time testprediction=model.predict(features_test) (testprediction==alabels_test).sum()/alabels_test.shape[0] """ Explanation: I don't understand how this information is coded. The logical thing to me would be an integer record of whether a word is used in an email, but since it appears that at least most of the first ten contain non integer amounts, that doesn't seem to be the case. Anyway, I can't seem to understand the average length of the emails, which would be an indicator of how confident one can be of writers of the emails. Q1: What is the accuracy? End of explanation """ from sklearn import svm """ Explanation: This is astoundingly good. I would never guess that people are so reliable in their choice of words. Q2: Which takes longer, training or prediction We see that training takes a good deal longer; not surprising at all. SVM End of explanation """ sub_features_train=features_train[:int(round(features_train.shape[0]/100))] sub_labels_train=labels_train[:int(round(features_train.shape[0]/100))] kernels=['linear', 'poly', 'rbf', 'sigmoid'] for i in kernels: print(i) model=svm.SVC(kernel=i) %time model.fit(sub_features_train,sub_labels_train) print((model.predict(features_test)==alabels_test).sum()/alabels_test.shape[0]) """ Explanation: My own fooling around. End of explanation """ model=svm.SVC(kernel='linear') %%time model.fit(features_train,labels_train) %%time testprediction=model.predict(features_test) np.sum(testprediction==alabels_test)/alabels_test.shape[0] """ Explanation: Wow. I did not think that the kernel mattered that much. I guess a highly complex kernel requires some tuning of hyper-parameters? I really need to understand better how to tailor svm kernels to the data. It's pretty hard to do though, as long as this data is a black box. Q1: What is the accuracy of the linear classifier? End of explanation """ #training time lsvm=(2*60+13)*1000 lsvm/805 #prediction time 15.9*1000/117 """ Explanation: Q2: How do the training and prediction times compare to Naive Bayes? End of explanation """ sub_features_train=features_train[:int(round(features_train.shape[0]/100))] sub_labels_train=labels_train[:int(round(features_train.shape[0]/100))] model=svm.SVC(kernel='linear') %time model.fit(sub_features_train,sub_labels_train) %time testprediction=model.predict(features_test) np.sum(testprediction==alabels_test)/alabels_test.shape[0] """ Explanation: Very poorly. Training and prediction times are both well over 100 times longer for SVM. Q3: What is the accuracy after shrinking the training set to 1% of it's original size? End of explanation """ model=svm.SVC(kernel='rbf') %time model.fit(sub_features_train,sub_labels_train) %time testprediction=model.predict(features_test) np.sum(testprediction==alabels_test)/alabels_test.shape[0] asub_labels_train=np.array(sub_labels_train) np.sum(model.predict(sub_features_train)==asub_labels_train)/asub_labels_train.shape[0] """ Explanation: That's an awful lot faster, and doesn't do too bad as far a prediction goes. The prediction time is also drammatically lower. 
Q4: Which of these are applications where you can imagine a very quick-running algorithm is especially important? Flagging credit card fraud, and blocking a transaction before it goes through and voice recognition, like Siri, both would require quick prediction time. However, training time for both of these can be long. There are very few applications where long training times are not acceptable for the final product, though long training times can definitely make testing difficult. Q5: What’s the accuracy with the more complex rbf kernel? End of explanation """ a=10**np.arange(1,10) for i in a: model=svm.SVC(kernel='rbf',C=i) print('C='+str(i)) print('fitting:') %time model.fit(sub_features_train,sub_labels_train) print('prediction:') %time testprediction=model.predict(features_test) print('accuracy='+str(np.sum(testprediction==alabels_test)/alabels_test.shape[0])) """ Explanation: I would have guessed that this kernel might just be overfitting the data, but that doesn't quite seem to be the case - it doesn't even predict the training data well. It must be that it is "underfitting", and doesn't have the freedom to change the shape to match the data. Q6 & Q7: What value of C gives the best accuracy, and what is it? End of explanation """ a=np.linspace(10000-5000,10000+5000,num=10,dtype=int) for i in a: model=svm.SVC(kernel='rbf',C=i) print('C='+str(i)) print('fitting:') %time model.fit(sub_features_train,sub_labels_train) print('prediction:') %time testprediction=model.predict(features_test) print('accuracy='+str(np.sum(testprediction==alabels_test)/alabels_test.shape[0])) """ Explanation: There seems to be an ideal value for this. The default value, 1, is insufficiantly complex to follow the data and is underfit, and at some point, around C=10000, it shifts to being overfit, as the complexities of the decision boundary allow it to simply follow every single point, with the ideal value around 10000. However, I should note that doing this process is bad data science - we are over-fitting the parameter C to the test data set. Udacity's suggestion doesn't even do this bad process right, it has us stop at 10000, and we don't even know if we could do better by going higher. End of explanation """ model=svm.SVC(kernel='rbf',C=10000) %%time model.fit(features_train,labels_train) """ Explanation: Q8: What is the accuracy with the full set and "optimized" C? End of explanation """ %%time testprediction=model.predict(features_test) labels_test=np.array(labels_test) (np.sum(testprediction==labels_test))/labels_test.shape[0] """ Explanation: That took a while, but it was still an awful lot shorter than the 15 minutes it took with C=1 End of explanation """ print('elm 10 prediction = ' + str(testprediction[10])+', actual = '+ str(labels_test[10])) print('elm 26 prediction = ' + str(testprediction[26])+', actual = '+ str(labels_test[26])) print('elm 50 prediction = ' + str(testprediction[50])+', actual = '+ str(labels_test[50])) np.sum(testprediction) """ Explanation: That's so good that I have a hard time believing it. As I was using the test set to choose my C, it's probable that I have overfitted to the test set. In order to do this process properly, the training set should be devided into subsets, and one subset used for training, one for testing, and then the acuraccy should be maximized with respect to the parameter. Then you can test on a larger test and training set to get a real understanding of the accuracy. 
The sklearn.grid_search.GridSearchCV method uses cross validation, a slightly more complex version of what I just described to do this. Q9: What is the prediction for these examples? End of explanation """ from sklearn.tree import DecisionTreeClassifier """ Explanation: Decision Trees End of explanation """ model=DecisionTreeClassifier(min_samples_split=40) %%time model.fit(features_train,labels_train) %%time testprediction=model.predict(features_test) labels_test=np.array(labels_test) (testprediction==labels_test).sum()/labels_test.shape[0] """ Explanation: Q1: What is the accuracy with the minimum sample split equal to 40? End of explanation """ features_train.shape[1] """ Explanation: Q2: Speeding Up Via Feature Selection End of explanation """ from email_preprocess import preprocesssmall features_train, features_test, labels_train, labels_test = preprocesssmall() features_train.shape[1] """ Explanation: The feature selection algorithm is selecting only the features that are most well correlated with the data, with the correlation in this case measured by a $\chi ^2$ correlation test between the feature and the labels. In this case, we're picking out the top 10% most highly correlated variables. Q3: Smaller number of features End of explanation """ %time model.fit(features_train,labels_train) %time testprediction=model.predict(features_test) labels_test=np.array(labels_test) (testprediction==labels_test).sum()/labels_test.shape[0] """ Explanation: With a smaller number of variables, we cannot have a less complex decision surface. If we add a completely random feature, however, with an ideal machine learning algorithm there should be no increased complexity, and in general a good algorithm should only increase the complexity of the decision surface with more features if the new features add useful information about the labels. Since we are dropping features that are fairly highly correlated with the labels, this will decrease the complexity of the decision surface. Q5: Accuracy with less features End of explanation """
lithiumdenis/MLSchool
3. Гоблины, гули и призраки.ipynb
mit
train, test = pd.read_csv( 'data/HelloKaggle/train.csv' # path to your train file ), pd.read_csv( 'data/HelloKaggle/test.csv' # path to your test file ) train.head() X = train.drop(['id', 'type'], axis=1) y = train['type'] """ Explanation: Go to http://www.kaggle.com and register. Then you need the dataset from this link: https://www.kaggle.com/c/ghouls-goblins-and-ghosts-boo . You need train, test and sample_submission. Download it and put it next to this notebook, wherever is convenient for you. End of explanation """ from sklearn.preprocessing import LabelEncoder from sklearn import preprocessing answers_encoder = LabelEncoder() y = answers_encoder.fit_transform(y) y[:5] # instead of strings we now have class labels answers_encoder.classes_ # - the original classes are still stored here """ Explanation: Down to business! Let's do the necessary encoding. This time I will help a little :) Class labels End of explanation """ from sklearn.linear_model import LogisticRegression """ Explanation: Use the inverse_transform method to turn the algorithms' predictions (numbers) back into strings: End of explanation """ def onehot_encode(df_train, df_test, column): from sklearn.preprocessing import LabelBinarizer cs = df_train.select_dtypes(include=['O']).columns.values if column not in cs: return (df_train, df_test, None) rest = [x for x in df_train.columns.values if x != column] lb = LabelBinarizer() train_data = lb.fit_transform(df_train[column]) test_data = lb.transform(df_test[column]) new_col_names = ['%s_%s' % (column, x) for x in lb.classes_] if len(new_col_names) != train_data.shape[1]: new_col_names = new_col_names[::-1][:train_data.shape[1]] new_train = pd.concat((df_train.drop([column], axis=1), pd.DataFrame(data=train_data, columns=new_col_names)), axis=1) new_test = pd.concat((df_test.drop([column], axis=1), pd.DataFrame(data=test_data, columns=new_col_names)), axis=1) return (new_train, new_test, lb) X.head(2) X, test, lb = onehot_encode(X, test, 'color') X.head(2) test.head(2) """ Explanation: answers_encoder.inverse_transform( # the array of predictions goes here ) ```python something like this: clf = LogisticRegression() clf.fit(X_train, y_train); predicts = clf.predict(X_test) strings = answers_encoder.inverse_transform(predicts) # <-- ! ``` Now all the features We ran into the color feature - color - so let's encode it the way I did below. This time. End of explanation """ sns.pairplot(train, hue='type'); from sklearn.linear_model import LogisticRegression from sklearn.model_selection import cross_val_score, StratifiedKFold clf = LogisticRegression() cv_scores = cross_val_score(clf, X, y, cv=10) sns.distplot(cv_scores); plt.title('Average score: {}'.format(np.mean(cv_scores))); sns.jointplot(x='bone_length', y='rotting_flesh', data=train); """ Explanation: We have converted all the features to numeric form, and now we can work with them! 
Visualization End of explanation """ train, test = pd.read_csv( 'data/HelloKaggle/train.csv' # path to your train file ), pd.read_csv( 'data/HelloKaggle/test.csv' # path to your test file ) np.random.seed(1234) SEED = 1234 # random seed used by the cross-validation splitters below #Encode color as numbers instead of names in test & train colorenc = preprocessing.LabelEncoder() colorenc.fit(pd.concat((test['color'], train['color']))) test['color'] = colorenc.transform(test['color']) train['color'] = colorenc.transform(train['color']) #Encode the creature types as numbers instead of names in test & train monsterenc = preprocessing.LabelEncoder() monsterenc.fit(train['type']) train['type'] = monsterenc.transform(train['type']) from sklearn.model_selection import cross_val_score, GridSearchCV poly_features = preprocessing.PolynomialFeatures(3) #Prepare the data so that X_tr is the table without id and type, and y_tr keeps the type values X_tr, y_tr = train.drop(['id', 'type'], axis=1), train['type'] #Convert the DataFrame to an array and apply the polynomial preprocessing to it X_tr = poly_features.fit_transform(X_tr) """ Explanation: Classification Let's get something we can submit to Kaggle End of explanation """ from sklearn.linear_model import LogisticRegression typelogfit = LogisticRegression() scores = cross_val_score(typelogfit, X_tr, y_tr) typelogfit.fit(X_tr, y_tr) print('Best score:', scores.min()) """ Explanation: Logistic Regression End of explanation """ clf = LogisticRegression() parameter_grid = { } cross_validation = StratifiedKFold(n_splits=10, random_state=SEED) grid_search = GridSearchCV(clf, param_grid=parameter_grid, cv=cross_validation) grid_search.fit(X_tr, y_tr); print('Best score: {}'.format(grid_search.best_score_)) """ Explanation: Logistic Regression + Gridsearch End of explanation """ from sklearn.neighbors import KNeighborsClassifier typeNN = KNeighborsClassifier() scores = cross_val_score(typeNN, X_tr, y_tr) typeNN.fit(X_tr, y_tr) print('Best score:', scores.min()) # confusion matrix from sklearn.metrics import confusion_matrix y_tr_pr = typeNN.predict(X_tr) cm = pd.DataFrame(confusion_matrix(y_tr, y_tr_pr), index=monsterenc.classes_, columns=monsterenc.classes_) sns.heatmap(cm, annot=True, fmt="d") #ROC-AUC from scikitplot import classifier_factory typeNN = classifier_factory(typeNN) typeNN.plot_roc_curve(X_tr, y_tr, cv=10, random_state=1, curves=('each_class')); #Kaggle save X_te = poly_features.fit_transform(test.drop(['id'], axis=1)) y_te = typeNN.predict(X_te) ans_nn = pd.DataFrame({'id': test['id'], 'type': monsterenc.inverse_transform(y_te)}) ans_nn.to_csv('ans_NN.csv', index=False) """ Explanation: Nearest Neighbour End of explanation """ parameter_grid = { } cross_validation = StratifiedKFold(n_splits=10, random_state=SEED) clf = KNeighborsClassifier() grid_search2 = GridSearchCV(clf, param_grid=parameter_grid, cv=cross_validation) grid_search2.fit(X_tr, y_tr); print('Best score: {}'.format(grid_search2.best_score_)) """ Explanation: Nearest Neighbour + Gridsearch End of explanation """ from sklearn.svm import SVC typeSVM = SVC() scores = cross_val_score(typeSVM, X_tr, y_tr) typeSVM.fit(X_tr, y_tr) print('Best score:', scores.min()) # confusion matrix y_tr_pr = typeSVM.predict(X_tr) cm = pd.DataFrame(confusion_matrix(y_tr, y_tr_pr), index=monsterenc.classes_, columns=monsterenc.classes_) sns.heatmap(cm, annot=True, fmt="d") #Kaggle save X_te = poly_features.fit_transform(test.drop(['id'], axis=1)) y_te = typeSVM.predict(X_te) ans_svm = pd.DataFrame({'id': test['id'], 'type': monsterenc.inverse_transform(y_te)}) ans_svm.to_csv('ans_svm.csv', index=False) """ 
Explanation: SVM End of explanation """ from sklearn.svm import SVC clf = SVC() parameter_grid = { } cross_validation = StratifiedKFold(n_splits=10, random_state=SEED) grid_search = GridSearchCV(clf, param_grid=parameter_grid, cv=cross_validation) grid_search.fit(X_tr, y_tr); print('Best score: {}'.format(grid_search.best_score_)) """ Explanation: SVM + Gridsearch End of explanation """
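The parameter_grid dictionaries above were left empty, so the grid searches only evaluate the default hyperparameters. Below is one possible grid for the SVC search; the specific values are an illustrative choice rather than anything from the original notebook, and SEED, X_tr and y_tr are assumed to be defined as above.

```python
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.svm import SVC

# Illustrative hyperparameter grid for the SVC search (assumed values, not from the original)
parameter_grid = {
    'C': [0.1, 1, 10, 100],
    'gamma': ['auto', 0.01, 0.1, 1],
    'kernel': ['rbf', 'linear'],
}
cross_validation = StratifiedKFold(n_splits=10, random_state=SEED)
grid_search = GridSearchCV(SVC(), param_grid=parameter_grid, cv=cross_validation)
grid_search.fit(X_tr, y_tr)
print('Best score: {}'.format(grid_search.best_score_))
print('Best parameters: {}'.format(grid_search.best_params_))
```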
dietmarw/EK5312_ElectricalMachines
Chapman/Ch1-Problem_1-12.ipynb
unlicense
%pylab notebook %precision 4 from scipy import constants as c # we like to use some constants """ Explanation: Excercises Electric Machinery Fundamentals Chapter 1 Problem 1-12 End of explanation """ N1 = 600 N2 = 200 i1 = 0.5 # A i2 = 1.0 # A """ Explanation: Description The core shown in Figure P1-4 below: <img src="figs/FigC_P1-4.jpg" width="70%"> and is made of a steel whose magnetization curve is shown in Figure P1-9 below: <img src="figs/FigC_P1-9.jpg" width="60%"> Repeat Problem 1-7, but this time do not assume a constant value of $\mu_r$. How much flux is produced in the core by the currents specified? What is the relative permeability of this core under these conditions? Was the assumption in Problem 1-7 that the relative permeability was equal to 1200 a good assumption for these conditions? Is it a good assumption in general? End of explanation """ F_tot = N1 * i1 + N2 * i2 print('F_tot = {:.1f} At'.format(F_tot)) """ Explanation: SOLUTION The two coils on this core are wound so that their magnetomotive forces are additive, so the total magnetomotive force on this core is $$\mathcal{F}_\text{TOT} = N_1 i_1 + N_2 I_2$$ End of explanation """ lc = 4 * (0.075 + 0.5 + 0.075) # [m] core length on all 4 sides. H = F_tot / lc print('H = {:.1f} At/m'.format(H)) """ Explanation: Therefore, the magnetizing intensity $H$ is: $$ H = \frac{\mathcal{F_\text{TOT}}}{l_c}$$ End of explanation """ B = 0.17 # [T] """ Explanation: From the magnetization curve: <img src="figs/FigC_P1-9sol.png" width="60%"> End of explanation """ A = 0.15**2 # [m²] phi_tot = B*A print('ϕ = {:.3f} mWb'.format(phi_tot*1000)) """ Explanation: and the total flux in the core is: $$\phi_\text{TOT} = BA$$ End of explanation """ mu_r = phi_tot * lc / (F_tot * c.mu_0 * A) print('μ = {:.1f}'.format(mu_r)) """ Explanation: The relative permeability of the core can be found from the reluctance as follows: $$\mathcal{R}\text{TOT} = \frac{\mathcal{F}\text{TOT}}{\mathcal{\phi}_\text{TOT}} = \frac{l_c}{\mu_0 \mu_r A}$$ Solving for $\mu_r$ yields: $$\mu_r = \frac{\mathcal{\phi}\text{TOT}l_c}{\mathcal{F}\text{TOT}\mu_0 A}$$ End of explanation """
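To judge whether the constant relative permeability of 1200 assumed in Problem 1-7 was reasonable, it helps to compare against the flux that assumption would produce; this comparison is an addition that reuses the quantities already computed above.

```python
mu_r_assumed = 1200                              # relative permeability assumed in Problem 1-7
R_assumed = lc / (c.mu_0 * mu_r_assumed * A)     # core reluctance [At/Wb] under that assumption
phi_assumed = F_tot / R_assumed                  # flux the constant-mu_r model would predict
print('phi with mu_r = 1200:  {:.3f} mWb'.format(phi_assumed * 1000))
print('phi from the B-H curve: {:.3f} mWb'.format(phi_tot * 1000))
```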
paulovn/ml-vm-notebook
vmfiles/IPNB/Examples/g Misc/10 ipywidgets.ipynb
bsd-3-clause
from __future__ import print_function from ipywidgets import widgets """ Explanation: Ipywidgets ipywidgets is a Python package providing interactive widgets for Jupyter notebooks. * ipywidgets installation * A small tutorial: interactive dashboards on Jupyter End of explanation """ from IPython.display import display def handle_submit(sender): print("Submitted:", text.value) text = widgets.Text() display(text) text.on_submit(handle_submit) """ Explanation: Text example This example shows a text box. The widget handler just receives the text and prints it out. End of explanation """ def doit(n): 'receive the result of the widget in the function, and do sometthing with it' print("processed:", n*2+1) widgets.interact(doit, n=[1, 2, 3]); widgets.interact(doit, n=(0,1,0.1)); import matplotlib.pyplot as plt import numpy as np import math t = np.arange( 0.0, 1.0, 0.01) def myplot( factor ): plt.plot( t, np.sin(2*math.pi*t*factor) ) plt.show() widgets.interact( myplot, factor=(0, 10, 2) ); def do_something( fruit="oranges" ): print("Selected: [",fruit,"]") widgets.interact(do_something, fruit=['apples','pears', 'oranges']); """ Explanation: Interact interact(function, variable) creates a widget to modify the variable and binds it to the passed funcion. The widget type that is created depends on the type of the passed variable. Note that the name of the variable given in the argument to interact must be the same as its name as given as argument to the handler function End of explanation """ from ipywidgets.widgets import ( Button, HBox, VBox, Text, Textarea, Checkbox, IntSlider, Controller, Dropdown, ColorPicker) from ipywidgets import Layout area = """Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.""" textarea = Textarea(value=area, layout=Layout(height="8em", width="30em")) dropdown = Dropdown(description='Choice', options=['foo', 'bar']) HBox( [VBox([dropdown, HBox([Button(description='A'), Button(description='B')])]), textarea]) """ Explanation: Boxes End of explanation """
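One more pattern that often comes in handy (shown here as an addition): instead of binding a callback through interact, you can watch a widget's value trait directly with observe.

```python
from IPython.display import display
from ipywidgets import widgets

slider = widgets.IntSlider(description='n', min=0, max=10, value=5)

def on_value_change(change):
    # `change` is a dict that carries, among other things, the 'old' and 'new' trait values
    print('value changed from', change['old'], 'to', change['new'])

slider.observe(on_value_change, names='value')
display(slider)
```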
arcyfelix/Courses
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/01-Introduction-to-Neural-Networks/01_Neural Network From Scratch.ipynb
apache-2.0
class SimpleClass(): def __init__(self, str_input): print("SIMPLE" + str_input) class ExtendedClass(SimpleClass): def __init__(self): print('EXTENDED') """ Explanation: Manual Neural Network In this notebook we will manually build out a neural network that mimics the TensorFlow API. This will greatly help your understanding when working with the real TensorFlow! Quick Note on Super() and OOP End of explanation """ s = ExtendedClass() """ Explanation: The child class will use its own initialization method, if not specified otherwise. End of explanation """ class ExtendedClass(SimpleClass): def __init__(self): super().__init__(" My String") print('EXTENDED') s = ExtendedClass() """ Explanation: If we want to use initialization from the parent class, we can do that using: python super().__init__() End of explanation """ class Operation(): """ An Operation is a node in a "Graph". TensorFlow will also use this concept of a Graph. This Operation class will be inherited by other classes that actually compute the specific operation, such as adding or matrix multiplication. """ def __init__(self, input_nodes = []): """ Intialize an Operation """ # The list of input nodes self.input_nodes = input_nodes # Initialize list of nodes consuming this node's output self.output_nodes = [] # For every node in the input, we append this operation (self) to the list of # the consumers of the input nodes for node in input_nodes: node.output_nodes.append(self) # There will be a global default graph (TensorFlow works this way) # We will then append this particular operation # Append this operation to the list of operations in the currently active default graph _default_graph.operations.append(self) def compute(self): """ This is a placeholder function. It will be overwritten by the actual specific operation that inherits from this class. """ pass """ Explanation: Operation End of explanation """ class add(Operation): def __init__(self, x, y): super().__init__([x, y]) def compute(self, x_var, y_var): self.inputs = [x_var, y_var] return x_var + y_var """ Explanation: Example Operations Addition End of explanation """ class multiply(Operation): def __init__(self, a, b): super().__init__([a, b]) def compute(self, a_var, b_var): self.inputs = [a_var, b_var] return a_var * b_var """ Explanation: Multiplication End of explanation """ class matmul(Operation): def __init__(self, a, b): super().__init__([a, b]) def compute(self, a_mat, b_mat): self.inputs = [a_mat, b_mat] return a_mat.dot(b_mat) """ Explanation: Matrix Multiplication End of explanation """ class Placeholder(): """ A placeholder is a node that needs to be provided a value for computing the output in the Graph. In case of supervised learning, X (input) and Y (output) will require placeholders. """ def __init__(self): self.output_nodes = [] _default_graph.placeholders.append(self) """ Explanation: Placeholders End of explanation """ class Variable(): """ This variable is a changeable parameter of the Graph. For a simple neural networks, it will be weights and biases. 
""" def __init__(self, initial_value = None): self.value = initial_value self.output_nodes = [] _default_graph.variables.append(self) """ Explanation: Variables End of explanation """ class Graph(): def __init__(self): self.operations = [] self.placeholders = [] self.variables = [] def set_as_default(self): """ Sets this Graph instance as the Global Default Graph """ global _default_graph _default_graph = self """ Explanation: Graph End of explanation """ g = Graph() g.set_as_default() print("Operations:") print(g.operations) print("Placeholders:") print(g.placeholders) print("Variables:") print(g.variables) A = Variable(10) print("Operations:") print(g.operations) print("Placeholders:") print(g.placeholders) print("Variables:") print(g.variables) b = Variable(1) print("Operations:") print(g.operations) print("Placeholders:") print(g.placeholders) print("Variables:") print(g.variables) # Will be filled out later x = Placeholder() print("Operations:") print(g.operations) print("Placeholders:") print(g.placeholders) print("Variables:") print(g.variables) y = multiply(A,x) print("Operations:") print(g.operations) print("Placeholders:") print(g.placeholders) print("Variables:") print(g.variables) z = add(y, b) print("Operations:") print(g.operations) print("Placeholders:") print(g.placeholders) print("Variables:") print(g.variables) """ Explanation: A Basic Graph $$ z = Ax + b $$ With A=10 and b=1 $$ z = 10x + 1 $$ Just need a placeholder for x and then once x is filled in we can solve it! End of explanation """ import numpy as np """ Explanation: Session End of explanation """ def traverse_postorder(operation): """ PostOrder Traversal of Nodes. Basically makes sure computations are done in the correct order (Ax first , then Ax + b). """ nodes_postorder = [] def recurse(node): if isinstance(node, Operation): for input_node in node.input_nodes: recurse(input_node) nodes_postorder.append(node) recurse(operation) return nodes_postorder class Session: def run(self, operation, feed_dict = {}): """ operation: The operation to compute feed_dict: Dictionary mapping placeholders to input values (the data) """ # Puts nodes in correct order nodes_postorder = traverse_postorder(operation) print("Post Order:") print(nodes_postorder) for node in nodes_postorder: if type(node) == Placeholder: node.output = feed_dict[node] elif type(node) == Variable: node.output = node.value else: # Operation node.inputs = [input_node.output for input_node in node.input_nodes] node.output = node.compute(*node.inputs) # Convert lists to numpy arrays if type(node.output) == list: node.output = np.array(node.output) # Return the requested node value return operation.output sess = Session() result = sess.run(operation = z, feed_dict = {x : 10}) """ Explanation: Traversing Operation Nodes More details about tree post order traversal: https://en.wikipedia.org/wiki/Tree_traversal#Post-order_(LRN) End of explanation """ result 10 * 10 + 1 # Running just y = Ax # The post order should be only up to result = sess.run(operation = y, feed_dict = {x : 10}) result """ Explanation: The result should look like: Variable (A), Placeholder (x), Multiple operation (Ax), Variable (b), Add (Ax + b) End of explanation """ g = Graph() g.set_as_default() A = Variable([[10, 20], [30, 40]]) b = Variable([1, 1]) x = Placeholder() y = matmul(A,x) z = add(y,b) sess = Session() result = sess.run(operation = z, feed_dict = {x : 10}) result """ Explanation: Looks like we did it! 
End of explanation """ import matplotlib.pyplot as plt %matplotlib inline # Defining sigmoid function def sigmoid(z): return 1 / (1 + np.exp(-z)) sample_z = np.linspace(-10, 10, 100) sample_a = sigmoid(sample_z) plt.figure(figsize = (8, 8)) plt.title("Sigmoid") plt.plot(sample_z, sample_a) """ Explanation: Activation Function End of explanation """ class Sigmoid(Operation): def __init__(self, z): # a is the input node super().__init__([z]) def compute(self, z_val): return 1 / (1 + np.exp(-z_val)) """ Explanation: Sigmoid as an Operation End of explanation """ from sklearn.datasets import make_blobs # Creating 50 samples divided into 2 blobs with 2 features data = make_blobs(n_samples = 50, n_features = 2, centers = 2, random_state = 75) data features = data[0] plt.scatter(features[:,0],features[:,1]) labels = data[1] plt.scatter(x = features[:,0], y = features[:,1], c = labels, cmap = 'coolwarm') # DRAW A LINE THAT SEPERATES CLASSES x = np.linspace(0, 11 ,10) y = -x + 5 plt.scatter(features[:,0], features[:,1], c = labels, cmap = 'coolwarm') plt.plot(x,y) """ Explanation: Classification Example End of explanation """ z = np.array([1, 1]).dot(np.array([[8], [10]])) - 5 print(z) a = 1 / (1 + np.exp(-z)) print(a) """ Explanation: Defining the Perceptron $$ y = mx + b $$ $$ y = -x + 5 $$ $$ f1 = mf2 + b , m = 1$$ $$ f1 = -f2 + 5 $$ $$ f1 + f2 - 5 = 0 $$ Convert to a Matrix Representation of Features $$ w^Tx + b = 0 $$ $$ \Big(1, 1\Big)f - 5 = 0 $$ Then if the result is > 0 its label 1, if it is less than 0, it is label=0 Example Point Let's say we have the point f1=2 , f2=2 otherwise stated as (8,10). Then we have: $$ \begin{pmatrix} 1 , 1 \end{pmatrix} \begin{pmatrix} 8 \ 10 \end{pmatrix} + 5 = $$ End of explanation """ z = np.array([1,1]).dot(np.array([[2],[-10]])) - 5 print(z) a = 1 / (1 + np.exp(-z)) print(a) """ Explanation: Or if we have (4,-10) End of explanation """ g = Graph() g.set_as_default() x = Placeholder() w = Variable([1,1]) b = Variable(-5) z = add(matmul(w,x),b) a = Sigmoid(z) sess = Session() sess.run(operation = a, feed_dict = {x : [8, 10]}) sess.run(operation = a, feed_dict = {x : [2, -10]}) """ Explanation: Using an Example Session Graph End of explanation """
ethen8181/machine-learning
networkx/page_rank.ipynb
mit
# code for loading the format for the notebook import os # path : store the current path to convert back to it later path = os.getcwd() os.chdir(os.path.join('..', 'notebook_format')) from formats import load_style load_style(plot_style=False) os.chdir(path) # 1. magic for inline plot # 2. magic to print version # 3. magic so that the notebook will reload external python modules # 4. magic to enable retina (high resolution) plots # https://gist.github.com/minrk/3301035 %matplotlib inline %load_ext watermark %load_ext autoreload %autoreload 2 %config InlineBackend.figure_format='retina' import numpy as np import networkx as nx import matplotlib.pyplot as plt %watermark -a 'Ethen' -d -t -v -p numpy,networkx,matplotlib """ Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#PageRank" data-toc-modified-id="PageRank-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>PageRank</a></span><ul class="toc-item"><li><span><a href="#Taxation" data-toc-modified-id="Taxation-1.1"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Taxation</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Reference</a></span></li></ul></div> End of explanation """ nodes = ['A', 'B', 'C', 'D'] edges = [ ('A', 'B'), ('A', 'C'), ('A', 'D'), ('B', 'A'), ('B', 'D'), ('D', 'B'), ('D', 'C'), ('C', 'A') ] graph = nx.DiGraph() graph.add_nodes_from(nodes) graph.add_edges_from(edges) graph # change default style figure and font size plt.rcParams['figure.figsize'] = 8, 6 plt.rcParams['font.size'] = 12 # quick and dirty visualization of the graph we've defined above nx.draw(graph, with_labels=True, node_color='skyblue', alpha=0.7) """ Explanation: PageRank PageRank is a function that assigns a number weighting each page in the Web, the intent is that the higher the PageRank of a page, the more important the page is. We can think of the Web as a directed graph, where the pages are the nodes and if there exists a link that connects page1 to page2 then there would be an edge connecting the two nodes. Imagine an toy example where there are only 4 pages/nodes, ${A, B, C, D}$: $A$ has links connecting itself to each of ther other three pages. $B$ has links to $A$ and $D$. $D$ has links to $B$ and $C$. $C$ has links only to $A$. End of explanation """ trans_matrix = nx.to_numpy_array(graph) trans_matrix /= trans_matrix.sum(axis=1, keepdims=True) trans_matrix """ Explanation: Given this graph, we can build a transition matrix to depict what is the probability of landing on a given page after 1 step. If we look at an example below, the matrix has: $n$ rows and columns if there are $n$ pages Each element in the matrix, $m_{ij}$ takes on the value of $1 / k$ if page $i$ has $k$ edges and one of them is $j$. Otherwise $m_{ij}$ is 0. End of explanation """ n_nodes = trans_matrix.shape[0] init_vector = np.repeat(1 / n_nodes, n_nodes) init_vector @ trans_matrix """ Explanation: Now suppose we start at any of the $n$ pages of the Web with equal probability. Then the initial vector $v_0$ will have $1/n$ for each page. If $M$ is the transition matrix of the Web, then after one step, the distribution of us landing on each of the page can be computed by a matrix vector multiplication. 
$v_0 M$ End of explanation """ # we can tweak the number of iterations parameter # and see that the resulting probability remains # the same even if we increased the number to 50 n_iters = 30 result = init_vector for _ in range(n_iters): result = result @ trans_matrix result """ Explanation: As we can see after 1 step, the probability of landing on the first page, page $A$, is higher than the probability of landing on other pages. We can repeat this matrix vector multiplication for multiple times and our results will eventually converge. Giving us an estimated probability of landing on each page, which in term is PageRank's estimate of how important a given page is when compared to all the other page in the Web. End of explanation """ # we replaced C's out-link, ('C', 'A'), # from the list of edges with a link within # the page itself ('C', 'C'), note that # we can also avoid this problem by note # including self-loops in the edges nodes = ['A', 'B', 'C', 'D'] edges = [ ('A', 'B'), ('A', 'C'), ('A', 'D'), ('B', 'A'), ('B', 'D'), ('D', 'B'), ('D', 'C'), ('C', 'C') ] graph = nx.DiGraph() graph.add_nodes_from(nodes) graph.add_edges_from(edges) # not showing the self-loop edge for node C nx.draw(graph, with_labels=True, node_color='skyblue', alpha=0.7) # notice in the transition probability matrix, the third row, node C # contains 1 for a single entry trans_matrix = nx.to_numpy_array(graph) trans_matrix /= trans_matrix.sum(axis=1, keepdims=True) trans_matrix n_iters = 40 result = init_vector for _ in range(n_iters): result = result @ trans_matrix result """ Explanation: This sort of convergence behavior is an example of the Markov Chain processes. It is known that the distribution of $v = Mv$ converges, provided two conditions are met: The graph is strongly connected; that is, it is possible to get from any node to any other node. There are no dead ends: nodes that have no edges out. If we stare at the formula $v = Mv$ long enough, we can observe that our final result vector $v$ is an eigenvector of the matrix $M$ (recall an eigenvector of a matrix $M$ is a vector $v$ that satisfies $v = \lambda Mv$ for some constant eigenvalue $\lambda$). Taxation The vanilla PageRank that we've introduced above needs some tweaks to handle data that can appear in real world scenarios. The two problems that we need to avoid is what's called spider traps and dead end. spider trap is a set of nodes with edges, but these edges all links within the page itself. This causes the PageRank calculation to place all the PageRank score within the spider traps. 
End of explanation """ # we remove C's out-link, ('C', 'A'), # from the list of edges nodes = ['A', 'B', 'C', 'D'] edges = [ ('A', 'B'), ('A', 'C'), ('A', 'D'), ('B', 'A'), ('B', 'D'), ('D', 'B'), ('D', 'C') ] graph = nx.DiGraph() graph.add_nodes_from(nodes) graph.add_edges_from(edges) nx.draw(graph, with_labels=True, node_color='skyblue', alpha=0.7) # trick for numpy for dealing with zero division # https://stackoverflow.com/questions/26248654/numpy-return-0-with-divide-by-zero trans_matrix = nx.to_numpy_array(graph) summed = trans_matrix.sum(axis=1, keepdims=True) trans_matrix = np.divide(trans_matrix, summed, out=np.zeros_like(trans_matrix), where=summed!=0) # notice in the transition probability matrix, the third row, node C # consists of all 0 trans_matrix n_iters = 40 result = init_vector for _ in range(n_iters): result = result @ trans_matrix result """ Explanation: As predicted, all the PageRank is at node $C$, since once we land there, there's no way for us to leave. The other problem dead end describes pages that have no out-links, as a result pages that reaches these dead ends will not have any PageRank. End of explanation """ def build_trans_matrix(graph: nx.DiGraph, beta: float=0.9) -> np.ndarray: n_nodes = len(graph) trans_matrix = nx.to_numpy_array(graph) # assign uniform probability to dangling nodes (nodes without out links) const_vector = np.repeat(1.0 / n_nodes, n_nodes) row_sum = trans_matrix.sum(axis=1) dangling_nodes = np.where(row_sum == 0)[0] if len(dangling_nodes): for node in dangling_nodes: trans_matrix[node] = const_vector row_sum[node] = 1 trans_matrix /= row_sum.reshape(-1, 1) return beta * trans_matrix + (1 - beta) * const_vector trans_matrix = build_trans_matrix(graph) trans_matrix n_iters = 20 result = init_vector for _ in range(n_iters): result = result @ trans_matrix result """ Explanation: As we see, the result tells us the probability of us being anywhere goes to 0, as the number of steps increase. To avoid the two problems mentioned above, we will modify the calculation of PageRank. At each step, we will give it a small probability of "teleporting" to a random page, rather than following an out-link from their current page. The notation form for the description above would be: \begin{align} v^\prime = \beta M v + (1 - \beta) e / n \end{align} where: $\beta$ is a chosen constant, usually in the range of 0.8 to 0.9. $e$ is a vector of all 1s with the appropriate number of elements so that the matrix addition adds up. $n$ is the number of pages/nodes in the Web graph. The term, $\beta M v$ denotes that at this step, there is a probability $\beta$ that we will follow an out-link from their present page. Notice the term $(1 - \beta) e / n$ does not depend on $v$, thus if there are some dead ends in the graph, there will always be some fraction of opportunity to jump out of that rabbit hole. This idea of adding $\beta$ is referred to as taxation (networkx package calls this damping factor). End of explanation """ pagerank_score = nx.pagerank(graph, alpha=0.9) pagerank_score """ Explanation: The result looks much more reasonable after introducing the taxation. We can also compare the it with the pagerank function from networkx. End of explanation """
ES-DOC/esdoc-jupyterhub
notebooks/test-institute-1/cmip6/models/sandbox-2/toplevel.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'test-institute-1', 'sandbox-2', 'toplevel') """ Explanation: ES-DOC CMIP6 Model Properties - Toplevel MIP Era: CMIP6 Institute: TEST-INSTITUTE-1 Source ID: SANDBOX-2 Sub-Topics: Radiative Forcings. Properties: 85 (42 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:43 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Flux Correction 3. Key Properties --&gt; Genealogy 4. Key Properties --&gt; Software Properties 5. Key Properties --&gt; Coupling 6. Key Properties --&gt; Tuning Applied 7. Key Properties --&gt; Conservation --&gt; Heat 8. Key Properties --&gt; Conservation --&gt; Fresh Water 9. Key Properties --&gt; Conservation --&gt; Salt 10. Key Properties --&gt; Conservation --&gt; Momentum 11. Radiative Forcings 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect 24. Radiative Forcings --&gt; Aerosols --&gt; Dust 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt 28. Radiative Forcings --&gt; Other --&gt; Land Use 29. Radiative Forcings --&gt; Other --&gt; Solar 1. Key Properties Key properties of the model 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top level overview of coupled model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of coupled model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Flux Correction Flux correction properties of the model 2.1. Details Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how flux corrections are applied in the model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Genealogy Genealogy and history of the model 3.1. Year Released Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Year the model was released End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.2. CMIP3 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP3 parent if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.3. CMIP5 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP5 parent if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.4. Previous Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Previously known as End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Software Properties Software properties of model 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.4. 
Components Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OASIS" # "OASIS3-MCT" # "ESMF" # "NUOPC" # "Bespoke" # "Unknown" # "None" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 4.5. Coupler Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Overarching coupling framework for model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Coupling ** 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of coupling in the model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 5.2. Atmosphere Double Flux Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Atmosphere grid" # "Ocean grid" # "Specific coupler grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 5.3. Atmosphere Fluxes Calculation Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Where are the air-sea fluxes calculated End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 5.4. Atmosphere Relative Winds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Key Properties --&gt; Tuning Applied Tuning methodology for model 6.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics/diagnostics of the global mean state used in tuning model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics/diagnostics used in tuning model/component (such as 20th century) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.5. Energy Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.6. Fresh Water Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Key Properties --&gt; Conservation --&gt; Heat Global heat convervation properties of the model 7.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved globally End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/ocean coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.3. 
Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved at the atmosphere/land coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.5. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the ocean/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.6. Land Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the land/ocean coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Key Properties --&gt; Conservation --&gt; Fresh Water Global fresh water convervation properties of the model 8.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh_water is conserved globally End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.3. Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh water is conserved at the atmosphere/land coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.5. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.6. Runoff Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how runoff is distributed and conserved End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.7. Iceberg Calving Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how iceberg calving is modeled and conserved End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.8. Endoreic Basins Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how endoreic basins (no ocean access) are treated End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.9. Snow Accumulation Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how snow accumulation over land and over sea-ice is treated End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Key Properties --&gt; Conservation --&gt; Salt Global salt convervation properties of the model 9.1. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how salt is conserved at the ocean/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 10. Key Properties --&gt; Conservation --&gt; Momentum Global momentum convervation properties of the model 10.1. Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how momentum is conserved in the model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11. Radiative Forcings Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5) 11.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of radiative forcings (GHG and aerosols) implementation in model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 Carbon dioxide forcing 12.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 Methane forcing 13.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 13.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O Nitrous oxide forcing 14.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.2. 
Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 Troposheric ozone forcing 15.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 Stratospheric ozone forcing 16.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 16.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC Ozone-depleting and non-ozone-depleting fluorinated gases forcing 17.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "Option 1" # "Option 2" # "Option 3" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.2. Equivalence Concentration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of any equivalence concentrations used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 SO4 aerosol forcing 18.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon Black carbon aerosol forcing 19.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon Organic carbon aerosol forcing 20.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate Nitrate forcing 21.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 21.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect Cloud albedo effect forcing (RFaci) 22.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 22.2. Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect Cloud lifetime effect forcing (ERFaci) 23.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 23.2. Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 23.3. RFaci From Sulfate Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative forcing from aerosol cloud interactions from sulfate aerosol only? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 23.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 24. Radiative Forcings --&gt; Aerosols --&gt; Dust Dust forcing 24.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 24.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic Tropospheric volcanic forcing 25.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 25.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic Stratospheric volcanic forcing 26.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. 
via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt Sea salt forcing 27.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 28. Radiative Forcings --&gt; Other --&gt; Land Use Land use forcing 28.1. 
Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 28.2. Crop Change Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Land use change represented via crop change only? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "irradiance" # "proton" # "electron" # "cosmic ray" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 29. Radiative Forcings --&gt; Other --&gt; Solar Solar forcing 29.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How solar forcing is provided End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """
jvbalen/cover_id
learn.ipynb
mit
n_patches, patch_len = 8, 64 # train, test, validation split ratio = (50,20,30) clique_dict, _ = SHS_data.read_cliques() train_cliques, test_cliques_big, _ = util.split_train_test_validation(clique_dict, ratio=ratio) # preload training data to memory (just about doable) print('Preloading training data...') train_uris = util.uris_from_clique_dict(train_cliques) chroma_dict = SHS_data.preload_chroma(train_uris) # make a training dataset of cover and non-cover pairs of songs print('Preparing training dataset...') X_A, X_B, Y, pair_uris = paired_data.dataset_of_pairs(train_cliques, chroma_dict, n_patches=n_patches, patch_len=patch_len) print(' Training set:', X_A.shape, X_B.shape, Y.shape) """ Explanation: Learning cover song fingerprints This notebook contains experiments in which a fingerprint is learned from a dataset of cover songs. The main idea behind this is explained in our Audio Bigrams paper [1]. Very briefly explained: most fingerprints encode some kind of co-occurrence of salient events (e.g., Shazam, 'Intervalgram') 'salient event detection' can be implemented as a convolution: conv2d(X, W) with W the 'salient events'. co-occurrence can be implemented as conv2d(X, w) @ X.T with w a window and @ the matrix product. all of this is differentiable, therefore, any fingerprinting system that can be formulated like this can be trained 'end-to-end'. To evaluate the learned fingerprint, we compare to the state-of-the-art '2D Fourier Transform Magniture Coeffients' by Bertin-Mahieux and Ellis [2], and a simpler fingerprinting approach by Kim et al [3]. We use the Second-hand Song Dataset with dublicates removed as proposed by Julien Osmalskyj. [1] Van Balen, J., Wiering, F., & Veltkamp, R. (2015). Audio Bigrams as a Unifying Model of Pitch-based Song Description. [2] Bertin-Mahieux, T., & Ellis, D. P. W. (2012). Large-Scale Cover Song Recognition Using The 2d Fourier Transform Magnitude. In Proc. International Society for Music Information Retrieval Conference. [3] Kim, S., Unal, E., & Narayanan, S. (2008). Music fingerprint extraction for classical music cover song identification. IEEE Conference on Multimedia and Expo. Training data End of explanation """ # pick a test subset n_test_cliques = 50 # e.g., 50 ~ small actual datasets test_cliques = {uri: test_cliques_big[uri] for uri in test_cliques_big.keys()[:n_test_cliques]} # preload test data to memory (just about doable) print('Preloading test data...') test_uris = util.uris_from_clique_dict(test_cliques) chroma_dict_T = SHS_data.preload_chroma(test_uris) # make a test dataset of cover and non-cover pairs of songs print('Preparing test dataset...') X_A_T, X_B_T, Y_T, test_pair_uris_T = paired_data.dataset_of_pairs(test_cliques, chroma_dict_T, n_patches=n_patches, patch_len=patch_len) print(' Test set:', X_A_T.shape, X_B_T.shape, Y_T.shape) """ Explanation: Test data For now, load just a small part of the test set that we'll evaluate at every iteration, e.g., a few times batch size End of explanation """ # for repeated runs with different networks tf.reset_default_graph() # make network network = learn.siamese_network(input_shape=(n_patches*patch_len, 12)) network.add_conv_layer(shape=(1,12), n_filters=12, padding='VALID') network.add_matmul_layer(filter_len=12, n_filters=12) """ Explanation: Network Set up a siamese network. 
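""" Explanation: (Added sketch, not part of the original notebook.) The bullet points above describe fingerprinting as 'salient event detection' via a convolution, followed by a co-occurrence step of the form conv2d(X, w) @ X.T. The numpy sketch below only illustrates that idea on a single chroma patch; the project's actual layers live in learn.siamese_network / add_conv_layer / add_matmul_layer, so the shapes, the smoothing window and the absence of nonlinearities here are assumptions for illustration, not the real implementation. End of explanation """
# Illustrative numpy version of the audio-bigram idea -- not the project's code.
import numpy as np

def audio_bigram_sketch(chroma, W, w):
    """chroma: (n_frames, 12) patch; W: (12, k) 'salient event' templates;
    w: (win_len,) co-occurrence window. Returns a flattened (k * k,) fingerprint."""
    # 1. 'salient event detection': frame-wise projection onto k templates,
    #    i.e. a (1 x 12) convolution over the pitch-class axis.
    events = chroma @ W                                        # (n_frames, k)
    # 2. co-occurrence: smooth each event channel over time with window w,
    #    then correlate the smoothed activations with the raw ones.
    smoothed = np.apply_along_axis(
        lambda x: np.convolve(x, w, mode='same'), 0, events)   # (n_frames, k)
    return (smoothed.T @ events).flatten()                     # (k * k,)

# toy usage with random data and random templates (shapes are assumptions only)
rng = np.random.RandomState(0)
print(audio_bigram_sketch(rng.rand(64, 12), W=rng.rand(12, 12), w=np.ones(8) / 8).shape)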
End of explanation """ alpha = 4 m = 10 lr = 3e-4 batch_size = 100 n_iterations = 3200 # 3200 ~ 10 epoques (train set ~ 320 x 100) # training metrics loss, pair_loss, non_pair_loss = network.loss(m=m, alpha=alpha) bhatt, d_pairs, d_non_pairs = network.bhattacharyya() # optimiser train_step = network.train_step(loss, learning_rate=lr) # choose which metrics to log metrics = [loss, d_pairs, d_non_pairs] # start Tensorflow session sess = tf.InteractiveSession() sess.run(tf.initialize_all_variables()) # train and test batches train_batches = learn.get_batches([X_A, X_B, Y], batch_size=batch_size) test_batch = [X_A_T, X_B_T, Y_T] # train for step in range(n_iterations): train_batch = next(train_batches) # report network.log_errors(sess, train_batch=train_batch, test_batch=test_batch, metrics=metrics, log_every=10) # train train_feed = {network.x_A:train_batch[0], network.x_B:train_batch[1], network.is_cover:train_batch[2]} train_step.run(feed_dict=train_feed) # report final network.log_errors(sess, train_batch=train_batch, test_batch=test_batch, metrics=metrics) """ Explanation: Training Set training parameters and run for n_epoque iterations. Current implementation requires an 'interactive session'. End of explanation """ plt.figure(figsize=(10,4)) plt.plot(network.train_log['TR.loss']); plt.plot(network.train_log['TE.loss'], color='k'); plt.title('train (b) and test (k) loss function'); """ Explanation: Plot train and test outcome loss functions Note that training loss fluctuates much more as it's computed for a different batch at every step, while test error is (currently) computed for the same (slightly larger) subset at every step. End of explanation """ def plot_distances(feed): y_A, y_B = sess.run([network.subnet_A[-1], network.subnet_B[-1]], feed_dict=feed) is_cover = feed[network.is_cover] pair_dists = np.sqrt(np.sum((y_A - y_B)**2, axis=1))[np.where(is_cover==1)] non_pair_dists = np.sqrt(np.sum((y_A - y_B)**2, axis=1))[np.where(is_cover==0)] bins = np.arange(0,20,0.5) plt.figure(figsize=(16,4)) plt.subplot(121) plt.hist(non_pair_dists, bins=bins, alpha=0.5); plt.hist(pair_dists, bins=bins, color='r', alpha=0.5); plt.subplot(143) plt.boxplot([non_pair_dists, pair_dists]); # train distances train_feed = {network.x_A:train_batch[0], network.x_B:train_batch[1], network.is_cover: train_batch[2]} plot_distances(train_feed) # test distances test_feed = {network.x_A: X_A_T, network.x_B: X_B_T, network.is_cover: Y_T} plot_distances(test_feed) """ Explanation: distances End of explanation """ import main import fingerprints as fp def fingerprint(chroma, n_patches=8, patch_len=64): n_frames, n_bins = chroma.shape if not n_frames == n_patches * patch_len: chroma = paired_data.patchwork(chroma, n_patches=n_patches, patch_len=patch_len) fps = [] for i in range(12): chroma_trans = np.roll(chroma, -i, axis=1) chroma_tensor = chroma_trans.reshape((1, n_patches*patch_len, 12)) network_out = network.subnet_A[-1] fp = network_out.eval(feed_dict={network.x_A : chroma_tensor}) fps.append(fp.flatten()) return fps """ Explanation: Test fingerprint Test the learned fingerprint and compare to other impelementations. End of explanation """ results = main.run_leave_one_out_experiment(test_cliques_big, fp_function=fp.cov, print_every=50) print('results:', results) """ Explanation: covariance-based fingerprint Kim, S., Unal, E., & Narayanan, S. (2008). Music fingerprint extraction for classical music cover song identification. IEEE Conference on Multimedia and Expo. 
test_cliques_big: results: {'mean r5': 0.073941374228151044, 'mean ap': 0.069848998736677395, 'mean p1': 0.097366977509599564} End of explanation """ results = main.run_leave_one_out_experiment(test_cliques_big, fp_function=fp.fourier, print_every=50) print('results:', results) """ Explanation: 2d-DFT-based fingerprint Bertin-Mahieux, T., & Ellis, D. P. W. (2012). Large-Scale Cover Song Recognition Using The 2d Fourier Transform Magnitude. In Proc. International Society for Music Information Retrieval Conference. The PCA step at the end of the algorithm is not implemented at the moment. Without PCA, a patch length of 64 was found to be better than the original 75, and mean pooling across patches worked better than median-pooling, so these parameters were used instead. The difference between PCA and no PCA performance on a 12K test set was 0.08912 vs. 0.09475 mean average precision, or about 6%. test_cliques_big: results: {'mean r5': 0.11885786185784572, 'mean ap': 0.11072061679198177, 'mean p1': 0.14070213933077344} End of explanation """ results = main.run_leave_one_out_experiment(test_cliques_big, fp_function=fingerprint, print_every=50) print('fp_results:', results) """ Explanation: learned fingerprint We now see that with the right configuration, we are able to make the fingerprinter do a little bit better than the 2d-DFT-based fingerprints: test_cliques_big with W_1 = 12x12 and W_2 = 12x12. fp_results: {'mean r5': 0.13775251746235256, 'mean ap': 0.12335753431869072, 'mean p1': 0.15551289083927591} End of explanation """
BrainIntensive/OnlineBrainIntensive
resources/nipype/nipype_tutorial/notebooks/basic_joinnodes.ipynb
mit
from nipype import Node, JoinNode, Workflow # Specify fake input node A a = Node(interface=A(), name="a") # Iterate over fake node B's input 'in_file' b = Node(interface=B(), name="b") b.iterables = ('in_file', [file1, file2]) # Pass results on to fake node C c = Node(interface=C(), name="c") # Join forked execution workflow in fake node D d = JoinNode(interface=D(), joinsource="b", joinfield="in_files", name="d") # Put everything into a workflow as usual workflow = Workflow(name="workflow") workflow.connect([(a, b, [('subject', 'subject')]), (b, c, [('out_file', 'in_file')]), (c, d, [('out_file', 'in_files')]) ]) """ Explanation: <img src="../static/images/joinnode.png" width="240"> JoinNode JoinNodes have the opposite effect of a MapNode or iterables. Where they split up the execution workflow into many different branches, a JoinNode merges them back into one node. For a more detailed explanation, check out JoinNode, synchronize and itersource from the main homepage. Simple example Let's consider the very simple example depicted at the top of this page: End of explanation """ from nipype import JoinNode, Node, Workflow from nipype.interfaces.utility import Function, IdentityInterface # Create iteration node from nipype import IdentityInterface iternode = Node(IdentityInterface(fields=['number_id']), name="iternode") iternode.iterables = [('number_id', [1, 4, 9])] # Create join node - compute square root for each element in the joined list def compute_sqrt(numbers): from math import sqrt return [sqrt(e) for e in numbers] joinnode = JoinNode(Function(input_names=['numbers'], output_names=['sqrts'], function=compute_sqrt), name='joinnode', joinsource='iternode', joinfield=['numbers']) # Create the workflow and run it joinflow = Workflow(name='joinflow') joinflow.connect(iternode, 'number_id', joinnode, 'numbers') res = joinflow.run() """ Explanation: As you can see, setting up a JoinNode is rather simple. The only differences to a normal Node are the joinsource and the joinfield arguments. joinsource specifies from which node the information to join is coming and the joinfield specifies the input field of the JoinNode where the information to join will be entering the node. More realistic example Let's consider another example where we have one node that iterates over 3 different numbers and another node that joins those three different numbers (each coming from a separate branch of the workflow) into one list. To make the whole thing a bit more realistic, the second node will use the Function interface to do something with those numbers, before we spit them out again. End of explanation """ res.nodes()[0].result.outputs res.nodes()[0].inputs """ Explanation: Now, let's look at the input and output of the joinnode: End of explanation """
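""" Explanation: (Added sketch, not part of the original notebook.) The 'simple example' above deliberately uses fake interfaces (A(), B(), C(), D()) and undefined file1/file2, so it cannot be run as-is. One way to make the same a -> b (iterables) -> c -> d (JoinNode) skeleton concrete is with Function interfaces; every node name, field name and toy function below is invented purely for illustration. End of explanation """
# Illustrative, runnable stand-in for the fake A/B/C/D example -- all names are made up.
from nipype import Node, JoinNode, Workflow
from nipype.interfaces.utility import Function

def start(seed): return seed                     # stands in for fake node A
def scale(x, factor): return x * factor          # stands in for fake node B
def increment(x): return x + 1                   # stands in for fake node C
def collect(xs): return sorted(xs)               # stands in for fake node D

a = Node(Function(input_names=['seed'], output_names=['x'], function=start), name='a')
a.inputs.seed = 10
b = Node(Function(input_names=['x', 'factor'], output_names=['x'], function=scale), name='b')
b.iterables = ('factor', [1, 2, 3])              # forks the workflow, like 'in_file' above
c = Node(Function(input_names=['x'], output_names=['x'], function=increment), name='c')
d = JoinNode(Function(input_names=['xs'], output_names=['xs_sorted'], function=collect),
             joinsource='b', joinfield='xs', name='d')

toy_wf = Workflow(name='toy_join_workflow')
toy_wf.connect([(a, b, [('x', 'x')]),
                (b, c, [('x', 'x')]),
                (c, d, [('x', 'xs')])])
# toy_wf.run() would fork over factor = 1, 2, 3 and join to xs_sorted = [11, 21, 31]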
gfeiden/Notebook
Daily/20150820_A_star_models.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import numpy as np """ Explanation: Surface Boundary Conditions on the Sub-Giant Branch In particular, exploring how surface boundary conditions can affect the morphology of the sub-giant branch in relation to the "retired A-star" debate in the literature. End of explanation """ dsep15_directory = '/Users/grefe950/evolve/dmestar/trk' dsep08_directory = '/Users/grefe950/evolve/dsep08/trk/fehp00afep0/' parsec_directory = '/Users/grefe950/evolve/padova/trk/2008/z017y26' """ Explanation: Link to directory where model tracks exist. End of explanation """ def loadTrack(filename): trk = np.genfromtxt(filename, usecols=(0,1,2,3,4,5)) bools = [x[0] > 1.0e8 for x in trk] return np.compress(bools, trk, axis=0) """ Explanation: Routine to load and trim model mass track. End of explanation """ trk_atlas = loadTrack('{0}/tmp/atlas/gs98/m2500_GS98_p000_p0_y28_mlt1.884.trk'.format(dsep15_directory)) trk_marcs = loadTrack('{0}/gas07/p000/a0/amlt2202/m2500_GAS07_p000_p0_y26_mlt2.202.trk'.format(dsep15_directory)) trk_dsep = loadTrack('{0}/m250fehp00afep0.jc2mass'.format(dsep08_directory)) trk_parsec = np.genfromtxt('{0}/ms_2.50.dat'.format(parsec_directory), usecols=(0,1,2,3,4,5)) """ Explanation: Load models with several different boundary condition prescriptions, End of explanation """ fig, ax = plt.subplots(1, 1, figsize=(10., 6.)) ax.set_xlim(1.1e4, 4.0e3) ax.set_ylim(1.6, 2.0) ax.set_xlabel('${\\rm Effective\ Temperature\ (K)}$', fontsize=20., family='serif') ax.set_ylabel('$\\log (L / L_{\\odot})$', fontsize=20., family='serif') ax.tick_params(which='major', axis='both', length=15., labelsize=16.) ax.plot(10**trk_atlas[:,1], trk_atlas[:,3], lw=2, c='#0094b2', label='GS98, Phx tau=10') ax.plot(10**trk_dsep[:,1], trk_dsep[:,3], lw=2, dashes=(15., 5.), c='#B22222', label='GS98, Phx T(tau)=Teff') ax.plot(10**trk_marcs[:,1], trk_marcs[:,3], lw=2, dashes=(2.0, 2.0), c='#555555', label='GAS07, Mrc tau=50') ax.plot(10**trk_parsec[:,3], trk_parsec[:,2], lw=2, dashes=(10., 10.), c='#9400D2', label='Parsec 2008') ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) """ Explanation: Plot mass track. End of explanation """ fig, ax = plt.subplots(1, 1, figsize=(10., 6.)) ax.set_xlim(1.1e4, 4.0e3) ax.set_ylim(1.6, 2.0) ax.set_xlabel('${\\rm Effective\ Temperature\ (K)}$', fontsize=20., family='serif') ax.set_ylabel('$\\log (L / L_{\\odot})$', fontsize=20., family='serif') ax.tick_params(which='major', axis='both', length=15., labelsize=16.) ax.plot(10**trk_atlas[:,1], trk_atlas[:,3], lw=2, c='#0094b2', label='GS98, Phx tau=10') ax.plot(10**trk_dsep[:,1], trk_dsep[:,3], lw=2, dashes=(15., 5.), c='#B22222', label='GS98, Phx T(tau)=Teff') ax.plot(10**trk_marcs[:,1], trk_marcs[:,3], lw=2, dashes=(2.0, 2.0), c='#555555', label='GAS07, Mrc tau=50') ax.plot(10**trk_parsec[:,3] + 500., trk_parsec[:,2] + 0.02, lw=2, dashes=(10., 10.), c='#9400D2', label='Parsec 2008') ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) """ Explanation: Adjust temperature and luminosity to better reveal morphological difference along the sub-giant branch. End of explanation """ fig, ax = plt.subplots(1, 1, figsize=(10., 6.)) ax.set_xlim(0.5, 0.7) ax.set_ylim(4.5, 0.5) ax.set_xlabel('${\\rm Age\\ (Gyr)}$', fontsize=20.) ax.set_ylabel('$\\log (g)$', fontsize=20.) ax.tick_params(which='major', axis='both', length=15., labelsize=16.) 
ax.plot(trk_atlas[:,0]/1.0e9, trk_atlas[:,2], lw=2, c='#0094b2', label='GS98, Phx tau=10')
ax.plot(trk_dsep[:,0]/1.0e9, trk_dsep[:,2], lw=2, dashes=(15., 5.), c='#B22222', label='GS98, Phx T(tau)=Teff')
ax.plot(trk_marcs[:,0]/1.0e9, trk_marcs[:,2], lw=2, dashes=(2.0, 2.0), c='#555555', label='GAS07, Mrc tau=50')
ax.plot(trk_parsec[:,1]/1.0e9, trk_parsec[:,4], lw=2, dashes=(10., 10.), c='#9400D2', label='Parsec 2008')

ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
"""
Explanation: All three Dartmouth models exhibit similar morphological trends along the sub-giant branch, with subtle differences owing to small differences in adopted chemical compositions, convective mixing lengths, and locations where surface boundary conditions are attached. The Padova model, on the other hand, shows distinct morphological differences compared to the Dartmouth tracks and exhibits an overall cooler effective temperature, by approximately 500 K, throughout its evolution. I shifted the Parsec track to match the Dartmouth models near the MSTO around 7800 K by adding 500 K to the effective temperature and 0.02 dex to the logarithm of the bolometric luminosity.
The Padova model shows a morphology that is somewhat mixed between the Dartmouth models with GS98 and GAS07 solar compositions. In general, this does not appear to be a significant source of discrepancy between the models, with the MS morphology being similar up toward the end of the blue loop. However, the Padova model enters the sub-giant branch with a shallower slope. This leads to the Padova model showing a larger relative difference between temperatures at the turn-off and bRGB. The luminosity at the bRGB is comparable to our track with the GAS07 solar abundance, whereas the GS98 models (with higher overall Z) extend to lower luminosities.
The extent of the sub-giant branch in effective temperature among the Padova models suggests that the internal structure of these particular models is distinct from that of the Dartmouth models. These types of structural differences along the sub-giant branch could lead to a variety of asteroseismic signatures among the models. Whether a higher mass Padova model would yield a similar sub-giant branch extent is hard to gauge with their coarse grid resolution (see below).
We can also ask how radii (here, log(g)s) differ between the models as a function of age to probe how the asteroseismic mean density will be affected by potential structural differences in the evolution models.
End of explanation
"""
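Because each code writes its tracks with different column layouts and time sampling, a like-for-like comparison at fixed ages requires interpolating onto a common age grid. A minimal sketch of that step is below; it assumes the column conventions used in the plotting calls above (age in column 0 and log g in column 2 for the Dartmouth tracks, age in column 1 and log g in column 4 for the Parsec track) and that age increases monotonically along each track.

import numpy as np

def logg_on_age_grid(track, age_col, logg_col, ages_gyr):
    """Interpolate a track's log(g) onto a common grid of ages given in Gyr."""
    age_gyr = track[:, age_col] / 1.0e9   # model ages are stored in years
    return np.interp(ages_gyr, age_gyr, track[:, logg_col])

# Common grid spanning the sub-giant portion plotted above (0.5 - 0.7 Gyr)
ages = np.linspace(0.5, 0.7, 21)

logg_atlas  = logg_on_age_grid(trk_atlas,  0, 2, ages)
logg_marcs  = logg_on_age_grid(trk_marcs,  0, 2, ages)
logg_parsec = logg_on_age_grid(trk_parsec, 1, 4, ages)

# Differences relative to the ATLAS-boundary-condition track, as a quick table
for a, d1, d2 in zip(ages, logg_marcs - logg_atlas, logg_parsec - logg_atlas):
    print('{0:5.3f} Gyr  d(logg) MARCS-ATLAS: {1:+6.3f}  Parsec-ATLAS: {2:+6.3f}'.format(a, d1, d2))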
CommonClimate/teaching_notebooks
research/SEA_high_internal_variability.ipynb
mit
%load_ext autoreload %autoreload 2 %matplotlib inline import LMRt import os import numpy as np import seaborn as sns import matplotlib.pyplot as plt from scipy.stats.mstats import mquantiles import xarray as xr from matplotlib import gridspec from scipy.signal import find_peaks import pandas as pd import pickle from tqdm import tqdm import pyleoclim as pyleo with xr.open_dataset('sst_nino34_cm2p1_1860.nc') as ds: print(ds) nino34 = ds['sst_nino34'].values time = ds['time'].values #print(np.shape(nino34)) """ Explanation: Superposed epoch analysis in the presence of high internal variability We will be using a 4000yr pre-industrial time series of monthly-mean NINO3.4 SST from the GFDL CM2.1, described in: CM2.1 model formulation, and tropical/ENSO evaluation: - Delworth et al. (2006): http://doi.org/10.1175/JCLI3629.1 - Wittenberg et al. (2006): http://doi.org/10.1175/JCLI3631.1 Pre-industrial control simulation, and long-range ENSO modulation & memory: - Wittenberg et al. (2009): http://doi.org/10.1029/2009GL038710 - Wittenberg et al. (2014): http://doi.org/10.1175/JCLI-D-13-00577.1 - Atwood et al. (CD 2017): http://doi.org/10.1007/s00382-016-3477-9 h/t Andrew Wittenberg for providing the simulation. Exploratory analysis End of explanation """ with open('cm2.1_nino34_TY.pkl', 'rb') as f: year, nino34_ann = pickle.load(f) # ignore last value (NaN) year = year[:-1]; nino34_ann = nino34_ann[:-1] nino34_ann -= np.mean(nino34_ann) fig, ax = plt.subplots() sns.distplot(nino34_ann,ax=ax) sns.despine() ax.set(title='Distribution of annual values',xlabel = 'NINO3.4 SST',ylabel = 'PDF') """ Explanation: Applying the tropical year average to this is a little tedious, so we skip some tests and load a up a file provided by Feng. End of explanation """ thre = 0.15 q = np.quantile(nino34_ann,[thre, 1-thre]) nina = np.where(nino34_ann <= q[0]) nino = np.where(nino34_ann >= q[1]) fig, ax = plt.subplots(figsize=[10, 4]) ax.plot(year, nino34_ann, color='gray',linewidth=0.2) ax.plot(year[nino],nino34_ann[nino],'o',alpha=0.6,markersize=3,color='C3') ax.plot(year[nina],nino34_ann[nina],'o',alpha=0.6,markersize=3,color='C0') plt.text(4100,2,'El Niño events',color='C3') plt.text(4100,-2,'La Niña events',color='C0') # ax.set_xlabel('Year') ax.set_ylabel('Niño 3.4') ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) len(nina[0]) """ Explanation: This shows a nice skewness comparable to observations. 
Let's define a quantile-based threshold for El Niño and La Niña events: End of explanation """ from scipy.stats import gaussian_kde nkeys = [5,10,15,20,50] clr = plt.cm.tab10(np.linspace(0,1,10)) prob = np.empty([len(nkeys),1]) nt = len(year) nMC = 10000 # number of Monte Carlo draws comp = np.empty([len(nkeys),nMC]) fig, ax = plt.subplots(figsize=(8,4)) xm = np.linspace(-2,3,200) for key in nkeys: i = nkeys.index(key) for m in range(nMC): events = np.random.choice(nt, size=[key, 1], replace=False, p=None) comp[i,m] = np.mean(nino34_ann[events],axis=0) x = np.sort(comp[i,:]) # sort it by increasing values kde = gaussian_kde(x,bw_method=0.2) # apply Kernel Density Estimation if any(x>=q[1]): comp_nino = x[x>=q[1]] prob[i] = kde.integrate_box_1d(q[1],5) ax.fill_between(comp_nino,kde(comp_nino),alpha=0.3, color = clr[i]) else: prob[i]=0 ax.plot(xm,kde(xm),linewidth=2,color=clr[i],label=str(key) + ', '+ f'{prob[i][0]:3.4f}') ax.axvline(q[1],linestyle='--',alpha=0.2,color='black') plt.legend(title=r'# key dates, $P(x > x_{crit})$',loc=5,fontsize=10,title_fontsize=12) ax.set_xlim([-2,3]) ax.set_xlabel('Niño 3.4') ax.set_ylabel('Density') ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) ax.set_title('Probability of accidentally identifying unforced events as forced') fig.savefig("CM2.1_compositing_accidents.pdf",dpi=200,pad_inches=0.2) """ Explanation: Accidental El Niño composites Under stationary boundary conditions, warm (or cold) events can only appear in composites due to sampling artifacts, which should be larger for small number of key dates. Let us use resampling to evaluate the risk of wrongly identifying "forced" responses when none exists. Here our criterion for identifying warm events is that they exceed the threshold defined above. End of explanation """
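The KDE integration above smooths the Monte Carlo distribution before measuring its warm tail. A quick cross-check is to estimate the same false-positive rate directly from the raw draws, as the fraction of composite means that exceed the El Niño threshold. The helper below is a sketch of that idea; it reuses nino34_ann and q from the cells above, and the function and variable names are only illustrative.

import numpy as np

def accidental_composite_prob(series, n_events, threshold, n_mc=10000, seed=42):
    """Fraction of random composites of size n_events whose mean exceeds threshold."""
    rng = np.random.RandomState(seed)
    nt = len(series)
    means = np.empty(n_mc)
    for m in range(n_mc):
        idx = rng.choice(nt, size=n_events, replace=False)
        means[m] = series[idx].mean()
    return np.mean(means >= threshold)

for n in [5, 10, 15, 20, 50]:
    p = accidental_composite_prob(nino34_ann, n, q[1])
    print('n = {0:2d} key dates: empirical P(composite >= q[1]) = {1:.4f}'.format(n, p))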
sidazhang/udacity-dlnd
sentiment-network/Sentiment_Classification_Projects.ipynb
mit
def pretty_print_review_and_label(i): print(labels[i] + "\t:\t" + reviews[i][:80] + "...") g = open('reviews.txt','r') # What we know! reviews = list(map(lambda x:x[:-1],g.readlines())) g.close() g = open('labels.txt','r') # What we WANT to know! labels = list(map(lambda x:x[:-1].upper(),g.readlines())) g.close() """ Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network by Andrew Trask Twitter: @iamtrask Blog: http://iamtrask.github.io What You Should Already Know neural networks, forward and back-propagation stochastic gradient descent mean squared error and train/test splits Where to Get Help if You Need it Re-watch previous Udacity Lectures Leverage the recommended Course Reading Material - Grokking Deep Learning (Check inside your classroom for a discount code) Shoot me a tweet @iamtrask Tutorial Outline: Intro: The Importance of "Framing a Problem" (this lesson) Curate a Dataset Developing a "Predictive Theory" PROJECT 1: Quick Theory Validation Transforming Text to Numbers PROJECT 2: Creating the Input/Output Data Putting it all together in a Neural Network (video only - nothing in notebook) PROJECT 3: Building our Neural Network Understanding Neural Noise PROJECT 4: Making Learning Faster by Reducing Noise Analyzing Inefficiencies in our Network PROJECT 5: Making our Network Train and Run Faster Further Noise Reduction PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary Analysis: What's going on in the weights? Lesson: Curate a Dataset<a id='lesson_1'></a> The cells from here until Project 1 include code Andrew shows in the videos leading up to mini project 1. We've included them so you can run the code along with the videos without having to type in everything. End of explanation """ len(reviews) reviews[0] labels[0] """ Explanation: Note: The data in reviews.txt we're using has already been preprocessed a bit and contains only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way. End of explanation """ print("labels.txt \t : \t reviews.txt\n") pretty_print_review_and_label(2137) pretty_print_review_and_label(12816) pretty_print_review_and_label(6267) pretty_print_review_and_label(21934) pretty_print_review_and_label(5297) pretty_print_review_and_label(4998) """ Explanation: Lesson: Develop a Predictive Theory<a id='lesson_2'></a> End of explanation """ from collections import Counter import numpy as np """ Explanation: Project 1: Quick Theory Validation<a id='project_1'></a> There are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook. You'll find the Counter class to be useful in this exercise, as well as the numpy library. End of explanation """ # Create three Counter objects to store positive, negative and total counts positive_counts = Counter() negative_counts = Counter() total_counts = Counter() """ Explanation: We'll create three Counter objects, one for words from postive reviews, one for words from negative reviews, and one for all the words. End of explanation """ # TODO: Loop over all the words in all the reviews and increment the counts in the appropriate counter objects """ Explanation: TODO: Examine all the reviews. 
For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter. Note: Throughout these projects, you should use split(' ') to divide a piece of text (such as a review) into individual words. If you use split() instead, you'll get slightly different results than what the videos and solutions show. End of explanation """ # Examine the counts of the most common words in positive reviews positive_counts.most_common() # Examine the counts of the most common words in negative reviews negative_counts.most_common() """ Explanation: Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used. End of explanation """ # Create Counter object to store positive/negative ratios pos_neg_ratios = Counter() # TODO: Calculate the ratios of positive and negative uses of the most common words # Consider words to be "common" if they've been used at least 100 times """ Explanation: As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews. TODO: Check all the words you've seen and calculate the ratio of postive to negative uses and store that ratio in pos_neg_ratios. Hint: the positive-to-negative ratio for a given word can be calculated with positive_counts[word] / float(negative_counts[word]+1). Notice the +1 in the denominator – that ensures we don't divide by zero for words that are only seen in positive reviews. End of explanation """ print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"])) print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"])) print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"])) """ Explanation: Examine the ratios you've calculated for a few words: End of explanation """ # TODO: Convert ratios to logs """ Explanation: Looking closely at the values you just calculated, we see the following: Words that you would expect to see more often in positive reviews – like "amazing" – have a ratio greater than 1. The more skewed a word is toward postive, the farther from 1 its positive-to-negative ratio will be. Words that you would expect to see more often in negative reviews – like "terrible" – have positive values that are less than 1. The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be. Neutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews – like "the" – have values very close to 1. A perfectly neutral word – one that was used in exactly the same number of positive reviews as negative reviews – would be almost exactly 1. The +1 we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway. Ok, the ratios tell us which words are used more often in postive or negative reviews, but the specific values we've calculated are a bit difficult to work with. 
A very positive word like "amazing" has a value above 4, whereas a very negative word like "terrible" has a value around 0.18. Those values aren't easy to compare for a couple of reasons: Right now, 1 is considered neutral, but the absolute value of the postive-to-negative rations of very postive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. So we should center all the values around netural so the absolute value fro neutral of the postive-to-negative ratio for a word would indicate how much sentiment (positive or negative) that word conveys. When comparing absolute values it's easier to do that around zero than one. To fix these issues, we'll convert all of our ratios to new values using logarithms. TODO: Go through all the ratios you calculated and convert them to logarithms. (i.e. use np.log(ratio)) In the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs. End of explanation """ print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"])) print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"])) print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"])) """ Explanation: Examine the new ratios you've calculated for the same words from before: End of explanation """ # words most frequently seen in a review with a "POSITIVE" label pos_neg_ratios.most_common() # words most frequently seen in a review with a "NEGATIVE" label list(reversed(pos_neg_ratios.most_common()))[0:30] # Note: Above is the code Andrew uses in his solution video, # so we've included it here to avoid confusion. # If you explore the documentation for the Counter class, # you will see you could also find the 30 least common # words like this: pos_neg_ratios.most_common()[:-31:-1] """ Explanation: If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments. Now run the following cells to see more ratios. The first cell displays all the words, ordered by how associated they are with postive reviews. (Your notebook will most likely truncate the output so you won't actually see all the words in the list.) The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write reversed(pos_neg_ratios.most_common()).) You should continue to see values similar to the earlier ones we checked – neutral words will be close to 0, words will get more positive as their ratios approach and go above 1, and words will get more negative as their ratios approach and go below -1. That's why we decided to use the logs instead of the raw ratios. End of explanation """ from IPython.display import Image review = "This was a horrible, terrible movie." 
Image(filename='sentiment_network.png') review = "The movie was excellent" Image(filename='sentiment_network_pos.png') """ Explanation: End of Project 1. Watch the next video to see Andrew's solution, then continue on to the next lesson. Transforming Text into Numbers<a id='lesson_3'></a> The cells here include code Andrew shows in the next video. We've included it so you can run the code along with the video without having to type in everything. End of explanation """ # TODO: Create set named "vocab" containing all of the words from all of the reviews vocab = None """ Explanation: Project 2: Creating the Input/Output Data<a id='project_2'></a> TODO: Create a set named vocab that contains every word in the vocabulary. End of explanation """ vocab_size = len(vocab) print(vocab_size) """ Explanation: Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074 End of explanation """ from IPython.display import Image Image(filename='sentiment_network_2.png') """ Explanation: Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer. End of explanation """ # TODO: Create layer_0 matrix with dimensions 1 by vocab_size, initially filled with zeros layer_0 = None """ Explanation: TODO: Create a numpy array called layer_0 and initialize it to all zeros. You will find the zeros function particularly helpful here. Be sure you create layer_0 as a 2-dimensional matrix with 1 row and vocab_size columns. End of explanation """ layer_0.shape from IPython.display import Image Image(filename='sentiment_network.png') """ Explanation: Run the following cell. It should display (1, 74074) End of explanation """ # Create a dictionary of words in the vocabulary mapped to index positions # (to be used in layer_0) word2index = {} for i,word in enumerate(vocab): word2index[word] = i # display the map of words to indices word2index """ Explanation: layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word. End of explanation """ def update_input_layer(review): """ Modify the global layer_0 to represent the vector form of review. The element at a given index of layer_0 should represent how many times the given word occurs in the review. Args: review(string) - the string of the review Returns: None """ global layer_0 # clear out previous state by resetting the layer to be all 0s layer_0 *= 0 # TODO: count how many times each word is used in the given review and store the results in layer_0 """ Explanation: TODO: Complete the implementation of update_input_layer. It should count how many times each word is used in the given review, and then store those counts at the appropriate indices inside layer_0. End of explanation """ update_input_layer(reviews[0]) layer_0 """ Explanation: Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0. End of explanation """ def get_target_for_label(label): """Convert a label to `0` or `1`. Args: label(string) - Either "POSITIVE" or "NEGATIVE". Returns: `0` or `1`. """ # TODO: Your code here """ Explanation: TODO: Complete the implementation of get_target_for_labels. 
It should return 0 or 1, depending on whether the given label is NEGATIVE or POSITIVE, respectively. End of explanation """ labels[0] get_target_for_label(labels[0]) """ Explanation: Run the following two cells. They should print out'POSITIVE' and 1, respectively. End of explanation """ labels[1] get_target_for_label(labels[1]) """ Explanation: Run the following two cells. They should print out 'NEGATIVE' and 0, respectively. End of explanation """ import time import sys import numpy as np # Encapsulate our neural network in a class class SentimentNetwork: def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1): """Create a SentimenNetwork with the given settings Args: reviews(list) - List of reviews used for training labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews hidden_nodes(int) - Number of nodes to create in the hidden layer learning_rate(float) - Learning rate to use while training """ # Assign a seed to our random number generator to ensure we get # reproducable results during development np.random.seed(1) # process the reviews and their associated labels so that everything # is ready for training self.pre_process_data(reviews, labels) # Build the network to have the number of hidden nodes and the learning rate that # were passed into this initializer. Make the same number of input nodes as # there are vocabulary words and create a single output node. self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate) def pre_process_data(self, reviews, labels): review_vocab = set() # TODO: populate review_vocab with all of the words in the given reviews # Remember to split reviews into individual words # using "split(' ')" instead of "split()". # Convert the vocabulary set to a list so we can access words via indices self.review_vocab = list(review_vocab) label_vocab = set() # TODO: populate label_vocab with all of the words in the given labels. # There is no need to split the labels because each one is a single word. # Convert the label vocabulary set to a list so we can access labels via indices self.label_vocab = list(label_vocab) # Store the sizes of the review and label vocabularies. self.review_vocab_size = len(self.review_vocab) self.label_vocab_size = len(self.label_vocab) # Create a dictionary of words in the vocabulary mapped to index positions self.word2index = {} # TODO: populate self.word2index with indices for all the words in self.review_vocab # like you saw earlier in the notebook # Create a dictionary of labels mapped to index positions self.label2index = {} # TODO: do the same thing you did for self.word2index and self.review_vocab, # but for self.label2index and self.label_vocab instead def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Store the number of nodes in input, hidden, and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Store the learning rate self.learning_rate = learning_rate # Initialize weights # TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between # the input layer and the hidden layer. self.weights_0_1 = None # TODO: initialize self.weights_1_2 as a matrix of random values. # These are the weights between the hidden layer and the output layer. 
self.weights_1_2 = None # TODO: Create the input layer, a two-dimensional matrix with shape # 1 x input_nodes, with all values initialized to zero self.layer_0 = np.zeros((1,input_nodes)) def update_input_layer(self,review): # TODO: You can copy most of the code you wrote for update_input_layer # earlier in this notebook. # # However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE # THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS. # For example, replace "layer_0 *= 0" with "self.layer_0 *= 0" pass def get_target_for_label(self,label): # TODO: Copy the code you wrote for get_target_for_label # earlier in this notebook. pass def sigmoid(self,x): # TODO: Return the result of calculating the sigmoid activation function # shown in the lectures pass def sigmoid_output_2_derivative(self,output): # TODO: Return the derivative of the sigmoid activation function, # where "output" is the original output from the sigmoid fucntion pass def train(self, training_reviews, training_labels): # make sure out we have a matching number of reviews and labels assert(len(training_reviews) == len(training_labels)) # Keep track of correct predictions to display accuracy during training correct_so_far = 0 # Remember when we started for printing time statistics start = time.time() # loop through all the given reviews and run a forward and backward pass, # updating weights for every item for i in range(len(training_reviews)): # TODO: Get the next review and its correct label # TODO: Implement the forward pass through the network. # That means use the given review to update the input layer, # then calculate values for the hidden layer, # and finally calculate the output layer. # # Do not use an activation function for the hidden layer, # but use the sigmoid activation function for the output layer. # TODO: Implement the back propagation pass here. # That means calculate the error for the forward pass's prediction # and update the weights in the network according to their # contributions toward the error, as calculated via the # gradient descent and back propagation algorithms you # learned in class. # TODO: Keep track of correct predictions. To determine if the prediction was # correct, check that the absolute value of the output error # is less than 0.5. If so, add one to the correct_so_far count. # For debug purposes, print out our prediction accuracy and speed # throughout the training process. elapsed_time = float(time.time() - start) reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0 sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \ + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \ + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \ + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%") if(i % 2500 == 0): print("") def test(self, testing_reviews, testing_labels): """ Attempts to predict the labels for the given testing_reviews, and uses the test_labels to calculate the accuracy of those predictions. """ # keep track of how many correct predictions we make correct = 0 # we'll time how many predictions per second we make start = time.time() # Loop through each of the given reviews and call run to predict # its label. for i in range(len(testing_reviews)): pred = self.run(testing_reviews[i]) if(pred == testing_labels[i]): correct += 1 # For debug purposes, print out our prediction accuracy and speed # throughout the prediction process. 
elapsed_time = float(time.time() - start) reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0 sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \ + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \ + " #Correct:" + str(correct) + " #Tested:" + str(i+1) \ + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%") def run(self, review): """ Returns a POSITIVE or NEGATIVE prediction for the given review. """ # TODO: Run a forward pass through the network, like you did in the # "train" function. That means use the given review to # update the input layer, then calculate values for the hidden layer, # and finally calculate the output layer. # # Note: The review passed into this function for prediction # might come from anywhere, so you should convert it # to lower case prior to using it. # TODO: The output layer should now contain a prediction. # Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`, # and `NEGATIVE` otherwise. pass """ Explanation: End of Project 2. Watch the next video to see Andrew's solution, then continue on to the next lesson. Project 3: Building a Neural Network<a id='project_3'></a> TODO: We've included the framework of a class called SentimentNetork. Implement all of the items marked TODO in the code. These include doing the following: - Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer. - Do not add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs. - Re-use the code from earlier in this notebook to create the training data (see TODOs in the code) - Implement the pre_process_data function to create the vocabulary for our training data generating functions - Ensure train trains over the entire corpus Where to Get Help if You Need it Re-watch earlier Udacity lectures Chapters 3-5 - Grokking Deep Learning - (Check inside your classroom for a discount code) End of explanation """ mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1) """ Explanation: Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1. End of explanation """ mlp.test(reviews[-1000:],labels[-1000:]) """ Explanation: Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set). We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from. End of explanation """ mlp.train(reviews[:-1000],labels[:-1000]) """ Explanation: Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing. End of explanation """ mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01) mlp.train(reviews[:-1000],labels[:-1000]) """ Explanation: That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network. End of explanation """ mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001) mlp.train(reviews[:-1000],labels[:-1000]) """ Explanation: That probably wasn't much different. 
Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network. End of explanation """ from IPython.display import Image Image(filename='sentiment_network.png') def update_input_layer(review): global layer_0 # clear out previous state, reset the layer to be all 0s layer_0 *= 0 for word in review.split(" "): layer_0[0][word2index[word]] += 1 update_input_layer(reviews[0]) layer_0 review_counter = Counter() for word in reviews[0].split(" "): review_counter[word] += 1 review_counter.most_common() """ Explanation: With a learning rate of 0.001, the network should finall have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson. End of Project 3. Watch the next video to see Andrew's solution, then continue on to the next lesson. Understanding Neural Noise<a id='lesson_4'></a> The following cells include includes the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything. End of explanation """ # TODO: -Copy the SentimentNetwork class from Projet 3 lesson # -Modify it to reduce noise, like in the video """ Explanation: Project 4: Reducing Noise in Our Input Data<a id='project_4'></a> TODO: Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following: * Copy the SentimentNetwork class you created earlier into the following cell. * Modify update_input_layer so it does not count how many times each word is used, but rather just stores whether or not a word was used. End of explanation """ mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1) mlp.train(reviews[:-1000],labels[:-1000]) """ Explanation: Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1. End of explanation """ mlp.test(reviews[-1000:],labels[-1000:]) """ Explanation: That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions. End of explanation """ Image(filename='sentiment_network_sparse.png') layer_0 = np.zeros(10) layer_0 layer_0[4] = 1 layer_0[9] = 1 layer_0 weights_0_1 = np.random.randn(10,5) layer_0.dot(weights_0_1) indices = [4,9] layer_1 = np.zeros(5) for index in indices: layer_1 += (1 * weights_0_1[index]) layer_1 Image(filename='sentiment_network_sparse_2.png') layer_1 = np.zeros(5) for index in indices: layer_1 += (weights_0_1[index]) layer_1 """ Explanation: End of Project 4. Andrew's solution was actually in the previous video, so rewatch that video if you had any problems with that project. Then continue on to the next lesson. Analyzing Inefficiencies in our Network<a id='lesson_5'></a> The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything. End of explanation """ # TODO: -Copy the SentimentNetwork class from Project 4 lesson # -Modify it according to the above instructions """ Explanation: Project 5: Making our Network More Efficient<a id='project_5'></a> TODO: Make the SentimentNetwork class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. 
To do that, you can do the following: * Copy the SentimentNetwork class from the previous project into the following cell. * Remove the update_input_layer function - you will not need it in this version. * Modify init_network: You no longer need a separate input layer, so remove any mention of self.layer_0 You will be dealing with the old hidden layer more directly, so create self.layer_1, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero Modify train: Change the name of the input parameter training_reviews to training_reviews_raw. This will help with the next step. At the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from word2index) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local list variable named training_reviews that should contain a list for each review in training_reviews_raw. Those lists should contain the indices for words found in the review. Remove call to update_input_layer Use self's layer_1 instead of a local layer_1 object. In the forward pass, replace the code that updates layer_1 with new logic that only adds the weights for the indices used in the review. When updating weights_0_1, only update the individual weights that were used in the forward pass. Modify run: Remove call to update_input_layer Use self's layer_1 instead of a local layer_1 object. Much like you did in train, you will need to pre-process the review so you can work with word indices, then update layer_1 by adding weights for the indices used in the review. End of explanation """ mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1) mlp.train(reviews[:-1000],labels[:-1000]) """ Explanation: Run the following cell to recreate the network and train it once again. End of explanation """ mlp.test(reviews[-1000:],labels[-1000:]) """ Explanation: That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions. End of explanation """ Image(filename='sentiment_network_sparse_2.png') # words most frequently seen in a review with a "POSITIVE" label pos_neg_ratios.most_common() # words most frequently seen in a review with a "NEGATIVE" label list(reversed(pos_neg_ratios.most_common()))[0:30] from bokeh.models import ColumnDataSource, LabelSet from bokeh.plotting import figure, show, output_file from bokeh.io import output_notebook output_notebook() hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100, normed=True) p = figure(tools="pan,wheel_zoom,reset,save", toolbar_location="above", title="Word Positive/Negative Affinity Distribution") p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555") show(p) frequency_frequency = Counter() for word, cnt in total_counts.most_common(): frequency_frequency[cnt] += 1 hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100, normed=True) p = figure(tools="pan,wheel_zoom,reset,save", toolbar_location="above", title="The frequency distribution of the words in our corpus") p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555") show(p) """ Explanation: End of Project 5. Watch the next video to see Andrew's solution, then continue on to the next lesson. 
Further Noise Reduction<a id='lesson_6'></a> End of explanation """ # TODO: -Copy the SentimentNetwork class from Project 5 lesson # -Modify it according to the above instructions """ Explanation: Project 6: Reducing Noise by Strategically Reducing the Vocabulary<a id='project_6'></a> TODO: Improve SentimentNetwork's performance by reducing more noise in the vocabulary. Specifically, do the following: * Copy the SentimentNetwork class from the previous project into the following cell. * Modify pre_process_data: Add two additional parameters: min_count and polarity_cutoff Calculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.) Andrew's solution only calculates a postive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like. Change so words are only added to the vocabulary if they occur in the vocabulary more than min_count times. Change so words are only added to the vocabulary if the absolute value of their postive-to-negative ratio is at least polarity_cutoff Modify __init__: Add the same two parameters (min_count and polarity_cutoff) and use them when you call pre_process_data End of explanation """ mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01) mlp.train(reviews[:-1000],labels[:-1000]) """ Explanation: Run the following cell to train your network with a small polarity cutoff. End of explanation """ mlp.test(reviews[-1000:],labels[-1000:]) """ Explanation: And run the following cell to test it's performance. It should be End of explanation """ mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01) mlp.train(reviews[:-1000],labels[:-1000]) """ Explanation: Run the following cell to train your network with a much larger polarity cutoff. End of explanation """ mlp.test(reviews[-1000:],labels[-1000:]) """ Explanation: And run the following cell to test it's performance. 
End of explanation """ mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01) mlp_full.train(reviews[:-1000],labels[:-1000]) Image(filename='sentiment_network_sparse.png') def get_most_similar_words(focus = "horrible"): most_similar = Counter() for word in mlp_full.word2index.keys(): most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]]) return most_similar.most_common() get_most_similar_words("excellent") get_most_similar_words("terrible") import matplotlib.colors as colors words_to_visualize = list() for word, ratio in pos_neg_ratios.most_common(500): if(word in mlp_full.word2index.keys()): words_to_visualize.append(word) for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]: if(word in mlp_full.word2index.keys()): words_to_visualize.append(word) pos = 0 neg = 0 colors_list = list() vectors_list = list() for word in words_to_visualize: if word in pos_neg_ratios.keys(): vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]]) if(pos_neg_ratios[word] > 0): pos+=1 colors_list.append("#00ff00") else: neg+=1 colors_list.append("#000000") from sklearn.manifold import TSNE tsne = TSNE(n_components=2, random_state=0) words_top_ted_tsne = tsne.fit_transform(vectors_list) p = figure(tools="pan,wheel_zoom,reset,save", toolbar_location="above", title="vector T-SNE for most polarized words") source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0], x2=words_top_ted_tsne[:,1], names=words_to_visualize, color=colors_list)) p.scatter(x="x1", y="x2", size=8, source=source, fill_color="color") word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6, text_font_size="8pt", text_color="#555555", source=source, text_align='center') p.add_layout(word_labels) show(p) # green indicates positive words, black indicates negative words """ Explanation: End of Project 6. Watch the next video to see Andrew's solution, then continue on to the next lesson. Analysis: What's Going on in the Weights?<a id='lesson_7'></a> End of explanation """
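get_most_similar_words ranks words by a raw dot product with the focus word's row of weights_0_1, which tends to favour words whose weight vectors are simply long. A cosine-normalised variant gives a length-independent ranking; the sketch below assumes your completed SentimentNetwork exposes weights_0_1 and word2index exactly as they are used above, and the helper name is only illustrative.

import numpy as np
from collections import Counter

def get_most_similar_words_cosine(focus="horrible", top_n=10):
    """Rank vocabulary words by cosine similarity to the focus word's hidden-layer weights."""
    focus_vec = mlp_full.weights_0_1[mlp_full.word2index[focus]]
    focus_norm = np.linalg.norm(focus_vec)
    scores = Counter()
    for word, idx in mlp_full.word2index.items():
        vec = mlp_full.weights_0_1[idx]
        denom = np.linalg.norm(vec) * focus_norm
        if denom > 0:
            scores[word] = np.dot(vec, focus_vec) / denom
    return scores.most_common(top_n)

get_most_similar_words_cosine("excellent")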
recepkabatas/Spark
1_notmnist.ipynb
apache-2.0
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
import matplotlib.pyplot as plt
import numpy as np
import os
import tarfile
import urllib
from urllib.request import urlretrieve
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
import pickle
"""
Explanation: Deep Learning with TensorFlow
Credits: Forked from TensorFlow by Google
Setup
Refer to the setup instructions.
Exercise 1
The objective of this exercise is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.
This notebook uses the notMNIST dataset for its Python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.
End of explanation
"""

url = 'http://yaroslavvb.com/upload/notMNIST/'

def maybe_download(filename, expected_bytes):
  """Download a file if not present, and make sure it's the right size."""
  if not os.path.exists(filename):
    filename, _ = urlretrieve(url + filename, filename)
  statinfo = os.stat(filename)
  if statinfo.st_size == expected_bytes:
    print('Found and verified', filename)
  else:
    raise Exception('Failed to verify ' + filename + '. Can you get to it with a browser?')
  return filename

train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
"""
Explanation: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the test set 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine.
End of explanation
"""

num_classes = 10

def extract(filename):
  tar = tarfile.open(filename)
  tar.extractall()
  tar.close()
  root = os.path.splitext(os.path.splitext(filename)[0])[0]  # remove .tar.gz
  data_folders = [os.path.join(root, d) for d in sorted(os.listdir(root))]
  if len(data_folders) != num_classes:
    raise Exception(
      'Expected %d folders, one per class. Found %d instead.' % (
        num_classes, len(data_folders)))
  print(data_folders)
  return data_folders

train_folders = extract(train_filename)
test_folders = extract(test_filename)
"""
Explanation: Extract the dataset from the compressed .tar.gz file. This should give you a set of directories, labelled A through J.
End of explanation
"""

image_size = 28  # Pixel width and height.
pixel_depth = 255.0  # Number of levels per pixel.
def load(data_folders, min_num_images, max_num_images):
  dataset = np.ndarray(
    shape=(max_num_images, image_size, image_size), dtype=np.float32)
  labels = np.ndarray(shape=(max_num_images), dtype=np.int32)
  label_index = 0
  image_index = 0
  for folder in data_folders:
    print(folder)
    for image in os.listdir(folder):
      if image_index >= max_num_images:
        raise Exception('More images than expected: %d >= %d' % (
          image_index, max_num_images))
      image_file = os.path.join(folder, image)
      try:
        image_data = (ndimage.imread(image_file).astype(float) -
                      pixel_depth / 2) / pixel_depth
        if image_data.shape != (image_size, image_size):
          raise Exception('Unexpected image shape: %s' % str(image_data.shape))
        dataset[image_index, :, :] = image_data
        labels[image_index] = label_index
        image_index += 1
      except IOError as e:
        print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')
    label_index += 1
  num_images = image_index
  dataset = dataset[0:num_images, :, :]
  labels = labels[0:num_images]
  if num_images < min_num_images:
    raise Exception('Many fewer images than expected: %d < %d' % (
      num_images, min_num_images))
  print('Full dataset tensor:', dataset.shape)
  print('Mean:', np.mean(dataset))
  print('Standard deviation:', np.std(dataset))
  print('Labels:', labels.shape)
  return dataset, labels

train_dataset, train_labels = load(train_folders, 450000, 550000)
test_dataset, test_labels = load(test_folders, 18000, 20000)
"""
Explanation: Problem 1
Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.
Now let's load the data in a more manageable format. We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. The labels will be stored into a separate array of integers 0 through 9.
A few images might not be readable, we'll just skip them.
End of explanation
"""

np.random.seed(133)

def randomize(dataset, labels):
  permutation = np.random.permutation(labels.shape[0])
  shuffled_dataset = dataset[permutation,:,:]
  shuffled_labels = labels[permutation]
  return shuffled_dataset, shuffled_labels

train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
"""
Explanation: Problem 2
Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.
Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
End of explanation
"""

train_size = 200000
valid_size = 10000

valid_dataset = train_dataset[:valid_size,:,:]
valid_labels = train_labels[:valid_size]
train_dataset = train_dataset[valid_size:valid_size+train_size,:,:]
train_labels = train_labels[valid_size:valid_size+train_size]
print('Training', train_dataset.shape, train_labels.shape)
print('Validation', valid_dataset.shape, valid_labels.shape)
"""
Explanation: Problem 3
Convince yourself that the data is still good after shuffling!
Problem 4
Another check: we expect the data to be balanced across classes. Verify that.
Prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed.
Also create a validation dataset for hyperparameter tuning.
End of explanation
"""

pickle_file = 'notMNIST.pickle'

try:
  f = open(pickle_file, 'wb')
  save = {
    'train_dataset': train_dataset,
    'train_labels': train_labels,
    'valid_dataset': valid_dataset,
    'valid_labels': valid_labels,
    'test_dataset': test_dataset,
    'test_labels': test_labels,
    }
  pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
  f.close()
except Exception as e:
  print('Unable to save data to', pickle_file, ':', e)
  raise

statinfo = os.stat(pickle_file)
print('Compressed pickle size:', statinfo.st_size)
"""
Explanation: Finally, let's save the data for later reuse:
End of explanation
"""
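LogisticRegression is imported at the top of this notebook but never used in the cells above. As a rough sketch of how the saved pickle might be put to work, the cell below reloads notMNIST.pickle, flattens a small subsample of the training images, and fits a simple logistic-regression baseline; the subsample size is arbitrary and only meant to keep the fit quick.

import pickle
import numpy as np
from sklearn.linear_model import LogisticRegression

with open('notMNIST.pickle', 'rb') as f:
    data = pickle.load(f)

n_train = 5000  # small subsample so the fit stays quick
X_train = data['train_dataset'][:n_train].reshape(n_train, -1)
y_train = data['train_labels'][:n_train]
X_test = data['test_dataset'].reshape(len(data['test_dataset']), -1)
y_test = data['test_labels']

clf = LogisticRegression()
clf.fit(X_train, y_train)
print('Test accuracy with %d training samples: %.3f' % (n_train, clf.score(X_test, y_test)))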
YuriyGuts/kaggle-quora-question-pairs
notebooks/feature-3rdparty-dasolmar-whq.ipynb
mit
from pygoose import * """ Explanation: Feature: "Jaccard with WHQ" (@dasolmar) Based on the kernel XGB with whq jaccard by David Solis. Imports This utility package imports numpy, pandas, matplotlib and a helper kg module into the root namespace. End of explanation """ import nltk from collections import Counter from nltk.corpus import stopwords nltk.download('stopwords') """ Explanation: NLTK tools End of explanation """ project = kg.Project.discover() """ Explanation: Config Automatically discover the paths to various data folders and compose the project structure. End of explanation """ feature_list_id = '3rdparty_dasolmar_whq' """ Explanation: Identifier for storing these features on disk and referring to them later. End of explanation """ df_train = pd.read_csv(project.data_dir + 'train.csv').fillna('') df_test = pd.read_csv(project.data_dir + 'test.csv').fillna('') """ Explanation: Read data Original question sets. End of explanation """ stops = set(stopwords.words("english")) """ Explanation: NLTK built-in stopwords. End of explanation """ # If a word appears only once, we ignore it completely (likely a typo) # Epsilon defines a smoothing constant, which makes the effect of extremely rare words smaller def get_weight(count, eps=10000, min_count=2): return 0 if count < min_count else 1 / (count + eps) def add_word_count(x, df, word): x['das_q1_' + word] = df['question1'].apply(lambda x: (word in str(x).lower())*1) x['das_q2_' + word] = df['question2'].apply(lambda x: (word in str(x).lower())*1) x['das_' + word + '_both'] = x['das_q1_' + word] * x['das_q2_' + word] train_qs = pd.Series(df_train['question1'].tolist() + df_train['question2'].tolist()).astype(str) words = (" ".join(train_qs)).lower().split() counts = Counter(words) weights = {word: get_weight(count) for word, count in counts.items()} def word_shares(row): q1_list = str(row['question1']).lower().split() q1 = set(q1_list) q1words = q1.difference(stops) if len(q1words) == 0: return '0:0:0:0:0:0:0:0' q2_list = str(row['question2']).lower().split() q2 = set(q2_list) q2words = q2.difference(stops) if len(q2words) == 0: return '0:0:0:0:0:0:0:0' words_hamming = sum(1 for i in zip(q1_list, q2_list) if i[0]==i[1])/max(len(q1_list), len(q2_list)) q1stops = q1.intersection(stops) q2stops = q2.intersection(stops) q1_2gram = set([i for i in zip(q1_list, q1_list[1:])]) q2_2gram = set([i for i in zip(q2_list, q2_list[1:])]) shared_2gram = q1_2gram.intersection(q2_2gram) shared_words = q1words.intersection(q2words) shared_weights = [weights.get(w, 0) for w in shared_words] q1_weights = [weights.get(w, 0) for w in q1words] q2_weights = [weights.get(w, 0) for w in q2words] total_weights = q1_weights + q1_weights R1 = np.sum(shared_weights) / np.sum(total_weights) #tfidf share R2 = len(shared_words) / (len(q1words) + len(q2words) - len(shared_words)) #count share R31 = len(q1stops) / len(q1words) #stops in q1 R32 = len(q2stops) / len(q2words) #stops in q2 Rcosine_denominator = (np.sqrt(np.dot(q1_weights,q1_weights))*np.sqrt(np.dot(q2_weights,q2_weights))) Rcosine = np.dot(shared_weights, shared_weights)/Rcosine_denominator if len(q1_2gram) + len(q2_2gram) == 0: R2gram = 0 else: R2gram = len(shared_2gram) / (len(q1_2gram) + len(q2_2gram)) return '{}:{}:{}:{}:{}:{}:{}:{}'.format(R1, R2, len(shared_words), R31, R32, R2gram, Rcosine, words_hamming) df = pd.concat([df_train, df_test]) df['word_shares'] = df.apply(word_shares, axis=1, raw=True) x = pd.DataFrame() x['das_word_match'] = df['word_shares'].apply(lambda x: float(x.split(':')[0])) 
x['das_word_match_2root'] = np.sqrt(x['das_word_match']) x['das_tfidf_word_match'] = df['word_shares'].apply(lambda x: float(x.split(':')[1])) x['das_shared_count'] = df['word_shares'].apply(lambda x: float(x.split(':')[2])) x['das_stops1_ratio'] = df['word_shares'].apply(lambda x: float(x.split(':')[3])) x['das_stops2_ratio'] = df['word_shares'].apply(lambda x: float(x.split(':')[4])) x['das_shared_2gram'] = df['word_shares'].apply(lambda x: float(x.split(':')[5])) x['das_cosine'] = df['word_shares'].apply(lambda x: float(x.split(':')[6])) x['das_words_hamming'] = df['word_shares'].apply(lambda x: float(x.split(':')[7])) x['das_diff_stops_r'] = np.abs(x['das_stops1_ratio'] - x['das_stops2_ratio']) x['das_len_q1'] = df['question1'].apply(lambda x: len(str(x))) x['das_len_q2'] = df['question2'].apply(lambda x: len(str(x))) x['das_diff_len'] = np.abs(x['das_len_q1'] - x['das_len_q2']) x['das_caps_count_q1'] = df['question1'].apply(lambda x:sum(1 for i in str(x) if i.isupper())) x['das_caps_count_q2'] = df['question2'].apply(lambda x:sum(1 for i in str(x) if i.isupper())) x['das_diff_caps'] = np.abs(x['das_caps_count_q1'] - x['das_caps_count_q2']) x['das_len_char_q1'] = df['question1'].apply(lambda x: len(str(x).replace(' ', ''))) x['das_len_char_q2'] = df['question2'].apply(lambda x: len(str(x).replace(' ', ''))) x['das_diff_len_char'] = np.abs(x['das_len_char_q1'] - x['das_len_char_q2']) x['das_len_word_q1'] = df['question1'].apply(lambda x: len(str(x).split())) x['das_len_word_q2'] = df['question2'].apply(lambda x: len(str(x).split())) x['das_diff_len_word'] = np.abs(x['das_len_word_q1'] - x['das_len_word_q2']) x['das_avg_word_len1'] = x['das_len_char_q1'] / x['das_len_word_q1'] x['das_avg_word_len2'] = x['das_len_char_q2'] / x['das_len_word_q2'] x['das_diff_avg_word'] = np.abs(x['das_avg_word_len1'] - x['das_avg_word_len2']) # x['exactly_same'] = (df['question1'] == df['question2']).astype(int) # x['duplicated'] = df.duplicated(['question1','question2']).astype(int) whq_words = ['how', 'what', 'which', 'who', 'where', 'when', 'why'] for whq in whq_words: add_word_count(x, df, whq) whq_columns_q1 = ['das_q1_' + whq for whq in whq_words] whq_columns_q2 = ['das_q2_' + whq for whq in whq_words] x['whq_count_q1'] = x[whq_columns_q1].sum(axis=1) x['whq_count_q2'] = x[whq_columns_q2].sum(axis=1) x['whq_count_diff'] = np.abs(x['whq_count_q1'] - x['whq_count_q2']) feature_names = list(x.columns.values) print("Features: {}".format(feature_names)) X_train = x[:df_train.shape[0]].values X_test = x[df_train.shape[0]:].values """ Explanation: Build features End of explanation """ project.save_features(X_train, X_test, feature_names, feature_list_id) """ Explanation: Save features End of explanation """
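Before handing the matrices to save_features, it can be worth a quick sanity check that they have the expected shapes and contain no NaNs or infinities, since several of the ratio features above involve divisions. The check below is a sketch using plain numpy and reuses X_train, X_test, feature_names, df_train, and df_test from the cells above; the helper name is arbitrary.

import numpy as np

def check_feature_matrix(name, X, expected_rows, feature_names):
    # Shape must be (number of question pairs, number of features)
    assert X.shape == (expected_rows, len(feature_names)), \
        '%s has shape %s, expected (%d, %d)' % (name, X.shape, expected_rows, len(feature_names))
    # Report any columns containing NaN or infinite values
    bad_cols = [feature_names[j] for j in range(X.shape[1])
                if not np.all(np.isfinite(X[:, j]))]
    print('%s: shape %s, non-finite columns: %s' % (name, X.shape, bad_cols or 'none'))

check_feature_matrix('X_train', X_train, len(df_train), feature_names)
check_feature_matrix('X_test', X_test, len(df_test), feature_names)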
Aggieyixin/cjc2016
code/16&17networkx.ipynb
mit
%matplotlib inline import networkx as nx import matplotlib.cm as cm import matplotlib.pyplot as plt import networkx as nx G=nx.Graph() # G = nx.DiGraph() # 有向网络 # 添加(孤立)节点 G.add_node("spam") # 添加节点和链接 G.add_edge(1,2) print(G.nodes()) print(G.edges()) # 绘制网络 nx.draw(G, with_labels = True) """ Explanation: 网络科学理论简介 网络科学:描述节点属性 王成军 wangchengjun@nju.edu.cn 计算传播网 http://computational-communication.com http://networkx.readthedocs.org/en/networkx-1.11/tutorial/ End of explanation """ G = nx.Graph() n = 0 with open ('/Users/chengjun/bigdata/www.dat.gz.txt') as f: for line in f: n += 1 if n % 10**4 == 0: flushPrint(n) x, y = line.rstrip().split(' ') G.add_edge(x,y) nx.info(G) """ Explanation: WWW Data download http://www3.nd.edu/~networks/resources.htm World-Wide-Web: [README] [DATA] Réka Albert, Hawoong Jeong and Albert-László Barabási: Diameter of the World Wide Web Nature 401, 130 (1999) [ PDF ] 作业: 下载www数据 构建networkx的网络对象g(提示:有向网络) 将www数据添加到g当中 计算网络中的节点数量和链接数量 End of explanation """ G = nx.karate_club_graph() clubs = [G.node[i]['club'] for i in G.nodes()] colors = [] for j in clubs: if j == 'Mr. Hi': colors.append('r') else: colors.append('g') nx.draw(G, with_labels = True, node_color = colors) G.node[1] # 节点1的属性 G.edge.keys()[:3] # 前三条边的id nx.info(G) G.nodes()[:10] G.edges()[:3] G.neighbors(1) nx.average_shortest_path_length(G) """ Explanation: 描述网络 nx.karate_club_graph 我们从karate_club_graph开始,探索网络的基本性质。 End of explanation """ nx.diameter(G)#返回图G的直径(最长最短路径的长度) """ Explanation: 网络直径 End of explanation """ nx.density(G) nodeNum = len(G.nodes()) edgeNum = len(G.edges()) 2.0*edgeNum/(nodeNum * (nodeNum - 1)) """ Explanation: 密度 End of explanation """ cc = nx.clustering(G) cc.items()[:5] plt.hist(cc.values(), bins = 15) plt.xlabel('$Clustering \, Coefficient, \, C$', fontsize = 20) plt.ylabel('$Frequency, \, F$', fontsize = 20) plt.show() """ Explanation: 作业: 计算www网络的网络密度 聚集系数 End of explanation """ # M. E. J. Newman, Mixing patterns in networks Physical Review E, 67 026126, 2003 nx.degree_assortativity_coefficient(G) #计算一个图的度匹配性。 Ge=nx.Graph() Ge.add_nodes_from([0,1],size=2) Ge.add_nodes_from([2,3],size=3) Ge.add_edges_from([(0,1),(2,3)]) print(nx.numeric_assortativity_coefficient(Ge,'size')) # plot degree correlation from collections import defaultdict import numpy as np l=defaultdict(list) g = nx.karate_club_graph() for i in g.nodes(): k = [] for j in g.neighbors(i): k.append(g.degree(j)) l[g.degree(i)].append(np.mean(k)) #l.append([g.degree(i),np.mean(k)]) x = l.keys() y = [np.mean(i) for i in l.values()] #x, y = np.array(l).T plt.plot(x, y, 'r-o', label = '$Karate\;Club$') plt.legend(loc=1,fontsize=10, numpoints=1) plt.xscale('log'); plt.yscale('log') plt.ylabel(r'$<knn(k)$> ', fontsize = 20) plt.xlabel('$k$', fontsize = 20) plt.show() """ Explanation: Spacing in Math Mode In a math environment, LaTeX ignores the spaces you type and puts in the spacing that it thinks is best. LaTeX formats mathematics the way it's done in mathematics texts. If you want different spacing, LaTeX provides the following four commands for use in math mode: \; - a thick space \: - a medium space \, - a thin space \! 
dc = nx.degree_centrality(G)
closeness = nx.closeness_centrality(G)
betweenness = nx.betweenness_centrality(G)

fig = plt.figure(figsize=(15, 4), facecolor='white')
ax = plt.subplot(1, 3, 1)
plt.hist(dc.values(), bins = 20)
plt.xlabel('$Degree \, Centrality$', fontsize = 20)
plt.ylabel('$Frequency, \, F$', fontsize = 20)
ax = plt.subplot(1, 3, 2)
plt.hist(closeness.values(), bins = 20)
plt.xlabel('$Closeness \, Centrality$', fontsize = 20)
ax = plt.subplot(1, 3, 3)
plt.hist(betweenness.values(), bins = 20)
plt.xlabel('$Betweenness \, Centrality$', fontsize = 20)
plt.tight_layout()
plt.show()

fig = plt.figure(figsize=(15, 8), facecolor='white')
for k in betweenness:
    plt.scatter(dc[k], closeness[k], s = betweenness[k]*1000)
    plt.text(dc[k], closeness[k]+0.02, str(k))
plt.xlabel('$Degree \, Centrality$', fontsize = 20)
plt.ylabel('$Closeness \, Centrality$', fontsize = 20)
plt.show()
"""
Explanation: Degree centrality measures
degree_centrality(G) # Compute the degree centrality for nodes.
in_degree_centrality(G) # Compute the in-degree centrality for nodes.
out_degree_centrality(G) # Compute the out-degree centrality for nodes.
closeness_centrality(G[, v, weighted_edges]) # Compute closeness centrality for nodes.
betweenness_centrality(G[, normalized, ...]) # Betweenness centrality measures.
End of explanation
"""
from collections import defaultdict
import numpy as np

def plotDegreeDistribution(G):
    degs = defaultdict(int)
    for i in G.degree().values():
        degs[i] += 1
    items = sorted(degs.items())
    x, y = np.array(items).T
    plt.plot(x, y, 'b-o')
    plt.xscale('log')
    plt.yscale('log')
    plt.legend(['Degree'])
    plt.xlabel('$K$', fontsize = 20)
    plt.ylabel('$P_K$', fontsize = 20)
    plt.title('$Degree\,Distribution$', fontsize = 20)
    plt.show()

G = nx.karate_club_graph()
plotDegreeDistribution(G)
"""
Explanation: Degree distribution
End of explanation
"""
import networkx as nx
import matplotlib.pyplot as plt

RG = nx.random_graphs.random_regular_graph(3, 200)  # generate a regular graph RG with 200 nodes, each with 3 neighbours
pos = nx.spectral_layout(RG)  # define a layout; the spectral layout is used here, other layouts appear below -- note how the figures differ
nx.draw(RG, pos, with_labels=False, node_size = 30)  # draw the regular graph; with_labels toggles node labels (ids), node_size sets the node diameter
plt.show()  # show the figure

plotDegreeDistribution(RG)
"""
Explanation: Introduction to Network Science
Network science: analyzing network structure
王成军 (Chengjun Wang) wangchengjun@nju.edu.cn
计算传播网 (Computational Communication) http://computational-communication.com
Regular networks
End of explanation
"""
import networkx as nx
import matplotlib.pyplot as plt

ER = nx.random_graphs.erdos_renyi_graph(200, 0.05)  # generate an ER random graph with 200 nodes and connection probability 0.05
pos = nx.shell_layout(ER)  # define a layout; the shell layout is used here
nx.draw(ER, pos, with_labels=False, node_size = 30)
plt.show()

plotDegreeDistribution(ER)
"""
Explanation: ER random networks
End of explanation
"""
import networkx as nx
import matplotlib.pyplot as plt

WS = nx.random_graphs.watts_strogatz_graph(200, 4, 0.3)  # generate a small-world network with 200 nodes, 4 nearest neighbours per node and rewiring probability 0.3
pos = nx.circular_layout(WS)  # define a layout; the circular layout is used here
nx.draw(WS, pos, with_labels=False, node_size = 30)  # draw the graph
plt.show()

plotDegreeDistribution(WS)

nx.diameter(WS)

cc = nx.clustering(WS)
plt.hist(cc.values(), bins = 10)
plt.xlabel('$Clustering \, Coefficient, \, C$', fontsize = 20)
plt.ylabel('$Frequency, \, F$', fontsize = 20)
plt.show()

import numpy as np
np.mean(cc.values())
"""
Explanation: Small-world networks
End of explanation
"""
import networkx as nx
import matplotlib.pyplot as plt

BA = nx.random_graphs.barabasi_albert_graph(200, 2)  # generate a BA scale-free network with n=200, m=2
pos = nx.spring_layout(BA)  # define a layout; the spring layout is used here
nx.draw(BA, pos, with_labels=False, node_size = 30)  # draw the graph
plt.show()
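# Added sketch (not in the original notebook): quick summary statistics for the
# 200-node BA graph generated above, for comparison with the regular, ER and WS
# graphs from the previous sections.
print(nx.info(BA))
print('average clustering coefficient: %s' % nx.average_clustering(BA))
print('average shortest path length: %s' % nx.average_shortest_path_length(BA))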
plotDegreeDistribution(BA)

BA = nx.random_graphs.barabasi_albert_graph(20000, 2)  # generate a BA scale-free network with n=20000, m=2
plotDegreeDistribution(BA)
"""
Explanation: BA (scale-free) networks
End of explanation
"""
Ns = [i*10 for i in [1, 10, 100, 1000]]
ds = []
for N in Ns:
    print(N)
    BA = nx.random_graphs.barabasi_albert_graph(N, 2)
    d = nx.average_shortest_path_length(BA)
    ds.append(d)
plt.plot(Ns, ds, 'r-o')
plt.xlabel('$N$', fontsize = 20)
plt.ylabel('$<d>$', fontsize = 20)
plt.xscale('log')
plt.show()
"""
Explanation: Homework:
Read Barabási (1999), "Internet: Diameter of the World-Wide Web", Nature 401
Plot the out-degree and in-degree distributions of the WWW network (see the sketch at the end of this notebook)
Generate BA-model networks with N nodes and degree exponent $\gamma$
Compute how the average path length d depends on the number of nodes
<img src = './img/diameter.png' width = 10000>
End of explanation
"""
# subgraph
G = nx.Graph()   # or DiGraph, MultiGraph, MultiDiGraph, etc
G.add_path([0,1,2,3])
H = G.subgraph([0,1,2])
G.edges(), H.edges()
"""
Explanation: More
http://computational-communication.com/wiki/index.php?title=Networkx
End of explanation
"""
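# Added sketch for the homework above (plotting the in- and out-degree
# distributions of a directed network). The WWW edge list is not assumed to be
# available here, so nx.scale_free_graph is used as a synthetic stand-in;
# to answer the homework, replace D with the DiGraph built from www.dat.
# The dict-returning degree API below follows networkx 1.x, matching the rest
# of this notebook.
import networkx as nx
import matplotlib.pyplot as plt
from collections import defaultdict
import numpy as np

def plotDirectedDegreeDistribution(degree_dict, style, label):
    degs = defaultdict(int)
    for d in degree_dict.values():
        degs[d] += 1
    items = sorted(kv for kv in degs.items() if kv[0] > 0)  # drop k=0 so the log-log axes can show the points
    k, count = np.array(items).T
    plt.plot(k, count, style, label=label)

D = nx.DiGraph(nx.scale_free_graph(10000))  # stand-in for the WWW network
plotDirectedDegreeDistribution(D.in_degree(), 'b-o', '$In \, degree$')
plotDirectedDegreeDistribution(D.out_degree(), 'r-s', '$Out \, degree$')
plt.xscale('log'); plt.yscale('log')
plt.xlabel('$k$', fontsize = 20)
plt.ylabel('$N_k$', fontsize = 20)
plt.legend()
plt.show()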
npo-poms/pyapi
changes_demo.ipynb
gpl-3.0
import sys
from datetime import datetime

import ijson

# `client` is assumed to be an already-configured POMS API client created
# earlier (see the pyapi package); it is not constructed in this excerpt.
objects = ijson.items(client.changes(stream=True, limit=100000), 'changes.item')
data = {}
"""
Explanation: Stream in the latest 100,000 changes and parse them incrementally with ijson.
End of explanation
"""
count = 0
print("Iterating all results, and collecting some data")
for o in objects:
    if count % 20000 == 0:
        sys.stdout.write("\n%05d" % count)
    if count % 1000 == 0:
        sys.stdout.write('.')
        sys.stdout.flush()
    count += 1
    if "media" in o:
        media = o["media"]
        if "sortDate" in media:
            sortDate = datetime.fromtimestamp(media["sortDate"] / 1e3)
            for broadcaster in media["broadcasters"]:
                bid = broadcaster["id"]
                if bid not in data:
                    data[bid] = []
                data[bid].append(sortDate)
"""
Explanation: Iterate over those changes and collect every sort date per broadcaster (this may take some time).
End of explanation
"""
sorted_by_value = sorted(data.items(), key=lambda kv: len(kv[1]), reverse=True)
for bid, dates in sorted_by_value:
    print("%-20s %d" % (bid, len(dates)))
"""
Explanation: Finally, print the broadcasters ordered by how many of the collected changes they account for.
End of explanation
"""
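# Added sketch (not part of the original notebook): per-broadcaster date range
# derived from the `data` dict built above -- the earliest and latest sortDate
# seen in this batch of changes, for the ten broadcasters with the most changes.
for bid, dates in sorted(data.items(), key=lambda kv: len(kv[1]), reverse=True)[:10]:
    print("%-20s %6d changes   %s .. %s" % (bid, len(dates), min(dates).date(), max(dates).date()))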