Dataset schema (per-row fields and summary statistics):

  column        type          summary
  ------------  ------------  -----------------------
  content       large_string  lengths 0 to 6.46M
  path          large_string  lengths 3 to 331
  license_type  large_string  2 values
  repo_name     large_string  lengths 5 to 125
  language      large_string  1 value
  is_vendor     bool          2 classes
  is_generated  bool          2 classes
  length_bytes  int64         values 4 to 6.46M
  extension     large_string  75 values
  text          string        lengths 0 to 6.46M
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/PrintMap.R
\name{PrintMap}
\alias{PrintMap}
\title{Output module: PrintMap}
\usage{
PrintMap(.model, .ras, plot = TRUE, points = TRUE, dir = NULL,
  filename = NULL, size = c(480, 480), res = 72, threshold = NULL,
  thresholdmethod = c("probability", "quantile", "falsepositive",
  "falsenegative"), ...)
}
\arguments{
\item{.model}{\strong{Internal parameter, do not use in the workflow function}. \code{.model} is a list of a data frame (\code{data}) and a model object (\code{model}). \code{.model} is passed automatically in workflow, combining data from the model module(s) and process module(s), to the output module(s) and should not be passed by the user.}

\item{.ras}{\strong{Internal parameter, do not use in the workflow function}. \code{.ras} is a raster layer, brick or stack object. \code{.ras} is passed automatically in workflow from the covariate module(s) to the output module(s) and should not be passed by the user.}

\item{plot}{If \code{TRUE} the plot will be displayed in the device.}

\item{points}{If \code{TRUE} the training points will be plotted over the prediction surface.}

\item{dir}{Directory where plots are saved. If both \code{dir} and \code{filename} are \code{NULL} (default) then plots are not saved.}

\item{filename}{The name to be given to the output as a character; do not include a file extension. If both \code{dir} and \code{filename} are \code{NULL} (default) then plots are not saved.}

\item{size}{A vector containing the width and height of the output figure when writing to a png file. Example: c(800, 600).}

\item{res}{The output resolution in ppi when writing to a png file.}

\item{threshold}{The threshold value used to convert probabilities to binary 1's and 0's. Default is NULL, i.e. not used.}

\item{thresholdmethod}{The method used to calculate the probability threshold. One of 'probability', 'quantile', 'falsepositive' or 'falsenegative'. See Details for specifics.}

\item{...}{Parameters passed to sp::spplot, useful for setting title and axis labels, e.g. \code{xlab = 'Axis Label', main = 'My Plot Title'}.}
}
\value{
A Raster object giving the probabilistic model predictions for each cell of the covariate raster layer.
}
\description{
Plot a map of the predicted surface.
}
\details{
For creating maps with only presence/absence values, there are a number of options for setting a threshold. The \code{threshold} argument sets the value for the threshold while \code{thresholdmethod} selects the method used to set the threshold.
\enumerate{
\item `probability' (default): any pixels with predicted probability (or relative probability, depending on the model) greater than the threshold are set to presence.
\item `quantile': \code{threshold} gives the proportion of pixels that should be absence. The threshold value is selected so that this is true.
\item `falsepositive': \code{threshold} sets the proportion of absence data points (not pixels) that should be misclassified as presence.
\item `falsenegative': \code{threshold} sets the proportion of presence data points (not pixels) that should be misclassified as absence.
}
}
\section{Version}{
1.1
}
\section{Date submitted}{
2016-04-02
}
\author{
ZOON Developers, James Campbell, \email{zoonproject@gmail.com}
}
\seealso{
Other output: \code{\link{Appify}}, \code{\link{InteractiveCovariateMap}}, \code{\link{InteractiveMap}}, \code{\link{NoOutput}}, \code{\link{PerformanceMeasures}}, \code{\link{PredictNewRasterMap}}, \code{\link{ROCcurve}}, \code{\link{ResponseCurveViz}}, \code{\link{ResponseCurve}}, \code{\link{SameTimePlaceMap}}, \code{\link{SeparatePA}}, \code{\link{SurfaceMap}}, \code{\link{VariableImportance}}
}
/man/PrintMap.Rd
no_license
AugustT/modules
R
false
true
4,075
rd
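A minimal usage sketch for the PrintMap output module documented above. It assumes the zoon package and its commonly used example modules (UKAnophelesPlumbeus, UKAir, OneHundredBackground, LogisticRegression), none of which appear in this file; the module names and the threshold value are illustrative only.

# Hedged sketch: run PrintMap inside a zoon workflow and binarise the map at a
# fixed probability cut-off. Module names are assumed zoon examples, not part
# of PrintMap.Rd itself.
library(zoon)

work1 <- workflow(
  occurrence = UKAnophelesPlumbeus,            # presence records (assumed example module)
  covariate  = UKAir,                          # covariate raster (assumed example module)
  process    = OneHundredBackground,           # background points (assumed example module)
  model      = LogisticRegression,
  output     = PrintMap(points = TRUE,
                        threshold = 0.5,                  # cut-off value
                        thresholdmethod = "probability")  # as documented above
)

With thresholdmethod = "quantile", the same call would instead pick the cut-off so that the requested proportion of pixels is classified as absence.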
#' Select/rename variables by name #' #' Choose or rename variables from a tbl. #' `select()` keeps only the variables you mention; `rename()` #' keeps all variables. #' #' These functions work by column index, not value; thus, an expression #' like `select(data.frame(x = 1:5, y = 10), z = x+1)` does not create a variable #' with values `2:6`. (In the current implementation, the expression `z = x+1` #' wouldn't do anything useful.) To calculate using column values, see #' [mutate()]/[transmute()]. #' #' @section Useful functions: #' As well as using existing functions like `:` and `c()`, there are #' a number of special functions that only work inside `select()`: #' #' * [starts_with()], [ends_with()], [contains()] #' * [matches()] #' * [num_range()] #' * [one_of()] #' * [everything()] #' * [group_cols()] #' #' To drop variables, use `-`. #' #' Note that except for `:`, `-` and `c()`, all complex expressions #' are evaluated outside the data frame context. This is to prevent #' accidental matching of data frame variables when you refer to #' variables from the calling context. #' #' @section Scoped selection and renaming: #' #' The three [scoped] variants of `select()` ([select_all()], #' [select_if()] and [select_at()]) and the three variants of #' `rename()` ([rename_all()], [rename_if()], [rename_at()]) make it #' easy to apply a renaming function to a selection of variables. #' #' @inheritParams filter #' @inheritSection filter Tidy data #' @param ... <[`tidy-select`][dplyr_tidy_select]> One or more unquoted #' expressions separated by commas. You can treat variable names like they #' are positions, so you can use expressions like `x:y` to select ranges of #' variables. #' #' Positive values select variables; negative values drop variables. #' If the first expression is negative, `select()` will automatically #' start with all variables. #' #' Use named arguments, e.g. `new_name = old_name`, to rename selected variables. #' See [select helpers][tidyselect::select_helpers] for more details and #' examples about tidyselect helpers such as `starts_with()`, `everything()`, ... #' @return An object of the same class as `.data`. #' @family single table verbs #' @export #' @examples #' iris <- as_tibble(iris) # so it prints a little nicer #' select(iris, starts_with("Petal")) #' select(iris, ends_with("Width")) #' #' # Move Species variable to the front #' select(iris, Species, everything()) #' #' # Move Sepal.Length variable to back #' # first select all variables except Sepal.Length, then re select Sepal.Length #' select(iris, -Sepal.Length, Sepal.Length) #' #' df <- as.data.frame(matrix(runif(100), nrow = 10)) #' df <- as_tibble(df[c(3, 4, 7, 1, 9, 8, 5, 2, 6, 10)]) #' select(df, V4:V6) #' select(df, num_range("V", 4:6)) #' #' # Drop variables with - #' select(iris, -starts_with("Petal")) #' #' # Select the grouping variables: #' starwars %>% group_by(gender) %>% select(group_cols()) #' #' #' # Renaming ----------------------------------------- #' # * select() keeps only the variables you specify #' select(iris, petal_length = Petal.Length) #' #' # * rename() keeps all variables #' rename(iris, petal_length = Petal.Length) #' #' # * select() can rename variables in a group #' select(iris, obs = starts_with('S')) select <- function(.data, ...) { UseMethod("select") } #' @export select.list <- function(.data, ...) { abort("`select()` doesn't handle lists.") } #' @rdname select #' @export rename <- function(.data, ...) 
{ UseMethod("rename") } #' @export select.grouped_df <- function(.data, ...) { vars <- tidyselect::vars_select(tbl_vars(.data), !!!enquos(...)) vars <- ensure_group_vars(vars, .data, notify = TRUE) select_impl(.data, vars) } #' @export select.data.frame <- function(.data, ...) { # Pass via splicing to avoid matching vars_select() arguments vars <- tidyselect::vars_select(tbl_vars(.data), !!!enquos(...)) select_impl(.data, vars) } #' @export rename.grouped_df <- function(.data, ...) { vars <- tidyselect::vars_rename(names(.data), ...) select_impl(.data, vars) } #' @export rename.data.frame <- function(.data, ...) { vars <- tidyselect::vars_rename(names(.data), !!!enquos(...)) select_impl(.data, vars) } # Helpers ----------------------------------------------------------------- select_impl <- function(.data, vars) { positions <- match(vars, names(.data)) if (any(test <- is.na(positions))) { wrong <- which(test)[1L] abort( glue( "invalid column index : {wrong} for variable: '{new}' = '{old}'", new = names(vars)[wrong], old = vars[wrong] ), .subclass = "dplyr_select_wrong_selection" ) } out <- set_names(.data[, positions, drop = FALSE], names(vars)) if (is_grouped_df(.data)) { # we might have to alter the names of the groups metadata groups <- attr(.data, "groups") group_vars <- c(vars[vars %in% names(groups)], .rows = ".rows") groups <- select_impl(groups, group_vars) out <- new_grouped_df(out, groups) } out } ensure_group_vars <- function(vars, data, notify = TRUE) { group_names <- group_vars(data) missing <- setdiff(group_names, vars) if (length(missing) > 0) { if (notify) { inform(glue( "Adding missing grouping variables: ", paste0("`", missing, "`", collapse = ", ") )) } vars <- c(set_names(missing, missing), vars) } vars }
/R/select.R
permissive
WbGuo96/dplyr
R
false
false
5,421
r
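A short illustration of the grouped-data behaviour implemented by select.grouped_df()/ensure_group_vars() in the file above: dropping a grouping column from the selection triggers the "Adding missing grouping variables" message and the column is retained anyway. Assumes dplyr is attached; the output shown in comments is indicative.

library(dplyr)

mtcars %>%
  group_by(cyl) %>%
  select(mpg, hp)
#> Adding missing grouping variables: `cyl`
#> A grouped tibble with columns cyl, mpg, hp -- cyl is kept automatically
#> because ensure_group_vars() prepends missing grouping variables.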
\name{boot.env} \alias{boot.env} \title{Bootstrap for env} \description{ Compute bootstrap standard error for the response envelope estimator. } \usage{ boot.env(X, Y, u, B) } \arguments{ \item{X}{Predictors. An n by p matrix, p is the number of predictors. The predictors can be univariate or multivariate, discrete or continuous.} \item{Y}{Multivariate responses. An n by r matrix, r is the number of responses and n is number of observations. The responses must be continuous variables.} \item{u}{Dimension of the envelope. An integer between 0 and r.} \item{B}{Number of bootstrap samples. A positive integer.} } \details{ This function computes the bootstrap standard errors for the regression coefficients in the envelope model by bootstrapping the residuals. } \value{The output is an r by p matrix. \item{bootse}{The standard error for elements in beta computed by bootstrap.} } \examples{ data(wheatprotein) X <- wheatprotein[, 8] Y <- wheatprotein[, 1:6] u <- u.env(X, Y) u B <- 100 bootse <- boot.env(X, Y, 1, B) bootse }
/man/boot.env.Rd
no_license
cran/Renvlp
R
false
false
1,048
rd
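The details above say boot.env() obtains standard errors by bootstrapping the residuals. Below is a conceptual sketch of that general idea for an ordinary multivariate linear model; it is not the Renvlp envelope estimator, and the function name and orientation of the output (p by r here, versus r by p from boot.env) are illustrative assumptions.

# Conceptual sketch (NOT Renvlp internals): residual bootstrap of the
# coefficient matrix in a multivariate linear model.
boot_se_sketch <- function(X, Y, B = 100) {
  X <- as.matrix(X); Y <- as.matrix(Y)
  fit         <- lm(Y ~ X)
  fitted_vals <- fitted(fit)
  resid_mat   <- resid(fit)
  n <- nrow(Y)
  betas <- replicate(B, {
    idx   <- sample(n, replace = TRUE)
    Ystar <- fitted_vals + resid_mat[idx, , drop = FALSE]  # resample residuals
    coef(lm(Ystar ~ X))[-1, , drop = FALSE]                # drop intercept row
  })
  apply(betas, c(1, 2), sd)   # bootstrap SE for each coefficient (p by r)
}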
#' @export run_inversion <- function(data_name, spectra_id, prospect_version = "D", ...) { data_path <- here::here("processed_data", paste0(data_name, ".rds")) stopifnot(file.exists(data_path)) datalist <- readRDS(data_path) metadata <- datalist$metadata %>% filter(spectra_id == !!spectra_id) stopifnot(nrow(metadata) == 1) obs_raw <- datalist$spectra[datalist$data_wl_inds, spectra_id] miss <- is.na(obs_raw) observed <- obs_raw[!miss] prospect_ind <- datalist$prospect_wl_inds[!miss] stopifnot(length(observed) == length(prospect_ind)) if (metadata$spectra_type == "reflectance") { rtm <- function(param) { prospect(param, prospect_version)[, 1] } } else if (metadata$spectra_type == "pseudo-absorbance") { rtm <- function(param) { pout <- prospect(param, prospect_version)[, 1] log10(1 / pout) } } else { stop("Unknown spectra type \"", metadata$spectra_type, "\"") } model <- function(params) rtm(params)[prospect_ind] test_mod <- model(defparam(paste0("prospect_", tolower(prospect_version)))) stopifnot(length(test_mod) == length(observed)) prior <- prospect_bt_prior(prospect_version) invert_bt( observed = observed, model = model, prior = prior, ... ) } #' @export process_samples <- function(samps) { samps_mcmc <- BayesianTools::getSample(samps, coda = TRUE) samps_burned <- PEcAn.assim.batch::autoburnin(samps_mcmc, method = "gelman.plot") samps_summary <- summary(samps_burned) bound <- cbind(samps_summary$statistics, samps_summary$quantiles) summary_df <- as_tibble(bound) %>% mutate(parameter = rownames(bound)) %>% select(parameter, everything()) list( summary_df = summary_df, samples = samps_burned ) }
/R/run_inversion.R
no_license
ashiklom/rspecan
R
false
false
1,759
r
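A hedged usage sketch for the two exported functions above. The dataset name and spectra ID are placeholders, not values from this repository; they would have to match an .rds file under processed_data/ and a row of its metadata table.

# Hypothetical call: processed_data/example_dataset.rds and "spec_001" are
# placeholders. Additional sampler settings could be forwarded to invert_bt()
# via ..., but none are assumed here.
samps <- run_inversion(
  data_name        = "example_dataset",
  spectra_id       = "spec_001",
  prospect_version = "D"
)
res <- process_samples(samps)
res$summary_df      # posterior mean, SD, and quantiles per PROSPECT parameter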
# Read in netCDF files to pull climate data for North America # This requires you to work off of external hard drive # This code pulls data for the calendar yar before and after the field sample date # because PNP requires full year of data to run # Ailene got a start on this: June 13, 2017 #If just looking at nam climate, do this: #tempval <- list() nafiles <- dir(climatedrive)[grep("princetonclimdata", dir(climatedrive))] #loop through each lat/long for which we want to calculate chilling and pull the climate data for that lat/long #the climate data that we are pulling is daily min and max temperature for(i in 1:nrow(nam)){ # i = 1 # find this location lo <- nam[i,"chill.long"] + 360 la <- nam[i,"chill.lat"] # make sure longitudes are negative, need to be for North America if(lo > 0) { lo = lo*-1 } ##REMOVE##yr <- as.numeric(nam[i,"year"])#i think we need to use this year for nam because it is referenced in the code below. #yr <-as.numeric(substr(nam[i,"fieldsample.date2"],1,4))#year for climate data yr<-as.numeric(substr(nam[i,"fieldsample.date2"],1,4)) # start and end days of the climate data we need to calculate chilling, for the focal lat/long. #This is in days since baseline date (sept 1) Set to GMT to avoid daylight savings insanity # using d$fieldsample.date2 (this is the same as fieldsampledate, but formatted as "%Y-%m-%d") #for pmp, we always need climate data to go until 12-31 fsday <- strptime(nam[i,"fieldsample.date2"],"%Y-%m-%d", tz = "GMT") endday <- strptime(paste(yr, "12-31", sep="-"),"%Y-%m-%d", tz = "GMT") if(nam[i,"fieldsample.date2"]!="" & as.numeric(substr(nam[i,"fieldsample.date2"],6,7))>=9){ stday <- strptime(paste(yr, "01-01", sep="-"),"%Y-%m-%d", tz="GMT") firstyr <- yr;# endyr<-yr+1;#month of last date of climate year endday <- strptime(paste(yr+1, "12-31", sep="-"),"%Y-%m-%d", tz = "GMT") pmpclim<-c(firstyr, endyr) }#If field sample date is after september 1, then we use the chilling from the current year, since sept 1 if(nam[i,"fieldsample.date2"]!="" & as.numeric(substr(nam[i,"fieldsample.date2"],6,7))<9){ stday <- strptime(paste(yr-1, "01-01", sep="-"),"%Y-%m-%d", tz="GMT")#always start getting date jan 1 firstyr <- yr-1;# use previous year's fall months of chilling (Sept-Dec) endyr<-yr;#month of last date of climate year endday <- strptime(paste(yr, "12-31", sep="-"),"%Y-%m-%d", tz = "GMT") pmpclim<-c(firstyr, endyr) }#If field sample date is before september 1, then we use the chilling from the previous year. if(la==38.988){# #For this one study (swartz81) we need two extra years of climate data (e.g. because of long chilling treatments) to correspond to the budburst dates and calculate accurate forcing. #we will use the latitude of this study to select it out and extend the end yr for climate data to pull #unique(nam$datasetID[nam$chill.lat== 38.988]) stday <- strptime(paste(yr-1, "01-01", sep="-"),"%Y-%m-%d", tz="GMT")#always start getting date jan 1 firstyr <- yr-1;# use previous year's fall months of chilling (Sept-Dec) secondyr<-yr;# thirdyr<-yr+1;# endyr<-yr+2;#month of last date of climate year endday <- strptime(paste(yr+2, "12-31", sep="-"),"%Y-%m-%d", tz = "GMT") pmpclim<-c(firstyr,secondyr,thirdyr,endyr) } # now loop over these year-month combo files and get temperature values for this date range. 
mins <- maxs <- vector() for(j in c(pmpclim)){ # j = 1956 filemax <- list.files(path=paste(climatedrive,nafiles, sep="/"), pattern=paste0("tmax",j), full.names = TRUE) filemin <- list.files(path=paste(climatedrive,nafiles, sep="/"), pattern=paste0("tmin",j), full.names = TRUE) if(length(nchar(filemax))==0){next} if(length(nchar(filemin))==0){next} jx <- nc_open(filemax) jn <- nc_open(filemin) diff.long.cell <- abs(jx$dim$lon$vals-as.numeric(lo)) diff.lat.cell <- abs(jx$dim$lat$vals-as.numeric(la)) long.cell <- which(diff.long.cell==min(diff.long.cell))[1] lat.cell <- which(diff.lat.cell==min(diff.lat.cell))[1] mintest<-ncvar_get(jn,'tmin',start=c(long.cell,lat.cell,1),count=c(1,1,-1))#check that the lat/long combinations has temperature data. #print(mintest);print(j) #if no temperature data for the focal lat/long, choose the next closest one. #the below code goes up to 0.1 degrees (~10km) away from the closest lat/long) if(is.na(unique(mintest))){#if there are no temp data for the selected lat/long, choose a different one diff.long.cell[which(diff.long.cell==min(diff.long.cell,na.rm=TRUE))[1]]<-NA diff.long.cell[which(diff.long.cell==min(diff.long.cell,na.rm=TRUE))[1]]<-NA long.cell <- which(diff.long.cell==min(diff.long.cell,na.rm=TRUE))[1] #select the closest longitude & latitude with climate data to longitude[i] lat.cell <- which(diff.lat.cell==min(diff.lat.cell,na.rm=TRUE))[1] mintest<-ncvar_get(jn,'tmin',start=c(long.cell,lat.cell,1),count=c(1,1,-1)) if(is.na(unique(mintest))){ diff.long.cell[which(diff.long.cell==min(diff.long.cell,na.rm=TRUE))[1]]<-NA diff.long.cell[which(diff.long.cell==min(diff.long.cell,na.rm=TRUE))[1]]<-NA long.cell <- which(diff.long.cell==min(diff.long.cell,na.rm=TRUE))[1] #select the closest longitude & latitude with climate data to longitude[i] lat.cell <- which(diff.lat.cell==min(diff.lat.cell,na.rm=TRUE))[1] mintest<-ncvar_get(jn,'tmin',start=c(long.cell,lat.cell,1),count=c(1,1,-1)) if(is.na(unique(mintest))){ diff.long.cell[which(diff.long.cell==min(diff.long.cell,na.rm=TRUE))[1]]<-NA diff.long.cell[which(diff.long.cell==min(diff.long.cell,na.rm=TRUE))[1]]<-NA long.cell <- which(diff.long.cell==min(diff.long.cell,na.rm=TRUE))[1] #select the closest longitude & latitude with climate data to longitude[i] lat.cell <- which(diff.lat.cell==min(diff.lat.cell,na.rm=TRUE))[1] }}} mins <- c(mins, (ncvar_get(jn,'tmin',start=c(long.cell,lat.cell,1),count=c(1,1,-1)))-273.15) maxs <- c(maxs, (ncvar_get(jx,'tmax',start=c(long.cell,lat.cell,1),count=c(1,1,-1)))-273.15) nc_close(jx) nc_close(jn) } #print(i);print(stday);print(endday) tempval[[as.character(nam[i,"ID_fieldsample.date2"])]] <- data.frame(Lat = la,Long = lo,Date = as.character(seq(stday, endday, by = "day")), Tmin = mins[1:length(seq(stday, endday, by = "day"))], Tmax =maxs[1:length(seq(stday, endday, by = "day"))])# } # If you want to (as Lizzie does) you can write out tempval, which is all the climate pulled in a list form save(tempval, file="output/dailyclim/fieldclimate_daily.RData") #(If you want to avoid connecting to the external hard drive, then start here) #load this .RData workspace) #load("output/dailyclim/fieldclimate_daily.RData") #dailytemp <- do.call("rbind", tempval) #dailytemp<-as.data.frame(cbind(row.names(dailytemp),dailytemp)) #colnames(dailytemp)[1]<-"ID_fieldsample.date2" #dailytemp2<-separate(data = dailytemp, col = ID_fieldsample.date2, into = c("datasetID", "lat","long","fieldsample.date2"), sep = "\\_") #row.names(dailytemp2)<-NULL 
#dailytemp3<-subset(dailytemp2,select=c(datasetID,lat,long,fieldsample.date2,Date,Tmin,Tmax)) #note: no climate data for boyer 1983-1984 stop("Not an error, just stopping here to say we're now done pulling daily climate data for North America!")
/analyses/bb_dailyclimate/source/pulldailyclimate_nam.R
no_license
lizzieinvancouver/ospree
R
false
false
7,536
r
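The nested if blocks in the script above step to the next-nearest longitude cell whenever the closest Princeton grid cell has no tmin data. A hedged refactor sketch of that lookup is below; it assumes an open ncdf4 handle with lon/lat dimensions and a 'tmin' variable, and steps one longitude index per try rather than two.

# Hedged refactor sketch of the nearest-cell-with-data fallback. `nc` is an
# open ncdf4 handle, as jx/jn are in the loop above.
library(ncdf4)

nearest_cell_with_data <- function(nc, lon, lat, max_tries = 3) {
  dlon <- abs(nc$dim$lon$vals - lon)
  dlat <- abs(nc$dim$lat$vals - lat)
  lat.cell <- which.min(dlat)
  for (k in seq_len(max_tries)) {
    long.cell <- which.min(dlon)   # which.min() skips NAs, so discarded cells are ignored
    tmin <- ncvar_get(nc, "tmin", start = c(long.cell, lat.cell, 1), count = c(1, 1, -1))
    if (!all(is.na(tmin))) return(list(long.cell = long.cell, lat.cell = lat.cell))
    dlon[long.cell] <- NA          # discard this longitude and try the next nearest
  }
  NULL   # no cell with data within max_tries steps
}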
# |----------------------------------------------------------------------------------| # | Project: Skin UVB SKH1 mouse model treated with UA/SFN | # | Script: Methyl-seq data analysis and visualization using DSS | # | Coordinator: Ran Yin, Renyi Wu | # | Author: Davit Sargsyan | # | Created: 03/17/2018 | # | Modified:04/05/2018, DS: changed hitmaps to donut plots; added more comparisons | # | 11/30/2018, DS: TIIA methyl-seq sample is WJ4 | # |----------------------------------------------------------------------------------| # sink(file = "tmp/log_skin_uvb_dna_v3.1.txt") date() # if (!requireNamespace("BiocManager", # quietly = TRUE)) # install.packages("BiocManager") # BiocManager::install("ChIPseeker") # BiocManager::install("TxDb.Mmusculus.UCSC.mm10.knownGene") # BiocManager::install("DSS") # BiocManager::install("org.Mm.eg.db") require(data.table) require(ggplot2) require(knitr) require(ChIPseeker) require(TxDb.Mmusculus.UCSC.mm10.knownGene) require(DSS) # Load and view raw counts (no annoation)---- dt01 <- fread("data/Renyi_Methylseq_12292017/combined_WJ_anno.csv") dt01 # NOTE: there are 14 rows representing mitochondrial DNA unique(dt01$chr) dt01[dt01$chr == "chrM",] # Annotate---- # NOTE: definition of promoter: # Source: http://www.sequenceontology.org/browser/current_svn/term/SO:0000167 peakAnno1 <- annotatePeak(peak = "data/Renyi_Methylseq_12292017/combined_WJ_anno.csv", tssRegion = c(-3000, 3000), TxDb = TxDb.Mmusculus.UCSC.mm10.knownGene, annoDb = "org.Mm.eg.db") head(peakAnno1@detailGenomicAnnotation) t1 <- peakAnno1@annoStat t1$Feature <- factor(t1$Feature, levels = as.character(t1$Feature[order(t1$Frequency, decreasing = TRUE)])) t1 p1 <- ggplot(t1, aes(x = rep(1, nrow(t1)), y = Frequency, fill = Feature)) + geom_bar(width = 1, stat = "identity", color = "black") + coord_polar("y", start = 0, direction = -1) + scale_x_continuous("", limits = c(-1.5, 1.5), expand = c(0, 0)) + ggtitle("Annotation by Region (%)") + theme(plot.title = element_text(hjust = 0.5), axis.title.x = element_blank(), axis.text.x = element_blank(), axis.ticks.x = element_blank(), axis.title.y = element_blank(), axis.text.y = element_blank(), axis.ticks.y = element_blank(), axis.line = element_blank(), panel.border = element_blank(), panel.grid.major = element_blank(), panel.grid.minor = element_blank()) p1 tiff(filename = "tmp/mes13_anno_by_reg.tiff", height = 5, width = 5, units = 'in', res = 1200, compression = "lzw+p") print(p1) graphics.off() # Make data table---- dt1 <- data.table(start = peakAnno1@anno@ranges@start, as.data.frame(peakAnno1@anno@elementMetadata@listData)) # Remove unmapped regions dt1 <- dt1[!is.na(dt1$SYMBOL == "NA"), ] # Removed 12 rows # Subset data: LG, HG and TIIA---- dt1 <- data.table(gene = dt1$SYMBOL, anno = dt1$annotation, geneId = dt1$geneId, chr = dt1$geneChr, pos = dt1$start, distanceToTSS = dt1$distanceToTSS, reg = NA, dt1[, CpG:WJ02.X], dt1[, WJ04.N:WJ04.X], geneName = dt1$GENENAME) dt1 # # Dispersion Shrinkage for Sequencing data (DSS)---- # # This is based on Wald test for beta-binomial distribution. # # Source: https://www.bioconductor.org/packages/release/bioc/vignettes/DSS/inst/doc/DSS.pdf # # The DM detection procedure implemented in DSS is based on a rigorous Wald test for betabinomial # # distributions. The test statistics depend on the biological variations (characterized # # by dispersion parameter) as well as the sequencing depth. 
An important part of the algorithm # # is the estimation of dispersion parameter, which is achieved through a shrinkage estimator # # based on a Bayesian hierarchical model [1]. An advantage of DSS is that the test can be # # performed even when there is no biological replicates. That’s because by smoothing, the # # neighboring CpG sites can be viewed as “pseudo-replicates", and the dispersion can still be # # estimated with reasonable precision. # Regions---- kable(data.table(table(substr(dt1$anno, 1, 9)))) # |V1 | N| # |:---------|-----:| # |3' UTR | 4549| # |5' UTR | 533| # |Distal In | 49714| # |Downstrea | 2183| # |Exon (ENS | 14335| # |Intron (E | 51484| # |Promoter | 94369| # Separate Promoter, Body and Downstream---- dt1$reg <- as.character(dt1$anno) # a. Promoter: up to 3kb upstream dt1$reg[substr(dt1$anno, 1, 8) == "Promoter"] <- "Promoter" # b. Body: exons and introns dt1$reg[substr(dt1$anno, 1, 4) %in% c("Exon", "Intr")] <- "Body" # NEW (11/01/2018): if annotated as promoter but beyond TSS, make it Body # Fixed by setting tssRegion = c(-3000, 0) in annotation # CHECK: dt1[dt1$reg == "Promoter" & dt1$distanceToTSS > 0,] # None # c. Downstream: Distal Intergenic and Downstream dt1$reg[substr(dt1$anno, 1, 4) %in% c("Dist", "Down")] <- "Downstream" dt1$reg <- factor(dt1$reg, levels = c("Promoter", "5' UTR", "Body", "3' UTR", "Downstream")) kable(data.table(table(dt1$reg))) # |V1 | N| # |:----------|-----:| # |Promoter | 94369| # |5' UTR | 533| # |Body | 65819| # |3' UTR | 4549| # |Downstream | 51897| # CpG distribution and coverage---- p2 <- ggplot(dt1, aes(x = CpG)) + facet_wrap(~ reg, scale = "free_y") + geom_histogram(color = "black", fill = "grey", binwidth = 5) + scale_x_continuous(name = "Number of CpG", breaks = c(3, seq(from = 5, to = 60, by = 5))) + coord_cartesian(xlim=c(3, 60)) + scale_y_continuous(name = "Counts") + ggtitle("Distribution of DMR by Number of CpG and Region") p2 tiff(filename = "tmp/mes13_CpG_by_reg_hist.tiff", height = 6, width = 9, units = 'in', res = 1200, compression = "lzw+p") print(p2) graphics.off() # Percent methylation---- tmp <- as.matrix(dt1[, WJ01.N:WJ04.X]) head(tmp) # Remove rows with all NAs ndx.keep <- rowSums(is.na(tmp)) < 6 sum(ndx.keep) # 211,128 out of 217,111 dt1 <- dt1[ndx.keep, ] tmp <- tmp[ndx.keep, ] dtN <- tmp[, seq(1, ncol(tmp) - 1, by = 2)] head(dtN) dtX <- tmp[, seq(2, ncol(tmp), by = 2)] head(dtX) # Add 0.5 to all NAs and zeros in meth. hits # NOTE: if there were no hits (N = NA or 0), the pct will be NA anyway dtX <- apply(dtX, 2, function(a) { a[is.na(a)] <- a[a == 0] <- 0.5 return(a) }) dtX pct <- dtX/dtN colnames(pct) <- substr(colnames(pct), 1, nchar(colnames(pct)) - 2) head(pct) # Remove rows with all zeros---- dim(pct[rowSums(pct) == 0, ]) dim(pct[is.na(rowSums(pct)), ]) dim(pct) ndx.keep <- rowSums(pct) != 0 & !is.na(rowSums(pct)) pct <- pct[ndx.keep, ] dt1 <- dt1[ndx.keep, ] dtN <- dtN[ndx.keep, ] dtX <- dtX[ndx.keep, ] dim(dtX) # 187,601 remaine # Hits per CpG average (i.e. 
vertical coverage)---- t1 <- apply(dtN, 2, function(a) { return(round(a/dt1$CpG, 1)) }) mu <- list() for (i in 1:ncol(t1)) { x1 <- aggregate(x = t1[, i], FUN = mean, by = list(dt1$reg)) x2 <- aggregate(x = t1[, i], FUN = mean, by = list(dt1$reg)) x3 <- merge(x1, x2, by = "Group.1") mu[[i]] <- data.table(reg = x3[, 1], mu = (x3[, 2] + x3[, 3])/2) } names(mu) <- unique(substr(colnames(t1), 1, 4)) t2 <- data.table(Region = mu$WJ01$reg, LG = mu$WJ01$mu, HG = mu$WJ02$mu, TIIA = mu$WJ04$mu) t2 # Average methylation per region per treatment/time mumth <- list() for (i in 1:ncol(pct)) { x1 <- aggregate(x = c(pct[, i], pct[, i]), FUN = mean, by = list(rep(dt1$reg, 2))) x2 <- aggregate(x = c(pct[, i], pct[, i]), FUN = sd, by = list(rep(dt1$reg, 2))) x3 <- aggregate(x = c(pct[, i], pct[, i]), FUN = length, by = list(rep(dt1$reg, 2))) tmp <- merge(x1, x2, by = "Group.1") tmp <- merge(tmp, x3, by = "Group.1") tmp$sem <- tmp$x.y/sqrt(tmp$x) mumth[[i]] <- data.table(rep(colnames(pct)[[i]], 5), tmp) colnames(mumth[[i]]) <- c("trt", "reg", "mu", "std", "n", "sem") } mumth <- do.call("rbind", mumth) mumth mumth$trt <- factor(mumth$trt, # levels = c("WJ02", # "WJ01", # "WJ05"), levels = c("WJ02", "WJ01", "WJ04"), labels = c("HG", "LG", "TIIA")) mumth$`Methylation (%)` <- 100*mumth$mu mumth$`SEM (%)` <- 100*mumth$sem p1 <- ggplot(mumth, aes(x = reg, y = `Methylation (%)`, group = trt, fill = trt)) + geom_bar(position = position_dodge(), stat="identity", color = "black") + scale_x_discrete("Region") + scale_y_continuous(limits = c(0, 100)) + scale_fill_discrete("Treatment") + ggtitle("Percent of Methylated CpG by Region") + theme(plot.title = element_text(hjust = 0.5), axis.text.x = element_text(angle = 45, hjust = 1)) p1 tiff(filename = "tmp/mes13_avg_methyl_by_reg.tiff", height = 6, width = 7, units = 'in', res = 1200, compression = "lzw+p") print(p1) graphics.off() # DNA vs. RNA---- pctMeth <- data.table(gene = dt1$gene, anno = dt1$anno, pct) pctMeth$`HG-LG DNA` <- 100*(pctMeth$WJ02 - pctMeth$WJ01) pctMeth$`TIIA-HG DNA` <- 100*(pctMeth$WJ04 - pctMeth$WJ02) pctMeth # Load RNA DiffExp---- # NOTE: produced by mes13_rnaseq_DEGseq_TIIA_v2.R script on 04/30/2018! 
# expRNA <- fread("data/mes13_tiia_genes_q-0.5_log2-0.3.csv") expRNA <- fread("data/mes13_tiia_rnaseq_degseq_genes_q-0.5_log2-0.3.csv") colnames(expRNA)[1] <- "gene" rna_dna <- merge(pctMeth, expRNA, by = "gene") rna_dna # Separate regions--- rna_dna$reg <- as.character(rna_dna$anno) rna_dna$reg[substr(rna_dna$anno, 1, 8) == "Promoter"] <- "Promoter" rna_dna$reg[substr(rna_dna$anno, 1, 4) %in% c("Exon", "Intr")] <- "Body" rna_dna$reg[substr(rna_dna$anno, 1, 4) %in% c("Dist", "Down")] <- "Downstream" rna_dna$reg <- factor(rna_dna$reg, levels = c("Promoter", "5' UTR", "Body", "3' UTR", "Downstream")) kable(data.table(table(dt1$reg))) # |V1 | N| # |:----------|-----:| # |Promoter | 85752| # |5' UTR | 489| # |Body | 54716| # |3' UTR | 3917| # |Downstream | 42727| rna_dna[rna_dna$gene == "Nmu",] g1 <- rna_dna[rna_dna$`HG-LG DNA` >= 10 & rna_dna$`HG-LG` <= -0.3 & reg == "Promoter"] g1 length(unique(g1$gene)) g2 <- rna_dna[rna_dna$`HG-LG DNA` <= -10 & rna_dna$`HG-LG` >= 0.3 & reg == "Promoter"] g2 length(unique(g2$gene)) g3 <- rna_dna[rna_dna$`TIIA-HG DNA` >= 10 & rna_dna$`TIIA-HG` <= -0.3 & reg == "Promoter"] g3 length(unique(g3$gene)) g4 <- rna_dna[rna_dna$`TIIA-HG DNA` <= -10 & rna_dna$`TIIA-HG` >= 0.3 & reg == "Promoter"] g4 length(unique(g4$gene)) write.csv(g1, file = "tmp/dna.up_rna.dn_hg-lg.csv", row.names = FALSE) write.csv(g2, file = "tmp/dna.dn_rna.up_hg-lg.csv", row.names = FALSE) write.csv(g3, file = "tmp/dna.up_rna.dn_tiia-hg.csv", row.names = FALSE) write.csv(g4, file = "tmp/dna.dn_rna.up_tiia-hg.csv", row.names = FALSE) # HG vs. LG Starburst---- tmp1 <- unique(rna_dna[gene %in% unique(g1$gene) & reg == "Promoter", c("gene", "reg", "HG-LG")]) setkey(tmp1, `HG-LG`) tmp1$ypos <- seq(from = min(rna_dna$`HG-LG`), to = max(rna_dna$`HG-LG`), length.out = length(tmp1$gene)) tmp1 tmp2 <- unique(rna_dna[gene %in% c(unique(g2$gene), "Nmu") & reg == "Promoter", c("gene", "reg", "HG-LG")]) setkey(tmp2, `HG-LG`) tmp2$ypos <- seq(from = min(rna_dna$`HG-LG`), to = max(rna_dna$`HG-LG`), length.out = length(tmp2$gene)) tmp2 rna_dna$a <- 0.7 rna_dna$a[rna_dna$`HG-LG DNA` > -10 & rna_dna$`HG-LG DNA` < 10] <- 0.3 p1 <- ggplot(data = rna_dna, aes(x = `HG-LG DNA`, y = `HG-LG`, fill = reg)) + geom_segment(data = tmp1, aes(x = -35, y = `HG-LG`, xend = 25, yend = `HG-LG`), linetype = "dotted") + geom_segment(data = tmp1, aes(x = 25, y = `HG-LG`, xend = 35, yend = ypos)) + geom_text(data = tmp1, aes(x = 35, y = ypos, label = gene), size = 4, hjust = 0) + geom_segment(data = tmp2, aes(x = -35, y = `HG-LG`, xend = 25, yend = `HG-LG`), linetype = "dotted") + geom_segment(data = tmp2, aes(x = -45, y = ypos, xend = -35, yend = `HG-LG`)) + geom_text(data = tmp2, aes(x = -45, y = ypos, label = gene), size = 4, hjust = 1) + geom_hline(yintercept = c(-0.3, 0.3), linetype = "dashed") + geom_vline(xintercept = c(-10, 10), linetype = "dashed") + geom_point(aes(alpha = a), size = 2, shape = 21) + scale_x_continuous("DNA Methylation Difference(%)", breaks = seq(-35, 35, 10), limits = c(-50, 45)) + scale_y_continuous("RNA Expression Difference (log2)") + ggtitle("HG - LG") + scale_fill_manual("Region", values = c("Promoter" = "green", "5' UTR" = "white", "Body" = "blue", "3' UTR" = "grey", "Downstream" = "red")) + scale_alpha_continuous(guide = FALSE) + theme_bw() + theme(plot.title = element_text(hjust = 0.5), legend.position = "top", panel.border = element_blank(), panel.grid.major = element_blank(), panel.grid.minor = element_blank(), axis.line = element_line(colour = "black")) p1 tiff(filename = 
"tmp/starburst_hg-lg.tiff", height = 8, width = 8, units = 'in', res = 300, compression = "lzw+p") print(p1) graphics.off() # TIIA vs. HG Starburst---- tmp3 <- unique(rna_dna[gene %in% c(unique(g3$gene), "Fgl2") & reg == "Promoter", c("gene", "reg", "TIIA-HG")]) setkey(tmp3, `TIIA-HG`) tmp3$ypos <- seq(from = min(rna_dna$`TIIA-HG`), to = max(rna_dna$`TIIA-HG`), length.out = length(tmp3$gene)) tmp3 tmp4 <- unique(rna_dna[gene %in% unique(g4$gene) & reg == "Promoter", c("gene", "reg", "TIIA-HG")]) setkey(tmp4, `TIIA-HG`) tmp4$ypos <- seq(from = min(rna_dna$`TIIA-HG`), to = max(rna_dna$`TIIA-HG`), length.out = length(tmp4$gene)) tmp4 rna_dna$a <- 0.7 rna_dna$a[rna_dna$`TIIA-HG DNA` > -10 & rna_dna$`TIIA-HG DNA` < 10] <- 0.3 p2 <- ggplot(data = rna_dna, aes(x = `TIIA-HG DNA`, y = `TIIA-HG`, fill = reg)) + geom_segment(data = tmp3, aes(x = -35, y = `TIIA-HG`, xend = 25, yend = `TIIA-HG`), linetype = "dotted") + geom_segment(data = tmp3, aes(x = 25, y = `TIIA-HG`, xend = 35, yend = ypos)) + geom_text(data = tmp3, aes(x = 35, y = ypos, label = gene), size = 4, hjust = 0) + geom_segment(data = tmp4, aes(x = -35, y = `TIIA-HG`, xend = 25, yend = `TIIA-HG`), linetype = "dotted") + geom_segment(data = tmp4, aes(x = -45, y = ypos, xend = -35, yend = `TIIA-HG`)) + geom_text(data = tmp4, aes(x = -45, y = ypos, label = gene), size = 4, hjust = 1) + geom_hline(yintercept = c(-0.3, 0.3), linetype = "dashed") + geom_vline(xintercept = c(-10, 10), linetype = "dashed") + geom_point(aes(alpha = a), size = 2, shape = 21) + scale_x_continuous("DNA Methylation Difference(%)", breaks = seq(-35, 35, 10), limits = c(-50, 45)) + scale_y_continuous("RNA Expression Difference (log2)") + ggtitle("TIIA - HG") + scale_fill_manual("Region", values = c("Promoter" = "green", "5' UTR" = "white", "Body" = "blue", "3' UTR" = "grey", "Downstream" = "red")) + scale_alpha_continuous(guide = FALSE) + theme_bw() + theme(plot.title = element_text(hjust = 0.5), legend.position = "top", panel.border = element_blank(), panel.grid.major = element_blank(), panel.grid.minor = element_blank(), axis.line = element_line(colour = "black")) p2 tiff(filename = "tmp/starburst_tiia-hg.tiff", height = 8, width = 8, units = 'in', res = 300, compression = "lzw+p") print(p2) graphics.off() # Detailes on genes with significant expression differences---- # Genes with RNA down/DNA up---- gene.keep <- c("Fgl2", "Gulo", "Kcnip2", "Nmu") gene.keep # DNA---- dna <- data.table(gene = dt1$gene, CpG = dt1$CpG, annotation = as.character(dt1$anno), distanceToTSS = dt1$distanceToTSS, pct) dna <- dna[gene %in% gene.keep, ] dna class(dna$distanceToTSS) # Differences dna$`HG-LG` <- 100*(dna$WJ02 - dna$WJ01) dna$`TIIA-HG` <- 100*(dna$WJ04 - dna$WJ02) dna$reg <- "5 to 10" dna$reg[dna$CpG > 10] <- "11 to 20" dna$reg[dna$CpG > 20] <- ">20" dna$reg <- factor(dna$reg, levels = c("5 to 10", "11 to 20", ">20")) dna setkey(dna, gene, distanceToTSS) dna[, distRank := rank(distanceToTSS), by = gene] # Long data dt3 <- melt.data.table(data = dna, id.vars = c("gene", "reg", "annotation", "distanceToTSS", "distRank"), measure.vars = c("HG-LG", "TIIA-HG"), variable.name = "Treatment", value.name = "DNA") dt3$Treatment <- as.character(dt3$Treatment) dt3$annotation[substr(dt3$annotation, 1, 4) == "Exon"] <- "Exon" dt3$annotation[substr(dt3$annotation, 1, 6) == "Intron"] <- "Intron" dt3$annotation[substr(dt3$annotation, 1, 8) == "Promoter"] <- "Promoter" dt3$annotation[substr(dt3$annotation, 1, 4) == "Down"] <- "Downstream" dt3$annotation <- factor(dt3$annotation) # RNA # RNA data 
Long format---- dt2 <- melt.data.table(data = expRNA, id.vars = "gene", measure.vars = c("HG-LG", "TIIA-HG"), variable.name = "Treatment", value.name = "RNA") dt2$Treatment <- as.character(dt2$Treatment) dt2 # Merge DNA with RNA---- dt3 <- merge(dt3, dt2, by = c("gene", "Treatment")) dt3 # Isolate genes---- gX <- unique(dt3$gene) dna.gX <- dt3[dt3$gene %in% gX, ] dna.gX$y0 <- 0 dna.gX$Treatment <- paste(dna.gX$Treatment, " (RNA = ", round(dna.gX$RNA, 3), ")", sep = "") p1 <- ggplot(dna.gX, aes(x = distRank, y = DNA)) + facet_wrap(.~ gene + Treatment, scales = "free_x", ncol = 4) + geom_rect(aes(xmin = -Inf, xmax = Inf, ymin = -Inf, ymax = -10), fill = "pink") + geom_rect(aes(xmin = -Inf, xmax = Inf, ymin = 10, ymax = Inf), fill = "lightgreen") + geom_hline(yintercept = 0) + geom_hline(yintercept = c(10, -10), linetype = "dashed") + geom_segment(aes(x = distRank, y = y0, xend = distRank, yend = DNA)) + geom_point(aes(x = distRank, y = DNA, fill = annotation, size = reg), shape = 21) + # scale_x_continuous("Distance from TSS", # breaks = dna.gX$distRank, # labels = dna.gX$distanceToTSS) + scale_x_continuous("Distance from TSS") + scale_y_continuous("% Methylation") + ggtitle(paste("Gene:", gX)) + scale_fill_manual("Region", values = c("Distal Intergenic" = "purple", "Exon" = "blue", "Intron" = "white", "Promoter" = "brown", "3' UTR" = "black", "5' UTR" = "yellow", "Downstream" = "orange")) + scale_size_manual("Number of CpG-s", values = c("5 to 10" = 5, "11 to 20" = 6, ">20" = 7)) + guides(fill = guide_legend(override.aes = list(size = 7))) + theme(plot.title = element_text(hjust = 0.5), legend.position = "top", axis.text.x=element_blank(), axis.ticks.x=element_blank()) p1 tiff(filename = "tmp/lollipops.tiff", height = 8, width = 12, units = 'in', res = 1200, compression = "lzw+p") print(p1) graphics.off() # sessionInfo() # sink()
/source/mes13_methylseq_DSS_TIIA_v3.2.R
no_license
KongLabRUSP/mes13
R
false
false
25,006
r
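The record above (mes13_methylseq_DSS_TIIA_v3.2.R) analyzes methyl-seq counts with the Bioconductor DSS package, but the excerpt ends with the plotting code rather than the differential-methylation test itself. For orientation only, a minimal DSS call typically looks like the sketch below; the per-sample objects, group labels and thresholds are hypothetical and not taken from the script.

library(DSS)

# Hypothetical per-sample data frames with columns chr, pos, N (total reads), X (methylated reads)
BSobj <- makeBSseqData(list(meth_LG, meth_HG), c("LG", "HG"))

# Wald test on beta-binomial counts; smoothing lets neighboring CpGs act as pseudo-replicates,
# which is what allows the test to run without biological replicates
dml_test <- DMLtest(BSobj, group1 = "LG", group2 = "HG", smoothing = TRUE)

# Call differentially methylated loci and regions at illustrative thresholds
dmls <- callDML(dml_test, p.threshold = 0.001)
dmrs <- callDMR(dml_test, p.threshold = 0.01)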
# Source all R files from the function and processing directories
l_fun <- list.files(settings$sourcepath)
for (i in 1:length(l_fun)) source(paste0(settings$sourcepath, l_fun[i]))

l_proc <- list.files(paste0(getwd(), "/app/processing/"))
for (i in 1:length(l_proc)) source(paste0(getwd(), "/app/processing/", l_proc[i]))

rm(i, l_fun, l_proc)

# Attach the packages used by the dashboard UI
library(plotly)
library(shinyWidgets)
library(shinydashboard)
library(dashboardthemes)
library(shinydashboardPlus)
library(shinyAce)

# Verify the remaining required packages via the project's check.packages() helper
check.packages(c("shiny", "shinythemes", "shinyWidgets", "shinycustomloader",
                 "shinyjs", "rmarkdown", "data.table", "stringr", "parallel",
                 "stats", "plotly", "DT"))
/app/global.R
permissive
ClementRivet/CeremApp
R
false
false
785
r
# Compute Multiples library(dplyr) setwd("C:/Users/user/Disk Google/Disertace/Analysis/Survival/Data") source("01_import_albertina.R") data01 <- read.table("multiples_data_surv.txt", sep=";", header=TRUE) data02 <- import_albertina("data_glmm_surv.csv") data03 <- data02$data data03$year <- substr(data03$DAT_OD, 1, 4) data <- merge(x=data01, y=data03, by.x=c("ICO","year","AKTIVACELK"), by.y=c("ICO","year","AKTIVACELK")) #data[which(duplicated(data)), ] data04.1 <- data %>% select(-DAT_OD, -DAT_DO, -TZPZ, -TZPVVAS) data04 <- data04.1[-which(duplicated(data04.1)), ] # Variables # Assume that NA means 0. data04$Z <- ifelse(is.na(data04$Z), 0, data04$Z) data04$OBEZNAA <- ifelse(is.na(data04$OBEZNAA), 0, data04$OBEZNAA) data04$VLASTNIJM <- ifelse(is.na(data04$VLASTNIJM), 0, data04$VLASTNIJM) data04$HVML <- ifelse(is.na(data04$HVML), 0, data04$HVML) data04$KZ <- ifelse(is.na(data04$KZ), 0, data04$KZ) data04$KBU <- ifelse(is.na(data04$KBU), 0, data04$KBU) data04$ZCPIMM <- ifelse(is.na(data04$ZCPIMM), 0, data04$ZCPIMM) #===== Compute multiples - mine ===== acid.test <- with(data04, (OBEZNAA-Z)/(KZ+KBU) ) debt.ratio <- with(data04, (AKTIVACELK-VLASTNIJM)/AKTIVACELK) asset.turn <- with(data04, (sales) / (AKTIVACELK) ) returns <- with(data04, (PROVHOSPV)/(AKTIVACELK) ) #======= Outliers and Missing Values ====== source("compute_hpd_data.R") data.multiples <- data.frame(acid.test, debt.ratio, asset.turn, returns) data.ident <- data04[ ,c("ICO", "year")] data.win <- data.hpd(data.multiples)$data # round(data.hpd(data.multiples)$limits, 3) data_clean <- data.frame(data.ident, data.win) defaults <- data03_defaults[, c("ICO","year","def", "class")] str(data03_defaults) data.final <- merge(x=defaults, y=data_clean, by.x=c("ICO", "year"), by.y=c("ICO", "year") ) data.final %>% group_by(year,def) %>% summarise(count=n() ) summary(data.final) write.table(x=data.final, file="final_data.txt", sep=";", row.names=FALSE) #========== OLD WAY ================ # acid.test acid.test <- ifelse(acid.test>5 | (data04$KZ+data04$KBU <0), 5, acid.test) #boxplot(acid.test) data04[which(is.infinite(acid.test)), ] #===== Compute multiples - Altman ===== x1 <- with(data04, (OBEZNAA-KZ+KBU)/(AKTIVACELK)) x2 <- with(data04, HVML/AKTIVACELK) x3 <- with(data04, ebit/AKTIVACELK) x4 <- with(data04, (sales) / (AKTIVACELK) ) #===================================================== # Check for correspoding values --- #--- str(data01) # from survival analysis 2620 str(data03) # 2871 from new Albertina query str(data) # after join 2624 str(data04) # 2620 first <- data01 %>% group_by(ICO) %>% summarise(count = n()) last <- data04 %>% group_by(ICO) %>% summarise(count = n()) nrow(first);nrow(last) check <- data.frame(cbind(first, last)) check$check <- with(check, count - count.1) #===================== #---------------- summary(data04) table() show <- t(head(sort(apply(data04, 1, function(x){sum(as.numeric(is.na(x)))}), decreasing=TRUE ), 20)) data04[as.numeric(colnames(show)), ] head(data04[which(is.na(data04$ebit)), ]) data04 %>% filter(ICO==24747629) data01 %>% filter(ICO==25560590) data01$ICO #---Manually remove: remove_ico <- c(24747629, )
/Data/05_multiples_surv.r
no_license
luboRprojects/Thesis
R
false
false
3,200
r
library(MASS) library(ggplot2) library(ggfortify) library(leaps) library(reshape2) library(plyr) df = read.csv('ahs.csv', na.strings='') df = subset(df, select=c('NUNIT2', 'ROOMS', 'BEDRMS', 'size', 'vintage', 'heatingtype', 'heatingfuel', 'actype', 'ZINC2', 'POOR', 'WEIGHT', 'smsa', 'cmsa', 'metro3', 'division', 'location')) df$NUNIT2 = as.numeric(gsub("'", "", df$NUNIT2)) df = rename(df, c('ZINC2'='income', 'POOR'='fpl')) x.vars.con = c() y.vars.con = c('income') # y.vars.con = c('fpl') x.vars.cat = c('vintage', 'size', 'heatingtype', 'heatingfuel', 'actype', 'division') # x.vars.cat = c('vintage', 'size', 'heatingtype', 'heatingfuel', 'actype', 'smsa') # x.vars.cat = c('vintage') # 0.030 # x.vars.cat = c('size') # 0.105 # x.vars.cat = c('heatingtype') # 0.019 # x.vars.cat = c('heatingfuel') # 0.009 # x.vars.cat = c('actype') # 0.014 # x.vars.cat = c('division') # 0.012 # x.vars.cat = c('smsa') # 0.023 y.vars.cat = c() df$values = 'actual' df[c(x.vars.cat, y.vars.cat)] = lapply(df[c(x.vars.cat, y.vars.cat)], factor) # apply factor to each of the categorical vars df = na.omit(df) # this removes rows with at least one NA df = df[df$size!='Blank', ] dep_vars = c(y.vars.con, y.vars.cat) indep_vars = c(x.vars.con, x.vars.cat) # change the reference factors df$vintage = relevel(df$vintage, ref='<1950') # income, division # df$vintage = relevel(df$vintage, ref='1960s') # income, smsa df$actype = relevel(df$actype, ref='Room') df$heatingtype = relevel(df$heatingtype, ref='Cooking stove') df$division = relevel(df$division, ref='South Atlantic - East South Central') # df$smsa = relevel(df$smsa, ref='Hartford, CT') # FIRST PASS attach(df) df.lm1 = lm(paste(dep_vars, paste(indep_vars, collapse=' + '), sep=' ~ '), weights=WEIGHT, data=df, x=T) detach(df) summary(df.lm1) table = as.data.frame.matrix(summary(df.lm1)$coefficients) table = table[order(table[['Pr(>|t|)']]), ] table[['Pr(>|t|)']] = formatC(table[['Pr(>|t|)']], format='e', digits=5) table[['Estimate']] = round(table[['Estimate']], 5) table[['Std. Error']] = round(table[['Std. Error']], 5) table[['t value']] = round(table[['t value']], 5) write.csv(table, 'lm1.csv') # write out first pass to csv write.csv(data.frame("R^2"=summary(df.lm1)$r.squared[1], "Adj-R^2"=summary(df.lm1)$adj.r.squared[1]), "stat1.csv", row.names=F) ### sig_indep_vars_factors = rownames(data.frame(summary(df.lm1)$coefficients)[data.frame(summary(df.lm1)$coefficients)$'Pr...t..' <= 0.05, ]) # remove insignificant vars sig_indep_vars_factors = sig_indep_vars_factors[!sig_indep_vars_factors %in% c('(Intercept)')] sig_indep_vars = c() for (x in indep_vars) { for (y in sig_indep_vars_factors) { if (grepl(x, y)) { if (!(x %in% sig_indep_vars)) { sig_indep_vars = c(sig_indep_vars, x) } } } } # SECOND PASS attach(df) df.lm2 = lm(paste(dep_vars, paste(sig_indep_vars, collapse=' + '), sep=' ~ '), weights=WEIGHT, data=df, x=T) detach(df) summary(df.lm2) table = as.data.frame.matrix(summary(df.lm2)$coefficients) table = table[order(table[['Pr(>|t|)']]), ] table[['Pr(>|t|)']] = formatC(table[['Pr(>|t|)']], format='e', digits=5) table[['Estimate']] = round(table[['Estimate']], 5) table[['Std. Error']] = round(table[['Std. 
Error']], 5) table[['t value']] = round(table[['t value']], 5) write.csv(table, 'lm2.csv') # write out second pass to csv write.csv(data.frame("R^2"=summary(df.lm2)$r.squared[1], "Adj-R^2"=summary(df.lm2)$adj.r.squared[1]), "stat2.csv", row.names=F) ### stop() df2 = df df2$values = 'predict' df2[[y.vars.con[[1]]]] = predict(df.lm2, newdata=subset(df2, select=sig_indep_vars)) # this is the same as the fitted values counts = c(sum(df$WEIGHT), sum(df2$WEIGHT)) labels = paste(c('actual', 'predict'), ', n = ', round(counts), sep='') p = ggplot(NULL, aes_string(x=y.vars.con[[1]], colour='values')) + geom_density(data=df2) + geom_density(data=df) + scale_colour_discrete(name='model', labels=labels) ggsave(p, file='dist.png', width=14) p = autoplot(df.lm2, label.size=3) ggsave(p, file='stat.png', width=14) for (x in sig_indep_vars) { lvls = levels(as.factor(df2[[x]])) counts = aggregate(df2$WEIGHT, by=list(bin=df2[[x]]), FUN=sum)$x labels = paste(lvls, ', n = ', round(counts), sep='') p = ggplot(df2, aes_string(x=y.vars.con[[1]])) + geom_density(aes_string(colour=x)) + scale_colour_discrete(name=x, labels=labels) ggsave(p, file=paste(x,'png',sep='_pre.'), width=14) lvls = levels(as.factor(df[[x]])) counts = aggregate(df$WEIGHT, by=list(bin=df[[x]]), FUN=sum)$x labels = paste(lvls, ', n = ', round(counts), sep='') q = ggplot(df, aes_string(x=y.vars.con[[1]])) + geom_density(aes_string(colour=x)) + scale_colour_discrete(name=x, labels=labels) ggsave(q, file=paste(x,'png',sep='_act.'), width=14) } stop() # size and vintage sizes_and_vintages = expand.grid(levels(df$size), levels(df$vintage)) sizes_and_vintages = rename(sizes_and_vintages, c('Var1'='size', 'Var2'='vintage')) sizes_and_vintages$income = predict(df.lm2, newdata=sizes_and_vintages) write.csv(sizes_and_vintages, 'income_estimates.csv', row.names=F) for (vintage in levels(as.factor(df$vintage))){ temp = df[df$vintage==vintage, ] temp2 = df2[df2$vintage==vintage, ] temp$size_and_vintage = paste(temp$size, temp$vintage) temp2$size_and_vintage = paste(temp2$size, temp2$vintage) lvls = levels(as.factor(temp2$size_and_vintage)) counts = aggregate(temp2$WEIGHT, by=list(bin=temp2$size_and_vintage), FUN=sum)$x labels = paste(lvls, ', n = ', round(counts), sep='') p = ggplot(temp2, aes_string(x=y.vars.con[[1]])) + geom_density(aes(colour=size_and_vintage)) + scale_colour_discrete(name='size_and_vintage', labels=labels) ggsave(p, file=paste(gsub('<', '', vintage),'png',sep='_pre.'), width=14) lvls = levels(as.factor(temp$size_and_vintage)) counts = aggregate(temp$WEIGHT, by=list(bin=temp$size_and_vintage), FUN=sum)$x labels = paste(lvls, ', n = ', round(counts), sep='') q = ggplot(temp, aes_string(x=y.vars.con[[1]])) + geom_density(aes(colour=size_and_vintage)) + scale_colour_discrete(name='size_and_vintage', labels=labels) ggsave(q, file=paste(gsub('<', '', vintage),'png',sep='_act.'), width=14) }
/data/ahs/MLR/MLR.R
permissive
dsgrid/OpenStudio-BuildStock
R
false
false
6,216
r
context("test-ggstrat-plot_addons.R") test_that("facet reordering works", { p <- mtcars %>% dplyr::mutate( # factor not in sorted order cyl_fct = paste("cyl =", cyl) %>% factor(levels = c("cyl = 8", "cyl = 4", "cyl = 6")), # character gear_fct = paste("gear =", gear) ) %>% ggplot2::ggplot(ggplot2::aes(wt, mpg)) + ggplot2::geom_point() + # additional layer with character versions of what is a factor in original data ggplot2::geom_point( ggplot2::aes(x = 5, y = mpg_line), data = tibble::tibble( gear_fct = c("gear = 3", "gear = 4", "gear = 5", "gear = 6"), cyl_fct = c("cyl = 4", "cyl = 6", "cyl = 8", "cyl = 2"), mpg_line = c(15, 25, 35, 45) ), col = "red" ) vdiffr::expect_doppelganger( "sequential_layer_facet grid", p + ggplot2::facet_grid(ggplot2::vars(cyl_fct), ggplot2::vars(gear_fct)) + sequential_layer_facets() ) vdiffr::expect_doppelganger( "sequential_layer_facet wrap", p + ggplot2::facet_wrap(ggplot2::vars(cyl_fct, gear_fct)) + sequential_layer_facets() ) expect_silent( ggplot2::ggplot_build( p + ggplot2::facet_null() ) ) }) test_that("CONISS can be added to a plot", { coniss <- alta_lake_geochem %>% nested_data(age, param, value, trans = scale) %>% nested_chclust_coniss() # skip("CONISS plots do not render identically between vdiffrAddin() and CMD check") withr::with_envvar(list(VDIFFR_RUN_TESTS = FALSE), { vdiffr::expect_doppelganger( "plot coniss y", ggplot2::ggplot(alta_lake_geochem, ggplot2::aes(x = value, y = age)) + geom_lineh() + ggplot2::facet_grid(cols = vars(param)) + layer_dendrogram(coniss, ggplot2::aes(y = age), param = "CONISS") + layer_zone_boundaries(coniss, ggplot2::aes(y = age)) ) vdiffr::expect_doppelganger( "plot coniss x", ggplot2::ggplot(alta_lake_geochem, ggplot2::aes(x = age, y = value)) + ggplot2::geom_line() + ggplot2::facet_grid(rows = vars(param)) + layer_dendrogram(coniss, ggplot2::aes(x = age), param = "CONISS") + layer_zone_boundaries(coniss, ggplot2::aes(x = age)) ) grp_coniss <- keji_lakes_plottable %>% dplyr::group_by(location) %>% nested_data(depth, taxon, rel_abund) %>% nested_chclust_coniss() vdiffr::expect_doppelganger( "plot coniss abundance y", plot_layer_dendrogram(grp_coniss, ggplot2::aes(y = depth), taxon = "CONISS") + ggplot2::facet_grid(rows = vars(location), cols = vars(taxon)) + ggplot2::scale_y_reverse() ) vdiffr::expect_doppelganger( "plot coniss abundance x", plot_layer_dendrogram(grp_coniss, ggplot2::aes(x = depth), taxon = "CONISS") + ggplot2::facet_grid(cols = vars(location)) + ggplot2::scale_y_reverse() ) }) }) test_that("PCAs can be added to a plot", { pca <- alta_lake_geochem %>% nested_data(age, param, value, trans = scale) %>% nested_prcomp() # skip("PCA plots do not render identically between vdiffrAddin() and CMD check") withr::with_envvar(list(VDIFFR_RUN_TESTS = FALSE), { vdiffr::expect_doppelganger( "plot PCA x", ggplot2::ggplot(alta_lake_geochem, ggplot2::aes(x = value, y = age)) + geom_lineh() + ggplot2::facet_grid(cols = vars(param)) + layer_scores(pca, key = "param", which = c("PC1", "PC2")) ) vdiffr::expect_doppelganger( "plot PCA y", ggplot2::ggplot(alta_lake_geochem, ggplot2::aes(y = value, x = age)) + ggplot2::geom_line() + ggplot2::facet_grid(rows = vars(param)) + layer_scores(pca, key = "param", value = "value", which = c("PC1", "PC2")) ) grp_pca <- keji_lakes_plottable %>% dplyr::group_by(location) %>% nested_data(depth, taxon, rel_abund, trans = sqrt) %>% nested_prcomp() vdiffr::expect_doppelganger( "plot PCA scores y (rev)", plot_layer_scores(grp_pca, ggplot2::aes(y = depth), which = c("PC1", "PC2")) + 
ggplot2::scale_y_reverse() ) vdiffr::expect_doppelganger( "plot PCA scores x", plot_layer_scores(grp_pca, ggplot2::aes(x = depth), which = c("PC1", "PC2")) ) }) })
/tests/testthat/test-ggstrat-plot_addons.R
permissive
pinemmatthew/tidypaleo
R
false
false
4,285
r
context("test-ggstrat-plot_addons.R") test_that("facet reordering works", { p <- mtcars %>% dplyr::mutate( # factor not in sorted order cyl_fct = paste("cyl =", cyl) %>% factor(levels = c("cyl = 8", "cyl = 4", "cyl = 6")), # character gear_fct = paste("gear =", gear) ) %>% ggplot2::ggplot(ggplot2::aes(wt, mpg)) + ggplot2::geom_point() + # additional layer with character versions of what is a factor in original data ggplot2::geom_point( ggplot2::aes(x = 5, y = mpg_line), data = tibble::tibble( gear_fct = c("gear = 3", "gear = 4", "gear = 5", "gear = 6"), cyl_fct = c("cyl = 4", "cyl = 6", "cyl = 8", "cyl = 2"), mpg_line = c(15, 25, 35, 45) ), col = "red" ) vdiffr::expect_doppelganger( "sequential_layer_facet grid", p + ggplot2::facet_grid(ggplot2::vars(cyl_fct), ggplot2::vars(gear_fct)) + sequential_layer_facets() ) vdiffr::expect_doppelganger( "sequential_layer_facet wrap", p + ggplot2::facet_wrap(ggplot2::vars(cyl_fct, gear_fct)) + sequential_layer_facets() ) expect_silent( ggplot2::ggplot_build( p + ggplot2::facet_null() ) ) }) test_that("CONISS can be added to a plot", { coniss <- alta_lake_geochem %>% nested_data(age, param, value, trans = scale) %>% nested_chclust_coniss() # skip("CONISS plots do not render identically between vdiffrAddin() and CMD check") withr::with_envvar(list(VDIFFR_RUN_TESTS = FALSE), { vdiffr::expect_doppelganger( "plot coniss y", ggplot2::ggplot(alta_lake_geochem, ggplot2::aes(x = value, y = age)) + geom_lineh() + ggplot2::facet_grid(cols = vars(param)) + layer_dendrogram(coniss, ggplot2::aes(y = age), param = "CONISS") + layer_zone_boundaries(coniss, ggplot2::aes(y = age)) ) vdiffr::expect_doppelganger( "plot coniss x", ggplot2::ggplot(alta_lake_geochem, ggplot2::aes(x = age, y = value)) + ggplot2::geom_line() + ggplot2::facet_grid(rows = vars(param)) + layer_dendrogram(coniss, ggplot2::aes(x = age), param = "CONISS") + layer_zone_boundaries(coniss, ggplot2::aes(x = age)) ) grp_coniss <- keji_lakes_plottable %>% dplyr::group_by(location) %>% nested_data(depth, taxon, rel_abund) %>% nested_chclust_coniss() vdiffr::expect_doppelganger( "plot coniss abundance y", plot_layer_dendrogram(grp_coniss, ggplot2::aes(y = depth), taxon = "CONISS") + ggplot2::facet_grid(rows = vars(location), cols = vars(taxon)) + ggplot2::scale_y_reverse() ) vdiffr::expect_doppelganger( "plot coniss abundance x", plot_layer_dendrogram(grp_coniss, ggplot2::aes(x = depth), taxon = "CONISS") + ggplot2::facet_grid(cols = vars(location)) + ggplot2::scale_y_reverse() ) }) }) test_that("PCAs can be added to a plot", { pca <- alta_lake_geochem %>% nested_data(age, param, value, trans = scale) %>% nested_prcomp() # skip("PCA plots do not render identically between vdiffrAddin() and CMD check") withr::with_envvar(list(VDIFFR_RUN_TESTS = FALSE), { vdiffr::expect_doppelganger( "plot PCA x", ggplot2::ggplot(alta_lake_geochem, ggplot2::aes(x = value, y = age)) + geom_lineh() + ggplot2::facet_grid(cols = vars(param)) + layer_scores(pca, key = "param", which = c("PC1", "PC2")) ) vdiffr::expect_doppelganger( "plot PCA y", ggplot2::ggplot(alta_lake_geochem, ggplot2::aes(y = value, x = age)) + ggplot2::geom_line() + ggplot2::facet_grid(rows = vars(param)) + layer_scores(pca, key = "param", value = "value", which = c("PC1", "PC2")) ) grp_pca <- keji_lakes_plottable %>% dplyr::group_by(location) %>% nested_data(depth, taxon, rel_abund, trans = sqrt) %>% nested_prcomp() vdiffr::expect_doppelganger( "plot PCA scores y (rev)", plot_layer_scores(grp_pca, ggplot2::aes(y = depth), which = c("PC1", "PC2")) + 
ggplot2::scale_y_reverse() ) vdiffr::expect_doppelganger( "plot PCA scores x", plot_layer_scores(grp_pca, ggplot2::aes(x = depth), which = c("PC1", "PC2")) ) }) })
# TODO: Add comment # # Author: ahrnee-adm ############################################################################### # TODO: Add comment # # Author: ahrnee-adm ############################################################################### ### INIT if(!grepl("SafeQuant\\.Rcheck",getwd())){ setwd(dirname(sys.frame(1)$ofile)) } source("initTestSession.R") ### INIT END ### TEST FUNCTIONS testExpDesignTagToExpDesign <- function(){ cat("--- testExpDesignTagToExpDesign: --- \n") stopifnot(all.equal(pData(eset),expDesign)) stopifnot(all.equal(sampleNames(eset),colnames(m))) stopifnot(all.equal(nrow(exprs(eset)),nrow(m))) expDesignString1 <- "1,2,3:4,5,6" # 6-plex default: 1,2,3:4,5,6 #condition isControl #1 Condition 1 TRUE #2 Condition 1 TRUE #3 Condition 1 TRUE #4 Condition 2 FALSE #5 Condition 2 FALSE #6 Condition 2 FALSE expDesign <- data.frame(condition=paste("Condition",sort(rep(c(1,2),3))),isControl=sort(rep(c(T,F),3),decreasing=T) ) expDesign1 <- expDesignTagToExpDesign(expDesignString1, expDesign) stopifnot(nrow(expDesign1) == 6 ) stopifnot(length(unique(expDesign1$condition)) == 2 ) stopifnot(sum(expDesign1$isControl) == 3 ) expDesignString2 <- "1,4,7,10:2,5,8:3,6,9" # 10-plex default is "1,4,7,10:2,5,8:3,6,9" #condition isControl #1 Condition 1 TRUE #2 Condition 2 FALSE #3 Condition 3 FALSE #4 Condition 1 TRUE #5 Condition 2 FALSE #6 Condition 3 FALSE #7 Condition 1 TRUE #8 Condition 2 FALSE #9 Condition 3 FALSE #10 Condition 1 TRUE expDesign <- data.frame(condition=paste("Condition",c(1,2,3,1,2,3,1,2,3,1)),isControl=c(T,F,F,T,F,F,T,F,F,T) ) expDesign2 <- expDesignTagToExpDesign(expDesignString2, expDesign) stopifnot(nrow(expDesign2) == 10 ) stopifnot(length(unique(expDesign2$condition)) == 3 ) stopifnot(sum(expDesign2$isControl) == 4 ) ### condition name assignment when mixing runs from different conditions expDesign <- data.frame(condition=paste("foo",c(1,1,1,2,2,3,3)),isControl=c(F,F,F,T,T,F,F) ) stopifnot(all(grepl("foo" ,expDesignTagToExpDesign("1,2,3:4,5:6",expDesign)$condition))) stopifnot(all(grepl("Condition" ,expDesignTagToExpDesign("1:4,6:5",expDesign)$condition))) stopifnot(all(grepl("foo" ,expDesignTagToExpDesign("1,2,3:4,5",expDesign)$condition))) stopifnot(all(grepl("Condition" ,expDesignTagToExpDesign("1,2,4",expDesign)$condition))) stopifnot(all(grepl("foo" ,expDesignTagToExpDesign("2",expDesign)$condition))) stopifnot( length(unique(expDesignTagToExpDesign("1:2:3:4:5:6",expDesign)$condition)) == 6) cat("--- testExpDesignTagToExpDesign: PASS ALL TEST --- \n") } ### TEST FUNCTIONS END ### TESTS testExpDesignTagToExpDesign() #names(expDesign) <- 1:ncol(expDesign)
/tests/testUserOptions.R
no_license
eahrne/SafeQuant
R
false
false
2,804
r
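The SafeQuant test above documents the experimental-design tag format: sample indices separated by commas, conditions separated by colons, with the first colon-group treated as the control. The helper below is a stand-alone illustration of how such a tag maps onto a condition/isControl table; it is not SafeQuant's expDesignTagToExpDesign(), just a sketch of the format.

# Sketch only: parse a tag such as "1,4,7,10:2,5,8:3,6,9" into a design table
parseExpDesignTag <- function(tag) {
  groups <- strsplit(strsplit(tag, ":")[[1]], ",")
  idx <- as.integer(unlist(groups))
  cond <- rep(seq_along(groups), sapply(groups, length))
  out <- data.frame(condition = paste("Condition", cond),
                    isControl = cond == 1)
  out <- out[order(idx), ]          # order rows by sample index
  rownames(out) <- sort(idx)
  out
}

parseExpDesignTag("1,2,3:4,5,6")            # two conditions, samples 1-3 are controls
parseExpDesignTag("1,4,7,10:2,5,8:3,6,9")   # three conditions, samples 1, 4, 7, 10 are controls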
#' min2maxC
#' Negative-direction data conversion using the maximum-subtraction method
#' @param x The data whose indicator column needs to be maximized.
#' @param i Index column.
#'
#' @return Index column maximized
min2maxC <- function(x, i) {
  sampleData2 <- x
  # keep the ID column and the selected indicator column
  sampleData3 <- sampleData2[, c(1, i)]
  # maximum subtraction: converted value = max(indicator) - indicator
  sampleData3$FZZH <- max(sampleData3[, 2]) - sampleData3[, 2]
  sampleData4 <- sampleData3[, c(1, 3)]
  transDatacol <- sampleData4
  names(transDatacol)[1] <- names(sampleData2[1])
  names(transDatacol)[2] <- names(sampleData2[i])
  return(transDatacol)
}
/R/min2maxC.R
permissive
zhengyu888/AHPtopsis
R
false
false
514
r
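A quick usage sketch for the min2maxC() helper above; the data frame, column names and values are invented for illustration.

# Hypothetical decision matrix: the first column is the alternative ID,
# column 3 is a cost-type indicator where smaller values are better
dat <- data.frame(id      = c("A1", "A2", "A3"),
                  benefit = c(0.2, 0.5, 0.9),
                  cost    = c(120, 80, 95))

# Convert the cost column (index 3) so that larger converted values are better:
# converted = max(cost) - cost
min2maxC(dat, 3)
#>   id cost
#> 1 A1    0
#> 2 A2   40
#> 3 A3   25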
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/rainfall.R
\name{precipitation_above_reference}
\alias{precipitation_above_reference}
\title{Get proportion of rainfall above reference}
\usage{
precipitation_above_reference(precipitation, reference_precipitation,
  wet_day_threshold = 1)
}
\arguments{
\item{precipitation}{vector with rainfall values}

\item{reference_precipitation}{reference value of precipitation}

\item{wet_day_threshold}{Numeric. Amount of precipitation at which the day is
considered a wet day. Defaults to 1.}
}
\value{
a value between 0 and 1
}
\description{
Calculates the proportion of rainfall above the reference_precipitation value.
Usually the reference corresponds to the 95th percentile of a climate
normal/reference period. The proportion is often calculated on an annual basis
and is a measure of the proportion of annual total rain that falls in intense
events. This measure provides information about the importance of intense
rainfall events for total annual rainfall.
}
\examples{
library(dplyr)

# Simulate one measurement of rain per day for 10 years
rainfall <- rlnorm(10 * 365)
years <- rep(2001:2010, each = 365)
rain_data <- tibble(rainfall = rainfall, year = years)

# We chose a climate normal
climate_normal <- c(2001, 2005)

rain_data \%>\%
  # calculate reference rainfall
  mutate(ref = get_reference_precipitation(rainfall, year, climate_normal,
                                           percentile = 95L)) \%>\%
  # calculate proportion of rainfall above reference
  group_by(year) \%>\%
  summarise(prop_above = precipitation_above_reference(rainfall, ref))
}
\seealso{
Other rainfall functions: \code{\link{get_reference_precipitation}}
}
\concept{rainfall functions}
/man/precipitation_above_reference.Rd
permissive
StatisticsNZ/er.helpers
R
false
true
1,833
rd
# Frequency plot of the 1st variable: "Firefighter units deployed per incident"

# Install package RCurl if it's not already present
if (!requireNamespace("RCurl", quietly = TRUE)) install.packages("RCurl")

# Load required package RCurl
library(RCurl)

# Read the dataset from GitHub
x <- getURL("https://raw.githubusercontent.com/deyvidwilliam/cebd1160/master/data/3interventions_casernes_distance_2015_2017.csv")
y <- read.csv(text = x)

# Frequency plot of the variable "units deployed per incident", found in column 11 of the dataset
# X axis limited to 30 for better visualization
# Number of breakpoints (bins) set to match the highest value of the variable
hist(y[, 11],
     main = "Frequency plot (histogram) for\n firefighter units deployed per incident",
     col = "green", xlab = "Units deployed", ylab = "Incidents",
     xlim = c(0, 30), breaks = max(y[, 11]))
/R/var1.r
no_license
deyvidwilliam/cebd1160
R
false
false
820
r
## averages and sd
library(dplyr); library(ggplot2); library(reshape)

is.na(results) <- sapply(results, is.infinite)  # clean results
results[is.na(results)] <- NA

## avg and sd of metrics
summary_stats <- results %>%
  group_by(.dots = c("connectivity", "patchquality")) %>%  # by treatment
  summarise_at(vars(colnames(results[, -1:-8])),
               funs(mean(., na.rm = TRUE), sd(., na.rm = TRUE)))

## avg and sd of species composition
composition <- rep_samples %>%
  rowwise() %>%
  transform(total = rowSums(rep_samples[15:23]))  # total abundance
composition[15:23] <- composition[15:23] / composition[24][row(composition[15:23])]  # scale to find relative abundance
composition <- composition[, -24]; composition <- composition[, -(11:14)]  # remove extra columns

composition_local <- composition %>%
  group_by(.dots = c("connectivity", "quality", "time")) %>%  # find avgs and sd of compositions
  summarise_at(vars(colnames(composition[, -(1:10)])),
               funs(mean(., na.rm = TRUE), sd(., na.rm = TRUE)))

# plotting community composition
for (c in 1:length(connec)) {
  for (q in ss.prop) {
    png(paste0("rawplots/communitycomp/composition_box_", connec[c], q, ".png"))
    boxplot(x = composition[composition$connectivity == c &
                              composition$quality == q &
                              composition$time == max(composition$time - sampfreq), -(1:10)],
            xlab = "species", ylab = "composition", outline = FALSE)
    dev.off()
  }
}
/results_summary.R
no_license
TristanGarry/connectivity-patchquality
R
false
false
1,369
r
library(multicon) ### Name: catseye ### Title: Cat's Eye ### Aliases: catseye ### Keywords: graphing distributions ### ** Examples # A Single Group f <- rnorm(50) catseye(f, conf=.95, xlab="", ylab="DV", las=1) catseye(f, conf=.95, xlab="", ylab="DV", las=1, col="light green", main="Cat's Eye Plot for a Single Group Mean", sub="95 percent CI") # Two Groups f2 <- rnorm(100) g <- rep(1:2, each=50) catseye(f2, grp=g, xlab="Conditions", ylab="DV", grp.names=c("Control", "Experimental"), las=1) catseye(f2, grp=g, conf=.8, xlab="", ylab="DV", grp.names=c("Control", "Experimental"), las=1, col="cyan", main="Two Group Mean Comparison", sub="80 percent CIs") # Three Groups f3 <- c(rnorm(10), rnorm(10, mean=.5), rnorm(10, mean=1, sd=2)) g2 <- rep(1:3, each=10) catseye(f3, grp=g2, conf=.95, xlab="Conditions", ylab="DV", grp.names=c("Group 1", "Group 2", "Group 3"), las=1, col="cyan", main="Three Group Mean Comparison") # A 2 x 2 Design f4 <- rnorm(200) fac1 <- rep(1:2, each=100) fac2 <- rep(3:4, 100) catseye(f4, list(fac1, fac2), xlab="Conditions", ylab="DV", grp.names=c("High/High", "High/Low", "Low/High", "Low/Low"),las=1, col="orange", main="A 2 x 2 Experiment Comparison") # Using the xpoints argument to create visual space catseye(f4, list(fac1, fac2), xlab="Conditions", ylab="DV", grp.names=c("High/High", "High/Low", "Low/High", "Low/Low"),xpoints=c(1,2,4,5), las=1, col="orange", main="A 2 x 2 Experiment Comparison") # A 2 x 3 Design f5 <- rnorm(180) fac1 <- rep(1:2, each=90) fac2 <- rep(3:5, 60) catseye(f5, list(fac1, fac2), xlab="Conditions", ylab="DV", grp.names=c("High/A", "High/B", "High/C", "Low/A", "Low/B","Low/C"), las=1, main="A 2 x 3 Experiment Comparison") # Using the xpoints argument to create visual space catseye(f5, list(fac1, fac2), xlab="Conditions", ylab="DV", grp.names=c("High/A", "High/B", "High/C", "Low/A", "Low/B","Low/C"), xpoints=c(1,2,3,5,6,7), las=1, main="A 2 x 3 Experiment Comparison")
/data/genthat_extracted_code/multicon/examples/catseye.Rd.R
no_license
surayaaramli/typeRrh
R
false
false
1,994
r
#' Return listEvents data
#'
#' \code{listEvents} returns top-level event data.
#'
#' \code{listEvents} returns top-level event data (e.g. id, name, number of
#' associated markets) and can be filtered via the API-NG based on a number of
#' arguments. This data is useful for finding specific event identification
#' numbers, which are then usually passed to further functions.
#' By default, \code{listEvents} returns are limited to the forthcoming 24-hour
#' period. However, this can be changed by user specified date/time stamps.
#' A full description of the function output may be found here:
#' \url{https://api.developer.betfair.com/services/webapps/docs/display/1smk3cen4v3lu3yomq5qye0ni/Betting+Type+Definitions#BettingTypeDefinitions-EventResult}
#'
#' @seealso \code{\link{loginBF}}, which must be executed first.
#'
#' @param eventTypeIds vector <String>. Restrict events by event type associated
#'   with the market (i.e. Football, Horse Racing, etc.). Accepts multiple IDs
#'   (see examples). IDs can be obtained via \code{\link{listEventTypes}}.
#'   Required. No default.
#'
#' @param marketTypeCodes vector <String>. Restrict to events that match the
#'   type of the market (i.e. MATCH_ODDS, HALF_TIME_SCORE). You should use this
#'   instead of relying on the market name, as the market type codes are the
#'   same in all locales. Accepts multiple market type codes (see examples).
#'   Market codes can be obtained via \code{\link{listMarketTypes}}. Optional.
#'   Default is NULL.
#'
#' @param fromDate The start date from which to return matching events. Format
#'   is \%Y-\%m-\%dT\%TZ. Optional. If not defined, it defaults to the current
#'   system date and time minus 2 hours (to allow searching of all in-play
#'   football matches).
#'
#' @param toDate The end date to stop returning matching events. Format is
#'   \%Y-\%m-\%dT\%TZ. Optional. If not defined, defaults to the current system
#'   date and time plus 24 hours.
#'
#' @param eventIds vector <String>. Restrict to events that are associated with
#'   the specified eventIDs (e.g. "27675602"). Optional. Default is NULL.
#'
#' @param competitionIds vector <String>. Restrict to events that are associated
#'   with the specified competition IDs (e.g. EPL = "31", La Liga = "117").
#'   Optional. Default is NULL.
#'
#' @param marketIds vector <String>. Restrict to events that are associated with
#'   the specified marketIDs (e.g. "1.122958246"). Optional. Default is NULL.
#'
#' @param marketCountries vector <String>. Restrict to events that are in the
#'   specified country or countries. Accepts multiple country codes (see
#'   examples). Codes can be obtained via \code{\link{listCountries}}. Optional.
#'   Default is NULL.
#'
#' @param venues vector <String>. Restrict events by the venue associated with
#'   the market. This functionality is currently only available for horse racing
#'   markets (e.g. venues = c("Exeter", "Navan")). Optional. Default is NULL.
#'
#' @param bspOnly Boolean. Restrict to Betfair starting price (BSP) events only
#'   if TRUE or non-BSP events if FALSE. Optional. Default is NULL, which means
#'   that both BSP and non-BSP events are returned.
#'
#' @param turnInPlayEnabled Boolean. Restrict to events that will turn in play
#'   if TRUE or will not turn in play if FALSE. Optional. Default is NULL, which
#'   means that both event types are returned.
#'
#' @param inPlayOnly Boolean. Restrict to events that are currently in play if
#'   TRUE or not in play if FALSE. Optional. Default is NULL, which means that
#'   both in-play and non-in-play events are returned.
#'
#' @param marketBettingTypes vector <String>. Restrict to events that match the
#'   betting type of the market (i.e. Odds, Asian Handicap Singles, or Asian
#'   Handicap Doubles). Optional. Default is NULL. See
#'   \url{https://api.developer.betfair.com/services/webapps/docs/display/1smk3cen4v3lu3yomq5qye0ni/Betting+Enums#BettingEnums-MarketBettingType}
#'   for a full list (and description) of viable parameter values.
#'
#' @param withOrders String. Restrict to events in which the user has bets of a
#'   specified status. The two viable values are "EXECUTION_COMPLETE" (an order
#'   that does not have any remaining unmatched portion) and "EXECUTABLE" (an
#'   order that has a remaining unmatched portion). Optional. Default is NULL.
#'
#' @param textQuery String. Restrict events by any text associated with the
#'   event such as the Name, Event, Competition, etc. The string can include a
#'   wildcard (*) character as long as it is not the first character. Optional.
#'   Default is NULL.
#'
#' @param suppress Boolean. By default, this parameter is set to FALSE, meaning
#'   that a warning is posted when the listEvents call throws an error. Changing
#'   this parameter to TRUE will suppress this warning.
#'
#' @param sslVerify Boolean. This argument defaults to TRUE and is optional. In
#'   some cases, where users have a self signed SSL Certificate, for example
#'   they may be behind a proxy server, Betfair will fail login with "SSL
#'   certificate problem: self signed certificate in certificate chain". If this
#'   error occurs you may set sslVerify to FALSE. This does open a small
#'   security risk of a man-in-the-middle intercepting your login credentials.
#'
#' @return Response from Betfair is stored in the listEvents variable, which is
#'   then parsed from JSON as a list. Only the first item of this list contains
#'   the required event type identification details. If the listEvents call
#'   throws an error, a data frame containing error information is returned.
#'
#' @section Note on \code{listEventsOps} variable: The \code{listEventsOps}
#'   variable is used to firstly build an R data frame containing all the data
#'   to be passed to Betfair, in order for the function to execute successfully.
#'   The data frame is then converted to JSON and included in the HTTP POST
#'   request.
#'
#' @examples
#' \dontrun{
#' # Return event data for the Horse Racing event type, in both Great
#' # Britain and Ireland, and Win markets only.
#' listEvents(eventTypeIds = "7", marketCountries = c("GB", "IE"),
#'            marketTypeCodes = "WIN")
#'
#' # Return event data for the Horse Racing event type, in only Great
#' # Britain, but both Win and Place markets.
#' listEvents(eventTypeIds = "7", marketCountries = "GB",
#'            marketTypeCodes = c("WIN", "PLACE"))
#'
#' # Return event data for both Horse Racing and Football event types, in
#' # Great Britain only and for both Win and Match Odds market types.
#' listEvents(eventTypeIds = c("7", "1"), marketCountries = "GB",
#'            marketTypeCodes = c("WIN", "MATCH_ODDS"))
#'
#' # Return event data for all football matches currently in play,
#' # restricted to events with Match Odds market types.
#' listEvents(eventTypeIds = c("1"), marketTypeCodes = c("MATCH_ODDS"),
#'            inPlayOnly = TRUE)
#' }
#'
listEvents <- function(eventTypeIds, marketTypeCodes = NULL,
                       fromDate = (format(Sys.time() - 7200, "%Y-%m-%dT%TZ")),
                       toDate = (format(Sys.time() + 86400, "%Y-%m-%dT%TZ")),
                       eventIds = NULL, competitionIds = NULL, marketIds = NULL,
                       marketCountries = NULL, venues = NULL, bspOnly = NULL,
                       turnInPlayEnabled = NULL, inPlayOnly = NULL,
                       marketBettingTypes = NULL, withOrders = NULL,
                       textQuery = NULL, suppress = FALSE, sslVerify = TRUE) {

  options(stringsAsFactors = FALSE)

  # Build the JSON-RPC request as a nested data frame; it is converted to JSON below
  listEventsOps <- data.frame(jsonrpc = "2.0",
                              method = "SportsAPING/v1.0/listEvents",
                              id = "1")

  listEventsOps$params <- data.frame(filter = c(""))
  listEventsOps$params$filter <- data.frame(marketStartTime = c(""))

  if (!is.null(eventIds)) {
    listEventsOps$params$filter$eventIds <- list(eventIds)
  }
  if (!is.null(eventTypeIds)) {
    listEventsOps$params$filter$eventTypeIds <- list(eventTypeIds)
  }
  if (!is.null(competitionIds)) {
    listEventsOps$params$filter$competitionIds <- list(competitionIds)
  }
  if (!is.null(marketIds)) {
    listEventsOps$params$filter$marketIds <- list(marketIds)
  }
  if (!is.null(venues)) {
    listEventsOps$params$filter$venues <- list(venues)
  }
  if (!is.null(marketCountries)) {
    listEventsOps$params$filter$marketCountries <- list(marketCountries)
  }
  if (!is.null(marketTypeCodes)) {
    listEventsOps$params$filter$marketTypeCodes <- list(marketTypeCodes)
  }

  listEventsOps$params$filter$bspOnly <- bspOnly
  listEventsOps$params$filter$turnInPlayEnabled <- turnInPlayEnabled
  listEventsOps$params$filter$inPlayOnly <- inPlayOnly
  listEventsOps$params$filter$textQuery <- textQuery

  if (!is.null(marketBettingTypes)) {
    listEventsOps$params$filter$marketBettingTypes <- list(marketBettingTypes)
  }
  if (!is.null(withOrders)) {
    listEventsOps$params$filter$withOrders <- list(withOrders)
  }

  listEventsOps$params$filter$marketStartTime <- data.frame(from = fromDate, to = toDate)

  listEventsOps <- listEventsOps[c("jsonrpc", "method", "params", "id")]
  listEventsOps <- jsonlite::toJSON(listEventsOps, pretty = TRUE)

  # Read environment variables for authorisation details
  product <- Sys.getenv('product')
  token <- Sys.getenv('token')

  headers <- list('Accept' = 'application/json',
                  'X-Application' = product,
                  'X-Authentication' = token,
                  'Content-Type' = 'application/json')

  listEvents <- as.list(jsonlite::fromJSON(
    RCurl::postForm(Sys.getenv('betfair-betting'),
                    .opts = list(postfields = listEventsOps,
                                 httpheader = headers,
                                 ssl.verifypeer = sslVerify))
  ))

  if (is.null(listEvents$error))
    as.data.frame(listEvents$result[1])
  else ({
    if (!suppress)
      warning("Error- See output for details")
    as.data.frame(listEvents$error)
  })
}
/R/listEvents.R
no_license
tobiasstrebitzer/abettor
R
false
false
10,099
r
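To make the request structure of listEvents() concrete, the sketch below builds the same kind of nested filter data frame and prints the JSON-RPC body that jsonlite::toJSON() produces; the event type and country values are illustrative examples and no HTTP call to Betfair is made.

# Sketch: inspect the JSON-RPC body that listEvents() builds, without
# contacting the Betfair API. The filter values below are examples only.
library(jsonlite)

ops <- data.frame(jsonrpc = "2.0",
                  method = "SportsAPING/v1.0/listEvents",
                  id = "1")
ops$params <- data.frame(filter = c(""))
ops$params$filter <- data.frame(marketStartTime = c(""))
ops$params$filter$eventTypeIds <- list("7")                # e.g. Horse Racing
ops$params$filter$marketCountries <- list(c("GB", "IE"))
ops$params$filter$marketTypeCodes <- list("WIN")
ops$params$filter$marketStartTime <- data.frame(
  from = format(Sys.time(), "%Y-%m-%dT%TZ"),
  to   = format(Sys.time() + 86400, "%Y-%m-%dT%TZ")
)
ops <- ops[c("jsonrpc", "method", "params", "id")]

# This is the string that would be sent as the POST body
cat(toJSON(ops, pretty = TRUE))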
testlist <- list(
  x = c(2.14495729724329e-312, 1.04857559801461e-255, 3.13705387384993e-115,
        1.90276391784976e-308, 6.51960956380587e-311, -3.74890975345391e-253,
        5.97468560415014e-92, 2.58981145684914e-307, 0, 0, 0, 0, 0, 0, 0, 0,
        1.26685862504307e-279, 5.74552043740138e+294, -5.15273908894684e-36,
        -7.72134029854232e-84, 5.88522309181711e-315, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0),
  y = numeric(0)
)
result <- do.call(blorr:::blr_pairs_cpp, testlist)
str(result)
/blorr/inst/testfiles/blr_pairs_cpp/libFuzzer_blr_pairs_cpp/blr_pairs_cpp_valgrind_files/1609955831-test.R
no_license
akhikolla/updated-only-Issues
R
false
false
462
r
tableFileUI <- function(id, inputBoxTitle = "Input", outputBoxTitle = "Output",
                        csvlabel = "CSV file",
                        label_1 = "time", label_2 = "N",
                        default_frame = data.frame(c(0, 10), c(70, 80))) {

  ns <- NS(id)

  names(default_frame) <- c(label_1, label_2)

  tagList(
    fluidRow(
      ## Input
      tabBox(title = inputBoxTitle, id = ns("my_tabBox"),
             side = "right", selected = "Excel",
             # tabPanel("Old",
             #          matrixInput(inputId = ns("manual_table"),
             #                      label = paste(label_1, label_2, sep = " - "),
             #                      data = default_frame)
             # ),
             tabPanel("Manual",
                      rHandsontableOutput(ns("hot"))
             ),
             tabPanel("Text",
                      fileInput(ns("file"), csvlabel),
                      radioButtons(ns("sep"), "Separator",
                                   c(Comma = ",", Semicolon = ";", Tab = "\t"), "\t"),
                      radioButtons(ns("dec"), "Decimal Point",
                                   c(Point = ".", Comma = ","), ".")
             ),
             tabPanel("Excel",
                      fileInput(ns("excel_file"), "Excel file"),
                      textInput(ns("excel_sheet"), "Sheet name", "Sheet1"),
                      numericInput(ns("excel_skip"), "Skip", 0)
             )
      ),
      ## Output
      box(title = outputBoxTitle, collapsible = TRUE, status = "primary",
          actionButton(ns("update_table"), "Refresh"),
          # tableOutput(ns("my_table"))
          plotOutput(ns("my_plot")),
          tags$hr(),
          downloadLink(ns("export_table"), "Export")
      )
    )
  )
}
/tableFileUI.R
no_license
albgarre/bioinactivation_FE
R
false
false
2,154
r
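This file only defines the UI half of the Shiny module; its server counterpart is not shown here, so the skeleton below is a hypothetical sketch of how the namespaced inputs (file, excel_file, hot, update_table, the tabBox selection) could be read. The function name tableFile and the reactive wiring are assumptions for illustration, not the app's actual server code.

# Hypothetical server-side counterpart (assumed, not part of this file):
# reads the namespaced inputs created by tableFileUI() inside a module.
tableFile <- function(input, output, session) {

  my_data <- eventReactive(input$update_table, {
    if (input$my_tabBox == "Text" && !is.null(input$file)) {
      read.csv(input$file$datapath, sep = input$sep, dec = input$dec)
    } else if (input$my_tabBox == "Excel" && !is.null(input$excel_file)) {
      readxl::read_excel(input$excel_file$datapath,
                         sheet = input$excel_sheet, skip = input$excel_skip)
    } else {
      rhandsontable::hot_to_r(input$hot)   # "Manual" tab
    }
  })

  output$my_plot <- renderPlot({
    d <- my_data()
    plot(d[[1]], d[[2]], xlab = names(d)[1], ylab = names(d)[2])
  })

  my_data  # return the reactive so the calling app can reuse the data
}

# In the app: tableFileUI("my_table") in the UI and
# callModule(tableFile, "my_table") in the server function.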
library(tm)
library(caret)

setwd("C:/Users/schinnamgar/Desktop/Final research/Short answers/Data")
set.seed(123)

for (iter in 1:10) {
  num = 5
  while (num <= 25) {
    datasetname <- paste("dataset", iter, ".csv", sep = "")
    dataset_raw_name <- paste("dataset", iter, "-", "raw", ".", "csv", sep = "")
    dataset_tf_name <- paste("dataset", iter, "-", "tf-", num, ".", "csv", sep = "")
    if (file.exists(dataset_tf_name)) file.remove(dataset_tf_name)

    dataset <- read.csv(datasetname)
    dataset <- dataset[, -1]

    essaytextcorpus <- Corpus(VectorSource(dataset[, 2]))
    essaytextcorpus <- tm_map(essaytextcorpus, tolower)
    essaytextcorpus <- tm_map(essaytextcorpus, removePunctuation)
    essaytextcorpus <- tm_map(essaytextcorpus, removeWords, stopwords())
    essaytextcorpus <- tm_map(essaytextcorpus, stripWhitespace)
    essaytextcorpus <- tm_map(essaytextcorpus, stemDocument)

    essaytextcorpus_dtm <- DocumentTermMatrix(essaytextcorpus)
    essay_dict <- findFreqTerms(essaytextcorpus_dtm, num)
    essaytextcorpus_dtm <- DocumentTermMatrix(essaytextcorpus, list(dictionary = essay_dict))
    essaytextcorpus_matrix <- as.matrix(essaytextcorpus_dtm)

    convert_counts <- function(x) {
      x <- ifelse(x > 0, 1, 0)
      x <- factor(x, levels = c(0, 1))
      return(x)
    }

    essaytextcorpus_dtm <- apply(essaytextcorpus_dtm, MARGIN = 2, convert_counts)
    write.table(essaytextcorpus_dtm, dataset_tf_name, append = FALSE, sep = ",",
                eol = "\n", row.names = F)

    num = num + 5
  }
}
/ApplyFindFreqTermsonDatasets.R
no_license
SUNILKUMARCHINNAMGARI/phdrcode
R
false
false
1,369
r
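The key preprocessing step above is building a document-term matrix restricted to frequent terms and then binarising the counts; the self-contained toy example below illustrates that step on three made-up documents (a sketch of the same idea, not the script's actual data or pipeline).

# Self-contained illustration of the findFreqTerms()/binarisation step on a
# toy corpus (the documents below are made up for demonstration).
library(tm)

docs <- c("the cell divides during mitosis",
          "mitosis produces two daughter cells",
          "energy is released during respiration")
corp <- Corpus(VectorSource(docs))
corp <- tm_map(corp, content_transformer(tolower))
corp <- tm_map(corp, removePunctuation)
corp <- tm_map(corp, removeWords, stopwords())

dtm <- DocumentTermMatrix(corp)
keep_terms <- findFreqTerms(dtm, 2)                       # keep terms appearing at least twice
dtm <- DocumentTermMatrix(corp, list(dictionary = keep_terms))

# Binarise counts to presence/absence, as convert_counts() does above
present <- as.matrix(dtm)
present[present > 0] <- 1
present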
# Haversine function for great circle distance ----------------------------
# Modified from https://www.r-bloggers.com/great-circle-distance-calculations-in-r/
haversine <- function(lat1, lat2, long1, long2) {
  deg2rad <- function(deg) deg*pi/180

  R <- 6371  # Mean earth radius (km)
  lat1 <- deg2rad(lat1)
  lat2 <- deg2rad(lat2)
  long1 <- deg2rad(long1)
  long2 <- deg2rad(long2)

  delta.lat <- lat1 - lat2
  delta.long <- long1 - long2
  a <- sin(delta.lat/2)^2 + cos(lat1)*cos(lat2)*sin(delta.long/2)^2
  c <- 2*asin(purrr::map_dbl(a, function(x) min(1, sqrt(x))))
  c*R
}
/src/haversine.R
no_license
amkusmec/genomes2fields
R
false
false
592
r
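A quick sanity check of the function, using rough coordinates for a couple of well-known city pairs (the coordinate values are approximate and only for illustration); the London-to-Paris distance should come out in the region of 340 km, and because of the purrr::map_dbl() call the function also accepts vectors of point pairs.

# Illustrative use of haversine() with approximate coordinates:
# London ~ (51.51, -0.13), Paris ~ (48.86, 2.35)
haversine(lat1 = 51.51, lat2 = 48.86, long1 = -0.13, long2 = 2.35)
# expected to be roughly 340 km

# Vectorised call: London -> Paris and New York -> Los Angeles
# (New York ~ (40.71, -74.01), Los Angeles ~ (34.05, -118.24))
haversine(c(51.51, 40.71), c(48.86, 34.05), c(-0.13, -74.01), c(2.35, -118.24))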
.packageName <- "rowcolmacros"

### Useful macros

##' Remove from workspace all objects with names matching regexp
##'
##' @param regexp a regular expression (as a character object)
##' @return none - side effects only
##' @seealso \code{\link{rmexgrep}}
##' @author Ben Veal
##' @export
rmgrep <- defmacro(regexp, expr = {
    rm(list = ls()[grep(regexp, ls())])
})

##' Remove from workspace all objects with names not matching regexp
##'
##' @param regexp a regular expression (as a character object)
##'
##' @return none - side effects only
##' @seealso \code{\link{rmgrep}}
##' @author Ben Veal
##' @export
rmexgrep <- defmacro(regexp, expr = {
    rm(list = keep(list = ls()[grep(regexp, ls())]))
})

##' Select columns from data.frame matching regexp
##'
##' @param df a data.frame object
##' @param regex a regular expression
##'
##' @return A subset of the columns of 'df'
##' @seealso \code{\link{selectcolsT}}, \code{\link{aselectcols}}, \code{\link{aselectcolsT}}
##' @author Ben Veal
##' @export
selectcols <- defmacro(df, regex, expr = {
    df[, grep(regex, colnames(df))]
})

##' Select columns from data.table matching regexp
##'
##' @param dt a data.table object
##' @param regex a regular expression
##'
##' @return A subset of the columns of 'dt'
##' @seealso \code{\link{selectcols}}, \code{\link{aselectcols}}, \code{\link{aselectcolsT}}
##' @author Ben Veal
##' @export
selectcolsT <- defmacro(dt, regex, expr = {
    dt[, grep(regex, colnames(dt)), with = F]
})

##' Select columns from data.frame fuzzy matching string
##'
##' Uses fuzzy matching to select columns from 'df' that match 'str'
##' @param df a data.frame object
##' @param str a string
##'
##' @return A subset of the columns of 'df'
##' @seealso \code{\link{selectcols}}, \code{\link{selectcolsT}}, \code{\link{aselectcolsT}}
##' @author Ben Veal
##' @export
aselectcols <- defmacro(df, str, expr = {
    df[, agrep(str, colnames(df))]
})

##' Select columns from data.table fuzzy matching string
##'
##' Uses fuzzy matching to select columns from 'dt' that match 'str'
##' @param dt a data.table object
##' @param str a string
##'
##' @return A subset of the columns of 'dt'
##' @seealso \code{\link{selectcols}}, \code{\link{selectcolsT}}, \code{\link{aselectcols}}
##' @author Ben Veal
##' @export
aselectcolsT <- defmacro(dt, str, expr = {
    dt[, agrep(str, colnames(dt)), with = F]
})

##' Show names and numbers of columns from df with names matching regexp (ignoring case)
##'
##' @param df a data.frame object
##' @param regex a regular expression (as a character object)
##'
##' @return A data.frame object with one column containing names of columns of 'df', and whose row names are the
##' corresponding row numbers of 'df'
##' @export
findcols <- defmacro(df, regex, expr = {
    data.frame(grep(regex, colnames(df), ignore.case = T, value = T),
               row.names = grep(regex, colnames(df), ignore.case = T, value = F))
})

##' Show names and numbers of columns from df with names fuzzy matching string (ignoring case)
##'
##' Uses fuzzy matching to find columns of 'df' that match string 'str'
##' @param df a data.frame object
##' @param str a character object
##'
##' @return A data.frame object with one column containing names of columns of 'df', and whose row names are the
##' corresponding row numbers of 'df'
##' @author Ben Veal
##' @export
afindcols <- defmacro(df, str, expr = {
    data.frame(agrep(str, colnames(df), ignore.case = T, value = T),
               row.names = agrep(str, colnames(df), ignore.case = T, value = F))
})

##' Remove columns from data.frame that match regexp
##'
##' @param df a data.frame object
##' @param regex a regular expression (as a character object)
##'
##' @return A subset of the columns of 'df'
##' @author Ben Veal
##' @export
removecols <- defmacro(df, regex, expr = {
    if(length(grep(regex, colnames(df)))) df[, -grep(regex, colnames(df))] else df
})

##' Remove columns from data.table that match regexp
##'
##' @param dt a data.table object
##' @param regex a regular expression (as a character object)
##'
##' @return A subset of the columns of 'dt'
##' @author Ben Veal
##' @export
removecolsT <- defmacro(dt, regex, expr = {
    # as above but for datatables
    if(length(grep(regex, colnames(dt)))) dt[, -grep(regex, colnames(dt)), with = F] else dt
})

##' Remove columns from data.frame that fuzzy match string
##'
##' Uses fuzzy string matching to select which columns of 'df' to remove.
##' @param df a data.frame object
##' @param str a string with which to fuzzy match the column names of 'df'
##'
##' @return A subset of the columns of 'df'
##' @author Ben Veal
##' @export
aremovecols <- defmacro(df, str, expr = {
    if(length(agrep(str, colnames(df)))) df[, -agrep(str, colnames(df))] else df
})

##' Remove columns from data.table that fuzzy match string
##'
##' Uses fuzzy string matching to select which columns of 'dt' to remove.
##' @param dt a data.table object
##' @param str a string with which to fuzzy match the column names of 'dt'
##'
##' @return A subset of the columns of 'dt'
##' @author Ben Veal
##' @export
aremovecolsT <- defmacro(dt, str, expr = {
    if(length(agrep(str, colnames(dt)))) dt[, -agrep(str, colnames(dt)), with = F] else dt
})

##' @title Rename matches to regexp in column names of data.frame object.
##' @details Replace first match to regexp 'regex' in columns of 'df' with 'repl'.
##' @param df a data.frame object
##' @param regex a string/character object
##' @param repl a string/character object
##'
##' @return none - side effects only
##' @author Ben Veal
##' @export
renamecols <- defmacro(df, regex, repl, expr = {
    colnames(df) <- sub(regex, repl, colnames(df))
})

##' @title Rename matches to regexp in column names of data.frame object.
##' @details Replace all matches to regexp 'regex' in columns of 'df' with 'repl'.
##' @param df a data.frame object
##' @param regex a string/character object
##' @param repl a string/character object
##' @return none - side effects only
##' @author Ben Veal
##' @export
grenamecols <- defmacro(df, regex, repl, expr = {
    colnames(df) <- gsub(regex, repl, colnames(df))
})

##' @title Select rows from data.frame matching regexp
##' @param df a data.frame object
##' @param regex a regular expression
##' @return A subset of the rows of 'df'
##' @author Ben Veal
##' @export
selectrows <- defmacro(df, regex, expr = {
    df[grep(regex, rownames(df)), ]
})

##' @title Select rows from data.frame fuzzy matching string
##' @details Uses fuzzy matching to select rows from 'df' that match 'str'
##' @param df a data.frame object
##' @param str a string
##' @return A subset of the rows of 'df'
##' @author Ben Veal
##' @export
aselectrows <- defmacro(df, str, expr = {
    df[agrep(str, rownames(df)), ]
})

##' @title Show names and numbers of rows from data.frame with names matching regexp (ignoring case)
##' @param df a data.frame object
##' @param regex a regular expression (as a character object)
##' @return A data.frame object with one column containing the names of matching rows of 'df', and whose row names
##' are the corresponding row numbers of 'df'
##' @author Ben Veal
##' @export
findrows <- defmacro(df, regex, expr = {
    data.frame(grep(regex, rownames(df), ignore.case = T, value = T),
               row.names = grep(regex, rownames(df), ignore.case = T, value = F))
})

##' @title Show names and numbers of rows from data.frame with names fuzzy matching string (ignoring case)
##' @details Uses fuzzy string matching to find rows of 'df' that match 'str' (doesn't parse regexps!)
##' @param df a data.frame object
##' @param str a string
##' @return A data.frame object with one column containing the names of matching rows of 'df', and whose row names
##' are the corresponding row numbers of 'df'
##' @author Ben Veal
##' @export
afindrows <- defmacro(df, str, expr = {
    # as above but using fuzzy string matching (doesn't parse regexps!)
    data.frame(agrep(str, rownames(df), ignore.case = T, value = T),
               row.names = agrep(str, rownames(df), ignore.case = T, value = F))
})

##' @title Remove rows from data.frame that match regexp
##' @param df a data.frame object
##' @param regex a regular expression (as a character object)
##' @return A subset of the rows of 'df'
##' @author Ben Veal
##' @export
removerows <- defmacro(df, regex, expr = {
    if(length(grep(regex, rownames(df))) > 0) df[-grep(regex, rownames(df)), ] else df
})

##' @title Remove rows from data.frame that fuzzy match string
##' @details Uses fuzzy string matching to select which rows of 'df' to remove.
##' @param df a data.frame object
##' @param str a string with which to fuzzy match the row names of 'df'
##' @return A subset of the rows of 'df'
##' @author Ben Veal
##' @export
aremoverows <- defmacro(df, str, expr = {
    if(length(agrep(str, rownames(df))) > 0) df[-agrep(str, rownames(df)), ] else df
})

##' @title Rename matches to regexp in row names of data.frame object.
##' @details Replace first match to regexp 'regex' in rows of 'df' with 'repl'.
##' @param df a data.frame object
##' @param regex a string/character object
##' @param repl a string/character object
##' @return none - side effects only
##' @author Ben Veal
##' @export
renamerows <- defmacro(df, regex, repl, expr = {
    rownames(df) <- sub(regex, repl, rownames(df))
})

##' @title Rename matches to regexp in row names of data.frame object.
##' @details Replace all matches to regexp 'regex' in rows of 'df' with 'repl'.
##' @param df a data.frame object
##' @param regex a string/character object
##' @param repl a string/character object
##' @return none - side effects only
##' @author Ben Veal
##' @export
grenamerows <- defmacro(df, regex, repl, expr = {
    rownames(df) <- gsub(regex, repl, rownames(df))
})

##' @title Macro to remove empty strings from character vector.
##' @param strvec a character vector
##' @return A character vector with empty strings removed
##' @author Ben Veal
##' @export
strRmEmpty <- defmacro(strvec, expr = {Filter(function(str) nchar(str) > 0, strvec)})

##' @title Macro to remove whitespace from strings (inside strings as well as at the ends).
##' @param str a character vector
##' @return The input strings with all whitespace removed
##' @author Ben Veal
##' @export
strRmSpace <- defmacro(str, expr = {gsub("[[:space:]]*", "", str)})
/R/rowcol_macros.R
no_license
vapniks/rowcol_macros
R
false
false
10,076
r
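Assuming the macros above have been sourced and defmacro() is available (it appears to come from the gtools package, which this file does not load itself), a few illustrative calls on the built-in mtcars data look like this:

# Illustrative use of a few of the macros on the built-in mtcars data
# (assumes the file above has been sourced and gtools::defmacro() is on the path).
library(gtools)

findcols(mtcars, "^d")        # names/numbers of columns starting with "d" (disp, drat)
selectcols(mtcars, "^d")      # the corresponding columns
removecols(mtcars, "^d")      # mtcars without those columns
selectrows(mtcars, "Merc")    # rows whose names contain "Merc"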
library(stm)

### Name: thetaPosterior
### Title: Draw from Theta Posterior
### Aliases: thetaPosterior

### ** Examples

# global approximation
draws <- thetaPosterior(gadarianFit, nsims = 100)
/data/genthat_extracted_code/stm/examples/thetaPosterior.Rd.R
no_license
surayaaramli/typeRrh
R
false
false
198
r
library(tidyr)
library(dplyr)
library(utils)

## Download the data and extract it
data_url <- "https://d396qusza40orc.cloudfront.net/dsscapstone/dataset/Coursera-SwiftKey.zip"
zip_name <- "Coursera-SwiftKey.zip"
download.file(data_url, method = "curl", destfile = zip_name)
unzip(zip_name, exdir = "./data")

path_to_blogs   <- "./data/final/en_US/en_US.blogs.txt"
path_to_news    <- "./data/final/en_US/en_US.news.txt"
path_to_twitter <- "./data/final/en_US/en_US.twitter.txt"

# con <- file(path_to_twitter, "r")
# readLines(con, 1)  ## Read the first line of text
# readLines(con, 1)  ## Read the next line of text
# readLines(con, 5)  ## Read in the next 5 lines of text
# close(con)

file.remove(zip_name)

# Q1: The en_US.blogs.txt file is how many megabytes?
# A1: ls -alh in the Coursera-Swiftkey/final/en_US directory
#
# Q2: The en_US.twitter.txt has how many lines of text?
# A2: wc -l en_US.twitter.txt in bash
#     or, in R: length(readLines("en_US.twitter.txt"))
#
# Q3: What is the length of the longest line seen in any of the three en_US data sets?
# A3: wc -L *.txt in the directory with the three files
#     Get the longest line itself with bash:
#     awk '{ if (length($0) > max) {max = length($0); maxline = $0} } END { print maxline }' YOURFILE | wc -c
#
# Q4: In the en_US twitter data set, if you divide the number of lines where the
#     word "love" (all lowercase) occurs by the number of lines the word "hate"
#     (all lowercase) occurs, about what do you get?
# A4: grep "love" en_US.twitter.txt | wc -l
#     grep "hate" en_US.twitter.txt | wc -l
#     and divide them with bc, or do the following:
#     love=$(grep "love" en_US.twitter.txt | wc -l)
#     hate=$(grep "hate" en_US.twitter.txt | wc -l)
#     let m=love/hate
#     echo $m
#
# Q5: The one tweet in the en_US twitter data set that matches the word "biostats" says what?
# A5: grep -i "biostat" en_US.twitter.txt
#
# Q6: How many tweets have the exact characters "A computer once beat me at
#     chess, but it was no match for me at kickboxing"? (I.e. the line matches
#     those characters exactly.)
# A6: grep -x "A computer once beat me at chess, but it was no match for me at kickboxing" en_US.twitter.txt | wc -l
/get_data.R
permissive
Ghost-8D/data_science_capstone
R
false
false
2,273
r
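The answers above lean on shell tools; as a hedged alternative, the sketch below shows how the same counts could be computed directly in R using the paths defined earlier (results depend on the downloaded data, so none are hard-coded here).

# Optional R equivalents of the shell commands above (not run here).
twitter <- readLines(path_to_twitter, skipNul = TRUE)

length(twitter)                                          # Q2: number of twitter lines
max(nchar(readLines(path_to_blogs, skipNul = TRUE)))     # part of Q3: longest blog line

sum(grepl("love", twitter)) / sum(grepl("hate", twitter))  # Q4: love/hate line ratio
twitter[grepl("biostats", twitter)]                      # Q5: the "biostats" tweet
sum(twitter == "A computer once beat me at chess, but it was no match for me at kickboxing")  # Q6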
\name{plot.ridge}
\alias{plot.ridge}
\alias{plot.pcaridge}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{
Bivariate Ridge Trace Plots
}
\description{
The bivariate ridge trace plot displays 2D projections of the covariance
ellipsoids for a set of ridge regression estimates indexed by a ridge tuning
constant. The centers of these ellipses show the bias induced for each
parameter, and also how the change in the ridge estimate for one parameter is
related to changes for other parameters. The size and shapes of the covariance
ellipses show directly the effect on precision of the estimates as a function
of the ridge tuning constant.
}
\usage{
\method{plot}{ridge}(x, variables = 1:2, radius = 1,
    which.lambda = 1:length(x$lambda), labels = lambda, pos = 3, cex = 1.2,
    lwd = 2, lty = 1, xlim, ylim,
    col = c("black", "red", "darkgreen", "blue", "darkcyan", "magenta",
            "brown", "darkgray"),
    center.pch = 16, center.cex = 1.5, fill = FALSE, fill.alpha = 0.3,
    ref = TRUE, ref.col = gray(.70), ...)

\method{plot}{pcaridge}(x, variables = (p-1):p, labels = NULL, ...)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{x}{A \code{ridge} object, as fit by \code{\link{ridge}}}
  \item{variables}{Predictors in the model to be displayed in the plot: an
    integer or character vector of length 2, giving the indices or names of
    the variables. Defaults to the first two predictors for \code{ridge}
    objects or the \emph{last} two dimensions for \code{pcaridge} objects.}
  \item{radius}{Radius of the ellipse-generating circle for the covariance
    ellipsoids. The default, \code{radius=1}, gives a standard \dQuote{unit}
    ellipsoid. Typically, values \code{radius<1} give less cluttered displays.}
  \item{which.lambda}{A vector of indices used to select the values of
    \code{lambda} for which ellipses are plotted. The default is to plot
    ellipses for all values of \code{lambda} in the \code{ridge} object.}
  \item{labels}{A vector of character strings or expressions used as labels
    for the ellipses. Use \code{labels=NULL} to suppress these.}
  \item{pos, cex}{Scalars or vectors of positions (relative to the ellipse
    centers) and character size used to label the ellipses}
  \item{lwd, lty}{Line width and line type for the covariance ellipsoids.
    Recycled as necessary.}
  \item{xlim, ylim}{X, Y limits for the plot, each a vector of length 2.
    If missing, the range of the covariance ellipsoids is used.}
  \item{col}{A numeric or character vector giving the colors used to plot the
    covariance ellipsoids. Recycled as necessary.}
  \item{center.pch}{Plotting character used to show the bivariate ridge
    estimates. Recycled as necessary.}
  \item{center.cex}{Size of the plotting character for the bivariate ridge
    estimates}
  \item{fill}{Logical vector: Should the covariance ellipsoids be filled?
    Recycled as necessary.}
  \item{fill.alpha}{Numeric vector: alpha transparency value(s) in the range
    (0, 1) for filled ellipsoids. Recycled as necessary.}
  \item{ref}{Logical: whether to draw horizontal and vertical reference lines
    at 0.}
  \item{ref.col}{Color of reference lines.}
  \item{\dots}{Other arguments passed down to
    \code{\link[graphics]{plot.default}}, e.g., \code{xlab}, \code{ylab},
    and other graphic parameters.}
}
%\details{
%%  ~~ If necessary, more details than the description above ~~
%}
\value{
None. Used for its side effect of plotting.
}
\references{
Friendly, M. (2013). The Generalized Ridge Trace Plot: Visualizing Bias
\emph{and} Precision.
\emph{Journal of Computational and Graphical Statistics}, \bold{22}(1), 50-68,
doi:10.1080/10618600.2012.681237,
\url{http://euclid.psych.yorku.ca/datavis/papers/genridge.pdf}
}
\author{
Michael Friendly
}
%\note{
%%  ~~further notes~~
%}
%% ~Make other sections like Warning with \section{Warning }{....} ~
\seealso{
\code{\link{ridge}} for details on ridge regression as implemented here

\code{\link{pairs.ridge}}, \code{\link{traceplot}},
\code{\link{biplot.pcaridge}} and \code{\link{plot3d.ridge}} for other
plotting methods
}
\examples{
longley.y <- longley[, "Employed"]
longley.X <- data.matrix(longley[, c(2:6, 1)])

lambda <- c(0, 0.005, 0.01, 0.02, 0.04, 0.08)
lambdaf <- c("", ".005", ".01", ".02", ".04", ".08")
lridge <- ridge(longley.y, longley.X, lambda = lambda)

op <- par(mfrow = c(2, 2), mar = c(4, 4, 1, 1) + 0.1)
for (i in 2:5) {
  plot.ridge(lridge, variables = c(1, i), radius = 0.5, cex.lab = 1.5)
  text(lridge$coef[1, 1], lridge$coef[1, i], expression(~widehat(beta)^OLS),
       cex = 1.5, pos = 4, offset = .1)
  if (i == 2) text(lridge$coef[-1, 1:2], lambdaf[-1], pos = 3, cex = 1.25)
}
par(op)

if (require("ElemStatLearn")) {
  py <- prostate[, "lpsa"]
  pX <- data.matrix(prostate[, 1:8])
  pridge <- ridge(py, pX, df = 8:1)

  plot(pridge)
  plot(pridge, fill = c(TRUE, rep(FALSE, 7)))
}
}
% Add one or more standard keywords, see file 'KEYWORDS' in the
% R documentation directory.
\keyword{hplot}
%\keyword{ ~kwd2 }% __ONLY ONE__ keyword per line
/man/plot.ridge.Rd
no_license
guhjy/genridge
R
false
false
5,168
rd
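A brief usage sketch building on the longley example in the help page above. The predictor names, the lambda subset and the label strings below are illustrative choices, not part of the original documentation; it assumes the genridge package documented above is attached.

library(genridge)
longley.y <- longley[, "Employed"]
longley.X <- data.matrix(longley[, c(2:6, 1)])
lridge <- ridge(longley.y, longley.X, lambda = c(0, 0.005, 0.01, 0.02, 0.04, 0.08))
# restrict the display to a subset of the tuning constants and fill the ellipses
plot(lridge,
     variables    = c("GNP", "Unemployed"),  # predictors can be selected by name
     which.lambda = c(1, 3, 5),              # show lambda = 0, 0.01 and 0.04 only
     radius       = 0.5,                     # smaller ellipses give a less cluttered display
     fill         = TRUE, fill.alpha = 0.2,
     labels       = c("0", ".01", ".04"))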
# setwd("D:\\work\\workspace\\edu\\edu\\data-sci\\coursera\\getdata\\011\\quizz3") if (!file.exists('data')) { dir.create('data') } fileUrl <- "https://d396qusza40orc.cloudfront.net/getdata%2Fdata%2Fss06hid.csv" download.file(fileUrl, destfile="./data/q1.csv", method="internal") data <- read.csv("./data/q1.csv") agricultureLogical <- data$ACR == 3 & data$AGS == 6 which(agricultureLogical)[1:3]
/data-sci/coursera/getdata/011/quizz3/q1_csv.R
no_license
Mr-Nancy/edu
R
false
false
401
r
# setwd("D:\\work\\workspace\\edu\\edu\\data-sci\\coursera\\getdata\\011\\quizz3") if (!file.exists('data')) { dir.create('data') } fileUrl <- "https://d396qusza40orc.cloudfront.net/getdata%2Fdata%2Fss06hid.csv" download.file(fileUrl, destfile="./data/q1.csv", method="internal") data <- read.csv("./data/q1.csv") agricultureLogical <- data$ACR == 3 & data$AGS == 6 which(agricultureLogical)[1:3]
library(assertthat) library(devtools) library(data.table) library(yaml) library(ccdata) # ccfun library library(ccfun) # Load data cc <- readRDS(file="../data/cc.RDS") data_fields <- yaml::yaml.load_file("../config/data_fields.yaml") ccfun::relabel_cols(cc, "NHICcode", "shortName", dict=data_fields) setnames(cc, "episode_id", "id") summary(cc$hrate) library(ggplot2) sample(unique(cc$id), 3) # use %in% (not ==) so that all rows belonging to the 5 sampled episodes are kept cc.plot <- cc[id %in% sample(unique(cc$id), 5) & time < 168] ggplot(data=cc.plot, aes(x=time, y=hrate, colour=id, group=id)) + geom_smooth() + geom_point() + coord_cartesian(xlim=c(0,168), ylim=c(0,200)) + guides(colour=FALSE, group=FALSE) + theme_minimal()
/src/prep.R
no_license
CC-HIC/shiny-cc
R
false
false
667
r
rm(list = ls()) source("0. functions.r") library(dplyr) #for data cleaning library(caret) ## create a full set of dummy variables (binary categorical variables) ###Missing data imputation library(mice) #for imputation #clustering package options #library(fpc) #flexible processes for clustering #library(vegan) #library(ecodist) #read in data all_fishes <- read.csv("data/fishes.csv", na.strings = c("","NA")) #row.names(fishes) <- fishes$CommonName #extract fish that occur in TBGB fishes <- all_fishes[all_fishes$Occurance_TBGB=="RAR"|all_fishes$Occurance_TBGB=="RES",] ###------------------------Discretization------------------------------### #######Changing continuous variables to nominal #tropic level fishes$trophic_bin <- cut(fishes$Trophic_level,breaks = c(-Inf,3,3.5,4,Inf), labels = c("Low", "Medium", "High", "VHigh")) #maximum depth fishes$maxDepth_bin <- cut(fishes$max_depth,breaks = c(-Inf, exp(3), exp(4), exp(5), exp(6), Inf), labels = c("Reef", "Shallow", "Ocean", "Deep", "Bathy")) #common maximum depth fishes$CommaxDepth_bin <- cut(fishes$DepthRangeComDeep_F, breaks = c(-Inf, exp(3), exp(4), exp(5), exp(6), Inf), labels = c("Reef", "Shallow", "Ocean", "Deep", "Bathy")) #maximum length fishes$maxLength_bin <- cut(fishes$maxLength, breaks = c(-Inf, exp(3), exp(4), exp(5), Inf), labels = c("Small", "Medium", "Large", "VLarge")) ##----------------- Data Checking ----------------------## #check the completenesss of the data i.e. how many missing? #>20% of cases missing don't include 116*.25 #[1] 29 ##---- split into integer and factor variables #--first factor variables is.fact <- sapply(fishes, is.factor) fishes_fact <- fishes[, is.fact] round(apply(fishes_fact, 2, count_NAs), 2) #--second numerical values fishes_num <- fishes[, !is.fact] round(apply(fishes_num, 2, count_NAs), 2) ##----------------- Selecting Variables ----------------------## ##choose variables to cluster species by ##including all variables with less than 20% missing #Raw data fish <- fishes[, c("group_name", "Scientific_name", "CommonName", "Class", "Order", "Family", #"Distribution", #"IUCN_class", "Occurance_TBGB", "Diet", "Temp_type", "Vertical_Habitat", "Horizontal_Habitat", #"Estuarine_use", "Pelvic_finPosition", "Caudal_finShape", #"Dorsal_finShape", "Swimming_mode", "Body_form", "Eye_position", "Oral_gapePosition", #"Spine", #"Colour", #"Reproductive_Season", "Reproductive_strategy", "Sexual_differentation", "Migration", "Parental_care", "Egg_attach", "Reproduction_location", "Shooling_type", "pop_double", "maxLength_bin", "trophic_bin", "CommaxDepth_bin", "maxDepth_bin")] ###----------------Missing data imputation------------------------## #Use MICE to impute missing data cluster.imp <- mice(fish[,8:29], m=5, method = 'polyreg', print = F) ClusterImp <- complete(cluster.imp) ClusterImp <- cbind(ClusterImp,fish[,1:7]) save(ClusterImp, file = "data/fish_imputed.RData")
/2.6 data_prep.R
no_license
jimjunker1/FunctionalGroupClassification
R
false
false
4,187
r
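A short follow-up sketch for the mice step above; it assumes the fish data frame and the 8:29 column range from that script. It inspects the imputation method actually assigned to each variable and stacks all five completed data sets instead of keeping only the first one.

library(mice)
cluster.imp <- mice(fish[, 8:29], m = 5, method = "polyreg", printFlag = FALSE)
cluster.imp$method                                        # method used per column
ClusterImp_long <- complete(cluster.imp, action = "long") # all 5 imputations stacked
table(ClusterImp_long$.imp)                               # rows contributed by each imputation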
library(ggplot2) # Load datasets from local folder NEI <- readRDS("/Users/Sachin/coursera/exploratorydataanalysis_project2/summarySCC_PM25.rds") SCC <- readRDS("/Users/Sachin/coursera/exploratorydataanalysis_project2/Source_Classification_Code.rds") # Sampling NEI_sampling <- NEI[sample(nrow(NEI), size=5000, replace=F), ] # Baltimore City, Maryland == fips MD <- subset(NEI, fips == 24510) MD$year <- factor(MD$year, levels=c('1999', '2002', '2005', '2008')) png(filename='/Users/Sachin/coursera/exploratorydataanalysis_project2/plot3.png') ggplot(data=MD, aes(x=year, y=log(Emissions))) + facet_grid(. ~ type) + guides(fill=F) + geom_boxplot(aes(fill=type)) + stat_boxplot(geom ='errorbar') + ylab(expression(paste('Log', ' of PM'[2.5], ' Emissions'))) + xlab('Year') + ggtitle('Emissions per Type in Baltimore City, Maryland') + geom_jitter(alpha=0.10) dev.off()
/plot3.R
no_license
sachinvraje/exploratorydataanalysis_project2
R
false
false
886
r
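An alternative way to write the figure, shown only as a sketch: build the plot object first and save it with ggsave() rather than wrapping the code in png()/dev.off(). The relative output name is an assumption; the original script writes to an absolute path.

library(ggplot2)
p <- ggplot(MD, aes(x = year, y = log(Emissions))) +
  facet_grid(. ~ type) +
  geom_boxplot(aes(fill = type)) +
  stat_boxplot(geom = "errorbar") +
  guides(fill = "none") +
  ylab(expression(paste("Log of PM"[2.5], " Emissions"))) +
  xlab("Year") +
  ggtitle("Emissions per Type in Baltimore City, Maryland")
ggsave("plot3.png", plot = p, width = 6.4, height = 4.8, dpi = 100)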
#' Create a dispersal function #' #' @description A dispersal kernel function is a mathematical representation of how species redistribute #' across the landscape. #' #' A common dispersal kernel is provided in the software for the user to select (see #' \link[steps]{exponential_dispersal_kernel}); however, a user may also provide a #' custom-written dispersal kernel. #' #' @rdname dispersal_function #' #' @param distance_decay (exponential dispersal parameter) controls the rate at which the population disperses with distance #' @param normalize (exponential dispersal parameter) should the normalising constant be used - default is FALSE. #' #' @return An object of class \code{dispersal_function} #' #' @export #' #' @examples #' #' test_dispersal_function <- exponential_dispersal_kernel() exponential_dispersal_kernel <- function (distance_decay = 0.5, normalize = FALSE) { if (normalize) { fun <- function (r) (1 / (2 * pi * distance_decay ^ 2)) * exp(-r / distance_decay) } else { fun <- function (r) exp(-r / distance_decay) } as.dispersal_function(fun) } # #' @rdname dispersal_function # #' # #' @param x an object to print or test as a dispersal_function object # #' @param ... further arguments passed to or from other methods # #' # #' @export # #' # #' @examples # #' # #' print(test_dispersal_function) # # print.dispersal_function <- function (x, ...) { # cat("This is a dispersal_function object") # } ########################## ### internal functions ### ########################## as.dispersal_function <- function (dispersal_function) { as_class(dispersal_function, "dispersal_function", "function") }
/R/dispersal_kernel_functions-class.R
no_license
qaecology/steps
R
false
false
1,658
r
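A small numerical illustration of the exponential kernel defined above; the decay value and the distances are arbitrary example inputs. It shows how the optional normalising constant 1 / (2 * pi * distance_decay^2) rescales the same exponential decay.

distance_decay <- 0.5
r <- c(0, 0.25, 0.5, 1, 2)
unnormalised <- exp(-r / distance_decay)
normalised   <- (1 / (2 * pi * distance_decay^2)) * exp(-r / distance_decay)
round(data.frame(r, unnormalised, normalised), 4)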
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/utility.R \name{filter_destatis_code} \alias{filter_destatis_code} \title{Filter data frame using the Destatis table code, discarding all columns that only consist of NA.} \usage{ filter_destatis_code(df, tablename) } \arguments{ \item{df}{Data frame with \code{tablename} column} \item{tablename}{Destatis table code} } \value{ Filtered data frame with all NA-columns dropped } \description{ Filter data frame using the Destatis table code, discarding all columns that only consist of NA. } \examples{ \dontrun{ filter_destatis_code(df = db_nrw_213, tablename = "21391KF061") } }
/man/filter_destatis_code.Rd
permissive
sjewo/RUBer
R
false
true
660
rd
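The body of filter_destatis_code() (in R/utility.R) is not part of this excerpt. The following is only a sketch of the behaviour the help page describes, i.e. keep the rows matching the Destatis table code and then drop columns that are entirely NA; the actual RUBer implementation may differ.

filter_destatis_code_sketch <- function(df, tablename) {
  out  <- df[df$tablename %in% tablename, , drop = FALSE]        # rows for this table code
  keep <- vapply(out, function(col) !all(is.na(col)), logical(1))
  out[, keep, drop = FALSE]                                      # discard all-NA columns
}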
library(tidyverse) library(tidycensus) #get tables available via api v18 <- tidycensus::load_variables(2018, "acs5") #scan census tables scan <- c() #get vars with races tables_with_races <- v18 %>% filter(nchar(name) == 11) %>% mutate(table_num = str_sub(name, 1, 6), race = str_sub(name,7,7), element = str_sub(name, 9, nchar(name))) %>% select(-name)
/make_census_race.R
no_license
labouz/census_nacs
R
false
false
386
r
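A quick check of the name-parsing logic above on a single variable code. "B01001A_001" is used here only as an example of the 11-character race-iteration form and is not taken from the script.

library(stringr)
nm <- "B01001A_001"                       # 6-character table id, race letter, "_", element number
c(table_num = str_sub(nm, 1, 6),          # "B01001"
  race      = str_sub(nm, 7, 7),          # "A"
  element   = str_sub(nm, 9, nchar(nm)))  # "001"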
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/io-wig.R \name{read_wig} \alias{read_wig} \title{Read a WIG file} \usage{ read_wig(file, genome_info = NULL, overlap_ranges = NULL) } \arguments{ \item{file}{A path to a file or a connection.} \item{genome_info}{An optional character string or a Ranges object that contains information about the genome build. For example, the UCSC identifier "hg19" will add build information to the returned GRanges.} \item{overlap_ranges}{An optional Ranges object. Only the intervals in the file that overlap the Ranges will be returned.} } \value{ A GRanges object } \description{ This is a lightweight wrapper to the import family of functions defined in \pkg{rtracklayer}. } \examples{ test_path <- system.file("tests", package = "rtracklayer") test_wig <- file.path(test_path, "step.wig") gr <- read_wig(test_wig) gr gr <- read_wig(test_wig, genome_info = "hg19") } \seealso{ \code{rtracklayer::\link[rtracklayer:WIGFile-class]{WIGFile()}} }
/man/io-wig-read.Rd
no_license
liupfskygre/plyranges
R
false
true
1,030
rd
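A short sketch of the overlap_ranges argument documented above, reusing the rtracklayer test file from the examples. The query is taken from the first intervals of a full import, so it is guaranteed to overlap something in the file.

library(plyranges)
test_path <- system.file("tests", package = "rtracklayer")
test_wig  <- file.path(test_path, "step.wig")
gr_all <- read_wig(test_wig)
query  <- gr_all[1:3]                                 # first few intervals as an overlap query
gr_sub <- read_wig(test_wig, overlap_ranges = query)  # only intervals overlapping the query
gr_sub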
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/f_disp.R \name{f_disp} \alias{f_disp} \title{Functional dispersion} \usage{ f_disp( x, trait_db = NULL, tax_lev = "Taxa", type = NULL, traitSel = FALSE, col_blocks = NULL, nbdim = 2, distance = "gower", zerodist_rm = FALSE, correction = "none", traceB = FALSE, set_param = list(max_nbdim = 15, prec = "Qt", tol = 1e-07, cor.zero = TRUE) ) } \arguments{ \item{x}{results of function \code{aggregate_taxa()}.} \item{trait_db}{a trait database. Can be a \code{data.frame} or a \code{dist} object. The taxonomic level of the trait database must match that of the taxonomic database. No automatic check is done by the \code{function}.} \item{tax_lev}{character string giving the taxonomic level used to retrieve trait information. Possible levels are \code{"Taxa"}, \code{"Species"}, \code{"Genus"}, \code{"Family"} as returned by the \link{aggregatoR} function.} \item{type}{the type of variables specified in \code{trait_db}. Must be one of \code{F}, fuzzy, or \code{C}, continuous. If more control is needed, please consider providing \code{trait_db} as a \code{dist} object. It works only when \code{trait_db} is a \code{data.frame}, otherwise ignored.} \item{traitSel}{interactively select traits.} \item{col_blocks}{A vector that contains the number of modalities for each trait. Not needed when \code{euclidean} distance is used.} \item{nbdim}{number of dimensions for the multidimensional functional spaces. We suggest keeping \code{nbdim} as low as possible. By default \code{biomonitoR} sets the number of dimensions to 2. Select \code{auto} if you want the automated selection approach according to Maire et al. (2015).} \item{distance}{to be used to compute functional distances, \code{euclidean} or \code{gower}. Defaults to \code{gower}.} \item{zerodist_rm}{If \code{TRUE} aggregates taxa with the same traits.} \item{correction}{Correction methods for negative eigenvalues, can be one of \code{none}, \code{lingoes}, \code{cailliez}, \code{sqrt} and \code{quasi}. Ignored when type is set to \code{C}.} \item{traceB}{if \code{TRUE} \code{f_disp()} will return a list as specified in details.} \item{set_param}{a list of parameters for fine-tuning the calculations. \code{max_nbdim} sets the maximum number of dimensions for evaluating the quality of the functional space. \code{prec} can be \code{Qt} or \code{QJ}; please refer to the \code{convhulln} documentation for more information. Defaults to \code{QJ}, less accurate but less prone to errors. \code{tol} a tolerance threshold for zero, see the functions \code{is.euclid}, \code{lingoes} and \code{cailliez} from \code{ade4} for more details. Defaults to 1e-07. \code{cor.zero}: if \code{TRUE}, zero distances are not modified. See the functions \code{is.euclid}, \code{lingoes} and \code{cailliez} from \code{ade4} for more details. Defaults to \code{TRUE}.} } \value{ a vector with functional dispersion results. When \code{traceB} is \code{TRUE}, a list with the following elements: \enumerate{ \item \strong{results}: results of \code{f_disp()}; \item \strong{traits}: a data.frame containing the traits used for the calculations; \item \strong{taxa}: a data.frame containing the taxa used for the calculations; \item \strong{nbdim}: number of dimensions used after calculating the quality of functional spaces according to Maire et al. (2015); \item \strong{correction}: the type of correction used. \item \strong{NA_detection}: a data.frame containing taxa in the first column and the corresponding traits with NAs in the second column. 
\item \strong{duplicated_traits}: if present, list the taxa with the same traits. } } \description{ \Sexpr[results=rd, stage=render]{ lifecycle::badge("maturing") } This function calculates the functional dispersion. } \examples{ data(macro_ex) data_bio <- as_biomonitor(macro_ex) data_agr <- aggregate_taxa(data_bio) data_ts <- assign_traits(data_agr) # averaging data_ts_av <- average_traits(data_ts) col_blocks <- c(8, 7, 3, 9, 4, 3, 6, 2, 5, 3, 9, 8, 8, 5, 7, 5, 4, 4, 2, 3, 8) f_disp(data_agr, trait_db = data_ts_av, type = "F", col_blocks = col_blocks) f_disp(data_agr, trait_db = data_ts_av, type = "F", col_blocks = col_blocks, nbdim = 10, correction = "cailliez" ) library(ade4) rownames(data_ts_av) <- data_ts_av$Taxa traits_prep <- prep.fuzzy(data_ts_av[, -1], col.blocks = col_blocks) traits_dist <- ktab.list.df(list(traits_prep)) traits_dist <- dist.ktab(traits_dist, type = "F") f_disp(data_agr, trait_db = traits_dist) } \references{ Barber, C. B., Dobkin, D. P., & Huhdanpaa, H. (1996). The quickhull algorithm for convex hulls. ACM Transactions on Mathematical Software (TOMS), 22(4), 469-483. Cornwell, W. K., Schwilk, D. W., & Ackerly, D. D. (2006). A trait-based test for habitat filtering: convex hull volume. Ecology, 87(6), 1465-1471 Maire, E., Grenouillet, G., Brosse, S., & Villeger, S. (2015). How many dimensions are needed to accurately assess functional diversity? A pragmatic approach for assessing the quality of functional spaces. Global Ecology and Biogeography, 24(6), 728-740. Mason, N. W., Mouillot, D., Lee, W. G., and Wilson, J. B. (2005). Functional richness, functional evenness and functional divergence: the primary components of functional diversity. Oikos, 111(1), 112-118. Villeger, S., Mason, N. W., & Mouillot, D. (2008). New multidimensional functional diversity indices for a multifaceted framework in functional ecology. Ecology, 89(8), 2290-2301. } \seealso{ \link{aggregate_taxa} }
/man/f_disp.Rd
no_license
TommasoCanc/biomonitoR
R
false
true
5,501
rd
#This is the code for estimation using maximum resolution (pixel analysis) #The code here produces Figures 2,3 and the global test of all coefficients equal to zero using the random effects model conducted in section 4.5.1 #In addition, the code produces Figures 1-4 in the Web appendix ################################################################################################ #Two data sets are used here: #1. locations_data.CSV: A data set of RAC locations. #The first three columns are used. #The first column indicates the shoe number, #the second indicates the x axis of the RAC location #the third indicates the Y axis of the RAC location. #2. contacts_data.txt: A data set of the contact surface #This is a pixel data where 1 indicates there is a contact surface and 0 otherwise #There are 307 columns and 395 rows in each shoe #The number of shoes is 387 but 386 is the number of shoes with RACs - shoe 127 has no RACS ################################################################################################ setwd("C:\\Users\\kapna\\Dropbox\\naomi-micha\\shoe_data\\Codes for JASA") ################### set.seed(313) #install.packages("splines") #install.packages("lme4") #install.packages("rgl") #install.packages("fields") #install.packages("survival") #install.packages("smoothie") #install.packages("car") #install.packages("ggplot2") #install.packages("reshape2") library(splines) library(lme4) library(rgl) library(fields) library(survival) library(smoothie) library(car) library(ggplot2) library(reshape2) ################## col_shoe<-307 #307 is the number of columns in each shoe row_shoe<-395 #395 is the number of rows in each shoe num_shoe<-387 #387 is the number of shoes but 386 is the number of shoes with RACs - shoe 127 has no RACS rel_col_shoe<-150 #out of the 307 columns only 150 are relevant (contain non zero pixels in some shoes) rel_row_shoe<-300 #out of the 395 rows only 300 are relevant (contain non zero pixels in some shoes) rel_x_cord<-0.25 #using coordinates as in the locations_data.CSV file the relevant x coordinates are between -.25 and 0.25 rel_Y_cord<-0.5 #the relevant Y coordinates are between -0.5 and 0.5 #The following two functions convert the x and Y coordinates of the location of a RAC to the X and Y pixels ################################################################################################################## # aspix_x converts the x coordinate to the x pixel # INPUT: # ====== # x - the x coordinate # col_shoe - the number of columns in each shoe # rel_col_shoe -the number of relevant columns #out of the 307 columns only 150 are relevant (contain non zero pixels in some shoes) # rel_x_cord - the relevant coordintes #(using coordinates as in the locations_data.CSV file. 
The relevant x coordinates are between -.25 and 0.25) ################################################################################################################## aspix_x <-function(x,col_shoe=307,rel_col_shoe=150,rel_x_cord=0.25) { not_rel_col<-ceiling((col_shoe - rel_col_shoe)/2) delx <- (2*rel_x_cord)/rel_col_shoe pix_x <- col_shoe-(floor((x+rel_x_cord)/delx)+not_rel_col) #The plus rel_x_cord is because it is --rel_x_cord (the x starts from -rel_x_cord) return(pix_x) } ################################################################################################################## # aspix_y converts the Y coordinate to the Y pixel # INPUT: # ====== # y - the y coordinate # row_shoe - the number of rows in each shoe # rel_row_shoe -the number of relevant rows #out of the 395 rows only 300 are relevant (contain non zero pixels in some shoes) # rel_Y_cord - the relevant coordintes #(using coordinates as in the locations_data.CSV file. the relevant Y coordinates are between -0.5 and 0.5) ################################################################################################################## aspix_y<-function(y,row_shoe=395,rel_row_shoe=300,rel_Y_cord=0.5) { not_rel_row<-ceiling((row_shoe-rel_row_shoe)/2) dely <- (2*rel_Y_cord)/rel_row_shoe pix_y <- row_shoe-(floor((y+rel_Y_cord)/dely)+not_rel_row) # The plus rel_Y_cord is because it is --rel_Y_cord (the y starts from -0.5) return(pix_y) } ############################# #organizing the contacts_data ############################# #We are importing the contacts_data as character and creating a list of contact shoe matrices d <- readChar("contacts_data.txt",nchars=(col_shoe*row_shoe+2)*num_shoe) data <- list() for(i in 1:num_shoe) { data[[i]] <- matrix(as.numeric(unlist(strsplit(substr(d, 1+(col_shoe*row_shoe+2)*(i-1), (col_shoe*row_shoe+2)*i-2), split="")) ),row_shoe,col_shoe,byrow=1) } #Shoe 9 should be mirrored as all other shoes shoe9rev <- data[[9]] #(compare image(data[[8]]) and image(data[[9]])) data[[9]] <- data[[9]][,ncol(data[[9]]):1] #########cleaning the data set########################################### #There are identifying stamps the police put on each shoeprint #These are not part of the shoe's contact surface and thus are omitted #The first stage in cleaning the stamps was to try to separate them from the actual contact surface #We try to find the lower bound of the cumulative contact surfce to separate the stamps from the actual contact surface #we found that if we look only at the contact surface that appeared in more than 8 shoes it provided a relatively good separation allcont <- data[[1]] for(i in 2:num_shoe) { allcont <- allcont+data[[i]] #this is the contact of all shoes } allcont <- (allcont>=8)*1 #here we see pixels that appear in more than 8 shoes #Removing the stamps #finding the lower bound of the contact surface h_width<-floor(row_shoe/2) #this is half the width of the shoe lb<- rep(NA,h_width) j<-1 while(allcont[h_width,j]==0) j<-j+1 lb[1] <- j-1 for(i in 2:h_width) { j<- lb[i-1] if(allcont[h_width-i+1,j]==0) { while((allcont[h_width-i+1,j]==0)&&j<rel_row_shoe) j <- j+1 lb[i] <- j-1 }else{ while((allcont[h_width-i+1,j]==1)&&j>0) j <- j-1 lb[i] <- j } } for(i in 1:h_width) allcont[h_width-i+1,1:lb[i]] <- 0 #removing the lower stamp #the upper bound of the contact surface ub<- rep(NA,h_width) j<-col_shoe while(allcont[h_width,j]==0) j<-j-1 ub[1] <- j+1 for(i in 2:h_width) { j<- ub[i-1] if(allcont[h_width-i+1,j]==0) { while((allcont[h_width-i+1,j]==0)&&j>0) j <- j-1 ub[i] <- j+1 }else{ 
while((allcont[h_width-i+1,j]==1)&&j<rel_row_shoe) j <- j+1 ub[i] <- j } } for(i in 1:h_width) allcont[h_width-i+1,ub[i]:col_shoe] <- 0 #removing the upper stamp for(i in 1:num_shoe) { data[[i]] <- data[[i]]*allcont } ###################Working with the locations data############## acciden<-read.csv("locations_data.CSV",header=TRUE) acci <- list() delx <- 2*rel_x_cord/rel_col_shoe dely <- 2*rel_Y_cord/rel_row_shoe for (i in (c(1:126,128:num_shoe)) )#shoe 127 doesn't have RACs { acci[[i]] <- matrix(0,row_shoe,col_shoe) locations <- cbind(acciden$x[acciden$shoe==i],acciden$y[acciden$shoe==i]) # the coordinates of the RAC for(j in 1:nrow(locations)) { xpix <- aspix_x(locations[j,1]) ypix<-aspix_y(locations[j,2]) acci[[i]][ypix,xpix] <- acci[[i]][ypix,xpix]+1 #if there is more than one RAC (accidental) in a pixel we will count it as well } } ###RACs can be observed only on the contact surface, but as we show below, the data has RACs where there is no contact surface m <- rep(NA,num_shoe) for(i in (c(1:126,128:num_shoe))) { m[i] <- min(data[[i]][acci[[i]]>=1]) # checking to see if there are RACs where there is no contact surface } # 0 means that there is at least one RAC that is not on the contact surface # As noted in Section 4. When RACs are created they may tear the shoe sole such that the location of the RAC appears to be on an area with #no contact surface and thus the value of the contact surface is set to 1 in all cases where there are RACs data_temp <- list() # a "solution", add contact surface where there is a RAC. for(i in (c(1:126,128:387))) { data_temp[[i]] <- data[[i]] data_temp[[i]][acci[[i]]>=1] <- 1 } data_pix<-list() # each data_pix[[i]] is a matrix with column 1 indicating the shoe, 2 the x, 3 the y, 4 the amount of RACs in that pixel # we include only data where there is contact surface (after adjusting for the case that if there is a RAC there will be contact surface) for(i in (c(1:126,128:num_shoe))) { xcoor <- t(matrix(rep((-col_shoe/2+1:col_shoe)*rel_Y_cord/rel_col_shoe,row_shoe),col_shoe,row_shoe)) ycoor <- -matrix(rep((-row_shoe/2+1:row_shoe)*rel_Y_cord/rel_col_shoe,col_shoe),row_shoe,col_shoe) shoe<-rep(i,length(data_temp[[i]][data_temp[[i]]==1])) data_pix[[i]]<-cbind(shoe,xcoor[data_temp[[i]]==1],ycoor[data_temp[[i]]==1],acci[[i]][data_temp[[i]]==1])# the data is only where there is contact surface } data_pix_use<-numeric() for (i in (c(1:126,128:num_shoe))) { data_pix_use<-rbind(data_pix_use,data_pix[[i]]) } #As noted in Section 4 of the article, the number of RACS is set to 1 in 38 cases where there are 2 RACs. #Appearance of two RACs in the same pixel may be due to the way the data were pre-processed and the location was defined. n_Acc<-data_pix_use[,4] #data_pix_use[n_Acc==2,] -> These are the 38 pixels with 2 RACs n_Acc[n_Acc>=1] <-1 # more than one RAC in a shoe is considered as 1 x<- data_pix_use[,2] y<- data_pix_use[,3] shoe<-as.factor(data_pix_use[,1]) #it should be noted that as factor changes the numbering #since shoe 127 doesn't exist, as factor makes the numbering of shoes 128 to 387 to decrease by 1. (shoe 128 is now 127 etc.) 
mydata <- data.frame(cbind(n_Acc, x, y,shoe)) #This is the data that will be used for(j in 1:nrow(locations)) { xpix <-aspix_x(locations[j,1]) ypix <- aspix_y(locations[j,2]) acci[[i]][ypix,xpix] <- acci[[i]][ypix,xpix]+1 #if there is more than one RAC in a pixel we will count it as well } sumacci <- acci[[1]] for(i in c(2:126,128:387)) { sumacci <- sumacci+acci[[i]] } sumcont <- data[[1]] for(i in c(2:126,128:387)) { sumcont <- sumcont+data[[i]] } ###############creating case control data########################################################################## # As noted in Section 4.4, estimating the intensity function at a high resolution is computationally challenging #and thus case-control sub-sampling techniques are used #The calculations were based on within-cluster case-control sub-sampling, #which includes all cases (pixels with RACs, nij = 1) and 20 random controls (pixels without RACs, nij = 0) from each shoe dataCC <- numeric() for(i in 1:length(unique(shoe))) { case <- mydata[mydata$shoe==i&mydata$n_Acc>0,] control <- mydata[mydata$shoe==i&mydata$n_Acc==0,] control <- control[sample(nrow(control),size=20,replace=FALSE),] dataCC <- rbind(dataCC,case,control) } ################################################################################################################## ################################################################################################################## # The naive smooth estimator used on the basis of the entire data, not the case control #A uniform kernel is used #where each entry of the smoothed matrix is calculated as the average of its 21^2 neighbor entries in the original matrix. # INPUT: # ====== #cumRAC is the cumulative matrix of RAC locations of all shoes #cumContact is the cumulative matrix of all contact surfaces of all shoes #areaShoe is the area of the of the shoes which defines the contour of all shoes #In our case is all pixels that appear in more than 8 shoes ################################################################################################################## Naive<-function(cumRAC=sumacci,cumContact=sumcont,areaShoe=allcont) { Naivemat<- cumRAC/cumContact Naivemat[areaShoe==0] <- NA est <- kernel2dsmooth(Naivemat, kernel.type="boxcar",n=21) est[areaShoe==0] <- NA return(est) } naive_smooth<-Naive() image.plot(naive_smooth,axes=FALSE) #The random effects and the CML estimates were calculated using a product of natural cubic splines #Three knots for the X-axis and five knots for the Y-axis were used and their positions were set according to equal quantiles. 
#These numbers of knots enabled flexibility and still avoided computational problems ################################################################################################################## # The random effects estimator # INPUT: # ====== #nknotsx the number of x knots using a product of natural cubic splines #nknotsy the number of y knots using a product of natural cubic splines #dat is the data used for estimation, we are using here the case control data ################################################################################################################## ####The random effects estimator Random<-function(nknotsx=3,nknotsy=5,dat=dataCC) { knotsx <- as.numeric(quantile(dat$x,1:nknotsx/(1+nknotsx))) knotsy <-as.numeric(quantile(dat$y,1:nknotsy/(1+nknotsy))) shoe<-dat$shoe est<- glmer(dat$n_Acc ~ ns(dat$x,knots=knotsx):ns(dat$y,knots=knotsy)+(1 | shoe) , data= dat , family=binomial(link="logit"),control = glmerControl(optimizer = "bobyqa")) return(est) } rand<-Random() #plot of the random effects estimator nknotsx <- 3 nknotsy <- 5 knotsx <- as.numeric(quantile(dataCC$x,1:nknotsx/(1+nknotsx))) knotsy <-as.numeric(quantile(dataCC$y,1:nknotsy/(1+nknotsy))) basx <- ns(dataCC$x,knots=knotsx) basy <- ns(dataCC$y,knots=knotsy) xy <- expand.grid(xcoor[1,],ycoor[,1]) newdesignmat <- rep(1,length(xy[,1])) for(i in 1:length(predict(basy,1))) { for(j in 1:length(predict(basx,1))) { newdesignmat <- cbind(newdesignmat,predict(basx, xy[,1])[,j]*predict(basy, xy[,2])[,i]) } } pred.case_control <- newdesignmat%*%fixef(rand)+log(0.005) #log(0.005) is the offset pred.case_control[t(allcont)==0] <- NA #areas out of the contour (less than 8 shoes has contact surface in these pixels) are given NA prob.pred <- exp(matrix(pred.case_control ,row_shoe,col_shoe,byrow=1))/(1+exp(matrix(pred.case_control ,row_shoe,col_shoe,byrow=1))) intens <- -log(1-prob.pred) #turning it to intensity image.plot(intens,axes=FALSE) m<-mean(pred.case_control,na.rm=TRUE) #for use in the CML ################################################################################################################## # The CML estimator # INPUT: # ====== #nknotsx the number of x knots using a product of natural cubic splines #nknotsy the number of y knots using a product of natural cubic splines #dat is the data used for estimation, we are using here the case control data ################################################################################################################## CML<-function(nknotsx=3,nknotsy=5,dat=dataCC) { knotsx <- as.numeric(quantile(dat$x,1:nknotsx/(1+nknotsx))) knotsy <-as.numeric(quantile(dat$y,1:nknotsy/(1+nknotsy))) shoe<-dat$shoe est<- clogit(dat$n_Acc~ ns(dat$x,knots=knotsx):ns(dat$y,knots=knotsy)+strata(shoe) , data=dat) return(est) } cml<-CML() #plot cml newdesignmat1 <- rep(1,length(xy[,1])) for(i in 1:length(predict(basy,1))) { for(j in 1:length(predict(basx,1))) { newdesignmat1 <- cbind(newdesignmat1,predict(basx, xy[,1])[,j]*predict(basy, xy[,2])[,i]) } } pred.cml.bin.case_control <- newdesignmat1%*%c(0,coefficients(cml)) # the intercept can't be estimated since it cancels pred.cml.bin.case_control[t(allcont)==0] <- NA #areas out of the contour (less than 8 shoes have contact surface in these pixels) are given NA m_1<-mean(pred.cml.bin.case_control,na.rm=TRUE) pred.cml.bin.case_control<-pred.cml.bin.case_control-m_1+m #making the means of random and cml to be equal prob.pred_cml <- exp(matrix(pred.cml.bin.case_control ,row_shoe,col_shoe,byrow=1))/(1+exp(matrix(pred.cml.bin.case_control 
,row_shoe,col_shoe,byrow=1))) intens.pred_cml <- -log(1-prob.pred_cml) image.plot(intens.pred_cml,axes=FALSE) #notice that these probabilities depend on the intercept which is not included since it cancels. # Figure 2: the 3 estimators intensities on the same scale sub <- 70 cols <- sub:(col_shoe-sub) #we multiply CML and random so they will be on the same scale com_3_est<-cbind(naive_smooth[,cols],exp(-0.9915/2)*intens[,cols],exp(-0.9915/2)*intens.pred_cml[,cols]) #0.9915 is sigma^2 of the random effect. e^(sigma^2/2) is the expectation of a log linear variable lognormal(0,sigma^2). This is the expectation of the random. image.plot(t(com_3_est[nrow(com_3_est):1,]),axes=FALSE,xlab='Naive,Random,CML') pdf(file ="pixel_inten_JASAup.pdf", height=6, width=6) image.plot(t(com_3_est[nrow(com_3_est):1,]),axes=FALSE,xlab='Naive Random CML') dev.off() ## hypothesis testing## (Section 4.5.1) co <- fixef(rand) vc <- vcov(rand) matr <- diag(length(co))[-1,] testing<-linearHypothesis(rand,hypothesis.matrix=matr,rhs=rep(0,length(co)-1),test=c("Chisq", "F"),vcov.=vc,coef.=co) testing$`Pr(>Chisq)` #pvalue is approximately zero #confidence intervals - Figure 3 ################################################################################################################## # CML confidence interval # INPUT: # ====== #dat - the data used for estimation, we are using here the case control data # col_shoe - the number of columns in each shoe # row_shoe - the number of rows in each shoe # rel_Y_cord - the relevant coordinates #(using coordinates as in the locations_data.CSV file. the relevant Y coordinates are between -0.5 and 0.5) # rel_col_shoe -the number of relevant columns #out of the 307 columns only 150 are relevant (contain non zero pixels in some shoes) #nknotsx the number of x knots using a product of natural cubic splines #nknotsy the number of y knots using a product of natural cubic splines ################################################################################################################## CI_cml <- function(dat=dataCC,col_shoe=307,row_shoe=395,rel_Y_cord=0.5,rel_col_shoe=150,nknotsx=3,nknotsy=5) { xcoor <- t(matrix(rep((-col_shoe/2+1:col_shoe)*rel_Y_cord/rel_col_shoe,row_shoe),col_shoe,row_shoe)) ycoor <- -matrix(rep((-row_shoe/2+1:row_shoe)*rel_Y_cord/rel_col_shoe,col_shoe),row_shoe,col_shoe) cml_bin_fit<-CML(dat=dat,nknotsx=nknotsx,nknotsy=nknotsy) rand<-Random(dat=dat,nknotsx=nknotsx,nknotsy=nknotsy) knotsx <- as.numeric(quantile(dat$x,1:nknotsx/(1+nknotsx))) knotsy <-as.numeric(quantile(dat$y,1:nknotsy/(1+nknotsy))) basx <- ns(dat$x,knots=knotsx) basy <- ns(dat$y,knots=knotsy) xy <- expand.grid(xcoor[1,],ycoor[,1]) newdesignmat <- rep(1,length(xy[,1])) for(i in 1:length(predict(basy,1))) { for(j in 1:length(predict(basx,1))) { newdesignmat <- cbind(newdesignmat,predict(basx, xy[,1])[,j]*predict(basy, xy[,2])[,i]) } } pred.case_control_r <- newdesignmat%*%fixef(rand)+log(0.005) #log(0.005) is the offset pred.case_control_r[t(allcont)==0] <- NA #areas out of the contour (less than 8 shoes has contact surface in these pixels) are given NA prob.pred <- exp(matrix(pred.case_control_r ,row_shoe,col_shoe,byrow=1))/(1+exp(matrix(pred.case_control_r ,row_shoe,col_shoe,byrow=1))) m<-mean(pred.case_control_r,na.rm=TRUE) cov_mat<-matrix(as.numeric(vcov(cml_bin_fit)),length(coefficients(cml_bin_fit))) avg <- newdesignmat%*%c(0,coefficients(cml_bin_fit)) avg[t(allcont)==0] <- NA #areas out of the contour (less than 8 shoes has contact surface in these pixels) are given NA m_1 <- 
mean(avg,,na.rm=TRUE) avg<- avg-m_1+m newdesignmat<-newdesignmat[,-1] std <- as.matrix(sqrt(rowSums((newdesignmat%*%cov_mat)*newdesignmat))) ex <- exp(avg%*%c(1,1,1)+1.96*std%*%c(-1,0,1)) pr <- ex/(1+ex) inte <- -log(1-pr) for(i in 1:3) inte[as.vector(t(allcont)==0),i] <- NA ret <- list(matrix(inte[,1],row_shoe,col_shoe,byrow=1),matrix(inte[,2],row_shoe,col_shoe,byrow=1),matrix(inte[,3],row_shoe,col_shoe,byrow=1)) names(ret) <- c("Low","Mid","High") return(ret) } CI <- CI_cml() ################################################################################################################## # Random confidence interval # INPUT: # ====== #dat - the data used for estimation, we are using here the case control data # col_shoe - the number of columns in each shoe # row_shoe - the number of rows in each shoe # rel_Y_cord - the relevant coordinates #(using coordinates as in the locations_data.CSV file. the relevant Y coordinates are between -0.5 and 0.5) # rel_col_shoe -the number of relevant columns #out of the 307 columns only 150 are relevant (contain non zero pixels in some shoes) #nknotsx the number of x knots using a product of natural cubic splines #nknotsy the number of y knots using a product of natural cubic splines ################################################################################################################## CI_random <- function(dat=dataCC,col_shoe=307,row_shoe=395,rel_Y_cord=0.5,rel_col_shoe=150,nknotsx=3,nknotsy=5) { xcoor <- t(matrix(rep((-col_shoe/2+1:col_shoe)*rel_Y_cord/rel_col_shoe,row_shoe),col_shoe,row_shoe)) ycoor <- -matrix(rep((-row_shoe/2+1:row_shoe)*rel_Y_cord/rel_col_shoe,col_shoe),row_shoe,col_shoe) fit.rand<-Random(dat=dat,nknotsx=nknotsx,nknotsy=nknotsy) cov_mat<-matrix(as.numeric(vcov(fit.rand)),length(fixef(fit.rand))) knotsx <- as.numeric(quantile(dat$x,1:nknotsx/(1+nknotsx))) knotsy <-as.numeric(quantile(dat$y,1:nknotsy/(1+nknotsy))) basx <- ns(dat$x,knots=knotsx) basy <- ns(dat$y,knots=knotsy) xy <- expand.grid(xcoor[1,],ycoor[,1]) newdesignmat <- rep(1,length(xy[,1])) for(i in 1:length(predict(basy,1))) { for(j in 1:length(predict(basx,1))) { newdesignmat <- cbind(newdesignmat,predict(basx, xy[,1])[,j]*predict(basy, xy[,2])[,i]) } } avg <- newdesignmat%*%fixef(fit.rand)+log(0.005) std <- as.matrix(sqrt(rowSums((newdesignmat%*%cov_mat)*newdesignmat))) ex <- exp(avg%*%c(1,1,1)+1.96*std%*%c(-1,0,1)) pr <- ex/(1+ex) inte <- -log(1-pr) for(i in 1:3) inte[as.vector(t(allcont)==0),i] <- NA ret <- list(matrix(inte[,1],row_shoe,col_shoe,byrow=1),matrix(inte[,2],row_shoe,col_shoe,byrow=1),matrix(inte[,3],row_shoe,col_shoe,byrow=1)) names(ret) <- c("Low","Mid","High") return(ret) } CI_r <- CI_random() #As shown in Figure 3 the confidence interval is calculated for 3 cut points sho <- CI$Mid*exp(-0.9915/2) #As done in the presentation of the estimators we multiply so they will be on the same scale sho[110,] <- 0 sho[190,] <- 0 sho[250,] <- 0 #The first part of figure 3 image.plot(sho,axes=FALSE) #intervals for the cut points: cut1<-110 cut2<-190 cut3<-250 ################################################################################################################## # CI_cut creates an image of the confidence interval of the random effects estimator and the CML estimator #for a specific row of the shoe # INPUT: # ====== #cut - the specific row of the shoe on the basis of which the CI would be calculated #ran - the range of the columns on the basis of which the CI would be calculated 
################################################################################################################## range<-c(1:col_shoe) CI_cut<-function(cut=110,ran=range) { CI_plot_cml<-data.frame(range,CI$Low[cut,]*exp(-0.9915/2),CI$Mid[cut,]*exp(-0.9915/2),CI$High[cut,]*exp(-0.9915/2),rep("CML",length(range))) #As done in the presentation of the estimators we multiply so they will be on the same scale CI_plot_rand<-data.frame(range,CI_r$Low[cut,]*exp(-0.9915/2),CI_r$Mid[cut,]*exp(-0.9915/2),CI_r$High[cut,]*exp(-0.9915/2),rep("Random",length(range))) names(CI_plot_cml)<-c("x","Low_CI","Mid_CI","High_CI","Estimator") names(CI_plot_rand)<-c("x","Low_CI","Mid_CI","High_CI","Estimator") CI_plot<-rbind(CI_plot_cml,CI_plot_rand) CI_cut<-ggplot(CI_plot) + geom_line(aes(x=x,y=Low_CI,colour=Estimator)) + geom_line(aes(x=x,y=High_CI,colour=Estimator)) + geom_ribbon(aes(x=x,ymin=Low_CI,ymax=High_CI,fill=Estimator),alpha=0.5) + geom_line(aes(x=x,y=Mid_CI,colour=Estimator),size=1) + theme_bw() + theme(plot.title = element_text(color="black", size=14, face="bold")) + scale_fill_brewer(palette="Set1") + scale_color_brewer(palette="Set1") + ylab("Probability") + labs(colour="Estimator",fill="Estimator") + coord_cartesian(xlim = c(100, 220),ylim=c(0.001,0.0082)) + theme(axis.title.x = element_blank(),axis.text.x = element_blank()) return(CI_cut) } CI_cut1<-CI_cut(cut=110,ran=range) CI_cut2<-CI_cut(cut=190,ran=range) CI_cut3<-CI_cut(cut=250,ran=range) # Producing Figures 1-4 in the Web appendix #Figure 1 in the Web appendix qplot(as.vector(table(acciden$shoe)), geom="histogram",bins=60,alpha=I(.5),col=I("black"))+xlab("RACs")+ylab("count") pdf(file ="hist_racs_JASAup.pdf", height=6, width=6) qplot(as.vector(table(acciden$shoe)), geom="histogram",bins=60,alpha=I(.5),col=I("black"))+xlab("RACs")+ylab("count") dev.off() min_num_RACs<-min(table(acciden$shoe)) max_num_RACs<-max(table(acciden$shoe)) mean_num_RACs<-mean(table(acciden$shoe)) #Figure 2 in the Web appendix mat_shoe_acc<- as.matrix(table(n_Acc,shoe)) #matrix of shoes, number of pixels with zero and number of pixels with 1 cont_pix<- as.vector(mat_shoe_acc[1,]+mat_shoe_acc[2,]) # The number of pixels with contact surface is the sum of pixels with zero and with one (pixels with no contact surface are not part of the data) qplot(cont_pix, geom="histogram",bins=20,alpha=I(.5),col=I("black"))+xlab("Contact surface (number of pixels)")+ylab("count") pdf(file ="hist_npix_JASAup.pdf", height=6, width=6) qplot(cont_pix, geom="histogram",bins=20,alpha=I(.5),col=I("black"))+xlab("Contact surface (number of pixels)")+ylab("count") dev.off() min_num_pix<-min(cont_pix) max_num_pix<-max(cont_pix) mean_num_pix<-mean(cont_pix) #Figure 3 in the Web appendix tmp <- sumcont tmp[allcont==0]<-NA image.plot(t(tmp[nrow(tmp):1,]),axes=FALSE,xlab = 'Cumulative Contact Surface') #image of cumulative contact surface pdf(file ="cum_contact_JASA.pdf", height=6, width=6) image.plot(t(tmp[nrow(tmp):1,]),axes=FALSE,xlab = 'Cumulative Contact Surface') #image of cumulative contact surface dev.off() #Figure 4 in the Web appendix dat<-data.frame(cbind(cont_pix,mat_shoe_acc[2,])) p1 <- ggplot( dat,aes(x = cont_pix, y = mat_shoe_acc[2,])) p2<- p1 + geom_point(color="dark grey")+geom_smooth(method = "lm", se = FALSE,color=" black") + labs(x="Contact surface (number of pixels)", y = "number of RACs") m <- lm(dat$V2 ~ dat$cont_pix) a <- signif(coef(m)[1], digits = 4) b <- signif(coef(m)[2], digits = 2) textlab <- paste("y = ",b,"x + ",a, sep="") R_spea<-cor(dat$cont_pix, y = dat$V2, method 
= "spearman") p3<- p2 + geom_text(aes(x = 15400, y = 210, label = textlab), color="black", size=6, parse = FALSE) p3 +annotate("text", x = 16000, y = 190, label = "spearman's r = 0.1161",color="black", size=6,fontface =2) pdf(file ="scat_contact_rac_JASA.pdf", height=6, width=6) p3<- p2 + geom_text(aes(x = 15400, y = 210, label = textlab), color="black", size=6, parse = FALSE) p3 +annotate("text", x = 16000, y = 190, label = "spearman's r = 0.1161",color="black", size=6,fontface =2) dev.off()
/Code/pixel analysis.R
no_license
naomikap/rac-intensity
R
false
false
27,406
r
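Two small facts carry much of the reasoning in the comments of the pixel-analysis script above: fitted logistic probabilities are turned into per-pixel intensities via -log(1 - p), and the random-effects and CML surfaces are rescaled by exp(-0.9915/2) because exp(sigma^2/2) is the expectation of exp(b) when the random intercept b ~ N(0, sigma^2). The following is a self-contained numerical sketch of both points, not part of the original script; sigma^2 = 0.9915 is the value quoted in the comments.
# Sketch only: checks the rescaling factor and the probability-to-intensity
# conversion used above; nothing here touches the shoe data.
set.seed(313)
sigma2 <- 0.9915                    # random-effect variance quoted in the comments above
b <- rnorm(1e6, mean = 0, sd = sqrt(sigma2))
mean(exp(b))                        # Monte Carlo estimate of E[exp(b)]
exp(sigma2 / 2)                     # theoretical value (~1.64), hence the exp(-sigma2/2) rescaling

eta <- -5.2                         # an arbitrary linear-predictor value for illustration
p <- plogis(eta)                    # logistic probability
intensity <- -log(1 - p)            # per-pixel intensity, as in the maps above
c(probability = p, intensity = intensity)   # nearly equal when p is small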
library(dashboardthemes) customTheme <- shinyDashboardThemeDIY( ### general appFontFamily = "Arial" ,appFontColor = "black" #"grey" ,primaryFontColor = "grey" ,infoFontColor = "rgb(0,0,0)" ,successFontColor = "rgb(0,0,0)" ,warningFontColor = "rgb(0,0,0)" ,dangerFontColor = "rgb(0,0,0)" ,bodyBackColor = "rgb(248,248,248)" ### header ,logoBackColor = "rgb(0,0,0)" ,headerButtonBackColor = "rgb(0,0,0)" ,headerButtonIconColor = "rgb(75,75,75)" ,headerButtonBackColorHover = "rgb(210,210,210)" ,headerButtonIconColorHover = "rgb(0,0,0)" ,headerBackColor = "rgb(0,0,0)" ,headerBoxShadowColor = "#aaaaaa" ,headerBoxShadowSize = "2px 2px 2px" ### sidebar # ,sidebarBackColor = cssGradientThreeColors( # direction = "down" # ,colorStart = "rgb(20,97,117)" # ,colorMiddle = "rgb(56,161,187)" # ,colorEnd = "rgb(3,22,56)" # ,colorStartPos = 0 # ,colorMiddlePos = 50 # ,colorEndPos = 100 # ) ,sidebarBackColor = "lightgrey" ,sidebarPadding = 0 ,sidebarMenuBackColor = "transparent" ,sidebarMenuPadding = 0 ,sidebarMenuBorderRadius = 0 ,sidebarShadowRadius = "3px 5px 5px" ,sidebarShadowColor = "#aaaaaa" ,sidebarUserTextColor = "grey" ,sidebarSearchBackColor = "rgb(55,72,80)" ,sidebarSearchIconColor = "rgb(153,153,153)" ,sidebarSearchBorderColor = "rgb(55,72,80)" ,sidebarTabTextColor = "black" ,sidebarTabTextSize = 13 ,sidebarTabBorderStyle = "none none solid none" ,sidebarTabBorderColor = "rgb(35,106,135)" ,sidebarTabBorderWidth = 1 ,sidebarTabBackColorSelected = cssGradientThreeColors( direction = "right" ,colorStart = "rgba(44,222,235,1)" ,colorMiddle = "rgba(44,222,235,1)" ,colorEnd = "rgba(0,255,213,1)" ,colorStartPos = 0 ,colorMiddlePos = 30 ,colorEndPos = 100 ) ,sidebarTabTextColorSelected = "black" ,sidebarTabRadiusSelected = "0px 0px 0px 0px" ,sidebarTabBackColorHover = cssGradientThreeColors( direction = "right" ,colorStart = "rgba(44,222,235,1)" ,colorMiddle = "rgba(44,222,235,1)" ,colorEnd = "rgba(0,255,213,1)" ,colorStartPos = 0 ,colorMiddlePos = 30 ,colorEndPos = 100 ) ,sidebarTabTextColorHover = "rgb(50,50,50)" ,sidebarTabBorderStyleHover = "none none solid none" ,sidebarTabBorderColorHover = "rgb(75,126,151)" ,sidebarTabBorderWidthHover = 1 ,sidebarTabRadiusHover = "0px 0px 0px 0px" ### boxes ,boxBackColor = "rgb(255,255,255)" ,boxBorderRadius = 5 ,boxShadowSize = "0px 1px 1px" ,boxShadowColor = "rgba(0,0,0,.1)" ,boxTitleSize = 16 ,boxDefaultColor = "rgb(210,214,220)" ,boxPrimaryColor = "rgba(44,222,235,1)" ,boxInfoColor = "rgb(210,214,220)" ,boxSuccessColor = "rgba(0,255,213,1)" ,boxWarningColor = "rgb(244,156,104)" ,boxDangerColor = "rgb(255,88,55)" ,tabBoxTabColor = "rgb(255,255,255)" ,tabBoxTabTextSize = 14 ,tabBoxTabTextColor = "rgb(0,0,0)" ,tabBoxTabTextColorSelected = "rgb(0,0,0)" ,tabBoxBackColor = "rgb(255,255,255)" ,tabBoxHighlightColor = "rgba(44,222,235,1)" ,tabBoxBorderRadius = 5 ### inputs ,buttonBackColor = "rgb(245,245,245)" ,buttonTextColor = "rgb(0,0,0)" ,buttonBorderColor = "rgb(200,200,200)" ,buttonBorderRadius = 5 ,buttonBackColorHover = "grey" ,buttonTextColorHover = "grey" ,buttonBorderColorHover = "grey" ,textboxBackColor = "white" ,textboxBorderColor = "rgb(200,200,200)" ,textboxBorderRadius = 5 ,textboxBackColorSelect = "rgb(245,245,245)" ,textboxBorderColorSelect = "rgb(200,200,200)" ### tables ,tableBackColor = "rgb(255,255,255)" ,tableBorderColor = "rgb(240,240,240)" ,tableBorderTopSize = 1 ,tableBorderRowSize = 1 )
/src/emerging_topics/dashboard/r_shiny_app_v2/theme.R
permissive
uva-bi-sdad/publicrd
R
false
false
3,707
r
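The file above only defines the theme object. As I understand the dashboardthemes documentation, a shinyDashboardThemeDIY object is applied by placing it inside dashboardBody(); the sketch below assumes the customTheme defined above, and all of the header, sidebar and box content is illustrative placeholder UI, not part of the original app.
# Minimal usage sketch (assumptions noted above): customTheme comes from theme.R,
# everything else is placeholder UI.
library(shiny)
library(shinydashboard)
library(dashboardthemes)

ui <- dashboardPage(
  dashboardHeader(title = "Demo"),
  dashboardSidebar(sidebarMenu(menuItem("Home", tabName = "home"))),
  dashboardBody(
    customTheme,   # apply the DIY theme defined above
    tabItems(tabItem(tabName = "home", box(title = "Example box", "content")))
  )
)

server <- function(input, output, session) {}

# shinyApp(ui, server)   # uncomment to launch locally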
#' Return all synonyms for a taxon name with a given id from NBN
#'
#' @export
#' @param id the taxon identifier code
#' @param ... Further args passed on to [crul::verb-GET]
#' @return A data.frame
#' @family nbn
#' @references https://api.nbnatlas.org/
#' @examples \dontrun{
#' nbn_synonyms(id = 'NHMSYS0001501147')
#' nbn_synonyms(id = 'NHMSYS0000456036')
#'
#' # none
#' nbn_synonyms(id = 'NHMSYS0000502940')
#' }
nbn_synonyms <- function(id, ...) {
  url <- file.path(nbn_base(), "species", id)
  df <- nbn_GET_2(url, ...)
  df$synonyms
}
/R/nbn_synonyms.R
permissive
ropensci/taxize
R
false
false
545
r
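A defensive usage sketch for the function above; the id is taken from the documented examples, and it assumes taxize is installed and the NBN Atlas API is reachable.
# Sketch only: wrap the web call so a network or API failure does not abort a script.
library(taxize)

syn <- tryCatch(
  nbn_synonyms(id = "NHMSYS0001501147"),
  error = function(e) {
    message("NBN request failed: ", conditionMessage(e))
    NULL
  }
)
if (!is.null(syn)) str(syn)   # a data.frame of synonyms (may be empty for some ids)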
#------------define my new transformation------------ library(scales) # John and Draper's modulus transformation modulus_trans <- function(lambda){ trans_new("modulus", transform = function(y){ if(lambda != 0){ yt <- sign(y) * (((abs(y) + 1) ^ lambda - 1) / lambda) } else { yt = sign(y) * (log(abs(y) + 1)) } return(yt) }, inverse = function(yt){ if(lambda != 0){ y <- ((abs(yt) * lambda + 1) ^ (1 / lambda) - 1) * sign(yt) } else { y <- (exp(abs(yt)) - 1) * sign(yt) } return(y) } ) } #-------------analysis------------ library(RODBC) library(showtext) library(dplyr) library(ggplot2) # comnect to database PlayPen <- odbcConnect("PlayPen_prod") sqlQuery(PlayPen, "use nzis11") # load fonts font.add.google("Poppins", "myfont") showtext.auto() inc <- sqlQuery(PlayPen, "select * from vw_mainheader") svg("../img/0006_better_density_plot.svg", 8, 5) ggplot(inc, aes(x = income)) + geom_density() + geom_rug() + scale_x_continuous(trans = modulus_trans(lambda = 0.25), label = dollar) + theme_minimal(base_family = "myfont") dev.off() ggplot(inc, aes(x = hours, y = income)) + geom_jitter(alpha = 0.2) + scale_x_continuous(trans = modulus_trans(lambda = 0.25)) + scale_y_continuous(trans = modulus_trans(lambda = 0.25), label = dollar) + theme_minimal(base_family = "myfont") p1 <- ggplot(inc, aes(x = hours, y = income)) + facet_wrap(~region) + geom_point(alpha = 0.2) + scale_x_continuous(trans = modulus_trans(0.25)) + scale_y_continuous(trans = modulus_trans(0.25), label = dollar) + theme_light(base_family = "myfont") svg("../img/0006_income_by_region.svg", 12, 8) print(p1) dev.off() png("../img/0006_income_by_region.png", 12 * 70, 8 * 70, res = 70) print(p1) dev.off() #-----------------add on as requested----------- p1 <- ggplot(inc, aes(x = hours, y = income)) + facet_wrap(~region) + geom_point(alpha = 0.2) + scale_y_continuous(label = dollar) + theme_light(base_family = "myfont") svg("../img/0006_income_by_region_no_transform.svg", 10, 6) print(p1) dev.off()
/_working/0006_scale_transforms.R
no_license
ellisp/blog-source
R
false
false
2,345
r
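A quick round-trip check of the transformation defined above. This is a sketch that assumes the modulus_trans() from the script is in scope; it does not need the database connection or the income data.
# Sketch only: the inverse should undo the transform for positive, negative and
# zero values, including the lambda = 0 branch, which reduces to a signed log(|y| + 1).
library(scales)

mt <- modulus_trans(lambda = 0.25)
y  <- c(-50000, -500, -1, 0, 1, 500, 50000)
all.equal(mt$inverse(mt$transform(y)), y)    # TRUE: the round trip recovers y

mt0 <- modulus_trans(lambda = 0)
all.equal(mt0$inverse(mt0$transform(y)), y)  # TRUE for the log branch as well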
c DCNF-Autarky [version 0.0.1].
c Copyright (c) 2018-2019 Swansea University.
c
c Input Clause Count: 17545
c Performing E1-Autarky iteration.
c Remaining clauses count after E-Reduction: 17544
c
c Performing E1-Autarky iteration.
c Remaining clauses count after E-Reduction: 17544
c
c Input Parameter (command line, file):
c input filename pcnf/PCNF/stmt21_310_360.qdimacs
c output filename /tmp/dcnfAutarky.dimacs
c autarky level 1
c conformity level 0
c encoding type 2
c no.of var 5233
c no.of clauses 17545
c no.of taut cls 0
c
c Output Parameters:
c remaining no.of clauses 17544
c
c pcnf/PCNF/stmt21_310_360.qdimacs 5233 17545 E1 [1] 0 260 4972 17544 RED
/code/dcnf-ankit-optimized/Results/PCNF-TRACK-2018/E1/Experiments/stmt21_310_360/stmt21_310_360.R
no_license
arey0pushpa/dcnf-autarky
R
false
false
690
r
testlist <- list(x = c(8.06445078720382e-251, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), y = numeric(0)) result <- do.call(blorr:::blr_pairs_cpp,testlist) str(result)
/blorr/inst/testfiles/blr_pairs_cpp/libFuzzer_blr_pairs_cpp/blr_pairs_cpp_valgrind_files/1609955888-test.R
no_license
akhikolla/updated-only-Issues
R
false
false
401
r
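The file above replays a single fuzz/valgrind test case against the internal compiled routine blorr:::blr_pairs_cpp. A generic replay sketch (not from this repo) that records whether the target returns, warns, or errors, rather than letting a failure stop the run:
# Sketch only: a small harness for replaying saved fuzz test cases.
replay_case <- function(fun, args) {
  tryCatch(
    list(status = "ok", value = do.call(fun, args)),
    warning = function(w) list(status = "warning", message = conditionMessage(w)),
    error   = function(e) list(status = "error",   message = conditionMessage(e))
  )
}

# res <- replay_case(blorr:::blr_pairs_cpp, testlist)   # testlist as defined above
# str(res)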
library(ggplot2) library(corrplot) #This report investigates Red Wine dataset df = read.csv("C:/users/layal/Desktop/R Project/wineQualityReds.csv") dim(df) str(df) summary(df) #This dataset has 1599 observations and 13 numerical variables. #Header 2: Univariant Plots Section # Check if there are missing values d<-subset(df,is.na(df)) dim(d) # no missing values # Check if observations are unique, I will remove the column marking the # sequence number for each observations, it is not needed in my EDA df$X <- NULL dim(unique(df)) == dim(df) # Yes! All observations are unique. ggplot(aes(x=quality), data = df) + geom_histogram() table(df$quality) # Following I will add two columns to dataset, total acidity and quality # bucket. Later in this report I will describe how I reach the conclusion # that total acidity should include citric acid and fixed acidity. # Also, quality bucket is going to help in the multivariate analysis. df$total.acidity = df$fixed.acidity+df$citric.acid df$quality.bucket <- ifelse(df$quality<=5,"Low", "High") # The quality of the red wine in this data set ranges between 3 and 8. # There is very small number of wine with very low quality of 3 or very high # quality of 8. The observations seem to be right skewed with short tail, # they center around 5.5. ggplot(aes(x=fixed.acidity), data = df) + geom_density(alpha=0.2, fill="blue") summary(df$fixed.acidity) #Most of the observations have fixed.acidity of about 7. The distribution #is right skewed with a tail expanding until about 16. # Too much fixed acidity will result in tartar wine, this explains why # there are small number of observations with high amounts of this acid. # There are no values of zero or 1 so it should be safe to try the log scale. ggplot(aes(x=log10(fixed.acidity)), data = df) + geom_density(alpha=0.2, fill="red") + geom_vline(aes(xintercept=mean(log10(fixed.acidity), na.rm=T)), color="red", linetype="dashed", size=1) # The density plot of the log10 of this variable looks "close" to normal # the mean is 0.91 and is represented using the vertical red line, # it is not quite at the center because the density is not normal. summary(log10(df$fixed.acidity)) ggplot(aes(x=volatile.acidity), data = df) + geom_density(alpha=0.2, fill = "yellow") # The density plot is bimodal. Volatile acidity is found in very small # amounts in red wine, it appears when wine is steamed. More about this # variable later in this report. ggplot(aes(citric.acid), data = df) + geom_histogram(binwidth = 0.005) # Citric acids usualy appear in citris fruit, they give freshness # to the wine. # The above histogram of citrict.acid does not show any particular pattern. # The majoriy of the observations does not include any amount of this acid. sort(table(df$citric.acid)) ggplot(aes(residual.sugar), data = df) + geom_density(fill="green") #The density looks right skewed with long tail. Even when transformed # to log10 (not interesting enough to include in this document), it also # looks right skewed. Log10 is not useful because most values are very small. # The description document mentions that #"wines with greater than 45 grams/liter are considered sweet". # There are no sweet wines in this sample. The maximum value is 15.5. summary(df$residual.sugar) ggplot(aes(chlorides), data = df) + geom_density(fill="orange") # Similar observations can be seen when plotting the chlorides. Most # observations have very small amounts of this compound. It is clearly right # skewed. 
Chlorides are present in wine from materials used to sterilize # wine making equipments, detergents are not commonly used to clean the # equipments because they leave residue. Based on this reasoning, # I don't think chloride is major component of wine. ggplot(aes(free.sulfur.dioxide), data = df) + geom_density(alpha = 0.2, fill = 'blue') ggplot(aes(total.sulfur.dioxide - free.sulfur.dioxide), data = df) + geom_density(alpha = 0.2, fill = 'blue') + xlab("bound sulfur") summary(df$free.sulfur.dioxide) ggplot(aes(total.sulfur.dioxide), data = df) + geom_density(alpha = 0.2, fill = 'blue') # According to the dataset description, the total.sulfur.dioxide is # a superset of free.sulfur.dioxide and bound form of SO2. # Most observations have small concentration of free or bound SO2. Only # very observations have relatively higher concentrations of these materials # which result in right skewed density plot. ggplot(aes(sulphates), data = df) + geom_density() # The majority of observations have relatively small amount of Sulphate. # Even at small amounts suphur can be antioxidant, anti aging and hence, # it preserves the freshness of the wine. # The density plot is right skewed. Sulphate contribute to SO2 # substances so it will be interesting to study these materials together. ggplot(aes(density), data = df) + geom_density() + geom_vline(aes(xintercept=mean(density, na.rm=T)), color="red", linetype="dashed", size=1) # Density is normally distibuted with center around the mean of 0.9967 # and sd of 0.0019 # Density depends on alcohol and sugar percentage so it should interesting # to study these three variables together later in this report. mean(df$density) sd(df$density) ggplot(aes(pH), data = df) + geom_density() + geom_vline(aes(xintercept=mean(pH, na.rm=T)), color="red", linetype="dashed", size=1) # pH measures the concentration of fixed acidity and citric acidity. # Volatile acidity is measured using steam distillation process. # Higher amounts of acid lead to lower values of pH and more acidic taste. # pH is log scaled by nature. # pH is normally distributed with a center of 3.3, and sd of 0.15 # pH for red wine should ranges between 3 and 4, which is the case for # this dataset. There are 29 exceptions with a pH value lower than 3. # The majority of these exceptions are of quality 3 and 4. mean(df$pH) sd(df$pH) d <- subset(df, (df$pH < 3)) table(d$quality) ggplot(aes(alcohol), data = df) + geom_density() # I don't think this is bimodal and it is probably a stretch to just # call it right skewed. # Most wines have about 9.5% alcohol, few observations have higher # percentages of alcohol with a maximum of 15%. tail(names(sort(table(df$alcohol))), 1) summary(df$alcohol) #*********************** Multivariante plots*********************** # I have 13 variables so I will start by plotting the correlation # coefficient corr.matrix = cor(df) corrplot(m, method = 'circle') # From the plot above, I think it will be useful to look at the following # relationships: # fixed.acidity vs citric.acid # fixed.acidity vs pH (negative) # volatile.acidity vs citric.acid (negative) # citric.acid vs pH (negative) # # quality vs alcohol # quality vs sulphates # quality vs citric.acid # quality vs. fixed.acidity # quality vs volatile.acidity (negative) # density vs fixed acidity # density vs alcohol (negative) # density vs citric.acid # density vs residual sugar # density vs pH (negative) # Start by the acidity (fixed, volatile and citric). 
Recall pH is # the measurement of strength of acidity, and acidity is a measurement of # how much acid you have. # Usually, the higher the amound of acid the smaller the pH. # Therefore, it is expected to see linear relationship # between pH and acidity. Volatile acids are not measured by pH # though. Although volatile acids seem to have a positive relationship # with pH in the correlation plot above, it is likely a fake # relationship that is created by other factors. ggplot(aes(x=pH, y = citric.acid), data = df) + geom_point() + geom_smooth(method="lm") cor.test(df$pH,df$citric.acid) # There is a good negative relationship between pH and fixed acidity. # The pearson correlation coefficient is -0.54 # It it also interesting to see zero amount of citric acid and different # values of pH, implying that pH depends on other substances as we will # see below. ggplot(aes(x=pH, y = volatile.acidity), data = df) + geom_point(color='red') + geom_smooth(method="lm") cor.test(df$pH,df$volatile.acidity) # There is a weak relationship of 0.23 between volatile acidity and pH. # The Pearson correlation coefficient is 0.24. # Again, I think this relationship is fake as pH is not an indication of # volatile acidity. ggplot(aes(x=pH, y = fixed.acidity), data = df) + geom_point(color='brown') + geom_smooth(method="lm") cor.test(df$pH,df$fixed.acidity) # In high concentration the fixed acids can give the wine tartaric taste. # There is strong negative relationship between pH and fixed acidity. # The pearson correlation coeficient is -0.68 # Fixed acidity is very important in wine industry, it gives each wine its # own characteristic. Fixed and citric acids are the two factors that # contribute to pH in this dataset. ggplot(aes(x=pH, y = fixed.acidity, color=citric.acid), data = df) + geom_point() + geom_smooth(method="lm") cor.test(df$pH,(df$fixed.acidity+df$citric.acid)) # The plot above shows that when you have high fixed and citric acids # amounts the pH will be low and vice versa. # The sum of these two substances have a - 0.69 correlation coefficient. # density vs fixed acidity # density vs alcohol (negative) # density vs citric.acid # density vs residual sugar # density vs pH (negative) # Next I will study density. The dataset description says that the main # components of density is sugar and alcohol, I will test the accuracy # of this statement. ggplot(aes(x=density, y = alcohol), data = df) + geom_point(color = 'pink') + ylim(8,13) + geom_smooth(method='lm') cor.test(df$density, df$alcohol) # There is a negative relationship with an R of -0.50 between density # and alcohol. ggplot(aes(x=density, y = (residual.sugar)), data = df) + geom_jitter(alpha = 1/5) + ylim(0.5, 5) + geom_smooth(method='lm') cor.test(df$density, df$residual.sugar) # There is a positive weak relationship with as R of 0.36 between # density and residual.sugar. ggplot(aes(x=density, y = total.acidity), data = df) + geom_jitter(alpha = 1/5) + geom_smooth(method='lm') cor.test(df$density, df$total.acidity) # There is a good positive relationship with an R of 0.66 between denstiy # and total.acidity. # I conclude that residual sugar doesn't correlate much with density, # rather alcohol and total acidity seems to be the major contributers. 
# I think more statistical analysis are needed to determine if the # relationship between density and total acidity is real or an artifact # of other relationships, but I cannot see that by using my current tools ggplot(aes(x=density, y = total.acidity, color = alcohol), data = df) + geom_point() + geom_smooth(method='lm', color='pink') # This plot shows, again, the nice positive relationship between density, # and total acidity and a negative one with alcohol. # quality vs alcohol # quality vs volatile.acidity (negative) # quality vs sulphates # quality vs citric.acid # quality vs. fixed.acidity # Note that the lack of observations with quality of 3, 4, 7 and 8 makes # it hard to make conclusions out of the plots below. ggplot(aes(x=as.factor(quality), y = alcohol), data = df) + geom_boxplot() # Quality increases with alcohol, the boxplot shows this with the exception # of quality 5 which has lower mean of alcohol with respect to quality of 4. # Quality 5 also has a large number of outliers. There is a positive # relationship between alcohol and the wine quality with R of # 0.48. ggplot(aes(x=as.factor(quality), y = volatile.acidity), data = df) + geom_boxplot() # As for the volatile acidity, the alcohol mean of each quality category # is getting smaller with increasing quality. The negative # relationship can also be shown with R value of -0.39. # Quality depends weakly on sulphates (R = 0.25), citric.acid (R = 0.23) # and total.acidity (R = 0.17), as shown below. ggplot(aes(y=sulphates, x = as.factor(quality)), data = df) + geom_boxplot() # This is a weak positive relationship with the quartiles (almost) increasing # with quality and the mean of sulphates for each category also increases. # The R value for this relationship is 0.25. ggplot(aes(y=citric.acid, x = as.factor(quality)), data = df) + geom_boxplot() # Although we can see the positive relationship between citric acid and # quality, the small number of observations in category 3 is making it # hard to make conclusions because the huge quartile of category 3 is # probably weakening the relationship between citric acid and quality. # The R value is 0.23. ggplot(aes(y=fixed.acidity, x = as.factor(quality)), data = df) + geom_boxplot() # Fixed acidity also has a weak relationship with quality, qualities of 3 # and 8 and the too few observations is affecting the increasing trend. # This relationship has an R value of 0.17. summary(df$quality) cor.test(df$total.acidity, df$quality) ggplot(aes(x=alcohol, y = volatile.acidity), data = df) + geom_jitter(color = 'red', alpha = 1/5) + ylim(0.125, 1.2) + facet_wrap(~df$quality.bucket) ggplot(aes(x=volatile.acidity, fill = quality.bucket), data = df) + geom_density(alpha=0.2) # The plots above show other views of the major two factors contributing # to quality. The first plot shows the low quality win (<=5) vs high # quality (> 5). As usualy I solved overplotting by adding a jitter. # The weak relationships do not make it easy to view the relationship # but one can notice that high quality has relatively more small values # of volatile acidity and higher numbers of alcohol percentages. # Second plot is interesting because it "kind of" breaks down the bimodal # density plot in volatile acidity, it turns out that the higher mod # belongs to low quality wine. 
# Quality, sulphates and alcohol ggplot(aes(x=alcohol, y = sulphates, color = quality.bucket), data = df) + geom_jitter(alpha=1/5) + ylim(0.3,1.3) # Recall, that low quality in this plot refers # to quality of less than or equal to 5, otherwise the wine is # in the High category. # It is interesting to see that low quality wine has smaller amount # of alcohol and also smaller amounts of sulphates. # Sulphates and chlorides ggplot(aes(x=sulphates, y = chlorides), data = df) + geom_jitter(alpha=1/10) + ylim(0, 0.2) + xlim(0.2, 1.25) + geom_smooth(method='lm') # There is a weak relationship between sulphates and chlorides with # R = 0.37. I'm not a chemist but I looked it up and there is no chemical # relationship between the two compounds so I think that this relationship # is fake. cor.test(df$sulphates, df$chlorides) # Free, bound and total sulfur dioxide. ggplot(aes(x=free.sulfur.dioxide, y= total.sulfur.dioxide), data = df) + ylim(0, 150) + xlim(0, 60) + geom_jitter(alpha = 1/10, color = "purple") # Recall that total sulfur dioxide is a superset of free.sulfur.dioxide # and bound sulfur dioxide (per the dataset description). # The plot confirms this because we don't see a point where # free sulfur dioxide is larger than the total. # Seems like bound sulfur dioxide composed most of the total sulfur dioxide # because the range of sulphates is small. ggplot(aes(x=sulphates, y= total.sulfur.dioxide), data = df) + ylim(0, 150) + xlim(0.25, 1.5) + geom_jitter(alpha = 1/10, color = "red") + geom_smooth(method="lm") # The plot also shows that the majority of observations have small # sulphates value mainly between 0.4 to 0.8. It also shows a number # of observations with high total.sulfur.dioxide value and still very low # sulphate value. Therefore, there is no obvious linear relationship between # the two variables although their description implies a relationship. # I plot the smooth line and it confirms my expectations.
/red_wine_analysis.R
no_license
layalir/redWine
R
false
false
15,776
r
#!/usr/bin/env Rscript
source("common.r")
## source:
## https://www.tutorialspoint.com/r/r_xml_files.htm
fn <- file.path(DATA.DIR,"sample.xml")
if(!file.exists(fn)) stop("Sample xml file not found:",fn)
## Load the package required to read XML files.
if (!require("XML")) stop("'XML' package is needed.")
## Also load the other required package.
if(!require("methods")) stop("'methods' package is needed.")
## Give the input file name to the function.
result <- xmlParse(file = fn)
## Print the result.
## print(result)
## Extract the root node from the xml file.
cat("Root node:\n")
rootnode <- xmlRoot(result)
print(rootnode)
## Find the number of nodes in the root.
rootsize <- xmlSize(rootnode)
## Print the result.
cat("\nRoot size:",rootsize,"\n")
cat("\nFirst element:\n")
print(rootnode[1])
print(str(rootnode[1]))
cat("\nAs list item:\n")
print(rootnode[[1]])
cat("\nAs data frame:\n")
a.df <- xmlToDataFrame(fn)
print(str(a.df))
print(a.df)
## cat("Going over each element:\n")
## for(node.it in 1:rootsize) {
##   cat("node",node.it,":\n")
##   node.size <- xmlSize(rootnode[[node.it]])
##   for(line.it in 1:node.size) {
##     cat("item",line.it,":\n")
##     print(rootnode[[node.it]][[line.it]])
##   }
## }
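## Illustrative addition (not from the tutorial source): xmlSApply() applied to a single
## record extracts the text value of each of its child nodes, whatever their names are.
print(xmlSApply(rootnode[[1]], xmlValue))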
/xml.r
no_license
tiborh/r
R
false
false
1,261
r
# Name : Manan Bhatt # CS 513 B - Knowledge Discovery And Data Mining # Final Question 2 # CWID : 104530306 rm(list=ls()) #install.packages("randomForest") library(randomForest) ChooseFile <- file.choose() Admission_Cat<- read.csv(ChooseFile,header = TRUE, na.strings=' ?') View(Admission_Cat) #Data Preparation Admission_Cat <- Admission_Cat[,-1] View(Admission_Cat) cols <- ncol(Admission_Cat) cols Admission_Cat[1:cols] <- lapply(Admission_Cat[1:cols], factor) View(Admission_Cat) #Splitting Data split<-sort(sample(nrow(Admission_Cat),round(.30*nrow(Admission_Cat)))) Training_Data <- Admission_Cat[-split,] Testing_Data <- Admission_Cat[split,] #Model fit <- randomForest( ADMIT~., data=Training_Data, importance=TRUE, ntree=1000) importance(fit) varImpPlot(fit) Prediction <- predict(fit, Testing_Data) table(actual=Testing_Data$ADMIT,Prediction) tab<-table(actual=Testing_Data$ADMIT,Prediction) #Accuracy accuracy <- function(x){sum(diag(x)/(sum(rowSums(x)))) * 100} accuracy(tab)
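# Optional extra diagnostic (not part of the original submission): plotting a randomForest
# object shows the out-of-bag error as a function of the number of trees, which helps
# confirm that ntree = 1000 was large enough for the error to stabilise.
plot(fit)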
/Final Exam/Final_Q2.R
no_license
Manan31/CS-513-Knowledge-Discovery-and-Data-Mining-Stevens
R
false
false
996
r
\alias{gtkBoxPackStart} \name{gtkBoxPackStart} \title{gtkBoxPackStart} \description{Adds \code{child} to \code{box}, packed with reference to the start of \code{box}. The \code{child} is packed after any other child packed with reference to the start of \code{box}.} \usage{gtkBoxPackStart(object, child, expand = TRUE, fill = TRUE, padding = 0)} \arguments{ \item{\verb{object}}{a \code{\link{GtkBox}}} \item{\verb{child}}{the \code{\link{GtkWidget}} to be added to \code{box}} \item{\verb{expand}}{\code{TRUE} if the new child is to be given extra space allocated to \code{box}. The extra space will be divided evenly between all children of \code{box} that use this option} \item{\verb{fill}}{\code{TRUE} if space given to \code{child} by the \code{expand} option is actually allocated to \code{child}, rather than just padding it. This parameter has no effect if \code{expand} is set to \code{FALSE}. A child is always allocated the full height of a \code{\link{GtkHBox}} and the full width of a \code{\link{GtkVBox}}. This option affects the other dimension} \item{\verb{padding}}{extra space in pixels to put between this child and its neighbors, over and above the global amount specified by \verb{"spacing"} property. If \code{child} is a widget at one of the reference ends of \code{box}, then \code{padding} pixels are also put between \code{child} and the reference edge of \code{box}} } \author{Derived by RGtkGen from GTK+ documentation} \keyword{internal}
/RGtk2/man/gtkBoxPackStart.Rd
no_license
lawremi/RGtk2
R
false
false
1,475
rd
#' --- #' title: "Historical reconstruction of MRHSA data" #' author: "Michael Cysouw" #' date: "`r Sys.Date()`" #' --- # produce output # rmarkdown::render("manual.R") #' ### necessary libraries # require(qlcMatrix) # require(qlcData) # require(qlcVisualize) # require(showtext) # require(apcluster) #' ### read data # special function adapted to the details of the data source("code/readData.R") loc <- read_loc("sources/mrhsa/mrhsa-gid-wkt.tsv") old <- read_mrhsa("sources/mrhsa/aeltere-generation-ipa.tsv", loc) # help function for visualisation # - draw.cluster based on "limage" # - draw.line to add separation lines into the plots # - plot.cluster to send drawings to PDF source("code/visualizeClusters.R") #' ### visualise reconstructions sounds <- sapply(strsplit(colnames(old$data), "_"), function(x) x[2]) draw.cluster("t", data = old$data, clusters = sounds) # save all images as PDF # sapply(sort(unique(sounds)), plot.cluster, data = old$data, clusters = sounds) #' ### Similarity between alignments # do not count shared gaps as similarity # because then completely different alignments with many gaps get similar tmp <- t(old$data) tmp[tmp == "-"] <- NA sim <- qlcMatrix::sim.obs(tmp, method = "res") rm(tmp) #' ### Clustering of alignments # looking for clusters of alignments clusters1 <- cutree(hclust(as.dist(-sim)),h = -0.01) max(clusters1) # This is an interesting alternative option p <- apcluster::apcluster(sim) clusters2 <- apcluster::labels(p, type = "enum") max(clusters2) rm(p) # compare clusterings compare <- table(clusters1, clusters2) heatmap( -compare) # relation of clustering to proposed reconstruction in the data compare <- table(sounds, clusters1) heatmap( -compare^.2) compare <- table(sounds, clusters2) heatmap( -compare^.2) #' ### inspection of clusters clusters <- clusters1 # most frequent sounds per cluster stats(clusters, old$data) # add one image to the output draw.cluster(9, data = old$data, clusters) # save all images as PDF # sapply(1:max(clusters), plot.cluster, data = old$data, clusters = clusters) #' ### compare clustering p/b/f with induced 2/3/12 # combinations of clusters # plot.cluster(c(2,3,12), data = old$data, clusters) # compare # plot.cluster(c("b", "p", "f") , old$data, sounds) #' ### compare old with new s <- qlcMatrix::sim.obs(t(old$data), method="weighted") system_stability <- function(sound, village, data = old$data, sim = s, boundary = .3) { sim_to_same <- sim[sound, which(data[village, ] == data[village, sound])] sim_to_same <- sim_to_same[sim_to_same > boundary] others <- table(data[ , sound]) stat <- sapply(names(others), function(x) { sim_to_other <- sim[sound, which(data[village,] == x)] sim_to_other <- sim_to_other[sim_to_other > boundary] if (length(sim_to_other)<=1 | length(sim_to_same)<=1) { NA } else { t.test(sim_to_same, sim_to_other)$statistic } }) return(cbind(frequency = others, statistic = stat)) } #' ### phones per village uniquePhones <- as.character(sort(unique(old$raw$ipa))) phoneFreq <- apply(old$data, 1, function(x) { table(x)[uniquePhones] } ) phoneFreq[is.na(phoneFreq)] <- 0 rownames(phoneFreq) <- uniquePhones
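#' ### example call (added sketch)
# system_stability() is defined above but never called in this script. Judging from the
# indexing inside the function, `sound` should be a column name of old$data and `village`
# a row name of old$data; assuming that, a single call would look like:
system_stability(sound = colnames(old$data)[1], village = rownames(old$data)[1])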
/manual.R
no_license
cysouw/MRHSA
R
false
false
3,192
r
## These two functions create a special object holding an initial matrix (makeCacheMatrix)
## and cache its inverse matrix (cacheSolve)

makeCacheMatrix <- function(x = matrix()) { ## Creates a special "matrix" (list of functions)
  m <- NULL
  set <- function(y) {        ## set the value of the matrix
    x <<- y
    m <<- NULL
  }
  get <- function() x         ## get the value of the matrix
  setsolve <- function(solve) m <<- solve  ## set the value of the inverse matrix
  getsolve <- function() m    ## get the value of the inverse matrix
  list(set = set, get = get,
       setsolve = setsolve,
       getsolve = getsolve)   ## returns a list of the functions described above
}

cacheSolve <- function(x, ...) {  ## Return a matrix that is the inverse of 'x'
  m <- x$getsolve()
  if(!is.null(m)) {
    message("getting cached data")
    return(m)                 ## return the matrix if it was inverted before
  }
  data <- x$get()
  m <- solve(data, ...)
  x$setsolve(m)
  m
}
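## Usage sketch (added for illustration): the second call to cacheSolve() returns the
## cached inverse instead of recomputing it, printing "getting cached data".
cm <- makeCacheMatrix(matrix(c(2, 0, 0, 2), nrow = 2))
cacheSolve(cm)   # computes the inverse and stores it in the cache
cacheSolve(cm)   # retrieves the cached inverse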
/cachematrix.R
no_license
Amourka/ProgrammingAssignment2
R
false
false
953
r
\name{importMapSet} \alias{importMapSet} \title{ Import Annotation MapSet Files } \description{ Create a MapSet object from a set of text files. } \usage{ importMapSet(path = ".") } \arguments{ \item{path}{ the folder that holds the annotation map text files. } } \details{ This function rebuilds an annotation MapSet from a set of map text files. See \code{\link{MapSets}} for a more detailed overview. } \value{ A MapSet object (returned invisibly). Also, a compressed MapSet.rda file gets written to \code{path}, and the MapSet gets added to the set of defined species. } \seealso{ \code{\link{addMapSet}}, for adding new MapSets to the set of defined species. \code{\link{exportCurrentMapSet}}, for the inverse capability, to write a MapSet to disk. }
/man/importMapSet.Rd
no_license
robertdouglasmorrison/DuffyTools
R
false
false
795
rd
hsCmdLineArgs <- function(spec=c(),openConnections=TRUE,args=commandArgs(TRUE)) { basespec = c( 'mapper', 'm', 0, "logical","Runs the mapper.",FALSE, 'reducer', 'r', 0, "logical","Runs the reducer, unless already running mapper.",FALSE, 'mapcols', 'a', 0, "logical","Prints column headers for mapper output.",FALSE, 'reducecols', 'b', 0, "logical","Prints column headers for reducer output.",FALSE, 'infile' , 'i', 1, "character","Specifies an input file, otherwise use stdin.",NA, 'outfile', 'o', 1, "character","Specifies an output file, otherwise use stdout.",NA, 'skip', 's',1,"numeric","Number of lines of input to skip at the beginning.",0, 'chunksize', 'c',1,"numeric","Number of lines to read at once, a la scan.",-1, 'memlim', 'z',1,"numeric","Max number of bytes allowed for use in carry.",-1, 'numlines', 'n',1,"numeric","Max number of lines to read from input, per mapper or reducer job.",0, 'sepr', 'e',1,"character","Separator character, as used by scan.",'\t', 'insep', 'f',1,"character","Separator character for input, defaults to sepr.",NA, 'outsep', 'g',1,"character","Separator character output, defaults to sepr.",NA, 'debug', 'd',0,"logical","Turn on debugging output.",FALSE, 'help', 'h',0,"logical","Get a help message.",FALSE ) specmat = matrix(c(spec,basespec),ncol=6,byrow=TRUE) opt = getopt(specmat[,1:5],opt=args) ## Set Default parameter values for (p in seq(along=specmat[,1])) { s = specmat[p,1] if (is.null(opt[[specmat[p,1]]]) ) { opt[[specmat[p,1]]] = specmat[p,6] storage.mode( opt[[specmat[p,1]]] ) <- specmat[p,4] } } ## Set separator character if (is.na(opt$insep)) opt$insep=opt$sep if (is.na(opt$outsep)) opt$outsep=opt$sep ## Print help, if necessary if ( opt$help || (!opt$mapper && !opt$reducer && !opt$mapcols && !opt$reducecols) ) { ##Get the script name (only works when invoked with Rscript). ## self = commandArgs()[1]; cat(getopt(specmat,usage=TRUE)) opt$set = FALSE ## return(opt) } if (openConnections) { if (is.na(opt$infile) && (opt$numlines==0)) { opt$incon = file(description="stdin",open="r") } else if (is.na(opt$infile) && (opt$numlines>0)) { opt$incon = pipe( paste("head -n",opt$numlines), "r" ) } else if (opt$numlines>0) { opt$incon = pipe( paste("head -n",opt$numlines,opt$infile), "r" ) } else if (opt$numlines==0) { opt$incon = file(description=opt$infile,open="r") } else { stop("I can't figure out what's going on with this input stuff.") } if (is.na(opt$outfile) ) { opt$outcon = "" } else { opt$outcon = file(description=opt$outfile,open="w") } } opt$set = TRUE return(opt) }
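## Usage sketch (added for illustration; "in.txt" is a placeholder file name):
## parse a hypothetical mapper command line without opening any connections.
## hsCmdLineArgs() calls getopt(), so the getopt package must be attached first.
# library(getopt)
# opts <- hsCmdLineArgs(args = c("--mapper", "--infile=in.txt"), openConnections = FALSE)
# opts$mapper    # TRUE
# opts$infile    # "in.txt"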
/HadoopStreaming/R/hsCmdLineArgs.R
no_license
ingted/R-Examples
R
false
false
2,827
r
#' loadPosterior #' #' Returns the full mcmc object from a BayesTraits log file. This #' is used inside plot functions and so on, but might be useful for #' other MCMC manipulations and so on. #' Extracts the MCMC samples from a BayesTraits logfile (i.e. discards the #' header information and coerces samples into a matrix.) #' @param logfile The name of the logfile of the BayesTraits analysis. #' @param thinning Thinning parameter for the posterior - defaults to 1 #' (all samples). 2 uses every second sample, 3 every third and so on. #' @param burnin The number of generations to remove from the start of the #' chain as burnin. Use if the chain has not reached convergence before sampling #' began. Useful if the burnin parameter for the analysis itself was not long #' enough. #' @return A tibble (see \link[tibble]{tibble}) with the class "bt_post" containing #' the samples from the BayesTraits MCMC chain. Headers vary on model type. #' @export #' @name loadPosterior loadPosterior <- function(logfile, thinning = 1, burnin = 0) { raw <- readLines(logfile) # TODO Return the model type with the output, and put this into classes. # Adapt other functions to deal with the new output of btmcmc, and perhaps # implement methods based on the class (i.e. the model) that comes out of # this. model <- gsub(" ", "", raw[2]) output <- do.call(rbind, strsplit(raw[grep("\\bIteration\\b", raw):length(raw)], "\t")) colnames(output) <- output[1, ] output <- output[c(2:nrow(output)), ] output <- data.frame(output, stringsAsFactors = FALSE) for (i in 1:ncol(output)) { if (colnames(output)[i] != "Model.string" && colnames(output)[i] != "Dep...InDep") { output[ ,i] <- as.numeric(output[ ,i]) } } output <- tibble::as.tibble(output[seq.int(burnin, nrow(output), thinning), ]) class(output) <- append("bt_post", class(output)) return(output) }
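# Usage sketch (added; "primates.Log.txt" is a hypothetical file name):
# read every second sample from a BayesTraits log, discarding the first 100 samples.
# post <- loadPosterior("primates.Log.txt", thinning = 2, burnin = 100)
# str(post)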
/retired_functions/loadPosterior.R
no_license
hferg/bayestraitr
R
false
false
1,893
r
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/kinfitr_refPatlak.R \name{refPatlak} \alias{refPatlak} \title{Patlak Reference Tissue Model} \usage{ refPatlak( t_tac, reftac, roitac, tstarIncludedFrames, weights = NULL, frameStartEnd = NULL ) } \arguments{ \item{t_tac}{Numeric vector of times for each frame in minutes. We use the time halfway through the frame as well as a zero. If a time zero frame is not included, it will be added.} \item{reftac}{Numeric vector of radioactivity concentrations in the reference tissue for each frame. We include zero at time zero: if not included, it is added.} \item{roitac}{Numeric vector of radioactivity concentrations in the target tissue for each frame. We include zero at time zero: if not included, it is added.} \item{tstarIncludedFrames}{The number of frames to be used in the regression model, i.e. the number of frames for which the function is linear after pseudo-equilibrium is reached. This is a count from the end of the measurement, so a value of 10 means that last 10 frames will be used. This value can be estimated using \code{refPatlak_tstar}.} \item{weights}{Optional. Numeric vector of the weights assigned to each frame in the fitting. We include zero at time zero: if not included, it is added. If not specified, uniform weights will be used.} \item{frameStartEnd}{Optional: This allows one to specify the beginning and final frame to use for modelling, e.g. c(1,20). This is to assess time stability.} } \value{ A list with a data frame of the fitted parameters \code{out$par}, the model fit object \code{out$fit}, a dataframe containing the TACs of the data \code{out$tacs}, a dataframe containing the TACs of the fitted values \code{out$fitvals}, a vector of the weights \code{out$weights}, and the specified tstarIncludedFrames value \code{out$tstarIncludedFrames} } \description{ Function to fit the Patlak Reference Tissue Model of Patlak & Blasbert (1985) to data. } \examples{ # Note: Reference region models, and irreversible binding models, should not # be used for PBR28 - this is just to demonstrate function data(pbr28) t_tac <- pbr28$tacs[[2]]$Times / 60 reftac <- pbr28$tacs[[2]]$CBL roitac <- pbr28$tacs[[2]]$STR weights <- pbr28$tacs[[2]]$Weights fit <- refPatlak(t_tac, reftac, roitac, tstarIncludedFrames = 10, weights = weights) } \references{ Patlak CS, Blasberg RG. Graphical evaluation of blood-to-brain transfer constants from multiple-time uptake data. Generalizations. Journal of Cerebral Blood Flow & Metabolism. 1985 Dec 1;5(4):584-90. } \author{ Granville J Matheson, \email{mathesong@gmail.com} }
/man/refPatlak.Rd
no_license
kang2000h/kinfitr
R
false
true
2,641
rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/temporal.downscaling.R \name{get.ncvector} \alias{get.ncvector} \title{Get time series vector from netCDF file} \usage{ get.ncvector(var, lati = lati, loni = loni, run.dates = run.dates, met.nc) } \arguments{ \item{met.nc}{netcdf file with CF variable names} } \value{ numeric vector } \description{ Get time series vector from netCDF file } \details{ internal convenience function for streamlining extraction of data from netCDF files with CF-compliant variable names } \author{ David Shaner LeBauer }
/modules/data.atmosphere/man/get.ncvector.Rd
permissive
araiho/pecan
R
false
true
583
rd
library(forecast)   # needed for auto.arima()

rlab9.2 = function(theorder){
  data(Tbrate,package="Ecdat")
  # r = the 91-day Treasury bill rate
  # y = the log of real GDP
  # pi = the inflation rate
  # find the nonseasonal ARIMA model selected by auto.arima (for reference),
  # then fit an ARIMA model with the user-supplied order
  pi=Tbrate[,3]
  autofit = auto.arima(pi,max.P=0,max.Q=0,ic="bic")
  fit = arima(pi,order=theorder)
  forecasts = predict(fit,36)
  plot(pi,xlim=c(1980,2006),ylim=c(-7,12))
  lines(seq(from=1997,by=.25,length=36),forecasts$pred,col="red")
  lines(seq(from=1997,by=.25,length=36),forecasts$pred + 1.96*forecasts$se,col="blue")
  lines(seq(from=1997,by=.25,length=36),forecasts$pred - 1.96*forecasts$se,col="blue")
}

debug(rlab9.2)
rlab9.2(c(1,1,1))
undebug(rlab9.2)
/rlab/rlab9-2.R
no_license
cxh1996108/FE2509
R
false
false
699
r
library(RgoogleMaps) library(shiny) Client_base <- read.csv("identicator/clients.csv") trades_base <- read.csv("identicator/trades.csv") getGeoCode("Ogrodowa 58 ,Warszawa,Polska")
/identicator.R
no_license
pastakrk/identicator
R
false
false
184
r
library(HDF5Array) library(SingleCellExperiment) getRowData <- function(path) { data.frame( Ensembl=as.character(h5read(path, "mm10/genes")), Symbol=as.character(h5read(path, "mm10/gene_names")), stringsAsFactors=FALSE ) } getColData <- function(path) { barcode <- as.character(h5read(path, "mm10/barcodes")) lib <- as.integer(sub(".*-", "", barcode)) data.frame( Barcode=barcode, Sequence=sub("-.*", "", barcode), Library=lib, Mouse=ifelse(lib <= 69, "A", "B"), stringsAsFactors=FALSE ) } ################################################################################# # Processing the full 1 million cell data set. url <- "https://cf.10xgenomics.com/samples/cell-exp/1.3.0/1M_neurons/1M_neurons_filtered_gene_bc_matrices_h5.h5" path <- basename(url) download.file(url, path) # Loading the data into R. saveRDS(getRowData(path), "1M_neurons_filtered_gene_bc_matrices_h5_rowData.rds") saveRDS(getColData(path), "1M_neurons_filtered_gene_bc_matrices_h5_colData.rds") # Converting into a `HDF5Matrix` object tenxmat <- TENxMatrix(path) options(DelayedArray.block.size=1e9) # 1GB block size. mat.out <- writeHDF5Array( tenxmat, file="1M_neurons_filtered_gene_bc_matrices_h5_rectangular.h5", name="counts", chunkdim=beachmat::getBestChunkDims(dim(tenxmat)) ) ################################################################################# # Processing the 20K subset. url <- "http://cf.10xgenomics.com/samples/cell-exp/1.3.0/1M_neurons/1M_neurons_neuron20k.h5" path <- basename(url) download.file(url, path) # Loading the data into R. saveRDS(getRowData(path), "1M_neurons_neuron20k_rowData.rds") saveRDS(getColData(path), "1M_neurons_neuron20k_colData.rds") # Converting into a `HDF5Matrix` object tenxmat <- TENxMatrix(path) options(DelayedArray.block.size=1e9) # 1GB block size. mat.out <- writeHDF5Array( tenxmat, file="1M_neurons_neuron20k_rectangular.h5", name="counts", chunkdim=beachmat::getBestChunkDims(dim(tenxmat)) )
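# Added sketch (not part of the original data-prep script): the rectangular HDF5 file
# written above can be read back as a DelayedArray and combined with the saved row/column
# annotations into a SingleCellExperiment.
counts <- HDF5Array("1M_neurons_neuron20k_rectangular.h5", "counts")
sce <- SingleCellExperiment(
    assays = list(counts = counts),
    rowData = readRDS("1M_neurons_neuron20k_rowData.rds"),
    colData = readRDS("1M_neurons_neuron20k_colData.rds")
)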
/inst/scripts/make-data.R
no_license
vjcitn/TENxBrainData
R
false
false
2,063
r
\name{get_bsblue}
\alias{get_bsblue}
\title{Get Bootstrap blue colour}
\usage{
get_bsblue()
}
\description{
Get Bootstrap blue colour
}
\examples{
get_bsblue()
}
/man/get_bsblue.Rd
no_license
eamoakohene/beamaColours
R
false
false
162
rd
c = TRUE a=1 while (c) { print(a) a = a+1 if (a == 10) { break } }
/test1.r
no_license
oussamasiyagh/compiler-for-R
R
false
false
90
r
## ggplot2 tutorial for plotting rnaseq data ## we will be using the Gocke 2016 striatum dataset once again #start off the script by cleaning up R workstation rm(list = ls()) #and by importing any necessary packages (if you don't have any of these, install them now) library(dplyr) library(reshape2) library(useful) library(ggplot2) library(plotly) library(ggpubr) #it can always be useful to set the working directory for this script at the beginning, so that you do #not need to give the full filepath everytime you are importing or saving data tables setwd('E:\\DATA\\rna_seq_datasets\\adult_striatum') ## OR ## ## setwd(choose.dir()) #time to read in our data! this is just the raw csv expression matrix, with a few misnamed columns df <- read.csv('gocke2016_taxonomy_mouse_striatum_GSE82187.csv') ### I advise AGAINST opening a preview of data this size, especially when there are a lot of variables (columns), R gets angry ### #to avoid that, we can print out representative portions within the terminal #head is very bad, not super helpful in this scenario head(df) #a different function (corner) that i found in the 'useful' package is better for glancing at this type of data corner(df) #returns all of the column names and row names of the dataframe, helpful, but a bit overwhelming colnames(df) row.names(df) #based on what we saw above, we probably don't care much about a few of the variable columns #let's get rid of them using the subset function df <- subset(df, select = -c(X, cell.name, experiment, protocol)) corner(df) #since we might be interested in the types of cells, let's make sure all the names are correct/non-overlapping unique(df$type) #get a quick summary of the number of times this conditional is met summary(df$type == 'Oligodendrocyte') #we can directly change the values in the 'type' column by indexing to that location and reassigning df$type[df$type == 'Astrocyte'] = 'Astro' df$type[df$type == 'neuron'] = 'Neuron' df$type[df$type == 'Oligodendrocyte'] = 'Oligo' unique(df$type) length(unique(df$type)) #now that we are confident are data is barebones and clean, let's use the 'melt' function from the #reshape package to get the expression data into tidy format df_tidy <- melt(df, value.name = 'expression', variable.name = 'gene') #now that we have tidy data, we can open up the preview and feel less worried about it crashing R :P View(df_tidy) #time to group our data, compute summary statistics on specific groups that constitute aesthetics we want to plot df_grouped <- df_tidy %>% group_by(type, gene) %>% summarise(mean_expression = mean(expression)) #basic plot of the data we have prepared! this will take a long, long time - we are trying to plot a dot for #every gene in the dataset ggplot(df_grouped, aes(x = type, y = mean_expression))+ geom_jitter() #colors might help it look prettier... ggplot(df_grouped, aes(x = type, y = mean_expression, color = type))+ geom_jitter() #what if we don't want to group along the x-axis by type? give the aes argument an arbitrary value! ggplot(df_grouped, aes(x = 1, y = mean_expression, color = type))+ geom_jitter() #okay we can't actually glean much info from that crowded scatter plot, let's narrow in on one gene gene_of_interest = 'Gapdh' goi <- filter(df_tidy, gene == gene_of_interest) goi_grouped <- filter(df_grouped, gene == gene_of_interest) ### okay now we have our data in a good format for plotting with ggplot!! 
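# Added sanity check (not in the original tutorial): after melting, each cell/gene pair is
# one row, so the tidy frame should have (number of cells) x (number of gene columns) rows;
# 'type' is the only non-numeric column and stays as the id variable.
nrow(df_tidy) == nrow(df) * (ncol(df) - 1)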
#let's start with a basic bar plot layered with all individidual values as dots ggplot(goi_grouped, aes(x = type, y = mean_expression, fill = type, color = type)) + geom_bar(stat = 'identity', aes(x = type, y = mean_expression, fill = type, color = type))+ geom_jitter(data = goi, aes(x = type, y = expression), size = 1.5, shape = 21, color = 'black', width = 0.2)+ theme_classic() #okay now line!! ggplot(goi_grouped, aes(x = type, y = mean_expression, fill = type, color = type)) + geom_line(aes(x = type, y = mean_expression, fill = type, color = type))+ geom_jitter(data = goi, aes(x = type, y = expression), size = 1.5, shape = 21, color = 'black', width = 0.2)+ theme_classic() #hmmm error #when plotting a line graph, you need a group variable! which in our case is arbitrary cuz there will only be #one line to group into ggplot(goi_grouped, aes(x = type, y = mean_expression, group = 1)) + geom_line(size = 1.5, color = 'gray')+ geom_jitter(data = goi, aes(x = type, y = expression, fill = type, color = type), size = 1.5, shape = 21, color = 'black', width = 0.2)+ ggtitle(paste(gene_of_interest, 'Expression'))+ theme_classic()+ theme(plot.title = element_text(hjust = 0.5)) #how to plot SEM error bars #first we should create a function that allows us to calculate the standard error while in a dplyr pipe sem <- function(x) sqrt(var(x)/length(x)) df_grouped <- df_tidy %>% group_by(type, gene) %>% summarise(sem = sem(expression), mean_expression = mean(expression)) goi <- filter(df_tidy, gene == gene_of_interest) goi_grouped <- filter(df_grouped, gene == gene_of_interest) ggplot(goi_grouped, aes(x = type, y = mean_expression, fill = type, color = type)) + geom_bar(stat = 'identity', aes(x = type, y = mean_expression, fill = type, color = type))+ geom_jitter(data = goi, aes(x = type, y = expression), size = 1.5, shape = 21, color = 'black', width = 0.2)+ geom_errorbar(aes(ymin = mean_expression - sem, ymax = mean_expression + sem), color = 'black', width = .1, size = 1.2)+ theme_classic() ### beeswarm plot (aka superplot) library(ggbeeswarm) ggplot(goi, aes(x = 0, y = expression, color = factor(type)))+ geom_beeswarm(cex = 3)+ geom_beeswarm(data = goi_grouped, aes(x = 0, y = mean_expression, color = factor(type)), size = 8)+ xlab('')+ ylab('Relative Expression')+ ggtitle(paste(gene_of_interest, 'Expression'))+ theme_bw()+ theme( legend.position="right", plot.title = element_text(hjust = 0.9, face = 'bold') ) ### new bar plot, this time with error bars/p values! 
# resources taken from this link: http://www.sthda.com/english/articles/24-ggpubr-publication-ready-plots/76-add-p-values-and-significance-levels-to-ggplots/ gene_of_interest <- 'Csf1r' df_grouped <- df_tidy %>% group_by(type, gene) %>% summarise(sem = sem(expression), expression = mean(expression)) goi <- filter(df_tidy, gene == gene_of_interest) goi_grouped <- filter(df_grouped, gene == gene_of_interest) my_comparisons <- list(c('Neuron', 'Microglia'), c('Astro', 'Vascular')) ggplot(goi, aes(x = type, y = expression, fill = type, color = type)) + geom_bar(data = goi_grouped, stat = 'identity', aes(x = type, y = expression, fill = type, color = type))+ geom_errorbar(data = goi_grouped, aes(ymin = expression - sem, ymax = expression + sem), color = 'black', width = .1, size = 1.2)+ geom_jitter(aes(x = type, y = expression), size = 1.5, shape = 21, color = 'black', width = 0.2)+ theme_classic()+ stat_compare_means(data = goi, method = 't.test', comparisons = my_comparisons) ### okay now let's practice with an ANOVA ## for this, we will probably want two different groups, maybe a two factor nominal variable such as sex? ## for the purposes of this tutorial we can randomly add male or female to every row in the dataframe df_tidy$sex <- sample(x = c('Male', 'Female'), size = nrow(df_tidy), replace = TRUE) gene_of_interest <- 'Csf1r' df_grouped <- df_tidy %>% group_by(type, gene, sex) %>% summarise(sem = sem(expression), expression = mean(expression)) goi <- filter(df_tidy, gene == gene_of_interest) goi_grouped <- filter(df_grouped, gene == gene_of_interest) ggplot(goi, aes(x = type, y = expression, fill = type, color = type)) + geom_bar(data = goi_grouped, stat = 'identity', aes(x = type, y = expression, fill = type, color = type))+ geom_errorbar(data = goi_grouped, aes(ymin = expression - sem, ymax = expression + sem), color = 'black', width = .1, size = 1.2)+ geom_jitter(aes(x = type, y = expression), size = 1.5, shape = 21, color = 'black', width = 0.2)+ theme_classic()+ facet_wrap(~sex)+ stat_compare_means(data = goi, method = 't.test', comparisons = my_comparisons) ggplot(goi, mapping = aes(x = type, y = expression, fill = interaction(sex, type), color = interaction(sex, type))) + geom_bar(data = goi_grouped, stat = 'identity', aes(x = type, y = expression), position = position_dodge(.9))+ geom_errorbar(data = goi_grouped, aes(ymin = expression - sem, ymax = expression + sem), color = 'black', width = .1, size = 1.2, position = position_dodge(.9))+ geom_jitter(aes(x = type, y = expression), size = 1.5, shape = 21, color = 'black', position = position_jitterdodge(jitter.width = 0, dodge.width = 0.9))+ theme_classic()+ stat_compare_means(data = goi, method = 't.test', aes(group = interaction(type,sex))) ggplot(goi, mapping = aes(x = type, y = expression, fill = interaction(sex, type), color = interaction(sex, type))) + geom_bar(data = goi_grouped, stat = 'identity', aes(x = type, y = expression), position = position_dodge(.9))+ geom_errorbar(data = goi_grouped, aes(ymin = expression - sem, ymax = expression + sem), color = 'black', width = .1, size = 1.2, position = position_dodge(.9))+ geom_jitter(aes(x = type, y = expression), size = 1.5, shape = 21, color = 'black', position = position_jitterdodge(jitter.width = 0, dodge.width = 0.9))+ theme_classic()+ stat_compare_means(data = goi, method = 't.test', comparisons = my_comparisons) ## okay now back to doing an ANOVA gene_of_interest <- 'Xist' goi <- filter(df_tidy, gene == gene_of_interest) aov_output <- aov(data = goi, expression ~ type 
* sex) anova_table <- data.frame(unclass(summary(aov_output)))
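# A minimal follow-up sketch, assuming the goi data frame built above (columns
# expression, type and sex): fit the two-factor ANOVA and then run Tukey HSD
# post-hoc comparisons on the cell-type factor. This illustrates the next step
# and is not part of the original tutorial.
aov_output <- aov(expression ~ type * sex, data = goi)
summary(aov_output)
TukeyHSD(aov_output, which = "type")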
# Source: /programming_tutorial_scripts/200511_ggplot_tutorial_rnaseq_plotting.R (repo: bendevlin18/programming-teaching-resources, R, 9,737 bytes, no license)
.plot_store <- function() {
  .last_plot <- NULL

  list(
    get = function() .last_plot,
    set = function(value) .last_plot <<- value
  )
}
.store <- .plot_store()

# Set last plot
# Set last plot created or modified
#
# @arguments plot to store
# @keyword internal
set_last_plot <- function(value) .store$set(value)

# Retrieve last plot modified/created.
# Whenever a plot is created or modified, it is recorded.
#
# @seealso \code{\link{ggsave}}
# @keyword hplot
last_plot <- function() .store$get()
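# The store above is a closure holding the most recent plot in a private
# variable. A minimal usage sketch (the object p is hypothetical, and the two
# helpers are assumed to be available in the evaluation environment; in the
# packaged code they may be internal):
library(ggplot2)
p <- ggplot(mtcars, aes(wt, mpg)) + geom_point()
set_last_plot(p)           # record the plot
identical(last_plot(), p)  # TRUE: retrieve the recorded plot
ggsave("last-plot.png", plot = last_plot())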
# Source: /R/plot-last.r (repo: genome-vendor/r-cran-ggplot2, R, 512 bytes, no license)
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. skip_if_not_available("dataset") skip_if_not_available("utf8proc") library(dplyr, warn.conflicts = FALSE) library(lubridate) library(stringr) library(stringi) test_that("paste, paste0, and str_c", { df <- tibble( v = c("A", "B", "C"), w = c("a", "b", "c"), x = c("d", NA_character_, "f"), y = c(NA_character_, "h", "i"), z = c(1.1, 2.2, NA) ) x <- Expression$field_ref("x") y <- Expression$field_ref("y") # no NAs in data compare_dplyr_binding( .input %>% transmute(paste(v, w)) %>% collect(), df ) compare_dplyr_binding( .input %>% transmute(paste(v, w, sep = "-")) %>% collect(), df ) compare_dplyr_binding( .input %>% transmute(paste0(v, w)) %>% collect(), df ) compare_dplyr_binding( .input %>% transmute(str_c(v, w)) %>% collect(), df ) compare_dplyr_binding( .input %>% transmute(str_c(v, w, sep = "+")) %>% collect(), df ) # NAs in data compare_dplyr_binding( .input %>% transmute(paste(x, y)) %>% collect(), df ) compare_dplyr_binding( .input %>% transmute(paste(x, y, sep = "-")) %>% collect(), df ) compare_dplyr_binding( .input %>% transmute(str_c(x, y)) %>% collect(), df ) # non-character column in dots compare_dplyr_binding( .input %>% transmute(paste0(x, y, z)) %>% collect(), df ) # literal string in dots compare_dplyr_binding( .input %>% transmute(paste(x, "foo", y)) %>% collect(), df ) # literal NA in dots compare_dplyr_binding( .input %>% transmute(paste(x, NA, y)) %>% collect(), df ) # expressions in dots compare_dplyr_binding( .input %>% transmute(paste0(x, toupper(y), as.character(z))) %>% collect(), df ) # sep is literal NA # errors in paste() (consistent with base::paste()) expect_error( nse_funcs$paste(x, y, sep = NA_character_), "Invalid separator" ) # emits null in str_c() (consistent with stringr::str_c()) compare_dplyr_binding( .input %>% transmute(str_c(x, y, sep = NA_character_)) %>% collect(), df ) # sep passed in dots to paste0 (which doesn't take a sep argument) compare_dplyr_binding( .input %>% transmute(paste0(x, y, sep = "-")) %>% collect(), df ) # known differences # arrow allows the separator to be an array expect_equal( df %>% Table$create() %>% transmute(result = paste(x, y, sep = w)) %>% collect(), df %>% transmute(result = paste(x, w, y, sep = "")) ) # expected errors # collapse argument not supported expect_error( nse_funcs$paste(x, y, collapse = ""), "collapse" ) expect_error( nse_funcs$paste0(x, y, collapse = ""), "collapse" ) expect_error( nse_funcs$str_c(x, y, collapse = ""), "collapse" ) # literal vectors of length != 1 not supported expect_error( nse_funcs$paste(x, character(0), y), "Literal vectors of length != 1 not supported in string concatenation" ) expect_error( nse_funcs$paste(x, c(",", ";"), y), "Literal vectors of length != 1 not supported in string concatenation" ) }) test_that("grepl 
with ignore.case = FALSE and fixed = TRUE", { df <- tibble(x = c("Foo", "bar")) compare_dplyr_binding( .input %>% filter(grepl("o", x, fixed = TRUE)) %>% collect(), df ) }) test_that("sub and gsub with ignore.case = FALSE and fixed = TRUE", { df <- tibble(x = c("Foo", "bar")) compare_dplyr_binding( .input %>% transmute(x = sub("Foo", "baz", x, fixed = TRUE)) %>% collect(), df ) compare_dplyr_binding( .input %>% transmute(x = gsub("o", "u", x, fixed = TRUE)) %>% collect(), df ) }) # many of the remainder of these tests require RE2 skip_if_not_available("re2") test_that("grepl", { df <- tibble(x = c("Foo", "bar")) for (fixed in c(TRUE, FALSE)) { compare_dplyr_binding( .input %>% filter(grepl("Foo", x, fixed = fixed)) %>% collect(), df ) compare_dplyr_binding( .input %>% transmute(x = grepl("^B.+", x, ignore.case = FALSE, fixed = fixed)) %>% collect(), df ) compare_dplyr_binding( .input %>% filter(grepl("Foo", x, ignore.case = FALSE, fixed = fixed)) %>% collect(), df ) } }) test_that("grepl with ignore.case = TRUE and fixed = TRUE", { df <- tibble(x = c("Foo", "bar")) # base::grepl() ignores ignore.case = TRUE with a warning when fixed = TRUE, # so we can't use compare_dplyr_binding() for these tests expect_equal( df %>% Table$create() %>% filter(grepl("O", x, ignore.case = TRUE, fixed = TRUE)) %>% collect(), tibble(x = "Foo") ) expect_equal( df %>% Table$create() %>% filter(x = grepl("^B.+", x, ignore.case = TRUE, fixed = TRUE)) %>% collect(), tibble(x = character(0)) ) }) test_that("str_detect", { df <- tibble(x = c("Foo", "bar")) compare_dplyr_binding( .input %>% filter(str_detect(x, regex("^F"))) %>% collect(), df ) compare_dplyr_binding( .input %>% transmute(x = str_detect(x, regex("^f[A-Z]{2}", ignore_case = TRUE))) %>% collect(), df ) compare_dplyr_binding( .input %>% transmute(x = str_detect(x, regex("^f[A-Z]{2}", ignore_case = TRUE), negate = TRUE)) %>% collect(), df ) compare_dplyr_binding( .input %>% filter(str_detect(x, fixed("o"))) %>% collect(), df ) compare_dplyr_binding( .input %>% filter(str_detect(x, fixed("O"))) %>% collect(), df ) compare_dplyr_binding( .input %>% filter(str_detect(x, fixed("O", ignore_case = TRUE))) %>% collect(), df ) compare_dplyr_binding( .input %>% filter(str_detect(x, fixed("O", ignore_case = TRUE), negate = TRUE)) %>% collect(), df ) }) test_that("sub and gsub", { df <- tibble(x = c("Foo", "bar")) for (fixed in c(TRUE, FALSE)) { compare_dplyr_binding( .input %>% transmute(x = sub("Foo", "baz", x, fixed = fixed)) %>% collect(), df ) compare_dplyr_binding( .input %>% transmute(x = sub("^B.+", "baz", x, ignore.case = FALSE, fixed = fixed)) %>% collect(), df ) compare_dplyr_binding( .input %>% transmute(x = sub("Foo", "baz", x, ignore.case = FALSE, fixed = fixed)) %>% collect(), df ) } }) test_that("sub and gsub with ignore.case = TRUE and fixed = TRUE", { df <- tibble(x = c("Foo", "bar")) # base::sub() and base::gsub() ignore ignore.case = TRUE with a warning when # fixed = TRUE, so we can't use compare_dplyr_binding() for these tests expect_equal( df %>% Table$create() %>% transmute(x = sub("O", "u", x, ignore.case = TRUE, fixed = TRUE)) %>% collect(), tibble(x = c("Fuo", "bar")) ) expect_equal( df %>% Table$create() %>% transmute(x = gsub("o", "u", x, ignore.case = TRUE, fixed = TRUE)) %>% collect(), tibble(x = c("Fuu", "bar")) ) expect_equal( df %>% Table$create() %>% transmute(x = sub("^B.+", "baz", x, ignore.case = TRUE, fixed = TRUE)) %>% collect(), df # unchanged ) }) test_that("str_replace and str_replace_all", { df <- tibble(x = c("Foo", 
"bar")) compare_dplyr_binding( .input %>% transmute(x = str_replace_all(x, "^F", "baz")) %>% collect(), df ) compare_dplyr_binding( .input %>% transmute(x = str_replace_all(x, regex("^F"), "baz")) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(x = str_replace(x, "^F[a-z]{2}", "baz")) %>% collect(), df ) compare_dplyr_binding( .input %>% transmute(x = str_replace(x, regex("^f[A-Z]{2}", ignore_case = TRUE), "baz")) %>% collect(), df ) compare_dplyr_binding( .input %>% transmute(x = str_replace_all(x, fixed("o"), "u")) %>% collect(), df ) compare_dplyr_binding( .input %>% transmute(x = str_replace(x, fixed("O"), "u")) %>% collect(), df ) compare_dplyr_binding( .input %>% transmute(x = str_replace(x, fixed("O", ignore_case = TRUE), "u")) %>% collect(), df ) }) test_that("strsplit and str_split", { df <- tibble(x = c("Foo and bar", "baz and qux and quux")) compare_dplyr_binding( .input %>% mutate(x = strsplit(x, "and")) %>% collect(), df, # `ignore_attr = TRUE` because the vctr coming back from arrow (ListArray) # has type information in it, but it's just a bare list from R/dplyr. ignore_attr = TRUE ) compare_dplyr_binding( .input %>% mutate(x = strsplit(x, "and.*", fixed = TRUE)) %>% collect(), df, ignore_attr = TRUE ) compare_dplyr_binding( .input %>% mutate(x = strsplit(x, " +and +")) %>% collect(), df, ignore_attr = TRUE ) compare_dplyr_binding( .input %>% mutate(x = str_split(x, "and")) %>% collect(), df, ignore_attr = TRUE ) compare_dplyr_binding( .input %>% mutate(x = str_split(x, "and", n = 2)) %>% collect(), df, ignore_attr = TRUE ) compare_dplyr_binding( .input %>% mutate(x = str_split(x, fixed("and"), n = 2)) %>% collect(), df, ignore_attr = TRUE ) compare_dplyr_binding( .input %>% mutate(x = str_split(x, regex("and"), n = 2)) %>% collect(), df, ignore_attr = TRUE ) compare_dplyr_binding( .input %>% mutate(x = str_split(x, "Foo|bar", n = 2)) %>% collect(), df, ignore_attr = TRUE ) }) test_that("strrep and str_dup", { df <- tibble(x = c("foo1", " \tB a R\n", "!apACHe aRroW!")) for (times in 0:8) { compare_dplyr_binding( .input %>% mutate(x = strrep(x, times)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(x = str_dup(x, times)) %>% collect(), df ) } }) test_that("str_to_lower, str_to_upper, and str_to_title", { df <- tibble(x = c("foo1", " \tB a R\n", "!apACHe aRroW!")) compare_dplyr_binding( .input %>% transmute( x_lower = str_to_lower(x), x_upper = str_to_upper(x), x_title = str_to_title(x) ) %>% collect(), df ) # Error checking a single function because they all use the same code path. 
expect_error( nse_funcs$str_to_lower("Apache Arrow", locale = "sp"), "Providing a value for 'locale' other than the default ('en') is not supported in Arrow", fixed = TRUE ) }) test_that("arrow_*_split_whitespace functions", { # use only ASCII whitespace characters df_ascii <- tibble(x = c("Foo\nand bar", "baz\tand qux and quux")) # use only non-ASCII whitespace characters df_utf8 <- tibble(x = c("Foo\u00A0and\u2000bar", "baz\u2006and\u1680qux\u3000and\u2008quux")) df_split <- tibble(x = list(c("Foo", "and", "bar"), c("baz", "and", "qux", "and", "quux"))) # use default option values expect_equal( df_ascii %>% Table$create() %>% mutate(x = arrow_ascii_split_whitespace(x)) %>% collect(), df_split, ignore_attr = TRUE ) expect_equal( df_utf8 %>% Table$create() %>% mutate(x = arrow_utf8_split_whitespace(x)) %>% collect(), df_split, ignore_attr = TRUE ) # specify non-default option values expect_equal( df_ascii %>% Table$create() %>% mutate( x = arrow_ascii_split_whitespace(x, options = list(max_splits = 1, reverse = TRUE)) ) %>% collect(), tibble(x = list(c("Foo\nand", "bar"), c("baz\tand qux and", "quux"))), ignore_attr = TRUE ) expect_equal( df_utf8 %>% Table$create() %>% mutate( x = arrow_utf8_split_whitespace(x, options = list(max_splits = 1, reverse = TRUE)) ) %>% collect(), tibble(x = list(c("Foo\u00A0and", "bar"), c("baz\u2006and\u1680qux\u3000and", "quux"))), ignore_attr = TRUE ) }) test_that("errors and warnings in string splitting", { # These conditions generate an error, but abandon_ship() catches the error, # issues a warning, and pulls the data into R (if computing on InMemoryDataset) # Elsewhere we test that abandon_ship() works, # so here we can just call the functions directly x <- Expression$field_ref("x") expect_error( nse_funcs$str_split(x, fixed("and", ignore_case = TRUE)), "Case-insensitive string splitting not supported in Arrow" ) expect_error( nse_funcs$str_split(x, coll("and.?")), "Pattern modifier `coll()` not supported in Arrow", fixed = TRUE ) expect_error( nse_funcs$str_split(x, boundary(type = "word")), "Pattern modifier `boundary()` not supported in Arrow", fixed = TRUE ) expect_error( nse_funcs$str_split(x, "and", n = 0), "Splitting strings into zero parts not supported in Arrow" ) # This condition generates a warning expect_warning( nse_funcs$str_split(x, fixed("and"), simplify = TRUE), "Argument 'simplify = TRUE' will be ignored" ) }) test_that("errors and warnings in string detection and replacement", { x <- Expression$field_ref("x") expect_error( nse_funcs$str_detect(x, boundary(type = "character")), "Pattern modifier `boundary()` not supported in Arrow", fixed = TRUE ) expect_error( nse_funcs$str_replace_all(x, coll("o", locale = "en"), "ó"), "Pattern modifier `coll()` not supported in Arrow", fixed = TRUE ) # This condition generates a warning expect_warning( nse_funcs$str_replace_all(x, regex("o", multiline = TRUE), "u"), "Ignoring pattern modifier argument not supported in Arrow: \"multiline\"" ) }) test_that("backreferences in pattern in string detection", { skip("RE2 does not support backreferences in pattern (https://github.com/google/re2/issues/101)") df <- tibble(x = c("Foo", "bar")) compare_dplyr_binding( .input %>% filter(str_detect(x, regex("F([aeiou])\\1"))) %>% collect(), df ) }) test_that("backreferences (substitutions) in string replacement", { df <- tibble(x = c("Foo", "bar")) compare_dplyr_binding( .input %>% transmute(desc = sub( "(?:https?|ftp)://([^/\r\n]+)(/[^\r\n]*)?", "path `\\2` on server `\\1`", url )) %>% collect(), tibble(url = 
"https://arrow.apache.org/docs/r/") ) compare_dplyr_binding( .input %>% transmute(x = str_replace(x, "^(\\w)o(.*)", "\\1\\2p")) %>% collect(), df ) compare_dplyr_binding( .input %>% transmute(x = str_replace(x, regex("^(\\w)o(.*)", ignore_case = TRUE), "\\1\\2p")) %>% collect(), df ) compare_dplyr_binding( .input %>% transmute(x = str_replace(x, regex("^(\\w)o(.*)", ignore_case = TRUE), "\\1\\2p")) %>% collect(), df ) }) test_that("edge cases in string detection and replacement", { # in case-insensitive fixed match/replace, test that "\\E" in the search # string and backslashes in the replacement string are interpreted literally. # this test does not use compare_dplyr_binding() because base::sub() and # base::grepl() do not support ignore.case = TRUE when fixed = TRUE. expect_equal( tibble(x = c("\\Q\\e\\D")) %>% Table$create() %>% filter(grepl("\\E", x, ignore.case = TRUE, fixed = TRUE)) %>% collect(), tibble(x = c("\\Q\\e\\D")) ) expect_equal( tibble(x = c("\\Q\\e\\D")) %>% Table$create() %>% transmute(x = sub("\\E", "\\L", x, ignore.case = TRUE, fixed = TRUE)) %>% collect(), tibble(x = c("\\Q\\L\\D")) ) # test that a user's "(?i)" prefix does not break the "(?i)" prefix that's # added in case-insensitive regex match/replace compare_dplyr_binding( .input %>% filter(grepl("(?i)^[abc]{3}$", x, ignore.case = TRUE, fixed = FALSE)) %>% collect(), tibble(x = c("ABC")) ) compare_dplyr_binding( .input %>% transmute(x = sub("(?i)^[abc]{3}$", "123", x, ignore.case = TRUE, fixed = FALSE)) %>% collect(), tibble(x = c("ABC")) ) }) test_that("strptime", { # base::strptime() defaults to local timezone # but arrow's strptime defaults to UTC. # So that tests are consistent, set the local timezone to UTC # TODO: consider reevaluating this workaround after ARROW-12980 withr::local_timezone("UTC") t_string <- tibble(x = c("2018-10-07 19:04:05", NA)) t_stamp <- tibble(x = c(lubridate::ymd_hms("2018-10-07 19:04:05"), NA)) expect_equal( t_string %>% Table$create() %>% mutate( x = strptime(x) ) %>% collect(), t_stamp, ignore_attr = "tzone" ) expect_equal( t_string %>% Table$create() %>% mutate( x = strptime(x, format = "%Y-%m-%d %H:%M:%S") ) %>% collect(), t_stamp, ignore_attr = "tzone" ) expect_equal( t_string %>% Table$create() %>% mutate( x = strptime(x, format = "%Y-%m-%d %H:%M:%S", unit = "ns") ) %>% collect(), t_stamp, ignore_attr = "tzone" ) expect_equal( t_string %>% Table$create() %>% mutate( x = strptime(x, format = "%Y-%m-%d %H:%M:%S", unit = "s") ) %>% collect(), t_stamp, ignore_attr = "tzone" ) tstring <- tibble(x = c("08-05-2008", NA)) tstamp <- strptime(c("08-05-2008", NA), format = "%m-%d-%Y") expect_equal( tstring %>% Table$create() %>% mutate( x = strptime(x, format = "%m-%d-%Y") ) %>% pull(), # R's strptime returns POSIXlt (list type) as.POSIXct(tstamp), ignore_attr = "tzone" ) }) test_that("errors in strptime", { # Error when tz is passed x <- Expression$field_ref("x") expect_error( nse_funcs$strptime(x, tz = "PDT"), "Time zone argument not supported in Arrow" ) }) test_that("strftime", { skip_on_os("windows") # https://issues.apache.org/jira/browse/ARROW-13168 times <- tibble( datetime = c(lubridate::ymd_hms("2018-10-07 19:04:05", tz = "Etc/GMT+6"), NA), date = c(as.Date("2021-01-01"), NA) ) formats <- "%a %A %w %d %b %B %m %y %Y %H %I %p %M %z %Z %j %U %W %x %X %% %G %V %u" formats_date <- "%a %A %w %d %b %B %m %y %Y %H %I %p %M %j %U %W %x %X %% %G %V %u" compare_dplyr_binding( .input %>% mutate(x = strftime(datetime, format = formats)) %>% collect(), times ) compare_dplyr_binding( .input 
%>% mutate(x = strftime(date, format = formats_date)) %>% collect(), times ) compare_dplyr_binding( .input %>% mutate(x = strftime(datetime, format = formats, tz = "Pacific/Marquesas")) %>% collect(), times ) compare_dplyr_binding( .input %>% mutate(x = strftime(datetime, format = formats, tz = "EST", usetz = TRUE)) %>% collect(), times ) withr::with_timezone( "Pacific/Marquesas", { compare_dplyr_binding( .input %>% mutate( x = strftime(datetime, format = formats, tz = "EST"), x_date = strftime(date, format = formats_date, tz = "EST") ) %>% collect(), times ) compare_dplyr_binding( .input %>% mutate( x = strftime(datetime, format = formats), x_date = strftime(date, format = formats_date) ) %>% collect(), times ) } ) # This check is due to differences in the way %c currently works in Arrow and R's strftime. # We can revisit after https://github.com/HowardHinnant/date/issues/704 is resolved. expect_error( times %>% Table$create() %>% mutate(x = strftime(datetime, format = "%c")) %>% collect(), "%c flag is not supported in non-C locales." ) # Output precision of %S depends on the input timestamp precision. # Timestamps with second precision are represented as integers while # milliseconds, microsecond and nanoseconds are represented as fixed floating # point numbers with 3, 6 and 9 decimal places respectively. compare_dplyr_binding( .input %>% mutate(x = strftime(datetime, format = "%S")) %>% transmute(as.double(substr(x, 1, 2))) %>% collect(), times, tolerance = 1e-6 ) }) test_that("format_ISO8601", { skip_on_os("windows") # https://issues.apache.org/jira/browse/ARROW-13168 times <- tibble(x = c(lubridate::ymd_hms("2018-10-07 19:04:05", tz = "Etc/GMT+6"), NA)) compare_dplyr_binding( .input %>% mutate(x = format_ISO8601(x, precision = "ymd", usetz = FALSE)) %>% collect(), times ) if (getRversion() < "3.5") { # before 3.5, times$x will have no timezone attribute, so Arrow faithfully # errors that there is no timezone to format: expect_error( times %>% Table$create() %>% mutate(x = format_ISO8601(x, precision = "ymd", usetz = TRUE)) %>% collect(), "Timezone not present, cannot convert to string with timezone: %Y-%m-%d%z" ) # See comment regarding %S flag in strftime tests expect_error( times %>% Table$create() %>% mutate(x = format_ISO8601(x, precision = "ymdhms", usetz = TRUE)) %>% mutate(x = gsub("\\.0*", "", x)) %>% collect(), "Timezone not present, cannot convert to string with timezone: %Y-%m-%dT%H:%M:%S%z" ) } else { compare_dplyr_binding( .input %>% mutate(x = format_ISO8601(x, precision = "ymd", usetz = TRUE)) %>% collect(), times ) # See comment regarding %S flag in strftime tests compare_dplyr_binding( .input %>% mutate(x = format_ISO8601(x, precision = "ymdhms", usetz = TRUE)) %>% mutate(x = gsub("\\.0*", "", x)) %>% collect(), times ) } # See comment regarding %S flag in strftime tests compare_dplyr_binding( .input %>% mutate(x = format_ISO8601(x, precision = "ymdhms", usetz = FALSE)) %>% mutate(x = gsub("\\.0*", "", x)) %>% collect(), times ) }) test_that("arrow_find_substring and arrow_find_substring_regex", { df <- tibble(x = c("Foo and Bar", "baz and qux and quux")) expect_equal( df %>% Table$create() %>% mutate(x = arrow_find_substring(x, options = list(pattern = "b"))) %>% collect(), tibble(x = c(-1, 0)) ) expect_equal( df %>% Table$create() %>% mutate(x = arrow_find_substring( x, options = list(pattern = "b", ignore_case = TRUE) )) %>% collect(), tibble(x = c(8, 0)) ) expect_equal( df %>% Table$create() %>% mutate(x = arrow_find_substring_regex( x, options = list(pattern = 
"^[fb]") )) %>% collect(), tibble(x = c(-1, 0)) ) expect_equal( df %>% Table$create() %>% mutate(x = arrow_find_substring_regex( x, options = list(pattern = "[AEIOU]", ignore_case = TRUE) )) %>% collect(), tibble(x = c(1, 1)) ) }) test_that("stri_reverse and arrow_ascii_reverse functions", { df_ascii <- tibble(x = c("Foo\nand bar", "baz\tand qux and quux")) df_utf8 <- tibble(x = c("Foo\u00A0\u0061nd\u00A0bar", "\u0062az\u00A0and\u00A0qux\u3000and\u00A0quux")) compare_dplyr_binding( .input %>% mutate(x = stri_reverse(x)) %>% collect(), df_utf8 ) compare_dplyr_binding( .input %>% mutate(x = stri_reverse(x)) %>% collect(), df_ascii ) expect_equal( df_ascii %>% Table$create() %>% mutate(x = arrow_ascii_reverse(x)) %>% collect(), tibble(x = c("rab dna\nooF", "xuuq dna xuq dna\tzab")) ) expect_error( df_utf8 %>% Table$create() %>% mutate(x = arrow_ascii_reverse(x)) %>% collect(), "Invalid: Non-ASCII sequence in input" ) }) test_that("str_like", { df <- tibble(x = c("Foo and bar", "baz and qux and quux")) # TODO: After new version of stringr with str_like has been released, update all # these tests to use compare_dplyr_binding # No match - entire string expect_equal( df %>% Table$create() %>% mutate(x = str_like(x, "baz")) %>% collect(), tibble(x = c(FALSE, FALSE)) ) # Match - entire string expect_equal( df %>% Table$create() %>% mutate(x = str_like(x, "Foo and bar")) %>% collect(), tibble(x = c(TRUE, FALSE)) ) # Wildcard expect_equal( df %>% Table$create() %>% mutate(x = str_like(x, "f%", ignore_case = TRUE)) %>% collect(), tibble(x = c(TRUE, FALSE)) ) # Ignore case expect_equal( df %>% Table$create() %>% mutate(x = str_like(x, "f%", ignore_case = FALSE)) %>% collect(), tibble(x = c(FALSE, FALSE)) ) # Single character expect_equal( df %>% Table$create() %>% mutate(x = str_like(x, "_a%")) %>% collect(), tibble(x = c(FALSE, TRUE)) ) # This will give an error until a new version of stringr with str_like has been released skip_if_not(packageVersion("stringr") > "1.4.0") compare_dplyr_binding( .input %>% mutate(x = str_like(x, "%baz%")) %>% collect(), df ) }) test_that("str_pad", { df <- tibble(x = c("Foo and bar", "baz and qux and quux")) compare_dplyr_binding( .input %>% mutate(x = str_pad(x, width = 31)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(x = str_pad(x, width = 30, side = "right")) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(x = str_pad(x, width = 31, side = "left", pad = "+")) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(x = str_pad(x, width = 10, side = "left", pad = "+")) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(x = str_pad(x, width = 31, side = "both")) %>% collect(), df ) }) test_that("substr", { df <- tibble(x = "Apache Arrow") compare_dplyr_binding( .input %>% mutate(y = substr(x, 1, 6)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = substr(x, 0, 6)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = substr(x, -1, 6)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = substr(x, 6, 1)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = substr(x, -1, -2)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = substr(x, 9, 6)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = substr(x, 1, 6)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = substr(x, 8, 12)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = substr(x, -5, -1)) %>% collect(), df ) expect_error( nse_funcs$substr("Apache Arrow", c(1, 2), 
3), "`start` must be length 1 - other lengths are not supported in Arrow" ) expect_error( nse_funcs$substr("Apache Arrow", 1, c(2, 3)), "`stop` must be length 1 - other lengths are not supported in Arrow" ) }) test_that("substring", { # nse_funcs$substring just calls nse_funcs$substr, tested extensively above df <- tibble(x = "Apache Arrow") compare_dplyr_binding( .input %>% mutate(y = substring(x, 1, 6)) %>% collect(), df ) }) test_that("str_sub", { df <- tibble(x = "Apache Arrow") compare_dplyr_binding( .input %>% mutate(y = str_sub(x, 1, 6)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = str_sub(x, 0, 6)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = str_sub(x, -1, 6)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = str_sub(x, 6, 1)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = str_sub(x, -1, -2)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = str_sub(x, -1, 3)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = str_sub(x, 9, 6)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = str_sub(x, 1, 6)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = str_sub(x, 8, 12)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = str_sub(x, -5, -1)) %>% collect(), df ) expect_error( nse_funcs$str_sub("Apache Arrow", c(1, 2), 3), "`start` must be length 1 - other lengths are not supported in Arrow" ) expect_error( nse_funcs$str_sub("Apache Arrow", 1, c(2, 3)), "`end` must be length 1 - other lengths are not supported in Arrow" ) }) test_that("str_starts, str_ends, startsWith, endsWith", { df <- tibble(x = c("Foo", "bar", "baz", "qux")) compare_dplyr_binding( .input %>% filter(str_starts(x, "b.*")) %>% collect(), df ) compare_dplyr_binding( .input %>% filter(str_starts(x, "b.*", negate = TRUE)) %>% collect(), df ) compare_dplyr_binding( .input %>% filter(str_starts(x, fixed("b.*"))) %>% collect(), df ) compare_dplyr_binding( .input %>% filter(str_starts(x, fixed("b"))) %>% collect(), df ) compare_dplyr_binding( .input %>% filter(str_ends(x, "r")) %>% collect(), df ) compare_dplyr_binding( .input %>% filter(str_ends(x, "r", negate = TRUE)) %>% collect(), df ) compare_dplyr_binding( .input %>% filter(str_ends(x, fixed("r$"))) %>% collect(), df ) compare_dplyr_binding( .input %>% filter(str_ends(x, fixed("r"))) %>% collect(), df ) compare_dplyr_binding( .input %>% filter(startsWith(x, "b")) %>% collect(), df ) compare_dplyr_binding( .input %>% filter(endsWith(x, "r")) %>% collect(), df ) compare_dplyr_binding( .input %>% filter(startsWith(x, "b.*")) %>% collect(), df ) compare_dplyr_binding( .input %>% filter(endsWith(x, "r$")) %>% collect(), df ) }) test_that("str_count", { df <- tibble( cities = c("Kolkata", "Dar es Salaam", "Tel Aviv", "San Antonio", "Cluj Napoca", "Bern", "Bogota"), dots = c("a.", "...", ".a.a", "a..a.", "ab...", "dse....", ".f..d..") ) compare_dplyr_binding( .input %>% mutate(a_count = str_count(cities, pattern = "a")) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(p_count = str_count(cities, pattern = "d")) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(p_count = str_count(cities, pattern = regex("d", ignore_case = TRUE) )) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(e_count = str_count(cities, pattern = "u")) %>% collect(), df ) # nse_funcs$str_count() is not vectorised over pattern compare_dplyr_binding( .input %>% mutate(let_count = str_count(cities, pattern = c("a", "b", "e", "g", "p", 
"n", "s"))) %>% collect(), df, warning = TRUE ) compare_dplyr_binding( .input %>% mutate(dots_count = str_count(dots, ".")) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(dots_count = str_count(dots, fixed("."))) %>% collect(), df ) })
# Source: /r/tests/testthat/test-dplyr-funcs-string.R (repo: romainfrancois/arrow, R, 32,667 bytes, permissive license)
%>% mutate(x = strftime(date, format = formats_date)) %>% collect(), times ) compare_dplyr_binding( .input %>% mutate(x = strftime(datetime, format = formats, tz = "Pacific/Marquesas")) %>% collect(), times ) compare_dplyr_binding( .input %>% mutate(x = strftime(datetime, format = formats, tz = "EST", usetz = TRUE)) %>% collect(), times ) withr::with_timezone( "Pacific/Marquesas", { compare_dplyr_binding( .input %>% mutate( x = strftime(datetime, format = formats, tz = "EST"), x_date = strftime(date, format = formats_date, tz = "EST") ) %>% collect(), times ) compare_dplyr_binding( .input %>% mutate( x = strftime(datetime, format = formats), x_date = strftime(date, format = formats_date) ) %>% collect(), times ) } ) # This check is due to differences in the way %c currently works in Arrow and R's strftime. # We can revisit after https://github.com/HowardHinnant/date/issues/704 is resolved. expect_error( times %>% Table$create() %>% mutate(x = strftime(datetime, format = "%c")) %>% collect(), "%c flag is not supported in non-C locales." ) # Output precision of %S depends on the input timestamp precision. # Timestamps with second precision are represented as integers while # milliseconds, microsecond and nanoseconds are represented as fixed floating # point numbers with 3, 6 and 9 decimal places respectively. compare_dplyr_binding( .input %>% mutate(x = strftime(datetime, format = "%S")) %>% transmute(as.double(substr(x, 1, 2))) %>% collect(), times, tolerance = 1e-6 ) }) test_that("format_ISO8601", { skip_on_os("windows") # https://issues.apache.org/jira/browse/ARROW-13168 times <- tibble(x = c(lubridate::ymd_hms("2018-10-07 19:04:05", tz = "Etc/GMT+6"), NA)) compare_dplyr_binding( .input %>% mutate(x = format_ISO8601(x, precision = "ymd", usetz = FALSE)) %>% collect(), times ) if (getRversion() < "3.5") { # before 3.5, times$x will have no timezone attribute, so Arrow faithfully # errors that there is no timezone to format: expect_error( times %>% Table$create() %>% mutate(x = format_ISO8601(x, precision = "ymd", usetz = TRUE)) %>% collect(), "Timezone not present, cannot convert to string with timezone: %Y-%m-%d%z" ) # See comment regarding %S flag in strftime tests expect_error( times %>% Table$create() %>% mutate(x = format_ISO8601(x, precision = "ymdhms", usetz = TRUE)) %>% mutate(x = gsub("\\.0*", "", x)) %>% collect(), "Timezone not present, cannot convert to string with timezone: %Y-%m-%dT%H:%M:%S%z" ) } else { compare_dplyr_binding( .input %>% mutate(x = format_ISO8601(x, precision = "ymd", usetz = TRUE)) %>% collect(), times ) # See comment regarding %S flag in strftime tests compare_dplyr_binding( .input %>% mutate(x = format_ISO8601(x, precision = "ymdhms", usetz = TRUE)) %>% mutate(x = gsub("\\.0*", "", x)) %>% collect(), times ) } # See comment regarding %S flag in strftime tests compare_dplyr_binding( .input %>% mutate(x = format_ISO8601(x, precision = "ymdhms", usetz = FALSE)) %>% mutate(x = gsub("\\.0*", "", x)) %>% collect(), times ) }) test_that("arrow_find_substring and arrow_find_substring_regex", { df <- tibble(x = c("Foo and Bar", "baz and qux and quux")) expect_equal( df %>% Table$create() %>% mutate(x = arrow_find_substring(x, options = list(pattern = "b"))) %>% collect(), tibble(x = c(-1, 0)) ) expect_equal( df %>% Table$create() %>% mutate(x = arrow_find_substring( x, options = list(pattern = "b", ignore_case = TRUE) )) %>% collect(), tibble(x = c(8, 0)) ) expect_equal( df %>% Table$create() %>% mutate(x = arrow_find_substring_regex( x, options = list(pattern = 
"^[fb]") )) %>% collect(), tibble(x = c(-1, 0)) ) expect_equal( df %>% Table$create() %>% mutate(x = arrow_find_substring_regex( x, options = list(pattern = "[AEIOU]", ignore_case = TRUE) )) %>% collect(), tibble(x = c(1, 1)) ) }) test_that("stri_reverse and arrow_ascii_reverse functions", { df_ascii <- tibble(x = c("Foo\nand bar", "baz\tand qux and quux")) df_utf8 <- tibble(x = c("Foo\u00A0\u0061nd\u00A0bar", "\u0062az\u00A0and\u00A0qux\u3000and\u00A0quux")) compare_dplyr_binding( .input %>% mutate(x = stri_reverse(x)) %>% collect(), df_utf8 ) compare_dplyr_binding( .input %>% mutate(x = stri_reverse(x)) %>% collect(), df_ascii ) expect_equal( df_ascii %>% Table$create() %>% mutate(x = arrow_ascii_reverse(x)) %>% collect(), tibble(x = c("rab dna\nooF", "xuuq dna xuq dna\tzab")) ) expect_error( df_utf8 %>% Table$create() %>% mutate(x = arrow_ascii_reverse(x)) %>% collect(), "Invalid: Non-ASCII sequence in input" ) }) test_that("str_like", { df <- tibble(x = c("Foo and bar", "baz and qux and quux")) # TODO: After new version of stringr with str_like has been released, update all # these tests to use compare_dplyr_binding # No match - entire string expect_equal( df %>% Table$create() %>% mutate(x = str_like(x, "baz")) %>% collect(), tibble(x = c(FALSE, FALSE)) ) # Match - entire string expect_equal( df %>% Table$create() %>% mutate(x = str_like(x, "Foo and bar")) %>% collect(), tibble(x = c(TRUE, FALSE)) ) # Wildcard expect_equal( df %>% Table$create() %>% mutate(x = str_like(x, "f%", ignore_case = TRUE)) %>% collect(), tibble(x = c(TRUE, FALSE)) ) # Ignore case expect_equal( df %>% Table$create() %>% mutate(x = str_like(x, "f%", ignore_case = FALSE)) %>% collect(), tibble(x = c(FALSE, FALSE)) ) # Single character expect_equal( df %>% Table$create() %>% mutate(x = str_like(x, "_a%")) %>% collect(), tibble(x = c(FALSE, TRUE)) ) # This will give an error until a new version of stringr with str_like has been released skip_if_not(packageVersion("stringr") > "1.4.0") compare_dplyr_binding( .input %>% mutate(x = str_like(x, "%baz%")) %>% collect(), df ) }) test_that("str_pad", { df <- tibble(x = c("Foo and bar", "baz and qux and quux")) compare_dplyr_binding( .input %>% mutate(x = str_pad(x, width = 31)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(x = str_pad(x, width = 30, side = "right")) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(x = str_pad(x, width = 31, side = "left", pad = "+")) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(x = str_pad(x, width = 10, side = "left", pad = "+")) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(x = str_pad(x, width = 31, side = "both")) %>% collect(), df ) }) test_that("substr", { df <- tibble(x = "Apache Arrow") compare_dplyr_binding( .input %>% mutate(y = substr(x, 1, 6)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = substr(x, 0, 6)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = substr(x, -1, 6)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = substr(x, 6, 1)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = substr(x, -1, -2)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = substr(x, 9, 6)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = substr(x, 1, 6)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = substr(x, 8, 12)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = substr(x, -5, -1)) %>% collect(), df ) expect_error( nse_funcs$substr("Apache Arrow", c(1, 2), 
3), "`start` must be length 1 - other lengths are not supported in Arrow" ) expect_error( nse_funcs$substr("Apache Arrow", 1, c(2, 3)), "`stop` must be length 1 - other lengths are not supported in Arrow" ) }) test_that("substring", { # nse_funcs$substring just calls nse_funcs$substr, tested extensively above df <- tibble(x = "Apache Arrow") compare_dplyr_binding( .input %>% mutate(y = substring(x, 1, 6)) %>% collect(), df ) }) test_that("str_sub", { df <- tibble(x = "Apache Arrow") compare_dplyr_binding( .input %>% mutate(y = str_sub(x, 1, 6)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = str_sub(x, 0, 6)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = str_sub(x, -1, 6)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = str_sub(x, 6, 1)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = str_sub(x, -1, -2)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = str_sub(x, -1, 3)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = str_sub(x, 9, 6)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = str_sub(x, 1, 6)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = str_sub(x, 8, 12)) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(y = str_sub(x, -5, -1)) %>% collect(), df ) expect_error( nse_funcs$str_sub("Apache Arrow", c(1, 2), 3), "`start` must be length 1 - other lengths are not supported in Arrow" ) expect_error( nse_funcs$str_sub("Apache Arrow", 1, c(2, 3)), "`end` must be length 1 - other lengths are not supported in Arrow" ) }) test_that("str_starts, str_ends, startsWith, endsWith", { df <- tibble(x = c("Foo", "bar", "baz", "qux")) compare_dplyr_binding( .input %>% filter(str_starts(x, "b.*")) %>% collect(), df ) compare_dplyr_binding( .input %>% filter(str_starts(x, "b.*", negate = TRUE)) %>% collect(), df ) compare_dplyr_binding( .input %>% filter(str_starts(x, fixed("b.*"))) %>% collect(), df ) compare_dplyr_binding( .input %>% filter(str_starts(x, fixed("b"))) %>% collect(), df ) compare_dplyr_binding( .input %>% filter(str_ends(x, "r")) %>% collect(), df ) compare_dplyr_binding( .input %>% filter(str_ends(x, "r", negate = TRUE)) %>% collect(), df ) compare_dplyr_binding( .input %>% filter(str_ends(x, fixed("r$"))) %>% collect(), df ) compare_dplyr_binding( .input %>% filter(str_ends(x, fixed("r"))) %>% collect(), df ) compare_dplyr_binding( .input %>% filter(startsWith(x, "b")) %>% collect(), df ) compare_dplyr_binding( .input %>% filter(endsWith(x, "r")) %>% collect(), df ) compare_dplyr_binding( .input %>% filter(startsWith(x, "b.*")) %>% collect(), df ) compare_dplyr_binding( .input %>% filter(endsWith(x, "r$")) %>% collect(), df ) }) test_that("str_count", { df <- tibble( cities = c("Kolkata", "Dar es Salaam", "Tel Aviv", "San Antonio", "Cluj Napoca", "Bern", "Bogota"), dots = c("a.", "...", ".a.a", "a..a.", "ab...", "dse....", ".f..d..") ) compare_dplyr_binding( .input %>% mutate(a_count = str_count(cities, pattern = "a")) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(p_count = str_count(cities, pattern = "d")) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(p_count = str_count(cities, pattern = regex("d", ignore_case = TRUE) )) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(e_count = str_count(cities, pattern = "u")) %>% collect(), df ) # nse_funcs$str_count() is not vectorised over pattern compare_dplyr_binding( .input %>% mutate(let_count = str_count(cities, pattern = c("a", "b", "e", "g", "p", 
"n", "s"))) %>% collect(), df, warning = TRUE ) compare_dplyr_binding( .input %>% mutate(dots_count = str_count(dots, ".")) %>% collect(), df ) compare_dplyr_binding( .input %>% mutate(dots_count = str_count(dots, fixed("."))) %>% collect(), df ) })
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/db-setup-tools.R
\name{demo_db_create}
\alias{demo_db_create}
\title{Create demo database}
\usage{
demo_db_create(db_type = "nucleotide", n = 100)
}
\arguments{
\item{db_type}{character, database type}

\item{n}{integer, number of mock sequences}
}
\description{
Creates a local mock SQL database from package test data for
demonstration purposes. No internet connection required.
}
\examples{
library(restez)
# set the restez path to a temporary dir
restez_path_set(filepath = tempdir())
# create demo database
demo_db_create(n = 5)
restez_connect()
# in the demo, IDs are 'demo_1', 'demo_2' ...
(gb_sequence_get(id = 'demo_1'))
# Delete a demo database after an example
db_delete(everything = TRUE)
}
\seealso{
Other database: \code{\link{count_db_ids}},
\code{\link{db_create}}, \code{\link{db_delete}},
\code{\link{db_download}}, \code{\link{is_in_db}},
\code{\link{list_db_ids}}
}
\concept{database}
path: /man/demo_db_create.Rd | license_type: permissive | repo_name: rheiland/restez | language: R | is_vendor: false | is_generated: true | length_bytes: 990 | extension: rd
library(optband)


### Name: psi
### Title: The psi function
### Aliases: psi
### Keywords: internal

### ** Examples

psi(.1)
path: /data/genthat_extracted_code/optband/examples/psi.Rd.R | license_type: no_license | repo_name: surayaaramli/typeRrh | language: R | is_vendor: false | is_generated: false | length_bytes: 131 | extension: r
Dtheta <- function(xvec, yvec, h, driftfun, difffun1, c0)
{
    # G = Rdtq2d(thetavec, x1, x2, h, numsteps, k, bigm)
    G = integrandmat(xvec, yvec, h, driftfun, difffun1, c0)
    nn = length(xvec)
    mm = length(yvec)

    # each column same and replicated, rows have xvec
    X = replicate(mm, xvec)
    # each row same, columns have yvec
    Y = t(replicate(nn, yvec))

    # change the derivatives!
    f = driftfun(c0, Y)
    g = difffun1(c0, Y)
    part1 = (X - Y - f*h)

    # these derivatives are the same regardless of what f and g you have
    dGdf = G*part1/g^2
    dGdg = G*(-1/g + (part1)^2/(g^3*h))

    # make everything a list so that we can handle high-dimensional problems
    derivatives = list(NULL)
    dfdtheta = list(NULL)
    dgdtheta = list(NULL)

    dfdtheta[[1]] = f/c0[1]
    dgdtheta[[1]] = 0

    dfdtheta[[2]] = c0[1]*Y
    dgdtheta[[2]] = 0

    dfdtheta[[3]] = 0
    dgdtheta[[3]] = 1

    # chain rule!
    for (i in c(1:3))
        derivatives[[i]] = dGdf * dfdtheta[[i]] + dGdg * dgdtheta[[i]]

    return(derivatives)
}
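# Hypothetical usage sketch for Dtheta(). It assumes integrandmat() has
# already been sourced from this repository, and that the drift/diffusion
# functions follow the form implied by the hard-coded derivative blocks
# above (f = c0[1]*c0[2]*y, constant diffusion c0[3]); the grids, step size
# and parameter values below are illustrative, not taken from the original runs.
driftfun_demo <- function(c0, y) c0[1] * c0[2] * y
difffun_demo  <- function(c0, y) c0[3] + 0 * y

xvec <- seq(-2, 2, by = 0.1)   # grid at the current time step
yvec <- seq(-2, 2, by = 0.1)   # grid at the previous time step
h    <- 0.01                   # internal time step
c0   <- c(0.5, 1, 0.25)        # theta = (c0[1], c0[2], c0[3])

derivs <- Dtheta(xvec, yvec, h, driftfun_demo, difffun_demo, c0)
str(derivs)  # list of three matrices: dG/dtheta_1, dG/dtheta_2, dG/dtheta_3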
path: /HMCtesting/HMC2d/Dtheta.R | license_type: no_license | repo_name: hbhat4000/sdeinference | language: R | is_vendor: false | is_generated: false | length_bytes: 1,068 | extension: r
rm(list=ls()) if(!require("shiny")) {install.packages("shiny")} if(!require("ggplot2")) install.packages("ggplot2") if(!require("ggcorrplot")) install.packages("ggcorrplot") library(ggplot2) library(ggcorrplot) library(shiny) shinyApp( ui = tagList( # shinythemes::themeSelector(), navbarPage( # theme = "cerulean", # <--- To use a theme, uncomment this "shinythemes", titlePanel('Parameters'), #Navbar 1 # has sidepanel tabPanel("1. Descriptive Techniques", # Sidebar panel for inputs ---- sidebarPanel( # Input: Slider for the number of bins ---- tags$hr(), checkboxInput('header', 'Header', TRUE), radioButtons(inputId = 'sep', label = 'Separator', choices = c(Comma=',', Semicolon=';', Tab='\t')), radioButtons('quote', 'Quote', c(None='', 'Double Quote'='"', 'Single Quote'="'")), fileInput('file', 'Choose CSV File', accept=c('text/csv', 'text/comma-separated-values,text/plain', '.csv') ), conditionalPanel(condition = "output.fileUploaded", radioButtons(inputId = "plottype", label = "Select Plot Type", choices = c("Scatterplot", "Correlogram", "Histogram"), selected = "") ), conditionalPanel( condition = "input.plottype == 'Scatterplot'", selectInput(inputId = "Numeric", label = "Select Numeric", choices = ""), selectInput(inputId = "Categorical", label = "Select Categorical", choices = ""), selectizeInput(inputId = "NumberSel", label = "Select 2 numbers", choices = ""), actionButton(inputId = "runScatter", "Create Scatterplot"), checkboxGroupInput(inputId = "FieldSelection", label = "Select Fields for Table Display") ) ), mainPanel( tabsetPanel( tabPanel("Table", #h4("Table"), #tableOutput("table"), #h4("Verbatim text output"), #verbatimTextOutput("txtout"), #h1("Header 1"), #h2("Header 2"), #h3("Header 3"), #h4("Header 4"), #h5("Header 5") ), tabPanel("GenPlot", "INSERT CODE HERE AND BELOW, BEWARE OF BRACKETS"), tabPanel("Plot"), tabPanel("Pie"), tabPanel("Scatterplot"), tabPanel("Correlogram"), tabPanel("Str") ) ) ), #Navbar 2 # currently empty sidepanel tabPanel("2. Probability Models", mainPanel( tabsetPanel( tabPanel("Discreet Model", "INSERT CODE HERE AND BELOW, BEWARE OF BRACKETS"), tabPanel("Continuous Model", "INSERT CODE HERE AND BELOW, BEWARE OF BRACKETS") ) ) ), #Navbar 3 # currently empty sidepanel tabPanel("3. Hypothesis Testing", "INSERT CODE HERE AND BELOW, BEWARE OF BRACKETS"), #Navbar 4 # currently empty sidepanel tabPanel("4. General Linear Models", "INSERT CODE HERE AND BELOW, BEWARE OF BRACKETS") ) ), server = function(input, output) { output$txtout <- renderText({ paste(input$txt, input$slider, format(input$date), sep = ", ") }) output$table <- renderTable({ head(cars, 4) }) } ) shinyApp(ui = ui, server = server)
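# Note on the app above: the sidebar uses
# conditionalPanel(condition = "output.fileUploaded", ...), but the server
# function never defines output$fileUploaded, so that panel can never appear.
# The sketch below shows one conventional way to back that condition; the
# reactive's logic and the read.csv() call are assumptions about the app's
# intent, not code from the original file.
server_sketch <- function(input, output) {
  # Parse the uploaded CSV using the sidebar's header/sep/quote inputs;
  # downstream plot and table outputs would read from dataset().
  dataset <- reactive({
    req(input$file)
    read.csv(input$file$datapath,
             header = input$header, sep = input$sep, quote = input$quote)
  })

  # Flag consumed by the UI's conditionalPanel().
  output$fileUploaded <- reactive({
    !is.null(input$file)
  })
  # conditionalPanel() needs this value even while the panel is hidden.
  outputOptions(output, "fileUploaded", suspendWhenHidden = FALSE)
}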
path: /shiny_1.1.R | license_type: no_license | repo_name: smileyowley/B9DA101_shiny | language: R | is_vendor: false | is_generated: false | length_bytes: 4,221 | extension: r
require("RcppArmadillo") require("Rcpp") require("FKF") require("dlm") library("GKF") Plot <- 0 Arima21 <- 1 LocalLevel <- 0 LinearGrowth <- 0 DynamicCAPM <- 0 if (Arima21){ ## <---------------------------------------------------------------------------> ## Example 1: ARMA(2, 1) model estimation. ## <---------------------------------------------------------------------------> ## This example shows how to fit an ARMA(2, 1) model using this Kalman ## filter implementation (see also stats' makeARIMA and KalmanRun). n <- 1000 ## Set the AR parameters ar1 <- 0.6 ar2 <- 0.2 ma1 <- -0.2 sigma <- sqrt(0.2) ## Sample from an ARMA(2, 1) process a <- arima.sim(model = list(ar = c(ar1, ar2), ma = ma1), n = n,innov = rnorm(n) * sigma) ## Create a state space representation out of the four ARMA parameters arma21ss <- function(ar1, ar2, ma1, sigma) { Tt <- matrix(c(ar1, ar2, 1, 0), ncol = 2) Zt <- matrix(c(1, 0), ncol = 2) ct <- matrix(0) dt <- matrix(0, nrow = 2) GGt <- matrix(0) H <- matrix(c(1, ma1), nrow = 2) * sigma HHt <- H %*% t(H) a0 <- c(0, 0) P0 <- matrix(1e6, nrow = 2, ncol = 2) return(list(a0 = a0, P0 = P0, ct = ct, dt = dt, Zt = Zt, Tt = Tt, GGt = GGt,HHt=HHt)) } sp <- arma21ss(ar1,ar2,ma1,sigma) yt <- rbind(a) fkfPack <- fkf(a0 = sp$a0, P0 = sp$P0, dt = sp$dt, ct = sp$ct, Tt = sp$Tt, Zt = sp$Zt, HHt = sp$HHt, GGt = sp$GGt, yt = yt) fkfCpp <- FKF(a0 = sp$a0, P0 = sp$P0, dt = sp$dt, ct = sp$ct, Tt = sp$Tt, Zt = sp$Zt, Qt = sp$HHt, Ht = sp$GGt, yt = yt) } if (LocalLevel){ ## <---------------------------------------------------------------------------> ## Example 2: Local level model for the Nile's annual flow. ## <---------------------------------------------------------------------------> ## Transition equation: ## alpha[t+1] = alpha[t] + eta[t], eta[t] ~ N(0, HHt) ## Measurement equation: ## y[t] = alpha[t] + eps[t], eps[t] ~ N(0, GGt) y <- c(Nile,Nile + runif(length(Nile),-1,1)) y[c(3, 10)] <- NA # NA values can be handled ## Set constant parameters: dt <- ct <- matrix(0) Zt <- Tt <- matrix(1) a0 <- y[1] # Estimation of the first year flow P0 <- matrix(100) # Variance of 'a0' HHt <- matrix(var(y, na.rm = TRUE) * .5) GGt <- matrix(var(y, na.rm = TRUE) * .5) ## Filter Nile data with estimated parameters: fkfPack <- fkf(a0, P0, dt, ct, Tt, Zt, HHt = HHt,GGt = GGt, yt = rbind(y)) fkfCpp <- FKF_Rcpp(a0=a0, P0=P0, dt=dt, ct=ct, Tt=Tt, Zt=Zt,Qt=HHt,Ht=GGt,yt= rbind(y)) } if (LinearGrowth){ ## <---------------------------------------------------------------------------> ## Example 3: Multivariate Linear growth model (See Petris p128) ## <---------------------------------------------------------------------------> ## alpha_t=c(mu1_t,mu2_t,beta1_t,beta2_t)' ## Tt=[1,0,1,0|0,1,0,1|0,0,1,0|0,0,0,1] ## Rt=0 , Qt=diag(Q_mu,Q_beta), Q_mu=0,Q_beta=[49,155|155,437266] ## Zt=[1,0,0,0|0,1,0,0], Ht=[72,1018|1018,14353] ## Transition equation: ## alpha_t+1 = Tt*alpha_t + eta[t], eta[t] ~ N(0, Qt) ## Measurement equation: ## y_t = Zt*alpha_t + eps_t, eps_t ~ N(0, Ht) ## data used are annual groth for spain and Danemark collected in invest2.dat ## read data invest <- read.table("invest2.dat") ## Prepare model mod <- dlmModPoly(2) mod$FF <- mod$FF %x% diag(2) mod$GG <- mod$GG %x% diag(2) W1 <- matrix(c(0.5,0,0,0.5), 2, 2) W2 <- diag(c(49, 437266)) W2[1, 2] <- W2[2, 1] <- 155 mod$W <- bdiag(W1, W2) V <- diag(c(72, 14353)) V[1, 2] <- V[2, 1] <- 1018 mod$V <- V mod$m0 <- rep(0, 4) mod$C0 <- diag(4) * 1e4 ## dlm computation filtered <- dlmFilter(invest, mod) logLikdlm <- dlmLL(as.matrix(invest), mod) ## up to a cte ## 
Smoothed values smoothed <- dlmSmooth(filtered) alpha.Smoothed <- dropFirst(smoothed$s) theta.Smoothed <- t(mod$FF %*% t(alpha.Smoothed)) ## Sampled values alpha.Sampled <- dlmBSample(filtered) theta.Sampled <- t(mod$FF %*% t(dropFirst(alpha.Sampled))) ## Rcpp l <- .checkKFInputs(a0=mod$m0,P0=mod$C0,dt=rep(0, 4),ct=rep(0, 2), Tt=mod$GG,Zt=mod$FF,Qt=mod$W,Ht=mod$V, yt= t(as.matrix(invest))) a0 = l$a0 ; P0=l$P0 ; dt =l$dt ; ct=l$ct Tt= l$Tt ; Zt=l$Zt ; Qt =l$Qt ; Ht=l$Ht yt=t(as.matrix(invest)) ##fkfCpp <- FKF_Rcpp(a0= a0,P0=P0,dt=dt,ct=ct,Tt=Tt,Zt=Zt,Qt=Qt, ## Ht=Ht,yt=yt,checkInputs=FALSE) SmoothCpp <- GaussianSignalSmoothing(a0_=a0,P0_=P0,dt_=dt,ct_=ct,Tt_=Tt, Zt_=Zt,Ht_=Ht,Qt_=Qt,yt_=yt) SampleCpp <- GaussianthetaSampling(a0_=a0,P0_=P0,dt_=dt,ct_=ct,Tt_=Tt, Zt_=Zt,Ht_=Ht,Qt_=Qt,yt_=yt, M=1,seedVal=200) ##------------------- plot the result ## Extract Relevent part of the Data RcppSmooth <- t(SmoothCpp$theta_hat) RcppSample <- t(res$theta_tilda[,,1]) n <- nrow(RcppSmooth) i=2 if (Plot) pdf("dlmSamplingCompare_2.pdf") ylim=c(-50,max(c(theta.Smoothed[,i],theta.Sampled[,i],RcppSmooth[,i],RcppSample[,i]))+50) plot(1:n,theta.Smoothed[,i],type="l",col="black",ylim=ylim,lwd=2) points(1:n,theta.Sampled[,i],type="l",col="blue",ylim=ylim,lwd=2) points(1:n,RcppSmooth[,i],type="l",col="red",ylim=ylim,lwd=2) points(1:n,RcppSample[,i],type="l",col="green",ylim=ylim,lwd=2) leg <- c("dlm smoothed","dlm sampled","Rcpp smoothed", "Rcpp sampled") legend("topleft",leg,col=c("black","blue","red","green"),lwd=c(2,2,2,2),cex=0.7,bty="n") if (Plot) dev.off() } if (DynamicCAPM){ ## <---------------------------------------------------------------------------> ## Example 4: Dynamic CAPM model (See Petris p132) ## <---------------------------------------------------------------------------> ## alpha_t=c(mu1_t,mu2_t,beta1_t,beta2_t)' ## Tt=[1,0,1,0|0,1,0,1|0,0,1,0|0,0,0,1] ## Rt=0 , Qt=diag(Q_mu,Q_beta), Q_mu=0,Q_beta=[49,155|155,437266] ## Zt=[1,0,0,0|0,1,0,0], Ht=[72,1018|1018,14353] ## Transition equation: ## alpha_t+1 = Tt*alpha_t + eta[t], eta[t] ~ N(0, Qt) ## Measurement equation: ## y_t = Zt*alpha_t + eps_t, eps_t ~ N(0, Ht) ## data used are annual groth for spain and Danemark collected in invest2.dat tmp <- ts(read.table("P.dat",header = TRUE), start = c(1978, 1), frequency = 12) * 100 y <- tmp[, 1 : 4] - tmp[, "RKFREE"] colnames(y) <- colnames(tmp)[1 : 4] market <- tmp[, "MARKET"] - tmp[, "RKFREE"] rm("tmp") m <- NCOL(y) ## Set up the model CAPM <- dlmModReg(market) CAPM$FF <- CAPM$FF %x% diag(m) CAPM$GG <- CAPM$GG %x% diag(m) CAPM$JFF <- CAPM$JFF %x% diag(m) CAPM$W <- CAPM$W %x% matrix(0, m, m) CAPM$W[-(1 : m), -(1 : m)] <- c(8.153e-07, -3.172e-05, -4.267e-05,-6.649e-05, -3.172e-05, 0.001377, 0.001852, 0.002884, -4.267e-05, 0.001852, 0.002498, 0.003884, -6.649e-05, 0.002884, 0.003884, 0.006057) CAPM$V <- CAPM$V %x% matrix(0, m, m) CAPM$V[] <- c(41.06, 0.01571, -0.9504, -2.328, 0.01571, 24.23, 5.783, 3.376, -0.9504, 5.783, 39.2, 8.145, -2.328, 3.376, 8.145,39.29) CAPM$m0 <- rep(0, 2 * m) CAPM$C0 <- diag(1e7, nr = 2 * m) fkfdlm <- dlmFilter(y, CAPM) logLikdlm <- dlmLL(y,CAPM) ## Rcpp ## get the matrix Zt n <- nrow(y) Zt <- array(0,dim=c(m,2*m,n)) Vmarket <- as.numeric(market) yt <- t(as.matrix(y)) for (i in 1:n) Zt[,,i] <- cbind(1,Vmarket[i]) %x% diag(m) l <- .checkKFInputs(a0=CAPM$m0,P0=CAPM$C0,dt=rep(0,8),ct=rep(0, 4), Tt=CAPM$GG,Zt=Zt,Qt=CAPM$W,Ht=CAPM$V , yt= yt) a0 = l$a0 ; P0=l$P0 ; dt =l$dt ; ct=l$ct Tt= l$Tt ; Zt=l$Zt ; Qt =l$Qt ; Ht=l$Ht fkfCpp <- FKF_Rcpp(a0= a0,P0= P0,dt=dt,ct=ct,Tt=Tt,Zt=Zt,Qt=Qt, 
Ht=Ht,yt= yt,checkInputs=FALSE) UpdateCpp <- NRUpdatingStep(a0,P0,dt,ct,Tt,Zt,Ht,Qt,yt) fkfPack <- fkf(a0,P0,dt,ct,Tt,Zt,HHt=Qt,GGt=Ht,yt= yt) }
path: /vignettes/testKF.R | license_type: no_license | repo_name: GeoBosh/GKF | language: R | is_vendor: false | is_generated: false | length_bytes: 8,509 | extension: r
require("RcppArmadillo") require("Rcpp") require("FKF") require("dlm") library("GKF") Plot <- 0 Arima21 <- 1 LocalLevel <- 0 LinearGrowth <- 0 DynamicCAPM <- 0 if (Arima21){ ## <---------------------------------------------------------------------------> ## Example 1: ARMA(2, 1) model estimation. ## <---------------------------------------------------------------------------> ## This example shows how to fit an ARMA(2, 1) model using this Kalman ## filter implementation (see also stats' makeARIMA and KalmanRun). n <- 1000 ## Set the AR parameters ar1 <- 0.6 ar2 <- 0.2 ma1 <- -0.2 sigma <- sqrt(0.2) ## Sample from an ARMA(2, 1) process a <- arima.sim(model = list(ar = c(ar1, ar2), ma = ma1), n = n,innov = rnorm(n) * sigma) ## Create a state space representation out of the four ARMA parameters arma21ss <- function(ar1, ar2, ma1, sigma) { Tt <- matrix(c(ar1, ar2, 1, 0), ncol = 2) Zt <- matrix(c(1, 0), ncol = 2) ct <- matrix(0) dt <- matrix(0, nrow = 2) GGt <- matrix(0) H <- matrix(c(1, ma1), nrow = 2) * sigma HHt <- H %*% t(H) a0 <- c(0, 0) P0 <- matrix(1e6, nrow = 2, ncol = 2) return(list(a0 = a0, P0 = P0, ct = ct, dt = dt, Zt = Zt, Tt = Tt, GGt = GGt,HHt=HHt)) } sp <- arma21ss(ar1,ar2,ma1,sigma) yt <- rbind(a) fkfPack <- fkf(a0 = sp$a0, P0 = sp$P0, dt = sp$dt, ct = sp$ct, Tt = sp$Tt, Zt = sp$Zt, HHt = sp$HHt, GGt = sp$GGt, yt = yt) fkfCpp <- FKF(a0 = sp$a0, P0 = sp$P0, dt = sp$dt, ct = sp$ct, Tt = sp$Tt, Zt = sp$Zt, Qt = sp$HHt, Ht = sp$GGt, yt = yt) } if (LocalLevel){ ## <---------------------------------------------------------------------------> ## Example 2: Local level model for the Nile's annual flow. ## <---------------------------------------------------------------------------> ## Transition equation: ## alpha[t+1] = alpha[t] + eta[t], eta[t] ~ N(0, HHt) ## Measurement equation: ## y[t] = alpha[t] + eps[t], eps[t] ~ N(0, GGt) y <- c(Nile,Nile + runif(length(Nile),-1,1)) y[c(3, 10)] <- NA # NA values can be handled ## Set constant parameters: dt <- ct <- matrix(0) Zt <- Tt <- matrix(1) a0 <- y[1] # Estimation of the first year flow P0 <- matrix(100) # Variance of 'a0' HHt <- matrix(var(y, na.rm = TRUE) * .5) GGt <- matrix(var(y, na.rm = TRUE) * .5) ## Filter Nile data with estimated parameters: fkfPack <- fkf(a0, P0, dt, ct, Tt, Zt, HHt = HHt,GGt = GGt, yt = rbind(y)) fkfCpp <- FKF_Rcpp(a0=a0, P0=P0, dt=dt, ct=ct, Tt=Tt, Zt=Zt,Qt=HHt,Ht=GGt,yt= rbind(y)) } if (LinearGrowth){ ## <---------------------------------------------------------------------------> ## Example 3: Multivariate Linear growth model (See Petris p128) ## <---------------------------------------------------------------------------> ## alpha_t=c(mu1_t,mu2_t,beta1_t,beta2_t)' ## Tt=[1,0,1,0|0,1,0,1|0,0,1,0|0,0,0,1] ## Rt=0 , Qt=diag(Q_mu,Q_beta), Q_mu=0,Q_beta=[49,155|155,437266] ## Zt=[1,0,0,0|0,1,0,0], Ht=[72,1018|1018,14353] ## Transition equation: ## alpha_t+1 = Tt*alpha_t + eta[t], eta[t] ~ N(0, Qt) ## Measurement equation: ## y_t = Zt*alpha_t + eps_t, eps_t ~ N(0, Ht) ## data used are annual groth for spain and Danemark collected in invest2.dat ## read data invest <- read.table("invest2.dat") ## Prepare model mod <- dlmModPoly(2) mod$FF <- mod$FF %x% diag(2) mod$GG <- mod$GG %x% diag(2) W1 <- matrix(c(0.5,0,0,0.5), 2, 2) W2 <- diag(c(49, 437266)) W2[1, 2] <- W2[2, 1] <- 155 mod$W <- bdiag(W1, W2) V <- diag(c(72, 14353)) V[1, 2] <- V[2, 1] <- 1018 mod$V <- V mod$m0 <- rep(0, 4) mod$C0 <- diag(4) * 1e4 ## dlm computation filtered <- dlmFilter(invest, mod) logLikdlm <- dlmLL(as.matrix(invest), mod) ## up to a cte ## 
Smoothed values smoothed <- dlmSmooth(filtered) alpha.Smoothed <- dropFirst(smoothed$s) theta.Smoothed <- t(mod$FF %*% t(alpha.Smoothed)) ## Sampled values alpha.Sampled <- dlmBSample(filtered) theta.Sampled <- t(mod$FF %*% t(dropFirst(alpha.Sampled))) ## Rcpp l <- .checkKFInputs(a0=mod$m0,P0=mod$C0,dt=rep(0, 4),ct=rep(0, 2), Tt=mod$GG,Zt=mod$FF,Qt=mod$W,Ht=mod$V, yt= t(as.matrix(invest))) a0 = l$a0 ; P0=l$P0 ; dt =l$dt ; ct=l$ct Tt= l$Tt ; Zt=l$Zt ; Qt =l$Qt ; Ht=l$Ht yt=t(as.matrix(invest)) ##fkfCpp <- FKF_Rcpp(a0= a0,P0=P0,dt=dt,ct=ct,Tt=Tt,Zt=Zt,Qt=Qt, ## Ht=Ht,yt=yt,checkInputs=FALSE) SmoothCpp <- GaussianSignalSmoothing(a0_=a0,P0_=P0,dt_=dt,ct_=ct,Tt_=Tt, Zt_=Zt,Ht_=Ht,Qt_=Qt,yt_=yt) SampleCpp <- GaussianthetaSampling(a0_=a0,P0_=P0,dt_=dt,ct_=ct,Tt_=Tt, Zt_=Zt,Ht_=Ht,Qt_=Qt,yt_=yt, M=1,seedVal=200) ##------------------- plot the result ## Extract Relevent part of the Data RcppSmooth <- t(SmoothCpp$theta_hat) RcppSample <- t(res$theta_tilda[,,1]) n <- nrow(RcppSmooth) i=2 if (Plot) pdf("dlmSamplingCompare_2.pdf") ylim=c(-50,max(c(theta.Smoothed[,i],theta.Sampled[,i],RcppSmooth[,i],RcppSample[,i]))+50) plot(1:n,theta.Smoothed[,i],type="l",col="black",ylim=ylim,lwd=2) points(1:n,theta.Sampled[,i],type="l",col="blue",ylim=ylim,lwd=2) points(1:n,RcppSmooth[,i],type="l",col="red",ylim=ylim,lwd=2) points(1:n,RcppSample[,i],type="l",col="green",ylim=ylim,lwd=2) leg <- c("dlm smoothed","dlm sampled","Rcpp smoothed", "Rcpp sampled") legend("topleft",leg,col=c("black","blue","red","green"),lwd=c(2,2,2,2),cex=0.7,bty="n") if (Plot) dev.off() } if (DynamicCAPM){ ## <---------------------------------------------------------------------------> ## Example 4: Dynamic CAPM model (See Petris p132) ## <---------------------------------------------------------------------------> ## alpha_t=c(mu1_t,mu2_t,beta1_t,beta2_t)' ## Tt=[1,0,1,0|0,1,0,1|0,0,1,0|0,0,0,1] ## Rt=0 , Qt=diag(Q_mu,Q_beta), Q_mu=0,Q_beta=[49,155|155,437266] ## Zt=[1,0,0,0|0,1,0,0], Ht=[72,1018|1018,14353] ## Transition equation: ## alpha_t+1 = Tt*alpha_t + eta[t], eta[t] ~ N(0, Qt) ## Measurement equation: ## y_t = Zt*alpha_t + eps_t, eps_t ~ N(0, Ht) ## data used are annual groth for spain and Danemark collected in invest2.dat tmp <- ts(read.table("P.dat",header = TRUE), start = c(1978, 1), frequency = 12) * 100 y <- tmp[, 1 : 4] - tmp[, "RKFREE"] colnames(y) <- colnames(tmp)[1 : 4] market <- tmp[, "MARKET"] - tmp[, "RKFREE"] rm("tmp") m <- NCOL(y) ## Set up the model CAPM <- dlmModReg(market) CAPM$FF <- CAPM$FF %x% diag(m) CAPM$GG <- CAPM$GG %x% diag(m) CAPM$JFF <- CAPM$JFF %x% diag(m) CAPM$W <- CAPM$W %x% matrix(0, m, m) CAPM$W[-(1 : m), -(1 : m)] <- c(8.153e-07, -3.172e-05, -4.267e-05,-6.649e-05, -3.172e-05, 0.001377, 0.001852, 0.002884, -4.267e-05, 0.001852, 0.002498, 0.003884, -6.649e-05, 0.002884, 0.003884, 0.006057) CAPM$V <- CAPM$V %x% matrix(0, m, m) CAPM$V[] <- c(41.06, 0.01571, -0.9504, -2.328, 0.01571, 24.23, 5.783, 3.376, -0.9504, 5.783, 39.2, 8.145, -2.328, 3.376, 8.145,39.29) CAPM$m0 <- rep(0, 2 * m) CAPM$C0 <- diag(1e7, nr = 2 * m) fkfdlm <- dlmFilter(y, CAPM) logLikdlm <- dlmLL(y,CAPM) ## Rcpp ## get the matrix Zt n <- nrow(y) Zt <- array(0,dim=c(m,2*m,n)) Vmarket <- as.numeric(market) yt <- t(as.matrix(y)) for (i in 1:n) Zt[,,i] <- cbind(1,Vmarket[i]) %x% diag(m) l <- .checkKFInputs(a0=CAPM$m0,P0=CAPM$C0,dt=rep(0,8),ct=rep(0, 4), Tt=CAPM$GG,Zt=Zt,Qt=CAPM$W,Ht=CAPM$V , yt= yt) a0 = l$a0 ; P0=l$P0 ; dt =l$dt ; ct=l$ct Tt= l$Tt ; Zt=l$Zt ; Qt =l$Qt ; Ht=l$Ht fkfCpp <- FKF_Rcpp(a0= a0,P0= P0,dt=dt,ct=ct,Tt=Tt,Zt=Zt,Qt=Qt, 
Ht=Ht,yt= yt,checkInputs=FALSE) UpdateCpp <- NRUpdatingStep(a0,P0,dt,ct,Tt,Zt,Ht,Qt,yt) fkfPack <- fkf(a0,P0,dt,ct,Tt,Zt,HHt=Qt,GGt=Ht,yt= yt) }
library(sqldf)

data <- read.csv.sql("household_power_consumption.txt",
                     sql = "select * from file where Date in ('1/2/2007','2/2/2007')",
                     sep = ";")
data$Date <- strptime(paste(data$Date, data$Time), "%d/%m/%Y %H:%M:%S")

plot(data$Date, data$Sub_metering_1, type = "l",
     xlab = "", ylab = "Energy sub metering")
lines(data$Date, data$Sub_metering_2, type = "l", col = "red")
lines(data$Date, data$Sub_metering_3, type = "l", col = "blue")
legend("topright", lty = 1, col = c("black", "red", "blue"),
       legend = c("Sub_metering_1", "Sub_metering_2", "Sub_metering_3"))

dev.copy(png, file = "plot3.png")
dev.off()
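# Sketch of an alternative save path: dev.copy() replays whatever is on the
# current display device into a new png device, so it needs a plot to already
# be open. Writing directly into a png() device skips that intermediate step
# and makes the output dimensions explicit. The 480 x 480 size is a choice,
# not something plot3.R specifies, and `data` is assumed to be prepared
# exactly as above.
png("plot3.png", width = 480, height = 480)
plot(data$Date, data$Sub_metering_1, type = "l",
     xlab = "", ylab = "Energy sub metering")
lines(data$Date, data$Sub_metering_2, type = "l", col = "red")
lines(data$Date, data$Sub_metering_3, type = "l", col = "blue")
legend("topright", lty = 1, col = c("black", "red", "blue"),
       legend = c("Sub_metering_1", "Sub_metering_2", "Sub_metering_3"))
dev.off()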
path: /plot3.R | license_type: no_license | repo_name: CharlotteDing/ExData_Plotting1 | language: R | is_vendor: false | is_generated: false | length_bytes: 558 | extension: r
# Exercise 1: reading and querying a web API

# Load the httr and jsonlite libraries for accessing data
# You can also load `dplyr` if you wish to use it
library("httr")
library("jsonlite")
library("dplyr")

# Create a variable base_uri that stores the base URI (as a string) for the
# Github API (https://api.github.com)
base_uri <- "https://api.github.com"
print(base_uri)

# Under the "Repositories" category of the API documentation, find the endpoint
# that will list _repos in an organization_. Then create a variable named
# `org_resource` that stores the endpoint for the `programming-for-data-science`
# organization repos (this is the _path_ to the resource of interest).
org_resource <- "/orgs/programming-for-data-science/repos"

# Send a GET request to this endpoint (the `base_uri` followed by the
# `org_resource` path). Print the response to show that your request worked.
# (The listed URI will also allow you to inspect the JSON in the browser easily).
response <- GET(paste0(base_uri, org_resource))
print(response)

# Extract the content of the response using the `content()` function, saving it
# in a variable.
response_text <- content(response, "text")

# Convert the content variable from a JSON string into a data frame.
body_data <- fromJSON(response_text)

# How many (public) repositories does the organization have?
num_repos <- nrow(body_data)
num_repos

# Now a second query:
# Create a variable `search_endpoint` that stores the endpoint used to search
# for repositories. (Hint: look for a "Search" endpoint in the documentation).

# Search queries require a query parameter (for what to search for). Create a
# `query_params` list variable that specifies an appropriate key and value for
# the search term (you can search for anything you want!)

# Send a GET request to the `search_endpoint`--including your params list as the
# `query`. Print the response to show that your request worked.

# Extract the content of the response and convert it from a JSON string into a
# data frame.

# How many search repos did your search find? (Hint: check the list names to
# find an appropriate value).

# What are the full names of the top 5 repos in the search results?
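# One possible completion of the second query above (a sketch: the GitHub
# "Search repositories" endpoint and its `q` parameter are real, but the
# search term and variable names here are arbitrary choices).
search_endpoint <- "/search/repositories"
query_params <- list(q = "dplyr")

search_response <- GET(paste0(base_uri, search_endpoint), query = query_params)
print(search_response)

search_body <- content(search_response, "text")
search_data <- fromJSON(search_body)

# How many repos did the search find?
search_data$total_count

# Full names of the top 5 repos in the search results
search_data$items$full_name[1:5]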
path: /chapter-14-exercises/exercise-1/exercise.R | license_type: permissive | repo_name: rozgillie/book-exercises | language: R | is_vendor: false | is_generated: false | length_bytes: 2,142 | extension: r
#
# Users select inputId = "sex" and inputId = "race" to create the plot
#

library(shiny)
library(dplyr)
library(ggplot2)

# Define server logic required to draw the education-attainment line plot.
# Assumes the `eduAttain` data frame is available in the app environment
# (e.g. loaded in global.R).
shinyServer(function(input, output) {

  output$eduPlot <- renderPlot({

    plotData <- eduAttain %>%
      select(year, education_level, sex, input$race) %>%
      filter(sex == input$sex)

    sub_title <- paste0("Sex = ", input$sex, "; Race = ", input$race)

    eduPlot <- ggplot(data = plotData, aes(x = year, y = plotData[, 4])) +
      geom_line(aes(colour = education_level)) +
      labs(title = "Percent of People Attaining Education Level",
           subtitle = sub_title,
           x = "year",
           y = "Percent of People, (age 25 - 29)") +
      theme(plot.title = element_text(size = 14)) +
      theme(plot.subtitle = element_text(size = 12)) +
      scale_colour_discrete(name = "Educational Level Attained") +
      theme(legend.position = "bottom", legend.direction = "vertical") +
      theme(legend.title = element_text(size = 11)) +
      theme(legend.text = element_text(size = 10))

    eduPlot
  })
})
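# Sketch of a more robust mapping for the plot above: plotData[, 4] inside
# aes() works only while the user-selected race is always the fourth column.
# ggplot2's .data pronoun selects the column by name instead. This assumes
# input$race exactly matches a column name in eduAttain; theming is trimmed
# here for brevity, and eduAttain must still be loaded elsewhere.
shinyServer(function(input, output) {
  output$eduPlot <- renderPlot({
    plotData <- eduAttain %>%
      filter(sex == input$sex)

    ggplot(plotData, aes(x = year, y = .data[[input$race]],
                         colour = education_level)) +
      geom_line() +
      labs(title = "Percent of People Attaining Education Level",
           subtitle = paste0("Sex = ", input$sex, "; Race = ", input$race),
           x = "year", y = "Percent of People, (age 25 - 29)")
  })
})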
path: /server.R | license_type: no_license | repo_name: brooney27519/DevDataProducts_CourseProject | language: R | is_vendor: false | is_generated: false | length_bytes: 1,139 | extension: r
library(rmetasim)


### Name: landscape.new.locus
### Title: Add a locus
### Aliases: landscape.new.locus
### Keywords: misc

### ** Examples

exampleland <- landscape.new.empty()
exampleland <- landscape.new.intparam(exampleland, s = 2, h = 2)
exampleland <- landscape.new.floatparam(exampleland)
exampleland <- landscape.new.switchparam(exampleland)
exampleland <- landscape.new.locus(exampleland, type = 2, ploidy = 2,
                                   mutationrate = .001, numalleles = 5,
                                   allelesize = 100)

exampleland$loci

rm(exampleland)
path: /data/genthat_extracted_code/rmetasim/examples/landscape.new.locus.Rd.R | license_type: no_license | repo_name: surayaaramli/typeRrh | language: R | is_vendor: false | is_generated: false | length_bytes: 538 | extension: r
# Author: Hamidreza Zoraghein # Date: 2019-02-11 # Purpose: Disaggregates total state-level migrations to values per contributing state. This is for the bilateral model. ############################# #*** Set general options ***# ############################# options("scipen"=100, "digits"=4) # to force R not to use scientific notations ############################ #*** The required paths ***# ############################ # Workspace path <- "C:/Users/Hamidreza.Zoraghein/Google Drive/Sensitivity_Analysis/Bilateral" # Path to results folder resultsPath <- file.path(path, "Doub_Scenario") # Path to state-level inputs folder inputsPath <- file.path(path, "State_Inputs") # Path to the pachage mspackage <- file.path(path, "Scripts", "multistate_0.1.0.tar.gz") # csv file containing population projection with no domestic migration applied pop.no.dom.proj <- file.path(resultsPath, "state_pop_projections_no_dom.csv") ####################### #*** Load packages ***# ####################### if ("readxl" %in% installed.packages()) { library(readxl) } else { install.packages("readxl") library(readxl) } if ("multistate" %in% installed.packages()) { library(multistate) } else { install.packages("multistate") library(multistate) } ################################### #*** Declare general variables ***# ################################### #* Specify regions regUAll <- c("9-CT", "23-ME", "25-MA", "33-NH", "44-RI", "50-VT", "34-NJ", "36-NY", "42-PA", "17-IL", "18-IN", "26-MI", "39-OH", "55-WI", "19-IA", "20-KS", "27-MN", "29-MO", "31-NE", "38-ND", "46-SD", "10-DE", "11-DC", "12-FL", "13-GA", "24-MD", "37-NC", "45-SC", "51-VA", "54-WV", "1-AL", "21-KY", "28-MS", "47-TN", "5-AR", "22-LA", "40-OK", "48-TX", "4-AZ", "8-CO", "16-ID", "30-MT", "32-NV", "35-NM", "49-UT", "56-WY", "2-AK", "6-CA", "15-HI", "41-OR", "53-WA") #* Specify scenario scenUAll <- c("Constant_rate", "SSP2", "SSP3", "SSP5") cur.scenario <- "Constant_rate" # Specify the domestic migration factor # If scenario is not "Constant_rate" (for fertility, mortality and international migration), this factor will become dynamic later scen.factor <- 2 # 1 for regular, 0 for no domestic migration, 0.5 for half scenario and 2 for double scenario # Sepecify if international migration is applied int.mig <- 1 # 1 applied 0 not applied # Other parameters yearStart <- 2010 # Base year yearEnd <- 2100 # Last year for which population is projected steps <- yearEnd - yearStart # each projection step can be thought of as the resulting year (e.g., projection step 1 projects values for year 1 using input for year 0) num.ages <- 101 #From 0 to 100 ################### #*** Main body ***# ################### # Scenario table generation based on modifications to the "Constant_rate" scenario for SSP2, SSP3 and SSP5 # cur.scenario could be SSP2, SSP3 and SSP5 if (cur.scenario != "Constant_rate"){ scenario.csv <- file.path(resultsPath, paste0(cur.scenario, "_scenario.csv")) scenario.table <- read.csv(scenario.csv, stringsAsFactors = F, check.names = F) dom.mig.factor <- scenario.table$Dom_Mig_Factor } # initialize a dataframe that will hold population values without applying deomestic migration pop.upd.df <- data.frame(matrix(0, nrow = num.ages * 4, ncol = length(regUAll))) colnames(pop.upd.df) <- regUAll tot.upd.pop <- read.csv(pop.no.dom.proj, stringsAsFactors = F, check.names = F) #Save two dataframes per state: one for in-migration from other states and two for out-migration to other states for (state in regUAll){ cat(paste("The current state is", state, "\n")) # 
Initialize two dataframes for holding state-level in and out migration for the current state in.mig.path <- file.path(resultsPath, state, paste0(state, "_total_in_mig.csv")) out.mig.path <- file.path(resultsPath, state, paste0(state, "_total_out_mig.csv")) tot.in.migration <- NULL tot.out.migration <- NULL # In each year, read the population of all other states (from the previous step) and calulate state-level in and out migration for (t in 0:90){ pop.dataframe <- tot.upd.pop[seq(num.ages*4*t+1, num.ages*4*(t+1)),] if (cur.scenario != scenUAll[1]){ # Assign the relevant migration factor scen.factor <- dom.mig.factor[t+1] } cur.in.state.mig <- f.in.state.dom.mig.calc(inputsPath, state, pop.dataframe, scen.factor) cur.out.state.mig <- f.out.state.dom.mig.calc(inputsPath, state, pop.dataframe, scen.factor) # Add the current migration values to the total dataframes tot.in.migration <- rbind(tot.in.migration, cur.in.state.mig) tot.out.migration <- rbind(tot.out.migration, cur.out.state.mig) } # Save the total dataframes write.csv(tot.in.migration, in.mig.path, row.names = F) write.csv(tot.out.migration, out.mig.path, row.names = F) }
path: /r_scripts/State_Level_Migration.R | license_type: permissive | repo_name: IMMM-SFA/statepop | language: R | is_vendor: false | is_generated: false | length_bytes: 4,981 | extension: r
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/mosaic-package.R, R/reexports.R \docType{import} \name{reexports} \alias{reexports} \alias{makeFun} \alias{fitdistr} \alias{fractions} \alias{lhs} \alias{rhs} \alias{condition} \alias{counts} \alias{props} \alias{prop} \alias{prop1} \alias{perc} \alias{count} \alias{tally} \alias{dfapply} \alias{ediff} \alias{inspect} \alias{msummary} \alias{n_missing} \alias{logit} \alias{ilogit} \title{Objects exported from other packages} \keyword{internal} \description{ These objects are imported from other packages. Follow the links below to see their documentation. \describe{ \item{MASS}{\code{\link[MASS]{fitdistr}}, \code{\link[MASS]{fractions}}} \item{mosaicCore}{\code{\link[mosaicCore]{makeFun}}, \code{\link[mosaicCore]{lhs}}, \code{\link[mosaicCore]{rhs}}, \code{\link[mosaicCore]{condition}}, \code{\link[mosaicCore]{makeFun}}, \code{\link[mosaicCore]{counts}}, \code{\link[mosaicCore]{props}}, \code{\link[mosaicCore]{prop}}, \code{\link[mosaicCore]{prop1}}, \code{\link[mosaicCore]{perc}}, \code{\link[mosaicCore]{count}}, \code{\link[mosaicCore]{tally}}, \code{\link[mosaicCore]{dfapply}}, \code{\link[mosaicCore]{ediff}}, \code{\link[mosaicCore]{inspect}}, \code{\link[mosaicCore]{msummary}}, \code{\link[mosaicCore]{n_missing}}, \code{\link[mosaicCore]{logit}}, \code{\link[mosaicCore]{ilogit}}} }}
/man/reexports.Rd
no_license
dtkaplan/mosaic
R
false
true
1,393
rd
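Of the functions re-exported above, makeFun() is the one whose formula interface most often needs a reminder; a minimal sketch (the coefficients a and b and their default values are made up for illustration):

f <- makeFun(a * x + b ~ x, a = 2, b = 1)  # function of x with defaults a = 2, b = 1
f(3)                                       # 2 * 3 + 1 = 7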
\name{rvPDT.test.sub} \alias{rvPDT.test.sub} \title{Internal function.} \description{ Internal function of testing rare variants for binary traits using general pedigrees. }
/man/rvPDT.test.sub.Rd
no_license
cran/rvHPDT
R
false
false
191
rd
library(FNN)
# read the training images; column 1 is the label, columns 2:785 are the 784 (28 x 28) pixel values
train <- read.csv("train.csv")
digit <- matrix(as.integer(train[4932, 2:785]), nrow = 28, byrow = T)
image(z = digit)
/code/digit_recognizer.R
no_license
behnoush/Neural-Network
R
false
false
126
r
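Because image() maps matrix rows to the x-axis, the digit drawn above comes out rotated; a small sketch of one common fix (the grayscale palette and axis suppression are my additions, not part of the original script):

# flip and transpose so the first matrix row is drawn along the top of the plot
image(t(apply(digit, 2, rev)), col = gray.colors(256), axes = FALSE)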
library(viridis)

### Name: scale_color_viridis
### Title: Viridis color scales
### Aliases: scale_color_viridis scale_colour_viridis scale_color_viridis
###   scale_fill_viridis

### ** Examples

library(ggplot2)
# ripped from the pages of ggplot2
p <- ggplot(mtcars, aes(wt, mpg))
p + geom_point(size=4, aes(colour = factor(cyl))) +
    scale_color_viridis(discrete=TRUE) +
    theme_bw()
# ripped from the pages of ggplot2
dsub <- subset(diamonds, x > 5 & x < 6 & y > 5 & y < 6)
dsub$diff <- with(dsub, sqrt(abs(x-y))* sign(x-y))
d <- ggplot(dsub, aes(x, y, colour=diff)) + geom_point()
d + scale_color_viridis() + theme_bw()
# from the main viridis example
dat <- data.frame(x = rnorm(10000), y = rnorm(10000))
ggplot(dat, aes(x = x, y = y)) +
  geom_hex() + coord_fixed() +
  scale_fill_viridis() + theme_bw()
library(ggplot2)
library(MASS)
library(gridExtra)
data("geyser", package="MASS")
ggplot(geyser, aes(x = duration, y = waiting)) +
  xlim(0.5, 6) + ylim(40, 110) +
  stat_density2d(aes(fill = ..level..), geom="polygon") +
  theme_bw() +
  theme(panel.grid=element_blank()) -> gg
grid.arrange(
  gg + scale_fill_viridis(option="A") + labs(x="Viridis A", y=NULL),
  gg + scale_fill_viridis(option="B") + labs(x="Viridis B", y=NULL),
  gg + scale_fill_viridis(option="C") + labs(x="Viridis C", y=NULL),
  gg + scale_fill_viridis(option="D") + labs(x="Viridis D", y=NULL),
  gg + scale_fill_viridis(option="E") + labs(x="Viridis E", y=NULL),
  ncol=3, nrow=2
)
/data/genthat_extracted_code/viridis/examples/scale_viridis.Rd.R
no_license
surayaaramli/typeRrh
R
false
false
1,480
r
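Beyond the ggplot2 scales exercised in the examples above, the same palettes can be pulled out as plain hex codes, which is handy for base graphics; a minimal sketch (n = 5 is arbitrary):

viridis(5)               # five colours from the default map (option "D", "viridis")
viridis(5, option = "A") # the same, taken from the "magma" map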
library(mosaic) ### Name: mWorldMap ### Title: Make a world map with 'ggplot2' ### Aliases: mWorldMap ### ** Examples ## Not run: ##D gdpData <- CIAdata("GDP") # load some world data ##D ##D mWorldMap(gdpData, key="country", fill="GDP") ##D ##D gdpData <- gdpData %>% mutate(GDP5 = ntiles(-GDP, 5, format="rank")) ##D mWorldMap(gdpData, key="country", fill="GDP5") ##D ##D mWorldMap(gdpData, key="country", plot="frame") + ##D geom_point() ##D ##D mergedData <- mWorldMap(gdpData, key="country", plot="none") ##D ##D ggplot(mergedData, aes(x=long, y=lat, group=group, order=order)) + ##D geom_polygon(aes(fill=GDP5), color="gray70", size=.5) + guides(fill=FALSE) ## End(Not run)
/data/genthat_extracted_code/mosaic/examples/mWorldMap.Rd.R
no_license
surayaaramli/typeRrh
R
false
false
700
r
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ware.R
\docType{data}
\name{WareColors}
\alias{WareColors}
\title{Ware palette}
\format{A simple vector}
\usage{
data(WareColors)
}
\description{
Ware palette
}
\references{
Ware
}
\keyword{datasets}
/man/WareColors.Rd
no_license
pacoalonso/alonsaRp
R
false
true
278
rd
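A quick way to eyeball a palette object like this, assuming WareColors is a character vector of colour codes once loaded (the package source is not shown here, so treat this purely as a sketch):

data(WareColors)
barplot(rep(1, length(WareColors)), col = WareColors, border = NA, axes = FALSE)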
library(NHPoisson) ### Name: emplambdaD.fun ### Title: Empirical occurrence rates of a NHPP on disjoint intervals ### Aliases: emplambdaD.fun ### ** Examples data(BarTxTn) BarEv<-POTevents.fun(T=BarTxTn$Tx,thres=318, date=cbind(BarTxTn$ano,BarTxTn$mes,BarTxTn$dia)) # empirical rate based on disjoint intervals using nint to specify the intervals emplambdaDB<-emplambdaD.fun(posE=BarEv$Px,inddat=BarEv$inddat, t=c(1:8415), nint=55) # empirical rate based on disjoint intervals using lint to specify the intervals emplambdaDB<-emplambdaD.fun(posE=BarEv$Px,inddat=BarEv$inddat, t=c(1:8415), lint=153)
/data/genthat_extracted_code/NHPoisson/examples/emplambdaD.fun.Rd.R
no_license
surayaaramli/typeRrh
R
false
false
617
r
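The two calls above should describe the same partition of the 8415-day series in two ways: nint = 55 disjoint intervals versus intervals of fixed length lint = 153 days; a quick check of the arithmetic:

8415 / 55   # 153: interval length implied by nint = 55
55 * 153    # 8415: series length recovered from lint = 153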
### R code from vignette source 'ashg2012.Rnw' ################################################### ### code chunk number 1: load-gtx-and-data ################################################### library(gtx) data(lipid.cad.scores) # format small P-value for LaTeX latexp <- function(pval, digits = 1) { paste(round(10^(log10(pval) - floor(log10(pval))), digits), "\\\\times10^{", floor(log10(pval)), "}", sep = "") } # compute odds-per-percent-change at qq-th quantile of estimate and CI # assuming risk score in ln(biomarker) and outcome in ln(odds) oppc <- function(grs, pc, qq, digits = 2) { return(round(sort(exp((grs$ahat + qnorm(qq)*grs$aSE)*log((100 + pc)/100))), digits)) } ################################################### ### code chunk number 2: hdl-plot1 ################################################### with(subset(lipid.cad.scores, score == "HDL"), { par(mar = c(4, 4, 0, 0.5) + 0.1) grs.plot(coef, beta_CAD, se_CAD, locus, textpos = c(3,3,1,1,1,1,3,1,3,1)) title(xlab = "ln change in HDL per allele", ylab = "ln(odds) change in CAD risk per allele") }) ################################################### ### code chunk number 3: tg-plot1 ################################################### with(subset(lipid.cad.scores, score == "TG"), { par(mar = c(4, 4, 0, 0.5) + 0.1) grs.plot(coef, beta_CAD, se_CAD, locus, textpos = c(1,1,1,1,1,1,1,1,3)) title(xlab = "ln change in triglycerides per allele", ylab = "ln(odds) change in CAD risk per allele") }) ################################################### ### code chunk number 4: fit-all-snps ################################################### hdl.grs1 <- with(subset(lipid.cad.scores, score == "HDL"), grs.summary(coef, beta_CAD, se_CAD, 38684 + 9633)) tg.grs1 <- with(subset(lipid.cad.scores, score == "TG"), grs.summary(coef, beta_CAD, se_CAD, 38684 + 9633)) ################################################### ### code chunk number 5: fit-subset-snps ################################################### hdl.filter <- with(subset(lipid.cad.scores, score == "HDL"), grs.filter.Qrs(coef, beta_CAD, se_CAD)) hdl.grs2 <- with(subset(lipid.cad.scores, score == "HDL"), grs.summary(coef[hdl.filter], beta_CAD[hdl.filter], se_CAD[hdl.filter], 38684 + 9633)) tg.filter <- with(subset(lipid.cad.scores, score == "TG"), grs.filter.Qrs(coef, beta_CAD, se_CAD)) tg.grs2 <- with(subset(lipid.cad.scores, score == "TG"), grs.summary(coef[tg.filter], beta_CAD[tg.filter], se_CAD[tg.filter], 38684 + 9633)) ################################################### ### code chunk number 6: hdl-plot2 ################################################### with(subset(lipid.cad.scores, score == "HDL"), { par(mar = c(4, 4, 0, 0.5) + 0.1) grs.plot(coef[hdl.filter], beta_CAD[hdl.filter], se_CAD[hdl.filter], locus[hdl.filter], textpos = c(3,3,1,1,1,1,3,1,3,1)[hdl.filter]) title(xlab = "ln change in HDL per allele", ylab = "ln(odds) change in CAD risk per allele") }) ################################################### ### code chunk number 7: tg-plot2 ################################################### with(subset(lipid.cad.scores, score == "TG"), { par(mar = c(4, 4, 0, 0.5) + 0.1) grs.plot(coef[tg.filter], beta_CAD[tg.filter], se_CAD[tg.filter], locus[tg.filter], textpos = c(1,1,1,1,1,1,1,1,3)[tg.filter]) title(xlab = "ln change in triglycerides per allele", ylab = "ln(odds) change in CAD risk per allele") })
/data/genthat_extracted_code/gtx/vignettes/ashg2012.R
no_license
surayaaramli/typeRrh
R
false
false
3,461
r
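The mantissa/exponent split that latexp() performs can be checked directly; a tiny sketch with a made-up p-value:

pval <- 3.2e-8                                   # hypothetical value
floor(log10(pval))                               # -8  -> exponent written as 10^{-8}
round(10^(log10(pval) - floor(log10(pval))), 1)  # 3.2 -> mantissa placed in front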
#start print("hello")
/Demo.R
no_license
PredAnaForTFS/PredictiveAnalyticsForTFS
R
false
false
22
r
#set the probe probe_ptp<- "cg08954025" #set the correct dir setwd("~/R/ageing/datasets/nafld/males") #load the data load("yx_train.r") load("yx_test.r") # run the revalidation source("~/R/ageing/functions/revalidate.r") revalidate() # 0.8214 accuracy #predict age with the marker load("~/R/ageing/datasets/nafld/males/age_yx_train_controls.r") load("~/R/ageing/datasets/nafld/males/age_yx_test_controls.r") age_yx_train_controls <- t(age_yx_train_controls) age_yx_test_controls <- t(age_yx_test_controls) x_train <- as.matrix(age_yx_train_controls[,probe_ptp]) y_train <- as.matrix(age_yx_train_controls[,"age"]) x_test <- as.matrix(age_yx_test_controls[,probe_ptp]) y_test <- as.matrix(age_yx_test_controls[,"age"]) library(tensorflow) library(keras) model <- keras_model_sequential() model %>% layer_dense(units = 100, activation = 'relu', input_shape = c(1)) %>% layer_dense(units = 100, activation = 'relu') %>% layer_dense(units = 100, activation = 'relu') %>% layer_dense(units = 100, activation = 'relu') %>% layer_dense(units = 100, activation = 'relu') %>% layer_dense(units = 10, activation = 'relu') %>% layer_dense(units = 1) summary(model) model %>% compile( loss = 'mse', optimizer = "adam", metrics = "mean_absolute_error" ) batch_size <- 16 epochs <- 200 # Fit model to data history <- model %>% fit( x_train, y_train, batch_size = batch_size, shuffle = T, epochs = epochs, validation_data = list(x_test,y_test)) plot(history) #val is MAE 14 years ###################################### ##################################### trainagep <- predict(model, x = x_train) mean(abs(trainagep - y_train)) hist(trainagep - y_train) cor(y = trainagep, x = y_train) testagep <- predict(model, x = x_test) mean(abs(testagep - y_test)) cor(y = testagep, x = y_test) plot(y = testagep, x = y_test, xlab = "True Age", ylab = "Predicted Age", main = "Male Liver - Healthy") hist(testagep - y_test) plot(y=age_yx_train_controls[,"age"], x=age_yx_train_controls[,probe_ptp]) m = lm(age ~ cg08954025, data = data.frame(age_yx_train_controls)) #r = 0.67 p = predict(m, newdata= data.frame(cg08954025 = age_yx_test_controls[,"cg08954025"])) mean(abs(p - age_yx_test_controls[,"age"])) #14 plot(x=age_yx_test_controls[,"age"], y=age_yx_test_controls[,"cg08954025"]) abline(m) cor(x=age_yx_test_controls[,"age"],p)
/scripts/nafld/males/s_revalidate_male_liver.R
no_license
k1sauce/Ageing-Project
R
false
false
2,362
r
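Since the script fits both a small keras network and a one-probe linear model on the same CpG, a side-by-side view of their test-set MAE is a natural closing step; a sketch assuming the objects created above (testagep, y_test, p, age_yx_test_controls) are still in the workspace:

data.frame(
  model    = c("keras network", "lm on cg08954025"),
  test_MAE = c(mean(abs(testagep - y_test)),
               mean(abs(p - age_yx_test_controls[, "age"]))))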
# Taken from https://github.com/Vivianstats/scImpute # find_hv_genes #' @importFrom stats quantile .find_hv_genes <- function(count, I, J){ count_nzero = lapply(1:I, function(i) setdiff(count[i, ], log10(1.01))) mu = sapply(count_nzero, mean) mu[is.na(mu)] = 0 sd = sapply(count_nzero, sd) sd[is.na(sd)] = 0 cv = sd/mu cv[is.na(cv)] = 0 # sum(mu >= 1 & cv >= quantile(cv, 0.25), na.rm = TRUE) high_var_genes = which(mu >= 1 & cv >= stats::quantile(cv, 0.25)) if(length(high_var_genes) < 500){ high_var_genes = 1:I} count_hv = count[high_var_genes, ] return(count_hv) } # find_neighbors #' @importFrom stats prcomp quantile #' @importFrom rsvd rpca #' @importFrom kernlab specc #' @importFrom parallel mclapply .find_neighbors <- function(count_hv, labeled, J, Kcluster = NULL, ncores, cell_labels = NULL){ if(labeled == TRUE){ if(class(cell_labels) == "character"){ labels_uniq = unique(cell_labels) labels_mth = 1:length(labels_uniq) names(labels_mth) = labels_uniq clust = labels_mth[cell_labels] }else{ clust = cell_labels } nclust = length(unique(clust)) dist_list = lapply(1:nclust, function(ll){ cell_inds = which(clust == ll) count_hv_sub = count_hv[, cell_inds, drop = FALSE] if(length(cell_inds) < 1000){ var_thre = 0.4 pca = stats::prcomp(t(count_hv_sub)) eigs = (pca$sdev)^2 var_cum = cumsum(eigs)/sum(eigs) if(max(var_cum) <= var_thre){ npc = length(var_cum) }else{ npc = which.max(var_cum > var_thre) if (labeled == FALSE){ npc = max(npc, Kcluster) } } }else{ var_thre = 0.6 pca = rsvd::rpca(t(count_hv_sub), k = 1000, center = TRUE, scale = FALSE) eigs = (pca$sdev)^2 var_cum = cumsum(eigs)/sum(eigs) if(max(var_cum) <= var_thre){ npc = length(var_cum) }else{ npc = which.max(var_cum > var_thre) if (labeled == FALSE){ npc = max(npc, Kcluster) } } } if (npc < 3){ npc = 3 } mat_pcs = t(pca$x[, 1:npc]) dist_cells_list = mclapply(1:length(cell_inds), function(id1){ d = sapply(1:id1, function(id2){ sse = sum((mat_pcs[, id1] - mat_pcs[, id2])^2) sqrt(sse) }) return(c(d, rep(0, length(cell_inds)-id1))) }, mc.cores = ncores) dist_cells = matrix(0, nrow = length(cell_inds), ncol = length(cell_inds)) for(cellid in 1:length(cell_inds)){dist_cells[cellid, ] = dist_cells_list[[cellid]]} dist_cells = dist_cells + t(dist_cells) return(dist_cells) }) return(list(dist_list = dist_list, clust = clust)) } if(labeled == FALSE){ ## dimeansion reduction if(J < 5000){ var_thre = 0.4 pca = stats::prcomp(t(count_hv)) eigs = (pca$sdev)^2 var_cum = cumsum(eigs)/sum(eigs) if(max(var_cum) <= var_thre){ npc = length(var_cum) }else{ npc = which.max(var_cum > var_thre) if (labeled == FALSE){ npc = max(npc, Kcluster) } } }else{ var_thre = 0.6 pca = rsvd::rpca(t(count_hv), k = 1000, center = TRUE, scale = FALSE) eigs = (pca$sdev)^2 var_cum = cumsum(eigs)/sum(eigs) if(max(var_cum) <= var_thre){ npc = length(var_cum) }else{ npc = which.max(var_cum > var_thre) if (labeled == FALSE){ npc = max(npc, Kcluster) } } } if (npc < 3){ npc = 3 } mat_pcs = t(pca$x[, 1:npc]) # columns are cells ## detect outliers dist_cells_list = mclapply(1:J, function(id1){ d = sapply(1:id1, function(id2){ sse = sum((mat_pcs[, id1] - mat_pcs[, id2])^2) sqrt(sse) }) return(c(d, rep(0, J-id1))) }, mc.cores = ncores) dist_cells = matrix(0, nrow = J, ncol = J) for(cellid in 1:J){dist_cells[cellid, ] = dist_cells_list[[cellid]]} dist_cells = dist_cells + t(dist_cells) min_dist = sapply(1:J, function(i){ min(dist_cells[i, -i]) }) iqr = stats::quantile(min_dist, 0.75) - stats::quantile(min_dist, 0.25) outliers = which(min_dist > 1.5 * iqr + stats::quantile(min_dist, 0.75)) 
## clustering non_out = setdiff(1:J, outliers) spec_res = kernlab::specc(t(mat_pcs[, non_out]), centers = Kcluster, kernel = "rbfdot") nbs = rep(NA, J) nbs[non_out] = spec_res return(list(dist_cells = dist_cells, clust = nbs)) } } ### root-finding equation .fn = function(alpha, target){ log(alpha) - digamma(alpha) - target } ### update parameters in gamma distribution #' @importFrom stats uniroot .update_gmm_pars = function(x, wt){ tp_s = sum(wt) tp_t = sum(wt * x) tp_u = sum(wt * log(x)) tp_v = -tp_u / tp_s - log(tp_s / tp_t) if (tp_v <= 0){ alpha = 20 }else{ alpha0 = (3 - tp_v + sqrt((tp_v - 3)^2 + 24 * tp_v)) / 12 / tp_v if (alpha0 >= 20){alpha = 20 }else{ alpha = stats::uniroot(.fn, c(0.9, 1.1) * alpha0, target = tp_v, extendInt = "yes")$root } } ## need to solve log(x) - digamma(x) = tp_v ## We use this approximation to compute the initial value beta = tp_s / tp_t * alpha return(c(alpha, beta)) } ### estimate parameters in the mixture distribution #' @importFrom stats sd .get_mix <- function(xdata, point){ inits = rep(0, 5) inits[1] = sum(xdata == point)/length(xdata) if (inits[1] == 0) {inits[1] = 0.01} inits[2:3] = c(0.5, 1) xdata_rm = xdata[xdata > point] inits[4:5] = c(mean(xdata_rm), stats::sd(xdata_rm)) if (is.na(inits[5])) {inits[5] = 0} paramt = inits eps = 10 iter = 0 loglik_old = 0 while(eps > 0.5) { wt = .calculate_weight(xdata, paramt) paramt[1] = sum(wt[, 1])/nrow(wt) paramt[4] = sum(wt[, 2] * xdata)/sum(wt[, 2]) paramt[5] = sqrt(sum(wt[, 2] * (xdata - paramt[4])^2)/sum(wt[, 2])) paramt[2:3] = .update_gmm_pars(x=xdata, wt=wt[,1]) loglik = sum(log10(.dmix(xdata, paramt))) eps = (loglik - loglik_old)^2 loglik_old = loglik iter = iter + 1 if (iter > 100) break } return(paramt) } #' @importFrom stats dgamma dnorm .dmix <- function (x, pars) { pars[1] * stats::dgamma(x, shape = pars[2], rate = pars[3]) + (1 - pars[1]) * stats::dnorm(x, mean = pars[4], sd = pars[5]) } #' @importFrom parallel mclapply .get_mix_parameters <- function (count, point = log10(1.01), ncores = 8) { count = as.matrix(count) null_genes = which(abs(rowSums(count) - point * ncol(count)) < 1e-10) parslist = parallel::mclapply(1:nrow(count), function(ii) { if (ii %% 2000 == 0) { gc() } if (ii %in% null_genes) { return(rep(NA, 5)) } xdata = count[ii, ] paramt = try(.get_mix(xdata, point), silent = TRUE) if (class(paramt) == "try-error"){ paramt = rep(NA, 5) } return(paramt) }, mc.cores = ncores) parslist = Reduce(rbind, parslist) colnames(parslist) = c("rate", "alpha", "beta", "mu", "sigma") return(parslist) } # find_va_genes #' @importFrom stats complete.cases dgamma dnorm .find_va_genes = function(parslist, subcount){ point = log10(1.01) valid_genes = which( (rowSums(subcount) > point * ncol(subcount)) & stats::complete.cases(parslist) ) if(length(valid_genes) == 0) return(valid_genes) # find out genes that violate assumption mu = parslist[, "mu"] sgene1 = which(mu <= log10(1+1.01)) # sgene2 = which(mu <= log10(10+1.01) & mu - parslist[,5] > log10(1.01)) dcheck1 = stats::dgamma(mu+1, shape = parslist[, "alpha"], rate = parslist[, "beta"]) dcheck2 = stats::dnorm(mu+1, mean = parslist[, "mu"], sd = parslist[, "sigma"]) sgene3 = which(dcheck1 >= dcheck2 & mu <= 1) sgene = union(sgene1, sgene3) valid_genes = setdiff(valid_genes, sgene) return(valid_genes) } # calculate_weight #' @importFrom stats dgamma dnorm .calculate_weight <- function (x, paramt){ pz1 = paramt[1] * stats::dgamma(x, shape = paramt[2], rate = paramt[3]) pz2 = (1 - paramt[1]) * stats::dnorm(x, mean = paramt[4], sd = paramt[5]) pz = pz1/(pz1 + 
pz2) pz[pz1 == 0] = 0 return(cbind(pz, 1 - pz)) } # impute_nnls #' @importFrom penalized penalized predict .impute_nnls <- function(Ic, cellid, subcount, droprate, geneid_drop, geneid_obs, nbs, distc){ yobs = subcount[ ,cellid] if (length(geneid_drop) == 0 | length(geneid_drop) == Ic) { return(yobs) } yimpute = rep(0, Ic) xx = subcount[geneid_obs, nbs] yy = subcount[geneid_obs, cellid] ximpute = subcount[geneid_drop, nbs] num_thre = 1000 if(ncol(xx) >= min(num_thre, nrow(xx))){ if (num_thre >= nrow(xx)){ new_thre = round((2*nrow(xx)/3)) }else{ new_thre = num_thre} filterid = order(distc[cellid, -cellid])[1: new_thre] xx = xx[, filterid, drop = FALSE] ximpute = ximpute[, filterid, drop = FALSE] } set.seed(cellid) nnls = penalized::penalized(yy, penalized = xx, unpenalized = ~0, positive = TRUE, lambda1 = 0, lambda2 = 0, maxiter = 3000, trace = FALSE) ynew = penalized::predict(nnls, penalized = ximpute, unpenalized = ~0)[,1] yimpute[geneid_drop] = ynew yimpute[geneid_obs] = yobs[geneid_obs] return(yimpute) } #' @importFrom parallel makeCluster stopCluster #' @importFrom doParallel registerDoParallel #' @importFrom foreach foreach %dopar% .imputation_model8 = function(count, labeled, point, drop_thre = 0.5, Kcluster = 10, ncores){ count = as.matrix(count) I = nrow(count) J = ncol(count) count_imp = count # find highly variable genes count_hv = .find_hv_genes(count, I, J) if(Kcluster == 1){ clust = rep(1, J) if(J < 5000){ var_thre = 0.4 pca = stats::prcomp(t(count_hv)) eigs = (pca$sdev)^2 var_cum = cumsum(eigs)/sum(eigs) if(max(var_cum) <= var_thre){ npc = length(var_cum) }else{ npc = which.max(var_cum > var_thre) if (labeled == FALSE){ npc = max(npc, Kcluster) } } }else{ var_thre = 0.6 pca = rsvd::rpca(t(count_hv), k = 1000, center = TRUE, scale = FALSE) eigs = (pca$sdev)^2 var_cum = cumsum(eigs)/sum(eigs) if(max(var_cum) <= var_thre){ npc = length(var_cum) }else{ npc = which.max(var_cum > var_thre) if (labeled == FALSE){ npc = max(npc, Kcluster) } } } if (npc < 3){ npc = 3 } mat_pcs = t(pca$x[, 1:npc]) # columns are cells dist_cells_list = mclapply(1:J, function(id1){ d = sapply(1:id1, function(id2){ sse = sum((mat_pcs[, id1] - mat_pcs[, id2])^2) sqrt(sse) }) return(c(d, rep(0, J-id1))) }, mc.cores = ncores) dist_cells = matrix(0, nrow = J, ncol = J) for(cellid in 1:J){dist_cells[cellid, ] = dist_cells_list[[cellid]]} dist_cells = dist_cells + t(dist_cells) }else{ set.seed(Kcluster) neighbors_res = .find_neighbors(count_hv = count_hv, labeled = FALSE, J = J, Kcluster = Kcluster, ncores = ncores) dist_cells = neighbors_res$dist_cells clust = neighbors_res$clust } # mixture model nclust = sum(!is.na(unique(clust))) cl = parallel::makeCluster(ncores) doParallel::registerDoParallel(cl) for(cc in 1:nclust){ params <- .get_mix_parameters(count = count[, which(clust == cc), drop = FALSE], point = log10(1.01), ncores = ncores) cells = which(clust == cc) if(length(cells) <= 1) { next } parslist = params valid_genes = .find_va_genes(parslist, subcount = count[, cells]) if(length(valid_genes) <= 10){ next } subcount = count[valid_genes, cells, drop = FALSE] Ic = length(valid_genes) Jc = ncol(subcount) parslist = parslist[valid_genes, ] droprate = t(sapply(1:Ic, function(i) { wt = .calculate_weight(subcount[i, ], parslist[i, ]) return(wt[, 1]) })) mucheck = sweep(subcount, MARGIN = 1, parslist[, "mu"], FUN = ">") droprate[mucheck & droprate > drop_thre] = 0 # dropouts setA = lapply(1:Jc, function(cellid){ which(droprate[, cellid] > drop_thre) }) # non-dropouts setB = lapply(1:Jc, function(cellid){ 
which(droprate[, cellid] <= drop_thre) }) # imputation subres = foreach::foreach(cellid = 1:Jc, .packages = c("penalized"), .combine = cbind, .export = c(".impute_nnls")) %dopar% { if (cellid %% 100 == 0) {gc()} nbs = setdiff(1:Jc, cellid) if (length(nbs) == 0) {return(NULL)} geneid_drop = setA[[cellid]] geneid_obs = setB[[cellid]] y = try(.impute_nnls(Ic, cellid, subcount, droprate, geneid_drop, geneid_obs, nbs, distc = dist_cells[cells, cells]), silent = TRUE) if (class(y) == "try-error") { # print(y) y = subcount[, cellid, drop = FALSE] } return(y) } count_imp[valid_genes, cells] = subres } parallel::stopCluster(cl) outlier = which(is.na(clust)) count_imp[count_imp < point] = point return(list(count_imp = count_imp, outlier = outlier)) }
/R/utils_scimpute.R
permissive
bvieth/powsimR
R
false
false
14,195
r
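The dropout model above is a two-component mixture (gamma for dropouts, normal for expressed values) evaluated by .dmix(); a small sketch with hypothetical parameters in the (rate, alpha, beta, mu, sigma) order used by .get_mix_parameters(), assuming the helpers above have been sourced:

pars <- c(0.3, 0.5, 1, 2.5, 0.8)   # hypothetical mixing weight, gamma shape/rate, normal mean/sd
x <- seq(0.01, 5, by = 0.01)       # grid on the log10 expression scale
plot(x, .dmix(x, pars), type = "l", xlab = "log10 expression", ylab = "mixture density")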
rm(list = ls())

# Packages ----------------------------------------------------------------
library(xgboost)
library(tidyverse)
library(DMwR)
library(ModelMetrics)
library(rBayesianOptimization)

# Load Data ---------------------------------------------------------------
load('produced_data/final_train.RData')

# Final model -------------------------------------------------------------
xgb_fit_final <- function(max_depth, eta, nrounds, subsample, colsample_bytree){

  # Training data
  x_train_tmp <- as.matrix(x_train)
  y_train_tmp <- y_train %>%
    select(APROVOU) %>%
    mutate(APROVOU = ifelse(APROVOU == 1, "YES", "NO"),
           APROVOU = factor(APROVOU, levels = c("NO", "YES")))

  row.names(y_train_tmp) <- NULL
  row.names(x_train_tmp) <- NULL
  x_train_tmp <- as.data.frame(cbind(y_train_tmp, x_train_tmp))

  data_smote <- SMOTE(APROVOU ~ ., data = x_train_tmp,
                      perc.over = 2000, perc.under = 105)
  table(data_smote$APROVOU)

  rm(x_train_tmp); gc();

  dtrain <- xgb.DMatrix(data = data.matrix(data_smote[, -1]),
                        label = as.numeric(data_smote[, "APROVOU"]) - 1)

  # Parameters
  parametros <- list(
    objective = "binary:logistic",
    max_depth = max_depth,
    eta = eta,
    colsample_bytree = colsample_bytree,
    subsample = subsample,
    eval_metric = "auc"
  )

  fit <- xgb.train(params = parametros, data = dtrain,
                   nrounds = nrounds, maximize = TRUE)

  return(fit)
}

set.seed(44439)
fit <- xgb_fit_final(max_depth = 22, eta = 0.0393, nrounds = 1965,
                     subsample = 0.7631, colsample_bytree = 0.2593)

dfinal <- xgb.DMatrix(data = x_final)
pred <- predict(fit, dfinal)

y_final$chance <- pred
y_final$velocidade <- x_final[,"n_tram_30d"]
y_final$qtd_tramitacoes <- x_final[, "qtd_tram"]

write.csv(y_final, 'produced_data/dados_chance.csv', row.names = FALSE)
/R/rscripts/train_xgboost_final.R
no_license
treisdev/supreme-potato
R
false
false
1,924
r
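Once fit exists, xgboost's importance utility gives a quick view of which features drive the predicted chance; a sketch that assumes x_final carries column names (the top-10 cutoff is arbitrary):

imp <- xgb.importance(feature_names = colnames(x_final), model = fit)
head(imp, 10)   # Gain / Cover / Frequency per feature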
# Download the file url1 <- "https://d396qusza40orc.cloudfront.net/exdata%2Fdata%2FNEI_data.zip" destfile1 <- "destfile.zip" if(!file.exists(destfile1)) { download.file(url1, destfile = destfile1, method = "curl") unzip(destfile1, exdir = ".") } # Load the NEI & SCC data frames. NEI <- readRDS("summarySCC_PM25.rds") SCC <- readRDS("Source_Classification_Code.rds") # Gather the subset of the NEI data which corresponds to vehicles vehicles <- grepl("vehicle", SCC$SCC.Level.Two, ignore.case=TRUE) vehiclesSCC <- SCC[vehicles,]$SCC vehiclesNEI <- NEI[NEI$SCC %in% vehiclesSCC,] # Subset the vehicles NEI data by each city's fip and add city name. vehiclesBaltimoreNEI <- vehiclesNEI[vehiclesNEI$fips=="24510",] vehiclesBaltimoreNEI$city <- "Baltimore City" vehiclesLANEI <- vehiclesNEI[vehiclesNEI$fips=="06037",] vehiclesLANEI$city <- "Los Angeles County" # Combine the two subsets with city name into one data frame bothNEI <- rbind(vehiclesBaltimoreNEI,vehiclesLANEI) png("plot6.png",width=480,height=480,units="px",bg="transparent") library(ggplot2) ggp <- ggplot(bothNEI, aes(x=factor(year), y=Emissions, fill=city)) + geom_bar(aes(fill=year),stat="identity") + facet_grid(scales="free", space="free", .~city) + guides(fill=FALSE) + theme_bw() + labs(x="year", y=expression("Total PM"[2.5]*" Emission (Kilo-Tons)")) + labs(title=expression("PM"[2.5]*" Motor Vehicle Source Emissions in Baltimore & LA, 1999-2008")) print(ggp) dev.off()
/Plot6.R
no_license
Sakshi-Niranjan-Kulkarni/Exploratory-Data-Analysis-2
R
false
false
1,507
r
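The bar heights in the plot come from ggplot2 stacking individual emission records; the same totals can be tabulated directly as a sanity check (a sketch using base aggregate()):

totals <- aggregate(Emissions ~ year + city, data = bothNEI, FUN = sum)
totals   # total vehicle PM2.5 per city and year, 1999-2008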
# plot results for additional datasets in supplementary plot_add_sim_results = function(outfolder, plotfolder, extended = F) { library(ggplot2) df = NULL age = if(!extended) 0.2 else 0.1 props = if(!extended) c(0.1, 0.3, 0.5) else 0.1 if(!extended) { conds = c("", "_ss", "_relaxed", "_mrphdisc") #, "_large" cnames = c("Normal", "Less fossils", "Relaxed clock", "Realistic characters") # , "More fossils" } else { conds = c("", "_no_deposit", "_burst_deposit", "_low_morph") #, "_morph_relaxed") cnames = c("Normal", "No deposit", "Short-time deposit", "Low morphological clock rate") #, "Relaxed morphological clock") } names = c("mean_RF", "datedf_med_rel_error", "datedf_coverage", "undatedf_med_rel_error", "undatedf_coverage", "datedf_top_error","undatedf_top_error", "undatedf_HPD_width") plotnames = c("Robinson-Foulds distance", "Relative error of age\n(precise-date fossils)", "Coverage of age (precise-date fossils)", "Relative error of age \n(imprecise-date fossils)", "Coverage of age (imprecise-date fossils)", "Proportion of correct positions \n(precise-date fossils)", "Proportion of correct positions \n(imprecise-date fossils)", "HPD width (imprecise-date fossils)") for(cnd in conds) { for(prop in props) { if(cnd == "_no_deposit") prop = 0 file_name = paste0(outfolder, "/DS_seed_451_prop_", prop, "_age_", age, cnd, "_results.RData") load(file_name) results = get(paste0("DS_seed_451_prop_", prop, "_age_", age, cnd, "_results")) for(m in names) { if(all(is.na(results[[m]]))) df = rbind(df, data.frame(Cond = cnames[which(conds == cnd)], Prop = as.character(prop), values = NA, Measure = m)) else df = rbind(df, data.frame(Cond = cnames[which(conds == cnd)], Prop = as.character(prop), values = results[[m]][!is.na(results[[m]])], Measure = m)) } } } cbPalette <- c("#56B4E9", "#CC79A7", "#009E73", "#F0E442", "#D55E00") for(i in 1:length(names)) { subdf = df[which(df$Measure == names[i]),] subdf$Cond = factor(subdf$Cond, levels = cnames) pl = ggplot(subdf, mapping=aes(x=Prop, y=values, colour=factor(Cond))) + xlab("Proportion of imprecise-date fossils") + ylab(plotnames[i]) + geom_boxplot(width = 0.5, position = position_dodge(width=0.7)) if(i == 8) { #HPD width pl = pl + geom_hline(yintercept = 20, color = "#D55E00", size = 1) } if(i %in% c(3,5)) { #coverage plots pl = pl + ylim(0.2, 1.0) } if(i %in% c(6,7)) { #topological error pl = pl + ylim(0.0, 1.0) } if(i %in% c(5,8)) { #with legend pl = pl + theme(text = element_text(size = 15), legend.position = "bottom", legend.direction = "vertical") + scale_color_manual(values=cbPalette, name = "Simulation condition", drop = F) h = 6.6 ggsave(paste0(plotfolder, "/", names[i], ".pdf"), width = 5, height = h) } else { #no legend pl = pl + theme(text = element_text(size = 15), legend.position = "none") + scale_color_manual(values=cbPalette, drop = F) ggsave(paste0(plotfolder, "/", names[i], ".pdf"), width = 5, height = 5) } } } # ridge plot for a simulated run plot_example_ridges = function(trees_file, dataset_dir, plotfile = NULL, seed = 451, maxn = 10) { library(ggplot2) library(ggridges) filenm = strsplit(trees_file,'/')[[1]] filenm = filenm[length(filenm)] filenm = substr(filenm, 1, nchar(filenm) -6) filenm = strsplit(filenm,"_")[[1]] prop = filenm[5] age = filenm[7] cond = if(filenm[8] %in% c("ss", "large", "relaxed", "mrphdisc")) filenm[8] else "" idx = as.numeric(if(cond == "") filenm[8] else filenm[9]) name = paste0("DS_seed_", seed, "_prop_", prop, "_age_", age, cond) load(paste0(dataset_dir, "/", name, ".RData")) true_tree = samp_trees[[idx]] 
true_fossils = fossils[[idx]] trees = read.incomplete.nexus(trees_file) n = length(trees) trees = trees[round(n*0.25):n] fid = true_tree$tip.label[(length(true_tree$tip.label) - length(true_fossils$sp) +1):length(true_tree$tip.label)][1:maxn] est_ages = list() for(t in trees) { tag = ape::node.depth.edgelength(t) tag = max(tag) - tag for(ff in fid) { est_ages[[ff]] = c(est_ages[[ff]], tag[which(t$tip.label == ff)]) } } min_true = 0.9*min(true_fossils$hmin[1:maxn]) max_true = 1.1*max(true_fossils$hmax[1:maxn]) age_range = seq(min_true, max_true, 0.01) sim_ages = lapply(1:length(fid), function(ff) { sapply(age_range, function(a) { if(a < true_fossils$hmin[ff] || a > true_fossils$hmax[ff]) 0.01 else 1 }) }) names(sim_ages) = fid true_ages = true_fossils$h[1:maxn] names(true_ages) = fid df = data.frame() for(ff in fid) { df = rbind(df, data.frame(taxa = ff, type = "simulated", age = age_range, height = sim_ages[[ff]])) df = rbind(df, data.frame(taxa = ff, type = "estimated", age = est_ages[[ff]], height = NA)) } dftrue = data.frame(taxa = as.factor(fid), true = true_ages, type = "simulated") nf = paste0("example file, prop = ", prop, ", age = ", age, ", rep = ", idx) pl = ggplot(df, aes(x = age, y = taxa, color = type, fill = type)) + geom_density_ridges(data = df[df$type == "simulated",], stat="identity", alpha = 0.5, scale = 0.6, aes(height=height)) + geom_density_ridges(data = df[df$type == "estimated",], alpha = 0.5, scale = 0.8) + geom_segment(data = dftrue, aes(x = true, xend = true+0.1, y = as.numeric(taxa), yend = as.numeric(taxa) + 0.6), color = "red") + scale_color_manual(values = c("#56B4E9", "#CC79A7")) + scale_fill_manual(values = c("#56B4E9", "#CC79A7")) + scale_x_reverse(expand = c(0, 0)) + scale_y_discrete(expand = expansion(mult = c(0.01, .1))) + theme(axis.text.y = element_text(vjust = 0, face = "italic"), legend.title = element_blank(), axis.title = element_text(size = 16), axis.text = element_text(size = 14), legend.text = element_text(size = 12)) + ggtitle(paste0("Estimated and simulated ages,\n", nf)) + labs(y = "Fossil species") + theme(plot.title = element_text(size = 18, hjust = 0.5)) if(is.null(plotfile)) show(pl) else ggsave(paste0(plotfile), height = 7, width = 9) } # prior plots for penguins dataset penguins_prior_plot = function(taxafolder, plotfolder) { library(ggplot2) library(ggridges) n = 3 prop = c(0.5, 1) for(i in 1:n) { for(p in prop) { age_table = read.table(file.path(taxafolder, paste0("taxa_", i, "_", p, ".tsv")), header = T, stringsAsFactors = F) age_table = age_table[age_table$min > 1e-3, ] max_true = 1.1*max(age_table$max) age_range = seq(0, max_true, 0.05) true_ages = lapply(1:length(age_table$taxon), function(id) { sapply(age_range, function(a) { if(a < age_table$min[id] || a > age_table$max[id]) 0.01 else 1 }) }) names(true_ages) = age_table$taxon df = data.frame() for(ff in 1:length(true_ages)) { df = rbind(df, data.frame(taxa = .taxa.name(names(true_ages)[ff]), age = age_range, height = true_ages[[ff]])) } df$taxa = factor(df$taxa) int = c("small", "large", "extended")[i] nf = paste0(p*100, "% imprecise-date fossils") pl = ggplot(df, aes(x = age, y = taxa)) + geom_density_ridges(stat="identity", alpha = 0.7, scale = 0.6, aes(height = height), color = "#009E73", fill = "#009E73") + scale_x_reverse(expand = c(0, 0)) + scale_y_discrete(expand = expansion(mult = 0.01)) + theme(axis.text.y = element_text(vjust = 0, face = "italic"), legend.title = element_blank(), axis.title = element_text(size = 16), axis.text = element_text(size = 14), legend.text = 
element_text(size = 12)) + ggtitle(paste0("Age ranges,\n", int, " interval, ", nf)) + labs(y = "Fossil species") + theme(plot.title = element_text(size = 18, hjust = 0.5)) ggsave(paste0(plotfolder, "/penguins_priors_", i, "_", p, ".pdf"), height = 12, width = 9) } } }
/code_files/figures/figures_suppl.R
permissive
bjoelle/Poorly_dated_fossils_SI
R
false
false
8,153
r
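The file above defines three plotting functions (plot_add_sim_results, plot_example_ridges, penguins_prior_plot) but never calls them. A minimal usage sketch follows; every path and file name, and the availability of helpers such as read.incomplete.nexus() and the simulation output files, are assumptions for illustration, not part of the original file.

# Hypothetical calls to the plotting functions defined in figures_suppl.R;
# folder and file names below are placeholders only.
plot_add_sim_results(outfolder = "output", plotfolder = "plots", extended = FALSE)
plot_example_ridges(trees_file = "output/DS_seed_451_prop_0.1_age_0.2_1.trees",
                    dataset_dir = "datasets", plotfile = "plots/example_ridges.pdf")
penguins_prior_plot(taxafolder = "penguins/taxa", plotfolder = "plots")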
head(TempHeart)
xyplot(heartRate ~ bodyTemp, data = TempHeart, type = c("p", "r"))
/inst/snippets/Figure10.13.R
no_license
rpruim/ISIwithR
R
false
false
84
r
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/normalization_PCA.R
\name{normalization_PCA}
\alias{normalization_PCA}
\title{normalization_PCA}
\usage{
normalization_PCA(e2, f2, p2, color = "species", shape = "no_shape",
  opacity = 0.5, ellipse_needed = TRUE, ellipseLineType = "solid",
  showlegend = TRUE, dotsize = 15, ellipse_line_width = 1,
  confidence_level = 0.95, paper_bgcolor = "rgba(245,246,249,1)",
  plot_bgcolor = "rgba(245,246,249,1)", width = 1000, height = 1000,
  title = NULL)
}
\description{
stat
}
\examples{
normalization_PCA()
}
/man/normalization_PCA.Rd
no_license
jaspershen/metabox.stat
R
false
true
586
rd
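The help page above documents many styling arguments but describes the function only as "stat" and ships an empty example. A hedged sketch of a call follows; the meaning of e2, f2 and p2 is undocumented, so they appear only as placeholder objects, and the package name is assumed from the repository path.

# Hypothetical call; e2, f2, p2 are undocumented placeholders and
# library(metabox.stat) assumes the package installs under its repository name.
library(metabox.stat)
normalization_PCA(e2, f2, p2,
                  color = "species", shape = "no_shape",
                  ellipse_needed = TRUE, confidence_level = 0.95,
                  width = 800, height = 800,
                  title = "PCA after normalization")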
library(chinese.misc)

### Name: is_positive_integer
### Title: A Convenient Version of is.integer
### Aliases: is_positive_integer

### ** Examples

is_positive_integer(NULL)
is_positive_integer(as.integer(NA))
is_positive_integer(integer(0))
is_positive_integer(3.0)
is_positive_integer(3.3)
is_positive_integer(1:5)
is_positive_integer(1:5, len = c(2, 10))
is_positive_integer(1:5, len = c(2:10))
/data/genthat_extracted_code/chinese.misc/examples/is_positive_integer.Rd.R
no_license
surayaaramli/typeRrh
R
false
false
404
r
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/signals.R \docType{data} \name{signals} \alias{signals} \alias{signals} \alias{process_terminate} \alias{process_kill} \alias{process_send_signal} \alias{SIGABRT} \alias{SIGALRM} \alias{SIGCHLD} \alias{SIGCONT} \alias{SIGFPE} \alias{SIGHUP} \alias{SIGILL} \alias{SIGINT} \alias{SIGKILL} \alias{SIGPIPE} \alias{SIGQUIT} \alias{SIGSEGV} \alias{SIGSTOP} \alias{SIGTERM} \alias{SIGTSTP} \alias{SIGTTIN} \alias{SIGTTOU} \alias{SIGUSR1} \alias{SIGUSR2} \alias{CTRL_C_EVENT} \alias{CTRL_BREAK_EVENT} \title{Sending signals to the child process.} \format{An object of class \code{list}.} \usage{ signals process_terminate(handle) process_kill(handle) process_send_signal(handle, signal) SIGABRT SIGALRM SIGCHLD SIGCONT SIGFPE SIGHUP SIGILL SIGINT SIGKILL SIGPIPE SIGQUIT SIGSEGV SIGSTOP SIGTERM SIGTSTP SIGTTIN SIGTTOU SIGUSR1 SIGUSR2 CTRL_C_EVENT CTRL_BREAK_EVENT } \arguments{ \item{handle}{Process handle obtained from \code{spawn_process()}.} \item{signal}{Signal number, one of \code{names(signals)}.} } \description{ Sending signals to the child process. Operating-System-level signals that can be sent via \link{process_send_signal} are defined in the `subprocess::signals`` list. It is a list that is generated when the package is loaded and it contains only signals supported by the current platform (Windows or Linux). All signals, both supported and not supported by the current platform, are also exported under their names. If a given signal is not supported on the current platform, then its value is set to \code{NA}. Calling \code{process_kill()} and \code{process_terminate()} invokes the appropriate OS routine (\code{waitpid()} or \code{WaitForSingleObject()}, closing the process handle, etc.) that effectively lets the operating system clean up after the child process. Calling \code{process_send_signal()} is not accompanied by such clean-up and if the child process exits it needs to be followed by a call to \code{\link[=process_wait]{process_wait()}}. \code{process_terminate()} on Linux sends the \code{SIGTERM} signal to the process pointed to by \code{handle}. On Windows it calls \code{TerminateProcess()}. \code{process_kill()} on Linux sends the \code{SIGKILL} signal to \code{handle}. On Windows it is an alias for \code{process_terminate()}. \code{process_send_signal()} sends an OS-level \code{signal} to \code{handle}. In Linux all standard signal numbers are supported. On Windows supported signals are \code{SIGTERM}, \code{CTRL_C_EVENT} and \code{CTRL_BREAK_EVENT}. Those values will be available via the \code{signals} list which is also attached in the package namespace. } \details{ In Windows, signals are delivered either only to the child process or to the child process and all its descendants. This behavior is controlled by the \code{termination_mode} argument of the \code{\link[subprocess:spawn_process]{subprocess::spawn_process()}} function. Setting it to \code{TERMINATION_GROUP} results in signals being delivered to the child and its descendants. } \examples{ \dontrun{ # send the SIGKILL signal to bash h <- spawn_process('bash') process_signal(h, signals$SIGKILL) process_signal(h, SIGKILL) # is SIGABRT supported on the current platform? is.na(SIGABRT) } \dontrun{ # Windows process_send_signal(h, SIGTERM) process_send_signal(h, CTRL_C_EVENT) process_send_signal(h, CTRL_BREAK_EVENT) } } \seealso{ \code{\link[=spawn_process]{spawn_process()}} } \keyword{datasets}
/man/signals.Rd
no_license
cran/subprocess
R
false
true
3,527
rd
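The examples in the help page above call process_signal(), a name that does not appear in its \usage block. A sketch using only the documented functions follows; it assumes a Unix-like system where bash can be spawned and that the subprocess package is installed.

# Send the documented SIGTERM signal to a spawned shell, then reap the child.
library(subprocess)
h <- spawn_process("bash")
process_send_signal(h, signals$SIGTERM)  # or process_terminate(h)
process_wait(h)                          # collect the exit status after the signal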
# EPI5143 Winter 2020 Quiz 3. Due Wednesday March 11th, 2020 by 11:59pm.
# Please provide the code (and results from console if requested) you used to execute
# the requested commands in each question 1 to 4 in this file. Don't forget to include
# your name in the document! (and in the filename) Don't include the plot images themselves.
# When you have completed the assignment, save it in the same file format right from RStudio
# (.R, which is simply a plain text file) and submit it to Github using instructions
# from last class and the link emailed to you.

# If you haven't already, install the "tidyverse" package, and load it into memory
# using the library() command
library(tidyverse)

# The data visualization lecture notes, as well as Chapter 3: Data Visualization from
# "R for Data Science" (available at https://r4ds.had.co.nz/ ) are good resources to
# provide guidance

# Question 1. The mpg dataset (shipped with ggplot2) includes data on fuel efficiency of a number
# of makes and models of automobile. Have a look at this dataset using the View() command.
# How many observations and how many variables does this dataset have? (provide the code
# and result from the console window)
# (hint: use the nrow() and ncol() and/or the dim() R functions)

# Rows = 234, Columns = 11
# Code and console output:
# > data(mpg)
# > View(mpg)
# > nrow(mpg)
# [1] 234
# > ncol(mpg)
# [1] 11

# The following commands to ggplot create a basic plot of the
# highway fuel efficiency vs. engine size (displacement in L) for vehicles in the dataset,
# i.e. x = displ and y = hwy; run this code and look at the plot
# (click zoom in the plots window to make it bigger)
##################
ggplot(data = mpg) +
  geom_point(mapping = aes(x = displ, y = hwy))
##################

# Question 2. Modify and run the ggplot code to make each class of vehicle a different colour
ggplot(data = mpg) +
  geom_point(mapping = aes(x = displ, y = hwy, color = class))

# Question 3. Further modify and run the code to use a different shape to plot vehicles
# according to whether the vehicle is front, rear or 4 wheel drive (drv)
ggplot(data = mpg) +
  geom_point(mapping = aes(x = displ, y = hwy, color = class, shape = drv))

# Question 4. Further modify and run the code to make the size of each point on the plot proportional
# to the number of cylinders the vehicle's engine has (cyl)
ggplot(data = mpg) +
  geom_point(mapping = aes(x = displ, y = hwy, color = class, shape = drv, size = cyl))

# Question 5. Modify the code to add a suitable title of your choice to your plot
ggplot(data = mpg) +
  geom_point(mapping = aes(x = displ, y = hwy, color = class, shape = drv, size = cyl)) +
  ggtitle("Fuel Efficiency and Engine Size")
/Empringham_EPI5143 Winter2020 Quiz 3.R
no_license
EPI5143/quiz-3-briannaempringham
R
false
false
2,650
r
BootAfterBootT.PI <- function(x, p, h, nboot, prob)
{
  # set.seed(12345)
  n <- nrow(x)
  B <- OLS.ART(x, p, h, prob)
  BBC <- BootstrapT(x, p, h, 200)
  BBCB <- BootstrapTB(x, p, h, 200)

  bb <- BBCB$coef
  eb <- sqrt((n - p) / ((n - p) - length(bb))) * BBCB$resid
  bias <- B$coef - BBC$coef
  ef <- sqrt((n - p) / ((n - p) - length(bb))) * BBC$resid

  fore <- matrix(NA, nrow = nboot, ncol = h)
  for (i in 1:nboot)
  {
    index <- as.integer(runif(n - p, min = 1, max = nrow(eb)))
    es <- eb[index, 1]
    xs <- ysbT(x, bb, es)
    bs <- LSMT(xs, p)$coef
    bsc <- bs - bias
    bsc <- adjust(bs, bsc, p)
    if (sum(bsc) != sum(bs)) bsc[(p + 1):(p + 2), ] <- RE.LSMT(xs, p, bsc)
    fore[i, ] <- ART.ForeB(xs, bsc, h, ef, length(bs) - 2)
  }

  Interval <- matrix(NA, nrow = h, ncol = length(prob), dimnames = list(1:h, prob))
  for (i in 1:h) Interval[i, ] <- quantile(fore[, i], probs = prob)
  return(list(PI = Interval, forecast = BBC$forecast))
}
/BootPR/R/BootAfterBootT.PI.R
no_license
ingted/R-Examples
R
false
false
948
r
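BootAfterBootT.PI() above leans on helpers from the same package (OLS.ART, BootstrapT, BootstrapTB, ysbT, LSMT, adjust, RE.LSMT, ART.ForeB). A hypothetical call, assuming those helpers are available via the BootPR package this file comes from, could look like this; the data choice and argument values are illustrative only.

# Bootstrap-after-bootstrap prediction intervals for a univariate series
# supplied as a one-column matrix (illustrative values; assumes BootPR is loaded).
library(BootPR)
y <- as.matrix(log(AirPassengers))
out <- BootAfterBootT.PI(y, p = 2, h = 8, nboot = 500, prob = c(0.025, 0.5, 0.975))
out$PI        # one row of interval endpoints per forecast horizon
out$forecast  # the point forecasts carried over from the BootstrapT() step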
## Code assumes energy data has been downloaded and unzipped
## from https://d396qusza40orc.cloudfront.net/exdata%2Fdata%2Fhousehold_power_consumption.zip

## Set working directory and read in data
## Values are separated by ";"
setwd("C:/Users/eayres/Desktop")
df <- read.table("household_power_consumption.txt", header=TRUE, sep=";")
col <- names(df)

## Replace df with times starting on 1 Feb 2007. Include the following 2880 rows (i.e., 2 days). Reapply column names
df <- read.table("household_power_consumption.txt", sep=";", na.strings=c("NA","?"), skip=66637, nrows=2880)
colnames(df) <- c(col)

## Add single column containing date and time and convert to date/time format
df$DatTim = paste(df$Date, df$Time, sep=" ")
df$DatTim <- strptime(df$DatTim, "%d/%m/%Y %H:%M:%S")

## Assign global active power and date/time to y/x
x <- df$DatTim
y <- df$Global_active_power

## Create line plot of global active power time series
png("C:/Users/eayres/Documents/GitHub/datasciencecoursera/Data Science Specialization/04_Exploratory Data Analysis/CourseProject1/Code and Plots/plot2.png", width=480, height=480, units = "px")
par(mar=c(4,4,1,1))
plot(x, y, ylab = "Global Active Power (kilowatts)", xlab = "", type = "l")
dev.off()
/Data Science Specialization/04_Exploratory Data Analysis/CourseProject1/Code and Plots/plot2.R
no_license
edwardayres/datasciencecoursera
R
false
false
1,237
r
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. #' @importFrom utils object.size .serialize_arrow_r_metadata <- function(x) { assert_is(x, "list") # drop problems attributes (most likely from readr) x[["attributes"]][["problems"]] <- NULL out <- serialize(x, NULL, ascii = TRUE) # if the metadata is over 100 kB, compress if (option_compress_metadata() && object.size(out) > 100000) { out_comp <- serialize(memCompress(out, type = "gzip"), NULL, ascii = TRUE) # but ensure that the compression+serialization is effective. if (object.size(out) > object.size(out_comp)) out <- out_comp } rawToChar(out) } .unserialize_arrow_r_metadata <- function(x) { tryCatch({ out <- unserialize(charToRaw(x)) # if this is still raw, try decompressing if (is.raw(out)) { out <- unserialize(memDecompress(out, type = "gzip")) } out }, error = function(e) { warning("Invalid metadata$r", call. = FALSE) NULL }) } apply_arrow_r_metadata <- function(x, r_metadata) { tryCatch({ columns_metadata <- r_metadata$columns if (is.data.frame(x)) { if (length(names(x)) && !is.null(columns_metadata)) { for (name in intersect(names(columns_metadata), names(x))) { x[[name]] <- apply_arrow_r_metadata(x[[name]], columns_metadata[[name]]) } } } else if (is.list(x) && !inherits(x, "POSIXlt") && !is.null(columns_metadata)) { x <- map2(x, columns_metadata, function(.x, .y) { apply_arrow_r_metadata(.x, .y) }) x } if (!is.null(r_metadata$attributes)) { attributes(x)[names(r_metadata$attributes)] <- r_metadata$attributes if (inherits(x, "POSIXlt")) { # We store POSIXlt as a StructArray, which is translated back to R # as a data.frame, but while data frames have a row.names = c(NA, nrow(x)) # attribute, POSIXlt does not, so since this is now no longer an object # of class data.frame, remove the extraneous attribute attr(x, "row.names") <- NULL } } }, error = function(e) { warning("Invalid metadata$r", call. 
= FALSE) }) x } arrow_attributes <- function(x, only_top_level = FALSE) { att <- attributes(x) removed_attributes <- character() if (identical(class(x), c("tbl_df", "tbl", "data.frame"))) { removed_attributes <- c("class", "row.names", "names") } else if (inherits(x, "data.frame")) { removed_attributes <- c("row.names", "names") } else if (inherits(x, "factor")) { removed_attributes <- c("class", "levels") } else if (inherits(x, "integer64") || inherits(x, "Date")) { removed_attributes <- c("class") } else if (inherits(x, "POSIXct")) { removed_attributes <- c("class", "tzone") } else if (inherits(x, "hms") || inherits(x, "difftime")) { removed_attributes <- c("class", "units") } att <- att[setdiff(names(att), removed_attributes)] if (isTRUE(only_top_level)) { return(att) } if (is.data.frame(x)) { columns <- map(x, arrow_attributes) out <- if (length(att) || !all(map_lgl(columns, is.null))) { list(attributes = att, columns = columns) } return(out) } columns <- NULL if (is.list(x) && !inherits(x, "POSIXlt")) { # for list columns, we also keep attributes of each # element in columns columns <- map(x, arrow_attributes) if (all(map_lgl(columns, is.null))) { columns <- NULL } } if (length(att) || !is.null(columns)) { list(attributes = att, columns = columns) } else { NULL } }
/r/R/metadata.R
permissive
Sebastiaan-Alvarez-Rodriguez/arrow
R
false
false
4,279
r
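The two serialization helpers above are internal to the arrow package. A round-trip sketch follows; the ::: access assumes they remain unexported, and the example metadata list is made up for illustration.

# Round-trip the R metadata helpers defined above (illustrative only).
library(arrow)
meta <- list(attributes = list(class = c("tbl_df", "tbl", "data.frame")),
             columns = NULL)
s <- arrow:::.serialize_arrow_r_metadata(meta)              # ASCII-serialized string
identical(arrow:::.unserialize_arrow_r_metadata(s), meta)   # should be TRUE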
setwd("../../GSE9348_GSE10961/Data") # Importing data into R library(affy) affybatch = ReadAffy() affybatch # Probe summarization, Backgraound correction eset = rma(affybatch , normalize = F) eset expression.matrix = exprs(eset) dim(expression.matrix) pData(phenoData(eset)) # Removing rows containing more than 50% absent probsets in each group Absent.probes = mas5calls.AffyBatch(affybatch) Absent.probes Absent.probes = exprs(Absent.probes)#large matrix of A/M/P probesets dim(Absent.probes) index.absent.probs = c() for(i in 1:length(Absent.probes[,1])){ if(sum(Absent.probes[i,1:18] == "A") > 9 | sum(Absent.probes[i,19:39] == "A") > 10 | sum(Absent.probes[i,40:51] == "A") > 6){ index.absent.probs = c(index.absent.probs,i) } } expression.matrix = expression.matrix[-index.absent.probs,] pData(featureData(eset)) = pData(featureData(eset))[-index.absent.probs,] dim(expression.matrix) # Altering sample names samples = paste0(substr(colnames(expression.matrix) , 1 , 10) , c(rep("_M" , 18) , rep("_P" , 21) , rep("_N" , 12))) colnames(expression.matrix) = samples dim(expression.matrix) # PCA plot to recognize biased samples based on eigenvector 1 and eigenvector 2 library(ggplot2) pc = prcomp(expression.matrix) pcr = pc$rotation pcr = as.data.frame(pcr) pcr$sample = rownames(pcr) pcr$groups=c(rep("_M" , 18) , rep("_P" , 21) , rep("_N" , 12)) pcr$groups=factor(pcr$groups , levels = c("_M" , "_N" , "_P")) ggplot(pcr , aes(PC1 , PC2 , label = sample , colour = groups)) + geom_text( size = 4) + xlim(-0.153,-0.127) + geom_label(label.size = 0.25) + theme_linedraw() + theme(axis.title=element_text(size=18 , face = "bold") , axis.text=element_text(size=14 , colour = "black") , title = element_text(size=18) , legend.text = element_text(size = 14),legend.title = element_text(size = 18), legend.background = element_rect(color = "steelblue", linetype = "solid"), legend.key = element_rect(fill = NULL, color = "black") ) # Recognizing and removing biased samples based on hierarchical clustering and Number-SD method () l = list() index = list(1:18 , 19:39 , 40:51) out.samples = list() for(i in 1:3){ mat = expression.matrix[ , index[[i]]] IAC=cor(mat,use="p")#1st round hist(IAC,sub=paste("Mean=",format(mean(IAC[upper.tri(IAC)]),digits=3))) library(cluster) cluster1=hclust(as.dist(1-IAC),method="average") plot(cluster1,cex=0.7,labels=dimnames(mat)[[2]]) meanIAC=apply(IAC,2,mean) sdCorr=sd(meanIAC) numbersd=(meanIAC-mean(meanIAC))/sdCorr plot(numbersd) abline(h=-2) sdout=-2 outliers=dimnames(mat)[[2]][numbersd<sdout] outliers mat = mat[,numbersd>sdout] dim(mat) sample.outliers = outliers while(length(outliers) != 0){ IAC=cor(mat,use="p") hist(IAC,sub=paste("Mean=",format(mean(IAC[upper.tri(IAC)]),digits=3))) cluster1=hclust(as.dist(1-IAC),method="average") plot(cluster1,cex=0.7,labels=dimnames(mat)[[2]]) meanIAC=apply(IAC,2,mean) sdCorr=sd(meanIAC) numbersd=(meanIAC-mean(meanIAC))/sdCorr plot(numbersd) abline(h=-2) sdout=-2 outliers=dimnames(mat)[[2]][numbersd<sdout] sample.outliers = c(sample.outliers,outliers) mat=mat[,numbersd>sdout] dim(mat) } out.samples[[i]] = sample.outliers l[[i]] = mat } expression.matrix = do.call("cbind" , l) dim(expression.matrix) out.samples = unlist(out.samples) out.samples write.table(out.samples , "out.samples.txt" , sep = "\t" , col.names = F , row.names = F , quote = F) unlockBinding("exprs",assayData(eset)) assayData(eset)$exprs = expression.matrix eset ################### ## Normalization ## ################### # Normalization using quantile method library(preprocessCore) 
normalized.expression.matrix = normalize.quantiles(expression.matrix) dimnames(normalized.expression.matrix) = dimnames(expression.matrix) unlockBinding("exprs",assayData(eset)) assayData(eset)$exprs = normalized.expression.matrix eset boxplot(expression.matrix , pch = "." , las=3) boxplot(normalized.expression.matrix , pch = "." , las=3) ########################################## ## Filtering and many to many problems ### ########################################## # Removing low variant genes # Select one probeset with the largest IQR to be representative of other probesets mapped to the same gene symbole sds = apply(normalized.expression.matrix , 1 , sd) hist(sds, breaks=100, col="mistyrose", xlab="standard deviation" ) abline(v=quantile(sds)[3], col="blue", lwd=3, lty=2) annotation(eset) # "hgu133plus2" library(genefilter) library(hgu133plus2.db) feset = nsFilter(eset, remove.dupEntrez=T, var.cutof = quantile(sds)[3] ) filtered.eset = feset$eset filtered.eset N.F.expression.matrix = exprs(filtered.eset) dim(N.F.expression.matrix) ############################################ ##### Differentially Expressed Genes ####### ############################################ colnames(expression.matrix) library(limma) matrix.expression = N.F.expression.matrix colnames(matrix.expression) = c(rep("M" , length(grep("_M" ,colnames(expression.matrix)))) , rep("P" , length(grep("_P" ,colnames(expression.matrix)))) , rep("N" , length(grep("_N" ,colnames(expression.matrix))))) colnames(matrix.expression) gr = factor(colnames(matrix.expression) , levels = c("M" , "P" , "N")) design = model.matrix(~0 + gr) colnames(design) = c("M", "P", "N") lm.fit = lmFit(matrix.expression, design) ### Annotation ### library(annotate) probnames = rownames(matrix.expression) gene.symbol = getSYMBOL(probnames, "hgu133plus2") mc = makeContrasts(M-P , levels = design) c.fit = contrasts.fit(lm.fit, mc) eb = eBayes(c.fit) Table = topTable(eb, adjust.method = "BH", sort.by = "logFC", genelist = gene.symbol , number = Inf) write.table(Table , "MvsP_DEGs.txt" , sep = "\t") mc = makeContrasts(M-N , levels = design) c.fit = contrasts.fit(lm.fit, mc) eb = eBayes(c.fit) Table = topTable(eb, adjust.method = "BH", sort.by = "logFC", genelist = gene.symbol , number = Inf) write.table(Table , "MvsN_DEGs.txt" , sep = "\t") mc = makeContrasts(P-N , levels = design) c.fit = contrasts.fit(lm.fit, mc) eb = eBayes(c.fit) Table = topTable(eb, adjust.method = "BH", sort.by = "logFC", genelist = gene.symbol , number = Inf) write.table(Table , "PvsN_DEGs.txt" , sep = "\t") ##################### ### Quality Plot #### ##################### annotation(eset) library(genefilter) library(hgu133plus2.db) library(annotate) feset = nsFilter(eset, remove.dupEntrez=T, var.cutof = 10^-20) filtered.eset = feset$eset filtered.eset N.F.expression.matrix = exprs(filtered.eset) dim(N.F.expression.matrix) # Most upregulated genes p1 = rownames(Table[which(Table$logFC > 6) , ]) p1 # Most downregulated genes p2 = rownames(Table[which(Table$logFC < - 4) , ]) p2 probnames = c(p2,p1) gene.symbol = getSYMBOL(probnames, "hgu133plus2") p = unname(gene.symbol) p probnames = rownames(N.F.expression.matrix) gene.symbol = getSYMBOL(probnames, "hgu133plus2.db") rownames(N.F.expression.matrix) = gene.symbol # Housekeeping genes HG = c("ACTB" , "GAPDH" , "TBP" , "RPLP0") HG = HG[HG %in% gene.symbol] HG p = N.F.expression.matrix[c(HG,p),] # 4 HK genes, 2 down-regulated genes, 8 up-regulated genes d = data.frame(apply(p[,1:18] , 1 , mean) , apply(p[,19:37] , 1 , mean)) # Between 
metastatic and primary d = data.frame(d , Genes = c(rep("Houskeeping" , 4) , rep("Down-Regulated" , 2), rep("Up-Regulated" , 8)) , stringsAsFactors = F) d[,3] = as.factor(d[,3]) Genes = d$Genes d$size = 1 colnames(d) = c("One" , "Two" , "Genes" , "size") d$NAME = rownames(d) library(ggplot2) library(ggrepel) g = ggplot(d, aes(One , Two)) g + geom_point(aes(color = Genes , size = 0.1)) + labs(title = "") + ylab("Primary") + xlab("Metastatic") + ylim(2,14) + xlim(2,14) + theme(axis.title=element_text(size=15 , face = "bold") , axis.text=element_text(size=16 , colour = "black") , plot.title = element_text(hjust = 0.5) , title = element_text(size=16) , legend.text = element_text(size = 12),legend.title = element_text(size = 18)) + scale_size_continuous(guide = F) + guides(colour = guide_legend(override.aes = list(size=2, stroke=2))) + geom_label_repel(aes(label = NAME), box.padding = 0.5, point.padding = 0.5, segment.color = 'grey50')
/Array_GPL570_Analysis.R
no_license
mehranpiran/Meta-Analysis
R
false
false
8,663
r
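The script above writes out its IAC / Number-SD outlier loop separately for each sample group. The same rule (drop arrays whose mean inter-array correlation falls more than 2 SDs below the group mean, then repeat until none remain) can be expressed as one helper; this is an editorial sketch, not part of the original analysis.

# Sketch of the Number-SD rule used above; mirrors the script's keep/drop conditions.
remove_iac_outliers <- function(mat, sdout = -2) {
  dropped <- character(0)
  repeat {
    IAC <- cor(mat, use = "p")
    meanIAC <- apply(IAC, 2, mean)
    numbersd <- (meanIAC - mean(meanIAC)) / sd(meanIAC)
    out <- colnames(mat)[numbersd < sdout]
    if (length(out) == 0) break
    dropped <- c(dropped, out)
    mat <- mat[, numbersd > sdout, drop = FALSE]
  }
  list(matrix = mat, outliers = dropped)
}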
setwd("../../GSE9348_GSE10961/Data") # Importing data into R library(affy) affybatch = ReadAffy() affybatch # Probe summarization, Backgraound correction eset = rma(affybatch , normalize = F) eset expression.matrix = exprs(eset) dim(expression.matrix) pData(phenoData(eset)) # Removing rows containing more than 50% absent probsets in each group Absent.probes = mas5calls.AffyBatch(affybatch) Absent.probes Absent.probes = exprs(Absent.probes)#large matrix of A/M/P probesets dim(Absent.probes) index.absent.probs = c() for(i in 1:length(Absent.probes[,1])){ if(sum(Absent.probes[i,1:18] == "A") > 9 | sum(Absent.probes[i,19:39] == "A") > 10 | sum(Absent.probes[i,40:51] == "A") > 6){ index.absent.probs = c(index.absent.probs,i) } } expression.matrix = expression.matrix[-index.absent.probs,] pData(featureData(eset)) = pData(featureData(eset))[-index.absent.probs,] dim(expression.matrix) # Altering sample names samples = paste0(substr(colnames(expression.matrix) , 1 , 10) , c(rep("_M" , 18) , rep("_P" , 21) , rep("_N" , 12))) colnames(expression.matrix) = samples dim(expression.matrix) # PCA plot to recognize biased samples based on eigenvector 1 and eigenvector 2 library(ggplot2) pc = prcomp(expression.matrix) pcr = pc$rotation pcr = as.data.frame(pcr) pcr$sample = rownames(pcr) pcr$groups=c(rep("_M" , 18) , rep("_P" , 21) , rep("_N" , 12)) pcr$groups=factor(pcr$groups , levels = c("_M" , "_N" , "_P")) ggplot(pcr , aes(PC1 , PC2 , label = sample , colour = groups)) + geom_text( size = 4) + xlim(-0.153,-0.127) + geom_label(label.size = 0.25) + theme_linedraw() + theme(axis.title=element_text(size=18 , face = "bold") , axis.text=element_text(size=14 , colour = "black") , title = element_text(size=18) , legend.text = element_text(size = 14),legend.title = element_text(size = 18), legend.background = element_rect(color = "steelblue", linetype = "solid"), legend.key = element_rect(fill = NULL, color = "black") ) # Recognizing and removing biased samples based on hierarchical clustering and Number-SD method () l = list() index = list(1:18 , 19:39 , 40:51) out.samples = list() for(i in 1:3){ mat = expression.matrix[ , index[[i]]] IAC=cor(mat,use="p")#1st round hist(IAC,sub=paste("Mean=",format(mean(IAC[upper.tri(IAC)]),digits=3))) library(cluster) cluster1=hclust(as.dist(1-IAC),method="average") plot(cluster1,cex=0.7,labels=dimnames(mat)[[2]]) meanIAC=apply(IAC,2,mean) sdCorr=sd(meanIAC) numbersd=(meanIAC-mean(meanIAC))/sdCorr plot(numbersd) abline(h=-2) sdout=-2 outliers=dimnames(mat)[[2]][numbersd<sdout] outliers mat = mat[,numbersd>sdout] dim(mat) sample.outliers = outliers while(length(outliers) != 0){ IAC=cor(mat,use="p") hist(IAC,sub=paste("Mean=",format(mean(IAC[upper.tri(IAC)]),digits=3))) cluster1=hclust(as.dist(1-IAC),method="average") plot(cluster1,cex=0.7,labels=dimnames(mat)[[2]]) meanIAC=apply(IAC,2,mean) sdCorr=sd(meanIAC) numbersd=(meanIAC-mean(meanIAC))/sdCorr plot(numbersd) abline(h=-2) sdout=-2 outliers=dimnames(mat)[[2]][numbersd<sdout] sample.outliers = c(sample.outliers,outliers) mat=mat[,numbersd>sdout] dim(mat) } out.samples[[i]] = sample.outliers l[[i]] = mat } expression.matrix = do.call("cbind" , l) dim(expression.matrix) out.samples = unlist(out.samples) out.samples write.table(out.samples , "out.samples.txt" , sep = "\t" , col.names = F , row.names = F , quote = F) unlockBinding("exprs",assayData(eset)) assayData(eset)$exprs = expression.matrix eset ################### ## Normalization ## ################### # Normalization using quantile method library(preprocessCore) 
normalized.expression.matrix = normalize.quantiles(expression.matrix) dimnames(normalized.expression.matrix) = dimnames(expression.matrix) unlockBinding("exprs",assayData(eset)) assayData(eset)$exprs = normalized.expression.matrix eset boxplot(expression.matrix , pch = "." , las=3) boxplot(normalized.expression.matrix , pch = "." , las=3) ########################################## ## Filtering and many to many problems ### ########################################## # Removing low variant genes # Select one probeset with the largest IQR to be representative of other probesets mapped to the same gene symbole sds = apply(normalized.expression.matrix , 1 , sd) hist(sds, breaks=100, col="mistyrose", xlab="standard deviation" ) abline(v=quantile(sds)[3], col="blue", lwd=3, lty=2) annotation(eset) # "hgu133plus2" library(genefilter) library(hgu133plus2.db) feset = nsFilter(eset, remove.dupEntrez=T, var.cutof = quantile(sds)[3] ) filtered.eset = feset$eset filtered.eset N.F.expression.matrix = exprs(filtered.eset) dim(N.F.expression.matrix) ############################################ ##### Differentially Expressed Genes ####### ############################################ colnames(expression.matrix) library(limma) matrix.expression = N.F.expression.matrix colnames(matrix.expression) = c(rep("M" , length(grep("_M" ,colnames(expression.matrix)))) , rep("P" , length(grep("_P" ,colnames(expression.matrix)))) , rep("N" , length(grep("_N" ,colnames(expression.matrix))))) colnames(matrix.expression) gr = factor(colnames(matrix.expression) , levels = c("M" , "P" , "N")) design = model.matrix(~0 + gr) colnames(design) = c("M", "P", "N") lm.fit = lmFit(matrix.expression, design) ### Annotation ### library(annotate) probnames = rownames(matrix.expression) gene.symbol = getSYMBOL(probnames, "hgu133plus2") mc = makeContrasts(M-P , levels = design) c.fit = contrasts.fit(lm.fit, mc) eb = eBayes(c.fit) Table = topTable(eb, adjust.method = "BH", sort.by = "logFC", genelist = gene.symbol , number = Inf) write.table(Table , "MvsP_DEGs.txt" , sep = "\t") mc = makeContrasts(M-N , levels = design) c.fit = contrasts.fit(lm.fit, mc) eb = eBayes(c.fit) Table = topTable(eb, adjust.method = "BH", sort.by = "logFC", genelist = gene.symbol , number = Inf) write.table(Table , "MvsN_DEGs.txt" , sep = "\t") mc = makeContrasts(P-N , levels = design) c.fit = contrasts.fit(lm.fit, mc) eb = eBayes(c.fit) Table = topTable(eb, adjust.method = "BH", sort.by = "logFC", genelist = gene.symbol , number = Inf) write.table(Table , "PvsN_DEGs.txt" , sep = "\t") ##################### ### Quality Plot #### ##################### annotation(eset) library(genefilter) library(hgu133plus2.db) library(annotate) feset = nsFilter(eset, remove.dupEntrez=T, var.cutof = 10^-20) filtered.eset = feset$eset filtered.eset N.F.expression.matrix = exprs(filtered.eset) dim(N.F.expression.matrix) # Most upregulated genes p1 = rownames(Table[which(Table$logFC > 6) , ]) p1 # Most downregulated genes p2 = rownames(Table[which(Table$logFC < - 4) , ]) p2 probnames = c(p2,p1) gene.symbol = getSYMBOL(probnames, "hgu133plus2") p = unname(gene.symbol) p probnames = rownames(N.F.expression.matrix) gene.symbol = getSYMBOL(probnames, "hgu133plus2.db") rownames(N.F.expression.matrix) = gene.symbol # Housekeeping genes HG = c("ACTB" , "GAPDH" , "TBP" , "RPLP0") HG = HG[HG %in% gene.symbol] HG p = N.F.expression.matrix[c(HG,p),] # 4 HK genes, 2 down-regulated genes, 8 up-regulated genes d = data.frame(apply(p[,1:18] , 1 , mean) , apply(p[,19:37] , 1 , mean)) # Between 
metastatic and primary d = data.frame(d , Genes = c(rep("Houskeeping" , 4) , rep("Down-Regulated" , 2), rep("Up-Regulated" , 8)) , stringsAsFactors = F) d[,3] = as.factor(d[,3]) Genes = d$Genes d$size = 1 colnames(d) = c("One" , "Two" , "Genes" , "size") d$NAME = rownames(d) library(ggplot2) library(ggrepel) g = ggplot(d, aes(One , Two)) g + geom_point(aes(color = Genes , size = 0.1)) + labs(title = "") + ylab("Primary") + xlab("Metastatic") + ylim(2,14) + xlim(2,14) + theme(axis.title=element_text(size=15 , face = "bold") , axis.text=element_text(size=16 , colour = "black") , plot.title = element_text(hjust = 0.5) , title = element_text(size=16) , legend.text = element_text(size = 12),legend.title = element_text(size = 18)) + scale_size_continuous(guide = F) + guides(colour = guide_legend(override.aes = list(size=2, stroke=2))) + geom_label_repel(aes(label = NAME), box.padding = 0.5, point.padding = 0.5, segment.color = 'grey50')
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/metaphone.R
\name{metaphone}
\alias{metaphone}
\title{Generate phonetic versions of strings with Metaphone}
\usage{
metaphone(word, maxCodeLen = 10L, clean = TRUE)
}
\arguments{
\item{word}{string or vector of strings to encode}

\item{maxCodeLen}{maximum length of the resulting encodings, in characters}

\item{clean}{if \code{TRUE}, return \code{NA} for unknown alphabetical characters}
}
\value{
a character vector containing the metaphones of \code{word}, or an NA if the
\code{word} value is NA
}
\description{
The function \code{metaphone} phonetically encodes the given string using the
metaphone algorithm.
}
\details{
There is some discrepancy with respect to how the metaphone algorithm actually
works. For instance, there is a version in the Java Apache Commons library.
There is a version provided within PHP. These do not provide the same results.
On the questionable theory that the implementation in PHP is probably more
well known, this code should match it in output.

This implementation is based on a Javascript implementation which is itself
based on the PHP internal implementation.

The variable \code{maxCodeLen} is the limit on how long the returned
metaphone should be.

The \code{metaphone} algorithm is only defined for inputs over the standard
English alphabet, \emph{i.e.}, "A-Z.". Non-alphabetical characters are removed
from the string in a locale-dependent fashion. This strips spaces, hyphens,
and numbers. Other letters, such as "Ü," may be permissible in the current
locale but are unknown to \code{metaphone}. For inputs outside of its known
range, the output is undefined and \code{NA} is returned and a \code{warning}
is thrown. If \code{clean} is \code{FALSE}, \code{metaphone} attempts to
process the strings. The default is \code{TRUE}.
}
\examples{
metaphone("wheel")
metaphone(c("school", "benji"))
}
\references{
James P. Howard, II, "Phonetic Spelling Algorithm Implementations for R,"
\emph{Journal of Statistical Software}, vol. 25, no. 8, (2020), p. 1--21,
<10.18637/jss.v095.i08>.
}
\seealso{
Other phonics:
\code{\link{caverphone}()},
\code{\link{cologne}()},
\code{\link{lein}()},
\code{\link{mra_encode}()},
\code{\link{nysiis}()},
\code{\link{onca}()},
\code{\link{phonex}()},
\code{\link{phonics}()},
\code{\link{rogerroot}()},
\code{\link{soundex}()},
\code{\link{statcan}()}
}
\concept{phonics}
/man/metaphone.Rd
no_license
cran/phonics
R
false
true
2,442
rd
y <- rnorm(1000, mean = 300, sd = 40)
y <- sort(y, decreasing = FALSE)

# Equal Width
interval <- 10
interval_gap <- (max(y) - min(y)) / interval
binned_data <- array(1000)
multiplier <- 1
k <- 1
m <- 1
temp <- array(1000)

for (i in 1:1000) {
  # cat(y[i], min(y) + (multiplier - 1) * interval_gap, min(y) + multiplier * interval_gap, " ")
  if ((y[i] <= min(y) + multiplier * interval_gap) && (y[i] >= min(y) + (multiplier - 1) * interval_gap)) {
    temp[k] <- y[i]
    # print(temp[k])
    k <- k + 1
  } else {
    # print("hey")
    for (x in 1:k - 1) {
      binned_data[m] <- mean(temp[1:k - 1], na.rm = TRUE)
      # print(binned_data1[m])
      m <- m + 1
    }
    multiplier <- multiplier + 1
    temp <- array(1000)
    k <- 1
    temp[k] <- y[i]
  }
}

for (x in m:1000) {
  binned_data[x] <- mean(temp[1:k - 1], na.rm = TRUE)
  # print(binned_data1[m])
}

x <- array(1000)
for (i in 1:1000) {
  x[i] <- i
}

plot(x, y, col = "blue", cex = 0.3, pch = 100)
lines(x, binned_data, col = "red", cex = 0.5, pch = 100)
/histogram_analysis.R
no_license
rs07/Data-Mining
R
false
false
939
r
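The loops above implement equal-width binning followed by smoothing by bin means. Essentially the same smoothing can be written with cut() and ave() (bin edges differ only by the small padding cut() adds); this compact form assumes y and interval as defined in the script.

# Compact equivalent of the equal-width bin-mean smoothing above.
bins <- cut(y, breaks = interval, include.lowest = TRUE)   # 10 equal-width bins
binned_alt <- ave(y, bins, FUN = mean)                     # replace each value by its bin mean
plot(seq_along(y), y, col = "blue", cex = 0.3)
lines(seq_along(y), binned_alt, col = "red")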
\name{boa.chain.del}
\alias{boa.chain.del}
\title{Delete MCMC Sequences}
\description{
  Delete MCMC sequences from the session list of sequences.
}
\usage{
boa.chain.del(lnames, pnames)
}
\arguments{
  \item{lnames}{Character vector giving the names of the MCMC sequences in
    the session list of sequences to be deleted.  If omitted, no sequences
    are deleted.}
  \item{pnames}{Character vector giving the names of the parameters in the
    MCMC sequences to be deleted.  If omitted, no parameters are deleted.}
}
\section{Side Effects}{
  The specified MCMC sequences are deleted from the session lists of
  sequences.
}
\author{Brian J. Smith}
\keyword{utilities}
File: /man/boa.chain.del.Rd (repo: cran/boa, language: R, extension: .rd, license: none, size: 691 bytes, vendored: no, generated: no)
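A minimal usage sketch for the help page above, under stated assumptions: the boa package is installed, one or more MCMC sequences have already been imported into the session, and the chain and parameter names used here ("line1", "alpha", "beta") are hypothetical placeholders.

# Minimal sketch (assumes the boa package is installed; names are placeholders)
library(boa)
boa.init()
# ... import one or more MCMC sequences into the session first ...
boa.chain.del(lnames = "line1")             # drop one imported sequence
boa.chain.del(pnames = c("alpha", "beta"))  # drop parameters from the sequences
boa.quit()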
# Lab2_fa1 -- factor analysis on a small constructed data set
v1 <- c(1,1,1,1,1,1,1,1,1,1,3,3,3,3,3,4,5,6)
v2 <- c(1,2,1,1,1,1,2,1,2,1,3,4,3,3,3,4,6,5)
v3 <- c(3,3,3,3,3,1,1,1,1,1,1,1,1,1,1,5,4,6)
v4 <- c(3,3,4,3,3,1,1,2,1,1,1,1,2,1,1,5,6,4)
v5 <- c(1,1,1,1,1,3,3,3,3,3,1,1,1,1,1,6,4,5)
v6 <- c(1,1,1,2,1,3,3,3,4,3,1,1,1,2,1,6,5,4)
m1 <- cbind(v1, v2, v3, v4, v5, v6)
cor(m1)
factanal(m1, factors = 3)                        # varimax is the default
factanal(m1, factors = 3, rotation = "promax")
# The following shows the g factor as PC1
prcomp(m1)                                       # signs may depend on platform
## formula interface
factanal(~ v1 + v2 + v3 + v4 + v5 + v6, factors = 3, scores = "Bartlett")$scores

# Lab2_fa2 -- factor analysis of the AthleticsData example
install.packages("Hmisc")
library(Hmisc)
AthleticsData <- spss.get("AthleticsData.sav")   # requires AthleticsData.sav in the working directory
attach(AthleticsData)
# names(AthleticsData)
cor(AthleticsData)
prcomp(AthleticsData)
fit.2 <- factanal(AthleticsData, factors = 2, rotation = "varimax")
print(fit.2)
fit.3 <- factanal(AthleticsData, factors = 3, rotation = "varimax")
print(fit.3)
print(fit.3, digits = 2, cutoff = .2, sort = TRUE)
install.packages("GPArotation")
library(GPArotation)
library(psych)                                   # principal() is provided by psych
fit <- principal(AthleticsData, nfactors = 3, rotate = "varimax")
fit   # print results
# do not go past here unless you can find fa.promax.R
fit.3.promax <- update(fit.3, rotation = "promax")
colnames(fit.3.promax$loadings) <- c("Endurance", "Strength", "Hand-Eye")
print(loadings(fit.3.promax), digits = 2, cutoff = .2, sort = TRUE)
AssignFactorNames <- function(fit.object, names) {
  colnames(fit.object$promax.loadings)  <- names
  colnames(fit.object$varimax.loadings) <- names
  rownames(fit.object$corr.factors)     <- names
  colnames(fit.object$corr.factors)     <- names
}
factor.names <- c("Endurance", "Strength", "Hand-Eye")   # names applied by AssignFactorNames()
fit.3.Enzmann <- fa.promax(AthleticsData, factors = 3, digits = 2, sort = TRUE)  # from fa.promax.R
AssignFactorNames(fit.3.Enzmann, factor.names)
fit.3.Enzmann

# Lab2_fa4 -- scoring the EPI personality inventory
# (the epi data and epi.dictionary ship with the psych/psychTools packages)
data(epi)
epi.keys <- make.keys(epi, list(
  E = c(1, 3, -5, 8, 10, 13, -15, 17, -20, 22, 25, 27, -29, -32, -34, -37,
        39, -41, 44, 46, 49, -51, 53, 56),
  N = c(2, 4, 7, 9, 11, 14, 16, 19, 21, 23, 26, 28, 31, 33, 35, 38, 40, 43,
        45, 47, 50, 52, 55, 57),
  L = c(6, -12, -18, 24, -30, 36, -42, -48, -54),
  I = c(1, 3, -5, 8, 10, 13, 22, 39, -41),
  S = c(-11, -15, 17, -20, 25, 27, -29, -32, -37, 44, 46, -51, 53)))
scores <- scoreItems(epi.keys, epi)
N <- epi[abs(epi.keys[, "N"]) > 0]
E <- epi[abs(epi.keys[, "E"]) > 0]
fa.lookup(epi.keys[, 1:3], epi.dictionary)   # show the items and keying information

# lab1_svm1 -- simulate two Gaussian classes and split into train/test
n <- 150            # number of data points
p <- 2              # dimension
sigma <- 1          # standard deviation of the distribution
meanpos <- 0        # centre of the distribution of positive examples
meanneg <- 3        # centre of the distribution of negative examples
npos <- round(n/2)  # number of positive examples
nneg <- n - npos    # number of negative examples
# Generate the positive and negative examples
xpos <- matrix(rnorm(npos*p, mean = meanpos, sd = sigma), npos, p)
xneg <- matrix(rnorm(nneg*p, mean = meanneg, sd = sigma), npos, p)
x <- rbind(xpos, xneg)
# Generate the labels
y <- matrix(c(rep(1, npos), rep(-1, nneg)))
# Visualize the data
plot(x, col = ifelse(y > 0, 1, 2))
legend("topleft", c('Positive', 'Negative'), col = seq(2), pch = 1, text.col = seq(2))

ntrain <- round(n*0.8)       # number of training examples
tindex <- sample(n, ntrain)  # indices of training samples
xtrain <- x[tindex, ]
xtest  <- x[-tindex, ]
ytrain <- y[tindex]
ytest  <- y[-tindex]
istrain <- rep(0, n)
istrain[tindex] <- 1
# Visualize
plot(x, col = ifelse(y > 0, 1, 2), pch = ifelse(istrain == 1, 1, 2))
legend("topleft", c('Positive Train', 'Positive Test', 'Negative Train', 'Negative Test'),
       col = c(1, 1, 2, 2), pch = c(1, 2, 1, 2), text.col = c(1, 1, 2, 2))

# lab1_svm2 -- SVM regression on the Ozone data
library(e1071)
library(rpart)
data(Ozone, package = "mlbench")   # the Ozone data set comes from mlbench
index <- 1:nrow(Ozone)
testindex <- sample(index, trunc(length(index)/3))
testset  <- na.omit(Ozone[testindex, -3])
trainset <- na.omit(Ozone[-testindex, -3])
svm.model <- svm(V4 ~ ., data = trainset, cost = 1000, gamma = 0.0001)
svm.pred <- predict(svm.model, testset[, -3])
crossprod(svm.pred - testset[, 3]) / length(testindex)

# lab1_svm3 -- e1071::svm() examples on iris
data(iris)
attach(iris)
## classification mode
# default with factor response:
model <- svm(Species ~ ., data = iris)
# alternatively the traditional interface:
x <- subset(iris, select = -Species)
y <- Species
model <- svm(x, y)
print(model)
summary(model)
# test with train data
pred <- predict(model, x)
# (same as:) pred <- fitted(model)
# Check accuracy:
table(pred, y)
# compute decision values and probabilities:
pred <- predict(model, x, decision.values = TRUE)
attr(pred, "decision.values")[1:4, ]
# visualize (classes by color, SV by crosses):
plot(cmdscale(dist(iris[, -5])),
     col = as.integer(iris[, 5]),
     pch = c("o", "+")[1:150 %in% model$index + 1])

## try regression mode on two dimensions
# create data
x <- seq(0.1, 5, by = 0.05)
y <- log(x) + rnorm(x, sd = 0.2)
# estimate model and predict input values
m <- svm(x, y)
new <- predict(m, x)
# visualize
plot(x, y)
points(x, log(x), col = 2)
points(x, new, col = 4)

## density-estimation
# create 2-dim. normal with rho=0:
X <- data.frame(a = rnorm(1000), b = rnorm(1000))
attach(X)
# traditional way:
m <- svm(X, gamma = 0.1)
# formula interface:
m <- svm(~., data = X, gamma = 0.1)
# or:
m <- svm(~ a + b, gamma = 0.1)
# test:
newdata <- data.frame(a = c(0, 4), b = c(0, 4))
predict(m, newdata)
# visualize:
plot(X, col = 1:1000 %in% m$index + 1, xlim = c(-5, 5), ylim = c(-5, 5))
points(newdata, pch = "+", col = 2, cex = 5)

# weights: (example not particularly sensible)
i2 <- iris
levels(i2$Species)[3] <- "versicolor"
summary(i2$Species)
wts <- 100 / table(i2$Species)
wts
m <- svm(Species ~ ., data = i2, class.weights = wts)

# lab1_svm4 -- kernlab::ksvm() on the promotergene data
library(kernlab)   # ksvm() and the promotergene/spam/reuters data come from kernlab
## example using the promotergene data set
data(promotergene)
## create test and training set
ind <- sample(1:dim(promotergene)[1], 20)
genetrain <- promotergene[-ind, ]
genetest  <- promotergene[ind, ]
## train a support vector machine
gene <- ksvm(Class ~ ., data = genetrain, kernel = "rbfdot",
             kpar = list(sigma = 0.015), C = 70, cross = 4, prob.model = TRUE)
## predict gene type probabilities on the test set
genetype <- predict(gene, genetest, type = "probabilities")

# lab1_svm5 -- effect of the cost parameter on a linear SVM
library(e1071)
m1 <- matrix(c(0, 0, 0, 1, 1, 2, 1, 2, 3, 2, 3, 3, 0, 1, 2, 3, 0, 1, 2, 3,
               1, 2, 3, 2, 3, 3, 0, 0, 0, 1, 1, 2, 4, 4, 4, 4, 0, 1, 2, 3,
               1, 1, 1, 1, 1, 1, -1, -1, -1, -1, -1, -1, 1, 1, 1, 1, 1, 1, -1, -1),
             ncol = 3)
Y <- m1[, 3]
X <- m1[, 1:2]
df <- data.frame(X, Y)
par(mfcol = c(4, 2))
for (cost in c(1e-3, 1e-2, 1e-1, 1e0, 1e+1, 1e+2, 1e+3)) {
  # cost <- 1
  model.svm <- svm(Y ~ ., data = df, type = "C-classification", kernel = "linear",
                   cost = cost, scale = FALSE)
  # print(model.svm$SV)
  plot(x = 0, ylim = c(0, 5), xlim = c(0, 3),
       main = paste("cost: ", cost, "#SV: ", nrow(model.svm$SV)))
  points(m1[m1[, 3] > 0, 1], m1[m1[, 3] > 0, 2], pch = 3, col = "green")
  points(m1[m1[, 3] < 0, 1], m1[m1[, 3] < 0, 2], pch = 4, col = "blue")
  points(model.svm$SV[, 1], model.svm$SV[, 2], pch = 18, col = "red")
}

# lab1_svm6 -- spam filtering with ksvm()
data(spam)
## create test and training set
index <- sample(1:dim(spam)[1])
spamtrain <- spam[index[1:floor(dim(spam)[1]/2)], ]
spamtest  <- spam[index[((ceiling(dim(spam)[1]/2)) + 1):dim(spam)[1]], ]
## train a support vector machine
filter <- ksvm(type ~ ., data = spamtrain, kernel = "rbfdot",
               kpar = list(sigma = 0.05), C = 5, cross = 3)
filter
## predict mail type on the test set
mailtype <- predict(filter, spamtest[, -58])
## Check results
table(mailtype, spamtest[, 58])

# lab1_svm7
## Another example with the famous iris data
data(iris)
## Create a kernel function using the built-in rbfdot function
rbf <- rbfdot(sigma = 0.1)
rbf
## train a bound constraint support vector machine
irismodel <- ksvm(Species ~ ., data = iris, type = "C-bsvc",
                  kernel = rbf, C = 10, prob.model = TRUE)
irismodel
## get fitted values
fitted(irismodel)
## Test on the training set with probabilities as output
predict(irismodel, iris[, -5], type = "probabilities")

# lab1_svm8
## Demo of the plot function
x <- rbind(matrix(rnorm(120), , 2), matrix(rnorm(120, mean = 3), , 2))
y <- matrix(c(rep(1, 60), rep(-1, 60)))
svp <- ksvm(x, y, type = "C-svc")
plot(svp, data = x)
### Use kernelMatrix
K <- as.kernelMatrix(crossprod(t(x)))
svp2 <- ksvm(K, y, type = "C-svc")
svp2

# lab1_svm9
# test data
xtest <- rbind(matrix(rnorm(20), , 2), matrix(rnorm(20, mean = 3), , 2))
# test kernel matrix i.e. inner/kernel product of test data with Support Vectors
Ktest <- as.kernelMatrix(crossprod(t(xtest), t(x[SVindex(svp2), ])))
predict(svp2, Ktest)

#### Use custom kernel
k <- function(x, y) { (sum(x*y) + 1) * exp(-0.001 * sum((x - y)^2)) }
class(k) <- "kernel"
data(promotergene)
## train svm using custom kernel
gene <- ksvm(Class ~ ., data = promotergene[c(1:20, 80:100), ], kernel = k,
             C = 5, cross = 5)
gene

#### Use text with string kernels
data(reuters)
is(reuters)
tsv <- ksvm(reuters, rlabels, kernel = "stringdot",
            kpar = list(length = 5), cross = 3, C = 10)
tsv

## regression
# create data
x <- seq(-20, 20, 0.1)
y <- sin(x)/x + rnorm(401, sd = 0.03)
# train support vector machine
regm <- ksvm(x, y, epsilon = 0.01, kpar = list(sigma = 16), cross = 3)
plot(x, y, type = "l")
lines(x, predict(regm, x), col = "red")
File: /LAB_7/lab7.R (repo: JacobYeah/DataAnalytics2020_Wang_Chen, language: R, extension: .r, license: none, size: 9,330 bytes, vendored: no, generated: no)
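The cost loop in lab1_svm5 above scans one hyperparameter by hand. For comparison, here is a minimal sketch of the same idea using e1071's built-in cross-validated grid search; it assumes e1071 is installed, and the gamma/cost grids are arbitrary illustration values, not the lab's settings.

# Minimal sketch: grid-search gamma and cost for an RBF SVM on iris
library(e1071)
data(iris)
set.seed(1)
tuned <- tune.svm(Species ~ ., data = iris,
                  gamma = 10^(-3:0),     # candidate kernel widths (arbitrary grid)
                  cost  = 10^(0:3))      # candidate regularization strengths (arbitrary grid)
summary(tuned)             # cross-validated error for each grid point
best <- tuned$best.model   # model refit with the best parameters
table(predict(best, iris), iris$Species)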
# Copyright 2020 Cloudera Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#' @include compat.R
NULL

quote_column_in_expression <- function(expr, column) {
  if (deparse(expr) == column) {
    expr <- as.symbol(deparse(expr))
  }
  if (length(expr) == 1) {
    return(expr)
  } else {
    return(as.call(lapply(expr, quote_column_in_expression, column)))
  }
}

quote_columns_in_expressions <- function(exprs, columns = NULL) {
  lapply(exprs, function(expr) {
    for (column in columns) {
      expr <- quote_column_in_expression(expr, column)
    }
    expr
  })
}

quote_full_expression <- function(expr) {
  if (is.call(expr)) {
    as.symbol(deparse(expr))
  } else {
    expr
  }
}

quote_full_expressions <- function(exprs) {
  lapply(exprs, quote_full_expression)
}
File: /R/quote.R (repo: HarshalRepo/tidyquery, language: R, extension: .r, license: permissive, size: 1,275 bytes, vendored: no, generated: no)
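A small interactive sketch of what these helpers do may be useful. It assumes the functions above have been sourced into the session (for example via devtools::load_all() in the package source tree), since they are internal and not exported; the expressions and column names below are illustrative only.

# Sketch: a sub-expression whose deparsed text matches a known column name
# is collapsed into a single symbol, so it is treated as a column reference
# rather than as a function call.
e <- quote(SUM(x) / n)
quote_column_in_expression(e, "SUM(x)")
# the SUM(x) sub-call becomes the backtick-quoted symbol `SUM(x)`

# Same idea over a list of expressions and a vector of column names:
quote_columns_in_expressions(list(e, quote(log(SUM(x)))), columns = c("SUM(x)"))

# Collapse a whole call into one symbol; non-calls are returned unchanged:
quote_full_expression(quote(a + b))
quote_full_expression(quote(a))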
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/source_EM.R
\name{star_CI}
\alias{star_CI}
\title{Compute asymptotic confidence intervals for STAR linear regression}
\usage{
star_CI(
  y,
  X,
  j,
  level = 0.95,
  include_plot = TRUE,
  transformation = "np",
  y_max = Inf,
  sd_init = 10,
  tol = 10^-10,
  max_iters = 1000
)
}
\arguments{
\item{y}{\code{n x 1} vector of observed counts}

\item{X}{\code{n x p} design matrix of predictors}

\item{j}{the scalar column index for the desired confidence interval}

\item{level}{confidence level; default is 0.95}

\item{include_plot}{logical; if TRUE, include a plot of the profile likelihood}

\item{transformation}{transformation to use for the latent data; must be one of
\itemize{
\item "identity" (identity transformation)
\item "log" (log transformation)
\item "sqrt" (square root transformation)
\item "np" (nonparametric transformation estimated from empirical CDF)
\item "pois" (transformation for moment-matched marginal Poisson CDF)
\item "neg-bin" (transformation for moment-matched marginal Negative Binomial CDF)
\item "box-cox" (box-cox transformation with learned parameter)
}}

\item{y_max}{a fixed and known upper bound for all observations; default is \code{Inf}}

\item{sd_init}{add random noise for initialization scaled by \code{sd_init}
times the Gaussian MLE standard deviation}

\item{tol}{tolerance for stopping the EM algorithm; default is 10^-10}

\item{max_iters}{maximum number of EM iterations before stopping; default is 1000}
}
\value{
the upper and lower endpoints of the confidence interval
}
\description{
For a linear regression model within the STAR framework, compute
(asymptotic) confidence intervals for a regression coefficient of interest.
Confidence intervals are computed by inverting the likelihood ratio test and
profiling the log-likelihood.
}
\note{
The design matrix \code{X} should include an intercept.
}
\examples{
# Simulate data with count-valued response y:
sim_dat = simulate_nb_lm(n = 100, p = 2)
y = sim_dat$y; X = sim_dat$X

# Select a transformation:
transformation = 'np'

# Confidence interval for the intercept:
ci_beta_0 = star_CI(y = y, X = X, j = 1,
                    transformation = transformation)
ci_beta_0

# Confidence interval for the slope:
ci_beta_1 = star_CI(y = y, X = X, j = 2,
                    transformation = transformation)
ci_beta_1
}
File: /man/star_CI.Rd (repo: drkowal/rSTAR, language: R, extension: .rd, license: none, size: 2,444 bytes, vendored: no, generated: yes)
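Since star_CI is documented above as returning the interval for a single column index j, a short sketch of looping it over every coefficient may be helpful. It assumes the package providing star_CI() and simulate_nb_lm() is installed and uses only the arguments shown in the usage block above.

# Minimal sketch: 95% intervals for every regression coefficient
sim_dat <- simulate_nb_lm(n = 100, p = 2)
y <- sim_dat$y; X <- sim_dat$X

ci_all <- t(sapply(seq_len(ncol(X)), function(j) {
  star_CI(y = y, X = X, j = j,
          transformation = "np",
          include_plot = FALSE)   # suppress the profile-likelihood plots
}))
if (!is.null(colnames(X))) rownames(ci_all) <- colnames(X)
ci_all   # one (lower, upper) pair per column of the design matrix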
library(dplyr)
library(httr)
library(jsonlite)

# Helper: query the UMD COVID symptom survey API, flatten the JSON response
# into a data frame, and save it as CSV
fetch_umd <- function(url, out_csv) {
  request  <- GET(url = url)
  response <- content(request, as = "text", encoding = "UTF-8")
  result   <- fromJSON(response, flatten = TRUE) %>% data.frame()
  write.csv(result, out_csv)
  result
}

# get available dates
datedata <- fetch_umd("https://covidmap.umd.edu/api/datesavail?country=Spain",
                      "spain_available_dates.csv")

# download country-level and regional data for both indicators (covid, flu)
# and both types (smoothed, daily) over the same date range
base      <- "https://covidmap.umd.edu/api/resources"
daterange <- "20200423-20200924"
spain_data <- list()

for (indicator in c("covid", "flu")) {
  for (type in c("smoothed", "daily")) {
    suffix <- if (type == "smoothed") "smooth" else "daily"

    # country-level data, e.g. ./data/spain_data_covid_smooth.csv
    name <- paste0("spain_data_", indicator, "_", suffix)
    spain_data[[name]] <- fetch_umd(
      paste0(base, "?indicator=", indicator, "&type=", type,
             "&country=Spain&daterange=", daterange),
      paste0("./data/", name, ".csv"))

    ###### Regional data #############
    # e.g. ./data/spain_regional_data_covid_smooth.csv
    name <- paste0("spain_regional_data_", indicator, "_", suffix)
    spain_data[[name]] <- fetch_umd(
      paste0(base, "?indicator=", indicator, "&type=", type,
             "&country=Spain&region=all&daterange=", daterange),
      paste0("./data/", name, ".csv"))
  }
}
File: /data/backup/umd_covid_request.R (repo: GCGImdea/fbdatachallenge, language: R, extension: .r, license: none, size: 3,646 bytes, vendored: no, generated: no)
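Once the script above has run, a quick plot of one of the saved files can serve as a sanity check. The sketch below is illustrative only: the column names "survey_date" and "percent_cli" are assumptions about the API response and should be verified against names() of the downloaded CSV before use.

# Minimal sketch: read one saved file back and plot the national smoothed signal
library(ggplot2)
spain_covid <- read.csv("./data/spain_data_covid_smooth.csv")
# names(spain_covid)  # check the actual column names returned by the API
spain_covid$survey_date <- as.Date(as.character(spain_covid$survey_date), "%Y%m%d")
ggplot(spain_covid, aes(survey_date, percent_cli)) +
  geom_line() +
  labs(x = "Survey date", y = "% reporting COVID-like illness (smoothed)")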
library(shiny)
library(shinythemes)   # shinytheme() comes from the shinythemes package

# Define the UI for the Shiny app
shinyUI(
  navbarPage("Changing Public Sentiment During Pandemic", theme = shinytheme("flatly"),

    tabPanel('Introduction', icon = icon('book-open'),
      fluidRow(
        mainPanel(
          fluidRow(
            column(3),
            column(9,
              h2("Changing Public Sentiment During Pandemic"),
              h4("COVID-19 has had a large and inescapable impact on all of our lives,
                 both personally and professionally. As a result of that upheaval, many
                 people's viewpoints of the world around them have changed during that
                 time. As a healthcare worker, I have felt that change directly. I have
                 gone from a little-known role to one that is now recognized and
                 celebrated. My goal was to study societal viewpoints toward other
                 aspects of life and how those have changed over time during the
                 COVID-19 pandemic. Because we are a digital society, and because
                 lockdowns, quarantines, and stay-at-home orders have forced us to live
                 remotely, I decided to look at public opinion through the lens of
                 social media. I will look at public sentiment as measured by the
                 content of Twitter posts because they are short, quick ways for people
                 to express what they are thinking, feeling, and experiencing."),
              br(),
              h2("Methods and Workflow"),
              h4("Over 70 million tweets were downloaded spanning from December 1, 2019
                 to April 30, 2020. The tweets came from 3 separate datasets. One
                 contained only tweet IDs rather than the full tweet data for privacy
                 reasons; these tweets had to be hydrated using twarc with a Twitter
                 API, which was done in chunks due to the size of the data files. The
                 tweet IDs were chunked in a random order to ensure that each chunk
                 contained a representative distribution of tweets across the date
                 range. The tweets were then filtered down to only those in English,
                 and a column for the week was added in addition to the date the tweet
                 was created. I then performed an initial entity analysis of the tweet
                 text using spaCy to determine which named entities appear most often
                 in the dataset. Based on this analysis, I identified 21 entities
                 covering people, locations, organizations, and sports (because we can
                 all use a little enjoyment) that encompass many entities that are of
                 high importance and named often in the tweet set. The tweets were then
                 grouped into entity datasets based upon whether the text contains
                 words associated with the entity. The entity-grouped tweets were then
                 analyzed using VADER to determine the overall sentiment of each tweet.
                 The tweets were finally grouped by the week they were created and the
                 score distribution was determined."),
              br(),
              h2("Data Used"),
              h4("www.trackmyhashtag.com/data/COVID-19.zip,
                 https://zenodo.org/record/3738018#.XtJGWi2ZPyK,
                 https://www.kaggle.com/smid80/coronavirus-covid19-tweets")
            )
          )
        )
      )),

    tabPanel("Twitter Analysis", icon = icon("twitter"),
      fluidRow(
        sidebarPanel(
          selectizeInput("entity", h3('Select Entity:'),
            choices = list(
              Person = c(`Donald Trump` = 'trump', `Barack Obama` = 'obama',
                         `Boris Johnson` = 'boris', `Nancy Pelosi` = 'pelosi',
                         `Anthony Fauci` = 'fauci'),
              Location = c(`New York` = 'nyc', `United States` = 'usa',
                           `China` = 'china', `Italy` = 'italy', `Spain` = 'spain'),
              Organization = c(`European Union` = 'eu', `White House` = 'whitehouse',
                               `Centers for Disease Control` = 'cdc',
                               `World Health Organization` = 'who',
                               `Congress` = 'congress',
                               `National Health Service` = 'nhs'),
              Sports = c(`Football` = 'football', `Soccer` = 'soccer',
                         `Hockey` = 'hockey', `Basketball` = 'basketball',
                         `Baseball` = 'baseball')
            )),
          radioButtons("radio", label = h3("Tweets to include:"),
                       choices = list("Originals only" = '_orig',
                                      "Include Retweets" = ''))
        ),
        mainPanel(
          h4("Sentiment"),
          imageOutput("sentiment")
        )
      ),
      fluidRow(
        sidebarPanel(
          uiOutput("special_dates")
        ),
        mainPanel(
          fluidRow(
            column(9,
              h4("Tweet Volume"),
              imageOutput("volume")
            )
          )
        )
      )
    )
  )
)
File: /twitter_sent_app_static/ui.R (repo: cravenre/Twitter_sentiment_in_pandemic, language: R, extension: .r, license: none, size: 6,747 bytes, vendored: no, generated: no)
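The UI above references three outputs ("sentiment", "volume", "special_dates") and two inputs ("entity", "radio") that must be wired up in a server.R. The real app's server logic is not part of this file, so the following is only a hypothetical minimal counterpart; the "plots" directory, the image file-name pattern, and the renderUI contents are assumptions for illustration.

# Hypothetical minimal server.R counterpart for the UI above (sketch only)
library(shiny)

shinyServer(function(input, output) {

  # Pre-rendered sentiment plot for the chosen entity and retweet option
  output$sentiment <- renderImage({
    list(src = file.path("plots",
                         paste0(input$entity, input$radio, "_sentiment.png")),
         contentType = "image/png")
  }, deleteFile = FALSE)

  # Matching tweet-volume plot
  output$volume <- renderImage({
    list(src = file.path("plots",
                         paste0(input$entity, input$radio, "_volume.png")),
         contentType = "image/png")
  }, deleteFile = FALSE)

  # Notable dates relevant to the selected entity
  output$special_dates <- renderUI({
    h4("Key dates for the selected entity would be listed here.")
  })
})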