| content (large_string, lengths 0–6.46M) | path (large_string, lengths 3–331) | license_type (large_string, 2 classes) | repo_name (large_string, lengths 5–125) | language (large_string, 1 class) | is_vendor (bool, 2 classes) | is_generated (bool, 2 classes) | length_bytes (int64, 4–6.46M) | extension (large_string, 75 classes) | text (string, lengths 0–6.46M) |
---|---|---|---|---|---|---|---|---|---|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ec2_operations.R
\name{ec2_import_snapshot}
\alias{ec2_import_snapshot}
\title{Imports a disk into an EBS snapshot}
\usage{
ec2_import_snapshot(ClientData, ClientToken, Description, DiskContainer,
DryRun, Encrypted, KmsKeyId, RoleName, TagSpecifications)
}
\arguments{
\item{ClientData}{The client-specific data.}
\item{ClientToken}{Token to enable idempotency for VM import requests.}
\item{Description}{The description string for the import snapshot task.}
\item{DiskContainer}{Information about the disk container.}
\item{DryRun}{Checks whether you have the required permissions for the action, without
actually making the request, and provides an error response. If you have
the required permissions, the error response is \code{DryRunOperation}.
Otherwise, it is \code{UnauthorizedOperation}.}
\item{Encrypted}{Specifies whether the destination snapshot of the imported image should
be encrypted. The default CMK for EBS is used unless you specify a
non-default AWS Key Management Service (AWS KMS) CMK using \code{KmsKeyId}.
For more information, see \href{https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html}{Amazon EBS Encryption}
in the \emph{Amazon Elastic Compute Cloud User Guide}.}
\item{KmsKeyId}{An identifier for the symmetric AWS Key Management Service (AWS KMS)
customer master key (CMK) to use when creating the encrypted snapshot.
This parameter is only required if you want to use a non-default CMK; if
this parameter is not specified, the default CMK for EBS is used. If a
\code{KmsKeyId} is specified, the \code{Encrypted} flag must also be set.
The CMK identifier may be provided in any of the following formats:
\itemize{
\item Key ID
\item Key alias. The alias ARN contains the \code{arn:aws:kms} namespace,
followed by the Region of the CMK, the AWS account ID of the CMK
owner, the \code{alias} namespace, and then the CMK alias. For example,
arn:aws:kms:\emph{us-east-1}:\emph{012345678910}:alias/\emph{ExampleAlias}.
\item ARN using key ID. The ID ARN contains the \code{arn:aws:kms} namespace,
followed by the Region of the CMK, the AWS account ID of the CMK
owner, the \code{key} namespace, and then the CMK ID. For example,
arn:aws:kms:\emph{us-east-1}:\emph{012345678910}:key/\emph{abcd1234-a123-456a-a12b-a123b4cd56ef}.
\item ARN using key alias. The alias ARN contains the \code{arn:aws:kms}
namespace, followed by the Region of the CMK, the AWS account ID of
the CMK owner, the \code{alias} namespace, and then the CMK alias. For
example,
arn:aws:kms:\emph{us-east-1}:\emph{012345678910}:alias/\emph{ExampleAlias}.
}
AWS parses \code{KmsKeyId} asynchronously, meaning that the action you call
may appear to complete even though you provided an invalid identifier.
This action will eventually report failure.
The specified CMK must exist in the Region that the snapshot is being
copied to.
Amazon EBS does not support asymmetric CMKs.}
\item{RoleName}{The name of the role to use when not using the default role, 'vmimport'.}
\item{TagSpecifications}{The tags to apply to the snapshot being imported.}
}
\value{
A list with the following syntax:\preformatted{list(
Description = "string",
ImportTaskId = "string",
SnapshotTaskDetail = list(
Description = "string",
DiskImageSize = 123.0,
Encrypted = TRUE|FALSE,
Format = "string",
KmsKeyId = "string",
Progress = "string",
SnapshotId = "string",
Status = "string",
StatusMessage = "string",
Url = "string",
UserBucket = list(
S3Bucket = "string",
S3Key = "string"
)
),
Tags = list(
list(
Key = "string",
Value = "string"
)
)
)
}
}
\description{
Imports a disk into an EBS snapshot.
}
\section{Request syntax}{
\preformatted{svc$import_snapshot(
ClientData = list(
Comment = "string",
UploadEnd = as.POSIXct(
"2015-01-01"
),
UploadSize = 123.0,
UploadStart = as.POSIXct(
"2015-01-01"
)
),
ClientToken = "string",
Description = "string",
DiskContainer = list(
Description = "string",
Format = "string",
Url = "string",
UserBucket = list(
S3Bucket = "string",
S3Key = "string"
)
),
DryRun = TRUE|FALSE,
Encrypted = TRUE|FALSE,
KmsKeyId = "string",
RoleName = "string",
TagSpecifications = list(
list(
ResourceType = "client-vpn-endpoint"|"customer-gateway"|"dedicated-host"|"dhcp-options"|"egress-only-internet-gateway"|"elastic-ip"|"elastic-gpu"|"export-image-task"|"export-instance-task"|"fleet"|"fpga-image"|"host-reservation"|"image"|"import-image-task"|"import-snapshot-task"|"instance"|"internet-gateway"|"key-pair"|"launch-template"|"local-gateway-route-table-vpc-association"|"natgateway"|"network-acl"|"network-interface"|"network-insights-analysis"|"network-insights-path"|"placement-group"|"reserved-instances"|"route-table"|"security-group"|"snapshot"|"spot-fleet-request"|"spot-instances-request"|"subnet"|"traffic-mirror-filter"|"traffic-mirror-session"|"traffic-mirror-target"|"transit-gateway"|"transit-gateway-attachment"|"transit-gateway-connect-peer"|"transit-gateway-multicast-domain"|"transit-gateway-route-table"|"volume"|"vpc"|"vpc-peering-connection"|"vpn-connection"|"vpn-gateway"|"vpc-flow-log",
Tags = list(
list(
Key = "string",
Value = "string"
)
)
)
)
)
}
}
\keyword{internal}
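% A minimal usage sketch, assuming the paws.compute package is installed and
% AWS credentials are configured for the session; the bucket, key, and role
% names below are placeholders.
\examples{
\dontrun{
svc <- paws.compute::ec2()
resp <- svc$import_snapshot(
  Description = "Example disk import",
  DiskContainer = list(
    Format = "RAW",
    UserBucket = list(
      S3Bucket = "example-import-bucket",
      S3Key = "images/example-disk.raw"
    )
  ),
  RoleName = "vmimport"
)
# Poll the task with svc$describe_import_snapshot_tasks() using this id.
resp$ImportTaskId
}
}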
|
/cran/paws.compute/man/ec2_import_snapshot.Rd
|
permissive
|
TWarczak/paws
|
R
| false | true | 5,448 |
rd
|
library(glmnet)
mydata = read.table("./TrainingSet/LassoBIC/bone.csv",head=T,sep=",")
x = as.matrix(mydata[,4:ncol(mydata)])
y = as.matrix(mydata[,1])
set.seed(123)
glm = cv.glmnet(x,y,nfolds=10,type.measure="mae",alpha=0.1,family="gaussian",standardize=FALSE)
sink('./Model/EN/Lasso/bone/bone_028.txt',append=TRUE)
print(glm$glmnet.fit)
sink()
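# A possible follow-up, not in the original script: inspect the cross-validated
# penalty and the coefficients selected at the MAE-minimising lambda.
print(glm$lambda.min)
print(coef(glm, s = "lambda.min"))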
|
/Model/EN/Lasso/bone/bone_028.R
|
no_license
|
leon1003/QSMART
|
R
| false | false | 345 |
r
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/shiny-recorder.R
\name{record_session}
\alias{record_session}
\title{Record a Session for Load Test}
\usage{
record_session(
target_app_url,
host = "127.0.0.1",
port = 8600,
output_file = "recording.log",
open_browser = TRUE,
connect_api_key = NULL
)
}
\arguments{
\item{target_app_url}{The URL of the deployed application.}
\item{host}{The host where the proxy will run. Usually localhost is used.}
\item{port}{The port for the reverse proxy. Default is 8600. Change this
default if port 8600 is used by another service.}
\item{output_file}{The name of the generated recording file.}
\item{open_browser}{Whether to open a browser on the proxy (default=\code{TRUE})
or not (\code{FALSE}).}
\item{connect_api_key}{An RStudio Connect api key. It may be useful to use
\code{Sys.getenv("CONNECT_API_KEY")}.}
}
\value{
Creates a recording file that can be used as input to the
\code{shinycannon} command-line load generation tool.
}
\description{
This function creates a \href{https://en.wikipedia.org/wiki/Reverse_proxy}{reverse proxy} at \verb{http://host:port}
(http://127.0.0.1:8600 by default) that intercepts and records activity
between your web browser and the Shiny application at \code{target_app_url}.
}
\details{
By default, after creating the reverse proxy, a web browser is opened
automatically. As you interact with the application in the web browser,
activity is written to the \code{output_file} (\code{recording.log} by default).
To shut down the reverse proxy and complete the recording, close the web
browser tab or window.
Recordings are used as input to the \code{shinycannon} command-line
load-generation tool which can be obtained from the \href{https://rstudio.github.io/shinyloadtest/index.html}{shinyloadtest documentation site}.
}
\section{\code{fileInput}/\code{DT}/\verb{HTTP POST} support}{
Shiny's \code{shiny::fileInput()} input for uploading files, the \code{DT} package,
and potentially other packages make HTTP POST requests to the target
application. Because POST requests can be large, they are not stored
directly in the recording file. Instead, new files adjacent to the
recording are created for each HTTP POST request intercepted.
The adjacent files are named after the recording with the pattern
\verb{<output_file>.post.<N>}, where \verb{<output_file>} is the chosen recording
file name and \verb{<N>} is the number of the request.
If present, these adjacent files must be kept alongside the recording file
when the recording is played back with the \code{shinycannon} tool.
}
\examples{
\dontrun{
record_session("https://example.com/your-shiny-app/")
}
}
\seealso{
\href{https://rstudio.github.io/shinyloadtest/}{\code{shinyloadtest} articles}
}
|
/man/record_session.Rd
|
no_license
|
rstudio/shinyloadtest
|
R
| false | true | 2,790 |
rd
|
#Load files into data structures
#Test Files
subject_test<-read.csv("./test/subject_test.txt", header=FALSE)
x_test<-read.csv("./test/x_test.txt", header=FALSE, sep="")
y_test<-read.csv("./test/y_test.txt", header=FALSE)
#Train Files
subject_train<-read.csv("./train/subject_train.txt", header=FALSE)
x_train<-read.csv("./train/x_train.txt", header=FALSE, sep="")
y_train<-read.csv("./train/y_train.txt", header=FALSE)
#Row merge train and test files
x<-rbind(x_test,x_train)
y<-rbind(y_test,y_train)
subject<-rbind(subject_test,subject_train)
#Treating Activity Labels
activity_labels<-read.csv("activity_labels.txt", header=FALSE, sep=" ")
#discard the number id and keep only the label
activity_labels<-activity_labels[,2]
#Treating Features
features<-read.csv("features.txt", header=FALSE, sep=" ")
#discard the number id and keep only the label
features<-features[,2]
#naming variables
names(x)<-features
names(y)<-"activity"
names(subject)<-"subject"
# Kill all measurements that are not mean or standard deviation.
x<-x[,grep("std|mean",features)]
#Replace activity numbers with activity names
#Loop over the rows of y and assign the corresponding label
i=1
while(i<=nrow(y)){
y[i,]<-as.character(activity_labels[as.numeric(y[i,])])
i<-i+1
}
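# A vectorized equivalent of the loop above, shown for reference only
# (running it *after* the loop would re-index already-converted labels):
# y$activity <- as.character(activity_labels[as.numeric(y$activity)])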
#combining all datasets
whole_data<-cbind(subject,y,x)
#extract summarized data
summarized_data<-aggregate(whole_data,by=list(whole_data$subject,whole_data$activity),FUN=mean)
#removing messy columns
drops <- c("subject","activity")
summarized_data<-summarized_data[ , !(names(summarized_data) %in% drops)]
|
/run_analysis.r
|
no_license
|
simonmerino/getting_and_cleaning_data
|
R
| false | false | 1,619 |
r
|
\name{ch_ews}
\alias{ch_ews}
\title{Description: Conditional Heteroskedasticity}
\usage{
ch_ews(timeseries, winsize = 10, alpha = 0.1, optim = TRUE, lags = 4,
logtransform = FALSE, interpolate = FALSE)
}
\arguments{
\item{timeseries}{a numeric vector of the observed
timeseries values or a numeric matrix where the first
column represents the time index and the second the
observed timeseries values. Use vectors/matrices with
headings.}
\item{winsize}{is length of the rolling window expressed
as percentage of the timeseries length (must be numeric
between 0 and 100). Default is 10\%.}
\item{alpha}{is the significance threshold (must be
numeric). Default is 0.1.}
\item{optim}{logical. If TRUE an autoregressive model is
fit to the data within the rolling window using AIC
optimization. Otherwise an autoregressive model of
specific order \code{lags} is selected.}
\item{lags}{is a parameter that determines the specific
order of an autoregressive model to fit the data. Default
is 4.}
\item{logtransform}{logical. If TRUE data are
logtransformed prior to analysis as log(X+1). Default is
FALSE.}
\item{interpolate}{logical. If TRUE linear interpolation
is applied to produce a timeseries of equal length as the
original. Default is FALSE (assumes there are no gaps in
the timeseries).}
}
\value{
\code{ch_ews} returns a matrix that contains:
\item{time}{the time index.}
\item{r.squared}{the R2 values of the regressed residuals.}
\item{critical.value}{the chi-square critical value based
on the desired \code{alpha} level for 1 degree of freedom
divided by the number of residuals used in the regression.}
\item{test.result}{logical. It indicates whether
conditional heteroskedasticity was significant.}
\item{ar.fit.order}{the order of the specified
autoregressive model- only informative if \code{optim}
FALSE was selected.}
In addition, \code{ch_ews} plots the original timeseries
and the R2 where the level of significance is also
indicated.
}
\description{
\code{ch_ews} is used to estimate changes in conditional
heteroskedasticity within rolling windows along a
timeseries
}
\details{
See the references below.
}
\examples{
data(foldbif)
out=ch_ews(foldbif, winsize=50, alpha=0.05, optim=TRUE, lags=4)
}
\author{
T. Cline, modified by V. Dakos
}
\references{
Seekell, D. A., et al (2011). 'Conditional
heteroscedasticity as a leading indicator of ecological
regime shifts.' \emph{American Naturalist} 178(4): 442-451
Dakos, V., et al (2012).'Methods for Detecting Early
Warnings of Critical Transitions in Time Series Illustrated
Using Simulated Ecological Data.' \emph{PLoS ONE} 7(7):
e41010. doi:10.1371/journal.pone.0041010
}
\seealso{
\code{\link{generic_ews}}; \code{\link{ddjnonparam_ews}};
\code{\link{bdstest_ews}}; \code{\link{sensitivity_ews}};
\code{\link{surrogates_ews}}; \code{\link{ch_ews}};
\code{movpotential_ews}; \code{livpotential_ews}
}
\keyword{early-warning}
|
/man/ch_ews.Rd
|
no_license
|
hjl2014/earlywarnings
|
R
| false | false | 2,957 |
rd
|
# SPDX-Copyright: Copyright (c) Capital One Services, LLC
# SPDX-License-Identifier: Apache-2.0
# Copyright 2017 Capital One Services, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
#
# You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed
# under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS
# OF ANY KIND, either express or implied.
#
# UNIT TESTS: Output Helper Functions
#
# Unit tests that look at the various helper functions in the output
# such as checking that something is not null and generating section headers.
#
library(testthat)
context('out_helperFunctions.R')
test_that("isNotNull", {
expect_that(isNotNull(NULL), is_false())
expect_that(isNotNull(1), is_true())
})
test_that("outputSectionHeader", {
# Little to do here - just check the header is what we expect
expect_equal(outputSectionHeader("Foo") , "\nFoo\n===\n")
})
|
/dataCompareR/tests/testthat/test_outHelperFunctions.R
|
permissive
|
Lextuga007/dataCompareR
|
R
| false | false | 1,115 |
r
|
### Plot comparing distributions of total damage for various uncertainty settings on a log-log scale
plotFigure5b <- function(res.unc, total.damage, ifPdf=TRUE, fileName="figures/UncertaintyLog.pdf")
{
total.mf.damage <- res.unc$mf
total.slr.damage <- res.unc$slr
total.dam.damage <- res.unc$dam
if(ifPdf) pdf(file=fileName, width=10, height=5, pointsize=12)
par(mex=0.75, mar=c(5,4,2,2)+0.1)
buckets <- seq(log(50),log(165000), by=0.1)
buckets <- exp(buckets)
my.hist <- hist(total.damage, breaks=buckets, plot=FALSE)
my.slr.hist <- hist(total.slr.damage, breaks=buckets, plot=FALSE)
my.mf.hist <- hist(total.mf.damage, breaks=buckets, plot=FALSE)
my.dam.hist <- hist(total.dam.damage, breaks=buckets, plot=FALSE)
plot(log(my.slr.hist$breaks[-1]), log(my.slr.hist$counts), type="h", col="#7D26CD",
main="", ylab="Log frequency", xlab="Total damage 2016-2100 (million NOK)",
lwd=2, axes=FALSE)
ticks <- c(50, 150, 500, 1500, 5000, 15000, 50000, 150000)
axis(1, at = log(ticks), labels=ticks)
axis(2)
box()
lines(log(my.hist$breaks[-1])+0.02, log(my.hist$counts), col="black", type="h", lwd=2)
lines(log(my.mf.hist$breaks[-1])+0.04, log(my.mf.hist$counts), col="orange", type="h", lwd=2)
lines(log(my.dam.hist$breaks[-1])+0.06, log(my.dam.hist$counts), col="#008B45", type="h", lwd=2)
abline(v=log(sum(res.unc$yearly.median)), col="gray50", lwd=2)
points(log(median(total.damage)), 7.8, col="black", pch=16)
points(log(median(total.slr.damage)),7.8, col="#7D26CD", pch=16)
points(log(median(total.dam.damage)),7.8, col="#008B45", pch=16)
points(log(sum(res.unc$yearly.median)),7.8, col="gray50", pch=16)
points(log(median(total.mf.damage)),7.8, col="orange", pch=16)
legend("topright",
legend=c("Full uncertainty","SLR uncertainty","Effect uncertainty",
"Damage uncertainty", "No uncertainty"),
col=c("black", "#7D26CD", "orange", "#008B45", "gray50"), lty=1, lwd=2)
if(ifPdf) dev.off()
}
|
/code/BergenDecisions/plotFigure5b.R
|
no_license
|
eSACP/SeaLevelDecisions
|
R
| false | false | 2,085 |
r
|
## Course Project 2 for Exploratory Data Analysis
## Plot 1
## This first line will likely take a few seconds. Be patient!
NEI <- readRDS("./FNEI_data/summarySCC_PM25.rds")
SCC <- readRDS("./FNEI_data/Source_Classification_Code.rds")
## Drawing Plot
## Have total emissions from PM2.5 decreased in the United States from
## 1999 to 2008? Using the base plotting system, make a plot showing the total
## PM2.5 emission from all sources for each of the years 1999, 2002, 2005, and 2008.
## Loading needed libraries
library(dplyr)
## Transforming the year column into a factor
NEI$year = factor(NEI$year)
## Getting the data
## Emission totals are large, so we report them in millions of tons
NEI_total <- group_by(NEI, year) %>%
summarise(total.Emissions.million.tons = sum(Emissions)/1000000)
## Drawing the plot
barplot(NEI_total$total.Emissions.million.tons,
main=expression("Total emissions from PM"[2.5]*" in the United States"),
xlab="Years",
ylab=expression("Amount of PM"[2.5]*" emitted, in million tons"),
names.arg=NEI_total$year,
col = "red")
# Making png file
dev.copy(png, file = "plot1.png")
dev.off()
|
/CourseProject2/Plot1.R
|
no_license
|
sagospe/ExploratoryDataAnalysis
|
R
| false | false | 1,164 |
r
|
##Matrix inversion is usually a costly computation and there may be some benefit to caching
##the inverse of a matrix rather than computing it repeatedly.
##The following functions cache the inverse of a matrix.
## This function creates a special "matrix" object that can cache its inverse
makeCacheMatrix <- function(x = matrix()) {
i <- NULL
set <- function(y){
x <<- y
i <<- NULL
}
get <- function() x
setinverse <- function(inverse) i <<- inverse
getinverse <- function() i
list (set=set, get=get, setinverse=setinverse, getinverse=getinverse)
}
##This function computes the inverse of the special "matrix" returned by makeCacheMatrix above.
##If the inverse has already been calculated (and the matrix has not changed), the function
##retrieves the inverse from the cache.
cacheSolve <- function(x, ...) {
i <- x$getinverse()
if(!is.null(i)){
message("getting cached data")
return(i)
}
data <- x$get()
i <- solve(data)
x$setinverse(i)
i
}
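## A minimal usage sketch (base R only): the first cacheSolve() call computes the
## inverse, the second returns the cached copy.
cm <- makeCacheMatrix(matrix(c(2, 0, 0, 4), nrow = 2))
cacheSolve(cm)   # computes and caches the inverse
cacheSolve(cm)   # prints "getting cached data" and reuses the stored inverse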
|
/cachematrix.R
|
no_license
|
SpotConlon/ProgrammingAssignment2
|
R
| false | false | 1,066 |
r
|
#' @title wiki_graph data
#'
#' @description wiki_graph: DataFrame containing three columns (v1, v2, w) and 18 entries.
#' @docType data
#' @format The \code{data.frame} contains 3 variables:
#' \describe{
#' \item{v1}{nodes}
#' \item{v2}{nodes}
#' \item{w}{weights between the nodes}
#' }
#'
#'
"wiki_graph"
|
/R/wiki_graph.r
|
no_license
|
senseiyukisan/732A94
|
R
| false | false | 315 |
r
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/gating-functions.R
\name{drawInterval}
\alias{drawInterval}
\title{Draw Interval(s) to Gate Flow Cytometry Populations.}
\usage{
drawInterval(fr, channels, alias = NULL, plot = TRUE, axis = "x",
labels = TRUE, ...)
}
\arguments{
\item{fr}{a \code{\link[flowCore:flowFrame-class]{flowFrame}} object
containing the flow cytometry data for plotting and gating.}
\item{channels}{vector of channel names to use for plotting, can be of length
1 for 1-D density histogram or length 2 for 2-D scatter plot.}
\item{alias}{the name(s) of the populations to be gated. If multiple
population names are supplied (e.g. \code{c("CD3", "CD4")}) multiple gates
will be returned. \code{alias} is \code{NULL} by default which will halt
the gating routine.}
\item{plot}{logical indicating whether the data should be plotted. This
feature allows for constructing gates of different types over existing
plots which may already contain a different gate type.}
\item{axis}{indicates whether the \code{"x"} or \code{"y"} axis should be
gated for 2-D interval gates.}
\item{labels}{logical indicating whether to include \code{\link{plotLabels}}
for the gated population(s), \code{TRUE} by default.}
\item{...}{additional arguments for \code{\link{plotCyto,flowFrame-method}}.}
}
\value{
a \code{\link[flowCore:filters-class]{filters}} list containing the
constructed \code{\link[flowCore:rectangleGate]{rectangleGate}}
object(s).
}
\description{
\code{drawInterval} constructs an interactive plotting window in which the user
selects the lower and upper bounds of a population (through mouse clicks); the
selection is converted into a
\code{\link[flowCore:rectangleGate]{rectangleGate}} object and stored
in a \code{\link[flowCore:filters-class]{filters}} list. Both 1-D and 2-D
interval gates are supported, for 2-D interval gates an additional argument
\code{axis} must be supplied to indicate which axis should be gated.
}
\seealso{
\code{\link{plotCyto1d,flowFrame-method}}
\code{\link{plotCyto2d,flowFrame-method}}
\code{\link{drawGate}}
}
\author{
Dillon Hammill (Dillon.Hammill@anu.edu.au)
}
\keyword{draw,}
\keyword{gating,}
\keyword{interval}
\keyword{manual,}
\keyword{openCyto,}
\keyword{rectangleGate,}
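% A minimal interactive sketch, assuming the flowCore package and its GvHD
% example data are available; the channel and population names are illustrative.
\examples{
\dontrun{
library(flowCore)
data(GvHD)
fr <- GvHD[[1]]
gates <- drawInterval(fr, channels = "FSC-H", alias = "Cells")
}
}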
|
/man/drawInterval.Rd
|
no_license
|
gfinak/cytoRSuite
|
R
| false | true | 2,262 |
rd
|
test_that("default options", {
withr::local_options(list(
gargle_oauth_cache = NULL,
gargle_oob_default = NULL,
gargle_oauth_email = NULL,
gargle_quiet = NULL
))
expect_identical(gargle_oauth_cache(), NA)
expect_false(gargle_oob_default())
expect_null(gargle_oauth_email())
expect_true(gargle_quiet())
})
test_that("gargle API key", {
key <- gargle_api_key()
expect_true(is_string(key))
})
|
/tests/testthat/test-assets.R
|
permissive
|
MarkEdmondson1234/gargle
|
R
| false | false | 429 |
r
|
## dependencies of the MTMM script
## ASREML library needs a valid license
library(lattice)
library(asreml)
library(msm)
library(nadiv)
## libraries for single GWAS
library(foreach)
library(iterators)
library(parallel)
# libraries for plotting
library(ggplot2)
library(dplyr)
#scripts to source. All scripts can be found in the GitHub folder. Be sure to set the right working directory
source('scripts/emma.r')
source('scripts/mtmm_estimates_as4.r')
source('scripts/plots_gwas.r')
source('scripts/plot_mtmm.r')
source('scripts/mtmm_cluster.r')
source('scripts/mtmm_part2.r')
source('scripts/gwas.r')
|
/scripts/prepare_mtmm.r
|
no_license
|
salarshaaf/MTMM
|
R
| false | false | 604 |
r
|
library(dplyr)
library(tidyr)
# Load LDAvis inputs
#' Note: data withheld because they contain third-party content
setwd("C:/Users/Sensonomic Admin/Dropbox/Oxford/DPhil/Deforestation review/Deforestation_messaging_analysis_GitHub/Deforestation_messaging_analysis/")
load("Data/mongabay_LDAVIS_inputs.Rdata")
#' @param theta matrix, with each row containing the probability distribution
#' over topics for a document, with as many rows as there are documents in the
#' corpus, and as many columns as there are topics in the model.
#' @param doc.length integer vector containing the number of tokens in each
#' document of the corpus.
# compute counts of tokens across K topics (length-K vector):
# (this determines the areas of the default topic circles when no term is
# highlighted)
topic.frequency <- colSums(theta * doc.length)
topic.proportion <- topic.frequency/sum(topic.frequency)
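# Toy illustration of the two lines above (made-up numbers, not project data):
# with 3 documents, 2 topics, and document lengths of 100, 200 and 50 tokens,
# the token-weighted topic frequencies come out as 185 and 165.
theta_toy <- matrix(c(0.8, 0.2,
                      0.5, 0.5,
                      0.1, 0.9), nrow = 3, byrow = TRUE)
doc.length_toy <- c(100, 200, 50)
colSums(theta_toy * doc.length_toy)                         # 185 165
colSums(theta_toy * doc.length_toy) / sum(doc.length_toy)   # 0.529 0.471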
#' @param phi matrix, with each row containing the distribution over terms
#' for a topic, with as many rows as there are topics in the model, and as
#' many columns as there are terms in the vocabulary.
# token counts for each term-topic combination (widths of red bars)
term.topic.frequency <- phi * topic.frequency
term.frequency <- colSums(term.topic.frequency)
# term-topic frequency table
tmp <- term.topic.frequency
# reorder topics by LDAvis order
load("Data/mongabay_LDAVIS_order_simple.Rdata")
tmp <- term.topic.frequency[LDAVis.order,]
# round down infrequent term occurrences so that we can send sparse
# data to the browser:
r <- row(tmp)[tmp >= 0.5]
c <- col(tmp)[tmp >= 0.5]
dd <- data.frame(Term = vocab[c], Topic = r, Freq = round(tmp[cbind(r, c)]),
stringsAsFactors = FALSE)
# Normalize token frequencies:
dd[, "Freq"] <- dd[, "Freq"]/term.frequency[match(dd[, "Term"], vocab)]
token.table <- dd[order(dd[, 1], dd[, 2]), ]
# verify term topic frequencies match LDAvis
# View(token.table[token.table$Term=="indonesia",])
# Load countries in order of deforestation
join <- read.table("join_table")
join <- join %>% arrange(desc(total_loss))
countries <- join$country
# Create country contexts table
countries_length <- length(countries)
countries_list <- list()
for (i in 1:countries_length) {
country_table <- token.table[token.table$Term==countries[i],]
countries_list[[i]] <- country_table
}
countries_topics <- do.call(rbind.data.frame,countries_list)
rownames(countries_topics) <- NULL
# Organize country context tables with dplyr.
# Sort topics for each country in descending
# order of their proportion.
# Format tables for manuscript
colnames(countries_topics) <- c("Country", "Topic", "Probability")
countries_topics$Country <- factor(countries_topics$Country, levels = unique(countries_topics$Country))
# Order countries by number of mentions in each source
countries_topics <- countries_topics %>% arrange(Country,desc(Probability))
#' Create summary tables showing the labels for each topic context
#' for outlier countries
# Add topic names to countries topics
mongabay_topic_names <- read.csv("Data/mongabay_topic_names.csv")
countries_topics_names <- left_join(countries_topics,mongabay_topic_names,by="Topic") %>%
select(-Label)
countries_topics_names_high_prob <- countries_topics_names[countries_topics_names$Probability>0.1,]
# Round the topic probabilities to two decimal places
countries_topics_names_high_prob$Probability <- round(countries_topics_names_high_prob$Probability,2)
# Write topic contexts for top countries with deforestation
countries_topics_top <- countries_topics_names_high_prob[countries_topics_names_high_prob$Country %in% unique(countries_topics_names_high_prob$Country)[1:10],]
# Load outliers from the country mentions versus deforestation regressions
load("Data/monga_outliers.Rdata")
countries_topics_top$UnderRepresented <- ifelse(countries_topics_top$Country %in% monga_outliers, "Yes", "No")
countries_topics_top <- countries_topics_top %>% arrange(Topic,UnderRepresented,Country)
countries_topics_top$Name <- factor(countries_topics_top$Name, levels = c(levels(countries_topics_top$Name),""))
countries_topics_top[duplicated(countries_topics_top$Name),c("Topic","Name")] <- ""
countries_topics_top <- countries_topics_top %>% select(Topic,Name,Country,Probability,UnderRepresented)
colnames(countries_topics_top)[5] <- "Under-represented"
# Write tables
write.csv(countries_topics_top,
"Manuscript_figures/mongabay_deforestation_top_contexts_simple.csv",
row.names = FALSE)
|
/Topic_models/topicmodels_mallet_monga_country_contexts_simple_120319_github.R
|
no_license
|
adamformica/Deforestation_messaging_analysis
|
R
| false | false | 4,542 |
r
|
updateHFReturns = function(){
  HF_RETURNS <- readxl::read_excel("C:/Users/blloyd.HF/Dropbox/CF_Model/Core/HF_RETURNS.xlsx",
sheet = "HF Returns", col_types = c("date",
"numeric", "numeric", "numeric",
"numeric", "numeric", "numeric",
"numeric", "numeric", "numeric",
"numeric", "numeric", "numeric",
"numeric"))
HF_RETURNS = subset(HF_RETURNS, !is.na(HF_RETURNS$Date))
hf = xts::xts(HF_RETURNS[, 2:ncol(HF_RETURNS)], order.by = zoo::as.yearmon(HF_RETURNS$Date))
hf = hf[apply(hf, 1, function(r)!all(is.na(r))),]
saveRDS(hf, "hf_xts.rds")
}
|
/R/updateHFReturns.R
|
no_license
|
bplloyd/Core
|
R
| false | false | 638 |
r
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/probabilistic.R
\name{hittingProbabilities}
\alias{hittingProbabilities}
\title{Hitting probabilities for markovchain}
\usage{
hittingProbabilities(object)
}
\arguments{
\item{object}{the markovchain-class object}
}
\value{
a matrix of hitting probabilities
}
\description{
Given a markovchain object,
this function calculates the probability of ever arriving from state i to j
}
\examples{
M <- markovchain:::zeros(5)
M[1,1] <- M[5,5] <- 1
M[2,1] <- M[2,3] <- 1/2
M[3,2] <- M[3,4] <- 1/2
M[4,2] <- M[4,5] <- 1/2
mc <- new("markovchain", transitionMatrix = M)
hittingProbabilities(mc)
}
\references{
R. Vélez, T. Prieto, Procesos Estocásticos, Librería UNED, 2013
}
\author{
Ignacio Cordón
}
|
/man/hittingProbabilities.Rd
|
no_license
|
cran/markovchain
|
R
| false | true | 811 |
rd
|
#' Time needed to screen titles of unique search results
#'
#' This function calculates the time needed to screen the unique titles of
#' search results compiled across all searched resources in a systematic
#' review, based on the inputs of the number of unique articles
#' ('uniqart.number', see 'uniqart.number' function), the number of titles
#' that can be screened per day ('titles.day'), and the percentage of all
#' titles that are double checked for consistency ('titles.checked'). Where
#' full dual screening of all records is used, this will equal a percentage
#' of 100 titles being checked. Default values are provided based on
#' the empirical study of environmental systematic reviews by Haddaway and
#' Westgate (2018) https://doi.org/10.1111/cobi.13231.
tscreen.time <- function(uniqart.number=8497.706,titles.day=854,titles.checked=10){
title.screening <- ( uniqart.number / titles.day ) * ( 1 + ( titles.checked / 100 ) )
return(title.screening)
}
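# Worked example (hypothetical inputs): with 10,000 unique records, 500 titles
# screened per day, and 10% of titles double checked, the estimated screening time is
# tscreen.time(uniqart.number = 10000, titles.day = 500, titles.checked = 10)
# # = (10000 / 500) * (1 + 10/100) = 22 days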
|
/R/tscreen.time.R
|
permissive
|
nealhaddaway/predicter
|
R
| false | false | 973 |
r
|
library(ggplot2)
library(dplyr)
#install.packages('maps')
library(maps)
us_map<-map_data('state')
head(us_map,3)
setwd('C:/Users/pwendel/Documents/GitHub/DSGit/Coursera/R_data_viz')
us_map %>% filter(region %in% c('north carolina','south carolina')) %>%
ggplot(aes(x=long,y=lat))+geom_point()
us_map %>% filter(region %in% c('north carolina','south carolina'))%>%
ggplot(aes(x=long,y=lat,group=group))+geom_path()
us_map %>% filter(region %in% c('north carolina','south carolina')) %>%
ggplot(aes(x=long,y=lat,group=group,fill=region))+geom_polygon(color='black')+
theme_void()
us_map %>% ggplot(aes(x=long,y=lat,group=group))+
geom_polygon(fill='lightblue',color='black')+theme_void()
data(votes.repub)
head(votes.repub)
library(dplyr)
#install.packages('viridis')
library(viridis)
votes.repub%>%tbl_df()%>%mutate(state=rownames(votes.repub),state=tolower(state))%>%
right_join(us_map, by=c('state'='region')) %>%
ggplot(aes(x=long,y=lat,group=group,fill=`1976`))+geom_polygon(color="black")+theme_void()+
scale_fill_viridis(name='Republican\nvotes (%)')
#install.packages('tidyr')
library(tidyr)
meltvote<-votes.repub%>%tbl_df()%>%mutate(state=rownames(votes.repub),state=tolower(state))%>%gather(year,votes,-state)
meltvote%>%right_join(us_map,by=c('state'='region'))%>%ggplot(aes(x=long,y=lat,group=group,fill=votes))+
geom_polygon(color='black')+theme_void()+scale_fill_viridis(name='Republican\nvotes (%)')+
facet_wrap(~year)
# install.packages('readr')
library(readr)
serial<-read_csv(paste0("https://raw.githubusercontent.com/",
"dgrtwo/serial-ggvis/master/input_data/",
"serial_podcast_data/serial_map_data.csv"))
head(serial)
serial<-serial %>% mutate(long=-76.8854+0.00017022*x,
lat=39.23822+1.371014e-04*y,
tower=Type=='cell-site')
serial %>% slice(c(1:3,(n()-3):n()))
maryland<-map_data('county',region='maryland')
head(maryland)
baltimore<-maryland%>%filter(subregion %in% c('baltimore city','baltimore'))
head(baltimore,3)
base_bal<-ggplot(baltimore, aes(x=long,y=lat,group=group))+geom_polygon(fill='lightblue',color='black')+
theme_void()
base_bal+geom_point(data=serial,aes(group=NULL,color=tower))+
scale_color_manual(name='Cell tower',values=c('black','red'))
#install.packages('ggmap')
###install.packages('sp')
install.packages('devtools')
library(devtools)
install_github('dkahle/ggmap')
library(ggmap)
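# Note: register_google() needs a valid Google Maps API key before the Google-sourced
# get_map() calls below will work, e.g. (the key string here is a placeholder):
# register_google(key = "YOUR_API_KEY")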
register_google()
beijing<-get_map("Beijing",zoom=12)
ggmap(beijing)
get_map('DFW airport',zoom=15)%>%ggmap()
get_map("Baltimore County",zoom=10,
source='stamen',maptype='toner')%>%
ggmap()+
geom_polygon(data=baltimore,aes(x=long,y=lat,group=group),
color='navy',fill='lightblue',alpha=0.2)+
geom_point(data=serial, aes(x=long,y=lat,color=tower))+
scale_color_manual(name='Cell tower',values=c('black','red'))
get_map(c(-76.6,39.3),zoom=11,
source='stamen',maptype='toner')%>%
ggmap()+
geom_polygon(data=baltimore,aes(x=long,y=lat,group=group),
color='navy',fill='lightblue',alpha=0.2)+
geom_point(data=serial, aes(x=long,y=lat,color=tower))+
scale_color_manual(name='Cell tower',values=c('black','red'))
#install.packages('tigris')
library(tigris)
library(sp)
denver_tracts<-tracts(state='CO',county=31,cb=TRUE)
#install.packages('plotly')
library(plotly)
library(faraway)
data(worldcup)
plot_ly(worldcup,type='scatter',x=~Time,y=~Shots,color=I('blue'))
worldcup %>% mutate(Name=rownames(worldcup))%>%
plot_ly(x=~Time,y=~Shots,color=~Position)%>%
add_markers(text=~paste("<b>Name:</b>",Name,'<br />',
'<b>Team:</b>',Team),hoverinfo='text')
read_csv('data/floyd_track.csv') %>% plot_ly(x=~datetime,y=~max_wind) %>%
add_lines() %>% rangeslider()
denver_tracts <- tracts(state = "CO", county = 31, cb = TRUE)
load("data/fars_colorado.RData")
denver_fars <- driver_data %>%
filter(county == 31 & longitud < -104.5)
install.packages('leaflet')
|
/Coursera/R_data_viz/ggmap.R
|
no_license
|
pwendel3/DSGit
|
R
| false | false | 4,018 |
r
|
# Samantha Alger
#Negative Strand Analysis and figures
# 4/5/2018
# Clear the workspace:
ls()
rm(list=ls())
# Set Working Directory
setwd("~/AlgerProjects/2015_Bombus_Survey/CSV_Files")
library("ggplot2")
library("dplyr")
library("lme4")
library("car")
library("plyr")
# load in data
Melt <- read.csv("USDAplate1Melt.csv", header=TRUE, stringsAsFactors=FALSE)
Cq <- read.csv("USDAplate1cq.csv", header=TRUE, stringsAsFactors=FALSE)
BombSurv <- read.csv("BombSurvNHBS.csv", header=TRUE, stringsAsFactors=FALSE)
# formatting BombSurv to test spatial autocorrelation on
BeeAbund <- read.table("BeeAbund.csv", header=TRUE, sep=",", stringsAsFactors=FALSE)
SpatDat <- read.table("SpatDatBuffs.csv", header=TRUE,sep=",",stringsAsFactors=FALSE)
# remove unwanted sites and bombus species
BombSurv<-BombSurv[!BombSurv$site==("PITH"),]
BombSurv<-BombSurv[!BombSurv$site==("STOW"),]
BombSurv<-BombSurv[!BombSurv$species==("Griseocollis"),]
BombSurv<-BombSurv[!BombSurv$species==("Sandersonii"),]
# subset BombSurv:
Bomb <- dplyr::select(BombSurv, site, Ct_mean, sample_name, species, apiary_near_far, Density, genome_copbee, norm_genome_copbeeHB, target_name, virusBINY_PreFilter, virusBINY, HBSiteBin)
names(Bomb)[3] <- "Sample"
# merge data:
#Dat <- merge(Melt, Cq, by = c("Sample", "Target"))
#str(Dat)
# Merge Dat and Bomb
Dat <- merge(Melt, Bomb, by = c("Sample","target_name"), all.y=TRUE)
#Dat <- merge(Melt, Bomb, by = c("Sample"), all.x=TRUE)
DatClean <- Dat
#DatClean <- DatClean[!(DatClean$Cq>33),]
#DatClean <- DatClean[!(DatClean$Melt<78),]
DatClean$BinaryNeg <- ifelse(DatClean$Melt > 0, 1, 0)
DatClean$BinaryNeg[is.na(DatClean$BinaryNeg)] <- 0
x <- merge(DatClean, BeeAbund, by="site")
DatClean <- merge(x, SpatDat, by="site")
DatCleanPos <- DatClean[DatClean$virusBINY==1,]
DatCleanPos[DatCleanPos$apis==0,]
DatClean$isHB <- ifelse(DatClean$site=="TIRE" |
DatClean$site=="CLERK" |
DatClean$site=="NEK" |
DatClean$site=="FLAN",
"noHB", "HB")
ddply(DatClean, c("target_name", "isHB"), summarise,
n = length(BinaryNeg),
mean = mean(BinaryNeg),
sd = sqrt(((mean(BinaryNeg))*(1-mean(BinaryNeg)))/n))
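# Note: the 'sd' column above is the standard error of a proportion, sqrt(p*(1-p)/n),
# where p is the mean of the 0/1 BinaryNeg indicator; the same formula is reused in the
# ddply summaries further down.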
# Subset for the two viruses:
# For BQCV:
BQ <- DatClean[DatClean$target_name=="BQCV",]
# For DWV:
DW <- DatClean[DatClean$target_name=="DWV",]
reducedBQ <- select(BQ, BinaryNeg, Density, apis, apiary_near_far, species, site, Sample, lat, long)
reducedDW <- select(DW, BinaryNeg, Density, apis, apiary_near_far, species, site, Sample, lat, long)
###########################################################################
# function name: TheExtractor
# description: extracts log-likelihood ratio test statistics and p-values for the
#              full model vs. the null model and vs. each reduced model
# parameters:
# Full = full model (glmer or lmer)
# Null = null model
# Density = density removed
# Colonies = colonies removed
# Species = species removed
###########################################################################
TheExtractor <- function(Full, Null, Colonies, Density, Species){
sumFull <- summary(Full)
modelFit <- anova(Full, Null, test="LRT")
Cols <- anova(Full, Colonies, test="LRT")
Dens <- anova(Full, Density, test="LRT")
Spec <- anova(Full, Species, test="LRT")
ModFit <- list("Model Fit P"=modelFit$`Pr(>Chisq)`[2], "Model Fit Df"=modelFit$`Chi Df`[2], "Model Fit Chi2"=modelFit$Chisq[2])
ColFit <- list("Colony Fit P"=Cols$`Pr(>Chisq)`[2],"Colony Fit Df"=Cols$`Chi Df`[2],"Colony Fit Chi2"=Cols$Chisq[2])
DensFit <- list("Density Fit P"=Dens$`Pr(>Chisq)`[2],"Density Fit Df"=Dens$`Chi Df`[2],"Density Fit Chi2"=Dens$Chisq[2])
SpecFit <- list("Species Fit P"=Spec$`Pr(>Chisq)`[2],"Species Fit Df"=Spec$`Chi Df`[2],"Species Fit Chi2"=Spec$Chisq[2])
return(list(sumFull$coefficients[1:4,1:2],ModFit, ColFit, DensFit, SpecFit))
}
###########################################################################
# END OF FUNCTION
###########################################################################
BQCVprevModFull <- glmer(data=reducedBQ, formula = BinaryNeg ~ species + Density + apis + (1|site) + (1|long) + (1|lat), family = binomial(link = "logit"))
BQCVprevModNull <- glmer(data=reducedBQ, formula = BinaryNeg ~ 1 + (1|site) + (1|long) + (1|lat), family = binomial(link = "logit"))
BQCVprevModnoApis <- glmer(data=reducedBQ, formula = BinaryNeg ~ species + Density + (1|site) + (1|long) + (1|lat), family = binomial(link = "logit"))
BQCVprevModnoDens <- glmer(data=reducedBQ, formula = BinaryNeg ~ species + apis + (1|site) + (1|long) + (1|lat), family = binomial(link = "logit"))
BQCVprevModnoSpp <- glmer(data=reducedBQ, formula = BinaryNeg ~ Density + apis + (1|site) + (1|long) + (1|lat), family = binomial(link = "logit"))
# run the function to get results of models
TheExtractor(Full=BQCVprevModFull,
Null=BQCVprevModNull,
Colonies=BQCVprevModnoApis,
Density=BQCVprevModnoDens,
Species = BQCVprevModnoSpp)
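# Note: the block below refits the same three-predictor models on the DWV data
# (reducedDW); the object names still start with "BQCV" but the response is DWV.
# (Likewise, the later DWVprevMod* block refits on the BQCV data, reducedBQ.)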
BQCVprevModFull <- glmer(data=reducedDW, formula = BinaryNeg ~ species + Density + apis + (1|site) + (1|long) + (1|lat), family = binomial(link = "logit"))
BQCVprevModNull <- glmer(data=reducedDW, formula = BinaryNeg ~ 1 + (1|site) + (1|long) + (1|lat), family = binomial(link = "logit"))
BQCVprevModnoApis <- glmer(data=reducedDW, formula = BinaryNeg ~ species + Density + (1|site) + (1|long) + (1|lat), family = binomial(link = "logit"))
BQCVprevModnoDens <- glmer(data=reducedDW, formula = BinaryNeg ~ species + apis + (1|site) + (1|long) + (1|lat), family = binomial(link = "logit"))
BQCVprevModnoSpp <- glmer(data=reducedDW, formula = BinaryNeg ~ Density + apis + (1|site) + (1|long) + (1|lat), family = binomial(link = "logit"))
# run the function to get results of models
TheExtractor(Full=BQCVprevModFull,
Null=BQCVprevModNull,
Colonies=BQCVprevModnoApis,
Density=BQCVprevModnoDens,
Species = BQCVprevModnoSpp)
#DWVprevModFull <- glmer(data=reducedDW, formula = BinaryNeg ~ species + Density + apis + (1|site) + (1|long) + (1|lat), family = binomial(link = "logit"))
BQCVprevModFull2 <- glmer(data=reducedBQ, formula = BinaryNeg ~ species + Density + apiary_near_far + (1|site) + (1|long) + (1|lat), family = binomial(link = "logit"))
DWVprevModFull3 <- glmer(data=reducedDW, formula = BinaryNeg ~ species + Density + apiary_near_far + (1|site) + (1|long) + (1|lat), family = binomial(link = "logit"))
DWVprevModNull3 <- glmer(data=reducedDW, formula = BinaryNeg ~ 1 + (1|site) + (1|long) + (1|lat), family = binomial(link = "logit"))
DWVprevModFull3noApis <- glmer(data=reducedDW, formula = BinaryNeg ~ species + Density + (1|site) + (1|long) + (1|lat), family = binomial(link = "logit"))
DWVprevModFull3noDensity <- glmer(data=reducedDW, formula = BinaryNeg ~ species + apiary_near_far + (1|site) + (1|long) + (1|lat), family = binomial(link = "logit"))
DWVprevModFull3nospecies <- glmer(data=reducedDW, formula = BinaryNeg ~ Density + apiary_near_far + (1|site) + (1|long) + (1|lat), family = binomial(link = "logit"))
# run the function to get results of models
TheExtractor(Full=DWVprevModFull3,
Null=DWVprevModNull3,
Colonies=DWVprevModFull3noApis,
Density=DWVprevModFull3noDensity,
Species =DWVprevModFull3nospecies)
DWVprevModFull3 <- glmer(data=reducedBQ, formula = BinaryNeg ~ species + Density + apiary_near_far + (1|site) + (1|long) + (1|lat), family = binomial(link = "logit"))
DWVprevModNull3 <- glmer(data=reducedBQ, formula = BinaryNeg ~ 1 + (1|site) + (1|long) + (1|lat), family = binomial(link = "logit"))
DWVprevModFull3noApis <- glmer(data=reducedBQ, formula = BinaryNeg ~ species + Density + (1|site) + (1|long) + (1|lat), family = binomial(link = "logit"))
DWVprevModFull3noDensity <- glmer(data=reducedBQ, formula = BinaryNeg ~ species + apiary_near_far + (1|site) + (1|long) + (1|lat), family = binomial(link = "logit"))
DWVprevModFull3nospecies <- glmer(data=reducedBQ, formula = BinaryNeg ~ Density + apiary_near_far + (1|site) + (1|long) + (1|lat), family = binomial(link = "logit"))
# run the function to get results of models
TheExtractor(Full=DWVprevModFull3,
Null=DWVprevModNull3,
Colonies=DWVprevModFull3noApis,
Density=DWVprevModFull3noDensity,
Species =DWVprevModFull3nospecies)
# Fig and stats for BQCV:
BQ <- BQ[ which(BQ$virusBINY_PreFilter=="1"), ]
#ddply summarize:
plotdat <- ddply(BQ, c("target_name", "apiary_near_far"), summarise,
n = length(BinaryNeg),
mean = mean(BinaryNeg, na.rm=TRUE),
sd = sqrt(((mean(BinaryNeg))*(1-mean(BinaryNeg)))/n))
plotdat$apiary_near_far <- ifelse(plotdat$apiary_near_far==0, "No Apiary", "Apiary")
label.df <- data.frame(Group = c("S1", "S2"),
Value = c(6, 9))
plot1 <- ggplot(plotdat, aes(x=apiary_near_far, y=mean, fill=target_name)) +
geom_bar(stat="identity", color="black",
fill = "white",
position=position_dodge()) + labs(y="BQCV Replication", x="Site Type") + geom_errorbar(aes(ymin = mean - sd, ymax = mean + sd, width = 0.2),position=position_dodge(.9))
plot1 + theme_minimal(base_size = 18) + coord_cartesian(ylim = c(0, .5)) + scale_y_continuous(labels = scales::percent) + guides(fill=FALSE)
DatCleanNeg <- DatClean[DatClean$target_name=="BQCV",]
#Calculate percentage of replicating infections:
mean(BQ$BinaryNeg)
# Overall 20% of BQCV positive bumble bees had replicating infections.
#ddply summarize for species:
plotdat <- ddply(BQ, c("target_name", "species"), summarise,
n = length(BinaryNeg),
mean = mean(BinaryNeg, na.rm=TRUE),
sd = sqrt(((mean(BinaryNeg))*(1-mean(BinaryNeg)))/n))
plot1 <- ggplot(plotdat, aes(x=species, y=mean, fill=target_name)) +
geom_bar(stat="identity", color="black",fill = "white",
position=position_dodge()) + labs(y="Prevalence", x="Species") + geom_errorbar(aes(ymin = mean - sd, ymax = mean + sd, width = 0.2),position=position_dodge(.9))
plot1 + theme_minimal(base_size = 18) + coord_cartesian(ylim = c(0, 1)) + scale_y_continuous(labels = scales::percent) + guides(fill=FALSE)
#Percentage of replication by species:
plotdat
# 28% of bimacs, 11% of Vagans
#For DWV:
# subset for virus positive bees
DW <- DW[ which(DW$virusBINY_PreFilter=="1"), ]
DW$virusBINY
#ddply summarize:
plotdat2 <- ddply(DW, c("target_name", "apiary_near_far"), summarise,
n = length(BinaryNeg),
mean = mean(BinaryNeg, na.rm=TRUE),
sd = sqrt(((mean(BinaryNeg))*(1-mean(BinaryNeg)))/n))
plotdat2$apiary_near_far <- ifelse(plotdat2$apiary_near_far==0, "No Apiary", "Apiary")
label.df <- data.frame(Group = c("S1", "S2"),
Value = c(6, 9))
plot1 <- ggplot(plotdat2, aes(x=apiary_near_far, y=mean, fill=target_name)) +
geom_bar(stat="identity", color="black",
fill = "white",
position=position_dodge()) + labs(y="DWV Replication", x="Site Type") + geom_errorbar(aes(ymin = mean - sd, ymax = mean + sd, width = 0.2),position=position_dodge(.9))
plot1 + theme_minimal(base_size = 18) + coord_cartesian(ylim = c(0, .5)) + scale_y_continuous(labels = scales::percent) + guides(fill=FALSE)
DatCleanNeg <- DatClean[DatClean$target_name=="DWV",]
chisq.test(DatCleanNeg$BinaryNeg, DatCleanNeg$apiary_near_far)
chisq.test(DatCleanNeg$BinaryNeg, DatCleanNeg$species)
#Calculate % of replicating infections
mean(DW$BinaryNeg)
# 16% of DWV positive bees had replicating infections.
#ddply summarize for species:
plotdat <- ddply(DW, c("target_name", "species"), summarise,
n = length(BinaryNeg),
mean = mean(BinaryNeg, na.rm=TRUE),
sd = sqrt(((mean(BinaryNeg))*(1-mean(BinaryNeg)))/n))
#Replication by species
plotdat
# bimacs 22%; Vagans 12%
plot1 <- ggplot(plotdat, aes(x=species, y=mean, fill=target_name)) +
geom_bar(stat="identity", color="black",fill = "white",
position=position_dodge()) + labs(y="Prevalence", x="Species") + geom_errorbar(aes(ymin = mean - sd, ymax = mean + sd, width = 0.2),position=position_dodge(.9))
plot1 + theme_minimal(base_size = 18) + coord_cartesian(ylim = c(0, 1)) + scale_y_continuous(labels = scales::percent) + guides(fill=FALSE)
plotdat
|
/2015_Bombus_Survey/NegStd.R
|
no_license
|
samanthaannalger/AlgerProjects
|
R
| false | false | 12,563 |
r
|
library(optimization)
error_safe <- function(expr){
tryCatch(expr,
error = function(e){
message("An error occurred:\n", e)
NA
})
}
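# Minimal illustration of error_safe (not part of the original pipeline):
# error_safe(stop("boom"))   # prints the error message and returns NA
# error_safe(1 + 1)          # returns 2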
summary_hyperpar <- function(Y, X, A_block,
gamma_init_A, gamma_init_B,
eta_input, rho_input = 0.9,
priorA = "ep", priorB = "ep"){
if(priorA == "ep"){
priorA_num = 0
} else if(priorA == "unif"){
priorA_num = 3
} else {
warning("priorA should be a string `ep` for Ewens-Pitman or `unif` for the uniform.")
}
if(priorB == "ep"){
priorB_num = 0
} else if(priorB == "unif"){
priorB_num = 3
} else {
warning("priorB should be a string `ep` for Ewens-Pitman or `unif` for the uniform.")
}
Rcpp::sourceCpp("src/particle_summary.cpp")
source("src/fun_likelihood.R")
Xorig <- X
Yorig <- Y
Xmeans <- rowMeans(X)
X <- X - Xmeans
betas_mle <- numeric(nrow(Y))
for(i in 1:nrow(Y))
betas_mle[i] <- cov(Y[i,],X[i,])/var(X[i,])
Y <- Y - betas_mle*Xmeans
eta_py = eta_input
sigma_py = 0
rho = rho_input
N <- dim(Y)[1]
t <- dim(Y)[2]
n_tr <- dim(X)[1]
betas_mle <- numeric(n_tr)
for(i in 1:n_tr)
betas_mle[i] <- cov(Y[i,],X[i,])/var(X[i,])
alphas_mle <- rowMeans(Y) - betas_mle * rowMeans(X)
sigmas <- numeric(n_tr)
for(i in 1:n_tr)
sigmas[i] <- sd(lm(Y[i,]~X[i,])$residuals)
sigma2 <- mean(sigmas^2)
mu <- mean(sigmas^2)
v <- var(sigmas^2)
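# Method-of-moments choice of the inverse-gamma prior on sigma^2 (interpretation,
# assuming an IG(alpha, beta) parameterisation): mean = beta/(alpha - 1) and
# variance = mean^2/(alpha - 2), so matching mu and v gives
# alpha_sigma = mu^2/v + 2 and beta_sigma = mu*(alpha_sigma - 1), as computed below.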
alpha_sigma <- mu^2/v + 2
beta_sigma <- mu*(alpha_sigma-1)
K <- round(log(n_tr))
tmp <- (max(alphas_mle)-min(alphas_mle))/(K+1)/2
a1 <- tmp^2/sigma2*(1-0.8)
a2 <- (max(abs(alphas_mle))/2)^2/sigma2 - (a1/(1-rho))
tmp <- (max(betas_mle)-min(betas_mle))/(K+1)/2
b1 <- tmp^2/sigma2*(1-0.8)
b2 <- (max(abs(betas_mle))/2)^2/sigma2 - (b1/(1-rho))
partA <- gamma_init_A
partB <- gamma_init_B
log_post2 <- function(par){
log_post(par, priorA_num, priorB_num, partA, partB,
a2, b2, rho, Y,X,A_block, alpha_sigma, beta_sigma,
eta_py, sigma_py)
}
tmp_nm <- error_safe(optim(par = c(a1,b1), fn = log_post2, method = "Nelder-Mead"))
if(!any(is.na(tmp_nm))){
a1_new <- tmp_nm$par[1]
b1_new <- tmp_nm$par[2]
} else {
a1_new <- a1
b1_new <- b1
}
tmp_new <- particle_summary(Y, X, A_block,
gamma_init_A = partA, gamma_init_B = partB,
a1_input = a1_new, b1_input = b1_new, a2_input = a2, b2_input = b2,
alpha_sigma_input = alpha_sigma, beta_sigma_input = beta_sigma,
priorA_input = priorA_num, priorB_input = priorB_num,
eta_input = eta_py, rho_input = rho)
final_hyperpar <- c(a1_new, a2, b1_new, b2, alpha_sigma, beta_sigma)
return(list(adjusted = tmp_new, optim = tmp_nm, hyperpar = final_hyperpar))
}
|
/two_partitions/src/summary_hyperpar.R
|
no_license
|
cecilia-balocchi/particle-optimization
|
R
| false | false | 2,932 |
r
|
#' Add a row for each unused factor level to ensure plotly displays all levels in the legend.
#'
#' Add a row for each unused factor level to ensure plotly displays all levels in the legend.
#'
#' @param data A tibble, dataframe or sf object. Required input.
#' @param var A variable of class factor.
#'
#' @return A tibble, dataframe or sf object with a row added for each unused factor level.
#' @export
#'
#' @examples
# library(palmerpenguins)
# library(dplyr)
#
# penguins %>%
# filter(sex == "female") %>%
# add_unused_levels(sex) %>%
# tail()
add_unused_levels <- function(data, var) {
warning("This adds a row for each unused factor level to ensure plotly displays all levels in the legend. It should be used only for input within a ggplotly object.")
var <- rlang::enquo(var)
var_vctr <- dplyr::pull(data, !!var)
unused_levels <- setdiff(levels(var_vctr), unique(var_vctr))
if(length(unused_levels) != 0) data <- dplyr::bind_rows(data, tibble::tibble(!!var := unused_levels))
return(data)
}
|
/R/add_unused_levels.R
|
permissive
|
StatisticsNZ/er.helpers
|
R
| false | false | 995 |
r
|
# for graphing the SingleEnvAnalysis and MultiEnvAnalysis boxplot
#
# [Arguments]
# data - SingleEnvAnalysis or MultiEnvAnalysis Outcome;
# path - the directory in which to write the boxplot files
# single.env - logical; if TRUE, create a separate boxplot for each environment of a
#              trait, otherwise a single boxplot across all environments of that trait.
#
graph.boxplot <- function
(
data,
path,
single.env = FALSE,
...
)
{
UseMethod("graph.boxplot");
}
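# Usage sketch (hypothetical objects; the methods below dispatch on the class of `data`):
# graph.boxplot(sea_result, path = getwd(), single.env = TRUE)   # SingleEnvAnalysis
# graph.boxplot(mea_result, path = getwd())                      # MultiEnvAnalysis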
graph.boxplot.SingleEnvAnalysis <- function
(
data,
path,
single.env = FALSE,
...
)
{
if(missing(path))
path <- getwd();
#create boxplot of traits after SingleEnvAnalysis on each environment.
for(i in 1:length(data$traits))
{
trait.name <- data$traits[[i]]$name;
if(is.null(data$traits[[i]]$analysis$sea))
{
warning(cat("\tSkip the ", trait.name, " boxplot\n",sep = ""));
next;
} else
{
if(single.env)
{
for(j in 1:length(data$traits[[i]]$analysis$sea$envs))
{
env.name <- data$traits[[i]]$analysis$sea$envs[[j]]$name;
boxfile <- paste(path,"/boxplot_",trait.name,"_",env.name,".png",sep="");
if(!all(is.na(data$traits[[i]]$envs[[j]]$data[,trait.name])))
{
png(boxfile);
xlabel = trait.name;
boxplot(as.numeric(as.character(data$traits[[i]]$envs[[j]]$data[,trait.name])),
xlab = xlabel, main = paste("Boxplot of ", trait.name, sep=""));
dev.off();
}
}
} else
{
env.label <- data$traits[[i]]$envs[[1]]$design$env;
boxfile <- paste(path,"/boxplot_", trait.name,"_ALL_Env.png", sep= "");
if(!all(is.na(data$raw.data[,trait.name])))
{
png(boxfile);
xlabel = trait.name;
boxplot(as.numeric(as.character(data$raw.data[,trait.name])) ~ as.factor(data$raw.data[,env.label]),
data = data$raw.data, xlab = xlabel, main = paste("Boxplot of ", trait.name, sep=""));
dev.off();
}
}
}
}
}
graph.boxplot.MultiEnvAnalysis <- function
(
data,
path,
single.env = FALSE,
...
)
{
if(missing(path))
path <- getwd();
#create boxplot of traits after MultiEnvAnalysis.
for(i in 1:length(data$traits))
{
trait.name <- data$traits[[i]]$name;
if(is.null(data$traits[[i]]$analysis$mea))
{
warning(cat("\tSkip the ", trait.name, " boxplot\n",sep = ""));
next;
} else
{
boxfile = paste(path,"/boxplotMea1S_",trait.name,".png",sep = "");
if (!all(is.na(data$traits[[i]]$analysis$mea$data[,trait.name]))) {
png(filename = boxfile); #par(mfrow = n2mfrow(length(respvar)));
xlabel = trait.name;
boxplot((data$traits[[i]]$analysis$mea$data[,trait.name]), data = data,
xlab = xlabel, main = paste("Boxplot of ", trait.name, sep=""));
dev.off()
}
}
}#end stmt for(i in 1:length(data$traits))
}
|
/R/graph.boxplot.R
|
no_license
|
shingocat/PBTools
|
R
| false | false | 2,906 |
r
|
.onAttach <- function(...) {
## if (!interactive() || stats::runif(1) > 0.1) return()
if (!interactive()) return()
##
## tips <- c(
## "Use suppressPackageStartupMessages() to eliminate package startup messages.",
## "Stackoverflow is a great place to for general help: http://stackoverflow.com.",
## "Need help getting started? Try the cookbook for R: http://www.cookbook-r.com"
## )
##
packageStartupMessage(c("Welcome to the gpusim package!\n",
"Need help? blah. Report an issue on...\n",
"Stackoverflow is a great place to for general help: http://stackoverflow.com"))
}
|
/R/zzz.R
|
no_license
|
grizant/gpusim
|
R
| false | false | 661 |
r
|
#####################
## Extract SNPs from McAllister Data
#####################
library(stringr)
library(VariantAnnotation)
anodf <- read.csv("./data/gerardii/McAllister_Miller_Locality_Ploidy_Info.csv")
fl <-"./data/gerardii/McAllister.Miller.all.mergedRefGuidedSNPs.vcf.gz"
## choose arbitrary region
chlist <- list(chr1_gr = GRanges("1", IRanges(start = 7000000, end = 7100000)),
chr2_gr = GRanges("10", IRanges(start = 7000000, end = 7100000)))
compressVcf <- bgzip(fl, tempfile())
idx <- indexTabix(compressVcf, "vcf")
tab <- TabixFile(compressVcf, idx)
for (i in seq_along(chlist)) {
param <- ScanVcfParam(which = chlist[[i]])
mca <- readVcf(tab, as.character(i), param)
## Keep only biallelic snps
which_ba <- sapply(alt(mca), length) == 1
mca <- mca[which_ba, ]
## Remove SNPs with low MAF
which_maf <- info(mca)$AF > 0.1 & info(mca)$AF < 0.9
stopifnot(length(table(sapply(which_maf, length))) == 1)
which_maf <- unlist(which_maf)
mca <- mca[which_maf, ]
## Extract read-count matrices
DP <- geno(mca)$DP
AD <- geno(mca)$AD
stopifnot(length(table(sapply(AD, length))) == 2)
get_elem <- function(x, num) {
if (length(x) < num) {
return(NA)
} else {
return(x[[num]])
}
}
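# e.g. get_elem(c(12, 5), num = 2) returns 5, while get_elem(c(12), num = 2) returns NA,
# which is how missing ALT depths in the AD field are handled below.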
refmat <- sapply(AD, get_elem, num = 1)
dim(refmat) <- dim(AD)
dimnames(refmat) <- dimnames(AD)
altmat <- sapply(AD, get_elem, num = 2)
dim(altmat) <- dim(AD)
dimnames(altmat) <- dimnames(AD)
if (i == 1) {
sizemat_f <- DP
refmat_f <- refmat
locdf_f <- data.frame(snp = rownames(DP), loc = i)
} else {
sizemat_f <- rbind(sizemat_f, DP)
refmat_f <- rbind(refmat_f, refmat)
locdf <- data.frame(snp = rownames(DP), loc = i)
locdf_f <- rbind(locdf_f, locdf)
}
}
## Remove snps with high missingness
goodsnp <- rowMeans(sizemat_f, na.rm = TRUE) >= 3
sizemat_f <- sizemat_f[goodsnp, ]
refmat_f <- refmat_f[goodsnp, ]
locdf_f <- locdf_f[goodsnp, ]
## remove individuals with high missingness
goodind <- str_split_fixed(colnames(sizemat_f), pattern = ":", n = 4)[, 1] %in% anodf$Individual
sizemat_f <- sizemat_f[, goodind]
refmat_f <- refmat_f[, goodind]
## split individuals based on ploidy
sixind <- anodf$Individual[anodf$Ploidy.Level == 6]
nonind <- anodf$Individual[anodf$Ploidy.Level == 9]
candidate <- str_split_fixed(colnames(sizemat_f), pattern = ":", n = 4)[, 1]
stopifnot(candidate %in% anodf$Individual)
which_six <- candidate %in% sixind
which_non <- candidate %in% nonind
sizemat_six <- sizemat_f[, which_six]
refmat_six <- refmat_f[, which_six]
sizemat_non <- sizemat_f[, which_non]
refmat_non <- refmat_f[, which_non]
## Remove duplicated rows
which_bad_six <- duplicated(sizemat_six) & duplicated(refmat_six)
sizemat_six <- sizemat_six[!which_bad_six, ]
refmat_six <- refmat_six[!which_bad_six, ]
which_bad_non <- duplicated(sizemat_non) & duplicated(refmat_non)
sizemat_non <- sizemat_non[!which_bad_non, ]
refmat_non <- refmat_non[!which_bad_non, ]
locdf_f <- locdf_f[!which_bad_non, ]
saveRDS(object = sizemat_six, file = "./output/mca/sizemat_hex.RDS")
saveRDS(object = refmat_six, file = "./output/mca/refmat_hex.RDS")
saveRDS(object = sizemat_non, file = "./output/mca/sizemat_non.RDS")
saveRDS(object = refmat_non, file = "./output/mca/refmat_non.RDS")
write.csv(x = locdf_f, file = "./output/mca/locdf.csv", row.names = FALSE)
|
/code/mca_extract.R
|
no_license
|
dcgerard/ld_simulations
|
R
| false | false | 3,360 |
r
|
# utils for qsub command
dots_parser <- function(..., sep_collapse = "\n") {
rlang::list2(...) %>%
purrr::map(vctrs::vec_cast, to = character()) %>%
purrr::map_chr(stringr::str_c, collapse = sep_collapse) %>%
stringr::str_c(collapse = sep_collapse)
}
try_system <- function(x, trial_times = 5L) {
if (trial_times <= 0L) rlang::abort(paste0("Error occurred in ", x), "command_error")
res <- try(system(x, intern = TRUE))
if (inherits(res, "try-error")) {
try_system(x, trial_times - 1L)
} else {
return(res)
}
}
seq_int_chr <- function(from_to_by){
from = to = by = integer()
c(from, to, by) %<-% (from_to_by %>% vctrs::vec_cast(integer()))
if (is.na(from) || is.na(to) || is.na(by)) {
"undefined"
}else{
seq.int(from, to, by) %>% as.character()
}
}
qsub_verbose <- function(ID_body, task, time){
stringr::str_glue("ID: ", crayon::cyan(ID_body),
"\ntaskid: ", crayon::cyan(stringr::str_c(task, collapse = ", ")),
"\ntime: ", crayon::cyan(time)) %>% cli::cat_line()
}
parse_id <- function(ID) {
ID_vec <- stringr::str_split(ID, "\\.|-|:")[[1]] %>% as.integer()
list(
ID_body = ID_vec[1],
task = ID_vec[2:4] %>% seq_int_chr()
)
}
read_shebang <- function(path) {
con = file(path, "r")
if (readChar(con, 2L) == "#!") shebang <- readLines(con, n = 1L)
else shebang <- NA_character_
close(con)
shebang
}
|
/R/utils-qsub.R
|
no_license
|
sinnhazime/jobwatcher
|
R
| false | false | 1,422 |
r
|
library(dplyr)
library(ggplot2)
#Read the data into R
NEI <- readRDS("./data/summarySCC_PM25.rds")
SCC <- readRDS("./data/Source_Classification_Code.rds")
#merge data based on SCC number, keep only ON-ROAD (motor vehicle) sources,
#and keep only the Baltimore City (24510) and Los Angeles County (06037) fips
merged <- merge(NEI, SCC, by = "SCC")
merged.BAL <- merged[merged$fips == "24510" & merged$type == "ON-ROAD", ]
merged.LA <- merged[merged$fips == "06037" & merged$type == "ON-ROAD", ]
#sum it up based on years
agg.BAL <- aggregate(Emissions ~ year, merged.BAL, sum)
agg.LA <- aggregate(Emissions ~ year, merged.LA, sum)
agg.merge <- rbind(agg.BAL, agg.LA)
fips <- as.factor(c("25410", "25410", "25410", "25410", "06037", "06037", "06037", "06037"))
agg.merge <- cbind(agg.merge, fips)
#make the graph of pm2.5 sums by year
png("plot6.png", width = 640, height = 480)
plot6 <- ggplot(agg.merge, aes(factor(year), Emissions)) +
facet_grid(. ~ fips) +
geom_bar(stat = "identity", aes(fill = year, color = year)) +
labs(title = expression(PM[2.5] * " sums from 1999-2008 for ON ROAD Vehicles between Baltimore(24510) and LA(06037)")) +
labs(x = "Year", y = expression("Sum of " * PM[2.5] * " levels"))
print(plot6)
dev.off()
|
/plot6.R
|
no_license
|
nschampions2004/Exploratory-Data-Analysis-Programming-Assignment-2
|
R
| false | false | 1,197 |
r
|
# Script for loading in all count data and assessing QC
# To do the following:
# 1) Load all gene count data for all samples
# 2) Convert to TPM and log(TPM+1)
# 3) Convert Ensembl gene IDs to symbol
# 4) Convert gene symbol counts to TPM
# Functions ####
# Convert a dataframe of TPM into log(tpm+1)
TPMTologTpm <- function(tpm) {
for(i in c(1:ncol(tpm))) { tpm[,i] <- log(tpm[,i]+1) }
return(tpm)
}
# Main code ####
# 0. Prepare environment
setwd("~/Documents/EPICC/Data/Expression");library(data.table)
library(dplyr);'%ni%' <- Negate('%in%')
# 1. Load data and reformat ready for normalisation ####
# Load gene count matrix from 4.2.1.3
EPICC <- as.data.frame(fread('ProcessedCounts/All_EPICC_counts.txt'))
# Load pre-compiled gene length data
load(file="outputforNormalisation.RData");alllens <- as.data.frame(output)
alllens$GeneID <- row.names(alllens);row.names(alllens) <- c(1:nrow(alllens));alllens <- alllens[,c(3,1)]
alllens <- alllens[-grep("PAR_Y",alllens$GeneID),]
alllens$GeneID <- gsub('(ENSG\\d+)\\.\\d+','\\1',alllens$GeneID)
alllens <- alllens[order(alllens$GeneID),]
# Merge data together
EPICC <- merge(EPICC,alllens,by='GeneID')
# 2. Convert raw gene counts to TPM and log(TPM+1) ####
# a. Normalise for gene length: gene counts / gene length (in kb)
normEPICC <- EPICC[,grep('C',colnames(EPICC))];row.names(normEPICC) <- EPICC$GeneID
for(i in c(1:ncol(normEPICC))) { normEPICC[,i] <- normEPICC[,i]/(EPICC$Length/1000) }
# b. Normalise for sequencing depth: sum the normalised gene counts and divide by a million
# then divide each normalised gene count by that sample's scaling factor
for(i in c(1:ncol(normEPICC))) {
sfactor <- sum(normEPICC[,i])/1000000
normEPICC[,i] <- normEPICC[,i]/sfactor
}
epiccTPM <- normEPICC
# TPM to log(TPM+1)
epicclogTPM <- TPMTologTpm(epiccTPM)
# Output TPM and log(TPM+1) files
# TPM
epiccTPM$GeneID <- row.names(epiccTPM);epiccTPM <- epiccTPM[,c(ncol(epiccTPM),1:(ncol(epiccTPM)-1))]
write.table(epiccTPM,"~/Documents/EPICC/Data/Expression/ProcessedCounts/All_EPICC_tpm.txt",sep='\t',quote=F,row.names = F)
# logTPM
epicclogTPM$GeneID <- row.names(epicclogTPM);epicclogTPM <- epicclogTPM[,c(ncol(epicclogTPM),1:(ncol(epicclogTPM)-1))]
write.table(epicclogTPM,"~/Documents/EPICC/Data/Expression/ProcessedCounts/All_EPICC_logtpm.txt",sep='\t',quote=F,row.names = F)
# 3. Convert Ensembl raw gene counts to gene symbols ####
# Load in data mapping ensembl gene IDs to gene symbols
geneinfo <- read.table("~/Documents/EPICC/Data/Expression/compiledGeneInfo.txt",header=T)
counts <- as.data.frame(fread("~/Documents/EPICC/Data/Expression/ProcessedCounts/All_EPICC_counts.txt"))
merged <- merge(geneinfo,counts,by='GeneID');merged <- merge(merged,alllens,by='GeneID')
merged <- merged[,c('GeneID','Name','Length',colnames(merged)[grep('C\\d\\d\\d',colnames(merged))])]
# Assess duplicate gene names
alldups <- unique(merged[which(duplicated(merged$Name)),'Name']);epiccTMP <- merged
merged <- merged[which(merged$Name %ni% alldups),]
for(i in c(1:length(alldups))) {
dupped <- epiccTMP[which(epiccTMP$Name==alldups[i]),]
  counts <- colSums(dupped[,c(4:ncol(merged))]) # sum the sample columns only (skip GeneID, Name and Length)
len <- dupped[which(dupped$Length==max(dupped$Length)),'Length'][1]
merged <- rbind(merged,c('Dupped',alldups[i],len,counts))
}
merged <- merged[,c(2:ncol(merged))];merged <- merged[order(merged$Name),]
for(col in c(2:ncol(merged))) { merged[,col] <- as.integer(merged[,col]) }
# Output
symcounts <- merged;symcounts$GeneID <- symcounts$Name
symcounts <- symcounts[,c(ncol(symcounts),3:(ncol(symcounts)-1))]
write.table(symcounts,"~/Documents/EPICC/Data/Expression/ProcessedCounts/All_EPICC_symbol_counts.txt",sep='\t',quote=F,row.names = F)
# 4. Convert gene symbol counts to TPM and log(TPM+1) ####
# a. Normalise for gene length: do gene counts / gene length (in kb)
symEPICC <- merged[,colnames(merged)[grep('C\\d\\d\\d',colnames(merged))]];row.names(symEPICC) <- merged$Name
for(i in c(1:ncol(symEPICC))) { symEPICC[,i] <- symEPICC[,i]/(merged$Length/1000) }
# b. Normalise for sequencing depth: sum the normalised gene counts and divide by a million
# then divide each normalised gene count by that sample's scaling factor
for(i in c(1:ncol(symEPICC))) {
sfactor <- sum(symEPICC[,i])/1000000
symEPICC[,i] <- symEPICC[,i]/sfactor
}
symTPM <- symEPICC
# TPM to log(TPM+1)
symlogTPM <- TPMTologTpm(symTPM)
# Output TPM and logTPM files
# TPM
symTPM$GeneID <- row.names(symTPM);symTPM <- symTPM[,c(ncol(symTPM),1:(ncol(symTPM)-1))]
write.table(symTPM,"~/Documents/EPICC/Data/Expression/ProcessedCounts/All_EPICC_symbol_tpm.txt",sep='\t',quote=F,row.names = F)
# logTPM
symlogTPM$GeneID <- row.names(symlogTPM);symlogTPM <- symlogTPM[,c(ncol(symlogTPM),1:(ncol(symlogTPM)-1))]
write.table(symlogTPM,"~/Documents/EPICC/Data/Expression/ProcessedCounts/All_EPICC_symbol_logtpm.txt",sep='\t',quote=F,row.names = F)
|
/4.2.2.1.NormaliseCountsTPM.R
|
no_license
|
JacobHouseham/analysis_and_plotting_scripts
|
R
| false | false | 4,870 |
r
|
# Script for loading in all count data and assessing QC
# To do the following:
# 1) Load all gene count data for all samples
# 2) Convert to TPM and log(TPM+1)
# 3) Convert Ensembl gene IDs to symbol
# 4) Convert gene symbol counts to TPM
# Functions ####
# Convert a dataframe of TPM into log(tpm+1)
TPMTologTpm <- function(tpm) {
for(i in c(1:ncol(tpm))) { tpm[,i] <- log(tpm[,i]+1) }
return(tpm)
}
# Main code ####
# 0. Prepare environment
setwd("~/Documents/EPICC/Data/Expression");library(data.table)
library(dplyr);'%ni%' <- Negate('%in%')
# 1. Load data and reformat ready for normalisation ####
# Load gene count matrix from 4.2.1.3
EPICC <- as.data.frame(fread('ProcessedCounts/All_EPICC_counts.txt'))
# Load pre-compiled gene length data
load(file="outputforNormalisation.RData");alllens <- as.data.frame(output)
alllens$GeneID <- row.names(alllens);row.names(alllens) <- c(1:nrow(alllens));alllens <- alllens[,c(3,1)]
alllens <- alllens[-grep("PAR_Y",alllens$GeneID),]
alllens$GeneID <- gsub('(ENSG\\d+)\\.\\d+','\\1',alllens$GeneID)
alllens <- alllens[order(alllens$GeneID),]
# Merge data together
EPICC <- merge(EPICC,alllens,by='GeneID')
# 2. Convert raw gene counts to TPM and log(TPM+1) ####
# a. Normalise for gene length: gene counts / gene length (in kb)
normEPICC <- EPICC[,grep('C',colnames(EPICC))];row.names(normEPICC) <- EPICC$GeneID
for(i in c(1:ncol(normEPICC))) { normEPICC[,i] <- normEPICC[,i]/(EPICC$Length/1000) }
# b. Normalise for sequencing depth: sum the normalised gene counts and divide by a million
# then divide each normalised gene count by that sample's scaling factor
for(i in c(1:ncol(normEPICC))) {
sfactor <- sum(normEPICC[,i])/1000000
normEPICC[,i] <- normEPICC[,i]/sfactor
}
epiccTPM <- normEPICC
# TPM to log(TPM+1)
epicclogTPM <- TPMTologTpm(epiccTPM)
# Output TPM and log(TPM+1) files
# TPM
epiccTPM$GeneID <- row.names(epiccTPM);epiccTPM <- epiccTPM[,c(ncol(epiccTPM),1:(ncol(epiccTPM)-1))]
write.table(epiccTPM,"~/Documents/EPICC/Data/Expression/ProcessedCounts/All_EPICC_tpm.txt",sep='\t',quote=F,row.names = F)
# logTPM
epicclogTPM$GeneID <- row.names(epicclogTPM);epicclogTPM <- epicclogTPM[,c(ncol(epicclogTPM),1:(ncol(epicclogTPM)-1))]
write.table(epicclogTPM,"~/Documents/EPICC/Data/Expression/ProcessedCounts/All_EPICC_logtpm.txt",sep='\t',quote=F,row.names = F)
# 3. Convert Ensembl raw gene counts to gene symbols ####
# Load in data mapping ensembl gene IDs to gene symbols
geneinfo <- read.table("~/Documents/EPICC/Data/Expression/compiledGeneInfo.txt",header=T)
counts <- as.data.frame(fread("~/Documents/EPICC/Data/Expression/ProcessedCounts/All_EPICC_counts.txt"))
merged <- merge(geneinfo,counts,by='GeneID');merged <- merge(merged,alllens,by='GeneID')
merged <- merged[,c('GeneID','Name','Length',colnames(merged)[grep('C\\d\\d\\d',colnames(merged))])]
# Assess duplicate gene names
alldups <- unique(merged[which(duplicated(merged$Name)),'Name']);epiccTMP <- merged
merged <- merged[which(merged$Name %ni% alldups),]
for(i in c(1:length(alldups))) {
dupped <- epiccTMP[which(epiccTMP$Name==alldups[i]),]
  counts <- colSums(dupped[,c(4:ncol(merged))]) # sum the sample columns only (skip GeneID, Name and Length)
len <- dupped[which(dupped$Length==max(dupped$Length)),'Length'][1]
merged <- rbind(merged,c('Dupped',alldups[i],len,counts))
}
merged <- merged[,c(2:ncol(merged))];merged <- merged[order(merged$Name),]
for(col in c(2:ncol(merged))) { merged[,col] <- as.integer(merged[,col]) }
# Output
symcounts <- merged;symcounts$GeneID <- symcounts$Name
symcounts <- symcounts[,c(ncol(symcounts),3:(ncol(symcounts)-1))]
write.table(symcounts,"~/Documents/EPICC/Data/Expression/ProcessedCounts/All_EPICC_symbol_counts.txt",sep='\t',quote=F,row.names = F)
# 4. Convert gene symbol counts to TPM and log(TPM+1) ####
# a. Normalise for gene length: do gene counts / gene length (in kb)
symEPICC <- merged[,colnames(merged)[grep('C\\d\\d\\d',colnames(merged))]];row.names(symEPICC) <- merged$Name
for(i in c(1:ncol(symEPICC))) { symEPICC[,i] <- symEPICC[,i]/(merged$Length/1000) }
# b. Normalise for sequencing depth: sum the normalised gene counts and divide by a million
# then divide each normalised gene count by that sample's scaling factor
for(i in c(1:ncol(symEPICC))) {
sfactor <- sum(symEPICC[,i])/1000000
symEPICC[,i] <- symEPICC[,i]/sfactor
}
symTPM <- symEPICC
# TPM to log(TPM+1)
symlogTPM <- TPMTologTpm(symTPM)
# Output TPM and logTPM files
# TPM
symTPM$GeneID <- row.names(symTPM);symTPM <- symTPM[,c(ncol(symTPM),1:(ncol(symTPM)-1))]
write.table(symTPM,"~/Documents/EPICC/Data/Expression/ProcessedCounts/All_EPICC_symbol_tpm.txt",sep='\t',quote=F,row.names = F)
# logTPM
symlogTPM$GeneID <- row.names(symlogTPM);symlogTPM <- symlogTPM[,c(ncol(symlogTPM),1:(ncol(symlogTPM)-1))]
write.table(symlogTPM,"~/Documents/EPICC/Data/Expression/ProcessedCounts/All_EPICC_symbol_logtpm.txt",sep='\t',quote=F,row.names = F)
|
\name{residualspaper}
\alias{residualspaper}
\docType{data}
\title{
Data and Code From JRSS Discussion Paper on Residuals
}
\description{
This dataset contains the point patterns
used as examples in the paper of Baddeley et al (2005).
[Figure 2 is already available in \pkg{spatstat}
as the \code{\link{copper}} dataset.]
R code is also provided to reproduce all
the Figures displayed in Baddeley et al (2005).
The component \code{plotfig}
is a function, which can be called
with a numeric or character argument specifying the Figure or Figures
that should be plotted. See the Examples.
}
\format{
\code{residualspaper} is a list with the following components:
\describe{
\item{Fig1}{
The locations of Japanese pine seedlings and saplings
from Figure 1 of the paper.
A point pattern (object of class \code{"ppp"}).
}
\item{Fig3}{
The Chorley-Ribble data from Figure 3 of the paper.
A list with three components, \code{lung}, \code{larynx}
and \code{incin}. Each is a matrix with 2 columns
giving the coordinates of the lung cancer cases,
larynx cancer cases, and the incinerator, respectively.
Coordinates are Eastings and Northings in km.
}
\item{Fig4a}{
The synthetic dataset in Figure 4 (a) of the paper.
}
\item{Fig4b}{
The synthetic dataset in Figure 4 (b) of the paper.
}
\item{Fig4c}{
The synthetic dataset in Figure 4 (c) of the paper.
}
\item{Fig11}{
The covariate displayed in Figure 11. A pixel image (object of
class \code{"im"}) whose pixel values are distances to the
nearest line segment in the \code{copper} data.
}
\item{plotfig}{A function which will compute and plot
any of the Figures from the paper. The argument of
\code{plotfig} is either a numeric vector or a character vector,
specifying the Figure or Figures to be plotted. See the Examples.
}
}
}
\usage{data(residualspaper)}
\examples{
\dontrun{
data(residualspaper)
X <- residualspaper$Fig4a
summary(X)
plot(X)
# reproduce all Figures
residualspaper$plotfig()
# reproduce Figures 1 to 10
residualspaper$plotfig(1:10)
# reproduce Figure 7 (a)
residualspaper$plotfig("7a")
}
}
\source{
Figure 1: Prof M. Numata. Data kindly supplied by Professor Y. Ogata
with kind permission of Prof M. Tanemura.
Figure 3: Professor P.J. Diggle (rescaled by \adrian)
Figure 4 (a,b,c): \adrian
}
\references{
Baddeley, A., Turner, R., \ifelse{latex}{\out{M\o ller}}{Moller}, J. and Hazelton, M. (2005)
Residual analysis for spatial point processes.
\emph{Journal of the Royal Statistical Society, Series B}
\bold{67}, 617--666.
}
\keyword{datasets}
\keyword{spatial}
\keyword{models}
|
/man/residualspaper.Rd
|
no_license
|
h32049/spatstat
|
R
| false | false | 2,787 |
rd
|
\name{residualspaper}
\alias{residualspaper}
\docType{data}
\title{
Data and Code From JRSS Discussion Paper on Residuals
}
\description{
This dataset contains the point patterns
used as examples in the paper of Baddeley et al (2005).
[Figure 2 is already available in \pkg{spatstat}
as the \code{\link{copper}} dataset.]
R code is also provided to reproduce all
the Figures displayed in Baddeley et al (2005).
The component \code{plotfig}
is a function, which can be called
with a numeric or character argument specifying the Figure or Figures
that should be plotted. See the Examples.
}
\format{
\code{residualspaper} is a list with the following components:
\describe{
\item{Fig1}{
The locations of Japanese pine seedlings and saplings
from Figure 1 of the paper.
A point pattern (object of class \code{"ppp"}).
}
\item{Fig3}{
The Chorley-Ribble data from Figure 3 of the paper.
A list with three components, \code{lung}, \code{larynx}
and \code{incin}. Each is a matrix with 2 columns
giving the coordinates of the lung cancer cases,
larynx cancer cases, and the incinerator, respectively.
Coordinates are Eastings and Northings in km.
}
\item{Fig4a}{
The synthetic dataset in Figure 4 (a) of the paper.
}
\item{Fig4b}{
The synthetic dataset in Figure 4 (b) of the paper.
}
\item{Fig4c}{
The synthetic dataset in Figure 4 (c) of the paper.
}
\item{Fig11}{
The covariate displayed in Figure 11. A pixel image (object of
class \code{"im"}) whose pixel values are distances to the
nearest line segment in the \code{copper} data.
}
\item{plotfig}{A function which will compute and plot
any of the Figures from the paper. The argument of
\code{plotfig} is either a numeric vector or a character vector,
specifying the Figure or Figures to be plotted. See the Examples.
}
}
}
\usage{data(residualspaper)}
\examples{
\dontrun{
data(residualspaper)
X <- residualspaper$Fig4a
summary(X)
plot(X)
# reproduce all Figures
residualspaper$plotfig()
# reproduce Figures 1 to 10
residualspaper$plotfig(1:10)
# reproduce Figure 7 (a)
residualspaper$plotfig("7a")
}
}
\source{
Figure 1: Prof M. Numata. Data kindly supplied by Professor Y. Ogata
with kind permission of Prof M. Tanemura.
Figure 3: Professor P.J. Diggle (rescaled by \adrian)
Figure 4 (a,b,c): \adrian
}
\references{
Baddeley, A., Turner, R., \ifelse{latex}{\out{M\o ller}}{Moller}, J. and Hazelton, M. (2005)
Residual analysis for spatial point processes.
\emph{Journal of the Royal Statistical Society, Series B}
\bold{67}, 617--666.
}
\keyword{datasets}
\keyword{spatial}
\keyword{models}
|
# Decision Tree Model
# Importing the dataset
dataset = read.csv('Position_Salaries.csv')
dataset = dataset[2:3]
# Splitting the dataset into the Training set and Test set
# # install.packages('caTools')
# library(caTools)
# set.seed(123)
# split = sample.split(dataset$Salary, SplitRatio = 2/3)
# training_set = subset(dataset, split == TRUE)
# test_set = subset(dataset, split == FALSE)
# Feature Scaling
# training_set = scale(training_set)
# test_set = scale(test_set)
# Fitting the Regression Model to the dataset
# Create your regressor here
#install.packages('rpart')
library(rpart)
# no feature scaling is required for this model: Decision Trees do not rely on Euclidean distance
regressor = rpart(formula = Salary~ .,
data = dataset,
control = rpart.control(minsplit = 1)) # set conditions of split
# Predicting a new result
y_pred = predict(regressor, data.frame(Level = 6.5))
# Visualising the Regression Model results
# install.packages('ggplot2')
library(ggplot2)
ggplot() +
geom_point(aes(x = dataset$Level, y = dataset$Salary),
colour = 'red') +
geom_line(aes(x = dataset$Level, y = predict(regressor, newdata = dataset)),
colour = 'blue') +
ggtitle('Truth or Bluff (Decision Tree Model)') +
xlab('Level') +
ylab('Salary')
# Visualising the Regression Model results (for higher resolution and smoother curve)
# install.packages('ggplot2')
library(ggplot2)
x_grid = seq(min(dataset$Level), max(dataset$Level), 0.01)
ggplot() +
geom_point(aes(x = dataset$Level, y = dataset$Salary),
colour = 'red') +
geom_line(aes(x = x_grid, y = predict(regressor, newdata = data.frame(Level = x_grid))),
colour = 'blue') +
ggtitle('Truth or Bluff ( Decision Tree Regression Model)') +
xlab('Level') +
ylab('Salary')
|
/P2_Regression——回归分析/Decision Tree.R
|
no_license
|
ningningliu/Machine-Learning-in-Data-Science
|
R
| false | false | 1,808 |
r
|
# Decision Tree Model
# Importing the dataset
dataset = read.csv('Position_Salaries.csv')
dataset = dataset[2:3]
# Splitting the dataset into the Training set and Test set
# # install.packages('caTools')
# library(caTools)
# set.seed(123)
# split = sample.split(dataset$Salary, SplitRatio = 2/3)
# training_set = subset(dataset, split == TRUE)
# test_set = subset(dataset, split == FALSE)
# Feature Scaling
# training_set = scale(training_set)
# test_set = scale(test_set)
# Fitting the Regression Model to the dataset
# Create your regressor here
#install.packages('rpart')
library(rpart)
# no feature scaling is required for this model: Decision Trees do not rely on Euclidean distance
regressor = rpart(formula = Salary~ .,
data = dataset,
control = rpart.control(minsplit = 1)) # set conditions of split
# Predicting a new result
y_pred = predict(regressor, data.frame(Level = 6.5))
# Visualising the Regression Model results
# install.packages('ggplot2')
library(ggplot2)
ggplot() +
geom_point(aes(x = dataset$Level, y = dataset$Salary),
colour = 'red') +
geom_line(aes(x = dataset$Level, y = predict(regressor, newdata = dataset)),
colour = 'blue') +
ggtitle('Truth or Bluff (Decision Tree Model)') +
xlab('Level') +
ylab('Salary')
# Visualising the Regression Model results (for higher resolution and smoother curve)
# install.packages('ggplot2')
library(ggplot2)
x_grid = seq(min(dataset$Level), max(dataset$Level), 0.01)
ggplot() +
geom_point(aes(x = dataset$Level, y = dataset$Salary),
colour = 'red') +
geom_line(aes(x = x_grid, y = predict(regressor, newdata = data.frame(Level = x_grid))),
colour = 'blue') +
ggtitle('Truth or Bluff ( Decision Tree Regression Model)') +
xlab('Level') +
ylab('Salary')
|
library(mvtnorm)
library(OpenMx)
set.seed(1)
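# Build a block-diagonal 12x12 covariance from three independent 4x4 Wishart blocks;
# shrinking maxOrdinalPerBlock below should only change the omxMnor result within the stated tolerances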
cov <- matrix(0, 12, 12)
cov[1:4,1:4] <- rWishart(1, 4, diag(4))[,,1]
cov[5:8,5:8] <- rWishart(1, 4, diag(4))[,,1]
cov[9:12,9:12] <- rWishart(1, 4, diag(4))[,,1]
mean <- rnorm(12, sd=sqrt(diag(cov)))
mxOption(NULL, "maxOrdinalPerBlock", 12)
lk1 <- omxMnor(cov, mean, matrix(-1, 12, 1), matrix(1, 12, 1))
omxCheckCloseEnough(lk1, 1.41528651675062e-05, 1e-7)
mxOption(NULL, "maxOrdinalPerBlock", 4)
lk2 <- omxMnor(cov, mean, matrix(-1, 12, 1), matrix(1, 12, 1))
omxCheckCloseEnough(lk1, lk2, 1e-7)
mxOption(NULL, "maxOrdinalPerBlock", 3)
lk3 <- omxMnor(cov, mean, matrix(-1, 12, 1), matrix(1, 12, 1))
omxCheckTrue(lk1 != lk3)
omxCheckCloseEnough(lk1, lk3, 5e-6)
mxOption(NULL, "maxOrdinalPerBlock", 2)
lk4 <- omxMnor(cov, mean, matrix(-1, 12, 1), matrix(1, 12, 1))
omxCheckTrue(lk1 != lk4)
omxCheckCloseEnough(lk1, lk4, 1e-5)
mxOption(NULL, "maxOrdinalPerBlock", 1)
lk5 <- omxMnor(cov, mean, matrix(-1, 12, 1), matrix(1, 12, 1))
omxCheckTrue(lk1 != lk5)
omxCheckCloseEnough(lk1, lk5, 1e-4)
# ----------------
cov <- diag(rlnorm(2))
mean <- matrix(runif(2), 2, 1)
mxOption(NULL, "maxOrdinalPerBlock", 2)
lk1 <- omxMnor(cov, mean, matrix(c(-1,-Inf), 2, 1), matrix(c(Inf,1), 2, 1))
omxCheckCloseEnough(lk1,
pmvnorm(lower=c(-1,-Inf), upper=c(Inf,1),
mean=c(mean), sigma=cov))
mxOption(NULL, "maxOrdinalPerBlock", 1)
lk2 <- omxMnor(cov, mean, matrix(c(-1,-Inf), 2, 1),
matrix(c(Inf,1), 2, 1))
omxCheckCloseEnough(lk1, lk2)
omxCheckEquals(omxMnor(cov, mean,
matrix(c(-Inf,-Inf), 2, 1),
matrix(c(Inf,Inf), 2, 1)), 1.0)
# ----------------
blocks <- 10
perBlock <- 5
cov <- matrix(0, blocks*perBlock, blocks*perBlock)
for (bl in 1:blocks) {
ind <- seq(1+(bl-1)*perBlock, bl*perBlock)
cov[ind, ind] <- rWishart(1, perBlock*2, diag(perBlock))[,,1]
}
mean <- rnorm(nrow(cov), sd=sqrt(diag(cov)))
mxOption(NULL, "maxOrdinalPerBlock", 12)
lk1 <- omxMnor(cov, mean,
matrix(-1, blocks*perBlock, 1),
matrix(1, blocks*perBlock, 1))
omxCheckCloseEnough(log(lk1), -115.15, .1)
|
/inst/models/passing/omxMnor.R
|
no_license
|
Ewan-Keith/OpenMx
|
R
| false | false | 2,163 |
r
|
library(mvtnorm)
library(OpenMx)
set.seed(1)
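# Build a block-diagonal 12x12 covariance from three independent 4x4 Wishart blocks;
# shrinking maxOrdinalPerBlock below should only change the omxMnor result within the stated tolerances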
cov <- matrix(0, 12, 12)
cov[1:4,1:4] <- rWishart(1, 4, diag(4))[,,1]
cov[5:8,5:8] <- rWishart(1, 4, diag(4))[,,1]
cov[9:12,9:12] <- rWishart(1, 4, diag(4))[,,1]
mean <- rnorm(12, sd=sqrt(diag(cov)))
mxOption(NULL, "maxOrdinalPerBlock", 12)
lk1 <- omxMnor(cov, mean, matrix(-1, 12, 1), matrix(1, 12, 1))
omxCheckCloseEnough(lk1, 1.41528651675062e-05, 1e-7)
mxOption(NULL, "maxOrdinalPerBlock", 4)
lk2 <- omxMnor(cov, mean, matrix(-1, 12, 1), matrix(1, 12, 1))
omxCheckCloseEnough(lk1, lk2, 1e-7)
mxOption(NULL, "maxOrdinalPerBlock", 3)
lk3 <- omxMnor(cov, mean, matrix(-1, 12, 1), matrix(1, 12, 1))
omxCheckTrue(lk1 != lk3)
omxCheckCloseEnough(lk1, lk3, 5e-6)
mxOption(NULL, "maxOrdinalPerBlock", 2)
lk4 <- omxMnor(cov, mean, matrix(-1, 12, 1), matrix(1, 12, 1))
omxCheckTrue(lk1 != lk4)
omxCheckCloseEnough(lk1, lk4, 1e-5)
mxOption(NULL, "maxOrdinalPerBlock", 1)
lk5 <- omxMnor(cov, mean, matrix(-1, 12, 1), matrix(1, 12, 1))
omxCheckTrue(lk1 != lk5)
omxCheckCloseEnough(lk1, lk5, 1e-4)
# ----------------
cov <- diag(rlnorm(2))
mean <- matrix(runif(2), 2, 1)
mxOption(NULL, "maxOrdinalPerBlock", 2)
lk1 <- omxMnor(cov, mean, matrix(c(-1,-Inf), 2, 1), matrix(c(Inf,1), 2, 1))
omxCheckCloseEnough(lk1,
pmvnorm(lower=c(-1,-Inf), upper=c(Inf,1),
mean=c(mean), sigma=cov))
mxOption(NULL, "maxOrdinalPerBlock", 1)
lk2 <- omxMnor(cov, mean, matrix(c(-1,-Inf), 2, 1),
matrix(c(Inf,1), 2, 1))
omxCheckCloseEnough(lk1, lk2)
omxCheckEquals(omxMnor(cov, mean,
matrix(c(-Inf,-Inf), 2, 1),
matrix(c(Inf,Inf), 2, 1)), 1.0)
# ----------------
blocks <- 10
perBlock <- 5
cov <- matrix(0, blocks*perBlock, blocks*perBlock)
for (bl in 1:blocks) {
ind <- seq(1+(bl-1)*perBlock, bl*perBlock)
cov[ind, ind] <- rWishart(1, perBlock*2, diag(perBlock))[,,1]
}
mean <- rnorm(nrow(cov), sd=sqrt(diag(cov)))
mxOption(NULL, "maxOrdinalPerBlock", 12)
lk1 <- omxMnor(cov, mean,
matrix(-1, blocks*perBlock, 1),
matrix(1, blocks*perBlock, 1))
omxCheckCloseEnough(log(lk1), -115.15, .1)
|
setwd("C:/Adatok/coursera_edX/4_Exploratory Data analysis/Quizes_Assignments/Assignment2")
unzip("exdata-data-NEI_data.zip")
NEI <- readRDS("summarySCC_PM25.rds")
SCC <- readRDS("Source_Classification_Code.rds")
library(plyr)
d<-ddply(NEI,.(year),summarise, sum = sum(Emissions)) #returns a dataframe with the year and the sum(Emissions)
#Plot1
png("plot1.png", width= 480, height= 480)
par(bg="thistle1", pch=19, mar= c(5.1, 4.1, 4.1, 2.1))
plot(x= d$year, y= d$sum, col= "blue", xlab="Year", ylab= "Total emission of PM2.5 [tonnes]",
main= "Total emission of PM2.5 from all sources in the USA")
dev.off()
|
/plot1.R
|
no_license
|
Enoana/datasciencecoursera
|
R
| false | false | 620 |
r
|
setwd("C:/Adatok/coursera_edX/4_Exploratory Data analysis/Quizes_Assignments/Assignment2")
unzip("exdata-data-NEI_data.zip")
NEI <- readRDS("summarySCC_PM25.rds")
SCC <- readRDS("Source_Classification_Code.rds")
library(plyr)
d<-ddply(NEI,.(year),summarise, sum = sum(Emissions)) #returns a dataframe with the year and the sum(Emissions)
#Plot1
png("plot1.png", width= 480, height= 480)
par(bg="thistle1", pch=19, mar= c(5.1, 4.1, 4.1, 2.1))
plot(x= d$year, y= d$sum, col= "blue", xlab="Year", ylab= "Total emission of PM2.5 [tonnes]",
main= "Total emission of PM2.5 from all sources in the USA")
dev.off()
|
outFile = commandArgs(trailingOnly = TRUE)[1]
## setwd("~/rvtests/regression/test/")
source("ScoreTest.R")
set.seed(0)
n = 10000
x = rnorm(n)
y = rbinom(n, 1, 1/ ( 1 + exp( - (1 + 0.5 *x))))
# output x, y
X = cbind(rep(1,n), x)
write.table(file = "input.x", X, row.names = F, col.names =F)
write.table(file = "input.y", y, row.names = F, col.names =F)
# wald
ret = glm(y~x, family="binomial")
summary(ret)
beta = coef(ret)
v = vcov(ret)
p.wald = coef(summary(ret))[2,4]
conn = file(outFile, "w")
cat("wald_beta\t", file = conn)
cat(beta, file = conn, append = TRUE)
cat("\n", file = conn)
cat("wald_vcov\t", file = conn)
cat(v, file = conn, append = TRUE)
cat("\n", file = conn)
cat("wald_p\t", file = conn)
cat(p.wald, file = conn, append = TRUE)
cat("\n", file = conn)
#permutation
beta.real = coef(glm(y~x, family = "binomial"))[2]
permutated_beta<-function(a){
y.sample = sample(y)
coef(glm(y.sample~x, family = "binomial"))[2]
}
beta.perm = sapply(seq(1000), permutated_beta)
p.perm = sum(abs(beta.perm) >= abs(beta.real)) / length(beta.perm)
p.perm
cat("permutation_p\t", file = conn)
cat(p.perm, file = conn, append = TRUE)
cat("\n", file = conn)
#p.score = linear.score(Xcol=x, Y=y)$pvalue
p.score = logistic.score(Xcol=x, Y=y)$pvalue
cat("score_p\t", file = conn)
cat(p.score, file = conn, append = TRUE)
cat("\n", file = conn)
close(conn)
|
/regression/test/testLogisticRegression.R
|
no_license
|
Shicheng-Guo/rvtests
|
R
| false | false | 1,369 |
r
|
outFile = commandArgs(trailingOnly = TRUE)[1]
## setwd("~/rvtests/regression/test/")
source("ScoreTest.R")
set.seed(0)
n = 10000
x = rnorm(n)
y = rbinom(n, 1, 1/ ( 1 + exp( - (1 + 0.5 *x))))
# output x, y
X = cbind(rep(1,n), x)
write.table(file = "input.x", X, row.names = F, col.names =F)
write.table(file = "input.y", y, row.names = F, col.names =F)
# wald
ret = glm(y~x, family="binomial")
summary(ret)
beta = coef(ret)
v = vcov(ret)
p.wald = coef(summary(ret))[2,4]
conn = file(outFile, "w")
cat("wald_beta\t", file = conn)
cat(beta, file = conn, append = TRUE)
cat("\n", file = conn)
cat("wald_vcov\t", file = conn)
cat(v, file = conn, append = TRUE)
cat("\n", file = conn)
cat("wald_p\t", file = conn)
cat(p.wald, file = conn, append = TRUE)
cat("\n", file = conn)
#permutation
beta.real = coef(glm(y~x, family = "binomial"))[2]
permutated_beta<-function(a){
y.sample = sample(y)
coef(glm(y.sample~x, family = "binomial"))[2]
}
beta.perm = sapply(seq(1000), permutated_beta)
p.perm = sum(abs(beta.perm) >= abs(beta.real)) / length(beta.perm)
p.perm
cat("permutation_p\t", file = conn)
cat(p.perm, file = conn, append = TRUE)
cat("\n", file = conn)
#p.score = linear.score(Xcol=x, Y=y)$pvalue
p.score = logistic.score(Xcol=x, Y=y)$pvalue
cat("score_p\t", file = conn)
cat(p.score, file = conn, append = TRUE)
cat("\n", file = conn)
close(conn)
|
# Combine smolt, covariate and side stream datasets for the time series
source("00-Functions/packages-and-paths.R")
source("01-Data/data-smolts.R")
source("01-Data/data-covariates.R")
source("01-Data/data-sides.R")
dat0216_all<-as_tibble(dat_smolts_0216)%>%
full_join(dat_flow_0216, by=NULL)%>%
full_join(dat_temp_0216, by=NULL)%>%
mutate(date = as_date(paste(Year, Month, Day)))%>%
left_join(., wttr_0216, by = "date")
# Use function smdwrg_m to collect annual data from 2017->
data17 <- smdwrg_m(nls17, wtemp17, disc_all, wttr17)
data18 <- smdwrg_m(nls18, wtemp18, disc_all, wttr18)
data19 <- smdwrg_m(nls19, wtemp19, disc_all, wttr19)
data20 <- smdwrg_m(nls20, wtemp20, disc_all, wttr20)
data21 <- smdwrg_m(nls21, wtemp21, disc_all, wttr21)
dat1721_all <-bind_rows(data17[[2]], data18[[2]])%>%
bind_rows(data19[[2]]) %>%
bind_rows(data20[[2]]) %>%
bind_rows(data21[[2]])#%>%
#mutate(date=as.Date(date)) # Don't use, this messes the dates for some strange reason!
# COMBINE smolt and covariate data from 2002-2016 and 2017->
data0221 <- full_join(dat0216_all, dat1721_all)
# Set schools if smolts== 0 or 1
data0221 <- data0221 %>% mutate(
schools = if_else(smolts==0, 0.001, schools),
schools = if_else(smolts==1, 1, schools)
)
dat<-full_join(data0221,side_east)%>%# Combine with side stream data
full_join(side_west)%>%
select(-humi, -wind, -press)
dat_m <- left_join(dat, tempsum %>% select(date,tempSum30), by = "date")
df0221<-s_dat_jags(dat_m, years, n_days)
saveRDS(df0221, file="01-Data/df0221.RDS")
saveRDS(dat_m, file="01-Data/dat0221.RDS")
#View(data0221%>%filter(Year==2018))
#View(dat1721_all%>%filter(Year==2018))
#View(dat_m%>%filter(Year==2018))
# View(dat)
# View(dat%>%filter(is.na(side_east)==F |is.na(side_west)==F))
# View(dat%>%filter(Year==2002))
|
/01-Data/data-combine.R
|
permissive
|
hennip/Utsjoki-smolts
|
R
| false | false | 1,821 |
r
|
# Combine smolt, covariate and side stream datasets for the time series
source("00-Functions/packages-and-paths.R")
source("01-Data/data-smolts.R")
source("01-Data/data-covariates.R")
source("01-Data/data-sides.R")
dat0216_all<-as_tibble(dat_smolts_0216)%>%
full_join(dat_flow_0216, by=NULL)%>%
full_join(dat_temp_0216, by=NULL)%>%
mutate(date = as_date(paste(Year, Month, Day)))%>%
left_join(., wttr_0216, by = "date")
# Use function smdwrg_m to collect annual data from 2017->
data17 <- smdwrg_m(nls17, wtemp17, disc_all, wttr17)
data18 <- smdwrg_m(nls18, wtemp18, disc_all, wttr18)
data19 <- smdwrg_m(nls19, wtemp19, disc_all, wttr19)
data20 <- smdwrg_m(nls20, wtemp20, disc_all, wttr20)
data21 <- smdwrg_m(nls21, wtemp21, disc_all, wttr21)
dat1721_all <-bind_rows(data17[[2]], data18[[2]])%>%
bind_rows(data19[[2]]) %>%
bind_rows(data20[[2]]) %>%
bind_rows(data21[[2]])#%>%
#mutate(date=as.Date(date)) # Don't use, this messes the dates for some strange reason!
# COMBINE smolt and covariate data from 2002-2016 and 2017->
data0221 <- full_join(dat0216_all, dat1721_all)
# Set schools if smolts== 0 or 1
data0221 <- data0221 %>% mutate(
schools = if_else(smolts==0, 0.001, schools),
schools = if_else(smolts==1, 1, schools)
)
dat<-full_join(data0221,side_east)%>%# Combine with side stream data
full_join(side_west)%>%
select(-humi, -wind, -press)
dat_m <- left_join(dat, tempsum %>% select(date,tempSum30), by = "date")
df0221<-s_dat_jags(dat_m, years, n_days)
saveRDS(df0221, file="01-Data/df0221.RDS")
saveRDS(dat_m, file="01-Data/dat0221.RDS")
#View(data0221%>%filter(Year==2018))
#View(dat1721_all%>%filter(Year==2018))
#View(dat_m%>%filter(Year==2018))
# View(dat)
# View(dat%>%filter(is.na(side_east)==F |is.na(side_west)==F))
# View(dat%>%filter(Year==2002))
|
library(ggplot2)
rm(list = ls(all = TRUE))
gc()
setwd("d:/coursera/Course-Project-2")
NEI <- readRDS("summarySCC_PM25.rds")
SCC <- readRDS("Source_Classification_Code.rds")
baltimoreNEI <- subset(NEI, NEI$fips == "24510")
baltimore <- aggregate(Emissions ~ year + type, baltimoreNEI, sum)
ggp <- ggplot(baltimoreNEI,aes(factor(year),Emissions,fill=type)) +
geom_bar(stat="identity") +
theme_bw() + guides(fill=FALSE)+
facet_grid(.~type,scales = "free",space="free") +
labs(x="Year", y=expression("Total emission of PM2.5 (tons)")) +
labs(title=expression("PM2.5 Emissions in the Baltimore City, Maryland by Source Type"))
print(ggp)
dev.copy(png, file = "plot3.png", width = 480, height = 480)
dev.off()
|
/Plot3.R
|
no_license
|
ivkrasnikov/Course-Project-2
|
R
| false | false | 720 |
r
|
library(ggplot2)
rm(list = ls(all = TRUE))
gc()
setwd("d:/coursera/Course-Project-2")
NEI <- readRDS("summarySCC_PM25.rds")
SCC <- readRDS("Source_Classification_Code.rds")
baltimoreNEI <- subset(NEI, NEI$fips == "24510")
baltimore <- aggregate(Emissions ~ year + type, baltimoreNEI, sum)
ggp <- ggplot(baltimoreNEI,aes(factor(year),Emissions,fill=type)) +
geom_bar(stat="identity") +
theme_bw() + guides(fill=FALSE)+
facet_grid(.~type,scales = "free",space="free") +
labs(x="Year", y=expression("Total emission of PM2.5 (tons)")) +
labs(title=expression("PM2.5 Emissions in the Baltimore City, Maryland by Source Type"))
print(ggp)
dev.copy(png, file = "plot3.png", width = 480, height = 480)
dev.off()
|
g.triSS <- function(params, respvec, VC, TIn){
#####
# ! #
########################################################################
## I replaced TIn$eta1 with TIn$mar1. Same for TIn$eta2 and TIn$eta3 ##
########################################################################
mean1 <- TIn$theta12 * TIn$mar1
mean2 <- TIn$theta13 * TIn$mar1
mean3 <- TIn$theta12 * TIn$mar2
mean4 <- TIn$theta23 * TIn$mar2
mean5 <- TIn$theta13 * TIn$mar3
mean6 <- TIn$theta23 * TIn$mar3
################################################################
var1 <- 1 - TIn$theta12^2
var2 <- 1 - TIn$theta13^2
var3 <- 1 - TIn$theta23^2
cov1 <- TIn$theta23 - TIn$theta12 * TIn$theta13
cov2 <- TIn$theta13 - TIn$theta12 * TIn$theta23
cov3 <- TIn$theta12 - TIn$theta13 * TIn$theta23
cov1 <- mmf(cov1, max.pr = VC$max.pr)
cov2 <- mmf(cov2, max.pr = VC$max.pr)
cov3 <- mmf(cov3, max.pr = VC$max.pr)
#####
# ! #
########################################################################
## I replaced TIn$eta1 with TIn$mar1. Same for TIn$eta2 and TIn$eta3 ##
########################################################################
d.1 <- dnorm(TIn$mar1)
d.2 <- dnorm(TIn$mar2)
d.3 <- dnorm(TIn$mar3)
p.1.11 <- mm(pbinorm( TIn$mar2[VC$inde2.1], TIn$mar3, mean1 = mean1[VC$inde2], mean2 = mean2[VC$inde2], var1 = var1, var2 = var2, cov12 = cov1) , min.pr = VC$min.pr, max.pr = VC$max.pr)
p.1.10 <- mm(pbinorm( TIn$mar2[VC$inde2.1], -TIn$mar3, mean1 = mean1[VC$inde2], mean2 = -mean2[VC$inde2], var1 = var1, var2 = var2, cov12 = -cov1) , min.pr = VC$min.pr, max.pr = VC$max.pr)
p.2.11 <- mm(pbinorm( TIn$mar1[VC$inde2], TIn$mar3, mean1 = mean3[VC$inde2.1], mean2 = mean4[VC$inde2.1], var1 = var1, var2 = var3, cov12 = cov2), min.pr = VC$min.pr, max.pr = VC$max.pr )
p.2.10 <- mm(pbinorm( TIn$mar1[VC$inde2], -TIn$mar3, mean1 = mean3[VC$inde2.1], mean2 = -mean4[VC$inde2.1], var1 = var1, var2 = var3, cov12 = -cov2), min.pr = VC$min.pr, max.pr = VC$max.pr )
p.3.11 <- mm(pbinorm( TIn$mar1[VC$inde2], TIn$mar2[VC$inde2.1], mean1 = mean5, mean2 = mean6, var1 = var2, var2 = var3, cov12 = cov3), min.pr = VC$min.pr, max.pr = VC$max.pr )
p.3.10 <- mm(pbinorm( TIn$mar1[VC$inde2], -TIn$mar2[VC$inde2.1], mean1 = mean5, mean2 = -mean6, var1 = var2, var2 = var3, cov12 = -cov3) , min.pr = VC$min.pr, max.pr = VC$max.pr)
upst.1 <- mm( pnorm( (TIn$mar2 - TIn$theta12 * TIn$mar1[VC$inde1])/sqrt(1 - TIn$theta12^2)) , min.pr = VC$min.pr, max.pr = VC$max.pr)
upst.2 <- mm( pnorm( (TIn$mar1[VC$inde1] - TIn$theta12 * TIn$mar2 )/sqrt(1 - TIn$theta12^2)) , min.pr = VC$min.pr, max.pr = VC$max.pr)
################################################################################################
#####
# ! #
###############################
## The next 6 lines are new ##
###############################
dmar1 <- probm(TIn$eta1, VC$margins[1], only.pr = FALSE, min.dn = VC$min.dn, min.pr = VC$min.pr, max.pr = VC$max.pr)$d.n
dmar2 <- probm(TIn$eta2, VC$margins[2], only.pr = FALSE, min.dn = VC$min.dn, min.pr = VC$min.pr, max.pr = VC$max.pr)$d.n
dmar3 <- probm(TIn$eta3, VC$margins[3], only.pr = FALSE, min.dn = VC$min.dn, min.pr = VC$min.pr, max.pr = VC$max.pr)$d.n
dF1.de1 <- (1/d.1) * dmar1
dF2.de2 <- (1/d.2) * dmar2
dF3.de3 <- (1/d.3) * dmar3
###################################################################
dl.dF1.1 <- - respvec$cy1/TIn$p0 * d.1
dl.dF1.1[VC$inde1] <- respvec$y1.cy2/TIn$p10 * (d.1[VC$inde1] - d.1[VC$inde1] * upst.1)
dl.dF1.1[VC$inde2] <- respvec$y1.y2.cy3/TIn$p110 * d.1[VC$inde2] * p.1.10 +
respvec$y1.y2.y3/TIn$p111 * d.1[VC$inde2] * p.1.11
dl.dF1 <- dl.dF1.1
#####
# ! #
###########################
## The following is new: ##
###########################
dl.de1 <- dl.dF1 * dF1.de1
################################
dl.dF2.1 <- - respvec$y1.cy2/TIn$p10 * d.2 * upst.2
dl.dF2.1[VC$inde2.1] <- respvec$y1.y2.cy3/TIn$p110 * d.2[VC$inde2.1] * p.2.10 +
respvec$y1.y2.y3/TIn$p111 * d.2[VC$inde2.1] * p.2.11
dl.dF2 <- dl.dF2.1
#####
# ! #
###########################
## The following is new: ##
###########################
dl.de2 <- dl.dF2 * dF2.de2
################################
dl.dF3 <- - respvec$y1.y2.cy3/TIn$p110 * d.3 * p.3.11 +
respvec$y1.y2.y3/TIn$p111 * d.3 * p.3.11
#####
# ! #
###########################
## The following is new: ##
###########################
dl.de3 <- dl.dF3 * dF3.de3
################################
#####
# ! #
########################################################################
## I replaced TIn$eta1 with TIn$mar1. Same for TIn$eta2 and TIn$eta3 ##
########################################################################
mean.12 <- ( TIn$mar1[VC$inde1] * (TIn$theta13 - TIn$theta12 * TIn$theta23) + TIn$mar2 * (TIn$theta23 - TIn$theta12 * TIn$theta13) )/( 1 - TIn$theta12^2 )
mean.13 <- ( TIn$mar1[VC$inde2] * (TIn$theta12 - TIn$theta13 * TIn$theta23) + TIn$mar3 * (TIn$theta23 - TIn$theta12 * TIn$theta13) )/( 1 - TIn$theta13^2 )
mean.23 <- ( TIn$mar2[VC$inde2.1] * (TIn$theta12 - TIn$theta13 * TIn$theta23) + TIn$mar3 * (TIn$theta13 - TIn$theta12 * TIn$theta23) )/( 1 - TIn$theta23^2 )
############################################################################################################
deno <- 1 - TIn$theta12^2 - TIn$theta13^2 - TIn$theta23^2 + 2 * TIn$theta12 * TIn$theta13 * TIn$theta23
sd.12 <- sqrt( deno / ( 1 - TIn$theta12^2 ) )
sd.13 <- sqrt( deno / ( 1 - TIn$theta13^2 ) )
sd.23 <- sqrt( deno / ( 1 - TIn$theta23^2 ) )
#####
# ! #
########################################################################
## I replaced TIn$eta1 with TIn$mar1. Same for TIn$eta2 and TIn$eta3 ##
########################################################################
p12.g <- mm( pnorm( (TIn$mar3 - mean.12[VC$inde2.1])/sd.12) , min.pr = VC$min.pr, max.pr = VC$max.pr)
p13.g <- mm( pnorm( (TIn$mar2[VC$inde2.1] - mean.13 )/sd.13) , min.pr = VC$min.pr, max.pr = VC$max.pr)
p23.g <- mm( pnorm( (TIn$mar1[VC$inde2] - mean.23 )/sd.23) , min.pr = VC$min.pr, max.pr = VC$max.pr)
########################################################################
p12.g.c <- mm(1 - p12.g, min.pr = VC$min.pr, max.pr = VC$max.pr)
p13.g.c <- mm(1 - p13.g, min.pr = VC$min.pr, max.pr = VC$max.pr)
p23.g.c <- mm(1 - p23.g, min.pr = VC$min.pr, max.pr = VC$max.pr)
#####
# ! #
########################################################################
## I replaced TIn$eta1 with TIn$mar1. Same for TIn$eta2 and TIn$eta3 ##
########################################################################
d11.12 <- dbinorm( TIn$mar1[VC$inde1] , TIn$mar2, cov12 = TIn$theta12)
d11.13 <- dbinorm( TIn$mar1[VC$inde2] , TIn$mar3, cov12 = TIn$theta13)
d11.23 <- dbinorm( TIn$mar2[VC$inde2.1], TIn$mar3, cov12 = TIn$theta23)
########################################################################
dl.dtheta12.1 <- - respvec$y1.cy2/TIn$p10 * d11.12
dl.dtheta12.1[VC$inde2.1] <- respvec$y1.y2.cy3/TIn$p110 * d11.12[VC$inde2.1] * p12.g.c +
respvec$y1.y2.y3/TIn$p111 * d11.12[VC$inde2.1] * p12.g
dl.dtheta12 <- dl.dtheta12.1
dl.dtheta13 <- - respvec$y1.y2.cy3/TIn$p110 * d11.13 * p13.g +
respvec$y1.y2.y3/TIn$p111 * d11.13 * p13.g
dl.dtheta23 <- - respvec$y1.y2.cy3/TIn$p110 * d11.23 * p23.g +
respvec$y1.y2.y3/TIn$p111 * d11.23 * p23.g
if(VC$Chol == FALSE){
dtheta12.dtheta12.st <- 4 * exp( 2 * TIn$theta12.st )/( exp(2 * TIn$theta12.st) + 1 )^2
dtheta13.dtheta13.st <- 4 * exp( 2 * TIn$theta13.st )/( exp(2 * TIn$theta13.st) + 1 )^2
dtheta23.dtheta23.st <- 4 * exp( 2 * TIn$theta23.st )/( exp(2 * TIn$theta23.st) + 1 )^2
dl.dtheta12.st <- dl.dtheta12 * dtheta12.dtheta12.st
dl.dtheta13.st <- dl.dtheta13 * dtheta13.dtheta13.st
dl.dtheta23.st <- dl.dtheta23 * dtheta23.dtheta23.st
}
if(VC$Chol == TRUE){
dl.dtheta <- matrix(0,length(VC$inde1),3)
dl.dtheta[VC$inde1, 1] <- dl.dtheta12
dl.dtheta[VC$inde2, 2] <- dl.dtheta13
dl.dtheta[VC$inde2, 3] <- dl.dtheta23
dth12.dth12.st <- 1/(1 + TIn$theta12.st^2)^(3/2)
dth12.dth13.st <- 0
dth12.dth23.st <- 0
dth13.dth12.st <- 0
dth13.dth13.st <- (1 + TIn$theta23.st^2)/(1 + TIn$theta13.st^2 + TIn$theta23.st^2)^(3/2)
dth13.dth23.st <- - (TIn$theta13.st * TIn$theta23.st)/(1 + TIn$theta13.st^2 + TIn$theta23.st^2)^(3/2)
dth23.dth12.st <- TIn$theta13.st/sqrt((1 + TIn$theta12.st^2) * (1 + TIn$theta13.st^2 + TIn$theta23.st^2)) - (TIn$theta12.st * (TIn$theta12.st * TIn$theta13.st + TIn$theta23.st))/((1 + TIn$theta12.st^2)^(3/2) * sqrt(1 + TIn$theta13.st^2 + TIn$theta23.st^2))
dth23.dth13.st <- TIn$theta12.st/sqrt((1 + TIn$theta12.st^2) * (1 + TIn$theta13.st^2 + TIn$theta23.st^2)) - (TIn$theta13.st * (TIn$theta12.st * TIn$theta13.st + TIn$theta23.st))/(sqrt(1 + TIn$theta12.st^2) * (1 + TIn$theta13.st^2 + TIn$theta23.st^2)^(3/2))
dth23.dth23.st <- 1/sqrt((1 + TIn$theta12.st^2) * (1 + TIn$theta13.st^2 + TIn$theta23.st^2)) - (TIn$theta23.st * (TIn$theta12.st * TIn$theta13.st + TIn$theta23.st))/(sqrt(1 + TIn$theta12.st^2) * (1 + TIn$theta13.st^2 + TIn$theta23.st^2)^(3/2))
dtheta.theta.st <- matrix( c( dth12.dth12.st, dth13.dth12.st, dth23.dth12.st,
dth12.dth13.st, dth13.dth13.st, dth23.dth13.st,
dth12.dth23.st, dth13.dth23.st, dth23.dth23.st ), 3 , 3)
dl.dtheta.st <- dl.dtheta %*% dtheta.theta.st
dl.dtheta12.st <- dl.dtheta.st[, 1]
dl.dtheta12.st <- dl.dtheta12.st[VC$inde1]
dl.dtheta13.st <- dl.dtheta.st[, 2]
dl.dtheta13.st <- dl.dtheta13.st[VC$inde2]
dl.dtheta23.st <- dl.dtheta.st[, 3]
dl.dtheta23.st <- dl.dtheta23.st[VC$inde2]
}
#####
# ! #
#######################################################
## In GTRIVec: 3rd, 4th, 9th and 10th lines are new ##
#######################################################
GTRIVec <- list(p12.g = p12.g, p13.g = p13.g, p23.g = p23.g,
p12.g.c = p12.g.c, p13.g.c = p13.g.c, p23.g.c = p23.g.c,
d.1 = d.1, d.2 = d.2, d.3 = d.3,
dmar1 = dmar1, dmar2 = dmar2, dmar3 = dmar3,
d11.12 = d11.12, d11.13 = d11.13, d11.23 = d11.23,
p.1.11 = p.1.11, p.1.10 = p.1.10,
p.2.11 = p.2.11, p.2.10 = p.2.10,
p.3.11 = p.3.11, p.3.10 = p.3.10,
dF1.de1 = dF1.de1, dF2.de2 = dF2.de2, dF3.de3 = dF3.de3,
dl.dF1 = dl.dF1, dl.dF2 = dl.dF2, dl.dF3 = dl.dF3,
dl.de1 = VC$weights*dl.de1, dl.de2 = VC$weights[VC$inde1]*dl.de2, dl.de3 = VC$weights[VC$inde2]*dl.de3,
dl.dtheta12.st = VC$weights[VC$inde1]*dl.dtheta12.st, dl.dtheta13.st = VC$weights[VC$inde2]*dl.dtheta13.st,
dl.dtheta23.st = VC$weights[VC$inde2]*dl.dtheta23.st,
mean.12 = mean.12,
mean.13 = mean.13,
mean.23 = mean.23,
sd.12 = sd.12,
sd.13 = sd.13,
sd.23 = sd.23,
upst.1 = upst.1,
upst.2 = upst.2,
dl.dtheta12 =dl.dtheta12, dl.dtheta13 = dl.dtheta13, dl.dtheta23 = dl.dtheta23)
GTRIVec
}
|
/R/g.triSS.r
|
no_license
|
cran/GJRM
|
R
| false | false | 12,159 |
r
|
g.triSS <- function(params, respvec, VC, TIn){
#####
# ! #
########################################################################
## I replaced TIn$eta1 with TIn$mar1. Same for TIn$eta2 and TIn$eta3 ##
########################################################################
mean1 <- TIn$theta12 * TIn$mar1
mean2 <- TIn$theta13 * TIn$mar1
mean3 <- TIn$theta12 * TIn$mar2
mean4 <- TIn$theta23 * TIn$mar2
mean5 <- TIn$theta13 * TIn$mar3
mean6 <- TIn$theta23 * TIn$mar3
################################################################
var1 <- 1 - TIn$theta12^2
var2 <- 1 - TIn$theta13^2
var3 <- 1 - TIn$theta23^2
cov1 <- TIn$theta23 - TIn$theta12 * TIn$theta13
cov2 <- TIn$theta13 - TIn$theta12 * TIn$theta23
cov3 <- TIn$theta12 - TIn$theta13 * TIn$theta23
cov1 <- mmf(cov1, max.pr = VC$max.pr)
cov2 <- mmf(cov2, max.pr = VC$max.pr)
cov3 <- mmf(cov3, max.pr = VC$max.pr)
#####
# ! #
########################################################################
## I replaced TIn$eta1 with TIn$mar1. Same for TIn$eta2 and TIn$eta3 ##
########################################################################
d.1 <- dnorm(TIn$mar1)
d.2 <- dnorm(TIn$mar2)
d.3 <- dnorm(TIn$mar3)
p.1.11 <- mm(pbinorm( TIn$mar2[VC$inde2.1], TIn$mar3, mean1 = mean1[VC$inde2], mean2 = mean2[VC$inde2], var1 = var1, var2 = var2, cov12 = cov1) , min.pr = VC$min.pr, max.pr = VC$max.pr)
p.1.10 <- mm(pbinorm( TIn$mar2[VC$inde2.1], -TIn$mar3, mean1 = mean1[VC$inde2], mean2 = -mean2[VC$inde2], var1 = var1, var2 = var2, cov12 = -cov1) , min.pr = VC$min.pr, max.pr = VC$max.pr)
p.2.11 <- mm(pbinorm( TIn$mar1[VC$inde2], TIn$mar3, mean1 = mean3[VC$inde2.1], mean2 = mean4[VC$inde2.1], var1 = var1, var2 = var3, cov12 = cov2), min.pr = VC$min.pr, max.pr = VC$max.pr )
p.2.10 <- mm(pbinorm( TIn$mar1[VC$inde2], -TIn$mar3, mean1 = mean3[VC$inde2.1], mean2 = -mean4[VC$inde2.1], var1 = var1, var2 = var3, cov12 = -cov2), min.pr = VC$min.pr, max.pr = VC$max.pr )
p.3.11 <- mm(pbinorm( TIn$mar1[VC$inde2], TIn$mar2[VC$inde2.1], mean1 = mean5, mean2 = mean6, var1 = var2, var2 = var3, cov12 = cov3), min.pr = VC$min.pr, max.pr = VC$max.pr )
p.3.10 <- mm(pbinorm( TIn$mar1[VC$inde2], -TIn$mar2[VC$inde2.1], mean1 = mean5, mean2 = -mean6, var1 = var2, var2 = var3, cov12 = -cov3) , min.pr = VC$min.pr, max.pr = VC$max.pr)
upst.1 <- mm( pnorm( (TIn$mar2 - TIn$theta12 * TIn$mar1[VC$inde1])/sqrt(1 - TIn$theta12^2)) , min.pr = VC$min.pr, max.pr = VC$max.pr)
upst.2 <- mm( pnorm( (TIn$mar1[VC$inde1] - TIn$theta12 * TIn$mar2 )/sqrt(1 - TIn$theta12^2)) , min.pr = VC$min.pr, max.pr = VC$max.pr)
################################################################################################
#####
# ! #
###############################
## The next 6 lines are new ##
###############################
dmar1 <- probm(TIn$eta1, VC$margins[1], only.pr = FALSE, min.dn = VC$min.dn, min.pr = VC$min.pr, max.pr = VC$max.pr)$d.n
dmar2 <- probm(TIn$eta2, VC$margins[2], only.pr = FALSE, min.dn = VC$min.dn, min.pr = VC$min.pr, max.pr = VC$max.pr)$d.n
dmar3 <- probm(TIn$eta3, VC$margins[3], only.pr = FALSE, min.dn = VC$min.dn, min.pr = VC$min.pr, max.pr = VC$max.pr)$d.n
dF1.de1 <- (1/d.1) * dmar1
dF2.de2 <- (1/d.2) * dmar2
dF3.de3 <- (1/d.3) * dmar3
###################################################################
dl.dF1.1 <- - respvec$cy1/TIn$p0 * d.1
dl.dF1.1[VC$inde1] <- respvec$y1.cy2/TIn$p10 * (d.1[VC$inde1] - d.1[VC$inde1] * upst.1)
dl.dF1.1[VC$inde2] <- respvec$y1.y2.cy3/TIn$p110 * d.1[VC$inde2] * p.1.10 +
respvec$y1.y2.y3/TIn$p111 * d.1[VC$inde2] * p.1.11
dl.dF1 <- dl.dF1.1
#####
# ! #
###########################
## The following is new: ##
###########################
dl.de1 <- dl.dF1 * dF1.de1
################################
dl.dF2.1 <- - respvec$y1.cy2/TIn$p10 * d.2 * upst.2
dl.dF2.1[VC$inde2.1] <- respvec$y1.y2.cy3/TIn$p110 * d.2[VC$inde2.1] * p.2.10 +
respvec$y1.y2.y3/TIn$p111 * d.2[VC$inde2.1] * p.2.11
dl.dF2 <- dl.dF2.1
#####
# ! #
###########################
## The following is new: ##
###########################
dl.de2 <- dl.dF2 * dF2.de2
################################
dl.dF3 <- - respvec$y1.y2.cy3/TIn$p110 * d.3 * p.3.11 +
respvec$y1.y2.y3/TIn$p111 * d.3 * p.3.11
#####
# ! #
###########################
## The following is new: ##
###########################
dl.de3 <- dl.dF3 * dF3.de3
################################
#####
# ! #
########################################################################
## I replaced TIn$eta1 with TIn$mar1. Same for TIn$eta2 and TIn$eta3 ##
########################################################################
mean.12 <- ( TIn$mar1[VC$inde1] * (TIn$theta13 - TIn$theta12 * TIn$theta23) + TIn$mar2 * (TIn$theta23 - TIn$theta12 * TIn$theta13) )/( 1 - TIn$theta12^2 )
mean.13 <- ( TIn$mar1[VC$inde2] * (TIn$theta12 - TIn$theta13 * TIn$theta23) + TIn$mar3 * (TIn$theta23 - TIn$theta12 * TIn$theta13) )/( 1 - TIn$theta13^2 )
mean.23 <- ( TIn$mar2[VC$inde2.1] * (TIn$theta12 - TIn$theta13 * TIn$theta23) + TIn$mar3 * (TIn$theta13 - TIn$theta12 * TIn$theta23) )/( 1 - TIn$theta23^2 )
############################################################################################################
deno <- 1 - TIn$theta12^2 - TIn$theta13^2 - TIn$theta23^2 + 2 * TIn$theta12 * TIn$theta13 * TIn$theta23
sd.12 <- sqrt( deno / ( 1 - TIn$theta12^2 ) )
sd.13 <- sqrt( deno / ( 1 - TIn$theta13^2 ) )
sd.23 <- sqrt( deno / ( 1 - TIn$theta23^2 ) )
#####
# ! #
########################################################################
## I replaced TIn$eta1 with TIn$mar1. Same for TIn$eta2 and TIn$eta3 ##
########################################################################
p12.g <- mm( pnorm( (TIn$mar3 - mean.12[VC$inde2.1])/sd.12) , min.pr = VC$min.pr, max.pr = VC$max.pr)
p13.g <- mm( pnorm( (TIn$mar2[VC$inde2.1] - mean.13 )/sd.13) , min.pr = VC$min.pr, max.pr = VC$max.pr)
p23.g <- mm( pnorm( (TIn$mar1[VC$inde2] - mean.23 )/sd.23) , min.pr = VC$min.pr, max.pr = VC$max.pr)
########################################################################
p12.g.c <- mm(1 - p12.g, min.pr = VC$min.pr, max.pr = VC$max.pr)
p13.g.c <- mm(1 - p13.g, min.pr = VC$min.pr, max.pr = VC$max.pr)
p23.g.c <- mm(1 - p23.g, min.pr = VC$min.pr, max.pr = VC$max.pr)
#####
# ! #
########################################################################
## I replaced TIn$eta1 with TIn$mar1. Same for TIn$eta2 and TIn$eta3 ##
########################################################################
d11.12 <- dbinorm( TIn$mar1[VC$inde1] , TIn$mar2, cov12 = TIn$theta12)
d11.13 <- dbinorm( TIn$mar1[VC$inde2] , TIn$mar3, cov12 = TIn$theta13)
d11.23 <- dbinorm( TIn$mar2[VC$inde2.1], TIn$mar3, cov12 = TIn$theta23)
########################################################################
dl.dtheta12.1 <- - respvec$y1.cy2/TIn$p10 * d11.12
dl.dtheta12.1[VC$inde2.1] <- respvec$y1.y2.cy3/TIn$p110 * d11.12[VC$inde2.1] * p12.g.c +
respvec$y1.y2.y3/TIn$p111 * d11.12[VC$inde2.1] * p12.g
dl.dtheta12 <- dl.dtheta12.1
dl.dtheta13 <- - respvec$y1.y2.cy3/TIn$p110 * d11.13 * p13.g +
respvec$y1.y2.y3/TIn$p111 * d11.13 * p13.g
dl.dtheta23 <- - respvec$y1.y2.cy3/TIn$p110 * d11.23 * p23.g +
respvec$y1.y2.y3/TIn$p111 * d11.23 * p23.g
if(VC$Chol == FALSE){
dtheta12.dtheta12.st <- 4 * exp( 2 * TIn$theta12.st )/( exp(2 * TIn$theta12.st) + 1 )^2
dtheta13.dtheta13.st <- 4 * exp( 2 * TIn$theta13.st )/( exp(2 * TIn$theta13.st) + 1 )^2
dtheta23.dtheta23.st <- 4 * exp( 2 * TIn$theta23.st )/( exp(2 * TIn$theta23.st) + 1 )^2
dl.dtheta12.st <- dl.dtheta12 * dtheta12.dtheta12.st
dl.dtheta13.st <- dl.dtheta13 * dtheta13.dtheta13.st
dl.dtheta23.st <- dl.dtheta23 * dtheta23.dtheta23.st
}
if(VC$Chol == TRUE){
dl.dtheta <- matrix(0,length(VC$inde1),3)
dl.dtheta[VC$inde1, 1] <- dl.dtheta12
dl.dtheta[VC$inde2, 2] <- dl.dtheta13
dl.dtheta[VC$inde2, 3] <- dl.dtheta23
dth12.dth12.st <- 1/(1 + TIn$theta12.st^2)^(3/2)
dth12.dth13.st <- 0
dth12.dth23.st <- 0
dth13.dth12.st <- 0
dth13.dth13.st <- (1 + TIn$theta23.st^2)/(1 + TIn$theta13.st^2 + TIn$theta23.st^2)^(3/2)
dth13.dth23.st <- - (TIn$theta13.st * TIn$theta23.st)/(1 + TIn$theta13.st^2 + TIn$theta23.st^2)^(3/2)
dth23.dth12.st <- TIn$theta13.st/sqrt((1 + TIn$theta12.st^2) * (1 + TIn$theta13.st^2 + TIn$theta23.st^2)) - (TIn$theta12.st * (TIn$theta12.st * TIn$theta13.st + TIn$theta23.st))/((1 + TIn$theta12.st^2)^(3/2) * sqrt(1 + TIn$theta13.st^2 + TIn$theta23.st^2))
dth23.dth13.st <- TIn$theta12.st/sqrt((1 + TIn$theta12.st^2) * (1 + TIn$theta13.st^2 + TIn$theta23.st^2)) - (TIn$theta13.st * (TIn$theta12.st * TIn$theta13.st + TIn$theta23.st))/(sqrt(1 + TIn$theta12.st^2) * (1 + TIn$theta13.st^2 + TIn$theta23.st^2)^(3/2))
dth23.dth23.st <- 1/sqrt((1 + TIn$theta12.st^2) * (1 + TIn$theta13.st^2 + TIn$theta23.st^2)) - (TIn$theta23.st * (TIn$theta12.st * TIn$theta13.st + TIn$theta23.st))/(sqrt(1 + TIn$theta12.st^2) * (1 + TIn$theta13.st^2 + TIn$theta23.st^2)^(3/2))
dtheta.theta.st <- matrix( c( dth12.dth12.st, dth13.dth12.st, dth23.dth12.st,
dth12.dth13.st, dth13.dth13.st, dth23.dth13.st,
dth12.dth23.st, dth13.dth23.st, dth23.dth23.st ), 3 , 3)
dl.dtheta.st <- dl.dtheta %*% dtheta.theta.st
dl.dtheta12.st <- dl.dtheta.st[, 1]
dl.dtheta12.st <- dl.dtheta12.st[VC$inde1]
dl.dtheta13.st <- dl.dtheta.st[, 2]
dl.dtheta13.st <- dl.dtheta13.st[VC$inde2]
dl.dtheta23.st <- dl.dtheta.st[, 3]
dl.dtheta23.st <- dl.dtheta23.st[VC$inde2]
}
#####
# ! #
#######################################################
## In GTRIVec: 3rd, 4th, 9th and 10th lines are new ##
#######################################################
GTRIVec <- list(p12.g = p12.g, p13.g = p13.g, p23.g = p23.g,
p12.g.c = p12.g.c, p13.g.c = p13.g.c, p23.g.c = p23.g.c,
d.1 = d.1, d.2 = d.2, d.3 = d.3,
dmar1 = dmar1, dmar2 = dmar2, dmar3 = dmar3,
d11.12 = d11.12, d11.13 = d11.13, d11.23 = d11.23,
p.1.11 = p.1.11, p.1.10 = p.1.10,
p.2.11 = p.2.11, p.2.10 = p.2.10,
p.3.11 = p.3.11, p.3.10 = p.3.10,
dF1.de1 = dF1.de1, dF2.de2 = dF2.de2, dF3.de3 = dF3.de3,
dl.dF1 = dl.dF1, dl.dF2 = dl.dF2, dl.dF3 = dl.dF3,
dl.de1 = VC$weights*dl.de1, dl.de2 = VC$weights[VC$inde1]*dl.de2, dl.de3 = VC$weights[VC$inde2]*dl.de3,
dl.dtheta12.st = VC$weights[VC$inde1]*dl.dtheta12.st, dl.dtheta13.st = VC$weights[VC$inde2]*dl.dtheta13.st,
dl.dtheta23.st = VC$weights[VC$inde2]*dl.dtheta23.st,
mean.12 = mean.12,
mean.13 = mean.13,
mean.23 = mean.23,
sd.12 = sd.12,
sd.13 = sd.13,
sd.23 = sd.23,
upst.1 = upst.1,
upst.2 = upst.2,
dl.dtheta12 =dl.dtheta12, dl.dtheta13 = dl.dtheta13, dl.dtheta23 = dl.dtheta23)
GTRIVec
}
|
library(ph2bye)
### Name: BB.aniplot
### Title: Sequentially monitor patients using Beta-Binomial posterior
### probability
### Aliases: BB.aniplot
### ** Examples
# Using APL data
r=rep(0,6)
BB.aniplot(a=1,b=1,r=r, alpha=0.05, seed=1234)
# Simulate binomial data
B <- 10; N=1; p=0.3
r <- rbinom(n = B,size = N,prob = p)
BB.aniplot(a=1,b=1,r=r,time.interval = 0.2,output = FALSE)
|
/data/genthat_extracted_code/ph2bye/examples/BB.aniplot.Rd.R
|
no_license
|
surayaaramli/typeRrh
|
R
| false | false | 389 |
r
|
library(ph2bye)
### Name: BB.aniplot
### Title: Sequentially monitor patients using Beta-Binomial posterior
### probability
### Aliases: BB.aniplot
### ** Examples
# Using APL data
r=rep(0,6)
BB.aniplot(a=1,b=1,r=r, alpha=0.05, seed=1234)
# Simulate binomial data
B <- 10; N=1; p=0.3
r <- rbinom(n = B,size = N,prob = p)
BB.aniplot(a=1,b=1,r=r,time.interval = 0.2,output = FALSE)
|
#' Takes the values for a single file.
#'
#' @param x `data.frame` with columns `"mz"`, `"rt"` and `"i"`.
#'
#' @param main `character(1)` with the title of the plot.
#'
#' @param col color for the circles.
#'
#' @param colramp color ramp to be used for the points' background.
#'
#' @param grid.color color to be used for the grid lines (or `NA` if they should
#'     not be plotted).
#'
#' @param pch The plotting character.
#'
#' @param layout `matrix` defining the layout of the plot, or `NULL` if
#' `layout` was already called.
#'
#' @param ... additional parameters to be passed to the `plot` function.
#'
#' @md
#'
#' @author Johannes Rainer
#'
#' @noRd
.plotXIC <- function(x, main = "", col = "grey", colramp = topo.colors,
grid.color = "lightgrey", pch = 21,
layout = matrix(1:2, ncol = 1), ...) {
if (is.matrix(layout))
layout(layout)
## Chromatogram.
bpi <- unlist(lapply(split(x$i, x$rt), max, na.rm = TRUE))
brks <- do.breaks(range(x$i), nint = 256)
par(mar = c(0, 4, 2, 1))
plot(as.numeric(names(bpi)), bpi, xaxt = "n", col = col, main = main,
bg = level.colors(bpi, at = brks, col.regions = colramp), xlab = "",
pch = pch, ylab = "", las = 2, ...)
mtext(side = 4, line = 0, "Intensity", cex = par("cex.lab"))
grid(col = grid.color)
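    ## m/z versus retention time map; point background encodes intensity.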
par(mar = c(3.5, 4, 0, 1))
plot(x$rt, x$mz, main = "", pch = pch, col = col, xlab = "", ylab = "",
yaxt = "n", bg = level.colors(x$i, at = brks, col.regions = colramp),
...)
axis(side = 2, las = 2)
grid(col = grid.color)
mtext(side = 1, line = 2.5, "Retention time", cex = par("cex.lab"))
mtext(side = 4, line = 0, "m/z", cex = par("cex.lab"))
}
#' Create a `matrix` to be used for the `layout` function to allow plotting of
#' vertically arranged *sub-plots* consisting of `sub_plot` plots.
#'
#' @param x `integer(1)` with the number of sub-plots.
#'
#' @param sub_plot `integer(1)` with the number of sub-plots per cell/plot.
#'
#' @author Johannes Rainer
#'
#' @md
#'
#' @noRd
#'
#' @examples
#'
#' ## Assume we've got 5 *features* to plot and we want two plots for
#' ## each feature, arranged below each other.
#'
#' .vertical_sub_layout(5, sub_plot = 2)
.vertical_sub_layout <- function(x, sub_plot = 2) {
sqrt_x <- sqrt(x)
ncol <- ceiling(sqrt_x)
nrow <- round(sqrt_x)
rws <- split(1:(ncol * nrow * sub_plot), f = rep(1:nrow,
each = sub_plot * ncol))
do.call(rbind, lapply(rws, matrix, ncol = ncol))
}
|
/R/functions-plotting.R
|
no_license
|
meowcat/MSnbase
|
R
| false | false | 2,591 |
r
|
#' Takes the values for a single file.
#'
#' @param x `data.frame` with columns `"mz"`, `"rt"` and `"i"`.
#'
#' @param main `character(1)` with the title of the plot.
#'
#' @param col color for the circles.
#'
#' @param colramp color ramp to be used for the points' background.
#'
#' @param grid.color color to be used for the grid lines (or `NA` if they should
#'     not be plotted).
#'
#' @param pch The plotting character.
#'
#' @param layout `matrix` defining the layout of the plot, or `NULL` if
#' `layout` was already called.
#'
#' @param ... additional parameters to be passed to the `plot` function.
#'
#' @md
#'
#' @author Johannes Rainer
#'
#' @noRd
.plotXIC <- function(x, main = "", col = "grey", colramp = topo.colors,
grid.color = "lightgrey", pch = 21,
layout = matrix(1:2, ncol = 1), ...) {
if (is.matrix(layout))
layout(layout)
## Chromatogram.
bpi <- unlist(lapply(split(x$i, x$rt), max, na.rm = TRUE))
brks <- do.breaks(range(x$i), nint = 256)
par(mar = c(0, 4, 2, 1))
plot(as.numeric(names(bpi)), bpi, xaxt = "n", col = col, main = main,
bg = level.colors(bpi, at = brks, col.regions = colramp), xlab = "",
pch = pch, ylab = "", las = 2, ...)
mtext(side = 4, line = 0, "Intensity", cex = par("cex.lab"))
grid(col = grid.color)
par(mar = c(3.5, 4, 0, 1))
plot(x$rt, x$mz, main = "", pch = pch, col = col, xlab = "", ylab = "",
yaxt = "n", bg = level.colors(x$i, at = brks, col.regions = colramp),
...)
axis(side = 2, las = 2)
grid(col = grid.color)
mtext(side = 1, line = 2.5, "Retention time", cex = par("cex.lab"))
mtext(side = 4, line = 0, "m/z", cex = par("cex.lab"))
}
#' Create a `matrix` to be used for the `layout` function to allow plotting of
#' vertically arranged *sub-plots* consisting of `sub_plot` plots.
#'
#' @param x `integer(1)` with the number of sub-plots.
#'
#' @param sub_plot `integer(1)` with the number of sub-plots per cell/plot.
#'
#' @author Johannes Rainer
#'
#' @md
#'
#' @noRd
#'
#' @examples
#'
#' ## Assume we've got 5 *features* to plot and we want two plots for
#' ## each feature arranged below each other.
#'
#' .vertical_sub_layout(5, sub_plot = 2)
.vertical_sub_layout <- function(x, sub_plot = 2) {
sqrt_x <- sqrt(x)
ncol <- ceiling(sqrt_x)
nrow <- round(sqrt_x)
rws <- split(1:(ncol * nrow * sub_plot), f = rep(1:nrow,
each = sub_plot * ncol))
do.call(rbind, lapply(rws, matrix, ncol = ncol))
}
|
# Some code
# New
|
/test_code.R
|
no_license
|
alexandrashtein/test_250118
|
R
| false | false | 17 |
r
|
# Some code
# New
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/translate_operations.R
\name{translate_update_parallel_data}
\alias{translate_update_parallel_data}
\title{Updates a previously created parallel data resource by importing a new
input file from Amazon S3}
\usage{
translate_update_parallel_data(
Name,
Description = NULL,
ParallelDataConfig,
ClientToken
)
}
\arguments{
\item{Name}{[required] The name of the parallel data resource being updated.}
\item{Description}{A custom description for the parallel data resource in Amazon Translate.}
\item{ParallelDataConfig}{[required] Specifies the format and S3 location of the parallel data input file.}
\item{ClientToken}{[required] A unique identifier for the request. This token is automatically
generated when you use Amazon Translate through an AWS SDK.}
}
\description{
Updates a previously created parallel data resource by importing a new input file from Amazon S3.
See \url{https://www.paws-r-sdk.com/docs/translate_update_parallel_data/} for full documentation.
}
\keyword{internal}
|
/cran/paws.machine.learning/man/translate_update_parallel_data.Rd
|
permissive
|
paws-r/paws
|
R
| false | true | 1,077 |
rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/translate_operations.R
\name{translate_update_parallel_data}
\alias{translate_update_parallel_data}
\title{Updates a previously created parallel data resource by importing a new
input file from Amazon S3}
\usage{
translate_update_parallel_data(
Name,
Description = NULL,
ParallelDataConfig,
ClientToken
)
}
\arguments{
\item{Name}{[required] The name of the parallel data resource being updated.}
\item{Description}{A custom description for the parallel data resource in Amazon Translate.}
\item{ParallelDataConfig}{[required] Specifies the format and S3 location of the parallel data input file.}
\item{ClientToken}{[required] A unique identifier for the request. This token is automatically
generated when you use Amazon Translate through an AWS SDK.}
}
\description{
Updates a previously created parallel data resource by importing a new input file from Amazon S3.
See \url{https://www.paws-r-sdk.com/docs/translate_update_parallel_data/} for full documentation.
}
\keyword{internal}
|
library(quantmod)
SPX <- getSymbols("^GSPC",auto.assign = FALSE, from = "2019-01-02", to="2020-12-31")
head(SPX[,4])
# we work with the closing price
SPXprice<-na.omit(SPX[,4])
# we want to work on yearly basis
daysInOneYear<-365 #actual convention
obstime<-as.numeric(index(SPXprice))# we can convert it to the numeric object and determine delta.
n <- length(obstime) - 1
delta_i<-diff(obstime)/(daysInOneYear)
logreturn<- diff(log(as.numeric(SPXprice)))
minusLogLik<-function(par,logreturn,delta_i){
mu<-par[1]
sig<-par[2]
vecMean = (mu -0.5*sig^2)*delta_i
vecsd = sig*sqrt(delta_i)
-sum(dnorm(logreturn,mean=vecMean,sd=vecsd,log=TRUE))
}
minusLogLik(par=c(0.5,0.2),logreturn=logreturn,delta_i=delta_i) # minus loglikelihood at mu= 0.5 sig=0.2
res<-optim(par=c(0.5,0.2),fn=minusLogLik,lower=c(-Inf,0),method="L-BFGS-B", logreturn=logreturn, delta_i=delta_i)
res$par
#we want to see the value of the likelihood we are able to reach so;
-1*res$value #multiply it by -1 because it is a minimization : minusloglikelihood
res$convergence # it's 0, i.e. the optimisation converged successfully.
volatility<-res$par[2] # volatility on a yearly basis, estimated from the historical log returns.
volatility
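# Hedged sanity check (added, not in the original script): for (approximately)
# equidistant observations the ML estimate of sigma should be close to the sample
# standard deviation of the log returns rescaled to a yearly basis.
sigma_check <- sd(logreturn)/sqrt(mean(delta_i))
sigma_check # should be close to volatility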
# Black-Scholes put option price (equivalently obtained from the call via put-call parity)
PutoptBs<-function(S,K,TimeToMat, sigma, Rfree){
d1<-(log(S/K)+(Rfree+0.5*sigma^2)*(TimeToMat))/(sigma*sqrt(TimeToMat))
d2<-d1-sigma*sqrt(TimeToMat)
Pt0= K*exp(-Rfree*TimeToMat)*pnorm(-d2)-S*pnorm(-d1)
return(Pt0)
}
sigma= volatility # on yearly basis estimated through MLE
# At-the-money option, starting on 30 December 2020 and maturing on 27 February 2021.
SPXprice[n+1,] # t0 = 2020-12-30
S = as.numeric(SPXprice[n+1,])
K=S # at the money
K
S
Rfree=0.015
NdaystoMat <- data.frame(date=c("2020/12/30"),tx_start=c("2021/02/27"))
NdaystoMat$date_diff<-as.Date(as.character(NdaystoMat$tx_start), format="%Y/%m/%d")-
as.Date(as.character(NdaystoMat$date), format="%Y/%m/%d")
Maturity<-as.numeric(NdaystoMat$date_diff)
Maturity # daily basis
# convert Maturity to a yearly basis
daysInOneYear<-365 #actual convention
Maturity<-Maturity/daysInOneYear
Put<-PutoptBs(S=S,K=S, TimeToMat = Maturity,sigma = sigma,Rfree = Rfree)
Put
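# Hedged extension (added, not in the original script): the matching call price
# follows from put-call parity, C = P + S - K*exp(-r*T).
Call <- Put + S - K*exp(-Rfree*Maturity)
Call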
|
/EX4.R
|
no_license
|
Murataydinunimi/Numerical-Methods-for-Finance
|
R
| false | false | 2,286 |
r
|
library(quantmod)
SPX <- getSymbols("^GSPC",auto.assign = FALSE, from = "2019-01-02", to="2020-12-31")
head(SPX[,4])
# we work with the closing price
SPXprice<-na.omit(SPX[,4])
# we want to work on yearly basis
daysInOneYear<-365 #actual convention
obstime<-as.numeric(index(SPXprice))# we can convert it to the numeric object and determine delta.
n <- length(obstime) - 1
delta_i<-diff(obstime)/(daysInOneYear)
logreturn<- diff(log(as.numeric(SPXprice)))
minusLogLik<-function(par,logreturn,delta_i){
mu<-par[1]
sig<-par[2]
vecMean = (mu -0.5*sig^2)*delta_i
vecsd = sig*sqrt(delta_i)
-sum(dnorm(logreturn,mean=vecMean,sd=vecsd,log=TRUE))
}
minusLogLik(par=c(0.5,0.2),logreturn=logreturn,delta_i=delta_i) # minus loglikelihood at mu= 0.5 sig=0.2
res<-optim(par=c(0.5,0.2),fn=minusLogLik,lower=c(-Inf,0),method="L-BFGS-B", logreturn=logreturn, delta_i=delta_i)
res$par
#we want to see the value of the likelihood we are able to reach so;
-1*res$value #multiply it by -1 because it is a minimization : minusloglikelihood
res$convergence # it's 0, i.e. the optimisation converged successfully.
volatility<-res$par[2] # volatility on a yearly basis, estimated from the historical log returns.
volatility
# Black-Scholes put option price (equivalently obtained from the call via put-call parity)
PutoptBs<-function(S,K,TimeToMat, sigma, Rfree){
d1<-(log(S/K)+(Rfree+0.5*sigma^2)*(TimeToMat))/(sigma*sqrt(TimeToMat))
d2<-d1-sigma*sqrt(TimeToMat)
Pt0= K*exp(-Rfree*TimeToMat)*pnorm(-d2)-S*pnorm(-d1)
return(Pt0)
}
sigma= volatility # on yearly basis estimated through MLE
# At-the-money option, starting on 30 December 2020 and maturing on 27 February 2021.
SPXprice[n+1,] # t0 = 2020-12-30
S = as.numeric(SPXprice[n+1,])
K=S # at the money
K
S
Rfree=0.015
NdaystoMat <- data.frame(date=c("2020/12/30"),tx_start=c("2021/02/27"))
NdaystoMat$date_diff<-as.Date(as.character(NdaystoMat$tx_start), format="%Y/%m/%d")-
as.Date(as.character(NdaystoMat$date), format="%Y/%m/%d")
Maturity<-as.numeric(NdaystoMat$date_diff)
Maturity # daily basis
# convert Maturity to a yearly basis
daysInOneYear<-365 #actual convention
Maturity<-Maturity/daysInOneYear
Put<-PutoptBs(S=S,K=S, TimeToMat = Maturity,sigma = sigma,Rfree = Rfree)
Put
|
data("mnist_27")
set.seed(1995)
indexes <- createResample(mnist_27$train$y, 10)
which(indexes$Resample01 == 3)
which(indexes$Resample02 == 3)
which(indexes$Resample03 == 3)
which(indexes$Resample04 == 3)
which(indexes$Resample05 == 3)
which(indexes$Resample06 == 3)
which(indexes$Resample07 == 3)
which(indexes$Resample08 == 3)
which(indexes$Resample09 == 3)
which(indexes$Resample10 == 3)
|
/Bootstrapping_mnist.r
|
no_license
|
sb-ruisms/MachineLearningExercises
|
R
| false | false | 390 |
r
|
data("mnist_27")
set.seed(1995)
indexes <- createResample(mnist_27$train$y, 10)
which(indexes$Resample01 == 3)
which(indexes$Resample02 == 3)
which(indexes$Resample03 == 3)
which(indexes$Resample04 == 3)
which(indexes$Resample05 == 3)
which(indexes$Resample06 == 3)
which(indexes$Resample07 == 3)
which(indexes$Resample08 == 3)
which(indexes$Resample09 == 3)
which(indexes$Resample10 == 3)
|
\name{slice-methods}
\docType{methods}
\alias{get.slice}
\alias{slice.fast}
\alias{slice}
\alias{slice,SoilProfileCollection-method}
\title{Slicing of SoilProfileCollection Objects}
\description{Slicing of SoilProfileCollection Objects}
\usage{
# method for SoilProfileCollection objects
slice(object, fm, top.down=TRUE, just.the.data=FALSE, strict=TRUE)
}
\arguments{
\item{object}{a SoilProfileCollection}
\item{fm}{A formula: either `integer.vector ~ var1 + var2 + var3', where the named variables are sliced according to `integer.vector', OR `integer.vector ~ .', where all variables are sliced according to `integer.vector'.}
\item{top.down}{logical, slices are defined from the top-down: \code{0:10} implies 0-11 depth units.}
\item{just.the.data}{Logical, return just the sliced data or a new SoilProfileCollection object.}
\item{strict}{Logical, should the horizonation be strictly checked for self-consistency?}
}
\section{Details}{
By default, slices are defined from the top-down: \code{0:10} implies 0-11 depth units.
}
\section{Methods}{
\describe{
\item{data = "SoilProfileCollection"}{Typical usage, where input is a \code{\link{SoilProfileCollection}}.}
}
}
\note{\code{slab()} and \code{slice()} are much faster and require less memory if input data are either numeric or character.}
\value{Either a new SoilProfileCollection with data sliced according to \code{fm}, or a \code{data.frame}.}
\references{
D.E. Beaudette, P. Roudier, A.T. O'Geen, Algorithms for quantitative pedology: A toolkit for soil scientists, Computers & Geosciences, Volume 52, March 2013, Pages 258-268, 10.1016/j.cageo.2012.10.020.
}
\author{D.E. Beaudette}
\seealso{\code{\link{slab}}}
\examples{
library(aqp)
# simulate some data, IDs are 1:20
d <- lapply(1:20, random_profile)
d <- do.call('rbind', d)
# init SoilProfilecollection object
depths(d) <- id ~ top + bottom
head(horizons(d))
# generate single slice at 10 cm
# output is a SoilProfilecollection object
s <- slice(d, 10 ~ name + p1 + p2 + p3)
# generate single slice at 10 cm, output data.frame
s <- slice(d, 10 ~ name + p1 + p2 + p3, just.the.data=TRUE)
# generate integer slices from 0 - 26 cm
# note that slices are specified by default as "top-down"
# e.g. the lower depth will always be top + 1
s <- slice(d, 0:25 ~ name + p1 + p2 + p3)
par(mar=c(0,1,0,1))
plot(s)
# generate slices from 0 - 11 cm, for all variables
s <- slice(d, 0:10 ~ .)
print(s)
# note that pct missing is computed for each slice,
# if all vars are missing, then NA is returned
d$p1[1:10] <- NA
s <- slice(d, 10 ~ ., just.the.data=TRUE)
print(s)
\dontrun{
##
## check sliced data
##
# test that mean of 1 cm slices property is equal to the
# hz-thickness weighted mean value of that property
data(sp1)
depths(sp1) <- id ~ top + bottom
# get the first profile
sp1.sub <- sp1[which(profile_id(sp1) == 'P009'), ]
# compute hz-thickness wt. mean
hz.wt.mean <- with(
horizons(sp1.sub),
sum((bottom - top) * prop) / sum(bottom - top)
)
# hopefully the same value, calculated via slice()
s <- slice(sp1.sub, 0:max(sp1.sub) ~ prop)
hz.slice.mean <- mean(s$prop, na.rm=TRUE)
# same?
if(!all.equal(hz.slice.mean, hz.wt.mean))
stop('there is a bug in slice() !!!')
}
}
\keyword{methods}
\keyword{manip}
|
/man/SPC-slice-methods.Rd
|
no_license
|
rsbivand/aqp
|
R
| false | false | 3,270 |
rd
|
\name{slice-methods}
\docType{methods}
\alias{get.slice}
\alias{slice.fast}
\alias{slice}
\alias{slice,SoilProfileCollection-method}
\title{Slicing of SoilProfileCollection Objects}
\description{Slicing of SoilProfileCollection Objects}
\usage{
# method for SoilProfileCollection objects
slice(object, fm, top.down=TRUE, just.the.data=FALSE, strict=TRUE)
}
\arguments{
\item{object}{a SoilProfileCollection}
\item{fm}{A formula: either `integer.vector ~ var1 + var2 + var3', where the named variables are sliced according to `integer.vector', OR `integer.vector ~ .', where all variables are sliced according to `integer.vector'.}
\item{top.down}{logical, slices are defined from the top-down: \code{0:10} implies 0-11 depth units.}
\item{just.the.data}{Logical, return just the sliced data or a new SoilProfileCollection object.}
\item{strict}{Logical, should the horizonation be strictly checked for self-consistency?}
}
\section{Details}{
By default, slices are defined from the top-down: \code{0:10} implies 0-11 depth units.
}
\section{Methods}{
\describe{
\item{data = "SoilProfileCollection"}{Typical usage, where input is a \code{\link{SoilProfileCollection}}.}
}
}
\note{\code{slab()} and \code{slice()} are much faster and require less memory if input data are either numeric or character.}
\value{Either a new SoilProfileCollection with data sliced according to \code{fm}, or a \code{data.frame}.}
\references{
D.E. Beaudette, P. Roudier, A.T. O'Geen, Algorithms for quantitative pedology: A toolkit for soil scientists, Computers & Geosciences, Volume 52, March 2013, Pages 258-268, 10.1016/j.cageo.2012.10.020.
}
\author{D.E. Beaudette}
\seealso{\code{\link{slab}}}
\examples{
library(aqp)
# simulate some data, IDs are 1:20
d <- lapply(1:20, random_profile)
d <- do.call('rbind', d)
# init SoilProfilecollection object
depths(d) <- id ~ top + bottom
head(horizons(d))
# generate single slice at 10 cm
# output is a SoilProfilecollection object
s <- slice(d, 10 ~ name + p1 + p2 + p3)
# generate single slice at 10 cm, output data.frame
s <- slice(d, 10 ~ name + p1 + p2 + p3, just.the.data=TRUE)
# generate integer slices from 0 - 26 cm
# note that slices are specified by default as "top-down"
# e.g. the lower depth will always be top + 1
s <- slice(d, 0:25 ~ name + p1 + p2 + p3)
par(mar=c(0,1,0,1))
plot(s)
# generate slices from 0 - 11 cm, for all variables
s <- slice(d, 0:10 ~ .)
print(s)
# note that pct missing is computed for each slice,
# if all vars are missing, then NA is returned
d$p1[1:10] <- NA
s <- slice(d, 10 ~ ., just.the.data=TRUE)
print(s)
\dontrun{
##
## check sliced data
##
# test that mean of 1 cm slices property is equal to the
# hz-thickness weighted mean value of that property
data(sp1)
depths(sp1) <- id ~ top + bottom
# get the first profile
sp1.sub <- sp1[which(profile_id(sp1) == 'P009'), ]
# compute hz-thickness wt. mean
hz.wt.mean <- with(
horizons(sp1.sub),
sum((bottom - top) * prop) / sum(bottom - top)
)
# hopefully the same value, calculated via slice()
s <- slice(sp1.sub, 0:max(sp1.sub) ~ prop)
hz.slice.mean <- mean(s$prop, na.rm=TRUE)
# same?
if(!all.equal(hz.slice.mean, hz.wt.mean))
stop('there is a bug in slice() !!!')
}
}
\keyword{methods}
\keyword{manip}
|
# read and clean the data
source("read_and_clean.R")
# open png graphics device
png("plot4.png")
# set graphics canvas to four graphic panels
par(mfrow = c(2, 2))
# plot of global active power in respect to time
plot(sub$DateTime, sub$Global_active_power, type = "l", xlab = "", ylab = "Global Active Power")
# plot of voltage in respect to time
plot(sub$DateTime, sub$Voltage, type = "l", xlab = "datetime", ylab = "Voltage")
# plot of sub metering 1-3 in respect to time
plot(sub$DateTime, sub$Sub_metering_1, type = "l", xlab = "", ylab = "Energy sub metering")
lines(x = sub$DateTime, y = sub$Sub_metering_2, col = "red")
lines(x = sub$DateTime, y = sub$Sub_metering_3, col = "blue")
legend("topright", c("Sub_metering_1", "Sub_metering_2", "Sub_metering_3"), lty = c(1,1,1), col = c("black","red","blue"), bty="n")
# plot of global reactive power in respect to time
plot(sub$DateTime, sub$Global_reactive_power, type = "l", xlab = "datetime", ylab = "Global_reactive_power")
# reset graphics canvas to default value
par(mfrow = c(1, 1))
# save and shut down graphics device
invisible(dev.off())
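# Hedged note (added): read_and_clean.R is not shown here; it is assumed to build a
# data.frame `sub` restricted to 1-2 Feb 2007 with a combined DateTime column, e.g.
# roughly (file name and dates are assumptions based on the standard assignment data):
# power <- read.table("household_power_consumption.txt", sep = ";", header = TRUE,
#                     na.strings = "?", stringsAsFactors = FALSE)
# sub <- subset(power, Date %in% c("1/2/2007", "2/2/2007"))
# sub$DateTime <- strptime(paste(sub$Date, sub$Time), "%d/%m/%Y %H:%M:%S")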
|
/plot4.R
|
no_license
|
Zyrix/ExData_Plotting1
|
R
| false | false | 1,107 |
r
|
# read and clean the data
source("read_and_clean.R")
# open png graphics device
png("plot4.png")
# set graphics canvas to four graphic panels
par(mfrow = c(2, 2))
# plot of global active power in respect to time
plot(sub$DateTime, sub$Global_active_power, type = "l", xlab = "", ylab = "Global Active Power")
# plot of voltage in respect to time
plot(sub$DateTime, sub$Voltage, type = "l", xlab = "datetime", ylab = "Voltage")
# plot of sub metering 1-3 in respect to time
plot(sub$DateTime, sub$Sub_metering_1, type = "l", xlab = "", ylab = "Energy sub metering")
lines(x = sub$DateTime, y = sub$Sub_metering_2, col = "red")
lines(x = sub$DateTime, y = sub$Sub_metering_3, col = "blue")
legend("topright", c("Sub_metering_1", "Sub_metering_2", "Sub_metering_3"), lty = c(1,1,1), col = c("black","red","blue"), bty="n")
# plot of global reactive power in respect to time
plot(sub$DateTime, sub$Global_reactive_power, type = "l", xlab = "datetime", ylab = "Global_reactive_power")
# reset graphics canvas to default value
par(mfrow = c(1, 1))
# save and shut down graphics device
invisible(dev.off())
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/saveOutputV2.R
\name{writeReplicateDataV2}
\alias{writeReplicateDataV2}
\title{Write protein_counts_and_intensity.json
This is a wrapper of `oneProteinReplDataGeneric`, looping over all proteins.}
\usage{
writeReplicateDataV2(longDTProt, outputFolder, GroupColumnName, GroupLabelType)
}
\arguments{
\item{longDTProt}{data.table protein information stored in long format.
Columns required: `ProteinId`, `GeneName`, `Description`, `log2NInt`, `Condition`,
`Replicate`, `Imputed`.}
\item{outputFolder}{str. Path to folder where `data` should be saved.}
}
\description{
Write protein_counts_and_intensity.json
This is a wrapper of `oneProteinReplDataGeneric`, looping over all proteins.
}
|
/man/writeReplicateDataV2.Rd
|
permissive
|
MassDynamics/MassExpression
|
R
| false | true | 765 |
rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/saveOutputV2.R
\name{writeReplicateDataV2}
\alias{writeReplicateDataV2}
\title{Write protein_counts_and_intensity.json
This is a wrapper of `oneProteinReplDataGeneric`, looping over all proteins.}
\usage{
writeReplicateDataV2(longDTProt, outputFolder, GroupColumnName, GroupLabelType)
}
\arguments{
\item{longDTProt}{data.table protein information stored in long format.
Columns required: `ProteinId`, `GeneName`, `Description`, `log2NInt`, `Condition`,
`Replicate`, `Imputed`.}
\item{outputFolder}{str. Path to folder where `data` should be saved.}
}
\description{
Write protein_counts_and_intensity.json
This is a wrapper of `oneProteinReplDataGeneric`, looping over all proteins.
}
|
require(shiny)
require(ggplot2)
data <- read.csv('data/cleaned-cdc-mortality-1999-2010.csv', header = TRUE)
data <- data[data$Year == 2010,]
shinyServer(
function(input, output) {
dataSubset <- reactive({
df <- subset(data, data$ICD.Chapter == input$disease)
df$State <- reorder(df$State, 1 / df$Crude.Rate)
# df <- df[order(df$Crude.Rate, decreasing = TRUE), ]
df
})
output$plot <- renderPlot({
ggplot(dataSubset(), aes(x = State, y = Crude.Rate)) +
geom_bar(stat = 'identity', fill = 'navy') +
geom_text(aes(x = State, y = 0, ymax = Crude.Rate,
label=State, hjust = 1, vjust = 0.25),
position = position_dodge(width=1),
color = 'white',
angle = 270,
size = 4) +
scale_x_discrete(breaks = NULL) +
theme(panel.background = element_rect(fill = 'darkgray'))
})
}
)
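# Hedged companion sketch (added, not part of server.R): the server expects an input
# named "disease"; a minimal ui.R supplying it could look roughly like this (the
# choices are read from the same CSV; the column name is taken from the server code).
# data <- read.csv('data/cleaned-cdc-mortality-1999-2010.csv', header = TRUE)
# shinyUI(fluidPage(
#   selectInput("disease", "Cause of death (ICD chapter):",
#               choices = sort(unique(data$ICD.Chapter))),
#   plotOutput("plot")
# ))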
|
/lecture3/HW3P1/server.R
|
no_license
|
circld/CUNY_IS608
|
R
| false | false | 976 |
r
|
require(shiny)
require(ggplot2)
data <- read.csv('data/cleaned-cdc-mortality-1999-2010.csv', header = TRUE)
data <- data[data$Year == 2010,]
shinyServer(
function(input, output) {
dataSubset <- reactive({
df <- subset(data, data$ICD.Chapter == input$disease)
df$State <- reorder(df$State, 1 / df$Crude.Rate)
# df <- df[order(df$Crude.Rate, decreasing = TRUE), ]
df
})
output$plot <- renderPlot({
ggplot(dataSubset(), aes(x = State, y = Crude.Rate)) +
geom_bar(stat = 'identity', fill = 'navy') +
geom_text(aes(x = State, y = 0, ymax = Crude.Rate,
label=State, hjust = 1, vjust = 0.25),
position = position_dodge(width=1),
color = 'white',
angle = 270,
size = 4) +
scale_x_discrete(breaks = NULL) +
theme(panel.background = element_rect(fill = 'darkgray'))
})
}
)
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/boxplotsRPKM.R
\name{createPairedData}
\alias{createPairedData}
\title{Creates a dataframe of paired sample types with a column named group}
\usage{
createPairedData(df_map, pair_list)
}
\arguments{
\item{df_map}{A dataframe of combined non-subsampled or subsampled mapping data and metadata}
\item{pair_list}{A list of vectors of length two containing paired sample types}
}
\value{
A dataframe of paired sample types with a column named group i.e. saliva vs. stool
}
\description{
Creates a dataframe of paired sample types with a column named group
}
\examples{
df_map_subsampled <- readMappingData("/home/vicky/Documents/CHMI/Resistome-paper/resistomeAnalysis/db/MAPPING_DATA/subsampled_argrich_merged.csv", without_US_duplicates = TRUE)
pair_list <- list(c("stool", "dental"), c("stool", "saliva"), c("dental", "saliva"), c("stool", "dorsum of tongue"), c("stool", "buccal mucosa"), c("dorsum of tongue", "buccal mucosa"), c("dorsum of tongue", "dental"), c("buccal mucosa", "dental"))
df_map_subsampled_pairs <- createPairedData(df_map_subsampled, pair_list)
}
|
/man/createPairedData.Rd
|
permissive
|
blue-moon22/resistomeAnalysis
|
R
| false | true | 1,147 |
rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/boxplotsRPKM.R
\name{createPairedData}
\alias{createPairedData}
\title{Creates a dataframe of paired sample types with a column named group}
\usage{
createPairedData(df_map, pair_list)
}
\arguments{
\item{df_map}{A dataframe of combined non-subsampled or subsampled mapping data and metadata}
\item{pair_list}{A list of vectors of length two containing paired sample types}
}
\value{
A dataframe of paired sample types with a column named group i.e. saliva vs. stool
}
\description{
Creates a dataframe of paired sample types with a column named group
}
\examples{
df_map_subsampled <- readMappingData("/home/vicky/Documents/CHMI/Resistome-paper/resistomeAnalysis/db/MAPPING_DATA/subsampled_argrich_merged.csv", without_US_duplicates = TRUE)
pair_list <- list(c("stool", "dental"), c("stool", "saliva"), c("dental", "saliva"), c("stool", "dorsum of tongue"), c("stool", "buccal mucosa"), c("dorsum of tongue", "buccal mucosa"), c("dorsum of tongue", "dental"), c("buccal mucosa", "dental"))
df_map_subsampled_pairs <- createPairedData(df_map_subsampled, pair_list)
}
|
#' Layouts
#'
#' Layout your graph.
#'
#' @inheritParams sg_nodes
#' @param nodes,edges Nodes and edges as prepared for sigmajs.
#' @param directed Whether or not to create a directed graph, passed to \code{\link[igraph]{graph_from_data_frame}}.
#' @param layout An \code{igraph} layout function.
#' @param save_igraph Whether to save the \code{igraph} object used internally.
#' @param ... Any other parameter to pass to \code{layout} function.
#'
#' @details The package uses \code{igraph} internally for a lot of computations the \code{save_igraph}
#' allows saving the object to speed up subsequent computations.
#'
#' @section Functions:
#' \itemize{
#' \item{\code{sg_layout} layout your graph.}
#' \item{\code{sg_get_layout} helper to get graph's \code{x} and \code{y} positions.}
#' }
#'
#' @examples
#' nodes <- sg_make_nodes(250) # 250 nodes
#' edges <- sg_make_edges(nodes, n = 500)
#'
#' sigmajs() %>%
#' sg_nodes(nodes, id, size, color) %>%
#' sg_edges(edges, id, source, target) %>%
#' sg_layout()
#'
#' nodes_coords <- sg_get_layout(nodes, edges)
#'
#' @return \code{sg_get_layout} returns nodes with \code{x} and \code{y} coordinates.
#'
#' @rdname layout
#' @export
sg_layout <- function(sg, directed = TRUE, layout = igraph::layout_nicely, save_igraph = TRUE, ...){
if (missing(sg))
stop("missing sg", call. = FALSE)
if (!inherits(sg, "sigmajs"))
stop("sg must be of class sigmajs", call. = FALSE)
nodes <- .data_2_df(sg$x$data$nodes)
edges <- .data_2_df(sg$x$data$edges)
# clean
nodes <- .rm_x_y(nodes)
nodes <- sg_get_layout(nodes, edges, directed, layout, save_igraph = save_igraph, ...)
nodes <- apply(nodes, 1, as.list)
sg$x$data$nodes <- nodes
sg
}
#' @rdname layout
#' @export
sg_get_layout <- function(nodes, edges, directed = TRUE, layout = igraph::layout_nicely, save_igraph = TRUE, ...){
if (missing(nodes) || missing(edges))
stop("missing nodes or edges", call. = FALSE)
# clean
edges <- .re_order(edges)
nodes <- .rm_x_y(nodes)
nodes <- .re_order_nodes(nodes)
g <- .build_igraph(edges, directed = directed, nodes, save = save_igraph)
l <- layout(g, ...)
l <- as.data.frame(l) %>%
dplyr::select_("x" = "V1", "y" = "V2")
nodes <- dplyr::bind_cols(nodes, l)
return(nodes)
}
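# Hedged usage sketch (added): any igraph layout function can be supplied via `layout`,
# with extra arguments passed through `...`, e.g. a circular layout:
# nodes <- sg_make_nodes(50)
# edges <- sg_make_edges(nodes, n = 100)
# nodes_circle <- sg_get_layout(nodes, edges, layout = igraph::layout_in_circle)
# head(nodes_circle)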
|
/R/layouts.R
|
permissive
|
marcofattorelli/sigmajs
|
R
| false | false | 2,323 |
r
|
#' Layouts
#'
#' Layout your graph.
#'
#' @inheritParams sg_nodes
#' @param nodes,edges Nodes and edges as prepared for sigmajs.
#' @param directed Whether or not to create a directed graph, passed to \code{\link[igraph]{graph_from_data_frame}}.
#' @param layout An \code{igraph} layout function.
#' @param save_igraph Whether to save the \code{igraph} object used internally.
#' @param ... Any other parameter to pass to \code{layout} function.
#'
#' @details The package uses \code{igraph} internally for a lot of computations the \code{save_igraph}
#' allows saving the object to speed up subsequent computations.
#'
#' @section Functions:
#' \itemize{
#' \item{\code{sg_layout} layout your graph.}
#' \item{\code{sg_get_layout} helper to get graph's \code{x} and \code{y} positions.}
#' }
#'
#' @examples
#' nodes <- sg_make_nodes(250) # 250 nodes
#' edges <- sg_make_edges(nodes, n = 500)
#'
#' sigmajs() %>%
#' sg_nodes(nodes, id, size, color) %>%
#' sg_edges(edges, id, source, target) %>%
#' sg_layout()
#'
#' nodes_coords <- sg_get_layout(nodes, edges)
#'
#' @return \code{sg_get_layout} returns nodes with \code{x} and \code{y} coordinates.
#'
#' @rdname layout
#' @export
sg_layout <- function(sg, directed = TRUE, layout = igraph::layout_nicely, save_igraph = TRUE, ...){
if (missing(sg))
stop("missing sg", call. = FALSE)
if (!inherits(sg, "sigmajs"))
stop("sg must be of class sigmajs", call. = FALSE)
nodes <- .data_2_df(sg$x$data$nodes)
edges <- .data_2_df(sg$x$data$edges)
# clean
nodes <- .rm_x_y(nodes)
nodes <- sg_get_layout(nodes, edges, directed, layout, save_igraph = save_igraph, ...)
nodes <- apply(nodes, 1, as.list)
sg$x$data$nodes <- nodes
sg
}
#' @rdname layout
#' @export
sg_get_layout <- function(nodes, edges, directed = TRUE, layout = igraph::layout_nicely, save_igraph = TRUE, ...){
if (missing(nodes) || missing(edges))
stop("missing nodes or edges", call. = FALSE)
# clean
edges <- .re_order(edges)
nodes <- .rm_x_y(nodes)
nodes <- .re_order_nodes(nodes)
g <- .build_igraph(edges, directed = directed, nodes, save = save_igraph)
l <- layout(g, ...)
l <- as.data.frame(l) %>%
dplyr::select_("x" = "V1", "y" = "V2")
nodes <- dplyr::bind_cols(nodes, l)
return(nodes)
}
|
# R version 3.2.3 (2015-12-10) -- "Wooden Christmas-Tree"
#
# solar.R
#
# VERSION: 1.0-r2
# LAST UPDATED: 2016-08-19
#
# ~~~~~~~~
# license:
# ~~~~~~~~
# Copyright (C) 2016 Prentice Lab
#
# This file is part of the SPLASH model.
#
# SPLASH is free software: you can redistribute it and/or modify it under
# the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 2.1 of the License, or
# (at your option) any later version.
#
# SPLASH is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with SPLASH. If not, see <http://www.gnu.org/licenses/>.
#
# ~~~~~~~~~
# citation:
# ~~~~~~~~~
# T. W. Davis, I. C. Prentice, B. D. Stocker, R. J. Whitley, H. Wang, B. J.
# Evans, A. V. Gallego-Sala, M. T. Sykes, and W. Cramer, Simple process-led
# algorithms for simulating habitats (SPLASH): Robust indices of radiation
# evapo-transpiration and plant-available moisture, Geoscientific Model
# Development, 2016 (in progress)
#
# ~~~~~~~~~~~~
# description:
# ~~~~~~~~~~~~
# This script contains functions to calculate daily radiation, i.e.:
# berger_tls(double n, double N)
# density_h2o(double tc, double pa)
# dcos(double d)
# dsin(double d)
#
# ~~~~~~~~~~
# changelog:
# ~~~~~~~~~~
# - fixed Cooper's and Spencer's declination angle equations [14.11.25]
# - replaced simplified_kepler with full_kepler method [14.11.25]
# - added berger_tls function [15.01.13]
# - updated evap function (similar to stash.py EVAP class) [15.01.13]
# - updated some documentation [16.05.27]
# - fixed HN- equation (iss#13) [16.08.19]
#
#### IMPORT SOURCES ##########################################################
#source("const.R")
#### DEFINE FUNCTIONS ########################################################
# ************************************************************************
# Name: berger_tls
# Inputs: - double, day of year (n)
# - double, days in year (N)
# Returns: numeric list, true anomaly and true longitude
# Features: Returns true anomaly and true longitude for a given day.
# Depends: - ke ............. eccentricity of earth's orbit, unitless
# - komega ......... longitude of perihelion
# Ref: Berger, A. L. (1978), Long term variations of daily insolation
# and quaternary climatic changes, J. Atmos. Sci., 35, 2362-2367.
# ************************************************************************
berger_tls <- function(n, N) {
# Variable substitutes:
xee <- ke^2
xec <- ke^3
xse <- sqrt(1 - ke^2)
# Mean longitude for vernal equinox:
xlam <- (ke/2.0 + xec/8.0)*(1 + xse)*sin(komega*pir) -
xee/4.0*(0.5 + xse)*sin(2.0*komega*pir) +
xec/8.0*(1.0/3.0 + xse)*sin(3.0*komega*pir)
xlam <- 2.0*xlam/pir
# Mean longitude for day of year:
dlamm <- xlam + (n - 80.0)*(360.0/N)
# Mean anomaly:
anm <- dlamm - komega
ranm <- anm*pir
# True anomaly (uncorrected):
ranv <- ranm + (2.0*ke - xec/4.0)*sin(ranm) +
5.0/4.0*xee*sin(2.0*ranm) +
13.0/12.0*xec*sin(3.0*ranm)
anv <- ranv/pir
# True longitude:
my_tls <- anv + komega
if (my_tls < 0){
my_tls <- my_tls + 360
} else if (my_tls > 360) {
my_tls <- my_tls - 360
}
# True anomaly:
my_nu <- my_tls - komega
if (my_nu < 0){
my_nu <- my_nu + 360
}
return (c(my_nu, my_tls))
}
# ************************************************************************
# Name: dcos
# Inputs: double (d), angle in degrees
# Returns: double, cosine of angle
# Features: This function calculates the cosine of an angle (d) given
# in degrees.
# Depends: pir
# Ref: This script is based on the Javascript function written by
# C Johnson, Theoretical Physicist, Univ of Chicago
# - 'Equation of Time' URL: http://mb-soft.com/public3/equatime.html
# - Javascript URL: http://mb-soft.com/believe/txx/astro22.js
# ************************************************************************
dcos <- function(d) {
cos(d*pir)
}
# ************************************************************************
# Name: dsin
# Inputs: double (d), angle in degrees
# Returns: double, sine of angle
# Features: This function calculates the sine of an angle (d) given
# in degrees.
# Depends: pir
# ************************************************************************
dsin <- function(d) {
sin(d*pir)
}
# ************************************************************************
# Name: calc_daily_solar
# Inputs: - double, latitude, degrees (lat)
# - double, day of year (n)
# - double, elevation (elv) *optional
# - double, year (y) *optional
# - double, fraction of sunshine hours (sf) *optional
# - double, mean daily air temperature, deg C (tc) *optional
# Returns: list object (et.srad)
# $nu_deg ............ true anomaly, degrees
# $lambda_deg ........ true longitude, degrees
# $dr ................ distance factor, unitless
# $delta_deg ......... declination angle, degrees
# $hs_deg ............ sunset angle, degrees
# $ra_j.m2 ........... daily extraterrestrial radiation, J/m^2
# $tau ............... atmospheric transmittivity, unitless
# $ppfd_mol.m2 ....... daily photosyn. photon flux density, mol/m^2
# $hn_deg ............ net radiation hour angle, degrees
# $rn_j.m2 ........... daily net radiation, J/m^2
# $rnn_j.m2 .......... daily nighttime net radiation, J/m^2
# Features: This function calculates daily radiation fluxes.
# Depends: - kalb_sw ........ shortwave albedo
# - kalb_vis ....... visible light albedo
# - kb ............. empirical constant for longwave rad
# - kc ............. empirical constant for shortwave rad
# - kd ............. empirical constant for shortwave rad
# - ke ............. eccentricity
# - keps ........... obliquity
# - kfFEC .......... from-flux-to-energy conversion, umol/J
# - kGsc ........... solar constant
# - berger_tls() ... calc true anomaly and longitude
# - dcos() ......... cos(x*pi/180), where x is in degrees
# - dsin() ......... sin(x*pi/180), where x is in degrees
# - julian_day() ... date to julian day
# ************************************************************************
calc_daily_solar <- function(lat, n, elv=0, y=0, sf=1, tc=23.0) {
# ~~~~~~~~~~~~~~~~~~~~~~~~ FUNCTION WARNINGS ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ #
if (lat > 90 || lat < -90) {
stop("Warning: Latitude outside range of validity (-90 to 90)!")
}
if (n < 1 || n > 366) {
stop("Warning: Day outside range of validity (1 to 366)!")
}
# ~~~~~~~~~~~~~~~~~~~~~~~ FUNCTION VARIABLES ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ #
solar <- list()
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  # 01. Calculate the number of days in the year (kN), days
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
if (y == 0) {
kN <- 365
} else {
kN <- (julian_day(y + 1, 1, 1) - julian_day(y, 1, 1))
}
solar$kN <- kN
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 02. Calculate heliocentric longitudes (nu and lambda), degrees
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
my_helio <- berger_tls(n, kN)
nu <- my_helio[1]
lam <- my_helio[2]
solar$nu_deg <- nu
solar$lambda_deg <- lam
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 03. Calculate distance factor (dr), unitless
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Berger et al. (1993)
kee <- ke^2
rho <- (1 - kee)/(1 + ke*dcos(nu))
dr <- (1/rho)^2
solar$rho <- rho
solar$dr <- dr
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 04. Calculate the declination angle (delta), degrees
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Woolf (1968)
delta <- asin(dsin(lam)*dsin(keps))
delta <- delta/pir
solar$delta_deg <- delta
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 05. Calculate variable substitutes (u and v), unitless
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ru <- dsin(delta)*dsin(lat)
rv <- dcos(delta)*dcos(lat)
solar$ru <- ru
solar$rv <- rv
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 06. Calculate the sunset hour angle (hs), degrees
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Note: u/v equals tan(delta) * tan(lat)
if (ru/rv >= 1.0) {
hs <- 180 # Polar day (no sunset)
} else if (ru/rv <= -1.0) {
hs <- 0 # Polar night (no sunrise)
} else {
hs <- acos(-1.0*ru/rv)
hs <- hs / pir
}
solar$hs_deg <- hs
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 07. Calculate daily extraterrestrial radiation (ra_d), J/m^2
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  # ref: Eq. 1.10.3, Duffie & Beckman (1993)
ra_d <- (86400/pi)*kGsc*dr*(ru*pir*hs + rv*dsin(hs))
solar$ra_j.m2 <- ra_d
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 08. Calculate transmittivity (tau), unitless
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# ref: Eq. 11, Linacre (1968); Eq. 2, Allen (1996)
tau_o <- (kc + kd*sf)
tau <- tau_o*(1 + (2.67e-5)*elv)
solar$tau_o <- tau_o
solar$tau <- tau
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 09. Calculate daily photosynthetic photon flux density (ppfd_d), mol/m^2
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ppfd_d <- (1e-6)*kfFEC*(1 - kalb_vis)*tau*ra_d
solar$ppfd_mol.m2 <- ppfd_d
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 10. Estimate net longwave radiation (rnl), W/m^2
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
rnl <- (kb + (1 - kb)*sf)*(kA - tc)
solar$rnl_w.m2 <- rnl
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  # 11. Calculate variable substitute (rw), W/m^2
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
rw <- (1 - kalb_sw)*tau*kGsc*dr
solar$rw <- rw
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 12. Calculate net radiation cross-over angle (hn), degrees
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
if ((rnl - rw*ru)/(rw*rv) >= 1.0) {
hn <- 0 # Net radiation is negative all day
} else if ((rnl - rw*ru)/(rw*rv) <= -1.0) {
hn <- 180 # Net radiation is positive all day
} else {
hn <- acos((rnl - rw*ru)/(rw*rv))
hn <- hn/pir
}
solar$hn_deg <- hn
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 13. Calculate daytime net radiation (rn_d), J/m^2
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
rn_d <- (86400/pi)*(hn*pir*(rw*ru - rnl) + rw*rv*dsin(hn))
solar$rn_j.m2 <- rn_d
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 14. Calculate nighttime net radiation (rnn_d), J/m^2
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# fixed iss#13
rnn_d <- (86400/pi)*(
rw*rv*(dsin(hs) - dsin(hn)) +
rw*ru*(hs - hn)*pir -
rnl*(pi - hn*pir)
)
solar$rnn_j.m2 <- rnn_d
# ~~~~~~~~~~~~~~~~~~~~~~~~~~ RETURN VALUES ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ #
return(solar)
}
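# ************************************************************************
# Hedged usage sketch (added, not in the original file): with the constants from
# const.R loaded, daily fluxes for e.g. day 172 at 37.7 degrees N, 142 m elevation,
# sf = 1 and tc = 23 deg C would be obtained as:
# source("const.R")
# my_solar <- calc_daily_solar(lat = 37.7, n = 172, elv = 142, y = 2000, sf = 1, tc = 23.0)
# my_solar$ra_j.m2   # daily extraterrestrial radiation, J/m^2
# ************************************************************************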
# ************************************************************************
# Name: julian_day
# Inputs: - double, year (y)
# - double, month (m)
# - double, day of month (i)
# Returns: double, Julian day
# Features: This function converts a date in the Gregorian calendar
#           to a Julian day number (i.e., a method of consecutive
# numbering of days---does not have anything to do with
# the Julian calendar!)
# * valid for dates after -4712 January 1 (i.e., jde >= 0)
# Ref: Eq. 7.1 J. Meeus (1991), Chapter 7 "Julian Day", Astronomical
# Algorithms
# ************************************************************************
julian_day <- function(y, m, i) {
if (m <= 2) {
y <- y - 1
m <- m + 12
}
a <- floor(y/100)
b <- 2 - a + floor(a/4)
jde <- floor(365.25*(y + 4716)) + floor(30.6001*(m + 1)) + i + b - 1524.5
return(jde)
}
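# ************************************************************************
# Hedged check (added): Meeus' reference epoch J2000.0 corresponds to JD 2451545.0,
# so julian_day(2000, 1, 1.5) should return 2451545.
# julian_day(2000, 1, 1.5)   # 2451545
# ************************************************************************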
|
/splash_r_prentice/solar.R
|
no_license
|
vedereka/PMIP4_Benchm_proj
|
R
| false | false | 13,376 |
r
|
# R version 3.2.3 (2015-12-10) -- "Wooden Christmas-Tree"
#
# solar.R
#
# VERSION: 1.0-r2
# LAST UPDATED: 2016-08-19
#
# ~~~~~~~~
# license:
# ~~~~~~~~
# Copyright (C) 2016 Prentice Lab
#
# This file is part of the SPLASH model.
#
# SPLASH is free software: you can redistribute it and/or modify it under
# the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 2.1 of the License, or
# (at your option) any later version.
#
# SPLASH is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with SPLASH. If not, see <http://www.gnu.org/licenses/>.
#
# ~~~~~~~~~
# citation:
# ~~~~~~~~~
# T. W. Davis, I. C. Prentice, B. D. Stocker, R. J. Whitley, H. Wang, B. J.
# Evans, A. V. Gallego-Sala, M. T. Sykes, and W. Cramer, Simple process-led
# algorithms for simulating habitats (SPLASH): Robust indices of radiation
# evapo-transpiration and plant-available moisture, Geoscientific Model
# Development, 2016 (in progress)
#
# ~~~~~~~~~~~~
# description:
# ~~~~~~~~~~~~
# This script contains functions to calculate daily radiation, i.e.:
# berger_tls(double n, double N)
# density_h2o(double tc, double pa)
# dcos(double d)
# dsin(double d)
#
# ~~~~~~~~~~
# changelog:
# ~~~~~~~~~~
# - fixed Cooper's and Spencer's declination angle equations [14.11.25]
# - replaced simplified_kepler with full_kepler method [14.11.25]
# - added berger_tls function [15.01.13]
# - updated evap function (similar to stash.py EVAP class) [15.01.13]
# - updated some documentation [16.05.27]
# - fixed HN- equation (iss#13) [16.08.19]
#
#### IMPORT SOURCES ##########################################################
#source("const.R")
#### DEFINE FUNCTIONS ########################################################
# ************************************************************************
# Name: berger_tls
# Inputs: - double, day of year (n)
# - double, days in year (N)
# Returns: numeric list, true anomaly and true longitude
# Features: Returns true anomaly and true longitude for a given day.
# Depends: - ke ............. eccentricity of earth's orbit, unitless
# - komega ......... longitude of perihelion
# Ref: Berger, A. L. (1978), Long term variations of daily insolation
# and quaternary climatic changes, J. Atmos. Sci., 35, 2362-2367.
# ************************************************************************
berger_tls <- function(n, N) {
# Variable substitutes:
xee <- ke^2
xec <- ke^3
xse <- sqrt(1 - ke^2)
# Mean longitude for vernal equinox:
xlam <- (ke/2.0 + xec/8.0)*(1 + xse)*sin(komega*pir) -
xee/4.0*(0.5 + xse)*sin(2.0*komega*pir) +
xec/8.0*(1.0/3.0 + xse)*sin(3.0*komega*pir)
xlam <- 2.0*xlam/pir
# Mean longitude for day of year:
dlamm <- xlam + (n - 80.0)*(360.0/N)
# Mean anomaly:
anm <- dlamm - komega
ranm <- anm*pir
# True anomaly (uncorrected):
ranv <- ranm + (2.0*ke - xec/4.0)*sin(ranm) +
5.0/4.0*xee*sin(2.0*ranm) +
13.0/12.0*xec*sin(3.0*ranm)
anv <- ranv/pir
# True longitude:
my_tls <- anv + komega
if (my_tls < 0){
my_tls <- my_tls + 360
} else if (my_tls > 360) {
my_tls <- my_tls - 360
}
# True anomaly:
my_nu <- my_tls - komega
if (my_nu < 0){
my_nu <- my_nu + 360
}
return (c(my_nu, my_tls))
}
# ************************************************************************
# Name: dcos
# Inputs: double (d), angle in degrees
# Returns: double, cosine of angle
# Features: This function calculates the cosine of an angle (d) given
# in degrees.
# Depends: pir
# Ref: This script is based on the Javascript function written by
# C Johnson, Theoretical Physicist, Univ of Chicago
# - 'Equation of Time' URL: http://mb-soft.com/public3/equatime.html
# - Javascript URL: http://mb-soft.com/believe/txx/astro22.js
# ************************************************************************
dcos <- function(d) {
cos(d*pir)
}
# ************************************************************************
# Name: dsin
# Inputs: double (d), angle in degrees
# Returns: double, sine of angle
# Features: This function calculates the sine of an angle (d) given
# in degrees.
# Depends: pir
# ************************************************************************
dsin <- function(d) {
sin(d*pir)
}
# ************************************************************************
# Name: calc_daily_solar
# Inputs: - double, latitude, degrees (lat)
# - double, day of year (n)
# - double, elevation (elv) *optional
# - double, year (y) *optional
# - double, fraction of sunshine hours (sf) *optional
# - double, mean daily air temperature, deg C (tc) *optional
# Returns: list object (et.srad)
# $nu_deg ............ true anomaly, degrees
# $lambda_deg ........ true longitude, degrees
# $dr ................ distance factor, unitless
# $delta_deg ......... declination angle, degrees
# $hs_deg ............ sunset angle, degrees
# $ra_j.m2 ........... daily extraterrestrial radiation, J/m^2
# $tau ............... atmospheric transmittivity, unitless
# $ppfd_mol.m2 ....... daily photosyn. photon flux density, mol/m^2
# $hn_deg ............ net radiation hour angle, degrees
# $rn_j.m2 ........... daily net radiation, J/m^2
# $rnn_j.m2 .......... daily nighttime net radiation, J/m^2
# Features: This function calculates daily radiation fluxes.
# Depends: - kalb_sw ........ shortwave albedo
# - kalb_vis ....... visible light albedo
# - kb ............. empirical constant for longwave rad
# - kc ............. empirical constant for shortwave rad
# - kd ............. empirical constant for shortwave rad
# - ke ............. eccentricity
# - keps ........... obliquity
# - kfFEC .......... from-flux-to-energy conversion, umol/J
# - kGsc ........... solar constant
# - berger_tls() ... calc true anomaly and longitude
# - dcos() ......... cos(x*pi/180), where x is in degrees
# - dsin() ......... sin(x*pi/180), where x is in degrees
# - julian_day() ... date to julian day
# ************************************************************************
calc_daily_solar <- function(lat, n, elv=0, y=0, sf=1, tc=23.0) {
# ~~~~~~~~~~~~~~~~~~~~~~~~ FUNCTION WARNINGS ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ #
if (lat > 90 || lat < -90) {
stop("Warning: Latitude outside range of validity (-90 to 90)!")
}
if (n < 1 || n > 366) {
stop("Warning: Day outside range of validity (1 to 366)!")
}
# ~~~~~~~~~~~~~~~~~~~~~~~ FUNCTION VARIABLES ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ #
solar <- list()
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  # 01. Calculate the number of days in the year (kN), days
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
if (y == 0) {
kN <- 365
} else {
kN <- (julian_day(y + 1, 1, 1) - julian_day(y, 1, 1))
}
solar$kN <- kN
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 02. Calculate heliocentric longitudes (nu and lambda), degrees
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
my_helio <- berger_tls(n, kN)
nu <- my_helio[1]
lam <- my_helio[2]
solar$nu_deg <- nu
solar$lambda_deg <- lam
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 03. Calculate distance factor (dr), unitless
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Berger et al. (1993)
kee <- ke^2
rho <- (1 - kee)/(1 + ke*dcos(nu))
dr <- (1/rho)^2
solar$rho <- rho
solar$dr <- dr
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 04. Calculate the declination angle (delta), degrees
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Woolf (1968)
delta <- asin(dsin(lam)*dsin(keps))
delta <- delta/pir
solar$delta_deg <- delta
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 05. Calculate variable substitutes (u and v), unitless
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ru <- dsin(delta)*dsin(lat)
rv <- dcos(delta)*dcos(lat)
solar$ru <- ru
solar$rv <- rv
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 06. Calculate the sunset hour angle (hs), degrees
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Note: u/v equals tan(delta) * tan(lat)
if (ru/rv >= 1.0) {
hs <- 180 # Polar day (no sunset)
} else if (ru/rv <= -1.0) {
hs <- 0 # Polar night (no sunrise)
} else {
hs <- acos(-1.0*ru/rv)
hs <- hs / pir
}
solar$hs_deg <- hs
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 07. Calculate daily extraterrestrial radiation (ra_d), J/m^2
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  # ref: Eq. 1.10.3, Duffie & Beckman (1993)
ra_d <- (86400/pi)*kGsc*dr*(ru*pir*hs + rv*dsin(hs))
solar$ra_j.m2 <- ra_d
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 08. Calculate transmittivity (tau), unitless
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# ref: Eq. 11, Linacre (1968); Eq. 2, Allen (1996)
tau_o <- (kc + kd*sf)
tau <- tau_o*(1 + (2.67e-5)*elv)
solar$tau_o <- tau_o
solar$tau <- tau
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 09. Calculate daily photosynthetic photon flux density (ppfd_d), mol/m^2
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ppfd_d <- (1e-6)*kfFEC*(1 - kalb_vis)*tau*ra_d
solar$ppfd_mol.m2 <- ppfd_d
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 10. Estimate net longwave radiation (rnl), W/m^2
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
rnl <- (kb + (1 - kb)*sf)*(kA - tc)
solar$rnl_w.m2 <- rnl
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  # 11. Calculate variable substitute (rw), W/m^2
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
rw <- (1 - kalb_sw)*tau*kGsc*dr
solar$rw <- rw
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 12. Calculate net radiation cross-over angle (hn), degrees
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
if ((rnl - rw*ru)/(rw*rv) >= 1.0) {
hn <- 0 # Net radiation is negative all day
} else if ((rnl - rw*ru)/(rw*rv) <= -1.0) {
hn <- 180 # Net radiation is positive all day
} else {
hn <- acos((rnl - rw*ru)/(rw*rv))
hn <- hn/pir
}
solar$hn_deg <- hn
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 13. Calculate daytime net radiation (rn_d), J/m^2
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
rn_d <- (86400/pi)*(hn*pir*(rw*ru - rnl) + rw*rv*dsin(hn))
solar$rn_j.m2 <- rn_d
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 14. Calculate nighttime net radiation (rnn_d), J/m^2
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# fixed iss#13
rnn_d <- (86400/pi)*(
rw*rv*(dsin(hs) - dsin(hn)) +
rw*ru*(hs - hn)*pir -
rnl*(pi - hn*pir)
)
solar$rnn_j.m2 <- rnn_d
# ~~~~~~~~~~~~~~~~~~~~~~~~~~ RETURN VALUES ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ #
return(solar)
}
# ************************************************************************
# Name: julian_day
# Inputs: - double, year (y)
# - double, month (m)
# - double, day of month (i)
# Returns: double, Julian day
# Features: This function converts a date in the Gregorian calendar
#           to a Julian day number (i.e., a method of consecutive
# numbering of days---does not have anything to do with
# the Julian calendar!)
# * valid for dates after -4712 January 1 (i.e., jde >= 0)
# Ref: Eq. 7.1 J. Meeus (1991), Chapter 7 "Julian Day", Astronomical
# Algorithms
# ************************************************************************
julian_day <- function(y, m, i) {
if (m <= 2) {
y <- y - 1
m <- m + 12
}
a <- floor(y/100)
b <- 2 - a + floor(a/4)
jde <- floor(365.25*(y + 4716)) + floor(30.6001*(m + 1)) + i + b - 1524.5
return(jde)
}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/get_pfts.R
\name{get_pfts}
\alias{get_pfts}
\title{Extract PFTs from css file}
\usage{
get_pfts(css_file_path, delim = " ", reference = pft_lookup)
}
\arguments{
\item{css_file_path}{Path to CSS file}
\item{delim}{File delimiter. Default = " "}
\item{reference}{PFT reference table (default = pft_lookup)}
}
\description{
Extract PFTs from css file
}
|
/man/get_pfts.Rd
|
no_license
|
ashiklom/edr-da
|
R
| false | true | 431 |
rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/get_pfts.R
\name{get_pfts}
\alias{get_pfts}
\title{Extract PFTs from css file}
\usage{
get_pfts(css_file_path, delim = " ", reference = pft_lookup)
}
\arguments{
\item{css_file_path}{Path to CSS file}
\item{delim}{File delimiter. Default = " "}
\item{reference}{PFT reference table (default = pft_lookup)}
}
\description{
Extract PFTs from css file
}
|
gumbelplot <- function(model_object, dist = NULL, method = if ('ggplot2' %in% installed.packages()[,'Package']) {'ggplot'} else {'base'}) {
# res (init res will need to be rewritten for nim)
dta <- as.data.table(model_object$data)
para <- model_object$REG
scaling.factor <- model_object$scaling_factor
if(is.null(dist)) {dist <- attr(model_object, 'sim.call')$dist}
# dta <- data.table(attributes(n)$data)
res.gp <- suppressWarnings(melt(dta))
res.gp <- data.table(variable = names(dta), sf = scaling.factor)[res.gp, on = c('variable')]
res.gp <- res.gp[!is.na(value), p := (rank(value) - .3) / (length(value) + .4), by = variable]
res.gp <- res.gp[, gumbel.variate := -log(-log(as.numeric(p)))]
res.gp <- res.gp[, scaled.value := value/sf]
p <- seq(min(res.gp$p, na.rm = T), max(res.gp$p, na.rm = T), .001)
regional <- data.table(x = -log(-log(p)), y = do.call(paste0('q', dist), list(p, para)))
# graphics
if(method == 'base') {
sres.gp <- split(res.gp[!is.na(res.gp$scaled.value),], res.gp$variable[!is.na(res.gp$scaled.value)])
plot(NULL,
xlim = c(min(regional$x), max(regional$x)*1.15),
ylim = c(min(res.gp$scaled.value, na.rm = T), max(res.gp$scaled.value, na.rm = T)),
bty = 'l',
xlab = expression(-log(-log(p))),
ylab = 'Value',
main = 'Gumbel plot')
grid()
lapply(sres.gp, function(x) {
points(sort(x$gumbel.variate),
sort(x$scaled.value),
pch = 21,
col = 'grey15',
bg = '#36648b50',
cex = .75)
lines(sort(x$gumbel.variate),
sort(x$scaled.value),
col = '#36648b50')
})
lines(regional,
type = 'l',
col = 'red4',
lwd = .75)
}
if(method %in% c('ggplot', 'plotly')) {
gp <- ggplot2::ggplot(res.gp) +
ggplot2::geom_line(ggplot2::aes(x = gumbel.variate, y = scaled.value, group = variable), colour = 'steelblue4', alpha = .5, na.rm = T) +
ggplot2::geom_point(ggplot2::aes(x = gumbel.variate, y = scaled.value, group = variable), colour = 'grey15', fill = 'steelblue4', alpha = .5, shape = 21, na.rm = T) +
ggplot2::geom_line(data = regional, ggplot2::aes(x = x, y = y), col = 'red4', lwd = .75) +
ggplot2::theme_bw() +
ggplot2::labs(x = '-log(-log(p))', y = 'Value', title = 'Gumbel plot') +
ggplot2::theme(plot.title = ggplot2::element_text(hjust = .5),
                     panel.border = ggplot2::element_blank(),
                     axis.line = ggplot2::element_line(colour = 'black'))
if(method == 'plotly') {
gp <- plotly::ggplotly(gp)
}
return(gp)
}
}
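# Hedged illustration (added): the empirical probabilities used above are plotting
# positions of the form p_i = (rank_i - 0.3)/(n + 0.4); on a toy sample the Gumbel
# variate transform applied to them is simply:
# x <- c(12, 35, 20, 48, 27)
# p <- (rank(x) - 0.3)/(length(x) + 0.4)
# -log(-log(p))   # x-axis positions on the Gumbel plot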
growthcurve <- function (model_object, fitted_bootstrap, dist = NULL, outer_ribbon = c(0.05, 0.95), inner_ribbon = c(0.25, 0.75), rp = T, return.period = c(5, 10, 20, 50, 100), method = if ('ggplot2' %in% installed.packages()[,'Package']) {'ggplot'} else {'base'}) {
# res (init res will need to be rewritten for nim)
prbs <- sort(c(outer_ribbon, inner_ribbon))
para <- model_object$REG
if(is.null(dist)) {dist <- attr(model_object, 'sim.call')$dist}
qs <- seq(.01, 1 - 1/max(return.period)*.5, 1/max(return.period))
qaux <- data.table(rbindlist(lapply(fitted_bootstrap, function(x) {data.frame(q = do.call(paste0('q',dist), list(qs, x$REG)))}),
idcol = 'sample'),
probs = seq_along(qs))
q <- qaux[, .(val = quantile(q, prbs),
q = c('rib_1_min', 'rib_2_min', 'rib_2_max', 'rib_1_max')),
by = probs]
res.gc <- cbind(dcast(q, probs ~ q, value.var = 'val'),
data.table(gumbel.variate = -log(-log(qs)),
scaled.value = qgpa(qs, para)))
# graphics
if(method == 'base') {
plot(NULL,
xlim = c(min(res.gc$gumbel.variate), max(res.gc$gumbel.variate)),
ylim = c(min(res.gc[,c('rib_1_min', 'rib_2_min', 'rib_2_max', 'rib_1_max')]),
max(res.gc[,c('rib_1_min', 'rib_2_min', 'rib_2_max', 'rib_1_max')])),
bty = 'l',
xlab = expression(-log(-log(p))),
ylab = 'Value',
main = 'Growth curve')
grid()
polygon(c(res.gc$gumbel.variate, rev(res.gc$gumbel.variate)),
c(res.gc$rib_1_max, rev(res.gc$rib_1_min)),
col = '#36648b40', border = NA)
polygon(c(res.gc$gumbel.variate, rev(res.gc$gumbel.variate)),
c(res.gc$rib_2_max, rev(res.gc$rib_2_min)),
col = '#36648b80', border = NA)
lines(res.gc$gumbel.variate,
res.gc$scaled.value,
type = 'l',
col = 'red4',
lwd = .75)
axis.lim <- par('usr')
if(rp) {
rp.lab <- return.period
rp.x <- -log(-log(1 - 1/rp.lab))
rp.y <- axis.lim[3] + (axis.lim[4] - axis.lim[3])*.05
axis(side = 3, at = rp.x, pos = rp.y, labels = rp.lab)
text(mean(rp.x[rev(rank(rp.lab))[1:2]]), rp.y + par('cxy')[2], 'Return period', adj = c(.75, -2.75))
}
}
if(method %in% c('ggplot', 'plotly')) {
gc <- ggplot2::ggplot(res.gc) +
ggplot2::geom_ribbon(ggplot2::aes(x = gumbel.variate, ymin = rib_1_min, ymax = rib_1_max), fill = 'steelblue4', alpha = .4) +
ggplot2::geom_ribbon(ggplot2::aes(x = gumbel.variate, ymin = rib_2_min, ymax = rib_2_max), fill = 'steelblue4', alpha = .8) +
ggplot2::geom_line(ggplot2::aes(x = gumbel.variate, y = scaled.value), col = 'red4', lwd = .75) +
ggplot2::theme_bw() +
ggplot2::labs(x = '-log(-log(p))', y = 'Value', title = 'Growth curve') +
ggplot2::theme(plot.title = ggplot2::element_text(hjust = .5),
panel.border = ggplot2::element_blank(),
axis.line = ggplot2::element_line(colour = 'black'))
if(rp) {
axis.lim <- c(-log(-log(range(qs))), range(qaux$q))
rp.lab <- return.period
rp.x <- -log(-log(1 - 1/rp.lab))
rp.y <- axis.lim[3] + (axis.lim[4] - axis.lim[3])*.05
rp.dta <- data.table(rp.x, rp.y, rp.lab)
gc <- gc + ggplot2::geom_point(data = rp.dta, ggplot2::aes(x = rp.x, y = rp.y), shape = '|', size = 3) +
ggplot2::geom_line(data = rp.dta, ggplot2::aes(x = rp.x, y = rp.y)) +
ggplot2::geom_text(data = rp.dta, ggplot2::aes(x = rp.x, y = rp.y*2, label = rp.lab)) +
ggplot2::geom_text(data = rp.dta, ggplot2::aes(x = mean(rp.x[rev(rank(rp.lab))[1:2]]), y = rp.y[1]*3.5), label = 'Return period', fontface = 1)
}
if(method == 'plotly') {
gc <- plotly::ggplotly(gc)
}
return(gc)
}
}
qq <- function(...) {UseMethod('qq')}
qq.sim <- function(model_object, dist = NULL, method = if ('ggplot2' %in% installed.packages()[,'Package']) {'ggplot'} else {'base'}) {
dta <- as.data.table(model_object$data)
para <- model_object$REG
scaling.factor <- model_object$scaling_factor
if(is.null(dist)) {dist <- attr(model_object, 'sim.call')$dist}
res.qq <- suppressWarnings(melt(dta))
res.qq <- data.table(variable = names(dta), sf = scaling.factor)[res.qq, on = c('variable')]
res.qq <- res.qq[, scaled.value := value/sf]
if(method == 'base') {
inipar <- par()
par(pty = 's')
sres.qq <- split(res.qq[!is.na(res.qq$scaled.value),], res.qq$variable[!is.na(res.qq$scaled.value)])
plot(NULL,
xlim = c(0, max(res.qq$scaled.value, na.rm = T)*1.15),
ylim = c(0, max(res.qq$scaled.value, na.rm = T)*1.15),
pch = 21,
col = 'grey15',
bg = '#36648b90',
bty = 'l',
xlab = 'theoretical',
ylab = 'sample',
main = 'qqplot')
grid()
lapply(sres.qq, function(x) {
points(sort(x$scaled.value),
sort(rgpa(length(x$scaled.value), para)),
pch = 21,
col = 'grey15',
bg = '#36648b90')
})
abline(0,1, col = 'red4')
suppressWarnings(par(inipar))
}
if(method %in% c('ggplot', 'plotly')) {
qq <- ggplot2::ggplot(res.qq) +
ggplot2::geom_qq(ggplot2::aes(sample = scaled.value, group = variable), geom = 'point', distribution = noquote(paste0('q', dist)), dparams = list(para), colour = 'grey15', fill = 'steelblue4', shape = 21, na.rm = T) +
ggplot2::geom_abline(colour = ('red4')) +
ggplot2::coord_fixed() +
ggplot2::lims(x = c(0, max(res.qq$value/res.qq$sf, na.rm = T)),
y = c(0, max(res.qq$value/res.qq$sf, na.rm = T))) +
ggplot2::theme_bw() +
ggplot2::theme(plot.title = ggplot2::element_text(hjust = .5),
panel.border = ggplot2::element_blank(),
axis.line = ggplot2::element_line(colour = 'black'))
if(method == 'plotly') {
qq <- plotly::ggplotly(qq)
}
return(qq)
}
}
# qq.simsample <- function(model_object, fitted_bootstrap, dist = NULL, ribbon.1 = c(0.05, 0.95), ribbon.2 = c(0.25, 0.75), method = if ('ggplot2' %in% installed.packages()[,'Package']) {'ggplot'} else {'base'}) {
#
# dta <- as.data.table(model_object$data)
# para <- model_object$REG
# scaling.factor <- model_object$scaling_factor
#
# if(is.null(dist)) {dist <- attr(dta.fit, 'sim.call')$dist}
#
# prbs <- sort(c(ribbon.1, ribbon.2))
#
# xxx <- do.call(cbind, lapply(fitted_bootstrap, function(x) x$data))
#
# ########################################################
#
# qaux <- data.table(rbindlist(lapply(fitted_bootstrap, function(x) {data.frame(q = do.call(paste0('q',dist), list(qs, x$REG)))}),
# idcol = 'sample'),
# probs = seq_along(qs))
# q <- qaux[, .(val = quantile(q, prbs),
# q = c('rib_1_min', 'rib_2_min', 'rib_2_max', 'rib_1_max')),
# by = probs]
# res.gc <- cbind(dcast(q, probs ~ q, value.var = 'val'),
# data.table(gumbel.variate = -log(-log(qs)),
# scaled.value = qgpa(qs, para)))
# }
ratiodiagram <- function(taus) {
num <- seq(min(taus[,1])*.5, max(taus[,1])*1.2, .01)
mr <- data.table(t3 = num, t4 = num*(1 + 5*num)/(5 + num))
names(taus) <- names(mr)
lmrd <- ggplot2::ggplot(data = NULL, ggplot2::aes(x = t3, y = t4)) +
ggplot2::geom_line(data = mr, colour = 'red4') +
ggplot2::geom_point(data = taus, colour = 'grey15', fill = 'steelblue4', shape = 21) +
ggplot2::theme_classic() +
ggplot2::labs(x = 'L-skewness', y = 'L-kurtosis', title = 'GPA L-moment ratio diagram')
return(lmrd)
}
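# Hypothetical usage sketch (not part of the original file). It assumes `fit` is a fitted
# regional model of the form these functions expect ($data, $REG, $scaling_factor and a
# 'sim.call' attribute) and `boot` is a list of bootstrap refits with the same structure.
# gumbelplot(fit, dist = 'gpa', method = 'ggplot')
# growthcurve(fit, boot, dist = 'gpa', return.period = c(5, 10, 20, 50, 100))
# qq(fit, dist = 'gpa')
# ratiodiagram(taus)   # `taus`: a two-column table of sample L-skewness and L-kurtosis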
|
/R/auxiliary_functions/aux_graphics.R
|
no_license
|
hanel/LmomGPA
|
R
| false | false | 10,605 |
r
|
#' BarOrPub
#'
#' A bar or pub.
#'
#'
#' @param id identifier for the object (URI)
#' @param starRating (Rating or Rating type.) An official rating for a lodging business or food establishment, e.g. from national associations or standards bodies. Use the author property to indicate the rating organization, e.g. as an Organization with name such as (e.g. HOTREC, DEHOGA, WHR, or Hotelstars).
#' @param servesCuisine (Text type.) The cuisine of the restaurant.
#' @param menu (URL or Text or Menu type.) Either the actual menu as a structured representation, as text, or a URL of the menu.
#' @param hasMenu (URL or Text or Menu type.) Either the actual menu as a structured representation, as text, or a URL of the menu.
#' @param acceptsReservations (URL or Text or Boolean type.) Indicates whether a FoodEstablishment accepts reservations. Values can be Boolean, an URL at which reservations can be made or (for backwards compatibility) the strings ```Yes``` or ```No```.
#' @param priceRange (Text type.) The price range of the business, for example ```$$$```.
#' @param paymentAccepted (Text type.) Cash, Credit Card, Cryptocurrency, Local Exchange Tradings System, etc.
#' @param openingHours (Text or Text type.) The general opening hours for a business. Opening hours can be specified as a weekly time range, starting with days, then times per day. Multiple days can be listed with commas ',' separating each day. Day or time ranges are specified using a hyphen '-'.* Days are specified using the following two-letter combinations: ```Mo```, ```Tu```, ```We```, ```Th```, ```Fr```, ```Sa```, ```Su```.* Times are specified using 24:00 time. For example, 3pm is specified as ```15:00```. * Here is an example: <code><time itemprop="openingHours" datetime="Tu,Th 16:00-20:00">Tuesdays and Thursdays 4-8pm</time></code>.* If a business is open 7 days a week, then it can be specified as <code><time itemprop="openingHours" datetime="Mo-Su">Monday through Sunday, all day</time></code>.
#' @param currenciesAccepted (Text type.) The currency accepted. Use standard formats: [ISO 4217 currency format](http://en.wikipedia.org/wiki/ISO_4217) e.g. "USD"; [Ticker symbol](https://en.wikipedia.org/wiki/List_of_cryptocurrencies) for cryptocurrencies e.g. "BTC"; well known names for [Local Exchange Tradings Systems](https://en.wikipedia.org/wiki/Local_exchange_trading_system) (LETS) and other currency types e.g. "Ithaca HOUR".
#' @param branchOf (Organization type.) The larger organization that this local business is a branch of, if any. Not to be confused with (anatomical)[[branch]].
#' @param telephone (Text or Text or Text or Text type.) The telephone number.
#' @param specialOpeningHoursSpecification (OpeningHoursSpecification type.) The special opening hours of a certain place. Use this to explicitly override general opening hours brought in scope by [[openingHoursSpecification]] or [[openingHours]].
#' @param smokingAllowed (Boolean type.) Indicates whether it is allowed to smoke in the place, e.g. in the restaurant, hotel or hotel room.
#' @param reviews (Review or Review or Review or Review or Review type.) Review of the item.
#' @param review (Review or Review or Review or Review or Review or Review or Review or Review type.) A review of the item.
#' @param publicAccess (Boolean type.) A flag to signal that the [[Place]] is open to public visitors. If this property is omitted there is no assumed default boolean value
#' @param photos (Photograph or ImageObject type.) Photographs of this place.
#' @param photo (Photograph or ImageObject type.) A photograph of this place.
#' @param openingHoursSpecification (OpeningHoursSpecification type.) The opening hours of a certain place.
#' @param maximumAttendeeCapacity (Integer or Integer type.) The total number of individuals that may attend an event or venue.
#' @param maps (URL type.) A URL to a map of the place.
#' @param map (URL type.) A URL to a map of the place.
#' @param logo (URL or ImageObject or URL or ImageObject or URL or ImageObject or URL or ImageObject or URL or ImageObject type.) An associated logo.
#' @param isicV4 (Text or Text or Text type.) The International Standard of Industrial Classification of All Economic Activities (ISIC), Revision 4 code for a particular organization, business person, or place.
#' @param isAccessibleForFree (Boolean or Boolean or Boolean or Boolean type.) A flag to signal that the item, event, or place is accessible for free.
#' @param hasMap (URL or Map type.) A URL to a map of the place.
#' @param globalLocationNumber (Text or Text or Text type.) The [Global Location Number](http://www.gs1.org/gln) (GLN, sometimes also referred to as International Location Number or ILN) of the respective organization, person, or place. The GLN is a 13-digit number used to identify parties and physical locations.
#' @param geo (GeoShape or GeoCoordinates type.) The geo coordinates of the place.
#' @param faxNumber (Text or Text or Text or Text type.) The fax number.
#' @param events (Event or Event type.) Upcoming or past events associated with this place or organization.
#' @param event (Event or Event or Event or Event or Event or Event or Event type.) Upcoming or past event associated with this place, organization, or action.
#' @param containsPlace (Place type.) The basic containment relation between a place and another that it contains.
#' @param containedInPlace (Place type.) The basic containment relation between a place and one that contains it.
#' @param containedIn (Place type.) The basic containment relation between a place and one that contains it.
#' @param branchCode (Text type.) A short textual code (also called "store code") that uniquely identifies a place of business. The code is typically assigned by the parentOrganization and used in structured URLs. For example, in the URL http://www.starbucks.co.uk/store-locator/etc/detail/3047 the code "3047" is a branchCode for a particular branch.
#' @param amenityFeature (LocationFeatureSpecification or LocationFeatureSpecification or LocationFeatureSpecification type.) An amenity feature (e.g. a characteristic or service) of the Accommodation. This generic property does not make a statement about whether the feature is included in an offer for the main accommodation or available at extra costs.
#' @param aggregateRating (AggregateRating or AggregateRating or AggregateRating or AggregateRating or AggregateRating or AggregateRating or AggregateRating or AggregateRating type.) The overall rating, based on a collection of reviews or ratings, of the item.
#' @param address (Text or PostalAddress or Text or PostalAddress or Text or PostalAddress or Text or PostalAddress or Text or PostalAddress type.) Physical address of the item.
#' @param additionalProperty (PropertyValue or PropertyValue or PropertyValue or PropertyValue type.) A property-value pair representing an additional characteristic of the entity, e.g. a product feature or another characteristic for which there is no matching property in schema.org. Note: Publishers should be aware that applications designed to use specific schema.org properties (e.g. http://schema.org/width, http://schema.org/color, http://schema.org/gtin13, ...) will typically expect such data to be provided using those properties, rather than using the generic property/value mechanism.
#' @param url (URL type.) URL of the item.
#' @param sameAs (URL type.) URL of a reference Web page that unambiguously indicates the item's identity. E.g. the URL of the item's Wikipedia page, Wikidata entry, or official website.
#' @param potentialAction (Action type.) Indicates a potential Action, which describes an idealized action in which this thing would play an 'object' role.
#' @param name (Text type.) The name of the item.
#' @param mainEntityOfPage (URL or CreativeWork type.) Indicates a page (or other CreativeWork) for which this thing is the main entity being described. See [background notes](/docs/datamodel.html#mainEntityBackground) for details.
#' @param image (URL or ImageObject type.) An image of the item. This can be a [[URL]] or a fully described [[ImageObject]].
#' @param identifier (URL or Text or PropertyValue type.) The identifier property represents any kind of identifier for any kind of [[Thing]], such as ISBNs, GTIN codes, UUIDs etc. Schema.org provides dedicated properties for representing many of these, either as textual strings or as URL (URI) links. See [background notes](/docs/datamodel.html#identifierBg) for more details.
#' @param disambiguatingDescription (Text type.) A sub property of description. A short description of the item used to disambiguate from other, similar items. Information from other properties (in particular, name) may be necessary for the description to be useful for disambiguation.
#' @param description (Text type.) A description of the item.
#' @param alternateName (Text type.) An alias for the item.
#' @param additionalType (URL type.) An additional type for the item, typically used for adding more specific types from external vocabularies in microdata syntax. This is a relationship between something and a class that the thing is in. In RDFa syntax, it is better to use the native RDFa syntax - the 'typeof' attribute - for multiple types. Schema.org tools may have only weaker understanding of extra types, in particular those defined externally.
#'
#' @return a list object corresponding to a schema:BarOrPub
#'
#' @export
BarOrPub <- function(id = NULL,
starRating = NULL,
servesCuisine = NULL,
menu = NULL,
hasMenu = NULL,
acceptsReservations = NULL,
priceRange = NULL,
paymentAccepted = NULL,
openingHours = NULL,
currenciesAccepted = NULL,
branchOf = NULL,
telephone = NULL,
specialOpeningHoursSpecification = NULL,
smokingAllowed = NULL,
reviews = NULL,
review = NULL,
publicAccess = NULL,
photos = NULL,
photo = NULL,
openingHoursSpecification = NULL,
maximumAttendeeCapacity = NULL,
maps = NULL,
map = NULL,
logo = NULL,
isicV4 = NULL,
isAccessibleForFree = NULL,
hasMap = NULL,
globalLocationNumber = NULL,
geo = NULL,
faxNumber = NULL,
events = NULL,
event = NULL,
containsPlace = NULL,
containedInPlace = NULL,
containedIn = NULL,
branchCode = NULL,
amenityFeature = NULL,
aggregateRating = NULL,
address = NULL,
additionalProperty = NULL,
url = NULL,
sameAs = NULL,
potentialAction = NULL,
name = NULL,
mainEntityOfPage = NULL,
image = NULL,
identifier = NULL,
disambiguatingDescription = NULL,
description = NULL,
alternateName = NULL,
additionalType = NULL){
Filter(Negate(is.null),
list(
type = "BarOrPub",
id = id,
starRating = starRating,
servesCuisine = servesCuisine,
menu = menu,
hasMenu = hasMenu,
acceptsReservations = acceptsReservations,
priceRange = priceRange,
paymentAccepted = paymentAccepted,
openingHours = openingHours,
currenciesAccepted = currenciesAccepted,
branchOf = branchOf,
telephone = telephone,
specialOpeningHoursSpecification = specialOpeningHoursSpecification,
smokingAllowed = smokingAllowed,
reviews = reviews,
review = review,
publicAccess = publicAccess,
photos = photos,
photo = photo,
openingHoursSpecification = openingHoursSpecification,
maximumAttendeeCapacity = maximumAttendeeCapacity,
maps = maps,
map = map,
logo = logo,
isicV4 = isicV4,
isAccessibleForFree = isAccessibleForFree,
hasMap = hasMap,
globalLocationNumber = globalLocationNumber,
geo = geo,
faxNumber = faxNumber,
events = events,
event = event,
containsPlace = containsPlace,
containedInPlace = containedInPlace,
containedIn = containedIn,
branchCode = branchCode,
amenityFeature = amenityFeature,
aggregateRating = aggregateRating,
address = address,
additionalProperty = additionalProperty,
url = url,
sameAs = sameAs,
potentialAction = potentialAction,
name = name,
mainEntityOfPage = mainEntityOfPage,
image = image,
identifier = identifier,
disambiguatingDescription = disambiguatingDescription,
description = description,
alternateName = alternateName,
additionalType = additionalType))}
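# Hypothetical usage sketch (not part of the original file): build a minimal BarOrPub record
# and serialize it. The venue details are invented and the jsonlite call assumes that package
# is available.
# pub <- BarOrPub(name = "The Example Arms",
#                 servesCuisine = "Pub food",
#                 priceRange = "$$",
#                 url = "https://example.com")
# jsonlite::toJSON(pub, auto_unbox = TRUE, pretty = TRUE)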
|
/R/BarOrPub.R
|
no_license
|
cboettig/schemar
|
R
| false | false | 12,059 |
r
|
\name{ruars}
\alias{ruars}
\title{UARS random deviates}
\usage{
ruars(n, rangle, S = NULL, kappa = 1, space = "SO3", ...)
}
\arguments{
\item{n}{number of observations. If \code{length(n)>1},
the length is taken to be the number required}
\item{rangle}{The function from which to simulate angles:
e.g. rcayley, rvmises, rhaar, rfisher}
\item{S}{principal direction of the distribution}
\item{kappa}{concentration of the distribution}
\item{space}{Indicates the desired representation: matrix
(SO3), quaternion (Q4) or Euler angles (EA)}
\item{...}{additional arguments passed to the angular
function}
}
\value{
random deviates from the specified UARS distribution
}
\description{
Produce random deviates from a chosen UARS distribution.
}
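% Hypothetical usage sketch (not generated from the source); rcayley is one of the angular
% simulators listed above.
\examples{
\dontrun{
Rs <- ruars(20, rcayley, kappa = 10, space = "SO3")
}
}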
|
/man/ruars.Rd
|
no_license
|
heike/rotations
|
R
| false | false | 783 |
rd
|
#########################
## Estudando os pontos
## Author: Tainá Rocha
## Date : May 2021
########################
## Library
library (raster)
library(ggplot2)
library(ggmap)
library(MASS)
library(maps)
library(mapdata)
library(ggrepel)
library(ggsn)
library(rgdal)
## Read data
points_all <- read.csv("./Ferns-and-lycophytes_old/data/PCA/PCA_INPUT.csv", sep = ",", dec = ".")
#matri <- as.matrix(points_all)
escalonado <- scale(points_all[,1:23],center=TRUE,scale=TRUE)
write.csv(escalonado, "./escalonado_envs_values.csv")
# verificar o conjunto de pontos somados
summary(points_all)
#################################################### Pontos sem stand.
boxplot(c(points_all[1:9,1]), points_all[10:25,1], points_all[26:57,1]) #alt
boxplot(c(points_all[1:9,2]), points_all[10:25,2], points_all[26:57,2]) #bio1
boxplot(c(points_all[1:9,5]), points_all[10:25,5], points_all[26:57,5]) #bio12
boxplot(c(points_all[1:9,6]), points_all[10:25,6], points_all[26:57,6]) #bio13
boxplot(c(points_all[1:9,8]), points_all[10:25,8], points_all[26:57,8]) #bio15
boxplot(c(points_all[1:9,14]), points_all[10:25,14], points_all[26:57,14]) #bio3
boxplot(c(points_all[1:9,21]), points_all[10:25,21], points_all[26:57,21]) #dec
boxplot(c(points_all[1:9,22]), points_all[10:25,22], points_all[26:57,22]) #densi_dren
boxplot(c(points_all[1:9,23]), points_all[10:25,23], points_all[26:57,23]) #expo
################################################################################################
boxplot(c(escalonado[1:9,1]), (escalonado[10:25,1]), (escalonado[26:57,1])) #alt
boxplot(c(escalonado[1:9,2]), (escalonado[10:25,2]), (escalonado[26:57,2])) #bio1
boxplot(c(escalonado[1:9,3]), (escalonado[10:25,3]), (escalonado[26:57,3])) #bio10
boxplot(c(escalonado[1:9,4]), (escalonado[10:25,4]), (escalonado[26:57,4])) #bio11
boxplot(c(escalonado[1:9,5]), (escalonado[10:25,5]), (escalonado[26:57,5])) #bio12
boxplot(c(escalonado[1:9,6]), (escalonado[10:25,6]), (escalonado[26:57,6])) # bio13
boxplot(c(escalonado[1:9,7]), (escalonado[10:25,7]), (escalonado[26:57,7])) #bio14
boxplot(c(escalonado[1:9,8]), (escalonado[10:25,8]), (escalonado[26:57,8])) #bio15
boxplot(c(escalonado[1:9,9]), (escalonado[10:25,9]), (escalonado[26:57,9])) #bio16
boxplot(c(escalonado[1:9,10]), (escalonado[10:25,10]), (escalonado[26:57,10])) #bio17
boxplot(c(escalonado[1:9,11]), (escalonado[10:25,11]), (escalonado[26:57,11])) #bio18
boxplot(c(escalonado[1:9,12]), (escalonado[10:25,12]), (escalonado[26:57,12])) #bio19
boxplot(c(escalonado[1:9,13]), (escalonado[10:25,13]), (escalonado[26:57,13])) # bio2
boxplot(c(escalonado[1:9,14]), (escalonado[10:25,14]), (escalonado[26:57,14])) #bio3
boxplot(c(escalonado[1:9,15]), (escalonado[10:25,15]), (escalonado[26:57,15])) #bio4
boxplot(c(escalonado[1:9,16]), (escalonado[10:25,16]), (escalonado[26:57,16])) #bio5
boxplot(c(escalonado[1:9,17]), (escalonado[10:25,17]), (escalonado[26:57,17])) #bio6
boxplot(c(escalonado[1:9,18]), (escalonado[10:25,18]), (escalonado[26:57,18])) #bio7
boxplot(c(escalonado[1:9,19]), (escalonado[10:25,19]), (escalonado[26:57,19])) #bio8
boxplot(c(escalonado[1:9,20]), (escalonado[10:25,20]), (escalonado[26:57,20])) #bio9
boxplot(c(escalonado[1:9,21]), (escalonado[10:25,21]), (escalonado[26:57,21])) #dec
boxplot(c(escalonado[1:9,22]), (escalonado[10:25,22]), (escalonado[26:57,22])) #desni_dre
boxplot(c(escalonado[1:9,23]), (escalonado[10:25,23]), (escalonado[26:57,23])) #expo
################################################
boxplot(c(escalonado[1:9,1]), (escalonado[10:25,1]), (escalonado[26:57,1]),(escalonado[1:9,2]), (escalonado[10:25,2]), (escalonado[26:57,2]), (escalonado[1:9,3]), (escalonado[10:25,3]), (escalonado[26:57,3]), (escalonado[1:9,4]), (escalonado[10:25,4]), (escalonado[26:57,4]), (escalonado[1:9,5]), (escalonado[10:25,5]), (escalonado[26:57,5]),(escalonado[1:9,6]), (escalonado[10:25,6]), (escalonado[26:57,6]),(escalonado[1:9,7]), (escalonado[10:25,7]), (escalonado[26:57,7]), (escalonado[1:9,8]), (escalonado[10:25,8]), (escalonado[26:57,8]),(escalonado[1:9,9]), (escalonado[10:25,9]), (escalonado[26:57,9]),(escalonado[1:9,10]), (escalonado[10:25,10]), (escalonado[26:57,10]), (escalonado[1:9,11]), (escalonado[10:25,11]), (escalonado[26:57,11]), (escalonado[1:9,12]), (escalonado[10:25,12]), (escalonado[26:57,12]), (escalonado[1:9,13]), (escalonado[10:25,13]), (escalonado[26:57,13]),(escalonado[1:9,14]), (escalonado[10:25,14]), (escalonado[26:57,14]),(escalonado[1:9,15]), (escalonado[10:25,15]), (escalonado[26:57,15]),(escalonado[1:9,16]), (escalonado[10:25,16]), (escalonado[26:57,16]),(escalonado[1:9,17]), (escalonado[10:25,17]), (escalonado[26:57,17]), (escalonado[1:9,18]), (escalonado[10:25,18]), (escalonado[26:57,18]),(escalonado[1:9,19]), (escalonado[10:25,19]), (escalonado[26:57,19]),(escalonado[1:9,20]), (escalonado[10:25,20]), (escalonado[26:57,20]),(escalonado[1:9,21]), (escalonado[10:25,21]), (escalonado[26:57,21]),(escalonado[1:9,22]), (escalonado[10:25,22]), (escalonado[26:57,22]),(escalonado[1:9,23]), (escalonado[10:25,23]), (escalonado[26:57,23]))
#########################
boxplot(c(escalonado[1:9,1]), (escalonado[10:25,1]), (escalonado[26:57,1]),(escalonado[1:9,2]), (escalonado[10:25,2]), (escalonado[26:57,2]), (escalonado[1:9,5]), (escalonado[10:25,5]), (escalonado[26:57,5]),(escalonado[1:9,6]), (escalonado[10:25,6]), (escalonado[26:57,6]),(escalonado[1:9,14]), (escalonado[10:25,14]), (escalonado[26:57,14]),(escalonado[1:9,22]), (escalonado[10:25,22]), (escalonado[26:57,22]),(escalonado[1:9,23]), (escalonado[10:25,23]), (escalonado[26:57,23]))
boxplot(c(points_all[1:9,21]), points_all[10:25,21], points_all[26:57,21]) #dec
boxplot(c(points_all[1:9,22]), points_all[10:25,22], points_all[26:57,22]) #densi_dren
boxplot(c(points_all[1:9,23]), points_all[10:25,23], points_all[26:57,23]) #expo
################
df <- subset(points_for_models, select=c(lon, lat, Variables, bio12, bio13, bio15, bio20, bio5, bio6))
library(gridExtra)
library(ggplot2)
p <- list()
for (j in colnames(df)[4:9]) {
p[[j]] <- ggplot(data=df, aes_string(x="Variables",y=j)) + # Specify dataset, input or grouping col name and Y
geom_boxplot(aes(fill=factor(Variables))) + guides(fill=FALSE) + # Boxplot by which factor + color guide
theme(axis.title.y = element_text(face="bold", size=14)) # Make the Y-axis labels bigger/bolder
}
do.call(grid.arrange, c(p, ncol=6))
|
/R/exploratory_descriptive/boxplot_envs.R
|
no_license
|
Tai-Rocha/Ferns-and-lycophytes
|
R
| false | false | 6,438 |
r
|
"
This function takes tbl_df object from selectData() and arranges the columns first by outcome level and then
by hospital names.
"
orderData <- function(outcomeData) {
# Order outcome (low to high) then hospitals
return(arrange(outcomeData, outcome, hospital))
}
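# Hypothetical usage sketch (not part of the original file); assumes dplyr is attached and
# selectData() returns a tbl_df with 'hospital' and 'outcome' columns as described above.
# The argument shown is illustrative only.
# library(dplyr)
# ranked <- orderData(selectData("heart attack"))
# head(ranked)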
|
/RCourse/assignments/assignment3/dplyrWay/orderData.R
|
no_license
|
statisticallyfit/R
|
R
| false | false | 276 |
r
|
## Created 8/10/2015 by Daniel Beck
## Last modified 11/9/2016
## This script automatically generates all relevant reports for selected comparisons and
## p-values. The number of analyses, p-values, and option flags should be the same.
source("dataNames_AciJub.R")
source("customFunctions.R")
library("rmarkdown")
report.analyses <- c("all")
report.pvalues <- rep(1e-5, length(report.analyses))
report.filenames <- paste(resultsDirectory, report.analyses, "/report_", gsub(" ", "_", projectName), "_",
report.analyses, "_", report.pvalues, ".pdf", sep="")
cpgMaxV <- rep(NA, length(report.analyses)) # Y-axis maximum for CpG density histogram (NA for auto).
lenMaxV <- rep(NA, length(report.analyses)) # Y-axis maximum for DMR length histogram (NA for auto).
topNV <- rep(NA, length(report.analyses)) # Generate figures using top N DMR (NA for all DMR).
## For generating reports
for (i in 1:length(report.analyses)){
analysisName <- report.analyses[i]
reportPvalue <- report.pvalues[i]
cpgMax <- cpgMaxV[i]
lenMax <- lenMaxV[i]
topN <- topNV[i]
save(analysisName, reportPvalue, cpgMax, lenMax, topN,
file = paste(codeDirectory, "/reportValues.Rdata", sep = ""))
render(input = "medipReport.Rmd", output_file = report.filenames[i])
}
|
/generateReports.R
|
no_license
|
TaniaPGue/Methylation_cheetah
|
R
| false | false | 1,278 |
r
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/optionstrat.R
\name{putrho}
\alias{putrho}
\title{Put Rho}
\usage{
putrho(s, x, sigma, t, r, d = 0)
}
\arguments{
\item{s}{Spot price of the underlying asset}
\item{x}{Strike price of the option}
\item{sigma}{Implied volatility of the underlying asset price, defined as the annualized standard deviation of the asset returns}
\item{t}{Time to maturity in years}
\item{r}{Annual continuously-compounded risk-free rate, use the function r.cont}
\item{d}{Annual continuously-compounded dividend yield, use the function r.cont}
}
\value{
Returns the put rho
}
\description{
Calculates the rho of the European-style put option
}
\details{
Rho measures the change in the option's value given a 1% change in the interest rate.
}
\examples{
putrho(100, 100, 0.20, (45/365), 0.02, 0.02)
}
|
/man/putrho.Rd
|
no_license
|
Allisterh/optionstrat
|
R
| false | true | 897 |
rd
|
# suppose we have a file basename, stored in chunks, say basename.001,
# basename.002 etc.; this function determines the file name for the chunk
# to be handled by node nodenum; the latter is the ID for the executing
# node, partoolsenv$myid, set by setclsinfo()
filechunkname <- function (basename, ndigs,nodenum=NULL)
{
tmp <- basename
if (is.null(nodenum)) {
pte <- getpte()
nodenum <- pte$myid
}
n0s <- ndigs - nchar(as.character(nodenum))
zerostring <- paste(rep("0", n0s),sep="",collapse="")
paste(basename, ".", zerostring, nodenum, sep = "")
}
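# Quick illustration (editorial addition, not part of the original source):
filechunkname("mydata", ndigs = 2, nodenum = 3)  # returns "mydata.03"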
# distributed file sort on cls, based on column number colnum of input;
# file name from basename, ndigs; bucket sort, with categories
# determined by first sampling nsamp from each chunk; each node's output
# chunk written to file outname (plus suffix based on node number) in
# the node's global space
filesort <- function(cls,basename,ndigs,colnum,
outname,nsamp=1000,header=FALSE,sep="")
{
clusterEvalQ(cls,library(partools))
setclsinfo(cls)
samps <- clusterCall(cls,getsample,basename,ndigs,colnum,
header=header,sep=sep,nsamp)
samp <- Reduce(c,samps)
bds <- getbounds(samp,length(cls))
clusterApply(cls,bds,mysortedchunk,
basename,ndigs,colnum,outname,header,sep)
0
}
getsample <- function(basename,ndigs,colnum,
header=FALSE,sep="",nsamp)
{
fname <- filechunkname(basename,ndigs)
read.table(fname,nrows=nsamp,header=header,sep=sep)[,colnum]
}
getbounds <- function(samp,numnodes) {
bds <- list()
q <- quantile(samp,((2:numnodes) - 1) / numnodes)
samp <- sort(samp)
for (i in 1:numnodes) {
mylo <- if (i > 1) q[i-1] else NA
myhi <- if (i < numnodes) q[i] else NA
bds[[i]] <- c(mylo,myhi)
}
bds
}
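# Quick illustration (editorial addition): with 4 nodes and the values 1:100 the
# bucket boundaries sit near the 25th/50th/75th percentiles, with NA at the open ends
getbounds(1:100, 4)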
mysortedchunk <- function(mybds,basename,ndigs,colnum,outname,header,sep) {
pte <- getpte()
me <- pte$myid
ncls <- pte$ncls
mylo <- mybds[1]
myhi <- mybds[2]
for (i in 1:ncls) {
tmp <-
       read.table(filechunkname(basename,ndigs,i),header=header,sep=sep)
tmpcol <- tmp[,colnum]
if (me == 1) {
tmp <- tmp[tmpcol <= myhi,]
} else if (me == ncls) {
tmp <- tmp[tmpcol > mylo,]
} else {
tmp <- tmp[tmpcol > mylo & tmpcol <= myhi,]
}
mychunk <- if (i == 1) tmp else rbind(mychunk,tmp)
}
sortedmchunk <- mychunk[order(mychunk[,colnum]),]
assign(outname,sortedmchunk,envir=.GlobalEnv)
}
# split a file into chunks, one per cluster node
filesplit <- function(cls,basename,header=FALSE) {
cmdout <- system(paste("wc -l",basename),intern=TRUE)
tmp <- strsplit(cmdout[[1]][1], " ")[[1]]
nlines <- as.integer(tmp[length(tmp) - 1])
con <- file(basename,open="r")
if (header) {
hdr <- readLines(con,1)
nlines <- nlines - 1
}
lcls <- length(cls)
   ndigs <- nchar(as.character(lcls))  # digits needed for the largest node number; ceiling(log10()) fails for 1, 10, 100, ... nodes
chunks <- clusterSplit(cls,1:nlines)
chunksizes <- sapply(chunks,length)
for (i in 1:lcls) {
chunk <- readLines(con,chunksizes[i])
fn <- filechunkname(basename,ndigs,i)
conout <- file(fn,open="w")
if (header) writeLines(hdr,conout)
writeLines(chunk,conout)
close(conout)
}
}
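# Illustrative usage sketch (editorial addition; "mydata.txt" is a hypothetical
# headerless text file and a local 2-worker cluster is assumed):
# library(parallel); library(partools)
# cls <- makeCluster(2)
# setclsinfo(cls)
# filesplit(cls, "mydata.txt")  # writes mydata.txt.1 and mydata.txt.2
# filesort(cls, "mydata.txt", ndigs = 1, colnum = 2, outname = "sortedchunk")
# stopCluster(cls)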
|
/R/Snowdoop.R
|
no_license
|
edwardt/partools
|
R
| false | false | 3,211 |
r
|
library(ReIns)
### Name: cProbGPD
### Title: Estimator of small exceedance probabilities and large return
### periods using censored GPD-MLE
### Aliases: cProbGPD cReturnGPD
### ** Examples
# Set seed
set.seed(29072016)
# Pareto random sample
X <- rpareto(500, shape=2)
# Censoring variable
Y <- rpareto(500, shape=1)
# Observed sample
Z <- pmin(X, Y)
# Censoring indicator
censored <- (X>Y)
# GPD-MLE estimator adapted for right censoring
cpot <- cGPDmle(Z, censored=censored, plot=TRUE)
# Exceedance probability
q <- 10
cProbGPD(Z, gamma1=cpot$gamma1, sigma1=cpot$sigma1,
censored=censored, q=q, plot=TRUE)
# Return period
cReturnGPD(Z, gamma1=cpot$gamma1, sigma1=cpot$sigma1,
censored=censored, q=q, plot=TRUE)
|
/data/genthat_extracted_code/ReIns/examples/cProbGPD.Rd.R
|
no_license
|
surayaaramli/typeRrh
|
R
| false | false | 766 |
r
|
#Coursera course Getting and Cleaning data
#week 2
#quiz
#QUESTION 1
install.packages("jsonlite")
library(jsonlite)
install.packages("httpuv")
library(httpuv)
install.packages("httr")
library(httr)
# 1. Find OAuth settings for github:
oauth_endpoints("github") #endpoints are URLs that we call to request the authorization codes.
# 2. Make my own application on github API
#Go to git hub, settings, developer settings, new github app:
# https://github.com/settings/developers. Use any URL for the homepage URL
# (http://github.com is fine) and http://localhost:1410 as the callback url
# Replace key and secret below according to my app
myapp <- oauth_app("github",
key = "743036b8a7142eb6f70f",
secret = "617d60d44c2bb4af30a16bdf26b0d5d42d012ab0"
)
# 3. Get OAuth credentials
github_token <- oauth2.0_token(oauth_endpoints("github"), myapp)
# 4. Use API
gtoken <- config(token = github_token)
req <- GET("https://api.github.com/users/jtleek/repos", gtoken) #request to extract data from the link
#extract content from link
json1 = content(req)
#structure info from json file into a more readable version
json2 = jsonlite::fromJSON(jsonlite::toJSON(json1))
#Now from this data frame called json2 which is in jsonlite format, we want to extract
#on the time that the datasharing repo was created.
json2[1, 1:10] #we can see there is a column called full_name which identifies each repo
json2[json2$full_name == "jtleek/datasharing", "created_at"] #we subset to find the row for the datasharing repo and
#read off the date and time stored in its created_at field.
#QUESTION 2
#They have given us a link to an online doc, we can automatically download it
install.packages("RMySQL", type="source")
library(RMySQL)
install.packages("sqldf")
library(sqldf)
#Download data into R
url <- "https://d396qusza40orc.cloudfront.net/getdata%2Fdata%2Fss06pid.csv" #save url link
download.file(url, destfile="./doc.csv") #download doc from link and save it inside the wd
acs<- read.table("./doc.csv", sep=",", header=TRUE) #read data
View(acs)
#we can use the sqldf to send queries
#if we want to obtain a subset of the acs dataframe where we only get column pwgtp1 for ages less than 50
acs2 <- sqldf("select pwgtp1 from acs where AGEP < 50")
#QUESTION 3
# the equivalent of the function unique in the sql package is distinct
sqldf("select distinct AGEP from acs2") #will get us a list of the pwgtp1 rows with unique AGE values
#QUESTION 4
#Reading from html link
con=url ("http://biostat.jhsph.edu/~jleek/contact.html") #open connection with link
htmlCode=readLines(con) #read the info
close(con) #important to close the connection
#number of characters in the 10th, 20th, 30th, 100th lines of the imported data
nchar(htmlCode[c(10,20,30,100)])
#QUESTION 5
#read in the data set into R, data comes from a random link
url <- "https://d396qusza40orc.cloudfront.net/getdata%2Fwksst8110.for" #save url link
download.file(url, destfile="./doc5") #download doc from link and save it inside the wd
dataq5<- read.table("./doc5", sep=",") #read data, but this is a fixed width file format, so we need diff extraction method
dataq5 <- read.fwf("./doc5",widths=c(-1,9,-5,4,4,-5,4,4,-5,4,4,-5,4,4), skip=4) #we need to specify the widths
#because they have not been specified
## skip =4 is for skipping the first 4 lines
#-1 -> leaves one blank(if you open the .for file in n++, you will see the space before 03JAN1990)
# 9 -> length of the date,
# -5 -> leaves 5 blank,
#4 ->takes the first Nino1+2 SST input
#4 -> takes the second Nino1+2 SST input and so on.
sum(dataq5[, 4])
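#Tiny illustration of the width convention used above (editorial addition):
#negative widths skip characters, positive widths read them
tmpfwf <- tempfile()
writeLines("AB12CD34", tmpfwf)
read.fwf(tmpfwf, widths=c(-2,2,-2,2)) #two columns: 12 and 34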
|
/quizweek2.R
|
no_license
|
juliavigu/datasciencecoursera
|
R
| false | false | 3,778 |
r
|
testlist <- list(x = structure(c(2.31584307392677e+77, 9.5381825201569e+295, 1.22810536108214e+146, 4.12396251261199e-221, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), .Dim = c(5L, 7L)))
result <- do.call(multivariance::fastdist,testlist)
str(result)
|
/multivariance/inst/testfiles/fastdist/AFL_fastdist/fastdist_valgrind_files/1613098275-test.R
|
no_license
|
akhikolla/updatedatatype-list3
|
R
| false | false | 302 |
r
|
library(magrittr)
my_f <- "C:/Users/kxmna01/Desktop/CP023688_protein_FIXED.fasta"
my_fasta <- seqinr::read.fasta(file = my_f, seqtype = "AA", as.string = TRUE, whole.header = TRUE)
find_stop <- lapply(my_fasta, function(x) {
grepl("\\*.", x)
}) %>%
unlist(.)
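# Quick check of the pattern above (editorial addition): "\\*." flags a '*' only
# when another character follows it, so internal stops are caught while a single
# terminal stop is kept
grepl("\\*.", c("MKT*AAA", "MKTAAA*", "MKTAAA")) # TRUE FALSE FALSE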
my_fasta_filt <- my_fasta[!find_stop]
seqinr::write.fasta(
sequences = my_fasta_filt, names = names(my_fasta_filt),
file.out = sub(".fasta", "_FILT.fasta", my_f),
open = "w", nbchar = 60, as.string = TRUE)
|
/tools/Rscripts/tmp_filter_stop_from_fasta.R
|
no_license
|
nnalpas/Proteogenomics_reannotation
|
R
| false | false | 494 |
r
|
library(tools)
library(tm)
library(keras)  # keras is needed below (text_tokenizer, pad_sequences, layer_* functions)
source(file_path_as_absolute("ipm/experimenters.R"))
source(file_path_as_absolute("utils/getDados.R"))
source(file_path_as_absolute("baseline/dados.R"))
source(file_path_as_absolute("utils/tokenizer.R"))
#Geração dos dados
lines <- readLines(file.path("/var/www/html/drunktweets", "adhoc/exportembedding/new_skipgrams_10_epocas_5l_q1.txt"))
embeddings_index <- new.env(hash = TRUE, parent = emptyenv())
for (i in 1:length(lines)) {
line <- lines[[i]]
values <- strsplit(line, " ")[[1]]
word <- values[[1]]
embeddings_index[[word]] <- as.double(values[-1])
}
cat("Found", length(embeddings_index), "word vectors.\n")
dados <- getDadosBaselineByQ("q1")
# dados$textEmbedding <- removePunctuation(dados$textEmbedding)
maxlen <- 38
max_words <- 7860
tokenizer <- text_tokenizer(num_words = max_words) %>%
fit_text_tokenizer(dados$textEmbedding)
sequences <- texts_to_sequences(tokenizer, dados$textEmbedding)
word_index = tokenizer$word_index
vocab_size <- length(word_index)
vocab_size <- vocab_size + 1
vocab_size
cat("Found", length(word_index), "unique tokens.\n")
data <- pad_sequences(sequences, maxlen = maxlen)
library(caret)
trainIndex <- createDataPartition(dados$resposta, p=0.8, list=FALSE)
dados_train <- dados[ trainIndex,]
dados_test <- dados[-trainIndex,]
dados_train_sequence <- data[ trainIndex,]
dados_test_sequence <- data[-trainIndex,]
max_words <- vocab_size
word_index <- tokenizer$word_index
callbacks_list <- list(
callback_early_stopping(
monitor = "val_loss",
patience = 1
),
callback_model_checkpoint(
filepath = paste0("adhoc/exportembedding/adicionais/test_models.h5"),
monitor = "val_loss",
save_best_only = TRUE
)
)
# Data Preparation --------------------------------------------------------
# Parameters --------------------------------------------------------------
embedding_dims <- 100
# Parameters --------------------------------------------------------------
# filters <- 200
filters <- 164
main_input <- layer_input(shape = c(maxlen), dtype = "int32")
embedding_input <- main_input %>%
layer_embedding(input_dim = vocab_size, output_dim = embedding_dims, input_length = maxlen, name = "embedding")
ccn_out_3 <- embedding_input %>%
layer_conv_1d(
filters, 3,
padding = "valid", activation = "relu", strides = 1
) %>%
layer_global_max_pooling_1d()
ccn_out_4 <- embedding_input %>%
layer_conv_1d(
filters, 4,
padding = "valid", activation = "relu", strides = 1
) %>%
layer_global_max_pooling_1d()
ccn_out_5 <- embedding_input %>%
layer_conv_1d(
filters, 5,
padding = "valid", activation = "relu", strides = 1
) %>%
layer_global_max_pooling_1d()
main_output <- layer_concatenate(c(ccn_out_3, ccn_out_4, ccn_out_5)) %>%
layer_dropout(0.2) %>%
layer_dense(units = 8, activation = "relu") %>%
layer_dense(units = 1, activation = 'sigmoid')
model <- keras_model(
inputs = c(main_input),
outputs = main_output
)
embedding_dim <- 100
embedding_matrix <- array(0, c(max_words, embedding_dim))
for (word in names(word_index)) {
index <- word_index[[word]]
if (index < max_words) {
embedding_vector <- embeddings_index[[word]]
if (!is.null(embedding_vector))
embedding_matrix[index+1,] <- embedding_vector
}
}
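# Note (editorial): the index + 1 shift maps tokenizer index t to matrix row t + 1,
# leaving row 1 (Keras index 0, the padding slot) as zeros; that row is later
# labelled "UNK" when the learned embedding matrix is exported below.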
get_layer(model, index = 1) %>%
set_weights(list(embedding_matrix))
model %>% compile(
loss = "binary_crossentropy",
optimizer = "adam",
metrics = "accuracy"
)
library(keras)
# Training ----------------------------------------------------------------
history <- model %>%
fit(
x = list(dados_train_sequence),
y = array(dados_train$resposta),
batch_size = 64,
epochs = 10,
#callbacks = callbacks_list,
validation_split = 0.2
)
# predictions <- model %>% predict(list(dados_test_sequence))
# predictions2 <- round(predictions, 0)
# matriz <- confusionMatrix(data = as.factor(predictions2), as.factor(dados_test$resposta), positive="1")
# resultados <- addRowAdpater(resultados, DESC, matriz)
##
library(dplyr)
embedding_matrixTwo <- get_weights(model)[[1]]
words <- data_frame(
word = names(tokenizer$word_index),
id = as.integer(unlist(tokenizer$word_index))
)
words <- words %>%
filter(id <= tokenizer$num_words) %>%
arrange(id)
row.names(embedding_matrixTwo) <- c("UNK", words$word)
embedding_file <- "adhoc/exportembedding/ds1/q1/cnn_10_epocas_8_filters164_skipgram.txt"
write.table(embedding_matrixTwo, embedding_file, sep=" ",row.names=TRUE)
system(paste0("sed -i 's/\"//g' ", embedding_file))
|
/adhoc/exportembedding/ds1/cnn_q1_skigram_nonstatic.R
|
no_license
|
MarcosGrzeca/drunktweets
|
R
| false | false | 4,617 |
r
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/bycounty.R
\name{aqs_annualsummary_by_county}
\alias{aqs_annualsummary_by_county}
\title{aqs_annualsummary_by_county}
\usage{
aqs_annualsummary_by_county(
parameter,
bdate,
edate,
stateFIPS,
countycode,
cbdate = NA_Date_,
cedate = NA_Date_,
return_header = FALSE
)
}
\arguments{
\item{parameter}{a character list or a single character string
which represents the parameter code of the air
pollutant related to the data being requested.}
\item{bdate}{an R date object which represents the begin date of the data
selection. Only data on or after this date will be returned.}
\item{edate}{an R date object which represents the end date of the data
selection. Only data on or before this date will be
returned.}
\item{stateFIPS}{a R character object which represents the 2 digit state
FIPS code (with leading zero) for the state being
requested. @seealso \code{\link[=aqs_states]{aqs_states()}} for the list of
available FIPS codes.}
\item{countycode}{a R character object which represents the 3 digit state
FIPS code for the county being requested (with leading
zero(s)). @seealso \code{\link[=aqs_counties_by_state]{aqs_counties_by_state()}} for the
list of available county codes for each state.}
\item{cbdate}{an R date object which represents a "beginning
date of last change" that indicates when the data was last
updated. cbdate is used to filter data based on the change
date. Only data that changed on or after this date will be
returned. This is an optional variable which defaults
to NA_Date_.}
\item{cedate}{an R date object which represents an "end
date of last change" that indicates when the data was last
updated. cedate is used to filter data based on the change
date. Only data that changed on or before this date will be
returned. This is an optional variable which defaults
to NA_Date_.}
\item{return_header}{If FALSE (default) only returns data requested. If
                     TRUE returns an AQSAPI_v2 object, which is a two-item
list that contains header information returned from
the API server mostly used for debugging purposes in
addition to the data requested.}
}
\value{
a tibble or an AQS_Data Mart_APIv2 S3 object containing annual
summary data for the countycode and stateFIPS requested.
An AQS_Data Mart_APIv2 object is a 2-item named list in which the first
item (\$Header) is a tibble of header information from the AQS API
and the second item (\$Data) is a tibble of the data returned.
}
\description{
\lifecycle{stable}
Returns multiple years of data where annual data is
aggregated at the county level. Returned is an annual summary
matching the input parameter, stateFIPS, and county_code
provided for bdate - edate time frame. The data
returned is summarized at the annual level. Variables
returned include mean value, maxima, percentiles, etc. If
return_header is FALSE (default) the object returned is a
tibble, if TRUE an AQS_API_v2 object.
}
\note{
The AQS API only allows for a single year of annualsummary to be
retrieved at a time. This function conveniently extracts date
information from the bdate and edate parameters then makes repeated
calls to the AQSAPI retrieving a maximum of one calendar year of data
at a time. Each calendar year of data requires a separate API call so
multiple years of data will require multiple API calls. As the number
of years of data being requested increases so does the length of time
that it will take to retrieve results. There is also a 5 second wait
time inserted between successive API calls to prevent overloading the
API server. This operation has a linear run time of
(Big O notation: O(n + 5 seconds)).
}
\examples{
# returns an aqs S3 object with annual summary FRM/FEM
# PM2.5 data for Wake County, NC between January
# 2016 and February 2018
\dontrun{aqs_annualsummary_by_county(parameter = "88101",
bdate = as.Date("20160101",
format = "\%Y\%m\%d"),
edate = as.Date("20180228",
format = "\%Y\%m\%d"),
stateFIPS = "37",
countycode = "183"
)
}
}
\seealso{
Other Aggregate _by_county functions:
\code{\link{aqs_dailysummary_by_county}()},
\code{\link{aqs_monitors_by_county}()},
\code{\link{aqs_qa_blanks_by_county}()},
\code{\link{aqs_qa_collocated_assessments_by_county}()},
\code{\link{aqs_qa_flowrateaudit_by_county}()},
\code{\link{aqs_qa_flowrateverification_by_county}()},
\code{\link{aqs_qa_one_point_qc_by_county}()},
\code{\link{aqs_qa_pep_audit_by_county}()},
\code{\link{aqs_sampledata_by_county}()},
\code{\link{aqs_transactionsample_by_county}()}
}
\concept{Aggregate _by_county functions}
|
/man/aqs_annualsummary_by_county.Rd
|
permissive
|
cjmc00/RAQSAPI
|
R
| false | true | 4,907 |
rd
|
#' Add rows to a data frame
#'
#' @description
#' This is a convenient way to add one or more rows of data to an existing data
#' frame. See [tribble()] for an easy way to create a complete
#' data frame row-by-row. Use [tibble_row()] to ensure that the new data
#' has only one row.
#'
#' `add_case()` is an alias of `add_row()`.
#'
#' @param .data Data frame to append to.
#' @param ... <[`dynamic-dots`][rlang::dyn-dots]>
#' Name-value pairs, passed on to [tibble()]. Values can be defined
#' only for columns that already exist in `.data` and unset columns will get an
#' `NA` value.
#' @param .before,.after One-based row index where to add the new rows,
#' default: after last row.
#' @family addition
#' @examples
#' # add_row ---------------------------------
#' df <- tibble(x = 1:3, y = 3:1)
#'
#' df %>% add_row(x = 4, y = 0)
#'
#' # You can specify where to add the new rows
#' df %>% add_row(x = 4, y = 0, .before = 2)
#'
#' # You can supply vectors, to add multiple rows (this isn't
#' # recommended because it's a bit hard to read)
#' df %>% add_row(x = 4:5, y = 0:-1)
#'
#' # Use tibble_row() to add one row only
#' df %>% add_row(tibble_row(x = 4, y = 0))
#' try(df %>% add_row(tibble_row(x = 4:5, y = 0:-1)))
#'
#' # Absent variables get missing values
#' df %>% add_row(x = 4)
#'
#' # You can't create new variables
#' try(df %>% add_row(z = 10))
#' @export
add_row <- function(.data, ..., .before = NULL, .after = NULL) {
if (inherits(.data, "grouped_df")) {
cnd_signal(error_add_rows_to_grouped_df())
}
if (!is.data.frame(.data)) {
deprecate_warn("2.1.1", "add_row(.data = 'must be a data frame')")
}
df <- tibble(...)
attr(df, "row.names") <- .set_row_names(max(1L, nrow(df)))
extra_vars <- setdiff(names(df), names(.data))
if (has_length(extra_vars)) {
cnd_signal(error_incompatible_new_rows(extra_vars))
}
pos <- pos_from_before_after(.before, .after, nrow(.data))
out <- rbind_at(.data, df, pos)
vectbl_restore(out, .data)
}
#' @export
#' @rdname add_row
#' @usage NULL
add_case <- add_row
na_value <- function(boilerplate) {
if (is.list(boilerplate)) {
list(NULL)
} else {
NA
}
}
rbind_at <- function(old, new, pos) {
out <- vec_rbind(old, new)
# Append at end: Nothing more to do.
if (pos >= nrow(old)) {
return(out)
}
# Splice: Construct index vector
pos <- max(pos, 0L)
idx <- c(
seq2(1L, pos),
seq2(nrow(old) + 1L, nrow(old) + nrow(new)),
seq2(pos + 1L, nrow(old))
)
vec_slice(out, idx)
}
#' Add columns to a data frame
#'
#' This is a convenient way to add one or more columns to an existing data
#' frame.
#'
#' @param .data Data frame to append to.
#' @param ... <[`dynamic-dots`][rlang::dyn-dots]>
#' Name-value pairs, passed on to [tibble()]. All values must have
#'   the same size as `.data` or size 1.
#' @param .before,.after One-based column index or column name where to add the
#' new columns, default: after last column.
#' @inheritParams tibble
#' @family addition
#' @examples
#' # add_column ---------------------------------
#' df <- tibble(x = 1:3, y = 3:1)
#'
#' df %>% add_column(z = -1:1, w = 0)
#' df %>% add_column(z = -1:1, .before = "y")
#'
#' # You can't overwrite existing columns
#' try(df %>% add_column(x = 4:6))
#'
#' # You can't create new observations
#' try(df %>% add_column(z = 1:5))
#'
#' @export
add_column <- function(.data, ..., .before = NULL, .after = NULL,
.name_repair = c("check_unique", "unique", "universal", "minimal")) {
if (!is.data.frame(.data)) {
deprecate_warn("2.1.1", "add_column(.data = 'must be a data frame')")
}
if ((!is_named(.data) || anyDuplicated(names2(.data))) && missing(.name_repair)) {
deprecate_warn("3.0.0", "add_column(.data = 'must have unique names')",
details = 'Use `.name_repair = "minimal"`.')
.name_repair <- "minimal"
}
df <- tibble(..., .name_repair = .name_repair)
if (ncol(df) == 0L) {
return(.data)
}
if (nrow(df) != nrow(.data)) {
if (nrow(df) == 1) {
df <- df[rep(1L, nrow(.data)), ]
} else {
cnd_signal(error_incompatible_new_cols(nrow(.data), df))
}
}
pos <- pos_from_before_after_names(.before, .after, colnames(.data))
end_pos <- ncol(.data) + seq_len(ncol(df))
indexes_before <- rlang::seq2(1L, pos)
indexes_after <- rlang::seq2(pos + 1L, ncol(.data))
indexes <- c(indexes_before, end_pos, indexes_after)
new_data <- .data
new_data[end_pos] <- df
out <- new_data[indexes]
out <- set_repaired_names(out, .name_repair)
vectbl_restore(out, .data)
}
# helpers -----------------------------------------------------------------
pos_from_before_after_names <- function(before, after, names) {
before <- check_names_before_after(before, names)
after <- check_names_before_after(after, names)
pos_from_before_after(before, after, length(names))
}
pos_from_before_after <- function(before, after, len) {
if (is_null(before)) {
if (is_null(after)) {
len
} else {
limit_pos_range(after, len)
}
} else {
if (is_null(after)) {
limit_pos_range(before - 1L, len)
} else {
cnd_signal(error_both_before_after())
}
}
}
limit_pos_range <- function(pos, len) {
max(0L, min(len, pos))
}
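# Illustrative behaviour of the position helpers (editorial sketch, not part of
# the package tests):
# pos_from_before_after(NULL, NULL, 5)  # 5 -> append after the last column/row
# pos_from_before_after(2, NULL, 5)     # 1 -> insert before position 2
# pos_from_before_after(NULL, 2, 5)     # 2 -> insert after position 2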
# check_names_before_after ------------------------------------------------
check_names_before_after <- function(j, x) {
if (!is_bare_character(j)) {
return(j)
}
check_needs_no_dim(j)
check_names_before_after_character(j, x)
}
check_needs_no_dim <- function(j) {
if (needs_dim(j)) {
cnd_signal(error_dim_column_index(j))
}
}
check_names_before_after_character <- function(j, names) {
pos <- safe_match(j, names)
if (anyNA(pos)) {
unknown_names <- j[is.na(pos)]
cnd_signal(error_unknown_column_names(unknown_names))
}
pos
}
# Errors ------------------------------------------------------------------
error_add_rows_to_grouped_df <- function() {
tibble_error("Can't add rows to grouped data frames.")
}
error_incompatible_new_rows <- function(names) {
tibble_error(
bullets(
"New rows can't add columns:",
cnd_message(error_unknown_column_names(names))
),
names = names
)
}
error_both_before_after <- function() {
tibble_error("Can't specify both `.before` and `.after`.")
}
error_unknown_column_names <- function(j, parent = NULL) {
tibble_error(pluralise_commas("Can't find column(s) ", tick(j), " in `.data`."), j = j, parent = parent)
}
error_incompatible_new_cols <- function(n, df) {
tibble_error(
bullets(
"New columns must be compatible with `.data`:",
x = paste0(
pluralise_n("New column(s) ha[s](ve)", ncol(df)), " ",
nrow(df), " rows"
),
i = pluralise_count("`.data` has ", n, " row(s)")
),
expected = n,
actual = nrow(df)
)
}
|
/R/add.R
|
permissive
|
datacamp/tibble
|
R
| false | false | 6,857 |
r
|
Rothmana <-
function(X, Y, lambda_beta, lambda_kappa, convergence = 1e-4, gamma = 0.5, maxit.in = 100, maxit.out = 100,
penalize.diagonal, # if FALSE, penalizes the first diagonal (assumed to be auto regressions), even when ncol(X) != ncol(Y) !
interceptColumn = 1, # Set to NULL or NA to omit
mimic = "current",
likelihood = c("unpenalized","penalized")
){
# Algorithm 2 of Rothmana, Levinaa & Ji Zhua
likelihood <- match.arg(likelihood)
nY <- ncol(Y)
nX <- ncol(X)
if (missing(penalize.diagonal)){
if (mimic == "0.1.2"){
penalize.diagonal <- nY != nX
} else {
penalize.diagonal <- (nY != nX-1) & (nY != nX )
}
}
lambda_mat <- matrix(lambda_beta,nX, nY)
if (!penalize.diagonal){
if (nY == nX){
add <- 0
} else if (nY == nX - 1){
add <- 1
} else {
stop("Beta is not P x P or P x P+1, cannot detect diagonal.")
}
for (i in 1:min(c(nY,nX))){
lambda_mat[i+add,i] <- 0
}
}
if (!is.null(interceptColumn) && !is.na(interceptColumn)){
lambda_mat[interceptColumn,] <- 0
}
n <- nrow(X)
beta_ridge <- beta_ridge_C(X, Y, lambda_beta)
# Starting values:
beta <- matrix(0, nX, nY)
# Algorithm:
it <- 0
repeat{
it <- it + 1
kappa <- Kappa(beta, X, Y, lambda_kappa)
beta_old <- beta
beta <- Beta_C(kappa, beta, X, Y, lambda_beta, lambda_mat, convergence, maxit.in)
if (sum(abs(beta - beta_old)) < (convergence * sum(abs(beta_ridge)))){
break
}
if (it > maxit.out){
warning("Model did NOT converge in outer loop")
break
}
}
## Compute unconstrained kappa (codes from SparseTSCGM):
ZeroIndex <- which(kappa==0, arr.ind=TRUE) ## Select the path of zeros
WS <- (t(Y)%*%Y - t(Y) %*% X %*% beta - t(beta) %*% t(X)%*%Y + t(beta) %*% t(X)%*%X %*% beta)/(nrow(X))
if (any(eigen(WS,only.values = TRUE)$values < -sqrt(.Machine$double.eps))){
stop("Residual covariance matrix is not non-negative definite")
}
if (likelihood == "unpenalized"){
if (nrow(ZeroIndex)==0){
out4 <- suppressWarnings(glasso(WS, rho = 0, trace = FALSE))
} else {
out4 <- suppressWarnings(glasso(WS, rho = 0, zero = ZeroIndex,
trace = FALSE))
}
lik1 <- determinant( out4$wi)$modulus[1]
lik2 <- sum(diag( out4$wi%*%WS))
} else {
lik1 <- determinant( kappa )$modulus[1]
lik2 <- sum(diag( kappa%*%WS))
}
pdO = sum(sum(kappa[upper.tri(kappa,diag=FALSE)] !=0))
if (mimic == "0.1.2"){
pdB = sum(sum(beta !=0))
} else {
pdB = sum(sum(beta[lambda_mat!=0] !=0))
}
LLk <- (n/2)*(lik1-lik2)
LLk0 <- (n/2)*(-lik2)
EBIC <- -2*LLk + (log(n))*(pdO +pdB) + (pdO + pdB)*4*gamma*log(2*nY)
### TRANSPOSE BETA!!!
return(list(beta=t(beta), kappa=kappa, EBIC = EBIC))
}
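# Illustrative usage sketch (editorial addition; assumes the graphicalVAR internals
# Kappa(), Beta_C() and beta_ridge_C() plus the glasso package are available, as
# they are inside the package namespace):
# X <- matrix(rnorm(50 * 3), 50, 3)  # lagged predictors
# Y <- matrix(rnorm(50 * 3), 50, 3)  # current observations
# fit <- Rothmana(X, Y, lambda_beta = 0.1, lambda_kappa = 0.1)
# fit$EBIC; fit$beta; fit$kappa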
|
/graphicalVAR/R/Rothmana.R
|
no_license
|
akhikolla/InformationHouse
|
R
| false | false | 2,892 |
r
|
library(shiny)
library(dygraphs)
shinyUI(fluidPage(
tags$head(
tags$style(HTML("
@import url(http://fonts.googleapis.com/css?family=Poiret+One);
h1 {
font-family: 'Poiret One', cursive;
font-weight: 500;
line-height: 1.1;
}
"))
),
titlePanel(h1("Saudi Hollandi Bank's Facebook Data Analysis")),
tabsetPanel(
tabPanel("Overall View",
fluidRow(
column(width = 6, dygraphOutput("totalOverview")),
column(width = 6, dygraphOutput("likedOverview"))
),
br(),
br(),
fluidRow(
column(width = 6, dygraphOutput("commentedOverview")),
column(width = 6, dygraphOutput("sharedOverview"))
)
),
tabPanel("Monthly View",
fluidRow(
column(width = 6, dygraphOutput("totalMonthly")),
column(width = 6, dygraphOutput("likedMonthly"))
),
br(),
br(),
fluidRow(
column(width = 6, dygraphOutput("commentedMonthly")),
column(width = 6, dygraphOutput("sharedMonthly"))
)
),
tabPanel("Week Day View",
fluidRow(
column(width = 6, plotOutput("totalWeekday")),
column(width = 6, plotOutput("likedWeekday"))
),
br(),
br(),
fluidRow(
column(width = 6, plotOutput("commentedWeekday")),
column(width = 6, plotOutput("sharedWeekday"))
)
),
tabPanel("Correlation",
fluidRow(
column(width = 4, plotOutput("corrPlot1")),
column(width = 4, plotOutput("corrPlot2")),
column(width = 4, plotOutput("corrPlot3"))
)
),
tabPanel("Word Clouds",
fluidRow(
column(h3("Words Appearing in Most Liked Posts"),
width = 12, plotOutput("likedWords"))
),
br(),
br(),
fluidRow(
column(h3("Words Appearing in Most Commented on Posts"),
width = 12, plotOutput("commentedWords"))
),
br(),
br(),
fluidRow(
column(h3("Words Appearing in Most Shared Posts"),
width = 12, plotOutput("sharedWords"))
)
),
tabPanel("About")
)
)
)
|
/SHB/Social-Media-App/ui.R
|
no_license
|
aliarsalankazmi/Aimia-Projects
|
R
| false | false | 2,062 |
r
|
LLNintegral <-
function(ss=4)
{
dump("LLNintegral","c:\\StatBook\\LLNintegral.r")
par(mfrow=c(1,1),mar=c(4,4,.2,.5))
set.seed(ss)
n=seq(from=100,to=10000,by=100)
ln=length(n)
int=rep(NA,ln)
a=-10;b=5
for(i in 1:ln)
{
X=runif(n[i],min=a,max=b)
int[i]=(b-a)*mean(exp(-0.123*X^6)*log(1+X^8))
}
plot(n,int,type="b",xlab="Number of simulated values, n",ylab="LLN integral")
exact.int=integrate(function(x) exp(-0.123*x^6)*log(1+x^8),lower=-10,upper=5)$value
segments(-1000,exact.int,10000,exact.int,lwd=3)
text(4000,1.2,paste("Exact integral =",round(exact.int,5)),adj=0)
}
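# A minimal, hedged illustration (added; not in the original file) of the law-of-large-
# numbers idea the plot above demonstrates: a Monte Carlo estimate of an integral over
# [a, b] is (b - a) * mean(f(U)) with U ~ Uniform(a, b), and it converges to the value
# returned by deterministic quadrature as the sample size grows.
mc_integral_demo <- function(n = 1e5, a = -10, b = 5, seed = 1) {
  f <- function(x) exp(-0.123 * x^6) * log(1 + x^8)
  set.seed(seed)
  mc <- (b - a) * mean(f(runif(n, min = a, max = b)))   # Monte Carlo estimate
  exact <- integrate(f, lower = a, upper = b)$value     # quadrature benchmark
  c(monte.carlo = mc, exact = exact)
}
# mc_integral_demo()   # the two values agree closely for n = 1e5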
|
/RcodeData/LLNintegral.r
|
no_license
|
PepSalehi/advancedstatistics
|
R
| false | false | 609 |
r
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/image_analysis.R
\name{wbt_sigmoidal_contrast_stretch}
\alias{wbt_sigmoidal_contrast_stretch}
\title{Sigmoidal contrast stretch}
\usage{
wbt_sigmoidal_contrast_stretch(input, output, cutoff = 0, gain = 1,
num_tones = 256, verbose_mode = FALSE)
}
\arguments{
\item{input}{Input raster file.}
\item{output}{Output raster file.}
\item{cutoff}{Cutoff value between 0.0 and 0.95.}
\item{gain}{Gain value.}
\item{num_tones}{Number of tones in the output image.}
\item{verbose_mode}{Sets verbose mode. If verbose mode is False, tools will not print output messages.}
}
\value{
Returns the tool text outputs.
}
\description{
Performs a sigmoidal contrast stretch on input images.
}
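% Hedged usage sketch (added for illustration; the file names are hypothetical and the
% WhiteboxTools backend must be installed for the call to run):
\examples{
\dontrun{
wbt_sigmoidal_contrast_stretch(input = "band.tif", output = "band_stretched.tif",
                               cutoff = 0.02, gain = 2, num_tones = 256)
}
}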
|
/man/wbt_sigmoidal_contrast_stretch.Rd
|
permissive
|
Remote-Sensing-Forks/whiteboxR
|
R
| false | true | 759 |
rd
|
library(shiny)
library(listviewer)
safe_list <- function(.list) {
  # Assign the tryCatch() result so that the fallback list returned by the
  # error handler reaches 'obj'
  obj <- tryCatch({
    out <- as.list(.list)
    lapply(out, function(x){
      if (is.character(x) && nchar(x) > 300) {
        paste0(
          substr(x, 1, pmin(nchar(x), 300)),
          "... [[ truncated for space ]]"
        )
      } else {
        x
      }
    })
  }, error = function(e) {
    message(e)
    list(
      "ERROR",
      e,
      "Please refresh the page to see if the error persists",
      "If so, submit an issue here:",
      "https://github.com/colearendt/shiny-session-info"
    )
  })
  return(obj)
}
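# Illustrative check (added; not part of the original app): character fields longer
# than 300 characters are truncated, everything else passes through unchanged, e.g.
# str(safe_list(list(short = "ok", long = strrep("x", 500))))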
ui <- function(req) {fluidPage(
titlePanel("System and Shiny info"),
sidebarLayout(
sidebarPanel(
h3("An Example App for Exploring Shiny"),
p("If you encounter any issues with this application, please submit bugs to ", a("GitHub", href = "https://github.com/colearendt/shiny-session-info")),
p("Use the listviewers to the right for exploring Shiny session state"),
br(),
h4("Important Notes"),
p("This app has shown fragility with a large number of groups. If you see errors and have a large number of groups, please refresh")
),
mainPanel(
h2("Sys.info()"),
tableOutput("sys_info"),
h2("Sys.getenv(names = TRUE)"),
tableOutput("system_env"),
h2("Shiny: session$clientData"),
jsoneditOutput("clientdataText"),
h2("Shiny: session"),
jsoneditOutput("sessionInfo"),
h2("Shiny: UI req object"),
jsonedit(
safe_list(req)
, mode = 'view'
, modes = list('view')
)
)
)
)}
server <- function(input, output, session) {
output$sys_info <- renderTable({
  # Sys.info() returns a named character vector; present it as a Name/Value table
  s <- Sys.info()
  data.frame(Name = names(s), Value = unname(s))
})
output$system_env <- renderTable({
s <- Sys.getenv(names = TRUE)
data.frame(name = names(s), value = as.character(s))
})
clean_environ <- function(environ){
if (is.environment(environ)) {
lenv <- as.list(environ)
lenv <- lenv[which(!lapply(lenv, typeof) %in% c("environment"))]
return(lenv)
} else {
return(environ)
}
}
# Store in a convenience variable
cdata <- session$clientData
output$sessionInfo <- renderJsonedit({
  # Assign the tryCatch() result so the fallback list from the error handler
  # reaches calt_final
  calt_final <- tryCatch({
    calt <- as.list(session)
    calt_type <- lapply(calt, typeof)
    calt_clean <- calt[which(!calt_type %in% c("closure"))]
    calt_clean <- lapply(calt_clean, clean_environ)
    calt_class <- lapply(calt_clean, class)
    # drop reactive values and output objects, which cannot be serialised for display
    calt_clean[which(!calt_class %in% c("reactivevalues", "shinyoutput"))]
  },
  error = function(e) {
    message(e)
    list("ERROR occurred", e, "Please refresh the page")
  })
  jsonedit(calt_final
           , mode = 'view'
           , modes = list('view'))
})
# Values from cdata returned as text
output$clientdataText <- renderJsonedit({
jsonedit(as.list(cdata), mode = 'view', modes = list('view'))
})
}
# Run the application
shinyApp(ui = ui, server = server)
|
/app.R
|
no_license
|
colearendt/shiny-session-info
|
R
| false | false | 3,639 |
r
|
if (Sys.info()["sysname"] == "Darwin" & Sys.info()["user"] == "xavier") {
chopit.routines.directory = "~/Documents/Travail/Boulot Fac/Doctorat/1_ Productions personnelles/24_Range-Frequency/2_programmes/"
share.data.directory = "~/Documents/Travail/Boulot Fac/Doctorat/Data bases/SHARE/0_merged_data/"
}
if (Sys.info()["sysname"] == "Linux" & Sys.info()["user"] == "x.fontaine") {
chopit.routines.directory = "~/U/Travail/Range_frequency/routines/"
share.data.directory = "~/U/Travail/Range_frequency/data/"
}
source(paste(chopit.routines.directory,"likelihood.R", sep = ""), chdir = TRUE)
source(paste(chopit.routines.directory,"ChopitStartingValues.R", sep = ""), chdir = TRUE)
source(paste(chopit.routines.directory,"summary.Chopit.R", sep = ""), chdir = TRUE)
Chopit <- function(formula, data, heterosk = FALSE, naive = FALSE, par.init = NULL, optim.method = "BFGS", varcov.method = "OPG") {
# CHOPIT()
# Args:
# formula: list containing 3 formulae arguments; the first one must be named self, the second one vign, the last one tau. Remark that
# a constant is expected to be specified in all the equations, even though the program will perform the necessary normalization
# beta["cste"] = 0. If heterosk == TRUE, a 4th argument (sigma) is required.
# data: dataset associated to the formulae
# heterosk: Should the model account for potential heteroskedasticity ? If so, sd(Y*_s / x_s, x_sigma) = sigma_s(x_sigma) = exp(x_sigma %*% kappa)
# par.init: initial parameters to be passed to optim
# naive: TRUE if initial values for parameters should be set the "naive" way (i.e. all parameters are 0, except the intercept of each threshold
# equation, which is set to 0.1). Else, a more sophisticated way is used, through ChopitStartingValues
# varcov.method: method to be used to estimate the variance-covariance matrix for the parameters: none, hessian or OPG
#
# Returns (as a list)
# optim.results: result from optimization (object of classes maxLik and maxim); for the sake of sparsity, gradientObs is deleted
# coef: list of coefficients. beta0 is the beta vector, with a 0 as a first argument by normalization since there is an intercept in x.s. Same holds
# for kappa0
# var.cov: variance covariance matrix (if varcov.method != "none")
# constants: list of important constants (defined throughout the code)
# contrib.number: number of contributions to the partial likelihood
# call.arg: argument used when the function has been called. 'data.name' and parent.frame replaces the original dataset, by giving both the name of
# this dataframe and its location. By doing so, the object returned remains quite small in size.
# col.to.drop: if Chopit() finds any colinearity problem, it drops the incriminated columns. The arguments in col.to.drop contain the number of the
# columns to be dropped
# Libraries
library(MASS) # used for finding proper starting values, using polr
library(maxLik)
# Start the clock
ptm <- proc.time()
# GLOBAL SANITY CHECKS
# data should be a data.frame
if (!is.data.frame(data)) stop("Chopit: the argument data is not a data.frame")
# varcov.method should be either "none", "hessian" or OPG
if (!varcov.method %in% c("none", "hessian", "OPG")) stop("Chopit: varcov.method should be either 'none', 'hessian' or 'OPG'")
# PARSING THE FORMULAE
# Sanity checks: checking the "formula" list is properly defined
# Is "formula" a list ?
if (!is.list(formula)) stop("Chopit: formula is not a list")
# Is each element a formula ?
IsFormula <- function(x) {
is.formula <- (class(x) == "formula")
return(is.formula)
}
if (any(!sapply(X = formula, FUN = IsFormula))) stop("Chopit: at least one element in the list formula is not a formula")
# Are the names appropriately chosen?
if (any(!c("self", "tau", "vign") %in% names(formula))) stop("Chopit: names in formula badly specified (one of self, tau, vign is missing)")
if (heterosk & !("sigma" %in% names(formula))) stop("Chopit: 'heterosk == TRUE' but no 'sigma' in the formula")
if (!heterosk & ("sigma" %in% names(formula))) stop("Chopit: heterosk = FALSE while there is 'sigma' in the formula")
# Parsing process
f.self <- formula$self
f.tau <- formula$tau
f.vign <- formula$vign
if (heterosk) f.sigma <- formula$sigma
# PRODUCING THE DATA MATRICES
# Sanity checks:
# A constant is expected to be included in the self-assessment equation (no + 0 or - 1 should take place in this equation). If no constant
# specified, we should force a constant to exist (cf. terms.object so see what to modify), and indicate we do so.
if (attr(terms(f.self), "intercept") == 0) stop("No constant in the self-assessment equation formula (one is expected, even though we normalize
the associated coef to 0)")
# Getting the name of the provided argument 'data' before data is evaluated
data.name <- deparse(substitute(data))
# Dropping unused levels in data if any remaining
data <- droplevels(data)
# # # Self-assessment. cbind() is used to make sure each object is at least a column-vector (else a matrix). original.objects are created to
# # keep the objects as they were with NA, so that it can be returned by Chopit() and eventually passed to other functions (GroupAnalysis).
# # Indeed, whenever some values in x.tau is missing, then observation in x.s is deleted here ; but these observations can be used in
# # GroupAnalysis(). Remark that variables dropped due to multi-co here are also going to be dropped in the "original" objects.
# mf.self <- model.frame(formula = f.self, data = data, na.action = NULL)
# y.s <- model.response(data = mf.self) ; y.s <- cbind(y.s)
# # Vignettes
# mf.vign <- model.frame(formula = f.vign, data = data, na.action = NULL)
# y.v <- model.response(data = mf.vign) ; y.v <- cbind(y.v)
# # Checking again for missing levels, but now when all y.s and y.v are NA. Otherwise, some levels may not be missing for the whole dataset, but
# # missing when considering only the observations where we have at least one assessment avaible.
# # Then getting back y.s and y.v again for this smaller dataset
# data <- data[!is.na(y.s) | rowSums(!is.na(y.v)), ]
# data <- droplevels(data)
# mf.self <- model.frame(formula = f.self, data = data, na.action = NULL)
# y.s <- model.response(data = mf.self) ; y.s <- cbind(y.s)
# mf.vign <- model.frame(formula = f.vign, data = data, na.action = NULL)
# y.v <- model.response(data = mf.vign) ; y.v <- cbind(y.v)
# # Self-assessment: X
# mf.self <- model.frame(formula = f.self, data = data, na.action = NULL)
# x.s <- model.matrix(object = f.self, data = mf.self) ; x.s <- cbind(x.s)
# # Tau
# mf.tau <- model.frame(formula = f.tau, data = data, na.action = NULL) ;
# x.tau <- model.matrix(object = f.tau, data = mf.tau) ; x.tau <- cbind(x.tau)
# Self-assessment. cbind() is used to make sure each object is at least a column-vector (else a matrix). original.objects are created to
# keep the objects as they were with NA, so that it can be returned by Chopit() and eventually passed to other functions (GroupAnalysis).
# Indeed, whenever some values in x.tau is missing, then observation in x.s is deleted here ; but these observations can be used in
# GroupAnalysis(). Remark that variables dropped due to multi-co here are also going to be dropped in the "original" objects.
mf.self <- model.frame(formula = f.self, data = data, na.action = NULL)
y.s <- model.response(data = mf.self) ; y.s <- cbind(y.s)
x.s <- model.matrix(object = f.self, data = mf.self) ; x.s <- cbind(x.s)
# Tau
mf.tau <- model.frame(formula = f.tau, data = data, na.action = NULL) ;
x.tau <- model.matrix(object = f.tau, data = mf.tau) ; x.tau <- cbind(x.tau)
# Vignettes
mf.vign <- model.frame(formula = f.vign, data = data, na.action = NULL)
y.v <- model.response(data = mf.vign) ; y.v <- cbind(y.v)
# Heteroskedasticity
if (heterosk) {
mf.sigma <- model.frame(formula = f.sigma, data = data, na.action = NULL)
x.sigma <- model.matrix(object = f.sigma, data = mf.sigma) ; x.sigma <- cbind(x.sigma)
}
else {
x.sigma <- NULL
original.x.sigma <- NULL
}
# DEALING WITH NA
# Observations that cannot be used at all:
# - Rows for which there is no self-assessment AND vignette information
# - Rows for which one of the x.tau is missing (impossible to calculate the thresholds)
# Observations that cannot be used to calculate individual contribution to the self-assessment question likelihood (could be used for vignettes)
# - Row for which one of the x.s is missing
# - Rows for which y.s is missing
# Observations that cannot be used to calculate indiv contrib to ONE vignette v question likelihood
# - Rows for which the vignette statement is missing
# Here, I drop all the data for which one or both of the first two conditions are not met. This avoids calculating likelihood contributions for
# observations that are going to be NA. The rest of the NAs are handled through the use of na.rm = TRUE in the sum functions of the ll function.
# Alternatives could have been used, but this way of handling NA offers a good trade-off between speed and generality.
#
# Deleting rows corresponding to the first two cases
# First, detecting the incomplete cases
any.non.na <- function(x) { # Function that takes a vector, and says whether there is any non-missing value
any(!is.na(x))
}
incomplete <- ((is.na(y.s)) & !(apply(y.v, 1, any.non.na))) | (!complete.cases(x.tau))
# Then, deleting the incriminated rows
y.s <- y.s[!incomplete, , drop = FALSE]
x.s <- x.s[!incomplete, , drop = FALSE]
x.tau <- x.tau[!incomplete, , drop = FALSE]
y.v <- y.v[!incomplete, , drop = FALSE]
if (heterosk) x.sigma <- x.sigma[!incomplete, , drop = FALSE]
# DEALING WITH PERFECT COLINEARITY
# Detecting perfect colinearity through polr, displaying the name of the incriminating variables, and dropping the corresponding variables
# We focus on observations for which we observe at the same time the self-assessment variable and any vignette assessment. This allows us to check
# that we observe each variable when, and especially that none of these variables is a constant (e.g. dummies not-used)
s.e.and.v.e <- !is.na(y.s) & rowSums(1 * !is.na(y.v)) # Observations for which we observe both a self-eval and a vignette eval
# x.s
if (ncol(x.s) > 2) { # Dealing with multico matters only if there is more than the constant in x.s
temp.polr <- polr(as.factor(y.s[s.e.and.v.e]) ~ x.s[s.e.and.v.e, -1] , method = "probit")
proper.names <- gsub("x.s\\[s.e.and.v.e, -1\\]", "", names(temp.polr$coefficients))
col.to.drop.x.s <- !(colnames(x.s) %in% proper.names) ; col.to.drop.x.s[1] <- FALSE # Index of the columns in x.s to be dropped
if (any(col.to.drop.x.s)) {
cat("Chopit: x.s is not full-rank. Dropping the following columns:", colnames(x.s)[col.to.drop.x.s], "\n")
x.s <- x.s[, !col.to.drop.x.s, drop = FALSE]
}
rm(temp.polr)
}
# x.tau
col.to.drop.x.tau <- NULL
if (ncol(x.tau) > 2) { # Dealing with multico matters only if there is more than the constant in x.tau
temp.polr <- polr(as.factor(y.s[s.e.and.v.e]) ~ x.tau[s.e.and.v.e, -1] , method = "probit")
proper.names <- gsub("x.tau\\[s.e.and.v.e, -1\\]", "", names(temp.polr$coefficients))
col.to.drop.x.tau <- !(colnames(x.tau) %in% proper.names) ; col.to.drop.x.tau[1] <- FALSE # Index of the columns in x.tau to be dropped
if (any(col.to.drop.x.tau)) {
cat("Chopit: x.tau is not full-rank. Dropping the following columns:", colnames(x.tau)[col.to.drop.x.tau], "\n")
x.tau <- x.tau[, !col.to.drop.x.tau, drop = FALSE]
}
rm(temp.polr)
}
# x.sigma
col.to.drop.x.sigma <- NULL
if (heterosk) if (ncol(x.sigma) > 2) { # Dealing with multico matters only if there is more than the constant in x.sigma
temp.polr <- polr(as.factor(y.s[s.e.and.v.e]) ~ x.sigma[s.e.and.v.e, -1] , method = "probit")
proper.names <- gsub("x.sigma\\[s.e.and.v.e, -1\\]", "", names(temp.polr$coefficients))
col.to.drop.x.sigma <- !(colnames(x.sigma) %in% proper.names) ; col.to.drop.x.sigma[1] <- FALSE # Index of the columns in x.sigma to be dropped
if (any(col.to.drop.x.sigma)) {
cat("Chopit: x.sigma is not full-rank. Dropping the following columns:", colnames(x.sigma)[col.to.drop.x.sigma], "\n")
x.sigma <- x.sigma[, ! col.to.drop.x.sigma, drop = FALSE]
}
rm(temp.polr)
}
# CONSTANTS
# Calculating the number of statement categories, by taking the maximum among the self-assessements and the vignettes
length.levels.as.factor <- function(x) { # Function that calculates the number of levels of an object passed to as.factor
length(levels(as.factor(x)))
}
kK <- max(sapply(X = data.frame(cbind(y.s, y.v)), FUN = length.levels.as.factor)) # Number of statement categories
kBeta0Nrow <- ncol(x.s) - 1 # Number of parameters in beta0
kGammaNrow <- ncol(x.tau) # Number of gamma parameters for each threshold equation
kV <- ncol(y.v) # Number of vignettes
if (heterosk) {
kKappa0Nrow <- ncol(x.sigma) - 1 # Number of row in kappa (except the 0 for the intercept)
}
else {
kKappa0Nrow <- NULL
}
# GENERATING STARTING VALUES
if (naive == TRUE & is.null(par.init)) {
beta0.init <- numeric(kBeta0Nrow)
gamma.init <- matrix(0, nrow = kGammaNrow, ncol = kK - 1) ; gamma.init[1, ] <- 0.1
theta.init = numeric(kV)
sigma.tilde.v.init <- numeric(kV) # equivalent to setting sigma. v = (1, ..., 1)
if (heterosk) {
kappa0.init <- numeric(kKappa0Nrow) # parameters kappa, except the first 0
}
if (heterosk) par.init <- c(beta0.init, gamma.init, theta.init, sigma.tilde.v.init, kappa0.init)
else par.init <- c(beta0.init, gamma.init, theta.init, sigma.tilde.v.init)
}
if (naive == FALSE & is.null(par.init)) {
par.init <- ChopitStartingValues(y.s = y.s, x.s = x.s, kK = kK, kBeta0Nrow = kBeta0Nrow, kGammaNrow, kV)
if (heterosk) {
par.init <- c(par.init, numeric(kKappa0Nrow))
cat("Chopit: The non-naive parameter initialization is not optimized for use when heteroskedasticity is allowed. Naive initialization may be preferred.\n")
}
}
# LIKELIHOOD MAXIMIZATION
chopit.envir <- environment()
# optim.results <- optim(par = par.init, fn = ChopitLlCompound, y.s = y.s, y.v = y.v, x.s = x.s, x.tau = x.tau, kK = kK, kBeta0Nrow = kBeta0Nrow,
# kGammaNrow = kGammaNrow, kV = kV, method = optim.method, control = list(trace = 2), hessian = varcov.calculus)
optim.results <- maxLik(logLik = ChopitLlCompound, grad = NULL, hess = NULL, start = par.init, finalHessian = (varcov.method == "hessian"),
iterlim = 2e+3, method = optim.method, print.level = 2, y.s = y.s, y.v = y.v, x.s = x.s, x.tau = x.tau, kK = kK,
kBeta0Nrow = kBeta0Nrow, kGammaNrow = kGammaNrow, kV = kV, chopit.envir = chopit.envir,
heterosk = heterosk, x.sigma = x.sigma, kKappa0Nrow = kKappa0Nrow)
# VAR-COV MATRIX
if (varcov.method == "none") {
var.cov <- matrix(NA, nrow = length(par.init), ncol = length(par.init))
}
if (varcov.method == "hessian") {
var.cov <- - solve(optim.results$hessian)
}
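# OPG / BHHH estimator: with the per-observation score vectors stacked row-wise in
# gradientObs (one row per observation), the variance-covariance matrix of the
# estimates is approximated by the inverse of the outer product of the gradients.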
if (varcov.method == "OPG") {
var.cov <- solve(t(optim.results$gradientObs) %*% optim.results$gradientObs)
}
# NAMING
# Naming the rows (and cols) of the estimated parameters and of the varcov matrix
# Creating a vector of names
beta0.names <- colnames(x.s[, -1])
gamma.names <- vector("numeric")
for (i in 1:(kK - 1)) {
gamma.names <- cbind(gamma.names, paste(paste("gamma", i, sep = ""), colnames(x.tau)))
}
theta.names <- paste("theta", 1:kV, sep = "")
sigma.tilde.v.names <- paste("sigma.tilde.v", 1:kV, sep = "")
if (heterosk) kappa.names <- paste("kappa", colnames(x.sigma[, -1]))
else kappa.names <- NULL
names <- c(beta0.names, gamma.names, theta.names, sigma.tilde.v.names, kappa.names)
# Renaming the appropriate objects
names(optim.results$estimate) <- names
rownames(var.cov) <- colnames(var.cov) <- names
# DEPARSING COEF
# Deparsing the coefficients in sub-categories to facilitate further uses
beta0 <- optim.results$estimate[1:kBeta0Nrow]
gamma <- matrix(optim.results$estimate[(length(beta0) + 1) : (length(beta0) + kGammaNrow * (kK - 1))], ncol = kK - 1)
theta <- optim.results$estimate[(length(beta0) + length(gamma) + 1) : (length(beta0) + length(gamma) + kV)]
sigma.tilde.v <- optim.results$estimate[(length(beta0) + length(gamma) + length(theta) + 1) : (length(beta0) + length(gamma) + length(theta)
+ kV)]
if (heterosk) kappa0 <- optim.results$estimate[(length(beta0) + length(gamma) + length(theta) + length(sigma.tilde.v) + 1) : (length(beta0) + length(gamma) + length(theta) + length(sigma.tilde.v) + kKappa0Nrow)]
else kappa0 <- NULL
# Switching the clock off
elapsed.time <- proc.time() - ptm
# RETURN
optim.results$gradientObs <- NULL # Dropping gradientObs for the sake of sparsity
results <- list(optim.results = optim.results, coef = list(beta0 = beta0, gamma = gamma, theta = theta, sigma.tilde.v =sigma.tilde.v,
kappa0 = kappa0),
var.cov = var.cov, constants = list(kK = kK, kBeta0Nrow = kBeta0Nrow, kGammaNrow = kGammaNrow, kV = kV,
kKappa0Nrow = kKappa0Nrow, heterosk = heterosk),
contrib.number = contrib.number,
call.arg = list(formula = formula, data.name = data.name, heterosk = heterosk, naive = naive, par.init = par.init,
optim.method = optim.method, varcov.method = varcov.method, parent.frame = parent.frame()),
col.to.drop = list(x.s = col.to.drop.x.s, x.tau = col.to.drop.x.tau, x.sigma = col.to.drop.x.sigma),
elapsed.time = elapsed.time)
class(results) <- "Chopit"
return(results)
}
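# Hedged usage sketch (added; not from the original file; data and variable names are
# hypothetical, not actual SHARE variables). The 'formula' argument is a named list
# with elements self, vign and tau (plus sigma when heterosk = TRUE), each specified
# with an intercept:
# fit <- Chopit(formula = list(self = srh ~ age + female,
#                              vign = cbind(vign1, vign2, vign3) ~ 1,
#                              tau  = ~ age + female),
#               data = share.data, varcov.method = "OPG")
# summary(fit)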
# "Compiling" the code to make it faster
library(compiler)
ChopitTau <- cmpfun(ChopitTau)
ChopitLlSelfEval <- cmpfun(ChopitLlSelfEval)
ChopitLlVignEval <- cmpfun(ChopitLlVignEval)
ChopitLl <- cmpfun(ChopitLl)
ChopitLlCompound <- cmpfun(ChopitLlCompound)
Chopit <- cmpfun(Chopit)
|
/heterogeneity/Chopit.R
|
no_license
|
applXcation/udacity
|
R
| false | false | 18,628 |
r
|
% Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/gapStats.R
\name{gapStats}
\alias{gapStats}
\title{Unbiased estimate of the number of cell or gene clusters using the gap
statistic.}
\usage{
gapStats(cellData, gene_clust = FALSE, fun = "kmeans", max_clust = 25,
boot = 100, plot = TRUE, save = FALSE, print = TRUE)
}
\arguments{
\item{cellData}{ExpressionSet object created with readCells (and preferably
transformed with prepCells). It is also helpful to first run
reduceGenes_var and reduceGenes_pca.}
\item{gene_clust}{Boolean specifying whether the gap statistic should be
calculated for the samples or genes. TRUE calculates for the cells, FALSE
for the genes.}
\item{fun}{Character string specifying whether the gap statistic should be
calculated for kmeans, pam, or hierarchical clustering. Possible values
are kmeans, pam, or hclust. All three can be specified, or a subset of
the three.}
\item{max_clust}{Integer specifying the maximum possible number of clusters
in the dataset. Set higher than the expected value.}
\item{boot}{Integer specifying the number of bootstrap iterations to perform
when calculating the gap statistic.}
\item{plot}{Boolean specifying whether a plot of the gap values vs the number
of clusters should be produced.}
\item{save}{Boolean specifying whether the plot should be saved.}
\item{print}{Boolean specifying whether the optimal number of clusters should
be printed in the terminal window.}
}
\value{
The optimal number of clusters calculated from the gap statistic with
the given parameters. A new column is added to pData indicating the
optimal number of cell or gene clusters for the chosen clustering method.
}
\description{
Takes ExpressionSet object and calculates the optimal number of kmeans, pam,
or hierarchical clusters for the samples or genes using the gap statistic.
}
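% Hedged usage sketch (added for illustration; the object name is hypothetical):
\examples{
\dontrun{
gapStats(cellData, gene_clust = FALSE, fun = "kmeans",
         max_clust = 25, boot = 100, plot = TRUE)
}
}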
|
/man/gapStats.Rd
|
no_license
|
joeburns06/hocuspocus
|
R
| false | false | 2,084 |
rd
|
print.bal.tab <- function(x, imbalanced.only = "as.is", un = "as.is", disp.bal.tab = "as.is", stats = "as.is", disp.thresholds = "as.is", disp = "as.is", digits = max(3, getOption("digits") - 3), ...) {
A <- list(...)
call <- x$call
p.ops <- attr(x, "print.options")
balance <- x$Balance
baltal <- maximbal <- list()
for (s in p.ops$compute) {
baltal[[s]] <- x[[paste.("Balanced", s)]]
maximbal[[s]] <- x[[paste.("Max.Imbalance", s)]]
}
nn <- x$Observations
#Prevent exponential notation printing
op <- options(scipen = 999)
on.exit(options(op))
#Adjustments to print options
if (!identical(un, "as.is") && p.ops$disp.adj) {
if (!rlang::is_bool(un)) stop("'un' must be TRUE, FALSE, or \"as.is\".", call. = FALSE)
if (p.ops$quick && p.ops$un == FALSE && un == TRUE) {
warning("'un' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$un <- un
}
if (!identical(disp, "as.is")) {
if (!is.character(disp)) stop("'disp' must be a character vector.")
allowable.disp <- c("means", "sds", all_STATS(p.ops$type))
if (any(disp %nin% allowable.disp)) {
stop(paste(word_list(disp[disp %nin% allowable.disp], and.or = "and", quotes = 2, is.are = TRUE),
"not allowed in 'disp'."), call. = FALSE)
}
if (any(disp %nin% p.ops$compute)) {
warning(paste("'disp' cannot include", word_list(disp[disp %nin% p.ops$compute], and.or = "or", quotes = 2), "if quick = TRUE in the original call to bal.tab()."), call. = FALSE)
}
else p.ops$disp <- disp
}
if (is_not_null(A[["disp.means"]]) && !identical(A[["disp.means"]], "as.is")) {
if (!rlang::is_bool(A[["disp.means"]])) stop("'disp.means' must be TRUE, FALSE, or \"as.is\".")
if ("means" %nin% p.ops$compute && A[["disp.means"]] == TRUE) {
warning("'disp.means' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, "means"[A[["disp.means"]]]))
}
if (is_not_null(A[["disp.sds"]]) && !identical(A[["disp.sds"]], "as.is")) {
if (!rlang::is_bool(A[["disp.sds"]])) stop("'disp.sds' must be TRUE, FALSE, or \"as.is\".", call. = FALSE)
if ("sds" %nin% p.ops$compute && A[["disp.sds"]] == TRUE) {
warning("'disp.sds' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, "sds"[A[["disp.sds"]]]))
}
if (!identical(stats, "as.is")) {
if (!is_(stats, "character")) stop("'stats' must be a character vector.")
stats <- match_arg(stats, all_STATS(p.ops$type), several.ok = TRUE)
stats_in_p.ops <- stats %in% p.ops$compute
if (any(!stats_in_p.ops)) {
stop(paste0("'stats' cannot contain ", word_list(stats[!stats_in_p.ops], and.or = "or", quotes = 2), " when ",
if (sum(!stats_in_p.ops) > 1) "they were " else "it was ",
"not requested in the original call to bal.tab()."), call. = TRUE)
}
else p.ops$disp <- unique(c(p.ops$disp[p.ops$disp %nin% all_STATS()], stats))
}
for (s in all_STATS(p.ops$type)) {
if (is_not_null(A[[STATS[[s]]$disp_stat]]) && !identical(A[[STATS[[s]]$disp_stat]], "as.is")) {
if (!rlang::is_bool(A[[STATS[[s]]$disp_stat]])) {
stop(paste0("'", STATS[[s]]$disp_stat, "' must be TRUE, FALSE, or \"as.is\"."), call. = FALSE)
}
if (s %nin% p.ops$compute && isTRUE(A[[STATS[[s]]$disp_stat]])) {
warning(paste0("'", STATS[[s]]$disp_stat, "' cannot be set to TRUE if quick = TRUE in the original call to bal.tab()."), call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, s))
}
}
for (s in p.ops$compute[p.ops$compute %in% all_STATS(p.ops$type)]) {
if (STATS[[s]]$threshold %in% names(A) && !identical(temp.thresh <- A[[STATS[[s]]$threshold]], "as.is")) {
if (is_not_null(temp.thresh) &&
(!is.numeric(temp.thresh) || length(temp.thresh) != 1 ||
is_null(p.ops[["thresholds"]][[s]]) ||
p.ops[["thresholds"]][[s]] != temp.thresh))
stop(paste0("'", STATS[[s]]$threshold, "' must be NULL or \"as.is\"."))
if (is_null(temp.thresh)) {
p.ops[["thresholds"]][[s]] <- NULL
baltal[[s]] <- NULL
maximbal[[s]] <- NULL
}
}
if (s %nin% p.ops$disp) {
p.ops[["thresholds"]][[s]] <- NULL
baltal[[s]] <- NULL
maximbal[[s]] <- NULL
}
}
if (!identical(disp.thresholds, "as.is")) {
if (!is.logical(disp.thresholds) || anyNA(disp.thresholds)) stop("'disp.thresholds' must only contain TRUE or FALSE.", call. = FALSE)
if (is_null(names(disp.thresholds))) {
if (length(disp.thresholds) <= length(p.ops[["thresholds"]])) {
names(disp.thresholds) <- names(p.ops[["thresholds"]])[seq_along(disp.thresholds)]
}
else {
stop("More entries were given to 'disp.thresholds' than there are thresholds in the bal.tab object.", call. = FALSE)
}
}
if (!all(names(disp.thresholds) %pin% names(p.ops[["thresholds"]]))) {
warning(paste0(word_list(names(disp.thresholds)[!names(disp.thresholds) %pin% names(p.ops[["thresholds"]])],
quotes = 2, is.are = TRUE), " not available in thresholds and will be ignored."), call. = FALSE)
disp.thresholds <- disp.thresholds[names(disp.thresholds) %pin% names(p.ops[["thresholds"]])]
}
names(disp.thresholds) <- match_arg(names(disp.thresholds), names(p.ops[["thresholds"]]), several.ok = TRUE)
for (x in names(disp.thresholds)) {
if (!disp.thresholds[x]) {
p.ops[["thresholds"]][[x]] <- NULL
baltal[[x]] <- NULL
maximbal[[x]] <- NULL
}
}
}
if (!identical(disp.bal.tab, "as.is")) {
if (!rlang::is_bool(disp.bal.tab)) stop("'disp.bal.tab' must be TRUE, FALSE, or \"as.is\".")
p.ops$disp.bal.tab <- disp.bal.tab
}
if (p.ops$disp.bal.tab) {
if (!identical(imbalanced.only, "as.is")) {
if (!rlang::is_bool(imbalanced.only)) stop("'imbalanced.only' must be TRUE, FALSE, or \"as.is\".")
p.ops$imbalanced.only <- imbalanced.only
}
if (p.ops$imbalanced.only) {
if (is_null(p.ops$thresholds)) {
warning("A threshold must be specified if imbalanced.only = TRUE. Displaying all covariates.", call. = FALSE)
p.ops$imbalanced.only <- FALSE
}
}
}
else p.ops$imbalanced.only <- FALSE
if (is_not_null(call)) {
cat(underline("Call") %+% "\n " %+% paste(deparse(call), collapse = "\n") %+% "\n\n")
}
if (p.ops$disp.bal.tab) {
if (p.ops$imbalanced.only) {
keep.row <- rowSums(apply(balance[grepl(".Threshold", names(balance), fixed = TRUE)], 2, function(x) !is.na(x) & startsWith(x, "Not Balanced"))) > 0
}
else keep.row <- rep(TRUE, nrow(balance))
keep.col <- setNames(as.logical(c(TRUE,
rep(unlist(lapply(p.ops$compute[p.ops$compute %nin% all_STATS()], function(s) {
p.ops$un && s %in% p.ops$disp
})), switch(p.ops$type, bin = 2, cont = 1)),
unlist(lapply(p.ops$compute[p.ops$compute %in% all_STATS()], function(s) {
c(p.ops$un && s %in% p.ops$disp,
p.ops$un && !p.ops$disp.adj && is_not_null(p.ops$thresholds[[s]]))
})),
rep(c(rep(unlist(lapply(p.ops$compute[p.ops$compute %nin% all_STATS()], function(s) {
p.ops$disp.adj && s %in% p.ops$disp
})), switch(p.ops$type, bin = 2, cont = 1)),
unlist(lapply(p.ops$compute[p.ops$compute %in% all_STATS()], function(s) {
c(p.ops$disp.adj && s %in% p.ops$disp,
p.ops$disp.adj && is_not_null(p.ops$thresholds[[s]]))
}))
),
p.ops$nweights + !p.ops$disp.adj))),
names(balance))
cat(underline("Balance Measures") %+% "\n")
if (all(!keep.row)) cat(italic("All covariates are balanced.") %+% "\n")
else print.data.frame_(round_df_char(balance[keep.row, keep.col, drop = FALSE], digits))
cat("\n")
}
for (s in p.ops$compute) {
if (is_not_null(baltal[[s]])) {
cat(underline(paste("Balance tally for", STATS[[s]]$balance_tally_for)) %+% "\n")
print.data.frame_(baltal[[s]])
cat("\n")
}
if (is_not_null(maximbal[[s]])) {
cat(underline(paste("Variable with the greatest", STATS[[s]]$variable_with_the_greatest)) %+% "\n")
print.data.frame_(round_df_char(maximbal[[s]], digits), row.names = FALSE)
cat("\n")
}
}
if (is_not_null(nn)) {
for (i in seq_len(NROW(nn))) {
if (all(nn[i,] == 0)) {
nn <- nn[-i, , drop = FALSE]
attr(nn, "ss.type") <- attr(nn, "ss.type")[-i]
}
}
if (all(c("Matched (ESS)", "Matched (Unweighted)") %in% rownames(nn)) &&
all(check_if_zero(nn["Matched (ESS)",] - nn["Matched (Unweighted)",]))) {
nn <- nn[rownames(nn)!="Matched (Unweighted)", , drop = FALSE]
rownames(nn)[rownames(nn) == "Matched (ESS)"] <- "Matched"
}
cat(underline(attr(nn, "tag")) %+% "\n")
print.warning <- FALSE
if (length(attr(nn, "ss.type")) > 1 && nunique.gt(attr(nn, "ss.type")[-1], 1)) {
ess <- ifelse(attr(nn, "ss.type") == "ess", "*", "")
nn <- setNames(cbind(nn, ess), c(names(nn), ""))
print.warning <- TRUE
}
print.data.frame_(round_df_char(nn, digits = min(2, digits), pad = " "))
if (print.warning) cat(italic("* indicates effective sample size"))
}
invisible(x)
}
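# Hedged usage sketch (added; object and variable names are illustrative, not taken
# from the package). The print method is normally invoked implicitly; its arguments
# change what is displayed without recomputing the balance table, e.g.:
# b <- bal.tab(treat ~ age + educ + married, data = lalonde, weights = "att.weights",
#              method = "weighting", thresholds = c(m = .1))
# print(b, imbalanced.only = TRUE, disp = c("means", "sds"), digits = 2)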
print.bal.tab.cluster <- function(x, imbalanced.only = "as.is", un = "as.is", disp.bal.tab = "as.is", stats = "as.is", disp.thresholds = "as.is", disp = "as.is", which.cluster, cluster.summary = "as.is", cluster.fun = "as.is", digits = max(3, getOption("digits") - 3), ...) {
#Replace .all and .none with NULL and NA respectively
.call <- match.call(expand.dots = TRUE)
if (any(sapply(seq_along(.call), function(x) identical(as.character(.call[[x]]), ".all") || identical(as.character(.call[[x]]), ".none")))) {
.call[sapply(seq_along(.call), function(x) identical(as.character(.call[[x]]), ".all"))] <- expression(NULL)
.call[sapply(seq_along(.call), function(x) identical(as.character(.call[[x]]), ".none"))] <- expression(NA)
return(eval.parent(.call))
}
A <- list(...)
call <- x$call
c.balance <- x$Cluster.Balance
c.balance.summary <- x$Balance.Across.Clusters
nn <- x$Observations
p.ops <- attr(x, "print.options")
baltal <- maximbal <- list()
for (s in p.ops$stats) {
baltal[[s]] <- x[[paste.("Balanced", s)]]
maximbal[[s]] <- x[[paste.("Max.Imbalance", s)]]
}
#Prevent exponential notation printing
    op <- options(scipen = 999)
    on.exit(options(op))
#Adjustments to print options
if (!identical(un, "as.is") && p.ops$disp.adj) {
if (!rlang::is_bool(un)) stop("'un' must be TRUE, FALSE, or \"as.is\".", call. = FALSE)
if (p.ops$quick && p.ops$un == FALSE && un == TRUE) {
warning("'un' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$un <- un
}
if (!identical(disp, "as.is")) {
        if (!is.character(disp)) stop("'disp' must be a character vector.")
allowable.disp <- c("means", "sds", all_STATS(p.ops$type))
if (any(disp %nin% allowable.disp)) {
stop(paste(word_list(disp[disp %nin% allowable.disp], and.or = "and", quotes = 2, is.are = TRUE),
"not allowed in 'disp'."), call. = FALSE)
}
if (any(disp %nin% p.ops$compute)) {
warning(paste("'disp' cannot include", word_list(disp[disp %nin% p.ops$compute], and.or = "or", quotes = 2), "if quick = TRUE in the original call to bal.tab()."), call. = FALSE)
}
else p.ops$disp <- disp
}
if (is_not_null(A[["disp.means"]]) && !identical(A[["disp.means"]], "as.is")) {
if (!rlang::is_bool(A[["disp.means"]])) stop("'disp.means' must be TRUE, FALSE, or \"as.is\".")
if ("means" %nin% p.ops$compute && A[["disp.means"]] == TRUE) {
warning("'disp.means' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, "means"[A[["disp.means"]]]))
A[["disp.means"]] <- NULL
}
if (is_not_null(A[["disp.sds"]]) && !identical(A[["disp.sds"]], "as.is")) {
if (!rlang::is_bool(A[["disp.sds"]])) stop("'disp.sds' must be TRUE, FALSE, or \"as.is\".", call. = FALSE)
if ("sds" %nin% p.ops$compute && A[["disp.sds"]] == TRUE) {
warning("'disp.sds' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, "sds"[A[["disp.sds"]]]))
A[["disp.sds"]] <- NULL
}
if (!identical(stats, "as.is")) {
        if (!is_(stats, "character")) stop("'stats' must be a character vector.")
stats <- match_arg(stats, all_STATS(p.ops$type), several.ok = TRUE)
stats_in_p.ops <- stats %in% p.ops$compute
if (any(!stats_in_p.ops)) {
stop(paste0("'stats' cannot contain ", word_list(stats[!stats_in_p.ops], and.or = "or", quotes = 2), " if quick = TRUE in the original call to bal.tab()."), call. = TRUE)
}
else p.ops$disp <- unique(c(p.ops$disp[p.ops$disp %nin% all_STATS()], stats))
}
for (s in all_STATS(p.ops$type)) {
if (is_not_null(A[[STATS[[s]]$disp_stat]]) && !identical(A[[STATS[[s]]$disp_stat]], "as.is")) {
if (!rlang::is_bool(A[[STATS[[s]]$disp_stat]])) {
stop(paste0("'", STATS[[s]]$disp_stat, "' must be TRUE, FALSE, or \"as.is\"."), call. = FALSE)
}
if (s %nin% p.ops$compute && isTRUE(A[[STATS[[s]]$disp_stat]])) {
warning(paste0("'", STATS[[s]]$disp_stat, "' cannot be set to TRUE if quick = TRUE in the original call to bal.tab()."), call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, s))
A[[STATS[[s]]$disp_stat]] <- NULL
}
}
for (s in p.ops$compute[p.ops$compute %in% all_STATS(p.ops$type)]) {
if (STATS[[s]]$threshold %in% names(A) && !identical(temp.thresh <- A[[STATS[[s]]$threshold]], "as.is")) {
if (is_not_null(temp.thresh) &&
(!is.numeric(temp.thresh) || length(temp.thresh) != 1 ||
is_null(p.ops[["thresholds"]][[s]]) ||
p.ops[["thresholds"]][[s]] != temp.thresh))
stop(paste0("'", STATS[[s]]$threshold, "' must be NULL or \"as.is\"."))
if (is_null(temp.thresh)) {
p.ops[["thresholds"]][[s]] <- NULL
baltal[[s]] <- NULL
maximbal[[s]] <- NULL
}
A[[STATS[[s]]$threshold]] <- NULL
}
if (s %nin% p.ops$disp) {
p.ops[["thresholds"]][[s]] <- NULL
baltal[[s]] <- NULL
maximbal[[s]] <- NULL
}
}
if (!identical(disp.thresholds, "as.is")) {
if (!is.logical(disp.thresholds) || anyNA(disp.thresholds)) stop("'disp.thresholds' must only contain TRUE or FALSE.", call. = FALSE)
if (is_null(names(disp.thresholds))) {
if (length(disp.thresholds) <= length(p.ops[["thresholds"]])) {
names(disp.thresholds) <- names(p.ops[["thresholds"]])[seq_along(disp.thresholds)]
}
else {
stop("More entries were given to 'disp.thresholds' than there are thresholds in the bal.tab object.", call. = FALSE)
}
}
if (!all(names(disp.thresholds) %pin% names(p.ops[["thresholds"]]))) {
warning(paste0(word_list(names(disp.thresholds)[!names(disp.thresholds) %pin% names(p.ops[["thresholds"]])],
quotes = 2, is.are = TRUE), " not available in thresholds and will be ignored."), call. = FALSE)
disp.thresholds <- disp.thresholds[names(disp.thresholds) %pin% names(p.ops[["thresholds"]])]
}
names(disp.thresholds) <- match_arg(names(disp.thresholds), names(p.ops[["thresholds"]]), several.ok = TRUE)
for (x in names(disp.thresholds)) {
if (!disp.thresholds[x]) {
p.ops[["thresholds"]][[x]] <- NULL
baltal[[x]] <- NULL
maximbal[[x]] <- NULL
}
}
}
if (!identical(cluster.summary, "as.is")) {
if (!rlang::is_bool(cluster.summary)) stop("'cluster.summary' must be TRUE, FALSE, or \"as.is\".")
if (p.ops$quick && p.ops$cluster.summary == FALSE && cluster.summary == TRUE) {
warning("'cluster.summary' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$cluster.summary <- cluster.summary
}
if (p.ops$disp.bal.tab) {
if (!identical(imbalanced.only, "as.is")) {
if (!rlang::is_bool(imbalanced.only)) stop("'imbalanced.only' must be TRUE, FALSE, or \"as.is\".")
p.ops$imbalanced.only <- imbalanced.only
}
if (p.ops$imbalanced.only) {
if (is_null(p.ops$thresholds)) {
warning("A threshold must be specified if imbalanced.only = TRUE. Displaying all covariates.", call. = FALSE)
p.ops$imbalanced.only <- FALSE
}
}
}
else p.ops$imbalanced.only <- FALSE
if (!missing(which.cluster)) {
if (paste(deparse1(substitute(which.cluster)), collapse = "") == ".none") which.cluster <- NA
else if (paste(deparse1(substitute(which.cluster)), collapse = "") == ".all") which.cluster <- NULL
if (!identical(which.cluster, "as.is")) {
p.ops$which.cluster <- which.cluster
}
}
if (!p.ops$quick || is_null(p.ops$cluster.fun)) computed.cluster.funs <- c("min", "mean", "max")
else computed.cluster.funs <- p.ops$cluster.fun
if (is_not_null(cluster.fun) && !identical(cluster.fun, "as.is")) {
if (!is.character(cluster.fun) || !all(cluster.fun %pin% computed.cluster.funs)) stop(paste0("'cluster.fun' must be ", word_list(c(computed.cluster.funs, "as.is"), and.or = "or", quotes = 2)), call. = FALSE)
}
else {
if (p.ops$abs) cluster.fun <- c("mean", "max")
else cluster.fun <- c("min", "mean", "max")
}
cluster.fun <- match_arg(tolower(cluster.fun), computed.cluster.funs, several.ok = TRUE)
#Checks and Adjustments
if (is_null(p.ops$which.cluster))
which.cluster <- seq_along(c.balance)
else if (anyNA(p.ops$which.cluster)) {
which.cluster <- integer(0)
}
else if (is.numeric(p.ops$which.cluster)) {
which.cluster <- intersect(seq_along(c.balance), p.ops$which.cluster)
if (is_null(which.cluster)) {
warning("No indices in 'which.cluster' are cluster indices. Displaying all clusters instead.", call. = FALSE)
which.cluster <- seq_along(c.balance)
}
}
else if (is.character(p.ops$which.cluster)) {
which.cluster <- intersect(names(c.balance), p.ops$which.cluster)
if (is_null(which.cluster)) {
warning("No names in 'which.cluster' are cluster names. Displaying all clusters instead.", call. = FALSE)
which.cluster <- seq_along(c.balance)
}
}
else {
warning("The argument to 'which.cluster' must be .all, .none, or a vector of cluster indices or cluster names. Displaying all clusters instead.", call. = FALSE)
which.cluster <- seq_along(c.balance)
}
#Printing
if (is_not_null(call)) {
cat(underline("Call") %+% "\n " %+% paste(deparse(call), collapse = "\n") %+% "\n\n")
}
if (is_not_null(which.cluster)) {
cat(underline("Balance by cluster") %+% "\n")
for (i in which.cluster) {
cat("\n - - - " %+% italic("Cluster: " %+% names(c.balance)[i]) %+% " - - - \n")
do.call(print, c(list(c.balance[[i]]), p.ops[names(p.ops) %nin% names(A)], A), quote = TRUE)
}
cat(paste0(paste(rep(" -", round(nchar(paste0("\n - - - Cluster: ", names(c.balance)[i], " - - - "))/2)), collapse = ""), " \n"))
cat("\n")
}
if (isTRUE(as.logical(p.ops$cluster.summary)) && is_not_null(c.balance.summary)) {
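        #Column mask for the across-cluster summary table: the Type column is always kept; for each
        #computed statistic there is one column per computed aggregating function, kept only if that
        #function was requested via 'cluster.fun', plus a threshold column shown only when a single
        #aggregating function is displayed.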
s.keep.col <- as.logical(c(TRUE,
unlist(lapply(p.ops$compute[p.ops$compute %in% all_STATS(p.ops$type)], function(s) {
c(unlist(lapply(computed.cluster.funs, function(af) {
p.ops$un && s %in% p.ops$disp && af %in% cluster.fun
})),
p.ops$un && !p.ops$disp.adj && length(cluster.fun) == 1 && is_not_null(p.ops$thresholds[[s]]))
})),
rep(
unlist(lapply(p.ops$compute[p.ops$compute %in% all_STATS(p.ops$type)], function(s) {
c(unlist(lapply(computed.cluster.funs, function(af) {
p.ops$disp.adj && s %in% p.ops$disp && af %in% cluster.fun
})),
p.ops$disp.adj && length(cluster.fun) == 1 && is_not_null(p.ops$thresholds[[s]]))
})),
p.ops$nweights + !p.ops$disp.adj)
))
if (p.ops$disp.bal.tab) {
cat(underline("Balance summary across all clusters") %+% "\n")
print.data.frame_(round_df_char(c.balance.summary[, s.keep.col, drop = FALSE], digits))
cat("\n")
}
if (is_not_null(nn)) {
for (i in rownames(nn)) {
if (all(nn[i,] == 0)) nn <- nn[rownames(nn)!=i,]
}
if (all(c("Matched (ESS)", "Matched (Unweighted)") %in% rownames(nn)) &&
all(check_if_zero(nn["Matched (ESS)",] - nn["Matched (Unweighted)",]))) {
nn <- nn[rownames(nn)!="Matched (Unweighted)", , drop = FALSE]
rownames(nn)[rownames(nn) == "Matched (ESS)"] <- "Matched"
}
cat(underline(attr(nn, "tag")) %+% "\n")
print.warning <- FALSE
if (length(attr(nn, "ss.type")) > 1 && nunique.gt(attr(nn, "ss.type")[-1], 1)) {
ess <- ifelse(attr(nn, "ss.type") == "ess", "*", "")
nn <- setNames(cbind(nn, ess), c(names(nn), ""))
print.warning <- TRUE
}
print.data.frame_(round_df_char(nn, digits = min(2, digits), pad = " "))
if (print.warning) cat(italic("* indicates effective sample size"))
}
}
invisible(x)
}
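#Print method for bal.tab objects computed on multiply imputed data ("bal.tab.imp").
#Illustrative usage (not run; `b` and the argument values are hypothetical and assume the original
#bal.tab() call included an 'imp' argument identifying the imputations):
#  print(b, which.imp = .none, imp.summary = TRUE)
#  print(b, which.imp = 1:3, un = TRUE, digits = 2)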
print.bal.tab.imp <- function(x, imbalanced.only = "as.is", un = "as.is", disp.bal.tab = "as.is", stats = "as.is", disp.thresholds = "as.is", disp = "as.is", which.imp, imp.summary = "as.is", imp.fun = "as.is", digits = max(3, getOption("digits") - 3), ...) {
#Replace .all and .none with NULL and NA respectively
.call <- match.call(expand.dots = TRUE)
if (any(sapply(seq_along(.call), function(x) identical(as.character(.call[[x]]), ".all") || identical(as.character(.call[[x]]), ".none")))) {
.call[sapply(seq_along(.call), function(x) identical(as.character(.call[[x]]), ".all"))] <- expression(NULL)
.call[sapply(seq_along(.call), function(x) identical(as.character(.call[[x]]), ".none"))] <- expression(NA)
return(eval.parent(.call))
}
A <- list(...)
call <- x$call
i.balance <- x[["Imputation.Balance"]]
i.balance.summary <- x[["Balance.Across.Imputations"]]
nn <- x$Observations
p.ops <- attr(x, "print.options")
baltal <- maximbal <- list()
for (s in p.ops$stats) {
baltal[[s]] <- x[[paste.("Balanced", s)]]
maximbal[[s]] <- x[[paste.("Max.Imbalance", s)]]
}
#Prevent exponential notation printing
    op <- options(scipen = 999)
    on.exit(options(op))
#Adjustments to print options
if (!identical(un, "as.is") && p.ops$disp.adj) {
if (!rlang::is_bool(un)) stop("'un' must be TRUE, FALSE, or \"as.is\".", call. = FALSE)
if (p.ops$quick && p.ops$un == FALSE && un == TRUE) {
warning("'un' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$un <- un
}
if (!identical(disp, "as.is")) {
        if (!is.character(disp)) stop("'disp' must be a character vector.")
allowable.disp <- c("means", "sds", all_STATS(p.ops$type))
if (any(disp %nin% allowable.disp)) {
stop(paste(word_list(disp[disp %nin% allowable.disp], and.or = "and", quotes = 2, is.are = TRUE),
"not allowed in 'disp'."), call. = FALSE)
}
if (any(disp %nin% p.ops$compute)) {
warning(paste("'disp' cannot include", word_list(disp[disp %nin% p.ops$compute], and.or = "or", quotes = 2), "if quick = TRUE in the original call to bal.tab()."), call. = FALSE)
}
else p.ops$disp <- disp
}
if (is_not_null(A[["disp.means"]]) && !identical(A[["disp.means"]], "as.is")) {
if (!rlang::is_bool(A[["disp.means"]])) stop("'disp.means' must be TRUE, FALSE, or \"as.is\".")
if ("means" %nin% p.ops$compute && A[["disp.means"]] == TRUE) {
warning("'disp.means' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, "means"[A[["disp.means"]]]))
A[["disp.means"]] <- NULL
}
if (is_not_null(A[["disp.sds"]]) && !identical(A[["disp.sds"]], "as.is")) {
if (!rlang::is_bool(A[["disp.sds"]])) stop("'disp.sds' must be TRUE, FALSE, or \"as.is\".", call. = FALSE)
if ("sds" %nin% p.ops$compute && A[["disp.sds"]] == TRUE) {
warning("'disp.sds' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, "sds"[A[["disp.sds"]]]))
A[["disp.sds"]] <- NULL
}
if (!identical(stats, "as.is")) {
        if (!is_(stats, "character")) stop("'stats' must be a character vector.")
stats <- match_arg(stats, all_STATS(p.ops$type), several.ok = TRUE)
stats_in_p.ops <- stats %in% p.ops$compute
if (any(!stats_in_p.ops)) {
stop(paste0("'stats' cannot contain ", word_list(stats[!stats_in_p.ops], and.or = "or", quotes = 2), " if quick = TRUE in the original call to bal.tab()."), call. = TRUE)
}
else p.ops$disp <- unique(c(p.ops$disp[p.ops$disp %nin% all_STATS()], stats))
}
for (s in all_STATS(p.ops$type)) {
if (is_not_null(A[[STATS[[s]]$disp_stat]]) && !identical(A[[STATS[[s]]$disp_stat]], "as.is")) {
if (!rlang::is_bool(A[[STATS[[s]]$disp_stat]])) {
stop(paste0("'", STATS[[s]]$disp_stat, "' must be TRUE, FALSE, or \"as.is\"."), call. = FALSE)
}
if (s %nin% p.ops$compute && isTRUE(A[[STATS[[s]]$disp_stat]])) {
warning(paste0("'", STATS[[s]]$disp_stat, "' cannot be set to TRUE if quick = TRUE in the original call to bal.tab()."), call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, s))
A[[STATS[[s]]$disp_stat]] <- NULL
}
}
for (s in p.ops$compute[p.ops$compute %in% all_STATS(p.ops$type)]) {
if (STATS[[s]]$threshold %in% names(A) && !identical(temp.thresh <- A[[STATS[[s]]$threshold]], "as.is")) {
if (is_not_null(temp.thresh) &&
(!is.numeric(temp.thresh) || length(temp.thresh) != 1 ||
is_null(p.ops[["thresholds"]][[s]]) ||
p.ops[["thresholds"]][[s]] != temp.thresh))
stop(paste0("'", STATS[[s]]$threshold, "' must be NULL or \"as.is\"."))
if (is_null(temp.thresh)) {
p.ops[["thresholds"]][[s]] <- NULL
baltal[[s]] <- NULL
maximbal[[s]] <- NULL
}
A[[STATS[[s]]$threshold]] <- NULL
}
if (s %nin% p.ops$disp) {
p.ops[["thresholds"]][[s]] <- NULL
baltal[[s]] <- NULL
maximbal[[s]] <- NULL
}
}
if (!identical(disp.thresholds, "as.is")) {
if (!is.logical(disp.thresholds) || anyNA(disp.thresholds)) stop("'disp.thresholds' must only contain TRUE or FALSE.", call. = FALSE)
if (is_null(names(disp.thresholds))) {
if (length(disp.thresholds) <= length(p.ops[["thresholds"]])) {
names(disp.thresholds) <- names(p.ops[["thresholds"]])[seq_along(disp.thresholds)]
}
else {
stop("More entries were given to 'disp.thresholds' than there are thresholds in the bal.tab object.", call. = FALSE)
}
}
if (!all(names(disp.thresholds) %pin% names(p.ops[["thresholds"]]))) {
warning(paste0(word_list(names(disp.thresholds)[!names(disp.thresholds) %pin% names(p.ops[["thresholds"]])],
quotes = 2, is.are = TRUE), " not available in thresholds and will be ignored."), call. = FALSE)
disp.thresholds <- disp.thresholds[names(disp.thresholds) %pin% names(p.ops[["thresholds"]])]
}
names(disp.thresholds) <- match_arg(names(disp.thresholds), names(p.ops[["thresholds"]]), several.ok = TRUE)
for (x in names(disp.thresholds)) {
if (!disp.thresholds[x]) {
p.ops[["thresholds"]][[x]] <- NULL
baltal[[x]] <- NULL
maximbal[[x]] <- NULL
}
}
}
if (!identical(imp.summary, "as.is")) {
if (!rlang::is_bool(imp.summary)) stop("'imp.summary' must be TRUE, FALSE, or \"as.is\".")
if (p.ops$quick && p.ops$imp.summary == FALSE && imp.summary == TRUE) {
warning("'imp.summary' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$imp.summary <- imp.summary
}
if (!identical(disp.bal.tab, "as.is")) {
if (!rlang::is_bool(disp.bal.tab)) stop("'disp.bal.tab' must be TRUE, FALSE, or \"as.is\".")
p.ops$disp.bal.tab <- disp.bal.tab
}
if (p.ops$disp.bal.tab) {
if (!identical(imbalanced.only, "as.is")) {
if (!rlang::is_bool(imbalanced.only)) stop("'imbalanced.only' must be TRUE, FALSE, or \"as.is\".")
p.ops$imbalanced.only <- imbalanced.only
}
if (p.ops$imbalanced.only) {
if (is_null(p.ops$thresholds)) {
warning("A threshold must be specified if imbalanced.only = TRUE. Displaying all covariates.", call. = FALSE)
p.ops$imbalanced.only <- FALSE
}
}
}
else p.ops$imbalanced.only <- FALSE
if (!missing(which.imp)) {
if (paste(deparse1(substitute(which.imp)), collapse = "") == ".none") which.imp <- NA
else if (paste(deparse1(substitute(which.imp)), collapse = "") == ".all") which.imp <- NULL
if (!identical(which.imp, "as.is")) {
p.ops$which.imp <- which.imp
}
}
if (!p.ops$quick || is_null(p.ops$imp.fun)) computed.imp.funs <- c("min", "mean", "max")
else computed.imp.funs <- p.ops$imp.fun
if (is_not_null(imp.fun) && !identical(imp.fun, "as.is")) {
if (!is.character(imp.fun) || !all(imp.fun %pin% computed.imp.funs)) stop(paste0("'imp.fun' must be ", word_list(c(computed.imp.funs, "as.is"), and.or = "or", quotes = 2)), call. = FALSE)
}
else {
if (p.ops$abs) imp.fun <- c("mean", "max")
else imp.fun <- c("min", "mean", "max")
}
imp.fun <- match_arg(tolower(imp.fun), computed.imp.funs, several.ok = TRUE)
#Checks and Adjustments
if (is_null(p.ops$which.imp))
which.imp <- seq_along(i.balance)
else if (anyNA(p.ops$which.imp)) {
which.imp <- integer(0)
}
else if (is.numeric(p.ops$which.imp)) {
which.imp <- intersect(seq_along(i.balance), p.ops$which.imp)
if (is_null(which.imp)) {
warning("No numbers in 'which.imp' are imputation numbers. No imputations will be displayed.", call. = FALSE)
which.imp <- integer(0)
}
}
else {
warning("The argument to 'which.imp' must be .all, .none, or a vector of imputation numbers.", call. = FALSE)
which.imp <- integer(0)
}
#Printing output
if (is_not_null(call)) {
cat(underline("Call") %+% "\n " %+% paste(deparse(call), collapse = "\n") %+% "\n\n")
}
if (is_not_null(which.imp)) {
cat(underline("Balance by imputation") %+% "\n")
for (i in which.imp) {
cat("\n - - - " %+% italic("Imputation " %+% names(i.balance)[i]) %+% " - - - \n")
            do.call(print, c(list(i.balance[[i]]), p.ops[names(p.ops) %nin% names(A)], A), quote = TRUE)
}
cat(paste0(paste(rep(" -", round(nchar(paste0("\n - - - Imputation: ", names(i.balance)[i], " - - - "))/2)), collapse = ""), " \n"))
cat("\n")
}
if (isTRUE(as.logical(p.ops$imp.summary)) && is_not_null(i.balance.summary)) {
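        #Column mask for the across-imputation summary table, built the same way as the across-cluster
        #summary mask: one column per requested aggregating function per statistic, with threshold
        #columns shown only when a single aggregating function is displayed.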
s.keep.col <- as.logical(c(TRUE,
unlist(lapply(p.ops$compute[p.ops$compute %in% all_STATS(p.ops$type)], function(s) {
c(unlist(lapply(computed.imp.funs, function(af) {
p.ops$un && s %in% p.ops$disp && af %in% imp.fun
})),
p.ops$un && !p.ops$disp.adj && length(imp.fun) == 1 && is_not_null(p.ops$thresholds[[s]]))
})),
rep(
unlist(lapply(p.ops$compute[p.ops$compute %in% all_STATS(p.ops$type)], function(s) {
c(unlist(lapply(computed.imp.funs, function(af) {
p.ops$disp.adj && s %in% p.ops$disp && af %in% imp.fun
})),
p.ops$disp.adj && length(imp.fun) == 1 && is_not_null(p.ops$thresholds[[s]]))
})),
p.ops$nweights + !p.ops$disp.adj)
))
if (p.ops$disp.bal.tab) {
cat(underline("Balance summary across all imputations") %+% "\n")
print.data.frame_(round_df_char(i.balance.summary[, s.keep.col, drop = FALSE], digits))
cat("\n")
}
if (is_not_null(nn)) {
for (i in rownames(nn)) {
if (all(nn[i,] == 0)) nn <- nn[rownames(nn)!=i,]
}
if (all(c("Matched (ESS)", "Matched (Unweighted)") %in% rownames(nn)) &&
all(check_if_zero(nn["Matched (ESS)",] - nn["Matched (Unweighted)",]))) {
nn <- nn[rownames(nn)!="Matched (Unweighted)", , drop = FALSE]
rownames(nn)[rownames(nn) == "Matched (ESS)"] <- "Matched"
}
cat(underline(attr(nn, "tag")) %+% "\n")
print.warning <- FALSE
if (length(attr(nn, "ss.type")) > 1 && nunique.gt(attr(nn, "ss.type")[-1], 1)) {
ess <- ifelse(attr(nn, "ss.type") == "ess", "*", "")
nn <- setNames(cbind(nn, ess), c(names(nn), ""))
print.warning <- TRUE
}
print.data.frame_(round_df_char(nn, digits = min(2, digits), pad = " "))
if (print.warning) cat(italic("* indicates effective sample size"))
}
}
invisible(x)
}
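#Print method for bal.tab objects with a multi-category treatment ("bal.tab.multi").
#Illustrative usage (not run; `b`, the treatment level names, and the argument values are hypothetical):
#  print(b, which.treat = .all, multi.summary = FALSE)
#  print(b, which.treat = c("A", "B"), imbalanced.only = TRUE)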
print.bal.tab.multi <- function(x, imbalanced.only = "as.is", un = "as.is", disp.bal.tab = "as.is", stats = "as.is", disp.thresholds = "as.is", disp = "as.is", which.treat, multi.summary = "as.is", digits = max(3, getOption("digits") - 3), ...) {
#Replace .all and .none with NULL and NA respectively
.call <- match.call(expand.dots = TRUE)
if (any(sapply(seq_along(.call), function(x) identical(as.character(.call[[x]]), ".all") || identical(as.character(.call[[x]]), ".none")))) {
.call[sapply(seq_along(.call), function(x) identical(as.character(.call[[x]]), ".all"))] <- expression(NULL)
.call[sapply(seq_along(.call), function(x) identical(as.character(.call[[x]]), ".none"))] <- expression(NA)
return(eval.parent(.call))
}
A <- list(...)
call <- x$call
m.balance <- x[["Pair.Balance"]]
m.balance.summary <- x[["Balance.Across.Pairs"]]
nn <- x$Observations
p.ops <- attr(x, "print.options")
baltal <- maximbal <- list()
for (s in p.ops$stats) {
baltal[[s]] <- x[[paste.("Balanced", s)]]
maximbal[[s]] <- x[[paste.("Max.Imbalance", s)]]
}
#Prevent exponential notation printing
    op <- options(scipen = 999)
    on.exit(options(op))
#Adjustments to print options
if (!identical(un, "as.is") && p.ops$disp.adj) {
if (!rlang::is_bool(un)) stop("'un' must be TRUE, FALSE, or \"as.is\".", call. = FALSE)
if (p.ops$quick && p.ops$un == FALSE && un == TRUE) {
warning("'un' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$un <- un
}
if (!identical(disp, "as.is")) {
        if (!is.character(disp)) stop("'disp' must be a character vector.")
allowable.disp <- c("means", "sds", all_STATS(p.ops$type))
if (any(disp %nin% allowable.disp)) {
stop(paste(word_list(disp[disp %nin% allowable.disp], and.or = "and", quotes = 2, is.are = TRUE),
"not allowed in 'disp'."), call. = FALSE)
}
if (any(disp %nin% p.ops$compute)) {
warning(paste("'disp' cannot include", word_list(disp[disp %nin% p.ops$compute], and.or = "or", quotes = 2), "if quick = TRUE in the original call to bal.tab()."), call. = FALSE)
}
else p.ops$disp <- disp
}
if (is_not_null(A[["disp.means"]]) && !identical(A[["disp.means"]], "as.is")) {
if (!rlang::is_bool(A[["disp.means"]])) stop("'disp.means' must be TRUE, FALSE, or \"as.is\".")
if ("means" %nin% p.ops$compute && A[["disp.means"]] == TRUE) {
warning("'disp.means' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, "means"[A[["disp.means"]]]))
}
if (is_not_null(A[["disp.sds"]]) && !identical(A[["disp.sds"]], "as.is")) {
if (!rlang::is_bool(A[["disp.sds"]])) stop("'disp.sds' must be TRUE, FALSE, or \"as.is\".", call. = FALSE)
if ("sds" %nin% p.ops$compute && A[["disp.sds"]] == TRUE) {
warning("'disp.sds' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, "sds"[A[["disp.sds"]]]))
}
if (!identical(stats, "as.is")) {
        if (!is_(stats, "character")) stop("'stats' must be a character vector.")
stats <- match_arg(stats, all_STATS(p.ops$type), several.ok = TRUE)
stats_in_p.ops <- stats %in% p.ops$compute
if (any(!stats_in_p.ops)) {
stop(paste0("'stats' cannot contain ", word_list(stats[!stats_in_p.ops], and.or = "or", quotes = 2), " if quick = TRUE in the original call to bal.tab()."), call. = TRUE)
}
else p.ops$disp <- unique(c(p.ops$disp[p.ops$disp %nin% all_STATS()], stats))
}
for (s in all_STATS(p.ops$type)) {
if (is_not_null(A[[STATS[[s]]$disp_stat]]) && !identical(A[[STATS[[s]]$disp_stat]], "as.is")) {
if (!rlang::is_bool(A[[STATS[[s]]$disp_stat]])) {
stop(paste0("'", STATS[[s]]$disp_stat, "' must be TRUE, FALSE, or \"as.is\"."), call. = FALSE)
}
if (s %nin% p.ops$compute && isTRUE(A[[STATS[[s]]$disp_stat]])) {
warning(paste0("'", STATS[[s]]$disp_stat, "' cannot be set to TRUE if quick = TRUE in the original call to bal.tab()."), call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, s))
}
}
for (s in p.ops$compute[p.ops$compute %in% all_STATS(p.ops$type)]) {
if (STATS[[s]]$threshold %in% names(A) && !identical(temp.thresh <- A[[STATS[[s]]$threshold]], "as.is")) {
if (is_not_null(temp.thresh) &&
(!is.numeric(temp.thresh) || length(temp.thresh) != 1 ||
is_null(p.ops[["thresholds"]][[s]]) ||
p.ops[["thresholds"]][[s]] != temp.thresh))
stop(paste0("'", STATS[[s]]$threshold, "' must be NULL or \"as.is\"."))
if (is_null(temp.thresh)) {
p.ops[["thresholds"]][[s]] <- NULL
baltal[[s]] <- NULL
maximbal[[s]] <- NULL
}
}
if (s %nin% p.ops$disp) {
p.ops[["thresholds"]][[s]] <- NULL
baltal[[s]] <- NULL
maximbal[[s]] <- NULL
}
}
if (!identical(disp.thresholds, "as.is")) {
if (!is.logical(disp.thresholds) || anyNA(disp.thresholds)) stop("'disp.thresholds' must only contain TRUE or FALSE.", call. = FALSE)
if (is_null(names(disp.thresholds))) {
if (length(disp.thresholds) <= length(p.ops[["thresholds"]])) {
names(disp.thresholds) <- names(p.ops[["thresholds"]])[seq_along(disp.thresholds)]
}
else {
stop("More entries were given to 'disp.thresholds' than there are thresholds in the bal.tab object.", call. = FALSE)
}
}
if (!all(names(disp.thresholds) %pin% names(p.ops[["thresholds"]]))) {
warning(paste0(word_list(names(disp.thresholds)[!names(disp.thresholds) %pin% names(p.ops[["thresholds"]])],
quotes = 2, is.are = TRUE), " not available in thresholds and will be ignored."), call. = FALSE)
disp.thresholds <- disp.thresholds[names(disp.thresholds) %pin% names(p.ops[["thresholds"]])]
}
names(disp.thresholds) <- match_arg(names(disp.thresholds), names(p.ops[["thresholds"]]), several.ok = TRUE)
for (x in names(disp.thresholds)) {
if (!disp.thresholds[x]) {
p.ops[["thresholds"]][[x]] <- NULL
baltal[[x]] <- NULL
maximbal[[x]] <- NULL
}
}
}
if (!identical(multi.summary, "as.is")) {
if (!rlang::is_bool(multi.summary)) stop("'multi.summary' must be TRUE, FALSE, or \"as.is\".")
if (p.ops$quick && p.ops$multi.summary == FALSE && multi.summary == TRUE) {
warning("'multi.summary' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$multi.summary <- multi.summary
}
if (!identical(disp.bal.tab, "as.is")) {
if (!rlang::is_bool(disp.bal.tab)) stop("'disp.bal.tab' must be TRUE, FALSE, or \"as.is\".")
p.ops$disp.bal.tab <- disp.bal.tab
}
if (is_not_null(m.balance.summary)) {
if (p.ops$disp.bal.tab) {
if (!identical(imbalanced.only, "as.is")) {
if (!rlang::is_bool(imbalanced.only)) stop("'imbalanced.only' must be TRUE, FALSE, or \"as.is\".")
p.ops$imbalanced.only <- imbalanced.only
}
if (p.ops$imbalanced.only) {
if (is_null(p.ops$thresholds)) {
warning("A threshold must be specified if imbalanced.only = TRUE. Displaying all covariates.", call. = FALSE)
p.ops$imbalanced.only <- FALSE
}
}
}
else p.ops$imbalanced.only <- FALSE
if (p.ops$imbalanced.only) {
keep.row <- rowSums(apply(m.balance.summary[grepl(".Threshold", names(m.balance.summary), fixed = TRUE)], 2, function(x) !is.na(x) & startsWith(x, "Not Balanced"))) > 0
}
else keep.row <- rep(TRUE, nrow(m.balance.summary))
}
if (!missing(which.treat)) {
if (paste(deparse1(substitute(which.treat)), collapse = "") == ".none") which.treat <- NA
else if (paste(deparse1(substitute(which.treat)), collapse = "") == ".all") which.treat <- NULL
if (!identical(which.treat, "as.is")) {
p.ops$which.treat <- which.treat
}
}
#Checks and Adjustments
if (is_null(p.ops$which.treat))
which.treat <- p.ops$treat_names_multi
else if (anyNA(p.ops$which.treat)) {
which.treat <- character(0)
}
else if (is.numeric(p.ops$which.treat)) {
which.treat <- p.ops$treat_names_multi[seq_along(p.ops$treat_names_multi) %in% p.ops$which.treat]
if (is_null(which.treat)) {
warning("No numbers in 'which.treat' correspond to treatment values. No treatment pairs will be displayed.", call. = FALSE)
which.treat <- character(0)
}
}
else if (is.character(p.ops$which.treat)) {
which.treat <- p.ops$treat_names_multi[p.ops$treat_names_multi %in% p.ops$which.treat]
if (is_null(which.treat)) {
warning("No names in 'which.treat' correspond to treatment values. No treatment pairs will be displayed.", call. = FALSE)
which.treat <- character(0)
}
}
else {
warning("The argument to 'which.treat' must be .all, .none, or a vector of treatment names or indices. No treatment pairs will be displayed.", call. = FALSE)
which.treat <- character(0)
}
if (is_null(which.treat)) {
disp.treat.pairs <- character(0)
}
else {
if (p.ops$pairwise) {
if (length(which.treat) == 1) {
disp.treat.pairs <- names(m.balance)[sapply(names(m.balance), function(x) any(attr(m.balance[[x]], "print.options")$treat_names == which.treat))]
}
else {
disp.treat.pairs <- names(m.balance)[sapply(names(m.balance), function(x) all(attr(m.balance[[x]], "print.options")$treat_names %in% which.treat))]
}
}
else {
if (length(which.treat) == 1) {
disp.treat.pairs <- names(m.balance)[sapply(names(m.balance), function(x) {
treat_names <- attr(m.balance[[x]], "print.options")$treat_names
any(treat_names[treat_names != "All"] == which.treat)})]
}
else {
disp.treat.pairs <- names(m.balance)[sapply(names(m.balance), function(x) {
treat_names <- attr(m.balance[[x]], "print.options")$treat_names
all(treat_names[treat_names != "All"] %in% which.treat)})]
}
}
}
#Printing output
if (is_not_null(call)) {
cat(underline("Call") %+% "\n " %+% paste(deparse(call), collapse = "\n") %+% "\n\n")
}
if (is_not_null(disp.treat.pairs)) {
headings <- setNames(character(length(disp.treat.pairs)), disp.treat.pairs)
if (p.ops$pairwise) cat(underline("Balance by treatment pair") %+% "\n")
else cat(underline("Balance by treatment group") %+% "\n")
for (i in disp.treat.pairs) {
headings[i] <- "\n - - - " %+% italic(attr(m.balance[[i]], "print.options")$treat_names[1] %+% " (0) vs. " %+%
attr(m.balance[[i]], "print.options")$treat_names[2] %+% " (1)") %+% " - - - \n"
cat(headings[i])
do.call(print, c(list(m.balance[[i]]), p.ops[names(p.ops) %nin% names(A)], A), quote = TRUE)
}
cat(paste0(paste(rep(" -", round(max(nchar(headings))/2)), collapse = ""), " \n"))
cat("\n")
}
if (isTRUE(as.logical(p.ops$multi.summary)) && is_not_null(m.balance.summary)) {
computed.agg.funs <- "max"
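        #Column mask for the across-pairs summary table; only the "max" aggregating function is
        #computed here, so each displayed statistic contributes at most one column (plus its
        #threshold column) for the unadjusted and adjusted samples.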
s.keep.col <- as.logical(c(TRUE,
unlist(lapply(p.ops$compute[p.ops$compute %in% all_STATS("bin")], function(s) {
c(unlist(lapply(computed.agg.funs, function(af) {
p.ops$un && s %in% p.ops$disp && af %in% "max"
})),
p.ops$un && !p.ops$disp.adj && is_not_null(p.ops$thresholds[[s]]))
})),
rep(
unlist(lapply(p.ops$compute[p.ops$compute %in% all_STATS("bin")], function(s) {
c(unlist(lapply(computed.agg.funs, function(af) {
p.ops$disp.adj && s %in% p.ops$disp && af %in% "max"
})),
p.ops$disp.adj && is_not_null(p.ops$thresholds[[s]]))
})),
p.ops$nweights + !p.ops$disp.adj)
))
if (p.ops$disp.bal.tab) {
cat(underline("Balance summary across all treatment pairs") %+% "\n")
if (all(!keep.row)) cat(italic("All covariates are balanced.") %+% "\n")
else print.data.frame_(round_df_char(m.balance.summary[keep.row, s.keep.col, drop = FALSE], digits))
cat("\n")
}
if (is_not_null(nn)) {
tag <- attr(nn, "tag")
ss.type <- attr(nn, "ss.type")
for (i in rownames(nn)) {
if (all(nn[i,] == 0)) nn <- nn[rownames(nn)!=i,]
}
if (all(c("Matched (ESS)", "Matched (Unweighted)") %in% rownames(nn)) &&
all(check_if_zero(nn["Matched (ESS)",] - nn["Matched (Unweighted)",]))) {
nn <- nn[rownames(nn)!="Matched (Unweighted)", , drop = FALSE]
rownames(nn)[rownames(nn) == "Matched (ESS)"] <- "Matched"
}
cat(underline(tag) %+% "\n")
print.warning <- FALSE
if (length(ss.type) > 1 && nunique.gt(ss.type[-1], 1)) {
ess <- ifelse(ss.type == "ess", "*", "")
nn <- setNames(cbind(nn, ess), c(names(nn), ""))
print.warning <- TRUE
}
print.data.frame_(round_df_char(nn, digits = min(2, digits), pad = " "))
if (print.warning) cat(italic("* indicates effective sample size"))
}
}
invisible(x)
}
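#Print method for bal.tab objects for longitudinal (marginal structural model) treatments ("bal.tab.msm").
#Illustrative usage (not run; `b` and the argument values are hypothetical):
#  print(b, which.time = .all, msm.summary = TRUE)
#  print(b, which.time = 1:2, un = TRUE, digits = 2)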
print.bal.tab.msm <- function(x, imbalanced.only = "as.is", un = "as.is", disp.bal.tab = "as.is", stats = "as.is", disp.thresholds = "as.is", disp = "as.is", which.time, msm.summary = "as.is", digits = max(3, getOption("digits") - 3), ...) {
#Replace .all and .none with NULL and NA respectively
.call <- match.call(expand.dots = TRUE)
if (any(sapply(seq_along(.call), function(x) identical(as.character(.call[[x]]), ".all") || identical(as.character(.call[[x]]), ".none")))) {
.call[sapply(seq_along(.call), function(x) identical(as.character(.call[[x]]), ".all"))] <- expression(NULL)
.call[sapply(seq_along(.call), function(x) identical(as.character(.call[[x]]), ".none"))] <- expression(NA)
return(eval.parent(.call))
}
A <- list(...)
A <- clear_null(A[!vapply(A, function(x) identical(x, quote(expr =)), logical(1L))])
call <- x$call
msm.balance <- x[["Time.Balance"]]
msm.balance.summary <- x[["Balance.Across.Times"]]
nn <- x$Observations
p.ops <- attr(x, "print.options")
baltal <- maximbal <- list()
for (s in p.ops$stats) {
baltal[[s]] <- x[[paste.("Balanced", s)]]
maximbal[[s]] <- x[[paste.("Max.Imbalance", s)]]
}
#Prevent exponential notation printing
    op <- options(scipen = 999)
    on.exit(options(op))
#Adjustments to print options
if (!identical(un, "as.is") && p.ops$disp.adj) {
if (!rlang::is_bool(un)) stop("'un' must be TRUE, FALSE, or \"as.is\".", call. = FALSE)
if (p.ops$quick && p.ops$un == FALSE && un == TRUE) {
warning("'un' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$un <- un
}
if (!identical(disp, "as.is")) {
        if (!is.character(disp)) stop("'disp' must be a character vector.")
allowable.disp <- c("means", "sds", all_STATS(p.ops$type))
if (any(disp %nin% allowable.disp)) {
stop(paste(word_list(disp[disp %nin% allowable.disp], and.or = "and", quotes = 2, is.are = TRUE),
"not allowed in 'disp'."), call. = FALSE)
}
if (any(disp %nin% p.ops$compute)) {
warning(paste("'disp' cannot include", word_list(disp[disp %nin% p.ops$compute], and.or = "or", quotes = 2), "if quick = TRUE in the original call to bal.tab()."), call. = FALSE)
}
else p.ops$disp <- disp
}
if (is_not_null(A[["disp.means"]]) && !identical(A[["disp.means"]], "as.is")) {
if (!rlang::is_bool(A[["disp.means"]])) stop("'disp.means' must be TRUE, FALSE, or \"as.is\".")
if ("means" %nin% p.ops$compute && A[["disp.means"]] == TRUE) {
warning("'disp.means' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, "means"[A[["disp.means"]]]))
}
if (is_not_null(A[["disp.sds"]]) && !identical(A[["disp.sds"]], "as.is")) {
if (!rlang::is_bool(A[["disp.sds"]])) stop("'disp.sds' must be TRUE, FALSE, or \"as.is\".", call. = FALSE)
if ("sds" %nin% p.ops$compute && A[["disp.sds"]] == TRUE) {
warning("'disp.sds' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, "sds"[A[["disp.sds"]]]))
}
if (!identical(stats, "as.is")) {
        if (!is_(stats, "character")) stop("'stats' must be a character vector.")
stats <- match_arg(stats, all_STATS(p.ops$type), several.ok = TRUE)
stats_in_p.ops <- stats %in% p.ops$compute
if (any(!stats_in_p.ops)) {
stop(paste0("'stats' cannot contain ", word_list(stats[!stats_in_p.ops], and.or = "or", quotes = 2), " if quick = TRUE in the original call to bal.tab()."), call. = TRUE)
}
else p.ops$disp <- unique(c(p.ops$disp[p.ops$disp %nin% all_STATS()], stats))
}
for (s in all_STATS(p.ops$type)) {
if (is_not_null(A[[STATS[[s]]$disp_stat]]) && !identical(A[[STATS[[s]]$disp_stat]], "as.is")) {
if (!rlang::is_bool(A[[STATS[[s]]$disp_stat]])) {
stop(paste0("'", STATS[[s]]$disp_stat, "' must be TRUE, FALSE, or \"as.is\"."), call. = FALSE)
}
if (s %nin% p.ops$compute && isTRUE(A[[STATS[[s]]$disp_stat]])) {
warning(paste0("'", STATS[[s]]$disp_stat, "' cannot be set to TRUE if quick = TRUE in the original call to bal.tab()."), call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, s))
}
}
for (s in p.ops$compute[p.ops$compute %in% all_STATS(p.ops$type)]) {
if (STATS[[s]]$threshold %in% names(A) && !identical(temp.thresh <- A[[STATS[[s]]$threshold]], "as.is")) {
if (is_not_null(temp.thresh) &&
(!is.numeric(temp.thresh) || length(temp.thresh) != 1 ||
is_null(p.ops[["thresholds"]][[s]]) ||
p.ops[["thresholds"]][[s]] != temp.thresh))
stop(paste0("'", STATS[[s]]$threshold, "' must be NULL or \"as.is\"."))
if (is_null(temp.thresh)) {
p.ops[["thresholds"]][[s]] <- NULL
baltal[[s]] <- NULL
maximbal[[s]] <- NULL
}
}
if (s %nin% p.ops$disp) {
p.ops[["thresholds"]][[s]] <- NULL
baltal[[s]] <- NULL
maximbal[[s]] <- NULL
}
}
if (!identical(disp.thresholds, "as.is")) {
if (!is.logical(disp.thresholds) || anyNA(disp.thresholds)) stop("'disp.thresholds' must only contain TRUE or FALSE.", call. = FALSE)
if (is_null(names(disp.thresholds))) {
if (length(disp.thresholds) <= length(p.ops[["thresholds"]])) {
names(disp.thresholds) <- names(p.ops[["thresholds"]])[seq_along(disp.thresholds)]
}
else {
stop("More entries were given to 'disp.thresholds' than there are thresholds in the bal.tab object.", call. = FALSE)
}
}
if (!all(names(disp.thresholds) %pin% names(p.ops[["thresholds"]]))) {
warning(paste0(word_list(names(disp.thresholds)[!names(disp.thresholds) %pin% names(p.ops[["thresholds"]])],
quotes = 2, is.are = TRUE), " not available in thresholds and will be ignored."), call. = FALSE)
disp.thresholds <- disp.thresholds[names(disp.thresholds) %pin% names(p.ops[["thresholds"]])]
}
names(disp.thresholds) <- match_arg(names(disp.thresholds), names(p.ops[["thresholds"]]), several.ok = TRUE)
for (x in names(disp.thresholds)) {
if (!disp.thresholds[x]) {
p.ops[["thresholds"]][[x]] <- NULL
baltal[[x]] <- NULL
maximbal[[x]] <- NULL
}
}
}
if (!identical(msm.summary, "as.is")) {
if (!rlang::is_bool(msm.summary)) stop("'msm.summary' must be TRUE, FALSE, or \"as.is\".")
if (p.ops$quick && p.ops$msm.summary == FALSE && msm.summary == TRUE) {
warning("'msm.summary' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$msm.summary <- msm.summary
}
if (!identical(disp.bal.tab, "as.is")) {
if (!rlang::is_bool(disp.bal.tab)) stop("'disp.bal.tab' must be TRUE, FALSE, or \"as.is\".")
p.ops$disp.bal.tab <- disp.bal.tab
}
if (p.ops$disp.bal.tab) {
if (!identical(imbalanced.only, "as.is")) {
if (!rlang::is_bool(imbalanced.only)) stop("'imbalanced.only' must be TRUE, FALSE, or \"as.is\".")
p.ops$imbalanced.only <- imbalanced.only
}
if (p.ops$imbalanced.only) {
if (is_null(p.ops$thresholds)) {
warning("A threshold must be specified if imbalanced.only = TRUE. Displaying all covariates.", call. = FALSE)
p.ops$imbalanced.only <- FALSE
}
}
}
else p.ops$imbalanced.only <- FALSE
if (is_not_null(msm.balance.summary)) {
if (p.ops$imbalanced.only) {
keep.row <- rowSums(apply(msm.balance.summary[grepl(".Threshold", names(msm.balance.summary), fixed = TRUE)], 2, function(x) !is.na(x) & startsWith(x, "Not Balanced"))) > 0
}
else keep.row <- rep(TRUE, nrow(msm.balance.summary))
}
if (!missing(which.time)) {
if (paste(deparse1(substitute(which.time)), collapse = "") == ".none") which.time <- NA
else if (paste(deparse1(substitute(which.time)), collapse = "") == ".all") which.time <- NULL
if (!identical(which.time, "as.is")) {
p.ops$which.time <- which.time
}
}
#Checks and Adjustments
if (is_null(p.ops$which.time))
which.time <- seq_along(msm.balance)
else if (anyNA(p.ops$which.time)) {
which.time <- integer(0)
}
else if (is.numeric(p.ops$which.time)) {
which.time <- seq_along(msm.balance)[seq_along(msm.balance) %in% p.ops$which.time]
if (is_null(which.time)) {
warning("No numbers in 'which.time' are treatment time points. No time points will be displayed.", call. = FALSE)
which.time <- integer(0)
}
}
else if (is.character(p.ops$which.time)) {
which.time <- seq_along(msm.balance)[names(msm.balance) %in% p.ops$which.time]
if (is_null(which.time)) {
warning("No names in 'which.time' are treatment names. No time points will be displayed.", call. = FALSE)
which.time <- integer(0)
}
}
else {
warning("The argument to 'which.time' must be .all, .none, or a vector of time point numbers. No time points will be displayed.", call. = FALSE)
which.time <- integer(0)
}
#Printing output
if (is_not_null(call)) {
cat(underline("Call") %+% "\n " %+% paste(deparse(call), collapse = "\n") %+% "\n\n")
}
if (is_not_null(which.time)) {
cat(underline("Balance by Time Point") %+% "\n")
for (i in which.time) {
cat("\n - - - " %+% italic("Time: " %+% as.character(i)) %+% " - - - \n")
do.call(print, c(list(x = msm.balance[[i]]), p.ops[names(p.ops) %nin% names(A)], A), quote = TRUE)
}
cat(paste0(paste(rep(" -", round(nchar(paste0("\n - - - Time: ", i, " - - - "))/2)), collapse = ""), " \n"))
cat("\n")
}
if (isTRUE(as.logical(p.ops$msm.summary)) && is_not_null(msm.balance.summary)) {
computed.agg.funs <- "max"
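        #Column mask for the across-time-points summary table: the first two identifying columns are
        #always kept; only the "max" aggregating function is used for the remaining statistic columns.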
s.keep.col <- as.logical(c(TRUE,
TRUE,
unlist(lapply(p.ops$compute[p.ops$compute %in% all_STATS(p.ops$type)], function(s) {
c(unlist(lapply(computed.agg.funs, function(af) {
p.ops$un && s %in% p.ops$disp && af %in% "max"
})),
p.ops$un && !p.ops$disp.adj && is_not_null(p.ops$thresholds[[s]]))
})),
rep(
unlist(lapply(p.ops$compute[p.ops$compute %in% all_STATS(p.ops$type)], function(s) {
c(unlist(lapply(computed.agg.funs, function(af) {
p.ops$disp.adj && s %in% p.ops$disp && af %in% "max"
})),
p.ops$disp.adj && is_not_null(p.ops$thresholds[[s]]))
})),
p.ops$nweights + !p.ops$disp.adj)
))
if (p.ops$disp.bal.tab) {
cat(underline("Balance summary across all time points") %+% "\n")
if (all(!keep.row)) cat(italic("All covariates are balanced.") %+% "\n")
else print.data.frame_(round_df_char(msm.balance.summary[keep.row, s.keep.col, drop = FALSE], digits))
cat("\n")
}
if (is_not_null(nn)) {
print.warning <- FALSE
cat(underline(attr(nn[[1]], "tag")) %+% "\n")
for (ti in seq_along(nn)) {
cat(" - " %+% italic("Time " %+% as.character(ti)) %+% "\n")
for (i in rownames(nn[[ti]])) {
if (all(nn[[ti]][i,] == 0)) nn[[ti]] <- nn[[ti]][rownames(nn[[ti]])!=i,]
}
if (all(c("Matched (ESS)", "Matched (Unweighted)") %in% rownames(nn[[ti]])) &&
all(check_if_zero(nn[[ti]]["Matched (ESS)",] - nn[[ti]]["Matched (Unweighted)",]))) {
nn[[ti]] <- nn[[ti]][rownames(nn[[ti]])!="Matched (Unweighted)", , drop = FALSE]
rownames(nn[[ti]])[rownames(nn[[ti]]) == "Matched (ESS)"] <- "Matched"
}
if (length(attr(nn[[ti]], "ss.type")) > 1 && nunique.gt(attr(nn[[ti]], "ss.type")[-1], 1)) {
ess <- ifelse(attr(nn[[ti]], "ss.type") == "ess", "*", "")
nn[[ti]] <- setNames(cbind(nn[[ti]], ess), c(names(nn[[ti]]), ""))
print.warning <- TRUE
}
print.data.frame_(round_df_char(nn[[ti]], digits = min(2, digits), pad = " "))
}
if (print.warning) cat(italic("* indicates effective sample size"))
}
}
invisible(x)
}
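#Print method for bal.tab objects computed with subclassification ("bal.tab.subclass").
#Illustrative usage (not run; `b` and the argument values are hypothetical):
#  print(b, disp.subclass = TRUE, imbalanced.only = TRUE)
#  print(b, un = TRUE, digits = 2)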
print.bal.tab.subclass <- function(x, imbalanced.only = "as.is", un = "as.is", disp.bal.tab = "as.is", stats = "as.is", disp.thresholds = "as.is", disp = "as.is", disp.subclass = "as.is", digits = max(3, getOption("digits") - 3), ...) {
A <- list(...)
call <- x$call
s.balance <- x$Subclass.Balance
b.a.subclass <- x$Balance.Across.Subclass
s.nn <- x$Observations
p.ops <- attr(x, "print.options")
baltal <- maximbal <- list()
for (s in p.ops$compute) {
baltal[[s]] <- x[[paste.("Balanced", s, "Subclass")]]
maximbal[[s]] <- x[[paste.("Max.Imbalance", s, "Subclass")]]
}
#Prevent exponential notation printing
    op <- options(scipen = 999)
    on.exit(options(op))
#Adjustments to print options
if (!identical(un, "as.is") && p.ops$disp.adj) {
if (!rlang::is_bool(un)) stop("'un' must be TRUE, FALSE, or \"as.is\".", call. = FALSE)
if (p.ops$quick && p.ops$un == FALSE && un == TRUE) {
warning("'un' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$un <- un
}
if (!identical(disp, "as.is")) {
        if (!is.character(disp)) stop("'disp' must be a character vector.")
allowable.disp <- c("means", "sds", all_STATS(p.ops$type))
if (any(disp %nin% allowable.disp)) {
stop(paste(word_list(disp[disp %nin% allowable.disp], and.or = "and", quotes = 2, is.are = TRUE),
"not allowed in 'disp'."), call. = FALSE)
}
if (any(disp %nin% p.ops$compute)) {
warning(paste("'disp' cannot include", word_list(disp[disp %nin% p.ops$compute], and.or = "or", quotes = 2), "if quick = TRUE in the original call to bal.tab()."), call. = FALSE)
}
else p.ops$disp <- disp
}
if (is_not_null(A[["disp.means"]]) && !identical(A[["disp.means"]], "as.is")) {
if (!rlang::is_bool(A[["disp.means"]])) stop("'disp.means' must be TRUE, FALSE, or \"as.is\".")
if ("means" %nin% p.ops$compute && A[["disp.means"]] == TRUE) {
warning("'disp.means' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, "means"[A[["disp.means"]]]))
}
if (is_not_null(A[["disp.sds"]]) && !identical(A[["disp.sds"]], "as.is")) {
if (!rlang::is_bool(A[["disp.sds"]])) stop("'disp.sds' must be TRUE, FALSE, or \"as.is\".", call. = FALSE)
if ("sds" %nin% p.ops$compute && A[["disp.sds"]] == TRUE) {
warning("'disp.sds' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, "sds"[A[["disp.sds"]]]))
}
if (!identical(stats, "as.is")) {
        if (!is_(stats, "character")) stop("'stats' must be a character vector.")
stats <- match_arg(stats, all_STATS(p.ops$type), several.ok = TRUE)
stats_in_p.ops <- stats %in% p.ops$compute
if (any(!stats_in_p.ops)) {
stop(paste0("'stats' cannot contain ", word_list(stats[!stats_in_p.ops], and.or = "or", quotes = 2), " if quick = TRUE in the original call to bal.tab()."), call. = TRUE)
}
else p.ops$disp <- unique(c(p.ops$disp[p.ops$disp %nin% all_STATS()], stats))
}
for (s in all_STATS(p.ops$type)) {
if (is_not_null(A[[STATS[[s]]$disp_stat]]) && !identical(A[[STATS[[s]]$disp_stat]], "as.is")) {
if (!rlang::is_bool(A[[STATS[[s]]$disp_stat]])) {
stop(paste0("'", STATS[[s]]$disp_stat, "' must be TRUE, FALSE, or \"as.is\"."), call. = FALSE)
}
if (s %nin% p.ops$compute && isTRUE(A[[STATS[[s]]$disp_stat]])) {
warning(paste0("'", STATS[[s]]$disp_stat, "' cannot be set to TRUE if quick = TRUE in the original call to bal.tab()."), call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, s))
}
}
for (s in p.ops$compute[p.ops$compute %in% all_STATS(p.ops$type)]) {
if (STATS[[s]]$threshold %in% names(A) && !identical(temp.thresh <- A[[STATS[[s]]$threshold]], "as.is")) {
if (is_not_null(temp.thresh) &&
(!is.numeric(temp.thresh) || length(temp.thresh) != 1 ||
is_null(p.ops[["thresholds"]][[s]]) ||
p.ops[["thresholds"]][[s]] != temp.thresh))
stop(paste0("'", STATS[[s]]$threshold, "' must be NULL or \"as.is\"."))
if (is_null(temp.thresh)) {
p.ops[["thresholds"]][[s]] <- NULL
baltal[[s]] <- NULL
maximbal[[s]] <- NULL
}
}
if (s %nin% p.ops$disp) {
p.ops[["thresholds"]][[s]] <- NULL
baltal[[s]] <- NULL
maximbal[[s]] <- NULL
}
}
if (!identical(disp.thresholds, "as.is")) {
if (!is.logical(disp.thresholds) || anyNA(disp.thresholds)) stop("'disp.thresholds' must only contain TRUE or FALSE.", call. = FALSE)
if (is_null(names(disp.thresholds))) {
if (length(disp.thresholds) <= length(p.ops[["thresholds"]])) {
names(disp.thresholds) <- names(p.ops[["thresholds"]])[seq_along(disp.thresholds)]
}
else {
stop("More entries were given to 'disp.thresholds' than there are thresholds in the bal.tab object.", call. = FALSE)
}
}
if (!all(names(disp.thresholds) %pin% names(p.ops[["thresholds"]]))) {
warning(paste0(word_list(names(disp.thresholds)[!names(disp.thresholds) %pin% names(p.ops[["thresholds"]])],
quotes = 2, is.are = TRUE), " not available in thresholds and will be ignored."), call. = FALSE)
disp.thresholds <- disp.thresholds[names(disp.thresholds) %pin% names(p.ops[["thresholds"]])]
}
names(disp.thresholds) <- match_arg(names(disp.thresholds), names(p.ops[["thresholds"]]), several.ok = TRUE)
for (x in names(disp.thresholds)) {
if (!disp.thresholds[x]) {
p.ops[["thresholds"]][[x]] <- NULL
baltal[[x]] <- NULL
maximbal[[x]] <- NULL
}
}
}
if (!identical(disp.bal.tab, "as.is")) {
if (!rlang::is_bool(disp.bal.tab)) stop("'disp.bal.tab' must be TRUE, FALSE, or \"as.is\".")
p.ops$disp.bal.tab <- disp.bal.tab
}
if (p.ops$disp.bal.tab) {
if (!identical(imbalanced.only, "as.is")) {
if (!rlang::is_bool(imbalanced.only)) stop("'imbalanced.only' must be TRUE, FALSE, or \"as.is\".")
p.ops$imbalanced.only <- imbalanced.only
}
if (p.ops$imbalanced.only) {
if (is_null(p.ops$thresholds)) {
warning("A threshold must be specified if imbalanced.only = TRUE. Displaying all covariates.", call. = FALSE)
p.ops$imbalanced.only <- FALSE
}
}
}
else p.ops$imbalanced.only <- FALSE
if (!identical(disp.subclass, "as.is")) {
if (!rlang::is_bool(disp.subclass)) stop("'disp.subclass' must be TRUE, FALSE, or \"as.is\".")
p.ops$disp.subclass <- disp.subclass
}
if (is_not_null(call)) {
cat(underline("Call") %+% "\n " %+% paste(deparse(call), collapse = "\n") %+% "\n\n")
}
if (p.ops$disp.bal.tab) {
if (p.ops$disp.subclass) {
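            #Column mask for the per-subclass balance tables: keep the Type column, the displayed
            #means/SDs and statistics, and a threshold column for each statistic with a threshold set.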
s.keep.col <- setNames(c(TRUE,
rep(unlist(lapply(p.ops$compute[p.ops$compute %nin% all_STATS()], function(s) {
s %in% p.ops$disp
})), switch(p.ops$type, bin = 2, cont = 1)),
unlist(lapply(p.ops$compute[p.ops$compute %in% all_STATS()], function(s) {
c(s %in% p.ops$disp,
is_not_null(p.ops$thresholds[[s]]))
}))),
names(s.balance[[1]]))
cat(underline("Balance by subclass"))
for (i in names(s.balance)) {
if (p.ops$imbalanced.only) {
s.keep.row <- rowSums(apply(s.balance[[i]][grepl(".Threshold", names(s.balance), fixed = TRUE)], 2, function(x) !is.na(x) & startsWith(x, "Not Balanced"))) > 0
}
else s.keep.row <- rep(TRUE, nrow(s.balance[[i]]))
cat("\n - - - " %+% italic("Subclass " %+% as.character(i)) %+% " - - - \n")
if (all(!s.keep.row)) cat(italic("All covariates are balanced.") %+% "\n")
else print.data.frame_(round_df_char(s.balance[[i]][s.keep.row, s.keep.col, drop = FALSE], digits))
}
cat("\n")
}
if (is_not_null(b.a.subclass)) {
if (p.ops$imbalanced.only) {
a.s.keep.row <- rowSums(apply(b.a.subclass[grepl(".Threshold", names(b.a.subclass), fixed = TRUE)], 2, function(x) !is.na(x) & startsWith(x, "Not Balanced"))) > 0
}
else a.s.keep.row <- rep(TRUE, nrow(b.a.subclass))
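            #Column mask for the across-subclasses table, built like the single-table mask in
            #print.bal.tab(): unadjusted columns when 'un' is requested, adjusted columns when
            #adjusted balance is displayed.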
a.s.keep.col <- setNames(as.logical(c(TRUE,
rep(unlist(lapply(p.ops$compute[p.ops$compute %nin% all_STATS()], function(s) {
p.ops$un && s %in% p.ops$disp
})), switch(p.ops$type, bin = 2, cont = 1)),
unlist(lapply(p.ops$compute[p.ops$compute %in% all_STATS()], function(s) {
c(p.ops$un && s %in% p.ops$disp,
p.ops$un && !p.ops$disp.adj && is_not_null(p.ops$thresholds[[s]]))
})),
rep(c(rep(unlist(lapply(p.ops$compute[p.ops$compute %nin% all_STATS()], function(s) {
p.ops$disp.adj && s %in% p.ops$disp
})), 2),
unlist(lapply(p.ops$compute[p.ops$compute %in% all_STATS()], function(s) {
c(p.ops$disp.adj && s %in% p.ops$disp,
                                                                      p.ops$disp.adj && is_not_null(p.ops$thresholds[[s]]))
}))
),
p.ops$disp.adj))),
names(b.a.subclass))
cat(underline("Balance measures across subclasses") %+% "\n")
if (all(!a.s.keep.row)) cat(italic("All covariates are balanced.") %+% "\n")
else print.data.frame_(round_df_char(b.a.subclass[a.s.keep.row, a.s.keep.col, drop = FALSE], digits))
cat("\n")
}
}
for (s in p.ops$stats) {
if (is_not_null(baltal[[s]])) {
cat(underline(paste("Balance tally for", STATS[[s]]$balance_tally_for, "across subclasses")) %+% "\n")
print.data.frame_(baltal[[s]])
cat("\n")
}
if (is_not_null(maximbal[[s]])) {
cat(underline(paste("Variable with the greatest", STATS[[s]]$variable_with_the_greatest, "across subclasses")) %+% "\n")
print.data.frame_(round_df_char(maximbal[[s]], digits), row.names = FALSE)
cat("\n")
}
}
if (is_not_null(s.nn)) {
cat(underline(attr(s.nn, "tag")) %+% "\n")
print.data.frame_(round_df_char(s.nn, digits = min(2, digits), pad = " "))
}
invisible(x)
}
ess <- ifelse(attr(nn, "ss.type") == "ess", "*", "")
nn <- setNames(cbind(nn, ess), c(names(nn), ""))
print.warning <- TRUE
}
print.data.frame_(round_df_char(nn, digits = min(2, digits), pad = " "))
if (print.warning) cat(italic("* indicates effective sample size"))
}
invisible(x)
}
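# --- Usage sketch (not part of the package source) ---------------------------
# The print method above re-displays a bal.tab object with different display
# options (un, disp, stats, disp.thresholds, imbalanced.only, ...) without
# recomputing balance. A minimal sketch, assuming the cobalt package's
# bal.tab() interface and its bundled 'lalonde' data; quick = FALSE is used so
# all statistics are computed and remain available to 'disp' at print time.
if (requireNamespace("cobalt", quietly = TRUE)) {
  library(cobalt)
  data("lalonde", package = "cobalt")
  b <- bal.tab(treat ~ age + educ + race + married + nodegree,
               data = lalonde, thresholds = c(m = .1), quick = FALSE)
  # Re-display: show means and SDs, and only covariates failing the threshold
  print(b, disp = c("means", "sds"), imbalanced.only = TRUE)
}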
print.bal.tab.cluster <- function(x, imbalanced.only = "as.is", un = "as.is", disp.bal.tab = "as.is", stats = "as.is", disp.thresholds = "as.is", disp = "as.is", which.cluster, cluster.summary = "as.is", cluster.fun = "as.is", digits = max(3, getOption("digits") - 3), ...) {
#Replace .all and .none with NULL and NA respectively
.call <- match.call(expand.dots = TRUE)
if (any(sapply(seq_along(.call), function(x) identical(as.character(.call[[x]]), ".all") || identical(as.character(.call[[x]]), ".none")))) {
.call[sapply(seq_along(.call), function(x) identical(as.character(.call[[x]]), ".all"))] <- expression(NULL)
.call[sapply(seq_along(.call), function(x) identical(as.character(.call[[x]]), ".none"))] <- expression(NA)
return(eval.parent(.call))
}
A <- list(...)
call <- x$call
c.balance <- x$Cluster.Balance
c.balance.summary <- x$Balance.Across.Clusters
nn <- x$Observations
p.ops <- attr(x, "print.options")
baltal <- maximbal <- list()
for (s in p.ops$stats) {
baltal[[s]] <- x[[paste.("Balanced", s)]]
maximbal[[s]] <- x[[paste.("Max.Imbalance", s)]]
}
#Prevent exponential notation printing
op <- options(scipen=getOption("scipen"))
options(scipen = 999)
on.exit(options(op))
#Adjustments to print options
if (!identical(un, "as.is") && p.ops$disp.adj) {
if (!rlang::is_bool(un)) stop("'un' must be TRUE, FALSE, or \"as.is\".", call. = FALSE)
if (p.ops$quick && p.ops$un == FALSE && un == TRUE) {
warning("'un' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$un <- un
}
if (!identical(disp, "as.is")) {
if (!is.character(disp)) stop("'disp.means' must be a character vector.")
allowable.disp <- c("means", "sds", all_STATS(p.ops$type))
if (any(disp %nin% allowable.disp)) {
stop(paste(word_list(disp[disp %nin% allowable.disp], and.or = "and", quotes = 2, is.are = TRUE),
"not allowed in 'disp'."), call. = FALSE)
}
if (any(disp %nin% p.ops$compute)) {
warning(paste("'disp' cannot include", word_list(disp[disp %nin% p.ops$compute], and.or = "or", quotes = 2), "if quick = TRUE in the original call to bal.tab()."), call. = FALSE)
}
else p.ops$disp <- disp
}
if (is_not_null(A[["disp.means"]]) && !identical(A[["disp.means"]], "as.is")) {
if (!rlang::is_bool(A[["disp.means"]])) stop("'disp.means' must be TRUE, FALSE, or \"as.is\".")
if ("means" %nin% p.ops$compute && A[["disp.means"]] == TRUE) {
warning("'disp.means' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, "means"[A[["disp.means"]]]))
A[["disp.means"]] <- NULL
}
if (is_not_null(A[["disp.sds"]]) && !identical(A[["disp.sds"]], "as.is")) {
if (!rlang::is_bool(A[["disp.sds"]])) stop("'disp.sds' must be TRUE, FALSE, or \"as.is\".", call. = FALSE)
if ("sds" %nin% p.ops$compute && A[["disp.sds"]] == TRUE) {
warning("'disp.sds' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, "sds"[A[["disp.sds"]]]))
A[["disp.sds"]] <- NULL
}
if (!identical(stats, "as.is")) {
if (!is_(stats, "character")) stop("'stats' must be a character vector.")
stats <- match_arg(stats, all_STATS(p.ops$type), several.ok = TRUE)
stats_in_p.ops <- stats %in% p.ops$compute
if (any(!stats_in_p.ops)) {
stop(paste0("'stats' cannot contain ", word_list(stats[!stats_in_p.ops], and.or = "or", quotes = 2), " if quick = TRUE in the original call to bal.tab()."), call. = TRUE)
}
else p.ops$disp <- unique(c(p.ops$disp[p.ops$disp %nin% all_STATS()], stats))
}
for (s in all_STATS(p.ops$type)) {
if (is_not_null(A[[STATS[[s]]$disp_stat]]) && !identical(A[[STATS[[s]]$disp_stat]], "as.is")) {
if (!rlang::is_bool(A[[STATS[[s]]$disp_stat]])) {
stop(paste0("'", STATS[[s]]$disp_stat, "' must be TRUE, FALSE, or \"as.is\"."), call. = FALSE)
}
if (s %nin% p.ops$compute && isTRUE(A[[STATS[[s]]$disp_stat]])) {
warning(paste0("'", STATS[[s]]$disp_stat, "' cannot be set to TRUE if quick = TRUE in the original call to bal.tab()."), call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, s))
A[[STATS[[s]]$disp_stat]] <- NULL
}
}
for (s in p.ops$compute[p.ops$compute %in% all_STATS(p.ops$type)]) {
if (STATS[[s]]$threshold %in% names(A) && !identical(temp.thresh <- A[[STATS[[s]]$threshold]], "as.is")) {
if (is_not_null(temp.thresh) &&
(!is.numeric(temp.thresh) || length(temp.thresh) != 1 ||
is_null(p.ops[["thresholds"]][[s]]) ||
p.ops[["thresholds"]][[s]] != temp.thresh))
stop(paste0("'", STATS[[s]]$threshold, "' must be NULL or \"as.is\"."))
if (is_null(temp.thresh)) {
p.ops[["thresholds"]][[s]] <- NULL
baltal[[s]] <- NULL
maximbal[[s]] <- NULL
}
A[[STATS[[s]]$threshold]] <- NULL
}
if (s %nin% p.ops$disp) {
p.ops[["thresholds"]][[s]] <- NULL
baltal[[s]] <- NULL
maximbal[[s]] <- NULL
}
}
if (!identical(disp.thresholds, "as.is")) {
if (!is.logical(disp.thresholds) || anyNA(disp.thresholds)) stop("'disp.thresholds' must only contain TRUE or FALSE.", call. = FALSE)
if (is_null(names(disp.thresholds))) {
if (length(disp.thresholds) <= length(p.ops[["thresholds"]])) {
names(disp.thresholds) <- names(p.ops[["thresholds"]])[seq_along(disp.thresholds)]
}
else {
stop("More entries were given to 'disp.thresholds' than there are thresholds in the bal.tab object.", call. = FALSE)
}
}
if (!all(names(disp.thresholds) %pin% names(p.ops[["thresholds"]]))) {
warning(paste0(word_list(names(disp.thresholds)[!names(disp.thresholds) %pin% names(p.ops[["thresholds"]])],
quotes = 2, is.are = TRUE), " not available in thresholds and will be ignored."), call. = FALSE)
disp.thresholds <- disp.thresholds[names(disp.thresholds) %pin% names(p.ops[["thresholds"]])]
}
names(disp.thresholds) <- match_arg(names(disp.thresholds), names(p.ops[["thresholds"]]), several.ok = TRUE)
for (x in names(disp.thresholds)) {
if (!disp.thresholds[x]) {
p.ops[["thresholds"]][[x]] <- NULL
baltal[[x]] <- NULL
maximbal[[x]] <- NULL
}
}
}
if (!identical(cluster.summary, "as.is")) {
if (!rlang::is_bool(cluster.summary)) stop("'cluster.summary' must be TRUE, FALSE, or \"as.is\".")
if (p.ops$quick && p.ops$cluster.summary == FALSE && cluster.summary == TRUE) {
warning("'cluster.summary' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$cluster.summary <- cluster.summary
}
if (p.ops$disp.bal.tab) {
if (!identical(imbalanced.only, "as.is")) {
if (!rlang::is_bool(imbalanced.only)) stop("'imbalanced.only' must be TRUE, FALSE, or \"as.is\".")
p.ops$imbalanced.only <- imbalanced.only
}
if (p.ops$imbalanced.only) {
if (is_null(p.ops$thresholds)) {
warning("A threshold must be specified if imbalanced.only = TRUE. Displaying all covariates.", call. = FALSE)
p.ops$imbalanced.only <- FALSE
}
}
}
else p.ops$imbalanced.only <- FALSE
if (!missing(which.cluster)) {
if (paste(deparse1(substitute(which.cluster)), collapse = "") == ".none") which.cluster <- NA
else if (paste(deparse1(substitute(which.cluster)), collapse = "") == ".all") which.cluster <- NULL
if (!identical(which.cluster, "as.is")) {
p.ops$which.cluster <- which.cluster
}
}
if (!p.ops$quick || is_null(p.ops$cluster.fun)) computed.cluster.funs <- c("min", "mean", "max")
else computed.cluster.funs <- p.ops$cluster.fun
if (is_not_null(cluster.fun) && !identical(cluster.fun, "as.is")) {
if (!is.character(cluster.fun) || !all(cluster.fun %pin% computed.cluster.funs)) stop(paste0("'cluster.fun' must be ", word_list(c(computed.cluster.funs, "as.is"), and.or = "or", quotes = 2)), call. = FALSE)
}
else {
if (p.ops$abs) cluster.fun <- c("mean", "max")
else cluster.fun <- c("min", "mean", "max")
}
cluster.fun <- match_arg(tolower(cluster.fun), computed.cluster.funs, several.ok = TRUE)
#Checks and Adjustments
if (is_null(p.ops$which.cluster))
which.cluster <- seq_along(c.balance)
else if (anyNA(p.ops$which.cluster)) {
which.cluster <- integer(0)
}
else if (is.numeric(p.ops$which.cluster)) {
which.cluster <- intersect(seq_along(c.balance), p.ops$which.cluster)
if (is_null(which.cluster)) {
warning("No indices in 'which.cluster' are cluster indices. Displaying all clusters instead.", call. = FALSE)
which.cluster <- seq_along(c.balance)
}
}
else if (is.character(p.ops$which.cluster)) {
which.cluster <- intersect(names(c.balance), p.ops$which.cluster)
if (is_null(which.cluster)) {
warning("No names in 'which.cluster' are cluster names. Displaying all clusters instead.", call. = FALSE)
which.cluster <- seq_along(c.balance)
}
}
else {
warning("The argument to 'which.cluster' must be .all, .none, or a vector of cluster indices or cluster names. Displaying all clusters instead.", call. = FALSE)
which.cluster <- seq_along(c.balance)
}
#Printing
if (is_not_null(call)) {
cat(underline("Call") %+% "\n " %+% paste(deparse(call), collapse = "\n") %+% "\n\n")
}
if (is_not_null(which.cluster)) {
cat(underline("Balance by cluster") %+% "\n")
for (i in which.cluster) {
cat("\n - - - " %+% italic("Cluster: " %+% names(c.balance)[i]) %+% " - - - \n")
do.call(print, c(list(c.balance[[i]]), p.ops[names(p.ops) %nin% names(A)], A), quote = TRUE)
}
cat(paste0(paste(rep(" -", round(nchar(paste0("\n - - - Cluster: ", names(c.balance)[i], " - - - "))/2)), collapse = ""), " \n"))
cat("\n")
}
if (isTRUE(as.logical(p.ops$cluster.summary)) && is_not_null(c.balance.summary)) {
s.keep.col <- as.logical(c(TRUE,
unlist(lapply(p.ops$compute[p.ops$compute %in% all_STATS(p.ops$type)], function(s) {
c(unlist(lapply(computed.cluster.funs, function(af) {
p.ops$un && s %in% p.ops$disp && af %in% cluster.fun
})),
p.ops$un && !p.ops$disp.adj && length(cluster.fun) == 1 && is_not_null(p.ops$thresholds[[s]]))
})),
rep(
unlist(lapply(p.ops$compute[p.ops$compute %in% all_STATS(p.ops$type)], function(s) {
c(unlist(lapply(computed.cluster.funs, function(af) {
p.ops$disp.adj && s %in% p.ops$disp && af %in% cluster.fun
})),
p.ops$disp.adj && length(cluster.fun) == 1 && is_not_null(p.ops$thresholds[[s]]))
})),
p.ops$nweights + !p.ops$disp.adj)
))
if (p.ops$disp.bal.tab) {
cat(underline("Balance summary across all clusters") %+% "\n")
print.data.frame_(round_df_char(c.balance.summary[, s.keep.col, drop = FALSE], digits))
cat("\n")
}
if (is_not_null(nn)) {
for (i in rownames(nn)) {
if (all(nn[i,] == 0)) nn <- nn[rownames(nn)!=i,]
}
if (all(c("Matched (ESS)", "Matched (Unweighted)") %in% rownames(nn)) &&
all(check_if_zero(nn["Matched (ESS)",] - nn["Matched (Unweighted)",]))) {
nn <- nn[rownames(nn)!="Matched (Unweighted)", , drop = FALSE]
rownames(nn)[rownames(nn) == "Matched (ESS)"] <- "Matched"
}
cat(underline(attr(nn, "tag")) %+% "\n")
print.warning <- FALSE
if (length(attr(nn, "ss.type")) > 1 && nunique.gt(attr(nn, "ss.type")[-1], 1)) {
ess <- ifelse(attr(nn, "ss.type") == "ess", "*", "")
nn <- setNames(cbind(nn, ess), c(names(nn), ""))
print.warning <- TRUE
}
print.data.frame_(round_df_char(nn, digits = min(2, digits), pad = " "))
if (print.warning) cat(italic("* indicates effective sample size"))
}
}
invisible(x)
}
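# --- Usage sketch (not part of the package source) ---------------------------
# print.bal.tab.cluster() above adds cluster-specific options: which.cluster
# (.all, .none, or cluster names/indices), cluster.summary, and cluster.fun.
# A hedged sketch, assuming cobalt's 'lalonde' data with 'race' used as an
# illustrative clustering variable.
if (requireNamespace("cobalt", quietly = TRUE)) {
  library(cobalt)
  data("lalonde", package = "cobalt")
  b_clus <- bal.tab(treat ~ age + educ + married, data = lalonde,
                    cluster = "race", thresholds = c(m = .1))
  # Suppress the per-cluster tables and show only the summary across clusters
  print(b_clus, which.cluster = .none, cluster.summary = TRUE,
        cluster.fun = c("mean", "max"))
}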
print.bal.tab.imp <- function(x, imbalanced.only = "as.is", un = "as.is", disp.bal.tab = "as.is", stats = "as.is", disp.thresholds = "as.is", disp = "as.is", which.imp, imp.summary = "as.is", imp.fun = "as.is", digits = max(3, getOption("digits") - 3), ...) {
#Replace .all and .none with NULL and NA respectively
.call <- match.call(expand.dots = TRUE)
if (any(sapply(seq_along(.call), function(x) identical(as.character(.call[[x]]), ".all") || identical(as.character(.call[[x]]), ".none")))) {
.call[sapply(seq_along(.call), function(x) identical(as.character(.call[[x]]), ".all"))] <- expression(NULL)
.call[sapply(seq_along(.call), function(x) identical(as.character(.call[[x]]), ".none"))] <- expression(NA)
return(eval.parent(.call))
}
A <- list(...)
call <- x$call
i.balance <- x[["Imputation.Balance"]]
i.balance.summary <- x[["Balance.Across.Imputations"]]
nn <- x$Observations
p.ops <- attr(x, "print.options")
baltal <- maximbal <- list()
for (s in p.ops$stats) {
baltal[[s]] <- x[[paste.("Balanced", s)]]
maximbal[[s]] <- x[[paste.("Max.Imbalance", s)]]
}
#Prevent exponential notation printing
op <- options(scipen=getOption("scipen"))
options(scipen = 999)
on.exit(options(op))
#Adjustments to print options
if (!identical(un, "as.is") && p.ops$disp.adj) {
if (!rlang::is_bool(un)) stop("'un' must be TRUE, FALSE, or \"as.is\".", call. = FALSE)
if (p.ops$quick && p.ops$un == FALSE && un == TRUE) {
warning("'un' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$un <- un
}
if (!identical(disp, "as.is")) {
if (!is.character(disp)) stop("'disp.means' must be a character vector.")
allowable.disp <- c("means", "sds", all_STATS(p.ops$type))
if (any(disp %nin% allowable.disp)) {
stop(paste(word_list(disp[disp %nin% allowable.disp], and.or = "and", quotes = 2, is.are = TRUE),
"not allowed in 'disp'."), call. = FALSE)
}
if (any(disp %nin% p.ops$compute)) {
warning(paste("'disp' cannot include", word_list(disp[disp %nin% p.ops$compute], and.or = "or", quotes = 2), "if quick = TRUE in the original call to bal.tab()."), call. = FALSE)
}
else p.ops$disp <- disp
}
if (is_not_null(A[["disp.means"]]) && !identical(A[["disp.means"]], "as.is")) {
if (!rlang::is_bool(A[["disp.means"]])) stop("'disp.means' must be TRUE, FALSE, or \"as.is\".")
if ("means" %nin% p.ops$compute && A[["disp.means"]] == TRUE) {
warning("'disp.means' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, "means"[A[["disp.means"]]]))
A[["disp.means"]] <- NULL
}
if (is_not_null(A[["disp.sds"]]) && !identical(A[["disp.sds"]], "as.is")) {
if (!rlang::is_bool(A[["disp.sds"]])) stop("'disp.sds' must be TRUE, FALSE, or \"as.is\".", call. = FALSE)
if ("sds" %nin% p.ops$compute && A[["disp.sds"]] == TRUE) {
warning("'disp.sds' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, "sds"[A[["disp.sds"]]]))
A[["disp.sds"]] <- NULL
}
if (!identical(stats, "as.is")) {
if (!is_(stats, "character")) stop("'stats' must be a character vector.")
stats <- match_arg(stats, all_STATS(p.ops$type), several.ok = TRUE)
stats_in_p.ops <- stats %in% p.ops$compute
if (any(!stats_in_p.ops)) {
stop(paste0("'stats' cannot contain ", word_list(stats[!stats_in_p.ops], and.or = "or", quotes = 2), " if quick = TRUE in the original call to bal.tab()."), call. = TRUE)
}
else p.ops$disp <- unique(c(p.ops$disp[p.ops$disp %nin% all_STATS()], stats))
}
for (s in all_STATS(p.ops$type)) {
if (is_not_null(A[[STATS[[s]]$disp_stat]]) && !identical(A[[STATS[[s]]$disp_stat]], "as.is")) {
if (!rlang::is_bool(A[[STATS[[s]]$disp_stat]])) {
stop(paste0("'", STATS[[s]]$disp_stat, "' must be TRUE, FALSE, or \"as.is\"."), call. = FALSE)
}
if (s %nin% p.ops$compute && isTRUE(A[[STATS[[s]]$disp_stat]])) {
warning(paste0("'", STATS[[s]]$disp_stat, "' cannot be set to TRUE if quick = TRUE in the original call to bal.tab()."), call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, s))
A[[STATS[[s]]$disp_stat]] <- NULL
}
}
for (s in p.ops$compute[p.ops$compute %in% all_STATS(p.ops$type)]) {
if (STATS[[s]]$threshold %in% names(A) && !identical(temp.thresh <- A[[STATS[[s]]$threshold]], "as.is")) {
if (is_not_null(temp.thresh) &&
(!is.numeric(temp.thresh) || length(temp.thresh) != 1 ||
is_null(p.ops[["thresholds"]][[s]]) ||
p.ops[["thresholds"]][[s]] != temp.thresh))
stop(paste0("'", STATS[[s]]$threshold, "' must be NULL or \"as.is\"."))
if (is_null(temp.thresh)) {
p.ops[["thresholds"]][[s]] <- NULL
baltal[[s]] <- NULL
maximbal[[s]] <- NULL
}
A[[STATS[[s]]$threshold]] <- NULL
}
if (s %nin% p.ops$disp) {
p.ops[["thresholds"]][[s]] <- NULL
baltal[[s]] <- NULL
maximbal[[s]] <- NULL
}
}
if (!identical(disp.thresholds, "as.is")) {
if (!is.logical(disp.thresholds) || anyNA(disp.thresholds)) stop("'disp.thresholds' must only contain TRUE or FALSE.", call. = FALSE)
if (is_null(names(disp.thresholds))) {
if (length(disp.thresholds) <= length(p.ops[["thresholds"]])) {
names(disp.thresholds) <- names(p.ops[["thresholds"]])[seq_along(disp.thresholds)]
}
else {
stop("More entries were given to 'disp.thresholds' than there are thresholds in the bal.tab object.", call. = FALSE)
}
}
if (!all(names(disp.thresholds) %pin% names(p.ops[["thresholds"]]))) {
warning(paste0(word_list(names(disp.thresholds)[!names(disp.thresholds) %pin% names(p.ops[["thresholds"]])],
quotes = 2, is.are = TRUE), " not available in thresholds and will be ignored."), call. = FALSE)
disp.thresholds <- disp.thresholds[names(disp.thresholds) %pin% names(p.ops[["thresholds"]])]
}
names(disp.thresholds) <- match_arg(names(disp.thresholds), names(p.ops[["thresholds"]]), several.ok = TRUE)
for (x in names(disp.thresholds)) {
if (!disp.thresholds[x]) {
p.ops[["thresholds"]][[x]] <- NULL
baltal[[x]] <- NULL
maximbal[[x]] <- NULL
}
}
}
if (!identical(imp.summary, "as.is")) {
if (!rlang::is_bool(imp.summary)) stop("'imp.summary' must be TRUE, FALSE, or \"as.is\".")
if (p.ops$quick && p.ops$imp.summary == FALSE && imp.summary == TRUE) {
warning("'imp.summary' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$imp.summary <- imp.summary
}
if (!identical(disp.bal.tab, "as.is")) {
if (!rlang::is_bool(disp.bal.tab)) stop("'disp.bal.tab' must be TRUE, FALSE, or \"as.is\".")
p.ops$disp.bal.tab <- disp.bal.tab
}
if (p.ops$disp.bal.tab) {
if (!identical(imbalanced.only, "as.is")) {
if (!rlang::is_bool(imbalanced.only)) stop("'imbalanced.only' must be TRUE, FALSE, or \"as.is\".")
p.ops$imbalanced.only <- imbalanced.only
}
if (p.ops$imbalanced.only) {
if (is_null(p.ops$thresholds)) {
warning("A threshold must be specified if imbalanced.only = TRUE. Displaying all covariates.", call. = FALSE)
p.ops$imbalanced.only <- FALSE
}
}
}
else p.ops$imbalanced.only <- FALSE
if (!missing(which.imp)) {
if (paste(deparse1(substitute(which.imp)), collapse = "") == ".none") which.imp <- NA
else if (paste(deparse1(substitute(which.imp)), collapse = "") == ".all") which.imp <- NULL
if (!identical(which.imp, "as.is")) {
p.ops$which.imp <- which.imp
}
}
if (!p.ops$quick || is_null(p.ops$imp.fun)) computed.imp.funs <- c("min", "mean", "max")
else computed.imp.funs <- p.ops$imp.fun
if (is_not_null(imp.fun) && !identical(imp.fun, "as.is")) {
if (!is.character(imp.fun) || !all(imp.fun %pin% computed.imp.funs)) stop(paste0("'imp.fun' must be ", word_list(c(computed.imp.funs, "as.is"), and.or = "or", quotes = 2)), call. = FALSE)
}
else {
if (p.ops$abs) imp.fun <- c("mean", "max")
else imp.fun <- c("min", "mean", "max")
}
imp.fun <- match_arg(tolower(imp.fun), computed.imp.funs, several.ok = TRUE)
#Checks and Adjustments
if (is_null(p.ops$which.imp))
which.imp <- seq_along(i.balance)
else if (anyNA(p.ops$which.imp)) {
which.imp <- integer(0)
}
else if (is.numeric(p.ops$which.imp)) {
which.imp <- intersect(seq_along(i.balance), p.ops$which.imp)
if (is_null(which.imp)) {
warning("No numbers in 'which.imp' are imputation numbers. No imputations will be displayed.", call. = FALSE)
which.imp <- integer(0)
}
}
else {
warning("The argument to 'which.imp' must be .all, .none, or a vector of imputation numbers.", call. = FALSE)
which.imp <- integer(0)
}
#Printing output
if (is_not_null(call)) {
cat(underline("Call") %+% "\n " %+% paste(deparse(call), collapse = "\n") %+% "\n\n")
}
if (is_not_null(which.imp)) {
cat(underline("Balance by imputation") %+% "\n")
for (i in which.imp) {
cat("\n - - - " %+% italic("Imputation " %+% names(i.balance)[i]) %+% " - - - \n")
do.call(print, c(list(i.balance[[i]]), p.ops[names(p.ops) %nin% names(A)], A), quote = TRUE)
}
cat(paste0(paste(rep(" -", round(nchar(paste0("\n - - - Imputation: ", names(i.balance)[i], " - - - "))/2)), collapse = ""), " \n"))
cat("\n")
}
if (isTRUE(as.logical(p.ops$imp.summary)) && is_not_null(i.balance.summary)) {
s.keep.col <- as.logical(c(TRUE,
unlist(lapply(p.ops$compute[p.ops$compute %in% all_STATS(p.ops$type)], function(s) {
c(unlist(lapply(computed.imp.funs, function(af) {
p.ops$un && s %in% p.ops$disp && af %in% imp.fun
})),
p.ops$un && !p.ops$disp.adj && length(imp.fun) == 1 && is_not_null(p.ops$thresholds[[s]]))
})),
rep(
unlist(lapply(p.ops$compute[p.ops$compute %in% all_STATS(p.ops$type)], function(s) {
c(unlist(lapply(computed.imp.funs, function(af) {
p.ops$disp.adj && s %in% p.ops$disp && af %in% imp.fun
})),
p.ops$disp.adj && length(imp.fun) == 1 && is_not_null(p.ops$thresholds[[s]]))
})),
p.ops$nweights + !p.ops$disp.adj)
))
if (p.ops$disp.bal.tab) {
cat(underline("Balance summary across all imputations") %+% "\n")
print.data.frame_(round_df_char(i.balance.summary[, s.keep.col, drop = FALSE], digits))
cat("\n")
}
if (is_not_null(nn)) {
for (i in rownames(nn)) {
if (all(nn[i,] == 0)) nn <- nn[rownames(nn)!=i,]
}
if (all(c("Matched (ESS)", "Matched (Unweighted)") %in% rownames(nn)) &&
all(check_if_zero(nn["Matched (ESS)",] - nn["Matched (Unweighted)",]))) {
nn <- nn[rownames(nn)!="Matched (Unweighted)", , drop = FALSE]
rownames(nn)[rownames(nn) == "Matched (ESS)"] <- "Matched"
}
cat(underline(attr(nn, "tag")) %+% "\n")
print.warning <- FALSE
if (length(attr(nn, "ss.type")) > 1 && nunique.gt(attr(nn, "ss.type")[-1], 1)) {
ess <- ifelse(attr(nn, "ss.type") == "ess", "*", "")
nn <- setNames(cbind(nn, ess), c(names(nn), ""))
print.warning <- TRUE
}
print.data.frame_(round_df_char(nn, digits = min(2, digits), pad = " "))
if (print.warning) cat(italic("* indicates effective sample size"))
}
}
invisible(x)
}
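# --- Usage sketch (not part of the package source) ---------------------------
# print.bal.tab.imp() above mirrors the cluster method for multiply imputed
# data: which.imp, imp.summary, and imp.fun. 'b_imp' below is a hypothetical
# bal.tab object computed with an imputation identifier (e.g.
# bal.tab(..., imp = "imputation")); the guard keeps the sketch runnable.
if (exists("b_imp")) {
  print(b_imp,
        which.imp = .none,   # hide the per-imputation tables
        imp.summary = TRUE,  # keep the summary across imputations
        imp.fun = "max")     # aggregate each statistic by its maximum
}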
print.bal.tab.multi <- function(x, imbalanced.only = "as.is", un = "as.is", disp.bal.tab = "as.is", stats = "as.is", disp.thresholds = "as.is", disp = "as.is", which.treat, multi.summary = "as.is", digits = max(3, getOption("digits") - 3), ...) {
#Replace .all and .none with NULL and NA respectively
.call <- match.call(expand.dots = TRUE)
if (any(sapply(seq_along(.call), function(x) identical(as.character(.call[[x]]), ".all") || identical(as.character(.call[[x]]), ".none")))) {
.call[sapply(seq_along(.call), function(x) identical(as.character(.call[[x]]), ".all"))] <- expression(NULL)
.call[sapply(seq_along(.call), function(x) identical(as.character(.call[[x]]), ".none"))] <- expression(NA)
return(eval.parent(.call))
}
A <- list(...)
call <- x$call
m.balance <- x[["Pair.Balance"]]
m.balance.summary <- x[["Balance.Across.Pairs"]]
nn <- x$Observations
p.ops <- attr(x, "print.options")
baltal <- maximbal <- list()
for (s in p.ops$stats) {
baltal[[s]] <- x[[paste.("Balanced", s)]]
maximbal[[s]] <- x[[paste.("Max.Imbalance", s)]]
}
#Prevent exponential notation printing
op <- options(scipen=getOption("scipen"))
options(scipen = 999)
on.exit(options(op))
#Adjustments to print options
if (!identical(un, "as.is") && p.ops$disp.adj) {
if (!rlang::is_bool(un)) stop("'un' must be TRUE, FALSE, or \"as.is\".", call. = FALSE)
if (p.ops$quick && p.ops$un == FALSE && un == TRUE) {
warning("'un' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$un <- un
}
if (!identical(disp, "as.is")) {
if (!is.character(disp)) stop("'disp.means' must be a character vector.")
allowable.disp <- c("means", "sds", all_STATS(p.ops$type))
if (any(disp %nin% allowable.disp)) {
stop(paste(word_list(disp[disp %nin% allowable.disp], and.or = "and", quotes = 2, is.are = TRUE),
"not allowed in 'disp'."), call. = FALSE)
}
if (any(disp %nin% p.ops$compute)) {
warning(paste("'disp' cannot include", word_list(disp[disp %nin% p.ops$compute], and.or = "or", quotes = 2), "if quick = TRUE in the original call to bal.tab()."), call. = FALSE)
}
else p.ops$disp <- disp
}
if (is_not_null(A[["disp.means"]]) && !identical(A[["disp.means"]], "as.is")) {
if (!rlang::is_bool(A[["disp.means"]])) stop("'disp.means' must be TRUE, FALSE, or \"as.is\".")
if ("means" %nin% p.ops$compute && A[["disp.means"]] == TRUE) {
warning("'disp.means' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, "means"[A[["disp.means"]]]))
}
if (is_not_null(A[["disp.sds"]]) && !identical(A[["disp.sds"]], "as.is")) {
if (!rlang::is_bool(A[["disp.sds"]])) stop("'disp.sds' must be TRUE, FALSE, or \"as.is\".", call. = FALSE)
if ("sds" %nin% p.ops$compute && A[["disp.sds"]] == TRUE) {
warning("'disp.sds' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, "sds"[A[["disp.sds"]]]))
}
if (!identical(stats, "as.is")) {
if (!is_(stats, "character")) stop("'stats' must be a character vector.")
stats <- match_arg(stats, all_STATS(p.ops$type), several.ok = TRUE)
stats_in_p.ops <- stats %in% p.ops$compute
if (any(!stats_in_p.ops)) {
stop(paste0("'stats' cannot contain ", word_list(stats[!stats_in_p.ops], and.or = "or", quotes = 2), " if quick = TRUE in the original call to bal.tab()."), call. = TRUE)
}
else p.ops$disp <- unique(c(p.ops$disp[p.ops$disp %nin% all_STATS()], stats))
}
for (s in all_STATS(p.ops$type)) {
if (is_not_null(A[[STATS[[s]]$disp_stat]]) && !identical(A[[STATS[[s]]$disp_stat]], "as.is")) {
if (!rlang::is_bool(A[[STATS[[s]]$disp_stat]])) {
stop(paste0("'", STATS[[s]]$disp_stat, "' must be TRUE, FALSE, or \"as.is\"."), call. = FALSE)
}
if (s %nin% p.ops$compute && isTRUE(A[[STATS[[s]]$disp_stat]])) {
warning(paste0("'", STATS[[s]]$disp_stat, "' cannot be set to TRUE if quick = TRUE in the original call to bal.tab()."), call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, s))
}
}
for (s in p.ops$compute[p.ops$compute %in% all_STATS(p.ops$type)]) {
if (STATS[[s]]$threshold %in% names(A) && !identical(temp.thresh <- A[[STATS[[s]]$threshold]], "as.is")) {
if (is_not_null(temp.thresh) &&
(!is.numeric(temp.thresh) || length(temp.thresh) != 1 ||
is_null(p.ops[["thresholds"]][[s]]) ||
p.ops[["thresholds"]][[s]] != temp.thresh))
stop(paste0("'", STATS[[s]]$threshold, "' must be NULL or \"as.is\"."))
if (is_null(temp.thresh)) {
p.ops[["thresholds"]][[s]] <- NULL
baltal[[s]] <- NULL
maximbal[[s]] <- NULL
}
}
if (s %nin% p.ops$disp) {
p.ops[["thresholds"]][[s]] <- NULL
baltal[[s]] <- NULL
maximbal[[s]] <- NULL
}
}
if (!identical(disp.thresholds, "as.is")) {
if (!is.logical(disp.thresholds) || anyNA(disp.thresholds)) stop("'disp.thresholds' must only contain TRUE or FALSE.", call. = FALSE)
if (is_null(names(disp.thresholds))) {
if (length(disp.thresholds) <= length(p.ops[["thresholds"]])) {
names(disp.thresholds) <- names(p.ops[["thresholds"]])[seq_along(disp.thresholds)]
}
else {
stop("More entries were given to 'disp.thresholds' than there are thresholds in the bal.tab object.", call. = FALSE)
}
}
if (!all(names(disp.thresholds) %pin% names(p.ops[["thresholds"]]))) {
warning(paste0(word_list(names(disp.thresholds)[!names(disp.thresholds) %pin% names(p.ops[["thresholds"]])],
quotes = 2, is.are = TRUE), " not available in thresholds and will be ignored."), call. = FALSE)
disp.thresholds <- disp.thresholds[names(disp.thresholds) %pin% names(p.ops[["thresholds"]])]
}
names(disp.thresholds) <- match_arg(names(disp.thresholds), names(p.ops[["thresholds"]]), several.ok = TRUE)
for (x in names(disp.thresholds)) {
if (!disp.thresholds[x]) {
p.ops[["thresholds"]][[x]] <- NULL
baltal[[x]] <- NULL
maximbal[[x]] <- NULL
}
}
}
if (!identical(multi.summary, "as.is")) {
if (!rlang::is_bool(multi.summary)) stop("'multi.summary' must be TRUE, FALSE, or \"as.is\".")
if (p.ops$quick && p.ops$multi.summary == FALSE && multi.summary == TRUE) {
warning("'multi.summary' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$multi.summary <- multi.summary
}
if (!identical(disp.bal.tab, "as.is")) {
if (!rlang::is_bool(disp.bal.tab)) stop("'disp.bal.tab' must be TRUE, FALSE, or \"as.is\".")
p.ops$disp.bal.tab <- disp.bal.tab
}
if (is_not_null(m.balance.summary)) {
if (p.ops$disp.bal.tab) {
if (!identical(imbalanced.only, "as.is")) {
if (!rlang::is_bool(imbalanced.only)) stop("'imbalanced.only' must be TRUE, FALSE, or \"as.is\".")
p.ops$imbalanced.only <- imbalanced.only
}
if (p.ops$imbalanced.only) {
if (is_null(p.ops$thresholds)) {
warning("A threshold must be specified if imbalanced.only = TRUE. Displaying all covariates.", call. = FALSE)
p.ops$imbalanced.only <- FALSE
}
}
}
else p.ops$imbalanced.only <- FALSE
if (p.ops$imbalanced.only) {
keep.row <- rowSums(apply(m.balance.summary[grepl(".Threshold", names(m.balance.summary), fixed = TRUE)], 2, function(x) !is.na(x) & startsWith(x, "Not Balanced"))) > 0
}
else keep.row <- rep(TRUE, nrow(m.balance.summary))
}
if (!missing(which.treat)) {
if (paste(deparse1(substitute(which.treat)), collapse = "") == ".none") which.treat <- NA
else if (paste(deparse1(substitute(which.treat)), collapse = "") == ".all") which.treat <- NULL
if (!identical(which.treat, "as.is")) {
p.ops$which.treat <- which.treat
}
}
#Checks and Adjustments
if (is_null(p.ops$which.treat))
which.treat <- p.ops$treat_names_multi
else if (anyNA(p.ops$which.treat)) {
which.treat <- character(0)
}
else if (is.numeric(p.ops$which.treat)) {
which.treat <- p.ops$treat_names_multi[seq_along(p.ops$treat_names_multi) %in% p.ops$which.treat]
if (is_null(which.treat)) {
warning("No numbers in 'which.treat' correspond to treatment values. No treatment pairs will be displayed.", call. = FALSE)
which.treat <- character(0)
}
}
else if (is.character(p.ops$which.treat)) {
which.treat <- p.ops$treat_names_multi[p.ops$treat_names_multi %in% p.ops$which.treat]
if (is_null(which.treat)) {
warning("No names in 'which.treat' correspond to treatment values. No treatment pairs will be displayed.", call. = FALSE)
which.treat <- character(0)
}
}
else {
warning("The argument to 'which.treat' must be .all, .none, or a vector of treatment names or indices. No treatment pairs will be displayed.", call. = FALSE)
which.treat <- character(0)
}
if (is_null(which.treat)) {
disp.treat.pairs <- character(0)
}
else {
if (p.ops$pairwise) {
if (length(which.treat) == 1) {
disp.treat.pairs <- names(m.balance)[sapply(names(m.balance), function(x) any(attr(m.balance[[x]], "print.options")$treat_names == which.treat))]
}
else {
disp.treat.pairs <- names(m.balance)[sapply(names(m.balance), function(x) all(attr(m.balance[[x]], "print.options")$treat_names %in% which.treat))]
}
}
else {
if (length(which.treat) == 1) {
disp.treat.pairs <- names(m.balance)[sapply(names(m.balance), function(x) {
treat_names <- attr(m.balance[[x]], "print.options")$treat_names
any(treat_names[treat_names != "All"] == which.treat)})]
}
else {
disp.treat.pairs <- names(m.balance)[sapply(names(m.balance), function(x) {
treat_names <- attr(m.balance[[x]], "print.options")$treat_names
all(treat_names[treat_names != "All"] %in% which.treat)})]
}
}
}
#Printing output
if (is_not_null(call)) {
cat(underline("Call") %+% "\n " %+% paste(deparse(call), collapse = "\n") %+% "\n\n")
}
if (is_not_null(disp.treat.pairs)) {
headings <- setNames(character(length(disp.treat.pairs)), disp.treat.pairs)
if (p.ops$pairwise) cat(underline("Balance by treatment pair") %+% "\n")
else cat(underline("Balance by treatment group") %+% "\n")
for (i in disp.treat.pairs) {
headings[i] <- "\n - - - " %+% italic(attr(m.balance[[i]], "print.options")$treat_names[1] %+% " (0) vs. " %+%
attr(m.balance[[i]], "print.options")$treat_names[2] %+% " (1)") %+% " - - - \n"
cat(headings[i])
do.call(print, c(list(m.balance[[i]]), p.ops[names(p.ops) %nin% names(A)], A), quote = TRUE)
}
cat(paste0(paste(rep(" -", round(max(nchar(headings))/2)), collapse = ""), " \n"))
cat("\n")
}
if (isTRUE(as.logical(p.ops$multi.summary)) && is_not_null(m.balance.summary)) {
computed.agg.funs <- "max"
s.keep.col <- as.logical(c(TRUE,
unlist(lapply(p.ops$compute[p.ops$compute %in% all_STATS("bin")], function(s) {
c(unlist(lapply(computed.agg.funs, function(af) {
p.ops$un && s %in% p.ops$disp && af %in% "max"
})),
p.ops$un && !p.ops$disp.adj && is_not_null(p.ops$thresholds[[s]]))
})),
rep(
unlist(lapply(p.ops$compute[p.ops$compute %in% all_STATS("bin")], function(s) {
c(unlist(lapply(computed.agg.funs, function(af) {
p.ops$disp.adj && s %in% p.ops$disp && af %in% "max"
})),
p.ops$disp.adj && is_not_null(p.ops$thresholds[[s]]))
})),
p.ops$nweights + !p.ops$disp.adj)
))
if (p.ops$disp.bal.tab) {
cat(underline("Balance summary across all treatment pairs") %+% "\n")
if (all(!keep.row)) cat(italic("All covariates are balanced.") %+% "\n")
else print.data.frame_(round_df_char(m.balance.summary[keep.row, s.keep.col, drop = FALSE], digits))
cat("\n")
}
if (is_not_null(nn)) {
tag <- attr(nn, "tag")
ss.type <- attr(nn, "ss.type")
for (i in rownames(nn)) {
if (all(nn[i,] == 0)) nn <- nn[rownames(nn)!=i,]
}
if (all(c("Matched (ESS)", "Matched (Unweighted)") %in% rownames(nn)) &&
all(check_if_zero(nn["Matched (ESS)",] - nn["Matched (Unweighted)",]))) {
nn <- nn[rownames(nn)!="Matched (Unweighted)", , drop = FALSE]
rownames(nn)[rownames(nn) == "Matched (ESS)"] <- "Matched"
}
cat(underline(tag) %+% "\n")
print.warning <- FALSE
if (length(ss.type) > 1 && nunique.gt(ss.type[-1], 1)) {
ess <- ifelse(ss.type == "ess", "*", "")
nn <- setNames(cbind(nn, ess), c(names(nn), ""))
print.warning <- TRUE
}
print.data.frame_(round_df_char(nn, digits = min(2, digits), pad = " "))
if (print.warning) cat(italic("* indicates effective sample size"))
}
}
invisible(x)
}
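# --- Usage sketch (not part of the package source) ---------------------------
# print.bal.tab.multi() above handles multi-category treatments: which.treat
# selects which treatment pairs (or groups, when pairwise = FALSE) to print,
# and multi.summary controls the summary across pairs. A hedged sketch using
# cobalt's 'lalonde' data with the 3-level 'race' variable as the "treatment";
# the level names ("black", "white") are assumptions about that data set.
if (requireNamespace("cobalt", quietly = TRUE)) {
  library(cobalt)
  data("lalonde", package = "cobalt")
  b_multi <- bal.tab(race ~ age + educ + married, data = lalonde)
  print(b_multi, which.treat = c("black", "white"), multi.summary = TRUE)
}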
print.bal.tab.msm <- function(x, imbalanced.only = "as.is", un = "as.is", disp.bal.tab = "as.is", stats = "as.is", disp.thresholds = "as.is", disp = "as.is", which.time, msm.summary = "as.is", digits = max(3, getOption("digits") - 3), ...) {
#Replace .all and .none with NULL and NA respectively
.call <- match.call(expand.dots = TRUE)
if (any(sapply(seq_along(.call), function(x) identical(as.character(.call[[x]]), ".all") || identical(as.character(.call[[x]]), ".none")))) {
.call[sapply(seq_along(.call), function(x) identical(as.character(.call[[x]]), ".all"))] <- expression(NULL)
.call[sapply(seq_along(.call), function(x) identical(as.character(.call[[x]]), ".none"))] <- expression(NA)
return(eval.parent(.call))
}
A <- list(...)
A <- clear_null(A[!vapply(A, function(x) identical(x, quote(expr =)), logical(1L))])
call <- x$call
msm.balance <- x[["Time.Balance"]]
msm.balance.summary <- x[["Balance.Across.Times"]]
nn <- x$Observations
p.ops <- attr(x, "print.options")
baltal <- maximbal <- list()
for (s in p.ops$stats) {
baltal[[s]] <- x[[paste.("Balanced", s)]]
maximbal[[s]] <- x[[paste.("Max.Imbalance", s)]]
}
#Prevent exponential notation printing
op <- options(scipen=getOption("scipen"))
options(scipen = 999)
on.exit(options(op))
#Adjustments to print options
if (!identical(un, "as.is") && p.ops$disp.adj) {
if (!rlang::is_bool(un)) stop("'un' must be TRUE, FALSE, or \"as.is\".", call. = FALSE)
if (p.ops$quick && p.ops$un == FALSE && un == TRUE) {
warning("'un' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$un <- un
}
if (!identical(disp, "as.is")) {
if (!is.character(disp)) stop("'disp.means' must be a character vector.")
allowable.disp <- c("means", "sds", all_STATS(p.ops$type))
if (any(disp %nin% allowable.disp)) {
stop(paste(word_list(disp[disp %nin% allowable.disp], and.or = "and", quotes = 2, is.are = TRUE),
"not allowed in 'disp'."), call. = FALSE)
}
if (any(disp %nin% p.ops$compute)) {
warning(paste("'disp' cannot include", word_list(disp[disp %nin% p.ops$compute], and.or = "or", quotes = 2), "if quick = TRUE in the original call to bal.tab()."), call. = FALSE)
}
else p.ops$disp <- disp
}
if (is_not_null(A[["disp.means"]]) && !identical(A[["disp.means"]], "as.is")) {
if (!rlang::is_bool(A[["disp.means"]])) stop("'disp.means' must be TRUE, FALSE, or \"as.is\".")
if ("means" %nin% p.ops$compute && A[["disp.means"]] == TRUE) {
warning("'disp.means' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, "means"[A[["disp.means"]]]))
}
if (is_not_null(A[["disp.sds"]]) && !identical(A[["disp.sds"]], "as.is")) {
if (!rlang::is_bool(A[["disp.sds"]])) stop("'disp.sds' must be TRUE, FALSE, or \"as.is\".", call. = FALSE)
if ("sds" %nin% p.ops$compute && A[["disp.sds"]] == TRUE) {
warning("'disp.sds' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, "sds"[A[["disp.sds"]]]))
}
if (!identical(stats, "as.is")) {
if (!is_(stats, "character")) stop("'stats' must be a character vector.")
stats <- match_arg(stats, all_STATS(p.ops$type), several.ok = TRUE)
stats_in_p.ops <- stats %in% p.ops$compute
if (any(!stats_in_p.ops)) {
stop(paste0("'stats' cannot contain ", word_list(stats[!stats_in_p.ops], and.or = "or", quotes = 2), " if quick = TRUE in the original call to bal.tab()."), call. = TRUE)
}
else p.ops$disp <- unique(c(p.ops$disp[p.ops$disp %nin% all_STATS()], stats))
}
for (s in all_STATS(p.ops$type)) {
if (is_not_null(A[[STATS[[s]]$disp_stat]]) && !identical(A[[STATS[[s]]$disp_stat]], "as.is")) {
if (!rlang::is_bool(A[[STATS[[s]]$disp_stat]])) {
stop(paste0("'", STATS[[s]]$disp_stat, "' must be TRUE, FALSE, or \"as.is\"."), call. = FALSE)
}
if (s %nin% p.ops$compute && isTRUE(A[[STATS[[s]]$disp_stat]])) {
warning(paste0("'", STATS[[s]]$disp_stat, "' cannot be set to TRUE if quick = TRUE in the original call to bal.tab()."), call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, s))
}
}
for (s in p.ops$compute[p.ops$compute %in% all_STATS(p.ops$type)]) {
if (STATS[[s]]$threshold %in% names(A) && !identical(temp.thresh <- A[[STATS[[s]]$threshold]], "as.is")) {
if (is_not_null(temp.thresh) &&
(!is.numeric(temp.thresh) || length(temp.thresh) != 1 ||
is_null(p.ops[["thresholds"]][[s]]) ||
p.ops[["thresholds"]][[s]] != temp.thresh))
stop(paste0("'", STATS[[s]]$threshold, "' must be NULL or \"as.is\"."))
if (is_null(temp.thresh)) {
p.ops[["thresholds"]][[s]] <- NULL
baltal[[s]] <- NULL
maximbal[[s]] <- NULL
}
}
if (s %nin% p.ops$disp) {
p.ops[["thresholds"]][[s]] <- NULL
baltal[[s]] <- NULL
maximbal[[s]] <- NULL
}
}
if (!identical(disp.thresholds, "as.is")) {
if (!is.logical(disp.thresholds) || anyNA(disp.thresholds)) stop("'disp.thresholds' must only contain TRUE or FALSE.", call. = FALSE)
if (is_null(names(disp.thresholds))) {
if (length(disp.thresholds) <= length(p.ops[["thresholds"]])) {
names(disp.thresholds) <- names(p.ops[["thresholds"]])[seq_along(disp.thresholds)]
}
else {
stop("More entries were given to 'disp.thresholds' than there are thresholds in the bal.tab object.", call. = FALSE)
}
}
if (!all(names(disp.thresholds) %pin% names(p.ops[["thresholds"]]))) {
warning(paste0(word_list(names(disp.thresholds)[!names(disp.thresholds) %pin% names(p.ops[["thresholds"]])],
quotes = 2, is.are = TRUE), " not available in thresholds and will be ignored."), call. = FALSE)
disp.thresholds <- disp.thresholds[names(disp.thresholds) %pin% names(p.ops[["thresholds"]])]
}
names(disp.thresholds) <- match_arg(names(disp.thresholds), names(p.ops[["thresholds"]]), several.ok = TRUE)
for (x in names(disp.thresholds)) {
if (!disp.thresholds[x]) {
p.ops[["thresholds"]][[x]] <- NULL
baltal[[x]] <- NULL
maximbal[[x]] <- NULL
}
}
}
if (!identical(msm.summary, "as.is")) {
if (!rlang::is_bool(msm.summary)) stop("'msm.summary' must be TRUE, FALSE, or \"as.is\".")
if (p.ops$quick && p.ops$msm.summary == FALSE && msm.summary == TRUE) {
warning("'msm.summary' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$msm.summary <- msm.summary
}
if (!identical(disp.bal.tab, "as.is")) {
if (!rlang::is_bool(disp.bal.tab)) stop("'disp.bal.tab' must be TRUE, FALSE, or \"as.is\".")
p.ops$disp.bal.tab <- disp.bal.tab
}
if (p.ops$disp.bal.tab) {
if (!identical(imbalanced.only, "as.is")) {
if (!rlang::is_bool(imbalanced.only)) stop("'imbalanced.only' must be TRUE, FALSE, or \"as.is\".")
p.ops$imbalanced.only <- imbalanced.only
}
if (p.ops$imbalanced.only) {
if (is_null(p.ops$thresholds)) {
warning("A threshold must be specified if imbalanced.only = TRUE. Displaying all covariates.", call. = FALSE)
p.ops$imbalanced.only <- FALSE
}
}
}
else p.ops$imbalanced.only <- FALSE
if (is_not_null(msm.balance.summary)) {
if (p.ops$imbalanced.only) {
keep.row <- rowSums(apply(msm.balance.summary[grepl(".Threshold", names(msm.balance.summary), fixed = TRUE)], 2, function(x) !is.na(x) & startsWith(x, "Not Balanced"))) > 0
}
else keep.row <- rep(TRUE, nrow(msm.balance.summary))
}
if (!missing(which.time)) {
if (paste(deparse1(substitute(which.time)), collapse = "") == ".none") which.time <- NA
else if (paste(deparse1(substitute(which.time)), collapse = "") == ".all") which.time <- NULL
if (!identical(which.time, "as.is")) {
p.ops$which.time <- which.time
}
}
#Checks and Adjustments
if (is_null(p.ops$which.time))
which.time <- seq_along(msm.balance)
else if (anyNA(p.ops$which.time)) {
which.time <- integer(0)
}
else if (is.numeric(p.ops$which.time)) {
which.time <- seq_along(msm.balance)[seq_along(msm.balance) %in% p.ops$which.time]
if (is_null(which.time)) {
warning("No numbers in 'which.time' are treatment time points. No time points will be displayed.", call. = FALSE)
which.time <- integer(0)
}
}
else if (is.character(p.ops$which.time)) {
which.time <- seq_along(msm.balance)[names(msm.balance) %in% p.ops$which.time]
if (is_null(which.time)) {
warning("No names in 'which.time' are treatment names. No time points will be displayed.", call. = FALSE)
which.time <- integer(0)
}
}
else {
warning("The argument to 'which.time' must be .all, .none, or a vector of time point numbers. No time points will be displayed.", call. = FALSE)
which.time <- integer(0)
}
#Printing output
if (is_not_null(call)) {
cat(underline("Call") %+% "\n " %+% paste(deparse(call), collapse = "\n") %+% "\n\n")
}
if (is_not_null(which.time)) {
cat(underline("Balance by Time Point") %+% "\n")
for (i in which.time) {
cat("\n - - - " %+% italic("Time: " %+% as.character(i)) %+% " - - - \n")
do.call(print, c(list(x = msm.balance[[i]]), p.ops[names(p.ops) %nin% names(A)], A), quote = TRUE)
}
cat(paste0(paste(rep(" -", round(nchar(paste0("\n - - - Time: ", i, " - - - "))/2)), collapse = ""), " \n"))
cat("\n")
}
if (isTRUE(as.logical(p.ops$msm.summary)) && is_not_null(msm.balance.summary)) {
computed.agg.funs <- "max"
s.keep.col <- as.logical(c(TRUE,
TRUE,
unlist(lapply(p.ops$compute[p.ops$compute %in% all_STATS(p.ops$type)], function(s) {
c(unlist(lapply(computed.agg.funs, function(af) {
p.ops$un && s %in% p.ops$disp && af %in% "max"
})),
p.ops$un && !p.ops$disp.adj && is_not_null(p.ops$thresholds[[s]]))
})),
rep(
unlist(lapply(p.ops$compute[p.ops$compute %in% all_STATS(p.ops$type)], function(s) {
c(unlist(lapply(computed.agg.funs, function(af) {
p.ops$disp.adj && s %in% p.ops$disp && af %in% "max"
})),
p.ops$disp.adj && is_not_null(p.ops$thresholds[[s]]))
})),
p.ops$nweights + !p.ops$disp.adj)
))
if (p.ops$disp.bal.tab) {
cat(underline("Balance summary across all time points") %+% "\n")
if (all(!keep.row)) cat(italic("All covariates are balanced.") %+% "\n")
else print.data.frame_(round_df_char(msm.balance.summary[keep.row, s.keep.col, drop = FALSE], digits))
cat("\n")
}
if (is_not_null(nn)) {
print.warning <- FALSE
cat(underline(attr(nn[[1]], "tag")) %+% "\n")
for (ti in seq_along(nn)) {
cat(" - " %+% italic("Time " %+% as.character(ti)) %+% "\n")
for (i in rownames(nn[[ti]])) {
if (all(nn[[ti]][i,] == 0)) nn[[ti]] <- nn[[ti]][rownames(nn[[ti]])!=i,]
}
if (all(c("Matched (ESS)", "Matched (Unweighted)") %in% rownames(nn[[ti]])) &&
all(check_if_zero(nn[[ti]]["Matched (ESS)",] - nn[[ti]]["Matched (Unweighted)",]))) {
nn[[ti]] <- nn[[ti]][rownames(nn[[ti]])!="Matched (Unweighted)", , drop = FALSE]
rownames(nn[[ti]])[rownames(nn[[ti]]) == "Matched (ESS)"] <- "Matched"
}
if (length(attr(nn[[ti]], "ss.type")) > 1 && nunique.gt(attr(nn[[ti]], "ss.type")[-1], 1)) {
ess <- ifelse(attr(nn[[ti]], "ss.type") == "ess", "*", "")
nn[[ti]] <- setNames(cbind(nn[[ti]], ess), c(names(nn[[ti]]), ""))
print.warning <- TRUE
}
print.data.frame_(round_df_char(nn[[ti]], digits = min(2, digits), pad = " "))
}
if (print.warning) cat(italic("* indicates effective sample size"))
}
}
invisible(x)
}
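# --- Usage sketch (not part of the package source) ---------------------------
# print.bal.tab.msm() above is the longitudinal (marginal structural model)
# variant: which.time selects time points and msm.summary controls the summary
# across them. 'b_msm' is a hypothetical bal.tab object computed on a list of
# treatments/formulas over time; the guard keeps the sketch runnable.
if (exists("b_msm")) {
  print(b_msm,
        which.time = .all,   # print balance at every time point
        msm.summary = TRUE)  # plus the summary across time points
}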
print.bal.tab.subclass <- function(x, imbalanced.only = "as.is", un = "as.is", disp.bal.tab = "as.is", stats = "as.is", disp.thresholds = "as.is", disp = "as.is", disp.subclass = "as.is", digits = max(3, getOption("digits") - 3), ...) {
A <- list(...)
call <- x$call
s.balance <- x$Subclass.Balance
b.a.subclass <- x$Balance.Across.Subclass
s.nn <- x$Observations
p.ops <- attr(x, "print.options")
baltal <- maximbal <- list()
for (s in p.ops$compute) {
baltal[[s]] <- x[[paste.("Balanced", s, "Subclass")]]
maximbal[[s]] <- x[[paste.("Max.Imbalance", s, "Subclass")]]
}
#Prevent exponential notation printing
op <- options(scipen=getOption("scipen"))
options(scipen = 999)
on.exit(options(op))
#Adjustments to print options
if (!identical(un, "as.is") && p.ops$disp.adj) {
if (!rlang::is_bool(un)) stop("'un' must be TRUE, FALSE, or \"as.is\".", call. = FALSE)
if (p.ops$quick && p.ops$un == FALSE && un == TRUE) {
warning("'un' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$un <- un
}
if (!identical(disp, "as.is")) {
if (!is.character(disp)) stop("'disp.means' must be a character vector.")
allowable.disp <- c("means", "sds", all_STATS(p.ops$type))
if (any(disp %nin% allowable.disp)) {
stop(paste(word_list(disp[disp %nin% allowable.disp], and.or = "and", quotes = 2, is.are = TRUE),
"not allowed in 'disp'."), call. = FALSE)
}
if (any(disp %nin% p.ops$compute)) {
warning(paste("'disp' cannot include", word_list(disp[disp %nin% p.ops$compute], and.or = "or", quotes = 2), "if quick = TRUE in the original call to bal.tab()."), call. = FALSE)
}
else p.ops$disp <- disp
}
if (is_not_null(A[["disp.means"]]) && !identical(A[["disp.means"]], "as.is")) {
if (!rlang::is_bool(A[["disp.means"]])) stop("'disp.means' must be TRUE, FALSE, or \"as.is\".")
if ("means" %nin% p.ops$compute && A[["disp.means"]] == TRUE) {
warning("'disp.means' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, "means"[A[["disp.means"]]]))
}
if (is_not_null(A[["disp.sds"]]) && !identical(A[["disp.sds"]], "as.is")) {
if (!rlang::is_bool(A[["disp.sds"]])) stop("'disp.sds' must be TRUE, FALSE, or \"as.is\".", call. = FALSE)
if ("sds" %nin% p.ops$compute && A[["disp.sds"]] == TRUE) {
warning("'disp.sds' cannot be set to TRUE if quick = TRUE in the original call to bal.tab().", call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, "sds"[A[["disp.sds"]]]))
}
if (!identical(stats, "as.is")) {
if (!is_(stats, "character")) stop("'stats' must be a character vector.")
stats <- match_arg(stats, all_STATS(p.ops$type), several.ok = TRUE)
stats_in_p.ops <- stats %in% p.ops$compute
if (any(!stats_in_p.ops)) {
stop(paste0("'stats' cannot contain ", word_list(stats[!stats_in_p.ops], and.or = "or", quotes = 2), " if quick = TRUE in the original call to bal.tab()."), call. = TRUE)
}
else p.ops$disp <- unique(c(p.ops$disp[p.ops$disp %nin% all_STATS()], stats))
}
for (s in all_STATS(p.ops$type)) {
if (is_not_null(A[[STATS[[s]]$disp_stat]]) && !identical(A[[STATS[[s]]$disp_stat]], "as.is")) {
if (!rlang::is_bool(A[[STATS[[s]]$disp_stat]])) {
stop(paste0("'", STATS[[s]]$disp_stat, "' must be TRUE, FALSE, or \"as.is\"."), call. = FALSE)
}
if (s %nin% p.ops$compute && isTRUE(A[[STATS[[s]]$disp_stat]])) {
warning(paste0("'", STATS[[s]]$disp_stat, "' cannot be set to TRUE if quick = TRUE in the original call to bal.tab()."), call. = FALSE)
}
else p.ops$disp <- unique(c(p.ops$disp, s))
}
}
for (s in p.ops$compute[p.ops$compute %in% all_STATS(p.ops$type)]) {
if (STATS[[s]]$threshold %in% names(A) && !identical(temp.thresh <- A[[STATS[[s]]$threshold]], "as.is")) {
if (is_not_null(temp.thresh) &&
(!is.numeric(temp.thresh) || length(temp.thresh) != 1 ||
is_null(p.ops[["thresholds"]][[s]]) ||
p.ops[["thresholds"]][[s]] != temp.thresh))
stop(paste0("'", STATS[[s]]$threshold, "' must be NULL or \"as.is\"."))
if (is_null(temp.thresh)) {
p.ops[["thresholds"]][[s]] <- NULL
baltal[[s]] <- NULL
maximbal[[s]] <- NULL
}
}
if (s %nin% p.ops$disp) {
p.ops[["thresholds"]][[s]] <- NULL
baltal[[s]] <- NULL
maximbal[[s]] <- NULL
}
}
if (!identical(disp.thresholds, "as.is")) {
if (!is.logical(disp.thresholds) || anyNA(disp.thresholds)) stop("'disp.thresholds' must only contain TRUE or FALSE.", call. = FALSE)
if (is_null(names(disp.thresholds))) {
if (length(disp.thresholds) <= length(p.ops[["thresholds"]])) {
names(disp.thresholds) <- names(p.ops[["thresholds"]])[seq_along(disp.thresholds)]
}
else {
stop("More entries were given to 'disp.thresholds' than there are thresholds in the bal.tab object.", call. = FALSE)
}
}
if (!all(names(disp.thresholds) %pin% names(p.ops[["thresholds"]]))) {
warning(paste0(word_list(names(disp.thresholds)[!names(disp.thresholds) %pin% names(p.ops[["thresholds"]])],
quotes = 2, is.are = TRUE), " not available in thresholds and will be ignored."), call. = FALSE)
disp.thresholds <- disp.thresholds[names(disp.thresholds) %pin% names(p.ops[["thresholds"]])]
}
names(disp.thresholds) <- match_arg(names(disp.thresholds), names(p.ops[["thresholds"]]), several.ok = TRUE)
for (x in names(disp.thresholds)) {
if (!disp.thresholds[x]) {
p.ops[["thresholds"]][[x]] <- NULL
baltal[[x]] <- NULL
maximbal[[x]] <- NULL
}
}
}
if (!identical(disp.bal.tab, "as.is")) {
if (!rlang::is_bool(disp.bal.tab)) stop("'disp.bal.tab' must be TRUE, FALSE, or \"as.is\".")
p.ops$disp.bal.tab <- disp.bal.tab
}
if (p.ops$disp.bal.tab) {
if (!identical(imbalanced.only, "as.is")) {
if (!rlang::is_bool(imbalanced.only)) stop("'imbalanced.only' must be TRUE, FALSE, or \"as.is\".")
p.ops$imbalanced.only <- imbalanced.only
}
if (p.ops$imbalanced.only) {
if (is_null(p.ops$thresholds)) {
warning("A threshold must be specified if imbalanced.only = TRUE. Displaying all covariates.", call. = FALSE)
p.ops$imbalanced.only <- FALSE
}
}
}
else p.ops$imbalanced.only <- FALSE
if (!identical(disp.subclass, "as.is")) {
if (!rlang::is_bool(disp.subclass)) stop("'disp.subclass' must be TRUE, FALSE, or \"as.is\".")
p.ops$disp.subclass <- disp.subclass
}
if (is_not_null(call)) {
cat(underline("Call") %+% "\n " %+% paste(deparse(call), collapse = "\n") %+% "\n\n")
}
if (p.ops$disp.bal.tab) {
if (p.ops$disp.subclass) {
s.keep.col <- setNames(c(TRUE,
rep(unlist(lapply(p.ops$compute[p.ops$compute %nin% all_STATS()], function(s) {
s %in% p.ops$disp
})), switch(p.ops$type, bin = 2, cont = 1)),
unlist(lapply(p.ops$compute[p.ops$compute %in% all_STATS()], function(s) {
c(s %in% p.ops$disp,
is_not_null(p.ops$thresholds[[s]]))
}))),
names(s.balance[[1]]))
cat(underline("Balance by subclass"))
for (i in names(s.balance)) {
if (p.ops$imbalanced.only) {
s.keep.row <- rowSums(apply(s.balance[[i]][grepl(".Threshold", names(s.balance[[i]]), fixed = TRUE)], 2, function(x) !is.na(x) & startsWith(x, "Not Balanced"))) > 0
}
else s.keep.row <- rep(TRUE, nrow(s.balance[[i]]))
cat("\n - - - " %+% italic("Subclass " %+% as.character(i)) %+% " - - - \n")
if (all(!s.keep.row)) cat(italic("All covariates are balanced.") %+% "\n")
else print.data.frame_(round_df_char(s.balance[[i]][s.keep.row, s.keep.col, drop = FALSE], digits))
}
cat("\n")
}
if (is_not_null(b.a.subclass)) {
if (p.ops$imbalanced.only) {
a.s.keep.row <- rowSums(apply(b.a.subclass[grepl(".Threshold", names(b.a.subclass), fixed = TRUE)], 2, function(x) !is.na(x) & startsWith(x, "Not Balanced"))) > 0
}
else a.s.keep.row <- rep(TRUE, nrow(b.a.subclass))
a.s.keep.col <- setNames(as.logical(c(TRUE,
rep(unlist(lapply(p.ops$compute[p.ops$compute %nin% all_STATS()], function(s) {
p.ops$un && s %in% p.ops$disp
})), switch(p.ops$type, bin = 2, cont = 1)),
unlist(lapply(p.ops$compute[p.ops$compute %in% all_STATS()], function(s) {
c(p.ops$un && s %in% p.ops$disp,
p.ops$un && !p.ops$disp.adj && is_not_null(p.ops$thresholds[[s]]))
})),
rep(c(rep(unlist(lapply(p.ops$compute[p.ops$compute %nin% all_STATS()], function(s) {
p.ops$disp.adj && s %in% p.ops$disp
})), 2),
unlist(lapply(p.ops$compute[p.ops$compute %in% all_STATS()], function(s) {
c(p.ops$disp.adj && s %in% p.ops$disp,
p.ops$disp.adj && is_not_null(p.ops$thresholds[[s]]))
}))
),
p.ops$disp.adj))),
names(b.a.subclass))
cat(underline("Balance measures across subclasses") %+% "\n")
if (all(!a.s.keep.row)) cat(italic("All covariates are balanced.") %+% "\n")
else print.data.frame_(round_df_char(b.a.subclass[a.s.keep.row, a.s.keep.col, drop = FALSE], digits))
cat("\n")
}
}
for (s in p.ops$stats) {
if (is_not_null(baltal[[s]])) {
cat(underline(paste("Balance tally for", STATS[[s]]$balance_tally_for, "across subclasses")) %+% "\n")
print.data.frame_(baltal[[s]])
cat("\n")
}
if (is_not_null(maximbal[[s]])) {
cat(underline(paste("Variable with the greatest", STATS[[s]]$variable_with_the_greatest, "across subclasses")) %+% "\n")
print.data.frame_(round_df_char(maximbal[[s]], digits), row.names = FALSE)
cat("\n")
}
}
if (is_not_null(s.nn)) {
cat(underline(attr(s.nn, "tag")) %+% "\n")
print.data.frame_(round_df_char(s.nn, digits = min(2, digits), pad = " "))
}
invisible(x)
}
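# --- Usage sketch (not part of the package source) ---------------------------
# print.bal.tab.subclass() above prints balance for subclassified samples:
# disp.subclass toggles the within-subclass tables, and the usual options
# (imbalanced.only, disp, stats, ...) apply to the across-subclass summary.
# 'b_sub' is a hypothetical bal.tab object computed on a subclassified sample
# (e.g. from MatchIt with method = "subclass"); the guard keeps it runnable.
if (exists("b_sub")) {
  print(b_sub, disp.subclass = TRUE, imbalanced.only = TRUE)
}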
#' dbR6Parent_set_data__
#'@keywords internal
dbR6Parent_set_data <- function(x) {
  # store the supplied object in the enclosing object's environment
  private$where$data <- x
invisible(NULL)
}
|
/R/dbR6Parent_set_data.R
|
no_license
|
leandroroser/dbR6
|
R
| false | false | 132 |
r
|
library(httr)
library(jsonlite)
#install.packages("plotly")
library(dplyr)
library(plotly)
# Data cube (DataMexico API)
repositorio = GET("https://api.datamexico.org/tesseract/cubes/imss/aggregate.jsonrecords?captions%5B%5D=Date+Month.Date.Quarter.Quarter+ES&drilldowns%5B%5D=Date+Month.Date.Quarter&measures%5B%5D=Insured+Employment&parents=false&sparse=false")
rawToChar(repositorio$content) # convert the raw response into a character string
Datos = fromJSON(rawToChar(repositorio$content))
names(Datos)
Datos<-Datos$data
Datos <- Datos[,-c(1)] # drop the first column
# Convert to a data frame
Datos <- data.frame(Datos)
colnames(Datos)<- c("Trimestre", "Asegurados")
p = plot_ly(Datos, x = ~Trimestre,y = ~Asegurados,
name = 'Asegurados',
type = 'scatter',
mode = 'lines+markers')
p %>% layout(title="Asegurados 2019Q1 al 2020Q4 de México")
Crecimiento <- data.frame(diff(log(Datos$Asegurados), lag=1)*100)
Fechas <- Datos$Trimestre[-1] # the growth series is one observation shorter than the level series
Crecimiento <- data.frame(cbind(Fechas,Crecimiento))
colnames(Crecimiento)<- c("Trimestre", "Crecimiento")
p1 = plot_ly(Crecimiento, x = ~Trimestre,y = ~Crecimiento,
name = 'Crecimiento',
type = 'scatter',
mode = 'lines+markers'
)
p1 %>% layout(title="Variación (variación porcentual respecto al trimestre anterior) ")
|
/R-Salarios.R
|
no_license
|
jlrosasp/bedu-proyecto-r
|
R
| false | false | 1,342 |
r
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/find_matching_string.R
\name{find_matching_string}
\alias{find_matching_string}
\title{Fuzzy string matching.}
\usage{
find_matching_string(x, y, value = TRUE, step = 0.1, ignore.case = TRUE)
}
\arguments{
\item{x}{Strings.}
\item{y}{List of strings to be matched.}
\item{value}{If TRUE, return the closest string itself; if FALSE, return its index.}
\item{step}{Step by which to decrease the distance.}
\item{ignore.case}{If FALSE, the pattern matching is case sensitive; if TRUE, case is ignored during matching.}
}
\description{
Fuzzy string matching.
}
\examples{
library(psycho)
find_matching_string("Hwo rea ouy", c("How are you", "Not this word", "Nice to meet you"))
}
\author{
\href{https://dominiquemakowski.github.io/}{Dominique Makowski}
}
|
/man/find_matching_string.Rd
|
permissive
|
HugoNjb/psycho.R
|
R
| false | true | 813 |
rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/DataPrep.R
\name{make.cell.meta.from.df}
\alias{make.cell.meta.from.df}
\title{Creates a cell metadata matrix from a supplied dataframe, using the required fields}
\usage{
make.cell.meta.from.df(metad, rq.fields)
}
\arguments{
\item{metad}{A dataframe of per cell metadata}
\item{rq.fields}{A vector of names specifying which columns should be made into metadata}
}
\description{
Creates a cell metadata matrix from a supplied dataframe, using the required fields
}
\keyword{cell}
\keyword{metadata}
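% A hypothetical usage sketch; the toy data frame and column names below are
% illustrative assumptions and are not taken from the package itself.
\examples{
\dontrun{
meta.df <- data.frame(celltype = c("T cell", "B cell"), sample = c("s1", "s2"))
cell.meta <- make.cell.meta.from.df(meta.df, c("celltype", "sample"))
}
}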
|
/man/make.cell.meta.from.df.Rd
|
no_license
|
shambam/cellexalvrR
|
R
| false | true | 559 |
rd
|
#
# This is the server logic of a Shiny web application. You can run the
# application by clicking 'Run App' above.
#
# Find out more about building applications with Shiny here:
#
# http://shiny.rstudio.com/
#
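# Note: the calls below assume the required packages are attached elsewhere
# (typically in global.R or ui.R): shiny, compiler (cmpfun), rgdal
# (ogrListLayers/readOGR), data.table (fread/dcast), plotly, leaflet,
# leaflet.extras (addWebGLHeatmap), viridisLite (plasma) and DT.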
server <- function(input, output, session) {
# Function to format a number
c_fn <- function(num, digits = 6) {
return(format(round(num, digits = digits), nsmall = digits, big.mark = ","))
}
formatNum <- cmpfun(c_fn)
# Function to show parts of progress indicator
c_spp <- function(numParts, detailMsg = "", stepNo = NULL, delaySec = 0.1) {
# Increment the progress bar, and update the detail text.
incProgress(1/numParts, detail = paste(detailMsg, stepNo))
# Pause for delaySec seconds to simulate a long computation.
Sys.sleep(delaySec)
}
showProgressPart <- cmpfun(c_spp)
# Display progress indicator while processing data ...
withProgress(message = 'Initializing application', value = 0, {
showProgressPart(2, "Reading spatial data ...")
kmlFilePath <- "data/spatial/world_ISO2.kml"
kmlLayers <- ogrListLayers(kmlFilePath)
world <- readOGR(kmlFilePath, kmlLayers)
showProgressPart(2, "Reading meteor data ...")
meteors <- fread("data/nasa_meteors_scrubbed.csv", na.strings = c("NA", "#DIV/0!", ""), header = TRUE)
meteors$latitude <- as.numeric(as.character(meteors$reclat))
meteors$longitude <- as.numeric(as.character(meteors$reclong))
meteors$recclass <- as.factor(meteors$recclass)
meteors$wmk2006Class <- as.factor(meteors$wmk2006Class)
classifications <- sort(unique(meteors$wmk2006Class))
meteors$countryName <- as.factor(meteors$countryName)
meteors$iso2 <- as.factor(meteors$iso2)
    # Update Location (include city or so); include Geolocation
meteors$popUp <- paste("<b>Name: </b>", meteors$name, "<b><br>Location: </b>", meteors$countryName
, "<br><b>Coordinates: </b>", meteors$GeoLocation
, "<br><b>Year: </b>", meteors$year, "<br><b>Class: </b>", meteors$recclass
, "<br><b><a href='https://en.wikipedia.org/wiki/File:Meteorite_Classification_after_Weissberg_McCoy_Krot_2006_Stony_Iron.svg'
target='_blank' style='color: rgb(255, 255, 255);font-style: italic;'>Weisberg (2006)</a> Class: </b>", meteors$wmk2006Class
, "<br><b>Mass (in Kg): </b>", ifelse(is.na(meteors$`mass (g)`), meteors$`mass (g)`, formatNum(meteors$`mass (g)` / 10^3, 5)) , sep = "")
showProgressPart(2, "Completed.")
})
#----------------------------------------------------------------------------------------------------------------
# SECTION FOR RENDERING CONTROLS
#----------------------------------------------------------------------------------------------------------------
# Display control panel ...
output$controlPanel <- renderUI({
absolutePanel(top = 10, right = 10, draggable = TRUE, fixed = TRUE, width = "275px",
class = "absolutePanel",
h4(img(src = "images/220px-Leonid_Meteor.jpg", width = "25px", height = "25px"), " Meteorite Landings", align = "center"),
hr(),
uiOutput("yearRange"),
uiOutput("meteorClass"),
actionLink("meteorClassLink"
, "* Weisberg et al (2006) Scheme"
, onclick = "window.open('https://en.wikipedia.org/wiki/File:Meteorite_Classification_after_Weissberg_McCoy_Krot_2006_Stony_Iron.svg', '_blank')"
),
hr(),
checkboxInput("showGraph", "Stacked Area Graph by Year"),
selectInput("mapLayer", label = "Maps"
, choices = list("Basic" = "basic"
, "Grayscale" = "grayscale"
, "Dark" = "nightmode"
, "Satellite" = "imagery"
)
, selected = "grayscale"
),
radioButtons("mapType", label = NULL
, choices = list("Markers" = "marker"
, "Heatmap" = "heatmap"
, "Choropleth" = "choropleth")
, selected = "marker"),
hr(),
div(h5(actionLink("howToLink"
, "How to use this application?"
, onclick = "window.open('docs/using_meteorite_landings_app.pdf', '_blank')"
)), align = "center")
)
})
# Display "year" slider input ...
output$yearRange <- renderUI({
sliderInput("yearRange", "Year Recorded", min(meteors$year), max(meteors$year),
value = c(max(meteors$year) - 25, max(meteors$year)), sep = "", ticks = FALSE
)
})
# Display "meteor classification" checkbox group input ...
output$meteorClass <- renderUI({
checkboxGroupInput('meteorClass', 'Classifications *', classifications, selected = classifications)
})
  # Display checkbox input to show "cumulative and actual no. of recorded meteorites by year" plot ...
output$graphPane <- renderUI({
if (input$showGraph) {
absolutePanel(top = 50, left = 50, draggable = TRUE, fixed = TRUE, width = "670px", height = "520px", class = "graphPanel"
, plotlyOutput('plotly'), div(style = "height: 50px")
, h5("Cumulative and Actual No. of Recorded Meteorites by Year", align = "center")
)
}
})
#----------------------------------------------------------------------------------------------------------------
# **** SECTION FOR DATA PROCESSING
#----------------------------------------------------------------------------------------------------------------
# Reactive expression for the data subsetted to what the user selected
filteredData <- reactive({
subset(meteors, (year >= input$yearRange[1] & year <= input$yearRange[2])
& (wmk2006Class %in% input$meteorClass))
})
# Data processing to create data table of counts per Country along with spatial data
countByCountry <- reactive({
# Display progress indicator while processing data ...
withProgress(message = 'Computing count by country', value = 0, {
showProgressPart(4, "Filtering data ...")
tmp <- filteredData()
showProgressPart(4, "Getting counts ...")
dat <- tmp[, .(count=.N), by = list(iso2, countryName)]
showProgressPart(4, "Setting up additional data ...")
dat$popUp <- paste("<b>Country: </b>", dat$countryName
, "<br><b>No. of Records: </b>", formatNum(dat$count,0), sep = "")
showProgressPart(4, "Merging with spatial data ...")
dat <- merge(x = world, y = dat, by.x = "Name", by.y = "iso2", all = TRUE)
showProgressPart(4, "Completed.")
dat
})
})
# Data processing to create data table of counts per year and classification
countByYearClass <- reactive({
# Display progress indicator while processing data ...
withProgress(message = 'Computing count by year, class', value = 0, {
showProgressPart(4, "Filtering data ...")
tmp <- filteredData()
dat <- data.table()
if (nrow(tmp) > 0) {
showProgressPart(4, "Getting counts ...")
dat <- tmp[, .(count=.N), by = list(year, wmk2006Class)]
colnames(dat) <- c('id', 'class', 'count')
showProgressPart(4, "Tidying data ...")
dat <- dcast(dat, id ~ class)
cols <- colnames(dat)[colSums(is.na(dat)) > 0]
dat[ , (cols) := lapply(.SD, function(x) replace(x, which(is.na(x)), 0)), .SDcols = cols]
showProgressPart(4, "Sorting data ...")
        # Reorder class columns by their column sums (ascending), keeping the id column first
dat <- setcolorder(dat, dat[ , c(1, order(colSums(dat[ ,2:ncol(dat)], na.rm = TRUE)) + 1)])
}
showProgressPart(4, "Completed.")
dat
})
})
# This will decide which data for the map will be used
mapData <- reactive({
mapType <- ifelse(is.null(input$mapType), 'marker', input$mapType)
if (mapType == 'choropleth') countByCountry()
else filteredData()
})
#----------------------------------------------------------------------------------------------------------------
# SECTION FOR RENDERING PLOTS
#----------------------------------------------------------------------------------------------------------------
  # Build graph of counts by classification through Plotly
# a) by Year: stacked fill charts (year in sequence)
# b) by Country: stacked bar charts (sorted by total no. of counts)
output$plotly <- renderPlotly({
# Data for plot will be built as matrix of count, with classes as succeeding columns
dat <- countByYearClass()
# Build color spectrum
colors <- substr(plasma(length(classifications), direction = 1), start = 1, stop = 7) # Remove the alpha
p <- plot_ly()
if (nrow(dat) > 0) {
oldCols <- colnames(dat) # For class display
colnames(dat) <- make.names(colnames(dat)) # Make R-syntactically valid column names
newCols <- colnames(dat)
nCol <- length(newCols)
      cummDat <- dat # For cumulative table
      # Below computes cumulative counts for the stacked chart (applied only by Year)
if (ncol(dat) >= 3) {
for (i in 3:nCol) {
eval(parse(text = paste('cummDat$', newCols[i], ' <- cummDat$'
, newCols[i], ' + cummDat$', newCols[i-1], sep = '')))
}
}
# Build a stacked filled scatter plot
p <- plot_ly(dat, x = as.factor(dat$id), y = 0 ##
, name = "id"
, hoverinfo = 'text'
, text = dat$id
, fillcolor = "#000000"
, mode = 'none'
, type = 'scatter'
, fill = 'tozeroy') %>%
layout(title = ""
, xaxis = list(title = "", showgrid = FALSE)
, yaxis = list(title = "", showgrid = FALSE)
, showlegend = FALSE, autosize = FALSE, height = "475", width = "650"
, margin = list(l = 75, r = 50, b = 75, t = 50, pad = 10)
, paper_bgcolor = 'rgba(248, 248, 255, 0)'
, plot_bgcolor = 'rgba(248, 248, 255, 0)'
)
# Add each stack of data
for (i in nCol:2) {
p <- p %>% add_trace(y = eval(parse(text = paste('cummDat$', newCols[i], sep = '')))
, name = oldCols[i]
, hoverinfo = 'text+name'
, text = paste("(Cum.: ", formatNum(eval(parse(text = paste('cummDat$', newCols[i], sep = ''))), 0), "; Act.: "
, formatNum(eval(parse(text = paste('dat$', newCols[i], sep = ''))), 0), ")", sep = "")
, fillcolor = colors[nCol + 1 - i]
)
}
}
p # Display the plot
})
#----------------------------------------------------------------------------------------------------------------
# **** SECTION FOR RENDERING MAPS
#----------------------------------------------------------------------------------------------------------------
# Build params for Awesome icons for markers
theIcon <- awesomeIcons(
icon = 'fa-spinner',
iconColor = 'lightgray',
spin = TRUE,
library = 'fa',
markerColor = 'gray'
)
# This reactive expression represents the palette function,
# which changes as the user makes selections in UI.
colorpal <- reactive({
colorNumeric("plasma", c(countByCountry()$count, 0)) # "viridis", "magma", "inferno", or "plasma".
})
  # This is the base leaflet object
output$map <- renderLeaflet({
# Use leaflet() here, and only include aspects of the map that
# won't need to change dynamically (at least, not unless the
# entire map is being torn down and recreated).
leaflet(meteors, options = leafletOptions(worldCopyJump = TRUE)) %>%
setView(65, 35, zoom = 2)
})
  # Changes to the map happen here, depending on the user's preferences and inputs
observe({
# Display progress indicator while processing data ...
withProgress(message = 'Updating the map', value = 0, {
showProgressPart(3, "Fetching data ...")
mapType <- ifelse(is.null(input$mapType), 'marker', input$mapType)
legend <- ifelse(is.null(input$showLegend), FALSE, input$showLegend)
showProgressPart(3, "Preparing the base map ...")
proxy <- leafletProxy("map", data = mapData()) %>%
clearMarkerClusters() %>%
clearMarkers() %>%
clearWebGLHeatmap() %>%
clearShapes()
showProgressPart(3, "Adding layers and other objects ...")
if (mapType == 'heatmap') {
proxy %>%
clearControls() %>%
addWebGLHeatmap(size = 150000, units = "m", opacity = 1, gradientTexture = "skyline")
}
else if (mapType == 'choropleth') {
pal <- colorpal()
proxy %>%
addPolygons(fillOpacity = 0.5,
fillColor = ~pal(count),
color = "black",
weight = 0.5,
popup = ~popUp
) %>%
clearControls() %>%
addLegend(position = "bottomleft", pal = pal, values = ~count)
}
else {
proxy %>%
clearControls() %>%
addAwesomeMarkers(popup = ~popUp, icon = theIcon
, clusterOptions = markerClusterOptions(polygonOptions = list(
color='#990000', weight = 3, stroke = FALSE, fillOpacity = 0.3
)
)
)
}
showProgressPart(3, "Completed.")
})
})
# Use a separate observer for map tiling
observe({
# Display progress indicator while processing data ...
withProgress(message = 'Updating the map layer', value = 0, {
showProgressPart(2, "Rendering map tiles ...")
if (!is.null(input$mapLayer)) {
tileProvider <- switch(input$mapLayer
, basic = providers$OpenStreetMap
, grayscale = providers$CartoDB.Positron
, nightmode = providers$CartoDB.DarkMatterNoLabels
, imagery = providers$Esri.WorldImagery
)
leafletProxy("map", data = mapData()) %>%
clearTiles() %>%
addProviderTiles(tileProvider, options = tileOptions(minZoom = 2, detectRetina = TRUE))
}
showProgressPart(2, "Completed.")
})
})
}
|
/server.R
|
no_license
|
aldredes/developing-data-products
|
R
| false | false | 16,560 |
r
|
\name{rcv}
\alias{rcv}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{
Install (if missing) and load a set of packages
}
\description{
Installs any packages in \code{x} that are not yet available, then loads them all.
}
\usage{
rcv(x)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{x}{
A character vector of package names
}
}
\details{
}
\value{
%% ~Describe the value returned
%% If it is a LIST, use
%% \item{comp1 }{Description of 'comp1'}
%% \item{comp2 }{Description of 'comp2'}
%% ...
}
\references{
%% ~put references to the literature/web site here ~
}
\author{
Hyukjun Cho
}
\note{
%% ~~further notes~~
}
%% ~Make other sections like Warning with \section{Warning }{....} ~
\seealso{
%% ~~objects to See Also as \code{\link{help}}, ~~~
}
\examples{
## The function is currently defined as
function (x)
{
for (i in x) {
if (!is.element(i, .packages(all.available = TRUE))) {
install.packages(i)
}
library(i, character.only = TRUE)
}
}
rcv(c("dplyr"))
}
% Add one or more standard keywords, see file 'KEYWORDS' in the
% R documentation directory.
\keyword{ library }
|
/man/rcv.Rd
|
no_license
|
jotender/func
|
R
| false | false | 1,077 |
rd
|
#######################################
###This does the spatial plotting###
#######################################
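###The getMap button fetches a basemap centred on the median site coordinates;
###the `<<-` assignment makes siteMap visible to the render functions below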
observeEvent(input$getMap, {
siteMap <<- get_map(location = c(lon = as.numeric(median(qw.data$PlotTable$DEC_LONG_VA[!duplicated(qw.data$PlotTable$DEC_LONG_VA)],na.rm=TRUE)),
lat = as.numeric(median(qw.data$PlotTable$DEC_LAT_VA[!duplicated(qw.data$PlotTable$DEC_LAT_VA)],na.rm=TRUE))),
zoom = "auto",
maptype = "terrain",
scale = "auto")
})
output$qwmapPlot <- renderPlot({
qwmapPlot(qw.data = qw.data,
map = siteMap,
site.selection = as.character(input$siteSel_map),
              plotparm = as.character(input$parmSel_map)
)
})
output$qwmapPlot_zoom <- renderPlot({
qwmapPlot(qw.data = qw.data,
map=siteMap,
site.selection = as.character(input$siteSel_map),
              plotparm = as.character(input$parmSel_map)
) +
      ###This resets the axes to the zoomed area; must specify origin because brushedPoints returns time in seconds from the origin, not the POSIXct "yyyy-mm-dd" format
coord_cartesian(xlim = ranges$x, ylim = ranges$y)
})
#########################################
###This does the plotting interactions###
#########################################
###These are the values to subset the data by for dataTable output
dataSelections <- reactiveValues(siteSel = NULL, parmSel = NULL)
##################################################
###CHANGE these to the respective sidebar element
observe({
dataSelections$siteSel <- input$siteSel_map
dataSelections$parmSel <- input$parmSel_map
})
##################################################
##################################################
###CHANGE these to the respective plot variables
xvar_map <- "DEC_LONG_VA"
yvar_map <- "DEC_LAT_VA"
##################################################
###This sets the ranges variables for brushing
ranges <- reactiveValues(x = NULL, y = NULL)
observe({
brush <- input$plot_brush
if (!is.null(brush)) {
ranges$x <- c(brush$xmin, brush$xmax)
ranges$y <- c(brush$ymin, brush$ymax)
} else {
ranges$x <- NULL
ranges$y <- NULL
}
})
###This outputs the data tables for clicked and brushed points
output$map_clickinfo <- DT::renderDataTable({
DT::datatable(nearPoints(df=subset(qw.data$PlotTable,SITE_NO %in% dataSelections$siteSel & PARM_CD %in% dataSelections$parmSel & MEDIUM_CD %in% c("OAQ","OA")),
coordinfo = input$plot_click,
xvar=xvar_map,
yvar=yvar_map),
options=list(scrollX=TRUE)
)
})
output$map_brushinfo <- DT::renderDataTable({
DT::datatable(brushedPoints(df=subset(qw.data$PlotTable,SITE_NO %in% dataSelections$siteSel & PARM_CD %in% dataSelections$parmSel & MEDIUM_CD %in% c("OAQ","OA")),
brush=input$plot_brush,
xvar=xvar_map,
yvar=yvar_map),
options=list(scrollX=TRUE)
)
})
|
/inst/shiny/WQReviewGUI/server_map.R
|
no_license
|
dcalhoun-usgs/WQ-Review
|
R
| false | false | 3,447 |
r
|
# 4.3.2017 Mirva Turkia
# mirva.turkia@helsinki.fi
# Introduction to Open Data Science
# This is the script file for the data wrangling part of my final assignment
# Here is the information of the data I will use:
# Konu, Anne (University of Tampere): SCHOOL WELL-BEING PROFILE 2015-2016: LOWER SECONDARY SCHOOL, GRADES 7-9 [electronic data]. Version 1.0 (2016-07-18). The Finnish Social Science Data Archive [distributor]. http://urn.fi/urn:nbn:fi:fsd:T-FSD3117
getwd()
setwd("/Users/mirva/IODS-final/Data")
# At first I installed package "memisc" and then I´ll read the SPSS data into R
library(memisc); library(dplyr)
data <- as.data.set(spss.portable.file('daF3117.por'))
data <- as.data.frame(data)
# As you can see the data is very large; it has 91 variables and 9820 observations
data
str(data)
dim(data)
# I will use only the following 27 variables:
# [Q1] Gender
# [Q6_7] Koulun säännöt ovat järkeviä (The rules of the school are reasonable)
# [Q6_8] Koulun rangaistukset ovat oikeudenmukaisia (The punishments of the school are fair.)
# [Q7_9] Opettajat kohtelevat meitä oppilaita oikeudenmukaisesti. (The teachers treat us with justice.)
# [Q7_10] Opettajien kanssa on helppo tulla toimeen. (It is easy to get along with the teachers.)
# [Q7_11] Useimmat opettajat ovat kiinnostuneita siitä, mitä minulle kuuluu. (Most of the teachers are interested in how I feel.)
# [Q7_12] Useimmat opettajat ovat ystävällisiä. (Most of the teachers are friendly.)
# [Q10_1] Vanhempani arvostavat koulutyötäni. (My parents appreciate my schoolwork.)
# [Q10_2] Vanhempani kannustavat minua menestymään koulussa. (My parents encourage me to do well at school.)
# [Q10_3] Tarvittaessa vanhempani auttavat koulutehtävissä. (My parents help me with my homework if necessary.)
# [Q10_4] Tarvittaessa vanhempani auttavat kouluun liittyvissä ongelmissa. (My parents help me with problems related to school if necessary.)
# [Q10_5] Vanhempani ovat tarvittaessa halukkaita tulemaan kouluun keskustelemaan opettajan kanssa. (My parents are willing to come to the school to talk with a teacher if necessary.)
# [Q11_1] Minun työtäni arvostetaan koulussa. (My work is appreciated at school.)
# [Q11_2] Minua pidetään koulussa henkilönä, jolla on merkitystä. (At school I´m regarded as a person who has a meaning.)
# [Q11_3] Opettajat rohkaisevat minua ilmaisemaan mielipiteeni. (The teachers encourage me to tell my opinion.)
# [Q11_15] Saan apua opettajalta, jos tarvitsen sitä. (A teacher helps me if I need it.)
# [Q11_16] Saan tukiopetusta, jos tarvitsen sitä. (I receive remediation if I need it.)
# [Q11_17] Saan erityisopetusta, jos tarvitsen sitä. (I receive special education if I need it.)
# [Q11_18] Saan ohjausta opiskeluuni, jos tarvitsen sitä. (I receive guidance if I need it.)
# [Q11_19] Opettajat kannustavat minua opiskelussa. (The teachers encourage me with studying.)
# [Q11_20] Saan kiitosta, jos olen suoriutunut hyvin tehtävissäni. (I receive acknowledgement if I do well with my tasks.)
# Onko sinulla tämän lukukauden aikana ollut jotakin seuraavista oireista tai sairauksista? Kuinka usein?: (Have you had some of the following symptoms or illnesses during this semester? How often?)
# [Q13_4] Jännittyneisyyttä tai hermostuneisuutta. (Tension or nervousness.)
# [Q13_5] Ärtyneisyyttä tai kiukunpurkauksia. (Irritability or outbursts of anger.)
# [Q13_6] Vaikeuksia päästä uneen tai heräilemistä öisin. (Problems with falling asleep or waking up at night time.)
# [Q13_8] Väsymystä tai heikotusta. (Tiredness or weakness.)
# [Q13_9] Alakuloisuutta. (Depression.)
# [Q13_10] Pelkoa. (Fear.)
# I will check how the variable names are coded
variable.names(data)
# Now that I know the exact names of the variables I will create a new dataset of the ones I'm interested in
keep <- c("q1", "q6_7", "q6_8", "q7_9", "q7_10", "q7_11", "q7_12", "q10_1", "q10_2", "q10_3", "q10_4", "q10_5", "q11_1", "q11_2", "q11_3", "q11_15", "q11_16", "q11_17", "q11_18", "q11_19", "q11_20", "q13_4", "q13_5", "q13_6", "q13_8", "q13_9", "q13_10")
data <- select(data, one_of(keep))
# Now there are 27 variables and 9820 observations
dim(data)
# I will remove all rows with missing values (since I don´t know enough of imputation yet)
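# (The next two lines only inspect completeness interactively and are not assigned;
# the actual row filtering happens in the filter() call below.)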
complete.cases(data)
data.frame(data[-1], comp = complete.cases(data))
Data <- filter(data, complete.cases(data))
# Now there are 8928 observations of 27 variables
dim(Data)
str(Data)
# Now I will change the level names of the Likert-scale questions
levels(Data$q6_7) <- c('1', '2', '3', '4', '5')
levels(Data$q6_8) <- c('1', '2', '3', '4', '5')
levels(Data$q7_9) <- c('1', '2', '3', '4', '5')
levels(Data$q7_10) <- c('1', '2', '3', '4', '5')
levels(Data$q7_11) <- c('1', '2', '3', '4', '5')
levels(Data$q7_12) <- c('1', '2', '3', '4', '5')
levels(Data$q10_1) <- c('1', '2', '3', '4', '5')
levels(Data$q10_2) <- c('1', '2', '3', '4', '5')
levels(Data$q10_3) <- c('1', '2', '3', '4', '5')
levels(Data$q10_4) <- c('1', '2', '3', '4', '5')
levels(Data$q10_5) <- c('1', '2', '3', '4', '5')
levels(Data$q11_1) <- c('1', '2', '3', '4', '5')
levels(Data$q11_2) <- c('1', '2', '3', '4', '5')
levels(Data$q11_3) <- c('1', '2', '3', '4', '5')
levels(Data$q11_15) <- c('1', '2', '3', '4', '5')
levels(Data$q11_16) <- c('1', '2', '3', '4', '5')
levels(Data$q11_17) <- c('1', '2', '3', '4', '5')
levels(Data$q11_18) <- c('1', '2', '3', '4', '5')
levels(Data$q11_19) <- c('1', '2', '3', '4', '5')
levels(Data$q11_20) <- c('1', '2', '3', '4', '5')
# I will also change the level names of gender variable
levels(Data$q1) <- c('F', 'M')
# ..and level names of the last questions (1=daily, 2=weekly, 3=monthly, 4=rarely and 5=none)
levels(Data$q13_4) <- c('1', '2', '3', '4', '5')
levels(Data$q13_5) <- c('1', '2', '3', '4', '5')
levels(Data$q13_6) <- c('1', '2', '3', '4', '5')
levels(Data$q13_8) <- c('1', '2', '3', '4', '5')
levels(Data$q13_9) <- c('1', '2', '3', '4', '5')
levels(Data$q13_10) <- c('1', '2', '3', '4', '5')
# And after that I will check everything is ok with level names
summary(Data)
# I choose to treat Likert scale as numeric
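# Note: as.numeric() on a factor returns the underlying level codes (here 1-5),
# which match the relabelled '1'-'5' levels above, so this conversion is safe.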
Data$q6_7 <- as.numeric(Data$q6_7)
Data$q6_8 <- as.numeric(Data$q6_8)
Data$q7_9 <- as.numeric(Data$q7_9)
Data$q7_10 <- as.numeric(Data$q7_10)
Data$q7_11 <- as.numeric(Data$q7_11)
Data$q7_12 <- as.numeric(Data$q7_12)
Data$q10_1 <- as.numeric(Data$q10_1)
Data$q10_2 <- as.numeric(Data$q10_2)
Data$q10_3 <- as.numeric(Data$q10_3)
Data$q10_4 <- as.numeric(Data$q10_4)
Data$q10_5 <- as.numeric(Data$q10_5)
Data$q11_1 <- as.numeric(Data$q11_1)
Data$q11_2 <- as.numeric(Data$q11_2)
Data$q11_3 <- as.numeric(Data$q11_3)
Data$q11_15 <- as.numeric(Data$q11_15)
Data$q11_16 <- as.numeric(Data$q11_16)
Data$q11_17 <- as.numeric(Data$q11_17)
Data$q11_18 <- as.numeric(Data$q11_18)
Data$q11_19 <- as.numeric(Data$q11_19)
Data$q11_20 <- as.numeric(Data$q11_20)
# I will also change questions about symptoms to numeric for logical columns
Data$q13_4 <- as.numeric(Data$q13_4)
Data$q13_5 <- as.numeric(Data$q13_5)
Data$q13_6 <- as.numeric(Data$q13_6)
Data$q13_8 <- as.numeric(Data$q13_8)
Data$q13_9 <- as.numeric(Data$q13_9)
Data$q13_10 <- as.numeric(Data$q13_10)
str(Data)
# I will create new logical columns which are TRUE for symptoms which are daily or weekly
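# (A value <= 2 corresponds to 'daily' (1) or 'weekly' (2) under the level coding noted above.)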
Data <- mutate(Data, q13_4often = q13_4 <= 2)
Data <- mutate(Data, q13_5often = q13_5 <= 2)
Data <- mutate(Data, q13_6often = q13_6 <= 2)
Data <- mutate(Data, q13_8often = q13_8 <= 2)
Data <- mutate(Data, q13_9often = q13_9 <= 2)
Data <- mutate(Data, q13_10often = q13_10 <= 2)
str(Data)
dim(Data)
# After all these changes my data is ready and it has 8928 observations and 33 variables
write.csv(Data, file = "school")
|
/Data/final_data_wrangling.R
|
no_license
|
miuva/IODS-final
|
R
| false | false | 7,689 |
r
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/class_pipeline.R
\name{as_pipeline}
\alias{as_pipeline}
\title{Convert to a pipeline object.}
\usage{
as_pipeline(x)
}
\arguments{
\item{x}{A list of target objects or a pipeline object.}
}
\value{
An object of class \code{"tar_pipeline"}.
}
\description{
Not a user-side function. Do not invoke directly.
}
\keyword{internal}
|
/man/as_pipeline.Rd
|
permissive
|
krlmlr/targets
|
R
| false | true | 405 |
rd
|
##' arrange ggplot2, lattice, and grobs on a page
##'
##' @aliases grid.arrange arrangeGrob latticeGrob drawDetails.lattice print.arrange
##' @title arrangeGrob
##' @param ... plots of class ggplot2, trellis, or grobs, and valid arguments to grid.layout
##' @param main string, or grob (requires a well-defined height, see example)
##' @param sub string, or grob (requires a well-defined height, see example)
##' @param legend string, or grob (requires a well-defined width, see example)
##' @param left string, or grob (requires a well-defined width, see example)
##' @param as.table logical: bottom-left to top-right or top-left to bottom-right
##' @param clip logical: clip every object to its viewport
##' @return return a frame grob
##' @export
##'
##' @examples
##' \dontrun{
##' require(ggplot2)
##' plots = lapply(1:5, function(.x) qplot(1:10,rnorm(10), main=paste("plot",.x)))
##' require(gridExtra)
##' do.call(grid.arrange, plots)
##' require(lattice)
##' grid.arrange(qplot(1:10), xyplot(1:10~1:10),
##' tableGrob(head(iris)), nrow=2, as.table=TRUE, main="test main",
##' left = rectGrob(width=unit(1,"line")),
##' sub=textGrob("test sub", gp=gpar(font=2)))
##' }
arrangeGrob <- function(..., as.table=FALSE, clip=TRUE,
main=NULL, sub=NULL, left=NULL,
legend=NULL) {
if(is.null(main)) main <- nullGrob()
if(is.null(sub)) sub <- nullGrob()
if(is.null(legend)) legend <- nullGrob()
if(is.null(left)) left <- nullGrob()
if(is.character(main)) main <- textGrob(main)
if(is.character(sub)) sub <- textGrob(sub)
if(is.character(legend)) legend <- textGrob(legend, rot=-90)
if(is.character(left)) left <- textGrob(left, rot=90)
arrange.class <- "arrange" # grob class
dots <- list(...)
params <- c("nrow", "ncol", "widths", "heights",
"default.units", "respect", "just" )
## names(formals(grid.layout))
layout.call <- intersect(names(dots), params)
params.layout <- dots[layout.call]
if(is.null(names(dots)))
not.grobnames <- FALSE else
not.grobnames <- names(dots) %in% layout.call
grobs <- dots[! not.grobnames ]
n <- length(grobs)
nm <- n2mfrow(n)
if(is.null(params.layout$nrow) & is.null(params.layout$ncol))
{
params.layout$nrow = nm[1]
params.layout$ncol = nm[2]
}
if(is.null(params.layout$nrow))
params.layout$nrow = ceiling(n/params.layout$ncol)
if(is.null(params.layout$ncol))
params.layout$ncol = ceiling(n/params.layout$nrow)
nrow <- params.layout$nrow
ncol <- params.layout$ncol
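  ## e.g. with 5 grobs and no nrow/ncol supplied, n2mfrow(5) gives c(3, 2),
  ## so the page gets a 3-row by 2-column layout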
lay <- do.call(grid.layout, params.layout)
fg <- frameGrob(layout=lay)
## if a ggplot is present, make the grob derive from the ggplot class
classes <- lapply(grobs, class)
inherit.ggplot <- any("ggplot" %in% unlist(classes))
if(inherit.ggplot) arrange.class <- c(arrange.class, "ggplot")
ii.p <- 1
for(ii.row in seq(1, nrow)){
ii.table.row <- ii.row
if(as.table) {ii.table.row <- nrow - ii.table.row + 1}
for(ii.col in seq(1, ncol)){
ii.table <- ii.p
if(ii.p > n) break
## select current grob
cl <- class(grobs[[ii.table]])
ct <- if("grob" %in% unlist(cl)) "grob" else
if("ggplot" %in% unlist(cl)) "ggplot" else cl
g.tmp <- switch(ct,
ggplot = ggplotGrob(grobs[[ii.table]]),
trellis = latticeGrob(grobs[[ii.table]]),
grob = grobs[[ii.table]],
stop("input must be grobs!"))
if(clip) # gTree seems like overkill here ?
g.tmp <- gTree(children=gList(clipGrob(), g.tmp))
fg <- placeGrob(fg, g.tmp, row=ii.table.row, col=ii.col)
ii.p <- ii.p + 1
}
}
## optional annotations in a frame grob
wl <- unit(1, "grobwidth", left)
wr <- unit(1, "grobwidth", legend)
hb <- unit(1, "grobheight", sub)
ht <- unit(1, "grobheight", main)
annotate.lay <- grid.layout(3, 3,
widths=unit.c(wl, unit(1, "npc")-wl-wr, wr),
heights=unit.c(ht, unit(1, "npc")-hb-ht, hb))
af <- frameGrob(layout=annotate.lay)
af <- placeGrob(af, fg, row=2, col=2)
af <- placeGrob(af, main, row=1, col=2)
af <- placeGrob(af, sub, row=3, col=2)
af <- placeGrob(af, left, row=2, col=1)
af <- placeGrob(af, legend, row=2, col=3)
invisible(gTree(children=gList(af), cl=arrange.class))
}
##' @export
grid.arrange <- function(..., as.table=FALSE, clip=TRUE,
main=NULL, sub=NULL, left=NULL, legend=NULL,
newpage=TRUE){
if(newpage) grid.newpage()
g <- arrangeGrob(...,as.table=as.table, clip=clip,
main=main, sub=sub, left=left, legend=legend)
grid.draw(g)
invisible(g)
}
##' @export
latticeGrob <- function(p, ...){
grob(p=p, ..., cl="lattice")
}
##' @export
drawDetails.lattice <- function(x, recording=FALSE){
lattice:::plot.trellis(x$p, newpage=FALSE)
}
##' @export
print.arrange <- function(x, newpage = is.null(vp), vp = NULL, ...) {
if(newpage) grid.newpage()
grid.draw(editGrob(x, vp=vp))
}
##' Interface to arrangeGrob that can dispatch on multiple pages
##'
##' If the layout specifies both nrow and ncol, the list of grobs can be split
##' across multiple pages. Interactive devices open new windows, whilst non-interactive
##' devices such as pdf call grid.newpage() between the drawings.
##' @title marrangeGrob
##' @aliases marrangeGrob print.arrangelist
##' @param ... grobs
##' @param as.table see \link{arrangeGrob}
##' @param clip see \link{arrangeGrob}
##' @param top see \link{arrangeGrob}
##' @param bottom see \link{arrangeGrob}
##' @param left see \link{arrangeGrob}
##' @param right see \link{arrangeGrob}
##' @return a list of class arrangelist
##' @author baptiste Auguie
##' @export
##' @family user
##' @examples
##' \dontrun{
##' require(ggplot2)
##' pl <- lapply(1:11, function(.x) qplot(1:10,rnorm(10), main=paste("plot",.x)))
##' ml <- do.call(marrangeGrob, c(pl, list(nrow=2, ncol=2)))
##' ## interactive use; open new devices
##' ml
##' ## non-interactive use, multipage pdf
##' ggsave("multipage.pdf", ml)
##' }
marrangeGrob <- function(..., as.table=FALSE, clip=TRUE,
top=quote(paste("page", g, "of", pages)),
bottom=NULL, left=NULL, right=NULL){
arrange.class <- "arrange" # grob class
dots <- list(...)
params <- c("nrow", "ncol", "widths", "heights",
"default.units", "respect", "just" )
## names(formals(grid.layout))
layout.call <- intersect(names(dots), params)
params.layout <- dots[layout.call]
if(is.null(names(dots)))
not.grobnames <- FALSE else
not.grobnames <- names(dots) %in% layout.call
grobs <- dots[! not.grobnames ]
n <- length(grobs)
nm <- n2mfrow(n)
if(is.null(params.layout$nrow) & is.null(params.layout$ncol))
{
params.layout$nrow = nm[1]
params.layout$ncol = nm[2]
}
if(is.null(params.layout$nrow))
params.layout$nrow = ceiling(n/params.layout$ncol)
if(is.null(params.layout$ncol))
params.layout$ncol = ceiling(n/params.layout$nrow)
nrow <- params.layout$nrow
ncol <- params.layout$ncol
## if nrow and ncol were given, may need multiple pages
nlay <- with(params.layout, nrow*ncol)
## add one page if division is not complete
pages <- n %/% nlay + as.logical(n %% nlay)
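  ## e.g. 11 grobs in a 2x2 layout: nlay = 4, so 11 %/% 4 = 2 full pages plus one partial page -> pages = 3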
groups <- split(seq_along(grobs),
gl(pages, nlay, n))
pl <-
lapply(names(groups), function(g)
{
top <- eval(top) ## lazy evaluation
do.call(arrangeGrob, c(grobs[groups[[g]]], params.layout,
list(as.table=as.table, clip=clip,
main=top, sub=bottom, left=left, legend=right)))
})
class(pl) <- c("arrangelist", "ggplot", class(pl))
pl
}
##' @export
print.arrangelist = function(x, ...) lapply(x, function(.x) {
if(dev.interactive()) dev.new() else grid.newpage()
grid.draw(.x)
}, ...)
|
/R/arrange.r
|
no_license
|
davike/gridextra
|
R
| false | false | 8,080 |
r
|
##' arrange ggplot2, lattice, and grobs on a page
##'
##' @aliases grid.arrange arrangeGrob latticeGrob drawDetails.lattice print.arrange
##' @title arrangeGrob
##' @param ... plots of class ggplot2, trellis, or grobs, and valid arguments to grid.layout
##' @param main string, or grob (requires a well-defined height, see example)
##' @param sub string, or grob (requires a well-defined height, see example)
##' @param legend string, or grob (requires a well-defined width, see example)
##' @param left string, or grob (requires a well-defined width, see example)
##' @param as.table logical: bottom-left to top-right or top-left to bottom-right
##' @param clip logical: clip every object to its viewport
##' @return return a frame grob
##' @export
##'
##' @examples
##' \dontrun{
##' require(ggplot2)
##' plots = lapply(1:5, function(.x) qplot(1:10,rnorm(10), main=paste("plot",.x)))
##' require(gridExtra)
##' do.call(grid.arrange, plots)
##' require(lattice)
##' grid.arrange(qplot(1:10), xyplot(1:10~1:10),
##' tableGrob(head(iris)), nrow=2, as.table=TRUE, main="test main",
##' left = rectGrob(width=unit(1,"line")),
##' sub=textGrob("test sub", gp=gpar(font=2)))
##' }
arrangeGrob <- function(..., as.table=FALSE, clip=TRUE,
main=NULL, sub=NULL, left=NULL,
legend=NULL) {
if(is.null(main)) main <- nullGrob()
if(is.null(sub)) sub <- nullGrob()
if(is.null(legend)) legend <- nullGrob()
if(is.null(left)) left <- nullGrob()
if(is.character(main)) main <- textGrob(main)
if(is.character(sub)) sub <- textGrob(sub)
if(is.character(legend)) legend <- textGrob(legend, rot=-90)
if(is.character(left)) left <- textGrob(left, rot=90)
arrange.class <- "arrange" # grob class
dots <- list(...)
params <- c("nrow", "ncol", "widths", "heights",
"default.units", "respect", "just" )
## names(formals(grid.layout))
layout.call <- intersect(names(dots), params)
params.layout <- dots[layout.call]
if(is.null(names(dots)))
not.grobnames <- FALSE else
not.grobnames <- names(dots) %in% layout.call
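  ## anything left in ... after removing layout parameters is treated as a grob to place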
grobs <- dots[! not.grobnames ]
n <- length(grobs)
nm <- n2mfrow(n)
if(is.null(params.layout$nrow) & is.null(params.layout$ncol))
{
params.layout$nrow = nm[1]
params.layout$ncol = nm[2]
}
if(is.null(params.layout$nrow))
params.layout$nrow = ceiling(n/params.layout$ncol)
if(is.null(params.layout$ncol))
params.layout$ncol = ceiling(n/params.layout$nrow)
nrow <- params.layout$nrow
ncol <- params.layout$ncol
lay <- do.call(grid.layout, params.layout)
fg <- frameGrob(layout=lay)
## if a ggplot is present, make the grob derive from the ggplot class
classes <- lapply(grobs, class)
inherit.ggplot <- any("ggplot" %in% unlist(classes))
if(inherit.ggplot) arrange.class <- c(arrange.class, "ggplot")
ii.p <- 1
for(ii.row in seq(1, nrow)){
ii.table.row <- ii.row
if(as.table) {ii.table.row <- nrow - ii.table.row + 1}
for(ii.col in seq(1, ncol)){
ii.table <- ii.p
if(ii.p > n) break
## select current grob
cl <- class(grobs[[ii.table]])
ct <- if("grob" %in% unlist(cl)) "grob" else
if("ggplot" %in% unlist(cl)) "ggplot" else cl
g.tmp <- switch(ct,
ggplot = ggplotGrob(grobs[[ii.table]]),
trellis = latticeGrob(grobs[[ii.table]]),
grob = grobs[[ii.table]],
stop("input must be grobs!"))
if(clip) # gTree seems like overkill here ?
g.tmp <- gTree(children=gList(clipGrob(), g.tmp))
fg <- placeGrob(fg, g.tmp, row=ii.table.row, col=ii.col)
ii.p <- ii.p + 1
}
}
## optional annotations in a frame grob
wl <- unit(1, "grobwidth", left)
wr <- unit(1, "grobwidth", legend)
hb <- unit(1, "grobheight", sub)
ht <- unit(1, "grobheight", main)
annotate.lay <- grid.layout(3, 3,
widths=unit.c(wl, unit(1, "npc")-wl-wr, wr),
heights=unit.c(ht, unit(1, "npc")-hb-ht, hb))
af <- frameGrob(layout=annotate.lay)
af <- placeGrob(af, fg, row=2, col=2)
af <- placeGrob(af, main, row=1, col=2)
af <- placeGrob(af, sub, row=3, col=2)
af <- placeGrob(af, left, row=2, col=1)
af <- placeGrob(af, legend, row=2, col=3)
invisible(gTree(children=gList(af), cl=arrange.class))
}
##' @export
grid.arrange <- function(..., as.table=FALSE, clip=TRUE,
main=NULL, sub=NULL, left=NULL, legend=NULL,
newpage=TRUE){
if(newpage) grid.newpage()
g <- arrangeGrob(...,as.table=as.table, clip=clip,
main=main, sub=sub, left=left, legend=legend)
grid.draw(g)
invisible(g)
}
##' @export
latticeGrob <- function(p, ...){
grob(p=p, ..., cl="lattice")
}
##' @export
drawDetails.lattice <- function(x, recording=FALSE){
lattice:::plot.trellis(x$p, newpage=FALSE)
}
##' @export
print.arrange <- function(x, newpage = is.null(vp), vp = NULL, ...) {
if(newpage) grid.newpage()
grid.draw(editGrob(x, vp=vp))
}
##' Interface to arrangeGrob that can dispatch on multiple pages
##'
##' If the layout specifies both nrow and ncol, the list of grobs can be split
##' across multiple pages. Interactive devices open new windows, whilst non-interactive
##' devices such as pdf call grid.newpage() between the drawings.
##' @title marrangeGrob
##' @aliases marrangeGrob print.arrangelist
##' @param ... grobs
##' @param as.table see \link{arrangeGrob}
##' @param clip see \link{arrangeGrob}
##' @param top see \link{arrangeGrob}
##' @param bottom see \link{arrangeGrob}
##' @param left see \link{arrangeGrob}
##' @param right see \link{arrangeGrob}
##' @return a list of class arrangelist
##' @author baptiste Auguie
##' @export
##' @family user
##' @examples
##' \dontrun{
##' require(ggplot2)
##' pl <- lapply(1:11, function(.x) qplot(1:10,rnorm(10), main=paste("plot",.x)))
##' ml <- do.call(marrangeGrob, c(pl, list(nrow=2, ncol=2)))
##' ## interactive use; open new devices
##' ml
##' ## non-interactive use, multipage pdf
##' ggsave("multipage.pdf", ml)
##' }
marrangeGrob <- function(..., as.table=FALSE, clip=TRUE,
top=quote(paste("page", g, "of", pages)),
bottom=NULL, left=NULL, right=NULL){
arrange.class <- "arrange" # grob class
dots <- list(...)
params <- c("nrow", "ncol", "widths", "heights",
"default.units", "respect", "just" )
## names(formals(grid.layout))
layout.call <- intersect(names(dots), params)
params.layout <- dots[layout.call]
if(is.null(names(dots)))
not.grobnames <- FALSE else
not.grobnames <- names(dots) %in% layout.call
grobs <- dots[! not.grobnames ]
n <- length(grobs)
nm <- n2mfrow(n)
if(is.null(params.layout$nrow) & is.null(params.layout$ncol))
{
params.layout$nrow = nm[1]
params.layout$ncol = nm[2]
}
if(is.null(params.layout$nrow))
params.layout$nrow = ceiling(n/params.layout$ncol)
if(is.null(params.layout$ncol))
params.layout$ncol = ceiling(n/params.layout$nrow)
nrow <- params.layout$nrow
ncol <- params.layout$ncol
## if nrow and ncol were given, may need multiple pages
nlay <- with(params.layout, nrow*ncol)
## add one page if division is not complete
pages <- n %/% nlay + as.logical(n %% nlay)
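  ## e.g. 11 grobs in a 2x2 layout: nlay = 4, so 11 %/% 4 = 2 full pages plus one partial page -> pages = 3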
groups <- split(seq_along(grobs),
gl(pages, nlay, n))
pl <-
lapply(names(groups), function(g)
{
top <- eval(top) ## lazy evaluation
do.call(arrangeGrob, c(grobs[groups[[g]]], params.layout,
list(as.table=as.table, clip=clip,
main=top, sub=bottom, left=left, legend=right)))
})
class(pl) <- c("arrangelist", "ggplot", class(pl))
pl
}
##' @export
print.arrangelist = function(x, ...) lapply(x, function(.x) {
if(dev.interactive()) dev.new() else grid.newpage()
grid.draw(.x)
}, ...)
|
context("test-g01-constraints")
TOL <- 1e-6
a <- Variable(name = "a")
b <- Variable(name = "b")
x <- Variable(2, name = "x")
y <- Variable(3, name = "y")
z <- Variable(2, name = "z")
A <- Variable(2, 2, name = "A")
B <- Variable(2, 2, name = "B")
C <- Variable(3, 2, name = "C")
SOC <- CVXR:::SOC
save_value <- CVXR:::save_value
test_that("test the EqConstraint class", {
constr <- x == z
expect_equal(name(constr), "x == z")
expect_equal(dim(constr), c(2,1))
# Test value and dual_value
expect_true(is.na(dual_value(constr)))
expect_error(constr_value(constr))
x <- save_value(x, 2)
z <- save_value(z, 2)
constr <- x == z
expect_true(constr_value(constr))
x <- save_value(x, 3)
constr <- x == z
expect_false(constr_value(constr))
value(x) <- c(2,1)
value(z) <- c(2,2)
constr <- x == z
expect_false(constr_value(constr))
expect_equal(violation(constr), matrix(c(0,1)), tolerance = TOL)
expect_equal(residual(constr), matrix(c(0,1)), tolerance = TOL)
value(z) <- c(2,1)
constr <- x == z
expect_true(constr_value(constr))
expect_equal(violation(constr), matrix(c(0,0)))
expect_equal(residual(constr), matrix(c(0,0)))
expect_error(x == y)
})
test_that("test the LeqConstraint class", {
constr <- x <= z
expect_equal(name(constr), "x <= z")
expect_equal(dim(constr), c(2,1))
# Test value and dual_value
expect_true(is.na(dual_value(constr)))
expect_error(constr_value(constr))
x <- save_value(x, 1)
z <- save_value(z, 2)
constr <- x <= z
expect_true(constr_value(constr))
x <- save_value(x, 3)
constr <- x <= z
expect_false(constr_value(constr))
value(x) <- c(2,1)
value(z) <- c(2,0)
constr <- x <= z
expect_false(constr_value(constr))
expect_equal(violation(constr), matrix(c(0,1)), tolerance = TOL)
expect_equal(residual(constr), matrix(c(0,1)), tolerance = TOL)
value(z) <- c(2,2)
constr <- x <= z
expect_true(constr_value(constr))
expect_equal(violation(constr), matrix(c(0,0)), tolerance = TOL)
expect_equal(residual(constr), matrix(c(0,0)), tolerance = TOL)
expect_error(x <= y)
})
test_that("Test the PSD constraint %>>%", {
constr <- A %>>% B
expect_equal(name(constr), "A + -B >> 0")
expect_equal(dim(constr), c(2,2))
# Test value and dual_value
expect_true(is.na(dual_value(constr)))
expect_error(constr_value(constr))
A <- save_value(A, rbind(c(2,-1), c(1,2)))
B <- save_value(B, rbind(c(1,0), c(0,1)))
constr <- A %>>% B
expect_true(constr_value(constr))
expect_equal(violation(constr), 0, tolerance = TOL)
expect_equal(residual(constr), 0, tolerance = TOL)
B <- save_value(B, rbind(c(3,0), c(0,3)))
constr <- A %>>% B
expect_false(constr_value(constr))
expect_equal(violation(constr), 1, tolerance = TOL)
expect_equal(residual(constr), 1, tolerance = TOL)
expect_error(x %>>% 0, "Non-square matrix in positive definite constraint.")
})
test_that("Test the PSD constraint %<<%", {
constr <- A %<<% B
expect_equal(name(constr), "B + -A >> 0")
expect_equal(dim(constr), c(2,2))
# Test value and dual_value
expect_true(is.na(dual_value(constr)))
expect_error(constr_value(constr))
B <- save_value(B, rbind(c(2,-1), c(1,2)))
A <- save_value(A, rbind(c(1,0), c(0,1)))
constr <- A %<<% B
expect_true(constr_value(constr))
A <- save_value(A, rbind(c(3,0), c(0,3)))
constr <- A %<<% B
expect_false(constr_value(constr))
expect_error(x %<<% 0, "Non-square matrix in positive definite constraint.")
})
test_that("test the >= operator", {
constr <- z >= x
expect_equal(name(constr), "x <= z")
expect_equal(dim(constr), c(2,1))
expect_error(y >= x)
})
test_that("test the SOC class", {
exp <- x + z
scalar_exp <- a + b
constr <- SOC(scalar_exp, exp)
expect_equal(size(constr), 3)
})
|
/tests/testthat/test-g01-constraints.R
|
permissive
|
bedantaguru/CVXR
|
R
| false | false | 3,811 |
r
|
context("test-g01-constraints")
TOL <- 1e-6
a <- Variable(name = "a")
b <- Variable(name = "b")
x <- Variable(2, name = "x")
y <- Variable(3, name = "y")
z <- Variable(2, name = "z")
A <- Variable(2, 2, name = "A")
B <- Variable(2, 2, name = "B")
C <- Variable(3, 2, name = "C")
SOC <- CVXR:::SOC
save_value <- CVXR:::save_value
test_that("test the EqConstraint class", {
constr <- x == z
expect_equal(name(constr), "x == z")
expect_equal(dim(constr), c(2,1))
# Test value and dual_value
expect_true(is.na(dual_value(constr)))
expect_error(constr_value(constr))
x <- save_value(x, 2)
z <- save_value(z, 2)
constr <- x == z
expect_true(constr_value(constr))
x <- save_value(x, 3)
constr <- x == z
expect_false(constr_value(constr))
value(x) <- c(2,1)
value(z) <- c(2,2)
constr <- x == z
expect_false(constr_value(constr))
expect_equal(violation(constr), matrix(c(0,1)), tolerance = TOL)
expect_equal(residual(constr), matrix(c(0,1)), tolerance = TOL)
value(z) <- c(2,1)
constr <- x == z
expect_true(constr_value(constr))
expect_equal(violation(constr), matrix(c(0,0)))
expect_equal(residual(constr), matrix(c(0,0)))
expect_error(x == y)
})
test_that("test the LeqConstraint class", {
constr <- x <= z
expect_equal(name(constr), "x <= z")
expect_equal(dim(constr), c(2,1))
# Test value and dual_value
expect_true(is.na(dual_value(constr)))
expect_error(constr_value(constr))
x <- save_value(x, 1)
z <- save_value(z, 2)
constr <- x <= z
expect_true(constr_value(constr))
x <- save_value(x, 3)
constr <- x <= z
expect_false(constr_value(constr))
value(x) <- c(2,1)
value(z) <- c(2,0)
constr <- x <= z
expect_false(constr_value(constr))
expect_equal(violation(constr), matrix(c(0,1)), tolerance = TOL)
expect_equal(residual(constr), matrix(c(0,1)), tolerance = TOL)
value(z) <- c(2,2)
constr <- x <= z
expect_true(constr_value(constr))
expect_equal(violation(constr), matrix(c(0,0)), tolerance = TOL)
expect_equal(residual(constr), matrix(c(0,0)), tolerance = TOL)
expect_error(x <= y)
})
test_that("Test the PSD constraint %>>%", {
constr <- A %>>% B
expect_equal(name(constr), "A + -B >> 0")
expect_equal(dim(constr), c(2,2))
# Test value and dual_value
expect_true(is.na(dual_value(constr)))
expect_error(constr_value(constr))
A <- save_value(A, rbind(c(2,-1), c(1,2)))
B <- save_value(B, rbind(c(1,0), c(0,1)))
constr <- A %>>% B
expect_true(constr_value(constr))
expect_equal(violation(constr), 0, tolerance = TOL)
expect_equal(residual(constr), 0, tolerance = TOL)
B <- save_value(B, rbind(c(3,0), c(0,3)))
constr <- A %>>% B
expect_false(constr_value(constr))
expect_equal(violation(constr), 1, tolerance = TOL)
expect_equal(residual(constr), 1, tolerance = TOL)
expect_error(x %>>% 0, "Non-square matrix in positive definite constraint.")
})
test_that("Test the PSD constraint %<<%", {
constr <- A %<<% B
expect_equal(name(constr), "B + -A >> 0")
expect_equal(dim(constr), c(2,2))
# Test value and dual_value
expect_true(is.na(dual_value(constr)))
expect_error(constr_value(constr))
B <- save_value(B, rbind(c(2,-1), c(1,2)))
A <- save_value(A, rbind(c(1,0), c(0,1)))
constr <- A %<<% B
expect_true(constr_value(constr))
A <- save_value(A, rbind(c(3,0), c(0,3)))
constr <- A %<<% B
expect_false(constr_value(constr))
expect_error(x %<<% 0, "Non-square matrix in positive definite constraint.")
})
test_that("test the >= operator", {
constr <- z >= x
expect_equal(name(constr), "x <= z")
expect_equal(dim(constr), c(2,1))
expect_error(y >= x)
})
test_that("test the SOC class", {
exp <- x + z
scalar_exp <- a + b
constr <- SOC(scalar_exp, exp)
expect_equal(size(constr), 3)
})
|
% Generated by roxygen2 (4.0.2): do not edit by hand
\name{worms_common}
\alias{worms_common}
\title{Common names from WoRMS ID}
\usage{
worms_common(ids = NULL, opts = NULL, iface = NULL, ...)
}
\arguments{
\item{ids}{(numeric) One or more WoRMS AphiaIDs for a taxon.}
\item{opts}{(character) a named list of elements that are passed to the curlPerform function
which actually invokes the SOAP method. These options control aspects of the HTTP request,
including debugging information that is displayed on the console,
e.g. .opts = list(verbose = TRUE)}
\item{iface}{Interface to WoRMS SOAP API methods. By default we use a previously created object.
If you want to create a new one, use \code{worms_gen_iface}, assign the output to an object,
then pass it into any \code{worms_*} function in the \code{iface} parameter.}
\item{...}{Further args passed on to \code{SSOAP::.SOAP}.}
}
\description{
Common names from WoRMS ID
}
\examples{
\dontrun{
worms_common(ids=1080)
worms_common(ids=22388)
worms_common(ids=123080)
worms_common(ids=160281)
worms_common(ids=c(1080,22388,160281,123080,22388))
}
}
|
/man/worms_common.Rd
|
permissive
|
fmichonneau/taxizesoap
|
R
| false | false | 1,107 |
rd
|
% Generated by roxygen2 (4.0.2): do not edit by hand
\name{worms_common}
\alias{worms_common}
\title{Common names from WoRMS ID}
\usage{
worms_common(ids = NULL, opts = NULL, iface = NULL, ...)
}
\arguments{
\item{ids}{(numeric) One or more WoRMS AphiaIDs for a taxon.}
\item{opts}{(character) a named list of elements that are passed to the curlPerform function
which actually invokes the SOAP method. These options control aspects of the HTTP request,
including debugging information that is displayed on the console,
e.g. .opts = list(verbose = TRUE)}
\item{iface}{Interface to WoRMS SOAP API methods. By default we use a previously created object.
If you want to create a new one, use \code{worms_gen_iface}, assign the output to an object,
then pass it into any \code{worms_*} function in the \code{iface} parameter.}
\item{...}{Further args passed on to \code{SSOAP::.SOAP}.}
}
\description{
Common names from WoRMS ID
}
\examples{
\dontrun{
worms_common(ids=1080)
worms_common(ids=22388)
worms_common(ids=123080)
worms_common(ids=160281)
worms_common(ids=c(1080,22388,160281,123080,22388))
}
}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/arg.R
\name{model_arg}
\alias{model_arg}
\title{Setup model file for running}
\usage{
model_arg(model, examples_dir)
}
\arguments{
\item{model}{A character vector (length 1) specifying the model}
\item{examples_dir}{A character vector (length 1), containing the path to
the Examples directory in the MultiBUGS directory}
}
\value{
The full path to the just-created (as a result of copying) file
}
\description{
Finds the standard model file for the specified model, and copies it
into the current working directory
}
|
/man/model_arg.Rd
|
no_license
|
MultiBUGS/multibugstests
|
R
| false | true | 596 |
rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/arg.R
\name{model_arg}
\alias{model_arg}
\title{Setup model file for running}
\usage{
model_arg(model, examples_dir)
}
\arguments{
\item{model}{A character vector (length 1) specifying the model}
\item{examples_dir}{A character vector (length 1), containing the path to
the Examples directory in the MultiBUGS directory}
}
\value{
The full path to the just-created (as a result of copying) file
}
\description{
Finds the standard model file for the specified model, and copies it
into the current working directory
}
|
library(tidyverse)
library(ggthemes)
library(plotly)
library(gifski)
library(viridis)
options(encoding = "UTF-8")
## defined brand colors
cus_blue <- "#00aae7"
cus_black <- "#323232"
cus_grey <- "#969696"
cus_dblue <- "#002c77"
cus_orange <- "#f55523"
cus_yellow <- "#fdfd2f"
region_data <- read_csv2("data/csv/region_ha_analysis_utf8.csv", locale = locale(encoding = "UTF-8")) %>%
mutate(region = case_when(region == "ÜlemisteJärve" ~ "Ülemistejärve",
TRUE ~ region))
area_plot <- read_csv2("data/csv/area_plot_utf8.csv", locale = locale(encoding = "UTF-8"))
# area_plot <- data.table::fread("data/csv/area_plot_utf8.csv", encoding = "UTF-8")
full_data <- readRDS("data/full_data.RDS") %>%
mutate(qtr_year = as.character(qtr_year),
qtr = str_replace_all(qtr,"[[.]]","-"))
# price_map <- area_plot %>%
# left_join(subset(full_data,qtr_year == "2013-01-01"), by = c("id" = "region"))
# # mutate(id = tolower(id),
# id = str_replace_all(id, "ä","a"),
# id = str_replace_all(id, "ü","u"),
# id = str_replace_all(id, "ö","o"),
# id = str_replace_all(id, "õ","o"))
time_list <- unique(full_data$qtr_year)
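## render one price choropleth per quarter; the PNGs are stitched into an animated gif after the loop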
for (time_item in time_list){
price_map <- area_plot %>%
left_join(subset(full_data,qtr_year == time_item), by = c("id" = "region"))
# %>%
# mutate(tran_p_ha = case_when(is.na(tran_p_ha) == TRUE ~ 0,
# TRUE ~ tran_p_ha))
# mid <- mean(transaction_map$tran_p_ha,na.rm = TRUE)
tln_plot <- ggplot(aes(x = long,
y = lat,
group = id,
fill = em_mean),
data = price_map,
alpha = 0.6) +
geom_polygon(color = "grey40") +
ggtitle(label = time_item)+
# geom_map(aes(x = long,
# y = lat,
# group = id,
# fill = tran_p_ha),
# data = transaction_map)+
labs(fill = "Price per region")+
theme_map()+
coord_fixed()+
theme(legend.position = "top")+
scale_fill_viridis(limits = c(0, 3500),breaks = seq(0,4500,1000))
tln_plot
  ggsave(filename = paste0("output/price/price_",time_item,".png"), plot = tln_plot, dpi = 300)
}
gif_files <- list.files(path = "output/price/", pattern = ".png")
gifski(png_files = paste0("output/price/",gif_files), gif_file = "output/price_map.gif",
delay = 1,
loop = TRUE)
|
/r/04_price_map.R
|
permissive
|
snailwellington/TLN_apt_market
|
R
| false | false | 2,476 |
r
|
library(tidyverse)
library(ggthemes)
library(plotly)
library(gifski)
library(viridis)
options(encoding = "UTF-8")
## defined brand colors
cus_blue <- "#00aae7"
cus_black <- "#323232"
cus_grey <- "#969696"
cus_dblue <- "#002c77"
cus_orange <- "#f55523"
cus_yellow <- "#fdfd2f"
region_data <- read_csv2("data/csv/region_ha_analysis_utf8.csv", locale = locale(encoding = "UTF-8")) %>%
mutate(region = case_when(region == "ÜlemisteJärve" ~ "Ülemistejärve",
TRUE ~ region))
area_plot <- read_csv2("data/csv/area_plot_utf8.csv", locale = locale(encoding = "UTF-8"))
# area_plot <- data.table::fread("data/csv/area_plot_utf8.csv", encoding = "UTF-8")
full_data <- readRDS("data/full_data.RDS") %>%
mutate(qtr_year = as.character(qtr_year),
qtr = str_replace_all(qtr,"[[.]]","-"))
# price_map <- area_plot %>%
# left_join(subset(full_data,qtr_year == "2013-01-01"), by = c("id" = "region"))
# # mutate(id = tolower(id),
# id = str_replace_all(id, "ä","a"),
# id = str_replace_all(id, "ü","u"),
# id = str_replace_all(id, "ö","o"),
# id = str_replace_all(id, "õ","o"))
time_list <- unique(full_data$qtr_year)
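## render one price choropleth per quarter; the PNGs are stitched into an animated gif after the loop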
for (time_item in time_list){
price_map <- area_plot %>%
left_join(subset(full_data,qtr_year == time_item), by = c("id" = "region"))
# %>%
# mutate(tran_p_ha = case_when(is.na(tran_p_ha) == TRUE ~ 0,
# TRUE ~ tran_p_ha))
# mid <- mean(transaction_map$tran_p_ha,na.rm = TRUE)
tln_plot <- ggplot(aes(x = long,
y = lat,
group = id,
fill = em_mean),
data = price_map,
alpha = 0.6) +
geom_polygon(color = "grey40") +
ggtitle(label = time_item)+
# geom_map(aes(x = long,
# y = lat,
# group = id,
# fill = tran_p_ha),
# data = transaction_map)+
labs(fill = "Price per region")+
theme_map()+
coord_fixed()+
theme(legend.position = "top")+
scale_fill_viridis(limits = c(0, 3500),breaks = seq(0,4500,1000))
tln_plot
  ggsave(filename = paste0("output/price/price_",time_item,".png"), plot = tln_plot, dpi = 300)
}
gif_files <- list.files(path = "output/price/", pattern = ".png")
gifski(png_files = paste0("output/price/",gif_files), gif_file = "output/price_map.gif",
delay = 1,
loop = TRUE)
|
require(PMCMRplus)
options("width"=10000)
ARRAY <- c(0.006078,0.034145,0.002997,0.005162,0.012252,0.005584,0.008777,0.014586,0.008631,0.008911,0.009405,0.009275,0.013533,0.006105,0.003567,0.005993,0.006952,0.031407,0.012595,0.006302,0.05367,0.059235,0.060677,0.056981,0.057675,0.053739,0.053429,0.055306,0.054656,0.054805,0.047582,0.064404,0.037039,0.05675,0.061967,0.04885,0.054908,0.059882,0.050609,0.048947,0.024502,0.01944,0.042629,0.011223,0.04047,0.03851,0.044424,0.023336,0.044331,0.015105,0.0327,0.034203,0.036544,0.019355,0.028367,0.011952,0.022492,0.039328,0.013975,0.032278,0.012488,0.011722,0.027706,0.003971,0.02529,0.015127,0.020797,0.013759,0.029244,0.004716,0.019912,0.009094,0.019934,0.012615,0.010876,0.005583,0.007618,0.026212,0.007738,0.023667,0.008898,0.006689,0.011042,0.007156,0.003971,0.009049,0.012544,0.005679,0.00958,0.008678,1.58E-4,0.012325,0.006285,0.014952,0.008417,0.007734,0.010892,0.015973,0.005703,0.007175,0.001585,0.0,9.06E-4,0.0,4.4E-5,0.0,0.001703,3.44E-4,0.0,0.0,0.007045,3.0E-6,0.0,5.15E-4,0.0,0.0,0.0,2.72E-4,0.0,0.0)
categs<-as.factor(rep(c("HHCORandomMINMAX","HHCORandomSDE","HHCOR2SDE","HHCOR2MINMAX","HHCORandomLPNORM","HHCOR2LPNORM"),each=20));
result <- kruskal.test(ARRAY,categs)
print(result);pos_teste<-kwAllPairsNemenyiTest(ARRAY, categs, method='Tukey');print(pos_teste);
|
/MaFMethodology/R/prune/HV/5/kruskalscript.R
|
no_license
|
fritsche/hhcoanalysisresults
|
R
| false | false | 1,324 |
r
|
require(PMCMRplus)
options("width"=10000)
ARRAY <- c(0.006078,0.034145,0.002997,0.005162,0.012252,0.005584,0.008777,0.014586,0.008631,0.008911,0.009405,0.009275,0.013533,0.006105,0.003567,0.005993,0.006952,0.031407,0.012595,0.006302,0.05367,0.059235,0.060677,0.056981,0.057675,0.053739,0.053429,0.055306,0.054656,0.054805,0.047582,0.064404,0.037039,0.05675,0.061967,0.04885,0.054908,0.059882,0.050609,0.048947,0.024502,0.01944,0.042629,0.011223,0.04047,0.03851,0.044424,0.023336,0.044331,0.015105,0.0327,0.034203,0.036544,0.019355,0.028367,0.011952,0.022492,0.039328,0.013975,0.032278,0.012488,0.011722,0.027706,0.003971,0.02529,0.015127,0.020797,0.013759,0.029244,0.004716,0.019912,0.009094,0.019934,0.012615,0.010876,0.005583,0.007618,0.026212,0.007738,0.023667,0.008898,0.006689,0.011042,0.007156,0.003971,0.009049,0.012544,0.005679,0.00958,0.008678,1.58E-4,0.012325,0.006285,0.014952,0.008417,0.007734,0.010892,0.015973,0.005703,0.007175,0.001585,0.0,9.06E-4,0.0,4.4E-5,0.0,0.001703,3.44E-4,0.0,0.0,0.007045,3.0E-6,0.0,5.15E-4,0.0,0.0,0.0,2.72E-4,0.0,0.0)
categs<-as.factor(rep(c("HHCORandomMINMAX","HHCORandomSDE","HHCOR2SDE","HHCOR2MINMAX","HHCORandomLPNORM","HHCOR2LPNORM"),each=20));
result <- kruskal.test(ARRAY,categs)
print(result);pos_teste<-kwAllPairsNemenyiTest(ARRAY, categs, method='Tukey');print(pos_teste);
|
# *** Header **************************************************************************
#
# Create .csv version of Table S3
#
# Read in national baseline
national_baseline <- read.csv(
stringr::str_c(
build_data_dir,
"/national_baseline.csv"
)
)
# Format the table
table_s3 <- national_baseline %>%
select(
group,
n_group,
c19_ifr_group,
sus_to_inf,
vax_uptake_census,
average_2vax_efficacy,
yll,
) %>%
mutate(
group = stringr::str_replace(
group,
"agebin_",
""
),
d_E = "4 days (age invariant)",
d_I = "9 days (age invariant)",
sus_to_inf = round(sus_to_inf, 2),
c19_ifr_group = round(c19_ifr_group, 3),
n_group = prettyNum(n_group, big.mark = ","),
vax_uptake_census = round(vax_uptake_census, 3),
c_ij = "See Table S1",
average_2vax_efficacy = round(average_2vax_efficacy, 3),
yll = round(yll, 1)
) %>%
rename(
`Age group` = group,
beta_i = sus_to_inf,
IFR_i = c19_ifr_group,
N_i = n_group,
vu_i = vax_uptake_census,
ve_i = average_2vax_efficacy,
YLL_i = yll
) %>%
relocate(
`Age group`,
d_E,
d_I,
beta_i,
IFR_i,
N_i,
vu_i,
c_ij,
ve_i,
YLL_i
)
# Write the table
write.csv(
table_s3,
stringr::str_c(
exhibit_data_dir,
"/table_s3.csv"
),
row.names = FALSE
)
|
/code/R/supplementary_materials_building_scripts/b3_supplementary_materials_table_s3.R
|
no_license
|
patelchetana/vaccine-speed-vs-prioritization
|
R
| false | false | 1,510 |
r
|
# *** Header **************************************************************************
#
# Create .csv version of Table S3
#
# Read in national baseline
national_baseline <- read.csv(
stringr::str_c(
build_data_dir,
"/national_baseline.csv"
)
)
# Format the table
table_s3 <- national_baseline %>%
select(
group,
n_group,
c19_ifr_group,
sus_to_inf,
vax_uptake_census,
average_2vax_efficacy,
yll,
) %>%
mutate(
group = stringr::str_replace(
group,
"agebin_",
""
),
d_E = "4 days (age invariant)",
d_I = "9 days (age invariant)",
sus_to_inf = round(sus_to_inf, 2),
c19_ifr_group = round(c19_ifr_group, 3),
n_group = prettyNum(n_group, big.mark = ","),
vax_uptake_census = round(vax_uptake_census, 3),
c_ij = "See Table S1",
average_2vax_efficacy = round(average_2vax_efficacy, 3),
yll = round(yll, 1)
) %>%
rename(
`Age group` = group,
beta_i = sus_to_inf,
IFR_i = c19_ifr_group,
N_i = n_group,
vu_i = vax_uptake_census,
ve_i = average_2vax_efficacy,
YLL_i = yll
) %>%
relocate(
`Age group`,
d_E,
d_I,
beta_i,
IFR_i,
N_i,
vu_i,
c_ij,
ve_i,
YLL_i
)
# Write the table
write.csv(
table_s3,
stringr::str_c(
exhibit_data_dir,
"/table_s3.csv"
),
row.names = FALSE
)
|
#' @title Get the number of rows of the file
#' @description Use iterators to avoid the memory overhead of
#' obtaining the number of rows of a file.
#' @param file the name of a file (possibly with a path)
#' @param n the size of the chunks used by the iterator
#' @return an integer
#' @examples
#' data(CO2)
#' write.csv(CO2, "CO2.csv", row.names=FALSE)
#' getnrows("CO2.csv")
#' unlink("CO2.csv")
#' @export
getnrows <- function(file, n=10000) {
i <- NULL # To kill off an annoying R CMD check NOTE
it <- ireadLines(file, n=n)
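  # sum the number of lines in each chunk so the full file never has to be held in memory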
return( foreach(i=it, .combine=sum) %do% length(i) )
}
|
/R/getnrows.R
|
no_license
|
cran/YaleToolkit
|
R
| false | false | 591 |
r
|
#' @title Get the number of rows of the file
#' @description Use iterators to avoid the memory overhead of
#' obtaining the number of rows of a file.
#' @param file the name of a file (possibly with a path)
#' @param n the size of the chunks used by the iterator
#' @return an integer
#' @examples
#' data(CO2)
#' write.csv(CO2, "CO2.csv", row.names=FALSE)
#' getnrows("CO2.csv")
#' unlink("CO2.csv")
#' @export
getnrows <- function(file, n=10000) {
i <- NULL # To kill off an annoying R CMD check NOTE
it <- ireadLines(file, n=n)
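  # sum the number of lines in each chunk so the full file never has to be held in memory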
return( foreach(i=it, .combine=sum) %do% length(i) )
}
|
###
# Question 1
library(tidyverse)
library(nycflights13)
#a.
nrow(filter(flights, dest == "LAX"))
#b
nrow(filter(flights, origin == "LAX"))
#c
nrow(filter(flights, distance >= 2000))
#d doesn't work, why? something to do with Tibble?
flights %>%
filter(
dest %in% c("LAX", "ONT", "SNA", "PSP", "SBD", "BUR", "LGB"),
origin != "JFK"
) %>%
nrow()
#2
nrow(filter(flights, is.na(arr_time)))
#3
arrange(flights, desc(is.na(arr_time)))
#4
select(flights, contains("TIME"))
# it includes variables with "time" in them... I would probably specifically filter for time based on certain variables using the select function in order to fix this default setting
#5
a <- filter(flights, distance >= 2000)
a<-group_by(a, dest)
summarize(a)
mutate(a)
arrange(a, dep_delay)
#complete journey
library(tidyverse)
library(completejourney)
transaction_data <- transaction_data %>%
select(
quantity,
sales_value,
retail_disc, coupon_disc, coupon_match_disc,
household_key, store_id, basket_id, product_id,
week_no, day, trans_time
)
#1
?mutate
transaction_data <- mutate(transaction_data, retail_disc = abs(retail_disc))
transaction_data <- mutate(transaction_data, coupon_disc = abs(coupon_disc))
transaction_data <- mutate(transaction_data, coupon_match_disc = abs(coupon_match_disc))
#2
transaction_data <- mutate(transaction_data, regular_price = (sales_value + retail_disc + coupon_match_disc) / quantity)
transaction_data <- mutate(transaction_data, loyalty_price = (sales_value + coupon_match_disc) / quantity)
transaction_data <- mutate(transaction_data, coupon_price = (sales_value - coupon_disc) / quantity)
#3
transaction_data %>%
filter(regular_price <= 1) %>%
select(product_id) %>%
n_distinct()
transaction_data %>%
filter(loyalty_price <= 1) %>%
select(product_id) %>%
n_distinct()
transaction_data %>%
filter(coupon_price <= 1) %>%
select(product_id) %>%
n_distinct()
#4
transaction_data %>%
group_by(basket_id) %>%
summarize(basket_value = sum(sales_value)) %>%
ungroup() %>%
summarize(proportion_over_10 = mean(basket_value > 10))
#5
transaction_data %>%
filter(
is.finite(regular_price),
is.finite(loyalty_price),
regular_price > 0
) %>%
mutate(
pct_loyalty_disc = 1 - (loyalty_price / regular_price)
) %>%
group_by(store_id) %>%
summarize(
total_sales_value = sum(sales_value),
avg_pct_loyalty_disc = mean(pct_loyalty_disc)
) %>%
filter(total_sales_value > 10000) %>%
arrange(desc(avg_pct_loyalty_disc))
|
/submissions/01-r4ds-data-transformation-Kotz-Sam.R
|
no_license
|
zhuoaprilfu/r4ds-exercises
|
R
| false | false | 2,519 |
r
|
###
# Question 1
library(tidyverse)
library(nycflights13)
#a.
nrow(filter(flights, dest == "LAX"))
#b
nrow(filter(flights, origin == "LAX"))
#c
nrow(filter(flights, distance >= 2000))
#d doesn't work, why? something to do with Tibble?
flights %>%
filter(
dest %in% c("LAX", "ONT", "SNA", "PSP", "SBD", "BUR", "LGB"),
origin != "JFK"
) %>%
nrow()
#2
nrow(filter(flights, is.na(arr_time)))
#3
arrange(flights, desc(is.na(arr_time)))
#4
select(flights, contains("TIME"))
# it includes variables with "time" in them... I would probably specifically filter for time based on certain variables using the select function in order to fix this default setting
#5
a <- filter(flights, distance >= 2000)
a<-group_by(a, dest)
summarize(a)
mutate(a)
arrange(a, dep_delay)
#complete journey
library(tidyverse)
library(completejourney)
transaction_data <- transaction_data %>%
select(
quantity,
sales_value,
retail_disc, coupon_disc, coupon_match_disc,
household_key, store_id, basket_id, product_id,
week_no, day, trans_time
)
#1
?mutate
transaction_data <- mutate(transaction_data, retail_disc = abs(retail_disc))
transaction_data <- mutate(transaction_data, coupon_disc = abs(coupon_disc))
transaction_data <- mutate(transaction_data, coupon_match_disc = abs(coupon_match_disc))
#2
transaction_data <- mutate(transaction_data, regular_price = (sales_value + retail_disc + coupon_match_disc) / quantity)
transaction_data <- mutate(transaction_data, loyalty_price = (sales_value + coupon_match_disc) / quantity)
transaction_data <- mutate(transaction_data, coupon_price = (sales_value - coupon_disc) / quantity)
#3
transaction_data %>%
filter(regular_price <= 1) %>%
select(product_id) %>%
n_distinct()
transaction_data %>%
filter(loyalty_price <= 1) %>%
select(product_id) %>%
n_distinct()
transaction_data %>%
filter(coupon_price <= 1) %>%
select(product_id) %>%
n_distinct()
#4
transaction_data %>%
group_by(basket_id) %>%
summarize(basket_value = sum(sales_value)) %>%
ungroup() %>%
summarize(proportion_over_10 = mean(basket_value > 10))
#5
transaction_data %>%
filter(
is.finite(regular_price),
is.finite(loyalty_price),
regular_price > 0
) %>%
mutate(
pct_loyalty_disc = 1 - (loyalty_price / regular_price)
) %>%
group_by(store_id) %>%
summarize(
total_sales_value = sum(sales_value),
avg_pct_loyalty_disc = mean(pct_loyalty_disc)
) %>%
filter(total_sales_value > 10000) %>%
arrange(desc(avg_pct_loyalty_disc))
|
source(file.path(rprojroot::find_package_root_file(), "tests/init_tests.R"))
context("invokeParallelFits_allok_rss0")
test_that("invokeParallelFits_allok_rss0", {
result <- invokeParallelFits(x = ATP_targets_stauro$temperature,
y = ATP_targets_stauro$relAbundance,
id = ATP_targets_stauro$uniqueID,
groups = ATP_targets_stauro$uniqueID,
BPPARAM = BiocParallel::SerialParam(),
maxAttempts = 100,
returnModels = TRUE,
start = c(Pl = 0, a = 550, b = 10))
rss0_new <- result$modelMetrics$rss
expect_equal(unname(rss0_new)[-16], rss0_ref[-16]) # position 16: ATP5G1_IPI00009075 -> was a different seed used to resample due to negative RSS-Diff?
})
context("invokeParallelFits_allok_rss1")
test_that("invokeParallelFits_allok_rss1", {
result <- invokeParallelFits(x = ATP_targets_stauro$temperature,
y = ATP_targets_stauro$relAbundance,
id = ATP_targets_stauro$uniqueID,
groups = ATP_targets_stauro$compoundConcentration,
BPPARAM = BiocParallel::SerialParam(),
maxAttempts = 100,
returnModels = TRUE,
start = c(Pl = 0, a = 550, b = 10))
rss1_new <- result$modelMetrics %>%
group_by(id) %>%
summarise(rss = sum(rss))
expect_equal(rss1_new$rss, rss1_ref)
})
context("fitAllModels_allok_rss0")
test_that("fitAllModels_allok_rss0", {
models <- fitAllModels(x = ATP_targets_stauro$temperature,
y = ATP_targets_stauro$relAbundance,
iter = ATP_targets_stauro$uniqueID,
BPPARAM = BiocParallel::SerialParam(),
maxAttempts = 100,
start = c(Pl = 0, a = 550, b = 10))
rss0_new <- sapply(models, function(m) {
ifelse(inherits(m , "try-error"), NA, m$m$deviance())
})
expect_equal(unname(rss0_new)[-16], rss0_ref[-16]) # position 16: ATP5G1_IPI00009075 -> was a different seed used to resample due to negative RSS-Diff?
})
context("fitAllModels_allok_rss1")
test_that("fitAllModels_allok_rss1", {
models <- fitAllModels(x = ATP_targets_stauro$temperature,
y = ATP_targets_stauro$relAbundance,
iter = paste(ATP_targets_stauro$uniqueID, ATP_targets_stauro$compoundConcentration),
BPPARAM = BiocParallel::SerialParam(),
maxAttempts = 100,
start = c(Pl = 0, a = 550, b = 10))
rss1_new <- sapply(models, function(m) {
ifelse(inherits(m , "try-error"), NA, m$m$deviance())
})
rss1_new <- tibble(groups = names(rss1_new), rss1 = rss1_new) %>%
separate("groups", c("id", "compoundConcentration"), remove = FALSE, sep = " ") %>%
group_by(id) %>%
summarise(rss1 = sum(rss1))
expect_equal(unname(rss1_new$rss1), rss1_ref)
})
|
/tests/testthat/test_fitting.R
|
no_license
|
Huber-group-EMBL/NPARC
|
R
| false | false | 3,156 |
r
|
source(file.path(rprojroot::find_package_root_file(), "tests/init_tests.R"))
context("invokeParallelFits_allok_rss0")
test_that("invokeParallelFits_allok_rss0", {
result <- invokeParallelFits(x = ATP_targets_stauro$temperature,
y = ATP_targets_stauro$relAbundance,
id = ATP_targets_stauro$uniqueID,
groups = ATP_targets_stauro$uniqueID,
BPPARAM = BiocParallel::SerialParam(),
maxAttempts = 100,
returnModels = TRUE,
start = c(Pl = 0, a = 550, b = 10))
rss0_new <- result$modelMetrics$rss
expect_equal(unname(rss0_new)[-16], rss0_ref[-16]) # position 16: ATP5G1_IPI00009075 -> was a different seed used to resample due to negative RSS-Diff?
})
context("invokeParallelFits_allok_rss1")
test_that("invokeParallelFits_allok_rss1", {
result <- invokeParallelFits(x = ATP_targets_stauro$temperature,
y = ATP_targets_stauro$relAbundance,
id = ATP_targets_stauro$uniqueID,
groups = ATP_targets_stauro$compoundConcentration,
BPPARAM = BiocParallel::SerialParam(),
maxAttempts = 100,
returnModels = TRUE,
start = c(Pl = 0, a = 550, b = 10))
rss1_new <- result$modelMetrics %>%
group_by(id) %>%
summarise(rss = sum(rss))
expect_equal(rss1_new$rss, rss1_ref)
})
context("fitAllModels_allok_rss0")
test_that("fitAllModels_allok_rss0", {
models <- fitAllModels(x = ATP_targets_stauro$temperature,
y = ATP_targets_stauro$relAbundance,
iter = ATP_targets_stauro$uniqueID,
BPPARAM = BiocParallel::SerialParam(),
maxAttempts = 100,
start = c(Pl = 0, a = 550, b = 10))
rss0_new <- sapply(models, function(m) {
ifelse(inherits(m , "try-error"), NA, m$m$deviance())
})
expect_equal(unname(rss0_new)[-16], rss0_ref[-16]) # position 16: ATP5G1_IPI00009075 -> was a different seed used to resample due to negative RSS-Diff?
})
context("fitAllModels_allok_rss1")
test_that("fitAllModels_allok_rss1", {
models <- fitAllModels(x = ATP_targets_stauro$temperature,
y = ATP_targets_stauro$relAbundance,
iter = paste(ATP_targets_stauro$uniqueID, ATP_targets_stauro$compoundConcentration),
BPPARAM = BiocParallel::SerialParam(),
maxAttempts = 100,
start = c(Pl = 0, a = 550, b = 10))
rss1_new <- sapply(models, function(m) {
ifelse(inherits(m , "try-error"), NA, m$m$deviance())
})
rss1_new <- tibble(groups = names(rss1_new), rss1 = rss1_new) %>%
separate("groups", c("id", "compoundConcentration"), remove = FALSE, sep = " ") %>%
group_by(id) %>%
summarise(rss1 = sum(rss1))
expect_equal(unname(rss1_new$rss1), rss1_ref)
})
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ipums_info.r
\name{ipums_file_info}
\alias{ipums_file_info}
\title{Get IPUMS file information}
\usage{
ipums_file_info(object, type = NULL)
}
\arguments{
\item{object}{An ipums_ddi object (loaded with \code{\link{read_ipums_ddi}}).}
\item{type}{NULL to load all types, or one of "ipums_project", "extract_data",
"extract_notes", "conditions" or "citation".}
}
\value{
If \code{type} is NULL, a list with the \code{ipums_project},
\code{extract_date}, \code{extract_notes}, \code{conditions}, and \code{citation}.
Otherwise a string with the type of information requested in \code{type}.
}
\description{
Get IPUMS metadata information about the data file loaded into R
from an ipums_ddi
}
\examples{
ddi <- read_ipums_ddi(ripums_example("cps_00006.xml"))
ipums_file_info(ddi)
}
|
/man/ipums_file_info.Rd
|
no_license
|
cran/ripums
|
R
| false | true | 887 |
rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ipums_info.r
\name{ipums_file_info}
\alias{ipums_file_info}
\title{Get IPUMS file information}
\usage{
ipums_file_info(object, type = NULL)
}
\arguments{
\item{object}{An ipums_ddi object (loaded with \code{\link{read_ipums_ddi}}).}
\item{type}{NULL to load all types, or one of "ipums_project", "extract_data",
"extract_notes", "conditions" or "citation".}
}
\value{
If \code{type} is NULL, a list with the \code{ipums_project},
\code{extract_date}, \code{extract_notes}, \code{conditions}, and \code{citation}.
Otherwise a string with the type of information requested in \code{type}.
}
\description{
Get IPUMS metadata information about the data file loaded into R
from an ipums_ddi
}
\examples{
ddi <- read_ipums_ddi(ripums_example("cps_00006.xml"))
ipums_file_info(ddi)
}
|
#' Calculation of the Jaccard Index between diseases
#'
#' This function is able to calculate the Jaccard Index between: 1. multiple
#' diseases, 2. a set of genes and multiple diseases, 3. a set of genes and
#' multiple main psychiatric disorders and 4. multiple diseases and multiple
#' main psychiatric disorders.
#'
#' Warning: The main psychiatric disorders are understood as a single set
#' of genes composed of the genes of all the diseases that the main
#' psychiatric disorder contains.
#'
#' @name jaccardEstimation
#' @rdname jaccardEstimation-methods
#' @aliases jaccardEstimation
#' @param pDisease vector of diseases, vector of genes, vector of main
#' psychiatric disorder.
#' @param sDisease vector of diseases, vector of genes, vector of main
#' psychiatric disorder. Only necessary when comparing genes vs. diseases,
#' genes vs. main psychiatric disorders or diseases vs. main psychiatric
#' disorders. To compare multiple diseases only use \code{pDisease}.
#' @param database Name of the database that will be queried. It can take the
#' values \code{'psycur15'} to use data validated by experts for first release
#' of PsyGeNET; \code{'psycur16'} to use data validated by experts for second
#' release of PsyGeNET; or \code{'ALL'} to use both databases.
#' @param nboot Number of iterations used to compute the p-value associated
#' with the calculated Jaccard Index (default 100).
#' @param ncores Number of cores used to calculate the p-value associated with
#' the computed Jaccard Index (default 1).
#' @param verbose By default \code{FALSE}. Change it to \code{TRUE} to get an
#' on-time log from the function.
#' @return An object of class \code{JaccardIndexPsy} with the computed
#' calculation of the JaccardIndex.
#' @examples
#' ji <- jaccardEstimation( c( "COMT", "CLOCK", "DRD3" ), "umls:C0005586", "ALL" )
#' @export jaccardEstimation
jaccardEstimation <- function(pDisease, sDisease, database="ALL", nboot = 100, ncores = 1, verbose = FALSE) {
if(missing(pDisease)) {
stop("Argument 'pDisease' must be set. Argument 'sDisease' is optional.")
}
if(verbose) message("Query PsyGeNET for generic diseases.")
psy <- psygenetAll ( database )
#universe <- disGenetCurated()
load(system.file("extdata", "disgenetCuratedUniverse.RData", package="psygenet2r"))
diseases <- getDiseasesType( pDisease, psy, verbose )
if(missing(sDisease)) {
out <- singleInput(diseases, diseases$type, universe, psy, nboot, ncores, verbose)
} else {
diseases2 <- getDiseasesType( sDisease, psy, verbose )
out <- multipleInput(diseases$diseases, diseases$type, diseases2$diseases, diseases2$type, universe, nboot, ncores, verbose)
}
return(out)
}
singleInput <- function(diseases, type, universe, psy, nboot, ncores, verbose) {
if(type != "dise") {
return(singleInput.genes(diseases$diseases$geneList$genes, psy, universe, nboot, ncores, verbose))
#stop("Jaccard Index only allows single input if 'pDiseases' is a vector of diseases (Given: ", type, ").")
}
  if(length(diseases$diseases) <= 1){
stop("Jaccard Index needs, at last, two elements to be calculated.")
}
diseases <- diseases$diseases
items <- combn(names(diseases), 2)
xx <- lapply(1:ncol(items), function(nc) {
it1 <- diseases[[items[1, nc]]]$genes
it2 <- diseases[[items[2, nc]]]$genes
ji <- sum(it1 %in% it2) * 1.0 / length(unique(c(it1, it2)))
bb <- ji.internal(length(it1), length(it2), universe, nboot, ncores)
pval <- (sum(bb > ji) * 1.0) / (nboot+1)
return(c(items[1, nc], items[2, nc], length(it1), length(it2), ji, pval))
})
xx <- data.frame(do.call(rbind, xx))
rownames(xx) <- 1:nrow(xx)
colnames(xx) <- c("Disease1", "Disease2", "NGenes1", "NGenes2", "JaccardIndex", "pval")
new("JaccardIndexPsy", table = xx, type = "disease-disease", nit = nboot, i1 = names(diseases), i2 = "")
}
singleInput.genes <- function(genes, database, universe, nboot, ncores, verbose) {
warning("Jaccard Index for all diseases in PsyGeNET will be calculated.")
xx <- parallel::mclapply(unique(as.character(database$c2.DiseaseName)), function(dCode) {
disease <- database[database$c2.DiseaseName == dCode, "c1.Gene_Symbol"]
ji <- sum(genes %in% disease) * 1.0 / length(unique(c(genes, disease)))
bb <- ji.internal(length(genes), length(disease), universe, nboot, ncores)
pval <- (sum(bb > ji) * 1.0) / (nboot+1)
return(c(dCode, length(genes), length(disease), ji, pval))
}, mc.cores = ncores)
xx <- data.frame(disease1="genes", do.call(rbind, xx))
rownames(xx) <- 1:nrow(xx)
colnames(xx) <- c("Disease1", "Disease2", "NGenes1", "NGenes2", "JaccardIndex", "pval")
new("JaccardIndexPsy", table = xx, type = "geneList - disease", nit = nboot, i1 = genes, i2 = "PsyGeNET")
}
multipleInput <- function(primary, typeP, secondary, typeS, universe, nboot, ncores, verbose) {
if(typeP == typeS) {
stop("Invalid input type for 'pDisease' and 'sDisease'.")
}
xx <- lapply(names(primary), function(nn1) {
data.frame(do.call(rbind, lapply(names(secondary), function(nn2) {
it1 <- primary[[nn1]]$genes
it2 <- secondary[[nn2]]$genes
ji <- sum(it1 %in% it2) * 1.0 / length(unique(c(it1, it2)))
bb <- ji.internal(length(it1), length(it2), universe, nboot, ncores)
pval <- (sum(bb > ji) * 1.0) / (nboot+1)
return(c(nn1, nn2, length(it1), length(it2), ji, pval))
})))
})
xx <- data.frame(do.call(rbind, xx))
rownames(xx) <- 1:nrow(xx)
colnames(xx) <- c("Disease1", "Disease2", "NGenes1", "NGenes2", "JaccardIndex", "pval")
new("JaccardIndexPsy", table = xx, type = paste0(typeP, " - ", typeS), nit = nboot, i1 = names(primary), i2 = names(secondary))
}
getDiseasesType <- function(pDiseases, psy, verbose = TRUE) {
mpds <- as.character(unique(psy$c2.PsychiatricDisorder))
cuis <- as.character(unique(psy$c2.Disease_code))
umls <- as.character(unique(psy$c2.Disease_Id))
nmms <- as.character(unique(psy$c2.DiseaseName))
type <- NA
diseases <- lapply(1:length(pDiseases), function(ii) {
it1 <- pDiseases[ii]
if (verbose) {
message("Checking disorder/disease/gene '", it1, "' (", ii, " of ", length(pDiseases), ").")
}
if( it1 %in% mpds) {
if (is.na(type) | (!is.na(type) & type == "mpds")) {
it1 <- list( name=it1, genes=as.character( unique( psy [ psy$c2.PsychiatricDisorder == it1, 1 ] ) ) )
type <<- "mpds"
} else {
stop("1 All input diseases msut be psyquiatric disorders, diseases (cui or name) or genes.")
}
} else if( it1 %in% cuis ) {
if (is.na(type) | (!is.na(type) & type == "dise")) {
it1 <- list( name=it1, genes=as.character( unique( psy [ psy$c2.Disease_code == it1, 1 ] ) ) )
type <<- "dise"
} else {
stop("2 All input diseases msut be psyquiatric disorders, diseases (cui or name) or genes.")
}
} else if( it1 %in% umls ) {
if (is.na(type) | (!is.na(type) & type == "dise")) {
it1 <- list( name=it1, genes=as.character( unique( psy [ psy$c2.Disease_Id == it1, 1 ] ) ) )
type <<- "dise"
} else {
stop("3 All input diseases msut be psyquiatric disorders, diseases (cui or name) or genes.")
}
} else if( it1 %in% nmms ) {
if (is.na(type) | (!is.na(type) & type == "dise")) {
it1 <- list( name=it1, genes=as.character( unique( psy [ psy$c2.DiseaseName == it1, 1 ] ) ) )
type <<- "dise"
} else {
stop("4 All input diseases msut be psyquiatric disorders, diseases (cui or name) or genes.")
}
} else {
if (is.na(type) | (!is.na(type) & type == "geneList")) {
it1 <- list( name="gene list", genes=it1 )
type <<- "geneList"
} else {
stop("5 All input diseases msut be psyquiatric disorders, diseases (cui or name) or genes.")
}
}
return(it1)
})
if(type == "geneList") {
diseases <- list( list( name = "geneList", genes = pDiseases ) )
names(diseases) <- "geneList"
} else {
names(diseases) <- pDiseases
}
return(list(diseases=diseases, type=type))
}
ji.internal <- function(len1, len2, universe, nboot, ncores) {
if (!requireNamespace("parallel", quietly = TRUE)) {
pfun <- lapply
} else {
pfun <- parallel::mclapply
}
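  # null distribution: Jaccard index of two random gene sets of the same sizes, drawn from the curated universe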
unlist(pfun(1:nboot, function(ii) {
g1 <- sample( universe, len1 )
g2 <- sample( universe, len2 )
ja.coefr <- length(intersect(g1, g2)) / length(union(g1, g2))
}, mc.cores = ncores))
}
|
/R/jaccardEstimation.R
|
permissive
|
aGutierrezSacristan/psygenet2r
|
R
| false | false | 8,500 |
r
|
#' Calculation of the Jaccard Index between diseases
#'
#' This function is able to calculate the Jaccard Index between: 1. multiple
#' diseases, 2. a set of genes and multiple diseases, 3. a set of genes and
#' multiple main psychiatric disorders and 4. multiple diseases and multiple
#' main psychiatric disorders.
#'
#' Warning: The main psychiatric disorders are understood as a single set
#' of genes composed of the genes of all the diseases that the main
#' psychiatric disorder contains.
#'
#' @name jaccardEstimation
#' @rdname jaccardEstimation-methods
#' @aliases jaccardEstimation
#' @param pDisease vector of diseases, vector of genes, vector of main
#' psychiatric disorder.
#' @param sDisease vector of diseases, vector of genes, vector of main
#' psychiatric disorder. Only necessary when comparing genes vs. diseases,
#' genes vs. main psychiatric disorders or diseases vs. main psychiatric
#' disorders. To compare multiple diseases only use \code{pDisease}.
#' @param database Name of the database that will be queried. It can take the
#' values \code{'psycur15'} to use data validated by experts for first release
#' of PsyGeNET; \code{'psycur16'} to use data validated by experts for second
#' release of PsyGeNET; or \code{'ALL'} to use both databases.
#' @param nboot Number of iterations used to compute the p-value associated
#' with the calculated Jaccard Index (default 100).
#' @param ncores Number of cores used to calculate the p-value associated with
#' the computed Jaccard Index (default 1).
#' @param verbose By default \code{FALSE}. Change it to \code{TRUE} to get an
#' on-time log from the function.
#' @return An object of class \code{JaccardIndexPsy} with the computed
#' calculation of the JaccardIndex.
#' @examples
#' ji <- jaccardEstimation( c( "COMT", "CLOCK", "DRD3" ), "umls:C0005586", "ALL" )
#' @export jaccardEstimation
jaccardEstimation <- function(pDisease, sDisease, database="ALL", nboot = 100, ncores = 1, verbose = FALSE) {
if(missing(pDisease)) {
stop("Argument 'pDisease' must be set. Argument 'sDisease' is optional.")
}
if(verbose) message("Query PsyGeNET for generic diseases.")
psy <- psygenetAll ( database )
#universe <- disGenetCurated()
load(system.file("extdata", "disgenetCuratedUniverse.RData", package="psygenet2r"))
diseases <- getDiseasesType( pDisease, psy, verbose )
if(missing(sDisease)) {
out <- singleInput(diseases, diseases$type, universe, psy, nboot, ncores, verbose)
} else {
diseases2 <- getDiseasesType( sDisease, psy, verbose )
out <- multipleInput(diseases$diseases, diseases$type, diseases2$diseases, diseases2$type, universe, nboot, ncores, verbose)
}
return(out)
}
singleInput <- function(diseases, type, universe, psy, nboot, ncores, verbose) {
if(type != "dise") {
return(singleInput.genes(diseases$diseases$geneList$genes, psy, universe, nboot, ncores, verbose))
#stop("Jaccard Index only allows single input if 'pDiseases' is a vector of diseases (Given: ", type, ").")
}
  if(length(diseases$diseases) <= 1){
stop("Jaccard Index needs, at last, two elements to be calculated.")
}
diseases <- diseases$diseases
items <- combn(names(diseases), 2)
xx <- lapply(1:ncol(items), function(nc) {
it1 <- diseases[[items[1, nc]]]$genes
it2 <- diseases[[items[2, nc]]]$genes
ji <- sum(it1 %in% it2) * 1.0 / length(unique(c(it1, it2)))
bb <- ji.internal(length(it1), length(it2), universe, nboot, ncores)
pval <- (sum(bb > ji) * 1.0) / (nboot+1)
return(c(items[1, nc], items[2, nc], length(it1), length(it2), ji, pval))
})
xx <- data.frame(do.call(rbind, xx))
rownames(xx) <- 1:nrow(xx)
colnames(xx) <- c("Disease1", "Disease2", "NGenes1", "NGenes2", "JaccardIndex", "pval")
new("JaccardIndexPsy", table = xx, type = "disease-disease", nit = nboot, i1 = names(diseases), i2 = "")
}
singleInput.genes <- function(genes, database, universe, nboot, ncores, verbose) {
  warning("Jaccard Index for all diseases in PsyGeNET will be calculated.")
  xx <- parallel::mclapply(unique(as.character(database$c2.DiseaseName)), function(dCode) {
    disease <- database[database$c2.DiseaseName == dCode, "c1.Gene_Symbol"]
    ji <- sum(genes %in% disease) * 1.0 / length(unique(c(genes, disease)))
    bb <- ji.internal(length(genes), length(disease), universe, nboot, ncores)
    pval <- (sum(bb > ji) * 1.0) / (nboot+1)
    return(c(dCode, length(genes), length(disease), ji, pval))
  }, mc.cores = ncores)
  xx <- data.frame(disease1="genes", do.call(rbind, xx))
  rownames(xx) <- 1:nrow(xx)
  colnames(xx) <- c("Disease1", "Disease2", "NGenes1", "NGenes2", "JaccardIndex", "pval")
  new("JaccardIndexPsy", table = xx, type = "geneList - disease", nit = nboot, i1 = genes, i2 = "PsyGeNET")
}
multipleInput <- function(primary, typeP, secondary, typeS, universe, nboot, ncores, verbose) {
  if(typeP == typeS) {
    stop("Invalid input type for 'pDisease' and 'sDisease'.")
  }
  xx <- lapply(names(primary), function(nn1) {
    data.frame(do.call(rbind, lapply(names(secondary), function(nn2) {
      it1 <- primary[[nn1]]$genes
      it2 <- secondary[[nn2]]$genes
      ji <- sum(it1 %in% it2) * 1.0 / length(unique(c(it1, it2)))
      bb <- ji.internal(length(it1), length(it2), universe, nboot, ncores)
      pval <- (sum(bb > ji) * 1.0) / (nboot+1)
      return(c(nn1, nn2, length(it1), length(it2), ji, pval))
    })))
  })
  xx <- data.frame(do.call(rbind, xx))
  rownames(xx) <- 1:nrow(xx)
  colnames(xx) <- c("Disease1", "Disease2", "NGenes1", "NGenes2", "JaccardIndex", "pval")
  new("JaccardIndexPsy", table = xx, type = paste0(typeP, " - ", typeS), nit = nboot, i1 = names(primary), i2 = names(secondary))
}
getDiseasesType <- function(pDiseases, psy, verbose = TRUE) {
  mpds <- as.character(unique(psy$c2.PsychiatricDisorder))
  cuis <- as.character(unique(psy$c2.Disease_code))
  umls <- as.character(unique(psy$c2.Disease_Id))
  nmms <- as.character(unique(psy$c2.DiseaseName))
  type <- NA
  diseases <- lapply(1:length(pDiseases), function(ii) {
    it1 <- pDiseases[ii]
    if (verbose) {
      message("Checking disorder/disease/gene '", it1, "' (", ii, " of ", length(pDiseases), ").")
    }
    if( it1 %in% mpds ) {
      if (is.na(type) | (!is.na(type) & type == "mpds")) {
        it1 <- list( name=it1, genes=as.character( unique( psy[ psy$c2.PsychiatricDisorder == it1, 1 ] ) ) )
        type <<- "mpds"
      } else {
        stop("1 All input diseases must be psychiatric disorders, diseases (cui or name) or genes.")
      }
    } else if( it1 %in% cuis ) {
      if (is.na(type) | (!is.na(type) & type == "dise")) {
        it1 <- list( name=it1, genes=as.character( unique( psy[ psy$c2.Disease_code == it1, 1 ] ) ) )
        type <<- "dise"
      } else {
        stop("2 All input diseases must be psychiatric disorders, diseases (cui or name) or genes.")
      }
    } else if( it1 %in% umls ) {
      if (is.na(type) | (!is.na(type) & type == "dise")) {
        it1 <- list( name=it1, genes=as.character( unique( psy[ psy$c2.Disease_Id == it1, 1 ] ) ) )
        type <<- "dise"
      } else {
        stop("3 All input diseases must be psychiatric disorders, diseases (cui or name) or genes.")
      }
    } else if( it1 %in% nmms ) {
      if (is.na(type) | (!is.na(type) & type == "dise")) {
        it1 <- list( name=it1, genes=as.character( unique( psy[ psy$c2.DiseaseName == it1, 1 ] ) ) )
        type <<- "dise"
      } else {
        stop("4 All input diseases must be psychiatric disorders, diseases (cui or name) or genes.")
      }
    } else {
      if (is.na(type) | (!is.na(type) & type == "geneList")) {
        it1 <- list( name="gene list", genes=it1 )
        type <<- "geneList"
      } else {
        stop("5 All input diseases must be psychiatric disorders, diseases (cui or name) or genes.")
      }
    }
    return(it1)
  })
  if(type == "geneList") {
    diseases <- list( list( name = "geneList", genes = pDiseases ) )
    names(diseases) <- "geneList"
  } else {
    names(diseases) <- pDiseases
  }
  return(list(diseases=diseases, type=type))
}
ji.internal <- function(len1, len2, universe, nboot, ncores) {
  # use parallel::mclapply when available; otherwise fall back to a serial wrapper
  # that simply ignores the mc.cores argument
  if (!requireNamespace("parallel", quietly = TRUE)) {
    pfun <- function(X, FUN, mc.cores) lapply(X, FUN)
  } else {
    pfun <- parallel::mclapply
  }
  unlist(pfun(1:nboot, function(ii) {
    g1 <- sample( universe, len1 )
    g2 <- sample( universe, len2 )
    length(intersect(g1, g2)) / length(union(g1, g2))
  }, mc.cores = ncores))
}
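## Illustrative sketch (not executed; gene names below are made up): ji.internal()
## draws nboot random gene-set pairs of the same sizes from the universe, giving a
## null distribution against which an observed Jaccard index is compared, using the
## same p-value convention as singleInput()/multipleInput().
if (FALSE) {
  universe <- paste0("GENE", seq_len(1000))
  g1 <- sample(universe, 30)
  g2 <- sample(universe, 40)
  obs  <- length(intersect(g1, g2)) / length(union(g1, g2))
  null <- ji.internal(length(g1), length(g2), universe, nboot = 100, ncores = 1)
  pval <- sum(null > obs) / (100 + 1)
}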
|
library(tidyverse)
library(lubridate)#convert date formats
library(xml2)#for html2txt function
source("code/get_collapsed_categories.R")#code for cross journal categories
#Relevant Functions----
#function to convert html encoded characters to text
html2txt <- function(str) {
xml_text(read_html(paste0("<x>", str, "</x>"))) #create xml node to be read as html, allowing text conversion
}
#function to replace special characters with standard alphabet letters
replace_special <- function(x){
case_when(#these regex expressions won't work when running R on windows
str_detect(x, fixed("\xf6")) ~ str_replace(x, fixed("\xf6"), "o"), #replace with "o"
str_detect(x, fixed("\xfc")) ~ str_replace(x, fixed("\xfc"), "u"), #replace with "u"
str_detect(x, "&") ~ str_replace(x, "&", "and"), #replace with "and"
str_detect(x, "'") ~ str_replace(x, "'", "'"), #replace with apostrophes
str_detect(x, "&#x[:alnum:]*;") ~ paste(html2txt(x)), #fix html-encoded characters
TRUE ~ paste(x)) #keep original value otherwise
}
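## Quick sanity check for replace_special() (illustrative values only; safe to delete).
## Note that only the first matching case_when() branch is applied per value.
if (interactive()) {
  print(replace_special("K\xf6hler"))          # expected: "Kohler"
  print(replace_special("Smith &amp; Jones"))  # expected: "Smith and Jones"
}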
#Load & clean datasets----
manu_data <- report_parse %>%
mutate(doi = tolower(doi)) %>% #allows joining w. impact data
select(-related.manu, -is.resubmission) %>%
filter(manuscript.number != "NA") %>%
filter(journal != "EC") %>% filter(journal != "CVI") %>% filter(journal != "genomeA") #drop old journals
usage_data <- read_csv("processed_data/usage.csv") #read in highwire usage data
usage_select <- usage_data %>% select(`Article Date of Publication (article_metadata)`,
`Article DOI (article_metadata)`,
`Total Abstract`, `Total HTML`, `Total PDF`)
citation_data <- read_csv("processed_data/cites.csv") #read in highwire citation data
citation_select <- citation_data %>% select(`Article DOI (article_metadata)`,
`Article Date of Publication (article_metadata)`,
Cites, `Citation Date`, `Published Months`) %>%
filter(Cites != 0) %>% #drop entries that don't actually represent citations
group_by(`Article DOI (article_metadata)`, `Article Date of Publication (article_metadata)`,
`Published Months`) %>%
  summarise(Cites = n()) #count # cites for each article, while maintaining relevant metadata
#merge impact datasets
published_data <- full_join(citation_select, usage_select,
by = c("Article Date of Publication (article_metadata)",
"Article DOI (article_metadata)")) %>% distinct()
#merge impact data w. manuscript data
report_data <- left_join(manu_data, published_data, by = c("doi" = "Article DOI (article_metadata)"))
#clean merged datasets & save-----
report_data_ed <- report_data %>%
unite(., Editor, first.name, last.name, sep = " ") %>% #create full editor names
mutate(Editor = map(Editor, replace_special), #replace special characters with standard text - editor names
title = map(title, replace_special), #manuscript titles
category = map(category, replace_special)) %>% #category types
mutate(category = unlist(category)) %>%
  mutate(category = map(category, function(x){strtrim(x, 45)})) #crop category length to 45 characters
clean_report_data <- report_data_ed %>%
mutate(Editor = unlist(Editor), #unlist after map function(s)
title = unlist(title),
category = unlist(category)) %>%
mutate(`Article Date of Publication (article_metadata)` = mdy(`Article Date of Publication (article_metadata)`),
journal = if_else(journal == "mra", "MRA", journal)) %>% #enable impact data joins
rename(., "editor" = "Editor", "ejp.decision" = "EJP.decision",
"publication.date" = "Article Date of Publication (article_metadata)",
"months.published" = "Published Months", "Total Article Cites"= "Cites",
"Abstract" = "Total Abstract", "HTML" = "Total HTML", "PDF" = "Total PDF") %>%
gather(`Total Article Cites`:PDF, key = measure.names, value = measure.values) %>% #tidy impact data
mutate(category = collapse_cats(.$category)) %>%
filter(measure.names != "Measure By") %>%
distinct()
write_csv(clean_report_data, paste0("processed_data/report_data", this_ym,".csv"))
#gather data for calculating estimated journal impact factors-----
jif_data <- citation_data %>% select(`Article DOI (article_metadata)`, Cites, `Citation Date`,
`Article Date of Publication (article_metadata)`) %>%
filter(Cites != 0) %>% distinct()
#merge data for jif calculation w. data for published manus
jif_report_data <- manu_data %>%
filter(!is.na(doi)) %>%
select(doi, manuscript.type, journal) %>%
left_join(., jif_data, by = c("doi" = "Article DOI (article_metadata)")) %>% distinct()
write_csv(jif_report_data, paste0("processed_data/jif_report_data", this_ym, ".csv"))
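## Hedged sketch (not run): one way to estimate a journal impact factor from
## jif_report_data. It assumes `Citation Date` parses with mdy() like the publication
## date above, and the denominator only counts articles that received at least one
## citation, so treat the result as a rough illustration rather than the actual report.
if (FALSE) {
  est_jif <- jif_report_data %>%
    mutate(pub.year  = year(mdy(`Article Date of Publication (article_metadata)`)),
           cite.year = year(mdy(`Citation Date`))) %>%
    filter(pub.year %in% c(2017, 2018), cite.year == 2019) %>%
    group_by(journal) %>%
    summarise(cites = n(), cited.articles = n_distinct(doi)) %>%
    mutate(est.jif = cites / cited.articles)
}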
|
/code/merge_clean_report_data.R
|
permissive
|
SchlossLab/Hagan_monthly_journal_reports_2019
|
R
| false | false | 4,948 |
r
|
|
####### Replication data retrieve
install.packages("rjson")
## 1
## data importing
# library using rjson to import
library(rjson)
jsonCPI = fromJSON(file = "http://markets.prod.services.amana.vpn/api/app/markets/tradingeconomics/historical/country/united%20states/indicator/consumer%20price%20index%20cpi")
# generate empty data set
CPIdata = data.frame(rep(0, length(jsonCPI)))
CPIdata$value = c(0)
colnames(CPIdata) = c("dateTime", "value")
# extract CPI
for(i in seq(from=1, to=length(jsonCPI))){
item = jsonCPI[[i]]
CPIdata$dateTime[i] = item$dateTime
CPIdata$value[i] = item$value
}
write.csv(CPIdata,"Documents/CPIdata.csv")
## 2
## GDP growth rate
jsonGDP = fromJSON(file = "http://markets.prod.services.amana.vpn/api/app/markets/tradingeconomics/historical/country/united%20states/indicator/gdp%20growth%20rate")
# generate empty data set
GDPdata = data.frame(rep(0, length(jsonGDP)))
GDPdata$value = c(0)
colnames(GDPdata) = c("dateTime", "value")
for(i in seq(from=1, to=length(jsonGDP))){
item = jsonGDP[[i]]
GDPdata$dateTime[i] = item$dateTime
GDPdata$value[i] = item$value
}
## 3
## Core Consumer Price
jsonCCPI = fromJSON(file = "http://markets.prod.services.amana.vpn/api/app/markets/tradingeconomics/historical/country/united%20states/indicator/core%20consumer%20prices")
# generate empty data set
CCPIdata = data.frame(rep(0, length(jsonCCPI)))
CCPIdata$value = c(0)
colnames(CCPIdata) = c("dateTime", "value")
for(i in seq(from=1, to=length(jsonCCPI))){
item = jsonCCPI[[i]]
CCPIdata$dateTime[i] = item$dateTime
CCPIdata$value[i] = item$value
}
write.csv(CCPIdata,"Documents/CCPIdata.csv")
## 4
## PPI Producer Price
jsonPPI = fromJSON(file = "http://markets.prod.services.amana.vpn/api/app/markets/tradingeconomics/historical/country/united%20states/indicator/producer%20prices")
# generate empty data set
PPIdata = data.frame(rep(0, length(jsonPPI)))
PPIdata$value = c(0)
colnames(PPIdata) = c("dateTime", "value")
for(i in seq(from=1, to=length(jsonPPI))){
item = jsonPPI[[i]]
PPIdata$dateTime[i] = item$dateTime
PPIdata$value[i] = item$value
}
write.csv(PPIdata,"Documents/PPIdata.csv")
## 5
## IPI Import Price
jsonIPI = fromJSON(file = "http://markets.prod.services.amana.vpn/api/app/markets/tradingeconomics/historical/country/united%20states/indicator/import%20prices")
# generate empty data set
IPIdata = data.frame(rep(0, length(jsonIPI)))
IPIdata$value = c(0)
colnames(IPIdata) = c("dateTime", "value")
for(i in seq(from=1, to=length(jsonIPI))){
item = jsonIPI[[i]]
IPIdata$dateTime[i] = item$dateTime
IPIdata$value[i] = item$value
}
write.csv(IPIdata,"Documents/IPIdata.csv")
## 6
## EPI Export Price
jsonEPI = fromJSON(file = "http://markets.prod.services.amana.vpn/api/app/markets/tradingeconomics/historical/country/united%20states/indicator/export%20prices")
# generate empty data set
EPIdata = data.frame(rep(0, length(jsonEPI)))
EPIdata$value = c(0)
colnames(EPIdata) = c("dateTime", "value")
for(i in seq(from=1, to=length(jsonEPI))){
item = jsonEPI[[i]]
EPIdata$dateTime[i] = item$dateTime
EPIdata$value[i] = item$value
}
write.csv(EPIdata,"Documents/EPIdata.csv")
## 7
## GDP Deflator
jsonDef = fromJSON(file = "http://markets.prod.services.amana.vpn/api/app/markets/tradingeconomics/historical/country/united%20states/indicator/gdp%20deflator")
# generate empty data set
Defdata = data.frame(rep(0, length(jsonDef)))
Defdata$value = c(0)
colnames(Defdata) = c("dateTime", "value")
for(i in seq(from=1, to=length(jsonDef))){
item = jsonDef[[i]]
Defdata$dateTime[i] = item$dateTime
Defdata$value[i] = item$value
}
write.csv(CPIdata,"Documents/GDPdefdata.csv")
## 8
## Inflation MoM
jsonINFMOM = fromJSON(file = "http://markets.prod.services.amana.vpn/api/app/markets/tradingeconomics/historical/country/united%20states/indicator/inflation%20rate%20mom")
# generate empty data set
INFMOMdata = data.frame(rep(0, length(jsonINFMOM)))
INFMOMdata$value = c(0)
colnames(INFMOMdata) = c("dateTime", "value")
for(i in seq(from=1, to=length(jsonINFMOM))){
item = jsonINFMOM[[i]]
INFMOMdata$dateTime[i] = item$dateTime
INFMOMdata$value[i] = item$value
}
write.csv(INFMOMdata,"Documents/Inflationmom.csv")
## 9
## Inflation Expectation
jsonINFEXP = fromJSON(file = "http://markets.prod.services.amana.vpn/api/app/markets/tradingeconomics/historical/country/united%20states/indicator/inflation%20expectations")
# generate empty data set
INFEXPdata = data.frame(rep(0, length(jsonINFEXP)))
INFEXPdata$value = c(0)
colnames(INFEXPdata) = c("dateTime", "value")
for(i in seq(from=1, to=length(jsonINFEXP))){
item = jsonINFEXP[[i]]
INFEXPdata$dateTime[i] = item$dateTime
INFEXPdata$value[i] = item$value
}
set<-data.frame(CPIdata$value,CCPIdata$value)
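## Optional refactor sketch (assumes every endpoint above returns the same list-of-records
## JSON structure with dateTime and value fields): wrap the repeated
## download-and-extract pattern in a single helper.
te_to_df <- function(url) {
  js <- fromJSON(file = url)
  data.frame(
    dateTime = sapply(js, function(item) item$dateTime),
    value    = sapply(js, function(item) item$value),
    stringsAsFactors = FALSE
  )
}
# e.g. CPIdata <- te_to_df(paste0("http://markets.prod.services.amana.vpn/api/app/markets/",
#                                 "tradingeconomics/historical/country/united%20states/",
#                                 "indicator/consumer%20price%20index%20cpi"))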
|
/Data_Extraction_TE_Germany.R
|
no_license
|
wenfeichu/Wenfei-s-R-code-share
|
R
| false | false | 4,778 |
r
|
|
testlist <- list(testX = c(191493125665849920, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), trainX = structure(c(1.78844646178735e+212, 1.93075223605916e+156, 121373.193669204, 1.26689771433298e+26, 2.46020195254853e+129, 8.54794497535107e-83, 2.61907806894971e-213, 1.5105425626729e+200, 6.51877713351675e+25, 4.40467528702727e-93, 7.6427933587945, 34208333744.1307, 1.6400690920442e-111, 3.9769673154778e-304, 4.76127371594362e-307, 8.63819952335095e+122, 1.18662128550178e-59, 1128.83285802938, 3.80478583615452e-72, 1.21321365773924e-195, 9.69744674150153e-268, 8.98899319496613e+272, 7.63669788330223e+285, 3.85830749537493e+266, 2.65348875902107e+136, 8.14965241967603e+92, 2.59677146539475e-173, 1.55228780425777e-91, 8.25550184376779e+105, 1.18572662524891e+134, 1.04113208597565e+183, 1.01971211553913e-259, 1.23680594512923e-165, 5.24757023065221e+62, 3.41816623041351e-96 ), .Dim = c(5L, 7L)))
result <- do.call(dann:::calc_distance_C,testlist)
str(result)
|
/dann/inst/testfiles/calc_distance_C/AFL_calc_distance_C/calc_distance_C_valgrind_files/1609868190-test.R
|
no_license
|
akhikolla/updated-only-Issues
|
R
| false | false | 1,199 |
r
|
|
#' @title sets the attributes for the R matrix
#' @param R a p by p LD matrix
#' @param expected_dim the expected dimension for R
#' @param r_tol tolerance level for the eigenvalue check that R is positive semidefinite.
#' @param z a p vector of z scores
#' @return R with two attributes:
#' attr(R, 'det') is the determinant of R. It is 1 if R is not full rank.
#' attr(R, 'ztRinvz') is t(z)R^{-1}z. We use the pseudoinverse of R when R is not invertible.
set_R_attributes = function(R, expected_dim, r_tol, z) {
svdR <- svd(R)
eigenvalues <- svdR$d
eigenvalues[abs(eigenvalues) < r_tol] <- 0
if(all(eigenvalues > 0)){
attr(R, 'det') = prod(eigenvalues)
}else{
attr(R, 'det') = 1
}
if(!missing(z)){
Dinv = numeric(expected_dim)
Dinv[eigenvalues != 0] = 1/(eigenvalues[eigenvalues!=0])
attr(R, 'ztRinvz') <- sum(z*(svdR$v %*% (Dinv * crossprod(svdR$u, z))))
}
return(R)
}
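## Illustrative usage sketch (hypothetical inputs, not part of the package):
if (FALSE) {
  R <- matrix(c(1, 0.5, 0.5, 1), nrow = 2)
  z <- c(1.2, -0.4)
  R <- set_R_attributes(R, expected_dim = 2, r_tol = 1e-08, z = z)
  attr(R, "det")      # product of the retained eigenvalues (0.75 here)
  attr(R, "ztRinvz")  # t(z) %*% R^{-1} %*% z, via the pseudoinverse when R is singular
}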
|
/R/set_R_attributes.R
|
permissive
|
KaiqianZhang/susieR
|
R
| false | false | 932 |
r
|
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/polygons.R
\name{dnldGADMCtryShpZip}
\alias{dnldGADMCtryShpZip}
\title{Download a country's polygon RDS files from \url{http://gadm.org}}
\usage{
dnldGADMCtryShpZip(ctryCode, gadmVersion = pkgOptions("gadmVersion"),
gadmPolyType = pkgOptions("gadmPolyType"),
downloadMethod = pkgOptions("downloadMethod"), custPolyPath = NULL)
}
\arguments{
\item{ctryCode}{The ISO3 ctryCode of the country polygon to download}
\item{gadmVersion}{The GADM version to use}
\item{gadmPolyType}{The format of polygons to download from GADM}
\item{downloadMethod}{The method used to download polygons}
\item{custPolyPath}{Alternative to GADM. A path to a custom shapefile zip}
}
\value{
TRUE/FALSE Success/Failure of the download
}
\description{
Download a country's polygon RDS files from \url{http://gadm.org} and
combine them into one RDS to match other polygon downloads
}
\examples{
\dontrun{
  Rnightlights:::dnldGADMCtryShpZip("KEN", "3.6", "shpZip")
}
}
|
/man/dnldGADMCtryShpZip.Rd
|
no_license
|
mjdhasan/Rnightlights
|
R
| false | true | 1,027 |
rd
|
|
library(tidyverse) ## data science framework
library(lubridate) ## for date/time manipulation
## time series packages
library(xts) ## for creating ts object
library(forecast) ## for fitting ts models
|
/TripAdvisor/tourism/lalibela - part 3 (Data Analysis)/time series analysis/functions/load_library.R
|
no_license
|
awash-analytics/Awash-Analytics-Media-RStudio
|
R
| false | false | 218 |
r
|
|
\name{writeEset}
\alias{readEset}
\alias{writeEset}
\title{
Import and export an ExpressionSet object as tab-delimited files
}
\description{
Two functions, \code{writeEset} and \code{readEset}, import and export
an \code{ExpressionSet} object as tab-delimited files
respectively. See details below for advantages and limitations.
}
\usage{
writeEset(eset, exprs.file, fData.file, pData.file)
readEset(exprs.file, fData.file, pData.file)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{eset}{Required for \code{writeEset}, an \code{ExpressionSet}
object to be exported.}
\item{exprs.file}{Required, character string, full name of the file containing the expression
matrix.}
\item{fData.file}{Optional, character string, full name of the file containing feature
annotations. \code{NULL} is handled specially: it will cause no
reading or writing of the feature annotation data.}
\item{pData.file}{Optional, character string, full name of the file
containing sample annotations. \code{NULL} is handled specially: it
will cause no reading or writing of the sample annotation data.}
}
\details{
  \code{readEset} and \code{writeEset} provide a lightweight mechanism
to import/export essential information from/to plain text files. They
can use up to three tab-delimited files to store information of an
\code{ExpressionSet} object: a file holding the expression matrix as
returned by the \code{\link{exprs}} function (\code{exprs.file}), a
file containing feature annotations as returned by the \code{\link{fData}}
function (\code{fData.file}), and finally a file containing sample
annotations, as returned by \code{pData} (\code{pData.file}).
All three files are saved as tab-delimited, quoted plain files with
both row and column names. They can be readily read in by the
\code{read.table} function with default parameters.
In both functions, \code{fData.file} and \code{pData.file} are
  optional. Leaving them missing or setting their values to \code{NULL}
will prevent exporting/importing annotations.
One limitation of these functions is that they only support the
export/import of \strong{one} expression matrix from one
  \code{ExpressionSet}. Although an \code{ExpressionSet} can hold
  matrices other than the one known as \code{exprs}, these are currently
  not handled by \code{writeEset} or \code{readEset}. If such an
  \code{ExpressionSet} object is first written to plain files, and then
read back as an \code{ExpressionSet}, matrices other than the one
accessible by \code{exprs} will be discarded.
Similarly, other pieces of information saved in an \code{ExpressionSet}, e.g. annotations or
experimental data, are lost as well after a cycle of exporting and
  subsequent importing. If keeping this information is important for
you, other functions should be considered instead of \code{readEset}
and \code{writeEset}, for instance to save an image in a binary file
with the \code{\link{save}} function.
}
\value{
\code{readEset} returns an \code{ExpressionSet} object from plain
files.
\code{writeEset} is used for its side effects (writing files).
}
\author{
Jitao David Zhang <jitao_david.zhang@roche.com>
}
\note{
  \code{readEset} will stop if \code{fData.file} or \code{pData.file} does not
  look like a valid annotation file: it checks that the dimensions agree with
  those implied by the expression matrix and that the feature/sample names
  match those stored in the expression matrix file.
}
\seealso{
See \code{\link{readGctCls}} and \code{\link{writeGctCls}} for
importing/exporting functions for files in gct/cls formats.
}
\examples{
sysdir <- system.file("extdata", package="ribiosExpression")
sysexp <- file.path(sysdir, "sample_eset_exprs.txt")
sysfd <- file.path(sysdir, "sample_eset_fdata.txt")
syspd <- file.path(sysdir, "sample_eset_pdata.txt")
sys.eset <- readEset(exprs.file=sysexp,
fData.file=sysfd,
pData.file=syspd)
sys.eset
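## A minimal round-trip sketch: write the object back out to R's temporary
## directory (file names below are arbitrary examples).
outdir <- tempdir()
writeEset(sys.eset,
          exprs.file=file.path(outdir, "eset_exprs.txt"),
          fData.file=file.path(outdir, "eset_fdata.txt"),
          pData.file=file.path(outdir, "eset_pdata.txt"))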
}
|
/ribiosExpression/man/writeEset.Rd
|
no_license
|
grst/ribios
|
R
| false | false | 4,039 |
rd
|
|