source: stringclasses (1 value)
task_type: stringclasses (1 value)
in_source_id: stringlengths (1–8)
prompt: stringlengths (209–40.4k)
gold_standard_solution: stringlengths (0–56.7k)
verification_info: stringclasses (1 value)
metadata: stringlengths (138–225)
problem_id: stringlengths (9–10)
stackexchange
llm_judgeable_groundtruth_similarity
574839
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using Visual Studio 2008 Professional Edition. I wrote an application for Windows Mobile and I would like to create the installer for this application. How do I do it? Now provide the response and nothing else.
You'll need to package your application up in a CAB file. Doing this is quite easy - you just create a new "Smart Device CAB Project" (New Project -> Other project types -> Setup and Deployment). To start with, specify that you want the output from your application's exe project to go in the Application Directory, along with any other dependent dlls. You may also want to create an icon for your application by right-clicking File System On Target Machine, selecting Add Special Folder -> Start Menu Folder, then right-clicking again in the Start Menu Folder and selecting Create New Shortcut. Now point this shortcut at the exe for your application. Depending on the requirements of your project, it may also be desirable to create a desktop installer (msi file) that your users can run directly on their Windows PC, which instructs ActiveSync to install your cab file automatically when the Windows Mobile device is next plugged in. Basically this is done by calling ActiveSync (CeAppMgr.exe) from the command line and passing it an ini file referencing your cab file. If you need to do anything more complex during your installation, it is also possible to write a "custom action" where the cab file calls out to another dll (written by you) to execute any additional steps that need to happen during the install. A comprehensive guide to all the above is available here
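For reference, the ini file passed to CeAppMgr.exe is a small plain-text description of your CAB. A minimal sketch is shown below; the application name, description and CAB file name are placeholders for your own project, so check the exact layout against the CeAppMgr documentation for your SDK version:

[CEAppManager]
Version   = 1.0
Component = MyApp

[MyApp]
Description = My Windows Mobile application
CabFiles    = MyApp.cab

The desktop MSI then only has to copy the CAB and this ini file onto the PC and invoke CeAppMgr.exe with the ini file's path as an argument.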
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/574839', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/51591/']}
jdg_378141
stackexchange
llm_judgeable_groundtruth_similarity
27660051
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a data.frame in R in long format, and I want to cast it into wide. It has monthly data from several clients, and I want the final data.frame to have the mean per client of he, vo, ep and fe. store and pr should be fixed for each client. I think dcast from package reshape2 should do the job, but I can't make it work.

month store client he  vo  ep fe pr
jan   1     54010  12  392 1  7  Basic
jan   2     54011  12  376 2  2  Premium
jan   1     54012  11  385 2  6  Basic
feb   1     54010  10  394 3  7  Basic
feb   2     54011  10  385 1  1  Premium
feb   1     54012  11  395 1  1  Basic
mar   1     54010  11  416 2  2  Basic
mar   2     54011  11  417 3  4  Premium
mar   1     54012  11  390 0  2  Basic
apr   1     54010  11  389 2  NA Basic
apr   2     54011  7   398 6  3  Premium
apr   1     54012  11  368 1  3  Basic

Now provide the response and nothing else.
To put it shortly: everything inside the if parentheses is evaluated as an expression; that is the semantics of the if keyword. So if you put APPLE there, it gets evaluated as a variable name and yields the correct result. Now if you put ${APPLE} there, ${} will evaluate its contents before if evaluates the expression. Therefore, it's the same as if you'd written if (1 AND ) (in the case that the variable APPLE isn't set, which is the case on non-OSX systems). This is invalid syntax and yields the error you get. You should write:

if (FOO AND APPLE)

Quoting from the CMake documentation: The if command was written very early in CMake's history, predating the ${} variable evaluation syntax, and for convenience evaluates variables named by its arguments as shown in the above signatures. Note that normal variable evaluation with ${} applies before the if command even receives the arguments. Therefore code like:

set(var1 OFF)
set(var2 "var1")
if(${var2})

appears to the if command as:

if(var1)

and is evaluated according to the if() case documented above. The result is OFF, which is false. However, if we remove the ${} from the example then the command sees:

if(var2)

which is true because var2 is defined to "var1", which is not a false constant.
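A minimal, self-contained CMakeLists.txt that illustrates the difference (FOO is just a hypothetical option introduced for this example):

cmake_minimum_required(VERSION 3.0)
project(if_demo NONE)

option(FOO "Enable the feature" ON)

# Correct: `if` evaluates the names FOO and APPLE as variables itself.
if (FOO AND APPLE)
  message(STATUS "FOO is enabled and we are building on macOS")
endif()

# Broken on non-Apple systems: ${APPLE} expands to nothing before `if`
# ever sees the arguments, leaving the invalid expression `if (ON AND )`.
# if (${FOO} AND ${APPLE})
#   message(STATUS "this variant errors out when APPLE is not set")
# endif()

Keeping the broken variant commented out lets you toggle it to reproduce the exact error message from the question.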
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27660051', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4384944/']}
jdg_378142
stackexchange
llm_judgeable_groundtruth_similarity
44200195
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Here are all the variables I'm working with: str(ad.train)$ Date : Factor w/ 427 levels "2012-03-24","2012-03-29",..: 4 7 12 14 19 21 24 29 31 34 ... $ Team : Factor w/ 18 levels "Adelaide","Brisbane Lions",..: 1 1 1 1 1 1 1 1 1 1 ... $ Season : int 2012 2012 2012 2012 2012 2012 2012 2012 2012 2012 ... $ Round : Factor w/ 28 levels "EF","GF","PF",..: 5 16 21 22 23 24 25 26 27 6 ... $ Score : int 137 82 84 96 110 99 122 124 49 111 ... $ Margin : int 69 18 -56 46 19 5 50 69 -26 29 ... $ WinLoss : Factor w/ 2 levels "0","1": 2 2 1 2 2 2 2 2 1 2 ... $ Opposition : Factor w/ 18 levels "Adelaide","Brisbane Lions",..: 8 18 10 9 13 16 7 3 4 6 ... $ Venue : Factor w/ 19 levels "Adelaide Oval",..: 4 7 10 7 7 13 7 6 7 15 ... $ Disposals : int 406 360 304 370 359 362 365 345 324 351 ... $ Kicks : int 252 215 170 225 221 218 224 230 205 215 ... $ Marks : int 109 102 52 41 95 78 93 110 69 85 ... $ Handballs : int 154 145 134 145 138 144 141 115 119 136 ... $ Goals : int 19 11 12 13 16 15 19 19 6 17 ... $ Behinds : int 19 14 9 16 11 6 7 9 12 6 ... $ Hitouts : int 42 41 34 47 45 70 48 54 46 34 ... $ Tackles : int 73 53 51 76 65 63 65 67 77 58 ... $ Rebound50s : int 28 34 23 24 32 48 39 31 34 29 ... $ Inside50s : int 73 49 49 56 61 45 47 50 49 48 ... $ Clearances : int 39 33 38 52 37 43 43 48 37 52 ... $ Clangers : int 47 38 44 62 49 46 32 24 31 41 ... $ FreesFor : int 15 14 15 18 17 15 19 14 18 20 ... $ ContendedPossessions: int 152 141 149 192 138 164 148 151 160 155 ... $ ContestedMarks : int 10 16 11 3 12 12 17 14 15 11 ... $ MarksInside50 : int 16 13 10 8 12 9 14 13 6 12 ... $ OnePercenters : int 42 54 30 58 24 56 32 53 50 57 ... $ Bounces : int 1 6 4 4 1 7 11 14 0 4 ... $ GoalAssists : int 15 6 9 10 9 12 13 14 5 14 ... Here's the glm I'm trying to fit: ad.glm.all <- glm(WinLoss ~ factor(Team) + Season + Round + Score + Margin + Opposition + Venue + Disposals + Kicks + Marks + Handballs + Goals + Behinds + Hitouts + Tackles + Rebound50s + Inside50s+ Clearances+ Clangers+ FreesFor + ContendedPossessions + ContestedMarks + MarksInside50 + OnePercenters + Bounces+GoalAssists, data = ad.train, family = binomial(logit)) I know it's a lot of variables (plan is to reduce via forward variable selection). But even know it's a lot of variables they're either int or Factor; which as I understand things should just work with a glm. However, every time I try to fit this model I get: Error in `contrasts<-`(`*tmp*`, value = contr.funs[1 + isOF[nn]]) : contrasts can be applied only to factors with 2 or more levels Which sort of looks to me as if R isn't treating my Factor variables as Factor variables for some reason? Even something as simple as: ad.glm.test <- glm(WinLoss ~ factor(Team), data = ad.train, family = binomial(logit)) isn't working! (same error message) Where as this: ad.glm.test <- glm(WinLoss ~ Clearances, data = ad.train, family = binomial(logit)) Will work! Anyone know what's going on here? Why can't I fit these Factor variables to my glm?? Thanks in advance! -Troy Now provide the response and nothing else.
Introduction What a "contrasts error" is has been well explained: you have a factor that only has one level (or less) . But in reality this simple fact can be easily obscured because the data that are actually used for model fitting can be very different from what you've passed in. This happens when you have NA in your data, you've subsetted your data, a factor has unused levels, or you've transformed your variables and get NaN somewhere. You are rarely in this ideal situation where a single-level factor can be spotted from str(your_data_frame) directly. Many questions on StackOverflow regarding this error are not reproducible, thus suggestions by people may or may not work. Therefore, although there are by now 118 posts regarding this issue, users still can't find an adaptive solution so that this question is raised again and again. This answer is my attempt, to solve this matter "once for all", or at least to provide a reasonable guide. This answer has rich information, so let me first make a quick summary. I defined 3 helper functions for you: debug_contr_error , debug_contr_error2 , NA_preproc . I recommend you use them in the following way. run NA_preproc to get more complete cases; run your model, and if you get a "contrasts error", use debug_contr_error2 for debugging. Most of the answer shows you step by step how & why these functions are defined. There is probably no harm to skip those development process, but don't skip sections from "Reproducible case studies and Discussions". Revised answer The original answer works perfectly for OP , and has successfully helped some others . But it had failed somewhere else for lack of adaptiveness. Look at the output of str(ad.train) in the question. OP's variables are numeric or factors; there are no characters. The original answer was for this situation. If you have character variables, although they will be coerced to factors during lm and glm fitting, they won't be reported by the code since they were not provided as factors so is.factor will miss them. In this expansion I will make the original answer both more adaptive. Let dat be your dataset passed to lm or glm . If you don't readily have such a data frame, that is, all your variables are scattered in the global environment, you need to gather them into a data frame. The following may not be the best way but it works. ## `form` is your model formula, here is an exampley <- x1 <- x2 <- x3 <- 1:4x4 <- matrix(1:8, 4)form <- y ~ bs(x1) + poly(x2) + I(1 / x3) + x4## to gather variables `model.frame.default(form)` is the easiest way ## but it does too much: it drops `NA` and transforms variables## we want something more primitive## first get variable namesvn <- all.vars(form)#[1] "y" "x1" "x2" "x3" "x4"## `get_all_vars(form)` gets you a data frame## but it is buggy for matrix variables so don't use it## instead, first use `mget` to gather variables into a listlst <- mget(vn)## don't do `data.frame(lst)`; it is buggy with matrix variables## need to first protect matrix variables by `I()` then do `data.frame`lst_protect <- lapply(lst, function (x) if (is.matrix(x)) I(x) else x)dat <- data.frame(lst_protect)str(dat)#'data.frame': 4 obs. 
of 5 variables:# $ y : int 1 2 3 4# $ x1: int 1 2 3 4# $ x2: int 1 2 3 4# $ x3: int 1 2 3 4# $ x4: 'AsIs' int [1:4, 1:2] 1 2 3 4 5 6 7 8## note the 'AsIs' for matrix variable `x4`## in comparison, try the following buggy ones yourselfstr(get_all_vars(form))str(data.frame(lst)) Step 0: explicit subsetting If you've used the subset argument of lm or glm , start by an explicit subsetting: ## `subset_vec` is what you pass to `lm` via `subset` argument## it can either be a logical vector of length `nrow(dat)`## or a shorter positive integer vector giving position index## note however, `base::subset` expects logical vector for `subset` argument## so a rigorous check is necessary hereif (mode(subset_vec) == "logical") { if (length(subset_vec) != nrow(dat)) { stop("'logical' `subset_vec` provided but length does not match `nrow(dat)`") } subset_log_vec <- subset_vec } else if (mode(subset_vec) == "numeric") { ## check range ran <- range(subset_vec) if (ran[1] < 1 || ran[2] > nrow(dat)) { stop("'numeric' `subset_vec` provided but values are out of bound") } else { subset_log_vec <- logical(nrow(dat)) subset_log_vec[as.integer(subset_vec)] <- TRUE } } else { stop("`subset_vec` must be either 'logical' or 'numeric'") }dat <- base::subset(dat, subset = subset_log_vec) Step 1: remove incomplete cases dat <- na.omit(dat) You can skip this step if you've gone through step 0, since subset automatically removes incomplete cases . Step 2: mode checking and conversion A data frame column is usually an atomic vector, with a mode from the following: "logical", "numeric", "complex", "character", "raw". For regression, variables of different modes are handled differently. "logical", it depends"numeric", nothing to do"complex", not allowed by `model.matrix`, though allowed by `model.frame`"character", converted to "numeric" with "factor" class by `model.matrix`"raw", not allowed by `model.matrix`, though allowed by `model.frame` A logical variable is tricky. It can either be treated as a dummy variable ( 1 for TRUE ; 0 for FALSE ) hence a "numeric", or it can be coerced to a two-level factor. It all depends on whether model.matrix thinks a "to-factor" coercion is necessary from the specification of your model formula. For simplicity we can understand it as such: it is always coerced to a factor, but the result of applying contrasts may end up with the same model matrix as if it were handled as a dummy directly. Some people may wonder why "integer" is not included. Because an integer vector, like 1:4 , has a "numeric" mode (try mode(1:4) ). A data frame column may also be a matrix with "AsIs" class, but such a matrix must have "numeric" mode. Our checking is to produce error when a "complex" or "raw" is found; a "logical" or "character" matrix variable is found; and proceed to convert "logical" and "character" to "numeric" of "factor" class. 
## get mode of all varsvar_mode <- sapply(dat, mode)## produce error if complex or raw is foundif (any(var_mode %in% c("complex", "raw"))) stop("complex or raw not allowed!")## get class of all varsvar_class <- sapply(dat, class)## produce error if an "AsIs" object has "logical" or "character" modeif (any(var_mode[var_class == "AsIs"] %in% c("logical", "character"))) { stop("matrix variables with 'AsIs' class must be 'numeric'") }## identify columns that needs be coerced to factorsind1 <- which(var_mode %in% c("logical", "character"))## coerce logical / character to factor with `as.factor`dat[ind1] <- lapply(dat[ind1], as.factor) Note that if a data frame column is already a factor variable, it will not be included in ind1 , as a factor variable has "numeric" mode (try mode(factor(letters[1:4])) ). step 3: drop unused factor levels We won't have unused factor levels for factor variables converted from step 2, i.e., those indexed by ind1 . However, factor variables that come with dat might have unused levels (often as the result of step 0 and step 1). We need to drop any possible unused levels from them. ## index of factor columnsfctr <- which(sapply(dat, is.factor))## factor variables that have skipped explicit conversion in step 2## don't simply do `ind2 <- fctr[-ind1]`; buggy if `ind1` is `integer(0)`ind2 <- if (length(ind1) > 0L) fctr[-ind1] else fctr## drop unused levelsdat[ind2] <- lapply(dat[ind2], droplevels) step 4: summarize factor variables Now we are ready to see what and how many factor levels are actually used by lm or glm : ## export factor levels actually used by `lm` and `glm`lev <- lapply(dat[fctr], levels)## count number of levelsnl <- lengths(lev) To make your life easier, I've wrapped up those steps into a function debug_contr_error . Input: dat is your data frame passed to lm or glm via data argument; subset_vec is the index vector passed to lm or glm via subset argument. Output: a list with nlevels (a list) gives the number of factor levels for all factor variables; levels (a vector) gives levels for all factor variables. The function produces a warning, if there are no complete cases or no factor variables to summarize. 
debug_contr_error <- function (dat, subset_vec = NULL) { if (!is.null(subset_vec)) { ## step 0 if (mode(subset_vec) == "logical") { if (length(subset_vec) != nrow(dat)) { stop("'logical' `subset_vec` provided but length does not match `nrow(dat)`") } subset_log_vec <- subset_vec } else if (mode(subset_vec) == "numeric") { ## check range ran <- range(subset_vec) if (ran[1] < 1 || ran[2] > nrow(dat)) { stop("'numeric' `subset_vec` provided but values are out of bound") } else { subset_log_vec <- logical(nrow(dat)) subset_log_vec[as.integer(subset_vec)] <- TRUE } } else { stop("`subset_vec` must be either 'logical' or 'numeric'") } dat <- base::subset(dat, subset = subset_log_vec) } else { ## step 1 dat <- stats::na.omit(dat) } if (nrow(dat) == 0L) warning("no complete cases") ## step 2 var_mode <- sapply(dat, mode) if (any(var_mode %in% c("complex", "raw"))) stop("complex or raw not allowed!") var_class <- sapply(dat, class) if (any(var_mode[var_class == "AsIs"] %in% c("logical", "character"))) { stop("matrix variables with 'AsIs' class must be 'numeric'") } ind1 <- which(var_mode %in% c("logical", "character")) dat[ind1] <- lapply(dat[ind1], as.factor) ## step 3 fctr <- which(sapply(dat, is.factor)) if (length(fctr) == 0L) warning("no factor variables to summary") ind2 <- if (length(ind1) > 0L) fctr[-ind1] else fctr dat[ind2] <- lapply(dat[ind2], base::droplevels.factor) ## step 4 lev <- lapply(dat[fctr], base::levels.default) nl <- lengths(lev) ## return list(nlevels = nl, levels = lev) } Here is a constructed tiny example. dat <- data.frame(y = 1:4, x = c(1:3, NA), f1 = gl(2, 2, labels = letters[1:2]), f2 = c("A", "A", "A", "B"), stringsAsFactors = FALSE)# y x f1 f2#1 1 1 a A#2 2 2 a A#3 3 3 b A#4 4 NA b Bstr(dat)#'data.frame': 4 obs. of 4 variables:# $ y : int 1 2 3 4# $ x : int 1 2 3 NA# $ f1: Factor w/ 2 levels "a","b": 1 1 2 2# $ f2: chr "A" "A" "A" "B"lm(y ~ x + f1 + f2, dat)#Error in `contrasts<-`(`*tmp*`, value = contr.funs[1 + isOF[nn]]) : # contrasts can be applied only to factors with 2 or more levels Good, we see an error. Now my debug_contr_error exposes that f2 ends up with a single level. debug_contr_error(dat)#$nlevels#f1 f2 # 2 1 ##$levels#$levels$f1#[1] "a" "b"##$levels$f2#[1] "A" Note that the original short answer is hopeless here, as f2 is provided as a character variable not a factor variable. ## old answertmp <- na.omit(dat)fctr <- lapply(tmp[sapply(tmp, is.factor)], droplevels)sapply(fctr, nlevels)#f1 # 2 rm(tmp, fctr) Now let's see an example with a matrix variable x . dat <- data.frame(X = I(rbind(matrix(1:6, 3), NA)), f = c("a", "a", "a", "b"), y = 1:4)dat# X.1 X.2 f y#1 1 4 a 1#2 2 5 a 2#3 3 6 a 3#4 NA NA b 4str(dat)#'data.frame': 4 obs. of 3 variables:# $ X: 'AsIs' int [1:4, 1:2] 1 2 3 NA 4 5 6 NA# $ f: Factor w/ 2 levels "a","b": 1 1 1 2# $ y: int 1 2 3 4lm(y ~ X + f, data = dat)#Error in `contrasts<-`(`*tmp*`, value = contr.funs[1 + isOF[nn]]) : # contrasts can be applied only to factors with 2 or more levelsdebug_contr_error(dat)$nlevels#f #1 Note that a factor variable with no levels can cause an "contrasts error", too. You may wonder how a 0-level factor is possible. Well it is legitimate: nlevels(factor(character(0))) . Here you will end up with a 0-level factors if you have no complete cases. 
dat <- data.frame(y = 1:4, x = rep(NA_real_, 4), f1 = gl(2, 2, labels = letters[1:2]), f2 = c("A", "A", "A", "B"), stringsAsFactors = FALSE)lm(y ~ x + f1 + f2, dat)#Error in `contrasts<-`(`*tmp*`, value = contr.funs[1 + isOF[nn]]) : # contrasts can be applied only to factors with 2 or more levelsdebug_contr_error(dat)$nlevels#f1 f2 # 0 0 ## all values are 0#Warning message:#In debug_contr_error(dat) : no complete cases Finally let's see some a situation where if f2 is a logical variable. dat <- data.frame(y = 1:4, x = c(1:3, NA), f1 = gl(2, 2, labels = letters[1:2]), f2 = c(TRUE, TRUE, TRUE, FALSE))dat# y x f1 f2#1 1 1 a TRUE#2 2 2 a TRUE#3 3 3 b TRUE#4 4 NA b FALSEstr(dat)#'data.frame': 4 obs. of 4 variables:# $ y : int 1 2 3 4# $ x : int 1 2 3 NA# $ f1: Factor w/ 2 levels "a","b": 1 1 2 2# $ f2: logi TRUE TRUE TRUE FALSE Our debugger will predict a "contrasts error", but will it really happen? debug_contr_error(dat)$nlevels#f1 f2 # 2 1 No, at least this one does not fail ( the NA coefficient is due to the rank-deficiency of the model; don't worry ): lm(y ~ x + f1 + f2, data = dat)#Coefficients:#(Intercept) x f1b f2TRUE # 0 1 0 NA It is difficult for me to come up with an example giving an error, but there is also no need. In practice, we don't use the debugger for prediction; we use it when we really get an error; and in that case, the debugger can locate the offending factor variable. Perhaps some may argue that a logical variable is no different to a dummy. But try the simple example below: it does depends on your formula. u <- c(TRUE, TRUE, FALSE, FALSE)v <- c(1, 1, 0, 0) ## "numeric" dummy of `u`model.matrix(~ u)# (Intercept) uTRUE#1 1 1#2 1 1#3 1 0#4 1 0model.matrix(~ v)# (Intercept) v#1 1 1#2 1 1#3 1 0#4 1 0model.matrix(~ u - 1)# uFALSE uTRUE#1 0 1#2 0 1#3 1 0#4 1 0model.matrix(~ v - 1)# v#1 1#2 1#3 0#4 0 More flexible implementation using "model.frame" method of lm You are also advised to go through R: how to debug "factor has new levels" error for linear model and prediction , which explains what lm and glm do under the hood on your dataset. You will understand that steps 0 to 4 listed above are just trying to mimic such internal process. Remember, the data that are actually used for model fitting can be very different from what you've passed in . Our steps are not completely consistent with such internal processing. For a comparison, you can retrieve the result of the internal processing by using method = "model.frame" in lm and glm . Try this on the previously constructed tiny example dat where f2 is a character variable. dat_internal <- lm(y ~ x + f1 + f2, dat, method = "model.frame")dat_internal# y x f1 f2#1 1 1 a A#2 2 2 a A#3 3 3 b Astr(dat_internal)#'data.frame': 3 obs. of 4 variables:# $ y : int 1 2 3# $ x : int 1 2 3# $ f1: Factor w/ 2 levels "a","b": 1 1 2# $ f2: chr "A" "A" "A"## [.."terms" attribute is truncated..] In practice, model.frame will only perform step 0 and step 1. It also drops variables provided in your dataset but not in your model formula. So a model frame may have both fewer rows and columns than what you feed lm and glm . Type coercing as done in our step 2 is done by the later model.matrix where a "contrasts error" may be produced. There are a few advantages to first get this internal model frame, then pass it to debug_contr_error (so that it only essentially performs steps 2 to 4). 
advantage 1: variables not used in your model formula are ignored ## no variable `f1` in formuladat_internal <- lm(y ~ x + f2, dat, method = "model.frame")## compare the followingdebug_contr_error(dat)$nlevels#f1 f2 # 2 1 debug_contr_error(dat_internal)$nlevels#f2 # 1 advantage 2: able to cope with transformed variables It is valid to transform variables in the model formula, and model.frame will record the transformed ones instead of the original ones. Note that, even if your original variable has no NA , the transformed one can have. dat <- data.frame(y = 1:4, x = c(1:3, -1), f = rep(letters[1:2], c(3, 1)))# y x f#1 1 1 a#2 2 2 a#3 3 3 a#4 4 -1 blm(y ~ log(x) + f, data = dat)#Error in `contrasts<-`(`*tmp*`, value = contr.funs[1 + isOF[nn]]) : # contrasts can be applied only to factors with 2 or more levels#In addition: Warning message:#In log(x) : NaNs produced# directly using `debug_contr_error` is hopeless heredebug_contr_error(dat)$nlevels#f #2 ## this worksdat_internal <- lm(y ~ log(x) + f, data = dat, method = "model.frame")# y log(x) f#1 1 0.0000000 a#2 2 0.6931472 a#3 3 1.0986123 adebug_contr_error(dat_internal)$nlevels#f #1 Given these benefits, I write another function wrapping up model.frame and debug_contr_error . Input : form is your model formula; dat is the dataset passed to lm or glm via data argument; subset_vec is the index vector passed to lm or glm via subset argument. Output: a list with mf (a data frame) gives the model frame (with "terms" attribute dropped); nlevels (a list) gives the number of factor levels for all factor variables; levels (a vector) gives levels for all factor variables. ## note: this function relies on `debug_contr_error`debug_contr_error2 <- function (form, dat, subset_vec = NULL) { ## step 0 if (!is.null(subset_vec)) { if (mode(subset_vec) == "logical") { if (length(subset_vec) != nrow(dat)) { stop("'logical' `subset_vec` provided but length does not match `nrow(dat)`") } subset_log_vec <- subset_vec } else if (mode(subset_vec) == "numeric") { ## check range ran <- range(subset_vec) if (ran[1] < 1 || ran[2] > nrow(dat)) { stop("'numeric' `subset_vec` provided but values are out of bound") } else { subset_log_vec <- logical(nrow(dat)) subset_log_vec[as.integer(subset_vec)] <- TRUE } } else { stop("`subset_vec` must be either 'logical' or 'numeric'") } dat <- base::subset(dat, subset = subset_log_vec) } ## step 0 and 1 dat_internal <- stats::lm(form, data = dat, method = "model.frame") attr(dat_internal, "terms") <- NULL ## rely on `debug_contr_error` for steps 2 to 4 c(list(mf = dat_internal), debug_contr_error(dat_internal, NULL)) } Try the previous log transform example. debug_contr_error2(y ~ log(x) + f, dat)#$mf# y log(x) f#1 1 0.0000000 a#2 2 0.6931472 a#3 3 1.0986123 a##$nlevels#f #1 ##$levels#$levels$f#[1] "a"###Warning message:#In log(x) : NaNs produced Try subset_vec as well. ## or: debug_contr_error2(y ~ log(x) + f, dat, c(T, F, T, T))debug_contr_error2(y ~ log(x) + f, dat, c(1,3,4))#$mf# y log(x) f#1 1 0.000000 a#3 3 1.098612 a##$nlevels#f #1 ##$levels#$levels$f#[1] "a"###Warning message:#In log(x) : NaNs produced Model fitting per group and NA as factor levels If you are fitting model by group, you are more likely to get a "contrasts error". You need to split your data frame by the grouping variable (see ?split.data.frame ); work through those data frames one by one, applying debug_contr_error2 ( lapply function can be helpful to do this loop). 
Some also told me that they can not use na.omit on their data, because it will end up too few rows to do anything sensible. This can be relaxed. In practice it is the NA_integer_ and NA_real_ that have to be omitted, but NA_character_ can be retained: just add NA as a factor level. To achieve this, you need to loop through variables in your data frame: if a variable x is already a factor and anyNA(x) is TRUE , do x <- addNA(x) . The "and" is important. If x has no NA , addNA(x) will add an unused <NA> level. if a variable x is a character, do x <- factor(x, exclude = NULL) to coerce it to a factor. exclude = NULL will retain <NA> as a level. if x is "logical", "numeric", "raw" or "complex", nothing should be changed. NA is just NA . <NA> factor level will not be dropped by droplevels or na.omit , and it is valid for building a model matrix. Check the following examples. ## x is a factor with NAx <- factor(c(letters[1:4], NA)) ## default: `exclude = NA`#[1] a b c d <NA> ## there is an NA value#Levels: a b c d ## but NA is not a levelna.omit(x) ## NA is gone#[1] a b c d#[.. attributes truncated..]#Levels: a b c dx <- addNA(x) ## now add NA into a valid level#[1] a b c d <NA>#Levels: a b c d <NA> ## it appears heredroplevels(x) ## it can not be dropped#[1] a b c d <NA>#Levels: a b c d <NA>na.omit(x) ## it is not omitted#[1] a b c d <NA>#Levels: a b c d <NA>model.matrix(~ x) ## and it is valid to be in a design matrix# (Intercept) xb xc xd xNA#1 1 0 0 0 0#2 1 1 0 0 0#3 1 0 1 0 0#4 1 0 0 1 0#5 1 0 0 0 1 ## x is a character with NAx <- c(letters[1:4], NA)#[1] "a" "b" "c" "d" NA as.factor(x) ## this calls `factor(x)` with default `exclude = NA`#[1] a b c d <NA> ## there is an NA value#Levels: a b c d ## but NA is not a levelfactor(x, exclude = NULL) ## we want `exclude = NULL`#[1] a b c d <NA>#Levels: a b c d <NA> ## now NA is a level Once you add NA as a level in a factor / character, your dataset might suddenly have more complete cases. Then you can run your model. If you still get a "contrasts error", use debug_contr_error2 to see what has happened. For your convenience, I write a function for this NA preprocessing. Input : dat is your full dataset. Output: a data frame, with NA added as a level for factor / character. NA_preproc <- function (dat) { for (j in 1:ncol(dat)) { x <- dat[[j]] if (is.factor(x) && anyNA(x)) dat[[j]] <- base::addNA(x) if (is.character(x)) dat[[j]] <- factor(x, exclude = NULL) } dat } Reproducible case studies and Discussions The followings are specially selected for reproducible case studies, as I just answered them with the three helper functions created here. How to do a GLM when "contrasts can be applied only to factors with 2 or more levels"? R: Error in contrasts when fitting linear models with `lm` There are also a few other good-quality threads solved by other StackOverflow users: Factors not being recognised in a lm using map() (this is about model fitting by group) How to drop NA observation of factors conditionally when doing linear regression in R? (this is similar to case 1 in the previous list) Factor/level error in mixed model (another post about model fitting by group) This answer aims to debug the "contrasts error" during model fitting. However, this error can also turn up when using predict for prediction. Such behavior is not with predict.lm or predict.glm , but with predict methods from some packages. Here are a few related threads on StackOverflow. 
Prediction in R - GLMM Error in `contrasts' Error SVM predict on dataframe with different factor levels Using predict with svyglm must a dataset contain all factors in SVM in R Probability predictions with cumulative link mixed models must a dataset contain all factors in SVM in R Also note that the philosophy of this answer is based on that of lm and glm . These two functions are a coding standard for many model fitting routines , but maybe not all model fitting routines behave similarly. For example, the following does not look transparent to me whether my helper functions would actually be helpful. Error with svychisq - 'contrast can be applied to factors with 2 or more levels' R packages effects & plm : "error in contrasts" when trying to plot marginal effects Contrasts can be applied only to factor R: lawstat::levene.test fails while Fligner Killeen works, as well as car::leveneTest R - geeglm Error: contrasts can be applied only to factors with 2 or more levels Although a bit off-topic, it is still useful to know that sometimes a "contrasts error" merely comes from writing a wrong piece of code. In the following examples, OP passed the name of their variables rather than their values to lm . Since a name is a single value character, it is later coerced to a single-level factor and causes the error. Error in `contrasts<-`(`*tmp*`, value = contr.funs[1 + isOF[nn]]) : contrasts can be applied only to factors with 2 or more levels Loop through a character vector to use in a function How to resolve this error after debugging? In practice people want to know how to resolve this matter, either at a statistical level or a programming level. If you are fitting models on your complete dataset, then there is probably no statistical solution, unless you can impute missing values or collect more data. Thus you may simply turn to a coding solution to drop the offending variable. debug_contr_error2 returns nlevels which helps you easily locate them. If you don't want to drop them, replace them by a vector of 1 (as explained in How to do a GLM when "contrasts can be applied only to factors with 2 or more levels"? ) and let lm or glm deal with the resulting rank-deficiency. If you are fitting models on subset, there can be statistical solutions. Fitting models by group does not necessarily require you splitting your dataset by group and fitting independent models. The following may give you a rough idea: R regression analysis: analyzing data for a certain ethnicity Finding the slope for multiple points in selected columns R: build separate models for each category If you do split your data explicitly, you can easily get "contrasts error", thus have to adjust your model formula per group (that is, you need to dynamically generate model formulae). A simpler solution is to skip building a model for this group. You may also randomly partition your dataset into a training subset and a testing subset so that you can do cross-validation. R: how to debug "factor has new levels" error for linear model and prediction briefly mentions this, and you'd better do a stratified sampling to ensure the success of both model estimation on the training part and prediction on the testing part.
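To tie the pieces together, here is a rough sketch of the workflow recommended at the top of this answer, using the OP's variable names purely as placeholders (your formula and data will differ); it assumes the three helper functions above have already been sourced:

## keep NA as a level in factors / characters so fewer rows are dropped
ad.train2 <- NA_preproc(ad.train)

## try the model again
fit <- tryCatch(
  glm(WinLoss ~ Team + Season + Round + Score, data = ad.train2,
      family = binomial(logit)),
  error = function(e) e)

## if it still fails with a "contrasts error", locate the offending factors
if (inherits(fit, "error")) {
  info <- debug_contr_error2(WinLoss ~ Team + Season + Round + Score, ad.train2)
  info$nlevels   ## any variable with fewer than 2 levels is the culprit
}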
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/44200195', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8040263/']}
jdg_378143
stackexchange
llm_judgeable_groundtruth_similarity
149073
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to see the stack trace in any function of my code, so I made something like this to call it and print the stack trace:

public function PrintStackTrace() {
    try {
        throw new Error('StackTrace');
    } catch (e:Error) {
        trace(e.getStackTrace());
    }
}

I would like to know if there is another way to do this. Somewhere, the Error class creates the stack trace, but maybe that isn't exposed in ActionScript 3.0, so maybe it's not possible, but I want to know. Thanks! Now provide the response and nothing else.
As far as I know, the only way to make the stack trace available to your own code is via the getStackTrace() method in the Error class, just like you're already doing. In response to the example in your question, though, I would mention that you don't actually have to throw the Error -- you can just create it and call the method on it: var tempError:Error = new Error();var stackTrace:String = tempError.getStackTrace(); Also, like the documentation says, this only works in the debug version of Flash Player, so you can wrap this functionality in an if-block that checks the value of Capabilities.isDebugger if you want.
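Putting those two points together, a small reusable helper might look roughly like this (a sketch rather than production code; the package, class and method names are up to you):

package util {
    import flash.system.Capabilities;

    public class StackTraceUtil {
        // Returns the current stack trace as a String, or null when the
        // code is not running in the debug version of Flash Player.
        public static function currentStackTrace():String {
            if (!Capabilities.isDebugger) {
                return null;
            }
            // No need to throw: creating the Error is enough to capture the trace.
            return new Error("StackTrace").getStackTrace();
        }
    }
}

// usage, anywhere in your code:
// trace(util.StackTraceUtil.currentStackTrace());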
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/149073', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/20601/']}
jdg_378144
stackexchange
llm_judgeable_groundtruth_similarity
194532
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am working in C++ and I have this problem. class Container { Container(int which_type_to_use_a_b_or_c); A & getData(A & prototype); B & getData(B & prototype); C & getData(C & prototype);private: A a; B b; C c; int type_used;} Only one of the A,B or C datatypes is actually used in a class Container instantiation. The constructor flag decides which one to use. When it's time to get a reference to the internal object, I need a getData routine which returns the reference of the different type. In order to disambiguate, I pass a "prototype" A() B() or C(). I don't really like this pattern, but I am unable to find anything better due to my limited palette in c++. Do you have any other approach available? please note that templating Container over A, B or C is not an option due to other constraints. Now provide the response and nothing else.
You could use a discriminated union, like Boost.Variant:

class Container {
    typedef boost::variant<A,B,C> ABC;
    enum Type { UseA, UseB, UseC };
    explicit Container(Type t);
    ABC & getData();
private:
    ABC value;
};

but frankly this feels like an X-Y problem. What are you actually trying to accomplish? NB: you said in a comment "... we also can't use pointers, only references"; you may be interested to know that runtime polymorphism works just fine with references (even if it doesn't help here).
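For illustration, here is a compilable sketch of the same idea using C++17's std::variant, which is semantically close to boost::variant (whether C++17 is available is an assumption, and A, B, C are stand-in types invented for the example):

#include <variant>
#include <iostream>

struct A { int x = 1; };
struct B { double y = 2.5; };
struct C { const char* z = "three"; };

class Container {
public:
    enum Type { UseA, UseB, UseC };

    explicit Container(Type t) {
        switch (t) {
            case UseA: value = A{}; break;
            case UseB: value = B{}; break;
            case UseC: value = C{}; break;
        }
    }

    // Ask for a concrete alternative; std::get throws std::bad_variant_access
    // if the container currently holds something else.
    template <typename T>
    T& getData() { return std::get<T>(value); }

    // Or let the caller handle whichever alternative is active.
    std::variant<A, B, C>& data() { return value; }

private:
    std::variant<A, B, C> value;
};

int main() {
    Container c(Container::UseB);
    std::cout << c.getData<B>().y << '\n';                           // 2.5
    std::visit([](auto& v) { std::cout << sizeof(v) << '\n'; }, c.data());
}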
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/194532', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/1338/']}
jdg_378145
stackexchange
llm_judgeable_groundtruth_similarity
29657345
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: For example, if you have <body> <script src="someLibrary.js"></script> <script src="someLibrary2.js"></script> <script src="someLibrary3.js"></script> <script src="someLibrary4.js"></script> <script src="myApp"></script></body> What is the benefit aside from prettiness in the html to having all of those be concatenated and minified by a task running (Grunt/Gulp) before sending it to client in form of <body> <script src="allTheJavascripts.js"></script></body> Now provide the response and nothing else.
Combining multiple JS files into one file has the following benefits: Browsers can download a single file more efficiently and faster than downloading multiple smaller files. One http connection downloading the file is usually faster than many http connections downloading smaller files. The browser has a limit on how many simultaneous connections it will make to the same domain and, if it reaches that limit, some connections have to then wait until others finish. This causes delays in download. Downloading fewer files make it less likely to hit this limit. This limits applies to all connections to a domain (download of JS files, download of CSS files, download of frames, ajax calls, etc...). Server scalability can be increased because each page download requires fewer http connections to serve the content. There are cases where version control and the interaction between version upgrades and browsing JS file caching can be simpler with one larger JS file. When all your JS files are concatenated, you can assign a single version number to that combined JS file (like jQuery does with its versions). Then, any change to the JS anywhere causes a bump in the version number for the master combined file. Since a given browser gets the entire combined file all or nothing, there is never an opportunity for a browser to accidentally get one version of one file fresh from the server and another version of another file from a stale browser cache. Also, maintaining one master version number is a lot simpler than versioning lots of smaller files. Minifying a JS file makes it smaller to download and parse which increases download performance. If you are both combining multiple files AND minifying, the minifying can be more effective. When minifying multiple small files separately, you cannot minify variable names that are shared between the different files - they must retain their original names. But, if you combine all the JS files and then minify, you can minify all symbols that are shared among the different JS files (as long as they aren't shared externally). Obviously, there are some limits here and things don't get arbitrarily better if the whole world puts their JS into one file. Some things to think about when deciding what to package together into one file: You don't want a large group of your pages to be parsing and executing a large block of code that they will not use. This is obviously a tradeoff because if the code is being effectively cached, then it's not so much a download issue, but rather just a runtime efficiency issue. Each use will have to decide how to draw that tradeoff line. You may not want to package code that is revised fairly regularly with code that hardly ever changes because this degrades the efficiency of browser caching if the large combined JS is always changing. In a team environment with multiple projects sharing code, it is very important to think about packaging things into combined and minified chunks that work for the largest number of projects sharing the code. You generally want to optimize the packaging for the broader needs, not just for a single project. Mobile access often has smaller caches, slower CPUs and slower connections so its important to consider the needs of your most accessed mobile pages in how you package things too. And some downsides to combining and minimizing: Directly debugging the minimized site can be quite difficult as many symbols have lost their meaningful names. 
I've found it often required to have a way of serving an unminimized version of the site (or at least some files) for debugging/troubleshooting reasons. Error messages in browsers will refer to the combined/minimized file, not to the actual source files so it is can be more difficult to track down which code is causing a given browser error that has been reported. The combined and minimized site has to be tested to make sure no issues were caused by these extra steps.
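As a concrete illustration of the build step itself, a minimal gulp task that produces the single allTheJavascripts.js file might look like the sketch below; it assumes the gulp-concat and gulp-uglify plugins and hypothetical src/dist paths, so adjust it to your own layout:

var gulp   = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');

gulp.task('scripts', function () {
  return gulp.src([
      'src/js/someLibrary.js',
      'src/js/someLibrary2.js',
      'src/js/someLibrary3.js',
      'src/js/someLibrary4.js',
      'src/js/myApp.js'
    ])
    .pipe(concat('allTheJavascripts.js')) // combine in the listed order
    .pipe(uglify())                       // then minify the combined file
    .pipe(gulp.dest('dist/js'));
});

Listing the files explicitly (rather than using a wildcard) keeps the concatenation order deterministic, which matters when later scripts depend on earlier ones.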
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29657345', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2307240/']}
jdg_378146
stackexchange
llm_judgeable_groundtruth_similarity
186345
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Why does modal logic need modalities like provability or others like necessity and possibility? Could they be replaced with equivalent predicates? Now provide the response and nothing else.
Many (most?) modal logics can indeed be translated into non-modal predicate logics, such as by replacing the modalities by quantification over a "time" variable and giving every existing predicate an extra argument, meaning intuitively "at time $t$ it holds that such-and-such". One reason not to do this is that the quantification over "time" that modalities represent is a particular restricted use of quantification, and knowing that all quantification has that particularly restricted form can be very useful. In other words, the point of modal logics is not to be able to express brand new things that there wasn't any way to express before, but to be able to say about some process or algorithm that it only works for formulas of such-and-such particularly nice form. For example, many modal propositional logics are decidable -- but it is not at all obvious that the formulas you get by translating them into predicate calculus are decidable. Yet, introducing modalities here clearly adds in expressive power compared to non-modal propositional calculus. It is very useful for (possibly automatic) reasoning about such properties that we can get this increase in expressive power without paying the large price in complexity it would be to handle the full predicate calculus.
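For readers who want to see what such a translation looks like concretely, the textbook "standard translation" of basic modal logic into first-order logic makes the restricted quantification explicit (here $R$ is the accessibility/"time" relation and $x$ names the current world):

$$
\begin{aligned}
\mathrm{ST}_x(p) &= P(x) \\
\mathrm{ST}_x(\lnot\varphi) &= \lnot\,\mathrm{ST}_x(\varphi) \\
\mathrm{ST}_x(\varphi \land \psi) &= \mathrm{ST}_x(\varphi) \land \mathrm{ST}_x(\psi) \\
\mathrm{ST}_x(\Box\varphi) &= \forall y\,\bigl(R(x,y) \rightarrow \mathrm{ST}_y(\varphi)\bigr) \\
\mathrm{ST}_x(\Diamond\varphi) &= \exists y\,\bigl(R(x,y) \land \mathrm{ST}_y(\varphi)\bigr)
\end{aligned}
$$

Every quantifier this produces is guarded by $R$, which is exactly the restricted form of quantification referred to above, and it is what keeps the resulting fragment so well-behaved.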
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/186345', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/23730/']}
jdg_378147
stackexchange
llm_judgeable_groundtruth_similarity
18481600
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a way in elasticsearch to boost the importance of the exact phrase appearing in the the document? For example if I was searching for the phrase "web developer" and if the words "web developer" appeared together they would be boosted by 5 compared to "web" and "developer" appearing separately throughout the document. Thereby any document that contained "web developer" together would appear first in the results. Now provide the response and nothing else.
You can combine different queries together using a bool query, and you can assign a different boost to them as well. Let's say you have a regular match query for both the terms, regardless of their positions, and then a phrase query with a higher boost. Something like the following:

{
  "query": {
    "bool": {
      "should": [
        {
          "match": { "field": "web developer" }
        },
        {
          "match_phrase": { "field": "web developer", "boost": 5 }
        }
      ],
      "minimum_number_should_match": 1
    }
  }
}
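To try it out, you would send that body to the _search endpoint of your index; my_index is a placeholder here, and "field" should be replaced by the actual field name in your mapping:

curl -s -XPOST 'http://localhost:9200/my_index/_search' -d '{
  "query": {
    "bool": {
      "should": [
        { "match": { "field": "web developer" } },
        { "match_phrase": { "field": "web developer", "boost": 5 } }
      ],
      "minimum_number_should_match": 1
    }
  }
}'

Documents containing the exact phrase match both clauses and therefore score higher, so they sort to the top of the results.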
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18481600', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2724314/']}
jdg_378148
stackexchange
llm_judgeable_groundtruth_similarity
14875483
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was developing this application on VB.net 2010 and SQL 2008 . I wanted the clients to be notified for updates on the db, and the application used to check the db for changes in the specified minute using a timer, which really is not efficient. I read about query notification , sqldependency , service broker , but then I read something that said they might not be efficient if I have 100 clients and I'm using query notifications to push notifications to my application. Would someone help out on what I should do, and how I can do it (would be really helpful if examples are available). Thanks in advance! Now provide the response and nothing else.
Query Notification will push to a Service Broker service, not directly to your application. See The Mysterious Notification to understand how it works. Your application is waiting for notifications by posting a WAITFOR(RECEIVE) statement on the database . Which implies that each of the 100 clients is occupying one SQL Server worker thread (which are limited, see max worker threads option). I've seen this working in production with +1000 clients (after bumping up the max worker threads option) but I would advise against it. My recommendation would be to have one service monitoring for change, using SqlDependency/QueryNotifications. This service would then push notifications, using WCF for instance, to all your running apps. You would subscribe to generic changes ( the table Foo was changed ), not to specific ones ( the row x in table Foo was inserted ). As a general rule SqlDependency/Query Notifications can only inform you that data has changed, but it won't push the new data. The application must refresh its local datasets by running the queries again, once notified.
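Since the question asks how to actually do it, here is a rough C# sketch of the single monitoring service described above. The connection string, table and column names are placeholders; note that SqlDependency requires Service Broker to be enabled on the database, and notification queries must use two-part table names with an explicit column list (no SELECT *):

using System;
using System.Data.SqlClient;

class ChangeMonitor
{
    const string ConnString = "Data Source=.;Initial Catalog=MyDb;Integrated Security=true"; // placeholder
    const string Query = "SELECT Id, Name FROM dbo.Foo";                                     // placeholder

    static void Main()
    {
        SqlDependency.Start(ConnString);
        Subscribe();
        Console.ReadLine();                // keep the service alive
        SqlDependency.Stop(ConnString);
    }

    static void Subscribe()
    {
        using (var conn = new SqlConnection(ConnString))
        using (var cmd = new SqlCommand(Query, conn))
        {
            var dep = new SqlDependency(cmd);
            dep.OnChange += OnChange;
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                // executing the query is what registers the notification
                while (reader.Read()) { /* refresh the local cache here */ }
            }
        }
    }

    static void OnChange(object sender, SqlNotificationEventArgs e)
    {
        ((SqlDependency)sender).OnChange -= OnChange;
        Console.WriteLine("Change detected: {0} / {1}", e.Type, e.Info);
        // TODO: push a message to connected clients (e.g. over WCF), then...
        Subscribe();   // notifications are one-shot, so re-subscribe
    }
}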
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14875483', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2072023/']}
jdg_378149
stackexchange
llm_judgeable_groundtruth_similarity
5311465
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am going to summarize my problem into the following code snippet.

List<int> list = new List<int>() { 5, 56, 2, 4, 63, 2 };
Console.WriteLine(list.First());

The above code works fine. Now I tried the following:

dynamic dList = list;
Console.WriteLine(dList.First());

but I am getting a RuntimeBinderException. Why is that? Now provide the response and nothing else.
To expand on Stecya's answer... extension methods aren't supported by dynamic typing in the form of extension methods, i.e. called as if they were instance methods. However, this will work:

dynamic dList = list;
Console.WriteLine(Enumerable.First(dList));

Of course, that may or may not be useful. If you could give more information about why and how you're trying to use dynamic typing, we may be able to help more.
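Another option, shown here as a small sketch continuing the question's snippet, is to cast the dynamic value back to a statically known type at the call site, which restores normal extension-method resolution:

dynamic dList = list;

// Either call the extension method as an ordinary static method...
int first = Enumerable.First(dList);

// ...or cast back to a static type so the compiler can bind First() again.
int firstViaCast = ((IEnumerable<int>)dList).First();

Console.WriteLine(first);        // 5
Console.WriteLine(firstViaCast); // 5

Both lines rely on the underlying object really being a List<int>; the cast will throw at runtime if it isn't.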
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/5311465', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/449897/']}
jdg_378150
stackexchange
llm_judgeable_groundtruth_similarity
4685457
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The documentation for Netsuite is quite lacking; they cover the basics and then let you loose to explore. Anyone without a vast knowledge of PHP trying to use their php toolkit would be on their knees begging for mercy. At every point throughout this whole project it's been trial and error, trying to make sense out of everything until stuff started to work. I'm stumped on assigning custom fields to sales orders. I know it has to be an object of an object of an object in order for it to tier down the xml for the soap to take over, but what with what with what? I have some code I worked on that is getting somewhere, but it is complaining it's not the right RecordRef type. If anyone has worked with Netsuite and feels my pain, please lend me your knowledge before I pull out all my hair. Thanks in advance. Code:

$customFields = array(
    'internalId' => 'custbody_new_die_yn',
    'value' => array(
        'name' => 'custbody_new_die_yn',
        'internalId' => 'NO'
    )
);
$customObject = new nsComplexObject("SelectCustomFieldRef");
$customObject->setFields($customFields);

$salesOrderFields = array(
    'entity' => new nsRecordRef(array('internalId' => $userId)),
    'paymentMethod' => array('internalId' => 8),
    'ccNumber' => 4111111111111111,
    'ccExpireDate' => date("c", mktime(0,0,0,11,1,2011)),
    'ccName' => 'Test Testerson',
    'itemList' => array(
        'item' => array(
            'item' => array('internalId' => 5963),
            'quantity' => 5
        )
    ),
    'department' => new nsRecordRef(array('internalId' => 1)),
    'class' => new nsRecordRef(array('internalId' => 47)),
    'customFieldList' => $customObject
);

Now provide the response and nothing else.
I am not familiar using PHP with Netsuite but I have done a good amount of c#/.net Netsuite work. As Craig mentioned I find it much easier using a language such c#/.net with a Visual Studio generated interface to figure out what is available in the Netsuite SuiteTalk web service API. There is a fair amount of documentation around this stuff in the NetSuite Help Center - by no means everythign you will need but a good start. Netsuite Help Center Check out the SuiteFlex/SuiteTalk (Web Services) section specifically this page on Ids & References. Using Internal Ids, External Ids, and References With that said I will try to help with a .net example & explanation of adding a custom field to a Sales Order. Here are a few examples of adding different CustomFieldRefs: //A list object to store all the customFieldRefsList<CustomFieldRef> oCustomFieldRefList = new List<CustomFieldRef>();//List or Record Type referenceSelectCustomFieldRef custbody_XXX_freight_terms = new SelectCustomFieldRef();custbody_XXX_freight_terms.internalId = "custbody_XXX_freight_terms";ListOrRecordRef oFreightTermsRecordRef = new ListOrRecordRef();oFreightTermsRecordRef.internalId = <internalId of specific record in Netsuite>;//See the References link above for more info on this - trying to figure out typeId caused me a lot of pain.oFreightTermsRecordRef.typeId = <internalId of the List Record Type in Netsuite>; custbody_XXX_freight_terms.value = oFreightTermsRecordRef;oCustomFieldRefList.Add(custbody_XXX_freight_terms);//Freeform text sorta field StringCustomFieldRef objStringCustomFieldRef = new StringCustomFieldRef();objStringCustomFieldRef.internalId = "custbody_XXX_tracking_link";objStringCustomFieldRef.value = "StringValue";oCustomFieldRefList.Add(objStringCustomFieldRef);//Checkbox field typeBooleanCustomFieldRef custbody_XXX_if_fulfilled = new BooleanCustomFieldRef();custbody_XXX_if_fulfilled.internalId = "custbody_XXX_if_fulfilled";custbody_XXX_if_fulfilled.value = true;oCustomFieldRefList.Add(custbody_XXX_if_fulfilled);//By far the most complicated example a multi-select list referencing other records in NetsuiteMultiSelectCustomFieldRef custrecord_XXX_transaction_link = new MultiSelectCustomFieldRef();//internal id of field you are updatingcustrecord_XXX_transaction_link.internalId = "custrecord_XXX_transaction_link";List<ListOrRecordRef> oListOrRecordRefList = new List<ListOrRecordRef>();ListOrRecordRef oListOrRecordRefItemFulfillment = new ListOrRecordRef();oListOrRecordRefItemFulfillment.name = "Item Fulfillment";oListOrRecordRefItemFulfillment.internalId = <ItemFulfillmentInternalId>;//Item Fulfillment is record type (Transaction -30) - this is from the above Reference linksoListOrRecordRefItemFulfillment.typeId = "-30";oListOrRecordRefList.Add(oListOrRecordRefItemFulfillment);ListOrRecordRef oListOrRecordRefSalesOrder = new ListOrRecordRef();oListOrRecordRefSalesOrder.name = "Sales Order";oListOrRecordRefSalesOrder.internalId = <SalesOrderInternalId>;//Sales Order is record type (Transaction -30) - this is from the above Reference linksoListOrRecordRefSalesOrder.typeId = "-30";oListOrRecordRefList.Add(oListOrRecordRefSalesOrder);//Add array of all the ListOrRecordRefs to the MultiSelectCustomFieldRef custrecord_XXX_transaction_link.value = oListOrRecordRefList.ToArray();oCustomFieldRefList.Add(custrecord_XXX_transaction_link);//And then add all these to the Custom Record List (Array) on the Sales Order RecordobjSalesOrder.customFieldList = oCustomFieldRefList.ToArray(); From what I can tell in your above 
example, I think your issue is with the ListOrRecordRef typeId. It's hard to tell from your example what typeId you are referencing, but if you can figure that out and set the typeId on the ListOrRecordRef inside your SelectCustomFieldRef, I think that should fix your issue.
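Translating that advice back into the PHP toolkit style used in the question, a rough sketch (not tested against the PHP toolkit) might look like the following; the nsComplexObject pattern is taken from the question's own code, and both numeric internal ids are placeholders you would need to look up in your NetSuite account (the typeId being the internal id of the custom list itself, per the References documentation linked above):

// value of the custom field: a specific list entry, plus its list's typeId
$fieldValue = new nsComplexObject('ListOrRecordRef');
$fieldValue->setFields(array(
    'internalId' => 123,   // placeholder: internal id of the selected list entry
    'typeId'     => 456    // placeholder: internal id of the custom list/record type
));

// the custom field reference that goes on the sales order
$customField = new nsComplexObject('SelectCustomFieldRef');
$customField->setFields(array(
    'internalId' => 'custbody_new_die_yn',
    'value'      => $fieldValue
));

// customFieldList expects a CustomFieldList wrapping an array of field refs
$customFieldList = new nsComplexObject('CustomFieldList');
$customFieldList->setFields(array('customField' => array($customField)));

// ...then use $customFieldList as the 'customFieldList' entry in $salesOrderFields.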
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4685457', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/190295/']}
jdg_378151
stackexchange
llm_judgeable_groundtruth_similarity
8277915
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have some jQuery code that I would like reviews of, and pointers on how to bring its line count down and shorten.

$('#p1').click(function() { $('#list').fadeOut(450); $('#q1').delay(600).fadeIn(450)});
$('#p2').click(function() { $('#list').fadeOut(450); $('#q2').delay(600).fadeIn(450)});
$('#p3').click(function() { $('#list').fadeOut(450); $('#q3').delay(600).fadeIn(450)});
$('#p4').click(function() { $('#list').fadeOut(450); $('#q4').delay(600).fadeIn(450)});
...
$('#p12').click(function() { $('#list').fadeOut(450); $('#q12').delay(600).fadeIn(450)});
$('#p13').click(function() { $('#list').fadeOut(450); $('#q13').delay(600).fadeIn(450)});

Can this code be better optimised? Or at least made less verbose? Now provide the response and nothing else.
You can use a for loop, but you should make sure the loop counter's value gets into a correct scope for click event handler: var clickHandler = function(k) { return function() { $('#list').fadeOut(450); $('#q' + k).delay(600).fadeIn(450); };};for (var i = 1; i < 14; ++i) { $('#p' + i).click(clickHandler(i));} Otherwise the delay and fadeIn would get applied to #q13 element exclusively, since actual counter (with its final value of 13) would get into closure. EDIT : Since quite a lot of answers got it wrong here, I'll attempt to explain more precisely what's going on in this code, as it seems to be pretty confusing. The "natural" solution with injecting the click handler directly into loop would be the following: for(var i = 1; i < 14; i++) { $('#p'+i).click(function() { $('#list').fadeOut(450); $('#q'+i).delay(600).fadeIn(450) });} But this is not at all equivalent to the extended form, which lists all the 13 variants one after another. The problem is that while there are indeed 13 functions created here, they are all closed over the same variable i , whose value changes. It finally arrives at the value of 13 and the loop ends. Some time later the functions attached to #p1 ... #p13 elements are called (when one of those elements are clicked) and they use that final value of i . This results in only #q13 being animated. What needs to be done here is to do something called lambda lifting and eliminate the free variable i , whose value gets inadvertly changed. A common technique for that is to provide a "factory function" which accepts value for our variable and outputs an actual function which we'll use as event handler: var clickHandler = function(k) { return function() { $('#list').fadeOut(450); $('#q' + k).delay(600).fadeIn(450); };}; Since the scope of k parameter is local to clickHandler , every call to clickHandler gets different k variable. The function returned from clickHandler is therefore closed over different variables, which can in turn have different values. This is exactly what we need. We can then call clickHandler from our loop, passing it the counter's value: for (var i = 1; i < 14; ++i) { $('#p' + i).click(clickHandler(i));} I hope this makes the difference somewhat clearer. EDIT : As Esailija pointed out in the comments, it is also possible to use jQuery.each to achieve similar effect: $.each(new Array(13), function(idx) { $('#p' + (idx + 1)).click(function() { $('#list').fadeOut(450); $('#q' + idx).delay(600).fadeIn(450); });}); This is probably the solution of choice if you're already aware of the closure/scoping issue I've tried to outline above.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/8277915', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1066791/']}
jdg_378152
stackexchange
llm_judgeable_groundtruth_similarity
2046098
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Call a function $f : \mathbb Z \to \mathbb Z$ consistent if for every prime $p$ and integer $a, b$, when $a \equiv b \pmod p$ then $f(a) \equiv f(b) \pmod p$. The set $C$ of consistent functions is closed under addition, subtraction, composition, translation, and finite difference, and contains all univariate polynomials. Does $C$ contain only univariate polynomials, i.e. $C = \mathbb Z[x]$? My intuition is that this must be the case. Since $f$ is well-defined $\mod p$ for every prime $p$, then I feel that $f$ must be defined based only on ring operations generically, so that the same definition of $f$ (with ring operations) works for any ring $\mathbb Z / p\mathbb Z$. Since the ring operations include only using 0, 1, and the variable $x$, addition, multiplication, that would mean that $f$ must be a polynomial in $x$ with integer coefficients. Is this indeed the case? Now provide the response and nothing else.
Consider the function $$ f(z) = z \sum_{m=1}^\infty \prod_{n=1}^m (z^2 - n^2) $$ This is well-defined on the integers, since all but finitely many terms are $0$ at any integer $z$. Moreover, for any positive integer $p$ (prime or not), $x \equiv y \mod p$ implies $f(x) \equiv f(y) \mod p$, because that is true for each of the summands $z \prod_{n=1}^m (z^2 - n^2)$. But $f(z) \ge z!$ for $z\ge 2$, so this is not a polynomial.
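To see the counterexample in action, here is a small Python sketch (an illustrative check added here, not part of the original argument) that evaluates $f$ on integers and spot-checks both the congruence property and the super-polynomial growth; the sum is truncated at $m = |z|-1$ because every later product contains the factor $z^2 - z^2 = 0$.

```python
import math

def f(z):
    """f(z) = z * sum_{m>=1} prod_{n=1..m} (z^2 - n^2), finite on integers."""
    total, prod = 0, 1
    for n in range(1, abs(z)):        # terms with m >= |z| vanish
        prod *= z * z - n * n
        total += prod
    return z * total

# Consistency: x ≡ y (mod p) should imply f(x) ≡ f(y) (mod p), prime or not.
for p in (2, 3, 4, 5, 7, 11):
    for x in range(-20, 21):
        for y in range(-20, 21):
            if (x - y) % p == 0:
                assert (f(x) - f(y)) % p == 0, (p, x, y)

# Growth: f(z) >= z! for z >= 2, so f cannot be a polynomial.
print([f(z) for z in range(2, 7)])
print(all(f(z) >= math.factorial(z) for z in range(2, 10)))
```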
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/2046098', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/167108/']}
jdg_378153
stackexchange
llm_judgeable_groundtruth_similarity
308225
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: So there's Standard Deviation, Variance, and Covariance, but is there a co standard deviation? If not why not? Is there a fundamental mathematical reason or is it just convention? If so why is it not used more, or at least really hard to find using Google searches? I don't mean this to be a flippant question, I'm trying to really question statistics rather than just memorize a bunch of formulas. Now provide the response and nothing else.
One useful property of the standard deviation is that it has the same units as the mean, so the magnitudes of $\sigma_X$ and $\bar X$ are directly comparable. I've never seen anyone compute the co-standard deviation (by which I assume you mean the square root of the covariance); if the units of $X$ and $Y$ are denoted as $[X]$ and $[Y]$ , then the units of the covariance are $[X][Y]$ and the units of the co-standard deviation would be $\sqrt{[X][Y]}$ , which isn't particularly useful (unless $X$ and $Y$ have the same units). On the other hand, the correlation $\sigma_{XY}/(\sigma_X \sigma_Y)$ is unitless, and is a very common scale for reporting associations. The variance (in contrast to the standard deviation) is useful because it generally has nicer mathematical properties; in particular $$\sigma^2_{X+Y} = \sigma^2_X + \sigma^2_Y + 2 \sigma_{XY},$$ which simplifies nicely when $X$ and $Y$ are independent (hence $\sigma_{XY}=0$ ). While you're thinking about ways of scaling variances you could also consider the coefficient of variation $\sigma_X/\bar X$ (which is unitless), or the variance-to-mean ratio $\sigma^2_X/\bar X$ (which has weird units but is meaningful in the context of a count distribution such as the Poisson, which is also unitless).
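To make the units argument concrete, here is a short numerical sketch (illustrative only; the data and the height/noise numbers are made up) showing that a "co-standard deviation" $\sqrt{\sigma_{XY}}$ changes awkwardly under a change of units, while the correlation does not:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(170, 10, size=1000)          # say, heights in centimetres
y = 0.5 * x + rng.normal(0, 5, size=1000)

cov = np.cov(x, y)[0, 1]
corr = np.corrcoef(x, y)[0, 1]
print(np.sqrt(cov), corr)                    # sqrt(cov) carries units sqrt([X][Y])

# Re-express x in millimetres (multiply by 10): the covariance scales by 10,
# so sqrt(cov) scales by sqrt(10), which is hard to interpret,
# while the correlation is unchanged.
cov_mm = np.cov(10 * x, y)[0, 1]
corr_mm = np.corrcoef(10 * x, y)[0, 1]
print(np.sqrt(cov_mm) / np.sqrt(cov))        # ~ sqrt(10)
print(corr_mm - corr)                        # ~ 0
```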
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/308225', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/56294/']}
jdg_378154
stackexchange
llm_judgeable_groundtruth_similarity
10473930
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am writing a game in Python with Pygame. The co-ords (of my display window) are ( 0 , 0 ) at the top left and (640,480) at the bottom right. The angle is 0° when pointing up, 90° when pointing to the right. I have a player sprite with a centre position and I want the turret on a gun to point towards the player. How do I do it? Say, x1 , y1 are the turret co-ords x2 , y2 are the player co-ords a is the angle's measure Now provide the response and nothing else.
First, math has a handy atan2(y, x) function; note the argument order: the y difference (the numerator of the ratio) comes first, then the x difference. Normally, you'd use atan2(dy,dx) but because Pygame flips the y-axis relative to Cartesian coordinates (as you know), you'll need to make dy negative and then avoid negative angles. ("dy" just means "the change in y".)
from math import atan2, degrees, pi
dx = x2 - x1
dy = y2 - y1
rads = atan2(-dy, dx)
rads %= 2*pi
degs = degrees(rads)
degs ought to be what you're looking for.
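As a usage illustration (an addition with made-up coordinates), you can wrap the snippet in a function. Note it returns the standard mathematical angle (0° pointing right, increasing counter-clockwise); if you want the question's convention of 0° = up and 90° = right, converting with (90 - degs) % 360 should do it, though that last adjustment is an assumption on my part about the convention you need.

```python
from math import atan2, degrees, pi

def angle_to_target(x1, y1, x2, y2):
    """Angle from (x1, y1) to (x2, y2) in Pygame screen coordinates."""
    dx = x2 - x1
    dy = y2 - y1
    rads = atan2(-dy, dx)      # negate dy because the screen y-axis points down
    rads %= 2 * pi
    return degrees(rads)

# Turret at (320, 240):
print(angle_to_target(320, 240, 400, 240))   # player to the right -> 0.0
print(angle_to_target(320, 240, 320, 160))   # player straight up  -> 90.0

# Converting to the question's convention (0 = up, 90 = right):
print((90 - angle_to_target(320, 240, 400, 240)) % 360)   # 90.0
print((90 - angle_to_target(320, 240, 320, 160)) % 360)   # 0.0
```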
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10473930', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1378546/']}
jdg_378155
stackexchange
llm_judgeable_groundtruth_similarity
99850
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Can anything be said in general about the rank etc over $\mathbb{Q}$ of the family of Weierstrass equations (in slightly non-standard form) $x (x^2 - 1) = c (c^2 - 1) y^2$ for various given rational values of $c$ ? I have a good reason for asking, so this isn't idle curiosity. Naturally, it would be simplest if the Weierstrass equation has the same behaviour for each of these values of $c$, or least with a manageable (finite) amount of variation. Obviously there is always a solution $x, |y| = c, 1$; but that might be a trivial solution of a rank 0 case. Now provide the response and nothing else.
First you use $c$ as a parameter, i.e., consider your equation as an elliptic curve over $\mathbb{Q}(c)$. You can also consider this equation as an equation of an elliptic surface $S$. Now one easily proves that $S$ is a K3 surface, and that it is the quotient of $E\times E$ by a group of order 2, with $E$ the elliptic curve $y^2=x^3-x$. From this it follows that the Picard number of $S$ is 20. Using the Shioda-Tate formula it follows that the $\overline{\mathbb{Q}}(c)$ rank of $y^2c(c^2-1)=x(x^2-1)$ is two. Since this elliptic surface has 4 $I_0^*$ fibers, the torsion group has to be a subgroup of $(\mathbb{Z}/2\mathbb{Z})^2$, therefore $(x,y)=(c,1)$ is a point of infinite order. (Of course you could prove this by showing that $2(c,1)$, $4(c,1)$, $6(c,1)$ and $8(c,1)$ are nonzero.) Note that $(x,y)=(-c,\sqrt{-1})$ is also a point of infinite order, and that this point is not in the subgroup generated by $(c,1)$. This implies $E(\mathbb{Q}(c))$ is generated by the four two-torsion points and $(c,1)$. So the point $(c,1)$ is of infinite order and this is the only point you get for free. For special values of $c$ the rank can be higher. If you were able to control the rank in the way you ask in your question, that would be a major result, because it would mean, more or less, that you are able to control the rank of elliptic curves under quadratic twisting. However, there is a weaker result: using 2-descent you can get a bound on the rank depending on the number of primes dividing $2c(c^2-1)$; see e.g. Section X.5 of Silverman's book on the Arithmetic of Elliptic Curves.
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/99850', 'https://mathoverflow.net', 'https://mathoverflow.net/users/10454/']}
jdg_378156
stackexchange
llm_judgeable_groundtruth_similarity
518423
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a bit of misconception about weight which I want to clarify. The air pressure is explained as the weight of the air column above our head acting per unit area. But since air is not continuous how can the weight of all the air molecules (above our head) be acting on our head? I mean we would only feel the weight of the molecules near the surface of our head (if not then why not?) but how do we feel the weight of molecules so far away? I have edited this question and asked this follow up question (so that it remains specific). Now provide the response and nothing else.
Imagine that the air in the atmosphere was just somehow sitting there unpressurised. What would happen? Well, Earth's gravity would be attracting all that air towards the centre. So the air would start to fall downwards. The very bottom layer of air would be prevented from falling through the solid surface, as the air molecules rebound off the molecules of the surface. But the layer above that doesn't stop. So Earth's gravity forces the air in the lower part of the atmosphere to accumulate against the surface of the planet, becoming more and more dense. As the air gets denser near the surface, it becomes more and more likely that air molecules collide. That's what air pressure is: the average force of all those air molecules hitting a surface you placed in the air. But the air pressure also acts on the air itself. So eventually the force of the air pressure at the bottom layer of air pushes up on the layer of air just above it enough to counteract the pull of Earth's gravity on that layer of air. And so you get another layer that is prevented from falling. But the air above that is still being pulled down, and so more air is being squashed down into this second layer above the surface. This increases the force that the bottom layer needs to provide to the next layer; the air molecule collisions not only need to provide enough force to counteract the weight of the air immediately above it, but also to provide those molecules with enough momentum that when they in turn collide with the air in the third-bottom layer it can support the weight of that layer as well. So more air squeezes down to the surface until the pressure at the bottom layer is sufficient to support the weight of the two layers above that. Obviously the atmosphere isn't actually split into discrete layers like this [1], but hopefully it's a helpful way to think about it. You should be able to see how gravity squeezes the air down against the solid surface, until the pressure at the bottom is just enough to support the weight of all the air above it. This is why air pressure drops off at higher altitude. As you go up, there is less air above squeezing down, so equilibrium with gravity is reached at a lower pressure. So it's not literally that the air pressure you feel is the weight of the column of air above you. It's not that your head is somehow "holding up" a 100km column of air above it. But the air pressure of the air surrounding you must provide an equivalent force to the weight of all the air above it. If it did not then the weight of the air above would be partially unsupported, so gravity would squeeze it down further, increasing the pressure until it was equal to the weight of all of the air above. This is also why the top of your head doesn't feel any difference in air pressure compared to the side of your body. Air pressure is the same in all directions, because the air molecules are really just zipping around in countless different directions, uncoordinated with each other. Those molecules colliding with things must supply enough average force in the upward direction to support the weight of the atmosphere, but when the pressure increases due to gravity it can't cause a coordinated force that is only upwards, so there is just as much force from air pressure on the side of your body as there is on your head. [1] And if you actually had the atmosphere of Earth spread out in a diffuse low pressure cloud and let it all fall under gravity the results would be much more exciting than I have described.
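If you want to put rough numbers on "the pressure at the bottom supports the weight of all the air above", here is a small back-of-the-envelope sketch (an illustration added here, using a simple isothermal-atmosphere approximation; the 288 K temperature is an assumption):

```python
import math

g = 9.81           # m/s^2
p0 = 101_325       # Pa, typical sea-level pressure
M = 0.029          # kg/mol, mean molar mass of air
R = 8.314          # J/(mol K)
T = 288.0          # K, assumed constant with height (isothermal approximation)

# Pressure = weight of the column above per unit area, so the column mass is:
column_mass = p0 / g
print(f"air column above 1 m^2 of ground: ~{column_mass:.0f} kg")   # ~10,000 kg

# Isothermal barometric formula: pressure falls off exponentially with height.
H = R * T / (M * g)                        # scale height, roughly 8.4 km
for h in (0, 1_000, 5_000, 10_000):        # altitude in metres
    print(h, round(p0 * math.exp(-h / H)), "Pa")
```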
{}
{'log_upvote_score': 6, 'links': ['https://physics.stackexchange.com/questions/518423', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/-1/']}
jdg_378157
stackexchange
llm_judgeable_groundtruth_similarity
56676259
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am looking to assign a static IP to my Lambda which is being invoked via the API gateway. This is required because, the downstream system that i invoke from this lambda accepts web requests only from a Whitelisted IP. I am successful in achieving this via the VPC that i associate with my lambda. But VPC introduces a bad cold-start time which sometime ranges 12-16seconds. So i am looking for a way to prevent this cold start from the VPC, but at the same time assign a static IP to the lambda. Now provide the response and nothing else.
You will need to:

- Create a VPC with an Internet Gateway, a public subnet and a private subnet
- Attach the AWS Lambda function to the private subnet
- Launch a NAT Gateway in the public subnet and update the Route Table of the private subnet to use the NAT Gateway

The NAT Gateway will use an Elastic IP address (which is a static IP address). All traffic from the Lambda function to the Internet will come from this IP address, which can be used in the whitelist. You might think that this is a bit of overkill for simply attaching a static IP address, but multiple Lambda functions can run in parallel and they could run in multiple Availability Zones. Sending all traffic through the NAT Gateway is the only way to ensure they all have the same IP address. (Or, to be more specific, one IP address per AZ in which the NAT Gateway is launched.)
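If you script this, the calls look roughly like the following boto3 sketch (an illustration only: the subnet and route-table IDs are placeholders you would substitute, and error handling is omitted):

```python
import boto3

ec2 = boto3.client("ec2")

public_subnet_id = "subnet-aaaa1111"        # placeholder: your public subnet
private_route_table_id = "rtb-bbbb2222"     # placeholder: route table of the private subnet

# Allocate an Elastic IP (this is the static address you whitelist downstream).
eip = ec2.allocate_address(Domain="vpc")

# Create the NAT Gateway in the public subnet using that Elastic IP.
natgw = ec2.create_nat_gateway(
    SubnetId=public_subnet_id,
    AllocationId=eip["AllocationId"],
)
natgw_id = natgw["NatGateway"]["NatGatewayId"]

# Wait until it is available before adding the route.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[natgw_id])

# Send all Internet-bound traffic from the private subnet through the NAT Gateway.
ec2.create_route(
    RouteTableId=private_route_table_id,
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=natgw_id,
)

print("Whitelist this IP:", eip["PublicIp"])
```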
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56676259', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2218245/']}
jdg_378158
stackexchange
llm_judgeable_groundtruth_similarity
17645483
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm confused by the calculation of elements. I know how the browser calculates the size of the element but I don't know how the browser treat two elements or more. Say I have two block elements, which have paddings, margins, and borders. How does browser treat them when: one is on the left and another one is on the right? one is on the top and another one is on the bottom? one is inside another one? Now provide the response and nothing else.
As suggested by dandavis in his comment, readFile does nothing because it is an asynchronous call. Check out this answer for additional information on what that means. In short, an async call will never wait for the result to return. In your example, getData does not wait for readFile() to return the result you want, but will finish right away. Async calls are usually handled by passing callbacks , which is the last parameter to readFile and writeFile . In any case, there are two ways to do this: 1.Do it asynchronously (which is the proper way): function copyData(savPath, srcPath) { fs.readFile(srcPath, 'utf8', function (err, data) { if (err) throw err; //Do your processing, MD5, send a satellite to the moon, etc. fs.writeFile (savPath, data, function(err) { if (err) throw err; console.log('complete'); }); });} 2.Do it synchronously. Your code won't have to change much, you will just need to replace readFile and writeFile by readFileSync and writeFileSync respectively. Warning : using this method is not only against best practises, but defies the very purpose of using nodejs (unless of course you have a very legitimate reason). Edit : As per OP's request, here is one possible way to separate the two methods, e.g., using callbacks: function getFileContent(srcPath, callback) { fs.readFile(srcPath, 'utf8', function (err, data) { if (err) throw err; callback(data); } );}function copyFileContent(savPath, srcPath) { getFileContent(srcPath, function(data) { fs.writeFile (savPath, data, function(err) { if (err) throw err; console.log('complete'); }); });} This way, you are separating the read part (in getFileContent ) from the copy part.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17645483', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2398070/']}
jdg_378159
stackexchange
llm_judgeable_groundtruth_similarity
528554
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I just finished an online lecture about analog IC design. I want to put into practice what I have learned and searched for projects that will help me achieve my goal. I stumbled upon this project online In the lecture that I watched online the parameter KP = Un*Cox is always given. In the project shown above, there is a model of the device that be can used for the design using LTspice, my question is how do I determine the value of KP from the model provided? Thank you! The MODEL: * BSIM3 models for AMI Semiconductor's C5 process** Don't forget the .options scale=300nm if using drawn lengths* and the MOSIS SUBM design rules** 2<Ldrawn<500 10<Wdrawn<10000 Vdd=5V* Note minimum L is 0.6 um while minimum W is 3 um* Change to level=49 when using HSPICE or SmartSpice.MODEL NMOS NMOS ( LEVEL = 8+VERSION = 3.1 TNOM = 27 TOX = 1.39E-8+XJ = 1.5E-7 NCH = 1.7E17 VTH0 = 0.6696061+K1 = 0.8351612 K2 = -0.0839158 K3 = 23.1023856+K3B = -7.6841108 W0 = 1E-8 NLX = 1E-9+DVT0W = 0 DVT1W = 0 DVT2W = 0+DVT0 = 2.9047241 DVT1 = 0.4302695 DVT2 = -0.134857+U0 = 458.439679 UA = 1E-13 UB = 1.485499E-18+UC = 1.629939E-11 VSAT = 1.643993E5 A0 = 0.6103537+AGS = 0.1194608 B0 = 2.674756E-6 B1 = 5E-6+KETA = -2.640681E-3 A1 = 8.219585E-5 A2 = 0.3564792+RDSW = 1.387108E3 PRWG = 0.0299916 PRWB = 0.0363981+WR = 1 WINT = 2.472348E-7 LINT = 3.597605E-8+XL = 0 XW = 0 DWG = -1.287163E-8+DWB = 5.306586E-8 VOFF = 0 NFACTOR = 0.8365585+CIT = 0 CDSC = 2.4E-4 CDSCD = 0+CDSCB = 0 ETA0 = 0.0246738 ETAB = -1.406123E-3+DSUB = 0.2543458 PCLM = 2.5945188 PDIBLC1 = -0.4282336+PDIBLC2 = 2.311743E-3 PDIBLCB = -0.0272914 DROUT = 0.7283566+PSCBE1 = 5.598623E8 PSCBE2 = 5.461645E-5 PVAG = 0+DELTA = 0.01 RSH = 81.8 MOBMOD = 1+PRT = 8.621 UTE = -1 KT1 = -0.2501+KT1L = -2.58E-9 KT2 = 0 UA1 = 5.4E-10+UB1 = -4.8E-19 UC1 = -7.5E-11 AT = 1E5+WL = 0 WLN = 1 WW = 0+WWN = 1 WWL = 0 LL = 0+LLN = 1 LW = 0 LWN = 1+LWL = 0 CAPMOD = 2 XPART = 0.5+CGDO = 2E-10 CGSO = 2E-10 CGBO = 1E-9+CJ = 4.197772E-4 PB = 0.99 MJ = 0.4515044+CJSW = 3.242724E-10 PBSW = 0.1 MJSW = 0.1153991+CJSWG = 1.64E-10 PBSWG = 0.1 MJSWG = 0.1153991+CF = 0 PVTH0 = 0.0585501 PRDSW = 133.285505+PK2 = -0.0299638 WKETA = -0.0248758 LKETA = 1.173187E-3+AF = 1 KF = 0)*.MODEL PMOS PMOS ( LEVEL = 8+VERSION = 3.1 TNOM = 27 TOX = 1.39E-8+XJ = 1.5E-7 NCH = 1.7E17 VTH0 = -0.9214347+K1 = 0.5553722 K2 = 8.763328E-3 K3 = 6.3063558+K3B = -0.6487362 W0 = 1.280703E-8 NLX = 2.593997E-8+DVT0W = 0 DVT1W = 0 DVT2W = 0+DVT0 = 2.5131165 DVT1 = 0.5480536 DVT2 = -0.1186489+U0 = 212.0166131 UA = 2.807115E-9 UB = 1E-21+UC = -5.82128E-11 VSAT = 1.713601E5 A0 = 0.8430019+AGS = 0.1328608 B0 = 7.117912E-7 B1 = 5E-6+KETA = -3.674859E-3 A1 = 4.77502E-5 A2 = 0.3+RDSW = 2.837206E3 PRWG = -0.0363908 PRWB = -1.016722E-5+WR = 1 WINT = 2.838038E-7 LINT = 5.528807E-8+XL = 0 XW = 0 DWG = -1.606385E-8+DWB = 2.266386E-8 VOFF = -0.0558512 NFACTOR = 0.9342488+CIT = 0 CDSC = 2.4E-4 CDSCD = 0+CDSCB = 0 ETA0 = 0.3251882 ETAB = -0.0580325+DSUB = 1 PCLM = 2.2409567 PDIBLC1 = 0.0411445+PDIBLC2 = 3.355575E-3 PDIBLCB = -0.0551797 DROUT = 0.2036901+PSCBE1 = 6.44809E9 PSCBE2 = 6.300848E-10 PVAG = 0+DELTA = 0.01 RSH = 101.6 MOBMOD = 1+PRT = 59.494 UTE = -1 KT1 = -0.2942+KT1L = 1.68E-9 KT2 = 0 UA1 = 4.5E-9+UB1 = -6.3E-18 UC1 = -1E-10 AT = 1E3+WL = 0 WLN = 1 WW = 0+WWN = 1 WWL = 0 LL = 0+LLN = 1 LW = 0 LWN = 1+LWL = 0 CAPMOD = 2 XPART = 0.5+CGDO = 2.9E-10 CGSO = 2.9E-10 CGBO = 1E-9+CJ = 7.235528E-4 
PB = 0.9527355 MJ = 0.4955293+CJSW = 2.692786E-10 PBSW = 0.99 MJSW = 0.2958392+CJSWG = 6.4E-11 PBSWG = 0.99 MJSWG = 0.2958392+CF = 0 PVTH0 = 5.98016E-3 PRDSW = 14.8598424+PK2 = 3.73981E-3 WKETA = 5.292165E-3 LKETA = -4.205905E-3 +AF = 1 KF = 0) ``` Now provide the response and nothing else.
Usually, KP is explicitly used only with the simpler MOSFET models, such as the Shichman-Hodges model (i.e. LEVEL 1). The BSIM3 model (LEVEL 8) is much more sophisticated and typically defined using geometrical/physical parameters instead of electrical parameters. Although the idea of KP still exists in BSIM3, it's heavily modified by many more parameters so it's rarely used on its own. Anyway, if you start with the basic equation you referenced, you can expand it further as such: $$\text{KP} = \mu_0 \cdot C_{ox}' = \mu_0 \cdot \frac{\varepsilon_{ox}}{t_{ox}} = \mu_0 \cdot \frac{\varepsilon_r \varepsilon_0}{t_{ox}}$$ where \$\text{KP}\$ is the transconductance parameter, \$\mu_0\$ is the surface mobility, \$C_{ox}'\$ is the capacitance per unit area of the oxide layer, \$\varepsilon_{ox}\$ is the permittivity of the oxide, \$t_{ox}\$ is the oxide thickness, \$\varepsilon_r\$ is the dielectric constant of the oxide material, and \$\varepsilon_0\$ is the permittivity of free space ( \$8.85 \times 10^{-14} \text{F/cm}\$ ). I will only focus on the NMOS here, and you can do the PMOS on your own using the same math. \$\mu_0\$ is already defined in the model as U0 = 458.439679 . The same goes for \$t_{ox}\$ where TOX = 1.39E-8 , except it's specified in meters and needs to be converted to centimeters since \$\mu_0\$ and \$\varepsilon_0\$ are specified in centimeters. I don't know enough about IC design, so I'm going to assume SiO 2 (silicon dioxide) as the oxide material for this example which has a dielectric constant of 3.9 . Using these numbers along with the constant \$\varepsilon_0\$ defined above, we get: $$\text{KP} = \mu_0 \cdot \frac{\varepsilon_r \varepsilon_0}{t_{ox}} = 458.439679 \frac{\text{cm}^2}{\text{V} \cdot \text{s}} \cdot \frac{3.9 \cdot (8.85 \times 10^{-14} \text{F/cm})}{(1.39 \times 10^{-6} \text{cm})} \approx 113.835 \times 10^{-6} \frac{\text{A}}{\text{V}^2}$$ The following are the two references I used to create this answer. I suggest at least skimming through the BSIM3 reference since that is what is being used as the basis for validating this project via SPICE. http://ngspice.sourceforge.net/external-documents/models/bsim330_manual.pdf https://books.google.com/books/about/Semiconductor_Device_Modeling_with_SPICE.html?id=_QZTAAAAMAAJ EDIT1: I forgot to include the units, and hence made an error in the calculation from lack of converting the units. I added the units and fixed the calculation. EDIT2: I was asked to go into details on how to extract KP from measurement of simulation results using SPICE, so I added these extra sections below. Keep in mind that the premise here is somewhat strange, because we're using results generated from a LEVEL 8 model (BSIM3) to map back onto LEVEL 1 model parameters. The first thing to know is that there are two separate test setups since MOSFETs can be operated in the linear and saturation regions. Each setup corresponds to each region and the resultant KP between the two will be slightly different. I will present both methods, but it's recommended that only one be used based on the desired application and operating mode for the MOSFET. To begin, the basic LEVEL 1 model equations are shown below. $$\begin{align*} &\text{Linear: } &&I_{DS} = \text{KP} \frac{W}{L-2 \text{LD}} \left( V_{GS}-V_{TH}-\frac{V_{DS}}{2} \right) V_{DS}(1 + \lambda V_{DS}) \\ \\&\text{Saturation: } &&I_{DS} = \frac{\text{KP}}{2} \frac{W}{L-2 \text{LD}} (V_{GS}-V_{TH})^2 (1 + \lambda V_{DS})\end{align*}$$ Now, a few simplifications can be made. 
Based on how we set up the linear region test, \$\frac{V_{DS}}{2}\$ will be negligible so that can be set to zero. \$\lambda\$ is the channel-length modulation which we can assume to be zero as well. To further simplify, we need to set \$\frac{W}{L-2 \text{LD}}\$ to 1. LD is the "lateral diffusion" parameter and is basically a fixed correction offset applied to \$L\$ . If we make the MOSFET long enough then LD is negligible. Assuming LD is usually 1µm or less, we set L=100u and W=100u (to match) and are left with: $$\begin{align*} &\text{Linear: } &&I_{DS} = \text{KP} \cdot V_{DS} (V_{GS}-V_{TH}) \\ \\&\text{Saturation: } &&I_{DS} = \frac{\text{KP}}{2} (V_{GS}-V_{TH})^2\end{align*}$$ What this says for the linear region is that if we sweep \$V_{GS}\$ for a fixed \$V_{DS}\$ and plot \$I_{DS}\$ , then its slope divided by \$V_{DS}\$ equals KP. For the saturation region we can do something similar, but need to first take the square-root of \$I_{DS}\$ before finding the slope and solving for KP. Mathematically, in each scenario: $$\begin{align*} &\text{Linear: } &&\frac{dI_{DS}}{dV_{GS}} = \text{KP} \cdot V_{DS} \implies \text{KP} = \frac{dI_{DS}}{dV_{GS}} \cdot \frac{1}{V_{DS}} \\ \\&\text{Saturation: } &&\frac{d\sqrt{I_{DS}}}{dV_{GS}} = \sqrt{\frac{\text{KP}}{2}} \implies \text{KP} = 2 \left( \frac{d\sqrt{I_{DS}}}{dV_{GS}} \right)^2\end{align*}$$ The test setups are shown below. M1 is the MOSFET for the linear region test and M2 is for the saturation region test. The only difference between the two is that \$V_{DS}\$ is set to a small fixed value of 50mV in the linear test, while \$V_{DS}\$ is shorted to \$V_{GS}\$ for the saturation test. We will sweep \$V_{GS}\$ from 1mV to 5V in 1mV steps. [NOTE: Starting at 1mV instead of zero avoids a divide-by-zero artifact when using LTspice to do the KP calculations.] The first plot is the \$I_{DS}\$ of M1. The idea is to use the slope value right after the threshold voltage where the plot first appears linear. This avoids the weak inversion region while also avoiding errors in the later regions due to series resistance. I selected points at 1.0V and 1.1V and divided this slope by 50m (our \$V_{DS}\$ ) to get: $$\text{KP(lin)} = \frac{6.10144 \times 10^{-6}}{50 \times 10^{-3}} \approx 122.029 \times 10^{-6} \frac{\text{A}}{\text{V}^2}$$ The second plot is the square-root of \$I_{DS}\$ of M2. I picked the same voltage points of 1.0V and 1.1V to get the slope. This slope is squared and then multiplied by two to get KP: $$\text{KP(sat)} = 2 \cdot (7.44373 \times 10^{-3})^2 \approx 110.818 \times 10^{-6} \frac{\text{A}}{\text{V}^2}$$ The last thing I want to mention is that you can also use LTspice's waveform mathematics feature to have it plot KP across all \$V_{GS}\$ . You simply need to plug in the above equations as shown below. The reason I wanted to show these plots is to highlight that KP is not constant in a real MOSFET, and is one of the reasons the higher level MOSFET models were developed. If you performed the same KP extraction procedures on a LEVEL 1 model instead of a LEVEL 8, these plots would look quite different.
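For the analytic estimate at the top of this answer, the arithmetic is easy to script. Here is a small sketch (added for convenience) that reproduces the ≈114 µA/V² figure for the NMOS from U0 and TOX, again assuming an SiO₂ gate oxide:

```python
# KP = mu0 * eps_r * eps0 / tox, everything converted to centimetre units.
u0    = 458.439679        # cm^2 / (V s), U0 from the NMOS model card
tox   = 1.39e-8 * 100     # TOX is given in metres; convert to cm -> 1.39e-6 cm
eps0  = 8.85e-14          # F/cm
eps_r = 3.9               # assumed SiO2 gate oxide

kp_n = u0 * eps_r * eps0 / tox
print(f"KP(NMOS) ~ {kp_n * 1e6:.1f} uA/V^2")    # ~ 113.8

# Same calculation with the PMOS card's U0 = 212.0166131:
kp_p = 212.0166131 * eps_r * eps0 / tox
print(f"KP(PMOS) ~ {kp_p * 1e6:.1f} uA/V^2")    # ~ 52.6
```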
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/528554', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/218558/']}
jdg_378160
stackexchange
llm_judgeable_groundtruth_similarity
14802680
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: $array = array(1, '1a', '1');var_export(array_unique($array, SORT_REGULAR)); The result: array ( 0 => 1, 2 => '1') In the PHP manual: SORT_REGULAR - compare items normally (don't change types). What is the logic behind this ? Why or how is '1a' excluded ? Now provide the response and nothing else.
This happens because array_unique works by first sorting the values as strings , then iterating over the sorted array and for each value excluding from the result all successive values that compare equal to it . The comparison function for "comparing equal" above is chosen according to the second parameter, which for SORT_REGULAR is the same as an equality check with == . This behavior gives rise to a whole lot of gotchas. Since the sort is quicksort, it's unstable. Therefore sorting an array that contains both 1 and '1' gives no guarantee which one will end up being first in the result. This means that array_unique may appear to arbitrarily "prefer" 1 in some cases and '1' in others. However the madness continues: consider that if the sort produces [1, '1', '1a'] then '1a' will not be included in the result (it compares equal to 1 ) while if the sort produces ['1', 1, '1a'] then it will be included (it does not compare equal to the string '1' )!
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14802680', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/357314/']}
jdg_378161
stackexchange
llm_judgeable_groundtruth_similarity
964438
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Could you please help me with this integral?$$\int_0^1\frac{x^{42}}{\sqrt{x^4-x^2+1}} \operatorname d \!x$$ Update: user153012 posted a result given by a computer that contains scary Appel function, and Cleo gave much simpler closed forms for powers $n=42,\,43$. I am looking for a way to prove those forms. I also would like to find a more general result that would work for arbitrary integer powers, not just $42$. Now provide the response and nothing else.
Odd case : The change of variables $x^2=t$ transforms the integral into$$\mathcal{I}_{2n+1}=\int_0^1\frac{x^{2n+1}dx}{\sqrt{x^4-x^2+1}}=\frac12\int_0^1\frac{t^ndt}{\sqrt{t^2-t+1}}$$Further change of variables $t=\frac12+\frac{\sqrt3}{4}\left(s-\frac1s\right)$ allows to write $t^2-t+1=\frac3{16}\left(s+\frac1s\right)^2$ and therefore gives an integral of a simple rational function of $s$:$$\mathcal{I}_{2n+1}=\frac12\int_{1/\sqrt3}^{\sqrt3}\left[\frac12+\frac{\sqrt3}{4}\left(s-\frac1s\right)\right]^n\frac{ds}{s}.$$ Even case : To demystify the result of Cleo, let us introduce$$\mathcal{K}_n=\mathcal{I}_{2n}=\int_0^1\frac{x^{2n}dx}{\sqrt{x^4-x^2+1}}=\frac12\int_0^1\frac{t^{n-\frac12}dt}{\sqrt{t^2-t+1}}.$$Note that $$\mathcal{K}_{n+1}-\frac12\mathcal{K}_n=\frac12\int_0^1 t^{n-\frac12}d\left(\sqrt{t^2-t+1}\,\right)=\frac12-\left(n-\frac12\right)\left(\mathcal{K}_{n+1}-\mathcal{K}_{n}+\mathcal{K}_{n-1}\right),$$where the second equality is obtained by integration by parts. This gives a recursion relation$$\left(n+\frac12\right)\mathcal{K}_{n+1}=n\mathcal{K}_{n}-\left(n-\frac12\right)\mathcal{K}_{n-1}+\frac12,\qquad n\geq1.$$It now suffices to show that \begin{align*}\mathcal{K}_0&=\int_0^1\frac{dx}{\sqrt{x^4-x^2+1}}=\frac12\mathbf{K}\left(\frac{\sqrt3}{2}\right),\\\mathcal{K}_1&=\int_0^1\frac{x^2dx}{\sqrt{x^4-x^2+1}}=\frac12\mathbf{K}\left(\frac{\sqrt3}{2}\right)-\mathbf{E}\left(\frac{\sqrt3}{2}\right)+\frac12.\end{align*}
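These closed forms are easy to sanity-check numerically. Here is a short sketch (an addition to the answer) using SciPy, whose ellipk/ellipe take the parameter $m = k^2$, so $k=\frac{\sqrt3}{2}$ corresponds to $m=\frac34$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk, ellipe

m = 3 / 4                       # parameter m = k^2 for k = sqrt(3)/2
K, E = ellipk(m), ellipe(m)

def I(n):                       # numerical value of the integral with x^n on top
    return quad(lambda x: x**n / np.sqrt(x**4 - x**2 + 1), 0, 1)[0]

print(I(0), 0.5 * K)                   # K_0 = (1/2) K(sqrt(3)/2)
print(I(2), 0.5 * K - E + 0.5)         # K_1 = (1/2) K - E + 1/2

# Recursion (n + 1/2) K_{n+1} = n K_n - (n - 1/2) K_{n-1} + 1/2, checked for n = 1:
K0, K1 = 0.5 * K, 0.5 * K - E + 0.5
print(I(4), (1 * K1 - 0.5 * K0 + 0.5) / 1.5)
```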
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/964438', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/76458/']}
jdg_378162
stackexchange
llm_judgeable_groundtruth_similarity
32110554
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In java, the internal data of BitSet is stored as long[] instead of int[], I want to know why? Here is the code in jdk: /** * The internal field corresponding to the serialField "bits". */ private long[] words; If it's all about performance, I wonder why long[] storage will get better performance. Now provide the response and nothing else.
When querying or manipulating a single bit, there is no significant difference. You have to calculate the word index and read that word and, in case of an update, manipulate one bit of that word and write it back. That's all the same for int[] and long[] . One could argue that using a long instead of an int could raise the amount of memory that has to be transferred for a single-bit operation if you have a real 32 bit memory bus, but since Java was designed in the nineties of the last century, the designers decided that this is not an issue anymore. On the other hand, you get a big win when processing multiple bits at once. When you perform operations like and , or or xor on an entire BitSet , you can perform the operation on an entire word, i.e. read 64 bits at once, when using a long array. Similarly, when searching for the next set bit , if the bit is not within the word of the start position, subsequent words are first tested against zero, which is an intrinsic operation even for most 32 bit CPUs, so you can skip 64 zero bits at once, while the first non-zero word will definitely contain the next set bit, so only one bit-extraction operation is needed for the entire iteration. These benefits for bulk operations outweigh any single-bit related drawbacks, if there are any at all. As said, most of today's CPUs are capable of doing all operations on 64 bit words directly.
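To illustrate the indexing arithmetic, here is a Python sketch of the idea (not the actual java.util.BitSet source): with 64-bit words the word index is bitIndex >> 6 and the in-word mask is 1 << (bitIndex & 63), so a single-bit update touches exactly one word regardless of word size, while scanning can skip 64 zero bits per comparison.

```python
WORD_BITS = 64

def set_bit(words, i):
    w = i >> 6                       # which 64-bit word
    words[w] |= 1 << (i & 63)        # which bit inside that word

def next_set_bit(words, start):
    w, off = start >> 6, start & 63
    chunk = (words[w] >> off) << off # mask off bits below 'start'
    while True:
        if chunk:                    # whole zero words are skipped with one test
            return w * WORD_BITS + (chunk & -chunk).bit_length() - 1
        w += 1
        if w == len(words):
            return -1
        chunk = words[w]

words = [0] * 4                      # a 256-bit set
set_bit(words, 3)
set_bit(words, 200)
print(next_set_bit(words, 0))        # 3
print(next_set_bit(words, 4))        # 200
```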
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32110554', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/636467/']}
jdg_378163
stackexchange
llm_judgeable_groundtruth_similarity
108206
Below is a question asked on the forum raspberrypi.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I just got a Pi-4. I'm not sure if I did something wrong while setting it up, but after getting it running I noticed that sudo commands required no passwords. I tried to change this, following some advice I found here: How to change user pi sudo permissions; how to add other accounts with different permissions? As that person recommended, I first added an admin user: $ sudo adduser admin Then I did the following: $ sudo adduser admin sudo$ sudo adduser admin adm I then checked the user's privileges as advised, and it worked great. All good so far. Then I did the following as recommended: $ sudo deluser pi sudo$ sudo deluser pi adm That worked great, too. Then I did something foolish. Unclear on precisely what I was supposed to do next, I opened up 010_pi-nopasswd to edit it. The file contained just one line: pi ALL=(ALL) NOPASSWD: ALL I changed it -- and here is the foolish part, working faster than I was thinking -- to this: pi ALL=(ALL) NOPASSWD: The Pi did NOT like this. Whether I am logged on as admin or pi I cannot use sudo for anything.(I did not create a root password before doing all this, so I can't use su either.) This means I also cannot go back and edit that file. Any attempt to do so triggers the following: >>> /etc/sudoers.d/010_pi-nopasswd: syntax error near line 1 <<<sudo: parse error in /etc/sudoers.d/010_pi-nopasswd near line 1sudo: no valid sudoer sources found, quittingsudo: unable to initialize policy plugin Likewise, any other use of sudo triggers the same. Is there any way around this problem or have I effectively ruined the OS? Now provide the response and nothing else.
You didn't do anything wrong during setup. By default, user "pi" can issue sudo commands with no password; that's the way Raspbian is released. If you haven't done very much other customizing, just re-flash the microSD card and start over. If you really need to rescue the contents of the current card, flash a new card and boot the Pi from it. Put the current card into a USB adapter, plug it into a USB port, and you will be able to mount the partitions of the original card, edit files (with sudo) and fix your mistake. (At least, I think so; I haven't actually tested this.) If you have physical access to a running Linux system, you could use that rather than flashing a second SD card for the Pi. Also note the comment and link from @goldilocks below about using a "live Linux" CD to get a running Linux system fairly easily. To protect against breaking the sudo system, in the future use the visudo command when editing sudo's files. It won't let you save a file with a syntax error. For completeness, the following comes from the visudo man page : visudo parses the sudoers file after the edit and will not save the changes if there is a syntax error. Upon finding an error, visudo will print a message stating the line number(s) where the error occurred and the user will receive the “What now?” prompt. At this point the user may enter ‘e’ to re-edit the sudoers file, ‘x’ to exit without saving the changes, or ‘Q’ to quit and save changes. The ‘Q’ option should be used with extreme care because if visudo believes there to be a parse error, so will sudo and no one will be able to sudo again until the error is fixed. If ‘e’ is typed to edit the sudoers file after a parse error has been detected, the cursor will be placed on the line where the error occurred (if the editor supports this feature).
{}
{'log_upvote_score': 5, 'links': ['https://raspberrypi.stackexchange.com/questions/108206', 'https://raspberrypi.stackexchange.com', 'https://raspberrypi.stackexchange.com/users/114566/']}
jdg_378164
stackexchange
llm_judgeable_groundtruth_similarity
194485
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm building few command-line utilities in Xcode (plain C, no Cocoa). I want all of them to use my customized version of libpng, and I want to save space by sharing one copy of the library among all executables (I don't mind re-distributing .dylib with them). Do I need to do some magic to get libpng export symbols? Does "Link Binary With Libraries" build phase link statically? Apple's docs mention loading of libraries at run time with dlopen , but how I can make Xcode create executable without complaining about missing symbols? I think I've figured it out: libpng wasn't linking properly, because I've built 32/64-bit executables and 32-bit library. Build settings of the library and executables must match. libpng's config.h needs to have tons of defines like #define FEATURE_XXX_SUPPORTED "Link Binary With Libraries" build phase handles dynamic libraries just fine, and DYLD_FALLBACK_LIBRARY_PATH environmental variable is neccessary for loading .dylib s from application bundle. Now provide the response and nothing else.
Dynamic linking on Mac OS X, a tiny example Steps: create a library libmylib.dylib containing mymod.o compile and link a "callmymod" which calls it call mymod from callmymod, using DYLD_LIBRARY_PATH and DYLD_PRINT_LIBRARIES Problem: you "just" want to create a library for other modules to use.However there's a daunting pile of programs -- gcc, ld, macosx libtool, dyld --with zillions of options, some well-rotted compost, and differences between MacOSX and Linux.There are tons of man pages (I count 7679 + 1358 + 228 + 226 lines in 10.4.11 ppc)but not much in the way of examples, or programs with a "tell me what you're doing" mode. (The most important thing in understanding is to make a simplifiedOVERVIEW for yourself: draw some pictures, run some small examples,explain it to someone else). Background: apple OverviewOfDynamicLibraries , Wikipedia Dynamic_library Step 1, create libmylib.dylib -- mymod.c: #include <stdio.h> void mymod( int x ) { printf( "mymod: %d\n", x ); }gcc -c mymod.c # -> mymod.ogcc -dynamiclib -current_version 1.0 mymod.o -o libmylib.dylib # calls libtool with many options -- see man libtool # -compatibility_version is used by dyld, see also cmpdylibfile libmylib.dylib # Mach-O dynamically linked shared library ppcotool -L libmylib.dylib # versions, refs /usr/lib/libgcc_s.1.dylib Step 2, compile and link callmymod -- callmymod.c: extern void mymod( int x ); int main( int argc, char** argv ) { mymod( 42 ); }gcc -c callmymod.cgcc -v callmymod.o ./libmylib.dylib -o callmymod # == gcc callmymod.o -dynamic -L. -lmylibotool -L callmymod # refs libmylib.dylibnm -gpv callmymod # U undef _mymod: just a reference, not mymod itself Step 3, run callmymod linking to libmylib.dylib -- export DYLD_PRINT_LIBRARIES=1 # see what dyld does, for ALL programs./callmymod dyld: loaded: libmylib.dylib ... mymod: 42mv libmylib.dylib /tmpexport DYLD_LIBRARY_PATH=/tmp # dir:dir:..../callmymod dyld: loaded: /tmp/libmylib.dylib ... mymod: 42unset DYLD_PRINT_LIBRARIESunset DYLD_LIBRARY_PATH That ends one tiny example; hope it helps understand the steps. (If you do this a lot, see GNU Libtool which is glibtool on macs,and SCons .)
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/194485', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/27009/']}
jdg_378165
stackexchange
llm_judgeable_groundtruth_similarity
304212
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I was watching this video by Veritasium (note: I don't have much physics knowledge). As I understand, at LIGO they detect the gravitational waves that were generated by the collision of the two black holes. How can they still measure these waves if the energy that they measure was released was only the last 10th of the seconds of the merging black holes (as I understand from the video)? As far as I understand, that would mean that there is only one peak that they can measure, which is that 10th of a second, but their experiment seems to be going on many years and they have made many measurements. How is this possible if the final collision was so short? What do they really measure then? Edit Basically my question comes down to: was that a "once in a lifetime chance" of measuring the waves? Have they been sitting there waiting for the exact moment and then do a measurement? It isn't something they can measure everyday? Now provide the response and nothing else.
This is the data recorded from the first black hole merger: The figure is from this paper by the LIGO collaboration . A PDF of the paper is available here . The detectable signal lasted around 0.1 of a second, but the black holes were orbiting each other so fast that they completed about ten orbits during that time. Basically each oscillation in the data is one orbit. The data immediately gives the rate of decay of the orbit as the black holes merge and the amplitude with which the gravitational waves are emitted, plus lots of other information hidden away in the detail. This is easily enough to confirm that this was a black hole merger and to measure the masses of the black holes involved. Each pair of black holes only merge once, so this was the first and last signal detected from that particular pair of black holes. However the universe is a big place and there are lots of black hole binaries in it, so we expect black hole mergers to take place regularly. LIGO has already detected three mergers. The first (shown above) on 14th September 2015, then a second possible detection (at low confidence) in October 2015 and then a third firm detection on 26th December 2015. LIGO took a pause to upgrade its sensitivity, but is now working again. As a rough estimate we expect it to detect a merger around one a month, that is roughly once a month a black hole binary will merge somewhere in the region of the universe that lies within LIGO's detection limits. We don't know in advance where an when a merger will occur, so it's just a matter of waiting until one happens near enough to be detected.
{}
{'log_upvote_score': 7, 'links': ['https://physics.stackexchange.com/questions/304212', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/121763/']}
jdg_378166
stackexchange
llm_judgeable_groundtruth_similarity
167200
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The law of the conservation of momentum has been established for hundred of years. Even in Quantum field theory every particle collision must be momentum-conserving if there is homogenity in space. Can this theorem still be violated? If yes, what requirements must have a momentum-non-conserving theory? Is Heisenberg's uncertainity principle $\Delta x \Delta p \geq \frac{\hbar}{2}$ the possible answer? ( when one considers physical Systems in which $\Delta x$ is very small) ? Now provide the response and nothing else.
If the theory is invariant under translations in space, then linear momentum is conserved by Noether's theorem . If the theory is quantum, conservation holds only on the level of the expectation values (because that's the only meaningful level where you can talk about momentum as a number that's conserved in time), but it still holds. There is no way out. You must break homogeneity/translation invariance to break momentum conservation. Heisenberg's uncertainty principle has nothing to do with it, as it is just a statement about standard deviations, not expectation values, and hence has no influence on the quantum version of conservation.
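To spell out the quantum statement a little (a sketch of the standard argument, for a single particle with Hamiltonian $\hat H = \hat p^2/2m + V(\hat x)$): Ehrenfest's theorem gives $$\frac{d\langle \hat p\rangle}{dt} = \frac{i}{\hbar}\langle[\hat H,\hat p]\rangle = -\left\langle \frac{\partial V}{\partial x}\right\rangle,$$ so if the theory is homogeneous, i.e. $V$ is unchanged by translations $x \to x + a$, then $\partial V/\partial x = 0$ and $\langle \hat p\rangle$ is constant in time; equivalently, the translation generator $\hat p$ commutes with $\hat H$. The uncertainty relation only constrains $\Delta x\,\Delta p$ at a given time and says nothing about the time evolution of $\langle \hat p\rangle$, so it cannot undo this conservation.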
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/167200', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/72314/']}
jdg_378167
stackexchange
llm_judgeable_groundtruth_similarity
3251754
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $M$ be the set of all $m\times n$ matrices over real numbers.Which of the following statements is/are true?? There exists $A\in M_{2\times 5}(\mathbb R)$ such that the dimension of the nullspace of $A $ is $2$ . There exists $A\in M_{2\times 5}(\mathbb R)$ such that the dimension of the nullspace of $A $ is $0$ . There exists $A\in M_{2\times 5}(\mathbb R)$ and $B\in M_{5\times 2}(\mathbb R)$ such that $AB$ is the $2\times 2$ identity matrix. There exists $A\in M_{2\times 5}(\mathbb R)$ whose null space is $\{ (p,q,r,s,t)\in \mathbb R^5 | p=q, r=s=t\}$ . I am sure about the option $3$ definitely will not come. But I don't know about others..and then the dimension of the nullspace is $3$ ?? Now provide the response and nothing else.
You are correct, and the proof is rather simple (not requiring the wall of text you wrote :) $$\begin{align}P(\neg B|C)&=\frac{P(\neg B \land C)}{P(C)} &\text{by definition}\\&= \frac{P(C) - P(B\land C)}{P(C)} & \text{Because $B\land C$ and $\neg B\land C$ form a partition of $C$}\\&=\frac{P(C)}{P(C)}-\frac{P(B\land C)}{P(C)}&\text{Algebraic manipulation}\\&=1-P(B|C)&\text{by definition}\end{align}$$ Note : I assume here that $P(C)>0$, i.e. that $C$ is not an impossible event. Things can get complicated quickly if we look at a more general solution.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3251754', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/679830/']}
jdg_378168
stackexchange
llm_judgeable_groundtruth_similarity
406197
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am running a dual boot of Windows and Debian on my Laptop. I use Linux mostly but from time to time I need to access my files in my Windows partition. My Windows partition is mounted as follows at startup. >cat /etc/fstab |grep Win7LABEL=Windows7_OS /mnt/Win7 auto nosuid,nodev,nofail,x-gvfs-show 0 0 Basically every file in the Windows partition is owned by root:root and has a 777 permission. Then whenever I mv a file to my working (Linux) partition, I have a 777 file under my partition, owned by me (while cp in terminal will give a 755 file but if done via gnome will save the file with a 777 permission). Is this the best practice to mount a partition? Or should I mount it such that instead of root, I am the owner of all files/dirs and somehow be able to set all dirs to 755 and files to 644 when the mount happens at boot? If so, how can it be done? Now provide the response and nothing else.
You can use fmask and dmask mount options * to change the permission mapping on an ntfs filesystem. To make files appear rw-r--r-- (644) and directories rwxr-xr-x (755) use fmask=0133,dmask=0022 . You can combine this with uid= and gid= options to select the file owner and group if you need write access for your user. * fmask and dmask seem to work for the kernel (read-only) driver as well, even that they are not documented in mount man page . They are documented options for ntfs-3g.
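Applied to the fstab line in the question, that would look something like the following (illustrative; uid/gid 1000 assumes your account is the first user created on the system, which you can confirm with the id command):

```
LABEL=Windows7_OS /mnt/Win7 ntfs-3g uid=1000,gid=1000,fmask=0133,dmask=0022,nosuid,nodev,nofail,x-gvfs-show 0 0
```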
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/406197', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/260154/']}
jdg_378169
stackexchange
llm_judgeable_groundtruth_similarity
11358
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Why can't we light an LED with a single positive wire on the anode, leaving the cathode unconnected? Let's suppose I connect a wire with a positive voltage to the anode of the LED. Now, I think, current will flow past the light emitting diode. This should, by my understanding, make it glow, even though the cathode isn't connected to anything. However, experimental evidence and common knowledge show that this isn't the case. Is it possible in any situation(IDEAL) that we calculate current and voltage value so that this circuit will work, and automatically flow to ground after passing through the circuit? If so, do we need a connection to ground in the real world? Please point out my mistake if you notice a simple error I've made, and let me know what is right. Now provide the response and nothing else.
The idea of a closed circuit works for low frequencies - where the corresponding wavelength is much larger than the components and wires. Kirchoff's laws hold. Things get tricky when the frequency is higher. A sudden change in voltage propagates at the speed of light (or some good fraction of it in cables, transmission lines) and there is more current at one point than at another. In theory, you could put a sharp-edged voltage pulse on one lead of an LED, have nothing connected to the other, and for a tiny instant in time, as the pulse passes through the LED, have enough current for it to glow. But it would be extremely brief. So what if you send a series of pulses? A good rule of thumb to remember is at light speed, one nanosecond is about one foot (30+ cm). LEDs and the pulse-pushing circuitry I imagine would be a few inches (or cm) and so things happen on a scale of maybe tenths of nanoseconds. You'd have to work with frequencies at several GHz. Another problem - every positive pulse you put on the anode lead will go through the LED and add positive charge to the non-connected cathode lead. Each positive pulse will add more. That charge has nowhere to escape to - just a tiny bit can flow back as leakage current, no diode being perfect. From a physics point of view, so what? Just let the whole contraption develop a positive charge. Figure a few milliamps lasting for say 50 ns, times 5 billion times per second (just making up numbers), you quickly get to coulombs of charge, and many volts in just seconds. At a practical level, it's not very practical at all. I wonder if it would work better to have two LEDs wired anti-parallel, and feed GHz pulses to one end of the pair and leave the other end disconnected? (I leave that thought for others to discuss.)
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/11358', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/2951/']}
jdg_378170
stackexchange
llm_judgeable_groundtruth_similarity
6929957
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have been learning PHP MVC pattern and it is pretty cool. have almost finished app and I can see how mess you can make a code without good design.Now can MCV be applied to C++ apps? Where does Plugin manager/Plugins go if that is even possible?In model or controller?Thanks! EDIT: I mean C++ with GUI toolkit like QT/Wxwidgets/GTK+Also Please help me on how to implement in C++. I have learned how to do it in PHP but as you know the two languages are somehow different! EDIT2 http://forums.wxwidgets.org/viewtopic.php?f=1&t=30983 Now provide the response and nothing else.
How do you actually implement it in C++?

- Make the classes in charge of rendering know nothing about application details. Call them SomethingView classes to make this point clear.
- Make your domain objects know nothing about visualization or user interaction. You don't need to call them Model, but you could.
- Create a set of classes in charge of playing the role of Controllers: wire up the dependencies to view and model classes, via dependency injection if possible (example: CppInject ). In any case, controller classes can know about both model and view classes, so the important part is this: all the coupling between view and model objects is isolated to the controllers . Also, this implies that all imperative-style programming should be confined to the controller classes as well: view and model should be declarative-style. That means they should offer services related to their role, but avoid direct interaction with other objects as side effects.
- It is not true that you need to implement communication between controllers and the other components with an event-style system; such a system is definitely helpful, but certainly not required.
- Surprise! The above applies to any language or framework, except of course languages that somehow already force MVC down your throat from the start, i.e. Ruby on Rails (a tiny language-agnostic sketch of the wiring follows below).
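Since the last point says the structure is language-agnostic, here is a deliberately tiny sketch of the wiring in Python (an illustration of the separation only, not a C++/Qt recipe; all the class and method names are made up):

```python
class CounterModel:                      # knows nothing about rendering
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1


class ConsoleView:                       # knows nothing about the domain
    def render(self, text):
        print(text)


class CounterController:                 # the only place where model and view meet
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def on_button_clicked(self):
        self.model.increment()
        self.view.render(f"count = {self.model.value}")


# Dependency injection happens at composition time:
controller = CounterController(CounterModel(), ConsoleView())
controller.on_button_clicked()           # count = 1
controller.on_button_clicked()           # count = 2
```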
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6929957', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/709683/']}
jdg_378171
stackexchange
llm_judgeable_groundtruth_similarity
16076
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Why is hydrated carbon dioxide - the predominant form of acid that one gets upon dissolution of carbon dioxide in solution - so unstable? Is the below rationale valid? Carbon in carbon dioxide has two (empty? no, but still vulnerable to attack) p-orbitals and bears a strong partial positive charge. Oxygen's lone pair can attack an empty p-orbital and form a formal charge-separated complex with the carbon dioxide. This is "hydrated" carbon dioxide or "carbonic acid." This form is extremely unstable and subject to disproportionation due to an unfavorable charge and the unfavorable nature of charge separation itself. The change in entropy also favors the products of disproportionation. However, there exists a pathway to stability - that is - protonation of the oxygen with the negative formal charge by the oxygen bearing the positive formal charge. This, however, is akin to a forbidden fruit; the ephemeral three-membered ring that would have to be formed exhibits "ring strain" ( if you object to this term, can you please elaborate on your objection ), and as a result, disproportionation is overwhelmingly favored, especially from an entropic standpoint (reconstitution of carbon dioxide gas is highly entropically favorable). Now provide the response and nothing else.
The situation is described by the following equilibrium$$\ce{H2O + CO2 <=> HO-CO-OH}$$In a closed system, using Le Chatelier’s principle you can manipulate the equilibrium in either direction (e.g. pump in carbon dioxide and increase the concentration of carbonic acid, etc.). However in an open system, as carbon dioxide escapes, the amount of carbonic acid will decrease. When all is said and done, water and carbon dioxide are two extremely stable molecules. Since$$\Delta G = -RT\ln K$$Their stability is what drives the equilibrium to the left and why carbonic acid appears to be (relatively) so unstable.
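As a rough worked number (an illustration only; the equilibrium constant used here is the commonly quoted order-of-magnitude value for CO₂(aq) + H₂O ⇌ H₂CO₃, about 1.7×10⁻³, so treat it as indicative rather than authoritative):

```python
import math

R = 8.314          # J/(mol K)
T = 298.15         # K
K = 1.7e-3         # illustrative hydration constant: [H2CO3] / [CO2(aq)]

dG = -R * T * math.log(K)
print(f"dG ~ {dG / 1000:.1f} kJ/mol")   # ~ +15.8 kJ/mol, positive => CO2 + H2O favoured
```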
{}
{'log_upvote_score': 5, 'links': ['https://chemistry.stackexchange.com/questions/16076', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/5084/']}
jdg_378172
stackexchange
llm_judgeable_groundtruth_similarity
40704760
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: After deploying ASP.NET Core app to azure and opening the site, I get the following error: InvalidOperationException: Could not find 'UserSecretsIdAttribute' on assembly '******, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'. The exception details also include that the error happens at Startup.cs on this line of code: builder.AddUserSecrets(); Thank you Now provide the response and nothing else.
There was an update to the user secrets module just recently. Version 1.0.1 and up now requires you to specify an assembly-level attribute for the id of the user secrets, or, as a fallback, the way it was previously in project.json. Here is the announcement on GitHub: https://github.com/aspnet/Announcements/issues/209

You can define the secrets id in the .csproj like this:

    <PropertyGroup>
      <UserSecretsId>aspnet-TestApp-ce345b64-19cf-4972-b34f-d16f2e7976ed</UserSecretsId>
    </PropertyGroup>

This generates the following assembly-level attribute. Alternatively, instead of adding it in the .csproj file, you can of course add it yourself, e.g. to Startup.cs:

    [assembly: UserSecretsId("aspnet-TestApp-ce345b64-19cf-4972-b34f-d16f2e7976ed")]

Also, you should use:

    builder.AddUserSecrets<Startup>();

It will search for that attribute in the assembly of the given type; in this case I used the Startup class.

Note: this will be deprecated in 2.0 (1.0.2 and 1.1.1 have marked it obsolete):

    builder.AddUserSecrets();

I checked the source code for the user secrets configuration, and calling AddUserSecrets() without the type does this:

    var attribute = entryAssembly.GetCustomAttribute<UserSecretsIdAttribute>();
    if (attribute != null)
    {
        return AddUserSecrets(configuration, attribute.UserSecretsId);
    }

    // try fallback to project.json for legacy support
    try
    {
        var fileProvider = configuration.GetFileProvider();
        return AddSecretsFile(configuration, PathHelper.GetSecretsPath(fileProvider));
    }
    catch { }

    // Show the error about missing UserSecretIdAttribute instead an error about missing
    // project.json as PJ is going away.
    throw MissingAttributeException(entryAssembly);

It's trying to find the UserSecretsId attribute on your assembly and, failing that, checking whether it can find it in project.json. Then (as commented) it returns an error about the missing attribute, as they wouldn't want to complain about project.json anymore since it is being deprecated.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/40704760', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3453517/']}
jdg_378173
stackexchange
llm_judgeable_groundtruth_similarity
14298844
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to verify the difference between instance attributes and class attributes as laid out by the Python tutorial release 2.7.3 dated Nov 01, 2012, chapter 9: Classes, Page 66 last line ( source ): Valid method names of an instance object depend on its class. By definition, all attributes of a class that are function objects define corresponding methods of its instances. So in our example, x.f is a valid method reference, since MyClass.f is a function, but x.i is not, since MyClass.i is not. But x.f is not the same thing as MyClass.f — it is a method object, not a function object. I have this: class MyClass: """A simple example class""" i = 12345 def f(): return 'hello world' Then I do this: >>> x = MyClass()>>> x.f<bound method MyClass.f of <__main__.MyClass instance at 0x02BB8968>>>>> MyClass.f<unbound method MyClass.f>>>> type(MyClass.f)<type 'instancemethod'>>>> type(x.f)<type 'instancemethod'> Note that the type of both x.f and MyClass.f is instancemethod. There is no difference in types but the tutorial says otherwise. Can someone please clarify? Now provide the response and nothing else.
Bound vs Unbound Methods - an explanation ... or why Python has the behaviour you point out.

So, first off, a note that this is different in 3.x. In 3.x, you will get MyClass.f being a function, and x.f as a method - as expected. This behaviour is essentially a poor design decision that was later changed.

The reason for this is that Python has a concept of a method that is different from most languages: essentially a function with the first argument pre-filled as the instance (self). This pre-filling makes a bound method:

    >>> x.foo
    <bound method MyClass.foo of <__main__.MyClass instance at 0x1004989e0>>

In Python 2.x and before, it was reasoned that a method not attached to an instance would be an unbound method, which was a function with the restriction that the first argument (self) must be an instance of the object. This is then ready to be bound to an instance and become a bound method:

    >>> MyClass.foo
    <unbound method MyClass.foo>

With time, it became clear that an unbound method is really just a function with this odd restriction that didn't really matter (that self must be of the 'correct' type), so they were removed from the language (in 3.x). This is essentially duck-typing self, which suits the language.

    Python 3.3.0 (default, Dec 4 2012, 00:30:24)
    >>> x.foo
    <bound method MyClass.foo of <__main__.MyClass object at 0x100858ed0>>
    >>> MyClass.foo
    <function MyClass.foo at 0x10084f9e0>

Further reading: this is a (condensed, from memory) explanation which can be read in full from Python creator Guido van Rossum's own mouth in his 'History of Python' series.
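A small self-contained demonstration of the 3.x behaviour described above (class name chosen to mirror the example):

    class MyClass:
        def foo(self):
            return "hello"

    x = MyClass()
    print(type(MyClass.foo))              # <class 'function'> on Python 3.x
    print(type(x.foo))                    # <class 'method'>
    print(x.foo.__self__ is x)            # True: the instance is the pre-filled first argument
    print(x.foo.__func__ is MyClass.foo)  # True: same underlying function object
    print(MyClass.foo(x))                 # "hello": the plain function called with an explicit instance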
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14298844', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/494074/']}
jdg_378174
stackexchange
llm_judgeable_groundtruth_similarity
311133
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: For instance, would this c++ function be a good idea? void doSomething(not_nullptr<MyType> arg){ // stuff} With not_tullptr being a template wrapper for pointers, that will throw an exception if a null value is ever assigned to it. It has the advantage that it finds errors fast, and clearly documents assumptions made directly in the function prototype. The traditional way to do something like this would be: void doSomething(MyType* arg){ assert(arg != nullptr); // stuff} This method accomplishes the goal of finding null errors fast. But it does not itself document that assumption in the prototype. So my question is, is the idea above a good one? It is not the standard way of doing things in the c++ and could of course be expanded to other assumptions. Now provide the response and nothing else.
Everything has a cost, even if it isn't measured in runtime performance. Encoding such assumptions into the type system sounds like a good idea. But it is not without its flaws. In particular, it requires you to have and use a bunch of increasingly specific types for increasingly specific assumptions. Let's say that you have a function that takes an array from the user and modifies the first three elements in it. Now, this function makes two assumptions: that there's actually an array and that the array is at least 3 elements long. There are types which can encode both of these assumptions. The guideline support library type span can cover both of these. But just look at the code for that type. If it weren't available, you probably wouldn't write it yourself. The more such assumptions you have, and the more special-case they get, the harder it is to write a type just for them. After all, span only solves this particular problem as a partial by-product of solving its real problem: having a way to represent an array of some size. So it's a balancing act. You don't want to spend more time writing special-case types, but you do need some to cover a lot of bases. Where exactly you draw the line depends on your needs, but I don't feel that trying to encode everything into the type system is worthwhile. Also, having contracts as part of C++, which people are working on (PDF) , would be able to bridge the gap here in many of the special cases. There is also the issue of dealing with combinations of such contracts. The not_null contract is generally a good idea, but by its very nature it cannot work with move-only types that leave the moved-from object null. Thus, not_null<unique_ptr<T>> is not a functional type. Again, that's not to say that you shouldn't have these. But you really need to think about when it is truly appropriate to have a type encapsulate a contract and when it is not.
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/311133', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/75604/']}
jdg_378175
stackexchange
llm_judgeable_groundtruth_similarity
1785414
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to find a closed form for the integral $$I=\int_{\frac{\pi}{4}}^{\frac{3\pi}{4}}\frac{\lfloor|\tan x|\rfloor}{|\tan x|}dx$$ So far, my reasoning is thus: write, by symmetry through $x=\pi/2$, $$I=2\sum_{n=1}^{\infty}n\int_{\arctan n}^{\arctan (n+1)}\frac{dx}{|\tan x|}=2\sum_{n=1}^{\infty}n\ln\frac{\sin\arctan(n+1)}{\sin\arctan n}$$Using $\sin{\arctan {x}}=\frac{x}{\sqrt{1+x^{2}}}$, we get: $$I=2\sum_{n=1}^{\infty}n\ln(\frac{(n+1)\sqrt{1+n^2}}{n\sqrt{1+(n+1)^2}})=\sum_{n=1}^{\infty}n\ln\frac{(n+1)^2(1+n^2)}{n^2(1+(n+1)^2)}=\sum_{n=1}^{\infty}n\ln(1+\frac{2n+1}{n^2(n+1)^2})$$ Expanding the logarithm into an infinite series we get $$I=\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\frac{(-1)^{m+1}(2n+1)^m}{mn^{2m-1}(n+1)^{2m}}$$ Here I am a bit stuck.. Does anyone have any suggestions to go further? Thank you. EDIT:keeping in mind the nice answer below, applying summation by parts to $$I_N=2\sum_{n=1}^{N}n\ln\frac{\sin\arctan(n+1)}{\sin\arctan n}=2\sum_{n=1}^{N}n(\ln\sin\arctan(n+1)-\ln\sin\arctan n)$$ gives $$I_N=2((N+1)\ln\sin\arctan(N+1)+\frac{\ln 2}{2}-\sum_{n=1}^{N}\ln\sin\arctan(n+1))$$ hence: $$I-\ln2=-\sum_{n=2}^{\infty}\ln\frac{n^2}{1+n^2}=\sum_{n=2}^{\infty}\ln\frac{1+n^2}{n^2}=\sum_{n=2}^\infty\sum_{m=1}^\infty\frac{(-1)^{m+1}}{mn^{2m}}= \sum_{m=1}^\infty\frac{(-1)^{m+1}}{m}\sum_{n=2}^\infty n^{-2m}=\sum_{m=1}^\infty\frac{(-1)^{m+1}(\zeta(2m)-1)}{m}$$ Is this valid and helpful? EDIT 2: Coming back to $$\sum_{n=2}^{\infty}\ln(1+\frac{1}{n^2})=\ln(\prod_{n=2}^{\infty}(1+\frac{1}{n^2}))=\ln(\prod_{n=2}^{\infty}(1-\frac{i^2}{n^2}))=\ln(\prod_{n=1}^{\infty}(1-\frac{i^2}{n^2}))-\ln2$$ $$=\ln(\frac{\sin(i\pi)}{i\pi})-\ln2=\ln\frac{\sinh\pi}{\pi}-\ln2$$ hence $I=\ln\frac{\sinh\pi}{\pi}$ Now provide the response and nothing else.
Maybe we are lucky. We may notice that:$$ 1+\frac{2n+1}{n^2(n+1)^2} = 1+\frac{1}{n^2}-\frac{1}{(n+1)^2} $$and the roots of the polynomial $x^2(x+1)^2+2x+1$ are given by$$ \alpha = \frac{1}{2}\left(-1-\sqrt{2}-\sqrt{2\sqrt{2}-1}\right), $$$$ \beta = \frac{1}{2}\left(-1-\sqrt{2}+\sqrt{2\sqrt{2}-1}\right), $$$$ \gamma = \frac{1}{2}\left(-1+\sqrt{2}-i\sqrt{2\sqrt{2}+1}\right), $$$$ \delta = \frac{1}{2}\left(-1+\sqrt{2}+i\sqrt{2\sqrt{2}+1}\right), $$so:$$ \sum_{n=1}^{N}\log\left(1+\frac{2n+1}{n^2(n+1)^2}\right)=\log\prod_{n=1}^{N}\frac{(n-\alpha)(n-\beta)(n-\gamma)(n-\delta)}{n^2(n+1)^2}$$can be written in terms of:$$ \log\prod_{n=1}^{N}\frac{n-\alpha}{n} = \log\frac{\Gamma(N+1-\alpha)}{\Gamma(N+1)\Gamma(1-\alpha)} $$and through summation by parts the problem boils down to computing: $$ \sum_{N\geq 1}\log\frac{\Gamma(N+1-\alpha)\Gamma(N+1-\beta)\Gamma(N+1-\gamma)\Gamma(N+1-\delta)}{(N+1)^2\Gamma(N+1)^4\Gamma(1-\alpha)\Gamma(1-\beta)\Gamma(1-\gamma)\Gamma(1-\delta)}\tag{1}$$ where:$$\log\Gamma(z+1)=-\gamma z+\sum_{n\geq 1}\left(\frac{z}{n}-\log\left(1+\frac{z}{n}\right)\right) $$probably leads to a massive simplification of $(1)$, or at least the chance to write $(1)$ as a simple integral by exploiting the identities:$$ \log(m)=\int_{0}^{+\infty}\frac{e^{-x}-e^{-mx}}{x}\,dx,\qquad \log\left(1-\frac{\nu}{n}\right)=\int_{0}^{+\infty}\frac{1-e^{\nu x}}{x e^{nx}}\,dx.$$ However, by Did's comment we simply have: $$ \log\prod_{n\geq 1}\left(1+\frac{1}{n^2}\right) = \color{red}{\log\frac{\sinh \pi}{\pi}} $$ through the Weierstrass product for the $\sinh$ function.
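As a quick numerical sanity check of that closed form (a sketch only: it sums the series $\sum_{n\geq 1} n\log\left(1+\frac{2n+1}{n^2(n+1)^2}\right)$ derived in the question and compares it with $\log\frac{\sinh\pi}{\pi}$):

    import math

    partial = sum(n * math.log1p((2 * n + 1) / (n * n * (n + 1) ** 2))
                  for n in range(1, 10**6))
    closed_form = math.log(math.sinh(math.pi) / math.pi)
    print(partial, closed_form)  # both are approximately 1.30176; the tail of the series is O(1/N)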
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1785414', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/333611/']}
jdg_378176
stackexchange
llm_judgeable_groundtruth_similarity
1525605
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I tried google, but found getppid() which gets the parent pid of the current process. I need something like getppid(some_other_pid) , is there such a thing? Basically takes the pid of some process and returns the parent process' pid. Now provide the response and nothing else.
I think the simplest thing would be to open "/proc" and parse the contents. You'll find the ppid as the 4th parameter of /proc/pid/stat In C, libproc has a get_proc_stats function for parsing that file: see Given a child PID how can you get the parent PID for an example.
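As a rough sketch of the same idea in script form (Python rather than C, Linux-only, relying on the /proc layout described above rather than the libproc API):

    import os

    def get_ppid(pid):
        # /proc/<pid>/stat looks like: "pid (comm) state ppid pgrp ..."
        with open("/proc/%d/stat" % pid) as f:
            data = f.read()
        # split after the closing ')' of the command name, which may itself contain spaces
        fields = data.rsplit(")", 1)[1].split()
        return int(fields[1])  # fields[0] is the state, fields[1] is the ppid

    print(get_ppid(os.getpid()), os.getppid())  # the two values should match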
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1525605', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/35364/']}
jdg_378177
stackexchange
llm_judgeable_groundtruth_similarity
9295616
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want list of dates lies between two dates by select query. For example: If i give '2012-02-10' and '2012-02-15' I need the result. date ----------2012-02-102012-02-112012-02-122012-02-132012-02-142012-02-15 How can i get? Now provide the response and nothing else.
Try:

    select * from
    (select adddate('1970-01-01',t4.i*10000 + t3.i*1000 + t2.i*100 + t1.i*10 + t0.i) selected_date from
     (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t0,
     (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t1,
     (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t2,
     (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t3,
     (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t4) v
    where selected_date between '2012-02-10' and '2012-02-15'

- for date ranges up to nearly 300 years in the future.

[Corrected following a suggested edit by UrvishAtSynapse.]
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/9295616', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1156010/']}
jdg_378178
stackexchange
llm_judgeable_groundtruth_similarity
2849305
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: $$D_2 = \left\lbrace\begin{bmatrix}a_{1,1} & 0 \\0 & a_{22}\end{bmatrix}: a_{1,1}, a_{2,2} \in \mathbb{R}\right\rbrace.$$ Determine a basis of $D_2$. $$\mathbb{P}_2 = \lbrace a_0 + a_1 t + a_2 t^2 : a_0, a_1, a_2 \in \mathbb{R}\rbrace$$ $$T : \mathbb{P}_2 \to \mathbb{P}_2 : T(\underline{f}) = \begin{pmatrix} \underline{f}(0) \\ \underline{f}(3) \end{pmatrix}.$$ Determine a basis for the kernel of $T$. These were 2 questions from a midterm and I was wondering how I should get started? Now provide the response and nothing else.
The sums can match, for example $\,144 = 8 \cdot 6 \cdot 3 = 4 \cdot 4 \cdot 9\,$ with $\,8+6+3=4+4+9\,$.

[ EDIT ]   Also, $144 = 2\cdot8\cdot9 = 3 \cdot 4 \cdot 12$ with $\,2+8+9=3+4+12\,$, so multiple such factorizations may exist for the same number. Moreover, there exist such with the same sum, e.g. $\,1680 = 4 \cdot 20 \cdot 21 = 5 \cdot 12 \cdot 28 = 7 \cdot 8 \cdot 30\,$ with $\,4+20+21=5+12+28=7+8+30\,$.

[ EDIT #2 ]   The $\scriptsize\color{silver}{\text{(quick-and-dirty)}}$ Python code used to look up the triplets of factors:

    n = 2000 # upper bound of range to check
    k = 2    # minimum number of matching triples that get listed
    m = 2    # change to 1 to allow unit factors
    o = 0    # change to 1 to disallow identical factors in a triple
    px = [{} for i in range(n)]
    for a in range(m, n):
        for b in range(a + o, n // a):
            for c in range(b + o, n // (a * b)):
                p = a * b * c; s = a + b + c
                px[p][s] = px[p].get(s, []) + [(a, b, c)]
    for i in range((o+1)**3, n):
        for j in sorted(px[i].keys()):
            if len(px[i][j]) >= k:
                print(str(i) + "\t+" + str(j) + "\t" + str(px[i][j])[1:-1])

Some more: smallest number that has $3$ sets of $3$ triples each that sum to different values: $$\begin{matrix}5400 &= 5 \cdot 30 \cdot 36 &= 6 \cdot 20 \cdot 45 &= 9 \cdot 12 \cdot 50 &\quad\quad \style{font-family:inherit}{\text{sum}} &= 71\\ &= 5 \cdot 24 \cdot 45 &= 6 \cdot 18 \cdot 50 &= 10 \cdot 10 \cdot 54 & & = 74\\ &= 4 \cdot 30 \cdot 45 &= 5 \cdot 20 \cdot 54 &= 9 \cdot 10 \cdot 60 & &= 79\\\end{matrix}$$ smallest number that has $4$ sets of $4$ triples each that sum to different values: $$\small\begin{matrix}166320 &= 20 \cdot 77 \cdot 108 &= 22 \cdot 63 \cdot 120 &= 24 \cdot 55 \cdot 126 &= 28 \cdot 45 \cdot 132 &\quad \style{font-family:inherit}{\text{sum}} &= 205\\ &= 16 \cdot 99 \cdot 105 &= 18 \cdot 70 \cdot 132 &= 21 \cdot 55 \cdot 144 &= 30 \cdot 36 \cdot 154 & & = 220\\ &= 11 \cdot 105 \cdot 144 &= 14 \cdot 66 \cdot 180 &= 16 \cdot 55 \cdot 189 &= 20 \cdot 42 \cdot 198 & & = 260 \\ &= 5 \cdot 154 \cdot 216 &= 6 \cdot 105 \cdot 264 &= 8 \cdot 70 \cdot 297 &= 21 \cdot 24 \cdot 330 & & = 375\end{matrix} \\$$
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2849305', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/560800/']}
jdg_378179
stackexchange
llm_judgeable_groundtruth_similarity
16383205
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I currently use scope.onFocus = function(){ // change scope variables };elem.bind('focus', function(){ scope.$apply('onFocus()')}); to bind to the focus event in the link function of a directive. The problem is if I fire the focus event manually like this: elem.focus() in say ng-click handler , I will get "apply is in progress" error. Is the workaround to check whether apply has been called : if (! scope.$$phase) ? Is it considered "Angularic" to check before calling apply? Any other elegant solution? UPDATED: Here's my solution: Since scope.onFocus can trigger external events (outside of Angular) which in turn can call $apply , you would have 'apply already in progress' error. The trick is to call $apply separately. element.bind('focus', function(){ scope.onFocus(); scope.$apply(); // don't wrap onFocus call in $apply}) Now provide the response and nothing else.
Basically I read equation 5 as you describe in your math above. So for "a sunny day", where no instance was observed, you would calculate S("day" | "a sunny"). Not finding the trigram "a sunny day", you would take case two in equation 5 and estimate S("day" | "a sunny") as alpha * S("day" | "sunny"). If, again, you recorded no observations of "sunny day", you would approximate S("day" | "sunny") as alpha * S("day"), which is the terminal case f("day") / N (the number of observed unigrams). By setting alpha to 0.4 you get exactly what you wrote out above. Hope this helps. -bms20
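One way to read that scoring scheme in code (a sketch only; the count dictionaries are hypothetical and keyed by word tuples, with alpha = 0.4 as in the question):

    def score(word, w1, w2, trigrams, bigrams, unigrams, total_unigrams, alpha=0.4):
        """Score `word` following the context (w1, w2), backing off with factor alpha."""
        if trigrams.get((w1, w2, word), 0) > 0:
            return trigrams[(w1, w2, word)] / bigrams[(w1, w2)]
        if bigrams.get((w2, word), 0) > 0:
            return alpha * bigrams[(w2, word)] / unigrams[w2]
        return alpha * alpha * unigrams.get(word, 0) / total_unigrams

    # e.g. score("day", "a", "sunny", ...) falls through to alpha**2 * f("day") / N
    # when neither "a sunny day" nor "sunny day" was observed.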
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16383205', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_378180
stackexchange
llm_judgeable_groundtruth_similarity
20878553
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Has anyone come across a scenario for using ConfigureAwait(true) ? Since true is the default option I cannot see when would you ever use it. Now provide the response and nothing else.
The parameter is documented as: "true to attempt to marshal the continuation back to the original context captured; otherwise, false."

It's actually more like saying that ConfigureAwait(true) is like using .ContinueWith( t => {...}, TaskScheduler.FromCurrentSynchronizationContext()), whereas ConfigureAwait(false) is like using .ContinueWith( t => {...}).

If you pass false, then the continuation is allowed to run on a thread-pool thread instead of pulling back to the current synchronization context.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20878553', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1239433/']}
jdg_378181
stackexchange
llm_judgeable_groundtruth_similarity
12946004
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a table of around 100 Users and I also have an array of user ids. What I wanted to do is show all users who are not a part of this array of user ids. When I do something like this User.where('id NOT IN (?)', [9, 2, 3, 4]) It successfully returns the records where the user's id does not belong in that array. However if that array is empty like so User.where('id NOT IN (?)', []) It does not return any users back and the SQL query looks like this SELECT "users".* FROM "users" WHERE (id NOT IN (NULL)) Does anyone know why this happens or could this be a bug? I am using Rails 3.2.5 with PostgreSQL. Now provide the response and nothing else.
ActiveRecord (3.2.1 at least) treats empty arrays as NULLs. The placeholders in a where call are handled by sanitize_sql. If you trace through the code for a bit, you'll come to replace_bind_variables:

    def replace_bind_variables(statement, values) #:nodoc:
      raise_if_bind_arity_mismatch(statement, statement.count('?'), values.size)
      bound = values.dup
      c = connection
      statement.gsub('?') { quote_bound_value(bound.shift, c) }
    end

and then quote_bound_value:

    def quote_bound_value(value, c = connection) #:nodoc:
      if value.respond_to?(:map) && !value.acts_like?(:string)
        if value.respond_to?(:empty?) && value.empty?
          c.quote(nil)
        else
          value.map { |v| c.quote(v) }.join(',')
        end
      else
        c.quote(value)
      end
    end

An empty Array will satisfy all four conditions to get you to c.quote(nil) and that's where your NULL comes from. All the special logic that leads to c.quote(nil) indicates that this is intentional behavior. Saying IN (or NOT IN) with an empty list:

    where c in ()

should produce an SQL error, so maybe the AR people are trying to prevent that by quietly turning that bad SQL into c in (null). Note that neither of these:

    select ... from t where c in (null);
    select ... from t where c not in (null);

should ever produce any results due to the behavior of SQL's NULL. This is a classic newbie mistake and the AR people really should know better. I'd prefer an exception myself: telling me that I'm about to deploy a foot-bullet would be much friendlier than just handing me a different gun.

Executive summary: This "empty array means NULL" behavior is intentional. You should never ever try where('c in (?)', []) or where('c not in (?)', []) since neither statement makes much sense. Update your Ruby code to check for empty arrays and do whatever needs to be done to get the results you expect.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12946004', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/544025/']}
jdg_378182
stackexchange
llm_judgeable_groundtruth_similarity
9725531
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a way to see with git log or some other command only the commits that were added after branch creation? usage: git log [<options>] [<since>..<until>] [[--] <path>...] or: git show [options] <object>... --quiet suppress diff output --source show source --decorate[=...] decorate options Now provide the response and nothing else.
Full documentation is here: https://www.kernel.org/pub/software/scm/git/docs/gitrevisions.html

Suppose you have a repo that looks like this:

    base - A - B - C - D (master)
                \
                 \- X - Y - Z (myBranch)

Verify the repo status:

    > git checkout master
    Already on 'master'
    > git status ; git log --oneline
    On branch master
    nothing to commit, working directory clean
    d9addce D
    110a9ab C
    5f3f8db B
    0f26e69 A
    e764ffa base

and for myBranch:

    > git checkout myBranch
    > git status ; git log --oneline
    On branch myBranch
    nothing to commit, working directory clean
    3bc0d40 Z
    917ac8d Y
    3e65f72 X
    5f3f8db B
    0f26e69 A
    e764ffa base

Suppose you are on myBranch, and you want to see only changes SINCE branching from master. Use the two-dot version:

    > git log --oneline master..myBranch
    3bc0d40 Z
    917ac8d Y
    3e65f72 X

The three-dot version gives all changes from the tip of master to the tip of myBranch. However, note that the common commit B is not included:

    > git log --oneline master...myBranch
    d9addce D
    110a9ab C
    3bc0d40 Z
    917ac8d Y
    3e65f72 X

PLEASE NOTE: git log and git diff BEHAVE DIFFERENTLY! The behavior is not exactly opposite, but almost:

    > git diff master..myBranch
    diff --git a/rev.txt b/rev.txt
    index 1784810..e900b1c 100644
    --- a/rev.txt
    +++ b/rev.txt
    @@ -1 +1 @@
    -D
    +Z

    > git diff master...myBranch
    diff --git a/rev.txt b/rev.txt
    index 223b783..e900b1c 100644
    --- a/rev.txt
    +++ b/rev.txt
    @@ -1 +1 @@
    -B
    +Z

So, the two-dot version shows the diff from tip of master (i.e. D) to tip of myBranch (Z). The three-dot version shows the difference from the base of myBranch (i.e. B) to the tip of myBranch (Z).
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/9725531', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/170521/']}
jdg_378183
stackexchange
llm_judgeable_groundtruth_similarity
209990
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am using BMP180 barometric pressure sensor on an elevator for extended periods (months together), below is the guidance from Sparkfun. Sparkfun guidance : “You should also remember that pressure changes due to weather will affect your altitude readings. The best accuracy will be obtained if you take a “fresh” p0 (baseline/reference pressure) when you need it and don’t rely on it to be accurate for extended periods due to changes in the weather” How can I obtain a fresh p0 (baseline/reference pressure) when the elevator is continuously moving ? Now provide the response and nothing else.
Two options:

1. Detect ground-floor and perform a reset on each arrival.
2. Install a second unit on a specific floor and update the reference pressure in the elevator micro - perhaps by wireless connection.

And ... try to compensate in software. If you can detect the difference between the rapid changes of pressure due to elevator motion and those due to atmospheric variation, you could adjust accordingly when the elevator stops (but see caution notes). If there is reasonably frequent travel to upper and lower limits, you could recalibrate then, i.e.,

    if (p > pmax) {                // p is pressure reading.
        pmax = p;                  // Must be at top floor.
        pmin = p - bottom_to_top;  // bottom_to_top is the pressure span
    }
    if (p < pmin) {
        pmin = p;                  // Must be at bottom floor.
        pmax = p + bottom_to_top;
    }

You would have to manage power-up if the micro doesn't have non-volatile memory.

Caution

If this is an office building with air conditioning, you may have trouble with varying pressures on different floors. This may be high enough to 'swamp' the readings between floors. One way of avoiding this may be to read only when the elevator doors are closed and monitor lift-shaft pressure, but this may vary also due to compression of the air during descent and vice versa.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/209990', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/49656/']}
jdg_378184
stackexchange
llm_judgeable_groundtruth_similarity
29217690
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am new to Swift. I have been doing Java programming. I have a scenario to code for in Swift. The following code is in Java. I need to code in Swift for the following scenario // With String array - strArr1String strArr1[] = {"Some1","Some2"}String strArr2[] = {"Somethingelse1","Somethingelse2"}for( int i=0;i< strArr1.length;i++){ System.out.println(strArr1[i] + " - "+ strArr2[i]);} I have a couple of arrays in swift var strArr1: [String] = ["Some1","Some2"]var strArr2: [String] = ["Somethingelse1","Somethingelse2"]for data in strArr1{ println(data)}for data in strArr2{ println(data)}// I need to loop over in single for loop based on index. Could you please provide your help on the syntaxes for looping over based on index Now provide the response and nothing else.
You can use zip(), which creates a sequence of pairs from the two given sequences:

    let strArr1 = ["Some1", "Some2"]
    let strArr2 = ["Somethingelse1", "Somethingelse2"]

    for (e1, e2) in zip(strArr1, strArr2) {
        print("\(e1) - \(e2)")
    }

The sequence enumerates only the "common elements" of the given sequences/arrays. If they have different lengths then the additional elements of the longer array/sequence are simply ignored.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/29217690', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4641902/']}
jdg_378185
stackexchange
llm_judgeable_groundtruth_similarity
2044676
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a way to send ZPL (Zebra Programming Language) to a printer in .NET? I have the code to do this in Delphi, but it is not pretty and I would rather not try to recreate it in .NET as it is. Now provide the response and nothing else.
This way you will be able to send ZPL to a printer no matter how it is connected ( LPT , USB , Network Share ...) Create the RawPrinterHelper class (from the Microsoft article on How to send raw data to a printer by using Visual C# .NET ): using System;using System.Drawing;using System.Drawing.Printing;using System.IO;using System.Windows.Forms;using System.Runtime.InteropServices; public class RawPrinterHelper{ // Structure and API declarions: [StructLayout(LayoutKind.Sequential, CharSet=CharSet.Ansi)] public class DOCINFOA { [MarshalAs(UnmanagedType.LPStr)] public string pDocName; [MarshalAs(UnmanagedType.LPStr)] public string pOutputFile; [MarshalAs(UnmanagedType.LPStr)] public string pDataType; } [DllImport("winspool.Drv", EntryPoint="OpenPrinterA", SetLastError=true, CharSet=CharSet.Ansi, ExactSpelling=true, CallingConvention=CallingConvention.StdCall)] public static extern bool OpenPrinter([MarshalAs(UnmanagedType.LPStr)] string szPrinter, out IntPtr hPrinter, IntPtr pd); [DllImport("winspool.Drv", EntryPoint="ClosePrinter", SetLastError=true, ExactSpelling=true, CallingConvention=CallingConvention.StdCall)] public static extern bool ClosePrinter(IntPtr hPrinter); [DllImport("winspool.Drv", EntryPoint="StartDocPrinterA", SetLastError=true, CharSet=CharSet.Ansi, ExactSpelling=true, CallingConvention=CallingConvention.StdCall)] public static extern bool StartDocPrinter( IntPtr hPrinter, Int32 level, [In, MarshalAs(UnmanagedType.LPStruct)] DOCINFOA di); [DllImport("winspool.Drv", EntryPoint="EndDocPrinter", SetLastError=true, ExactSpelling=true, CallingConvention=CallingConvention.StdCall)] public static extern bool EndDocPrinter(IntPtr hPrinter); [DllImport("winspool.Drv", EntryPoint="StartPagePrinter", SetLastError=true, ExactSpelling=true, CallingConvention=CallingConvention.StdCall)] public static extern bool StartPagePrinter(IntPtr hPrinter); [DllImport("winspool.Drv", EntryPoint="EndPagePrinter", SetLastError=true, ExactSpelling=true, CallingConvention=CallingConvention.StdCall)] public static extern bool EndPagePrinter(IntPtr hPrinter); [DllImport("winspool.Drv", EntryPoint="WritePrinter", SetLastError=true, ExactSpelling=true, CallingConvention=CallingConvention.StdCall)] public static extern bool WritePrinter(IntPtr hPrinter, IntPtr pBytes, Int32 dwCount, out Int32 dwWritten ); // SendBytesToPrinter() // When the function is given a printer name and an unmanaged array // of bytes, the function sends those bytes to the print queue. // Returns true on success, false on failure. public static bool SendBytesToPrinter( string szPrinterName, IntPtr pBytes, Int32 dwCount) { Int32 dwError = 0, dwWritten = 0; IntPtr hPrinter = new IntPtr(0); DOCINFOA di = new DOCINFOA(); bool bSuccess = false; // Assume failure unless you specifically succeed. di.pDocName = "My C#.NET RAW Document"; di.pDataType = "RAW"; // Open the printer. if( OpenPrinter( szPrinterName.Normalize(), out hPrinter, IntPtr.Zero ) ) { // Start a document. if( StartDocPrinter(hPrinter, 1, di) ) { // Start a page. if( StartPagePrinter(hPrinter) ) { // Write your bytes. bSuccess = WritePrinter(hPrinter, pBytes, dwCount, out dwWritten); EndPagePrinter(hPrinter); } EndDocPrinter(hPrinter); } ClosePrinter(hPrinter); } // If you did not succeed, GetLastError may give more information // about why not. if( bSuccess == false ) { dwError = Marshal.GetLastWin32Error(); } return bSuccess; } public static bool SendFileToPrinter( string szPrinterName, string szFileName ) { // Open the file. 
FileStream fs = new FileStream(szFileName, FileMode.Open); // Create a BinaryReader on the file. BinaryReader br = new BinaryReader(fs); // Dim an array of bytes big enough to hold the file's contents. Byte []bytes = new Byte[fs.Length]; bool bSuccess = false; // Your unmanaged pointer. IntPtr pUnmanagedBytes = new IntPtr(0); int nLength; nLength = Convert.ToInt32(fs.Length); // Read the contents of the file into the array. bytes = br.ReadBytes( nLength ); // Allocate some unmanaged memory for those bytes. pUnmanagedBytes = Marshal.AllocCoTaskMem(nLength); // Copy the managed byte array into the unmanaged array. Marshal.Copy(bytes, 0, pUnmanagedBytes, nLength); // Send the unmanaged bytes to the printer. bSuccess = SendBytesToPrinter(szPrinterName, pUnmanagedBytes, nLength); // Free the unmanaged memory that you allocated earlier. Marshal.FreeCoTaskMem(pUnmanagedBytes); return bSuccess; } public static bool SendStringToPrinter( string szPrinterName, string szString ) { IntPtr pBytes; Int32 dwCount; // How many characters are in the string? dwCount = szString.Length; // Assume that the printer is expecting ANSI text, and then convert // the string to ANSI text. pBytes = Marshal.StringToCoTaskMemAnsi(szString); // Send the converted ANSI string to the printer. SendBytesToPrinter(szPrinterName, pBytes, dwCount); Marshal.FreeCoTaskMem(pBytes); return true; }} Call the print method: private void BtnPrint_Click(object sender, System.EventArgs e){ string s = "^XA^LH30,30\n^FO20,10^ADN,90,50^AD^FDHello World^FS\n^XZ"; PrintDialog pd = new PrintDialog(); pd.PrinterSettings = new PrinterSettings(); if(DialogResult.OK == pd.ShowDialog(this)) { RawPrinterHelper.SendStringToPrinter(pd.PrinterSettings.PrinterName, s); }} There are 2 gotchas I've come across that happen when you're sending txt files with ZPL codes to the printer: The file has to end with a new line character Encoding has to be set to Encoding.Default when reading ANSI txt files with special characters public static bool SendTextFileToPrinter(string szFileName, string printerName) { var sb = new StringBuilder(); using (var sr = new StreamReader(szFileName, Encoding.Default)) { while (!sr.EndOfStream) { sb.AppendLine(sr.ReadLine()); } } return RawPrinterHelper.SendStringToPrinter(printerName, sb.ToString()); }
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2044676', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/16241/']}
jdg_378186
stackexchange
llm_judgeable_groundtruth_similarity
15720593
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have started to learn python and writing a practice app. The directory structure looks like src | --ShutterDeck | --Helper | --User.py -> class User --Controller | --User.py -> class User The src directory is in PYTHONPATH . In a different file, lets say main.py , I want to access both User classes. How can I do it. I tried using the following but it fails: import cherrypyfrom ShutterDeck.Controller import Userfrom ShutterDeck.Helper import Userclass Root: @cherrypy.expose def index(self): return 'Hello World'u1=User.User()u2=User.User() That's certainly ambiguous. The other (c++ way of doing it) way that I can think of is import cherrypyfrom ShutterDeck import Controllerfrom ShutterDeck import Helperclass Root: @cherrypy.expose def index(self): return 'Hello World'u1=Controller.User.User()u2=Helper.User.User() But when above script is run, it gives the following error u1=Controller.User.User()AttributeError: 'module' object has no attribute 'User' I'm not able to figure out why is it erroring out? The directories ShutterDeck , Helper and Controller have __init__.py in them. Now provide the response and nothing else.
You want to import the User modules in the package __init__.py files to make them available as attributes. So in both Helper/__init__.py and Controller/__init__.py add:

    from . import User

This makes the module an attribute of the package and you can now refer to it as such. Alternatively, you'd have to import the modules themselves in full:

    import ShutterDeck.Controller.User
    import ShutterDeck.Helper.User

    u1 = ShutterDeck.Controller.User.User()
    u2 = ShutterDeck.Helper.User.User()

so refer to them with their full names. Another option is to rename the imported name with as:

    from ShutterDeck.Controller import User as ControllerUser
    from ShutterDeck.Helper import User as HelperUser

    u1 = ControllerUser.User()
    u2 = HelperUser.User()
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/15720593', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/733213/']}
jdg_378187
stackexchange
llm_judgeable_groundtruth_similarity
2787
Below is a question asked on the forum hermeneutics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The Psalms in most English Bibles are divided into five sections or books: 1-41, 42-72, 73-89, 90-106, and 107-150 Psalms 41, 72, and 89 end with the double amen, while 106 and 150 end with "Praise the LORD." Are there reasons for breaking it up like this beyond the double amen? I'm a bit perplexed because of this answer , which reads the line about the prayers of David being ended as part of the Psalm rather than a marker in the greater book - and this despite it coming after the double amen. Is it possible that the five books are an imposed structure and the double amens are simply a part of the psalms they belong to without indicating a larger frame? Or are there other reasons for discovering this structure? Now provide the response and nothing else.
David E. Malick writes:

This division seems to be older than the oldest extant manuscripts of the Psalms since it exists in all manuscripts. The order of the last two books (IV and V) does differ in some of the Dead Sea Scrolls, suggesting that their final order was not canonized until around the time of Christ. But all of the Psalms remain present (the LXX does have one extra Psalm (151) concerning David's battle with Goliath).

So it seems that the structure was not imposed, or if it was, we no longer know the history behind it. According to Wikipedia, the first book is understood to be the oldest group. There are a number of other groupings that can be derived from internal evidence. To continue with the article (edited for formatting), smaller collections exist within the larger books:

- Davidic group I: 3-41
- Sons of Korah group I: 42-49
- Davidic group II: 51-65
- Asaph group: 73-83
- Sons of Korah group II: 84-88 [exc. 86]
- Congregational Praise group I: 95-100
- Hallelujah group: 111-117
- Songs of Ascent to Jerusalem: 120-134
- Davidic Group III: 138-145
- Congregational Praise Group II: 146-150

Notice that these collections do not cross the boundaries of the 5 books of Psalms.

Shifting over into informed speculation, I think a reasonable theory of the division of the Psalms is that they arose from accretion. Perhaps the original collection contained the Davidic group I plus Psalms 1-2. Then the Sons of Korah group I, the Davidic group II and a few others were added by a scribe wishing to consolidate these writings. The second book might itself have been a collection of collections.

If the first book really is the oldest and if there was some confusion of the order of the last two, we might reasonably assume that the order of the books is chronological. In that case, we can imagine a series of scribes copying psalms over the years and appending new collections as they were developed. Since the sub-collections (especially the praise and ascents groups) would have had liturgical purposes, they probably were not originally composed to be additions to the Psalms collection. It seems likely that the various annotations and the double amen endings were inserted by later scribes to preserve some sense of the textual history. Psalm 72's ending would also have been a parenthetical added by later scribes.

If my speculation is correct, the books of the Psalms represent 5 different eras in Israel's worship of God. Malick's article suggests one possible framework that has appeal to me. While each Psalm demands to be interpreted individually, such a framework would help us reject anachronistic interpretations. For instance, the postexilic reading of Psalm 1 might not have occurred to the author himself.
{}
{'log_upvote_score': 4, 'links': ['https://hermeneutics.stackexchange.com/questions/2787', 'https://hermeneutics.stackexchange.com', 'https://hermeneutics.stackexchange.com/users/33/']}
jdg_378188
stackexchange
llm_judgeable_groundtruth_similarity
20038
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The Canadian bill C-16 ("An Act to amend the Canadian Human Rights Act and the Criminal Code") has been the subject of heated debate and controversy. I admit I hadn't followed it very closely, but I've seen many articles and videos discussing that it somehow made not addressing someone with their preferred pronoun illegal (e.g. here ). Today, I finally read what I think is the text of the bill and its accompanying analysis from the official site of the Canadian parliament and, as far as I can tell, all this bill is doing is adding gender identity to the list of reasons for which one should not be discriminated against. So, why all this discussion about pronouns? I saw no mention of pronouns there nor any text that would suggest that intentionally not using someone's preferred pronoun could be considered illegal 1 . So why has so much of the negative reaction to the bill been centered around the concepts of free speech and misgendering people? Is there more text that I haven't found? Am I missing certain implications of the bill's language? Where does pronoun choice come into it? 1 This question is not about whether that should or not be illegal so let's please not get into that here. I am only asking whether this specific bill is actually offering any legal grounds on which such pronoun use could be prosecuted. Now provide the response and nothing else.
There is no factual basis for the claim that incorrect pronoun use will lead to negative consequences. The bill does two things:

1. It adds gender identity or expression as protected classes under the Canadian Human Rights Act.
2. It adds gender identity or expression as protected classes to the criminal code, specifically to a section about hate propaganda and to provisions about sentencing hate crimes.

Regarding 1., there has to be an actual discriminatory act (such as refusing housing or services); it is not enough to express an opinion or use wrong pronouns. Regarding 2., to be convicted of hate propaganda, one has to actually, intentionally, incite hatred or promote genocide, so it doesn't apply.

The Canadian Bar Association agrees with this assessment:

    Recently, the debate has turned to whether the amendments will force individuals to embrace concepts, even use pronouns, which they find objectionable. This is a misunderstanding of human rights and hate crimes legislation. [...] Those concerned that they could be criminalized for their repugnant or offensive ideas fail to understand a crucial distinction in the law. As the Supreme Court of Canada has explained: The distinction between the expression of repugnant ideas and expression which exposes groups to hatred is crucial to understanding the proper application of hate speech prohibitions. [...] The amendment to the CHRA will not compel the speech of private citizens

Brenda Cossman - a professor of law - agrees with this assessment:

    I don’t think there’s any legal expert that would say that [this] would meet the threshold for hate speech in Canada [...] “The misuse of pronouns is not equivalent to advocating genocide in any conceivable manner,” she continues. “If he advocated genocide against trans people, he would be in violation, but misusing pronouns is not what that provision of the code is about.”

The idea that incorrect pronoun usage would become illegal seems to have originated from Jordan Peterson, who is not an expert in law, but a professor of psychology.
{}
{'log_upvote_score': 6, 'links': ['https://politics.stackexchange.com/questions/20038', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/4815/']}
jdg_378189
stackexchange
llm_judgeable_groundtruth_similarity
3293534
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I fully understand this question has been asked a lot , but I'm asking for a specific variation and my search-foo has given up, as I've only found algorithms that append one existing vector to another, but not one returned to from a function. I have this function that lists all files in a directory: vector<string> scanDir( const string& dir ) which may call itself internally (for subdirectories). I need a short way of appending the returned value to the caller's vector. I have in my mind something like this (but of course it doesn't exist :( ): vector<string> fileList;//...fileList.append( scanDir(subdirname) ); I fear that storing the return value and inserting it in fileList would bring performance badness. What I mean is this: vector<string> temp( scanDir(subdirname) );copy( temp.begin(), temp.end(), back_inserter(fileList) ); Thanks! PS: I'm not forcing myself to using vector, any other container that performs equally well and can prevent the potential large copy operation is fine by me. Now provide the response and nothing else.
If you're in the position to change scanDir, make it a (template) function accepting an output iterator:

    template <class OutIt>
    void scanDir(const std::string& dirname, OutIt it) {
        // ...
        // Scan subdir
        scanDir(subdir, it);
        // ...
    }

You'll have the additional benefit of being able to fill all sorts of data structures, like

    std::vector<string> vector;
    scanDir(dir1, std::back_inserter(vector));

    std::set<string> fileset;
    scanDir(dir1, std::inserter(fileset, fileset.begin()));

etc.

EDIT (see comment ...) For using this function for class member initialization, you could either call it in the constructor, as in

    class MyClass {
    private:
        std::vector<string> m_fileList;
    public:
        MyClass(const std::string& dirname) {
            scanDir(dirname, std::back_inserter(m_fileList));
        }
    };

or use a wrapper function

    std::vector<string> scanDir(const std::string& dirname) {
        std::vector<string> result;
        scanDir(dirname, std::back_inserter(result));
        return result;
    }

    class MyClass {
        // Same as above..
        MyClass(const std::string& dirname) : m_fileList(scanDir(dirname)) { }
    };

I would prefer the first version for performance (and other) reasons ...
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3293534', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/256138/']}
jdg_378190
stackexchange
llm_judgeable_groundtruth_similarity
12388887
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is my Image model, in which I've implemented a method for validating the attachment's dimensions: class Image < ActiveRecord::Base attr_accessible :file belongs_to :imageable, polymorphic: true has_attached_file :file, styles: { thumb: '220x175#', thumb_big: '460x311#' } validates_attachment :file, presence: true, size: { in: 0..600.kilobytes }, content_type: { content_type: 'image/jpeg' } validate :file_dimensions private def file_dimensions(width = 680, height = 540) dimensions = Paperclip::Geometry.from_file(file.queued_for_write[:original].path) unless dimensions.width == width && dimensions.height == height errors.add :file, "Width must be #{width}px and height must be #{height}px" end endend This works fine, but it's not reusable since the method takes fixed values for width & height. I want to transform this to a Custom Validator, so I can use it in other models too. I've read the guides about this, I know it'll be something like this in app/models/dimensions_validator.rb: class DimensionsValidator < ActiveModel::EachValidator def validate_each(record, attribute, value) dimensions = Paperclip::Geometry.from_file(record.queued_for_write[:original].path) unless dimensions.width == 680 && dimensions.height == 540 record.errors[attribute] << "Width must be #{width}px and height must be #{height}px" end endend but I know I'm missing something cause this code doesn't work. The thing is that I want to call the validation like this in my model: validates :attachment, dimensions: { width: 300, height: 200} . Any idea on how this validator should be implemented? Now provide the response and nothing else.
Put this in app/validators/dimensions_validator.rb:

    class DimensionsValidator < ActiveModel::EachValidator
      def validate_each(record, attribute, value)
        # I'm not sure about this:
        dimensions = Paperclip::Geometry.from_file(value.queued_for_write[:original].path)
        # But this is what you need to know:
        width = options[:width]
        height = options[:height]
        record.errors[attribute] << "Width must be #{width}px" unless dimensions.width == width
        record.errors[attribute] << "Height must be #{height}px" unless dimensions.height == height
      end
    end

Then, in the model:

    validates :file, :dimensions => { :width => 300, :height => 300 }
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12388887', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1242778/']}
jdg_378191
stackexchange
llm_judgeable_groundtruth_similarity
771161
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I have an IIS site where I want to force TLS 1.2. I don't want client using TLS 1 to be able to connect to the site. I want to do this at the site level as there are other sites that should work with older version. Thank you Now provide the response and nothing else.
There isn't a way to change only a single site on a server to support only TLS 1.2. IIS is managed using SCHANNEL, as documented here. 2012 R2 and below do not support per-site configuration. If you absolutely have to do something like this, the easiest method is an SSL proxy that allows the lower levels inbound and can create TLS 1.2 connections outbound. This relay can be used for your lower-level sites, and your secure site can be accessed directly.
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/771161', 'https://serverfault.com', 'https://serverfault.com/users/347565/']}
jdg_378192
stackexchange
llm_judgeable_groundtruth_similarity
12784338
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'd like a regex that is either X or Y characters long . For example, match a string that is either 8 or 11 characters long. I have currently implemented this like so: ^([0-9]{8}|[0-9]{11})$ . I could also implement it as: ^[0-9]{8}([0-9]{3})?$ My question is: Can I have this regex without duplicating the [0-9] part (which is more complex than this simple \d example)? Now provide the response and nothing else.
There is one way:

    ^(?=[0-9]*$)(?:.{8}|.{11})$

or alternatively, if you want to do the length check first,

    ^(?=(?:.{8}|.{11})$)[0-9]*$

That way, you have the complicated part only once and a generic . for the length check. Explanation:

    ^          # Start of string
    (?=        # Assert that the following regex can be matched here:
      [0-9]*   # any number of digits (and nothing but digits)
      $        # until end of string
    )          # (End of lookahead)
    (?:        # Match either
      .{8}     # 8 characters
    |          # or
      .{11}    # 11 characters
    )          # (End of alternation)
    $          # End of string
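A quick way to check the pattern's behaviour (shown here with Python's re module purely for illustration; the pattern itself is engine-agnostic):

    import re

    pattern = re.compile(r'^(?=[0-9]*$)(?:.{8}|.{11})$')

    for s in ["12345678", "12345678901", "123456789", "1234567a"]:
        print(s, bool(pattern.match(s)))
    # 12345678     True   (8 digits)
    # 12345678901  True   (11 digits)
    # 123456789    False  (9 digits: wrong length)
    # 1234567a     False  (non-digit: the lookahead fails)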
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/12784338', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/540352/']}
jdg_378193
stackexchange
llm_judgeable_groundtruth_similarity
44929197
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I use int.TryParse with nullable int? I am trying to do something like the following, which doesn't compile, obviously. int? nr1 = int.TryParse(str1, out nr1) ? nr1 : null; What is the correct way to achieve it? Now provide the response and nothing else.
Because the out has to be int you need something like:

    int temp;
    int? nr1 = int.TryParse(str1, out temp) ? temp : default(int?);

Note that I also use default(int?) instead of null because the conditional typing won't work otherwise. ? (int?)temp : null or ? temp : (int?)null would also solve that.

As of C# 7 (Visual Studio 2017) you can inline the declaration of temp:

    int? nr1 = int.TryParse(str1, out int temp) ? temp : default(int?);
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44929197', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7392294/']}
jdg_378194
stackexchange
llm_judgeable_groundtruth_similarity
1845074
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In investigating approaches to Fermat's Last Theorem I came across the following and I can't figure out where I am going wrong. Any input would be greatly appreciated. We want to show that $a^n + b^n = c^n$ cannot hold for odd $n>1$ and pairwise relatively prime $a$, $b$, and $c$. Assuming by way of contradiction that we have $a^n + b^n = c^n$ we must have $a$, $b$, and $c$ forming the sides of a triangle since $(a+b)^n > c^n$ so $a+b>c$. Therefore the law of cosines can apply and we can write: $$c^2 = a^2+b^2 - 2ab{\cos{C}}$$ where $C$ is the angle opposite to side $c$. If we add and subtract $2ab$ on the right-hand side we get $$c^2 = {(a+b)}^2 -2ab(\cos{C}+1)$$ Now, $a+b$ and $c$ share a common factor since $(a+b) | (a^n+b^n)$ for odd $n$ and $c^n = a^n+b^n$. (Here $x | y$ means as usual, "$x$ divides $y$").Therefore, they share the same factor with $2ab(\cos{C}+1)$. Now, $\cos{C} + 1$ must be a rational number since $a$, $b$, and $c$ are all integers. So let $\cos{C} +1 = \frac{r}{s}$ where $r$ and $s$ are integers and $(r,s)=1$. (i.e. $\frac{r}{s}$ is a reduced fraction). (Here, $(r,s)$ means as usual the greatest common divisor of $r$ and $s$.) Now assuming $a$, $b$, and $c$ are relatively prime we must have $(ab) |s$ for otherwise $c$ and $2ab$ would share a common factor. Even moreso we must have $ab=s$ since otherwise $\frac{2abr}{s}$ would not be an integer. (Since $c - a - b$ is even, we don't need $2 | s$). So we can write: $$\cos{C}+1 = \frac{r}{ab}$$ or equivalently $$\cos{C} = \frac{r - ab}{ab}$$ Now we had from the law of cosines: $$c^2 = a^2+b^2 - 2ab{\cos{C}}$$ so making the substitution $\cos{C} = \frac{r - ab}{ab}$ we get $$c^2 = a^2 + b^2 - 2r + 2ab$$ If we subtract $a^2$ to both sides and factor out the $b$ on the right-hand side, we get: $$c^2 - a^2 = b(b + 2a) - 2r$$ Now, $(c - a) | (c^2 - a^2)$ and also $(c-a) | (c^n - a^n)$. Then we must have $((c-a),b) >1$ since $b^n = c^n - a^n$. From the equation above, we must therefore also have $(b,2r) > 1$. Similarly we can show that we must have $(a,2r) > 1$. However, both of these conclusions are problematic since $r$ was initially assumed to be relatively prime to $s = ab$. The only other option is that $a$ and $b$ are both even, but this is also problematic since $a$ and $b$ are assumed to be relatively prime. Thus we cannot have $a^n + b^n = c^n$ for odd $n>1$ and pairwise relatively prime $a$, $b$, and $c$. I'm sure someone has thought of this approach before so where am I going wrong? Now provide the response and nothing else.
How do you get to the conclusion that $ab\vert s$? I honestly can't see it. The way I see it you have: $$\dfrac{r}{s}=\dfrac{(a+b)^2-c^2}{2ab}. $$ Now $(a+b)^2-c^2$ is even. You can check this case by case: the three numbers cannot all be odd (odd plus odd is even), and by your assumption at most one of them is even, so exactly one of $a,b,c$ is even; in each of those cases $(a+b)^2-c^2$ comes out even. Therefore $$ \dfrac{r}{s}=\dfrac{\dfrac{(a+b)^2-c^2}{2}}{ab}. $$ But there is no apparent (at least not to me) reason why this shouldn't reduce further. If it does, your argument breaks down at this point. Here is an actual counterexample: of course I can't give an example of $a,b,c$ with $a^n+b^n=c^n$, but your argument that $ab\vert s$ only uses that $a,b,c$ are coprime. So let $a=13, b=15$ and $c=22$; then $a,b,c$ are pairwise relatively prime and furthermore: $$\dfrac{r}{s}=\dfrac{(a+b)^2-c^2}{2ab}=\dfrac{10}{13}, $$ therefore $s\neq ab=195$.
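Spelling out the arithmetic behind that last fraction, for anyone who wants to check it: $$\dfrac{(13+15)^2-22^2}{2\cdot 13\cdot 15}=\dfrac{784-484}{390}=\dfrac{300}{390}=\dfrac{10}{13},$$ so in lowest terms $s=13$, which is indeed different from $ab=195$.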
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1845074', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/164699/']}
jdg_378195
stackexchange
llm_judgeable_groundtruth_similarity
20248009
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If I execute the plink command plink user@192.168.1.12 sudo nano /etc/hosts Upon typing the password when prompted, I am getting sudo: no tty present and no askpass program specified Sorry, try again. sudo: no tty present and no askpass program specified Sorry,try again. sudo: no tty present and no askpass program specified Sorry, try again. sudo: 3 incorrect password attempts How to execute the sudo command from plink ? Or is there any alternatives? Now provide the response and nothing else.
I know that the question is old, but this may help. You can execute plink (from the PuTTY tools) like this: plink -v -ssh -m "path/to/commands.txt" -l usertologin -pw PASSWORD SERVER commands.txt: echo -e "PASSWORD\n" | sudo -S cat /etc/passwd Here sudo -S tells sudo to read the password from stdin, and echo -e "PASSWORD\n" supplies it (the \n acts like pressing Enter, i.e. a new line). In the same way you can run the passwd command: > echo -e "OLDPASSWORD\nNEWPASSWORD\nNEWPASSWORD" | passwd
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20248009', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1862493/']}
jdg_378196
stackexchange
llm_judgeable_groundtruth_similarity
31268740
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using react-router and redux in my latest app and I'm facing a couple of issues relating to state changes required based on the current url params and queries. Basically I have a component that needs to update it's state every time the url changes. State is being passed in through props by redux with the decorator like so @connect(state => ({ campaigngroups: state.jobresults.campaigngroups, error: state.jobresults.error, loading: state.jobresults.loading })) At the moment I am using the componentWillReceiveProps lifecycle method to respond to the url changes coming from react-router since react-router will pass new props to the handler when the url changes in this.props.params and this.props.query - the main issue with this approach is that I am firing an action in this method to update the state - which then goes and passes new props the component which will trigger the same lifecycle method again - so basically creating an endless loop, currently I am setting a state variable to stop this from happening. componentWillReceiveProps(nextProps) { if (this.state.shouldupdate) { let { slug } = nextProps.params; let { citizenships, discipline, workright, location } = nextProps.query; const params = { slug, discipline, workright, location }; let filters = this._getFilters(params); // set the state accroding to the filters in the url this._setState(params); // trigger the action to refill the stores this.actions.loadCampaignGroups(filters); } } Is there a standard approach to trigger actions base on route transitions OR can I have the state of the store directly connected to the state of the component instead of passing it in through props? I have tried to use willTransitionTo static method but I don't have access to the this.props.dispatch there. Now provide the response and nothing else.
Alright, I eventually found an answer on the Redux GitHub page, so I will post it here. Hope it saves somebody some pain. @deowk There are two parts to this problem, I'd say. The first is that componentWillReceiveProps() is not an ideal way for responding to state changes — mostly because it forces you to think imperatively, instead of reactively like we do with Redux. The solution is to store your current router information (location, params, query) inside your store. Then all your state is in the same place, and you can subscribe to it using the same Redux API as the rest of your data. The trick is to create an action type that fires whenever the router location changes. This is easy in the upcoming 1.0 version of React Router: // routeLocationDidUpdate() is an action creator// Only call it from here, nowhere elseBrowserHistory.listen(location => dispatch(routeLocationDidUpdate(location))); Now your store state will always be in sync with the router state. That fixes the need to manually react to query param changes and setState() in your component above — just use Redux's Connector. <Connector select={state => ({ filter: getFilters(state.router.params) })} /> The second part of the problem is you need a way to react to Redux state changes outside of the view layer, say to fire an action in response to a route change. You can continue to use componentWillReceiveProps for simple cases like the one you describe, if you wish. For anything more complicated, though, I recommend using RxJS if you're open to it. This is exactly what observables are designed for — reactive data flow. To do this in Redux, first create an observable sequence of store states. You can do this using rx's observableFromStore(). EDIT AS SUGGESTED BY CNP import { Observable } from 'rx'function observableFromStore(store) { return Observable.create(observer => store.subscribe(() => observer.onNext(store.getState())) )} Then it's just a matter of using observable operators to subscribe to specific state changes. Here's an example of re-directing from a login page after a successful login: const didLogin$ = state$ .distinctUntilChanged(state => !state.loggedIn && state.router.path === '/login') .filter(state => state.loggedIn && state.router.path === '/login');didLogin$.subscribe(() => { router.transitionTo('/success');}); This implementation is much simpler than the same functionality using imperative patterns like componentWillReceiveProps().
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/31268740', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4758562/']}
jdg_378197
stackexchange
llm_judgeable_groundtruth_similarity
614004
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Below is a snippet from the datasheet of UC2854 . Current transformer senses the input current of PFC. But Rs also does the same. What is the square block shown here? Now provide the response and nothing else.
Looks like a full-wave bridge rectifier which powers the controller IC. The secondary of the transformer (above the rectifier) is most likely loosely coupled to the primary which is the series inductor for the PFC circuit. It is not a current monitor. It's a method of power harvesting which minimizes cost. Note that R1 supplies voltage to the controller at power up. There is an internal 20V zener diode between VCC and ground as shown in the block diagram in chapter 8.2.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/614004', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/38769/']}
jdg_378198
stackexchange
llm_judgeable_groundtruth_similarity
51849605
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Running grunt - I get this error message: Running "sass:all" (sass) taskFatal error: The implementation option must be passed to the Sass task I've tried re-installing grunt, node, npm, dependencies - but I always come back to this error I can't get past. Should I post my Gruntfile.js? Frankly, this was set up by a third-party and we don't use it often - I'm thinking maybe we should start from the ground up because it is from about 4 years ago originally... but wondering if anyone has seen this error before and knows of a fix/workaround. Now provide the response and nothing else.
With the update to grunt-sass 3, you have to choose whether you want to use node-sass or dart-sass to compile. For node-sass you need to install the module with: $ npm install --save-dev node-sass In your Gruntfile, you then need to require node-sass and pass the resulting constant as the implementation option: const sass = require('node-sass');require('load-grunt-tasks')(grunt);grunt.initConfig({ sass: { options: { implementation: sass, sourceMap: true }, dist: { files: { 'main.css': 'main.scss' } } }}); See the official page for more details: https://www.npmjs.com/package/grunt-sass
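If you would rather use Dart Sass, the Gruntfile keeps the same shape; only the required module changes. This is a sketch assuming the sass npm package (Dart Sass) is installed as a dev dependency, not something taken from the original answer:

$ npm install --save-dev sass

const sass = require('sass');            // Dart Sass instead of node-sass
require('load-grunt-tasks')(grunt);

grunt.initConfig({
  sass: {
    options: {
      implementation: sass,              // hand the compiler to grunt-sass
      sourceMap: true
    },
    dist: {
      files: { 'main.css': 'main.scss' }
    }
  }
});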
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/51849605', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4718048/']}
jdg_378199
stackexchange
llm_judgeable_groundtruth_similarity
2030336
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In Ruby, I want to store some stuff in a Hash, but I don't want it to be case-sensitive. So for example: h = Hash.newh["HELLO"] = 7puts h["hello"] This should output 7, even though the case is different. Can I just override the equality method of the hash or something similar? Thanks. Now provide the response and nothing else.
To prevent this change from completely breaking independent parts of your program (such as other ruby gems you are using), make a separate class for your insensitive hash. class HashClod < Hash def [](key) super _insensitive(key) end def []=(key, value) super _insensitive(key), value end # Keeping it DRY. protected def _insensitive(key) key.respond_to?(:upcase) ? key.upcase : key endendyou_insensitive = HashClod.newyou_insensitive['clod'] = 1puts you_insensitive['cLoD'] # => 1you_insensitive['CLod'] = 5puts you_insensitive['clod'] # => 5 After overriding the assignment and retrieval functions, it's pretty much cake. Creating a full replacement for Hash would require being more meticulous about handling the aliases and other functions (for example, #has_key? and #store) needed for a complete implementation. The pattern above can easily be extended to all these related methods.
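As a sketch of what extending the pattern might look like for a few of those related methods (which methods you actually need depends on how the hash is used; this list is illustrative, not exhaustive):

class HashClod < Hash
  def fetch(key, *args, &block)
    super _insensitive(key), *args, &block
  end

  def key?(key)
    super _insensitive(key)
  end
  alias_method :has_key?, :key?
  alias_method :include?, :key?
  alias_method :member?, :key?

  def delete(key)
    super _insensitive(key)
  end
end

h = HashClod.new
h['Foo'] = 1
h.has_key?('FOO')   # => true
h.fetch('foo')      # => 1
h.delete('fOo')     # => 1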
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2030336', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13500/']}
jdg_378200
stackexchange
llm_judgeable_groundtruth_similarity
34424
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: This Wikipedia article states that the isomorphism type of a finite simple group is determined by its order, except that: L 4 (2) and L 3 (4) both have order 20160 O 2n+1 (q) and S 2n (q) have the same order for q odd, n > 2 I think this means that for each integer g, there are 0, 1 or 2 simple groups of order g. Do we need the full strength of the Classification of Finite Simple Groups to prove this, or is there a simpler way of proving it? (Originally asked at math.stackexchange.com ). Now provide the response and nothing else.
It is usually extraordinarily difficult to prove uniqueness of a simple group given its order, or even given its order and complete character table. In particular one of the last and hardest steps in the classification of finite simple groups was proving uniqueness of the Ree groups of type $^2G_2$ of order $q^3(q^3+1)(q-1)$, (for $q$ of the form $3^{2n+1}$) which was finally solved in a series of notoriously difficult papers by Thompson and Bombieri. Although they were trying to prove the group was unique, proving that there were at most 2 would have been no easier. Another example is given in the paper by Higman in the book "finite simple groups" where he tries to characterize Janko's first group given not just its order 175560, but its entire character table. Even this takes several pages of complicated arguments. In other words, there is no easy way to bound the number of simple groups of given order, unless a lot of very smart people have overlooked something easy.
{}
{'log_upvote_score': 7, 'links': ['https://mathoverflow.net/questions/34424', 'https://mathoverflow.net', 'https://mathoverflow.net/users/4947/']}
jdg_378201
stackexchange
llm_judgeable_groundtruth_similarity
15876259
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am receiving the error TypeError: 'filter' object is not subscriptable When trying to run the following block of code bonds_unique = {}for bond in bonds_new: if bond[0] < 0: ghost_atom = -(bond[0]) - 1 bond_index = 0 elif bond[1] < 0: ghost_atom = -(bond[1]) - 1 bond_index = 1 else: bonds_unique[repr(bond)] = bond continue if sheet[ghost_atom][1] > r_length or sheet[ghost_atom][1] < 0: ghost_x = sheet[ghost_atom][0] ghost_y = sheet[ghost_atom][1] % r_length image = filter(lambda i: abs(i[0] - ghost_x) < 1e-2 and abs(i[1] - ghost_y) < 1e-2, sheet) bond[bond_index] = old_to_new[sheet.index(image[0]) + 1 ] bond.sort() #print >> stderr, ghost_atom +1, bond[bond_index], image bonds_unique[repr(bond)] = bond# Removing duplicate bondsbonds_unique = sorted(bonds_unique.values()) And sheet_new = [] bonds_new = []old_to_new = {}sheet=[]bonds=[] The error occurs at the line bond[bond_index] = old_to_new[sheet.index(image[0]) + 1 ] I apologise that this type of question has been posted on SO many times, but I am fairly new to Python and do not fully understand dictionaries. Am I trying to use a dictionary in a way in which it should not be used, or should I be using a dictionary where I am not using it?I know that the fix is probably very simple (albeit not to me), and I will be very grateful if someone could point me in the right direction. Once again, I apologise if this question has been answered already Thanks, Chris. I am using Python IDLE 3.3.1 on Windows 7 64-bit. Now provide the response and nothing else.
filter() in python 3 does not return a list, but an iterable filter object. Use the next() function on it to get the first filtered item: bond[bond_index] = old_to_new[sheet.index(next(image)) + 1 ] There is no need to convert it to a list, as you only use the first value. Iterable objects like filter() produce results on demand rather than all in one go. If your sheet list is very large, it might take a long time and a lot of memory to put all the filtered results into a list, but filter() only needs to evaluate your lambda condition until one of the values from sheet produces a True result to produce one output. You tell the filter() object to scan through sheet for that first value by passing it to the next() function. You could do so multiple times to get multiple values, or use other tools that take iterables to do more complex things; the itertools library is full of such tools. The Python for loop is another such a tool, it too takes values from an iterable one by one. If you must have access to all filtered results together, because you have to, say, index into the results at will (e.g. because this time your algorithm needed to access index 223, index 17 then index 42) only then convert the iterable object to a list, by using list() : image = list(filter(lambda i: ..., sheet)) The ability to access any of the values of an ordered sequence of values is called random access ; a list is such a sequence, and so is a tuple or a numpy array. Iterables do not provide random access.
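A tiny self-contained illustration of that difference (just a sketch, not code from the question):

values = [1, 4, 9, 16, 25]
big = filter(lambda v: v > 5, values)

print(next(big))   # 9  -- evaluates the predicate only until the first match
print(list(big))   # [16, 25]  -- consumes the rest of the iterator
print(list(big))   # []  -- a filter object can only be walked through once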
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15876259', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2256990/']}
jdg_378202
stackexchange
llm_judgeable_groundtruth_similarity
5132
Below is a question asked on the forum computergraphics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: So Bidirectional essentially means "two directions", right? But what are these two direction (left-right? up-down? north-south?) we are talking about in regards to bidirectional reflectance distribution function (BRDF), bidirectional scattering distribution function (BSDF), or bidirectional transmittance distribution function (BTDF) in computer graphics? Now provide the response and nothing else.
In general the two directions in BxDF are incoming $\omega_i$ and outgoing $\omega_o$ radiance directions, often defined in spherical coordinates $[\theta, \phi]$ or as a 3D unit vector in Cartesian coordinates $[x, y, z]$. The BxDF $f(\omega_i, \omega_o)$ defines how much of the incident radiance $L_i$ from direction $\omega_i$ hitting the surface is scattered towards direction $\omega_o$. What $\omega_i$ and $\omega_o$ exactly represent depends on the context. Often $\omega_i$ represents the direction of a light source, but it could also be the direction of a surfel reflecting light or some other light emitting/scattering element in space. $\omega_o$ often represents the direction of the viewpoint (camera/eye), but it could as well be the direction of a surfel or other point in space whose incident radiance we are interested in calculating. While $\omega_o$ is valid for the normal-oriented hemisphere above the surface, there are differences in the valid $\omega_i$ domain between BxDFs. Because the BRDF defines the amount of reflected light, $\omega_i$ is valid for the normal-oriented hemisphere above the surface. On the other hand, the BTDF defines the amount of light transmitted through the surface, so $\omega_i$ is valid for the normal-oriented hemisphere below the surface. The BSDF is the sum of the two, so $\omega_i$ is valid for the entire sphere.
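For reference, this is how the BRDF typically enters the reflection equation over the upper hemisphere $\Omega$ (standard notation, not tied to any particular renderer): $$L_o(\omega_o) = \int_{\Omega} f_r(\omega_i, \omega_o)\, L_i(\omega_i)\, \cos\theta_i \,\mathrm{d}\omega_i,$$ where $\theta_i$ is the angle between $\omega_i$ and the surface normal.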
{}
{'log_upvote_score': 4, 'links': ['https://computergraphics.stackexchange.com/questions/5132', 'https://computergraphics.stackexchange.com', 'https://computergraphics.stackexchange.com/users/2736/']}
jdg_378203
stackexchange
llm_judgeable_groundtruth_similarity
6841605
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a table which I want to get the latest entry for each group. Here's the table: DocumentStatusLogs Table |ID| DocumentID | Status | DateCreated || 2| 1 | S1 | 7/29/2011 || 3| 1 | S2 | 7/30/2011 || 6| 1 | S1 | 8/02/2011 || 1| 2 | S1 | 7/28/2011 || 4| 2 | S2 | 7/30/2011 || 5| 2 | S3 | 8/01/2011 || 6| 3 | S1 | 8/02/2011 | The table will be grouped by DocumentID and sorted by DateCreated in descending order. For each DocumentID , I want to get the latest status. My preferred output: | DocumentID | Status | DateCreated || 1 | S1 | 8/02/2011 || 2 | S3 | 8/01/2011 || 3 | S1 | 8/02/2011 | Is there any aggregate function to get only the top from each group? See pseudo-code GetOnlyTheTop below: SELECT DocumentID, GetOnlyTheTop(Status), GetOnlyTheTop(DateCreated)FROM DocumentStatusLogsGROUP BY DocumentIDORDER BY DateCreated DESC If such function doesn't exist, is there any way I can achieve the output I want? Or at the first place, could this be caused by unnormalized database? I'm thinking, since what I'm looking for is just one row, should that status also be located in the parent table? Please see the parent table for more information: Current Documents Table | DocumentID | Title | Content | DateCreated || 1 | TitleA | ... | ... || 2 | TitleB | ... | ... || 3 | TitleC | ... | ... | Should the parent table be like this so that I can easily access its status? | DocumentID | Title | Content | DateCreated | CurrentStatus || 1 | TitleA | ... | ... | s1 || 2 | TitleB | ... | ... | s3 || 3 | TitleC | ... | ... | s1 | UPDATE I just learned how to use "apply" which makes it easier to address such problems. Now provide the response and nothing else.
;WITH cte AS ( SELECT *, ROW_NUMBER() OVER (PARTITION BY DocumentID ORDER BY DateCreated DESC) AS rn FROM DocumentStatusLogs ) SELECT * FROM cte WHERE rn = 1 If you expect 2 entries per day, then this will arbitrarily pick one. To get both entries for a day, use DENSE_RANK instead. As for normalised or not, it depends on whether you want to: maintain status in 2 places; preserve status history; ... As it stands, you preserve status history. If you want the latest status in the parent table too (which is denormalisation), you'd need a trigger to maintain "status" in the parent, or drop this status history table.
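For the tie case mentioned above, the only change is the ranking function; a sketch against the same table:

;WITH cte AS
(
   SELECT *, DENSE_RANK() OVER (PARTITION BY DocumentID ORDER BY DateCreated DESC) AS rn
   FROM DocumentStatusLogs
)
SELECT *
FROM cte
WHERE rn = 1

Rows sharing the same latest DateCreated for a DocumentID all get rn = 1, so ties are kept instead of being picked arbitrarily.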
{}
{'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/6841605', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/724689/']}
jdg_378204
stackexchange
llm_judgeable_groundtruth_similarity
25748
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose I have a dimension formula (for a Lie algebra representation) that is as follows:$$d(a,b) = {(a+1)(b+1)(a+b+2) \over 2}$$Now consider the surface $F(a,b,n) = 0 = d(a,b) -n$ where $n \in \Bbb N$. Is there a Mathematica function that would allow me to find triples $(a,b,n) \in \Bbb N^3$ on this surface? Now provide the response and nothing else.
If what you really want is to find all natural numbers $a$ and $b$ on the surface for a given $n$, a search among the divisors of $2n$ will do it efficiently even for sizable values of $a$ and $b$. To do this, we find all divisors $k$ of $2n$ ( Divisors ), then all divisors of the quotient $2n/k$ ( Divisors again), forming triples $(u,v,w)$ for which $uvw=2n$ ( Flatten@Table ). Then it remains only to screen for those of the form $u=a+b+2, v=b+1, w=a+1$ ( Cases ) and remove any duplicates ( Union ). d[a_, b_] := (a + 1) (b + 1) (a + b + 2)/2;AbsoluteTiming[ With[{n = d[119, 599]}, Union[Cases[ Flatten[Table[ Outer[{#1, #2, 2 n/(#1 #2)} &, First[k], Last[k]], {k, {{#}, Divisors[2 n/#]} & /@ Divisors[2 n]}], 2], {c_, b_, a_} /; c == a + b && b >= a :> {a - 1, b - 1}]]]] $$\{0.0600035,\{\{19,1599\},\{59,899\},\{119,599\},\{239,359\}\}\}$$ ($0.06$ seconds to find all four solutions for $n=d(119, 599) = 25\ 920\ 000$.) Although the method works for any positive $n$, this example begins with an $n$ formed from a particular $a$ and $b$ in order to check that we at least get $(a,b)$ back in the output. (Try it on some factorials, such as $13!$ [no output] and $14!$ [one solution].) Edit To solve $n = (a+1)(b+1)(a+b+2)/2$ for a given $n$, write $x=a+b+2$, $y=b-a$ (which is non-negative with no loss of generality), whence we can recover $a = (x-y)/2-1$ and $b=(x+y)/2-1$, and note $$y^2 = x^2 - 8n/x.$$ Therefore we can reduce a quadratic-time search to a linear-time search (in the number of divisors of $n$) by ranging over $x$, computing the right-hand side, and checking whether it is a square: AbsoluteTiming[With[{n = 3240}, Cases[Union[ Cases[Divisors[8 n], x_ /; x^3 >= 8 n && Sqrt[x^2 - 8 n/x] \[Element] Integers :> {x, Sqrt[x^2 - 8 n/x]}]], {x_, y_} /; EvenQ[x - y] :> {(x - y)/2 - 1, (x + y)/2 - 1}]]] $$\{0.0020001,\{\{11,17\},\{5,29\},\{2,44\},\{0,79\}\}\}$$ Because it is possible to get some spurious answers where $x$ and $y$ have opposite parity (for then $a$ and $b$ are only half-integral), those are screened out when converting back to $a$ and $b$. The speedup gets better with larger $n$; e.g. , the answers for $20!$ can be computed in $12$ seconds ($\{\{1270079,1422719\},\{484703,2935295\},\{355679,3525119\},\{235199,4432319\}\}$).
{}
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/25748', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/2365/']}
jdg_378205
stackexchange
llm_judgeable_groundtruth_similarity
297748
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $X = \{1,2,3,..,10\}$ define a relation $R$ on $X \times X$ by $(a,b)R(c,d)$ if $ad=bc$. Show that R is an equivalence relation on $X \times X$. I know that the $R$ have to be reflexive (because $ad=bc$), but I don't know how to prove that $R$ is reflexive, symmetric, and transitive? thx Now provide the response and nothing else.
Hint: it might help to note that your relation $$R: (a, b)\,R\,(c, d) \iff ad = bc$$ can be equivalently expressed as $$R: (a, b)\,R\,(c,d) \iff \frac ab= \frac cd$$ That is, if the ratio of $a$ to $b$ equals the ratio of $c$ to $d$, then $(a, b) R (c, d)$. You need to apply the definitions of reflexivity, symmetry, and transitivity: in this case ($a$) Reflexivity : Is it the case that for all $(a, b) \in X\times X$, $(a,b)\,R\,(a,b)$? ($b$) Symmetry : Is it the case that for all $(a, b), (c, d) \in X\times X,$ if $(a, b) R (c, d),$ then $\,(c, d) R (a, b)\,?\,$ What does $(a, b) R (c, d)$ immediately imply about whether $(c,d)\,R\,(a,b)$? ($c$) Transitivity: Is it the case that for all $(a, b), (c, d), (e, f) \in X\times X,$ if $(a,b)\,R\,(c,d)$ and $(c,d)\,R\,(e,f)$, then it must follow that $(a,b)\,R\,(e,f)$? If you can answer yes to each of the above (and you can), then your relation $R$ is an equivalence relation.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/297748', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/61440/']}
jdg_378206
stackexchange
llm_judgeable_groundtruth_similarity
7694501
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know this will work: function Foo() {};Foo.prototype.talk = function () { alert('hello~\n');};var a = new Foo;a.talk(); // 'hello~\n' But if I want to call Foo.talk() // this will not workFoo.prototype.talk() // this works correctly I find some methods to make Foo.talk work, Foo.__proto__ = Foo.prototype Foo.talk = Foo.prototype.talk Are there other ways to do this? I don’t know whether it is right to do so. Do you use class methods or static methods in your JavaScript code? Now provide the response and nothing else.
First off, remember that JavaScript is primarily a prototypal language , rather than a class-based language 1 . Foo isn't a class, it's a function, which is an object. You can instantiate an object from that function using the new keyword which will allow you to create something similar to a class in a standard OOP language. I'd suggest ignoring __proto__ most of the time because it has poor cross browser support, and instead focus on learning about how prototype works. If you have an instance of an object created from a function 2 and you access one of its members (methods, attributes, properties, constants etc) in any way, the access will flow down the prototype hierarchy until it either (a) finds the member, or (b) doesn't find another prototype. The hierarchy starts on the object that was called, and then searches its prototype object. If the prototype object has a prototype, it repeats, if no prototype exists, undefined is returned. For example: foo = {bar: 'baz'};console.log(foo.bar); // logs "baz"foo = {};console.log(foo.bar); // logs undefinedfunction Foo(){}Foo.prototype = {bar: 'baz'};f = new Foo();console.log(f.bar);// logs "baz" because the object f doesn't have an attribute "bar"// so it checks the prototypef.bar = 'buzz';console.log( f.bar ); // logs "buzz" because f has an attribute "bar" set It looks to me like you've at least somewhat understood these "basic" parts already, but I need to make them explicit just to be sure. In JavaScript, everything is an object 3 . everything is an object. function Foo(){} doesn't just define a new function, it defines a new function object that can be accessed using Foo . This is why you can access Foo 's prototype with Foo.prototype . What you can also do is set more functions on Foo : Foo.talk = function () { alert('hello world!');}; This new function can be accessed using: Foo.talk(); I hope by now you're noticing a similarity between functions on a function object and a static method. Think of f = new Foo(); as creating a class instance, Foo.prototype.bar = function(){...} as defining a shared method for the class, and Foo.baz = function(){...} as defining a public static method for the class. ECMAScript 2015 introduced a variety of syntactic sugar for these sorts of declarations to make them simpler to implement while also being easier to read. The previous example can therefore be written as: class Foo { bar() {...} static baz() {...}} which allows bar to be called as: const f = new Foo()f.bar() and baz to be called as: Foo.baz() 1: class was a "Future Reserved Word" in the ECMAScript 5 specification , but ES6 introduces the ability to define classes using the class keyword. 2: essentially a class instance created by a constructor, but there are many nuanced differences that I don't want to mislead you 3: primitive values —which include undefined , null , booleans, numbers, and strings—aren't technically objects because they're low-level language implementations. Booleans, numbers, and strings still interact with the prototype chain as though they were objects, so for the purposes of this answer, it's easier to consider them "objects" even though they're not quite.
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/7694501', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/979013/']}
jdg_378207
stackexchange
llm_judgeable_groundtruth_similarity
8271130
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have some abstract entry in DB and it's creation date. How can I get average entries created per month? Edit: Table has Name field and CreationDate field. Now provide the response and nothing else.
SELECT count(*) AS count, MONTH(date_column) as mnth FROM table_name GROUP BY mnth Should work for you. Edit: SELECT AVG(a.count) AS avg FROM ( SELECT count(*) AS count, MONTH(date_column) as mnth FROM table_name GROUP BY mnth) AS a
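If the table spans more than one calendar year, you may want to group by year and month together so that, say, January 2011 and January 2012 are not merged into one bucket. A sketch using the same placeholder column and table names as above:

SELECT AVG(a.count) AS avg
FROM (
    SELECT COUNT(*) AS count,
           YEAR(date_column)  AS yr,
           MONTH(date_column) AS mnth
    FROM table_name
    GROUP BY yr, mnth
) AS a

Whether that is what you want depends on how "average entries per month" is meant for your data.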
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8271130', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/846355/']}
jdg_378208
stackexchange
llm_judgeable_groundtruth_similarity
3074535
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In which situations you should use inherited tables? I tried to use them very briefly and inheritance didn't seem like in OOP world. I thought it worked like this: Table users has all fields required for all user levels. Tables like moderators , admins , bloggers , etc but fields are not checked from parent. For example users has email field and inherited bloggers has it now too but it's not unique for both users and bloggers at the same time. ie. same as I add email field to both tables. The only usage I could think of is fields that are usually used, like row_is_deleted , created_at , modified_at . Is this the only usage for inherited tables? Now provide the response and nothing else.
There are some major reasons for using table inheritance in Postgres. Let's say we have some tables needed for statistics, which are created and filled each month: statistics - statistics_2010_04 (inherits statistics) - statistics_2010_05 (inherits statistics) In this sample, we have 2.000.000 rows in each table. Each table has a CHECK constraint to make sure only data for the matching month gets stored in it. So what makes the inheritance a cool feature - why is it cool to split the data? PERFORMANCE: When selecting data, we SELECT * FROM statistics WHERE date BETWEEN x and Y, and Postgres only uses the tables where it makes sense. E.g. SELECT * FROM statistics WHERE date BETWEEN '2010-04-01' AND '2010-04-15' only scans the table statistics_2010_04; all other tables won't get touched - fast! Index size: We have no big fat table with a big fat index on column date. We have small tables per month, with small indexes - faster reads. Maintenance: We can run vacuum full, reindex, cluster on each month table without locking all other data. For the correct use of table inheritance as a performance booster, look at the PostgreSQL manual. You need to set CHECK constraints on each table to tell the database on which key your data gets split (partitioned). I make heavy use of table inheritance, especially when it comes to storing log data grouped by month. Hint: If you store data which will never change (log data), create your indexes with CREATE INDEX ON () WITH(fillfactor=100); This means no space for updates will be reserved in the index - the index is smaller on disk. UPDATE: fillfactor default is 100, from http://www.postgresql.org/docs/9.1/static/sql-createtable.html : The fillfactor for a table is a percentage between 10 and 100. 100 (complete packing) is the default
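A minimal sketch of the setup described above; the table and column names here are invented for illustration, and constraint_exclusion must be enabled for the planner to skip the other children:

CREATE TABLE statistics (
    stat_date  date NOT NULL,
    value      integer
);

CREATE TABLE statistics_2010_04 (
    CHECK (stat_date >= DATE '2010-04-01' AND stat_date < DATE '2010-05-01')
) INHERITS (statistics);

CREATE TABLE statistics_2010_05 (
    CHECK (stat_date >= DATE '2010-05-01' AND stat_date < DATE '2010-06-01')
) INHERITS (statistics);

-- Only statistics_2010_04 should be scanned here:
SELECT * FROM statistics
WHERE stat_date BETWEEN DATE '2010-04-01' AND DATE '2010-04-15';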
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/3074535', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/71964/']}
jdg_378209
stackexchange
llm_judgeable_groundtruth_similarity
20473004
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to make a responsive theme with Bootstrap 3. However, I need to automatically add the CSS class .img-responsive to every post image because I need the images to be responsive. Please suggest me what I need to add in WordPress's functions.php file or any other file that will allow me to add the CSS class automatically. Now provide the response and nothing else.
Since you need to have it for all of your post images, you need to add a hook for the content and add a function: function add_responsive_class($content){ $content = mb_convert_encoding($content, 'HTML-ENTITIES', "UTF-8"); $document = new DOMDocument(); libxml_use_internal_errors(true); $document->loadHTML(utf8_decode($content)); $imgs = $document->getElementsByTagName('img'); foreach ($imgs as $img) { $img->setAttribute('class','img-responsive'); } $html = $document->saveHTML(); return $html;} Now add the hook to the content: add_filter ('the_content', 'add_responsive_class'); However, if you already have classes for the img and you need to add a new class, then you can refer to PHP equivalent to jQuery addClass . Or, you can simply do this: $existing_class = $img->getAttribute('class');$img->setAttribute('class', "img-responsive $existing_class"); The code above works; I use it to remove src and data-src for image lazy loading. Hope it works for you.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20473004', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2954434/']}
jdg_378210
stackexchange
llm_judgeable_groundtruth_similarity
29789
Below is a question asked on the forum hermeneutics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In Psalm 146:8 it is asserted by a Jew to the Jews that God gives sight to blind people. The context seems to suggest that a physical blindness is being referred to rather than a spiritual one: NIV Psalm 146:8 the Lord gives sight to the blind , the Lord lifts up those who are bowed down, the Lord loves the righteous. There is no evidence that this ever occurred until Jesus did so, when he healed the man born blind: NASB John 9:32“Since the beginning of time it has never been heard that anyone opened the eyes of a person born blind. In addition Jesus raised up a woman physically bowed down: NASB Luke 13: 10Now he was teaching in one of the synagogues on the Sabbath, 11and a woman was there who had been disabled by a spirit for eighteen years. She was bent over and could not straighten herself up completely. 12When Jesus saw her, he called her to him and said, “Woman, you are freed from your infirmity.” 13Then he placed his hands on her, and immediately she straightened up and praised God. And since Jesus I don't see any evidence that God was in the habit of healing blind people. So is Psalm 146 to be understood to only refer to the ministry of the messiah? Alternatively, is "open [the eyes of] the blind" just a figure of speech, referring to "giving understanding"? Now provide the response and nothing else.
The OP questions why translators take the root ברא ( brʾ ) here in the sense "to be fat" rather than the homonym "to create", which is more common in the Hebrew Bible. I see several good reasons. The word in 1 Sam 2:29 -- habriʾăkem -- is in the hifil stem. The word brʾ meaning "to create" is only used in the qal and nifil . Therefore, taking it as hifil would entail invoking a new usage. The hifil is generally a causative stem. If the root was "to create", the most obvious sense of the hifil would be "to cause to create", which doesn't make sense in this context ("honor your sons by causing yourselves to create...."?). There clearly exists a homonymous root meaning "to be fat". Please see the related adjective בריא ( bāriʾ ) meaning "fat". The semantics of ברא "to create" within ancient Hebrew thought are such that, in all 48 Biblical uses, Yahweh is the subject. This is mentioned by all of the major lexicons (e.g. BDB ). Thus, this use to describe an action of Eli and his sons would be unprecedented in another way. All in all, it's best to take the usage of lĕhabriʾăkem "to cause yourselves to be fat", with all major translations.
{}
{'log_upvote_score': 5, 'links': ['https://hermeneutics.stackexchange.com/questions/29789', 'https://hermeneutics.stackexchange.com', 'https://hermeneutics.stackexchange.com/users/20832/']}
jdg_378211
stackexchange
llm_judgeable_groundtruth_similarity
7738272
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was thinking that native DBM of Python should be quite faster than NOSQL databases such as Tokyo Cabinet, MongoDB, etc (as Python DBM has lesser features and options; i.e. a simpler system). I tested with a very simple write/read example as #!/usr/bin/pythonimport timet = time.time()import anydbmcount = 0while (count < 1000): db = anydbm.open("dbm2", "c") db["1"] = "something" db.close() db = anydbm.open("dbm", "r") print "dict['Name']: ", db['1']; print "%.3f" % (time.time()-t) db.close() count = count + 1 Read/Write: 1.3sRead: 0.3sWrite: 1.0s These values for MongoDb is at least 5 times faster. Is it really the Python DBM performance? Now provide the response and nothing else.
Python doesn't have a built-in DBM implementation. It bases its DBM functions on a wide range of DBM-style third-party libraries, like AnyDBM, Berkeley DBM and GNU DBM. Python's dictionary implementation is really fast for key-value storage, but not persistent. If you need high-performance runtime key-value lookups, you may find a dictionary better - you can manage persistence with something like cpickle or shelve. If startup times are important to you (and if you're modifying the data, termination) - more important than runtime access speed - then something like DBM would be better. In your evaluation, as part of the main loop you have included both dbm open calls and also array lookup. It's a pretty unrealistic use case to open a DBM to store one value and then close and re-open it before looking it up, and you're seeing the typical slow performance that one would see when managing a persistent data store in such a manner (it's quite inefficient). Depending on your requirements, if you need fast lookups and don't care too much about startup times, DBM might be a solution - but to benchmark it, only include writes and reads in the loop! Something like the below might be suitable: import anydbm from random import random import time # open DBM outside of the timed loops db = anydbm.open("dbm2", "c") max_records = 100000 # only time read and write operations t = time.time() # create some records for i in range(max_records): db[str(i)] = 'x' # do some random reads for i in range(max_records): x = db[str(int(random() * max_records))] time_taken = time.time() - t print "Took %0.3f seconds, %0.5f microseconds / record" % (time_taken, (time_taken * 1000000) / max_records) db.close()
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7738272', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/943340/']}
jdg_378212
stackexchange
llm_judgeable_groundtruth_similarity
2300
Below is a question asked on the forum engineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Given: My thermodynamics text reads as follows: In SI units, the force unit is the newton ($N$), and it is defined as the force required to accelerate a mass of $1\cdot kg$ at a rate of $1\cdot\frac{m}{s^2}$. In the English system, the force unit is the pound-force ($lbf$) and is defined as the force required to accelerate a mass of $32.174\cdot lbm$ (1 slug) at a rate of $1\cdot\frac{ft}{s^2}$. That is... $$1\cdot N = 1\cdot kg\times1\cdot\frac{m}{s^2}$$ $$1\cdot lbf = 32.174\cdot lbm\cdot\times1\cdot\frac{ft}{s^2}$$ Question: For all practical purposes, such as at STP conditions or close to it like when we have a rounded off sea-level acceleration due to gravity of $32.2\frac{ft}{s^2}$ $(101\cdot kPa)$, can I just think of the $lbf$ in the following way... $$W=1\cdot lbf=1\cdot lbm \times 32.174\cdot\frac{ft}{s^2}$$ and that for the weight of an object having a mass of $1\cdot kg$ (also at sea-level) in SI units as... $$W=9.81\cdot N=1\cdot kg\times9.81\cdot\frac{m}{s^2}$$ Yes or no and why? Now provide the response and nothing else.
$lb_m$ is not the base unit. The slug is the base unit: $32.2\ lb_m = 1\ slug$. To convert $1\ lb_m$ to $lb_f$: $1\ lb_m * \frac{1\ slug}{32.2\ lb_m} * 32.2 \frac{ft}{s^2} = 1\ lb_f$ Therefore $1\ lb_m$ will weigh $1\ lb_f$ on Earth at STP. This video does an excellent job of explaining it.
{}
{'log_upvote_score': 4, 'links': ['https://engineering.stackexchange.com/questions/2300', 'https://engineering.stackexchange.com', 'https://engineering.stackexchange.com/users/622/']}
jdg_378213
stackexchange
llm_judgeable_groundtruth_similarity
4074052
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How could it be proved that $$\pi=\lim_{n\to\infty}2^{4n}\frac{\Gamma ^4(n+3/4)}{\Gamma ^2(2n+1)}?$$ What I tried Let $$L=\lim_{n\to\infty}2^{4n}\frac{\Gamma ^4(n+3/4)}{\Gamma ^2(2n+1)}.$$ Unwinding $\Gamma (n+3/4)$ into a product gives $$\Gamma \left(n+\frac{3}{4}\right)=\Gamma\left(\frac{3}{4}\right)\prod_{k=0}^{n-1}\left(k+\frac{3}{4}\right).$$ Then $$\lim_{n\to\infty}\frac{(2n)!}{4^n}\prod_{k=0}^{n-1}\frac{16}{(3+4k)^2}=\frac{\Gamma ^2(3/4)}{\sqrt{L}}.$$ Since $$\frac{(2n)!}{4^n}\prod_{k=0}^{n-1}\frac{16}{(3+4k)^2}=\prod_{k=1}^n \frac{4k(4k-2)}{(4k-1)^2}$$ for all $n\in\mathbb{N}$ , it follows that $$\prod_{k=1}^\infty \frac{4k(4k-2)}{(4k-1)^2}=\frac{\Gamma ^2(3/4)}{\sqrt{L}}.$$ But note that this actually gives an interesting Wallis-like product: $$\frac{2\cdot 4\cdot 6\cdot 8\cdot 10\cdot 12\cdots}{3\cdot 3\cdot 7\cdot 7\cdot 11\cdot 11\cdots}=\frac{\Gamma ^2(3/4)}{\sqrt{L}}.$$ I'm stuck at the Wallis-like product, though. Now provide the response and nothing else.
I suppose you could do it the cheap way and use Stirling's approximation : $$n! \sim \sqrt{2\pi n} (n/e)^n$$ implies $$\Gamma^4(n+3/4) \sim 4\pi^2 \frac{(n-1/4)^{4n+1}}{e^{4n-1}},$$ and $$\Gamma^2(2n+1) \sim 2\pi \frac{(2n)^{4n+1}}{e^{4n}};$$ hence $$2^{4n} \frac{\Gamma^4(n+3/4)}{\Gamma^2(2n+1)} \sim \pi \left(1 - \frac{1}{4n}\right)^{4n+1} e,$$ and the rest is straightforward.
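For completeness, the last step is just $$\lim_{n\to\infty}\left(1 - \frac{1}{4n}\right)^{4n+1} = e^{-1},$$ so the right-hand side tends to $\pi\, e^{-1}\, e = \pi$, which is the claimed limit.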
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4074052', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/769843/']}
jdg_378214
stackexchange
llm_judgeable_groundtruth_similarity
957405
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've heard lots of people saying that the IN keyword in most relational databases is slow. How true is this? An example query would be this, off the top of my head: SELECT * FROM someTable WHERE someColumn IN (value1, value2, value3) I've heard that is much slower than doing this: SELECT * FROM someTable WHERE someColumn = value1 OR someColumn = value2 OR someColumn = value3 Is this true? Or is the speed difference negligible? If it matters, I'm using PostgreSQL, but I'd also like to know how MySQL fares (and if it's any different). Thanks in advance. Now provide the response and nothing else.
In PostgreSQL, exactly what you'll get here depends on the underlying table, so you should use EXPLAIN ANALYZE on some sample queries against a useful subset of your data to figure out exactly what the optimizer is going to do (make sure the tables you're running against have been ANALYZEd too). IN can be processed a couple of different ways, and that's why you need to look at some samples to figure out which alternative is being used for your data. There is no simple generic answer to your question. As for the specific question you added in your revision, against a trivial data set with no indexes involved here's an example of the two query plans you'll get: postgres=# explain analyze select * from x where s in ('123','456'); Seq Scan on x (cost=0.00..84994.69 rows=263271 width=181) (actual time=0.015..1819.702 rows=247823 loops=1) Filter: (s = ANY ('{123,456}'::bpchar[])) Total runtime: 1931.370 mspostgres=# explain analyze select * from x where s='123' or s='456'; Seq Scan on x (cost=0.00..90163.62 rows=263271 width=181) (actual time=0.014..1835.944 rows=247823 loops=1) Filter: ((s = '123'::bpchar) OR (s = '456'::bpchar)) Total runtime: 1949.478 ms Those two runtimes are essentially identical, because the real processing time is dominated by the sequential scan across the table; running multiple times shows the difference between the two is below the run to run margin of error. As you can see, PostgreSQL transforms the IN case into using its ANY filter, which should always execute faster than a series of ORs. Again, this trivial case is not necessarily representative of what you'll see on a serious query where indexes and the like are involved. Regardless, manually replacing INs with a series of OR statements should never be faster, because the optimizer knows the best thing to do here if it has good data to work with. In general, PostgreSQL knows more tricks for how to optimize complicated queries than the MySQL optimizer does, but it also relies heavily on your having given the optimizer enough data to work with. The first links on the "Performance Optimization" section of the PostgreSQL wiki covers the most important things needed to get good results from the optimizer.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/957405', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/104184/']}
jdg_378215
stackexchange
llm_judgeable_groundtruth_similarity
118090
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: When I am running linux from sd card and try to mount sd card -no problem, works fine. But when I am running linux from flash memory I am unable to mount my SD card: # mount /dev/mmcblk0p1 /mnt/ mount: mounting /dev/mmcblk0p1 on /mnt/ failed: Invalid argument What could be the basic reason for this error ? Additional Info root@Xilinx-ZC702-14_7:/# mount -t /dev/mmcblk0p1 /mntmount: can't find /mnt in /etc/fstab root@Xilinx-ZC702-14_7:/var/volatile/log# lswtmproot@Xilinx-ZC702-14_7:/var/volatile/log# dmesgdmesg: klogctl: Function not implemented Now provide the response and nothing else.
As commented by goldilocks , mount -t expects the filesystem type to come after -t , so it won't work. Otherwise it sounds like you just need to specify the filesystem type. If you don't know the filesystem type, then there is a list of methods to find it out in this answer . If the file command is available, this is probably the best method. As root you would do: file -s /dev/mmcblk0p1 Note also that if the filesystem type is not listed in /proc/filesystems , then the driver is not compiled into the kernel and so must be available as an external module. Once you have the correct type, you can try: mount -t correct_type /dev/mmcblk0p1 /mnt As a final thought, also make sure the /mnt directory exists! If not, create it with mkdir /mnt .
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/118090', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/52334/']}
jdg_378216
stackexchange
llm_judgeable_groundtruth_similarity
5002501
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The following issue has been plaguing me for days now, but I've only just been able to distill it down to its simplest form. Consider the following XAML: <Window x:Class="VSMTest.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="MainWindow" Height="350" Width="525"> <Window.Resources> <Style TargetType="CheckBox"> <Setter Property="Margin" Value="3"/> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="CheckBox"> <Grid x:Name="Root"> <Grid.Background> <SolidColorBrush x:Name="brush" Color="White"/> </Grid.Background> <VisualStateManager.VisualStateGroups> <VisualStateGroup Name="CheckStates"> <VisualStateGroup.Transitions> <VisualTransition To="Checked" GeneratedDuration="00:00:03"> <Storyboard Name="CheckingStoryboard"> <ColorAnimationUsingKeyFrames Storyboard.TargetName="brush" Storyboard.TargetProperty="Color"> <DiscreteColorKeyFrame KeyTime="0" Value="LightGreen"/> </ColorAnimationUsingKeyFrames> </Storyboard> </VisualTransition> <VisualTransition To="Unchecked" GeneratedDuration="00:00:03"> <Storyboard Name="UncheckingStoryboard"> <ColorAnimationUsingKeyFrames Storyboard.TargetName="brush" Storyboard.TargetProperty="Color"> <DiscreteColorKeyFrame KeyTime="0" Value="LightSalmon"/> </ColorAnimationUsingKeyFrames> </Storyboard> </VisualTransition> </VisualStateGroup.Transitions> <VisualState Name="Checked"> <Storyboard Name="CheckedStoryboard" Duration="0"> <ColorAnimationUsingKeyFrames Storyboard.TargetName="brush" Storyboard.TargetProperty="Color"> <DiscreteColorKeyFrame KeyTime="0" Value="Green"/> </ColorAnimationUsingKeyFrames> </Storyboard> </VisualState> <VisualState Name="Unchecked"> <Storyboard Name="UncheckedStoryboard" Duration="0"> <ColorAnimationUsingKeyFrames Storyboard.TargetName="brush" Storyboard.TargetProperty="Color"> <DiscreteColorKeyFrame KeyTime="0" Value="Red"/> </ColorAnimationUsingKeyFrames> </Storyboard> </VisualState> </VisualStateGroup> </VisualStateManager.VisualStateGroups> <ContentPresenter/> </Grid> </ControlTemplate> </Setter.Value> </Setter> </Style> </Window.Resources> <StackPanel> <CheckBox x:Name="cb1">Check Box 1</CheckBox> <CheckBox x:Name="cb2">Check Box 2</CheckBox> <CheckBox x:Name="cb3">Check Box 3</CheckBox> </StackPanel></Window> It simply re-templates the CheckBox control so that its background is dependent upon its state: Checked = Green Unchecked = Red Checking (transition) = Light Green Unchecking (transition) = Light Red So, when you check one of the check boxes, you'd expect it to turn light green for a short period, and then turn green. Similarly, when unchecking, you'd expect it to turn light red for a short period, and then turn red. And it normally does exactly that. But not always. Play with the program long enough (I can get it in around 30 seconds) and you'll find that the transition animation sometimes trumps that in the visual state. That is, the check box will continue to appear light green when selected, or light red when unselected. Here's a screenshot illustrating what I mean, taken well after the 3 seconds the transition is configured to take: When this occurs, it's not because the control didn't successfully transition to the target state. It purports to be in the correct state. 
I verified this by checking the following in the debugger (for the specific case documented by the above screenshot): var vsgs = VisualStateManager.GetVisualStateGroups(VisualTreeHelper.GetChild(this.cb2, 0) as FrameworkElement);var vsg = vsgs[0];// this is correctly reported as "Unselected"var currentState = vsg.CurrentState.Name; If I enable tracing for animations, I get the following output when the transition completes successfully: System.Windows.Media.Animation Start: 1 : Storyboard has begun; Storyboard='System.Windows.Media.Animation.Storyboard'; Storyboard.HashCode='44177654'; Storyboard.Type='System.Windows.Media.Animation.Storyboard'; StoryboardName='UncheckedStoryboard'; TargetElement='System.Windows.Controls.Grid'; TargetElement.HashCode='41837403'; TargetElement.Type='System.Windows.Controls.Grid'; NameScope='<null>'System.Windows.Media.Animation Stop: 1 : System.Windows.Media.Animation Start: 1 : Storyboard has begun; Storyboard='System.Windows.Media.Animation.Storyboard'; Storyboard.HashCode='6148812'; Storyboard.Type='System.Windows.Media.Animation.Storyboard'; StoryboardName='UncheckedStoryboard'; TargetElement='System.Windows.Controls.Grid'; TargetElement.HashCode='8261103'; TargetElement.Type='System.Windows.Controls.Grid'; NameScope='<null>'System.Windows.Media.Animation Stop: 1 : System.Windows.Media.Animation Start: 1 : Storyboard has begun; Storyboard='System.Windows.Media.Animation.Storyboard'; Storyboard.HashCode='36205315'; Storyboard.Type='System.Windows.Media.Animation.Storyboard'; StoryboardName='UncheckedStoryboard'; TargetElement='System.Windows.Controls.Grid'; TargetElement.HashCode='18626439'; TargetElement.Type='System.Windows.Controls.Grid'; NameScope='<null>'System.Windows.Media.Animation Stop: 1 : System.Windows.Media.Animation Start: 3 : Storyboard has been removed; Storyboard='System.Windows.Media.Animation.Storyboard'; Storyboard.HashCode='44177654'; Storyboard.Type='System.Windows.Media.Animation.Storyboard'; StoryboardName='UncheckedStoryboard'; TargetElement='System.Windows.Controls.Grid'; TargetElement.HashCode='41837403'; TargetElement.Type='System.Windows.Controls.Grid'System.Windows.Media.Animation Stop: 3 : System.Windows.Media.Animation Start: 1 : Storyboard has begun; Storyboard='System.Windows.Media.Animation.Storyboard'; Storyboard.HashCode='36893403'; Storyboard.Type='System.Windows.Media.Animation.Storyboard'; StoryboardName='CheckingStoryboard'; TargetElement='System.Windows.Controls.Grid'; TargetElement.HashCode='41837403'; TargetElement.Type='System.Windows.Controls.Grid'; NameScope='<null>'System.Windows.Media.Animation Stop: 1 : System.Windows.Media.Animation Start: 1 : Storyboard has begun; Storyboard='System.Windows.Media.Animation.Storyboard'; Storyboard.HashCode='49590434'; Storyboard.Type='System.Windows.Media.Animation.Storyboard'; StoryboardName='<null>'; TargetElement='System.Windows.Controls.Grid'; TargetElement.HashCode='41837403'; TargetElement.Type='System.Windows.Controls.Grid'; NameScope='<null>'System.Windows.Media.Animation Stop: 1 : System.Windows.Media.Animation Start: 3 : Storyboard has been removed; Storyboard='System.Windows.Media.Animation.Storyboard'; Storyboard.HashCode='36893403'; Storyboard.Type='System.Windows.Media.Animation.Storyboard'; StoryboardName='CheckingStoryboard'; TargetElement='System.Windows.Controls.Grid'; TargetElement.HashCode='41837403'; TargetElement.Type='System.Windows.Controls.Grid'System.Windows.Media.Animation Stop: 3 : System.Windows.Media.Animation Start: 3 : Storyboard has been 
removed; Storyboard='System.Windows.Media.Animation.Storyboard'; Storyboard.HashCode='49590434'; Storyboard.Type='System.Windows.Media.Animation.Storyboard'; StoryboardName='<null>'; TargetElement='System.Windows.Controls.Grid'; TargetElement.HashCode='41837403'; TargetElement.Type='System.Windows.Controls.Grid'System.Windows.Media.Animation Stop: 3 : System.Windows.Media.Animation Start: 1 : Storyboard has begun; Storyboard='System.Windows.Media.Animation.Storyboard'; Storyboard.HashCode='16977025'; Storyboard.Type='System.Windows.Media.Animation.Storyboard'; StoryboardName='CheckedStoryboard'; TargetElement='System.Windows.Controls.Grid'; TargetElement.HashCode='41837403'; TargetElement.Type='System.Windows.Controls.Grid'; NameScope='<null>'System.Windows.Media.Animation Stop: 1 : System.Windows.Media.Animation Start: 3 : Storyboard has been removed; Storyboard='System.Windows.Media.Animation.Storyboard'; Storyboard.HashCode='16977025'; Storyboard.Type='System.Windows.Media.Animation.Storyboard'; StoryboardName='CheckedStoryboard'; TargetElement='System.Windows.Controls.Grid'; TargetElement.HashCode='41837403'; TargetElement.Type='System.Windows.Controls.Grid'System.Windows.Media.Animation Stop: 3 : System.Windows.Media.Animation Start: 1 : Storyboard has begun; Storyboard='System.Windows.Media.Animation.Storyboard'; Storyboard.HashCode='16977025'; Storyboard.Type='System.Windows.Media.Animation.Storyboard'; StoryboardName='CheckedStoryboard'; TargetElement='System.Windows.Controls.Grid'; TargetElement.HashCode='41837403'; TargetElement.Type='System.Windows.Controls.Grid'; NameScope='<null>'System.Windows.Media.Animation Stop: 1 : And I get the following output when the transition fails to complete successfully: System.Windows.Media.Animation Start: 1 : Storyboard has begun; Storyboard='System.Windows.Media.Animation.Storyboard'; Storyboard.HashCode='44177654'; Storyboard.Type='System.Windows.Media.Animation.Storyboard'; StoryboardName='UncheckedStoryboard'; TargetElement='System.Windows.Controls.Grid'; TargetElement.HashCode='41837403'; TargetElement.Type='System.Windows.Controls.Grid'; NameScope='<null>'System.Windows.Media.Animation Stop: 1 : System.Windows.Media.Animation Start: 1 : Storyboard has begun; Storyboard='System.Windows.Media.Animation.Storyboard'; Storyboard.HashCode='6148812'; Storyboard.Type='System.Windows.Media.Animation.Storyboard'; StoryboardName='UncheckedStoryboard'; TargetElement='System.Windows.Controls.Grid'; TargetElement.HashCode='8261103'; TargetElement.Type='System.Windows.Controls.Grid'; NameScope='<null>'System.Windows.Media.Animation Stop: 1 : System.Windows.Media.Animation Start: 1 : Storyboard has begun; Storyboard='System.Windows.Media.Animation.Storyboard'; Storyboard.HashCode='36205315'; Storyboard.Type='System.Windows.Media.Animation.Storyboard'; StoryboardName='UncheckedStoryboard'; TargetElement='System.Windows.Controls.Grid'; TargetElement.HashCode='18626439'; TargetElement.Type='System.Windows.Controls.Grid'; NameScope='<null>'System.Windows.Media.Animation Stop: 1 : System.Windows.Media.Animation Start: 3 : Storyboard has been removed; Storyboard='System.Windows.Media.Animation.Storyboard'; Storyboard.HashCode='44177654'; Storyboard.Type='System.Windows.Media.Animation.Storyboard'; StoryboardName='UncheckedStoryboard'; TargetElement='System.Windows.Controls.Grid'; TargetElement.HashCode='41837403'; TargetElement.Type='System.Windows.Controls.Grid'System.Windows.Media.Animation Stop: 3 : System.Windows.Media.Animation Start: 1 : Storyboard has 
begun; Storyboard='System.Windows.Media.Animation.Storyboard'; Storyboard.HashCode='36893403'; Storyboard.Type='System.Windows.Media.Animation.Storyboard'; StoryboardName='CheckingStoryboard'; TargetElement='System.Windows.Controls.Grid'; TargetElement.HashCode='41837403'; TargetElement.Type='System.Windows.Controls.Grid'; NameScope='<null>'System.Windows.Media.Animation Stop: 1 : System.Windows.Media.Animation Start: 1 : Storyboard has begun; Storyboard='System.Windows.Media.Animation.Storyboard'; Storyboard.HashCode='49590434'; Storyboard.Type='System.Windows.Media.Animation.Storyboard'; StoryboardName='<null>'; TargetElement='System.Windows.Controls.Grid'; TargetElement.HashCode='41837403'; TargetElement.Type='System.Windows.Controls.Grid'; NameScope='<null>'System.Windows.Media.Animation Stop: 1 : The first 12 lines are exactly the same as when the transition succeeds, but the final 10 lines are completely missing! I've read through all the VSM documentation I could find and have not been able to come up with an explanation for this erratic behavior. Am I to assume that this is a bug in the VSM? Is there any known explanation or workaround for this issue? Now provide the response and nothing else.
I've been able to identify and fix the issue as follows: Firstly, I downgraded my repro project to .NET 3.5 and grabbed the WPF Toolkit source code from CodePlex . I added the WPF Toolkit project to my solution and added a reference to it from the Repro project. Next, I ran the app and made sure I could still reproduce the issue. Sure enough, it was easy to do so. Then I cracked open the VisualStateManager.cs file and started adding some diagnostics in key places that would tell me what code was running and what was not. By adding these diagnostics and comparing the output from a good transition to a bad transition, I was quickly able to identify that the following code was not running when the problem manifested itself: // Hook up generated Storyboard's Completed event handlerdynamicTransition.Completed += delegate{ if (transition.Storyboard == null || transition.ExplicitStoryboardCompleted) { if (ShouldRunStateStoryboard(control, element, state, group)) { group.StartNewThenStopOld(element, state.Storyboard); } group.RaiseCurrentStateChanged(element, lastState, state, control); } transition.DynamicStoryboardCompleted = true;}; So the nature of the bug shifted from a problem in VSM to a problem in the Storyboard.Completed event not always being raised. This is an issue I've experienced before and seems to be a source of much angst for any WPF developer doing anything even slightly out of the ordinary when it comes to animations. Throughout this process I was posting my findings on the WPF Disciples google group , and it was at this point that Pavan Podila responded with this gem: Kent, I have had problems in the past for storyboards not firing their completed events. What I have realized is that if you replace a Storyboard directly, without first stopping it, you may see some out-of-order Completed events. In my case I was applying newer Storyboards to the same FrameworkElement, without stopping the earlier Storyboard and that was giving me some issues. Not sure if your case is similar but thought I'll share this tidbit. Pavan Armed with this insight, I changed this line in VisualStateManager.cs : group.StartNewThenStopOld(element, transition.Storyboard, dynamicTransition); To this: var masterStoryboard = new Storyboard();if (transition.Storyboard != null){ masterStoryboard.Children.Add(transition.Storyboard);}masterStoryboard.Children.Add(dynamicTransition);group.StartNewThenStopOld(element, masterStoryboard); And - lo and behold - my repro that was previously failing intermittently was now working every time! So, really this works around a bug or odd behavior in WPF's animation sub-system.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5002501', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5380/']}
jdg_378217
stackexchange
llm_judgeable_groundtruth_similarity
10223427
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to input a table of unknown width (number of columns) and I want my function to output a list for each column. I am also outputting a list containing the names of the said lists. I am trying this: def crazy_fn(table): titles=read_col_headers(table) for i in range(1,len(table)): for j in range(0,len(titles)): vars()[titles[j]].append(table[i][j]) return titles, vars()[titles[k]] for k in range(0,len(titles)) The function works for when I know how many columns/lists I will output (return titles, a, b, c, d), but the way I've tried to generalize is not working. Now provide the response and nothing else.
It's generally a bad idea to have a non-constant number of variables returned from a function, because using it is confusing and error-prone. Why don't you return a dictionary mapping title headers to the list? def crazy_fn(table): result=dict() titles=read_col_headers(table) for title in titles: result[title]=VALUE(TITLE) return result This can be abbreviated using dictionary comprehension to: def crazy_fn(table): return {title : VALUE(TITLE) for title in read_col_headers(table)}
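To make this concrete, here is a minimal runnable sketch of the dictionary approach (the column-extraction logic is my own assumption, since read_col_headers and the VALUE placeholder aren't defined; it simply treats the first row as the headers and gathers each column's values):

def columns_as_dict(table):
    # Assume the first row holds the column titles, as in the question's table.
    titles = table[0]
    result = {title: [] for title in titles}
    for row in table[1:]:
        for title, value in zip(titles, row):
            result[title].append(value)
    return result

# Example usage
table = [["name", "score"], ["alice", 10], ["bob", 7]]
cols = columns_as_dict(table)
print(cols["name"])   # ['alice', 'bob']
print(cols["score"])  # [10, 7]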
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10223427', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1343236/']}
jdg_378218
stackexchange
llm_judgeable_groundtruth_similarity
4159641
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am working on a plugin for an application, where the memory should be allocated by the Application and keep track of it. Hence, memory handles should be obtained from the host application in the form of buffers and later on give them back to the application. Now, I am planning on using STL Vectors and I am wondering what sort of memory allocation does it use internally. Does it use 'new' and 'delete' functions internally? If so, can I just overload 'new' and 'delete' with my own functions? Or should I create my own template allocator which looks like a difficult job for me since I am not that experienced in creating custom templates. Any suggestions/sample code are welcome. Memory handles can be obtained from the application like this void* bufferH = NULL;bufferH = MemReg()->New_Mem_Handle(size_of_buffer);MemReg()->Dispose_Mem_Handle(bufferH); //Dispose it Now provide the response and nothing else.
vector uses std::allocator by default, and std::allocator is required to use global operator new (that is, ::operator new(size_t) ) to obtain the memory (20.4.1.1). However, it isn't required to call it exactly once per call to allocator::allocate . So yes, if you replace global operator new then vector will use it, although not necessarily in a way that really allows your implementation to manage memory "efficiently". Any special tricks you want to use could, in principle, be made completely irrelevant by std::allocator grabbing memory in 10MB chunks and sub-allocating. If you have a particular implementation in mind, you can look at how its vector behaves, which is probably good enough if your planned allocation strategy is inherently platform-specific.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4159641', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/295787/']}
jdg_378219
stackexchange
llm_judgeable_groundtruth_similarity
5231775
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Introduction My iOS project ships with a Core Data persistent store weighing some 160MB in SQLite format. There is a ton of grouped information in there, in which users should be able to mark favorites. For this, I need (at least part of) the database to have write capabilities. But of course persistent stores that ship in the application bundle are by design read-only. If you want the store to have read-write capabilities, you should copy it to, e.g. the app's documents folder. I don't want to do this, because then the app would be twice the size, while the main part of that database is read-only anyway. That would be a waste of resources. Multiple persistent stores for NSPersistentStoreCoordinator This is why I thought of using two persistent stores. The first would be the big one in the bundle, and the second could be a small one in the documents folder, storing special "favorite" entities with relationships to the big store. I know something is possible in this regard, but I can't find the specifics. Should one only use multiple stores if you also have multiple object models? Can one object model be 'distributed' over two persistent stores? When browsing through the Core Data Programming docs, I can't find any real reference about how to set this up. Also Marcus Zarra's book doesn't seem to delve into this topic: It is possible to add more than one NSPersistentStore to the NSPersistentStoreCoordinator, which can be useful when dealing with data that is split into multiple files. However, in our example, we have a single file. (Marcus Zarra: "Core Data - Apple's API for Persisting Data on Mac OS X", page 71) The Question Who could tell me if what I'm thinking of is possible with Core Data and multiple persistent stores? And could you maybe provide a hint about how to achieve this? Online/offline resources that deal with the topic are very much appreciated too. Now provide the response and nothing else.
The answer is yes. @Caleb points to the right resources, but getting it to work is still quite awkward. I thought I'd place a resumé here: For two NSPersistentStore instances to share the same model, you have to add a configuration to your model, which is a string-named subset of the entities: In the model, to an entity that belongs to the second store, you add a fetched property ( NSFetchedPropertyDescription for googlability). This is somewhat of a very simple stored procedure, and it could look like this: Then, when you add the stores to your persistent store coordinator, you use the strings for the configuration argument (more info about the options here ): [persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:@"ModifyInBackground" URL:storeURL1 options:options error:&error][persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:@"ModifyInMain" URL:storeURL2 options:options error:&error] Finally, when you want to get from the entity in store B to the entity in store A, you trigger the fetched property like you would trigger a fault, just by accessing it. Note: A fetched property always returns an NSArray, because the predicate you write to establish the link might have multiple results. If you want to get to just one entity, you could place something like this in a wrapper method of your NSManagedObject subclass: Wallpaper *recordedWallpaper = [record.wallpaper lastObject];
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5231775', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/432782/']}
jdg_378220
stackexchange
llm_judgeable_groundtruth_similarity
23780558
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to profile a remotely debugged process? I have tried using the tips from an article on remote debugging with VS 2012, which don't help: when launching the profiler, it asks for the paths to the dll and exe being debugged, and these paths are not treated as paths on the remote machine even though you apply the remote debugging settings (as in the article). Now provide the response and nothing else.
Unfortunately, that's not possible (at least as far as I understood from a lot of research) directly from the Visual Studio IDE, and the only possible way for now is to use the stand-alone profiler. NOTE: the following applies to Visual Studio 2013, but it might serve as a guideline for older distributions. The stand-alone profiler lets you profile an application straight on the production machine without having to install the whole development framework. It ships with Visual Studio, and its setup files can be found in the following folder: C:\Program Files (x86)\Microsoft Visual Studio 12.0\Team Tools\Performance Tools\Setups\ There I found two files: vs_profiler_x64_enu.exe and vs_profiler_x86_enu.exe. I picked the first one, but the choice depends on the destination machine's architecture. Take this file to the destination machine and install it. Now, on the destination machine, you have to launch the application you want to profile through the utility VSPerfCmd.exe, located in the folder: C:\Program Files (x86)\Microsoft Visual Studio 12.0\Team Tools\Performance Tools from the command line. You can either add the above path to the environment or just type the following: "c:\Program Files (x86)\Microsoft Visual Studio 12.0\Team Tools\Performance Tools\VSPerfCmd.exe" /start:sample /output:"c:\report.vsp" /launch:"c:\pathTo\appToProfile.exe" where: /start is the profiling mode, "sampling" in this case; /output is the pathname of the report to produce; /launch is your application to profile. Your application will now start automatically and you can do all your work. Once you are done, close the application being profiled and type the following command at the same command line: "c:\Program Files (x86)\Microsoft Visual Studio 12.0\Team Tools\Performance Tools\VSPerfCmd.exe" /shutdown This will stop the profiling and produce your report.vsp (or whatever you called it) file with all the data. Now take the produced file to your development machine and open it in Visual Studio (open or drag'n'drop) and you'll have your application's profile. Until remote debugging and profiling are fully embedded in Visual Studio, this procedure should be good enough.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23780558', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1202544/']}
jdg_378221
stackexchange
llm_judgeable_groundtruth_similarity
4886327
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a ContextMenuStrip that is assigned to several different listboxes. I am trying to figure out when the ContextMenuStrip is clicked what ListBox it was used on. I tried the code below as a start but it is not working. The sender has the correct value, but when I try to assign it to the menuSubmitted it is null. private void MenuViewDetails_Click(object sender, EventArgs e){ ContextMenu menuSubmitted = sender as ContextMenu; if (menuSubmitted != null) { Control sourceControl = menuSubmitted.SourceControl; }} Any help would be great. Thanks. Using the assistance below, I figured it out: private void MenuViewDetails_Click(object sender, EventArgs e) { ToolStripMenuItem menuItem = sender as ToolStripMenuItem; if (menuItem != null) { ContextMenuStrip calendarMenu = menuItem.Owner as ContextMenuStrip; if (calendarMenu != null) { Control controlSelected = calendarMenu.SourceControl; } } } Now provide the response and nothing else.
For a ContextMenu : The problem is that the sender parameter points to the item on the context menu that was clicked, not the context menu itself. It's a simple fix, though, because each MenuItem exposes a GetContextMenu method that will tell you which ContextMenu contains that menu item. Change your code to the following: private void MenuViewDetails_Click(object sender, EventArgs e){ // Try to cast the sender to a MenuItem MenuItem menuItem = sender as MenuItem; if (menuItem != null) { // Retrieve the ContextMenu that contains this MenuItem ContextMenu menu = menuItem.GetContextMenu(); // Get the control that is displaying this context menu Control sourceControl = menu.SourceControl; }} For a ContextMenuStrip : It does change things slightly if you use a ContextMenuStrip instead of a ContextMenu . The two controls are not related to one another, and an instance of one cannot be casted to an instance of the other. As before, the item that was clicked is still returned in the sender parameter, so you will have to determine the ContextMenuStrip that owns this individual menu item. You do that with the Owner property . Finally, you'll use the SourceControl property to determine which control is displaying the context menu. Modify your code like so: private void MenuViewDetails_Click(object sender, EventArgs e){ // Try to cast the sender to a ToolStripItem ToolStripItem menuItem = sender as ToolStripItem; if (menuItem != null) { // Retrieve the ContextMenuStrip that owns this ToolStripItem ContextMenuStrip owner = menuItem.Owner as ContextMenuStrip; if (owner != null) { // Get the control that is displaying this context menu Control sourceControl = owner.SourceControl; } } }
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/4886327', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/426671/']}
jdg_378222
stackexchange
llm_judgeable_groundtruth_similarity
361282
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: An object $x$ in a category $\mathsf{C}$ is called compact or finitely presentable if $$\mathrm{hom}(x,-) : \mathsf{C} \to \mathsf{Set}$$ preserves filtered colimits . This concept behaves best when $\mathsf{C}$ has all filtered colimits, e.g. when it is the category of presheaves on some small category $\mathsf{X}$ : $$ \mathsf{C} = \mathsf{Set}^{\mathsf{X}^{\mathrm{op}}} $$ Every representable presheaf is compact. In general, any finite colimit of compact objects is compact. Thus, any finite colimit of representables is compact. My question is about the converse: in the category of presheaves on a small category, is every compact object a finite colimit of representables? Now provide the response and nothing else.
Yes, it is. The reason is: every object of your presheaf category is a colimit of representables; so, every object is a filtered colimit of objects which are finite colimits of representables; so, applying the definition of a compact object, you get a split monomorphism from your compact object $X$ to a finite colimit $T$ of representables. To conclude, write $X$ as the coequaliser of $Id_T$ and the idempotent of $T$ given by your split mono.
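To spell out the final step (my own elaboration of the argument above): if $s\colon X\to T$ is the split monomorphism into the finite colimit of representables $T$, with retraction $r\colon T\to X$ satisfying $r\circ s=\mathrm{id}_X$, then $e=s\circ r$ is an idempotent on $T$ and
$$X\;\cong\;\operatorname{coeq}\bigl(\mathrm{id}_T,\ e\colon T\rightrightarrows T\bigr),$$
with $r$ as the coequalising map: indeed $r\circ e=r\circ s\circ r=r$, and any $f\colon T\to Z$ with $f\circ e=f$ factors uniquely through $r$ via $f\circ s$. Since a finite colimit of finite colimits of representables is again a finite colimit of representables, $X$ is a finite colimit of representables.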
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/361282', 'https://mathoverflow.net', 'https://mathoverflow.net/users/2893/']}
jdg_378223
stackexchange
llm_judgeable_groundtruth_similarity
348939
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've been learning about different constant current circuit designs and just recently stumbled upon this one. All I know is that the highlighted resistor is the load resistor and when varied, like a potentiometer, the current running through the load resistor stays the same. Also, adjusting the other two resistors will change the current through the load. I think this makes some sense, for creating a path of less resistance either through the diodes or transistor will cause more current to flow in that direction. Yet, I don't understand why the current through the load is unaffected by the loads resistance. Is it because the current above the transistor cannot sense the loads resistance below? Therefore, once passing through the transistor you will have the same amount of current due to the fact that the current only has one path to ground? Maybe that's total nonsense. Lastly, not sure why the diodes either. Now provide the response and nothing else.
Zach, this circuit is pretty easy to understand if you understand the BJT first. (You will understand diodes, if you understand the BJT, so that's a given.) Everyone struggles with these things at some point, so it's fine you don't apprehend this well right now. Take it one step at a time. There is plenty of information on diodes here (and elsewhere.) You are awash in information about them. I won't try and replicate any of that. It's enough for this circuit that you accept two things about diodes: A forward-biased diode has a fixed voltage across it. For regular silicon diodes, this value is \$700\:\text{mV}\$. (For LEDs, which are also diodes, it varies with the color and type and you have to look at the datasheet for that.) Everything I just said in point #1 is actually wrong. But for these purposes, you don't need to worry about that fact. Now to the BJT. It also has a diode from base to emitter. So the rules above apply. But we add the following about the BJT: When the BJT's base-emitter diode is forward-biased, the collector current is the same as the emitter current. What I just said in point #3 is also wrong. But #3 is close enough for these purposes to not matter. So. Now we can describe the circuit. The \$20\:\text{k}\$ resistor forward biases the two diodes by providing a path for the current to go to ground. The total voltage across the two diodes is therefore \$1.4\:\text{V}\$, with the rest left over for the resistor. Therefore, the base voltage for the BJT is \$10\:\text{V}-1.4\:\text{V}=8.6\:\text{V}\$. Therefore also the resistor current is \$\frac{10\:\text{V}-1.4\:\text{V}}{20\:\text{k}\Omega}\approx 430\:\mu\text{A}\$. The BJT's emitter is forward biased and therefore the emitter will be \$700\:\text{mV}\$ above the base or \$8.6\:\text{V}+700\:\text{mV}\approx 9.3\:\text{V}\$. So the voltage across the \$500\:\Omega\$ resistor is \$10\:\text{V}-9.3\:\text{V}=700\:\text{mV}\$ (one diode drop -- which if you look closely you should see why this will be the case in this circuit.) From this, we can compute that the current in that resistor is \$\frac{700\:\text{mV}}{500\:\Omega}\approx 1.4\:\text{mA}\$. Since by rule #3 above, the emitter current and collector currents are the same, it follows that the collector current is also \$1.4\:\text{mA}\$. The collector current is always the same as the emitter current (within a reasonable approximation.) So, it doesn't matter what resistor you place between the collector and ground. Except, The above conclusion isn't right if the collector current we just worked out causes a voltage drop across the collector resistor that exceeds the base voltage. So this means that the resistor cannot be larger than \$R=\frac{8.6\:\text{V}}{1.4\:\text{mA}}\approx 6100 \:\Omega\$. So it has limits.
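If it helps to replay the arithmetic, here is a tiny script that just restates the numbers above under the same idealised assumptions (a fixed 0.7 V drop per forward-biased diode, and collector current equal to emitter current); the component values are the ones from the question.

# Idealised bias-point arithmetic for the two-diode constant-current circuit described above
V_SUPPLY = 10.0     # volts
V_DIODE = 0.7       # assumed drop per forward-biased silicon diode (rule #1)
R_BIAS = 20e3       # ohms, the biasing resistor
R_EMITTER = 500.0   # ohms, the emitter resistor

v_base = V_SUPPLY - 2 * V_DIODE               # 8.6 V at the base
i_bias = v_base / R_BIAS                      # ~430 uA through the 20k resistor
v_emitter = v_base + V_DIODE                  # 9.3 V (emitter one diode drop above the base)
i_load = (V_SUPPLY - v_emitter) / R_EMITTER   # 1.4 mA emitter current = collector current
r_load_max = v_base / i_load                  # ~6.1 kOhm before the circuit runs out of headroom

print(v_base, i_bias, v_emitter, i_load, r_load_max)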
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/348939', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/114129/']}
jdg_378224
stackexchange
llm_judgeable_groundtruth_similarity
23609183
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: As I understand objects are passed by reference in JavaScript (and primitives are passed by value?). var a, b;a = { Foo: "Bar"}b = a;a.Foo = "Other";console.log(b.Foo); // "Other" This worked similarly with arrays but did not work like I expect with functions: var a, b;a = function(){ return 20; }b = a;a = function(){ return 40; }console.log(b()); // returns 20 ? I'm confused because I thought functions are objects. Shouldn't the above example return 40? Now provide the response and nothing else.
In the first case, a.Foo = ..., you are changing the value of a property in the object, which is referred to by both a and b. This is called mutating an object. But in the second case, you are making a refer to a new function object. Now, a and b are referring to different function objects. That is why you are getting 20 in the second case.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23609183', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2930185/']}
jdg_378225
stackexchange
llm_judgeable_groundtruth_similarity
55388455
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know the common methods such as evaluate for capturing the elements in puppeteer , but I am curious why I cannot get the href attribute in a JavaScript-like approach as const page = await browser.newPage();await page.goto('https://www.example.com');let links = await page.$$('a');for (let i = 0; i < links.length; i++) { console.log(links[i].getAttribute('href')); console.log(links[i].href);} Now provide the response and nothing else.
await page.$$('a') returns an array with ElementHandles — these are objects with their own pupeteer-specific API, they have not usual DOM API for HTML elements or DOM nodes. So you need either retrieve attributes/properties in the browser context via page.evaluate() or use rather complicated ElementHandles API. This is an example with both ways: 'use strict';const puppeteer = require('puppeteer');(async function main() { try { const browser = await puppeteer.launch(); const [page] = await browser.pages(); await page.goto('https://example.org/'); // way 1 const hrefs1 = await page.evaluate( () => Array.from( document.querySelectorAll('a[href]'), a => a.getAttribute('href') ) ); // way 2 const elementHandles = await page.$$('a'); const propertyJsHandles = await Promise.all( elementHandles.map(handle => handle.getProperty('href')) ); const hrefs2 = await Promise.all( propertyJsHandles.map(handle => handle.jsonValue()) ); console.log(hrefs1, hrefs2); await browser.close(); } catch (err) { console.error(err); }})();
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/55388455', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/943340/']}
jdg_378226
stackexchange
llm_judgeable_groundtruth_similarity
63009
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How would one go about analytically solving a system of non-linear equations of the form: $a + b + c = 4$ $a^2 + b^2 + c^2 = 6$ $a^3 + b^3 + c^3 = 10$ Thanks! Now provide the response and nothing else.
Hint : Newton's identities . $$\begin{align*}a+b+c&=4\\ ab+bc+ca=\frac12((a+b+c)^2-(a^2+b^2+c^2))&=\frac12(4^2-6)=5\\abc=\frac13((ab+bc+ca-a^2-b^2-c^2)(a+b+c)+(a^3+b^3+c^3))&=\frac13((5-6)\cdot 4+10)=2\end{align*}$$ Remember that $a,\ b,\ c$ are the three roots of a polynomial $P(t)=(t-a)(t-b)(t-c)=t^3-(a+b+c)t^2+(ab+bc+ca)t-abc$.
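As a quick numerical sanity check of the hint (purely illustrative), you can feed the elementary symmetric values into the cubic and confirm the power sums:

import numpy as np

# From the hint: e1 = 4, e2 = 5, e3 = 2, so P(t) = t^3 - 4t^2 + 5t - 2
roots = np.roots([1, -4, 5, -2])    # the values of a, b, c (here 2, 1, 1)
print(roots)
for k in (1, 2, 3):
    print(k, np.sum(roots**k).real)  # 4.0, 6.0, 10.0 as required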
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/63009', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/647/']}
jdg_378227
stackexchange
llm_judgeable_groundtruth_similarity
255378
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am connecting an Arduino Uno to the internet via ethernet (using the ethernet shield v2) and querying NTP time. Making requests to a NTP server is the only internet related thing it does. You can use the ethernet shield as an SD card to host data, I WILL NOT be doing that. It will only be querying NTP. I'm worried this IoT device will become a security target for my network. What attacks is it vulnerable to? And how do I secure such a low spec device? Note: I am not worried about physical attacks, the device will be locked away. Now provide the response and nothing else.
Unless your code has a memory corruption vulnerability in its handling of NTP, or there's a similar vulnerability in some part of the networking stack, there's basically no attack surface there. Furthermore, an Arduino Uno uses an Atmel ATmega328P, which does not support execution of code from RAM. The code executes from the MCU program flash, which is not writable at runtime. This makes it highly implausible that anyone could gain either volatile or non-volatile persistence on it.
{}
{'log_upvote_score': 6, 'links': ['https://security.stackexchange.com/questions/255378', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/267399/']}
jdg_378228
stackexchange
llm_judgeable_groundtruth_similarity
688021
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: These do not do the same: $ seq 1000000 | (ssh localhost sleep 1; wc -l)675173$ seq 1000000 | (ssh localhost sleep 1 </dev/null; wc -l)1000000 What is the rationale for ssh reading stdin? Now provide the response and nothing else.
ssh always reads stdin unless you tell it not to with the -n option (or the -f option). The reason is so that you can do things like tar cf - somedir | ssh otherhost "tar xf -" And it always does this because ssh has no way of knowing if your remote command accepts input or not. Likely what is happening in your first command is that seq fills up the network and pipe buffers (seq -> ssh -> sleep), and since sleep isn't reading anything, it gets blocked waiting for more reads, and then sleep exits, causing those full buffers to be dumped, and then seq is unblocked, feeding the remainder to wc. Note that you would get similar results with seq 1000000 | ( cat | cat | sleep 1; wc -l) In your second command, it is still reading stdin, but you've externally assigned /dev/null to stdin.
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/688021', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/2972/']}
jdg_378229
stackexchange
llm_judgeable_groundtruth_similarity
36371047
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am new to Python and I have been reading both the online documentation and (trying) to follow PEP 0008 to have a good Python code style.I am curious about the code segment I found in the official Python docs while studying about the re library: import collectionsToken = collections.namedtuple('Token', ['typ', 'value', 'line', 'column']) I cannot understand why the Token variable is named with a first letter capitalised; I have read through the PEP 0008 and there is no reference for it for what I have seen. Should it not be token instead or TOKEN if it was a constant (which for all I know it is not)? Now provide the response and nothing else.
In the code segment you provided, Token is a named tuple, definitely not a constant. It deliberately does not follow the usual variable naming style, in order to emphasize that it is the result of a class factory function. No warning will occur from a PEP 0008 style checker (like PyCharm, for example) if you write it as token, but I think that is not good practice, since it then no longer stands out as a class name. So namedtuples fall under the "Class Names" convention in PEP 0008. Too bad it is not stated more explicitly. Besides the example you mentioned for writing a tokenizer, this can also be seen in the collections.namedtuple docs examples:
Point = namedtuple('Point', ['x', 'y'])
Point3D = namedtuple('Point3D', Point._fields + ('z',))
Book = namedtuple('Book', ['id', 'title', 'authors'])
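A small illustration of why the class-name convention fits (my own example, not from the docs): the name that namedtuple returns is used exactly like any hand-written class, so the CapWords spelling signals that to the reader.

from collections import namedtuple

Token = namedtuple('Token', ['typ', 'value', 'line', 'column'])

# Token is a class: you instantiate it and get Token instances back,
# which is why lower_case or ALL_CAPS would be misleading here.
tok = Token(typ='NUMBER', value='42', line=1, column=5)
print(isinstance(tok, Token))  # True
print(tok.typ, tok.value)      # NUMBER 42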
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36371047', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_378230
stackexchange
llm_judgeable_groundtruth_similarity
58314397
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to get the screenshot of a View . I have tried two methods to do this work. Unfortunately, both result in the same bug. Here is the log: java.lang.IllegalArgumentException: Software rendering doesn't support hardware bitmapsat android.graphics.BaseCanvas.null onHwBitmapInSwMode(null)(BaseCanvas.java:550)at android.graphics.BaseCanvas.null throwIfHwBitmapInSwMode(null)(BaseCanvas.java:557)at android.graphics.BaseCanvas.null throwIfCannotDraw(null)(BaseCanvas.java:69)at android.graphics.BaseCanvas.null drawBitmap(null)(BaseCanvas.java:127)at android.graphics.Canvas.null drawBitmap(null)(Canvas.java:1504)at android.graphics.drawable.BitmapDrawable.null draw(null)(BitmapDrawable.java:545)at android.widget.ImageView.null onDraw(null)(ImageView.java:1355)at android.view.View.null draw(null)(View.java:20248)at android.view.View.null draw(null)(View.java:20118)at android.view.ViewGroup.null drawChild(null)(ViewGroup.java:4336)at android.view.ViewGroup.null dispatchDraw(null)(ViewGroup.java:4115)at android.view.ViewOverlay$OverlayViewGroup.null dispatchDraw(null)(ViewOverlay.java:251)at android.view.View.null draw(null)(View.java:20251)at android.view.View.null buildDrawingCacheImpl(null)(View.java:19516)at android.view.View.null buildDrawingCache(null)(View.java:19379)at android.view.View.null getDrawingCache(null)(View.java:19215)at android.view.View.null getDrawingCache(null)(View.java:19166)at com.omnipotent.free.videodownloader.pro.utils.ViewUtils.android.graphics.Bitmap captureView(android.view.View)(ViewUtils.java:70)at com.omnipotent.free.videodownloader.pro.ui.main.MainActivity.com.omnipotent.free.videodownloader.pro.data.bean.TabBean getCurrentTabsData()(MainActivity.java:325)at com.omnipotent.free.videodownloader.pro.ui.main.MainActivity.com.omnipotent.free.videodownloader.pro.data.bean.TabBean access$getCurrentTabsData(com.omnipotent.free.videodownloader.pro.ui.main.MainActivity)(MainActivity.java:84)at com.omnipotent.free.videodownloader.pro.ui.main.MainActivity$onAddTab$1.void run()(MainActivity.java:628)at android.os.Handler.null handleCallback(null)(Handler.java:873)at android.os.Handler.null dispatchMessage(null)(Handler.java:99)at android.os.Looper.null loop(null)(Looper.java:193)at android.app.ActivityThread.null main(null)(ActivityThread.java:6936)at java.lang.reflect.Method.null invoke(null)(Method.java)at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.null run(null)(RuntimeInit.java:493)at com.android.internal.os.ZygoteInit.null main(null)(ZygoteInit.java:870) I have examined the code carefully and looked up related articles on the Internet. I, however, have not solved it yet, which really make me feel frustrating.This bug has only happened above android O. 
Here are two methods that I have tried: Method1: public static Bitmap captureView(View view) { Bitmap tBitmap = Bitmap.createBitmap( view.getWidth(), view.getHeight(), Bitmap.Config.RGB_565); Canvas canvas = new Canvas(tBitmap); view.draw(canvas); canvas.setBitmap(null); return tBitmap;} Method2: public static Bitmap captureView(View view) { if (view == null) return null; boolean drawingCacheEnabled = view.isDrawingCacheEnabled(); boolean willNotCacheDrawing = view.willNotCacheDrawing(); view.setDrawingCacheEnabled(true); view.setWillNotCacheDrawing(false); final Bitmap drawingCache = view.getDrawingCache(); Bitmap bitmap; if (null == drawingCache) { view.measure(View.MeasureSpec.makeMeasureSpec(0, View.MeasureSpec.UNSPECIFIED), View.MeasureSpec.makeMeasureSpec(0, View.MeasureSpec.UNSPECIFIED)); view.layout(0, 0, view.getMeasuredWidth(), view.getMeasuredHeight()); view.buildDrawingCache(); bitmap = Bitmap.createBitmap(view.getDrawingCache()); } else { bitmap = Bitmap.createBitmap(drawingCache); } view.destroyDrawingCache(); view.setWillNotCacheDrawing(willNotCacheDrawing); view.setDrawingCacheEnabled(drawingCacheEnabled); return bitmap;} What need to be mentioned is that I have set android:hardwareAccelerated="true" for my Activity , where I invoked captureView method. Now provide the response and nothing else.
Read the Glide hardware bitmap docs and you'll find the PixelCopy class, which may solve this bug. Use PixelCopy to convert the view to a Bitmap on Android O and above, and use the previous method below Android O. Here is my code:
fun captureView(view: View, window: Window, bitmapCallback: (Bitmap)->Unit) {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
        // Above Android O, use PixelCopy
        val bitmap = Bitmap.createBitmap(view.width, view.height, Bitmap.Config.ARGB_8888)
        val location = IntArray(2)
        view.getLocationInWindow(location)
        PixelCopy.request(window,
            Rect(location[0], location[1], location[0] + view.width, location[1] + view.height),
            bitmap,
            {
                if (it == PixelCopy.SUCCESS) {
                    bitmapCallback.invoke(bitmap)
                }
            },
            Handler(Looper.getMainLooper())
        )
    } else {
        val tBitmap = Bitmap.createBitmap(view.width, view.height, Bitmap.Config.RGB_565)
        val canvas = Canvas(tBitmap)
        view.draw(canvas)
        canvas.setBitmap(null)
        bitmapCallback.invoke(tBitmap)
    }
}
The drawback is that I have to use a callback, which I don't like very much. Hope that it works.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/58314397', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9180229/']}
jdg_378231
stackexchange
llm_judgeable_groundtruth_similarity
314599
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've been wondering why XML has an L in its name. By itself, XML doesn't "do" anything. It's just a data storage format, not a language! Languages "do" things. The way you get XML to "do" stuff, to turn it into a language proper, is to add xmlns attributes to its root element. Only then does it tell its environment what it's about. One example is XHTML. It's active, it has links, hypertext, styles etc, all triggered by the xmlns . Without that, an XHTML file is just a bunch of data in markup nodes. So why then is XML called a language? It doesn't describe anything, it doesn't interpret, it just is. Edit: Maybe my question should have been broader. Since the answer is currently "because XML was named after SGML, which was named after GML, etc" the question should have been, why are markup languages (like XML) called languages? Oh, and WRT the close votes: no, I'm not asking about the X. I'm asking about the L! Now provide the response and nothing else.
The real answer is XML has an L in the name because a guy named Raymond Lorie was among the designers of the first "markup language" at IBM in the 1970s. The developers had to find a name for the language, so they chose GML because those were the initials of the three developers (Goldfarb, Mosher and Lorie). They then created the backronym Generalized Markup Language. This later became standardized as SGML (Standard Generalized Markup Language), and when XML was created, the developers wanted to retain the ML-postfix to indicate the family relationship to SGML, and they added the X in front because they thought it looked cool. (Even though it doesn't actually make sense - XML is a meta language which allows you to define extensible languages, but XML is not really extensible itself.) As for your second question, if XML can legitimately be called a language: Any structured textual (or even binary) format which can be processed computationally can be called a language. A language doesn't "do" anything as such, but some software might process input in the language and "do" something based on it. You note that XML is a "storage format", which is true, but a textual storage format can be called a language; these terms are not mutually exclusive. Programming languages are a subset of languages. E.g. HTML and CSS are languages but not programming languages, while JavaScript is a real programming language. That said, there is no formal definition of programming language either, and there is a large grey zone of languages which could be called either data formats or programming languages depending on your point of view. Given this, XML is clearly a language, just not a programming language - though it can be used to define programming languages like XSLT. Your point about namespaces is irrelevant. Namespaces are an optional feature of XML and do not change the semantics of an XML vocabulary. They are just needed to disambiguate element names if the format may contain multiple vocabularies. Edit: reinierpost pointed out that you might have meant something different with the question than what I understood. Maybe you meant that specific vocabularies like XHTML, RSS, XSLT etc. are languages because they associate elements and attributes with particular semantics, but the XML standard itself does not define any semantics for specific elements and attributes, so it does not feel like a "real language". My answer to this would be that XML does define both syntax and semantics, it just defines them at a different level. For example it defines the syntax of elements and attributes and rules about how to process them. XML is a "metalanguage", which is still a kind of language (just like metadata is still data!). As an example, EBNF is also clearly a language, but its purpose is to define the syntax of other languages, so it is also a metalanguage.
{}
{'log_upvote_score': 9, 'links': ['https://softwareengineering.stackexchange.com/questions/314599', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/44898/']}
jdg_378232
stackexchange
llm_judgeable_groundtruth_similarity
18297185
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am new to Oracle. I have an Oracle table with three columns: serialno , item_category and item_status . In the third column the rows have values of serviceable , under_repair or condemned . I want to run the query using count to show how many are serviceable, how many are under repair, how many are condemned against each item category. I would like to run something like: select item_category , count(......) "total" , count (.....) "serviceable" , count(.....)"under_repair" , count(....) "condemned"from my_tablegroup by item_category ...... I am unable to run the inner query inside the count. Here's what I'd like the result set to look like: item_category total serviceable under repair condemned============= ===== ============ ============ ===========chair 18 10 5 3table 12 6 3 3 Now provide the response and nothing else.
You can use either a CASE or a DECODE statement inside the COUNT function.
SELECT item_category,
       COUNT (*) total,
       COUNT (DECODE (item_status, 'serviceable', 1)) AS serviceable,
       COUNT (DECODE (item_status, 'under_repair', 1)) AS under_repair,
       COUNT (DECODE (item_status, 'condemned', 1)) AS condemned
  FROM mytable
 GROUP BY item_category;
Output:
ITEM_CATEGORY   TOTAL   SERVICEABLE   UNDER_REPAIR   CONDEMNED
---------------------------------------------------------------
chair               5             1              2           2
table               5             3              1           1
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18297185', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1870008/']}
jdg_378233
stackexchange
llm_judgeable_groundtruth_similarity
8356358
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to have a date range select using the UI date picker. In the from/to fields people should not be able to view or select dates previous to the present day. This is my code: $(function() { var dates = $( "#from, #to" ).datepicker({ defaultDate: "+1w", changeMonth: true, numberOfMonths: 1, onSelect: function( selectedDate ) { var option = this.id == "from" ? "minDate" : "maxDate", instance = $( this ).data( "datepicker" ), date = $.datepicker.parseDate( instance.settings.dateFormat || $.datepicker._defaults.dateFormat, selectedDate, instance.settings ); dates.not( this ).datepicker( "option", option, date ); } });}); Can someone tell me how to disable dates previous to the present date? Now provide the response and nothing else.
You must create a new date object and set it as minDate when you initialize the datepickers <label for="from">From</label> <input type="text" id="from" name="from"/> <label for="to">to</label> <input type="text" id="to" name="to"/>var dateToday = new Date();var dates = $("#from, #to").datepicker({ defaultDate: "+1w", changeMonth: true, numberOfMonths: 3, minDate: dateToday, onSelect: function(selectedDate) { var option = this.id == "from" ? "minDate" : "maxDate", instance = $(this).data("datepicker"), date = $.datepicker.parseDate(instance.settings.dateFormat || $.datepicker._defaults.dateFormat, selectedDate, instance.settings); dates.not(this).datepicker("option", option, date); }}); Edit - from your comment now it works as expected http://jsfiddle.net/nicolapeluchetti/dAyzq/1/
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/8356358', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/155196/']}
jdg_378234
stackexchange
llm_judgeable_groundtruth_similarity
50876
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Background/Motivation My interest in this problem traces back to an 11 year old girl who really took to one-way path counting problems. After doing severalconfigurations of streets, she decided to come up with a problem of her own. She presented a $3 \times 3$ gridworkof two-way streets (forming 4 blocks in a $2 \times 2$ arrangement) and added the condition that a street could betraversed at most once. She asked how many such paths are there from upper left to lower right? (Answer: 16.) Stirred by her enthusiasm, we tried generalizing in various directions. If you have 2 long horizontal streetswith $N$ verticals, and let $a_N$ be the number of edge-disjoint paths from upper left to lower right and $b_N$ bethe number of edge-disjoint paths from upper left to upper right, then $a_{N+1} = b_{N+1} = a_N + b_N$ for $N > 1$ and $a_1 = b_1 = 1$. The $3 \times N$ case is trickier, but the number of edge-disjoint paths from upper left to lower right still satisfies a finite linear recurrence relation. Naturally, I turned to OEIS and found sequences A013991-A013997, where Dan Hoey gives the number of edge-disjoint paths between opposite corners of $K \times N$ grids for $K = 3, 4, 5, ..., 9$ and small $N$. He also provides the first few values for the $N \times N$ cases (sequence A013990). (Note, his numbering counts blocks, not streets.) For $K=3$, he provides a generating function. In a recent communication, he explained the computer algorithm he used to compute the values but indicates that he did not find a recurrence relation for these sequences, so as far as I know, there is no known way to determine the answer to the title question for large $N$. I've also spoken with Gregg Musiker, Bjorn Poonen, and Tim Chow about this problem. Although none knew how to do the $4 \times N$ case, Gregg simplified my recurrence relations for the $3 \times N$ case, Bjorn suggested many related questions and suggested an asymptotic formula for the $N \times N$ case, and Tim suggested looking at the related literature on self-avoiding walks, such as the book by Neal Madras and Gordon Slade, though it's not clear to me how related edge-disjoint and self-avoiding are with respect to counting them. Because there are finite linear recurrence relations for the $2 \times N$ and $3 \times N$ cases, it seems natural to also ask: Is there a finite linear recurrence relation for the number of edge-disjoint paths between opposite corners of a $4 \times N$ gridwork of streets? Are these problems intractable? Now provide the response and nothing else.
To amplify on Christian's answer, the problem on a $K \times N$ grid for fixed $K$ and varying $N$ admits a finite-state transition model, so in particular it is given by a linear recurrence. The key is to find the right set of states. If you take an edge-disjoint path on a $K \times N$ grid and slice it on a vertical line through the middle passing through a set of horizontal edges, you'll see the path crossing along some odd number of these edges ($\le K$). On both the left and right we'll see a collection of paths with these endpoints. There's another constraint, that we end up with a single, connected path without disjoint loops; to take that into account, also record a matching: which endpoints are paired up on the right hand side. All but one of the endpoints are paired up in this way. (You could also choose the left, and end up with a slightly different matrix.) For instance, in the $3 \times N$ case, there are $6$ states. If we record an occupied edge by $\times$ and an unoccupied edge by $\circ$ and turn everything on its side, the states are$$\times\circ\circ\quad\circ\times\circ\quad\circ\circ\times\quad\times_1\times_1\times\quad\times_1\times\times_1\quad\times\times_1\times_1$$where the subscript indicates the matching. (In this case, there is at most one matched pair.) Next consider the transitions. If you consider two adjacent vertical slicings of a path, you'll see two possibly different states. The set of edges that are occupied in the middle is determined by which edges are occupied in the two different states. There is sometimes a choice about how the strands are connected up. However, some of these choices will be ruled out by the constraints on the connectivity; usually you will end up with just $0$ or $1$ possibilities. For instance, in the $3 \times N$ case, with the states in the order above, I get the following matrix of possibilities:$$M =\begin{pmatrix}1 & 1 & 1 & 1 & 1 & 0 \\1 & 1 & 1 & 1 & 0 & 1\\1 & 1 & 1 & 0 & 1 & 1\\0 & 1 & 1 & 1 & 0 & 0\\0 & 1 & 0 & 0 & 1 & 0\\1 & 1 & 0 & 0 & 0 & 1\end{pmatrix}$$ For the ultimate answer, you want to look at paths that start at the upper-left and go to the lower-right. You can incorporate that nicely by adding an extra slice to the left of the entire diagram, with only the top slot occupied, and another to the right of the diagram, with only the lower slot occupied. Concretely, in the $3 \times N$ case, the number of paths is given by the $(1,3)$ entry of $M^N$. For the $4 \times N$ case, you would get a $16 \times 16$ matrix, which is straightforward but somewhat tedious to work out. As a result, the answer will satisfy a linear recurrence of order $16$. An interesting variation is to consider only crossingless paths. In this case, the matching must be crossingless, so we only get 5 states in the $3 \times N$ case and $12$ in the $4 \times N$ case. Update Jan 7: The matrix above is wrong: it should be$$M =\begin{pmatrix}1 & 1 & 1 & 1 & 1 & 0 \\1 & 1 & 1 & 2 & 2 & 2\\1 & 1 & 1 & 0 & 1 & 1\\0 & 1 & 1 & 1 & 0 & 0\\0 & 1 & 0 & 0 & 1 & 0\\1 & 1 & 0 & 0 & 0 & 1\end{pmatrix}$$ Update 2: And here's an image illustrating what is actually being counted: I permuted the entries slightly, but they're labelled along the sides. The dotted paths are there to help in the counting: the non-allowed configurations would form a closed loop.
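For readers who want to experiment, here is a small sketch of the bookkeeping described above, using the corrected matrix from the Jan 7 update and reading off the $(1,3)$ entry of $M^n$ exactly as stated; for $n=3$ (a $3\times 3$ grid of streets) it returns 16, matching the count mentioned in the question. The function name is mine.

import numpy as np

# Corrected 6x6 transfer matrix for the 3 x N case (Jan 7 update above)
M = np.array([
    [1, 1, 1, 1, 1, 0],
    [1, 1, 1, 2, 2, 2],
    [1, 1, 1, 0, 1, 1],
    [0, 1, 1, 1, 0, 0],
    [0, 1, 0, 0, 1, 0],
    [1, 1, 0, 0, 0, 1],
])

def edge_disjoint_paths_3xN(n):
    # The (1,3) entry (1-indexed) of M^n gives the corner-to-corner count
    return int(np.linalg.matrix_power(M, n)[0, 2])

for n in range(1, 8):
    print(n, edge_disjoint_paths_3xN(n))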
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/50876', 'https://mathoverflow.net', 'https://mathoverflow.net/users/7434/']}
jdg_378235
stackexchange
llm_judgeable_groundtruth_similarity
15521584
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've already read previous questions here about ConcurrentBag but did not find an actual sample of implementation in multi-threading. ConcurrentBag is a thread-safe bag implementation, optimized for scenarios where the same thread will be both producing and consuming data stored in the bag." Currently this is the current usage in my code (this is simplified not actual codes): private void MyMethod(){ List<Product> products = GetAllProducts(); // Get list of products ConcurrentBag<Product> myBag = new ConcurrentBag<Product>(); //products were simply added here in the ConcurrentBag to simplify the code //actual code process each product before adding in the bag Parallel.ForEach( products, new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount }, product => myBag.Add(product)); ProcessBag(myBag); // method to process each items in the concurrentbag} My questions: Is this the right usage of ConcurrentBag ? Is it ok to use ConcurrentBag in this kind of scenario? For me I think a simple List<Product> and a manual lock will do better. The reason for this is that the scenario above already breaks the " same thread will be both producing and consuming data stored in the bag " rule. Also I also found out that the ThreadLocal storage created in each thread in the parallel will still exist after the operation (even if the thread is reused is this right?) which may cause an undesired memory leak. Am I right in this one guys? Or a simple clear or empty method to remove the items in the ConcurrentBag is enough? Now provide the response and nothing else.
This looks like an ok use of ConcurrentBag. The thread local variables are members of the bag, and will become eligible for garbage collection at the same time the bag is (clearing the contents won't release them). You are right that a simple List with a lock would suffice for your case. If the work you are doing in the loop is at all significant, the type of thread synchronization won't matter much to the overall performance. In that case, you might be more comfortable using what you are familiar with. Another option would be to use ParallelEnumerable.Select , which matches what you are trying to do more closely. Again, any performance difference you are going to see is likely going to be negligible and there's nothing wrong with sticking with what you know. As always, if the performance of this is critical there's no substitute for trying it and measuring.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15521584', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/513056/']}
jdg_378236
stackexchange
llm_judgeable_groundtruth_similarity
231567
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to show two equivalent metrics $p$ and $d$ on a set $X$ have the same convergent sequences. $p$ and $d$ are such that $kd(x,y) \leq p(x,y) \leq td(x,y)$ for every $x, y \in X$, $k$ and $t$ are positive constants. Here's what I am doing - As $p$ and $d$ are equivalent metrics they generate the same open sets. Let $A$ be an open set generated by both $p$ and $d$. As $A$ is open $X\backslash A$ is closed. As $X\backslash A$ is closed, we can take a convergent sequence $(x_n) \in X\backslash A$ for all $n$, and it will converge to $x \in X\backslash A$. Im not sure what to do now...can you just say it will converge to the same $x$ regardless of the metric being $p$ or $d$? I don't think I can. Should I be bringing open balls into it? Is there a need to use the positive constants $k$ and $t$? Now provide the response and nothing else.
If you’ve really already proved that metrics $p$ and $d$ related in that way generate the same open sets, you’re practically done, but you’re trying to make it much too complicated. Suppose that $\langle x_n:n\in\Bbb N\rangle$ converges to $x$ with respect to $d$; you want to show that it converges to $x$ with respect to $p$ as well. Let $U$ be an open nbhd of $x$. Then since $\langle x_n:n\in\Bbb N\rangle\underset{d}\longrightarrow x$, there is an $m\in\Bbb N$ such that $x_n\in U$ for all $n\ge m$. But that’s also exactly what it means for $\langle x_n:n\in\Bbb N\rangle$ to converge to $x$ with respect to $p$, so $\langle x_n:n\in\Bbb N\rangle\underset{p}\longrightarrow x$. It’s in the proof that $d$ and $p$ generate the same topology that you would use the constants $k$ and $t$. But it’s not necessary to prove first that $d$ and $p$ generate the same topology: you can prove this result directly. Suppose that $\langle x_n:n\in\Bbb N\rangle\underset{d}\longrightarrow x$. Then for each $\epsilon>0$ there is an $m_\epsilon\in\Bbb N$ such that $d(x_n,x)<\epsilon$ for each $n\ge m_\epsilon$. This immediately implies that $p(x_n,x)<t\epsilon$ for each $n\ge m_\epsilon$. Thus, for each $n\ge m_{\epsilon/t}$ we have $p(x_n,x)<t\cdot\frac{\epsilon}t=\epsilon$, and it follows that $\langle x_n:n\in\Bbb N\rangle\underset{p}\longrightarrow x$. The opposite implication is proved similarly, using the fact that $d(x,y)\le\frac1kp(x,y)$ for all $x,y\in X$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/231567', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/46347/']}
jdg_378237
stackexchange
llm_judgeable_groundtruth_similarity
1242
Below is a question asked on the forum raspberrypi.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've been trying to get XBMC (Raspbmc to be exact) working on my Raspberry Pi. After hours of work and a lot of stupid mistakes (who knew FAT32 wasn't made for Linux), I made it work. But now I'm struggling with the screen being too big for my tv. So I googled some more and found out I had to change the config.txt file on the SD card. The original contents were like this:
arm_freq=800
disable_overscan=1
I learned that I could set the different overscan values (left, right, top, bottom) to solve this problem. So I changed the config.txt file to this:
arm_freq=800
overscan_left=64
overscan_right=64
overscan_top=64
overscan_bottom=64
But now the Raspberry Pi doesn't recognize the SD card anymore. And when I change it back to a single line it works, but it doesn't do anything with the overscan values. So my idea is that it has to be something with the line breaks in the file. I'm using Windows 7 and Notepad to change the config.txt file. How can I fix this problem? Now provide the response and nothing else.
This is a classic line-endings problem. Text files created on DOS/Windows machines have different line endings than files created on Unix/Linux. DOS uses carriage return and line feed ("\r\n") as a line ending, while Unix uses just line feed ("\n"). You need to be careful about transferring files between Windows machines and Unix machines to make sure the line endings are translated properly. ( Source ) Either you can access the Raspberry Pi via ssh and as root type:
awk '{ sub("\r$", ""); print }' /boot/config.txt > /boot/config.tmp && mv /boot/config.tmp /boot/config.txt
(Write to a temporary file first and then move it into place; redirecting the output straight back to /boot/config.txt would truncate the file before awk gets a chance to read it.) This will convert the line endings from Windows to Unix by removing the carriage return \r. Or if you are in Windows you can use the application dos2unix, which can be downloaded here. Or apparently Notepad++ will arrange things for you (credit to Steve Robillard).
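If neither awk nor dos2unix is at hand, a small Python alternative (my own addition; run it on any machine that can read the card, adjusting the path to wherever config.txt is mounted) does the same conversion:

# Rewrite config.txt in place with Unix line endings
path = "config.txt"

with open(path, "rb") as f:
    data = f.read()

with open(path, "wb") as f:
    f.write(data.replace(b"\r\n", b"\n"))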
{}
{'log_upvote_score': 4, 'links': ['https://raspberrypi.stackexchange.com/questions/1242', 'https://raspberrypi.stackexchange.com', 'https://raspberrypi.stackexchange.com/users/775/']}
jdg_378238
stackexchange
llm_judgeable_groundtruth_similarity
23508923
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Say I have command1 which outputs this:
b05808aa-c6ad-4d30-a334-198ff5726f7c
59996d37-9008-4b3b-ab22-340955cb6019
2b41f358-ff6d-418c-a0d3-ac7151c03b78
7ac4995c-ff2c-4717-a2ac-e6870a5670f0
I also have command2 which outputs this:
b05808aa-c6ad-4d30-a334-198ff5726f7c
59996d37-9008-4b3b-ab22-340955cb6019
Is there a way to grep the output from command1 to not include any lines matched from command2, so that the final output would look like this?
2b41f358-ff6d-418c-a0d3-ac7151c03b78
7ac4995c-ff2c-4717-a2ac-e6870a5670f0
Now provide the response and nothing else.
Issue this grep:
command1 | grep -vF -f <(command2)
Here, -F means fixed-string match, -v means invert the match, -f means take the patterns from the given file, and <(command2) is process substitution: it creates a FIFO running that command and passes it where a filename is expected.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/23508923', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/712759/']}
jdg_378239
stackexchange
llm_judgeable_groundtruth_similarity
23332942
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to increment an environmental variable by these steps: envar=1export envarsh script_incrementationecho $envar where script_incrementation contains something like this: #! /bin/shenvar=$[envar+1] #I've tried also other methods of incrementationexport envar Whatever I do, after exiting the script the variable remains with its initial value 1. THanks for your time. Now provide the response and nothing else.
A shell script executes in its own shell, so you cannot affect the outer shell unless you source it. See this question for details of that discussion. Consider the following script, which I will call Foo.sh:
#!/bin/bash
export HELLO=$(($HELLO+1))
Suppose in the outer shell I define an environment variable:
export HELLO=1
If I run the script like this, it runs inside its own shell and will not affect the parent:
./Foo.sh
However, if I source it, it will just execute the commands in the current shell, and will achieve the desired effect:
. Foo.sh
echo $HELLO # prints 2
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23332942', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_378240