Dataset columns:
source — string (1 class)
task_type — string (1 class)
in_source_id — string (length 1–8)
prompt — string (length 209–40.4k)
gold_standard_solution — string (length 0–56.7k)
verification_info — string (1 class)
metadata — string (length 138–225)
problem_id — string (length 9–10)
stackexchange
llm_judgeable_groundtruth_similarity
1823306
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've read about a few alternatives to MySQL's ORDER BY RAND() function, but most of the alternatives apply only to cases where a single random result is needed. Does anyone have any idea how to optimize a query that returns multiple random results, such as this:

SELECT u.id, p.photo
FROM users u, profiles p
WHERE p.memberid = u.id
  AND p.photo != ''
  AND (u.ownership = 1 OR u.stamp = 1)
ORDER BY RAND()
LIMIT 18

Now provide the response and nothing else.
UPDATE 2016

This solution works best using an indexed column. Here is a simple example of an optimized query, benchmarked with 100,000 rows.

OPTIMIZED: 300ms

SELECT g.*
FROM table g
JOIN (SELECT id
      FROM table
      WHERE RAND() < (SELECT ((4 / COUNT(*)) * 10) FROM table)
      ORDER BY RAND()
      LIMIT 4) AS z ON z.id = g.id

Note about the limit amount: LIMIT 4 and 4 / COUNT(*) - the 4s need to be the same number. Changing how many rows you return doesn't affect the speed much; benchmarks at LIMIT 4 and LIMIT 1000 are the same, and LIMIT 10,000 only took it up to 600ms.

Note about the join: Randomizing just the id is faster than randomizing a whole row, since otherwise the entire row has to be copied into memory before being randomized. The join can be to any table that is linked to the subquery; it's there to prevent table scans.

Note about the WHERE clause: The WHERE ... COUNT condition limits the number of rows that get randomized. It takes a percentage of the results and sorts them, rather than the whole table.

Note about the subquery: If you are doing joins or extra WHERE conditions, you need to put them in both the subquery and the sub-subquery, so the count is accurate and the correct data comes back.

UNOPTIMIZED: 1200ms

SELECT g.*
FROM table g
ORDER BY RAND()
LIMIT 4

PROS

4x faster than ORDER BY RAND(). This solution can work with any table that has an indexed column.

CONS

It is a bit complex with complex queries, and you need to maintain the same conditions in two places (the subquery and the sub-subquery).
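Applied to the query in the question (a sketch, assuming u.id is indexed; per the subquery note above, the same filtering conditions appear in both the subquery and the sub-subquery, and 18 replaces 4 as the limit amount):

SELECT u.id, p.photo
FROM users u
JOIN profiles p ON p.memberid = u.id
JOIN (SELECT u.id
      FROM users u, profiles p
      WHERE p.memberid = u.id
        AND p.photo != ''
        AND (u.ownership = 1 OR u.stamp = 1)
        AND RAND() < (SELECT ((18 / COUNT(*)) * 10)
                      FROM users u, profiles p
                      WHERE p.memberid = u.id
                        AND p.photo != ''
                        AND (u.ownership = 1 OR u.stamp = 1))
      ORDER BY RAND()
      LIMIT 18) AS z ON z.id = u.id
WHERE p.photo != ''
  AND (u.ownership = 1 OR u.stamp = 1);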
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1823306', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/221736/']}
jdg_375041
stackexchange
llm_judgeable_groundtruth_similarity
18285415
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to plot a map for my final project, and I am trying to do a heat map of crime by block in the US. For each block, I have Lat, Lon, and a prediction of the crime rate. It follows this structure:

Lat / Lon / Prediction
-76.0 / 40.0 / 125
-76.120 / 40.5 / 145
-75.98 / 41.001 / 95

And so on. Is there a way to plot a heat map showing the Prediction as the fill? I think this is what geom_tiles does, but that geom is not working (maybe because the points are not evenly spaced). Any help would be more than welcome. Please!

EDIT This is what I have tried so far:

geom_density2d:

ggplot(ny2, aes(x=GEO_CENTROID_LON, y=GEO_CENTROID_LON, fill=prediction)) + geom_density2d()

Gives me the error: "Error in unit(tic_pos.c, "mm") : 'x' and 'units' must have length > 0"

geom_tiles:

ggplot(ny2, aes(x=GEO_CENTROID_LON, y=GEO_CENTROID_LON, fill=prediction)) + geom_tile()

Produces a plot with the proper scale, but no data shown on the map. Regarding choropleth, it would work if I happened to have block-level information for the whole US, but I can't find such data. SUBSAMPLE of data can be found here. Now provide the response and nothing else.
First, let's load the data:

data <- read.csv(file = "NY subsample.csv")

Data points

Then, let's try just plotting the basic locations and values of the data:

require('ggplot2')

# start with points
pred.points <- ggplot(data = data, aes(x = GEO_CENTROID_LON, y = GEO_CENTROID_LAT, colour = prediction)) +
  geom_point()
print(pred.points)
ggsave(filename = "NYSubsamplePredPoints.png", plot = pred.points, scale = 1, width = 5, height = 3, dpi = 300)

which gives us this:

Binned data

Then, you can try to plot the mean in a 2-D region using stat_summary2d():

pred.stat <- ggplot(data = data, aes(x = GEO_CENTROID_LON, y = GEO_CENTROID_LAT, z = prediction)) +
  stat_summary2d(fun = mean)
print(pred.stat)
ggsave(filename = "NYSubsamplePredStat.png", plot = pred.stat, scale = 1, width = 5, height = 3, dpi = 300)

which gives us this plot of the mean value of prediction in each box.

Binned and with custom colormap and correct projection

Next, we can set the bin size, color scales, and fix the projection:

# refine breaks and palette ----
require('RColorBrewer')
YlOrBr <- c("#FFFFD4", "#FED98E", "#FE9929", "#D95F0E", "#993404")
pred.stat.bin.width <- ggplot(data = data, aes(x = GEO_CENTROID_LON, y = GEO_CENTROID_LAT, z = prediction)) +
  stat_summary2d(fun = median, binwidth = c(.05, .05)) +
  scale_fill_gradientn(name = "Median", colours = YlOrBr, space = "Lab") +
  coord_map()
print(pred.stat.bin.width)
ggsave(filename = "NYSubsamplePredStatBinWidth.png", plot = pred.stat.bin.width, scale = 1, width = 5, height = 3, dpi = 300)

which gives us this:

Plotted over a base map

And last of all, here's the data overlain on a map.

require('ggmap')
map.in <- get_map(location = c(min(data$GEO_CENTROID_LON), min(data$GEO_CENTROID_LAT),
                               max(data$GEO_CENTROID_LON), max(data$GEO_CENTROID_LAT)),
                  source = "osm")
theme_set(theme_bw(base_size = 8))
pred.stat.map <- ggmap(map.in) %+% data +
  aes(x = GEO_CENTROID_LON, y = GEO_CENTROID_LAT, z = prediction) +
  stat_summary2d(fun = median, binwidth = c(.05, .05), alpha = 0.5) +
  scale_fill_gradientn(name = "Median", colours = YlOrBr, space = "Lab") +
  labs(x = "Longitude", y = "Latitude") +
  coord_map()
print(pred.stat.map)
ggsave(filename = "NYSubsamplePredStatMap.png", plot = pred.stat.map, scale = 1, width = 5, height = 3, dpi = 300)

Setting the colormap

And finally, to set the colormap to something like http://www.cadmaps.com/images/HeatMapImage.jpg, we can take a guess at the colormap:

colormap <- c("Violet", "Blue", "Green", "Yellow", "Red", "White")

and do the plotting again:

pred.stat.map.final <- ggmap(map.in) %+% data +
  aes(x = GEO_CENTROID_LON, y = GEO_CENTROID_LAT, z = prediction) +
  stat_summary2d(fun = median, binwidth = c(.05, .05), alpha = 1.0) +
  scale_fill_gradientn(name = "Median", colours = colormap, space = "Lab") +
  labs(x = "Longitude", y = "Latitude") +
  coord_map()
print(pred.stat.map.final)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18285415', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1403840/']}
jdg_375042
stackexchange
llm_judgeable_groundtruth_similarity
4889355
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to determine workflow to fine-tune a data entry web application. Picture several address forms on a single web page:

1. Name___________ Street_________ Phone__________
2. Name___________ Street_________ Phone__________
[...many more...]

Now I'd like to know if the user is using the tab key to get to the second "Name" field (or anywhere within that record), or if they're using the mouse to click on it. (Or shift-tab to move in reverse.) I've set a handler on both focus and click for the input fields:

$('input').click(function() { TabulateClick(this) });
$('input').focus(function() { TabulateFocus(this) });

And in the handler, I determine which address the user is working on and whether we've "switched" Address records. (If the focus was in "Phone" for the first address, and you click on the "Name" field in the same address, that's not actually switching records, so I don't tabulate that.)

function TabulateClick(field) {
    var currentAddressRecord = FindAddress(field);
    if ( lastAddressRecord != currentAddressRecord )
        switchedAddressesWithClick++;
    lastAddressRecord = currentAddress;
}

function TabulateFocus(field) {
    var currentAddress = FindAddress(field);
    if ( lastAddressRecord != currentAddressRecord )
        switchedAddressesWithTab++;
    lastAddressRecord = currentAddress;
}

My problem is that when I mouse-click into the field the focus event fires first, tabulating a false switchedAddressesWithTab and changing the currentAddress (that's bad). When the click handler runs, the lastAddressRecord is spoiled. Is there a way inside of the focus handler to know that there is a pending click event on the same object? Or in the click handler to know that it was previously just handled by focus? Now provide the response and nothing else.
Here's something that I think works, based on the fact that the mousedown happens before the focus. See demo.

var lastClick = null;
$('input').mousedown(function(e) {
    lastClick = e.target;
}).focus(function(e) {
    if (e.target == lastClick) {
        console.log('click');
    } else {
        console.log('tab');
    }
    lastClick = null;
});

To fix the bug discovered by Josiah, I changed my code to the below. See demo.

var lastFocusedElement = null;
var isClick = false;
$('input').mousedown(function(e) {
    isClick = true;
}).focus(function(e) {
    // To prevent focus firing when element already had focus
    if (lastFocusedElement != e.target) {
        if (isClick) {
            console.log('click ----');
        } else {
            console.log('tab -----');
        }
        lastFocusedElement = e.target;
        isClick = false;
    }
});
$(document.body).focus(function() {
    lastFocusedElement = document.body;
});

The one problem is that you don't get 'click' or 'tab' when you switch away from the window and switch back. You get a focus event on the input that had focus, but you can't determine whether it's a tab or a click, because it's neither. I think this is the closest you'll get; I would try this on your page and see if the behavior is good enough.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4889355', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8173/']}
jdg_375043
stackexchange
llm_judgeable_groundtruth_similarity
10432
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question:

With[{v1 = #},
  With[{v2 = f[v1]},
    g[v1, v2]
  ]]

How to avoid nested With[] like the above? I'd like to use v1 and v2=f[v1] in the module's body. Is using

Module[{v1, v2}, v2 = f[v1]; g[v1, v2]]

the best/only way to avoid nested module? Now provide the response and nothing else.
I don't think one can avoid the need for nested With altogether - I find it a very common case that declared variables need to use previously declared variables. Since I once wrote a function (actually a macro) that automates nesting With, and generates nested With at run-time, this is a good opportunity to (re)post it as an answer to an exact question that it actually addresses. I will partly borrow the discussion from this answer.

Implementation

Edit Aug. 3, 2015 - added RuleDelayed UpValue, per @Federico's suggestion

Here is the code for it (with added local-variable highlighting):

ClearAll[LetL];
SetAttributes[LetL, HoldAll];
SyntaxInformation[LetL] = {
   "ArgumentsPattern" -> {_, _},
   "LocalVariables" -> {"Solve", {1, Infinity}}
};
LetL /: (assign : SetDelayed | RuleDelayed)[lhs_, rhs : HoldPattern[LetL[{__}, _]]] :=
  Block[{With},
    Attributes[With] = {HoldAll};
    assign[lhs, Evaluate[rhs]]
  ];
LetL[{}, expr_] := expr;
LetL[{head_}, expr_] := With[{head}, expr];
LetL[{head_, tail__}, expr_] :=
  Block[{With},
    Attributes[With] = {HoldAll};
    With[{head}, Evaluate[LetL[{tail}, expr]]]
  ];

What it does is first expand into a nested With, and only then allow the expanded construct to evaluate. It also has a special behavior when used on the r.h.s. of function definitions performed with SetDelayed. I find this macro interesting for many reasons, in particular because it uses a number of interesting techniques together to achieve its goals (UpValues, the Block trick, recursion, Hold-attributes and other tools of evaluation control, and some interesting pattern-matching constructs).

Simple usage

First consider simple use cases such as this:

LetL[{a = 1, b = a + 1, c = a + b + 2}, {a, b, c}]

{1, 2, 5}

We can trace the execution to see how LetL expands into nested With:

Trace[LetL[{a = 1, b = a + 1}, {a, b}], _With]

{{{{With[{b=a+1},{a,b}]},With[{a=1},With[{b=a+1},{a,b}]]},With[{a=1},With[{b=a+1},{a,b}]]},With[{a=1},With[{b=a+1},{a,b}]],With[{b$=1+1},{1,b$}]}

Definition-time expansion in function definitions

When LetL is used to define a function (global rule) via SetDelayed, it expands not at run-time, but at definition-time, having overloaded SetDelayed via UpValues. This is essential to be able to have conditional global rules with variables shared between the body and the condition. For a more detailed discussion of this issue see the answer linked above; here I will just provide an example:

Clear[ff];
ff[x_, y_] := LetL[{xl = x, yl = y + xl + 1}, xl^2 + yl^2 /; (xl + yl < 15)];
ff[x_, y_] := x + y;

We can now check the definitions of ff:

?ff
Global`ff
ff[x_, y_] := With[{xl = x}, With[{yl = y + xl + 1}, xl^2 + yl^2 /; xl + yl < 15]]
ff[x_, y_] := x + y

Now, here is why it was important to expand at definition time: had LetL always expanded at run time, the above two definitions would have been considered the same by the system at definition time (variable-binding time), because the conditional form of With (also that of Module and Block) is hard-wired into the system; inside any other head, Condition has no special meaning to the system. The answer mentioned above shows what happens with a version of Let that expands at run time: the second definition simply replaces the first.

Remarks

I believe that LetL fully implements the semantics of nested With, including conditional rules using With. This is so simply because it always fully expands before execution, as if we wrote those nested With constructs by hand. In this sense, it is closer to true macros, as they are present in e.g. Lisp. I have used LetL in a lot of my own applications and it never let me down.
From my answers on SE, its most notable presence is in this answer, where it is used a lot and those uses illustrate its utility well.
{}
{'log_upvote_score': 7, 'links': ['https://mathematica.stackexchange.com/questions/10432', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/357/']}
jdg_375044
stackexchange
llm_judgeable_groundtruth_similarity
520158
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Can someone explain what is the confidence interval? Why it should be 95%? When it is used and what is it measuring? I understand it's some kind of evaluation metric, but I can't seem to find a decent explanation by connecting it to real-world examples. Any help would be greatly appreciated. thanks Now provide the response and nothing else.
You might find it instructive to start with a basic idea: the variance of any random variable cannot be negative. (This is clear, since the variance is the expectation of the square of something and squares cannot be negative.) Any $2\times 2$ covariance matrix $\mathbb A$ explicitly presents the variances and covariances of a pair of random variables $(X,Y),$ but it also tells you how to find the variance of any linear combination of those variables. This is because whenever $a$ and $b$ are numbers, $$\operatorname{Var}(aX+bY) = a^2\operatorname{Var}(X) + b^2\operatorname{Var}(Y) + 2ab\operatorname{Cov}(X,Y) = \pmatrix{a&b}\mathbb A\pmatrix{a\\b}.$$ Applying this to your problem we may compute $$\begin{aligned}0 \le \operatorname{Var}(aX+bY) &= \pmatrix{a&b}\pmatrix{121&c\\c&81}\pmatrix{a\\b}\\&= 121 a^2 + 81 b^2 + 2c\, ab\\&=(11a)^2+(9b)^2+\frac{2c}{(11)(9)}(11a)(9b)\\&= \alpha^2 + \beta^2 + \frac{2c}{(11)(9)} \alpha\beta.\end{aligned}$$ The last few steps in which $\alpha=11a$ and $\beta=9b$ were introduced weren't necessary, but they help to simplify the algebra. In particular, what we need to do next (in order to find bounds for $c$ ) is complete the square: this is the process emulating the derivation of the quadratic formula to which everyone is introduced in grade school. Writing $$C = \frac{c}{(11)(9)},\tag{*}$$ we find $$\alpha^2 + \beta^2 + \frac{2c}{(11)(9)} \alpha\beta = \alpha^2 + 2C\alpha\beta + \beta^2 = (\alpha+C\beta)^2+(1-C^2)\beta^2.$$ Because $(\alpha+C\beta)^2$ and $\beta^2$ are both squares, they are not negative. Therefore if $1-C^2$ also is non-negative, the entire right side is not negative and can be a valid variance. Conversely, if $1-C^2$ is negative, you could set $\alpha=-C\beta$ to obtain the value $(1-C^2)\beta^2\lt 0$ on the right hand side, which is invalid. You therefore deduce (from these perfectly elementary algebraic considerations) that If $A$ is a valid covariance matrix, then $1-C^2$ cannot be negative. Equivalently, $|C|\le 1,$ which by $(*)$ means $-(11)(9) \le c \le (11)(9).$ There remains the question whether any such $c$ does correspond to an actual variance matrix. One way to show this is true is to find a random variable $(X,Y)$ with $\mathbb A$ as its covariance matrix. Here is one way (out of many). I take it as given that you can construct independent random variables $A$ and $B$ having unit variances: that is, $\operatorname{Var}(A)=\operatorname{Var}(B) = 1.$ (For example, let $(A,B)$ take on the four values $(\pm 1, \pm 1)$ with equal probabilities of $1/4$ each.) The independence implies $\operatorname{Cov}(A,B)=0.$ Given a number $c$ in the range $-(11)(9)$ to $(11)(9),$ define random variables $$X = \sqrt{11^2-c^2/9^2}A + (c/9)B,\quad Y = 9B$$ (which is possible because $11^2 - c^2/9^2\ge 0$ ) and compute that the covariance matrix of $(X,Y)$ is precisely $\mathbb A.$ Finally, if you carry out the same analysis for any symmetric matrix $$\mathbb A = \pmatrix{a & b \\ b & d},$$ you will conclude three things: $a \ge 0.$ $d \ge 0.$ $ad - b^2 \ge 0.$ These conditions characterize symmetric, positive semi-definite matrices. Any $2\times 2$ matrix satisfying these conditions indeed is a variance matrix. (Emulate the preceding construction.)
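For completeness, the final computation referred to above uses only $\operatorname{Var}(A)=\operatorname{Var}(B)=1$ and $\operatorname{Cov}(A,B)=0$: $$\operatorname{Var}(X) = \left(11^2-\frac{c^2}{9^2}\right)\operatorname{Var}(A) + \frac{c^2}{9^2}\operatorname{Var}(B) = 121,$$ $$\operatorname{Var}(Y) = 81\operatorname{Var}(B) = 81, \qquad \operatorname{Cov}(X,Y) = \frac{c}{9}\cdot 9\,\operatorname{Var}(B) = c,$$ so the covariance matrix of $(X,Y)$ is exactly $\mathbb A.$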
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/520158', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/-1/']}
jdg_375045
stackexchange
llm_judgeable_groundtruth_similarity
1020705
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In Petersen's Riemannian geometry text, he defines the Hodge operator $*: \Omega^k(M) \to \Omega^{n-k} (M)$ in the standard way. He then proves (Lemma 26, Chap 7) that $*^2: \Omega^k(M) \to \Omega^k(M)$ is multiplication by $(-1)^{k(n-k)}.$ So far no problems. However, he seems to argue that this lemma implies that the Hodge star gives an isomorphism $H^k(M) \to H^{n-k}(M),$ where we are considering the de Rham cohomology groups. It is clear by the lemma that we have an isomorphism from $\Omega^k \to \Omega^{n-k}$ given by the Hodge star. But why must this descend to an isomorphism on cohomology? I guess one would need to show that the Hodge star maps closed forms to closed forms, and exact forms to exact forms? Is this clear from the lemma, or am I to conclude that Pedersen is foreshadowing a theorem to come? Now provide the response and nothing else.
No, what is obvious is that $\star$ gives an isomorphism from the space of harmonic $k$-forms ($\mathscr H^k(M)$) to the space of harmonic $(n-k)$-forms. The Hodge Theorem gives an isomorphism $H^k(M) \overset{\cong}{\to} \mathscr H^k(M)$. (I don't have Petersen's book here with me, so I can't check to see his context.)
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1020705', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/142700/']}
jdg_375046
stackexchange
llm_judgeable_groundtruth_similarity
3581585
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Whats the difference between a Null pointer & a Void pointer ? Now provide the response and nothing else.
Null pointer is a special reserved value of a pointer. A pointer of any type has such a reserved value. Formally, each specific pointer type ( int * , char * etc.) has its own dedicated null-pointer value. Conceptually, when a pointer has that null value it is not pointing anywhere. Void pointer is a specific pointer type - void * - a pointer that points to some data location in storage, which doesn't have any specific type. So, once again, null pointer is a value , while void pointer is a type . These concepts are totally different and non-comparable. That essentially means that your question, as stated, is not exactly valid. It is like asking, for example, "What is the difference between a triangle and a car?".
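To make the type-vs-value distinction concrete, here is a minimal C sketch (variable names are illustrative):

#include <stdio.h>

int main(void) {
    int x = 42;
    void *vp = &x;       /* void pointer: a TYPE; points at data of unspecified type */
    int  *ip = NULL;     /* null pointer: a VALUE; this typed pointer points nowhere */

    /* A void pointer must be converted back to a typed pointer before use. */
    printf("%d\n", *(int *)vp);           /* prints 42 */

    /* A null pointer of any type compares equal to NULL. */
    if (ip == NULL)
        printf("ip points nowhere\n");

    void *vnull = NULL;  /* a void pointer can itself hold the null value */
    (void)vnull;
    return 0;
}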
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/3581585', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/425094/']}
jdg_375047
stackexchange
llm_judgeable_groundtruth_similarity
15891038
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I created a DataFrame from a list of lists:

table = [
    ['a', '1.2', '4.2'],
    ['b', '70', '0.03'],
    ['x', '5', '0'],
]
df = pd.DataFrame(table)

How do I convert the columns to specific types? In this case, I want to convert columns 2 and 3 into floats. Is there a way to specify the types while converting to DataFrame? Or is it better to create the DataFrame first and then loop through the columns to change the type for each column? Ideally I would like to do this in a dynamic way because there can be hundreds of columns, and I don't want to specify exactly which columns are of which type. All I can guarantee is that each column contains values of the same type. Now provide the response and nothing else.
You have four main options for converting types in pandas:

to_numeric() - provides functionality to safely convert non-numeric types (e.g. strings) to a suitable numeric type. (See also to_datetime() and to_timedelta().)

astype() - convert (almost) any type to (almost) any other type (even if it's not necessarily sensible to do so). Also allows you to convert to categorical types (very useful).

infer_objects() - a utility method to convert object columns holding Python objects to a pandas type if possible.

convert_dtypes() - convert DataFrame columns to the "best possible" dtype that supports pd.NA (pandas' object to indicate a missing value).

Read on for more detailed explanations and usage of each of these methods.

1. to_numeric()

The best way to convert one or more columns of a DataFrame to numeric values is to use pandas.to_numeric(). This function will try to change non-numeric objects (such as strings) into integers or floating-point numbers as appropriate.

Basic usage

The input to to_numeric() is a Series or a single column of a DataFrame.

>>> s = pd.Series(["8", 6, "7.5", 3, "0.9"])  # mixed string and numeric values
>>> s
0      8
1      6
2    7.5
3      3
4    0.9
dtype: object

>>> pd.to_numeric(s)  # convert everything to float values
0    8.0
1    6.0
2    7.5
3    3.0
4    0.9
dtype: float64

As you can see, a new Series is returned. Remember to assign this output to a variable or column name to continue using it:

# convert Series
my_series = pd.to_numeric(my_series)

# convert column "a" of a DataFrame
df["a"] = pd.to_numeric(df["a"])

You can also use it to convert multiple columns of a DataFrame via the apply() method:

# convert all columns of DataFrame
df = df.apply(pd.to_numeric)

# convert just columns "a" and "b"
df[["a", "b"]] = df[["a", "b"]].apply(pd.to_numeric)

As long as your values can all be converted, that's probably all you need.

Error handling

But what if some values can't be converted to a numeric type? to_numeric() also takes an errors keyword argument that allows you to force non-numeric values to be NaN, or simply ignore columns containing these values. Here's an example using a Series of strings s which has the object dtype:

>>> s = pd.Series(['1', '2', '4.7', 'pandas', '10'])
>>> s
0         1
1         2
2       4.7
3    pandas
4        10
dtype: object

The default behaviour is to raise if it can't convert a value. In this case, it can't cope with the string 'pandas':

>>> pd.to_numeric(s)  # or pd.to_numeric(s, errors='raise')
ValueError: Unable to parse string

Rather than fail, we might want 'pandas' to be considered a missing/bad numeric value. We can coerce invalid values to NaN as follows using the errors keyword argument:

>>> pd.to_numeric(s, errors='coerce')
0     1.0
1     2.0
2     4.7
3     NaN
4    10.0
dtype: float64

The third option for errors is just to ignore the operation if an invalid value is encountered:

>>> pd.to_numeric(s, errors='ignore')
# the original Series is returned untouched

This last option is particularly useful for converting your entire DataFrame when you don't know which of the columns can be converted reliably to a numeric type. In that case, just write:

df.apply(pd.to_numeric, errors='ignore')

The function will be applied to each column of the DataFrame. Columns that can be converted to a numeric type will be converted, while columns that cannot (e.g. they contain non-digit strings or dates) will be left alone.

Downcasting

By default, conversion with to_numeric() will give you either an int64 or float64 dtype (or whatever integer width is native to your platform). That's usually what you want, but what if you wanted to save some memory and use a more compact dtype, like float32, or int8?

to_numeric() gives you the option to downcast to either 'integer', 'signed', 'unsigned', or 'float'. Here's an example for a simple series s of integer type:

>>> s = pd.Series([1, 2, -7])
>>> s
0    1
1    2
2   -7
dtype: int64

Downcasting to 'integer' uses the smallest possible integer that can hold the values:

>>> pd.to_numeric(s, downcast='integer')
0    1
1    2
2   -7
dtype: int8

Downcasting to 'float' similarly picks a smaller-than-normal floating type:

>>> pd.to_numeric(s, downcast='float')
0    1.0
1    2.0
2   -7.0
dtype: float32

2. astype()

The astype() method enables you to be explicit about the dtype you want your DataFrame or Series to have. It's very versatile in that you can try to go from one type to any other.

Basic usage

Just pick a type: you can use a NumPy dtype (e.g. np.int16), some Python types (e.g. bool), or pandas-specific types (like the categorical dtype). Call the method on the object you want to convert and astype() will try to convert it for you:

# convert all DataFrame columns to the int64 dtype
df = df.astype(int)

# convert column "a" to int64 dtype and "b" to complex type
df = df.astype({"a": int, "b": complex})

# convert Series to float16 type
s = s.astype(np.float16)

# convert Series to Python strings
s = s.astype(str)

# convert Series to categorical type - see docs for more details
s = s.astype('category')

Notice I said "try" - if astype() does not know how to convert a value in the Series or DataFrame, it will raise an error. For example, if you have a NaN or inf value you'll get an error trying to convert it to an integer. As of pandas 0.20.0, this error can be suppressed by passing errors='ignore'. Your original object will be returned untouched.

Be careful

astype() is powerful, but it will sometimes convert values "incorrectly". For example:

>>> s = pd.Series([1, 2, -7])
>>> s
0    1
1    2
2   -7
dtype: int64

These are small integers, so how about converting to an unsigned 8-bit type to save memory?

>>> s.astype(np.uint8)
0      1
1      2
2    249
dtype: uint8

The conversion worked, but the -7 was wrapped round to become 249 (i.e. 2^8 - 7)! Trying to downcast using pd.to_numeric(s, downcast='unsigned') instead could help prevent this error.

3. infer_objects()

Version 0.21.0 of pandas introduced the method infer_objects() for converting columns of a DataFrame that have an object datatype to a more specific type (soft conversions). For example, here's a DataFrame with two columns of object type. One holds actual integers and the other holds strings representing integers:

>>> df = pd.DataFrame({'a': [7, 1, 5], 'b': ['3', '2', '1']}, dtype='object')
>>> df.dtypes
a    object
b    object
dtype: object

Using infer_objects(), you can change the type of column 'a' to int64:

>>> df = df.infer_objects()
>>> df.dtypes
a     int64
b    object
dtype: object

Column 'b' has been left alone since its values were strings, not integers. If you wanted to force both columns to an integer type, you could use df.astype(int) instead.

4. convert_dtypes()

Version 1.0 and above includes a method convert_dtypes() to convert Series and DataFrame columns to the best possible dtype that supports the pd.NA missing value. Here "best possible" means the type most suited to hold the values. For example, if all of the values are integers (or missing values), the result is a pandas integer type: an object column of Python integer objects is converted to Int64, and a column of NumPy int32 values becomes the pandas dtype Int32.

With our object DataFrame df, we get the following result:

>>> df.convert_dtypes().dtypes
a     Int64
b    string
dtype: object

Since column 'a' held integer values, it was converted to the Int64 type (which is capable of holding missing values, unlike int64). Column 'b' contained string objects, so was changed to pandas' string dtype.

By default, this method will infer the type from object values in each column. We can change this by passing infer_objects=False:

>>> df.convert_dtypes(infer_objects=False).dtypes
a    object
b    string
dtype: object

Now column 'a' remained an object column: pandas knows it can be described as an 'integer' column (internally it ran infer_dtype) but didn't infer exactly what dtype of integer it should have, so it did not convert it. Column 'b' was again converted to 'string' dtype as it was recognised as holding 'string' values.
{}
{'log_upvote_score': 12, 'links': ['https://Stackoverflow.com/questions/15891038', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_375048
stackexchange
llm_judgeable_groundtruth_similarity
2240049
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using php and building an MVC for my web application. I am to the step where I need to connect to my DB, look at the current URL, compare that to data in my DB, and then return the correct content from my DB to the user. My question is, where should I put the code that connects to the DB and processes it? Would this go in the Model class? Now provide the response and nothing else.
The model should be the entity that actively uses the database connection; for simple models a table gateway/active record implementation may work. For more complex models a data mapper further isolates the database from the rest of your application (which makes it less invasive to change a storage back-end). Your controller should process the request and pass whatever relevant data is needed to the model. That said, simply setting up a database connection is usually done by some kind of bootstrap script. The model (or the data mapper) then uses the connection. I'm not sure if that technically falls under 'controller'. In practice most frameworks have bootstraps, routers, and front controllers - whatever you call them, your database should be used by the model but set up by your bootstrap.
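As a rough sketch of that separation in PHP (class, table, and file names are illustrative, not from any particular framework):

<?php
// bootstrap.php - sets up shared infrastructure, including the DB connection
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// model: a simple table gateway that *uses* the connection
class PageModel {
    private $db;
    public function __construct(PDO $db) { $this->db = $db; }

    public function findByUrl($url) {
        $stmt = $this->db->prepare('SELECT * FROM pages WHERE url = ?');
        $stmt->execute([$url]);
        return $stmt->fetch(PDO::FETCH_ASSOC);
    }
}

// controller: processes the request and asks the model for data
$model = new PageModel($pdo);
$page  = $model->findByUrl($_SERVER['REQUEST_URI']);
// ...hand $page to the view for rendering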
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2240049', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/144695/']}
jdg_375049
stackexchange
llm_judgeable_groundtruth_similarity
7039271
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm new to SBT and am unsure how to get a project started. Can someone point me to a beginner's guide to creating a Hello World type project, or give me some clues? My preferred IDE is IDEA. I have run sbt-idea according to the instruction on the IDEA Plugins page . At the moment I'm a bit confused because there are no source folders created - where / how am I supposed to create them and how will SBT know where to look? why is it trying to use Scala 2.8.1, when I have already put scalaVersion := "2.9.0" in the build.sbt file? This means IDEA doesn't recognize object HelloWorld extends App {} . the instructions on the plugins page above suggest changing the Before Launch options of "a Run Configuration (including the Default Run Configuration)". There are 13 different default configurations for different things listed - which one to change? Should I be creating a new one? Are these default configurations just for this project or will it adversely affect all my other projects that don't use SBT? Thanks. Now provide the response and nothing else.
This worked for me:

First get sbt and the gen-idea plugin going...

1. Download the sbt-launch.jar and create the script for launching it as described on the SBT Github wiki.
2. Create a directory for your new project, such as (on linux) ~/myCode/myNewProject, and change to that directory.
3. Run the sbt command. This should download the Scala libraries and create the 'project' and 'target' directories.
4. Change to the 'project' directory.
5. Create a new file 'build.sbt' in this directory with the following lines, as described on the sbt-idea plugin Github wiki:

resolvers += "sbt-idea-repo" at "http://mpeltonen.github.com/maven/"

addSbtPlugin("com.github.mpeltonen" % "sbt-idea" % "1.0.0")

6. Change back to your main project directory, such as ~/myCode/myNewProject.
7. Run sbt. It should download the gen-idea plugin.
8. From the sbt console (which should be running now), run the gen-idea command. It should create the IDEA project directories. For me, it also emits copious warnings.

Now get the IDEA SBT console plugin going...

1. Open IDEA and install the "SBT" plugin from the plugin manager and restart IDEA. (Note this is the IDEA plugin, not the sbt plugin described above.) Configure the SBT plugin as described on its wiki (run configurations, location of sbt-launch.jar etc).
2. Open the freshly generated IDEA project in IDEA.
3. Put your code and other things in the expected default directories as described on the sbt wiki under 'Directory Layout'. You need to create these directories yourself - sbt doesn't create them automatically. The 'src' and 'test' directories should be at the same level as the 'project' and 'target' directories that sbt created.
4. Make up a new 'build.sbt' file and put it in ~/myCode/myProject (or whatever you called it). Since I am just figuring out sbt, mine is simple so far - it just nominates scalatest as a dependency and uses Scala 2.9:

name := "myProject"

version := "0.1"

organization := "me"

libraryDependencies += "org.scalatest" % "scalatest_2.9.0" % "1.6.1"

scalaVersion := "2.9.0"

5. Enter the reload command in the SBT console at the bottom of the IDEA screen. It should download scalatest and Scala 2.9 for you. Maybe you need to run 'update' too.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7039271', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/770361/']}
jdg_375050
stackexchange
llm_judgeable_groundtruth_similarity
9143948
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to create a class in python, which should work like this:

1. Data assigned, maybe bound to a variable (e.g. a = exampleclass(data) or just exampleclass(data))
2. Upon being given data, it should automatically determine some properties of the data, and if certain properties are fulfilled, it will automatically...
3. ... change class to another class

The part 3 is the part that I have a problem with. How do I really change the class inside of the class? For example: if I have two classes, one is Small_Numbers, and the other is Big_numbers, now I want any small_number smaller than 1000 to be transferred into a Big_number and vice versa; testcode:

a = Small_number(50)
type(a) # should return Small_number.
b = Small_number(234234)
type(b) # should return Big_number.
c = Big_number(2)
type(c) # should return Small_number.

Is this possible to do? Now provide the response and nothing else.
Using a factory method is the usual way to solve this, especially since instantiating a class is indistinguishable from calling a function in Python. However, if you really want, you can assign to self.__class__:

THRESHOLD = 1000

class Small(object):
    def __init__(self, n):
        if n < THRESHOLD:
            self.n = n
        else:
            self.__class__ = Big
            self.__init__(n)

class Big(object):
    def __init__(self, n):
        if n < THRESHOLD:
            self.__class__ = Small
            self.__init__(n)
        else:
            self.n = n

This works as expected:

>>> a = Small(100)
>>> type(a)
<class 'Small'>
>>> b = Small(1234)
>>> type(b)
<class 'Big'>
>>> c = Big(2)
>>> type(c)
<class 'Small'>

If assigning to self.__class__ seems too strange, then you can override __new__ instead. This method is called before __init__ is called, and it can be used to pick the class to instantiate:

THRESHOLD = 1000

class Switcher(object):
    def __new__(cls, n):
        if n < THRESHOLD:
            new_cls = Small
        else:
            new_cls = Big
        instance = super(Switcher, new_cls).__new__(new_cls)
        if new_cls != cls:
            # Python only calls __init__ automatically when the returned
            # instance is an instance of cls, so call it ourselves here.
            instance.__init__(n)
        return instance

class Small(Switcher):
    def __init__(self, n):
        self.n = n

class Big(Switcher):
    def __init__(self, n):
        self.n = n
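For comparison, the factory-method route mentioned at the top would look something like this minimal sketch (the function name is mine, not from the question):

THRESHOLD = 1000

class Small(object):
    def __init__(self, n):
        self.n = n

class Big(object):
    def __init__(self, n):
        self.n = n

def make_number(n):
    """Pick the class based on the value - no class switching needed."""
    return Small(n) if n < THRESHOLD else Big(n)

a = make_number(50)      # type(a) is Small
b = make_number(234234)  # type(b) is Big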
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9143948', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1187139/']}
jdg_375051
stackexchange
llm_judgeable_groundtruth_similarity
134581
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: (I)$$\lim_{x \to \infty } \, \left(\sqrt{x^2+x}-\sqrt{x^2-x}\right)=$$$$\lim_{x \to \infty } \, \left(x\sqrt{1+1/x}-x\sqrt{1-1/x}\right)=$$$$\lim_{x \to \infty } \, \left(x\sqrt{1}-x\sqrt{1}\right)=\lim_{x \to \infty } \, \left(x-x\right)=0$$(II)$$\lim_{x \to \infty } \, \left(\sqrt{x^2+x}-\sqrt{x^2-x}\right)=$$$$\lim_{x \to \infty } \, \left(\left(\sqrt{x^2+x}-\sqrt{x^2-x}\right)*\frac{\left(\sqrt{x^2+x}+\sqrt{x^2-x}\right)}{\left(\sqrt{x^2+x}+\sqrt{x^2-x}\right)}\right)=$$$$\lim_{x \to \infty } \, \frac{2x}{\left(\sqrt{x^2+x}+\sqrt{x^2-x}\right)}=$$$$\lim_{x \to \infty } \, \frac{2x}{\left(x\sqrt{1+1/x}+x\sqrt{1-1/x}\right)}=$$$$\lim_{x \to \infty } \, \frac{2x}{\left(x\sqrt{1}+x\sqrt{1}\right)}=\lim_{x \to \infty } \, \frac{2x}{2x}=1$$ I found these two ways to evaluate this limit. I know the answer is 1. The first one is surely wrong. The question is: why? What is wrong there? Now provide the response and nothing else.
You took out the $1/x$ part. Surely $1/x\to0$ in the limit, so it may seem you can evaluate it to $0$ and then look at the rest of the function in the limit all hunky-dory, but consider applying that idea to: $$1=\lim_{x\to\infty} \left(x\cdot\frac{1}{x}\right)=\lim_{x\to\infty}\big(x\cdot0\big) =\lim\,0=0.$$ It doesn't work!
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/134581', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/20927/']}
jdg_375052
stackexchange
llm_judgeable_groundtruth_similarity
4886874
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm looking to create a view of a table which will highlight data that meets a specific criteria. For example, if I have a table with integer values, I want my view to show the rows which have a value greater than 100. I know how to achieve this by creating a view on a table, however is the view dynamic? I have tested this in MySQL and it seems to be true. But if my table has over 1000 rows, is this efficient? Will the view still update "dynamically" to any changes in the original table? Now provide the response and nothing else.
There are basically 2 types of views in MySQL.

Merge Views

This type of view basically just re-writes your queries with the view's SQL. So it's a short-hand for writing the queries yourself. This offers no real performance benefit, but it makes writing complex queries easier and makes maintenance easier (since if the view definition changes, you don't need to change 100 queries against the view, only the one definition).

Temptable Views

This type of view creates a temporary table with the query from the view's SQL. It has all the benefits of the merge view, but also reduces lock time on the view's tables. Therefore on highly loaded servers it could have a fairly significant performance gain.

There's also the "Undefined" view type (the default), which lets MySQL pick what it thinks is the best type at query time...

But note something important: MySQL does not have any support for materialized views. So it's not like Oracle, where a complex view can increase the performance of queries against it significantly. The queries of the views are always executed in MySQL. As far as efficiency goes, views in MySQL do not increase or decrease it. They are there to make your life easier when writing and maintaining queries. I have used views on tables with hundreds of millions of rows, and they have worked just fine...
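If you would rather be explicit than leave the type "Undefined", the algorithm can be named when the view is created - a sketch against a hypothetical table t, using your greater-than-100 example:

-- Merge view: queries against it are rewritten against the base table
CREATE ALGORITHM = MERGE VIEW big_values_merge AS
    SELECT id, value FROM t WHERE value > 100;

-- Temptable view: the result is materialized into a temporary table first
CREATE ALGORITHM = TEMPTABLE VIEW big_values_temp AS
    SELECT id, value FROM t WHERE value > 100;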
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4886874', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/597453/']}
jdg_375053
stackexchange
llm_judgeable_groundtruth_similarity
1146274
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm playing around and trying to write an implementation of RSA. The problem is that I'm stuck on generating the massive prime numbers that are involved in generating a key pair. Could someone point me to a fast way to generate huge primes/probable primes? Now provide the response and nothing else.
You don't generate prime numbers exactly. You generate a large odd number randomly, then test whether that number is prime; if not, generate another one randomly. There are some laws of prime numbers that basically state that your odds of "hitting" a prime via random tries are about 2/ln n. For example, if you want a 512-bit random prime number, the chance for each try is 2/(512 * ln(2)), so roughly 1 out of every 177 of the numbers you try will be prime. There are multiple ways to test if a number is prime; one good one is the "Miller-Rabin test", as stated in another answer to this question. Also, OpenSSL has a nice utility to test for primes:

$ openssl prime 119054759245460753
1A6F7AC39A53511 is not prime
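The generate-and-test loop described above can be sketched in Python (a rough illustration, not production crypto code; the function names are mine):

import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # composite witness found
    return True

def random_prime(bits=512):
    """Draw random odd numbers of the requested size until one tests prime."""
    while True:
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1  # force top bit and oddness
        if is_probable_prime(n):
            return n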
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1146274', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/19470/']}
jdg_375054
stackexchange
llm_judgeable_groundtruth_similarity
128587
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: It´s a theorem that there exist only five platonic solids ( up to similarity). I was searching some proofs of this, but I could not. I want to see some proof of this, specially one that uses principally group theory. Here´s the definition of Platonic solid Wikipedia Platonic solids Now provide the response and nothing else.
From Regular Polytopes by Coxeter, let a spherical graph have $N_0$ vertices, $N_1$ edges, and $N_2$ faces. Euler's formula reads$$ N_0 - N_1 + N_2 = 2. \; \; \; \; \; (1.61) $$Now, suppose our graph is regular, each face has $p$ sides, each vertex has $q$ surrounding faces. Then both $p,q \geq 3.$ Next,$$ q N_0 = 2 N_1 = p N_2 . \; \; \; \; \; (1.71) $$Put them together,$$ \frac{1}{N_1} = \frac{1}{p} + \frac{1}{q} - \frac{1}{2}. \; \; \; \; \; (1.72) $$As $N_1$ is positive, and $p,q \geq 3,$ the possible solutions to $$ \frac{1}{p} + \frac{1}{q} > \frac{1}{2} $$are $$ \{p,q\} =\{3,3\}, \; \; \{3,4\}, \; \; \{4,3\}, \; \; \{3,5\}, \; \; \{5,3\}. $$ Note that the spherical dual graph to $\{p,q\}$ is $\{q,p\},$ while the tetrahedron $\{3,3\}$ is self dual.
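For concreteness, feeding each admissible pair back through (1.72) to get $N_1$, and then (1.71) to get $N_0 = 2N_1/q$ and $N_2 = 2N_1/p$, recovers the familiar counts:
$$\begin{array}{c|ccc|l} \{p,q\} & N_0 & N_1 & N_2 & \\ \hline \{3,3\} & 4 & 6 & 4 & \text{tetrahedron} \\ \{4,3\} & 8 & 12 & 6 & \text{cube} \\ \{3,4\} & 6 & 12 & 8 & \text{octahedron} \\ \{5,3\} & 20 & 30 & 12 & \text{dodecahedron} \\ \{3,5\} & 12 & 30 & 20 & \text{icosahedron} \end{array}$$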
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/128587', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/27634/']}
jdg_375055
stackexchange
llm_judgeable_groundtruth_similarity
224748
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm having trouble with a custom tag:

org.apache.jasper.JasperException: /custom_tags.jsp(1,0) Unable to find setter method for attribute : firstname

This is my TagHandler class:

package com.cg.tags;

import javax.servlet.jsp.JspException;
import javax.servlet.jsp.JspWriter;
import javax.servlet.jsp.tagext.TagSupport;

public class NameTag extends TagSupport {
    public String firstname;
    public String lastname;

    public void setFirstName(String firstname) {
        this.firstname = firstname;
    }

    public void setLastName(String lastname) {
        this.lastname = lastname;
    }

    public int doStartTag() throws JspException {
        try {
            JspWriter out = pageContext.getOut();
            out.println("First name: " + firstname + "Last name: " + lastname);
        } catch (Exception ex) {
            throw new JspException("IO problems");
        }
        return SKIP_BODY;
    }
}

This is my TLD file:

<?xml version="1.0" encoding="UTF-8"?>
<taglib>
  <tlibversion>1.1</tlibversion>
  <jspversion>1.1</jspversion>
  <shortname>utility</shortname>
  <uri>/WEB-INF/nametagdesc.tld</uri>
  <info>A simple tag library for the examples</info>
  <tag>
    <name>name</name>
    <tagclass>com.cg.tags.NameTag</tagclass>
    <bodycontent>empty</bodycontent>
    <attribute>
      <name>firstname</name>
      <required>true</required>
      <rtexprvalue>true</rtexprvalue>
    </attribute>
    <attribute>
      <name>lastname</name>
      <required>true</required>
      <rtexprvalue>true</rtexprvalue>
    </attribute>
  </tag>
</taglib>

And this is my JSP page:

<%@ taglib uri="/WEB-INF/nametagdesc.tld" prefix="cg" %>
<cg:name firstname="fname" lastname="lname"/>

I have checked that the code is recompiled and deployed correctly etc etc.... So, the question is, why can't it find the setter method??? Now provide the response and nothing else.
Check the case of the attributes in your tag element - they should match the case of the setter, not the case of the member variables (which should probably be private, by the way). The rule is that the attribute name has its first letter capitalised and the result is prefixed by 'set', to arrive at the setter name. In your case, you've called the attribute 'firstname', so the rule results in the JSP compiler looking for a 'setFirstname' method. As you've named your setter 'setFirstName' (with a capital 'N'), you should use 'firstName' (also with a capital 'N') for the attribute name. Apply the same rule to the 'lastname' attribute, to arrive at 'lastName', and you should be in business. P.S. Using a good IDE, like IntelliJ, would have helped in this case, as it would have suggested the valid names for your attributes, saving you a lot of head scratching.
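Concretely, with your setters named setFirstName/setLastName, only the attribute case changes in the TLD and the JSP:

<attribute>
  <name>firstName</name>
  <required>true</required>
  <rtexprvalue>true</rtexprvalue>
</attribute>
<attribute>
  <name>lastName</name>
  <required>true</required>
  <rtexprvalue>true</rtexprvalue>
</attribute>

<%@ taglib uri="/WEB-INF/nametagdesc.tld" prefix="cg" %>
<cg:name firstName="fname" lastName="lname"/>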
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/224748', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/28557/']}
jdg_375056
stackexchange
llm_judgeable_groundtruth_similarity
43359734
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: For example, something like this: var value = someArray.indexOf(3) !== -1 ? someArray.indexOf(3) : 0 Is there a better way to write that? Again, I am not seeking an answer to the exact question above, just an example of when you might have repeated operands in ternary operator expressions... Now provide the response and nothing else.
Code should be readable, so being succinct should not mean being terse whatever the cost - for that you should repost to https://codegolf.stackexchange.com/ - so instead I would recommend using a second local variable named index to maximize reading comprehensibility (with minimal runtime cost too, I note):

var index = someArray.indexOf( 3 );
var value = index == -1 ? 0 : index;

But if you really want to cut this expression down, because you're a cruel sadist to your coworkers or project collaborators, then here are 4 approaches you could use:

1: Temporary variable in a var statement

You can use the var statement's ability to define (and assign) a second temporary variable index when separated with commas:

var index = someArray.indexOf(3), value = index !== -1 ? index : 0;

2: Immediately-Invoked Function Expression (IIFE)

Another option is an anonymous function which is invoked immediately after it's defined:

// Traditional syntax:
var value = function( x ) { return x !== -1 ? x : 0 }( someArray.indexOf(3) );

// ES6 syntax:
var value = ( x => x !== -1 ? x : 0 )( someArray.indexOf(3) );

3: Comma operator

There is also the infamous "comma operator" which JavaScript supports, which is also present in C and C++. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Comma_Operator

You can use the comma operator when you want to include multiple expressions in a location that requires a single expression. You can use it to introduce side-effects, in this case by reassigning to value:

var value = ( value = someArray.indexOf(3), value !== -1 ? value : 0 );

This works because var value is interpreted first (as it's a statement), and then the left-most, inner-most value assignment, and then the right-hand of the comma operator, and then the ternary operator - all legal JavaScript.

4: Re-assign in a subexpression

Commentator @IllusiveBrian pointed out that the use of the comma operator (in the previous example) is unneeded if the assignment to value is used as a parenthesized subexpression:

var value = ( ( value = someArray.indexOf(3) ) !== -1 ? value : 0 );

Note that the use of negatives in logical expressions can be harder for humans to follow - so all of the above examples can be simplified for reading by changing idx !== -1 ? x : y to idx == -1 ? y : x - or idx < 0 ? y : x:

var value = ( ( value = someArray.indexOf(3) ) == -1 ? 0 : value );
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/43359734', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1354934/']}
jdg_375057
stackexchange
llm_judgeable_groundtruth_similarity
165
Below is a question asked on the forum biology.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This is an assignment that had confused me for a long time. So I think you guys who study computational biology might be interested. The original question is: Find the two most similar DNA sequences of length 20 that Blast using a word length of 5 will fail to align. Now provide the response and nothing else.
BLAST works by finding a perfect match between sequences of a length equal to this "word length" and then enlarging it in a standard way -- yet there will be no alignment without this perfectly matched word. So in your case, you must look for two 20bp sequences with no common 5bp sub-sequence; for instance: AAAAAAAAAAAAAAAAAAAA and AAAACAAAACAAAACAAAAC
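A quick Python check that the example pair really shares no word of length 5 (variable names are illustrative):

def kmers(seq, k=5):
    """Return the set of all length-k substrings (words) of seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

a = "AAAAAAAAAAAAAAAAAAAA"
b = "AAAACAAAACAAAACAAAAC"
print(kmers(a) & kmers(b))  # set() - no shared 5-mer, so BLAST has no seed to extend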
{}
{'log_upvote_score': 5, 'links': ['https://biology.stackexchange.com/questions/165', 'https://biology.stackexchange.com', 'https://biology.stackexchange.com/users/44/']}
jdg_375058
stackexchange
llm_judgeable_groundtruth_similarity
4728
Below is a question asked on the forum earthscience.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Why is the sea level in Hudson Bay decreasing so much? Hudson Bay is pretty far up north, much closer to glaciers. Would it make sense for it to recede at this level with sources of fresh water relatively close? Now provide the response and nothing else.
The area is experiencing post-glacial isostatic rebound. Much of Canada was covered in an extensive ice sheet in the last glacial period (the 'Ice Age'), from about 110 ka until 12 ka. The ice in the Hudson Bay area was among the last to melt: A thick ice sheet depresses the crust (the lithosphere), making a small dent in the uppermost mantle (the asthenosphere) in the process. Well, not that small: p 375 in Gornitz (2009, Encyclopedia of Paleoclimatology and Ancient Environments) says it could be 800 m for a 3000 metre-thick ice sheet! Since the asthenosphere is highly viscous, it takes a long time for the depression to 'bounce' back up. This map from Natural Resources Canada shows the current rate: Since global sea-level is currently rising at about 3 mm/a, a local uplift at this rate will break even. Anything more will result in relative sea-level fall, as we see in Hudson Bay (as well as in Scandinavia, the UK, Alaska, and elsewhere — this map is wonderful). Interesting, for geologists anyway, is the sedimentological record this leaves. I love this example of raised beaches and a small delta experiencing forced regression on the shores of Hudson Bay: Last thing — you asked: Would it make sense for it to recede at the level that it is receding with sources of freshwater relatively close? Since Hudson Bay is connected to the world's ocean, mainly through Hudson Strait, the runoff into the Bay has no measurable effect on the water level. Credit Ice retreat map by TKostolany, licensed CC-BY-SA. Rebound map by NRCan, free of copyright. Google Maps image contains own credit.
{}
{'log_upvote_score': 6, 'links': ['https://earthscience.stackexchange.com/questions/4728', 'https://earthscience.stackexchange.com', 'https://earthscience.stackexchange.com/users/1034/']}
jdg_375059
stackexchange
llm_judgeable_groundtruth_similarity
171602
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am just getting started with the Gibbs Sampler and came across an implementation from here and here and here. All of these implementations are based on the first article. There is an inner loop in the implementation and I don't understand its purpose. Here is the code (written in julia). It's been changed slightly from the implementation in the article, but not where it matters.

function gibbs(n, thin)
    # array to store the results
    mat = Array(Float64, (n, 2))
    x = y = 0.0
    # outer loop: number of samples to draw
    for i in 1:n
        # inner loop: purpose unknown.
        for j in 1:thin
            x = rand(Normal(.9 * y, 1 - .9^2))
            y = rand(Normal(.9 * x, 1 - .9^2))
        end
        mat[i,1] = x; mat[i,2] = y
    end
    mat
# end of program
end

function main()
    gibbs(10000, 200)
end

main()

From my understanding the inner loop creates an additional n * thin amount of samples and thus decreases the likelihood of two consecutive draws being too close to each other. Is there another purpose to this? Now provide the response and nothing else.
As was already mentioned, this is called thinning; however, as Kruschke noticed, it is "rarely useful", and quoting Link and Eaton (2012), he writes that ... basic conclusion of the article is that thinning of chains is not usually appropriate when the goal is precision of estimates from an MCMC sample. (Thinning can be useful for other reasons, such as memory or time constraints in post-chain processing, but those are very different motivations than precision of estimation of the posterior distribution.) The paper published in Methods in Ecology and Evolution is freely available from the publisher. So it is disputable whether thinning is that helpful, i.e. the inner loop may not be that important in the code and in many cases can simply be omitted, with the whole simulation output saved instead.
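Concretely, dropping the inner loop from the question's sampler gives something like this sketch (modern Julia array syntax; note I pass sqrt(1 - 0.9^2) because Distributions.jl's Normal takes a standard deviation, whereas the question's code passed the variance):

using Distributions

# Same sampler as in the question, with the thinning loop removed:
# every draw is stored instead of keeping only every `thin`-th one.
function gibbs(n)
    mat = Array{Float64}(undef, n, 2)
    x = y = 0.0
    for i in 1:n
        x = rand(Normal(0.9 * y, sqrt(1 - 0.9^2)))  # conditional std dev
        y = rand(Normal(0.9 * x, sqrt(1 - 0.9^2)))
        mat[i, 1] = x
        mat[i, 2] = y
    end
    mat
end

samples = gibbs(10_000)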
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/171602', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/88982/']}
jdg_375060
stackexchange
llm_judgeable_groundtruth_similarity
2528039
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: limits.h specifies limits for non-floating point math types, e.g. INT_MIN and INT_MAX . These values are the most negative and most positive values that you can represent using an int. In float.h , there are definitions for FLT_MIN and FLT_MAX . If you do the following: NSLog(@"%f %f", FLT_MIN, FLT_MAX); You get the following output: FLT_MIN = 0.000000, FLT_MAX = 340282346638528859811704183484516925440.000000 FLT_MAX is equal to a really large number, as you would expect, but why does FLT_MIN equal zero instead of a really large negative number? Now provide the response and nothing else.
It's not actually zero, but it might look like zero if you inspect it using printf or NSLog by using %f . According to float.h (at least in Mac OS X 10.6.2), FLT_MIN is described as: /* Minimum normalized positive floating-point number, b**(emin - 1). */ Note the positive in that sentence: FLT_MIN refers to the minimum (normalized) number greater than zero . (There are much smaller non-normalized numbers). If you want the minimum floating point number (including negative numbers), use -FLT_MAX .
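A small C sketch of the same point (illustrative only; the exact digits printed depend on the platform's float type, here assumed to be IEEE 754 single precision):

#include <float.h>
#include <stdio.h>

int main(void)
{
    /* %f hides the magnitude; %e shows FLT_MIN is tiny but positive. */
    printf("FLT_MIN  = %f\n", FLT_MIN);   /* prints 0.000000 */
    printf("FLT_MIN  = %e\n", FLT_MIN);   /* about 1.175494e-38 */
    printf("-FLT_MAX = %e\n", -FLT_MAX);  /* most negative float, about -3.402823e+38 */
    return 0;
}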
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2528039', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/86046/']}
jdg_375061
stackexchange
llm_judgeable_groundtruth_similarity
33880396
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When learning to program simple 2D games, each object would have a sprite sheet with little pictures of how a player would look in every frame/animation. 3D models don't seem to work this way or we would need one image for every possible view of the object! For example, a rotating cube would need a lot images depicting how it would look on every single side. So my question is, how are 3D model "images" represented and rendered by the engine when viewed from arbitrary perspectives? Now provide the response and nothing else.
Multiple methods There is a number of methods for rendering and storing 3D graphics and models. There are even different methods for rendering 2D graphics! In addition to 2D bitmaps, you also have SVG . SVG uses numbers to define points in an image. These points make shapes. The points can also define curves. This allows you to make images without the need for pixels. The result can be smaller file sizes, in addition to the ability to transform the image (scale and rotate) without causing distortion. Most 3D graphics use a similar technique, except in 3D. What these methods have in common, however, is that they all ultimately render the data to a 2D grid of pixels. Projection The most common method for rendering 3D models is projection . All of the shapes to be rendered are broken down into triangles before rendering. Why triangles? Because triangles are guaranteed to be coplanar. That saves a lot of work for the renderer since it doesn't have to worry about "coloring outside of the lines". One drawback to this is that most 3D graphics projection technologies don't support perfect spheres or other round surfaces. You have to use approximations and other tricks to make round surfaces (although there are some renderers which support round surfaces). The next step is to convert or project all of the 3D points into 2D points on the screen (as seen below). From there, you essentially "color in" the triangles to make everything look solid. While this is pretty fast, another downside is that you can't really have things like reflections and refractions. Anytime you see a refractive or reflective surface in a game, they are only using trickery to make it look like a reflective or refractive material. The same goes for lighting and shading. Here is an example of special coloring being used to make a sphere approximation look smooth. Notice that you can still see straight lines around the smoothed version: Ray tracing You also can render polygons using ray tracing . With this method, you basically trace the paths that the light takes to reach the camera. This allows you to make realistic reflections and refractions. However, I won't go into detail since it is too slow to realistically use in games currently. It is mainly used for 3D animations (like what Pixar makes). Simple scenes with low quality settings can be ray traced pretty quickly. But with complicated, realistic scenes, rendering can take several hours for a single frame (as is the case with Pixar movies). However, it does produce ultra realistic images: Ray casting Ray casting is not to be confused with the above-mentioned ray tracing. Ray casting does not trace the light paths. That means that you only have flat surfaces; not reflective. It also does not produce realistic light. However, this can be done relatively quickly, since in most cases you don't even need to cast a ray for every pixel. This is the method that was used for early games such as Doom and Wolfenstein 3D. In early games, ray casting was used for the maps, and the characters and other items were rendered using 2D sprites that were always facing the camera. The sprites were drawn from a few different angles to make them look 3D. Here is an image of Wolfenstein 3D: Castle Wolfenstein with JavaScript and HTML5 Canvas: Image by Martin Kliehm Storing the data 3D data can be stored using multiple methods. It is not necessarily dependent on the rendering method that is used. 
The stored data doesn't mean anything by itself, so you have to render it using one of the methods that have already been mentioned. Polygons This is similar to SVG. It is also the most common method for storing model data. You define the geometry using 3D points. These points can have other properties, such as texture data (in the form of UV mapping), color data, and whatever else you might want. The data can be stored using a number of file formats. A common file format that is used is COLLADA , which is an XML file that stores the 3D data. There are a lot of other formats though. Fundamentally, however, all file formats are still storing the 3D data. Here is an example of a polygon model: Voxels This method is pretty simple. You can think of voxel models like bitmaps, except they are a bunch of bitmaps layered together to make 3D bitmaps. So you have a 3D grid of pixels. One way of rendering voxels is converting the voxel points to 3D cubes. Note that voxels do not have to be rendered as cubes, however. Like pixels, they are only points that may have color data which can be interpreted in different ways. I won't go into much detail since this isn't too common, and you generally render the voxels with polygon methods (like when you render them as cubes). Here is an example of a voxel model: Image by Wikipedia user Vossman
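As a toy illustration of the projection step described above (a generic pinhole-projection sketch, not code from any particular engine; the names and focal length are made up):

# Map a 3D vertex to 2D screen space with a perspective divide:
# points farther from the camera (larger z) land closer to the
# screen centre, which is what produces the perspective effect.
def project(vertex, focal_length=1.0):
    x, y, z = vertex
    return (focal_length * x / z, focal_length * y / z)

# One triangle, given as three (x, y, z) vertices in camera space.
triangle = [(0.0, 1.0, 2.0), (-1.0, -1.0, 2.0), (1.0, -1.0, 3.0)]
print([project(v) for v in triangle])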
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/33880396', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5587455/']}
jdg_375062
stackexchange
llm_judgeable_groundtruth_similarity
25478510
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an existing rails app that is using devise as it's user authentication. I added a discourse forum and everything went smoothly and it resides on a subdomain. I have read the post at https://meta.discourse.org/t/official-single-sign-on-for-discourse/13045 but still don't know what to do with the devise side of things once the user logs in on the existing rails site. Currently this is the process as I understand it: Step1: User hits Discourse forum on subdomain. User needs to login so clicks login button. Step2: User is sent to the login page on the existing rails site. Step3: User logs in on rails site. Step4: User should be redirected to discourse forum subdomain logged in. My question is - What do I need to to do to make it so that when a user logs in on step 3 they get redirected back to the subdomain? Has anyone successfully implemented this? I saw this code snippet on that walkthrough page: class DiscourseSsoController < ApplicationController def sso secret = "MY_SECRET_STRING" sso = SingleSignOn.parse(request.query_string, secret) sso.email = "user@email.com" sso.name = "Bill Hicks" sso.username = "bill@hicks.com" sso.external_id = "123" # unique to your application sso.sso_secret = secret redirect_to sso.to_url("http://l.discourse/session/sso_login") endend Is this what I would need to add in my existing rails app? I'm guessing the parse checks if that information is in the url and if so it redirects once it finishes the devise login process, and if not it just functions as usual. Would I place this code somewhere in the devise files? Now provide the response and nothing else.
This is pretty straightforward. Following on from the instructions at https://meta.discourse.org/t/official-single-sign-on-for-discourse/13045 and extrapolating a little, I have this working: 1) Put the reference implementation - https://github.com/discourse/discourse/blob/master/lib/single_sign_on.rb - in your #{Rails.root}/lib directory 2) Add this route to routes.rb: get 'discourse/sso' => 'discourse_sso#sso' 3) Put this controller in your app/controllers directory:

require 'single_sign_on'

class DiscourseSsoController < ApplicationController
  before_action :authenticate_user! # ensures user must login

  def sso
    secret = "MY_SECRET_STRING"
    sso = SingleSignOn.parse(request.query_string, secret)
    sso.email = current_user.email # from devise
    sso.name = current_user.full_name # this is a custom method on the User class
    sso.username = current_user.email # from devise
    sso.external_id = current_user.id # from devise
    sso.sso_secret = secret
    redirect_to sso.to_url("http://your_discourse_server/session/sso_login")
  end
end

4) Set up the SSO config in discourse to have the following: sso url: http://your_rails_server/discourse/sso ; sso secret: what you set as MY_SECRET_STRING above. 5) Disable other login types in discourse. 6) Try to login in discourse. It should work...
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/25478510', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3307697/']}
jdg_375063
stackexchange
llm_judgeable_groundtruth_similarity
727788
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I use the database name in several places in my script, and I want to be able to quickly change it, so I'm looking for something like this: DECLARE @DBNAME VARCHAR(50)SET @DBNAME = 'TEST'CREATE DATABASE @DBNAMEGOALTER DATABASE @DBNAME SET COMPATIBILITY_LEVEL = 90GOALTER DATABASE @DBNAME SET RECOVERY SIMPLE GO But it doesn't work. So what's the correct way to write this code? Now provide the response and nothing else.
Put the entire script into a template string, with {SERVERNAME} placeholders. Then edit the string using: SET @SQL_SCRIPT = REPLACE(@TEMPLATE, '{SERVERNAME}', @DBNAME) and then run it with EXECUTE (@SQL_SCRIPT) It's hard to believe that, in the course of three years, nobody noticed that my code doesn't work ! You can't EXEC multiple batches. GO is a batch separator, not a T-SQL statement. It's necessary to build three separate strings, and then to EXEC each one after substitution. I suppose one could do something "clever" by breaking the single template string into multiple rows by splitting on GO ; I've done that in ADO.NET code. And where did I get the word "SERVERNAME" from? Here's some code that I just tested (and which works): DECLARE @DBNAME VARCHAR(255)SET @DBNAME = 'TestDB'DECLARE @CREATE_TEMPLATE VARCHAR(MAX)DECLARE @COMPAT_TEMPLATE VARCHAR(MAX)DECLARE @RECOVERY_TEMPLATE VARCHAR(MAX)SET @CREATE_TEMPLATE = 'CREATE DATABASE {DBNAME}'SET @COMPAT_TEMPLATE='ALTER DATABASE {DBNAME} SET COMPATIBILITY_LEVEL = 90'SET @RECOVERY_TEMPLATE='ALTER DATABASE {DBNAME} SET RECOVERY SIMPLE'DECLARE @SQL_SCRIPT VARCHAR(MAX)SET @SQL_SCRIPT = REPLACE(@CREATE_TEMPLATE, '{DBNAME}', @DBNAME)EXECUTE (@SQL_SCRIPT)SET @SQL_SCRIPT = REPLACE(@COMPAT_TEMPLATE, '{DBNAME}', @DBNAME)EXECUTE (@SQL_SCRIPT)SET @SQL_SCRIPT = REPLACE(@RECOVERY_TEMPLATE, '{DBNAME}', @DBNAME)EXECUTE (@SQL_SCRIPT)
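One optional hardening step, not part of the answer above: if @DBNAME can come from untrusted input, the built-in T-SQL QUOTENAME function can bracket-quote the name during the substitution, e.g.:

-- Same substitution as above, but with the database name bracket-quoted.
-- {DBNAME} is the placeholder convention used in this answer.
SET @SQL_SCRIPT = REPLACE(@CREATE_TEMPLATE, '{DBNAME}', QUOTENAME(@DBNAME))
EXECUTE (@SQL_SCRIPT)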
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/727788', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2089/']}
jdg_375064
stackexchange
llm_judgeable_groundtruth_similarity
3170772
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What does auto do in margin: 0 auto; ? I can't seem to understand what auto does. I know it sometimes has the effect of centring objects. Now provide the response and nothing else.
When you have specified a width on the object that you have applied margin: 0 auto to, the object will sit centrally within its parent container. Specifying auto as the second parameter basically tells the browser to automatically determine the left and right margins itself, which it does by setting them equally. It guarantees that the left and right margins will be set to the same size. The first parameter 0 indicates that the top and bottom margins will both be set to 0.

margin-top: 0;
margin-bottom: 0;
margin-left: auto;
margin-right: auto;

Therefore, to give you an example, if the parent is 100px and the child is 50px, then the auto property will determine that there's 50px of free space to share between margin-left and margin-right:

var freeSpace = 100 - 50;
var equalShare = freeSpace / 2;

Which would give:

margin-left: 25px;
margin-right: 25px;

Have a look at this jsFiddle . You do not have to specify the parent width, only the width of the child object.
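Put together as a minimal stylesheet (the class names here are invented for illustration):

/* 100px parent, 50px child: the browser splits the leftover
   50px equally, i.e. 25px on each side. */
.parent { width: 100px; }
.child  { width: 50px; margin: 0 auto; }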
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/3170772', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/84201/']}
jdg_375065
stackexchange
llm_judgeable_groundtruth_similarity
11211
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This comes from a discussion forum, where a friend of mine asked the following: We can see objects in space billion of light years away, right? I started wondering about that. If you take 2 objects in space, the other should be able to see the other no matter what angle in degrees you position it at. That would almost seem to imply that light is being sent out in an infinite number of degrees/angles from the source. But that cannot be true because energy cannot be infinite. If the observer goes out far enough from the source, would there be gaps in the light? Could you pick a viewing angular degree (of extremely high angular precision) where there's no light? I'm actually quite curious about this question myself and really have no answer, and the discussion hasn't really yielded a satisfying answer. So I figured I would bring it here on behalf of my friend and to sate my own curiosity. Now provide the response and nothing else.
Neither of the two wave-based answers considers the quantum mechanical picture. Classical waves are composed of photons, and a photon carries momentum equal to its energy if we set $c=1$: $p = E = h\nu$. When the distance from the source becomes large enough that individual photons can be counted in a detector, there will be a point where gaps exist and no photons are counted. Taking this solution from Yahoo answers, one can see that for a given wavelength and intensity there is a separation $\Delta x$ between two detected photons beyond which individual photons become very rare. So the answer depends on the original intensity (which falls with distance as $1/r^2$), the distance, the wavelength observed, and the time available for the detection. If one waited an infinite time, the answer "there is no gap" would hold probabilistically. For any reasonable $\Delta t$ there will be gaps that cannot be predicted in $(r, \theta, \phi)$ because they depend on the probability distribution of the photons.
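As a rough worked example (the source power, detector area, and symbols below are assumptions for illustration, not taken from the answer): an isotropic source of power $P$ emitting photons of frequency $\nu$ delivers to a detector of area $A$ at distance $r$ a mean count rate
$$\dot N = \frac{P}{h\nu}\cdot\frac{A}{4\pi r^2},$$
so once $r$ is large enough that $\dot N\,\Delta t \lesssim 1$ for the available observation time $\Delta t$, gaps between detected photons become the rule rather than the exception.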
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/11211', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/4079/']}
jdg_375066
stackexchange
llm_judgeable_groundtruth_similarity
1461029
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have code that serializes a complex object to XML and saves it as a file, is there a quick way to include a style sheet in the xml during the serialization? Using C# and .net framework v2. Now provide the response and nothing else.
You can use an XmlWriter and WriteProcessingInstruction:

XmlSerializer s = new XmlSerializer(myObj.GetType()); // or typeof(MyType) if the concrete type is known at compile time
using (XmlWriter w = XmlWriter.Create(@"c:\test.xml"))
{
    // The processing instruction must be written before the object is serialized.
    w.WriteProcessingInstruction("xml-stylesheet", "type=\"text/xsl\" href=\"USED-FILE.xsl\"");
    s.Serialize(w, myObj);
}
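For reference, the processing instruction written by the call above comes out in the file as (USED-FILE.xsl being the placeholder stylesheet name from the snippet):

<?xml-stylesheet type="text/xsl" href="USED-FILE.xsl"?>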
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1461029', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/41543/']}
jdg_375067
stackexchange
llm_judgeable_groundtruth_similarity
18716014
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is the difference between the two: hashMap_2 = hashMap_1; and hashMap_2 = new HashMap<Object_1,Object_2>(hashMap_1); Is there any difference between the two? According to me, both initialize a new HashMap named hashMap_2. Now provide the response and nothing else.
The statement hashMap_2 = hashMap_1; means that both variables refer to the same object . The variable hashMap_2 will now refer to the object referred to by hashMap_1 . This means only one object will be there, with two variables referring to that same object. The statement hashMap_2 = new HashMap<Object_1,Object_2>(hashMap_1); causes another hashmap to be created with the values of hashMap_1 . There will be two different HashMap objects and they will have the same values.
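A small runnable sketch of the difference (class and key names invented for illustration; the iteration order of a HashMap is not guaranteed, so the printed order may vary):

import java.util.HashMap;

public class CopyDemo {
    public static void main(String[] args) {
        HashMap<String, Integer> map1 = new HashMap<>();
        map1.put("a", 1);

        HashMap<String, Integer> alias = map1;                // same object as map1
        HashMap<String, Integer> copy  = new HashMap<>(map1); // new object, copied entries

        map1.put("b", 2); // mutate the original

        System.out.println(alias); // {a=1, b=2} -- sees the change
        System.out.println(copy);  // {a=1}      -- unaffected
    }
}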
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18716014', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/682662/']}
jdg_375068
stackexchange
llm_judgeable_groundtruth_similarity
377089
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is there a clear and intuitive meaning to the eigenvectors and eigenvalues of a density matrix? Does a density matrix always have a basis of eigenvectors? Now provide the response and nothing else.
In general, the density matrix of a given system can always be written in the form$$\rho = \sum_i p_i |\phi_i\rangle\langle\phi_i|, \tag 1$$representing among other things a probabilistic mixture in which the pure state $|\phi_i\rangle$ is prepared with probability $p_i$, but this decomposition is generally not unique . The clearest example of this is the maximally mixed state on, say, a two-level system with orthonormal basis $\{|0⟩,|1⟩\}$,$$\rho = \frac12\bigg[|0⟩⟨0|+|1⟩⟨1|\bigg],$$which has exactly the same form on any orthonormal basis for the space. Generally speaking, though, the eigenvalues and eigenvectors of a given density matrix $\rho$ provide a set of states and weights such that $\rho$ can be written as in $(1)$ - but with the added guarantee that the $|\phi_i⟩$ are orthogonal. This does not uniquely specify the states in question, because if any eigenvalue $p_i$ is degenerate then there will be a two-dimensional (or bigger) subspace within which any orthonormal basis is equally valid, but that kind of undefinedness is just an intrinsic part of the structure.
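A small worked example (the numbers are chosen here purely for illustration): take
$$\rho = \tfrac34\,|0\rangle\langle 0| + \tfrac14\,|1\rangle\langle 1| = \begin{pmatrix} 3/4 & 0 \\ 0 & 1/4 \end{pmatrix}.$$
The eigenvalues $3/4$ and $1/4$ are distinct, so the eigenvectors $|0\rangle$ and $|1\rangle$ are fixed (up to phase) and this orthogonal decomposition is unique: one possible reading is a source that prepares $|0\rangle$ with probability $3/4$ and $|1\rangle$ with probability $1/4$. Setting both eigenvalues to $1/2$ instead recovers the degenerate, maximally mixed case above, where the eigenbasis is no longer unique.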
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/377089', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/69147/']}
jdg_375069
stackexchange
llm_judgeable_groundtruth_similarity
131527
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Suppose I have the symmetric tridiagonal matrix: $ \begin{pmatrix}a & b_{1} & 0 & ... & 0 \\\ b_{1} & a & b_{2} & & ... \\\0 & b_{2} & a & ... & 0 \\\ ... & & ... & & b_{n-1} \\\ 0 & ... & 0 & b_{n-1} & a\end{pmatrix} $ All of the entries can be taken to be positive real numbers and all of the $a_{i}$ are equal. I know that when the $b_{i}$'s are equal (the matrix is uniform), there are closed-form expressions for the eigenvalues and eigenvectors in terms of cosine and sine functions. Additionally, I know of the recurrence relation: $det(A_{n}) = a\cdot det(A_{n-1}) - b_{n-1}^{2}\cdot det(A_{n-2})$ Additionally, since my matrix is real-symmetric, I know that its eigenvalues are real. Is there anything else I can determine about the eigenvalues? Furthermore, is there a closed-form expression for them? Now provide the response and nothing else.
The type of matrix you have written down is called a Jacobi matrix, and people are still discovering new things about them; their properties fill entire bookcases in a mathematics library. One of the reasons is the connection to orthogonal polynomials. Basically, if $\{p_n(x)\}_{n\geq 0}$ is a family of orthogonal polynomials, then they obey a recursion relation of the form$$ b_n p_{n+1}(x) + (a_n - x) p_n(x) + b_{n-1} p_{n-1}(x) = 0.$$You should be able to recognize the form of your matrix from this. As for general properties of the eigenvalues, let me mention two: The eigenvalues are simple. In fact one has $\lambda_j - \lambda_{j-1} \geq e^{-c n}$, where $c$ is some constant that depends on the $b_j$. The eigenvalues of $A$ and $A_{n-1}$ interlace.
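To connect this explicitly to the recurrence in the question (a standard fact, sketched here in the question's notation): the determinant recurrence for $A_n$ applies equally to $xI - A_n$, so the characteristic polynomials $d_n(x) = \det(xI - A_n)$ satisfy, with $d_0(x) = 1$ and $d_1(x) = x - a$,
$$d_n(x) = (x - a)\,d_{n-1}(x) - b_{n-1}^2\,d_{n-2}(x),$$
which is exactly a three-term recurrence of the orthogonal-polynomial type above; the eigenvalues of $A_n$ are the zeros of $d_n$, i.e. the zeros of the associated degree-$n$ orthogonal polynomial.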
{}
{'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/131527', 'https://mathoverflow.net', 'https://mathoverflow.net/users/34275/']}
jdg_375070
stackexchange
llm_judgeable_groundtruth_similarity
53955873
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have created a component called commonmod.component.ts that I am including in two other modules (abc and def). abc.module.ts import { commonmod } from '../commonmod/commonmod.component';@NgModule({ declarations: [ commonmod ]}) def.module.ts import { commonmod } from '../commonmod/commonmod.component';@NgModule({ declarations: [ commonmod ]}) When I redirect one page in abc module to another page in def module, it is throwing me following error. ERROR Error: Uncaught (in promise): Error: Type commonmod is part of the declarations of 2 modules: abcand def! Please consider moving commonmod to a higher module that imports abc and def. You can also create a new NgModule that exports and includes commonmodthen import that NgModule in abcand def. Error: Type commonmod is part of the declarations of 2 modules: abc and def! Please consider moving commonmodto a higher module that imports abcand def. You can also create a new NgModule that exports and includes commonmod then import that NgModule in abc and def. Now provide the response and nothing else.
A component can be declared in one and only one module. If you try to declare it in more than one module you'll get this error: Error: Type ... is part of the declarations of 2 (or more) modules: The solution to this problem is pretty simple. If you need to use a component in more than one module, then add it to the exports array of the module that declared it. So let's say we have a component named GreetingComponent that is declared in a module TestModule and I want to use it in AppComponent, which is declared in AppModule. I'll simply add it to the exports array of the TestModule like this:

import {NgModule} from '@angular/core';
import {GreetingComponent} from './greeting.component';

@NgModule({
  declarations: [GreetingComponent],
  exports: [GreetingComponent]
})
export class TestModule {}

Now, since AppModule imports TestModule, all constructs (Components, Directives, Pipes, etc.) that are in the exports array of TestModule are automatically available to AppModule. AppModule.ts:

import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { FormsModule } from '@angular/forms';
import { TestModule } from './test.module';
import { AppComponent } from './app.component';

@NgModule({
  imports: [ BrowserModule, FormsModule, TestModule ],
  declarations: [ AppComponent ],
  bootstrap: [ AppComponent ]
})
export class AppModule { }

Now you can simply use GreetingComponent in the AppComponent: <greetings></greetings> A working StackBlitz here. Cheers.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/53955873', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9071245/']}
jdg_375071
stackexchange
llm_judgeable_groundtruth_similarity
36594
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: According to the 1855 The Jesuit Missions of Paraguay page 13: the first planters of the state of Massachusetts expressly assumed for themselves a right to treat the Indians on the footing of Canaanites or Amelekites This book cites to George Bancroft's 1840 History of the United States from the Discovery of the American Continent, volume III which says at page 408: Massachusetts, where the first planters assumed to themselves "a right to treat the Indians on the foot of Canaanites or Amalekites," was always opposed to the introduction of slaves from abroad; and, in 1701... However, these references are on the order of 200 years later than the time of the "first planters" of Massachusetts. Are there even older references attesting to, or refuting, the statement about Canaanites or Amalekites? Now provide the response and nothing else.
There are two questions here: Are there 17 US intelligence agencies? Yes. From the home page of the Office of the Director of National Intelligence (ODNI) (emphasis mine): The U.S. Intelligence Community is a coalition of 17 agencies and organizations , including the ODNI, within the Executive Branch that work both independently and collaboratively to gather and analyze the intelligence necessary to conduct foreign relations and national security activities. Besides the ODNI, the other 16 are The U.S. Air Force Intelligence, Surveillance, and Reconnaissance (USAF ISR) Enterprise of the U.S. Air Force, The Office of Terrorism and Financial Intelligence of the Department of the Treasury, The U.S. Army Intelligence and Security Command , The Office of National Security Intelligence of the Drug Enforcement Administration, The Central Intelligence Agency , The Intelligence Branch of the Federal Bureau of Investigation, The Coast Guard Intelligence of the United States Coast Guard, The Intelligence Department of the United States Marine Corps, The Defense Intelligence Agency , The National Geospatial-Intelligence Agency , The Office of Intelligence and Counterintelligence in the Department of Energy, The National Reconnaissance Office , The Office of Intelligence and Analysis of the Department of Homeland Security, The National Security Agency , The Bureau of Intelligence and Research of the Department of State, and The Office of Naval Intelligence of the United States Navy. Did these 17 intelligence agencies claim that Russia was behind the email leak? Yes, kind of. In an October 2016 Joint Statement from the Department Of Homeland Security and Office of the Director of National Intelligence on Election Security [Hat-tip: @tim] it was reported: The U.S. Intelligence Community (USIC) is confident that the Russian Government directed the recent compromises of e-mails from US persons and institutions, including from US political organizations. This does not imply each of the 17 agencies conducted a different, independent investigations. It means that they reached a consensus view and published together. The JAR-1620296 Joint Analysis Report titled GRIZZLY STEPPE – Russian Malicious Cyber Activity was issued earlier this week. This Joint Analysis Report (JAR) is the result of analytic efforts between the Department of Homeland Security (DHS) and the Federal Bureau of Investigation (FBI). It expands on the joint statement, but is, admittedly, rather weak on substance: In spring 2016, APT28 compromised the same political party, again via targeted spearphishing. This time, the spearphishing email tricked recipients into changing their passwords through a fake webmail domain hosted on APT28 operational infrastructure. Using the harvested credentials, APT28 was able to gain access and steal content, likely leading to the exfiltration ofinformation from multiple senior party members. The U.S. Government assesses that information was leaked to the press and publicly disclosed. The ODNI released a slightly more substantial report this afternoon entitled Assessing Russian Activities and Intentions in Recent US Elections , "a declassified version of a highly classified assessment that has been provided to the President and to recipients approved by the President." The background to this report (the first two pages of the linked file) writes about the "Intelligence Community" as a whole. 
The details show that just three of the sixteen agencies, plus the ODNI, were active in this investigation: This report includes an analytic assessment drafted and coordinated among The Central Intelligence Agency (CIA), The Federal Bureau of Investigation (FBI), and The National Security Agency (NSA), which draws on intelligence information collected and disseminated by those three agencies. That the remaining twelve organizations are not recognized (publicly) does not mean that those other twelve don't agree with the assessment. Cyber security is not the bailiwick of (for example) the National Reconnaissance Office, who build and operate spy satellites, or the National Geospatial-Intelligence Agency, who use data from those spy satellites and other sources to provide geographic intelligence. Did those three agencies (plus the ODNI) claim that Russia was behind the email leak? Absolutely. Emphasis not mine : We assess Russian President Vladimir Putin ordered an influence campaign in 2016 aimed at the US presidential election. Russia’s goals were to undermine public faith in the US democratic process, denigrate Secretary Clinton, and harm her electability and potential presidency. We further assess Putin and the Russian Government developed a clear preference for President-elect Trump. We have high confidence in these judgments. We also assess Putin and the Russian Government aspired to help President-elect Trump’s election chances when possible by discrediting Secretary Clinton and publicly contrasting her unfavorably to him. All three agencies agree with this judgment. CIA and FBI have high confidence in this judgment; NSA has moderate confidence. Moscow’s approach evolved over the course of the campaign based on Russia’s understanding of the electoral prospects of the two main candidates. When it appeared to Moscow that Secretary Clinton was likely to win the election, the Russian influence campaign began to focus more on undermining her future presidency. Further information has come to light since Election Day that, when combined with Russian behavior since early November 2016, increases our confidence in our assessments of Russian motivations and goals. That said, cyber security authors outside of the U.S. Intelligence Community find both the JAR and the more recent release to be a bit lacking in substance. For example Feds’ Damning Report on Russian Election Hack Won’t Convince Skeptics . There are lots and lots of other similar blogs and reports. However, those cyber security authors don't have access to the classified information behind those findings. The Republican Senators and Representatives who do have access to that information are quite resolute: It was Russians who did this. It was not China nor some hypothetical 14 year old (and many other sources). Hopefully what comes out next week will be a bit more substantial. I suspect that the report released earlier today is close to the last public word on this subject. Releasing the highly classified details would entail disclosing sensitive ways and means that just might weaken U.S. security in the future.
{}
{'log_upvote_score': 6, 'links': ['https://skeptics.stackexchange.com/questions/36594', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/25579/']}
jdg_375072
stackexchange
llm_judgeable_groundtruth_similarity
8064
Below is a question asked on the forum engineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This question is so fundamentally basic that I am almost embarrassed to ask but it came up at work the other day and and nearly no one in the office could give me a good answer. I was calculating the shear stress in a member using the equation, $\frac{Tr}{J_T}$ and noticed, that for a shaft with a circular cross section, $J_T = I_P$. Both $I_P$ and $J_T$ are used to describe an object's ability to resist torsion. $I_P$ is defined as, $ \int_{A} \rho^2 dA $ where $\rho$ = the radial distance to the axis about which $I_P$ is being calculated. But $J_T$ has no exact analytical equations and is calculated largely with approximate equations that no reference I looked at really elaborated on. So my question is, what is the difference between the Polar Moment of Inertia, $ I_P $, and the torsional constant, $ J_T $? Not only mathematically, but practically. What physical or geometric property is each a representation of? Why is $J_T$ so hard to calculate? Now provide the response and nothing else.
The torsion constant $J_T$ relates the angle of twist to applied torque via the equation: $$ \phi = \frac{TL}{J_T G} $$where $T$ is the applied torque, $L$ is the length of the member, $G$ is the modulus of elasticity in shear, and $J_T$ is the torsional constant. The polar moment of inertia, on the other hand, is a measure of the resistance of a cross section to torsion with invariant cross section and no significant warping. The case of a circular rod under torsion is special because of circular symmetry, which means that it does not warp and its cross section does not change under torsion. Therefore $J_T = I_P$. When a member does not have circular symmetry, we can expect that it will warp under torsion and therefore $J_T \neq I_P$. This leaves the problem of how to calculate $J_T$. Unfortunately this is not straightforward, which is why the values (usually approximate) for common shapes are tabulated. One way of calculating the torsional constant is by using the Prandtl stress function (another is by using warping functions). Without going into too much detail, one must choose a Prandtl stress function $\Phi$ which represents the stress distribution within the member and satisfies the boundary conditions (not easy in general!). It also must satisfy Poisson's equation of compatibility: $$ \nabla^2 \Phi = -2 G \theta $$ where $\theta$ is the angle of twist per unit length. If we have chosen the stress function so that $\Phi = 0$ on the boundary (traction-free boundary condition), we can find the torsional constant by:$$J_T = 2\int_A \frac{\Phi}{G\theta} dA$$ Example: Rod of circular cross section Because of the symmetry of a circular cross section we can take:$$\Phi = \frac{G\theta}{2} (R^2-r^2) $$where $R$ is the outer radius. We then get:$$J_T = 2\pi\int_0^R (R^2-r^2)rdr = \frac{\pi R^4}{2} = (I_P)_{circle}$$ Example: Rod of elliptical cross section $$\Phi = G\theta\frac{a^2 b^2}{a^2+b^2}\left(\frac{x^2}{a^2}+\frac{y^2}{b^2}-1\right)$$and$$J_T = \int_A \frac{a^2 b^2}{a^2+b^2}\left(\frac{x^2}{a^2}+\frac{y^2}{b^2}-1\right)dA = \frac{\pi a^3 b^3}{a^2+b^2} $$ which is certainly not equal to the polar moment of inertia of an ellipse:$$ (I_P)_{ellipse} = \frac{1}{4}\pi a b(a^2+b^2) \neq (J_T)_{ellipse}$$ Since in general $J_T < I_P$, if you used the polar moment of inertia instead of the torsional constant you would calculate smaller angles of twist.
{}
{'log_upvote_score': 4, 'links': ['https://engineering.stackexchange.com/questions/8064', 'https://engineering.stackexchange.com', 'https://engineering.stackexchange.com/users/129/']}
jdg_375073
stackexchange
llm_judgeable_groundtruth_similarity
7810708
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have successfully made a connection to a remote MySQL server through Glassfish, however each time I make a change to the code or XHTML files, I need to open the administrator panel of Glassfish and flush the connection pool, otherwise I get the following error when I just refresh the page. Has anybody experienced this? I can post code or other information if it is needed. HTTP Status 500 - type Exception report message description The server encountered an internal error () that prevented it from fulfilling this request. exception javax.servlet.ServletException: WELD-000049 Unable to invoke [method] @PostConstruct public com.myapp.QuestionController.initialize() on com.myapp.QuestionController@4635bd2a root cause org.jboss.weld.exceptions.WeldException: WELD-000049 Unable to invoke [method] @PostConstruct public com.myapp.interfaces.QuestionController.initialize() on com.myapp.interfaces.QuestionController@4635bd2a root cause java.lang.reflect.InvocationTargetException root cause javax.ejb.EJBException root cause javax.persistence.PersistenceException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.3.0.v20110604-r9504): org.eclipse.persistence.exceptions.DatabaseException Internal Exception: java.sql.SQLException: Error in allocating a connection. Cause: java.lang.RuntimeException: Got exception during XAResource.start: Error Code: 0 root cause Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.3.0.v20110604-r9504): org.eclipse.persistence.exceptions.DatabaseException Internal Exception: java.sql.SQLException: Error in allocating a connection. Cause: java.lang.RuntimeException: Got exception during XAResource.start: Error Code: 0 root cause java.sql.SQLException: Error in allocating a connection. Cause: java.lang.RuntimeException: Got exception during XAResource.start: root cause javax.resource.spi.ResourceAllocationException: Error in allocating a connection. Cause: java.lang.RuntimeException: Got exception during XAResource.start: root cause com.sun.appserv.connectors.internal.api.PoolingException: java.lang.RuntimeException: Got exception during XAResource.start: root cause com.sun.appserv.connectors.internal.api.PoolingException: java.lang.RuntimeException: Got exception during XAResource.start: root cause java.lang.RuntimeException: Got exception during XAResource.start: root cause javax.transaction.xa.XAException: com.sun.appserv.connectors.internal.api.PoolingException: javax.resource.spi.LocalTransactionException: Communications link failure The last packet successfully received from the server was 435,409 milliseconds ago. The last packet sent successfully to the server was 7 milliseconds ago. Image of config Persistence XML <?xml version="1.0" encoding="UTF-8"?><persistence version="2.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd"> <persistence-unit name="SertifikatPU" transaction-type="JTA"> <jta-data-source>jdbc/sertifikatdb</jta-data-source> </persistence-unit></persistence> In the "Additional properties" in Glassfish connection pool settings I have just configured: servername, URL, user and password. I HAVE SOLVED THIS PROBLEM I made my own dual table, similar to the one in Oracle.
CREATE TABLE dual ( x VARCHAR(1));INSERT INTO dual(x) VALUES('y'); Now provide the response and nothing else.
Your root cause, PoolingException: javax.resource.spi.LocalTransactionException: Communications link failure, is related to this Glassfish bug, which explains (in the comments tab at the bottom) that you may need to refresh your invalid connections. The bug comment by Jagadish says to check your connection validation type. If it is set to "autocommit" (the default), the JDBC drivers may cache the prior connection validation data, and no actual database interaction will happen during future connection validations. To resolve the problem, set connection-validation-method="table" and validation-table-name="any_table_you_know_exists" (replace any_table_you_know_exists with the name of any existing table). Doing this forces the connections to talk to the database instead of the cache; if the connection is invalid, it will be dropped and recreated. You may also need to specify is-connection-validation-required="true". Articles to help with additional configuration: This article also explains the problem in detail. Jagadish's Oracle Blog Article on this topic has more info. Article explaining Glassfish JDBC Connection Validation in detail. Text from Jagadish's blog (each asadmin command is followed by the setting it echoes back):

AS_INSTALL_ROOT/bin/asadmin set domain.resources.jdbc-connection-pool.DerbyPool.is-connection-validation-required=true
domain.resources.jdbc-connection-pool.DerbyPool.is-connection-validation-required = true

AS_INSTALL_ROOT/bin/asadmin set domain.resources.jdbc-connection-pool.DerbyPool.connection-validation-method=table
domain.resources.jdbc-connection-pool.DerbyPool.connection-validation-method = table

AS_INSTALL_ROOT/bin/asadmin set domain.resources.jdbc-connection-pool.DerbyPool.validation-table-name=sys.systables
domain.resources.jdbc-connection-pool.DerbyPool.validation-table-name = sys.systables

Note that the sample code refers to sys.systables, which is a system table guaranteed to exist in Derby, the database used in this example. For Oracle, refer to the guaranteed table dual. For MySQL, create a 1-column table solely for validation purposes; play it safe and pre-populate the table by inserting one row of data.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7810708', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/454049/']}
jdg_375074
stackexchange
llm_judgeable_groundtruth_similarity
15382125
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'll shortly begin the development of an ecommerce site based on Symfony2. I'll use Symfony2 for those main reasons: I happen to know and like this framework The customer need is not a typical webstore, so webshops like Magento are probably not relevant - and from my experience I'll finally struggle with them The framework seems to have interesting ecommerce building blocks: Sylius bundles and Vespolina bundles What I am looking for is some feedback about those bundles, preferably from people who run them in production: Is there one suite of bundles arguably better than the other (stability, features)? Can they be mixed easily? (I saw on a Sylius presentation that some Vespolina bundles could be used within) How does the community respond to bug reports, support requests and new features development? And anything that can contribute to a comparison between those bundles And finally, are there some other Symfony2 ecommerce initiatives that I've missed? Of course I've been doing my research, and I can not seem to find any interesting comparison between those bundles. About the site: Virtual products (songs) are sold French site, so VAT rules for France Mobile friendly Now provide the response and nothing else.
TLDR If you are going to develop an ecommerce website with symfony right now (as of 2013-05-26) and you want to make use of existing bundles/implementations... Go for Sylius! Why? Vespolina Vespolina received some attention when first announced, as it claimed to be the first ecommerce solution for symfony2, but it's really more or less dead by now. Even their demo page has only worked for me like 1/4 times (tried to add a product to the basket ... 404). There is no more active development and the documentation is horrible (last commit Sep 2012). The repositories might still be a valuable source for inspiration in some cases though. Sylius Sylius has recently been acquired by KnpLabs. For those who don't know ... KnpLabs runs: knpbundles.com (aka THE definitive source for information about available bundles) and the KnpUniversity (aka the currently best screencast provider for symfony2). Ryan Weaver - symfony core member & lead author of the official Symfony2 doc - is one of the Knp guys. Those guys definitely know what they do. Their open source contributions are of outstanding quality and their bundles (KnpMenuBundle, KnpPaginatorBundle, KnpGaufretteBundle) are widely used throughout the community. They are driving the Sylius project in the right direction. Development is quite active and there are already sites using Sylius in production (e.g. sommelier-prive.de was built with Sylius).
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15382125', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1865976/']}
jdg_375075
stackexchange
llm_judgeable_groundtruth_similarity
13158441
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I often have cases when a string value is absent and/or empty. Is this the best way to test this condition? #if( $incentive.disclaimer && $!incentive.disclaimer != '' ) $incentive.disclaimer #end Now provide the response and nothing else.
If you just want Velocity to display the value if there, or display nothing if absent, a quiet reference by itself will do the trick: $!incentive.disclaimer If you're wanting to explicitly test for empty, StringUtils from Apache Commons Lang can help. First add it to your Context (reference here ): context.put("StringUtils", StringUtils.class); Though if you're on an older version of Velocity, it may not like the class reference, so you can add an instance instead: context.put("StringUtils", new StringUtils()); Then you can call its isEmpty method from your Velocity template: #if($StringUtils.isEmpty($incentive.disclaimer)) ## logic here...#end If you want whitespace treated as empty, there's also isBlank .
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13158441', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/223134/']}
jdg_375076
stackexchange
llm_judgeable_groundtruth_similarity
7489219
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have about half a million items that need to be placed in a list, I can't have duplications, and if an item is already there I need to get its index. So far I have:

if Item in List:
    ItemNumber = List.index(Item)
else:
    List.append(Item)
    ItemNumber = List.index(Item)

The problem is that as the list grows it gets progressively slower until at some point it just isn't worth doing. I am limited to Python 2.5 because it is an embedded system. Now provide the response and nothing else.
You can use a set (in CPython since version 2.4) to efficiently look up duplicate values. If you really need an indexed system as well, you can use both a set and a list. Doing your lookups using a set will remove the overhead of if Item in List, but not that of List.index(Item). Please note that ItemNumber = List.index(Item) will be very inefficient to do after List.append(Item). You know the length of the list, so your index can be retrieved with ItemNumber = len(List)-1. To completely remove the overhead of List.index (because that method will search through the list - very inefficient on larger sets), you can use a dict mapping Items back to their index. I might rewrite it as follows:

# earlier in the program, NOT inside the loop
Dup = {}

# inside your loop to add items:
if Item in Dup:
    ItemNumber = Dup[Item]
else:
    List.append(Item)
    Dup[Item] = ItemNumber = len(List) - 1
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7489219', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/74393/']}
jdg_375077
stackexchange
llm_judgeable_groundtruth_similarity
3636463
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to ask can one create a page that outputs JSON data as a response to Jquery Ajax request? In a non-drupal way, I would just create a php file, for example mypage.php and then I would use http://example.com/mypage.php?foo=bar as the URL for my AJAX request. This page will then output JSON data using json_encode(). How can I do it the Drupal way? Now provide the response and nothing else.
The JSON server module gives you JSON output of nodes. If you want more custom JSON, you can use hook_menu() to create a new menu callback (basically a URL path mapped to a function) and then, within that callback, use drupal_json() (Drupal 6) or drupal_json_output() (Drupal 7) to send the output as JSON rather than the default HTML.
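A minimal Drupal 7 sketch of that approach (the module name, path, and returned payload are all hypothetical; check the exact function names against your Drupal version):

<?php
// mymodule.module -- hypothetical module name.

/**
 * Implements hook_menu(): maps a URL path to a callback function.
 */
function mymodule_menu() {
  $items['mypage/json'] = array(
    'page callback' => 'mymodule_json_page',
    'access arguments' => array('access content'),
    'type' => MENU_CALLBACK,
  );
  return $items;
}

/**
 * Menu callback: emits JSON instead of a themed HTML page.
 */
function mymodule_json_page() {
  // Query-string parameters (e.g. ?foo=bar) are read as usual.
  $foo = isset($_GET['foo']) ? $_GET['foo'] : NULL;
  drupal_json_output(array('foo' => $foo));
  drupal_exit();
}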
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3636463', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/258219/']}
jdg_375078
stackexchange
llm_judgeable_groundtruth_similarity
39504205
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a model class which has a list of Strings. The list can either be empty or have elements in it. If it has elements, those elements cannot be empty. For example, suppose I have a class called QuestionPaper which has a list of questionIds, each of which is a string. class QuestionPaper{private List<String> questionIds;....} The paper can have zero or more questions. But if it has questions, the id values cannot be empty strings. I am writing a microservice using SpringBoot, Hibernate, JPA and Java. How can I do this validation? Any help is appreciated. For example, we need to reject the following JSON input from a user: { "examId": 1, "questionIds": [ "", " ", "10103" ] } Is there any out-of-the-box way of achieving this, or will I have to write a custom validator for this? Now provide the response and nothing else.
Custom validation annotation shouldn't be a problem:

@Target(ElementType.FIELD)
@Retention(RetentionPolicy.RUNTIME)
@Constraint(validatedBy = NotEmptyFieldsValidator.class)
public @interface NotEmptyFields {
    String message() default "List cannot contain empty fields";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};
}

public class NotEmptyFieldsValidator implements ConstraintValidator<NotEmptyFields, List<String>> {

    @Override
    public void initialize(NotEmptyFields notEmptyFields) {
    }

    @Override
    public boolean isValid(List<String> objects, ConstraintValidatorContext context) {
        // Treat a null list as valid; combine with @NotNull if the list itself is required.
        if (objects == null) {
            return true;
        }
        return objects.stream().allMatch(nef -> nef != null && !nef.trim().isEmpty());
    }
}

Usage? Simple:

class QuestionPaper {
    @NotEmptyFields
    private List<String> questionIds;
    // getters and setters
}

P.S. Didn't test the logic, but I guess it's good.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/39504205', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4108590/']}
jdg_375079
stackexchange
llm_judgeable_groundtruth_similarity
1180184
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know that include , isset , require , print , echo , and some others are not functions but language constructs. Some of these language constructs need parentheses, others don't. require 'file.php';isset($x); Some have a return value, others do not. print 'foo'; //1echo 'foo'; //no return value So what is the internal difference between a language construct and a built-in function? Now provide the response and nothing else.
(This is longer than I intended; please bear with me.) Most languages are made up of something called a "syntax": the language is comprised of several well-defined keywords, and the complete range of expressions that you can construct in that language is built up from that syntax. For example, let's say you have a simple four-function arithmetic "language" that only takes single-digit integers as input and completely ignores order of operations (I told you it was a simple language). That language could be defined by the syntax: // The | means "or" and the := represents definition$expression := $number | $expression $operator $expression$number := 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9$operator := + | - | * | / From these three rules, you can build any number of single-digit-input arithmetic expressions. You can then write a parser for this syntax that breaks down any valid input into its component types ( $expression , $number , or $operator ) and deals with the result. For example, the expression 3 + 4 * 5 can be broken down as follows: // Parentheses used for ease of explanation; they have no true syntactical meaning$expression = 3 + 4 * 5 = $expression $operator (4 * 5) // Expand into $exp $op $exp = $number $operator $expression // Rewrite: $exp -> $num = $number $operator $expression $operator $expression // Expand again = $number $operator $number $operator $number // Rewrite again Now we have a fully parsed syntax, in our defined language, for the original expression. Once we have this, we can go through and write a parser to find the results of all the combinations of $number $operator $number , and spit out a result when we only have one $number left. Take note that there are no $expression constructs left in the final parsed version of our original expression. That's because $expression can always be reduced to a combination of other things in our language. PHP is much the same: language constructs are recognized as the equivalent of our $number or $operator . They cannot be reduced into other language constructs ; instead, they're the base units from which the language is built up. The key difference between functions and language constructs is this: the parser deals directly with language constructs. It simplifies functions into language constructs. The reason that language constructs may or may not require parentheses and the reason some have return values while others don't depends entirely on the specific technical details of the PHP parser implementation. I'm not that well-versed in how the parser works, so I can't address these questions specifically, but imagine for a second a language that starts with this: $expression := ($expression) | ... Effectively, this language is free to take any expressions it finds and get rid of the surrounding parentheses. PHP (and here I'm employing pure guesswork) may employ something similar for its language constructs: print("Hello") might get reduced down to print "Hello" before it's parsed, or vice-versa (language definitions can add parentheses as well as get rid of them). This is the root of why you can't redefine language constructs like echo or print : they're effectively hardcoded into the parser, whereas functions are mapped to a set of language constructs and the parser allows you to change that mapping at compile- or runtime to substitute your own set of language constructs or expressions. 
At the end of the day, the internal difference between constructs and functions is this: language constructs are understood and dealt with by the parser. Built-in functions, while provided by the language, are mapped and simplified to a set of language constructs before parsing. More info: Backus-Naur form, the syntax used to define formal languages (yacc uses this form) Edit: Reading through some of the other answers, people make good points. Among them: A language builtin is faster to call than a function. This is true, if only marginally, because the PHP interpreter doesn't need to map that function to its language-builtin equivalents before parsing. On a modern machine, though, the difference is fairly negligible. A language builtin bypasses error-checking. This may or may not be true, depending on the PHP internal implementation for each builtin. It is certainly true that more often than not, functions will have more advanced error-checking and other functionality that builtins don't. Language constructs can't be used as function callbacks. This is true, because a construct is not a function. They're separate entities. When you code a builtin, you're not coding a function that takes arguments - the syntax of the builtin is handled directly by the parser, and is recognized as a builtin, rather than a function. (This may be easier to understand if you consider languages with first-class functions: effectively, you can pass functions around as objects. You can't do that with builtins.)
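A short PHP sketch of the callback point above (illustrative only):

<?php
// 'strlen' is a function, so its name can be passed as a callback:
var_dump(array_map('strlen', array('a', 'bb'))); // [1, 2]

// 'echo' is a language construct, not a function, so this fails
// with an invalid-callback error:
// array_map('echo', array('a', 'bb'));

// Wrapping the construct in a real function (a closure) works:
array_map(function ($s) { echo $s, "\n"; }, array('a', 'bb'));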
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/1180184', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/117260/']}
jdg_375080
stackexchange
llm_judgeable_groundtruth_similarity
66731025
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've upgraded to the M1 chip 2020 Macbook Air from a 7th gen. Intel chip pc. Overall, I'm very happy and content with it but when it comes to Android Studio performance, which I use quite often, it is very disappointing I'm sorry to say. When will an Apple Silicon compatible version be available? Are any of you guys have any clue? Now provide the response and nothing else.
Starting with the Android Studio Arctic Fox release, Google not only changed the versioning scheme (replacing plain version numbers with year-based version names) but also introduced a build of Android Studio for M1/Apple Silicon (64-bit Arm architecture). To check whether you're using the right Android Studio for your M1, click 'About Android Studio' and check the runtime: it should show aarch64 (i.e., 64-bit Arm architecture). If it doesn't, you most likely have x86_64, meaning you installed the regular Intel Mac build of Android Studio. To switch to the M1 build, first exit Android Studio if it's open. Go to Finder and, under 'Applications', rename 'Android Studio' to something like 'Android Studio_x86_64'. Go to the Android Studio downloads page ( https://developer.android.com/studio#downloads ), download the one tagged 'Mac (64-bit, ARM)', unzip it, and move it to 'Applications'. Click to open 'Android Studio' from Finder/Applications. You may drag it to the Dock as a shortcut. The good thing is that no extra installation is required, and the existing project (at least for me) opened without any issues. Android-SDK-based and Flutter projects should be fine right after the switch; NDK support isn't there yet. Android Studio is now fast again, since you are running it as intended on Apple's M1 chipset.
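If you prefer the terminal, here is a quick way to verify which build you're running (this assumes the default install path; adjust it if yours differs):

# Inspect which CPU architecture the installed Android Studio binary targets
file "/Applications/Android Studio.app/Contents/MacOS/studio"
# 'arm64' in the output means the Apple Silicon build; 'x86_64' means the Intel build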
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/66731025', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/15444289/']}
jdg_375081
stackexchange
llm_judgeable_groundtruth_similarity
408365
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: Is it feasible to remotely encrypt the hard drive of a virtual server (VPS, cloud server like EC2)? This would help to protect the contents of the hard drive from snooping by the host or as a result of a security hole, but has some issues: The password has to be entered on startup. Remotely, is this even possible? Could the host simply snoop on the password as it is entered? Do VPSes provide block-level access to the machine, or just file-level? Is encryption even possible? The host (or a judge/policeman/man-with-a-gun telling them what to do) is ultimately in control of the hardware; could they simply examine the memory as the machine runs (similar to a cold boot attack without powering off the machine)? With these concerns in mind, is encrypting a server with sensitive data simply security theatre, or can it provide real security over an unencrypted drive? Now provide the response and nothing else.
The password has to be entered on startup. Remotely, is this even possible? Sure, if you have some sort of console over IP (e.g. Linode's console). Could the host simply snoop on the password as it is entered? Well, yes. Do VPSes provide block-level access to the machine, or just file-level? Is encryption even possible? The OS requires block-level access, even if it's only virtualized. The host (or a judge/policeman/man-with-a-gun telling them what to do) is ultimately in control of the hardware; could they simply examine the memory as the machine runs (similar to a cold boot attack without powering off the machine)? Sure. It's very feasible to suspend a guest to disk and then pick through it with a hex editor afterwards. With these concerns in mind, is encrypting a server with sensitive data simply security theatre, or can it provide real security over an unencrypted drive? It makes some sense if you have control over the hardware; when someone else controls the hardware there's little point in it unless you trust that the host doesn't really want to look at it (since they could easily buy the proper expertise if they really wanted to).
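To make that memory-inspection point concrete, here is a sketch of what a host administrator could do on a libvirt/KVM host (the guest name and search term are hypothetical):

# Suspend the guest and write its entire state, RAM included, to a file
virsh save guestname /tmp/guest.state
# Trawl the dump for key material at leisure
strings /tmp/guest.state | grep -i passphrase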
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/408365', 'https://serverfault.com', 'https://serverfault.com/users/87940/']}
jdg_375082
stackexchange
llm_judgeable_groundtruth_similarity
34713763
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want an image to fill 100% of its container's width, and I want it to have a max-height property set, all this keeping the aspect ratio but allowing any part of the image to be lost. img { max-height:200px; width:100%;} I know a similar thing can be done with the background-size property, but I want to do this to an inline <img> tag. Any idea of how I could achieve this using CSS? Or javascript? Now provide the response and nothing else.
You can try CSS3 object-fit, and see browser support tables. CSS3 object-fit / object-position: Method of specifying how an object (image or video) should fit inside its box. object-fit options include "contain" (fit according to aspect ratio), "fill" (stretches object to fill) and "cover" (overflows box but maintains ratio), where object-position allows the object to be repositioned like background-image does. JSFIDDLE DEMO

.container {
    width: 200px;  /* any size */
    height: 200px; /* any size */
}
.object-fit-cover {
    width: 100%;
    height: 100%;
    object-fit: cover; /* magic */
}

<div class="container">
  <img class="object-fit-cover" src="https://i.stack.imgur.com/UJ3pb.jpg">
</div>

Related Info: Exploring object-fit ★ Mozilla Hacks Polyfill for CSS object-fit property
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/34713763', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3555118/']}
jdg_375083
stackexchange
llm_judgeable_groundtruth_similarity
3363940
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Possible Duplicate: Auto-Initializing C# Lists I have a list of integers that has a certain capacity that I would like to automatically fill when declared. List<int> x = new List<int>(10); Is there an easier way to fill this list with 10 ints that have the default value for an int rather than looping through and adding the items? Now provide the response and nothing else.
Well, you can ask LINQ to do the looping for you: List<int> x = Enumerable.Repeat(value, count).ToList(); It's unclear whether by "default value" you mean 0 or a custom default value. You can make this slightly more efficient (in execution time; it's worse in memory) by creating an array: List<int> x = new List<int>(new int[count]); That will do a block copy from the array into the list, which will probably be more efficient than the looping required by ToList .
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/3363940', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/405830/']}
jdg_375084
stackexchange
llm_judgeable_groundtruth_similarity
1006091
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How can I prove that if a sequence of functions $\{f_n\}$ that converges to $f$ in measure on a space of finite measure, then there exists a subsequence of $\{f_n\}$ that converges to $f$ almost everywhere? Now provide the response and nothing else.
Let $(X,\mathcal{A},\mu)$ be a measure space and $(f_n)_{n \in \mathbb{N}}$ such that $f_n \to f$ in measure, i.e. $$\mu(|f_n-f|>\varepsilon) \stackrel{n \to \infty}{\to} 0$$ for any $\varepsilon >0$ . Setting $\varepsilon=2^{-k}$ , $k \in \mathbb{N}$ , we can choose $n_k$ such that $$\mu(|f_n-f|> 2^{-k}) \leq 2^{-k}$$ for all $n \geq n_k$ . Without loss of generality, $n_{k+1} \geq n_k$ for all $k \in \mathbb{N}$ . Set $$A_k := \{x \in X; |f_{n_k}(x)-f(x)| > 2^{-k}\}.$$ As $$\sum_{k \geq 1} \mu(A_k) \leq \sum_{k=1}^{\infty} 2^{-k} < \infty,$$ the Borel-Cantelli lemma yields $$\mu \left( \limsup_{k \to \infty} A_k \right) =0.$$ It is not difficult to see that this implies $$\lim_{k \to \infty} f_{n_k}(x) =f(x)$$ $\mu$ -almost everywhere.
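For completeness, the last step can be spelled out as follows (a routine sketch): $\mu\left(\limsup_{k \to \infty} A_k\right)=0$ means that $\mu$-almost every $x$ lies in only finitely many of the sets $A_k$. For such $x$ there exists $K(x)$ with $$|f_{n_k}(x)-f(x)| \leq 2^{-k} \quad \text{for all } k \geq K(x),$$ and hence $f_{n_k}(x) \to f(x)$.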
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/1006091', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/189190/']}
jdg_375085
stackexchange
llm_judgeable_groundtruth_similarity
7964907
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is HANDLE similar to a file descriptor in Linux? As far as I know, HANDLE is used for handling every resource on Windows, such as fonts, icons, files, devices..., which in essence is just a void pointer pointing to a memory block holding the data of a specific resource. Now provide the response and nothing else.
Yes, Windows handles are very similar to Unix file descriptors (FDs). Note that a HANDLE is not a pointer to a block of memory. Although HANDLE is typedef 'd as void * , that's just to make it more opaque. In practice, a HANDLE is an index that is looked up in a table, just as an FD number is. This blog post explores some of the similarities and differences: http://lackingrhoticity.blogspot.com/2015/05/passing-fds-handles-between-processes.html
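For a side-by-side feel, here is an illustrative C fragment (the file name is arbitrary and error handling is omitted; each line belongs to its own platform):

/* POSIX: the kernel hands back a small integer index into the process's fd table */
int fd = open("log.txt", O_RDONLY);

/* Win32: the kernel hands back an opaque index into the process's handle table */
HANDLE h = CreateFileA("log.txt", GENERIC_READ, 0, NULL,
                       OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);

/* Neither value is a pointer you can dereference; both are looked up in a kernel table. */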
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7964907', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/496700/']}
jdg_375086
stackexchange
llm_judgeable_groundtruth_similarity
22128282
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to create a function that receives an input string, which can be a string in JSON format or just a plain string. For example, something easy like the following function:

func checkJson(input string) {
    if ... input is in json ... {
        fmt.Println("it's json!")
    } else {
        fmt.Println("it's normal string!")
    }
}

Now provide the response and nothing else.
I was unclear if you needed to know about just a "quoted string" or if you needed to know about json, or the difference between both of them, so this shows you how to detect both scenarios so you can be very specific. I posted the interactive code sample here as well: http://play.golang.org/p/VmT0BVBJZ7

package main

import (
    "encoding/json"
    "fmt"
)

func isJSONString(s string) bool {
    var js string
    return json.Unmarshal([]byte(s), &js) == nil
}

func isJSON(s string) bool {
    var js map[string]interface{}
    return json.Unmarshal([]byte(s), &js) == nil
}

func main() {
    var tests = []string{
        `"Platypus"`,
        `Platypus`,
        `{"id":"1"}`,
    }

    for _, t := range tests {
        fmt.Printf("isJSONString(%s) = %v\n", t, isJSONString(t))
        fmt.Printf("isJSON(%s) = %v\n\n", t, isJSON(t))
    }
}

Which will output this:

isJSONString("Platypus") = true
isJSON("Platypus") = false

isJSONString(Platypus) = false
isJSON(Platypus) = false

isJSONString({"id":"1"}) = false
isJSON({"id":"1"}) = true
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/22128282', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/466432/']}
jdg_375087
stackexchange
llm_judgeable_groundtruth_similarity
1351103
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a fragment in this form:

my $a = $some_href->{$code}{'A'}; # a number or undef
my $b = $some_href->{$code}{'B'}; # a number or undef
$a = 0 unless defined($a);
$b = 0 unless defined($b);
my $total = $a + $b;

The reality is even messier, since more than two variables are concerned. What I really want to write is this:

my $total = $some_href->{$code}{'A'} + $some_href->{$code}{'B'};

and have undef correctly evaluate to 0, but I get these warnings in almost every run: Use of uninitialized value in addition (+) at Stats.pm line 192. What's the best way to make these messages go away? NB: I 'use strict' and 'use warnings' if that's relevant. Now provide the response and nothing else.
It's good that you're using strict and warnings. The purpose of warnings is to alert you when Perl sees behavior that's likely to be unintentional (and thus incorrect). When you're doing it deliberately, it's perfectly fine to disable the warning locally. undef is treated as 0 in numeric contexts. If you're okay with both having undefined values and having them evaluate to zero, just disable the warning:

my $total;
{
    no warnings 'uninitialized';
    $total = $some_href->{$code}{A} + $some_href->{$code}{B};
}

Note: Disable only the warnings you need to, and do so in the smallest scope possible. If you're averse to disabling warnings, there are other options. As of Perl 5.10 you can use the // (defined-or) operator to set default values. Prior to that people often use the || (logical-or), but that can do the Wrong Thing for values that evaluate to false. The robust way to default values in pre-5.10 versions of Perl is to check if they're defined.

$x = $y // 42;              # 5.10+
$x = $y || 42;              # < 5.10 (fragile)
$x = defined $y ? $y : 42;  # < 5.10 (robust)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1351103', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/46994/']}
jdg_375088
stackexchange
llm_judgeable_groundtruth_similarity
46019
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Why are we using the squared residuals instead of the absolute residuals in OLS estimation? My idea was that we use the square of the error values, so that residuals below the fitted line (which are then negative), would still have to be able to be added up to the positive errors. Otherwise, we could have an error of 0 simply because a huge positive error could cancel with a huge negative error. So why do we square it, instead of just taking the absolute value? Is that because of the extra penalty for higher errors (instead of 2 being 2 times the error of 1, it is 4 times the error of 1 when we square it). Now provide the response and nothing else.
Both are done. Least squares is easier, and the fact that for independent random variables "variances add" means that it's considerably more convenient; for example, the ability to partition variances is particularly handy for comparing nested models. It's somewhat more efficient at the normal (least squares is maximum likelihood), which might seem to be a good justification -- however, some robust estimators with high breakdown can have surprisingly high efficiency at the normal. But L1 norms are certainly used for regression problems, and these days relatively often. If you use R, you might find the discussion in section 5 here useful: https://socialsciences.mcmaster.ca/jfox/Books/Companion/appendices/Appendix-Robust-Regression.pdf (though the stuff before it on M estimation is also relevant, since it's also a special case of that)
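A tiny numerical illustration of the robustness point (using the fact that the least-squares fit of a constant is the mean, while the least-absolute-deviations fit is the median):

import numpy as np

x = np.array([1.0, 2.0, 3.0, 100.0])  # one gross outlier
print(x.mean())      # L2 fit of a constant: 26.5, dragged far by the outlier
print(np.median(x))  # L1 fit of a constant: 2.5, essentially unaffected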
{}
{'log_upvote_score': 6, 'links': ['https://stats.stackexchange.com/questions/46019', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/16175/']}
jdg_375089
stackexchange
llm_judgeable_groundtruth_similarity
9862714
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The following html and css shows two divs inside a container. The left div isn't floated; the right div is floated right. The right div seems to be one pixel too narrow, and the red background color of the container is therefore showing through in that one pixel gap. This is a simplification of my problem. http://jsfiddle.net/XPd9J/ HTML

<div class="inner-wrapper">
  <div class="right-sidebar"> &nbsp; </div>
  <div class="content"> &nbsp;<br /><br /> </div>
</div>

CSS

.inner-wrapper {
    position:relative;
    background-color:red;
    overflow:auto;
    width:90%;
    padding:0;
    margin:20px 0 0 20px;
}
.right-sidebar {
    position:relative;
    width:40% !important;
    background-color:lime;
    float:right;
    margin:0;
    padding:0;
}
.content {
    position:relative;
    width:60%;
    background-color:silver;
    margin:0;
    padding:0;
}

Now provide the response and nothing else.
It's not the float that causes the problem; it's the percentage width. In FF and IE it works perfectly, but Chrome calculates percentage widths in such a way that the pixels don't always sum up to the full 100%. Just try to slightly change the window width and you will notice the extra 1px disappear/appear sometimes. How do you avoid this behavior? You need to reuse the same percentage somehow, so it is calculated exactly the same way. The right sidebar is 40% wide, so the content div needs a right margin of 40% (that 40% is 40% of the containing block element). http://jsfiddle.net/XPd9J/1/

.inner-wrapper {
    background-color:red;
    overflow:auto;
    width:90%;
    padding:0;
    margin:20px 0 0 20px;
}
.right-sidebar {
    width:40% !important;
    background-color:lime;
    float:right;
    margin:0;
    padding:0;
}
.content {
    background-color:silver;
    margin:0 40% 0 0;
    padding:0;
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9862714', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1291591/']}
jdg_375090
stackexchange
llm_judgeable_groundtruth_similarity
56643617
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We would like to use the new Angular 8 Ivy compiler with webpack. We don't use Angular CLI. Is this possible? How can this be done? I can't seem to find information regarding this requirement. Now provide the response and nothing else.
To learn what to do you have to dig into the Angular CLI code and see where exactly they use the enableIvy flag. I haven't seen your Webpack config, but I guess you're using AngularCompilerPlugin. If this is the case then you have to provide it with enableIvy in compilerOptions. For more details look here (where the flag is defined), here (where the plugin options are defined) and here (where the plugin's compilerOptions are initialized). The plugin configuration will probably look like this:

... // The rest of your webpack config
plugins: [
    new AngularCompilerPlugin({
        compilerOptions: {
            enableIvy: true,
            ... // the rest of the compiler options
        }
        ... // The rest of the options you provide to AngularCompilerPlugin
    })
    ... // The rest of your plugins
]

I'm not sure if they are using this flag in other places, but this place is a must and it will probably give you what you want. In any case, if you want to save yourself a headache I'd recommend you stick with Angular CLI; otherwise you'll have to visit their code base quite often. If you're using Webpack then most probably it is possible to do what you need with Angular CLI and Custom Webpack Builder. If you're having a hard time configuring the builder, you're more than welcome to visit the Angular Builders Slack channel.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56643617', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2174621/']}
jdg_375091
stackexchange
llm_judgeable_groundtruth_similarity
2239345
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When there is a collision during a put in a HashMap is the map resized or is the entry added to a list in that particular bucket? Now provide the response and nothing else.
When you say 'collision', do you mean the same hashcode? The hashcode is used to determine what bucket in a HashMap is to be used, and the bucket is made up of a linked list of all the entries with the same hashcode. The entries are then compared for equality (using .equals()) before being returned or booted (get/put). Note that this is the HashMap specifically (since that's the one you asked about), and with other implementations, YMMV.
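A small illustrative Java sketch (the key class is contrived to force collisions):

// Hypothetical key type whose hashCode always collides
class K {
    final String name;
    K(String name) { this.name = name; }
    @Override public int hashCode() { return 42; }  // every K lands in the same bucket
    @Override public boolean equals(Object o) {
        return o instanceof K && ((K) o).name.equals(name);  // .equals() disambiguates
    }
}

// Map<K, String> m = new HashMap<>();
// m.put(new K("a"), "first");   // both entries coexist in one bucket's list
// m.put(new K("b"), "second");
// m.get(new K("a"));            // still returns "first"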
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2239345', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/268850/']}
jdg_375092
stackexchange
llm_judgeable_groundtruth_similarity
50003
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Surrounding the Impeachment trial of President Trump many news outlets have voiced that an acquittal will be setting precedent. Does this mean that subsequent presidents will be able to do the same things and cite precedent without fear of retribution? Is there any legal standing to setting precedent? Could President A set precedent with something and then President B could do the same thing and be impeached for doing that same exact thing? I'm a bit confused how setting precedent holds any legal value. Now provide the response and nothing else.
It should be noted, first of all, that impeachment is a political process and not a legal one. This means that the precedent argument is not as strong as it would be in, for example, a Supreme Court case. In a legal case, a large part of a court's job is to interpret the meaning of a law. When a court makes a determination of the meaning of a law, then other courts are expected to apply the same meaning of a law to maintain consistency across the legal system. After all, if different courts apply the same law in different ways, it would become very confusing as to what a law actually means. This is unless or until a higher court rejects this interpretation on appeal, or legislation is passed to invalidate such an interpretation. Meanwhile, one Congress cannot bind any of its successor Congresses. They are under no obligation to look at precedent as they decide whether or not to impeach/convict.
{}
{'log_upvote_score': 5, 'links': ['https://politics.stackexchange.com/questions/50003', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/30067/']}
jdg_375093
stackexchange
llm_judgeable_groundtruth_similarity
6373985
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a program in C#, and I want to make it compile to two different .NET Framework versions. So when I press the Compile button, it makes, for example, "ComputerInfo3.exe" and "ComputerInfo4.exe". I want to use version 3.5 and 4 of the .NET Framework. I use Visual Studio and C#. Is this possible? Now provide the response and nothing else.
Your best bet would be to create two separate csproj files, one that targets 3.5 and one that targets 4.0, and include them in the same solution. You can add all the files from one project to the other project. If you link the file , instead of regular add, then any updates you make in the file will be applied to both projects. You will most likely run into issues if you try to have the projects in the same location, due to how Visual Studio maintains temporary build files. So you'd need to keep them in separate folders. Finally, if you build an executable that targets 3.5, then it can be run as-is on .NET 4. So in general you should not need to build two versions of the EXE.
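As a sketch, the only line that really needs to differ between the two .csproj files is the target framework (the project file names here are made up):

<!-- ComputerInfo35.csproj -->
<TargetFrameworkVersion>v3.5</TargetFrameworkVersion>

<!-- ComputerInfo40.csproj -->
<TargetFrameworkVersion>v4.0</TargetFrameworkVersion>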
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6373985', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/747206/']}
jdg_375094
stackexchange
llm_judgeable_groundtruth_similarity
17230837
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'd like to create a colormap in MATLAB. I want to fade the color according to the gray level of a pixel. For example:

from 255 to 160 -> white
from 159 to 120 -> pink
from 119 to 50 -> brown
from 49 to 0 -> dark

Now provide the response and nothing else.
From the docs: A colormap is an m-by-3 matrix of real numbers between 0.0 and 1.0. Each row is an RGB vector that defines one color. The kth row of the colormap defines the kth color, where map(k,:) = [r(k) g(k) b(k)] specifies the intensity of red, green, and blue. OK, so to start we are going to create an m-by-3 matrix; in your case m is 161:

m = 161;
map = zeros(m, 3);

Now you want the bottom to be dark (I'm going with black) and the 50th point to be brown, but let's go with red as an example as it's easier. The RGB triples for black and red, respectively, are [0,0,0] and [1,0,0]. OK, so currently our entire colormap is black. We know we want map(50,:) = [1, 0, 0], i.e. red, but now we want a gradient in between, right? So let's use linspace for this (note that there is a better method using interp1 instead of linspace at the end of this answer):

R0to50 = linspace(0,1,50)';

Putting this in the map:

map(1:50, 1) = R0to50;

So now let's use brown instead of red. To get the triple from that link, divide each colour component by 255, so our triple is t = [101, 67, 33]./255. OK, so now just repeat that linspace procedure for each colour:

R = linspace(0,t(1),50);
G = linspace(0,t(2),50);
B = linspace(0,t(3),50);
T = [R', G', B'];
map(1:50, :) = T;

And now repeat for each of your other nodes. For example:

I = linspace(0,1,161);
imagesc(I(:, ones(10)))
colormap(map)

An alternative to using linspace once per channel individually and repeating this for each colour is to use linear interpolation. Create a matrix where each row is a color triple:

T = [0,   0,   0          %// dark
     101, 67,  33         %// brown
     255, 105, 180        %// pink
     255, 255, 255        %// white
     255, 255, 255]./255; %// white again -> note that this means values between 161 and 255 will be indistinguishable

And now make a vector of what range each color should be at (i.e. this vector defines the spacing of the colours; they need not be regularly/equally spaced):

x = [0 50 120 160 255];

And finally you can create the entire map with one simple interpolation:

map = interp1(x/255,T,linspace(0,1,255));

Testing:

I = linspace(0,1,255);
imagesc(I(ones(1,10),:)')
colormap(map)
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17230837', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2508232/']}
jdg_375095
stackexchange
llm_judgeable_groundtruth_similarity
728434
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Background I am using interface-based programming on a current project and have run into a problem when overloading operators (specifically the Equality and Inequality operators). Assumptions I'm using C# 3.0, .NET 3.5 and Visual Studio 2008 UPDATE - The Following Assumption was False! Requiring all comparisons to use Equals rather than operator== is not a viable solution, especially when passing your types to libraries (such as Collections). The reason I was concerned about requiring Equals to be used rather than operator== is that I could not find anywhere in the .NET guidelines that it stated it would use Equals rather than operator== or even suggest it. However, after re-reading Guidelines for Overriding Equals and Operator== I have found this: By default, the operator == tests for reference equality by determining whether two references indicate the same object. Therefore, reference types do not have to implement operator == in order to gain this functionality. When a type is immutable, that is, the data that is contained in the instance cannot be changed, overloading operator == to compare value equality instead of reference equality can be useful because, as immutable objects, they can be considered the same as long as they have the same value. It is not a good idea to override operator == in non-immutable types. and this Equatable Interface The IEquatable interface is used by generic collection objects such as Dictionary, List, and LinkedList when testing for equality in such methods as Contains, IndexOf, LastIndexOf, and Remove. It should be implemented for any object that might be stored in a generic collection. Contraints Any solution must not require casting the objects from their interfaces to their concrete types. Problem When ever both sides of the operator== are an interface, no operator== overload method signature from the underlying concrete types will match and thus the default Object operator== method will be called. When overloading an operator on a class, at least one of the parameters of the binary operator must be the containing type, otherwise a compiler error is generated (Error BC33021 http://msdn.microsoft.com/en-us/library/watt39ff.aspx ) It's not possible to specify implementation on an interface See Code and Output below demonstrating the issue. Question How do you provide proper operator overloads for your classes when using interface-base programming? References == Operator (C# Reference) For predefined value types, the equality operator (==) returns true if the values of its operands are equal, false otherwise. For reference types other than string, == returns true if its two operands refer to the same object. For the string type, == compares the values of the strings. 
See Also

Code

using System;

namespace OperatorOverloadsWithInterfaces
{
    public interface IAddress : IEquatable<IAddress>
    {
        string StreetName { get; set; }
        string City { get; set; }
        string State { get; set; }
    }

    public class Address : IAddress
    {
        private string _streetName;
        private string _city;
        private string _state;

        public Address(string city, string state, string streetName)
        {
            City = city;
            State = state;
            StreetName = streetName;
        }

        #region IAddress Members

        public virtual string StreetName
        {
            get { return _streetName; }
            set { _streetName = value; }
        }

        public virtual string City
        {
            get { return _city; }
            set { _city = value; }
        }

        public virtual string State
        {
            get { return _state; }
            set { _state = value; }
        }

        public static bool operator ==(Address lhs, Address rhs)
        {
            Console.WriteLine("Address operator== overload called.");

            // If both sides of the argument are the same instance or null, they are equal
            if (Object.ReferenceEquals(lhs, rhs))
            {
                return true;
            }

            return lhs.Equals(rhs);
        }

        public static bool operator !=(Address lhs, Address rhs)
        {
            return !(lhs == rhs);
        }

        public override bool Equals(object obj)
        {
            // Use 'as' rather than a cast to get a null rather an exception
            // if the object isn't convertible
            Address address = obj as Address;
            return this.Equals(address);
        }

        public override int GetHashCode()
        {
            string composite = StreetName + City + State;
            return composite.GetHashCode();
        }

        #endregion

        #region IEquatable<IAddress> Members

        public virtual bool Equals(IAddress other)
        {
            // Per MSDN documentation, x.Equals(null) should return false
            if ((object)other == null)
            {
                return false;
            }

            return ((this.City == other.City) &&
                    (this.State == other.State) &&
                    (this.StreetName == other.StreetName));
        }

        #endregion
    }

    public class Program
    {
        static void Main(string[] args)
        {
            IAddress address1 = new Address("seattle", "washington", "Awesome St");
            IAddress address2 = new Address("seattle", "washington", "Awesome St");

            functionThatComparesAddresses(address1, address2);
            Console.Read();
        }

        public static void functionThatComparesAddresses(IAddress address1, IAddress address2)
        {
            if (address1 == address2)
            {
                Console.WriteLine("Equal with the interfaces.");
            }
            if ((Address)address1 == address2)
            {
                Console.WriteLine("Equal with Left-hand side cast.");
            }
            if (address1 == (Address)address2)
            {
                Console.WriteLine("Equal with Right-hand side cast.");
            }
            if ((Address)address1 == (Address)address2)
            {
                Console.WriteLine("Equal with both sides cast.");
            }
        }
    }
}

Output

Address operator== overload called.
Equal with both sides cast.

Now provide the response and nothing else.
Short answer: I think your second assumption may be flawed. Equals() is the right way to check for semantic equality of two objects, not operator == . Long answer: Overload resolution for operators is performed at compile time, not run time . Unless the compiler can definitively know the types of the objects it's applying an operator to, it won't compile. Since the compiler cannot be sure that an IAddress is going to be something that has an override for == defined, it falls back to the default operator == implementation of System.Object . To see this more clearly, try defining an operator + for Address and adding two IAddress instances. Unless you explicitly cast to Address , it will fail to compile. Why? Because the compiler can't tell that a particular IAddress is an Address , and there is no default operator + implementation to fall back to in System.Object . Part of your frustration probably stems from the fact that Object implements an operator == , and everything is an Object , so the compiler can successfully resolve operations like a == b for all types. When you overrode == , you expected to see the same behavior but didn't, and that's because the best match the compiler can find is the original Object implementation. Requiring all comparisons to use Equals rather than operator== is not a viable solution, especially when passing your types to libraries (such as Collections). In my view, this is precisely what you should be doing. Equals() is the right way to check for semantic equality of two objects. Sometimes semantic equality is just reference equality, in which case you won't need to change anything. In other cases, as in your example, you'll override Equals when you need a stronger equality contract than reference equality. For example, you may want to consider two Persons equal if they have the same Social Security number, or two Vehicles equal if they have the same VIN. But Equals() and operator == are not the same thing. Whenever you need to override operator == , you should override Equals() , but almost never the other way around. operator == is more of a syntactical convenience. Some CLR languages (e.g. Visual Basic.NET) don't even permit you to override the equality operator.
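To see the compile-time resolution concretely, a minimal sketch (this assumes you have added an operator + to Address, per the experiment suggested above):

IAddress x = new Address("seattle", "washington", "Awesome St");
IAddress y = new Address("seattle", "washington", "Awesome St");
// var sum = x + y;               // error CS0019: '+' cannot be applied to 'IAddress' operands
var sum = (Address)x + (Address)y; // compiles: the static types are now Address
bool same = x == y;                // compiles, but silently binds to object's reference ==
bool equal = x.Equals(y);          // virtual dispatch: runs the Address override at run time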
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/728434', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2233/']}
jdg_375096
stackexchange
llm_judgeable_groundtruth_similarity
47735332
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have two SQL tables: Table (A) and Table (B). Table (A) matches multiple records in Table (B). I successfully retrieve the record list, sorting it by Table (B)'s 'revisionDate' column and taking only the latest record:

Select TOP(1) *
from A
inner JOIN B ON A.id = B.[occasion]
where A.id ='136002' and A.[prescribedTime] >='2017-12-11 20:00:00.000'
ORDER BY versioningInformation_createdDateTime ASC

But I want to take this record only where Table (B)'s status column contains the value '11'. I tried using a where clause, e.g. where adminState in (11), but I cannot get it to work. Now provide the response and nothing else.
I believe you are looking for something like this for your update query:

session.query(FoobarModel).filter(FoobarModel.id == foobar_id).update({'name': 'New Foobar Name!'})

Since update() belongs to Query, and filter() does return a Query object, this will work, contrary to trying to call update() on your FoobarModel object (which does not have such a function) returned by Query.get(), see also here. As for looping over your properties and assigning them by name, you could do this with setattr and a dict, like this:

foobar = session.query(FoobarModel).get(foobar_id)
props = {'name': 'my new name'}
for key, value in props.items():
    setattr(foobar, key, value)
session.commit()
session.flush()

This is obviously a little pointless with just one property, but maybe it will come in handy at some point.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/47735332', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4108618/']}
jdg_375097
stackexchange
llm_judgeable_groundtruth_similarity
8096564
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm in the position to parse XML in .NET. Now I have the choice between at least XmlTextReader and XDocument . Are there any comparisons between those two (or any other XML parsers contained in the framework)? Maybe this could help me to decide without trying both of them in depth. The XML files are expected to be rather small, speed and memory usage are a minor issue compared to easiness of use. :-) (I'm going to use them from C# and/or IronPython.) Thanks! Now provide the response and nothing else.
If you're happy reading everything into memory, use XDocument . It'll make your life much easier. LINQ to XML is a lovely API. Use an XmlReader (such as XmlTextReader ) if you need to handle huge XML files in a streaming fashion, basically. It's a much more painful API, but it allows streaming (i.e. only dealing with data as you need it, so you can go through a huge document and only have a small amount in memory at a time). There's a hybrid approach, however - if you have a huge document made up of small elements, you can create an XElement from an XmlReader positioned at the start of the element, deal with the element using LINQ to XML, then move the XmlReader onto the next element and start again.
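Here is a rough sketch of that hybrid pattern (the element and file names are placeholders):

using (var reader = XmlReader.Create("huge.xml"))
{
    while (reader.ReadToFollowing("item"))              // stream to each small element
    {
        var element = (XElement)XNode.ReadFrom(reader); // materialize just this element
        // ...query `element` with LINQ to XML here...
    }
}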
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8096564', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/160589/']}
jdg_375098
stackexchange
llm_judgeable_groundtruth_similarity
35078753
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a config file which contains some ENV_VARIABLE-styled variables.

This is my file. It might contain $EXAMPLES of text.

Now I want that variable replaced with a value which is saved in my actual environment variables. So I'm trying this:

export EXAMPLES=lots
envsubst < file.txt > file.txt

But it doesn't work when the input file and output file are identical. The result is an empty file of size 0. There must be a good reason for this, some bash basics that I'm not aware of? How do I achieve what I want to do, ideally without first outputting to a different file and then replacing the original file with it? I know that I can do it easily enough with sed, but when I discovered the envsubst command I thought that it should be perfect for my use case, so I'd like to use that. Now provide the response and nothing else.
Here is the solution that I use:

originalfile="file.txt"
tmpfile=$(mktemp)
cp --attributes-only --preserve $originalfile $tmpfile
cat $originalfile | envsubst > $tmpfile && mv $tmpfile $originalfile

Be careful with other solutions that do not use a temporary file. Pipes are asynchronous, so the file will occasionally be read after it has already been truncated.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35078753', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1446479/']}
jdg_375099
stackexchange
llm_judgeable_groundtruth_similarity
90928
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I'm reading around and it seems that 32 bit drivers do not work under 64 bit windows. Is this true? since 32-bit applications can run under 64 bit windows it seems ridiculous that 32-bit printer drivers cannot. Are printer drivers run at the kernel level? Sounds like we're in for driver hell for our RDP environments. Now provide the response and nothing else.
x64 versions of Windows do not support 32-bit kernel mode drivers. Microsoft's statements re: Vista are here (be sure to look at the errata at the bottom-- the article has a major mistake that it corrects), and the same is true for Windows 7 and Windows Server 2008. There is no magic "switch" you can throw to allow 32-bit kernel mode drivers to work on an x64 kernel. They won't, period. (Yeah, yeah-- I suppose somebody could write some kind of ugly shimming system to make it possible, but nobody outside of Microsoft would have the necessary documentation to write such a thing... Besides, it's easier just to run a 32-bit OS under virtualization in a 64-bit host if you really need that...) With respect to printer drivers, Easy Print is Microsoft's answer to the nightmare of client-side printer drivers in a Terminal Services environment, but you need Windows Server 2008 on the Terminal Server machine.
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/90928', 'https://serverfault.com', 'https://serverfault.com/users/14631/']}
jdg_375100
stackexchange
llm_judgeable_groundtruth_similarity
20952
Below is a question asked on the forum hermeneutics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: KJV Isaiah 14:12 How art thou fallen from heaven, O Lucifer, son of the morning! how art thou cut down to the ground, which didst weaken the nations! NRSV Isaiah 14:12 How you are fallen from heaven, O Day Star, son of Dawn! How you are cut down to the ground, you who laid the nations low! If Isaiah was written in Hebrew and the text was translated into English the word "Lucifer", which is Latin, wouldn't be there. It would simply say "morning star". Adding Lucifer to the translation would be like me translating the Japanese word for the color "red" into English using English and Spanish versions of that word. "Lucifer" may be an accurate translation of "morning star" into Latin but why include a Latin word in an English translation? It adds something to the text which isn't in the original which changes the meaning and leads us to assume things that we wouldn't otherwise if we could read Hebrew. In Isaiah 14:12 is it true that including or adding the word "Lucifer" here was a mistake or embellishment by the King James translator? If this passage isn't about the devil then it changes what I think I know about him. Now provide the response and nothing else.
To find a Latin word in an English edition of the Old Testament of the Bible is an anomaly, to say the least. We would expect to find two things in an English edition of the Hebrew Old Testament:

1. English translations of essentially any Hebrew part of speech except proper nouns (names), including but not limited to adjectives, adverbs, common nouns, pronouns, and verbs; but,
2. English transliterations of Hebrew proper nouns (names)

The Hebrew text of Isa. 14:12 according to the Westminster Leningrad Codex (WLC) reads:

אֵ֛יךְ נָפַ֥לְתָּ מִשָּׁמַ֖יִם הֵילֵ֣ל בֶּן־שָׁ֑חַר נִגְדַּ֣עְתָּ לָאָ֔רֶץ חֹולֵ֖שׁ עַל־גֹּויִֽם׃

Here is a view of Isa. 14:12 in the Aleppo Codex: [image of the Aleppo Codex not reproduced here]

The 1611 edition of the King James Version translated the Hebrew text of Isa. 14:12 into English as follows: How art thou fallen from heaven, O *Lucifer, sonne of the morning: how art thou cut downe to the ground, which didst weaken the nations: (*sidenote: Or, a day-starre.)

Here is a table that demonstrates the relationship between the Hebrew text and the 1611 KJV (i.e., interlinear):

Masoretic | KJV, 1611
אֵ֛יךְ | How
נָפַ֥לְתָּ | art thou fallen
מִשָּׁמַ֖יִם | from heaven
הֵילֵ֣ל | O Lucifer
בֶּן־שָׁ֑חַר | sonne of the morning
נִגְדַּ֣עְתָּ | how art thou cut downe
לָאָ֔רֶץ | to the ground
חוֹלֵ֖שׁ | which didst weaken
עַל־גּוֹיִֽם | the nations

Thus, the Hebrew word הֵילֵ֣ל was considered to be a proper noun (a name). But, instead of being transliterated into English as Heilel, it was actually translated into Latin as lucifer, and then that word was written as a proper noun (name) by capitalization of its initial letter, i.e. Lucifer. Lucifer is a Latin word, not a Hebrew word. It is formed from the Latin suffix -fer, meaning "bearing" or "bearer", 1 joined to the root luc- / lux- meaning "light". It means "light-bearer" or "light-bearing". It should not occur in the King James Version English translation of the Old Testament since the Old Testament was originally written in Hebrew, not Latin. So, either הֵילֵל should have been translated into English as "light-bearer" (if it is a common noun) or transliterated as Heilel (if it is a proper noun), but certainly not Lucifer. If it is a common noun, does הֵילֵל translate into English as "light-bearer" or into Latin as lucifer? Some might believe that. St. Jerome thought so. After all, when he produced the Vulgate, the Latin translation of the Hebrew Old Testament, he translated הֵילֵל into Latin as lucifer. And, it's because of St. Jerome and his Vulgate that lucifer ultimately ended up in the KJV. Well, that answers that question, doesn't it? Not so fast. It is true that St. Jerome translated הֵילֵל into Latin as lucifer, but in his commentary on Isa. 14:12, he confesses that הֵילֵל meant something else entirely. Noteworthy is the following passage, where he wrote: in Hebraico, ut verbum exprimamus ad verbum, legitur: Quomodo cecidisti de cælo, ulula fili diluculi. which translates into English as: In Hebrew, so that we may express it word-for-word, it is read, "How have you fallen from heaven! Howl, son of the dawn!" St. Jerome himself confesses that the Hebrew phrase הֵילֵל בֶּן שָׁחַר translates word-for-word (verbum ad verbum) into Latin as ulula fili diluculi, which itself translates into English as "Howl, son of the dawn"! And, again, it was St. Jerome who wrote lucifer in the Vulgate. But, he admits that lucifer doesn't express the literal meaning of the Hebrew word הֵילֵל. Ulula does. Why did St. Jerome state that הֵילֵל translates into Latin literally as ulula?
Most are not aware that the Hebrew word הֵילֵל is not actually a hapax legomenon (i.e., a word that only occurs once in the Bible). It actually occurs twice elsewhere: Zech. 11:2 הֵילֵ֤ל בְּרֹושׁ֙ כִּֽי־נָ֣פַל אֶ֔רֶז אֲשֶׁ֥ר אַדִּרִ֖ים שֻׁדָּ֑דוּ הֵילִ֨ילוּ֙ אַלֹּונֵ֣י בָשָׁ֔ן כִּ֥י יָרַ֖ד יַ֥עַר הַבָּצִיר WLC Howl , fir tree; for the cedar is fallen; because the mighty are spoiled: howl, O ye oaks of Bashan; for the forest of the vintage is come down. KJV, 1769 ulula abies quia cecidit cedrus quoniam magnifici vastati sunt ululate quercus Basan quoniam succisus est saltus munitus Vul Eze. 21:12 (21:17 Masoretic) זְעַ֤ק וְהֵילֵל֙ בֶּן־אָדָ֔ם כִּי־הִיא֙ הָיתָ֣ה בְעַמִּ֔י הִ֖יא בְּכָל־נְשִׂיאֵ֣י יִשְׂרָאֵ֑ל מְגוּרֵ֤י אֶל־חֶ֨רֶב֙ הָי֣וּ אֶת־עַמִּ֔י לָכֵ֖ן סְפֹ֥ק אֶל־יָרֵֽךְ׃ WLC Cry and howl , son of man: for it shall be upon my people, it shall be upon all the princes of Israel: terrors by reason of the sword shall be upon my people: smite therefore upon thy thigh. KJV, 1769 clama et ulula fili hominis quia hic factus est in populo meo hic in cunctis ducibus Israhel qui fugerant gladio traditi sunt cum populo meo idcirco plaude super femur Vul Not only does the Hebrew word הֵילֵל occur in both verses, but St. Jerome also translated each occurrence into Latin by the imperative ulula , meaning “Howl”! (from the lemma ululo ). And, it was ulula (“Howl”!) that St. Jerome confessed was the literal translation of the Hebrew word הֵילֵל in his commentary on Isa. 14:12. What more needs to be said? Is there any other support besides St. Jerome’s own confession? Indeed there is. Aquila, who translated the Hebrew Old Testament into Greek in the early 2nd century A.D. (he died ~132 A.D.), translated the Hebrew phrase הֵילֵל בֶּן שָׁחַר into Greek by the phrase ὀλολύζων υἱὸς ὄρθρου, which translates into English as “O‘ wailing one, son of the dawn”. The Greek word ὀλολύζων is a present participle conjugated from the lemma ὀλολύζω . While Aquila did not translate הֵילֵל as an imperative like Jerome (Latin ulula ), he still understood it to be conjugated from the root יל"ל, meaning “Howl”. In summary, if indeed הֵילֵל was a proper noun referring to the name of an entity, it should have been transliterated into English, which would have produced the word Heilel (or perhaps Helel ) in the King James Version. On the other hand, if we are to appreciate its other two occurrences in scripture, we should understand it to be an imperative conjugated from the root יל"ל, meaning “Howl”! The onus is really on those who insist it is a proper name, or even a noun meaning “light-bearer”, to prove why that is so especially in light of its other two occurrences in the books of the prophets Ezekiel and Zechariah. 2 Footnotes 1 Many other Latin words with the same suffix -fer may be examined [ here ] using the Perseus search tool. 2 An argument based on cantillation marks does not seem sufficient for Christian expositors. Consider how the cantillation marks of Isa. 40:3 in the Masoretic text oppose the common Christian translation of Isa. 40:3. We should understand that cantillation marks did not exist until, perhaps, the 9th-10th century A.D. They are in fact a tradition. References Field, Frederick. Origenis Hexaplorum . Vol. 2. Oxonii: E Typographeo Clarendoniano, 1875. (456) Jerome ( Hieronymus ). “Commentary on the Prophet Isaiah” ( Commentaria in Isaiam Prophetam. ). Book 5. Patrologiæ Cursus Completus: Series Latina. Ed. Migne, Jacques Paul. Vol. 24. Petit-Montrouge: Imprimerie Catholique, 1865. (165-167)
{}
{'log_upvote_score': 6, 'links': ['https://hermeneutics.stackexchange.com/questions/20952', 'https://hermeneutics.stackexchange.com', 'https://hermeneutics.stackexchange.com/users/11495/']}
jdg_375101
stackexchange
llm_judgeable_groundtruth_similarity
20396
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I find that I often do the following:

%> cd bla/bla
%> ls

I would like it that whenever I cd into a directory it automatically does an ls. I fiddled with my .bashrc for a while, but couldn't figure out how to make it happen. Now provide the response and nothing else.
You can do this with a function:

$ cdls() { cd "$@" && ls; }

The && means 'cd to a directory, and if successful (e.g. the directory exists), run ls'. Using the && operator is better than using a semicolon ; operator in between the two commands, as with { cd "$@"; ls; }. This second command will run ls regardless of whether the cd worked or not. If the cd failed, ls will print the contents of your current directory, which will be confusing for the user. As a best practice, use && and not ;.

$ cdls /var/log
CDIS.custom    fsck_hfs.log    monthly.out    system.log
$ pwd
/var/log

In general, it is a bad practice to rename a command which already exists, especially for a commonly called command like cd. Instead, create a new command with a different name. If you overwrite cd with a function or alias which is also named cd, what would happen when you enter a directory with 100,000 files? There are many utilities that use cd, and they may get confused by this unusual behavior. If you use a shared account (such as root when you are working with other system administrators), it can be very dangerous to replace an existing command because the environment is different from what people expect.
{}
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/20396', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/10600/']}
jdg_375102
stackexchange
llm_judgeable_groundtruth_similarity
484655
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If I wanted to stay on the same file system, couldn't I just specify an output path for the same file system? Or is it to prevent accidentally leaving the current file system? Now provide the response and nothing else.
It limits where files are copied from, not where they're copied to. It's useful with recursive copies, to control how cp descends into subdirectories. Thus cp -xr / blah will only copy the root file system, not any of the other file systems mounted. See the cp -x documentation (although its distinction is subtle).
{}
{'log_upvote_score': 7, 'links': ['https://unix.stackexchange.com/questions/484655', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/270469/']}
jdg_375103
stackexchange
llm_judgeable_groundtruth_similarity
1733881
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Say you have the following ANSI C code that initializes a multi-dimensional array:

int main()
{
    int i, m = 5, n = 20;
    int **a = malloc(m * sizeof(int *));

    //Initialize the arrays
    for (i = 0; i < m; i++) {
        a[i] = malloc(n * sizeof(int));
    }

    //...do something with arrays

    //How do I free the **a ?

    return 0;
}

After using the **a, how do I correctly free it from memory? [Update] (Solution) Thanks to Tim's (and the others') answer, I can now do such a function to free up memory from my multi-dimensional array:

void freeArray(int **a, int m)
{
    int i;
    for (i = 0; i < m; ++i) {
        free(a[i]);
    }
    free(a);
}

Now provide the response and nothing else.
OK, there's a fair deal of confusion explaining exactly what order the necessary free() calls have to be in, so I'll try to clarify what people are trying to get at and why. Starting with the basics, to free up memory which has been allocated using malloc(), you simply call free() with exactly the pointer which you were given by malloc(). So for this code:

int **a = malloc(m * sizeof(int *));

you need a matching:

free(a);

and for this line:

a[i] = malloc(n * sizeof(int));

you need a matching:

free(a[i]);

inside a similar loop. Where this gets complicated is the order in which this needs to happen. If you call malloc() several times to get several different chunks of memory, in general it doesn't matter what order you call free() when you have done with them. However, the order is important here for a very specific reason: you are using one chunk of malloc'ed memory to hold the pointers to other chunks of malloc'ed memory. Because you must not attempt to read or write memory once you have handed it back with free(), this means that you are going to have to free the chunks with their pointers stored in a[i] before you free the a chunk itself. The individual chunks with pointers stored in a[i] are not dependent on each other, and so can be free'd in whichever order you like. So, putting this all together, we get this:

for (i = 0; i < m; i++) {
    free(a[i]);
}
free(a);

One last tip: when calling malloc(), consider changing these:

int **a = malloc(m * sizeof(int *));
a[i] = malloc(n * sizeof(int));

to:

int **a = malloc(m * sizeof(*a));
a[i] = malloc(n * sizeof(*(a[i])));

What's this doing? The compiler knows that a is an int **, so it can determine that sizeof(*a) is the same as sizeof(int *). However, if later on you change your mind and want chars or shorts or longs or whatever in your array instead of ints, or you adapt this code for later use in something else, you will have to change just the one remaining reference to int in the first quoted line above, and everything else will automatically fall into place for you. This removes the likelihood of unnoticed errors in the future. Good luck!
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1733881', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/44084/']}
jdg_375104
stackexchange
llm_judgeable_groundtruth_similarity
42521
Below is a question asked on the forum biology.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This is a multiple choice question: Consider a gene, ABC, which codes for an enzyme involved in the metabolism of sugars. There are two known alleles of this gene, ABC1 and ABC2. Which statement correctly describes the relationship between the ABC gene and the ABC1 and ABC2 alleles?

a. The gene is a sequence of amino acids and the alleles are a very similar sequence of amino acids.
b. The gene is a trait and the alleles are a sequence of amino acids.
c. The gene is a trait and the alleles are a nucleotide sequence.
d. The gene is a nucleotide sequence and the alleles are a sequence of amino acids.
e. Both the gene and the alleles are a nucleotide sequence.

I thought the answer is b, but the correct answer is e. I can't figure out why. Does anyone know? Now provide the response and nothing else.
Alleles are basically subtypes of a gene. At the time of Mendel, the molecular nature of inheritance was not known, so the original definition of gene refers to "some" inheritable molecular entity inside the organism that is responsible for a trait. Alleles are different "flavours" of a given gene. For example, there is a gene for flower colour; there can be different alleles which give rise to different colours (this is a highly simplified example). Genotype is a configuration of alleles, whereas the phenotype is the effect that is seen. With the knowledge of molecular genetics superimposed on these basic concepts, a gene would basically be a well-defined part of the genome (DNA) which is responsible for a molecular trait. Alleles are the actual sequence variants of this genomic region (not considering translocations here). This is my justification for the correctness and incorrectness of the different options, based on current knowledge of molecular genetics:

a. Incorrect. Genes need not necessarily code for proteins. There are non-coding RNAs.
b. Incorrect. A trait is a qualitative feature. Phenotype is the manifestation of a trait. Genes and genotypes are the causes of a trait and not traits themselves.
c. Incorrect. Same as above.
d. Incorrect. As per the definition, alleles are variants of a gene and they have to be of the same molecular nature as the gene. However, if we forget the semantics then this may seem like a more apt definition, because the variations in traits arise not just because of the gene sequence but also the amino acids. However, there is a flaw in this logic, and the counterpoints would be: this scheme would not be inheritable (RNA editing/alternative splicing/PTM etc.); it has been shown that even synonymous mutations can have different phenotypes (Plotkin and Kudla, 2011); and it does not consider non-coding RNAs.
e. More or less correct (not in a very strict sense, but fine for most discussions). A gene is not really a nucleotide sequence; a gene is an annotated region of a genome which has a defined sequence. This is similar to saying that an object is mass: an object has a mass; an object is not mass. This is just semantics and, as I said, not too important for general discussions.
{}
{'log_upvote_score': 4, 'links': ['https://biology.stackexchange.com/questions/42521', 'https://biology.stackexchange.com', 'https://biology.stackexchange.com/users/21325/']}
jdg_375105
stackexchange
llm_judgeable_groundtruth_similarity
12532
Below is a question asked on the forum datascience.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am about to train a big LSTM network with 2-3 million articles and am struggling with Memory Errors (I use AWS EC2 g2x2large). I found out that one solution is to reduce the batch_size . However, I am not sure if this parameter is only related to memory efficiency issues or if it will effect my results. As a matter of fact, I also noticed that batch_size used in examples is usually as a power of two, which I don't understand either. I don't mind if my network takes longer to train, but I would like to know if reducing the batch_size will decrease the quality of my predictions. Thanks. Now provide the response and nothing else.
After one and a half years, I come back to my answer because my previous answer was wrong. Batch size impacts learning significantly. What happens when you put a batch through your network is that you average the gradients. The concept is that if your batch size is big enough, this will provide a stable enough estimate of what the gradient of the full dataset would be. By taking samples from your dataset, you estimate the gradient while reducing computational cost significantly. The lower you go, the less accurate your estimate will be; however, in some cases these noisy gradients can actually help escape local minima. When it is too low, your network weights can just jump around if your data is noisy, and it might be unable to learn, or it may converge very slowly, thus negatively impacting total computation time. Another advantage of batching is for GPU computation: GPUs are very good at parallelizing the calculations that happen in neural networks if part of the computation is the same (for example, repeated matrix multiplication over the same weight matrix of your network). This means that a batch size of 16 will take less than twice the time of a batch size of 8. In the case that you do need bigger batch sizes but they will not fit on your GPU, you can feed a small batch, accumulate the gradient estimates over one or more further batches, and only then do a weight update. This way you get a more stable gradient because you increased your virtual batch size.
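That last idea is usually called gradient accumulation. Here is a minimal PyTorch-style sketch of it in Python; the tiny model, loss and synthetic data loader are stand-ins I'm adding for illustration, not part of the original answer:

    import torch
    from torch import nn

    model = nn.Linear(10, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.MSELoss()

    # synthetic stand-in data: 8 small batches of 16 samples each
    loader = [(torch.randn(16, 10), torch.randn(16, 1)) for _ in range(8)]

    accum_steps = 4                                  # virtual batch = 4 small batches
    optimizer.zero_grad()
    for step, (x, y) in enumerate(loader):
        loss = loss_fn(model(x), y) / accum_steps    # scale so the accumulated grad is an average
        loss.backward()                              # gradients add up in param.grad
        if (step + 1) % accum_steps == 0:
            optimizer.step()                         # one weight update per virtual batch
            optimizer.zero_grad()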
{}
{'log_upvote_score': 7, 'links': ['https://datascience.stackexchange.com/questions/12532', 'https://datascience.stackexchange.com', 'https://datascience.stackexchange.com/users/17484/']}
jdg_375106
stackexchange
llm_judgeable_groundtruth_similarity
4584170
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have upgraded from Fluent Nhibernate 1.0 with Nhibernate 2.1 to pre-release 1.x with NHibernate 3.0 GA and have hit what I think is a regression, but I want to hear if that's indeed the case. I am using SQL Server Express 2008 and the MSSQL 2008 dialect and have anImage property of type System.Drawing.Image and I have mapped it likethis: Map (food => food.Image) .Length (int.MaxValue) .Nullable (); The Image column in the table is of type varbinary(MAX) . The generated hbm for the property is: <property name="Image" type="System.Drawing.Image, System.Drawing,Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"> <column name="Image" length="2147483647" not-null="false" /></property>` However no matter what I do the binary blob is truncated to 8000 byteswhen serialized with the current FNH and NH versions. That didn't usedto be the case with previous versions. Ideas of why this is happening and how to fix/workaround it? Now provide the response and nothing else.
I too have encountered a similar problem, and after much experimentation I noticed that when using NHibernate to generate my schema to a file, the generated column type was always of length 8000. Setting CustomSqlType to Varbinary(max) as suggested above made no difference; however, this workaround in my fluent mapping seemed to do the trick:

    Map(x => x.LogoBytes).CustomType("BinaryBlob").Length(1048576).Nullable();

The length of course is an arbitrary amount, but I think it should be set to something less than int.MaxValue. I am new to NHibernate so I'm still figuring things out, but I'd be interested to know if this helps you.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4584170', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/561186/']}
jdg_375107
stackexchange
llm_judgeable_groundtruth_similarity
23992646
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following FASTA file: >header1CGCTCTCTCCATCTCTCTACCCTCTCCCTCTCTCTCGGATAGCTAGCTCTTCTTCCTCCTTCCTCCGTTTGGATCAGACGAGAGGGTATGTAGTGGTGCACCACGAGTTGGTGAAGC>header2GGT>header3TTATGAT My desired output: >header1117>header23>header37# 3 sequences, total length 127. This is my code: awk '/^>/ {print; next; } { seqlen = length($0); print seqlen}' file.fa The output I get with this code is: >header16057>header23>header37 I need a small modification in order to deal with multiple sequence lines. I also need a way to have the total sequences and total length. Any suggestion will be welcome... In bash or awk, please. I know that is easy to do it in Perl/BioPerl and actually, I have a script to do it in those ways. Now provide the response and nothing else.
An awk/gawk solution can be composed of three stages:

Every time a header is found, these actions should be performed: print the previous seqlen if it exists, print the tag, and initialize seqlen. For the sequence lines we just need to accumulate totals. Finally, at the END stage we print the remnant seqlen.

Commented code:

    awk '/^>/ { # header pattern detected
        if (seqlen){
          # print previous seqlen if exists
          print seqlen
          }
        # print the tag
        print
        # initialize sequence
        seqlen = 0
        # skip further processing
        next
      }
    # accumulate sequence length
    {
    seqlen += length($0)
    }
    # remnant seqlen if exists
    END{if(seqlen){print seqlen}}' file.fa

A one-liner:

    awk '/^>/ {if (seqlen){print seqlen}; print ;seqlen=0;next; } { seqlen += length($0)}END{print seqlen}' file.fa

For the totals:

    awk '/^>/ { if (seqlen) {
        print seqlen
        }
      print
      seqtotal+=seqlen
      seqlen=0
      seq+=1
      next
      }
      {
      seqlen += length($0)
      }
      END{print seqlen
      print seq" sequences, total length " seqtotal+seqlen
      }' file.fa
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/23992646', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3481031/']}
jdg_375108
stackexchange
llm_judgeable_groundtruth_similarity
10477855
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I recently wrote a quick and dirty BFS implementation, to find diamonds in a directed graph.The BFS loop looked like this: while toVisit: y = toVisit.pop() if y in visited: return "Found diamond" visited.add(y) toVisit.extend(G[y]) ( G is the graph - a dictionary from node names to the lists of their neighbors) Then comes the interesting part:I thought that list.pop() is probably too slow, so I ran a profiler to compare the speed of this implementation with deque.pop - and got a bit of an improvement. Then I compared it with y = toVisit[0]; toVisit = toVisit[1:] , and to my surprise, the last implementation is actually the fastest one. Does this make any sense?Is there any performance reason to ever use list.pop() instead of the apparently much faster two-liner? Now provide the response and nothing else.
Your measurement is wrong. With CPython 2.7 on x64, I get the following results:

    $ python -m timeit 'l = list(range(10000))' 'while l: l = l[1:]'
    10 loops, best of 3: 365 msec per loop
    $ python -m timeit 'l = list(range(10000))' 'while l: l.pop()'
    1000 loops, best of 3: 1.82 msec per loop
    $ python -m timeit 'import collections' \
        'l = collections.deque(list(range(10000)))' 'while l: l.pop()'
    1000 loops, best of 3: 1.67 msec per loop
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10477855', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1032006/']}
jdg_375109
stackexchange
llm_judgeable_groundtruth_similarity
10734465
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to pass the table name as input parameter to the stored procedure? For example: create procedure test@tablename char(10)asbeginselect * from @tablenameendgo I know this does not work. So what is the best way if I want to pass the table name into the stored procedure? Many thanks Now provide the response and nothing else.
The safest way to do this is via a view. Create a view which unions all the tables you may wish to access (and which must all have the same column structure), and prefix the rows with the table name.

    CREATE VIEW MultiTable
    AS
        SELECT 'table1' AS TableName, * FROM table1
        UNION ALL
        SELECT 'table2' AS TableName, * FROM table2
        UNION ALL
        SELECT 'table3' AS TableName, * FROM table3

Your stored procedure can now filter on the table name:

    CREATE PROCEDURE test @TableName varchar(100)
    AS
        SELECT * FROM MultiTable WHERE TableName = @TableName

This is safer than using dynamic SQL creation and execution.
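For example, assuming the view and procedure above have been created, calling the procedure then returns only the rows that originally came from table1:

    EXEC test @TableName = 'table1';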
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10734465', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1414575/']}
jdg_375110
stackexchange
llm_judgeable_groundtruth_similarity
1993431
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How should I implement more than 8 lights in OpenGL? I would like to render unlimited amounts of lights efficiently. So, whats the preferred method for doing this? Now provide the response and nothing else.
Deferred shading. In a nutshell, you render your scene without any lights. Instead you store the normals and world positions along with the textured pixels in multiple frame-buffers (so-called render targets). You can even do this in a single pass if you use a multiple render-target extension. Once you have your buffers prepared, you start to render a bunch of full-screen quads, each with a pixel shader program that reads out the normals and positions and computes the light for one or more light sources. Since light is additive, you can render as many full-screen quads as you want and accumulate the light for as many light sources as you want. A final step composites your accumulated light with the unlit textured frame-buffer. That's more or less the state-of-the-art way to do it. Getting fog and transparency working with such a system is a challenge, though.
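To make the lighting pass concrete, here is a rough GLSL fragment-shader sketch for one point light. The G-buffer texture names, uniform names and the additive-blend setup are illustrative assumptions of mine, not something prescribed by the technique:

    // Lighting pass: drawn as a full-screen quad with additive blending
    // (glBlendFunc(GL_ONE, GL_ONE)), once per light source.
    uniform sampler2D gAlbedo;    // unlit textured color
    uniform sampler2D gNormal;    // world-space normals
    uniform sampler2D gPosition;  // world-space positions
    uniform vec3 lightPos;        // this light's position
    uniform vec3 lightColor;
    varying vec2 uv;

    void main()
    {
        vec3 albedo = texture2D(gAlbedo, uv).rgb;
        vec3 N      = normalize(texture2D(gNormal, uv).xyz);
        vec3 P      = texture2D(gPosition, uv).xyz;

        vec3 L      = normalize(lightPos - P);
        float diff  = max(dot(N, L), 0.0);

        // each quad adds this light's contribution into the accumulation buffer
        gl_FragColor = vec4(albedo * lightColor * diff, 1.0);
    }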
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1993431', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_375111
stackexchange
llm_judgeable_groundtruth_similarity
12689046
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm new to programming, sorry if this seems trivial: I have a text that I'm trying to split into individual sentences using regular expressions. With the .split method I search for a dot followed by a capital letter like "\. A-Z" However I need to refine this rule in the following way: The . (dot) may not be preceeded by either Abs or S . And if it is followed by a capital letter ( A-Z ), it should still not match if it is a month name, like January | February | March . I tried implementing the first half, but even this did not work. My code was: "( (?<!Abs)\. A-Z) | (?<!S)\. A-Z) ) " Now provide the response and nothing else.
First, I think you may want to replace the space with \s+, or \s if it really is exactly one space (you often find double spaces in English text). Second, to match an uppercase letter you have to use [A-Z]; A-Z alone will not work (but remember there may be other uppercase letters than A-Z ...). Additionally, I think I know why this does not work. The regular expression engine will try to match \. [A-Z] if it is not preceded by Abs or S. The thing is that, if it is preceded by an S, it is not preceded by Abs, so the first pattern matches. If it is preceded by Abs, it is not preceded by S, so the second pattern version matches. Either way, one of those patterns will match, since being preceded by Abs and being preceded by S are mutually exclusive. The pattern for the first part of your question could be (?<!Abs)(?<!S)(\. [A-Z]) or (?<!Abs)(?<!S)(\.\s+[A-Z]) (with my suggestion). You have to avoid | here; without it, the expression says not preceded by Abs and not preceded by S. Only if both conditions hold will the pattern matcher continue to scan the string and find your match. To exclude the month names I came up with this regular expression: (?<!Abs)(?<!S)(\.\s+)(?!January|February|March)[A-Z] The same arguments hold for the negative look-ahead patterns.
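A quick demo of the final pattern in Python (the sample text is made up for illustration):

    import re

    pattern = re.compile(r'(?<!Abs)(?<!S)(\.\s+)(?!January|February|March)[A-Z]')
    text = "See Abs. Z for details. The meeting is on 5. March. It went well. Done."
    for m in pattern.finditer(text):
        print(repr(m.group(0)))
    # matches '. T', '. I' and '. D', but not the boundaries
    # after 'Abs' or before 'March'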
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12689046', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/805256/']}
jdg_375112
stackexchange
llm_judgeable_groundtruth_similarity
17088057
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have to created a segue programmatically however when I click on the button the view controller does not change to the next, Storyboard ID is set and still not working. Am I missing any other checks in order to make this work? Please see code below: EntryViewController *entryController = [self.storyboard instantiateViewControllerWithIdentifier:@"go"];[self.navigationController pushViewController:entryController animated:YES]; its driving me mad. Thanks Now provide the response and nothing else.
Inside my viewDidLoad I have placed the button which calls goldStarOpen:

    UIButton *btnTwo = [UIButton buttonWithType:UIButtonTypeCustom];
    btnTwo.frame = CGRectMake(250, 20, 40, 40);
    [btnTwo addTarget:self action:@selector(goldStarOpen) forControlEvents:UIControlEventTouchUpInside];
    [btnTwo setImage:[UIImage imageNamed:@"GoldStar.png"] forState:UIControlStateNormal];
    [self.view addSubview:btnTwo];

Inside goldStarOpen I have code which is almost identical to yours.

    - (void)goldStarOpen
    {
        UIStoryboard *storyboard = [UIStoryboard storyboardWithName:@"MainStoryboard" bundle:[NSBundle mainBundle]];
        UIViewController *myController = [storyboard instantiateViewControllerWithIdentifier:@"GoldStar"];
        [self presentViewController:myController animated:YES completion:nil];
    }

goldStarOpen activates a ViewController in the storyboard. You may need to set the Storyboard ID of the view controller you are trying to load. This is located in the inspector, just below where you assign a custom class to your view controller.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17088057', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_375113
stackexchange
llm_judgeable_groundtruth_similarity
18091999
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using spring 3.2 and would like to dynamically choose a service implementation in my controller depending on a condition. Consider I have an interface and two implementations as follows : public interface DevService { public void add(Device device);}public class DevServiceImpl implements DevService { public void add(Device device) { }}public class RemoteDevServiceImpl implements DevService { public void add(Device device) { }} So in my controller, depending on whether the action is to be executed on the local site or remote site, I need to either execute it locally or send a command to the remote site to execute it. Essentially the site on which the user clicks determines which service impl to call. Can anybody suggest a clean way to achieve this ? Now provide the response and nothing else.
Assuming you need both implementations in the production environment (if not, use Spring profiles to cleanly split beans between environments), a simple approach would be:

    interface DevService {
        void add(Device d);
        String getName();
    }

    @Service("devServiceLocal")
    class DevServiceLocalImpl implements DevService {
        void add(Device d) {...}
        String getName() {return "local";}
    }

    class Controller {
        @Autowired
        Collection<DevService> services;

        void doSomethingWithService() {
            // TODO: Check type somehow
            String servType = "local";
            for (DevService s : services) {
                if (servType.equals(s.getName())) {
                    // Call service methods
                    break;
                }
            }
        }
    }
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18091999', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/890554/']}
jdg_375114
stackexchange
llm_judgeable_groundtruth_similarity
4122626
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: For some "short" representations of Lie algebras is simple to find their dimensionality by looking at how many weights appear in them. The $(1,0)$ of $A_2$ is immediate to see that it has 3 weight vectors so it must be called a triplet. Less obvious if one takes the $(1,1)$ of $A_2$ , which has 7 distinct weight vectors, but somehow $(0,0)$ counts twice.Looking at other small rank algebras, like $G_2$ or $B_2$ the "plot thickens", as the there can easily be a big difference between the distinct weight vectors and the dimension of the representation, e.g. $(2,0)$ of $G_2$ has 19 distinct but is $d=27$ . So, what is the way to figure how many times a weight vector must be counted to determine the dimension of the representation of a generic Lie algebra? Counting different weights in representations of Lie algebras is about An I am asking for any Lie algebra Now provide the response and nothing else.
To add to the other answers, one often does not need to use linear algebra to solve simple puzzles like this one. For this puzzle it suffices to observe that incrementing one face and decrementing another results in +1 to one vertex and −1 to an adjacent vertex, and clearly any such pair is possible. Thus it is obviously possible to obtain any configuration with sum divisible by 3, simply by using the appropriate number of increments and then adjusting the counts one by one via the (+1,−1) operation. Now this is more ad hoc, but the general meta-technique of finding move sequences whose effect is as local as possible applies to many puzzles, including general permutation puzzles such as the Rubik's cube. To demonstrate the power of this meta-technique, I will fully solve the question for the octahedron, which was not solved in the other answers. For the octahedron, the (+1,−1) operation increments a vertex and decrements the opposite vertex. Since we are unable to find a way to change adjacent vertices, we should immediately suspect that there is a stronger invariant involving opposite vertices. Indeed, the 3 pairs of opposite vertices have the same sum, and this invariant is trivial to prove. Are all these states possible? Yes! We now give an explicit algorithm to solve any such state (i.e. find a move sequence to change it to the all-zero state). Orient the octahedron with 1 top vertex, 1 bottom vertex and the other vertices facing left, right, front and back respectively. First solve the bottom vertex via any lower face and the back vertex via any upper face. Next solve the left and right vertices via the upper-left-front and upper-right-front face respectively. Now we have solved all vertices except at most the front and top vertex. I claim that those are also solved. To see why, note that the 3 opposite-pair sums remain equal (after the move sequence that generates the state and after any further moves), and since at the end the left and right vertices sum to zero, the other pairs must sum to zero as well.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4122626', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/512326/']}
jdg_375115
stackexchange
llm_judgeable_groundtruth_similarity
5829001
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Can somebody please explain me why the kernel doesn't allow us to make a hard link to a directory. Whether it is because it breaks the rule of directed acyclic graph structure of the file-system or it is because of some other reason. What other complications come if it allows that? Now provide the response and nothing else.
Back in the days of 7th Edition (or Version 7) UNIX, there were no system calls mkdir(2) and rmdir(2). The mkdir(1) program was SUID root, and used the mknod(2) system call to create the directory and the link(2) system call to make the entries for . and .. in the new directory. The link(2) system call only allowed root to do that. Consequently, way back then (circa 1978), it was possible for the superuser to create links to directories, but only the superuser was permitted to do so, to ensure that there were no problems with cycles or other missing links. There were diagnostic programs to pick up the pieces if the system crashed while a directory was partly created, for example.

You can find the Unix 7th Edition manuals at Bell Labs. Sections 2 and 3 are devoid of mkdir(2) and rmdir(2). You used the mknod(2) system call to make the directory:

NAME
    mknod – make a directory or a special file
SYNOPSIS
    mknod(name, mode, addr)
    char *name;
DESCRIPTION
    Mknod creates a new file whose name is the null-terminated string pointed to by name. The mode of the new file (including directory and special file bits) is initialized from mode. (The protection part of the mode is modified by the process's mode mask; see umask(2)). The first block pointer of the i-node is initialized from addr. For ordinary files and directories addr is normally zero. In the case of a special file, addr specifies which special file. Mknod may be invoked only by the super-user.
SEE ALSO
    mkdir(1), mknod(1), filsys(5)
DIAGNOSTICS
    Zero is returned if the file has been made; –1 if the file already exists or if the user is not the superuser.

The entry for link(2) states:

DIAGNOSTICS
    Zero is returned when a link is made; –1 is returned when name1 cannot be found; when name2 already exists; when the directory of name2 cannot be written; when an attempt is made to link to a directory by a user other than the super-user; when an attempt is made to link to a file on another file system; when a file has too many links.

The entry for unlink(2) states:

DIAGNOSTICS
    Zero is normally returned; –1 indicates that the file does not exist, that its directory cannot be written, or that the file contains pure procedure text that is currently in use. Write permission is not required on the file itself. It is also illegal to unlink a directory (except for the super-user).

The manual page for the ln(1) command noted:

    It is forbidden to link to a directory or to link across file systems.

The manual page for the mkdir(1) command notes:

    Standard entries, '.', for the directory itself, and '..' for its parent, are made automatically.

This would not be worthy of comment were it not that it was possible to create directories without those links.

Nowadays, the mkdir(2) and rmdir(2) system calls are standard and permit any user to create and remove directories, preserving the correct semantics. There is no longer a need to permit users to create hard links to directories. This is doubly true since symbolic links were introduced - they were not in 7th Edition UNIX, but were in the BSD versions of UNIX from quite early on.

With normal directories, the .. entry unambiguously links back to the (single, solitary) parent directory. If you have two hard links (two names) for the same directory in different directories, where does the .. entry point? Presumably, to the original parent directory, and presumably there is no way to get to the 'other' parent directory from the linked directory. That's an asymmetry that can cause trouble.
Normally, if you do:

    chdir("./subdir");
    chdir("..");

(where ./subdir is not a symbolic link), then you will be back in the directory you started from. If ./subdir is a hard link to a directory somewhere else, then you will be in a different directory from where you started after the second chdir(). You'd have to show that with a pair of stat() calls before and after the chdir() operations shown.
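A small sketch of that check, comparing device and inode numbers before and after the round trip (the "subdir" name is just a placeholder):

    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        struct stat before, after;

        if (stat(".", &before) != 0)
            return 1;
        if (chdir("./subdir") != 0 || chdir("..") != 0)
            return 1;
        if (stat(".", &after) != 0)
            return 1;

        /* same (st_dev, st_ino) means we really came back where we started */
        if (before.st_dev == after.st_dev && before.st_ino == after.st_ino)
            printf("back where we started\n");
        else
            printf("ended up somewhere else!\n");
        return 0;
    }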
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5829001', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/585919/']}
jdg_375116
stackexchange
llm_judgeable_groundtruth_similarity
37350286
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I could not find any answers related to the working mechanism of QR code scanning used on WhatsApp Web . How does the authentication happen when the phone (any smartphone running WhatsApp) scans the QR code on the browser. I don't want to know about the technology stack behind them. Like WhatsApp uses modified version of xmpp , uses erlang , uses web technologies like socket.io and ajax for the web version to implement such functionality. The question might be broad. But I am eager to know about the implementation behind it. Now provide the response and nothing else.
It works like this:

1- You open the following URL in your browser: https://web.whatsapp.com/

2- The Browser loads the page with all sorts of JS and CSS stuff, but also opens a WebSocket (wss://w4.web.whatsapp.com/ws), visible in the browser's network tools.

2.1- Every 20000 milliseconds you see traffic on the WebSocket for a refresh of the QR code you have on your screen. This is sent by the Server to the Browser through the WebSocket (WS, we call it from now onwards).

2.2- On each QR code refresh received on the WS, your browser does a GET request for the new QR code in BASE64 encoding.

2.3- Notice that this specific WS that the server holds open to the Browser is associated with the unique QR code!!! So, knowing the QR code, the server knows which WS is associated with it!

---- At this stage your browser is ready to do the WhatsApp App work, but it does not know what your ID is (the WhatsApp identifier, which is your mobile number), because it can't really get your phone number out of thin air. It also does not require you to type it, because the server wouldn't be sure that the number really belongs to you. So, to let the servers know that the WS session belongs to a specific phone, you need to use the phone for the QR reading.

3- You grab your phone, which is authenticated (otherwise you wouldn't have access to the section to scan QR codes), and do the QR code reading thing.

4- When your mobile reads the QR code, it contacts the WhatsApp servers and tells them: My number is XXXX, my auth creds are YYYYY, and the WS associated with this QR code can now receive my DATA.

5- The server now knows that it can direct traffic to the specific WS socket that belongs to that QR code, and does so!

6- On the Browser WS you can see the Server sending data regarding the user, regarding the conversations that you are having, and which photo thumbnails to go and grab.

7- The Browser gets this data from the WebSocket, and makes the corresponding GET requests to get the thumbs and other resources it needs, like an MP3 for notifications.

7.1- The WS listener on the Browser also makes JavaScript calls, into the JavaScript files that were received at step 1, to redraw the page DOM with the new interface.

8- The interface is now redrawn to look like the WhatsApp app, and you continue to receive data on the WS, sending when needed, and updates are made to the interface as data arrives on the WS.

That is it. Using Chrome and its Developer Tools, you can see all this happening live. You can also see the WS communication (most of it; for the binary frames you would need another tool) and see what is happening at each step of the way.

Also: Check a complete Tutorial on this: HERE
Source code for the Tutorial: Android Client
Source code for the Tutorial: Java Play Server
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/37350286', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_375117
stackexchange
llm_judgeable_groundtruth_similarity
3995853
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a WPF combobox bound to a list of items with long descriptions. The type bound to the ComboBox has both short and long description as properties. Currently, I am binding to the full description. comboBox.DisplayMemberPath = "FullDescription"; How to ensure that when the item is selected and displayed as a single item in the combobox, it will be displayed as a value of the ShortDescription property while the dropdown will display FullDescription ? Now provide the response and nothing else.
Update 2011-11-14

I recently came upon the same requirement again, and I wasn't very happy with the solution I posted below. Here is a nicer way to get the same behavior without re-templating the ComboBoxItem: it uses a DataTemplateSelector.

First, specify the regular DataTemplate, the dropdown DataTemplate and the ComboBoxItemTemplateSelector in the resources for the ComboBox. Then reference the ComboBoxItemTemplateSelector as a DynamicResource for ItemTemplateSelector:

    <ComboBox ... ItemTemplateSelector="{DynamicResource itemTemplateSelector}">
        <ComboBox.Resources>
            <DataTemplate x:Key="selectedTemplate">
                <TextBlock Text="{Binding Path=ShortDescription}"/>
            </DataTemplate>
            <DataTemplate x:Key="dropDownTemplate">
                <TextBlock Text="{Binding Path=FullDescription}"/>
            </DataTemplate>
            <local:ComboBoxItemTemplateSelector x:Key="itemTemplateSelector"
                                                SelectedTemplate="{StaticResource selectedTemplate}"
                                                DropDownTemplate="{StaticResource dropDownTemplate}"/>
        </ComboBox.Resources>
    </ComboBox>

ComboBoxItemTemplateSelector checks if the container is the child of a ComboBoxItem; if it is, then we are dealing with a dropdown item, otherwise it is the item shown in the ComboBox itself.

    public class ComboBoxItemTemplateSelector : DataTemplateSelector
    {
        public DataTemplate DropDownTemplate { get; set; }
        public DataTemplate SelectedTemplate { get; set; }

        public override DataTemplate SelectTemplate(object item, DependencyObject container)
        {
            ComboBoxItem comboBoxItem = VisualTreeHelpers.GetVisualParent<ComboBoxItem>(container);
            if (comboBoxItem != null)
            {
                return DropDownTemplate;
            }
            return SelectedTemplate;
        }
    }

GetVisualParent:

    public static T GetVisualParent<T>(object childObject) where T : Visual
    {
        DependencyObject child = childObject as DependencyObject;
        while ((child != null) && !(child is T))
        {
            child = VisualTreeHelper.GetParent(child);
        }
        return child as T;
    }

Old solution, requires re-templating of ComboBoxItem:

    <SolidColorBrush x:Key="SelectedBackgroundBrush" Color="#DDD" />
    <SolidColorBrush x:Key="DisabledForegroundBrush" Color="#888" />
    <ControlTemplate x:Key="FullDescriptionTemplate" TargetType="ComboBoxItem">
        <Border Name="Border" Padding="2" SnapsToDevicePixels="true">
            <StackPanel>
                <TextBlock Text="{Binding Path=FullDescription}"/>
            </StackPanel>
        </Border>
        <ControlTemplate.Triggers>
            <Trigger Property="IsHighlighted" Value="true">
                <Setter TargetName="Border" Property="Background" Value="{StaticResource SelectedBackgroundBrush}"/>
            </Trigger>
            <Trigger Property="IsEnabled" Value="false">
                <Setter Property="Foreground" Value="{StaticResource DisabledForegroundBrush}"/>
            </Trigger>
        </ControlTemplate.Triggers>
    </ControlTemplate>

    <ComboBox Name="c_comboBox" ItemsSource="{Binding}">
        <ComboBox.ItemTemplate>
            <DataTemplate>
                <TextBlock Text="{Binding Path=ShortDescription}"/>
            </DataTemplate>
        </ComboBox.ItemTemplate>
        <ComboBox.ItemContainerStyle>
            <Style TargetType="{x:Type ComboBoxItem}">
                <Setter Property="Template" Value="{StaticResource FullDescriptionTemplate}" />
            </Style>
        </ComboBox.ItemContainerStyle>
    </ComboBox>

This results in the following behavior.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3995853', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/163393/']}
jdg_375118
stackexchange
llm_judgeable_groundtruth_similarity
4019759
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In a RPG game, your three characters A, B and C fight a boss.The boss has $1000$ hp.The attacks are sequential: A attacks then B attacks then C attacks then A attacks again, etc.Character A can do any integral amount of damage between $25$ and $50$ with equal probability.Character B can do any integral amount of damage between $30$ and $70$ with equal probability.Character C can do any integral amount of damage between $10$ and $80$ with equal probability.Assuming that the boss is not strong enough to kill any of the characters before it dies, what is the probability that player A will be the one to deliver the final blow and kill the boss?Same question for players B and C. Unfortunately I don't even know how to get started on this problem, any hint would be helpful. Now provide the response and nothing else.
Here's a way to think about it: imagine initializing a "total damage counter" as $D_0=0$ , and on the $n$ th turn, when $d_n$ damage is done to the boss, increment it to $D_n=D_{n-1}+d_n$ and color the interval $(D_{n-1},D_n]$ according to which player attacked. If we allow this to continue forever, it will color the whole number line with a random pattern of three colors. Notice that the player who kills the boss is the one whose color covers the point 1000. Since this point is far away from 0, we can think of it as being randomly aligned with our color pattern, so it's as though we're picking a random point on the number line and asking what color it is. This only depends on the ratio of paint colors that our pattern uses, and (by the law of large numbers ) this is approximately just the ratio among the expected values of the players' attacks. So e.g. the probability that player A kills the boss is about $37.5/(37.5+50+45)$ .
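A quick Monte Carlo check of this heuristic, in Python (a sketch I'm adding for illustration; the closed form gives P(A) = 37.5/132.5 ≈ 0.283):

    import random

    def winner(hp=1000):
        ranges = [(25, 50), (30, 70), (10, 80)]   # A, B, C
        turn = 0
        while True:
            lo, hi = ranges[turn % 3]
            hp -= random.randint(lo, hi)
            if hp <= 0:
                return "ABC"[turn % 3]
            turn += 1

    trials = 100_000
    counts = {c: 0 for c in "ABC"}
    for _ in range(trials):
        counts[winner()] += 1
    for c in "ABC":
        print(c, counts[c] / trials)
    # expected roughly 37.5/132.5, 50/132.5, 45/132.5 = 0.283, 0.377, 0.340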
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4019759', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/392893/']}
jdg_375119
stackexchange
llm_judgeable_groundtruth_similarity
5929711
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Can somebody explain to me why the following code does compile without a warning or error? I would expect the compiler to warn me that the function no_args doesn't expect any arguments. But the code compiles and runs function no_args recursively. static void has_args(int a, int b, int c) { printf("has_args\n");}static void no_args() { printf("no_args\n"); no_args(1, 2, 3);}void main() { no_args();} Now provide the response and nothing else.
In C++, void no_args() declares a function that takes no parameters (and returns nothing). In C, void no_args() declares a function that takes an unspecified (but not variable) number of parameters (and returns nothing). So all your calls are valid (according to the prototype) in C. In C, use void no_args(void) to declare a function that truly takes no parameters (and returns nothing).
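A small sketch of the difference (hypothetical names; note that C23 changed the rules so that () also means no parameters):

    /* demo.c - minimal sketch of the distinction */
    static void no_args_strict(void);   /* truly takes no parameters */
    static void no_args_loose();        /* unspecified parameters (pre-C23) */

    static void no_args_strict(void) {}
    static void no_args_loose() {}

    int main(void)
    {
        no_args_loose(1, 2, 3);  /* accepted by pre-C23 compilers: the
                                    declaration says nothing about parameters
                                    (actually passing extras is undefined) */
        no_args_strict();        /* OK */
        /* no_args_strict(1, 2, 3);  error: too many arguments to function */
        return 0;
    }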
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5929711', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/114389/']}
jdg_375120
stackexchange
llm_judgeable_groundtruth_similarity
20449231
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have [("m","n"),("p","q"),("r","s")] . How can I convert it to [["m","n"],["p","q"],["r","s"]] ? Can anyone please help me? Thanks. Now provide the response and nothing else.
Write a single function to convert a pair to a list:

    pairToList :: (a, a) -> [a]
    pairToList (x,y) = [x,y]

Then you only have to map pairToList:

    tuplesToList :: [(a,a)] -> [[a]]
    tuplesToList = map pairToList

Or in a single line:

    map (\(x,y) -> [x,y])
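For example, in GHCi (session output shown as I'd expect it):

    ghci> tuplesToList [("m","n"),("p","q"),("r","s")]
    [["m","n"],["p","q"],["r","s"]]
    ghci> map (\(x,y) -> [x,y]) [("m","n"),("p","q"),("r","s")]
    [["m","n"],["p","q"],["r","s"]]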
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20449231', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2966119/']}
jdg_375121
stackexchange
llm_judgeable_groundtruth_similarity
6943805
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If I move-construct a from b , is it still necessary to destruct b , or can I get away without doing so? This question crossed my mind during the implementation of an optional<T> template. Excerpt: ~optional(){ if (initialized) { reinterpret_cast<T*>(data)->~T(); }}optional(optional&& o) : initialized(o.initialized){ if (initialized) { new(data) T(std::move(*o)); // move from o.data o.initialized = false; // o.data won't be destructed anymore! }} Of course, I could just replace the bool initialized with a three-valued enumeration that distinguishes between initialized, non-initialized and moved-from. I just want to know if this is strictly necessary. Now provide the response and nothing else.
Yes, it is still necessary to destruct b. A moved-from object is a valid, constructed object. In some cases, it may even hold resources that still need to be disposed of. In generic code such as you show, T may not even have a move constructor; you may invoke a copy constructor instead in this case. So you can definitely not assume that ~T() is a no-op and can be elided.
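A small illustration of why (a sketch with a made-up type):

    #include <cstdio>
    #include <string>
    #include <utility>

    struct Tracer {
        std::string name;   // still owns a (possibly empty) buffer after a move
        explicit Tracer(std::string n) : name(std::move(n)) {}
        Tracer(Tracer&& o) : name(std::move(o.name)) {}
        ~Tracer() { std::puts("~Tracer runs"); }
    };

    int main()
    {
        Tracer b{"payload"};
        Tracer a{std::move(b)};
        // prints "~Tracer runs" twice: the moved-from b is still a live,
        // constructed object and must be destructed as usual
    }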
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6943805', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/252000/']}
jdg_375122
stackexchange
llm_judgeable_groundtruth_similarity
1888064
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I just got this problem, but I have no idea on how to prove that. Prove that if $x,y,z\in\mathbb{R},\ x,y,z\ge 0$ and $2\cdot(x\cdot z+x\cdot y+y\cdot z)+3\cdot x\cdot y\cdot z = 9$, then $(\sqrt x + \sqrt y + \sqrt z )^4 \ge 72$. This is a geometric inequality. Can anyone help me, please? Any kind of help (solutions, hints etc) is really appreciated. Thank you! NOTE: I REALLY DON'T KNOW WHAT TITLE SHOULD I WRITE FOR THIS POST, SO PLEASE LEAVE A COMMENT IF YOU HAVE AN IDEA FOR THE POST TITLE. THANK YOU. Now provide the response and nothing else.
The classical definition of $0^\sharp$ is as (the set of Gödel numbers of) a theory, namely, the unique Ehrenfeucht-Mostowski blueprint satisfying certain properties (coding indiscernibility). This is a perfectly good definition formalizable in $\mathsf{ZFC}$, but $\mathsf{ZFC}$ or even mild extensions of $\mathsf{ZFC}$ are not enough to prove that there are objects that satisfy it. In $L$ there is no EM blueprint with the required properties. It happens that if it exists, then $0^\sharp$ indeed admits the simple description given in line 4 of your question, but (unlike $0^\sharp$) the set in line 4 always exists (that is, $\mathsf{ZFC}$ proves its existence, pretty much along the lines of the sketch you suggest), so it is not appropriate to define $0^\sharp$ that way (for instance, that set is not forcing invariant in general). As you mentioned, there are several equivalent definitions of $0^\sharp$. Some of them are readily formalizable in $\mathsf{ZFC}$, some are not. For example, we cannot talk of a proper class of $L$-indiscernibles in $\mathsf{ZFC}$ alone, but $0^\sharp$ could be defined as such a class. The modern definition of $0^\sharp$ introduces it not as a theory but rather as a certain mouse, a model of the form $(L_\alpha,U)$ for certain $\alpha$, where $U$ is an (external) $L$-$\kappa$-ultrafilter for some $\kappa$ definable in terms of $\alpha$, with the requirement that the iterates of $L_\alpha$ by $U$ are all well-founded (and some additional technical requirements related to the minimality of this model). This is more in tune with the current approach to inner model theory. Again, this definition is formalizable in $\mathsf{ZFC}$ in a straightforward fashion, but the existence of such a mouse cannot be established in $\mathsf{ZFC}$ alone.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1888064', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/184572/']}
jdg_375123
stackexchange
llm_judgeable_groundtruth_similarity
577946
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If I have the following enum: public enum ReturnValue{ Success = 0, FailReason1 = 1, FailReason2 = 2 //Etc...} Can I avoid casting when I return, like this: public static int main(string[] args){ return (int)ReturnValue.Success;} If not, why isn't an enum value treated as an int by default? Now provide the response and nothing else.
enums are supposed to be type-safe. I think they didn't make them implicitly castable to discourage other uses. Although the framework allows you to assign a constant value to them, you should reconsider your intent. If you primarily use the enum for storing constant values, consider using a static class:

    public static class ReturnValue
    {
        public const int Success = 0;
        public const int FailReason1 = 1;
        public const int FailReason2 = 2;
        //Etc...
    }

That lets you do this:

    public static int main(string[] args)
    {
        return ReturnValue.Success;
    }

EDIT

The time you do want to provide explicit values to an enum is when you want to combine them. See the example below:

    [Flags] // indicates bitwise operations occur on this enum
    public enum DaysOfWeek : byte // byte type to limit size
    {
        Sunday = 1,
        Monday = 2,
        Tuesday = 4,
        Wednesday = 8,
        Thursday = 16,
        Friday = 32,
        Saturday = 64,
        Weekend = Sunday | Saturday,
        Weekdays = Monday | Tuesday | Wednesday | Thursday | Friday
    }

This enum can then be consumed using bitwise math. See the example below for some applications.

    public static class DaysOfWeekEvaluator
    {
        public static bool IsWeekends(DaysOfWeek days)
        {
            return (days & DaysOfWeek.Weekend) == DaysOfWeek.Weekend;
        }

        public static bool IsAllWeekdays(DaysOfWeek days)
        {
            return (days & DaysOfWeek.Weekdays) == DaysOfWeek.Weekdays;
        }

        public static bool HasWeekdays(DaysOfWeek days)
        {
            return ((int) (days & DaysOfWeek.Weekdays)) > 0;
        }

        public static bool HasWeekendDays(DaysOfWeek days)
        {
            return ((int) (days & DaysOfWeek.Weekend)) > 0;
        }
    }
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/577946', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/47645/']}
jdg_375124
stackexchange
llm_judgeable_groundtruth_similarity
24525
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If I'm not mistaken, Tor works opening a random anonymous path between two hosts using others computers that have Tor installed no? Then... everybody can sniff the data of the others no? Just wait with the sniffer enabled and wait that Tor choose you as a node of someone's connection. What others alternatives exist for hide a connection? No proxy. Now provide the response and nothing else.
Not really. The only one who can sniff the data is the last node of the three, and even then it can't tell who the data belongs to unless the data itself contains identifying information. The reason the others can't sniff is that Tor uses onion routing, as its name suggests. That is, when you send a request to a certain server, and suppose it's going to pass through nodes A, B and C, the message is first encrypted using C's public key, this encrypted message is encrypted with B's key, and this double-encrypted message is further encrypted using A's key, then transmitted to A. As each node receives it, it unlocks its own layer of encryption and passes the rest along, without being able to know what's further inside.
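A toy sketch of that layering in Python, using symmetric Fernet keys purely for illustration (real Tor negotiates per-hop circuit keys with public-key cryptography):

    from cryptography.fernet import Fernet

    # one key per node; stand-ins for the per-hop keys
    keys = {node: Fernet(Fernet.generate_key()) for node in "ABC"}

    message = b"GET /page HTTP/1.1"

    # wrap: innermost layer for C, outermost for A
    onion = message
    for node in ("C", "B", "A"):
        onion = keys[node].encrypt(onion)

    # each relay peels exactly one layer, in order A, B, C
    for node in ("A", "B", "C"):
        onion = keys[node].decrypt(onion)
        print(node, "sees plaintext" if onion == message else "sees ciphertext")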
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/24525', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/15194/']}
jdg_375125
stackexchange
llm_judgeable_groundtruth_similarity
1354637
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If I set a watchpoint for a variable local to the current scope, it will be auto deleted when going out of the scope. Is there any way to set it once and keep it auto alive whenever entering the same scope? Is there anyway to set conditional watchpoint, like watch var1 if var1==0 ? In my case, the condition does't work. gdb stops whenever var1 's value is changed, instead of untill var1 == 0 is true. My gdb is GNU gdb 6.8-debian. Now provide the response and nothing else.
I agree with Dave that a conditional breakpoint is the way to go. However, to do what you asked, you can use GDB's commands command to set a list of GDB commands to execute whenever a breakpoint is hit. I find this incredibly useful. I suggest writing your GDB commands into a file so that they are easy to edit and easy to reload with the source command. Or you can specify command files to load on the GDB command line or use .gdbinit to make them load automatically.

An example of a good use of commands: Suppose that I have a function format that is called by a lot of other functions. I want to break on it, but only after function do_step_3 has been called.

    break do_step_3
    commands
      break format
      continue
    end

You could use this for your problem with something like:

    break func
    commands
      watch var
      continue
    end
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1354637', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/156458/']}
jdg_375126
stackexchange
llm_judgeable_groundtruth_similarity
593591
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose the case that someone in future makes a time machine which can be used to time travel in past. Now s/he can accurately measure momentum of a particle without caring about the particle's position at that time and then s/he can use the time machine to travel back in the past at the same time when s/he measured particle's momentum and now s/he can measure position of the same particle with higher accuracy? So does this mean that time travel to past is impossible or my question is nonsense? Now provide the response and nothing else.
Contrary to popular belief, the HUP is not a principle about the accuracy of a measurement. The HUP is simply a statement that relates the spread of position measurements to the spread of momentum measurements of similarly prepared systems. It is a statistical principle about multiple measurements and their standard deviations; it is not a principle that applies to single measurements of one system. Furthermore, this idea that the HUP means that "the particle has an exact position and momentum, but we just don't know what they are" is not correct. The uncertainties discussed in the HUP arise purely from the postulates of QM and have nothing to do with what we know about the particle or how accurately we measure its position or momentum. Many QM interpretations would even say that before measurement the particle doesn't even have a defined position or momentum at all. Therefore, your premise is flawed purely from a misunderstanding of what the HUP actually says and how it applies to quantum systems. Of course, it is an interesting thing to think about what would be the outcome of a quantum measurement if you were to travel back in time and repeat the measurement again in precisely the same manner. Unfortunately, I don't think anything like this can be experimentally tested (at least for now ;) ), so anything about this point would be pure speculation.
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/593591', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/279691/']}
jdg_375127
stackexchange
llm_judgeable_groundtruth_similarity
12534898
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to split: [1,2,3,4,5,6,7,8] into: [[1,2],[3,4],[5,6],[7,8]] It generally works great with: [ lists:sublist(List, X, 2) || X <- lists:seq(1,length(List),2) ] . But it is really slow this way. 10000 Elements take amazing 2.5 seconds on my netbook. I have also written a really fast recursive function, but I am simply interested: Could this list comprehension also be written in a different way, so that it is faster? Now provide the response and nothing else.
Try this:

    part(List) -> part(List, []).

    part([], Acc) -> lists:reverse(Acc);
    part([H], Acc) -> lists:reverse([[H]|Acc]);
    part([H1,H2|T], Acc) -> part(T, [[H1,H2]|Acc]).

Test in the Erlang shell (I've declared this function in module part):

    2> part:part([1,2,3,4,5,6,7,8]).
    [[1,2],[3,4],[5,6],[7,8]]
    3> timer:tc(part, part, [lists:seq(1,10000)]).
    {774,
     [[1,2],[3,4],[5,6],[7,8],"\t\n","\v\f",[13,14],[15,16],[17,18],[19,20],
      [21,22],[23,24],[25,26],[27,28],[29,30],[31,32],"!\"","#$","%&","'(",
      ")*","+,","-.","/0","12","34",[...]|...]}

Just 774 microseconds (which is ~0.8 milliseconds).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12534898', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_375128
stackexchange
llm_judgeable_groundtruth_similarity
282228
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Let $L^*$ be the total space of the line bundle $\mathcal{O}_{\mathbb{P}^n}(k)$ minus its zero section. How can one compute the fundamental group of $L^*$? For k = 0 the space $L^*$ is $\mathbb{P}^n \times \mathbb{C}^*$ hence $\pi_1(L^*) = \mathbb{Z}$. For k=-1 the $L^*$ is $\mathbb{C}^{n+1} \setminus \{0\}$, therefore $\pi_1(L^*) = 0$. What about the other $k$ ? The long exact sequence of homotopy of a Serre fibration $\mathbb{C}^* \rightarrow L^* \rightarrow \mathbb{P}^n$ gives $\pi_2(\mathbb{C}^*) = 0 \rightarrow \pi_2(L^*) \rightarrow \pi_2(\mathbb{P}^n) \simeq \mathbb{Z} \rightarrow \pi_1(\mathbb{C}^*)\simeq \mathbb{Z} \rightarrow \pi_1(L^*) \rightarrow \pi_1(\mathbb{P}^n) = 0$. So one needs to understand the map $\pi_2(\mathbb{P}^n)\rightarrow \pi_1(\mathbb{C}^*)$. Now provide the response and nothing else.
The fibration $\mathbb{C}^\times\to L^\times\to\mathbb{P}^n$ can be "delooped" to a fibration $L^\ast \to\mathbb{P}^n\to{\rm B}\mathbb{C}^\times$ where the last map is the classifying map for the line bundle. Now we have ${\rm B}\mathbb{C}^\times\cong\mathbb{P}^\infty$, and we want to identify the map $\pi_2(\mathbb{P}^n)\to\pi_2(\mathbb{P}^\infty)\cong\pi_1(\mathbb{C}^\times)$ induced by the classifying map for $\mathcal{O}(k)$. Note that the generator for $\pi_2(\mathbb{P}^\infty)$ is given by the natural inclusion $\mathbb{P}^1\to\mathbb{P}^\infty$ classifying $\mathcal{O}(1)$. In particular, the map $\pi_2(\mathbb{P}^n)\to\pi_2(\mathbb{P}^\infty)$ classifying $\mathcal{O}(k)$ must be multiplication by $k$. Of course, the result is the same as in Jason Starr's comments. Just another funny way to see it.
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/282228', 'https://mathoverflow.net', 'https://mathoverflow.net/users/115131/']}
jdg_375129
stackexchange
llm_judgeable_groundtruth_similarity
15373823
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If I am allowed to do the following: template <typename T = int>class Foo{}; Why am I not allowed to do the following in main? Foo me; But I must specify the following: Foo<int> me; C++11 introduced default template arguments and right now they are being elusive to my complete understanding. Now provide the response and nothing else.
Note: Foo me; without template arguments is legal as of C++17. See this answer: https://stackoverflow.com/a/50970942/539997 . Original answer applicable before C++17: You have to do: Foo<> me; The template arguments must be present but you can leave them empty. Think of it like a function foo with a single default argument. The expression foo won't call it, but foo() will. The argument syntax must still be there. This is consistent with that.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/15373823', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/633658/']}
jdg_375130
stackexchange
llm_judgeable_groundtruth_similarity
1612232
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In $\triangle ABC,a,b,c$ are the sides of triangle satisfying $a^4+b^4+c^4-4abc+1=0$ Find the value of $\frac{a^2+b^2+c^2}{S}$,where $S$ is area of the triangle $ABC$and find the value of $1+\frac{R}{r}$ where $R$ is the circumradius and $r$ is the inradius of the triangle $ABC$ My attempt:$a^4+b^4+c^4-4abc+1=0$ I expanded $(a+b+c)^4=a^4+4a^3b+4a^3c+6a^2b^2+6a^2c^2+12a^2bc+4ab^3+4ac^3+12ab^2c+12abc^2+4bc^3+4b^3c+6b^2c^2+b^4+c^4$ But this expression has got complicated and not seeming helpful and i do not know any other method to solve this question. Now provide the response and nothing else.
HINT: Using the AM-GM inequality, $$\dfrac{a^4+b^4+c^4+1}4\ge\sqrt[4]{a^4b^4c^4\cdot1}.$$ Can you take it from here?
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1612232', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/257841/']}
jdg_375131
stackexchange
llm_judgeable_groundtruth_similarity
24183847
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have inherited of a git repository with a null sha1 for a commit entry in a tree, preventing FishEye to index the repository. $ git fsckChecking object directoriies: 100%(256/256), done.warning in tree db22a67df70dc4ff90ec4cd666da91e9c2cb0d9: contains entries pointing to null sha1Checking objects: 100% (416532/416532), done.Checking connectivity: 416532, done. Looking for the given tree give me the following result: $ git ls-tree db22a6100644 blob e615f18b55a39f2719112ce209c2505dd92d8e75 .gitignore100644 blob ac852f06c5a04420356c1d5efca44d9a864e78b0 .project160000 commit 0000000000000000000000000000000000000000 SomeDirectory100644 blob 631c17e28026261a2ccf6bc570842cf4af9f181c GoDeploy.bat100644 blob 40e992ab5c3868af2910135c3ac4610c3646e7f8 pom.xml Looking in the history, I've found that SomeDirectory was initially a git submodule and that the commit that seems to cause the issue is the one that removed both the .gitmodules and SomeDirectory .Now, there is a real directory called SomeDirectory at the exact same place where the culprit was. I though I could still try to fix run a git filter-branch to see what I would end up, but it does not work: $ git filter-branch --force --index-filter \$ 'git rm --cached --ignore-unmatch SomeDirectory' \$ --prune-empty --tag-name-filter cat -- --all[... striped out for clarity]Rewrite c571a3ec94e9f84471577bac41ac7375c729ef08 (76/18522)error: cache enttry has null sha1: SomeDirectoryfatal: unable to write new index fileCould not initialize the index[... striped out for clarity] What am I supposed to try next, knowing that there is no backup that I'm aware of prior to the commit that causes the issue. Now provide the response and nothing else.
The message you get suggests that there was only a single tree with a bad submodule. In that case, there is very little you have to clean up. You can create a new fixed tree that doesn't have this problem:

    $ git ls-tree db22a67df70dc4ff90ec4cd666da91e9c2cb0d9 |
    > sed -e '/0\{40\}/d' |
    > git mktree
    (new tree SHA1 here)

Your question shows the git ls-tree output already. The sed removes the line with the bad submodule, and git mktree creates a new tree object from the result.

Once you have the fixed tree, you can create a fixed commit using this tree:

    $ git cat-file commit c571a3ec94e9f84471577bac41ac7375c729ef08 |
    > sed 's/db22a67df70dc4ff90ec4cd666da91e9c2cb0d9/(new tree SHA1 here)/' |
    > git hash-object -t commit -w --stdin
    (new commit SHA1 here)

git cat-file commit c571a3ec94e9f84471577bac41ac7375c729ef08 prints the problematic commit object in a textual form. It will start with tree db22a67df70dc4ff90ec4cd666da91e9c2cb0d9, and continues with the rest of the commit info (parent, author, committer, commit message). The sed replaces the tree line's reference to the old tree by the new one. git hash-object -t commit -w --stdin creates a new commit object from the result, writes it to the repository, and prints its ID.

Once you have the fixed commit, you can use git replace:

    $ git replace c571a3ec94e9f84471577bac41ac7375c729ef08 (new commit SHA1 here)

This doesn't actually change anything yet, but tells Git that whenever it would read commit c571a3ec94e9f84471577bac41ac7375c729ef08, it should read the new commit object instead.

And finally, use git filter-branch to make it permanent. This goes through all commits, reads them, and writes them back. Ordinarily, without any options to modify the commits, this wouldn't have much effect, but because of the earlier git replace, this causes all commits with c571a3ec94e9f84471577bac41ac7375c729ef08 as a parent to be re-written to refer to the new commit instead, all commits which refer to those re-written as well, and so on.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24183847', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9396/']}
jdg_375132
stackexchange
llm_judgeable_groundtruth_similarity
25907169
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am currently in the process of designing an application that is supposed to access multiple devices and collect various data. The data is stored inside databases (one database per device) which are totally identical (tables, views, functions, stored procedures, ...), apart from the data itself. ;) There can be up to 10 devices, which may be connected or disconnected at runtime. Now the question is, how to design the data access layer? At the moment I'm thinking about two approaches:

1. One DAO per device, which results in 1..10 instances, where each one holds the connection information (stateful).
2. A single DAO which accesses all devices by receiving the connection information per method call (stateless).

The application is supposed to be multithreaded (concurrent database access), while the performance is not critical, meaning that some locks inside the code would be acceptable. The devices are only accessed on demand by the user. Since I'm coming from the RESTful web services side of life, I'm currently favoring option 2, because it is stateless. Now provide the response and nothing else.
Your idea of using compositing to create an eraser is a good idea. destination-out will remove existing drawings where a new drawing overlaps those existing drawings.

var canvas = document.getElementById("canvas");
var ctx = canvas.getContext("2d");
var lastX;
var lastY;
var strokeColor = "red";
var strokeWidth = 5;
var mouseX;
var mouseY;
var canvasOffset = $("#canvas").offset();
var offsetX = canvasOffset.left;
var offsetY = canvasOffset.top;
var isMouseDown = false;

function handleMouseDown(e) {
  mouseX = parseInt(e.clientX - offsetX);
  mouseY = parseInt(e.clientY - offsetY);
  // Put your mousedown stuff here
  lastX = mouseX;
  lastY = mouseY;
  isMouseDown = true;
}

function handleMouseUp(e) {
  mouseX = parseInt(e.clientX - offsetX);
  mouseY = parseInt(e.clientY - offsetY);
  // Put your mouseup stuff here
  isMouseDown = false;
}

function handleMouseOut(e) {
  mouseX = parseInt(e.clientX - offsetX);
  mouseY = parseInt(e.clientY - offsetY);
  // Put your mouseOut stuff here
  isMouseDown = false;
}

function handleMouseMove(e) {
  mouseX = parseInt(e.clientX - offsetX);
  mouseY = parseInt(e.clientY - offsetY);
  // Put your mousemove stuff here
  if (isMouseDown) {
    ctx.beginPath();
    if (mode == "pen") {
      ctx.globalCompositeOperation = "source-over";
      ctx.moveTo(lastX, lastY);
      ctx.lineTo(mouseX, mouseY);
      ctx.stroke();
    } else {
      ctx.globalCompositeOperation = "destination-out";
      ctx.arc(lastX, lastY, 8, 0, Math.PI * 2, false);
      ctx.fill();
    }
    lastX = mouseX;
    lastY = mouseY;
  }
}

$("#canvas").mousedown(function(e) { handleMouseDown(e); });
$("#canvas").mousemove(function(e) { handleMouseMove(e); });
$("#canvas").mouseup(function(e) { handleMouseUp(e); });
$("#canvas").mouseout(function(e) { handleMouseOut(e); });

var mode = "pen";
$("#pen").click(function() { mode = "pen"; });
$("#eraser").click(function() { mode = "eraser"; });

body { background-color: ivory; }
canvas { border: 1px solid red; }

<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script>
<canvas id="canvas" width=300 height=300></canvas><br/>
<button id="pen">Pen</button>
<button id="eraser">Eraser</button>
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/25907169', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4053394/']}
jdg_375133
stackexchange
llm_judgeable_groundtruth_similarity
338
Below is a question asked on the forum dsp.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Background: I am designing a system that will have a single small microphone and speakers for use in a phone-type setting. The easiest example I can give is a Skype conversation where you are using your computer's speakers and a desktop microphone. I am worried about the audio from the speakers getting picked up by the microphone and sent back to the original person. I used to hear this happen all the time in the early days of VoIP conversations, but hardly hear it any more. My assumption is that groups have come up with ways to cancel out the echo, but how do they do it? Approaches: My first thought was to simply subtract the signal being sent to the speakers from the microphone signal, except that with this method you have to be concerned with the delay. I am not sure how to determine what the delay is without some sort of pre-calibration, which I would like to avoid. There is also the issue of how much to scale the signal by before subtracting it. I next thought about doing some sort of correlation between the speaker signal and the mic signal in order to determine the likelihood of the mic signal being an echo, as well as to determine the actual delay. This method worked all right when I was playing with some recorded signals, but the latency of computing the correlation seemed far too large to be useful in a real-time system. Also, the adjustable volume on the speakers made it difficult to determine whether something was actually correlated or not. My next thought was that there must be someone on the internet who has done this before with success, but I didn't find any great examples. So I come here to see what methods can be used to solve this type of issue. Now provide the response and nothing else.
You are correct. Many methods of echo cancellation exist, but none of them are exactly trivial. The most generic and popular method is echo cancellation via an adaptive filter. In one sentence, the adaptive filter's job is to alter the signal it is playing back by minimizing the amount of information coming from the input.

Adaptive filters

An adaptive (digital) filter is a filter that changes its coefficients and eventually converges to some optimal configuration. The mechanism for this adaptation works by comparing the output of the filter to some desired output. Below is a diagram of a generic adaptive filter: As you can see from the diagram, the signal $x[n]$ is filtered by (convolved with) $\vec{w}_n$ to produce the output signal $\hat{d}[n]$. We then subtract $\hat{d}[n]$ from the desired signal $d[n]$ to produce the error signal $e[n]$. Note that $\vec{w}_n$ is a vector of coefficients, not a number (hence we don't write $w[n]$). Because it changes every iteration (every sample), we subscript the current collection of these coefficients with $n$. Once $e[n]$ is obtained, we use it to update $\vec{w}_n$ by an update algorithm of choice (more on that later). If input and output satisfy a linear relationship that does not change over time, then given a well-designed update algorithm, $\vec{w}_n$ will eventually converge to the optimal filter and $\hat{d}[n]$ will closely follow $d[n]$.

Echo cancellation

The problem of echo cancellation can be presented in terms of an adaptive filter problem where we're trying to produce some known ideal output given an input, by finding the optimal filter satisfying the input-output relationship. In particular, when you grab your headset and say "hello", it's received on the other end of the network, altered by the acoustic response of the room (if it's being played back out loud), and fed back into the network to go back to you as an echo. However, because the system knows what the initial "hello" sounded like and now knows what the reverberated and delayed "hello" sounds like, we can try to guess what that room response is using an adaptive filter. Then we can use that estimate, convolve all incoming signals with that impulse response (which gives us an estimate of the echo signal), and subtract it from what goes into the microphone of the person you called. The diagram below shows an adaptive echo canceller. In this diagram, your "hello" signal is $x[n]$. After being played out of a loudspeaker, bouncing off the walls, and getting picked up by the device's microphone, it becomes the echoed signal $d[n]$. The adaptive filter $\vec{w}_n$ takes in $x[n]$ and produces output $y[n]$, which after convergence should ideally track the echoed signal $d[n]$. Therefore $e[n]=d[n]-y[n]$ should eventually go to zero, given that nobody is talking on the other end of the line, which is usually the case when you've just picked up the headset and said "hello". This is not always true, and some non-ideal case considerations will be discussed later. Mathematically, the NLMS (normalized least mean squares) adaptive filter is implemented as follows. We update $\vec{w}_n$ every step using the error signal of the previous step. Namely, let $$\vec{x}_n = \left ( x[n], x[n-1], \ldots , x[n-N+1] \right)^T$$ where $N$ is the number of taps (samples) in $\vec{w}_n$. Notice that the samples of $x$ are in reverse order.
And let $$\vec{w}_n = \left ( w[0], w[1], \ldots , w[N-1] \right )^T$$ Then we calculate $y[n]$ (the convolution evaluated at sample $n$) as the inner product (dot product if both signals are real) of $\vec{x}_n$ and $\vec{w}_n$: $$y[n] = \vec{x}_n^T \vec{w}_n = \vec{x}_n \cdot \vec{w}_n $$ Now that we can calculate the error, we use a normalized gradient descent method to minimize it. We get the following update rule for $\vec{w}$: $$\vec{w}_{n+1} = \vec{w}_n + \mu \vec{x}_n \frac{e[n]}{ \vec{x}_n^T \vec{x}_n}= \vec{w}_n + \mu \vec{x}_n \frac{d[n] - \vec{x}_n^T \vec{w}_n}{ \vec{x}_n^T \vec{x}_n}$$ where $\mu$ is the adaptation step size with $0 < \mu < 2$; the second form simply substitutes $e[n] = d[n] - y[n] = d[n] - \vec{x}_n^T \vec{w}_n$.

Real life applications and challenges

Several things can present difficulty with this method of echo cancellation. First of all, as mentioned before, it is not always true that the other person is silent while they receive your "hello" signal. It can be shown (but is beyond the scope of this reply) that in some cases it can still be useful to estimate the impulse response while there is a significant amount of input present on the other end of the line, because the input signal and the echo are assumed to be statistically independent; therefore, minimizing the error will still be a valid procedure. In general, a more sophisticated system is needed to detect good time intervals for echo estimation. On the other hand, think of what happens when you're trying to estimate the echo while the received signal is approximately silence (noise, actually). In the absence of a meaningful input signal, the adaptive algorithm will diverge and quickly start producing meaningless results, culminating eventually in a random echo pattern. This means that we also need to take speech detection into consideration. Modern echo cancellers look more like the figure below, but the above description is the gist of it. There is plenty of literature on both adaptive filters and echo cancellation out there, as well as some open source libraries you can tap into.
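To make the update rule concrete, here is a minimal NLMS sketch in Python (an illustration of the equations above, not production echo-canceller code; it assumes NumPy, and the small eps term in the normalization, which guards against division by zero during silence, is my addition):

```python
import numpy as np

def nlms_echo_cancel(x, d, num_taps=128, mu=0.5, eps=1e-8):
    """Estimate the echo path w from far-end signal x and mic signal d.

    Returns the error signal e (the mic signal with the estimated
    echo removed) and the final filter coefficients w.
    """
    w = np.zeros(num_taps)          # adaptive filter coefficients w_n
    e = np.zeros(len(d))            # error signal e[n] = d[n] - y[n]
    for n in range(num_taps, len(d)):
        x_n = x[n - num_taps + 1:n + 1][::-1]   # newest sample first
        y = np.dot(x_n, w)                      # y[n] = x_n^T w_n
        e[n] = d[n] - y
        # Normalized LMS update: w <- w + mu * x_n * e[n] / (x_n^T x_n)
        w += mu * x_n * e[n] / (np.dot(x_n, x_n) + eps)
    return e, w

# Toy usage: simulate an echo path and check that NLMS learns it.
rng = np.random.default_rng(0)
x = rng.standard_normal(20000)                  # far-end "hello"
true_path = np.zeros(128)
true_path[10] = 0.6
true_path[40] = -0.3
d = np.convolve(x, true_path)[:len(x)]          # echoed signal at the mic
e, w = nlms_echo_cancel(x, d)
print(np.mean(e[-1000:] ** 2))                  # residual echo power, near zero
```

In the toy usage at the bottom, the filter is trained on a simulated echo path; the residual power of $e[n]$ shrinking toward zero corresponds to the echo being cancelled.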
{}
{'log_upvote_score': 5, 'links': ['https://dsp.stackexchange.com/questions/338', 'https://dsp.stackexchange.com', 'https://dsp.stackexchange.com/users/17/']}
jdg_375134
stackexchange
llm_judgeable_groundtruth_similarity
47355150
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In Postman, the dynamic variable {{$timestamp}} inserts the current Unix timestamp into a request (represented as the number of seconds since January 1, 1970):

"currentTime": "1510934784"

However, the API I am working with expects timestamps formatted as MM/DD/YYYY:

"currentDate": "11/17/2017"

How do I insert the current date (formatted as MM/DD/YYYY) into my request with Postman? Now provide the response and nothing else.
You could use moment.js with Postman to give you that timestamp format. You can add this to the pre-request script:

const moment = require('moment');
pm.globals.set("today", moment().format("MM/DD/YYYY"));

Then reference {{today}} wherever you need it. If you add this to the Collection-level Pre-request Script, it will be run for each request in the Collection, rather than you needing to add it to all the requests individually. For more information about using moment in Postman, I wrote a short blog post: https://dannydainton.com/2018/05/21/hold-on-wait-a-moment/
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/47355150', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3357935/']}
jdg_375135
stackexchange
llm_judgeable_groundtruth_similarity
201204
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: In a Euclidean space (Hermitian as well), say $\ell^2_n$, the following inequality holds true$$(QI)\qquad |b|\cdot|c-a|\le|c|\cdot|a-b|+|a|\cdot|b-c|,\qquad\forall a,b,c\in\ell^2_n.$$In other words, the function$$\delta:=\frac{|b-a|}{|a|\cdot|b|}$$is a distance over $\ell^2_n\setminus\{0\}$. The proof consists in applying the triangle inequality to the vectors $Ia:=|a|^{-2}a$, $Ib$, $Ic$, obtained by applying the inversion with respect to the unit sphere:$$\delta(a,b)=|Ib-Ia|.$$ It turns out that (QI) is false in $\ell^1_n$ when $n\ge2$. A counter-example is given by the choice $$a=\begin{pmatrix} 1 \\ 0 \end{pmatrix},\quad b=\begin{pmatrix} 1 \\ 1 \end{pmatrix},\quad c=\begin{pmatrix} 0 \\ 1 \end{pmatrix}.$$This is amazing, because (QI) can be used to prove Hlawka's inequality in $\ell^2_n$, an inequality that turns out to be true also in $\ell^1_n$ (no contradiction, of course). Because $\ell^\infty_2$ is isometric to $\ell^1_2$, (QI) is false in $\ell^\infty_n$ as well for $n\ge2$. Rotating the above triplet by $-\frac\pi4$, we get the following counter-example$$a'=\begin{pmatrix} 1 \\ 1 \end{pmatrix},\quad b'=\begin{pmatrix} 2 \\ 0 \end{pmatrix},\quad c'=\begin{pmatrix} 1 \\ -1 \end{pmatrix}.$$A natural question is: for what parameters $p\in(1,\infty)$ does (QI) hold true? Actually, the triplet $(a,b,c)$ provides a counter-example for $p<2$, while $(a',b',c')$ is a counter-example for $p>2$. Therefore, only $\ell^2$ satisfies (QI). This leads me to ask: are there other normed spaces satisfying (QI), besides Euclidean/Hermitian ones? Now provide the response and nothing else.
A metric space $(X,\rho)$ satisfying the Ptolemy inequality $$\rho(a,b)\rho(c,d)+\rho(b,c)\rho(a,d)\geq \rho(a,c)\rho(b,d)$$ is called a ptolemaic space. A normed ptolemaic space must be an inner product space. Reference: I. J. Schoenberg, A remark on M. M. Day's characterization of inner-product spaces and a conjecture of L. M. Blumenthal, Proc. Am. Math. Soc. 3, 961–964 (1952).
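To spell out the connection to your inequality (a routine specialization, added here for convenience; it is not part of Schoenberg's statement): in a normed space with $\rho(a,b)=|a-b|$, taking $d=0$ in the Ptolemy inequality gives $$|a-b|\cdot|c| + |b-c|\cdot|a| \;\geq\; |c-a|\cdot|b|,$$ which is exactly (QI). Conversely, applying (QI) to $a-d$, $b-d$, $c-d$ recovers the full Ptolemy inequality by translation invariance of the norm. So a normed space satisfies (QI) exactly when it is ptolemaic, and by Schoenberg's theorem that happens exactly when the norm comes from an inner product: there are no other examples.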
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/201204', 'https://mathoverflow.net', 'https://mathoverflow.net/users/8799/']}
jdg_375136
stackexchange
llm_judgeable_groundtruth_similarity
10937065
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a self-signed code signing certificate, made with the directions from this answer, that works fine when used with signtool.exe, however if I try to sign using Set-AuthenticodeSignature, it fails. Why can I sign using signtool, but not using Set-AuthenticodeSignature?

signtool:

Signtool sign /v /n "VetWeb" SetupRDPPermissions.ps1
The following certificate was selected:
    Issued to: VetWeb
    Issued by: VetWeb CA
    Expires: Sat Dec 31 18:59:59 2039
    SHA1 hash: 84136EBF8D2603C2CD6668C955F920C6C6482EE4
Done Adding Additional Store
Successfully signed: SetupRDPPermissions.ps1
Number of files successfully Signed: 1
Number of warnings: 0

Set-AuthenticodeSignature:

$cert = @(Get-ChildItem cert:\CurrentUser\My | Where-Object -FilterScript {$_.Subject -eq 'CN=VetWeb'})[0]
Set-AuthenticodeSignature SetupRDPPermissions.ps1 $cert

Set-AuthenticodeSignature : Cannot sign code. The specified certificate is not suitable for code signing.
At line:1 char:26
+ Set-AuthenticodeSignature <<<< SetupRDPPermissions.ps1 $cert
    + CategoryInfo : InvalidArgument: (:) [Set-AuthenticodeSignature], PSArgumentException
    + FullyQualifiedErrorId : Argument,Microsoft.PowerShell.Commands.SetAuthenticodeSignatureCommand

Get-ChildItem cert:\CurrentUser\My -CodeSigningCert returns no results.

$cert | Format-List *

PSPath : Microsoft.PowerShell.Security\Certificate::CurrentUser\My\84136EBF8D2603C2CD6668C955F920C6C6482EE4
PSParentPath : Microsoft.PowerShell.Security\Certificate::CurrentUser\My
PSChildName : 84136EBF8D2603C2CD6668C955F920C6C6482EE4
PSDrive : cert
PSProvider : Microsoft.PowerShell.Security\Certificate
PSIsContainer : False
Archived : False
Extensions : {System.Security.Cryptography.Oid}
FriendlyName :
IssuerName : System.Security.Cryptography.X509Certificates.X500DistinguishedName
NotAfter : 12/31/2039 5:59:59 PM
NotBefore : 6/1/2012 1:49:31 PM
HasPrivateKey : True
PrivateKey : System.Security.Cryptography.RSACryptoServiceProvider
PublicKey : System.Security.Cryptography.X509Certificates.PublicKey
RawData : {48, 130, 1, 235...}
SerialNumber : CF330347F35AC0B4427AFFA82DB51238
SubjectName : System.Security.Cryptography.X509Certificates.X500DistinguishedName
SignatureAlgorithm : System.Security.Cryptography.Oid
Thumbprint : 84136EBF8D2603C2CD6668C955F920C6C6482EE4
Version : 3
Handle : 479608336
Issuer : CN=VetWeb CA
Subject : CN=VetWeb

Now provide the response and nothing else.
I had the same problem, and the answer I figured out was that I had to create two certificates. First, a trusted root certificate authority, using

makecert -n "CN=PowerShell Local Certificate Root" -a sha1 -eku 1.3.6.1.5.5.7.3.3 -r -sv root.pvk root.cer -ss Root -sr localMachine

and then a personal certificate from the above certificate authority, using

makecert -pe -n "CN=PowerShell User" -ss MY -a sha1 -eku 1.3.6.1.5.5.7.3.3 -iv root.pvk -ic root.cer

Once these are created, use

$cert = @(Get-ChildItem cert:\CurrentUser\My -CodeSigning)[0]

for signing (assuming you have only one code-signing certificate). For example, if the script's name is xyz.ps1, use this command in PowerShell:

Set-AuthenticodeSignature path/to/xyz.ps1 $cert
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10937065', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/80274/']}
jdg_375137
stackexchange
llm_judgeable_groundtruth_similarity
16133931
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I set up an outlet for a text view via Interface Builder. The text view loads fine; however, I can't access any of its properties programmatically because the outlet is always nil. When does it instantiate? Even after my applicationDidFinishLaunching gets called, it's still not "alive" or unarchived. Now provide the response and nothing else.
An outlet doesn't instantiate, because an outlet is a variable (or property). The objects in a nib are instantiated when that nib is loaded, and they are assigned to each outlet as soon as possible afterward: after the objects are created, but before awakeFromNib is sent to all relevant objects. In other words, nib loading does all of the following, in this order:

1. Creates or re-creates all of the objects that are stored in the nib. (This excludes File's Owner, First Responder, and other external and imaginary objects.)
2. Sets every outlet property that is connected in the nib to the object in the same nib that the nib has it connected to. (E.g., a view controller's view outlet to a top-level view.)
3. Sends awakeFromNib to objects in the nib, and (in Cocoa) also to the File's Owner (e.g., a window controller).

Your question is answered by #2. The Resource Programming Guide has more information. The details are subtly different between Cocoa and Cocoa Touch, particularly as regards which objects are sent awakeFromNib messages and which ones aren't. "When does it instantiate? Even after my applicationDidFinishLaunching gets called, it's still not 'alive' or unarchived." The text view isn't? It probably is, and you just didn't connect the outlet. Check this in the nib. Another possibility: you created the text view in another nib, not the one where the app delegate is created (if you even created the app delegate in a nib at all), and you didn't connect the view to the outlet of the right object. Perhaps you created a second app delegate in the text view's nib; this app delegate is not actually the app's delegate, which is why the real app delegate does not see the text view: you gave the text view to the impostor, not the real McCoy.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16133931', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/744350/']}
jdg_375138
stackexchange
llm_judgeable_groundtruth_similarity
94974
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $A$ be a set of $100$ natural numbers. Prove that there is a nonempty set $B$ with $$B\subseteq A$$ such that the sum of $B$'s elements is divisible by $100$. I have been stuck on this for a few days now. Please help! Now provide the response and nothing else.
Take a chain of subsets of $A$: $\emptyset\subset\{a_1\}\subset\{a_1,a_2\}\subset...\subset A$. This chain has 101 elements. Now sort them by their sum modulo 100. The sums of two of the sets in the chain must be congruent modulo 100; hence there are $n>m$ with $(0+a_1+...+a_n)-(0+a_1+...+a_m)$ divisible by 100, so that $a_{m+1}+...+a_n$ is divisible by 100. Here is a more detailed explanation: let $A_0=\emptyset$ and $A_i=\{a_1,a_2,...,a_i\}$, where $A=\{a_1,...,a_{100}\}$. Let $s_0=0$ and $s_i=\sum_{k=1}^ia_k$. We have 101 numbers $s_0,...,s_{100}$, which we will sort into 100 groups $G_0,...,G_{99}$. We put $s_i$ in group $G_r$ if the remainder after dividing $s_i$ by $100$ is equal to $r$. Since there are 101 numbers $s_i$ and only $100$ groups, one of the groups $G_r$ will have at least two numbers $s_n, s_m$ in it (without loss of generality $n>m$, since one of them will have the bigger subscript). If $s_n=100k+r$ and $s_m=100l+r$, then $s_n-s_m=100(k-l)$ is divisible by $100$. By construction, the number $s_n-s_m$ is precisely the sum $a_{m+1}+...+a_n$ corresponding to the subset $A_n\backslash A_m$ of $A$ (note that $A_n$ is not empty because $n>m\geq0$, and that $A_m$ is a proper subset of $A_n$, so the difference $A_n\backslash A_m$ is nonempty).
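The argument is constructive, so it translates directly into code. Here is a small Python sketch of the prefix-sum pigeonhole (illustrative only; the function name is mine, and it returns one qualifying subset as a list):

```python
def divisible_subset(a, m=100):
    """Given a list of at least m numbers, return a nonempty sublist
    whose sum is divisible by m.

    Uses the pigeonhole argument: among the m+1 prefix sums s_0..s_m,
    two must agree modulo m, and their difference is a contiguous block.
    """
    seen = {0: 0}            # remainder -> index of the prefix sum (s_0 = 0)
    total = 0
    for i, x in enumerate(a, start=1):
        total += x
        r = total % m
        if r in seen:        # s_i = s_j (mod m) for some j < i
            j = seen[r]
            return a[j:i]    # a_{j+1}..a_i in 1-based terms; sum divisible by m
        seen[r] = i
    # Unreachable when len(a) >= m: m+1 prefix sums, only m remainders.
    raise AssertionError("needs at least m numbers")

# Example with 100 numbers:
print(divisible_subset([3, 14, 15, 92, 65] + list(range(1, 96))))
```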
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/94974', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/17086/']}
jdg_375139
stackexchange
llm_judgeable_groundtruth_similarity
9872099
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is an interview question. I have K machines, each of which is connected to 1 central machine. Each of the K machines has an array of 4-byte numbers in a file. You can use any data structure to load those numbers into memory on those machines, and they fit. Numbers are not unique across the K machines. Find the K largest numbers in the union of the numbers across all K machines. What is the fastest way I can do this? Now provide the response and nothing else.
1. Find the k largest numbers on each machine: O(n*log(k)).
2. Combine the results (on a centralized server if k is not huge; otherwise you can merge them in a tree hierarchy across the server cluster).

Update: to make it clear, the combine step is not a sort. You just pick the top k numbers from the results. There are many ways to do this efficiently. You can use a heap, for example, pushing the head of each list (see the sketch below). Then you can remove the head from the heap and push the next head from the list the element belonged to. Doing this k times gives you the result. All this is O(k*log(k)).
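Here is a minimal Python sketch of the heap-based combine step (illustrative only; it assumes the per-machine top-k lists have already been gathered on the central machine, each sorted in descending order):

```python
import heapq

def top_k_from_machines(per_machine_top_k, k):
    """Merge per-machine top-k lists (each sorted descending) into a global top k."""
    heap = []  # max-heap via negated values: (-value, machine index, position in list)
    for m, lst in enumerate(per_machine_top_k):
        if lst:
            heapq.heappush(heap, (-lst[0], m, 0))
    result = []
    while heap and len(result) < k:
        neg, m, pos = heapq.heappop(heap)
        result.append(-neg)
        # Push the next head of the list this element came from.
        if pos + 1 < len(per_machine_top_k[m]):
            heapq.heappush(heap, (-per_machine_top_k[m][pos + 1], m, pos + 1))
    return result

# Example: 3 machines, k = 4
print(top_k_from_machines([[9, 5, 1], [8, 7, 2], [6, 4, 3]], 4))  # [9, 8, 7, 6]
```

Each of the k pops and pushes costs O(log K) for K machines, which matches the O(k*log(k)) combine cost described above when the number of machines and the number of results are both k.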
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9872099', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/489669/']}
jdg_375140