source (stringclasses 1) | task_type (stringclasses 1) | in_source_id (stringlengths 1-8) | prompt (stringlengths 209-40.4k) | gold_standard_solution (stringlengths 0-56.7k) | verification_info (stringclasses 1) | metadata (stringlengths 138-225) | problem_id (stringlengths 9-10)
---|---|---|---|---|---|---|---|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
30290240
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The C++ standards committee is working on a TS (Technical Specification) for Concepts extension: "Programming Languages - C++ Extensions for Concepts". N4377 is the latest version of this document. For inclusion into the C++ standard features are asked to be implemented, ideally for a publicly accessible system. I'm aware of concept-gcc but the concepts proposal above (colloquially referred to as Concepts Lite ) is different. I heard that there is a concepts branch and I have tried the origin/asutton/c++-concepts from gcc 's git mirror but that didn't compile. How do I build and use a version of gcc supporting concepts as specified in the above [draft] TS?
Now provide the response and nothing else.
|
According to Andrew Sutton the git mirror of the Concepts Lite work is entirely out of date and shouldn't be used. There is a concepts branch in the svn repository for gcc implementing "concepts-lite" (there was a pre-C++11 effort to add concepts to C++ which was abandoned; a new effort to add concepts to C++ was started using the name concepts lite). It is fairly straightforward to build (assuming you already have a recent gcc and the packages it depends on installed; see the gcc install instructions for other options you might want to pass: the instructions below worked for me except that I explicitly pointed configure at the locations where gmp, mpc, and mpfr are installed):

    cd <suitable-directory>
    svn checkout svn://gcc.gnu.org/svn/gcc/branches/c++-concepts
    mkdir build-concepts
    cd build-concepts
    ../c++-concepts/configure --prefix=<concept-gcc-root> --disable-bootstrap --enable-languages=c,c++
    make -j8
    make install

What isn't entirely obvious is how to actually use concepts: the option -fconcepts does exist but seems to unconditionally do something different than enabling concepts-lite. Instead, you'd use the -std=c++1z option when compiling with concepts enabled.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/30290240', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1120273/']}
|
jdg_369341
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4533300
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do you calculate the Abstracted LCD Density for an AVD ?
Now provide the response and nothing else.
|
The Abstracted LCD Density is measured in dots per inch (see the docs). The Wikipedia article on Pixel density has a helpful section explaining how to calculate this: basically, the number of pixels per inch along the diagonal can be calculated by this formula, where x and y are the horizontal and vertical resolution (so that sqrt(x*x + y*y) is the length of the diagonal in pixels), and d is the length of the diagonal in inches:

    sqrt(x*x + y*y) / d

So for example, an LG Optimus One P500 is listed as having a screen resolution of 320 by 480, and a diagonal screen size of 3.2 inches, so it would have a density very close to 180dpi.
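Spelling out that example as a quick check of the quoted figure:

    sqrt(320*320 + 480*480) / 3.2 = sqrt(332800) / 3.2 ≈ 576.9 / 3.2 ≈ 180.3 dpi

which is why a value of about 180 is a reasonable density setting for that device's AVD.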
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4533300', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4857/']}
|
jdg_369342
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
170147
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to prove the statement in the title. Proof: We prove that if $f$ is not strictly decreasing, then it must be strictly increasing.So suppose $x < y$ . And that's pretty much how far I got. Help will be appreciated.
Now provide the response and nothing else.
|
Prove the contrapositive instead: if $f$ is not strictly increasing and not strictly decreasing, then it is not one-to-one. For example, say there are points $a\lt b\lt c$ such that $f(a)\lt f(b)$ and $f(b)\gt f(c)$ . Either $f(a)=f(c)$ (in which case $f$ is not one-to-one), or $f(a)\lt f(c)$ , or $f(c)\lt f(a)$ . If $f(a)\lt f(c)\lt f(b)$ , then by the Intermediate Value Theorem there exists $d\in (a,b)$ such that $f(d)=f(c)$ ; hence $f$ is not one-to-one. Now, there are other possibilities (I made certain assumptions along the way, and you should check what the alternatives are if they are not met).
|
{}
|
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/170147', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/35562/']}
|
jdg_369343
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
23695727
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to be able to highlight the word under the cursor without going to the next result. The solution I found uses a marker: noremap * mP*N`P Is there any better solution? I noticed vim is not really good at keeping the position of its cursor after the execution of commands. For instance, when I switch buffers, the cursor goes back to the beginning of the line. It might be interesting to have a global setting for this.
Now provide the response and nothing else.
|
There's unfortunately no global setting for that. One common solution to the "highlight without moving" problem is:

    nnoremap * *``

One solution to the "keep the cursor position when switching buffers" problem is:

    augroup CursorPosition
        autocmd!
        autocmd BufLeave * let b:winview = winsaveview()
        autocmd BufEnter * if(exists('b:winview')) | call winrestview(b:winview) | endif
    augroup END
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23695727', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2612235/']}
|
jdg_369344
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
609434
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In Eisenbud's Commutative Algebra, Chapter 3.9, it says, "If $P$ is a maximal ideal of $R$ and $I$ is any proper ideal containing a power of $P$, then $I$ is $P$-primary: For in this case $P$ is the only prime containing the annihilator $I$ of $R/I$." If the power $n$ is 1, it's trivial by definition, but for $n>1$ I don't see why this is true. First why $P^n\subseteq I$ implies $I\subseteq P$? And since $R$ is not necessarily local ring, why it's not possible that there is another maximal ideal $Q$ so that $P^n\subseteq I \subseteq Q$ for $n>1$?
Now provide the response and nothing else.
|
The answer to your first and second questions is the same. $I$ must be contained in some maximal ideal $Q$, so $P^n\subseteq Q$. Suppose $x\in P\setminus Q$, and note that $x^n\in Q$ (since $x^n\in P^n\subseteq Q$). By primality of $Q$, we have either $x^{n-1}\in Q$ or $x\in Q$. Proceeding by induction shows that $x\in Q$, thus no such $x$ exists, so we must have $P\subseteq Q$, and hence $P=Q$ because $P$ is maximal.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/609434', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/112927/']}
|
jdg_369345
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
928073
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was asked to help a student with this limit as X goes to zero. $$\lim_{x \rightarrow 0}\frac{\sin \left(3x\right)}{x}$$ Note, I am able to solve it myself using L'Hopital's rule, just looking at a graph, or by the calculator method of sneaking up on the result by entering .1, .01, etc. But, the student told me the teacher wanted him to use the 'algebraic' method. I am a Math Aide (i.e. an in house tutor) and often the students will not have a text, only notes from class and worksheets. So I have no reference, and no opportunity to check with the teacher. Any assistance for what method the teacher wants? Keep in mind, the students haven't learned derivatives yet.
Now provide the response and nothing else.
|
The limit$$\lim_{x \rightarrow 0} \frac{\sin(x)}{x} = 1$$is an important and standard one, but proving it takes a bit of ingenuity. There is a beautiful answer here: How to prove that $\lim\limits_{x\to0}\frac{\sin x}x=1$? Assuming that limit has been established, we can apply it to this problem. Note that$$\frac{\sin(3x)}{x} = 3\frac{\sin(3x)}{3x}$$If we let $u = 3x$ then notice that $u \rightarrow 0$ if and only if $x \rightarrow 0$, so$$\lim_{x \rightarrow 0}\frac{\sin(3x)}{x} = 3\lim_{x \rightarrow 0}\frac{\sin(3x)}{3x} = 3 \lim_{u \rightarrow 0} \frac{\sin(u)}{u} = 3 \times 1 = 3$$
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/928073', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/60518/']}
|
jdg_369346
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
21977
|
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
For many machine learning projects that we do, we start with the k Nearest Neighbour classifier. This is an ideal starting classifier as we usually have sufficient time to calculate all distances and the number of parameters is limited (k, distance metric and weighting). However, this often has the effect that we stick with the kNN classifier, as later in the project there is no room for switching to another classifier. What would be a good reason to try a new classifier? Obvious ones are memory and time constraints, but are there cases when another classifier can actually improve the accuracy?
Now provide the response and nothing else.
|
First of all, the example doesn't seem well suited, because you would probably use some regression or classical ML methods to solve this. Secondly, you are referring to a general problem known as feature selection (Kira, Rendell, 1992), attribute selection (Hall, Holmes, 2003), variable selection (Guyon, Elisseeff, 2003), variable subset selection (Stecking, Schebesch, 2005), feature extraction (Hillion, Masson, Roux, 1988), dimensionality reduction (Roweis, Saul, 2000) or state abstraction (Amarel, 1968). This problem is relevant not only to genetic algorithms but to almost all machine learning techniques when dealing with high-dimensional data. Three cases can be distinguished here:

The last instance of this problem, known as state abstraction, is usually related to process modelling (which suits your example, but not the GA context).

The first three, i.e. feature selection, attribute selection or variable selection, seem to be most relevant when taking your question literally. In this context a common solution is the mRMR approach (Peng, Long, Ding, 2005). From my experience it doesn't always work well with continuous data; however, mutual information can be substituted with other coefficients, like correlation for example. Another possible approach is to use cross-validation (Picard, Cook, 1984) for this. You can have multiple models, each using different features, and by means of model selection with cross-validation techniques you choose the best model, which gives you the information on which features work best for the given task (a short sketch of this approach follows the reference list below).

The feature extraction and dimensionality reduction cases allow you not only to select the initial features, but also their combinations. A well-known example solution for this case is the PCA algorithm (Pearson, 1901), which produces the optimal, in terms of explained variance, set of features that are linear combinations of the input features. Also note that there are many models that handle the feature extraction task by themselves. Some examples are: Growing Neural Gas Network (Fritzke, 1995), LASSO (Tibshirani, 2011), RFE SVM (Zeng, Chen, Tao, 2009) and Decision Trees (Quinlan, 1986).

References:

- Kenji Kira and Larry A. Rendell. 1992. A practical approach to feature selection. In Proceedings of the Ninth International Workshop on Machine Learning (ML92), Derek Sleeman and Peter Edwards (Eds.). Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 249-256.
- Hall, M.A.; Holmes, G. "Benchmarking attribute selection techniques for discrete class data mining." IEEE Transactions on Knowledge and Data Engineering, vol. 15, no. 6, pp. 1437-1447, Nov.-Dec. 2003.
- Isabelle Guyon and André Elisseeff. 2003. An introduction to variable and feature selection. J. Mach. Learn. Res. 3 (March 2003), 1157-1182.
- Ralf Stecking, Klaus B. Schebesch. 2005. Variable Subset Selection for Credit Scoring with Support Vector Machines. Operations Research Proceedings 2005, Springer Berlin Heidelberg, pp. 251-256.
- Hillion, A.; Masson, P.; Roux, C. "A nonparametric approach to linear feature extraction; application to classification of binary synthetic textures." 9th International Conference on Pattern Recognition, 1988, vol. 2, pp. 1036-1039, 14-17 Nov. 1988.
- Sam Roweis and Lawrence Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, vol. 290, no. 5500, Dec. 22, 2000, pp. 2323-2326.
- Amarel, S. (1968). On representations of problems of reasoning about actions. Machine Intelligence, (3), 131-171.
- Peng, H.; Fulmi Long; Ding, C. "Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy." IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 8, pp. 1226-1238, Aug. 2005.
- Richard R. Picard and R. Dennis Cook. Journal of the American Statistical Association, Vol. 79, No. 387 (Sep. 1984), pp. 575-583.
- K. Pearson. On lines and planes of closest fit to systems of points in space. Philosophical Magazine, Vol. 2, No. 6 (1901), pp. 559-572.
- Bernd Fritzke. A Growing Neural Gas Network Learns Topologies. Neural Information Processing Systems (NIPS), pp. 625-632, 1994.
- Robert Tibshirani. Regression shrinkage and selection via the lasso: a retrospective. Journal of the Royal Statistical Society Series B, 2011, vol. 73, issue 3, pp. 273-282.
- Xiangyan Zeng; Yen-wei Chen; Caixia Tao; Van Alphen, D. "Feature Selection Using Recursive Feature Elimination for Handwritten Digit Recognition." Fifth International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP '09), pp. 1205-1208, 12-14 Sept. 2009.
- J. R. Quinlan. 1986. Induction of Decision Trees. Mach. Learn. 1, 1 (March 1986), 81-106.
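To make the cross-validation route above concrete, here is a minimal scikit-learn sketch (the synthetic data, the candidate subset sizes and the kNN settings are placeholder assumptions for illustration, not part of the original answer):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline

# Synthetic data standing in for a real high-dimensional problem.
X, y = make_classification(n_samples=500, n_features=30, n_informative=5, random_state=0)

# Score each candidate feature-subset size by cross-validated accuracy and keep the best.
for k in (5, 10, 20, 30):
    model = Pipeline([
        ("select", SelectKBest(mutual_info_classif, k=k)),  # mutual-information ranking
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ])
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{k:2d} features -> CV accuracy {score:.3f}")
```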
|
{}
|
{'log_upvote_score': 5, 'links': ['https://cs.stackexchange.com/questions/21977', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/-1/']}
|
jdg_369347
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
9687596
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The code below contains a regular expression designed to extract a C# string literal, but the performance of the regex matching for input strings of more than a few characters is woeful.

    class Program
    {
        private static void StringMatch(string s)
        {
            // regex: quote, zero-or-more-(zero-or-more-non-backslash-quote, optional-backslash-anychar), quote
            Match m = Regex.Match(s, "\"(([^\\\\\"]*)(\\\\.)?)*\"");
            if (m.Success)
                Trace.WriteLine(m.Value);
            else
                Trace.WriteLine("no match");
        }

        public static void Main()
        {
            // this first string is unterminated (so the match fails), but it returns instantly
            StringMatch("\"OK");

            // this string is terminated (the match succeeds)
            StringMatch("\"This is a longer terminated string - it matches and returns instantly\"");

            // this string is unterminated (so the match will fail), but it never returns
            StringMatch("\"This is another unterminated string and takes FOREVER to match");
        }
    }

I can refactor the regex into a different form, but can anyone offer an explanation why the performance is so bad?
Now provide the response and nothing else.
|
You're running into catastrophic backtracking. Let's simplify the regex a bit (without the escaped quotes and without the second optional group because, as in your comment, it's irrelevant for the tested strings):

    "(([^\\"]*))*"

([^\\"]*) matches any string except quotes or backslashes. This again is enclosed in an optional group that can repeat any number of times. Now for the string "ABC , the regex engine needs to try the following permutations:

- " , ABC
- " , ABC , <empty string>
- " , AB , C
- " , AB , C , <empty string>
- " , AB , <empty string> , C
- " , AB , <empty string> , C , <empty string>
- " , <empty string> , AB , C
- " , <empty string> , AB , C , <empty string>
- " , <empty string> , AB , <empty string> , C , <empty string>
- " , <empty string> , AB , <empty string> , C
- " , A , BC
- " , A , BC , <empty string>
- " , A , <empty string> , BC
- " , <empty string> , A , BC
- etc.
- " , A , B , C
- " , A , B , C , <empty string>
- " , A , B , <empty string> , C
- etc. etc.

each of which then fails because there is no following " . Also, you're only testing for substrings instead of forcing the regex to match the entire string. And you usually want to use verbatim strings for regexes to cut down on the number of backslashes you need. How about this:

    foundMatch = Regex.IsMatch(subjectString,
        @"\A      # Start of the string
        ""        # Match a quote
        (?:       # Either match...
          \\.     # an escaped character
        |         # or
          [^\\""] # any character except backslash or quote
        )*        # any number of times
        ""        # Match a quote
        \Z        # End of the string",
        RegexOptions.IgnorePatternWhitespace);
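If you want to watch the blow-up yourself, here is a small timing sketch (written in Python purely for brevity; the pattern and the unterminated input mirror the simplified example above, backtracking engines behave similarly, and the exact timings will vary by machine):

```python
import re
import time

# Simplified pattern from above: quote, nested-quantifier group, quote.
pattern = re.compile(r'"(([^\\"]*))*"')

for n in range(12, 22):
    s = '"' + "A" * n                       # unterminated literal, so the match must fail
    start = time.perf_counter()
    pattern.match(s)
    elapsed = time.perf_counter() - start
    print(f"{n:2d} chars: {elapsed:.3f}s")  # time grows roughly exponentially with n
```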
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9687596', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1132806/']}
|
jdg_369348
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
18451232
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm a beginner java programmer following the java tutorials. I am using a simple Java program from the Java tutorials' Data Streams page, and at runtime it keeps on showing EOFException. I was wondering if this was normal, as the reader has to come to the end of the file eventually.

    import java.io.*;

    public class DataStreams {

        static final String dataFile = "F://Java//DataStreams//invoicedata.txt";

        static final double[] prices = { 19.99, 9.99, 15.99, 3.99, 4.99 };
        static final int[] units = { 12, 8, 13, 29, 50 };
        static final String[] descs = { "Java T-shirt", "Java Mug", "Duke Juggling Dolls", "Java Pin", "Java Key Chain" };

        public static void main(String args[]) {
            try {
                DataOutputStream out = new DataOutputStream(new BufferedOutputStream(new FileOutputStream(dataFile)));
                for (int i = 0; i < prices.length; i ++) {
                    out.writeDouble(prices[i]);
                    out.writeInt(units[i]);
                    out.writeUTF(descs[i]);
                }
                out.close();
            } catch(IOException e){
                e.printStackTrace(); // used to be System.err.println();
            }

            double price;
            int unit;
            String desc;
            double total = 0.0;

            try {
                DataInputStream in = new DataInputStream(new BufferedInputStream(new FileInputStream(dataFile)));
                while (true) {
                    price = in.readDouble();
                    unit = in.readInt();
                    desc = in.readUTF();
                    System.out.format("You ordered %d" + " units of %s at $%.2f%n", unit, desc, price);
                    total += unit * price;
                }
            } catch(IOException e) {
                e.printStackTrace();
            }
            System.out.format("Your total is %f.%n" , total);
        }
    }

It compiles fine, but the output is:

    You ordered 12 units of Java T-shirt at $19.99
    You ordered 8 units of Java Mug at $9.99
    You ordered 13 units of Duke Juggling Dolls at $15.99
    You ordered 29 units of Java Pin at $3.99
    You ordered 50 units of Java Key Chain at $4.99
    java.io.EOFException
        at java.io.DataInputStream.readFully(Unknown Source)
        at java.io.DataInputStream.readLong(Unknown Source)
        at java.io.DataInputStream.readDouble(Unknown Source)
        at DataStreams.main(DataStreams.java:39)
    Your total is 892.880000.

From the Java tutorials' Data Streams page, it says: Notice that DataStreams detects an end-of-file condition by catching EOFException, instead of testing for an invalid return value. All implementations of DataInput methods use EOFException instead of return values. So, does this mean that catching EOFException is normal, so just catching it and not handling it is fine, meaning that the end of file is reached? If it means I should handle it, please advise me on how to do it. EDIT From the suggestions, I've fixed it by using in.available() > 0 for the while loop condition. Or, I could do nothing to handle the exception, because it's fine.
Now provide the response and nothing else.
|
While reading from the file, you are not terminating your loop. So it reads all the values and correctly throws EOFException on the next iteration of the read at the line below:

    price = in.readDouble();

If you read the documentation, it says:

Throws: EOFException - if this input stream reaches the end before reading eight bytes. IOException - the stream has been closed and the contained input stream does not support reading after close, or another I/O error occurs.

Put a proper termination condition in your while loop to resolve the issue, e.g. below:

    while (in.available() > 0)   <--- if there are still bytes to read
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18451232', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2397327/']}
|
jdg_369349
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2169600
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the procedure for disabling hardware acceleration in WPF? What is it exactly? Is it a Windows setting, a Visual Studio setting, or something you alter in the code of your WPF project? Will it affect only the program you're running or will it be system-wide?
Now provide the response and nothing else.
|
You can disable it on a Window level starting from .NET 3.5 SP1:

    public partial class MyWindow : Window
    {
        public MyWindow() : base()
        {
            InitializeComponent();
        }

        protected override void OnSourceInitialized(EventArgs e)
        {
            var hwndSource = PresentationSource.FromVisual(this) as HwndSource;

            if (hwndSource != null)
                hwndSource.CompositionTarget.RenderMode = RenderMode.SoftwareOnly;

            base.OnSourceInitialized(e);
        }
    }

or you can subscribe to the SourceInitialized event of the window and do the same. Alternatively you can set it at the process level:

    RenderOptions.ProcessRenderMode = RenderMode.SoftwareOnly;

The precedence order for software rendering is:

1. DisableHWAcceleration registry key
2. ProcessRenderMode
3. RenderMode (per-target)
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2169600', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/138867/']}
|
jdg_369350
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
8656
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I cannot afford a function generator and oscope right now, so I'm trying to learn analog electronics with Spice (MacSpice, to be specific). Below is the Spice model for a simple emitter-follower circuit. Vcc is 15V, and the input at the base is a 5V 60Hz sine wave with a 0V DC offset. R1 sits between the transistor base and the sine input, and R2 sits between the transistor emitter and ground. The output at the emitter has me a bit puzzled and I would appreciate it if anyone has an explanation for the behavior.

    * Simple Emitter Follower
    Vcc 3 0 DC 15
    Vin 1 0 SIN(0 5 60) dc 0
    R1 1 2 270
    R2 4 0 3.3k
    Q1 3 2 4 generic
    .model generic npn
    .control
    tran 1ms 60ms
    plot v(1) v(4)
    .endc
    .end

The output of the transient analysis plot command is below. I was surprised by this output. As expected, there is a slight voltage drop in the amplitude due to the base-emitter junction of the NPN. However, the output is clipped and does not swing below zero. What would I need to do to catch the negative part of the sine wave?
Now provide the response and nothing else.
|
You need to connect R2 to a negative supply voltage with enough headroom for your signal, or bias the input waveform so that it doesn't clip. The output voltage of an emitter follower is usually V_out = V_in - 0.7V , but there are bounds on where this relationship works. For the expected behavior, VCC + 0.7V > V_in > VEE + 0.7V must be maintained, where VEE is your negative supply. In your case, VEE is equal to 0V since you don't have a negative supply. When your input voltage swings below 0.7V, the transistor turns off, and your output stays at the negative voltage rail.

Modify these lines to add in a negative power supply:

    R2 4 5 3.3k
    Vee 0 5 DC 15

Modify this line to bias the sine wave for the existing circuit:

    Vin 1 0 SIN(7.5 5 60) dc 0
|
{}
|
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/8656', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/2059/']}
|
jdg_369351
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1953593
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to solve the above question, as an application of Sylow's theorem. Let $P$ be the p-Sylow subgroup. Then $n_p | (p+1)$ and $n_p \equiv 1 \pmod{p}$. If $n_p =1$, $P$ is normal and we are done, else $n_p = p+1$. Now, \begin{equation}1+n_p(p-1) = 1 + (p+1)(p-1) = p^2,\end{equation}is the total number of elements in the p-Sylow subgroups.So if $n_p = p+1$, that means the number of elements not in any p-Sylow subgroup is $|G|-p^2=p$. If these $p$ elements and the identity form a subgroup then its a subgroup of the smallest prime index, so we are done. But how do I show that all the elements not in the p-Sylow subgroups form a subgroup, i.e. the subgroup generated by these elements has trivial intersection with the $p-$Sylow subgroups?
Now provide the response and nothing else.
|
I will expand my comment into an answer. Let $S$ be the set of elements that do not lie in any Sylow $p$-subgroup of $G$. You have shown by a counting argument that $|S|=p$. Let $q$ be any prime that divides $p+1$.Then $S$ must contain some element $g$ of order $q$. Since $n_p=p+1$, we have $N_G(P) = P$, so $g \not\in N_G(P)$. Let $x$ be a generator of $P$. Then the powers $x,x^2,\ldots, x^{p-1}$ of $x$ all generate $P$, so none of them can centralize $g$. Hence the $p$ elements $\{ g, g^x,g^{x^2}, \ldots, g^{x^{p-1}} \}$ (where $g^h$ means $hgh^{-1}$) are all distinct. Since they all have order $q$, they all lie in $S$, and so $S = \{ g, g^x,g^{x^2}, \ldots, g^{x^{p-1}} \}$. So every element of $S$ has order $q$, and hence $q$ must be the only prime dividing $p+1$, so $p+1$ is a power of $q$, and $S \cup \{ 1 \}$ must be the unique Sylow $q$-subgroup of $G$.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1953593', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/217341/']}
|
jdg_369352
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
12383900
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
It is said that mmap() maps files to the memory, and it costs to the virtual address space memory of the calling process. Does it really copy data to the memory, or the data still exists in the disk? Is mmap() faster than read() ?
Now provide the response and nothing else.
|
The only thing the mmap function really does is change some kernel data structures, and possibly the page table. It doesn't actually put anything into physical memory at all. After you call mmap , the allocated region probably doesn't even point to physical memory: accessing it will cause a page fault. This kind of page fault is transparently handled by the kernel, in fact, this is one of the kernel's primary duties. What happens with mmap is that the data remains on disk, and it is copied from disk to memory as your process reads it. It can also be copied to physical memory speculatively. When your process gets swapped out, the pages in the mmap region do not have to be written to swap because they are already backed by long-term storage -- unless you have modified them, of course. However, mmap will consume virtual address space, just like malloc and other similar functions (which mostly use mmap behind the scenes, or sbrk , which is basically a special version of mmap ). The main difference between using mmap to read a file and read to read a file is that unmodified pages in an mmap region do not contribute to overall memory pressure, they are almost "free", memory wise, as long as they are not being used. In contrast, files read with the read function will always contribute to memory pressure whether they are being used or not, and whether they have been modified or not. Finally, mmap is faster than read only in the use cases which it favors -- random access and page reuse. For linearly traversing a file, especially a small file, read will generally be faster since it does not require modifying the page tables, and it takes fewer system calls. As a recommendation, I can say that any large file which you will be scanning through should generally be read in its entirety with mmap on 64-bit systems, and you can mmap it in chunks on 32-bit systems where virtual memory is less available. See also: mmap() vs. reading blocks See also (thanks to James): When should I use mmap for file access?
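As a small illustration of the on-demand paging described above, here is a sketch using Python's mmap module (the file name is a placeholder; the same idea applies to the C mmap call discussed in the answer):

```python
import mmap

# Map a (potentially huge) file read-only. Nothing is copied into memory up front;
# pages are faulted in from disk only when the mapped bytes are actually touched.
with open("big.bin", "rb") as fh:                      # "big.bin" is a placeholder path
    with mmap.mmap(fh.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        header = mm[:16]   # faults in only the first page(s)
        last = mm[-1]      # faults in only the last page
        print(len(mm), header, last)
```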
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12383900', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1276473/']}
|
jdg_369353
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
679026
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I know the one-way speed of light question has been exhausted , and I'm sorry for the naive question, but I would like to understand one thing. Can we measure the one-way speed of anything at all? If we "truly" can, why can't we synchronize that thing and an emission of light from one place to another to compare their speeds? For instance, and for simplicity sake assume 2 cars pass a point at exactly the same time and we know one car is going 60 mph and we do not know the speed of the other car. We could set up a clock 60 miles away, knowing that the car going 60 will take one hour to get there. Then,by using only one clock and by checking the difference in arrival times, we could calculate the second car's speed. Why can't we do something similar with light and another medium. Even if it needed to be sent from some space shuttle to the ISS, it seems like with modern equipment, we should be able to get some decent approximation of the one way speed.
Now provide the response and nothing else.
|
Can we measure the one-way speed of anything at all? No, there is nothing unique to light in that respect. To measure a one-way speed of anything requires that you allow it to travel over a known distance with a start and stop time measured at the beginning and end of the known distance. Doing so requires that the start and stop clocks must be synchronized. Different clock synchronization conventions will produce different one way speeds. Since any valid synchronization convention can be used, the one way speed simply reflects your choice of convention.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/679026', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/320184/']}
|
jdg_369354
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1201992
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In Java I can write:

    public class Foo {
        public static Foo DEFAULT_FOO;

        static {
            DEFAULT_FOO = new Foo();
            // initialize
            DEFAULT_FOO.init();
        }

        public Foo() { }

        void init() {
            // initialize
        }
    }

How can I get the same functionality in C# (where static members are initialized before use)? And, if this is a bad thing to try to do, what is a better approach?
Now provide the response and nothing else.
|
you use a static constructor, like this:

    public class Foo
    {
        static Foo()
        {
            // inits
        }
    }

Here's more info . Bottom line: it's a parameterless constructor with the static keyword attached to it. Works just like the static block in Java. Edit: One more thing to mention. If you just want to construct something statically, you can statically initialize a variable without the need for the static constructor. For example:

    public class Foo
    {
        public static Bar StaticBar = new Bar();
    }

Keep in mind that you'll need a static constructor if you want to call any methods on Bar during static initialization, so your example that calls Foo.Init() still needs a static constructor. I'm just sayin' you're not limited, is all. :)
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1201992', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/130964/']}
|
jdg_369355
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
17779136
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm fairly unfamiliar with the Go programming language, and I've been trying to find a way to get the type of a variable as a string. So far, I haven't found anything that works. I've tried using typeof(variableName) to obtain a variable's type as a string, but this doesn't appear to be valid. Does Go have any built-in operator that can obtain a variable's type as a string, similar to JavaScript's typeof operator or Python's type operator?

    // Trying to print a variable's type as a string:
    package main

    import "fmt"

    func main() {
        num := 3
        fmt.Println(typeof(num)) // I expected this to print "int", but typeof appears to be an invalid function name.
    }
Now provide the response and nothing else.
|
There's the TypeOf function in the reflect package:

    package main

    import "fmt"
    import "reflect"

    func main() {
        num := 3
        fmt.Println(reflect.TypeOf(num))
    }

This outputs:

    int

Update: You updated your question specifying that you want the type as a string. TypeOf returns a Type , which has a Name method that returns the type as a string. So:

    typeStr := reflect.TypeOf(num).Name()

Update 2: To be more thorough, I should point out that you have a choice between calling Name() or String() on your Type ; they're sometimes different:

    // Name returns the type's name within its package.
    // It returns an empty string for unnamed types.
    Name() string

versus:

    // String returns a string representation of the type.
    // The string representation may use shortened package names
    // (e.g., base64 instead of "encoding/base64") and is not
    // guaranteed to be unique among types. To test for equality,
    // compare the Types directly.
    String() string
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17779136', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/975097/']}
|
jdg_369356
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
55316
|
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a dataset with 8000 clusters and 4 million observations. Unfortunately my statistical software, Stata, runs rather slowly when using its panel data function for logistic regression: xtlogit , even with a 10% subsample. However, when using the nonpanel logit function results appear much sooner. Therefore I may be able to benefit from using logit on modified data that accounts for fixed effects. I believe this procedure is coined the "Mundlak fixed effects procedure" (Mundlak, Y. 1978. Pooling of Time-Series and Cross-Section Data. Econometrica, 46(1), 69-85.) I found an intuitive explanation of this procedure in a paper by Antonakis, J., Bendahan, S., Jacquart, P., & Lalive, R. (2010). On making causal claims: A review and recommendations. The Leadership Quarterly, 21(6). 1086-1120. I quote: One way to get around the problem of omitted fixed effects and to still include Level 2 variables is to include the cluster means of all Level 1 covariates in the estimated model (Mundlak, 1978). The cluster means can be included as regressors or subtracted (i.e., cluster-mean centering) from the Level 1 covariate. The cluster means are invariant within cluster (and vary between clusters) and allow for consistent estimation of Level 1 parameters just as if fixed-effects had been included (see Rabe-Hesketh & Skrondal, 2008). Therefore cluster-mean centering seems ideal and practical for solving my computational problem. However, these papers seem to be geared towards linear regression (OLS). Is this method of cluster-mean centering also applicable for "replicating" fixed effects binary logistic regression? A more technical question that should result in the same answer would be: is xtlogit depvar indepvars, fe with dataset A equal to logit depvar indepvars with dataset B when dataset B is the cluster-mean centered version of dataset A? An added difficulty I found in this cluster-mean centering is how to cope with dummies. Because dummies are either 0 or 1, are they identical in random and fixed effects regression? Should they not be "centered"?
Now provide the response and nothing else.
|
First differencing or within transformations like demeaning are not available in models like logit because in the case of nonlinear models such tricks do not remove the unobserved fixed effects. Even if you had a smaller data set in which it was feasible to include N-1 individual dummies to estimate the fixed effects directly, this would lead to biased estimates unless the time dimension of your data is large. Elimination of the fixed effects in panel logit therefore follows neither differencing nor demeaning and is only possible due to the logit functional form. If you are interested in the details you could have a look at these notes by Söderbom on PDF page 30 (explanation for why demeaning/first differencing in logit/probit doesn't help) and page 42 (introduction of the panel logit estimator). Another problem is that xtlogit and panel logit models in general do not estimate the fixed effects directly which are needed to calculate marginal effects. Without those it will be very awkward to interpret your coefficients which might be disappointing after having run the model for hours and hours. With such a large data set and the previously mentioned conceptional difficulties of FE panel logit I would stick with the linear probability model. I hope this answer does not disappoint you but there are many good reasons for giving such advice: the LPM is much faster, the coefficients can be interpreted straight away (this holds in particular if you have interaction effects in your model because the interpretation of their coefficients in non-linear models changes!), the fixed effects are easily controlled for and you can adjust the standard errors for autocorrelation and clusters without estimation times increasing beyond reason. I hope this helps.
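To illustrate the recommended linear probability model route, here is a minimal sketch with pandas and statsmodels (the DataFrame df, the column names y, x1, x2 and cluster_id, and the use of cluster-mean centering to absorb the fixed effects are assumptions made for the example, not part of the original answer):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed input: a DataFrame `df` with a binary outcome y (0/1), regressors
# x1 and x2, and a cluster identifier cluster_id (names are illustrative only).
def demean(data: pd.DataFrame, cols, group):
    out = data.copy()
    for c in cols:
        out[c + "_dm"] = out[c] - out.groupby(group)[c].transform("mean")
    return out

df_dm = demean(df, ["y", "x1", "x2"], "cluster_id")

# Linear probability model: OLS on the demeaned variables (fixed effects absorbed),
# with standard errors clustered on the same identifier.
lpm = smf.ols("y_dm ~ x1_dm + x2_dm - 1", data=df_dm).fit(
    cov_type="cluster", cov_kwds={"groups": df_dm["cluster_id"]}
)
print(lpm.summary())
```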
|
{}
|
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/55316', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/3040/']}
|
jdg_369357
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
33083772
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to add an item to my database with SQLAlchemy + Python, but keep getting an error. My database_setup.py:

    class company(Base):
        __tablename__ = 'company'
        compID = Column(Integer, primary_key = True)
        name = Column(String(80), nullable = False)

    class item(Base):
        __tablename__ = 'items'
        itemID = Column(Integer, primary_key = True)
        name = Column(String(80), nullable = False)
        category = Column(String(250))
        description = Column(String(250))
        price = Column(String(8))
        compID = Column(Integer, ForeignKey('company.compID'))
        company = relationship(company)

after importing sqlalchemy to my terminal, I define an item to insert:

    JawboneUP3 = item(
        itemID="1",
        name="Jawbone UP3",
        description="The latest UP!",
        category="tracker",
        price="$174.99",
        company="Jawbone")

and draw a session to add and commit:

    session.add(JawboneUP3)
    session.commit()

When I submit, I keep getting this error:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1399, in add
        self._save_or_update_state(state)
      File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1417, in _save_or_update_state
        halt_on=self._contains_state):
      File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/mapper.py", line 2037, in cascade_iterator
        parent_dict, visited_states, halt_on))
      File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/properties.py", line 932, in cascade_iterator
        get_all_pending(state, dict_)
      File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py", line 761, in get_all_pending
        ret = [(instance_state(current), current)]
    AttributeError: 'str' object has no attribute '_sa_instance_state'

I have added a 'Jawbone' object to my company table, that I understand my 'JawboneUP3' should relate back to. This object was added correctly through a browser form that I enabled via my webserver script. I believe I should be able to add items right from the terminal though.
Now provide the response and nothing else.
|
I think the problem is in how you are defining the related company schema: JawboneUP3 = item(itemID = "1", name = "Jawbone UP3", description = "The latest UP!", category = "tracker", price = "$174.99", company = "Jawbone") # HERE^ The item constructor expects a company instance but you are passing a string value. Fix it: JawboneUP3 = item(itemID="1", name="Jawbone UP3", description="The latest UP!", category="tracker", price="$174.99", company=company(name="Jawbone"))
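Since the question says a "Jawbone" row was already added through the web form, an alternative sketch is to look that row up and attach the existing instance instead of creating a duplicate (this assumes `session` is the same session object used in the question):

```python
# Fetch the company row that already exists and pass the mapped instance,
# not a plain string, to the relationship attribute.
jawbone = session.query(company).filter_by(name="Jawbone").one()

JawboneUP3 = item(
    itemID="1",
    name="Jawbone UP3",
    description="The latest UP!",
    category="tracker",
    price="$174.99",
    company=jawbone,
)
session.add(JawboneUP3)
session.commit()
```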
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/33083772', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5255581/']}
|
jdg_369358
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
10489611
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
On the controller, the Put action is as follows:

    [HttpPut]
    [ActionName("putname")]
    public JsonResult putname(string name)
    {
        var response = ...
        return Json(response);
    }

The issue arises when consuming this API via the following:

    using (httpClient = new HttpClient())
    {
        string name = "abc";
        string jsonString = JsonConvert.SerializeObject(name);
        var requestUrl = new Uri("http:...../controller/putname/");
        using (HttpContent httpContent = new StringContent(jsonString))
        {
            httpContent.Headers.ContentType = new MediaTypeHeaderValue("application/json");
            HttpResponseMessage response = httpClient.PutAsync(requestUrl, httpContent).Result;
        }
    }

This code doesn't pass the parameter name to the controller. I even tried changing the URI to /putname/" + name.
Now provide the response and nothing else.
|
Here is what works for me:

    var jsonString = "{\"appid\":1,\"platformid\":1,\"rating\":3}";
    var httpContent = new StringContent(jsonString, Encoding.UTF8, "application/json");

    var message = await _client.PutAsync(MakeUri("App/Rate"), httpContent);
    Assert.AreEqual(HttpStatusCode.NoContent, message.StatusCode);

and my action method:

    public void PutRate(AppRating model)
    {
        if (model == null)
            throw new HttpResponseException(HttpStatusCode.BadRequest);
        if (ModelState.IsValid)
        {
            // ..
        }
    }

and the model:

    public class AppRating
    {
        public int AppId { get; set; }
        public int PlatformId { get; set; }
        public decimal Rating { get; set; }
    }

-Stan
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10489611', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1380740/']}
|
jdg_369359
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2142934
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Here is the question I'm currently looking at: Show that the discriminant of the equation $y^3+py+q=0$ is $-4p^3-27q^2$ . I've done some research and found this , but we haven't studied Vieta's theorem in my math class just yet. All I know is that if we take a cubic equation of the form $$x^3+ax^2+bx+c=0$$ and we manipulate it to get rid of the $x^2$ term, then we yield $$y^3+py+q=0$$ where $$p=\frac{3b-a^2}{3}$$ and $$q=\frac{27c+2a^3-9ab}{27}$$ Other than that, I don't know how to approach this question.
Now provide the response and nothing else.
|
Any $y^3 + py + q$ can be written as $F := (y-x_1)(y-x_2)(y+x_1 + x_2)$ for some $x_1,x_2$ in an algebraic extension. So it suffices to prove the formula just for this one polynomial $F$. Compute the discriminant of $F$ directly from the definition, compute the $x^1$ and $x^0$ coefficients in $F$ (i.e. $p$ and $q$) in terms of $x_1,x_2$, then compute $-4p^3-27q^2$ and compare.
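For reference, carrying out the first two of those computations gives (a sketch of the intermediate results; the final expansion is left as in the answer): since the roots of $F=(y-x_1)(y-x_2)(y+x_1+x_2)$ are $x_1$, $x_2$ and $-(x_1+x_2)$ (so the $y^2$ coefficient vanishes),
$$p = -(x_1^2+x_1x_2+x_2^2), \qquad q = x_1x_2(x_1+x_2),$$
while the discriminant, as the product of squared differences of the roots, is
$$\Delta = (x_1-x_2)^2\,(2x_1+x_2)^2\,(x_1+2x_2)^2.$$
Expanding $-4p^3-27q^2$ and comparing it with $\Delta$ finishes the argument.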
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2142934', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/397485/']}
|
jdg_369360
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
80735
|
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What will happen to the $K_\text{eq}$ in the following reaction if volume is increased? $$\ce{2NO(g) + O2(g) <=> 2NO2(g) + energy}$$ From le Chatelier's principle I know that it must be constant unless temperature is changed, however it doesn't make sense because if $$K_\text{eq}= \frac{[\ce{NO2}]^2}{[\ce{O2}][\ce{NO}]^2},$$ then by increasing the volume we know that equilibrium will shift left and favor reactants. Hence concentration of reactants would increase while concentration of products would decrease. If we plug that back into the equilibrium constant equation we see that if numerator decrease and denominator increase that must yield a smaller number than the initial number. Hence the equilibrium is not constant. Why this occurs?
Now provide the response and nothing else.
|
First step: Don't use K c ... use K p ! You are focusing on concentration, but as we're working with gases it becomes easier to work with partial pressures. $K_\text{p}= \frac{\ce{p(NO2)}^2}{\ce{p(O2)}\ce{p(NO)}^2}$ Now, if we remember that partial pressure is pressure $\times$ mole fraction ($p_{a}=P \times \chi_{a}$), and substitute these in we get (and cancelling out the pressures): $K_\text{p}= \frac{\chi_{(NO_{2})}^2}{\chi_{(O_{2})}\chi_{(NO)}^{2}P}$ Now, if we consider increasing the volume. By doing so (and assuming that temperature is kept constant), then we have decreased the pressure (ideal gas law: PV=nRT). Now we can see that although the number of moles changes (and hence the mole fractions), our equilibrium constant will remain unchanged thanks to the effect of pressure. A good guide is here .
|
{}
|
{'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/80735', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/47350/']}
|
jdg_369361
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
5179202
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This question was asked to me in an interview. They asked me how to generate a core dump file with which I can debug. I said that with the -g flag in gcc we can do it. Then they asked me what exactly that -g flag does to the compiler. I said (probably a wrong answer) that it will open up all the symbols in the core file which can be used for debugging. Can anyone tell me what exactly it does?
Now provide the response and nothing else.
|
That's kind of right, but incomplete. -g requests that the compiler and linker generate and retain source-level debugging/symbol information in the executable itself. If ... the program happens to later crash and produce a core file (which suggests some problem in the actual code), or a deliberate OS command forced it to core (e.g. kill -SIGQUIT pid ), or the program calls a function that dumps core (e.g. abort ) ... - none of which are actually caused by the use of -g - then the debugger will know how to read that " -g " symbol information from the executable and cross-reference it with the core. This means you can see the proper names of variables and functions in your stack frames, get line numbers and see the source as you step around in the executable. That debug information is useful whenever debugging - whether you started with a core or just the executable alone. It even helps produce better output from commands like pstack . Note that your environment may have other settings to control whether cores are generated (they can be big, and there's no general way to know if/when they can be removed, so they're not always wanted). For example, on UNIX/Linux shells it's often ulimit -c . You may also be interested to read about DWARF on Wikipedia - a commonly used debugging information format for encoding the embedded debug/symbol information in executable/library objects (e.g. on UNIX and Linux).

UPDATE per Victor's request in comments... Symbol information lists identifiers from the source code (usually only after any name mangling needed), the (virtual) memory addresses/offsets at which they'll be loaded in the process memory, and the type (e.g. data vs. code). For example...

    $ cat ok.cc
    int g_my_num;
    namespace NS { int ns_my_num = 2; }
    int f() { return g_my_num + NS::ns_my_num; }
    int main() { return f(); }
    $ g++ -g ok.cc -o ok    # compile ok executable with symbol info
    $ nm ok                 # show mangled identifiers
    00000000004017c8 d _DYNAMIC
    0000000000401960 d _GLOBAL_OFFSET_TABLE_
    0000000000400478 R _IO_stdin_used
                     w _ITM_deregisterTMCloneTable
                     w _ITM_registerTMCloneTable
                     w _Jv_RegisterClasses
    000000000040037c T _Z1fv               # this is f()
    0000000000401798 D _ZN2NS9ns_my_numE   # this is NS::ns_my_num
    00000000004017a8 d __CTOR_END__
    00000000004017a0 d __CTOR_LIST__
    00000000004017b8 d __DTOR_END__
    00000000004017b0 d __DTOR_LIST__
    0000000000400540 r __FRAME_END__
    00000000004017c0 d __JCR_END__
    00000000004017c0 d __JCR_LIST__
    00000000004017c8 d __TMC_END__
    00000000004017c8 d __TMC_LIST__
    0000000000401980 A __bss_start
    0000000000401788 D __data_start
    0000000000400440 t __do_global_ctors_aux
    00000000004002e0 t __do_global_dtors_aux
    0000000000401790 d __dso_handle
    0000000000000000 a __fini_array_end
    0000000000000000 a __fini_array_start
                     w __gmon_start__
    0000000000000000 a __init_array_end
    0000000000000000 a __init_array_start
    00000000004003a0 T __libc_csu_fini
    00000000004003b0 T __libc_csu_init
                     U __libc_start_main
    0000000000000000 a __preinit_array_end
    0000000000000000 a __preinit_array_start
    0000000000401980 A _edata
    0000000000401994 A _end
    0000000000400494 T _fini
    000000000040047c T _init
    0000000000400220 T _start
    000000000040024c t call_gmon_start
    0000000000401980 b completed.6118
    0000000000401788 W data_start
    0000000000400270 t deregister_tm_clones
    0000000000401988 b dtor_idx.6120
    0000000000401994 A end
    0000000000400350 t frame_dummy
    0000000000401990 B g_my_num            # our global g_my_num
    0000000000400390 T main                # the int main() function
    00000000004002a0 t register_tm_clones
    $ nm ok | c++filt       # c++filt "unmangles" identifiers...
    00000000004017c8 d _DYNAMIC
    0000000000401960 d _GLOBAL_OFFSET_TABLE_
    0000000000400478 R _IO_stdin_used
                     w _ITM_deregisterTMCloneTable
                     w _ITM_registerTMCloneTable
                     w _Jv_RegisterClasses
    000000000040037c T f()
    0000000000401798 D NS::ns_my_num
    00000000004017a8 d __CTOR_END__
    00000000004017a0 d __CTOR_LIST__
    00000000004017b8 d __DTOR_END__
    00000000004017b0 d __DTOR_LIST__
    0000000000400540 r __FRAME_END__
    00000000004017c0 d __JCR_END__
    00000000004017c0 d __JCR_LIST__
    00000000004017c8 d __TMC_END__
    00000000004017c8 d __TMC_LIST__
    0000000000401980 A __bss_start
    0000000000401788 D __data_start
    0000000000400440 t __do_global_ctors_aux
    00000000004002e0 t __do_global_dtors_aux
    0000000000401790 d __dso_handle
    0000000000000000 a __fini_array_end
    0000000000000000 a __fini_array_start
                     w __gmon_start__
    0000000000000000 a __init_array_end
    0000000000000000 a __init_array_start
    00000000004003a0 T __libc_csu_fini
    00000000004003b0 T __libc_csu_init
                     U __libc_start_main
    0000000000000000 a __preinit_array_end
    0000000000000000 a __preinit_array_start
    0000000000401980 A _edata
    0000000000401994 A _end
    0000000000400494 T _fini
    000000000040047c T _init
    0000000000400220 T _start
    000000000040024c t call_gmon_start
    0000000000401980 b completed.6118
    0000000000401788 W data_start
    0000000000400270 t deregister_tm_clones
    0000000000401988 b dtor_idx.6120
    0000000000401994 A end
    0000000000400350 t frame_dummy
    0000000000401990 B g_my_num
    0000000000400390 T main
    00000000004002a0 t register_tm_clones

Notes: our functions f() and main() are type T (which stands for "TEXT" - used for read-only non-zero memory content whether it's actually text or other data or executable code), g_my_num is B being a global with implicitly zero-ed out memory, while NS::ns_my_num is D as the executable has to explicitly provide the value 2 to occupy that memory. The man/info page for nm documents these things further....
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5179202', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/134713/']}
|
jdg_369362
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1990032
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This is our ideal inheritance hierarchy:

    class Foobar;
    class FoobarClient : Foobar;
    class FoobarServer : Foobar;
    class WindowsFoobar : Foobar;
    class UnixFoobar : Foobar;
    class WindowsFoobarClient : WindowsFoobar, FoobarClient;
    class WindowsFoobarServer : WindowsFoobar, FoobarServer;
    class UnixFoobarClient : UnixFoobar, FoobarClient;
    class UnixFoobarServer : UnixFoobar, FoobarServer;

This is because our inheritance hierarchy would try to inherit from Foobar twice, and as such, the compiler would complain of ambiguous references on any members of Foobar . Allow me to explain why I want such a complex model. This is because we want to have the same variable accessible from WindowsFoobar , UnixFoobar , FoobarClient , and FoobarServer . This wouldn't be a problem, only I'd like to use multiple inheritance with any combination of the above, so that I can use a server/client function on any platform, and also use a platform function on either client or server. I can't help but feel this is a somewhat common issue with multiple inheritance... Am I approaching this problem from completely the wrong angle?

Update 1: Also, consider that we could use #ifdef to get around this, however, this will tend to yield very ugly code like such:

    CFoobar::CFoobar()
    #if SYSAPI_WIN32
    : m_someData(1234)
    #endif
    {}

... yuck!

Update 2: For those who want to read more into the background of this issue, I really suggest skimming over the appropriate mailing list thread. Things start to get interesting around the 3rd post. Also there is a related code commit with which you can see the real life code in question here.
Now provide the response and nothing else.
|
It would work, although you'd get two copies of the base Foobar class. To get a single copy, you'd need to use virtual inheritance. Read on multiple inheritance here .

    class Foobar;
    class FoobarClient : virtual public Foobar;
    class FoobarServer : virtual public Foobar;
    class WindowsFoobar : virtual public Foobar;
    class UnixFoobar : virtual public Foobar;

However, there are many problems associated with multiple inheritance. If you really want to have the model presented, why not make FoobarClient and FoobarServer take a reference to Foobar at construction time, and then have Foobar& FoobarClient/Server::getFoobar ? Composition is often a way out of multiple inheritance. Take an example now:

    class WindowsFoobarClient : public WindowsFoobar
    {
        FoobarClient client;
    public:
        WindowsFoobarClient() : client( this ) {}
        FoobarClient& getClient() { return client; }
    };

However care must be taken in using this in the constructor.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1990032', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/47775/']}
|
jdg_369363
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1491983
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If $U_1, U_2,\ldots$ is an infinite collection of open sets, then their intersection is open? True or false? I proved that if $U_1, U_2,\ldots,U_n$ is a finite collection of open sets, then their intersection is open. But what happens when the collection is infinite?
Now provide the response and nothing else.
|
Let $U_n =(-1/n, 1/n)$. Clearly the intersection is $\{0\}$, which is not open in the standard topology on $\Bbb R$. A short verification is written out below.
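Writing the verification out: for every $n$ we have $0\in(-1/n,1/n)$, so $0\in\bigcap_n U_n$; conversely, if $x\neq 0$ then by the Archimedean property there is an $n$ with $1/n<|x|$, hence $x\notin U_n$ and so $x\notin\bigcap_n U_n$. Therefore $$\bigcap_{n=1}^{\infty}U_n=\{0\},$$ and $\{0\}$ is not open since it contains no interval $(-\varepsilon,\varepsilon)$ around $0$.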
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1491983', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/280637/']}
|
jdg_369364
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
47800326
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm working on some serverless applications and am looking to do all of the deployments using AWS SAM. I'm not finding a lot of information on how to include custom authorizers for my endpoints. There are some (year old) posts that talk about defining them in Swagger (which I'm not using) or Cloudformation. Does anyone have an example of either of these methods, or know how to define the custom authorizer in the SAM template?
Now provide the response and nothing else.
|
UPDATE: The AWS Serverless Application Model (SAM) now supports defining an API Auth Object as a part of the AWS::Serverless::Api resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api-auth-object Auth: MyLambdaTokenAuth: FunctionPayloadType: TOKEN FunctionArn: !GetAtt MyAuthFunction.Arn Identity: Header: Authorization ReauthorizeEvery: 300 Original Answer: I did eventually get this working using AWS swagger extensions in my template. I have a basic example on my GitHub: AWSTemplateFormatVersion: '2010-09-09'Transform: AWS::Serverless-2016-10-31Description: An example serverless "Hello World" application with a custom authorizer.Resources: ApiGateway: Type: AWS::Serverless::Api Properties: StageName: Prod DefinitionBody: swagger: 2.0 info: title: Ref: AWS::StackName securityDefinitions: test-authorizer: type: apiKey name: Authorization in: header x-amazon-apigateway-authtype: custom x-amazon-apigateway-authorizer: type: token authorizerUri: Fn::Sub: arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${TestAuthorizerFunc.Arn}/invocations authorizerResultTtlInSeconds: 5 paths: "/": get: x-amazon-apigateway-integration: httpMethod: post type: aws_proxy uri: Fn::Sub: arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${HelloWorld.Arn}/invocations responses: {} security: - test-authorizer: [] HelloWorld: Type: AWS::Serverless::Function Properties: Handler: lambda_function.lambda_handler Runtime: python3.6 CodeUri: ./HelloWorld Events: GetApi: Type: Api Properties: Path: / Method: get RestApiId: Ref: ApiGateway TestAuthorizerFunc: Type: AWS::Serverless::Function Properties: Handler: lambda_function.lambda_handler Runtime: python3.6 CodeUri: ./TestAuthorizerFunc TestAuthorizerFuncPerm: Type: AWS::Lambda::Permission DependsOn: - ApiGateway - TestAuthorizerFunc Properties: Action: lambda:InvokeFunction FunctionName: Ref: TestAuthorizerFunc Principal: apigateway.amazonaws.com In the API Gateway resource, the YAML of the swagger definition is added under the DefinitionBody key. The custom authorizer is defined as: securityDefinitions: test-authorizer: type: apiKey name: Authorization in: header x-amazon-apigateway-authtype: custom x-amazon-apigateway-authorizer: type: token authorizerUri: Fn::Sub: arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${TestAuthorizerFunc.Arn}/invocations authorizerResultTtlInSeconds: 5 Then the authorizer is attached in the definition for the path that it will secure: paths: "/": get: x-amazon-apigateway-integration: httpMethod: post type: aws_proxy uri: Fn::Sub: arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${HelloWorld.Arn}/invocations responses: {} security: - test-authorizer: [] Code for the Lambda functions can be found here: https://github.com/brysontyrrell/Serverless-Hello-World/tree/master/hello-world
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47800326', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6806114/']}
|
jdg_369365
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
364208
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let's say I have eigenstates $|x\rangle$ associated with measurement of position. I know that the eigenstates corresponding to their respective eigenvalues form a basis, let's call it $A$. Now let's say I want to expand a generic state vector $|ϕ\rangle$ in that basis formed by eigenstates. Now, what does it mean when physicists say that: "$ϕ(x)$ is the coordinate of the state $|ϕ\rangle$ in the basis $A$?" Any explanation or reference to similar kind of questions would be helpful.
Now provide the response and nothing else.
|
If you think of kets $\vert\psi\rangle$ as vectors, i.e. elements of a vector space in the sense that you can take linear combinations of them, and you think of these vectors as column vectors, then you can think of $\langle\psi\vert$ as a row vector so that $\langle\phi\vert\psi\rangle$ is a scalar product since it gives a number. If we imagine that there were discretely many $x$'s labelled by $i$, i.e. if our positions came as $\{x_{i-1}, x_i,x_{i+1},\ldots\}$, you could associate the number $\psi(x_i)$ (i.e. the function $\psi$ evaluated at $x_i$) with the component of $\vert \psi\rangle$ on the $i$'th basis vector $\vert x_i\rangle$, with $\langle x_i\vert\psi\rangle=\psi(x_i)$. In this way, $\vert \psi\rangle$ and $\vert x_i\rangle$ are huge column vectors with components$$\vert\psi\rangle = \left(\begin{array}{c}\vdots \\\psi(x_{i-1})\\\psi(x_i)\\\psi(x_{i+1})\\\vdots\end{array}\right)\, ,\qquad \vert x_i\rangle = \left(\begin{array}{c}\vdots \\0\\1\\0\\\vdots\end{array}\right)\begin{array}{c}\\\leftarrow\hbox{position $i$}\\\\\end{array}\, ,\qquad \langle x_i\vert\psi\rangle = \psi(x_i)$$ Of course the position is really continuous, so the index $i$ is not needed, but it's still convenient to think of $\psi(x)=\langle x\vert\psi\rangle$ as the value of the vector $\vert\psi\rangle$ on the basis vector $\vert x\rangle$.
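To tie this back to the wording in the question: inserting the completeness (resolution of identity) relation for the position basis makes the "coordinates" reading explicit, $$\vert\psi\rangle=\int dx\,\vert x\rangle\langle x\vert\psi\rangle=\int dx\,\psi(x)\,\vert x\rangle ,\qquad \langle x\vert x'\rangle=\delta(x-x') ,$$ so $\psi(x)$ plays exactly the role that the components $\psi(x_i)$ play in the discrete column-vector picture above.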
|
{}
|
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/364208', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/169636/']}
|
jdg_369366
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
25489897
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have text in a paragraph with a button a element: <p> <a class="btn btn-default" href="#">Take exam</a> <span class="text-muted">Available after reading course material</span></p> However, this renders with the two not vertically aligned. What is the best way to align the text baseline here? Fiddle at http://jsfiddle.net/0j20stbh/
Now provide the response and nothing else.
|
Using the same styling as the .btn would probably be a good approach. Example with disabled .btn <p> <a class="btn btn-default" href="#">Take exam</a> <span class="text-muted btn" disabled="true">Text</span></p> Or make a class of .btn-align with the same attributes. Example CSS .btn-align { padding: 6px 12px; line-height: 1.42857143; vertical-align: middle;} HTML <p> <a class="btn btn-default" href="#">Take exam</a> <span class="text-muted btn-align">Available after reading course material</span></p>
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25489897', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/300224/']}
|
jdg_369367
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
533880
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
As far as I know, in an NMOS the current flows from the drain to the source, and the drain and source are determined by the pin with the higher voltage on it (such that the source is the pin with the lower voltage). The bulk is always connected to the source so that the Vth won't change; however, it appears that the source is the pin which is connected to the ground. Therefore my questions are: (1) How does the current flow, when Q2 is on, from the ground to Vsw? (2) Is it flowing from the source to the drain? How come? (3) And why not use a PMOS instead, so that the current will flow from the source (the ground) to the drain (Vsw)?
Now provide the response and nothing else.
|
As far as I know in nmos the current flow from the drain to the source That's not correct. Current can flow in either direction in an N channel MOSFET and, when the synchronizing MOSFET is activated, current flows from the right hand terminal of the inductor, through the load and output capacitor and up through the N channel MOSFET back to Vsw. 4 quadrant MOSFET VI curve: - Image from this website . I believe that all your numbered questions are based on your initial misconception about MOSFETs. The MOSFET drain to source connection is just a "resistive" channel - there's no PN junction involved in it at all so it will conduct just as well with a reverse current as a forward current. The only thing limiting full operation in reverse mode (bottom left quadrant) is the parasitic bulk diode whose effect you can see when Vgs is 0 volts - there is little negative conduction up until Vds becomes about -0.7 volts then, beyond that, it's a diode characteristic.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/533880', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/267815/']}
|
jdg_369368
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
130136
|
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have come across that GCM, CCM, SHA-256 and SHA-384 Cipher suites are TLS 1.2 only. So TLS handshake can not be performed with lower TLS version using these suites. I am trying to figure out supported cipher suites per protocol version (SSL v3, TLS 1.0, 1.1, 1.2) from this list http://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml#tls-parameters-4 Are Cipher suites mentioned in TLS 1.1 RFC ( https://tools.ietf.org/rfc/rfc4346.txt ) supported by TLS 1.0 and SSL v3? Similarly suites mentioned TLS 1.0 supported by SSL v3 and vice versa?
Now provide the response and nothing else.
|
1.) See openssl output below and excellent Wikipedia page below 2.) No not all of them, see the tables at the URL below . 3.) No not all of them, see the tables at the URL below . https://en.wikipedia.org/wiki/Transport_Layer_Security openssl ciphers -v 'ALL:!aNULL' ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(256) Mac=AEAD ECDHE-RSA-AES256-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AES(256) Mac=SHA384 ECDHE-ECDSA-AES256-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AES(256) Mac=SHA384 ECDHE-RSA-AES256-SHA SSLv3 Kx=ECDH Au=RSA Enc=AES(256) Mac=SHA1 ECDHE-ECDSA-AES256-SHA SSLv3 Kx=ECDH Au=ECDSA Enc=AES(256) Mac=SHA1 SRP-DSS-AES-256-CBC-SHA SSLv3 Kx=SRP Au=DSS Enc=AES(256) Mac=SHA1 SRP-RSA-AES-256-CBC-SHA SSLv3 Kx=SRP Au=RSA Enc=AES(256) Mac=SHA1 SRP-AES-256-CBC-SHA SSLv3 Kx=SRP Au=SRP Enc=AES(256) Mac=SHA1 DH-DSS-AES256-GCM-SHA384 TLSv1.2 Kx=DH/DSS Au=DH Enc=AESGCM(256) Mac=AEAD DHE-DSS-AES256-GCM-SHA384 TLSv1.2 Kx=DH Au=DSS Enc=AESGCM(256) Mac=AEAD DH-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=DH/RSA Au=DH Enc=AESGCM(256) Mac=AEAD DHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=DH Au=RSA Enc=AESGCM(256) Mac=AEAD DHE-RSA-AES256-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AES(256) Mac=SHA256 DHE-DSS-AES256-SHA256 TLSv1.2 Kx=DH Au=DSS Enc=AES(256) Mac=SHA256 DH-RSA-AES256-SHA256 TLSv1.2 Kx=DH/RSA Au=DH Enc=AES(256) Mac=SHA256 DH-DSS-AES256-SHA256 TLSv1.2 Kx=DH/DSS Au=DH Enc=AES(256) Mac=SHA256 DHE-RSA-AES256-SHA SSLv3 Kx=DH Au=RSA Enc=AES(256) Mac=SHA1 DHE-DSS-AES256-SHA SSLv3 Kx=DH Au=DSS Enc=AES(256) Mac=SHA1 DH-RSA-AES256-SHA SSLv3 Kx=DH/RSA Au=DH Enc=AES(256) Mac=SHA1 DH-DSS-AES256-SHA SSLv3 Kx=DH/DSS Au=DH Enc=AES(256) Mac=SHA1 DHE-RSA-CAMELLIA256-SHA SSLv3 Kx=DH Au=RSA Enc=Camellia(256) Mac=SHA1 DHE-DSS-CAMELLIA256-SHA SSLv3 Kx=DH Au=DSS Enc=Camellia(256) Mac=SHA1 DH-RSA-CAMELLIA256-SHA SSLv3 Kx=DH/RSA Au=DH Enc=Camellia(256) Mac=SHA1 DH-DSS-CAMELLIA256-SHA SSLv3 Kx=DH/DSS Au=DH Enc=Camellia(256) Mac=SHA1 ECDH-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH/RSA Au=ECDH Enc=AESGCM(256) Mac=AEAD ECDH-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH/ECDSA Au=ECDH Enc=AESGCM(256) Mac=AEAD ECDH-RSA-AES256-SHA384 TLSv1.2 Kx=ECDH/RSA Au=ECDH Enc=AES(256) Mac=SHA384 ECDH-ECDSA-AES256-SHA384 TLSv1.2 Kx=ECDH/ECDSA Au=ECDH Enc=AES(256) Mac=SHA384 ECDH-RSA-AES256-SHA SSLv3 Kx=ECDH/RSA Au=ECDH Enc=AES(256) Mac=SHA1 ECDH-ECDSA-AES256-SHA SSLv3 Kx=ECDH/ECDSA Au=ECDH Enc=AES(256) Mac=SHA1 AES256-GCM-SHA384 TLSv1.2 Kx=RSA Au=RSA Enc=AESGCM(256) Mac=AEAD AES256-SHA256 TLSv1.2 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA256 AES256-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA1 CAMELLIA256-SHA SSLv3 Kx=RSA Au=RSA Enc=Camellia(256) Mac=SHA1 PSK-AES256-CBC-SHA SSLv3 Kx=PSK Au=PSK Enc=AES(256) Mac=SHA1 ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(128) Mac=AEAD ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(128) Mac=AEAD ECDHE-RSA-AES128-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AES(128) Mac=SHA256 ECDHE-ECDSA-AES128-SHA256 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AES(128) Mac=SHA256 ECDHE-RSA-AES128-SHA SSLv3 Kx=ECDH Au=RSA Enc=AES(128) Mac=SHA1 ECDHE-ECDSA-AES128-SHA SSLv3 Kx=ECDH Au=ECDSA Enc=AES(128) Mac=SHA1 SRP-DSS-AES-128-CBC-SHA SSLv3 Kx=SRP Au=DSS Enc=AES(128) Mac=SHA1 SRP-RSA-AES-128-CBC-SHA SSLv3 Kx=SRP Au=RSA Enc=AES(128) Mac=SHA1 SRP-AES-128-CBC-SHA SSLv3 Kx=SRP Au=SRP Enc=AES(128) Mac=SHA1 DH-DSS-AES128-GCM-SHA256 TLSv1.2 Kx=DH/DSS Au=DH Enc=AESGCM(128) Mac=AEAD DHE-DSS-AES128-GCM-SHA256 TLSv1.2 Kx=DH Au=DSS Enc=AESGCM(128) Mac=AEAD DH-RSA-AES128-GCM-SHA256 TLSv1.2 
Kx=DH/RSA Au=DH Enc=AESGCM(128) Mac=AEAD DHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AESGCM(128) Mac=AEAD DHE-RSA-AES128-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AES(128) Mac=SHA256 DHE-DSS-AES128-SHA256 TLSv1.2 Kx=DH Au=DSS Enc=AES(128) Mac=SHA256 DH-RSA-AES128-SHA256 TLSv1.2 Kx=DH/RSA Au=DH Enc=AES(128) Mac=SHA256 DH-DSS-AES128-SHA256 TLSv1.2 Kx=DH/DSS Au=DH Enc=AES(128) Mac=SHA256 DHE-RSA-AES128-SHA SSLv3 Kx=DH Au=RSA Enc=AES(128) Mac=SHA1 DHE-DSS-AES128-SHA SSLv3 Kx=DH Au=DSS Enc=AES(128) Mac=SHA1 DH-RSA-AES128-SHA SSLv3 Kx=DH/RSA Au=DH Enc=AES(128) Mac=SHA1 DH-DSS-AES128-SHA SSLv3 Kx=DH/DSS Au=DH Enc=AES(128) Mac=SHA1 DHE-RSA-SEED-SHA SSLv3 Kx=DH Au=RSA Enc=SEED(128) Mac=SHA1 DHE-DSS-SEED-SHA SSLv3 Kx=DH Au=DSS Enc=SEED(128) Mac=SHA1 DH-RSA-SEED-SHA SSLv3 Kx=DH/RSA Au=DH Enc=SEED(128) Mac=SHA1 DH-DSS-SEED-SHA SSLv3 Kx=DH/DSS Au=DH Enc=SEED(128) Mac=SHA1 DHE-RSA-CAMELLIA128-SHA SSLv3 Kx=DH Au=RSA Enc=Camellia(128) Mac=SHA1 DHE-DSS-CAMELLIA128-SHA SSLv3 Kx=DH Au=DSS Enc=Camellia(128) Mac=SHA1 DH-RSA-CAMELLIA128-SHA SSLv3 Kx=DH/RSA Au=DH Enc=Camellia(128) Mac=SHA1 DH-DSS-CAMELLIA128-SHA SSLv3 Kx=DH/DSS Au=DH Enc=Camellia(128) Mac=SHA1 ECDH-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH/RSA Au=ECDH Enc=AESGCM(128) Mac=AEAD ECDH-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH/ECDSA Au=ECDH Enc=AESGCM(128) Mac=AEAD ECDH-RSA-AES128-SHA256 TLSv1.2 Kx=ECDH/RSA Au=ECDH Enc=AES(128) Mac=SHA256 ECDH-ECDSA-AES128-SHA256 TLSv1.2 Kx=ECDH/ECDSA Au=ECDH Enc=AES(128) Mac=SHA256 ECDH-RSA-AES128-SHA SSLv3 Kx=ECDH/RSA Au=ECDH Enc=AES(128) Mac=SHA1 ECDH-ECDSA-AES128-SHA SSLv3 Kx=ECDH/ECDSA Au=ECDH Enc=AES(128) Mac=SHA1 AES128-GCM-SHA256 TLSv1.2 Kx=RSA Au=RSA Enc=AESGCM(128) Mac=AEAD AES128-SHA256 TLSv1.2 Kx=RSA Au=RSA Enc=AES(128) Mac=SHA256 AES128-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(128) Mac=SHA1 SEED-SHA SSLv3 Kx=RSA Au=RSA Enc=SEED(128) Mac=SHA1 CAMELLIA128-SHA SSLv3 Kx=RSA Au=RSA Enc=Camellia(128) Mac=SHA1 IDEA-CBC-SHA SSLv3 Kx=RSA Au=RSA Enc=IDEA(128) Mac=SHA1 IDEA-CBC-MD5 SSLv2 Kx=RSA Au=RSA Enc=IDEA(128) Mac=MD5 RC2-CBC-MD5 SSLv2 Kx=RSA Au=RSA Enc=RC2(128) Mac=MD5 PSK-AES128-CBC-SHA SSLv3 Kx=PSK Au=PSK Enc=AES(128) Mac=SHA1 ECDHE-RSA-RC4-SHA SSLv3 Kx=ECDH Au=RSA Enc=RC4(128) Mac=SHA1 ECDHE-ECDSA-RC4-SHA SSLv3 Kx=ECDH Au=ECDSA Enc=RC4(128) Mac=SHA1 ECDH-RSA-RC4-SHA SSLv3 Kx=ECDH/RSA Au=ECDH Enc=RC4(128) Mac=SHA1 ECDH-ECDSA-RC4-SHA SSLv3 Kx=ECDH/ECDSA Au=ECDH Enc=RC4(128) Mac=SHA1 RC4-SHA SSLv3 Kx=RSA Au=RSA Enc=RC4(128) Mac=SHA1 RC4-MD5 SSLv3 Kx=RSA Au=RSA Enc=RC4(128) Mac=MD5 RC4-MD5 SSLv2 Kx=RSA Au=RSA Enc=RC4(128) Mac=MD5 PSK-RC4-SHA SSLv3 Kx=PSK Au=PSK Enc=RC4(128) Mac=SHA1 ECDHE-RSA-DES-CBC3-SHA SSLv3 Kx=ECDH Au=RSA Enc=3DES(168) Mac=SHA1 ECDHE-ECDSA-DES-CBC3-SHA SSLv3 Kx=ECDH Au=ECDSA Enc=3DES(168) Mac=SHA1 SRP-DSS-3DES-EDE-CBC-SHA SSLv3 Kx=SRP Au=DSS Enc=3DES(168) Mac=SHA1 SRP-RSA-3DES-EDE-CBC-SHA SSLv3 Kx=SRP Au=RSA Enc=3DES(168) Mac=SHA1 SRP-3DES-EDE-CBC-SHA SSLv3 Kx=SRP Au=SRP Enc=3DES(168) Mac=SHA1 EDH-RSA-DES-CBC3-SHA SSLv3 Kx=DH Au=RSA Enc=3DES(168) Mac=SHA1 EDH-DSS-DES-CBC3-SHA SSLv3 Kx=DH Au=DSS Enc=3DES(168) Mac=SHA1 DH-RSA-DES-CBC3-SHA SSLv3 Kx=DH/RSA Au=DH Enc=3DES(168) Mac=SHA1 DH-DSS-DES-CBC3-SHA SSLv3 Kx=DH/DSS Au=DH Enc=3DES(168) Mac=SHA1 ECDH-RSA-DES-CBC3-SHA SSLv3 Kx=ECDH/RSA Au=ECDH Enc=3DES(168) Mac=SHA1 ECDH-ECDSA-DES-CBC3-SHA SSLv3 Kx=ECDH/ECDSA Au=ECDH Enc=3DES(168) Mac=SHA1 DES-CBC3-SHA SSLv3 Kx=RSA Au=RSA Enc=3DES(168) Mac=SHA1 DES-CBC3-MD5 SSLv2 Kx=RSA Au=RSA Enc=3DES(168) Mac=MD5 PSK-3DES-EDE-CBC-SHA SSLv3 Kx=PSK Au=PSK Enc=3DES(168) Mac=SHA1 EDH-RSA-DES-CBC-SHA SSLv3 Kx=DH Au=RSA 
Enc=DES(56) Mac=SHA1 EDH-DSS-DES-CBC-SHA SSLv3 Kx=DH Au=DSS Enc=DES(56) Mac=SHA1 DH-RSA-DES-CBC-SHA SSLv3 Kx=DH/RSA Au=DH Enc=DES(56) Mac=SHA1 DH-DSS-DES-CBC-SHA SSLv3 Kx=DH/DSS Au=DH Enc=DES(56) Mac=SHA1 DES-CBC-SHA SSLv3 Kx=RSA Au=RSA Enc=DES(56) Mac=SHA1 DES-CBC-MD5 SSLv2 Kx=RSA Au=RSA Enc=DES(56) Mac=MD5 EXP-EDH-RSA-DES-CBC-SHA SSLv3 Kx=DH(512) Au=RSA Enc=DES(40) Mac=SHA1 export EXP-EDH-DSS-DES-CBC-SHA SSLv3 Kx=DH(512) Au=DSS Enc=DES(40) Mac=SHA1 export EXP-DES-CBC-SHA SSLv3 Kx=RSA(512) Au=RSA Enc=DES(40) Mac=SHA1 export EXP-RC2-CBC-MD5 SSLv3 Kx=RSA(512) Au=RSA Enc=RC2(40) Mac=MD5 export EXP-RC2-CBC-MD5 SSLv2 Kx=RSA(512) Au=RSA Enc=RC2(40) Mac=MD5 export EXP-RC4-MD5 SSLv3 Kx=RSA(512) Au=RSA Enc=RC4(40) Mac=MD5 export EXP-RC4-MD5 SSLv2 Kx=RSA(512) Au=RSA Enc=RC4(40) Mac=MD5 export
|
{}
|
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/130136', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/106685/']}
|
jdg_369369
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
9955955
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I get the text of an element without the children?Neither element.textContent nor element.innerText seem to be working. HTML: <body><h1>Test Heading</h1><div>Awesome video and music. Thumbs way up. Love it. Happy weekend to you and your family. Love, Sasha</div></body><script src="http://ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js"></script><script type="text/javascript"> fool("body");</script> and here's the fool function: jQuery.fn.justtext = function(text) { return $(this).clone() .children() .remove() .end() .text();};function fool(el) { reverse(el); function reverse(el) { $(el).children().each(function() { if($(this).children().length > 0) { reverse(this); if($(this).justtext() != "") reverseText(this); } else { reverseText(this) } }); } function reverseText(el){ var text = el.textContent; var frag = text.toString().split(/ /); var foo = ""; var punctation_marks = [".",",","?","!"," ",":",";"]; for(i in frag){ if(punctation_marks.indexOf(frag[i]) == -1) foo += actualReverse(frag[i],punctation_marks) + " "; } el.textContent = foo; } function actualReverse(text,punctation_marks) { return (punctation_marks.indexOf(text.split("")[text.split("").length-1]) != -1)?text.split("").slice(0,text.split("").length-1).reverse().join("") + text.split("")[text.split("").length-1] : text.split("").reverse().join(""); }} edit : using node.nodeType doesn't really help and here's why:Imaginge the following HTML <td class="gensmall"> Last visit was: Sat Mar 31, 2012 10:50 am <br> <a href="./search.php?search_id=unanswered">View unanswered posts</a> | <a href="./search.php?search_id=active_topics">View active topics</a></td> if I'd use nodeType , only the text of the a element would change , but not the td itself ("last visit....")
Now provide the response and nothing else.
|
Just find the text nodes: var element = document.getElementById('whatever'), text = '';for (var i = 0; i < element.childNodes.length; ++i) if (element.childNodes[i].nodeType === Node.TEXT_NODE) text += element.childNodes[i].textContent; edit — if you want the text in descendant ("children") nodes, and (as is now apparent) you're using jQuery: $.fn.allText = function() { var text = ''; this.each(function() { $(this).contents().each(function() { if (this.nodeType == Node.TEXT_NODE) text += this.textContent; else if (this.nodeType == Node.ELEMENT_NODE) text += $(this).allText(); }); }); return text;}; Hold on and I'll test that out :-) (seems to work)
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9955955', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/941123/']}
|
jdg_369370
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11988853
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
data User = User { city :: Text , country :: Text , phone :: Text , email :: Text}instance ToJSON User where toJSON (User a b c d)= object ["a" .= a ,"b" .= b ,"c" .= c ,"d" .= d]test:: User -> IO Valuetest u = do let j = toJSON u return j What I want is Text like so test::User -> IO Texttest u = do let j = pack ("{\"city\":\"test\",\"country\":\"test\",\"phone\":\"test\",\"email\":\"test\"}") return j I can't figure it out how to go from Value to Text
Now provide the response and nothing else.
|
It is more difficult to do this than it should be for what (I think) is a generally useful function. Data.Aeson.Encode.encode does too much work and converts it all the way to a ByteString . Starting with encode and switching the Lazy.Text -> ByteString to Lazy.Text -> Strict.Text conversion does what you want: {-# LANGUAGE OverloadedStrings #-}import Data.Aesonimport Data.Aeson.Text (encodeToTextBuilder)import Data.Textimport Data.Text.Lazy (toStrict)import Data.Text.Lazy.Builder (toLazyText)data User = User { city :: Text , country :: Text , phone :: Text , email :: Text }instance ToJSON User where toJSON (User a b c d) = object [ "city" .= a , "country" .= b , "phone" .= c , "email" .= d ]test :: User -> Texttest = toStrict . toLazyText . encodeToTextBuilder . toJSON
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11988853', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/172754/']}
|
jdg_369371
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2858103
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a way to get the index of a list element, so I can retrieve it later through list.get(index), when using the for(Object obj : o) construction? Or is the only option to define a local var and manually increment it? Is there anything simpler?
Now provide the response and nothing else.
|
No, the for-each loop doesn't keep track of the index. You can either use a regular indexed loop, or do something like this: int idx = 0;for (Object o : list) { ... idx++;} This is risky since break / continue will make idx go out of sync, so use this sparingly, and only when the body is simple and only a few lines long. If the elements are distinct, List.indexOf would also work, albeit at O(N), and at that point you may want to consider a Set (unordered, but guaranteed distinct). It should also be said that sometimes using a listIterator() also alleviates the need for an explicit index while iterating. A ListIterator supports add, set and remove operations; a small sketch follows below. This is another clear advantage List has over arrays as far as the iteration mechanism goes.
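Here is that sketch (the list contents are arbitrary); ListIterator exposes the index directly via nextIndex() while still allowing safe modification:
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.ListIterator;

public class ListIteratorDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>(Arrays.asList("a", "b", "c"));
        ListIterator<String> it = list.listIterator();
        while (it.hasNext()) {
            int idx = it.nextIndex();      // index of the element about to be returned
            String s = it.next();
            System.out.println(idx + ": " + s);
            if (s.equals("b")) {
                it.set("B");               // safe in-place replacement while iterating
            }
        }
        System.out.println(list);          // [a, B, c]
    }
}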
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2858103', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/217067/']}
|
jdg_369372
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6058
|
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In the question about real vs. fake salt , the answers describe how real salt 'perturbs the encryption algorithm.' I know roughly how initialization vectors work; is this the same concept, or something different entirely?
Now provide the response and nothing else.
|
A salt and an initialization vector are mostly the same thing in the following sense: they are public data, which should be generated anew for each instance (each hashed password, each encrypted message). A salt is about being able to use the same password several times without opening weaknesses; or, if you prefer, preventing an attacker from sharing password attack costs in case the same password could have been used on several instances -- which is all what precomputed (rainbow) tables are about. The point of an IV in, say, symmetric encryption with CBC, is to tolerate the use of the same key to encrypt several distinct messages. The name "initialization vector" hints at a repetitive process over a given internal state, the IV being what the state is initialized at. For instance, the MD5 hash function is defined as repeated action of a compression function which takes as input the current state (128 bits) and the next message block (512 bits), and outputs the next state value; at the beginning, the state is initialized to a conventional value which is called "the IV". In that sense, most "salts" used in password processing are not "initialization vectors". But this is a bit of an overinterpretation of the expression. Still, naming things is mostly a matter of Tradition. A "salt" is a kind of IV which: is involved in some processing of a password; should be distinct for each processing instance (it cannot be a fixed conventional value); only needs uniqueness ("it is not repeated"), not uniform selection among the space of possible salts (although uniform random selection is a good and cheap way to get uniqueness with overwhelming probability, assuming that the salts are long enough). The particulars (how the salt/IV is exactly inserted and at what point in the algorithm) are a red herring.
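To make the parallel concrete, here is a small Python sketch (illustrative only; the sizes and iteration count are arbitrary choices, and the encryption call is a hypothetical placeholder, not a real API):
import hashlib
import os

# Salt: fresh per stored password, public, kept next to the hash.
password = b"correct horse battery staple"
salt = os.urandom(16)                                   # unique per password instance
digest = hashlib.pbkdf2_hmac("sha256", password, salt, 100000)
stored_record = salt + digest                           # the salt travels with the hash

# IV: fresh per encrypted message, public, sent next to the ciphertext.
iv = os.urandom(16)                                     # unique per message (e.g. AES-CBC block size)
# ciphertext = aes_cbc_encrypt(key, iv, message)        # hypothetical encryption call
# transmitted = iv + ciphertext                         # the IV travels with the ciphertext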
|
{}
|
{'log_upvote_score': 7, 'links': ['https://security.stackexchange.com/questions/6058', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/3677/']}
|
jdg_369373
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
17514648
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm working with a data frame in r where my row names are meaningful. Hence, I would like to give the column of row names a name. How do I do this?
Now provide the response and nothing else.
|
It sounds like you want to convert the rownames to a proper column of the data.frame. eg: # add the rownames as a proper columnmyDF <- cbind(Row.Names = rownames(myDF), myDF)myDF# Row.Names id val vr2# row_one row_one A 1 23# row_two row_two A 2 24# row_three row_three B 3 25# row_four row_four C 4 26 If you want to then remove the original rownames: rownames(myDF) <- NULLmyDF# Row.Names id val vr2# 1 row_one A 1 23# 2 row_two A 2 24# 3 row_three B 3 25# 4 row_four C 4 26 Alternatively, if all of your data is of the same class (ie, all numeric, or all string), you can convert to Matrix and name the dimnames myMat <- as.matrix(myDF)names(dimnames(myMat)) <- c("Names.of.Rows", "")myMat# Names.of.Rows id val vr2 # row_one "A" "1" "23"# row_two "A" "2" "24"# row_three "B" "3" "25"# row_four "C" "4" "26"
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/17514648', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2146894/']}
|
jdg_369374
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1568766
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've seen the following statement a few times: "Let $A$ be a set; then $\emptyset\subseteq A$". Or, written 'more formally': $$\forall A\,\, \emptyset\subseteq A$$ My question is: I've always seen the quantifier $\forall x$ used to mean "for all elements $x$ of some set $S$". However, when talking about all sets, how do we define this quantification?
Now provide the response and nothing else.
|
It's just an unbounded quantifier, not constrained to the elements of a fixed set $S$. This is the normal case; the bounded quantifiers, constrained to members of a set, are defined in terms of the basic quantifiers, $\forall x$ and $\exists x$. $\forall x$ means just that: for all $x$, anything $x$. In the mathematical universe, $x$ ranges over sets, and not, say, people, atoms, oranges, etc. Unless some further conventions are in force constraining $x$ to range over, for example, only reals, $\forall x$ means "for all sets $x$". In order to say "for all $x$ in $S$" you have to bound the quantifier, as in "$\forall x\in S$". However, bounded quantifiers are a shorthand, defined in terms of the basic unbounded quantifiers:$$\begin{align}\exists x\in S\,P(x) &\stackrel{def}\iff \exists x\,(x\in S \land P(x)) \\\forall x\in S\,P(x) &\stackrel{def}\iff \forall x\,(x\in S \to P(x)). \\\end{align}$$
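Applying this to the statement in the question: $\emptyset\subseteq A$ unfolds, via the bounded-quantifier shorthand above, to the unbounded claim $$\forall x\,(x\in\emptyset\to x\in A),$$ which is vacuously true because $x\in\emptyset$ is false for every $x$. Likewise, $\forall A\,\,\emptyset\subseteq A$ just uses another unbounded quantifier ranging over all sets; no ambient set containing "all sets" is required.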
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1568766', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/193752/']}
|
jdg_369375
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
17248462
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So, I'm beginning C++, with a semi-adequate background of python. In python, you make a list/array like this: x = [1, 2, 3, 4, 5, 6, 7, 8, 9] Then, to print the list, with the square brackets included, all you do is: print x That would display this: [1, 2, 3, 4, 5, 6, 7, 8, 9] How would I do the exact same thing in c++, print the brackets and the elements, in an elegant/clean fashion? NOTE I don't want just the elements of the array, I want the whole array, like this: {1, 2, 3, 4, 5, 6, 7, 8, 9} When I use this code to try to print the array, this happens: input: #include <iostream>using namespace std;int main(){ int anArray[9] = {1, 2, 3, 4, 5, 6, 7, 8, 9}; cout << anArray << endl;} The output is where in memory the array is stored in (I think this is so, correct me if I'm wrong): 0x28fedc As a sidenote, I don't know how to create an array with many different data types, such as integers, strings, and so on, so if someone can enlighten me, that'd be great!Thanks for answering my painstakingly obvious/noobish questions!
Now provide the response and nothing else.
|
You can write a simple helper function to allow you to stream the array to an output stream (including but not limited to std::cout ): #include <iostream>// print an array to an output stream// prints to std::cout by defaulttemplate <typename T, std::size_t N>void print_array(const T(&a)[N], std::ostream& o = std::cout){ o << "{"; for (std::size_t i = 0; i < N-1; ++i) { o << a[i] << ", "; } o << a[N-1] << "}\n";} where a function template is used in order to deduce both the type and size of the array at compile time. You can use it like this: #include <fstream>int main(){ int a[] = {1,2,3,4,5}; print_array(a); // prints {1, 2, 3, 4, 5} to stdout std::string sa[] = {"hello", "world"}; print_array(sa, std::cerr); // prints {hello, world} to stderr std::ofstream output("array.txt"); print_array(a, output); // prints {1, 2, 3, 4, 5} to file array.txt} This solution can be trivially generalized to deal with ranges and standard library containers. For even more general approaches, see here . As for the side note, you cannot do that in C++. An array can only hold objects of one type.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17248462', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2511129/']}
|
jdg_369376
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
9043820
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to know the regex to match words that have a maximum length. For example, if a word is at most 10 characters long, I would like the regex to match, but if the length exceeds 10, then the regex should not match. I tried ^(\w{10})$ but that gives me matches only if the word is at least 10 characters long. If the word is more than 10 characters, it still matches, but matches only the first 10 characters.
Now provide the response and nothing else.
|
I think you want \b\w{1,10}\b . The \b matches a word boundary. Of course, you could also replace the \b s and use ^\w{1,10}$ . This will match a word of at most 10 characters as long as it's the only content of the string. I think this is what you were doing before. Since it's Java, you'll actually have to escape the backslashes: "\\b\\w{1,10}\\b" . You probably knew this already, but it's gotten me before.
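A small Java usage sketch (the input strings are arbitrary):
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MaxLenWords {
    public static void main(String[] args) {
        Pattern p = Pattern.compile("\\b\\w{1,10}\\b");
        Matcher m = p.matcher("short words only but reallyreallylongword is skipped");
        while (m.find()) {
            System.out.println(m.group());               // prints each word of 1-10 characters
        }
        // Whole-string variant (matches() implicitly anchors like ^ and $):
        System.out.println("helloworld".matches("\\w{1,10}"));   // true  (10 chars)
        System.out.println("helloworlds".matches("\\w{1,10}"));  // false (11 chars)
    }
}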
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/9043820', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/945672/']}
|
jdg_369377
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
29371695
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In the canvas element, are floating point numbers actually treated as floats by the lineTo and arc methods? For example: context.moveTo(20.4562, 80.8923);context.lineTo(120.1123, 90.2134);context.arc(24.5113, 36.7989, 20.123, 0, Math.PI*2); Does canvas really support floating-point coordinates, or is it just converting the float numbers to integers when drawing a line, arc, rect, etc.?
Now provide the response and nothing else.
|
A bitmap does not support floating point values on its own. It can only deal with integer values. The 2D context of the canvas deals mostly with paths which is per-SE not connected to the bitmap. Paths are arbitrary and exists only as vectors internally. It's when they are stroked or filled they are put through a rasterizing process, ie. converted to a bitmap representation. Paths and their assigned points in the bitmap can themselves hold fraction values, but a bitmap can't. However, it is possible to represent a fractional value using interpolation (sub-pixeling) which gives the impression of having a larger available bitmap resolution, which in turn removes the jaggy look if values where cut down to just integers. (And resolution is the main point of doing interpolation. A screen is about the equivalent of 72-96 DPI. If the screen had a higher resolution, 300+ DPI, interpolation would not be needed, as is in the case with printing. But we are there now and drawings would look jaggy if interpolation wasn't used as a compensation for lack of resolution). Here is how it works When a pixel lands on a integer bound value, it is a simple case of setting the pixel to that value. When you deal with a floating point there is no where to set a pixel in part, it has to be either be rounded down (floor'ed) or placed in the next cell (ceil'ed), ie. 3.2 becomes 3, 3.7 becomes 4 etc. Interpolation / sub-pixeling However, someone came up with the idea some decades ago to represent the fractional part as a blend between the actual pixel and the next pixel. If the pixel value was 3.5, the fractional part would represent here 50% of black and 50% of white. It would still occupy a whole pixel cell, but since it is so tiny it will appear to just occupy a fraction of its cell thanks to the surrounding pixels which contribute to the illusion. So in this case the last pixel set wold look like this: In case the value was 3.25, only 25% of the remaining pixel would be mixed, making the last pixel look like: This will apply to all pixels that would land on a fractional value. When you draw a diagonal line and some points "crosses" two pixels, a mix of those will be applied to the closed integer based placement, making the line look smooth. Now, in case of canvas the shapes are interpolated with the alpha channel. Then the shape is blended and composited using Porter-Duff with the existing content, which is what we see as the final result. This is also something to have in mind in regards to performance. If interpolation is needed the cost goes up, as for each pixel the browser (or sub-system) has to calculate the fractional representation. For this reason you could make sure to use integer values by rounding them off before passing them to the path method. This is of course not doable with arcs, ellipses, diagonal lines at some degrees etc., but it can help speed up things in some cases. This is the principle with interpolation which applies when fractional coordinates are used. However, there are more complex ways of interpolate and sub-pixel. In modern world 2x2 or 4x4 sampling is more common to give a more accurate result. Demo If we make a small zoomed in line we can see the end of the line "fade" in and out when we variate the fraction value. 
This line is 3 pixels wide, then we add a fraction value between [0, 1] to it and we can see the last ceil'ed pixel is being resampled: var ctx = document.querySelector("canvas").getContext("2d"), x = 3.1, dx = 0.1;ctx.imageSmoothingEnabled = ctx.mozImageSmoothingEnabled = ctx.webkitImageSmoothingEnabled = false;ctx.font = "14px sans-serif";(function loop() { ctx.fillStyle = "#000"; ctx.clearRect(0,0,350,50); ctx.fillRect(0,0, x, 1); // forces pixel-alignment for demo ctx.drawImage(ctx.canvas, 0,0,4,1, 0, 0, 200, 50); x += dx; if (x <= 3 || x >= 4) dx = -dx; info(x); document.querySelector("div").innerHTML = "length: " + x.toFixed(2); setTimeout(loop, 160)})();function info(x) { ctx.fillStyle = "#f00"; ctx.fillText("Perceived length", 210, 19); ctx.fillRect(0, 15, x * 50, 2); ctx.fillRect(x*50-1, 12, 2, 7); x = x === 3 ? 3 : 4; ctx.fillText("Actual length", 210, 40); ctx.fillRect(0, 35, x * 50, 2); ctx.fillRect(x*50-1, 32, 2, 7);} div {font:bold 14px sans-serif} <canvas width=500 height=50></canvas><br><div></div>
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/29371695', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4705659/']}
|
jdg_369378
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
499734
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How would I integrate the following? $$\int\frac{\sin^3x}{\cos x}\,\mathrm dx$$ I am not sure what to do. I could split $\sin^3x=(1-\cos^2x)(\sin x)$ and then get $\int \tan(x)(1-\cos^2x)\,\mathrm dx$. But would this be the right way to go?
Now provide the response and nothing else.
|
Let $u = \cos x \implies du = -\sin x\,dx$ $$\begin{align} \int \dfrac{\sin^3x}{\cos x} \,dx & = -\int \dfrac{-\sin x\sin^2 x}{\cos x}\,dx \\ \\ & = -\int \dfrac{-\sin x(1 - \cos^2 x)}{\cos x}\,dx \\ \\ & = -\int \dfrac{(1 - u^2)}{u}\,du \\ \\ & = \int u \,du - \int\dfrac 1u \,du\\ \\ & = \frac{u^2}{2} - \ln |u| + C \\ \\ & = \frac 12 \cos^2 x - \ln |\cos x| + C\end{align}$$
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/499734', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/37244/']}
|
jdg_369379
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
9374556
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a way to configure log4net to print logs both to console and to files during debug? I am trying to find a way to debug my software efficiently by observing the logs immediately when they happen. Writing to files is problematic for me for debugging because I don't want to wait until the file is flushed to the disk and then to open it. Therefore I prefer it to write into console. What do you suggest? I added the app.config file with the additional appended, but I am not being able to show result console. Following is my app.config configuration: <?xml version="1.0"?><configuration> <configSections> <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" /> </configSections> <system.serviceModel> <bindings> <wsHttpBinding> <binding name="WSHttpBinding_IProviderService" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" bypassProxyOnLocal="false" transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true" allowCookies="false"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384"/> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false"/> <security mode="Message"> <transport clientCredentialType="Windows" proxyCredentialType="None" realm=""/> <message clientCredentialType="Windows" negotiateServiceCredential="true" algorithmSuite="Default"/> </security> </binding> </wsHttpBinding> </bindings> <client> <endpoint address="http://localhost:8081/AP2" binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_IProviderService" contract="IProviderService" name="WSHttpBinding_IProviderService"> <identity> <dns value="localhost"/> </identity> </endpoint> </client> </system.serviceModel> <log4net debug="true"> <appender name="RollingLogFileAppender" type="log4net.Appender.RollingFileAppender"> <file value="C:\\Temp\\1.txt" /> <appendToFile value="true" /> <rollingStyle value="Size" /> <maxSizeRollBackups value="10" /> <maximumFileSize value="10MB" /> <staticLogFileName value="true" /> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%-5p %d %5rms %-22.22c{1} %-18.18M - %m%n" /> </layout> </appender> <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender"> <param name="Threshold" value="INFO" /> <layout type="log4net.Layout.PatternLayout"> <param name="ConversionPattern" value="%d [%t] %-5p %c [%x] - %m%n" /> </layout> </appender> <root> <level value="DEBUG" /> <appender-ref ref="RollingLogFileAppender" /> </root> </log4net> <startup> <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> </startup></configuration>
Now provide the response and nothing else.
|
You need to have both appenders declared in your <root> section. Configure your log4net like this: <log4net> <appender name="Console" type="log4net.Appender.ConsoleAppender"> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%date %-5level: %message%newline" /> </layout> </appender> <appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender"> <file value="Log.txt" /> <appendToFile value="false" /> <rollingStyle value="Size" /> <maxSizeRollBackups value="10" /> <maximumFileSize value="50MB" /> <staticLogFileName value="true" /> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%date %-5level[%logger]: %message%newline" /> </layout> </appender> <root> <level value="INFO" /> <appender-ref ref="Console" /> <appender-ref ref="RollingFileAppender" /> </root></log4net>
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/9374556', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/829174/']}
|
jdg_369380
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
168378
|
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to do a two-sample t.test comparing values from two regions for several groups. I know it must be a simple trick, but I couldn't find an answer so far. This is a short version of my table. Several species each from 2 regions (There are generally 6 specimens per species available for each region). species region N15 ARGAFF EQ 9.85 ARGAFF EQ 10.42 ARGAFF EQ 10.43 ARGAFF OMZ 10.28 ARGAFF OMZ 10.30 ARGAFF OMZ 10.41 BATABY EQ 10.57 BATABY EQ 10.60 BATABY EQ 10.68 BATABY OMZ 9.21 BATABY OMZ 9.29 BATABY OMZ 9.67 I tried spp<-split(SIA,species,drop=FALSE) followed by lapply(spp,function (x) t.test(N15~region)) I further tried by(SIA[,3], SIA[, "species"], function (x) t.test(N15~region)) but both versions compute the test for all values (N.138, not just for individual species (N:12). On the other hand by(SIA[,3], SIA[, "species"], t.test) results in a one-sample test but for individual species. I'd appreciate advice.
Now provide the response and nothing else.
|
There are various ways to do that, but I'm using a combination of dplyr and broom packages. The advantage of this process is that you export the t-test information as a dataset: library(dplyr)library(broom)set.seed(9)# example datasetdt = data.frame(species = c(rep("ARGAFF",6), rep("BATABY",6)), region = rep(c("EQ","OMZ"),6), N15 = rnorm(12,10,1))dt_result = dt %>% group_by(species) %>% do(tidy(t.test(N15~region, data=.)))dt_result # species estimate estimate1 estimate2 statistic p.value parameter conf.low conf.high # 1 ARGAFF 0.6029705 9.842659 9.239688 1.381967 0.2439265 3.732157 -0.6434982 1.849439 # 2 BATABY 1.0238324 10.740491 9.716659 1.994738 0.1673019 2.300604 -0.9298488 2.977514 Hope it works for you.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/168378', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/86622/']}
|
jdg_369381
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
31857345
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Getting this error Error:(48) Error parsing XML: not well-formed (invalid token) for below mentioned layout. Preview showing correctly. Not able to understand why I am getting this error. I made sure that every view or view closed properly. <?xml version="1.0" encoding="utf-8"?><android.support.v7.widget.CardView xmlns:android="http://schemas.android.com/apk/res/android"xmlns:card_view="http://schemas.android.com/apk/res-auto"android:layout_width="match_parent"android:layout_height="wrap_content"android:gravity="center"card_view:cardCornerRadius="2dp"card_view:cardElevation="2sp"card_view:cardUseCompatPadding="true"card_view:contentPadding="5dp"><RelativeLayout android:layout_width="match_parent" android:layout_height="wrap_content"> <RelativeLayout android:id="@+id/details" android:layout_width="match_parent" android:layout_height="wrap_content"> <ImageView android:id="@+id/profilepic" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignParentLeft="true" android:layout_alignParentStart="true" /> <TextView android:id="@+id/name" android:layout_width="wrap_content " android:layout_height="wrap_content" android:layout_marginLeft="2dp" android:layout_toRightOf="@+id/profilepic" android:text="prashanth" /> </RelativeLayout> <TextView android:id="@+id/quote" android:layout_width="fill_parent" android:layout_height="wrap_content" android:layout_below="@+id/details" android:layout_margin="3dp" android:text="jalkdjlajflkdajf akjfdlajfljal alfjk" /> <RelativeLayout android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_below="@+id/quote" android:layout_marginLeft="2dp" android:layout_marginTop="5dp"> < TextView android:id="@+id/time" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignParentLeft="true" android:layout_centerVertical="true" android:gravity="center" android:text="time started" /> <TextView android:id="@+id/button1" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignParentRight="true" android:layout_marginLeft="3dp" android:background="@drawable/bronze_coin" android:gravity="center" android:text="5" /> <TextView android:id="@+id/button2" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginLeft="3dp" android:layout_toLeftOf="@+id/button1" android:background="@drawable/bronze_coin" android:gravity="center" android:text="5" /> <TextView android:id="@+id/button3" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginLeft="3dp" android:layout_toLeftOf="@+id/button2" android:background="@drawable/bronze_coin" android:gravity="center" android:text="5" /> <TextView android:id="@+id/button4" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginLeft="3dp" android:layout_toLeftOf="@+id/button3" android:background="@drawable/bronze_coin" android:gravity="center" android:text="5" /> </RelativeLayout></RelativeLayout></android.support.v7.widget.CardView>
Now provide the response and nothing else.
|
If you use an UIScrollView then you can take benefit from the scrollViewDidScroll: function. You need to save the last position (the contentOffset ) it have and the update it like in the following way: // variable to save the last position visited, default to zero private var lastContentOffset: CGFloat = 0 func scrollViewDidScroll(_ scrollView: UIScrollView) { if (self.lastContentOffset > scrollView.contentOffset.y) { // move up } else if (self.lastContentOffset < scrollView.contentOffset.y) { // move down } // update the new position acquired self.lastContentOffset = scrollView.contentOffset.y print(lastContentOffset) } There are other ways of doing it of course this is one of them. I hope this helps you.
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/31857345', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4153915/']}
|
jdg_369382
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
41842440
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've installed tensorflow using pip on ubuntu 16.04 LTS, when running this code https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/recurrent_network.py i am getting this error Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes. Extracting /tmp/data/train-images-idx3-ubyte.gz Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes. Extracting /tmp/data/train-labels-idx1-ubyte.gz Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes. Extracting /tmp/data/t10k-images-idx3-ubyte.gzSuccessfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes. Extracting /tmp/data/t10k-labels-idx1-ubyte.gz Traceback (most recent call last): File "deep.py", line 71, in <module> pred = RNN(x, weights, biases) File "deep.py", line 60, in RNN x = tf.split(x, n_steps, 0) File "/home/newuser/.local/lib/python2.7/site-packages/tensorflow/python/ops/array_ops.py", line 1234, in split name=name) File "/home/newuser/.local/lib/python2.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 3241, in _split num_split=num_split, name=name) File "/home/newuser/.local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 508, in apply_op (prefix, dtypes.as_dtype(input_arg.type).name)) TypeError: Input 'split_dim' of 'Split' Op has type float32 that does not match expected type of int32.
Now provide the response and nothing else.
|
It looks like you are using an older version of Tensorflow, and need to update to Tensorflow v0.12.0 or above. The error you are getting is indicating that the split_dim value in your tf.split function is expecting an integer, but is receiving the tensor x which is of type float32. This is because in Tensorflow versions < 0.12.0 the split function takes the arguments as: x = tf.split(0, n_steps, x) # tf.split(axis, num_or_size_splits, value) The tutorial you are working from was written for versions > 0.12.0, which has been changed to be consistent with Numpy's split syntax: x = tf.split(x, n_steps, 0) # tf.split(value, num_or_size_splits, axis) See the changelog for details: https://github.com/tensorflow/tensorflow/blob/64edd34ce69b4a8033af5d217cb8894105297d8a/RELEASE.md
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41842440', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3218279/']}
|
jdg_369383
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
45847035
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to capture all the network requests using Selenium. I have not found a way to do this; if anyone can suggest an approach or provide code or a library, that would be appreciated.
Now provide the response and nothing else.
|
Not exactly open by dev tools but found some network, performance and other results. Yes you can do that using JavascriptExecutor Below Code will give you all performance, network etc entries:- ChromeOptions options = new ChromeOptions();options.addArguments("start-maximized");DesiredCapabilities capabilities = DesiredCapabilities.chrome();capabilities.setCapability(ChromeOptions.CAPABILITY, options);WebDriver driver = new ChromeDriver(capabilities);driver.get("http://www.google.com");String scriptToExecute = "var performance = window.performance || window.mozPerformance || window.msPerformance || window.webkitPerformance || {}; var network = performance.getEntries() || {}; return network;";String netData = ((JavascriptExecutor)driver).executeScript(scriptToExecute).toString();System.out.println(netData); OR Below Code will give you specific performance entries:- DesiredCapabilities d = DesiredCapabilities.chrome();LoggingPreferences logPrefs = new LoggingPreferences();logPrefs.enable(LogType.PERFORMANCE, Level.ALL);d.setCapability(CapabilityType.LOGGING_PREFS, logPrefs);WebDriver driver = new ChromeDriver(d);driver.get("https://www.google.co.in/");driver.manage().timeouts().implicitlyWait(15, TimeUnit.SECONDS);LogEntries les = driver.manage().logs().get(LogType.PERFORMANCE);for (LogEntry le : les) { System.out.println(le.getMessage());} The first code retrun network return network;" because of this JS tag. You can remove JS code of entity which you don't require The second code return perfromance Hope it will help you :)
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/45847035', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7822301/']}
|
jdg_369384
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
16164
|
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have two questions about confidence intervals: Apparently a narrow confidence interval implies that there is a smaller chance of obtaining an observation within that interval, therefore, our accuracy is higher. Also a 95% confidence interval is narrower than a 99% confidence interval which is wider. The 99% confidence interval is more accurate than the 95%. Can someone give a simple explanation that could help me understand this difference between accuracy and narrowness?
Now provide the response and nothing else.
|
The 95% is not numerically attached to how confident you are that you've covered the true effect in your experiment. Perhaps recognizing that 95% is attached to the procedure that produced the interval, and not the interval itself, would help. Part of the procedure is that you decide that the interval contains the true value. You'll be right if you do that consistently 95% of the time. But you really don't know how likely it is for your particular experiment without more information. Q1: Your first query conflates two things and misuses a term. A narrower confidence interval may be more precise but it's accuracy is fixed by the procedure backing it, be it 89%, 95%, etc. As long as that procedure was correctly designed, the true value will be contained in the interval at the prescribed rate. (see @Michael R Chernick's answer for a discussion on coverage and a different sense of accuracy) Whether an observation falls in a CI isn't something to consider. A confidence interval is about estimating the mean. If you had an extraordinary large sample size and could estimate the mean very well then the chances of an observation being in the CI would be miniscule. Nevertheless, your question does raise some points and it's important to think about why a CI is narrow. Just because it's narrow doesn't mean you're less likely to encounter an observation that falls within that CI. Keep in mind, a narrow CI can be achieved in one of three ways. The most common way researchers make the CI narrow is by increasing the sample size. Another way they can be narrow is because the experimental method or nature of the data yields very low variance. For example, the confidence interval around the boiling point of water at sea level is small, regardless of the sample size. Finally, it could be narrow because your sample is unrepresentative. In that case, you are actually more likely to have one of the 5% of intervals that do not contain the true value. It's a bit of a paradox regarding CI width that the ones in that 5% of misses tend to be narrow. It's something you should check by knowing the literature and how variable this data typically is. Q2 : A 99% confidence interval is wider than a 95%, all else being equal. Therefore, it's more likely that it will contain the true value. See the distinction above between precise and accurate. If I make a confidence interval narrower with lower variability and higher sample size it becomes more precise because the values cover a smaller range. If I increase the coverage by using a 99% calculation it becomes more accurate because the true value is more likely to be within the range.
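The point that the 95% is attached to the procedure can be checked with a small simulation sketch (assumptions: normally distributed data with known sigma, chosen purely for illustration):
import numpy as np

rng = np.random.default_rng(0)
true_mean, sigma, n, reps = 10.0, 2.0, 25, 100000
z = 1.96                                   # two-sided 95% normal quantile
half_width = z * sigma / np.sqrt(n)        # known-sigma interval, same width every time

covered = 0
for _ in range(reps):
    xbar = rng.normal(true_mean, sigma, n).mean()
    covered += (xbar - half_width <= true_mean <= xbar + half_width)

print(covered / reps)                      # close to 0.95: the procedure covers the truth about 95% of the time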
|
{}
|
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/16164', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/5837/']}
|
jdg_369385
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
343934
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to print a check mark and a cross mark in a shell script: #!/bin/bashecho -e "\xE2\x9C\x94 existing"echo -e "\xE2\x9D\x8C missing" Why isn't this working?
Now provide the response and nothing else.
|
As revealed by the OP in the comments, they were calling the script with sh file.sh . Depending on the default shell to which /bin/sh is symlinked, it might not support the Unicode escape sequences used here. For instance, on Ubuntu , the default shell is dash . $ dash$ printf "\xE2\x9C\x94 missing\n"\xE2\x9C\x94 missing$ echo -e "\xE2\x9C\x94"-e \xE2\x9C\x94 The reason it worked when you called the command in an interactive shell is that the user's interactive shell is by default (on Ubuntu) /bin/bash . To properly run the script, you need to either: run it as ./file.sh run it as an argument to the proper shell: bash file.sh Alternatively, one could resort to shell-independent methods: # this printf is a standalone program, not a shell built-in$ /usr/bin/printf "\xE2\x9C\x94 check mark\n"✔ check mark$ python -c 'print "\xE2\x9C\x94 check mark"'✔ check mark$ perl -e 'print "\xE2\x9C\x94 check mark"' ✔ check mark
|
{}
|
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/343934', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/210969/']}
|
jdg_369386
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4936553
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to have a button at the bottom of the listview. If I use relativeLayout/FrameLayout, it aligns but listView goes down to very botton. (Behind the button at the bottom) FrameLayout: <?xml version="1.0" encoding="utf-8"?><FrameLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent"> <ListView android:id="@+id/listview" android:layout_width="match_parent" android:layout_height="wrap_content" /> <FrameLayout android:layout_width="wrap_content" android:layout_height="match_parent" android:layout_alignParentBottom="true"> <Button android:id="@+id/btnButton" android:text="Hello" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_gravity="bottom" /> </FrameLayout></FrameLayout> RelativeLayout: <?xml version="1.0" encoding="utf-8"?><RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent"> <ListView android:id="@+id/listview" android:layout_width="match_parent" android:layout_height="match_parent" /> <RelativeLayout android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignParentBottom="true"> <Button android:id="@+id/btnButton" android:text="Hello" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_gravity="bottom" /> </RelativeLayout></RelativeLayout> Above two codes only work like the first image. What I want is second image. Can anybody help? Thank you.
Now provide the response and nothing else.
|
A FrameLayout's purpose is to overlay things on top of each other. This is not what you want. In your RelativeLayout example you set the ListView's height and width to MATCH_PARENT; this is going to make it take up the same amount of space as its parent, and thus take up all of the space on the page (and cover the button). Try something like: <LinearLayout android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="vertical"> <ListView android:layout_width="match_parent" android:layout_height="0dip" android:layout_weight="1"/> <Button android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_weight="0"/></LinearLayout> The layout_weight dictates how the extra space is to be used. The Button does not want to stretch beyond the space it requires, so it has a weight of 0. The ListView wants to take up all of the extra space, so it has a weight of 1. You could accomplish something similar using a RelativeLayout, but if it is just these two items then I think a LinearLayout is simpler.
|
{}
|
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/4936553', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/432056/']}
|
jdg_369387
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
47710475
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was trying to convert date string 08-12-2017 to 2017-12-08(LocalDate). Here is what I tried- String startDateString = "08-12-2017"; LocalDate date = LocalDate.parse(startDateString); System.out.println(date); Also tried using formatter, but getting same result, an DateTimeParseException.How can I get an output like 2017-12-08, without getting an exception?
Now provide the response and nothing else.
|
Try this (see update below) try { String startDateString = "08-12-2017"; SimpleDateFormat sdf = new SimpleDateFormat("dd-MM-yyyy"); SimpleDateFormat sdf2 = new SimpleDateFormat("yyyy-MM-dd"); System.out.println(sdf2.format(sdf.parse(startDateString)));} catch (ParseException e) { e.printStackTrace();} Update - Java 8 String startDateString = "08-12-2017"; DateTimeFormatter formatter = DateTimeFormatter.ofPattern("dd-MM-yyyy"); DateTimeFormatter formatter2 = DateTimeFormatter.ofPattern("yyyy-MM-dd"); System.out.println(LocalDate.parse(startDateString, formatter).format(formatter2));
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47710475', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6039910/']}
|
jdg_369388
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
121536
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
After reading this NASA article about the "blackest material", the following stuck out to me. The tiny gaps between the tubes absorb 99.5 percent of the light that hits them Is it possible to create a material that absorbs, not just all visible light, but all electromagnetic radiation?
Now provide the response and nothing else.
|
We can make a material absorb a great deal of energy, but if you read about black-body radiation you will notice that as energy is introduced into an object it also radiates some of that energy back. At room temperature such an object appears black, as most of the energy it radiates is infrared and cannot be perceived by the human eye; at higher temperatures, black bodies glow with increasing intensity and colors that range from dull red to blindingly brilliant blue-white as the temperature increases. This means that even if we had a material absorbing $100$ percent of all electromagnetic radiation, the object would still radiate some energy back because of heating or similar processes, so some energy always leaks out. To conclude, it would only absorb radiation for a short time before releasing enough photons to be picked up by a photon detector, thereby "breaking" the truly black, perfectly absorbing material. In summary, as long as the object absorbs some form of energy it can never be completely black. To learn more about black-body radiation, read this: http://en.wikipedia.org/wiki/Black-body_radiation Furthermore, even a black hole is not perfectly black: the power in the Hawking radiation (still hypothesized, not yet observed) from a black hole of one solar mass $M = (1.98855\pm 0.00025) \times 10^{30}\,\mathrm{kg}$ turns out to be a minuscule $9 \times 10^{-29}$ watts, since $$P = \hbar c^6/(15360\pi G^2M^2) \approx 9.004 \times 10^{-29}\,\mathrm{W}.$$ It is indeed an extremely good approximation to call such an object 'black'. That being said, even black holes, among the most extreme objects we know of, are not truly black.
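The quoted Hawking power is easy to reproduce numerically. The sketch below is a minimal check in Python; the constant values are approximate standard values assumed for the illustration.

import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c    = 2.99792458e8      # speed of light, m/s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
M    = 1.98855e30        # one solar mass, kg

# Hawking radiation power of a Schwarzschild black hole of mass M
P = hbar * c**6 / (15360 * math.pi * G**2 * M**2)
print(P)  # roughly 9e-29 W, i.e. utterly negligible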
|
{}
|
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/121536', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/33635/']}
|
jdg_369389
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
59497550
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a formik form in react native code as follow.Full runnable link can be view at: here Current behaviour: Validation can only be done with handleSubmit , with cannot be use for handle further action. Note than onSubmit will not be trigger if handleSubmit detects there is any validation error. <Button onPress={handleSubmit} style={styles.button}> Submit </Button> Expected solution: A lifecycle event called when validation fails after an attempted submit (eg: onValidationError ), where i can access all the input validation error props for further action. Eg: I would like to handle something (like pop up an alert or scroll to first error message when validation fail) The code is expected to be something like follow, or any other way of doing it can also be accepted. As long as i can handle event when validation error occurs. <Formik onSubmit={values => { alert(JSON.stringify(values, null, 2)); Keyboard.dismiss(); }} onValidationError={errorValues => { // any action here // note that onValidationError is not a built in function // but it would be best if it can be achieved this way // else any other equivalent solution also can be accepted }} validationSchema={validationSchema}> What have tried? I have tried to integrated 2 solution here. But failed to get it work. https://github.com/jaredpalmer/formik/issues/1019 https://github.com/jaredpalmer/formik/issues/1484
Now provide the response and nothing else.
|
You can use the isValid prop (from <Formik> 's render props) on the <form> 's onsubmit event by passing your logic there, as in ( Documentation ) <form onSubmit={e => { console.log("isValid", isValid); isValid ? handleSubmit(e) : alert("Handle your custom method instead here"); }}> You can access the isValid prop from the render props on formik's {props => { const { values, touched, errors, dirty, isSubmitting, handleChange, handleBlur, handleSubmit, handleReset, isValid // add this to your code } = props; return ( ... your code );}} I have also made a CodeSandbox; you can view a working example here - https://codesandbox.io/s/trusting-jennings-7wq0f Note : This is not officially mentioned in any issues on the formik repository or anywhere else, but it is a custom way to intercept the submit action on the <form> 's onsubmit event. Hope this was helpful!
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/59497550', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4522927/']}
|
jdg_369390
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3400515
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I detect empty cells in a cell array? I know the command to remove the empty cell is a(1) = [] , but I can't seem to get MATLAB to automatically detect which cells are empty. Background: I preallocated a cell array using a=cell(1,53) . Then I used if exist(filename(i)) and textscan to check for a file, and read it in. As a result, when the filename(i) does not exist, an empty cell results and we move onto the next file. When I'm finished reading in all the files, I would like to delete the empty cells of a . I tried if a(i)==[]
Now provide the response and nothing else.
|
Use CELLFUN %# find empty cellsemptyCells = cellfun(@isempty,a);%# remove empty cellsa(emptyCells) = []; Note: a(i)==[] won't work. If you want to know whether the the i-th cell is empty, you have to use curly brackets to access the content of the cell. Also, ==[] evaluates to empty, instead of true / false , so you should use the command isempty instead. In short: a(i)==[] should be rewritten as isempty(a{i}) .
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3400515', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/410114/']}
|
jdg_369391
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
14043930
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have my protocol defined. I have mentioned two of my protocol methods as optional. During run-time, how can I find out whether the particular method is optional or not? Is there any way to find out that?
Now provide the response and nothing else.
|
This ought to do what you want: BOOL MethodInProtocolIsRequired(Protocol *protocol, SEL methodSelector){ struct objc_method_description methodDesc = protocol_getMethodDescription(protocol, methodSelector, YES, YES); return methodDesc.name != NULL;} Note that I'm not commenting on the advisability of using this in shipping code, especially since you haven't explained why you want to do this. Also note that this function will return NO when given a selector for a method that the protocol doesn't contain. That's basically reasonable (after all, if a protocol doesn't contain a method, it's not required!), but you could add sophistication to the function by checking to see if the protocol contains the method as an optional method and return something different for all three scenarios (required, optional, not-in-protocol). EDIT: Simple test program here: https://gist.github.com/4381753
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14043930', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/500625/']}
|
jdg_369392
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
49726235
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
After evaluating existent tools like Ansible Tower, rundeck and others, it seems that no tool can fulfill the needed requirements. We have complex data-center servers, cluster of DB and web servers, the data-center has a lot of client-systems, +100, and other tools like solr, redis, kafka... deployed there across the physical servers, not to mention that the same data-center servers have different accounts, linux users, (QA,stag,production..etc), for now the meta-data about these environments alongside their web-apps, source code to be used, servers of the cluster are all defined on xml and there is a bash scriptsreads from that XML that operated manually to run any operation/task (like checkout the source, build, deploy, start, stop... and other customized operations) This system should be done with a developer and DevOps engineers together, but what I want to know, is there any preferable framework(s) that could be used for this system? does the workflows frameworks are usable on this case? e.g, Activiti BPMN? the Ant is an option but we don't need just an automation tool more that scheduler and logging and a lot of other services. And if this is not the right place, can you please point out where cat I ask such question? What's required is to create a web-based system as automation tool with: UI to define the specific operations to be done, like build, deploy across the cluster specific web-app on specific env, start/stop specific web-app on specific machine, or any other customized operation, with multiple selections and flexible and dynamic options choosing way. The FE should show the executing workflows and operations within them. Dynamic way to create set of operations as a single workflow, that have dynamic ability to set the dependencies among them. An interface between the back-end code of this system with the already existent bash scripts that will do the actual tasks across the DC servers. A scheduler to be able to organize these operations in respect to a defined complicated dependencies between the workflows. Authentication & authorization services to the users since there will be a lot of customized roles upon the operations,environments, the products...etc Logging system to save the operations outputs.
Now provide the response and nothing else.
|
The following script works for localhost: import numpy as npimport pandas as pddf = pd.DataFrame(np.random.randint(0,100,size=(100, 4)), columns=list('ABCD'))INDEX="dataframe"TYPE= "record"def rec_to_actions(df): import json for record in df.to_dict(orient="records"): yield ('{ "index" : { "_index" : "%s", "_type" : "%s" }}'% (INDEX, TYPE)) yield (json.dumps(record, default=int))from elasticsearch import Elasticsearche = Elasticsearch() # no args, connect to localhost:9200if not e.indices.exists(INDEX): raise RuntimeError('index does not exists, use `curl -X PUT "localhost:9200/%s"` and try again'%INDEX)r = e.bulk(rec_to_actions(df)) # return a dictprint(not r["errors"]) Verify using curl -g 'http://localhost:9200/dataframe/_search?q=A:[29%20TO%2039]' There are many little things that can be added to suit different needs but main is there.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49726235', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3305603/']}
|
jdg_369393
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
43900037
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an Excel 2013 Workbook with sheets imported from various sources. These contain Unicode characters all over the place which the clean function do not account for. I found a function which works cell by cell, but I'd like to have it used on a range of cells instead of having to put the function in each cell individually. Could someone please help me convert this function? Thank you Function CleanTrim(ByVal S As String, Optional ConvertNonBreakingSpace As Boolean = True) As String Dim X As Long, CodesToClean As Variant CodesToClean = Array(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, _ 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 127, 129, 141, 143, 144, 157) If ConvertNonBreakingSpace Then S = Replace(S, Chr(160), " ") For X = LBound(CodesToClean) To UBound(CodesToClean) If InStr(S, Chr(CodesToClean(X))) Then S = Replace(S, Chr(CodesToClean(X)), "") Next CleanTrim = WorksheetFunction.Trim(S)'Call function == use CleanTrim just like it was a built-in Excel function. For example, =CleanTrim(B2)End Function
Now provide the response and nothing else.
|
The compiler is failing to figure out the exact shape of detailRoute , because it does not know the shape of Route . Option 1 One way around this is to import Route from its source, thereby providing the information that the compiler needs to determine the shape of detailRoute . import { Route } from "./../node_modules/vue-router/types/router";export const detailRoute = { props: (route: Route) => null,}; Since the index.d.ts file in vue-router (which you were importing in the question) re-exports Route , it does not provide the direct reference to Route that the compiler needed. Option 2 Another option is to opt detailRoute out of static typing altogether. import { Route } from 'vue-router'; // index.d.tsexport const detailRoute: any = { props: (route: Route) => null,}; Since any opts-out of static typing, the compiler does not need to figure out the shape of detailRoute . Option 3 A further is option is what you did in your own answer. Since you provided the type annotation, the compiler again does not need to figure out the shape of detailRoute . import { Route, RouteConfig } from 'vue-router'; // index.d.tsexport const detailRoute: RouteConfig = { props: (route: Route) => null,}; See also https://github.com/Microsoft/TypeScript/issues/5711 When trying to emit [the module], the compiler needs to write an object type literal... representing the shape of the module. But there isn't a name in scope that refers directly to [Route], so the type "cannot be named" and there's an error. If you add [a direct] import of [Route]... the error should go away.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/43900037', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7907121/']}
|
jdg_369394
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4415022
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Storing the transaction entry into a double entry accounting database. I came up with two solutions option 1 and option 2, I was told most banking package chooses option 2 for their database design. However I prefer option 1 over option 2 because it simply makes sense and it is more efficient! I.e For the 2 movement of funds, option 1 requires 2 records vs option 2 requires 4 records. I would like to know why the bank would choose option 2 over option 1? what is the reason for this? Option 1)TRANSACTIONCredit_AccountIdDebit_AccountIdAmount...Option 2)TRANSACTIONAccountIdAmount...
Now provide the response and nothing else.
|
Option 1 will potentially be a bit more efficient from an insert perspective. But since a lot of accounting transactions are going to affect more than two accounts, the benefit is likely to be substantially less than 2:1. Option 2 will be clearer for these more complex transactions. That is, an accountant would normally find three rows Debit A $100 Credit B $60 Credit C $40 more clear than two rows Debit A $60 Credit B $60 Debit A $40 Credit C $40 If you have multiple accounts on both sides, it would also be a bit unclear how to match up the debits and credits to a single account. That is, Debit A $100 Debit B $30 Credit C $60 Credit D $70 could be represented as Debit A $60 Credit C $60 Debit A $40 Credit D $40 Debit B $30 Credit D $30 but there are also other possible ways to construct the data for data model 2. Additionally, option 2 is going to be more efficient if you're trying to determine the current balance of a particular account by aggregating the transactions.
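As a toy illustration of why the one-row-per-account-movement layout (option 2) is convenient for multi-account transactions and balance queries, here is a small sketch in Python. The account names, amounts and the sign convention (debits positive, credits negative) are arbitrary assumptions made for the example.

from collections import defaultdict

# option 2 style: one row per account movement, grouped by a transaction id
ledger = [
    ("tx1", "A", +100), ("tx1", "B", -60), ("tx1", "C", -40),   # one transaction hitting three accounts
    ("tx2", "A", +60),  ("tx2", "B", +30), ("tx2", "C", -90),
]

# every transaction must balance to zero
for tx in {row[0] for row in ledger}:
    assert sum(amount for t, _, amount in ledger if t == tx) == 0

# the current balance of an account is a plain aggregation over its rows
balances = defaultdict(int)
for _, account, amount in ledger:
    balances[account] += amount
print(dict(balances))   # {'A': 160, 'B': -30, 'C': -130}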
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4415022', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/103264/']}
|
jdg_369395
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
29163349
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a database storing records in UTC timestamp. I want to fetch them in terms of local time standard(IST). By referring to some referral i tried like this. select date(convert_tz(sa.created_at,'+00:00','+05:30')) as date,count(*) as hits from session_acts sa, sessions s where sa.session_id = s.id and s.created_at between convert_tz('2015-03-12T11:33:00+00:00','+00:00','-05:30') and convert_tz('2015-03-13T11:33:00+00:00','+00:00','-05:30') group by date; But it will resulting in +------------+------+| date | hits |+------------+------+| 2015-03-12 | 94 || 2015-03-13 | 34 |+------------+------+ I want to display only hits that are requested on 13th. Where i am going wrong.?
Now provide the response and nothing else.
|
IST is 5.30 hours ahead of UTC, so when 13th starts in IST i.e. 2015-03-13 : 00:00:00 its 2015-03-12 18:30:00 in UTC mysql> select convert_tz('2015-03-13T00:00:00+00:00','+00:00','+05:30') ;+-----------------------------------------------------------+| convert_tz('2015-03-13T00:00:00+00:00','+00:00','+05:30') |+-----------------------------------------------------------+| 2015-03-12 18:30:00 |+-----------------------------------------------------------+1 row in set, 1 warning (0.00 sec) And when 13 ends in IST i.e. 2015-03-13 : 23:59:59 its 2015-03-13 18:29:59 in UTC mysql> select convert_tz('2015-03-13T23:59:59+00:00','+00:00','+05:30') ;+-----------------------------------------------------------+| convert_tz('2015-03-13T23:59:59+00:00','+00:00','+05:30') |+-----------------------------------------------------------+| 2015-03-13 18:29:59 |+-----------------------------------------------------------+ So yo get the data in IST for 13th you will need to search data within this range of dates. So the condition would be as below - s.created_at between convert_tz('2015-03-13T00:00:00+00:00','+00:00','+05:30')and convert_tz('2015-03-13T23:59:59+00:00','+00:00','+05:30'); and since you are doing conversion at the time of select so it will return all 13th data.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29163349', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3554338/']}
|
jdg_369396
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
26499927
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
how to kill the process >netstat -nao Active Connections Proto Local Address Foreign Address State PID TCP 10.10.8.10:50931 122.109.119.13:22 ESTABLISHED 4664 >taskkill /pid 4664 SUCCESS: Sent termination signal to the process with PID 4664. >netstat -naoActive ConnectionsProto Local Address Foreign Address State PIDTCP 10.10.8.10:50931 122.109.119.13:22 ESTABLISHED 4664 why i can not kill pid 4664 with taskkill /pid 4664
Now provide the response and nothing else.
|
The impact of a temporary Optional instance is negligible. Usually the JVM will detect its temporary nature and optimize away the instance. Even if the temporary instance creation is not optimized away, the impact of one single temporary object on the memory management is ridiculously low. See also GC overhead of Optional<T> in Java . However, if the map is mutable, you can use the following trick: public static MyEnum fromString(String value) { return enumMap.computeIfAbsent(value, v -> { throw new IllegalArgumentException("Unsupported value: " + v); });} Note that the Map is not modified by this code but still must be mutable as an immutable map might throw an UnsupportedOperation exception for the attempt to use computeIfAbsent without ever checking whether the operation would really modify the map. But in the end, there is nothing wrong with Optional . But note that the code in your question is wrong. The lambda expression you pass to the method Optional.orElseThrow is meant to supply the desired exception, not to throw it: public static MyEnum fromString(String value) { return Optional.ofNullable(enumMap.get(value)).orElseThrow(() -> new IllegalArgumentException("Unsupported value: " + value) // just return it );}
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/26499927', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1982032/']}
|
jdg_369397
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
17441295
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'll try really hard to turn this into one comprehensive question: I'm writing a method to get a String that contains the name of an Android device's city, as determined by the LocationManager and getLastKnownLocation() and all that. Then I realized I'd need to do the same thing again in another activity, so why not just make an entirely separate class ( LocationFinder ) that I could use across my program, instead of writing duplicate code everywhere? But I've run into problems that confuses me. For instance, if I make this class ( LocationFinder ), should it extend Activity, even though it is never actually visualized? All this class would do is have a variety of getters like getLastKnownCity() or getCurrentCity() and return strings. I assumed it wouldn't HAVE to extend the Activity class, since it's really not an activity. But then what Context do I use for: Geocoder geocoder = new Geocoder(Context context, Locale locale) ? This made me assume it MUST be an activity. So I extended Activity, and replaced the constructor with @Overrideprotected void onCreate(.............. but for some reason, that never ends up getting called, even when I put String city = new LocationFinder().getLastKnownCity(); My very first line of LocationFinder 's onCreate() is System.out.println("HEY!") and it never even gets to that. I get a null pointer at android.internal.os.LoggingPrintStream.println() and other stuff. Plus, there's a bunch of system constants that come from Activity classes. For instance, I need to get at LOCATION_SERVICE , which is a String, which I can't get without extending Activity. Sure, I could cheat and just put in the literal string, but that feels wrong.
Now provide the response and nothing else.
|
EDIT: If possible, use frogmanx's answer. This should only be used when his answer is not possible to use. (i.e. singletons that need a context right off the bat.) Sounds like you should extend Application and not Activity. Make your Application something like this: public class MyApplication extends Application { private static MyApplication instance; public MyApplication() { instance = this; } public static MyApplication getInstance() { return instance; }} Then add this attribute to the application tag of the manifest: <application android:name=".your.package.MyApplication" ... /> After all that, you can get a Context by calling MyApplication.getInstance() from anywhere.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17441295', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1903064/']}
|
jdg_369398
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
382922
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This is a question about the Fermi paradox, a topic that can attract a lot of opinions. However, the scope of this question is strictly limited to a specific factual topic, and I've edited it quite substantially to try and make this clearer. Enrico Fermi originally introduced his paradox in the following terms: We expect life / advanced civilisations to be common in the Universe If technologically advanced intelligent life existed, we would expect it to visit Earth Therefore its absence requires an explanation. (There is a huge body of literature that attempts to do this.) However, it seems common in modern popularisations to replace (2) above with: 2$*$. If technologically advanced intelligent life existed, we would be able to detect it remotely. My question is about this point (2*). The claim here is that we should expect extraterrestrial civilisations to have observational signatures that we would have already detected if they were present. I am somewhat skeptical about this claim, and so I would like to know more details about the assumptions and measurements behind it. The reason I'm skeptical is that, as far as I'm aware, SETI has concentrated for most of its existence on detecting deliberate signals, i.e. radio signals that have been directionally beamed at Earth with the objective of making contact with us. However, one can think of all kinds of reasons why signals of that kind might not currently exist, even if advanced civilisations are common in the galaxy. If we rule out this possibility, what else is left? Listening to popular accounts of the Fermi paradox, I'm left with the impression that the presence of extraterrestrial civilisations would just be obvious if they were there. But until the 1990s we could not detect exoplanets at all, and even now we can obtain very little information about their composition or surface details. It does not seem at all trivial to make measurements that would reveal the existence or absence of an extraterrestrial civilisation. Therefore my question is, which specific observations - by which instruments - have been cited to support the idea that extraterrestrial civilisations do not exist, or are not common, in our galaxy? Note that I'm asking for observations that we have made already, rather than ones we potentially could make in the future. Of particular interest to me are the observational signatures of megastructures such as Dyson spheres, or of waste heat from civilisations sufficiently large for that to be detectable. It seems to me that, given some assumptions about the size of the civilisation involved, these might have observational signatures that would allow them to be ruled out to some extent, and I am interested in whether this is possible and/or has actually been done. But more broadly, I am interested in published, referenced versions of claim (2*), so that I can understand the assumptions and the astronomical measurements behind them.
Now provide the response and nothing else.
|
I'll limit this "answer" to just address the part about SETI. Basically, we would not have detected our current level of civilisation even if it existed around Proxima Centauri, apart from under very fortuitous circumstances. Therefore SETI has very little to contribute in terms of the Fermi paradox. A "blind" search for radio signatures that relies on detecting random radio "chatter" and accidental signals generated by our civilisation is hopeless at the levels emitted from Earth.The SETI Phoenix project was the most advanced search for radio signals from other intelligent life. Quoting from Cullers et al. (2000) : " Typical signals, as opposed to our strongest signals, fall below the detection threshold of most surveys, even if the signal were to originate from the nearest star ". Quoting from Tarter (2001) : " At current levels of sensitivity, targeted microwave searches could detect the equivalent power of strong TV transmitters at a distance of 1 light year (within which there are no other stars)... ". So the answer to your first question is a less-than-useful 1 light year. We do emit stronger beamed signals in certain well-defined directions, for example to conduct metrology in the solar system using radar. Such signals have been calculated to be observable over a thousand light years or more. But these signals are brief, beamed into an extremely narrow solid angle and unlikely to be repeated. You would have to be very lucky to be observing in the right direction at the right time if you were performing targeted searches for such signals from other civilizations. Hence to detect signals from even the nearest star systems you would have to scale up the "radio-leakage" from the Earth by an order of magnitude or more. new radio telescope projects and technology like the Square Kilometre Array may be capable of serendipitously detecting radio "chatter" out to distances of 50 pc ($\sim 150$ light years) - see Loeb & Zaldarriaga (2007) . This array, due to begin full operation some time after 2025 could also monitor a multitude of directions at once for beamed signals. A good overview of what might be possible in the near future is given by Tarter et al. (2009) . Addendum Wright et al. (2018) have done a quantitative estimate of what fraction of the "haystack" that all SETI programmes have searched adequately for needles. They consider distances out to 10 kpc and conclude that only 1 part in $10^{17}$ of the multi-dimensional haystack volume has been probed so far, assuming that transmissions could come from anywhere. The paper contains many arguable assumptions and scenarios, but the basic point is that the lack of "success" of SETI cannot be used to support a Fermi paradox-type argument
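The distance figures above are dominated by simple inverse-square dilution: received flux falls as $1/d^2$, so for a fixed receiver sensitivity the detectable range only grows as the square root of the transmitted power. The sketch below illustrates that scaling in Python; the transmitter power and the "detectable at 1 light year" threshold are made-up assumptions, not values from the cited papers.

import math

LY = 9.4607e15  # metres per light year

def flux(power_w, distance_m):
    # isotropic transmitter: power spread over a sphere of radius d
    return power_w / (4 * math.pi * distance_m ** 2)

p_tx = 1e6                      # assumed transmitter power, W
s_min = flux(p_tx, 1 * LY)      # treat this as the receiver's sensitivity floor

# transmitter power the same receiver would need to detect at 150 light years
needed = s_min * 4 * math.pi * (150 * LY) ** 2
print(needed / p_tx)            # 150**2 = 22500 times more power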
|
{}
|
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/382922', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/5477/']}
|
jdg_369399
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
31012817
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to enqueue a basic job in redis using python-rq, But it throws this error "ValueError: Functions from the main module cannot be processed by workers" Here is my program: import requestsdef count_words_at_url(url): resp = requests.get(url) return len(resp.text.split())from rq import Connection, Queuefrom redis import Redisredis_conn = Redis()q = Queue(connection=redis_conn)job = q.enqueue(count_words_at_url, 'http://nvie.com')print job
Now provide the response and nothing else.
|
Break the provided code into two files: count_words.py : import requestsdef count_words_at_url(url): resp = requests.get(url) return len(resp.text.split()) and main.py (where you'll import the required function): from rq import Connection, Queuefrom redis import Redisfrom count_words import count_words_at_url # added import!redis_conn = Redis()q = Queue(connection=redis_conn)job = q.enqueue(count_words_at_url, 'http://nvie.com')print job I always separate the tasks from the logic running those tasks into different files. It's just better organization. Also note that you can define a class of tasks and import/schedule tasks from that class instead of the (over-simplified) structure I suggest above. This should get you going. Also see here to confirm you're not the first to struggle with this example. RQ is great once you get the hang of it.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31012817', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3258973/']}
|
jdg_369400
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1069165
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm just solving some probability problems in preparation for my exam, and I stumbled upon this one which I cannot tackle: Suppose that you continually collect coupons andthat there are $m$ different types. Suppose also thateach time a new coupon is obtained, it is a type$i$ coupon with probability $p_i, i = 1, \ldots ,m$. Supposethat you have just collected your $n$-th coupon. Whatis the probability that it is a new type? Hint: Condition on the type of this coupon. Any help would be appreciated, thank you.
Now provide the response and nothing else.
|
Let $E_n$ be the event of interest (at the $n$-th extraction we get a new coupon type) and let $c_n \in \{1,\dots,m\}$ be the type of the $n$-th coupon. Then $$P(E_n) =\sum_{i=1}^m P(E_n \mid c_n=i)\, P(c_n=i) = \sum_{i=1}^m (1-p_i)^{n-1} p_i$$
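The formula is easy to sanity-check with a short Monte Carlo simulation. This is only a sketch in Python; the particular probabilities p and the value of n are arbitrary choices for illustration.

import random

p = [0.5, 0.3, 0.2]   # p_i for m = 3 coupon types
n = 4                 # we ask about the n-th coupon
trials = 200_000

exact = sum((1 - pi) ** (n - 1) * pi for pi in p)

hits = 0
for _ in range(trials):
    draws = random.choices(range(len(p)), weights=p, k=n)
    hits += draws[-1] not in draws[:-1]   # the n-th coupon is a type not seen before

print(exact, hits / trials)   # the two values should agree to a couple of decimal places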
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1069165', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/122447/']}
|
jdg_369401
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1005480
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a way to directly "restart" a background worker?Calling CancelAsync() followed by RunWorkerAsync() clearly won't do it as their names imply. Background info:I have a background worker which calculates a total in my .net 2.0 Windows Forms app.Whenever the user modifies any value which is part of this total I'd like to restart the background worker in case it would be running so that directly the latest values are considered.
Now provide the response and nothing else.
|
The BackgroundWorker itself does not do any canceling. When you call bgw.CancelAsync it sets a flag on the background worker that you need to check yourself in the DoWork handler. Something like: bool _restart = false;private void button1_Click(object sender, EventArgs e){ bgw.CancelAsync(); _restart = true;}private void bgw_DoWork(object sender, DoWorkEventArgs e){ for (int i = 0; i < 300; i++) { if (bgw.CancellationPending) { break; } //time consuming calculation }}private void bgw_WorkComplete(object sender, RunWorkerCompletedEventArgs e) // handler attached to the RunWorkerCompleted event{ if (_restart) { bgw.RunWorkerAsync(); _restart = false; }}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1005480', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/34022/']}
|
jdg_369402
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6192177
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been looking at using Redis Pub/Sub as a replacement to RabbitMQ. From my understanding Redis's pub/sub holds a persistent connection to each of the subscribers, and if the connection is terminated, all future messages will be lost and dropped on the floor. One possible solution is to use a list (and blocking wait) to store all the message and pub/sub as just a notification mechanism. I think this gets me most of the way there, but I still have some concerns about the failure cases. what happens when a subscriber dies, and comes back online, how should it process all it's pending messages? when a malformed message comes though the system, how do you handle those exceptions? DeadLetter Queue? is there a standard practice to implementing a retry policy?
Now provide the response and nothing else.
|
When a subscriber (consumer) dies, your list will continue to grow until the client returns. Your producer could trim the list (from either side) once it reaches a specific limit, but that is something you would need to handle at the application level. If you include a timestamp within each message, your consumer can then act on the age of a message, assuming you have application logic you want to enforce on message age. I'm not sure how a malformed message would enter the system, as the connection to Redis is usually TCP with its integrity assurances. But if this happens, perhaps due to a bug in message encoding at the producer layer, you could provide a general mechanism for handling errors by keeping a queue per producer that receives consumers' exception messages. Retry policies will depend greatly on your application needs. If you need 100% assurance that a message has been received and processed, then you should consider using Redis transactions (MULTI/EXEC) to wrap the work done by a consumer, so you can ensure that a client doesn't remove a message unless it has completed its work. If you need explicit acknowledgement, then you could use an explicit ACK message on a queue dedicated to the producer process(es). Without knowing more about your application needs, it's hard to know how to choose wisely. Generally, if your messages require full ACID protection, then you probably also need to use redis transactions. If your messages are only meaningful when they are timely, then transactions may not be needed. It sounds as though you can't tolerate dropped messages, so your approach of using a list is good. If you need to implement a priority queue for your messages, you can use the sorted set (the Z-commands) to store your messages, using their priority as the score value, along with a polling consumer.
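As a concrete illustration of the "list for durability, pub/sub only as a wake-up signal" pattern discussed above, here is a minimal sketch using the redis-py client in Python. The key and channel names are made-up assumptions, and retries, dead-letter handling and error handling are deliberately reduced to comments.

import json
import time

import redis

r = redis.Redis()

def produce(payload):
    # durable: the message lives in the list until a consumer removes it
    r.lpush("jobs", json.dumps({"ts": time.time(), "body": payload}))
    # best-effort: wake any listening consumer; losing this notification is harmless
    r.publish("jobs:notify", "1")

def consume_forever():
    while True:
        # BRPOP blocks until a message is available, so a consumer that was
        # offline simply drains whatever accumulated while it was down
        _key, raw = r.brpop("jobs")
        msg = json.loads(raw)
        # process msg here; on failure you could push raw onto a
        # dead-letter list, e.g. r.lpush("jobs:dead", raw)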
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6192177', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/608858/']}
|
jdg_369403
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
10936774
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am creating a new content type through an Elements.xml file in Visual Studio. I have reference a custom New form document in the XmlDocment node <XmlDocument NamespaceURI="http://schemas.microsoft.com/sharepoint/v3/contenttype/forms"> <FormTemplates xmlns="http://schemas.microsoft.com/sharepoint/v3/contenttype/forms"> <Display>ListForm</Display> <Edit>ListForm</Edit> <New>MyNewList</New> </FormTemplates> </XmlDocument> Now when I create a new instance of the Content Type, I get a blank. I have read that this New nodes should reference a ASCX control. However I have not found any good examples that walk me through the process.
Now provide the response and nothing else.
|
edit: I just stumbled across a NumPy solution that uses advanced indexing : # a b c d e fA = numpy.array([[0, 1, 2, 3, 4, 5], [1, 0, 3, 4, 5, 6], [2, 3, 0, 5, 6, 7], [3, 4, 5, 0, 7, 8], [4, 5, 6, 7, 0, 9], [5, 6, 7, 8, 9, 0]])# a d b e c fnew_order = [0, 3, 1, 4, 2, 5]A1 = A[:, new_order][new_order] Here is a pure Python solution which may be transferable to NumPy: # a b c d e fA = [[0, 1, 2, 3, 4, 5], [1, 0, 3, 4, 5, 6], [2, 3, 0, 5, 6, 7], [3, 4, 5, 0, 7, 8], [4, 5, 6, 7, 0, 9], [5, 6, 7, 8, 9, 0]]# a d b e c fnew_order = [0, 3, 1, 4, 2, 5] # maps previous index to new indexA1 = [[A[i][j] for j in new_order] for i in new_order] Result: >>> pprint.pprint(A1)[[0, 3, 1, 4, 2, 5], [3, 0, 4, 7, 5, 8], [1, 4, 0, 5, 3, 6], [4, 7, 5, 0, 6, 9], [2, 5, 3, 6, 0, 7], [5, 8, 6, 9, 7, 0]] Here is a version that modifies A in place: A[:] = [A[i] for i in new_order]for row in A: row[:] = [row[i] for i in new_order]
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10936774', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1301903/']}
|
jdg_369404
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
34849035
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've looked at several posts regarding this and they've done the following -The output i get is blank -The output i get is the id, which is practically useless unless somebody can show me how to manipulate it -No output at all i just want to be able to click an item in treeview, and instantly be given the text i just clicked def OnDoubleClick(event): item = course1_assessments.focus() print (item) course1_assessments.bind("<<TreeviewSelect>>", OnDoubleClick) This code gives me 'I001' if i click the first item, and 'I002' when i click the second; id assume these are column values in the tree, but still useless to me
Now provide the response and nothing else.
|
You can get a list of the selected items with the selection method of the widget. It will return a list of item ids. You can use the item method to get information about each item. For example: import tkinter as tkfrom tkinter import ttkclass App: def __init__(self): self.root = tk.Tk() self.tree = ttk.Treeview() self.tree.pack(side="top", fill="both") self.tree.bind("<<TreeviewSelect>>", self.on_tree_select) for i in range(10): self.tree.insert("", "end", text="Item %s" % i) self.root.mainloop() def on_tree_select(self, event): print("selected items:") for item in self.tree.selection(): item_text = self.tree.item(item,"text") print(item_text)if __name__ == "__main__": app = App()
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34849035', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5768711/']}
|
jdg_369405
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1390915
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've come across a chapter in my book which has me stumped and nowhere can I find so that I can move on. The question is "Using the definition 9.1i, prove that the following limits exist:a) $$x_k = \left(\frac 1k, 1 - \frac {1}{k²}\right)$$Def 9.1i being: "Let ${x_k}$ be a sequence of points in $R^n$ then ${x_k}$ is said to converge to some point $a\in\mathbb{R}^n$ (called the limit of ${x_k}$) if and only if for every $\epsilon$>0 there is an $N\in \mathbb{N}$ such that$$k\ge N $$ implies $$||x_k - a|| < \epsilon."$$ I cannot use the definition to get the answer. My work is just to find the limit of $\frac1k$ which is easy stuff and get $x_k\rightarrow$ 0 as $k\rightarrow \infty$ and the same process for $1 - \frac {1}{k²}$ to get 1 and so $x_k \rightarrow (0,1)$ as $k \rightarrow \infty$.Reading on the matter has lead me to believe I should be doing this far better but I really haven't got any clue how to use the definition. Please help.
Now provide the response and nothing else.
|
Suppose that $\mathbb Q(\sqrt{2},\sqrt{3},\dots,\sqrt{p},\dots)|\mathbb Q$ is not an algebraic extension of $\mathbb Q$. Then there is $\alpha \in \mathbb Q(\sqrt{2},\sqrt{3},\dots,\sqrt{p},\dots)$ such that $[\mathbb Q(\alpha):\mathbb Q]=\infty$. But $\alpha = \sum_{i=0}^n q_i\sqrt{a_i}$, where $a_i \in \mathbb{N}$ (and they are square-free). The most important thing about this sum is that it is finite. It implies that the set of all distinct prime numbers present in the factorizations of the $a_i$ is finite (suppose that $P$ is the largest among them). Then $\alpha \in \mathbb Q(\sqrt{2},\sqrt{3},\dots,\sqrt{P})$, which contradicts our assumption.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1390915', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/236051/']}
|
jdg_369406
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6317980
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When running rake I get this error: You have already activated rake 0.9.2, but your Gemfile requires rake 0.8.7. Consider using bundle exec. Using bundle exec rake instead of just rake seems to work, but is it the best way to fix this?
Now provide the response and nothing else.
|
Using bundle exec is the right way to do this. Basically what's happening is that you've updated rake to 0.9.2 which now conflicts with the version specified in your Gemfile. Previously the latest version of rake you had matched the version in your Gemfile, so you didn't get any warning when simply using rake . Yehuda Katz (one of the original Bundler developers) explains it all in this blog post . To avoid typing bundle exec ... all the time, you could set up an alias or function in your shell for commands you commonly use with Bundler. For example this is what I use for Rake: $ type bakebake is a functionbake () { bundle exec rake "$@"}
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/6317980', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/180663/']}
|
jdg_369407
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
118682
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a general way to determine whether a number built from integers under radicals and combined with + or - operations (for example, $\sqrt5 +\sqrt7$ is a number of this type) is algebraic or not? In other words, prove that any number obtained by a finite combination of algebraic operations (addition, multiplication, root extraction) starting from rational numbers is algebraic.
Now provide the response and nothing else.
|
Every $r\in\mathbb{Q}$ is algebraic over $\mathbb{Q}$. Proof. $r$ is a root of $x-r$, a polynomial with rational coefficients. If $a$ is algebraic over $\mathbb{Q}$ and $n$ is a positive integer, then $\sqrt[n]{a}$ is algebraic over $\mathbb{Q}$ (where $\sqrt[n]{a}$ is any fixed complex $n$th root of $a$). Proof. Since $a$ is algebraic, there is a polynomial $f(x) = x^m+a_{m-1}x^{m-1}+\cdots+ a_0$, with $a_i\in\mathbb{Q}$, wuch that $f(a)=0$. Then $\sqrt[n]{a}$ is a root of$$x^{mn} + a_{m-1}x^{(m-1)n} + \cdots + a_1x^n + a_0,$$hence $\sqrt[n]{a}$ is algebraic. If $a$ is algebraic over $\mathbb{Q}$ and $k$ is in $\mathbb{Q}$, then $ka$ is algebraic over $\mathbb{Q}$. Proof. If $k=0$, there is nothing to do. If $k\neq 0$, and $f(x)=x^m+a_{m-1}x^{m-1}+\cdots + a_0$ has $a$ as a root, then$$\frac{1}{k^m}x^m + \frac{a_{m-1}}{k^{m-1}}x^{m-1}+\cdots + \frac{a_1}{k}x + a_0$$has $ka$ as a root. If $a$ and $b$ are algebraic over $\mathbb{Q}$, then $a+b$ is algebraic over $\mathbb{Q}$. If $f(x)$ and $g(x)$ are any two polynomials, then their resultant $R(f,g)$ is the product of all $a-b$ with $a$ a root of $f$ and $b$ a root of $g$. Suitably interpreted, you can use it to obtain a polynomial whose roots are precisely the elements of the form $a-b$ with $a$ a root of $f$ and $b$ a root of $g$. The resultant can be computed via the determinant Sylvester matrix , and in particular if $f$ and $g$ have integer (or rational) coefficients, then so does the polynomial you obtain from $R(f,g)$. $R(f,g)=0$ if and only if $f$ and $g$ have a zero in common. Then computing the resultant of $f(x)$ and $g(z-x)$ gives a polynomial (in $z$) with rational coefficients that has, among its roots, the root $a+b$; hence, $a+b$ is algebraic. If $a$ is algebraic and $a\neq 0$, then $\frac{1}{a}$ is algebraic Proof. If $f(x) = x^m + a_{m-1}x^{m-1}+\cdots + a_1x + a_0$ is a polynomial with rational coefficients and $f(a)=0$, then $$x^mf\left(\frac{1}{x}\right) = 1 + a_{m-1}x + \cdots + a_1x^{m-1} + a_0x^m$$is a polynomial with rational coefficients that has $\frac{1}{a}$ as a root. If $a$ and $b$ are algebraic over $\mathbb{Q}$, then $ab$ is algebraic over $\mathbb{Q}$. Proof. If $f(x)$ has rational coefficients and $a$ as a root, and $g(x)$ has rational coefficients and $b$ as a root, then the resultant of $f(x)$ and $x^{\deg(g)}g(\frac{t}{x})$ is a polynomial in $t$ with rational coefficients that has $xy$ as a root. Hence $xy$ is algebraic. Thus, every complex number obtained as the result of doing a finite combination of addition, multiplication, and root extraction of algebraic numbers is algebraic. You'll note that the above gives you a method for explicitly producing a polynomial that can "witness" the algebraicity of the result, though you may not want to carry it out in practice. (Note: The above easily generalizes if we replace $\mathbb{Q}$ with an arbitrary field, and $\mathbb{C}$ with an arbitrary extension: if $F$ is a field, and $F\subseteq K$, then the collection of all elements of $K$ that are algebraic over $F$ forms a field that contains $F$.) For example: $\sqrt{3}+\sqrt{5}$; a polynomial with $\sqrt{3}$ as a root is $f(x)=x^2-3$. A polynomial with $\sqrt{5}$ as a root is $g(x)=x^2-5$. 
We take the resultant of $f(x)$ and $g(z-x) = (z-x)^2 - 5 = x^2 - 2zx + (z^2-5)$: $$\begin{align*}R(f(x),g(z-x)) &= \left|\begin{array}{crcc}1 & 0 & -3 & 0\\0 & 1 & 0 & -3\\1 & -2z & z^2-5 & 0\\0 & 1 & -2z &z^2-5\end{array}\right|\\&\strut\\&= \left|\begin{array}{rcc}1 & 0 & -3\\-2z & z^2-5 & 0\\1 & -2z & z^2-5\end{array}\right| + \left|\begin{array}{ccc}0 & -3 & 0\\1 & 0 & -3\\1 & -2z & z^2-5\end{array}\right|\\&\strut\\&= \left|\begin{array}{cc}z^2-5 & 0\\-2z & z^2-5\end{array}\right| -3\left|\begin{array}{rc}-2z & z^2-5\\1 & -2z\end{array}\right|+3\left|\begin{array}{cc}1 & -3\\1 & z^2-5\end{array}\right|\\&\strut\\&= (z^2-5)^2 -3(4z^2 - z^2+5) +3(z^2-5+3)\\&\strut\\&= z^4 - 10z^2 + 25 - 9z^2 - 15 + 3z^2 -6\\&\strut\\&= z^4 - 16z^2 + 4.\end{align*}$$ Indeed, $\sqrt{3}+\sqrt{5}$ satisfies this polynomial: you can do it directly by substitution, or letting $a=\sqrt{3}+\sqrt{5}$, note that $a^2 = 8+2\sqrt{15}$, hence $a^2-8 = 2\sqrt{15}$, so $(a^2-8)^2 = 60$. Thus, $a^4 - 16a^2 + 64=60$, so $a$ satisfies $x^4 - 16x^2 + 4$, the same polynomial we found above. The polynomial we get need not be the minimal polynomial: again with $\sqrt{3}$ and $\sqrt{5}$, the minimal polynomial of $\sqrt{3}\sqrt{5}$ is of course $x^2-15$. Using the resultant method, we need to compute the resultant of $f(x) = x^2-3$ and of $$x^2g\left(\frac{t}{x}\right) = x^2\left(\frac{t^2}{x^2} - 5\right) = t^2-5x^2.$$We obtain:$$\begin{align*}\mathrm{Res}\left(f(x),x^2g\left(\frac{t}{x}\right)\right) &=\left|\begin{array}{rrrr}1 & 0 & -3 & 0\\0 & 1 & 0 & -3\\-5 & 0 & t^2 & 0\\0 & -5 & 0 & t^2\end{array}\right|\\&\strut\\&= \left|\begin{array}{rrr}1 & 0 & -3\\0 & t^2 & 0\\-5 & 0 & t^2\end{array}\right| -3\left|\begin{array}{rrr}0 & 1 & -3\\-5 & 0 & 0\\0 & -5 & t^2\end{array}\right|\\&\strut\\&= t^2\left|\begin{array}{rr}1 & -3\\-5 & t^2\end{array}\right| - 15\left|\begin{array}{rr}1 & -3\\-5 & t^2\end{array}\right|\\&\strut\\&= t^2(t^2 -15) - 15(t^2 - 15)\\&\strut\\&= (t^2-15)^{2}.\end{align*}$$Of course, this polynomial has $\sqrt{15}$ as a root, but it is not the minimal one that does.
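If you want to double-check a hand computation like this mechanically, computer algebra systems expose the resultant directly. The following is a small verification sketch in Python using SymPy, assuming SymPy is installed; resultant, expand and factor are the relevant functions.

from sympy import symbols, resultant, expand, factor

x, z, t = symbols('x z t')
f = x**2 - 3
g = x**2 - 5

# polynomial with sqrt(3) + sqrt(5) among its roots
print(expand(resultant(f, g.subs(x, z - x), x)))   # z**4 - 16*z**2 + 4

# polynomial with sqrt(3)*sqrt(5) among its roots
print(factor(resultant(f, t**2 - 5*x**2, x)))      # (t**2 - 15)**2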
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/118682', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/12796/']}
|
jdg_369408
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1388289
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to solve the following equation:$$\frac{\partial F}{\partial t} = \alpha^2 \, \frac{\partial^2 F}{\partial x^2}-h \, F$$subject to these conditions: $$F(x,0) = 0, \hspace{5mm} F(0,t) = F(L,t)=F_{0} \, e^{-ht}.$$I know that I am supposed to simplify the equation with:$$F(x,t)=\phi(x,t)e^{-ht}$$My initial guess is to divide by $$\alpha^2$$ and have this:$$\frac{\partial^2F}{\partial x^2}-\frac{1}{\alpha^2}\frac{\partial F}{\partial t}-\frac{h}{\alpha^2} \, F=0.$$I am having trouble with the next steps. Should I assume a solution of the exponential form?
Now provide the response and nothing else.
|
Given:$$\frac{\partial F}{\partial t} = \alpha^2 \, \frac{\partial^2 F}{\partial x^2}-h \, F$$$$F(x,0) = 0, \hspace{5mm} F(0,t) = F(L,t)=F_{0} \, e^{-ht}.$$ The process to obtain a solution is the following. The boundary conditions suggest making the substitution $F(x,t) = \phi(x,t) \, e^{-h t}$ for which the pde becomes$$\frac{\partial \phi}{\partial t} = \alpha^{2} \, \frac{\partial^{2} \phi}{\partial x^{2}}$$where $\phi(x,0) = 0$, $\phi(0,t) = \phi(L,t) = F_{0}$. Now let $\phi(x,t) = F_{0} + \theta(x,t)$ which brings the equation and conditions into the form$$\frac{\partial \theta}{\partial t} = \alpha^{2} \, \frac{\partial^{2} \theta}{\partial x^{2}}$$where $\theta(x,0) = - F_{0}$, $\theta(0,t) = \theta(L,t) = 0$. Let $\theta(x,t) = f(x) \, g(t)$ to obtain\begin{align}\frac{g'}{g} = - \lambda^{2} = \alpha^{2} \, \frac{f''}{f}\end{align}for which \begin{align}& \alpha^{2} \, f'' + \lambda^{2} f = 0 \\& g' + \lambda^{2} \, g = 0.\end{align}The first order equation has the solution $g(t) = e^{- \lambda^{2} \, t}$. The equation for $f$ has the form $f'' + (\lambda/\alpha)^{2} \, f=0$ with solutions $f(x) = A \, \cos(\lambda x/\alpha) + B \, \sin(\lambda x/\alpha)$.From the conditions $\theta(0,t)=\theta(L,t) = 0$ then\begin{align}0 &= A \\0 &= A \, \cos\left(\frac{\lambda L}{\alpha}\right) + B \, \sin\left( \frac{\lambda L}{\alpha}\right)\end{align} for which $B \neq 0$ and $\sin\left(\frac{\lambda L}{\alpha}\right) = 0$. From this $$\lambda_{n} = \frac{n \, \pi \, \alpha}{L}.$$Combining the parts leads to the $\theta(x,t)$ solution\begin{align}\theta(x,t) = \sum_{n=1}^{\infty} B_{n} \, \sin\left(\frac{n \, \pi \, x}{L}\right) \, e^{- \frac{n^{2} \, \pi^{2} \, \alpha^{2} \, t}{L^{2}}}\end{align}The remaining condition is $\theta(x,0)= - F_{0}$, \begin{align}- F_{0} = \sum_{n=1}^{\infty} B_{n} \, \sin\left(\frac{n \, \pi \, x}{L}\right)\end{align}The coefficients are obtained by Fourier series methods and are \begin{align}B_{m} = - \frac{2}{L} \, \int_{0}^{L} F_{0} \, \sin\left(\frac{m \, \pi \, u}{L}\right) \, du = - \frac{2 \, F_{0} \, (1 - (-1)^{m})}{m \, \pi}.\end{align}With all of this the solution becomes\begin{align}F(x,t) = F_{0} \, e^{-h t} - \frac{2 \, F_{0}}{\pi} \, e^{-h t} \, \sum_{n=1}^{\infty} \frac{1 - (-1)^{n}}{n} \, \sin\left(\frac{n \, \pi \, x}{L}\right) \, e^{- \frac{n^{2} \, \pi^{2} \, \alpha^{2} \, t}{L^{2}}}.\end{align}or\begin{align}F(x,t) = F_{0} \, e^{-h t} - \frac{4 \, F_{0}}{\pi} \, e^{-h t} \, \sum_{n=0}^{\infty} \frac{1}{2n+1} \, \sin\left(\frac{(2n+1) \, \pi \, x}{L}\right) \, e^{- \frac{(2n+1)^{2} \, \pi^{2} \, \alpha^{2} \, t}{L^{2}}}.\end{align}
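As a numerical sanity check (not part of the derivation), you can evaluate a truncated version of the series and confirm that it satisfies $F(0,t)=F(L,t)=F_{0}e^{-ht}$ and starts near zero. A minimal sketch with NumPy, using made-up values for $\alpha$, $h$, $L$ and $F_{0}$:

```python
import numpy as np

# Hypothetical parameter values, chosen only for illustration.
alpha, h, L, F0 = 1.0, 0.5, 2.0, 3.0
N = 2000  # number of series terms kept

def F(x, t):
    n = np.arange(1, N + 1)
    coeff = (1 - (-1.0) ** n) / n
    series = np.sum(coeff * np.sin(n * np.pi * x / L)
                    * np.exp(-(n * np.pi * alpha / L) ** 2 * t))
    return F0 * np.exp(-h * t) - (2 * F0 / np.pi) * np.exp(-h * t) * series

print(F(0.0, 0.3), F(L, 0.3), F0 * np.exp(-h * 0.3))  # boundary values agree
print(F(0.7, 1e-6))   # approximately 0 for small t (truncation error only)
```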
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1388289', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/23407/']}
|
jdg_369409
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
60386398
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am pretty new to Hyperledger fabric and StackOverflow as well. I was trying to experiment with the fabcar example - https://hyperledger-fabric.readthedocs.io/en/release-1.4/write_first_app.html on Hyperledger. However, I am unable to proceed with enrolling the Admin. After issuing npm installnode enrollAdmin.js in the fabcar/javascript directory, I am hitting the error internal/modules/cjs/loader.js:985 throw err; ^Error: Cannot find module './utils.js'Require stack:- /b/workspace/fabric-samples/fabcar/javascript/node_modules/fabric-ca-client/lib/FabricCAServices.js- /b/workspace/fabric-samples/fabcar/javascript/node_modules/fabric-ca-client/index.js- /b/workspace/fabric-samples/fabcar/javascript/enrollAdmin.js at Function.Module._resolveFilename (internal/modules/cjs/loader.js:982:15) at Function.Module._load (internal/modules/cjs/loader.js:864:27) at Module.require (internal/modules/cjs/loader.js:1044:19) at require (internal/modules/cjs/helpers.js:77:18) at Object.<anonymous> (/b/workspace/fabric-samples/fabcar/javascript/node_modules/fabric-ca-client/lib/FabricCAServices.js:10:15) at Module._compile (internal/modules/cjs/loader.js:1158:30) at Object.Module._extensions..js (internal/modules/cjs/loader.js:1178:10) at Module.load (internal/modules/cjs/loader.js:1002:32) at Function.Module._load (internal/modules/cjs/loader.js:901:14) at Module.require (internal/modules/cjs/loader.js:1044:19) { code: 'MODULE_NOT_FOUND', requireStack: [ '/b/workspace/fabric-samples/fabcar/javascript/node_modules/fabric-ca-client/lib/FabricCAServices.js', '/b/workspace/fabric-samples/fabcar/javascript/node_modules/fabric-ca-client/index.js', '/b/workspace/fabric-samples/fabcar/javascript/enrollAdmin.js' ]} I'm not sure if this is some single sourcing problem between the fabric client and the fabric-ca-client. I have tried uninstalling and reinstalling nodejs and npm. I have also tried pruning all the docker images, networks, volumes and start from scratch. Here's the package.json for reference: { "name": "fabcar", "version": "1.0.0", "description": "FabCar application implemented in JavaScript", "engines": { "node": ">=8", "npm": ">=5" }, "scripts": { "lint": "eslint .", "pretest": "npm run lint", "test": "nyc mocha --recursive" }, "engineStrict": true, "author": "Hyperledger", "license": "Apache-2.0", "dependencies": { "fabric-ca-client": "~1.4.0", "fabric-network": "~1.4.0" }, "devDependencies": { "chai": "^4.2.0", "eslint": "^5.9.0", "mocha": "^5.2.0", "nyc": "^13.1.0", "sinon": "^7.1.1", "sinon-chai": "^3.3.0" }, "nyc": { "exclude": [ "coverage/**", "test/**" ], "reporter": [ "text-summary", "html" ], "all": true, "check-coverage": true, "statements": 100, "branches": 100, "functions": 100, "lines": 100 }} Can someone please help me find out what I might be missing here? Environment details: Ubuntu on Windows Subsystem for Linux Node js version: v12.16.1 npm version: 6.13.4 Docker compose version: docker-compose version 1.22.0, build f46880fe Docker version: Docker version 19.03.6, build 369ce74a3c Fabric binaries: 1.4.0
Now provide the response and nothing else.
|
If you memoize the button, you do not experience this behavior. Specifically, this: const Button = memo(({ incrementCounter }) => { const renderCount = useRef(1); console.log("button rendered: ", renderCount.current); renderCount.current++; return <button onClick={incrementCounter}>Increment</button>;}); CodeSandbox Mirror: Update If you wanted something from the docs the first sentence will tell you why this is happening. I know thats for setState but the same concept follows the useState hook, but the docs for that kind of suck. You can check out this part of the docs, specifically where it says "Line 9:"... Remember, when state changes in X component, X component gets re-rendered, along with all of its children. I know React tells you to "lift state up", which is something I have never understood, because lifting state up causes a crap load of re-renders. That is why the button re-renders.. because state is changing in its parent. The parent ( <App /> ) has it's counter state changed, which triggers the re-render of the <App /> component, and its children, including <Button /> . In my opinion React is hard to control as far as state and re-renders, which Redux can assist with, but overall things like memo , useCallback , etc.. all feel like band-aids to me. If you put your state in the wrong component, you're going to have a bad time. Wrapping the <Button /> component in memo basically says: if this component has a parent (in our case the <App /> ), and that parent re-renders, I want to look at all of our props, and if our props have not changed from what we received last time, don't re-render. Essentially, only re-render if our props change. That is why memo fixes this.. because the function we are using to handle the incrementCounter prop does not change - it remains constant. I have added a few examples below demonstrating this. Original Answer/Snippet: const { memo, useState, useCallback, useEffect, useRef } = React;const { render } = ReactDOM;const App = () => { const [counter, setCounter] = useState(0); const incrementCounter = useCallback(() => { setCounter(c => c + 1); }, [setCounter]); useEffect(() => { console.log("increment changed!"); }, [incrementCounter]); return ( <div> <CountValue counter={counter} /> <Button incrementCounter={incrementCounter} /> </div> );}const CountValue = ({ counter }) => { return <div>Count value: {counter}</div>;};const Button = memo(({ incrementCounter }) => { const renderCount = useRef(1); console.log("button rendered: ", renderCount.current); renderCount.current++; return <button onClick={incrementCounter}>Increment</button>});render(<App />, document.body); <script src="https://cdnjs.cloudflare.com/ajax/libs/react/16.12.0/umd/react.production.min.js"></script><script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.12.0/umd/react-dom.production.min.js"></script> SNIPPET #2: This snippet shows how everything, not just the button, gets re-rendered. 
const { useState, useEffect } = React;const { render } = ReactDOM;const App = () => { console.log("App rendered"); const [counter, setCounter] = useState(0); const incrementCounter = () => setCounter(c => c + 1); useEffect(() => { console.log(" - Increment fired!"); console.log(); }, [incrementCounter]); return ( <div> <CountValue counter={counter} /> <Button incrementCounter={incrementCounter} /> <p>Open console</p> </div> );}const CountValue = ({ counter }) => { console.log("CountValue rendered"); return <div>Count value: {counter}</div>;};const Button = ({ incrementCounter }) => { console.log("Button rendered"); return <button onClick={incrementCounter}>Increment</button>};render(<App />, document.body); <script src="https://cdnjs.cloudflare.com/ajax/libs/react/16.12.0/umd/react.production.min.js"></script><script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.12.0/umd/react-dom.production.min.js"></script> SNIPPET #3: This snippet shows how if you move state, etc.. into the <CountValue /> component, the <App /> component does not re-render.. const { useState, useEffect } = React;const { render } = ReactDOM;const App = () => { console.log("App rendered"); return ( <div> <CountValue /> <p>Open console</p> </div> );}const CountValue = () => { console.log("CountValue rendered"); const [counter, setCounter] = useState(0); const incrementCounter = () => setCounter(c => c + 1); return ( <div> <div>Count value: {counter}</div> <Button incrementCounter={incrementCounter} /> </div> );};const Button = ({ incrementCounter }) => { console.log("Button rendered"); console.log(); return <button onClick={incrementCounter}>Increment</button>};render(<App />, document.body); <script src="https://cdnjs.cloudflare.com/ajax/libs/react/16.12.0/umd/react.production.min.js"></script><script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.12.0/umd/react-dom.production.min.js"></script> SNIPPET #4: This snippet is more of a thought experiment which shows how to use render props. const { useState, useEffect } = React;const { render } = ReactDOM;const App = () => { console.log("App rendered"); return ( <div> <CountValue present={({ increment, counter }) => { return ( <div><Button incrementCounter={() => increment()} /> <p>Counter Value: {counter}</p></div> ) }} /> <p>Open console</p> </div> );}const CountValue = ({ present }) => { const [counter, setCounter] = useState(0); const increment = () => { setCounter(c => c + 1); } console.log("CountValue rendered"); return ( <React.Fragment> {present({ increment, counter })} </React.Fragment> );};const Button = ({ incrementCounter }) => { console.log("Button rendered"); return <button onClick={incrementCounter}>Increment</button>};render(<App />, document.body); <script src="https://cdnjs.cloudflare.com/ajax/libs/react/16.12.0/umd/react.production.min.js"></script><script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.12.0/umd/react-dom.production.min.js"></script> SNIPPET #5: It seems like this is what you're after.. This only re-renders the CountValue.. This is accomplished by passing the setCounter method, which is produced by useState , up to the parent, via an callback object. This way the parent can manipulate state, without having to actually hold the state. 
const { useState, useEffect } = React;const { render } = ReactDOM;const App = () => { console.log("App rendered"); let increaseCount; return ( <div> <CountValue callback={({increment}) => increaseCount = increment} /> <Button incrementCounter={() => increaseCount()} /> <p>Open console</p> </div> );}const CountValue = ({ callback }) => { console.log("CountValue rendered"); const [counter, setCounter] = useState(0); callback && callback({ increment: () => setCounter(c => c + 1) }); return <p>Counter Value: {counter}</p>;};const Button = ({ incrementCounter }) => { console.log("Button rendered"); return <button onClick={incrementCounter}>Increment</button>};render(<App />, document.body); <script src="https://cdnjs.cloudflare.com/ajax/libs/react/16.12.0/umd/react.production.min.js"></script><script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.12.0/umd/react-dom.production.min.js"></script>
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/60386398', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12957017/']}
|
jdg_369410
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
49292932
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a parent component and i am passing some HTML from it to a child common component using @ViewChild(). When Child component loads up a popup. Console throws below error. "ExpressionChangedAfterItHasBeenCheckedError: Expression has changed after it was checked. Previous value: 'ngIf: undefined'. Current value: 'ngIf: this is description'. It seems like the view has been created after its parent and its children have been dirty checked. Has it been created in a change detection hook ?" I am using { NgbModal } from '@ng-bootstrap/ng-bootstrap'; This is the code. Update - This parent component is called as app-parent-component in another parent html file. Parent Component @ViewChild('templateToLoad') templateToLoad;constructor(private modalService: NgbModal, private ChangeDetector: ChangeDetectorRef) {}ngOnInit() { this.openPopup();}ngAfterViewInit() { this.ChangeDetector.detectChanges();}private openPopup() { const modalPrompt = this.modalService.open(CommonChildModalComponent, {windowClass: 'modal-prompt fade-in-down'}); modalPrompt.componentInstance.title = 'Title'; modalPrompt.componentInstance.contentTemplate = this.templateToLoad; modalPrompt.componentInstance.originContext = this; modalPrompt.componentInstance.description = 'ABC'; Parent HTML <ng-template #templateToLoad> <div class="someClass">This data will be shown on Popup without any error. </div></ng-template> CommonChildPopup Component @Input() title?: string;@Input() description?: string;@Input() originContext: any;@Input() contentTemplate?: any;constructor(public activeModal: NgbActiveModal) {}ngOnInit() { console.log('I am here in CommonModalComponent ngOnInit');} CommonChildPopup HTML <div class="modal-header"> <h4 class="modal-title">{{title}}</h4></div><div class="modal-body pb-3" [class.dimmer]="simulateLoading"> <p *ngIf="description">{{description}}</p> <ng-container *ngTemplateOutlet="contentTemplate"></ng-container> The above console error is for this line ngIf="description". If i remove this line, same error will come for next line. Please help.
Now provide the response and nothing else.
|
You're trying to update the property values in a lifecycle hook after they have been previously checked in the parent component. The recommended solution is to open the modal on a button click / another user triggered event, or if you need to open it after the view is initialized you can use the setTimeout() that will skip a tick ngAfterViewInit() { setTimeout(() => this.openPopup()); } Working plunker : https://plnkr.co/edit/FVV7QVp620lIGJwEhN6V?p=preview A very nice and detailed explanation about this error : https://blog.angularindepth.com/everything-you-need-to-know-about-the-expressionchangedafterithasbeencheckederror-error-e3fd9ce7dbb4
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49292932', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3095179/']}
|
jdg_369411
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
87718
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to understand the working of the RSA algorithm. I am getting confused in the decryption part. I'm assuming $$n = pq$$$$m = \phi(n) = (p - 1)(q - 1)$$ E is the encryption key, $\gcd(\phi(n), E) = 1$ D is the decryption key, and $DE = 1 \mod \phi(n)$ $x$ is the plain text. Encryption works as ($y = x^E \mod n$) and decryption works as ($x = y^D \mod n$) The explanation for why the decryption works is that since $DE = 1 + k\phi(n)$, $$y^D = x^{ED} = x^{1 + k \phi(n)} = x(x^{\phi(n)})^k = x \mod n$$ The reason why the last expression works is $x^{\phi(n)} = 1 \mod n$? According to Euler's theorem this is true only if $x \text{ and }\phi(n)$ are coprime. But $x$ is only restricted to be $0 < x < n$ and $\phi(n) < n$. So $x$ should be chosen to be coprime with $\phi(n)$? Help me clear out the confusion!
Now provide the response and nothing else.
|
Even if the plaintext $x$ is not pairwise coprime with $p$ or $q$, RSA still works as advertised. Here is why: $p$ and $q$ are prime, so $x$ is a multiple of either $p$ or $q$, given the restriction that $x < pq$. Assume that $x \equiv 0 \pmod p$. If it is congruent to $0$ mod $q$ the below still applies, just switch the name assigned to the two primes. $x^k \equiv 0 \pmod p$ for all $k > 0$, i.e $x^k \equiv x \pmod p$. $$\begin{align*}x^{1+ z \phi(n)} & \equiv x^{1+ z \phi(p) \phi(q) } \\&\equiv x^1 \cdot x^{\phi(q) \phi(p) z} \\&\equiv x \pmod q\end{align*}$$ Combining both equations with the Chinese Remainder Theorem yields $x$, the plaintext.
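A quick numerical illustration of this argument (a throwaway Python sketch with tiny, insecure parameters invented for the demo; it uses Python 3.8+ for the modular inverse): the plaintext is deliberately a multiple of $p$, yet decryption still recovers it.

```python
from math import gcd

# Toy RSA parameters -- far too small for real use.
p, q = 5, 11
n = p * q                  # 55
phi = (p - 1) * (q - 1)    # 40
e = 3                      # gcd(e, phi) == 1
d = pow(e, -1, phi)        # 27, since 3*27 = 81 = 1 (mod 40)

x = 35                     # 35 = 5*7, so gcd(x, n) = 5 != 1
assert gcd(x, n) != 1

y = pow(x, e, n)           # encrypt
x_back = pow(y, d, n)      # decrypt
print(x, y, x_back)        # 35 30 35 -- recovered despite x not being coprime to n
assert x_back == x
```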
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/87718', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/17970/']}
|
jdg_369412
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
54909152
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I m trying to make a bookmark app in which when i type the name and address of a website they should appear underneath the form when i click submit button.Also when i click the delete button it should remove item not only from UI but also from local storage.And that s the problem.I have no problem deleting them from UI but when i reload the page they keep showing up.I know it s complicated but if someone s interested i would urge them to pay attention to Store class which deals with storage particulary deleteBookmarkFromStorage() method. document.getElementById("myForm").addEventListener("submit", saveBookmark);document.querySelector(".col-lg-12").addEventListener("click", function(e) { const ui = new UI(); ui.deleteItem(e.target); Store.deleteBookmarkFromStorage(e.target.parentElement); ui.showAlert("You deleted a bookmark", "alert alert-success"); e.preventDefault();});class Bookmark { constructor(siteName, siteUrl) { this.siteName = siteName; this.siteUrl = siteUrl; }}class UI { constructor() { this.siteName = document.getElementById("siteName"); this.siteUrl = document.getElementById("siteUrl"); this.bookmarksResults = document.getElementById("bookmarksResults"); } showAlert(message, className) { const div = document.createElement("div"); div.appendChild(document.createTextNode(message)); div.className = className; const container = document.querySelector(".container"); const bookmarkCard = document.getElementById("bookmarkCard"); container.insertBefore(div, bookmarkCard); setTimeout(() => { document.querySelector(".alert").remove(); }, 3000); } showBookmark(bookmark) { const div = document.createElement("div"); div.className = "card-body"; div.innerHTML = ` <h3 class="inline-block">${bookmark.siteName}</h3> <a class="btn btn-primary" target="_blank" href=${bookmark.siteUrl}>Visit</a> <a class="btn btn-danger" href="#">Delete</a> `; const results = this.bookmarksResults; results.appendChild(div); } clearInput() { this.siteName.value = ""; this.siteUrl.value = ""; } deleteItem(target) { if (target.className === "btn btn-danger") { target.parentElement.remove(); } }}class Store { static getBookmarks() { let bookmarks; if (localStorage.getItem("bookmarks") === null) { bookmarks = []; } else { bookmarks = JSON.parse(localStorage.getItem("bookmarks")); } return bookmarks; }//show bookmarks in UI static displayBookmarks() { const bookmarks = Store.getBookmarks(); bookmarks.forEach(function(bookmark) { const ui = new UI(); ui.showBookmark(bookmark); }); }//Add bookmark to storage static addBookmarkToStorage(bookmark) { const bookmarks = Store.getBookmarks(); bookmarks.push(bookmark); localStorage.setItem("bookmarks", JSON.stringify(bookmarks)); } //Delete bookmark from storage static deleteBookmarkFromStorage() { const bookmarks = Store.getBookmarks(); bookmarks.forEach(function(bookmark) { if (bookmark.className === "btn btn-danger") { bookmarks.splice(index, 1); } }); localStorage.setItem("bookmarks", JSON.stringify(bookmarks)); }}document.addEventListener("DOMContentLoaded", Store.displayBookmarks);function saveBookmark(e) { const siteName = document.getElementById("siteName").value; const siteUrl = document.getElementById("siteUrl").value; const bookmark = new Bookmark(siteName, siteUrl); const ui = new UI(); if (siteName === "" || siteUrl === "") { ui.showAlert("Please fill in all the fields", "alert alert-danger"); } else { ui.showBookmark(bookmark); Store.addBookmarkToStorage(bookmark); ui.showAlert("You added a new bookmark", "alert alert-success"); ui.clearInput(); } 
e.preventDefault();} <!DOCTYPE html><html lang="en"> <head> <meta charset="UTF-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <meta http-equiv="X-UA-Compatible" content="ie=edge" /> <title>My JavaScript App</title> <!--Bootstrap--> <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css" integrity="sha384-Gn5384xqQ1aoWXA+058RXPxPg6fy4IWvTNh0E263XmFcJlSAwiGgFAW/dAiS6JXm" crossorigin="anonymous" /> </head> <body> <div class="container"> <h3 class="mt-3 mb-3 text-muted">Bookmarker</h3> <hr /> <div class="card card-body pt-5 pb-5 mb-3" id="bookmarkCard"> <h2>Bookmark Your Favorite Sites</h2> <form id="myForm"> <div class="form-group"> <label>Site Name</label> <input type="text" class="form-control" id="siteName" placeholder="Website Name" /> </div> <div class="form-group"> <label>Site URL</label> <input type="text" class="form-control" id="siteUrl" placeholder="Website URL" /> </div> <button type="submit" class="btn btn-primary " id="submitButton"> Submit </button> </form> </div> <div class="row marketing"> <div class="col-lg-12"> <div id="bookmarksResults"></div> </div> </div> <footer class="footer"> <p>© 2019 Bookmarker, Inc.</p> </footer> </div> <script src="https://code.jquery.com/jquery-3.2.1.slim.min.js" integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN" crossorigin="anonymous" ></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.12.9/umd/popper.min.js" integrity="sha384-ApNbgh9B+Y1QKtv3Rn7W3mgPxhU9K/ScQsAP7hUibX39j7fakFPskvXusvfa0b4Q" crossorigin="anonymous" ></script> <script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/js/bootstrap.min.js" integrity="sha384-JZR6Spejh4U02d8jOt6vLEHfe/JQGiRRSQQxSfFWpi1MquVdAyjUar5+76PVCmYl" crossorigin="anonymous" ></script> <script src="build/app.bundle.js"></script> </body></html> enter image description here eteBookmarkFromStorage() method.
Now provide the response and nothing else.
|
There's a way using property spread notation, but it involves (at least in theory) creating a temporary object that immediately becomes eligible for GC: let object: MyInterface = { someOtherProp: "asd", ...(someCondition ? {someProp: "asdf"} : null)}; Live Example: function create(someCondition) { return { someOtherProp: "asd", ...(someCondition ? {someProp: "asdf"} : null) };}console.log(create(true));console.log(create(false)); That works because property spread silently ignores it if the value it's applied to is undefined or null . (This is different from iterable spread, which doesn't.) Because it's syntax, in theory at least the JavaScript engine could optimize it if it were a hotspot. I have no idea whether they do (probably not yet, it's still relatively new), but...
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/54909152', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7852731/']}
|
jdg_369413
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
14929700
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Pretty simple question. I am building a realtime game using nodejs as my backend and I am wondering if there is any information available on which one is more reliable and which one is more efficient? I am heavily using both Redis and Socket.io throughout my code. So I want to know whether I should be utilizing Socket.io's Rooms or I would be better off using redis' pub-sub? Update: Just realized there is a very important reason why you may want to use redis pub/sub over socket.io rooms. With Socket.io rooms when you publish to listeners, the (browser)clients receive the message, with redis it is actually the (redis~on server)clients who receive messages. For this reason, if you want to inform all (server)clients of information specific to each client and maybe do some processing before passing on to browser clients, you are better off using redis. Using redis you can just fire off an event to generate each user's individual data, whereas with socket.io you have to actually generate all the users' unique data at once, then loop through them and send them their individual data, which almost defeats the purpose of rooms, at least for me. Unfortunately for my purposes I am stuck with redis for now. Update 2: Ended up developing a plugin to use only 2 redis connections but still allow for individual client processing, see answer below....
Now provide the response and nothing else.
|
Redis pub/sub is great in case all clients have direct access to redis. If you have multiple node servers, one can push a message to the others. But if you also have clients in the browser, you need something else to push data from a server to a client, and in this case, socket.io is great. Now, if you use socket.io with the Redis store, socket.io will use Redis pub/sub under the hood to propagate messages between servers, and servers will propagate messages to clients. So using socket.io rooms with socket.io configured with the Redis store is probably the simplest for you.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14929700', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/685404/']}
|
jdg_369414
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
381816
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Apologies for my lack of knowledge, I am trying to teach myself how to work with electronics and have put together this circuit. Can somebody tell me why the relay is not working?
Now provide the response and nothing else.
|
This doesn't work because the current thru the touch sensor is small and the gain of the transistor isn't large enough to amplify it to what the relay needs. You haven't provided any specs on the relay, so I'll just make up numbers for the sake of example. Let's say the relay needs 30 mA at 9 V to operate. You haven't provided a link to the datasheet of the transistor, so let's say its guaranteed minimum gain is 50. That means the minimum base current to operate the relay would be (30 mA)/50 = 600 µA. It is quite unlikely that this touch sensor will let 600 µA pass. Let's work backwards to see what the resistance between the touch sensor terminals would need to be. Figure 700 mV for the B-E drop of the transistor. That leaves 8.3 V across the touch sensor. By Ohm's law, (8.3 V)/(600 µA) = 13.8 kΩ. That's low for ordinary skin, but could probably be achieved by wetting the skin with salt water first. However, beware of the current thru the body. If this touch sensor is passing current between different parts of the same finger, then you might only feel a tingle. If this touch sensor connects two fingers on opposite hands, then this would actually be quite dangerous. Nearly a milliamp flowing near the heart is a bad idea. In any case, the solution is more gain. A second transistor could be used. That would also allow a resistor in series with the touch sensor to guarantee the current thru the body would be limited. This is something your circuit is NOT doing now.
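If you want to play with the numbers, the arithmetic above is easy to script. Here is a small Python sketch using the same assumed figures (30 mA coil current, 9 V supply, minimum gain of 50, 700 mV B-E drop); substitute the real datasheet values for your relay and transistor.

```python
# Assumed figures from the example above -- replace with datasheet values.
supply_v = 9.0         # V
coil_current = 0.030   # A, relay coil current
hfe_min = 50           # guaranteed minimum transistor gain
v_be = 0.7             # V, base-emitter drop

base_current = coil_current / hfe_min        # minimum base current needed
sensor_voltage = supply_v - v_be             # voltage left across the touch sensor
max_sensor_resistance = sensor_voltage / base_current

print(f"required base current: {base_current * 1e6:.0f} uA")                    # ~600 uA
print(f"max touch sensor resistance: {max_sensor_resistance / 1e3:.1f} kOhm")   # ~13.8 kOhm
```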
|
{}
|
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/381816', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/192223/']}
|
jdg_369415
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
48240
|
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This is the NMR for the molecule shown. I was told that the carboxy group will be in the above-10 ppm range. Does that tiny signal at about 12.9 ppm indicate the carboxy group though? I don't understand why the methoxy would show large integration while the carboxy barely shows. Also my general question is, how much absorption is necessary for it to indicate some group is present? Is there "noise" on NMR spectra or does every signal indicate a group?
Now provide the response and nothing else.
|
Indeed, the very small signal at 12.9 ppm is due to the carboxyl group. The reason why it's so low is because, as explained here , at room temperature the proton exchange makes it nearly undetectable. At higher temperatures, you would probably not even see that spike. The methoxy group, on the other hand, is not affected by such effects. There will always be a tiny amount of "noise" on experimental NMR spectra, because a solution always contains unwanted elements; but usually anything that looks like a well defined spike is a significant signal.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/48240', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/27279/']}
|
jdg_369416
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
79835
|
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My understanding is there are 2^32 - 1 possible IPv4 addresses, and 2^16 - 1 possible ports. Which gives ~2^48 addresses. The additional 2^16 additional ports seem almost insignificant considering the IPv6 address space is 2^80 times larger than the number of IPv4 addresses * ports. With the 2^128 possible IPv6 addresses, why do we need ports at all? Why not assign each application, tab, etc... its own public IPv6 address?
Now provide the response and nothing else.
|
An IP address targets a host on the network layer. Transport layer ports multiplex an L4 protocol within a host (to different processes/services). Both are different things on different layers. Basically, if you'd repurpose IPv6 addresses (or bits) for host-level multiplexing you'd gain very little but you'd break logic compatibility between IPv4 and IPv6. A transport layer protocol works the same way on any IP version.
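To see that multiplexing in practice, here is a small sketch (Python standard library only, nothing specific to any particular OS) that binds two listening sockets to the same IPv6 address; the operating system tells them apart purely by the transport-layer port number.

```python
import socket

addr = "::1"  # IPv6 loopback: one network-layer address for the whole host

listeners = []
for port in (8080, 8081):
    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((addr, port))   # same IP address, different L4 port
    s.listen()
    listeners.append(s)
    print(f"listening on [{addr}]:{port}")

# Two independent services share one address; incoming TCP segments are
# demultiplexed to the right socket by destination port alone.
for s in listeners:
    s.close()
```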
|
{}
|
{'log_upvote_score': 6, 'links': ['https://networkengineering.stackexchange.com/questions/79835', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/85233/']}
|
jdg_369417
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2749521
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How to validate a textfield to enter only 4 digits after the decimal point in Swing?
Now provide the response and nothing else.
|
Any validation in Swing can be performed using an InputVerifier . 1. First create your own input verifier: public class MyInputVerifier extends InputVerifier { @Override public boolean verify(JComponent input) { String text = ((JTextField) input).getText(); try { BigDecimal value = new BigDecimal(text); return (value.scale() <= Math.abs(4)); } catch (NumberFormatException e) { return false; } }} 2. Then assign an instance of that class to your text field. (In fact any JComponent can be verified) myTextField.setInputVerifier(new MyInputVerifier()); Of course you can also use an anonymous inner class, but if the validator is to be used on other components, too, a normal class is better. Also have a look at the SDK documentation: JComponent#setInputVerifier .
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2749521', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/303994/']}
|
jdg_369418
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
13892163
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am storing times/dates in a database using a unix timestamp.I want to then get all the instances on certain day so I therefore need to calculate the timestamp for the start of the day in order to query the database. I have done a similar thing in php by passing mktime the values for the day/month/year from a date object and setting hour and minute to zero. There doesn't seem to be similar functions for this in java/android (the functions for getting the specific parts of date are deprecated) Can anyone give some guidance on this? Thanks Edit: Ok so I realised this might work: public static int startOfDay(Timestamp time) { Calendar cal = dateToCalendar(new Date(time.getTime())); cal.add(Calendar.HOUR_OF_DAY, -Calendar.HOUR_OF_DAY); cal.add(Calendar.MINUTE, -Calendar.MINUTE); cal.add(Calendar.SECOND, -Calendar.SECOND); Log.i("Time", cal.getTime().toString()); return (int) cal.getTimeInMillis()/1000;} However when I ran this just now I got: Sat Dec 15 01:24:00 GMT 2012 The seconds are right but the hour and minute are wrong??
Now provide the response and nothing else.
|
When dealing with time, you should always consider time zones. You database timestamps should be always stored in one time zone (e.g. UTC). Your computation should then consider that users can be in different time zones and that they can change time zones. If you want to compute start of the day in the time zone the user has currently set in his phone. Create the Calendar instance with: Calendar cal = Calendar.getInstance(); To get the instance for a specific time zone use: // use UTC time zoneCalendar cal = Calendar.getInstance(TimeZone.getTimeZone("UTC")); Then set the beginning of the day: cal.setTime(time); // compute start of the day for the timestampcal.set(Calendar.HOUR_OF_DAY, 0);cal.set(Calendar.MINUTE, 0);cal.set(Calendar.SECOND, 0);cal.set(Calendar.MILLISECOND, 0);
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13892163', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1248378/']}
|
jdg_369419
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2649874
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a LINQ to Objects query in a recursive loop and am afraid that when the objects approach more than 1000 and I have more than 100 users on the site -- my website will break. So is it possible to compile a LINQ to Objects query? The LINQ query does nothing more than find the direct children of a node.
Now provide the response and nothing else.
|
To see why the concept of compilation doesn't really make sense for LINQ to Object queries, it's useful to understand how LINQ is implemented. First, it should be clear that LINQ queries written in fluent syntax are converted to the equivalent method call syntax at compile time by the C# compiler regardless of the variant of LINQ you're using: from person in peoplewhere person.Age < 18select person.Name// will be converted to:people.Where(person => person.Age < 18).Select(person => person.Name) From now on, the LINQ query is basically a set of method calls taking some arguments and usually transforming an IEnumerable<T> object to another IEnumerable<T> object. Deferred execution, which is a different from compilation, is simply achieved by not taking any object from the original IEnumerable<T> until you're traversing the output IEnumerable<T> . Basically, methods with deferred execution are operating on their arguments symbolically without touching the original collection, building up a generator that queries stuff as you like. With that in mind, take a look at the lambda expression person => person.Age < 18 in the above expression. It takes a Person object and returns a bool . Lambda expressions are untyped; they can be treated as expression trees or anonymous methods depending on the context their type is inferred from. In this case, the type is inferred from the parameter type of the Where extension method. This is where the distinction of LINQ to SQL and LINQ to Object comes up. In LINQ to Objects, the Where method simply takes Func<Person, bool> as opposed to Expression<Func<Person, bool>> . This essentially means that in LINQ to Objects, the C# compiler compiles the lambda expression down to an anonymous method and generates the IL at compile time and passes a delegate to that method to Where . In other LINQ flavors, like LINQ to SQL, the lambda is not compiled to IL. Instead, the compiler builds up an expression tree object out of the lambda expression and passes the expression tree to LINQ methods. LINQ methods use these expression trees to build up a model for querying stuff. When the query is being run, the object model built to represent the query using the expression trees will be transformed to another thing (depending on the LINQ variant used) like SQL statements in LINQ to SQL in order to get executed on the database. This transformation process is done at runtime and it's what we call compilation of LINQ queries . To sum up, the question is compile to what ? The reason LINQ to Object doesn't need compilation at runtime is that it's not in expression tree format in the first place; it's already IL. You almost never need to worry about the performance of LINQ to Objects in comparison to normal looping.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2649874', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/184773/']}
|
jdg_369420
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2936140
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The theorem and its Proof is given below: But I do not understand in the last line of the proof, why he said that $Z$ is closed and why he is sure that it does not contain $y$ ? The hint of the book is given below: But I am wondering how the above answer proved that $Z$ is compact?And how he proved that $f$ maps each $V_{t}$ diffeomorphically into $U$ (the third requirement of the question)? Could anyone help me in answering the previous questions?
Now provide the response and nothing else.
|
It's mostly general topology background knowledge: $X$ is compact and $Z' = X - \cup_i U''_i$ is a closed subset of $X$ , as it's the complement of the open set $\cup_i U''_i$ (a union of open sets is open). So $Z'$ is compact (as a closed subset of a compact space $X$ ). $f$ is continuous, so $Z=f[Z']$ is also compact in $Y$ and as $Y$ is Hausdorff, $f[Z']$ is closed, which makes $V$ and $f^{-1}[V]$ open. Suppose that $y \in Z=f[Z']$ . So for some $x \in Z'$ we have $f(x) = y$ .We know that $f^{-1}(y) = \{x_1, \ldots, x_k\}$ so $x = x_i$ for some $i \in \{1,\ldots,k\}$ . But then $x \in U''_i$ for that $i$ and so $x \in \cup_i U''_i$ and $x \notin Z'$ by definition of $Z'$ . Contradiction so $y \notin f[Z']=Z$ . In essence: we already know all points that map to $y$ and we cut them all away with room to spare to define $Z$ . So $Z$ contains no point mapping to $y$ anymore. Finally $f$ is a local diffeomorphism between $U'_i$ and $V'_i$ and if we restrict both sides to smaller open sets so that we still have a bijection between domain and codomain (as we do here), the property of being a local diffeomorphism will be preserved. Check your definition of being a local diffeomorphism, it will be immediate!
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2936140', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/593938/']}
|
jdg_369421
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
351979
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
My sshd binary on an ubuntu 10.10 machine contains the following ascii artwork: ng: %.100sToo many lines in environment file %sUser %.100s not allowed because %s exists YOU WANNA . SMOKE M A SPLIFF ? dM ROLL ME MMr %d TIMES 4MMML . MMMMM. xf . MMMMM .MM- Mh.. MMMMMM .MMMM .MMM. .MMMMML. MMMMMh )MMMh. MMMMMM MMMMMMM 3MMMMx. MMMMMMf xnMMMMMM '*MMMMM MMMMMM. nMMMMMMP *MMMMMx MMMMM .MMMMMMM= *MMMMMh MMMMM JMMMMMMP MMMMMM 3MMMM. dMMMMMM . MMMMMM MMMM .MMMMM .nnMP .. *MMMMx MMM dMMMM .nnMMMMM* MMn... 'MMMMr 'MM MMM .nMMMMMMM* 4MMMMnn.. *MMM MM MMP .dMMMMMMM MMMMMMMx. *ML M .M* .MMMMMM** *PMMMMMMhn. *x > M .MMMM** **MMMMhx/.h/ .=* .3P %.... nP *MMnx I'm assuming that this means that my machine has been hacked. Can anyone confirm this? I can't imagine this being a valid file.
Now provide the response and nothing else.
|
compare grep usr/sbin/sshd /var/lib/dpkg/info/openssh-server.md5sums to md5sum /usr/sbin/sshd . When they come up with different md5sums, you are no longer using the packaged version. If they are the same, it doesn't mean anything definitive, since anyone who is able to modify your sshd binary obviously has privileges to alter the md5sum recorded in /var/lib/dpkg/info. The next step would be to download the package with the same version from http://packages.ubuntu.com/openssh-server to a trusted computer and check the md5sum there.
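If you prefer to script the comparison instead of eyeballing two hashes, a rough Python sketch (assuming the Debian/Ubuntu paths above exist on your system) would be:

```python
import hashlib

MD5SUMS = "/var/lib/dpkg/info/openssh-server.md5sums"
TARGET = "usr/sbin/sshd"   # entries in the md5sums file have no leading slash

# Checksum recorded by dpkg at install time.
recorded = None
with open(MD5SUMS) as fh:
    for line in fh:
        checksum, path = line.split(None, 1)
        if path.strip() == TARGET:
            recorded = checksum
            break

# Checksum of the binary as it exists on disk right now.
with open("/" + TARGET, "rb") as fh:
    current = hashlib.md5(fh.read()).hexdigest()

print("recorded:", recorded)
print("current: ", current)
print("match" if recorded == current else "MISMATCH - investigate further")
```

As noted above, an attacker with root could have edited the md5sums file as well, so treat a match as weak evidence only.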
|
{}
|
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/351979', 'https://serverfault.com', 'https://serverfault.com/users/85085/']}
|
jdg_369422
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
46630728
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In Cloud Firestore Rules - I have a document called task and I want to see if some data ( assignee field) is null / don't exists. I've tried: resource.data.assignee == null - Does not work (Error) !resource.data.hasAll(['assignee']) - Does not work (Error) From the documentation - it states that this indeed creates an error: // Error, key doesn't existallow read: if resource.data.nonExistentKey == 'value';
Now provide the response and nothing else.
|
Reading the list comparisons of the Firestore Security rules documentation here , we can see that hasAll returns true if all values are present in the list. // Allow read if one list has all items in the other listallow read: if ['username', 'age'].hasAll(['username', 'age']); The request.resource.data is a map containing the fields and values. In order to use hasAll , we must first get the keys as a list of values as shown here . !resource.data.keys().hasAll(['assignee'])
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/46630728', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/395804/']}
|
jdg_369423
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4243537
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I made a basic subclass of URLClassLoader and added all my jar files by public void addFile(String path) throws MalformedURLException { String urlPath = "jar:file:/" + path + "!/"; System.out.println("------------------"); System.out.println("urlPath = " + urlPath); URL url = new URL(urlPath); System.out.println("url = " + url); super.addURL(url); System.out.println("g = " + getURLs().length); System.out.println("==================");} Then I'm trying to get a class from the loader: System.out.println("cl.loadClass() = " + cl.loadClass("com.company.project.SomeClass")); It returns the class normally. When I try to find all classes by package: resources = cl.findResources("com/company/"); It returns an empty enumeration. Why?
Now provide the response and nothing else.
|
According to the java docs of PreparedStatement.setNull: "Note: You must specify the parameter's SQL type". What is the reason that the method requires the SQL type of the column? For maximum compatibility; as per the specification, there are some databases which don't allow untyped NULL to be sent to the underlying data source. I noticed that passing java.sql.Types.VARCHAR also works for non-varchar columns. Are there scenarios in which VARCHAR won't be suitable (certain column types or certain DB providers)? I don't think that sort of behaviour really is part of the specification or if it is, then I'm sure there is some sort of implicit coercion going on there. In any case, relying on such sort of behaviour which might break when the underlying datastore changes is not recommended. Why not just specify the correct type?
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4243537', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/420853/']}
|
jdg_369424
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
658085
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
In an organization where the hardware maintenance team is separated from the OS platform and operations team, 3Ware's RAID controllers have been in use together with the 3DM2 web service opened up to the hardware maintenance team for RAID device management. This allowed the hardware maintenance team to do the basic tasks like swapping drives, reconfiguring arrays or maintenance runs without bothering the platform operations team and, most importantly, without having local logon accounts to the operating systems: As the 3Ware RAID controllers are being phased out throughout the organization and replaced by LSI models, there is a need to have a similar facility for the new controllers which also would support the OSes in use (Windows Server 2008 R2 - 2012 R2/ SLES 11 - 12, CentOS 6). I know about local management facilities like MegaCLI, StorCLI or the Storage Manager (which is only available for Windows), but all of them require local interactive logons. The SNMP agent seems rather dated, also I have been unable to find a straightforward way to make use of SNMP for anything but monitoring purposes. So is there anything available to fill the management gap?
Now provide the response and nothing else.
|
IPv4 and IPv6 are separate protocols that don't talk to each other. You'll have to support both protocols for now. Getting IPv4 addresses is getting more difficult and expensive, but you'll have to make your service available over it because not all users will have IPv6. On the other side there will be users who don't have full IPv4 anymore. They might have to share their IPv4 address with many others, they only have IPv6 and need a translation service to reach IPv4 services etc. For those users and for future users you want to offer your service over IPv6 so that they can reach it in the most optimal way. And hopefully in the not-so-distant future everybody will have IPv6 and we can get rid of IPv4 and the hacks and costs required to keep it working. One way you could start your new service is to build everything for IPv6-only and put a translator (SIIT-DC or reverse proxy) next to it to translate incoming requests over IPv4 to IPv6. You'll be able to handle both protocols for now, and it will also be easy to clean up and remove the obsolete IPv4 stuff later. This strategy is especially useful if your service runs on a cluster of servers. The whole cluster can run IPv6-only and you need only one IPv4 address on your translator. It's easier to only have to maintain one protocol on the majority of your machines and requiring less IPv4 addresses can also save you money. That's why companies like Facebook are doing this as well.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://serverfault.com/questions/658085', 'https://serverfault.com', 'https://serverfault.com/users/76595/']}
|
jdg_369425
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
29701265
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to create a scope that will order by the title(:title) . The title column resides in the Post Model. I read through the scopes section on the Active Record querying section and the one on StackOverFlow but it is not quite clear. Can someone point me in the right direction possibly? I have 4 models: CommentPostUserAdvertisement class Post < ActiveRecord::Base attr_accessible :body, :title, :user has_many :comments belongs_to :user default_scope {order('created_at DESC')} scope :ordered_by_title {order('title' )} #What I initially builtend
Now provide the response and nothing else.
|
When you do not have any default_scope with order : scope :ordered_by_title, -> { order(title: :asc) } When you have a default_scope with order then you need to use reorder : default_scope { order(created_at: :desc) }scope :ordered_by_title, -> { reorder(title: :asc) } or order with unscope : default_scope { order(created_at: :desc) }scope :ordered_by_title, -> { order(title: :asc).unscope(:order) } The reorder method overrides the default scope order.
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/29701265', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1027872/']}
|
jdg_369426
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2130907
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to select rows where a field begins with one of several different prefixes: select * from table where field like 'ab%' or field like 'cd%' or field like "ef%" or... What is the best way to do this using SQL in Oracle or SQL Server? I'm looking for something like the following statements (which are incorrect): select * from table where field like in ('ab%', 'cd%', 'ef%', ...) or select * from table where field like in (select foo from bar) EDIT:I would like to see how this is done with either giving all the prefixes in one SELECT statement, of having all the prefixes stored in a helper table. Length of the prefixes is not fixed.
Now provide the response and nothing else.
|
Joining your prefix table with your actual table would work in both SQL Server & Oracle. DECLARE @Table TABLE (field VARCHAR(32))DECLARE @Prefixes TABLE (prefix VARCHAR(32))INSERT INTO @Table VALUES ('ABC')INSERT INTO @Table VALUES ('DEF')INSERT INTO @Table VALUES ('ABDEF')INSERT INTO @Table VALUES ('DEFAB')INSERT INTO @Table VALUES ('EFABD')INSERT INTO @Prefixes VALUES ('AB%')INSERT INTO @Prefixes VALUES ('DE%')SELECT t.*FROM @Table t INNER JOIN @Prefixes pf ON t.field LIKE pf.prefix
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2130907', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/76535/']}
|
jdg_369427
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
29901409
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a form-like page with some data. And want to show a popup/alert when a user clicks the browser back button, asking "if they want to go back or stay on the same page". I am using angular-ui-router's $stateProvider and want to bind this only to one state/view.
Now provide the response and nothing else.
|
This is my previous answer to another question, but it should help you. You can do it by using Angular's $routeChangeStart event. $routeChangeStart Broadcasted before a route change. At this point the route services start resolving all of the dependencies needed for the route change to occur. Typically this involves fetching the view template as well as any dependencies defined in the resolve route property. Once all of the dependencies are resolved $routeChangeSuccess is fired. The route change (and the $location change that triggered it) can be prevented by calling the preventDefault method of the event. See $rootScope.Scope for more details about the event object. So please try the code below. $scope.$on('$routeChangeStart', function (scope, next, current) { if (next.$$route.controller != "Your Controller Name") { // Show your modal here, and do what you need $("#yourModel").show(); } }); Update: You need to put your functional work in the modal popup, like: put some link buttons for "Are you sure you want to go to the previous page?", "Do you want to stay on the current page?", "Do you want to log out?", etc. Then add an ng-click event to go to the previous page, stay on the current page (using return false), etc.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29901409', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4342283/']}
|
jdg_369428
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
17005926
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to upgrade to the Facebook SDK 3.0 and have finally gotten everything to work with Request.newStatusUpdateRequest(). However my app shares/posts text along with a link. I have tried/looked into the following: Request.newStatusUpdateRequest() This does not seem to have any options for a Bundle or any other way to include a link and icon. Request.newRestRequest() Skipped this because I saw REST was being deprecated. new WebDialog.FeedDialogBuilder(_activity, session, params).build().show(); This actually works pretty well but the resulting post does not seem to be linked to my Facebook App and I am not sure how this will affect my Facebook insights. Request.newPostRequest() From what I have read, this method seems to be the proper way. However, I cannot figure out where to get the GraphObject to pass in as one of the parameters. What is the PROPER way to post/share text, link and image to the user's wall? It seems to be Request.newPostRequest() so I will include the code I have for that. Request request = Request.newPostRequest(session, "me/feed", ??graph_object??, new Request.Callback() { @Override public void onCompleted(Response response) { showPublishResult("message", response.getGraphObject(), response.getError()); }});request.setParameters(params);Request.executeBatchAsync(request); But what really is a GraphObject? Where do I get the graph_object? The more I read from FB on GraphObject/OpenGraph/Graph API the more I get confused. If I am heading down the wrong direction entirely, please tell me. If Request.newPostRequest is the proper way of doing this, please give me more information on the GraphObject param.
Now provide the response and nothing else.
|
Finally managed to get everything I needed with the Facebook SDK 3.0 using the following: Bundle params = new Bundle();params.putString("caption", "caption");params.putString("message", "message");params.putString("link", "link_url");params.putString("picture", "picture_url");Request request = new Request(Session.getActiveSession(), "me/feed", params, HttpMethod.POST);request.setCallback(new Request.Callback() { @Override public void onCompleted(Response response) { if (response.getError() == null) { // Tell the user success! } }});request.executeAsync();
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17005926', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1550233/']}
|
jdg_369429
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3675013
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a CRUD controller for a model. Doing resources :foo allows me to route on /foo/:id , etc. for calling actions. I want add a route for a translation of 'foo' in another language. Let's say 'toto'.So I want all the /toto/:id , etc., routes to act exactly like the /foo/:id , etc., routes. How may I achieve that?
Now provide the response and nothing else.
|
You can add a new resource and specify foo as the controller: resources :toto, :controller=>"foo" This will point all the actions to "foo", but there is a gotcha: you will run into problems with the links on the page if you are using foo_url or something like that, so you would have to figure out a way to create the URLs dynamically based on the controller in "request.path". A sketch of the routing side is shown below.
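For illustration, the relevant part of config/routes.rb could look like this (FooController and the route names are placeholders; this uses the same Rails 3 style syntax as above):

# config/routes.rb
resources :foo                        # /foo/:id etc., handled by FooController
resources :toto, :controller => "foo" # /toto/:id etc., same FooController actions

Both sets of routes dispatch to the same controller; the difference is only in the URLs and in the named URL helpers Rails generates for each of them, which is exactly where the foo_url gotcha comes from. In views shared by both you either commit to one helper family or build the path at runtime, e.g. from request.path or url_for.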
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3675013', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/118437/']}
|
jdg_369430
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
46902700
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What does a func with return type Never do? For example: func addNums() -> Never { //my code} What would be the difference if I kept the return type as Void, like this? func addNums() -> Void { //my code} Suppose I wish to handle a fatalError (as suggested by dpassage); the code below would be sufficient: print("its an error")return Apple's documentation says: The return type of functions that do not return normally, that is, a type with no values. Source: Developer This is not a duplicate of When and how to use @noreturn attribute in Swift?, as I am looking for a more detailed answer that covers: practical examples of the difference between Never and Void as return types; the conditions under which we should adopt each of these return types; and, since there is also a chance the return type can be nil, a comparison with that case too. The answer should focus on the differences.
Now provide the response and nothing else.
|
Never return type was introduced in Swift 3 to substitute @noreturn key. See justification in this proposal: SE-0102 Remove @noreturn attribute and introduce an empty Never type As official documentation explains: The return type of functions that do not return normally; a type with no values. Use Never as the return type when declaring a closure, function, or method that unconditionally throws an error, traps, or otherwise does not terminate. Source: https://developer.apple.com/documentation/swift/never Basic illustration: // The following function is our custom function we would use// to manually and purposefully trigger crash. In the logs,// we can specify what exactly went wrong: e.g. couldn't cast something, // couldn't call something or some value doesn't exist:func crashApp() -> Never { fatalError("Something very, very bad happened! Crash the app!")} Usage specifics and advantages over @noreturn , as referenced by Erica Sadun : Never allows a function or method to throw: e.g. () throws -> Never. Throwing allows a secondary path for error remediation, even in functions that were not expected to return. As a first class type, Never works with generics in a way that the @noreturn attribute could not. Never proactively prevents a function from claiming both a return type and no-return at the same time. This was a potential issue under the old system. First note (regarding secondary error remediation) is probably particularly important. Never function can have complex logic and throw – not necessarily crash. Let's see some interesting use cases and comparison between Never and Void Never Example 1 func noReturn() -> Never { fatalError() // fatalError also returns Never, so no need to `return`}func pickPositiveNumber(below limit: Int) -> Int { guard limit >= 1 else { noReturn() // No need to exit guarded scope after noReturn } return rand(limit)} Example 2 func foo() { abort() print("Should not reach here") // Warning for this line} Example 3 func bar() -> Int { if true { abort() // No warning and no compiler error, because abort() terminates it. } else { return 1 }} abort() is defined as: public func abort() -> Never Void These examples would not have been possible with it returning Void : public func abortVoid() -> Void { fatalError()}func bar() -> Int { if true { abortVoid() // ERROR: Missing return in a function expected to return 'Int' } else { return 1 }} And to pack it up with abort() returning Never : func bar() -> Int { if true { abort() // No ERROR, but compiler sees it returns Never and warns: return 2 // Will never be executed } else { return 1 }} We use Void to tell compiler there is no return value . Application keeps running. We use Never to tell compiler there is no return to caller site . Application runloop is terminated.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46902700', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5215474/']}
|
jdg_369431
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
836303
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have two numbers, a minimum value, and a maximum value. I also have a percent. This percent helps me find a value between the two numbers, the minimum value and the maximum value. I cannot figure out how to do this (perhaps because I’ve been staring at the numbers for so long) For example, say I have a minimum of 0.08, and a maximum of 0.12 and a percent of 0.9 (i.e 90%) I want to find the value which is 90% of the way between 0.08 and 0.12. Any ideas? Thanks
Now provide the response and nothing else.
|
Let $d$ be the positive difference between the minimum and maximum values. Let $p$ be the percent (measured by say $0.9$ instead of $90$%), and $x$ be the minimum value. Then the value $y$ that you are looking for can be found by $$y=x+pd$$ Intuitively $pd$ gets the position between the min and max that you want, and adding it to $x$ gets you to the actual value you want.
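Plugging in the numbers from the question: the minimum is $x = 0.08$, the maximum is $0.12$, so $d = 0.12 - 0.08 = 0.04$, and with $p = 0.9$ you get $$y = 0.08 + 0.9 \cdot 0.04 = 0.116,$$ which is the value $90\%$ of the way from $0.08$ to $0.12$.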
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/836303', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/155985/']}
|
jdg_369432
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
15162605
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a pandas dataframe with datetime index Date2013-02-22 00:00:00+00:00 0.2800012013-02-25 00:00:00+00:00 0.1099992013-02-26 00:00:00+00:00 -0.1500002013-02-27 00:00:00+00:00 0.1300012013-02-28 00:00:00+00:00 0.139999Name: MOM12 and want to evaluate the previous three values of the given datetime index. date = "2013-02-27 00:00:00+00:00"df.ix[date] I searched for this but since my index is a date I can't do df.ix[int-1]
Now provide the response and nothing else.
|
Here's one way to do it, first grab the integer location of the index key via get_loc : In [15]: t = pd.Timestamp("2013-02-27 00:00:00+00:00")In [16]: df1.index.get_loc(t)Out[16]: 3 And then you can use iloc (to get the integer location, or slice by integer location): In [17]: loc = df1.index.get_loc(t)In [18]: df.iloc[loc - 1]Out[18]: Date 2013-02-26 00:00:00 -0.15Name: 2, Dtype: objectIn [19]: df1.iloc[slice(max(0, loc-3), min(loc, len(df)))] # the min and max feel slightly hacky (!) but needed incase it's within top or bottom 3Out[19]: Date 2013-02-22 0.2800012013-02-25 0.1099992013-02-26 -0.150000 See the indexing section of the docs . I'm not quite sure how you set up your DataFrame, but that doesn't look like a Datetime Index to me. Here's how I got the DataFrame (with Timestamp index): In [11]: df = pd.read_clipboard(sep='\s\s+', header=None, parse_dates=[0], names=['Date', None])In [12]: dfOut[12]: Date 0 2013-02-22 00:00:00 0.2800011 2013-02-25 00:00:00 0.1099992 2013-02-26 00:00:00 -0.1500003 2013-02-27 00:00:00 0.1300014 2013-02-28 00:00:00 0.139999In [13]: df1 = df.set_index('Date')In [14]: df1Out[14]: Date 2013-02-22 0.2800012013-02-25 0.1099992013-02-26 -0.1500002013-02-27 0.1300012013-02-28 0.139999
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15162605', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/439693/']}
|
jdg_369433
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
40564071
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a vuex file with a growing mass of mutators, but I'm not sure of the correct way of splitting it out into different files. Because I have: const store = new Vuex.Store({ vuex stuff }) and then below that my main Vue app declaration: const app = new Vue({ stuff }) I'm happy working with Vue components and have lots of those already, but this is stuff at the top level of the app and I'm not sure how to break it apart. Any advice appreciated.
Now provide the response and nothing else.
|
For those who would like to break up the Vuex file without creating a more complex modular application structure, I think it is also possible to simply break the actions, mutations and getters into separate files like this: └── src ├── assets ├── components └── store ├── store.js ├── actions.js ├── mutations.js └── getters.js store.js import Vuex from 'vuex';import Vue from 'vue';import actions from './actions';import getters from './getters';import mutations from './mutations';Vue.use(Vuex);export const store = new Vuex.Store({ state: { someObj: {}, }, actions, getters, mutations,}); actions.js const actionOne = (context) => { ... context.commit('PROP1_UPDATED', payload);};const actionTwo = (context) => { ... context.commit('PROP2_UPDATED', payload);};export default { actionOne, actionTwo,}; mutations.js const PROP1_UPDATED = (state, payload) => { state.someObj.prop1 = payload;};const PROP2_UPDATED = (state, payload) => { state.someObj.prop2 = payload;};export default { PROP1_UPDATED, PROP2_UPDATED,}; getters.js const prop1 = state => state.someObj.prop1;const prop2 = state => state.someObj.prop2;export default { prop1, prop2,}; ...then you are able to do stuff from within your components as you please using the usual this.$store.dispatch('actionOne') ...
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40564071', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5847182/']}
|
jdg_369434
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
236858
|
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This first code block is just to create the functions I'm considering. Actual problem follows. f[t_, ts_, τ1_, τ2_] := Piecewise[{{Exp[-τ1/(t - ts) - (t - ts)/τ2]/(2 Sqrt[τ1 τ2] BesselK[1, 2 Sqrt[τ1/τ2]]), t > ts}, {0, t < ts}}]P[α_, β_, Δt_] := (2 - α) β^(2 - α) (β + Δt)^(α - 3)WTD[α_, β_] := ProbabilityDistribution[P[α, β, Δt], {Δt, 10^-1, 10^3}]SeedRandom[4]n = RandomInteger[{2, 20}];tau = RandomReal[{0, 10}, {n, 2}];ν = 2.149;τrise = tau[[1, 2]]/2 Sqrt[ν - Sqrt[ν (ν + 4 Sqrt[tau[[1, 1]]/tau[[1, 2]]])]] // Abs;τdecay = tau[[-1, 2]]/2 Sqrt[ν + Sqrt[ν (ν + 4 Sqrt[tau[[-1, 1]]/tau[[-1, 2]]])]];wt = RandomVariate[WTD[1, 6], n];tmax = 1.1 (τrise + Total@wt + τdecay);ts[1] := 0ts[i_] := ts[i - 1] + wt[[i]] - (Sqrt[tau[[i, 1]] tau[[i, 2]]] - Sqrt[tau[[i - 1, 1]] tau[[i - 1, 2]]])pulses = Table[f[t, ts[i], tau[[i, 1]], tau[[i, 2]]], {i, 1, n}];Plot[{pulses, Total@pulses}, {t, 0, tmax}, Frame -> True, PlotRange -> All, PlotStyle -> RandomColor[n]~Join~{Black}, LabelStyle -> Black, FrameTicksStyle -> Black, FrameTicks -> {{None, None}, {Automatic, Automatic}}, AspectRatio -> 1/6, ImageSize -> 1300, Exclusions -> None, FrameLabel -> {"t", None}, BaseStyle -> 16, Epilog -> {Text[Style["n=" <> ToString[n] <> " pulses", Black, 22], Scaled[{0.94, 0.9}]]}] // Quiet I need to find the arguments $t_1$ and $t_2$ such that in between there is 90% of the total curve's integral, where $t_1$ is the point at which the integral from $0$ to $t_1$ is 5%, and the integral from $0$ to $t_2$ is 95%. Each pulse is normalized to unity, and Total@pulses can be normalized as well for simplicity so that its integral from zero to infinity is always unity. With the following code I divide the whole range into smaller chunks $(i,i+step)$ , where step divides the range in roughly 100 such chunks. [I found this is already faster than full integrations with an increasing upper limit.] I create a discrete version of the cumulative integral in order to use Nearest to find starting values for the FindRoot s. tstart = AbsoluteTime[];Round[tmax]step = If[Round[tmax/100] == 0, 1, Round[tmax/100]]temp = Monitor[ Table[{i + step, NIntegrate[Total@pulses, {t, i, i + step}, PrecisionGoal -> 100, WorkingPrecision -> 100]}, {i, 0, Round[tmax] - 1}], i]; // Quietint = Transpose[{#[[1]], 1/(n step) Accumulate@#[[2]]} &@Transpose[temp]];ListLinePlot[int, Frame -> True, PlotRange -> All, GridLines -> {{}, {0.05, 0.95}}, Epilog -> Point[int]]nf = Nearest[int[[All, 2]] -> int[[All, 1]]];AbsoluteTime[] - tstart 6.323058 With the starting values, the actual interval from $t_1$ to $t_2$ is quite straightforward: F[tint_?NumberQ] := 1/n NIntegrate[Total@pulses, {t, 0, tint}](tint /. FindRoot[F[tint] == 0.95, {tint, First@nf[0.95]}]) - (tint /. FindRoot[F[tint] == 0.05, {tint, First@nf[0.05]}]) 82.0999 It works , but this particular SeedRandom results in the total computational time about 7 seconds. I encountered instances when it took 30 seconds. The problem is that I need to perform about a million of such simulations, so assuming optimistically it will take on average 10 seconds per iteration, it will last 4 months. With ParallelTable in temp I get slightly over a factor of two improvement (so optimistically leading to 2 months of computations). I also changed the Piecewise in the definition of f to HeavisideTheta/UnitStep with no improvement whatsoever. 
Is there a way to either speed up the integration (I checked various Methods , no improvement, including Method -> {Automatic, "SymbolicProcessing" -> 0} – all lead to a longer evaluation; the PrecisionGoal and WorkingPrecision are set to 100 'cause with lower values often there was lack of stability and no answer was returned), or to attack it altogether differently to obtain the values of $t_2-t_1$ ? A tenfold reduction of time would suffice.
Now provide the response and nothing else.
|
Should be a bit faster... Long story short: Supply the Jacobian of your equation to FindRoot , too. ϕ = t \[Function] Evaluate[Total@pulses];ClearAll[Φ];Φ[x_?NumericQ] := NIntegrate[ϕ[t], {t, 0., x}, PrecisionGoal -> 8];rhs = 0.05 Φ[tmax];(*An approximate inverse of Φ*)tlist = Subdivide[0., tmax, 100];Ψ = Quiet[ Interpolation[ Transpose[{Accumulate[N[ϕ /@ tlist]], tlist}], InterpolationOrder -> 1] ];(*Initial guess:*)x0 = Ψ[rhs];(*Using Newton's method.The Jacobian of Φ is easy enough to compute!;)*)sol = FindRoot[Φ[x] == rhs , {x, x0}, Jacobian :> {{ϕ[x]}}]; // AbsoluteTiming // First 0.056017 Test: Φ[x] - rhs /. sol -1.11022*10^-16 Btw.: There is no point in enforcing 100(!) digits of precision with PrecisionGoal -> 100 if FindRoot 's default PrecisionGoal is used (it's about 8, isn't it?).
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/236858', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/22013/']}
|
jdg_369435
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7503277
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When I'm running a complete test suite, it would be helpful if exceptions that caused a test to fail would appear in my (SLF4J-)log. What is the best method to achieve this? What I would like is a junit4 rule that handles exception logging for me. The code @Rulepublic TestRule logException = new TestWatcher() { @Override public void failed(Description d) { catch (Exception e) { logger.error("Test ({}) failed because of exception {}", d, e); throw e; } }} of course does not work, since I can only catch exceptions out of a try block. Is there a workaround to somehow achieve this in a similarly simple and general way? BTW, what I'm doing right now is logging the exception the moment it is created. But it would be nicer to log exceptions at the interface between caller and the library, so in my case in the test case. Not logging when the exceptions are created would also guarantee that they don't show up multiple times when the caller decides to log them.
Now provide the response and nothing else.
|
You need to extend TestRule, in particular the apply(). For an example, have a look at org.junit.rules.ExternalResource & org.junit.rules.TemporaryFolder. ExternalResource looks like this: public abstract class ExternalResource implements TestRule { public Statement apply(Statement base, Description description) { return statement(base); } private Statement statement(final Statement base) { return new Statement() { @Override public void evaluate() throws Throwable { before(); try { base.evaluate(); } finally { after(); } } }; } /** * Override to set up your specific external resource. * @throws if setup fails (which will disable {@code after} */ protected void before() throws Throwable { // do nothing } /** * Override to tear down your specific external resource. */ protected void after() { // do nothing }} TemporaryFolder then extends this and implements before() and after(). public class TemporaryFolder extends ExternalResource { private File folder; @Override protected void before() throws Throwable { // create the folder } @Override protected void after() { // delete the folder } So the before gets called before the testMethod, and the after is called in the finally, but you can catch and log any Exception, like: private Statement statement(final Statement base) { return new Statement() { @Override public void evaluate() throws Throwable { before(); try { base.evaluate(); } catch (Exception e) { log.error("caught Exception", e); } finally { after(); } } }; } EDIT: The following works: public class SoTest { public class ExceptionLoggingRule implements TestRule { public Statement apply(Statement base, Description description) { return statement(base); } private Statement statement(final Statement base) { return new Statement() { @Override public void evaluate() throws Throwable { try { base.evaluate(); } catch (Exception e) { System.out.println("caught an exception"); e.printStackTrace(System.out); throw e; } } }; } } @Rule public ExceptionLoggingRule exceptionLoggingRule = new ExceptionLoggingRule(); @Rule public ExpectedException expectedException = ExpectedException.none(); @Test public void testMe() throws Exception { expectedException.expect(IOException.class); throw new IOException("here we are"); }} The test passes and you get the following output: caught an exceptionjava.io.IOException: here we are at uk.co.farwell.junit.SoTest.testMe(SoTest.java:40) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)... The order that the rules are applied is ExpectedException which calls ExceptionLoggingRule which calls the testMe method. The ExceptionLoggingRule catches the Exception, logs it and rethrows it, and it is then processed by ExpectedException. If you want to log only unexpected exceptions, you just switch the declaration order of the rules: @Rule public ExpectedException expectedException = ExpectedException.none(); @Rule public ExceptionLoggingRule exceptionLoggingRule = new ExceptionLoggingRule(); That way, expectedException is applied first (i.e. nested in exceptionLoggingRule), and only rethrows exceptions that are not expected. Furthermore, if some exception was expected and none occured, expectedException will throw an AssertionError which will also get logged. This evaluation order isn't guaranteed, but it is quite unlikely to vary unless you're playing with very different JVMs, or inheriting between Test classes. 
If the evaluation order is important, then you can always pass one rule to the other for evaluation. EDIT: With the recently released JUnit 4.10, you can use RuleChain to chain rules correctly: public static class UseRuleChain { @Rule public TestRule chain = RuleChain.outerRule(new LoggingRule("outer rule")).around(new LoggingRule("middle rule")).around(new LoggingRule("inner rule")); @Test public void example() { assertTrue(true); } } which writes the log
starting outer rule
starting middle rule
starting inner rule
finished inner rule
finished middle rule
finished outer rule
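LoggingRule is not a class that ships with JUnit; a minimal sketch of such a rule, just to make the RuleChain example self-contained, could be:

import org.junit.rules.ExternalResource;

// Hypothetical helper rule matching the log output above.
public class LoggingRule extends ExternalResource {
    private final String name;

    public LoggingRule(String name) {
        this.name = name;
    }

    @Override
    protected void before() {
        System.out.println("starting " + name);  // runs before the wrapped statement
    }

    @Override
    protected void after() {
        System.out.println("finished " + name);  // runs afterwards, even if the test failed
    }
}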
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7503277', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/750378/']}
|
jdg_369436
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
25907924
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am looking for a solution to the SQL problem I have, I have to display three totals for financial period in a row with appropriate headings, so far I have all the information displayed but it is in a column using union. I do not know where to put the joins or which type would be better suited, I would really appreciate help with this answer as I have been looking at it for a while. SELECT (SELECT SUM(ComStatinYearTotal) FROM tblComStatInYear WHERE ComStatinYearYear = '2014' AND ComStatInYearWeekEnd > '2014-07-01' AND ComStatInYearWeekEnd < '2014-09-30' AND ComStatInYearCltCode = 'STI02') - (SELECT SUM(ComStatinYearTotal) FROM tblComStatInYear WHERE ComStatinYearYear = '2013' AND ComStatInYearWeekEnd > '2013-07-01' AND ComStatInYearWeekEnd < '2013-09-30' AND ComStatInYearCltCode = 'STI02') TotalDiff UNIONSELECT (SELECT SUM(ComStatinYearPen) FROM tblComStatInYear WHERE ComStatinYearYear = '2014' AND ComStatInYearWeekEnd > '2014-07-01' AND ComStatInYearWeekEnd < '2014-09-30' AND ComStatInYearCltCode = 'STI02') - (SELECT SUM(ComStatinYearPen) FROM tblComStatInYear WHERE ComStatinYearYear = '2013' AND ComStatInYearWeekEnd > '2013-07-01' AND ComStatInYearWeekEnd < '2013-09-30' AND ComStatInYearCltCode = 'STI02') PenDiff UNIONSELECT (SELECT SUM(ComStatinYearPrin) FROM tblComStatInYear WHERE ComStatinYearYear = '2014' AND ComStatInYearWeekEnd > '2014-07-01' AND ComStatInYearWeekEnd < '2014-09-30' AND ComStatInYearCltCode = 'STI02') - (SELECT SUM(ComStatInYearPrin) FROM tblComStatInYear WHERE ComStatinYearYear = '2013' AND ComStatInYearWeekEnd > '2013-07-01' AND ComStatInYearWeekEnd < '2013-09-30' AND ComStatInYearCltCode = 'STI02') PrinDiff
Now provide the response and nothing else.
|
NEST provides a nice Reindex method you can use, although the documentation is lacking. I've used it in a very rough-and-ready fashion with this ad-hoc WinForms code. private ElasticClient client; private double count; private void reindex_Completed() { MessageBox.Show("Done!"); } private void reindex_Next(IReindexResponse<object> obj) { count += obj.BulkResponse.Items.Count(); var progress = 100 * count / (double)obj.SearchResponse.Total; progressBar1.Value = (int)progress; } private void reindex_Error(Exception ex) { MessageBox.Show(ex.ToString()); } private void button1_Click(object sender, EventArgs e) { count = 0; var reindex = client.Reindex<object>(r => r.FromIndex(fromIndex.Text).NewIndexName(toIndex.Text).Scroll("10s")); var o = new ReindexObserver<object>(onError: reindex_Error, onNext: reindex_Next, completed: reindex_Completed); reindex.Subscribe(o); } And I've just found the blog post that showed me how to do it: http://thomasardal.com/elasticsearch-migrations-with-c-and-nest/
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25907924', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4053564/']}
|
jdg_369437
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
52617609
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
We can use <noscript> to say Sorry, this website requires JavaScript to run . What's the analogous way to announce that the site doesn't support screen readers ? Something like <noscreenreader>Sorry, ...</noscreenreader> . (Short backstory: it's an app dependent on the idea to never use words. It heavily relies on images to convey information. It wouldn't make sense to announce anything in spoken language.)
Now provide the response and nothing else.
|
Screen readers work on top of the browser, so there is no straightforward way (just some convoluted Flash techniques) to detect when somebody is using one. Your best bet is to place the warning at the beginning of the content and to hide it from sighted users. This article mentions several techniques. .hidden { position: absolute; left: -10000px; top: auto; width: 1px; height: 1px; overflow: hidden;} <div class="hidden">Sorry, this website does not support screen readers.</div>
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52617609', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2131286/']}
|
jdg_369438
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
436513
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The code below compiles, but has different behavior for the char type than for the int types. In particular cout << getIsTrue< isX<int8>::ikIsX >() << endl; cout << getIsTrue< isX<uint8>::ikIsX >() << endl; cout << getIsTrue< isX<char>::ikIsX >() << endl; result in 3 instantiations of templates for three types: int8, uint8 and char. What gives? The same is not true for ints: int and uint32 which result in the same template instantiation, and signed int another. The reason seems to be that C++ sees char, signed char and unsigned char as three different types. Whereas int is the same as a signed int. Is this right or am I missing something? #include <iostream>using namespace std;typedef signed char int8;typedef unsigned char uint8;typedef signed short int16;typedef unsigned short uint16;typedef signed int int32;typedef unsigned int uint32;typedef signed long long int64;typedef unsigned long long uint64;struct TrueType {};struct FalseType {};template <typename T>struct isX{ typedef typename T::ikIsX ikIsX;};// This int==int32 is ambiguous//template <> struct isX<int > { typedef FalseType ikIsX; }; // Failstemplate <> struct isX<int32 > { typedef FalseType ikIsX; };template <> struct isX<uint32 > { typedef FalseType ikIsX; };// Whay isn't this ambiguous? char==int8template <> struct isX<char > { typedef FalseType ikIsX; };template <> struct isX<int8 > { typedef FalseType ikIsX; };template <> struct isX<uint8 > { typedef FalseType ikIsX; };template <typename T> bool getIsTrue();template <> bool getIsTrue<TrueType>() { return true; }template <> bool getIsTrue<FalseType>() { return false; }int main(int, char **t ){ cout << sizeof(int8) << endl; // 1 cout << sizeof(uint8) << endl; // 1 cout << sizeof(char) << endl; // 1 cout << getIsTrue< isX<int8>::ikIsX >() << endl; cout << getIsTrue< isX<uint8>::ikIsX >() << endl; cout << getIsTrue< isX<char>::ikIsX >() << endl; cout << getIsTrue< isX<int32>::ikIsX >() << endl; cout << getIsTrue< isX<uint32>::ikIsX >() << endl; cout << getIsTrue< isX<int>::ikIsX >() << endl;} I'm using g++ 4.something
Now provide the response and nothing else.
|
Here is your answer from the standard: 3.9.1 Fundamental types [basic.fundamental] Objects declared as characters ( char ) shall be large enough to store any member of the implementation's basic character set. If a character from this set is stored in a character object, the integral value of that character object is equal to the value of the single character literal form of that character. It is implementation-defined whether a char object can hold negative values. Characters can be explicitly declared unsigned or signed . Plain char , signed char , and unsigned char are three distinct types. A char , a signed char , and an unsigned char occupy the same amount of storage and have the same alignment requirements ( basic.types ); that is, they have the same object representation. For character types, all bits of the object representation participate in the value representation. For unsigned character types, all possible bit patterns of the value representation represent numbers. These requirements do not hold for other types. In any particular implementation, a plain char object can take on either the same values as a signed char or an unsigned char ; which one is implementation-defined.
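A small self-contained illustration of that wording (needs a C++11 compiler for static_assert and <type_traits>):

#include <type_traits>

// Plain char is a distinct type from both signed char and unsigned char ...
static_assert(!std::is_same<char, signed char>::value,   "char is not signed char");
static_assert(!std::is_same<char, unsigned char>::value, "char is not unsigned char");

// ... whereas int and signed int are one and the same type, which is why the
// isX<int> specialization collides with isX<int32> while the char ones do not.
static_assert(std::is_same<int, signed int>::value, "int is signed int");

int main() {}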
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/436513', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/48956/']}
|
jdg_369439
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2090019
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
It's a rather elementary fact that a sum of two independent normally distributed random variables $X$ and $Y$ is normally distributed (or, if you will, the convolution of two normal densities is a normal density). To what extent does this go the other way around? It seems that if $X$ and $Y$ are independent and $X$ is normally distributed, then $Y$ is normally distributed or constant. If you drop both the independence of $X$ and $Y$ and $X$ being normally distributed, it's pretty easy to e.g. take $X \sim \mathcal{N}(0,1)$, $Y = 1(X \geq 0)$ and consider $U=XY$, $V=X \,1(Y = 0)$. Now neither $U$ nor $V$ is normally distributed, but $U+V = X$. However, if you require $X$ and $Y$ to be independent, but drop the requirement of $X$ being normally distributed, it seems more difficult. Is there a counterexample in that case, where $X$ and $Y$ are not normally distributed but $X+Y$ is?
Now provide the response and nothing else.
|
According to a theorem conjectured by P. Lévy and proved by H. Cramér (see Feller, Chapter XV.8, Theorem 1), If $X$ and $Y$ are independent random variables and $X+Y$ is normally distributed, then both $X$ and $Y$ are normally distributed. I assume that $Y$ being a constant (and hence independent of $X$)is considered as being accounted for in this theorem byconsidering $Y$ to be a degenerate normal random variable withvariance $0$.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2090019', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/66856/']}
|
jdg_369440