Dataset columns:

- source: stringclasses (1 value)
- task_type: stringclasses (1 value)
- in_source_id: stringlengths (1 to 8)
- prompt: stringlengths (209 to 40.4k)
- gold_standard_solution: stringlengths (0 to 56.7k)
- verification_info: stringclasses (1 value)
- metadata: stringlengths (138 to 225)
- problem_id: stringlengths (9 to 10)
stackexchange
llm_judgeable_groundtruth_similarity
42576198
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is my code. I want to get both the data object and the target element in the onClick event. Can anyone help me?

```jsx
handleClick = (data) => {
  console.log(data);
}

<input type="checkbox" value={data.id} defaultChecked={false} onClick={this.handleClick.bind(null, data)} />
```

Now provide the response and nothing else.
What about using an arrow function in the onClick handler?

```jsx
handleClick = (e, data) => {
  // access to e.target here
  console.log(data);
}

<input
  type="checkbox"
  value={data.id}
  defaultChecked={false}
  onClick={(e) => this.handleClick(e, data)}
/>
```
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/42576198', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7245976/']}
jdg_376241
stackexchange
llm_judgeable_groundtruth_similarity
42923895
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Here is my program.

```java
public class Foo {
    public static void main(String[] args) {
        System.out.println((int) 2147483648l);
        System.out.println((int) 2147483648f);
    }
}
```

Here is the output.

```
-2147483648
2147483647
```

Why aren't 2147483648l and 2147483648f type cast to the same integer? Can you explain what is going on here, or what concept in Java I need to understand to predict the output of type casts like these? Now provide the response and nothing else.
These are examples of the Narrowing Primitive Conversion operation. In your first example, long to int:

> A narrowing conversion of a signed integer to an integral type T simply discards all but the n lowest order bits, where n is the number of bits used to represent type T. In addition to a possible loss of information about the magnitude of the numeric value, this may cause the sign of the resulting value to differ from the sign of the input value.

So your (int) 2147483648l is taking the 64 bits of the long:

```
00000000 00000000 00000000 00000000 10000000 00000000 00000000 00000000
```

...and dropping the top 32 bits entirely:

```
10000000 00000000 00000000 00000000
```

...and taking the remaining 32 bits as an int. Since the leftmost of those is now a sign bit (long and int are stored as two's complement), and since it happens to be set in your 2147483648l value, you end up with a negative number. Since no other bits are set, in two's complement, that means you have the lowest negative number int can represent: -2147483648.

The float to int example follows a more complex rule. The relevant parts for your value are:

> ...if the floating-point number is not an infinity, the floating-point value is rounded to an integer value V, rounding toward zero using IEEE 754 round-toward-zero mode (§4.2.3).
>
> ...[if] the value [is] too large (a positive value of large magnitude or positive infinity), [then] the result of the first step is the largest representable value of type int or long.

(But see the part of the spec linked above for the details.)

So since 2147483648f rounds to 2147483648, and 2147483648 is too large to fit in int, the largest value for int (2147483647) is used instead.

So in the long to int, it's bit fiddling; in the float to int, it's more mathematical.

In a comment you've asked:

> Do you know why both (short) 32768 and (short) 32768f evaluate to -32768? I was expecting the latter to evaluate to 32767.

Excellent question, and that's where my "see the part of the spec linked above for the details" above comes in. :-) (short) 32768f does, in effect, (short)(int)32768f: In the spec section linked above, under "A narrowing conversion of a floating-point number to an integral type T takes two steps:", it says

> In the first step, the floating-point number is converted either to a long, if T is long, or to an int, if T is byte, short, char, or int...

and then later in Step 2's second bullet:

> If T is byte, char, or short, the result of the conversion is the result of a narrowing conversion to type T (§5.1.3) of the result of the first step.

So in step one, 32768f becomes 32768 (an int value), and then of course (short)32768 does the bit-chopping we saw in long => int above, giving us a short value of -32768.
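A minimal snippet that reproduces all three cases discussed above (any standard JDK should print the same values):

```java
public class NarrowingDemo {
    public static void main(String[] args) {
        // long -> int: keep only the low 32 bits (bit-chopping)
        System.out.println((int) 2147483648L);   // -2147483648
        // float -> int: round toward zero, then clamp to Integer.MAX_VALUE
        System.out.println((int) 2147483648f);   // 2147483647
        // float -> short: first float -> int (gives 32768), then int -> short bit-chop
        System.out.println((short) 32768f);      // -32768
    }
}
```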
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/42923895', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1175080/']}
jdg_376242
stackexchange
llm_judgeable_groundtruth_similarity
1076
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: The pentagon and hexagon axioms in the definition of a symmetric monoidal category are one example that I was thinking of here; the axioms of a weak 2-category are another. I understand that it can be checked laboriously that these few coherence axioms are sufficient to show, e.g. in the first case, that all coherence conditions we want on associativity and commutativity to hold do, but this is rather tedious. Is there some other motivation for the choice of coherence axioms? Now provide the response and nothing else.
Yes, this is closely related to the theory of operads. Here is a very informal discussion of the case of monoidal categories. In particular I will intentionally blur the distinction between spaces and groupoids.

A (non-symmetric) operad $O$ is a gadget consisting of a bunch of spaces $O_n$, which we think of as parameterizing $n$-ary operations, together with structure that tells us how to compose operations. An algebra $X$ over an operad $O$ consists of a bunch of maps $O_n \times X^n \to X$ which are compatible with this composition structure. For example, the associative operad $A$ has $A_n = *$ for every $n$, so there's just one $n$-ary operation $X^n \to X$ for each $n$, and this makes $X$ into a monoid.

We would like to say that a monoidal category is a monoid object in categories, but this is too strict for most purposes. The problem in model category language is that the associative operad is not "cofibrant". What we need to do is find a "cofibrant replacement": an operad $B$ such that all the spaces $B_n$ are still contractible, but in which those composition structure maps which I glossed over are better behaved. An example is the operad $B$ formed from the associahedra. $B_2$ is still a point, but $B_3$ is an interval, and $B_4$ is a pentagon.

Now a $B$-algebra in categories consists of a category $C$ together with a functor $B_2 \times C \times C = C \times C \to C$, a functor $B_3 \times C \times C \times C \to C$, a functor $B_4 \times C^4 \to C$, etc. These functors are the monoidal product, the associator, and the pentagon identity respectively. There's nothing higher because the next bit of structure would be an "identity between identities", and we don't have any such thing in a category. But if we were defining a monoidal 2-category, the pentagon identities would be replaced by 2-morphisms called "pentagonators" and there would be a coherence condition coming from $B_5$.

Edit: I should emphasize that we did not obtain the operad $B$ from $A$ in any canonical way; $B$ was "pulled out of a hat". But the model category machinery ensures that if we had chosen a different cofibrant replacement $B'$, then the notions of $B$-algebra and $B'$-algebra would be equivalent. This notion is thus associated to $A$ in a canonical way; the familiar description is not canonical.
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/1076', 'https://mathoverflow.net', 'https://mathoverflow.net/users/344/']}
jdg_376243
stackexchange
llm_judgeable_groundtruth_similarity
5178830
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I use a package variable in a simple SELECT query statement in Oracle? Something like:

```sql
SELECT * FROM MyTable WHERE TypeId = MyPackage.MY_TYPE
```

Is it possible at all, or only when using PL/SQL (use SELECT within BEGIN/END)? Now provide the response and nothing else.
You can't. For a public package variable to be used in a SQL statement, you have to write a wrapper function to expose the value to the outside world:

```
SQL> create package my_constants_pkg
  2  as
  3    max_number constant number(2) := 42;
  4  end my_constants_pkg;
  5  /

Package created.

SQL> with t as
  2  ( select 10 x from dual union all
  3    select 50 from dual
  4  )
  5  select x
  6    from t
  7   where x < my_constants_pkg.max_number
  8  /
 where x < my_constants_pkg.max_number
           *
ERROR at line 7:
ORA-06553: PLS-221: 'MAX_NUMBER' is not a procedure or is undefined
```

Create a wrapper function:

```
SQL> create or replace package my_constants_pkg
  2  as
  3    function max_number return number;
  4  end my_constants_pkg;
  5  /

Package created.

SQL> create package body my_constants_pkg
  2  as
  3    cn_max_number constant number(2) := 42
  4    ;
  5    function max_number return number
  6    is
  7    begin
  8      return cn_max_number;
  9    end max_number
 10    ;
 11  end my_constants_pkg;
 12  /

Package body created.
```

And now it works:

```
SQL> with t as
  2  ( select 10 x from dual union all
  3    select 50 from dual
  4  )
  5  select x
  6    from t
  7   where x < my_constants_pkg.max_number()
  8  /

         X
----------
        10

1 row selected.
```
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5178830', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/511016/']}
jdg_376244
stackexchange
llm_judgeable_groundtruth_similarity
26020979
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Currently working on a universal SpriteKit project. I'll be supporting the following devices: iPhone 4/4s, iPhone 5/5c/5s, iPhone 6, iPhone 6+, iPad non-retina, iPad retina. I am confused on the iPhone part. I already have 4 versions of my background sprite for the 4 different screen resolutions of the iPhones, but which goes to which? I know the 3x is for the 6+, and I think the 5/5c/5s goes to the Retina 4 2x, but I do not know where the iPhone 4/4s and 6 go. Anyone know? Side note: when I create a Launch Image inside my xcassets file, I am shown these options, which basically cover all the devices I am supporting. Just wondering why this is not also the case when creating an Image Set. Also, how do you guys approach creating images/sprites for a universal application? Now that the new iPhone 6 and 6 Plus are out, I have 2 more resolutions to support, which is still confusing for me as I'm still a beginner. Now provide the response and nothing else.
This is a little confusing - here's how I understand it (this is in reference to the top image):

- 1x images are for the original iPhone through the 3GS - 'standard' resolution devices (3.5" screens)
- 2x images are for the iPhone 4 and 4S (3.5" Retina screens) and are also used for the iPhone 6.
- Retina 4 2x are for the iPhone 5 and 5s (4" Retina screens)
- 3x images are for the new iPhone 6+ (5.5" super-Retina [3x] screen)

I believe that the iPhone 6 (4.7" screen) will use the Retina 4 2x images, but I would have to test it.

> Side note, when I create a Launch Image inside my xcassets file, I am shown these options, which basically has all the device I am supporting. Just wondering why this is not also the case when creating an Image Set

If you compare the two images, the lower one has everything the upper one does, except for a 1x iPhone graphic. You don't need that if you're only supporting iOS 7 and above, since iOS 7 doesn't run on any non-Retina phone-form devices. To be honest, I don't understand why the top image has a 1x iPhone form graphic option - maybe because you checked the "iPhone" box in the sidebar?

> Also how do you guys approach creating images/sprites for a universal application

For most non-fullscreen images (like a logo), you really only have 3 resolutions to support - standard (1x), Retina (2x), and the iPhone 6+ (3x). These are simply different qualities of image, not really different sizes. So if you have a 10x10 image on a standard device, that would mean you need a 20x20 image on a Retina device and a 30x30 image on an iPhone 6+. On all devices, they would show up as a 10x10 image.

A great tool I used for managing different resolutions of icons is iConify. I create them at the highest size I need (30x30 [@3x] for an image I want to be 10x10 on a device), then save it as a png and resize copies to 20x20 [@2x] and 10x10 [standard]. A better solution would be to create and use vector graphics, which would resize better to any size.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/26020979', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2687665/']}
jdg_376245
stackexchange
llm_judgeable_groundtruth_similarity
153731
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: For the symmetric group, conjugacy classes and irreducible representations are both parametrized by Young diagrams, so there is a kind of "good" bijection between the two sets. For general finite groups see the MO discussion. Question: What are the finite groups where some "good" bijection(s) between conjugacy classes and irreducible representations are known? "Good" bijection is an informal "definition"; nevertheless I hope the example of S_n and the other examples listed below may convince you that the question makes sense. I think it is far too optimistic to expect one unique bijection for a general group, but it seems to me that for certain classes of groups there can be some set of "good" bijections. Let me briefly discuss below some properties which a "good" bijection may satisfy, and maybe discuss details in the next question.

Some examples:

1) The symmetric group S_n.
2) Z/2Z is naturally isomorphic to its dual, as is $Z/2Z \oplus Z/2Z$; see e.g. MO "fantastic properties of Z/2Z".
3) Generally, for abelian finite groups: among all set-theoretic bijections $G \to \hat G$, some are distinguished in that they are group isomorphisms. So we have a non-unique, but a class of, "good" bijections.
4) For GL(2,F_q) Paul Garrett writes: "conjugacy classes match in an ad hoc fashion with specific representations". (See here, table at page 11.)
5) G. Kuperberg describes the relation of the McKay correspondence to that kind of bijection for A_5 (or its central extension), see here.
6) If I understand correctly, here at MO D. Jordan mentions that a bijection exists for Coxeter groups. (I would be thankful for a detailed reference.)
7) Dihedral groups $D_{2n}$ - see the answer by Glasby below.
8) The finite Heisenberg group with $p^{2n+1}$ elements, also known as an extraspecial group - see the answer by Glasby below.
9) The quaternion group $Q_{8}$ - this can actually be seen as a particular example of the item above. Or note that it is a $Z/2Z$ central extension of $Z/2Z \oplus Z/2Z$, and $Z/2Z \oplus Z/2Z$ has a natural bijection as mentioned in item 2 above, and it is easy to extend it to $Q_8$.
10) It seems that for the Drinfeld double of a finite group (and probably more generally for "modular categories") some analog of a natural bijection is known. There is such a remark at page 5 of "Drinfeld Doubles for Finite Subgroups of SU(2) and SU(3) Lie Groups", R. Coquereaux, Jean-Bernard Zuber: "In other words, there is not only an equal number of classes and irreps in a double, there is also a canonical bijection between them."

There can be several properties which a "good" bijection may satisfy, at least for some "good" groups:

1) It respects the action of $Out(G)$. Actually the two sets are not isomorphic in general as $Out(G)$-sets (see MO21606), however there are many cases where they are isomorphic; see G. Robinson's MO-answer.
2) Reality/rationality constraints. Again, in general there is no correspondence (see MO), but there are some cases where the corresponding properties of classes and irreps agree - see J. Schmidt's answer to that question.

The two properties below are even more problematic:

3) It might be that the product on conjugacy classes has something to do with the tensor product of representations (at least for abelian groups we may require these two to fully agree).
4) If we think in terms of the "orbit method" ideology, and think of a conjugacy class as in some sense a perverse coadjoint orbit, we may hope that the structure of the conjugacy representation should somehow respect the "good" bijection. For example, for $S_n$ we proved that the irrep corresponding to the Young diagram "d" lives inside the conjugacy subrepresentation realized as functions on the conjugacy class parametrized exactly by "d". (See MO153561, MO153991 for some discussion of the conjugacy (adjoint) representation.)
5) For algebraic groups over finite fields, conjugacy classes and irreps are sometimes naturally divided into families (e.g. conjugacy classes are often parametrized by equations $F_{t_i}(x_k) = 0$ - changing "t" we get different conjugacy classes in the same "family"). So we may hope that a good bijection respects the families. (It works fine for the Heisenberg group, but for UT(4,p) I have met some problems.) Now provide the response and nothing else.
This is an interesting question, even though it is not well defined. Call a group "good" if it has a "good" bijection between its conjugacy classes and its irreducible complex representations. I agree with Alexander that the definition of a "good" bijection/group should be guided by classes of examples, but I prefer that a class of bijections/groups should be infinite.

There are families of good metacyclic groups. For example, if $n=2m+1$ is an odd integer, then the dihedral groups $D_{2n}=\langle a,b\mid a^2=b^n=1,\; a^{-1}ba=b^{-1}\rangle$ of order $2n$ are good. The conjugacy classes $\{b^j,b^{-j}\}$, $1\leq j\leq m$, $\{1\}$, and $\{a,ab,ab^2,\dots,ab^{n-1}\}$ correspond bijectively (I believe this is "good") to the irreducible representations $\rho_j$, $1\leq j\leq m$, $\sigma_0$, and $\sigma_1$, respectively, where $\rho_j(a)=\begin{pmatrix}0&1\\1&0\end{pmatrix}$, $\rho_j(b)=\begin{pmatrix}\zeta_n^j&0\\0&\zeta_n^{-j}\end{pmatrix}$, $\sigma_k(a)=\begin{pmatrix}(-1)^k\end{pmatrix}$, $\sigma_k(b)=\begin{pmatrix}1\end{pmatrix}$ and $\zeta_n=e^{2\pi i/n}$.

If an infinite family $G_1, G_2,\dots$ of groups is good, then you know a vast amount about each $G_n$ and can likely produce a formula writing $|G_n|$ as a sum of the squares of the degrees of the irreducible representations. For $D_{2n}$ this is $2n=4m+2=m\times 2^2+2\times 1^2$. If $G_n$ is an extraspecial group of (odd) order $p^{1+2n}$ and exponent $p$, then the formula is $p^{1+2n}=(p-1)\times(p^n)^2+p^{2n}\times 1^2$. Perhaps the existence of such a formula should be part of the elusive definition of "good".

Addition: Yes Alexander, you are correct, the extraspecial groups $G_n$ of order $p^{1+2n}$ and odd exponent $p$ are "good". To describe a "good" bijection I need some notation. Let $f_n\colon V\times V\to\mathbb{F}_p$ be a nondegenerate symplectic form on $V=\mathbb{F}_p^{2n}$. Multiplication in $G_n=V\times \mathbb{F}_p$ is given by $(v_1,\lambda_1)(v_2,\lambda_2)=(v_1+v_2,\lambda_1+\lambda_2+{\frac12}f_n(v_1,v_2))$, or by the matrices you indicate. The conjugacy classes are as follows: the $p$ one-element (central) classes $Z_\lambda:=\{(0,\lambda)\}$, $\lambda\in\mathbb{F}_p$, and the $p^{2n}-1$ classes $C_v:=\{(v,\lambda)\mid \lambda\in\mathbb{F}_p\}$ where $0\neq v\in V$. The irreducible representations are also easy. The trivial degree-1 representation corresponds to the class $Z_0$ containing the identity element. The $p^{2n}-1$ nontrivial degree-1 representations correspond to the $p$-element classes $C_v$. The remaining $p-1$ irreducibles of degree $p^n$ correspond to the $p-1$ central conjugacy classes $Z_\lambda$, with $0\neq\lambda\in\mathbb{F}_p$.

Fix a maximal totally isotropic subspace $W$ of $V$. By Witt's theorem $|W|=p^n$. Then $A:=W\times\mathbb{F}_p$ is a maximal abelian subgroup of $G_n$ of index $p^n$. Let $\sigma_\lambda$ be the 1-dimensional representation of $A$, with kernel $W$, mapping $(0,1)$ to $e^{2\pi i\lambda/p}$. The induced representations $\rho_\lambda={\rm Ind}_A^{G_n}(\sigma_\lambda)$ are irreducible of degree $p^n$. (A direct calculation shows $\langle\rho_\lambda,\rho_\lambda\rangle=1$. Choosing a different $f_n$, or a different maximal totally isotropic subspace $W'$, gives equivalent representations $\rho'_\lambda$. The $W$s are permuted by ${\rm Aut}(G_n)$.) This is a "good" bijection, as identifying $V$ with its dual seems allowed.
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/153731', 'https://mathoverflow.net', 'https://mathoverflow.net/users/10446/']}
jdg_376246
stackexchange
llm_judgeable_groundtruth_similarity
1432963
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What are the realistic outcomes of programmer bugs pertaining to pointers? What 'bad effects' happen when programmers create pointer bugs? Practical examples with code are preferable. Now provide the response and nothing else.
Things that can go wrong when pointers are misused:

- Memory leaks - You allocate a pointer in a method and then let it go out of scope without properly deallocating it. The pointer to the memory on the heap is now lost, but the memory remains allocated. Freeing this memory is now extremely difficult. More info from Wikipedia.
- Access violations - You create a pointer that points at a memory address that you do not have access to, or that does not exist. Pointers are just integers after all, and can be manipulated like any other number. When you attempt to dereference your invalid pointer, your program will halt. More info from Wikipedia.
- Null pointer errors - This is a special case of an access violation. The proper way to "park" a pointer, so that it doesn't point at anything in particular, is to set its value to zero or null. Attempting to dereference a null pointer will halt your program. More info from Wikipedia.
- Buffer overflows - You allocate a pointer for a character buffer of 30 characters. You then proceed to stream user input (from a socket, file, console, etc.) into this buffer. If you fail to properly implement buffer bounding checks, then your program could potentially put more than 30 characters into the buffer. This will corrupt any data stored adjacent to the buffer in memory and possibly expose you to a malicious code attack. More info from Wikipedia.
- Memory corruption - A pointer is just an integer that contains the memory address of something it points to. As an integer, pointer arithmetic can be used to manipulate the pointer's value in all sorts of interesting ways. Subtle bugs can develop if the pointer calculations go wrong. The pointer will now point to some unknown location in memory, and anything could happen when it is dereferenced.
- Null-terminated string problems - These bugs occur when string library functions that expect null-terminated strings are fed character pointers that are not null terminated. The string library functions will continue to process characters, one at a time, until a null is found -- wherever that may be. A joke best illustrates this bug.
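A minimal C sketch of two of these failure modes (illustrative only; the function names are made up for the example):

```c
#include <stdlib.h>
#include <string.h>

void leak_example(void) {
    char *buf = malloc(64);   /* heap allocation */
    strcpy(buf, "hello");     /* use it ... */
    /* no free(buf): the only pointer to the block is lost when the
       function returns, so the memory is leaked */
}

void null_deref_example(void) {
    int *p = NULL;            /* a "parked" pointer */
    *p = 42;                  /* dereferencing NULL is undefined behaviour;
                                 on most systems this halts the program */
}
```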
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1432963', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/170339/']}
jdg_376247
stackexchange
llm_judgeable_groundtruth_similarity
716990
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If $n\times n$ matrix $A$ has eigenvalues $1,-1$ and $n\times n$ matrix $B$ also has eigenvalues $1,-1$, can I then say something about eigenvalues of $AB$ and $BA$? Now provide the response and nothing else.
In general, you can almost never predict the eigenvalues of a product based on the eigenvalues of the matrices you are multiplying together. For example, consider the matrices
$$A = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \qquad B = \begin{bmatrix} -1 & 1 \\ 0 & 1 \end{bmatrix} \qquad C = \begin{bmatrix} -1 & 0 \\ 1 & 1\end{bmatrix}$$
which all have eigenvalues $\pm 1$. Then
$$AB = \begin{bmatrix} -1 & 1 \\ 0 & -1 \end{bmatrix}$$
has eigenvalue $-1$, $A^2 = I$ has eigenvalue $1$, and
$$BC = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}$$
does not have $\pm 1$ as an eigenvalue. The only glaring exception to this is $0$. If $0$ is an eigenvalue of either $A$ or $B$ then it is also an eigenvalue of $AB$. This is because a matrix is singular if and only if it has $0$ as an eigenvalue.
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/716990', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/76431/']}
jdg_376248
stackexchange
llm_judgeable_groundtruth_similarity
40134313
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a calculated column in a Pandas DataFrame which needs to be assigned based upon a condition. For example:

```python
if(data['column_a'] == 0):
    data['column_c'] = 0
else:
    data['column_c'] = data['column_b']
```

However, that returns an error: ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). I have a feeling this has something to do with the fact that it must be done in a matrix style. Changing the code to a ternary statement doesn't work either:

```python
data['column_c'] = 0 if data['column_a'] == 0 else data['column_b']
```

Anyone know the proper way to achieve this? Using apply with a lambda? I could iterate via a loop, but I'd rather keep this the preferred Pandas way. Now provide the response and nothing else.
You can do:

```python
data['column_c'] = data['column_a'].where(data['column_a'] == 0, data['column_b'])
```

This is vectorised. Your attempts failed because the comparison with if doesn't understand how to treat an array of boolean values, hence the error.

Example:

```
In [81]:
df = pd.DataFrame(np.random.randn(5,3), columns=list('abc'))
df
Out[81]:
          a         b         c
0 -1.065074 -1.294718  0.165750
1 -0.041167  0.962203  0.741852
2  0.714889  0.056171  1.197534
3  0.741988  0.836636 -0.660314
4  0.074554 -1.246847  0.183654

In [82]:
df['d'] = df['b'].where(df['b'] < 0, df['c'])
df
Out[82]:
          a         b         c         d
0 -1.065074 -1.294718  0.165750 -1.294718
1 -0.041167  0.962203  0.741852  0.741852
2  0.714889  0.056171  1.197534  1.197534
3  0.741988  0.836636 -0.660314 -0.660314
4  0.074554 -1.246847  0.183654 -1.246847
```
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40134313', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/691816/']}
jdg_376249
stackexchange
llm_judgeable_groundtruth_similarity
48032262
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Consider my simple class:

```python
class stud():
    def __init__(self, a, b):
        self.name = a
        self.mark = b
        self.message = self.name + ' ' + str(self.mark)

s1 = stud('student1', 40)
print s1.message    # ---> output: student1 40

s1.name = 'student2'
print s1.message    # ---> output: student1 40, but I expected student2 40
```

My question here is: why, when I printed self.message [after modifying the name attribute of the object], did it print the old value? I know the init method is called only once during object creation and the name attribute is set to the value 'student1' at that time. But I am changing it on the next line, so shouldn't printing self.message again list the new value? Why is self.message not updating to the modified value? Now provide the response and nothing else.
Trace the execution.

```python
s1 = stud('student1', 40)
```

This sets

- s1.name to "student1"
- s1.mark to 40
- s1.message to "student1 40"

Think of the attribute values as boxes. Each of the three boxes has a value in it. Two of them hold strings. One of them holds a number. They will hold those values until they are explicitly assigned to. In particular, the attribute message has NO IDEA how it got its value. It has no idea it got its value by concatenating the name attribute and the mark attribute. All it knows is that the value it has is "student1 40".

```python
print s1.message
```

This outputs student1 40 (no surprise).

```python
s1.name = 'student2'
```

You updated one of the three attributes. But you did not update the others.

```python
print s1.message
```

Since you never explicitly changed s1.message, this still outputs student1 40.

The point is that the value of s1.message is just a string. There is no invisible computation that automatically updates it whenever the expression that initialized it gets changed. Other languages work that way perhaps, but Python does not. If you want that behavior, then do this:

```python
class stud():
    def __init__(self, a, b):
        self.name = a
        self.mark = b

    @property
    def message(self):
        return self.name + ' ' + str(self.mark)
```
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48032262', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6524447/']}
jdg_376250
stackexchange
llm_judgeable_groundtruth_similarity
308173
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've long been using Vim as my main editor and have never touched an IDE since. This works great for most of the programming languages on the market. When it comes to C, though, I still feel limited to simple projects, because writing makefiles is too cumbersome. How does the "Unix as IDE" philosophy deal with makefiles? Is there a tool I'm not aware of that does that particular job from the command line, or is everyone just writing the makefiles themselves? Now provide the response and nothing else.
There are quite a few tools around to generate makefiles. The two most common ones are CMake and Automake; both of these ask you to describe the components of your project and the desired output, and generate makefiles for you. This is no doubt a matter of opinion, but you'll probably find CMake easier to get to grips with; if you ever need to cross-compile though, you'll end up needing Automake (and Autoconf).

For simple projects, the built-in rules provided with GNU Make can help quite a bit; for example, to build a project consisting of two source files, a.c and b.c, the following Makefile works:

```makefile
all: a

a: a.o b.o
```

Running make will figure out that a.c and b.c need to be compiled, and linked to produce a...

(As AProgrammer points out, the built-in rules only go so far, and your makefile needs to specify all the relationships between files, including your project's headers; you'll quickly end up reaching for other tools to help manage dependencies etc.)
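For comparison, here is roughly what the same two-file project looks like with CMake (a sketch only; it assumes a reasonably recent CMake, 3.10 or later):

```cmake
# CMakeLists.txt
cmake_minimum_required(VERSION 3.10)
project(a C)

# CMake generates the build rules, including header dependency tracking
add_executable(a a.c b.c)
```

You would then typically run mkdir build && cd build && cmake .. && make to generate the makefiles and build.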
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/308173', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/61175/']}
jdg_376251
stackexchange
llm_judgeable_groundtruth_similarity
12414683
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to replicate the IF function from MySQL in PostgreSQL. The syntax of the IF function is IF(condition, return_if_true, return_if_false). I created the following function:

```sql
CREATE OR REPLACE FUNCTION if(boolean, anyelement, anyelement) RETURNS anyelement AS $$
BEGIN
  CASE WHEN ($1) THEN
    RETURN ($2);
  ELSE
    RETURN ($3);
  END CASE;
EXCEPTION WHEN division_by_zero THEN
  RETURN ($3);
END;
$$ LANGUAGE plpgsql;
```

It works well for most things, like if(2>1, 2, 1), but it raises an error for if( 5/0 > 0, 5, 0): fatal error, division by zero. In my program I can't check the denominator, as the condition is provided by the user. Is there any way around this? Maybe we could change the first parameter from boolean to something else, so that in that case the function would work because it would catch and return on the exception. Now provide the response and nothing else.
PostgreSQL is following the standard

This behaviour appears to be specified by the SQL standard. This is the first time I've seen a case where it's a real problem, though; you usually just use a CASE expression or a PL/PgSQL BEGIN ... EXCEPTION block to handle it.

MySQL's default behaviour is dangerous and wrong. It only works that way to support older code that relies on this behaviour. It has been fixed in newer versions when strict mode is active (which it absolutely always should be) but unfortunately has not yet been made the default. When using MySQL, always enable STRICT_TRANS_TABLES or STRICT_ALL_TABLES. ANSI-standard zero division is a pain sometimes, but it'll also protect against mistakes causing data loss.

SQL injection warning, consider re-design

If you're executing expressions from the user then you quite likely have SQL injection problems. Depending on your security requirements you might be able to live with that, but it's pretty bad if you don't totally trust all your users. Remember, your users could be tricked into entering the malicious code from elsewhere.

Consider re-designing to expose an expression builder to the user and use a query builder to create the SQL from the user expressions. This would be much more complicated, but secure. If you can't do that, see if you can parse the expressions the user enters into an abstract syntax, validate it before execution, and then produce new SQL expressions based on the parsed expression. That way you can at least limit what they can write, so they don't slip any nasties into the expression. You can also rewrite the expression to add things like checks for zero division. Finding (or writing) parsers for algebraic expressions isn't likely to be hard, but it'll depend on what kinds of expressions you need to let users write.

At minimum, the app needs to be using a role ("user") that has only SELECT privileges on the tables, is not a superuser, and does not own the tables. That'll minimise the harm any SQL injection will cause.

CASE won't solve this problem as written

In any case, because you currently don't validate and can't inspect the expression from the user, you can't use the SQL-standard CASE statement to solve this. For if( a/b > 0, a, b) you'd usually write something like:

```sql
CASE
  WHEN b = 0 THEN b
  ELSE CASE WHEN a/b > 0 THEN a ELSE b END
END
```

This explicitly handles the zero denominator case, but is only possible when you can break the expression up.

Ugly workaround #1

An alternative solution would be to get Pg to return a placeholder instead of raising an exception for division by zero, by defining a replacement division operator or function. This will only solve the divide-by-zero case, not others. I wanted to return 'NaN' as that's the logical result. Unfortunately, 'NaN' is greater than numbers, not less than, and you want a less-than or false-like result.

```
regress=# SELECT NUMERIC 'NaN' > 0;
 ?column?
----------
 t
(1 row)
```

This means we have to use the icky hack of returning NULL instead:

```sql
CREATE OR REPLACE FUNCTION div_null_on_zero(numeric, numeric) RETURNS numeric AS $$
VALUES (CASE WHEN $2 = 0 THEN NULL ELSE $1/$2 END)
$$ LANGUAGE 'SQL' IMMUTABLE;

CREATE OPERATOR @/@ (
  PROCEDURE = div_null_on_zero(numeric, numeric),
  LEFTARG = numeric,
  RIGHTARG = numeric
);
```

with usage:

```
regress=# SELECT 5 @/@ 0, 5 @/@ 0 > 0, CASE WHEN 5 @/@ 0 > 0 THEN 5 ELSE 0 END;
 ?column? | ?column? | case
----------+----------+------
          |          |    0
(1 row)
```

Your app can rewrite '/' in incoming expressions into @/@ or whatever operator name you choose pretty easily.

There's one pretty critical problem with this approach, and that's that @/@ will have different precedence to /, so expressions without explicit parentheses may not be evaluated as you expect. You might be able to get around this by creating a new schema, defining an operator named / in that schema that does your null-on-error trick, and then adding that schema to your search_path before executing user expressions. It's probably a bad idea, though.

Ugly workaround #2

Since you can't inspect the denominator, all I can think of is to wrap the whole thing in a DO block (Pg 9.0+) or PL/PgSQL function and catch any exceptions from the evaluation of the expression. Erwin's answer provides a better example of this than I did, so I've removed this. In any case, this is an awful and dangerous thing to do; do not do it. Your app needs to be fixed.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12414683', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/232649/']}
jdg_376252
stackexchange
llm_judgeable_groundtruth_similarity
394194
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The nLab has a lot of nice things to say about how you can use the internal logic of various kinds of categories to prove interesting statements using more or less ordinary mathematical reasoning. However, I can't find a single example on the nLab of what such a proof actually looks like. (The nLab has a frustrating lack of examples in general.) Can anyone supply me with some examples? I'd be particularly interested in the following kinds of examples: I've heard in a topos one can internalize the real numbers and, in the topos $\text{Sh}(X)$ of sheaves on a topological space, this reproduces the sheaf of continuous real-valued functions $X \to \mathbb{R}$. Moreover, one can internalize "finitely generated projective $\mathbb{R}$-module" and in $\text{Sh}(X)$ this reproduces real vector bundles on $X$. What can you prove about vector bundles this way? I'd also like to see examples of what you can do in the internal logic of Cartesian closed categories. This MO question is related but it doesn't really satisfy my curiosity. Now provide the response and nothing else.
Here is an arbitrary example from algebraic geometry. We'll prove the following well-known statement about $\mathcal{O}_X$-modules on reduced schemes $X$ by reducing to constructive linear algebra interpreted in the topos $\mathrm{Sh}(X)$ of sheaves on $X$:

Let $\mathcal{F}$ be an $\mathcal{O}_X$-module locally of finite type. Then $\mathcal{F}$ is locally free iff its rank is constant.

We can translate this statement into the internal language of $\mathrm{Sh}(X)$ by the following dictionary:

- In the internal language, the sheaf of rings $\mathcal{O}_X$ looks like an ordinary ring. Accordingly, $\mathcal{F}$ looks like an ordinary module over that ring.
- $\mathcal{F}$ is locally of finite type iff it is finitely generated from the internal point of view.
- $\mathcal{F}$ is locally free iff it is a free module from the internal point of view.
- Internally, we can define the rank of $\mathcal{F}$ as the minimal number of elements needed to generate $\mathcal{F}$. But constructively, the natural numbers may fail to have minima of arbitrary inhabited sets (see this enlightening blog post by Andrej Bauer), so this minimal number might not actually be an (internal) natural number, but rather an element of a suitable completion. Externally, the rank defined this way induces an upper semicontinuous function on $X$ (see nLab and the Mulvey reference therein); it is constant iff, internally, the minimal number of generators is an actual natural number.
- Finally, the scheme $X$ is reduced iff $\mathcal{O}_X$ looks like an ordinary reduced ring from the internal perspective. This in turn is equivalent to $\mathcal{O}_X$ being a so-called residue field from the internal point of view (i.e. a non-trivial ring with every non-unit being zero).

So the statement follows if we can give a constructive proof of the following linear algebra fact:

Let $A$ be a residue field and let $M$ be a finitely generated $A$-module. Then $M$ is free iff the minimal number of elements needed to generate $M$ as an $A$-module is an actual natural number.

The direction "$\Rightarrow$" is clear. For the direction "$\Leftarrow$", consider a minimal generating family $x_1,\ldots,x_n$ of $M$ (which exists by assumption). This family is linearly independent (and therefore a basis): let $\sum_i \lambda_i x_i = 0$. If any $\lambda_i$ were invertible, the family $x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n$ would also generate $M$, contradicting minimality. So each $\lambda_i$ is not invertible and thus zero (by assumption on $A$).
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/394194', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/232/']}
jdg_376253
stackexchange
llm_judgeable_groundtruth_similarity
46054233
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I place a button next to an input in Bootstrap? If I add some CSS or play with the Bootstrap classes randomly I know I can get this to work, but I want to know a good way to do this. I don't want to merge the button into the text field like input-group-btn does. I want the normal Bootstrap style for buttons.

```html
<div>
  <input class="form-control" />
  <button class="btn btn-primary">enter</button>
</div>
```

I get this:

```
|input          |   // occupies the full width
(button)            // button comes to bottom
```

What I want:

```
|input          | (button)   // same "line"
```

Now provide the response and nothing else.
Option 1 - form-inline class...

```html
<div class="form-inline">
  <input class="form-control">
  <button class="btn btn-primary">enter</button>
</div>
```

Also, you can use mr-1 (margin-right) to add a small margin between the input and the button: https://www.codeply.com/go/5XCUJIEvua

Option 2 - table-cell class...

Another option (if you want the input and button to be full width) is to use the d-table-cell class..

```html
<div class="d-table-cell w-100">
  <input class="form-control">
</div>
<div class="d-table-cell align-middle">
  <button class="btn btn-primary">enter</button>
</div>
```

Option 3 - d-flex class...

Finally, the easiest way may be to simply use d-flex, which sets display:flex

```html
<div class="d-flex">
  <input class="form-control mr-1">
  <button class="btn btn-primary">enter</button>
</div>
```

Demo of all 3 options: https://www.codeply.com/go/5XCUJIEvua
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/46054233', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6415828/']}
jdg_376254
stackexchange
llm_judgeable_groundtruth_similarity
45304810
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've written a dynamic form in which there is a main part and sub parts based on a type that's selected in the main part (widget.type). Showing and hiding the sub parts is done with an ngSwitch. The HTML of the form looks like this:

```html
<form class="widget-form cc-form" (ngSubmit)="saveChanges()" novalidate>
  <div class="forms-group">
    <label for="title" i18n="@@title">Titel</label>
    <input class="form-control" id="title" name="title" type="text" [(ngModel)]="widget.title" required />
  </div>
  <div class="forms-group">
    <label class="checkbox-label" for="show" i18n>
      <input id="show" name="show" type="checkbox" [(ngModel)]="widget.show" />
      <span>Titel tonen in app</span>
    </label>
  </div>
  <div class="forms-group">
    <label for="type" i18n="@@type">Type</label>
    <select class="form-control" id="type" name="type" [(ngModel)]="widget.type" required>
      <option value="text-widget" i18n="@@Text">Tekst</option>
      <option value="tasklist-widget" i18n="@@Tasklists">Takenlijst</option>
      <option value="image-widget" i18n="@@Text">Afbeelding(en)</option>
      <option value="video-widget" i18n="@@Video">Youtube</option>
      <option value="link-widget" i18n="@@Link">Link</option>
      <option value="contacts-widget" i18n="@@Contacts">Contactpersonen</option>
      <option value="attachment-widget" i18n="@@Attachments">Bijlage(n)</option>
    </select>
  </div>
  <ng-container [ngSwitch]="widget.type">
    <text-widget *ngSwitchCase="'text-widget'" [data]="widget"></text-widget>
    <tasklist-widget *ngSwitchCase="'tasklist-widget'" [data]="widget"></tasklist-widget>
    <image-widget *ngSwitchCase="'image-widget'" [data]="widget"></image-widget>
    <video-widget *ngSwitchCase="'video-widget'" [data]="widget"></video-widget>
    <link-widget *ngSwitchCase="'link-widget'" [data]="widget"></link-widget>
    <contacts-widget *ngSwitchCase="'contacts-widget'" [data]="widget"></contacts-widget>
    <attachment-widget *ngSwitchCase="'attachment-widget'" [data]="widget"></attachment-widget>
  </ng-container>
</form>
```

Every widget is its own component. The problem is that the form validation only checks the inputs from the main part and disregards the sub part (widget components). How can I make sure the input fields from the widgets are included in the validation? I tried adding an isValid() method to the widget components, but I couldn't get the instances of the components, probably because they are used in an ngSwitch. @ContentChild, @ContentChildren, @ViewChild etc. all returned undefined. Now provide the response and nothing else.
For future googlers: I had a similar issue to this, albeit with fewer child components, and after digging through @penleychan's aforementioned thread on the subject I found a little gem that solved this for me without the need to implement a custom directive.

```typescript
import { ControlContainer, NgForm } from '@angular/forms';

@Component({
  ....
  viewProviders: [{ provide: ControlContainer, useExisting: NgForm }],
})
```

This works for my nested form. It just needs to be added to the components which directly contain inputs.

https://github.com/angular/angular/issues/9600#issuecomment-522898551
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/45304810', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1859954/']}
jdg_376255
stackexchange
llm_judgeable_groundtruth_similarity
37059
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Has anyone used Lucene.NET rather than using the full text search that comes with sql server? If so I would be interested on how you implemented it. Did you for example write a windows service that queried the database every hour then saved the results to the lucene.net index? Now provide the response and nothing else.
Yes, I've used it for exactly what you are describing. We had two services - one for read, and one for write, but only because we had multiple readers. I'm sure we could have done it with just one service (the writer) and embedded the reader in the web app and services. I've used lucene.net as a general database indexer, so what I got back was basically DB id's (to indexed email messages), and I've also use it to get back enough info to populate search results or such without touching the database. It's worked great in both cases, tho the SQL can get a little slow, as you pretty much have to get an ID, select an ID etc. We got around this by making a temp table (with just the ID row in it) and bulk-inserting from a file (which was the output from lucene) then joining to the message table. Was a lot quicker. Lucene isn't perfect, and you do have to think a little outside the relational database box, because it TOTALLY isn't one, but it's very very good at what it does. Worth a look, and, I'm told, doesn't have the "oops, sorry, you need to rebuild your index again" problems that MS SQL's FTI does. BTW, we were dealing with 20-50million emails (and around 1 million unique attachments), totaling about 20GB of lucene index I think, and 250+GB of SQL database + attachments. Performance was fantastic, to say the least - just make sure you think about, and tweak, your merge factors (when it merges index segments). There is no issue in having more than one segment, but there can be a BIG problem if you try to merge two segments which have 1mil items in each, and you have a watcher thread which kills the process if it takes too long..... (yes, that kicked our arse for a while). So keep the max number of documents per thinggie LOW (ie, dont set it to maxint like we did!) EDIT Corey Trager documented how to use Lucene.NET in BugTracker.NET here .
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/37059', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2041/']}
jdg_376256
stackexchange
llm_judgeable_groundtruth_similarity
360960
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This often happens in my projects. Sometimes I have a part of code that is very similar to another part, yet a few lines make it complicated to keep the code clean and without duplication. Here is an example from a recent project. I am working on a program which can do 2 things: A will generate an image projected in some way, and B will generate several images in some other projection.

Choice A:

```cpp
void generateImage(width, height)
{
    vector<Pixel> pixels;
    for(x = 0; x < width; x++)
    {
        for(y = 0; y < height; y++)
        {
            Position pos = projectPixel(x, y);
            Pixel p = someCrazyFunc(pos);
            pixels.add(p);
        }
    }
}
```

Choice B:

```cpp
void generateImage(width, height, images)
{
    vector<Pixel> pixels;
    foreach(image in images)
    {
        for(x = 0; x < width; x++)
        {
            for(y = 0; y < height; y++)
            {
                Position pos = projectPixel2(x, y, image);
                Pixel p = someCrazyFunc(pos);
                pixels.add(p);
            }
        }
    }
}
```

As you can see, that's pretty much the same. The only difference with A is that we don't compute pos the same way, and that we need to generate several images. In practice, this is a bit different since there is more code. Basically, there are more loops because we need to compute all the pos of a loop before calling someCrazyFunc, and because we compute images by slices (first 1000th row, then 1000 more, etc). I'm mentioning this just to say that this isn't just 4 lines which get duplicated, but loops and function calls. I fixed it this way:

```cpp
void generateImage(width, height, images, projectPixelFunction)
{
    vector<Pixel> pixels;
    foreach(image in images)
    {
        for(x = 0; x < width; x++)
        {
            for(y = 0; y < height; y++)
            {
                Position pos = projectPixelFunction(x, y, image);
                Pixel p = someCrazyFunc(pos);
                pixels.add(p);
            }
        }
    }
}
```

So basically for A numImages will be equal to 1, and the old projectPixel function will take a dummy parameter. I don't think this is an acceptable solution to the problem. I honestly even prefer to have 2 mostly identical functions. But as the complexity grows, I'll either have a lot of duplicated code, or a lot of code with weird solutions. How do I get over this? What am I not considering? Edit: Just to be clear, I'm not against the use of functions as parameters; what I don't like is the use of a dummy parameter that has no purpose in the context of choice A. Now provide the response and nothing else.
DRY absolutely does not mean "use the minimum number of lines possible", or "do not write code that looks like other code". DRY refers to having code that does the same thing in two different places. But "same" doesn't mean "the code looks the same", but rather "it does the same conceptual task". How the code looks is irrelevant; what it does is what is important. Don't do the same thing in different places.

Sometimes you'll run into code that started out as "accidentally" the same, where two different concepts had the same basic implementations, and then the programmer added additional conditions to handle the differences. This isn't a huge problem, until the requirements start drifting and the code block becomes more and more conditions mixed up between one task and another.

In the opposite direction, there are always exactly one-bajillion different ways to express a given concept -- don't let that fool you into thinking different code means it is a different concept. Just because it's implemented with lambda predicates in one place and a for loop and condition branches in another does not mean you are not repeating yourself.
{}
{'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/360960', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/229116/']}
jdg_376257
stackexchange
llm_judgeable_groundtruth_similarity
10937350
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a folder full of files and they don't have an extension. How can I check file types? I want to check the file type and change the filename accordingly. Let's assume a function filetype(x) returns a file type like png. I want to do this:

```python
files = os.listdir(".")
for f in files:
    os.rename(f, f + filetype(f))
```

How do I do this? Now provide the response and nothing else.
There are Python libraries that can recognize files based on their content (usually a header / magic number) and that don't rely on the file name or extension. If you're addressing many different file types, you can use python-magic . That's just a Python binding for the well-established magic library. This has a good reputation and (small endorsement) in the limited use I've made of it, it has been solid. There are also libraries for more specialized file types. For example, the Python standard library has the imghdr module that does the same thing just for image file types. If you need dependency-free (pure Python) file type checking, see filetype .
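As a rough sketch of how the question's rename loop could look using the standard-library imghdr module mentioned above (it only recognises image formats; imghdr.what returns something like 'png' or None):

```python
import imghdr
import os

for f in os.listdir("."):
    if not os.path.isfile(f):
        continue
    kind = imghdr.what(f)            # e.g. 'png', 'jpeg', 'gif', or None
    if kind is not None:
        os.rename(f, f + "." + kind)
```

For arbitrary (non-image) files, python-magic's magic.from_file(path, mime=True) gives you a MIME type that you could map to an extension in much the same way.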
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/10937350', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1075734/']}
jdg_376258
stackexchange
llm_judgeable_groundtruth_similarity
267785
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $S=Spec(A)$ and $S'=Spec(B)$ be two affine schemes for some rings $A$ and $B$ such that there is a morphism of schemes $f:S'\rightarrow S$. For any $S$-scheme $X$, one can consider the fiber product $X\times_S S'$ of $X$ and $S'$ over $S$. If we assume that $X$ is given by a set of equations $(E)$ in $A$, what are the equations which define the $S'$-scheme $X\times_S S'$? Are they the equations in $B$ obtained by applying to $(E)$ the morphism of rings induced by $f$? Can this be written properly? Another construction which is even simpler: assuming that $Y$ is an $S'$-scheme, $Y$ can be considered as an $S$-scheme via $Y\longrightarrow S'\longrightarrow S$ (composing with $f$). I have two questions about this construction: first, in the same way as I did for fiber products, is it possible to find the equations which define $Y$ as an $S$-variety from those which define it as an $S'$-variety? Finally, something that seems reasonable to me: $Z$ is an $S$-scheme, and you consider the fiber product $T=Z\times_S S'$ as an $S'$-scheme. Is the scheme $T$, considered as an $S$-scheme via the previous construction, isomorphic to $Z$ as an $S$-scheme? I think it's the case just because of the definition of the fiber product, but I would like to be sure. Now provide the response and nothing else.
First paragraph: yes. Second paragraph: getting equations is going to be gross in general, although it depends on the nature of $S'$ and $S$. Third paragraph: no. Maybe this will be a motivating example. Let $S$ be Spec of an algebraically closed field, and let $S'$ be Spec of a much larger algebraically closed field (e.g. adjoin a transcendental to your field, then algebraically close that). If you take an $S'$ scheme and "think of it as an $S$-scheme" you will get a monstrosity that's not of finite type (e.g. in the last paragraph you are asserting that $\mathbb{C}[t]$ is isomorphic to $\overline{\mathbb{Q}}[t]$ as a $\overline{\mathbb{Q}}$-algebra). The lesson to take away is that base change is the reasonable, good functor, and the "forgetful functor" is the ugly one, the opposite of the way you'd think it would be on first glance.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/267785', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/14587/']}
jdg_376259
stackexchange
llm_judgeable_groundtruth_similarity
56284434
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to deploy a Next.js app on Firebase Hosting, but I don't understand which files to push to the server. When I run npm run build and push the build folder to Firebase, it gives an error that no index.html file was found. Here is the image of the output of the build folder. I have just created a simple component for testing purposes. Output of build command. Now provide the response and nothing else.
In package.json you need to add npm scripts for building and exporting, like:

```json
"scripts": {
  "dev": "next",
  "build": "next build",
  "start": "next start",
  "export": "next export"
},
```

And then you can run:

```
npm run build && npm run export
```

next build will build your project for shipping, and export will put your files ready for hosting on a static hosting server (like Firebase Hosting). npm run export will create an out/ directory and place all your files there, ready for uploading.

Note: If your app needs to generate dynamic pages at runtime, you can't deploy it as a static app. Read more
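To actually serve that out/ directory from Firebase Hosting, the hosting config needs to point at it; a minimal firebase.json along these lines should work (a sketch only, adjust to your project):

```json
{
  "hosting": {
    "public": "out",
    "ignore": ["firebase.json", "**/.*", "**/node_modules/**"]
  }
}
```

After that you deploy with the Firebase CLI (firebase deploy).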
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56284434', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7342834/']}
jdg_376260
stackexchange
llm_judgeable_groundtruth_similarity
1574843
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I wonder if there is a result on the unique factorization of manifolds. Call a topological manifold to be indecomposable if it is not homeomorphic to a product of manifolds of positive dimension. Is every manifold a unique (up to order) product of indecomposable ones? I couldn't find any statements on this simple question. Are there any results on this? Any result in different categories (smooth, complex, Riemannian or whatever) or with extra conditions is fine. [edit] The answer seems to be No in most cases. Can we impose strong conditions so that the answer is positive? Now provide the response and nothing else.
Nope. Consider lens spaces $L(p,q)$. They are all indecomposable, by investigation of the fundamental group. Then the main result of this paper is that $L(p,q) \times L(p,q)$ is a manifold $X_p$ which depends only on $p$! So, for instance, $L(p,1) \times L(p,1) \cong L(p,2) \times L(p,2)$, even though $L(p,1) \not\cong L(p,2)$; there are older examples of non-homeomorphic manifolds with diffeomorphic squares, too. I can't really think of a way to talk about unique factorization that this example doesn't break.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1574843', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/254198/']}
jdg_376261
stackexchange
llm_judgeable_groundtruth_similarity
389373
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have an application which requires a producer to send filenames to a consumer, and have the producer indicate to the consumer when the last filename has been sent and the end of file has been reached. For simplicity, in the following examples the producer is demonstrated with echo and printf, while the consumer is demonstrated with cat. I have tried to extrapolate the "here file" method without success, using <<EOF to indicate to the producer-wrapper (if such a thing exists) what to look for as an indication of end of file. If it worked, cat should filter EOF from the output.

Ex 1) input:
{
  echo "Hello World!"
  printf '\x04'
  echo "EOF"
} <<EOF |\
cat
output:
bash: warning: here-document at line 146 delimited by end-of-file (wanted `EOF')
Hello World!
EOF

Ex 2) input:
{
  echo "Hello World!"
  printf '\x04'
  echo "EOF"
} |\
cat <<EOF
output:
bash: warning: here-document at line 153 delimited by end-of-file (wanted `EOF')

Is it correct that the "here files" method for indicating a delimiter only works for static text, and not dynamically created text?

-- the actual application --

inotifywait -m --format '%w%f' /Dir | <consumer>

The consumer is waiting for files to be written to the directory /Dir. It would be nice if, when a file "/Dir/EOF" was written, the consumer would detect the logical end-of-file condition simply by writing the shell script as follows:

inotifywait -m --format '%w%f' /Dir |<</Dir/EOF <consumer>

-- In response to Giles answer --

Is it theoretically possible to implement

cat <<EOF
hello
world
EOF

as

SpecialSymbol="EOF"
{
  echo hello
  echo world
  echo $SpecialSymbol
} |\
while read Line; do
  if [[ $Line == $SpecialSymbol ]]
    break
  else
    echo $Line
  fi
done |\
cat

By theoretically possible I mean "would it support existing usage patterns and only enable extra usage patterns which had previously been illegal syntax?" - meaning no existing legal code would be broken. Now provide the response and nothing else.
For a pipe, the end of file is seen by the consumer(s) once all the producers have closed their file descriptor to the pipe and the consumer has read all the data. So, in: { echo foo echo bar} | cat cat will see end-of-file as soon as the second echo terminates and cat has read both foo\n and bar\n . There's nothing more for you to do. Things to bear in mind though is that if some of the commands on the left side of the pipe starts some background process, that background process will inherit a fd to the pipe (its stdout), so cat will not see eof until that process also dies or closes its stdout. As in: { echo foo sleep 10 & echo bar} | cat You see cat not returning before 10 seconds have passed. Here, you may want to redirect sleep 's stdout to something else like /dev/null if you don't want its (non)output to be fed to cat : { echo foo sleep 10 > /dev/null & echo bar} | cat If you want the writing end of the pipe to be closed before the last command in the subshell left of the | is run, you can close stdout or redirecting to that subshell in the middle of the subshell with exec , like: { echo foo exec > /dev/null sleep 10} | (cat; echo "cat is now gone") However note that most shells will still wait for that subshell in addition to the cat command. So while you'll see cat is now gone straight away (after foo is read), you'll still have to wait 10 seconds for the whole pipeline to finish. Of course, in that example above, it would make more sense to write it: echo foo | catsleep 10 <<ANYTHING...content...ANYTHING is a here-document, it's to make the stdin of command a file that contains the content . It wouldn't be useful there. \4 is byte that when read from a terminal makes data held by a terminal device be flushed to the application reading from it (and when there's no data, read() returns 0 which means end-of-file). Again, not of any use here.
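For the inotifywait case in the question, a here-document will not do this; the usual approach is to make the consumer itself stop on a sentinel filename. A rough sketch (the /Dir/EOF sentinel comes from the question, and process is a placeholder for the real consumer work):

inotifywait -m --format '%w%f' /Dir |
while IFS= read -r file; do
  if [ "$file" = "/Dir/EOF" ]; then
    break                 # sentinel seen: treat it as logical end of file
  fi
  process "$file"         # replace with the real consumer
done

Once the loop exits, the read end of the pipe is closed, so inotifywait will be killed by SIGPIPE the next time it tries to write an event.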
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/389373', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/247579/']}
jdg_376262
stackexchange
llm_judgeable_groundtruth_similarity
5864272
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have done very little with encoding of Text. Truthfully, I don't really even know what it means exactly. For example, if I have something like: Dim myStr as String = "Hello" Is that 'encoded' in memory in a particular format? Does that format depend on what language I'm using? If I were in another country, like China, for example, and I had a string of Chinese (mandarin? My apologies if I'm using the wrong words here) would the following code (that I've used fine on English strings) still work the same? System.Text.UTF8Encoding encoding=new System.Text.UTF8Encoding();return encoding.GetBytes(str); Or would it lose all meaning when you convert that .Net string to a UTF8Encoding when that conversion isn't valid? Finally, I've worked with .Net for a few years now and I've never seen, heard, or had to do anything with Encoding. Am I the exception, or is it not a common thing to do? Now provide the response and nothing else.
The .NET string class is encoding strings using UTF16 - that means 2 bytes per character (although it allows for special combinations of two characters to form a single 4 byte character, so called "surrogate pairs") . UTF8 on the other hand will use a variable number of bytes necessary to represent a particular Unicode character, i.e. only one byte for regular ASCII characters, but maybe 3 bytes for a Chinese character. Both encodings allow representing all Unicode characters, so there is always a mapping between them - both are different binary represenations (i.e for storing in memory or on disk) of the same (unicode) character set. Since not all Unicode characters were able to fit into the original 2 bytes reserved by UTF-16, the format also allows to denote a combination of two UTF-16 characters to form 4 byte characters - the so formed character is called a "surrogate" or surrogate pair and is a pair of 16-bit Unicode encoding values that, together, represent a single character. UTF-8 does not have this problem, since the number of bytes per Unicode character is not fixed. A good general overview over UTF-8, UTF-16 and BOMs can be gathered here . An excellent overview / introduction to Unicode character encoding is The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets
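A tiny illustration of the difference, using a string with both ASCII and Chinese characters (the byte counts assume the literal really contains those two Chinese characters):

string s = "Hello 你好";
byte[] utf8  = System.Text.Encoding.UTF8.GetBytes(s);    // 6 bytes for "Hello " + 3 bytes per Chinese character = 12 bytes
byte[] utf16 = System.Text.Encoding.Unicode.GetBytes(s); // 8 characters x 2 bytes = 16 bytes (Encoding.Unicode is UTF-16)
string back  = System.Text.Encoding.UTF8.GetString(utf8); // round-trips to "Hello 你好", nothing is lost

So converting a .NET string (UTF-16 in memory) to UTF-8 bytes and back is lossless for any language; the two encodings are just different binary representations of the same characters.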
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5864272', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/73381/']}
jdg_376263
stackexchange
llm_judgeable_groundtruth_similarity
4449382
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: First I should tell you that I've little knowledge of Objective-C or C#. So when one of my colleagues asked me whether there is anything like delegates in Objective-C, I wondered if such a concept existed in Objective-C. I guess the delegates we use in iPhone programming are not the same. C# delegates are function pointers, right? Such a facility would be nice to have while working with multiple views. Where can I find info? Now provide the response and nothing else.
Delegates in Objective-C are merely a concept, not some kind of implementation artifact (like in C#). A delegate in Objective-C (better: Cocoa) is basically an object, which is notified by whoever uses it as its "delegate" of certain events occuring. Delegates may also be asked to perform certain tasks on behalf of the host object. The interface a delegate is required to implement is often formalized by a protocol. @protocol ActionDelegate - (void) actionDidStart: (id) aSender;- (void) actionDidEnd: (id) aSender;@end@interface Action: NSObject { id<ActionDelegate> delegate;}@property (nonatomic,assign) id<ActionDelegate> delegate;@end Delegates in C#, on the other hand, are an implementation artifact. There is a dedicated delegate keyword to declare delegate types and to create actual delegate instances. class Action { delegate void ActionDidStartDelegate(Action sender); delegate void ActionDidEndDelegate(Action sender); ...} (my C# is a bit rusty, so the syntax may be off here, sorry; and in real life, one would probably use events in situations like the above rather than raw delegates). Basically, a C# delegate is akin to a Python method object. You might be able to use the new code block feature of Objective-C to emulate delegates. Not having used this feature (yet), I cannot comment on this. Another way to get something like that would be to use plain function pointers. typedef void (*callback_function)();- (void) doSomethingWithCallback: (callback_function) func { ... func();} And of course, you can always use the method often employed by Cocoa itself: use an object and an associated method selector: - (void) doSomethingWhenDonePerform: (SEL)aSelector onObject: (id) aReceiver { ... [aReceiver perform: aSelector];}
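Regarding the code-block option mentioned above, here is a rough, untested sketch of what a block-based callback can look like (the names are invented for illustration):

typedef void (^ActionCompletion)(BOOL success);

@interface Action : NSObject
- (void) startWithCompletion: (ActionCompletion) completion;
@end

@implementation Action
- (void) startWithCompletion: (ActionCompletion) completion {
    // ... do the actual work ...
    if (completion) completion(YES);   // notify the caller, much like a one-off delegate callback
}
@end

// caller side:
[action startWithCompletion: ^(BOOL success) {
    NSLog(@"action finished: %d", success);
}];

This is closer in spirit to a C# delegate (a callable you hand over) than to the Cocoa delegate pattern, which is an object implementing a protocol.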
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4449382', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/419261/']}
jdg_376264
stackexchange
llm_judgeable_groundtruth_similarity
22625
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am a beginner to time-series analysis. I have the model below; y is sales of the product and x is the tweet rate: $y_t=ay_{t-1}+by_{t-2}+...+cy_{t-m}+dx_t+ex_{t-1}+...+fx_{t-n}$ What is this model called? I guess it's called an AR model, but I am not sure since the dependent variable y is on the R.H.S. as well. How do I fix the lag periods $m$ and $n$? Can $x$ and $y$ have different lags? How can I use Python to build this model and also predict the sales for $t+1\ldots t+n$? Any solution for this without using rpy. Now provide the response and nothing else.
The model you have there is called an Autoregressive Distributed Lag (ARDL) Model. To be specific, \begin{equation}y_t=ay_{t-1}+by_{t-2}+...+cy_{t-m}+dx_t+ex_{t-1}+...+fx_{t-n}\end{equation}can be called an ARDL(m,n) model and we can write the model in slightly more compact form as:\begin{equation}y_{t} = \delta + \sum_{i=1}^{m} \alpha_{i} y_{t-i} + \sum_{j=0}^{n} \beta_{j} x_{t-j} + u_{t}\end{equation}where $u_{t} \sim IID(o, \sigma^{2})~ \forall~ t$ and in this case $\delta = 0$. The values of m and n do not have to be the same. That is, the lag length of the autoregressive term does not have to be equal to the lag length of the distributed lag term. Note also that it is possible to include a second (or more) distributed lag terms (for example, $z_{t-k}$).There are different ways of choosing the lag lengths and for a treatment of this issue, I refer you to Chapter 17 of Damodar Gujarati and Dawn Porter's Basic Econometrics (5th ed). To build a model like this in python, it might be worth checking out statsmodels.tsa as well as the other packages mentioned in the other answers.
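A rough Python sketch of fitting, say, an ARDL(2,1) by building the lagged regressors yourself (the column names sales and tweets are placeholders; recent statsmodels versions also ship a dedicated ARDL class, but the manual route below only needs plain OLS):

import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({"sales": y, "tweets": x})      # y, x: your two series
df["sales_l1"]  = df["sales"].shift(1)
df["sales_l2"]  = df["sales"].shift(2)
df["tweets_l1"] = df["tweets"].shift(1)
df = df.dropna()

X = sm.add_constant(df[["sales_l1", "sales_l2", "tweets", "tweets_l1"]])
fit = sm.OLS(df["sales"], X).fit()
print(fit.summary())

Lag lengths can then be compared by refitting with different m and n and keeping the specification with the lowest AIC/BIC (fit.aic, fit.bic), and a one-step-ahead forecast is just the fitted equation evaluated at the latest observed lags.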
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/22625', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/9126/']}
jdg_376265
stackexchange
llm_judgeable_groundtruth_similarity
9760
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I want to fit some data to a model of exponential decay using the FindFit function: data = {{0, 78}, {24, 64.5}, {48, 70.5}, {96, 54}, {144, 64.5}, {216, 3}, {336, 0}, {696, 0}};model = data[[1, 2]]*Exp[-k1*t];fit1 = FindFit[data, model, k1, t]fit2 = FindFit[data, {model, k1 > 0}, k1, t]modelf1 = Function[{t}, Evaluate[model /. fit1]];modelf2 = Function[{t}, Evaluate[model /. fit2]];Plot[#[t], {t, 0, 696}, Epilog -> Map[Point, data], PlotRange -> All] & /@ {modelf1, modelf2} Interestingly, the model with no specified constraint on k1 finds a much better solution than the constrained model, but the solution of the unconstrained problem falls within the range of the constrained one. Here is the output: {k1 -> 0.00512571}{k1 -> 1.01979} Why isn't the solution to the constrained problem at least as good as the solution of the unconstrained one? Now provide the response and nothing else.
Different methods are used for constrained and unconstrained problems. Compare for example data = {{0, 78}, {24, 64.5}, {48, 70.5}, {96, 54}, {144, 64.5}, {216, 3}, {336, 0}, {696, 0}};model = data[[1, 2]]*Exp[-k1*t];methods = {Automatic, "ConjugateGradient", "Gradient", "LevenbergMarquardt", "Newton", "QuasiNewton", {"NMinimize", Method -> "NelderMead"}, {"NMinimize", Method -> "DifferentialEvolution"}, {"NMinimize", Method -> "SimulatedAnnealing"}, {"NMinimize", Method -> "RandomSearch"} };fit1 = FindFit[data, model, k1, t, Method -> #] & /@ methods;fit2 = (FindFit[data, {model, k1 > 0}, k1, t, Method -> #] & /@ methods) \ /. FindFit[__] -> "NA";TableForm[Transpose@{fit1, fit2}, TableAlignments -> Left, TableHeadings -> {methods, {"Unconstrained", "Constrained"}}] Note the warnings of the type Method -> XXX can only be used for unconstrained problems . We see that NMinimize was the only method suitable for this constrained problem. For the unconstrained problem the "Gradient" method was probably used. Oh, and if you want the table LeftAlignment to actually work, check out this great answer .
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/9760', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/1319/']}
jdg_376266
stackexchange
llm_judgeable_groundtruth_similarity
22909099
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to build a basic app that displays newstories (currently hardcoded into the Strings.xml file). But it keeps crashing on launch, even though there are no errors being shown in the code. Here's the Java part of my code (I can also supply any other files, or even a zipped copy of my workspace if that'll help anyone) package cara.app;import android.os.Build; import android.os.Bundle; import android.annotation.SuppressLint; import android.app.Activity; import android.content.res.Resources; import android.view.Menu; import android.view.View; import android.widget.Button; import android.widget.TextView; public class MainActivity extends Activity { final Resources res = getResources(); final TextView textView = new TextView(null); final Button next = (Button) findViewById(R.id.LinearLayout1); @SuppressLint("NewApi")@Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); final Button button = (Button) findViewById(R.id.LinearLayout1); button.setText("" + res.getString(R.string.News) + "\n\n" + res.getString(R.string.Story_Title_1) + "\n\n" + res.getString(R.string.Story_Title_2) + "\n\n" + res.getString(R.string.Story_Title_3) + "\n\n" + res.getString(R.string.Story_Title_4) + "\n\n" + res.getString(R.string.Story_Title_5)); button.setOnClickListener(new View.OnClickListener() { public void onClick(View v) { OpenNews(); } }); // Make sure we're running on Honeycomb or higher to use ActionBar APIs if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.HONEYCOMB) { // Show the Up button in the action bar. getActionBar().setDisplayHomeAsUpEnabled(true); }}public void selfDestruct(View view) { // Boom }@SuppressLint("NewApi")@Overridepublic boolean onCreateOptionsMenu(Menu menu) { // Inflate the menu; this adds items to the action bar if it is present. getMenuInflater().inflate(R.menu.main, menu); return true;}public void OpenNews(){ // Create the text view textView.setTextSize(40); textView.setText(res.getString(R.string.Story_1)); // Set the text view as the activity layout setContentView(textView); next.setText("Next"); next.setOnClickListener(new View.OnClickListener() { public void onClick(View v) { int itemNumber = 1; if(itemNumber == 1){ textView.setText(res.getString(R.string.Story_2)); itemNumber = 2; } else if(itemNumber == 2){ textView.setText(res.getString(R.string.Story_3)); itemNumber = 3; } else if(itemNumber == 3){ textView.setText(res.getString(R.string.Story_4)); itemNumber = 4; } else if(itemNumber == 4){ textView.setText(res.getString(R.string.Story_5)); itemNumber = 5; } else if(itemNumber == 5){ textView.setText(res.getString(R.string.Story_1)); itemNumber = 1; } }});} } Now provide the response and nothing else.
private["ISH"] = private.HolidayName.str.contains("(?i)holiday|recess") The (?i) in the regex pattern tells the re module to ignore case. The reason why you were getting an error is because the Series object does not have the contains method; instead the Series.str attribute has the contains method. So you could avoid the error with: private["ISH"] = private.HolidayName.str.lower().str.contains("holiday|recess")
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/22909099', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3214902/']}
jdg_376267
stackexchange
llm_judgeable_groundtruth_similarity
48625164
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I bumped into a very strange problem, the studio compiles all the manifest permissions except one: <uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW"/> Tested in other compilers - the permissions successfully compiled.How can you solve this truly strange problem?My full list of permissions: <uses-permission android:name="android.permission.BIND_ACCESSIBILITY_SERVICE" /> <uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW"/> <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" /> <uses-permission android:name="android.permission.INTERNET" /> Now provide the response and nothing else.
Calling progressBar1.Value = i from a different thread results in the dreaded "cross-thread operation not valid" exception. The Progress class, on the other hand, dispatches the event to the synchronization context captured in the moment of construction: // simplified code, check reference source for actual codevoid IProgress<T>.Report(T value){ // post the processing to the captured sync context m_synchronizationContext.Post(InvokeHandlers, value);}private void InvokeHandlers(object state){ // invoke the handler passed through the constructor m_handler?.Invoke((T)state); // invoke the ProgressChanged event handler ProgressChanged?.Invoke(this, (T)state);} This ensures that all updates to progress bars, labels and other UI elements are done on a (one and only) GUI thread. So, it only makes sense to instantiate the Progress class outside of the background thread, inside a method which is called on a UI thread: void Button_Click(object sender, EventArgs e){ // since this is a UI event, instantiating the Progress class // here will capture the UI thread context var progress = new Progress<int>(i => progressBar1.Value = i); // pass this instance to the background task Task.Run(() => ReportWithProgress(progress));}async Task ReportWithProgress(IProgress<int> p){ for (int i = 0; i <= 100; i++) { await Task.Run(() => HeavyIO()); Console.WriteLine("Progress : " + i); p.Report(i); }}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48625164', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9200394/']}
jdg_376268
stackexchange
llm_judgeable_groundtruth_similarity
37328
Below is a question asked on the forum raspberrypi.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to install MySQL 5.6 in Raspbian 7 (wheezy). I've added the repo: wget http://dev.mysql.com/get/mysql-apt-config_0.3.7-1debian7_all.debsudo dpkg -i mysql-apt-config_0.3.7-1debian7_all.deb sudo apt-get update When I type in sudo apt-get install mysql-server-5.6 : Reading package lists... DoneBuilding dependency treeReading state information... DoneE: Unable to locate package mysql-server-5.6E: Couldn't find any package by regex 'mysql-server-5.6'Only mysql-server-5.1 & mysql-server-5.5 are available. Where have I gone wrong? Now provide the response and nothing else.
Checking on Raspbian 8 (jessie): > apt-cache search mysql-servermysql-server - MySQL database server (metapackage depending on the latest version)mysql-server-5.1 - MySQL database server binaries and system database setupmysql-server-5.5 - MySQL database server binaries and system database setupmysql-server-core-5.1 - MySQL database server binariesmysql-server-core-5.5 - MySQL database server binaries This is the same as wheezy (actually that's an upgraded to jessie system; if you have a new from scratch jessie 5.1 will not be there). So in fact 5.6 is not available. Note that Raspbian binaries are compiled for the ARMv6 platform -- you cannot use just any Debian package. The current Raspbian (jessie) is parallel to the current Debian anyway. But 5.6 is available for "stretch" , the current Debian testing branch, and as Peter Green notes in a comment, it has at least been built for Raspbian . Along the lines of the methodology described here , you can try and install a stretch package on a jessie system. I.e., add: deb http://archive.raspbian.org/raspbian/ stretch main To /etc/apt/sources.list . Then run apt-get update . Finally: apt-get install mysql-server-5.6 This requires various stretch dependencies and a few upgrades but is worth trying. It also is in theory possible to upgrade the whole system now (from jessie) if you change jessie to stretch in /etc/apt/sources.list , run apt-get update and then (this is the edge of the cliff...) apt-get dist-upgrade (which will probably take some time), but beware this has probably not been tested much by the masses yet. If you have a Pi 2, 5.6 is available for pure ARMv7 distros -- primarily Ubuntu (it is probably an alternative in 14.04 LTS). The default on Fedora 21 ARM is 5.6.26.
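If you do mix jessie and stretch sources, it may also be worth pinning stretch so a routine apt-get upgrade does not start dragging in the whole testing distribution; something along these lines in /etc/apt/preferences (my own habit, adjust to taste):

Package: *
Pin: release n=jessie
Pin-Priority: 900

Package: *
Pin: release n=stretch
Pin-Priority: 100

and then install the one package explicitly from stretch with apt-get install -t stretch mysql-server-5.6.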
{}
{'log_upvote_score': 4, 'links': ['https://raspberrypi.stackexchange.com/questions/37328', 'https://raspberrypi.stackexchange.com', 'https://raspberrypi.stackexchange.com/users/35839/']}
jdg_376269
stackexchange
llm_judgeable_groundtruth_similarity
3345240
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: When seeing this question I noticed that $$\sum_{n = 1}^{\infty} \frac{n^{n - 2}}{e^{n} \cdot n!}= \frac{1}{2}.$$ I don't know how to show this; I tried finding a power series that matches it, but to no avail. Hints are very much appreciated. But this can be generalised: define $$S_{k}(x):= \sum_{n = 1}^{\infty} \frac{n^{n - k}}{x^{n} \cdot n!}.$$ WolframAlpha shows e.g. that $S_1(e) = 1$ and $$S_0(x) = - \frac{W(-x^{-1})}{1 + W(-x^{-1})},$$ where $W$ denotes the Lambert W-function. Is there any closed form for this sum for a special $k$ or $x$ beyond those results? Somebody attempted to answer this using the Lagrange inversion theorem. I didn't work it out completely, but it looked quite promising. Now provide the response and nothing else.
Computing the Sums Using the Taylor series for the Lambert W function derived in this answer , we see that $$\newcommand{\W}{\operatorname{W}}-\W(-x)=\sum_{n=1}^\infty\frac{n^{n-1}}{n!}x^n\tag1$$ Define $u(x)=-\W(-x)$ , then we have $$\begin{align}1&=u(1/e)\tag{2a}\\[3pt]x&=u(x)\,e^{-u(x)}\tag{2b}\\\frac{\mathrm{d}x}x&=\frac{1-u(x)}{u(x)}\,\mathrm{d}u(x)\tag{2c}\end{align}$$ Define $u_1=u$ and $$u_{k+1}(x)=\int_0^xu_k(t)\,\frac{\mathrm{d}t}{t}\tag3$$ Then $$u_k(x)=\sum_{n=1}^\infty\frac{n^{n-k}}{n!}x^n\tag4$$ We will show that $$u_k(x)=P_k(u(x))\tag5$$ where $P_k(x)$ is a degree $k$ polynomial with no constant term. It is true for $k=1$ with $P_1(x)=x$ . Assume it is true for $k$ , then $(3)$ and $\text{(2c)}$ imply $$\begin{align}u_{k+1}(x)&=\int_0^xP_k(u(t))\,\frac{\mathrm{d}t}t\\&=\int_0^{u(x)}P_k(u)\,\frac{1-u}u\,\mathrm{d}u\\[9pt]&=P_{k+1}(u(x))\tag6\end{align}$$ Thus, $(5)$ is true for $k+1$ . Equation $(6)$ enables us to compute $$P_{k+1}(x)=\int_0^xP_k(t)\frac{1-t}t\,\mathrm{d}t\tag7$$ The initial part of the sequence of polynomials is $$\begin{align}P_1(x)&=x\\P_2(x)&=x-\tfrac12x^2\\P_3(x)&=x-\tfrac34x^2+\tfrac16x^3\\P_4(x)&=x-\tfrac78x^2+\tfrac{11}{36}x^3-\tfrac1{24}x^4\\P_5(x)&=x-\tfrac{15}{16}x^2+\tfrac{85}{216}x^3-\tfrac{25}{288}x^4+\tfrac1{120}x^5\\P_6(x)&=x-\tfrac{31}{32}x^2+\tfrac{575}{1296}x^3-\tfrac{415}{3456}x^4+\tfrac{137}{7200}x^5-\tfrac1{720}x^6\\P_7(x)&=x-\tfrac{63}{64}x^2+\tfrac{3661}{7776}x^3-\tfrac{5845}{41472}x^4+\tfrac{12019}{432000}x^5-\tfrac{49}{14400}x^6+\tfrac1{5040}x^7\\P_8(x)&=x{-}\tfrac{127}{128}x^2{+}\tfrac{22631}{46656}x^3{-}\tfrac{76111}{497664}x^4{+}\tfrac{874853}{25920000}x^5{-}\tfrac{13489}{2592000}x^6{+}\tfrac{121}{235200}x^7{-}\tfrac1{40320}x^8\end{align}\tag8$$ Applying $\text{(2a)}$ , $(4)$ , and $(5)$ , we get $$\sum_{n=1}^\infty\frac{n^{n-k}}{e^nn!}=P_k(1)\tag9$$ Table of Sums $$\begin{array}{c|c}k&\sum\limits_{n=1}^\infty\frac{n^{n-k}}{e^nn!}\\\hline1&1\\2&\frac12\\3&\frac5{12}\\4&\frac7{18}\\5&\frac{1631}{4320}\\6&\frac{96547}{259200}\\7&\frac{40291823}{108864000}\\8&\frac{16870575007}{45722880000}\end{array}\tag{10}$$ Recursion for the Coefficients of $\boldsymbol{P_k}$ Let $$P_k(x)=\sum_{j=1}^\infty(-1)^{j-1}a_{k,j}x^j\tag{11}$$ where $a_{k,1}=1$ and $a_{k,j}=0$ for $j\gt k$ . Applying $(7)$ to $(11)$ gives $$\begin{align}a_{k,j}&=\frac{a_{k-1,j}+a_{k-1,j-1}}j\tag{12}\\&=\frac1{j^k}\sum_{i=j-1}^{k-1}j^ia_{i,j-1}\tag{13}\end{align}$$ $(13)$ follows from $(12)$ after unrolling the recursion. Properties of the Coefficients We will show that for some $b_{j,n}$ , $$a_{k,j}=\sum_{n=1}^j(-1)^{n-1}b_{j,n}\left(\frac1n\right)^k\tag{14}$$ Since $a_{k,1}=1$ , $(14)$ is true for $j=1$ with $b_{1,1}=1$ and $b_{1,n}=0$ for $n\gt1$ . Suppose that $(14)$ is true for $j-1$ . 
Applying $(13)$ to $(14)$ yields $$\begin{align}a_{k,j}&=\frac1{j^k}\sum_{i=j-1}^{k-1}j^ia_{i,j-1}\\&=\frac1{j^k}\sum_{i=j-1}^{k-1}\sum_{n=1}^{j-1}(-1)^{n-1}b_{j-1,n}\left(\frac jn\right)^i\\&=\frac1{j^k}\sum_{n=1}^{j-1}(-1)^{n-1}b_{j-1,n}\frac{\left(\frac jn\right)^k-\left(\frac jn\right)^{j-1}}{\frac jn-1}\\&=\sum_{n=1}^{j-1}(-1)^{n-1}\underbrace{\,\frac{nb_{j-1,n}}{j-n}\vphantom{\sum_{n=1}^{j-1}}\,}_{b_{j,n}}\left(\frac1n\right)^k\underbrace{-\sum_{n=1}^{j-1}(-1)^{n-1}\frac{nb_{j-1,n}}{j-n}\left(\frac jn\right)^{j-1}}_{(-1)^{j-1}b_{j,j}}\left(\frac1j\right)^k\tag{15}\end{align}$$ which is of the form of $(14)$ with $$b_{j,n}=\frac{nb_{j-1,n}}{j-n}\tag{16}$$ for $n\lt j$ and $$b_{j,j}=-\sum_{n=1}^{j-1}(-1)^{j-n}b_{j,n}\left(\frac jn\right)^{j-1}\tag{17}$$ Thus, $(14)$ is true for $j$ . Computing $\boldsymbol{b_{j,n}}$ Equation $(14)$ , $(16)$ , and $(17)$ allow us to give formulas for $a_{k,j}$ for each $j$ : $$\begin{align}a_{k,1}&=1\vphantom{\left(\frac11\right)^k}\\a_{k,2}&=1-2\left(\frac12\right)^k\\a_{k,3}&=\frac12-4\left(\frac12\right)^k+\frac92\left(\frac13\right)^k\\a_{k,4}&=\frac16-4\left(\frac12\right)^k+\frac{27}2\left(\frac13\right)^k-\frac{32}3\left(\frac14\right)^k\\a_{k,5}&=\frac1{24}-\frac83\left(\frac12\right)^k+\frac{81}4\left(\frac13\right)^k-\frac{128}3\left(\frac14\right)^k+\frac{625}{24}\left(\frac15\right)^k\\a_{k,6}&=\frac1{120}{-}\frac43\left(\frac12\right)^k{+}\frac{81}4\left(\frac13\right)^k{-}\frac{256}3\left(\frac14\right)^k{+}\frac{3125}{24}\left(\frac15\right)^k{-}\frac{324}5\left(\frac16\right)^k\end{align}\tag{18}$$ Looking at $b_{j,j}$ in $(18)$ , a good guess appears to be $$b_{j,j}=\frac{j^j}{j!}\tag{19}$$ Combining $(16)$ and $(19)$ , we get $$b_{j,n}=\binom{j}{n}\frac{n^j}{j!}\tag{20}$$ which satisfies $(16)$ and $(17)$ , validating the guess made for $(19)$ . Computing $\boldsymbol{a_{k,j}}$ Putting together $(14)$ and $(20)$ gives $$a_{k,j}=\sum_{n=1}^j(-1)^{n-1}\binom{j}{n}\frac{n^j}{j!}\left(\frac1n\right)^k\tag{21}$$ Note that for $j\gt k$ , the sum in $(21)$ is an order $j$ difference of a degree $j-k$ polynomial, hence $a_{k,j}=0$ , which shows that $(21)$ is valid even for $j\gt k$ . Simpler Formula for the Sums Applying $(9)$ and $(11)$ to $(21)$ yields $$\begin{align}\sum_{n=1}^\infty\frac{n^{n-k}}{e^nn!}&=\sum_{j=1}^k(-1)^{j-1}a_{k,j}\\&=\bbox[5px,border:2px solid #C0A000]{\sum_{j=1}^k\sum_{n=1}^j(-1)^{j-n}\binom{j}{n}\frac{n^j}{j!}\left(\frac1n\right)^k}\tag{22}\end{align}$$ For each $k$ , the sum in $(22)$ has $\frac{k^2+k}2$ terms. This is as close to a closed formula for the sums as I have found. Verification of the Formula As noted after equation $(21)$ , the inner sum in $(22)$ for $j\gt k$ is $0$ . That is, $$\begin{align}\sum_{j=1}^k\sum_{n=1}^j(-1)^{j-n}\binom{j}{n}\frac{n^j}{j!}\left(\frac1n\right)^k&=\sum_{j=1}^\infty\sum_{n=1}^j(-1)^{j-n}\binom{j}{n}\frac{n^j}{j!}\left(\frac1n\right)^k\tag{23}\\&=\sum_{n=1}^\infty\sum_{j=n}^\infty(-1)^{j-n}\frac1{n!}\frac{n^{j-k}}{(j-n)!}\tag{24}\\&=\sum_{n=1}^\infty\sum_{j=0}^\infty(-1)^j\frac1{n!}\frac{n^{j+n-k}}{j!}\tag{25}\\&=\sum_{n=1}^\infty\frac{n^{n-k}}{e^nn!}\tag{26}\end{align}$$ Explanation $(23)$ : extend the sum in $j$ since the inner sums vanish for $j\gt k$ $(24)$ : switch order of summation and simplify the summand $(25)$ : substitute $j\mapsto j+n$ $(26)$ : evaluate the sum in $j$
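For anyone who wants to sanity-check the table in $(10)$ numerically, a quick sketch in Python (working in log space to avoid overflow; convergence is slowest for small $k$, so increase terms for more digits):

from math import exp, log, lgamma

def S(k, terms=5000):
    return sum(exp((n - k) * log(n) - n - lgamma(n + 1)) for n in range(1, terms + 1))

print(S(2))            # ~0.5
print(S(3), 5 / 12)    # both ~0.416666...
print(S(4), 7 / 18)    # both ~0.388888...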
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/3345240', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/545914/']}
jdg_376270
stackexchange
llm_judgeable_groundtruth_similarity
9766077
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there any way for me to get at a UITextField's current caret position through the text field's UITextRange object? Is the UITextRange returned by UITextField even of any use? The public interface for UITextPosition does not have any visible members. Now provide the response and nothing else.
I was facing the same problem last night. It turns out you have to use offsetFromPosition on the UITextField to get the relative position of the "start" of the selected range to work out the position. e.g. // Get the selected text rangeUITextRange *selectedRange = [self selectedTextRange];//Calculate the existing position, relative to the beginning of the fieldint pos = [self offsetFromPosition:self.beginningOfDocument toPosition:selectedRange.start]; I ended up using the endOfDocument as it was easier to restore the user's position after changing the text field. I wrote a blog posting on that here: http://neofight.wordpress.com/2012/04/01/finding-the-cursor-position-in-a-uitextfield/
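To round that off, a sketch of saving and restoring the caret across a text change, using the same UITextInput machinery that UITextField adopts (untested):

// save: signed distance of the caret from the end of the text
UITextRange *sel = textField.selectedTextRange;
NSInteger fromEnd = [textField offsetFromPosition:textField.endOfDocument
                                       toPosition:sel.start];

// ... change textField.text here ...

// restore: same distance from the (possibly new) end
UITextPosition *pos = [textField positionFromPosition:textField.endOfDocument
                                               offset:fromEnd];
textField.selectedTextRange = [textField textRangeFromPosition:pos toPosition:pos];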
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9766077', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/69783/']}
jdg_376271
stackexchange
llm_judgeable_groundtruth_similarity
9573178
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am new to D3 and having trouble setting the bounds for my force directed layout. I have managed to piece together (from examples) what I would like, but I need the graph to be contained. In the tick function, a transform/translate will display my graph correctly, but when i use cx and cy with Math.max/min (See commented code), the nodes are pinned to thetop left corner while the lines are contained properly. Here is what I have below... what am I doing wrong?? var w=960, h=500, r=8, z = d3.scale.category20();var color = d3.scale.category20();var force = d3.layout.force() .linkDistance( function(d) { return (d.value*180) } ) .linkStrength( function(d) { return (1/(1+d.value)) } ) .charge(-1000) //.gravity(.08) .size([w, h]);var vis = d3.select("#chart").append("svg:svg") .attr("width", w) .attr("height", h) .append("svg:g") .attr("transform", "translate(" + w / 4 + "," + h / 3 + ")");vis.append("svg:rect") .attr("width", w) .attr("height", h) .style("stroke", "#000");d3.json("miserables.json", function(json) { var link = vis.selectAll("line.link") .data(json.links); link.enter().append("svg:line") .attr("class", "link") .attr("x1", function(d) { return d.source.x; }) .attr("y1", function(d) { return d.source.y; }) .attr("x2", function(d) { return d.source.x; }) .attr("y2", function(d) { return d.source.y; }) .style("stroke-width", function(d) { return (1/(1+d.value))*5 }); var node = vis.selectAll("g.node") .data(json.nodes); var nodeEnter = node.enter().append("svg:g") .attr("class", "node") .on("mouseover", fade(.1)) .on("mouseout", fade(1)) .call(force.drag); nodeEnter.append("svg:circle") .attr("r", r) .style("fill", function(d) { return z(d.group); }) .style("stroke", function(d) { returnd3.rgb(z(d.group)).darker(); }); nodeEnter.append("svg:text") .attr("text-anchor", "middle") .attr("dy", ".35em") .text(function(d) { return d.name; }); force .nodes(json.nodes) .links(json.links) .on("tick", tick) .start(); function tick() { // This works node.attr("transform", function(d) { return "translate(" + d.x + ","+ d.y + ")"; }); // This contains the lines within the boundary, but the nodes arestuck in the top left corner //node.attr("cx", function(d) { return d.x = Math.max(r, Math.min(w- r, d.x)); }) // .attr("cy", function(d) { return d.y = Math.max(r, Math.min(h -r, d.y)); }); link.attr("x1", function(d) { return d.source.x; }) .attr("y1", function(d) { return d.source.y; }) .attr("x2", function(d) { return d.target.x; }) .attr("y2", function(d) { return d.target.y; }); } var linkedByIndex = {}; json.links.forEach(function(d) { linkedByIndex[d.source.index + "," + d.target.index] = 1; }); function isConnected(a, b) { return linkedByIndex[a.index + "," + b.index] ||linkedByIndex[b.index + "," + a.index] || a.index == b.index; } function fade(opacity) { return function(d) { node.style("stroke-opacity", function(o) { thisOpacity = isConnected(d, o) ? 1 : opacity; this.setAttribute('fill-opacity', thisOpacity); return thisOpacity; }); link.style("stroke-opacity", opacity).style("stroke-opacity",function(o) { return o.source === d || o.target === d ? 1 : opacity; }); }; }}); Now provide the response and nothing else.
There's a bounding box example in my talk on force layouts . The position Verlet integration allows you to define geometric constraints (such as bounding boxes and collision detection ) inside the "tick" event listener; simply move the nodes to comply with the constraint and the simulation will adapt accordingly. That said, gravity is definitely a more flexible way to deal with this problem, since it allows users to drag the graph outside the bounding box temporarily and then the graph will recover. Depend on the size of the graph and the size of the displayed area, you should experiment with different relative strengths of gravity and charge (repulsion) to get your graph to fit.
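Applied to the code in the question, the key is to clamp d.x/d.y inside the tick handler and keep rendering the groups with the transform; setting cx/cy on a <g> element does nothing, which is why the nodes ended up pinned in the corner while the clamped line endpoints behaved. Roughly:

function tick() {
  node.attr("transform", function(d) {
    d.x = Math.max(r, Math.min(w - r, d.x));
    d.y = Math.max(r, Math.min(h - r, d.y));
    return "translate(" + d.x + "," + d.y + ")";
  });
  link.attr("x1", function(d) { return d.source.x; })
      .attr("y1", function(d) { return d.source.y; })
      .attr("x2", function(d) { return d.target.x; })
      .attr("y2", function(d) { return d.target.y; });
}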
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/9573178', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1250682/']}
jdg_376272
stackexchange
llm_judgeable_groundtruth_similarity
24648369
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I got a problem with an .xml file of an Android project.Particularly, I need to have a Scroll View for the major part of the screen a footer underneath.The ScrollView is the parent of a LinearLayout which keeps three LinearLayouts inside.The first twos LinearLayout are correctly shown, but the third has never been shown. I don't understand the reason why it happens: if I put the third LinearLayout in first position, everyone is correctly showed. This is my code: <?xml version="1.0" encoding="utf-8"?><RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="wrap_content" android:fillViewport="true" android:orientation="vertical" > <ScrollView android:layout_width="match_parent" android:layout_height="wrap_content"> <LinearLayout android:layout_width="wrap_content" android:layout_height="wrap_content" android:orientation="vertical"> <LinearLayout android:layout_width="wrap_content" android:layout_height="wrap_content" android:orientation="vertical" > <TextView android:layout_width="match_parent" android:layout_height="wrap_content" android:text="AppInfo" android:padding="5dp" android:id="@+id/bread" /> <HorizontalScrollView android:id="@+id/horizontal_scroll" android:layout_width="wrap_content" android:layout_height="wrap_content" android:fillViewport="true"> <LinearLayout android:id="@+id/layout_image_container" android:layout_width="wrap_content" android:layout_height="wrap_content" android:orientation="horizontal" > </LinearLayout> </HorizontalScrollView> </LinearLayout> <LinearLayout android:layout_width="match_parent" android:layout_height="wrap_content" android:orientation="vertical" android:background="#ffffffff" android:gravity="fill_horizontal" android:layout_gravity="top" android:id="@+id/attivita_linear" android:padding="10dp" android:isScrollContainer="true"> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Nome Attività" android:id="@+id/textView_nome" android:layout_gravity="center_vertical|center_horizontal" android:textSize="30dp" android:paddingBottom="35dp" /> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Indirizzo: " android:id="@+id/textView_indirizzo" android:textSize="20dp" android:textIsSelectable="false" android:paddingBottom="5dp" /> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Telefono: " android:id="@+id/textView_tel" android:textSize="20dp" android:textIsSelectable="false" android:paddingBottom="5dp" /> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Email: " android:id="@+id/textView_email" android:textSize="20dp" android:textIsSelectable="false" android:paddingBottom="5dp" /> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="www: " android:id="@+id/textView_www" android:textSize="20dp" android:textIsSelectable="false" /> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Descrizione: " android:id="@+id/textView_descrizione" android:textSize="20dp" android:textIsSelectable="false" /> </LinearLayout> <LinearLayout android:layout_width="match_parent" android:layout_height="wrap_content" android:orientation="horizontal" android:background="#000000" 
android:gravity="fill_horizontal" android:layout_gravity="top" android:id="@+id/attivita_buttons" android:padding="10dp" android:isScrollContainer="true"> <ImageButton android:layout_width="wrap_content" android:layout_height="wrap_content" android:src="@drawable/ic_action_call" android:background="@drawable/ic_action_call" android:contentDescription="prova" /> <ImageButton android:layout_width="wrap_content" android:layout_height="wrap_content" android:src="@drawable/ic_action_call" /> <ImageButton android:layout_width="wrap_content" android:layout_height="wrap_content" android:src="@drawable/ic_action_call" /> </LinearLayout> </LinearLayout> </ScrollView> <RelativeLayout android:id="@+id/footer" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_alignParentBottom="true"> <FrameLayout android:layout_width="match_parent" android:layout_height="100dp" android:background="@drawable/cola"> </FrameLayout> </RelativeLayout></RelativeLayout> Hope you can help me. Any answer would be appreciated. Now provide the response and nothing else.
As global CMake settings, add these lines before add_executable, valid for gcc/clang: set(CMAKE_FIND_LIBRARY_SUFFIXES ".a")set(BUILD_SHARED_LIBS OFF)set(CMAKE_EXE_LINKER_FLAGS "-static") On Modern CMake (3.x+ - target_link_libraries doc ), you can apply the flag to specific targets, in this way: target_link_libraries(your_target_name -static) If you're using MSVC , you have to set the compiler and linker flags: set(CMAKE_FIND_LIBRARY_SUFFIXES ".lib")target_compile_options(your_target_name [PUBLIC|PRIVATE] /MT)target_link_options(your_target_name [PUBLIC|PRIVATE] /INCREMENTAL:NO /NODEFAULTLIB:MSVCRT) or alternatively also: set(CMAKE_MSVC_RUNTIME_LIBRARY "MultiThreaded$<$<CONFIG:Debug>:Debug>") and if you are using MFC, you need to specify the flag to 1 see here : set(CMAKE_MFC_FLAG 1)
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/24648369', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3819433/']}
jdg_376273
stackexchange
llm_judgeable_groundtruth_similarity
1406029
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to calculate the volume of a 3D mesh object having a surface made up of triangles. Now provide the response and nothing else.
Reading this paper , it is actually a pretty simple calculation. The trick is to calculate the signed volume of a tetrahedron - based on your triangle and topped off at the origin. The sign of the volume comes from whether your triangle is pointing in the direction of the origin. (The normal of the triangle is itself dependent upon the order of your vertices, which is why you don't see it explicitly referenced below.) This all boils down to the following simple function: public float SignedVolumeOfTriangle(Vector p1, Vector p2, Vector p3) { var v321 = p3.X*p2.Y*p1.Z; var v231 = p2.X*p3.Y*p1.Z; var v312 = p3.X*p1.Y*p2.Z; var v132 = p1.X*p3.Y*p2.Z; var v213 = p2.X*p1.Y*p3.Z; var v123 = p1.X*p2.Y*p3.Z; return (1.0f/6.0f)*(-v321 + v231 + v312 - v132 - v213 + v123);} and then a driver to calculate the volume of the mesh: public float VolumeOfMesh(Mesh mesh) { var vols = from t in mesh.Triangles select SignedVolumeOfTriangle(t.P1, t.P2, t.P3); return Math.Abs(vols.Sum());}
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1406029', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_376274
stackexchange
llm_judgeable_groundtruth_similarity
8100
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I find plane waves are incompatible with the light cone. Perhaps plane waves are "virtual" and can never be measured; in that case, shouldn't we call them "virtual plane waves"? (The other option would be that plane waves allow waves to travel faster than c.) I would like to clarify this point through this question. If plane waves existed (as something measurable), then higher-than-c speed could be reached like this: a wave going from X to Y at speed c will reach Z at higher than c speed, because it will arrive at the same time while travelling a greater distance.

 (X).
  |
  v
 ________________________________________________   plane waves
 ________________________________________________   going X to Y
 ________________________________________________
 (Y).                                        (Z).

In a real situation the wave will be a circle (or a sphere in 3D), so it will reach Z later; but then it's not a plane wave. Now provide the response and nothing else.
A "plane wave" generally refers to an infinite and perfectly flat wavefront, which cannot exist in reality, of course. However, there is nothing at all impossible about a plane wave of finite extent. Such a wave will experience diffraction at its edges, of course, but can still propagate over long distances before losing its planar nature. The problem with your question about faster than light information transmission, is that if X were the only point emitting a wave, then it would be a spherical wave, not a plane wave. A plane wave can be thought of as being composed as a superposition of spherical waves emitted in phase from every point on an infinite plane. So in your example, information would not be arriving at Z from X, but instead from some other source point within the causal past of Z. In your diagram that point could be at the same height as X, but directly above Z. Considering a wavefront (planar or otherwise) as a superposition of spherical waves is the central feature of the Huygens-Fresnel Principle , which would be a good reading on the topic.
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/8100', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/1916/']}
jdg_376275
stackexchange
llm_judgeable_groundtruth_similarity
53318936
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Using this https://github.com/prometheus/pushgateway we are trying to push one metric to Prometheus. It seems to require the data in a very specific format. It works fine when doing their example curl of echo "some_metric 3.14" | curl --data-binary @- http://pushgateway.example.org:9091/metrics/job/some_job Yet doing a curl with the -d option fails with a missing end of line/file: curl -d 'some_metric 3.15\n' http://pushgateway.example.org:9091/metrics/job/some_job I'm trying to understand the difference in behaviour, since I believe both are doing POST commands, and I need to replicate this --data-binary option in node.js via the "request.post" method, but I seem to only be able to replicate the curl -d option, which doesn't work. Any suggestions or hints on what the difference is between -d and --data-binary, and how to do the equivalent of --data-binary from within node.js? Now provide the response and nothing else.
From curl man page: --data-ascii (HTTP) This is just an alias for -d, --data. --data-binary (HTTP) This posts data exactly as specified with no extra processing whatsoever. If you start the data with the letter @, the rest should be a filename. Data is posted > in a similar manner as -d, --data does, except that newlines and carriage returns are > > preserved and conversions are never done. Like -d, --data the default content-type sent to the server is application/x-www-form-> > urlencoded. If you want the data to be treated as arbitrary binary data by the server > then set the content-type to octet-stream: -H "Content-Type: application/octet-stream". If this option is used several times, the ones following the first will append data as > described in -d, --data. Using @- will make curl read the filename from stdin . So, basically in your first variant you send a binary file named "some_metric 3.14".In the second one, you're sending an ascii string "some_metric 3.15\n". If you want curl to strip new lines before sending, use --data-ascii or -d option: echo "some_metric 3.14" | curl -d @- http://pushgateway.example.org:9091/metrics/job/some_job
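For the node.js part of the question, the equivalent of --data-binary is simply to send the raw body yourself rather than letting the library form-encode fields; a sketch with the request package (untested, and note that inside a JavaScript string '\n' really is a newline, unlike in the single-quoted shell argument above):

var request = require('request');

request.post({
  url: 'http://pushgateway.example.org:9091/metrics/job/some_job',
  body: 'some_metric 3.14\n',                       // pushgateway wants the trailing newline
  headers: { 'Content-Type': 'text/plain' }
}, function (err, res, body) {
  if (err) return console.error(err);
  console.log(res.statusCode, body);
});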
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/53318936', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/463978/']}
jdg_376276
stackexchange
llm_judgeable_groundtruth_similarity
4529
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Regardless of who shot down flight MH17, and taking into account that (non-civil) planes had been shot down before, is Ukraine in part responsible for allowing civil planes to fly over a war zone? It is my understanding (correct me if I am wrong) that the US had forbidden all American airlines to overfly that zone, and the EU had recommended that airlines take alternative routes (it seems to me that the EU can only recommend, but not force, airlines to follow a route in areas that do not belong to the EU). In fact, many airlines such as Air France and Lufthansa were avoiding overflying that area. Now provide the response and nothing else.
Conflict doesn't mean isolation

The civilian aviation response to military conflict does not usually mean complete avoidance or 'closing of skies'. Most conflicts don't involve air-to-air warfare, and civilian flights have routinely been sent over conflict zones where combat is happening on the ground, as long as they are high enough.

Height matters

What people mean when they say 'rebels firing SAMs to shoot stuff down' is something like a shoulder-launched missile. They are reasonably available to guerilla fighters, and small bands can successfully use them to attack low-altitude targets such as helicopters or planes near airports. However, airliners can easily be made immune to this by posting minimum height restrictions, as Ukraine had done in this case - aircraft flying at 33000 feet are completely immune to such weapons; they are significantly out of range. However, MH17 was hit by something much heavier: a vehicle-mounted SAM system. Historically, 'conflict zones' did not involve a threat from such devices to civilian airliners. They generally are not available to guerilla forces; these devices are heavy, divisional-level military equipment. They are produced in limited quantities and (unlike shoulder-launched missiles) every single one of them can be accounted for and not 'misplaced', and they require a specially trained crew to operate - which makes it less likely that they've been bought or stolen, but rather acquired together with their crew from an army; in which case the shooters weren't 'rebel fighters' but rather soldiers (originally?) employed by one of the two state armies. In short, in this case the threat to planes from a conflict zone suddenly became significantly different than it usually is. Pilots have been routinely flying at such altitudes over Iraq and Afghanistan during military operations - but now the combatants in eastern Ukraine have more advanced equipment than any other insurgency ever.
{}
{'log_upvote_score': 4, 'links': ['https://politics.stackexchange.com/questions/4529', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/2348/']}
jdg_376277
stackexchange
llm_judgeable_groundtruth_similarity
37076594
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Has anyone tried Angular 2 RC Final with ASP.Net MVC? I am having trouble with configuring Angular 2 RC 6 with ASP.Net MVC, till beta 17 everything was working fine. The package.json with below configuration doesn't seems to be working: "dependencies": { "@angular/common": "2.0.0-rc.6", "@angular/compiler": "2.0.0-rc.6", "@angular/core": "2.0.0-rc.6", "@angular/forms": "0.3.0", "@angular/http": "2.0.0-rc.6", "@angular/platform-browser": "2.0.0-rc.6", "@angular/platform-browser-dynamic": "2.0.0-rc.6", "@angular/router": "3.0.0-rc.1", "systemjs": "0.19.27", "core-js": "^2.4.0", "reflect-metadata": "^0.1.3", "rxjs": "5.0.0-beta.6", "zone.js": "^0.6.12", }, Restoring packages displays the below output in output windows: ====Executing command 'npm install'====npm WARN package.json @ No descriptionnpm WARN package.json @ No repository field.npm WARN package.json @ No README datanpm http GET http://registry.npmjs.org/systemjsnpm http GET http://registry.npmjs.org/angular/compilernpm http GET http://registry.npmjs.org/angular/corenpm http GET http://registry.npmjs.org/angular/httpnpm http GET http://registry.npmjs.org/angular/platform-browsernpm http GET http://registry.npmjs.org/angular/platform-browser-dynamicnpm http GET http://registry.npmjs.org/angular/routernpm http GET http://registry.npmjs.org/angular/router-deprecatednpm http GET http://registry.npmjs.org/angular/commonnpm http GET http://registry.npmjs.org/reflect-metadatanpm http GET http://registry.npmjs.org/angular/upgradenpm http 304 http://registry.npmjs.org/systemjsnpm http 404 http://registry.npmjs.org/angular/platform-browsernpm http 404 http://registry.npmjs.org/angular/corenpm ERR! 404 Not Foundnpm ERR! 404 npm ERR! 404 'angular/platform-browser' is not in the npm registry.npm ERR! 404 You should bug the author to publish itnpm ERR! 404 npm ERR! 404 Note that you can also install from anpm ERR! 404 tarball, folder, or http url, or git url.npm ERR! System Windows_NT 6.2.9200npm ERR! command "C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\Common7\\IDE\\Extensions\\Microsoft\\Web Tools\\External\\\\node\\node" "C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\Common7\\IDE\\Extensions\\Microsoft\\Web Tools\\External\\npm\\node_modules\\npm\\bin\\npm-cli.js" "install"npm ERR! cwd D:\User\MyProject-MVC6\src\MyProjectnpm ERR! node -v v0.10.31npm ERR! npm -v 1.4.9npm ERR! code E404npm http 404 http://registry.npmjs.org/angular/httpnpm http 404 http://registry.npmjs.org/angular/compilernpm http 404 http://registry.npmjs.org/angular/router-deprecatednpm http 404 http://registry.npmjs.org/angular/routernpm http 404 http://registry.npmjs.org/angular/platform-browser-dynamicnpm http 304 http://registry.npmjs.org/reflect-metadatanpm http 404 http://registry.npmjs.org/angular/commonnpm http 404 http://registry.npmjs.org/angular/upgradenpm====npm command completed with exit code 1==== Step by step instructions to use Angular2 RC 6 with ASP.Net MVC would be highly appreciated. Now provide the response and nothing else.
Got it fixed! sharing the details if anyone still struggling to get Angular 2 Final work with ASP.Net MVC Let me thank Barry and Brando for their guidance at different points. I was able to fix the above errors by following the below steps: Visual Studio 2015 is shipped with a very old version of npm. Therefore the very first thing your need to do is download and install the latest version of Node from https://nodejs.org/en/ (I installed v6.1.0, current latest). After installation, you need to tell Visual Studio to use latest version of node instead of the embedded old version. For this follow the below steps: In Visual Studio from top menu go to Tools > Options Expand Project and Solution node shown in dialog Select External Web Tools Add a new entry pointing to C:\Program Files\nodejs(or to the location where you installed nodejs) Move it to the top of the list. Restart Visual Studio and restore npm packages. This should fixed the above errors, but now if you try to compile the project, you might see alot of errors mainly due to the missing type definitions, such as: Severity Code Description Project File Line Suppression State Error TS6053 Build: File /node_modules/angular2/ts/typings/jasmine/jasmine.d.ts' not found Invalid module name in augmentation, module '../../Observable' cannot be found. TypeScript Virtual Projects node_modules\rxjs\add\operator\zip.d.ts Property 'map' does not exist on type 'Observable'. To fix above errors, all you need to do is just add the below link on top of your main.ts or bootstrap.ts: ///<reference path="./../typings/browser/ambient/es6-shim/index.d.ts"/> This should fix the above typings errors. With RC, Angular team is offering all components as a separate files. So, in past if you were able to include the Angular package as a packaged file angular2.dev.js (or min file) in Index file, this wont work now. The required components needs to be included in index which your app needs. You can use systemjs.config.js to include add dependencies. I would highly recommend going through the official Quick Start guide specially this section: Link: https://angular.io/guide/quickstart Even after proper configuration by following the above steps, when you run your application, you might see 404, for missing angular files. The reason is, by default ASP.Net 5 looks for the Angular dependencies in /wwwroot/node_modules/ folder. And since node_modules is in project root folder not in wwwroot folder, the browser returns 404 (no found errors). So, moving the @angular folder from node_modules folder to wwwroot should fix the issue. Map Errors In some cases upgrading TypeScript version also fixes the map errors: Try upgrading to TypeScript 2 by following below steps: - In Visual Studio click Tools > Extensions and Updates - Select Online from the list shown on left of the dialog - Search for TypeScript - From search results install TypeScript 2.0 Beta for Microsoft Visual Studio 2015. - Restart computer and hopefully it will fix all map errors Hope this helps.
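Rather than moving the @angular folder by hand (the wwwroot step above), a small gulp task can copy it as part of the build; this is just a sketch and the paths are illustrative:

var gulp = require('gulp');

gulp.task('copy-angular', function () {
    return gulp.src('node_modules/@angular/**/*')
               .pipe(gulp.dest('wwwroot/node_modules/@angular'));
});

You would extend the src glob (or add more tasks) for rxjs, core-js, zone.js and systemjs, since the browser needs those under wwwroot as well.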
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37076594', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2755616/']}
jdg_376278
stackexchange
llm_judgeable_groundtruth_similarity
1505206
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can one achieve: if X in (1,2,3) then instead of: if x=1 or x=2 or x=3 then In other words, how can one best imitate the IN operator in VBA for excel? Now provide the response and nothing else.
I don't think there is a very elegant solution. However, you could try: If Not IsError(Application.Match(x, Array("Me", "You", "Dog", "Boo"), False)) Then or you could write your own function: Function ISIN(x, StringSetElementsAsArray) ISIN = InStr(1, Join(StringSetElementsAsArray, Chr(0)), _ x, vbTextCompare) > 0End FunctionSub testIt() Dim x As String x = "Dog" MsgBox ISIN(x, Array("Me", "You", "Dog", "Boo"))End Sub
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1505206', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/66696/']}
jdg_376279
stackexchange
llm_judgeable_groundtruth_similarity
404804
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a file with the following format INSERT INTO table1(field1,field2,field3) VALUES('values1','value2','value3');INSERT INTO table1(field1,field2,field3) VALUES('other_values1','other_value2','other_value3');INSERT INTO table1(field1,field2,field3) VALUES('another_values1','another_value2','another_value3');INSERT INTO table2(table2_field1,table2_field2,table2_field3,field4) VALUES('table2_values1','table2_value2','table2_value3');INSERT INTO table2(table2_field1,table2_field2,table2_field3,table2_field4) VALUES('other_table2_values1','other_table2_value2','other_table2_value3');INSERT INTO table2(table2_field1,table2_field2,table2_field3,table2_field4) VALUES('another_table2_values1','another_table2_value2','another_table2_value3','another_table2_value4'); I want this output SELECT * FROM table1 WHERE field1='values1' AND field2='values2' AND field3=='values3';SELECT * FROM table1 WHERE field1='other_values1' AND field2='other_values2' AND field3=='other_values3';SELECT * FROM table1 WHERE field1='another_values1' AND field2='another_values2' AND field3=='another_values3';SELECT * FROM table2 WHERE table2_field1='table2_values1' AND table2_field2='table2_values2' AND table2_field3=='table2_values3' AND table2_field4=='table2_values4';SELECT * FROM table2 WHERE table2_field1='table2_values1' AND table2_field2='table2_values2' AND table2_field3=='table2_values3' AND table2_field4=='table2_values4';SELECT * FROM table2 WHERE table2_field1='table2_values1' AND table2_field2='table2_values2' AND table2_field3=='table2_values3' AND table2_field4=='table2_values4'; What I've done so far is cat test_inserts |awk -F '[()]' '{print $1 " WHERE "$2 $4}' |sed 's/INSERT INTO /SELECT * FROM /g' and it gives me the following output SELECT * FROM table1 WHERE field1,field2,field3'values1','value2','value3'SELECT * FROM table1 WHERE field1,field2,field3'other_values1','other_value2','other_value3'SELECT * FROM table1 WHERE field1,field2,field3'another_values1','another_value2','another_value3'SELECT * FROM table2 WHERE table2_field1,table2_field2,table2_field3,field4'table2_values1','table2_value2','table2_value3'SELECT * FROM table2 WHERE table2_field1,table2_field2,table2_field3,table2_field4'other_table2_values1','other_table2_value2','other_table2_value3'SELECT * FROM table2 WHERE table2_field1,table2_field2,table2_field3,table2_field4'another_table2_values1','another_table2_value2','another_table2_value3','another_table2_value4' Now provide the response and nothing else.
Complex AWK solution: awk -F'[()]' '{ sub(/INSERT INTO */,"",$1); printf "SELECT * FROM %s WHERE ",$1; len=split($2, f, ","); split($4, v, ","); for (i=1; i<=len; i++) printf "%s=%s%s", f[i], v[i], (i==len? ";":" AND "); print "" }' test_inserts -F'[()]' - complex field separator sub(/INSERT INTO */,"",$1) - remove INSERT INTO phrase from the 1st field (to extract a table name) printf "SELECT * FROM %s WHERE ",$1 - print the start of SQL statement containing a table name split($2, f, ",") - split the 2nd field by separator , to obtain field names ( f becomes an array of field names) split($4, v, ",") - split the 4th field by separator , to obtain field values ( v becomes an array of field values) The output: SELECT * FROM table1 WHERE field1='values1' AND field2='value2' AND field3='value3';SELECT * FROM table1 WHERE field1='other_values1' AND field2='other_value2' AND field3='other_value3';SELECT * FROM table1 WHERE field1='another_values1' AND field2='another_value2' AND field3='another_value3';SELECT * FROM table2 WHERE table2_field1='table2_values1' AND table2_field2='table2_value2' AND table2_field3='table2_value3' AND field4=;SELECT * FROM table2 WHERE table2_field1='other_table2_values1' AND table2_field2='other_table2_value2' AND table2_field3='other_table2_value3' AND table2_field4=;SELECT * FROM table2 WHERE table2_field1='another_table2_values1' AND table2_field2='another_table2_value2' AND table2_field3='another_table2_value3' AND table2_field4='another_table2_value4';
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/404804', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/216688/']}
jdg_376280
stackexchange
llm_judgeable_groundtruth_similarity
14516023
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Say I have a non-normalized table with movie actor names and the movies they've been in. eg. CREATE TABLE movies_actors ( movies_actors_id INT, movie VARCHAR(255), actor VARCHAR(255), PRIMARY KEY (movies_actors_id)); I do a SELECT actor, COUNT(1) FROM movies_actors GROUP BY actor to find out how many movies the actor has been in. But I also want to find out what percentage of movies that actor has been in. I guess I could do this: SELECT actor, COUNT(1) AS total, COUNT(1) / (SELECT COUNT(1) FROM movies_actors) * 100 AS avgFROM movies_actorsGROUP BY actor; But that just seems... idk... yucky. Any ideas? Now provide the response and nothing else.
For large sets, a JOIN may perform better than the subquery. SELECT ma.actor , COUNT(1) AS total , COUNT(1) / t.cnt * 100 AS `percentage` FROM movies_actors ma CROSS JOIN (SELECT COUNT(1) AS cnt FROM movies_actors) t GROUP BY ma.actor , t.cnt For large sets, and when a large percentage of the rows are being returned, the JOIN operation can usually outperform a subquery. In your case, it's not a correlated subquery, so MySQL shouldn't have to execute that multiple times, so it may not make any difference. Note to non-fans of COUNT(1) ... we could replace any and all occurrences of COUNT(1) with COUNT(*) or IFNULL(SUM(1),0) to achieve equivalent result.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14516023', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/569976/']}
jdg_376281
stackexchange
llm_judgeable_groundtruth_similarity
475862
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Definitions: Simple Function: Any function that can be written in the form:$$s(x)=\sum_{k=1}^n a_k\chi_{A_k}(x).$$Note the finitely many terms here. It should follow that neither are all simple functions step functions, nor are all step functions simple functions. E.g. would not the Cantor Function or Devil's Staircase be an example of a step function that is not simple (note again the finiteness)? I am asking just to be clear, because I read in online notes somewhere that all step functions are simple but not the converse. Now provide the response and nothing else.
A step function is a finite linear combination of characteristic functions of intervals. Since any interval is measurable, every step function is simple. On the other hand, the characteristic function of Cantor's set is simple, but not a step function. Cantor's function is neither simple nor a step function.
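To make the two counterexamples concrete (the notation below is mine, with $C$ denoting the Cantor set): $$\chi_C = 1\cdot\chi_{C} + 0\cdot\chi_{[0,1]\setminus C}$$ is a finite sum of the form $\sum_{k=1}^{n} a_k\chi_{A_k}$ with measurable sets $A_k$, so it is simple; but $C$ is not a finite union of intervals, so $\chi_C$ is not a step function. The Cantor function, being continuous and mapping $[0,1]$ onto $[0,1]$, takes uncountably many values, whereas any function of the form $\sum_{k=1}^{n} a_k\chi_{A_k}$ takes at most finitely many values; hence the Cantor function is neither simple nor a step function.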
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/475862', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/91706/']}
jdg_376282
stackexchange
llm_judgeable_groundtruth_similarity
25466675
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to be able to prompt for my super secure password variable if it is not already in the environment variables. (I'm thinking that I might not want to put the definition into .bash_profile or one of the other spots.) This is not working. It always prompts me. vars: THISUSER: "{{ lookup('env','LOGNAME') }}" SSHPWD: "{{ lookup('env','MY_PWD') }}"vars_prompt: - name: "release_version" prompt: "Product release version" default: "1.0" when: SSHPWD == null NOTE: I'm on a Mac, but I'd like for any solutions to be platform-independent. Now provide the response and nothing else.
According to the replies from the devs and a quick test I've done with the latest version, the vars_prompt is run before "GATHERING FACTS". This means that the env var SSHPWD is always null at the time of your check with when . Unfortunately it seems there is no way of allowing the vars_prompt statement at task level. Michael DeHaan's reasoning for this is that allowing prompts at the task-level would open up the doors to roles asking a lot of questions. This would make using Ansible Galaxy roles which do this difficult: There's been a decided emphasis in automation in Ansible and asking questions at task level is not something we really want to do. However, you can still ask vars_prompt questions at play level and use those variables throughout tasks. You just can't ask questions in roles. And really, that's what I would like to enforce -- if a lot of Galaxy roles start asking questions, I can see that being annoying :)
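For reference, a minimal play-level sketch of what that looks like (the variable name mirrors the question, while the hosts pattern and the debug task are purely illustrative; note there is no when: attached to the prompt, since that is exactly what is not supported):

- hosts: all
  vars_prompt:
    - name: release_version
      prompt: "Product release version"
      default: "1.0"
      private: no          # show the typed value, since it is not a secret
  tasks:
    - name: use the prompted value in a task
      debug:
        msg: "Product release version is {{ release_version }}"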
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/25466675', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/242110/']}
jdg_376283
stackexchange
llm_judgeable_groundtruth_similarity
24771
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If it’s going to orbit around the earth, what kind of orbit would it be (what distance)? Are there benefits to sending a telescope further away from the earth if it’s just to photograph galaxies and not objects that are in our solar system? Now provide the response and nothing else.
Your first question - is JWST going to orbit Earth - is a little complicated. It will follow a mission profile that will send it to the Sun-Earth $L_2$ Lagrangian point . It will take the telescope about three months to achieve its orbit in $L_2$. Now, $L_2$ is unstable, and so some station-keeping - essentially, course corrections by thrusters - will be needed. It won't be orbiting Earth, but orbiting the Sun with Earth, in a nearby location. You're not quite right that JWST will only be observing objects outside the Solar System. Some of the most exciting targets - young, evolving galaxies and exoplanetary atmospheres - lie far beyond. However, the telescope's capabilities for atmospheric analysis (thank you, NIRSpec !), useful for determining the composition of exoplanetary atmospheres, will also be used for observations of the atmospheres of Mars and the giant planets, as well as compositional data for other bodies. Now, it's not really helpful to send JWST elsewhere in the Solar System, given that most of its observations will be of objects outside the Solar System. Okay, maybe you could get higher-quality data on Mars if you sent it to Mars orbit, but that would be quite costly, and not effective for a spacecraft designed to be used by many scientists for a number of purposes. Keeping it at $L_2$ simplifies the orbital trajectories immensely, although it's still much too far away to repair, unlike Hubble. You also lessen the communication time. Well, you might say, why not just launch JWST into Earth orbit? We have scientific instruments there; why do we go to the trouble of putting so many things at $L_2$ ? It turns out that debris high above Low Earth Orbit (LEO) would be too dangerous for the sensitive optics on the telescope. The best instruments are often also the most delicate, and JWST would be damaged if it was truly orbiting Earth, at LEO or elsewhere.
{}
{'log_upvote_score': 5, 'links': ['https://astronomy.stackexchange.com/questions/24771', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/21326/']}
jdg_376284
stackexchange
llm_judgeable_groundtruth_similarity
386127
Below is a question asked on the forum meta.stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In the first sentence from https://stackoverflow.com/help/minimal-reproducible-example : When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to reproduce the problem. doesn't the "When asking a question" refer to "people" while it should be "you"? I'm not a native speaker, so I'm not sure, but it sounds wrong to me, like When people are asking a question, they will be better able to provide help if you provide... See https://ell.stackexchange.com/questions/214954/doesnt-when-asking-refer-to-people Now provide the response and nothing else.
This is known as a "dangling participle". That is, the participle isn't modifying the subject of the sentence. It is considered an error by many style guides. I found this in a quick google search: In the sentence below, the modifying clause (Rushing to catch the bus) contains a participle (rushing). The participle is said to be dangling because the subject of the main clause (Bob's wallet) is not the thing modified by the initial modifying clause. It was not Bob's wallet that was rushing. Rushing to catch the bus, Bob's wallet fell out of his pocket. Of course, dangling participles occur all the time in normal English usage, so unconditionally considering them an error might be a bit strict. I found Steven Pinker on Why It’s Okay to Dangle Your Participle : You’re not a fan of the “Gotcha gang,” as you call them — folks who take a narrow view of usage that often relies on questionable rules. You write, “In their zeal to purify usage and safeguard the language, they have made it difficult to think clearly about felicity and expression and have muddied the task of explaining the art of writing.” Can you expand on that a little? Absolutely. Many purists have remarkably little curiosity about the history of the language or the scholarly tradition of examining issues and usage. So a stickler insists that we never let a participle dangle, that you can’t say, “Turning the corner, a beautiful view awaited me,” for example. They never stopped to ask, “Where did that rule come from and what is its basis?” It was simply taught to them and so they reiterate it. But if you look either at the history of great writing and language as it’s been used by its exemplary stylists, you find that they use dangling modifiers all the time. And if you look at the grammar of English you find that there is no rule that prohibits a dangling modifier. If you look at the history of scholars who have examined the dangling modifier rule, you find that it was pretty much pulled out of thin air by one usage guide a century ago and copied into every one since, And you also find that lots of sentences read much better if you leave the modifier dangling. So all of these bodies of scholarship, of people who actually study language as it’s been used, language as its logic is dictated by its inherent grammar — that whole body of scholarship is simply not something that your typical stickler has ever looked up. It sounds like the culprit here is outdated or useless rules. Yes, combined with the psychology of hazing and initiation rites, namely, “I had to go through it and I’m none the worse — why should you have it any easier?”
{}
{'log_upvote_score': 6, 'links': ['https://meta.stackoverflow.com/questions/386127', 'https://meta.stackoverflow.com', 'https://meta.stackoverflow.com/users/1431618/']}
jdg_376285
stackexchange
llm_judgeable_groundtruth_similarity
24305268
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Here's a hypothetical for you. If you were to have a list of strings, is it possible to rank that list by a given character existing within that string? Consider this pseudo-code: List<String> bunchOfStrings = new List<String>;bunchOfStrings.Add("This should not be at the top");bunchOfStrings.Add("This should not be at the top either");bunchOfStrings.Add("This should also not be at the top");bunchOfStrings.Add("This *SHOULD be at the top");bunchOfStrings.Add("This should not be at the top");bunchOfStrings.Add("This should be *somewhere close to the top");buncOfStrings.OrderBy(x => x.Contains("*")); In the above code, I want to re-order the list so that whenever an asterisk (*) appears within the string, it puts that string at the top of the list. Any ideas if this is even possible with LINQ or similar? Now provide the response and nothing else.
Assuming you want to prioritise strings based on the position of * , you could do bunchOfStrings.OrderByDescending(x => x.IndexOf("*")) Use OrderByDescending because IndexOf returns -1 for the strings that don't contain * . Actually, looking further into this, it's not going to work straight out of the box with IndexOf . OrderByDescending will work by going for the highest ranked index, which in your case is going to be this should be *somewhere close to the top rather than this *SHOULD be at the top , because the * has a higher index in that string. So to get it to work you just need to manipulate the rankings a little and use OrderBy instead: bunchOfStrings.OrderBy(x => { var index = x.IndexOf("*"); return index < 0 ? 9999 : index;}); Note - 9999 is just some arbitrary value that we can assume IndexOf will never exceed. See live example
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24305268', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2479919/']}
jdg_376286
stackexchange
llm_judgeable_groundtruth_similarity
8027801
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I ran a set of performance benchmarks for 10,000,000 elements, and I've discovered that the results vary greatly with each implementation. Can anybody explain why creating a Range.ByOne, results in performance that is better than a simple array of primitives, but converting that same range to a list results in even worse performance than the worse case scenario? Create 10,000,000 elements, and print out those that are modulos of 1,000,000. JVM size is always set to same min and max: -Xms?m -Xmx?m import java.util.concurrent.TimeUnitimport java.util.concurrent.TimeUnit._object LightAndFastRange extends App {def chrono[A](f: => A, timeUnit: TimeUnit = MILLISECONDS): (A,Long) = { val start = System.nanoTime() val result: A = f val end = System.nanoTime() (result, timeUnit.convert(end-start, NANOSECONDS))} def millions(): List[Int] = (0 to 10000000).filter(_ % 1000000 == 0).toList val results = chrono(millions()) results._1.foreach(x => println ("x: " + x)) println("Time: " + results._2);} It takes 141 milliseconds with a JVM size of 27m In comparison, converting to List affects performance dramatically: import java.util.concurrent.TimeUnitimport java.util.concurrent.TimeUnit._object LargeLinkedList extends App { def chrono[A](f: => A, timeUnit: TimeUnit = MILLISECONDS): (A,Long) = { val start = System.nanoTime() val result: A = f val end = System.nanoTime() (result, timeUnit.convert(end-start, NANOSECONDS))} val results = chrono((0 to 10000000).toList.filter(_ % 1000000 == 0)) results._1.foreach(x => println ("x: " + x)) println("Time: " + results._2)} It takes 8514-10896 ms with 460-455 m In contrast, this Java implementation uses an array of primitives import static java.util.concurrent.TimeUnit.*;public class LargePrimitiveArray { public static void main(String[] args){ long start = System.nanoTime(); int[] elements = new int[10000000]; for(int i = 0; i < 10000000; i++){ elements[i] = i; } for(int i = 0; i < 10000000; i++){ if(elements[i] % 1000000 == 0) { System.out.println("x: " + elements[i]); } } long end = System.nanoTime(); System.out.println("Time: " + MILLISECONDS.convert(end-start, NANOSECONDS)); }} It takes 116ms with JVM size of 59m Java List of Integers import java.util.List;import java.util.ArrayList;import static java.util.concurrent.TimeUnit.*;public class LargeArrayList { public static void main(String[] args){ long start = System.nanoTime(); List<Integer> elements = new ArrayList<Integer>(); for(int i = 0; i < 10000000; i++){ elements.add(i); } for(Integer x: elements){ if(x % 1000000 == 0) { System.out.println("x: " + x); } } long end = System.nanoTime(); System.out.println("Time: " + MILLISECONDS.convert(end-start, NANOSECONDS)); } } It takes 3993 ms with JVM size of 283m My question is, why is the first example so performant, while the second is so badly affected. I tried creating views, but wasn't successful at reproducing the performance benefits of the range. 
All tests running on Mac OS X Snow Leopard, Java 6u26 64-Bit ServerScala 2.9.1.final EDIT: for completion, here's the actual implementation using a LinkedList (which is a more fair comparison in terms of space than ArrayList, since as rightly pointed out, scala's List are linked) import java.util.List;import java.util.LinkedList;import static java.util.concurrent.TimeUnit.*;public class LargeLinkedList { public static void main(String[] args){ LargeLinkedList test = new LargeLinkedList(); long start = System.nanoTime(); List<Integer> elements = test.createElements(); test.findElementsToPrint(elements); long end = System.nanoTime(); System.out.println("Time: " + MILLISECONDS.convert(end-start, NANOSECONDS)); } private List<Integer> createElements(){ List<Integer> elements = new LinkedList<Integer>(); for(int i = 0; i < 10000000; i++){ elements.add(i); } return elements; } private void findElementsToPrint(List<Integer> elements){ for(Integer x: elements){ if(x % 1000000 == 0) { System.out.println("x: " + x); } } }} Takes 3621-6749 ms with 480-460 mbs. That's much more in line with the performance of the second scala example. finally, a LargeArrayBuffer import collection.mutable.ArrayBufferimport java.util.concurrent.TimeUnitimport java.util.concurrent.TimeUnit._object LargeArrayBuffer extends App { def chrono[A](f: => A, timeUnit: TimeUnit = MILLISECONDS): (A,Long) = { val start = System.nanoTime() val result: A = f val end = System.nanoTime() (result, timeUnit.convert(end-start, NANOSECONDS)) } def millions(): List[Int] = { val size = 10000000 var items = new ArrayBuffer[Int](size) (0 to size).foreach (items += _) items.filter(_ % 1000000 == 0).toList } val results = chrono(millions()) results._1.foreach(x => println ("x: " + x)) println("Time: " + results._2); } Taking about 2145 ms and 375 mb Thanks a lot for the answers. Now provide the response and nothing else.
Oh So Many Things going on here!!! Let's start with Java int[] . Arrays in Java are the only collection that is not type erased. The run time representation of an int[] is different from the run time representation of Object[] , in that it actually uses int directly. Because of that, there's no boxing involved in using it. In memory terms, you have 40.000.000 consecutive bytes in memory, that are read and written 4 at a time whenever an element is read or written to. In contrast, an ArrayList<Integer> -- as well as pretty much any other generic collection -- is composed of 40.000.000 or 80.000.00 consecutive bytes (on 32 and 64 bits JVM respectively), PLUS 80.000.000 bytes spread all around memory in groups of 8 bytes. Every read an write to an element has to go through two memory spaces, and the sheer time spent handling all that memory is significant when the actual task you are doing is so fast. So, back to Scala, for the second example where you manipulate a List . Now, Scala's List is much more like Java's LinkedList than the grossly misnamed ArrayList . Each element of a List is composed of an object called Cons , which has 16 bytes, with a pointer to the element and a pointer to another list. So, a List of 10.000.000 elements is composed of 160.000.000 elements spread all around memory in groups of 16 bytes, plus 80.000.000 bytes spread all around memory in groups of 8 bytes. So what was true for ArrayList is even more so for List Finally, Range . A Range is a sequence of integers with a lower and an upper boundary, plus a step. A Range of 10.000.000 elements is 40 bytes: three ints (not generic) for lower and upper bounds and step, plus a few pre-computed values ( last , numRangeElements ) and two other ints used for lazy val thread safety. Just to make clear, that's NOT 40 times 10.000.000: that's 40 bytes TOTAL. The size of the range is completely irrelevant, because IT DOESN'T STORE THE INDIVIDUAL ELEMENTS . Just the lower bound, upper bound and step. Now, because a Range is a Seq[Int] , it still has to go through boxing for most uses: an int will be converted into an Integer and then back into an int again, which is sadly wasteful. Cons Size Calculation So, here's a tentative calculation of Cons. First of all, read this article about some general guidelines on how much memory an object takes. The important points are: Java uses 8 bytes for normal objects, and 12 for object arrays, for "housekeeping" information (what's the class of this object, etc). Objects are allocated in 8 bytes chunks. If your object is smaller than that, it will be padded to complement it. I actually thought it was 16 bytes, not 8. Anyway, Cons is also smaller than I thought. Its fields are: public static final long serialVersionUID; // static, doesn't countprivate java.lang.Object scala$collection$immutable$$colon$colon$$hd;private scala.collection.immutable.List tl; References are at least 4 bytes (could be more on 64 bits JVM). So we have: 8 bytes Java header4 bytes hd4 bytes tl Which makes it only 16 bytes long. Pretty good, actually. In the example, hd will point to an Integer object, which I assume is 8 bytes long. As for tl , it points to another cons, which we are already counting. I'm going to revise the estimates, with actual data where possible.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8027801', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/283998/']}
jdg_376287
stackexchange
llm_judgeable_groundtruth_similarity
23615
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: When experiencing alpha decay, atoms shed alpha particles made of 2 protons and 2 neutrons. Why can't we have other types of particles made of more or less protons? Now provide the response and nothing else.
The reason why alpha particles heavily dominate as the proton-neutron mix most likely to be emitted from most (not all!) radioactive components is the extreme stability of this particular combination. That same stability is also why helium dominates after hydrogen as the most common element in the universe, and why other higher elements had to be forged in the hearts and shells of supernovas in order to come into existence at all. Here's one way to think of it: You could in principle pop off something like helium-3 from an unstable nucleus - that's two protons and one neutron - and very likely give a net reduction in nuclear stress. But what would happen is this: The moment the trio started to depart, a neutron would come screaming in saying look how much better it would be if I joined you!! And the neutron would be correct: The total reduction in energy obtained by forming a helium-4 nucleus instead of helium-3 would in almost any instance be so superior that any self-respecting (and energy-respecting) nucleus would just have to go along with the idea. Now all of what I just said can (and in the right circumstances should) be said far more precisely in terms of issues such as tunneling probabilities, but it would not really change the message much: Helium-4 nuclei pop off preferentially because they are so hugely stable that it just makes sense from a stability viewpoint for them to do so. The next most likely candidates are isolated neutrons and protons, incidentally. Other mixed versions are rare until you get up into the fission range, in which case the whole nucleus is so unstable that it can rip apart in very creative ways (as aptly noted by the earlier comment).
{}
{'log_upvote_score': 6, 'links': ['https://physics.stackexchange.com/questions/23615', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/1559/']}
jdg_376288
stackexchange
llm_judgeable_groundtruth_similarity
45411219
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to understand how the reference count of the managed object in a shared_ptr is affected when a shared_ptr is assigned to another. I came across the following statement in C++ primer, 5th edition , that: For example, the counter associated with a shared_ptr is incremented when ... we use it as the right-hand operand of an assignment ... The counter is decremented when we assign a new value to the shared_ptr ... As an example its shown there: auto p = make_shared<int>(42); // object to which p points has one userauto q(p); // p and q point to the same object // object to which p and q point has two usersauto r = make_shared<int>(42); // int to which r points has one userr = q; // assign to r, making it point to a different address // increase the use count for the object to which q points // reduce the use count of the object to which r had pointed // the object r had pointed to has no users; that object is automatically freed When I run a similar code, the above is not my observation: Code: #include<iostream>#include<memory>int main(){ std::shared_ptr<int> sh1 = std::make_shared<int>(1); std::shared_ptr<int> sh2 = std::make_shared<int>(2); sh2 = sh1; std::cout << "sh1 use count: " << sh1.use_count() << std::endl; std::cout << "sh2 use count: " << sh2.use_count() << std::endl; return 0;} Output : sh1 use count: 2 sh2 use count: 2 How can the use_count of sh2 also 2? Should not it be 0 as per the mentioned text above? Am I missing something here? Now provide the response and nothing else.
At first you had sh1.use_count=1 and sh2.use_count=1 . Now when you assign using sh2=sh1 , this is what happens: The counter of the object sh2 manages is decreased by one, because sh2 (the shared_ptr ) is going to take another pointer. Since that count is now 0, the object under its pointer, which is int(2) , is destroyed. Then sh2 is made to share the object that belongs to sh1 , and hence that object's counter is increased by one, so sh2.use_count=2 , and of course also sh1.use_count=2 , because both shared_ptr objects point to the same object, which is the int(1) .
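A small self-contained sketch of the same sequence, using only the names from the question; the values in the comments are what it prints:

#include <iostream>
#include <memory>

int main() {
    auto sh1 = std::make_shared<int>(1);   // int(1): use_count 1
    auto sh2 = std::make_shared<int>(2);   // int(2): use_count 1

    std::cout << sh1.use_count() << " " << sh2.use_count() << "\n";  // 1 1

    sh2 = sh1;  // int(2) loses its last owner and is destroyed;
                // sh1 and sh2 now both own int(1)

    std::cout << sh1.use_count() << " " << sh2.use_count() << "\n";  // 2 2
    std::cout << *sh1 << " " << *sh2 << "\n";                        // 1 1
    return 0;
}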
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/45411219', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3595539/']}
jdg_376289
stackexchange
llm_judgeable_groundtruth_similarity
174906
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have two sets of vectors, (* SET_1: *)A0 = {-23.645, 8.761, 19.186};A1 = {8.909, 11.977, -26.572};A2 = { 6.098, 8.537, 31.834};A3 = {9.866, -32.131, -24.569};(* SET_2: *)B0 = {-13.9469, 28.7731, -3.21699};B1 = {-13.8077, -27.3636, 0.329735};B2 = {2.94231, 23.5992, 23.8511};B3 = {24.8123, -25.0087, -20.9639}; These two sets are such that A0 = B0, A1=B1, A2=B2, and A3=B3 (Let's say within experimental error) but only rotated in 3d space.Now is there any way to know the rotation axis and angle of this transformation so that I can use this information for future use. Now provide the response and nothing else.
I think you want a single transformation function for all points, rather than one for each point independently as kglr's answer currently shows. set1 = {A0, A1, A2, A3};set2 = {B0, B1, B2, B3};tF2 = FindGeometricTransform[ set1, set2 , TransformationClass -> "Rigid" , Method -> "FindFit" ][[2]];Graphics3D[{ {Red, PointSize[Large], Point[set1]}, {Opacity[.15, Blue], Sphere[tF2@set2, 2]}}]
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/174906', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/57480/']}
jdg_376290
stackexchange
llm_judgeable_groundtruth_similarity
259664
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Pearson correlation computes linear association between variables and Spearman computes monotonic relations that could be non-linear. I computed Pearson and Spearman correlation between different features. Both of them gave similar values. What does this indicate. How can a linear method give similar values to a non-linear method. Now provide the response and nothing else.
Boxplots Here is a relevant section from Hoaglin, Mosteller and Tukey (2000): Understanding Robust and Exploratory Data Analysis. Wiley . Chapter 3, "Boxplots and Batch Comparison", written by John D. Emerson and Judith Strenio (from page 62): [...] Our definition of outliers as data values that are smaller than $F_{L}-\frac{3}{2}d_{F}$ or larger than $F_{U}+\frac{3}{2}d_{F}$ is somewhat arbitrary, but experience with many data sets indicates that this definition serves well in identifying values that may require special attention.[...] $F_{L}$ and $F_{U}$ denote the first and third quartile, whereas $d_{F}$ is the interquartile range (i.e. $F_{U}-F_{L}$ ). They go on and show the application to a Gaussian population (page 63): Consider the standard Gaussian distribution, with mean $0$ and variance $1$ . We look for population values of this distribution that are analogous to the sample values used in the boxplot. For a symmetric distribution, the median equals the mean, so the population median of the standard Gaussian distribution is $0$ . The population fourths are $-0.6745$ and $0.6745$ , so the population fourth-spread is $1.349$ , or about $\frac{4}{3}$ . Thus $\frac{3}{2}$ times the fourth-spread is $2.0235$ (about $2$ ). The population outlier cutoffs are $\pm 2.698$ (about $2\frac{2}{3}$ ), and they contain $99.3\%$ of the distribution. [...] So [they] show that if the cutoffs are applied to a Gaussian distribution, then $0.7\%$ of the population is outside the outlier cutoffs; this figure provides a standard of comparison for judging the placement of the outlier cutoffs [...]. Further, they write [...] Thus we can judge whether our data seem heavier-tailed than Gaussian by how many points fall beyond the outlier cutoffs. [...] They provide a table with the expected proportion of values that fall outside the outlier cutoffs (labelled "Total % Out"): So these cutoffs where never intended to be a strict rule about what data points are outliers or not. As you noted, even a perfect Normal distribution is expected to exhibit "outliers" in a boxplot. Outliers As far as I know, there is no universally accepted definition of outlier. I like the definition by Hawkins (1980): An outlier is an observation which deviates so much from the other observations as to arouse suspicions that it was generated by a different mechanism. Ideally, you should only treat data points as outliers once you understand why they don't belong to the rest of the data. A simple rule is not sufficient. A good treatment of outliers can be found in Aggarwal (2013). References Aggarwal CC (2013): Outlier Analysis. Springer. Hawkins D (1980): Identification of Outliers. Chapman and Hall. Hoaglin, Mosteller and Tukey (2000): Understanding Robust and Exploratory Data Analysis. Wiley.
{}
{'log_upvote_score': 6, 'links': ['https://stats.stackexchange.com/questions/259664', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/122285/']}
jdg_376291
stackexchange
llm_judgeable_groundtruth_similarity
14676395
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to get my head round how Java selects which method is executed: //Example 1 prints Square:add(Figure)Figure fs = new Square();fs.add(fs);//Example 2 prints Square:add(Figure)Rectangle rs = new Square();rs.add(fs);//Example 3 prints Rectangle:add(Rectangle). Expected Square:add(Square)rs.add(new Square());//Example 4 prints Rectangle:add(Rectangle). Expected Square:add(Figure)Square ss = new Square();ss.add(rs);class Figure{ public void add(Figure f){ System.out.println("Figure:add(Figure)"); }}class Rectangle extends Figure{ @Override public void add(Figure f){ System.out.println("Rectangle:add(Figure)"); } public void add(Rectangle r){ System.out.println("Rectangle:add(Rectangle)"); }}class Square extends Rectangle{ @Override public void add(Figure f){ System.out.println("Square:add(Figure)"); } public void add(Square s){ System.out.println("Square:add(Square)"); }} What I've learned here is Method signature gets determined based on compile time data types Actual method invoked depends on the dynamic type of the object the method is called on. Based on that, the result of the first two calls is as expected. However, I don't understand the result of example 3 and 4. It seems to be specified in the java language specification , but I don't understand it. Now provide the response and nothing else.
However, I don't understand the result of example 3 and 4. Okay, let's look at them individually. Example 3 //Example 3 prints Rectangle:add(Rectangle). Expected Square:add(Square)rs.add(new Square()); The important parts are the compile-time types of the expressions rs and new Square() . rs is only declared as Rectangle , so the compiler will look at the methods declared by Rectangle and its superclasses: public void add(Figure f)public void add(Rectangle r) The type of the expression new Square() is Square , so both methods are applicable - but the second one is more specific . So the compiler will call add(Rectangle) on the object that rs refers to. That's it for the compile-time side. At execution time, the value of rs refers to an instance of Square - but Square doesn't override add(Rectangle) so the method picked is the implementation in Rectangle : public void add(Rectangle r){ System.out.println("Rectangle:add(Rectangle)"); } Example 4 //Example 4 prints Rectangle:add(Rectangle). Expected Square:add(Figure)Square ss = new Square();ss.add(rs); Again, let's consider the compile-time types involved... ss is of type Square , and rs is of type Rectangle (compile-time types, remember). The methods declared by Square and its superclasses are: public void add(Figure f)public void add(Rectangle r)public void add(Square s) As the compile-time type of rs is only Rectangle (not Square ), the first two methods are applicable, but the third isn't. Therefore, again, add(Rectangle) is picked at compile time (as it's more specific than add(Figure) ). Again, the execution time type of ss is Square , which doesn't override add(Rectangle) , so the implementation in Rectangle is used. Let me know if anything in here is confusing - if you can be specific about which part, that would be great.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14676395', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2037769/']}
jdg_376292
stackexchange
llm_judgeable_groundtruth_similarity
14072456
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When running Page Speed in Google Chrome it suggests to optimize/compress the images. These images are mostly uploaded by users, so I would need to optimize them during uploading. What I find about optimizing jpeg images with php is something like using the following GD functions: getimagesize()imagecreatefromjpeg()imagejpeg() Since I am resizing the images after upload I'm already pulling the image through these functions and in addition I use imagecopyresampled() after imagecreatefromjpeg() to resize it. But then, Page Speed is still telling me these images can be optimized. How can I accomplish this optimisation in a php script? Set the quality lower in imagejpeg() doesn't make a difference either. Now provide the response and nothing else.
The imagejpeg function is where you assign the quality. If you're already setting that to an appropriate value then there is little else you can do. Page speed probably considers all images above a certain size to be "needing compression", perhaps just ensure they are all as small as reasonable (in terms of height/width) and compressed. You can find more about page speed and it's compression suggestions on the pagespeed docs http://code.google.com/speed/page-speed/docs/payload.html#CompressImages which describes some of the techniques/tools to compress appropriately. I've also just read the following: Several tools are available that perform further, lossless compression on JPEG and PNG files, with no effect on image quality. For JPEG, we recommend jpegtran or jpegoptim (available on Linux only; run with the --strip-all option). For PNG, we recommend OptiPNG or PNGOUT . So perhaps (if you really want to stick to Google's suggestions) you could use PHP's exec to run one of those tools on files as they are uploaded. To compress with php you do the following (sounds like you are already doing this): Where $source_url is the image, $destination_url is where to save and $quality is a number between 1 and 100 choosing how much jpeg compression to use. function compressImage($source_url, $destination_url, $quality) { $info = getimagesize($source_url); if ($info['mime'] == 'image/jpeg') $image = imagecreatefromjpeg($source_url); elseif ($info['mime'] == 'image/gif') $image = imagecreatefromgif($source_url); elseif ($info['mime'] == 'image/png') $image = imagecreatefrompng($source_url); //save file imagejpeg($image, $destination_url, $quality); //return destination file return $destination_url;}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14072456', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1348455/']}
jdg_376293
stackexchange
llm_judgeable_groundtruth_similarity
45239409
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was reading some code written in Golang on Github and found a very interesting piece of code. I simplified it to be clear. func Insert(docs ...interface{}) (err error) { for i := 0; i < 3; i++ { err = fmt.Errorf("") if err.Error()!="EOF" { return } } return } I'm very confused about empty return here... How it works? Does he return nil as error or breaks for loop? I understand that this question looks dummy, but I cannot find any info on this in go docs... Also, I don't understand how we can return err, which is, as I understood, declared somehow in return. Does (err error) means that we already have an error variable available in our func which is used as default return value if none specified? Why then we implicitly make return err at the end of func ? I'll be very gratefull for explanation. Now provide the response and nothing else.
The function uses a "named" return value. From the spec on return statements: The expression list may be empty if the function's result type specifies names for its result parameters. The result parameters act as ordinary local variables and the function may assign values to them as necessary. The "return" statement returns the values of these variables. Regardless of how they are declared, all the result values are initialized to the zero values for their type upon entry to the function. A "return" statement that specifies results sets the result parameters before any deferred functions are executed. Using named returns allows you to save some code on manually allocating local variables, and can sometimes clean up messy if/else statements or long lists of return values. func a()(x []string, err error){ return} is really just shorthand for func a() ([]string,error){ var x []string var err error return x,err} Its a bit shorter, and I agree that it may be less obvious. Named returns are sometimes needed, as it allows things like accessing them inside a deferred function, but the naked return is just syntactic sugar as far as I can tell, and is never strictly required. One place I see it commonly is in error return cases in functions that have many return values. if(err != nil){ return}return a,b,c,nil is easier than if(err != nil){ return nil,nil,nil,err}return a,b,c,nil when you have to write it a bunch of times. And you don't have to modify those returns if you change the signature to have additional "real" return values. Most places I am using them in the codebase I just searched, they definitely seem to be hiding other smells, like overly complex multi-purpose functions, too deep if/else nesting and stuff like that.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/45239409', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1520097/']}
jdg_376294
stackexchange
llm_judgeable_groundtruth_similarity
4597
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: According to the following article: http://www.theregister.co.uk/2011/06/14/ice_age/ Sunspot activity is rapidly declining, and when this has happened in the past this has led to a mini ice age. I'm pretty sceptical about it myself, but is there any truth to it at all? Should I start stocking up on firewood and winter coats, or is this just propaganda from climate change deniers? Now provide the response and nothing else.
As Craig Stuntz said in the comments, little of what the article says seems to actually come from the one citation it gives . If you read over that article published by NASA, and also the other NASA article it links to , you'll quickly see the Register author drew some hefty/seemingly unsupported conclusions. At no point in either article do they claim we're heading for another Maunder Minimum , so... that's kind of that. We're certainly in a low sunspot cycle; that much is pretty well established. Here are a few diagrams showing that fact from the NASA articles: Sunspot activity over the last 250 years: Sunspot predictions for the next decade (the second NASA article talks about how the predictions are made, nice details): As you can see, NASA isn't really predicting we're heading for another Maunder Minimum. It'll most likely be one of the lowest cycles in recent history, but I sincerely doubt there's much need to worry. Now, on an interesting note: The correlation between the Maunder Minimum and the Little Ice Age (LIA) . It's certainly there, but it's also completely unknown if it was correlation or causation (or just dumb random chance). In the Wiki articles on the Maunder Minimum and the LIA, you can see that there definitely seems to be some relationship between lowered solar activity and cooler climatic periods, but no one really understands why. (It looks like the LIA could have happened due to volcanic activity, or even a slowdown of the thermohaline circulation .) But what no one seems to have looked at is the relationship between the sunspot cycle and the various cooling periods within the LIA. NASA defines the LIA as: A cold period that lasted from about A.D. 1550 to about A.D. 1850 in Europe, North America, and Asia. This period was marked by rapid expansion of mountain glaciers, especially in the Alps, Norway, Ireland, and Alaska. There were three maxima, beginning about 1650, about 1770, and 1850, each separated by slight warming intervals. If you look at the Maunder Minimum data (from the original NASA article), you see this sunspot cycle: So the Minimum corresponds to the 1650 cool period. But sunspots sure don't seem to account for the 1770 or the 1850 periods. All in all, there's something going on here that we don't fully understand. But it really doesn't look like sunspots are the culprit. TL;DR: No. Well, probably not. Edit: Linked to in comments by Larian LeQuella, here's a great Bad Astronomy article discussing all this and more. Hooray, same conclusions!
{}
{'log_upvote_score': 5, 'links': ['https://skeptics.stackexchange.com/questions/4597', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/3658/']}
jdg_376295
stackexchange
llm_judgeable_groundtruth_similarity
14917770
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to find the biggest centered square from a portrait or a landscape image scaled to a size. E.g. if I get an image of size 1200x800, I need to get the centered square down to size 300x300. Now provide the response and nothing else.
I found an answer on this question on stackoverflow which has been widely copied. However that answer is incorrect, so want to post the correct answer which is as follows: + (UIImage*) cropBiggestCenteredSquareImageFromImage:(UIImage*)image withSide:(CGFloat)side{ // Get size of current image CGSize size = [image size]; if( size.width == size.height && size.width == side){ return image; } CGSize newSize = CGSizeMake(side, side); double ratio; double delta; CGPoint offset; //make a new square size, that is the resized imaged width CGSize sz = CGSizeMake(newSize.width, newSize.width); //figure out if the picture is landscape or portrait, then //calculate scale factor and offset if (image.size.width > image.size.height) { ratio = newSize.height / image.size.height; delta = ratio*(image.size.width - image.size.height); offset = CGPointMake(delta/2, 0); } else { ratio = newSize.width / image.size.width; delta = ratio*(image.size.height - image.size.width); offset = CGPointMake(0, delta/2); } //make the final clipping rect based on the calculated values CGRect clipRect = CGRectMake(-offset.x, -offset.y, (ratio * image.size.width), (ratio * image.size.height)); //start a new context, with scale factor 0.0 so retina displays get //high quality image if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) { UIGraphicsBeginImageContextWithOptions(sz, YES, 0.0); } else { UIGraphicsBeginImageContext(sz); } UIRectClip(clipRect); [image drawInRect:clipRect]; UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext(); return newImage;} Incorrect answer which I found earlier is as follows: + (UIImage*) cropBiggestCenteredSquareImageFromImage:(UIImage*)image withSide:(CGFloat)side{ // Get size of current image CGSize size = [image size]; if( size.width == size.height && size.width == side){ return image; } CGSize newSize = CGSizeMake(side, side); double ratio; double delta; CGPoint offset; //make a new square size, that is the resized imaged width CGSize sz = CGSizeMake(newSize.width, newSize.width); //figure out if the picture is landscape or portrait, then //calculate scale factor and offset if (image.size.width > image.size.height) { ratio = newSize.width / image.size.width; delta = (ratio*image.size.width - ratio*image.size.height); offset = CGPointMake(delta/2, 0); } else { ratio = newSize.width / image.size.height; delta = (ratio*image.size.height - ratio*image.size.width); offset = CGPointMake(0, delta/2); } //make the final clipping rect based on the calculated values CGRect clipRect = CGRectMake(-offset.x, -offset.y, (ratio * image.size.width) + delta, (ratio * image.size.height) + delta); //start a new context, with scale factor 0.0 so retina displays get //high quality image if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) { UIGraphicsBeginImageContextWithOptions(sz, YES, 0.0); } else { UIGraphicsBeginImageContext(sz); } UIRectClip(clipRect); [image drawInRect:clipRect]; UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext(); return newImage;} The problem with this code is that it does not crop correctly. Both the codes can be tried on following image: https://s3.amazonaws.com/anandprakash/ImageWithPixelGrid.jpg Correct Algo generates following image on the above base url: https://s3.amazonaws.com/anandprakash/ScreenshotCorrectAlgo.png Wrong Algo generates following image on the above base url - notice the extra 50px on the width on each side. 
https://s3.amazonaws.com/anandprakash/ScreenshotWrongAlgo.png
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14917770', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/744401/']}
jdg_376296
stackexchange
llm_judgeable_groundtruth_similarity
12944130
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to create a function that will give the average of the book prices based on books' subjects. The rules for this function are: a) If the argument is null, return a null b) If the argument does not match any topic id that we have in the topics table, return a value of -2 c) If the argument matches a topic id that we have in the topics table, but we do not have any books with that topic, return a value of -1 create function AvgPriceByTopic(p_subject varchar(20))RETURNS decimal(8,2)begindeclare v_avgPrice decimal(8,2);declare v_avgListPrice decimal(8,2);if p_subject is null then set v_avgPrice := null;elseif exists ( select avg(list_price) into v_avgListPrice from books where topic_id = p_subject group by book_id limit 1 ) then set v_avgPrice := v_avgListPrice;else set v_avgPrice := -2;end if;return v_avgPrice;end;# I'm getting an error that states: ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'into v_avgListPrice from books' at line 11 Any suggestions to get rid of this error? Sometimes I have trouble with the syntax... Thanks ahead of time. Now provide the response and nothing else.
The bad news: the Android API sadly won't allow you to read EXIF data from a Stream , only from a File . ExifInterface doesn't have a constructor with an InputStream . So you must parse the JPEG content yourself. The good news: an API exists in pure Java for this. You can use this one: https://drewnoakes.com/code/exif/ It's Open Source , published under Apache Licence 2 and available as a Maven package . There is a constructor with an InputStream : public ExifReader(java.io.InputStream is) You can build an InputStream backed by your byte[] using a ByteArrayInputStream like this: InputStream is = new ByteArrayInputStream(decodedBytes);
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12944130', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1682055/']}
jdg_376297
stackexchange
llm_judgeable_groundtruth_similarity
372048
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I know this is from a comic famous for taking advantage of certain analytical tendencies , but it actually looks kind of reasonable after a few minutes of staring. Can anyone outline for me what this " modified Bayes theorem " is doing? Now provide the response and nothing else.
Well by distributing the $P(H)$ term, we obtain $$P(H|X) = \frac{P(X|H)P(H)}{P(X)} P(C) + P(H) [1 - P(C)],$$ which we can interpret as the Law of Total Probability applied to the event $C =$ "you are using Bayesian statistics correctly." So if you are using Bayesian statistics correctly, then you recover Bayes' law (the left fraction above) and if you aren't, then you ignore the data and just use your prior on $H$ . I suppose this is a rejoinder against the criticism that in principle Bayesians can adjust the prior to support whatever conclusion they want, whereas Bayesians would argue that this is not how Bayesian statistics actually works. (And yes, you did successfully nerd-snipe me. I'm neither a mathematician nor a physicist though, so I'm not sure how many points I'm worth.)
{}
{'log_upvote_score': 8, 'links': ['https://stats.stackexchange.com/questions/372048', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/86176/']}
jdg_376298
stackexchange
llm_judgeable_groundtruth_similarity
84207
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to replicate the figure below. Now a nice first approach to do this is by using Mathematica , by making some kind of contourplot with circles and just add a gradient to the contourstyle. Now however I'm not sure how to add a gradient to my lines, is this possible or not ? Any help would be greatly appreciated! Now provide the response and nothing else.
I presume you're plotting a Fresnel zone plate. DensityPlot[Sin[50 Sqrt[1 + x^2 + y^2]]^2, {x, -1, 1}, {y, -1, 1}, PlotPoints -> 100, ColorFunction -> GrayLevel, Frame -> None] And if you'd like to overlap two Fresnel plates, just use Manipulate : Manipulate[ DensityPlot[ Sin[50 Sqrt[1 + x^2 + y^2]]^2 + Sin[50 Sqrt[1 + (x + dx)^2 + y^2]]^2, {x, -1, 1}, {y, -1, 1}, PlotPoints -> 40, ColorFunction -> GrayLevel, Frame -> None], {dx, 0, .5, .01}]
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/84207', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/8195/']}
jdg_376299
stackexchange
llm_judgeable_groundtruth_similarity
20030473
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Quite possibly this repeats some of this SO question , but the code is overly-complicated and the OP hasn't added solution code. And this related question is no longer replicable. I'm trying to figure out how to combine rotations and translations in the right order. It's possible to rotate around the origin as in this example . But when we follow this with a translation the original rotation is undone . Is it possible to structure this for correct sequential application? jsfidle code: HTML: <script src="http://d3.geotheory.co.uk/d3-transform.js"></script> SVG: var svg = d3.select("body").append("svg") .attr("width", 400) .attr("height", 300);//Draw the Rectanglevar rect = svg.append("rect") .attr("x", 0).attr("y", 0) .attr("width", 50).attr("height", 100) .style("fill", "purple");var rotate = d3.svg.transform().rotate(-45);var translate = d3.svg.transform().translate(200, 100);rect.attr('transform', rotate);var rect2 = rect.attr('transform', rotate);rect2.attr('transform', translate); Now provide the response and nothing else.
You're creating two different transformations. Assigning one doesn't add to the other. That is, in doing rect2.attr('transform', translate); you're undoing the first one, as it is overwritten. To have both, add them both to one transform, e.g. var rotateTranslate = d3.svg.transform().rotate(-45).translate(200, 100); rect2.attr('transform', rotateTranslate); To do this dynamically, you'll need to do something like this: .attr("transform", function() { return d3.svg.transform() .translate(200, 100) .rotate(-45) .translate(-d3.select(this).attr("width")/2, -d3.select(this).attr("height")/2)(); }) Complete jsfiddle here.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20030473', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1156245/']}
jdg_376300
stackexchange
llm_judgeable_groundtruth_similarity
4576927
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need help converting a string that contains a number in scientific notation to a double. Example strings:"1.8281e-009""2.3562e-007""0.911348" I was thinking about just breaking the number into the number on the left and the exponent and than just do the math to generate the number; but is there a better/standard way to do this? Now provide the response and nothing else.
PHP is dynamically typed, meaning it has to parse values to determine their types (recent versions of PHP have type declarations). In your case, you may simply perform a numerical operation to force PHP to consider the values as numbers (and it understands the scientific notation x.yE-z). Try for instance foreach (array("1.8281e-009","2.3562e-007","0.911348") as $a) { echo "String $a: Number: " . ($a + 1) . "\n"; } just adding 1 (you could also subtract zero) will make the strings become numbers, with the right amount of decimals. Result: String 1.8281e-009: Number: 1.0000000018281 String 2.3562e-007: Number: 1.00000023562 String 0.911348: Number: 1.911348 You might also cast the result using (float) $real = (float) "3.141592e-007";
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4576927', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/552112/']}
jdg_376301
stackexchange
llm_judgeable_groundtruth_similarity
483157
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: When does a projective algebraic surface have an infinite automorphism group? Is there a simple criterion, or at least a sufficient condition? Now provide the response and nothing else.
Well, let us start with a baby case. Consider a projective smooth curve $C$ over an algebraically closed field $K$. Let $T_C$ be the tangent bundle on $C$. The following facts are well known. If $C=\mathbb P^1$, then $\mathrm{Aut}(C)=\mathrm{PGL}(2,K)$ is an algebraic group of dimension $3$. On the other hand, $\dim_K H^0(C, T_C)=3$. If $C$ has genus $1$, then $\mathrm{Aut}(C)$ is an algebraic group whose connected component is $C$ (once an origin is chosen), hence has dimension $1$. On the other hand, $\dim_K H^0(C, T_C)=\dim_K H^0(C, O_C)=1$ ($T_C\cong O_C$ in this case). If $C$ has genus $g\ge 2$, then $\mathrm{Aut}(C)$ is a finite (algebraic) group, hence has dimension $0$. On the other hand $\dim_K H^0(C, T_C)=0$ because $\deg T_C=2-2g <0$. In higher dimension the picture is a little more complicated. Let $X$ be a projective variety over $K$. The group $\mathrm{Aut}(X)$ is the points of a group scheme over $K$. This is proved using Hilbert schemes and by viewing an automorphism as a closed subscheme of $X\times X$ via its graph. See e.g. the excellent book of Kollár, "Rational curves on algebraic varieties", I.2.10. The group scheme is locally of finite type and the tangent space at the origin has dimension $\dim_K H^0(X, T_X)$ (Exercise I.2.16.4, use Thm I.2.16). 1st conclusion: if $K$ has characteristic $0$, as any group scheme locally of finite type is automatically smooth, the dimension of $\mathrm{Aut}(X)$ is $\dim_K H^0(X, T_X)$. In positive characteristic, the former is bounded by the latter. In characteristic $0$, in order for the automorphism group to be finite, $H^0(X, T_X)$ must vanish. The converse is not true because $\mathrm{Aut}(X)$ can be a discrete infinite group. This of course is not a very satisfactory answer to your question. But: If $X$ is of general type (curves of general type are those of genus $\ge 2$), then $\mathrm{Aut}(X)$ is always finite. This results from a theorem of Kobayashi-Ochiai in characteristic zero (generalized by Deschamps-Ménégaux in positive characteristic). This theorem is a generalization to higher dimension of the de Franchis theorem for morphisms of curves. There is a lot of work (over $\mathbb C$) on the bound of the order of $\mathrm{Aut}(X)$ when $X$ is of general type. But this is another story.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/483157', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/4880/']}
jdg_376302
stackexchange
llm_judgeable_groundtruth_similarity
13643618
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I just noticed magento enterprise and community both edition uses different algorithms for storing password. I know community edition uses md5. Can anyone tell me which mechanism is used in enterprise edition and how can we decrypt enterprise password if we want to migrate to community edition? Now provide the response and nothing else.
I think it's in your app/etc/local.xml or app/etc/enterprise.xml on Magento EE. The decrypt function on Magento Enterprise Edition: /** * Decrypt a string * * @param string $data * @return string */public function decrypt($data){ return str_replace("\x0", '', trim($this->_getCrypt()->decrypt(base64_decode((string)$data))));} and /** * Instantiate crypt model * * @param string $key * @return Varien_Crypt_Mcrypt */protected function _getCrypt($key = null){ if (!$this->_crypt) { if (null === $key) { $key = (string)Mage::getConfig()->getNode('global/crypt/key'); } $this->_crypt = Varien_Crypt::factory()->init($key); } return $this->_crypt;} It seems to be the same function on Enterprise Edition and Community Edition. You should ask the Magento Enterprise Edition owner for the crypt key and decrypt it with CE. That should be fine, because I've looked through Magento Enterprise Edition's code and it is the same as Community Edition's (for encryption/decryption). Added after comment 1: /** * Hash a string * * @param string $data * @return string */public function hash($data){ return md5($data);}/** * Validate hash against hashing method (with or without salt) * * @param string $password * @param string $hash * @return bool * @throws Exception */public function validateHash($password, $hash){ $hashArr = explode(':', $hash); switch (count($hashArr)) { case 1: return $this->hash($password) === $hash; case 2: return $this->hash($hashArr[1] . $password) === $hashArr[0]; } Mage::throwException('Invalid hash.');}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13643618', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1383987/']}
jdg_376303
stackexchange
llm_judgeable_groundtruth_similarity
653990
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Say I have: int someValue = 42; Now I want to convert that int value to a String. Which way is more efficient? // OneString stringValue = Integer.toString(someValue);// TwoString stringValue = String.valueOf(someValue);// ThreeString stringValue = someValue + ""; I am just curious if there is any real difference or one is better than the other? Now provide the response and nothing else.
tested it for 10m assignments of the number 10 One:real 0m5.610suser 0m5.098ssys 0m0.220sTwo:real 0m6.216suser 0m5.700ssys 0m0.213sThree:real 0m12.986suser 0m11.767ssys 0m0.489s One seems to win Edit:JVM is standard '/usr/bin/java' under Mac OS X 10.5 java version "1.5.0_16"Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_16-b06-284)Java HotSpot(TM) Client VM (build 1.5.0_16-133, mixed mode, sharing) More edit: Code as requested public class One { public static void main(String[] args) { int someValue = 10; for (int i = 0; i < 10000000; i++) { String stringValue = Integer.toString(someValue); } }} case 2 and 3 similarly run using javac *.java; time java One; time java Two; time java Three
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/653990', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/65230/']}
jdg_376304
stackexchange
llm_judgeable_groundtruth_similarity
6164817
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Obviously template libraries need to be header only, but for non-templates, when should you make things header-only? Now provide the response and nothing else.
If you think your non-template library could be header-only, consider dividing it into two files anyway, then providing a third file that includes both the .h and the .cpp (with an include guard). Then anyone who uses your library in a lot of different TUs, and suspects that this might be costing a lot of compile time, can easily make the change to test it. Once you know users have the option which way to use the library, the answer probably becomes "offer that option whenever you possibly can". So pretty much any time that including it from multiple TUs wouldn't violate the ODR. For instance, if your non- static free functions refer to static globals, then you're out of luck, since the different definitions of that function in different TUs would refer to different objects by the same name, which is an ODR-violation.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6164817', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/82320/']}
jdg_376305
stackexchange
llm_judgeable_groundtruth_similarity
1873997
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Hi there cakephp experts! I am looking for you help on a dyanmic dropdown list with dyanamic optgroup. Suppose I have two tables: countries: id, country_name, counties: id, county_name, country_id Now, I want to display a dropdown list with optgroups populated frm countries table and list items populated from counties. country_name1 county_name1 county_name2country_name2 county_name3 county_name4country_name3 county_name4 county_name5....... Thanks in advance and appreciate any help!! Now provide the response and nothing else.
Cake's FormHelper::input method will render a select tag with optgroups if the options are correct, e.g. echo $form->input('county'); provided there is a variable available in the view called $counties which contains data in the following format: $counties = array( 'Country Name 1' => array( 'county_1_id' => 'County 1 Name', 'county_2_id' => 'County 2 Name', 'county_3_id' => 'County 3 Name', ), 'Country Name 2' => array( 'county_4_id' => 'County 4 Name', 'county_5_id' => 'County 5 Name', 'county_6_id' => 'County 6 Name', ),); So, in your controller, do something like: $this->set('counties', ClassRegistry::init('Country')->getCountiesByCountry()); and in your Country Model, do something like: function getCountiesByCountry() { $countries = $this->find('all', array('contain' => array('County'))); $return = array(); foreach ($countries as $country) { foreach ($country['County'] as $county) { $return[$country['Country']['name']][$county['id']] = $county['name']; } } return $return;}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1873997', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/227989/']}
jdg_376306
stackexchange
llm_judgeable_groundtruth_similarity
2645801
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I prefer to use scripting languages for short tasks, anything such as a really simple http bot, bulk importing/exporting data to/from somewhere, etc etc... Basic throw-away scripts and simple stuff. The point being, that a scripting language is just an efficient tool to write quick programs with. As for my understanding of Groovy at this point... If you were to program in Groovy, and you wan't to write a quick script, wouldn't you be forced to going back to regular java syntax (and we know how that can be convoluted compared to a scripting language) in order to do anything more complicated? For example, if I want to do some http scripting, wouldn't I just be right back at using java syntax to invoke Commons HttpClient? To me, the point of a scripting language is for quickly typed and less forced constructs. And here is another thing, it doesn't seem that there is any incentive for groovy based libraries to be developed when there are already so many good java one's out there, thus making groovy appear to be a Java dependent language with minor scripting features. So right now I am wondering if I could switch to Groovy as a scripting language or continue to use a more common scripting language such as Perl, Python or Ruby. Now provide the response and nothing else.
@Zombies, let me show you a quick example from a script I wrote recently: def fetch(build, toFile) { new FTPClient().with { connect ftpServer enterLocalPassiveMode() login ftpUser, ftpPassword changeWorkingDirectory "/var/staging/revision-${build}" fileType = FTPClient.BINARY_FILE_TYPE toFile.withOutputStream { ostream -> retrieveFile "build-${build}.zip", ostream } disconnect() }} It uses commons-net API, but I think you would agree that it has a much clearer syntax than comparable Java program. So I don't think using the Java APIs defeats the purpose of having a scripting language. Furthermore, it helps you leverage your existing knowledge of the Java APIs, so is a very pragmatic approach.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2645801', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/17675/']}
jdg_376307
stackexchange
llm_judgeable_groundtruth_similarity
129475
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to learn about PHP and SQL Injections, so I have created a login form where users introduce their username and password and click the Login button. Later in the server I get the passed parameters using $_POST global variable. This is the code used to check if user credentials are valid: $email = $_POST['u_email'];$password = $_POST['u_password'];$query = "SELECT * FROM User WHERE EMAIL='$email' AND PASSWORD='$password'";$result = $mysqli->query($query);$counter = mysqli_num_rows($result);if($counter == 1)//Correct Login The problem is that I think that if a user enters the following text in the username and password fields, the login should be done (although they are not valid values because they do not exist in the DB). Username: xxx' OR '1'='1/*Password: */ I think so because that would be translated to: $query = "SELECT * FROM User WHERE EMAIL='xxx' OR '1'='1/*' AND PASSWORD='*/'"; In other words: $query = "SELECT * FROM User WHERE EMAIL='xxx' OR '1'='1'"; But the SQL Injection is not working, why? My DBMS is MySQL. Now provide the response and nothing else.
It's not working because of this: if($counter == 1) The injected OR '1'='1' is valid for every single row, and if there are multiple rows in the table, that $counter check will fail. To get around that, you could do something like this (if I remember my SQL correctly), with anything in $email and this in $password : ' UNION SELECT * FROM User LIMIT 1 -- Nothing will be returned from the first part, but the injected second half of the UNION will get a single arbitrary row.
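To see the row-count argument in action, here is a small self-contained Python sketch. It uses an in-memory SQLite database as a stand-in for the poster's MySQL setup (comment and quoting rules differ slightly between the two engines, e.g. MySQL wants a space after --), and the table contents are invented for the example:

```python
# Why `OR '1'='1'` fails the `$counter == 1` check while the UNION payload passes it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE User (EMAIL TEXT, PASSWORD TEXT)")
conn.executemany("INSERT INTO User VALUES (?, ?)",
                 [("a@x.com", "pw1"), ("b@x.com", "pw2"), ("c@x.com", "pw3")])

def login_rows(email, password):
    # Deliberately unsafe string interpolation, mirroring the PHP in the question.
    query = f"SELECT * FROM User WHERE EMAIL='{email}' AND PASSWORD='{password}'"
    return conn.execute(query).fetchall()

# Tautology payload: every row matches, so the count is 3, not 1 -> login check fails.
print(len(login_rows("xxx", "' OR '1'='1")))                            # 3

# UNION payload from the answer: exactly one arbitrary row -> count is 1 -> check passes.
print(len(login_rows("xxx", "' UNION SELECT * FROM User LIMIT 1 --")))  # 1
```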
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/129475', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/116916/']}
jdg_376308
stackexchange
llm_judgeable_groundtruth_similarity
38462074
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to delete data from several locations in the Firebase database simultaneously. The Firebase docs state: "The simplest way to delete data is to call removeValue on a reference to the location of that data. You can also delete by specifying nil as the value for another write operation such as setValue or updateChildValues. You can use this technique with updateChildValues to delete multiple children in a single API call." My code is let childUpdates = [path1 : nil, path2 : nil, path3 : nil, path4 : nil] ref.updateChildValues(childUpdates) All four paths are strings, but I get an error: "Type of expression is ambiguous without more context." I'd assume this occurs because of the nil values, since if I replace nil with anything else (such as an Int) the error disappears. What is the correct way to use updateChildValues to delete data from Firebase? We want it to work in a similar way to the removeValue() function in Firebase. The reason we would prefer to do this is because it can remove from multiple places in one call. Now provide the response and nothing else.
So the issue here is that ref.updateChildValues(childUpdates) requires a [String: AnyObject!] parameter to updateChildValues, and AnyObject! cannot be a nil (i.e. you can't use AnyObject? which is an optional that could be nil) However, you can do this let childUpdates = [path1 : NSNull(), path2 : NSNull(), path3 : NSNull(), path4 : NSNull()] Because AnyObject! is now an NSNull() object (not nil), and Firebase knows that NSNull is a nil value. Edit You can expand on this to also do multi-location updates. Suppose you have a structure items item_0 item_name: "some item 0" item_1 item_name: "some item 1" and you want update both item names. Here's the swift code. func updateMultipleValues() { let path0 = "items/item_0/item_name" let path1 = "items/item_1/item_name" let childUpdates = [ path0: "Hello", path1: "World" ] self.ref.updateChildValues(childUpdates) //self.ref points to my firebase} and the result is items item_0 item_name: "Hello" item_1 item_name: "World"
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/38462074', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5752859/']}
jdg_376309
stackexchange
llm_judgeable_groundtruth_similarity
3855699
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The project that I'm working on at the moment uses an IDisposable object in every method in a class. It has started getting tedious re-typing the using block at the start of every method, and was wondering if there was a way to specify a disposable variable for use in every method of the class? public static class ResourceItemRepository{ public static ResourceItem GetById(int id) { using (var db = DataContextFactory.Create<TestDataContext>()) { // Code goes here... } } public static List<ResourceItem> GetInCateogry(int catId) { using (var db = DataContextFactory.Create<TestDataContext>()) { // Code goes here... } } public static ResourceItem.Type GetType(int id) { using (var db = DataContextFactory.Create<TestDataContext>()) { // Code goes here... } }} Now provide the response and nothing else.
No, there's nothing particularly geared towards this. You could write: public static ResourceItem GetById(int id){ WithDataContext(db => { // Code goes here... });}// Other methods here, all using WithDataContext// Now the only method with a using statement:private static T WithDataContext<T>(Func<TestDataContext, T> function){ using (var db = DataContextFactory.Create<TestDataContext>()) { return function(db); }} I'm not sure that it would be particularly beneficial though. (Note that I've had to change it from Action<TestDataContext> in my original version to Func<TestDataContext, T> as you want to be able to return values from your methods.)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3855699', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/431780/']}
jdg_376310
stackexchange
llm_judgeable_groundtruth_similarity
925100
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This sample code works (I can write something in the file): from multiprocessing import Process, Queuequeue = Queue()def _printer(self, queue): queue.put("hello world!!")def _cmdDisp(self, queue): f = file("Cmd.log", "w") print >> f, queue.get() f.close() instead this other sample not: (errormsg: 'module' object is not callable) import Queuequeue = Queue()def _printer(self, queue): queue.put("hello world!!")def _cmdDisp(self, queue): f = file("Cmd.log", "w") print >> f, queue.get() f.close() this other sample not (I cannot write something in the file): import Queuequeue = Queue.Queue()def _printer(self, queue): queue.put("hello world!!")def _cmdDisp(self, queue): f = file("Cmd.log", "w") print >> f, queue.get() f.close() Can someone explain the differences? and the right to do? Now provide the response and nothing else.
For your second example, you already gave the explanation yourself--- Queue is a module, which cannot be called. For the third example: I assume that you use Queue.Queue together with multiprocessing . A Queue.Queue will not be shared between processes. If the Queue.Queue is declared before the processes then each process will receive a copy of it which is then independent of every other process. Items placed in the Queue.Queue by the parent before starting the children will be available to each child. Items placed in the Queue.Queue by the parent after starting the child will only be available to the parent. Queue.Queue is made for data interchange between different threads inside the same process (using the threading module). The multiprocessing queues are for data interchange between different Python processes . While the API looks similar (it's designed to be that way), the underlying mechanisms are fundamentally different. multiprocessing queues exchange data by pickling (serializing) objects and sending them through pipes. Queue.Queue uses a data structure that is shared between threads and locks/mutexes for correct behaviour.
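To make the difference concrete, here is a small self-contained sketch. It is written for Python 3, where the module is spelled queue rather than Queue as in the original post; the behaviour it demonstrates is exactly the one described above:

```python
# A queue.Queue is per-process state; a multiprocessing.Queue actually moves data
# between processes (by pickling objects and sending them through a pipe).
import multiprocessing
import queue

thread_q = queue.Queue()          # each process ends up with its own copy of this

def fill_thread_queue():
    thread_q.put("only visible inside the child process")

def fill_mp_queue(q):
    q.put("hello from the child process")

if __name__ == "__main__":
    p = multiprocessing.Process(target=fill_thread_queue)
    p.start()
    p.join()
    print(thread_q.empty())       # True: the child filled its own copy, not the parent's

    mp_q = multiprocessing.Queue()
    p = multiprocessing.Process(target=fill_mp_queue, args=(mp_q,))
    p.start()
    p.join()
    print(mp_q.get())             # "hello from the child process"
```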
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/925100', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/71333/']}
jdg_376311
stackexchange
llm_judgeable_groundtruth_similarity
50870652
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The main concept is showing documents or fields which contains the searched alphabet. The search bar gets the given input, it send to the _firebasesearch() , but in return nothing comes out, and the above image is my database structure, trying to figure out more than a week. CODE import 'dart:async';import 'package:flutter/material.dart';import 'package:cloud_firestore/cloud_firestore.dart';import 'package:flutter_search_bar/flutter_search_bar.dart';SearchBar searchBar;GlobalKey<ScaffoldState> _scaffoldKey = new GlobalKey<ScaffoldState>();class DisplayCourse extends StatefulWidget { @override _DisplayCourseState createState() => new _DisplayCourseState();}AppBar _buildAppBar(BuildContext context) { return new AppBar( title: new Text("FIREBASE QUERY"), centerTitle: true, actions: <Widget>[ searchBar.getSearchAction(context), ], );}class _DisplayCourseState extends State<DisplayCourse> { String _queryText; _DisplayCourseState() { searchBar = new SearchBar( onSubmitted: onSubmitted, inBar: true, buildDefaultAppBar: _buildAppBar, setState: setState, ); } void onSubmitted(String value) { setState(() { _queryText = value; _scaffoldKey.currentState.showSnackBar(new SnackBar( content: new Text('You have Searched something!'), backgroundColor: Colors.yellow, )); }); } @override Widget build(BuildContext context) { return new Scaffold( key: _scaffoldKey, appBar: searchBar.build(context), backgroundColor: Colors.red, body: _fireSearch(_queryText), ); }}Widget _fireSearch(String queryText) { return new StreamBuilder( stream: Firestore.instance .collection('courses') .where('title', isEqualTo: queryText) .snapshots(), builder: (context, snapshot) { if (!snapshot.hasData) return new Text('Loading...'); return new ListView.builder( itemCount: snapshot.data.documents.length, itemBuilder: (context, index) => _buildListItem(snapshot.data.documents[index]), ); }, );}Widget _buildListItem(DocumentSnapshot document) { return new ListTile( title: document['title'], subtitle: document['subtitle'], );} the main concept is showing document sor fields which contains the searched alphabet the search bar gets the given input, it send to the _firebasesearch(),but in return nothing comes out, and the above image is my database structure, trying to figure out more than a week, Now provide the response and nothing else.
This might sound like a ridiculous solution but it actually works very well; it's almost like the LIKE '%' query from SQL. In the TextField, as you type a value, the isGreaterThanOrEqualTo inside where() will compare it against all the string values greater than or equal to the input, and if you concatenate a 'z' at the end then isLessThan will end just after your search keyword, and you get the desired result from Firestore. // Declare your searchkey and Stream variables first String searchKey;Stream streamQuery;TextField( onChanged: (value){ setState(() { searchKey = value; streamQuery = _firestore.collection('Col-Name') .where('fieldName', isGreaterThanOrEqualTo: searchKey) .where('fieldName', isLessThan: searchKey +'z') .snapshots(); }); }), I used this Stream in a StreamBuilder and it works exactly as expected. Limitations: The search is case sensitive (you can convert searchKey to a specific case if your data is consistent, like Type Case). You have to start searching from the first letter; it can't search from the middle.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/50870652', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9774005/']}
jdg_376312
stackexchange
llm_judgeable_groundtruth_similarity
294179
Below is a question asked on the forum meta.stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to give feedback to moderators that take action I consider unproductive. We have all come across popular questions/comment that we would like to up/down vote but that a moderator has locked. The list is pretty long, actually. Is there such a thing a voting mechanism for moderator actions? The platform would probably benefit from this kind of feedback. Now provide the response and nothing else.
There is no way to vote directly on moderator actions. Moderators perform thousands on actions on the site every day. It just would not scale to expect people to vote on these actions. Other potential problems with your idea: Some moderator actions have justifications that depend on knowing private information. It would not be possible to make these actions public without violating privacy. If moderators had every single action micromanaged by the community, I'm quite certain no one would volunteer to be moderator. Such a voting mechanism could become a venue to pester moderators. Currently if you are really certain a moderator acted out of bounds, you can come to Meta to air your grievances. This is a really good thing because if someone is 100% wrong about whether the moderator acted correctly, they'll get community feedback about it. Conversely, if the person is right, the community will support them. Also, the pressure of bringing up the issue in public acts as a deterrent to those who would like to complain about every little thing.
{}
{'log_upvote_score': 5, 'links': ['https://meta.stackoverflow.com/questions/294179', 'https://meta.stackoverflow.com', 'https://meta.stackoverflow.com/users/294702/']}
jdg_376313
stackexchange
llm_judgeable_groundtruth_similarity
40761185
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In an LSTM network ( Understanding LSTMs ), why does the input gate and output gate use tanh? What is the intuition behind this? It is just a nonlinear transformation? If it is, can I change both to another activation function (e.g., ReLU)? Now provide the response and nothing else.
Sigmoid, specifically, is used as the gating function for the three gates (in, out, and forget) in an LSTM, since it outputs a value between 0 and 1, and it can either let no flow or complete flow of information through the gates. On the other hand, to overcome the vanishing gradient problem, we need a function whose second derivative can sustain for a long range before going to zero. Tanh is a good function with the above property. A good neuron unit should be bounded, easily differentiable, monotonic (good for convex optimization) and easy to handle. If you consider these qualities, then I believe you can use ReLU in place of the tanh function since they are very good alternatives to each other. But before making a choice of activation function, you must know what the advantages and disadvantages of your choice over others are. I briefly describe some of the activation functions and their advantages. Sigmoid Mathematical expression: sigmoid(z) = 1 / (1 + exp(-z)) First-order derivative: sigmoid'(z) = exp(-z) / (1 + exp(-z))^2 Advantages: (1) The sigmoid function has all the fundamental properties of a good activation function. Tanh Mathematical expression: tanh(z) = [exp(z) - exp(-z)] / [exp(z) + exp(-z)] First-order derivative: tanh'(z) = 1 - ([exp(z) - exp(-z)] / [exp(z) + exp(-z)])^2 = 1 - tanh^2(z) Advantages: (1) Often found to converge faster in practice (2) Gradient computation is less expensive Hard Tanh Mathematical expression: hardtanh(z) = -1 if z < -1; z if -1 <= z <= 1; 1 if z > 1 First-order derivative: hardtanh'(z) = 1 if -1 <= z <= 1; 0 otherwise Advantages: (1) Computationally cheaper than Tanh (2) Saturates for magnitudes of z greater than 1 ReLU Mathematical expression: relu(z) = max(z, 0) First-order derivative: relu'(z) = 1 if z > 0; 0 otherwise Advantages: (1) Does not saturate even for large values of z (2) Found much success in computer vision applications Leaky ReLU Mathematical expression: leaky(z) = max(z, k dot z) where 0 < k < 1 First-order derivative: leaky'(z) = 1 if z > 0; k otherwise Advantages: (1) Allows propagation of error for non-positive z, which ReLU doesn't. This paper explains some fun activation functions. You may consider reading it.
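For readers who prefer runnable definitions, here is a small NumPy sketch of the activations listed above (purely illustrative; the sigmoid derivative is written via the function value, which is algebraically the same as the exp form given above):

```python
# NumPy versions of the activation functions discussed above, with first derivatives.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def d_sigmoid(z):
    s = sigmoid(z)
    return s * (1.0 - s)                    # == exp(-z) / (1 + exp(-z))**2

def d_tanh(z):
    return 1.0 - np.tanh(z) ** 2            # tanh itself is just np.tanh

def hard_tanh(z):
    return np.clip(z, -1.0, 1.0)

def relu(z):
    return np.maximum(z, 0.0)

def leaky_relu(z, k=0.01):                  # 0 < k < 1
    return np.where(z > 0, z, k * z)

z = np.linspace(-3, 3, 7)
print(sigmoid(z))   # squashes into (0, 1) -- this is why it works as a gate
print(np.tanh(z))   # squashes into (-1, 1), zero-centred
print(relu(z))      # unbounded for z > 0, so it does not saturate
```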
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/40761185', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4251528/']}
jdg_376314
stackexchange
llm_judgeable_groundtruth_similarity
357798
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: We have 6 ESX servers running +150 VMs. Currently our VCenter server is one of these VMs. The other day we had a hardware failure in our DC (caused by a naughty UPS) which took out two of these servers. The first server it took out was running our primary VCenter server, the second running our HA/Heartbeat VCenter server, thus none of our hosts migrated off our two failed hosts onto the 4 working ones and we lost most of our VM management (users all use VSphere). This is a very unfortunate circumstance, and hopefully shouldn't happen too often, but I was wondering, is it a good idea to run our primary VCenter server on a separate box in a different datacenter*/redundant block dedicated to just VCenter, with the backup being a VM? Is it even possible? (All we have is the virtual appliance, though if it's available I wouldn't have thought it's too hard to track down). *I'm ashamed to say, we run all our VMWare servers in a single DC. We mirror the SAN to a second DC but we have no servers there. They are only development/non-critical servers but people still shout if they're down. Now provide the response and nothing else.
There's no reason why not. I'm not aware if VMware specifically direct you to run vCenter on either physical or virtualised hardware, I believe it's supported on both. Depending on what sorts of failures you're trying to protect against, it's usually a good idea to separate your redundant / standby instances from the primary / live instances as much as possible. Separate networks, cabs, power supplies and even buildings, cities and countries are all good ideas - they just cost different amounts and come with their own unique set of challenges. In this particular case, it sounds like you had one of those outages which you hadn't designed / accounted for, or had knowingly chosen not to design around. Putting your management servers on the same infrastructure as the ESXi hosts, along with the same power supplies, networks etc. all runs the same risk of a single event taking everything out. You have a choice - either you can choose to not change anything and live with the outages that result from this particular type of event happening again, or choose not to and spend some money to mitigate. Either is a valid approach, it entirely depends on how much it's going to cost you in outages vs how much it'll cost you to change.
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/357798', 'https://serverfault.com', 'https://serverfault.com/users/80776/']}
jdg_376315
stackexchange
llm_judgeable_groundtruth_similarity
8671397
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Let's first consider the following expressions in Java. Integer temp = new Integer(1);System.out.println(temp.equals(1));if(temp.equals(1)){ System.out.println("The if block executed.");} These all statements work just fine. There is no question about it. The expression temp.equals(1) is evaluated to true as expected and the only statement within the if block is executed consequently. Now, when I change the data type from Integer to Long , the statement temp1.equals(1) is unexpectedly evaluated to false as follows. Long temp1 = new Long(1);System.out.println(temp1.equals(1));if(temp1.equals(1)){ System.out.println("The if block executed.");} These are the equivalent statements to those mentioned in the preceding snippet just the data type has been changed and they behave exactly opposite. The expression temp1.equals(1) is evaluated to false and consequently, the only statement within the if block is not executed which the reverse of the preceding statements. How? Now provide the response and nothing else.
You're comparing a Long to an int . The javadoc for java.lang.Long#equals says that the equals method Compares this object to the specified object. The result is true if and only if the argument is not null and is a Long object that contains the same long value as this object. Instead try System.out.println(new Long(1).equals(1L)); Now that you're comparing a Long to a Long instead of a Long to an Integer , it will print true .
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8671397', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1037210/']}
jdg_376316
stackexchange
llm_judgeable_groundtruth_similarity
15455895
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am already more than 8 hours trying to make the STHeiti Medium.ttc.zip font work on Windows. But I can't make it work. Is anybody able to make it work on Windows? If yes, please share the steps how to do it. Now provide the response and nothing else.
Assuming that Windows doesn't really know how to deal with TTC files (which I honestly find strange), you can "split" the combined fonts in an easy way if you use fontforge . The steps are: Download the file. Unzip it (e.g., unzip "STHeiti Medium.ttc.zip" ). Load Fontforge. Open it with Fontforge (e.g., File > Open ). Fontforge will tell you that there are two fonts "packed" in this particular TTC file (at least as of 2014-01-29) and ask you to choose one. After the font is loaded (it may take a while, as this font is very large), you can ask Fontforge to generate the TTF file via the menu File > Generate Fonts... . Repeat the steps of loading 4--6 for the other font and you will have your TTFs readily usable for you. Note that I emphasized generating instead of saving above: saving the font will create a file in Fontforge's specific SFD format, which is probably useless to you, unless you want to develop fonts with Fontforge. If you want to have a more programmatic/automatic way of manipulating fonts, then you might be interested in my answer to a similar (but not exactly the same) question. Addenda Further comments: One reason why some people may be interested in performing the splitting mentioned above (or using a font converter after all) is to convert the fonts to web formats (like WOFF ). That's great, but be careful to see if the license of the fonts that you are splitting/converting allows such wide redistribution. Of course, for Free ("as in Freedom") fonts, you don't need to worry (and one of the most prominent licenses of such fonts is the OFL ).
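If you would rather script the split than click through the GUI, FontForge also exposes a Python module that can do the same thing. The sketch below is only indicative: the file name is the one from the question, and the exact spelling of the API calls (fontsInFile, the open("file(fontname)") form, generate) should be checked against the FontForge scripting documentation for your build:

```python
# Rough sketch: split every sub-font of a TrueType Collection into its own .ttf.
# Run it with FontForge's interpreter, e.g.:  fontforge -script split_ttc.py
import fontforge

ttc_path = "STHeiti Medium.ttc"

for name in fontforge.fontsInFile(ttc_path):      # names of the fonts packed in the .ttc
    font = fontforge.open(f"{ttc_path}({name})")  # open one sub-font of the collection
    font.generate(f"{name}.ttf")                  # equivalent of File > Generate Fonts...
    font.close()
```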
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/15455895', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1580094/']}
jdg_376317
stackexchange
llm_judgeable_groundtruth_similarity
266671
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $f:\mathbb{R}\to\mathbb{R}$ be a function satisfying the conditions $$\begin{gather}f\left(\frac{x+y}{2}\right)=\frac{f(x)+f(y)}{2} \tag{1}\\f(0)=1 \tag{2}\\f'(0)=-1 \tag{3}\end{gather}$$ Find the value of $f(2)$ by proper explanation. Now provide the response and nothing else.
To begin with, let us introduce the function $g(x)=f(x)+x-1$.This function satisfies the midpoint property (1), and $g(0)=g'(0)=0$.We claim that these three conditions imply that $g(x)=0$ for all $x\in\mathbb{R}$. Since $g(0)=0$, it is easily seen that (1) implies $g(-x)=-g(x)$ for all $x$. Next observe that $g(x)-g(y)=2g((x-y)/2)$ for all $x,y$.It follows that $g$ is differentiable on $\mathbb{R}$ and that $g'(x)=g'(0)=0$ for all $x$. Hence $g$ is constant and equal to $g(0)=0$.So $f(x)=1-x$ for all $x$. In particular, we find $f(2)=-1$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/266671', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/54445/']}
jdg_376318
stackexchange
llm_judgeable_groundtruth_similarity
40343238
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm upgrading a project from django 1.8 to 1.10 and it looks like django has improved the check of eventual name collision between foreign keys and model inheritance. This is obviously a good thing, but the projet I need to upgrade is a big one and it would be a hell to rename a model. Let me explain the problem : I have a base class called Parent and many children which are linked together, like so : class Parent(models.Model): title = models.CharField(max_length=10)class ChildA(Parent): description = models.TextField()class ChildB(Parent): description = models.TextField() childa = models.ForeignKey(ChildA) The clash here is that a childb object has 2 "childa" attributes : The "childa" ForeignKey The instance inherited by the ChildA model (because childb has also the parent attributes). The 2 obvious solutions here are : Rename the ForeignKey ChildB.childa to ChildB.somethingelse Rename the ChildA model to something else. Both solutions costs a lot and will probably introduce new bugs. So I wondered : Is it possible to rename the reverse related name of the inherited object ? For example : p = Parent.objects.get(pk=1)print p.childa_child # Hit the ChildA instance I have no idea if I'm clear enough but I'll keep this question up to date. ==== EDIT ==== To be more concise, if I have 2 models class Parent(models.Model) and class Child(Parent) , a dynamic attribute parent.child is created. Is it possible to edit this attribute name without touching the class name ? Now provide the response and nothing else.
Multi-table inheritance creates an implicit OneToOneField field between the base model and the subclass. Django allows you to modify this relationship by explicitly setting the one to one field. class Parent(models.Model): title = models.CharField(max_length=10)class ChildA(Parent): parent = models.OneToOneField(to=Parent, parent_link=True) description = models.TextField()class ChildB(Parent): parent = models.OneToOneField(to=Parent, parent_link=True) description = models.TextField() childa = models.ForeignKey(ChildA) The important bit here is the parent_link=True argument which tells Django to use this field declaration for managing the multi-table inheritance with these two models. So you can now set related_name='+' to prevent Django from creating a reverse relationship or you can set related_name to a more unique name: class ChildA(Parent): parent = models.OneToOneField(to=Parent, parent_link=True, related_name='child_a_here') description = models.TextField()class ChildB(Parent): parent = models.OneToOneField(to=Parent, parent_link=True, related_name='child_b_here') description = models.TextField() childa = models.ForeignKey(ChildA)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40343238', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3821108/']}
jdg_376319
stackexchange
llm_judgeable_groundtruth_similarity
14123170
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Ok, I've a trouble that I'm not able to fix since some week.I use git (on github) to store my projects. Recently I've added some new files in it without problems, but, when I use "git push" I've a timeout error. I use Ubuntu 12.04, with ssh (default) and https. So, I decide to make fresh copy of the depo (git clone).After this I modify an existing file, commit it and push it ... with success ! So, I add the others (in the new copy), commit them and push them. And the problem come again: unable to pushHere is the console output: time git pushCounting objects: 13, done.Delta compression using up to 2 threads.Compressing objects: 100% (11/11), done.Writing objects: 100% (11/11), 16.61 KiB, done.Total 11 (delta 2), reused 0 (delta 0)^Creal 7m59.383suser 0m0.008ssys 0m0.004s I really don't understand what i doing wrong here.I in other depo, same problem, but my collaborator don't have any troubles. I remove git, and reinstall it with no change. If you have any idea to solve this. edit 1 git remote -vorigin git@github.com:Krozark/projet_compilation.git (fetch)origin git@github.com:Krozark/projet_compilation.git (push) edit: Solution sudo ifconfig [wlan0] mtu 1460 (lower than 1500) Now provide the response and nothing else.
I saw this exact same problem that a colleague was having and it was network related with SSH, we were using a VPN connection at the time and it ended up being the network MTU setting being too high (it was 1492 if I remember correctly), we tinkered around with smaller values until it started working. So something to do with network packet splitting it seemed. Not sure if this is the case for you however although it doesn't hurt to try change your network MTU to a lower value to see if it works. Of course if it's an issue on Github's side, this won't be a factor (the fact you can create a new repo and push up leads me to believe it's not MTU related).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14123170', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1930466/']}
jdg_376320
stackexchange
llm_judgeable_groundtruth_similarity
4584637
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am reading "accelerated C++". I found one sentence which states "sometimes double is faster in execution than float in C++". After reading sentence I got confused about float and double working. Please explain this point to me. Now provide the response and nothing else.
Depends on what the native hardware does. If the hardware is (or is like) x86 with legacy x87 math, float and double are both extended (for free) to an internal 80-bit format, so both have the same performance (except for cache footprint / memory bandwidth) If the hardware implements both natively, like most modern ISAs (including x86-64 where SSE2 is the default for scalar FP math), then usually most FPU operations are the same speed for both. Double division and sqrt can be slower than float , as well as of course being significantly slower than multiply or add. (Float being smaller can mean fewer cache misses. And with SIMD, twice as many elements per vector for loops that vectorize). If the hardware implements only double, then float will be slower if conversion to/from the native double format isn't free as part of float-load and float-store instructions. If the hardware implements float only, then emulating double with it will cost even more time. In this case, float will be faster. And if the hardware implements neither, and both have to be implemented in software. In this case, both will be slow, but double will be slightly slower (more load and store operations at the least). The quote you mention is probably referring to the x86 platform, where the first case was given. But this doesn't hold true in general. Also beware that x * 3.3 + y for float x,y will trigger promotion to double for both variables. This is not the hardware's fault, and you should avoid it by writing 3.3f to let your compiler make efficient asm that actually keeps numbers as floats if that's what you want.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4584637', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/247572/']}
jdg_376321
stackexchange
llm_judgeable_groundtruth_similarity
16532
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: This question is about an issue left unresolved by ChadGroft's excellentquestion and John Stillwell's excellentanswer ofit. Since I find the possibility of an affirmative answerso tantalizing, I would like to pursue it further here. For background, Rice's Theorem asserts essentially that no nontrivial question about computably enumerable sets is decidable. If W e is the set enumerated by program e, then the theorem states: Rice's Theorem. If A is a collection of computablyenumerable sets and { e |W e ∈ A } is decidable, then either A is empty or A contains all computably enumerable sets. In short, one can decide essentially nothing about a program e, if the answer is todepend only on what the program computes rather than how itcomputes it. The question here is about the extent to which a similarphenomenon holds for finitely presented groups, using theanalogy between programs and finite group presentations: a program e is like a finite group presentation p the set W e enumerated by e is like the group⟨p⟩ presented by p. According to this analogy, the analogue of Rice's theoremwould state that any decidable collection of finitelypresented groups (closed under isomorphism) should be either trivial or everything.John Stillwell pointed out in answer to Chad Groft'squestion that this is not true, because from a presentationp we can easily find a presentation of the abelianization of⟨p⟩, by insisting that all generators commute, and many nontrivial questions are decidable about finitelypresented abelian groups. Indeed, since the theory ofabelian groups is a decidable theory, there will be manyinteresting questions about finitely presented abeliangroups that are decidable from their presentations. My question is whether this is the only obstacle. Question. Does Rice's theorem hold for finitelypresented groups modulo abelianization? In other words, ifA is a set of finitely presented groups (closed underisomorphism) and the corresponding set of presentations { p | ⟨p⟩ ∈ A } isdecidable, then does A completely reduce to a questionabout the abelianizations of the groups, in the sense that there is a set B of abelian groups such that G ∈ A iff Ab(G) ∈ B? Of course, in this case B consists exactly of the abelian groups in A. The question is equivalently asking whether A respects the equivalence of groups having isomorphic abelianizations. In other words, must it be that G ∈ A iff Ab(G) ∈ A? The question is asking whether every decidable set of finitely presented groups amounts actuallyto a decidable set of abelian groups, extended to allfinitely presented groups just by saturating with respect to abelianization. In particular, the set A should contain either none or all perfect groups. An affirmative answer would seem to provide a thoroughexplanation of the pervasive undecidability phenomenon ingroup presentations. But perhaps this may simply be too much to hope for... In any event, I suppose that there is an equivalence relation on finite group presentations, saying that p ≡ q just in case ⟨p⟩ and ⟨q⟩ have the same answer with repsect to any decidable question about finitely presented groups. The question above asks whether this equivalence relation is just Ab(⟨p⟩) = Ab(⟨q⟩). If this turns out not to be true, then what can be said about ≡? Now provide the response and nothing else.
The question "Is there a nonzero homomorphism from your group to $A_5$?" is decidable. (Just write down all ways of sending the generators of your group to $A_5$, and see whether they satisfy the required relations.) The same is true with $A_5$ replaced by any finite group. I don't see how to reduce this to questions about the abelianization.
{}
{'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/16532', 'https://mathoverflow.net', 'https://mathoverflow.net/users/1946/']}
jdg_376322
stackexchange
llm_judgeable_groundtruth_similarity
1701176
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The problem I'm having is with the logs. I go: $$\lim_{n \to \infty} \Big( \frac{\log{(n+1)}}{\log{(n)}} \cdot \frac{n-2}{n-1} \Big)$$ $$=\lim_{n \to \infty} \Big( \frac{\log{(n+1)}}{\log{(n)}}\Big) \cdot \lim_{n \to \infty} \Big(\frac{n-2}{n-1} \Big)$$ and here I know that $$\lim_{n \to \infty} \Big(\frac{n-2}{n-1} \Big) = \lim_{n \to \infty} \Bigg(\frac{1-\frac{2}{n}}{1-\frac{1}{n}} \Bigg) = \frac{\lim_{n \to \infty} ({1-\frac{2}{n}})}{\lim_{n \to \infty} (1-\frac{1}{n})} = 1$$ However, I don't know how to do the equivalent for $$\lim_{n \to \infty} \Big( \frac{\log{(n+1)}}{\log{(n)}}\Big)$$ I know that the numerator and denominator functions converge as $n$ grows, but I don't know how to compute the limit algebraically and show that it's also $1$. Now provide the response and nothing else.
We can write $\log{(n+1)}$ as $$\log{(n(1+\frac{1}{n}))}=\log{n}+\log\left(1+\frac{1}{n}\right).$$ Now $\log\left(1+\frac{1}{n}\right)$ is bounded, so is insignificant compared to $\log{n}.$ So the limit of $$\frac{\log(n+1)}{\log(n)}=\frac{\log{(n)}}{\log{(n)}}+\frac{\log{\left(1+\frac{1}{n}\right)}}{\log{(n)}}$$ tends to $1+0=1$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1701176', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/118169/']}
jdg_376323
stackexchange
llm_judgeable_groundtruth_similarity
14263872
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using urllib.urlencode to build web POST parameters, however there are a few values I only want to be added if a value other than None exists for them. apple = 'green'orange = 'orange'params = urllib.urlencode({ 'apple': apple, 'orange': orange}) That works fine, however if I make the orange variable optional, how can I prevent it from being added to the parameters? Something like this (pseudocode): apple = 'green'orange = Noneparams = urllib.urlencode({ 'apple': apple, if orange: 'orange': orange}) I hope this was clear enough, does anyone know how to solve this? Now provide the response and nothing else.
You'll have to add the key separately, after the creating the initial dict : params = {'apple': apple}if orange is not None: params['orange'] = orangeparams = urllib.urlencode(params) Python has no syntax to define a key as conditional; you could use a dict comprehension if you already had everything in a sequence: params = urllib.urlencode({k: v for k, v in (('orange', orange), ('apple', apple)) if v is not None}) but that's not very readable. If you are using Python 3.9 or newer, you could use the new dict merging operator support and a conditional expression: params = urllib.urlencode( {'apple': apple} | ({'orange': orange} if orange is not None else {})) but I find readability suffers, and so would probably still use a separate if expression: params = {'apple': apple}if orange is not None: params |= {'orange': orange}params = urllib.urlencode(params) Another option is to use dictionary unpacking , but for a single key that's not all that more readable: params = urllib.urlencode({ 'apple': apple, **({'orange': orange} if orange is not None else {})}) I personally would never use this, it's too hacky and is not nearly as explicit and clear as using a separate if statement. As the Zen of Python states: Readability counts.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/14263872', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1814016/']}
jdg_376324
stackexchange
llm_judgeable_groundtruth_similarity
164109
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am looking at the following problem and I am struggling to follow the steps involved. Consider the non-interacting Hamiltonian $$H_{AB}=H_A\otimes I_B+I_A\otimes H_B$$ So I'm trying to prove that the unitary evolution of the joint state is given by $$|\psi\left(t\right)\rangle_{AB}=e^{-iH_At}\otimes e^{-iH_Bt}|\psi\left(t=0\right)\rangle_{AB}$$ Where $|\psi\rangle_{AB}=|\psi\rangle_A \otimes |\psi\rangle_B$ My working so far is $$|\psi\left(t\right)\rangle_{AB}=e^{-i\left(H_A\otimes I_B+I_A\otimes H_B\right)t}|\psi\left(t=0\right)\rangle_{AB}$$ $$=e^{-i\left(H_A\otimes I_B\right)t}e^{-i\left(I_A\otimes H_B\right)t}|\psi\left(t=0\right)\rangle_{AB}$$ As the two Hamiltonians for the two system commute, so from here I am a little confused, I know the next step must be $$=\left(e^{-iH_A t}\otimes I_B\right)\left(I_A\otimes e^{-iH_B t} \right)|\psi\left(t=0\right)\rangle_{AB}$$ But this isn't at all obvious to me why this is the case? I'm also not sure if this is off topic here and would be better suited to maths stack exchange so I apologise in advance. Throughout I have set $\hbar=1$. At this point I think I should probably use the definition of the matrix exponential as a Taylor series but I'm not sure. Now provide the response and nothing else.
I just found this post because I was confused by the same step. But I think I got it now with the help of @lionelbrits post and @Chris2807's comment. Just adding this for completeness and maybe helping someone else struggling with this: $$\begin{align}e^{(H_A \otimes I_B)} &= \sum_{n=0}^\infty\dfrac{(H_A \otimes I_B)^n}{n!}\\&= I_A \otimes I_B + H_A \otimes I_B + \dfrac{1}{2}(H_A \otimes I_B)^2 + ...\\&= I_A \otimes I_B + H_A \otimes I_B + \dfrac{1}{2}(H_A \otimes I_B)(H_A \otimes I_B) + ... \\&= I_A \otimes I_B + H_A \otimes I_B + \bigg(\dfrac{1}{2}(H_A)^2 \otimes (I_B)^2\bigg)+ ...\\&= (I_A + H_A + \frac{1}{2}(H_A)^2 + ... ) \otimes I_B\\&= e^{H_A}\otimes I_B\end{align}$$ where I also dropped the -i and t and used that $(I_B)^n = I_B$ with $n\in \mathbb{N}$ .
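If it helps to see the same identity numerically rather than term by term, here is a small check (this assumes NumPy and SciPy are available and is only an illustration, not a proof):

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H_A = M + M.conj().T                # Hermitian, like a Hamiltonian
I_B = np.eye(4)

lhs = expm(np.kron(H_A, I_B))       # exp(H_A tensor I_B)
rhs = np.kron(expm(H_A), I_B)       # exp(H_A) tensor I_B
print(np.allclose(lhs, rhs))        # True, up to floating-point error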
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/164109', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/58234/']}
jdg_376325
stackexchange
llm_judgeable_groundtruth_similarity
39624749
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The property apps in the angular-cli.json file is of array type. If I add a second element into this array, how can I instruct ng build to build both elements? Now provide the response and nothing else.
Currently (v1.0.0) you can only select the app which you want to build by the following command: ng build -a appName or ng build --app appName You will also need to add a name property to each element in the apps array, so you will have something like this: "apps": [ { "name": "app1", "root": "src/app1root", ... }, { "name": "app2", "root": "src/app2root", ... }, ...] Also, you can use app indices like ng build -a 0 or ng build -a 1 ; in that case you don't need to specify app names. From the angular-cli sources you can see that there is no possibility to run all apps in one command; you should specify either an index or an app name, otherwise apps[0] will be used, so you can't build all apps at the same time using one ng build call.
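Since one ng build call cannot cover every app, a workaround is to script the loop yourself. A rough sketch (the app names are placeholders for whatever is actually in your .angular-cli.json, and driving ng from Python is just one option; a shell loop works equally well):

import subprocess

apps = ["app1", "app2"]  # hypothetical names; copy the "name" entries from your config

for app in apps:
    # Equivalent to running: ng build -a <name>
    subprocess.run(["ng", "build", "-a", app], check=True)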
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/39624749', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1154271/']}
jdg_376326
stackexchange
llm_judgeable_groundtruth_similarity
42464264
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I use the following code to create some temp files, and wrapped tem as inputsteam to send to client side. I understand that the temp files can be deleted automatically by android system when disk space low. But I hope to I can delete the temp files by myself when I exit the App, how can I do? Thanks! Code File outputDir = context.getCacheDir(); // context being the Activity pointerFile outputFile = File.createTempFile("prefix", "extension", outputDir); Now provide the response and nothing else.
Delete the files in onDestroy if isChangingConfigurations() is false or isFinishing is true . Example: @Override protected void onDestroy() { super.onDestroy(); if(!isChangingConfigurations()) { deleteTempFiles(getCacheDir()); }}private boolean deleteTempFiles(File file) { if (file.isDirectory()) { File[] files = file.listFiles(); if (files != null) { for (File f : files) { if (f.isDirectory()) { deleteTempFiles(f); } else { f.delete(); } } } } return file.delete();}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42464264', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/828896/']}
jdg_376327
stackexchange
llm_judgeable_groundtruth_similarity
16876410
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am reading through Anthony Williams' "C++ Concurrency in Action" and in Chapter 5, which talks about the new multithreading-aware memory model and atomic operations, and he states: In order to use std::atomic<UDT> for some user-defined UDT , this type must have a trivial copy assignment operator. As I understand it, this means that we can use std::atomic<UDT> if the following returns true: std::is_trivially_copyable<UDT>::value By this logic, we shouldn't be able to use std::string as a template argument for std::atomic and have it work correctly. However, the following code compiles and runs with expected output: #include <atomic>#include <thread>#include <iostream>#include <string>int main(){ std::atomic<std::string> atomicString; atomicString.store( "TestString1" ); std::cout << atomicString.load() << std::endl; atomicString.store( "TestString2" ); std::cout << atomicString.load() << std::endl; return 0;} Is this a case of undefined behaviour which just happens to behave as expected? Thanks in advance! Now provide the response and nothing else.
The standard does not specify a specialization of std::atomic<std::string> , so the generic template <typename T> std::atomic<T> applies. 29.5 [atomics.types.generic] p1 states: There is a generic class template atomic. The type of the template argument T shall be trivially copyable (3.9). There is no statement that the implementation must diagnose violations of this requirement. So either (a) your use of std::atomic<std::string> invokes undefined behavior, or (b) your implementation provides std::atomic<std::string> as a conforming extension. Looking at the MSDN page for std::atomic<T> ( http://msdn.microsoft.com/en-us/library/vstudio/hh874651.aspx ), it does explicitly mention the requirement that T be trivially copyable, and it does NOT say anything specific about std::atomic<std::string> . If it is an extension, it's undocumented. My money is on undefined behavior. Specifically, 17.6.4.8/1 applies ( with thanks to Daniel Krügler for setting me straight ): In certain cases (replacement functions, handler functions, operations on types used to instantiate standard library template components), the C++ standard library depends on components supplied by a C++ program. If these components do not meet their requirements, the Standard places no requirements on the implementation. std::string certainly does not meet the std::atomic<T> requirement that the template parameter T be trivially copyable, so the standard places no requirements on the implementation. As a quality of implementation issue, note that static_assert(std::is_trivially_copyable<T>::value, "std::atomic<T> requires T to be trivially copyable"); is an easy diagnostic to catch this violation. 2016-04-19 Update: I don't know when the change happened, but VS2015 Update 2 does now diagnose std::atomic<std::string> : error C2338: atomic requires T to be trivially copyable.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/16876410', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/860566/']}
jdg_376328
stackexchange
llm_judgeable_groundtruth_similarity
13499467
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a scroll view and an image view behind it and I am populating it with nibs. I am using autolayout. I have a bottom space to superview and a top space to superview on both of the views. The image view does exactly what I want it to do. For iphone 5 it is where I want it. And for the other iphones, it stays above the bottom of the screen, so it resizes correctly. The scroll view looks right on the iphone 5, but on the other phones it doesn't get resized, so it scrolls down below the view of the app. I get these messages in the log: 2012-11-21 10:42:38.576 LCHApp[12604:907] Unable to simultaneously satisfy constraints.Probably at least one of the constraints in the following list is one you don't want. Try this: (1) look at each constraint and try to figure out which you don't expect; (2) find the code that added the unwanted constraint or constraints and fix it. (Note: If you're seeing NSAutoresizingMaskLayoutConstraints that you don't understand, refer to the documentation for the UIView property translatesAutoresizingMaskIntoConstraints) "<NSLayoutConstraint:0x1d8ea080 UIScrollView:0x1d8413b0.bottom == UIImageView:0x1d892110.bottom>","<NSAutoresizingMaskLayoutConstraint:0x1d8cca10 h=-&- v=-&- ScheduleViewNib:0x1d853630.height == UIScrollView:0x1d8413b0.height - 386>","<NSLayoutConstraint:0x1d8e5340 V:[UIImageView:0x1d892110]-(64)-| (Names: '|':ScheduleView:0x1d8efc30 )>","<NSAutoresizingMaskLayoutConstraint:0x1d8cf520 h=--& v=--& V:[ScheduleView:0x1d8efc30(480)]>","<NSLayoutConstraint:0x1d8eaed0 V:|-(45)-[UIScrollView:0x1d8413b0] (Names: '|':ScheduleView:0x1d8efc30 )>" Will attempt to recover by breaking constraint <NSLayoutConstraint:0x1d8ea080 UIScrollView:0x1d8413b0.bottom == UIImageView:0x1d892110.bottom> I already tried [self setTranslatesAutoresizingMaskIntoConstraints:YES]; and [self.myScrollView setTranslatesAutoresizingMaskIntoConstraints:YES]; From what I can see this just takes off all constraints from the views. And isn't what I want. Now provide the response and nothing else.
The relationship between UIScrollView and auto layout is different from other aspects of auto layout. Basically, if simple auto layout were allowed to operate, nothing would scroll. For example, if a subview of the scroll view were pinned in the normal way by a constraint to 10 points from the top of the scroll view, it would be absolutely pinned there; it would never move, no matter how the scroll view were scrolled. To solve this problem, a UIScrollView that uses autolayout operates in a completely new way. Therefore when you say "I am using autolayout" you must prepare for things to operate very differently from before. You must either use a single scroll view subview with translatesAutoresizingMaskIntoConstraints = YES , and an explicit content size, or else everything must have translatesAutoresizingMaskIntoConstraints = NO and the content size will be deduced implicitly based on the constraints of the subviews. This is very well explained in https://developer.apple.com/library/content/releasenotes/General/RN-iOSSDK-6_0/index.html
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/13499467', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/996110/']}
jdg_376329
stackexchange
llm_judgeable_groundtruth_similarity
28613
Below is a question asked on the forum mechanics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: As some of you may know from this question i am currently rebuilding my car. The plan is to rebuild it from scratch. An important part is the engine. At the moment there is a 1.4l 8V petrol engine with 60HP(44KW?). This is, of course, a bit too little for the effort this project is going to have (IMHO). So i plan on getting a bigger engine. The biggest engine in this model is a 2.0l 16V petrol engine with ~150HP. But i think this will also be too little. Unfortunately i have no specific plan on which engine(power) to use. I could think of about 400-500HP at the end. The normal weight of the car (from the papers) is about 980kg. There wasn't an official 4x4 version of this model, only a few (1-5 maximum) self-made versions. So i can't simply get the base of a 4x4 one. But from this i know it is possible to build it. The problem is that i have little to no experience with AWD or 4x4, but i would like to learn more about this in general. I already learned a bit about the differences between 4x4 and AWD from this question . I am primarily interested in the use of 4x4 or AWD on the street. Main Problem The main reason for this is, if there will be traction problems because of the power/weight ratio. There is no use for a 500HP engine if the car can't go forward anymore. So when will the car benefit from a 4x4/AWD modification? What aspects do i have to take in consideration (for example HP, Weight, nm, cost...)? What is the 'best' behaviour for street use? full/part 4x4 or AWD or...?The car won't be used off-road. Addition to make the Q less broad What specific parts do i need to change in order to build AWD in a car whose series never had a performance version with built-in AWD? Apart from the AWD system and the modifications on the bottom of the car to install the AWD system properly. Addition I am aware of the fact that for this increase in power, there are many more things to do (improving the brakes, chassis, landing gear and some things more) but that is not the question here. I know the cost is an essential part of this project, but the cost is not the problem here. If there is need for more informations please tell me. I will then add this as soon and good as i can. Edit As i expected the question. The car i'm talking about is an Opel Astra F CC from 1995. By landing gear i mean the summary of parts for the connection between the vehicle and the undergrounde. I don't know the exact english term. The german term is "Fahrwerk". Here a wiki link . I hope thats helps. Now provide the response and nothing else.
My first thoughts, given the car in question, would be to check to see if there is a performance version of the car which features four wheel drive. For road use, if you have over 300bhp going through the front wheels, things can get difficult. A friend of mine had a 320bhp Seat Leon Cupra (front wheel drive) and sold it to buy an Audi S3 (same basic chassis setup but four wheel drive) as he was struggling to get the power down in the Leon. Now, in the UK your car was sold as the Mk3 Astra and was out at around the same time as the Calibra, which was a sports coupe based on the slightly bigger Cavalier chassis. It appears that the front suspension from a Calibra can be fitted to your car very easily. You simply need to change the top mounts and enlarge the holes the strut tops sit in very slightly. However, the rear is a more complex proposition. You can fit the entire Calibra 4x4 system but you need to cut out the boot floor and centre tunnel of your car and weld in place the Calibra items. Your best bet is therefore to try and locate a scrap / crashed / damaged Calibra Turbo 4x4 which will give you everything you need for the conversion (including the all-important brakes). Apparently the engines in these cars are good for around 420bhp with serious modification. On the plus side, I'd assume once you've found a suitable Calibra to use for parts for the conversion, it wouldn't be particularly expensive. One final thing I would say though is, try and find a GSI 2.0 16V version of your car in standard form and drive it. You may find that this provides enough power to make you smile.
{}
{'log_upvote_score': 5, 'links': ['https://mechanics.stackexchange.com/questions/28613', 'https://mechanics.stackexchange.com', 'https://mechanics.stackexchange.com/users/16565/']}
jdg_376330
stackexchange
llm_judgeable_groundtruth_similarity
24442386
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm working on a simulation model, where I want to determine when the storage IOPS capacity becomes a bottleneck (e.g. and HDD has ~150 IOPS, while an SSD can have 150,000). So I'm trying to come up with a way to benchmark IOPS in a command (git) for some of it's different operations (push, pull, merge, clone). So far, I have found tools like iostat, however, I am not sure how to limit the report to what a single command does. The best idea I can come up with is to determine my HDD IOPS capacity, use time on the actual command, see how long it lasts, multiply that by IOPS and those are my IOPS: HDD ->150 IOPStime df -hreal 0m0.032s150 * .032 = 4.8 IOPS But, this is of course very stupid, because the duration of the execution may have been related to CPU usage rather than HDD usage, so unless usage of HDD was 100% for that time, it makes no sense to measure things like that. So, how can I measure the IOPS for a command? Now provide the response and nothing else.
There are multiple time(1) commands on a typical Linux system; the default is a bash(1) builtin which is somewhat basic. There is also /usr/bin/time which you can run by either calling it exactly like that, or telling bash(1) to not use aliases and builtins by prefixing it with a backslash thus: \time . Debian has it in the "time" package which is installed by default, Ubuntu is likely identical, and other distributions will be quite similar. Invoking it in a similar fashion to the shell builtin is already more verbose and informative, albeit perhaps more opaque unless you're already familiar with what the numbers really mean: $ \time df[output elided]0.00user 0.00system 0:00.01elapsed 66%CPU (0avgtext+0avgdata 864maxresident)k0inputs+0outputs (0major+261minor)pagefaults 0swaps However, I'd like to draw your attention to the man page which lists the -f option to customise the output format, and in particular the %w format which counts the number of times the process gave up its CPU timeslice for I/O: $ \time -f 'ios=%w' du Maildir >/dev/nullios=184$ \time -f 'ios=%w' du Maildir >/dev/nullios=1 Note that the first run stopped for I/O 184 times, but the second run stopped just once. The first figure is credible, as there are 124 directories in my ~/Maildir : the reading of the directory and the inode gives roughly two IOPS per directory, less a bit because some inodes were likely next to each other and read in one operation, plus some extra again for mapping in the du(1) binary, shared libraries, and so on. The second figure is of course lower due to Linux's disk cache. So the final piece is to flush the cache. sync(1) is a familiar command which flushes dirty writes to disk, but doesn't flush the read cache. You can flush that one by writing 3 to /proc/sys/vm/drop_caches . (Other values are also occasionally useful, but you want 3 here.) As a non-root user, the simplest way to do this is: echo 3 | sudo tee /proc/sys/vm/drop_caches Combining that with /usr/bin/time should allow you to build the scripts you need to benchmark the commands you're interested in. As a minor aside, tee(1) is used because this won't work: sudo echo 3 >/proc/sys/vm/drop_caches The reason? Although the echo(1) runs as root, the redirection is as your normal user account, which doesn't have write permissions to drop_caches . tee(1) effectively does the redirection as root.
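Putting the pieces together, a small wrapper along these lines could drop the read cache and then run the command under /usr/bin/time (the paths, the use of sudo, and GNU time's -f support are assumptions about a typical Linux box, so adjust to taste):

import shlex
import subprocess

def count_io_waits(command):
    """Roughly count how often one command run gave up the CPU to wait (mostly for I/O)."""
    # Flush dirty writes, then drop the page cache so reads really hit the disk.
    subprocess.run(["sync"], check=True)
    subprocess.run("echo 3 | sudo tee /proc/sys/vm/drop_caches",
                   shell=True, check=True, stdout=subprocess.DEVNULL)
    # %w is the "waits" counter described above; /usr/bin/time reports on stderr,
    # and its line is the last thing written there.
    result = subprocess.run(["/usr/bin/time", "-f", "ios=%w"] + shlex.split(command),
                            capture_output=True, text=True)
    return result.stderr.strip().splitlines()[-1]

print(count_io_waits("du Maildir"))   # e.g. ios=184 on a cold cache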
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24442386', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/156257/']}
jdg_376331
stackexchange
llm_judgeable_groundtruth_similarity
123
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Depending on what type of traffic is going over the network, it's often not feasible that an employee brings a wireless router and sets it up into your network. This is because often, they are not or poorly secured and present a backdoor into the network. What can you do to prevent rogue wireless access points being introduced into your network? Now provide the response and nothing else.
Lucas's answer above is a bit of a starting point. There are however two or three other things that must be considered. These end up being somewhat outside the scope of network engineering, but certainly have impacts for network engineering and security, so here they go. You probably want some way of preventing wireless cards in company laptops from being switched into ad hoc mode. Assuming the laptops are running Windows, you probably want to use a GPO to set wireless to infrastructure mode only. For Linux, it is harder to fully restrict, but there are ways to do this too. Enforcing IPSec is also a good idea, particularly with good key management and trusted enforcement. For example, if you can go to X509 certs for key management this can keep unauthorized devices from communicating with the rest of your network directly. Consider key management as a core part of the infrastructure here. If you use a proxy server you may even be able to block unauthorized devices from accessing the internet. Note the limitations of your efforts. None of these prevents a person from setting up an unsecured wireless access point connected to a USB NIC, for the sole purpose of communicating with their computer, especially if the SSID is hidden (i.e. not broadcast). Not sure how to further contain problems or if further paranoia is well past the point of diminishing returns.
{}
{'log_upvote_score': 6, 'links': ['https://networkengineering.stackexchange.com/questions/123', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/29/']}
jdg_376332
stackexchange
llm_judgeable_groundtruth_similarity
204904
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Not only are our sprint planning meetings not fun, they're downright dreadful. The meetings are tedious, and boring, and take forever (a day, but it feels like a lot longer). The developers complain about it, and dread upcoming plannings. Our routine is pretty standard (user story inserted into sprint backlog by priority >> story is taken apart to tasks >> tasks are estimated in hours >> repeat), and I can't figure out what we're doing wrong. How can we make the meetings more enjoyable? ... Some more details, in response to requests for more information: Why are the backlog items not inserted and prioritized before sprint kickoff? User stories are indeed prioritized; we have no idea how long they'll take until we break them down into tasks! From the (excellent) answers here, I see that maybe we shouldn't estimate tasks at all, only the user stories. The reason we estimate tasks (and not stories) is because we've been getting story-estimates terribly wrong -- but I guess that's the subject for an altogether different question. Why are developers complaining? Meetings are long. Meetings are monotonous. Story after story, task after task, struggling (yes, struggling) to estimate how long it will take and what it involves. Estimating tasks makes user-story-estimation seem pointless. The longer the meeting, the less focus in the room. The less focused colleagues are, the longer the meeting takes. A recursive hate-spiral develops. We've considered splitting the meeting into two days in order to keep people focused, but the developers wouldn't hear of it. One day of planning is bad enough; now we'll have two ?! Part of our problem is that we go into very small detail (in order to get more accurate estimations). But when we estimate roughly, we go way off the mark! To sum up the question: What are we doing wrong? What additional ways are there to make the meeting generally more enjoyable? Now provide the response and nothing else.
Make estimating easier Break your sprint planning down. Do you need to estimate the individual tasks? I've done sprint planning two ways: Stories are estimated in story points and then tasks are estimated in hours Stories are estimated in story points and tasks simply fall under that with no estimate Of the two, I prefer the second option. I find that not estimating tasks gives more freedom to developers to cope with changes. If a task no longer makes sense (how many times have you found out that a task isn't applicable or was already done in a previous sprint) you simply throw it out without any penalty, or you may have to change a current task into something new, possibly breaking it up. You're really being redundant if you estimate both, as the sum of the tasks should represent the story points and vice versa. What value do you really gain by this other than knowing how much time individual tasks will take? If you find yourself with task sizes that really vary enough to make a difference, I would suggest breaking those tasks down into smaller, more homogeneous chunks. By doing this, you can cut down on the time you spend in sprint planning . Stories get estimated during sprint planning, and when you start the sprint you can put down all the tasks you can think of that make up that story. Obviously if there are points that you come across in estimating the story that you know will have to be dealt with in a task, you can add that onto the story information and put it as a task. Estimate in Ideal units Story points are meant to be in ideal units such as ideal man hours or ideal work days. These are meant to mean that given the perfect day every day, where you had no interruptions, no meetings, and everything went according to plan, you could accomplish the task in X days. Now everyone knows that this simply isn't true, but the wonderful thing about statistics is that it doesn't have to be. What I mean by this is that after a while of estimating these in ideal days, you realize that maybe it takes an extra 25% of the time you estimate on average to complete a story. Lets say you had estimated 4 ideal work days, and instead it took you 5. Over time, you keep track of this and then you have a rough idea of the conversion from ideal days to real days. Your first instinct would be to try and compensate for this by over estimating, and you would likely be wrong. The main thing here is to stay consistent. That way, your long term average remains the same. Sure sometimes, it'll be under and sometimes it'll be over, but the more you estimate, the better off you are. If you find that you still can't get a decent estimate, maybe that means you don't know enough about the story to estimate it properly. Talk about the stories When you estimate, everyone should have a rough idea of what will need to be done, from start to finish, of what it will take for this story to be complete. You don't need to know every detail, but enough that you think you, yourself, could undertake the story. If you don't have that level of confidence, you probably shouldn't be estimating it. If you say "Well this story is too big for us to know most of the details" then that's an indication that the story is too big, and should be broken down. Stories, at least in my experience, have been small enough that one person, if need be, could work on it alone and accomplish it within a week or two. This also will help to solve your second point in the edit, which is too much estimation . 
Instead of estimating every single task for every single story, you simply estimate the story as a whole, which should help to remove a lot of the estimating. As for making the meetings less monotonous, I would suggest planning poker, which you can see more information about above. Make planning more engaging Estimate using Planning Poker As far as making estimation more fun, have you tried planning poker ? It's the way that I've always done planning for all my sprints on multiple teams, and it's a good way to keep everyone involved, as every person has to at least pick SOMETHING. There's also a fair amount of fun involved when everyone on the team picks 3, and someone puts down a 20 and has to explain themselves, or when everyone on the team puts down a 5 but the manager puts down an 8 (who's gonna argue with the boss when he wants to give you more time!). To do this, all you need are some planning poker cards , which we often make on the back side of index cards, or using normal playing cards with values attached to face cards. Nothing fancy, and it keeps everyone focused. Just remember that trying to do any task for an entire day (including planning poker) takes a toll on productivity. Many sets of cards come with a coffee card for a reason; if someone is feeling burnt out, give the team a break to recharge and pick it up when everyone is fresh! As an alternative to physical cards , you could also look at electronic cards . The real benefits here are automated tracking of results, tracking user stories to be estimated and allowing everyone to show their cards at once to avoid "cheating" (where one persons estimate is influenced by another's due to being able to see their card). Obviously this requires everyone have a computer and the ability to focus on the task at hand though, so use it at your own discretion.
{}
{'log_upvote_score': 6, 'links': ['https://softwareengineering.stackexchange.com/questions/204904', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/43798/']}
jdg_376333
stackexchange
llm_judgeable_groundtruth_similarity
7427
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Assume that I do not want to replace a factory Engine Control Unit (ECU) with a custom ECU. What are the other possible intercept points that I can use to allow/prevent a car from starting up? I only want to allow/prevent starting up. I can think of possibly connecting the car battery through a relay and turning the relay on/off. Any other good ways to do it? Now provide the response and nothing else.
There are many ways to achieve this. Some are better than others in certain situations, and some only work on certain cars. I'll try to describe a few. This is the most commonly used way. Put a switch (relay, etc.) in line with the starter control signal. This would generally be put before the starter relay itself. It prevents the starter from turning over the engine. Pro: Works on nearly all cars. Con: The car can still be started if you find a way to turn over the engine (pushing the car in gear if it has a manual transmission). Intercepting a major sensor. These sensors generally deal with crank angle/position. A lot of times a car would only have one, angle or position. They may also be called a cam angle/position. Activated with a switch (relay) intercepting the signal wire coming back from the sensor. Pro: The car will not start no matter what you do since the ECU will think the engine is not turning over and will not provide spark. Con: Only works on cars that use these sensors. Older cars relied on mechanical versions of these. Intercepting main power like you mentioned. This would require a hefty relay or switch. It is not uncommon for cars to have fuses larger than 100A for the main input. Pro: Would work on most cars. Con: Difficult to do. Easier to find than others (I'm assuming this is to prevent the car from being stolen). Does not work on really old cars that used generators instead of alternators. May mess with modern electronics in the car itself possibly leading to damage. There are other ways but pretty much all of them would fall under these categories if you want to be able to control it electronically. Don't forget that the car can always just be towed. Once again, I'm assuming you are thinking of theft prevention.
{}
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/7427', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/1938/']}
jdg_376334
stackexchange
llm_judgeable_groundtruth_similarity
3160758
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In my html page, I see a link whose 'view source' code is as below : <a href="#" class="view"> I see a valid link when I hover my mouse on it and when I click it, it works. But I am not able to find where and how this URL gets generated. I found the class a.view being defined in one of the CSS, but couldn't find the URL in the page source.. Can somebody help me out on whr i can find this URL ? Now provide the response and nothing else.
I felt like replying as well, explaining the same thing as the others a bit differently. I am sure you know most of this, but it might help someone else. <a href="#" class="view"> The href="#" part is a commonly used way to make sure the link doesn't lead anywhere on its own. The #-attribute is used to create a link to some other section in the same document. For example, clicking a link of this kind: <a href="#news">Go to news</a> will take you to wherever you have the <a name="news"></a> code. So if you specify # without any name like in your case, the link leads nowhere. The class="view" part gives it an identifier that CSS or javascript can use. Inside the CSS files (if you have any) you will find specific styling procedures on all the elements tagged with the "view" class. To find out where the URL is specified I would look in the javascript code. It is either written directly in the same document or included from another file. Search your source code for something like: <script type="text/javascript"> bla bla bla </script> or <script> bla bla bla </script> and then search for any reference to your "view" class. An included javascript file can look something like this: <script type="text/javascript" src="include/javascript.js"></script> In that case, open javascript.js under the "include" folder and search in that file. Most commonly the includes are placed between <head> and </head> or close to the </body> tag. A faster way to find the link is to search for the actual link it goes to. For example, if you are directed to http://www.google.com/search?q=html when you click it, search for "google.com" or something in all the files you have in your web project, just remember the included files. In many text editors you can open all the files at once, and then search in them all for something.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3160758', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/381417/']}
jdg_376335
stackexchange
llm_judgeable_groundtruth_similarity
46498924
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a Shared Library with a .groovy script that I call in a jenkinsfile like this: MySharedLibFunction{ .. some args} I also have a .ps1 file in my shared library I want to execute. But if I do powershell pwd from in my shared library function when I call that function from my jenkinsfile the current working directory is the jenkins working directory of my pipeline where the jenkinsfile is located (which is usually what you want). Is there a way to access files in the shared lib? I want to do powershell -File ps1FileInMySharedLibVarsFolder.ps1 Now provide the response and nothing else.
You can only get the contents using the built-in step libraryResource . That's why I have the following functions in my shared library to copy it to a temporary directory and return the path to the file: /** * Generates a path to a temporary file location, ending with {@code path} parameter. * * @param path path suffix * @return path to file inside a temp directory */ @NonCPS String createTempLocation(String path) { String tmpDir = pwd tmp: true return tmpDir + File.separator + new File(path).getName() } /** * Returns the path to a temp location of a script from the global library (resources/ subdirectory) * * @param srcPath path within the resources/ subdirectory of this repo * @param destPath destination path (optional) * @return path to local file */ String copyGlobalLibraryScript(String srcPath, String destPath = null) { destPath = destPath ?: createTempLocation(srcPath) writeFile file: destPath, text: libraryResource(srcPath) echo "copyGlobalLibraryScript: copied ${srcPath} to ${destPath}" return destPath } As it returns the path to the temp file, you can pass this to any step expecting a file name: sh(copyGlobalLibraryScript('test.sh')) for a file residing in resources/test.sh within your shared library.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46498924', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1028270/']}
jdg_376336
stackexchange
llm_judgeable_groundtruth_similarity
11326792
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to use the Ajax.ActionLink Html helper therefore I need the jquery.unobtrusive-ajax.min.js library but IE is always showing this error: Microsoft JScript runtime error: Unable to set value of the property 'unobtrusive': object is null or undefined I've read that the solution is usage of jquery.validate.min.js and jquery.validate.unobtrusive.min.js but then I can't recognise the ajax call on server side. Now provide the response and nothing else.
I've read that the solution is usage of jquery.validate.min.js and jquery.validate.unobtrusive.min.js No, those 2 scripts have nothing to do with jquery.unobtrusive-ajax.min.js . They are used for unobtrusive validation. For Ajax.* helpers all you need is jQuery and jquery.unobtrusive-ajax.min.js (included in THAT order). So for unobtrusive AJAX you need: <script src="@Url.Content("~/Scripts/jquery-1.5.1.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script> If you want to use unobtrusive validation you could also include the 2 scripts afterwards (in THAT order): <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script> Another very important thing that you should make sure of is that you have removed absolutely any traces of Microsoft*.js scripts from your project. Those scripts are obsolete and starting from ASP.NET MVC 3 are no longer used by default. Also make sure that you have enabled unobtrusive AJAX in your web.config, otherwise the system will fall back to the legacy Microsoft ajax scripts: <appSettings> <add key="webpages:Version" value="1.0.0.0"/> <add key="ClientValidationEnabled" value="true"/> <add key="UnobtrusiveJavaScriptEnabled" value="true"/></appSettings>
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11326792', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/550464/']}
jdg_376337
stackexchange
llm_judgeable_groundtruth_similarity
21513245
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: ADT is the set of operations. ADT's are mathematical abstractions. Does this mean that ADT are same as classes or am i confusing both together ? Now provide the response and nothing else.
The key to the difference is abstract . Think of an ADT more like an interface - a class with only method declarations, no implementation details. As an example, a Stack ADT defines the basic stack operations like push and pop (but says nothing of how these operations should be implemented), while a Stack class would use either a linked-list or an array to actually implement these operations.
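To make the distinction concrete, here is a small sketch (the question isn't tied to a particular language, so take this Python version purely as an illustration): the abstract base class plays the role of the ADT (operations only), while the concrete class commits to an implementation.

from abc import ABC, abstractmethod

class Stack(ABC):
    """The ADT: says only *what* a stack can do."""
    @abstractmethod
    def push(self, item): ...
    @abstractmethod
    def pop(self): ...

class ListStack(Stack):
    """A class: decides *how* the operations work (here, backed by a list)."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()

s = ListStack()
s.push(1)
s.push(2)
print(s.pop())  # 2 -- last in, first out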
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21513245', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1112163/']}
jdg_376338
stackexchange
llm_judgeable_groundtruth_similarity
12079192
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a quick (short, character wise) way to get the last element of an array (assuming the array is non-empty)? I usually do: last = array[array.length-1] or last = array[-1..][0] Now provide the response and nothing else.
If you're using a modern version of CoffeeScript, do not use this. Use the answer by dule instead. If you don't mind modifying the array, last = array.pop() If you don't want the array modified, last = array[..].pop() That compiles to last = array.slice(0).pop() . I think it's pretty readable to people already exposed to CoffeeScript or Python slices. However, keep in mind that it will be much slower than last = array[array.length-1] for large arrays. I wouldn't recommend last = array[-1..][0] . It's short, but I don't think its meaning is immediately obvious. It's all subjective, though.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12079192', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/151110/']}
jdg_376339
stackexchange
llm_judgeable_groundtruth_similarity
13537
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've heard of the idea of bootstrapping a language, that is, writing a compiler/interpreter for the language in itself. I was wondering how this could be accomplished and looked around a bit, and saw someone say that it could only be done by either writing an initial compiler in a different language. hand-coding an initial compiler in Assembly, which seems like a special case of the first To me, neither of these seem to actually be bootstrapping a language in the sense that they both require outside support. Is there a way to actually write a compiler in its own language? Now provide the response and nothing else.
Is there a way to actually write a compiler in its own language? You have to have some existing language to write your new compiler in. If you were writing a new, say, C++ compiler, you would just write it in C++ and compile it with an existing compiler first. On the other hand, if you were creating a compiler for a new language, let's call it Yazzleof, you would need to write the new compiler in another language first. Generally, this would be another programming language, but it doesn't have to be. It can be assembly, or if necessary, machine code. If you were going to bootstrap a compiler for Yazzleof, you generally wouldn't write a compiler for the full language initially. Instead you would write a compiler for Yazzle-lite, the smallest possible subset of the Yazzleof (well, a pretty small subset at least). Then in Yazzle-lite, you would write a compiler for the full language. (Obviously this can occur iteratively instead of in one jump.) Because Yazzle-lite is a proper subset of Yazzleof, you now have a compiler which can compile itself. There is a really good writeup about bootstrapping a compiler from the lowest possible level (which on a modern machine is basically a hex editor), titled Bootstrapping a simple compiler from nothing . It can be found at https://web.archive.org/web/20061108010907/http://www.rano.org/bcompiler.html .
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/13537', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1266/']}
jdg_376340