Dataset columns:
  source                  stringclasses (1 value)
  task_type               stringclasses (1 value)
  in_source_id            stringlengths (1 to 8)
  prompt                  stringlengths (209 to 40.4k)
  gold_standard_solution  stringlengths (0 to 56.7k)
  verification_info       stringclasses (1 value)
  metadata                stringlengths (138 to 225)
  problem_id              stringlengths (9 to 10)
stackexchange
llm_judgeable_groundtruth_similarity
13014623
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

>>> import re
>>> s = 'this is a test'
>>> reg1 = re.compile('test$')
>>> match1 = reg1.match(s)
>>> print match1
None

In Kiki the pattern matches the "test" at the end of s. What am I missing? (I tried re.compile(r'test$') as well) Now provide the response and nothing else.
Use match1 = reg1.search(s) instead. The match function only matches at the start of the string ... see the documentation here : Python offers two different primitive operations based on regular expressions: re.match() checks for a match only at the beginning of the string, while re.search() checks for a match anywhere in the string (this is what Perl does by default).
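To make the difference concrete, here is a minimal, self-contained sketch (written for Python 3, so print is a function, unlike the Python 2 syntax in the question):

import re

s = 'this is a test'
reg1 = re.compile('test$')

print(reg1.match(s))    # None: match() only tries to match at the very start of s
print(reg1.search(s))   # a match object covering s[10:14], because search() scans the whole string

# The $ anchor still does its job with search(): it only matches 'test' at the end.
print(reg1.search('a test here'))  # None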
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13014623', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1743494/']}
jdg_377941
stackexchange
llm_judgeable_groundtruth_similarity
43807502
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have just upgraded to Visual Studio 2017 Professional and SSMS 17. I can access my servers and databases through Visual Studio 17's SQL Server Object Explorer but when I try to execute SSMS outside of the VS IDE it tells me " Cannot find one or more components.  Please reinstall the application ".  I have reinstalled SSMS 17 twice and still cannot run it. The SSMS executable is at: C:\Program Files (x86)\Microsoft SQL Server\140\Tools\Binn\ManagementStudio\Ssms.exe . I get the same problem on another machine running VS 2017 Community. What is causing the message " Cannot find one or more components.  Please reinstall the application " when I try to execute SSMS outside of Visual Studio? What do I need to do to execute SSMS outside of the VS IDE? Now provide the response and nothing else.
I had a similar problem, and in my case it was due to a corrupt Microsoft Visual Studio 2015 Shell (Isolated), which SSMS 17 requires to run outside the VS IDE. To fix this, I had to do the following:

1. Uninstall Microsoft Visual C++ 2017 Redistributable (both the x64 and x86 versions). This is to ensure that step (2) will succeed, as the repair process of the prerequisites might fail in some cases if newer versions have been installed.

2. Repair Microsoft Visual Studio 2015 Shell (Isolated) (or reinstall it using https://download.microsoft.com/download/C/9/C/C9CA8400-20AA-4488-92A3-F3F6D978AD81/vs_isoshell.exe ).

3. Reinstall Microsoft Visual C++ 2017 Redistributable ( https://go.microsoft.com/fwlink/?LinkId=746572 and https://go.microsoft.com/fwlink/?LinkId=746571 ) if removed in step (1).
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/43807502', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6223275/']}
jdg_377942
stackexchange
llm_judgeable_groundtruth_similarity
9890313
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have used KeyPairGenerator to generate a RSA key pair. If I'm not wrong, the KeyStore is only used to store certificates and not keys. How can I properly store the private key on the computer? Now provide the response and nothing else.
NOTE: This code is for demonstration purposes only. Private keys must be encrypted when you store them on disk. Do not use it as is. You can do something like this: KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA"); kpg.initialize(2048); KeyPair kp = kpg.genKeyPair(); KeyFactory fact = KeyFactory.getInstance("RSA"); RSAPublicKeySpec pub = fact.getKeySpec(kp.getPublic(), RSAPublicKeySpec.class); saveToFile(PUBLIC_KEY_FILE, pub.getModulus(), pub.getPublicExponent()); RSAPrivateKeySpec priv = fact.getKeySpec(kp.getPrivate(), RSAPrivateKeySpec.class); saveToFile(PRIVATE_KEY_FILE, priv.getModulus(), priv.getPrivateExponent()); The save function: private static void saveToFile(String fileName, BigInteger mod, BigInteger exp) throws SomeException { ObjectOutputStream oout = new ObjectOutputStream( new BufferedOutputStream(new FileOutputStream(fileName))); try { oout.writeObject(mod); oout.writeObject(exp); } catch (Exception e) { throw new SomeException(e); } finally { oout.close(); }} And read the same way back: private static PublicKey readPublicKey() throws SomeException { InputStream in = new FileInputStream(PUBLIC_KEY_FILE); ObjectInputStream oin = new ObjectInputStream(new BufferedInputStream(in)); try { BigInteger m = (BigInteger) oin.readObject(); BigInteger e = (BigInteger) oin.readObject(); RSAPublicKeySpec keySpec = new RSAPublicKeySpec(m, e); KeyFactory fact = KeyFactory.getInstance("RSA"); PublicKey pubKey = fact.generatePublic(keySpec); return pubKey; } catch (Exception e) { throw new SomeException(e); } finally { oin.close(); }} Reading private key is similar.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9890313', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/93558/']}
jdg_377943
stackexchange
llm_judgeable_groundtruth_similarity
3731172
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a large table with say 10 columns. 4 of them remains null most of the times. I have a query that does null value takes any size or no size in bytes. I read few articles some of them are saying : http://www.sql-server-citation.com/2009/12/common-mistakes-in-sql-server-part-4.html There is a misconception that if we have the NULL values in a table it doesn't occupy storage space. The fact is, a NULL value occupies space – 2 bytes SQL: Using NULL values vs. default values A NULL value in databases is a system value that takes up one byte of storage and indicates that a value is not present as opposed to a space or zero or any other default value. Can you please guide me regarding the size taken by null value. Now provide the response and nothing else.
If the field is fixed width storing NULL takes the same space as any other value - the width of the field. If the field is variable width the NULL value takes up no space. In addition to the space required to store a null value there is also an overhead for having a nullable column. For each row one bit is used per nullable column to mark whether the value for that column is null or not. This is true whether the column is fixed or variable length. The reason for the discrepancies that you have observed in information from other sources: The start of the first article is a bit misleading. The article is not talking about the cost of storing a NULL value, but the cost of having the ability to store a NULL (i.e the cost of making a column nullable). It's true that it costs something in storage space to make a column nullable, but once you have done that it takes less space to store a NULL than it takes to store a value (for variable width columns). The second link seems to be a question about Microsoft Access. I don't know the details of how Access stores NULLs but I wouldn't be surprised if it is different to SQL Server.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/3731172', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/449907/']}
jdg_377944
stackexchange
llm_judgeable_groundtruth_similarity
2405772
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: see this when the output reaches the bottom of the page, i'd like the canvas to automatically extend so that it can keep going. I tried setting the canvas.height property, but it clears the window. Is there any way to do this? Now provide the response and nothing else.
As others have mentioned, the right mouse button can be detected through the usual mouse events (mousedown, mouseup, click). However, if you're looking for a firing event when the right-click menu is brought up, you're looking in the wrong place. The right-click/context menu is also accessible via the keyboard (shift+F10 or the context menu key on Windows and some Linux). In this situation, the event that you're looking for is oncontextmenu:

window.oncontextmenu = function () {
    showCustomMenu();
    return false; // cancel default menu
}

As for the mouse events themselves, browsers set a property on the event object that is accessible from the event handling function:

document.body.onclick = function (e) {
    var isRightMB;
    e = e || window.event;

    if ("which" in e)  // Gecko (Firefox), WebKit (Safari/Chrome) & Opera
        isRightMB = e.which == 3;
    else if ("button" in e)  // IE, Opera
        isRightMB = e.button == 2;

    alert("Right mouse button" + (isRightMB ? "" : " was not") + " clicked!");
}

window.oncontextmenu - MDC
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/2405772', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/289253/']}
jdg_377945
stackexchange
llm_judgeable_groundtruth_similarity
17826
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: I hope this question isn't too open-ended for MO --- it's not my favorite type of question, but I do think there could be a good answer. I will happily CW the question if commenters want, but I also want answerers to pick up points for good answers, so... Let $X,Y$ be smooth manifolds. A smooth map $f: Y \to X$ is a bundle if there exists a smooth manifold $F$ and a covering $U_i$ of $X$ such that for each $U_i$, there is a diffeomorphism $\phi_i : F\times U_i \overset\sim\to f^{-1}(U_i)$ that intertwines the projections to $U_i$. This isn't my favorite type of definition, because it demands existence of structure without any uniqueness, but I don't want to define $F,U_i,\phi_i$ as part of the data of the bundle, as then I'd have the wrong notion of morphism of bundles. A definition I'm much happier with is of a submersion $f: Y \to X$, which is a smooth map such that for each $y\in Y$, the differential map ${\rm d}f|_y : {\rm T}_y Y \to {\rm T}_{f(y)}X$ is surjective. I'm under the impression that submersions have all sorts of nice properties. For example, preimages of points are embedded submanifolds (maybe preimages of embedded submanifolds are embedded submanifolds?). So, I know various ways that submersions are nice. Any bundle is in particular a submersion, and the converse is true for proper submersions (a map is proper if the preimage of any compact set is compact), but of course in general there are many submersions that are not bundles (take any open subset of $\mathbb R^n$, for example, and project to a coordinate $\mathbb R^m$ with $m\leq n$). But in the work I've done, I haven't ever really needed more from a bundle than that it be a submersion. Then again, I tend to do very local things, thinking about formal neighborhoods of points and the like. So, I'm wondering for some applications where I really need to use a bundle --- where some important fact is not true for general submersions (or, surjective submersions with connected fibers, say). Now provide the response and nothing else.
One would be that a fibre bundle $F \to E \to B$ has a homotopy long exact sequence $$ \cdots \to \pi_{n+1} B \to \pi_n F \to \pi_n E \to \pi_n B \to \pi_{n-1} F \to \cdots $$ This isn't true for a submersion; for one thing, the fibre of a submersion does not have a consistent homotopy type as you vary the point in the base space.
{}
{'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/17826', 'https://mathoverflow.net', 'https://mathoverflow.net/users/78/']}
jdg_377946
stackexchange
llm_judgeable_groundtruth_similarity
1966
Below is a question asked on the forum dsp.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This is a follow-up question of How do I apply a Chebishev filter? I designed a filter in Scipy: import scipy.signal as signalfs = 240signal.cheby1(8, 0.05, [0.1/(fs/2), 10.0/(fs/2)],btype='band', analog=0, output='ba')/usr/lib/python2.7/dist-packages/scipy/signal/filter_design.py:268: BadCoefficients: Badly conditioned filter coefficients (numerator): the results may be meaningless "results may be meaningless", BadCoefficients)(array([ -3.64482771e-08, -3.98755170e-23, 1.27568970e-07, 3.98755170e-23, -2.55137939e-07, -2.09346464e-22, 3.18922424e-07, 1.48536301e-21, -2.55137939e-07, -2.79128619e-22, 1.27568970e-07, -3.98755170e-23, -3.64482771e-08, 0.00000000e+00, 4.55603463e-09]),array([ 1.00000000e+00, -1.53699059e+01, 1.10865293e+02, -4.98175837e+02, 1.56087517e+03, -3.61582779e+03, 6.40630344e+03, -8.85525782e+03, 9.65121227e+03, -8.32140081e+03, 5.65721533e+03, -3.00062986e+03, 1.21729735e+03, -3.65135628e+02, 7.63720484e+01, -9.95191544e+00, 6.08648759e-01])) The first array should contain the b coefficients, the second array should contain the b coefficients of the filter. The result of this filter is oscillating and seems to be wrong. Hilmar wrote that this filter "is a numerically challenging filter, since you have poles very very close to the unit circle" and I must "break the filter down into second order sections and apply those sequentially". Since I have no background in signal processing and found nothing about that anywhere I wanted to ask here how I can do that. edit: The zero-pole-gain ('zpk') representation is signal.cheby1(8, 0.05, [0.1/(fs/2), 10.0/(fs/2)],btype='band', analog=0, output='zpk')/usr/lib/python2.7/dist-packages/scipy/signal/filter_design.py:268: BadCoefficients: Badly conditioned filter coefficients (numerator): the results may be meaningless"results may be meaningless", BadCoefficients)(array([ 0.95041122+0.63504447j, 0.95041122-0.63504447j, 0.77688699+0.32179713j, 0.77688699-0.32179713j, 0.72152481+0.14352021j, 0.72152481-0.14352021j, 0.70710678+0.j , -0.95041122+0.63504447j, -0.95041122-0.63504447j, -0.77688699+0.32179713j, -0.77688699-0.32179713j, -0.72152481+0.14352021j, -0.72152481-0.14352021j, -0.70710678+0.j ]), array([ 1.19049737+0.j , 1.16303520+0.1084667j , 1.16303520-0.1084667j , 1.08874047+0.19530989j, 1.08874047-0.19530989j, 0.99220878+0.25677213j, 0.99220878-0.25677213j, 0.93226615+0.2765259j , 0.93226615-0.2765259j , 0.87852354+0.22994371j, 0.87852354-0.22994371j, 0.83333317+0.15993377j, 0.83333317-0.15993377j, 0.80434290+0.08130729j, 0.80434290-0.08130729j, 0.79450811+0.j ]), -3.6448277062436296e-08) Now provide the response and nothing else.
Below are the results as an SOS matrix and gain. Every row in the matrix is a single second-order section in the coefficient order b0, b1, b2, a0, a1, a2. Obviously we need a0 = 1, and the sections are normalized so that b0 = 1 as well. The overall accumulated gain is in the single number "gain". The fact that this gain is actually on the order of 10^-9 shows that this is a numerically tricky beast. Here is how this works:

1. Design the filter in poles and zeros (zp or zpk), NOT in polynomial form (ba). I imagine that the cheby1() function has an output option for that. If not, you can't use it.

2. The poles come in conjugate complex pairs; the zeros in this case are real, but you can think of them as a conjugate complex pair with a phase of 0. Group one pole pair each with one zero pair. Start with the pole pair that has the highest magnitude and find the zero pair that's closest to it. This step is really important!!

3. Turn the poles and zeros into a polynomial in 1/z of the shape (1-p/z)*(1-p'/z). Let's say the complex zero/pole is x+j*y. Then the b0/a0 coefficients are always 1, the b1/a1 coefficients are -2*x and the b2/a2 coefficients are x^2+y^2.

4. Repeat over all poles. Each conjugate pole pair results in one second-order section.

A word of caution, if I may: if none of the things that I have listed above mean anything to you, then you probably should not be doing this. This is not trivial and many things can go wrong in non-obvious ways. Depending on what you are planning to do with the results, you really want to make sure that your code is fully vetted and properly tested. Unless you have a good idea of how to test this, I would not base anything mission-critical on the results.

sos =
   1.000000000000000   2.000000000000000   1.000000000000000   1.000000000000000  -1.824520683146258   0.835012167790563
   1.000000000000000   2.000000000000000   1.000000000000000   1.000000000000000  -1.825351261746257   0.854128332434601
   1.000000000000000   2.000000000000000   1.000000000000000   1.000000000000000  -1.845749373866514   0.899792317177862
   1.000000000000000   2.000000000000000   1.000000000000000   1.000000000000000  -1.890243419701735   0.963689345198141
   1.000000000000000  -2.000000000000000   1.000000000000000   1.000000000000000  -1.989176997083584   0.989218209927750
   1.000000000000000  -2.000000000000000   1.000000000000000   1.000000000000000  -1.996496707908275   0.996511870590091
   1.000000000000000  -2.000000000000000   1.000000000000000   1.000000000000000  -1.998714416931744   0.998722644911976
   1.000000000000000  -2.000000000000000   1.000000000000000   1.000000000000000  -1.999653030269727   0.999659262704001

gain = 4.556034632804534e-009
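For anyone doing this in SciPy today: recent SciPy versions can do the pole/zero pairing for you via the 'sos' output format and signal.sosfilt, so the hand construction above is only needed on an old SciPy. A minimal sketch along those lines, reusing the band edges from the question (the test signal is purely illustrative):

import numpy as np
from scipy import signal

fs = 240.0
# Design straight to second-order sections instead of a single (b, a) polynomial pair.
sos = signal.cheby1(8, 0.05, [0.1 / (fs / 2), 10.0 / (fs / 2)],
                    btype='band', output='sos')

# Apply the cascaded sections sequentially to some test data.
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + np.sin(2 * np.pi * 50 * t)
y = signal.sosfilt(sos, x)

# If you already have a (z, p, k) design, signal.zpk2sos(z, p, k) performs the
# pairing and ordering step described above.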
{}
{'log_upvote_score': 4, 'links': ['https://dsp.stackexchange.com/questions/1966', 'https://dsp.stackexchange.com', 'https://dsp.stackexchange.com/users/1165/']}
jdg_377947
stackexchange
llm_judgeable_groundtruth_similarity
484367
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've been trying to find a tight upper bound for the series $$S (x) = e^{-x} \sum_{k=0}^{\infty} \frac{x^k}{k!} \sqrt{k+1}$$ So far, I've managed to get a reasonable bound for small values of $x$ by using the inequality $\sqrt{k+1} \leq \sqrt{\frac{k^{2}}{4} + k + 1} = \frac{k}{2} + 1 ~\forall~k \geq 0$, but it becomes very loose when $x$ is large. I've also tried taking a Taylor series approximation to $\sqrt{k+1}$, but this leads to a complicated infinite sum of weighted Bell polynomials which, as far as I'm aware, doesn't have a closed form. Any suggestions would be greatly appreciated! Now provide the response and nothing else.
Upper and Lower Bounds Note that$$e^{-x}\sum_{k=0}^\infty\frac{x^k}{k!}=1\tag{1}$$and that$$e^{-x}\sum_{k=0}^\infty(k+1)\frac{x^k}{k!}=x+1\tag{2}$$Since $\sqrt{x}$ is concave, Jensen's Inequality gives$$e^{-x}\sum_{k=0}^\infty\sqrt{k+1}\frac{x^k}{k!}\le\sqrt{x+1}\tag{3}$$Also,$$e^{-x}\sum_{k=0}^\infty\frac1{k+1}\frac{x^k}{k!}=\frac{1-e^{-x}}{x}\tag{4}$$Since $1/\sqrt{x}$ is convex, Jensen's Inequality gives$$\begin{align}e^{-x}\sum_{k=0}^\infty\sqrt{k+1}\frac{x^k}{k!}&\ge\sqrt{\frac{x}{1-e^{-x}}}\\&\ge\sqrt{x}\tag{5}\end{align}$$Therefore, we get the bounds$$\sqrt{x}\le e^{-x}\sum_{k=0}^\infty\sqrt{k+1}\frac{x^k}{k!}\le\sqrt{x+1}\tag{6}$$ Asymptotic Expansion Using Stirling's Expansion and the Binomial Theorem, we get$$\begin{align}\frac1{4^n}\binom{2n}{n}&=\frac1{\sqrt{\pi n}} \left(1-\frac1{8n}+\frac1{128n^2}+\frac5{1024n^3}-\frac{21}{32768n^4}+\dots\right)\\&=\frac1{\sqrt{\pi(n+1)}} \left(1+\frac3{8n}-\frac{23}{128n^2}+\frac{89}{1024n^3}-\frac{1509}{32768n^4}+\dots\right)\tag{7}\end{align}$$and therefore,$$\begin{align}\frac{\sqrt{n+1}}{n!}&=\frac{4^n}{\sqrt{\pi}}\frac{n!}{(2n)!}\left(1+\frac3{8n}-\frac{23}{128n^2}+\frac{89}{1024n^3}-\frac{1509}{32768n^4}+\dots\right)\\&=\frac{2^n}{\sqrt{\pi}}\frac1{(2n-1)!!}\left(1+\frac3{8n}-\frac{23}{128n^2}+\frac{89}{1024n^3}-\frac{1509}{32768n^4}+\dots\right)\\&=\frac{2^n}{\sqrt{\pi}}\small\left(\frac1{(2n{-}1)!!}+\frac{3/4}{(2n{+}1)!!}+\frac{1/32}{(2n{+}3)!!}+\frac{9/128}{(2n{+}5)!!}+\frac{491/2048}{(2n{+}7)!!}+\dots\right)\tag{8}\end{align}$$Note that$$\begin{align}\int_x^\infty e^{-t^2/2}\,\mathrm{d}t&=\frac1x\int_x^\infty\frac{x}{t}e^{-t^2/2}\,\mathrm{d}t^2/2\\&\le\frac1x\int_x^\infty e^{-t^2/2}\,\mathrm{d}t^2/2\\&=\frac1xe^{-x^2/2}\tag{9}\end{align}$$therefore, since both the following sum and integral satisfy $f'=1+xf$ and agree at $x=0$,$$\begin{align}\sum_{k=0}^\infty\frac{x^{2k+1}}{(2k+1)!!}&=e^{x^2/2}\int_0^xe^{-t^2/2}\,\mathrm{d}t\\&=\sqrt{\frac\pi2}\ e^{x^2/2}+O\left(\frac1x\right)\\\frac1{\sqrt{2x}}\sum_{k=0}^\infty\frac{(2x)^{k+1}}{(2k+1)!!}&=\sqrt{\frac\pi2}e^x+O\left(\frac1{\sqrt{x}}\right)\\e^{-x}\sum_{k=0}^\infty\frac{(2x)^{k+1}}{(2k+1)!!}&=\sqrt{\pi x}+O\left(e^{-x}\right)\tag{10}\end{align}$$Multiplying $(8)$ by $e^{-x}x^n$, summing, and applying $(10)$ yields the asymptotic expansion that Raymond Manzoni got:$$\begin{align}e^{-x}\sum_{n=1}^\infty\frac{\sqrt{n+1}}{n!}x^n&=\sqrt{x}\small\left(1+\frac3{8x}+\frac1{128x^2}+\frac9{1024x^3}+\frac{491}{32768x^4}+O\left(\frac1{x^5}\right)\right)\tag{11}\end{align}$$
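As a quick numerical sanity check of the two-sided bound in $(6)$, the sketch below truncates the series directly; the cutoff of 200 terms is arbitrary but more than enough for the values of $x$ shown, and the term is updated iteratively to avoid overflowing the factorial:

import math

def S(x, terms=200):
    # e^{-x} * sum_{k>=0} sqrt(k+1) * x^k / k!, truncated after `terms` terms
    total, term = 0.0, 1.0          # term holds x**k / k!
    for k in range(terms):
        total += math.sqrt(k + 1) * term
        term *= x / (k + 1)
    return math.exp(-x) * total

for x in (1.0, 5.0, 25.0):
    print(math.sqrt(x), S(x), math.sqrt(x + 1))   # sqrt(x) <= S(x) <= sqrt(x+1) each time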
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/484367', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/46108/']}
jdg_377948
stackexchange
llm_judgeable_groundtruth_similarity
7290131
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to understand how the -pg (or -p ) flag works when compiling C code with GCC. The official GCC documentation only states : -pg Generate extra code to write profile information suitable for the analysis program gprof. You must use this option when compiling the source files you want data about, and you must also use it when linking. This really interests me, as I'm doing a small research on profilers. I'm trying to pick the best tool for the job. Now provide the response and nothing else.
Compiling with -pg instruments your code, so that Gprof reports detailed information. See gprof's manual, 9.1 Implementation of Profiling : Profiling works by changing how every function in your program is compiled so that when it is called, it will stash away some information about where it was called from. From this, the profiler can figure out what function called it, and can count how many times it was called. This change is made by the compiler when your program is compiled with the -pg option, which causes every function to call mcount (or _mcount , or __mcount , depending on the OS and compiler) as one of its first operations. The mcount routine, included in the profiling library, is responsible for recording in an in-memory call graph table both its parent routine (the child) and its parent's parent. This is typically done by examining the stack frame to find both the address of the child, and the return address in the original parent. Since this is a very machine-dependent operation, mcount itself is typically a short assembly-language stub routine that extracts the required information, and then calls __mcount_internal (a normal C function) with two arguments— frompc and selfpc . __mcount_internal is responsible for maintaining the in-memory call graph, which records frompc , selfpc , and the number of times each of these call arcs was traversed. ... Please note that with such an instrumenting profiler, you're profiling the same code you would compile in release without profiling instrumentation. There is an overhead associated with the instrumentation code itself. Also, the instrumentation code may alter instruction and data cache usage. Contrary to an instrumenting profiler, a sampling profiler like Intel VTune works on noninstrumented code by looking at the target program's program counter at regular intervals using operating system interrupts. It can also query special CPU registers to give you even more insight of what's going on. See also Profilers Instrumenting Vs Sampling .
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7290131', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/451461/']}
jdg_377949
stackexchange
llm_judgeable_groundtruth_similarity
2334037
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using Groovy in a Java Swing application as part of my plan to force-feed myself dynamic languages until I like them (which is happening, partly). My stack traces are filled with Groovy stuff like org.codehaus.groovy.runtime.callsite.ConstructorSite$ConstructorSiteNoUnwrapNoCoerce.callConstructor is there a way to get Eclipse to remove all of that codehaus stuff (filter stack traces, basically)? Edit: I can do this from the command-line with grep (well, not yet ) so it's not so bad, but inside of Eclipse would be great too. Now provide the response and nothing else.
There is a utility in Groovy that does exactly what you want: StackTraceUtils. It will clean all the call-site information from your stack trace, leaving the stuff you're really interested in. Edit: in Java you will have to encapsulate the exception in a java.lang.RuntimeException, according to comments. Example of usage:

try {
    1/0;
} catch (Throwable t) {
    // Sanitize the Throwable and rethrow it
    throw new RuntimeException(org.codehaus.groovy.runtime.StackTraceUtils.sanitize(t));
}

StackTraceUtils is available in the latest version of Groovy and originally comes from Grails. I'm not sure how you would go about applying this to all of your project's stack traces, but I think both Griffon and Grails do it, so there should be some hints in those projects.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2334037', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8047/']}
jdg_377950
stackexchange
llm_judgeable_groundtruth_similarity
1852537
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using blogger.com to host some texts on programming, and I'd like to use Prettify (same as Stack Overflow) to nicely colour the code samples. How do I install the Prettify scripts into the blog domain?Would it be better (if indeed it's possible) to link to a shared copy somewhere?I have webspace on a different domain. Would that help? Now provide the response and nothing else.
When you make a new entry in Blogger, you get the option to use HTML in your entry and to edit your blog entries. So type http://blogger.com , log in, and navigate to Posting → Edit Posts → Edit . In there put this at the top: <script type="text/javascript" language="javascript" src="https://cdnjs.cloudflare.com/ajax/libs/prettify/r298/prettify.min.js"></script><script type="text/javascript" language="javascript" src="https://cdnjs.cloudflare.com/ajax/libs/prettify/r298/lang-css.min.js"></script><script type="text/javascript">function addLoadEvent(func) { var oldonload = window.onload; if (typeof window.onload != 'function') { window.onload = func; } else { window.onload = function() { if (oldonload) { oldonload(); } func(); } }}addLoadEvent(function() { prettyPrint();});</script><style type="text/css">/* Pretty printing styles. Used with prettify.js. */.str { color: #080; }.kwd { color: #008; }.com { color: #800; }.typ { color: #606; }.lit { color: #066; }.pun { color: #660; }.pln { color: #000; }.tag { color: #008; }.atn { color: #606; }.atv { color: #080; }.dec { color: #606; }pre.prettyprint { padding: 2px; border: 1px solid #888; }@media print { .str { color: #060; } .kwd { color: #006; font-weight: bold; } .com { color: #600; font-style: italic; } .typ { color: #404; font-weight: bold; } .lit { color: #044; } .pun { color: #440; } .pln { color: #000; } .tag { color: #006; font-weight: bold; } .atn { color: #404; } .atv { color: #060; }}</style> Note that you shouldn't use prettyPrint directly as an event handler. It confuses it (see the readme for details). Which is why we're passing addLoadEvent a function that then turns around and calls prettyPrint . In this case, because Blogger does not allow us to link to the stylesheet, we just embed the prettify.css contents. Then add a <code></code> tag or a <pre></pre> tag with the class name of "prettyprint" . You can even specify the language like "prettyprint lang-html" . So it can look like this: <pre class="prettyprint lang-html"><!-- your code here--></pre> Or like this: <code class="prettyprint lang-html"><!-- your code here--></code> The code that you put in needs to have its HTML cleaned from < and > . To do this, just paste your code in here: https://www.freeformatter.com/html-escape.html You can put the top code in your HTML layout, so that it’s included for all pages by default if you like. As of 2012, you can link CSS files in Blogger, so adding this to the <head> should be enough: <link href="https://cdnjs.cloudflare.com/ajax/libs/prettify/r298/prettify.min.css" rel="stylesheet" type="text/css" /><script type="text/javascript" language="javascript" src="https://cdnjs.cloudflare.com/ajax/libs/prettify/r298/prettify.min.js"></script><script type="text/javascript" language="javascript" src="https://cdnjs.cloudflare.com/ajax/libs/prettify/r298/lang-css.min.js"></script><script type="text/javascript"> document.addEventListener('DOMContentLoaded',function() { prettyPrint(); });</script> I chose not to replace the body onload event on purpose. Instead, I'm using the new DOMContentLoaded event that the old browsers don't support. 
If you need old browser support, you can use any other load event to initiate prettyPrint , for example jQuery: jQuery(function($){ prettyPrint();}); Or the supposedly smallest domready ever And you're done :) As Lim H pointed out in the comments, in case where you use the Blogger dynamic views (Ajax templates) then you need to use the method described here to bind custom JavaScript code: prettyPrint() doesn't get called on page load Use the guide at GitHub: https://github.com/google/code-prettify Basically just use this :) <script src="https://cdnjs.cloudflare.com/ajax/libs/prettify/r298/run_prettify.min.js"></script><pre class="prettyprint"><code class="language-css">...</code></pre>
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1852537', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3568/']}
jdg_377951
stackexchange
llm_judgeable_groundtruth_similarity
516766
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $f:X \to Y$ be a morphism of $k$-schemes locally of finite type, and let $x \in X$ be a closed point. Then $\kappa(x)$ is a finite extension of $k$ by the Nullstellensatz. I want to conclude that $f(x)$ is a closed point of $Y$ because $\kappa(f(x))$ is also a finite extension of $k$. Why is $\kappa(f(x))$ a finite extension of $k$? Now provide the response and nothing else.
A morphism of schemes is a morphism of locally ringed spaces. This means that, writing $y=f(x)$, the induced $k$-morphism on stalks $f^*_x:\mathcal O_{Y,y}\to \mathcal O_{X,x}$ sends the maximal ideal $\mathfrak m_y \subset \mathcal O_{Y,y}$ into the maximal ideal $\mathfrak m_x \subset \mathcal O_{X,x}$ and thus induces a morphism of $k$-extensions $\kappa (y)=\mathcal O_{Y,y}/\mathfrak m_y \to \kappa (x)=\mathcal O_{X,x}/\mathfrak m_x$. Since the extension $\kappa (x)/k$ is finite-dimensional, so is its subextension $\kappa (y)/k$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/516766', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/88863/']}
jdg_377952
stackexchange
llm_judgeable_groundtruth_similarity
34877
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Prove that there are infinitely many primes and non-primes in the numbers $10^n+1$, where $n$ is a natural number. So numbers are 101, 1001, 10001 etc. Now provide the response and nothing else.
I am not so sure this is homework. One part is easy: If $\displaystyle n \gt 1$ is odd (or, more generally, divisible by an odd number $\displaystyle \gt 1$), then $\displaystyle 1+10^n$ is composite, using the fact that $\displaystyle x^{2k+1} + y^{2k+1}$ is divisible by $\displaystyle x+y$. For $\displaystyle 1 + 10^n$ to be prime, $\displaystyle n$ must be a power of $\displaystyle 2$, which makes it similar to Fermat numbers, and the question of whether there are an infinite number of Fermat primes is open. I believe the current 'expectation' based on heuristic arguments is that there are only finitely many such primes. I would guess this would be the case with $\displaystyle 1 + 10^{2^m}$ too.
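To make the divisibility step concrete with a worked example of my own: take $n = 6 = 2\cdot 3$, which is divisible by the odd number $3$. Then
$$10^{6}+1=(10^{2})^{3}+1=(10^{2}+1)\left((10^{2})^{2}-10^{2}+1\right)=101\cdot 9901,$$
so $1000001$ is composite. The same identity $x^{3}+1=(x+1)(x^{2}-x+1)$, and its analogues for higher odd powers, applied with $x=10^{k}$, gives a proper factor of $10^{n}+1$ whenever $n>1$ has an odd divisor greater than $1$.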
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/34877', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/10015/']}
jdg_377953
stackexchange
llm_judgeable_groundtruth_similarity
9263036
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I just realized that the recommended Rails way to set locale in your controller before_filter :set_localedef set_locale I18n.locale = params[:locale] || I18n.default_localeend sets the locale globally. The code above works, but I wonder is default_locale really default if you have to type it explicitly? What I'd expect is to have a locale per request (like we have session per request) and doing something like: def set_locale locale = params[:locale] if params[:locale]end And having I18n.default_locale used by default otherwise. This would match ideally the optional locale in path: # config/routes.rbscope "(:locale)", :locale => /en|nl/ do resources :booksend For now if for some reason I skip locale setting in some action it uses the locale set in the previous request which could be from another user! And isn't there a potential race condition as one request can change global I18n.locale while another request (having set another locale beforehande) is in the middle of rendering? UPDATE: Some details I found for now, from the I18n documentstion: Sets the current locale pseudo-globally, i.e. in the Thread.current hash def locale=(locale) Now I want to understand if every request is a separate thread. UPDATE 2: See my answer for explanation. Now provide the response and nothing else.
So now the final answer. TL;DR Setting locale acts as global only when you use threaded web servers, like Thin and Puma. As I mentioned, I18n.locale= Sets the current locale pseudo-globally, i.e. in the Thread.current hash So it is supposed to be per-request, and it works this way in Webrick and Unicorn. But if you use threaded web server like Thin or Puma, seems that the thread lives longer, and the value is preserved for future requests, until it is changed explicitly. Where I learned it is from the new Steve Klabnik's gem request_store : If you need global state, you've probably reached for Thread.current. <...> So people are using those fancy threaded web servers, like Thin or Puma. But if you use Thread.current, and you use one of those servers, watch out! Values can stick around longer than you'd expect, and this can cause bugs.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9263036', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/212245/']}
jdg_377954
stackexchange
llm_judgeable_groundtruth_similarity
480049
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Math people: I think it is a good idea to teach beginning calculus students the Riemann Integral (I refer to what calculus books call the "Riemann Integral" and ignore any controversy about whether it should be called the Darboux integral, etc.), since it gives a rigorous definition to the idea of "area under a curve" and you don't need measure theory to define it. But is the Riemann integral ever used in "real" mathematics by mathematicians who know Lebesgue integration? Is there any purpose in proving difficult-to-prove properties of the Riemann integral (say, in a real analysis class)? The Lebesgue integral is more powerful and has those properties, plus more. The only uses I can think of are: (i) using it as as trick to evaluate certain infinite sums or limits and (ii) a jumping-off point for learning the Riemann-Stieltjes integral. EDIT: I just discovered a similar, unanswered question at Are Specific Facts about the Riemann Integral Logically Required? . If you read it, you may understand my question better. Now provide the response and nothing else.
I think it is a bit of a shame that the standard pedagogical motivation for the Lebesgue integral seems to involve "dumping on" the Riemann integral. There is (of course) a sense in which the Lebesgue integral is stronger: the collection of Lebesgue integrable functions properly contains the collection of (properly!) Riemann integrable functions, so the Lebesgue integral is "better". As Mariano has pointed out in the comments, this is not necessarily very convincing: the standard examples of bounded, measurable, non-Riemann integrable functions look rather contrived. In my opinion, most of the true advantage of the Lebesgue integral over the Riemann integral resides in the Dominated Convergence Theorem . This all-important result is much harder to prove directly for the Riemann integral. In part of course it is hard to prove because it is not true that a pointwise limit of Riemann integrable functions must be Riemann integrable, but again that's not where the crux of the problem lies. In the setting of the DCT if we add the hypothesis that the limit function is Riemann integrable then of course the theorem holds for the Riemann integral...but try to prove it without using Lebesgue's methods! (People have done this, by the way, and the difficulty of these arguments is persuasive evidence in favor of Lebesgue.) I honestly think that in many (certainly not all, of course) areas of mathematics, it is the DCT (and a couple of other related results) which is really important and not the attendant measure theory at all. Thus I wish the approach via the Daniell integral were more popular: e.g. I can imagine an alternate universe in which this is part of undergraduate analysis and "measure theory and Lebesgue integration" was a popular "topics" graduate course rather than something that every young math student cuts her teeth on and many never use again. If measure theory were more divorced from the needs of integration theory one would naturally be tempted to either introduce more geometry or make explicit the connections to probability theory: either one of these would be a major livening up of the material, I think. Right, but I'm meant to be answering the question rather than ranting. There is another sense in which the Riemann integral is stronger than the Lebesgue integral: since Riemann's definition of Riemann integrability is a priori so demanding, knowing that a function is Riemann integrable is better than knowing it is Lebesgue integrable. It can be used to evaluate certain limits, yes, but this is not just a trick! Rather, the fact that an incredibly broad range of "interpolatory sums" associated to e.g. an arbitrary continuous function all converge to the same number is incredibly useful. As I have said before and others have said here, the entire branch of analysis known as approximation theory sure looks like it is founded upon the back of the Riemann integral, not the Lebesgue integral. In this branch of mathematics one is interested in various interpolatory schemes closely related to Riemann sums, and often one looks for a good tradeoff between convergence rates, efficiency and so forth in terms of the amount of smoothness of the function. An approximation scheme which worked for every $C^2$ function, for instance, would be regarded as quite general and useful. Does a numerical analyst ever meet a non-Riemann-integrable function? It helps to fix ideas to restrict to the characteristic function $1_S$ of a bounded subset $S \subset \mathbb{R}^n$ . 
Then $1_S$ is Lebesgue integrable iff $S$ is Lebesgue measurable. A general Lebesgue measurable set can be quite pathological. On the other hand, $1_S$ is Riemann integrable iff $S$ is Jordan measurable ; this is a less well-known concept but is both technically useful and in some respects more natural. The fact that the volume of a Jordan measurable set can be computed as a limit of lattice-point counting is a key idea linking discrete and continous geometry. Just as an example, this came up (in a very standard and well-known way) in a paper I wrote recently: see Proposition 3.7 here . Geometric facts like these fail for, say, the characteristic function of the rational points in $[0,1]^d$ . Here is a somewhat related instance of Riemann integrability: a sequence $\{x_n\}$ in $[0,1]$ is uniformly distributed iff for all Riemann integrable functions $f: [0,1] \rightarrow \mathbb{R}$ , $\lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N f(x_n) = \int_0^1 f$ . (See e.g. Theorem 7 of these notes .) On the right hand side it (of course) doesn't matter whether you take the integral to be in the sense of Riemann or Lebesgue, but if $f$ is not Riemann integrable then nothing good needs to happen on the left hand side. This is another instance in which Riemann integrable functions are better.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/480049', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/55689/']}
jdg_377955
stackexchange
llm_judgeable_groundtruth_similarity
14561402
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am not able to understand the below code with respect to the comment provided. What does this code does, and what would be the equivalent code for 8-aligned ? /* segment size must be 4-aligned */attr->options.ssize &= ~3; Here, ssize is of unsigned int type. Now provide the response and nothing else.
Since 4 in binary is 100, any value aligned to 4-byte boundaries (i.e. a multiple of 4) will have the last two bits set to zero. 3 in binary is 11, and ~3 is the bitwise negation of those bits, i.e., ...1111100. Performing a bitwise AND with that value will keep every bit the same, except the last two which will be cleared (bit & 1 == bit, and bit & 0 == 0). This gives us a the next lower or equal value that is a multiple of 4. To do the same operation for 8 (1000 in binary), we need to clear out the lowest three bits. We can do that with the bitwise negation of the binary 111, i.e., ~7.
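A quick way to see this in action is to evaluate the masks on a few values; this little snippet is purely illustrative (any language with bitwise AND behaves the same way):

for ssize in range(10):
    aligned4 = ssize & ~3   # clear the two lowest bits -> round down to a multiple of 4
    aligned8 = ssize & ~7   # clear the three lowest bits -> round down to a multiple of 8
    print(ssize, aligned4, aligned8)

# e.g. ssize=5 -> aligned4=4, aligned8=0; ssize=9 -> aligned4=8, aligned8=8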
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14561402', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/220386/']}
jdg_377956
stackexchange
llm_judgeable_groundtruth_similarity
1110231
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: (Chain rule) Assume $F : \mathbb{R} \to \mathbb{R}$ is $C^1$, with $F'$ bounded. Suppose $U$ is bounded and $u \in W^{1,p}(U)$ for some $1 \le p \le \infty$. Show $$v :=F(u) \in W^{1,p}(U) \quad \text{and} \quad v_{x_i}=F'(u)u_{x_i}.$$ From PDE Evans, 2nd edition: Chapter 5, Exercise 17. Here is what I understand conceptually so far: Since $u \in W^{1,p}(U)$, it follows $Du=u'$ exists, with $$\int_U u \phi' dx = -\int_U Du \phi \, dx.$$ I need to show that $D(F(u))=F'(u)Du$ exists, with $$\int_U F(u) \phi' dx = -\int_U D(F(u)) \phi \, dx.$$ Then, I can conclude that $F(u) \in W^{1,p}(U)$. This is all I know so far; how can I go about making the connection? Now provide the response and nothing else.
Assume first that $1\leq p <\infty$ If $u\in C^\infty(\bar{U})$ then clearly $v=F(u)\in C^1(\bar{U})$ and $\nabla v=F'(u)\nabla u$. Now if $u$ is a general $W^{1,p}$ function then take a sequence $u_k \to u$ in $W^{1,p}$ with $u_k\in C^\infty(\bar{U})$ and such that $u_k\to u$ and $\nabla u_k \to \nabla u$ pointwise a.e. in $U$. Then$$|F(u)-F(u_k)| \leq M|u-u_k|,$$with $M=\| F'\|_\infty$. On the other hand we also have$$F'(u_k(x))\nabla u_k(x) \to F'(u(x))\nabla u(x), \qquad \text{ for a.e. } x\in U, $$and moreover $| F'(u_k)\nabla u_k|\leq M|\nabla u_k|$. Since $\nabla u_k \to \nabla u$ in $L^p$, by the dominated convegence theorem, $F'(u_k)\nabla u_k \to F'(u)\nabla u$ in $L^p(U)$. Combining this with the first estimate we get that $F(u)\in W^{1,p}(U)$ and $\nabla F(u)= F'(u)\nabla u$. If $p=\infty$ then we can simply note that $W^{1,\infty}(U)$ is the space of Lipschitz continuous functions in $U$, so take $u$ a Lipschitz function with Lipschitz constant $N$, and $M$ the Lipschitz constant of $F$ as before, then$$|F(u(x))-F(u(y))|\leq M|u(x)-u(y)| \leq NM|x-y|, \qquad \forall x,\ y\in U.$$Therefore $F(u)\in W^{1,\infty}(U)$. Edit: As an extra exercise try to see that the condition $U$ being a $C^1$ domain is not needed when $1\leq p<\infty$ (try to prove that $F(u)\in W^{1,p}(U)$ whenever $u\in C^\infty(U)\cap W^{1,p}(U)$).
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1110231', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/111793/']}
jdg_377957
stackexchange
llm_judgeable_groundtruth_similarity
1409854
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Having this code... var b = new ReadOnlyCollection<int>(new[] { 2, 4, 2, 2 });b[2] = 3; I get a compile error at the second line. I would expect a runtime error since ReadOnlyCollection<T> implements IList<T> and the this[T] have a setter in the IList<T> interface. I've tried to replicate the functionality of ReadOnlyCollection, but removing the setter from this[T] is a compile error. Now provide the response and nothing else.
The indexer is implemented with explicit interface implementation, so you'll only be able to access it if you do: IList<int> b = new ReadOnlyCollection<int>(new[] { 2, 4, 2, 2 });b[2] = 3; or var b = new ReadOnlyCollection<int>(new[] { 2, 4, 2, 2 });((IList<int>)b)[2] = 3; Of course, it'll then fail at execution time... This is entirely deliberate and helpful - it means that when the compiler knows it's a ReadOnlyCollection , the unsupported bits of functionality aren't available to you, helping to divert you away from execution time failure. It's an interesting and relatively unusual step though, effectively implementing one half of a property/indexer implicitly, and one half explicitly. Contrary to my previous thoughts, I believe ReadOnlyCollection<T> actually implements the whole indexer explicitly, but also provides a public readonly indexer. In other words, it's something like this: T IList<T>.this[int index]{ // Delegate interface implementation to "normal" implementation get { return this[index]; } set { throw new NotSupportedException("Collection is read-only."); }}public T this[int index]{ get { return ...; }}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1409854', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/170196/']}
jdg_377958
stackexchange
llm_judgeable_groundtruth_similarity
9944
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: let $G$ be a group such that $\mathrm{Aut}(G)$ is abelian. is then $G$ abelian? This is a sort of generalization of the well-known exercise, that $G$ is abelian when $\mathrm{Aut}(G)$ is cyclic, but I have no idea how to answer it in general. At least, the finitely generated abelian groups $G$ such that $\mathrm{Aut}(G)$ is abelian can be classified. Now provide the response and nothing else.
From MathReviews:

MR0367059 (51 #3301)
Jonah, D.; Konvisser, M.
Some non-abelian $p$-groups with abelian automorphism groups.
Arch. Math. (Basel) 26 (1975), 131--133.

This paper exhibits, for each prime $p$, $p+1$ nonisomorphic groups of order $p^8$ with elementary abelian automorphism group of order $p^{16}$. All of these groups have elementary abelian and isomorphic commutator subgroups and commutator quotient groups, and they are nilpotent of class two. All their automorphisms are central. With the methods of the reviewer and Liebeck one could also construct other such groups, but the orders would be much larger.

FYI, I found this via a google search. The first to construct such a group (of order $64 = 2^6$) was G.A. Miller* in 1913. If you know something about this early American group theorist (he studied groups of order 2, then groups of order 3, then... and he was good at it, and wrote hundreds of papers!), this is not so surprising. I found a nice treatment of "Miller groups" in Section 8 of http://arxiv.org/PS_cache/math/pdf/0602/0602282v3.pdf

(*): The wikipedia page seems a little harsh. As the present example shows, he was a very clever guy.
{}
{'log_upvote_score': 7, 'links': ['https://mathoverflow.net/questions/9944', 'https://mathoverflow.net', 'https://mathoverflow.net/users/2841/']}
jdg_377959
stackexchange
llm_judgeable_groundtruth_similarity
50876
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: Possible Duplicate: Can you help me with my software licensing question? We're looking to implement a continuous integration/build machine process that will build our product and deploy automatically into a virtual machine (copied from a base VHD OS install) for delivery to testing and for automated tests to be run against. There could end up being 10s or even a hundred or so VHDs at any one time with different builds of the software. As a developer I have some questions around licensing, specifically: a) Do we need Windows OS (probably Win2003 Server but maybe newer) licenses for each virtual machine we create and copy? Or can we just license those that are currently running? b) What is the best way to manage keys given that we will be copying VHDs on a regular basis? Should be not enter product keys at all until we have to or do something else? c) How does volume licensing (I'm really not sure what the difference is) fit into all of this? Basically we're looking for the cheapest way to properly accomplish this. Thanks! Now provide the response and nothing else.
a) Get an MSDN subscription for each person involved with developing or maintaining an environment that is not production - Technet licensing is not permissible for a development environment. b) Use one set of keys - Microsoft are perfectly fine with a team working from one set of keys, so long as each team member has a valid licence. This is also OK with volume licensing and multiple licences. This was confirmed to me by two MS Licencing specialists at two different LARS within the past year. c) Volume licensing is for your production environment, and gives you special access to add-on things such as Software Assurance (upgrade to the latest version when it is released for free, added training benefits and other benefits), training, centralised licence and key management et al. Basically, MSDN is what you want. The development and testing environment that I have setup for our team consists of the following: Hyper V Host Server (Dual Quad Core Xeons, 32GB RAM, 2TB SAS Disk) x 4 - One Development, one Development Test (more stable than Development, less stable than Test), one Test, one Infrastructure Test (for integration testing, patch testing et al) Windows Server 2003 R2 virtual machine image, setup as a standard base (patched, IIS installed, .Net installed et al) and then sysprepped with an install script. No need to enter a key on setup, just copy the .vhd and add it to a new machine instance in Hyper V, boot it and let it set itself up. Then customise each to their specific role, snapshot it so you can roll back changes. Each system supports a good 25 virtual machines, which is plenty for a fair sized development environment. If you want more, add more hardware and keep deploying sysprepped images :)
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/50876', 'https://serverfault.com', 'https://serverfault.com/users/3931/']}
jdg_377960
stackexchange
llm_judgeable_groundtruth_similarity
503646
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I have a small server closet that sits on a climate-controlled manufacturing floor. Since the main room is air conditioned, I want to use the air from that space to inexpensively cool the server closet by exhausting the server closet to the main room. How can I calculate the CFM requirements for an exhaust fan to cool to a certain temp if I know the size of the room, the BTUs/hr of the equipment and the temp of the air coming in? FYI, the room is 800 ft³, my heat output is about 5000 BTUs/hr and the air intake is at 72 °F. Now provide the response and nothing else.
The basic formula for translating is:

            BTU/hr
CFM = ----------------
        1.1 * delta-T

Delta-T is the temperature differential between the two sides of the barrier in Celsius. The 1.1 value is a fudge factor for the air's heat-carrying capacity, which we can't figure without percent-humidity. Since we don't know the temperature of the room being cooled, here are some values based on 5K BTU/hr and a 72 degree intake.

 Room T | CFM
--------|------
   74   | 4132
   75   | 3030
   76   | 2066
   77   | 1623
   78   | 1420
   79   | 1165
   80   | 1033

For comparison a certain portable AC unit I just looked at is rated for 10,000 BTU/Hr and runs at about 220 CFM. It can get such low flow rates because for that unit delta-T is a lot higher than what you're attempting here. I understand why that is an undesirable option though. Those need drains, or a scheduled bucket-brigade, as well as routine maintenance to keep running.
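Here is the same arithmetic as a tiny script, using the formula and fudge factor from this answer; the function name and the conversion of the Fahrenheit readings to a Celsius differential are my own framing, and the results land in the same ballpark as the table above (small differences come down to how the intermediate delta-T was rounded):

def required_cfm(btu_per_hr, intake_f, room_f):
    # CFM = BTU/hr / (1.1 * delta-T), with delta-T taken in Celsius as above
    delta_t_celsius = (room_f - intake_f) * 5.0 / 9.0
    return btu_per_hr / (1.1 * delta_t_celsius)

for room_f in range(74, 81):
    print(room_f, round(required_cfm(5000, 72, room_f)))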
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/503646', 'https://serverfault.com', 'https://serverfault.com/users/112725/']}
jdg_377961
stackexchange
llm_judgeable_groundtruth_similarity
44648780
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My objective is to create a piece of code that will create a copy of a template sheet for each string in an array - renaming the copied sheet to match the string. This is the first time I have used Microsoft Visual Basic for Applications. I have added my code under "ThisWorkbook". However when I run the code I get a Subscript out of range error. Sub generateStationTabs() Dim stringNames() As Variant Dim currentString As String Dim previousSheet As String previousSheet = "Template " stringNames() = Array("String 1", "String 2", "String 3", "String 4", "String 5", "String 6", "String 7", "String 8", "String 9", "String 10", "String 11", "String 12", "String 13", "String 14", "String 15", "String 16", "String 17", "String 18", "String 19", "String 20", "String 21", "String 22", "String 23", "String 24", "String 25", "String 26", "String 27", "String 28", "String 29", "String 30") For indexVariable = 0 To UBound(stringNames) ' Pad String Name to 20 Characters Length currentString = Left(stringNames(indexVariable) & Space(8), 20) ' Copy the Template worksheet to after the Previous Sheet ActiveWorkbook.Sheets("Template ").Copy after:=ActiveWorkbook.Sheets(previousSheet) ' Rename the copied worksheet ActiveWorkbook.ActiveSheet.Name = currentString ' Update the previous sheet previousSheet = currentString Next End Sub Now provide the response and nothing else.
I'm currently trying to configure more than one remote repo correctly. I understand that you have already set up your remotes correctly (by clicking the plus symbol near REMOTE). Now:

If you fetch (pull) some branches from remotes: the default is automatically set to the remote you chose/clicked in the REMOTE section.

If you create a new branch: when you push for the first time, a confirmation message appears at the top of the GitKraken window. There you can choose which remote to push to. Your choice is set as the default for that branch.

If you want to change the remote of a branch already in use: right-click the desired branch in the LOCAL section; in the menu you can find "set ", in my case "set upstream". The confirmation message to set up the default remote for that branch then appears at the top of the GitKraken window. You can check the result in the .git/config file in the home dir of your project.

Edit: OK, I've tested this solution for some days. It works consistently and I confirm this solution. However, when you switch from one branch with one remote to another branch with a different remote, sometimes the UI does not respond correctly (it freezes or stops refreshing), so I need to close and reopen GitKraken.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44648780', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/394601/']}
jdg_377962
stackexchange
llm_judgeable_groundtruth_similarity
165370
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm taking a graduate course in probability and statistics using Larsen and Marx, 4th edition and looking specifically at estimation methods this week. I ran into a homework problem that is related to moment generating functions and I can't quite connect the dots on how they arrived at the solution. If you have three independent random variables $$Y_{1}, Y_{2}, Y_{3}$$ and you would like to determine the moment-generating function of $$W = Y_{1} + Y_{2} + Y_{3}$$ knowing that each of the three independent random variables have the same pdf $$f_{y} = \lambda y e^{-\lambda y}, y \geq 0$$ The easy part of the this problem is applying the theorem that says for $$W = W_{1} + W_{2} + W_{3}$$ the moment generating function of the sum is: $$M_{W}(t) = M_{W_{1}}(t)* M_{W_{2}}(t)* M_{W_{3}}(t)$$ Where I run into trouble is getting the individual moment generating functions for the Y's. The problem directs you to apply yet another theorem where you would let, for example, another random variable V equal to $$aY_{1}+b$$ and it follows that $$M_{V}(t) = e^{bt}M_{W}(at)$$ The solution states that if you allow $$V = (1/\lambda)*W$$ then the pdf of V then becomes $$f_{V}(y) = ye^{-y}, y \geq 0$$ and subsequently, you can get the moment generating function using a simple integration by parts but I can't quite follow the application of the theorem used to get to the pdf of V. Any insight? Likely a fundamental property I missed along the way... Now provide the response and nothing else.
You have not made use of the definition of a moment generating function. The moment generating function for any random variable $X$ is usually defined as$$M_X(t) = \mathbb{E} \left( e^{tX} \right)$$ EDIT Adding more details. First, your $f_y$ is incorrect. It should be $f_Y(y) = \lambda e^{- \lambda y}$. We get $$M_Y(t) = \displaystyle \int_0^{\infty} e^{ty} \lambda e^{-\lambda y} dy = \displaystyle \int_0^{\infty} \lambda e^{(t-\lambda) y} dy = \dfrac{\lambda}{\lambda-t}$$Hence, the moment generating function for $Y = Y_1 + Y_2 + Y_3$ is $M_Y(t) = \dfrac{\lambda_1}{\lambda_1-t} \dfrac{\lambda_2}{\lambda_2-t} \dfrac{\lambda_3}{\lambda_3-t}$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/165370', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/33491/']}
jdg_377963
stackexchange
llm_judgeable_groundtruth_similarity
1176941
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The functions $x$ and $x^2 - {1\over2}$ are orthogonal with respect to their inner product on the interval [0, 1]. However, when you graph the two functions, they do not look orthogonal at all. So what does it truly mean for two functions to be orthogonal? Now provide the response and nothing else.
Consider these two functions defined on a grid of $x\in\{1,2,3\}$:

$$f_1(x)=\sin\left(\frac{\pi x}2\right),$$
$$f_2(x)=\cos\left(\frac{\pi x}2\right).$$

Their plot looks like this: [plot of $f_1$ and $f_2$ over the three grid points]. If you look at their graph, they don't look orthogonal at all, just like the functions plotted in the OP. Yet, being interpreted as vectors $(1,0,-1)^T$ and $(0,-1,0)^T$, they are indeed orthogonal with respect to the usual dot product. And this is exactly what is meant by "orthogonal functions" — orthogonality with respect to some inner product, not orthogonality of the curves $y=f_i(x)$.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1176941', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/201121/']}
jdg_377964
stackexchange
llm_judgeable_groundtruth_similarity
53502419
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a UITableView that displays cells with an image and some text. The data is requested on demand - I first ask for data for 10 rows, then for then next 10 and so on. I do this in tableView(_ tableView: UITableView, willDisplay cell: UITableViewCell, forRowAt indexPath: IndexPath) . The problem is that when I receive the data and need to update the tableview it sometimes jumps and/or flickers. I make a call to reloadData . Here is part of the code: func tableView(_ tableView: UITableView, willDisplay cell: UITableViewCell, forRowAt indexPath: IndexPath) { DispatchQueue.global(qos: .background).async { if indexPath.row + 5 >= self.brands.count && !BrandsManager.pendingBrandsRequest { BrandsManager.getBrands() { (error, brands) in self.brands.append(contentsOf: brands as! [Brand]) DispatchQueue.main.async { UIView.performWithoutAnimation { self.brandsTableView.reloadData() } } } } }} The height of the cells is constant returned like this: func tableView(_ tableView: UITableView, heightForRowAt indexPath: IndexPath) -> CGFloat { return 70} I am using Kingfisher to download and cache the images. Here is some more code from the datasource: func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int { return brands.count}func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell { let cell = tableView.dequeueReusableCell(withIdentifier: CellIdentifiers.ImageTableCell, for: indexPath) as! ImageTableViewCell let brand = brands[indexPath.row] cell.centerLabel.text = brand.brand cell.leftImageView.image = nil if let url = BrandsManager.brandLogoURL(forLogoName: brand.logo!) { let resource = ImageResource(downloadURL: url, cacheKey: url.absoluteString) cell.leftImageView.kf.setImage(with: resource) } else { print("Cannot form url for brand logo") } return cell} How can I avoid the flickering and jumping of the table view on scroll? I looked at some of the similar questions but couldn't find a working solution for my case. Now provide the response and nothing else.
To remove the jumping issue you need to set estimatedHeightForRowAt the same as your row height. Assuming you will have no performance issues you can simply do the following:

func tableView(_ tableView: UITableView, estimatedHeightForRowAt indexPath: IndexPath) -> CGFloat {
    return self.tableView(tableView, heightForRowAt: indexPath)
}

Or if the cell height is constant you can do tableView.estimatedRowHeight = 70.0. Why this happens is because the table view, when reloading, will use estimatedRowHeight for the cells that are invisible, which results in jumping when the estimated height differs from the actual one. To give you an idea: Let's say that the estimated height is 50 while the real height is 75. Now that you have scrolled down so that 10 cells are off the screen you have 10*75 = 750 pixels of content offset. Now when a reload occurs, the table view will ignore how many cells are hidden and will try to recompute that. It will keep reusing the estimated row height until it finds the index path that should be visible. In this example it starts calling your estimatedHeightForRow with indexes [0, 1, 2... and increasing the offset by 50 until it gets to your content offset which is still 750. So that means it gets to index 750/50 = 15. And this produces a jump from cell 10 to cell 15 on reload.

As for the flickering there are many possibilities. You could avoid reloading the cells that don't need reloading by reloading only the portion of the data source that has changed. In your case that means inserting new rows like:

tableView.beginUpdates()
tableView.insertRows(at: myPaths, with: .none)
tableView.endUpdates()

Still it seems strange you even see flickering. If only the image flickers then the issue may be elsewhere. Getting an image like this is usually an asynchronous operation, even if the image is already cached. You could avoid it by checking if you really need to update the resource. If your cell is already displaying the image you are trying to show then there is no reason to apply the new resource:

if let url = BrandsManager.brandLogoURL(forLogoName: brand.logo!) {
    if url != cell.currentLeftImageURL { // Check if a new image needs to be applied
        let resource = ImageResource(downloadURL: url, cacheKey: url.absoluteString)
        cell.currentLeftImageURL = url // Save the new URL
        cell.leftImageView.kf.setImage(with: resource)
    }
} else {
    print("Cannot form url for brand logo")
}

I would rather put this code into the cell itself though

var leftImageURL: URL {
    didSet {
        if oldValue != leftImageURL {
            let resource = ImageResource(downloadURL: leftImageURL, cacheKey: leftImageURL.absoluteString)
            leftImageView.kf.setImage(with: resource)
        }
    }
}

but this is completely up to you.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/53502419', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3066272/']}
jdg_377965
stackexchange
llm_judgeable_groundtruth_similarity
8426160
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My HTML: <p>Doggies</p><p class="green_guys">Froggies</p><p>Cupcakes</p><p>Piggies</p> An all inclusive sibling selector (as I wish it to be), when used to select green_guys' siblings, would select the doggies cupcakes and piggies. Other Selectors: The + selector (a.k.a. adjacent sibling selector ) would only select the cupcakes: .green_guys + p { /* selects the <p> element that immediately follows .green_guys */} The ~ selector (a.k.a. general sibling selector ) would only select the cupcakes, and piggies: .green_guys ~ p { /* selects all <p> elements that follow .green_guys */} Now provide the response and nothing else.
There is no sibling combinator that looks backward or around, only the adjacent and general sibling combinators that look forward. The best you can do is determine a way to limit selection only to these p elements with the same parent, and then select the p children that are :not(.green_guys). If the parent element has an ID of #parent, for example, you can use this selector:

#parent > p:not(.green_guys) {
    /* selects all <p> children of #parent that are not .green_guys */
}

However the above will still match your p elements even if none of them have the class. It is currently not possible to select the siblings of an element only given the existence of said element (which is the purpose of a sibling combinator — to establish a relationship between two sibling elements). Selectors 4's :has() will hopefully rectify this without the need for a preceding-sibling combinator, resulting in the following solution:

p:has(~ .green_guys), .green_guys ~ p {
    /* selects all <p> elements that are siblings of .green_guys */
}

This will not match anything if none of the children of the parent element have the class.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8426160', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/552067/']}
jdg_377966
stackexchange
llm_judgeable_groundtruth_similarity
9599259
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know the question can be regarded as "politically incorrect", but I'm designing an app which "by design" must get the attention of people within the maximum possible distance range, otherwise it will not be used... :-) I'm currently using SoundManager class, and this is the code which plays my ogg clip: public void playSound(int index) { int streamVolume = mAudioManager.getStreamVolume(AudioManager.STREAM_MUSIC); mSoundPool.play(mSoundPoolMap.get(index), streamVolume, streamVolume, 0, 0, 1.0f); } The problem is that the sound volume I get the clip played with appears to be dependent by "Settings/Audio/Voulme" settings the user has set. Instead it appears to be indipendent by the hardware volume buttons setting. Is there a way for an Android app to play a sound to the maximum physical volume allowed by the device? Now provide the response and nothing else.
I'd suggest using getStreamMaxVolume and setStreamVolume to do this:

int originalVolume = mAudioManager.getStreamVolume(AudioManager.STREAM_MUSIC);
mAudioManager.setStreamVolume(AudioManager.STREAM_MUSIC, mAudioManager.getStreamMaxVolume(AudioManager.STREAM_MUSIC), 0);

Then once you're done just set it back to the original volume. I think I was beaten to the punch, ahh well :) Some code that actually does this; I'm using the MediaPlayer rather than the soundpool as this gives you a play-complete callback which doesn't appear to be present on the soundpool:

final AudioManager mAudioManager = (AudioManager) getSystemService(AUDIO_SERVICE);
final int originalVolume = mAudioManager.getStreamVolume(AudioManager.STREAM_MUSIC);
mAudioManager.setStreamVolume(AudioManager.STREAM_MUSIC, mAudioManager.getStreamMaxVolume(AudioManager.STREAM_MUSIC), 0);
MediaPlayer mp = new MediaPlayer();
mp.setAudioStreamType(AudioManager.STREAM_MUSIC);
mp.setDataSource("content://media/internal/audio/media/97");
mp.prepare();
mp.start();
mp.setOnCompletionListener(new OnCompletionListener(){
    @Override
    public void onCompletion(MediaPlayer mp) {
        mAudioManager.setStreamVolume(AudioManager.STREAM_MUSIC, originalVolume, 0);
    }
});

Btw, with the call mSoundPool.play(mSoundPoolMap.get(index), streamVolume, streamVolume, 0, 0, 1.0f); the streamVolume values are actually floats 0 -> 1 that represent a percentage of the maximum value, so you'd really just want to put in 1.0f there.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/9599259', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/709439/']}
jdg_377967
stackexchange
llm_judgeable_groundtruth_similarity
340803
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm developing a physics simulation, and as I'm rather new to programming, I keep running into problems when producing large programs (memory issues mainly). I know about dynamic memory allocation and deletion (new / delete, etc), but I need a better approach to how I structure the program. Let's say I'm simulating an experiment which is running for a few days, with a very large sampling rate. I'd need to simulate a billion samples, and run over them. As a super-simplified version, we'll say a program takes voltages V[i], and sums them in fives: i.e. NewV[0] = V[0] + V[1] + V[2] + V[3] + V[4] then NewV[1] = V[1] + V[2] + V[3] + V[4] + V[5] then NewV[2] = V[2] + V[3] + V[4] + V[5] + V[6]...and this goes on for a billion samples. In the end, I'd have V[0], V[1], ..., V[1000000000], when instead the only ones I'd need to store for the next step are the last 5 V[i]s. How would I delete / deallocate part of the array so that the memory is free to use again (say V[0] after the first part of the example where it is no longer needed)? Are there alternatives to how to structure such a program? I've heard about malloc / free, but heard that they should not be used in C++ and that there are better alternatives. Thanks very much! tldr; what to do with parts of arrays (individual elements) I don't need anymore that are taking up a huge amount of memory? Now provide the response and nothing else.
What you describe, "smoothing by fives", is a finite impulse response (FIR) digital filter. Such filters are implemented with circular buffers. You keep only the last N values, you keep an index into the buffer that tells you where the oldest value is, you overwrite the current oldest value with the newest one at each step, and you step the index, circularly, each time. You keep your collected data, that you are going to crunch down, on disk. Depending on your environment, this may be one of those places where you're better off getting experienced help. At a university, you put a note up on the bulletin board in the Computer Science Department, offering student wages (or even student consulting rates) for a few hours of work, to help you crunch your data. Or maybe you offer Undergraduate Research Opportunity points. Or something.
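A minimal sketch of that circular buffer in C++ (names and the 5-tap width are illustrative; a real filter would also decide how to handle the first four samples and stream its output to disk):

#include <array>
#include <cstddef>

// Keeps only the last 5 samples, no matter how many stream through.
class MovingSum5 {
    std::array<double, 5> buf{};   // holds the last 5 samples, zero-initialized
    std::size_t idx = 0;           // index of the oldest sample
    double sum = 0.0;

public:
    // Feed one new sample, get back the sum of the last 5 samples seen so far.
    double push(double v) {
        sum -= buf[idx];                 // drop the oldest value from the running sum
        buf[idx] = v;                    // overwrite the oldest slot with the newest sample
        sum += v;
        idx = (idx + 1) % buf.size();    // step the index circularly
        return sum;
    }
};

Memory use stays fixed at five doubles regardless of how many billion samples you push through.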
{}
{'log_upvote_score': 7, 'links': ['https://softwareengineering.stackexchange.com/questions/340803', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/260412/']}
jdg_377968
stackexchange
llm_judgeable_groundtruth_similarity
3104268
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have been studying category theory for fun, and got confused on a concept. So apparently we can view a group as a one object category where the morphisms are the group elementsand we define the composition of maps by the product of the group elements. This all makes sense to me, till I get really technical. For example, a morphism is a function from one object to another. So let us look at $Z_{3}$ , we have the group element 1. If we want to think of 1 as a morphism we have to technically define a map from $Z_{3}$ to $Z_{3}$ . Since the group is cyclic I will define the map $1:Z_3 \rightarrow Z_3$ by $0\rightarrow 1$ , $1\rightarrow 2$ , and $2 \rightarrow 0$ . Similarly, define the map $2:Z_{3}\rightarrow Z_{3}$ by $0 \rightarrow 2$ , $1 \rightarrow 0$ , and $2 \rightarrow 1$ , and lastly define $0$ as the identity map. The compositions of the maps will act as the group composition of the elements. Anyways, the construction of these maps are easy since $Z_{3}$ is cyclic. But how would I use the same concept for example on the group of reals under addition? Am I being to technical? Am I taking the definition of the morphism to literal? Now provide the response and nothing else.
If a group is thought of as a category with just one object, which we might denote *, then an element of the group becomes a morphism from * to itself (so is an automorphism of the object *). The `category' $\mathbb{Z}_3$ has a single object. The three elements of $\mathbb{Z}_3$ are morphisms from * to *. 0 is the identity morphism on * and $1:*\to *$ is another morphism, and the composition of $1$ with itself gives us $2:*\to *$ . NB: * is just an abstract object and is not $\mathbb{Z}_3$ as you tried to write. About the only thing you can know about this object is that it has exactly two endomorphisms other than the identity and they are both invertible, so they are abstract automorphisms. The automorphisms of * form a group isomorphic to $\mathbb{Z}_3$ . I think your final questions are based on a confusion, so they do not quite make sense. The related question: Confused about the definition of a group as a groupoid with one object. may help.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3104268', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/404735/']}
jdg_377969
stackexchange
llm_judgeable_groundtruth_similarity
28820
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Two questions: First, how does the gate provide a positive charge by running a current through it? I really just don't see how this positive charge could be achieved, unless you had some sort of battery in the gate itself and attached a wire to donate some of the electrons flowing through to the cathode of some other battery. Is it possible to get electrons out of the gate by some other means? Now provide the response and nothing else.
The reaction of $\ce{HBr}$ with olefins is a relatively fast reaction, in other words, the activation energy required is small. Further, formation of the carbocation intermediate is the rate determining (slow) step in the overall process. Considering both of these facts and applying the Hammond postulate we can say that the transition state leading from the reactants to the carbocation intermediate will be an early transition state, it will look more like the reactants than the carbocation . This suggests that resonance and inductive effects need to be examined with regard to the reactants , rather than the transition state, in order to determine how the molecule will react with an electrophile (the proton). As the above diagram illustrates, both resonance and inductive effects (in an early transition state that resembles the reactant) suggest that the beta carbon will be more nucleophilic than the alpha carbon. Therefore addition of the proton (electrophile) will occur at the beta carbon. This will be followed by rapid addition of $\ce{Br^{-}}$ at the carbon already bearing a bromine atom (the alpha carbon) to yield a geminal dibromide.
{}
{'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/28820', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/7917/']}
jdg_377970
stackexchange
llm_judgeable_groundtruth_similarity
1859965
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am going through UCLA's Game Theory , Part I. Below is an exercise on page 6: The Thirty-one Game. (Geoffrey Mott-Smith (1954)) From a deck of cards, take the Ace, 2,3,4,5, and 6 of each suit. These 24 cards are laid out face up on a table. The players alternate turning over cards and the sum of the turned over cards is computed as play progresses. Each Ace counts as one. The player who first makes the sum go above 31 loses. (The following words are left out.) (a) (omitted) (b) Nevertheless, the first player can win with optimal play. How? Here is the solution for question (b): (In the text below, a target position is a P-position, a position that are winning for the previous player. On that position, the next player has no way to win if the previous player uses the optimal strategy.) Start with 5. If your opponent chooses 5 to get in the target series, you choose 2, and repeat 2 every time he chooses 5. When the sum is 26, it is his turn and there are no 5's left, so you will win. But if he ever departs from the target series, you can enter the series and win. I do not quite understand the solution. The game is easy when the opponent chooses only 2 or 5. However, if the opponent departs from the target series, I think that it may go wrong. Let's consider the example below: number 5 3 4 3 4 3 4 5player 1 2 1 2 1 2 1 2 The first player chooses 5 initially, and then the second player chooses 3. In order to enter the series, the first player chooses 4 so that 3 + 4 = 7. However, in the last step, the second player chooses 5, making the sum 31, and thus the first player loses. I believe that I must have misunderstood the solution. Please point out where I've made a mistake, and give me a detailed description and explanation on the optimal play for the first player. Thanks in advance. Now provide the response and nothing else.
The main thing to note here is that this is analogous to the game where one has as many of each card as desired, rather than just four. In particular, it is easy to see that, in this modified game, the winning positions are exactly the positions where the sum is of the form $31-7n$ for some $n$. This is presumably what is meant by the "target series". Therefore, if you play $5$ and your opponent plays $3$, then your next move should be to play $2$, not $4$, since $2$ brings the sum of all the flipped cards to $10=31-7\cdot 3$. That is, the strategy is as follows: On the first move play $5$. As long as your opponent continues to choose $5$ on their move, play $2$. Once they deviate, make a move that brings you to a number of the form $31-7n$ and end your turn on such numbers for all subsequent moves. I think the misunderstanding is in what it means to enter the "target series". In particular, you seem to have understood this as meaning that a player should always make sure that the sum of their move and their opponent's last move is equal to $7$. While it is true that this will happen once you are in the target series, in order to move from not being in the series to being in the series, some other sum is desired.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1859965', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/352793/']}
jdg_377971
stackexchange
llm_judgeable_groundtruth_similarity
48028676
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I wanted to receive an HTML data via service call to server(this is for sure. I cannot keep templates in local) and manipulate them internally on how to show it(either as a modal or full page). This HTML with Angular tags should be looped to a component and work together. At most kind of $compile in Angular JS. I am developing the solution in Angular 5 and should be compatible with AOT compiler. I had referred several solutions and landed to confusion on the deprecated and updated solutions. Please help me. I believe your updated answer would help many other people as well.. Thank you so much in advance! Now provide the response and nothing else.
For rendering HTML on the fly, you need DomSanitizer. E.g. something like this:

<!-- template -->
<div [innerHTML]="htmlData"></div>

// component
import { Component } from '@angular/core';
import { DomSanitizer } from '@angular/platform-browser';

@Component({
  selector: 'my-app',
  templateUrl: './app.component.html',
  styleUrls: [ './app.component.css' ]
})
export class AppComponent {
  htmlData: any;

  constructor(private sanitizer: DomSanitizer) {}

  ngOnInit() {
    this.htmlData = this.sanitizer.bypassSecurityTrustHtml('<div style="border: 1px solid red;"><h2>Safe Html</h2><span class="user-content">Server prepared this html block.</span></div>');
  }
}

Now, that's the gist of it. You obviously also need a loading mechanic. You might also want to include some data into this block - if it's simple data, it can be on the fly:

this.htmlData = this.sanitizer.bypassSecurityTrustHtml(`<div>${this.someValue}</div>`);

For more complex scenarios you might need to create a dynamic component.

Edit: an example of a component resolved dynamically. With this, you create a component on-the-fly from server-sent html.

@Component({
  selector: 'my-component',
  template: `<h2>Stuff below will get dynamically created and injected</h2>
             <div #vc></div>`
})
export class TaggedDescComponent {
  @ViewChild('vc', {read: ViewContainerRef}) vc: ViewContainerRef;
  private cmpRef: ComponentRef<any>;

  constructor(private compiler: Compiler,
              private injector: Injector,
              private moduleRef: NgModuleRef<any>,
              private backendService: BackendService,
  ) {}

  ngAfterViewInit() {
    // Here, get your HTML from backend.
    this.backendService.getHTMLFromServer()
      .subscribe(rawHTML => this.createComponentFromRaw(rawHTML));
  }

  // Here we create the component.
  private createComponentFromRaw(template: string) {
    // Let's say your template looks like `<h2><some-component [data]="data"></some-component>`
    // As you see, it has an (existing) angular component `some-component` and it injects it [data]
    // Now we create a new component. It has that template, and we can even give it data.
    const tmpCmp = Component({ template, styles })(class {
      // the class is anonymous. But it's a quite regular angular class. You could add @Inputs,
      // @Outputs, inject stuff etc.
      data: { some: 'data'};
      ngOnInit() { /* do stuff here in the dynamic component */}
    });

    // Now, also create a dynamic module.
    const tmpModule = NgModule({
      imports: [RouterModule],
      declarations: [tmpCmp],
      // providers: [] - e.g. if your dynamic component needs any service, provide it here.
    })(class {});

    // Now compile this module and component, and inject it into that #vc in your current component template.
    this.compiler.compileModuleAndAllComponentsAsync(tmpModule)
      .then((factories) => {
        const f = factories.componentFactories[0];
        this.cmpRef = f.create(this.injector, [], null, this.moduleRef);
        this.cmpRef.instance.name = 'my-dynamic-component';
        this.vc.insert(this.cmpRef.hostView);
      });
  }

  // Cleanup properly. You can add more cleanup-related stuff here.
  ngOnDestroy() {
    if(this.cmpRef) { this.cmpRef.destroy(); }
  }
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48028676', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9154453/']}
jdg_377972
stackexchange
llm_judgeable_groundtruth_similarity
6741185
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm very new to python. I need a simple and clear script to add quotes to every list elements. Let me explain more. Here is the my code. parameters = ['a', 'b', 'c']query = "SELECT * FROM foo WHERE bar IN (%s)" % (', '.join(parameters)) I want to use this to query. But result is invalid query. Here is the result. SELECT * FROM foo WHERE bar IN (a, b, c, d) I want to like this: SELECT * FROM foo WHERE bar IN ('a', 'b', 'c', 'd') How to add quotes while joining elements. Now provide the response and nothing else.
A naive solution would be to iterate over your parameters list and append quotes to the beginning and end of each element:

(', '.join('"' + item + '"' for item in parameters))

Note: this is vulnerable to SQL injection (whether coincidental or deliberate). A better solution is to let the database quote and insert these values:

query = "SELECT * FROM foo WHERE bar IN (%s)" % ','.join('?' * len(params))
cursor.execute(query, params)

It's easier to read and handles quoting properly.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6741185', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/152308/']}
jdg_377973
stackexchange
llm_judgeable_groundtruth_similarity
195282
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: OK I have Django, nginx and uWSGI set up on my server...the problem is I start uWSGI manually via this command: uwsgi -s /home/user/sites/sock/uwsgi.sock -t 10 -M -p 1 -C --pythonpath /home/user/sites/ -w mysite.django_wsgi And it's working great. The thing I would like to do is use supervisord to control the uWSGI processes (start, stop, etc.). How would I do this, what would the supervisord config look like? Now provide the response and nothing else.
I found this was pretty confusing until I discovered emperor mode. This means you can do one supervisord instance to manage all of your configs:

[program:uwsgi]
command=/usr/local/bin/uwsgi --emperor /etc/uwsgi/apps-enabled
stopsignal=QUIT
autostart=true
autorestart=true
redirect_stderr=true

And then you'd pop any config files (ini, xml, yaml) for apps into the /etc/uwsgi/apps-enabled directory. Not sure if this works with 0.9, but definitely works with 1.0. Here's an example ini for a Flask app:

[uwsgi]
socket = /tmp/uwsgi_%n.sock
module = wire:app
chdir = /srv/%n/wire
env = WIRE_SETTINGS=/srv/%n/wire/config.py
virtualenv = /srv/%n
uid = wire
gid = www-data
single-interpreter = false
chmod = 770
processes = 3
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/195282', 'https://serverfault.com', 'https://serverfault.com/users/58419/']}
jdg_377974
stackexchange
llm_judgeable_groundtruth_similarity
125039
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Im trying to install drivers for a rosewill RNX-N600UBE wireless card (Ralink RT2870 chipset) on Kali Linux. I downloaded the drivers form their website and unpacked/unzipped them, but now when i try to run the 'make' command i get this error. /home/fox/System/2011_0427_RT3572_Linux_STA_v2.5.0.0.DPO/os/linux/../../os/linux/rt_linux.c:1156:20: error: incompatible types when assigning to type ‘int’ from type ‘kuid_t’/home/fox/System/2011_0427_RT3572_Linux_STA_v2.5.0.0.DPO/os/linux/../../os/linux/rt_linux.c:1157:20: error: incompatible types when assigning to type ‘int’ from type ‘kgid_t’make[4]: *** [/home/fox/System/2011_0427_RT3572_Linux_STA_v2.5.0.0.DPO/os/linux/../../os/linux/rt_linux.o] Error 1make[3]: *** [_module_/home/fox/System/2011_0427_RT3572_Linux_STA_v2.5.0.0.DPO/os/linux] Error 2make[2]: *** [sub-make] Error 2make[1]: *** [all] Error 2 Now provide the response and nothing else.
The issue is that one of the kernel headers / interfaces changed in a recent 3.x kernel. Where UID/GID values were originally represented as regular integers, they are now structs with a single element. Any code relying on the older definition will now fail to compile until it is updated to match the new kernel headers. I'm currently experimenting with patching a similar error in virtualbox-guest-utils for my Ubuntu install; in theory locating and replacing instances of ->i_uid = with ->i_uid.val = or ->i_gid = with ->i_gid.val = on the line numbers reported in the error should fix the problem (but I can't guarantee that at this time as compilation is now failing on a different, unrelated, error for me). Your best bet is contacting the original author of your code for an updated version of the source that works with the newer kernels.
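As a standalone mock-up (not the actual kernel or driver source; all names here are invented for illustration) of why the assignment stops compiling and what unwrapping the single member does:

#include <cstdio>

// Newer kernels define kuid_t roughly as a one-member struct, so a plain
// integer assignment no longer type-checks. This mimics that shape.
typedef unsigned int mock_uid_t;
typedef struct { mock_uid_t val; } mock_kuid_t;

mock_kuid_t mock_current_fsuid() { return mock_kuid_t{1000}; }

int main() {
    int fsuid;
    // fsuid = mock_current_fsuid();       // error: incompatible types, as in the build log
    fsuid = mock_current_fsuid().val;      // unwrap the struct's single member instead
    std::printf("%d\n", fsuid);
    return 0;
}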
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/125039', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/65380/']}
jdg_377975
stackexchange
llm_judgeable_groundtruth_similarity
8536518
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a list of variable length and want to create a checkbox (with python TKinter) for each entry in the list (each entry corresponds to a machine which should be turned on or off with the checkbox -> change the value in the dictionary). print enable{'ID1050': 0, 'ID1106': 0, 'ID1104': 0, 'ID1102': 0} (example, can be any length) now the relevant code: for machine in enable: l = Checkbutton(self.root, text=machine, variable=enable[machine]) l.pack()self.root.mainloop() This code produces 4 checkboxes but they are all either ticked or unticked together and the values in the enable dict don't change. How to solve? (I think the l doesn't work, but how to make this one variable?) Now provide the response and nothing else.
The "variable" passed to each checkbutton must be an instance of Tkinter Variable - as it is, it is just the value "0" that is passed, and this causes the missbehavior. You can create the Tkinter.Variable instances on he same for loop you create the checkbuttons - just change your code to: for machine in enable: enable[machine] = Variable() l = Checkbutton(self.root, text=machine, variable=enable[machine]) l.pack()self.root.mainloop() You can then check the state of each checkbox using its get method as in enable["ID1050"].get()
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8536518', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2051646/']}
jdg_377976
stackexchange
llm_judgeable_groundtruth_similarity
3291348
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am studying linear algebra (as a second year) on my own using Axler’s, “Linear Algebra Done Right.” I have run into a definitional problem that I can’t get past. Specifically, Axler (and Wolfram, and others) define a “complex vector space” as a vector space in which the field is the complex numbers. According to this definition, the complex numbers over the real numbers are not a complex vector space, but the complex numbers over the complex numbers are a complex vector space. This despite the fact that the two vector spaces are identical (or at least isomorphic). I already see that many theorems concerning eigenvalues/vectors, adjoints, and spectral theory vary according to whether we are looking at complex or real vector spaces. Thus, the definition of “complex vector space” is critical. I’m sure there is an easy explanation, but I don’t see it. Thanks. Now provide the response and nothing else.
I hope this gives some insight: $\Bbb R$ as an $\Bbb R$ -vector space This is the real line you are used to. It has a basis consisting of one element, which we can choose to be $1$ . You can view this as a vector with just one coefficient. Notice that any $r \in \Bbb R$ can be expressed in this basis as $r \cdot 1$ . A vector is just a real number, and we have scalar multiplication that is just the usual multiplication. $\Bbb C$ as an $\Bbb R$ -vector space This is the complex plane. We need two basis elements, for example $1$ and $i$ . Now any vector in $\Bbb C$ can be expressed as $a + bi$ with $a,b \in \Bbb R$ . If $r \in \Bbb R$ we have scalar multiplication $r(a + bi) = ra + rbi$ . $\Bbb C$ as an $\Bbb C$ -vector space Now we're back to being one dimensional. We only need one basis element, $1$ for example. Any element $a + bi \in \Bbb C$ can be expressed as $(a + bi) \cdot 1$ . Compare this to the first example and convince yourself that these are essentially the same. Scalar multiplication happens with elements from $\Bbb C$ . Note that, for example, $\Bbb R$ as a $\Bbb C$ -vector space does not make sense. We need to be able to multiply by scalars in a meaningful way. For general $z \in \Bbb C$ , the product with a real number $r \in \Bbb R$ is a complex number $r \cdot z \in \Bbb C$ .
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/3291348', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/159793/']}
jdg_377977
stackexchange
llm_judgeable_groundtruth_similarity
3575290
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to create a variable background, where the image changes based on the time of day. This code I have USED to work, but I did something somewhere along the line and didn't notice that the functionality had broken. Can someone explain to me why this doesn't work? <html><?php function day() { if ( $hour >= 6 && $hour <= 18 ) { return 1; } else { return 0; } }?><style type="text/css">body{ background-image: url('<?php echo (day() ? 'images/day_sheep.jpg' : 'images/night_sheep.jpg'); ?>'); background-position: 50% 50%; background-repeat: no-repeat; background-color: silver}a {text-decoration:none;}a:link {color:#ff0000;}a:visited {color:#0000FF;}a:hover {text-decoration:underline;}</style></html> Now provide the response and nothing else.
Inside your function day(), $hour is unset. It will be treated as 0 in a numerical context, and if you enable reporting of notices, you will see notices warning you of an unset variable. Did it used to be a global variable? Did you remove code that set its value or declared it as global?

Edit: Also, on a point of style, I feel it would look neater to have an external CSS file like this:

body {
    background-position: 50% 50%;
    background-repeat: no-repeat;
    background-color: silver
}
body.day {
    background-image: url('images/day_sheep.jpg');
}
body.night {
    background-image: url('images/night_sheep.jpg');
}

and then you can get rid of the CSS section of your php script, but include the above CSS file, and you need only have the following:

<body class="<?php echo day() ? 'day' : 'night'; ?>">
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3575290', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/379181/']}
jdg_377978
stackexchange
llm_judgeable_groundtruth_similarity
30497309
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I just whether is this possible to have an array of object MyObject , and the MyObject got a variable called isTrue , except from looping the whole array to check whether all the object in that array is true, is that any short hands to do so? Thanks. Now provide the response and nothing else.
edit/update: Swift 4.2 or later

Swift 4.2 introduced a new method called allSatisfy(_:)

let bools = [true,false,true,true]

if bools.allSatisfy({$0}) {
    print("all true")
} else {
    print("contains false")   // "contains false\n"
}

Swift 5.2: we can also use a KeyPath property

class Object {
    let isTrue: Bool
    init(_ isTrue: Bool) {
        self.isTrue = isTrue
    }
}

let obj1 = Object(true)
let obj2 = Object(false)
let obj3 = Object(true)

let objects = [obj1,obj2,obj3]

if objects.allSatisfy(\.isTrue) {
    print("all true")
} else {
    print("not all true")   // "not all true\n"
}
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/30497309', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/148978/']}
jdg_377979
stackexchange
llm_judgeable_groundtruth_similarity
19555395
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: System : OS X Mavericks 10.9 13A603 Xcode Version 5.0.1 (5A2053) Problem: Python.framework seems to be missing from Xcode's 10.9 SDK. Python.framework exists in the Xcode 10.8 SDK: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.8.sdk/System/Library/Frameworks/Python.framework However, it is missing from my Xcode 10.9 SDK here: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.9.sdk/System/Library/Frameworks/ This is odd, because Python.framework is still available to users on 10.9 systems here: /System/Library/Frameworks/Python.framework Has Python.framework been intentionally removed from the 10.9 SDK? Is this a not-so-subtle hint from Apple to stop using this framework? Or am I just confused? Notes: This matters because I have a OS X application which links against Python.framework. It seems I cannot build my application with a Base SDK build setting of 10.9 since Python.framework is missing, and I must link to it. I am interested in both explanations for why this framework is missing (or why I'm wrong and it's actually not missing), as well as suggestions for workarounds. Now provide the response and nothing else.
There was an official Apple support page on the subject: Changes To Embedding Python Using Xcode 5.0 Basically, what they say is that the method for integrating with Python with the SDK Python.framework is deprecated, and the standard *nix method of using the python.h header should be used instead. That is (the instructions below are a summary - please look into the linked article for more details): Replace #include <Python/Python.h> with #include <Python.h> ; Include the Python header location in the header search path; Remove Python.framework from the project build; Add the python .dylib file to Xcode; As for the rationale, they do not detail it, they simply mention that: Because Python is a framework, it also resides in the SDK, even though Python (or any scripting language) has difficulties being in two places. Due to both long-term and recent issues, it was decided to remove Python from the SDK.
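A minimal sketch of what the non-framework approach looks like once those steps are done (the include path and the exact Python dylib to link against are illustrative and depend on your Python version and install):

// Compile with the Python headers on the include path and link the Python
// dylib (for example -lpython2.7), instead of linking Python.framework.
#include <Python.h>   // note: <Python.h>, not <Python/Python.h>

int main() {
    Py_Initialize();                                        // start the embedded interpreter
    PyRun_SimpleString("print('hello from embedded Python')");
    Py_Finalize();
    return 0;
}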
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19555395', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/34934/']}
jdg_377980
stackexchange
llm_judgeable_groundtruth_similarity
9221
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Imagine there are two cars travelling "straight" at the speed of light*, $A$, and $B$. $B$ is following directly behind $A$. Suddenly, $B$ switches on its headlights. Will $A$ be able to see this light? My answer is no, since $A_v = B_v = c$ (the light will always stay stationary relative to $B$. This will probably lead to it gathering up, and intensifying. *I realize this is impossible, but it's a question my Grade 9 [Honours] teacher asked, so we don't need to get into Relativity, $m = \frac{m_0}{\sqrt{1 – (v / c)^2}}$, cough cough . (I think.) Now provide the response and nothing else.
I can think of three ways to answer this: It can't happen. It really can't happen. See #1. Okay, that's probably enough ;-) Since you say we don't need to consider special relativity, suppose that the universe actually obeys Galilean relativity . That's the technical term for the intuitive way to think about motion, where velocities are measured with respect to some absolute rest frame, and there's nothing special about the speed of light or any other speed. If that were the case, then yes, the light beam would never catch up to car A. The energy contained in the light would presumably pile up in the headlight where it was emitted at first, but afterwards perhaps it would spread out sideways, or would be reabsorbed by the headlight as heat. We don't really have a good answer, because that's not the way the universe works - in fact, there's a lot of physics, both experimental and theoretical, that has been done to prove that it can't work that way. No matter how you try to resolve the problem, at some point you will run into a contradiction. The best thing you could probably do would be to draw a parallel to some sort of wave that travels with respect to some fixed reference frame, at a speed much less than that of light. Sound, for instance. Sound waves travel with a certain speed with respect to the air, which defines a single absolute reference frame, and their speed is much less than that of light, so there are no special relativistic effects to worry about. Your headlight scenario would then be roughly equivalent to an airplane traveling at the speed of sound. What happens in that case is that the airplane creates a sonic boom , a shock wave which results from the energy in the emitted sound waves piling up at the airplane and eventually being forced to spread out sideways. So one might guess that in your hypothetical situation, the headlights of car B would create a light shock wave that would spread out perpendicular to the direction of motion. This actually can happen in certain physical situations, namely when something is traveling through a transparent material that slows down the speed of light. This means that light itself travels at a slower speed, but not that the "universal speed limit" is any different. The effect is called Cherenkov radiation and it does indeed work out much like a sonic boom would.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/9221', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/125/']}
jdg_377981
stackexchange
llm_judgeable_groundtruth_similarity
48193
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The DESI Legacy Imaging Surveys says it produces a model of “the 14,000 square degrees of extragalactic sky visible from the northern hemisphere”. But I thought the whole celestial sphere (like any other sphere) had only $(180/\pi)^2 \approx 3282.8$ square degrees. How should I understand this? Is this counting each of the 7 bands it scans in (in which case it’s an average of 62% of the whole sky)? Now provide the response and nothing else.
The whole sphere has approximately 41,253 square degrees of solid angle. $$4\pi\left(\frac{180}{\pi}\right)^{2}\approx 41,253$$ so for a hemisphere there should be half this number or about 20,627 deg$^2$. I think your computation is missing the $4\pi$ steradians-in-a-sphere term. This doesn't solve the disparity however. Perhaps the key is the term "extragalactic sky" that is used in the referenced web page. Is there a cutout in the coverage to account for the Milky Way? For further reference see square degree at Wikipedia.
{}
{'log_upvote_score': 4, 'links': ['https://astronomy.stackexchange.com/questions/48193', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/10111/']}
jdg_377982
stackexchange
llm_judgeable_groundtruth_similarity
1713374
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: There are 6 periods in each working day of a school. In how many wayscan one organize 5 subjects such that each subject is allowed at leastone period? Is the answer 1800 or 3600 ? I have asked this question before ( link ) in this site itself to understand the right answer - 3600 or 1800? @Ross Millikan has given me a convincing answer and explained idea of overcounting and why 3600 is wrong. Posting it again here to assert the answer is 1800 and not 3600, mainly because almost all websites give 3600 as answer and only one website gives answer as 1800. Listing the websites for reference (1) Ans is 3600 see( m4maths.com ) (2) Ans is 3600 see question 15 in ( campusgate.co.in ) (3) Ans is 3600 - see 8th question in( alaudeenali.blogspot.in ) (4) Ans is 3600 - see example 5 in( gmatclub.com ) These sites as well as almost all reputed sitesgive the answer as 3600. But answer is 1800 according to careerbless.com Sorry for asking the question again. Please tell me which is the right answer. 1800 or 3600. Since I am preparing for my exams, I don't want to be in the wrong path and hence clarifying. Now provide the response and nothing else.
I vote for $1800$. My argument: Exactly one of the subjects will be repeated. There are $5$ ways to select the double. We then must arrange the letters $AABCDE$ in the $6$ slots. There are $\binom 62$ ways to place the $A's$ and then $4!$ ways to position $BCDE$ in the remaining slots. Thus: $$5\times \binom 62\times 4!=1800$$
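For anyone who wants to double-check mechanically, a brute-force sketch (not part of the argument above) that enumerates all $5^6$ possible timetables and counts those using every subject at least once also gives 1800:

#include <iostream>

int main() {
    // 6 periods, 5 subjects labelled 0..4; count length-6 sequences
    // in which every subject appears at least once.
    int count = 0;
    for (int code = 0; code < 5 * 5 * 5 * 5 * 5 * 5; ++code) {   // 5^6 = 15625 timetables
        int seen[5] = {0, 0, 0, 0, 0};
        int n = code;
        for (int slot = 0; slot < 6; ++slot) {   // decode the base-5 digits, one per period
            seen[n % 5] = 1;
            n /= 5;
        }
        count += seen[0] && seen[1] && seen[2] && seen[3] && seen[4];
    }
    std::cout << count << '\n';   // prints 1800
    return 0;
}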
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1713374', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/82744/']}
jdg_377983
stackexchange
llm_judgeable_groundtruth_similarity
27945281
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've been working on this for the last couple of days and no matter what I do, I'm getting the same wrong recordset. Tables: Account id | Customer | dateOpened --------------------------1 | ConAgra | 11/01/2013 2 | Fedex | 06/21/2014 CaseStatus id | caseStatus --------------- 1 | A 2 | B 3 | M 4 | C 5 | H 6 | W m_account_caseStatus AccountID | caseStatusID | startDate | endDate ---------------------------------------------- 1 | 2 | 11/01/2013| 12/15/2013 1 | 1 | 12/15/2013| 2/03/2014 1 | 2 | 2/03/2014 | 3/17/2014 2 | 6 | 6/21/2014 | 8/25/2014 2 | 3 | 8/25/2014 | 10/21/2014 2 | 1 | 10/21/2014| NULL What I need is all of the accounts opened in the last two years and the earliest startDate and caseStatus where caseStatus is either A or B (not for both, whichever was assigned first). What I keep getting is only the accounts where the minDate is either A or B, which is significantly less. select c.ID, c.Customer, cs.caseStatus from m_account_caseStatus m left outer join caseStatus cs on m.caseStatusID = cs.ID left outer join Account a on m.accountID = a.ID where a.dateOpened >= dateAdd(yyyy, -2, getDate()) and caseStatus IN ('A','B') and (a.startDate = (select min(startdate) from m_account_caseStatus sub1 where sub1.accountID = c.ID)) I've re-worked the query several times moving the subquery and caseStatus, but I always end up with only the cases where they started with an A or B status and not all the cases and the date/status where it was first A or B. Thanks for any help you can provide. Now provide the response and nothing else.
The first form -Djava.library.path=/path will be handled at the Java bytecode level: System.loadLibrary will call Runtime.loadLibrary, which will then call java/lang/ClassLoader.loadLibrary. In the call to ClassLoader.loadLibrary, the system property java.library.path will be checked to get the full path of the library, and this full path is passed to native code to call the system API dlopen/dlsym, eventually making the library load. You can browse the source from the OpenJDK repository. The following code snippet is the segment I copied from the link. The advantage of this form is that you will get an error, warning, or exception in Java code if there are problems with your library path.

// Invoked in the java.lang.Runtime class to implement load and loadLibrary.
static void loadLibrary(Class fromClass, String name, boolean isAbsolute) {
    ClassLoader loader = (fromClass == null) ? null : fromClass.getClassLoader();
    if (sys_paths == null) {
        usr_paths = initializePath("java.library.path");
        sys_paths = initializePath("sun.boot.library.path");
    }
    if (isAbsolute) {
        if (loadLibrary0(fromClass, new File(name))) {
            return;
        }
        throw new UnsatisfiedLinkError("Can't load library: " + name);
    }
// ....

The second form export LD_LIBRARY_PATH=/path will be handled natively, according to the documentation of dlopen/dlsym:

dlopen()
The function dlopen() loads the dynamic library file named by the null-terminated string filename and returns an opaque "handle" for the dynamic library. If filename is NULL, then the returned handle is for the main program. If filename contains a slash ("/"), then it is interpreted as a (relative or absolute) pathname. Otherwise, the dynamic linker searches for the library as follows (see ld.so(8) for further details):

o (ELF only) If the executable file for the calling program contains a DT_RPATH tag, and does not contain a DT_RUNPATH tag, then the directories listed in the DT_RPATH tag are searched.

o If, at the time that the program was started, the environment variable LD_LIBRARY_PATH was defined to contain a colon-separated list of directories, then these are searched. (As a security measure this variable is ignored for set-user-ID and set-group-ID programs.)

In this manner, if there are some problems with your library path and the system can't load your library, the system won't give much of a clue about what happened and will fail silently (I guess). Whether LD_LIBRARY_PATH is honoured also depends on the implementation; Android didn't use LD_LIBRARY_PATH to determine the library location, and you can see Android's implementation from here.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/27945281', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2312128/']}
jdg_377984
stackexchange
llm_judgeable_groundtruth_similarity
34635740
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: (I originally asked this question in this comment , but Mark Seemann asked me to create a new question instead.) I'm starting a new app (.NET Core, if that matters), and right now I'm trying to decide how exactly to do logging. The general consensus seems to be that logging is a cross-cutting concern, so the logger shouldn't be injected directly into the class that is supposed to log. Often, there's an example like the following class how not to do it: public class BadExample : IExample{ private readonly ILogger logger; public BadExample(ILogger logger) { this.logger = logger; } public void DoStuff() { try { // do the important stuff here } catch (Exception e) { this.logger.Error(e.ToString()); } }} Instead, the class with the business logic shouldn't know about the logger ( SRP ) and there should be a separate class which does the logging: public class BetterExample : IExample{ public void DoStuff() { // do the important stuff here }}public class LoggingBetterExample : IExample{ private readonly IExample betterExample; private readonly ILogger logger; public LoggingBetterExample(IExample betterExample, ILogger logger) { this.betterExample = betterExample; this.logger = logger; } public void DoStuff() { try { this.betterExample.DoStuff(); } catch (Exception e) { this.logger.Error(e.ToString()); } }} Whenever an IExample is needed, the DI container returns an instance of LoggingBetterExample , which uses BetterExample (which contains the actual business logic) under the hood. Some sources for this approach: Blog posts by Mark Seemann : Instrumentation with Decorators and Interceptors Dependency Injection is Loose Coupling Blog post and SO answer by Steven : Meanwhile... on the command side of my architecture Windsor - pulling Transient objects from the container My question: Obviously, the LoggingBetterExample approach only works as long as the logging can be done outside the actual class. (like in the example above: catch any exceptions thrown by BetterExample from outside) My problem is that I'd like to log other things inside the actual class. Mark Seemann suspected here that if someone needs to do this, maybe the method in question is doing too much. As I said before, I'm in the planning phase for a new application, so I don't have much code to show, but the use case I'm thinking right now is something like this: My app will have a config file with some optional values. The user may decide to omit the optional values, but it's an important decision to do this. So I'd like to log a warning when some of the optional values are missing, just in case it happened by error. (omitting the values is perfectly fine though, so I can't just throw an exception and stop) This means that I will have a class which reads config values and needs to do something like this (pseudocode): var config = ReadConfigValues("path/to/config.file");if (config.OptionalValue == null){ logger.Warn("Optional value not set!");} No matter if ReadConfigValues is in this class or a different one, I don't think this class would violate the SRP. When I'm not able to log outside the actual class by using a decorator, is there a better solution than to inject the logger? I know I could read the config file in the inner class, but check the values (and log the warning) in the decorator. 
But IMO checking the value is business logic and not infrastructure, so to me it belongs in the same class where the config file is read. Now provide the response and nothing else.
checking the value is business logic and not infrastructure, so to me it belongs in the same class where the config file is read. Obviously, I don't know your domain well enough to dispute the truth of that assertion, but that logging is part of the domain model sounds strange to me. Anyway, for the sake of argument, let's assume that this is the case. What ought not to be the case, though, is that reading a configuration file is domain logic. While reading and manipulating the data from a file could easily be domain logic, reading a file is I/O. The most common approach to Inversion of Control in application architecture is to employ the Ports & Adapters architecture . The entire point of such an architecture is to decouple the domain model from I/O, and other sources of non-determinism. The poster example is to show how to decouple the domain model from its database access, but file access falls squarely in that category as well. What this ought to imply in this particular case is that you're going to need some IConfigurationReader interface anyway. This means that you can apply a Decorator: public class ValidatingConfigurationReader : IConfigurationReader{ private readonly IConfigurationReader reader; private readonly ILogger logger; public ValidatingConfigurationReader(IConfigurationReader reader, ILogger logger) { this.reader = reader; this.logger = logger; } public MyConfiguration ReadConfigValues(string filePath) { var config = this.reader.ReadConfigValues(filePath); if (config.OptionalValue == null) { this.logger.Warn("Optional value not set!"); } return config; }} This ValidatingConfigurationReader class can be implemented in the domain model, even if the underlying, file-reading IConfigurationReader implementation belongs in some I/O layer.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34635740', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6884/']}
jdg_377985
stackexchange
llm_judgeable_groundtruth_similarity
1503761
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a UITableView with a list of items. Selecting an item pushes a viewController that then proceeds to do the following. from method viewDidLoad I fire off a URLRequest for data that is required by on of my subviews - a UIView subclass with drawRect overridden. When the data arrives from the cloud I start building my view hierarchy. the subclass in question gets passed the data and it's drawRect method now has everything it needs to render. But. Because I don't call drawRect explicitly - Cocoa-Touch handles that - I have no way of informing Cocoa-Touch that I really, really want this UIView subclass to render. When? Now would be good! I've tried [myView setNeedsDisplay]. This kinda works sometimes. Very spotty. I've be wrestling with this for hours and hours. Could someone who please provide me with a rock solid, guaranteed approach to forcing a UIView re-render. Here is the snippet of code that feeds data to the view: // Create the subviewself.chromosomeBlockView = [[[ChromosomeBlockView alloc] initWithFrame:frame] autorelease];// Set some propertiesself.chromosomeBlockView.sequenceString = self.sequenceString;self.chromosomeBlockView.nucleotideBases = self.nucleotideLettersDictionary;// Insert the view in the view hierarchy[self.containerView addSubview:self.chromosomeBlockView];[self.containerView bringSubviewToFront:self.chromosomeBlockView];// A vain attempt to convince Cocoa-Touch that this view is worthy of being displayed ;-)[self.chromosomeBlockView setNeedsDisplay]; Cheers,Doug Now provide the response and nothing else.
The guaranteed, rock solid way to force a UIView to re-render is [myView setNeedsDisplay] . If you're having trouble with that, you're likely running into one of these issues: You're calling it before you actually have the data, or your -drawRect: is over-caching something. You're expecting the view to draw at the moment you call this method. There is intentionally no way to demand "draw right now this very second" using the Cocoa drawing system. That would disrupt the entire view compositing system, trash performance and likely create all kinds of artifacting. There are only ways to say "this needs to be drawn in the next draw cycle." If what you need is "some logic, draw, some more logic," then you need to put the "some more logic" in a separate method and invoke it using -performSelector:withObject:afterDelay: with a delay of 0. That will put "some more logic" after the next draw cycle. See this question for an example of that kind of code, and a case where it might be needed (though it's usually best to look for other solutions if possible since it complicates the code). If you don't think things are getting drawn, put a breakpoint in -drawRect: and see when you're getting called. If you're calling -setNeedsDisplay , but -drawRect: isn't getting called in the next event loop, then dig into your view hierarchy and make sure you're not trying to outsmart it somewhere. Over-cleverness is the #1 cause of bad drawing in my experience. When you think you know best how to trick the system into doing what you want, you usually get it doing exactly what you don't want.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/1503761', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/116169/']}
jdg_377986
stackexchange
llm_judgeable_groundtruth_similarity
66111
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to perform this: locate pg_type.h | cat But this command simply does nothing different than locate pg_type.h What should I change ? I want to perform cat pg_type.h wherever pg_type.h may be. Now provide the response and nothing else.
locate -e0 '*/pg_type.h' | xargs -r0 cat
locate pg_type.h would find all the files with pg_type.h in their path (so for instance if there was a rpg_type.horn directory, you'd end up displaying all the files in there). Without -0 the output of locate can't be post-processed because the files are separated by newline characters, while newline is a perfectly valid character in a file name. cat without arguments writes to stdout what it reads from stdin, so locate | cat would be the same as locate ; cat would just pass the output of locate along. What you need is to pass the list of files as arguments to cat . That's what xargs is typically for: convert a stream of data into a list of arguments. -r is to not call cat if there's no input. Without -0 (which, like -r , is not standard but found on many implementations, at least those where xargs is of any use), xargs would just look for words in its input to convert into arguments, where words are blank-separated and where backslash, single and double quotes can be used to escape those separators, so typically not the format locate uses to display file names. That's why we use the -0 option for both locate and xargs , which makes them use the NUL character (the only character not allowed in a file path) to separate file names. Also note that locate is not a standard command and there exist a great number of different implementations with different versions thereof and different options and behaviours. The code above applies at least to relatively recent versions of the GNU locate and mlocate implementations, which are the most common on Linux-based operating systems.
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/66111', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/4175/']}
jdg_377987
stackexchange
llm_judgeable_groundtruth_similarity
18824887
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to use custom back button. in iOS 6 everything is perfect but iOS 7 is strange. [[UIBarButtonItem appearance] setBackButtonBackgroundImage:[[UIImage imageNamed:@"back_button_normal"] resizableImageWithCapInsets:UIEdgeInsetsMake(0, 12.0, 0, 12.0)] forState:UIControlStateNormal barMetrics:UIBarMetricsDefault]; first, it has no iOS 7 arrow and no background image. (Russian locale) then, if you press the button background image appears. Also i had background image set for UIControlStateHighlighted state and when you hold the button pressed highlighted image appears too. After any back button once pressed all back buttons have background image. BUT! If you present modal view controller, dismiss it, then push any view controller - iOS 7 arrow will appear at every back button. I use DP5. Is that a UIKit bug? PS Also i tried to create back button manually, using UIBarButtonItem , set background image to it, then self.navigationItem.backBarButtonItem = barButtonItem; Did not help.Then i tried to set background image to disabled state and change enabled property of my bar button item, did not help too. Now provide the response and nothing else.
This is not a bug, this is how the Back button looks in iOS 7. You should probably use the new concept for your application, and not set a background image for the back button in iOS 7. If you still want your back button to look the same as it did in iOS 6, then you should probably create those back buttons manually: - (void)loadView{ [super loadView]; UIButton *backButton = [[UIButton alloc] initWithFrame: CGRectMake(0, 0, 60.0f, 30.0f)]; UIImage *backImage = [[UIImage imageNamed:@"back_button_normal.png"] resizableImageWithCapInsets:UIEdgeInsetsMake(0, 12.0f, 0, 12.0f)]; [backButton setBackgroundImage:backImage forState:UIControlStateNormal]; [backButton setTitle:@"Back" forState:UIControlStateNormal]; [backButton addTarget:self action:@selector(popBack) forControlEvents:UIControlEventTouchUpInside]; UIBarButtonItem *backButtonItem = [[UIBarButtonItem alloc] initWithCustomView:backButton]; self.navigationItem.leftBarButtonItem = backButtonItem;}-(void) popBack { [self.navigationController popViewControllerAnimated:YES];} Edit : Not to break Swipe Gesture ( Here is a source) self.navigationController.interactivePopGestureRecognizer.delegate = (id<UIGestureRecognizerDelegate>)self;
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18824887', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1837959/']}
jdg_377988
stackexchange
llm_judgeable_groundtruth_similarity
174991
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How would you compute this sum? It's not a problem I need to immediately solve, but a problem that came to my mind today. I think that the generalization to more than three nested sums would be interesting as well. $$ \sum_{m=1}^{\infty} \sum_{n=1}^{\infty} \sum_{p=1}^{\infty}\frac{(-1)^{m+n+p}}{m+n+p}$$ Now provide the response and nothing else.
Here is a simple lemma: Let $(u_n)_{n\geqslant1}$ denote a decreasing sequence of positive functions defined on $(0,1)$, which converges pointwise to zero and such that $u_1$ is integrable on $(0,1)$. Then, $$\sum\limits_{n=1}^{+\infty}(-1)^n\int_0^1u_n(s)\,\mathrm ds=\int_0^1u(s)\,\mathrm ds,\qquad u(s)=\sum\limits_{n=1}^{+\infty}(-1)^nu_n(s).$$ Now, let us consider the multiple series the OP is interested in. One sees readily that it does not converge absolutely hence the idea is to apply the lemma three times. First, fix $n$ and $m$ and, for every $p\geqslant1$, consider $u_p(s)=s^{m+n+p-1}$. Then $u(s)=-\dfrac{s^{m+n}}{1+s}$ hence the lemma yields$$\sum\limits_{p=1}^{+\infty}\frac{(-1)^{m+n+p}}{m+n+p}=(-1)^{m+n}\sum\limits_{p=1}^{+\infty}(-1)^{p}\int_0^1u_p(s)\,\mathrm ds=(-1)^{m+n+1}\int_0^1\frac{s^{m+n}}{1+s}\,\mathrm ds.$$ Second, fix $m$ and, for every $n\geqslant1$, consider $u_n(s)=\dfrac{s^{m+n}}{1+s}$. Then $u(s)=-\dfrac{s^{m+1}}{(1+s)^2}$ hence the lemma yields$$\sum\limits_{n=1}^{+\infty}(-1)^{m+n+1}\int_0^1\frac{s^{m+n}}{1+s}\,\mathrm ds=(-1)^m\int_0^1\frac{s^{m+1}}{(1+s)^2}\,\mathrm ds$$ Third and finally, for every $m\geqslant1$, consider $u_m(s)=\dfrac{s^{m+1}}{(1+s)^2}$. Then $u(s)=-\dfrac{s^{2}}{(1+s)^3}$ hence the lemma yields$$\sum\limits_{m=1}^{+\infty}(-1)^m\int_0^1\frac{s^{m+1}}{(1+s)^2}\,\mathrm ds=-\int_0^1\frac{s^{2}}{(1+s)^3}\,\mathrm ds.$$ Thus, the triple series the OP is interested in converges and the value $S_3$ of the sum is$$\color{red}{S_3=-\int_0^1\frac{s^{2}}{(1+s)^3}\,\mathrm ds}=-\int_1^2\frac{s^{2}-2s+1}{s^3}\,\mathrm ds=-\left[\log(s)+\frac2s-\frac1{2s^2}\right]_1^2,$$that is, $\color{red}{S_3=-\log(2)+\frac58}=-0.06814718\ldots$ The technique above shows more generally that, for every $k\geqslant1$, the analogous series over $k$ indices converges and that the value of its sum is $$S_k=(-1)^k\int_0^1\frac{s^{k-1}}{(1+s)^k}\,\mathrm ds=(-1)^k\left(\log(2)+\sum_{i=1}^{k-1}(-1)^i{k-1\choose i}\frac1i(1-2^{-i})\right).$$
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/174991', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/32016/']}
jdg_377989
stackexchange
llm_judgeable_groundtruth_similarity
8055923
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to add a debounce to a button, but i want to perform some actions each time user clicks button, but only after 5 second after user hits button, then perform SQL update. Usually the throttle seems to be applied directly to the listener. Here I want some actions performed each time the button is clicked, and then an update after a reasonable waiting period. I am not sure how to use the function in this case... reference: http://code.google.com/p/jquery-debounce/ $('#myButton').click(function() { // do a date calculation // show user changes to screen // wait until user has has stopped clicking the // button for 5 seconds, then update file with "process" function.});function process(){ // update database table} debounce syntax $('input').bind('keyup blur', $.debounce(process, 5000)); Now provide the response and nothing else.
You could still use $.debounce like so:
// create new scope
(function() {
    // create debounced function
    var dprocess = $.debounce(process, 5000);
    // bind event handler
    $('#myButton').click(function() {
        // do a date calculation
        // show user changes to screen
        // call the function
        dprocess();
    });
}());
Alternative without $.debounce (you can always debounce your code this way, without jQuery):
// create new scope
(function() {
    var timer;
    // bind event handler
    $('#myButton').click(function() {
        if(timer) {
            clearTimeout(timer);
        }
        // do a date calculation
        // show user changes to screen
        // call the function
        timer = setTimeout(process, 5000);
    });
}());
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8055923', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/984067/']}
jdg_377990
stackexchange
llm_judgeable_groundtruth_similarity
2819933
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a string like this String str = "la$le\\$li$lo" . I want to split it to get the following output "la","le\\$li","lo" . The \$ is a $ escaped so it should be left in the output. But when I do str.split("[^\\\\]\\$") y get "l","le\\$l","lo" . From what I get my regex is matching a$ and i$ and removing then. Any idea of how to get my characters back? Thanks Now provide the response and nothing else.
Use zero-width matching assertions: String str = "la$le\\$li$lo"; System.out.println(java.util.Arrays.toString( str.split("(?<!\\\\)\\$") )); // prints "[la, le\$li, lo]" The regex is essentially (?<!\\)\$ It uses negative lookbehind to assert that there is not a preceding \ . See also regular-expressions.info/Lookarounds More examples of splitting on assertions Simple sentence splitting, keeping punctuation marks: String str = "Really?Wow!This.Is.Awesome!"; System.out.println(java.util.Arrays.toString( str.split("(?<=[.!?])") )); // prints "[Really?, Wow!, This., Is., Awesome!]" Splitting a long string into fixed-length parts, using \G String str = "012345678901234567890"; System.out.println(java.util.Arrays.toString( str.split("(?<=\\G.{4})") )); // prints "[0123, 4567, 8901, 2345, 6789, 0]" Using a lookbehind/lookahead combo: String str = "HelloThereHowAreYou"; System.out.println(java.util.Arrays.toString( str.split("(?<=[a-z])(?=[A-Z])") )); // prints "[Hello, There, How, Are, You]" Related questions Can you use zero-width matching regex in String split? Backreferences in lookbehind How do I convert CamelCase into human-readable names in Java?
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2819933', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/204077/']}
jdg_377991
stackexchange
llm_judgeable_groundtruth_similarity
7261823
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Back in the past I learned a lot about CSS but now I can't remember how to reuse styles. Example: I have some tabs with class tab and I can switch them with javascript. The current selected tab has another class, active . Their CSS style: .tab { position: relative; top: 0; left: 0; width: 100%; padding: 15px 0 15px 0; border: solid thin #CCC; text-align: center; font-weight: bold; margin-bottom: 10px; color: #272F42; cursor: pointer; background-color: white;}.active { position: relative; top: 0; left: 0; padding: 15px 0 15px 0; border: solid thin #CCC; text-align: center; font-weight: bold; margin-bottom: 10px; color: #272F42; cursor: default; background-color: #FFCF75;} Both styles has a lot of identic styles except 2, cursor and background-color . So my question is, how can I resuse the .tab style and use it in .active? I want achieve something like this: .active { //extends .tab cursor: default; background-color: #FFCF75;} Thanks. Now provide the response and nothing else.
You could, and probably should, apply both classes to the element like so: <a class="tab active"></a> If you want a css rule for the specific combination of these two classes, you'd do it like so: .tab { position: relative; top: 0; left: 0; width: 100%; padding: 15px 0 15px 0; border: solid thin #CCC; text-align: center; font-weight: bold; margin-bottom: 10px; color: #272F42; cursor: pointer; background-color: white;}.active { cursor: default; background-color: #FFCF75;}.tab.active /* no space */{ /* styles for elements that are both .tab and .active */ /* leaving .active reusable for things other than tabs */ /* and allowing override of both .tab and .active */} This allows you to avoid making unnecessary copies of your style declarations... and gives you the specificity to override either of the individual classes when an element has both.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7261823', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/458093/']}
jdg_377992
stackexchange
llm_judgeable_groundtruth_similarity
1622209
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a rails create action which send back some jquery in a file: create.js.erb var appearance = $("<%= escape_javascript(render(:partial => @appearance)) %>").effect("highlight", {color: '#E6ff00'}, 2000);$("#sortable").append(appearance);$("#new_appearance")[0].reset(); I have started using HAML and want to know how I should be converting this. Can I use js.haml?If so what should the markup look like? Now provide the response and nothing else.
Actually, returning JS in HAML is pretty easy, just use the :plain filter and enclose anything you want evaluated in #{}. :plain var appearance = $("#{escape_javascript(render(:partial => @appearance))}").effect("highlight", {color: '#E6ff00'}, 2000); $("#sortable").append(appearance); $("#new_appearance")[0].reset(); Do not use the :javascript filter as it will enclose everything in a <script> tag, which is unnecessary. If you need to use some logic, just put the :plain inside the nesting. - if params[:printing] :plain $('#print-view').html("#{escape_javascript(render 'print_preview')}"); Sorry, ERB.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1622209', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/63748/']}
jdg_377993
stackexchange
llm_judgeable_groundtruth_similarity
27004721
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to port an Android app to the new support library (support-v4:21.0.0) and I'm having trouble starting Activities from Fragments with a transition. In my Activities, I've been doing something like: Bundle options = ActivityOptionsCompat.makeSceneTransitionAnimation(this).toBundle();ActivityCompat.startActivityForResult(this, intent, REQUEST_SOMETHING, options); which works fine for Activities. However, if I try to do something similar with Fragments, like: Activity activity = getActivity();Bundle options = ActivityOptionsCompat.makeSceneTransitionAnimation(activity).toBundle();ActivityCompat.startActivityForResult(activity, intent, REQUEST_SOMETHING, options); it turns out that onActivityResult() is not called for the Fragment, but only the enclosing Activity. I haven't found anything in the support library to pass the options Bundle as a parameter to startActivityForResult() on an actual Fragment and have it call back to onActivityResult() in that Fragment. Is this possible? The simplest solution would be to handle all onActivityResult() calls in the Activity itself, but I'd rather not do that because I have a ton of possible Fragments that may be receiving that callback. Help is appreciated. Thanks! Now provide the response and nothing else.
Sadly, ActivityCompat.startActivityForResult() doesn't work quite right in Fragments (see Alex Lockwood's answer). For several weeks I marvelled at how Google never gave us an ActivityCompat method equivalent to Fragment's implementation of startActivityForResult() . What were they thinking?! But then I had an idea: Let's take a look at how the method is actually implemented. As a matter of fact, startActivityForResult() in Fragment is different from the one in Activity (see here ): public void startActivityForResult(Intent intent, int requestCode) { if (mActivity == null) { throw new IllegalStateException("Fragment " + this + " not attached to Activity"); } mActivity.startActivityFromFragment(this, intent, requestCode);} Now startActivityFromFragment() looks like this (see here ): public void startActivityFromFragment(Fragment fragment, Intent intent, int requestCode) { if (requestCode == -1) { super.startActivityForResult(intent, -1); return; } if ((requestCode&0xffff0000) != 0) { throw new IllegalArgumentException("Can only use lower 16 bits for requestCode"); } super.startActivityForResult(intent, ((fragment.mIndex + 1) << 16) + (requestCode & 0xffff));} Google uses some odd byte shifting on the request code to make sure only the calling Fragment's onActivityResult() is called afterwards. Now since ActivityCompat doesn't provide any startActivityFromFragment() , the only option left is to implement it yourself. Reflection is required to access the package private field mIndex . public static void startActivityForResult(Fragment fragment, Intent intent, int requestCode, Bundle options) { if (Build.VERSION.SDK_INT >= 16) { if ((requestCode & 0xffff0000) != 0) { throw new IllegalArgumentException("Can only use lower 16 bits" + " for requestCode"); } if (requestCode != -1) { try { Field mIndex = Fragment.class.getDeclaredField("mIndex"); mIndex.setAccessible(true); requestCode = ((mIndex.getInt(fragment) + 1) << 16) + (requestCode & 0xffff); } catch (NoSuchFieldException | IllegalAccessException e) { throw new RuntimeException(e); } } ActivityCompat.startActivityForResult(fragment.getActivity(), intent, requestCode, options); } else { fragment.getActivity().startActivityFromFragment(fragment, intent, requestCode); }} Copy that method anywhere you like and use it from your Fragment. Its onActivityResult() will be called as it should. UPDATE: Support library v23.2 was released and it seems startActivityFromFragment(Fragment fragment, Intent intent, int requestCode, Bundle options) does the job now :)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27004721', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3677267/']}
jdg_377994
stackexchange
llm_judgeable_groundtruth_similarity
46142
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose that we choose the size of the test to be $\alpha = 0.05$, and based on our sample size $n$ and magnitude of the absolute value of the estimate, we determine that the test's power (i.e., 1 - Prob(Type 2 Error)) is $99.9\%$. If I then fail to reject $H_0=0$ against $H_a \not= 0$, what consequences does this have for our belief that $H_0$ is correct, assuming that we have the correct null hypothesis distribution of the test statistic? What if the power was $50\%$? Now provide the response and nothing else.
When you calculate power it is for a specific alternative value, so failing to reject the null could mean that the null is true, or it could be that it is false, just not as strong an effect as the alternative value. What is much more meaningful is to look at the confidence interval to see what plausible/reasonable values the true parameter could take. Often it is best to think not just about what the null value would be, but also about the region of values that are practically equivalent to the null (not equal to the null, but close enough that we would not care) vs. the region of practical importance. Then see where the confidence interval lies. If the interval does not include the null but is completely in the "who cares" region, then that tells us something. If the interval only includes values of interest then that tells us something else. The big problem comes when the interval contains both values of practical importance and the null value; then our results are indeterminate, the result could be nothing or it could be important. Better than worrying about power for a specific alternative is to design the study so that the confidence interval is too narrow to include both the null value and the smallest important difference (this will result in high power, but is a better way to think of things).
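A hypothetical numeric illustration (the numbers are invented for the example): suppose differences smaller than 5 units are too small to matter in practice. A 95% confidence interval of (0.2, 1.3) excludes the null yet tells us that any effect is practically negligible; an interval of (6, 12) indicates an effect that matters; an interval of (-1, 9) is the indeterminate case, since it contains both the null value and practically important values.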
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/46142', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/16811/']}
jdg_377995
stackexchange
llm_judgeable_groundtruth_similarity
38240943
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a Jenkins pipeline job called "TestPipeline". I want to trigger a build on 2 different slaves which labeled "tester1' and "tester2". And the pipeline script is quite simple here: node('tester1') { build 'test_job'}node('tester2') { build 'test_job'} However when I run the TestPipeline job, the "test_job" won't run on the nodes which I assigned. But run on random node instead. I'm wondering if I should set "Restrict where this project can be run" on my "test_job". So I set it to "tester" (The "tester" label contains both node "tester1" and "tester2"). But when I run the pipeline job again, the "test_job" runs on "tester2" twice. I should expect the job to run on "tester1" first and then run on "tester2". Why is that? Is it because the "node" step doesn't matter when it comes to which node the build step should be built on? Now provide the response and nothing else.
Please see the bug here . The solution is as follows: (1) Install the Node and Label parameter plugin. (2) In test_job's configuration, select 'This build is parameterized', add a Label parameter, and set the parameter name to 'node'. (3) In the pipeline script, use this code:
build job: 'test_job', parameters: [[$class: 'LabelParameterValue', name: 'node', label: 'tester1']]
build job: 'test_job', parameters: [[$class: 'LabelParameterValue', name: 'node', label: 'tester2']]
And the job will be built as I wanted. However, I think it is only a workaround. I still believe this is a bug, because the node step should do its job instead of letting other plugins do it for it.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/38240943', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5114246/']}
jdg_377996
stackexchange
llm_judgeable_groundtruth_similarity
15488099
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Lets say I have a function like this: int main(){ char* str = new char[10]; for(int i=0;i<5;i++) { //Do stuff with str } delete[] str; return 0;} Why would I need to delete str if I am going to end the program anyways?I wouldn't care if that memory goes to a land full of unicorns if I am just going to exit, right? Is it just good practice? Does it have deeper consequences? Now provide the response and nothing else.
If in fact your question really is "I have this trivial program, is it OK that I don't free a few bytes before it exits?" the answer is yes, that's fine. On any modern operating system that's going to be just fine. And the program is trivial; it's not like you're going to be putting it into a pacemaker or running the braking systems of a Toyota Camry with this thing. If the only customer is you then the only person you can possibly impact by being sloppy is you. The problem then comes in when you start to generalize to non-trivial cases from the answer to this question asked about a trivial case. So let's instead ask two questions about some non-trivial cases. I have a long-running service that allocates and deallocates memory in complex ways, perhaps involving multiple allocators hitting multiple heaps. Shutting down my service in the normal mode is a complicated and time-consuming process that involves ensuring that external state -- files, databases, etc. -- is consistently shut down. Should I ensure that every byte of memory that I allocated is deallocated before I shut down? Yes, and I'll tell you why. One of the worst things that can happen to a long-running service is if it accidentally leaks memory. Even tiny leaks can add up to huge leaks over time. A standard technique for finding and fixing memory leaks is to instrument the allocation heaps so that at shutdown time they log all the resources that were ever allocated without being freed. Unless you like chasing down a lot of false positives and spending a lot of time in the debugger, always free your memory even if doing so is not strictly speaking necessary. The user is already expecting that shutting the service down might take billions of nanoseconds so who cares if you cause a little extra pressure on the virtual allocator making sure that everything is cleaned up? This is just the price you pay for big complicated software. And it's not like you're shutting down the service all the time, so again, who cares if it's a few milliseconds slower than it could be? I have that same long-running service. If I detect that one of my internal data structures is corrupt I wish to "fail fast". The program is in an undefined state, it is likely running with elevated privileges, and I am going to assume that if I detect corrupted state, it is because my service is actively being attacked by hostile parties. The safest thing to do is to shut down the service immediately. I would rather allow the attackers to deny service to the clients than to risk the service staying up and compromising my users' data further. In this emergency shutdown scenario should I make sure that every byte of memory I allocated is freed? Of course not. The operating system is going to take care of that for you. If your heap is corrupt, the attackers may be hoping that you free memory as part of their exploit. Every millisecond counts. And why would you bother polishing the doorknobs and mopping the kitchen before you drop a tactical nuke on the building? So the answer to the question "should I free memory before my program exits?" is "it depends on what your program does".
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15488099', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2036498/']}
jdg_377997
stackexchange
llm_judgeable_groundtruth_similarity
4110420
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In the definition of category, there is a morphism composition law. If A, B, C are objects, and if f is a morphism from A to B, g is a morphism from B to C, then there is a corresponding morphism from A to C called the composition of f and g. I'm wondering this law should be classified as the property of the morphism of its own, or it belongs to the structure of the whole category. Even though both are OK for further study, but in philosophy, which is better? More explanation about my question: In my understanding, the definition of category is composed of 5 parts. The data of objects. These objects should be considered as points, and they have no property at all. The data of morphisms. There is a property of a morphism: It is from which object to which object. The composition law of morphisms as I said above. For every object, there is a unit morphisms satisfy ... (omitted as we all know) The associative law of morphisms. Part 1 and 2 are the data of the category. Part 4 and 5 can be considered as some constraints of the category. But part 3 is special. It is not like the data, nor the constraint of category. What should it be? Now provide the response and nothing else.
As I indicated in my comment on MO, this question is well-addressed by the category-theoretic distinction between stuff, structure, and properties . A category comprises some stuff : the objects and morphisms; for simplicity we'll also consider the domain and codomain maps to be stuff (there's a bit of arbitrariness here). So the stuff of a category is its underlying graph . some structure : the identity morphisms and composition maps. Let's call a schmategory any graph equipped with "identity edges" and "composition maps", even if they're not unital or associative. some properties : the associativity and unit equations. So a category is a schmategory which is unital and associative. There's a category $Cat$ of categories. Likewise, there is a category $Schmat$ of schmategories, and a category $Gph$ of graphs. There are forgetful functors $Cat \xrightarrow U Schmat \xrightarrow V Gph$ . The yoga of stuff, structure, and properties tells us that $U$ forgets only properties because it is fully faithful and that $V$ forgets at most structure because it is faithful. For this reason we may say that an object of $Cat$ is an object of $Schmat$ satisfying extra properties and that an object of $Schmat$ is an object of $Gph$ equipped with extra structure .
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4110420', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/828127/']}
jdg_377998
stackexchange
llm_judgeable_groundtruth_similarity
252780
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to find the value of the following sum (if exist): $$\sum_{n=1}^{\infty}\left(\arctan\left(\frac{1}{4}-n\right)-\arctan\left(-\frac{1}{4}-n\right)\right)$$where, $\arctan$ represent the inverse tangent function - $\tan^{-1}$. I tried to use the telescoping series idea and the sequence of partial sums but I couldn't cancel any terms! Now provide the response and nothing else.
Consider$$\tag{1}f(x):=\sum_{n=1}^\infty \arctan\left(\left(\frac 14-n\right)x\right)-\arctan\left(\left(-\frac 14-n\right)x\right)$$and let's rewrite the derivative of $f$ :\begin{align}\tag{2}f'(x)&=\frac 4{x^2}\sum_{n=1}^\infty\frac{1-4n}{(4n-1)^2+\bigl(\frac 4x\bigr)^2}-\frac{-1-4n}{(4n+1)^2+\bigl(\frac 4x\bigr)^2}\\\tag{3}f'(x)&=\frac 4{x^2}\left(\frac {-1}{1^2+\left(\frac 4x\right)^2}+\sum_{k=1}^\infty\frac{k\sin\bigl(k\frac {\pi}2\bigr)}{k^2+\bigl(\frac 4x\bigr)^2}\right)\\\end{align}(since the $k=1$ term didn't appear in $(2)$) But the series in $(3)$ may be obtained from $\,\frac d{d\theta} C_a(\theta)\,$ with :$$\tag{4}C_a(\theta)=\frac {\pi}{2a}\frac{\cosh((\pi-|\theta|)a)}{\sinh(\pi a)}-\frac 1{2a^2}=\sum_{k=1}^\infty\frac{\cos(k\,\theta)}{k^2+a^2}$$which may be obtained from the $\cos(zx)$ formula here (with substitutions $\ x\to\pi-\theta,\ z\to ia$). The replacement of the series in $(3)$ by $\,\frac d{d\theta} C_a(\theta)\,$ applied at $\,\theta=\frac {\pi}2$ gives us :\begin{align}f'(x)&=\left(-\arctan\left(\frac x4\right)\right)'-\frac 4{x^2}C_{\frac 4x}\left(\theta\right)'_{\theta=\frac {\pi}2}\\&=\left(-\arctan\left(\frac x4\right)\right)'+\frac{4\pi}{2x^2}\frac{\sinh\left(\frac{\pi}2 \frac 4x\right)}{\sinh\left(\pi\frac 4x\right)}\\&=\left(-\arctan\left(\frac x4\right)\right)'+\frac{\pi}{x^2}\frac 1{\cosh\left(\frac{2\pi}x\right)}\\\end{align} Integrating both terms returns (with constant of integration $\frac {\pi}2$ since $f(0)=0$) :$$f(x)=\frac {\pi}2-\arctan\left(\frac x4\right)-\arctan\left(\tanh\left(\frac {\pi}x\right)\right)\quad\text{for}\ \ x>0$$i.e. the neat :$$\tag{5}\boxed{\displaystyle f(x)=\arctan\left(\frac 4x\right)-\arctan\left(\tanh\left(\frac {\pi}x\right)\right)}\quad\text{for}\ \ x>0$$ So that your solution will be (for $x=1$) :$$\boxed{\displaystyle \arctan(4)-\arctan\left(\tanh(\pi)\right)}$$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/252780', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/52095/']}
jdg_377999
stackexchange
llm_judgeable_groundtruth_similarity
23627
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: After Japan surrendered to Allied forces in 1945, the Potsdam Declaration required Japanese military forces to be disarmed, limited Japanese sovereignty to specific islands, and prohibited Japanese industries from rearming the country for war (among other things). The Treaty of San Francisco (which officially declared peace between the U.S. and Japan and ended the American occupation) recognized Japan as a sovereign nation and allowed Japan to create a security force for its own defense, but reiterated that it could not arm itself to be an offensive threat. Both the surrender and the peace treaty are pretty vague about when and how they should expire. Here we are 70+ years later, Japan is the world's third largest economy and has the world's 11th largest population. It has been a self-governing independent nation since the 1950's. And yet, it still does not have its own military. It does have the Self-Defense Forces (SDF), which is a de facto army, navy, and air force. But officially they are only a defensive force. The Japanese Constitution contains the declaration that: ... the Japanese people forever renounce war as a sovereign right of the nation and the threat or use of force as means of settling international disputes. And also: ...land, sea, and air forces, as well as other war potential, will never be maintained. However, this text was directly written by the United States after the war. Given the recent tensions with North Korea, and China's brazen claim to pretty much the entirety of the South China Sea, can the Japanese legally build an offensive military force to counter those threats? Would they be able to fight alongside other countries if (God forbid) a war broke out with North Korea or China? In other words, are they pacifists by choice, or are they still bound by their terms of surrender and the treaties they signed? Now provide the response and nothing else.
Yes. Pacta sunt servanda , agreements must be kept. The Treaty of San Francisco is 70 years old, which is young compared to many older treaties. Agreements have a few "outs," none of which is valid in Japan's case: Duress : Agreements signed under duress can sometimes be nullified. Japan could perhaps be said to be under duress from the United States, as any unconditional surrender is likely at least partially due to duress. But as far as I know, the duress argument does not work at all for state parties. Rebus sic stantibus : This is similar to the concept of force majeure . The state argues that unforeseen events have made the treaty inapplicable. However, this doctrine is seen as very limited in scope as it otherwise would make it very easy for states to escape from their obligations. Japan could perhaps argue that the nuclear armament of North Korea is such an unforeseen event. I don't think that argument holds merit because Japan at the time of its unconditional surrender had already been bombed twice by nuclear weapons. Therefore, that a nearby power could acquire such weapons could not possibly qualify as a "fundamental change of circumstances." Law of State Succession : If a state succeeds another, it is not necessarily bound by all treaties that the previous state signed. For example, after the 1917 Russian Revolution, the Soviet Union claimed that it was a new state, distinct from the Russian Empire, and that it therefore wasn't bound by the treaties that the previous state had signed. Nor is this out applicable to Japan. It is clearly the same state as it was when the San Francisco treaty was signed. So yes, Japan is bound by the treaty. But the treaty text is open to interpretation. Constitution vs treaties Note that the relevant article of the Potsdam declaration reads: Japan shall be permitted to maintain such industries as will sustain her economy and permit the exaction of just reparations in kind, but not those which would enable her to rearm for war. This is the article Japan is bound to by international law. The Japanese constitution, on the other hand, states: Aspiring sincerely to an international peace based on justice and order, the Japanese people forever renounce war as a sovereign right of the nation and the threat or use of force as means of settling international disputes. But articles of constitutions have no bearing on international law. They are better viewed as "treaties" between the state and the people of that state. Although it would certainly annoy their neighbors, Japan is free to do whatever it wants with its constitution as it is an entirely domestic matter.
{}
{'log_upvote_score': 6, 'links': ['https://politics.stackexchange.com/questions/23627', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/9351/']}
jdg_378000
stackexchange
llm_judgeable_groundtruth_similarity
641726
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: The Mean Time Between Failures , or MTBF, for this SSD is listed as 1,500,000 hours. That is a lot of hours. 1,500,000 hours is roughly 170 years. Since the invention of this particular SSD is post-Civil War, how do they know what the MTBF is? A couple of options that make sense to me: Newegg just has a typo The definition of mean time between failures is not what I think it is They are using some type of statistical extrapolation to estimate what the MTBF would be Question: How is the Mean Time Between Failures (MTFB) obtained for SSD/HDDs? Now provide the response and nothing else.
Drive manufacturers specify the reliability of their products in terms of two related metrics: the annualized failure rate (AFR), which is the percentage of disk drives in a population that fail in a test, scaled to a per-year estimate; and the mean time to failure (MTTF). The AFR of a new product is typically estimated based on accelerated life and stress tests or based on field data from earlier products. The MTTF is estimated as the number of power-on hours per year divided by the AFR. A common assumption for drives in servers is that they are powered on 100% of the time. http://www.cs.cmu.edu/~bianca/fast/ An MTTF of 1.5 million hours sounds somewhat plausible. That would roughly be a test with 1000 drives running for 6 months and 3 drives failing. The AFR would be 3 failures / (1000 drives × 0.5 year) = 0.6% annually, and the MTTF = 1 yr / 0.6% = 1,460,967 hours, or about 167 years. A different way to look at that number: when you have 167 drives and leave them running for a year, the manufacturer claims that on average you'll see one drive fail. But I expect that is simply the constant "random" mechanical/electronic failure rate. Assuming that failure rates follow the bathtub curve , as mentioned in the comments, the manufacturer's marketing team can massage the reliability numbers a bit, for instance by not including DOAs (dead on arrival, units that passed quality control but fail when the end-user installs them) and stretching the DOA definition to also exclude those in the early failure spike. And because testing isn't performed long enough you won't see age effects either. I think the warranty period is a better indication of how long a manufacturer really expects an SSD to last! That definitely won't be measured in decades or centuries... Associated with the MTBF is the reliability associated with the finite number of write cycles NAND cells can support. A common metric is the total write capacity, usually in TB. Along with the other performance requirements, that is one big limiter. To allow a more convenient comparison between different makes and differently sized drives, the write endurance is often converted to daily write capacity as a fraction of the disk capacity. Assuming that a drive is rated to live as long as it's under warranty: a 100 GB SSD may have a 3 year warranty and a write capacity of 50 TB, giving 50 TB / (3 × 365 days × 100 GB) ≈ 0.46 drive writes per day. The higher that number, the more suited the disk is for write-intensive IO. At the moment (end of 2014) value server-line SSDs have a value of 0.3-0.8 drive/day, mid-range is increasing steadily from 1-5, and high-end seems to sky-rocket with write endurance levels of up to 25 × the drive capacity per day for 3-5 years. Some real-world tests show that sometimes the vendor claims can be massively exceeded, but driving equipment way past the vendor limits isn't always an enterprise consideration... Instead buy correctly spec'd drives for your purposes.
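As a quick numeric sanity check of the arithmetic above, here is a small Python sketch (the 1000-drive, 6-month, 3-failure test is the same illustrative example as before, not vendor data):
# illustrative reliability test: 1000 drives observed for half a year, 3 failures
drives = 1000
years_observed = 0.5
failures = 3
afr = failures / (drives * years_observed)   # annualized failure rate per drive-year
mttf_hours = (1 / afr) * 365.25 * 24         # mean time to failure in hours
print(f"AFR = {afr:.1%}, MTTF = {mttf_hours:,.0f} hours")
# prints: AFR = 0.6%, MTTF = 1,461,000 hours (roughly 167 years)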
{}
{'log_upvote_score': 6, 'links': ['https://serverfault.com/questions/641726', 'https://serverfault.com', 'https://serverfault.com/users/205338/']}
jdg_378001
stackexchange
llm_judgeable_groundtruth_similarity
135811
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The capacity of an elevator is either 15 children or 11 adults? If 9 children are currently on the elevator how many adults can still get in? Now provide the response and nothing else.
Assume that each child weighs $1$ unit, whence the capacity of the elevator is $15$ units. This makes the weight of each adult $\frac{15}{11}\approx 1.36$ units. If the elevator is occupied by $9$ children, i.e. $9$ units, you're left with $6$ units. How many adults would fit into those units?
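For completeness, the arithmetic that hint leads to: $6 \div \frac{15}{11} = 4.4$, and since only whole adults can board, $\lfloor 4.4 \rfloor = 4$ adults can still get in.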
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/135811', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/29768/']}
jdg_378002
stackexchange
llm_judgeable_groundtruth_similarity
10396920
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm fairly new to Python and have a question regarding the following class: class Configuration: def __init__(self): parser = SafeConfigParser() try: if parser.read(CONFIG_FILE) is None: raise IOError('Cannot open configuration file') except IOError, error: sys.exit(error) else: self.__parser = parser self.fileName = CONFIG_FILE def get_section(self): p = self.__parser result = [] for s in p.sections(): result.append('{0}'.format(s)) return result def get_info(self, config_section): p = self.__parser self.section = config_section self.url = p.get(config_section, 'url') self.imgexpr = p.get(config_section, 'imgexpr') self.imgattr1 = p.get(config_section, 'imgattr1') self.imgattr2 = p.get(config_section, 'imgattr2') self.destination = p.get(config_section, 'destination') self.createzip = p.get(config_section, 'createzip') self.pagesnumber = p.get(config_section, 'pagesnumber') Is it OK to add more instance variables in another function, get_info in this example, or is it best practice to define all instance variables in the constructor? Couldn't it lead to spaghetti code if I define new instance variables all over the place? EDIT: I'm using this code with a simple image scraper. Via get_section I return all sections in the config file, and then iterate through them to visit each site that I'm scraping images from. For each iteration I make a call to get_section to get the configuration settings for each section in the config file.If anyone can come up with another approach it'll be fine! Thanks! Now provide the response and nothing else.
I would definitely declare all instance variables in __init__ . To not do so leads to increased complexity and potential unexpected side effects. To provide an alternate point of view from David Hall in terms of access, this is from the Google Python style guide . Access Control: If an accessor function would be trivial you should use public variables instead of accessor functions to avoid the extra cost of function calls in Python. When more functionality is added you can use property to keep the syntax consistent On the other hand, if access is more complex, or the cost of accessing the variable is significant, you should use function calls (following the Naming guidelines) such as get_foo() and set_foo(). If the past behavior allowed access through a property, do not bind the new accessor functions to the property. Any code still attempting to access the variable by the old method should break visibly so they are made aware of the change in complexity. From PEP8 For simple public data attributes, it is best to expose just the attribute name, without complicated accessor/mutator methods. Keep in mind that Python provides an easy path to future enhancement, should you find that a simple data attribute needs to grow functional behavior. In that case, use properties to hide functional implementation behind simple data attribute access syntax. Note 1: Properties only work on new-style classes. Note 2: Try to keep the functional behavior side-effect free, although side-effects such as caching are generally fine. Note 3: Avoid using properties for computationally expensive operations; the attribute notation makes the caller believe that access is (relatively) cheap. Python isn't java/C#, and it has very strong ideas about how code should look and be written. If you are coding in python, it makes sense to make it look and feel like python. Other people will be able to understand your code more easily and you'll be able to understand other python code better as well.
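To make the property suggestion concrete, here is a minimal sketch (the class and attribute names are illustrative, not taken from the question's Configuration class):
class Config:
    def __init__(self):
        self._url = None                  # every attribute declared up front in __init__

    @property
    def url(self):
        return self._url                  # callers keep plain attribute-style access

    @url.setter
    def url(self, value):
        if not value:                     # room to grow validation later without changing callers
            raise ValueError("url must be non-empty")
        self._url = value

cfg = Config()
cfg.url = "http://example.com"            # looks like simple assignment, runs the setter
print(cfg.url)
Because access stays attribute-style, you can start with a bare public attribute and only switch to a property when real logic is needed, which is exactly the path PEP8 and the Google guide describe.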
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10396920', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1335708/']}
jdg_378003
stackexchange
llm_judgeable_groundtruth_similarity
57529
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Usually, to produce ketones from secondary alcohols (e.g. propanone from 2-propanol), people use a solution of potassium or sodium dichromate with a small amount of sulfuric acid. Can other oxidizers like hydrogen peroxide be used, and will the use of hydrogen peroxide produce organic peroxides? $$\ce{C3H8O + H2O2 -> C3H6O + 2 H2O}$$ This is not for any particular purpose. Now provide the response and nothing else.
I think it is possible to use hydrogen peroxide. This paper discusses it: http://greenchem.uoregon.edu/PDFs/ResourceID88.pdf Other alternatives: Pyridinium Chlorochromate (or PCC for short) Dess-Martin Reagent Swern Oxidation: oxalyl chloride in DMSO with triethylamine Fétizon Oxidation: silver(I) carbonate adsorbed onto the surface of celite (Fétizon's reagent). This can be prepared by adding silver nitrate to aqueous sodium carbonate, in the presence of purified celite. Oppenauer Oxidation: excess ketone reagent (such as acetone) and an aluminium triisopropoxide catalyst Pfitzner–Moffatt oxidation: dimethyl sulfoxide (DMSO) activated with a carbodiimide, such as dicyclohexylcarbodiimide (DCC) Corey-Kim Oxidation: Dimethyl sulfide is treated with N-chlorosuccinimide (NCS), followed by addition of triethylamine
{}
{'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/57529', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/25206/']}
jdg_378004
stackexchange
llm_judgeable_groundtruth_similarity
154745
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Some word processing programs have a menu for entering special characters, including non-breaking spaces. It is also possible to copy the character created in the word processing program and paste it into other places, such as a terminal or a website text field. However, it is inconvenient to need to use a word processing program just to generate a non-breaking space in the first place. How can I use my keyboard directly to enter a non-breaking space? Now provide the response and nothing else.
Once upon a time I told my debian fairy that I want compose instead of caps lock and typing compose space space now gives me the super solid unbreakable space: compose space space ! compose space space ! compose space space ! compose space space ! compose space space ! compose space space ! compose space space ! compose space space ! For debianish systems have a look into /etc/default/keyboard , I have the following assigment there: XKBOPTIONS="compose:caps" . Alternatively, if you're using KDE, the "advanced" tab, of the kcmshell4 kcm_keyboard command lets you configure what key to map to compose. This setting affects the text terminals too... at least in debian...
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/154745', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/653/']}
jdg_378005
stackexchange
llm_judgeable_groundtruth_similarity
86075
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm studying the Kalman Filter for tracking and smoothing. Even if I have understood the Bayesian filter concept, and I can efficiently use some of Kalman Filter implementation I'm stucked on understand the math behind it in an easy way. So, I'm looking for an easy to understand derivation of Kalman Filter equations ( (1) update step , (2) prediction step and (3) Kalman Filter gain ) from the Bayes rules and Chapman- Kolmogorov formula, knowing that: Temporal model is expressed by: $$ \textbf{X}_t = A\textbf{X}_{t-1} + \mu_p + \epsilon_p$$ where $A$ is transition matrix $D_\textbf{X} \times D_\textbf{X}$, $\mu_p$ is the $D_\textbf{X} \times 1$ control signal vector and $\epsilon_p$ is a transition gaussian noise with covariance $\Sigma_m$, and in probabilistic term could be expressed by: $$ p(\textbf{X}_t | \textbf{X}_{t-1}) = Norm_{\textbf{X}_t}[\textbf{X}_{t-1} + \mu_p, \Sigma_p] $$ and Measurement model is expressed by: $$ \textbf{y}_t = H\textbf{X}_t + \mu_m + \epsilon_m $$ where $H$ the $D_y \times D_x$ observation matrix, that maps real state space to observation space, $\mu_m$ is a $D_\textbf{y} \times1$ mean vector, and $\epsilon_m$ is the observation noise with covariance $\Sigma_m$ that in probabilistic term could be expressed by $$ p(\textbf{y}_t | \textbf{X}_t) = Norm_{\textbf{y}_t}[ H\textbf{X}_t + \mu_m, \epsilon_m] $$ Now provide the response and nothing else.
There is a simple, straightforward derivation that starts with the assumptions of the Kalman filter and requires a little Algebra to arrive at the update and extrapolation equations as well as some properties regarding the measurement residuals (difference between the predicted state and the measurement). To start, the Kalman Filter is a linear, unbiased estimator that uses a predictor/corrector process to estimate the state given a sequence of measurements. This means that the general process involves predicting the state and then correcting the state based upon the difference between that prediction and the observed measurement (also known as the residual). The question becomes how to update the state prediction with the observed measurement such that the resulting state estimate is: (1) a linear combination of the predicted state "x" and the observed measurement "z" and (2) has an error with zero mean (unbiased). Base upon these assumptions, the Kalman Filter can be derived. State and Measurement Model Notation and Assumptions The state dynamics model for the state vector $\bar x_k$ at time $k$ is given by the state transition matrix $F_{k-1}$ and the state vector $\bar x_{k-1}$ at a previous time $k-1$ . The state dynamics model also includes process noise given by $\bar v_{k-1}$ at time $k-1$ . The measurement model for the measurement vector $\bar z_k$ at time $k$ is given by the observation matrix $H_k$ and the state vector $\bar x_k$ at time $k$ . The measurement model also includes measurement noise given by $\bar w_k$ at time $k$ . The Kalman Filter derivation is easier if we make the Linear Gaussian assumptions and assume that the measurement noise and process noises are statistically independent (uncorrelated): State Estimation and Error Notiations Now, we wish to find the state estimate $\hat x$ given a time series of measurements and define the following notation: $\hat x_{k|k}$ is the state estimate at time $k$ after updating the Kalman Filter with all measurements through time $k$ . That is, it is the updated/filtered state estimate . $\hat x_{k|k-1}$ is the state estimate at time $k$ after updating the Kalman Filter with all but the most recent measurement. That is, it is the predicted state estimate . $\tilde x_{j|k}$ is the estimation error in the state, which is given by: $\tilde x_{j|k} = x_j - \hat x_{j|k}$ $P_{k|k}$ is the state estimate error covariance matrix at time $k$ after updating the Kalman Filter with all measurements through time $k$ . That is, it is the error covariance for the updated/filtered state estimate . $P_{k|k-1}$ is the state estimate at time $k$ after updating the Kalman Filter with all but the most recent measurement. That is, it is the error covariance for the predicted state estimate . 
$P_{j|k}$ is the state estimate error covariance matrix, which is given by: $P_{j|k} = E[\tilde x_{j|k} \tilde x_{j|k}^{\prime}]$ The predicted measurement that is predicted by the Kalman Filter is found by taking the expectation of the measurement model with the zero mean measurement noise assumption: $\hat z_{k|k-1} = E[\bar z_k] = E[H_k \bar x_k + \bar w_k] = H_k E[\bar x_k] + E[\bar w_k] = H_k \hat x_{k|k-1}$ Finally, the residual vector is the difference between the observed measurement $z_k$ at time $k$ and the predicted measurement: $\eta_k = z_k - \hat z_{k|k-1} = H_k \hat x_{k|k-1}$ Kalman Filter Derivation We assume that the updated state estimate is a linear combination of the predicted state estimate and the observed measurement as: and we wish to find the weights (gains) $K^{\prime}_k$ and $K_k$ that produce an unbiased estimate with a minimum state estimate error covariance. Unbiased Estimate Assumption Applying the unbiased estimation error assumption, we have that: and with $E[\tilde x_{k|k}] = 0$ , this results in: which results in: Substituting this relationship between $K^{\prime}_k$ and $K_k$ back into the linear combination assumption, we have: where $K_k$ is called the Kalman Gain . Minimizing the State Estimate Error Covariance We start by computing the algebraic form of the updated covariance matrix: We then compute the trace of the error covariance $Tr[P_{k|k}]$ and minimize it by: (1) computing the matrix derivative with respect to the Kalman Gain $K_k$ and (2) setting this matrix equation to zero. The solution for the Kalman Gain $K_k$ is given by: $\frac{\partial Tr[P_{k|k}]}{\partial K_k}$ = 0 results in: Kalman Update From the above derivation, the Kalman Update equations are given as: where Kalman Extrapolation The extrapolation equations are simply a result of applying the system dynamics model and applying the definition of the error covariance matrix: Residual Covariance The residual covariance is given by applying the formal definition of the expectation of the quadratic form of the residual vector $\eta_k$ :
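$$ S_k \;=\; E[\eta_k \eta_k^{\prime}] \;=\; H_k P_{k|k-1} H_k^{\prime} + R_k, $$

where $R_k = \operatorname{cov}(\bar w_k)$ is the measurement noise covariance and $Q_{k-1} = \operatorname{cov}(\bar v_{k-1})$ below is the process noise covariance (the answer does not name these matrices explicitly, so the symbols $Q$ and $R$ are an assumption made here for readability). For reference, the standard closed forms that this derivation arrives at, under the linear-Gaussian assumptions stated above, are:

Prediction (extrapolation):
$$ \hat x_{k|k-1} = F_{k-1}\,\hat x_{k-1|k-1}, \qquad P_{k|k-1} = F_{k-1} P_{k-1|k-1} F_{k-1}^{\prime} + Q_{k-1} $$

Residual and Kalman gain:
$$ \eta_k = z_k - H_k \hat x_{k|k-1}, \qquad K_k = P_{k|k-1} H_k^{\prime} S_k^{-1} $$

Update (correction):
$$ \hat x_{k|k} = \hat x_{k|k-1} + K_k \eta_k, \qquad P_{k|k} = (I - K_k H_k) P_{k|k-1} $$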
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/86075', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/2046/']}
jdg_378006
stackexchange
llm_judgeable_groundtruth_similarity
690447
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I've done absolutely nothing to my BIND configuration, but looks like Debian Jessie upgrade has broke it. Maybe some new options got introduced to it, or the old things now work differently, but I cannot find what's going wrong. I got SERVFAIL in my /var/log/bind/bind.log all the time. I've checked my zones with named-checkzone and they are all 'OK'. I've disabled IPv6 system-wide. I recreated rndc key and even created /etc/rndc.conf . Nothing works. Here are some configs: /etc/bind/named.conf include "/etc/bind/named.conf.options";include "/etc/bind/named.conf.log";include "/etc/bind/named.conf.local";//include "/etc/bind/named.conf.default-zones";acl localhost_acl { 127.0.0.0/8;};acl internal_10_acl { 192.168.10.0/24;};acl internal_150_acl { 192.168.150.0/24;};acl vpn_acl { 192.168.200.2; 192.168.200.5;};key "rndc-key" {algorithm hmac-md5;secret "somesecretkey==";};controls {inet 127.0.0.1 port 953 allow { 127.0.0.1; } keys { "rndc-key"; };}; /etc/bind/named.conf.options options { directory "/var/cache/bind"; dnssec-validation auto; auth-nxdomain no; # conform to RFC1035 listen-on-v6 { none; }; listen-on { 127.0.0.1; 192.168.10.1; 192.168.150.1; 192.168.200.1; }; allow-transfer { none; }; max-recursion-queries 200;}; /etc/bind/named.conf.log logging { channel update_debug { file "/var/log/bind/update_debug.log" versions 3 size 100k; severity debug; print-severity yes; print-time yes; }; channel security_info { file "/var/log/bind/security_info.log" versions 1 size 100k; severity debug; print-severity yes; print-time yes; }; channel bind_log { file "/var/log/bind/bind.log" versions 3 size 1m; severity debug; print-category yes; print-severity yes; print-time yes; }; category default { bind_log; }; category lame-servers { security_info; }; category update { update_debug; }; category update-security { update_debug; }; category security { security_info; };}; /etc/bind/named.conf.local (this is a long one): // 1view "internal_10_view" { allow-query-on { 127.0.0.1; 192.168.10.1; }; allow-query { localhost_acl; internal_10_acl; }; match-clients { localhost_acl; internal_10_acl; }; zone "myhost.tld" { type master; file "/etc/bind/db.myhost.tld_10"; }; zone "168.192.in-addr.arpa" { type master; notify no; file "/etc/bind/db.192.168.10"; }; // formerly named.conf.default-zones zone "." 
{ type hint; file "/etc/bind/db.root"; }; zone "localhost" { type master; file "/etc/bind/db.local"; }; zone "127.in-addr.arpa" { type master; file "/etc/bind/db.127"; }; zone "0.in-addr.arpa" { type master; file "/etc/bind/db.0"; }; zone "255.in-addr.arpa" { type master; file "/etc/bind/db.255"; }; // formerly zones.rfc1918 zone "10.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "16.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "17.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "18.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "19.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "20.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "21.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "22.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "23.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "24.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "25.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "26.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "27.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "28.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "29.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "30.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "31.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; };};// 2view "internal_150_view" { allow-query-on { 192.168.150.1; }; allow-query { internal_150_acl; }; match-clients { internal_150_acl; }; zone "myhost.tld" { type master; file "/etc/bind/db.myhost.tld_150"; }; zone "168.192.in-addr.arpa" { type master; notify no; file "/etc/bind/db.192.168.150"; }; // formerly named.conf.default-zones zone "." 
{ type hint; file "/etc/bind/db.root"; }; zone "localhost" { type master; file "/etc/bind/db.local"; }; zone "127.in-addr.arpa" { type master; file "/etc/bind/db.127"; }; zone "0.in-addr.arpa" { type master; file "/etc/bind/db.0"; }; zone "255.in-addr.arpa" { type master; file "/etc/bind/db.255"; }; // formerly zones.rfc1918 zone "10.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "16.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "17.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "18.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "19.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "20.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "21.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "22.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "23.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "24.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "25.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "26.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "27.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "28.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "29.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "30.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "31.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; };};// 3view "vpn_view" { allow-query-on { 192.168.200.1; }; allow-query { vpn_acl; }; match-clients { vpn_acl; }; zone "myhost.tld" { type master; file "/etc/bind/db.myhost.tld_vpn"; }; // formerly named.conf.default-zones zone "." { type hint; file "/etc/bind/db.root"; }; zone "localhost" { type master; file "/etc/bind/db.local"; }; zone "127.in-addr.arpa" { type master; file "/etc/bind/db.127"; }; zone "0.in-addr.arpa" { type master; file "/etc/bind/db.0"; }; zone "255.in-addr.arpa" { type master; file "/etc/bind/db.255"; }; // formerly zones.rfc1918 zone "10.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "16.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "17.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "18.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "19.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "20.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "21.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "22.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "23.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "24.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "25.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "26.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "27.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "28.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "29.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "30.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; zone "32.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; }; // somedomain.tld zone "somedomain.tld" { type forward; forward first; forwarders { 192.168.34.110; 192.168.34.100; }; };}; /etc/rndc.conf key "rndc-key" { algorithm hmac-md5; secret "somesecretkey==";};options { default-key 
"rndc-key"; default-server 127.0.0.1; default-port 953;}; me@jessie:~$ sudo netstat -lnptu | grep "named\W*$" tcp 0 0 192.168.10.1:53 0.0.0.0:* LISTEN 1871/named tcp 0 0 127.0.0.1:53 0.0.0.0:* LISTEN 1871/named tcp 0 0 127.0.0.1:953 0.0.0.0:* LISTEN 1871/named udp 0 0 192.168.200.1:53 0.0.0.0:* 1871/named udp 0 0 192.168.10.1:53 0.0.0.0:* 1871/named udp 0 0 127.0.0.1:53 0.0.0.0:* 1871/named me@jessie:~$ ps aux | grep named bind 5843 0.0 1.0 297780 84412 ? Ssl 00:52 0:16 /usr/sbin/named -f -u bind -4 me@jessie:/etc/bind$ named -V BIND 9.9.5-9-Debian (Extended Support Version) <id:f9b8a50e> built by make with '--prefix=/usr' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc/bind' '--localstatedir=/var' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--enable-static' '--with-openssl=/usr' '--with-gssapi=/usr' '--with-gnu-ld' '--with-geoip=/usr' '--with-atf=no' '--enable-ipv6' '--enable-rrl' '--enable-filter-aaaa' 'CFLAGS=-fno-strict-aliasing -fno-delete-null-pointer-checks -DDIG_SIGCHASE -O2' compiled by GCC 4.9.2 using OpenSSL version: OpenSSL 1.0.1k 8 Jan 2015 using libxml2 version: 2.9.2 me@jessie's_client:~$ dig @192.168.10.1 launchpad.net ; <<>> DiG 9.9.5-9-Debian <<>> @192.168.10.1 launchpad.net; (1 server found);; global options: +cmd;; Got answer:;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 19673;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1;; OPT PSEUDOSECTION:; EDNS: version: 0, flags:; udp: 4096;; QUESTION SECTION:;launchpad.net. IN A;; Query time: 0 msec;; SERVER: 192.168.10.1#53(192.168.10.1);; WHEN: Thu May 07 23:29:38 MSK 2015;; MSG SIZE rcvd: 42 And finally some logs at /var/log/bind/bind.log 07-May-2015 22:52:49.287 resolver: debug 1: createfetch: _xmpp-server._tcp.pandion.im SRV07-May-2015 22:52:49.287 resolver: debug 1: createfetch: . 
NS07-May-2015 22:52:49.954 resolver: debug 1: createfetch: _xmpp-server._tcp.pandion.im SRV07-May-2015 22:52:50.353 resolver: debug 1: createfetch: launchpad.net A07-May-2015 22:52:51.288 resolver: debug 1: createfetch: _xmpp-server._tcp.pandion.im SRV07-May-2015 22:52:51.575 query-errors: debug 1: client 127.0.0.1#47208 (pandion.im): view internal_10_view: query failed (SERVFAIL) for pandion.im/IN/AAAA at query.c:700407-May-2015 22:52:53.138 query-errors: debug 1: client 127.0.0.1#55548 (_jabber._tcp.none.su): view internal_10_view: query failed (SERVFAIL) for _jabber._tcp.none.su/IN/SRV at query.c:700407-May-2015 22:52:53.955 resolver: debug 1: createfetch: _jabber._tcp.pandion.im SRV07-May-2015 22:52:54.622 resolver: debug 1: createfetch: _jabber._tcp.pandion.im SRV07-May-2015 22:52:55.353 query-errors: debug 1: client 192.168.10.2#37375 (launchpad.net): view internal_10_view: query failed (SERVFAIL) for launchpad.net/IN/A at query.c:700407-May-2015 22:52:55.354 resolver: debug 1: createfetch: launchpad.net A07-May-2015 22:52:55.956 resolver: debug 1: createfetch: _jabber._tcp.pandion.im SRV /var/log/bind/security_info.log 07-May-2015 00:45:26.055 warning: using built-in root key for view vpn_view07-May-2015 12:31:37.603 warning: using built-in root key for view internal_10_view07-May-2015 12:31:37.769 warning: using built-in root key for view internal_150_view07-May-2015 12:31:37.773 warning: using built-in root key for view vpn_view07-May-2015 12:31:44.859 warning: using built-in root key for view internal_10_view07-May-2015 12:31:44.865 warning: using built-in root key for view internal_150_view07-May-2015 12:31:44.871 warning: using built-in root key for view vpn_view07-May-2015 12:31:46.005 warning: using built-in root key for view internal_10_view07-May-2015 12:31:46.011 warning: using built-in root key for view internal_150_view07-May-2015 12:31:46.016 warning: using built-in root key for view vpn_view07-May-2015 12:31:47.108 warning: using built-in root key for view internal_10_view07-May-2015 12:31:47.114 warning: using built-in root key for view internal_150_view07-May-2015 12:31:47.121 warning: using built-in root key for view vpn_view07-May-2015 12:31:48.946 warning: using built-in root key for view internal_10_view07-May-2015 12:31:48.951 warning: using built-in root key for view internal_150_view07-May-2015 12:31:48.957 warning: using built-in root key for view vpn_view07-May-2015 14:07:39.729 warning: using built-in root key for view internal_10_view07-May-2015 14:07:39.737 warning: using built-in root key for view internal_150_view07-May-2015 14:07:39.743 warning: using built-in root key for view vpn_view07-May-2015 14:12:05.871 warning: using built-in root key for view internal_10_view07-May-2015 14:12:05.880 warning: using built-in root key for view internal_150_view07-May-2015 14:12:05.890 warning: using built-in root key for view vpn_view07-May-2015 14:27:07.630 warning: using built-in root key for view internal_10_view07-May-2015 14:27:07.638 warning: using built-in root key for view internal_150_view07-May-2015 14:27:07.644 warning: using built-in root key for view vpn_view Any suggestions what might be wrong? Now provide the response and nothing else.
This one is a real pain to troubleshoot if you aren't familiar with the new max-recursion-queries option or why it was added. CVE-2014-8500 was identified in late 2014 as impacting multiple nameserver products, including BIND. The exploit allows a malicious nameservers to craft a chain of referrals that will be followed infinitely, eventually leading to resource exhaustion. ISC's fix for this issue was to add an upper limit on how many levels of recursion the server is willing to perform on behalf of a single query. The ceiling is controlled by a new max-recursion-queries option that defaults to 75. As it turns out 75 levels of recursion is not very friendly to an empty nameserver cache -- which you will always have after a full process restart. There are many domains that will fail to resolve with this default due to how many levels of referrals end up being traversed between a requested record and . (root). The pandion.im. domain happens to be one of those, and it probably has something to do with the glueless delegation from the TLD. Here's an excerpt from dig +trace +additional pandion.im : im. 172800 IN NS ns4.ja.net.im. 172800 IN NS hoppy.iom.com.im. 172800 IN NS barney.advsys.co.uk.im. 172800 IN NS pebbles.iom.com.ns4.ja.net. 172800 IN A 193.62.157.66hoppy.iom.com. 172800 IN A 217.23.163.140barney.advsys.co.uk. 172800 IN A 217.23.160.50pebbles.iom.com. 172800 IN A 80.168.83.242ns4.ja.net. 172800 IN AAAA 2001:630:0:47::42;; Received 226 bytes from 199.7.83.42#53(199.7.83.42) in 29 mspandion.im. 259200 IN NS ed.ns.cloudflare.com.pandion.im. 259200 IN NS jill.ns.cloudflare.com.;; Received 81 bytes from 80.168.83.242#53(80.168.83.242) in 98 ms The nameservers for im. are delegating pandion.im. to Cloudflare's nameservers without providing IP address glue . On an empty cache, this means that the server has to initiate a separate referral traversal to obtain the IP address of those nameservers, and all of those referrals count against the maximum number of recursions for the original query . At that point the query will only succeed if the server already knows the IP addresses of those nameservers from other queries: # service named restart && sleep 1 && dig @localhost pandion.im | grep statusChecking named config:Stopping named: [ OK ]Starting named: [ OK ];; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 63173 Trying again, this time with attempts to look up those nameservers before pandion.im. : # service named restart && sleep 1 && dig @localhost ed.ns.cloudflare.com jill.ns.cloudflare.com pandion.im | grep statusChecking named config:Stopping named: [ OK ]Starting named: [ OK ];; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 26428;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 30491;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 22162 Long story short, this problem is very non-intuitive to identify, especially since it will seem to eventually "go away" over time if the process is left running. One of our partners has recommended a value of 200 based on real world usage scenarios. Start with 200, and season to taste if it's too high for your liking.
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/690447', 'https://serverfault.com', 'https://serverfault.com/users/169186/']}
jdg_378007
stackexchange
llm_judgeable_groundtruth_similarity
33891814
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a way to use Parceler with Kotlin data classes and constructor for serialization without using @ParcelProperty annotation for each field? If I try and use library like this: @Parceldata class Valve @ParcelConstructor constructor(val size: Int) I get Error:Parceler: No corresponding property found for constructor parameter arg0 . But if I add @ParcelProperty("size") it works just fine. Why is that? Update : There are other another way to use this library. I could just remove @ParcelConstructor annotation, but then I will get error Error:Parceler: No @ParcelConstructor annotated constructor and no default empty bean constructor found. I think (haven't tested it) I also could make all constructor parameters optional and add @JvmOverloads but that has a side effect that I have to check all properties of the class if they are null or not. Update 2 : This is what worked for me: @Parceldata class Valve(val size: Int? = null) In short generated Java class must have default empty constructor. One way to achieve that is to do as above - all variables should have default values. Now provide the response and nothing else.
According to the docs, Parceler by default works with public fields. But a usual Kotlin data class (as in your example) is rather a "traditional getter/setter bean", since every Kotlin property is represented by a private field and a getter/[setter]. TL; DR: I think this will work: @Parcel(Serialization.BEAN)data class Valve(val size: Int = 10) Note the default value, it allows Kotlin to automatically generate an additional empty constructor, which is required by the Java Been specification. Another way would be to mark the constructor that we already have: @Parcel(Serialization.BEAN)data class Driver @ParcelConstructor constructor(val name: String) The specific document: https://github.com/johncarl81/parceler#gettersetter-serialization
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/33891814', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/385219/']}
jdg_378008
stackexchange
llm_judgeable_groundtruth_similarity
7492529
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I simulate a mouse event causing the pointer to move 500 pixels to the left, then click using C++. How would I do something like this? Now provide the response and nothing else.
Here's some modified Win32 code I had lying around: #define WIN32_LEAN_AND_MEAN#define _WIN32_WINNT 0x0500#include <stdio.h>#include <stdlib.h>#include <time.h>#include <conio.h>#include <string.h>#include <windows.h>#define X 123#define Y 123#define SCREEN_WIDTH 1024#define SCREEN_HEIGHT 800void MouseSetup(INPUT *buffer){ buffer->type = INPUT_MOUSE; buffer->mi.dx = (0 * (0xFFFF / SCREEN_WIDTH)); buffer->mi.dy = (0 * (0xFFFF / SCREEN_HEIGHT)); buffer->mi.mouseData = 0; buffer->mi.dwFlags = MOUSEEVENTF_ABSOLUTE; buffer->mi.time = 0; buffer->mi.dwExtraInfo = 0;}void MouseMoveAbsolute(INPUT *buffer, int x, int y){ buffer->mi.dx = (x * (0xFFFF / SCREEN_WIDTH)); buffer->mi.dy = (y * (0xFFFF / SCREEN_HEIGHT)); buffer->mi.dwFlags = (MOUSEEVENTF_ABSOLUTE | MOUSEEVENTF_MOVE); SendInput(1, buffer, sizeof(INPUT));}void MouseClick(INPUT *buffer){ buffer->mi.dwFlags = (MOUSEEVENTF_ABSOLUTE | MOUSEEVENTF_LEFTDOWN); SendInput(1, buffer, sizeof(INPUT)); Sleep(10); buffer->mi.dwFlags = (MOUSEEVENTF_ABSOLUTE | MOUSEEVENTF_LEFTUP); SendInput(1, buffer, sizeof(INPUT));}int main(int argc, char *argv[]){ INPUT buffer[1]; MouseSetup(&buffer); MouseMoveAbsolute(&buffer, X, Y); MouseClick(&buffer); return 0;} You'll need to call MouseSetup() to each INPUT buffer before you use it. Resources MSDN - SendInput() MSDN - INPUT MSDN - MOUSEINPUT
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/7492529', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/933959/']}
jdg_378009
stackexchange
llm_judgeable_groundtruth_similarity
1766535
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: can anyone please explain how this works (asz + 7) & ~7; It rounds off asz to the next higher multiple of 8. It is easy to see that ~7 produces 11111000 (8bit representation) and hence switches off the last 3 bits ,thus any number which is produced is a multiple of 8. My question is how does adding asz to 7 before masking [edit] produce the next higher[end edit] multiple of 8 ? I tried writing it down on paper like : 1 + 7 = 8 = 1|000 (& ~7) -> 10002 + 7 = 9 = 1|001 (& ~7) -> 10003 + 7 = 10 = 1|010 (& ~7) -> 10004 + 7 = 11 = 1|011 (& ~7) -> 10005 + 7 = 12 = 1|100 (& ~7) -> 10006 + 7 = 13 = 1|101 (& ~7) -> 10007 + 7 = 14 = 1|110 (& ~7) -> 10008 + 7 = 15 = 1|111 (& ~7) -> 1000 A pattern clearly seems to emerge which has been exploited .Can anyone please help me it out ? Thank You all for the answers.It helped confirm what I was thinking. I continued the writing the pattern above and when I crossed 10 , i could clearly see that the nos are promoted to the next "block of 8" if I can say so. Thanks again. Now provide the response and nothing else.
Well, if you were trying to round down , you wouldn't need the addition. Just doing the masking step would clear out the bottom bits and you'd get rounded to the next lower multiple. If you want to round up , first you have to add enough to "get past" the next multiple of 8. Then the same masking step takes you back down to the multiple of 8. The reason you choose 7 is that it's the only number guaranteed to be "big enough" to get you from any number up past the next multiple of 8 without going up an extra multiple if your original number were already a multiple of 8. In general, to round up to a power of two: unsigned int roundTo(unsigned int value, unsigned int roundTo){ return (value + (roundTo - 1)) & ~(roundTo - 1);}
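To see the identity hold beyond the hand-worked table, here is a quick Python cross-check of the generalized function above (pure illustration; Python's integers make `x & ~7` behave the same way for non-negative `x`):

def round_up(value, multiple):
    # Only valid when `multiple` is a power of two, since the trick relies on masking
    assert multiple > 0 and multiple & (multiple - 1) == 0
    return (value + (multiple - 1)) & ~(multiple - 1)

for n in range(0, 65):
    expected = ((n + 7) // 8) * 8          # smallest multiple of 8 that is >= n
    assert round_up(n, 8) == expected, (n, round_up(n, 8), expected)

print(round_up(1, 8), round_up(8, 8), round_up(9, 8))   # -> 8 8 16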
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1766535', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/212721/']}
jdg_378010
stackexchange
llm_judgeable_groundtruth_similarity
27656
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Why Do Computers Use the Binary Number System (0,1)? Why don't they use Ternary Number System (0,1,2) or any other number system instead? Now provide the response and nothing else.
Since we're in Computer Science, I'll answer this way: they don't. What do we mean by a "computer?" There are many definitions, but in computer science as a science, the most common is the Turing machine. A turing machine is defined by several aspects: a state-set, a transition table, a halting set, and important for our discussion, an alphabet. This alphabet refers to the symbols which the machine can read as input, and that it can write to its tape. (You could have different input and tape alphabets, but let's not worry about that for now.) So, I can make a Turing machine with input alphabet $\{0,1\}$, or $\{a,b\}$, or $\{0,1,2\}$,or $\{\uparrow,\downarrow\}$. It doesn't matter. The fact is, I can use any alphabet I choose to encode data. So, I can say that $0001001$ is 9, or I can say that $\uparrow \uparrow \uparrow \downarrow \uparrow \uparrow \downarrow$ is 9. It doesn't matter, since they're just symbols we can distinguish. The trick is that binary is enough. Any sequence of bits can be interpreted as a number, so you can convert from binary to any other system and back. But, it turns out unary is enough too. You can encode 9 as 111111111. This isn't particularly efficient, but it has the same computational power. Things get even crazier when you look into alternate models of computation, like the Lambda calculus. Here, you can view numbers as functions. In fact, you can view everything as functions. Things are encoded not as bits, 0s and 1s, but as closed mathematical functions with no mutable state. See the Church numerals for how you can do numbers this way. The point is that, 0s and 1s is a completely hardware specific issue, and the choice is arbitrary. What encoding you're using isn't particularly relevant to computer science, outside of a few subfields like operating systems or networking.
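Since the Church numerals get only a one-line mention above, here is a small, hypothetical Python sketch of the idea (the names zero, succ, add and to_int are just illustrative): numbers become functions that apply another function a given number of times.

# A Church numeral n is a function: it takes f and x and applies f to x exactly n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def add(m, n):
    # Apply f n times, then m more times on top of that
    return lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Interpret the numeral by counting how many times a +1 function gets applied
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
print(to_int(three))                    # 3
print(to_int(add(three, succ(zero))))   # 4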
{}
{'log_upvote_score': 6, 'links': ['https://cs.stackexchange.com/questions/27656', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/19347/']}
jdg_378011
stackexchange
llm_judgeable_groundtruth_similarity
13425002
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like display for example this html code: <body> <p><b>Hello World</b></p> <p>This is a test of the URL <a href="http://www.example.com"> Example</a></p> <p><b>This text is bold</b></p> <p><em>This text is emphasized</em></p> <p><code>This is computer output</code></p> <p>This is<sub> subscript</sub> and <sup>superscript</sup></p></body> I want to display it on a Dialog by declaring html in resources strings.xml . How can I do it? Now provide the response and nothing else.
The best way to add html source code in strings.xml is to use <![CDATA[html source code]]> . Here is an example: <string name="html"><![CDATA[<p>Text</p>]]></string> Then you can display this html in TextView using: myTextView.setText(Html.fromHtml(getString(R.string.html))); If you have links in your html and you want them to be clickable, use this method: myTextView.setMovementMethod(LinkMovementMethod.getInstance());
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/13425002', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1792618/']}
jdg_378012
stackexchange
llm_judgeable_groundtruth_similarity
27070255
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: >>> pytz.country_timezones['US']America/New_YorkAmerica/DetroitAmerica/Kentucky/LouisvilleAmerica/Kentucky/MonticelloAmerica/Indiana/IndianapolisAmerica/Indiana/VincennesAmerica/Indiana/WinamacAmerica/Indiana/MarengoAmerica/Indiana/PetersburgAmerica/Indiana/VevayAmerica/ChicagoAmerica/Indiana/Tell_CityAmerica/Indiana/KnoxAmerica/MenomineeAmerica/North_Dakota/CenterAmerica/North_Dakota/New_SalemAmerica/North_Dakota/BeulahAmerica/DenverAmerica/BoiseAmerica/PhoenixAmerica/Los_AngelesAmerica/MetlakatlaAmerica/AnchorageAmerica/JuneauAmerica/SitkaAmerica/YakutatAmerica/NomeAmerica/AdakPacific/Honolulu From everything I can find online, there is absolutely no difference between New York and Detroit timezones, including DST. And this was just from checking the first two. They've gotta have a reason for having both, right? edit: expanding on my research, all the timezones listed which are UTC -5 have the same exact DST info, so it seems even more redundant now. I could remove all the below with one for Eastern Standard Time... Is there anything wrong with doing that? (UTC -5:00) New_York(UTC -5:00) Detroit(UTC -5:00) Kentucky/Louisville(UTC -5:00) Kentucky/Monticello(UTC -5:00) Indiana/Indianapolis(UTC -5:00) Indiana/Vincennes(UTC -5:00) Indiana/Winamac(UTC -5:00) Indiana/Marengo(UTC -5:00) Indiana/Petersburg(UTC -5:00) Indiana/Vevay Now provide the response and nothing else.
Pytz uses the IANA time zone database (also known as the Olson database). As @brenbarn mentions, time zones in the IANA database reflect "any national region where local clocks have all agreed since [January 1st] 1970", which includes not only the offset but DST changes as well. According to the database, "Most of Michigan observed DST from 1973 on, but was a bit late in 1975." So this means that in 1970 Michigan was on EST, while the east coast observed Daylight Savings Time. Additionally, DST did not start in Michigan in 1975 until April 27th, a few weeks after the rest of the Eastern time zone. If you download the most recent files, there should be various files (northamerica, southamerica, africa, etc.) which have human-readable descriptions of what the different time zones are, and explanations of any changes / reasons why they are not in a different time zone. To build on your edit, Indiana represents a special case within the US, due to its historical presence at the boundary line between two time zones. As of November 2014, Indiana is mostly in the Eastern Time Zone (save for a few counties near Chicago and near Evansville, which are in Central Time), and the entire state observes Daylight Savings Time. But the Central/Eastern boundary has shifted quite a few times since 1970. In fact, I bet a majority of those separate zone entries came about when a number of counties switched zones in 2006 and again in 2007, adopting Daylight Savings Time in the process.
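This divergence is easy to confirm with pytz itself. A small illustrative check follows — the 1970 date is picked because, as described above, Michigan skipped DST that year; treat the printed offsets as an assumption about the tz database shipped with your pytz version:

from datetime import datetime
import pytz

new_york = pytz.timezone("America/New_York")
detroit = pytz.timezone("America/Detroit")

# Mid-summer 1970: New York should be on EDT (UTC-4), Detroit still on EST (UTC-5)
naive = datetime(1970, 7, 1, 12, 0)
for tz in (new_york, detroit):
    local = tz.localize(naive)
    print(tz.zone, local.tzname(), local.utcoffset())

# For a modern date the two zones agree, which is why they look redundant today
naive = datetime(2014, 7, 1, 12, 0)
print(new_york.localize(naive).utcoffset() == detroit.localize(naive).utcoffset())  # True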
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27070255', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2917993/']}
jdg_378013
stackexchange
llm_judgeable_groundtruth_similarity
5591618
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I programmatically close a window in cocoa mac ? I have opened a second window/xib from the first window/xib using button click. I need to close the first window/xib programmatically on opening or clicking the button. How can I do that? Now provide the response and nothing else.
Apple has some useful sample code on Nib Loading . It doesn't directly address this question however; the following code does. @interface CloseWindowAppDelegate : NSObject <NSApplicationDelegate> { NSWindow *window; IBOutlet NSWindow * secondWindow; NSNib * secondNib;}@property (assign) IBOutlet NSWindow *window;- (IBAction)openSecondWindow:(id)sender;- (IBAction)closeSecondWindow:(id)sender;@end #import "CloseWindowAppDelegate.h"@implementation CloseWindowAppDelegate@synthesize window;- (IBAction)openSecondWindow:(id)sender { secondNib = [[NSNib alloc] initWithNibNamed:@"SecondWindow" bundle:nil]; [secondNib instantiateNibWithOwner:self topLevelObjects:nil]; [secondWindow makeKeyAndOrderFront:nil];}- (IBAction)closeSecondWindow:(id)sender { [secondWindow close]; [secondNib release];}@end
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5591618', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/647719/']}
jdg_378014
stackexchange
llm_judgeable_groundtruth_similarity
134569
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I work with ASP.NET MVC (and other web-based MVC implementations) quite often, but this is something I've never been sure of: Should controller and view communicate? Of course the controller should be choosing which view to use, but what I mean is should the controller be passing data to the view? In my opinion, if the view is expecting data from the controller, then they're effectively tied together as a (controller, view) pair. Instead, I usually have the view communicate with the model itself and be independent of any controller. Do I have the right approach, or is this a case of there being no one correct answer? Does the answer change when working in the web versus other environments? Does the answer change when you have the concept of a strongly-typed view (like in ASP.NET MVC) or not? Now provide the response and nothing else.
The controller prepares data which will then be passed to the view for rendering / displaying. It also accepts user input data through a publish-subscribe mechanism or similar. Check out the first diagram on Wikipedia or Martin Fowler's website for more information about MVC. Regarding "if the view is expecting data from the controller, then they're effectively tied together as a (controller, view) pair": while a view generally accepts data, in most MVC frameworks it does not depend on specific controllers. Exceptions are, for instance, the JavaServer Faces family. Generally speaking, frameworks like Rails, Django or Spring MVC allow you to decouple views from controllers by passing data (the context, commonly a map/dictionary/bag) to a view (where a view is an implementation of the template view pattern). As for "Does the answer change when you have the concept of a strongly-typed view (like in ASP.NET MVC) or not?": whether or not your programming language is strongly-typed has no influence on the way you are organizing your application.
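To make the "context map" idea concrete, here is a tiny framework-agnostic Python sketch (all names are made up for illustration; no specific framework API is implied). The controller only builds a plain dictionary; the view renders whichever dictionary it receives, so neither side is tied to a particular counterpart:

from string import Template

# "View": a template plus a render function that accepts any context mapping
PROFILE_TEMPLATE = Template("Hello $name, you have $unread unread messages.")

def render(template, context):
    return template.substitute(context)

# "Controller": gathers data (normally from the model layer) and hands it off
def profile_controller(user_id):
    user = {"name": "Ada", "unread": 3}   # stand-in for a model lookup by user_id
    context = {"name": user["name"], "unread": user["unread"]}
    return render(PROFILE_TEMPLATE, context)

print(profile_controller(42))
# Hello Ada, you have 3 unread messages.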
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/134569', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/31673/']}
jdg_378015
stackexchange
llm_judgeable_groundtruth_similarity
44529617
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I recently started programming in C again after having programmed in C++ for a while, and my understanding of pointers is a bit rusty. I would like to ask why this code is not causing any errors: char* a = NULL;{ char* b = "stackoverflow"; a = b;}puts(a); I thought that because b went out of scope, a should reference a non-existing memory location, and thus their would be a runtime error when calling printf . I ran this code in MSVC about 20 times, and no errors were shown. Now provide the response and nothing else.
Inside the scope where b is defined, it is assigned the address of a string literal. These literals typically live in a read-only section of memory as opposed to the stack. When you do a=b you assign the value of b to a , i.e. a now contains the address of a string literal. This address is still valid after b goes out of scope. If you had taken the address of b and then attempted to dereference that address, then you would invoke undefined behavior . So your code is valid and does not invoke undefined behavior, but the following does: int *a = NULL;{ int b = 6; a = &b;}printf("b=%d\n", *a); Another, more subtle example: char *a = NULL;{ char b[] = "stackoverflow"; a = b;}printf(a); The difference between this example and yours is that b , which is an array, decays to a pointer to the first element when assigned to a . So in this case a contains the address of a local variable which then goes out of scope. EDIT: As a side note, it's bad practice to pass a variable as the first argument of printf , as that can lead to a format string vulnerability . Better to use a string constant as follows: printf("%s", a); Or more simply: puts(a);
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/44529617', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8081700/']}
jdg_378016
stackexchange
llm_judgeable_groundtruth_similarity
3114789
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $f:X\to X $ be continuous. Show that if $X=[0,1] $ , $f $ has a fixed point(i.e. there exists $x$ such that $f (x)=x$ ). What happens if $X $ equals $[0,1) $ or $(0,1) $ ? First part of the question is an immediate consequence of intermediate value theorem(for a proof, see here ). I think $f (x)=x^2$ is a counterexample when $X=(0,1) $ , since $x^2\lt x $ for all $x\in (0,1) $ . But to be honest, I don't understand what causes breakdown of the fixed point theorem on $(0,1)$ , since IVT only requires connectedness of domain. Is this related to the non-compactness of $(0,1)$ ? Also I can't think of any counterexample for the case $X=[0,1)$ (assuming fixed point is somehow related to compactness). Any help is appreciated. Thank you. Now provide the response and nothing else.
The IVT is not the only ingredient here. The way the theorem works is by setting up this square: where the line in the middle is $y = x$ . A function from $[0, 1]$ to $[0, 1]$ that intersects this line will have a fixed point at the point of intersection. The IVT kicks in when we have a function whose graph enters the top triangle and the bottom triangle at various points, e.g. The fact is, by the IVT, the function has to cut the line somewhere, i.e. it must have a fixed point. But, this makes an assumption! The function may only exist in one triangle or the other, but not in both. That is, why can we not have $f(x) > x$ for all $x$ or $f(x) < x$ for all $x$ ? The picture above illustrates it. Due to the function having a full domain of $[0, 1]$ , there's a squeeze happening. The green function is about as close as we can have to a function satisfying $f(x) > x$ . Similarly, the red function attempts to have $f(x) < x$ . But, both are pinched towards the diagonal line. The green function must have $f(1) = 1$ , and the red function must have $f(0) = 0$ . This illustrates the necessity of defining all the way to $0$ and $1$ . Removing either of these points means that the functions are not squeezed to a fixed point (we'd only ensure that $f(x)$ and $x$ become arbitrarily close).
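The same squeeze can be written down without the picture. A short sketch of the standard argument, and of exactly which piece breaks on the half-open and open intervals: define $g(x) = f(x) - x$ on $[0,1]$; it is continuous. Because $f(0) \in [0,1]$ we get $g(0) = f(0) - 0 \ge 0$, and because $f(1) \in [0,1]$ we get $g(1) = f(1) - 1 \le 0$. By the IVT there is some $c \in [0,1]$ with $g(c) = 0$, i.e. $f(c) = c$. On $[0,1)$ the endpoint $1$ is gone, so the inequality $g(1) \le 0$ is no longer available, and indeed $f(x) = \frac{1+x}{2}$ maps $[0,1)$ into $[\frac{1}{2}, 1) \subset [0,1)$ with $f(x) > x$ for every $x$, so it has no fixed point. On $(0,1)$ both endpoint inequalities fail, and $f(x) = x^2$ from the question already serves as a counterexample.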
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/3114789', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/588038/']}
jdg_378017
stackexchange
llm_judgeable_groundtruth_similarity
97452
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: What is the source, or "status", of the rule that multiplication is performed before addition? Is it a definitive property of $\mathbb R$, a property that can be derived directly from the definition of $\mathbb R$, or simply a (universal but arbitrary) notational convention? For example, in Spivak's Calculus (4E, p. 7) the distributive law ("P9") is stated as$$a \cdot (b+c) = a\cdot b + a\cdot c \text{ ,}$$which is simply understood, without further justification, to mean$$a \cdot (b+c) = (a\cdot b) + (a\cdot c)\text{ .}$$Should this interpretation have been derivable from the earlier-stated properties of $\mathbb R$, or is it simply being assumed as shared knowledge? Now provide the response and nothing else.
Such operator precedence rules are merely syntactic conventions that are adopted for convenience. They improve the conciseness of commonly denoted expressions by allowing one to omit some parentheses, while preserving unique readability (parsing) of expressions. For example, such conventions enable the standard concise notation for ubiquitous polynomial expressions. For another example see my answer here. In other words, such syntactic conventions are simply optimizations of the language used to denote certain expressions. As conventions, they have little if any semantic significance. [Comment migrated to an answer per request]
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/97452', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/12400/']}
jdg_378018
stackexchange
llm_judgeable_groundtruth_similarity
52597778
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I found a similar problem here , but there doesn't seem to be an answer. I'm trying to test an ES6 class using Jest, like so: // src/myclass.jsexport default class MyClass { constructor(options) { // currently this is empty while I debug this problem }} and the test: // test/myclass.test.jsimport { MyClass } from '../src/myclass.js';describe("Test Constructor", () => { test("doesn't throw error when constructed", async () => { expect(() => { const testMyClass = new MyClass(); }).not.toThrowError(); }}); When I run the test, Jest throws an error saying: TypeError: _myClass.MyClass is not a constructor My best guess is that this is an issue with the babel config, but I can't seem to figure it out. If I change MyClass to a function instead of a class and drop the export/import (i.e., the pre-class way of doing things) then it works as expected. Here's my config in package.json: "devDependencies": { "@babel/core": "^7.1.2", "@babel/preset-env": "^7.1.0", "babel-core": "^7.0.0-bridge.0", "gulp": "^3.9.1", "gulp-babel": "^8.0.0", "gulp-jest": "^4.0.2", "gulp-rename": "^1.4.0", "gulp-uglify": "^3.0.1", "jest": "^23.6.0", "jest-cli": "^23.6.0", "pump": "^3.0.0", "regenerator-runtime": "^0.12.1" }, "babel": { "presets": [ "@babel/preset-env" ] }, "jest": { "testPathIgnorePatterns": [ "<rootDir>/node_modules/", "<rootDir>/test/._*.test.js" ], "testEnvironment": "jsdom", "setupFiles": [ "<rootDir>/src/myclass.es6.js" ] } Now provide the response and nothing else.
Your import and export syntaxes do not match. You'll need to change one or the other for this to work. If you want to use the default export, eg: export default class MyClass { ... } Then the corresponding import is: import MyClass from '../src/myclass.js' Or if you'd like to continue using the same import syntax, then remove the 'default' when exporting: export class MyClass { ... } Then: import { MyClass } from '../src/myclass.js'
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52597778', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3399856/']}
jdg_378019
stackexchange
llm_judgeable_groundtruth_similarity
12394184
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a way to trigger a method in a Spider class just before it terminates? I can terminate the spider myself, like this: class MySpider(CrawlSpider): #Config stuff goes here... def quit(self): #Do some stuff... raise CloseSpider('MySpider is quitting now.') def my_parser(self, response): if termination_condition: self.quit() #Parsing stuff goes here... But I can't find any information on how to determine when the spider is about to quit naturally. Now provide the response and nothing else.
It looks like you can register a signal listener through dispatcher. I would try something like:

from scrapy import signals
from scrapy.xlib.pydispatch import dispatcher

class MySpider(CrawlSpider):

    def __init__(self):
        dispatcher.connect(self.spider_closed, signals.spider_closed)

    def spider_closed(self, spider):
        # second param is instance of spider about to be closed.
        pass

In newer versions of Scrapy, scrapy.xlib.pydispatch is deprecated. Instead, you can use from pydispatch import dispatcher.
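On current Scrapy releases, the documented way to wire this up is through the spider's from_crawler classmethod instead of the dispatcher module. A minimal sketch follows (the spider name and the log message are placeholders, and the crawl/parse logic is omitted):

from scrapy import Spider, signals

class MySpider(Spider):
    name = "myspider"

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super().from_crawler(crawler, *args, **kwargs)
        # Register the callback for the spider_closed signal
        crawler.signals.connect(spider.spider_closed, signal=signals.spider_closed)
        return spider

    def spider_closed(self, spider):
        # Runs once, just before the spider terminates (naturally or via CloseSpider)
        spider.logger.info("Spider closed: %s", spider.name)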
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/12394184', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/660664/']}
jdg_378020
stackexchange
llm_judgeable_groundtruth_similarity
1692129
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using <!--[if IE 8]><![endif]--> for targeting IE8, but there's some JS that I want to load for all browsers EXCEPT IE8, what conditional comment should I use? Edit: I wonder if this would work: <!--[if lte IE 8]><![endif]--> Thanks Now provide the response and nothing else.
I can think of a trick. Set a variable inside the IE conditional tag and include your JS code if that variable isn't set. <script> var ie8 = false;</script><!--[if IE 8]> <script> ie8 = true; </script><![endif]--><script> if (ie8 == false) { // any code here will not be executed by IE 8 alert("Not IE 8!"); }</script>
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1692129', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/172637/']}
jdg_378021
stackexchange
llm_judgeable_groundtruth_similarity
47662783
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've got a data frame (df) with nine categorical variables, the first one being called student and then the names of eight school subjects. I'd like to create a new variable called overall, summing up what subjects the student studied (dfgoal). The problem's that what I've got doesn't work. Also I'm not sure how to best skip the first column (student). Use a list of the variables I do want to use (the eight subjects)? Any help would be much appreciated. starting point (df): df <- data.frame( student = c(1, 2, 3, 4, 5), maths = c("y", "n", "n", "n", "n"), English = c("n", "y", "n", "n", "n"), geography = c("y", "n", "n", "n", "n"), history = c("n", "n", "n", "n", "n"), art = c("n", "n", "n", "n", "n"), Spanish = c("n", "n", "n", "n", "n"), physics = c("n", "n", "n", "n", "y"), chemistry = c("n", "n", "n", "n", "y"), stringsAsFactors = TRUE ) desired outcome (dfgoal): dfgoal <-data.frame(student = c(1, 2, 3, 4, 5),maths = c("y", "n", "n", "n", "n"),English = c("n", "y", "n", "n", "n"),geography = c("y", "n", "n", "n", "n"),history = c("n", "n", "n", "n", "n"),art = c("n", "n", "n", "n", "n"),Spanish = c("n", "n", "n", "n", "n"),physics = c("n", "n", "n", "n", "y"),chemistry = c("n", "n", "n", "n", "y"),overall = c("maths, geography,", "English", "n", "n", "physics,chemistry,"),stringsAsFactors = TRUE ) current code: sapply(df, function(x) df$overall <- ifelse(df$x == y, paste0(names(df$x), ","), "n")) Now provide the response and nothing else.
When you use square brackets, you're binding to an expression, so your suggested solution doesn't work, as Angular expects this to be executable JS: [style]="width: {{ item.percent_position }}%;" In contrast, the following should work perfectly fine: [style.width]="item.percent_position + '%' " If you have multiple styles to bind to, you can use ngStyle to bind to an object: [ngStyle]="{ 'width': item.percent_position + '%' }" In any case: if you use square brackets, make sure what is bound to it is an executable expression!
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47662783', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8086501/']}
jdg_378022
stackexchange
llm_judgeable_groundtruth_similarity
32421
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: How many web servers does StackOverflow/ServerFault have? If the answer is 'more than one', then is does it achieve Session Stickiness while DNS polling? Now provide the response and nothing else.
Large websites may be "load balanced" across multiple machines. In many load balanced setups, a user may hit any of the backend machines during a session. Because of this, several methods exist to allow many machines to share user sessions. The method chosen will depend on the style of load balancing employed, as well as the availability/capacity of backend storage: Session information stored in cookies only : Session information (not just a session identifier) is stored in a user's cookie. For example, the user's cookie might contain the contents of their shopping basket. To prevent users from tampering with the session data, an HMAC may be provided along with the cookie. This method is probably least suitable for most applications: No backend storage is required The user does not need to hit the same machine each time, so DNS load balancing can be employed There is no latency associated with retrieving the session information from a database machine (as it is provided with the HTTP request). Useful if your site is load-balanced by machines on different continents. The amount of data that can be stored in the session is limited (by the 4K cookie size limit) Encryption has to be employed if a user should not be able to see the contents of their session HMAC (or similar) has to be employed to prevent user tampering of session data Since the session data is not stored server-side, it's more difficult for developers to debug Load balancer always directs the user to the same machine : Many load balancers may set their session cookie, indicating which backend machine a user is making requests from, and direct them to that machine in the future. Because the user is always directed to the same machine, session sharing between multiple machines is not required. This may be good in some situations: An existing application's session handling may not need to be changed to become multiple machines aware No shared database system (or similar) is required for storing sessions, possibly increasing reliability, but at the cost of complexity A backend machine going down will take down any user sessions started on it, with it. Taking machines out of service is more difficult. Users with sessions on a machine to be taken down for maintenance should be allowed to complete their tasks before the machine is turned off. To support this, web load balancers may have a feature to "drain" requests to a certain backend machine. Shared backend database or key/value store : Session information is stored in a backend database, which all of the web servers have access to query and update. The user's browser stores a cookie containing an identifier (such as the session ID), pointing to the session information. This is probably the cleanest method of the three: The user never needs to be exposed to the stored session information. The user does not need to hit the same machine each time, so DNS load balancing can be employed One disadvantage is the bottleneck that can be placed on whichever backend storage system is employed. Session information may be expired and backed up consistently. Overall, most dynamic web applications perform several database queries or key/value store requests, so the database or key/value store is the logical storage location of session data.
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/32421', 'https://serverfault.com', 'https://serverfault.com/users/-1/']}
jdg_378023
stackexchange
llm_judgeable_groundtruth_similarity
10422503
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a virus that has infected thousands of files on one of my client's server. Fortunately, I have dealt with a lot of other malware on this guy's server and this one looks easy to do simple regex on (he put all his websites on the same account :( but I'm working with him to resolve that). Basically though, unlike most malware I have seen where it injects php BEFORE the closing ?> of the GOOD code (making it very hard to determine whats good code/bad code), this current malware ALWAYS adds a new <?php ... malware ... ?> . So basically, say there's good code here: <?phprequire('./wp-blog-header.php'); ?> Instead of adding some kind of base64_decode eval immediately after the require statement but before the ?> (which can make removal difficult when the page happens to end in a conditional/complex statement), this will always add the following code with a NEW <?php ... ?> like so: <?phprequire('./wp-blog-header.php'); ?><?php ... malware ...?> I don't want to put any malicious code up here but, this is how the malicious code always starts: <?php @error_reporting(0); if (!isset($eva1fYlbakBcVSir)) {$eva1fYlbakBcVSir = "tons and tons of characters";$eva1tYlbakBcVSir = "\x6335\1443\3x6f\1534\x70\170\x65";$SNIPSNIPSNIPSNIP;} ?> I'd like to search every file for <?php @error_reporting(0); if (!isset and if it's the last PHP statement on the page, then delete everything within the Now provide the response and nothing else.
Here is how you clean the entire project with pure php. In no respect shall I incur any liability for any damages, including, but limited to, direct, indirect, special, or consequential damages arising out of, resulting from, or any way connected to the use of the code provided, whether or not based upon warranty, contract, tort, or otherwise; whether or not injury was sustained by persons or property or otherwise; and whether or not loss was sustained from, or arose out of, the results of, the use if this code. ;p <?php //Enter it as it is and escape any single quotes$find='<?php @error_reporting(0); if (!isset($eva1fYlbakBcVSir)) {$eva1fYlbakBcVSir =\'\';?>';echo findString('./',$find);function findString($path,$find){ $return=''; ob_start(); if ($handle = opendir($path)) { while (false !== ($file = readdir($handle))) { if ($file != "." && $file != "..") { if(is_dir($path.'/'.$file)){ $sub=findString($path.'/'.$file,$find); if(isset($sub)){ echo $sub.PHP_EOL; } }else{ $ext=substr(strtolower($file),-3); if($ext=='php'){ $filesource=file_get_contents($path.'/'.$file); $pos = strpos($filesource, $find); if ($pos === false) { continue; } else { //The cleaning bit echo "The string '".htmlentities($find)."' was found in the file '$path/$file and exists at position $pos and has been removed from the source file.<br />"; $clean_source = str_replace($find,'',$filesource); file_put_contents($path.'/'.$file,$clean_source); } }else{ continue; } } } } closedir($handle); } $return = ob_get_contents(); ob_end_clean(); return $return;}?> Good Luck. UPDATE (With Regex): <?php error_reporting(E_ALL);$find='<\?php @error_reporting\(0\); if \(!isset\((.*?)\?>';echo findString('./',$find);function findString($path,$find){ $return=''; ob_start(); if ($handle = opendir($path)) { while (false !== ($file = readdir($handle))) { if ($file != "." && $file != "..") { if(is_dir($path.'/'.$file)){ $sub=findString($path.'/'.$file,$find); if(isset($sub)){ echo $sub.PHP_EOL; } }else{ $ext=substr(strtolower($file),-3); if($ext=='php'){ $filesource=file_get_contents($path.'/'.$file); //The cleaning bit echo "The string '".htmlentities($find)."' was found in the file '$path/$file and has been removed from the source file.<br />"; $clean_source = preg_replace('#'.$find.'#','',$filesource); // $clean_source = str_replace($find,'',$filesource); file_put_contents($path.'/'.$file,$clean_source); }else{ continue; } } } } closedir($handle); } $return = ob_get_contents(); ob_end_clean(); return $return;}?>
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10422503', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/794481/']}
jdg_378024
stackexchange
llm_judgeable_groundtruth_similarity
69273262
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am looking for the fastest / most space efficient way of reducing a 64-bit register to a 32-bit register, only retaining the zero / non-zero status of the 64-bit register. My current best idea that works for all values is popcntq (1c tput, 3c latency on mainstream Intel, 5 byte code size): // rax is either zero or non-zeropopcntq %rax, %rax// eax will be zero if rax was zero, otherwise it will be non-zero NOTE: It will not work to just use the 32-bit eax directly: if rax was say 2^61 the zero / non-zero status of eax is not the same as of rax Is there some better clever method? Now provide the response and nothing else.
One option is

neg rax      ; 48 F7 D8
sbb eax, eax ; 19 C0

Remember that neg sets flags like a subtract from zero, so it sets the carry flag iff rax is nonzero. And sbb of a register from itself yields 0 or -1 according to whether the carry was clear or set (thanks @prl for suggesting this in a comment). It's still 5 bytes, and 2 uops instead of 1. But if my math is right, on Skylake you get 2 cycles latency instead of 3, and throughput of 2 per cycle instead of 1.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/69273262', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11322131/']}
jdg_378025
stackexchange
llm_judgeable_groundtruth_similarity
18852450
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm quite new to Javacript Unit testing. One thing keep bothering me. When testing javascript, we often need to do the DOM manipulation. It looks like I am unit testing a method/function in a Controller/Component, but I still need to depend on the HTML elements in my templates. Once the id(or attributes used to be selectors in my test cases) is changed, my test cases also need to be CHANGED! Wouldn't this violate the purpose of unit testing? Now provide the response and nothing else.
One of the toughest parts of JavaScript unit testing is not the testing, it's learning how to architect your code so that it is testable. You need to structure your code with a clear separation of testable logic and DOM manipulation. My rule of thumb is this: If you are testing anything that is dependent on the DOM structure, then you are doing it wrong. In summary: try to test data manipulations and logical operations only.
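As a small illustration of that separation (my own example, not from the original answer), keep the calculation in a plain function and confine DOM access to a thin wrapper; the element ids and class names here are made up:

// Pure, testable logic: no DOM, no selectors, just data in and data out.
export function cartTotal(prices: number[], taxRate: number): number {
  const subtotal = prices.reduce((sum, p) => sum + p, 0);
  return Math.round(subtotal * (1 + taxRate) * 100) / 100;
}

// Thin DOM glue: reads inputs, calls the logic, writes the result.
// If an id or class changes, only this wrapper changes; the unit tests do not.
export function renderCartTotal(): void {
  const prices = Array.from(document.querySelectorAll<HTMLElement>(".price"))
    .map(el => parseFloat(el.textContent ?? "0"));
  const out = document.getElementById("total");
  if (out) out.textContent = cartTotal(prices, 0.2).toFixed(2);
}

// A unit test then only needs the pure function, e.g.:
//   expect(cartTotal([10, 5], 0.2)).toBe(18);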
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18852450', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1084336/']}
jdg_378026
stackexchange
llm_judgeable_groundtruth_similarity
42406520
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to store a FormData object of a form and then use that FormData object later to repopulate the stored FormData values back into the form? For example: HTML <form id="test_form"> <input type="text" name="last_name" placeholder="Last Name"/><br/> <input type="text" name="first_name" placeholder="First Name"/><br/> <input type="date" name="date_of_birth" placeholder="Date of Birth"/><br/></form> Javascript var f = document.getElementById('test_form');var data = FormData(f);...// mythical function to translate FormData back into form valuesf.values(data); Now provide the response and nothing else.
Using this and this , here is how I serialize and deserialize form data: function formSerialize(form) { const data = new FormData(form); //https://stackoverflow.com/a/44033425/1869660 return new URLSearchParams(data).toString();}function formDeserialize(form, data) { const entries = (new URLSearchParams(data)).entries(); for(const [key, val] of entries) { //http://javascript-coder.com/javascript-form/javascript-form-value.phtml const input = form.elements[key]; switch(input.type) { case 'checkbox': input.checked = !!val; break; default: input.value = val; break; } }} Warning: formDeserialize() won't clear fields that are not included in the stored data, e.g. empty radio groups or checkboxes. Also, not tested with all <input> types.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42406520', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/263284/']}
jdg_378027
stackexchange
llm_judgeable_groundtruth_similarity
161048
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This question comes from the proof of Neyman's factorization theorem in Robert V. Hogg, Joseph W. McKean, Allen T. Craig, "Introduction to Mathematical Statistics", 6th edition, pp 376-377. In the proof, a one-to-one transformation is used which is indicated by the red line. But I could not understand why such a one-to-one transformation surely exists. Can you tell me? Thank you for any help! Now provide the response and nothing else.
There may be a typo in the question, since the thing inside the logarithm is $\le 0$. So we solve a different problem, finding the derivative of $\ln(|x-\sqrt{x^2+a^2}|)$, or more simply of $\ln(\sqrt{x^2+a^2}-x)$. Differentiate, using the Chain Rule (twice). We get$$\frac{\frac{x}{\sqrt{x^2+a^2}}-1}{\sqrt{a^2+x^2}-x}.$$ Bring the top to a common denominator, and simplify. We get$$-\frac{1}{\sqrt{x^2+a^2}}.$$ Another variant that makes sense in the reals is finding the derivative of $\ln(x-\sqrt{x^2-a^2})$. The same method yields $-\frac{1}{\sqrt{x^2-a^2}}$.
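For completeness, the common-denominator step that the answer passes over quickly can be written out (this intermediate line is my own filling-in): $$\frac{\frac{x}{\sqrt{x^2+a^2}}-1}{\sqrt{x^2+a^2}-x}=\frac{\frac{x-\sqrt{x^2+a^2}}{\sqrt{x^2+a^2}}}{\sqrt{x^2+a^2}-x}=\frac{-\left(\sqrt{x^2+a^2}-x\right)}{\sqrt{x^2+a^2}\left(\sqrt{x^2+a^2}-x\right)}=-\frac{1}{\sqrt{x^2+a^2}}.$$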
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/161048', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/29732/']}
jdg_378028
stackexchange
llm_judgeable_groundtruth_similarity
46647152
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have written this code for solving Euler Project No 12, but my code runs slow. How can I make it running faster? I have read some suggests about finding divisors, but I do not understand the logic of using sqrt for n . Can you explain the logic of it? Here's my code: def sumdiv(n): l=[d for d in range(1,int(n/2)+1) if n%d==0] # used n/2 to short loop return len(l)+1 # added n itselftrnums=[1,3]while sumdiv(trnums[-1])<=501: k=trnums[-1]-trnums[-2]+1 trnums.append(trnums[-1]+k)print(trnums[-2:]) Now provide the response and nothing else.
You were close to it, however it's better to store the x,y and scale separately and calculate the transforms based on those values. It makes things alot easier + saves resources (no need to lookup the dom properties over and over), I've put the code into a nice module: function ScrollZoom(container,max_scale,factor){ var target = container.children().first() var size = {w:target.width(),h:target.height()} var pos = {x:0,y:0} var zoom_target = {x:0,y:0} var zoom_point = {x:0,y:0} var scale = 1 target.css('transform-origin','0 0') target.on("mousewheel DOMMouseScroll",scrolled) function scrolled(e){ var offset = container.offset() zoom_point.x = e.pageX - offset.left zoom_point.y = e.pageY - offset.top e.preventDefault(); var delta = e.delta || e.originalEvent.wheelDelta; if (delta === undefined) { //we are on firefox delta = e.originalEvent.detail; } delta = Math.max(-1,Math.min(1,delta)) // cap the delta to [-1,1] for cross browser consistency // determine the point on where the slide is zoomed in zoom_target.x = (zoom_point.x - pos.x)/scale zoom_target.y = (zoom_point.y - pos.y)/scale // apply zoom scale += delta*factor * scale scale = Math.max(1,Math.min(max_scale,scale)) // calculate x and y based on zoom pos.x = -zoom_target.x * scale + zoom_point.x pos.y = -zoom_target.y * scale + zoom_point.y // Make sure the slide stays in its container area when zooming out if(pos.x>0) pos.x = 0 if(pos.x+size.w*scale<size.w) pos.x = -size.w*(scale-1) if(pos.y>0) pos.y = 0 if(pos.y+size.h*scale<size.h) pos.y = -size.h*(scale-1) update() } function update(){ target.css('transform','translate('+(pos.x)+'px,'+(pos.y)+'px) scale('+scale+','+scale+')') }} Use it by calling new ScrollZoom($('#container'),4,0.5) The parameters are: container: The wrapper of the element to be zoomed. The script willlook for the first child of the container and apply the transformsto it. max_scale: The maximum scale (4 = 400% zoom) factor: The zoom-speed (1 = +100% zoom per mouse wheel tick) JSFiddle here
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46647152', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8633876/']}
jdg_378029
stackexchange
llm_judgeable_groundtruth_similarity
35786002
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm having quite a difficult time trying to create the UI for a WPF Window. I'm trying to display (dynamically) a bunch of Movie Posters with the name of the movie directly under the image. ItemsSource is assigned to a list of Images via foreach iteration. The Image files themselves may be different sizes, but as shown below I will be setting a uniform size. Basically, my goal is for it to look something like this: So far, My code only displays a window with one large horizontal row(?) with the image in the center and no label. Here's my XAML code: <Window x:Name="TVWindow" x:Class="PACS_Pre_Alpha.TV" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="TV" Height="746" Width="1000" ResizeMode="NoResize"><Grid x:Name="TVGrid"> <Grid.RowDefinitions> <RowDefinition Height="Auto"/> <RowDefinition /> </Grid.RowDefinitions> <ListView x:Name="TvBox" HorizontalAlignment="Left" Height="648" VerticalAlignment="Top" Width="994" Grid.Row="5" Grid.Column="5"> <ItemsControl.ItemsPanel> <ItemsPanelTemplate> <UniformGrid Columns="5" /> </ItemsPanelTemplate> </ItemsControl.ItemsPanel> <ListView.ItemTemplate> <DataTemplate> <StackPanel Orientation="Vertical" VerticalAlignment="Stretch"> <Image Source="{Binding ImageData}" HorizontalAlignment="Center" VerticalAlignment="Top" /> <TextBlock Text="{Binding Title}" HorizontalAlignment="Center" VerticalAlignment="Bottom" /> </StackPanel> </DataTemplate> </ListView.ItemTemplate> </ListView></Grid> My movies are added with this C# code: foreach (string tvf in ContentFiles){ string ContentTitle = System.IO.Path.GetFileNameWithoutExtension(tvf); MovieData cnt = new MovieData(); cnt.ImageData = LoadImage(ActualImage); cnt.Title = ContentTitle; ContentDataList.Add(cnt);} TvBox.ItemsSource = ContentDataList; Edit: I have changed my XAML Markup as @MarkFeldman suggested, but now nothing appears.Edit: It currently looks like this: Now provide the response and nothing else.
You're going to provide more info about the data itself i.e. what's it's format, how are you assigning it to the ItemsSource etc. For one thing you're not setting the ItemTemplate, so you might want to look at that first. For example if you have a class containing your movie data that looks like this: public class MovieData{ private string _Title; public string Title { get { return this._Title; } set { this._Title = value; } } private BitmapImage _ImageData; public BitmapImage ImageData { get { return this._ImageData; } set { this._ImageData = value; } }} Then you would display it with something like this: <ListView.ItemTemplate> <DataTemplate> <StackPanel Orientation="Vertical" VerticalAlignment="Stretch"> <Image Source="{Binding ImageData}" HorizontalAlignment="Center" VerticalAlignment="Top"/> <TextBlock Text="{Binding Title}" HorizontalAlignment="Center" VerticalAlignment="Bottom"/> </StackPanel> </DataTemplate></ListView.ItemTemplate> UPDATE: Sorry, I thought it was obvious that you still needed to use a UniformGrid. Here is what your full XAML should look like: <ListView x:Name="TvBox" HorizontalAlignment="Stretch" VerticalAlignment="Top"> <ItemsControl.ItemsPanel> <ItemsPanelTemplate> <UniformGrid Columns="5" HorizontalAlignment="Stretch"/> </ItemsPanelTemplate> </ItemsControl.ItemsPanel> <ListView.ItemTemplate> <DataTemplate> <StackPanel Orientation="Vertical" VerticalAlignment="Stretch" HorizontalAlignment="Stretch"> <Image Source="{Binding ImageData}" HorizontalAlignment="Stretch" VerticalAlignment="Top" Stretch="UniformToFill" /> <TextBlock Text="{Binding Title}" HorizontalAlignment="Stretch" VerticalAlignment="Bottom" /> </StackPanel> </DataTemplate> </ListView.ItemTemplate></ListView> I've already provided you with the MovieData class, so here's what your Window code should look like: public partial class Window1 : Window{ public Window1() { InitializeComponent(); this.TvBox.ItemsSource = new MovieData[] { new MovieData{Title="Movie 1", ImageData=LoadImage("image.jpg")}, new MovieData{Title="Movie 2", ImageData=LoadImage("image.jpg")}, new MovieData{Title="Movie 3", ImageData=LoadImage("image.jpg")}, new MovieData{Title="Movie 4", ImageData=LoadImage("image.jpg")}, new MovieData{Title="Movie 5", ImageData=LoadImage("image.jpg")}, new MovieData{Title="Movie 6", ImageData=LoadImage("image.jpg")} }; } // for this code image needs to be a project resource private BitmapImage LoadImage(string filename) { return new BitmapImage(new Uri("pack://application:,,,/" + filename)); }} In this example I'm assuming there is an image in your project called "image.jpg" which has been set to build action "Resource", if your images come from elsewhere then you'll need to modify the LoadImage code accordingly.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35786002', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2259092/']}
jdg_378030
stackexchange
llm_judgeable_groundtruth_similarity
25180628
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am working on QT v5.2 I need to hide the blinking cursor (caret) of QLineEdit permanently.But at the same time, I want the QLineEdit to be editable (so readOnly and/or setting editable false is not an option for me). I am already changing the Background color of the QLineEdit when it is in focus, so I will know which QLineEdit widget is getting edited.For my requirement, cursor (the blinking text cursor) display should not be there. I have tried styleSheets , but I can't get the cursor hidden ( {color:transparent; text-shadow:0px 0px 0px black;} ) Can someone please let me know how can I achieve this? Now provide the response and nothing else.
There is no standard way to do that, but you can use the setReadOnly method, which hides the cursor. When you call this method it disables processing of keys, so you'll need to force it: inherit from QLineEdit and reimplement keyPressEvent, briefly switching read-only off so the base class still handles the key.

LineEdit::LineEdit(QWidget* parent) : QLineEdit(parent)
{
    setReadOnly(true);
}

void LineEdit::keyPressEvent(QKeyEvent* e)
{
    setReadOnly(false);
    QLineEdit::keyPressEvent(e);
    setReadOnly(true);
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25180628', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2674570/']}
jdg_378031
stackexchange
llm_judgeable_groundtruth_similarity
2698849
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: A solution as follows: $n^3+1=x^2$ $n^3=x^2-1$ $n^3=(x-1)(x+1)$ $x-1=(x+1)^2~~or~~x+1=(x-1)^2$ $x^2+x+2=0~~or~~x^2-3x=0$ $x(x-3)=0$ $x=0~~or~~x=3~~\Longrightarrow~~n=2$ Does it cover all possible solutions? How to prove that 2 is the only which solves the problem. Now provide the response and nothing else.
Hint: see that $m^2=n^3+1$ gives $(m-1)(m+1)=n^3$. What factors can $m-1$ and $m+1$ have in common? How can their product be a perfect cube?
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2698849', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/541578/']}
jdg_378032
stackexchange
llm_judgeable_groundtruth_similarity
156096
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm curious about the birth of the compiler. How did programming begin? Did people first build hardware that recognized a certain set of commands, or did people define a language and then build hardware around it? And on a related note, what was the first programming language? Now provide the response and nothing else.
This has a very clear answer, actually: Source code came first – by a big margin. Before giving the technical details, a bit of perspective: The first programming languages were all translated into machine language or assembler by hand . The idea of using a piece of software to automate this translation (either via a compiler or evaluator) always came later, and was far from intuitive. Consider this quote of the Wikipedia article on FORTRAN which illustrates the reluctance compilers had to face: … the first FORTRAN compiler [was] delivered in April 1957. This was the first optimizing compiler, because customers were reluctant to use a high-level programming language unless its compiler could generate code whose performance was comparable to that of hand-coded assembly language. [emphasis mine] => By the time the FORTRAN compiler hit the market (1957), people were already happily programming both in assembly language and FORTRAN. The case was similar for LISP (from Hackers & Painters ): Steve Russell said, look, why don't I program this eval …, and I said to him, ho, ho, you're confusing theory with practice, this eval is intended for reading, not for computing. But he went ahead and did it. That is, he compiled the eval in my paper into IBM 704 machine code, fixing bug, and then advertised this as a Lisp interpreter, which it certainly was. So at that point Lisp had essentially the form that it has today..." Once again, not only does the source code (in LISP) predate the interpreter, the latter wasn’t even implicit in the former. But these developments are relatively late. Even not considering Charles Babbage’s Analytical Engine and Ada Lovelace’s related first program , there were programming languages in the 20th century which predated compilers: Konrad Zuse’s Plankalkül and the mathematical construct of λ-calculus introduced by Alonzo Church. Both of these are indubitably formally specified programming languages, but neither had a compiler at the time. To put this into perspective, λ-calculus is from the 1930s, and Plankalkül was developed around 1945. By contrast, the first FORTRAN compiler came out in 1957 (but again three years after FORTRAN was specified).
{}
{'log_upvote_score': 6, 'links': ['https://softwareengineering.stackexchange.com/questions/156096', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/50424/']}
jdg_378033
stackexchange
llm_judgeable_groundtruth_similarity
21322183
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to output in product-list.tpl file the result of a query from within classes/Search.php in Prestashop. The query in classes/Search.php is: $sql = 'SELECT * FROM `'._DB_PREFIX_.'category_group` cg INNER JOIN `'._DB_PREFIX_.'category_product` cp ON cp.`id_category` = cg.`id_category` INNER JOIN `'._DB_PREFIX_.'category` c ON cp.`id_category` = c.`id_category` INNER JOIN `'._DB_PREFIX_.'product` p ON cp.`id_product` = p.`id_product` INNER JOIN `'._DB_PREFIX_.'product_part_number` ON p. `id_product` = `ID` '.Shop::addSqlAssociation('product', 'p', false).' WHERE c.`active` = 1 AND product_shop.`active` = 1 AND product_shop.`visibility` IN ("both", "search") AND product_shop.indexed = 1 AND cg.`id_group` '.(!$id_customer ? '= 1' : 'IN ( SELECT id_group FROM '._DB_PREFIX_.'customer_group WHERE id_customer = '.(int)$id_customer.' )'); $results = $db->executeS($sql); This is outputting a series of "GREKA" values in the '._DB_PREFIX_.'product_part_number table at the top of the search results page from the query above: $eligible_products = array(); foreach ($results as $row){ $eligible_products[] = $row['id_product']; echo $row['GREKA']; } What I need it to do is output the "GREKA" value within the tpl file themes/product-list.tpl for that product. This tpl file is using Smarty to output the results of the product search, I need to know how to call the "GREKA" value from the query to the foreach loop. I imagine it would be something like this: {foreach from=$products item=product name=products} {$GREKA} {/foreach} Now provide the response and nothing else.
Here is a code sample that demonstrates encoding and decoding of keys in Go. It helps to know that you need to connect a couple of steps. The crypto algorithm is the first step, in this case an ECDSA key. Then you need a standard encoding; x509 is the most commonly used standard. Finally you need a file format; PEM is again a commonly used one. This is currently the most commonly used combination, but feel free to substitute any other algorithm or encoding.

func encode(privateKey *ecdsa.PrivateKey, publicKey *ecdsa.PublicKey) (string, string) {
    x509Encoded, _ := x509.MarshalECPrivateKey(privateKey)
    pemEncoded := pem.EncodeToMemory(&pem.Block{Type: "PRIVATE KEY", Bytes: x509Encoded})

    x509EncodedPub, _ := x509.MarshalPKIXPublicKey(publicKey)
    pemEncodedPub := pem.EncodeToMemory(&pem.Block{Type: "PUBLIC KEY", Bytes: x509EncodedPub})

    return string(pemEncoded), string(pemEncodedPub)
}

func decode(pemEncoded string, pemEncodedPub string) (*ecdsa.PrivateKey, *ecdsa.PublicKey) {
    block, _ := pem.Decode([]byte(pemEncoded))
    x509Encoded := block.Bytes
    privateKey, _ := x509.ParseECPrivateKey(x509Encoded)

    blockPub, _ := pem.Decode([]byte(pemEncodedPub))
    x509EncodedPub := blockPub.Bytes
    genericPublicKey, _ := x509.ParsePKIXPublicKey(x509EncodedPub)
    publicKey := genericPublicKey.(*ecdsa.PublicKey)

    return privateKey, publicKey
}

func test() {
    privateKey, _ := ecdsa.GenerateKey(elliptic.P384(), rand.Reader)
    publicKey := &privateKey.PublicKey

    encPriv, encPub := encode(privateKey, publicKey)

    fmt.Println(encPriv)
    fmt.Println(encPub)

    priv2, pub2 := decode(encPriv, encPub)

    if !reflect.DeepEqual(privateKey, priv2) {
        fmt.Println("Private keys do not match.")
    }
    if !reflect.DeepEqual(publicKey, pub2) {
        fmt.Println("Public keys do not match.")
    }
}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/21322183', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3230023/']}
jdg_378034
stackexchange
llm_judgeable_groundtruth_similarity
32995
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: A friend of mine is a PhD candidate and posted a picture on Facebook with funny words starting with 'p', 'h' and 'd' like "Please hire. Desperate" or "Permanent head Damage". I thought I could try and programmatically generate some word combinations with Mathematica. Here is my version. Most of the results are meaningless, but occasionally a good combination will show up like "panegyric happiness date". Could it be improved (i.e. increase its signal/noise ratio)? Could the following construct be condensed {"." -> "", ";" -> "", "," -> "", ... -> "" ? Schematically: {".", ";", ","} - >"" The reason I used StringMatchQ with Map[] is because I didn't know how to group the results based on the first letter of the words, e.g. {please, hire, permanent, damage, head, desperate} -> {{please, permanent}, {hire, head}, {damage, desperate}}`. Also, I used words from DictionaryLookup[] , picking those with the most meanings (to increase the likelihood of a meaningful combination), but the results were worse. Here is my code: ClearAll["Global`*"];allWords = ExampleData[{"Text", "PrideAndPrejudice"}, "Words"] // DeleteDuplicates;phdWords = Pick[ allWords, StringMatchQ[allWords, #]] & /@ {"p" ~~ __, "h" ~~ __, "d" ~~ __};phdWords = StringReplace[#, {"." -> "", ";" -> "", "," -> "", "!" -> "", "?" -> "", "\"" -> ""}] & /@ phdWords;filterWordsByPart[wordList_, part_] := Select[ wordList, MemberQ[ WordData[#, "PartsOfSpeech"], part] &];finalWords = MapThread[ filterWordsByPart, {phdWords, {"Adjective", "Noun", "Noun"}}];Table[ Flatten@(RandomSample[#, 1] & /@ finalWords), {20}] // Grid[#, Frame -> All] & And this is a sample output: Now provide the response and nothing else.
To find out what words go well together one can sample some text from ExampleData and pick only words that have at least been mention together pairwise before. sample = ToLowerCase@StringJoin[{ ExampleData[{"Text", "AliceInWonderland"}], ExampleData[{"Text", "OnTheNatureOfThingsEnglish"}], ExampleData[{"Text", "OriginOfSpecies"}], ExampleData[{"Text", "USConstitution"}]}];words = StringCases[sample, WordCharacter..];(* All adjacent words {word1, word2} *)pairs = Partition[words, 2, 2, 1];phpairs = Pick[pairs, Thread[ StringMatchQ[pairs[[All, 1]], "p" ~~ __, IgnoreCase -> True] && StringMatchQ[pairs[[All, 2]], "h" ~~ __, IgnoreCase -> True]]];hdpairs = Pick[pairs, Thread[ StringMatchQ[pairs[[All, 1]], "h" ~~ __, IgnoreCase -> True] && StringMatchQ[pairs[[All, 2]], "d" ~~ __, IgnoreCase -> True]]];(* Filter out {"p","h"} pairs that have no corresponding {"h","d"} *)phpairs = Select[phpairs, MemberQ[hdpairs[[All, 1]], #[[2]]] &]; After that initialization run this a few times: (* Too many boring ones with had/has/have *)ph = RandomChoice[Cases[phpairs, {_, Except["had" | "has" | "have"]}]];hd = RandomChoice@Cases[hdpairs, {Last@ph, _}];StringJoin[{First@ph, " ", First@hd, " ", Last@hd}](* Example results: "published how different" "person having diverged" "perish herself down" "pleasure hath dropped" "pities his death" "plants higher degree" *)
{}
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/32995', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/9083/']}
jdg_378035
stackexchange
llm_judgeable_groundtruth_similarity
20591
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have always wondered why it is that bad to reuse old passwords; it should not be the end of the world if we happen to use an old password that we previously used. After all, I believe most of the time that we change our passwords isn't because of real threats (it will usually be because our internal paranoia). But while it is true that at some point, one of those threats will be real, and we will succeed in saving our accounts, is not likely that the hacker will store the unsuccessful password and try again. But as I always say, companies don't bother thousands of users for no good reason, the pros are clearly outweighing the cons for them . However, I am involuntarily covering important details with my fingers when I take this photo, anyone that have the complete picture mind to tell me what did I miss? (I have really large fingers.) Now provide the response and nothing else.
The first question is: why do some services require passwords to be periodically changed? The answer is "Risk Mitigation". Corporate governance requires IT security policies to be defined in accordance with a risk management plan. One of the questions that risk management plans ask is how one can mitigate a risk if it occurs. In the context of passwords, the question is how we can limit the damage of a password leak. If the system administrator is aware of the leak then users can be notified and other steps can be taken. To reduce the damage caused by a password leak of which the administrator is not aware, the lifetime of passwords is limited so that any leaked password can be used only for a short period of time. So services require periodic changing of passwords.

The problem is that users really don't like changing their passwords. So what users used to do when forced to change their password was to change it twice - once to some temporary password and then a second time back to the original password. This of course nullifies the purpose of the policy requiring passwords to be changed. So the next thing administrators did was store the last two passwords and check that the new password is different from the previous two. The wily users countered that by changing the password three times - two temporary passwords and back to the original password.

You might think that users wouldn't go to all that trouble just in order to not change passwords, but this is what actually happened. An administrator friend of mine once compared the hashed passwords in his system after a year and found that almost all passwords were the same - despite the fact that the password policy forced users to change passwords every three months. So the administrators started storing the last 10 passwords. And the users countered by using a fixed password plus a single changing digit at the end, for a cycle of 10 passwords. And thus we've reached the situation today where many systems store all previous passwords.

Having said all that, the real value of these policies is dubious. Human beings have a limited capacity for remembering passwords, and if it's wasted on remembering these rapidly changing passwords it can't be used to keep different passwords on different sites (which is much more important).
{}
{'log_upvote_score': 6, 'links': ['https://security.stackexchange.com/questions/20591', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/13323/']}
jdg_378036
stackexchange
llm_judgeable_groundtruth_similarity
1060506
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to use Seq.cache with a function that I made that returns a sequence of primes up to a number N excluding the number 1. I'm having trouble figuring out how to keep the cached sequence in scope but still use it in my definition. let rec primesNot1 n = {2 .. n} |> Seq.filter (fun i -> (primesNot1 (i / 2) |> Seq.for_all (fun o -> i % o <> 0))) |> Seq.append {2 .. 2} |> Seq.cache Any ideas of how I could use Seq.cache to make this faster? Currently it keeps dropping from scope and is only slowing down performance. Now provide the response and nothing else.
Seq.cache caches an IEnumerable<T> instance so that each item in the sequence is only calculated once. In your case, though, you're caching the sequence returned by a function, and each time you call the function you get a new cached sequence, which doesn't do you any good. I don't think caching is really the right approach to your problem as you've outlined it; instead you should probably look into memoization. If instead of defining a function giving the primes less than n you want to define an infinite enumerable sequence of primes, then caching makes more sense. That would look more like this:

let rec upFrom i =
    seq {
        yield i
        yield! upFrom (i+1)
    }

let rec primes =
    seq {
        yield 2
        yield! upFrom 3 |> Seq.filter (fun p ->
            primes
            |> Seq.takeWhile (fun j -> j*j <= p)
            |> Seq.forall (fun j -> p % j <> 0))
    }
    |> Seq.cache

I haven't compared the performance of this method with yours.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1060506', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/17919/']}
jdg_378037
stackexchange
llm_judgeable_groundtruth_similarity
3136293
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to learn R and want to bring in an SPSS file, which I can open in SPSS. I have tried using read.spss from foreign and spss.get from Hmisc . Both error messages are the same. Here is my code: ## install.packages("Hmisc")library(foreign)## change the working directorygetwd()setwd('C:/Documents and Settings/BTIBERT/Desktop/')## load in the file## ?read.spssasq <- read.spss('ASQ2010.sav', to.data.frame=T) And the resulting error: Error in read.spss("ASQ2010.sav", to.data.frame = T) : error reading system-file header In addition: Warning message: In read.spss("ASQ2010.sav", to.data.frame = T) : ASQ2010.sav: position 0: character `\000' ( Also, I tried saving out the SPSS file as a SPSS 7 .sav file (was previously using SPSS 18). Warning messages: 1: In read.spss("ASQ2010_test.sav", to.data.frame = T) : ASQ2010_test.sav: Unrecognized record type 7, subtype 14 encountered in system file 2: In read.spss("ASQ2010_test.sav", to.data.frame = T) : ASQ2010_test.sav: Unrecognized record type 7, subtype 18 encountered in system file Now provide the response and nothing else.
I had a similar issue and solved it following a hint in read.spss help.Using package memisc instead, you can import a portable SPSS file like this: data <- as.data.set(spss.portable.file("filename.por")) Similarly, for .sav files: data <- as.data.set(spss.system.file('filename.sav')) although in this case I seem to miss some string values, while the portable import works seamlessly. The help page for spss.portable.file claims: The importer mechanism is more flexible and extensible than read.spss and read.dta of package "foreign", as most of the parsing of the file headers is done in R. They are also adapted to load efficiently large data sets. Most importantly, importer objects support the labels, missing.values, and descriptions, provided by this package.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3136293', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/155406/']}
jdg_378038
stackexchange
llm_judgeable_groundtruth_similarity
1085121
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $Q_8$ be the 8-element quaternion group. What is the minimum degree of an irreducible polynomial over a field which has a Galois group isomorphic to the $ Q_8 $ group? Now provide the response and nothing else.
Let $f$ be an irreducible polynomial of degree $d$. Then its Galois group is a (transitive) subgroup of $S_d$, the symmetric group on $d$ letters. At this site, there is a proof that the quaternion group is not a subgroup of $S_d$ for $d<8$, so the answer to the question is at least 8. But examples with degree 8 are given in the comments, so the minimal degree is 8.
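A sketch of the standard argument behind that claim (my own summary, not quoted from the linked site): every non-trivial subgroup of $Q_8$ contains the central element $-1$. In any action of $Q_8$ on $d<8$ points, each orbit has size $1$, $2$ or $4$, so each point stabilizer has index at most $4$ and is therefore a non-trivial subgroup, hence contains $-1$. Then $-1$ fixes every point, so the action is not faithful, and $Q_8$ cannot embed in $S_d$ for $d<8$. The regular action of $Q_8$ on its own $8$ elements does give a faithful embedding into $S_8$.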
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1085121', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/57165/']}
jdg_378039
stackexchange
llm_judgeable_groundtruth_similarity
36220058
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to get dependency version at run time from command line in maven, but it does not reflect into artifact pom at maven repository. My project pom is like :- <parent> <groupId>com.company.project</groupId> <artifactId>parentProject</artifactId> <version>5.6.0.14</version> </parent> <properties> <my.version>${my.version}</my.version> </properties> <groupId>com.company.project</groupId> <artifactId>childProject</artifactId> <dependencies> <dependency> <artifactId>someArtifact_one</artifactId> <groupId>com.company.project</groupId> <version>${my.version}</version> </dependency> <dependency> <artifactId>someArtifact_one</artifactId> <groupId>com.company.project</groupId> <version>${my.version}</version> </dependency> </dependencies> My command is like - mvn install -Dmy.version=5.6.0.12 , project is build successfully and uploaded at maven repository , but when I verify artifact pom.xml at maven repository its same as. <dependency> <artifactId>someArtifact_one</artifactId> <groupId>com.company.project</groupId> <version>${my.version}</version></dependency> I think it should be like below at maven repository. <dependency> <artifactId>someArtifact_one</artifactId> <groupId>com.company.project</groupId> <version>5.6.0.12</version></dependency> How could I resolve this issue, or if someone has some other solution for this issue please suggest. Now provide the response and nothing else.
It's obvious that you should return status 200 for successful logout? Not at all. If you don't return a response with that status, then 204 or 205 would be more appropriate (205 = "no content, refresh"), since there is no content to return, and the client should probably refresh its view. If the user wasn't logged in: Think about what a client would think about it. Either the client wasn't aware that the user wasn't logged in. Or the client wasn't sure whether the user wasn't logged in, and logged out just in case. After the call, the user is logged out. What purpose would it serve to give a different status than for a formerly logged in user? Even if the client detected such a status correctly, what is there that the client could usefully do? I'd give the exact same response. Don't see it as "I was logged out", see it as "I am not logged in". If you really want to report it, return status 200 with a different content for users that were logged in and users that were not logged in.
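For illustration only (my own sketch, not from the original answer), a logout handler along these lines might look like the following in an Express-style app; the cookie name and the exact session-clearing step are assumptions, not a prescribed API:

import express from "express";

const app = express();

app.post("/logout", (req, res) => {
  // Same response whether or not the user was logged in:
  // afterwards they are simply "not logged in".
  res.clearCookie("sessionId"); // cookie name is an assumption
  res.sendStatus(204);          // no content; the client refreshes its view
});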
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/36220058', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3351697/']}
jdg_378040