source
stringclasses
1 value
task_type
stringclasses
1 value
in_source_id
stringlengths
1
8
prompt
stringlengths
209
40.4k
gold_standard_solution
stringlengths
0
56.7k
verification_info
stringclasses
1 value
metadata
stringlengths
138
225
problem_id
stringlengths
9
10
stackexchange
llm_judgeable_groundtruth_similarity
2780459
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using svn for my xcode project. I added two files to my current project, added them to the repository and commited changes for my classes folder. But when I try to commit the entire project, I get this error: Error: 155005 (Working copy not locked; this is probably a bug, please report) Description: Commit failed (details follow): Error: 155005 (Working copy not locked; this is probably a bug, please report) Description: Directory '/Users/gopalmadhu/Desktop/All My data/Schduler current code/build/Debug-iphonesimulator/scheduler.app.dSYM/.svn' containing working copy admin area is missing Due to this, the classes that are already checked in do not become visible when I check my code out. The files are in the project, but not visible. As a workaround, I need to add them again from the project folder to the classes folder. This is not the correct way of managing svn. What should I do? Now provide the response and nothing else.
Your problem is in committing build results to Subversion. Subversion works by putting a hidden folder called .svn in every folder that is under version control. This does not work well with folders that might get deleted and recreated by a tool, because the tool will probably not respect the existence of the .svn folder. That includes the whole build directory, the contents of which get wiped every time you clean your project targets. You should remove your build directory from version control. It shouldn't be in there anyway, since everything in it can be regenerated by doing an Xcode build. To get out of your existing dilemma, try the following:

1. Back up your project somewhere.
2. In a terminal, go to your project directory and rm -rf build
3. Do an svn update . At this point Subversion will hopefully restore the build directory.
4. svn rm build
5. svn commit -m "Removed build from version control"
6. (optional) svn propedit svn:ignore . This will bring up an editor for the svn:ignore property. Add build on a separate line. After that, Subversion won't show you the build directory in svn status commands.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2780459', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/308196/']}
jdg_376341
stackexchange
llm_judgeable_groundtruth_similarity
25855
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Plenty of hard graph problems are solvable in polynomial time on graphs of bounded treewidth . Indeed, textbooks typically use e.g. independet set as an example, which is a local problem . Roughly, a local problem is a problem whose solution can be verified by examining some small neighborhood of every vertex. Interestingly, even problems (such as Hamiltonian path) of a global nature can still be solved efficiently for bounded treewidth graphs. For such problems, usual dynamic programming algorithms have to keep track of all the ways in which the solution can traverse the corresponding separator of the tree decomposition (see e.g. [1]). Randomized algorithms (based on so-called cut'n'count) were given in [1], and improved (even deterministic) algorithms were developed in [2]. I don't know if it's fair to say that many, but at least some global problems can be solved efficiently for graphs of bounded treewidth. So what about problems that remain hard on such graphs? I'm assuming they are also of a global nature, but what else? What separates these hard global problems from global problems that can be solved efficiently? For instance, how and why would known methods fail to give us efficient algorithms for them? For example, one could consider the following problem(s): Edge precoloring extension Given a graph $G$ with some edges colored, decide if this coloring can be extended to a proper $k$-edge-coloring of the graph $G$. Edge precoloring extension (and its list edge coloring variant) is NP-complete for bipartite series-parallel graphs [3] (such graphs have treewidth at most 2). Minimum sum edge coloring Given a graph $G=(V,E)$, find an edge-coloring $\chi : E \to \mathbb{N}$ such that if $e_1$ and $e_2$ have a common vertex, then $\chi(e_1) \neq \chi(e_2)$. 
The objective is to minimize $E'_\chi(E) = \sum_{e \in E} \chi(e)$, the sum of the coloring. In other words, we have to assign positive integers to the edges of a graph such that adjacent edges receive different integers and the sum of the assigned numbers is minimal. This problem is NP-hard for partial 2-trees [4] (i.e. graphs of treewidth at most 2). Other such hard problems include the edge-disjoint paths problem, the subgraph isomorphism problem, and the bandwidth problem (see e.g. [5] and the references therein). For problems that remain hard even on trees, see this question . [1] Cygan, M., Nederlof, J., Pilipczuk, M., van Rooij, J. M., & Wojtaszczyk, J. O. (2011, October). Solving connectivity problems parameterized by treewidth in single exponential time. In Foundations of Computer Science (FOCS), 2011 IEEE 52nd Annual Symposium on (pp. 150-159). IEEE. [2] Bodlaender, H. L., Cygan, M., Kratsch, S., & Nederlof, J. (2013). Deterministic single exponential time algorithms for connectivity problems parameterized by treewidth. In Automata, Languages, and Programming (pp. 196-207). Springer Berlin Heidelberg. [3] Marx, D. (2005). NP‐completeness of list coloring and precoloring extension on the edges of planar graphs. Journal of Graph Theory, 49(4), 313-324. [4] Marx, D. (2009). Complexity results for minimum sum edge coloring. Discrete Applied Mathematics, 157(5), 1034-1045. [5] Nishizeki, T., Vygen, J., & Zhou, X. (2001). The edge-disjoint paths problem is NP-complete for series–parallel graphs. Discrete Applied Mathematics, 115(1), 177-186. Now provide the response and nothing else.
Most algorithms for graphs of bounded treewidth are based on some form of dynamic programming. For these algorithms to be efficient, we need to bound the number of states in the dynamic programming table: if you want a polynomial-time algorithm, then you need a polynomial number of states (e.g., n^tw); if you want to show that the problem is FPT, you usually want to show that the number of states is some function of treewidth. The number of states typically corresponds to the number of different types of partial solutions when breaking the graph at some small separator. Thus a problem is usually easy on bounded-treewidth graphs because partial solutions interacting with the outside world via a bounded number of vertices have only a bounded number of types. For example, in the independent set problem the type of a partial solution depends only on which boundary vertices are selected. In the Hamiltonian cycle problem, the type of a partial solution is described by how the subpaths of the partial solution match the vertices of the boundary to each other. Variants of Courcelle's Theorem give sufficient conditions for a problem to have the property that partial solutions have only a bounded number of types.

If a problem is hard on bounded-treewidth graphs, it is usually because of one of the following three reasons.

1. There are interactions in the problem not captured by the graph. For example, Steiner Forest is NP-hard on graphs of treewidth 3, intuitively because the source-destination pairs create interactions between nonadjacent vertices.

Elisabeth Gassner: The Steiner Forest Problem revisited. J. Discrete Algorithms 8(2): 154-163 (2010)

MohammadHossein Bateni, Mohammad Taghi Hajiaghayi, Dániel Marx: Approximation Schemes for Steiner Forest on Planar Graphs and Graphs of Bounded Treewidth. J. ACM 58(5): 21 (2011)

2. The problem is defined on the edges of the graph. Then even if a part of the graph is attached to the rest of the graph via a bounded number of vertices, there could be many edges incident to those few vertices, and the state of a partial solution can be described only by describing the state of all these edges. This is what made the problems in [3,4] hard.

3. Each vertex can have a large number of different states. For example, Capacitated Vertex Cover is W[1]-hard parameterized by treewidth, intuitively because the description of a partial solution involves not only stating which vertices of the separator were selected, but also how many times each selected vertex of the separator was used to cover edges.

Michael Dom, Daniel Lokshtanov, Saket Saurabh, Yngve Villanger: Capacitated Domination and Covering: A Parameterized Perspective. IWPEC 2008: 78-90
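The "only a bounded number of types" observation can be illustrated with the simplest possible case: maximum independent set on a tree (treewidth 1), where every separator is a single vertex and a partial solution has exactly two types. This is a minimal sketch of my own, not code from the cited papers:

```python
# Maximum independent set on a tree via dynamic programming.
# Each vertex v separates its subtree from the rest of the graph, so a
# partial solution has exactly two "types":
#   take[v]: best IS size in v's subtree that includes v
#   skip[v]: best IS size in v's subtree that excludes v

def max_independent_set(n, edges):
    """n vertices 0..n-1; edges form a tree; returns the size of a maximum IS."""
    from collections import defaultdict
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    take = [1] * n   # the vertex itself counts
    skip = [0] * n
    # iterative DFS from vertex 0 to get a top-down order
    order, parent, stack = [], [-1] * n, [0]
    seen = [False] * n
    while stack:
        u = stack.pop()
        seen[u] = True
        order.append(u)
        for w in adj[u]:
            if not seen[w]:
                parent[w] = u
                stack.append(w)
    # process children before parents
    for u in reversed(order):
        for w in adj[u]:
            if w != parent[u]:
                take[u] += skip[w]                 # children of a taken vertex must be skipped
                skip[u] += max(take[w], skip[w])   # otherwise children are unconstrained
    return max(take[0], skip[0])

# A path on 5 vertices: a maximum IS is {0, 2, 4}
print(max_independent_set(5, [(0, 1), (1, 2), (2, 3), (3, 4)]))  # prints 3
```

Contrast this with, say, Hamiltonian cycle, where the type must record a matching on the boundary, or with the hard problems above, where no bounded description of a partial solution exists at all.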
{}
{'log_upvote_score': 5, 'links': ['https://cstheory.stackexchange.com/questions/25855', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/5693/']}
jdg_376342
stackexchange
llm_judgeable_groundtruth_similarity
448181
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Does JPA/EJB3 framework provide standard way to do batch insert operation...?We use hibernate for persistence framework, So I can fall back to Hibernate Session and use combination session.save()/session.flush() achieve batch insert. But would like to know if EJB3 have a support for this... Now provide the response and nothing else.
Neither JPA nor Hibernate provides particular support for batch inserts, and the idiom for batch inserts with JPA is the same as with Hibernate:

EntityManager em = ...;
EntityTransaction tx = em.getTransaction();
tx.begin();
for ( int i = 0; i < 100000; i++ ) {
    Customer customer = new Customer(.....);
    em.persist(customer);
    if ( i % 20 == 0 ) { // 20, same as the JDBC batch size
        // flush a batch of inserts and release memory:
        em.flush();
        em.clear();
    }
}
tx.commit();
em.close();

Using Hibernate's proprietary API in this case doesn't provide any advantage IMO.

References:
JPA 1.0 Specification, Section 4.10 "Bulk Update and Delete Operations"
Hibernate Core reference guide, Chapter 13 "Batch processing"
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/448181', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/55582/']}
jdg_376343
stackexchange
llm_judgeable_groundtruth_similarity
73323
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The Church-Turing hypothesis says one can not build a computing device which has more computing power (in terms of computability) than the abstract model of Turing machine. So, there is something in our laws of physics which prevent us from making devices which are more powerful than Turing machine, so in this respect it can be viewed as a law of physics. What is the physicists' view of Church-Turing hypothesis? Can the Church-Turing hypothesis be deduced from other fundamental law of physics? Now provide the response and nothing else.
You are asking two questions. I am only going to address one of them: Can the Church-Turing hypothesis be deduced from other fundamental law of physics? There are two fundamental theories of physics that account for nearly all experiments and observations performed to date: general relativity and the Standard Model. If we could simulate these theories by Turing machines, then the outcomes of any experiment could be deduced by a Turing machine, and then any physical computational device could be simulated by a Turing machine. General Relativity: There was a breakthrough in numerical relativity in 2005, and we now have computer programs that do an excellent job of simulating general relativity. While we can't rigorously show that relativity satisfies the Church-Turing hypothesis, this is good evidence that it does. The Standard Model: Lattice field theory seems to do a very good job of simulating the Standard Model (albeit with enormous computation times). Again, we can't rigorously show that the Standard Model satisfies the Church-Turing hypothesis, but this is good evidence that it does. If you are talking about computing devices that can be built using any conceivable future technology, these two theories probably cover all of them.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/73323', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/6068/']}
jdg_376344
stackexchange
llm_judgeable_groundtruth_similarity
12335848
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I compiled and installed openCV 2.4.2 in ubuntu 12.04. Under /usr/local/include I can see the directories /usr/local/opencv and /usr/local/opencv2 . Here is the code I wrote:

#include <cv.h>
#include <highgui.h>
#include <iostream>

using namespace cv;
using namespace std;

int main(int argc, char **argv)
{
    Mat image;
    image = imread(argv[1], 1);
    if (argc != 2 || !image.data)
    {
        cout << "No image data\n";
        return -1;
    }
    namedWindow("Display Image", CV_WINDOW_AUTOSIZE);
    imshow("Display Image", image);
    waitKey(0);
    return 0;
}

I compiled it using this command line:

g++ DisplayImage.cpp -o DisplayImage `pkg-config opencv --cflags --libs`

There were no compile time errors, however when I try to run the resulting binary with ./DisplayImage code.png I get the following error message:

./DisplayImage: error while loading shared libraries: libopencv_core.so.2.4: cannot open shared object file: No such file or directory

Now provide the response and nothing else.
You haven't put the shared library in a location where the loader can find it. Look inside the /usr/local/opencv and /usr/local/opencv2 folders and see if either of them contains any shared libraries (files beginning in lib and usually ending in .so ). When you find them, create a file called /etc/ld.so.conf.d/opencv.conf and write to it the paths to the folders where the libraries are stored, one per line. For example, if the libraries were stored under /usr/local/opencv/libopencv_core.so.2.4 , then I would write this to my opencv.conf file:

/usr/local/opencv/

Then run

sudo ldconfig -v

If you can't find the libraries, try running

sudo updatedb && locate libopencv_core.so.2.4

in a shell. You don't need to run updatedb if you've rebooted since compiling OpenCV.

References:
About shared libraries on Linux: http://www.eyrie.org/~eagle/notes/rpath.html
About adding the OpenCV shared libraries: http://opencv.willowgarage.com/wiki/InstallGuide_Linux
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/12335848', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1642487/']}
jdg_376345
stackexchange
llm_judgeable_groundtruth_similarity
2079180
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Here's Prob. 17, Chap. 3 in the book Principles of Mathematical Analysis by Walter Rudin, 3rd edition: Fix $\alpha > 1$. Take $x_1 > \sqrt{\alpha}$, and define $$x_{n+1} = \frac{\alpha + x_n}{1+x_n} = x_n + \frac{\alpha - x_n^2}{1+x_n}.$$ (a) Prove that $x_1 > x_3 > x_5 > \cdots$. (b) Prove that $x_2 < x_4 < x_6 < \cdots$. (c) Prove that $\lim x_n = \sqrt{\alpha}$. My effort: From the recursion formula, we can obtain $$\begin{align}x_{n+1} &= \frac{ \alpha + x_n}{1+ x_n} \\&= \frac{ \alpha + \frac{\alpha + x_{n-1}}{1+x_{n-1}} }{ 1 + \frac{\alpha + x_{n-1}}{1+x_{n-1}} } \\&= \frac{ (\alpha + 1) x_{n-1} + 2 \alpha }{ 2x_{n-1} + ( 1 + \alpha ) } \\&= \frac{\alpha+1}{2} + \frac{2 \alpha - \frac{(\alpha+1)^2}{2} }{2x_{n-1} + ( 1 + \alpha ) } \\ &= \frac{\alpha+1}{2} + \frac{ \alpha - \frac{\alpha^2+1 }{2} }{2x_{n-1} + ( 1 + \alpha ) }. \end{align}$$ What next? Now provide the response and nothing else.
Notice that $$ x_{n+1} = \frac{\alpha+x_n}{1+x_n} = 1 + \frac{\alpha-1}{1+x_n}. $$ Since $\alpha-1>0$, we have that if $x_n<\sqrt{\alpha}$, then $$ x_{n+1} > 1 + \frac{\alpha-1}{1+\sqrt{\alpha}} = \frac{1+\sqrt{\alpha}+\alpha-1}{1+\sqrt{\alpha}} = \frac{\sqrt{\alpha}+\alpha}{1+\sqrt{\alpha}} = \sqrt{\alpha}$$ and similarly, if $x_n > \sqrt{\alpha}$, then $x_{n+1} < \sqrt{\alpha}$. Since $x_1>\sqrt{\alpha}$, it follows by induction that $x_n<\sqrt{\alpha}$ for $n$ even and $x_n>\sqrt{\alpha}$ for $n$ odd. In particular, $x_{n+1}-x_n > 0$ if $n$ is even, and $x_{n+1}-x_n < 0$ if $n$ is odd.

Notice that $$ x_{n+1} = \frac{\alpha+x_n}{1+x_n}\implies x_{n+1}(1+x_n) = \alpha+x_n \implies x_nx_{n+1} = \alpha - (x_{n+1}-x_n) $$ and hence $$ x_n(x_{n+1}-x_{n-1}) = (x_n-x_{n-1}) - (x_{n+1}-x_n). $$ It is clear that $x_n>0$ for all $n$, so we see that if $n$ is odd, then $x_n-x_{n-1}>0$ and $x_{n+1}-x_n<0$, so $x_{n+1}-x_{n-1} > 0$, while if $n$ is even, then $x_n-x_{n-1}<0$ and $x_{n+1}-x_n>0$, so $x_{n+1}-x_{n-1} < 0$. Thus, $x_3-x_1<0$, $x_5-x_3<0$, and so on, so $x_1 > x_3 > x_5 > \dots$, while $x_4-x_2>0$, $x_6-x_4>0$, and so on, so $x_2 < x_4 < x_6 < \dots$. This proves (a) and (b).

Now, since $x_n>\sqrt{\alpha}$ for $n$ odd, and $x_1>x_3>x_5>\dots$, it follows that the subsequence of odd terms $\{x_{2n-1}\}$ is monotonically decreasing and bounded from below, and hence has a limit (say $L$). Similarly, the subsequence of even terms $\{x_{2n}\}$ is monotonically increasing and bounded from above (namely by $\sqrt{\alpha}$), and so it has a limit as well (say $M$). These limits must satisfy $L\ge\sqrt{\alpha}$ and $M\le\sqrt{\alpha}$. From the equation $$ x_{n+1} = \frac{\alpha+x_n}{1+x_n} $$ if we consider $n$ odd and take limits on both sides, we obtain $$ M = \frac{\alpha+L}{1+L} $$ while if we consider $n$ even and take limits, we obtain $$ L = \frac{\alpha+M}{1+M}.
$$Thus, if we define the sequence $\{y_n\}$ by $y_1 = L$ and $y_{n+1} = \frac{\alpha+y_n}{1+y_n}$, then the sequence $\{y_n\}$ is just $\{L,M,L,M,\dots\}$. If $L>\sqrt{\alpha}$, then we can actually apply what we proved in part (a) to conclude that $y_1 > y_3 > y_5 > \dots$, which is impossible since all odd terms are $L$. Since $L\ge\sqrt{\alpha}$, it must follow that $L = \sqrt{\alpha}$, and you can easily check that this forces $M = \sqrt{\alpha}$ as well. Thus, the odd and even subsequential limits are both $\sqrt{\alpha}$, so the limit of the sequence is $\sqrt{\alpha}$ as well.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2079180', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/59734/']}
jdg_376346
stackexchange
llm_judgeable_groundtruth_similarity
232976
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The torque-free Euler equations, as in the experiment seen in the low gravity of a Russian spacecraft, are modelled here with a view to seeing the tumbling motion around the intermediate axis $\omega_2$. However, its reversal is not observed here. The boundary conditions do play a role; varying them did not much change the sine behaviour towards interfering periodic flips. Since it is easy to demonstrate here, I posted this hopefully interesting problem, although strictly speaking it is a physics problem.

{I1, I2, I3} = {8, 4, 0.4};
Dzhanibekov = {I1 TH1''[t] == (I2 - I3) TH2'[t] TH3'[t],
   I2 TH2''[t] == (I3 - I1) TH3'[t] TH1'[t],
   I3 TH3''[t] == (I1 - I2) TH1'[t] TH2'[t],
   TH1'[0] == -0.4, TH2'[0] == 0.08, TH3'[0] == 0.65,
   TH1[0] == 0.75, TH2[0] == -0.85, TH3[0] == 0.2};
NDSolve[Dzhanibekov, {TH1, TH2, TH3}, {t, 0, 15.}];
{th1[u_], th2[u_], th3[u_]} = {TH1[u], TH2[u], TH3[u]} /. First[%];
Plot[Tooltip[{th1'[t], th2'[t], th3'[t]}], {t, 0, 15},
 GridLines -> Automatic]

Please help choose better initial conditions for getting a jump around the $\theta_2$ axis. Thanks in advance. EDIT1: ICs updated per MichaelE2's suggestion, showing the effect on the three angular velocity variations. The flip frequency is surprisingly dependent on the choice of ICs. Is it possible to determine the common frequency analytically? Wing Nut Flips Wiki Ref Now provide the response and nothing else.
If this is a physical problem, then the choice of I1, I2, I3 depends on the shape of the body being tested. To make an animation, we first make a body, for example:

Graphics3D[{Cone[{{0, 0, 0}, {0, 0, 3}}, 1/2],
  Cuboid[{-0.2, -1, 0}, {0.2, 1, .7}]}, Boxed -> False]
G3D = RegionUnion[Cone[{{0, 0, 0}, {0, 0, 3}}, 1/2],
  Cuboid[{-0.3, -1, 0}, {0.3, 1, 1}]];
c = RegionCentroid[G3D];

Then we calculate the moments of inertia and define the equations:

J3 = NIntegrate[x^2 + y^2, {x, y, z} \[Element] G3D];
J2 = NIntegrate[x^2 + (z - c[[3]])^2, {x, y, z} \[Element] G3D];
J1 = NIntegrate[y^2 + (z - c[[3]])^2, {x, y, z} \[Element] G3D];

eq1 = {\[CapitalOmega]1[t] == \[CurlyPhi]'[t]*Sin[\[Theta][t]]*Sin[\[Psi][t]] + \[Theta]'[t]*Cos[\[Psi][t]],
   \[CapitalOmega]2[t] == \[CurlyPhi]'[t]*Sin[\[Theta][t]]*Cos[\[Psi][t]] - \[Theta]'[t]*Sin[\[Psi][t]],
   \[CapitalOmega]3[t] == \[CurlyPhi]'[t]*Cos[\[Theta][t]] + \[Psi]'[t]};
eq2 = {J1*\[CapitalOmega]1'[t] + (J3 - J2)*\[CapitalOmega]2[t]*\[CapitalOmega]3[t] == 0,
   J2*\[CapitalOmega]2'[t] + (J1 - J3)*\[CapitalOmega]1[t]*\[CapitalOmega]3[t] == 0,
   J3*\[CapitalOmega]3'[t] + (J2 - J1)*\[CapitalOmega]2[t]*\[CapitalOmega]1[t] == 0};
eq3 = {\[CurlyPhi][0] == .001, \[Theta][0] == .001, \[Psi][0] == .001,
   \[CapitalOmega]3[0] == 10, \[CapitalOmega]1[0] == .0, \[CapitalOmega]2[0] == .025};

Finally we export a gif file:

Export["C:\\Users\\...\\Desktop\\J0.gif",
 Table[Graphics3D[{Cuboid[{5, 5, -3}, {5.2, 5.2, 5}],
    Cuboid[{-5, -5, -3.1}, {5, 5, -3}],
    GeometricTransformation[{Cone[{{0, 0, 0}, {0, 0, 3}}, 1/2],
      Cuboid[{-0.2, -1, 0}, {0.2, 1, .7}]},
     EulerMatrix[{NDSolveValue[{eq1, eq2, eq3}, \[CurlyPhi][tn], {t, 0, tn}],
       NDSolveValue[{eq1, eq2, eq3}, \[Theta][tn], {t, 0, tn}],
       NDSolveValue[{eq1, eq2, eq3}, \[Psi][tn], {t, 0, tn}]}]]},
   Boxed -> False, Lighting -> {{"Point", Yellow, {10, 3, 3}}}],
  {tn, 0, 11.6, .1}], AnimationRepetitions -> Infinity]

This problem has an analytical solution, explained by Landau L.D. and Lifshits E.M. in Mechanics.
Let $E$ be the energy, $M^2$ the squared angular momentum, $I_1,I_2,I_3$ the principal moments of inertia (ordered so that $I_1<I_2<I_3$), $k^2=\frac{(I_2-I_1)(2EI_3-M^2)}{(I_3-I_2)(M^2-2EI_1)}$, and $sn(\tau,k)$, $cn(\tau,k)$, $dn(\tau,k)$ the Jacobi elliptic functions. Then the solution of the problem can be written in closed form as $$\Omega_1=\sqrt{\frac{2EI_3-M^2}{I_1(I_3-I_1)}}\,cn(\tau,k)$$ $$\Omega_2=\sqrt{\frac{2EI_3-M^2}{I_2(I_3-I_2)}}\,sn(\tau,k)$$ $$\Omega_3=\sqrt{\frac{M^2-2EI_1}{I_3(I_3-I_1)}}\,dn(\tau,k)$$ $$\tau=t\sqrt{\frac{(M^2-2EI_1)(I_3-I_2)}{I_1I_2I_3}}$$ The dynamics of the system is determined by two parameters - the period $T$ and the time of the flip $T_f$, which are related to each other as $$T=4K(k)\sqrt{\frac{I_1I_2I_3}{(I_3-I_2)(M^2-2EI_1)}}, \qquad T_f=\frac{T}{2K(k)}$$ where $K(k)$ is the complete elliptic integral of the first kind.
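The closed-form period can be sanity-checked numerically against a direct integration of the torque-free Euler equations. This is my own Python sketch, independent of the Mathematica code above; the body and initial spin are invented values with $I_1<I_2<I_3$ (not the question's {8, 4, 0.4}, whose axes are numbered in the opposite order):

```python
import math

# Hypothetical rigid body with I1 < I2 < I3, spinning mostly about the axis
# of largest moment of inertia (made-up numbers for illustration).
I1, I2, I3 = 1.0, 2.0, 3.0
w0 = (0.1, 0.0, 1.0)

# Conserved quantities: twice the energy and the squared angular momentum
E2 = I1*w0[0]**2 + I2*w0[1]**2 + I3*w0[2]**2
M2 = (I1*w0[0])**2 + (I2*w0[1])**2 + (I3*w0[2])**2

# Modulus and period from the closed-form solution
k2 = (I2 - I1)*(E2*I3 - M2) / ((I3 - I2)*(M2 - E2*I1))

def ellip_K(m):
    """Complete elliptic integral of the first kind, m = k^2, via the AGM."""
    a, b = 1.0, math.sqrt(1.0 - m)
    while abs(a - b) > 1e-15:
        a, b = (a + b)/2, math.sqrt(a*b)
    return math.pi/(2*a)

T = 4*ellip_K(k2) / math.sqrt((M2 - E2*I1)*(I3 - I2)/(I1*I2*I3))

# Independent check: integrate Euler's equations with RK4 and measure the
# spacing of the zero crossings of Omega_2 (proportional to sn, so
# consecutive zeros are T/2 apart).
def deriv(w):
    w1, w2, w3 = w
    return ((I2 - I3)*w2*w3/I1, (I3 - I1)*w3*w1/I2, (I1 - I2)*w1*w2/I3)

def rk4(w, h):
    q1 = deriv(w)
    q2 = deriv(tuple(w[i] + h/2*q1[i] for i in range(3)))
    q3 = deriv(tuple(w[i] + h/2*q2[i] for i in range(3)))
    q4 = deriv(tuple(w[i] + h*q3[i] for i in range(3)))
    return tuple(w[i] + h/6*(q1[i] + 2*q2[i] + 2*q3[i] + q4[i]) for i in range(3))

w, t, h, crossings = w0, 0.0, 1e-3, []
while t < 1.6*T:
    wn = rk4(w, h)
    if w[1]*wn[1] < 0:                               # sign change of Omega_2
        crossings.append(t + h*w[1]/(w[1] - wn[1]))  # linear interpolation
    w, t = wn, t + h

half_period = crossings[1] - crossings[0]
print(round(T/2, 3), round(half_period, 3))  # the two values should agree
```

For initial conditions close to the intermediate axis, $k\to 1$, $K(k)$ blows up and the flips become rare, which matches the strong dependence of the flip frequency on the ICs noted in the question.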
{}
{'log_upvote_score': 6, 'links': ['https://mathematica.stackexchange.com/questions/232976', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/19067/']}
jdg_376347
stackexchange
llm_judgeable_groundtruth_similarity
539877
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: After ordering this 200VA, 32V transformer, I realized that it doesn't have a center tap on each secondary, which I need to have. (duh on my part) So I want to use 110V across the 220V primary leads to give me half the voltage at each secondary, resulting in a single secondary of 32V and a center tap. I know that this works because I have done it and observed that the output is 32V RMS with a center tap, but I want to make sure I'm not throwing away half the VA rating by doing so. I read TONS about transformers trying to get a definitive answer to this, and I think that the core saturation and the secondary wire size and associated voltage drop and heat are the main concerns, along with the fact that the ratio of number of turns is proportional to the ratio of the voltages and inversely proportional to the ratio of the currents in the primary and secondary. So on those points, I think I'm not reducing the usable 200VA, with the possible exception of secondary wire heating. To illustrate what I mean using the image below, I would wire-nut the middle black and red primary wires together and connect 110V across the top red and bottom black wires. And on the secondary, I would connect the middle blue and green wires and use that connection as my center tap. I would use the top green and the bottom blue wires as my supply rails and there would be 32V RMS across these rails. If that's confusing for me to say rails, I'll add that I'm using a bridge rectifier and 2 shunt capacitors to make them into +/- 45-ish volt rails for an amplifier. Yes, I know that this is NOT standard, and this is not for a commercial product, so please don't reply simply that this is not standard or that it will void my warranty. 
There is a similar question here with 2 answers to my question, but unfortunately those two answers contradict each other exactly. Also, here is a question with a very promising title, but the answers don't answer my question. The other similar questions don't have answers which address the current-rating-doubling part. Please weigh in if you know whether I can safely get the full 200VA in this way. Now provide the response and nothing else.
You will halve the rated VA of the transformer if you run it at half the rated input voltage. The rated maximum VA is the product of maximum allowable Volts and maximum allowable Amps. The Amps will be unchanged; they're limited by wire heating. You've halved the Volts, so you've halved the VA, to first order at least. Running the core at half the field reduces the core losses, so the thermal load on the transformer is not quite so high. This means you can increase the thermal loading on the copper a little, at the expense of voltage drop, aka voltage regulation. Maybe then the rated VA has only fallen to 55% and not to 50% of what it was. If you have a centre-tapped 32 V, i.e. 16-0-16, you can bridge rectify that to about +/- 22 V. If you want +/- 45 V (that is, a total of 90 V from positive to negative, with ground in the middle), then the simplest way to get there is with a centre-tapped 64 V secondary (two 32 V secondaries in series), leaving your transformer configured and supplied as it sounds like it was meant to be, and like my audio amp transformer is.
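A quick check of the numbers above (assuming ideal diodes, unloaded reservoir capacitors, and neglecting rectifier drops): each half-winding's RMS voltage times $\sqrt{2}$ gives the peak, which is roughly the DC rail voltage.

```latex
V_{\mathrm{rail}} \approx \sqrt{2}\, V_{\mathrm{rms,\,half}}
\qquad
\sqrt{2}\times 16\ \mathrm{V} \approx 22.6\ \mathrm{V} \quad \text{(32 V CT secondary)}
\qquad
\sqrt{2}\times 32\ \mathrm{V} \approx 45.3\ \mathrm{V} \quad \text{(64 V CT secondary)}
```

This is where the "+/- 22 V" and "+/- 45-ish V" figures come from; a real supply sags a couple of volts under load and across the rectifier.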
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/539877', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/241733/']}
jdg_376348
stackexchange
llm_judgeable_groundtruth_similarity
4268845
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Someone send me a script today starting with #: and after googling I didn't found any answer. Even if the script works I wonder what does that mean. Now provide the response and nothing else.
Wow! That brings back lots of memories! Back in the 1980s and early 1990s, there were two basic shells, Bourne shell ( /bin/sh ) and C shell ( /bin/csh ). Bourne shell had very few user-friendly things. There were no aliases or command substitutions. Therefore, most people liked using C shell as their default shell. However, csh was a terrible scripting language. (See http://www.faqs.org/faqs/unix-faq/shell/csh-whynot/ ). Therefore, you used C shell as your shell, but wrote your scripts in Bourne shell, which had a much better syntax. However, there was a tiny little problem: since your default shell was C shell, typing a shell script name at the command line meant the C shell would pick it up and try to execute it. To get around this, you put : as the first line in your program. This was a Bourne shell comment, but an invalid C shell command. Thus the script would fail to run if you forgot to put sh in front of it. Later on, systems would know that if the first line was : , it should be a Bourne shell script. And, even later, you could put #: so it would be a comment and not a command. Some people put the name of the shell, /bin/sh , next to it to help remind people that this was supposed to be run as a Bourne shell script. C shell started dying out after Kornshell started becoming popular. It was about this time that the shebang ( #! ) came out, but that was only for AT&T and not the Berkeley-derived systems. BSD systems didn't get the shebang until the late 1980s. And Sun people used C shell as their default shell until Solaris came out. I hadn't seen a program begin with #: /bin/sh in ages. BTW, it is common to start your scripts this way:

#! /usr/bin/env perl

This way, you use the version of Perl that's in your path and don't have to worry about what directory it is in. For example, if you begin your script with:

#! /usr/local/bin/perl

and Perl is actually in /usr/bin , your script won't run. The env program is always in /usr/bin and is guaranteed to work.
Of course, if you want to use a specific version of Perl and not whichever one is in the path, you'd skip env and hard-code the path to that Perl instead.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4268845', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/519067/']}
jdg_376349
stackexchange
llm_judgeable_groundtruth_similarity
401594
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Some ( link 1 , link 2 ) programming languages allow spaces in their identifiers (e.g. variables, procedures) but most of them don't and instead programmers usually use camel case , snake case and other ways to separate words in names. To support spaces or even other Unicode characters some programming languages allow encapsulating the name with a certain character to delimit its start and end. Is it a bad idea to allow spaces or is it just commonly not allowed for historical reasons (when there were more limitations than now or simply being decided not worth implementing)? The question is more about the main pros and cons of implementing it in newly created programming languages. Related pages: link 1 , link 2 . Now provide the response and nothing else.
Consider the following:

var [Example Number] = 5;
[Example Number] = [Example Number] + 5;
print([Example Number]);
int[] [Examples Array] = new int[25];
[Examples Array][[Example Number]] = [Example Number];

Compare it with the more traditional example:

var ExampleNumber = 5;
ExampleNumber = ExampleNumber + 5;
print(ExampleNumber);
int[] ExamplesArray = new int[25];
ExamplesArray[ExampleNumber] = ExampleNumber;

I'm pretty sure you noticed that the strain on your brain to read the second example was much lower. If you allow whitespace in an identifier, you'll need some other language element to mark the start and the end of a word. Those delimiters force the brain to do some extra parsing and, depending on which one you pick, create a whole new set of ambiguity issues for the human brain. If you don't use delimiters, and instead try to infer which identifier is meant from context only, you invite another kind of can of worms:

var Example = 5;
var Number = 10;
var Example Number = Example + Number;
int[] Examples Array = new int[25];
Examples Array[Example Number] = Example Number;
Example Number = Example Number + Example + Number;
print text(Example Number);

Perfectly doable. A total pain for your brain's pattern matching. Those examples are painful to read not only because of the choice of the words I'm picking, but also because your brain takes some extra time to identify what every identifier is. Consider the more regular format, once again:

var Example = 5;
var Number = 10;
var ExampleNumber = Example + Number;
int[] ExamplesArray = new int[25];
ExamplesArray[ExampleNumber] = ExampleNumber;
ExampleNumber = ExampleNumber + Example + Number;
printText(ExampleNumber);

Do you notice something? The names of the variables are still terrible, but the strain to read it went way down.
That happens because your brain now has a natural anchor to identify the beginning and the end of every word, enabling you to abstract that part of your thinking away. You don't need to worry about that context anymore - you see a break in the text, you know a new identifier is coming.

When reading code, your brain doesn't so much read the words as match them against what you have in your mind right now. You don't really stop to read "ExampleWord". You see the overall shape of the thing, ExxxxxxWxxd, match it with whatever you have stashed in your mental heap, and then go ahead reading. That's why it is easy to miss mistakes like "ExampleWord = ExapmleWord" - your brain isn't really reading it. You're just matching up similar stuff.

Once more, consider the following:

Example Word += Example Word + 1;

Now imagine yourself trying to debug that code. Imagine how many times you'll miss that extra space in "Example Word". A misplaced letter is already hard as fork to detect at first glance; an extra space is an order of magnitude worse.

In the end, it is hard to say that allowing whitespace would make the text more readable. I find it difficult to believe that the added hassle of extra delimiters and the extra overhead on my brain would make this type of functionality worth using if the language I'm working with had it. Personally, I consider it bad design - not because of the hassle for the compiler, interpreter, or whatever, but because my brain trips on those spaces, thinking a new identifier is about to begin when it is not. In a sense, our brains suffer the same problems as our processors when it comes to branch prediction. So please, be kind to our trains of thought. Don't put whitespace in your identifiers.

I completely forgot to mention that a language I use every single day accepts spaces in identifiers - SQL! That doesn't mean it is a good idea to use them, however.
Most people I know agree it's a Bad Idea to put spaces in your identifiers - to the point that it's sometimes a forgotten feature of the language.
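The delimiter tax is easy to see in SQL itself. A tiny demonstration through SQLite's Python binding (SQLite delimits identifiers with double quotes; SQL Server uses brackets, but the idea is the same):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Double quotes turn "Example Number" into a single delimited identifier.
con.execute('CREATE TABLE t ("Example Number" INTEGER)')
con.execute('INSERT INTO t ("Example Number") VALUES (5)')

# Every later reference must carry the delimiters too - forget them and
# the parser sees two tokens instead of one identifier.
row = con.execute('SELECT "Example Number" + 5 FROM t').fetchone()
print(row[0])  # 10
```

Every single reference to the column pays the quoting cost, which is exactly the extra parsing burden described above.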
{}
{'log_upvote_score': 8, 'links': ['https://softwareengineering.stackexchange.com/questions/401594', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/351728/']}
jdg_376350
stackexchange
llm_judgeable_groundtruth_similarity
22991294
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: there is a QJsonObject { "a":"...", "b":"...", "c":"..." } is there a method to check if this object contains "a"? Now provide the response and nothing else.
You have a few options, according to the documentation : The most obvious is QJsonObject::contains which returns a bool You can call QJsonObject::find which will return an iterator. If the item isn't found, the return value will be equal to QJsonObject::end Use this if you need an iterator anyways. You can call QJsonObject::value , which will return the value for the key if present, and QJsonValue::Undefined otherwise. You're probably using the value method anyways to get the value for a key, so this will allow you to do one lookup instead of two. It may be tempting to use this for a performance boost, but remember that it will be harder to read and in most cases the performance gain is small enough that it's probably not worth it All of this came directly from the Qt documentation - my favorite thing about Qt is their fantastic documentation, so I encourage you to make that your first stop when you have questions like these.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22991294', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2714012/']}
jdg_376351
stackexchange
llm_judgeable_groundtruth_similarity
12326705
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Hi, I've been having a problem trying to install the Ruby gem bundle. I follow the directions gem install bundle after I receive this message, but it still doesn't work when I type bundle -v . I also type which bundle and receive /Users/edmundmai/.rvm/bin/bundle so it exists!! So why does it not work!! Is there something wrong with the mysterious $PATH that I don't get? (I'm a noob). Here's my .bash_profile:

PATH=$PATH:~/bin
export PATH

[[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm" # Load RVM into a shell session *as a function*
[[ -s "$HOME/.rvm/scripts/rvm" ]] && . "$HOME/.rvm/scripts/rvm" # Load RVM function

Here's my .bashrc :

PATH=$PATH:$HOME/.rvm/bin # Add RVM to PATH for scripting
[[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm"

Here's my gem env:

RubyGems Environment:
- RUBYGEMS VERSION: 1.8.24
- RUBY VERSION: 1.9.3 (2012-04-20 patchlevel 194) [x86_64-darwin11.3.1]
- INSTALLATION DIRECTORY: /Users/edmundmai/.rvm/gems/ruby-1.9.3-p194@rails3tutorial2ndEd
- RUBY EXECUTABLE: /Users/edmundmai/.rvm/rubies/ruby-1.9.3-p194/bin/ruby
- EXECUTABLE DIRECTORY: /Users/edmundmai/.rvm/gems/ruby-1.9.3-p194@rails3tutorial2ndEd/bin
- RUBYGEMS PLATFORMS:
  - ruby
  - x86_64-darwin-11
- GEM PATHS:
  - /Users/edmundmai/.rvm/gems/ruby-1.9.3-p194@rails3tutorial2ndEd
  - /Users/edmundmai/.rvm/gems/ruby-1.9.3-p194@global
- GEM CONFIGURATION:
  - :update_sources => true
  - :verbose => true
  - :benchmark => false
  - :backtrace => false
  - :bulk_threshold => 1000
  - "install" => "--no-rdoc --no-ri"
  - "update" => "--no-rdoc --no-ri"
- REMOTE SOURCES:
  - http://rubygems.org/

echo $PATH
/Users/edmundmai/.rvm/bin:/Users/edmundmai/.rvm/gems/ruby-1.9.3-p194@rails3tutorial2ndEd/bin:/Users/edmundmai/.rvm/gems/ruby-1.9.3-p194@global/bin:/Users/edmundmai/.rvm/rubies/ruby-1.9.3-p194/bin:/Users/edmundmai/.rvm/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/Users/edmundmai/bin:/Users/edmundmai/bin

UPDATE

So after trying to go through the RoR tutorial again, I repeated a few of the steps from earlier in the chapter in one of my two terminal windows (let's call them Terminal #1 and Terminal #2). Terminal #2 still doesn't work (the command line isn't on any directory, just the default directory) while Terminal #1 (at my sample_app directory) works. I think rvm get head && rvm reload might be the key? But does that mean I have to run that every time I open my terminal? Check out my terminal #1 commands for before and after (I deleted some of the output so that it would be more clear what my input lines were):

Edmunds-MacBook-Pro:sample_app edmundmai$ bundle -v
ERROR: Gem bundler is not installed, run `gem install bundler` first.
Edmunds-MacBook-Pro:sample_app edmundmai$ gem install
ERROR: While executing gem ... (Gem::CommandLineError)
Please specify at least one gem name (e.g. gem build GEMNAME)
Edmunds-MacBook-Pro:sample_app edmundmai$ rvm -v
rvm 1.15.8 (master) by Wayne E. Seguin <wayneeseguin@gmail.com>, Michal Papis <mpapis@gmail.com> [https://rvm.io/]
Edmunds-MacBook-Pro:sample_app edmundmai$ rspec spec/
No DRb server is running. Running in local process instead ...
........
Finished in 0.36709 seconds
8 examples, 0 failures
Randomized with seed 59500
Edmunds-MacBook-Pro:sample_app edmundmai$ subl .
Edmunds-MacBook-Pro:sample_app edmundmai$ bundle install --binstubs=./bundler_stubs
Using rake (0.9.2.2)
...(**list of stuff)
Using uglifier (1.2.3)
Your bundle is complete!
Use `bundle show [gemname]` to see where a bundled gem is installed.
Edmunds-MacBook-Pro:sample_app edmundmai$ bundle show bundler
/Users/edmundmai/.rvm/gems/ruby-1.9.3-p194@rails3tutorial2ndEd/gems/bundler-1.2.0

I'm extremely unclear as to what the $PATH does. If you guys need any more info, feel free to tell me and I'll provide it. Now provide the response and nothing else.
I think this is the problem: You have bundler installed to a specific gemset , which is why it's only available when you're in your app's directory (I'm assuming there's a .rvmrc file in there). You have a few options: Install bundler to a global gemset. rvm gemset use global && gem install bundler If you have Homebrew installed, just do brew install ruby and avoid rvm altogether. (There's also rbenv and ry as alternatives to rvm, but I just use 1.9.3 across all my apps, so Homebrew is fine.) For reference, $PATH is a shell environmental variable containing a list of directories that hold executables (e.g., echo , ls , vim , etc.). It's intrinsic to shells.
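On the $PATH mechanics: the shell walks the colon-separated directories left to right and runs the first match it finds. A throwaway demonstration (made-up directories and script names, nothing rvm-specific):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/first" "$tmp/second"

printf '#!/bin/sh\necho from-first\n'  > "$tmp/first/hello"
printf '#!/bin/sh\necho from-second\n' > "$tmp/second/hello"
chmod +x "$tmp/first/hello" "$tmp/second/hello"

PATH="$tmp/first:$tmp/second:$PATH"
first_winner=$(hello)        # found in the leftmost directory

PATH="$tmp/second:$tmp/first:$PATH"
hash -r                      # drop the shell's cached lookup after editing PATH
second_winner=$(hello)

echo "$first_winner"         # from-first
echo "$second_winner"        # from-second
rm -rf "$tmp"
```

This is why the order of the rvm entries in the PATH above matters: whichever bundle appears in the leftmost matching directory is the one that runs.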
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/12326705', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1555312/']}
jdg_376352
stackexchange
llm_judgeable_groundtruth_similarity
6818441
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to output the results of a SELECT statement as a JSON object. I would like this to be a Function and not a stored procedure ! For example, the below table Users id name active1 Bob Jones 12 John Smith 0 Would be returned like this: [{"id":1,"name":"Bob Jones","active":1},{"id":2,"name":"John Smith","active":0}] Thanks in advance. Now provide the response and nothing else.
Starting from SQL Server 2016 you can use for json :

declare @t table(id int, name nvarchar(max), active bit)
insert @t values (1, 'Bob Jones', 1), (2, 'John Smith', 0)

select id, name, active
from @t
for json auto

With older versions of SQL Server you can use for xml path , e.g.:

select '[' + STUFF((
    select
        ',{"id":' + cast(id as varchar(max)) +
        ',"name":"' + name + '"' +
        ',"active":' + cast(active as varchar(max)) + '}'
    from @t t1
    for xml path(''), type
).value('.', 'varchar(max)'), 1, 1, '') + ']'

Output:

[{"id":1,"name":"Bob Jones","active":1},{"id":2,"name":"John Smith","active":0}]
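Outside SQL Server, it's easy to sanity-check the target shape. The same two rows serialized with Python's json module produce exactly the string the question asks for, which is handy when wiring up whatever consumes the query's output:

```python
import json

rows = [
    {"id": 1, "name": "Bob Jones", "active": 1},
    {"id": 2, "name": "John Smith", "active": 0},
]

# separators=(",", ":") drops the spaces so the result matches the
# compact form that FOR JSON AUTO emits.
result = json.dumps(rows, separators=(",", ":"))
print(result)
# [{"id":1,"name":"Bob Jones","active":1},{"id":2,"name":"John Smith","active":0}]
```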
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6818441', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/150939/']}
jdg_376353
stackexchange
llm_judgeable_groundtruth_similarity
19282815
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using javax.validation.Validation to validate jpa entities. I am always validating against the same Entities. I would like to know if it is better to use one Validator for all validations or to instantiate a new Validator each time I validate. As well, how expensive in terms of computation is it for me to instantiate a new validator each time I would like to use it? Option1: instantiate new validator for each validation.

public class Validator {
    public static void main(String[] args) {
        //Validation1
        ValidatorFactory factory = Validation.buildDefaultValidatorFactory();
        Validator validator = factory.getValidator();
        Entity entityToValidate = new Entity();
        entityToValidate.setEmail("NOT_A_VALID_EMAIL@@@tomanyat.com");
        Set<ConstraintViolation<T>> constraintViolations = validator.validate(entityToValidate);

        //Validation2 (Note that validator has been created yet a second time !! )
        Validator validator2 = factory.getValidator();
        Entity entityToValidate2 = new Entity();
        entityToValidate.setEmail("NOT_A_VALID_EMAIL@@@tomanyat.com");
        Set<ConstraintViolation<T>> constraintViolations2 = validator2.validate(entityToValidate);
    }
}

Option2: single validator for all validations.
public class Validator {
    public static void main(String[] args) {
        //Validator created only once
        ValidatorFactory factory = Validation.buildDefaultValidatorFactory();
        Validator validator = factory.getValidator();

        //Validation #1
        Entity entityToValidate = new Entity();
        entityToValidate.setEmail("NOT_A_VALID_EMAIL@@@tomanyat.com");
        Set<ConstraintViolation<T>> constraintViolations = validator.validate(entityToValidate);

        //Validation #2
        Entity entityToValidate2 = new Entity();
        entityToValidate.setEmail("NOT_A_VALID_EMAIL@@@tomanyat.com");
        Set<ConstraintViolation<T>> constraintViolations2 = validator.validate(entityToValidate);
    }
}

Now provide the response and nothing else.
Note how the Validator javadoc states Validates bean instances. Implementations of this interface must be thread-safe. As such a Validator shouldn't really contain state, unless that state is also thread-safe. Therefore, you shouldn't need to create a new Validator instance, just re-use the same one, depending on the types obviously. As well, how expensive in terms of computation is it for me to instantiate a new validator each time I would like to use it? This depends on your Validator . But the instantiation (creating but not initializing the object) itself is almost completely negligible, especially when you consider all the processing that a JPA implementation performs.
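To see why reuse is safe for a stateless object, here is a plain-JDK sketch. EmailCheck is a made-up stand-in for a real javax.validation Validator (which would need an implementation such as Hibernate Validator on the classpath), so only the sharing pattern is the point:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SharedValidatorDemo {
    // Stand-in for a Validator: it holds no mutable state, so one
    // instance can be shared by any number of threads.
    static final class EmailCheck {
        boolean isValid(String s) {
            return s != null && s.indexOf('@') > 0 && !s.contains("@@");
        }
    }

    public static void main(String[] args) throws Exception {
        EmailCheck validator = new EmailCheck();   // built once, reused everywhere
        ExecutorService pool = Executors.newFixedThreadPool(4);

        Future<Boolean> ok  = pool.submit(() -> validator.isValid("bob@example.com"));
        Future<Boolean> bad = pool.submit(() -> validator.isValid("NOT_A_VALID_EMAIL@@@tomanyat.com"));

        System.out.println(ok.get());    // true
        System.out.println(bad.get());   // false
        pool.shutdown();
    }
}
```

In the real API the expensive object is the ValidatorFactory; build it once, call getValidator() as needed, and share the result across validations.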
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19282815', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2763514/']}
jdg_376354
stackexchange
llm_judgeable_groundtruth_similarity
1077737
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Using OCUnit, is there a way to test delegate protocols? I'm trying this, which doesn't work.

-(void) testSomeObjDelegate {
    SomeObj *obj = [[SomeObj alloc] initWithDelegate:self];
    [obj executeMethod];
}

-(void) someObjDelegateMethod {
    //test something here
}

I'm going to try calling the obj method on a different thread and have the test sleep until the delegate is called. It just seems like there should be an easier way to test this. Now provide the response and nothing else.
Testing a delegate is trivial. Just set an ivar in the test in your callback method, and check it after what should be triggering the delegate callback. For example, if I have a class Something that uses a delegate of protocol SomethingDelegate and sends that delegate -something:delegateInvoked: in response to some message, I can test it like this:

@interface TestSomeBehavior : SenTestCase <SomethingDelegate>
{
    Something *_object;
    BOOL _callbackInvoked;
}
@end

@implementation TestSomeBehavior

- (void)setUp {
    [super setUp];
    _object = [[Something alloc] init];
    _object.delegate = self;
}

- (void)tearDown {
    _object.delegate = nil;
    [_object release];
    [super tearDown];
}

- (void)testSomeBehaviorCallingBack {
    [_object doSomethingThatShouldCallBack];
    STAssertTrue(_callbackInvoked, @"Delegate should send -something:delegateInvoked:");
}

- (void)something:(Something *)something delegateInvoked:(BOOL)invoked {
    _callbackInvoked = YES;
}

@end

I think you already understand this, however, from the way you've phrased your question. (I'm mostly posting this for other readers.) I think you're actually asking a more subtle question: How do I test something that may occur later, such as something that spins the runloop? My cue is your mention of sleeping and threading. First off, you should not just arbitrarily invoke a method on another thread. You should only do so if it's documented to be safe to use in that way. The reason is that you don't know what the internals of the class do. For example, it might schedule events on the run loop, in which case running the method on a different thread will make them happen on a different run loop. This would then screw up the class's internal state. If you do need to test something that may take a little time to happen, you can do this just by running the current run loop.
Here's how I might rewrite the individual test method above to do that:

- (void)testSomeBehaviorCallingBack {
    NSDate *fiveSecondsFromNow = [NSDate dateWithTimeIntervalSinceNow:5.0];
    [_object doSomethingThatShouldCallBack];
    [[NSRunLoop currentRunLoop] runUntilDate:fiveSecondsFromNow];
    STAssertTrue(_callbackInvoked, @"Delegate should send -something:delegateInvoked:");
}

This will spin the current run loop in the default mode for 5 seconds, under the assumption that -doSomethingThatShouldCallBack will schedule its work on the main run loop in the default mode. This is usually OK because APIs that work this way often let you specify a run loop to use as well as a mode to run in. If you can do that, then you can use -[NSRunLoop runMode:beforeDate:] to run the run loop in just that mode instead, making it more likely that the work you're expecting to be done will be.
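The same wait-for-the-callback pattern shows up outside Cocoa too. Here is the shape of it in Python, with a threading.Timer standing in for the deferred work - none of this is OCUnit or Foundation API, just the structure of the test:

```python
import threading
import time

class SomethingLike:
    """Toy async object: calls its delegate back a moment later."""
    def __init__(self, delegate):
        self.delegate = delegate

    def do_something_that_should_call_back(self):
        threading.Timer(0.05, self.delegate.on_delegate_invoked).start()

class DelegateTest:
    def __init__(self):
        self.invoked = False          # the ivar the callback flips

    def on_delegate_invoked(self):
        self.invoked = True

    def run(self):
        obj = SomethingLike(self)
        obj.do_something_that_should_call_back()
        deadline = time.time() + 5    # like runUntilDate: five seconds out
        while not self.invoked and time.time() < deadline:
            time.sleep(0.01)          # stand-in for spinning the run loop
        assert self.invoked, "delegate should have been called back"

test = DelegateTest()
test.run()
print("callback observed")
```

The key design point carries over: wait with a deadline rather than a fixed sleep, so the test finishes as soon as the callback fires and fails promptly when it never does.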
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1077737', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/132407/']}
jdg_376355
stackexchange
llm_judgeable_groundtruth_similarity
76750
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: I know that all compact Riemann surfaces with the same genus are topologically equivalent. Moreover they are diffeomorphic. But are they biholomorphic, too?In other words, is the complex structure conserved? Now provide the response and nothing else.
Some magic words for this question are "moduli space" or "moduli stack". In the early days, one was interested in a variety or variety-like object which would classify projective complex curves (compact Riemann surfaces) of given genus $g$, i.e., whose points correspond to isomorphism classes of curves (or biholomorphism classes of compact Riemann surfaces). This is nowadays called a "coarse moduli space". As GH and François commented, there is a whole continuum of points in the coarse moduli space of genus 1; the same is true for any genus $g > 1$. Over time, it became apparent that the coarse moduli space is not the most fundamental object of study. Some information that is desirable to have that the coarse moduli space misses is: what are the possible automorphisms on a fixed compact Riemann surface? For example, in the case of an elliptic curve (genus 1), the automorphism group is infinite and acts transitively on the curve. ( Edit : this remark may be slightly misleading because it is more usual to consider elliptic curves with a chosen origin, and this cuts way down on the automorphism group. Thanks to Donu Arapura for pointing this out in comments.) Not so in higher genus; curves of higher genus are much more rigid, and in fact have only finite automorphism groups . (I think to me this was a bigger shock than finding out about the plenitude of complex manifold structures on a given curve. In ordinary smooth manifold theory, all the points are pretty much alike, in that one can construct a diffeomorphism that takes one point to another. But in complex curve theory, points can have different "personalities"; for example, cf. Weierstrass points .) Anyway, the better object of study in these questions, which parametrizes not only isomorphism classes of curves but also isomorphisms between them, is called a moduli stack . You can begin reading about them here .
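Two standard facts behind this answer, stated precisely - the dimension count is the classical Riemann moduli count, and the finiteness claim for higher genus is quantitative (Hurwitz's automorphism theorem):

```latex
\dim_{\mathbb{C}} \mathcal{M}_g =
  \begin{cases}
    0      & g = 0 \quad (\text{only } \mathbb{P}^1), \\
    1      & g = 1 \quad (\text{classified by the } j\text{-invariant}), \\
    3g - 3 & g \ge 2,
  \end{cases}
\qquad
\left|\operatorname{Aut}(X)\right| \le 84\,(g - 1) \quad \text{for } g \ge 2 .
```

So for every genus $g \ge 1$ there is a positive-dimensional family of isomorphism classes, which is exactly why same-genus Riemann surfaces are generally not biholomorphic even though they are diffeomorphic.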
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/76750', 'https://mathoverflow.net', 'https://mathoverflow.net/users/18041/']}
jdg_376356
stackexchange
llm_judgeable_groundtruth_similarity
2734313
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is the complexity of the algorithm is that is used to find the smallest snippet that contains all the search key words? Now provide the response and nothing else.
As stated, the problem is solved by a rather simple algorithm: Just look through the input text sequentially from the very beginning and check each word: whether it is in the search key or not. If the word is in the key, add it to the end of the structure that we will call The Current Block . The Current Block is just a linear sequence of words, each word accompanied by a position at which it was found in the text. The Current Block must maintain the following Property : the very first word in The Current Block must be present in The Current Block once and only once. If you add the new word to the end of The Current Block, and the above property becomes violated, you have to remove the very first word from the block. This process is called normalization of The Current Block. Normalization is a potentially iterative process, since once you remove the very first word from the block, the new first word might also violate The Property, so you'll have to remove it as well. And so on. So, basically The Current Block is a FIFO sequence: the new words arrive at the right end, and get removed by normalization process from the left end. All you have to do to solve the problem is look through the text, maintain The Current Block, normalizing it when necessary so that it satisfies The Property. The shortest block with all the keywords in it you ever build is the answer to the problem. For example, consider the text CxxxAxxxBxxAxxCxBAxxxC with keywords A, B and C. 
Looking through the text you'll build the following sequence of blocks:

C
CA
CAB - all words, length 9 (CxxxAxxxB...)
CABA - all words, length 12 (CxxxAxxxBxxA...)
CABAC - violates The Property, remove first C
ABAC - violates The Property, remove first A
BAC - all words, length 7 (...BxxAxxC...)
BACB - violates The Property, remove first B
ACB - all words, length 6 (...AxxCxB...)
ACBA - violates The Property, remove first A
CBA - all words, length 4 (...CxBA...)
CBAC - violates The Property, remove first C
BAC - all words, length 6 (...BAxxxC)

The best block we built has length 4, which is the answer in this case:

CxxxAxxxBxxAxx CxBA xxxC

The exact complexity of this algorithm depends on the input, since it dictates how many iterations the normalization process will make, but ignoring the normalization the complexity would trivially be O(N * log M), where N is the number of words in the text and M is the number of keywords, and O(log M) is the complexity of checking whether the current word belongs to the keyword set. Now, having said that, I have to admit that I suspect that this might not be what you need. Since you mentioned Google in the caption, it might be that the statement of the problem you gave in your post is not complete. Maybe in your case the text is indexed? (With indexing the above algorithm is still applicable, just becomes more efficient). Maybe there's some tricky database that describes the text and allows for a more efficient solution (like without looking through the entire text)? I can only guess and you are not saying...
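The Current Block bookkeeping translates almost line for line into code. A sketch in Python (the function name and the (start, end) return convention are mine):

```python
from collections import deque

def smallest_snippet(words, keywords):
    """Smallest window of `words` containing every keyword, returned as an
    inclusive (start, end) pair of positions, or None if none exists."""
    keyset = set(keywords)
    block = deque()                   # The Current Block: (word, position) FIFO
    counts = {}                       # keyword occurrences inside the block
    best = None
    for pos, word in enumerate(words):
        if word not in keyset:
            continue
        block.append((word, pos))
        counts[word] = counts.get(word, 0) + 1
        # Normalization: the very first word must occur exactly once.
        while counts[block[0][0]] > 1:
            dropped, _ = block.popleft()
            counts[dropped] -= 1
        # If the block holds all keywords, record it if it is the shortest yet.
        if len(counts) == len(keyset) and all(counts.values()):
            start, end = block[0][1], block[-1][1]
            if best is None or end - start < best[1] - best[0]:
                best = (start, end)
    return best

text = "CxxxAxxxBxxAxxCxBAxxxC"
print(smallest_snippet(text, {"A", "B", "C"}))   # (14, 17): the CxBA window
```

Run on the example from the answer it finds the same length-4 window, positions 14 through 17 (0-based).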
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2734313', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/207335/']}
jdg_376357
stackexchange
llm_judgeable_groundtruth_similarity
3659694
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This example works, but I think that the memory leaks. The function is used in a simple web server module, and thus shared memory grows if you use this function.

char *str_replace ( const char *string, const char *substr, const char *replacement ){
  char *tok = NULL;
  char *newstr = NULL;
  char *oldstr = NULL;

  if ( substr == NULL || replacement == NULL )
    return strdup (string);
  newstr = strdup (string);
  while ( (tok = strstr ( newstr, substr ))){
    oldstr = newstr;
    newstr = malloc ( strlen ( oldstr ) - strlen ( substr ) + strlen ( replacement ) + 1 );
    memset ( newstr, 0, strlen ( oldstr ) - strlen ( substr ) + strlen ( replacement ) + 1 );
    if ( newstr == NULL ){
      free (oldstr);
      return NULL;
    }
    memcpy ( newstr, oldstr, tok - oldstr );
    memcpy ( newstr + (tok - oldstr), replacement, strlen ( replacement ) );
    memcpy ( newstr + (tok - oldstr) + strlen( replacement ), tok + strlen ( substr ), strlen ( oldstr ) - strlen ( substr ) - ( tok - oldstr ) );
    memset ( newstr + strlen ( oldstr ) - strlen ( substr ) + strlen ( replacement ), 0, 1 );
    free (oldstr);
  }
  return newstr;
}

Now provide the response and nothing else.
One problem I can see is that if the replacement string contains the search string, you'll loop forever (until you run out of memory). For example: char *result = str_replace("abc", "a", "aa"); Also, doing another malloc/free every time you replace one instance is pretty expensive. A better approach would be to do exactly 2 passes over the input string: the first pass, count how many instances of the search string are present now that you know how many matches, compute the length of your result & malloc once: strlen(string) + matches*(strlen(replacement)-strlen(substr)) + 1 make a second pass through the source string, copying/replacing
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3659694', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/418507/']}
jdg_376358
stackexchange
llm_judgeable_groundtruth_similarity
564603
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I currently have this in my .bashrc since I use git status and git diff often. I would like to be able to read other entries that may be passed in as options like -s with git diff . How can I do that in a function rather than an alias ? I only know that $# will give me the number of arguments passed in but how do I paste all of them after say status on line 48? 42 # ========================================================= 43 # Git 44 # ========================================================= 45 g () { 46 case $1 in 47 "s") 48 git status 49 ;; 50 "d") 51 git diff 52 ;; 53 esac 54 } Now provide the response and nothing else.
"$@" will be replaced with all the arguments, correctly quoted, so after shifting to remove the sub-command shortcut:

g () {
    cmd=$1
    shift
    case "$cmd" in
        s) git status "$@" ;;
        d) git diff "$@" ;;
    esac
}

Instead of doing this though, I suggest using git aliases; to set the above up:

git alias s status
git alias d diff

or, if you don’t have git-alias (typically in git-extras ),

git config --global alias.s status
git config --global alias.d diff

(you only need to do this once, the aliases are stored in ~/.gitconfig ). Then in your shell,

alias g=git

And you’ll find g s etc. work as you’d expect, including with arguments.
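The quoting is the part worth testing for yourself. This standalone snippet (made-up function, no git involved) shows that "$@" keeps an argument containing spaces intact even after shift:

```shell
show_args() {
    cmd=$1
    shift
    printf 'cmd=%s\n' "$cmd"
    for a in "$@"; do            # each argument arrives as one word
        printf 'arg=[%s]\n' "$a"
    done
}

show_args d --stat "file with spaces.txt"
# cmd=d
# arg=[--stat]
# arg=[file with spaces.txt]
```

With an unquoted $@ (or $*) the filename would split into three words, which is exactly the bug the quoted form prevents when forwarding options to git.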
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/564603', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/313163/']}
jdg_376359
stackexchange
llm_judgeable_groundtruth_similarity
17985
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $X_1,\dots, X_n$ be 0-1 random variables, which are $3$-wise independent. We want to give a upper bound to $\Pr(|\Sigma_iX_i-\mu|\geq t)$. Can we get better bound than $\Theta\left(\frac{1}t\right)$ derived by Chebyshev's inequality? If we get a little more information, say $4$-wise independence, then we can use the $4$-th moment to get a $\Theta\left(\frac{1}{t^2}\right)$ bound. Let $Y_i = X_i - \mu$ , $Y = \Sigma_iY_i$, $$\Pr(|Y|\geq t) = \Pr(Y^4\geq t^4) \leq\frac{\mathbb{E}(Y^4)}{t^4}=\frac{t\mathbb{E}(Y_1^4)+3\cdot t\cdot (t-1)\mathbb{E}(Y_1^2)}{t^4}=\Theta\left(\frac{1}{t^2}\right).$$ If we only have $3$-wise independence, the numerator in the RHS will be $$t\mathbb{E}(Y_1^4)+3\cdot t\cdot (t-1)\mathbb{E}(Y_1^2)+24\cdot\sum_{i<j<k<l}\mathbb{E}(Y_iY_jY_kY_l).$$ I don't know how to deal with the summation. I guess the answer may lie around $\Theta\left(\frac{1}{t^{1.5}}\right)$? Now provide the response and nothing else.
I guess that the number of random variables $t$ and the threshold $t$ are different parameters, as otherwise $\Pr[|Y| \geq t] = 0$. Let $a_1, \dots, a_k, b_1, \dots, b_k\in_U \{\pm 1\}$ be iid random variables sampled uniformly at random from $\{\pm 1\}$ and $n=2^k$. Consider random variables $W_1,\dots, W_n$ of the form $c_1 \cdot c_2\cdot \dots \cdot c_k$ where each $c_i$ is either equal to $a_i$ or $b_i$ (e.g., one of random variables $W_i$ equals $a_1b_2b_3a_4b_5 \dots a_k$). We defined $n$ random variables $W_i$; note that they are $3$-wise independent. Let $X_i = (W_i+1)/2$.Then $X_i\in\{0,1\}$ and ${\mathbb E}[X_i] = 1/2$. Let $t=n/2$. We have, $$\Pr(|\sum X_i - n/2| \geq t) = \Pr(|\sum W_i| \geq n) = \Pr(|\prod_{i=1}^k(a_i+b_i)| \geq n) = \Pr(a_i+b_i\neq 0 \text{ for every }i) = \frac{1}{2^k} = \frac{1}{2t}.$$Here, we used that $\prod_{i=1}^k(a_i+b_i)$ equals $0$ if $a_i+b_i=0$ for some $i$, and equals $-n$ or $n$, otherwise. Answer: in general, we cannot obtain a stronger upper bound than $O(1/t)$. Update: here is an explanation why $W_1,\dots, W_n$ and consequently $X_1,\dots, X_n$ are 3-wise independent. Each of the words $W_i$ is encoded by a word of length $k$: the $r$-th letter in the word is $a$ if $c_r=a_r$ in the definition of $W_i$, and $b$, if $c_r=b_r$. Consider three random variables $W_i$, $W_j$ and $W_k$. Consider the corresponding words; denote them by $I$, $J$ and $K$. Let $r_1$ be the first position where they differ. Without loss of generality $I_{r_1} = a$, $J_{r_1} = b$, and $K_{r_1} = b$. Then let $r_2$ be the first position where $J$ and $K$ differ. WLOG, $J_{r_2} = a$ and $K_{r_2} = b$. Write $W_i = a_{r_1} \cdot W'_i$, $W_j = a_{r_2}\cdot W_j'$ and $W_k = b_{r_2}\cdot W_k'$. Note that $W_j$ and $W_k$ are independent since for $x,y\in\{\pm 1\}$, we have$$\Pr(W_j = x, W_k = y) = \Pr(a_{r_2} = (x/W_j'), b_{r_2} = (y/W_k')) = 1/4.$$Now we prove that $W_i$, $W_j$, and $W_k$ are independent. 
For $x,y,z\in\{\pm 1\}$, we have$$\Pr(W_i = x, W_j = y, W_k = z) = \frac{1}{4}\Pr(W_i = x| W_j = y, W_k = z) = \frac{1}{4}\cdot \Pr(a_{r_1} = (x/W'_i)| W_j = y, W_k = z) = \frac{1}{4}\cdot \frac{1}{2} = \frac{1}{8}.$$
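The construction is small enough to check exhaustively by machine. This sketch enumerates every assignment of $a_1,\dots,a_k,b_1,\dots,b_k$ for $k=3$ (so $n=8$) and confirms that $\Pr(|\sum_i W_i| \geq n) = 1/2^k$, matching the derivation above:

```python
from itertools import product

k = 3
n = 2 ** k

hits = 0
total = 0
for bits in product([-1, 1], repeat=2 * k):   # all (a_1..a_k, b_1..b_k)
    a, b = bits[:k], bits[k:]
    # W_i multiplies, for each position r, either a_r or b_r,
    # according to the r-th bit of i -- the word encoding above.
    s = 0
    for i in range(n):
        w = 1
        for r in range(k):
            w *= a[r] if (i >> r) & 1 else b[r]
        s += w
    total += 1
    if abs(s) >= n:
        hits += 1

print(hits, "/", total, "=", hits / total)   # 8 / 64 = 0.125, i.e. 1/2^k
```

The sum collapses to $\prod_r (a_r + b_r)$, so the event $|\sum W_i| \geq n$ happens exactly when $a_r = b_r$ for every $r$, which the enumeration counts directly.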
{}
{'log_upvote_score': 5, 'links': ['https://cstheory.stackexchange.com/questions/17985', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/16379/']}
jdg_376360
stackexchange
llm_judgeable_groundtruth_similarity
13686237
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want a regular expression that will accept only floating point numbers from 0 to 9 and minus sign. Please help. Now provide the response and nothing else.
^[-+]?[0-9]*\.?[0-9]+$ ^ - start of string [-+]? - 0 or 1 sign indicator [0-9]* - 0 or more integers \. - the character . ( . is used in regex to mean "any character") [0-9]+ - 1 or more integers $ - the end of the string If you are instead using the comma as a decimal seperator, use , instead of \. If you are using both/either, you can use [.,]
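A quick harness for the pattern (Python here purely for illustration; the question names no language):

```python
import re

float_re = re.compile(r'^[-+]?[0-9]*\.?[0-9]+$')

# "42.", "--7" and "abc" are rejected; the rest are accepted.
for s in ["1", "-3.14", "+0.5", ".5", "42.", "--7", "abc"]:
    print(s, "->", bool(float_re.match(s)))
```

Note the anchors do real work: without ^ and $ the pattern would happily match a number embedded inside other text.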
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13686237', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1089652/']}
jdg_376361
stackexchange
llm_judgeable_groundtruth_similarity
31778977
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a function that returns an anonymous type like so (simplified for illustrative purposes)... public object GetPropertyInfo(){ return new { PropertyName = "Foo", Value = "Laa" };} When I do this... dynamic pi = GetPropertyInfo();Console.WriteLine(pi); It outputs this (the same as if I did '?pi' in the immediate window)... { PropertyName = "A", Value = 44 } PropertyName: "A" Value: 44 But if I try doing this... string propertyName = pi.PropertyName; ...it compiles but throws a runtime exception saying Exception thrown: 'Microsoft.CSharp.RuntimeBinder.RuntimeBinderException' in System.Core.dll Additional information: 'object' does not contain a definition for 'PropertyName' What gives? What am I missing here? Now provide the response and nothing else.
The problem is that anonymous types are internal , which means that you can't access their properties with dynamic property accessors from projects other than the one they were created in. The dynamic binding treats them as the closest public inherited type it knows about-- object . To fix this, you can declare a public type to represent the values you're expecting to find in your anonymous type. This is probably a good idea anyway, since you're clearly expecting to consume the returned properties in other parts of your code. Using a declared type also enables you to maintain type-safety, avoiding the need for dynamic entirely. If you absolutely must use dynamic s here, the next best option is probably to change your AssemblyInfo.cs file to make internal properties accessible to the project you're trying to access them from: [assembly:InternalsVisibleTo("MyOtherProject")]
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31778977', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/168179/']}
jdg_376362
stackexchange
llm_judgeable_groundtruth_similarity
154232
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I use something like this to unmount a range of drives: umount /dev/sd[c-k]2 Is there any way to use same thing with mount? something like this: mount /dev/sd[c-k]2 /[c2-k2] Now provide the response and nothing else.
Globbing (which is what you're doing with your wildcard matching) will expand the current command line. For example:

    ls [abc]1

gets expanded to:

    ls a1 b1 c1

Globbing only works where the command allows multiple arguments. While

    umount /dev/sdc2 /dev/sdd2

works, there's no way to express the same thing for mount. So you have to loop it:

    for d in /[c-k]2
    do
        mount "/dev/sd$(basename "$d")" "$d"
    done

(Note that basename only takes a single path, so it is applied inside the loop, once per mount point, rather than once to the whole glob.)
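As a sanity check before touching real devices, the same loop shape can be dry-run with echo; the names c2, d2, e2 below are made-up stand-ins for the real /[c-k]2 mount-point directories:

```shell
# Dry run: print each mount command instead of executing it.
for d in c2 d2 e2
do
    echo "mount /dev/sd${d} /${d}"
done
```

Once the printed commands look right, drop the echo and use the real glob.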
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/154232', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/73778/']}
jdg_376363
stackexchange
llm_judgeable_groundtruth_similarity
3267145
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a makefile structured something like this:

all : compile executable

clean :
	rm -f *.o $(EXEC)

I realized that I was consistently running "make clean" followed by "clear" in my terminal before running "make all". I like to have a clean terminal before I try and sift through nasty C++ compilation errors. So I tried to add a 3rd target:

fresh :
	rm -f *.o $(EXEC)
	clear
	make all

This works, however this runs a second instance of make (I believe). Is there a right way to get the same functionality without running a 2nd instance of make? Now provide the response and nothing else.
Actually you are right: it runs another instance of make. A possible solution would be:

.PHONY : clearscr fresh clean all

all : compile executable

clean :
	rm -f *.o $(EXEC)

fresh : clean clearscr all

clearscr:
	clear

By calling make fresh you get first the clean target, then the clearscr target which runs clear, and finally all which does the job.

EDIT Aug 4: What happens in the case of parallel builds with make's -j option? There's a way of fixing the order. From the make manual, section 4.2:

Occasionally, however, you have a situation where you want to impose a specific ordering on the rules to be invoked without forcing the target to be updated if one of those rules is executed. In that case, you want to define order-only prerequisites. Order-only prerequisites can be specified by placing a pipe symbol (|) in the prerequisites list: any prerequisites to the left of the pipe symbol are normal; any prerequisites to the right are order-only:

	targets : normal-prerequisites | order-only-prerequisites

The normal prerequisites section may of course be empty. Also, you may still declare multiple lines of prerequisites for the same target: they are appended appropriately. Note that if you declare the same file to be both a normal and an order-only prerequisite, the normal prerequisite takes precedence (since they are a strict superset of the behavior of an order-only prerequisite).

Hence the makefile becomes

.PHONY : clearscr fresh clean all

all : compile executable

clean :
	rm -f *.o $(EXEC)

fresh : | clean clearscr all

clearscr:
	clear

EDIT Dec 5: It is not a big deal to run more than one makefile instance, since each command inside a task is a sub-shell anyway. But you can have reusable methods using the call function:

log_success = (echo "\x1B[32m>> $1\x1B[39m")
log_error = (>&2 echo "\x1B[31m>> $1\x1B[39m" && exit 1)

install:
	@[ "$(AWS_PROFILE)" ] || $(call log_error, "AWS_PROFILE not set!")
	command1 # this line will be a subshell
	command2 # this line will be another subshell
	@command3 # Use `@` to hide the command line
	$(call log_success, "It works, yey!")

uninstall:
	@[ "$(AWS_PROFILE)" ] || $(call log_error, "AWS_PROFILE not set!")
	....
	$(call log_error, "Nuked!")

(The success message in install uses log_success; the original snippet mistakenly called log_error there, which would have aborted the recipe.)
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/3267145', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/394103/']}
jdg_376364
stackexchange
llm_judgeable_groundtruth_similarity
35921268
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: please tell me, how do I get the json like this:

dynamic packet = new ExpandoObject();
packet.type = "somethink";
packet.user = 12345;
packet.nets[0].amout = 123;
packet.nets[0].lower = 0;
packet.nets[1].amout = 345;
packet.nets[1].lower = 1;
string input = Newtonsoft.Json.JsonConvert.SerializeObject(packet);

It's not working; the error is: An unhandled exception of type "Microsoft.CSharp.RuntimeBinder.RuntimeBinderException" in System.Core.dll. For more information: "System.Dynamic.ExpandoObject" does not contain definitions of "nets". Thanks. Now provide the response and nothing else.
It's the ExpandoObject that's the dynamic object. The rest of the properties should be other ExpandoObject instances or regular objects, arrays, collections... For example:

packet.nets = new[]
{
    new { amount = 123, lower = 0 },
    new { amount = 345, lower = 1 }
};

Or:

packet.nets = new[]
{
    new Dictionary<string, int> { { "amount", 345 }, { "lower", 0 } },
    new Dictionary<string, int> { { "amount", 123 }, { "lower", 1 } }
};

There are many other approaches, including the use of instances of concrete classes.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35921268', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3989607/']}
jdg_376365
stackexchange
llm_judgeable_groundtruth_similarity
111126
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I was trying to find an answer everywhere, but all I can find is absorption and scattering, which don't limit the speed itself, just the quality of the signal. Now provide the response and nothing else.
The speed at which you turn on and off the light source (laser) and the speed at which you can reliably read it, are typically the major limiters. Fiber optics may allow you to use light as your data medium, and of course at the speed of light, it would seem like an almost limitless way to transmit data. But in reality, the way in which the signal is modulated and encoded are the limiting factors. On each end of a given fiber optic run, equipment has to encode and decode the pulses and frequencies of light. This is also the transition point from optics to electronics. This is why the fiber optic cable itself is relatively inexpensive, and the communications equipment it connects to can be very costly.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/111126', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/43132/']}
jdg_376366
stackexchange
llm_judgeable_groundtruth_similarity
6504
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: According to this Wikipedia article, unrestricted grammars are equivalent to Turing machines. The article notes that I can convert any Turing machine into an unrestricted grammar, but it only shows how to convert a grammar to a Turing machine. How do I indeed do that and convert a Turing machine that recognizes a language $L$ into an unrestricted grammar? I have tried replacing transition rules with grammar rules, but a Turing machine can have many different configurations of states as well... Now provide the response and nothing else.
We encode the Turing machine's tape content in sentential forms; a special set of non-terminals encodes the current state. There can only be one of them in the sentential form at any point in time, placed to the right of the symbol the TM is currently pointing at. The second crucial idea is that we have to reverse the process: TMs take the word as input and convert it to $1$ or $0$, or they don't terminate. The grammar, however, has to generate the word. Luckily, grammars are inherently non-deterministic, so we can just let it "guess" where the accepting $1$ came from; all words that cause the TM to accept can be generated then. Let $\cal{Q} = \{Q_0,\dots,Q_k\}$ the set of state-nonterminals; w.l.o.g. let $Q_0$ be the starting-state-nonterminal and $\cal{Q}_F \subseteq \cal{Q}$ the set of accepting-states-nonterminals. First, we need starting rules that generate all possible accepting configurations: $\qquad \displaystyle S \to \#1Q_f\# \qquad$ for all $Q_f \in \cal{Q}_F$. Similarly, we terminate when we "reach" the starting state in the correct position, namely on the first symbol: $\qquad \#aQ_0 \to \#a \qquad$ for all $a \in \Sigma$. Translating the actual state transitions is straight-forward: $\qquad \begin{align} aQ &\to cQ' \qquad\ \,\text{ for } a,c \in \Sigma \land (a,Q,N) \in \delta(c,Q') \\ aQb &\to acQ' \qquad \text{ for } a,b,c \in \Sigma \land (b,Q,L) \in \delta(c,Q') \\ abQ &\to cQ'b \qquad\, \text{ for } a,b,c \in \Sigma \land (a,Q,R) \in \delta(c,Q')\end{align}$ There are some technical kinks to iron out; for instance, you have to get rid of the boundary markers $\#$ at the end. That can be done by spawning two special nonterminals instead of terminating, swapping those to the ends and then removing the $\#$ along with them. Furthermore, more $\#$ have to be created on demand; that requires some hacking of the rules with $d=\#$. Also, the construction becomes a bit more complicated if the TM uses non-input symbols. 
In that case, the termination rules may be wrong: if there are non-input symbols somewhere on the tape, we have not generated a proper word. This can be fixed similarly to removing $\#$: spawn a special non-terminal from $Q_0$ that is swapped to the right and only removed if all symbols are from $\Sigma$.
{}
{'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/6504', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/4414/']}
jdg_376367
stackexchange
llm_judgeable_groundtruth_similarity
393908
Below is a question asked on the forum meta.stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: As someone who has not been at this for a very long time, I find this really dumb. I took my own personal time to try and help make the site better and I am banned from the review queues for a week because a moderator didn't agree with me that a post could have been improved with editing. I went back and looked at the post in question and the other two reviewers had also marked it as requires editing. There was an actual question in the post, and with some editing, it could have been formatted into a question that would be usable on the site. I am just having a hard time understanding the review ban. So because a mod disagreed with all of the reviewers and decided they wanted to close the question we are all now banned for a week? This is the first time I have ever had a review marked as unhelpful. If I have to constantly worry about a mod disagreeing with me and getting me banned, why would I even bother continuing to try and help the community with these reviews? This is totally backward. Similar situation Getting banned from review, although my co-reviewers agree with me? . I seriously don't understand this. Now provide the response and nothing else.
Sam, there is absolutely no way that this question could be turned into a legitimate Stack Overflow question with edits from normal community members like you or me. It's entirely possible that there's a good Stack Overflow question hidden in there somewhere, but it's going to need major edits to clarify and expand it, and those edits are going to have to come from the person who originally asked the question. In such cases, the "Requires Editing" button is not the correct choice. The "Requires Editing" button should only be used: for questions that you can make clear and answerable by editing Note the word " you " there in that description. If you cannot edit the question into shape, then you should be choosing the "Unsalvagable" option, and going on to recommend that the question be closed. You failed to do that in this case. Admittedly, your mistake is a very common one—lots of users misunderstand the meaning of the "Requires Editing" button. That's the purpose of this review ban: to bring your attention to this mistake. As part of the review ban, you were presented with this message: Your review on https://stackoverflow.com/review/triage/25328403 wasn't helpful. "Requires Editing" should only be used when other community users (like you) are able to edit/format the question into a better shape. If a question is unsalvagable and/or can only be improved by the author, please flag/vote to close or delete instead. For more information, see Getting banned from review, although my co-reviewers agree with me? .
{}
{'log_upvote_score': 5, 'links': ['https://meta.stackoverflow.com/questions/393908', 'https://meta.stackoverflow.com', 'https://meta.stackoverflow.com/users/3950870/']}
jdg_376368
stackexchange
llm_judgeable_groundtruth_similarity
181931
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I don't mean to be pulling answers out of you, but I'm stuck. Any advice on the right direction would be appreciated. I have the following set: $X = \{n : n$ is the number of a Turing machine $M$ that does not halt when given $n$ as input$\}$. My gut instinct is that it's not recursively enumerable. And that's because the question asks about the set of all $x$'s that are not partially decidable. Recursively enumerable languages ARE partially decidable, so it can't be an REL. Is this correct? And is this sufficient reasoning? Thanks. Now provide the response and nothing else.
If $X$ is r.e., there is a Turing machine $T$ such that $T$ halts on input $n$ iff $T_n$ does not halt on input $n$. Say $T=T_m$. Then $T_m$ halts on input $m$ iff $T_m$ does not halt on input $m$. Thus, $X$ cannot be r.e.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/181931', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/10404/']}
jdg_376369
stackexchange
llm_judgeable_groundtruth_similarity
5474951
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to make a box with rounded corners where the height and width of the div depend on the content, so it automatically adjusts to it... You can see the example here: http://pastehtml.com/view/1duizyf.html The problem is that I can't get the "test_mid_left" (black background) and "test_mid_right" (turquoise background) to inherit the height from the "test_mid_center" (green background). I have tried height: 100% and auto, but neither works. So how do I get them to inherit the height from the content? (The reason why I have used "min-height: xx" in the left and right content in the example is just to show which boxes I am talking about) Now provide the response and nothing else.
As already mentioned, this can't be done with floats: they can't inherit heights because they're unaware of their siblings. For example, the two side floats don't know the height of the centre content, so they can't inherit from anything. Usually inherited height has to come either from an element which has an explicit height or from height: 100%; being passed down through the display tree to it. The only thing I'm aware of that passes on height which hasn't come from the top of the "tree" is an absolutely positioned element - so you could, for example, absolutely position all the top/right/bottom/left sides and corners (you know the height and width of the corners anyway). And as you seem to know the widths of the left/right borders and the heights of the top/bottom borders, and the widths of the top/bottom centers are easy at 100%, the only thing that needs calculating is the height of the right/left sides if the content grows. This you can do, even without using all four positioning co-ordinates, which IE6/7 doesn't support. I've put up an example based on what you gave; it does rely on a fixed width (your frame), but I think it could work with a flexible width too. The uses of this could be cool for those fancy image borders we can't get support for until multiple background images or image borders become fully available... who knows, I was playing, so just sticking it out there! A proof of concept example is here.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5474951', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_376370
stackexchange
llm_judgeable_groundtruth_similarity
67501093
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I tried crawling a specific site using selenium and webdriver_manager.chrome, and my code crawled the elements of that site completely. But after crawling, the following error message appears in the console window:

ERROR:gpu_init.cc(426) Passthrough is not supported, GL is disabled

When I first found it, I unchecked hardware acceleration in Chrome, but that didn't solve the problem either. Now provide the response and nothing else.
Tested environment: Windows OS, Chromedriver version 89, headless mode

Solution

I am not certain that this can be a solution for your question, since the error message is slightly different. If I remember correctly, the error message Passthrough is not supported, GL is swiftshader has been reported as a bug since Chromedriver version 89 (but I am not certain) [1]. In the case of Windows, the --headless and --disable-gpu options are required to make the chromedriver work in headless mode [2]. It seems like --disable-gpu is also required for Linux because of the bug [3]. The error message does not appear if both the --headless and --disable-gpu options are removed. However, the chromedriver then no longer works in headless mode. In my case, the error message does not affect the operation of headless Chrome. As I found, "Normally, Chrome would just fall back to work without GPU. However, for consistent results, Telemetry requires the presence of GPU [4]." I've additionally applied the --disable-software-rasterizer option to disable WebGL [5]. With that, the error message no longer appears during headless operation.

References
[1] https://reddit.com/r/chrome/comments/n6qywt/hardware_acceleration_fail_in_chrome_on_linux/
[2] https://developers.google.com/web/updates/2017/04/headless-chrome
[3] https://bugs.chromium.org/p/chromium/issues/detail?id=737678
[4] https://groups.google.com/a/chromium.org/g/telemetry/c/bEgV1EH63eA
[5] https://bugs.chromium.org/p/chromium/issues/detail?id=617551
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/67501093', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/15519827/']}
jdg_376371
stackexchange
llm_judgeable_groundtruth_similarity
7097004
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have this in my .htaccess:

Options +FollowSymLinks

And I get the following error in the apache error_log:

.htaccess: Options not allowed here

Here's the part from my httpd.conf file:

#htdocs symlinks here
<Directory /Users/you/code/my/folder>
    Options All
    AllowOverride All
</Directory>

<Directory />
    Options All
    AllowOverride All
    Order allow,deny
    Allow from all
</Directory>

<Directory "/Applications/XAMPP/xamppfiles/htdocs">
    Options All
    AllowOverride All
    Order allow,deny
    Allow from all
</Directory>

So I'm setting Options All and AllowOverride All everywhere, but STILL I'm not allowed to set the option. Can anyone make sense of this? Thanks, MrB. Now provide the response and nothing else.
Note that this is a XAMPP-specific issue. XAMPP loads some additional configuration files located in XAMPP/etc/extra/ that override httpd.conf. For me the offending file is http-userdir.conf which applies rules for ~user requests and contains the line AllowOverride FileInfo AuthConfig Limit Indexes and changing that line to AllowOverride All did indeed solve my issue. This only applies to files served from your /Sites/ directory on OS X. I don't know if the Windows version uses UserDir at all or even has a similar rule.
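As a concrete illustration (the path glob and surrounding options are from a typical install and may differ in your XAMPP version), the edited stanza in XAMPP/etc/extra/httpd-userdir.conf would end up looking something like this:

```
<Directory "/Users/*/Sites">
    Options Indexes MultiViews
    AllowOverride All
    Order allow,deny
    Allow from all
</Directory>
```

After changing it, restart Apache for the new AllowOverride setting to take effect.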
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7097004', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/726795/']}
jdg_376372
stackexchange
llm_judgeable_groundtruth_similarity
59465864
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have been trying to get my head around this problem but haven't found a strong answer to it. I am trying to execute a login mutation using the useMutation hook. TLDR; I want to know what exactly is the difference between the onError passed in options and the error given to me by useMutation. Here's my code snippet:

const [login, { data, loading, error }] = useMutation(LOGIN_QUERY, {
  variables: { email, password },
  onError(err) {
    console.log(err);
  },
});

On the server-side, I have a preset/hardcoded email used for login and I am not using Apollo or any other client. In the resolver of this Login Mutation, I simply throw an error if the email is not the same, using throw new Error('Invalid Email'); Now I want to handle this error on the client-side (React). But my concern is that if I use the 'error' returned from the useMutation hook and try to show the error in this way

render() {
  ...
  {error && <div> Error occurred </div>}
  ...
}

the error is updated in the UI, but then immediately React shows me a screen with:

Unhandled Rejection (Error): Graphql error: My-custom-error-message

But, if I use onError passed in options to the useMutation function, then it doesn't show me this screen and I can do whatever I want with the error. I want to know what exactly is the difference between the onError passed in options and the error given to me by useMutation, and why React shows me that error screen when onError is not used. Thanks! Now provide the response and nothing else.
Apollo exposes two kinds of errors through its API: GraphQL errors, which are returned as part of the response as errors, alongside data, and network errors, which occur when a request fails. A network error will occur when a server can't be reached or if the response status is anything other than 200 -- queries that have errors in the response can still have a status of 200. But an invalid query, for example, will result in a 400 status and a network error in Apollo Client. Apollo Client actually provides four different ways to handle mutation errors:

1.) Calling the mutate function returned by the hook returns a Promise. If the request is successful, the Promise will resolve to a response object that includes the data returned by the server. If the request fails, the Promise will reject with the error. This is why you see an "Unhandled Rejection" message in the console -- you need to handle the rejected Promise.

login()
  .then(({ data }) => {
    // you can do something with the response here
  })
  .catch(e => {
    // you can do something with the error here
  })

or with async/await syntax:

try {
  const { data } = await login()
} catch (e) {
  // do something with the error here
}

By default, the Promise will reject on either GraphQL errors or network errors. By setting the errorPolicy to ignore or all, though, the Promise will only reject on network errors. In this case, the GraphQL errors will still be accessible through the response object, but the Promise will resolve.

2.) The only exception to the above occurs when you provide an onError function. In this case, the Promise will always resolve instead of rejecting, but if an error occurs, onError will be called with the resulting error. The errorPolicy you set applies here too -- onError will always be called for network errors but will only be called with GraphQL errors when using the default errorPolicy of none.
Using onError is equivalent to catching the rejected Promise -- it just moves the error handler from the call site of the mutate function to the call site of the hook.

3.) In addition to the mutate function, the useMutation hook also returns a result object. This object also exposes any errors encountered when running the mutation. Unlike the error handler functions we wrote above, this error object represents application state. Both the error and data objects exposed this way exist as a convenience. They are equivalent to doing this:

const [mutate] = useMutation(YOUR_MUTATION)
const [data, setData] = useState()
const [error, setError] = useState()
const handleClick = async () => {
  try {
    const { data } = await mutate()
    setData(data)
  } catch (e) {
    setError(e)
  }
}

Having error state like this can be useful when you want your UI to reflect the fact there's an error. For example, you might change the color of an element until the mutation runs without an error. Instead of having to write the above boilerplate yourself, you can just use the provided result object.

const [mutate, { data, error }] = useMutation(YOUR_MUTATION)

NOTE: While you can use the exposed error state to update your UI, doing so is not a substitute for actually handling the error. You must either provide an onError callback or catch the error in order to avoid warnings about an unhandled Promise rejection.

4.) Lastly, you can also use apollo-link-error to add global error handling for your requests. This allows you to, for example, display an error dialog regardless of where in your application the request originated.

Which of these methods you utilize in your application depends heavily on what you're trying to do (global vs local, state vs callback, etc.). Most applications will make use of more than one method of error handling.
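The resolve-vs-reject behavior described in (1) and (2) can be imitated with plain Promises. The fakeMutate function below is a made-up stand-in for Apollo's mutate function (it is not Apollo's actual API) and mimics the default errorPolicy of "none", where a GraphQL error rejects the Promise:

```javascript
// Stand-in for Apollo's mutate function: always fails with a "GraphQL error".
function fakeMutate({ onError } = {}) {
  const failing = Promise.reject(new Error("GraphQL error"));
  if (onError) {
    // With an onError callback the Promise always resolves;
    // the error is routed to the callback instead.
    return failing.catch(e => {
      onError(e);
      return { data: undefined };
    });
  }
  // Without onError, the caller must .catch() this Promise
  // or get an "Unhandled Rejection" warning.
  return failing;
}

// Style (1): handle the rejection at the call site.
fakeMutate().catch(e => console.log("caught:", e.message));

// Style (2): hand the error to onError; the returned Promise resolves.
fakeMutate({ onError: e => console.log("onError:", e.message) })
  .then(res => console.log("resolved, data =", res.data));
```

This is why forgetting both the catch handler and onError produces the "Unhandled Rejection" screen from the question.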
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/59465864', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7439761/']}
jdg_376373
stackexchange
llm_judgeable_groundtruth_similarity
88132
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let's have the generating functional $Z(J)$:$$Z(J) = \langle 0|\hat {T}e^{i \int d^{4}x (L_{Int}(\varphi (x)) + J(x) \varphi (x))}|0 \rangle , \qquad (1)$$where $J(x)$ is the functional argument (source), $\hat {T}$ is the chronological (time-ordering) operator, and $\varphi (x)$ is some field. I want to understand the reasons for its introduction in connection with the summands of the S-matrix expansion. As I read in the books, it helps to consider only the vacuum expectation values, forgetting about in- and out-states. But in $(1)$ summands like $\int \frac{J(p)dp}{p^2 - m^2 + i0}$ appear instead of the contributions from external lines. They may refer to the internal lines. So what should be done with them, and are there other reasons for introducing $(1)$ besides the ones I wrote? Now provide the response and nothing else.
The primary utility in introducing the generating functional is in using it to compute correlation functions of the given quantum field theory. Let's restrict the discussion to that of a theory of a single, real scalar field on Minkowski space, and let $x_1, \dots, x_n$ denote spacetime points. Of central importance are time-ordered vacuum expectation values of field operators evaluated at such points;\begin{align} \langle0|T[\phi(x_1)\cdots\phi(x_n)]|0\rangle.\end{align}It can be shown that these objects can be obtained from the generating functional by taking functional derivatives with respect to the $J(x_i)$ as follows:\begin{align} \langle0|T[\phi(x_1)\cdots\phi(x_n)]|0\rangle = \frac{1}{Z[0]}\left(-i\frac{\delta}{\delta J(x_1)}\right)\cdots \left(-i\frac{\delta}{\delta J(x_n)}\right)Z[J]\Bigg|_{J=0}.\end{align}This standard fact is proven in many books on QFT. It's often proven using the path integral approach which makes it pretty transparent why it's true. The crux of the argument is that every time you take a functional derivative with respect to the source $J(x_i)$, it pulls down a factor of the field $\phi(x_i)$. Dividing by $Z[0]$ is an important normalization relating to vacuum bubbles, and setting $J=0$ after computing the appropriate functional derivatives eliminates terms with more than $n$ factors of the field and renders the final result source-independent as it should be.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/88132', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/31496/']}
jdg_376374
stackexchange
llm_judgeable_groundtruth_similarity
16201362
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If you have an if statement where several variables or functions are evaluated, in which order are they evaluated? if foo > 5 or bar > 6: print 'foobar' In this specific case, will foo be evaluated against the five and then bar against the 6 (left to right) or will it be evaluated right to left? I am assuming that a or and and is evaluated in the same order. Now provide the response and nothing else.
The left clause will be evaluated first, and then the right one only if the first one is False . This is why you can do stuff like: if not person or person.name == 'Bob': print "You have to select a person and it can't be Bob" Without it breaking. Conversely, with an and clause, the right clause will only be evaluated if the first one is True : if person and person.name: # ... Otherwise an exception would be thrown when person is None .
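This short-circuiting is easy to verify with helper functions that record when they run (a small illustrative sketch, written for Python 3):

```python
calls = []

def left(value):
    calls.append("left")
    return value

def right(value):
    calls.append("right")
    return value

left(True) or right(True)    # right() is skipped: left() was truthy
left(False) or right(True)   # right() runs only because left() was falsy
print(calls)                 # ['left', 'left', 'right']
```

The same experiment with `and` shows the mirror-image behavior: the right side runs only when the left side is truthy.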
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/16201362', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2223161/']}
jdg_376375
stackexchange
llm_judgeable_groundtruth_similarity
77277
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am writing a bash script to look for a file; if it doesn't exist, then create it and append this to it:

Host localhost
	ForwardAgent yes

So "line, then new line, 'tab', then text" - I think it's a sensitive format. I know you can do this:

cat temp.txt >> data.txt

But it seems weird since it's two lines. Is there a way to append that in this format:

echo "hello" >> greetings.txt

Now provide the response and nothing else.
# possibility 1:
echo "line 1" >> greetings.txt
echo "line 2" >> greetings.txt

# possibility 2:
echo "line 1
line 2" >> greetings.txt

# possibility 3:
cat <<EOT >> greetings.txt
line 1
line 2
EOT

# possibility 4 (more about input than output):
arr=( 'line 1' 'line 2' )
printf '%s\n' "${arr[@]}" >> greetings.txt

If sudo (other user privileges) is needed to write to the file, use this:

# possibility 1:
echo "line 1" | sudo tee -a greetings.txt > /dev/null

# possibility 3:
sudo tee -a greetings.txt > /dev/null <<EOT
line 1
line 2
EOT
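Applied to the ssh_config snippet from the question, printf makes the leading tab explicit with \t; the file path below is just an example stand-in for the real target file:

```shell
cfg=/tmp/ssh_append_demo.txt   # example path; a real script might target ~/.ssh/config
rm -f "$cfg"                   # start clean for the demo
printf 'Host localhost\n\tForwardAgent yes\n' >> "$cfg"
cat "$cfg"
```

Unlike echo, printf's escape handling is portable across shells, which matters when the format is tab-sensitive.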
{}
{'log_upvote_score': 11, 'links': ['https://unix.stackexchange.com/questions/77277', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/37692/']}
jdg_376376
stackexchange
llm_judgeable_groundtruth_similarity
17972658
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am reading from an SSL Socket but the host doesn't match the certificate (e.g. host = "localhost"). I would expect an exception, but the following code happily talks to the remote server without any problems:

try (final Socket socket = SSLSocketFactory.getDefault().createSocket(host, port);
     final OutputStream os = socket.getOutputStream();
     final InputStream is = socket.getInputStream()) {
    os.write(("HEAD / HTTP/1.1\r\nHost: " + host + "\r\nConnection: close\r\n\r\n").getBytes());
    os.flush();
    final byte[] bytes = new byte[1024];
    int n;
    while ((n = is.read(bytes)) != -1) {
        System.out.print(new String(bytes, 0, n));
    }
    System.out.println();
} catch (final IOException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}

Therefore I've tried another approach:

try {
    final HttpURLConnection conn = (HttpURLConnection) new URL("https://" + host + ":" + port + "/").openConnection();
    try (InputStream is = conn.getInputStream()) {
        IOUtils.copy(is, System.out);
    } catch (final IOException e1) {
        try (InputStream es = conn.getErrorStream()) {
            if (es != null) {
                IOUtils.copy(es, System.out);
            }
        }
    }
} catch (final IOException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}

Unfortunately I still get no SSL exception, just a WARN in the logs:

2013-07-31 16:02:27,182 WARN nio - javax.net.ssl.SSLException: Received fatal alert: certificate_unknown

How do I get the SSL exception if the certificate doesn't match? Now provide the response and nothing else.
The SSL/TLS protocol specification is modular and detached from the specifications used to authenticate the remote host. These other specifications are split into two categories: verifying that the certificate itself can be trusted (RFC 3280/5280) and verifying the identity in the certificate (RFC 6125, or RFC 2818 for HTTPS). The JSSE integrates the SSL protocol and the verification of the certificate in the SSLSocket (or SSLEngine) API, but doesn't handle the verification of the identifier (which is equally important). This is mainly due to the fact that SSLSocket / SSLEngine can apply to any application protocol (e.g. HTTP, IMAP, SMTP, LDAP, ...), but the rules for verifying the identifier were in different specifications (with small variations), until RFC 6125 (which is still quite recent). HttpsURLConnection handles both, because it also uses a HostnameVerifier, which follows the HTTPS specification (RFC 2818, Section 3.1). This is done separately from the SSLSocket / SSLEngine API. For other protocols, you may need to implement what the protocol specification says. This being said, since Java 7, there is a mechanism to verify the identity of the certificate directly as part of the SSLSocket / SSLEngine API:

SSLParameters sslParams = new SSLParameters();
sslParams.setEndpointIdentificationAlgorithm("HTTPS");
sslSocket.setSSLParameters(sslParams);

Using this should make it throw an exception if the host name doesn't match. There aren't major differences between HTTPS and the more uniform specifications in RFC 6125 (besides the fact that the latter considers IP addresses out of scope). Even if you're not using HTTPS, it would still generally make sense to use its identification specifications for other protocols. (Perhaps an "RFC 6125" endpoint identification algorithm might come in later versions of Java.)
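The same split between chain validation and endpoint identification exists in other stacks. As a hedged cross-language illustration (not part of the original Java answer), Python's ssl module performs both checks by default in a client context:

```python
import ssl

# A default client context performs both of the checks the answer describes:
# certificate-chain validation and hostname (endpoint identity) checking.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # chain validation enabled
print(ctx.check_hostname)                    # identity check (RFC 6125 style)
```

Disabling check_hostname there reproduces the question's symptom: the handshake succeeds even though the certificate names a different host.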
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17972658', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/289827/']}
jdg_376377
stackexchange
llm_judgeable_groundtruth_similarity
46329956
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to generate an SHA-256 checksum from a string that will be sent as a get param. I found this link to generate the checksum. Generating the checksum like so:

val digest = MessageDigest.getInstance("SHA-256");

private def getCheckSum() = {
  println(new String(digest.digest(("Some String").getBytes(StandardCharsets.UTF_8))))
}

prints a checksum similar to this:

*║┼¼┬]9AòdJb:#↓o6↓T╞B5C♀¼O~╟╙àÿG

The API that we need to send this to says the checksum should look like this:

45e00158bc8454049b7208e76670466d49a5dfb2db4196

What am I doing wrong? Please advise. Thanks. Now provide the response and nothing else.
java.security.MessageDigest#digest gives a byte array.

scala> import java.security.MessageDigest
scala> import java.math.BigInteger
scala> MessageDigest.getInstance("SHA-256").digest("some string".getBytes("UTF-8"))
res1: Array[Byte] = Array(97, -48, 52, 71, 49, 2, -41, -38, -61, 5, -112, 39, 112, 71, 31, -43, 15, 76, 91, 38, -10, -125, 26, 86, -35, -112, -75, 24, 75, 60, 48, -4)

To create the hex, use String.format:

scala> val hash = String.format("%032x", new BigInteger(1, MessageDigest.getInstance("SHA-256").digest("some string".getBytes("UTF-8"))))
hash: String = 61d034473102d7dac305902770471fd50f4c5b26f6831a56dd90b5184b3c30fc

You can verify the hash with a command-line tool on Linux/Unix:

$ echo -n "some string" | openssl dgst -sha256
61d034473102d7dac305902770471fd50f4c5b26f6831a56dd90b5184b3c30fc

NOTE: In case Java returns a hash of length less than 64 chars, you can left-pad with 0 (e.g. for length 39):

def hash64(data: String) = {
  val hash = String.format(
    "%032x",
    new BigInteger(1, MessageDigest.getInstance("SHA-256").digest(data.getBytes("UTF-8")))
  )
  val hash64 = hash.reverse.padTo(64, "0").reverse.mkString
  hash64
}
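For a cross-check in another language (an analogous sketch, not part of the original Scala answer), Python's hashlib produces the same hex digest as the openssl command above:

```python
import hashlib

# Same input as the Scala/openssl examples above; hexdigest() gives the
# lowercase hex string directly, no BigInteger detour needed.
digest = hashlib.sha256("some string".encode("utf-8")).hexdigest()
print(digest)  # 61d034473102d7dac305902770471fd50f4c5b26f6831a56dd90b5184b3c30fc
```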
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46329956', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1237208/']}
jdg_376378
stackexchange
llm_judgeable_groundtruth_similarity
39290
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: From what I can see PolynomialQ will return False whenever some exponent is another variable, such as here:

PolynomialQ[x^n, x]

Is there an alternative test that will return True for such a polynomial? Motivation: In another thread I need to use Product[x^k (1 - x^k), {k, 1, n}], for which Mathematica returns:

x^(1/2 n (1 + n)) QPochhammer[x, x, n]

However those two forms do not expand in the same way, for instance with $n=3$:

With[{n = 3},
 Expand[{x^(1/2 n (1 + n)) QPochhammer[x, x, n],
   Product[x^k (1 - x^k), {k, 1, n}]}]]

(* {x^6 QPochhammer[x, x, 3], x^6 - x^7 - x^8 + x^10 + x^11 - x^12} *)

Indeed, Mathematica treats only the second expression as a polynomial:

With[{n = 3},
 PolynomialQ[#, x] & /@ {x^(1/2 n (1 + n)) QPochhammer[x, x, n],
   Product[x^k (1 - x^k), {k, 1, n}]}]

(* {False, True} *)

even if I specify that n is integer as follows:

f[x_, n_Integer] := x^(1/2 n (1 + n)) QPochhammer[x, x, n];
PolynomialQ[f[n], x]

(* False *)

Now provide the response and nothing else.
Assuming[n ∈ Integers && n > 0, PolynomialQ[x^n, x]]

won't work because Assuming only works on functions with the Assumptions option, such as Simplify, Refine, etc. Unfortunately PolynomialQ doesn't have this option. Still, something like

Simplify[PolynomialQ[x^n, x], n ∈ Integers && n > 0]

won't work because Mathematica will calculate PolynomialQ[x^n, x] first. Currently the only solution in my mind is to define a new function:

pQ[poly_, var_, assum_] := PolynomialQ[poly /. Thread[assum -> 1], var]

The first argument of pQ is the possible polynomial, the second argument is the variables of the possible polynomial, the third argument is the variables which are assumed to be integers and positive. This function is in fact a realization of the method suggested by @Mark Adler. It can be used like this:

pQ[x^n, x, n]
pQ[x^(n + m), x, {n, m}]
pQ[x^(n + 1/m), x, {n, 1/m}]
pQ[x^(n + 1/m), x, n + 1/m]

True
True
True
True

Well, I admit this solution isn't robust enough. Changing the definition of the function into something like

pQ[poly_, var_, assum_] := PolynomialQ[poly /. Thread[assum -> RandomInteger[{1, 100}]], var]

can somewhat help, but it still has a certain probability to fail…

However, for your added Motivation part, FunctionExpand will give the desired result:

With[{n = 3},
 FunctionExpand[{x^(1/2 n (1 + n)) QPochhammer[x, x, n],
   Product[x^k (1 - x^k), {k, 1, n}]}]]

{(1 - x) x^6 (1 - x^2) (1 - x^3), (1 - x) x^6 (1 - x^2) (1 - x^3)}
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/39290', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/7060/']}
jdg_376379
stackexchange
llm_judgeable_groundtruth_similarity
41973628
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to ask about the usefulness of the .lib file which is generated while compiling a DLL project. When we compile our projects, the following files get generated:

.dll
.exp
.lib
.pdb

Now as we also have a .lib file, can we use this file to statically link it to any other project? If not, then what is the use of this .lib file getting generated? Now provide the response and nothing else.
The .lib generated along with a .dll is called "import library", and it allows you to use the dll functions including their header as if they were statically linked in your executable. It makes sure that, when the linker has to fix up the dll function addresses referenced into object files, it can find them inside the import library. Such functions found into the import library are actually stubs which retrieve the actual address of the corresponding function in the loaded dll from the Import Address Table and jump straight to it (traditionally; now there is some smartness in the linker that allows to avoid this double jump). The import library, in turn, contains special instructions for the linker that instruct it to generate the relevant entries into the import table of the executable, which in turn is read at load time by the loader (the "dynamic linker", in Unix terms). This makes sure that, before the entry point of your executable is called, the referenced dlls are loaded and the IAT contains the correct addresses of the referenced functions. Notice that all of this is mostly just convenience stuff to allow you to call dll functions as if they were statically linked into your executable. You don't strictly need the .lib file if you handle the dynamic load/function address retrieval explicitly (using LoadLibrary and GetProcAddress); it's just more convenient to delegate all this stuff to the linker and the loader.
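The explicit LoadLibrary / GetProcAddress route mentioned at the end has a close analogue in Python's ctypes (dlopen/dlsym on Unix). The sketch below is only an illustration of runtime symbol resolution, not tied to the original question, and assumes a Unix-like system:

```python
import ctypes
import ctypes.util

# Load a shared library and resolve a symbol at runtime - the same idea as
# LoadLibrary + GetProcAddress, with no import library or link-time stub
# involved. (On Windows you would load e.g. "msvcrt" instead of libc.)
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)
strlen = libc.strlen                 # address looked up now, not at link time
strlen.argtypes = [ctypes.c_char_p]
strlen.restype = ctypes.c_size_t
print(strlen(b"import library"))     # 14
```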
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41973628', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4543427/']}
jdg_376380
stackexchange
llm_judgeable_groundtruth_similarity
23501952
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My UITableView has custom UIView headers in every section. I need to refresh only the headers and not the other content in the section. I have tried out [self.tableView headerViewForSection:i] and it does not return anything even though it should. Is there any way that I can do this?

Edit: Code based around the new suggestion. I have given this a shot as well and it calls/updates the UIView within that method, but the changes do not visually propagate onto the screen.

for (int i = 0; i < self.objects.count; i++) {
    UIView *headerView = [self tableView:self.tableView viewForHeaderInSection:i];
    [headerView setNeedsDisplay];
}

Now provide the response and nothing else.
Instead of calling setNeedsDisplay, configure the header yourself by setting its properties. And of course you have to get the actual headers in the table; don't call the delegate method, because that method usually creates a new header view. I usually do this in a little helper method that is called from tableView:viewForHeaderInSection: as well. e.g.:

- (void)configureHeader:(UITableViewHeaderFooterView *)header forSection:(NSInteger)section {
    // configure your header
    header.textLabel.text = ...
}

- (UIView *)tableView:(UITableView *)tableView viewForHeaderInSection:(NSInteger)section {
    UITableViewHeaderFooterView *header = [tableView dequeueReusableHeaderFooterViewWithIdentifier:@"Header"];
    [self configureHeader:header forSection:section];
    return header;
}

- (void)reloadHeaders {
    for (NSInteger i = 0; i < [self numberOfSectionsInTableView:self.tableView]; i++) {
        UITableViewHeaderFooterView *header = [self.tableView headerViewForSection:i];
        [self configureHeader:header forSection:i];
    }
}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/23501952', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1704317/']}
jdg_376381
stackexchange
llm_judgeable_groundtruth_similarity
68917
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I bought this micro usb part : I need to solder it to a PCB that I designed, but I'm having a ton of trouble figuring out how to do so because the tabs are very close together and hard to reach. Is there a technique to do this / are there extraneous parts on the connector that I can remove to make it easier to solder? Now provide the response and nothing else.
Provided your PCB already has the correct footprint, the best way to solder this is hot-air/paste or an oven. Here's a picture of what the footprint for a similar connector (or the same, I'm not exactly sure) looks like on a board I designed recently. I soldered these connectors onto the PCBs by paste and hot-air (you can check out a quick video on what hot-air looks like here ). The important thing is getting all of the contact points molten at once, otherwise you'll be stuck with mechanical stresses on the joints (which is, obviously, undesirable) or bad connections. An oven and solder-paste could also work quite similarly. A general method for that would be to apply paste, mount the connector, put the board in the oven and heat it until the joint forms.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/68917', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/23795/']}
jdg_376382
stackexchange
llm_judgeable_groundtruth_similarity
6559662
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to enforce a compile-time contract on derived classes requiring implementation of a constructor (with parameter)? I have a base class with a constructor requiring a parameter:

public class FooBase
{
    protected int value;

    public FooBase(int value) { this.value = value; }

    public virtual void DoSomething() { throw new NotImplementedException(); }
}

I'd like to force derivations of my base class to implement the same constructor:

public class Foo : FooBase
{
    public Foo(int value) : base(value) { }

    public override void DoSomething() { Console.WriteLine("Foo: {0}", value); }
}

If no constructor is implemented, derived classes cause a compiler error because there is no default constructor in the base class:

// ERROR: 'Does not contain a constructor that takes 0 arguments'
// Adding default constructor in FooBase eliminates this compiler error, but
// provides a means to instantiate the class without initializing the int value.
public class FooBar : FooBase
{
    public override void DoSomething() { Console.WriteLine("FooBar: {0}", value); }
}

Adding a default constructor, FooBar(), in the derived class silences the compiler error, but provides a dangerous means of instantiating FooBar without the required base class int value being initialized. Because I'm using a factory (see below), silencing the compiler error only results in a run-time error later. I'd like to force FooBar to implement FooBar(int).

INTERESTING OBSERVATION: If a default constructor, FooBase(), is added to FooBase, then it is 'inherited' by derived classes that do not provide a constructor: Foo does not inherit the default constructor because it supplies an explicit constructor. FooBar DOES inherit FooBase(). HOWEVER, the same is not true with the non-default constructor FooBase(int)! Foo MUST explicitly implement FooBase(int) and call base(int). FooBar FAILS to 'inherit' the non-default constructor the same way that a default constructor is inherited! I do not want a default constructor in the base class because instances are created using a factory method that supplies a needed "settings" parameter. That factory method is not illustrated here (which uses the Activator.CreateInstance() method). Here is the way derived classes should be instantiated:

static void Main(string[] args)
{
    FooBase myFoo = new Foo(4); // Works, since Foo(int) is implemented.

    // ERROR: 'Does not contain a constructor that takes 1 arguments'
    FooBase myFooBar = new FooBar(9); // Fails to compile.
}

Because I am using a factory--not direct instantiation as shown--there is no compiler error. Instead, I get a runtime exception: 'Constructor on type not found.'

Unworkable solutions: Interfaces do not support constructors. Constructors cannot be virtual or abstract. It appears that supplying a base class cannot enforce a contract on constructors.

Work-around: Provide a default constructor in base class along with property to pass settings parameter. Now provide the response and nothing else.
If a default constructor, FooBase(), is added to FooBase, then it is 'inherited' by derived classes that do not provide a constructor: This is incorrect - constructors in general are never inherited. A default constructor is automatically provided for a class that does not provide any other constructor implementation. You could put in a constraint on an interface that provides an Init() method for you:

public interface IInit
{
    void Init(int someValue);
}

public class FooBase : IInit
{
    ..
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6559662', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/733805/']}
jdg_376383
stackexchange
llm_judgeable_groundtruth_similarity
15719660
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a navigation bar in my parent jade template and I'd like to highlight the item which is currently in view. So if I'm on the blog page:

ul
  li Home
  li.active Blog
  li Contact Us
  li About

Without copying the navigation bar structure into each child template, is there a way to have the parent template see what page it's extending and apply the active class accordingly? Now provide the response and nothing else.
parent.jade

doctype 5
html
  block link
    -var selected = 'home'; //default
  -var menu = { 'home': '/home', 'blog': '/blog', 'contact': '/contact' };
  body
    nav
      ul
        each val, key in menu
          li
            if selected === key
              a.selected(href=val, title=key)= key
            else
              a(href=val, title=key)= key

child.jade

extends parent

block link
  -var selected = 'blog';
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/15719660', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/393243/']}
jdg_376384
stackexchange
llm_judgeable_groundtruth_similarity
57478134
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to learn Combine with SwiftUI and I'm struggling with how to update my view (from UIKit) with ObservableObject (previously BindableObject). The issue is that, obviously, the method updateUIView will not fire once the @Published object sends the notification it was changed.

class DataSource: ObservableObject {
    @Published var locationCoordinates = [CLLocationCoordinate2D]()
    var value: Int = 0

    init() {
        Timer.scheduledTimer(withTimeInterval: 3, repeats: true) { timer in
            self.value += 1
            self.locationCoordinates.append(CLLocationCoordinate2D(latitude: 52, longitude: 16+0.1*Double(self.value)))
        }
    }
}

struct MyView: UIViewRepresentable {
    @ObservedObject var dataSource = DataSource()

    func makeUIView(context: Context) -> MKMapView {
        MKMapView(frame: .zero)
    }

    func updateUIView(_ view: MKMapView, context: Context) {
        let newestCoordinate = dataSource.locationCoordinates.last ?? CLLocationCoordinate2D(latitude: 52, longitude: 16)
        let annotation = MKPointAnnotation()
        annotation.coordinate = newestCoordinate
        annotation.title = "Test #\(dataSource.value)"
        view.addAnnotation(annotation)
    }
}

How to bind that locationCoordinates array to the view in such a way that a new point is in fact added each time it refreshes? Now provide the response and nothing else.
To make sure your ObservedObject does not get created multiple times (you only want one copy of it), you can put it outside your UIViewRepresentable:

import SwiftUI
import MapKit

struct ContentView: View {
    @ObservedObject var dataSource = DataSource()

    var body: some View {
        MyView(locationCoordinates: dataSource.locationCoordinates, value: dataSource.value)
    }
}

class DataSource: ObservableObject {
    @Published var locationCoordinates = [CLLocationCoordinate2D]()
    var value: Int = 0

    init() {
        Timer.scheduledTimer(withTimeInterval: 3, repeats: true) { timer in
            self.value += 1
            self.locationCoordinates.append(CLLocationCoordinate2D(latitude: 52, longitude: 16+0.1*Double(self.value)))
        }
    }
}

struct MyView: UIViewRepresentable {
    var locationCoordinates: [CLLocationCoordinate2D]
    var value: Int

    func makeUIView(context: Context) -> MKMapView {
        MKMapView(frame: .zero)
    }

    func updateUIView(_ view: MKMapView, context: Context) {
        print("I am being called!")
        let newestCoordinate = locationCoordinates.last ?? CLLocationCoordinate2D(latitude: 52, longitude: 16)
        let annotation = MKPointAnnotation()
        annotation.coordinate = newestCoordinate
        annotation.title = "Test #\(value)"
        view.addAnnotation(annotation)
    }
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/57478134', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/849616/']}
jdg_376385
stackexchange
llm_judgeable_groundtruth_similarity
28674157
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Good morning, All. I want to convert my social security numbers to a md5 hash hex number. The outcome should be a unique md5 hash hex number for each social security number. My data format is as follows:

ob = onboard[['regions','lname','ssno']][:10]
ob

   regions               lname     ssno
0  Northern Region (R1)  Banderas  123456789
1  Northern Region (R1)  Garfield  234567891
2  Northern Region (R1)  Pacino    345678912
3  Northern Region (R1)  Baldwin   456789123
4  Northern Region (R1)  Brody     567891234
5  Northern Region (R1)  Johnson   6789123456
6  Northern Region (R1)  Guinness  7890123456
7  Northern Region (R1)  Hopkins   891234567
8  Northern Region (R1)  Paul      891234567
9  Northern Region (R1)  Arkin     987654321

I've tried the following code using hashlib:

import hashlib
ob['md5'] = hashlib.md5(['ssno'])

This gave me the error that it had to be a string not a list. So I tried the following:

ob['md5'] = hashlib.md5('ssno').hexdigest()

   regions               lname     ssno       md5
0  Northern Region (R1)  Banderas  123456789  a1b3ec3d8a026d392ad551701ad7881e
1  Northern Region (R1)  Garfield  234567891  a1b3ec3d8a026d392ad551701ad7881e
2  Northern Region (R1)  Pacino    345678912  a1b3ec3d8a026d392ad551701ad7881e
3  Northern Region (R1)  Baldwin   456789123  a1b3ec3d8a026d392ad551701ad7881e
4  Northern Region (R1)  Brody     567891234  a1b3ec3d8a026d392ad551701ad7881e
5  Northern Region (R1)  Johnson   678912345  a1b3ec3d8a026d392ad551701ad7881e
6  Northern Region (R1)  Johnson   789123456  a1b3ec3d8a026d392ad551701ad7881e
7  Northern Region (R1)  Guiness   891234567  a1b3ec3d8a026d392ad551701ad7881e
8  Northern Region (R1)  Hopkins   912345678  a1b3ec3d8a026d392ad551701ad7881e
9  Northern Region (R1)  Paul      159753456  a1b3ec3d8a026d392ad551701ad7881e

This was very close to what I need but all the hex numbers came out the same regardless if the social security number was different or not. I am trying to get a hex number with unique hex numbers for each social security number. Any suggestions? Now provide the response and nothing else.
hashlib.md5 takes a single string as input -- you can't pass it an array of values as you can with some NumPy/Pandas functions. So instead, you could use a list comprehension to build a list of md5sums:

ob['md5'] = [hashlib.md5(val).hexdigest() for val in ob['ssno']]
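A self-contained Python 3 sketch of the same idea. The sample values here are hypothetical, and note the .encode(): on Python 3, hashlib.md5 wants bytes, so string ssno values need encoding first (the one-liner above assumes Python 2 style str input):

```python
import hashlib

# Hypothetical sample values standing in for the ssno column.
ssns = ["123456789", "234567891", "345678912"]
digests = [hashlib.md5(s.encode("utf-8")).hexdigest() for s in ssns]
for s, d in zip(ssns, digests):
    print(s, d)   # each distinct input gives a distinct 32-char hex digest
```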
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28674157', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2201603/']}
jdg_376386
stackexchange
llm_judgeable_groundtruth_similarity
181046
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Recently I did an nmap service scan against a list of domains and one of the NSE scripts running was fingerprint-strings. This script apparently extracts ASCII strings from unidentified services. Since the domains are web-servers and the port is 443, fingerprint-strings responds with http or html. And one of the outputs is below:

DNSStatusRequest, DNSVersionBindReq, Help, Kerberos, RPCCheck, SMBProgNeg, SSLSessionReq, TLSSessionReq:
HTTP/1.1 400 Bad Request
Server: awselb/2.0
Date: Thu, 01 Mar 2018 11:02:47 GMT
Content-Type: text/html
Content-Length: 138
Connection: close
<html>
<head><title>400 Bad Request</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
</body>
</html>

FourOhFourRequest, GetRequest, HTTPOptions:
HTTP/1.1 404
Date: Thu, 01 Mar 2018 11:02:40 GMT
Content-Length: 0
Connection: close

JavaRMI, NCP, NotesRPC, TerminalServer, WMSRequest, oracle-tns:
HTTP/1.1 400 Bad Request
Server: awselb/2.0
Date: Thu, 01 Mar 2018 11:02:49 GMT
Content-Type: text/html
Content-Length: 138
Connection: close
<html>
<head><title>400 Bad Request</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
</body>
</html>

LANDesk-RC, LDAPBindReq, LDAPSearchReq, LPDString, SIPOptions, X11Probe:
HTTP/1.1 400 Bad Request
Server: awselb/2.0
Date: Thu, 01 Mar 2018 11:02:48 GMT
Content-Type: text/html
Content-Length: 138
Connection: close
<html>
<head><title>400 Bad Request</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
</body>
</html>

RTSPRequest:
<html>
<head><title>400 Bad Request</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
</body>
</html>

afp, giop:
HTTP/1.1 400 Bad Request
Server: awselb/2.0
Date: Thu, 01 Mar 2018 11:02:50 GMT
Content-Type: text/html
Content-Length: 138
Connection: close
<html>
<head><title>400 Bad Request</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
</body>
</html>

tor-versions:
HTTP/1.1 400 Bad Request
Server: awselb/2.0
Date: Thu, 01 Mar 2018 11:02:40 GMT
Content-Type: text/html
Content-Length: 138
Connection: close
<html>
<head><title>400 Bad Request</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
</body>
</html>

I've gathered a couple of fingerprint-strings outputs from the domains and not all of them have tor within them. Basically I'm wondering: is this confirmation that a hidden service is running on this domain, or is there another test or nmap scan I can run to confirm this? If it's not confirmation, then why am I getting this output? Now provide the response and nothing else.
A hidden service doesn't use any special ports, protocols etc. It's just a normal web server which usually isn't accessible over the public-facing network interface and is accessed over a locally running Tor instance. Properly configured Tor hidden services which don't want to be reachable over the clearnet only listen on the loopback interface of the server and have the locally running tor service connect to them. So you can't figure out whether there is a hidden service running on a machine with nmap. There are a few ways to test a hypothesis whether a given server hosts a tor hidden service. A simple one is to measure the uptime of the server and the uptime of the hidden service over long periods of time and look for a strong correlation of the two. If they both go down at the same time, that's a strong hint. Another possibility is to stress the server and then measure whether performance drops for the hidden service (but don't do this! this might be seen as an attack by server administrators). If done repeatedly, it might confirm the hidden service is running on the server in question. Fingerprinting open ports, OTOH, won't give you any information about the presence of a hidden service.
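The uptime-correlation idea described above can be made concrete with a toy sketch. The traces below are invented sample data, not real measurements, and a real test would need many samples over a long period:

```python
# Toy illustration of the uptime-correlation test: sample reachability of the
# clearnet server and the onion service at the same instants (1 = up, 0 = down).
# These traces are made-up sample data.
server_up = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]
onion_up  = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Identical downtime patterns give a correlation of 1.0 - a strong hint
# (never proof) that the two services share a host.
print(pearson(server_up, onion_up))
```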
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/181046', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/102731/']}
jdg_376387
stackexchange
llm_judgeable_groundtruth_similarity
23269
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: An often-cited principle of good mathematical exposition is that a definition should always come with a few examples and a few non-examples to help the learner get an intuition for where the concept's limits lie, especially in cases where that's not immediately obvious. Quillen model categories are a classic such case. There are some easy rough intuitions—“something like topological spaces”, “somewhere one can talk about homotopy”, and so on—but various surprising examples show quite how crude those intuitions are, and persuade one that model categories cover a much wider range of situations than one might think at first. However, I haven't seen any non-examples of model structures written up, or even discussed—that is, categories and classes of maps which one might think would be model structures, but which fail for subtle/surprising reasons . Presumably this is because, given the amount of work it typically takes to construct an interesting model structure, no-one wants to write (or read) three-quarters of that work without the payoff of an actual example at the end. Has anyone encountered any interesting non-examples of this sort? Background on my motivations: I'm currently working with Batanin/Leinster style weak higher categories, and have a problem which seems amenable to model-theoretic techniques, so I'm trying to see if I can transfer/adapt/generalise the model structures defined by Cisinski et al, Lafont/Métayer/Worytkiewicz, etc. in this area. So I have some candidate (cofibrantly generated) classes of maps, and am trying to prove that they work; and there are lots of good examples around of how to prove that something is a model structure, but it would also be helpful to know what kinds of subtleties I should be looking out for that might make it fail to be. Now provide the response and nothing else.
Here is a classical example. Let CDGA be the category of commutative differential graded algebras over a fixed ground field k of characteristic $p$. Weak equivalences are quasi-isomorphisms, fibrations are levelwise surjections. These would determine the others, but cofibrations are essentially generated by maps $A \rightarrow B$ such that on the level of the underlying DGA, $B$ is a polynomial algebra over $A$ on a generator $x$ whose boundary is in $A$. CDGA is complete and cocomplete, satisfies the $2$-out-of-$3$ axiom, the retract axiom, satisfies lifting, and a general map can be factored into a cofibration followed by an acyclic fibration by the small object argument. However, you don't have factorizations into acyclic cofibrations followed by fibrations, because of the following. Suppose $A \rightarrow B$ is a map of commutative DGAs which is a fibration in the above sense. Then for any element $[x]$ in the (co)homology of $B$ in even degree, the $p$-th power $[x]^p$ is in the image of the cohomology of $A$. In fact, pick any representing cycle $x \in B$ and choose a lift $y \in A$. Then the boundary of $y^p$ is $p\,y^{p-1}\,dy = 0$ by the Leibniz rule, so $[y^p]$ is a lift of $[x]^p$ to the (co)homology of $A$. (As a result, there are a lot of other "homotopical" constructions, such as homotopy pullbacks, that are forced to throw you out of the category of commutative DGAs into the category of $E_\infty$ DGAs.) Nothing goes wrong in characteristic zero.
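Written out, the one-line Leibniz-rule computation the argument turns on (same notation as the answer, with $y$ a lift in even degree):

```latex
d\left(y^{p}\right) \;=\; p\,y^{p-1}\,dy \;=\; 0 \qquad \text{since } p = 0 \text{ in } k,
```

so $y^p$ is a cycle in $A$ whose class maps to $[x^p] = [x]^p$ in the (co)homology of $B$.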
{}
{'log_upvote_score': 7, 'links': ['https://mathoverflow.net/questions/23269', 'https://mathoverflow.net', 'https://mathoverflow.net/users/2273/']}
jdg_376388
stackexchange
llm_judgeable_groundtruth_similarity
43598373
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Sample program that upscales a 2x2 matrix to 5x5 using bilinear interpolation. The result that OpenCV produces has artifacts at the borders for such a simple case.

gy, gx = np.mgrid[0:2, 0:2]
gx = np.float32(gx)
print(gx)
res = cv2.resize(gx, (5,5), fx=0, fy=0, interpolation=cv2.INTER_LINEAR)
print(res)

Output:

[[ 0.  1.]
 [ 0.  1.]]
[[ 0.          0.1         0.5         0.89999998  1.        ]
 [ 0.          0.1         0.5         0.89999998  1.        ]
 [ 0.          0.1         0.5         0.89999998  1.        ]
 [ 0.          0.1         0.5         0.89999998  1.        ]
 [ 0.          0.1         0.5         0.89999998  1.        ]]

Expected output:

[[0 0.25 0.5 0.75 1]
 [0 0.25 0.5 0.75 1]
 [0 0.25 0.5 0.75 1]
 [0 0.25 0.5 0.75 1]
 [0 0.25 0.5 0.75 1]]

What is the problem? Now provide the response and nothing else.
TL;DR I tested with other image processing libraries (scikit-image, Pillow and Matlab) and none of them return the expected result. Odds are this behavior is due to the method to perform the bi-linear interpolation to get efficient results or somehow a convention rather than a bug in my opinion. I have posted a sample code to perform image resizing with a bi-linear interpolation (check if everything is ok of course, I am not sure how to properly handle the image indexes ...) that outputs the expected result. Partial answer to the question. What is the output of some other image processing libraries? scikit-image The Python module scikit-image contains lot of image processing algorithms. Here the outputs of the skimage.transform.resize method ( skimage.__version__: 0.12.3 ): mode='constant' (default) Code: import numpy as npfrom skimage.transform import resizeimage = np.array( [ [0., 1.], [0., 1.] ] )print 'image:\n', imageimage_resized = resize(image, (5,5), order=1, mode='constant')print 'image_resized:\n', image_resized Result: image:[[ 0. 1.] [ 0. 1.]]image_resized:[[ 0. 0.07 0.35 0.63 0.49] [ 0. 0.1 0.5 0.9 0.7 ] [ 0. 0.1 0.5 0.9 0.7 ] [ 0. 0.1 0.5 0.9 0.7 ] [ 0. 0.07 0.35 0.63 0.49]] mode='edge' Result: image:[[ 0. 1.] [ 0. 1.]]image_resized:[[ 0. 0.1 0.5 0.9 1. ] [ 0. 0.1 0.5 0.9 1. ] [ 0. 0.1 0.5 0.9 1. ] [ 0. 0.1 0.5 0.9 1. ] [ 0. 0.1 0.5 0.9 1. ]] mode='symmetric' Result: image:[[ 0. 1.] [ 0. 1.]]image_resized:[[ 0. 0.1 0.5 0.9 1. ] [ 0. 0.1 0.5 0.9 1. ] [ 0. 0.1 0.5 0.9 1. ] [ 0. 0.1 0.5 0.9 1. ] [ 0. 0.1 0.5 0.9 1. ]] mode='reflect' Result: image:[[ 0. 1.] [ 0. 1.]]image_resized:[[ 0.3 0.1 0.5 0.9 0.7] [ 0.3 0.1 0.5 0.9 0.7] [ 0.3 0.1 0.5 0.9 0.7] [ 0.3 0.1 0.5 0.9 0.7] [ 0.3 0.1 0.5 0.9 0.7]] mode='wrap' Result: image:[[ 0. 1.] [ 0. 
1.]]image_resized:[[ 0.3 0.1 0.5 0.9 0.7] [ 0.3 0.1 0.5 0.9 0.7] [ 0.3 0.1 0.5 0.9 0.7] [ 0.3 0.1 0.5 0.9 0.7] [ 0.3 0.1 0.5 0.9 0.7]] As you can see, the default resize mode ( constant ) produces a different output, but the edge mode returns the same result as OpenCV. None of the resize modes produces the expected result, though. More information about Interpolation: Edge Modes . This picture sums up all the results in our case: Pillow Pillow is the friendly PIL fork by Alex Clark and Contributors. PIL is the Python Imaging Library by Fredrik Lundh and Contributors. What about PIL.Image.Image.resize ( PIL.__version__: 4.0.0 )? Code: import numpy as npfrom PIL import Imageimage = np.array( [ [0., 1.], [0., 1.] ] )print 'image:\n', imageimage_pil = Image.fromarray(image)image_resized_pil = image_pil.resize((5,5), resample=Image.BILINEAR)print 'image_resized_pil:\n', np.asarray(image_resized_pil, dtype=np.float) Result: image:[[ 0. 1.] [ 0. 1.]]image_resized_pil:[[ 0. 0.1 0.5 0.89999998 1. ] [ 0. 0.1 0.5 0.89999998 1. ] [ 0. 0.1 0.5 0.89999998 1. ] [ 0. 0.1 0.5 0.89999998 1. ] [ 0. 0.1 0.5 0.89999998 1. ]] Pillow image resizing matches the output of the OpenCV library. Matlab Matlab proposes a toolbox named Image Processing Toolbox . The function imresize in this toolbox allows resizing images. Code: image = zeros(2,1,'double');image(1,2) = 1;image(2,2) = 1;imageimage_resize = imresize(image, [5 5], 'bilinear') Result: image = 0 1 0 1image_resize = 0 0.1000 0.5000 0.9000 1.0000 0 0.1000 0.5000 0.9000 1.0000 0 0.1000 0.5000 0.9000 1.0000 0 0.1000 0.5000 0.9000 1.0000 0 0.1000 0.5000 0.9000 1.0000 Again, it is not the expected output with Matlab, but the same result as in the two previous examples. Custom bi-linear image resize method Basic principle See this Wikipedia article on Bilinear interpolation for more complete information. 
This figure basically illustrates what happens when up-scaling from a 2x2 image to a 4x4 image: With a nearest neighbor interpolation, the destination pixel at (0,0) will get the value of the source pixel at (0,0) as well as the pixels at (0,1) , (1,0) and (1,1) . With a bi-linear interpolation, the destination pixel at (0,0) will get a value which is a linear combination of the 4 neighbors in the source image: The four red dots show the data points and the green dot is the point at which we want to interpolate. R1 is calculated as: R1 = ((x2 – x)/(x2 – x1))*Q11 + ((x – x1)/(x2 – x1))*Q21 . R2 is calculated as: R2 = ((x2 – x)/(x2 – x1))*Q12 + ((x – x1)/(x2 – x1))*Q22 . Finally, P is calculated as a weighted average of R1 and R2 : P = ((y2 – y)/(y2 – y1))*R1 + ((y – y1)/(y2 – y1))*R2 . Using coordinates normalized between [0, 1] simplifies the formula. C++ implementation This blog post ( Resizing Images With Bicubic Interpolation ) contains C++ code to perform image resizing with a bi-linear interpolation. 
This is my own adaptation (some modifications about the indexes compared to the original code, not sure if it is correct ) of the code to work with cv::Mat : #include <iostream>#include <opencv2/core.hpp>float lerp(const float A, const float B, const float t) { return A * (1.0f - t) + B * t;}template <typename Type>Type resizeBilinear(const cv::Mat &src, const float u, const float v, const float xFrac, const float yFrac) { int u0 = (int) u; int v0 = (int) v; int u1 = (std::min)(src.cols-1, (int) u+1); int v1 = v0; int u2 = u0; int v2 = (std::min)(src.rows-1, (int) v+1); int u3 = (std::min)(src.cols-1, (int) u+1); int v3 = (std::min)(src.rows-1, (int) v+1); float col0 = lerp(src.at<Type>(v0, u0), src.at<Type>(v1, u1), xFrac); float col1 = lerp(src.at<Type>(v2, u2), src.at<Type>(v3, u3), xFrac); float value = lerp(col0, col1, yFrac); return cv::saturate_cast<Type>(value);}template <typename Type>void resize(const cv::Mat &src, cv::Mat &dst) { float scaleY = (src.rows - 1) / (float) (dst.rows - 1); float scaleX = (src.cols - 1) / (float) (dst.cols - 1); for (int i = 0; i < dst.rows; i++) { float v = i * scaleY; float yFrac = v - (int) v; for (int j = 0; j < dst.cols; j++) { float u = j * scaleX; float xFrac = u - (int) u; dst.at<Type>(i, j) = resizeBilinear<Type>(src, u, v, xFrac, yFrac); } }}void resize(const cv::Mat &src, cv::Mat &dst, const int width, const int height) { if (width < 2 || height < 2 || src.cols < 2 || src.rows < 2) { std::cerr << "Too small!" << std::endl; return; } dst = cv::Mat::zeros(height, width, src.type()); switch (src.type()) { case CV_8U: resize<uchar>(src, dst); break; case CV_64F: resize<double>(src, dst); break; default: std::cerr << "Src type is not supported!" 
<< std::endl; break; }}int main() { cv::Mat img = (cv::Mat_<double>(2,2) << 0, 1, 0, 1); std::cout << "img:\n" << img << std::endl; cv::Mat img_resize; resize(img, img_resize, 5, 5); std::cout << "img_resize=\n" << img_resize << std::endl; return EXIT_SUCCESS;} It produces: img:[0, 1; 0, 1]img_resize=[0, 0.25, 0.5, 0.75, 1; 0, 0.25, 0.5, 0.75, 1; 0, 0.25, 0.5, 0.75, 1; 0, 0.25, 0.5, 0.75, 1; 0, 0.25, 0.5, 0.75, 1] Conclusion In my opinion, it is unlikely that the OpenCV resize() function is wrong, as none of the other image processing libraries I can test produce the expected output, and moreover they can produce the same output as OpenCV with the right parameters. I tested against two Python modules (scikit-image and Pillow) as they are easy to use and oriented toward image processing. I was also able to test with Matlab and its image processing toolbox. A rough custom implementation of the bi-linear interpolation for image resizing produces the expected result. Two possibilities for me could explain this behavior: the difference is inherent to the method these image processing libraries use rather than a bug (maybe they use a method to resize images efficiently with some loss compared to a strict bi-linear implementation?)? it is somehow a convention to interpolate properly excluding the border? These libraries are open-source and one can explore their source code to understand where the discrepancy comes from. The linked answer shows that the interpolation works only between the two original blue dots, but I cannot explain why it behaves this way. Why this answer? This answer, even if it partially answers the OP question, is a good way for me to summarize the few things I found about this topic. I also believe it could help in some way other people who may find this.
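As a cross-check of the "corner-aligned" convention that produces the expected output, the C++ sketch above ports almost line for line to NumPy. The snippet below is a minimal illustration of that convention (scale = (src - 1) / (dst - 1)); it is my own addition for clarity, not a claim about how OpenCV itself samples pixels:

```python
import numpy as np

def resize_bilinear_align_corners(src, out_h, out_w):
    """Bi-linear resize that maps destination corners onto source corners,
    i.e. scale = (src_dim - 1) / (dst_dim - 1), as in the C++ sketch above."""
    src = np.asarray(src, dtype=np.float64)
    in_h, in_w = src.shape
    scale_y = (in_h - 1) / (out_h - 1)
    scale_x = (in_w - 1) / (out_w - 1)
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        v = i * scale_y
        v0 = int(v)
        v1 = min(v0 + 1, in_h - 1)   # clamp at the bottom border
        fy = v - v0
        for j in range(out_w):
            u = j * scale_x
            u0 = int(u)
            u1 = min(u0 + 1, in_w - 1)   # clamp at the right border
            fx = u - u0
            # interpolate along x on the two rows, then along y
            top = src[v0, u0] * (1 - fx) + src[v0, u1] * fx
            bot = src[v1, u0] * (1 - fx) + src[v1, u1] * fx
            out[i, j] = top * (1 - fy) + bot * fy
    return out

image = np.array([[0., 1.], [0., 1.]])
print(resize_bilinear_align_corners(image, 5, 5))
```

Running it on the 2x2 input from the question yields rows of [0, 0.25, 0.5, 0.75, 1], i.e. the output the OP expected, which supports the idea that the discrepancy is purely in the coordinate-mapping convention.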
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43598373', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1441468/']}
jdg_376389
stackexchange
llm_judgeable_groundtruth_similarity
56513221
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: newbie in Blazor. Need to try out how to use BS carousel in Blazor. I used below code in the Default Blazor app. But it does not work. What I need to do? Thanks I added this line in the Index.html <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.4.0/jquery.min.js"></script> In the Counter page. I added the following: <div class="container"> <div class="carousel-inner"> <div class="item active"> <img src="la.jpg" alt="Los Angeles" style="width:100%;"> <div class="carousel-caption"> <h3>Los Angeles</h3> <p>LA is always so much fun!</p> </div> </div> <div class="item"> <img src="chicago.jpg" alt="Chicago" style="width:100%;"> <div class="carousel-caption"> <h3>Chicago</h3> <p>Thank you, Chicago!</p> </div> </div> <div class="item"> <img src="ny.jpg" alt="New York" style="width:100%;"> <div class="carousel-caption"> <h3>New York</h3> <p>We love the Big Apple!</p> </div> </div></div><a class="left carousel-control" href="#myCarousel" data-slide="prev"> <span class="glyphicon glyphicon-chevron-left"></span> <span class="sr-only">Previous</span></a><a class="right carousel-control" href="#myCarousel" data-slide="next"> <span class="glyphicon glyphicon-chevron-right"></span> <span class="sr-only">Next</span></a> < /div > -- Update: Index.html <!DOCTYPE html><html><head> <meta charset="utf-8" /> <meta name="viewport" content="width=device-width" /> <title>BlazorCarouselTest</title> <base href="/" /> <link href="css/bootstrap/bootstrap.min.css" rel="stylesheet" /> <link href="css/site.css" rel="stylesheet" /> <script src="css/bootstrap/jquery-3.4.0.min.js" type="text/javascript"></script></head><body> <app>Loading...</app> <script src="_framework/blazor.webassembly.js"></script></body></html &gt; Now provide the response and nothing else.
The carousel needs a bit of JavaScript code: it needs to be configured on page load, but in a Blazor page nobody calls the initialization for the component. The good news is that you can do it from code . Three easy steps: 1.- Create and include the JavaScript in Blazor (you can copy-paste this code at the bottom of your index.html page) <script>window.initializeCarousel = () =>{ $('#carouselExampleIndicators').carousel({interval: 2000}); //see step 2 to understand these new id's: $('#carouselExampleIndicators-prev').click ( () => $('#carouselExampleIndicators').carousel('prev') ); $('#carouselExampleIndicators-next').click ( () => $('#carouselExampleIndicators').carousel('next') );}</script> 2.- Change the carousel's html a bit ( remove the href from the carousel-control prev/next links, and add an id to each control ): <div id="carouselExampleIndicators" class="carousel slide" data-ride="carousel"> <ol class="carousel-indicators"> <li data-target="#carouselExampleIndicators" data-slide-to="0" class="active"></li> <li data-target="#carouselExampleIndicators" data-slide-to="1"></li> </ol> <div class="carousel-inner"> <div class="carousel-item active"> <img class="d-block w-50" src="..." alt="Pepa Pig"> </div> <div class="carousel-item"> <img class="d-block w-50" src="..." alt="Sponge Bob"> </div> </div> <a id="carouselExampleIndicators-prev" class="carousel-control-prev" role="button" data-slide="prev"> <span class="carousel-control-prev-icon" aria-hidden="true"></span> <span class="sr-only">Previous</span> </a> <a id="carouselExampleIndicators-next" class="carousel-control-next" role="button" data-slide="next"> <span class="carousel-control-next-icon" aria-hidden="true"></span> <span class="sr-only">Next</span> </a></div> 3.- Call the previous code after the first render: @page "/counter"@inject IJSRuntime JsRuntime;...@functions { int currentCount = 0; bool firstRender = true; ... 
protected async override Task OnAfterRenderAsync() { if (firstRender) { await JsRuntime.InvokeAsync<object>("initializeCarousel"); firstRender=false; } } That's all: let us know if your carousel is moving now! Edited Remember to include all the js needed to run the Bootstrap carousel in index.html: <body> <app>Loading...</app> <script src="https://code.jquery.com/jquery-3.4.1.min.js" integrity="sha256-CSXorXvZcTkaix6Yvo6HppcZGetbYMGWSFlBw8HfCJo=" crossorigin="anonymous"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js" integrity="sha384-UO2eT0CpHqdSJQ6hJty5KVphtPhzWj9WO1clHTMGa3JDZwrnQq4sF86dIHNDz0W1" crossorigin="anonymous"></script> <script src="https://stackpath.bootstrapcdn.com/ajax/libs/bootstrap/4.3.1/js/bootstrap.min.js" integrity="sha384-JjSmVgyd0p3pXB1rRibZUAYoIIy6OrQ6VrjIEaFf/nJGzIxFDsf4x0xIM+B07jRM" crossorigin="anonymous"></script> <script src="_framework/blazor.webassembly.js"></script> <script> window.initializeCarousel = () => { $('#carouselExampleIndicators').carousel({interval: 2000}) $('#carouselExampleIndicators-prev').click ( () => $('#carouselExampleIndicators').carousel('prev') ); $('#carouselExampleIndicators-next').click ( () => $('#carouselExampleIndicators').carousel('next') ); } </script></body>
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56513221', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/208348/']}
jdg_376390
stackexchange
llm_judgeable_groundtruth_similarity
21769460
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: So I can't quite figure out why the variable this.tasks becomes undefined inside of the add event listener I have inside of my goal object. I have a feeling it might have something to do with asynchronous programming(which I still don't fully understand). Sorry I'm a bit of a JS noob, but if you guys could explain to me what I'm doing wrong and what might be a better solution that would be awesome! Thanks. function Goal(name) { this.gDiv = document.createElement('div'); this.name = name || "goal"; this.tasks = document.createElement('ul'); //Sets the styling and content and adds it to the parent element this.initialize = function() { this.gDiv.className = "default"; this.gDiv.setAttribute("id", this.name); this.gDiv.innerHTML = this.name; elem.appendChild(this.gDiv); this.gDiv.parentNode.insertBefore(this.tasks, this.gDiv.nextSibling); this.tasks.style.display = "none"; }; //Creates a list underneath the a dive associated with the Goal object this.addTask = function(task) { var newLi = document.createElement('li'); newLi.innerHTML = task; this.tasks.appendChild(newLi); }; this.gDiv.addEventListener('click', function(){ alert(this.tasks); }); } Thank you guys! You all answered my question! I'd been scratching my head at this for a while. Kudos to you all! Now provide the response and nothing else.
The scope changes when you enter that anonymous closure and 'this' changes. You can hack around it by doing var self = this; And then using self in place of this (eg): function Goal(name) { var self = this; /* ... */ this.gDiv.addEventListener('click', function(){ alert(self.tasks); }); If you're using jQuery you could do something nicer: this.gDiv.addEventListener('click', $.proxy(function() { alert(this.tasks); }, this)); Either way works just fine. EDIT: In ES6, arrow functions can be used instead as they don't bind their own "this", so it becomes even simpler: this.gDiv.addEventListener('click', () => { alert(this.tasks); });
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21769460', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3278243/']}
jdg_376391
stackexchange
llm_judgeable_groundtruth_similarity
3101279
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is a double question for you amazingly kind Stacked Overflow Wizards out there. How do I set emacs/slime/swank to use UTF-8 when talking with Clojure, or use UTF-8 at the command-line REPL? At the moment I cannot send any non-roman characters to swank-clojure, and using the command-line REPL garbles things. It's really easy to do regular expressions on latin text: (re-seq #"[\w]+" "It's really true that Japanese sentences don't need spaces?") But what if I had some japanese? I thought that this would work, but I can't test it: (re-seq #"[(?u)\w]+" "日本語 の 文章 に は スペース が 必要 ない って、 本当?") It gets harder if we have to use a dictionary to find word breaks, or to find a katakana-only word ourselves: (re-seq #"[アイウエオ-ン]" "日本語の文章にはスペースが必要ないって、本当?") Thanks! Now provide the response and nothing else.
Can't help with swank or Emacs, I'm afraid. I'm using Enclojure on NetBeans and it works well there. On matching: As Alex said, \w doesn't work for non-English characters, not even the extended Latin charsets for Western Europe: (re-seq #"\w+" "prøve") =>("pr" "ve") ; Norwegian(re-seq #"\w+" "mañana") => ("ma" "ana") ; Spanish(re-seq #"\w+" "große") => ("gro" "e") ; German(re-seq #"\w+" "plaît") => ("pla" "t") ; French The \w skips the extended chars. Using [(?u)\w]+ instead makes no difference, same with the Japanese. But see this regex reference : \p{L} matches any Unicode character in category Letter, so it actually works for Norwegian (re-seq #"\p{L}+" "prøve")=> ("prøve") as well as for Japanese (at least I suppose so, I can't read it but it seems to be in the ballpark): (re-seq #"\p{L}+" "日本語 の 文章 に は スペース が 必要 ない って、 本当?")=> ("日本語" "の" "文章" "に" "は" "スペース" "が" "必要" "ない" "って" "本当") There are lots of other options, like matching on combining diacritical marks and whatnot, check out the reference. Edit: More on Unicode in Java A quick reference to other points of potential interest when working with Unicode. Fortunately, Java generally does a very good job of reading and writing text in the correct encodings for the location and platform, but occasionally you need to override it. This is all Java, most of this stuff does not have a Clojure wrapper (at least not yet). java.nio.charset.Charset - represents a charset like US-ASCII, ISO-8859-1, UTF-8 java.io.InputStreamReader - lets you specify a charset to translate from bytes to strings when reading. There is a corresponding OutputStreamWriter. java.lang.String - lets you specify a charset when creating a String from an array of bytes. java.lang.Character - has methods for getting the Unicode category of a character and converting between Java chars and Unicode code points. java.util.regex.Pattern - specification of regexp patterns, including Unicode blocks and categories. 
Java characters/strings are UTF-16 internally. The char type (and its wrapper Character) is 16 bits, which is not enough to represent all of Unicode, so many non-Latin scripts need two chars to represent one symbol. When dealing with non-Latin Unicode it's often better to use code points rather than characters. A code point is one Unicode character/symbol represented as an int. The String and Character classes have methods for converting between Java chars and Unicode code points. unicode.org - the Unicode standard and code charts. I'm putting this here since I occasionally need this stuff, but not often enough to actually remember the details from one time to the next. Sort of a note to my future self, and it might be useful to others starting out with international languages and encodings as well.
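For comparison outside the JVM (my own addition, in Python rather than Clojure/Java simply because it is easy to test): Python 3's re module treats \w as Unicode-aware by default, and its strings are sequences of code points rather than UTF-16 code units, which makes the char-vs-code-point distinction above easy to demonstrate:

```python
import re

# Python 3's re module is Unicode-aware by default, so plain \w already
# matches the accented and Japanese examples that Java's default \w rejects.
print(re.findall(r"\w+", "prøve"))           # ['prøve']
print(re.findall(r"\w+", "mañana"))          # ['mañana']
print(re.findall(r"\w+", "日本語 の 文章"))    # ['日本語', 'の', '文章']

# Code points vs UTF-16 code units: characters outside the Basic
# Multilingual Plane need two UTF-16 units (a surrogate pair), which is
# why Java sometimes needs two chars for one symbol.
s = "\U0001d54a"                       # MATHEMATICAL DOUBLE-STRUCK CAPITAL S
print(len(s))                          # 1 code point in Python
print(len(s.encode("utf-16-le")) // 2) # 2 UTF-16 code units
```

Note that Java's \p{L} syntax is not supported by Python's built-in re; the third-party regex module provides it if you need the exact same category matching.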
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3101279', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/316182/']}
jdg_376392
stackexchange
llm_judgeable_groundtruth_similarity
230919
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In case you haven't heard already, researches at the California Institute of Technology discovered mathematical evidence of large object orbiting the sun at a distance about 20 times further than that of Neptune. Because mathematical observations like this have not been made up until very recently, and the planet itself has yet to be observed, how can we be sure that this is in fact a planet, and not a more distant black hole? My hyopthesis is based around the fact that, A.) We haven't observed the planet, and B.) There was a rumor a week earlier that gravitational waves had been observed at LIGO. My knowledge of black holes pertains that, we would be able to observe such an object by viewing how light becomes obscured as it approaches the event horizon. But given that we are not able to look in every direction at the same time, the probability of missing such an occurance would not be low. Now provide the response and nothing else.
All we can tell (assuming of course that the conclusions of the CalTech team are correct) is that there is a large mass in a distant orbit around the Sun. The mass could in principle be anything, but some things are more likely than others. It seems very plausible that the mass could be a planet that got ejected from an orbit nearer the Sun because: we know at least one planet of that mass has formed, i.e. Neptune; and computer simulations show planets can be ejected (with a significant probability). You suggest that the object could be a black hole rather than a planet, but we know of no mechanism that could cause a 20 Earth mass black hole to be orbiting the Sun. That doesn't make it impossible, but it does make it very much less likely than that the mass is a planet. Commenting on rumours is a somewhat pointless exercise, but for the record the rumour is that a black hole merger has been seen by LIGO. A solitary black hole would not create any detectable gravitational radiation. It's conceivable there might be gravitational waves if the black hole interacted with some other massive body, but in that case we'd get a flood of gamma rays that would certainly have been detected by now.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/230919', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/-1/']}
jdg_376393
stackexchange
llm_judgeable_groundtruth_similarity
2815366
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We just upgraded our Visual Studio 2008 projects to Visual Studio 2010. All of our assemblies were strong signed using a Verisign code signing certificate. Since the upgrade we continuously get the following error: Cannot import the following key file: companyname.pfx. The key file may be password protected. To correct this, try to import the certificate again or manually install the certificate to the Strong Name CSP with the following key container name: VS_KEY_3E185446540E7F7A This happens on some developer machines and not others. Some methods used to fix this that worked some of the time include: Re-installing the key file from Windows Explorer (right click on the PFX file and click Install) Installing Visual Studio 2010 on a fresh machine for the first time prompts you for the password the first time you open the project, and then it works. On machines upgraded from Visual Studio 2008, you don't get this option. I've tried using the SN.EXE utility (Strong Name Tool) to register the key with the Strong Name CSP as the error message suggests, but whenever I run the tool with any options using the version that came with Visual Studio 2010, SN.EXE just lists its command line arguments instead of doing anything. This happens regardless of what arguments I supply. Why is this happening, and what are clear steps to fix it? I'm about to give up on ClickOnce installs and Microsoft code signing. Now provide the response and nothing else.
I was running into this problem as well. I was able to resolve the issue by running sn -i <KeyFile> <ContainerName> (installs a key pair into a named container). sn is usually installed as part of a Windows SDK. For example C:\Program Files (x86)\Microsoft SDKs\Windows\v8.0A\bin\NETFX 4.0 Tools\sn.exe . Most likely this location is not on the search path for your standard environment. However, the "Developer Command Prompt" installed by Visual Studio adds additional information that usually includes the correct location. Based on your post that would look like sn -i companyname.pfx VS_KEY_3E185446540E7F7A This must be run from the location of your PFX file; if you have the solution loaded in VS 2010 you can simply right-click on the pfx file from the solution explorer and choose Open Command Prompt, which will launch the .net 2010 cmd prompt tool in the correct directory. Before running this sn command I did re-install the pfx by right-clicking on it and choosing Install; however, that did not work. Just something to note, as it might be the combination of both that provided the solution. Hope this helps solve your problem.
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/2815366', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/156789/']}
jdg_376394
stackexchange
llm_judgeable_groundtruth_similarity
1703203
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have heard that prepared statements with SQLite should improve performance. I wrote some code to test that, and did not see any difference in performance with using them. So, I thought maybe my code was incorrect. Please let me know if you see any errors in how I'm doing this... [self testPrep:NO dbConn:dbConn];[self testPrep:YES dbConn:dbConn];reuse=0recs=20002009-11-09 10:39:18 -0800processing...2009-11-09 10:39:32 -0800reuse=1recs=20002009-11-09 10:39:32 -0800processing...2009-11-09 10:39:46 -0800-(void)testPrep:(BOOL)reuse dbConn:(sqlite3*)dbConn{ int recs = 2000; NSString *sql; sqlite3_stmt *stmt; sql = @"DROP TABLE test"; sqlite3_exec(dbConn, [sql UTF8String],NULL,NULL,NULL); sql = @"CREATE TABLE test (id INT,field1 INT, field2 INT,field3 INT,field4 INT,field5 INT,field6 INT,field7 INT,field8 INT,field9 INT,field10 INT)"; sqlite3_exec(dbConn, [sql UTF8String],NULL,NULL,NULL); for(int i=0;i<recs;i++){ sql = @"INSERT INTO test (id,field1,field2,field3,field4,field5,field6,field7,field8,field9,field10) VALUES (%d,1,2,3,4,5,6,7,8,9,10)"; sqlite3_exec(dbConn, [sql UTF8String],NULL,NULL,NULL); } sql = @"BEGIN"; sqlite3_exec(dbConn, [sql UTF8String],NULL,NULL,NULL); if (reuse){ sql = @"select * from test where field1=?1 and field2=?2 and field3=?3 and field4=?4 and field5=?5 and field6=?6 and field6=?6 and field8=?8 and field9=?9 and field10=?10 and id=?11"; sqlite3_prepare_v2(dbConn, [sql UTF8String], -1, &stmt, NULL); } NSLog(@"reuse=%d",reuse); NSLog(@"recs=%d",recs); NSDate *before = [NSDate date]; NSLog([before description]); NSLog(@"processing..."); for(int i=0;i<recs;i++){ if (!reuse){ sql = @"select * from test where field1=?1 and field2=?2 and field3=?3 and field4=?4 and field5=?5 and field6=?6 and field6=?6 and field8=?8 and field9=?9 and field10=?10 and id=?11"; sqlite3_prepare_v2(dbConn, [sql UTF8String], -1, 
&stmt, NULL); } sqlite3_bind_int(stmt, 1, 1); sqlite3_bind_int(stmt, 2, 2); sqlite3_bind_int(stmt, 3, 3); sqlite3_bind_int(stmt, 4, 4); sqlite3_bind_int(stmt, 5, 5); sqlite3_bind_int(stmt, 6, 6); sqlite3_bind_int(stmt, 7, 7); sqlite3_bind_int(stmt, 8, 8); sqlite3_bind_int(stmt, 9, 9); sqlite3_bind_int(stmt, 10, 10); sqlite3_bind_int(stmt, 11, i); while(sqlite3_step(stmt) == SQLITE_ROW) { } sqlite3_reset(stmt); } sql = @"BEGIN"; sqlite3_exec(dbConn, [sql UTF8String],NULL,NULL,NULL); NSDate *after = [NSDate date]; NSLog([after description]);} Now provide the response and nothing else.
Prepared statements improve performance by caching the execution plan for a query after the query optimizer has found the best plan. If the query you're using doesn't have a complicated plan (such as simple selects/inserts with no joins), then prepared statements won't give you a big improvement since the optimizer will quickly find the best plan. However, if you ran the same test with a query that had a few joins and used some indexes, you would see the performance difference since the optimizer wouldn't be run every time the query is.
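To make the two styles concrete, here is a small sketch using Python's sqlite3 module rather than the iPhone C API from the question (an assumption purely for illustration). The parameterized form keeps the SQL text constant, so the driver can cache the compiled statement and reuse its plan, while the string-built form forces a fresh parse on every iteration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (id INTEGER, field1 INTEGER)")
conn.executemany("INSERT INTO test VALUES (?, ?)",
                 [(i, 1) for i in range(2000)])

# String-built query: the SQL text differs on every iteration, so SQLite
# must parse and plan each statement from scratch.
rows_stringbuilt = [conn.execute(
    "SELECT id FROM test WHERE field1 = 1 AND id = %d" % i).fetchone()
    for i in range(2000)]

# Parameterized query: one SQL text, only the bound values change.  The
# sqlite3 module keeps a statement cache keyed on the SQL text, so the
# prepared statement (and its plan) is reused across iterations.
rows_bound = [conn.execute(
    "SELECT id FROM test WHERE field1 = 1 AND id = ?", (i,)).fetchone()
    for i in range(2000)]

print(rows_bound == rows_stringbuilt)  # True
```

With a single-table scan like this the plan is trivial, so, as explained above, you should not expect a dramatic timing difference; the win appears once the optimizer has real work to do (joins, index selection).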
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1703203', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/173875/']}
jdg_376395
stackexchange
llm_judgeable_groundtruth_similarity
9617914
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a form inside a dialog which I close by clicking on commandbutton with ajax , like this <h:commandButton value="Add" action="#{myBean.addSomething(false)}" id="add_something_id" > <f:ajax render="@form someTable" execute="@form" onevent="closeAddNewSomethingDialogIfSucceeded"></f:ajax></h:commandButton> and here is the js code for closing the dialog function closeAddNewSomethingDialogIfSucceeded(data) { if(data.status === 'success') { $("#dialog_id").dialog("close"); } } No problems till here... Now I changed some of the dialog form fields into required="true" and now I want to prevent the closing of the dialog of i got validation errors... But the ajax data.status still reaches its success state , and I can't figure out what indication of validation failure I can hook on... any ideas? Thanks to BalusC answer I did the following: in JSF , added : <h:panelGroup id="global_flag_validation_failed_render"> <h:outputText id="global_flag_validation_failed" value="true" rendered="#{facesContext.validationFailed}"/> </h:panelGroup> the f:ajax was changed into <f:ajax render="@form someTable global_flag_validation_failed_render" and in js added the following check if(data.status === 'success') { if($("#global_flag_validation_failed").length === 0){ $("#dialog_id").dialog("close"); }} Now provide the response and nothing else.
Not specifically for required="true" , but you can check by #{facesContext.validationFailed} if validation has failed in general. If you combine this with checking if the button in question is pressed by #{not empty param[buttonClientId]} , then you can put it together in the rendered attribute of the <h:outputScript> as follows: <h:commandButton id="add_something_id" binding="#{add}" value="Add" action="#{myBean.addSomething(false)}"> <f:ajax execute="@form" render="@form someTable" /></h:commandButton><h:outputScript rendered="#{not empty param[add.clientId] and not facesContext.validationFailed}"> $("#dialog_id").dialog("close");</h:outputScript> (note that you need to make sure that the script is also re-rendered by f:ajax) A bit hacky, but it's not possible to handle it in the onevent function as the standard JSF implementation doesn't provide any information about the validation status in the ajax response. If you happen to use RichFaces, then you could just use EL in the oncomplete attribute of the <a4j:xxx> command button/link. They are namely evaluated on a per-request basis instead of on a per-view basis as in standard JSF and PrimeFaces: <a4j:commandButton ... oncomplete="if (#{!facesContext.validationFailed}) $('#dialog_id').dialog('close')" /> Or if you happen to use PrimeFaces, then you could take advantage of the fact that PrimeFaces extends the ajax response with an additional args.validationFailed attribute which is injected straight in the JavaScript scope of the oncomplete attribute: <p:commandButton ... oncomplete="if (args &amp;&amp; !args.validationFailed) $('#dialog_id').dialog('close')" /> (note that &amp; is been used instead of & , because & is a special character in XML/XHTML) Or you could use the PrimeFaces' RequestContext API in the bean's action method to programmatically execute JavaScript in the rendered view. 
RequestContext.getCurrentInstance().execute("$('#dialog_id').dialog('close')"); No conditional checks are necessary as the bean's action method won't be invoked anyway when the validation has failed.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/9617914', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/617373/']}
jdg_376396
stackexchange
llm_judgeable_groundtruth_similarity
51010511
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My app plays video throughout several vcs using AVFoundation. For example FirstController plays a video then a user can push on SecondController which also plays a video then they can push on the ThirdController which also plays a video... The same thing would apply if their switching tabs. There's a video screen on TabOne, TabTwo, and TabThree. Instead of setting up all the playLayer code associated with AVFoundation in each class I created one class that contains a AVPlayerViewController() and add that class to each vc using addChildViewController() . The problem is since I have one class that manages AVFoundation the Notification.Name.AVPlayerItemDidPlayToEndTime that gets notified when the player finishes playing can't distinguish one video on one vc from another video in a different vc. For example after a video finishes playing I show a replayButton. If the video in the first tab is playing, when I switch to TabTwo I pause that video, after the video on TabTwo finishes and the replayButton appears, if I switch back to TabOne, the replayButton will also appear on TabOne's screen (it should still show the pause button). The problem is even though I have different instances of the AVFoundationManager , all the instances access the one showReplayButton() function that gets triggered when the notification fires. How can I get around this? I know I can check on the parent of the AVFoundationManager to find out which parent is managing it and use that inside the showReplayButton() function but I don't know which check to run on it. AVFoundationManager: class AVFoundationManager: UIViewController { .... 
override func viewDidLoad() {
    super.viewDidLoad()
    configureAVPlayerController()
}

func configureAVPlayerController() {
    let avPlayerVC = AVPlayerViewController()
    avPlayerVC.player = player
    avPlayerVC.view.frame = view.bounds
    avPlayerVC.showsPlaybackControls = false
    avPlayerVC.videoGravity = AVLayerVideoGravity.resizeAspectFill.rawValue
    addChildViewController(avPlayerVC)
    view.addSubview(avPlayerVC.view)
    avPlayerVC.didMove(toParentViewController: self)
    player?.replaceCurrentItem(with: playerItem!)
    player?.play()
    NotificationCenter.default.addObserver(self, selector: #selector(showReplayButton), name: NSNotification.Name.AVPlayerItemDidPlayToEndTime, object: nil)
    playerItem?.addObserver(self, forKeyPath: #keyPath(AVPlayerItem.status), options: [.old, .new], context: &itemContext)
}

@objc func showReplayButton() {
    // if self.parent ... run a bool on the parent and enclose these two in the parentheses?
    pausePlayButton.isHidden = true
    replayButton.isHidden = false
}
}

TabOneClass:

let avFoundationManager = AVFoundationManager()
addChildViewController(avFoundationManager)
avFoundationManager.didMove(toParentViewController: self)

TabTwoClass:

let avFoundationManager = AVFoundationManager()
addChildViewController(avFoundationManager)
avFoundationManager.didMove(toParentViewController: self)

FirstController (root) in TabThree:

let avFoundationManager = AVFoundationManager()
addChildViewController(avFoundationManager)
avFoundationManager.didMove(toParentViewController: self)

SecondController (child) in TabThree:

let avFoundationManager = AVFoundationManager()
addChildViewController(avFoundationManager)
avFoundationManager.didMove(toParentViewController: self)

Now provide the response and nothing else.
The % wrapping goes around the parameters, not the placeholders. My snippet will be using object-oriented mysqli syntax instead of the procedural syntax that your code demonstrates. First you need to set up the necessary ingredients:

1. the WHERE clause expressions -- to be separated by ORs
2. the data types of your values -- your values are strings, so use "s"
3. the parameters to be bound to the prepared statement

I am going to combine #2 and #3 into one variable for simpler "unpacking" with the splat operator ( ... ). The data type string must be the first element, then one or more elements will represent the bound values. As a logical inclusion, if you have no conditions in your WHERE clause, there is no benefit to using a prepared statement; just directly query the table. Code: ( 100% Tested / Successful Code )

$string = "my name";
$conditions = [];
$parameters = [''];
foreach (array_unique(explode(' ', $string)) as $value) {
    $conditions[] = "name LIKE ?";
    $parameters[0] .= 's';
    $parameters[] = "%{$value}%";
}
// $parameters now holds ['ss', '%my%', '%name%']
$query = "SELECT * FROM info";
if ($conditions) {
    $stmt = $conn->prepare($query . ' WHERE ' . implode(' OR ', $conditions));
    $stmt->bind_param(...$parameters);
    $stmt->execute();
    $result = $stmt->get_result();
} else {
    $result = $conn->query($query);
}
foreach ($result as $row) {
    echo "<div>{$row['name']} and whatever other columns you want</div>";
}

For anyone looking for similar dynamic querying techniques:

- SELECT with dynamic number of values in IN()
- INSERT dynamic number of rows with one execute() call
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/51010511', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4833705/']}
jdg_376397
stackexchange
llm_judgeable_groundtruth_similarity
165605
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Let $M$ be a topological/differentiable manifold. Is there any topology on the group of homeomorphisms/diffeomorphisms with compact support , turning it into a (locally-)compact topological group? (My question is motivated by the fact that the isometries of a (locally-)compact metric space with finitely many connected components form a (locally-)compact topological group, and I'd like to see how far this analogy can be pushed.) Now provide the response and nothing else.
You can always make any group locally compact by giving it the discrete topology, but I doubt that's what you want. Then the diffeo version of your question is up against the Montgomery-Zippin theorem that $\mathrm{Diff}_c(M)$, endowed with any locally compact topology, would contain no small subgroups and therefore be a finite-dimensional Lie group. Which it isn't (unless made discrete). See: Bochner & Montgomery, Locally compact groups of differentiable transformations (1946), Montgomery & Zippin, Topological transformation groups (1955), Thm 2, p. 208, Montgomery, Finite dimensionality of certain transformation groups (1957).
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/165605', 'https://mathoverflow.net', 'https://mathoverflow.net/users/54780/']}
jdg_376398
stackexchange
llm_judgeable_groundtruth_similarity
6108043
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using FileWriter and I have noticed strange behavior. I buffer my collection myself, and every x rows I use IOUtils.writelines(myList, "\n", writer); It doesn't write to the file. I continue to call it with more lines, and only after it is very full does it write to the file. Does it use a buffer? I can't find it in its documentation. Now provide the response and nothing else.
The second sentence of the FileWriter class overview says: The constructors of this class assume that the default character encoding and the default byte-buffer size are acceptable. To specify these values yourself, construct an OutputStreamWriter on a FileOutputStream. (My emphasis) So clearly it's buffered (unless the default byte-buffer size is zero and they're being really odd with their phrasing). I suspect it's just using an OutputStreamWriter on a FileOutputStream . Looking at OutputStreamWriter : Each invocation of a write() method causes the encoding converter to be invoked on the given character(s). The resulting bytes are accumulated in a buffer before being written to the underlying output stream. (My emphasis) If you want to ensure that various buffers at various levels are flushed, to the extent you can, look at using the flush method .
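To see the encoder buffering in action, here is a minimal sketch (the class and file names are mine, not from the question): a short write through a FileWriter is only guaranteed to have reached the underlying stream once flush() has been called.

```java
import java.io.FileWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FlushDemo {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("flushdemo", ".txt");
        FileWriter writer = new FileWriter(tmp.toFile());
        writer.write("hello");
        // At this point the bytes may still sit in the encoder's internal
        // buffer rather than in the file.
        writer.flush();
        // After flush(), the data has been pushed to the underlying stream.
        String onDisk = new String(Files.readAllBytes(tmp));
        if (!onDisk.equals("hello")) {
            throw new AssertionError("expected 'hello', got '" + onDisk + "'");
        }
        writer.close();
        Files.deleteIfExists(tmp);
        System.out.println("content after flush: " + onDisk);
    }
}
```

If you end up flushing frequently anyway, wrapping the FileWriter in a BufferedWriter and flushing that gives you explicit control over both buffering layers.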
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6108043', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/450602/']}
jdg_376399
stackexchange
llm_judgeable_groundtruth_similarity
17512542
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to get a set of four values from four input labels in Qt. I would like to use QInputDialog, but by default it contains only one input box. So, how can I add four labels and four line-edits and get the values from them? Now provide the response and nothing else.
You don't. The documentation is pretty clear: The QInputDialog class provides a simple convenience dialog to get a single value from the user. If you want multiple values, create a QDialog derived class from scratch with 4 input fields. For example:

QDialog dialog(this);
// Use a layout allowing to have a label next to each field
QFormLayout form(&dialog);
// Add some text above the fields
form.addRow(new QLabel("The question ?"));
// Add the lineEdits with their respective labels
QList<QLineEdit *> fields;
for(int i = 0; i < 4; ++i) {
    QLineEdit *lineEdit = new QLineEdit(&dialog);
    QString label = QString("Value %1").arg(i + 1);
    form.addRow(label, lineEdit);
    fields << lineEdit;
}

// Add some standard buttons (Cancel/Ok) at the bottom of the dialog
QDialogButtonBox buttonBox(QDialogButtonBox::Ok | QDialogButtonBox::Cancel, Qt::Horizontal, &dialog);
form.addRow(&buttonBox);
QObject::connect(&buttonBox, SIGNAL(accepted()), &dialog, SLOT(accept()));
QObject::connect(&buttonBox, SIGNAL(rejected()), &dialog, SLOT(reject()));

// Show the dialog as modal
if (dialog.exec() == QDialog::Accepted) {
    // If the user didn't dismiss the dialog, do something with the fields
    foreach(QLineEdit * lineEdit, fields) {
        qDebug() << lineEdit->text();
    }
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17512542', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2277817/']}
jdg_376400
stackexchange
llm_judgeable_groundtruth_similarity
175244
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a cable like this with 4 wires inside. Each wire is 20 AWG. I've been told that I can twist the ends together (i.e. green and red, white and black) on both ends of the cable and this will effectively increase the wire gauge. This would make perfect sense to me if the wires were naked (no green/red/white/black plastic coat around it) so it would be copper twisted the full length instead of just the ends... does the coat affect the current? I'm not an electrician or EE so I'm not too sure about this. I want to use this for sprinkler valves. Now provide the response and nothing else.
If you twist two wires together, each would carry half the current, so you'd "effectively increase the gauge." American Wire Gauges go down by about 10 for every factor of ten in cross-sectional area. If you had ten #20 wires connected in parallel, they could carry as much power as one #10 wire. With two #20 wires, you'd have the equivalent of one #17 wire. (A handy "rule of thumb" value: #40 copper wire has about an Ohm of resistance for each foot. By the rule above, #30 would have an Ohm for every ten feet, and #20 an Ohm for every 100 feet.) Note that connecting wires in parallel may work at DC or low frequency AC. For audio, RF, or other purposes, you'd just mess up the wire characteristics, and cause yourself problems.
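The "10 gauge numbers per factor of ten in area" rule can be turned into a quick calculation. This is a sketch of the rule of thumb, not an exact AWG table, and the helper name is my own:

```python
import math

def parallel_awg(gauge, n):
    """Approximate equivalent AWG of n identical wires in parallel.

    AWG drops about 10 numbers for every 10x increase in
    cross-sectional area, so n parallel strands subtract
    10 * log10(n) gauge numbers.
    """
    return gauge - 10 * math.log10(n)

print(round(parallel_awg(20, 2)))   # two #20 wires ~ one #17 wire
print(round(parallel_awg(20, 10)))  # ten #20 wires ~ one #10 wire
```

The same logarithmic rule reproduces the resistance rule of thumb in the answer: each drop of 10 gauge numbers multiplies the resistance per foot by ten.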
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/175244', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/10615/']}
jdg_376401
stackexchange
llm_judgeable_groundtruth_similarity
14776463
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am writing an Excel file builder in C++. I have everything I need working, but I still rely on an external empty .xlsx file which I unzip, iterate through, and add data too as needed to create the final file. I want to remove this dependency by turning the .xlsx file into a binary blob in the .rodata section of my executable, by turning it first into an object file like so: $ ld -r -b binary -o template.o template.xlsx$ objcopy --rename-section .data=.rodata,alloc,load,readonly,data,contents template.o template.o I got this information from the blog post http://www.burtonini.com/blog/computers/ld-blobs-2007-07-13-15-50 . The second step is to link it into the binary, which I can do with ld . How do I automate these two steps with CMake? I have no idea at the moment how to run specific commands like the ld one above for the first step, and I have tried adding files/template.o to my target_link_libraries for the second, but ld just says: /usr/bin/ld: cannot find -lfiles/template.o I added the following custom command to my CMakeLists.txt: add_custom_command(OUTPUT files/template.o COMMAND ld -r -b binary -o files/template.o files/template.xlsx COMMAND objcopy --rename-section .data=.rodata,alloc,load,readonly,data,contents files/template.o files/template.o) and added files/template.o to the add_executable call. Unfortunately, CMake says this: ld: cannot open output file files/template.o: No such file or directory It is my understanding that the OUTPUT command in the add_custom_command allows us to tell CMake what file is being created by the COMMAND commands. So I'm a bit confused now. 
I updated the CMakeLists.txt file and added a target, to make sure the template file got built: add_custom_target(run ALL DEPENDS template.o) And a dependency to make sure that it gets built before the excelbuilder target: add_dependencies(excelbuilder run) I also updated the custom command to look like this: add_custom_command(OUTPUT template.o COMMAND ld -r -b binary -o template.o ${CMAKE_CURRENT_SOURCE_DIR}/files/template.xlsx COMMAND objcopy --rename-section .data=.rodata,alloc,load,readonly,data,contents template.o template.o) When I run this, the output is as follows (make VERBOSE=1) $ make VERBOSE=1/usr/bin/cmake -H/home/ravloony/projects/excelparser -B/home/ravloony/projects/excelparser/build --check-build-system CMakeFiles/Makefile.cmake 0/usr/bin/cmake -E cmake_progress_start /home/ravloony/projects/excelparser/build/CMakeFiles /home/ravloony/projects/excelparser/build/CMakeFiles/progress.marksmake -f CMakeFiles/Makefile2 allmake[1]: Entering directory `/home/ravloony/projects/excelparser/build'make -f src/lib/minizip/CMakeFiles/minizip_1-1.dir/build.make src/lib/minizip/CMakeFiles/minizip_1-1.dir/dependmake[2]: Entering directory `/home/ravloony/projects/excelparser/build'cd /home/ravloony/projects/excelparser/build && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /home/ravloony/projects/excelparser /home/ravloony/projects/excelparser/src/lib/minizip /home/ravloony/projects/excelparser/build /home/ravloony/projects/excelparser/build/src/lib/minizip /home/ravloony/projects/excelparser/build/src/lib/minizip/CMakeFiles/minizip_1-1.dir/DependInfo.cmake --color=make[2]: Leaving directory `/home/ravloony/projects/excelparser/build'make -f src/lib/minizip/CMakeFiles/minizip_1-1.dir/build.make src/lib/minizip/CMakeFiles/minizip_1-1.dir/buildmake[2]: Entering directory `/home/ravloony/projects/excelparser/build'make[2]: Nothing to be done for `src/lib/minizip/CMakeFiles/minizip_1-1.dir/build'.make[2]: Leaving directory 
`/home/ravloony/projects/excelparser/build'/usr/bin/cmake -E cmake_progress_report /home/ravloony/projects/excelparser/build/CMakeFiles 17 18 19 20 21[ 22%] Built target minizip_1-1make -f CMakeFiles/run.dir/build.make CMakeFiles/run.dir/dependmake[2]: Entering directory `/home/ravloony/projects/excelparser/build'cd /home/ravloony/projects/excelparser/build && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /home/ravloony/projects/excelparser /home/ravloony/projects/excelparser /home/ravloony/projects/excelparser/build /home/ravloony/projects/excelparser/build /home/ravloony/projects/excelparser/build/CMakeFiles/run.dir/DependInfo.cmake --color=Dependee "/home/ravloony/projects/excelparser/build/CMakeFiles/run.dir/DependInfo.cmake" is newer than depender "/home/ravloony/projects/excelparser/build/CMakeFiles/run.dir/depend.internal".Dependee "/home/ravloony/projects/excelparser/build/CMakeFiles/CMakeDirectoryInformation.cmake" is newer than depender "/home/ravloony/projects/excelparser/build/CMakeFiles/run.dir/depend.internal".Scanning dependencies of target runmake[2]: Leaving directory `/home/ravloony/projects/excelparser/build'make -f CMakeFiles/run.dir/build.make CMakeFiles/run.dir/buildmake[2]: Entering directory `/home/ravloony/projects/excelparser/build'/usr/bin/cmake -E cmake_progress_report /home/ravloony/projects/excelparser/build/CMakeFiles 22[ 27%] Generating template.old -r -b binary -o template.o /home/ravloony/projects/excelparser/files/template.xlsxobjcopy --rename-section .data=.rodata,alloc,load,readonly,data,contents template.o template.omake[2]: Leaving directory `/home/ravloony/projects/excelparser/build'/usr/bin/cmake -E cmake_progress_report /home/ravloony/projects/excelparser/build/CMakeFiles 22[ 27%] Built target runmake -f CMakeFiles/excelbuilder.dir/build.make CMakeFiles/excelbuilder.dir/dependmake[2]: Entering directory `/home/ravloony/projects/excelparser/build'cd /home/ravloony/projects/excelparser/build && /usr/bin/cmake -E 
cmake_depends "Unix Makefiles" /home/ravloony/projects/excelparser /home/ravloony/projects/excelparser /home/ravloony/projects/excelparser/build /home/ravloony/projects/excelparser/build /home/ravloony/projects/excelparser/build/CMakeFiles/excelbuilder.dir/DependInfo.cmake --color=make[2]: Leaving directory `/home/ravloony/projects/excelparser/build'make -f CMakeFiles/excelbuilder.dir/build.make CMakeFiles/excelbuilder.dir/buildmake[2]: Entering directory `/home/ravloony/projects/excelparser/build'Linking CXX executable excelbuilder/usr/bin/cmake -E cmake_link_script CMakeFiles/excelbuilder.dir/link.txt --verbose=1/usr/bin/c++ -std=c++0x -g -ftest-coverage -fprofile-arcs -fpermissive CMakeFiles/excelbuilder.dir/src/common/exception.cpp.o CMakeFiles/excelbuilder.dir/src/excelbuilder/retriever.cpp.o CMakeFiles/excelbuilder.dir/src/excelbuilder/xlsx.cpp.o CMakeFiles/excelbuilder.dir/src/common/config.cpp.o CMakeFiles/excelbuilder.dir/src/excelbuilder/main.cpp.o -o excelbuilder -rdynamic src/lib/minizip/libminizip_1-1.so -ltinyxml2 -lmysqlcppconn -lboost_regex-mt -ltemplate.o -lz -Wl,-rpath,/home/ravloony/projects/excelparser/build/src/lib/minizip/usr/bin/ld: cannot find -ltemplate.ocollect2: error: ld returned 1 exit statusmake[2]: *** [excelbuilder] Error 1make[2]: Leaving directory `/home/ravloony/projects/excelparser/build'make[1]: *** [CMakeFiles/excelbuilder.dir/all] Error 2make[1]: Leaving directory `/home/ravloony/projects/excelparser/build'make: *** [all] Error 2 But file template.o has been correctly generated and is in the folder. It seems that ld is expecting a system library. Now provide the response and nothing else.
In the end, this is how I did it. add_custom_command(OUTPUT template.o COMMAND cd ${CMAKE_CURRENT_SOURCE_DIR}/files && ld -r -b binary -o ${CMAKE_CURRENT_BINARY_DIR}/template.o template.xlsx COMMAND objcopy --rename-section .data=.rodata,alloc,load,readonly,data,contents ${CMAKE_CURRENT_BINARY_DIR}/template.o ${CMAKE_CURRENT_BINARY_DIR}/template.o) The cd commands are there because ld sets the names of the automatically declared variables to something depending on the full path passed to the input file. So if the input file was /home/user/project/files/template.xlsx , the variable would be something like _binary_home_user_project_files_template_xlsx_start . Not cool for portable compilation. add_library(template STATIC template.o) tells the linker to compile the object file into the binary. This also adds a target called template . Then SET_SOURCE_FILES_PROPERTIES( template.o PROPERTIES EXTERNAL_OBJECT true GENERATED true ) to tell CMake not to compile the file, which is generated at build time. SET_TARGET_PROPERTIES( template PROPERTIES LINKER_LANGUAGE C ) Or else we get an error message, because CMake can't figure out from the ".o"-suffix that it is a C linker we need. And then in my target_link_libraries step, I simply added template as a target. target_link_libraries (excelbuilder ${MINIZIP_LIB_NAME} ${TINYXML_LIBRARIES} ${MYSQLCONNECTORCPP_LIBRARY} ${Boost_LIBRARIES} template )
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14776463', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/326154/']}
jdg_376402
stackexchange
llm_judgeable_groundtruth_similarity
31294
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Functions that are not primitive recursive but $\mu$-recursive are said to grow too fast to be primitive recursive. Are there functions $f$ and $F$ such that a function is primitive recursive iff its growth rate is less than $f$'s and $\mu$-recursive only iff its growth rate is greater than $F$'s? Is there a function $G$ such that a function with growth rate greater than $G$'s isn't computable at all, i.e. not even $\mu$-recursive? Now provide the response and nothing else.
The answer to your first question is "no". There are non-recursive functions that only take values 0 and 1. For example, take the characteristic function of a non-recursive set. Even if you restrict your question to functions that are strictly increasing, the answer is still no, for very similar reasons: you can have a function $f$ that is not recursive but $f(n+1)-f(n)=1$ or $2$ for all $n$. For example, start with a non-recursive set, and let $f(n+1)-f(n)-1$ be the characteristic function of that set.

What is true is that there are functions $f,F$ such that any primitive recursive function grows slower than $f$ and any recursive function grows slower than $F$. Any version of Ackermann's function is an example of the first phenomenon. An example of the second can be obtained easily as follows: first, note that there are only countably many recursive functions. List them as $f_1,f_2,\dots$, and let $F(n)=\sum_{i\le n}f_i(n)+1$.

There is actually quite a bit of literature around these issues. The partial order of functions $f:{\mathbb N}\to{\mathbb N}$ ordered by eventual domination (i.e., $f<g$ iff $f(n)<g(n)$ for all but finitely many $n$) has been extensively studied in set theory. The argument in the paragraph above is simply that its "cofinality" is not countable. The sizes of the least collection of functions such that any function is dominated by one of them, or no function dominates all of them, are well-known "cardinal characteristics of the continuum", in this case the numbers ${\mathfrak d}$ and ${\mathfrak b}$, respectively. These are uncountable numbers, less than or equal to the size of the reals. A very good reference for them is Andreas Blass's article for the "Handbook of set theory", "Combinatorial Cardinal Characteristics of the Continuum". It should be available from his homepage.

The study of hierarchies of fast-growing functions is essential in proof theory.
For example, if a certain recursive function grows like Ackermann's function, then a theory that can only "see" primitive recursive functions won't be able to "see" this function. Here, a theory "sees" a recursive function iff it can prove that the function is total. Hence, if a statement implies that, say, Ackermann's function is defined for all values, that means that the statement cannot be proved in the theory in question. A concrete example is so-called Primitive Recursive Arithmetic. Similarly, we can associate to Peano arithmetic a recursive function $f_{\epsilon_0}$ that dominates all recursive functions that Peano arithmetic sees. Many famous examples of incompleteness (mathematical statements about natural numbers that cannot be proved "finitistically") were obtained by showing that the statements in question imply that certain recursive functions grow as $f_{\epsilon_0}$ or faster. A good reference for these matters is the article by Fairtlough and Wainer in the "Handbook of Proof Theory", "Hierarchies of provably recursive functions". For a very nice concrete example, I highly recommend the paper by Ketonen and Solovay, "Rapidly growing Ramsey functions", The Annals of Mathematics 113, 2 (1981), 267–314. They study one of these statements that is true but not provable in Peano arithmetic (the Paris-Harrington theorem) by analyzing the rate of growth of functions naturally associated to it and relating them to one of the standard hierarchies (the one Fairtlough and Wainer denote $F_\alpha$). A natural related question is whether in some sense Ackermann's function is the smallest recursive function that dominates all primitive recursive functions. The answer to this is also no. A very recent reference is the article by Simmons, "The Ackermann functions are not optimal but by how much?" J. Symbolic Logic 75 (2010), no. 1, 289–313.
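To make the growth talk concrete, here is a small sketch of the two-argument Ackermann-Péter function; even tiny diagonal values show how quickly it outruns any fixed primitive recursive bound:

```python
import sys
sys.setrecursionlimit(100_000)

def ackermann(m, n):
    # Ackermann-Péter function: total recursive but not primitive recursive.
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# Growth along the diagonal already explodes:
print([ackermann(m, m) for m in range(4)])  # -> [1, 3, 7, 61]
```

The next diagonal value, ackermann(4, 4), is already far too large to compute by direct recursion, which is the sense in which the function eventually dominates every primitive recursive function.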
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/31294', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1792/']}
jdg_376403
stackexchange
llm_judgeable_groundtruth_similarity
17079527
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using a service that returns XML:

response = HTTParty.post(service_url)
response.parsed_response
=> "\n\t<Result>\n<success>\ntrue\n</success>\n</Result>"

I need to convert this string to a hash. Something like this:

response.parsed_response.to_hash
=> {:result => { :success => true } }

What is the best way to do this? Now provide the response and nothing else.
The built-in from_xml Rails Hash method will do precisely what you want. In order to get your response.parsed_response correctly mapped to a hash, you'll need to gsub() out the newlines:

hash = Hash.from_xml(response.parsed_response.gsub("\n", ""))
hash #=> {"Result"=>{"success"=>"true"}}

In the context of parsing a hash in Rails, objects of String type are not substantively different than those of Symbol from a general programming perspective. However, you can apply the Rails symbolize_keys method to the output:

symbolized_hash = hash.symbolize_keys
#=> {:Result=>{"success"=>"true"}}

As you can see, symbolize_keys doesn't operate on any nested hashes, but you could potentially iterate through inner hashes and apply symbolize_keys. The final piece of the puzzle is to convert the string "true" to the boolean true. AFAIK, there's no way to do this on your hash in place, but if you're iterating/operating on it, you could potentially implement a solution like the one suggested in this post:

def to_boolean(str)
  return true if str == "true"
  return false if str == "false"
  return nil
end

Basically, when you reach the inner key-value pair, you'd apply to_boolean() to the value, which is currently set to "true". In your example, the return value is the boolean true.
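If you want the nested keys symbolized as well, a plain-Ruby sketch could look like the following. No Rails is required here, and deep_symbolize is a hypothetical helper name of mine (newer ActiveSupport versions ship a similar deep_symbolize_keys):

```ruby
def to_boolean(str)
  return true if str == "true"
  return false if str == "false"
  nil
end

# Hypothetical helper: recursively symbolize keys, unlike symbolize_keys.
def deep_symbolize(hash)
  hash.each_with_object({}) do |(key, value), out|
    out[key.to_sym] = value.is_a?(Hash) ? deep_symbolize(value) : value
  end
end

parsed = { "Result" => { "success" => "true" } }
result = deep_symbolize(parsed)
result[:Result][:success] = to_boolean(result[:Result][:success])
p result  # => {:Result=>{:success=>true}}
```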
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17079527', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/740394/']}
jdg_376404
stackexchange
llm_judgeable_groundtruth_similarity
160970
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: I am trying to understand the jet bundles but currently I am stuck on the following questions: Let $\pi: E\rightarrow X$ be a smooth (holomorphic) vector bundle of rank $k$ over a smooth (complex) manifold $X$. I know that the bundle $J_k(E)$ of k-jets of $E$ has the structure of a vector bundle over $X$. I would like to know however: 1.) What is the rank of the vector bundle $J_k(E)$ ? 2.) Is $J_k(E)$ holomorphic in the case when $(E, \pi, X)$ is holomorphic? 3.) When $\pi: E\rightarrow X$ is a fiber bundle with structure group $G$, can we view $J_1(E)$ as the associated principal $G$-bundle $P$ associated to $E$ or not?I have seen an interpretation of $J_1(E)$ as some sort of an "extended frame bundle" of E in the sense that its fiber consists of the set of all pairs comprising a basis of $T_pX$ $(T^{1, 0}_pX)$ and a basis of $E_p$, $p\in X$. P.S.: I am new here and I really hope that I don't annoy the experienced audience in this forum with trivialities. I would appreciate any help or suggestions or simply good references. Thank you in advance for your competent help. Now provide the response and nothing else.
(1) Locally, jets of sections are just collections of $r=\operatorname{rank}E$ jets of functions, hence, the rank of $J_k(E)$ equals $r$ times the number of multiindices $I=(i_1,\ldots,i_n)$ with $|I|\le k$. (2) It is certainly holomorphic. (3) It seems to me that $J_1(E)=T^*X\otimes E$.
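For point (1), the number of multiindices $I$ with $|I|\le k$ in $n$ variables is the binomial coefficient $\binom{n+k}{n}$, so the rank works out to $r\binom{n+k}{n}$. A small brute-force check (the function names here are my own):

```python
from itertools import product
from math import comb

def multiindex_count(n, k):
    """Count multiindices I = (i_1, ..., i_n) with |I| <= k by brute force."""
    return sum(1 for I in product(range(k + 1), repeat=n) if sum(I) <= k)

def jet_bundle_rank(r, n, k):
    # rank J_k(E) = r * C(n + k, n)
    return r * comb(n + k, n)

# The closed form agrees with direct enumeration:
for n in range(1, 4):
    for k in range(4):
        assert multiindex_count(n, k) == comb(n + k, n)

# 1-jets of a rank-2 bundle over a surface (n = 2):
print(jet_bundle_rank(2, 2, 1))  # -> 6
```

For $k=1$ this gives $r(n+1)$, matching the rank of $T^*X\otimes E$ plus $E$ in the usual 1-jet exact sequence.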
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/160970', 'https://mathoverflow.net', 'https://mathoverflow.net/users/48531/']}
jdg_376405
stackexchange
llm_judgeable_groundtruth_similarity
2238661
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to calculate the following limit but I can't wrap my head around it. Can you guys give me some hints: $$\lim_{x\to0^+}\frac{{\int_0^{x^2}\sin{\sqrt{t}}}~ dt}{x^3}$$ Now provide the response and nothing else.
$$\begin{align*}\lim_{x \to 0^+} \frac{1}{x^3} \int_0^{x^2} \sin \sqrt{t} \ \mathrm dt&= \lim_{x \to 0^+} \frac{1}{x^3} \int_0^{x^2} \sqrt{t} \left( 1 + O(t) \right) \ \mathrm dt \\&= \lim_{x \to 0^+} \frac{1}{x^3} \left( \frac{2}{3} t^{3/2} + O(t^{5/2}) \right)_0^{x^2} \\&= \lim_{x \to 0^+} \frac{1}{x^3} \left( \frac{2}{3} x^3 + O(x^5) \right) \\&= \lim_{x \to 0^+} \left(\dfrac23+O(x^2)\right) \\&= \frac{2}{3}\end{align*}$$
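The same value also drops out of L'Hôpital's rule combined with the fundamental theorem of calculus and the chain rule, which avoids the series expansion entirely:

```latex
\lim_{x \to 0^+} \frac{\int_0^{x^2} \sin\sqrt{t}\,\mathrm dt}{x^3}
\;\stackrel{\text{L'H}}{=}\;
\lim_{x \to 0^+} \frac{2x \sin\sqrt{x^2}}{3x^2}
= \lim_{x \to 0^+} \frac{2\sin x}{3x}
= \frac{2}{3}
```

Here $\frac{\mathrm d}{\mathrm dx}\int_0^{x^2}\sin\sqrt{t}\,\mathrm dt = 2x\sin\sqrt{x^2} = 2x\sin x$ for $x>0$, and the last step is the standard limit $\sin x / x \to 1$.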
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2238661', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/46187/']}
jdg_376406
stackexchange
llm_judgeable_groundtruth_similarity
4380159
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'd like to try writing my own minimal NT subsystem on Windows 7 for purely educational purposes -- something like a bare-bones equivalent of the posix.exe in Microsoft's Subsystem for Unix-based Applications. But I can't seem to find any public documentation on this topic. What API does a subsystem need to implement? How does it get registered with Windows? How does the subsystem image need to be built (what flags need to be set in the PE header, etc.)? I'd most like to find a book or web site with an overview of the entire subject, or even the source code for a "hello world" NT subsystem that someone else has written. But anything at all would be appreciated if you can point me in the right direction here... Now provide the response and nothing else.
Here are the major components of a subsystem:

1. User-mode server. The server creates a (A)LPC port and listens for and handles client requests.
2. User-mode client DLL. In the DLL_INIT_ROUTINE, you can connect to the port set up by the server. This DLL will expose your subsystem's API, and some functions will require communication with the server.
3. Kernel-mode support driver (you might not need this).

You will want to store process or thread state in either your server or driver. If you're storing it in the server, you might need something like NtRegisterThreadTerminatePort to ensure you get to clean up when a process or thread exits. If you're using a driver, you need PsSetCreateProcessNotifyRoutine. And lastly, if you're on XP and below, you can add new system calls. You can do this by calling KeAddSystemServiceTable. To invoke the system calls from user-mode, you need to create stubs like this (for x86):

; XyzCreateFooBar(__out PHANDLE FooBarHandle, __in ACCESS_MASK DesiredAccess, ...)
mov eax, SYSTEM_CALL_NUMBER
mov edx, 0x7ffe0300
call [edx]
retn 4

On Vista and above you can no longer add new system service tables because there is only room for two: the kernel's system calls and win32k's system calls. After a bit of Googling I found this: http://winntposix.sourceforge.net/ . I think it's very similar to what you're looking for, and uses a lot of the things I have mentioned.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4380159', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/40511/']}
jdg_376407
stackexchange
llm_judgeable_groundtruth_similarity
380073
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Trying to non-technically explain why (kinetic) energy goes as $v^2$ rather than the perhaps-more-intuitive "double the speed, double the energy" $v$, I ended up putting my foot in my mouth (itself a difficult physical manipulation:) by "proving" precisely what I didn't want to prove, as follows. Suppose you get an object moving to speed $v$. That'll require some amount of energy, which we'll just call $E_v$ (we won't need to bother calculating it). Now, to somebody else, moving alongside that object, it seems to be stationary. So he can get it moving at $v$ relative to himself with the same energy $E_v$ that you used to get it moving at $v$ relative to yourself. So the total energy used is $2E_v$, first to get it moving from $0$ to $v$ relative to you, and then to get it moving from $0$ to $v$ relative to that other guy. But now it's moving at $2v$ relative to you, and the total energy used is $2E_v$ rather than the known-right answer $4E_v$. So what's wrong? I'm guessing it's somehow got to involve those two different lab frames. For example, maybe whatever apparatus the second guy used in his frame first had to acquire its $v$ relative to your original frame, and that required some energy. But even so, why exactly an extra $2E_v$ (to make up the "missing" difference)? So that can't be precisely the argument's error. But it's the two frames, somehow. Right? Or what? >>Edit<< This is an extended reply to @StephenG 's comment under @PhilipWood 's answer below. Stephen: sure energy's a common-sense concept -- everything in physics is (must be) common sense if you can get to the underlying intuition. And after failing miserably with my above-described argument, I came up with a more successful attempt, described below just to prove my point that it ultimately must be common sense.
This argument's a bit more elaborate, and I'd like to come up with a correct simpler one. But at least this argument gets the correct result... Suppose you're hit with a ball going at speed $v$, and then with an identical ball going at speed $2v$. So how much "harder" does the $2v$ ball hit you? To answer that, suppose the balls are made up of lots and lots of closely-packed identical little particles, all moving side-by-side together. Then each little $2v$-particle carries twice the "punch" of a $v$-particle ("punch" here is, of course, momentum, not energy, but I just said "punch" to avoid introducing big words and unnecessary technicalities). However, since the $2v$-particles are travelling twice as fast, then in, say, one second, twice as many of them will hit you. Therefore, you'll be hit with twice as many particles, each carrying twice the "punch". So your "total punch" will be four times as great, not two times. Okay, so this argument involves time, and therefore power rather than energy. So it's not entirely 100% achieving its purpose. But since this was a non-technical discussion, I simply didn't bother mentioning my misgivings about it. Good enough for the time being, I figured. But, to elaborate my original question, can you make air-tight the above argument, and maybe explain what's wrong with the original one (hopefully so that it's correct and even simpler than this one)? Now provide the response and nothing else.
In general, neither energies nor energy differences are invariant between frames. But conservation of energy is true in all frames, and we can use that to figure out where the problem is. To recap, the person in the moving frame spends energy $E_v$ from their muscles to raise the kinetic energy of the object by $E_v$ , which is just fine. The person in the original frame agrees the person in the moving frame spends energy $E_v$ from chemical energy in their muscles (everybody agrees on how hard somebody is breaking a sweat) and raises the kinetic energy of the object by $3 E_v$ . The extra $2 E_v$ of energy comes from the fact that the moving person started with a reservoir of kinetic energy: that of their own body, which is moving at speed $v$ . This energy is reduced because the person slows down due to Newton's third law; it is "harvested" to be put into the object. There's no way to avoid putting in this extra energy. If you try to reduce the change in speed by putting them in a large car, the energy comes from the kinetic energy of the car; the argument is just the same. If the car's speed is fixed too, the energy comes from the chemical energy of the gasoline. The very same explanation holds for a rocket, where this is called the Oberth effect. In all cases, there's no contradiction in taking energy to be quadratic in speed. In case you're not convinced, here's an explicit calculation. We'll make the person's mass infinite for convenience. The loss of kinetic energy of the person is $$\Delta K = \frac{dK}{dp} \Delta p = \frac{p}{M} m v = m v^2 $$ where I used $K = p^2/2M$ for the person's kinetic energy. But this is just $2 E_v$ as stated above.
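To make the answer's bookkeeping concrete, here is a small numeric check (my own sketch, not part of the original answer): a very heavy person moving at $v$ pushes an object from $v$ to $2v$ while momentum is conserved, and the energy ledger comes out exactly as described above.

```python
M = 1e9    # mass of the pushing person, made huge as in the answer's limit
m = 1.0    # mass of the object
v = 1.0    # speed of the moving frame

E_v = 0.5 * m * v**2                 # energy to take the object from rest to v

# Momentum conservation: M*v + m*v = M*v_person + m*(2*v)
v_person = v - (m / M) * v           # the person recoils slightly

dK_object = 0.5 * m * ((2 * v)**2 - v**2)    # object gains 3*E_v in the ground frame
dK_person = 0.5 * M * (v_person**2 - v**2)   # person loses ~2*E_v of kinetic energy
muscle = dK_object + dK_person               # chemical energy spent: ~1*E_v

print(dK_object / E_v)   # 3.0
print(dK_person / E_v)   # ≈ -2.0
print(muscle / E_v)      # ≈ 1.0
```

So the moving person only spends $E_v$ of muscle energy, and the "extra" $2E_v$ is harvested from their own kinetic energy, just as stated.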
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/380073', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/-1/']}
jdg_376408
stackexchange
llm_judgeable_groundtruth_similarity
142319
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Let $M$ and $N$ be topological spaces. Are there necessary and sufficient conditions on the topological properties of the spaces such that $C(M,N)$ is metrizable? For $M$ compact and $N$ a metric space, the space is obviously metrizable using the uniform convergence topology, $d(f,g)=\sup_{x\in M}d(f(x),g(x))$. And also, if $N$ is a metric space, but $M$ is not necessarily compact the space of continuous bounded functions $C_0(M,N)=\{f\in C(M,N)\mid d(f(x),a)\leq K_f, \forall x\in M\}$ for a point $a\in N$ and $K_f>0$ is metrizable with the same distance. But in general, which distances are usable in $C(M,N)$ in the context of a noncompact space $M$? Now provide the response and nothing else.
As to the compact-open topology of $C(X,Y)$, it is metrizable if and only if $Y$ is metrizable and $X$ is hemicompact.
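For the "if" direction, the metric comes from the standard construction over a hemicompact exhaustion; the following sketch is my addition, not part of the original answer:

```latex
% For hemicompact $X=\bigcup_n K_n$ (every compact subset of $X$ lies in some
% $K_n$) and a metric space $(Y,d)$, the compact-open topology on $C(X,Y)$ is
% induced by
\[
  \rho(f,g) \;=\; \sum_{n=1}^{\infty} 2^{-n}\,
     \min\Bigl(1,\ \sup_{x\in K_n} d\bigl(f(x),g(x)\bigr)\Bigr),
\]
% since convergence in $\rho$ is exactly uniform convergence on every $K_n$,
% hence on every compact subset of $X$.
```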
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/142319', 'https://mathoverflow.net', 'https://mathoverflow.net/users/40076/']}
jdg_376409
stackexchange
llm_judgeable_groundtruth_similarity
11603412
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In my android project, I have many activities and some of them already extend other stuff like map activity or BroadcastReceiver. How do I create a function that I can call from any activity, because I don't want to have to repeat any code in multiple activities. thanks. Now provide the response and nothing else.
If I have useful functions that perform little helpful tasks that I want to invoke from several Activities, I create a class called Util and park them in there. I make them static so that I don't need to allocate any objects. Here is an example of part of one such class I wrote:

public final class Util {
    public final static int KIBI = 1024;
    public final static int BYTE = 1;
    public final static int KIBIBYTE = KIBI * BYTE;

    /**
     * Private constructor to prevent instantiation
     */
    private Util() {}

    public static String getTimeStampNow() {
        Time time = new Time();
        time.setToNow();
        return time.format3339(false);
    }
}

To use these constants and methods, I can access them from the class name, rather than any object:

int fileSize = 10 * Util.KIBIBYTE;
String timestamp = Util.getTimeStampNow();

There's more to the class than this, but you get the idea.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11603412', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1497454/']}
jdg_376410
stackexchange
llm_judgeable_groundtruth_similarity
338633
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a class used to process customer payments. All but one of the methods of this class are the same for every customer, except for one that calculates (for example) how much the customer's user owes. This can vary greatly from customer to customer and there is no easy way to capture the logic of the calculations in something like a properties file, as there can be any number of custom factors. I could write ugly code that switches based on customerID:

switch(customerID) {
    case 101: .. do calculations for customer 101
    case 102: .. do calculations for customer 102
    case 103: .. do calculations for customer 103
    etc
}

but this requires rebuilding the class every time we get a new customer. What is the better way? [Edit] The "duplicate" article is completely different. I'm not asking how to avoid a switch statement, I'm asking for the modern design that best applies to this case -- which I could solve with a switch statement if I wanted to write dinosaur code. The examples provided there are generic, and not helpful, since they essentially say "Hey, the switch works pretty good in some cases, not in some others." [Edit] I decided to go with the top-ranked answer (create a separate "Customer" class for each customer that implements a standard interface) for the following reasons:

Consistency: I can create an interface that ensures all Customer classes receive and return the same output, even if created by another developer
Maintainability: All code is written in the same language (Java) so there is no need for anyone else to learn a separate coding language in order to maintain what should be a dead-simple feature.
Reuse: In case a similar problem crops up in the code, I can reuse the Customer class to hold any number of methods to implement "custom" logic.
Familiarity: I already know how to do this, so I can get it done quickly and move on to other, more pressing issues.

Drawbacks:

Each new customer requires a compile of the new Customer class, which may add some complexity to how we compile and deploy changes.
Each new customer has to be added by a developer -- a support person can't just add the logic to something like a properties file. This is not ideal ... but then I also wasn't sure how a Support person would be able to write out the necessary business logic, especially if it is complex with many exceptions (as is likely).
It won't scale well if we add many, many new customers. This is not expected, but if it does happen we'll have to rethink many other parts of the code as well as this one.

For those of you interested, you can use Java Reflection to call a class by name:

Payment payment = getPaymentFromSomewhere();
try {
    String nameOfCustomClass = propertiesFile.get("customClassName");
    Class<?> cpp = Class.forName(nameOfCustomClass);
    CustomPaymentProcess pp = (CustomPaymentProcess) cpp.newInstance();
    payment = pp.processPayment(payment);
} catch (Exception e) {
    //handle the various exceptions
}
doSomethingElseWithThePayment(payment);

Now provide the response and nothing else.
I have a class used to process customer payments. All but one of the methods of this class are the same for every customer, except for one that calculates (for example) how much the customer's user owes. Two options come to my mind. Option 1: Make your class an abstract class, where the method which varies between customers is an abstract method. Then create a subclass for each customer. Option 2: Create a Customer class, or an ICustomer interface, containing all of the customer-dependent logic. Instead of having your payment processing class accept a customer ID, have it accept a Customer or ICustomer object. Whenever it needs to do something customer-dependent, it calls the appropriate method.
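Here is a minimal sketch of Option 1 in Java; all names (Payment, PaymentProcessor, Customer101Processor) and the sample rule are illustrative, not taken from the question:

```java
// Shared flow lives in the abstract base class; only the customer-dependent
// calculation is abstract.
class Payment {
    final double base;
    Payment(double base) { this.base = base; }
}

abstract class PaymentProcessor {
    // Customer-independent processing, identical for everyone.
    double process(Payment p) {
        double owed = amountOwed(p);  // the one step that varies
        // ... common charging/recording steps would go here ...
        return owed;
    }

    // Each customer subclass supplies its own calculation.
    abstract double amountOwed(Payment p);
}

class Customer101Processor extends PaymentProcessor {
    @Override
    double amountOwed(Payment p) { return p.base * 1.10; } // hypothetical rule
}
```

Option 2 is the same idea turned sideways: the varying rule moves onto a Customer/ICustomer object that the (now concrete) processor receives as a parameter.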
{}
{'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/338633', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/253152/']}
jdg_376411
stackexchange
llm_judgeable_groundtruth_similarity
27528
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In CLRS (on pages 49-50), what is the meaning of the following statement: $\Sigma_{i=1}^{n} O(i)$ is only a single anonymous function (of $i$), but is not the same as $O(1)+O(2)+\cdots+O(n)$, which doesn't really have an interpretation." Now provide the response and nothing else.
Since $1+2+\dots+n =O(n^2)$, it is tempting to suggest that $O(1)+O(2)+\dots+O(n) = O(n^2)$ ... but this is not in fact valid. The reason is that there might be a different constant for each term in the sum.

An example

Let me give an example. Consider the sums $S(1) = 1^2$, $S(2) = 1^2 + 2^2$, $S(3) = 1^2 + 2^2 + 3^2$, $S(4) = 1^2 + 2^2 + 3^2 + 4^2$, and so on. Note that $1^2 \in O(1)$, $2^2 \in O(2)$, $3^2 \in O(3)$, $4^2 \in O(4)$, and so on for each term in the sum. Therefore, it would be reasonable to write $S(j)=1^2 + \dots + j^2$ in the form $S(j) = O(1) + \dots + O(j)$. So can we conclude that $S(j) = O(j^2)$? Nope. In fact, $S(n) = n(n+1)(2n+1)/6$, so $S(n) = \Theta(n^3)$. If that doesn't help, let's try the following more precise mathematical development:

A formalization

Recall that the interpretation of, say, $O(n^2)$ is that it is a set of non-negative functions $f(n)$ (namely, the set of functions $f(n)$ such that there exist constants $c \ge 0, d\ge 0$ such that $f(n) \le c \cdot n^2$ for all $n\ge d$). The closest we can come to an interpretation of $O(1) + O(2) + \dots + O(n)$ is that it is the set of functions of the form $f_1(n) + f_2(n) + \dots + f_n(n)$ such that $f_1(n) \in O(1)$, $f_2(n) \in O(2)$, ..., $f_n(n) \in O(n)$. But now the constants for each $f_i$ can be different. Thus, each $f_i$ is a non-negative function such that there exist constants $c_i\ge 0, d_i \ge 0$ with $f_i(n) \le c_i \cdot i$ for all $n \ge d_i$.

Now, given this, what can we say about $g(n) = f_1(n) + f_2(n) + \dots + f_n(n)$? Not much useful. We know that there exists a constant $d=\max(d_1,d_2,\dots,d_n)$ such that $g(n) \le c_1 \cdot 1 + c_2 \cdot 2 + \dots + c_n \cdot n$ for all $n\ge d$. Now what can we say about this sum? Well, the answer is that we can't say anything at all. It could be arbitrarily large. It is tempting to let $c=\max(c_1,c_2,\dots,c_n)$ and say that $g(n) \le c \cdot (1+2+\dots+n) \le c \cdot n^2 = O(n^2)$... but this is not actually correct, since we need a single constant value of $c$ that works for all $n$, and the value $\max(c_1,c_2,\dots,c_n)$ is a function of $n$, not a constant. So there might not be any constant $c$ such that $g(n) \le c \cdot (1+2+\dots+n)$; there might not be any constant $c$ such that $g(n) \le c \cdot n^2$. There is no guarantee that $g(n) \in O(n^2)$.

For more reading

See https://math.stackexchange.com/q/86076/14578 and Sums of Landau terms revisited for other questions that deal with this general issue.
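A quick numeric illustration of the example sum (my addition): $S(n)=1^2+\cdots+n^2$ has each term in $O(i)$, yet grows like $n^3/3$, so it is not $O(n^2)$.

```python
# S(n) = 1^2 + 2^2 + ... + n^2: each term i^2 is in O(i) (with constant c_i = i),
# but the sum is Theta(n^3).
def S(n):
    return sum(i * i for i in range(1, n + 1))

for n in (10, 100, 1000, 10000):
    print(n, S(n) / n**3)   # the ratio tends to 1/3, so S(n) grows like n^3/3

# Closed form from the answer: n(n+1)(2n+1)/6
assert S(1000) == 1000 * 1001 * 2001 // 6
```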
{}
{'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/27528', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/19246/']}
jdg_376412
stackexchange
llm_judgeable_groundtruth_similarity
6772135
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want a radial (polar) plot using gnuplot (i.e. in a circular co-ordinate system (r, theta)). Here I have used the values:

Theta Max-strain
0 3400
60 5300
120 4700
180 3800
240 4100
300 3100
360 3400

How to get such a plot using gnuplot? Now provide the response and nothing else.
I tried to recreate the plot of your question and this is what I came up with:

unset border
set polar
set angles degrees #set gnuplot on degrees instead of radians
set style line 10 lt 1 lc 0 lw 0.3 #redefine a new line style for the grid
set grid polar 60 #set the grid to be displayed every 60 degrees
set grid ls 10
set xrange [-6000:6000] #make gnuplot go until 6000
set yrange [-6000:6000]
set xtics axis #display the xtics on the axis instead of on the border
set ytics axis
set xtics scale 0 #"remove" the tics so that only the y tics are displayed
set xtics ("" 1000, "" 2000, "" 3000, "" 4000, "" 5000, "" 6000) #set the xtics to only go from 0 to 6000 with increment of 1000 but do not display anything. This has to be done otherwise the grid will not be displayed correctly.
set ytics 0, 1000, 6000 #make the ytics go from the center (0) to 6000 with increment of 1000
set size square
set key lmargin
set_label(x, text) = sprintf("set label '%s' at (6500*cos(%f)), (6500*sin(%f)) center", text, x, x) #this places a label on the outside
#here all labels are created
eval set_label(0, "0")
eval set_label(60, "60")
eval set_label(120, "120")
eval set_label(180, "180")
eval set_label(240, "240")
eval set_label(300, "300")
set style line 11 lt 1 lw 2 pt 2 ps 2 #set the line style for the plot
#and finally the plot
plot "-" u 1:2 t "Max strain" w lp ls 11
0 3400
60 5300
120 4700
180 3800
240 4100
300 3100
360 3400
e

As you can see, the major difference is that the angle 0 is not on top but on the right (which is mathematically correct). You can change this however by modifying the using declaration in the plot and the set_label function. As you can see in the script not everything is really crisp and shiny. If someone finds improvements please let me know! One last bit of "advice": It is not always reasonable to try to reproduce a plot from some tool with gnuplot. Often the strength of gnuplot is to plot data in the most simple way to be easily reproducible.
Maybe you can kick some lines out of that script above and still be happy with it.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6772135', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/558086/']}
jdg_376413
stackexchange
llm_judgeable_groundtruth_similarity
146156
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Numerical evidence suggests the following. For $c \in \mathbb{N}, c > 2$ define the sequence $a_n$ by $a_0=0,a_1=1, \; a_n=c a_{n-1} - a_{n-2}$ For $ 5 < n < 500, \; 2 < c < 100$ there are no primes in $a_n$ thoughsemiprimes exist. Is it true that $a_n$ is always composite for $n > 5$ If yes is there explicit partial factorization? Searching OEIS solved the case $c=6$ with a Pell equation. Counterexamples are welcome. Now provide the response and nothing else.
For even $n$ it follows by induction that $a_n$ is divisible by $c$. Also, $a_n>c$ for $n\ge3$, so $a_n$ is not a prime. For odd $n=2m+1$, one can do the following: Consider $c$ as an indeterminate. Then $a_n=P_n(c)$, where $P_n(X)\in\mathbb Z[X]$ (by binomially expanding Paolo's answer). One can show that there is a polynomial $h_m(X)\in\mathbb Z[X]$ of degree $m$ such that $P_n(X)=(-1)^mh_m(-X)h_m(X)$. An explicit expression of $h_m$ is\begin{equation}h_m(X)=\prod_{k=1}^m(X-\zeta^k-\frac{1}{\zeta^k}),\end{equation}where $\zeta$ is a primitive $n$-th root of unity. Now if $\lvert\gamma\rvert\ge3$, then each factor of $h_m(\gamma)$ has absolute value $\gt1$, so $h_m(\pm c)\ne\pm1$. Thus $a_n$ is never a prime for $n\ge3$ and $c\ge 3$.
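A quick numeric sanity check of both claims (my addition, using trial division on small cases):

```python
# For several c, verify that a_n is divisible by c when n is even,
# and composite for every n >= 3, as the answer proves.
def a(c, n):
    x, y = 0, 1                  # a_0 = 0, a_1 = 1
    for _ in range(n):
        x, y = y, c * y - x      # a_n = c*a_{n-1} - a_{n-2}
    return x

def smallest_factor(k):
    d = 2
    while d * d <= k:
        if k % d == 0:
            return d
        d += 1
    return k                     # k itself is prime

for c in range(3, 11):
    for n in range(3, 10):
        v = a(c, n)
        if n % 2 == 0:
            assert v % c == 0            # even n: c divides a_n
        assert smallest_factor(v) < v    # a_n is composite
```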
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/146156', 'https://mathoverflow.net', 'https://mathoverflow.net/users/12481/']}
jdg_376414
stackexchange
llm_judgeable_groundtruth_similarity
247892
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a power circuit that implements an AC-to-DC rectifier and a non-isolated buck converter to convert a 120 VAC 60Hz signal to 12V and then a linear voltage regulator to step the 12V down to 5V. The 5V will be used to power a micro controller and there is a possibility that people will touch the pins of the micro controller. I realize that it is not ideal to use a non-isolated power converter, but due to size constraints I do not want to put an isolated power supply with a transformer in the design. The question is: Will a 500mA fuse after the 12-to-5V linear voltage regulator make the design safe against electric shock since the buck converter is non-isolated? Now provide the response and nothing else.
Will a 500mA fuse after the 12-to-5V linear voltage regulator make the design safe against electric shock since the buck converter is non-isolated? No. A fuse does not protect against electric shock. Fuses are for limiting current in a fault condition to protect equipment, wiring and prevent fire. What you are proposing is potentially lethal. You must isolate the low voltage circuit from mains if there is any risk of contact with the circuit. Just for reference, RCD / GFCI earth leakage protection devices generally trip at 30 mA as this is considered a safe limit for protection of humans against electric shock. Put safety first. Then worry about size and cost. An isolated SMPS (switched mode power supply) with 5 V output won't be any bigger than what you are proposing.
{}
{'log_upvote_score': 6, 'links': ['https://electronics.stackexchange.com/questions/247892', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/-1/']}
jdg_376415
stackexchange
llm_judgeable_groundtruth_similarity
7086058
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to figure out a way to build a conditional dynamically. Here is my code so far:

var greaterThan = ">";
var a = 1;
var b = 2;

if (a Convert.ToOperator(greaterThan) b) {...}

I did read this post, but could not figure out how to implement some of the stuff: C# convert a string for use in a logical condition Any advice is highly appreciated. Thanks. Now provide the response and nothing else.
I wasn't going to post it, but thought that it might be of some help. Assuming of course that you don't need the advanced generic logic in Jon's post.

public static class Extension
{
    public static Boolean Operator(this string logic, int x, int y)
    {
        switch (logic)
        {
            case ">": return x > y;
            case "<": return x < y;
            case "==": return x == y;
            default: throw new Exception("invalid logic");
        }
    }
}

You could use the code like this, with greaterThan being a string with the wanted logic/operator.

if (greaterThan.Operator(a, b))
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7086058', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/296568/']}
jdg_376416
stackexchange
llm_judgeable_groundtruth_similarity
1507288
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I aim to prove the following. Lemma 2.2.10 . Let $a$ be a positive number. Then there exists exactly one natural number $b$ such that $b{+\!+} = a$ . I use the following. Definition 2.2.1 (Addition of natural numbers). Let $m$ be a natural number. To add zero to $m$ , we define $0 + m := m$ . Now suppose inductively that we have defined how to add $n$ to $m$ . Then we can add $n{+\!+}$ to $m$ by defining $(n{+\!+}) + m := (n + m){+\!+}$ . Proposition 2.2.6 (Cancellation law). Let $a, b, c$ be natural numbers such that $a + b = a + c$ . Then we have $b = c$ . Axiom 2.4 . Different natural numbers must have different successors; i.e., if $n, m$ are natural numbers and $n \neq m$ , then $n{+\!+} \neq m{+\!+}$ . Equivalently, if $n{+\!+} = m{+\!+}$ , then we must have $n = m$ . Tao suggested the use of induction, so I am doubting the validity of my proof. Proof:Proceed by contradiction. Let us assume we have $2$ differing natural numbers $b$ and $c$ , such that $b{+\!+} = a$ and $c{+\!+} = a$ . Then we have $b{+\!+} = 0 + b{+\!+}$ and $c{+\!+} = 0 + c{+\!+}$ (definition of addition).So we then have that $0 + b{+\!+} = 0 + c{+\!+}$ , but then $b{+\!+} = c{+\!+}$ (Cancellation Law). This is a contradiction due to Axiom 2.4 . I am self-studying real analysis, so I want to ensure that I am proceeding correctly. Now provide the response and nothing else.
You have proved the uniqueness. Also you need to show the existence of such $b$ . To do that, you need to consider the statement $$P(a)\equiv \text{ there exists a } b \text{ such that } b\!+\!\!+=a \text{ whenever } a\ne0$$ because $a$ is positive . Then induct on $a$ . Note that in certain step, the statement is vacuously true. We have to prove $$a\ne0\implies\exists b\in\mathbf N,\;\;b\!+\!\!+=a.$$ So, we induct on $a$ . The base case $a=0$ is vacuously true. Now suppose inductively that the claim is true for $a$ ; we need to show the claim for $a\!+\!\!+$ , i.e., $b'\!+\!\!+=a\!+\!\!+$ for some natural number $b'$ . Thus, by induction hypothesis, we have $b\!+\!\!+=a$ . Applying the increment (by Sustitution axiom of equality) we obtain $(b\!+\!\!+)\!+\!\!+=a\!+\!\!+$ . Defining $b':=b\!+\!\!+$ the claim follows.
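For comparison, the same statement can be sketched in Lean 4 over its built-in Nat (my addition; Lean's injectivity of succ plays the role of Tao's Axiom 2.4, and this is a sketch rather than a formalization of Tao's own development):

```lean
-- Uniqueness comes from injectivity of succ (Tao's Axiom 2.4);
-- existence comes from case analysis on a, mirroring the induction above.
theorem exists_unique_pred (a : Nat) (ha : a ≠ 0) : ∃! b : Nat, b + 1 = a := by
  cases a with
  | zero   => exact absurd rfl ha
  | succ b => exact ⟨b, rfl, fun c hc => Nat.succ.inj hc⟩
```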
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1507288', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_376417
stackexchange
llm_judgeable_groundtruth_similarity
27060035
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm implementing a Chess game (Chinese Chess, aka. Xiangqi, to be exact) in Erlang. A piece is represented by a {Color, Type} tuple, and a point (ie. location) is represented by a {File, Rank} tuple. The board is represented by a point-to-piece map (ie. #{point() => piece()} ). There is a function to query whether a particular point on the board is occupied by a piece or not:

is_point_occupied_simple(Board, Point) ->
    ensure_is_point(Point),
    case maps:find(Point, Board) of
        {ok, _} -> true;
        error -> false
    end.

However, I would like to add an optional parameter to check the color of the piece - if the point is occupied by a piece of the specified color, the function returns true; otherwise it returns false. If I don't care about the color of the piece, I can just put '_' in the TargetColor parameter (or, equivalently, invoke is_point_occupied/2 ):

is_point_occupied(Board, Point) ->
    is_point_occupied(Board, Point, '_').

is_point_occupied(Board, Point, '_') ->
    ensure_is_point(Point),
    case maps:find(Point, Board) of
        {ok, _} -> true;
        error -> false
    end;
is_point_occupied(Board, Point, TargetColor) ->
    ensure_is_point(Point),
    ensure_is_color(TargetColor),
    case maps:find(Point, Board) of
        {ok, {TargetColor, _}} -> true;
        {ok, _} -> false;
        error -> false
    end.

I don't like the above implementation because of the large proportion of copy-and-paste, so I simplified the above function like this:

is_point_occupied_2(Board, Point) ->
    is_point_occupied_2(Board, Point, '_').

is_point_occupied_2(Board, Point, TargetColor) ->
    ensure_is_point(Point),
    ensure_is_color_or_wildcard(TargetColor),
    case maps:find(Point, Board) of
        {ok, {TargetColor, _}} -> true;
        {ok, _} -> is_wildcard(TargetColor);
        error -> false
    end.

The function is_wildcard/1 is simply a one-liner:

is_wildcard(Wildcard) -> Wildcard =:= '_'.

Now, I would like to go further to replace TargetColor by TargetPiece, which is a {TargetColor, TargetType} tuple. None, one or both of the tuple elements may be a wildcard ( '_' ). I found it difficult to write the case clauses. I also notice that to match an n-tuple allowing "don't care" in this way, there need to be 2^n clauses. So obviously it is not the proper way to implement this. Does anyone have better ideas? PS: I didn't include the source of all functions since those I didn't include are trivial to implement in my opinion. Please leave a comment below if you're interested. Thanks! Now provide the response and nothing else.
You can try to specify both 'menu' and 'menubar' option when you call tinymce.init() to add a new menubar item on the modern theme. I tried it and it works. You can check the live demo on http://fiddle.tinymce.com/39eaab/1 with TinyMCE 4.1.7.

<script type="text/javascript">
tinymce.init({
    selector: "textarea",
    menu : {
        file : {title : 'File' , items : 'newdocument'},
        edit : {title : 'Edit' , items : 'undo redo | cut copy paste pastetext | selectall'},
        insert : {title : 'Insert', items : 'link media | template hr'},
        view : {title : 'View' , items : 'visualaid'},
        format : {title : 'Format', items : 'bold italic underline strikethrough superscript subscript | formats | removeformat'},
        table : {title : 'Table' , items : 'inserttable tableprops deletetable | cell row column'},
        tools : {title : 'Tools' , items : 'spellchecker code'},
        newmenu: {title : 'New Menu', items : 'newmenuitem'}
    },
    menubar: 'file edit newmenu',
    setup: function(editor) {
        editor.addMenuItem('newmenuitem', {
            text: 'New Menu Item',
            context: 'newmenu',
            onclick: function () {
                alert('yey!');
            }
        });
    }
});
</script>
<form method="post" action="dump.php">
    <textarea name="content"></textarea>
</form>
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27060035', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/142239/']}
jdg_376418
stackexchange
llm_judgeable_groundtruth_similarity
636893
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If $a$ and $b$ are positive real numbers such that $a+b=1$, prove that $$\bigg(a+\dfrac{1}{a}\bigg)^2+\bigg(b+\frac{1}{b}\bigg)^2\ge \dfrac{25}{2}.$$ My work: $$\bigg(a+\dfrac{1}{a}\bigg)^2+\bigg(b+\dfrac{1}{b}\bigg)^2\ge \dfrac{25}{2}\implies a^2+\dfrac{1}{a^2}+b^2+\dfrac{1}{b^2}+4\ge \dfrac{25}{2}$$Now, we have $a^2+\dfrac{1}{a^2}\ge 2$ and $b^2+\dfrac{1}{b^2}\ge 2$. Here, I am stuck, I cannot use the information provided, $a+b=1$ to any use. Please help! Now provide the response and nothing else.
By QM-AM, $\displaystyle a^2+b^2 \geq \frac{1}{2}(a+b)^2 = \frac{1}{2}$. QM-AM again gives $\displaystyle \frac{1}{a^2}+\frac{1}{b^2} \geq \frac{1}{2}\left(\frac{1}{a}+\frac{1}{b}\right)^2$. By AM-HM, $\displaystyle \frac{2}{\frac{1}{a}+\frac{1}{b}} \leq \frac{a+b}{2} = \frac{1}{2}$, whence $\displaystyle\frac{1}{a}+\frac{1}{b} \geq 4$, and thus $\displaystyle \frac{1}{a^2}+\frac{1}{b^2} \geq 8$. Thus we have $$\left(a+\frac{1}{a}\right)^2 + \left(b+\frac{1}{b}\right)^2 = a^2 + b^2 + \frac{1}{a^2}+\frac{1}{b^2} + 4 \geq \frac{1}{2} + 8 + 4 = \frac{25}{2}.$$
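A quick numeric check of the bound (my addition): scanning $a+b=1$ on a fine grid, the minimum of the left-hand side is indeed $25/2$, attained at $a=b=1/2$.

```python
# f(a) = (a + 1/a)^2 + (b + 1/b)^2 with b = 1 - a; by the proof above the
# minimum over 0 < a < 1 is 25/2 = 12.5, at a = b = 1/2.
def f(a):
    b = 1.0 - a
    return (a + 1 / a)**2 + (b + 1 / b)**2

vals = [f(i / 10000) for i in range(1, 10000)]
print(min(vals))   # 12.5
assert min(vals) == f(0.5)
```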
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/636893', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/105577/']}
jdg_376419
stackexchange
llm_judgeable_groundtruth_similarity
805893
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How to prove the following $$\int^1_0 \frac{\log\left(\frac{1+x}{1-x}\right)}{x\sqrt{1-x^2}}\,dx=\frac{\pi^2}{2}$$ I thought of separating the two integrals and use the beta or hypergeometric functions but I thought these are not best ideas to approach the problem. Any other ideas ? Now provide the response and nothing else.
After the change of variables $x=\tanh u$ (suggested by the square root) this integral reduces to$$\mathcal{I}=\int_0^{\infty}\frac{2u\,du}{\sinh u}.$$Expanding $\displaystyle\frac{1}{\sinh u}=2\sum_{k=0}^{\infty}e^{-(2k+1)u}$ and exchanging summation and integration, we find that$$\mathcal{I}=4\sum_{k=0}^{\infty}\frac{1}{(2k+1)^2}.$$Standard manipulations express the last sum in terms of $\zeta(2)=\frac{\pi^2}{6}$:$$\zeta(2)=\sum_{k=0}^{\infty}\frac{1}{(2k+1)^2}+\frac{\zeta(2)}{4}\quad \Longrightarrow \quad \displaystyle\mathcal{I}=3\zeta(2).$$
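A numeric cross-check (my addition): integrating $\int_0^\infty 2u/\sinh u\,du$ directly with a midpoint rule and taking a partial sum of the series both reproduce $\pi^2/2 \approx 4.9348$.

```python
import math

# Midpoint rule for the integral of 2u/sinh(u) over (0, 40]; the integrand
# decays like 4u*e^(-u), so the tail beyond u = 40 is negligible.
n, L = 400_000, 40.0
h = L / n
integral = h * sum(2 * u / math.sinh(u) for u in (h * (i + 0.5) for i in range(n)))

# Partial sum of 4 * sum_{k>=0} 1/(2k+1)^2
series = 4 * sum(1.0 / (2 * k + 1)**2 for k in range(1_000_000))

print(integral, series, math.pi**2 / 2)   # all ≈ 4.9348
```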
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/805893', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/87813/']}
jdg_376420
stackexchange
llm_judgeable_groundtruth_similarity
12890754
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I try to call a method from another form. My try:

public partial class newLedPopUp : Form
{
    Form1 back = new Form1();
    back.output();
    Close();
}

and

public partial class Form1 : Form
{
    newLedPopUp popup = new newLedPopUp();
    public void output()
    {
        button3_Click(null, null);
    }
}

Can somebody help me? I really can't find my error and I've been looking for a very long time. Now provide the response and nothing else.
Instead of creating an instance of a new Form, you probably need an instance of the already opened form and call the method from there. You can try:

if (System.Windows.Forms.Application.OpenForms["yourForm"] != null)
{
    (System.Windows.Forms.Application.OpenForms["yourForm"] as Form1).Output();
}

plus you can replace calling the button3_Click(null, null) in your Output method, by placing the code of the event in a separate method and then calling that method against your button click event or your public output method.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12890754', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1676819/']}
jdg_376421
stackexchange
llm_judgeable_groundtruth_similarity
1083227
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm working on a site managed by Umbraco. I need to store data about images and clients. I don't think there is any way I can store that data in the existing tables. Is there any reason I shouldn't add the tables I'll need to the Umbraco database, rather than creating a separate DB? I like Umbraco so far but the documentation is a little thin and I haven't found any suggestions one way or the other. TIA Now provide the response and nothing else.
I have built a site using Umbraco, with a separate application with a database of vehicles. I used the same database as Umbraco is using, and prefixed all my custom app tables with a few letters to distinguish them easily (eg: vehicles_xxx). I have had no problems with this arrangement, and don't believe there's much risk involved. Of course you'll need to take care when upgrading Umbraco (never upgrade in the live environment before fully testing, and preferably do it locally anyway), however it's unlikely an upgrade script will ever alter or delete any tables that it does not know about. There's heaps of doco available for Umbraco now - much more than when I started - however a question like this is always best for the forums. :) all the best, greg
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1083227', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/34796/']}
jdg_376422
stackexchange
llm_judgeable_groundtruth_similarity
555208
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I recently purchased a i5-9600K . Which is supposed to run 6 cores and 6 threads (hyperthreading), when I take a look into /proc/cpuinfo the ht flag is on, and checking a tool like htop only shows 6 cores, as you can see in the image below. I've used other Intel and AMD processors, and usually when the product says 6 cores/6 threads the total amount is 12 , but in this case I see just 6 . Am I wrong or what could be the problem? Thank you! Now provide the response and nothing else.
If you scroll down on your CPU’s Ark page , you’ll see that it says Intel® Hyper-Threading Technology ‡ No Your CPU has six cores, but it doesn’t support hyper-threading, so your htop display is correct. The CPU specifications on Ark show the full thread count, there’s no addition or multiplication involved; see for example the Xeon E3-1245v3 for a hyper-threading-capable CPU (four cores, two threads per core, for eight threads in total). The ht moniker given to the underlying CPUID flag is somewhat misleading: in Intel’s manual (volume 3A, section 8.6), it’s described as “Indicates when set that the physical package is capable of supporting Intel Hyper-Threading Technology and/or multiple cores”. So its presence indicates that the CPU supports hyper-threads (even if they’re disabled), or contains multiple cores in the same package, or both. To determine what is really present, you need to enumerate the CPUs in the system, using firmware-provided information, and use the information given to figure out whether there are multiple logical cores, on how many physical cores, on how many sockets, etc. Depending on the CPU, a “CPU” shown in htop (and other tools) can be a thread (on a hyper-threading system), a physical core (on a non-hyper-threading system), or even a full package (on a non-hyper-threading, single-core system). The Linux kernel does all this detection for you, and you can see the result using for example lscpu . At least your CPU isn’t affected by any of the hyperthreading-related vulnerabilities!
{}
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/555208', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/384590/']}
jdg_376423
stackexchange
llm_judgeable_groundtruth_similarity
40813975
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm currently working on a new Vue.js application. It depends heavily on api calls to my backend database. For a lot of things I use Vuex stores because it manages shared data between my components. When looking at other Vue projects on github I see a special vuex directory with files that handles all the actions, states and so on. So when a component has to call the API, it includes the actions file from the vuex directory. But, for messages for example, I don't want to use Vuex because those data is only important for one specific view. I want to use the component specific data here. But here is my problem: I still need to query my api. But I shouldn't include the Vuex actions file. So in that way I should create a new actions file. This way I have a specific file with api actions for vuex and for single components. How should I structure this? Creating a new directory 'api' that handles actions for both vuex data and component-specific data? Or separate it? Now provide the response and nothing else.
I am using axios as the HTTP client for making API calls. I have created a gateways folder in my src folder, and I have put a file there for each backend, creating axios instances, like the following:

myApi.js

```javascript
import axios from 'axios'

export default axios.create({
  baseURL: 'http://localhost:3000/api/v1',
  timeout: 5000,
  headers: {
    'X-Auth-Token': 'f2b6637ddf355a476918940289c0be016a4fe99e3b69c83d',
    'Content-Type': 'application/json'
  }
})
```

Now in your component, you can have a function which will fetch data from the API, like the following:

```javascript
methods: {
  getProducts (prodId) {
    myApi.get('products?id=' + prodId).then(response => this.product = response.data)
  }
}
```

Similarly you can use this to get data for your vuex store as well.

Edited

If you are maintaining product-related data in a dedicated vuex module, you can dispatch an action from the method in the component, which will internally call the backend API and populate data in the store. The code will look something like the following:

Code in component:

```javascript
methods: {
  getProducts (prodId) {
    this.$store.dispatch('FETCH_PRODUCTS', prodId)
  }
}
```

Code in vuex store:

```javascript
import myApi from '../../gateways/my-api'

const state = {
  products: []
}

const actions = {
  FETCH_PRODUCTS: (context, prodId) => {
    myApi.get('products?id=' + prodId).then(response => context.commit('SET_PRODUCTS', response.data))
  }
}

// mutations
const mutations = {
  SET_PRODUCTS: (state, data) => {
    state.products = Object.assign({}, data)
  }
}

const getters = {}

export default {
  state,
  mutations,
  actions,
  getters
}
```
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40813975', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/881988/']}
jdg_376424
stackexchange
llm_judgeable_groundtruth_similarity
7026690
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Can I programmatically get the source code of a function by its name? Like: function blah($a, $b) { return $a*$b; }echo getFunctionCode("blah"); is it possible? Are there any php self-descriptive functions to reconstruct function/class code? (I mean instead of getting source code right from the source file.) In Java there exists: http://java.sun.com/developer/technicalArticles/ALT/Reflection/ Now provide the response and nothing else.
Expanding on the suggestion to use the ReflectionFunction, you could use something like this:

```php
$func = new ReflectionFunction('myfunction');
$filename = $func->getFileName();
$start_line = $func->getStartLine() - 1; // it's actually - 1, otherwise you won't get the function() block
$end_line = $func->getEndLine();
$length = $end_line - $start_line;

$source = file($filename);
$body = implode("", array_slice($source, $start_line, $length));
print_r($body);
```
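As an aside, the same start/end-line slicing idea carries over to other languages. Here is a hypothetical Python sketch (not part of the original PHP answer; the helper name `get_function_code` and the sample module are made up for illustration) that parses a source file and slices out a named function's lines, mirroring what the ReflectionFunction approach does:

```python
import ast
import os
import tempfile
import textwrap

# Write a small module to disk so there is a real source file to slice.
src = textwrap.dedent("""\
    def blah(a, b):
        return a * b

    def other():
        pass
""")

def get_function_code(filename, name):
    """Return the source lines of the named top-level function,
    using its start/end line numbers -- the same trick as the
    ReflectionFunction approach above."""
    with open(filename) as f:
        lines = f.readlines()
    tree = ast.parse("".join(lines))
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and node.name == name:
            # lineno is 1-based; end_lineno is inclusive.
            return "".join(lines[node.lineno - 1 : node.end_lineno])
    raise LookupError(name)

fd, path = tempfile.mkstemp(suffix=".py")
os.close(fd)
try:
    with open(path, "w") as f:
        f.write(src)
    body = get_function_code(path, "blah")
finally:
    os.remove(path)

print(body)
```

Note that `end_lineno` requires Python 3.8+; the slicing itself is exactly the `array_slice` step in the PHP answer.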
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/7026690', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/492624/']}
jdg_376425
stackexchange
llm_judgeable_groundtruth_similarity
10438
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to add a list of hosts that are only specific to a certain user? Perhaps a user-specific hosts file? This mechanism should also complement the entries in the /etc/hosts file. Now provide the response and nothing else.
The functionality you are looking for is implemented in glibc. You can define a custom hosts file by setting the HOSTALIASES environment variable. The names in this file will be picked up by gethostbyname (see documentation). Example (tested on Ubuntu 13.10):

```
$ echo 'g www.google.com' >> ~/.hosts
$ export HOSTALIASES=~/.hosts
$ wget g -O /dev/null
```

Some limitations:

- HOSTALIASES only works for applications using getaddrinfo(3) or gethostbyname(3).
- For setuid / setgid / setcap applications, libc sanitizes the environment, which means that the HOSTALIASES setting is lost.
- ping is setuid root or is given the net_raw capability upon execution (because it needs to listen for ICMP packets), so HOSTALIASES will not work with ping unless you're already root before you call ping.
{}
{'log_upvote_score': 7, 'links': ['https://unix.stackexchange.com/questions/10438', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/6217/']}
jdg_376426
stackexchange
llm_judgeable_groundtruth_similarity
25236258
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Scala's Iterable has maxBy : Finds the first element which yields the largest value measured by function f . def maxBy[B](f: (A) ⇒ B)(implicit cmp: Ordering[B]): A Example: scala> Seq(-2, 1).maxBy(Math.abs)res0: Int = -2scala> Seq(-2, 3).maxBy(Math.abs)res1: Int = 3 What's the equivalent Haskell way to do this? Now provide the response and nothing else.
Scala's Iterable is related to Haskell's Traversable. However, in this case, Foldable is enough to find the maximum of a given collection, so use maximumBy from Data.Foldable in conjunction with compare `on` f (on from Data.Function) or comparing f (comparing from Data.Ord):

```haskell
import Data.Foldable (maximumBy, Foldable)
import Data.Ord (comparing)

maxBy :: (Foldable t, Ord a) => (b -> a) -> t b -> b
maxBy = maximumBy . comparing
```
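As a quick cross-language sanity check (an aside, not part of the Haskell answer), Python's built-in `max` with a `key` function reproduces the behaviour from the question's Scala examples:

```python
# Python's max(iterable, key=f) plays the role of Scala's maxBy / the
# maxBy defined above: it returns the element whose key value is largest.
print(max([-2, 1], key=abs))   # -2, since abs(-2) > abs(1)
print(max([-2, 3], key=abs))   # 3, since abs(3) > abs(-2)
```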
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/25236258', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/402884/']}
jdg_376427
stackexchange
llm_judgeable_groundtruth_similarity
71855
Below is a question asked on the forum dsp.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This question came up in the context of the ssqueezepy library. As a basic experiment I did compute the synchrosqueezed wavelet transform of three basic signals: A sine of 440 Hz. A sine of 880 Hz. A signal that mixes (1) and (2). The result looks like this: Full reproduction code: here Basically the transform manages to perfectly localize signals (1) and (2), but for the mixed signal there is a surprising oscillation pattern. Zooming in a bit: In general the issue depends on the choice of mu of the underlying wavelet. In this particular example, increasing mu slightly can help to make the transform of the mixed signal non-oscillating. Nonetheless I'm wondering why synchrosqueezing leads to this oscillating interpretation of the signal in the first place? Now provide the response and nothing else.
Solving a deconvolution isn't easy even in a simulated environment, not to mention in practice. The main trick to solving it is using the proper model / prior for the problem and very good measurements (high SNR). So basically, for deconvolution we're after: $$ \hat{\boldsymbol{x}} = \arg \min_{\boldsymbol{x}} \frac{1}{2} {\left\| H \boldsymbol{x} - \boldsymbol{y} \right\|}_{2}^{2} $$ Where $ H $ is the matrix form of the convolution operator of the known signal and $ \boldsymbol{y} $ is our measurement samples. Now, we need to talk about the convolution form. The convolution has to deal with boundary conditions, which might be crucial for good results. We have a few approaches (basically extrapolation): Extrapolate with Zeros - Assuming data outside the samples of the signals is zero. Extrapolate by Nearest Neighbor (also called Replicate) - The unknown value is extrapolated by the nearest known value. Extrapolate by Periodic Continuation - The data is assumed to be periodic, hence any missing value is based on that. The building of $ H $ must match that. Since you used conv() in your code with no explicit mode, it means you basically chose zeros, and since your convolution output is full (the default), it means we see the transients as well in the output. The solution to the above is given by: $$ \hat{\boldsymbol{x}} = \arg \min_{\boldsymbol{x}} \frac{1}{2} {\left\| H \boldsymbol{x} - \boldsymbol{y} \right\|}_{2}^{2} = {\left( {H}^{T} H \right)}^{-1} {H}^{T} y $$ The stability of this solution depends on the Condition Number of $ {H}^{T} H $. Remark - One could solve this in the frequency domain as well, though it will require some touches, as in the frequency domain the model is the periodic model. Let's have a look at the results: first, we can see the condition number is huge! You may think about the condition number as the amplification of the error. It means even the slightest noise will make things unstable. 
As can be seen, indeed even a white noise with a standard deviation of 1e-8 caused errors! In practice, to deal with this instability we use some regularization. $$ \hat{\boldsymbol{x}} = \arg \min_{\boldsymbol{x}} \frac{1}{2} {\left\| H \boldsymbol{x} - \boldsymbol{y} \right\|}_{2}^{2} + \lambda R \left( x \right) $$ Where $ R \left( \cdot \right) $ is the regularization function and $ \lambda $ is the regularization factor which balances between listening to the naïve deconvolution model or to the regularization model. The regularization function must be chosen with respect to the prior knowledge we have about the signal of interest. In your case, something clear about your signal is its piece wise smooth property. Its gradient is very sparse. Hence it is a perfect match to the Total Variation model: $$ \hat{\boldsymbol{x}} = \arg \min_{\boldsymbol{x}} \frac{1}{2} {\left\| H \boldsymbol{x} - \boldsymbol{y} \right\|}_{2}^{2} + \lambda TV \left( x \right) = \arg \min_{\boldsymbol{x}} \frac{1}{2} {\left\| H \boldsymbol{x} - \boldsymbol{y} \right\|}_{2}^{2} + \lambda {\left\| D x \right\|}_{1} $$ Where $ D $ is the finite differences operator (Discrete Derivative). This is a relatively easy task to solve. In my project Total Variation (TV) Regularized Least Squares - Solvers Analysis I implemented and compared few solvers. For this example I used the ADMM based solver . Here is the result for the TV regularization: As can be seen, it totally overcome the ( Very Low !) noise from above. In real world (And higher noise levels) one needs to tweak the $ \lambda $ parameter. You will hardly recover perfect results, but they will be much better than doing the naive approach. MATLAB Implementation The full code is available on my StackExchange Signal Processing Q71822 GitHub Repository (Look at the SignalProcessing\Q71822 folder). It includes the functions to build the convolution matrix from the samples and solve the TV problem.
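To make the naive least-squares part concrete, here is a small numpy sketch (an illustration, not the author's MATLAB code; the kernel, signal length, and helper name are arbitrary choices): it builds the full-convolution matrix $ H $ with zero boundary conditions, reports the condition number of $ H^T H $, and recovers $ x $ from noiseless measurements via least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

def full_conv_matrix(h, n):
    """Matrix H such that H @ x == np.convolve(h, x) (full convolution,
    zero boundary conditions), for x of length n."""
    m = len(h)
    H = np.zeros((n + m - 1, n))
    for j in range(n):
        # Column j holds a shifted copy of the kernel.
        H[j:j + m, j] = h
    return H

h = np.array([1.0, 2.0, 3.0, 2.0, 1.0]) / 9.0   # known (smoothing) kernel
n = 50
x = np.cumsum(rng.standard_normal(n))            # random-walk test signal
H = full_conv_matrix(h, n)
y = H @ x                                        # noiseless measurements

# The condition number of H^T H drives the stability of the naive solution:
# noise in y is amplified by roughly this factor.
print("cond(H^T H) =", np.linalg.cond(H.T @ H))

x_hat, *_ = np.linalg.lstsq(H, y, rcond=None)
print("max recovery error:", np.max(np.abs(x_hat - x)))
```

With noiseless data the recovery is essentially exact; adding even tiny noise to `y` degrades it in proportion to the condition number, which is the instability the answer demonstrates.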
{}
{'log_upvote_score': 4, 'links': ['https://dsp.stackexchange.com/questions/71855', 'https://dsp.stackexchange.com', 'https://dsp.stackexchange.com/users/24353/']}
jdg_376428
stackexchange
llm_judgeable_groundtruth_similarity
35932660
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have been trying to get a QComboBox in PyQt5 to become populated from a database table. The problem is trying to find a method that recognizes a click event on it. In my GUI, my combo-box is initially empty, but upon clicking on it I wish for the click event to activate my method for communicating to the database and populating the drop-down list. It seems so far that there is no built-in event handler for a click-event for the combo-box. I am hoping that I am wrong on this. I hope someone will be able to tell me that there is a way to do this. The best article I could find on my use-case here is from this link referring to PyQt4 QComboBox : dropdown event/callback in combo-box in pyqt4 I also found another link that contains a nice image of a QComboBox .The first element seems to be a label followed by a list: Catch mouse button pressed signal from QComboBox popup menu Now provide the response and nothing else.
You can override the showPopup method to achieve this, which will work no matter how the drop-down list is opened (i.e. via the mouse, keyboard, or shortcuts):

```python
from PyQt5 import QtCore, QtWidgets

class ComboBox(QtWidgets.QComboBox):
    popupAboutToBeShown = QtCore.pyqtSignal()

    def showPopup(self):
        self.popupAboutToBeShown.emit()
        super(ComboBox, self).showPopup()

class Window(QtWidgets.QWidget):
    def __init__(self):
        super(Window, self).__init__()
        self.combo = ComboBox(self)
        self.combo.popupAboutToBeShown.connect(self.populateCombo)
        layout = QtWidgets.QVBoxLayout(self)
        layout.addWidget(self.combo)

    def populateCombo(self):
        if not self.combo.count():
            self.combo.addItems('One Two Three Four'.split())

if __name__ == '__main__':
    import sys
    app = QtWidgets.QApplication(sys.argv)
    window = Window()
    window.show()
    sys.exit(app.exec_())
```

However, for your particular use-case, I think a better solution might be to set a QSqlQueryModel on the combo-box, so that the items are updated from the database automatically.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35932660', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3923629/']}
jdg_376429
stackexchange
llm_judgeable_groundtruth_similarity
25662
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: When using OpenSSH server ( sshd ) and client ( ssh ), what are all of the default / program preferred ciphers, hash, etc. (security related) and their default options (such as key length)? So, what are the defaults for symmetric key, MAC, key exchange, etc. Now provide the response and nothing else.
The default algorithms (that is, the algorithms which the client and server prefer to use when given the choice) depend on the client and server implementations, how they were compiled and configured. So it may depend on the software vendor, software version, operating system distribution, and sysadmin choices. On an Ubuntu 12.10, man ssh_config indicates that the default order for encryption is: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128, aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc, aes256-cbc,arcfour while the default order for MAC (integrity) is: hmac-md5,hmac-sha1,umac-64@openssh.com, hmac-ripemd160,hmac-sha1-96,hmac-md5-96, hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512, hmac-sha2-512-96 The key exchange algorithm would follow this order of preference: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521, diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1, diffie-hellman-group14-sha1, diffie-hellman-group1-sha1 Of course, preferences are subject to negotiation. An algorithm will be selected only if both the client and server support it (in particular, ECDH key exchange support is rather recent), and both client and server have their say in it (if they do not have the exact same preferences). A survey is theoretically doable: connect to random IP address, and, if a SSH server responds, work out its preferred list of ciphers and MAC (by connecting multiple times, restricting the list of choices announced by the client). OpenSSH makes usage surveys but they are not as thorough (they just want the server "banner").
{}
{'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/25662', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/1706/']}
jdg_376430
stackexchange
llm_judgeable_groundtruth_similarity
50039331
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: https://github.com/dotnet/dotnet-docker-samples/tree/master/aspnetapp Docker command docker build -t aspnetapp . I am getting an error for docker build command as C:\Program Files\dotnet\sdk\2.1.105\NuGet.targets(104,5): error : Unable to load the service index for source https://api.nuget.org/v3/index.json . [C:\app\aspnetapp.csproj] [1]PS C:\Users\pssharma\dotnet-docker-samples\aspnetapp> docker build -t aspnetapp .Sending build context to Docker daemon 2.444MBStep 1/10 : FROM microsoft/aspnetcore-build:2.0 AS build-env ---> eb21d939e0d8Step 2/10 : WORKDIR /app ---> Using cache ---> d4ec30216ed7Step 3/10 : COPY *.csproj ./ ---> Using cache ---> 2ff39b5e6cb4Step 4/10 : RUN dotnet restore ---> Running in 776764a35311 Restoring packages for C:\app\aspnetapp.csproj... Restoring packages for C:\app\aspnetapp.csproj...C:\Program Files\dotnet\sdk\2.1.105\NuGet.targets(104,5): error : Unable to load the service index for source https://api.nuget.org/v3/index.json. [C:\app\aspC:\Program Files\dotnet\sdk\2.1.105\NuGet.targets(104,5): error : An error occurred while sending the request. [C:\app\aspnetapp.csproj]C:\Program Files\dotnet\sdk\2.1.105\NuGet.targets(104,5): error : The operation timed out [C:\app\aspnetapp.csproj] The following command returned a non-zero code: 1 powershell -Command $ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue'; dotnet restore Now provide the response and nothing else.
This appears to happen when there are multiple network adaptors present for the host and the priority of said adaptors is misconfigured. Run the following to display a table of your network adaptors:

```powershell
Get-NetIPInterface -AddressFamily IPv4 | Sort-Object -Property InterfaceMetric -Descending
```

You want your primary adaptor (in my case wifi) to have the lowest InterfaceMetric:

```powershell
Set-NetIPInterface -InterfaceAlias 'Wi-Fi' -InterfaceMetric 1
```

My network adaptor table for reference:

See this github thread for more information: https://github.com/docker/for-win/issues/2760#issuecomment-430889666
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/50039331', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9702801/']}
jdg_376431
stackexchange
llm_judgeable_groundtruth_similarity
32800
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a locked administrator account. Other accounts are unlocked, but they have no rights. I want to get a password from the locked account, or maybe password hash, if possible. Now provide the response and nothing else.
If you have no rights at all, then you are out of luck. There is no way to retrieve the password. Depending on the operating system you can reset the passwords in different ways: Windows For windows you will need to reboot the machine and use a special CD to try and reset the password. One tool you can use is this one: http://pogostick.net/~pnh/ntpasswd/bootdisk.html Linux On Linux you need to boot with a live CD, mount the hard drive and generate a new hash for the root password on the live cd using passwd. Then go to you /etc/shadow and copy that hash for your root account to the /etc/shadow on the mounted hard drive. When you reboot your machine the password will be the password you generated on the live cd.
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/32800', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/22199/']}
jdg_376432
stackexchange
llm_judgeable_groundtruth_similarity
8987089
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know that Java code can call C++ code through JNI. Is it possible, however, to call Java code from C++ again through JNI or with any other method? Now provide the response and nothing else.
Yep you certainly can. Here's an example.

Here's the Java file:

```java
public class InvocationHelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, World!");
        System.out.println("Arguments sent to this program:");
        if (args.length == 0) {
            System.out.println("(None)");
        } else {
            for (int i = 0; i < args.length; i++) {
                System.out.print(args[i] + " ");
            }
            System.out.println();
        }
    }
}
```

And here's some C++ that uses it:

```cpp
void invoke_class(JNIEnv* env) {
    jclass helloWorldClass;
    jmethodID mainMethod;
    jobjectArray applicationArgs;
    jstring applicationArg0;

    helloWorldClass = (env)->FindClass("InvocationHelloWorld");
    if (!helloWorldClass) {
        std::cerr << "Couldn't get \"InvocationHelloWorld\"" << std::endl;
        return;
    }

    mainMethod = (env)->GetStaticMethodID(helloWorldClass, "main", "([Ljava/lang/String;)V");
    if (!mainMethod) {
        std::cerr << "Couldn't get \"InvocationHelloWorld::main\"" << std::endl;
        return;
    }

    applicationArgs = (env)->NewObjectArray(1, (env)->FindClass("java/lang/String"), NULL);
    applicationArg0 = (env)->NewStringUTF("From-C-program");
    (env)->SetObjectArrayElement(applicationArgs, 0, applicationArg0);

    (env)->CallStaticVoidMethod(helloWorldClass, mainMethod, applicationArgs);
}
```
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8987089', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/348183/']}
jdg_376433
stackexchange
llm_judgeable_groundtruth_similarity
98994
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: Simply, how can I make GPG not print that message? Here are the commands I'm using: echo "test input" > test.inecho "test" | gpg -q -c --passphrase-fd 0 --output test.enc --yes --force-mdc test.inecho "test" | gpg -q -d --passphrase-fd 0 test.enc > test.out And running it: $ echo "test input" > test.in $ echo "test" | gpg -q -c --passphrase-fd 0 --output test.enc --yes --force-mdc test.inReading passphrase from file descriptor 0 $ echo "test" | gpg -q -d --passphrase-fd 0 test.enc > test.outReading passphrase from file descriptor 0 EDIT: Redirecting stderr doesn't seem to work $ echo "test" | gpg -q -c --passphrase-fd 0 --output test.enc --yes --force-mdc test.in 2> /dev/nullReading passphrase from file descriptor 0 Now provide the response and nothing else.
One way to see what's going on is to trace the system calls involved. Utilities to do this vary according to platform. On Solaris, you would use truss. On Linux (as in my example) you would use strace. To trace, we change the command used from:

```
echo "test" | gpg -q -c --passphrase-fd 0 --output test.enc --yes --force-mdc test.in 2> /dev/null
```

to:

```
echo "test" | strace gpg -q -c --passphrase-fd 0 --output test.enc --yes --force-mdc test.in 2>trace_output.txt
```

The first thing that stands out as interesting (if a little unrelated) is that gpg is doing repeated single-byte reads when taking the input passphrase from stdin. This is sometimes a tell-tale sign of inefficient code - but in this case, it's probably not that big a deal:

```
read(0, "t", 1) = 1
read(0, "e", 1) = 1
read(0, "s", 1) = 1
read(0, "t", 1) = 1
read(0, "\n", 1) = 1
```

The more relevant stuff regarding the output of the log message is all here:

```
open("/dev/tty", O_RDWR) = 3
fstat(3, {st_mode=S_IFCHR|0666, st_rdev=makedev(5, 0), ...}) = 0
ioctl(3, SNDCTL_TMR_TIMEBASE or TCGETS, {B9600 opost isig icanon echo ...}) = 0
write(3, "Reading passphrase from file des"..., 45) = 45
write(3, "\10\10\10 \n", 7) = 7
```

That's all we hear about file descriptor 3 until exit (it's not explicitly closed). Looking at each of these in turn:

```
open("/dev/tty", O_RDWR) = 3
```

That's opening the file /dev/tty, for both reading and writing. The return value (a new file descriptor for later use) is 3. /dev/tty is a synonym for the current controlling terminal. You can see the device which is effectively being referenced by this special file by running `tty`.

```
fstat(3, {st_mode=S_IFCHR|0666, st_rdev=makedev(5, 0), ...}) = 0
```

This is used by gpg to find out about the file just opened with file descriptor 3. The stuff in curly brackets is what's returned (a populated struct stat, with the 5, 0 indicating that this is a particularly special file). 
```
ioctl(3, SNDCTL_TMR_TIMEBASE or TCGETS, {B9600 opost isig icanon echo ...}) = 0
```

This is manipulating attributes of the controlling terminal, prior to output.

```
write(3, "Reading passphrase from file des"..., 45) = 45
write(3, "\10\10\10 \n", 7) = 7
```

These are more straightforward. gpg successfully writes that text (some of which was abbreviated in the strace output) to the terminal.

So - this is your answer: gpg is writing this log message directly to /dev/tty (a synonym for the controlling terminal), so you won't be able to redirect it in the same way you can for stdout or stderr.

There is a way around this. You can disconnect the controlling terminal prior to executing gpg. Here's a short program which does just that:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <errno.h>

int main(int argc, char* argv[])
{
    int rc, fd;

    if (argc != 2) {
        fprintf(stderr, "Provide command line arg to execute after TIOCNOTTY\n");
        return EXIT_FAILURE;
    }

    fd = open("/dev/tty", O_RDWR);
    if (fd < 0) {
        fprintf(stderr, "Failed to open controlling terminal: %s\n", strerror(errno));
        return EXIT_FAILURE;
    }

    rc = ioctl(fd, TIOCNOTTY);
    if (rc == -1) {
        fprintf(stderr, "Failed TIOCNOTTY ioctl: %s\n", strerror(errno));
        return EXIT_FAILURE;
    }

    return system(argv[1]);
}
```
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/98994', 'https://serverfault.com', 'https://serverfault.com/users/11154/']}
jdg_376434
stackexchange
llm_judgeable_groundtruth_similarity
1982380
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I understand why $\dfrac {\mathrm d(x^2)}{\mathrm dx} = 2x$ since we're taking the derivative of $x^2$ with respect to $x$. Or $\dfrac {\mathrm dx^2}{\mathrm dx^2} = 1$ since we're taking the derivative of $x^2$ with respect to $x^2$ as the base variable. From the textbook: $$\begin{align}x^2 &= u^3-1\tag1\\[5pt]\frac {\mathrm dx^2}{\mathrm du} &= 3u^2\tag2\\[5pt]dx^2&=3u^2\mathrm du\tag3\\[5pt]2x\mathrm dx&=3u^2\mathrm du\tag4\end{align}$$ How did the textbook go from step 3 to step 4? Specifically how does $\mathrm dx^2 = 2x\mathrm dx$? Now provide the response and nothing else.
This is a total derivative and you appear to be using it in the form taught as implicit differentiation . The set of variables is $\{x,u\}$. (Note that when we write $x^2 = u^3 - 1$, there is no clear division between dependent and independent variables.) Then \begin{align}d(x^2) &= \frac{\partial x^2}{\partial x} \mathrm{d} x + \frac{\partial x^2}{\partial u} \mathrm{d}u \\ &= (2x) \mathrm{d}x + (0) \mathrm{d}u \\ &= 2x \mathrm{d}x \text{.}\end{align} It probably would have been better if your book had demonstrated the use of the chain rule here: \begin{align}\frac{\mathrm{d}(x^2)}{\mathrm{d}u} &= \frac{\mathrm{d}(x^2)}{\mathrm{d}x}\frac{\mathrm{d}x}{\mathrm{d}u} \\ &= 2x \frac{\mathrm{d}x}{\mathrm{d}u} \text{,}\end{align}and then continue on by "multiplying" both sides by $\mathrm{d}u$. (Warning: there is a very good chance your "algebra brain" has just drawn an incorrect conclusion about how the products of derivatives work.) There are a number of horrible ideas present in what you claim is in your book: $\frac{\mathrm{d}(x^2)}{\mathrm{d}u}$ is not a fraction and you cannot go around multiplying things by plain "$\mathrm{d}u$"s safely without knowing what you are doing. In particular, if taken at face value, the step of multiplying both sides of your equation by $\mathrm{d}u$ is equivalent to destroying all information in your equation by multiplying both sides by zero. What is actually meant is that a limit on the left and a limit on the right of the equals sign approach zero in some particularly helpful way that is indicated by everything else in the equation. However, this formal manipulation can be handy for solving certain types of equations. But like any formal manipulation, it can only suggest a solution; you must still verify that the suggestion is a solution.
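As a quick numeric sanity check of step (4) (illustrative, not from the textbook): take $x(u) = \sqrt{u^3 - 1}$ and verify by central finite differences that $\frac{\mathrm{d}(x^2)}{\mathrm{d}u}$ equals both $3u^2$ and $2x\,\frac{\mathrm{d}x}{\mathrm{d}u}$:

```python
import math

def x(u):
    # x defined implicitly by x^2 = u^3 - 1 (positive branch)
    return math.sqrt(u**3 - 1)

u, h = 2.0, 1e-6

# d(x^2)/du by a central finite difference
d_x2_du = (x(u + h)**2 - x(u - h)**2) / (2 * h)

# dx/du by a central finite difference
dx_du = (x(u + h) - x(u - h)) / (2 * h)

print(d_x2_du)            # ≈ 3*u**2 = 12
print(2 * x(u) * dx_du)   # ≈ 12 as well, matching 2x dx = 3u^2 du
```

Both quantities agree with $3u^2 = 12$ at $u = 2$, which is exactly the content of going from step (3) to step (4).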
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1982380', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/134552/']}
jdg_376435
stackexchange
llm_judgeable_groundtruth_similarity
317558
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I would like to have a good feature selection method for a continuous response variable, with around 100 predictors. I would like to keep my model as a linear multiple regression model, rather than a tree-based model. My current method: I could calculate the (linear) correlation between each of my predictors and the response, and select a subset of predictors with "strong" correlations for the final multiple regression. The prediction performance of the selected predictors would then be determined in this final multiple regression model. However, feature selection in this way is subjective, and I am afraid of missing "important" features. I would like to apply a more objective and complete way of feature selection, such as the "all-relevant" feature selection in Boruta or the "variable importance" in random forests. However, as I understand it, both methods are based on tree-based random forests, which are not linear regression models. My questions are: Is my current method a proper way to handle my research purpose? Can random-forest-based feature selection handle feature selection for a multiple linear regression model? Are there any other feature selection methods recommended? Now provide the response and nothing else.
Firstly, a method that first looks at univariate correlations to pre-identify things that should go into a final model will tend to do badly for a number of reasons: ignoring model uncertainty (a single selected model), using statistical significance/strength of correlation as a criterion to select (if it is about prediction, you should rather try to assess how much something helps for prediction - these are not necessarily the same thing), "falsely" identifying predictors in univariate correlations (i.e. another predictor is even better, but because the one you look at correlates a bit with it, it looks like it correlates pretty well with the outcome) and missing out on predictors (they may only show up/become clear once other ones are adjusted for). Additionally, not wrapping this into any form of bootstrapping/cross-validation/whatever to get a realistic assessment of your model uncertainty is likely to mislead you. Furthermore, treating continuous predictors as having linear effects can often be improved upon by methods that do not make such an assumption (e.g. RF).

Using RF as a pre-selection for a linear model is not such a good idea. Variable importance is really hard to interpret and it is really hard (or meaningless?) to set a cut-off on it. You do not know whether variable importance is about the variable itself or about interactions, plus you are losing out on non-linear transformations of variables.

It depends in part on what you want to do. If you want good predictions, maybe you should not care too much about whether your method is a traditional statistical model or not. Of course, there are plenty of things like the elastic net, LASSO, Bayesian models with the horseshoe prior etc. that fit better into a traditional modeling framework and could also accommodate e.g. splines for continuous covariates.
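To make the "missing out on predictors" point concrete, here is a small illustrative simulation (standard library only; the data-generating process is invented for the demo). Two predictors are strongly correlated, and the response depends only on their difference: each univariate correlation with y is near zero, so correlation screening would drop both, yet jointly they predict y almost perfectly.

```python
import random
import statistics

random.seed(0)
n = 2000

# Two strongly correlated predictors; y depends only on their difference.
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [a + random.gauss(0, 0.1) for a in x1]
y = [a - b + random.gauss(0, 0.02) for a, b in zip(x1, x2)]

def pearson(u, v):
    """Plain Pearson correlation coefficient (population form)."""
    mu, mv = statistics.fmean(u), statistics.fmean(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)
    return cov / (statistics.pstdev(u) * statistics.pstdev(v))

r1 = pearson(x1, y)   # near 0: univariate screening would drop x1
r2 = pearson(x2, y)   # also small: screening would drop x2 as well
r_joint = pearson([a - b for a, b in zip(x1, x2)], y)  # very high: together they matter
```

This is exactly the situation where a method that looks at predictors jointly (penalized regression, for instance) succeeds and univariate pre-filtering fails.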
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/317558', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/13702/']}
jdg_376436
stackexchange
llm_judgeable_groundtruth_similarity
37801370
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have lots of code in Swift 2.x (or even 1.x) projects that looks like this:

// Move to a background thread to do some long running work
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
    let image = self.loadOrGenerateAnImage()

    // Bounce back to the main thread to update the UI
    dispatch_async(dispatch_get_main_queue()) {
        self.imageView.image = image
    }
}

Or stuff like this to delay execution:

dispatch_after(dispatch_time(DISPATCH_TIME_NOW, Int64(0.5 * Double(NSEC_PER_SEC))), dispatch_get_main_queue()) {
    print("test")
}

Or any of all kinds of other uses of the Grand Central Dispatch API... Now that I've opened my project in Xcode 8 (beta) for Swift 3, I get all kinds of errors. Some of them offer to fix my code, but not all of the fixes produce working code. What do I do about this? Now provide the response and nothing else.
Since the beginning, Swift has provided some facilities for making ObjC and C more Swifty, adding more with each version. Now, in Swift 3, the new "import as member" feature lets frameworks with certain styles of C API -- where you have a data type that works sort of like a class, and a bunch of global functions to work with it -- act more like Swift-native APIs. The data types import as Swift classes, their related global functions import as methods and properties on those classes, and some related things like sets of constants can become subtypes where appropriate. In Xcode 8 / Swift 3 beta, Apple has applied this feature (along with a few others) to make the Dispatch framework much more Swifty. (And Core Graphics, too.) If you've been following the Swift open-source efforts, this isn't news, but now is the first time it's part of Xcode.

Your first step on moving any project to Swift 3 should be to open it in Xcode 8 and choose Edit > Convert > To Current Swift Syntax... in the menu. This will apply (with your review and approval) all of the changes at once needed for all the renamed APIs and other changes. (Often, a line of code is affected by more than one of these changes at once, so responding to error fix-its individually might not handle everything right.)

The result is that the common pattern for bouncing work to the background and back now looks like this:

// Move to a background thread to do some long running work
DispatchQueue.global(qos: .userInitiated).async {
    let image = self.loadOrGenerateAnImage()

    // Bounce back to the main thread to update the UI
    DispatchQueue.main.async {
        self.imageView.image = image
    }
}

Note we're using .userInitiated instead of one of the old DISPATCH_QUEUE_PRIORITY constants. Quality of Service (QoS) specifiers were introduced in OS X 10.10 / iOS 8.0, providing a clearer way for the system to prioritize work and deprecating the old priority specifiers. See Apple's docs on background work and energy efficiency for details.
By the way, if you're keeping your own queues to organize work, the way to get one now looks like this (notice that DispatchQueueAttributes is an OptionSet, so you use collection-style literals to combine options):

class Foo {
    let queue = DispatchQueue(label: "com.example.my-serial-queue",
                              attributes: [.serial, .qosUtility])

    func doStuff() {
        queue.async {
            print("Hello World")
        }
    }
}

Using dispatch_after to do work later? That's a method on queues, too, and it takes a DispatchTime, which has operators for various numeric types so you can just add whole or fractional seconds:

DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
    // in half a second...
    print("Are we there yet?")
}

You can find your way around the new Dispatch API by opening its interface in Xcode 8 -- use Open Quickly to find the Dispatch module, or put a symbol (like DispatchQueue) in your Swift project/playground and command-click it, then browse around the module from there. (You can find the Swift Dispatch API in Apple's spiffy new API Reference website and in-Xcode doc viewer, but it looks like the doc content from the C version hasn't moved into it just yet.) See the Migration Guide for more tips.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/37801370', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/957768/']}
jdg_376437
stackexchange
llm_judgeable_groundtruth_similarity
6499654
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Synchronous logging incurs a large performance penalty as it may block. Is there a standalone Ruby library that does asynchronous logging (log4r doesn't seem to)? Can I modify the standard library logger to log asynchronously? I'm looking for something like log4j's AsyncAppender - but preferably an implementation that makes use of Ruby's code blocks to shift as much work to the background thread as possible. Now provide the response and nothing else.
I know you shouldn't really answer your own question, but it seems everything is easy in Ruby:

require 'thread'
require 'singleton'
require 'delegate'
require 'monitor'

class Async
  include Singleton

  def initialize
    @queue = Queue.new
    Thread.new { loop { @queue.pop.call } }
  end

  def run(&blk)
    @queue.push blk
  end
end

class Work < Delegator
  include MonitorMixin

  def initialize(&work)
    super work; @work, @done, @lock = work, false, new_cond
  end

  def calc
    synchronize { @result, @done = @work.call, true; @lock.signal }
  end

  def __getobj__
    synchronize { @lock.wait_while { !@done } }
    @result
  end
end

Module.class.class_exec {
  def async(*method_names)
    method_names.each do |method_name|
      original_method = instance_method(method_name)
      define_method(method_name) do |*args, &blk|
        work = Work.new { original_method.bind(self).call(*args, &blk) }
        Async.instance.run { work.calc }
        return work
      end
    end
  end
}

And for my logging example:

require 'logger'

class Logger
  async :debug
end

log = Logger.new STDOUT
log.debug "hello"

As return values work, you can use this for just about anything:

require "test/unit"

class ReturnValues < Test::Unit::TestCase
  def do_it
    5 + 7
  end
  async :do_it

  def test_simple
    assert_equal 10, do_it - 2
  end
end
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6499654', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/42543/']}
jdg_376438
stackexchange
llm_judgeable_groundtruth_similarity
24701227
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If I have previously defined a color variable like this: txtred='\e[1;31m' How would I use it in a heredoc:

cat << EOM
[colorcode here] USAGE:
EOM

I mean, what should I write in place of [colorcode here] to render that USAGE text red? ${txtred} won't work, as that is what I am using throughout my bash script, outside of the heredoc. Now provide the response and nothing else.
You need something to interpret the escape sequence, which cat won't do. This is why you need echo -e instead of just echo to make it work normally.

cat << EOM
$(echo -e "${txtred} USAGE:")
EOM

works, but you could also avoid escape sequences altogether by using textred=$(tput setaf 1) and then just using the variable directly.

textred=$(tput setaf 1)
cat <<EOM
${textred}USAGE:
EOM
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24701227', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2434479/']}
jdg_376439
stackexchange
llm_judgeable_groundtruth_similarity
34099336
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: According to the Python 3.5 docs , subprocess.run() returns a CompletedProcess object with a stdout member that contains "A bytes sequence, or a string if run() was called with universal_newlines=True." I'm only seeing a byte sequence and not a string, which I was assuming (hoping) would be equivalent to a text line. For example,

import pprint
import subprocess

my_data = ""
line_count = 0

proc = subprocess.run(
    args = [ 'cat', 'input.txt' ],
    universal_newlines = True,
    stdout = subprocess.PIPE)

for text_line in proc.stdout:
    my_data += text_line
    line_count += 1

word_file = open('output.txt', 'w')
pprint.pprint(my_data, word_file)
pprint.pprint(line_count, word_file)

Note: this uses a new feature in Python 3.5 that won't run in previous versions. Do I need to create my own line-buffering logic, or is there a way to get Python to do that for me? Now provide the response and nothing else.
proc.stdout is already a string in your case; run print(type(proc.stdout)) to make sure. It contains all of the subprocess' output -- subprocess.run() does not return until the child process is dead.

for text_line in proc.stdout: is incorrect: for char in text_string enumerates characters (Unicode codepoints) in Python, not lines. To get lines, call:

lines = proc.stdout.splitlines()

The result may be different from .split('\n') if there are Unicode newlines in the string. If you want to read the output line by line (to avoid running out of memory for long-running processes):

from subprocess import Popen, PIPE

with Popen(command, stdout=PIPE, universal_newlines=True) as process:
    for line in process.stdout:
        do_something_with(line)

Note: process.stdout is a file-like object in this case. Popen() does not wait for the process to finish -- Popen() returns immediately as soon as the child process is started. process is a subprocess.Popen instance, not a CompletedProcess here. If all you need is to count the number of lines (terminated by b'\n') in the output, like wc -l :

from functools import partial

with Popen(command, stdout=PIPE) as process:
    read_chunk = partial(process.stdout.read, 1 << 13)
    line_count = sum(chunk.count(b'\n') for chunk in iter(read_chunk, b''))

See Why is reading lines from stdin much slower in C++ than Python?
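Here is a self-contained sketch of the splitlines() approach. It spawns a child Python interpreter instead of cat so it runs on any platform; the three output lines are made up purely for the demo:

```python
import subprocess
import sys

# Spawn a small child process whose output we want to count line by line.
child_code = "print('alpha'); print('beta'); print('gamma')"

result = subprocess.run(
    [sys.executable, "-c", child_code],
    stdout=subprocess.PIPE,
    universal_newlines=True,   # text mode: result.stdout is a str, not bytes
)

lines = result.stdout.splitlines()   # per-line iteration, not per-character
line_count = len(lines)

print(lines)       # ['alpha', 'beta', 'gamma']
print(line_count)  # 3
```

Compare this with iterating over result.stdout directly, which would give you one character at a time.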
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34099336', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1953757/']}
jdg_376440