Dataset schema (field: type, length/values):
- source: stringclasses (1 value)
- task_type: stringclasses (1 value)
- in_source_id: stringlengths (1-8)
- prompt: stringlengths (209-40.4k)
- gold_standard_solution: stringlengths (0-56.7k)
- verification_info: stringclasses (1 value)
- metadata: stringlengths (138-225)
- problem_id: stringlengths (9-10)
stackexchange
llm_judgeable_groundtruth_similarity
264747
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to check that the permutation action on the polynomial is a group action, but I'm not getting the second axiom. I'm following my lecturer's work --- Examples 2.1 and 2.6 on page 5 of http://www.math.uconn.edu/~kconrad/blurbs/grouptheory/gpaction.pdf --- I post this first. Can someone please spot the mistake? Thanks. Lecturer did: For $p \in S_n$ and $\textbf{v} = (c_1,c_2,\cdots,c_n) \in \mathbb{R}^n$, define $p \cdot \textbf{v} := (c_{p(1)},\cdots,c_{p(n)})$. Check $$ p_2 \cdot (p_1 \cdot v) \overset{?}{=} (p_2 \cdot p_1)(v) \tag{$\spadesuit$}$$ LHS $= p_2 \cdot (c_{p_1(1)},\cdots,c_{p_1(n)}) = (c_{p_1(p_2(1))},\cdots,c_{p_1(p_2(n))}) = (c_{(p_1 p_2)(1)},\cdots,c_{(p_1 p_2)(n)})$, since $S_n$ is a group and so is associative, $= (p_1 \cdot p_2)(v) \neq$ RHS. Hence the above is NOT a group action. I tried: Define $p \cdot f(x_1,\cdots,x_n) := f(x_{p(1)},\cdots,x_{p(n)})$. Check this is a group action. LHS of $(\spadesuit)$ $= p_2 \cdot f(x_{p_1(1)},\cdots,x_{p_1(n)}) = f(x_{p_1(p_2(1))},\cdots,x_{p_1(p_2(n))}) = f(x_{(p_1 p_2)(1)},\cdots,x_{(p_1 p_2)(n)}) = (p_1 \cdot p_2)(v) \neq$ RHS. Hence the above is NOT a group action? Now provide the response and nothing else.
This is tricky - the two cases look the same, but they're not. The first one is a right action $v \cdot (p_1 \cdot p_2) = (v \cdot p_1) \cdot p_2$, while the second one is a left action $p_1 \cdot (p_2 \cdot f) = (p_1 \cdot p_2) \cdot f$. To see why, consider these 2 permutations: $$p_1(1) = 1, p_1(2) = 3, p_1(3) = 2 $$and$$p_2(1) = 3, p_2(2) = 2, p_2(3) = 1.$$ Let's write out explicitly what the actions are in the two cases to see the difference. First, let's work out what the compositions $p_1 \cdot p_2$ and $p_2 \cdot p_1$ are. The composition $p_1 \cdot p_2$ is: $$p_1(p_2(1)) = 2, p_1(p_2(2)) = 3, p_1(p_2(3)) = 1$$ while the composition $p_2 \cdot p_1$ is: $$p_2(p_1(1)) = 3, p_2(p_1(2)) = 1, p_2(p_1(3)) = 2.$$ Observe that they are not the same. We will use these later. Now let's look at the 2 actions. The first action is on vectors . By the definition of the first action, $$p_1 \cdot (v_1, v_2, v_3) = (v_{p_1(1)}, v_{p_1(2)}, v_{p_1(3)}) = (v_1, v_3, v_2).$$ In words: $p_1$ acting on a vector interchanges the second and third coordinates . Similarly, $$p_2 \cdot (v_1, v_2, v_3) = (v_3, v_2, v_1)$$ In words: $p_2$ acting on a vector interchanges the first and third coordinates . (In this situation, I find that thinking in words reduces the confusion: $p_1$ interchanges the second and third coordinates , not $v_2$ and $v_3$. You'll see the difference below.) Thus, $$p_1 \cdot (p_2 \cdot v) = p_1 \cdot (p_2 \cdot (v_1, v_2, v_3)) = p_1 \cdot (v_3, v_2, v_1) = (v_3, v_1, v_2).$$ (If the last equality seems wrong, use the "words" description of $p_1$: Interchange the second and third coordinates.) Now the rightmost term above is $(p_2 \cdot p_1) \cdot v$, not $(p_1 \cdot p_2) \cdot v$. (Use the calculation of $p_2 \cdot p_1$ above.) Conclusion: For the first action, on vectors, $p_1 \cdot (p_2 \cdot v) = (p_2 \cdot p_1) \cdot v$. So this is not a left action, but a right action. Now look at the second action, which is not on vectors, but on real-valued functions on the set of all vectors . By definition, $$(p_1 \cdot f)(x_1, x_2, x_3) = f(x_{p_1(1)}, x_{p_1(2)}, x_{p_1(3)}) = f(x_1, x_3, x_2).$$ In words: To evaluate $p_1$ applied to a function at a vector, interchange the second and third coordinates of the vector, then apply the function. Similarly, for $p_2$, the words version is: To evaluate $p_2$ applied to a function at a vector, interchange the first and third coordinates of the vector, then apply the function. So what is $p_1 \cdot (p_2 \cdot f)$, applied to a vector? It is $$(p_1 \cdot (p_2 \cdot f))(x_1, x_2, x_3) = (p_2 \cdot f)(x_1, x_3, x_2) = f(x_2, x_3, x_1).$$ (Again, using the "words" description may reduce the confusion.) Now is this last term $p_1 \cdot p_2$ or $p_2 \cdot p_1$ applied to $f$? It is the former, as you can see from the calculation of $p_1 \cdot p_2$ above. Conclusion: $p_1 \cdot (p_2 \cdot f) = (p_1 \cdot p_2) \cdot f$. This one is a left action. I hope this clears up why these two cases aren't the same. Of course, I haven't yet proven that these are actions in general (I've only illustrated it for the particular $p_1$ and $p_2$ above), but hopefully this will give you the right idea for the general proof. The key point is to remember that with these definitions, we are permuting the coordinates of the vector according to $p$, rather than the indices of the $v$'s according to $p$. If we did the latter instead, then the left and right action cases above would be reversed. See alias vs alibi for more discussion of this point.
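To complement the hand computation above, here is a small Python sketch (an added check, not part of the original answer) that verifies both identities for the particular $p_1$ and $p_2$ used above; the dictionary encoding and helper names are just illustrative:

    # Permutations from the answer, written as dictionaries i -> p(i).
    p1 = {1: 1, 2: 3, 3: 2}
    p2 = {1: 3, 2: 2, 3: 1}

    def compose(p, q):
        """(p o q)(i) = p(q(i))."""
        return {i: p[q[i]] for i in (1, 2, 3)}

    def act_on_vector(p, v):
        """p . (v1, v2, v3) = (v_p(1), v_p(2), v_p(3)); vectors are 1-indexed dicts."""
        return {i: v[p[i]] for i in (1, 2, 3)}

    v = {1: "v1", 2: "v2", 3: "v3"}

    # Right action on vectors: p1 . (p2 . v) == (p2 o p1) . v
    assert act_on_vector(p1, act_on_vector(p2, v)) == act_on_vector(compose(p2, p1), v)

    def act_on_function(p, f):
        """(p . f)(x1, x2, x3) = f(x_p(1), x_p(2), x_p(3))."""
        return lambda x: f(act_on_vector(p, x))

    f = lambda x: (x[1], x[2], x[3])  # a "function" that just reports its arguments
    x = {1: "x1", 2: "x2", 3: "x3"}

    # Left action on functions: p1 . (p2 . f) == (p1 o p2) . f
    assert act_on_function(p1, act_on_function(p2, f))(x) == act_on_function(compose(p1, p2), f)(x)

Both assertions pass, matching the worked example: the vector action composes "the wrong way round" (a right action), while the function action composes as a left action.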
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/264747', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_375541
stackexchange
llm_judgeable_groundtruth_similarity
422254
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The question is about a desktop application I'm creating in C# and WPF. As is very common, I'm using the repository pattern in my Data Access Layer for my CRUD operations. All data comes from the serverside part of the application, using a REST API. For each repository an interface is defined, so the user code of the repositories can rely on some consistency. You can think of my repository as ApiUserRepository : IUserRepository . If anywhere in the future another repository implementation is necessary, like FileUserRepository : IUserRepository , there is no problem about incompatibility at all, because of the principle of interfaces. The repositories actually wrap API calls, and return the result of course. The methods for data operations (like GetUserById(...) ) are all async. At this time, I just return the data from the repository methods. So the GetUserById(...) method returns a User object. If the user is not found, it returns null. Each interaction with a repository can produce four types of results (I take a reading operation as an example): 1. The requested data is returned. 2. The requested data was not found (I return null or an empty list). 3. The data source (in this case the REST API) that's used by the repository returns an error saying the API is called the wrong way or invalid (user) input is provided. 4. There was a technical, exceptional error (like Bad HTTP Request, Time-out, Internal server error, etc.). Cases 1. and 2. are already handled, because the return value of a repository method just contains either the data or a null (or empty list). My question is about how to handle cases 3. and 4. in a repository design: how to communicate an error message coming from the data source (in this case the REST API) to the user code of the repository? In case 3. the API server responds with an error JSON object. In case 4. the repository can detect this in other ways. The repository can easily interpret all this, and that's not my question. My question is about how to communicate this error to outside the repository, so the code that uses the repo can handle this further (and display an error message on screen, for example). In case 4. we can say using exceptions is justified, but case 3... The return value of a repository method cannot be used anymore because it's already utilized for cases 1. and 2. Throwing exceptions from the repository for case 3. seems ugly to me. What would be the best way to solve this design problem and keep it all clean? Don't consider my repositories as just 'HTTP clients'. They just have to be able to inform the outside world if something went wrong, and what message was responded from the server. Now provide the response and nothing else.
Repositories are elements of the domain and for this reason, I would limit the interface (arguments and return types) to elements of this layer. Between null and throwing exceptions, the first seems the lesser evil to me, because handling "alternative" execution paths by exceptions can be more complicated (ignore? log? re-throw? log and re-throw? catch later? early? exit?) than checking nulls. Deciding whether null is an acceptable response or not seems like someone else's problem. The repository should not assume that nulls are not supported by the consumer. Concerning the 3rd option, it seems too complicated. It's a sort of good-for-all-purposes interface. It reminds me of an HTTP client more than a repository. Hiding the null behind a wrapper avoids null checking and possible null pointers, but it only turns one kind of checking into another, and it doesn't remove the need of transforming KOs into something else:

if response.ko
  // do something

Edit: find() vs load()

You could, however, implement an additional method in the repository which must return something or fail. For example loadUser() . Differences lie in the semantics. While find implies search, and search implies the possibility of 0 results, load implies that whoever is calling got some proof that the element must exist, and not finding the entry is problematic at this point. Like searching for and loading a non-existing file: the first might result in 0 files found, the second is likely to fail. The file could have been removed by another execution before you open it, so your load process must end abruptly.
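To make the find/load distinction concrete, here is a minimal sketch (an added illustration, not from the original answer, and written in Python rather than C# for brevity); the repository, method, and exception names are hypothetical:

    class UserNotFoundError(Exception):
        """Raised by load_* methods when the caller asserted the entity must exist."""

    class UserRepository:
        def __init__(self, store):
            self._store = store  # e.g. a dict of id -> user, standing in for the API

        def find_user_by_id(self, user_id):
            # "find" implies a search: zero results is a normal outcome, so return None.
            return self._store.get(user_id)

        def load_user_by_id(self, user_id):
            # "load" implies the caller has proof the user exists: absence is an error.
            user = self._store.get(user_id)
            if user is None:
                raise UserNotFoundError(f"user {user_id} does not exist")
            return user

    repo = UserRepository({1: {"id": 1, "name": "Alice"}})
    assert repo.find_user_by_id(2) is None   # normal "not found" path, no exception
    user = repo.load_user_by_id(1)           # caller expects this to exist; raises if not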
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/422254', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/352709/']}
jdg_375542
stackexchange
llm_judgeable_groundtruth_similarity
147361
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Given below is a page from our Chemistry Textbook: As can be seen, the direction of $E_{ext}$ in figure $3.2(c)$ is opposite to the direction in figures $3.2(a)$ and $3.2(b)$. I understand that $E_{ext}$ is supposed to oppose the potential of the voltaic cell in figures $3.2(a)$ and $3.2(b)$. But in figure $3.2(c)$, since $E_{ext}$ is flipped, isn't $E_{ext}$ supporting the electrical potential of the voltaic cell? Is there a reason for the flipping of the direction, or is it a mistake on the part of the writer? Now provide the response and nothing else.
The figures in the OP's post are nicely drawn, as expected in a modern textbook, but figure 3.2(c) has the external battery reversed, which is incorrect. Here is how it works, without the needless complication of the potentiometer. First, start with a standard Daniell cell with standard assumptions, i.e., negligible internal resistance, unimolar concentrations, and so on. Then the open circuit voltage is 1.100 V and, under light load, essentially the same. This is shown in Fig. 1 below: As shown, electron flow is from the zinc anode to the copper cathode, via the external load resistor. Note that DMM means digital multimeter, used in voltmeter mode, and DVM means digital voltmeter. The current flow is $11 \mu A $ . Now cut the wire to the cathode and insert an external DC voltage supply that is turned on, but set to supply zero volts between its terminals. This is shown Fig. 2 below: The external DC supply is schematically depicted as a battery supplying 0.000 V between its terminals. Its internal impedance is assumed negligible. Effectively, this 0 V battery is the same as a piece of wire: the situation is the same as in Fig. 1. The current flow is still $11 \mu A $ . Now start making it interesting. First, set the external DC supply to produce +0.500 V, as shown in Fig. 3 below: Note how the external DC supply is connected: its positive terminal connects to the copper cathode. The Daniell cell potential is opposed by the external supply voltage and the DMM shows that the difference , which is +0.600 V, is across the load resistor. Therefore, the current flow is only $6 \mu A $ . Next, set the external DC supply to produce +1.100 V, as shown in Fig. 4 below: No current flows because the external DC supply voltage nulls (exactly opposes) the Daniell cell voltage. Both ends of the resistor are at -1.100 V with respect to the copper electrode, so no current flows and there is no oxidation or reduction taking place in the cell reservoirs. There is no anode or cathode, just electrodes. This is the dividing line between voltaic cell operation and electrolytic cell operation. Finally, set the external DC supply to produce +3.000 V, as shown in Fig. 5 below: Now the electron flow is from the negative terminal of the external DC supply, through the load resistor and into the zinc electrode, where reduction will take place in that cell reservoir. This is the electrolysis mode of operation. Note the voltage across the resistor is -1.900 V, i.e., -3.000 V minus -1.100 V. So the zinc electrode is now the cathode and the copper electrode is the anode.
{}
{'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/147361', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/56849/']}
jdg_375543
stackexchange
llm_judgeable_groundtruth_similarity
153772
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Are there any relations between the chromatic number of a graph $G$ and its complement $G'$ that are always true? I saw these ones: $\chi(G)\chi(G')\geq n$ and $\chi(G)+\chi(G')\geq 2n$, but I'm not quite sure about them. Now provide the response and nothing else.
The following proof is taken from Graphs and Digraphs by Chartrand, Lesniak, and Zhang, who attribute the proof to Hudson V. Kronk. Let $G$ be a graph such that $|V(G)|=n$ . Suppose $\chi(G)=k$ and $\chi(\overline{G})=l$ . Assume we are given a $k$ -coloring $c$ and $l$ -coloring $\overline{c}$ of $G$ and $\overline{G}$ , respectively. With these colorings, one can obtain a coloring of $K_n$ . To each vertex $v$ of $G$ (and also $\overline{G}$ ) one associates the ordered pair $(c(v),\overline{c}(v))$ . Given distinct vertices $v$ and $w$ in $K_n$ , one notes that $v$ and $w$ must be adjacent in either $G$ or $\overline{G}$ , so their associated pairs differ in at least one coordinate; this gives a proper coloring of $K_n$ using at most $kl$ colors. Therefore, $$\chi(K_n)=n\leq kl=\chi(G)\cdot \chi(\overline{G}).$$ To prove $\chi(G)+\chi(\overline{G})\leq n+1$ we use the following lemma: Lemma : For every graph $G$ $$\chi(G)\leq 1+\operatorname{max}\{\delta(H)\},$$ where $H$ is a subgraph of $G$ and the maximum is taken over all the subgraphs $H$ of $G$ . Let $q=\operatorname{max}\{\delta(H)\}$ . Then, by the above lemma, we have $\chi(G)\leq 1+q$ . Next we determine $\operatorname{max}\{\delta(\overline{G})\}$ , which I claim is at most $n-q-1$ . Assume the contrary. Then there is a subgraph $H$ of $G$ such that $\delta(\overline{H})\geq n-q$ . This implies every vertex of $H$ has degree less than or equal to $q-1$ . Let $K$ be a subgraph of $G$ such that $\delta(K)=q$ (note such a subgraph exists since $q=\operatorname{max}\{\delta(H)\}$ ). Clearly no vertex in $K$ is in $H$ . Now, $|V(K)|\geq q+1$ since $\delta(K)=q$ , which implies $$|V(H)|\leq n-(q+1)=n-q-1,$$ contradicting the fact $\delta(\overline{H})\geq n-q$ . Therefore $\operatorname{max}\{\delta(\overline{G})\}\leq n-q-1$ , which implies by the lemma that $\chi(\overline{G})\leq 1+(n-q-1)=n-q$ . Putting this all together gives $$\chi(G)+\chi(\overline{G})\leq (1+q)+(n-q)=n+1.$$ More relations between the chromatic number of a graph and its complement are $2\sqrt{n}\leq \chi(G)+\chi(\overline{G})$ and $\chi(G)\cdot \chi(\overline{G})\leq \left(\frac{n+1}{2}\right)^2$.
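As a quick sanity check of these bounds (an added example, not part of the original answer): take $G=C_5$, the $5$-cycle. Its complement $\overline{C_5}$ is again a $5$-cycle, so $\chi(G)=\chi(\overline{G})=3$ and $n=5$. Then $$\chi(G)\cdot\chi(\overline{G})=9\geq 5,\qquad \chi(G)+\chi(\overline{G})=6\leq n+1=6,$$ and for the extra relations, $2\sqrt{5}\approx 4.47\leq 6$ and $9\leq\left(\tfrac{5+1}{2}\right)^2=9$, so both upper bounds are attained with equality in this case.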
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/153772', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/32915/']}
jdg_375544
stackexchange
llm_judgeable_groundtruth_similarity
305527
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I cannot figure out a way to disable a container AND its children in Swing. Is Swing really missing this basic feature? If I do setEnabled(false) on a container, its children are still enabled. My GUI structure is pretty complex, and doing a traversal of all elements below the container is not an option. Neither is a GlassPane on top of the container (the container is not the entire window). Now provide the response and nothing else.
To add to mmyers's answer , disabling children is not an easy task (see this thread ) The problem is near-to unsolvable in the general case. That's why it is not part of core Swing. Technically, the disable-and-store-old-state followed by a enable-and-restore-to-old-state might look attractive. It even might be a nice-to-have in special cases. But there are (at least, probably a bunch more) two issues with that. Compound components The recursion must stop on a "compound component" (or "single entity"). Then the component is responsible for keeping dependent's state. There's no general way to detect such a component - examples are JComboBox, JXDatePicker (which as related issue ) To make things even more complicated, dependents don't need to be under the hierarchy of the "compound component", f.i. JXTable takes care of the ColumnControl's (and header's) enabled state. Trying to tackle both would require to have a) a property on the compound: "don't touch my children" and b) a property on the uncontained dependents: "don't touch me" Binding to enabled enable-and-update-to-old might break application state if the enabled status is bound to a (presentation or other) model property and that property changed in-the-meantime - now the old-state is invalid. Trying to tackle that would require to have c) a "real" stored-old-enabled-due-to-view-concerns property d) bind the presentation model property to both the enabled and the stored-old-enabled JXRadioGroup has a variant of that problem: On disabling - the group itself or the general controller - keeps track of the old-enabled of every button. Button's enabled is controlled by the Action - if there is an Action. So the enabled controller needs to restore to old-enabled or to action's enabled. During group's disabled (as-group) a problem looms if the Action's enabled was false on storing and changed to true. Another if actions are added. Now imagine the complexity of state transitions when overloading a)-- d)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/305527', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/15932/']}
jdg_375545
stackexchange
llm_judgeable_groundtruth_similarity
6946746
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is the difference between using a forward slash and a backslash when navigating file systems? Also, what is the difference between ./ and ../ ? Now provide the response and nothing else.
Well, in most languages backslashes need to be escaped in string literals, slashes do not. Further backslashes only work on Windows, while slashes work pretty much everywhere. On the other hand, when passing path names as arguments to Windows programs, using slashes might not work because many Windows programs use slashes to signify command line flags, so you need to use backslashes. On . vs. .. : . is the current directory, .. is the parent directory.
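As a small illustration (an added sketch, not from the original answer) of the escaping point and of ./ vs ../, here it is in Python; the file names are made up:

    import os.path

    # In string literals, backslashes must be escaped; forward slashes need not be.
    windows_style = "C:\\projects\\demo\\data.txt"   # escaped backslashes
    portable_style = "C:/projects/demo/data.txt"     # slashes work in most APIs, even on Windows

    # "." is the current directory, ".." is its parent.
    print(os.path.abspath("."))           # e.g. /home/user/project
    print(os.path.abspath(".."))          # e.g. /home/user
    print(os.path.normpath("./a/../b"))   # -> "b"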
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6946746', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/798183/']}
jdg_375546
stackexchange
llm_judgeable_groundtruth_similarity
165618
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Each time I listen to this quote from The man from Earth , I'm absolutely intrigued: Dan: Time... you can't see it, you can't hear it, you can't weigh it, you can't... measure it in a laboratory. It is a subjective sense of... becoming, what we... are, instead of what we were a nanosecond ago, becoming what we will be in another nanosecond. The whole piece of time's a landscape existing, we form behind us and we move, we move through it... slice by slice. Linda Murphy: Clocks measure time. Dan: No, they measure themselves, the objective referee of a clock is another clock. Edith: All very interesting, but what has it got to do with John? Dan: He, he might be man who... lives... outside of time as we know it. I don't understand how can a clock not measure time? Given that all clocks agree among themselves as to their respective measurements, but isn't 1 second as much a finite quantity of time as, say, 1 litre of milk ? What makes 1 second any different than 1 litre or 1 gram as regards measurements? If it is subjective, then how come we all agree as to our respective time-spans (a day equals 24 hours for both me and you)? Please explain what exactly IS time? Do clocks really measure or don't measure time? Now provide the response and nothing else.
In a sense to be discussed below, clocks do indeed measure time, and this is a very definite experimental result that gives us an experimental definition of time. We experimentally observe that the ratio of the rates of the same two physical processes taking place in an inertial laboratory is always the same. A clock pendulum swings a set number of times, as a rubidium atom in an atomic clock oscillates a set (generally much higher) number of times before a certain, always the same, extent of reaction between the same chemical reagents is reached, and this number of swings and the number of rubidium oscillations before that extent of chemical reaction is always the same if the pendulum, rubidium and the reagents are at rest relative to one another. It's part of the experimentally observed predictability of the World: set up the same two experiments with the same set of conditions and the physics will be repeatable: the ratio between the rates of progression of the experiments will be the same as long as the two experiments are at rest relative to one another. It is this consistency between rates of processes that lets you pull an egg from boiling water when the sands of your egg timer have run out, and to know it will be cooked a consistent amount defined by your egg timer, even though there is no direct causal link whatsoever between the timer and the egg. Given this basic consistency, the notion of a "good clock" becomes well defined. It is simply an instrument whose behavior is repeatable enough that its rate of working relative to the physical processes around it is always the same. Note that this notion would not be well defined if relative rates between the same physical processes weren't consistent and changed randomly. Misner, Thorne and Wheeler have a wonderful discussion of the notion of "good clock" in the first chapter of their book "Gravitation", as does Ben Crowell in the early part of his book "General Relativity". We choose a "standard" cyclic process, measure its rate (or period), and then define the "duration" of all other processes and the "time elapsed" between pairs of events as the number of "standard cycles" that complete throughout the process or between the events concerned. Likewise, when the same ratio is computed for the same pair of physical processes happening in different inertial frames, the ratio of their rates changes from the value it had when they were relatively at rest, and this change of ratio is given by the effect on the time co-ordinates of each frame by the Lorentz transformation between the frames. This too is a strongly confirmed experimental result, even though we guessed the right transformation grounded on symmetry and other theoretical arguments some decades before it was confirmed by measurements. It's all yet another manifestation of the experimental result that Eugene Wigner called the "Unreasonable Effectiveness of Mathematics in the Natural Sciences": processes can be foretold and the World isn't total chaos. There is a repeatability in physics.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/165618', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/68170/']}
jdg_375547
stackexchange
llm_judgeable_groundtruth_similarity
4545660
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If I have RabbitMQ installed on my machine, is there a way to create a message queue from the command line and bind it to a certain exchange without using a client? I think it is not possible, but I want to be sure. Now provide the response and nothing else.
Summary: Other answers are good alternatives to what was asked for. Below are commands you can use from the command line. First, do all the necessary prep work, e.g. install rabbit, rabbitmqadmin, and rabbitmqctl. The idea is to use commands from rabbitmqctl and rabbitmqadmin. You can see some command examples: https://www.rabbitmq.com/management-cli.html Example Commands/Setup: The following commands should give you the majority if not all of what you need:

# Get the cli and make it available to use.
wget http://127.0.0.1:15672/cli/rabbitmqadmin
chmod +x rabbitmqadmin
mv rabbitmqadmin /etc/rabbitmq

Add a user and permissions:

rabbitmqctl add_user testuser testpassword
rabbitmqctl set_user_tags testuser administrator
rabbitmqctl set_permissions -p / testuser ".*" ".*" ".*"

Make a virtual host and set permissions:

rabbitmqctl add_vhost Some_Virtual_Host
rabbitmqctl set_permissions -p Some_Virtual_Host guest ".*" ".*" ".*"

Make an exchange:

./rabbitmqadmin declare exchange --vhost=Some_Virtual_Host name=some_exchange type=direct

Make a queue:

./rabbitmqadmin declare queue --vhost=Some_Virtual_Host name=some_outgoing_queue durable=true

Make a binding:

./rabbitmqadmin --vhost="Some_Virtual_Host" declare binding source="some_exchange" destination_type="queue" destination="some_incoming_queue" routing_key="some_routing_key"

Alternative Way to Bind with Python: The following is an alternative to command line binding, as I've had issues with it sometimes and found the following python code to be more reliable.

#!/usr/bin/env python
import pika

rabbitmq_host = "127.0.0.1"
rabbitmq_port = 5672
rabbitmq_virtual_host = "Some_Virtual_Host"
rabbitmq_send_exchange = "some_exchange"
rabbitmq_rcv_exchange = "some_exchange"
rabbitmq_rcv_queue = "some_incoming_queue"
rabbitmq_rcv_key = "some_routing_key"

# Credentials for the user created above (these two assignments were implicit in the original snippet).
rabbitmq_user = "testuser"
rabbitmq_password = "testpassword"

outgoingRoutingKeys = ["outgoing_routing_key"]
outgoingQueues = ["some_outgoing_queue"]

# The binding area
credentials = pika.PlainCredentials(rabbitmq_user, rabbitmq_password)
connection = pika.BlockingConnection(pika.ConnectionParameters(rabbitmq_host, rabbitmq_port, rabbitmq_virtual_host, credentials))
channel = connection.channel()
channel.queue_bind(exchange=rabbitmq_rcv_exchange, queue=rabbitmq_rcv_queue, routing_key=rabbitmq_rcv_key)
for index in range(len(outgoingRoutingKeys)):
    channel.queue_bind(exchange=rabbitmq_send_exchange, queue=outgoingQueues[index], routing_key=outgoingRoutingKeys[index])

The above can be run as part of a script using python. Notice I put the outgoing stuff into arrays, which will allow you to iterate through them. This should make things easy for deploys. Last Thoughts: I think the above should get you moving in the right direction; use google if any specific commands don't make sense, or read more with rabbitmqadmin help subcommands. I tried to use variables that explain themselves. Good luck :)
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4545660', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/299758/']}
jdg_375548
stackexchange
llm_judgeable_groundtruth_similarity
85357
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is there any way to cause a kernel panic under Linux? I've heard of echo c > /proc/sysrq-trigger but it seems to just freeze, and I'm not sure it's a kernel panic. Is there any C program I can run as root to cause a kernel panic? Now provide the response and nothing else.
using kill: I think you could try the following:

$ kill -6 1

This sends signal # 6 to process #1 (the init process). If you read up in the signals man page ("man 7 signals"):

Signal     Value     Action   Comment
-------------------------------------------------------------------------
SIGHUP        1       Term    Hangup detected on controlling terminal
                              or death of controlling process
SIGINT        2       Term    Interrupt from keyboard
SIGQUIT       3       Core    Quit from keyboard
SIGILL        4       Core    Illegal Instruction
SIGABRT       6       Core    Abort signal from abort(3)

You can find out how a process wants to handle the various signals ( cat /proc/$PID/status ). See this U&L Q&A for more info: How can I check what signals a process is listening to?

overflowing memory: Another method is to overflow memory to induce a kernel panic. First you'll need to disable swap.

$ swapon -s
Filename                          Type        Size      Used     Priority
/dev/mapper/VolGroup00-LogVol01   partition   14352376  3177812  -1

$ swapoff /dev/mapper/VolGroup00-LogVol01

Now to consume all the memory:

$ for r in /dev/ram*; do cat /dev/zero > $r; done

References: How to force a Linux kernel panic | How can I check what signals a process is listening to?
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/85357', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/19064/']}
jdg_375549
stackexchange
llm_judgeable_groundtruth_similarity
5165347
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Looking at the XML header <?xml version="1.0" encoding="UTF-16" standalone="no"?> Am I right to state that the encoding attribute is coming too late (you can't read it properly unless you know the encoding...) redundant, hence error-prone: it's all too easy to replace it with "Big5" yet save the file in UTF-8 Or is that attribute not about the content of the stream? Am I mixing up things here? Now provide the response and nothing else.
As you mentioned, you'd have to know the encoding of the file to read the encoding attribute. However, there is a heuristic that can easily get you close enough to the "real" encoding to allow you to read the encoding attribute. This works, because the <?xml part by definition can only contain characters in the ASCII range (however they are encoded). The XML standard even describes the exact process used to find out the encoding . And the encoding label isn't redundant either. For example, if you use the algorithm in the XML spec to find out that some ASCII-based (or ASCII-compatible) encoding is used you still need to read the encoding to find out which one is actually use (valid candidates would be ASCII, UTF-8, any of the ISO-8859-* encodings , any of the Windows-* encodings, KOI8-R and many, many others). For the <?xml part itself it won't make a difference which one it is, but for the rest of the document, it can make a huge difference. Regarding mis-labeled XML files: yes, it's easy to produce those, however : the XML spec clearly specifies that those files are mal-formed and as such are not correct XML. Incorrect encodings must be reported as an error (as long as they can be detected!). So it's the problem of whoever is producing the XML.
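As an illustration of that detection process (a rough Python sketch of the idea, not the exact algorithm from the XML spec), the code below guesses a provisional encoding family from the BOM or the byte pattern of "<?xml", decodes just enough ASCII-range text to read the declared encoding, and returns it:

    import re

    def sniff_xml_encoding(data: bytes) -> str:
        # Step 1: use the BOM or the byte pattern of "<?" to guess an encoding family.
        if data.startswith(b"\xff\xfe"):
            provisional = "utf-16-le"
        elif data.startswith(b"\xfe\xff"):
            provisional = "utf-16-be"
        elif data.startswith(b"\xef\xbb\xbf"):
            provisional = "utf-8"
        elif data.startswith(b"\x3c\x00\x3f\x00"):   # "<?" in UTF-16-LE without a BOM
            provisional = "utf-16-le"
        elif data.startswith(b"\x00\x3c\x00\x3f"):   # "<?" in UTF-16-BE without a BOM
            provisional = "utf-16-be"
        else:
            provisional = "ascii"   # any ASCII-compatible encoding works for the declaration

        # Step 2: the declaration itself only uses ASCII-range characters, so the
        # provisional decoding is enough to read the encoding attribute.
        head = data[:200].decode(provisional, errors="replace")
        match = re.search(r'encoding=["\']([A-Za-z0-9._-]+)["\']', head)
        return match.group(1) if match else provisional

    xml = '<?xml version="1.0" encoding="ISO-8859-1"?><root>caf\xe9</root>'.encode("iso-8859-1")
    print(sniff_xml_encoding(xml))   # -> ISO-8859-1

This mirrors the two-stage idea in the answer: the prefix pins down enough of the encoding to read the label, and the label then tells you how to decode the rest of the document.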
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5165347', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6610/']}
jdg_375550
stackexchange
llm_judgeable_groundtruth_similarity
12700497
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Stack Overflow has this question answered in many other languages, but not C. So I thought I'd ask, since I have the same issue. How does one concatenate two integers in C? Example: x = 11;y = 11; I would like z as follows: z = 1111; Other examples attempt to do this with strings. What is a way to do this without strings? I'm looking for an efficient way to do this in C because in my particular usage, this is going into a time critical part of code. Thanks in Advance! Now provide the response and nothing else.
unsigned concatenate(unsigned x, unsigned y) {
    unsigned pow = 10;
    while (y >= pow)
        pow *= 10;
    return x * pow + y;
}

Proof of compilation/correctness/speed. I avoid the log10 and pow functions, because I'm pretty sure they use floating point and are slowish, so this might be faster on your machine. Maybe. Profile.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/12700497', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/451007/']}
jdg_375551
stackexchange
llm_judgeable_groundtruth_similarity
12212116
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a web application (hosted in IIS) that talks to a Windows service. The Windows service is using the ASP.Net MVC Web API (self-hosted), and so can be communicated with over http using JSON. The web application is configured to do impersonation, the idea being that the user who makes the request to the web application should be the user that the web application uses to make the request to the service. The structure looks like this: (The user highlighted in red is the user being referred to in the examples below.) The web application makes requests to the Windows service using an HttpClient : var httpClient = new HttpClient(new HttpClientHandler() { UseDefaultCredentials = true });httpClient.GetStringAsync("http://localhost/some/endpoint/"); This makes the request to the Windows service, but does not pass the credentials over correctly (the service reports the user as IIS APPPOOL\ASP.NET 4.0 ). This is not what I want to happen . If I change the above code to use a WebClient instead, the credentials of the user are passed correctly: WebClient c = new WebClient { UseDefaultCredentials = true };c.DownloadStringAsync(new Uri("http://localhost/some/endpoint/")); With the above code, the service reports the user as the user who made the request to the web application. What am I doing wrong with the HttpClient implementation that is causing it to not pass the credentials correctly (or is it a bug with the HttpClient )? The reason I want to use the HttpClient is that it has an async API that works well with Task s, whereas the WebClient 's asyc API needs to be handled with events. Now provide the response and nothing else.
I was also having this same problem. I developed a synchronous solution thanks to the research done by @tpeczek in the following SO article: Unable to authenticate to ASP.NET Web Api service with HttpClient. My solution uses a WebClient, which as you correctly noted passes the credentials without issue. The reason HttpClient doesn't work is because of Windows security disabling the ability to create new threads under an impersonated account (see SO article above.) HttpClient creates new threads via the Task Factory thus causing the error. WebClient on the other hand, runs synchronously on the same thread thereby bypassing the rule and forwarding its credentials. Although the code works, the downside is that it will not work async.

var wi = (System.Security.Principal.WindowsIdentity)HttpContext.Current.User.Identity;
var wic = wi.Impersonate();
try
{
    var data = JsonConvert.SerializeObject(new { Property1 = 1, Property2 = "blah" });

    using (var client = new WebClient { UseDefaultCredentials = true })
    {
        client.Headers.Add(HttpRequestHeader.ContentType, "application/json; charset=utf-8");
        client.UploadData("http://url/api/controller", "POST", Encoding.UTF8.GetBytes(data));
    }
}
catch (Exception exc)
{
    // handle exception
}
finally
{
    wic.Undo();
}

Note: Requires NuGet package: Newtonsoft.Json, which is the same JSON serializer WebAPI uses.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/12212116', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/116923/']}
jdg_375552
stackexchange
llm_judgeable_groundtruth_similarity
6039
Below is a question asked on the forum linguistics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm confused as to how I'm to pronounce Sanskrit's "v" letter. My teacher mostly pronounces it as a "w" in words such as "deva", "svara" or "dvipa" but invariably utters a "v" in syllables "vra" or "vya". The definition my teacher once quoted, from an old Indian grammarian, was that "semivowels arise when the other vowels approach vowel 'a' " which clearly suggest the intimate relationship between "w" and "u" (as in good), as well as between "y" and "i" (as in deep). Since they were so thorough in their classification of sounds, I'm sure the ancient Indian grammarians would've noticed the "v" as a labio-dental consonant, different from semi-vocalic "w", and given it its own name and place in the alphabet - so I believe that letter was supposed to have just one sound. I try really hard to always pronounce "v" as "w", but my "vya" sounds more like a rapid "wiya", and "vra" sounds really like "ura". But then, trying it the other way around (always as "v") yields "dvipa" or "hva" to become almost impossible to be said. Any thoughts? Now provide the response and nothing else.
From Whitney's Sanskrit Grammar (p. 20): "...as the original w has in most European languages been changed to v , so also in India, and that from a very early time: the Paninean scheme and two of the Prātiçākhyas (VPr. and TPr.) distinctly define the sound as made between the upper teeth and the lower lip -- which, of course, identifies it with the modern v -sound." As hippietrail points out in comments, though, this could just as easily describe a labiodental approximant [ʋ] as a labiodental fricative [v]. W. S. Allen's Phonetics in Ancient India (which I don't have at hand) is sure to have more information on how the grammarians described this sound.
{}
{'log_upvote_score': 4, 'links': ['https://linguistics.stackexchange.com/questions/6039', 'https://linguistics.stackexchange.com', 'https://linguistics.stackexchange.com/users/2785/']}
jdg_375553
stackexchange
llm_judgeable_groundtruth_similarity
3209002
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When I rename a project within a solution, and then try and build that project, I get the error: "The project file "xyz.csproj" was not found. Where xyz is the old project file name! The annoying thing is that I can search my project & solution for the old project file name, and the search will come up with nothing, and when I click on the error VS does nothing. So there seems to be no way of finding where the old reference is. This is a .web project hosting a silverlight page, if that makes any difference. Now provide the response and nothing else.
These are steps I follow when I rename a project: Click on the project in the solution and rename it. This renames the .proj file and the project name(but not the directory containing the project). Close solution, rename the project folder. At this stage, the solution still references the old folder. Open sln file and replace the old folder name with the new one. This should only be in one place. it will look something like this. Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "SolutionName", "OldProjectName\NewProjectName.csproj", "{98644DD0-5AB5-4CAC-8D83-CCB8EEEFE234}" You will need to make it look like this: Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "SolutionName", "NewProjectName\NewProjectName.csproj", "{98644DD0-5AB5-4CAC-8D83-CCB8EEEFE234}"
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3209002', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/211306/']}
jdg_375554
stackexchange
llm_judgeable_groundtruth_similarity
257406
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I ran a polynomial regression in R and Excel and have gotten different coefficients, despite the fitted plots being the same. I wonder why. Here's the R code with data, coefficients and plot: x <- c(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25, 26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48, 49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72, 73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96, 97,98,99,100) y <- c(99,32,59,50,77,58,8,81,67,12,79,9,94,14,7,23,37,67,65,84,18,99,11, 12,21,19,4,80,42,53,100,52,4,60,17,2,60,10,0,54,62,22,93,4,90,56,44,41,97,89, 46,14,5,39,64,13,86,84,88,82,25,31,13,74,5,84,74,16,23,15,12,4,89,79,89, 73,50,65,0,19,20,63,63,84,66,27,100,52,30,49,92,77,92,45,30,47,95, 93,52,6) poly.model <- lm(y ~ poly(x, 5)) plot(x, y, main = "R output") lines(x, fitted(poly.model), col = "black", lwd = 1, lty = 1) # The command poly.model$coefficients will give following coefficients # Intercept 1 2 3 4 5 # 48.82 31.99951 41.07092 -25.61735 20.797 -30.48938 Here is the Excel screenshot with coefficients from trend line. You can see the coefficients are vastly different. Can you please help me understand why? Thank you. Now provide the response and nothing else.
Try

poly.model <- lm(y ~ poly(x, 5, raw = TRUE))

Call:
lm(formula = y ~ poly(x, 5, raw = T))

Coefficients:
         (Intercept)  poly(x, 5, raw = T)1
           7.853e+01            -5.850e+00
poly(x, 5, raw = T)2  poly(x, 5, raw = T)3
           3.053e-01            -6.827e-03
poly(x, 5, raw = T)4  poly(x, 5, raw = T)5
           6.890e-05            -2.555e-07

From poly {stats}: raw — if true, use raw and not orthogonal polynomials. The orthogonal polynomial is summarized by the coefficients, which can be used to evaluate it via the three-term recursion given in Kennedy & Gentle (1980, pp. 343–4), and used in the predict part of the code. Here is a good reference post.
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/257406', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/25379/']}
jdg_375555
stackexchange
llm_judgeable_groundtruth_similarity
2836220
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I would suspect the question in the title is false, but I could not think of a counterexample. The reason I am interested in this question concerns the various definitions of 'generalized manifolds.' For some definitions, I know that generalized manifolds in dimension 1 and 2 are actual manifolds, but I think this is only when we assume the spaces are separable/metrizable. Now provide the response and nothing else.
If $X = \emptyset$ and $Y$ is not separable, then $X \times Y = \emptyset$ is separable. However, this counterexample along with the symmetric counterexample where $Y = \emptyset$ are the only possible counterexamples. So, suppose $X$ and $Y$ are both nonempty, and $D = \{ (x_n, y_n) \mid n \in \mathbb{N} \}$ is dense in $X \times Y$. We then claim that $\pi_1(D) = \{ x_n \mid n \in \mathbb{N} \}$ is dense in $X$. In fact, suppose we have any nonempty open subset $U \subseteq X$. Then since $Y$ is nonempty, $\pi_1^{-1}(U)$ is a nonempty open subset of $X \times Y$. Since $D$ is dense, this implies that $(x_n, y_n) \in \pi_1^{-1}(U)$ for some $n$, which means that $x_n \in U$. The proof that $\pi_2(D) = \{ y_n \mid n \in \mathbb{N} \}$ is dense in $Y$ is similar.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2836220', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/172599/']}
jdg_375556
stackexchange
llm_judgeable_groundtruth_similarity
9025792
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: After writing: std::pair<int, int> x; Am I guaranteed that x.first and x.second are both zero? Or could they have any value? The reason why I care is because I'm trying to determine whether a map whose values are pointers is guaranteed to return NULL if I access an element that's not in the map. I.e., if I do: std::map<int, void*> my_map;std::cout << int(my_map[5]) << std::endl; then am I guaranteed to get zero (NULL)? Or is the behavior undefined? Now provide the response and nothing else.
Yes, that guarantee holds true. Quoting the C++11 standard, §20.3.2/2-3: constexpr pair(); 2 Requires: is_default_constructible<first_type>::value is true and is_default_constructible<second_type>::value is true . 3 Effects: Value-initializes first and second . And §8.5/7: To value-initialize an object of type T means: if T is a (possibly cv-qualified) class type with a user-provided constructor, then the default constructor for T is called (and the initialization is ill-formed if T has no accessible default constructor); if T is a (possibly cv-qualified) non-union class type without a user-provided constructor, then the object is zero-initialized and, if T 's implicitly-declared default constructor is non-trivial, that constructor is called. if T is an array type, then each element is value-initialized; otherwise, the object is zero-initialized . And lastly, §8.5/5: To zero-initialize an object or reference of type T means: if T is a scalar type, the object is set to the value 0 (zero), taken as an integral constant expression, converted to T ; if T is a (possibly cv-qualified) non-union class type, each non-static data member and each base-class subobject is zero-initialized and padding is initialized to zero bits; if T is a (possibly cv-qualified) union type, the object’s first non-static named data member is zero-initialized and padding is initialized to zero bits; if T is an array type, each element is zero-initialized; if T is a reference type, no initialization is performed.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9025792', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/222329/']}
jdg_375557
stackexchange
llm_judgeable_groundtruth_similarity
2381049
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If I have an IEnumerable<ClassA> where ClassA exposes an ID property of type long, is it possible to use a Linq query to get all instances of ClassA with ID belonging to a second IEnumerable? In other words, can this be done? IEnumerable<ClassA> = original.Intersect(idsToFind....)? where original is an IEnumerable<ClassA> and idsToFind is IEnumerable<long> . Now provide the response and nothing else.
Yes. As other people have answered, you can use Where , but it will be extremely inefficient for large sets. If performance is a concern, you can call Join : var results = original.Join(idsToFind, o => o.Id, id => id, (o, id) => o); If idsToFind can contain duplicates, you'll need to either call Distinct() on the IDs or on the results or replace Join with GroupJoin (The parameters to GroupJoin would be the same).
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2381049', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/187279/']}
jdg_375558
stackexchange
llm_judgeable_groundtruth_similarity
5045608
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When passing a -D parameter in Java, what is the proper way of writing the command-line and then accessing it from code? For example, I have tried writing something like this... if (System.getProperty("test").equalsIgnoreCase("true")){ //Do something} And then calling it like this... java -jar myApplication.jar -Dtest="true" But I receive a NullPointerException. What am I doing wrong? Now provide the response and nothing else.
I suspect the problem is that you've put the "-D" after the -jar . Try this: java -Dtest="true" -jar myApplication.jar From the command line help: java [-options] -jar jarfile [args...] In other words, the way you've got it at the moment will treat -Dtest="true" as one of the arguments to pass to main instead of as a JVM argument. (You should probably also drop the quotes, but it may well work anyway - it probably depends on your shell.)
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/5045608', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/330204/']}
jdg_375559
stackexchange
llm_judgeable_groundtruth_similarity
294338
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: The Kazdan-Warner trichotomy states that for $n\ge 3$, a compact $n$-manifold falls into one of three categories: (A) Every (smooth) function is a scalar curvature. (B) The manifold is strongly scalar flat. (C) The manifold only admits scalar curvatures which are negative somewhere. Of course class (A) is nonempty in all dimensions because it contains $S^n$. Gromov and Lawson showed that (B) contains all tori $T^n$. However, it's not clear to me that (C) is nonempty in all dimensions. Kazdan and Warner (Prescribing Curvatures, Proc. Symp. Pure Math. 27 ) showed: Let $M$ be a spin manifold with $\hat A(M)\ne 0$ and $b_1(M)=\dim M$. Then $M$ does not admit a metric of zero scalar curvature. Consequently, any such manifold must be type (C). They only give the example $T^4\#K3$. Are there examples in dimensions $3$ and $\ge 5$ of type (C) manifolds? Presumably one could use the Kazdan-Warner result above and then apply some knowledge of manifolds with nonzero A-roof genus. They mention Hitchin told them one can strengthen the hypothesis to $b_1(M)\ne 0$. Now provide the response and nothing else.
Bourguignon showed that if a compact manifold does not admit positive scalar curvature metrics, then any scalar flat metric (actually, any non-negative scalar curvature metric) is Ricci-flat; I suppose this is what you mean when you write "strongly scalar flat". But in three dimensions, the Ricci curvature determines the full curvature tensor, in particular, a Ricci-flat metric is flat. So any non-flat three-manifold which does not admit positive scalar curvature metrics will provide an example. If I'm not mistaken, Gromov and Lawson proved that a compact three-manifold admits positive scalar curvature if and only if its prime decomposition contains no aspherical factors; note, this was before the Poincaré conjecture had been verified, so there would have been a caveat at the time of publication. So $T^3\#S^1\times S^2$ is an example of a compact three-manifold of type (C). Note, $T^3\#S^1\times S^2$ is not flat as a non-trivial connected sum of compact manifolds of dimension at least three is never aspherical, but flat $n$-dimensional manifolds have universal cover $\mathbb{R}^n$. In dimension four, you can sometimes use Seiberg-Witten invariants to rule out the existence of positive scalar curvature metrics, and then use the Hitchin-Thorpe inequality to rule out the existence of a Ricci-flat metric. For example, a compact Kähler surface with $b^+ \geq 2$ does not admit positive scalar curvature metrics; blowing up doesn't change this, but it eventually violates the Hitchin-Thorpe inequality. A proof of Bourguignon's result can be found in Kazdan and Warner's paper Prescribing Curvatures , namely Lemma 5.2. As for the result of Gromov and Lawson, see Chapter IV, Theorem 6.18 of Lawson and Michelsohn's Spin Geometry and the discussion which follows.
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/294338', 'https://mathoverflow.net', 'https://mathoverflow.net/users/90154/']}
jdg_375560
stackexchange
llm_judgeable_groundtruth_similarity
15486951
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I realize "why are things the way they are" questions are not usually the best, but there are many people on SO that are tuned to standard committee discussions so I hope this can be answered factually, as I'm legitimately curious as to what the answer is. Basically, it took me a long time to figure out what was going on with std::result_of 's template signature the first time I saw it: I thought it was an entirely new construct for template parameters that I had never seen before. template< class F, class... ArgTypes >class result_of<F(ArgTypes...)>; After some time thinking about it, I realized what this actually was: F(ArgTypes...) is a function type, but it's not the type of the function whose result type is being evaluated (that's just F ): it's the type of a function taking ArgTypes... arguments and returning type F . Isn't this...odd? Kind of hackish? Does anyone know if the committee ever discussed any alternatives, like, say, the following... template< class F, class... ArgTypes >class result_of<F, ArgTypes...>; ? I guess it's possible that there's situations where the second construct can't be used as easily as the first one, but which ones? I'm not trying to pass judgement on this, but it's just that this was legitimately confusing to me the first time I saw it, so I'm curious if there's a good reason for it. I realize part of the answer might simply be "because Boost did it" that way, but still that leave the remaining (factual) questions... Is there a technical reason Boost choose this syntax to encode type information rather than any alternative? Was there any discussion by the C++11 committee about how appropriate it was to standardize this, given that std::result_of can be implemented in terms of decltype fairly easily anyway? Now provide the response and nothing else.
Having a function-type as the parameter allows you to have an unrestricted "variadic" class template even in C++03. Think about it: In C++03, we didn't have variadic templates. And you can't "overload" a class template like you can with function templates - so how would it be otherwise possible to allow different amounts of "arguments" to the function? Using a function type, you can just add any number of partial specializations for the different number of parameters:

template<class Fty>
struct result_of;

template<class F>
struct result_of<F()>{ /*...*/ };

template<class F, class A0>
struct result_of<F(A0)>{ /*...*/ };

template<class F, class A0, class A1>
struct result_of<F(A0, A1)>{ /*...*/ };

// ...

The only other way to do this in C++03 is default template arguments and partially specializing for every case - the disadvantage being that it doesn't look like a function call anymore, and that any kind of wrapper that uses result_of internally can't just pass Sig along. Now, there's one disadvantage with the function-type way - you also get all the usual transformations done to the "parameters": R(Args...) -> R(*)(Args...) and more importantly T[N] -> T* and top-level cv-qualifiers being discarded ( §8.3.5/5 ):

struct X{
    bool operator()(int (&&arr)[3]);
    long operator()(void*);
};

static_assert(std::is_same<std::result_of<X(int[3])>::type, bool>(), "/cry");

Live example. Output: error: static assertion failed: /cry

The other problem is with the top-level cv-qualifiers being discarded:

struct Y{};

struct X{
    bool operator()(Y const&);
    long operator()(Y&&);
};

Y const f();

static_assert(std::is_same<std::result_of<X(Y const)>::type, bool>(), "/cry");

Live example. Output: error: static assertion failed: /cry
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15486951', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2008149/']}
jdg_375561
stackexchange
llm_judgeable_groundtruth_similarity
1604
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Here's the circuit I'm working on, it's a robotic stringed musical instrument thing, that uses 6 DC motors and an Arduino Duemilanove - The resistors are 0.25w 1KΩ The transistors are TIP122 Darlington PNPThe diodes are N5401 The motors require 3v and 1.17A each, but the Arduino likes to work on about 6v-9v and only a few mA. I'd like to know the easiest way to use a single DC power supply unit (Of any appropriate spec), to power the whole circuit. Cheers. Now provide the response and nothing else.
If you are sure of the motor voltage and currents and all the motors may need to be on at the same time, you really want two voltages. Using a single 7V supply you would need to drop 4V to power a 3V motor. 4V * 1.17A * 6 motors = 28W. That is a lot of power to dissipate. You can get dual output wall warts. Get one with a 7-10V output and a 3-5V high current output. If size is not a factor you could get a PC power supply. Use the 3.3V for the motors and use the 12V for the Arduino with a regulator to drop the voltage to 7V or so. You could also get a single 3.3V supply and use a boost converter to generate 7V. It would be a small boost converter since the Arduino wouldn't need much power. These are easy to build. Something like a MAX34063 or LT1302 would work. I would do the boost converter solution. One more note -- I would not use Darlingtons to drive the motors. Darlingtons have a very high saturation voltage and you are going to dissipate a lot of power. A better choice would be a MOSFET. The IR IRLB8748PBF is a 40A MOSFET with an Rds(on) of 7mOhms (at logic levels). At 1A you will dissipate around 7mW compared to over 1W in the Darlington. They are about 90 cents from Digikey.
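As a quick sanity check of the figures quoted above, here is a minimal Python sketch using only the values already given in the answer (7 V supply, 3 V / 1.17 A motors, 7 mOhm Rds(on)):

v_drop = 7.0 - 3.0            # volts dropped per motor with a linear approach
i_motor = 1.17                # amps per motor
n_motors = 6
print(v_drop * i_motor * n_motors)         # ~28.1 W lost as heat across six motors

rds_on = 0.007                # ohms, logic-level Rds(on) quoted for the IRLB8748
print(i_motor**2 * rds_on * 1000)          # ~9.6 mW conduction loss per MOSFET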
{}
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/1604', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/399/']}
jdg_375562
stackexchange
llm_judgeable_groundtruth_similarity
7610001
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Could you explain to me what the difference is between calling python -m mymod1 mymod2.py args and python mymod1.py mymod2.py args It seems in both cases mymod1.py is called and sys.argv is ['mymod1.py', 'mymod2.py', 'args'] So what is the -m switch for? Now provide the response and nothing else.
The first line of the Rationale section of PEP 338 says: Python 2.4 adds the command line switch -m to allow modules to be located using the Python module namespace for execution as scripts. The motivating examples were standard library modules such as pdb and profile, and the Python 2.4 implementation is fine for this limited purpose. So you can specify any module in Python's search path this way, not just files in the current directory. You're correct that python mymod1.py mymod2.py args has exactly the same effect. The first line of the Scope of this proposal section states: In Python 2.4, a module located using -m is executed just as if its filename had been provided on the command line. With -m more is possible, like working with modules which are part of a package, etc. That's what the rest of PEP 338 is about. Read it for more info.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/7610001', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/621944/']}
jdg_375563
stackexchange
llm_judgeable_groundtruth_similarity
3059020
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The integral is $$\int_0^{2\pi}\frac{\mathrm dθ}{2-\cosθ}.$$ Just to skip time, the answer of the indefinite integral is $\dfrac2{\sqrt{3}}\tan^{-1}\left(\sqrt3\tan\left(\dfracθ2\right)\right)$ . Evaluating it from $0$ to $ 2 \pi$ yields $$\frac2{\sqrt3}\tan^{-1}(\sqrt3 \tanπ)-\frac2{\sqrt3}\tan^{-1}(\sqrt3 \tan0)=0-0=0.$$ But using complex analysis, the integral is transformed into $$2i\int_C\frac{\mathrm dz}{z^2-4z+1}=2i\int_C\frac{\mathrm dz}{(z-2+\sqrt3)(z-2-\sqrt3)},$$ where $C$ is the boundary of the circle $|z|=1$ . Then by Cauchy's integral formula, since $z=2-\sqrt3$ is inside the domain of the region bounded by $C$ , then: $$2i\int_C\frac{\mathrm dz}{(z-2+\sqrt3)(z-2-\sqrt3)}=2πi\frac{2i}{2-\sqrt3-2-\sqrt3}=2πi\frac{2i}{-2\sqrt3}=\frac{2π}{\sqrt3}.$$ Using real analysis I get $0$ , using complex analysis I get $\dfrac{2π}{\sqrt3}$ . What is wrong? Now provide the response and nothing else.
The problem with the real approach is that you make the change of variable $t=\tan\left(\dfrac{\theta}{2}\right)$ for $0 < \theta < 2 \pi$. This is problematic since your substitution needs to be defined and continuous for all $\theta$, but you have a problem when $\theta=\pi$. Edit: Note that if you split the integral into $\int_0^\pi+\int_\pi^{2 \pi}$, you are going to get the right answer, as for one integral you are going to get $\arctan(-\infty)$ and for the other $\arctan(+\infty)$: $$\int_0^{2 \pi} \frac{\mathrm{d}\theta}{2-\cos \theta}=\int_0^\pi \frac{\mathrm{d}\theta}{2-\cos \theta}+\int_\pi ^{2 \pi} \frac{\mathrm{d}\theta}{2-\cos \theta}\\= \lim_{r \to \pi_-} \int_0^r \frac{\mathrm{d}\theta}{2-\cos \theta}+ \lim_{w \to \pi_+} \int_w^{2 \pi} \frac{\mathrm{d}\theta}{2-\cos \theta}\\= \lim_{r \to \pi_-} \left(\frac{2\tan^{-1}\left( \sqrt{3} \tan\left( \frac{r}{2}\right)\right)}{ \sqrt{3}}-0\right)+ \lim_{w \to \pi_+}\left(0- \frac{2\tan^{-1}\left( \sqrt{3} \tan\left( \frac{w}{2}\right)\right)}{ \sqrt{3}}\right).$$
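If you want to double-check the value numerically, a minimal Python/SciPy sketch (assuming SciPy is available) confirms the contour-integral result rather than 0:

import numpy as np
from scipy.integrate import quad

# Numerically integrate 1/(2 - cos(theta)) over [0, 2*pi].
value, err = quad(lambda t: 1.0 / (2.0 - np.cos(t)), 0.0, 2.0 * np.pi)
print(value)                     # 3.6275987...
print(2.0 * np.pi / np.sqrt(3))  # 2*pi/sqrt(3) = 3.6275987...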
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/3059020', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/433588/']}
jdg_375564
stackexchange
llm_judgeable_groundtruth_similarity
180053
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In $C[0,1]$ the set $\{f(x): f(0)\neq 0\}$ is dense? I know only that polynomials are dense in $C[0,1]$, could any one give me hint how to show this set is dense?thank you. Now provide the response and nothing else.
Yes. Take $f\in \mathcal{C}[0,1]$ so that $f(0) = 0$. Now define $$f_n(x) = f(x) + {1\over n}, \qquad n\in\mathbb{N}.$$We have $f_n\to f$ uniformly, whilst $f_n(0) \not= 0$ for all $n\in \mathbb{N}.$
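A small numerical illustration of this argument (the particular f below is just an arbitrary continuous function with f(0) = 0, chosen for the example):

import numpy as np

x = np.linspace(0.0, 1.0, 1001)
f = x * np.sin(np.pi * x)            # an arbitrary f in C[0,1] with f(0) = 0
for n in (1, 10, 100, 1000):
    f_n = f + 1.0 / n                # the approximants from the answer
    # sup-distance to f is exactly 1/n, yet f_n(0) = 1/n != 0
    print(n, np.max(np.abs(f_n - f)), f_n[0])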
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/180053', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/24690/']}
jdg_375565
stackexchange
llm_judgeable_groundtruth_similarity
1883098
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: $ \displaystyle \sum_{n=1}^{\infty} (-1)^{(n-1)} \frac{{\rm H}_n}{n^p} $ Does this have a nice closed form? I am trying to evaluate the case of p=4 also Now provide the response and nothing else.
In Euler Sums and Contour Integral Representations by P. Flajolet and B. Salvy they state in Theorem 7.1 (Sitaramachandra Rao) for odd weight $q+1$: \begin{align*}\sum_{n=1}^\infty(-1)^{n-1}\frac{H_n}{n^q}&=\frac{q+1}{2}\left(1-2^{-q}\right)\zeta(q+1)-\frac{1}{2}\zeta(q+1)\\&\qquad-\sum_{k=1}^{q/2-1}\left(1-2^{1-2k}\right)\zeta(2k)\zeta(q+1-2k)\end{align*} We obtain for odd $q+1=5$: \begin{align*}\sum_{n=1}^\infty(-1)^{n-1}\frac{H_n}{n^4}&=\frac{5}{2}\left(1-2^{-4}\right)\zeta(5)-\frac{1}{2}\zeta(5)\\&\qquad-\left(1-2^{-1}\right)\zeta(2)\zeta(3)\\&=\frac{59}{32}\zeta(5)-\frac{\pi^2}{12}\zeta(3)\end{align*}
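A quick numerical check of the $q+1=5$ case (plain Python, standard library only; the truncated sums below are accurate enough to confirm the closed form):

import math

N = 10**6
H = lhs = zeta3 = zeta5 = 0.0
for n in range(1, N + 1):
    H += 1.0 / n
    lhs += (-1) ** (n - 1) * H / n**4     # the alternating Euler sum
    zeta3 += 1.0 / n**3
    zeta5 += 1.0 / n**5

rhs = 59.0 / 32.0 * zeta5 - math.pi**2 / 12.0 * zeta3
print(lhs, rhs)    # both print ~0.92355..., agreeing to many digits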
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1883098', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/335483/']}
jdg_375566
stackexchange
llm_judgeable_groundtruth_similarity
5782794
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am getting this warning when I switch views (which are just different themes) in my app. Any ideas, suggestions on why the app is not loading? 2011-04-25 16:14:20.211 MyApp [4121:40b] ADBannerView: WARNING A banner view (0x631bb30) has an ad but may be obscured. This message is only printed once per banner view. It appears that when the view switches, the other view is just in the background, any ideas on how to do this. ** Note there is an excellent example on Apple's developer site, called iADSuite Now provide the response and nothing else.
This message means that the banner is partially or completely hidden. If you read Apple's iAd documentation you will notice that they recommend having only one iAd instance which you should use for all your views (UIViewControllers). Here is how I do iAds in my apps: I have one singleton class which owns the iAd banner (thus following Apple's recommendation to have only one iAd instance). In all my view controllers I show the iAd view from my singleton class in the viewDidLoad method, and in viewWillDisappear I remove the iAd view from the view controller. This way the banner view is always visible in only one view controller at a time - therefore it is never "obscured". I hope this explanation helps you out :)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5782794', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/597775/']}
jdg_375567
stackexchange
llm_judgeable_groundtruth_similarity
9977074
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Does anyone know how to calculate the area in common between 2 or more polygons in R? I would like to have the output of such a calculation be the coordinates of a new polygon for that area of overlap.Cheers Now provide the response and nothing else.
EDIT: these days I would use the 'intersect', 'cover', 'erase', 'union' and related functions in the 'raster' package. They do the hard work to keep the top-level object and attributes. ORIG: You could use the rgeos package with its gIntersection function. Successive calls between pairs and resulting intersections will get you there. See library(rgeos)?gIntersection You will need to get into the structure of "SpatialPolygons" in the sp package to get the final coordinates. See the vignette("sp").
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9977074', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1199289/']}
jdg_375568
stackexchange
llm_judgeable_groundtruth_similarity
213594
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am working on a project at the moment that generates a table (among other things) based on the contents of a file-system, and in turn does some meta-data modifications on the things it finds. The question is: how should tests be written around this, or set up? Is there an easy way to mock this out? Or should I setup a "sandbox"? Now provide the response and nothing else.
As you always do in TDD with external resources: you create one or more interfaces for your filesystem operations and "mock them out". You want to test your "table generator" and your meta-data modification code, not the file system operations themselves (most probably you are using ready-made library implementations for accessing the file system).
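To make that concrete, here is a minimal Python sketch of the idea (the names FileSystem, list_files and build_table are invented for the example, not taken from the question): the table-building logic depends only on a small interface, and the test replaces it with a mock.

import os
from unittest import mock

class FileSystem:
    """Thin wrapper around the real filesystem operations."""
    def list_files(self, path):
        return os.listdir(path)

def build_table(fs, path):
    """Logic under test: builds table rows from whatever the interface reports."""
    return [{"name": name} for name in sorted(fs.list_files(path))]

def test_build_table():
    fake_fs = mock.Mock()
    fake_fs.list_files.return_value = ["b.txt", "a.txt"]
    rows = build_table(fake_fs, "/irrelevant")
    assert rows == [{"name": "a.txt"}, {"name": "b.txt"}]
    fake_fs.list_files.assert_called_once_with("/irrelevant")

test_build_table()
print("ok")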
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/213594', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/80608/']}
jdg_375569
stackexchange
llm_judgeable_groundtruth_similarity
18691084
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: A 2D array can be reshaped into a 1D array using .reshape(-1) .For example: >>> a = numpy.array([[1, 2, 3, 4], [5, 6, 7, 8]])>>> a.reshape(-1)array([[1, 2, 3, 4, 5, 6, 7, 8]]) Usually, array[-1] means the last element.But what does -1 mean here? Now provide the response and nothing else.
The criterion to satisfy for providing the new shape is that 'The new shape should be compatible with the original shape' numpy allow us to give one of new shape parameter as -1 (eg: (2,-1) or (-1,3) but not (-1, -1)). It simply means that it is an unknown dimension and we want numpy to figure it out. And numpy will figure this by looking at the 'length of the array and remaining dimensions' and making sure it satisfies the above mentioned criteria Now see the example. z = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])z.shape(3, 4) Now trying to reshape with (-1) . Result new shape is (12,) and is compatible with original shape (3,4) z.reshape(-1)array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]) Now trying to reshape with (-1, 1) . We have provided column as 1 but rows as unknown . So we get result new shape as (12, 1).again compatible with original shape(3,4) z.reshape(-1,1)array([[ 1], [ 2], [ 3], [ 4], [ 5], [ 6], [ 7], [ 8], [ 9], [10], [11], [12]]) The above is consistent with numpy advice/error message, to use reshape(-1,1) for a single feature; i.e. single column Reshape your data using array.reshape(-1, 1) if your data has a single feature New shape as (-1, 2). row unknown, column 2. we get result new shape as (6, 2) z.reshape(-1, 2)array([[ 1, 2], [ 3, 4], [ 5, 6], [ 7, 8], [ 9, 10], [11, 12]]) Now trying to keep column as unknown. New shape as (1,-1). i.e, row is 1, column unknown. we get result new shape as (1, 12) z.reshape(1,-1)array([[ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]) The above is consistent with numpy advice/error message, to use reshape(1,-1) for a single sample; i.e. single row Reshape your data using array.reshape(1, -1) if it contains a single sample New shape (2, -1). Row 2, column unknown. we get result new shape as (2,6) z.reshape(2, -1)array([[ 1, 2, 3, 4, 5, 6], [ 7, 8, 9, 10, 11, 12]]) New shape as (3, -1). Row 3, column unknown. we get result new shape as (3,4) z.reshape(3, -1)array([[ 1, 2, 3, 4], [ 5, 6, 7, 8], [ 9, 10, 11, 12]]) And finally, if we try to provide both dimension as unknown i.e new shape as (-1,-1). It will throw an error z.reshape(-1, -1)ValueError: can only specify one unknown dimension
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/18691084', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2262504/']}
jdg_375570
stackexchange
llm_judgeable_groundtruth_similarity
43283825
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm having trouble updating the /etc/fstab of my Linux distribution, when building it with Yocto. I'm pretty new to Yocto, so maybe I'm off my rocker. My latest attempt is to add a recipe named base-files_%.bbappend . mount_smackfs () { cat >> ${IMAGE_ROOTFS}/etc/fstab <<EOF# Generated from smack-userspacesmackfs /smack smackfs smackfsdefault=* 0 0 EOF} ROOTFS_POSTPROCESS_COMMAND += "mount_smackfs; " But, the output /etc/fstab on the distribution hasn't changed. So the questions are: Is there a better way to do this? How can I tell if my .bbappend file was actually executed? Now provide the response and nothing else.
ROOTFS_POSTPROCESS_COMMAND is handled in image recipes and not in package recipes. You have 2 possibilities. Update your fstab in base-files_%.bbappend : do_install_append () { cat >> ${D}${sysconfdir}/fstab <<EOF# Generated from smack-userspacesmackfs /smack smackfs smackfsdefault=* 0 0 EOF} Update the fstab in your image's recipe: In this case, you just append what you wrote above (in your post) in the image's recipe.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43283825', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1292052/']}
jdg_375571
stackexchange
llm_judgeable_groundtruth_similarity
12609487
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a timestamp field and I just want to compare the date part of it in my query in Oracle How do I do that, SELECT *FROM Table1WHERE date(field1) = '2012-01-01' Now provide the response and nothing else.
You can truncate the date part: select * from table1 where trunc(field1) = to_date('2012-01-01', 'YYYY-MM-DD') The trouble with this approach is that any index on field1 wouldn't be used due to the function call. Alternatively (and more index friendly) select * from table1 where field1 >= to_timestamp('2012-01-01', 'YYYY-MM-DD') and field1 < to_timestamp('2012-01-02', 'YYYY-MM-DD')
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/12609487', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1050619/']}
jdg_375572
stackexchange
llm_judgeable_groundtruth_similarity
1032948
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is probably another easy Haskell question. If I have some "nested" data types, such as in this example code: data Place = Country | State | City String deriving Showdata State = California | NewYork deriving Showdata Country = USA | Canada deriving Show I can legally make a list such as [USA, Canada] of type [Country], or [California, NewYork] of type [State], or [City "a", City "b"] of type [Place]. What do I have to do to make a list such as [USA, NewYork]? NewYork is a State which is a Place, and USA is a Country which is a Place, but ghci sees USA so it assumes I am making a list of Countrys (and NewYork is a State, so the list fails). I think I need some way to cast a Country or State to a Place, but I'm at a loss on how to accomplish this. I'm trying to avoid throwing the data contained within State and Country into the Place type, which I know would make it work, but I've got a decent amount of real data that I'd rather not jumble up like that. Now provide the response and nothing else.
Here's a little more output that led me to figure out the problem: *Main> [State, State][State,State]*Main> :t StateState :: Place*Main> :t NewYorkNewYork :: State This seems to imply that the word "State" is a valid constructor for Place, and that the data State refers to a value of only California or NewYork. If you change the program slightly, to: data Place = Country Country | State State | City String deriving Showdata State = California | NewYork deriving Showdata Country = USA | Canada deriving Show then you can make a list such as [Country USA, State NewYork] which is properly of type [Place]. Using the same word twice as in the first example does not "bind" the State type together in the way I had thought it would. Of course, using the constructor State State is just a matter of preference, I could just as easily do AmericanState State within the Place type if I were so inclined.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1032948', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/126042/']}
jdg_375573
stackexchange
llm_judgeable_groundtruth_similarity
53637079
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm learning racket and I have a question about when to use define and when to use let . I have this function: (define my-function (lambda (param1 param2 list1) (/ (count (lambda (x) (define a (param1 (remove x list1))) (define b (drop-right x 1)) (define c (param2 a x-sin-clase)) (eq? (last x) (last c))) (cdr list1)) (length (cdr list1))))) Without knowing what the above function does. Is it correct to use define inside the function body? I have read somewhere that define is used to declare global variables and let is used to declare local variables. I've look in racket's documentation but it doesn't talk about any difference. Now provide the response and nothing else.
One difference: Internal defines are in a mutually recursive scope, but let bindings are not. This means than in a let : (let ([x expr-1] [y expr-2]) body) The expr-1 and expr-2 cannot refer to x or y . More concretely, (let ([x (stream-cons 1 y)] [y (stream-cons 2 x)]) x);error=> y: unbound identifier in: y And if x or y is defined outside of the let , expr-1 and expr-2 will refer to the outer definitions, and not the ones introduced by the let. Concretely: (define x 'outer)(let ([x 'inner] [y x]) ; <- this x refers to outer, y) ; so y is 'outer;=> 'outer However, internal defines have a mutually recursive scope, which means that in (block (define x expr-1) (define y expr-2) body) The expr-1 and expr-2 can refer to x or y . Concretely, (require racket/block)(block (define x (stream-cons 1 y)) (define y (stream-cons 2 x)) (stream->list (stream-take x 5)));=> (list 1 2 1 2 1) The Scope of a define ....A....(define (f) (define t1 ..B..) (define x ..C..) (define t2 ..D..) ....E....)....F.... The x is visible everywhere in the body of f , but not outside that. That means it's visible in B , C , D , and E , but not in A or F. The Scope of a let ....A....(define (f) (let ([t1 ..B..] [x ..C..] [t2 ..D..]) ....E....))....F.... Here the x is visible everywhere in the body of the let , but not outside that. That means it's visible in E , but not in A, B, C, D, or F. The Scope of a let* ....A....(define (f) (let* ([t1 ..B..] [x ..C..] [t2 ..D..]) ....E....))....F.... Here the x is visible everywhere in the body of the let* and in let* bindings that come after it, but not outside that. That means it's visible in D and E , but not in A, B, C, or F. The Scope of a letrec ....A....(define (f) (letrec ([t1 ..B..] [x ..C..] [t2 ..D..]) ....E....))....F.... The x is visible everywhere in the body of the letrec and in the bindings of the letrec , but not outside that. That means it's visible in B , C , D , and E , but not in A or F. The scope of variables in letrec and the scope of local define variables are very similar because both letrec and define work with mutually recursive scopes.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/53637079', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/68571/']}
jdg_375574
stackexchange
llm_judgeable_groundtruth_similarity
4229402
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Consider the structure $(\mathbb{C};+,-,*,0,1)$ . Certainly, it can't be ordered to make an ordered field. However, I am asking whether that structure can define any partial order other than equality. It does not have to be an order that makes it into an ordered field(which is impossible), it can be any partial order. Also, if the answer is no, then it will answer in the negative my follow-up question as to whether the structure can define any linear order. Now provide the response and nothing else.
Yes: every structure admits a definable partial order by $x\leq y$ iff $x = y$ . The complex field admits many other rather trivial definable partial orders. For example, we can define $x\leq y$ iff $(x = 0) \lor (x = y)$ . This order has a least element $0$ , and every other pair of elements is incomparable. But the complex field admits no definable partial order (in fact no definable preorder) with infinite chains. This is exactly the content of the statement that the theory $\mathrm{ACF}_0$ does not have the strict order property , and it follows from the fact that the theory $\mathrm{ACF}_0$ is stable (equivalently, does not have the order property ), which in turn follows from the fact that $\mathrm{ACF}_0$ is $\omega$ -stable, which in turn follows from the fact that $\mathrm{ACF}_0$ is strongly minimal (every definable set is finite or cofinite), which has the cute consequence that Noah Schweber points out in the comments below.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4229402', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/107952/']}
jdg_375575
stackexchange
llm_judgeable_groundtruth_similarity
5443436
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've got some nested object data and I want to search it and return the matching object based on the id. var data = [{id: 0, name: 'Template 0', subComponents:[ {id: 1, name: 'Template 1', subItems:[ {id: 2, name: 'Template 2', subComponents:[{id: 3, name: 'Template 3'}], subItems: [{id: 4, name: 'Template 4'}]} ]} ]}]; So I want to do something like this getObjectByKeyValue({id: 3}) and have it return {id: 3, name: 'Template 3'} It's sort of got to be done generically because I have subItems, AND subComponents which could each have children. I tried this using Prototype 1.7 and no luck - I think this just searches an array, and not a tree with it's sub nodes: data.find(function(s){return s.id == 4;}) Thanks in advance!!!!!! Now provide the response and nothing else.
I went a slightly different route and made the findKey method an Object prototype: Object.prototype.findKey = function(keyObj) { var p, key, val, tRet; for (p in keyObj) { if (keyObj.hasOwnProperty(p)) { key = p; val = keyObj[p]; } } for (p in this) { if (p == key) { if (this[p] == val) { return this; } } else if (this[p] instanceof Object) { if (this.hasOwnProperty(p)) { tRet = this[p].findKey(keyObj); if (tRet) { return tRet; } } } } return false;}; Which you would call directly on the data object, passing in the key/value you're looking for: data.findKey({ id: 3 }); Note that this function allows you to find an object based on any key: data.findKey({ name: 'Template 0' }); See example → (open console to view result)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5443436', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/678155/']}
jdg_375576
stackexchange
llm_judgeable_groundtruth_similarity
26746379
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Context:In Python a main thread spawns a 2nd process (using multiprocessing module) and then launches a GUI (using PyQt4). At this point the main thread blocks until the GUI is closed. The 2nd process is always processing and ideally should emit signal(s) to specific slot(s) in the GUI in an asynchronous manner. Question:Which approach/tools are available in Python and PyQt4 to achieve that and how? Preferably in a soft-interrupt manner rather than polling. Abstractly speaking, the solution I can think of is a "tool/handler" instantiated in the main thread that grabs the available slots from the GUI instance and connects with the grabbed signals from the 2nd process, assuming I provide this tool some information of what to expect or hard coded. This could be instantiated to a 3rd process/thread. Now provide the response and nothing else.
This is an example Qt application demonstrating sending signals from a child process to slots in the mother process. I'm not sure this is right approach but it works. I differentiate between process as mother and child , because the word parent is alread used in the Qt context. The mother process has two threads. Main thread of mother process sends data to child process via multiprocessing.Queue . Child process sends processed data and signature of the signal to be sent to the second thread of mother process via multiprocessing.Pipe . The second thread of mother process actually emits the signal. Python 2.X, PyQt4: from multiprocessing import Process, Queue, Pipefrom threading import Threadimport sysfrom PyQt4.QtCore import *from PyQt4.QtGui import *class Emitter(QObject, Thread): def __init__(self, transport, parent=None): QObject.__init__(self,parent) Thread.__init__(self) self.transport = transport def _emit(self, signature, args=None): if args: self.emit(SIGNAL(signature), args) else: self.emit(SIGNAL(signature)) def run(self): while True: try: signature = self.transport.recv() except EOFError: break else: self._emit(*signature)class Form(QDialog): def __init__(self, queue, emitter, parent=None): super(Form,self).__init__(parent) self.data_to_child = queue self.emitter = emitter self.emitter.daemon = True self.emitter.start() self.browser = QTextBrowser() self.lineedit = QLineEdit('Type text and press <Enter>') self.lineedit.selectAll() layout = QVBoxLayout() layout.addWidget(self.browser) layout.addWidget(self.lineedit) self.setLayout(layout) self.lineedit.setFocus() self.setWindowTitle('Upper') self.connect(self.lineedit,SIGNAL('returnPressed()'),self.to_child) self.connect(self.emitter,SIGNAL('data(PyQt_PyObject)'), self.updateUI) def to_child(self): self.data_to_child.put(unicode(self.lineedit.text())) self.lineedit.clear() def updateUI(self, text): text = text[0] self.browser.append(text)class ChildProc(Process): def __init__(self, transport, queue, daemon=True): Process.__init__(self) self.daemon = daemon self.transport = transport self.data_from_mother = queue def emit_to_mother(self, signature, args=None): signature = (signature, ) if args: signature += (args, ) self.transport.send(signature) def run(self): while True: text = self.data_from_mother.get() self.emit_to_mother('data(PyQt_PyObject)', (text.upper(),))if __name__ == '__main__': app = QApplication(sys.argv) mother_pipe, child_pipe = Pipe() queue = Queue() emitter = Emitter(mother_pipe) form = Form(queue, emitter) ChildProc(child_pipe, queue).start() form.show() app.exec_() And as convenience also Python 3.X, PySide: from multiprocessing import Process, Queue, Pipefrom threading import Threadfrom PySide import QtGui, QtCoreclass Emitter(QtCore.QObject, Thread): def __init__(self, transport, parent=None): QtCore.QObject.__init__(self, parent) Thread.__init__(self) self.transport = transport def _emit(self, signature, args=None): if args: self.emit(QtCore.SIGNAL(signature), args) else: self.emit(QtCore.SIGNAL(signature)) def run(self): while True: try: signature = self.transport.recv() except EOFError: break else: self._emit(*signature)class Form(QtGui.QDialog): def __init__(self, queue, emitter, parent=None): super().__init__(parent) self.data_to_child = queue self.emitter = emitter self.emitter.daemon = True self.emitter.start() self.browser = QtGui.QTextBrowser() self.lineedit = QtGui.QLineEdit('Type text and press <Enter>') self.lineedit.selectAll() layout = QtGui.QVBoxLayout() layout.addWidget(self.browser) 
layout.addWidget(self.lineedit) self.setLayout(layout) self.lineedit.setFocus() self.setWindowTitle('Upper') self.lineedit.returnPressed.connect(self.to_child) self.connect(self.emitter, QtCore.SIGNAL('data(PyQt_PyObject)'), self.updateUI) def to_child(self): self.data_to_child.put(self.lineedit.text()) self.lineedit.clear() def updateUI(self, text): self.browser.append(text[0])class ChildProc(Process): def __init__(self, transport, queue, daemon=True): Process.__init__(self) self.daemon = daemon self.transport = transport self.data_from_mother = queue def emit_to_mother(self, signature, args=None): signature = (signature, ) if args: signature += (args, ) self.transport.send(signature) def run(self): while True: text = self.data_from_mother.get() self.emit_to_mother('data(PyQt_PyObject)', (text.upper(),))if __name__ == '__main__': import sys app = QtGui.QApplication(sys.argv) mother_pipe, child_pipe = Pipe() queue = Queue() emitter = Emitter(mother_pipe) form = Form(queue, emitter) ChildProc(child_pipe, queue).start() form.show() app.exec_()
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26746379', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4178816/']}
jdg_375577
stackexchange
llm_judgeable_groundtruth_similarity
1825692
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'd like my python program to place some text in the Mac clipboard. Is this possible? Now provide the response and nothing else.
How to write a Unicode string to the Mac clipboard: import subprocessdef write_to_clipboard(output): process = subprocess.Popen( 'pbcopy', env={'LANG': 'en_US.UTF-8'}, stdin=subprocess.PIPE) process.communicate(output.encode('utf-8')) How to read a Unicode string from the Mac clipboard: import subprocessdef read_from_clipboard(): return subprocess.check_output( 'pbpaste', env={'LANG': 'en_US.UTF-8'}).decode('utf-8') Works on both Python 2.7 and Python 3.4. 2021 Update: If you need to be able to read the clipboard on other operating systems and not just Mac and are okay with adding an external library, pyperclip also seems to work well. I tested it on Mac with Unicode text: python -m pip install pyperclippython -c 'import pyperclip; pyperclip.copy("私はDavid!")' # copypython -c 'import pyperclip; print(repr(pyperclip.paste()))' # paste
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1825692', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/259/']}
jdg_375578
stackexchange
llm_judgeable_groundtruth_similarity
35651993
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Can some one explain what's the difference between ViewEncapsulation.Native, ViewEncapsulation.None and ViewEncapsulation.Emulated in angular2 . I tried to google it and read some articles, but I'm not able to understand the difference. Below I have two components Home (home.ts) i.e. parent component and MyComp (my-comp.ts). I want to define styles in the parent that are being used in the child component. Should I use ViewEncapsulation.Native or ViewEncapsulation.None home.ts import {Component, ViewEncapsulation} from 'angular2/core';import {MyComp} from './my-comp';@Component({ selector: 'home', // <home></home> providers: [ ], directives: [ MyComp ], styles: [` .parent-comp-width { height: 300px; width: 300px; border: 1px solid black; } `], template:` <my-comp></my-comp> <div class="parent-comp-width"></div> `, encapsulation: ViewEncapsulation.Native})export class Home {} my-comp.ts import {Component} from 'angular2/core';@Component({ selector: 'my-comp', // <home></home> template: ` <div class="parent-comp-width">my-comp</div> `})export class MyComp {} Now provide the response and nothing else.
update If you want styles that are added to Parent applied to Child, you need to set ViewEncapsulation.None in the Child component so it doesn't prevent styles from bleeding in. Emulated and Native are just two different ways to prevent styles from bleeding into and out of components. None is the only one that allows styles to cross component boundaries. original ViewEncapsulation.None is simply no encapsulation. ViewEncapsulation.Emulated (currently the default in Angular2) adds attributes to component tags and child elements and manipulates the CSS added to the page (adding the attributes to the selectors) so the styles don't bleed into each other - keeping styles scoped to the components where they are added, even though the styles are all collected in the head of the page when components are loaded. ViewEncapsulation.Native creates custom elements with shadow DOM, where the browser's native implementation ensures the style scoping. If the browser doesn't support shadow DOM natively, the web-components polyfills are required to shim the behavior. This is similar to ViewEncapsulation.Emulated, but the polyfills are more expensive because they polyfill lots of browser APIs even when most of them are never used. Angular's Emulated encapsulation just adds the cost for what it uses and is therefore much more efficient for Angular applications.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/35651993', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1805402/']}
jdg_375579
stackexchange
llm_judgeable_groundtruth_similarity
2634317
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: On page 11 of Rudin's real and complex analysis, Let $X$ be a measurable space. If $E$ is measurable set in $X$ and if \begin{equation} \chi_{E}(x)=\begin{cases} 1, & x\in E \\ \\ 0, & x\notin E. \end{cases} \end{equation} then $\chi_E$ is a measurable function. Do we prove $\chi_E^{-1}(V)$ is a measurable set in $X$ for every open set $V$ in $\{0,1\}$? But $\{0\}$ is an open set in $\{0,1\}$,isn't it? So $\chi_E^{-1}(\{0\})$ is not a measurable set in $X$? Now provide the response and nothing else.
$\mathcal{X}_E^{-1}(\{0\}) = E^C$, which is perfectly measurable. Take any $A \in \mathcal{B}(\mathbb{R})$: $\mathcal{X}_E^{-1}(A) =\begin{cases}X, & 0,1 \in A \\E, & 1 \in A, 0 \notin A \\E^C, & 1 \notin A, 0 \in A \\\emptyset, & o.w.\end{cases}$ all those sets are measurable, since $E$ is measurable.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2634317', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/426645/']}
jdg_375580
stackexchange
llm_judgeable_groundtruth_similarity
4818040
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to make a View with three subviews stacked on top of each other with the middle subview scrollable with the others fixed. How can I achieve this programmatically? I have tried to set the contentsize of the root view to the size of the scrollable view but that makes all the views scroll. -set the contentsize of the middle subview without setting any property for the root view but that makes all the views unscrollable. Please help. I am new to iOS. Thanks in advance Now provide the response and nothing else.
You can use the scrollViewDidScroll: delegate callback on the UIScrollView to adjust your view's position. In the callback, get the contentOffset of the scrollview and use that to set your fixed view's position. For example, if you want your fixed view to always remain 100 px from the top of the scrollview, set its initial frame to (0, 100, width, height), and then in the callback set the frame to (0, contentOffset.y + 100, width, height). The result is that the subview will appear fixed at a given height.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4818040', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/592404/']}
jdg_375581
stackexchange
llm_judgeable_groundtruth_similarity
164017
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to develop my Wolfram programming skills and set myself an exercise of creating concentric circles using a recursive delayed function: drawCircle[x_, y_, r_] := Graphics[Circle[{x, y}, r]]; If[r > 10, drawCircle[x, y, r/2], Break]; Since I'm not using a Do , While or For loop, my use of Break is probably wrong too, but my logic is to exit recursion when r is less than 10. This code creates a series of overlay objects rather than concentric circles: I have tried joining the output objects but without much success. Desired output (using delayed function recursion): Now provide the response and nothing else.
In fact, since Mathematica does not care about nesting lists for graphics primitives, the most simple solution is this: drawCircle[x_, y_, r_ /; r > 10] := {Circle[{x, y}, r], drawCircle[x, y, r/2]};drawCircle[__] := {}drawCircle[0, 0, 200] // Graphics Now you will say, wait, that is not what I wanted and you are correct. Dividing radii is not the correct operation. You need to subtract in the recursion: drawCircle[x, y, r - 50] and then you get drawCircle[0, 0, 250] // Graphics
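The same pattern - recurse until the radius guard fails, and subtract rather than halve - carries over directly to other languages; a rough Python/matplotlib sketch of it (purely illustrative, not part of the original answer):

import matplotlib.pyplot as plt
from matplotlib.patches import Circle

def draw_circles(ax, x, y, r):
    if r <= 10:                      # base case: stop the recursion
        return
    ax.add_patch(Circle((x, y), r, fill=False))
    draw_circles(ax, x, y, r - 50)   # subtract so the rings stay evenly spaced

fig, ax = plt.subplots()
draw_circles(ax, 0, 0, 250)
ax.set_xlim(-260, 260)
ax.set_ylim(-260, 260)
ax.set_aspect("equal")
plt.show()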
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/164017', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/20267/']}
jdg_375582
stackexchange
llm_judgeable_groundtruth_similarity
10416
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: this is just a quick and dirty question. I don't understand the purpose of the 1 \$\mu\$F caps in the MAX232 circuit. What do they do, exactly? Can they be skipped? Can they be swapped with 10 \$\mu\$F caps? Now provide the response and nothing else.
They're definitely required! For their function look at the block diagram You'll see that they're connected to the voltage doubler and voltage inverter. These create +10V from the 5V power supply and -10V resp. This is done by charge pumps . An oscillator will control the switches so that either S1 and S3 are closed or S2 and S4. When S1 and S3 are closed C1 is connected to ground and V+ and charged to V+. When S2 and S4 are closed the top of C2, which is V+ higher than the bottom is connected to ground, so that the bottom now is V+ below ground. Via the switches the charge flows to C2, which will then have a negative voltage. So that's for the inverter. The same principle is used to double the incoming voltage. C1 and C2 are the external 1\$\mu\$F capacitors. If they're omitted there won't be any +10V or -10V and no signal at the drivers' outputs. In its datasheet Maxim recommends 1\$\mu\$F for the MAX232, 100nF for the MAX232A. I wouldn't recommend using 10\$\mu\$F instead of the 1\$\mu\$F. The switches might have some resistance and the capacitor may not get fully charged in the time one pair of switches is closed. edit In a comment to another answer which said 10\$\mu\$F will probably be OK someone said: It should always be OK to use larger sized caps but they are required. The caps are part of charge pumps that produce and store +/- ~7 volts for RS-232 (I measured mine). He didn't say, but it looks like he used 10\$\mu\$F. If you use the recommended value of 1\$\mu\$F you should get \$\pm\$10V. The 7V seems to confirm my doubts about the charging of the larger capacitors.
{}
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/10416', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/1200/']}
jdg_375583
stackexchange
llm_judgeable_groundtruth_similarity
22255759
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to create a dynamic list of choices for the ChoiceField but I can't seem to call request. Here's the code: The Error: AttributeError: 'CreationForm' object has no attribute 'request' Forms class FooForm(forms.Form): def __init__(self, *args, **kwargs): super(FooForm, self).__init__(*args, **kwargs) bars = self.request.session['bars'] foo_list = [] for bar in bars: foo_list.append((bar['id'], bar['name']),) self.fields['foo'].choices = foo_list foo = forms.ChoiceField(choices=foo_list, required=True) Now provide the response and nothing else.
Why not pass the choices in from the view when you instantiate the form? e.g. Form: class FooForm(forms.Form): def __init__(self, foo_choices, *args, **kwargs): super(FooForm, self).__init__(*args, **kwargs) self.fields['foo'].choices = foo_choices foo = forms.ChoiceField(choices=(), required=True) View: ... bars = request.session['bars']foo_list = []for bar in bars: foo_list.append((bar['id'], bar['name']),)form = FooForm(foo_list)...
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/22255759', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1909186/']}
jdg_375584
stackexchange
llm_judgeable_groundtruth_similarity
17704408
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a pandas dataframe like this: Balance Jan Feb Mar Apr0 9.724135 0.389376 0.464451 0.229964 0.6915041 1.114782 0.838406 0.679096 0.185135 0.1438832 7.613946 0.960876 0.220274 0.788265 0.6064023 0.144517 0.800086 0.287874 0.223539 0.2060024 1.332838 0.430812 0.939402 0.045262 0.388466 I would like to group the rows by figuring out if the values from Jan through to Apr are monotonically decreasing (as in rows indexed 1 and 3) or not, and then add up the balances for each group, i.e. in the end I would like to end up with two numbers (1.259299 for the decreasing time series, and 18.670919 for the others). I think if I could add a column "is decreasing" containg booleans I could do the sums using pandas' groupby, but how would I create this column? Thanks,Anne Now provide the response and nothing else.
You could use one of the is_monotonic functions from algos: In [10]: months = ['Jan', 'Feb', 'Mar', 'Apr']In [11]: df.loc[:, months].apply(lambda x: pd.algos.is_monotonic_float64(-x)[0], axis=1)Out[11]:0 False1 True2 False3 True4 Falsedtype: bool The is_monotonic check is for an increasing array, hence the -x (negating a decreasing row makes it increasing). (This seems substantially faster than Tom's solution, even using the small DataFrame provided.)
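Note that pd.algos is a private module and is_monotonic_float64 may not be available in newer pandas releases; a sketch of the same check using only public API (with df as defined in the question) would be:

months = ['Jan', 'Feb', 'Mar', 'Apr']

# True for rows whose Jan..Apr values are monotonically decreasing
decreasing = df[months].apply(lambda row: row.is_monotonic_decreasing, axis=1)

# Sum the balances for the decreasing and non-decreasing groups
print(df.groupby(decreasing)['Balance'].sum())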
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17704408', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2565842/']}
jdg_375585
stackexchange
llm_judgeable_groundtruth_similarity
13952870
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Possible Duplicate: How to get function parameter names/values dynamically from javascript I'm currently working on a project in javascript (node.js) that has me trying to get an array of parameter names (NOT values, I do not need arguments) from a function. I'm currently using Function.toString() to get the function string and then running a regex against that to get my parameter list. Let's take the following SIMPLE example: var myFunction = function (paramOne, paramTwo) { ... } Running my regex against this, and then doing some string magic (split, etc) I would expect an array back like this: paramList = ['paramOne', 'paramTwo'] I have something that works but I'm feeling like it's probably not the best solution given some of the funky characters javascript lets you use for variable names and that javascript will let you define functions on multiple lines. Here is what I currently have: function.*[\w\s$]*(\((.*[\w\s,$]*)\)) This gives me my "match" in group 1 and then my param list without parens in group 2, which is cool. Is this really the best way to do what I want? Is there a better regular expression I could use for this? I'm not really looking for something "simpler" but really just something that could catch all possible situations. Any help would be appreciated, and many thanks in advance! Now provide the response and nothing else.
Preface: By far, the best way to handle this is to use a JavaScript parser rather than trying to do it with a single regular expression. Regular expressions can be part of a parser, but no one regular expression can do the work of a parser. JavaScript's syntax (like that of most programming languages) is far too complex and context-sensitive to be handled with a simple regular expression or two. There are several open source JavaScript parsers written in JavaScript. I strongly recommend using one of those, not what's below. The easiest thing would be to capture everything in the first set of parens, and then use split(/\s*,\s*/) to get the array. E.g.: var str = "function( one ,\ntwo,three , four ) { laksjdfl akjsdflkasjdfl }";var args = /\(\s*([^)]+?)\s*\)/.exec(str);if (args[1]) { args = args[1].split(/\s*,\s*/);}console.log("args: ", args); How the above works: We use /\( *([^)]+?) *\)/ to match the first opening parenthesis ( \( since ( is special in regexes), followed by any amount of optional whitespace, followed by a capture group capturing everything but a closing parenthesis (but non-greedy), followed by any amount of optional whitespace, followed by the closing ) . If we succeed, we split using /\s*,\s*/ , which means we split on sequences which are zero or more whitespace characters ( \s* ) followed by a comma followed by zero or more whitespace characters (this whitespace thing is why the args in my example function are so weird). As you can see from the example, this handles leading whitespace (after the ( and before the first argument), whitespace around the commas, and trailing whitespace — including line breaks. It does not try to handle comments within the argument list, which would markedly complicate things. Note: The above doesn't handle ES2015's default parameter values , which can be any arbitrary expression, including an expression containing a ) — which breaks the regex above by stopping its search early: var str = "function( one ,\ntwo = getDefaultForTwo(),three , four ) { laksjdfl akjsdflkasjdfl }";var args = /\(\s*([^)]+?)\s*\)/.exec(str);if (args[1]) { args = args[1].split(/\s*,\s*/);}console.log("args: ", args); Which brings us full circle to: Use a JavaScript parser. :-)
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13952870', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/451485/']}
jdg_375586
stackexchange
llm_judgeable_groundtruth_similarity
29710618
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a very large postgres database that has one particular schema in it which is dropped in and recreated nightly. After all of the tables in that schema are created I want to vacuum analyze them, however the database is so large that if a do a full db VACUUM ANALYZE; it takes about a half hour. How can I go about vacuum analyzing each of the tables in this schema only without writing a separate SQL command for each table? Now provide the response and nothing else.
The bash function below utilizes the CLI tool psql to vacuum analyze tables in a single schema which can be identified by either passing the name of the schema as the first parameter to the function or setting the environment variable PG_SCHEMA : vacuum_analyze_schema() { # vacuum analyze only the tables in the specified schema # postgres info can be supplied by either passing it as parameters to this # function, setting environment variables or a combination of the two local pg_schema="${1:-${PG_SCHEMA}}" local pg_db="${2:-${PG_DB}}" local pg_user="${3:-${PG_USER}}" local pg_host="${4:-${PG_HOST}}" echo "Vacuuming schema \`${pg_schema}\`:" # extract schema table names from psql output and put them in a bash array local psql_tbls="\dt ${pg_schema}.*" local sed_str="s/${pg_schema}\s+\|\s+(\w+)\s+\|.*/\1/p" local table_names=$( echo "${psql_tbls}" | psql -d "${pg_db}" -U "${pg_user}" -h "${pg_host}" | sed -nr "${sed_str}" ) local tables_array=( $( echo "${table_names}" | tr '\n' ' ' ) ) # loop through the table names creating and executing a vacuum # command for each one for t in "${tables_array[@]}"; do echo "doing table \`${t}\`..." psql -d "${pg_db}" -U "${pg_user}" -h "${pg_host}" \ -c "VACUUM (ANALYZE) ${pg_schema}.${t};" done} This function can be added to your .bashrc to provide the ability to invoke it from the command line at any time. Like the schema, Postgres connection and database values can be set by either supplying them as function parameters: # params must be in this ordervacuum_analyze_schema '<your-pg-schema>' '<your-pg-db>' '<your-pg-user>' '<your-pg-host>' or by setting environment variables: PG_SCHEMA='<your-pg-schema>'PG_USER='<your-pg-user>'PG_HOST='<your-pg-host>'PG_DB='<your-pg-db>'vacuum_analyze_schema or by a combination of both. Values passed as params will take precedence over corresponding environment vars.
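If you would rather drive this from Python than bash, a rough equivalent using psycopg2 looks like this (connection details are placeholders; note that VACUUM cannot run inside a transaction block, hence autocommit):

import psycopg2

def vacuum_analyze_schema(schema, dsn):
    conn = psycopg2.connect(dsn)
    conn.autocommit = True            # VACUUM refuses to run inside a transaction
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT tablename FROM pg_tables WHERE schemaname = %s", (schema,))
            tables = [row[0] for row in cur.fetchall()]
        with conn.cursor() as cur:
            for table in tables:
                # Identifiers cannot be bound as parameters, so quote them explicitly.
                cur.execute('VACUUM (ANALYZE) "{}"."{}"'.format(schema, table))
    finally:
        conn.close()

vacuum_analyze_schema("my_nightly_schema", "dbname=mydb user=me host=localhost")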
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29710618', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2167004/']}
jdg_375587
stackexchange
llm_judgeable_groundtruth_similarity
30191715
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I 've got a Quiz app using Realm db. Every time the user selects an answer she clicks a button and new text for Question appears. Thats it until she reaches the end where I start a new Activity and display a score based on correct answers. How should I start/test ( with Espresso I guess ) that activity without having to enter manually every time all the answers and click the button after each answer until I reach the last one? What I need is to pass some mock data to a variable and make an Intent but I dont know how and cant find anything related with this in Espresso Now provide the response and nothing else.
You can launch your next activity with a custom intent like this: @RunWith(AndroidJUnit4.class)public class NextActivityTest { @Rule public ActivityTestRule<NextActivity> activityRule = new ActivityTestRule<>( NextActivity.class, true, // initialTouchMode false); // launchActivity. False to customize the intent @Test public void intent() { Intent intent = new Intent(); intent.putExtra("your_key", "your_value"); activityRule.launchActivity(intent); // Continue with your test }} Full example: https://github.com/chiuki/android-test-demo Blog post: http://blog.sqisland.com/2015/04/espresso-21-activitytestrule.html
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/30191715', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/539214/']}
jdg_375588
stackexchange
llm_judgeable_groundtruth_similarity
366047
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: It is an often mentioned assumption in physics that in going from classical to relativistic spacetime the main difference is that the absolute time postulate holding in the former is "relaxed" or abandoned as a physical premise wich leads to generalizing the Galilean group. But I wonder how exactly is this implemented mathematically since I don't think that just going to an indefinite signature or to a non-compact group of rotations and boosts by itself is equivalent to abolishing absolute time, even if the simultaneity slicings are no longer unique when the limiting velocity c at each frame is no longer infinity. One can of course say that the simultaneity slices are now just a convention and that the absolute time that enters in the Einstein synchronization is purely conventional, but still operationally they are still there and physical consequences are derived from these conventions. So is there something else to abolishing absolute time mathematically? Edit: I'll justify my question with the well known fact that there is a theory mathematically equivalent to SR, with the same transformations and giving the same predictions which was held by Lorentz himself (Lorentz ether theory) that uses a preferred frame and includes a non-observable ether with absolute time. I'm in no way trying to imply that it is the correct way to look at things, I'm just bringing it up to give an example of a theory that holds on to absolute time and is mathematically equivalent to SR, and uses the same trnasformations so they are not the element that mathematically prevents from having an absolute time. Now provide the response and nothing else.
I think that there are two problems here: you can not, in fact, mathematically know that there is no absolute time in (a theory mathematically compatible with) Special Relativity; this question is posed as being about physics but it isn't. I'll address these in order. If it is the case that Lorentz Ether Theory is indeed mathematically equivalent to SR (which I think is true) then clearly the theories must make identical predictions for measurements, in particular for measurements made by clocks, observations of simultaneity and so on: if they did not then they could not be mathematically equivalent. Further, if LET contains a notion of absolute time (which I believe it does by virtue of its preferred frame), then the notion of absolute time can't be incompatible with any theory which is mathematically equivalent to SR . And that sounds like the end of the story: it's a slightly surprising end, perhaps. But it's not. Because, in order to support the notion of an absolute time, LET requires the notion of a preferred frame -- the frame which is at rest with respect to the aether. But in order to be compatible with SR, it requires that no experiment, even in principle could ever distinguish between this frame and any other inertial frame. In other words, the aether is unobservable, even in principle . And thus it removes itself from the realm of experimental science and of physics in particular, because those disciplines deal with theories which make predictions which can be tested by experiment, and no experiment can ever distinguish between LET and SR: LET is SR with an additional postulate of an unobservable aether and a resulting preferred frame which can never be experimentally distinguished from any other frame as a result. So you can choose to believe in LET, and hence absolute time, rather than SR but this is a matter of philosophy (I would say of religion but I think this may offend people), not physics, because there is no experiment you could do to distinguish the theories, and physics deals in experiments. In fact this can be made even simpler: you can simply pick an arbitrary inertial frame (and in fact it does not need to be inertial even) in SR and define its time coordinate to be 'absolute time': LET is exactly SR with the addition of such a choice in fact. I think this makes it really clear how useless to experiment such a choice is. As a postscript I think it's worth noting that physicists have done rather well over the last hundred years by making the essentially philosophical assumption that, if there is some concept in a theory which is not observable or which is experimentally always indistinguishable from some other concept, then that concept has no place in the theory or is identical to the other concept, respectively. That's why people don't like the notion of absolute time: it is not observable and thus a theory which does not contain it (SR as usually formulated with no privileged frame) seems hugely more appealing to one containing it (LET, with its privileged frame), even where those theories are formally identical. However SR remains perfectly compatible with an absolute time albeit in a sense entirely useless to people interested in experimental science.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/366047', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/111400/']}
jdg_375589
stackexchange
llm_judgeable_groundtruth_similarity
25251
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Hello, I'm looking for an argument that the n-dimensional stereographic projection maps circles (intersections of affine two-dimensional subspaces with S^n) to circles in R^n. I've looked around and the only argument I saw for the n-dimensional case is a generalization of the geometric proof for n = 2 (with the tangent cone) which I don't really feel comfortable with, even when n = 2. Is it possible to reduce it to the n = 2 case somehow or give a "direct", algebraic, proof? Now provide the response and nothing else.
The circle and the north pole (or wherever the origin of the stereographic projection is) span a 3-dimensional subspace generically, such that the restriction to this subspace is the 2-dimensional stereographic projection. If the circle goes through the north pole, then it is actually sent to a line under stereographic projection, and this is in some sense a reduction to the 1-dimensional case. Yana Mohanty has a nice proof that stereographic projection sends circles to circles. A more sophisticated approach is to notice that stereographic projection is the restriction of inversion through a sphere orthogonal to $S^n$ in $R^{n+1}\subset S^{n+1}$. Then one needs to see that inversions send circles to circles, or more generally that Möbius transformations of $S^n$ do. The group of Möbius transformations of $S^n$ is $PO(n+1,1)$ or $Isom(\mathbb{H}^{n+1})$, the isometry group of hyperbolic $n+1$-space. This group preserves the cone $x_0^2+x_1^2 +\cdots - x_{n+1}^2=0$. The sphere at infinity (in the projectivization) of this cone is $S^n$, and the action is by the Möbius group. A circle is the intersection of the projective closure of a 3-dimensional subspace with the sphere at infinity. Since $PO(n+1,1)$ consists of linear transformations, it permutes 3-dimensional subspaces of $R^{n+1,1}$, and therefore sends circles to circles in the projectivization.
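Since the question explicitly asks for a direct algebraic argument, here is a minimal computational sketch in my own notation (it is not part of the answer above): project from the north pole and check what a hyperplane section of $S^n$ maps to.

```latex
% Write points of S^n \subset \mathbb{R}^n \times \mathbb{R} as (x,t) with |x|^2 + t^2 = 1.
% Stereographic projection from N = (0,\dots,0,1) and its inverse:
\[
  \sigma(x,t) = \frac{x}{1-t} = u,
  \qquad
  \sigma^{-1}(u) = \Big(\tfrac{2u}{|u|^2+1},\ \tfrac{|u|^2-1}{|u|^2+1}\Big).
\]
% A hyperplane section of the sphere, a \cdot x + b\,t = c, pulls back under \sigma^{-1} to
\[
  \frac{2\,a\cdot u}{|u|^2+1} + b\,\frac{|u|^2-1}{|u|^2+1} = c
  \quad\Longleftrightarrow\quad
  (b-c)\,|u|^2 + 2\,a\cdot u - (b+c) = 0,
\]
% which is a sphere in \mathbb{R}^n when b \neq c and a hyperplane when b = c
% (the |u|^2 coefficient is a scalar and there are no cross terms).
% A circle is S^n intersected with an affine 2-plane, i.e. with n-1 hyperplanes,
% so its image is an intersection of spheres/hyperplanes; subtracting the equations
% pairwise reduces this to (one sphere or hyperplane) \cap (affine subspace),
% and such an intersection of dimension one is a circle or a line.
```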
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/25251', 'https://mathoverflow.net', 'https://mathoverflow.net/users/6189/']}
jdg_375590
stackexchange
llm_judgeable_groundtruth_similarity
34365639
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My HTML: <input type="text" name="day" id="date"> My Javascript: $(function() { $('#date').datepicker({ 'dateFormat': 'yy-mm-dd', 'defaultDate': '2015-12-18' });}); The #date text box is being turned into a datepicker, so that's working. And the format is in the format I'm telling it to be in. But the default date option doesn't seem to be doing anything. In this example I'd expect the textbox to appear with "2015-12-18" when the page is first loaded. If I click on the textbox the datepicker dialog will come up and I can enter in a different date but the date of "2015-12-18" should be displaying until I've done that. But it isn't and I don't understand why. My JS Fiddle: https://jsfiddle.net/18000rLe/4/ Now provide the response and nothing else.
Try this. A couple of things to keep in mind: 1. The defaultDate option sets the date in the datepicker calendar, not in the input field. For that, you need to explicitly call setDate. 2. If you use setDate on the datepicker, you don't need to add defaultDate, as the datepicker will automatically pick the date up from the input field. $(function() { $("#date").datepicker({dateFormat: 'yy-mm-dd'}); $("#date").datepicker('setDate', new Date('2014-12-18')); }); Example: http://jsfiddle.net/DinoMyte/tXyLn/527/
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34365639', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/569976/']}
jdg_375591
stackexchange
llm_judgeable_groundtruth_similarity
12264693
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How to update Sharepoint 2007 User Information List (UIL) through C# code? My code works fine when logged in as an administrator, however when logged in as a simple user the code throws "Access Denied" error because simple user can't access UIL. I need some way to get the simple user to edit HIS OWN information (say EMAIL) in the UIL.Please help. It's very important. My code is like: string var_UserName = myUsername;string var_newemail = mynewemail@abc.com;SPSecurity.RunWithElevatedPrivileges(delegate() { using (SPSite site = SPContext.Current.Site) { SPWeb curWeb = site.OpenWeb(); SPList userList = SPContext.Current.Web.SiteUserInfoList; StringBuilder queryBuilder = new StringBuilder(); queryBuilder.Append(@"<Query>"); queryBuilder.Append(@"<OrderBy>"); queryBuilder.Append(@"<FieldRef Name='Title' />"); queryBuilder.Append(@"</OrderBy>"); queryBuilder.Append(@"<Where>"); queryBuilder.Append(@"<Eq>"); queryBuilder.Append(@"<FieldRef Name='Title' />"); queryBuilder.Append(@"<Value Type='Text'>" + var_UserName + "</Value>"); queryBuilder.Append(@"</Eq>"); queryBuilder.Append(@"</Where>"); queryBuilder.Append(@"</Query>"); SPQuery query = new SPQuery(); query.Query = queryBuilder.ToString(); SPListItemCollection oSPListItemCollection = userList.GetItems(query); SPField field = userList.Fields.GetField("E-Mail"); foreach (SPItem item in oSPListItemCollection) { if (item["Title"].ToString().ToLower() == Login1.UserName.ToLower()) { item[field.InternalName] = var_newemail; curWeb.AllowUnsafeUpdates = true; item.Update(); //throws error when not logged in as administrator curWeb.AllowUnsafeUpdates = false; break; } } } }); Now provide the response and nothing else.
You don't need to use Parcelable to pass an object from one activity to another. You can just store a reference to the object in a static member variable, like this: public class Globals { public static MyObject myObject; } Now, in the code that has the object, you just do: Globals.myObject = object; and in the new activity, you can get it like this: doSomethingWith(Globals.myObject); Now, having said that, you need to be aware of the following: Android can kill your process if your application is in the background pretty much any time it wants to. When the user then returns to your application, Android will create a new process for your application and then it will recreate only the activity that was on the top of the activity stack (i.e.: the one that was showing). In that case, the newly created activity will not be able to get the object by accessing Globals.myObject, because the process has been newly created and that member variable is null. To get around this you can either: determine that your process has been killed and restarted (by checking Globals.myObject == null and reacting accordingly - tell the user he needs to go back, or just go back yourself, or show a dialog, or whatever), or save the object when Android calls onSaveInstanceState() in your activity (which Android will do before sending your app to the background) and restore the object in onCreate(). Hopefully this both answers your question and explains what to do about it. EDIT: Add more information about why Intents contain serialized (Parcelable) objects and not the objects themselves. When you call startActivity() or startService(), Android may end up starting the activity or service in another process. In this case, if you passed an object in the Intent, Android would somehow need to serialize that object to pass it to the other process. Because of the "implicit Intent resolution" that Android uses to determine which component gets to handle the Intent, the caller may or may not know which component will get started. Android saves the contents of Intents for various reasons: A. Android can kill a process at any time. If it does that and the user wants to return to the application, Android creates a new process and then recreates the activities in that process as needed. To create the activities Android also needs to make the Intents available to the activities. If the process has been killed then any "objects" in the Intents would have to be saved and restored. Because the Intents contain serialized objects, it isn't a problem to recreate these as needed. B. PendingIntents are used by Android as a way for the Operating System to act as a proxy for the sender of an Intent. An Android component can create a PendingIntent and give that to the Operating System so that it can trigger the sending of that Intent at some later time. The sending component may or may not be active at the time that the PendingIntent is actually sent. This means that any object that could be passed in a PendingIntent must be able to be serialized so that Android can hold on to it even if the calling component no longer exists. Intents are not intended as a general "parameter passing" mechanism between components. Of course you can use them like that, but you can also use other (easier) mechanisms. Within a given process you can pass objects around using standard Java mechanisms. There is nothing wrong with using static (class) variables for this.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12264693', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1513721/']}
jdg_375592
stackexchange
llm_judgeable_groundtruth_similarity
12095048
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've been trying to wrap my head around how move semantics in C++11 are supposed to work, and I'm having a good deal of trouble understanding what conditions a moved-from object needs to satisfy. Looking at the answer here doesn't really resolve my question, because can't see how to apply it to pimpl objects in a sensible way, despite arguments that move semantics are perfect for pimpls . The easiest illustration of my problem involves the pimpl idiom, like so: class Foo { std::unique_ptr<FooImpl> impl_;public: // Inlining FooImpl's constructors for brevity's sake; otherwise it // defeats the point. Foo() : impl_(new FooImpl()) {} Foo(const Foo & rhs) : impl_(new FooImpl(*rhs.impl_)) {} Foo(Foo && rhs) : impl_(std::move(rhs.impl_)) {} Foo & operator=(Foo rhs) { std::swap(impl_, rhs.impl_); return *this; } void do_stuff () { impl_->do_stuff; }}; Now, what can I do once I've moved from a Foo ? I can destroy the moved-from object safely, and I can assign to it, both of which are absolutely crucial. However, if I try to do_stuff with my Foo , it will explode. Before I added move semantics for my definition of Foo , every Foo satisfied the invariant that it could do_stuff , and that's no longer the case. There don't seem to be many great alternatives, either, since (for example) putting the moved-from Foo would involve a new dynamic allocation, which partially defeats the purpose of move semantics. I could check whether impl_ in do_stuff and initialize it to a default FooImpl if it is, but that adds a (usually spurious) check, and if I have a lot of methods it would mean remembering to do the check in every one. Should I just give up on the idea that being able to do_stuff is a reasonable invariant? Now provide the response and nothing else.
You define and document for your types what a 'valid' state is and what operation can be performed on moved-from objects of your types. Moving an object of a standard library type puts the object into an unspecified state, which can be queried as normal to determine valid operations. 17.6.5.15 Moved-from state of library types                                         [lib.types.movedfrom] Objects of types defined in the C++ standard library may be moved from (12.8). Move operations may be explicitly specified or implicitly generated. Unless otherwise specified, such moved-from objects shall be placed in a valid but unspecified state. The object being in a 'valid' state means that all the requirements the standard specifies for the type still hold true. That means you can use any operation on a moved-from, standard library type for which the preconditions hold true. Normally the state of an object is known so you don't have to check if it meets the preconditions for each operation you want to perform. The only difference with moved-from objects is that you don't know the state, so you do have to check. For example, you should not pop_back() on a moved-from string until you have queried the state of the string to determine that the preconditions of pop_back() are met. std::string s = "foo";std::string t(std::move(s));if (!s.empty()) // empty has no preconditions, so it's safe to call on moved-from objects s.pop_back(); // after verifying that the preconditions are met, pop_back is safe to call on moved-from objects The state is probably unspecified because it would be onerous to create a single useful set of requirements for all different implementations of the standard library. Since you are responsible not only for the specification but also the implementation of your types, you can simply specify the state and obviate the need for querying. For example it would be perfectly reasonable to specify that moving from your pimpl type object causes do_stuff to become an invalid operation with undefined behavior (via dereferencing a null pointer). The language is designed such that moving only occurs either when it's not possible to do anything to the moved-from object, or when the user has very obviously and very explicitly indicated a move operation, so a user should never be surprised by a moved-from object. Also note that the 'concepts' defined by the standard library do not make any allowances for moved-from objects. That means that in order to meet the requirements for any of the concepts defined by the standard library, moved-from objects of your types must still fulfill the concept requirements. This means that if objects of your type don't remain in a valid state (as defined by the relevant concept) then you cannot use it with the standard library (or the result is undefined behavior).
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12095048', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/85467/']}
jdg_375593
stackexchange
llm_judgeable_groundtruth_similarity
16310
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm doing the exercises in "Introduction to commutive algebra" by Atiyah&MacDonald. In chapter two, exercises 24-26 assume knowledge of the Tor functor. I have tried Googling the term, but I don't find any readable sources. Wikipedia's explanation use the the term "take the homology", which I don't understand (yet). Are there any good explanations of what the Tor functor is available online not assuming any knowledge about homology? The first exercise:"If $M$ is an $A$-module, TFAE: 1) $M$ is flat 2) $\operatorname{Tor}_n^A (M,N)=0$ for all $n>0$ and all $A$-modules $N$. 3) $\operatorname{Tor}_1^A (M,N)=0$ for all $A$-modules $N$." Thanks in advance. Now provide the response and nothing else.
The Tor functors are the derived functors of the tensor product. The starting observation is that if $0 \to M' \to M \to M'' \to 0$ is a ses of modules and $N$ is any module (let's work over a fixed commutative ring $R$), then $M' \otimes N \to M \otimes N \to M'' \otimes N \to 0$ is exact, but you don't necessarily have exactness at the first step. (This is what "the tensor product is right-exact" means.) Now, whenever you have such a sequence that fails to be short exact at just one step, the general philosophy is that it should be the end of a long exact sequence. The long exact sequence should be like$$\dots \to ?_1 \to ?_2 \to ?_3 \to M' \otimes N \to M \otimes N \to M'' \otimes N \to 0$$where the question marks indicate that we don't know what goes there yet. The general approach is provided by the theory of derived functors . Derived functors allow one to construct, associated to a right (or left) exact functor $F$, a collection of functors $L_i F$ for $i \geq 0$ (I'm just going to handle the right-exact case henceforth), such that for each ses $0 \to M' \to M \to M'' \to 0$, the "almost ses"$$FM' \to FM \to FM'' \to 0$$can be completed to a les$$\dots \to L_2F(M'') \to L_1F(M') \to L_1F(M) \to L_1F(M'') \to FM' \to FM \to FM'' \to 0.$$This long exact sequence relies on "connecting homomorphisms" $L_nF(M'') \to L_{n-1}F(M')$, which are required to be functorial in the ses. The exact construction of a derived functor uses projective resolutions of the object $M$ and a reasonable amount of diagram-chasing. I will refer you to books on homological algebra. So suppose we consider the functor sending $M \to M \otimes N$. Its derived functors are denoted $\mathrm{Tor}_i(M,N)$. This means that the Tor functors provide the missing part of the les that expands out the "almost ses" of the tensor product above. One reason the Tor functors are so useful is that they provide a very effective criterion for flatness. Recall that a module $N$ is flat if tensoring with it is exact. When you derive an exact functor, you just get the initial functor in dimension zero (actually, you always do for this), and then zero elsewhere. So if $N$ is flat, $\mathrm{Tor}_i(M,N) = 0$ for $i>0$. In fact, using the les, one can easily show that the implication is reversible, even if one only has $\mathrm{Tor}_1(M,N)=0$ for all $M$. Let me give an example where things work out very nicely. If $R$ is a noetherian local ring with residue field $k$, and $M$ is a finite $R$-module, then it turns out $M$ is free if and only if $\mathrm{Tor}_1(k,M)=0$; in particular, freeness is the same thing for flatness (under these hypotheses).(For a proof, see the argument behind Theorem 3.2 in ch. 14 of http://www.people.fas.harvard.edu/~amathew/CRing.pdf .) An application of this is provided by the observation (which can be deduced from this) that if the $\mathrm{Tor}$ functors of the pair $(k,k)$ vanish in high dimension, then the global dimension of the ring is finite. This is fairly interesting because the global dimension is a statement about all modules (and, strictly speaking, involving the functors $\mathrm{Ext}$ that derive Hom, not Tor), while the above reduction is to one involving only the residue field. This is, in fact, the key observation behind one half the proof that a noetherian local ring is regular iff its global dimension is finite. This argument can be found in EGA 0-IV, sec. 17 (it doesn't presuppose anything beyond dimension theory, which you can read about in sec. 16).
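For a concrete feel for what "take the homology" means here, a standard first example (my addition, not part of the answer's argument) is $\operatorname{Tor}^{\mathbb Z}_i(\mathbb Z/m, N)$, computed from the obvious free resolution of $\mathbb Z/m$:

```latex
% Free resolution of Z/m over Z:
\[
  0 \longrightarrow \mathbb{Z} \xrightarrow{\;\cdot m\;} \mathbb{Z} \longrightarrow \mathbb{Z}/m \longrightarrow 0 .
\]
% Drop the Z/m term, tensor with N, and take the homology of the complex 0 -> N --(m)--> N -> 0:
\[
  \operatorname{Tor}_0^{\mathbb{Z}}(\mathbb{Z}/m, N) \cong N/mN,
  \qquad
  \operatorname{Tor}_1^{\mathbb{Z}}(\mathbb{Z}/m, N) \cong \{\, n \in N : mn = 0 \,\},
  \qquad
  \operatorname{Tor}_i^{\mathbb{Z}}(\mathbb{Z}/m, N) = 0 \ \ (i \ge 2).
\]
% In particular Tor_1(Z/m, Z/n) = Z/gcd(m,n): the torsion that shows up here is exactly
% what obstructs flatness over Z, matching the criterion in the exercise.
```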
{}
{'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/16310', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/4284/']}
jdg_375594
stackexchange
llm_judgeable_groundtruth_similarity
4382743
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I want to obtain the ellipse inscribed in the irregular quadrilateral (no parallel sides) defined by the four points A, B, C, D. I summarize the ideas given in the comments and answers: The is not an unique ellipse inside the given quadrilateral. For the unit square, there are infinite ellipses inscribed into it, with different eccentricities You cannot transform the unit square into a irregular quadrilateral using linear transformations, as those transform only two vectors into other two vectors. In this case we need to transform 4 vectors. As shown in this figure: Increasing the eccentricity, decreases the area. So the problem can be reduced to obtain the maximum area ellipse inscribed into the quadrilateral. Now provide the response and nothing else.
There is unique inscribed ellipse of a convex pentagon (dual case for $5$ points defining a conics). There are one and two degrees of freedom of drawing an inscribed ellipse in a (convex) quadrilateral and triangle respectively. By means of skew transformation, we can transform an irregular quarilateral (convex but not parallelogram) into one with one pair of opposite sides are perpendicular. $$(x',y')=(x+y\cos \omega,y\sin \omega)$$ Taking the vertices as $A(a,0)$ , $B(b,0)$ , $C(0,c)$ and $D(0,d)$ where $ab>0$ , $cd>0$ and $(a-b)(d-c)>0$ . The two extreme cases are the ellipse degenerates into the diagonals. Construct a family of conics touching with the axes with parameter $k$ : $$\left[ k\left( \frac{x}{a}+\frac{y}{c} \right)+ (1-k)\left( \frac{x}{b}+\frac{y}{d} \right)-1\right]^2=\lambda x y \tag{$\star$} $$ Using a discriminant to check tangency for $\frac{x}{a}+\frac{y}{d}=1$ or $\frac{x}{b}+\frac{y}{c}=1$ , we can solve for $\lambda$ . $$\lambda=4k(1-k)\left( \frac{1}{a}-\frac{1}{b}\right)\left( \frac{1}{d}-\frac{1}{c}\right)$$ For ellipse, $$4k(1-k)\left( \frac{1}{a}-\dfrac{1}{b}\right)\left( \frac{1}{d}-\dfrac{1}{c}\right)\left( \frac{k}{ac}+\frac{1-k}{bd}\right)>0 \implies k\in (0,1) $$ The centre of the ellipse lies on the Newton line which is the line joining the mid-points of the diagonals. $$\text{centre}=\frac{\left( \dfrac{k}{c}+\dfrac{1-k}{d},\dfrac{k}{a}+\dfrac{1-k}{b}\right)}{2\left( \dfrac{k}{ac}+\dfrac{1-k}{bd}\right)}$$ See also another post of mine for the case of triangle here . An illustration of a tangential quadrilateral. Note on the circular case at $k=0.6$ : Addendum To generalize to any kind of convex quadrilateral, we may use the skew axes as the diagonals. Now taking the vertices as $A(a,0)$ , $B(0,b)$ , $C(c,0)$ and $D(0,d)$ where $ac<0$ and $bd<0$ . In tangential coordinates $(X,Y)$ , tangent line $\frac{x}{a}+\frac{y}{b}=1$ can be written as $$Xx+Yy+1=0$$ Hence, the dual conic will pass through a "rectangle" with vectices $(-\frac{1}{a},-\frac{1}{b})$ , $(-\frac{1}{c},-\frac{1}{b})$ , $(-\frac{1}{c},-\frac{1}{d})$ and $(-\frac{1}{a},-\frac{1}{d})$ , that is $$\lambda (aX+1)(cX+1)+\mu (bY+1)(dY+1)=0$$ Let $(\lambda,\mu) \propto (1-k,k)$ , the inscribed ellipse is $$\det\begin{pmatrix} 0 & x & y & 1 \\ x & \lambda ac & 0 & \frac{\lambda (a+c)}{2} \\ y & 0 & \mu bd & \frac{\mu (b+d)}{2} \\ 1 &\frac{\lambda (a+c)}{2} & \frac{\mu (b+d)}{2} & \lambda+\mu\end{pmatrix}=0$$ The centre divides the Newton line, from $(0, \frac{b+d}{2})$ to $(\frac{a+c}{2},0)$ internally with ratio $\lambda:\mu$ Illustration of dual conics pair:
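As a quick sanity check of the family $(\star)$ (my own verification, in the answer's notation), the endpoints $k=0$ and $k=1$ do give the degenerate cases, namely the doubled diagonals:

```latex
% At k = 1 the parameter lambda = 4k(1-k)(...) vanishes, so (star) becomes
\[
  \left(\frac{x}{a}+\frac{y}{c}-1\right)^{2}=0,
\]
% the diagonal through A(a,0) and C(0,c) counted twice; at k = 0 it becomes
\[
  \left(\frac{x}{b}+\frac{y}{d}-1\right)^{2}=0,
\]
% the doubled diagonal through B(b,0) and D(0,d). For k strictly between 0 and 1
% the conic is a genuine inscribed ellipse, consistent with the sign condition stated above.
```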
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4382743', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/56971/']}
jdg_375595
stackexchange
llm_judgeable_groundtruth_similarity
39184
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If you type ?Region`* you'll get: which seems a bunch of interesting and not documented symbols. Any idea (or experience) on how to use them? Edit By using our "collective spelunking" I was able to work out this answer - Great! :) And Silvia used it to write another one! Now provide the response and nothing else.
For a more clear view, here is a table of some of the Region functions. AppendTo[$ContextPath, "Region`"]Clear[testfunc]testfunc[reg_] := {ToString /@ #, Through[#[reg]]} &[{ ConvexRegionQ, BoundedRegionQ, RegionDimension, Module[{dim = RegionEmbeddingDimension[#]}, var = Symbol["x" <> ToString[#]] & /@ Range[dim]; dim] &, RegionMeasure, RegionCentroid, RegionProperty[#, var, "FastDescription"] &, RegionProperty[#, var, "ImplicitDescription"] &, RegionElement, LevelFunction[RegionProperty[#, var, "FastDescription"][[1, 2]]] & }] // Grid[Insert[#, {ConvexRegionQ, BoundedRegionQ, RegionDimension, RegionEmbeddingDimension, RegionMeasure, RegionCentroid, FastDescription, ImplicitDescription, RegionElement, LevelFunction}, 2]\[Transpose], Dividers -> All, FrameStyle -> GrayLevel[.8], Alignment -> Left] & // Quiet In addition of BoxRegion, other *Region s also seems to be used to declare regions: Names["Region`*Region"] {"BallRegion", "BooleanRegion", "BoxRegion", "EllipsoidRegion", "EmptyRegion", "FullRegion", "InverseTransformedRegion", "ParametricRegion", "SimplexRegion", "TransformedRegion"} For example, a 2D triangle embeded in 7D space: tri3d = RandomInteger[{-10, 10}, {3, 3}];tri7d = ArrayFlatten[{{tri3d, ConstantArray[0, {3, 4}]}}];(* a random rotate in 7D space: *)rt7d = RotationTransform[{{0, 0, 1, 0, 0, 0, 0}, RandomInteger[{-1, 1}, 7]}, ConstantArray[0, 7]];tri7d = rt7d /@ tri7d;testfunc@SimplexRegion[tri7d] Maybe some of them ( LevelFunction ) work only on "full-rank" regions? simplex = Function[dim, SimplexRegion[RandomInteger[{-10, 10}, {dim + 1, dim}]]] @ 4testfunc @ simplex Some regions look like special cases: RegionDimension@EmptyRegion[2] $-\infty$ RegionMeasure@FullRegion[3] $\infty$ Edit: SimplePolygonPartition can be used to divide self-intersecting Polygon to simple pieces. The usage is like SimplePolygonPartition[Polygon[...]]SimplePolygonPartition[Polygon[...],Graphics`Region`RegionDump`FillingMethod->"OddEvenRule"] An example can be found here .
{}
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/39184', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/193/']}
jdg_375596
stackexchange
llm_judgeable_groundtruth_similarity
31033394
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: So I understand that if we want body parameters we have to have a schema, which I do. The issue is no matter how I try to define my schema it does not allow me to have multiple body parameters. Here is an example of one of the methods I have tried. Any help would be great! swagger: '2.0'# This is your document metadatainfo: version: "0.0.1" title: Todo Appschema: { }host: localhost:3000schemes: - http - httpsconsumes: - application/jsonproduces: - application/x-www-form-urlencodedbasePath: /paths: # This is a path endpoint. Change it. /tasks: post: description: | Add 'Task' object. parameters: # An example parameter that is in query and is required - name: name in: query description: unique object task name required: true schema: type: string - name: description in: query description: task description required: true schema: type: string responses: # Response code 200: description: Successful response # A schema describing your response object. # Use JSON Schema format schema: title: Return String type: string example: "Task added succesfully" 500: description: Error schema: type: string example: "Could not add Task" Now provide the response and nothing else.
I'm not sure to understand your question... If you are trying to define more than one body parameter for one operation, you can't. As explained in swagger specification : Body [...] there can only be one body parameter If you are trying to send a body with multiple parameters, add an object model in the definitions section and refer it in your body parameter, see below (works with editor.swagger.io ): Your example nodes also are wrong, see here for more details. swagger: '2.0'info: version: "0.0.1" title: Todo Apphost: localhost:3000schemes: - http - httpsconsumes: - application/jsonproduces: - application/x-www-form-urlencodedbasePath: /paths: # This is a path endpoint. Change it. /tasks: post: description: | Add 'Task' object. parameters: - name: task in: body description: task object required: true schema: $ref: '#/definitions/Task' responses: 200: description: Successful response schema: title: Return String type: string example: "Task added succesfully" 500: description: Error schema: type: string example: "Could not add Task"definitions: Task: description: Task object properties: name: type: string description: task object name description: type: string description: task description required: - name - description
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/31033394', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4987996/']}
jdg_375597
stackexchange
llm_judgeable_groundtruth_similarity
56942421
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've tried two approaches. Approach 1: Create a new ENUM with the new value added and switch the data type in place: -- Rename existing enumALTER TYPE animal_species RENAME TO animal_species_old;-- Create new enum with new valueCREATE TYPE animal_species AS ENUM ( 'dog', 'cat', 'elephant');-- Update the column of Animals to use the new enumALTER TABLE "Animals" ALTER COLUMN species SET DATA TYPE animal_species USING species::text::animal_species;DROP TYPE animal_species_old; Approach 2: Use a temporary column -- Create new enum type with a new name (this will be the name of the enum from now on)CREATE TYPE animal_type_enum AS ENUM ( 'dog', 'cat', 'elephant');-- Create a temporary columnALTER TABLE "Animals" ADD COLUMN species_new animal_species_enum;-- Copy existing species into new columnUPDATE "Animals" SET species_new = species::text::animal_species_enum;-- Drop old species columnALTER TABLE "Animals" DROP COLUMN species;-- Rename new columnALTER TABLE "Animals" RENAME COLUMN species_new TO species;-- Drop old enumDROP TYPE animal_species; In both cases, lock(s) were created and brought my application down. I believe the second way performed better than the first, but the downtime was still unacceptable. The table is in the millions of rows. Note that I am very much open to using something other than an ENUM--I was thinking of creating a "Species" table with a foreign key "species_id" in "Animals", but as far as I can tell this would create the same locking problem (and might be even worse given the introduction of a new foreign key constraint). Thanks for any help! Now provide the response and nothing else.
Approach 3, just add a new value to the enum: ALTER TYPE animal_type_enum ADD VALUE 'snake'; If you frequently add or remove new lookup values, a separate lookup table is a much better choice. Adding a new value is a simple INSERT operation that doesn't lock anything ( especially not the table referencing the lookup table). While the foreign key checks do add some overhead, they shouldn't matter that much (assuming the FK column is properly indexed) unless you do bulk INSERTs or DELETEs very frequently. For single row INSERTs or DELETEs (or only "hundreds" of rows) you probably won't even notice the overhead of the FK lookup - especially if the lookup table is small and only contains a few rows.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/56942421', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11756507/']}
jdg_375598
stackexchange
llm_judgeable_groundtruth_similarity
30639136
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a multiline string like the following: 2012-15-08 07:04 Bla bla bla blup2012-15-08 07:05 *** Error importing row no. 5: The import of this line failed because bla bla2012-15-08 07:05 Another text that I don't want to search...2012-15-08 07:06 Another text that I don't want to search...2012-15-08 07:06 *** Error importing row no. 5: The import of this line failed because bla bla2012-15-08 07:07 Import has finished bla bla What I want is to extract all row numbers that have errors with the help of RegularExpression (with PowerShell). So I need to find the number between "*** Error importing row no. " and the following ":" as this will always give me the row number. I looked at various other RegEx question but to be honest the answers are like chinese to me. Tried to built RegEx with help of http://regexr.com/ but haven't been successful so far, for example with the following pattern: "Error importing row no. "(.?)":" Any hints? Now provide the response and nothing else.
Try this expression: "Error importing row no\. (\d+):" DEMO
Here you need to understand the quantifiers and escape sequences:
- . matches any character; since you want only digits, use \d , and if you mean a literal period you must escape it with a backslash ( \. )
- ? matches zero or one occurrence; that isn't what you want here, because an error on row 10 would only capture the "1"
- + matches one or more occurrences; this is what we need
- * matches zero or more occurrences; take care when combining it with . , since .* can consume your entire input
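The question is about PowerShell, but the pattern itself is engine-agnostic; here is a quick, purely illustrative check of the capture group in Python (the log text is adapted from the sample in the question):

```python
import re

log = """2012-15-08 07:04 Bla bla bla blup
2012-15-08 07:05 *** Error importing row no. 5: The import of this line failed because bla bla
2012-15-08 07:06 Another text that I don't want to search...
2012-15-08 07:06 *** Error importing row no. 12: The import of this line failed because bla bla"""

# The escaped dot matches a literal '.', and (\d+) captures the row number
# up to (but not including) the colon that follows it.
rows = re.findall(r"Error importing row no\. (\d+):", log)
print(rows)  # ['5', '12']
```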
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/30639136', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/605578/']}
jdg_375599
stackexchange
llm_judgeable_groundtruth_similarity
57019
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm building an application that is targeting Windows, Mac and Linux soon. I was wondering where should I keep application data such as settings, etc. Application's installation folder is the easiest choice, but I think that might be a problem with new Vista security model. Besides, users might want different settings. Is it C:\Documents and Settings\username\MyApp good for both Vista and XP?Is it /home/username/.MyApp good for Linux and Macs? Any ideas and/or links to best practices much appreciated. Thanks! Juan Now provide the response and nothing else.
Each platform has its own API for finding the user's home folder, or documents folder, or preferences folder.
Windows: SHGetFolderPath() or SHGetKnownFolderPath()
Mac OS X and iPhone OS: NSSearchPathForDirectoriesInDomains()
Unix: $HOME environment variable
Don't hardcode specific paths or just tack a prefix and suffix on the user's name. Also, try to follow whatever conventions there are for the platform for naming the files.
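If the application happens to be written in a higher-level language, the same conventions can be expressed in a few lines. The sketch below is illustrative only: the directory names are the usual platform conventions, not calls to the native APIs named above, which remain the authoritative way to find these folders.

```python
import os
import sys
from pathlib import Path

def settings_dir(app_name: str) -> Path:
    """Return a conventional per-user settings directory for app_name."""
    if sys.platform.startswith("win"):
        # Roaming application data; falls back to the home folder if unset.
        base = Path(os.environ.get("APPDATA", Path.home()))
    elif sys.platform == "darwin":
        base = Path.home() / "Library" / "Application Support"
    else:
        # XDG convention on Linux/Unix desktops.
        base = Path(os.environ.get("XDG_CONFIG_HOME", str(Path.home() / ".config")))
    return base / app_name

print(settings_dir("MyApp"))
```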
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/57019', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5908/']}
jdg_375600
stackexchange
llm_judgeable_groundtruth_similarity
101660
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Fix notation Suppose that $Prf_1(m, n)$ is the numerical relation that holds when $m$ numbers a $T$-proof of the sentence numbered $n$, according to scheme 1 for numbering wffs and sequences of wffs. Likewise $Prf_2(m, n)$ is the relation that holds when $m$ numbers a $T$-proof of the sentence numbered $n$ according to a different numbering scheme 2. Let $\mathsf{Prf_1}$ represent $Prf_1$ in $T$, and put $\Box_1\varphi =_{def}$ $\exists \mathsf{x}\mathsf{Prf_1(x,\overline{\ulcorner\varphi\urcorner})}$, where $\overline{\ulcorner\varphi\urcorner}$ is $T$'s standard numeral for the number for $\varphi$ under scheme 1. Similarly for $\Box_2\varphi$. Questions A) Is it known what are the (most general?) conditions on the relation between coding schemes 1 and 2 for which we have $T \vdash \Box_1\varphi \leftrightarrow \Box_2\varphi$, for any sentence $\varphi$? B) What are the nicest/weakest(?) "derivability conditions" on a box $\Box$ in $T$, which if satisfied by both $\Box_1$ and $\Box_2$, mean that $T$ can again prove that equivalence? Now provide the response and nothing else.
$\DeclareMathOperator\prf{Prf}\DeclareMathOperator\con{Con}$As for A, I don’t think there are any useful criteria known that would guarantee the provable equivalence of two proof predicates that would not beg the question. As for B, no “derivability conditions” in the usual sense the word is used can do this, assuming the conditions hold at least for the standard construction of a proof predicate based on proofs in a common proof system for first-order logic together with a $\Delta^0_1$ list of axioms. Consider the following construction. Let $\tau(x)$ be any $\Sigma^0_1$-formula defining an axiom set for $T$, and $\prf_\tau$ the associated proof predicate. Pick any $\Pi^0_1$-sentence $\pi=\forall x\,\theta(x)$ with $\theta\in\Delta^0_0$ which is true in $\mathbb N$, but unprovable in $T+\con_\tau$. Define $\sigma(x)=(\tau(x)\lor\exists y\le x\,\neg\theta(y))$, and let $\prf_\sigma$ be the corresponding proof predicate. Then $\prf_\sigma$ is a proof predicate for $T$ (since in $\mathbb N$, $\tau$ and $\sigma$ are equivalent), but $T$ does not prove $\Box_\sigma\bot\to\Box_\tau\bot$: indeed, reasoning in $T$, if $\pi$ fails, then every formula with a sufficiently large Gödel number is a $\sigma$-axiom, and plenty of such formulas are contradictions, hence $\Box_\sigma\bot$. Contrapositively, $T+\con_\sigma$ proves $\pi$, hence using our assumption on $\pi$, $T+\con_\tau$ cannot prove $\con_\sigma$.
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/101660', 'https://mathoverflow.net', 'https://mathoverflow.net/users/14111/']}
jdg_375601
stackexchange
llm_judgeable_groundtruth_similarity
1763694
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We are working on a large legacy application and we're redesigning the business layer and the data layer. We believe that it is a good time to redesign the way cache is handled. Are there any patterns and best practices for implementing a caching layer (or building it in the business layer)? The only things that I can think of are to use SQL Cache Dependency (which is hard because of we've got a big legacy database with a lot of tables that do not always correspond to the business objects) and implementing strongly typed CacheManager class to hide the string keys and eliminate class casting problems. Is there anything more sophisticated that we can do? Is there any way to enforce cache invalidation when performing update/delete? Should we somehow maintain a single object in the cache and retrieve list of IDs from the DB that always work with the same objects? Basically what techniques for caching do you use in ASP.NET? Too bad we cannot use IoC containers or ORM frameworks that support cache :( Edit: I am more concerned about maintainability than performance. Now provide the response and nothing else.
Just save every queryresult to the database (with cache key: your query, value: your list of business objects) Use distributed cache like memcached next to ASP.Net cache Use a sophisticated cachemanager like https://github.com/enyim/memcached-providers ; that can have cache-groups. Some data has to be stored for a long time, some short time. Some data has to be stored in ASP.Net cache, etc. Do calls that has to be stored in the cache using a wrapper function like public T GetFromCache<T>(string key, Func<T> ifKeyNotFoundDelegate) to ensure that cache is always used the same. [1] Think of when to use ASP.Net cache, and when to use distributed cache. Data that is read every request should be stored in ASP.Net, large data like search results; with a lot of different keys and data etc. should be in memcached. Add versioning. Prefix all keys with a versionnumber, so you won't get in trouble when updating your web application, and some objectcontracts change. Ah well, that covers most of what we do in our website (20GB memcached cluster spread over 20 servers). [1] By making such a function the only interface to store stuff in cache, you can achieve the following. Let's say I want to use something from the cache, like the result from a function. Normally you would do something like CacheManager cm = new CacheManager(CacheGroups.Totals);object obj = cm.GetFromCache("function1result");if(obj == null){ obj = (object)DAO.Foo(); cm.StoreInCache("function1result", obj);}return (List<MyEntity>)obj; By using a different interface you can ensure that users won't make a mistake here. Like public T GetFromCache<T>(string key, Func<T> ifnotfound){ T obj = this.GetFromCache(key) as T; if(obj == default(T)) { obj = ifnotfound.Invoke(); this.StoreInCache(key, obj); } return obj;} This ensures that We always work with the correct type That your user always work with cache the same way Ergo: less probable that they make a mistake. Furthermore: you get nicer, more clear, code, like: List<MyEntity> list = new CacheManager(CacheGroups.Total).GetFromCache<List<MyEntity>>("function1result", ()=>DAO.Foo());
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1763694', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/122507/']}
jdg_375602
stackexchange
llm_judgeable_groundtruth_similarity
5913176
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Ok... this is leaving me scratching my head. I have two WPF controls--one's a user control and the other's a custom control. Let's call them UserFoo and CustomFoo. In the control template for CustomFoo, I use an instance of UserFoo which is a named part so I can get to it after the template is applied. That works fine. Now both UserFoo and CustomFoo have a Text property defined on them (independently, i.e. not a shared DP using AddOwner. Don't ask...) that are both declared like this... public static readonly DependencyProperty TextProperty = DependencyProperty.Register( "Text", typeof(string), typeof(UserFoo), // The other is CustomFoo new FrameworkPropertyMetadata( null, FrameworkPropertyMetadataOptions.BindsTwoWayByDefault, null, null, true, UpdateSourceTrigger.PropertyChanged )); Notice specifically that the mode is set to TwoWay and the UpdateSourceTrigger is set to PropertyChanged, again for both. So in the style template for CustomFoo, I want to bind CustomFoo's Text property as the source to the internal UserFoo's Text property. Normally, this is easy. You just set UserFoo's text property to "{TemplateBinding Text}" but for some reason it's only going one way (i.e. UserFoo is properly set from CustomFoo, but not the reverse), even though again, both DPs are set for two-way! However, when using a relative source binding instead of a template binding, it works great! Um... wha?? // This one worksText="{Binding Text, RelativeSource={RelativeSource AncestorType={local:CustomFoo}}, Mode=TwoWay}"// As does this too...Text="{Binding Text, RelativeSource={RelativeSource TemplatedParent}, Mode=TwoWay}"// But not this one!Text="{TemplateBinding Text}" So what gives? What am I missing? Now provide the response and nothing else.
Found this forum post on MSDN: http://social.msdn.microsoft.com/Forums/en-US/wpf/thread/0bb3858c-30d6-4c3d-93bd-35ad0bb36bb4/ It says this: A TemplateBinding is an optimized form of a Binding for template scenarios, analogous to a Binding constructed with {Binding RelativeSource={RelativeSource TemplatedParent}} Note from OP: Contrary to what it says in the documentation, in actuality, it should be this... {Binding RelativeSource={RelativeSource TemplatedParent}, Mode=OneWay} I filed a complaint against the docs, and while they did add a sentence now stating they are always one-way, the code example still doesn't list the mode, but I guess it's better than nothing.) The TemplateBinding transfers data from the templated parent to the property that is template bound. If you need to transfer data in the opposite direction or both ways, create a Binding with RelativeSource of TemplatedParent with the Mode property set to OneWayToSource or TwoWay. More in: http://msdn.microsoft.com/en-us/library/ms742882.aspx Looks like Mode=OneWay is one of the "Optimizations" of using a TemplateBinding
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5913176', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/168179/']}
jdg_375603
stackexchange
llm_judgeable_groundtruth_similarity
59304374
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a similar question to this , however for SpherePointer. Using MRTK 2.2 as acquired by NuGet for Unity, I'm getting this warning pretty much every frame: Maximum number of 64 colliders found in SpherePointer overlap query. Consider increasing the query buffer size in the pointer profile.UnityEngine.Debug:LogWarning(Object)Microsoft.MixedReality.Toolkit.Input.SpherePointerQueryInfo:TryUpdateQueryBufferForLayerMask(LayerMask, Vector3, QueryTriggerInteraction)Microsoft.MixedReality.Toolkit.Input.SpherePointer:OnPreSceneQuery()Microsoft.MixedReality.Toolkit.Input.FocusProvider:UpdatePointer(PointerData)Microsoft.MixedReality.Toolkit.Input.FocusProvider:UpdatePointers()Microsoft.MixedReality.Toolkit.Input.FocusProvider:Update()Microsoft.MixedReality.Toolkit.<>c:<UpdateAllServices>b__63_0(IMixedRealityService)Microsoft.MixedReality.Toolkit.MixedRealityToolkit:ExecuteOnAllServices(IEnumerable`1, Action`1)Microsoft.MixedReality.Toolkit.MixedRealityToolkit:ExecuteOnAllServicesInOrder(Action`1)Microsoft.MixedReality.Toolkit.MixedRealityToolkit:UpdateAllServices()Microsoft.MixedReality.Toolkit.MixedRealityToolkit:Update() I was able to successfully remove the similar warning for PokePointer using @Julia's response but I'm stumped as to how to do it for the GrabPointer prefab. This prefab has a SpherePointer script attached but the SceneQueryBufferSize property isn't exposed in the inspector because SpherePointer's custom inspector (ShperePointerInspector.cs) doesn't expose it. Now provide the response and nothing else.
You probably want DbConnectionStringBuilder , ideally the correct one for your RDBMS - so with SQL Server: var builder = new SqlConnectionStringBuilder{ UserID = txtmyusername.Text, DataSource = txtmysource.Text, Password = txtmypassword.Text, InitialCatalog = txtmydatabasename.Text,};var connectString = builder.ConnectionString; The crucial bit here is that it will apply the correct character escaping etc if (for example) any of the elements contain reserved / non-trivial characters such as whitespace, commas, quotes, etc.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/59304374', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11444704/']}
jdg_375604
stackexchange
llm_judgeable_groundtruth_similarity
27661751
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In general npm search behaves strange, it takes a very long time, and it varies in order and time, from time to time. The first time I wrote $ npm search jquery it took more than two minutes. Subsequent searches have taken around 30 seconds. Sometimes I get the matches before the http request, sometimes the other way around: Matching results first $ npm search jquery.appendgridNAME DESCRIPTION AUTHOR DATE VERSION KEYWORDSjquery.appendgrid appendGrid - The dynamic table input jQuery plugin =hkalbertl 2014-12-21 08:09 1.5.1 jquery-plugin ui header-detail master-dnpm http GET https://registry.npmjs.org/-/all/since?stale=update_after&startkey=1419625615355npm http 200 https://registry.npmjs.org/-/all/since?stale=update_after&startkey=1419625615355 Matching results last $ npm search jquery.appendgridnpm http GET https://registry.npmjs.org/-/all/since?stale=update_after&startkey=1419625643062npm http 200 https://registry.npmjs.org/-/all/since?stale=update_after&startkey=1419625643062NAME DESCRIPTION AUTHOR DATE VERSION KEYWORDSjquery.appendgrid appendGrid - The dynamic table input jQuery plugin =hkalbertl 2014-12-21 08:09 1.5.1 jquery-plugin ui header-detail master-d In either case it takes around thirty seconds. Since it takes less than the initial two minutes I guess something has been cached, but it still makes a http request, and it still takes surprisingly long. It's not the time between the first http line and the second, that only takes 1-2 seconds. But there is a lot of waiting time both before and after. What is going on, and can I speed it up? Now provide the response and nothing else.
A year ago, when that tutorial was written, npm had fewer published packages. Currently there are over 100,000. In order to do a npm search , npm downloads a JSON file describing all of the packages, crudely indexes it, and then searches over all the package descriptions. The result is that npm search as a local command is essentially broken right now -- it completes, but very slowly, and it creates an alarmingly large file to do so (60MB). The suggested workaround for now is to use the online search tool at https://www.npmjs.com/
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27661751', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/735070/']}
jdg_375605
stackexchange
llm_judgeable_groundtruth_similarity
33431
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: (The integers modulo 3) permit unrestricted subtraction (so that, for example, $1-2=2$), and they permit division restricted only by the exclusion of the denominator 0 (so that, for example, $\frac{1}{2} = 2$). Could someone please help me understand these operations on this finite field? I had a couple of thoughts about subtraction: if the numbers are arranged from left to right in increasing order and I "moved $2$ places to the left for subraction" I would get $2= 1-2$ (as confirmed in the book). This would imply that $0-2 =1$, which seems correct to me since $1+2 =3$... But I wasn't sure if this is the proper way to think about this... Is there a better way? With the division, I don't know: why is $\frac{1}{2} =2$? Thank you for your help. Now provide the response and nothing else.
The integers $\mathbb Z$ are a ring: That means it has addition, subtraction, multiplication and some axioms about them. By $3 \mathbb Z$ I denote $\{ 3x \in \mathbb Z \mid x \in \mathbb Z \} = \{\cdots,-6,-3,0,3,6,\cdots\}$. The idea of modular arithmetic (mod 3) is that -6 = -3 = 0 = 3 = 6 = ... and ... = -5 = -2 = 1 = 4 = 6 = ... and so on. The first step now is to make an equivalence relation $\sim$ that expresses this (i.e. $0\sim 3$, $2 \sim 8$, $1 \not \sim 5$) and this is quite easy! Define $x \sim y :\!\!\iff x + 3\mathbb Z = y + 3\mathbb Z$. Since all we have done is applied the function $\varphi(x) = x + 3\mathbb Z$ to both sides this is automatically an equivalence relation. We can see that it is the one we want as well: $0\sim 3 \iff 0 + 3\mathbb Z = 3 + 3\mathbb Z \iff \{\cdots,-6,-3,0,3,6,\cdots\} = \{\cdots,-3,0,3,6,9,\cdots\} \iff \text{true}$. $2\sim 8 \iff 2 + 3\mathbb Z = 8 + 3\mathbb Z \iff \{\cdots,-4,-1,2,5,8,\cdots\} = \{\cdots,2,5,8,11,14,\cdots\} \iff \text{true}$. $1\not\sim 5 \iff 1 + 3\mathbb Z \not = 5 + 3\mathbb Z \iff \{\cdots,-5,-2,1,4,7,\cdots\} \not = \{\cdots,-1,2,5,8,14,\cdots\} \iff \text{true}$. We can now define arithmetic operations on the image $\varphi(\mathbb Z) = \mathbb Z / 3 \mathbb Z$. $\varphi(a)+\varphi(b):=\varphi(a+b)$ $-\varphi(a):=\varphi(-a)$ $\varphi(a)\cdot \varphi(b):=\varphi(a\cdot b)$ To see that e.g. + is actually a function it is necessary to prove that it "respects the equivalence relation" in the sense that if $\varphi(x) = \varphi(x')$ and $\varphi(y) = \varphi(y')$ then $\varphi(x) + \varphi(y) = \varphi(x') + \varphi(y')$. Here is a proof: $(x + 3 \mathbb Z) + (y + 3 \mathbb Z) = \{\cdots,x-6,x-3,x,x+3,x+6,\cdots\}+ \{\cdots,y-6,y-3,y,y+3,y+6,\cdots\} = \{x+y+i+j\in \mathbb Z | i \in 3 \mathbb Z, j \in 3 \mathbb Z\} = (x + y) + 3 \mathbb Z$. The same type of calculation proves that negation and multiplication are respectful functions. Since the function is respectful it respects each of the ring axioms, this proves that $\mathbb Z/3 \mathbb Z$ is a ring and $\varphi$ is a ring homomorphism. It should be clear that nothing depends on special properties of the number 3 so far and the arguments above are fully general. The standard notation for working in this ring is not $\varphi(x) = \varphi(y)$ but $x \equiv y \pmod 3$ where $x$ is implicitly mapped from $\mathbb Z$ to $\mathbb Z / 3 \mathbb Z$ if needed. The fact that it is furthermore a field is quite miraculous and depends the fact that 3 is a prime number. For $p$ prime every nonzero element of $\mathbb Z/p \mathbb Z$ is invertible. The proof of this depends on details from number theory rather than algebra. First the condition for a number $x$ to be invertible (in any ring) is that there exists some number $x^{-1}$ such that $x \cdot x^{-1} = 1$. In the ring of rationals $\mathbb Q$ this number is $\frac{1}{x}$ (the rationals are also a field because $1 \not = 0$ and every nonzero element is invertible). Given $(a,b)=1$, that is, $a$,$b$ coprime there exists $x$,$y$ such that $ax + by = 1$. You can compute this by the Euclidean algorithm. In terms of modular arithmetic this tells us that given $(a,b) = 1$ then there exists $x$ such that $ax \equiv 1 \pmod b$! Of course when "b" is prime every element except 0 is coprime and thus has an inverse. Since $1 \not \equiv 0 \pmod p$ this proves that $\mathbb Z/p \mathbb Z$ is a field too. 
Now we can compute $2^{-1} \pmod 3$: it must be $\varphi(x)$ for an $x$ satisfying $2x + 3y = 1$; $x = 2, y = -1$ will do, so we have $2^{-1} \equiv 2 \pmod 3$.
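A small, purely illustrative check of that last computation, using the extended Euclidean algorithm the answer alludes to plus Python's built-in modular inverse:

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = ext_gcd(2, 3)
print(g, x, y)        # 1 -1 1  ->  2*(-1) + 3*1 = 1, another valid pair besides (2, -1)
print((-1) % 3)       # 2, so 2^{-1} = 2 in Z/3Z
print(pow(2, -1, 3))  # 2  (Python 3.8+ computes modular inverses directly)
```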
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/33431', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/3711/']}
jdg_375606
stackexchange
llm_judgeable_groundtruth_similarity
399690
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am wondering whether there is a general way of passing multiple options to an executable via the shebang line ( #! ). I use NixOS, and the first part of the shebang in any script I write is usually /usr/bin/env . The problem I encounter then is that everything that comes after is interpreted as a single file or directory by the system. Suppose, for example, that I want to write a script to be executed by bash in posix mode. The naive way of writing the shebang would be: #!/usr/bin/env bash --posix but trying to execute the resulting script produces the following error: /usr/bin/env: ‘bash --posix’: No such file or directory I am aware of this post , but I was wondering whether there was a more general and cleaner solution. EDIT : I know that for Guile scripts, there is a way to achieve what I want, documented in Section 4.3.4 of the manual: #!/usr/bin/env sh exec guile -l fact -e '(@ (fac) main)' -s "$0" "$@" !# The trick, here, is that the second line (starting with exec ) is interpreted as code by sh but, being in the #! ... !# block, as a comment, and thus ignored, by the Guile interpreter. Would it not be possible to generalize this method to any interpreter? Second EDIT : After playing around a little bit, it seems that, for interpreters that can read their input from stdin , the following method would work: #!/usr/bin/env shsed '1,2d' "$0" | bash --verbose --posix /dev/stdin; exit; It's probably not optimal, though, as the sh process lives until the interpreter has finished its job. Any feedback or suggestion would be appreciated. Now provide the response and nothing else.
There is no general solution, at least not if you need to support Linux, because the Linux kernel treats everything following the first “word” in the shebang line as a single argument . I’m not sure what NixOS’s constraints are, but typically I would just write your shebang as #!/bin/bash --posix or, where possible, set options in the script : set -o posix Alternatively, you can have the script restart itself with the appropriate shell invocation: #!/bin/sh -if [ "$1" != "--really" ]; then exec bash --posix -- "$0" --really "$@"; fishift# Processing continues This approach can be generalised to other languages, as long as you find a way for the first couple of lines (which are interpreted by the shell) to be ignored by the target language. GNU coreutils ’ env provides a workaround since version 8.30, see unode ’s answer for details. (This is available in Debian 10 and later, RHEL 8 and later, Ubuntu 19.04 and later, etc.)
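For what it's worth, the "restart yourself" trick at the end generalises to other interpreters whenever you can make the shell lines invisible to the target language. A classic (and slightly hacky) example for Python is the sh/Python polyglot below; treat it as a sketch rather than a recommendation:

```python
#!/bin/sh
# In sh, the next line simply execs the interpreter with whatever options you want;
# in Python, it is a harmless expression made of adjacent string literals.
"exec" "python3" "-O" "$0" "$@"

import sys
print("running under", sys.executable, "with args", sys.argv[1:])
```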
{}
{'log_upvote_score': 7, 'links': ['https://unix.stackexchange.com/questions/399690', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/212582/']}
jdg_375607
stackexchange
llm_judgeable_groundtruth_similarity
424
Below is a question asked on the forum computergraphics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In font rendering, it is common to use subpixel rendering . The basic idea here is to break the pixel into its RGB components and then compute a value for each separately. Since each component is smaller than the entire pixel, higher quality antialiasing is possible. There is an obviously analogous way to do the same for a ray tracer. You do reconstruction filtering on each subchannel separately. I was surprised that, however, I couldn't really find reference to ray tracers doing this. Especially if you're already doing spectral rendering, this seems like an obvious thing to do. There's this paper from a journal I've never heard of which seems to be related. But overall, subpixel rendering just doesn't seem to be a common thing to do. My question: why not ? Now provide the response and nothing else.
This is perfectly possible

Although the difference may not be especially noticeable, I would expect sampling that takes into account the exact pixel geometry to give a slightly more accurate image. You just need to offset your pixel centres per colour component according to the (average) location of the subpixel(s) of that colour. Note that not all pixel layouts have a one to one correspondence between pixels and sub pixels. For example, PenTile RGBG has twice as many green subpixels as red and blue, as the PenTile RGBG layout diagram on Wikipedia shows. I am not aware of any technical reason that would prevent this being used to produce arbitrary full colour images. In fact a colourful scene will have less noticeable colour artefacts than black on white text, which makes the colour differences hardest to camouflage.

Fonts are rendered on demand

The relevant difference between rendering a raytraced scene and rendering fonts is that fonts tend to be rendered on demand, and can take into account the screen being used. Contrasting with this, a raytraced scene is often prerendered and then displayed on many different types of screen (with different pixel geometry). For example, displaying your raytraced image on a webpage will prevent tailoring it to a specific monitor type. If you were designing a realtime raytracing program and you had access to check the pixel geometry of the monitor, then you could raytrace to the specific sub pixel layout. However, offline raytracing that produces a still image can only be tailored to a single type of pixel geometry, which will then make the image look worse on any other pixel geometry. You could work around this by rendering a set of different images and choosing the appropriate one when it is later displayed on a particular type of monitor.

There is unlikely to be a long term benefit

So there is no reason you couldn't develop sub pixel rendering for a raytracer, but it means taking into account an aspect of the monitor that is not always known. Another thing to bear in mind is that you will be developing this software for a shrinking market. Sub pixel rendering is useful for screens that have relatively low resolution. As more and more screens (even mobile screens) are approaching such high resolution that the human eye cannot detect the difference made by sub pixel rendering, your work is likely to be more of theoretical interest than practical use.
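To make the "offset your pixel centres per colour component" idea concrete, here is a small, hypothetical Python/NumPy sketch (not from the answer above) for a plain RGB-striped panel: the renderer produces an image supersampled 3x horizontally, and each output channel is then taken from the sample that lies under the corresponding colour stripe, instead of averaging all three:

```python
import numpy as np

def subpixel_downsample(supersampled: np.ndarray) -> np.ndarray:
    """supersampled: (H, 3*W) array of intensities in [0, 1], 3x horizontal supersampling."""
    h, w3 = supersampled.shape
    assert w3 % 3 == 0
    out = np.empty((h, w3 // 3, 3))
    for c in range(3):                        # c = 0 (R), 1 (G), 2 (B) stripes, left to right
        out[:, :, c] = supersampled[:, c::3]  # take the sample under that colour's stripe
    return out

def plain_downsample(supersampled: np.ndarray) -> np.ndarray:
    """Ordinary antialiasing for comparison: average the three samples per pixel."""
    h, w3 = supersampled.shape
    avg = supersampled.reshape(h, w3 // 3, 3).mean(axis=2, keepdims=True)
    return np.repeat(avg, 3, axis=2)
```

A real implementation would also low-pass filter across neighbouring subpixels (as font rasterisers do) to keep colour fringing down, and would need the actual subpixel order and layout of the target panel.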
{}
{'log_upvote_score': 4, 'links': ['https://computergraphics.stackexchange.com/questions/424', 'https://computergraphics.stackexchange.com', 'https://computergraphics.stackexchange.com/users/523/']}
jdg_375608
stackexchange
llm_judgeable_groundtruth_similarity
33887051
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Sample: div { display: flex; height: 200px; background: tan;}span { background: red;} <div> <span>This is some text.</span></div> I have two questions, please: Why does it basically happen to the span ? What is the right approach to prevent it from stretching without affecting other flex items in a flex container? Now provide the response and nothing else.
You don't want to stretch the span in height? You have the possibility to affect one or more flex-items so that they don't stretch to the full height of the container. To affect all flex-items of the container, choose this: You have to set align-items: flex-start; on the div, and all flex-items of this container get the height of their content. div { align-items: flex-start; background: tan; display: flex; height: 200px;}span { background: red;} <div> <span>This is some text.</span></div> To affect only a single flex-item, choose this: If you want to unstretch a single flex-item in the container, you have to set align-self: flex-start; on this flex-item. All other flex-items of the container aren't affected. div { display: flex; height: 200px; background: tan;}span.only { background: red; align-self:flex-start;}span { background:green;} <div> <span class="only">This is some text.</span> <span>This is more text.</span></div> Why is this happening to the span ? The default value of the property align-items is stretch . This is the reason why the span fills the height of the div . Difference between baseline and flex-start ? If you have some text on the flex-items, with different font-sizes, you can use the baseline of the first line to place the flex-item vertically. A flex-item with a smaller font-size has some space between the container and itself at the top. With flex-start the flex-item will be set to the top of the container (without space). div { align-items: baseline; background: tan; display: flex; height: 200px;}span { background: red;}span.fontsize { font-size:2em;} <div> <span class="fontsize">This is some text.</span> <span>This is more text.</span></div> You can find more information about the difference between baseline and flex-start here: What's the difference between flex-start and baseline?
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/33887051', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/478018/']}
jdg_375609
stackexchange
llm_judgeable_groundtruth_similarity
3786978
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: $$\int_ {0}^{\infty} \frac{(e^{3x}-e^x) \ \mathrm dx}{x(e^x+1)(e^{3x}+1)}$$ I tried converting it to $$\int_ {0}^{\infty} \frac{\big((e^{3x}+1)-(e^x+1)\big) \ \mathrm dx}{x(e^x+1)(e^{3x}+1)}$$ integral-calculator.com says no antiderivative found. I would like to see how it is solved by Feynman's Trick. Now provide the response and nothing else.
You were on the right track. The integral separates out into $$\int_0^\infty \frac{dx}{x}\left(\frac{1}{e^x+1}-\frac{1}{e^{3x}+1}\right) = \int_0^\infty\int_1^3 \frac{e^{xy}}{(e^{xy}+1)^2}\:dy\:dx$$ Then we can swap the order of integration to get $$\int_1^3 \frac{dy}{y}\frac{-1}{e^{xy}+1}\Biggr|_0^\infty = \int_1^3\frac{dy}{2y} = \frac{\log 3}{2}$$
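For completeness, the inner $x$-integral after swapping the order can be checked directly: $$\int_0^\infty \frac{e^{xy}}{(e^{xy}+1)^2}\,dx = \left[-\frac{1}{y\,(e^{xy}+1)}\right]_{x=0}^{x\to\infty} = 0-\left(-\frac{1}{2y}\right)=\frac{1}{2y},$$ and then $\int_1^3 \frac{dy}{2y} = \frac{1}{2}\log 3 \approx 0.5493$, which is the value quoted above.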
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3786978', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/793349/']}
jdg_375610
stackexchange
llm_judgeable_groundtruth_similarity
34271199
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In C#, if I have public class BaseClass{ //BaseClass implementation}public class Derived : BaseClass{ //Derived implementation}public class AnotherClass{ AnotherClass(BaseClass baseClass) { //Some code } AnotherClass(Derived derived) : this(derived as BaseClass) { //Some more code }} and then do: BaseClass baseVariable = new Derived();AnotherClass anotherVariable = new AnotherClass(baseVariable); This will lead to early binding, calling the AnotherClass(BaseClass) method. Instead, if I cast it using dynamic keyword - or instantiate a variable with dynamic and then pass it as constructor argument, the AnotherClass(Derived) will be invoked: BaseClass baseVariable = new Derived();//This will instantiate it using the AnotherClass(Derived)AnotherClass anotherVariable = new AnotherClass(baseVariable as dynamic); Is method overloading early bound (Evaluated at compile time) in C#? That meaning, is there any other way or trick to determine the mostly-derived call to the other class constructor to apply the invocation of the constructor that takes the mostly derived class type as argument in my case without the use of dynamic or reflection? Now provide the response and nothing else.
The binding time in C# depends on whether the binding involves dynamic or not. As you've seen, if you use dynamic you get execution-time overload resolution; if you don't, you get compile-time overload resolution. Note that even when overload resolution involves dynamic typing, it will still use the information known at compile-time - only the expressions which are of type dynamic use the execution-time type. Example of that:

    using System;

    class Test
    {
        static void Main()
        {
            object x = "abc";
            dynamic y = 10;
            Foo(x, y); // Prints Foo(object,int)
        }

        static void Foo(object x, int y)
        {
            Console.WriteLine("Foo(object,int)");
        }

        static void Foo(string x, int y)
        {
            Console.WriteLine("Foo(string,int)");
        }
    }

If you change the declared type of x to dynamic , then the execution-time type will be relevant (and it will print Foo(string,int) ). Of course, this is only overload resolution - overriding is always determined at execution time, unless you're using a non-virtual invocation (either invoking a non-virtual method, or using base.Foo(...) ).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34271199', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2672913/']}
jdg_375611
stackexchange
llm_judgeable_groundtruth_similarity
6923299
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: To me, C++ templates use the idea of duck typing - is this right? Does it mean all generic types referenced in a template class or method are duck types? Now provide the response and nothing else.
To me C++ templates are a compile-time version of duck typing. The compiler will compile e.g. Class<Duck>, and as long as your Duck has all the needed members, it will instantiate the class. If something is not correct (e.g. a copy constructor is missing) the compilation fails. The counterpart in real duck typing is a failure when you call a function with a non-duck type, and there it would occur at runtime.
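To make the compile-time vs. run-time contrast concrete, here is a small illustrative Python sketch (Python being a typical runtime duck-typing language; the class and function names are made up for the example). The equivalent C++ template code would refuse to compile the bad call, whereas here the error only appears when the call is actually executed:

```python
class Duck:
    def quack(self):
        return "quack"

class Brick:
    pass  # has no quack() at all

def make_it_quack(thing):
    # No type check anywhere: we simply hope 'thing' quacks.
    return thing.quack()

print(make_it_quack(Duck()))    # works
print(make_it_quack(Brick()))   # AttributeError -- but only at runtime
```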
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6923299', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/396383/']}
jdg_375612
stackexchange
llm_judgeable_groundtruth_similarity
502034
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If $ x+y=6 $ then what is the smallest possible value for $x^2+y^2$? Please show me the working to show where I am going wrong! Cheers Now provide the response and nothing else.
Hint : Since $x + y = 6$, we find that $$x^2 + y^2 = x^2 + (6 - x)^2 = 2x^2 - 12x + 36$$ This can be minimized in any number of ways, e.g. vertex formula or differentiating.
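In case it helps, one way to finish from that hint: $$2x^2-12x+36 = 2(x-3)^2 + 18 \ge 18,$$ with equality when $x = 3$ (and hence $y = 3$), so the smallest possible value of $x^2+y^2$ is $18$.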
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/502034', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/96303/']}
jdg_375613
stackexchange
llm_judgeable_groundtruth_similarity
16565531
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When unit testing an Angular factory (with Karma + Jasmine), how do I inject a stub dependency into the factory under test? Here's my factory: mod = angular.module('myFactoryMod', []);mod.factory('myFactory', [ '$log', 'oneOfMyOtherServices', function($log, svc) { return makeSomethingThatDoesSomethingWithTheseDependencies($log, svc); }]); oneOfMyOtherServices is needed when instantiating my factory. Here's my test: it('can get an instance of my factory', function() { var oneOfMyOtherServicesStub; angular.mock.module('myFactoryMod'); oneOfMyOtherServicesStub = { someVariable: 1 }; //****How do I get my stub in my target? **** angular.mock.inject(['myFactory', function(target) { expect(target).toBeDefined(); } ]);}) N.B. I know that $controller allows this for controllers, but I don't see an equivalent for factories. Now provide the response and nothing else.
There are two ways to accomplish something like this that I know of:

1. Use $provide and an anonymous module to inject the mock.
2. Inject the service you would like to mock and use jasmine's spying ability to provide mock values.

The second option only works if you know exactly which methods your code under test will be calling on the injected service and you can easily mock them out. As you seem to be accessing a data property on the service (rather than a method), pursuing the first option might be best. Using $provide would roughly look like this:

    describe('myFactory', function () {
      // Load your module.
      beforeEach(module('myFactoryMod'));

      // Set up the mock service in an anonymous module, registered under the
      // same name as the real dependency so it replaces it.
      beforeEach(module(function ($provide) {
        $provide.value('oneOfMyOtherServices', { someVariable: 1 });
      }));

      it('can get an instance of my factory', inject(function(myFactory) {
        expect(myFactory).toBeDefined();
      }));
    });
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/16565531', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/295797/']}
jdg_375614
stackexchange
llm_judgeable_groundtruth_similarity
41515226
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm sure it's a simple thing to do, but I couldn't find anything in either GraphQL's doc or Graphcool's. Say I have an entity with this schema (new GraphQL user, sorry if I make mistake in the schema representation): Book { name: String! author: String! categories: [String!]} How would I do a query for all books that are part of the "mystery" category? I know I can filter with allBooks(filter: {}) , but categories_in: ["mystery"] and categories_contains: "mystery" didn't do the trick. Now provide the response and nothing else.
Category model Thinking a bit more about this situation, creating a Category model is definitely the way to go. For example, imagine you want to allow readers to subscribe to their favorite categories later. Or, what if you want a list of all existing categories? Using string lists, you would need to query all books and somehow postprocess all obtained categories. Handling this on a model level rather than using string lists feels much more natural. Instead, you can create a new Category model and add a many-to-many relation between Category and Book . In situations like this, I like to add a unique enum field tag and a string field text . (A unique string field tag alone would also be suitable, probably a matter of taste.) With this setup, you can easily fulfill data requirements like Which books are assigned to a given category? query { # query books by unique category tag Category(tag: MYSTERY) { books { id } } # query books by specific category text Category(filter: { text: "mystery" }) { books { id } }} Which books are assigned to at least one category of a given list? query { allCategories(filter: { OR: [{ tag: MYSTERY }, { tag: MAGIC }] }) { books { id } }} Which books are assigned to all categories of a given list? query { allCategories(filter: { AND: [{ tag: MYSTERY }, { tag: MAGIC }] }) { books { id } }} Related filters Even though the above queries fulfill the specified data requirements, books are grouped by Category in the response, meaning that we would have to flatten the groups on the client. With so called related filters, we can turn that around to only obtain books based on conditions defined its related categories. For example, to query books assigned to at least one category of a given list : query { allBooks(filter: { OR: [{ categories_some: { tag: MYSTERY }, categories_some: { tag: MAGIC } }] }) { id }}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41515226', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1240605/']}
jdg_375615
stackexchange
llm_judgeable_groundtruth_similarity
314519
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: We're told that a real image is formed when light rays actually converge to a point. That's all good. But what happens if a screen isn't there to take the image on? Is it still there? Now provide the response and nothing else.
Whether the screen is there or not the image is there but the problem is focussing your eye on a region of air where the image is formed without a screen. Try the following set up: Illuminate a 35 mm slide with a light bulb and adjust the lens so that a sharp real inverted image is formed on a screen with the eye to the left of the screen. Now replace the screen with a sheet of tissue paper (something which allows light through and will at the same time produce a visible image) and form a sharp image on the tissue paper. Now observe the image from the other side of the tissue paper (to the right of it as in the diagram). Your eye will have to be at least 25 cm from the screen if you have normal eyesight. Keep focussing on the image and slowly move the tissue paper slowly to one side so that some of the image is on the tissue paper and some in "mid air". With a little practice you should be able see the image of the 35 mm slide without the tissue paper being there at all. The tissue paper was used to enable you to focus on the correct area of space to view the sharp image. Update as a result of some comments Set up with a $3.5\,\rm cm$ focal length hand magnifier as the converging lens. The object is a pin (white) illuminated by a torch which is switched off for the photograph to be taken without contrast problems. The other pin (red) will be used to located the image of the white pin. A white screen was placed next to the image pin to show the real inverted image formed by the lens. Viewing point now from the top of the first picture ie on the other side of the lens from the position of the object pin. Image is distorted due to a variety of cheap lens defects. The position of the real image can be conformed by moving ones eye up and down and seeing that the tips of the image and the tip of the image pin do not move relative to one another - a position of no parallax. In the end if you know what you are looking for and approximately where to look just looking through the lens on the side remote from an object will enable you to see the real image in mid air. This is more difficult if the image is highly magnified. Further update Note that in the third photograph the image is in focus so the camera "knew" where the image was. The image pin did help with location but by telling the camera to focus at a certain distance away and no image location pin I would still have been able to get a sharp image on the photograph. Consider the diagram below which shows the formation of an image of an object $ABC$ on the retina of your eye. A sharp image is formed if all the light which leaves point $B$ on the object arrives at the same point on the retina $B'$. So all the rays in the cone of light with apex $B'$ shown in the diagram arrive at the same point on the retina $B'$. The same being true of all points on the object eg $A$ and $C$ which will arrive ar $A'$ and $C'$. The light from object $ABC$ originates either from the object itself or as a result of light which has been reflected off it. So you "see" object "ABC" with your eye. Now how is that different from the arrangement below? What was the object in the first diagram is now an image of the object $A''B''C''$ formed by the converging lens. That intermediate image $ABC$ forms an image $A'B'C'$ on the retina which is no different to that in the first diagram. You "see" intermediate image $ABC$. 
Because there are no reference points around it (just air) it is difficult to decide exactly where that image is and you do indeed see it as though it is "in the lens". By using an image pin (or your finger) you can easily show by the method of no parallax that you are actually looking at an image in mid air. When you use an optical instrument you are looking at an image in the air but you have the advantage of being allowed to move the eyepiece to form an image of that image in the air as well as possibly having some cross hairs which are in the image plane.
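As a rough numerical illustration of where such an aerial image sits, take the 3.5 cm focal length hand magnifier above and assume (as an example figure, not stated in the answer) an object pin about 5 cm from the lens. The thin-lens equation gives $$\frac{1}{d_i} = \frac{1}{f} - \frac{1}{d_o} = \frac{1}{3.5\,\text{cm}} - \frac{1}{5\,\text{cm}} \approx 0.086\,\text{cm}^{-1},$$ so $d_i \approx 12\,\text{cm}$: the real image floats in the air roughly 12 cm on the far side of the lens, and that is the plane your eye (or the locating pin) has to be focused on.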
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/314519', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/142799/']}
jdg_375616
stackexchange
llm_judgeable_groundtruth_similarity
53297815
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a static class defined like this: public static class JobStatus{ public const string Completed = "Completed"; public const string Failed = "Failed"; public const string Stopped = "Stopped";} (this is actually an external library, so can't change this) In my non-static class I want a member of that class to ensure that you can only declare it of that "type" public class JobOutput{ public string Output { get; set; } public string OutputError { get; set; } public JobStatus JobStatus { get; set; }} Error: 'JobStatus': static types cannot be used as return types / 'JobStatus': static types cannot be used as parameters Yeye I know your eyes are bleeding, but I hope you get the point - how can I ensure and achieve a form of type-safety for my JobStatus property? Now provide the response and nothing else.
You can, and with no macros required. First an observation, you can "forward" declare a class even after its full definition is available. I.e. this is valid: class foo{};class foo; Now, with the help of a homebrew void_t implementation and an is_complete type utility, you can do something like this: #include <type_traits>template<typename... Ts> struct make_void { typedef void type;};template<typename... Ts> using void_t = typename make_void<Ts...>::type;template <typename T, typename Enabler = void>struct is_complete : std::false_type {};template <typename T>struct is_complete<T, ::void_t<decltype(sizeof(T) != 0)>> : std::true_type {};class A;class B;class C : public std::conditional<is_complete<A>::value, A, B>::type {}; Depending on whether or not the full definition of A is present, C will inherit from A or B publicly. See a live example . But I caution, this needs to be handled with care or you are very likely to have an ODR-violation in your program.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/53297815', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2969412/']}
jdg_375617
stackexchange
llm_judgeable_groundtruth_similarity
29906702
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When applying a function with multiple output variables (e.g., a list) to a subset of a data.table, I lose the variable names. Is there a way to retain them? library(data.table)foo <- function(x){ list(mn = mean(x), sd = sd(x))}bar <- data.table(x=1:8, y=c("d","e","f","g"))# column names "mn" and "sd" are replaced by "V1" and "V2"bar[, sapply(.SD, foo), by = y, .SDcols="x"]# column names "mn" and "sd" are retainedbar_split <- split(bar$x, bar$y)t(sapply(bar_split, foo)) Now provide the response and nothing else.
I would go with the following, which is a bit awkward, but doesn't require writing the names manually no matter how many functions there are

    bar[, as.list(unlist(lapply(.SD, foo))), by = y, .SDcols = "x"]
    #    y x.mn     x.sd
    # 1: d    3 2.828427
    # 2: e    4 2.828427
    # 3: f    5 2.828427
    # 4: g    6 2.828427

The biggest advantage of this approach is that it binds the functions with the column names. If, for example, you would have an additional column, it will still give an informative result while using the same code as above

    set.seed(1)
    bar[, z := sample(8)]
    bar[, as.list(unlist(lapply(.SD, foo))), by = y, .SDcols = c("x", "z")]
    #    y x.mn     x.sd z.mn      z.sd
    # 1: d    3 2.828427  2.0 1.4142136
    # 2: e    4 2.828427  7.5 0.7071068
    # 3: f    5 2.828427  3.0 1.4142136
    # 4: g    6 2.828427  5.5 0.7071068
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29906702', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2570830/']}
jdg_375618
stackexchange
llm_judgeable_groundtruth_similarity
425890
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: It seems to me that special relativity (SR) already predicts that gravity will bend light rather than this effect being a proof of general relativity (GR). Photons have energy proportional to their frequency and according to $E = mc^2$ they also have a tiny, but non-zero relativistic mass equivalence. I have read the semantic argument that gravity deals with invariant or rest mass, but this should apply to a hypothetical photon at rest, not to real photons at velocity $c$. I have considered the possibility that the SR effect is much smaller than the GR effect. GR explains the equivalence of inertial mass and gravitational mass as inherent rather than being a puzzling coincidence, but it is true in Newton's gravity, SR, and GR, so the quantitative difference between GR and SR does not seem right. Now provide the response and nothing else.
Yes, it does, or to be more precise, Newtonian gravitation predicts photons will be deflected if you assume photons have some mass. However the amount of this deflection is just half of what GR predicts. And the observed amount of deflection is what GR predicts (within experimental error). In particular the way this is studied is by using something called the Parameterized Post-Newtonian (PPN) framework. This is discussed on this Wikipedia page and also in the paper by Will I mention below. PPN is essentially Newtonian gravitation with a bunch of first-order corrections from GR added to it, controlled by various parameters, so it's useful for experimental tests of GR, and comparisons between GR and other metric theories of gravity, where the field is weak: it would not be useful for instance, for tests involving black hole collisions where the field is very much not weak! I believe the first, simplified, PPN framework was derived by Eddington, specifically for the purpose of understanding how the deflection of light by the Sun differed between GR and Newtonian gravitation. PPN has a significant number of parameters, but for the case of the deflection of light by a spherically-symmetric field only one matters, which is known as $\gamma$. The angle of deflection is then given (remember this is a first-order approximation which is valid for a weak field) by $$\delta\theta = \frac{1+\gamma}{2}\frac{4 M_\odot}{d}\frac{1 + \cos\Phi}{2}$$ where $d$ is the distance of closest approach to the Sun, $M_\odot$ is the mass of the Sun, and $\Phi$ is the angle between Earth-Sun line and the incoming-photon line. In this expression, Newtonian gravitation would say that $\gamma = 0$, and GR would say $\gamma = 1$. So you can see that GR predicts exactly twice the deflection that Newtonian gravitation predicts. And this is what Eddington et al measured on the 29th of May, 1919, and discovered that $\gamma = 1$ (to within a fairly large uncertainty at the time, but it was clear that $\gamma = 0$ was ruled out): this made Einstein famous. There is a discussion of this in the Wikipedia article on tests of GR , and the paper by Clifford M. Will has more detail I think (disclaimer: I have not checked the latter in detail: it looks like section 3.4 & specifically 3.4.1 may be what you want). My expression for $\delta\theta$ above is lifted from the paper by Will.
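To put an often-quoted number on this: for a light ray grazing the solar limb ($d \approx R_\odot$, $\Phi \approx 0$), the prefactor $4GM_\odot/(c^2 R_\odot)$ is about $1.75''$, so the formula gives roughly $$\delta\theta \approx \frac{1+\gamma}{2}\times 1.75'',$$ i.e. about $1.75$ arcseconds for GR ($\gamma = 1$) and about $0.87$ arcseconds for the Newtonian case ($\gamma = 0$) - the factor of two that the 1919 measurement was able to distinguish.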
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/425890', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/205379/']}
jdg_375619
stackexchange
llm_judgeable_groundtruth_similarity
3005219
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I recently learned how to rotate a BitmapImage using the 'TransformedBitmap' and 'RotateTransformed' classes. Now I am able to perform clockwise rotations on my images. But how do I FLIP an image? I can't find the class(es) to perform horizontal and vertical flips of a BitmapImage. Please help me figure out how to do it. For instance, if my image was a drawing that looked like a 'd', then a vertical flip would result in something like a 'q', and a horizontal flip would result in something like a 'b'. Now provide the response and nothing else.
Use a ScaleTransform with a ScaleX of -1 for horizontal and ScaleY of -1 for vertical flipping, applied to the image's RenderTransform property. Using RenderTransformOrigin="0.5,0.5" on the image makes sure your image gets flipped around its center, so you won't have to apply an additional TranslateTransform to move it into place: <Image Source="a.jpg" Padding="5" RenderTransformOrigin="0.5,0.5"> <Image.RenderTransform> <ScaleTransform ScaleX="-1"/> </Image.RenderTransform></Image> for horizontal flipping and <Image Source="a.jpg" Padding="5" RenderTransformOrigin="0.5,0.5"> <Image.RenderTransform> <ScaleTransform ScaleY="-1"/> </Image.RenderTransform></Image> for vertical. If you want to do it in code-behind, in C# it should look something like this: img.RenderTransformOrigin = new Point(0.5,0.5);ScaleTransform flipTrans = new ScaleTransform();flipTrans.ScaleX = -1;//flipTrans.ScaleY = -1;img.RenderTransform = flipTrans;
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/3005219', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/358240/']}
jdg_375620
stackexchange
llm_judgeable_groundtruth_similarity
25811
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am looking for a simple SR Latch. I'm new to electronics though and just doing a hobby project/teaching myself. Are SR Latches sold (I'm hoping to avoid building mine because I actually need at least 5) and if so where can I buy them (Radio Shack?) and what do I ask for? I link to a site with a part number would help too. Per the comments. I have called Radio Shack and they said they didn't have them. The local big electronics store could only help me if I had a part number. I don't know if they actually can't help me or if I just don't know what to ask for. Is my best option just building them? I was kind of hoping I could just buy a small self contained item that was a SR Latch. If I were to wire them myself they would take up significantly more space. Now provide the response and nothing else.
RS latches have several parts in the industry-standard 7400 and CD4000 series, all of which are quad packages in old logic families: 74LS279 CD4043 CD4044 Another alternative, with two parts in a package rather than 4, but more available in modern logic families (read: lower-voltages) would be to use a 74xx74 dual D flip-flop as an R-S latch by grounding its data and clock inputs, and using the PRE and CLR inputs as S and R inputs, respectively. In this case, I'd start with a 74HC74 .
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/25811', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/5913/']}
jdg_375621
stackexchange
llm_judgeable_groundtruth_similarity
19893927
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a question regarding a php form. I've added a checkbox to the existing form, but not sure how to add it to the php. I would like it to send "yes" if the visitores checks it, and "no" if he is not. <form method="POST" name="contactform" action="contact-form-handler.php"> <p><input type="text" name="name" placeholder="name"></p><p><input type="tel" name="tel" placeholder="phome"> <br></p><p><input type="text" name="email" placeholder="mail"> <br></p><p><input type="checkbox" name="newsletter[]" value="newsletter" checked>i want to sign up for newsletter<br></p><input type="submit" value="Submit"><br></form> here is the php code for the form, everything there except the checkbox. i need to know its value when i receive the mail. for example : "Name: John, Email: test@test.com, Tel:12345, Newsletter: Yes" <?php $errors = '';$myemail = 'test@gmail.com';//<-----Put Your email address here.if(empty($_POST['name']) || empty($_POST['email']) || empty($_POST['tel'])){ $errors .= "\n Error: all fields are required";}$name = $_POST['name']; $email_address = $_POST['email']; $message = $_POST['tel']; if (!preg_match("/^[_a-z0-9-]+(\.[_a-z0-9-]+)*@[a-z0-9-]+(\.[a-z0-9-]+)*(\.[a-z]{2,3})$/i", $email_address)){ $errors .= "\n Error: Invalid email address";}if( empty($errors)){ $to = $myemail; $email_subject = "Contact form submission: $name"; $email_body = "You have received a new message. ". " Here are the details:\n Name: $name \n Email: $email_address \n Tel \n $message\n Newsletter \n $newsletter"} ; $headers = "From: $myemail\n"; $headers .= "Reply-To: $email_address"; mail($to,$email_subject,$email_body,$headers); //redirect to the 'thank you' page header('Location: contact-form-thank-you.html');} ?><!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html><head> <title>Contact form handler</title></head><body><!-- This page is displayed only if there is some error --><?phpecho nl2br($errors);?></body></html> Thank you, Now provide the response and nothing else.
Here's how it should look in order to return a simple Yes when it's checked.

    <input type="checkbox" id="newsletter" name="newsletter" value="Yes" checked>
    <label for="newsletter">i want to sign up for newsletter</label>

I also added the text as a label, which means you can click the text as well to check the box. A small thing, but personally I hate when sites make me aim my mouse at a tiny little check box. When the form is submitted, if the check box is checked, $_POST['newsletter'] will equal Yes . Just as you are checking to see if $_POST['name'] , $_POST['email'] , and $_POST['tel'] are empty, you could do the same. Here is an example of how you would add this into your email on the php side: Underneath your existing code:

    $name = $_POST['name'];
    $email_address = $_POST['email'];
    $message = $_POST['tel'];

Add:

    // The checkbox key is absent when the box is unchecked, so guard with isset()
    $newsletter = isset($_POST['newsletter']) ? $_POST['newsletter'] : 'No';
    if ($newsletter != 'Yes') {
        $newsletter = 'No';
    }

If the check box is checked it will add Yes to your email; if it was not checked it will add No .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19893927', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1660994/']}
jdg_375622
stackexchange
llm_judgeable_groundtruth_similarity
3280967
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Compute $\int \limits_{0}^{1} \frac{dx}{x+\sqrt{1+x}}$ . I tried to substitute $t=\sqrt{1+x}$ and this led me to having to compute $\int \limits_{1}^{\sqrt 2}\frac{2t dt}{t^2+t-1}$ . I tried partial fraction decomposition (PFD) here, but the computations are really messy and I am stuck. Any suggestions? Now provide the response and nothing else.
Split the integral in two: $$\int \limits_{1}^{\sqrt 2}\frac{2t\,\mathrm dt}{t^2+t-1}=\int \limits_{1}^{\sqrt 2}\frac{(2t+1)\,\mathrm dt}{t^2+t-1}-\int \limits_{1}^{\sqrt 2}\frac{\mathrm dt}{t^2+t-1}$$ and write the latter integral in canonical form: $$\int \limits_{1}^{\sqrt 2}\frac{\mathrm dt}{\bigl(t+\frac12\bigr)^2-\frac54}$$
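If it helps to finish from there, the two standard antiderivatives needed are $$\int \frac{2t+1}{t^2+t-1}\,dt = \ln\left|t^2+t-1\right| + C \qquad\text{and}\qquad \int \frac{du}{u^2-a^2} = \frac{1}{2a}\ln\left|\frac{u-a}{u+a}\right| + C,$$ applied with $u = t+\tfrac12$ and $a = \tfrac{\sqrt5}{2}$ in the second integral; evaluating both between $t=1$ and $t=\sqrt2$ then gives the value of the original integral.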
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3280967', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/628203/']}
jdg_375623
stackexchange
llm_judgeable_groundtruth_similarity
725366
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm hitting some quite major performances issues due to the use of "ORDER BY"-statements in my SQL-code. Everything is fine as long as I'm not using ORDER BY-statements in the SQL. However, once I introduce ORDER BY:s in the SQL code everything slows down dramatically due to the lack of correct indexing. One would assume that fixing this would be trivial, but judging from forum discussions, etc this seems to be a rather common issue that I've yet to see a definitive and concise answer to this question. Question: Given the following table ... CREATE TABLE values_table ( id int(11) NOT NULL auto_increment, ... value1 int(10) unsigned NOT NULL default '0', value2 int(11) NOT NULL default '0', PRIMARY KEY (id), KEY value1 (value1), KEY value2 (value2),) ENGINE=MyISAM AUTO_INCREMENT=2364641 DEFAULT CHARSET=utf8; ... how do I create indexes that will be used when querying the table for a value1 -range while sorting on the value of value2 ? Currently, the fetching is OK when NOT using the ORDER BY clause. See the following EXPLAIN QUERY output: OK, when NOT using ORDER BY: EXPLAIN select ... from values_table this_ where this_.value1 between 12345678 and 12349999 limit 10;+----+-------------+-------+-------+---------------+----------+---------+------+------+-------------+| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |+----+-------------+-------+-------+---------------+----------+---------+------+------+-------------+| 1 | SIMPLE | this_ | range | value1 | value1 | 4 | NULL | 3303 | Using where |+----+-------------+-------+-------+---------------+----------+---------+------+------+-------------+ However, when using ORDER BY I get "Using filesort": EXPLAIN select ... from values_table this_ where this_.value1 between 12345678 and 12349999 order by this_.value2 asc limit 10;+----+-------------+-------+-------+---------------+----------+---------+------+------+-----------------------------+| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |+----+-------------+-------+-------+---------------+----------+---------+------+------+-----------------------------+| 1 | SIMPLE | this_ | range | value1 | value1 | 4 | NULL | 3303 | Using where; Using filesort |+----+-------------+-------+-------+---------------+----------+---------+------+------+-----------------------------+ Some additional information about the table content: SELECT MIN(value1), MAX(value1) FROM values_table;+---------------+---------------+| MIN(value1) | MAX(value2) |+---------------+---------------+| 0 | 4294967295 |+---------------+---------------+...SELECT MIN(value2), MAX(value2) FROM values_table;+---------------+---------------+| MIN(value2) | MAX(value2) |+---------------+---------------+| 1 | 953359 |+---------------+---------------+ Please let me know if any further information is needed to answer the question. Thanks a lot in advance! Update #1: Adding a new composite index ( ALTER TABLE values_table ADD INDEX (value1, value2); ) does not solve the problem. You'll still get "Using filesort" after adding such an index. Update #2: A constraint that I did not mention in my question is that I'd rather change the structure of the table (say adding indexes, etc.) than changing the SQL queries used. The SQL queries are auto-generated using Hibernate, so consider those more or less fixed. 
Now provide the response and nothing else.
You cannot use an index in this case, as you use a RANGE filtering condition. If you'd use something like:

    SELECT *
    FROM values_table this_
    WHERE this_.value1 = @value
    ORDER BY value2
    LIMIT 10

, then creating a composite index on (VALUE1, VALUE2) would be used both for filtering and for ordering. But you use a ranged condition, that's why you'll need to perform ordering anyway. Your composite index will look like this:

    value1 value2
    ------ ------
    1      10
    1      20
    1      30
    1      40
    1      50
    1      60
    2      10
    2      20
    2      30
    3      10
    3      20
    3      30
    3      40

, and if you select 1 and 2 in value1 , you still don't get a whole sorted set of value2 . If your index on value2 is not very selective (i. e. there are not many DISTINCT value2 in the table), you could try:

    CREATE INDEX ix_table_value2_value1 ON mytable (value2, value1)
    /* Note the order, it's important */

    SELECT *
    FROM (
          SELECT DISTINCT value2
          FROM mytable
          ORDER BY value2
         ) q, mytable m
    WHERE m.value2 >= q.value2
      AND m.value2 <= q.value2
      AND m.value1 BETWEEN 13123123 AND 123123123

This is called a SKIP SCAN access method. MySQL does not support it directly, but it can be emulated like this. The RANGE access will be used in this case, but probably you won't get any performance benefit unless DISTINCT value2 comprise less than about 1% of rows. Note usage of:

    m.value2 >= q.value2
    AND m.value2 <= q.value2

instead of

    m.value2 = q.value2

This makes MySQL perform RANGE checking on each loop.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/725366', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/58394/']}
jdg_375624
stackexchange
llm_judgeable_groundtruth_similarity
10407
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm looking for an algorithm to merge two binary search trees of arbitrary size and range. The obvious way I would go about implementing this would be to find entire subtrees whose range can fit into an arbitrary external node in the other tree. However, the worst case running time for this type of algorithm seems to be on the order of O(n+m) where n and m are the size of each tree respectively. However, I've been told that this could be done in O(h) , where h is the height of the tree with the larger height. And I'm completely lost on how this is possible. I've tried experimenting with rotating one the trees first, but rotating a tree into a spine is already O(h). Now provide the response and nothing else.
In ArXiv:1002.4248 , John Iacono and Özgür Özkan describe a relatively simple algorithm to merge two binary search trees in $O(\log^2 n)$ amortized time; the analysis is the hard part. [ Update: As Joe correctly observes in his answer, this algorithm is due to Brown and Tarjan.] They also describe a more complicated dictionary data structure, based on biased skip lists, that supports merges in $O(\log n)$ amortized time. On the other hand, a worst-case bound of $O(\log n)$ is impossible. Consider two binary search trees with $n$ nodes, one storing the even integers between $2$ and $2n$, the other storing the odd integers between $1$ and $2n-1$. Merging the two trees creates a new binary search tree storing all integers between $1$ and $2n$. In any such tree, a constant fraction of the nodes have different parity than their parents. (Proof: The parent of an odd leaf must be even.) Thus, merging the even and odd trees requires changing $\Omega(n)$ pointers.
{}
{'log_upvote_score': 6, 'links': ['https://cstheory.stackexchange.com/questions/10407', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/8555/']}
jdg_375625
stackexchange
llm_judgeable_groundtruth_similarity
40365500
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm fairly new to Wagtail, and I am in the process of creating a site that will have a Resources (blog) section and I'm not sure how to implement pagination so that there are only 5 posts on each page and the user has to click a number (1, 2, 3, etc.) to go to the next page to see the next 5 posts. I have this in my template for the pagination section of the resource/blog index page: <ul class="pagination"> <li><a href="#"><i class="fa fa-angle-left"></i></a></li> <li class="active"><a href="#">1</a></li> <li><a href="#">2</a></li> <li><a href="#">3</a></li> <li><a href="#"><i class="fa fa-angle-right"></i></a></li></ul> What code do I need to incorporate to make this functional? Thanks in advance. Now provide the response and nothing else.
Django provides the module django.core.paginator for this purpose: https://docs.djangoproject.com/en/1.10/topics/pagination/ . Using this within Wagtail is very similar to the examples in the Django documentation - the only real difference is that when you're setting up the Paginator object to be passed to the template, you do that with a get_context method on the page model, instead of a view function. Your model definition will look something like this:

    from django.core.paginator import Paginator, EmptyPage, PageNotAnInteger

    class ResourceIndexPage(Page):
        # ...

        def get_context(self, request):
            context = super(ResourceIndexPage, self).get_context(request)

            # Get the full unpaginated listing of resource pages as a queryset -
            # replace this with your own query as appropriate
            all_resources = ResourcePage.objects.live()

            paginator = Paginator(all_resources, 5)  # Show 5 resources per page

            page = request.GET.get('page')
            try:
                resources = paginator.page(page)
            except PageNotAnInteger:
                # If page is not an integer, deliver first page.
                resources = paginator.page(1)
            except EmptyPage:
                # If page is out of range (e.g. 9999), deliver last page of results.
                resources = paginator.page(paginator.num_pages)

            # make the variable 'resources' available on the template
            context['resources'] = resources

            return context

Within your template, you can now loop over the items using {% for resource in resources %} , and display the pagination links as follows:

    <ul class="pagination">
        {% if resources.has_previous %}
            <li><a href="?page={{ resources.previous_page_number }}"><i class="fa fa-angle-left"></i></a></li>
        {% endif %}
        {% for page_num in resources.paginator.page_range %}
            <li {% if page_num == resources.number %}class="active"{% endif %}><a href="?page={{ page_num }}">{{ page_num }}</a></li>
        {% endfor %}
        {% if resources.has_next %}
            <li><a href="?page={{ resources.next_page_number }}"><i class="fa fa-angle-right"></i></a></li>
        {% endif %}
    </ul>
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/40365500', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6460943/']}
jdg_375626
stackexchange
llm_judgeable_groundtruth_similarity
4153748
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a unit test method which tests a controller action method. The action method uses resource file to get a static message. message = Resources.MyResource.MemberNotVerified; However at this line the exception thrown is :- "Could not load file or assembly 'App_GlobalResources' or one of its dependencies. The system cannot find the file specified.":"App_GlobalResources" System.IO.IOException {System.IO.FileNotFoundException} I tried coping the whole resource file in my Test project, but it was unsuccessful. Any idea friends. Now provide the response and nothing else.
Behind the scenes, App_GlobalResources uses HttpContext.GetGlobalResourceObject Of course, there is no HttpContext in unit tests (unless your mocking it). If you were so inclined to mock it, Phil Haack has a decent post on it here . There is another solution, and that is to move the RESX files out of the regular directory. Scott Allen has a post on that here .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4153748', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/106245/']}
jdg_375627
stackexchange
llm_judgeable_groundtruth_similarity
27757722
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm studying the swift language and I have a doubt concerning the variables initialization in a UIViewController . In my DiagramViewController I have some variables: class DiagramViewController: UIViewController { var type: Constants.DiagramType var filename: String var numberOfBars: Int var numberOfSection: Int var diagramName: String override func viewDidLoad() { super.viewDidLoad() // Do any additional setup after loading the view. } override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() // Dispose of any resources that can be recreated. }} Swift requires an init value for those var and I can do so in many different ways, but how should I choose between these ways? I can init the variables "inline": class DiagramViewController: UIViewController { var type: Constants.DiagramType = Constants.DiagramType.HISTOGRAM var filename: String = "dd.txt" var numberOfBars: Int = 10 var numberOfSection: Int = 5 var diagramName: String = "Diagram" override func viewDidLoad() { super.viewDidLoad() // Do any additional setup after loading the view. } override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() // Dispose of any resources that can be recreated. }} I can init the variables overriding the constructor: class DiagramViewController: UIViewController { var type: Constants.DiagramType var filename: String var numberOfBars: Int var numberOfSection: Int var diagramName: String required init(coder aDecoder: NSCoder) { type = Constants.DiagramType.HISTOGRAM filename = "dd.txt" numberOfBars = 10 numberOfSection = 5 diagramName = "Diagram" super.init(coder: aDecoder) } override func viewDidLoad() { super.viewDidLoad() // Do any additional setup after loading the view. } override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() // Dispose of any resources that can be recreated. }} I can init the variables declaring them as Optional variables: class DiagramViewController: UIViewController { var type: Constants.DiagramType? var filename: String? var numberOfBars: Int? var numberOfSection: Int? var diagramName: String? override func viewDidLoad() { super.viewDidLoad() // Do any additional setup after loading the view. type = Constants.DiagramType.HISTOGRAM filename = "dd.txt" numberOfBars = 10 numberOfSection = 5 diagramName = "Diagram" } override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() // Dispose of any resources that can be recreated. }} I can init the variables declaring them as Implicitly Unwrapped Optional : class DiagramViewController: UIViewController { var type: Constants.DiagramType! var filename: String! var numberOfBars: Int! var numberOfSection: Int! var diagramName: String! override func viewDidLoad() { super.viewDidLoad() // Do any additional setup after loading the view. type = Constants.DiagramType.HISTOGRAM filename = "dd.txt" numberOfBars = 10 numberOfSection = 5 diagramName = "Diagram" } override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() // Dispose of any resources that can be recreated. }} Why choose a method rather than another? Is there a typical pattern or a sort of standard concerning this issue? Maybe some of these solutions are cleaner than the others or even more efficient. Please help me understanding the difference between them. Thank you in advance. Now provide the response and nothing else.
This question could be summarized down to "When should I use optionals?". There are lots of great articles and documentation on this question, but I will attempt to put together my experience with it as well as the documentation and articles I have read. While Optionals have very specific functionality when used, I'd like to think of them more as a way of saying something about the variable itself rather than declaring functionality. When I read: var myVar:Class? = nil This means, that we should never anticipate that myVar is assigned and instead we should always anticipate both conditions, the first being that myVar has a value, and that it doesn't. I assume these things because of the functionality that the ? optional brings to the table. The compiler will not allow you to use myVar without unwrapping it. Because of this, the compiler suggests (whenever you access a property or function) that you use this syntax: myVar?.myProperty = something Because of the ? before the . this line of code will check to see if myVar is nil before unwrapping myVar and executing the line of code. Thus we have anticipated and handled both conditions. This line of code will essentially be "ignored" if myVar is nil and executed if it isn't. This is in contrast to the other type of optional ! : myVar!.myProperty = something That will always try to unwrap myVar . This line of code will cause an exception saying something to the effect of: "Unexpectedly found nil while unwrapping a value.". While the ? will fail silently. If we change the declaration of myVar to use the ! optional: var myVar:Class! = nil Then we can always use myVar without getting the compiler error saying that we need to unwrap myVar prior to using it. For example, unlike the other optional ( ? ), we can say: myVar.myProperty = something This line is equivalent to: myVar!.myProperty = something So if myVar is nil , then we will crash the program. Conclusion: Using either one of these optionals (or simply not using an optional at all) we are telling the user of myVar things about myVar because of the way the language will force or not force you to deal with myVar . ? optional var myVar:Class? = nil : If I use the ? optional, we're essentially forcing the user to always check for nil . ! optional var myVar:Class! = nil : If we use ! then if myVar is nil, something is wrong and we should crash the program, however, the user still has the option to handle the nil case which is especially useful if the user is the one whom was supposed to assign myVar . A great use case of this is network requests. no optional var myVar = Class() : Not using an optional at all means (obviously) that the variable is always there and we don't ever need to worry that it is nil .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27757722', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1291872/']}
jdg_375628
stackexchange
llm_judgeable_groundtruth_similarity
51
Below is a question asked on the forum bioinformatics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I want to focus on transcriptome analysis. We know it's possible to analyze RNA-Seq experiment based on alignment or k-mers. Possible alignment workflow: Align sequence reads with TopHat2 Quantify the gene expression with Cufflinks Possible reference-free workflow: Quantify sequence reads with Kallisto reference-free index Both strategy generate gene expression table. Q: What are pros and cons for each of the approach? Can you give guideline? Now provide the response and nothing else.
First of all, I would emphasize that "alignment-free" quantification tools like Salmon and Kallisto are not reference-free. The basic difference between them and more traditional aligners is that they do not report a specific position (either in a genome or transcriptome) to which a read maps. However, their overall purpose is still to quantify the expression levels (or differences) of a known set of transcripts; hence, they require a reference (which could be arbitrarily defined). The most important criterion for deciding which approach to use (and this is true of almost everything in genomics) is exactly what question you would like to answer. If you are primarily interested in quantifying and comparing expression of mature mRNA from known transcripts, then a transcriptome-based alignment may be fastest and best. However, you may miss potentially interesting features outside of those known transcripts, such as new isoforms, non-coding RNAs, or information about pre-mRNA levels, which can often be gleaned from intronic reads (see the EISA method). This paper also has some good considerations about which tools may work best depending on the question you want to answer. Finally, another fast and flexible aligner (which can be used with or without a reference transcriptome) is STAR .
{}
{'log_upvote_score': 5, 'links': ['https://bioinformatics.stackexchange.com/questions/51', 'https://bioinformatics.stackexchange.com', 'https://bioinformatics.stackexchange.com/users/174/']}
jdg_375629
stackexchange
llm_judgeable_groundtruth_similarity
2572570
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: So I have the following problem: $\log_{\frac{1}{3}}(3^{2x})$ How do I solve this? Somewhere I stumbled onto the solution where they find a common exponent and base so they cancel, then you'll have your answer from what's left. However that was not very intuitive for me, although clever. I was trying "my" method where I translate it to an algebraic problem in terms of exponentials (not sure about the nomenclature). E.g: $\left(\frac{1}{3}\right)^x = 3^{2x}$ But this gives me the wrong answer. Could somebody corroborate if this is a legitimate approach or not? Thank you in advance. Now provide the response and nothing else.
You have a good idea. But you need to define variables properly. Let$$y=\log_{\frac{1}{3}}(3^{2x})$$Then$$3^{-y}=\left(\frac{1}{3}\right)^y = 3^{2x}.$$The exponents in the previous equation must be the same (since the exponential function is injective). Thus$$y=-2x.$$
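The same manipulation gives the general pattern $\log_{1/b}\left(b^{k}\right) = -k$ for any base $b>0,\ b\neq 1$, since $(1/b)^{-k} = b^{k}$; here $b=3$ and $k=2x$, which is why the answer is $-2x$.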
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2572570', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/502280/']}
jdg_375630
stackexchange
llm_judgeable_groundtruth_similarity
11366838
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using Membership.GetAllUsers() to get my user list. I would like to have the list returned sorted by email address as I need to flag some accounts with duplicate emails. Membership.GetAllUsers() seems to order by username by default. Is there a way to change this behavior? Now provide the response and nothing else.
If you can live with a generic list rather than a MembershipUserCollection : Membership.GetAllUsers().Cast<MembershipUser>().OrderBy(x => x.Email).ToList(); Use OrderBy(x => x.Email, StringComparer.OrdinalIgnoreCase) if you want a case-insensitive sort of the email address. Membership code actually predates generics (it was integrated into .NET 2.0 from a .NET 1.1 development), hence MembershipUserCollection does not implement generic interfaces. Given your earlier question , you may be interested in other LINQ-style manipulations. For example the following will give you a dictionary whose keys are email addresses (case-insensitive), and whose values are lists of corresponding MembershipUser objects, sorted by last activity date descending: var dictionary = Membership.GetAllUsers().Cast<MembershipUser>() .GroupBy(x => x.Email, StringComparer.OrdinalIgnoreCase) .ToDictionary( x => x.Key, x =>x.OrderByDescending(u => u.LastActivityDate).ToList() );
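Since the original goal was to flag accounts with duplicate emails, here is a small follow-up sketch built on the same GroupBy idea. It assumes an application with a configured membership provider and the standard System.Linq and System.Web.Security namespaces; the class name DuplicateEmailReport and the console output format are illustrative additions, not part of the answer above.

// Sketch: list only the email addresses used by more than one account.
using System;
using System.Linq;
using System.Web.Security;

class DuplicateEmailReport
{
    static void Main()
    {
        var duplicates = Membership.GetAllUsers().Cast<MembershipUser>()
            .GroupBy(u => u.Email, StringComparer.OrdinalIgnoreCase)
            .Where(g => g.Count() > 1)    // keep only duplicated addresses
            .OrderBy(g => g.Key, StringComparer.OrdinalIgnoreCase);

        foreach (var group in duplicates)
        {
            Console.WriteLine("{0} is used by {1} accounts:", group.Key, group.Count());
            foreach (MembershipUser user in group)
            {
                Console.WriteLine("  {0} (last active {1})", user.UserName, user.LastActivityDate);
            }
        }
    }
}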
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11366838', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/214044/']}
jdg_375631
stackexchange
llm_judgeable_groundtruth_similarity
587392
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have 2 files which are interacting with each other. I wanted to define an enum to make the code more readable, but if I define it in file 1, file 2 complains about having no knowledge of said enum. If I define it in file 2, file 1 does the same. I am defining it as public too. The solution was to define the enum in both files, but this doesn't seem right to me. Not only is it redundant, but I fear it may cause some conflict, even if the types have the same items. What is the verdict on this? Am I doing something wrong or worrying too much? EDIT Well, given the comments here I found an alternative which seems to be doing what I want without having to create a new file. I had: file 1 class myClass1{ public enum MyEnum { ... } ...} file 2 class myClass2{ public enum MyEnum { ... } ....} Now, I have: file 1 enum myEnum{ ...}...class myClass1{ ...} file 2 class myClass2{ ...} I didn't want to create another file just for the enum, so this works for me. Well, as long as there is nothing wrong with it, which I think there isn't. Now provide the response and nothing else.
You definitely shouldn't define the enum in both locations. I recommend defining the enum in its own file with public accessibility. Then everyone should have no trouble accessing it. However, assuming that you want to define the enum in one of the two classes, I'll continue... You have to import the enum or use its fully qualified name. Assuming you are in the package com.stackoverflow , your classes should look like this, in the first file: package com.stackoverflow;public class A { public enum MyEnum { ONE,TWO,THREE; } ...} and in another file: package com.stackoverflow;import com.stackoverflow.A.MyEnum;public class B { public void test(MyEnum mine) { ... } ...}
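A short usage sketch of the two-file layout above, assuming the bodies marked ... are filled in; the Demo class and its calls are illustrative additions, and only the A, B and MyEnum names come from the answer.

package com.stackoverflow;

import com.stackoverflow.A.MyEnum;

// Illustrative driver class, not part of the original answer.
public class Demo {
    public static void main(String[] args) {
        B b = new B();
        b.test(MyEnum.ONE);      // works thanks to the import of the nested enum
        b.test(A.MyEnum.TWO);    // the fully qualified name also works, with no import needed
    }
}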
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/587392', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_375632
stackexchange
llm_judgeable_groundtruth_similarity
10440792
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm asking this because I know that the pythonic way to check whether a list is empty or not is the following: my_list = []if not my_list: print "computer says no"else: # my_list isn't empty print "computer says yes" will print computer says no , etc. So this leads me to identify [] with False truth-values; however, if I try to compare [] and False "directly", I obtain the following: >>> my_list == FalseFalse>>> my_list is FalseFalse>>> [] == FalseFalse etc... What's going on here? I feel like I'm missing something really obvious. Now provide the response and nothing else.
The if statement evaluates everything in a Boolean context, it is like there is an implicit call to the bool() built-in function. Here is how you would actually check how things will be evaluated by an if statement: >>> bool([])False>>> bool([]) == FalseTrue See the documentation on Truth Value Testing , empty lists are considered false , but this doesn't mean they are equivalent to False . PEP 285 also has some excellent information on why it was implemented this way, see the very last bullet in the Resolved Issues section for the part that deals with x == True and x == False specifically. The most convincing aspect to me is that == is generally transitive, so a == b and b == c implies a == c . So if it were the way you expected and [] == False were true and '' == False were true, one might assume that [] == '' should be true (even though it obviously should not be in a language without implicit type conversion).
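A small sketch pulling these points together; it uses only built-ins and Python 3 syntax (print as a function, unlike the Python 2 code in the question), so it should run as-is.

# Truth-value testing vs. equality with False (Python 3).
my_list = []

print(bool(my_list))       # False -> this is what `if` effectively looks at
print(my_list == False)    # False -> an empty list is falsy, but not equal to False
print(my_list is False)    # False -> and certainly not the same object

# The recommended idiom relies on the implicit bool() of `if`:
if not my_list:
    print("computer says no")

# Transitivity illustration: if [] == False and '' == False were both true,
# one might expect [] == '' to be true as well, which it is not.
print([] == '')            # False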
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/10440792', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/913071/']}
jdg_375633
stackexchange
llm_judgeable_groundtruth_similarity
9155431
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to add some jQuery + ERB to a specific view: views/posts/show.html.erb (top of the file): <% content_for :javascript do %> <script type="text/javascript"> $(".post-<%=@post.id%> h3").prepend('<%=escape_javascript @post.votes.count %>'); </script><% end %><h2>posts show</h2>(etc...)<div class="post-<%=@post.id%>"> <h3>votes</h3><br /> <%= link_to "Vote Up", vote_up_path(@post), :remote => true %><br /></div> views/layouts/application.html.erb (bottom of the file): (etc...)</div><%= yield %><%= yield :javascript %></body></html> But I'm getting the following error: undefined method `gsub' for 6:FixnumExtracted source (around line #3):1: <% content_for :javascript do %>2: <script type="text/javascript">3: $("post-<%=@post.id%>").html('<%=escape_javascript @post.votes.count %>');4: </script>5: <% end %> Any suggestions to fix this? Now provide the response and nothing else.
escape_javascript calls gsub on whatever you pass it, which doesn't make sense for a number. You can either not call escape_javascript or give it a String instead: $("post-<%=@post.id%>").html('<%=escape_javascript @post.votes.count.to_s %>');
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9155431', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/122536/']}
jdg_375634
stackexchange
llm_judgeable_groundtruth_similarity
30562
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How can I test whether the mean (e.g., blood pressure) of a subgroup (e.g., those who died) differs from the whole group (e.g., everyone who had the disease including those that died)? Clearly, the first one is a subgroup of the second one. What hypothesis test should I use? Now provide the response and nothing else.
As Michael notes, when comparing a subgroup to an overall group, researchers typically compare the subgroup to the subset of the overall group that does not include the subgroup. Think about it this way. Let $p$ be the proportion that died and $1-p$ the proportion who did not die, so that $$\bar{X}_. = p\bar{X}_d + (1-p)\bar{X}_a$$ where $\bar{X}_.$ is the overall mean, $\bar{X}_d$ is the mean of those that died, and $\bar{X}_a$ is the mean of those that are still alive. Then $$\bar{X}_d \neq \bar{X}_a$$ if and only if $$\bar{X}_d \neq \bar{X}_.$$ $\Rightarrow$ Suppose $\bar{X}_d \neq \bar{X}_a$. Since $(1-p)\neq 0$, we have $(1-p)\bar{X}_a \neq (1-p)\bar{X}_d$, hence $\bar{X}_. = p\bar{X}_d+(1-p)\bar{X}_a \neq p\bar{X}_d+(1-p)\bar{X}_d=\bar{X}_d$. $\Leftarrow$ Suppose $\bar{X}_. \neq \bar{X}_d$. Then $\bar{X}_d \neq p\bar{X}_d+(1-p)\bar{X}_a$, so $(1-p)\bar{X}_d \neq (1-p)\bar{X}_a$, and since $(1-p)\neq 0$ it follows that $\bar{X}_d \neq \bar{X}_a$. The same can be done for inequalities. Thus, researchers typically test the difference between the subgroup and the subset of the overall group that does not include the subgroup. This has the effect of showing that the subgroup differs from the overall group. It also allows you to use conventional methods like an independent-groups t-test.
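If it helps to see the suggested comparison in practice, here is an illustrative sketch using SciPy's independent-groups t-test on made-up blood-pressure values; the sample sizes, means, and standard deviations are invented for illustration, and Welch's version (equal_var=False) is used since the two groups need not share a variance.

# Illustrative only: compare the subgroup (died) against its complement (survived),
# not against the whole group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
bp_died = rng.normal(loc=150, scale=20, size=40)        # hypothetical values
bp_survived = rng.normal(loc=140, scale=20, size=160)   # hypothetical values

t_stat, p_value = stats.ttest_ind(bp_died, bp_survived, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")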
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/30562', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/9177/']}
jdg_375635
stackexchange
llm_judgeable_groundtruth_similarity
676499
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to solve this (not homework, if it matters), and u-substitution and integration by parts are yielding two different answers. Where am I going wrong? Equation: $$\int \frac{(4x^3)}{(x^4+7)}dx$$ u-substitution answer: $$=\ln\big|(x^4+7)\big|+C$$ integration by parts answer: $$=\int4x^3*(x^4+7)^{-1}dx$$$$=4x^3*\ln\big|x^4+7\big|-\int 12x^2*(x^4+7)^{-1}dx$$$$=4x^3*\ln\big|x^4+7\big|-(12x^2*ln\big|x^4+7\big|-\int 24x*(x^4+7)^{-1}dx)$$$$=4x^3*\ln\big|x^4+7\big|-(12x^2*ln\big|x^4+7\big|-24x*ln\big|x^4+7\big|-\int 24(x^4+7)^{-1}dx)$$ $$= 4x^3*\ln\big|x^4+7\big|-(12x^2*\ln\big|x^4+7\big|-(24x\ln\big|x^4+7\big|-24\ln\big|x^4+7\big|))$$$$=(4x^3-12x^2+24x-24)(\ln\big|x^4+7\big|)$$ Now provide the response and nothing else.
Just to answer your question about why using integration by parts allegedly leads to a different solution: In your work on integration by parts, you made a mistake at the start; you seem to have let $ u =4x^3$, $\,dv = (x^4 + 7)^{-1}\,dx$. The problem is in your conclusion that $uv = 4x^3 \ln(x^4+ 7)$. It is NOT true that in integrating $dv$, we obtain $v = \ln(x^4 + 7)$. In order to integrate $dv$, you're attempting to integrate $$\int\dfrac{dx}{(x^4 +7)}\neq \ln(x^4 + 7) +c$$ We need $u = 4x^3$ to be back in the integrand to obtain $$\int \dfrac{(x^4 + 7)'}{x^4 + 7}\,dx = \ln(x^4 + 7) + c$$ In short, you're back to the original integral.
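For reference, here is the substitution that does work, written out; it is added for completeness and simply recovers the u-substitution answer from the question: let $u = x^4 + 7$, so $du = 4x^3\,dx$, and then $$\int \frac{4x^3}{x^4+7}\,dx = \int \frac{du}{u} = \ln|u| + C = \ln\big|x^4+7\big| + C.$$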
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/676499', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/128815/']}
jdg_375636
stackexchange
llm_judgeable_groundtruth_similarity
17568067
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've got the following BoolExpr class: class BoolExpr{ public enum BOP { LEAF, AND, OR, NOT }; // // inner state // private BOP _op; private BoolExpr _left; private BoolExpr _right; private String _lit; // // private constructor // private BoolExpr(BOP op, BoolExpr left, BoolExpr right) { _op = op; _left = left; _right = right; _lit = null; } private BoolExpr(String literal) { _op = BOP.LEAF; _left = null; _right = null; _lit = literal; } // // accessor // public BOP Op { get { return _op; } set { _op = value; } } public BoolExpr Left { get { return _left; } set { _left = value; } } public BoolExpr Right { get { return _right; } set { _right = value; } } public String Lit { get { return _lit; } set { _lit = value; } } // // public factory // public static BoolExpr CreateAnd(BoolExpr left, BoolExpr right) { return new BoolExpr(BOP.AND, left, right); } public static BoolExpr CreateNot(BoolExpr child) { return new BoolExpr(BOP.NOT, child, null); } public static BoolExpr CreateOr(BoolExpr left, BoolExpr right) { return new BoolExpr(BOP.OR, left, right); } public static BoolExpr CreateBoolVar(String str) { return new BoolExpr(str); } public BoolExpr(BoolExpr other) { // No share any object on purpose _op = other._op; _left = other._left == null ? null : new BoolExpr(other._left); _right = other._right == null ? null : new BoolExpr(other._right); _lit = new StringBuilder(other._lit).ToString(); } // // state checker // Boolean IsLeaf() { return (_op == BOP.LEAF); } Boolean IsAtomic() { return (IsLeaf() || (_op == BOP.NOT && _left.IsLeaf())); }} What algorithm should I use to parse an input boolean expression string like " ¬((A ∧ B) ∨ C ∨ D) " and load it into the above class? Now provide the response and nothing else.
TL;DR: If you want to see the code, jump to the second portion of the answer. I would build a tree from the expression to parse and then traverse it depth first. You can refer to the wikipedia article about Binary Expression Trees to get a feel for what I'm suggesting. Start by adding the omitted optional parentheses to make the next step easier When you read anything that is not an operator or a parenthese, create a LEAF type node When you read any operator (in your case not , and , or ), create the corresponding operator node Binary operators get the previous and following nodes as children, unary operators only get the next one. So, for your example ¬((A ∧ B) ∨ C ∨ D) , the algorithm would go like this: ¬((A ∧ B) ∨ C ∨ D) becomes ¬(((A ∧ B) ∨ C) ∨ D) Create a NOT node, it'll get the result of the following opening paren as a child. Create A LEAF node, AND node and B LEAF node. AND has A and B as children. Create OR node, it has the previously created AND as a child and a new LEAF node for C . Create OR node, it has the previously created OR and a new node for D as children. At that point, your tree looks like this: NOT | OR /\ OR D / \AND C/\A B You can then add a Node.Evaluate() method that evaluates recursively based on its type (polymorphism could be used here). For example, it could look something like this: class LeafEx { bool Evaluate() { return Boolean.Parse(this.Lit); }}class NotEx { bool Evaluate() { return !Left.Evaluate(); }}class OrEx { bool Evaluate() { return Left.Evaluate() || Right.Evaluate(); }} And so on and so forth. To get the result of your expression, you then only need to call bool result = Root.Evaluate(); Alright, since it's not an assignment and it's actually a fun thing to implement, I went ahead. Some of the code I'll post here is not related to what I described earlier (and some parts are missing) but I'll leave the top part in my answer for reference (nothing in there is wrong (hopefully!)). Keep in mind this is far from optimal and that I made an effort to not modify your provided BoolExpr class. Modifying it could allow you to reduce the amount of code. There's also no error checking at all. Here's the main method static void Main(string[] args){ //We'll use ! for not, & for and, | for or and remove whitespace string expr = @"!((A&B)|C|D)"; List<Token> tokens = new List<Token>(); StringReader reader = new StringReader(expr); //Tokenize the expression Token t = null; do { t = new Token(reader); tokens.Add(t); } while (t.type != Token.TokenType.EXPR_END); //Use a minimal version of the Shunting Yard algorithm to transform the token list to polish notation List<Token> polishNotation = TransformToPolishNotation(tokens); var enumerator = polishNotation.GetEnumerator(); enumerator.MoveNext(); BoolExpr root = Make(ref enumerator); //Request boolean values for all literal operands foreach (Token tok in polishNotation.Where(token => token.type == Token.TokenType.LITERAL)) { Console.Write("Enter boolean value for {0}: ", tok.value); string line = Console.ReadLine(); booleanValues[tok.value] = Boolean.Parse(line); Console.WriteLine(); } //Eval the expression tree Console.WriteLine("Eval: {0}", Eval(root)); Console.ReadLine();} The tokenization phase creates a Token object for all tokens of the expression. It helps keep the parsing separated from the actual algorithm. 
Here's the Token class that performs this: class Token{ static Dictionary<char, KeyValuePair<TokenType, string>> dict = new Dictionary<char, KeyValuePair<TokenType, string>>() { { '(', new KeyValuePair<TokenType, string>(TokenType.OPEN_PAREN, "(") }, { ')', new KeyValuePair<TokenType, string>(TokenType.CLOSE_PAREN, ")") }, { '!', new KeyValuePair<TokenType, string>(TokenType.UNARY_OP, "NOT") }, { '&', new KeyValuePair<TokenType, string>(TokenType.BINARY_OP, "AND") }, { '|', new KeyValuePair<TokenType, string>(TokenType.BINARY_OP, "OR") } }; public enum TokenType { OPEN_PAREN, CLOSE_PAREN, UNARY_OP, BINARY_OP, LITERAL, EXPR_END } public TokenType type; public string value; public Token(StringReader s) { int c = s.Read(); if (c == -1) { type = TokenType.EXPR_END; value = ""; return; } char ch = (char)c; if (dict.ContainsKey(ch)) { type = dict[ch].Key; value = dict[ch].Value; } else { string str = ""; str += ch; while (s.Peek() != -1 && !dict.ContainsKey((char)s.Peek())) { str += (char)s.Read(); } type = TokenType.LITERAL; value = str; } }} At that point, in the main method, you can see I transform the list of tokens in Polish Notation order. It makes the creation of the tree much easier and I use a modified implementation of the Shunting Yard Algorithm for this: static List<Token> TransformToPolishNotation(List<Token> infixTokenList){ Queue<Token> outputQueue = new Queue<Token>(); Stack<Token> stack = new Stack<Token>(); int index = 0; while (infixTokenList.Count > index) { Token t = infixTokenList[index]; switch (t.type) { case Token.TokenType.LITERAL: outputQueue.Enqueue(t); break; case Token.TokenType.BINARY_OP: case Token.TokenType.UNARY_OP: case Token.TokenType.OPEN_PAREN: stack.Push(t); break; case Token.TokenType.CLOSE_PAREN: while (stack.Peek().type != Token.TokenType.OPEN_PAREN) { outputQueue.Enqueue(stack.Pop()); } stack.Pop(); if (stack.Count > 0 && stack.Peek().type == Token.TokenType.UNARY_OP) { outputQueue.Enqueue(stack.Pop()); } break; default: break; } ++index; } while (stack.Count > 0) { outputQueue.Enqueue(stack.Pop()); } return outputQueue.Reverse().ToList();} After this transformation, our token list becomes NOT, OR, OR, C, D, AND, A, B . At this point, we're ready to create the expression tree. The properties of Polish Notation allow us to just walk the Token List and recursively create the tree nodes (we'll use your BoolExpr class) as we go: static BoolExpr Make(ref List<Token>.Enumerator polishNotationTokensEnumerator){ if (polishNotationTokensEnumerator.Current.type == Token.TokenType.LITERAL) { BoolExpr lit = BoolExpr.CreateBoolVar(polishNotationTokensEnumerator.Current.value); polishNotationTokensEnumerator.MoveNext(); return lit; } else { if (polishNotationTokensEnumerator.Current.value == "NOT") { polishNotationTokensEnumerator.MoveNext(); BoolExpr operand = Make(ref polishNotationTokensEnumerator); return BoolExpr.CreateNot(operand); } else if (polishNotationTokensEnumerator.Current.value == "AND") { polishNotationTokensEnumerator.MoveNext(); BoolExpr left = Make(ref polishNotationTokensEnumerator); BoolExpr right = Make(ref polishNotationTokensEnumerator); return BoolExpr.CreateAnd(left, right); } else if (polishNotationTokensEnumerator.Current.value == "OR") { polishNotationTokensEnumerator.MoveNext(); BoolExpr left = Make(ref polishNotationTokensEnumerator); BoolExpr right = Make(ref polishNotationTokensEnumerator); return BoolExpr.CreateOr(left, right); } } return null;} Now we're golden! 
We have the expression tree that represents the expression so we'll ask the user for the actual boolean values of each literal operand and evaluate the root node (which will recursively evaluate the rest of the tree as needed). My Eval function follows, keep in mind I'd use some polymorphism to make this cleaner if I modified your BoolExpr class. static bool Eval(BoolExpr expr){ if (expr.IsLeaf()) { return booleanValues[expr.Lit]; } if (expr.Op == BoolExpr.BOP.NOT) { return !Eval(expr.Left); } if (expr.Op == BoolExpr.BOP.OR) { return Eval(expr.Left) || Eval(expr.Right); } if (expr.Op == BoolExpr.BOP.AND) { return Eval(expr.Left) && Eval(expr.Right); } throw new ArgumentException();} As expected, feeding our test expression ¬((A ∧ B) ∨ C ∨ D) with values false, true, false, true for A, B, C, D respectively yields the result false .
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/17568067', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/69537/']}
jdg_375637
stackexchange
llm_judgeable_groundtruth_similarity
1507463
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am getting this error in my git repository: 22:09:15 $ git status# On branch mastererror: Could not read 8124cc15c63be92d534e4cdfa33c38d54deee122error: unable to read tree object HEADnothing to commit (working directory clean) A Google search for error: unable to read tree object HEAD doesn't result in much help, this error seems to be very rare. I am not sure how to deal with it. Could it be a hard drive failure? Edit :The output of git fsck is the following: broken link from commit 607328dc80e4901a55b95c683d4fbf43e6df28bf to tree 8124cc15c63be92d534e4cdfa33c38d54deee122missing tree 8124cc15c63be92d534e4cdfa33c38d54deee122dangling tree 56b5d4a5e429d251582ec927bca7ef1225510c41dangling tree 0259d2d38b18b6136bb6070fb41faf3624453cc6 Now provide the response and nothing else.
On a "broken link" message, you could follow the GitFaq recommendations : back up all your state so that anything you do is re-doable if you corrupt things more! explode any corrupt pack-files See " man git-unpack-objects ", and in particular the " -r " flag. Also, please realize that it only unpacks objects that aren't already available, so you need to move the pack-file away from its normal location first (otherwise git-unpack-objects will find all objects that are in the pack-file in the pack-file itself, and not unpack anything at all) replace any broken and/or missing objects This is the challenging part. Sometimes (hopefully often!) you can find the missing objects in other copies of the repositories. At other times, you may need to try to find the data some other way (for example, maybe your checked-out copy contains the file content that when hashed will be the missing object?). make sure everything is happy with " git fsck --full " repack everything to get back to an efficient state again Notes: missing objects can also be related to alternates (when you share objects between repositories ) with git alternates (even though that can be risky ). The JGit/Egit eclipse plugin is also known to have a few issues . (Update February 2012: those plugins have come a long way and are now quite stable) Update July 2016 (7 years later), with Git 2.10 soon to be released, you now have: git fsck --name-objects It helps name the origin of those broken links See " How to fix git error broken link from tree to tree? " for more.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1507463', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/121112/']}
jdg_375638
stackexchange
llm_judgeable_groundtruth_similarity
4093895
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know about the hash use of the => operator, like this $ cat array.pl%ages = ('Martin' => 28, 'Sharon' => 35, 'Rikke' => 29,);print "Rikke is $ages{Rikke} years old\n";$ perl array.plRikke is 29 years old$ and I thought it was just syntax to initialize hashes, but in answers to How can I qualify a variable as const/final in Perl? , => has been used like this use Readonly;Readonly my $infilename => "input_56_12.txt"; What exactly does => mean? Are there more ways in which => can be used? Now provide the response and nothing else.
The => operator in Perl is basically the same as a comma. The only difference is that if there's an unquoted word on the left, it's treated like a quoted word. So you could have written Martin => 28 which would be the same as 'Martin', 28 . You can make a hash from any even-length list, which is all you're doing in your example. Your Readonly example is taking advantage of Perl's flexibility with subroutine arguments by omitting the parentheses. It is equivalent to Readonly(my $infilename, "input_56_12.txt") . Readonly is a function exported by the Readonly module which takes two arguments: a reference and a value. The internals of Readonly are worthy of another question if you want to understand them. Here's an example of using it as a comma in an unexpected way: $ perl -e 'print hello => "world\n"'helloworld
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4093895', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/113124/']}
jdg_375639
stackexchange
llm_judgeable_groundtruth_similarity
14543150
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using C++ with the SDL Cocoa and Foundation framework on my mac os x. I get the following error Undefined symbols for architecture x86_64: "_SDL_main", referenced from: -[SDLMain applicationDidFinishLaunching:] in SDLMain.old: symbol(s) not found for architecture x86_64 when I run the following code #import <Foundation/Foundation.h>#import <SDL/SDL.h>#include "SDLMain.h"int main(int argc, const char * argv[]){ SDL_Init(SDL_INIT_EVERYTHING); SDL_SetVideoMode(640,480,32,SDL_DOUBLEBUF); SDL_Event event; bool isRunning = true; while(SDL_PollEvent(&event)){ if(event.type == SDL_QUIT){ isRunning=false; } } SDL_Quit(); return 0;} I have no idea what is wrong, although it seems that when I go into the SDLMain.m file and comment out this line of code status = SDL_main (gArgc, gArgv); the program compiles with no problems. However, it doesn't work. No window opens like its supposed to. Any ideas? Now provide the response and nothing else.
I bet your main function signature is incorrect. You use: int main(int argc, const char * argv[]) ^^^^^ but SDL_main.h wants int main(int argc, char *argv[]) Why? You see, SDL does something really horrific when compiling: It renames your main function to SDL_main , injecting its own main function which, in turn, calls yours. Note that if this doesn't work, then you may be compiling with wrong flags. To be sure, get the flags by typing: $ sdl-config --cflags --libs For more information, see Simply including SDL header causes linker error
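For reference, a minimal corrected sketch of the program from the question: the only essential change is the main signature (char *argv[] instead of const char * argv[]); the outer while loop and the SDL_Delay call are small additions so the window actually stays open until quit, and the whole thing assumes SDL 1.2 with the flags reported by sdl-config.

// Sketch: same program with the signature SDL_main.h expects.
// Compile/link with the output of `sdl-config --cflags --libs`.
#include <SDL/SDL.h>

int main(int argc, char *argv[])   // note: char *argv[], not const char *
{
    SDL_Init(SDL_INIT_EVERYTHING);
    SDL_SetVideoMode(640, 480, 32, SDL_DOUBLEBUF);

    SDL_Event event;
    bool isRunning = true;
    while (isRunning) {                    // keep running until the user quits
        while (SDL_PollEvent(&event)) {
            if (event.type == SDL_QUIT) {
                isRunning = false;
            }
        }
        SDL_Delay(16);                     // avoid spinning the CPU
    }

    SDL_Quit();
    return 0;
}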
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14543150', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1598568/']}
jdg_375640